text (string, length 29–3.31k) | label (sequence, length 1–11)
---|---
Joint damage in Rheumatoid Arthritis (RA) is assessed by manually inspecting
and grading radiographs of hands and feet. This is a tedious task which
requires trained experts whose subjective assessment leads to low inter-rater
agreement. An algorithm which can automatically predict joint-level damage in
hands and feet can help optimize this process, eventually aiding doctors in
better patient care and research. In this paper, we propose a two-stage
approach that combines object detection and convolutional neural networks with
attention, which can efficiently and accurately predict the overall and
joint-level narrowing and erosion from patients' radiographs. This
approach has been evaluated on hands and feet radiographs of patients suffering
from RA and has achieved weighted root mean squared errors (RMSE) of 1.358 and
1.404 in predicting joint-level narrowing and erosion Sharp/van der Heijde
(SvH) scores, improvements of 31% and 19% over the baseline SvH scores,
respectively. The proposed approach achieved a weighted absolute error of
1.456 in predicting the overall damage in patients' hand and foot radiographs,
a 79% improvement over the baseline. Our method also provides an inherent
capability to explain model predictions using attention weights, which is
essential given the black-box nature of deep learning models. The proposed
approach was developed during the RA2 Dream Challenge hosted by Dream
Challenges and secured the 4th and 8th positions in predicting overall and
joint-level narrowing and erosion SvH scores from radiographs. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
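
A minimal sketch of a weighted RMSE metric like the one cited in the abstract above; the example scores and per-joint weights are hypothetical, since the challenge defines the actual weighting scheme:

```python
# Weighted RMSE: squared errors scaled by per-sample weights before averaging.
import numpy as np

def weighted_rmse(y_true, y_pred, weights):
    se = weights * (y_true - y_pred) ** 2        # weighted squared errors
    return np.sqrt(se.sum() / weights.sum())

y_true = np.array([0.0, 1.0, 3.0, 4.0])          # illustrative SvH scores
y_pred = np.array([0.5, 1.0, 2.0, 4.5])
w = np.array([1.0, 1.0, 2.0, 2.0])               # hypothetical joint weights
print(weighted_rmse(y_true, y_pred, w))
```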
Transfer learning is one of the subjects undergoing intense study in the area
of machine learning. In object recognition and object detection, there are
known experiments on the transferability of parameters, but not for neural
networks suited to real-time embedded object detection applications,
such as the SqueezeDet neural network. We use transfer learning to accelerate
the training of SqueezeDet to a new group of classes. Also, experiments are
conducted to study the transferability and co-adaptation phenomena introduced
by the transfer learning process. To accelerate training, we propose a new
implementation of the SqueezeDet training which provides a faster pipeline for
data processing and achieves 1.8 times speedup compared to the initial
implementation. Finally, we created a mechanism for automatic hyperparameter
optimization using an empirical method. | [
"cs.CV"
] |
In many scenarios, humans prefer a text-based representation of quantitative
data over numerical, tabular, or graphical representations. The attractiveness
of textual summaries for complex data has inspired research on data-to-text
systems. While there are several data-to-text tools for time series, few of
them try to mimic how humans summarize time series. In this paper, we
propose a model to create human-like text descriptions for time series. Our
system finds patterns in time series data and ranks these patterns based on
empirical observations of human behavior using utility estimation. Our proposed
utility estimation model is a Bayesian network capturing interdependencies
between different patterns. We describe the learning steps for this network and
introduce baselines along with their performance for each step. The output of
our system is a natural language description of time series that attempts to
match a human's summary of the same data. | [
"cs.LG",
"stat.ML"
] |
Deep learning is expected to offer new opportunities and a new paradigm for
the field of architecture. One such opportunity is teaching neural networks to
visually understand architectural elements from the built environment. However,
the need for large training datasets is one of the biggest limitations
of neural networks, and the vast majority of training data for visual
recognition tasks is annotated by humans. In order to resolve this bottleneck,
we present a concept of a hybrid system using both building information
modeling (BIM) and hyperrealistic (photorealistic) rendering to synthesize
datasets for training a neural network for building object recognition in
photos. For generating our training dataset BIMrAI, we used an existing BIM
model and a corresponding photo-realistically rendered model of the same
building. We created methods for using renderings to train a deep learning
model, trained a generative adversarial network (GAN) model using these
methods, and tested the output model on real-world photos. For the specific
case study presented in this paper, our results show that a neural network
trained with synthetic data, i.e., photorealistic renderings and BIM-based
semantic labels, can be used to identify building objects from photos without
using photos in the training data. Future work can enhance the presented
methods using available BIM models and renderings for more generalized mapping
and description of photographed built environments. | [
"cs.LG"
] |
Often, what is termed algorithmic bias in machine learning will be due to
historic bias in the training data. But sometimes the bias may be introduced
(or at least exacerbated) by the algorithm itself. The ways in which
algorithms can actually accentuate bias have not received much attention, as
researchers have focused directly on methods to eliminate bias, no matter the
source. In this paper we report on initial research to understand the factors
that contribute to bias in classification algorithms. We believe this is
important because underestimation bias is inextricably tied to regularization,
i.e. measures to address overfitting can accentuate bias. | [
"cs.LG",
"stat.ML"
] |
Deep learning-based object detection and instance segmentation have achieved
unprecedented progress. In this paper, we propose Complete-IoU (CIoU) loss and
Cluster-NMS for enhancing geometric factors in both bounding box regression and
Non-Maximum Suppression (NMS), leading to notable gains of average precision
(AP) and average recall (AR), without the sacrifice of inference efficiency. In
particular, we consider three geometric factors, i.e., overlap area, normalized
central point distance and aspect ratio, which are crucial for measuring
bounding box regression in object detection and instance segmentation. The
three geometric factors are then incorporated into CIoU loss for better
distinguishing difficult regression cases. The training of deep models using
CIoU loss results in consistent AP and AR improvements in comparison to widely
adopted $\ell_n$-norm loss and IoU-based loss. Furthermore, we propose
Cluster-NMS, where NMS during inference is done by implicitly clustering
detected boxes, usually requiring fewer iterations. Cluster-NMS is very
efficient due to its pure GPU implementation, and geometric factors can be
incorporated to improve both AP and AR. In the experiments, CIoU loss and
Cluster-NMS have been applied to state-of-the-art instance segmentation (e.g.,
YOLACT and BlendMask-RT), and object detection (e.g., YOLO v3, SSD and Faster
R-CNN) models. Taking YOLACT on MS COCO as an example, our method achieves
performance gains of +1.7 AP and +6.2 AR$_{100}$ for object detection, and +0.9
AP and +3.5 AR$_{100}$ for instance segmentation, with 27.1 FPS on one NVIDIA
GTX 1080Ti GPU. All the source code and trained models are available at
https://github.com/Zzh-tju/CIoU | [
"cs.CV"
] |
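
For reference, the published CIoU loss combines the three geometric factors named above (this is the standard published definition, not notation introduced by this abstract): with $b$, $b^{gt}$ the predicted and ground-truth box centers, $\rho$ the Euclidean distance, and $c$ the diagonal of the smallest box enclosing both,

$$\mathcal{L}_{CIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v, \qquad v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v}.$$

The three terms penalize lack of overlap, normalized center-point distance, and aspect-ratio mismatch, respectively.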
It has been well recognized that modeling object-to-object relations would be
helpful for object detection. Nevertheless, the problem is not trivial
especially when exploring the interactions between objects to boost video
object detectors. The difficulty originates from the aspect that reliable
object relations in a video should depend on not only the objects in the
present frame but also all the supportive objects extracted over a long-range
span of the video. In this paper, we introduce a new design to capture the
interactions across the objects in spatio-temporal context. Specifically, we
present Relation Distillation Networks (RDN) --- a new architecture that
aggregates and propagates object relations in a novel way to augment object features
for detection. Technically, object proposals are first generated via Region
Proposal Networks (RPN). RDN then, on one hand, models object relation via
multi-stage reasoning, and on the other, progressively distills relation
through refining supportive object proposals with high objectness scores in a
cascaded manner. The learnt relations prove effective both in improving
object detection in each frame and in linking boxes across frames. Extensive
experiments are conducted on the ImageNet VID dataset, and superior results are
reported in comparison to state-of-the-art methods. More remarkably, our RDN
achieves 81.8% and 83.2% mAP with ResNet-101 and ResNeXt-101, respectively.
When further equipped with linking and rescoring, we obtain to-date the best
reported mAP of 83.8% and 84.7%. | [
"cs.CV"
] |
The spatio-temporal information among video sequences is significant for
video super-resolution (SR). However, the spatio-temporal information cannot be
fully used by existing video SR methods since spatial feature extraction and
temporal motion compensation are usually performed sequentially. In this paper,
we propose a deformable 3D convolution network (D3Dnet) to incorporate
spatio-temporal information from both spatial and temporal dimensions for video
SR. Specifically, we introduce deformable 3D convolution (D3D) to integrate
deformable convolution with 3D convolution, obtaining both superior
spatio-temporal modeling capability and motion-aware modeling flexibility.
Extensive experiments have demonstrated the effectiveness of D3D in exploiting
spatio-temporal information. Comparative results show that our network achieves
state-of-the-art SR performance. Code is available at:
https://github.com/XinyiYing/D3Dnet. | [
"cs.CV"
] |
Many image segmentation techniques have been developed over the past two
decades, supporting tasks such as object recognition, occlusion boundary
estimation within motion or stereo systems, image compression, and image
editing.
In this paper, we present a combined approach for segmenting an image.
Histogram equalization is first applied to the input image to produce a
contrast-enhanced image. Median filtering is then applied to remove noise from
the contrast-enhanced image. Finally, the fuzzy c-means clustering algorithm is
applied to the denoised image to produce the segmented output. This pipeline
produces a better segmented image with less computation
time. | [
"cs.CV"
] |
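
A minimal sketch of the three-stage pipeline described above, assuming OpenCV for the preprocessing; the fuzzy c-means loop is a plain NumPy implementation written for illustration (not the author's code), and the input path is hypothetical:

```python
import cv2
import numpy as np

def fcm_segment(gray, n_clusters=3, m=2.0, n_iter=50, eps=1e-6):
    x = gray.reshape(-1, 1).astype(np.float64)                 # pixels as 1-D features
    u = np.random.dirichlet(np.ones(n_clusters), size=len(x))  # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]         # weighted cluster means
        d = np.abs(x - centers.T) + eps                        # pixel-to-center distances
        inv = d ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)           # membership update
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return u.argmax(axis=1).reshape(gray.shape)                # hard label per pixel

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)            # hypothetical input
enhanced = cv2.equalizeHist(img)                               # contrast enhancement
denoised = cv2.medianBlur(enhanced, 5)                         # noise removal
labels = fcm_segment(denoised)                                 # segmented output
```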
6D pose estimation is crucial for augmented reality, virtual reality, robotic
manipulation and visual navigation. However, the problem is challenging due to
the variety of objects in the real world. They have varying 3D shapes, and
their appearances in captured images are affected by sensor noise, changing lighting
conditions and occlusions between objects. Different pose estimation methods
have different strengths and weaknesses, depending on feature representations
and scene contents. At the same time, existing 3D datasets that are used for
data-driven methods to estimate 6D poses have limited view angles and low
resolution. To address these issues, we organize the Shape Retrieval Challenge
benchmark on 6D pose estimation and create a physically accurate simulator that
is able to generate photo-realistic color-and-depth image pairs with
corresponding ground truth 6D poses. From captured color and depth images, we
use this simulator to generate a 3D dataset which has 400 photo-realistic
synthesized color-and-depth image pairs with various view angles for training,
and another 100 captured and synthetic images for testing. Five research groups
registered for this track and two of them submitted their results. Data-driven
methods are the current trend in 6D object pose estimation and our evaluation
results show that approaches which fully exploit both color and geometric
features are more robust for 6D pose estimation of reflective and texture-less
objects and under occlusion. This benchmark and the comparative evaluation results have
the potential to further enrich and boost the research of 6D object pose
estimation and its applications. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
Policy evaluation algorithms are essential to reinforcement learning due to
their ability to predict the performance of a policy. However, two
long-standing issues in this prediction problem remain to be tackled:
off-policy stability and on-policy efficiency. The conventional temporal
difference (TD) algorithm is known to perform very well in the on-policy
setting, yet is not off-policy stable. On the other hand, the gradient TD and
emphatic TD algorithms are off-policy stable, but are not on-policy efficient.
This paper introduces novel algorithms that are both off-policy stable and
on-policy efficient by using the oblique projection method. Empirical
results on various domains validate the effectiveness of the
proposed approach. | [
"cs.LG",
"stat.ML"
] |
This paper focuses on webly supervised learning (WSL), where datasets are
built by crawling samples from the Internet and directly using search queries
as web labels. Although WSL benefits from fast and low-cost data collection,
noise in web labels hinders the performance of image classification
models. To alleviate this problem, recent works utilize a self-label supervised
loss $\mathcal{L}_s$ together with the webly supervised loss
$\mathcal{L}_w$. $\mathcal{L}_s$ relies on pseudo labels predicted by the model
itself. Since the correctness of the web label or pseudo label is usually
case-by-case for each web sample, it is desirable to adjust the balance
between $\mathcal{L}_s$ and $\mathcal{L}_w$ at the sample level. Inspired by the
ability of Deep Neural Networks (DNNs) in confidence prediction, we introduce
Self-Contained Confidence (SCC) by adapting model uncertainty to the WSL
setting, and use it to balance $\mathcal{L}_s$ and $\mathcal{L}_w$ sample-wise.
Therefore, a simple yet effective WSL framework is proposed. A series of
SCC-friendly regularization approaches are investigated, among which the
proposed graph-enhanced mixup is the most effective method to provide
high-quality confidence to enhance our framework. The proposed WSL framework
achieves state-of-the-art results on two large-scale WSL datasets,
WebVision-1000 and Food101-N. Code is available at
https://github.com/bigvideoresearch/SCC. | [
"cs.CV"
] |
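
A minimal sketch of the sample-wise balancing described above: a per-sample confidence in $[0, 1]$ weights the self-label loss against the web-label loss. The confidence estimate (here simply the max softmax probability) and all shapes are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def balanced_wsl_loss(logits, web_labels, pseudo_labels, confidence):
    # Per-sample cross-entropy for each supervision source.
    l_w = F.cross_entropy(logits, web_labels, reduction="none")
    l_s = F.cross_entropy(logits, pseudo_labels, reduction="none")
    # Confident samples lean on the model's own pseudo labels; uncertain
    # ones fall back to the noisy web labels.
    return (confidence * l_s + (1.0 - confidence) * l_w).mean()

logits = torch.randn(8, 100)                      # batch of 8, 100 classes
web = torch.randint(0, 100, (8,))                 # noisy web labels
pseudo = logits.argmax(dim=1)                     # model's own predictions
conf = torch.softmax(logits, dim=1).max(dim=1).values.detach()
loss = balanced_wsl_loss(logits, web, pseudo, conf)
```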
We present pure-transformer based models for video classification, drawing
upon the recent success of such models in image classification. Our model
extracts spatio-temporal tokens from the input video, which are then encoded by
a series of transformer layers. In order to handle the long sequences of tokens
encountered in video, we propose several efficient variants of our model which
factorise the spatial and temporal dimensions of the input. Although
transformer-based models are known to only be effective when large training
datasets are available, we show how we can effectively regularise the model
during training and leverage pretrained image models to be able to train on
comparatively small datasets. We conduct thorough ablation studies, and achieve
state-of-the-art results on multiple video classification benchmarks including
Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in
Time, outperforming prior methods based on deep 3D convolutional networks. To
facilitate further research, we will release code and models. | [
"cs.CV"
] |
Inspired by the philosophy employed by human beings to determine whether a
presented face example is genuine or not, i.e., to glance at the example
globally first and then carefully observe the local regions to gain more
discriminative information, we propose a novel framework for the face
anti-spoofing problem based on the Convolutional Neural Network (CNN) and the
Recurrent Neural Network (RNN). In particular, we model the behavior of
exploring face-spoofing-related information from image sub-patches by
leveraging deep reinforcement learning. We further introduce a recurrent
mechanism to learn representations of local information sequentially from the
explored sub-patches with an RNN. Finally, for the classification purpose, we
fuse the local information with the global one, which can be learned from the
original input image through a CNN. Moreover, we conduct extensive experiments,
including ablation study and visualization analysis, to evaluate our proposed
framework on various public databases. The experimental results show that our
method generally achieves state-of-the-art performance across all scenarios,
demonstrating its effectiveness. | [
"cs.CV"
] |
Facial caricature is an art form of drawing faces in an exaggerated way to
convey humor or sarcasm. In this paper, we propose the first Generative
Adversarial Network (GAN) for unpaired photo-to-caricature translation, which
we call "CariGANs". It explicitly models geometric exaggeration and appearance
stylization using two components: CariGeoGAN, which only models the
geometry-to-geometry transformation from face photos to caricatures, and
CariStyGAN, which transfers the style appearance from caricatures to face
photos without any geometry deformation. In this way, a difficult cross-domain
translation problem is decoupled into two easier tasks. The perceptual study
shows that caricatures generated by our CariGANs are closer to the hand-drawn
ones, and at the same time better preserve the identity, compared to
state-of-the-art methods. Moreover, our CariGANs allow users to control the
shape exaggeration degree and change the color/texture style by tuning the
parameters or giving an example caricature. | [
"cs.CV",
"cs.AI",
"cs.GR"
] |
The development of lightweight object detectors is essential given
limited computation resources. To reduce the computation cost, how to generate
redundant features cheaply plays a significant role. This paper proposes a new
lightweight convolution method, the Cross-Stage Lightweight (CSL) Module, which
generates redundant features from cheap operations. In the intermediate
expansion stage, we replaced Pointwise Convolution with Depthwise Convolution
to produce candidate features. The proposed CSL-Module can reduce the
computation cost significantly. Experiments conducted at MS-COCO show that the
proposed CSL-Module can approximate the fitting ability of Convolution-3x3.
Finally, we use the module to construct a lightweight detector, CSL-YOLO,
which achieves better detection performance than Tiny-YOLOv4 with only 43% of
the FLOPs and 52% of the parameters. | [
"cs.CV"
] |
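
A minimal sketch contrasting the pointwise and depthwise convolutions mentioned in the abstract above; the channel counts are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

c = 64
pointwise = nn.Conv2d(c, c, kernel_size=1)               # 1x1 conv: c*c weights
depthwise = nn.Conv2d(c, c, kernel_size=3, padding=1,
                      groups=c)                          # 3x3 per-channel conv: c*9 weights

x = torch.randn(1, c, 32, 32)
assert pointwise(x).shape == depthwise(x).shape
print(sum(p.numel() for p in pointwise.parameters()))    # 64*64 + 64 = 4160
print(sum(p.numel() for p in depthwise.parameters()))    # 64*9  + 64 = 640
```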
Distributed synchronization is known to occur at several scales in the brain,
and has been suggested as playing a key functional role in perceptual grouping.
State-of-the-art visual grouping algorithms, however, seem to give
comparatively little attention to neural synchronization analogies. Based on
the framework of concurrent synchronization of dynamic systems, simple networks
of neural oscillators coupled with diffusive connections are proposed to solve
visual grouping problems. Multi-layer algorithms and feedback mechanisms are
also studied. The same algorithm is shown to achieve promising results on
several classical visual grouping problems, including point clustering, contour
integration and image segmentation. | [
"cs.CV",
"cs.NE"
] |
This paper presents TrashCan, a large dataset comprised of images of
underwater trash collected from a variety of sources, annotated both using
bounding boxes and segmentation labels, for development of robust detectors of
marine debris. The dataset has two versions, TrashCan-Material and
TrashCan-Instance, corresponding to different object class configurations. The
eventual goal is to develop efficient and accurate trash detection methods
suitable for onboard robot deployment. Along with information about the
construction and sourcing of the TrashCan dataset, we present initial results
of instance segmentation from Mask R-CNN and object detection from Faster
R-CNN. These do not represent the best possible detection results but provide
an initial baseline for future work in instance segmentation and object
detection on the TrashCan dataset. | [
"cs.CV",
"cs.RO"
] |
Despite rapid advances in image-based machine learning, the threat
identification of a knife-wielding attacker has not garnered substantial
academic attention. This relative research gap appears less understandable
given the high knife assault rate (>100,000 annually) and the increasing
availability of public video surveillance to analyze and forensically document.
We present three complementary methods for scoring automated threat
identification using multiple knife image datasets, each with the goal of
narrowing down possible assault intentions while minimizing misidentifying
false positives and risky false negatives. To alert an observer to the
knife-wielding threat, we test and deploy classification built around MobileNet
in a sparse and pruned neural network with a small memory requirement (< 2.2
megabytes) and 95% test accuracy. We secondly train a detection algorithm
(MaskRCNN) to segment the hand from the knife in a single image and assign
probable certainty to their relative location. This segmentation accomplishes
not only localization with bounding boxes but also relative positioning to
infer overhand threats. A final model built on the PoseNet architecture assigns
anatomical waypoints or skeletal features to narrow the threat characteristics
and reduce misunderstood intentions. We further identify and supplement
existing data gaps that might blind a deployed knife threat detector such as
collecting innocuous hand and fist images as important negative training sets.
When automated on commodity hardware and software, one original research
contribution is this systematic survey of timely and readily available
image-based alerts for tasking and prioritizing crime prevention
countermeasures prior to a tragic outcome. | [
"cs.CV",
"cs.LG"
] |
In this paper, we investigate the use of generative adversarial networks in
the task of image generation according to subjective measures of semantic
attributes. Unlike the standard conditional GAN (CGAN), which generates images
from discrete categorical labels, our architecture handles both continuous and
discrete scales. Given pairwise comparisons of images, our model, called RankCGAN,
performs two tasks: it learns to rank images using a subjective measure; and it
learns a generative model that can be controlled by that measure. RankCGAN
associates each subjective measure of interest to a distinct dimension of some
latent space. We perform experiments on UT-Zap50K, PubFig and OSR datasets and
demonstrate that the model is expressive and diverse enough to conduct
two-attribute exploration and image editing. | [
"cs.CV"
] |
As a non-parametric Bayesian model which produces informative predictive
distributions, the Gaussian process (GP) has been widely used in various
fields, such as regression, classification and optimization. The cubic
complexity of the standard GP, however, leads to poor scalability, which poses
challenges in the
era of big data. Hence, various scalable GPs have been developed in the
literature in order to improve the scalability while retaining desirable
prediction accuracy. This paper is devoted to investigating the methodological
characteristics and performance of representative global and local scalable GPs,
including sparse approximations and local aggregations from four main
perspectives: scalability, capability, controllability and robustness. The
numerical experiments on two toy examples and five real-world datasets with up
to 250K points offer the following findings. In terms of scalability, most of
the scalable GPs have a time complexity that is linear in the training size. In
terms of capability, the sparse approximations capture long-term spatial
correlations, while the local aggregations capture local patterns but suffer
from over-fitting in some scenarios. In terms of controllability, we could
improve
the performance of sparse approximations by simply increasing the inducing
size. But this is not the case for local aggregations. In terms of robustness,
local aggregations are robust to various initializations of hyperparameters due
to the local attention mechanism. Finally, we highlight that the proper hybrid
of global and local scalable GPs may be a promising way to improve both the
model capability and scalability for big data. | [
"stat.ML",
"cs.LG"
] |
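
For reference, the standard GP regression equations whose cubic cost the abstract refers to (a textbook formulation, not specific to this paper): given $n$ training inputs with kernel matrix $K$, targets $\mathbf{y}$, noise variance $\sigma^2$, and $k_*$ the vector of kernel evaluations between a test point $x_*$ and the training inputs,

$$\mu_* = k_*^\top (K + \sigma^2 I)^{-1} \mathbf{y}, \qquad \sigma_*^2 = k(x_*, x_*) - k_*^\top (K + \sigma^2 I)^{-1} k_*.$$

Forming $(K + \sigma^2 I)^{-1}$ costs $\mathcal{O}(n^3)$, which is the bottleneck that scalable GPs attack.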
We consider the problem of estimating a linear time-invariant (LTI) dynamical
system from a single trajectory via streaming algorithms, which is encountered
in several applications including reinforcement learning (RL) and time-series
analysis. While the LTI system estimation problem is well-studied in the {\em
offline} setting, the practically important streaming/online setting has
received little attention. Standard streaming methods like stochastic gradient
descent (SGD) are unlikely to work since streaming points can be highly
correlated. In this work, we propose a novel streaming algorithm, SGD with
Reverse Experience Replay ($\mathsf{SGD}-\mathsf{RER}$), that is inspired by
the experience replay (ER) technique popular in the RL literature.
$\mathsf{SGD}-\mathsf{RER}$ divides data into small buffers and runs SGD
backwards on the data stored in the individual buffers. We show that this
algorithm exactly deconstructs the dependency structure and obtains
information-theoretically optimal guarantees for both parameter error and
prediction error.
Thus, we provide the first -- to the best of our knowledge -- optimal SGD-style
algorithm for the classical problem of linear system identification with a
first order oracle. Furthermore, $\mathsf{SGD}-\mathsf{RER}$ can be applied to
more general settings like sparse LTI identification with known sparsity
pattern, and non-linear dynamical systems. Our work demonstrates that the
knowledge of data dependency structure can aid us in designing statistically
and computationally efficient algorithms which can "decorrelate" streaming
samples. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
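
A minimal sketch of the buffer-reversal idea described in the abstract above: stream data into small buffers, then run SGD over each buffer in reverse order to break temporal correlations. Details of the actual algorithm (e.g., gaps between buffers) are omitted; the buffer size, step size, and toy stream are arbitrary choices for illustration:

```python
import numpy as np

def sgd_rer(stream, dim, buffer_size=10, lr=0.1):
    theta = np.zeros(dim)                            # parameter estimate
    buffer = []
    for x, y in stream:                              # correlated (input, response) pairs
        buffer.append((x, y))
        if len(buffer) == buffer_size:
            for xb, yb in reversed(buffer):          # replay the buffer backwards
                theta -= lr * (xb @ theta - yb) * xb # squared-loss SGD step
            buffer.clear()
    return theta

def make_stream(theta_star, n=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=theta_star.size)
    for _ in range(n):
        x = 0.9 * x + 0.1 * rng.normal(size=theta_star.size)  # correlated inputs
        yield x.copy(), x @ theta_star + 0.01 * rng.standard_normal()

theta_star = np.array([1.0, -2.0])
print(sgd_rer(make_stream(theta_star), dim=2))       # approaches theta_star
```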
A long-standing challenge in Reinforcement Learning is enabling agents to
learn a model of their environment which can be transferred to solve other
problems in a world with the same underlying rules. One reason this is
difficult is the challenge of learning accurate models of an environment. If
such a model is inaccurate, the agent's plans and actions will likely be
sub-optimal and lead to the wrong outcomes. Recent progress in
model-based reinforcement learning has improved the ability for agents to learn
and use predictive models. In this paper, we extend a recent deep learning
architecture which learns a predictive model of the environment, aiming to
predict only the values of a few key measurements that are indicative of an
agent's performance. Predicting only a few measurements rather than the entire
future state of an environment makes it more feasible to learn a valuable
predictive model. We extend this predictive model with a small, evolving neural
network that suggests the best goals to pursue in the current state. We
demonstrate that this allows the predictive model to transfer to new scenarios
where goals are different, and that the adaptive goals can even adjust agent
behavior on-line, changing its strategy to fit the current context. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
We propose an image based end-to-end learning framework that helps
lane-change decisions for human drivers and autonomous vehicles. The proposed
system, Safe Lane-Change Aid Network (SLCAN), trains a deep convolutional
neural network to classify the status of adjacent lanes from rear view images
acquired by cameras mounted on both sides of the vehicle. Rather than depending
on any explicit object detection or tracking scheme, SLCAN reads the whole
input image and directly decides whether initiation of the lane-change at the
moment is safe or not. We collected and annotated 77,273 rear side view images
to train and test SLCAN. Experimental results show that the proposed framework
achieves 96.98% classification accuracy although the test images are from
unseen roadways. We also visualize the saliency map to understand which parts
of the image SLCAN looks at when making correct decisions. | [
"cs.CV"
] |
Large-scale and multidimensional spatiotemporal data sets are becoming
ubiquitous in many real-world applications such as monitoring urban traffic and
air quality. Making predictions on these time series has become a critical
challenge due to not only the large-scale and high-dimensional nature but also
the considerable amount of missing data. In this paper, we propose a Bayesian
temporal factorization (BTF) framework for modeling multidimensional time
series -- in particular spatiotemporal data -- in the presence of missing
values. By integrating low-rank matrix/tensor factorization and a vector
autoregressive (VAR) process into a single probabilistic graphical model, this
framework can characterize both global and local consistencies in large-scale
time series data. The graphical model allows us to effectively perform
probabilistic predictions and produce uncertainty estimates without imputing
those missing values. We develop efficient Gibbs sampling algorithms for model
inference and model updating for real-time prediction and test the proposed BTF
framework on several real-world spatiotemporal data sets for both missing data
imputation and multi-step rolling prediction tasks. The numerical experiments
demonstrate the superiority of the proposed BTF approaches over existing
state-of-the-art methods. | [
"stat.ML",
"cs.LG"
] |
Policy gradient (PG) gives rise to a rich class of reinforcement learning
(RL) methods. Recently, there has been an emerging trend to accelerate the
existing PG methods such as REINFORCE by the \emph{variance reduction}
techniques. However, all existing variance-reduced PG methods heavily rely on
an uncheckable importance weight assumption made for every single iteration of
the algorithms. In this paper, a simple gradient truncation mechanism is
proposed to address this issue. Moreover, we design a Truncated Stochastic
Incremental Variance-Reduced Policy Gradient (TSIVR-PG) method, which is able
to maximize not only a cumulative sum of rewards but also a general utility
function over a policy's long-term visiting distribution. We show an
$\tilde{\mathcal{O}}(\epsilon^{-3})$ sample complexity for TSIVR-PG to find an
$\epsilon$-stationary policy. By assuming overparameterization of the policy
and exploiting the hidden convexity of the problem, we further show that
TSIVR-PG converges to global $\epsilon$-optimal policy with
$\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. | [
"cs.LG",
"stat.ML"
] |
We propose Shift R-CNN, a hybrid model for monocular 3D object detection,
which combines deep learning with the power of geometry. We adapt a Faster
R-CNN network for regressing initial 2D and 3D object properties and combine it
with a least squares solution for the inverse 2D to 3D geometric mapping
problem, using the camera projection matrix. The closed-form solution of the
mathematical system, along with the initial output of the adapted Faster R-CNN
are then passed through a final ShiftNet network that refines the result using
our newly proposed Volume Displacement Loss. Our novel, geometrically
constrained deep learning approach to monocular 3D object detection obtains top
results on KITTI 3D Object Detection Benchmark, being the best among all
monocular methods that do not use any pre-trained network for depth estimation. | [
"cs.CV",
"cs.LG"
] |
The machine learning community has been overwhelmed by a plethora of deep
learning based approaches. Many challenging computer vision tasks such as
detection, localization, recognition and segmentation of objects in
unconstrained environments are being efficiently addressed by various types of
deep neural networks like convolutional neural networks, recurrent networks,
adversarial networks, autoencoders and so on. While there have been plenty of
analytical studies regarding the object detection and recognition domains, many
new deep learning techniques have surfaced for image
segmentation. This paper approaches these various deep learning techniques of
image segmentation from an analytical perspective. The main goal of this work
is to provide an intuitive understanding of the major techniques that have made
significant contributions to the image segmentation domain. Starting from some
of the traditional image segmentation approaches, the paper progresses to
describe the effect deep learning has had on the image segmentation domain.
Thereafter, most of the major segmentation algorithms have been logically
categorized with paragraphs dedicated to their unique contribution. With an
ample amount of intuitive explanations, the reader is expected to have an
improved ability to visualize the internal dynamics of these processes. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
Tree-based machine learning models such as random forests, decision trees,
and gradient boosted trees are the most popular non-linear predictive models
used in practice today, yet comparatively little attention has been paid to
explaining their predictions. Here we significantly improve the
interpretability of tree-based models through three main contributions: 1) The
first polynomial time algorithm to compute optimal explanations based on game
theory. 2) A new type of explanation that directly measures local feature
interaction effects. 3) A new set of tools for understanding global model
structure based on combining many local explanations of each prediction. We
apply these tools to three medical machine learning problems and show how
combining many high-quality local explanations allows us to represent global
structure while retaining local faithfulness to the original model. These tools
enable us to i) identify high magnitude but low frequency non-linear mortality
risk factors in the general US population, ii) highlight distinct population
sub-groups with shared risk characteristics, iii) identify non-linear
interaction effects among risk factors for chronic kidney disease, and iv)
monitor a machine learning model deployed in a hospital by identifying which
features are degrading the model's performance over time. Given the popularity
of tree-based machine learning models, these improvements to their
interpretability have implications across a broad set of domains. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
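
A minimal usage sketch of game-theoretic explanations for a tree ensemble. The abstract names no code, so the open-source shap package is used here as one widely available implementation of these ideas, with a hypothetical regression setup:

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)     # polynomial-time exact SHAP for trees
shap_values = explainer.shap_values(X)    # one attribution per feature per row
shap.summary_plot(shap_values, X)         # global structure from local explanations
```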
Deep learning approaches to breast cancer detection in mammograms have
recently shown promising results. However, such models are constrained by the
limited size of publicly available mammography datasets, in large part due to
privacy concerns and the high cost of generating expert annotations. Limited
dataset size is further exacerbated by substantial class imbalance since
"normal" images dramatically outnumber those with findings. Given the rapid
progress of generative models in synthesizing realistic images, and the known
effectiveness of simple data augmentation techniques (e.g. horizontal
flipping), we ask if it is possible to synthetically augment mammogram datasets
using generative adversarial networks (GANs). We train a class-conditional GAN
to perform contextual in-filling, which we then use to synthesize lesions onto
healthy screening mammograms. First, we show that GANs are capable of
generating high-resolution synthetic mammogram patches. Next, we experimentally
evaluate using the augmented dataset to improve breast cancer classification
performance. We observe that a ResNet-50 classifier trained with GAN-augmented
training data produces a higher AUROC compared to the same model trained only
on traditionally augmented data, demonstrating the potential of our approach. | [
"cs.CV"
] |
Robustness is a key requirement for widespread deployment of machine learning
algorithms, and has received much attention in both statistics and computer
science. We study a natural model of robustness for high-dimensional
statistical estimation problems that we call the adversarial perturbation
model. An adversary can perturb every sample arbitrarily up to a specified
magnitude $\delta$ measured in some $\ell_q$ norm, say $\ell_\infty$. Our model
is motivated by emerging paradigms such as low precision machine learning and
adversarial training.
We study the classical problem of estimating the top-$r$ principal subspace
of the Gaussian covariance matrix in high dimensions, under the adversarial
perturbation model. We design a computationally efficient algorithm that,
given corrupted data, recovers an estimate of the top-$r$ principal subspace with
error that depends on a robustness parameter $\kappa$ that we identify. This
parameter corresponds to the $q \to 2$ operator norm of the projector onto the
principal subspace, and generalizes well-studied analytic notions of sparsity.
Additionally, in the absence of corruptions, our algorithmic guarantees recover
existing bounds for problems such as sparse PCA and its higher rank analogs. We
also prove that the above dependence on the parameter $\kappa$ is almost
optimal asymptotically, not just in a minimax sense, but remarkably for every
instance of the problem. This instance-optimal guarantee shows that the $q \to
2$ operator norm of the subspace essentially characterizes the estimation error
under adversarial perturbations. | [
"cs.LG",
"cs.DS",
"stat.ML"
] |
Off-policy evaluation is a key component of reinforcement learning which
evaluates a target policy with offline data collected from behavior policies.
It is a crucial step towards safe reinforcement learning and has been used in
advertisement, recommender systems and many other applications. In these
applications, sometimes the offline data is collected from multiple behavior
policies. Previous works regard data from different behavior policies equally.
Nevertheless, some behavior policies are better at producing good estimators
while others are not. This paper starts by discussing how to correctly mix
estimators produced by different behavior policies. We propose three ways to
reduce the variance of the mixture estimator when all sub-estimators are
unbiased or asymptotically unbiased. Furthermore, experiments on simulated
recommender systems show that our methods are effective in reducing the
Mean-Square Error of estimation. | [
"cs.LG"
] |
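
As background for mixing estimators, the classical minimum-variance combination of independent unbiased estimators (a textbook identity, not necessarily one of the three methods proposed in the paper) weights each sub-estimator by its inverse variance:

$$\hat{V} = \sum_i w_i \hat{V}_i, \qquad w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},$$

which keeps the mixture unbiased while minimizing its variance over all convex combinations.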
We consider the problem of scaling deep generative shape models to
high-resolution. Drawing motivation from the canonical view representation of
objects, we introduce a novel method for the fast up-sampling of 3D objects in
voxel space through networks that perform super-resolution on the six
orthographic depth projections. This allows us to generate high-resolution
objects with more efficient scaling than methods which work directly in 3D. We
decompose the problem of 2D depth super-resolution into silhouette and depth
prediction to capture both structure and fine detail. This allows our method to
generate sharp edges more easily than an individual network. We evaluate our
work on multiple experiments concerning high-resolution 3D objects, and show
our system is capable of accurately predicting novel objects at resolutions as
large as 512$\mathbf{\times}$512$\mathbf{\times}$512 -- the highest resolution
reported for this task. We achieve state-of-the-art performance on 3D object
reconstruction from RGB images on the ShapeNet dataset, and further demonstrate
the first effective 3D super-resolution method. | [
"cs.CV"
] |
We present a new method to learn video representations from large-scale
unlabeled video data. Ideally, this representation will be generic and
transferable, directly usable for new tasks such as action recognition and
zero- or few-shot learning. We formulate unsupervised representation learning as a
multi-modal, multi-task learning problem, where the representations are shared
across different modalities via distillation. Further, we introduce the concept
of loss function evolution by using an evolutionary search algorithm to
automatically find an optimal combination of loss functions capturing many
(self-supervised) tasks and modalities. Thirdly, we propose an unsupervised
representation evaluation metric using distribution matching to a large
unlabeled dataset as a prior constraint, based on Zipf's law. This unsupervised
constraint, which is not guided by any labeling, produces similar results to
weakly-supervised, task-specific ones. The proposed unsupervised representation
learning results in a single RGB network and outperforms previous methods.
Notably, it is also more effective than several label-based methods (e.g.,
ImageNet), with the exception of large, fully labeled video datasets. | [
"cs.CV",
"cs.LG"
] |
Building models capable of generating structured output is a key challenge
for AI and robotics. While generative models have been explored on many types
of data, little work has been done on synthesizing lidar scans, which play a
key role in robot mapping and localization. In this work, we show that one can
adapt deep generative models for this task by unravelling lidar scans into a 2D
point map. Our approach can generate high quality samples, while simultaneously
learning a meaningful latent representation of the data. We demonstrate
significant improvements against state-of-the-art point cloud generation
methods. Furthermore, we propose a novel data representation that augments the
2D signal with absolute positional information. We show that this helps
robustness to noisy and imputed input; the learned model can recover the
underlying lidar scan from seemingly uninformative data. | [
"cs.CV"
] |
Object detection models perform well at localizing and classifying objects
that they are shown during training. However, due to the difficulty and cost
associated with creating and annotating detection datasets, trained models
detect a limited number of object types with unknown objects treated as
background content. This hinders the adoption of conventional detectors in
real-world applications like large-scale object matching, visual grounding,
visual relation prediction, obstacle detection (where it is more important to
determine the presence and location of objects than to find specific types),
etc. We propose class-agnostic object detection as a new problem that focuses
on detecting objects irrespective of their object-classes. Specifically, the
goal is to predict bounding boxes for all objects in an image but not their
object-classes. The predicted boxes can then be consumed by another system to
perform application-specific classification, retrieval, etc. We propose
training and evaluation protocols for benchmarking class-agnostic detectors to
advance future research in this domain. Finally, we propose (1) baseline
methods and (2) a new adversarial learning framework for class-agnostic
detection that forces the model to exclude class-specific information from
features used for predictions. Experimental results show that adversarial
learning improves class-agnostic detection efficacy. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
While recent deep deblurring algorithms have achieved remarkable progress,
most existing methods focus on the global deblurring problem, where the image
blur mostly arises from severe camera shake. We argue that the local blur,
which is mostly derived from moving objects with a relatively static
background, is prevalent but remains under-explored. In this paper, we first
lay the data foundation for local deblurring by constructing, for the first
time, a LOcal-DEblur (LODE) dataset consisting of 3,700 real-world captured
locally blurred images and their corresponding ground-truth. Then, we propose a
novel framework, termed BLur-Aware DEblurring network (BladeNet), which
contains three components: the Local Blur Synthesis module generates locally
blurred training pairs, the Local Blur Perception module automatically captures
the locally blurred region and the Blur-guided Spatial Attention module guides
the deblurring network with spatial attention. This framework is flexible such
that it can be combined with many existing SotA algorithms. We carry out
extensive experiments on REDS and LODE datasets showing that BladeNet improves
PSNR by 2.5dB over SotAs for local deblurring while keeping comparable
performance for global deblurring. We will publish the dataset and code. | [
"cs.CV"
] |
A method for counterfactual explanation of machine learning survival models
is proposed. One of the difficulties of solving the counterfactual explanation
problem is that the classes of examples are implicitly defined through outcomes
of a machine learning survival model in the form of survival functions. A
condition that establishes the difference between survival functions of the
original example and the counterfactual is introduced. This condition is based
on using a distance between mean times to event. It is shown that the
counterfactual explanation problem can be reduced to a standard convex
optimization problem with linear constraints when the explained black-box model
is the Cox model. For other black-box models, it is proposed to apply the
well-known Particle Swarm Optimization algorithm. Extensive numerical
experiments with real and synthetic data demonstrate the effectiveness of the
proposed method. | [
"cs.LG",
"stat.ML"
] |
The learning of Transformation-Equivariant Representations (TERs), which is
introduced by Hinton et al. \cite{hinton2011transforming}, has been considered
as a principle to reveal visual structures under various transformations. It
contains the celebrated Convolutional Neural Networks (CNNs) as a special case
that is equivariant only to translations. In contrast, we seek to train TERs for
a generic class of transformations and train them in an {\em unsupervised}
fashion. To this end, we present a novel principled method, Autoencoding
Variational Transformations (AVT), in contrast with the conventional approach of
autoencoding data. Formally, given transformed images, the AVT seeks to train
the networks by maximizing the mutual information between the transformations
and representations. This ensures the resultant TERs of individual images
contain the {\em intrinsic} information about their visual structures that
would equivary {\em extricably} under various transformations in a generalized
{\em nonlinear} case. Technically, we show that the resultant optimization
problem can be efficiently solved by maximizing a variational lower-bound of
the mutual information. This variational approach introduces a transformation
decoder to approximate the intractable posterior of transformations, resulting
in an autoencoding architecture with a pair of the representation encoder and
the transformation decoder. Experiments demonstrate the proposed AVT model sets
a new record for the performances on unsupervised tasks, greatly closing the
performance gap to the supervised models. | [
"cs.CV"
] |
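
The variational lower bound mentioned in the abstract above is presumably of the standard Barber-Agakov form (stated here as a general identity rather than quoted from the paper): for a transformation $t$ and representation $z$,

$$I(t; z) = H(t) - H(t \mid z) \ge H(t) + \mathbb{E}_{p(t,z)}\left[\log q(t \mid z)\right],$$

where $q$ is the transformation decoder approximating the intractable posterior $p(t \mid z)$; maximizing the right-hand side trains the representation encoder and the transformation decoder jointly.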
Generative Adversarial Networks, as a promising research direction in the AI
community, have recently attracted considerable attention due to their ability
to generate high-quality realistic data. GANs are a competing game between two
neural networks trained in an adversarial manner to reach a Nash equilibrium.
Despite the improvements accomplished in GANs in recent years, several issues
remain to be solved, and how to tackle these issues and make advances has
attracted rising research interest. This paper reviews literature that
leverages game theory in GANs and addresses how game models can relieve
specific generative models' challenges and improve the GAN's performance. In
particular, we first review some preliminaries, including the basic GAN model
and some game theory background. After that, we present our taxonomy to
summarize the state-of-the-art solutions into three significant categories:
modified game model, modified architecture, and modified learning method. The
classification is based on the modifications made in the basic model by the
proposed approaches from the game-theoretic perspective. We further classify
each category into several subcategories. Following the proposed taxonomy, we
explore the main objective of each class and review the recent work in each
group. Finally, we discuss the remaining challenges in this field and present
the potential future research topics. | [
"cs.LG",
"cs.AI",
"cs.GT"
] |
It is essential for an automated vehicle in the field to perform
discretionary lane changes with appropriate roadmanship - driving safely and
efficiently without annoying or endangering other road users - under a wide
range of traffic cultures and driving conditions. While deep reinforcement
learning methods have excelled in recent years and been applied to automated
vehicle driving policy, there are concerns about their capability to quickly
adapt to unseen traffic with new environment dynamics. We formulate this
challenge as a multi-Markov Decision Process (MDP) adaptation problem and
develop Meta Reinforcement Learning (MRL) driving policies to showcase their
quick learning capability. Two types of distribution variation in environments
were designed and simulated to validate the fast adaptation capability of the
resulting MRL driving policies, which significantly outperform a baseline RL policy. | [
"cs.LG",
"cs.SY",
"eess.SY"
] |
An iris recognition system is vulnerable to presentation attacks, or PAs,
where an adversary presents artifacts such as printed eyes, plastic eyes, or
cosmetic contact lenses to circumvent the system. In this work, we propose an
effective and robust iris PA detector called D-NetPAD based on the DenseNet
convolutional neural network architecture. It demonstrates generalizability
across PA artifacts, sensors and datasets. Experiments conducted on a
proprietary dataset and a publicly available dataset (LivDet-2017) substantiate
the effectiveness of the proposed method for iris PA detection. The proposed
method results in a true detection rate of 98.58\% at a false detection rate of
0.2\% on the proprietary dataset and outperforms state-of-the-art methods on the
LivDet-2017 dataset. We visualize intermediate feature distributions and
fixation heatmaps using t-SNE plots and Grad-CAM, respectively, in order to
explain the performance of D-NetPAD. Further, we conduct a frequency analysis
to explain the nature of features being extracted by the network. The source
code and trained model are available at https://github.com/iPRoBe-lab/D-NetPAD. | [
"cs.CV"
] |
Despite growing insights into GAN training, it still suffers from
instability during the training procedure. To alleviate this problem, this
paper presents a novel convolutional layer, called perturbed-convolution
(PConv), which focuses on achieving two goals simultaneously: penalize the
discriminator for training GAN stably and prevent the overfitting problem in
the discriminator. PConv generates perturbed features by randomly disturbing an
input tensor before performing the convolution operation. This approach is
simple but surprisingly effective. First, to reliably classify real and
generated samples using the disturbed input tensor, the intermediate layers in
the discriminator should learn features having a small local Lipschitz value.
Second, due to the perturbed features in PConv, it is difficult for the
discriminator to memorize the real images; this helps the discriminator avoid
the overfitting problem. To show the generalization ability of the proposed
method, we
conducted extensive experiments with various loss functions and datasets
including CIFAR-10, CelebA-HQ, LSUN, and tiny-ImageNet. Quantitative
evaluations demonstrate that the proposed method significantly improves the
performance of GANs and conditional GANs in terms of Frechet inception distance
(FID). For instance,
the proposed method improves FID scores on the tiny-ImageNet dataset from 58.59
to 50.42. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
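
A minimal sketch of a perturbed convolution in the spirit of PConv as described above: randomly disturb the input tensor before the convolution operation. The noise form (Gaussian, fixed scale) is an assumption; the paper may use a different perturbation:

```python
import torch
import torch.nn as nn

class PerturbedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, noise_std=0.1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        self.noise_std = noise_std

    def forward(self, x):
        if self.training:                         # perturb only during training
            x = x + self.noise_std * torch.randn_like(x)
        return self.conv(x)

layer = PerturbedConv2d(64, 128)
out = layer(torch.randn(4, 64, 16, 16))           # -> (4, 128, 16, 16)
```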
We challenge a common assumption underlying most supervised deep learning:
that a model makes a prediction depending only on its parameters and the
features of a single input. To this end, we introduce a general-purpose deep
learning architecture that takes as input the entire dataset instead of
processing one datapoint at a time. Our approach uses self-attention to reason
about relationships between datapoints explicitly, which can be seen as
realizing non-parametric models using parametric attention mechanisms. However,
unlike conventional non-parametric models, we let the model learn end-to-end
from the data how to make use of other datapoints for prediction. Empirically,
our models solve cross-datapoint lookup and complex reasoning tasks unsolvable
by traditional deep learning models. We show highly competitive results on
tabular data, early results on CIFAR-10, and give insight into how the model
makes use of the interactions between points. | [
"cs.LG",
"stat.ML"
] |
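
A minimal sketch of attention across datapoints rather than within one input, as described in the abstract above: the whole (sub)dataset is treated as a sequence, so each datapoint attends to every other. This illustrates the idea only; the actual architecture is more elaborate:

```python
import torch
import torch.nn as nn

class DatapointAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, dataset_embeddings):
        # dataset_embeddings: (1, n_datapoints, dim) -- the "sequence"
        # dimension indexes datapoints, not tokens of one input.
        out, _ = self.attn(dataset_embeddings, dataset_embeddings,
                           dataset_embeddings)
        return out

n, d = 256, 32                                    # 256 datapoints, 32-dim features
emb = torch.randn(1, n, d)
mixed = DatapointAttention(d)(emb)                # each row now depends on all rows
```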
Growing concerns regarding the operational usage of AI models in the
real world have caused a surge of interest in explaining AI models' decisions
to humans. Reinforcement Learning is no exception in this regard. In this
work, we propose a method for offering local explanations on risk in
reinforcement learning. Our method only requires a log of previous interactions
between the agent and the environment to create a state-transition model. It is
designed to work on RL environments with either continuous or discrete state
and action spaces. After creating the model, actions of any agent can be
explained in terms of the features most influential in increasing or decreasing
risk or any other desirable objective function in the locality of the agent.
Through experiments, we demonstrate the effectiveness of the proposed method in
providing such explanations. | [
"cs.LG"
] |
Point cloud completion is the task of predicting complete geometry from
partial observations using a point set representation for a 3D shape. Previous
approaches propose neural networks to directly estimate the whole point cloud
through encoder-decoder models fed by the incomplete point set. By predicting
the complete model, the current methods compute redundant information because
the output also contains the known incomplete input geometry. This paper
proposes an end-to-end neural network architecture that focuses on computing
the missing geometry and merging the known input and the predicted point cloud.
Our method is composed of two neural networks: the missing part prediction
network and the merging-refinement network. The first module focuses on
extracting information from the incomplete input to infer the missing geometry.
The second module merges both point clouds and improves the distribution of the
points. Our experiments on the ShapeNet dataset show that our method
outperforms the state-of-the-art methods in point cloud completion. The code
for our methods and experiments is available at
\url{https://github.com/ivansipiran/Refinement-Point-Cloud-Completion}. | [
"cs.CV",
"cs.GR"
] |
We present Convolutional Oriented Boundaries (COB), which produces multiscale
oriented contours and region hierarchies starting from generic image
classification Convolutional Neural Networks (CNNs). COB is computationally
efficient, because it requires a single CNN forward pass for multi-scale
contour detection and it uses a novel sparse boundary representation for
hierarchical segmentation; it gives a significant leap in performance over the
state-of-the-art, and it generalizes very well to unseen categories and
datasets. Particularly, we show that learning to estimate not only contour
strength but also orientation provides more accurate results. We perform
extensive experiments for low-level applications on BSDS, PASCAL Context,
PASCAL Segmentation, and NYUD to evaluate boundary detection performance,
showing that COB provides state-of-the-art contours and region hierarchies in
all datasets. We also evaluate COB on high-level tasks when coupled with
multiple pipelines for object proposals, semantic contours, semantic
segmentation, and object detection on MS-COCO, SBD, and PASCAL; showing that
COB also improves the results for all tasks. | [
"cs.CV"
] |
A promising technique for discovering disease biomarkers is to measure the
relative protein abundance in multiple biofluid samples through liquid
chromatography with tandem mass spectrometry (LC-MS/MS) based quantitative
proteomics. The key step involves peptide feature detection in LC-MS map, along
with its charge and intensity. Existing heuristic algorithms suffer from
inaccurate parameters since different settings of the parameters result in
significantly different outcomes. Therefore, we propose PointIso to meet the need for an automated peptide feature detection system that determines the proper parameters itself and adapts easily to different types of datasets. It consists of an attention-based scanning step for
segmenting the multi-isotopic pattern of peptide features along with charge and
a sequence classification step for grouping those isotopes into potential
peptide features. PointIso is the first point cloud based, arbitrary-precision
deep learning network to address the problem and achieves 98% detection of high
quality MS/MS identifications in a benchmark dataset, which is higher than
several other widely used algorithms. Besides contributing to the proteomics
study, we believe our novel segmentation technique should serve the general
image processing domain as well. | [
"cs.CV",
"cs.LG",
"q-bio.QM"
] |
3D object recognition accuracy can be improved by learning the multi-scale
spatial features from 3D spatial geometric representations of objects such as
point clouds, 3D models, surfaces, and RGB-D data. Current deep learning
approaches learn such features either using structured data representations
(voxel grids and octrees) or from unstructured representations (graphs and
point clouds). Learning features from such structured representations is
limited by restrictions on resolution and tree depth, while unstructured representations create a challenge due to non-uniformity among data samples.
In this paper, we propose an end-to-end multi-level learning approach on a
multi-level voxel grid to overcome these drawbacks. To demonstrate the utility
of the proposed multi-level learning, we use a multi-level voxel representation
of 3D objects to perform object recognition. The multi-level voxel
representation consists of a coarse voxel grid that contains volumetric
information of the 3D object. In addition, each voxel in the coarse grid that
contains a portion of the object boundary is subdivided into multiple
fine-level voxel grids. The performance of our multi-level learning algorithm
for object recognition is comparable to dense voxel representations while using
significantly lower memory. | [
"cs.CV",
"stat.ML"
] |
The tradeoff between receptive field size and efficiency is a crucial issue
in low level vision. Plain convolutional networks (CNNs) generally enlarge the
receptive field at the expense of computational cost. Recently, dilated filtering has been adopted to address this issue, but it suffers from the gridding effect, and the resulting receptive field is only a sparse sampling of the input image with checkerboard patterns. In this paper, we present a novel multi-level wavelet CNN (MWCNN) model for a better tradeoff between receptive field size and computational efficiency. With a modified U-Net architecture, the wavelet transform is introduced to reduce the size of feature maps in the contracting subnetwork, and an additional convolutional layer is used to decrease the number of feature-map channels. In the expanding subnetwork, the inverse
wavelet transform is then deployed to reconstruct the high resolution feature
maps. Our MWCNN can also be explained as the generalization of dilated
filtering and subsampling, and can be applied to many image restoration tasks.
The experimental results clearly show the effectiveness of MWCNN for image
denoising, single image super-resolution, and JPEG image artifacts removal. | [
"cs.CV"
] |
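The MWCNN entry above turns on one concrete operation: an invertible wavelet transform that halves spatial resolution while quadrupling channels, so deeper layers see a larger receptive field at lower cost. Below is a minimal numpy sketch of the Haar analysis/synthesis pair it alludes to; MWCNN wires these into a U-Net rather than using them standalone.

```python
import numpy as np

def dwt_haar(x):
    """2D Haar DWT of a (C, H, W) feature map.
    Returns a (4C, H/2, W/2) map: spatial size halves and channels
    quadruple, as in MWCNN's contracting subnetwork."""
    a = x[:, 0::2, 0::2]; b = x[:, 1::2, 0::2]
    c = x[:, 0::2, 1::2]; d = x[:, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    hl = (-a - b + c + d) / 2
    lh = (-a + b - c + d) / 2
    hh = (a - b - c + d) / 2
    return np.concatenate([ll, hl, lh, hh], axis=0)

def idwt_haar(y):
    """Exact inverse, as used in the expanding subnetwork."""
    c4 = y.shape[0] // 4
    ll, hl, lh, hh = y[:c4], y[c4:2*c4], y[2*c4:3*c4], y[3*c4:]
    x = np.zeros((c4, y.shape[1] * 2, y.shape[2] * 2), dtype=y.dtype)
    x[:, 0::2, 0::2] = (ll - hl - lh + hh) / 2
    x[:, 1::2, 0::2] = (ll - hl + lh - hh) / 2
    x[:, 0::2, 1::2] = (ll + hl - lh - hh) / 2
    x[:, 1::2, 1::2] = (ll + hl + lh + hh) / 2
    return x

feat = np.random.randn(16, 64, 64)
assert np.allclose(idwt_haar(dwt_haar(feat)), feat)  # lossless round trip
```

Because the transform is invertible, no information is lost when shrinking the feature maps, which is exactly what distinguishes it from plain subsampling.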
Thanks to the increasing availability of drug-drug interactions (DDI)
datasets and large biomedical knowledge graphs (KGs), accurate detection of
adverse DDI using machine learning models becomes possible. However, how to effectively utilize large and noisy biomedical KGs for DDI detection remains largely an open problem. Due to the sheer size of and amount of noise in KGs, it is
often less beneficial to directly integrate KGs with other smaller but higher
quality data (e.g., experimental data). Most of the existing approaches ignore
KGs altogether. Some try to directly integrate KGs with other data via graph
neural networks with limited success. Furthermore, most previous works focus on
binary DDI prediction whereas the multi-typed DDI pharmacological effect
prediction is a more meaningful but harder task. To fill the gaps, we propose a
new method SumGNN: knowledge summarization graph neural network, which is
enabled by a subgraph extraction module that can efficiently anchor on relevant
subgraphs from a KG, a self-attention based subgraph summarization scheme to
generate a reasoning path within the subgraph, and a multi-channel knowledge
and data integration module that utilizes massive external biomedical knowledge
for significantly improved multi-typed DDI predictions. SumGNN outperforms the
best baseline by up to 5.54\%, and the performance gain is particularly
significant in low data relation types. In addition, SumGNN provides
interpretable prediction via the generated reasoning paths for each prediction. | [
"cs.LG",
"cs.CL",
"cs.IR",
"q-bio.QM"
] |
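The SumGNN entry above depends on a subgraph extraction module that anchors on the relevant neighborhood of a huge, noisy KG before any summarization happens. A generic sketch of k-hop anchoring with a node budget follows; the function name and budget heuristic are illustrative assumptions, not SumGNN's exact module.

```python
from collections import deque

def khop_subgraph(adj, anchors, k=2, max_nodes=200):
    """Breadth-first k-hop neighborhood around anchor nodes (e.g., the two
    drugs of a DDI pair) in a KG given as {node: [neighbors]}.
    A node budget keeps the subgraph small despite hub nodes."""
    seen = set(anchors)
    frontier = deque((a, 0) for a in anchors)
    while frontier and len(seen) < max_nodes:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nb in adj.get(node, []):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    # return the induced adjacency of the extracted nodes
    return {u: [v for v in adj.get(u, []) if v in seen] for u in seen}

kg = {"drugA": ["geneX"], "drugB": ["geneX", "pathwayY"],
      "geneX": ["drugA", "drugB"], "pathwayY": ["drugB"]}
print(khop_subgraph(kg, ["drugA", "drugB"], k=1))
```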
Vehicle re-identification (re-ID) matches images of the same vehicle across
different cameras. It is fundamentally challenging because the dramatically
different appearance caused by different viewpoints would make the framework
fail to match two vehicles of the same identity. Most existing works solved the
problem by extracting viewpoint-aware feature via spatial attention mechanism,
which, yet, usually suffers from noisy generated attention map or otherwise
requires expensive keypoint labels to improve the quality. In this work, we
propose Viewpoint-aware Channel-wise Attention Mechanism (VCAM) by observing
the attention mechanism from a different aspect. Our VCAM enables the feature-learning framework to reweigh the importance of each feature map channel-wise according to the "viewpoint" of the input vehicle. Extensive experiments validate the effectiveness of the proposed method and show that we perform favorably against state-of-the-art methods on the public VeRi-776 dataset and
obtain promising results on the 2020 AI City Challenge. We also conduct other
experiments to demonstrate the interpretability of how our VCAM practically
assists the learning framework. | [
"cs.CV"
] |
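The VCAM entry above describes attention that is channel-wise rather than spatial, driven by the input vehicle's viewpoint. A hedged PyTorch sketch of that idea follows; the MLP shape and the form of the viewpoint embedding are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ViewpointChannelAttention(nn.Module):
    """Map a viewpoint embedding to per-channel weights that rescale
    the feature maps (sketch of the VCAM idea)."""
    def __init__(self, channels, view_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(view_dim, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())

    def forward(self, feat, view_emb):
        # feat: (B, C, H, W); view_emb: (B, view_dim)
        w = self.mlp(view_emb)              # (B, C) channel importances
        return feat * w[:, :, None, None]   # reweigh each channel

feat = torch.randn(2, 256, 16, 16)
view = torch.randn(2, 8)
print(ViewpointChannelAttention(256, 8)(feat, view).shape)
```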
The expressive power of graph neural network formalisms is commonly measured
by their ability to distinguish graphs. For many formalisms, the k-dimensional
Weisfeiler-Leman (k-WL) graph isomorphism test is used as a yardstick. In this
paper we consider the expressive power of kth-order invariant (linear) graph
networks (k-IGNs). It is known that k-IGNs are expressive enough to simulate
k-WL. This means that for any two graphs that can be distinguished by k-WL, one
can find a k-IGN which also distinguishes those graphs. The question remains
whether k-IGNs can distinguish more graphs than k-WL. This was recently shown
to be false for k=2. Here, we generalise this result to arbitrary k. In other
words, we show that k-IGNs are bounded in expressive power by k-WL. This
implies that k-IGNs and k-WL are equally powerful in distinguishing graphs. | [
"cs.LG",
"math.CO",
"stat.ML"
] |
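The k-WL yardstick in the entry above generalizes classic colour refinement. For intuition, here is a minimal 1-WL (the k=1 case) implementation, run jointly on two graphs so that their colour histograms are directly comparable.

```python
def wl_refine(graphs, rounds=3):
    """1-WL colour refinement on a list of graphs ({node: [neighbours]}).
    Colours are relabelled from a shared table so they are comparable
    across graphs."""
    cols = [{v: 0 for v in g} for g in graphs]
    for _ in range(rounds):
        sigs = [{v: (c[v], tuple(sorted(c[u] for u in g[v]))) for v in g}
                for g, c in zip(graphs, cols)]
        table = {s: i for i, s in enumerate(
            sorted({s for d in sigs for s in d.values()}))}
        cols = [{v: table[d[v]] for v in d} for d in sigs]
    return [sorted(c.values()) for c in cols]

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path3 = {0: [1], 1: [0, 2], 2: [1]}
h1, h2 = wl_refine([triangle, path3])
print(h1 != h2)  # True: 1-WL distinguishes these two graphs
```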
Among the various generative adversarial network (GAN)-based image inpainting
methods, a coarse-to-fine network with a contextual attention module (CAM) has
shown remarkable performance. However, owing to its two stacked generative networks, the coarse-to-fine network requires substantial computational resources, such as convolution operations and network parameters, which results in low speed. To address this problem, we propose a novel network architecture called
PEPSI: parallel extended-decoder path for semantic inpainting network, which
aims at reducing the hardware costs and improving the inpainting performance.
PEPSI consists of a single shared encoding network and parallel decoding
networks called coarse and inpainting paths. The coarse path produces a
preliminary inpainting result to train the encoding network for the prediction
of features for the CAM. Simultaneously, the inpainting path generates higher
inpainting quality using the refined features reconstructed via the CAM. In
addition, we propose Diet-PEPSI that significantly reduces the network
parameters while maintaining the performance. In Diet-PEPSI, to capture the
global contextual information with low hardware costs, we propose novel
rate-adaptive dilated convolutional layers, which employ the common weights but
produce dynamic features depending on the given dilation rates. Extensive
experiments comparing the performance with state-of-the-art image inpainting
methods demonstrate that both PEPSI and Diet-PEPSI improve the quantitative scores, i.e., the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), as well as significantly reduce hardware costs such as computational
time and the number of network parameters. | [
"cs.CV",
"eess.IV"
] |
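The Diet-PEPSI entry above hinges on rate-adaptive dilated convolutions that reuse common weights across all dilation rates, so parameters do not grow with the number of rates. The PyTorch sketch below shows that sharing idea; the per-rate affine modulation is an illustrative guess at how rate-dependent features can still emerge from shared weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RateAdaptiveDilatedConv(nn.Module):
    """One shared 3x3 kernel applied at several dilation rates, with a
    tiny per-rate (scale, shift) modulation (sketch, not the exact layer)."""
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.02)
        self.rates = rates
        self.scale = nn.Parameter(torch.ones(len(rates), channels))
        self.shift = nn.Parameter(torch.zeros(len(rates), channels))

    def forward(self, x):
        outs = []
        for i, r in enumerate(self.rates):
            # padding = dilation keeps the spatial size for a 3x3 kernel
            y = F.conv2d(x, self.weight, padding=r, dilation=r)
            outs.append(y * self.scale[i, :, None, None]
                        + self.shift[i, :, None, None])
        return sum(outs) / len(outs)

x = torch.randn(1, 32, 64, 64)
print(RateAdaptiveDilatedConv(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```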
Recent years have witnessed an upsurge of research interests and applications
of machine learning on graphs. Automated machine learning (AutoML) on graphs is
on the horizon to automatically design the optimal machine learning algorithm
for a given graph task. However, none of the existing libraries can fully
support AutoML on graphs. To fill this gap, we present Automated Graph Learning
(AutoGL), the first library for automated machine learning on graphs. AutoGL is
open-source, easy to use, and flexible to be extended. Specifically, we propose
an automated machine learning pipeline for graph data containing four modules:
auto feature engineering, model training, hyper-parameter optimization, and
auto ensemble. For each module, we provide numerous state-of-the-art methods
and flexible base classes and APIs, which allow easy customization. We further
provide experimental results to showcase the usage of our AutoGL library. | [
"cs.LG",
"cs.AI"
] |
The ability of a graph neural network (GNN) to leverage both the graph
topology and graph labels is fundamental to building discriminative node and
graph embeddings. Building on previous work, we theoretically show that edGNN,
our model for directed labeled graphs, is as powerful as the Weisfeiler-Lehman
algorithm for graph isomorphism. Our experiments support our theoretical
findings, confirming that graph neural networks can be used effectively for
inference problems on directed graphs with both node and edge labels. Code
available at https://github.com/guillaumejaume/edGNN. | [
"cs.LG",
"stat.ML"
] |
Fine-grained location prediction on smart phones can be used to improve
app/system performance. Application scenarios include video quality adaptation
as a function of the 5G network quality at predicted user locations, and
augmented reality apps that speed up content rendering based on predicted user
locations. Such use cases require prediction error in the same range as the GPS
error, and no existing works on location prediction can achieve this level of
accuracy. We present a system for fine-grained location prediction (FGLP) of
mobile users, based on GPS traces collected on the phones. FGLP has two
components: a federated learning framework and a prediction model. The
framework runs on the phones of the users and also on a server that coordinates
learning from all users in the system. FGLP represents the user location data
as relative points in an abstract 2D space, which enables learning across
different physical spaces. The model merges Bidirectional Long Short-Term
Memory (BiLSTM) and Convolutional Neural Networks (CNN), where BiLSTM learns
the speed and direction of the mobile users, and CNN learns information such as
user movement preferences. FGLP uses federated learning to protect user privacy
and reduce bandwidth consumption. Our experimental results, using a dataset
with over 600,000 users, demonstrate that FGLP outperforms baseline models in
terms of prediction accuracy. We also demonstrate that FGLP works well in
conjunction with transfer learning, which enables model reusability. Finally,
benchmark results on several types of Android phones demonstrate FGLP's
feasibility in real life. | [
"cs.LG",
"cs.SY",
"eess.SY"
] |
We present a method for learning generalized Hamiltonian decompositions of
ordinary differential equations given a set of noisy time series measurements.
Our method simultaneously learns a continuous time model and a scalar energy
function for a general dynamical system. Learning predictive models in this
form allows one to place strong, high-level, physics inspired priors onto the
form of the learnt governing equations for general dynamical systems. Moreover,
having shown how our method extends and unifies some previous work in deep
learning with physics inspired priors, we present a novel method for learning
continuous time models from the weak form of the governing equations which is
less computationally taxing than standard adjoint methods. | [
"cs.LG"
] |
In this paper, we propose a speed-up approach for subclass discriminant
analysis and formulate a novel efficient multi-view solution to it. The
speed-up approach is developed based on graph embedding and spectral regression
approaches that involve eigendecomposition of the corresponding Laplacian
matrix and regression to its eigenvectors. We show that by exploiting the
structure of the between-class Laplacian matrix, the eigendecomposition step
can be substituted with a much faster process. Furthermore, we formulate a
novel criterion for multi-view subclass discriminant analysis and show that an
efficient solution for it can be obtained in a manner similar to the single-view case. We evaluate the proposed methods on nine single-view and nine
multi-view datasets and compare them with related existing approaches.
Experimental results show that the proposed solutions achieve competitive
performance, often outperforming the existing methods. At the same time, they
significantly decrease the training time. | [
"cs.LG",
"stat.ML"
] |
Recent work has shown that a variety of semantics emerge in the latent space
of Generative Adversarial Networks (GANs) when being trained to synthesize
images. However, it is difficult to use these learned semantics for real image
editing. A common practice of feeding a real image to a trained GAN generator
is to invert it back to a latent code. However, existing inversion methods
typically focus on reconstructing the target image by pixel values yet fail to
land the inverted code in the semantic domain of the original latent space. As
a result, the reconstructed image cannot well support semantic editing through
varying the inverted code. To solve this problem, we propose an in-domain GAN
inversion approach, which not only faithfully reconstructs the input image but
also ensures the inverted code to be semantically meaningful for editing. We
first learn a novel domain-guided encoder to project a given image to the
native latent space of GANs. We then propose domain-regularized optimization by
involving the encoder as a regularizer to fine-tune the code produced by the
encoder and better recover the target image. Extensive experiments suggest that
our inversion method achieves satisfying real image reconstruction and more
importantly facilitates various image editing tasks, significantly
outperforming the state of the art. | [
"cs.CV"
] |
Transformer neural networks have achieved state-of-the-art results for
unstructured data such as text and images but their adoption for
graph-structured data has been limited. This is partly due to the difficulty of
incorporating complex structural information in the basic transformer
framework. We propose a simple yet powerful extension to the transformer -
residual edge channels. The resultant framework, which we call Edge-augmented
Graph Transformer (EGT), can directly accept, process and output structural
information as well as node information. It allows us to use global
self-attention, the key element of transformers, directly for graphs and comes
with the benefit of long-range interaction among nodes. Moreover, the edge
channels allow the structural information to evolve from layer to layer, and
prediction tasks on edges/links can be performed directly from the output
embeddings of these channels. In addition, we introduce a generalized
positional encoding scheme for graphs based on Singular Value Decomposition
which can improve the performance of EGT. Our framework, which relies on global
node feature aggregation, achieves better performance compared to
Convolutional/Message-Passing Graph Neural Networks, which rely on local
feature aggregation within a neighborhood. We verify the performance of EGT in
a supervised learning setting on a wide range of experiments on benchmark
datasets. Our findings indicate that convolutional aggregation is not an
essential inductive bias for graphs and global self-attention can serve as a
flexible and adaptive alternative. | [
"cs.LG"
] |
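The EGT entry above mentions a generalized positional encoding for graphs based on Singular Value Decomposition. A minimal numpy sketch: factor the adjacency matrix and give each node its scaled top-r singular vectors. The exact scaling and sign handling in EGT may differ.

```python
import numpy as np

def svd_positional_encoding(A, r=4):
    """Top-r SVD of the adjacency matrix as node positional encodings:
    concatenate the left and right singular vectors, each scaled by
    sqrt(singular value)."""
    U, S, Vt = np.linalg.svd(A)
    scale = np.sqrt(S[:r])
    return np.concatenate([U[:, :r] * scale, Vt[:r].T * scale], axis=1)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(svd_positional_encoding(A, r=2).shape)  # (4, 4): 2r dims per node
```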
Sequential matching using hand-crafted heuristics has been standard practice
in route-based place recognition for enhancing pairwise similarity results for
nearly a decade. However, the precision-recall performance of these algorithms degrades dramatically when searching over short temporal window (TW) lengths, while demanding high compute and storage costs on large robotic datasets for
autonomous navigation research. Here, influenced by biological systems that
robustly navigate spacetime scales even without vision, we develop a joint
visual and positional representation learning technique, via a sequential
process, and design a learning-based CNN+LSTM architecture, trainable via
backpropagation through time, for viewpoint- and appearance-invariant place
recognition. Our approach, Sequential Place Learning (SPL), is based on a CNN
function that visually encodes an environment from a single traversal, thus
reducing storage capacity, while an LSTM temporally fuses each visual embedding
with corresponding positional data -- obtained from any source of motion
estimation -- for direct sequential inference. Contrary to classical two-stage
pipelines, e.g., match-then-temporally-filter, our network directly eliminates
false-positive rates while jointly learning sequence matching from a single
monocular image sequence, even using short TWs. Hence, we demonstrate that our
model outperforms 15 classical methods while setting new state-of-the-art
performance standards on 4 challenging benchmark datasets, where one of them
can be considered solved with recall rates of 100% at 100% precision, correctly
matching all places under extreme sunlight-darkness changes. In addition, we
show that SPL can be up to 70x faster to deploy than classical methods on a 729
km route comprising 35,768 consecutive frames. Extensive experiments
demonstrate the... Baseline code available at
https://github.com/mchancan/deepseqslam | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] |
In this paper, we propose a novel end-to-end model, namely Single-Stage
Grounding network (SSG), to localize the referent given a referring expression
within an image. Different from previous multi-stage models which rely on
object proposals or detected regions, our proposed model aims to comprehend a
referring expression in a single stage without resorting to region proposals or the subsequent region-wise feature extraction.
Specifically, a multimodal interactor is proposed to summarize the local region
features regarding the referring expression attentively. Subsequently, a
grounder is proposed to localize the referring expression within the given
image directly. For further improving the localization accuracy, a guided
attention mechanism is proposed to enforce the grounder to focus on the central
region of the referent. Moreover, by exploiting and predicting visual attribute
information, the grounder can further distinguish the referent objects within
an image and thereby improve the model performance. Experiments on RefCOCO,
RefCOCO+, and RefCOCOg datasets demonstrate that our proposed SSG without
relying on any region proposals can achieve comparable performance with other
advanced models. Furthermore, our SSG outperforms the previous models and
achieves state-of-the-art performance on the ReferItGame dataset. More importantly, our SSG is time efficient and can ground a referring expression in a 416×416 image from the RefCOCO dataset in 25 ms (40 referents per second) on average with an Nvidia Tesla P40, accomplishing a more than 9× speedup over the
existing multi-stage models. | [
"cs.CV"
] |
Despite great efforts, neural networks are still prone to adversarial
attacks. Recent work has shown that adversarial perturbations typically contain
high-frequency features, but the root cause of this phenomenon remains unknown.
Inspired by the theoretical work in linear full-width convolutional models
(Gunasekar et al., 2018), we hypothesize that the nonlinear local (i.e.
bounded-width) convolutional models used in practice are implicitly biased to
learn high frequency features, and that this is the root cause of high
frequency adversarial examples. To test this hypothesis, we analyzed the impact
of different choices of linear and nonlinear architectures on the implicit bias
of the learned features and the adversarial perturbations, in both spatial and
frequency domains. We find that the high-frequency adversarial perturbations
are critically dependent on the convolution operation in two ways: (i) the
translation invariance of the convolution induces an implicit bias towards
sparsity in the frequency domain; and (ii) the spatially-limited nature of
local convolutions induces an implicit bias towards high frequency features.
The explanation for the latter involves the Fourier Uncertainty Principle: a
spatially-limited (local in the space domain) filter cannot also be
frequency-limited (local in the frequency domain). Furthermore, using larger
convolution kernel sizes or avoiding convolutions altogether (e.g. by using
Visual Transformers architecture) significantly reduces this high frequency
bias, but not the overall susceptibility to attacks. Looking forward, our work
strongly suggests that understanding and controlling the implicit bias of
architectures will be essential for achieving adversarial robustness. | [
"stat.ML",
"cs.LG"
] |
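The entry above reasons throughout about where perturbation energy sits in the frequency domain. A small numpy helper for exactly that kind of analysis is sketched below (the radially averaged power spectrum of a 2D perturbation), with a toy low- vs. high-frequency comparison; this is illustrative, not the paper's measurement protocol.

```python
import numpy as np

def radial_power_spectrum(delta):
    """Radially averaged power spectrum of a 2D array. Mass concentrated
    at large radii means the signal is high-frequency."""
    F = np.fft.fftshift(np.fft.fft2(delta))
    power = np.abs(F) ** 2
    h, w = delta.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    return sums / np.maximum(counts, 1)

rng = np.random.default_rng(0)
smooth = rng.standard_normal((32, 32)).cumsum(0).cumsum(1)  # low-frequency
checker = np.indices((32, 32)).sum(0) % 2 - 0.5             # high-frequency
print(radial_power_spectrum(smooth)[:4])    # energy at small radii
print(radial_power_spectrum(checker)[-4:])  # energy at large radii
```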
Electroencephalography (EEG) headsets are the most commonly used sensing
devices for Brain-Computer Interface. In real-world applications, there are
advantages to extrapolating data from one user session to another. However,
these advantages are limited if the data arise from different hardware systems,
which often vary between application spaces. Currently, this creates a need to
recalibrate classifiers, which negatively affects people's interest in using
such systems. In this paper, we employ active weighted adaptation
regularization (AwAR), which integrates weighted adaptation regularization
(wAR) and active learning, to expedite the calibration process. wAR makes use
of labeled data from the previous headset and handles class-imbalance, and
active learning selects the most informative samples from the new headset to
label. Experiments on single-trial event-related potential classification show
that AwAR can significantly increase the classification accuracy, given the
same number of labeled samples from the new headset. In other words, AwAR can
effectively reduce the number of labeled samples required from the new headset,
given a desired classification accuracy, suggesting value in collating data for
use in wide scale transfer-learning applications. | [
"cs.LG",
"cs.HC"
] |
Recently, Convolutional Neural Networks (CNNs) have shown promising
performance in super-resolution (SR). However, these methods operate primarily
on Low Resolution (LR) inputs for memory efficiency but this limits, as we
demonstrate, their ability to (i) model high frequency information; and (ii)
smoothly translate from LR to High Resolution (HR) space. To this end, we
propose a novel Incremental Residual Learning (IRL) framework to address these
mentioned issues. In IRL, first we select a typical SR pre-trained network as a
master branch. Next we sequentially train and add residual branches to the main
branch, where each residual branch is learned to model accumulated residuals of
all previous branches. We plug state of the art methods in IRL framework and
demonstrate consistent performance improvement on public benchmark datasets to
set a new state of the art for SR at only approximately 20% increase in
training time. | [
"cs.CV"
] |
Recently, there has been an increasing number of efforts to introduce models
capable of generating natural language explanations (NLEs) for their
predictions on vision-language (VL) tasks. Such models are appealing, because
they can provide human-friendly and comprehensive explanations. However, there
is a lack of comparison between existing methods, which is due to a lack of
re-usable evaluation frameworks and a scarcity of datasets. In this work, we
introduce e-ViL and e-SNLI-VE. e-ViL is a benchmark for explainable
vision-language tasks that establishes a unified evaluation framework and
provides the first comprehensive comparison of existing approaches that
generate NLEs for VL tasks. It spans four models and three datasets and both
automatic metrics and human evaluation are used to assess model-generated
explanations. e-SNLI-VE is currently the largest existing VL dataset with NLEs
(over 430k instances). We also propose a new model that combines UNITER, which
learns joint embeddings of images and text, and GPT-2, a pre-trained language
model that is well-suited for text generation. It surpasses the previous state
of the art by a large margin across all datasets. Code and data are available
here: https://github.com/maximek3/e-ViL. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
Autonomous vehicles are conceived to provide safe and secure services by
validating the safety standards as indicated by SOTIF-ISO/PAS-21448 (Safety of
the intended functionality). Keeping in this context, the perception of the
environment plays an instrumental role in conjunction with localization,
planning and control modules. As a pivotal algorithm in the perception stack,
object detection provides extensive insights into the autonomous vehicle's
surroundings. Camera and Lidar are extensively utilized for object detection
among different sensor modalities, but these exteroceptive sensors have
limitations in resolution and adverse weather conditions. In this work, radar-based object detection is explored as a complementary sensor modality that can be deployed and used in adverse weather conditions. Since radar produces complex data, a channel-boosting feature ensemble method with a transformer encoder-decoder network is proposed. The object detection task
using radar is formulated as a set prediction problem and evaluated on the
publicly available dataset in both good and good-bad weather conditions. The
proposed method's efficacy is extensively evaluated using the COCO evaluation
metric, and the best-proposed model surpasses its state-of-the-art counterpart
method by $12.55\%$ and $12.48\%$ in both good and good-bad weather conditions. | [
"cs.CV"
] |
Multi-agent reinforcement learning (MARL) has been increasingly explored to
learn the cooperative policy towards maximizing a certain global reward. Many
existing studies take advantage of graph neural networks (GNN) in MARL to
propagate critical collaborative information over the interaction graph, built
upon inter-connected agents. Nevertheless, the vanilla GNN approach exhibits
substantial defects in dealing with complex real-world scenarios since the
generic message passing mechanism is ineffective between heterogeneous vertices
and, moreover, simple message aggregation functions are incapable of accurately
modeling the combinational interactions from multiple neighbors. While adopting
complex GNN models with more informative message passing and aggregation
mechanisms can obviously benefit heterogeneous vertex representations and
cooperative policy learning, it could, on the other hand, increase the training
difficulty of MARL and demand more intense and direct reward signals compared
to the original global reward. To address these challenges, we propose a new
cooperative learning framework with pre-trained heterogeneous observation
representations. Particularly, we employ an encoder-decoder based graph
attention to learn the intricate interactions and heterogeneous representations
that can be more easily leveraged by MARL. Moreover, we design a pre-training
with local actor-critic algorithm to ease the difficulty in cooperative policy
learning. Extensive experiments over real-world scenarios demonstrate that our
new approach can significantly outperform existing MARL baselines as well as
operational research solutions that are widely-used in industry. | [
"cs.LG",
"cs.AI"
] |
Pretraining on large labeled datasets is a prerequisite to achieve good
performance in many computer vision tasks like 2D object recognition, video
classification etc. However, pretraining is not widely used for 3D recognition
tasks where state-of-the-art methods train models from scratch. A primary
reason is the lack of large annotated datasets because 3D data is both
difficult to acquire and time consuming to label. We present a simple
self-supervised pretraining method that can work with any 3D data - single or
multiview, indoor or outdoor, acquired by varied sensors, without 3D
registration. We pretrain standard point cloud and voxel based model
architectures, and show that joint pretraining further improves performance. We
evaluate our models on 9 benchmarks for object detection, semantic
segmentation, and object classification, where they achieve state-of-the-art
results and can outperform supervised pretraining. We set a new
state-of-the-art for object detection on ScanNet (69.0% mAP) and SUNRGBD (63.5%
mAP). Our pretrained models are label efficient and improve performance for
classes with few examples. | [
"cs.CV"
] |
Detection of moving objects is a very important task in autonomous driving
systems. After the perception phase, motion planning is typically performed in
Bird's Eye View (BEV) space. This would require projection of objects detected
on the image plane to top view BEV plane. Such a projection is prone to errors
due to lack of depth information and noisy mapping in far away areas. CNNs can
leverage the global context in the scene to project better. In this work, we
explore end-to-end Moving Object Detection (MOD) on the BEV map directly using
monocular images as input. To the best of our knowledge, such a dataset does
not exist and we create an extended KITTI-raw dataset consisting of 12.9k
images with annotations of moving object masks in BEV space for five classes.
The dataset is intended to be used for class agnostic motion cue based object
detection and classes are provided as meta-data for better tuning. We design
and implement a two-stream RGB and optical flow fusion architecture which
outputs motion segmentation directly in BEV space. We compare it with inverse
perspective mapping of state-of-the-art motion segmentation predictions on the
image plane. We observe a significant improvement of 13% in mIoU using the
simple baseline implementation. This demonstrates the ability to directly learn
motion segmentation output in BEV space. Qualitative results of our baseline
and the dataset annotations can be found in
https://sites.google.com/view/bev-modnet. | [
"cs.CV",
"cs.RO"
] |
Object detection models shipped with camera-equipped edge devices cannot
cover the objects of interest for every user. Therefore, the incremental
learning capability is a critical feature for a robust and personalized object
detection system that many applications would rely on. In this paper, we
present an efficient yet practical system, RILOD, to incrementally train an
existing object detection model such that it can detect new object classes
without losing its capability to detect old classes. The key component of RILOD
is a novel incremental learning algorithm that trains end-to-end for one-stage
deep object detection models only using training data of new object classes.
Specifically to avoid catastrophic forgetting, the algorithm distills three
types of knowledge from the old model to mimic the old model's behavior on
object classification, bounding box regression and feature extraction. In
addition, since the training data for the new classes may not be available, a
real-time dataset construction pipeline is designed to collect training images
on-the-fly and automatically label the images with both category and bounding
box annotations. We have implemented RILOD under both edge-cloud and edge-only
setups. Experiment results show that the proposed system can learn to detect a
new object class in just a few minutes, including both dataset construction and
model training. In comparison, traditional fine-tuning based method may take a
few hours for training, and in most cases would also need a tedious and costly
manual dataset labeling step. | [
"cs.CV",
"cs.AI",
"stat.ML"
] |
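The RILOD entry above names three distillation targets: old-class classification, bounding-box regression, and feature extraction. A hedged PyTorch sketch of a three-term distillation loss follows; the head names, tensor shapes, and the MSE/smooth-L1 choices are assumptions rather than the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def distill_losses(new_out, old_out):
    """Sum of the three knowledge-distillation terms described above:
    the new detector mimics the old one on (i) old-class logits,
    (ii) box regression, and (iii) intermediate features."""
    cls_loss = F.mse_loss(new_out["cls_logits_old"], old_out["cls_logits"])
    box_loss = F.smooth_l1_loss(new_out["boxes_old"], old_out["boxes"])
    feat_loss = F.mse_loss(new_out["features"], old_out["features"])
    return cls_loss + box_loss + feat_loss

old = {"cls_logits": torch.randn(8, 20), "boxes": torch.randn(8, 4),
       "features": torch.randn(8, 256)}
new = {"cls_logits_old": old["cls_logits"] + 0.1 * torch.randn(8, 20),
       "boxes_old": old["boxes"], "features": old["features"]}
print(distill_losses(new, old))
```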
We developed "Comicolorization", a semi-automatic colorization system for
manga images. Given a monochrome manga and reference images as inputs, our
system generates a plausible color version of the manga. This is the first work
to address the colorization of an entire manga title (a set of manga pages).
Our method colorizes a whole page (not a single panel) semi-automatically, with
the same color for the same character across multiple panels. To colorize the
target character by the color from the reference image, we extract a color
feature from the reference and feed it to the colorization network to help the
colorization. Our approach employs adversarial loss to encourage the effect of
the color features. Optionally, our tool allows users to revise the
colorization result interactively. By feeding the color features to our deep
colorization network, we accomplish colorization of the entire manga using the
desired colors for each panel. | [
"cs.CV",
"cs.GR"
] |
Recently, many unsupervised deep learning methods have been proposed to learn
clustering with unlabelled data. By introducing data augmentation, most of the
latest methods look into deep clustering from the perspective that the original
image and its transformation should share similar semantic clustering
assignment. However, the representation features could be quite different even when they are assigned to the same cluster, since the softmax function is only sensitive to the maximum value. This may result in high intra-class diversity in the representation feature space, which leads to unstable local optima and thus harms the clustering performance. To address this drawback, we propose
Deep Robust Clustering (DRC). Different from existing methods, DRC looks into
deep clustering from two perspectives of both semantic clustering assignment
and representation features, which can increase inter-class diversity and decrease intra-class diversity simultaneously. Furthermore, we summarize a general framework that can turn any mutual-information maximization into contrastive-loss minimization by investigating the internal relationship between mutual information and contrastive learning, and we successfully apply it in DRC to learn invariant features and robust clusters. Extensive experiments on
six widely-adopted deep clustering benchmarks demonstrate the superiority of
DRC in both stability and accuracy. e.g., attaining 71.6% mean accuracy on
CIFAR-10, which is 7.1% higher than state-of-the-art results. | [
"cs.CV"
] |
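The DRC entry above invokes the standard bridge between maximizing mutual information and minimizing a contrastive loss. A minimal InfoNCE sketch makes that bridge concrete; this is the generic formulation, not DRC's specific heads.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE between two augmented views: minimizing it maximizes a
    lower bound on the mutual information between the views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))   # positives on the diagonal
    return F.cross_entropy(logits, targets)

z_a = torch.randn(64, 128)
z_b = z_a + 0.1 * torch.randn(64, 128)   # a perturbed "augmented" view
print(info_nce(z_a, z_b))
```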
For many systems in science and engineering, the governing differential
equation is either not known or known in an approximate sense. Analyses and
design of such systems are governed by data collected from the field and/or
laboratory experiments. This challenging scenario is further worsened when
data-collection is expensive and time-consuming. To address this issue, this
paper presents a novel multi-fidelity physics informed deep neural network
(MF-PIDNN). The framework proposed is particularly suitable when the physics of
the problem is known in an approximate sense (low-fidelity physics) and only a
few high-fidelity data are available. MF-PIDNN blends physics informed and
data-driven deep learning techniques by using the concept of transfer learning.
The approximate governing equation is first used to train a low-fidelity
physics informed deep neural network. This is followed by transfer learning
where the low-fidelity model is updated by using the available high-fidelity
data. MF-PIDNN is able to encode useful information on the physics of the
problem from the {\it approximate} governing differential equation and hence,
provides accurate prediction even in zones with no data. Additionally, no
low-fidelity data is required for training this model. Applicability and
utility of MF-PIDNN are illustrated in solving four benchmark reliability
analysis problems. Case studies to illustrate interesting features of the
proposed approach are also presented. | [
"cs.LG",
"physics.comp-ph",
"stat.ML"
] |
Deep learning-based video salient object detection has recently achieved
great success with its performance significantly outperforming any other
unsupervised methods. However, existing data-driven approaches heavily rely on
a large quantity of pixel-wise annotated video frames to deliver such promising
results. In this paper, we address the semi-supervised video salient object
detection task using pseudo-labels. Specifically, we present an effective video
saliency detector that consists of a spatial refinement network and a
spatiotemporal module. Based on the same refinement network and motion
information in terms of optical flow, we further propose a novel method for
generating pixel-level pseudo-labels from sparsely annotated frames. By
utilizing the generated pseudo-labels together with a part of manual
annotations, our video saliency detector learns spatial and temporal cues for
both contrast inference and coherence enhancement, thus producing accurate
saliency maps. Experimental results demonstrate that our proposed
semi-supervised method even greatly outperforms all the state-of-the-art fully
supervised methods across three public benchmarks of VOS, DAVIS, and FBMS. | [
"cs.CV"
] |
In this short paper, a neural network that is able to form a low dimensional
topological hidden representation is explained. The neural network can be
trained as an autoencoder, a classifier or mix of both, and produces different
low dimensional topological map for each of them. When it is trained as an
autoencoder, the inherent topological structure of the data can be visualized,
while when it is trained as a classifier, the topological structure is further constrained by the concept, for example the labels of the data, hence the visualization is not only structural but also conceptual. The proposed neural network differs significantly from many dimensionality reduction models, primarily in its ability to perform both supervised and unsupervised dimensionality reduction. The neural network allows multi-perspective visualization of the data, thus giving more flexibility in data analysis. This paper is
supported by preliminary but intuitive visualization experiments. | [
"cs.LG",
"cs.CV",
"cs.NE",
"stat.ML"
] |
To be truly understandable and accepted by Deaf communities, an automatic
Sign Language Production (SLP) system must generate a photo-realistic signer.
Prior approaches based on graphical avatars have proven unpopular, whereas
recent neural SLP works that produce skeleton pose sequences have been shown not to be understandable to Deaf viewers.
In this paper, we propose SignGAN, the first SLP model to produce
photo-realistic continuous sign language videos directly from spoken language.
We employ a transformer architecture with a Mixture Density Network (MDN)
formulation to handle the translation from spoken language to skeletal pose. A
pose-conditioned human synthesis model is then introduced to generate a
photo-realistic sign language video from the skeletal pose sequence. This
allows the photo-realistic production of sign videos directly translated from
written text.
We further propose a novel keypoint-based loss function, which significantly
improves the quality of synthesized hand images, operating in the keypoint
space to avoid issues caused by motion blur. In addition, we introduce a method
for controllable video generation, enabling training on large, diverse sign
language datasets and providing the ability to control the signer appearance at
inference.
Using a dataset of eight different sign language interpreters extracted from
broadcast footage, we show that SignGAN significantly outperforms all baseline
methods for quantitative metrics and human perceptual studies. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
Predictive uncertainty estimation is an essential next step for the reliable
deployment of deep object detectors in safety-critical tasks. In this work, we
focus on estimating predictive distributions for bounding box regression output
with variance networks. We show that in the context of object detection,
training variance networks with negative log likelihood (NLL) can lead to high
entropy predictive distributions regardless of the correctness of the output
mean. We propose to use the energy score as a non-local proper scoring rule and
find that when used for training, the energy score leads to better calibrated
and lower entropy predictive distributions than NLL. We also address the
widespread use of non-proper scoring metrics for evaluating predictive
distributions from deep object detectors by proposing an alternate evaluation
approach founded on proper scoring rules. Using the proposed evaluation tools,
we show that although variance networks can be used to produce high quality
predictive distributions, ad-hoc approaches used by seminal object detectors
for choosing regression targets during training do not provide wide enough data
support for reliable variance learning. We hope that our work helps shift
evaluation in probabilistic object detection to better align with predictive
uncertainty evaluation in other machine learning domains. Code for all models,
evaluation, and datasets is available at:
https://github.com/asharakeh/probdet.git. | [
"cs.CV",
"stat.ML"
] |
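The entry above trains and evaluates with the energy score, a non-local proper scoring rule. A Monte-Carlo numpy sketch of the score for a sampled predictive distribution over box coordinates follows; the dimensionality and the Gaussian sampling are illustrative.

```python
import numpy as np

def energy_score(samples, y):
    """ES(P, y) ~ mean ||X - y|| - 0.5 * mean ||X - X'||, estimated from
    samples (m, d) of the predictive distribution; lower is better."""
    m = samples.shape[0]
    term1 = np.mean(np.linalg.norm(samples - y, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.linalg.norm(diffs, axis=-1).sum() / (m * (m - 1))
    return term1 - 0.5 * term2

rng = np.random.default_rng(0)
y = np.array([10., 20., 50., 80.])              # observed box (x1,y1,x2,y2)
sharp = rng.normal(y, 1.0, size=(256, 4))       # low-entropy prediction
diffuse = rng.normal(y, 10.0, size=(256, 4))    # high-entropy prediction
print(energy_score(sharp, y) < energy_score(diffuse, y))  # True
```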
Puerto Rico suffered severe damage from the category 5 hurricane (Maria) in
September 2017. Total monetary damages are estimated to be ~92 billion USD, the
third most costly tropical cyclone in US history. The response to this damage
has been tempered and slow moving, with recent estimates placing 45% of the
population without power three months after the storm. Consequently, we
developed a unique data-fusion mapping approach called the Urban Development
Index (UDI) and new open source tool, Comet Time Series (CometTS), to analyze
the recovery of electricity and infrastructure in Puerto Rico. Our approach
incorporates a combination of time series visualizations and change detection
mapping to create depictions of power or infrastructure loss. It also provides
a unique independent assessment of areas that are still struggling to recover.
For this workflow, our time series approach combines nighttime imagery from the
Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer
Suite (NPP VIIRS), multispectral imagery from two Landsat satellites, US Census
data, and crowd-sourced building footprint labels. Based upon our approach we
can identify and evaluate: 1) the recovery of electrical power compared to
pre-storm levels, 2) the location of potentially damaged infrastructure that
has yet to recover from the storm, and 3) the number of persons without power
over time. As of May 31, 2018, reduced levels of observed brightness across
the island indicate that 13.9% +/- ~5.6% of persons still lack power and/or
that 13.2% +/- ~5.3% of infrastructure has been lost. In comparison, the Puerto
Rico Electric Power Authority states that less than 1% of their customers still
are without power. | [
"cs.CV",
"eess.IV"
] |
Training deep neural networks requires intricate initialization and careful
selection of learning rates. The emergence of stochastic gradient optimization
methods that use adaptive learning rates based on squared past gradients, e.g.,
AdaGrad, AdaDelta, and Adam, eases the job slightly. However, such methods have also proven problematic in recent studies, with pitfalls including non-convergence issues. Alternative variants have been
proposed for enhancement, such as AMSGrad, AdaShift and AdaBound. In this work,
we identify a new problem of adaptive learning rate methods that manifests at the beginning of learning, where Adam produces extremely large learning rates
that inhibit the start of learning. We propose the Adaptive and Momental Bound
(AdaMod) method to restrict the adaptive learning rates with adaptive and
momental upper bounds. The dynamic learning rate bounds are based on the
exponential moving averages of the adaptive learning rates themselves, which
smooth out unexpected large learning rates and stabilize the training of deep
neural networks. Our experiments verify that AdaMod eliminates the extremely
large learning rates throughout the training and brings significant
improvements especially on complex networks such as DenseNet and Transformer,
compared to Adam. Our implementation is available at:
https://github.com/lancopku/AdaMod | [
"cs.LG",
"stat.ML"
] |
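The AdaMod entry above is concrete enough to sketch: an Adam step whose per-parameter learning rates are clipped by an exponential moving average of the rates themselves. A minimal numpy version following that description (treat it as a sketch, not the reference implementation):

```python
import numpy as np

def adamod_step(p, grad, state, lr=1e-3, betas=(0.9, 0.999, 0.999), eps=1e-8):
    """One AdaMod update: Adam moments plus a 'momental' upper bound on
    the adaptive step sizes, smoothing out the extreme early rates."""
    b1, b2, b3 = betas
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    eta = lr / (np.sqrt(v_hat) + eps)           # Adam's adaptive step size
    state["s"] = b3 * state["s"] + (1 - b3) * eta
    eta = np.minimum(eta, state["s"])           # momental upper bound
    return p - eta * m_hat

p = np.ones(3)
state = {"t": 0, "m": np.zeros(3), "v": np.zeros(3), "s": np.zeros(3)}
p = adamod_step(p, np.array([0.1, -0.2, 0.3]), state)
print(p)  # the bound keeps the very first step small
```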
Deep metric learning, which learns discriminative features to process image
clustering and retrieval tasks, has attracted extensive attention in recent
years. A number of deep metric learning methods, which ensure that similar
examples are mapped close to each other and dissimilar examples are mapped
farther apart, have been proposed to construct effective structures for loss
functions and have shown promising results. In this paper, different from the
approaches on learning the loss structures, we propose a robust SNR distance
metric based on the Signal-to-Noise Ratio (SNR) for measuring the similarity of image pairs in deep metric learning. We analyze the properties of our SNR distance metric from the perspectives of geometry and statistical theory and show that it can preserve the semantic similarity between image pairs, which justifies its suitability for deep
metric learning. Compared with Euclidean distance metric, our SNR distance
metric can further jointly reduce the intra-class distances and enlarge the
inter-class distances for learned features. Leveraging our SNR distance metric,
we propose Deep SNR-based Metric Learning (DSML) to generate discriminative
feature embeddings. By extensive experiments on three widely adopted
benchmarks, including CARS196, CUB200-2011 and CIFAR10, our DSML has shown its
superiority over other state-of-the-art methods. Additionally, we extend our
SNR distance metric to deep hashing learning, and conduct experiments on two
benchmarks, including CIFAR10 and NUS-WIDE, to demonstrate the effectiveness
and generality of our SNR distance metric. | [
"cs.CV"
] |
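The entry above defines similarity through a Signal-to-Noise Ratio: the anchor embedding is the signal and the pairwise difference is the noise. A one-function numpy sketch of that distance, under the assumption that variances are taken over embedding dimensions:

```python
import numpy as np

def snr_distance(anchor, other):
    """SNR-based distance: var(noise) / var(signal), where the noise is
    the difference between the two embeddings. Smaller = more similar."""
    noise = other - anchor
    return np.var(noise) / np.var(anchor)

rng = np.random.default_rng(0)
a = rng.standard_normal(128)
close = a + 0.05 * rng.standard_normal(128)
far = a + 1.0 * rng.standard_normal(128)
print(snr_distance(a, close) < snr_distance(a, far))  # True
```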
Graph Neural Networks (GNNs) have achieved a lot of success on
graph-structured data. However, it is observed that the performance of graph
neural networks does not improve as the number of layers increases. This
effect, known as over-smoothing, has been analyzed mostly in linear cases. In
this paper, we build upon previous results \cite{oono2019graph} to further
analyze the over-smoothing effect in the general graph neural network
architecture. We show when the weight matrix satisfies the conditions
determined by the spectrum of augmented normalized Laplacian, the Dirichlet
energy of embeddings will converge to zero, resulting in the loss of
discriminative power. Using Dirichlet energy to measure the "expressiveness" of embeddings is conceptually clean; it leads to simpler proofs than
\cite{oono2019graph} and can handle more non-linearities. | [
"cs.LG",
"stat.ML"
] |
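The entry above measures over-smoothing with the Dirichlet energy under the augmented normalized Laplacian. The small numpy sketch below shows the energy of node embeddings collapsing toward zero under repeated linear, weight-free propagation, which is the phenomenon being analyzed.

```python
import numpy as np

def dirichlet_energy(X, L):
    """E(X) = trace(X^T L X): how much embeddings vary across edges.
    Zero energy means all discriminative power is lost."""
    return np.trace(X.T @ L @ X)

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_aug = A + np.eye(4)                    # add self-loops (augmentation)
d = A_aug.sum(1)
P = A_aug / np.sqrt(np.outer(d, d))      # D^(-1/2) A_aug D^(-1/2)
L = np.eye(4) - P                        # augmented normalized Laplacian

X = np.random.randn(4, 8)
for _ in range(5):                       # linear GCN layers, no weights
    print(round(dirichlet_energy(X, L), 4))  # decays toward zero
    X = P @ X
```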
Topology matters. Despite the recent success of point cloud processing with
geometric deep learning, it remains arduous to capture the complex topologies
of point cloud data with a learning model. Given a point cloud dataset
containing objects with various genera, or scenes with multiple objects, we
propose an autoencoder, TearingNet, which tackles the challenging task of
representing the point clouds using a fixed-length descriptor. Unlike existing
works directly deforming predefined primitives of genus zero (e.g., a 2D square
patch) to an object-level point cloud, our TearingNet is characterized by a
proposed Tearing network module and a Folding network module interacting with
each other iteratively. Particularly, the Tearing network module learns the
point cloud topology explicitly. By breaking the edges of a primitive graph, it
tears the graph into patches or with holes to emulate the topology of a target
point cloud, leading to faithful reconstructions. Experimentation shows the
superiority of our proposal in terms of reconstructing point clouds as well as
generating more topology-friendly representations than benchmarks. | [
"cs.CV"
] |
Pole-like landmark has received increasing attention as a domain-invariant
visual cue for visual robot self-localization across domains (e.g., seasons,
times of day, weathers). However, self-localization using pole-like landmarks
can be ill-posed for a passive observer, as many viewpoints may not provide any
pole-like landmark view. To alleviate this problem, we consider an active
observer and explore a novel "domain-invariant" next-best-view (NBV) planner
that attains consistent performance over different domains (i.e.,
maintenance-free), without requiring the expensive task of training data
collection and retraining. In our approach, a novel multi-encoder deep convolutional neural network enables the detection of domain-invariant pole-like landmarks, which are then used as the sole input to a model-free deep reinforcement learning-based domain-invariant NBV planner. Further, we develop
a practical system for active self-localization using sparse invariant
landmarks and dense discriminative landmarks. In experiments, we demonstrate
that the proposed method is effective both in efficient landmark detection and
in discriminative self-localization. | [
"cs.CV"
] |
In the last few years, there has been a growing interest in taking advantage
of the potential of 360 panoramic images, while managing the new challenges they
imply. While several tasks have been improved thanks to the contextual
information these images offer, object recognition in indoor scenes still
remains a challenging problem that has not been deeply investigated. This paper
provides an object recognition system that performs object detection and
semantic segmentation tasks by using a deep learning model adapted to match the
nature of equirectangular images. From these results, instance segmentation
masks are recovered, refined and transformed into 3D bounding boxes that are
placed into the 3D model of the room. Quantitative and qualitative results
support that our method outperforms the state of the art by a large margin and
show a complete understanding of the main objects in indoor scenes. | [
"cs.CV"
] |
Most existing zero-shot learning methods consider the problem as a visual
semantic embedding one. Given the demonstrated capability of Generative
Adversarial Networks(GANs) to generate images, we instead leverage GANs to
imagine unseen categories from text descriptions and hence recognize novel
classes with no examples being seen. Specifically, we propose a simple yet
effective generative model that takes as input noisy text descriptions about an
unseen class (e.g., Wikipedia articles) and generates synthesized visual features
for this class. With added pseudo data, zero-shot learning is naturally
converted to a traditional classification problem. Additionally, to preserve
the inter-class discrimination of the generated features, a visual pivot
regularization is proposed as an explicit supervision. Unlike previous methods
using complex engineered regularizers, our approach can suppress the noise well
without additional regularization. Empirically, we show that our method
consistently outperforms the state of the art on the largest available
benchmarks on Text-based Zero-shot Learning. | [
"cs.CV"
] |
Technology and the fruition of cultural heritage are becoming increasingly
more entwined, especially with the advent of smart audio guides, virtual and
augmented reality, and interactive installations. Machine learning and computer
vision are important components of this ongoing integration, enabling new
interaction modalities between user and museum. Nonetheless, the most frequent
way of interacting with paintings and statues still remains taking pictures.
Yet images alone can only convey the aesthetics of the artwork, lacking is
information which is often required to fully understand and appreciate it.
Usually this additional knowledge comes both from the artwork itself (and
therefore the image depicting it) and from an external source of knowledge,
such as an information sheet. While the former can be inferred by computer
vision algorithms, the latter needs more structured data to pair visual content
with relevant information. Regardless of its source, this information still
must be effectively transmitted to the user. A popular emerging trend in
computer vision is Visual Question Answering (VQA), in which users can interact
with a neural network by posing questions in natural language and receiving
answers about the visual content. We believe that this will be the evolution of
smart audio guides for museum visits and simple image browsing on personal
smartphones. This will turn the classic audio guide into a smart personal
instructor with which the visitor can interact by asking for explanations
focused on specific interests. The advantages are twofold: on the one hand the
cognitive burden of the visitor will decrease, limiting the flow of information
to what the user actually wants to hear; and on the other hand it proposes the
most natural way of interacting with a guide, favoring engagement. | [
"cs.CV",
"cs.CL"
] |
The use of deep learning techniques for automatic facial expression
recognition has recently attracted great interest but developed models are
still unable to generalize well due to the lack of large emotion datasets for
deep learning. To overcome this problem, in this paper, we propose utilizing a
novel transfer learning approach relying on PathNet and investigate how
knowledge can be accumulated within a given dataset and how the knowledge
captured from one emotion dataset can be transferred into another in order to
improve the overall performance. To evaluate the robustness of our system, we
have conducted various sets of experiments on two emotion datasets: SAVEE and
eNTERFACE. The experimental results demonstrate that our proposed system leads
to improvement in performance of emotion recognition and performs significantly
better than the recent state-of-the-art schemes adopting fine-tuning/pre-trained approaches. | [
"cs.CV"
] |
Because deep learning is vulnerable to noisy labels, sample selection techniques, which train networks with only cleanly labeled data, have attracted great attention. However, if the labels are dominantly corrupted by a few classes (these noisy samples are called dominant-noisy-labeled samples), the network also learns the dominant-noisy-labeled samples rapidly via content-aware optimization. In this study, we propose a compelling criterion to penalize dominant-noisy-labeled samples intensively through class-wise penalty labels. By averaging prediction confidences for each observed label, we obtain suitable penalty labels that have high values if the labels are largely
corrupted by some classes. Experiments were performed using benchmarks
(CIFAR-10, CIFAR-100, Tiny-ImageNet) and real-world datasets (ANIMAL-10N,
Clothing1M) to evaluate the proposed criteria in various scenarios with
different noise rates. Using the proposed sample selection, the learning
process of the network becomes significantly robust to noisy labels compared to
existing methods in several noise types. | [
"cs.LG"
] |
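The entry above constructs class-wise penalty labels by averaging prediction confidences over all samples sharing each observed label. A hedged numpy sketch of that construction follows; how the paper turns these averages into the final selection criterion is not reproduced here.

```python
import numpy as np

def class_penalty_labels(probs, observed):
    """penalty[c, k] = average softmax confidence for class k among
    samples observed with label c. If label c is dominantly corrupted by
    class k, penalty[c, k] is high, flagging dominant-noisy-labeled data."""
    n_classes = probs.shape[1]
    penalty = np.zeros((n_classes, n_classes))
    for c in range(n_classes):
        mask = observed == c
        if mask.any():
            penalty[c] = probs[mask].mean(axis=0)
    return penalty

probs = np.array([[0.8, 0.1, 0.1],   # a clean sample labelled 0
                  [0.1, 0.1, 0.8],   # labelled 0 but really class 2
                  [0.1, 0.1, 0.8]])  # labelled 0 but really class 2
observed = np.array([0, 0, 0])
print(class_penalty_labels(probs, observed)[0])  # peaks at class 2
```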
Malware currently presents a number of serious threats to computer users.
Signature-based malware detection methods are limited in detecting new malware
samples that are significantly different from known ones. Therefore, machine
learning-based methods have been proposed, but there are two challenges these
methods face. The first is to model the full semantics behind the assembly code
of malware. The second challenge is to provide interpretable results while
keeping excellent detection performance. In this paper, we propose an
Interpretable MAlware Detector (I-MAD) that outperforms state-of-the-art static
malware detection models in accuracy while providing excellent interpretability. To
improve the detection performance, I-MAD incorporates a novel network component
called the Galaxy Transformer network that can understand assembly code at the
basic block, function, and executable levels. It also incorporates our proposed
interpretable feed-forward neural network to provide interpretations for its
detection results by quantifying the impact of each feature with respect to the
prediction. Experimental results show that our model significantly outperforms
existing state-of-the-art static malware detection models and presents
meaningful interpretations. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
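
One common way to build a feed-forward network that quantifies each feature's impact is an additive design, sketched below as an assumption (the exact I-MAD architecture may differ): each feature passes through its own small subnetwork whose scalar output is that feature's contribution to the malware score.

```python
# Hedged sketch of an interpretable, additive feed-forward design: the
# per-feature scalar outputs are the interpretation of the prediction.
import torch
import torch.nn as nn

class AdditiveInterpretableNet(nn.Module):
    def __init__(self, num_features, hidden=16):
        super().__init__()
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        )

    def forward(self, x):
        # x: (batch, num_features); returns logits and per-feature impacts
        impacts = torch.cat(
            [net(x[:, i : i + 1]) for i, net in enumerate(self.subnets)], dim=1
        )
        return impacts.sum(dim=1), impacts

net = AdditiveInterpretableNet(num_features=5)
logit, impacts = net(torch.randn(2, 5))
print(impacts[0])  # contribution of each feature to sample 0's prediction
```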
Time-series forecasting is an important task in both academia and industry,
applicable to many real forecasting problems such as stock, water-supply, and
sales predictions. In this paper, we study the case of retailers' sales
forecasting on Tmall, the world's leading online B2C platform. By analyzing
the data, we make two main observations: sales exhibit seasonality once we
group different categories of retailers, and the sales (the target to
forecast) follow a Tweedie distribution after transformation. Based on these
observations, we design
two mechanisms for sales forecasting, i.e., seasonality extraction and
distribution transformation. First, we adopt Fourier decomposition to
automatically extract the seasonalities for different categories of retailers,
which can further be used as additional features for any established regression
algorithms. Second, we propose to optimize the Tweedie loss of sales after
logarithmic transformations. We apply these two mechanisms to classic
regression models, i.e., neural network and Gradient Boosting Decision Tree,
and the experimental results on Tmall dataset show that both mechanisms can
significantly improve the forecasting results. | [
"cs.LG",
"stat.AP",
"stat.ML"
] |
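
A hedged sketch of the two mechanisms described above (the concrete choices below, such as keeping the two strongest frequencies and p = 1.5, are assumptions): (1) extract dominant seasonal components of grouped sales via the FFT to serve as extra regression features, and (2) a Tweedie deviance loss on positive sales, with the log-transformed target available for any downstream regressor.

```python
# (1) FFT-based seasonality extraction; (2) Tweedie deviance for 1 < p < 2
# (the compound Poisson-gamma range). Parameter choices are assumptions.
import numpy as np

def dominant_seasonality(series, k=2):
    """Return the series reconstructed from its k strongest frequencies."""
    coeffs = np.fft.rfft(series - series.mean())
    keep = np.argsort(np.abs(coeffs))[-k:]          # k largest magnitudes
    filtered = np.zeros_like(coeffs)
    filtered[keep] = coeffs[keep]
    return np.fft.irfft(filtered, n=len(series)) + series.mean()

def tweedie_deviance(y_true, y_pred, p=1.5):
    """Mean Tweedie deviance for 1 < p < 2; y_pred must be positive."""
    return 2 * np.mean(
        y_true ** (2 - p) / ((1 - p) * (2 - p))
        - y_true * y_pred ** (1 - p) / (1 - p)
        + y_pred ** (2 - p) / (2 - p)
    )

t = np.arange(104)                                   # two years, weekly
sales = 50 + 10 * np.sin(2 * np.pi * t / 52) + np.random.rand(104)
season_feature = dominant_seasonality(sales)         # extra model feature
log_sales = np.log1p(sales)                          # transformed target
print(tweedie_deviance(sales, np.full_like(sales, sales.mean())))
```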
While facial attribute manipulation of 2D images via Generative Adversarial
Networks (GANs) has become common in computer vision and graphics due to its
many practical uses, research on 3D attribute manipulation is relatively
undeveloped. Existing 3D attribute manipulation methods are limited because the
same semantic changes are applied to every 3D face. The key challenge for
developing better 3D attribute control methods is the lack of paired training
data in which one attribute is changed while other attributes are held fixed --
e.g., a pair of 3D faces where one is male and the other is female but all
other attributes, such as race and expression, are the same. To overcome this
challenge, we design a novel pipeline for generating paired 3D faces by
harnessing the power of GANs. On top of this pipeline, we then propose an
enhanced non-linear 3D conditional attribute controller that increases the
precision and diversity of 3D attribute control compared to existing methods.
We demonstrate the validity of our dataset creation pipeline and the superior
performance of our conditional attribute controller via quantitative and
qualitative evaluations. | [
"cs.CV",
"cs.GR"
] |
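
A minimal sketch of the paired-data idea (module names, shapes, and the dummy generator are assumptions, not the authors' pipeline): hold a latent code fixed and flip only the target attribute code, so a conditional generator emits two 3D faces that differ in that attribute while other factors stay the same.

```python
# Paired 3D face generation sketch: shared latent code, flipped attribute.
import torch
import torch.nn as nn

LATENT, ATTR, NUM_VERTS = 64, 1, 1000

# Dummy stand-in for a trained conditional 3D face generator.
generator = nn.Sequential(
    nn.Linear(LATENT + ATTR, 256), nn.ReLU(), nn.Linear(256, NUM_VERTS * 3)
)

z = torch.randn(1, LATENT)                     # shared identity/style code
male = generator(torch.cat([z, torch.zeros(1, 1)], dim=1))
female = generator(torch.cat([z, torch.ones(1, 1)], dim=1))
pair = (male.view(NUM_VERTS, 3), female.view(NUM_VERTS, 3))  # paired faces
```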
In natural image matting, the goal is to estimate the opacity of the
foreground object in the image. This opacity controls the way the foreground
and background are blended in transparent regions. In recent years, advances in
deep learning have led to many natural image matting algorithms that have
achieved outstanding performance in a fully automatic manner. However, most of
these algorithms only predict the alpha matte from the image, which is not
sufficient to create high-quality compositions. Further, it is not possible to
manually interact with these algorithms in any way except by directly changing
their input or output. We propose a novel recurrent neural network that can be
used as a post-processing method to recover the foreground and background
colors of an image, given an initial alpha estimation. Our method outperforms
the state-of-the-art in color estimation for natural image matting, and we show
that the recurrent nature of our method allows users to easily change candidate
solutions that lead to superior color estimations. | [
"cs.CV"
] |
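
The compositing relation the abstract relies on is that each pixel satisfies I = alpha * F + (1 - alpha) * B, so re-compositing onto a new background needs the foreground colors F, not just the alpha matte. A tiny numpy demonstration (random pixel values as placeholders):

```python
# Why alpha alone is insufficient: composition needs foreground colors F.
import numpy as np

h, w = 4, 4
alpha = np.random.rand(h, w, 1)          # estimated matte
F = np.random.rand(h, w, 3)              # foreground colors (to recover)
B = np.random.rand(h, w, 3)              # original background colors

image = alpha * F + (1 - alpha) * B      # observed image

new_bg = np.zeros((h, w, 3))             # composite onto a new background
composite = alpha * F + (1 - alpha) * new_bg
```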
A key step in any scanning-based asset creation workflow is to convert
unordered point clouds to a surface. Classical methods (e.g., Poisson
reconstruction) start to degrade in the presence of noisy and partial scans.
Hence, deep learning based methods have recently been proposed to produce
complete surfaces, even from partial scans. However, such data-driven methods
struggle to generalize to new shapes with large geometric and topological
variations. We present Points2Surf, a novel patch-based learning framework that
produces accurate surfaces directly from raw scans without normals. Learning a
prior over a combination of detailed local patches and coarse global
information improves generalization performance and reconstruction accuracy.
Our extensive comparison on both synthetic and real data demonstrates a clear
advantage of our method over state-of-the-art alternatives on previously unseen
classes (on average, Points2Surf brings down reconstruction error by 30% over
SPR and by 270%+ over deep learning based SotA methods) at the cost of longer
computation times and a slight increase in small-scale topological noise in
some cases. Our source code, pre-trained model, and dataset are available on:
https://github.com/ErlerPhilipp/points2surf | [
"cs.CV",
"I.4.5"
] |
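
A rough sketch of a patch-based distance query in the spirit of Points2Surf (all shapes, the k-NN patch sizes, and the tiny MLP are assumptions, not the authors' architecture): combine a local neighborhood around a query point with a coarse global subsample and regress a signed distance.

```python
# Patch-based signed-distance query sketch; a trained model would be
# evaluated on a 3D grid of query points to extract the surface.
import numpy as np
import torch
import torch.nn as nn

K_LOCAL, K_GLOBAL = 16, 32
cloud = np.random.rand(2048, 3).astype(np.float32)   # raw scan, no normals
query = np.array([0.5, 0.5, 0.5], dtype=np.float32)

d = np.linalg.norm(cloud - query, axis=1)
local = cloud[np.argsort(d)[:K_LOCAL]] - query       # centered local patch
global_ = cloud[np.random.choice(len(cloud), K_GLOBAL, replace=False)]

mlp = nn.Sequential(nn.Linear((K_LOCAL + K_GLOBAL) * 3, 128),
                    nn.ReLU(), nn.Linear(128, 1))
x = torch.from_numpy(np.concatenate([local, global_]).reshape(1, -1))
signed_distance = mlp(x)                             # scalar SDF estimate
```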
With a growing demand for the search by image, many works have studied the
task of fashion instance-level image retrieval (FIR). Furthermore, the recent
works introduce a concept of fashion attribute manipulation (FAM) which
manipulates a specific attribute (e.g., color) of a fashion item while
maintaining the rest of the attributes (e.g., shape and pattern). In this way,
users can search not only "the same" items but also "similar" items with the
desired attributes. FAM is a challenging task in that the attributes are hard
to define, and the unique characteristics of a query are hard to be preserved.
Although both FIR and FAM are important in real-life applications, most of the
previous studies have focused on only one of these problems. In this study, we
aim to achieve competitive performance on both FIR and FAM. To do so, we
propose a novel method that converts a query into a representation with the
desired attributes. We introduce a new idea of attribute manipulation at the
feature level, by matching the distribution of manipulated features with real
features. In this fashion, the attribute manipulation can be done independently
of learning a representation from the image. By introducing the feature-level
attribute manipulation, the previous methods for FIR can perform attribute
manipulation without sacrificing their retrieval performance. | [
"cs.CV"
] |
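
A hedged sketch of feature-level attribute manipulation as we read it (modules and the adversarial loss below are assumptions): a manipulator maps a query feature plus a target attribute to a new feature, while a discriminator pushes manipulated features toward the distribution of real features.

```python
# Feature-level manipulation with adversarial distribution matching.
import torch
import torch.nn as nn

FEAT, ATTR = 128, 10
manipulator = nn.Sequential(nn.Linear(FEAT + ATTR, 256), nn.ReLU(),
                            nn.Linear(256, FEAT))
discriminator = nn.Sequential(nn.Linear(FEAT, 64), nn.ReLU(),
                              nn.Linear(64, 1))

feat = torch.randn(4, FEAT)        # features from a frozen retrieval model
target = nn.functional.one_hot(torch.randint(0, ATTR, (4,)), ATTR).float()
fake = manipulator(torch.cat([feat, target], dim=1))

# Non-saturating GAN loss pushing manipulated features toward real ones.
g_loss = nn.functional.binary_cross_entropy_with_logits(
    discriminator(fake), torch.ones(4, 1))
```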
Deep models are state-of-the-art for many computer vision tasks including
image classification and object detection. However, it has been shown that deep
models are vulnerable to adversarial examples. We highlight how one-hot
encoding directly contributes to this vulnerability and propose breaking away
from this widely-used, but highly-vulnerable mapping. We demonstrate that by
leveraging a different output encoding, multi-way encoding, we decorrelate
source and target models, making target models more secure. Our approach makes
it more difficult for adversaries to find useful gradients for generating
adversarial attacks. We demonstrate robustness against black-box and white-box attacks
on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100, and SVHN. The strength
of our approach is also presented in the form of an attack for model
watermarking, raising challenges in detecting stolen models. | [
"cs.CV"
] |
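
A sketch of the output-encoding idea (our illustration; the paper's exact codes may differ): replace one-hot targets with fixed high-dimensional codewords, train toward the codeword, and classify by nearest codeword, which decorrelates output-layer gradients across independently trained models.

```python
# Multi-way output encoding sketch: random codewords replace one-hot labels.
import torch

NUM_CLASSES, CODE_DIM = 10, 128
torch.manual_seed(0)
codes = torch.randn(NUM_CLASSES, CODE_DIM)      # fixed random codewords

def encode(labels):                              # training targets
    return codes[labels]

def decode(outputs):                             # prediction = nearest code
    return torch.cdist(outputs, codes).argmin(dim=1)

outputs = encode(torch.tensor([3, 7])) + 0.1 * torch.randn(2, CODE_DIM)
print(decode(outputs))                           # expected: tensor([3, 7])
```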
This paper aims to learn a compact representation of a video for video face
recognition task. We make the following contributions: first, we propose a meta
attention-based aggregation scheme which adaptively weighs, in a fine-grained
manner, each feature dimension across all frames to form a compact and
discriminative representation. It exploits the valuable or discriminative part
of each frame to promote face recognition performance, without discarding or
despising low-quality frames as usual
methods do. Second, we build a feature aggregation network comprised of a
feature embedding module and a feature aggregation module. The embedding module
is a convolutional neural network used to extract a feature vector from a face
image, while the aggregation module consists of two cascaded meta attention
blocks which adaptively aggregate the feature vectors into a single
fixed-length representation. The network can deal with an arbitrary number of
frames, and is insensitive to frame order. Third, we validate the performance
of proposed aggregation scheme. Experiments on publicly available datasets,
such as YouTube face dataset and IJB-A dataset, show the effectiveness of our
method, and it achieves competitive performance on both the verification and
identification protocols. | [
"cs.CV",
"cs.AI"
] |
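
A simplified, hedged reading of per-dimension attention aggregation over frames (the actual meta attention block may differ): score every feature dimension of every frame, softmax across frames per dimension, and take the weighted sum. Because the softmax is over the frame axis, the result is insensitive to frame order.

```python
# Per-dimension attention pooling over video frames (order-insensitive).
import torch
import torch.nn as nn

FEAT = 256
scorer = nn.Linear(FEAT, FEAT)                  # one score per dimension

frames = torch.randn(1, 30, FEAT)               # (batch, num_frames, feat)
weights = torch.softmax(scorer(frames), dim=1)  # normalize over frames
video_repr = (weights * frames).sum(dim=1)      # (batch, feat) aggregation
print(video_repr.shape)
```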
This paper addresses the problem of human body detection, particularly a
human body lying on the ground (a.k.a. a casualty), using point cloud data. This
ability to detect a casualty is one of the most important features of mobile
rescue robots, in order for them to be able to operate autonomously. We propose
a deep-learning-based casualty detection method using a deep convolutional
neural network (CNN). This network is trained to be able to detect a casualty
using a point-cloud data input. In the method we propose, the point cloud input
is pre-processed to generate a depth image-like ground-projected heightmap.
This heightmap is generated based on the projected distance of each point onto
the detected ground plane within the point cloud data. The generated heightmap
-- in image form -- is then used as an input for the CNN to detect a human body
lying on the ground. To train the neural network, we propose a novel
sim-to-real approach, in which the network model is trained using synthetic
data obtained in simulation and then tested on real sensor data. To make the
model transferable to real data implementations, during the training we adopt
specific data augmentation strategies with the synthetic training data. The
experimental results show that data augmentation introduced during the training
process is essential for improving the performance of the trained model on real
data. More specifically, the results demonstrate that the data augmentations on
raw point-cloud data have contributed to a considerable improvement of the
trained model performance. | [
"cs.CV",
"cs.RO"
] |
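
A minimal sketch of the heightmap pre-processing step (the grid size is an assumption, and for simplicity the detected ground plane is taken as z = 0 so heights are the points' z-values): project each point onto the ground plane and bin the heights into a 2D image for the CNN.

```python
# Point cloud -> ground-projected heightmap (assumption: ground plane z=0,
# max height kept per cell; the paper's exact aggregation may differ).
import numpy as np

points = np.random.rand(5000, 3)                 # x, y on ground; z = height
GRID, CELL = 64, 1.0 / 64                        # 64x64 map over a unit area

heightmap = np.zeros((GRID, GRID), dtype=np.float32)
ix = np.clip((points[:, 0] / CELL).astype(int), 0, GRID - 1)
iy = np.clip((points[:, 1] / CELL).astype(int), 0, GRID - 1)
np.maximum.at(heightmap, (iy, ix), points[:, 2])  # max height per cell
```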
We present a novel method for multi-view depth estimation from a single
video, which is a critical task in various applications, such as perception,
reconstruction and robot navigation. Although previous learning-based methods
have demonstrated compelling results, most works estimate depth maps of
individual video frames independently, without taking into consideration the
strong geometric and temporal coherence among the frames. Moreover, current
state-of-the-art (SOTA) models mostly adopt a fully 3D convolutional network for
cost regularization and therefore require high computational cost, thus
limiting their deployment in real-world applications. Our method achieves
temporally coherent depth estimation results by using a novel Epipolar
Spatio-Temporal (EST) transformer to explicitly associate geometric and
temporal correlation with multiple estimated depth maps. Furthermore, to reduce
the computational cost, inspired by recent Mixture-of-Experts models, we design
a compact hybrid network consisting of a 2D context-aware network and a 3D
matching network which learn 2D context information and 3D disparity cues
separately. Extensive experiments demonstrate that our method achieves higher
accuracy in depth estimation and a significant speedup over the SOTA methods. | [
"cs.CV"
] |