text | label
---|---|
Diagrams often depict complex phenomena and serve as a good test bed for
visual and textual reasoning. However, understanding diagrams using natural
image understanding approaches requires large training datasets of diagrams,
which are very hard to obtain. Instead, the task can be cast as a matching
problem between labeled diagrams, images, or both. This problem is very
challenging since the absence of significant color and texture renders local
cues ambiguous and requires global reasoning. We consider the problem of
one-shot part labeling: labeling multiple parts of an object in a target image
given only a single source image of that category. For this set-to-set matching
problem, we introduce the Structured Set Matching Network (SSMN), a structured
prediction model that incorporates convolutional neural networks. The SSMN is
trained using global normalization to maximize local match scores between
corresponding elements and a global consistency score among all matched
elements, while also enforcing a matching constraint between the two sets. The
SSMN significantly outperforms several strong baselines on three label transfer
scenarios: diagram-to-diagram, evaluated on a new diagram dataset of over 200
categories; image-to-image, evaluated on a dataset built on top of the Pascal
Part Dataset; and image-to-diagram, evaluated on transferring labels across
these datasets. | [
"cs.CV"
] |
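To make the set-to-set matching constraint concrete, here is a minimal sketch (not the SSMN itself, which is a globally normalized structured predictor): local match scores between per-part CNN embeddings are combined with a global one-to-one assignment via the Hungarian algorithm. Feature dimensions and inputs are placeholders.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_parts(src_feats: np.ndarray, tgt_feats: np.ndarray):
    """src_feats, tgt_feats: (n_parts, d) arrays, e.g. CNN patch embeddings."""
    # Local match scores: cosine similarity between every source/target pair.
    src = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    tgt = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    scores = src @ tgt.T
    # Global matching constraint: one-to-one assignment maximizing total score.
    rows, cols = linear_sum_assignment(-scores)
    return list(zip(rows, cols)), scores[rows, cols].sum()

rng = np.random.default_rng(0)
pairs, total = match_parts(rng.normal(size=(5, 64)), rng.normal(size=(5, 64)))
print(pairs, total)
```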
Reinforcement learning (RL) is a general framework that allows systems to
learn autonomously through trial-and-error interaction with their environment.
In recent years combining RL with expressive, high-capacity neural network
models has led to impressive performance in a diverse range of domains.
However, dealing with the large state and action spaces often required for
problems in the real world still remains a significant challenge. In this paper
we introduce a new simulation environment, "Gambit", designed as a tool to
build scenarios that can drive RL research in a direction useful for military
analysis. Using this environment we focus on an abstracted and simplified room
clearance scenario, where a team of blue agents have to make their way through
a building and ensure that all rooms are cleared of (and remain clear of) enemy
red agents. We implement a multi-agent version of feudal hierarchical RL that
introduces a command hierarchy where a commander at the higher level sends
orders to multiple agents at the lower level who simply have to learn to follow
these orders. We find that breaking the task down in this way allows us to
solve a number of non-trivial floorplans that require the coordination of
multiple agents much more efficiently than the standard baseline RL algorithms
we compare with. We then go on to explore how qualitatively different behaviour
can emerge depending on what we prioritise in the agent's reward function (e.g.
clearing the building quickly vs. prioritising rescuing civilians). | [
"cs.LG",
"cs.AI"
] |
We consider the problem of unsupervised domain adaptation for image
classification. To learn target-domain-aware features from the unlabeled data,
we create a self-supervised pretext task by augmenting the unlabeled data with
a certain type of transformation (specifically, image rotation) and ask the
learner to predict the properties of the transformation. However, the obtained
feature representation may contain a large amount of irrelevant information
with respect to the main task. To provide further guidance, we force the
feature representation of the augmented data to be consistent with that of the
original data. Intuitively, the consistency introduces additional constraints
on representation learning; the learned representation is therefore more
likely to focus on the right information about the main task. Our experimental
results validate the proposed method and demonstrate state-of-the-art
performance on classical domain adaptation benchmarks. Code is available at
https://github.com/Jiaolong/ss-da-consistency. | [
"cs.CV"
] |
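A minimal sketch of the two training signals described above: a rotation-prediction pretext loss on augmented images plus a consistency loss tying rotated and original features together. The tiny encoder, the head, and the 0.1 weighting are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder and rotation-classification head (placeholders for a real model).
encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
rot_head = nn.Linear(8, 4)

def ssl_losses(x, consistency_weight=0.1):
    """x: (B, 3, H, W) batch of unlabeled target-domain images."""
    k = int(torch.randint(0, 4, (1,)))        # rotation class: 0/90/180/270 deg
    x_rot = torch.rot90(x, k, dims=(2, 3))
    feat, feat_rot = encoder(x), encoder(x_rot)
    # Pretext task: predict which rotation was applied.
    rot_target = torch.full((x.size(0),), k, dtype=torch.long)
    rot_loss = F.cross_entropy(rot_head(feat_rot), rot_target)
    # Consistency: features of augmented data should match the original's.
    cons_loss = F.mse_loss(feat_rot, feat.detach())
    return rot_loss + consistency_weight * cons_loss

loss = ssl_losses(torch.randn(4, 3, 32, 32))
loss.backward()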
Real-time detection of moving objects involves memorisation of features in
the template image and their comparison with those in the test image. At high
sampling rates, such techniques face the problems of high algorithmic
complexity and component delays. We present a new resistive switching based
threshold logic cell which encodes the pixels of a template image. The cell
comprises a voltage divider circuit that programs the resistances of the
memristors arranged in a single node threshold logic network and the output is
encoded as a binary value using a CMOS inverter gate. When a test image is
applied to the template-programmed cell, a mismatch in the respective pixels is
seen as a change in the output voltage of the cell. Compared with an
equivalent CMOS implementation, the proposed cell shows improved area, leakage
power, power dissipation, and delay. | [
"cs.CV",
"cs.AR",
"cs.ET"
] |
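A purely behavioral toy model of the template-matching idea above, not a circuit-level simulation: binary template pixels act as programmed weights in a single-node threshold gate, a mismatched test pixel shifts the summed "node voltage" across the threshold, and the inverter output flips. The weight/threshold scheme here is an assumption for illustration.

```python
import numpy as np

def threshold_cell(template: np.ndarray, test: np.ndarray) -> int:
    """template/test: flat arrays of {0,1} pixels. Returns inverter output."""
    weights = np.where(template == 1, 1.0, -1.0)   # programmed conductances
    node_voltage = np.dot(weights, test)           # voltage-divider summation
    threshold = np.dot(weights, template) - 0.5    # tuned to the template
    return int(node_voltage < threshold)           # inverter: 1 on mismatch

tpl = np.array([1, 0, 1, 1, 0])
print(threshold_cell(tpl, tpl))                        # 0 -> match
print(threshold_cell(tpl, np.array([1, 1, 1, 1, 0])))  # 1 -> mismatch
```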
Scene understanding is a critical problem in computer vision. In this paper,
we propose a 3D point-based scene graph generation ($\mathbf{SGG_{point}}$)
framework to effectively bridge perception and reasoning to achieve scene
understanding via three sequential stages, namely scene graph construction,
reasoning, and inference. Within the reasoning stage, an EDGE-oriented Graph
Convolutional Network ($\texttt{EdgeGCN}$) is created to exploit
multi-dimensional edge features for explicit relationship modeling, together
with the exploration of two associated twinning interaction mechanisms between
nodes and edges for the independent evolution of scene graph representations.
Overall, our integrated $\mathbf{SGG_{point}}$ framework is established to seek
and infer scene structures of interest from both real-world and synthetic 3D
point-based scenes. Our experimental results show promising edge-oriented
reasoning effects on scene graph generation studies. We also demonstrate our
method's advantages on several traditional graph representation learning benchmark
datasets, including the node-wise classification on citation networks and
whole-graph recognition problems for molecular analysis. | [
"cs.CV"
] |
Video anomaly detection has proved to be a challenging task owing to its
unsupervised training procedure and high spatio-temporal complexity existing in
real-world scenarios. In the absence of anomalous training samples,
state-of-the-art methods try to extract features that fully grasp normal
behaviors in both space and time domains using different approaches such as
autoencoders or generative adversarial networks. However, these approaches
either ignore the spatio-temporal interactions between objects entirely or,
relying on the hierarchical modeling ability of deep networks, model them only
poorly. To address this issue, we propose a novel yet efficient method named
Ano-Graph for learning and modeling the interaction of normal objects. Towards
this end, a Spatio-Temporal Graph (STG) is made by considering each node as an
object's feature extracted from a real-time off-the-shelf object detector, and
edges are made based on their interactions. After that, a self-supervised
learning method is employed on the STG in such a way that encapsulates
interactions in a semantic space. Our method is data-efficient, significantly
more robust against common real-world variations such as illumination changes,
and surpasses the state of the art by a large margin on the challenging ADOC
and Street Scene datasets while staying competitive on Avenue, ShanghaiTech,
and UCSD. | [
"cs.CV"
] |
In this paper, we present a novel approach for initializing deep neural
networks, i.e., by turning PCA into neural layers. Usually, the initialization
of the weights of a deep neural network is done in one of the three following
ways: 1) with random values, 2) layer-wise, usually as Deep Belief Network or
as auto-encoder, and 3) re-use of layers from another network (transfer
learning). Consequently, many training epochs are typically needed before
meaningful weights are learned, or a rather similar dataset is required to
seed fine-tuning in transfer learning. In this paper, we describe how to
turn a PCA into an auto-encoder, by generating an encoder layer from the PCA
parameters and adding a corresponding decoding layer. We analyze the
initialization technique on real documents. First, we show that a PCA-based
initialization is quick and leads to a very stable initialization. Furthermore,
for the task of layout analysis we investigate the effectiveness of PCA-based
initialization and show that it outperforms state-of-the-art random weight
initialization methods. | [
"cs.LG",
"stat.ML"
] |
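A minimal sketch of the PCA-to-autoencoder idea described above: fit PCA, copy its components into a linear encoder layer, and use their transpose (plus the data mean) as the decoder, so the autoencoder starts from the exact PCA solution before fine-tuning. Layer sizes and the random stand-in data are illustrative.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

X = np.random.randn(1000, 64).astype(np.float32)   # stand-in for real data
pca = PCA(n_components=16).fit(X)

encoder = nn.Linear(64, 16)
decoder = nn.Linear(16, 64)
with torch.no_grad():
    W = torch.from_numpy(pca.components_.astype(np.float32))   # (16, 64)
    mu = torch.from_numpy(pca.mean_.astype(np.float32))
    encoder.weight.copy_(W)
    encoder.bias.copy_(-W @ mu)       # encode: W (x - mean)
    decoder.weight.copy_(W.T)
    decoder.bias.copy_(mu)            # decode: W^T z + mean

# The network now reproduces PCA reconstruction and can be trained further.
recon = decoder(encoder(torch.from_numpy(X)))
print(float(((recon - torch.from_numpy(X)) ** 2).mean()))
```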
Reinforcement learning (RL) algorithms should learn as much as possible about
the environment but not the properties of the physics engines that generate the
environment. Many algorithms can solve tasks in physics-engine-based
environments, but no work so far has examined whether RL algorithms generalize
across physics engines. In this work, we compare
the generalization performance of various deep reinforcement learning
algorithms on a variety of control tasks. Our results show that MuJoCo is the
best engine to transfer the learning to other engines. On the other hand, none
of the algorithms generalize when trained on PyBullet. We also find that
various algorithms show promising generalizability if the effect of random
seeds on their performance can be minimized. | [
"cs.LG",
"cs.RO"
] |
We present a system to help designers create icons that are widely used in
banners, signboards, billboards, homepages, and mobile apps. Designers are
tasked with drawing contours, whereas our system colorizes contours in
different styles. This goal is achieved by training a dual conditional
generative adversarial network (GAN) on our collected icon dataset. One
condition requires the generated image and the drawn contour to possess a
similar contour, while the other anticipates the image and the referenced icon
to be similar in color style. Accordingly, the generator takes a contour image
and a man-made icon image to colorize the contour, and then the discriminators
determine whether the result fulfills the two conditions. The trained network
is able to colorize icons demanded by designers and greatly reduces their
workload. For the evaluation, we compared our dual conditional GAN to several
state-of-the-art techniques. Experimental results demonstrate that our network
outperforms the previous networks. Finally, we will provide the source code, icon
dataset, and trained network for public use. | [
"cs.LG",
"cs.GR",
"eess.IV"
] |
The success of convolutional neural networks (CNNs) in computer vision
applications has been accompanied by a significant increase of computation and
memory costs, which prohibits their usage in resource-limited environments such
as mobile or embedded devices. To this end, CNN compression has recently become
an active research area. In this paper, we propose a novel filter pruning
scheme, termed structured sparsity regularization (SSR), to simultaneously
speedup the computation and reduce the memory overhead of CNNs, which can be
well supported by various off-the-shelf deep learning libraries. Concretely,
the proposed scheme incorporates two different regularizers of structured
sparsity into the original objective function of filter pruning, which fully
coordinates the global outputs and local pruning operations to adaptively prune
filters. We further propose an Alternative Updating with Lagrange Multipliers
(AULM) scheme to efficiently solve its optimization. AULM follows the principle
of ADMM and alternates between promoting the structured sparsity of CNNs and
optimizing the recognition loss, which leads to a very efficient solver (2.5x
faster than the most recent work that directly solves the group sparsity-based
regularization). Moreover, by imposing the structured sparsity, the online
inference is extremely memory-light, since the number of filters and the output
feature maps are simultaneously reduced. The proposed scheme has been deployed
to a variety of state-of-the-art CNN structures including LeNet, AlexNet, VGG,
ResNet and GoogLeNet over different datasets. Quantitative results demonstrate
that the proposed scheme achieves superior performance over the
state-of-the-art methods. We further demonstrate the proposed compression
scheme on transfer learning tasks, including domain adaptation and
object detection, which also shows exciting performance gains over the
state of the art. | [
"cs.CV",
"cs.LG"
] |
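A hedged sketch of one ingredient of SSR: a structured-sparsity (group Lasso) regularizer that treats each output filter of a convolution as a group, so entire filters are driven toward zero and can be pruned along with their output feature maps. The paper's two regularizers and the AULM/ADMM solver are not reproduced here; the 1e-3 weight is an assumption.

```python
import torch
import torch.nn as nn

def filter_group_lasso(conv: nn.Conv2d) -> torch.Tensor:
    # weight: (out_channels, in_channels, kH, kW); one group per output filter
    return conv.weight.flatten(1).norm(dim=1).sum()

conv = nn.Conv2d(16, 32, 3)
task_loss = torch.tensor(0.0)                 # stand-in for recognition loss
loss = task_loss + 1e-3 * filter_group_lasso(conv)
loss.backward()
# After training, filters with near-zero norms can be removed, shrinking both
# the layer and the feature maps it produces.
```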
Estimating a depth map from a single RGB image has been investigated widely
for localization, mapping, and 3-dimensional object detection. Recent studies
on single-view depth estimation are mostly based on deep convolutional neural
networks (ConvNets), which require a large amount of training data paired with
densely annotated labels. Depth annotation tasks are both expensive and
inefficient, so it is natural to leverage RGB images, which can be collected
very easily, to boost the performance of ConvNets without depth labels. However,
most self-supervised learning algorithms are focused on capturing the semantic
information of images to improve the performance in classification or object
detection, not in depth estimation. In this paper, we show that existing
self-supervised methods do not perform well on depth estimation and propose a
gradient-based self-supervised learning algorithm with momentum contrastive
loss to help ConvNets extract the geometric information with unlabeled images.
As a result, the network can estimate the depth map accurately with a
relatively small amount of annotated data. To show that our method is
independent of the model structure, we evaluate our method with two different
monocular depth estimation algorithms. Our method outperforms the previous
state-of-the-art self-supervised learning algorithms and roughly triples the
efficiency of labeled data compared to random initialization on the NYU Depth v2
dataset. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
Generative adversarial networks (GANs) are able to generate high resolution
photo-realistic images of objects that "do not exist." These synthetic images
are rather difficult to detect as fake. However, the manner in which these
generative models are trained hints at a potential for information leakage from
the supplied training data, especially in the context of synthetic faces. This
paper presents experiments suggesting that identity information in face images
can flow from the training corpus into synthetic samples without any
adversarial actions when building or using the existing model. This raises
privacy-related questions, but also stimulates discussions of (a) the face
manifold's characteristics in the feature space and (b) how to create
generative models that do not inadvertently reveal identity information of real
subjects whose images were used for training. We used five different face
matchers (face_recognition, FaceNet, ArcFace, SphereFace and Neurotechnology
MegaMatcher) and the StyleGAN2 synthesis model, and show that this identity
leakage does exist for some, but not all, methods. So, can we say that these
synthetically generated faces truly do not exist? Databases of real and
synthetically generated faces are made available with this paper to allow full
replicability of the results discussed in this work. | [
"cs.CV"
] |
Event cameras are novel vision sensors that sample, in an asynchronous
fashion, brightness increments with low latency and high temporal resolution.
The resulting streams of events are of high value by themselves, especially for
high speed motion estimation. However, a growing body of work has also focused
on the reconstruction of intensity frames from the events, as this allows
bridging the gap with the existing literature on appearance- and frame-based
computer vision. Recent work has mostly approached this problem using neural
networks trained with synthetic, ground-truth data. In this work we approach,
for the first time, the intensity reconstruction problem from a self-supervised
learning perspective. Our method, which leverages the knowledge of the inner
workings of event cameras, combines estimated optical flow and the event-based
photometric constancy to train neural networks without the need for any
ground-truth or synthetic data. Results across multiple datasets show that the
performance of the proposed self-supervised approach is in line with the
state-of-the-art. Additionally, we propose a novel, lightweight neural network
for optical flow estimation that achieves high speed inference with only a
minor drop in performance. | [
"cs.CV"
] |
Medical images are naturally associated with rich semantics about the human
anatomy, reflected in an abundance of recurring anatomical patterns, offering
unique potential to foster deep semantic representation learning and yield
semantically more powerful models for different medical applications. But how
exactly such strong yet free semantics embedded in medical images can be
harnessed for self-supervised learning remains largely unexplored. To this end,
we train deep models to learn semantically enriched visual representation by
self-discovery, self-classification, and self-restoration of the anatomy
underneath medical images, resulting in a semantics-enriched, general-purpose,
pre-trained 3D model, named Semantic Genesis. We examine our Semantic Genesis
against all publicly available pre-trained models, obtained by either
self-supervision or full supervision, on six distinct target tasks, covering both
classification and segmentation in various medical modalities (i.e., CT, MRI,
and X-ray). Our extensive experiments demonstrate that Semantic Genesis
significantly exceeds all of its 3D counterparts as well as the de facto
ImageNet-based transfer learning in 2D. This performance is attributed to our
novel self-supervised learning framework, encouraging deep models to learn
compelling semantic representation from abundant anatomical patterns resulting
from consistent anatomies embedded in medical images. Code and pre-trained
Semantic Genesis are available at https://github.com/JLiangLab/SemanticGenesis . | [
"cs.CV",
"eess.IV"
] |
Convolutional neural networks (CNNs) have achieved state-of-the-art
performance for automatic medical image segmentation. However, they have not
demonstrated sufficiently accurate and robust results for clinical use. In
addition, they are limited by the lack of image-specific adaptation and the
lack of generalizability to previously unseen object classes. To address these
problems, we propose a novel deep learning-based framework for interactive
segmentation by incorporating CNNs into a bounding box and scribble-based
segmentation pipeline. We propose image-specific fine-tuning to make a CNN
model adaptive to a specific test image, which can be either unsupervised
(without additional user interactions) or supervised (with additional
scribbles). We also propose a weighted loss function considering network and
interaction-based uncertainty for the fine-tuning. We applied this framework to
two applications: 2D segmentation of multiple organs from fetal MR slices,
where only two types of these organs were annotated for training; and 3D
segmentation of brain tumor core (excluding edema) and whole brain tumor
(including edema) from different MR sequences, where only tumor cores in one MR
sequence were annotated for training. Experimental results show that 1) our
model is more robust to segment previously unseen objects than state-of-the-art
CNNs; 2) image-specific fine-tuning with the proposed weighted loss function
significantly improves segmentation accuracy; and 3) our method leads to
accurate results with fewer user interactions and less user time than
traditional interactive segmentation methods. | [
"cs.CV"
] |
Smart automated traffic enforcement solutions have been gaining popularity in
recent years. These solutions are ubiquitously used for seat-belt violation
detection, red-light violation detection and speed violation detection
purposes. Highly accurate license plate recognition is an indispensable part of
these systems. However, general license plate recognition systems require high
resolution images for high performance. In this study, we propose a novel
license plate recognition method for general roadway surveillance cameras.
The proposed segmentation-free license plate recognition algorithm utilizes deep
learning based object detection techniques in the character detection and
recognition process. The proposed method has been tested on 2000 images captured on
a roadway. | [
"cs.CV"
] |
Cross-validation (CV) is one of the main tools for performance estimation and
parameter tuning in machine learning. The general recipe for computing a CV
estimate is to run a learning algorithm separately for each CV fold, a
computationally expensive process. In this paper, we propose a new approach to
reduce the computational burden of CV-based performance estimation. As opposed
to all previous attempts, which are specific to a particular learning model or
problem domain, we propose a general method applicable to a large class of
incremental learning algorithms, which are uniquely fitted to big data
problems. In particular, our method applies to a wide range of supervised and
unsupervised learning tasks with different performance criteria, as long as the
base learning algorithm is incremental. We show that the running time of the
algorithm scales logarithmically, rather than linearly, in the number of CV
folds. Furthermore, the algorithm has favorable properties for parallel and
distributed implementation. Experiments with state-of-the-art incremental
learning algorithms confirm the practicality of the proposed method. | [
"stat.ML",
"cs.AI",
"cs.LG"
] |
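One way incremental learners can make CV cost scale logarithmically in the number of folds, as claimed above, is a divide-and-conquer over folds: each sample is absorbed via partial_fit O(log k) times in total rather than O(k). This sketch illustrates that idea with scikit-learn's SGDClassifier; it is not the paper's exact algorithm, and the data is synthetic.

```python
import copy
import numpy as np
from sklearn.linear_model import SGDClassifier

def cv_scores(model, folds, scores):
    """`model` is already trained on every fold outside `folds`."""
    if len(folds) == 1:
        X, y = folds[0]
        scores.append(model.score(X, y))    # held-out fold evaluation
        return
    mid = len(folds) // 2
    left, right = folds[:mid], folds[mid:]
    m = copy.deepcopy(model)
    for X, y in right:                      # absorb right half incrementally
        m.partial_fit(X, y, classes=[0, 1])
    cv_scores(m, left, scores)
    m = copy.deepcopy(model)
    for X, y in left:                       # absorb left half incrementally
        m.partial_fit(X, y, classes=[0, 1])
    cv_scores(m, right, scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 5)); y = (X[:, 0] > 0).astype(int)
folds = [(X[i::8], y[i::8]) for i in range(8)]
scores = []
cv_scores(SGDClassifier(random_state=0), folds, scores)
print(np.mean(scores))
```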
The conditional generative adversarial network (cGAN) is a powerful tool of
generating high-quality images, but existing approaches mostly suffer from
unsatisfactory performance or the risk of mode collapse. This paper presents
Omni-GAN, a variant of cGAN that reveals the devil in designing a proper
discriminator for training the model. The key is to ensure that the
discriminator receives strong supervision to perceive the concepts and moderate
regularization to avoid collapse. Omni-GAN is easily implemented and freely
integrated with off-the-shelf encoding methods (e.g., implicit neural
representation, INR). Experiments validate the superior performance of Omni-GAN
and Omni-INR-GAN in a wide range of image generation and restoration tasks. In
particular, Omni-INR-GAN sets new records on the ImageNet dataset with
impressive Inception scores of 262.85 and 343.22 for the image sizes of 128 and
256, respectively, surpassing the previous records by 100+ points. Moreover,
leveraging the generator prior, Omni-INR-GAN can extrapolate low-resolution
images to arbitrary resolution, even up to 60x higher or more. Code is
available. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
We propose a novel defense against all existing gradient based adversarial
attacks on deep neural networks for image classification problems. Our defense
is based on a combination of deep neural networks and simple image
transformations. While straightforward in implementation, this defense yields a
unique security property which we term buffer zones. We argue that our defense
based on buffer zones offers significant improvements over state-of-the-art
defenses. We are able to achieve this improvement even when the adversary has
access to the {\em entire} original training data set and unlimited query
access to the defense. We verify our claim through experimentation using
Fashion-MNIST and CIFAR-10: We demonstrate $<11\%$ attack success rate --
significantly lower than what other well-known state-of-the-art defenses offer
-- at the price of only an $11-18\%$ drop in clean accuracy. By using a new
intuitive metric, we explain why this trade-off offers a significant
improvement over prior work. | [
"cs.LG",
"cs.CR",
"eess.IV",
"stat.ML"
] |
High-dimensional time series prediction is needed in applications as diverse
as demand forecasting and climatology. Often, such applications require methods
that are both highly scalable, and deal with noisy data in terms of corruptions
or missing values. Classical time series methods usually fall short of handling
both these issues. In this paper, we propose to adapt matrix completion
approaches that have previously been successfully applied to large-scale noisy
data, but which fail to adequately model high-dimensional time series due to
temporal dependencies. We present a novel temporal regularized matrix
factorization (TRMF) framework which supports data-driven temporal dependency
learning and brings forecasting ability to our new matrix factorization
approach. TRMF is highly general, and subsumes many existing matrix
factorization approaches for time series data. We make interesting connections
to graph regularized matrix factorization methods in the context of learning
the dependencies. Experiments on both real and synthetic data show that TRMF
outperforms several existing approaches for common time series tasks. | [
"cs.LG",
"stat.ML"
] |
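A hedged sketch of the temporally regularized factorization idea: approximate Y ≈ F X while encouraging the temporal factors X to follow an AR(1) relation x_t ≈ w ∘ x_{t-1}, which then supports forecasting. TRMF supports general, learned lag structures; this fixed lag-1, plain gradient-descent version with made-up dimensions and step sizes is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, k = 20, 100, 4
Y = rng.normal(size=(n, T))                     # n series, T time steps
F = rng.normal(scale=0.1, size=(n, k))          # series loadings
X = rng.normal(scale=0.1, size=(k, T))          # temporal latent factors
w = np.ones(k)                                  # per-dimension AR(1) weights
lam, eta = 0.1, 0.005

for _ in range(500):
    R = F @ X - Y                               # reconstruction residual
    D = X[:, 1:] - w[:, None] * X[:, :-1]       # temporal (AR) residual
    grad_F = R @ X.T
    grad_X = F.T @ R
    grad_X[:, 1:] += lam * D
    grad_X[:, :-1] -= lam * w[:, None] * D
    grad_w = -lam * (D * X[:, :-1]).sum(axis=1)
    F -= eta * grad_F; X -= eta * grad_X; w -= eta * grad_w

x_next = w * X[:, -1]              # one-step forecast of the latent factors
print((F @ x_next).shape)          # (n,) forecast for all series
```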
Most datasets of interest to the analytics industry are impacted by various
forms of human bias. The outcomes of Data Analytics [DA] or Machine Learning
[ML] on such data are therefore prone to replicating the bias. As a result, a
large number of biased decision-making systems based on DA/ML have recently
attracted attention. In this paper we introduce Rosa, a free, web-based tool to
easily de-bias datasets with respect to a chosen characteristic. Rosa is based
on the principles of Fair Adversarial Networks, developed by illumr Ltd., and
can therefore remove interactive, non-linear, and non-binary bias. Rosa is a
stand-alone pre-processing step / API, meaning it can be used easily with any
DA/ML pipeline. We test the efficacy of Rosa in removing bias from data-driven
decision making systems by performing standard DA tasks on five real-world
datasets, selected for their relevance to current DA problems, and also their
high potential for bias. We use simple ML models to model a characteristic of
analytical interest, and compare the level of bias in the model output both
with and without Rosa as a pre-processing step. We find that in all cases there
is a substantial decrease in bias of the data-driven decision making systems
when the data is pre-processed with Rosa. | [
"cs.LG",
"stat.AP"
] |
Deep neural networks have exhibited promising performance in image
super-resolution (SR) due to the power in learning the non-linear mapping from
low-resolution (LR) images to high-resolution (HR) images. However, most deep
learning methods employ feed-forward architectures, and thus the dependencies
between LR and HR images are not fully exploited, leading to limited learning
performance. Moreover, most deep learning based SR methods apply the pixel-wise
reconstruction error as the loss, which, however, may fail to capture
high-frequency information and produce perceptually unsatisfying results,
whilst recent perceptual losses rely on pre-trained deep models and
may not generalize well. In this paper, we introduce a mask to separate
the image into low- and high-frequency parts based on image gradient magnitude,
and then devise a gradient sensitive loss to well capture the structures in the
image without sacrificing the recovery of low-frequency content. Moreover, by
investigating the duality in SR, we develop a dual reconstruction network (DRN)
to improve the SR performance. We provide theoretical analysis on the
generalization performance of our method and demonstrate its effectiveness and
superiority with thorough experiments. | [
"cs.CV"
] |
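A minimal sketch of the gradient-based mask and loss described above: split the image into low- and high-frequency parts by thresholding the gradient magnitude, then weight the reconstruction error so structures are emphasized without sacrificing smooth regions. The Sobel operator, threshold, and weights are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gradient_magnitude(img):
    """img: (B, 1, H, W) grayscale tensor."""
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def gradient_sensitive_loss(sr, hr, tau=0.5, w_high=2.0, w_low=1.0):
    mask = (gradient_magnitude(hr) > tau).float()  # 1 on high-frequency pixels
    err = torch.abs(sr - hr)
    return (w_high * mask * err + w_low * (1 - mask) * err).mean()

hr = torch.rand(2, 1, 32, 32)
sr = torch.rand(2, 1, 32, 32, requires_grad=True)
gradient_sensitive_loss(sr, hr).backward()
```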
Deep learning has been shown to be successful in a number of domains, ranging
from acoustics, images, to natural language processing. However, applying deep
learning to the ubiquitous graph data is non-trivial because of the unique
characteristics of graphs. Recently, substantial research efforts have been
devoted to applying deep learning methods to graphs, resulting in beneficial
advances in graph analysis techniques. In this survey, we comprehensively
review the different types of deep learning methods on graphs. We divide the
existing methods into five categories based on their model architectures and
training strategies: graph recurrent neural networks, graph convolutional
networks, graph autoencoders, graph reinforcement learning, and graph
adversarial methods. We then provide a comprehensive overview of these methods
in a systematic manner mainly by following their development history. We also
analyze the differences and compositions of different methods. Finally, we
briefly outline the applications in which they have been used and discuss
potential future research directions. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
We propose an end-to-end framework for training domain specific models (DSMs)
to obtain both high accuracy and computational efficiency for object detection
tasks. DSMs are trained with distillation \cite{hinton2015distilling} and focus
on achieving high accuracy at a limited domain (e.g. fixed view of an
intersection). We argue that DSMs can capture essential features well even with
a small model size, enabling higher accuracy and efficiency than traditional
techniques. In addition, we improve training efficiency by reducing the
dataset size, culling easy-to-classify images from the training set. For the
limited domain, we observed that compact DSMs significantly surpass the
accuracy of COCO trained models of the same size. By training on a compact
dataset, we show that with an accuracy drop of only 3.6\%, the training time
can be reduced by 93\%. The code is available at
https://github.com/kentaroy47/training-domain-specific-models. | [
"cs.CV",
"cs.LG"
] |
Since its introduction, unsupervised representation learning has attracted a
lot of attention from the research community, as it is demonstrated to be
highly effective and easy-to-apply in tasks such as dimension reduction,
clustering, visualization, information retrieval, and semi-supervised learning.
In this work, we propose a novel unsupervised representation learning framework
called neighbor-encoder, in which domain knowledge can be easily incorporated
into the learning process without modifying the general encoder-decoder
architecture of the classic autoencoder. In contrast to the autoencoder, which
reconstructs the input data itself, neighbor-encoder reconstructs the input
data's neighbors. As the proposed representation learning problem is
essentially a neighbor reconstruction problem, domain knowledge can be easily
incorporated in the form of an appropriate definition of similarity between
objects. Based on that observation, our framework can leverage any
off-the-shelf similarity search algorithms or side information to find the
neighbor of an input object. Applications of other algorithms (e.g.,
association rule mining) in our framework are also possible, given that the
appropriate definition of neighbor can vary in different contexts. We have
demonstrated the effectiveness of our framework in many diverse domains,
including images, text, and time series, and for various data mining tasks
including classification, clustering, and visualization. Experimental results
show that neighbor-encoder not only outperforms autoencoder in most of the
scenarios we consider, but also achieves the state-of-the-art performance on
text document clustering. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
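A compact sketch of the neighbor-encoder idea: the same encoder-decoder shape as an autoencoder, but the reconstruction target is a neighbor of the input, here found with off-the-shelf k-NN in raw space (any domain-specific similarity could be substituted). Sizes, data, and training length are illustrative.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import NearestNeighbors

X = np.random.rand(500, 32).astype(np.float32)
nbrs = NearestNeighbors(n_neighbors=2).fit(X)
_, idx = nbrs.kneighbors(X)
targets = X[idx[:, 1]]                   # nearest neighbor other than self

enc = nn.Sequential(nn.Linear(32, 8), nn.ReLU())
dec = nn.Sequential(nn.Linear(8, 32))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), 1e-3)

x, t = torch.from_numpy(X), torch.from_numpy(targets)
for _ in range(200):
    loss = ((dec(enc(x)) - t) ** 2).mean()   # reconstruct the neighbor
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```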
Brain image segmentation is used for visualizing and quantifying anatomical
structures of the brain. We present an automated approach using 2D deep
residual dilated networks which captures rich context information of different
tissues for the segmentation of eight brain structures. The proposed system was
evaluated in the MICCAI Brain Segmentation Challenge and ranked 9th out of 22
teams. We further compared the method with traditional U-Net using
leave-one-subject-out cross-validation setting on the public dataset.
Experimental results show that the proposed method outperforms the traditional
U-Net (i.e. 80.9% vs 78.3% in averaged Dice score, 4.35mm vs 11.59mm in
averaged robust Hausdorff distance) and is computationally efficient. | [
"cs.CV"
] |
Graph embedding has recently gained momentum in the research community, in
particular after the introduction of random walk and neural network based
approaches. However, most of the embedding approaches focus on representing the
local neighborhood of nodes and fail to capture the global graph structure,
i.e. to retain the relations to distant nodes. To counter that problem, we
propose a novel extension to random walk based graph embedding, which removes a
percentage of least frequent nodes from the walks at different levels. By this
removal, we simulate farther distant nodes to reside in the close neighborhood
of a node and hence explicitly represent their connection. Besides the common
evaluation tasks for graph embeddings, such as node classification and link
prediction, we evaluate and compare our approach against related methods on
shortest path approximation. The results indicate that extensions to random
walk based methods (including our own) improve the predictive performance only
slightly, if at all. | [
"cs.LG",
"stat.ML"
] |
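A small sketch of the proposed walk filtering: after sampling random walks, drop a percentage of the least frequent nodes from every walk, which pulls more distant nodes into each other's context windows before training a skip-gram model (gensim Word2Vec). The toy walks and all parameters are illustrative.

```python
from collections import Counter
from gensim.models import Word2Vec

def filter_walks(walks, drop_fraction=0.2):
    counts = Counter(node for walk in walks for node in walk)
    n_drop = int(len(counts) * drop_fraction)
    dropped = {n for n, _ in counts.most_common()[-n_drop:]}  # least frequent
    return [[n for n in walk if n not in dropped] for walk in walks]

walks = [["a", "b", "c", "d"], ["b", "c", "e", "a"], ["c", "a", "b", "f"]]
model = Word2Vec(filter_walks(walks), vector_size=16, window=2, min_count=1)
print(model.wv["a"].shape)
```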
Motivated by real-world applications such as fast fashion retailing and
online advertising, the Multinomial Logit Bandit (MNL-bandit) is a popular
model in online learning and operations research, and has attracted much
attention in the past decade. However, it is a bit surprising that pure
exploration, a basic problem in bandit theory, has not been well studied in
MNL-bandit so far. In this paper we give efficient algorithms for pure
exploration in MNL-bandit. Our algorithms achieve instance-sensitive pull
complexities. We also complement the upper bounds by an almost matching lower
bound. | [
"cs.LG",
"cs.DS"
] |
Autonomous Micro Aerial Vehicles (MAVs) gained tremendous attention in recent
years. Autonomous indoor flight requires a dense depth map for navigable
space detection, which is the fundamental component of autonomous navigation.
In this paper, we address the problem of reconstructing dense depth while a
drone is hovering (small camera motion) in indoor scenes using already
estimated cameras and sparse point cloud obtained from a vSLAM. We start by
segmenting the scene based on sudden depth variation using sparse 3D points and
introduce a patch-based local plane fitting via energy minimization which
combines photometric consistency and co-planarity with neighbouring patches.
The method also combines a plane sweep technique for image segments having
almost no sparse points for initialization. Experiments show that the proposed
method produces better depth for indoor scenes under artificial lighting and in
low-textured environments, compared to earlier small-motion literature. | [
"cs.CV"
] |
We address the problem of generating images across two drastically different
views, namely ground (street) and aerial (overhead) views. Image synthesis by
itself is a very challenging computer vision task and is even more so when
generation is conditioned on an image in another view. Due to the difference in
viewpoints, there is a small overlapping field of view and little common content
between these two views. Here, we try to preserve the pixel information between
the views so that the generated image is a realistic representation of the
cross-view input image. For this, we propose to use homography as a guide to map the
images between the views based on the common field of view to preserve the
details in the input image. We then use generative adversarial networks to
inpaint the missing regions in the transformed image and add realism to it. Our
exhaustive evaluation and model comparison demonstrate that utilizing geometry
constraints adds fine details to the generated images and can be a better
approach for cross view image synthesis than purely pixel based synthesis
methods. | [
"cs.CV"
] |
In some memory-constrained settings like IoT devices and over-the-network
data pipelines, it can be advantageous to have smaller contextual embeddings.
We investigate the efficacy of projecting contextual embedding data (BERT) onto
a manifold, and using nonlinear dimensionality reduction techniques to compress
these embeddings. In particular, we propose a novel post-processing approach,
applying a combination of Isomap and PCA. We find that the geodesic distance
estimations, estimates of the shortest path on a Riemannian manifold, from
Isomap's k-Nearest Neighbors graph bolstered the performance of the compressed
embeddings to be comparable to the original BERT embeddings. On one dataset, we
find that despite a 12-fold dimensionality reduction, the compressed embeddings
performed within 0.1% of the original BERT embeddings on a downstream
classification task. In addition, we find that this approach works particularly
well on tasks reliant on syntactic data, when compared with linear
dimensionality reduction. These results show promise for a novel geometric
approach to achieve lower dimensional text embeddings from existing
transformers and pave the way for data-specific and application-specific
embedding compressions. | [
"cs.LG",
"cs.CL"
] |
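A hedged sketch of the post-processing pipeline described above: compress pre-computed BERT embeddings with a combination of Isomap (geodesic distances estimated from a k-NN graph) and PCA. How the two reductions are combined in the paper may differ; here they are simply concatenated, and 768 -> 64 dimensions matches the 12-fold reduction mentioned. The random stand-in embeddings are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

emb = np.random.randn(300, 768)              # stand-in for BERT embeddings

iso = Isomap(n_neighbors=10, n_components=32).fit_transform(emb)
pca = PCA(n_components=32).fit_transform(emb)
compressed = np.hstack([iso, pca])           # 768 -> 64 dims (12x reduction)
print(compressed.shape)
```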
Representation learning is at the heart of what makes deep learning
effective. In this work, we introduce a new framework for representation
learning that we call "Holographic Neural Architectures" (HNAs). In the same
way that an observer can experience the 3D structure of a holographed object by
looking at its hologram from several angles, HNAs derive Holographic
Representations from the training set. These representations can then be
explored by moving along a continuous bounded single dimension. We show that
HNAs can be used to make generative networks, state-of-the-art regression
models and that they are inherently highly resistant to noise. Finally, we
argue that because of their denoising abilities and their capacity to
generalize well from very few examples, models based upon HNAs are particularly
well suited for biological applications where training examples are rare or
noisy. | [
"stat.ML",
"cs.AI",
"cs.LG",
"q-bio.GN",
"q-bio.TO",
"68T30, 68T05, 62-07l",
"I.2.0; I.2.4; I.2.6; G.3"
] |
In this work, we investigate the application of trainable and spectrally
initializable matrix transformations on the feature maps produced by
convolution operations. While previous literature has already demonstrated the
possibility of adding static spectral transformations as feature processors,
our focus is on more general trainable transforms. We study the transforms in
various architectural configurations on four datasets of different nature: from
medical (ColorectalHist, HAM10000) and natural (Flowers, ImageNet) images to
historical documents (CB55) and handwriting recognition (GPDS). With rigorous
experiments that control for the number of parameters and randomness, we show
that networks utilizing the introduced matrix transformations outperform
vanilla neural networks. The observed accuracy increases by an average of 2.2
points across all datasets. In addition, we show that spectral
initialization leads to significantly faster convergence than
randomly initialized matrix transformations. The transformations are
implemented as auto-differentiable PyTorch modules that can be incorporated
into any neural network architecture. The entire code base is open-source. | [
"cs.CV",
"cs.LG"
] |
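An illustrative PyTorch module for a trainable, spectrally initializable matrix transformation applied across the channel dimension of a feature map. The DCT-II initialization is one plausible choice of "spectral" initializer; the paper studies several configurations, and this sketch is not its exact module.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.fft import dct

class SpectralTransform(nn.Module):
    def __init__(self, channels: int, spectral_init: bool = True):
        super().__init__()
        if spectral_init:
            W = dct(np.eye(channels), norm="ortho")    # DCT-II basis matrix
            self.weight = nn.Parameter(torch.tensor(W, dtype=torch.float32))
        else:
            self.weight = nn.Parameter(torch.randn(channels, channels))

    def forward(self, x):                              # x: (B, C, H, W)
        # Trainable matrix transform mixing channels at every spatial location.
        return torch.einsum("oc,bchw->bohw", self.weight, x)

layer = SpectralTransform(16)
print(layer(torch.randn(2, 16, 8, 8)).shape)
```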
In this work, a system for creating a relightable 3D portrait of a human head
is presented. Our neural pipeline operates on a sequence of frames captured by
a smartphone camera with the flash blinking (flash-no flash sequence). A coarse
point cloud reconstructed via structure-from-motion software and multi-view
denoising is then used as a geometric proxy. Afterwards, a deep rendering
network is trained to regress dense albedo, normals, and environmental lighting
maps for arbitrary new viewpoints. Effectively, the proxy geometry and the
rendering network constitute a relightable 3D portrait model, that can be
synthesized from an arbitrary viewpoint and under arbitrary lighting, e.g.
directional light, point light, or an environment map. The model is fitted to
the sequence of frames with human face-specific priors that enforce the
plausibility of the albedo-lighting decomposition, and it operates at
interactive frame rates. We evaluate the performance of the method under varying lighting
conditions and at the extrapolated viewpoints and compare with existing
relighting methods. | [
"cs.CV"
] |
This work presents a supervised learning based approach to the computer
vision problem of frame interpolation. The presented technique could also be
used in cartoon animation, since drawing each individual frame consumes a
noticeable amount of time. Most existing solutions to this problem use
unsupervised methods and focus only on real-life videos with already high frame
rates. However, experiments show that such methods do not work as well when
the frame rate becomes low and object displacements between frames become
large. This is because interpolating large-displacement motion requires
knowledge of the motion structure, so simple techniques such as frame
averaging start to fail. In this work a deep convolutional
neural network is used to solve the frame interpolation problem. In addition,
it is shown that incorporating the prior information such as optical flow
improves the interpolation quality significantly. | [
"cs.CV"
] |
In many machine learning applications, we are faced with incomplete datasets.
In the literature, missing data imputation techniques have been mostly
concerned with filling missing values. However, the existence of missing values
is synonymous with uncertainties not only over the distribution of missing
values but also over target class assignments that require careful
consideration. In this paper, we propose a simple and effective method for
imputing missing features and estimating the distribution of target assignments
given incomplete data. To make imputations, we train a generator network to
produce imputations that a discriminator network is tasked to distinguish.
Following this, a predictor network is
trained using the imputed samples from the generator network to capture the
classification uncertainties and make predictions accordingly. The proposed
method is evaluated on CIFAR-10 and MNIST image datasets as well as five
real-world tabular classification datasets, under different missingness rates
and structures. Our experimental results show the effectiveness of the proposed
method in generating imputations as well as providing estimates for the class
uncertainties in a classification task when faced with missing values. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
When doing representation learning on data that lives on a known non-trivial
manifold embedded in high dimensional space, it is natural to desire the
encoder to be homeomorphic when restricted to the manifold, so that it is
bijective and continuous with a continuous inverse. Using topological
arguments, we show that when the manifold is non-trivial, the encoder must be
globally discontinuous and propose a universal, albeit impractical,
construction. In addition, we derive necessary constraints which need to be
satisfied when designing manifold-specific practical encoders. These are used
to analyse candidates for a homeomorphic encoder for the manifold of 3D
rotations $SO(3)$. | [
"stat.ML",
"cs.LG"
] |
Reinforcement learning (RL) provides an appealing formalism for learning
control policies from experience. However, the classic active formulation of RL
necessitates a lengthy active exploration process for each behavior, making it
difficult to apply in real-world settings such as robotic control. If we can
instead allow RL algorithms to effectively use previously collected data to aid
the online learning process, such applications could be made substantially more
practical: the prior data would provide a starting point that mitigates
challenges due to exploration and sample complexity, while the online training
enables the agent to perfect the desired skill. Such prior data could either
constitute expert demonstrations or sub-optimal prior data that illustrates
potentially useful transitions. While a number of prior methods have either
used optimal demonstrations to bootstrap RL, or have used sub-optimal data to
train purely offline, it remains exceptionally difficult to train a policy with
offline data and actually continue to improve it further with online RL. In
this paper we analyze why this problem is so challenging, and propose an
algorithm that combines sample efficient dynamic programming with maximum
likelihood policy updates, providing a simple and effective framework that is
able to leverage large amounts of offline data and then quickly perform online
fine-tuning of RL policies. We show that our method, advantage weighted actor
critic (AWAC), enables rapid learning of skills with a combination of prior
demonstration data and online experience. We demonstrate these benefits on
simulated and real-world robotics domains, including dexterous manipulation
with a real multi-fingered hand, drawer opening with a robotic arm, and
rotating a valve. Our results show that incorporating prior data can reduce the
time required to learn a range of robotic skills to practical time-scales. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
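A schematic of the AWAC actor update: advantage-weighted maximum likelihood over (offline or replay) transitions, with advantages from learned critics. The critic updates and many details are omitted; the toy policy, the stand-in critics, and the temperature lam are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    def __init__(self, s_dim, a_dim):
        super().__init__()
        self.mu = nn.Linear(s_dim, a_dim)
        self.log_std = nn.Parameter(torch.zeros(a_dim))
    def forward(self, s):
        return torch.distributions.Normal(self.mu(s), self.log_std.exp())

def awac_actor_loss(policy, q_fn, v_fn, s, a, lam=1.0):
    with torch.no_grad():
        weights = torch.exp((q_fn(s, a) - v_fn(s)) / lam)  # exp(A(s,a)/lam)
    log_prob = policy(s).log_prob(a).sum(-1)
    return -(weights * log_prob).mean()    # weighted maximum likelihood

policy = GaussianPolicy(4, 2)
q_fn = lambda s, a: s.sum(-1) + a.sum(-1)  # stand-in critic
v_fn = lambda s: s.sum(-1)                 # stand-in value baseline
s, a = torch.randn(8, 4), torch.randn(8, 2)
awac_actor_loss(policy, q_fn, v_fn, s, a).backward()
```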
Over the last decade, a variety of new neurophysiological experiments have
led to new insights as to how, when and where retinal processing takes place,
and the nature of the retinal representation encoding sent to the cortex for
further processing. Based on these neurobiological discoveries, in our previous
work, we provided computer simulation evidence suggesting that geometrical
illusions are explained in part by the interaction of multiscale visual
processing performed in the retina. The output of our retinal stage model,
named Vis-CRF, is presented here for a sample natural image and for several
types of Tilt Illusion, in which the final tilt percept arises from multiple
scale processing of Difference of Gaussians (DoG) and the perceptual
interaction of foreground and background elements (Nematzadeh and Powers, 2019;
Nematzadeh, 2018; Nematzadeh, Powers and Lewis, 2017; Nematzadeh, Lewis and
Powers, 2015). | [
"cs.CV"
] |
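A minimal Difference-of-Gaussians (DoG) filter applied at multiple scales, the basic center-surround operation underlying the multiscale retinal model discussed above. The sigma values and surround ratio are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image: np.ndarray, sigma_center: float, surround_ratio: float = 1.6):
    """Center-surround response: narrow Gaussian minus wider Gaussian."""
    center = gaussian_filter(image, sigma_center)
    surround = gaussian_filter(image, sigma_center * surround_ratio)
    return center - surround

img = np.random.rand(64, 64)
multiscale = np.stack([dog(img, s) for s in (1.0, 2.0, 4.0)])
print(multiscale.shape)   # (3, 64, 64): one DoG response per scale
```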
Multivariate time series prediction has applications in a wide variety of
domains and is considered to be a very challenging task, especially when the
variables have correlations and exhibit complex temporal patterns, such as
seasonality and trend. Many existing methods suffer from strong statistical
assumptions, numerical issues with high dimensionality, manual feature
engineering efforts, and scalability. In this work, we present a novel deep
learning architecture, known as Temporal Tensor Transformation Network, which
transforms the original multivariate time series into a higher order of tensor
through the proposed Temporal-Slicing Stack Transformation. This yields a new
representation of the original multivariate time series, which enables the
convolution kernel to extract complex and non-linear features as well as
variable interactional signals from a relatively large temporal region.
Experimental results show that Temporal Tensor Transformation Network
outperforms several state-of-the-art methods on window-based predictions across
various tasks. The proposed architecture also demonstrates robust prediction
performance through an extensive sensitivity analysis. | [
"cs.LG",
"stat.ML"
] |
Complex data structures such as time series are increasingly present in
modern data science problems. A fundamental question is whether two such
time-series are statistically dependent. Many current approaches make
parametric assumptions on the random processes, only detect linear association,
require multiple tests, or forfeit power in high-dimensional, nonlinear
settings. Estimating the distribution of any test statistic under the null is
non-trivial, as the permutation test is invalid. This work juxtaposes distance
correlation (Dcorr) and multiscale graph correlation (MGC) from independence
testing literature and block permutation from time series analysis to address
these challenges. The proposed nonparametric procedure is valid and consistent,
building upon prior work by characterizing the geometry of the relationship,
estimating the time lag at which dependence is maximized, avoiding the need for
multiple testing, and exhibiting superior power in high-dimensional, low sample
size, nonlinear settings. Neural connectivity is analyzed via fMRI data,
revealing linear dependence of signals within the visual network and default
mode network, and nonlinear relationships in other networks. This work uncovers
a first-resort data analysis tool with open-source code available, directly
impacting a wide range of scientific disciplines. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
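A sketch of the block-permutation test for time-series dependence: compute distance correlation (Dcorr) between series X and Y, then build the null distribution by permuting Y in contiguous blocks, which preserves within-block autocorrelation and keeps the test valid where a naive permutation is not. The block size, replicate count, and toy data are illustrative; MGC would replace Dcorr in the multiscale variant.

```python
import numpy as np

def dist_corr(x, y):
    def centered(a):
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()
    return dcov2 / np.sqrt((A * A).mean() * (B * B).mean())

def block_permutation_test(x, y, block=10, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    stat = dist_corr(x, y)
    n_blocks = len(y) // block
    blocks = y[: n_blocks * block].reshape(n_blocks, block)
    null = []
    for _ in range(n_perm):
        perm = blocks[rng.permutation(n_blocks)].ravel()  # shuffle whole blocks
        null.append(dist_corr(x[: len(perm)], perm))
    return stat, np.mean(np.array(null) >= stat)          # statistic, p-value

t = np.arange(200)
x = np.sin(t / 10) + np.random.default_rng(1).normal(0, 0.1, 200)
y = x ** 2 + np.random.default_rng(2).normal(0, 0.1, 200)  # nonlinear link
print(block_permutation_test(x, y))
```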
Recently, very deep convolutional neural networks (CNNs) have been attracting
considerable attention in image restoration. However, as the depth grows, the
long-term dependency problem is rarely realized for these very deep models,
which results in the prior states/layers having little influence on the
subsequent ones. Motivated by the fact that human thoughts have persistency, we
propose a very deep persistent memory network (MemNet) that introduces a memory
block, consisting of a recursive unit and a gate unit, to explicitly mine
persistent memory through an adaptive learning process. The recursive unit
learns multi-level representations of the current state under different
receptive fields. The representations and the outputs from the previous memory
blocks are concatenated and sent to the gate unit, which adaptively controls
how much of the previous states should be reserved, and decides how much of the
current state should be stored. We apply MemNet to three image restoration
tasks, i.e., image denoising, super-resolution, and JPEG deblocking.
Comprehensive experiments demonstrate the necessity of the MemNet and its
unanimous superiority on all three tasks over the state of the art. Code is
available at https://github.com/tyshiwo/MemNet. | [
"cs.CV"
] |
Graph representation learning is to learn universal node representations that
preserve both node attributes and structural information. The derived node
representations can be used to serve various downstream tasks, such as node
classification and node clustering. When a graph is heterogeneous, the problem
becomes more challenging than the homogeneous graph node learning problem.
Inspired by the emerging information theoretic-based learning algorithm, in
this paper we propose an unsupervised graph neural network Heterogeneous Deep
Graph Infomax (HDGI) for heterogeneous graph representation learning. We use
the meta-path structure to analyze the connections involving semantics in
heterogeneous graphs, and utilize a graph convolution module and a semantic-level
attention mechanism to capture local representations. By maximizing
local-global mutual information, HDGI effectively learns high-level node
representations that can be utilized in downstream graph-related tasks.
Experimental results show that HDGI remarkably outperforms state-of-the-art
unsupervised graph representation learning methods on both classification and
clustering tasks. By feeding the learned representations into a parametric
model, such as logistic regression, we even achieve comparable performance in
node classification tasks when comparing with state-of-the-art supervised
end-to-end GNN models. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
We introduce Egocentric Object Manipulation Graphs (Ego-OMG) - a novel
representation for activity modeling and anticipation of near future actions
integrating three components: 1) semantic temporal structure of activities, 2)
short-term dynamics, and 3) representations for appearance. Semantic temporal
structure is modeled through a graph, embedded through a Graph Convolutional
Network, whose states model characteristics of and relations between hands and
objects. These state representations derive from all three levels of
abstraction, and span segments delimited by the making and breaking of
hand-object contact. Short-term dynamics are modeled in two ways: A) through 3D
convolutions, and B) through anticipating the spatiotemporal end points of hand
trajectories, where hands come into contact with objects. Appearance is modeled
through deep spatiotemporal features produced through existing methods. We note
that in Ego-OMG it is simple to swap these appearance features, and thus
Ego-OMG is complementary to most existing action anticipation methods. We
evaluate Ego-OMG on the EPIC Kitchens Action Anticipation Challenge. The
consistency of the egocentric perspective of EPIC Kitchens allows for the
utilization of the hand-centric cues upon which Ego-OMG relies. We demonstrate
state-of-the-art performance, outranking all previously published methods
by large margins, and ranking first on the unseen test set and second on the
seen test set of the EPIC Kitchens Action Anticipation Challenge. We attribute
the success of Ego-OMG to the modeling of semantic structure captured over long
timespans. We evaluate the design choices made through several ablation
studies. Code will be released upon acceptance. | [
"cs.CV",
"cs.AI",
"cs.RO"
] |
The goal of this paper is to present a method for simultaneous trajectory and
local stabilizing policy optimization to generate local policies for
trajectory-centric model-based reinforcement learning (MBRL). This is motivated
by the fact that global policy optimization for non-linear systems could be a
very challenging problem both algorithmically and numerically. However, a lot
of robotic manipulation tasks are trajectory-centric, and thus do not require a
global model or policy. Due to inaccuracies in the learned model estimates, an
open-loop trajectory optimization process mostly results in very poor
performance when used on the real system. Motivated by these problems, we try
to formulate the problem of trajectory optimization and local policy synthesis
as a single optimization problem. It is then solved simultaneously as an
instance of nonlinear programming. We provide some results for analysis as well
as achieved performance of the proposed technique under some simplifying
assumptions. | [
"cs.LG",
"cs.RO",
"cs.SY",
"eess.SY",
"stat.ML"
] |
We study the problem of off-policy value evaluation in reinforcement learning
(RL), where one aims to estimate the value of a new policy based on data
collected by a different policy. This problem is often a critical step when
applying RL in real-world problems. Despite its importance, existing general
methods either have uncontrolled bias or suffer high variance. In this work, we
extend the doubly robust estimator for bandits to sequential decision-making
problems, which gets the best of both worlds: it is guaranteed to be unbiased
and can have a much lower variance than the popular importance sampling
estimators. We demonstrate the estimator's accuracy in several benchmark
problems, and illustrate its use as a subroutine in safe policy improvement. We
also provide theoretical results on the hardness of the problem, and show that
our estimator can match the lower bound in certain scenarios. | [
"cs.LG",
"cs.AI",
"cs.SY",
"stat.ME",
"stat.ML"
] |
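The doubly robust value estimate for a single trajectory, following the standard sequential DR recursion that combines per-step importance ratios with a learned Q-hat/V-hat baseline. The toy trajectory and constant models are placeholders; in practice the critics come from regression on the behavior data.

```python
import numpy as np

def doubly_robust(traj, q_hat, v_hat, gamma=0.99):
    """traj: list of (s, a, r, rho) with rho = pi_eval(a|s) / pi_behavior(a|s).
    Recursion: V_DR(t) = V̂(s_t) + rho_t * (r_t + gamma * V_DR(t+1) - Q̂(s_t, a_t))."""
    v_dr = 0.0
    for s, a, r, rho in reversed(traj):
        v_dr = v_hat(s) + rho * (r + gamma * v_dr - q_hat(s, a))
    return v_dr

traj = [(0, 1, 1.0, 0.9), (1, 0, 0.0, 1.1), (2, 1, 1.0, 1.0)]
print(doubly_robust(traj, q_hat=lambda s, a: 0.5, v_hat=lambda s: 0.5))
```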
Across a majority of pedestrian detection datasets, it is typically assumed
that pedestrians will be standing upright with respect to the image coordinate
system. This assumption, however, is not always valid for many vision-equipped
mobile platforms such as mobile phones, UAVs or construction vehicles on rugged
terrain. In these situations, the motion of the camera can cause images of
pedestrians to be captured at extreme angles. This can lead to very poor
pedestrian detection performance when using standard pedestrian detectors. To
address this issue, we propose a Rotational Rectification Network (R2N) that
can be inserted into any CNN-based pedestrian (or object) detector to adapt it
to significant changes in camera rotation. The rotational rectification network
uses a 2D rotation estimation module that passes rotational information to a
spatial transformer network to undistort image features. To enable robust
rotation estimation, we propose a Global Polar Pooling (GP-Pooling) operator to
capture rotational shifts in convolutional features. Through our experiments,
we show how our rotational rectification network can be used to improve the
performance of the state-of-the-art pedestrian detector under heavy image
rotation by up to 45%. | [
"cs.CV"
] |
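As a rough sketch of the rectification step, a predicted rotation can be applied to feature maps with a rotation-only spatial transformer (this shows only the undistortion mechanism; the 2D rotation estimation module and GP-Pooling are not reproduced here):

```python
import torch
import torch.nn.functional as F

def rectify(features, theta):
    """Counter-rotate feature maps (B, C, H, W) by the predicted roll angle
    theta (radians, one per batch element) via an affine grid + sampling."""
    cos, sin = torch.cos(theta), torch.sin(theta)
    zeros = torch.zeros_like(theta)
    mats = torch.stack([
        torch.stack([cos, -sin, zeros], dim=-1),
        torch.stack([sin, cos, zeros], dim=-1),
    ], dim=1)                                          # (B, 2, 3) rotation matrices
    grid = F.affine_grid(mats, features.size(), align_corners=False)
    return F.grid_sample(features, grid, align_corners=False)
```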
In this paper we show that High-Definition (HD) maps provide strong priors
that can boost the performance and robustness of modern 3D object detectors.
Towards this goal, we design a single stage detector that extracts geometric
and semantic features from the HD maps. As maps might not be available
everywhere, we also propose a map prediction module that estimates the map on
the fly from raw LiDAR data. We conduct extensive experiments on KITTI as well
as a large-scale 3D detection benchmark containing 1 million frames, and show
that the proposed map-aware detector consistently outperforms the
state-of-the-art in both mapped and un-mapped scenarios. Importantly, the whole
framework runs at 20 frames per second. | [
"cs.CV"
] |
Recommender systems play a crucial role in mitigating the problem of
information overload by suggesting personalized items or services to users. The
vast majority of traditional recommender systems consider the recommendation
procedure as a static process and make recommendations following a fixed
strategy. In this paper, we propose a novel recommender system with the
capability of continuously improving its strategies during the interactions
with users. We model the sequential interactions between users and a
recommender system as a Markov Decision Process (MDP) and leverage
Reinforcement Learning (RL) to automatically learn the optimal strategies via
recommending trial-and-error items and receiving reinforcements for these items
from users' feedback. In particular, we introduce an online user-agent
interacting environment simulator, which can pre-train and evaluate model
parameters offline before applying the model online. Moreover, we validate the
importance of list-wise recommendations during the interactions between users
and agent, and develop a novel approach to incorporate them into the proposed
framework LIRD for list-wise recommendations. The experimental results based on
a real-world e-commerce dataset demonstrate the effectiveness of the proposed
framework. | [
"cs.LG",
"stat.ML"
] |
Depth map super-resolution is a task with high practical demand in
industry. Existing color-guided depth map super-resolution
methods usually necessitate an extra branch to extract high-frequency detail
information from RGB image to guide the low-resolution depth map
reconstruction. However, because there are still some differences between the
two modalities, direct information transmission in the feature dimension or
edge map dimension cannot achieve satisfactory results, and may even trigger
texture copying in areas where the structures of the RGB-D pair are
inconsistent. Inspired by multi-task learning, we propose a joint learning
network of depth map super-resolution (DSR) and monocular depth estimation
(MDE) without introducing additional supervision labels. For the interaction of
two subnetworks, we adopt a differentiated guidance strategy and design two
bridges correspondingly. One is the high-frequency attention bridge (HABdg)
designed for the feature encoding process, which learns the high-frequency
information of the MDE task to guide the DSR task. The other is the content
guidance bridge (CGBdg) designed for the depth map reconstruction process,
which provides the content guidance learned from the DSR task for the MDE task. The
entire network architecture is highly portable and can provide a paradigm for
associating the DSR and MDE tasks. Extensive experiments on benchmark datasets
demonstrate that our method achieves competitive performance. Our code and
models are available at https://rmcong.github.io/proj_BridgeNet.html. | [
"cs.CV"
] |
As a crucial task of autonomous driving, 3D object detection has made great
progress in recent years. However, monocular 3D object detection remains a
challenging problem due to the unsatisfactory performance in depth estimation.
Most existing monocular methods typically directly regress the scene depth
while ignoring important relationships between the depth and various geometric
elements (e.g. bounding box sizes, 3D object dimensions, and object poses). In
this paper, we propose to learn geometry-guided depth estimation with
projective modeling to advance monocular 3D object detection. Specifically, a
principled geometry formula with projective modeling of 2D and 3D depth
predictions in the monocular 3D object detection network is devised. We further
implement and embed the proposed formula to enable geometry-aware deep
representation learning, allowing effective 2D and 3D interactions for boosting
the depth estimation. Moreover, we provide a strong baseline through addressing
substantial misalignment between 2D annotation and projected boxes to ensure
robust learning with the proposed geometric formula. Experiments on the KITTI
dataset show that our method remarkably improves the detection performance of
the state-of-the-art monocular-based method without extra data by 2.80% on the
moderate test setting. The model and code will be released at
https://github.com/YinminZhang/MonoGeo. | [
"cs.CV"
] |
We present Sandwich Batch Normalization (SaBN), an embarrassingly easy
improvement of Batch Normalization (BN) with only a few lines of code changes.
SaBN is motivated by addressing the inherent feature distribution heterogeneity
that can be identified in many tasks, which can arise from data
heterogeneity (multiple input domains) or model heterogeneity (dynamic
architectures, model conditioning, etc.). Our SaBN factorizes the BN affine
layer into one shared sandwich affine layer, cascaded by several parallel
independent affine layers. Concrete analysis reveals that, during optimization,
SaBN promotes balanced gradient norms while still preserving diverse gradient
directions: a property that many application tasks seem to favor. We
demonstrate the prevailing effectiveness of SaBN as a drop-in replacement in
four tasks: $\textbf{conditional image generation}$, $\textbf{neural
architecture search}$ (NAS), $\textbf{adversarial training}$, and
$\textbf{arbitrary style transfer}$. Leveraging SaBN immediately achieves
better Inception Score and FID on CIFAR-10 and ImageNet conditional image
generation with three state-of-the-art GANs; boosts the performance of a
state-of-the-art weight-sharing NAS algorithm significantly on NAS-Bench-201;
substantially improves the robust and standard accuracies for adversarial
defense; and produces superior arbitrary stylized results. We also provide
visualizations and analysis to help understand why SaBN works. Codes are
available at https://github.com/VITA-Group/Sandwich-Batch-Normalization. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
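A minimal PyTorch sketch of the factorization described above, assuming a hard domain/condition index is available per sample (the class name and conditioning interface are ours):

```python
import torch
import torch.nn as nn

class SandwichBN2d(nn.Module):
    """Parameter-free BN core, one shared 'sandwich' affine, then several
    parallel independent affines selected by a domain/condition index."""

    def __init__(self, num_features, num_domains):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)   # normalization only
        self.shared_gamma = nn.Parameter(torch.ones(1, num_features, 1, 1))
        self.shared_beta = nn.Parameter(torch.zeros(1, num_features, 1, 1))
        self.gammas = nn.Parameter(torch.ones(num_domains, num_features))
        self.betas = nn.Parameter(torch.zeros(num_domains, num_features))

    def forward(self, x, domain_idx):
        h = self.bn(x) * self.shared_gamma + self.shared_beta    # shared affine
        g = self.gammas[domain_idx].unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        b = self.betas[domain_idx].unsqueeze(-1).unsqueeze(-1)
        return h * g + b                                         # per-domain affine
```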
In this paper, we propose a deep generative adversarial network for
super-resolution considering the trade-off between perception and distortion.
Based on good performance of a recently developed model for super-resolution,
i.e., deep residual network using enhanced upscale modules (EUSR), the proposed
model is trained to improve perceptual performance with only a slight increase
in distortion. For this purpose, together with the conventional content loss,
i.e., a reconstruction loss such as L1 or L2, we consider additional losses in
the training phase: the discrete cosine transform coefficients loss and the
differential content loss. These address the perceptual part of the content
loss, i.e., proper treatment of high-frequency components, which is helpful for
the trade-off problem in super-resolution. The experimental results show that
our proposed model has good performance for both perception and distortion, and
is effective in perceptual super-resolution applications. | [
"cs.CV"
] |
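A sketch of the two auxiliary loss terms named above, on single-channel images (we assume a full-image DCT and first-order finite differences; the paper's exact block sizes and weighting may differ):

```python
import numpy as np
from scipy.fft import dctn

def dct_coefficients_loss(sr, hr):
    """L2 distance between 2D DCT coefficients, penalizing frequency-content
    mismatch between super-resolved (sr) and ground-truth (hr) images."""
    return np.mean((dctn(sr, norm='ortho') - dctn(hr, norm='ortho')) ** 2)

def differential_content_loss(sr, hr):
    """L2 distance between horizontal/vertical image gradients."""
    dx = lambda im: im[:, 1:] - im[:, :-1]
    dy = lambda im: im[1:, :] - im[:-1, :]
    return np.mean((dx(sr) - dx(hr)) ** 2) + np.mean((dy(sr) - dy(hr)) ** 2)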
The emergence of digital technologies such as smartphones in healthcare
applications has demonstrated the possibility of developing rich, continuous,
and objective measures of multiple sclerosis (MS) disability that can be
administered remotely and out-of-clinic. In this work, deep convolutional
neural networks (DCNN) applied to smartphone inertial sensor data were shown to
better distinguish healthy from MS participant ambulation, compared to standard
Support Vector Machine (SVM) feature-based methodologies. To overcome the
typical limitations associated with remotely generated health data, such as low
subject numbers, sparsity, and heterogeneous data, a transfer learning (TL)
model from similar large open-source datasets was proposed. Our TL framework
utilised the ambulatory information learned on Human Activity Recognition (HAR)
tasks collected from similar smartphone-based sensor data. A lack of
transparency of "black-box" deep networks remains one of the largest stumbling
blocks to the wider acceptance of deep learning for clinical applications.
Ensuing work therefore aimed to visualise DCNN decisions through relevance
heatmaps attributed using Layer-Wise Relevance Propagation (LRP). Through the
LRP framework, the patterns captured from smartphone-based inertial sensor data
that were reflective of those who are healthy versus persons with MS (PwMS)
could begin to be established and understood. Interpretations suggested that
cadence-based measures, gait speed, and ambulation-related signal perturbations
were distinct characteristics that distinguished MS disability from healthy
participants. Robust and interpretable outcomes, generated from high-frequency
out-of-clinic assessments, could greatly augment the current in-clinic
assessment picture for PwMS, to inform better disease management techniques,
and enable the development of better therapeutic interventions. | [
"cs.LG",
"eess.SP"
] |
Monocular estimation of 3D human pose has attracted increased attention with
the availability of large ground-truth motion capture datasets. However, the
diversity of training data available is limited and it is not clear to what
extent methods generalize outside the specific datasets they are trained on. In
this work we carry out a systematic study of the diversity and biases present
in specific datasets and its effect on cross-dataset generalization across a
compendium of 5 pose datasets. We specifically focus on systematic differences
in the distribution of camera viewpoints relative to a body-centered coordinate
frame. Based on this observation, we propose an auxiliary task of predicting
the camera viewpoint in addition to pose. We find that models trained to
jointly predict viewpoint and pose systematically show significantly improved
cross-dataset generalization. | [
"cs.CV"
] |
Abnormal event detection is a challenging task that requires effectively
handling intricate features of appearance and motion. In this paper, we present
an approach of detecting anomalies in videos by learning a novel LSTM based
self-contained network on normal dense optical flow. Due to its sigmoid
implementation, the standard LSTM's forget gate is susceptible to overlooking
and dismissing relevant content in long-sequence tasks like abnormality
detection. The forget gate attenuates the contribution of the previous hidden
state to the cell state computation, prioritizing the current input. In
addition, the hyperbolic tangent activation of standard LSTMs sacrifices
performance when a network gets deeper. To tackle these two limitations, we
introduce a bi-gated, light LSTM cell by discarding the forget gate and
introducing sigmoid activation in its place. Specifically, the LSTM
architecture we come up with fully sustains content from the previous hidden
state, thereby enabling the trained model to be robust and make
context-independent decisions during evaluation. Removing the
forget gate results in a simplified and undemanding LSTM cell with improved
performance effectiveness and computational efficiency. Empirical evaluations
show that the proposed bi-gated LSTM based network outperforms various LSTM
based models verifying its effectiveness for abnormality detection and
generalization tasks on CUHK Avenue and UCSD datasets. | [
"cs.CV"
] |
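One plausible reading of the proposed cell, sketched in PyTorch (our interpretation of the description: two gates remain, and sigmoid replaces tanh for the candidate and cell output, with the cell state accumulating without a forget gate):

```python
import torch
import torch.nn as nn

class BiGatedLSTMCell(nn.Module):
    """Light LSTM cell: input and output gates only, sigmoid activations."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        # Three blocks of weights: input gate, output gate, candidate.
        self.linear = nn.Linear(input_size + hidden_size, 3 * hidden_size)

    def forward(self, x, state):
        h, c = state
        i, o, g = self.linear(torch.cat([x, h], dim=-1)).chunk(3, dim=-1)
        i, o = torch.sigmoid(i), torch.sigmoid(o)
        g = torch.sigmoid(g)        # sigmoid replaces the usual tanh
        c = c + i * g               # no forget gate: past content is retained
        h = o * torch.sigmoid(c)    # sigmoid again in place of tanh
        return h, (h, c)
```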
Long training times of deep neural networks are a bottleneck in machine
learning research. The major impediment to fast training is the quadratic
growth of both memory and compute requirements of dense and convolutional
layers with respect to their information bandwidth. Recently, training `a
priori' sparse networks has been proposed as a method for allowing layers to
retain high information bandwidth, while keeping memory and compute low.
However, the choice of which sparse topology should be used in these networks
is unclear. In this work, we provide a theoretical foundation for the choice of
intra-layer topology. First, we derive a new sparse neural network
initialization scheme that allows us to explore the space of very deep sparse
networks. Next, we evaluate several topologies and show that seemingly similar
topologies can often have a large difference in attainable accuracy. To explain
these differences, we develop a data-free heuristic that can evaluate a
topology independently from the dataset the network will be trained on. We then
derive a set of requirements that make a good topology, and arrive at a single
topology that satisfies all of them. | [
"cs.LG",
"stat.ML"
] |
Natural communication is now a well-established idea in human-computer
interaction. Hand gesture recognition and gesture-based applications have
gained significant popularity amongst people all over the world, with
applications ranging from security to entertainment. These are generally
real-time applications that need fast, accurate communication with machines.
On the other hand, gesture-based communication also has limitations; for
example, vision-based techniques do not provide bent-finger information. In
this paper, a novel method for fingertip detection and for calculating the
angles of bent fingers on both hands is discussed. Angle calculation has
previously been done with sensor-based gloves/devices. This study has been
conducted in the context of natural computing, calculating angles without
using any wired equipment, colors, markers or other devices. The
pre-processing and segmentation of the region of interest are performed in HSV
color space and a binary format, respectively. Fingertips are detected using a
level-set method and angles are calculated using geometrical analysis. This
technique requires no training for the system to perform
the task. | [
"cs.CV"
] |
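The geometric angle computation reduces to the angle between two vectors meeting at the knuckle; a minimal sketch (the keypoints themselves come from the segmentation and level-set stages, and the point names here are illustrative):

```python
import numpy as np

def finger_bend_angle(fingertip, knuckle, palm_center):
    """Bend angle in degrees at the knuckle, from 2D image coordinates."""
    v1 = np.asarray(fingertip, float) - np.asarray(knuckle, float)
    v2 = np.asarray(palm_center, float) - np.asarray(knuckle, float)
    cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# A straight finger gives ~180 degrees, a bent one considerably less:
print(finger_bend_angle((120, 40), (110, 90), (100, 140)))  # -> 180.0
```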
Graphs can be used to effectively represent complex data structures. Learning
from such irregular graph data is challenging and still suffers from shallow
learning. Applying deep learning on graphs has recently shown good performance
in many applications such as social analysis and bioinformatics. A
message-passing graph convolution network is one such powerful method, with
the expressive power to learn graph structures. Meanwhile, circRNA is a type
of non-coding RNA which plays a critical role in human diseases. Identifying
the associations between circRNAs and diseases is important for the diagnosis
and treatment of complex diseases. However, there is a limited number of known
associations between them, and conducting biological experiments to identify
new associations is time consuming and expensive. As a result, there is a need
to build efficient and feasible computational methods to predict potential
circRNA-disease associations. In this paper, we propose a novel graph
convolution network framework to learn features from a graph built with
multi-source similarity information to predict circRNA-disease associations.
First, we use multi-source information of circRNA similarity, disease
similarity, and circRNA Gaussian Interaction Profile (GIP) kernel similarity
to extract features using a first graph convolution. Then we predict disease
associations for each circRNA with a second graph convolution. The proposed
framework, with five-fold cross-validation in various experiments, shows
promising results in predicting circRNA-disease associations and outperforms
other existing methods. | [
"cs.LG",
"stat.ML"
] |
Tracking multiple objects in videos relies on modeling the spatial-temporal
interactions of the objects. In this paper, we propose a solution named
TransMOT, which leverages powerful graph transformers to efficiently model the
spatial and temporal interactions among the objects. TransMOT effectively
models the interactions of a large number of objects by arranging the
trajectories of the tracked objects as a set of sparse weighted graphs, and
constructing a spatial graph transformer encoder layer, a temporal transformer
encoder layer, and a spatial graph transformer decoder layer based on the
graphs. TransMOT is not only more computationally efficient than the
traditional Transformer, but it also achieves better tracking accuracy. To
further improve the tracking speed and accuracy, we propose a cascade
association framework to handle low-score detections and long-term occlusions
that require large computational resources to model in TransMOT. The proposed
method is evaluated on multiple benchmark datasets including MOT15, MOT16,
MOT17, and MOT20, and it achieves state-of-the-art performance on all the
datasets. | [
"cs.CV"
] |
Machine learning has been widely adopted for medical image analysis in recent
years given its promising performance in image segmentation and classification
tasks. As a data-driven science, the success of machine learning, in particular
supervised learning, largely depends on the availability of manually annotated
datasets. For medical imaging applications, such annotated datasets are not
easy to acquire. It takes a substantial amount of time and resources to curate
an annotated medical image set. In this paper, we propose an efficient
annotation framework for brain tumour images that is able to suggest
informative sample images for human experts to annotate. Our experiments show
that training a segmentation model with only 19% suggestively annotated patient
scans from BraTS 2019 dataset can achieve a comparable performance to training
a model on the full dataset for whole tumour segmentation task. It demonstrates
a promising way to save manual annotation cost and improve data efficiency in
medical imaging applications. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
We present a general framework for exemplar-based image translation, which
synthesizes a photo-realistic image from the input in a distinct domain (e.g.,
semantic segmentation mask, or edge map, or pose keypoints), given an exemplar
image. The output has the style (e.g., color, texture) in consistency with the
semantically corresponding objects in the exemplar. We propose to jointly learn
the cross-domain correspondence and the image translation, where both tasks
facilitate each other and thus can be learned with weak supervision. The images
from distinct domains are first aligned to an intermediate domain where dense
correspondence is established. Then, the network synthesizes images based on
the appearance of semantically corresponding patches in the exemplar. We
demonstrate the effectiveness of our approach in several image translation
tasks. Our method is significantly superior to state-of-the-art methods in
terms of image quality, with the image style faithful to the exemplar and
semantically consistent. Moreover, we show the utility of our method for
several applications. | [
"cs.CV",
"cs.GR",
"eess.IV"
] |
The size of Transformer models is growing at an unprecedented pace. It has
only taken less than one year to reach trillion-level parameters after the
release of GPT-3 (175B). Training such models requires both substantial
engineering efforts and enormous computing resources, which are luxuries most
research teams cannot afford. In this paper, we propose PipeTransformer, which
leverages automated and elastic pipelining and data parallelism for efficient
distributed training of Transformer models. PipeTransformer automatically
adjusts the pipelining and data parallelism by identifying and freezing some
layers during the training, and instead allocates resources for training of the
remaining active layers. More specifically, PipeTransformer dynamically
excludes converged layers from the pipeline, packs active layers into fewer
GPUs, and forks more replicas to increase data-parallel width. We evaluate
PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on GLUE and
SQuAD datasets. Our results show that PipeTransformer attains a 2.4-fold
speedup compared to the state-of-the-art baseline. We also provide various
performance analyses for a more comprehensive understanding of our algorithmic
and system-wise design. We also develop open-sourced flexible APIs for
PipeTransformer, which offer a clean separation among the freeze algorithm,
model definitions, and training accelerations, hence allowing it to be applied
to other algorithms that require similar freezing strategies. | [
"cs.LG"
] |
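The layer-freezing idea at the core of PipeTransformer can be sketched in a few lines of PyTorch (a simplified illustration; the actual system also repacks pipelines onto fewer GPUs and forks data-parallel replicas, which is not shown):

```python
import torch.nn as nn

def freeze_converged_layers(layers, num_converged):
    """Exclude the first `num_converged` layers from gradient computation,
    so resources concentrate on the remaining active layers."""
    for layer in layers[:num_converged]:
        layer.eval()  # also fixes dropout/BN behavior in frozen layers
        for p in layer.parameters():
            p.requires_grad = False

# Example with a toy stack of transformer-like blocks:
blocks = nn.ModuleList([nn.Linear(16, 16) for _ in range(6)])
freeze_converged_layers(blocks, num_converged=2)
```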
Data augmentation methods have been shown to be a fundamental technique to
improve generalization in tasks such as image, text and audio classification.
Recently, automated augmentation methods have led to further improvements on
image classification and object detection leading to state-of-the-art
performances. Nevertheless, little work has been done on time-series data, an
area that could greatly benefit from automated data augmentation given the
usually limited size of the datasets. We present two sample-adaptive automatic
weighting schemes for data augmentation: the first learns to weight the
contribution of the augmented samples to the loss, and the second method
selects a subset of transformations based on the ranking of the predicted
training loss. We validate our proposed methods on a large, noisy financial
dataset and on time-series datasets from the UCR archive. On the financial
dataset, we show that the methods in combination with a trading strategy lead
to improvements in annualized returns of over 50$\%$, and on the time-series
data we outperform state-of-the-art models on over half of the datasets, and
achieve similar performance in accuracy on the others. | [
"cs.LG",
"stat.ML"
] |
We show that viewing graphs as sets of node features and incorporating
structural and positional information into a transformer architecture is able
to outperform representations learned with classical graph neural networks
(GNNs). Our model, GraphiT, encodes such information by (i) leveraging relative
positional encoding strategies in self-attention scores based on positive
definite kernels on graphs, and (ii) enumerating and encoding local
sub-structures such as paths of short length. We thoroughly evaluate these two
ideas on many classification and regression tasks, demonstrating the
effectiveness of each of them independently, as well as their combination. In
addition to performing well on standard benchmarks, our model also admits
natural visualization mechanisms for interpreting graph motifs explaining the
predictions, making it a potentially strong candidate for scientific
applications where interpretation is important. Code available at
https://github.com/inria-thoth/GraphiT. | [
"cs.LG"
] |
Oscillations lie at the core of many biological processes, from the cell
cycle, to circadian oscillations and developmental processes. Time-keeping
mechanisms are essential to enable organisms to adapt to varying conditions in
environmental cycles, from day/night to seasonal. Transcriptional regulatory
networks are one of the mechanisms behind these biological oscillations.
However, while identifying cyclically expressed genes from time series
measurements is relatively easy, determining the structure of the interaction
network underpinning the oscillation is a far more challenging problem. Here,
we explicitly leverage the oscillatory nature of the transcriptional signals
and present a method for reconstructing network interactions tailored to this
special but important class of genetic circuits. Our method is based on
projecting the signal onto a set of oscillatory basis functions using a
Discrete Fourier Transform. We build a Bayesian Hierarchical model within a
frequency domain linear model in order to enforce sparsity and incorporate
prior knowledge about the network structure. Experiments on real and simulated
data show that the method can lead to substantial improvements over competing
approaches if the oscillatory assumption is met, and remains competitive also
in cases where it is not. | [
"stat.ML",
"q-bio.QM"
] |
We consider the problem of imitation learning from a finite set of expert
trajectories, without access to reinforcement signals. The classical approach
of extracting the expert's reward function via inverse reinforcement learning,
followed by reinforcement learning, is indirect and may be computationally
expensive. Recent generative adversarial methods based on matching the policy
distribution between the expert and the agent could be unstable during
training. We propose a new framework for imitation learning by estimating the
support of the expert policy to compute a fixed reward function, which allows
us to re-frame imitation learning within the standard reinforcement learning
setting. We demonstrate the efficacy of our reward function on both discrete
and continuous domains, achieving comparable or better performance than the
state of the art under different reinforcement learning algorithms. | [
"cs.LG",
"stat.ML"
] |
State-of-the-art single depth image-based 3D hand pose estimation methods are
based on dense predictions, including voxel-to-voxel predictions,
point-to-point regression, and pixel-wise estimations. Despite their good
performance, those methods have a few inherent issues, such as the poor
trade-off between accuracy and efficiency, and plain feature representation
learning with local convolutions. In this paper, a novel pixel-wise
prediction-based method is proposed to address the above issues. The key ideas
are two-fold: a) explicitly modeling the dependencies among joints and the
relations between the pixels and the joints for better local feature
representation learning; b) unifying the dense pixel-wise offset predictions
and direct joint regression for end-to-end training. Specifically, we first
propose a graph convolutional network (GCN) based joint graph reasoning module
to model the complex dependencies among joints and augment the representation
capability of each pixel. Then we densely estimate all pixels' offsets to
joints in both image plane and depth space and calculate the joints' positions
by a weighted average over all pixels' predictions, totally discarding the
complex postprocessing operations. The proposed model is implemented with an
efficient 2D fully convolutional network (FCN) backbone and has only about 1.4M
parameters. Extensive experiments on multiple 3D hand pose estimation
benchmarks demonstrate that the proposed method achieves new state-of-the-art
accuracy while running very efficiently at a speed of around 110 fps on a
single NVIDIA 1080Ti GPU. | [
"cs.CV"
] |
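The dense-to-joint aggregation amounts to a confidence-weighted average of per-pixel votes; a sketch with assumed tensor shapes (in the real network, the offsets and weights are predicted jointly by the FCN):

```python
import torch

def aggregate_joints(offsets, weights, coords):
    """offsets: (B, J, 3, N) per-pixel offsets to each of J joints (u, v, d)
    weights: (B, J, N)    per-pixel confidence for each joint
    coords:  (B, 1, 3, N) coordinates of the N pixels
    returns: (B, J, 3)    joint positions, no complex post-processing needed."""
    votes = coords + offsets                          # per-pixel joint estimates
    w = weights / weights.sum(dim=-1, keepdim=True)   # normalize over pixels
    return (votes * w.unsqueeze(2)).sum(dim=-1)
```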
Over the past few years, several new methods for scene text recognition have
been proposed. Most of these methods propose novel building blocks for neural
networks. These novel building blocks are specially tailored for the task of
scene text recognition and can thus hardly be used in any other tasks. In this
paper, we introduce a new model for scene text recognition that only consists
of off-the-shelf building blocks for neural networks. Our model (KISS) consists
of two ResNet based feature extractors, a spatial transformer, and a
transformer. We train our model only on publicly available, synthetic training
data and evaluate it on a range of scene text recognition benchmarks, where we
reach state-of-the-art or competitive performance, although our model does not
use methods like 2D-attention, or image rectification. | [
"cs.CV"
] |
In this paper we present an end-to-end framework for addressing the problem
of dynamic pricing (DP) on E-commerce platform using methods based on deep
reinforcement learning (DRL). By using four groups of different business data
to represent the states of each time period, we model the dynamic pricing
problem as a Markov Decision Process (MDP). Compared with the state-of-the-art
DRL-based dynamic pricing algorithms, our approaches make the following three
contributions. First, we extend the discrete set problem to the continuous
price set. Second, instead of using revenue as the reward function directly, we
define a new function named difference of revenue conversion rates (DRCR).
Third, the cold-start problem of MDP is tackled by pre-training and evaluation
using some carefully chosen historical sales data. Our approaches are evaluated
by both offline evaluation method using real dataset of Alibaba Inc., and
online field experiments starting from July 2018 with thousands of items,
lasting for months on Tmall.com. To our knowledge, no other DP field experiment
using DRL has been conducted before. Field experiment results suggest that DRCR is a
more appropriate reward function than revenue, which is widely used by current
literature. Also, continuous price sets have better performance than discrete
sets and our approaches significantly outperformed the manual pricing by
operation experts. | [
"cs.LG",
"stat.ML"
] |
It is expensive to generate real-life image labels and there is a domain gap
between real-life and simulated images, hence a model trained on the latter
cannot adapt to the former. Solving this could eliminate the need to label
real-life datasets entirely. Class-balanced self-training is one of
the existing techniques that attempt to reduce the domain gap. Moreover,
augmenting RGB with flow maps has improved performance in simple semantic
segmentation and geometry is preserved across domains. Hence, by augmenting
images with dense optical flow map, domain adaptation in semantic segmentation
can be improved. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Arrhythmia detection from ECG is an important research subject in the
prevention and diagnosis of cardiovascular diseases. The prevailing studies
formulate arrhythmia detection from ECG as a time series classification
problem. Meanwhile, early detection of arrhythmia presents a real-world demand
for early prevention and diagnosis. In this paper, we address a problem of
cardiovascular disease early classification, which is a varied-length and
long-length time series early classification problem as well. For solving this
problem, we propose a deep reinforcement learning-based framework, namely
Snippet Policy Network (SPN), consisting of four modules, snippet generator,
backbone network, controlling agent, and discriminator. Compared to existing
approaches, the proposed framework features flexible input length and solves
the dual optimization of the earliness and accuracy goals. Experimental
results demonstrate that SPN achieves an excellent performance of over 80% in
terms of accuracy. Compared to the state-of-the-art methods, the proposed SPN
delivers at least a 7% improvement on different metrics, including precision,
recall, F1-score, and harmonic mean. To the best of our knowledge, this is the
first work focusing on solving the cardiovascular early classification problem
based on varied-length ECG data. Given these strengths, SPN offers a good
exemplar for addressing varied-length time series early classification
problems in general. | [
"cs.LG",
"eess.SP"
] |
Using a model of the environment, reinforcement learning agents can plan
their future moves and achieve superhuman performance in board games like
Chess, Shogi, and Go, while remaining relatively sample-efficient. As
demonstrated by the MuZero Algorithm, the environment model can even be learned
dynamically, generalizing the agent to many more tasks while at the same time
achieving state-of-the-art performance. Notably, MuZero uses internal state
representations derived from real environment states for its predictions. In
this paper, we bind the model's predicted internal state representation to the
environment state via two additional terms: a reconstruction model loss and a
simpler consistency loss, both of which work independently and unsupervised,
acting as constraints to stabilize the learning process. Our experiments show
that this new integration of reconstruction model loss and simpler consistency
loss provides a significant performance increase in OpenAI Gym environments. Our
modifications also enable self-supervised pretraining for MuZero, so the
algorithm can learn about environment dynamics before a goal is made available. | [
"cs.LG"
] |
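A schematic of the two additional terms, in PyTorch (the names and the use of MSE are our assumptions; in MuZero-style training these would be summed into the existing loss at each unroll step):

```python
import torch.nn.functional as F

def auxiliary_losses(decoder, predicted_state, encoded_next_state, next_obs):
    """predicted_state: internal state produced by the learned dynamics model.
    encoded_next_state: representation of the actually observed next state."""
    # Reconstruction: the internal state must retain enough to redraw the observation.
    loss_recon = F.mse_loss(decoder(predicted_state), next_obs)
    # Consistency: dynamics predictions should track the representation function.
    loss_consistency = F.mse_loss(predicted_state, encoded_next_state.detach())
    return loss_recon, loss_consistency
```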
Deep Reinforcement Learning has shown its ability to solve complicated
problems directly from high-dimensional observations. However, in end-to-end
settings, Reinforcement Learning algorithms are not sample-efficient and
require long training times and large quantities of data. In this work, we
propose a framework for sample-efficient Reinforcement Learning that takes
advantage of state and action representations to transform a high-dimensional
problem into a low-dimensional one. Moreover, we seek to find the optimal
policy mapping latent states to latent actions. Because the policy is now
learned on abstract representations, we enforce, using auxiliary loss
functions, the lifting of such a policy to the original problem domain.
Results show that the novel framework can efficiently learn low-dimensional
and interpretable state and action representations and the optimal latent
policy. | [
"cs.LG",
"cs.AI"
] |
Clustering is a fundamental task in data analysis, and spectral clustering
has been recognized as a promising approach to it. Given a graph describing the
relationship between data, spectral clustering explores the underlying cluster
structure in two stages. The first stage embeds the nodes of the graph in real
space, and the second stage groups the embedded nodes into several clusters.
The use of the $k$-means method in the grouping stage is currently standard
practice. We present a spectral clustering algorithm that uses convex
programming in the grouping stage and study how well it works. This algorithm
is designed based on the following observation. If a graph is well-clustered,
then the nodes with the largest degree in each cluster can be found by
computing an enclosing ellipsoid of the nodes embedded in real space, and the
clusters can be identified by using those nodes. We show that, for
well-clustered graphs, the algorithm can find clusters of nodes with minimal
conductance. We also give an experimental assessment of the algorithm's
performance. | [
"cs.LG",
"stat.ML"
] |
The fully convolutional network (FCN) has dominated salient object detection
for a long period. However, the locality of CNN requires the model deep enough
to have a global receptive field and such a deep model always leads to the loss
of local details. In this paper, we introduce a new attention-based encoder,
vision transformer, into salient object detection to ensure the globalization
of the representations from shallow to deep layers. With the global view in
very shallow layers, the transformer encoder preserves more local
representations to recover the spatial details in final saliency maps. Besides,
as each layer can capture a global view of its previous layer, adjacent layers
can implicitly maximize the representation differences and minimize the
redundant features, so that every output feature of the transformer layers
contributes uniquely to the final prediction. To decode features from the
transformer, we propose a simple yet effective deeply-transformed decoder. The
decoder densely decodes and upsamples the transformer features, generating the
final saliency map with less noise injection. Experimental results demonstrate
that our method significantly outperforms other FCN-based and transformer-based
methods in five benchmarks by a large margin, with an average of 12.17%
improvement in terms of Mean Absolute Error (MAE). Code will be available at
https://github.com/OliverRensu/GLSTR. | [
"cs.CV"
] |
Molecular activity prediction is critical in drug design. Machine learning
techniques such as kernel methods and random forests have been successful for
this task. These models require fixed-size feature vectors as input while the
molecules are variable in size and structure. As a result, fixed-size
fingerprint representation is poor in handling substructures for large
molecules. In addition, molecular activity tests, or so-called BioAssays, are
relatively small in the number of tested molecules due to their complexity. Here
we approach the problem through deep neural networks as they are flexible in
modeling structured data such as grids, sequences and graphs. We train multiple
BioAssays using a multi-task learning framework, which combines information
from multiple sources to improve the performance of prediction, especially on
small datasets. We propose Graph Memory Network (GraphMem), a memory-augmented
neural network to model the graph structure in molecules. GraphMem consists of
a recurrent controller coupled with an external memory whose cells dynamically
interact and change through a multi-hop reasoning process. Applied to the
molecules, the dynamic interactions enable an iterative refinement of the
representation of molecular graphs with multiple bond types. GraphMem is
capable of jointly training on multiple datasets by using a task-specific query
fed to the controller as an input. We demonstrate the effectiveness of the
proposed model for separately and jointly training on more than 100K
measurements, spanning across 9 BioAssay activity tests. | [
"cs.LG"
] |
Sophisticated trajectory prediction models that effectively mimic team
dynamics have many potential uses for sports coaches, broadcasters and
spectators. However, through experiments on soccer data we found that it can be
surprisingly challenging to train a deep learning model for player trajectory
prediction which outperforms linear extrapolation on average distance between
predicted and true future trajectories. We propose and test a novel method for
improving training by predicting a sparse trajectory and interpolating using
constant acceleration, which improves performance for several models. This
interpolation can also be used on models that are not trained with sparse
outputs, and we find that this consistently improves performance for all tested
models. Additionally, we find that the accuracy of predicted trajectories for a
subset of players can be improved by conditioning on the full trajectories of
the other players, and that this is further improved when combined with sparse
predictions. We also propose a novel architecture using graph networks and
multi-head attention (GraN-MA) which achieves better performance than other
tested state-of-the-art models on our dataset and is trivially adapted for both
sparse trajectories and full-trajectory conditioned trajectory prediction. | [
"cs.LG",
"stat.ML",
"I.2.6"
] |
Graph neural networks are increasingly becoming the go-to approach in various
fields such as computer vision, computational biology and chemistry, where data
are naturally explained by graphs. However, unlike traditional convolutional
neural networks, deep graph networks do not necessarily yield better
performance than shallow graph networks. This behavior usually stems from the
over-smoothing phenomenon. In this work, we propose a family of architectures
to control this behavior by design. Our networks are motivated by numerical
methods for solving Partial Differential Equations (PDEs) on manifolds, and as
such, their behavior can be explained by similar analysis. Moreover, as we
demonstrate using an extensive set of experiments, our PDE-motivated networks
can generalize and be effective for various types of problems from different
fields. Our architectures obtain results that are better than or on par with
the current state-of-the-art for problems that are typically approached using
different architectures. | [
"cs.LG",
"cs.CV",
"cs.NE"
] |
Image segmentation is a vital part of image processing. Segmentation is widely
applied to medical images in order to diagnose diseases. Medical images can be
segmented manually, but image segmentation using segmentation algorithms is
more accurate than manual segmentation. In the field of medical diagnosis an
extensive diversity of imaging techniques is presently available, such as
radiography, computed tomography (CT) and magnetic resonance imaging (MRI).
Medical image segmentation is an essential step for most subsequent image
analysis tasks. Although the original FCM algorithm yields good results for
segmenting noise-free images, it fails to segment images corrupted by noise,
outliers and other imaging artifacts. This paper presents an image
segmentation approach using a Modified Fuzzy C-Means (FCM) algorithm and the
Fuzzy Possibilistic C-Means (FPCM) algorithm. This approach is a generalized
version of the standard Fuzzy C-Means clustering (FCM) algorithm, and the
modification eliminates a limitation of the conventional FCM technique. The
Modified FCM algorithm is formulated by modifying the distance measurement of
the standard FCM algorithm to permit the labeling of a pixel to be influenced
by other pixels and to restrain the noise effect during segmentation. Instead
of having one term in the objective function, a second term is included,
forcing the membership to be as high as possible without a maximum limit
constraint of one. Experiments are conducted on real images to investigate the
performance of the proposed Modified FCM technique in segmenting medical
images. Standard FCM, Modified FCM, and the Fuzzy Possibilistic C-Means (FPCM)
algorithm are compared to assess the accuracy of our proposed approach. | [
"cs.CV"
] |
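For reference, one iteration of the standard FCM updates that the modified algorithm builds on (the neighborhood-aware distance term and the FPCM typicality term described above are not included in this sketch):

```python
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-12):
    """One FCM iteration: membership update, then fuzzy centroid update.
    X: (N, D) pixels/features; centers: (C, D) cluster centroids."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + eps
    # u[i, k] = 1 / sum_j (d[i, k] / d[i, j]) ** (2 / (m - 1))
    u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    um = u ** m
    centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, centers
```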
The maximum entropy principle is largely used in thresholding and
segmentation of images. Among the several formulations of this principle, the
most effectively applied is that based on Tsallis non-extensive entropy. Here,
we discuss the role of its entropic index in determining the thresholds. When
this index spans the interval (0,1), the values of the thresholds can exhibit
large leaps for some images. Consequently, we observe abrupt transitions
in the appearance of corresponding bi-level or multi-level images. These
gray-level image transitions are analogous to order or texture transitions
observed in physical systems, transitions which are driven by the temperature
or by other physical quantities. | [
"cs.CV"
] |
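A compact sketch of Tsallis-entropy bi-level thresholding over a gray-level histogram, using the standard pseudo-additivity criterion (valid for entropic index q != 1):

```python
import numpy as np

def tsallis_threshold(hist, q):
    """Return the gray level maximizing S_A + S_B + (1 - q) * S_A * S_B."""
    p = hist.astype(float) / hist.sum()
    best_t, best_s = 0, -np.inf
    for t in range(1, len(p)):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0.0 or pb == 0.0:
            continue
        sa = (1.0 - ((p[:t] / pa) ** q).sum()) / (q - 1.0)  # Tsallis entropy of A
        sb = (1.0 - ((p[t:] / pb) ** q).sum()) / (q - 1.0)  # Tsallis entropy of B
        s = sa + sb + (1.0 - q) * sa * sb                   # pseudo-additive total
        if s > best_s:
            best_t, best_s = t, s
    return best_t
```

Sweeping q across (0,1) and recording how the returned threshold jumps makes the abrupt gray-level transitions discussed above directly observable.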
In online reinforcement learning (RL), efficient exploration remains
particularly challenging in high-dimensional environments with sparse rewards.
In low-dimensional environments, where tabular parameterization is possible,
count-based upper confidence bound (UCB) exploration methods achieve minimax
near-optimal rates. However, it remains unclear how to efficiently implement
UCB in realistic RL tasks that involve non-linear function approximation. To
address this, we propose a new exploration approach via \textit{maximizing} the
deviation of the occupancy of the next policy from the explored regions. We add
this term as an adaptive regularizer to the standard RL objective to balance
exploration vs. exploitation. We pair the new objective with a provably
convergent algorithm, giving rise to a new intrinsic reward that adjusts
existing bonuses. The proposed intrinsic reward is easy to implement and
combine with other existing RL algorithms to conduct exploration. As a proof of
concept, we evaluate the new intrinsic reward on tabular examples across a
variety of model-based and model-free algorithms, showing improvements over
count-only exploration strategies. When tested on navigation and locomotion
tasks from MiniGrid and DeepMind Control Suite benchmarks, our approach
significantly improves sample efficiency over state-of-the-art methods. Our
code is available at https://github.com/tianjunz/MADE. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We take a deep look into the behavior of self-attention heads in the
transformer architecture. In light of recent work discouraging the use of
attention distributions for explaining a model's behavior, we show that
attention distributions can nevertheless provide insights into the local
behavior of attention heads. This way, we propose a distinction between local
patterns revealed by attention and global patterns that refer back to the
input, and analyze BERT from both angles. We use gradient attribution to
analyze how the output of an attention head depends on the input
tokens, effectively extending the local attention-based analysis to account for
the mixing of information throughout the transformer layers. We find that there
is a significant discrepancy between attention and attribution distributions,
caused by the mixing of context inside the model. We quantify this discrepancy
and observe that interestingly, there are some patterns that persist across all
layers despite the mixing. | [
"cs.LG",
"cs.CL"
] |
An important part of many machine learning workflows on graphs is vertex
representation learning, i.e., learning a low-dimensional vector representation
for each vertex in the graph. Recently, several powerful techniques for
unsupervised representation learning have been demonstrated to give the
state-of-the-art performance in downstream tasks such as vertex classification
and edge prediction. These techniques rely on random walks performed on the
graph in order to capture its structural properties. These structural
properties are then encoded in the vector representation space.
However, most contemporary representation learning methods only apply to
static graphs while real-world graphs are often dynamic and change over time.
Static representation learning methods are not able to update the vector
representations when the graph changes; therefore, they must re-generate the
vector representations on an updated static snapshot of the graph regardless of
the extent of the change in the graph. In this work, we propose computationally
efficient algorithms for vertex representation learning that extend random walk
based methods to dynamic graphs. The computational complexity of our algorithms
depends upon the extent and rate of changes (the number of edges changed per
update) and on the density of the graph. We empirically evaluate our algorithms
on real world datasets for downstream machine learning tasks of multi-class and
multi-label vertex classification. The results show that our algorithms can
achieve competitive results to the state-of-the-art methods while being
computationally efficient. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
Acquisition of data in task-specific applications of machine learning like
plant disease recognition is a costly endeavor owing to the requirements of
professional human diligence and time constraints. In this paper, we present a
simple pipeline that uses GANs in an unsupervised image translation environment
to improve learning with respect to the data distribution in a plant disease
dataset, reducing the partiality introduced by acute class imbalance and hence
shifting the classification decision boundary towards better performance. The
empirical analysis of our method is demonstrated on a limited dataset of 2789
tomato plant disease images, highly corrupted with an imbalance in the 9
disease categories. First, we extend the state of the art for the GAN-based
image-to-image translation method by enhancing the perceptual quality of the
generated images and preserving the semantics. We introduce AR-GAN, where, in
addition to the adversarial loss, our synthetic image generator optimizes an
Activation Reconstruction Loss (ARL) that matches feature activations against
those of the natural image. We present visually more compelling
synthetic images in comparison to most prominent existing models and evaluate
the performance of our GAN framework in terms of various datasets and metrics.
Second, we evaluate the performance of a baseline convolutional neural network
classifier for improved recognition using the resulting synthetic samples to
augment our training set and compare it with the classical data augmentation
scheme. We observe a significant improvement in classification accuracy (+5.2%)
using generated synthetic samples, compared to a (+0.8%) increase using classic
augmentation in an equal class distribution environment. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Deep residual networks have recently emerged as the state-of-the-art
architecture in image segmentation and object detection. In this paper, we
propose new image features (called ResFeats) extracted from the last
convolutional layer of deep residual networks pre-trained on ImageNet. We
propose to use ResFeats for diverse image classification tasks namely, object
classification, scene classification and coral classification and show that
ResFeats consistently perform better than their CNN counterparts on these
classification tasks. Since the ResFeats are large feature vectors, we propose
to use PCA for dimensionality reduction. Experimental results are provided to
show the effectiveness of ResFeats with state-of-the-art classification
accuracies on Caltech-101, Caltech-256 and MLC datasets and a significant
performance improvement on MIT-67 dataset compared to the widely used CNN
features. | [
"cs.CV"
] |
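A sketch of the ResFeats pipeline as described: take activations from the last convolutional stage of an ImageNet-pretrained deep residual network, then compress with PCA (the ResNet-50 choice and the flattening step are our assumptions):

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone = torch.nn.Sequential(*list(resnet.children())[:-2])  # drop pool + fc
backbone.eval()

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)     # stand-in for a preprocessed batch
    resfeats = backbone(images).flatten(1)   # large per-image feature vectors

pca = PCA(n_components=4)                    # <= n_samples in this toy example
reduced = pca.fit_transform(resfeats.numpy())  # compact features for a classifier
```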
In this work we propose an adversarial learning approach to generate high
resolution MRI scans from low resolution images. The architecture, based on the
SRGAN model, adopts 3D convolutions to exploit volumetric information. For the
discriminator, the adversarial loss uses least squares in order to stabilize
the training. For the generator, the loss function is a combination of a least
squares adversarial loss and a content term based on mean square error and
image gradients in order to improve the quality of the generated images. We
explore different solutions for the upsampling phase. We present promising
results that improve classical interpolation, showing the potential of the
approach for 3D medical imaging super-resolution. Source code available at
https://github.com/imatge-upc/3D-GAN-superresolution | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
OpenStreetMap (OSM) is currently the richest publicly available information
source on geographic entities (e.g., buildings and roads) worldwide. However,
using OSM entities in machine learning models and other applications is
challenging due to the large scale of OSM, the extreme heterogeneity of entity
annotations, and a lack of a well-defined ontology to describe entity semantics
and properties. This paper presents GeoVectors - a unique, comprehensive
world-scale linked open corpus of OSM entity embeddings covering the entire OSM
dataset and providing latent representations of over 980 million geographic
entities in 180 countries. The GeoVectors corpus captures semantic and
geographic dimensions of OSM entities and makes these entities directly
accessible to machine learning algorithms and semantic applications. We create
a semantic description of the GeoVectors corpus, including identity links to
the Wikidata and DBpedia knowledge graphs to supply context information.
Furthermore, we provide a SPARQL endpoint - a semantic interface that offers
direct access to the semantic and latent representations of geographic entities
in OSM. | [
"cs.LG"
] |
Graph embedding, aiming to learn low-dimensional representations (aka.
embeddings) of nodes, has received significant attention recently. Recent years
have witnessed a surge of efforts made on static graphs, among which Graph
Convolutional Network (GCN) has emerged as an effective class of models.
However, these methods mainly focus on static graph embedding. In this
work, we propose an efficient dynamic graph embedding approach, Dynamic Graph
Convolutional Network (DyGCN), which is an extension of GCN-based methods. We
naturally generalize the embedding propagation scheme of GCN to the dynamic
setting in an efficient manner, propagating changes along the graph to update
node embeddings. The most affected nodes are updated first, and then their
changes are propagated to nodes farther away, leading to their
update. Extensive experiments conducted on various dynamic graphs demonstrate
that our model can update the node embeddings in a time-saving and
performance-preserving way. | [
"cs.LG",
"cs.IR"
] |
Correctly estimating the discrepancy between two data distributions has
always been an important task in Machine Learning. Recently, Cuturi proposed
the Sinkhorn distance which makes use of an approximate Optimal Transport cost
between two distributions as a distance to describe distribution discrepancy.
Although it has been successfully adopted in various machine learning
applications (e.g. in Natural Language Processing and Computer Vision) since
then, the Sinkhorn distance also suffers from two non-negligible limitations.
The first is that the Sinkhorn distance only gives an approximation of the
real Wasserstein distance; the second is the 'divide by zero' problem which
often occurs during matrix scaling when the entropy regularization coefficient
is set to a small value. In this paper, we introduce a new Brenier approach
for calculating a more accurate Wasserstein distance between two discrete
distributions. This approach successfully avoids the two limitations of the
Sinkhorn distance described above and gives an alternative way to estimate
distribution discrepancy. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
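For context, the vanilla Sinkhorn matrix-scaling loop, which makes the 'divide by zero' failure mode concrete: with a small regularization coefficient eps, the Gibbs kernel underflows and the scaling updates divide by (near) zero:

```python
import numpy as np

def sinkhorn_distance(a, b, C, eps, n_iters=200):
    """Entropy-regularized OT cost between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)          # Gibbs kernel; underflows to 0 for tiny eps
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)         # the divide-by-zero risk lives here
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return (P * C).sum()
```

Log-domain implementations mitigate the underflow, but the result remains an eps-biased approximation of the true Wasserstein distance, which is what motivates the Brenier-based alternative.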
Many machine learning and data science tasks require solving non-convex
optimization problems. When the loss function is a sum of multiple terms, a
popular method is the stochastic gradient descent. Viewed as a process for
sampling the loss function landscape, the stochastic gradient descent is known
to prefer flat minima. Though this is desired for certain optimization problems
such as in deep learning, it causes issues when the goal is to find the global
minimum, especially if the global minimum resides in a sharp valley.
Illustrated with a simple motivating example, we show that the fundamental
reason is that the difference in the Lipschitz constants of multiple terms in
the loss function causes stochastic gradient descent to experience different
variances at different minima. In order to mitigate this effect and perform
faithful optimization, we propose a combined resampling-reweighting scheme to
balance the variance at local minima and extend to general loss functions. We
explain from the numerical stability perspective how the proposed scheme is
more likely to select the true global minimum, and from the local convergence
analysis perspective how it converges to a minimum faster when compared with
the vanilla stochastic gradient descent. Experiments from robust statistics and
computational chemistry are provided to demonstrate the theoretical findings. | [
"cs.LG",
"math.OC"
] |
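A generic sketch of the unbiasedness mechanism behind such a resampling-reweighting scheme: sample loss terms non-uniformly and divide by n * p_i so the expected update equals the full gradient (the paper's particular choice of sampling probabilities, tied to per-term variance at local minima, is not reproduced here):

```python
import numpy as np

def reweighted_sgd_step(theta, term_grads, probs, lr):
    """One step: resample a loss term i ~ probs, then reweight its gradient by
    1 / (n * probs[i]) so the update is an unbiased full-gradient estimate.
    term_grads: list of callables, gradient of each loss term at theta."""
    n = len(term_grads)
    i = np.random.choice(n, p=probs)
    g = term_grads[i](theta) / (n * probs[i])
    return theta - lr * g
```

With uniform probs this reduces to vanilla stochastic gradient descent; non-uniform probs trade sampling frequency against per-sample weight while keeping the estimator unbiased.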
Human pose estimation - the process of recognizing a human's limb positions
and orientations in a video - has many important applications including
surveillance, diagnosis of movement disorders, and computer animation. While
deep learning has led to great advances in 2D and 3D pose estimation from
single video sources, the problem of estimating 3D human pose from multiple
video sensors with overlapping fields of view has received less attention. When
the application allows use of multiple cameras, 3D human pose estimates may be
greatly improved through fusion of multi-view pose estimates and observation of
limbs that are fully or partially occluded in some views. Past approaches to
multi-view 3D pose estimation have used probabilistic graphical models to
reason over constraints, including per-image pose estimates, temporal
smoothness, and limb length. In this paper, we present a pipeline for
multi-view 3D pose estimation of multiple individuals which combines a
state-of-art 2D pose detector with a factor graph of 3D limb constraints
optimized with belief propagation. We evaluate our results on the TUM-Campus
and Shelf datasets for multi-person 3D pose estimation and show that our system
significantly outperforms the previous state-of-the-art with a simpler model
of limb dependency. | [
"cs.CV"
] |
Data analyses based on linear methods constitute the simplest, most robust,
and transparent approaches to the automatic processing of large amounts of data
for building supervised or unsupervised machine learning models. Principal
covariates regression (PCovR) is an underappreciated method that interpolates
between principal component analysis and linear regression, and can be used to
conveniently reveal structure-property relations in terms of
simple-to-interpret, low-dimensional maps. Here we provide a pedagogic overview
of these data analysis schemes, including the use of the kernel trick to
introduce an element of non-linearity, while maintaining most of the
convenience and the simplicity of linear approaches. We then introduce a
kernelized version of PCovR and a sparsified extension, and demonstrate the
performance of this approach in revealing and predicting structure-property
relations in chemistry and materials science, showing a variety of examples
including elemental carbon, porous silicate frameworks, organic molecules,
amino acid conformers, and molecular materials. | [
"stat.ML",
"cond-mat.mtrl-sci",
"cs.LG",
"physics.chem-ph"
] |
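A minimal sample-space sketch of the PCovR interpolation described above,
assuming centered features and ignoring the normalization conventions and
the kernel/sparse extensions the abstract mentions; the mixing parameter
alpha moves the map between PCA-like (alpha=1) and regression-like (alpha=0)
projections.

```python
import numpy as np

def pcovr_projection(X, Y, alpha=0.5, n_components=2, ridge=1e-8):
    """Sample-space principal covariates regression (a minimal sketch)."""
    # Ridge predictions of the targets, used in the regression part
    Yhat = X @ np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
    # Mix the PCA gram matrix with the covariance of the predicted targets
    G = alpha * X @ X.T + (1 - alpha) * Yhat @ Yhat.T
    w, U = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:n_components]
    return U[:, idx] * np.sqrt(w[idx])   # low-dimensional map T

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5)); X -= X.mean(0)
Y = X @ rng.normal(size=(5, 1)) + 0.1 * rng.normal(size=(50, 1))
T = pcovr_projection(X, Y, alpha=0.5)
print(T.shape)   # (50, 2)
```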
We propose a Healthcare Graph Convolutional Network (HealGCN) to offer
disease self-diagnosis service for online users based on Electronic Healthcare
Records (EHRs). This paper focuses on two main challenges in online
disease diagnosis: (1) serving cold-start users via graph convolutional
networks and (2) handling scarce clinical description via a symptom retrieval
system. To this end, we first organize the EHR data into a heterogeneous graph
that is capable of modeling complex interactions among users, symptoms and
diseases, and tailor the graph representation learning towards disease
diagnosis with an inductive learning paradigm. Then, we build a disease
self-diagnosis system with a corresponding EHR Graph-based Symptom Retrieval
System (GraphRet) that can search and provide a list of relevant alternative
symptoms by tracing predefined meta-paths. GraphRet helps enrich the seed
symptom set through the EHR graph when users provide only scarce
descriptions, hence yielding better diagnosis accuracy. Finally, we validate
the superiority of our model on a large-scale EHR dataset. | [
"cs.LG",
"cs.AI",
"cs.IR"
] |
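The exact HealGCN architecture is not specified in the abstract; the numpy
sketch below only illustrates the inductive ingredient that makes cold-start
service possible: representing an unseen user by aggregating the embeddings
of the symptoms they report, so no per-user parameters are needed. All
shapes and the random "learned" tables are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_symptoms, n_diseases, dim = 200, 30, 16
sym_emb = rng.normal(size=(n_symptoms, dim))   # learned symptom embeddings
W = rng.normal(size=(dim, n_diseases))         # learned disease classifier

def diagnose(reported_symptoms):
    """Inductive scoring: a cold-start user is represented purely by the
    mean of their reported symptoms' embeddings (mean aggregator)."""
    h_user = sym_emb[reported_symptoms].mean(axis=0)
    logits = h_user @ W
    return np.argsort(logits)[::-1][:3]        # top-3 candidate diseases

print(diagnose([3, 17, 42]))
```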
In this work, we propose a multi-task recurrent neural network with an
attention mechanism for predicting cardiovascular events from electronic
health records
(EHRs) at different time horizons. The proposed approach is compared to a
standard clinical risk predictor (QRISK) and machine learning alternatives
using 5-year data from an NHS Foundation Trust. The proposed model outperforms
standard clinical risk scores in predicting stroke (AUC=0.85) and myocardial
infarction (AUC=0.89) at the largest time horizon. The benefit of the
multi-task setting becomes visible for very short time horizons, resulting
in an AUC increase of 2-6%. Further, we explored the importance of
individual features and attention weights in predicting cardiovascular events.
Our results indicate that the recurrent neural network approach benefits from
the hospital longitudinal information and demonstrates how machine learning
techniques can be applied to secondary care. | [
"cs.LG",
"stat.ML"
] |
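A PyTorch sketch of the model family described above (not the paper's exact
architecture): a GRU over visit sequences with additive attention and one
sigmoid head per prediction horizon, which is where the multi-task gain at
short horizons would come from. All sizes are illustrative.

```python
import torch
import torch.nn as nn

class MultiHorizonRisk(nn.Module):
    """GRU over EHR visit sequences with additive attention and one output
    head per prediction horizon (the multi-task part)."""
    def __init__(self, n_features, hidden=64, horizons=(1, 3, 5)):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in horizons])

    def forward(self, x):                       # x: (batch, visits, features)
        h, _ = self.gru(x)                      # (batch, visits, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention over visits
        ctx = (w * h).sum(dim=1)                # (batch, hidden)
        return [torch.sigmoid(head(ctx)) for head in self.heads]

model = MultiHorizonRisk(n_features=32)
risks = model(torch.randn(4, 10, 32))           # one risk score per horizon
print([r.shape for r in risks])                 # three (4, 1) tensors
```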
Handwritten Text Line Segmentation (HTLS) is a low-level but important task
for many higher-level document processing tasks like handwritten text
recognition. It is often formulated in terms of semantic segmentation or object
detection in deep learning. However, both formulations have serious
shortcomings. The former requires heavy post-processing of splitting/merging
adjacent segments, while the latter may fail on dense or curved texts. In this
paper, we propose a novel Line Counting formulation for HTLS -- that involves
counting the number of text lines from the top at every pixel location. This
formulation helps learn an end-to-end HTLS solution that directly predicts
per-pixel line number for a given document image. Furthermore, we propose a
deep neural network (DNN) model LineCounter to perform HTLS through the Line
Counting formulation. Our extensive experiments on three public datasets
(ICDAR2013-HSC, HIT-MW, and VML-AHTE) demonstrate that LineCounter outperforms
state-of-the-art HTLS approaches. Source code is available at
https://github.com/Leedeng/Line-Counter. | [
"cs.CV"
] |
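The Line Counting target can be sketched directly from a line instance mask.
The snippet below renumbers annotated lines top-to-bottom and writes each
line's rank into its pixels; the paper's actual label construction (e.g.,
how background between lines is numbered) may differ.

```python
import numpy as np

def line_count_target(instance_mask):
    """Build a per-pixel line-number target from a line instance mask
    (0 = background, k = k-th annotated text line, in arbitrary order).
    Lines are renumbered top-to-bottom by mean row; background stays 0."""
    target = np.zeros_like(instance_mask)
    ids = [i for i in np.unique(instance_mask) if i != 0]
    ids.sort(key=lambda i: np.argwhere(instance_mask == i)[:, 0].mean())
    for rank, i in enumerate(ids, start=1):
        target[instance_mask == i] = rank
    return target

mask = np.zeros((6, 8), dtype=int)
mask[4:5, :] = 7     # a line lower on the page, arbitrary id
mask[1:2, :] = 3     # a line near the top
print(line_count_target(mask))   # top line -> 1, lower line -> 2
```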
Deep generative models of 3D shapes have received a great deal of research
interest. Yet, almost all of them generate discrete shape representations, such
as voxels, point clouds, and polygon meshes. We present the first 3D generative
model for a drastically different shape representation --- describing a shape
as a sequence of computer-aided design (CAD) operations. Unlike meshes and
point clouds, CAD models encode the user creation process of 3D shapes, widely
used in numerous industrial and engineering design tasks. However, the
sequential and irregular structure of CAD operations poses significant
challenges for existing 3D generative models. Drawing an analogy between CAD
operations and natural language, we propose a CAD generative network based on
the Transformer. We demonstrate the performance of our model for both shape
autoencoding and random shape generation. To train our network, we create a new
CAD dataset consisting of 178,238 models and their CAD construction sequences.
We have made this dataset publicly available to promote future research on this
topic. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
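A sketch of the language-model analogy drawn above: if CAD operations are
quantized into a discrete vocabulary, an autoregressive Transformer over the
token sequence is the natural model. The paper's actual network and CAD
parameterization are richer; vocabulary size and dimensions below are
placeholders.

```python
import torch
import torch.nn as nn

class CADSeqModel(nn.Module):
    """Autoregressive Transformer over quantized CAD-operation tokens."""
    def __init__(self, vocab=256, dim=128, heads=4, layers=4, max_len=64):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, layers)
        self.out = nn.Linear(dim, vocab)

    def forward(self, seq):                          # seq: (batch, steps)
        n = seq.size(1)
        h = self.tok(seq) + self.pos(torch.arange(n, device=seq.device))
        causal = torch.triu(torch.full((n, n), float("-inf"),
                                       device=seq.device), diagonal=1)
        h = self.enc(h, mask=causal)                 # left-to-right attention
        return self.out(h)                           # next-operation logits

model = CADSeqModel()
logits = model(torch.randint(0, 256, (2, 16)))
print(logits.shape)                                  # (2, 16, 256)
```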
Generative systems have a significant potential to synthesize innovative
design alternatives. Still, most of the common systems that have been adopted
in design require the designer to explicitly define the specifications of the
procedures and, in some cases, the design space. In contrast, a generative system
could potentially learn both aspects through processing a database of existing
solutions without the supervision of the designer. To explore this possibility,
we review recent advancements of generative models in machine learning and
current applications of learning techniques in design. Then, we describe the
development of a data-driven generative system titled DeepCloud. It combines an
autoencoder architecture for point clouds with a web-based interface and analog
input devices to provide an intuitive experience for data-driven generation of
design alternatives. We delineate the implementation of two prototypes of
DeepCloud, their contributions, and potentials for generative design. | [
"cs.LG",
"stat.ML"
] |
Modeling complex spatial and temporal correlations in the correlated time
series data is indispensable for understanding the traffic dynamics and
predicting the future status of an evolving traffic system. Recent works focus
on designing complicated graph neural network architectures to capture shared
patterns with the help of pre-defined graphs. In this paper, we argue that
learning node-specific patterns is essential for traffic forecasting while the
pre-defined graph is avoidable. To this end, we propose two adaptive modules
for enhancing Graph Convolutional Network (GCN) with new capabilities: 1) a
Node Adaptive Parameter Learning (NAPL) module to capture node-specific
patterns; 2) a Data Adaptive Graph Generation (DAGG) module to infer the
inter-dependencies among different traffic series automatically. We further
propose an Adaptive Graph Convolutional Recurrent Network (AGCRN) to capture
fine-grained spatial and temporal correlations in traffic series automatically
based on the two modules and recurrent networks. Our experiments on two
real-world traffic datasets show that AGCRN outperforms the state of the art
by a significant margin without requiring pre-defined graphs of spatial
connections. | [
"cs.LG",
"stat.ML"
] |
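The two adaptive modules above can be sketched in a few lines of numpy (a
sketch under the common embedding-derived-graph formulation; AGCRN composes
these inside recurrent cells, which is omitted here): DAGG infers a
row-normalized adjacency from learnable node embeddings, and NAPL draws
node-specific weights from a shared pool using the same embeddings.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_nodes, emb_dim, c_in, c_out = 8, 4, 3, 5
E = rng.normal(size=(n_nodes, emb_dim))        # learnable node embeddings

# DAGG: infer the graph from embeddings instead of a pre-defined adjacency
A = softmax(np.maximum(E @ E.T, 0.0), axis=1)  # ReLU then row-softmax

# NAPL: node-specific weights drawn from a shared weight pool
W_pool = rng.normal(size=(emb_dim, c_in, c_out))
W = np.einsum('ne,eio->nio', E, W_pool)        # one (c_in, c_out) per node

X = rng.normal(size=(n_nodes, c_in))           # per-node input features
Z = A @ X                                      # aggregate over inferred graph
H = np.einsum('ni,nio->no', Z, W)              # apply each node's own weights
print(H.shape)                                 # (8, 5)
```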
This paper presents a new methodology for clustering multivariate time series
leveraging optimal transport between copulas. Copulas are used to encode both
(i) intra-dependence of a multivariate time series, and (ii) inter-dependence
between two time series. Then, optimal copula transport allows us to define two
distances between multivariate time series: (i) one for measuring
intra-dependence dissimilarity, and (ii) another for measuring
inter-dependence dissimilarity based on a new multivariate dependence
coefficient that is robust to noise, deterministic, and able to target
specified dependencies. | [
"cs.LG",
"stat.ML"
] |
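The first ingredient of the method above, the empirical copula transform, is
a one-liner per margin; the sketch below pairs it with Spearman's rho as a
simple rank-based dependence coefficient for orientation. The paper's
distances instead put optimal transport on top of these copulas, which is
not reproduced here.

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula(X):
    """Map each margin of a multivariate sample to normalized ranks; the
    result is a sample from the empirical copula (uniform margins)."""
    n = X.shape[0]
    return np.column_stack(
        [rankdata(X[:, j]) / (n + 1) for j in range(X.shape[1])])

rng = np.random.default_rng(0)
z = rng.normal(size=500)
X = np.column_stack([z + 0.3 * rng.normal(size=500), z])  # dependent pair
U = empirical_copula(X)

# Spearman's rho recovered from the copula sample (a simple rank-based
# dependence coefficient, not the paper's OT-based one).
rho = 12 * np.mean(U[:, 0] * U[:, 1]) - 3
print(round(rho, 2))   # close to 1 for this strongly dependent pair
```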