text | label |
---|---|
This paper presents an online-capable deep learning model for probabilistic
vehicle trajectory prediction. We propose a simple encoder-decoder architecture
based on multi-head attention. The proposed model generates the distributions
of the predicted trajectories for multiple vehicles in parallel. Our approach
to modeling the interactions can learn to attend to a few influential vehicles
in an unsupervised manner, which improves the interpretability of the network.
Experiments using naturalistic highway trajectories show a clear improvement in
positional error in both the longitudinal and lateral directions. | [
"cs.CV",
"cs.LG",
"cs.RO",
"eess.SP"
] |
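As an illustration of the attention-based encoder-decoder idea in the abstract
above (a hedged sketch, not the authors' architecture: the GRU history encoder,
feature sizes, and the Gaussian output head are assumptions):

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Toy encoder-decoder with multi-head attention across vehicles.

    Hypothetical details (not from the paper): a GRU history encoder,
    feature size 32, 4 heads, and a Gaussian (mean + log-variance)
    output head per future step.
    """
    def __init__(self, d_model=32, n_heads=4, horizon=10):
        super().__init__()
        self.encoder = nn.GRU(input_size=2, hidden_size=d_model, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, horizon * 4)  # mu_x, mu_y, logvar_x, logvar_y
        self.horizon = horizon

    def forward(self, histories):
        # histories: (V, T, 2) past x/y positions of V vehicles over T steps
        _, h = self.encoder(histories)          # h: (1, V, d_model)
        ctx, weights = self.attn(h, h, h)       # vehicles attend to one another
        out = self.head(ctx).view(-1, self.horizon, 4)
        mean, logvar = out[..., :2], out[..., 2:]
        return mean, logvar, weights            # weights expose influential vehicles

model = TrajectoryPredictor()
mean, logvar, attn = model(torch.randn(5, 20, 2))  # 5 vehicles, 20 past steps
```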
Recent advances in multi-agent reinforcement learning have largely been
limited to training one model from scratch for every new task. This limitation
is due to the restricted model architecture tied to fixed input and output
dimensions, which hinders experience accumulation and the transfer of learned
agents across tasks of diverse difficulty (e.g., 3 vs 3 or 5 vs 6 multi-agent
games). In this paper, we make the first attempt to explore a universal
multi-agent reinforcement learning pipeline, designing one single architecture
to fit tasks with different observation and action configurations. Unlike
previous RNN-based models, we utilize a transformer-based model to generate a
flexible policy by decoupling the policy distribution from the intertwined
input observation, using importance weights derived from the self-attention
mechanism. Compared to a standard transformer block, the proposed model, named
Universal Policy Decoupling Transformer (UPDeT), further relaxes the action
restriction and makes the multi-agent task's decision process more explainable.
UPDeT is general enough to be plugged into any multi-agent reinforcement
learning pipeline and equips it with strong generalization abilities, enabling
it to handle multiple tasks at a time. Extensive experiments on large-scale
SMAC multi-agent competitive games demonstrate that the proposed UPDeT-based
multi-agent reinforcement learning significantly outperforms state-of-the-art
approaches, with advantageous transfer capability in terms of both performance
and training speed (10 times faster). | [
"cs.LG",
"cs.AI"
] |
We study multi-round response generation in visual dialog, where a response
is generated according to a visually grounded conversational history. Given a
triplet -- an image, Q&A history, and the current question -- all prevailing
methods follow a codec (i.e., encoder-decoder) fashion in a supervised learning
paradigm: a multimodal encoder encodes the triplet into a feature vector, which
is then fed into the decoder for current answer generation, supervised by the
ground truth. However, this conventional supervised learning does not take
into account the impact of imperfect history, violating the conversational
nature of visual dialog and thus making the codec more inclined to learn
history bias rather than contextual reasoning. To this end, inspired by the
actor-critic policy gradient in reinforcement learning, we propose a novel
training paradigm called History Advantage Sequence Training (HAST).
Specifically, we intentionally impose wrong answers in the history, obtaining
an adverse critic, and observe how the historical error impacts the codec's
future behavior via the History Advantage, a quantity obtained by subtracting
the adverse critic from the gold reward of the ground-truth history. Moreover,
to make the codec more sensitive to the history, we propose a novel attention
network called the History-Aware Co-Attention Network (HACAN), which can be
effectively trained using HAST. Experimental results on three benchmarks,
VisDial v0.9 & v1.0 and GuessWhat?!, show that the proposed HAST strategy
consistently outperforms state-of-the-art supervised counterparts. | [
"cs.CV"
] |
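In symbols (our notation, not necessarily the paper's), the History Advantage
for an answer $a_t$ contrasts the reward under the ground-truth history with
the adverse critic computed under the corrupted history:

```latex
A_{\mathrm{H}}(a_t) = R\big(a_t \mid H_{\text{gold}}\big) - R\big(a_t \mid H_{\text{adverse}}\big)
```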
Graph convolutional neural networks (GCNs) generalize traditional
convolutional neural networks (CNNs) from low-dimensional regular graphs (e.g.,
images) to high-dimensional irregular graphs (e.g., text documents on word
embeddings). Due to inevitably faulty data collection instruments, deceptive
data manipulation, or other system errors, the data might be
error-contaminated. Even a small amount of error, such as noise, can compromise
the ability of GCNs and render them largely inadmissible. The key challenge is
how to effectively and efficiently employ GCNs in the presence of erroneous
data. In this paper, we propose novel Robust Graph Convolutional Neural
Networks for possibly erroneous single-view or multi-view data, where data may
come from multiple sources. By incorporating extra autoencoder layers into
traditional graph convolutional networks, we characterize and handle typical
error models explicitly. Experimental results on various real-world datasets
demonstrate the superiority of the proposed model over baseline methods and
its robustness against different types of error. | [
"cs.LG"
] |
In this paper, we present UNet++, a new, more powerful architecture for
medical image segmentation. Our architecture is essentially a deeply-supervised
encoder-decoder network where the encoder and decoder sub-networks are
connected through a series of nested, dense skip pathways. The re-designed skip
pathways aim at reducing the semantic gap between the feature maps of the
encoder and decoder sub-networks. We argue that the optimizer would deal with
an easier learning task when the feature maps from the decoder and encoder
networks are semantically similar. We have evaluated UNet++ in comparison with
U-Net and wide U-Net architectures across multiple medical image segmentation
tasks: nodule segmentation in low-dose chest CT scans, nuclei segmentation in
microscopy images, liver segmentation in abdominal CT
scans, and polyp segmentation in colonoscopy videos. Our experiments
demonstrate that UNet++ with deep supervision achieves an average IoU gain of
3.9 and 3.4 points over U-Net and wide U-Net, respectively. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
Despite the recent developments in spatiotemporal local features for action
recognition in video sequences, local color information has so far been
ignored. However, color has proved to be an important element in the success of
automated recognition of objects and scenes. In this paper we extend the
space-time interest point descriptor STIP to take into account the color
information on the features' neighborhood. We compare the performance of our
color-aware version of STIP (which we have called HueSTIP) with the original
one. | [
"cs.CV"
] |
Airline Crew Pairing Optimization (CPO) aims at generating a set of legal
flight sequences (crew pairings), to cover an airline's flight schedule, at
minimum cost. It is usually performed using Column Generation (CG), a
mathematical programming technique for guided search-space exploration. CG
exploits the interdependencies between the current and the preceding
CG-iteration for generating new variables (pairings) during the
optimization-search. However, with the unprecedented scale and complexity of
the emergent flight networks, it has become imperative to learn higher-order
interdependencies among the flight-connection graphs, and utilize those to
enhance the efficacy of CPO. In a first of its kind, marking a significant
departure from the state of the art, this paper proposes a novel
adaptation of the Variational Graph Auto-Encoder for learning plausible
combinatorial patterns among the flight-connection data obtained through the
search-space exploration by an Airline Crew Pairing Optimizer, AirCROP
(developed by the authors and validated by the research consortium's industrial
sponsor, GE Aviation). The resulting flight-connection predictions are combined
on-the-fly using a novel heuristic to generate new pairings for the optimizer.
The utility of the proposed approach is demonstrated on large-scale (over 4200
flights), real-world, complex flight-networks of US-based airlines,
characterized by multiple hub-and-spoke subnetworks and several crew bases. | [
"cs.LG",
"math.OC",
"stat.AP",
"stat.ML"
] |
While many may think branding begins and ends with a logo, fashion
brands communicate their uniqueness through a wide range of visual cues such as
color, patterns and shapes. In this work, we analyze learned visual
representations by deep networks that are trained to recognize fashion brands.
In particular, the activation strength and extent of neurons are studied to
provide interesting insights about visual brand expressions. The proposed
method identifies where a brand stands in the spectrum of branding strategy,
i.e., from trademark-emblazoned goods with bold logos to implicit no logo
marketing. By quantifying attention maps, we are able to interpret the visual
characteristics of a brand present in a single image and model the general
design direction of a brand as a whole. We further investigate versatility of
neurons and discover "specialists" that are highly brand-specific and
"generalists" that detect diverse visual features. A human experiment based on
three main visual scenarios of fashion brands is conducted to verify the
alignment of our quantitative measures with the human perception of brands.
This paper demonstrates how deep networks go beyond logos to recognize
clothing brands in an image. | [
"cs.CV"
] |
This paper presents an efficient object detection method from satellite
imagery. Among a number of machine learning algorithms, we propose a
combination of two convolutional neural networks (CNN) aimed at high precision
and high recall, respectively. We validated our models using golf courses as
target objects. The proposed deep learning method demonstrated higher accuracy
than previous object identification methods. | [
"cs.CV"
] |
With massive data growth, the need for autonomous and generic anomaly
detection systems has increased. However, developing one stand-alone generic
anomaly detection system that is both accurate and fast remains a challenge. In
this paper, we propose combining conventional time-series analysis approaches,
the Seasonal Autoregressive Integrated Moving Average (SARIMA) model and
Seasonal-Trend decomposition using Loess (STL), to detect complex and varied
anomalies. SARIMA and STL are usually applied only to stationary and periodic
time series, but we show that, in combination, they can detect anomalies with
high accuracy even for noisy and non-periodic data. We compare the algorithm to
Long Short-Term Memory (LSTM), a deep-learning-based algorithm used for anomaly
detection systems. We used a total of seven real-world datasets and four
artificial datasets with different time-series properties to verify the
performance of the proposed algorithm. | [
"cs.LG",
"stat.ML"
] |
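A minimal sketch of combining SARIMA and STL residuals for anomaly detection
as described above, assuming illustrative model orders, a daily period, and a
3-sigma residual rule (none of which are the paper's settings):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.statespace.sarimax import SARIMAX

def detect_anomalies(series: pd.Series, period: int = 24) -> np.ndarray:
    """Flag points whose combined residual exceeds a 3-sigma band (assumed rule)."""
    # 1) STL strips seasonality and trend; its residual captures irregularities.
    stl_resid = STL(series, period=period, robust=True).fit().resid
    # 2) SARIMA models the remaining autocorrelation; orders here are illustrative.
    model = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, period))
    sarima_resid = model.fit(disp=False).resid
    # 3) Combine the two residual signals and threshold.
    resid = (np.abs(stl_resid) + np.abs(sarima_resid.values)) / 2
    return resid > resid.mean() + 3 * resid.std()

# Usage: hourly data with daily seasonality and one injected spike
ts = pd.Series(np.sin(np.arange(500) * 2 * np.pi / 24) + np.random.randn(500) * 0.1)
ts.iloc[250] += 3
print(np.where(detect_anomalies(ts))[0])
```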
Single-pixel imaging is a novel imaging scheme that has gained popularity due
to its huge computational gain and potential for a low-cost alternative to
imaging beyond the visible spectrum. The traditional reconstruction methods
struggle to produce a clear recovery when one limits the number of illumination
patterns from a spatial light modulator. As a remedy, several
deep-learning-based solutions have been proposed which lack good generalization
ability due to the architectural setup and loss functions. In this paper, we
propose a generative adversarial network-based reconstruction framework for
single-pixel imaging, referred to as SPI-GAN. Our method can reconstruct images
with 17.92 dB PSNR and 0.487 SSIM, even if the sampling ratio drops to 5%. This
facilitates much faster reconstruction making our method suitable for
single-pixel video. Furthermore, our ResNet-like architecture for the generator
leads to useful representation learning that allows us to reconstruct
completely unseen objects. The experimental results demonstrate that SPI-GAN
achieves a significant performance gain, e.g., nearly 3 dB in PSNR, over the
current state-of-the-art method. | [
"cs.CV",
"cs.LG",
"eess.IV",
"eess.SP"
] |
DuctTake is a system designed to enable practical compositing of multiple
takes of a scene into a single video. Current industry solutions are based
around object segmentation, a hard problem that requires extensive manual input
and cleanup, making compositing an expensive part of the film-making process.
Our method instead composites shots together by finding optimal spatiotemporal
seams using motion-compensated 3D graph cuts through the video volume. We
describe in detail the required components, decisions, and new techniques that
together make a usable, interactive tool for compositing HD video, paying
special attention to running time and performance of each section. We validate
our approach by presenting a wide variety of examples and by comparing result
quality and creation time to composites made by professional artists using
current state-of-the-art tools. | [
"cs.CV"
] |
Multi-simulator training has contributed to the recent success of Deep
Reinforcement Learning by stabilizing learning and allowing for higher training
throughputs. We propose Gossip-based Actor-Learner Architectures (GALA) where
several actor-learners (such as A2C agents) are organized in a peer-to-peer
communication topology, and exchange information through asynchronous gossip in
order to take advantage of a large number of distributed simulators. We prove
that GALA agents remain within an epsilon-ball of one another during training
when using loosely coupled asynchronous communication. By reducing the amount
of synchronization between agents, GALA is more computationally efficient and
scalable compared to A2C, its fully-synchronous counterpart. GALA also
outperforms A2C, being more robust and sample efficient. We show that we can
run several loosely coupled GALA agents in parallel on a single GPU and achieve
significantly higher hardware utilization and frame-rates than vanilla A2C at
comparable power draws. | [
"cs.LG",
"cs.AI",
"cs.MA",
"math.OC",
"stat.ML"
] |
Deep visual models are susceptible to adversarial perturbations to inputs.
Although these signals are carefully crafted, they still appear as noise-like
patterns to humans. This observation has led to the argument that deep visual
representation is misaligned with human perception. We counter-argue by
providing evidence of human-meaningful patterns in adversarial perturbations.
We first propose an attack that fools a network to confuse a whole category of
objects (source class) with a target label. Our attack also limits the
unintended fooling by samples from non-source classes, thereby circumscribing
human-defined semantic notions for network fooling. We show that the proposed
attack not only leads to the emergence of regular geometric patterns in the
perturbations, but also reveals insightful information about the decision
boundaries of deep models. Exploring this phenomenon further, we alter the
`adversarial' objective of our attack to use it as a tool to `explain' deep
visual representation. We show that by careful channeling and projection of the
perturbations computed by our method, we can visualize a model's understanding
of human-defined semantic notions. Finally, we exploit the explainability
properties of our perturbations to perform image generation, inpainting and
interactive image manipulation by attacking adversarially robust
`classifiers'. In all, our major contribution is a novel pragmatic adversarial
attack that is subsequently transformed into a tool to interpret the visual
models. The article also makes secondary contributions in terms of establishing
the utility of our attack beyond the adversarial objective with multiple
interesting applications. | [
"cs.CV",
"cs.AI",
"cs.CR",
"cs.LG"
] |
Real-world datasets often contain entries with missing elements, e.g., in a
medical dataset, a patient is unlikely to have taken all possible diagnostic
tests. Variational Autoencoders (VAEs) are popular generative models often used
for unsupervised learning. Despite their widespread use it is unclear how best
to apply VAEs to datasets with missing data. We develop a novel latent variable
model of a corruption process which generates missing data, and derive a
corresponding tractable evidence lower bound (ELBO). Our model is
straightforward to implement, can handle both missing completely at random
(MCAR) and missing not at random (MNAR) data, scales to high dimensional inputs
and gives both the VAE encoder and decoder principled access to indicator
variables for whether a data element is missing or not. On the MNIST and SVHN
datasets we demonstrate improved marginal log-likelihood of observed data and
better missing data imputation, compared to existing approaches. | [
"cs.LG",
"stat.ML"
] |
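A hedged sketch of one simple way to give both the VAE encoder and decoder
access to missingness indicators, as the abstract describes; the
mask-concatenation strategy, layer sizes, and Gaussian reconstruction loss
below are assumptions, not the paper's corruption-process ELBO:

```python
import torch
import torch.nn as nn

class MaskAwareVAE(nn.Module):
    """VAE whose encoder and decoder both see the missingness mask (sketch)."""
    def __init__(self, d_in=784, d_z=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * d_in, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * d_z))
        self.dec = nn.Sequential(nn.Linear(d_z + d_in, 256), nn.ReLU(),
                                 nn.Linear(256, d_in))

    def forward(self, x, mask):
        # x: data with missing entries zero-filled; mask: 1 = observed, 0 = missing
        mu, logvar = self.enc(torch.cat([x * mask, mask], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = self.dec(torch.cat([z, mask], dim=-1))
        # Reconstruction loss only on observed entries; standard Gaussian KL.
        rec = ((recon - x) ** 2 * mask).sum(-1).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return rec + kl

vae = MaskAwareVAE()
x = torch.rand(8, 784); mask = (torch.rand(8, 784) > 0.3).float()
loss = vae(x, mask); loss.backward()
```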
Object detection is an important research area in the field of computer
vision. Many detection algorithms have been proposed. However, each object
detector relies on specific assumptions of the object appearance and imaging
conditions. As a consequence, no algorithm can be considered as universal. With
the large variety of object detectors, the subsequent question is how to select
and combine them.
In this paper, we propose a framework to learn how to combine object
detectors. The proposed method uses (single) detectors like DPM, CN and EES,
and exploits their correlation by high level contextual features to yield a
combined detection list.
Experiments on the PASCAL VOC07 and VOC10 datasets show that the proposed
method significantly outperforms single object detectors, DPM (8.4%), CN (6.8%)
and EES (17.0%) on VOC07 and DPM (6.5%), CN (5.5%) and EES (16.2%) on VOC10. | [
"cs.CV"
] |
This paper introduces a new algorithm for unsupervised learning of keypoint
detectors and descriptors, which demonstrates fast convergence and good
performance across different datasets. The training procedure uses homographic
transformation of images. The proposed model learns to detect points and
generate descriptors on pairs of transformed images, which are easy for it to
distinguish and repeatedly detect. The trained model follows SuperPoint
architecture for ease of comparison, and demonstrates similar performance on
natural images from the HPatches dataset, and better performance on retina
images from the Fundus Image Registration Dataset, which contains a low number
of corner-like features. For HPatches and other datasets, coverage was also
computed to
provide better estimation of model quality. | [
"cs.CV",
"cs.LG"
] |
Current unsupervised image-to-image translation techniques struggle to focus
their attention on individual objects without altering the background or the
way multiple objects interact within a scene. Motivated by the important role
of attention in human perception, we tackle this limitation by introducing
unsupervised attention mechanisms that are jointly and adversarially trained with
the generators and discriminators. We demonstrate qualitatively and
quantitatively that our approach is able to attend to relevant regions in the
image without requiring supervision, and that by doing so it achieves more
realistic mappings compared to recent approaches. | [
"cs.CV",
"cs.AI"
] |
In the environment of fair lending laws and the General Data Protection
Regulation (GDPR), the ability to explain a model's prediction is of paramount
importance. High quality explanations are the first step in assessing fairness.
Counterfactuals are valuable tools for explainability. They provide actionable,
comprehensible explanations for the individual who is subject to decisions made
from the prediction. It is important to find a baseline for producing them. We
propose a simple method for generating counterfactuals by using gradient
descent to search in the latent space of an autoencoder and benchmark our
method against approaches that search for counterfactuals in feature space.
Additionally, we implement metrics to concretely evaluate the quality of the
counterfactuals. We show that latent space counterfactual generation strikes a
balance between the speed of basic feature gradient descent methods and the
sparseness and authenticity of counterfactuals generated by more complex
feature space oriented techniques. | [
"cs.LG"
] |
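A minimal sketch of the latent-space counterfactual search described above;
`encoder`, `decoder`, and `clf` are assumed pretrained callables, and the step
size, iteration budget, and distance penalty are illustrative choices:

```python
import torch

def latent_counterfactual(x, encoder, decoder, clf, target_class,
                          steps=200, lr=0.05, dist_weight=0.1):
    """Gradient-descend in the autoencoder's latent space until the decoded
    point is classified as `target_class` (sketch, not the paper's exact
    objective)."""
    z = encoder(x).detach().clone().requires_grad_(True)
    z0 = z.detach().clone()
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_cf = decoder(z)
        logits = clf(x_cf)
        # Push toward the target class; stay close to the original latent code.
        loss = torch.nn.functional.cross_entropy(
            logits, torch.tensor([target_class])) + dist_weight * (z - z0).pow(2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
        if logits.argmax(dim=-1).item() == target_class:
            break
    return decoder(z).detach()
```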
In this work we propose a new neural network architecture that efficiently
implements and learns general purpose set-equivariant functions. Such a
function f maps a set of entities x = {x1, . . . , xn} from one domain to a set
of same cardinality y = f (x) = {y1, . . . , yn} in another domain regardless
of the ordering of the entities. The architecture is based on a gated recurrent
network which is applied iteratively to all entities individually and at the
same time syncs with the progression of the whole population. Reminiscent of
this pattern, which can frequently be observed in nature, we call our approach
SWARM mapping. Set-equivariant and, more generally, permutation-invariant
functions are important building blocks for many state-of-the-art machine
learning approaches, even in applications where permutation invariance is not
of primary interest, as seen in the recent success of attention-based
transformer models (Vaswani et al., 2017). Accordingly, we demonstrate the
power and usefulness of SWARM mappings in different applications. We compare
the performance of our approach with another recently proposed set-equivariant
function, the Set Transformer (Lee et al., 2018), and demonstrate that models
based solely on SWARM layers give state-of-the-art results. | [
"cs.LG",
"stat.ML"
] |
Generative Adversarial Networks have been crucial in the developments made in
unsupervised learning in recent times. Exemplars of image synthesis from text
or other images, these networks have shown remarkable improvements over
conventional methods in terms of performance. Built on the adversarial
training philosophy, these networks aim to estimate the underlying distribution
of the real data and then use it to generate synthetic data.
Based on this fundamental principle, several frameworks can be generated that
are paragon implementations in several real-life applications such as art
synthesis, generation of high resolution outputs and synthesis of images from
human drawn sketches, to name a few. While theoretically GANs present better
results and prove to be an improvement over conventional methods in many
factors, the implementation of these frameworks for dedicated applications
remains a challenge. This study explores and presents a taxonomy of these
frameworks and their use in various image to image synthesis and text to image
synthesis applications. The basic GANs, as well as a variety of different niche
frameworks, are critically analyzed. The advantages of GANs for image
generation over conventional methods, as well as their disadvantages relative to other
frameworks are presented. The future applications of GANs in industries such as
healthcare, art and entertainment are also discussed. | [
"cs.LG",
"cs.CV",
"eess.IV",
"stat.ML"
] |
Automatically detecting/segmenting object(s) that blend in with their
surroundings is difficult for current models. A major challenge is that the
intrinsic similarities between such foreground objects and background
surroundings make the features extracted by deep models indistinguishable. To
overcome this challenge, an ideal model should be able to seek valuable, extra
clues from the given scene and incorporate them into a joint learning framework
for representation co-enhancement. With this inspiration, we design a novel
Mutual Graph Learning (MGL) model, which generalizes the idea of conventional
mutual learning from regular grids to the graph domain. Specifically, MGL
decouples an image into two task-specific feature maps -- one for roughly
locating the target and the other for accurately capturing its boundary details
-- and fully exploits the mutual benefits by recurrently reasoning their
high-order relations through graphs. Importantly, in contrast to most mutual
learning approaches that use a shared function to model all between-task
interactions, MGL is equipped with typed functions for handling different
complementary relations to maximize information interactions. Experiments on
challenging datasets, including CHAMELEON, CAMO and COD10K, demonstrate the
effectiveness of our MGL with superior performance to existing state-of-the-art
methods. | [
"cs.CV"
] |
Forecasting human trajectories is critical for tasks such as robot crowd
navigation and autonomous driving. Modeling social interactions is of great
importance for accurate group-wise motion prediction. However, most existing
methods do not consider information about coherence within the crowd, but
rather only pairwise interactions. In this work, we propose a novel framework,
coherent motion aware graph convolutional network (CoMoGCN), for trajectory
prediction in crowded scenes with group constraints. First, we cluster
pedestrian trajectories into groups according to motion coherence. Then, we use
graph convolutional networks to aggregate crowd information efficiently. The
CoMoGCN also takes advantage of variational autoencoders to capture the
multimodal nature of the human trajectories by modeling the distribution. Our
method achieves state-of-the-art performance on several different trajectory
prediction benchmarks, and the best average performance among all benchmarks
considered. | [
"cs.CV"
] |
Graphs are ubiquitous data structures for representing interactions between
entities. With an emphasis on the use of graphs to represent chemical
molecules, we explore the task of learning to generate graphs that conform to a
distribution observed in training data. We propose a variational autoencoder
model in which both encoder and decoder are graph-structured. Our decoder
assumes a sequential ordering of graph extension steps and we discuss and
analyze design choices that mitigate the potential downsides of this
linearization. Experiments compare our approach with a wide range of baselines
on the molecule generation task and show that our method is more successful at
matching the statistics of the original dataset on semantically important
metrics. Furthermore, we show that by using appropriate shaping of the latent
space, our model allows us to design molecules that are (locally) optimal in
desired properties. | [
"cs.LG",
"stat.ML"
] |
Applications of machine learning (ML) models and convolutional neural
networks (CNNs) have increased rapidly. Although ML models provide high
accuracy in many applications, recent investigations show that such networks
are highly vulnerable to adversarial attacks. The black-box adversarial attack
is one type of attack in which the attacker has no knowledge of the model or
the training dataset. In this paper, we propose a novel approach to generate a
black-box attack in the sparse domain, where the most important information of
an image can be observed. Our investigation shows that large sparse components
play a critical role in the performance of image classifiers. Under this
presumption, to generate an adversarial example, we transfer an image into a
sparse domain and apply a threshold to choose only the k largest components. In
contrast to very recent works that randomly perturb the k low-frequency (LoF)
components, we perturb the k largest sparse (LaS) components either randomly
(query-based) or in the direction of the most correlated sparse signal from a
different class. We show that LaS components contain some middle- or
high-frequency information, which can help us fool the classifiers with fewer
queries. We also demonstrate the effectiveness of this approach by fooling the
TensorFlow Lite (TFLite) model of the Google Cloud Vision platform. Mean
squared error (MSE) and peak signal-to-noise ratio (PSNR) are used as quality
metrics. We present a theoretical proof that connects these metrics to the
level of perturbation in the sparse domain. We tested our adversarial examples
against state-of-the-art CNNs and support vector machine (SVM) classifiers on
color and grayscale image datasets. The results show the proposed method can
substantially increase the misclassification rate of the classifiers. | [
"cs.LG",
"cs.CR"
] |
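A hedged sketch of perturbing only the k largest sparse (LaS) components,
using a DCT as the sparsifying transform; the transform choice, k, and noise
scale are assumptions, and the query loop against a classifier is omitted:

```python
import numpy as np
from scipy.fft import dctn, idctn

def las_perturb(image: np.ndarray, k: int = 500, eps: float = 0.05) -> np.ndarray:
    """Perturb only the k largest-magnitude sparse (DCT) components of an image."""
    coeffs = dctn(image, norm="ortho")
    flat = np.abs(coeffs).ravel()
    # Indices of the k largest coefficients -- the 'LaS' components.
    top_k = np.argpartition(flat, -k)[-k:]
    noise = np.zeros_like(flat)
    noise[top_k] = eps * np.random.randn(k) * flat[top_k]  # random (query-based) variant
    perturbed = coeffs + noise.reshape(coeffs.shape)
    return np.clip(idctn(perturbed, norm="ortho"), 0.0, 1.0)

adv = las_perturb(np.random.rand(32, 32))  # grayscale example in [0, 1]
```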
A novel algorithm is proposed for segmenting an image into multiple levels
using its mean and variance. Starting from the extreme pixel values at both
ends of the histogram plot, the algorithm is applied recursively on sub-ranges
computed from the previous step, so as to find a threshold level and a new
sub-range for the next step, until no significant improvement in image quality
can be achieved. The method makes use of the fact that a number of
distributions tend towards Dirac delta function, peaking at the mean, in the
limiting condition of vanishing variance. The procedure naturally provides for
variable size segmentation with bigger blocks near the extreme pixel values and
finer divisions around the mean or other chosen value for better visualization.
Experiments on a variety of images show that the new algorithm effectively
segments images in very little computational time. | [
"cs.CV"
] |
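A rough sketch of the recursive mean-and-variance splitting described above;
the mean +/- std split points and the vanishing-variance stopping rule are our
reading of the abstract, with the details assumed:

```python
import numpy as np

def multilevel_thresholds(pixels: np.ndarray, min_std: float = 2.0) -> list:
    """Recursively split a pixel-value range at mean +/- std until the
    variance within a sub-range is negligible (sketch of the idea only)."""
    thresholds = []

    def recurse(lo: float, hi: float):
        vals = pixels[(pixels >= lo) & (pixels <= hi)]
        if vals.size < 2 or vals.std() < min_std:
            return  # distribution is near a Dirac delta: stop splitting
        mu, sigma = vals.mean(), vals.std()
        t_lo, t_hi = mu - sigma, mu + sigma
        thresholds.extend([t_lo, t_hi])
        # Bigger blocks near the extremes, finer divisions around the mean.
        recurse(lo, t_lo)
        recurse(t_hi, hi)

    recurse(pixels.min(), pixels.max())
    return sorted(thresholds)

img = np.random.randint(0, 256, (64, 64)).astype(float)
print(multilevel_thresholds(img.ravel()))
```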
We summarized both common and novel predictive models used for stock price
prediction and combined them with technical indices, fundamental
characteristics and text-based sentiment data to predict S&P stock prices. A
66.18% accuracy in S&P 500 index directional prediction and 62.09% accuracy in
individual stock directional prediction was achieved by combining different
machine learning models such as Random Forest and LSTM together into
state-of-the-art ensemble models. The data we use contains weekly historical
prices, finance reports, and text information from news items associated with
518 different common stocks issued by current and former S&P 500 large-cap
companies, from January 1, 2000 to December 31, 2019. Our study's innovation
includes utilizing deep language models to categorize and infer financial news
item sentiment; fusing different models containing different combinations of
variables and stocks to jointly make predictions; and overcoming the
insufficient data problem for machine learning models in time series by using
data across different stocks. | [
"stat.ML",
"cs.LG"
] |
Density estimation is an important technique for characterizing distributions
given observations. Much existing research on density estimation has focused on
cases wherein the data lies in a Euclidean space. However, some kinds of data
are not well-modeled by supposing that their underlying geometry is Euclidean.
Instead, it can be useful to model such data as lying on a manifold with
some known structure. For instance, some kinds of data may be known to lie on
the surface of a sphere. We study the problem of estimating densities on
manifolds. We propose a method, inspired by the literature on "dequantization,"
which we interpret through the lens of a coordinate transformation of an
ambient Euclidean space and a smooth manifold of interest. Using methods from
normalizing flows, we apply this method to the dequantization of smooth
manifold structures in order to model densities on the sphere, tori, and the
orthogonal group. | [
"stat.ML",
"cs.LG"
] |
The robustness of the much-used Graph Convolutional Networks (GCNs) to
perturbations of their input is becoming a topic of increasing importance. In
this paper, the random GCN is introduced for which a random matrix theory
analysis is possible. This analysis suggests that if the graph is sufficiently
perturbed, or in the extreme case random, then the GCN fails to benefit from
the node features. It is furthermore observed that enhancing the message
passing step in GCNs by adding the node feature kernel to the adjacency matrix
of the graph structure solves this problem. An empirical study of a GCN
utilised for node classification on six real datasets further confirms the
theoretical findings and demonstrates that perturbations of the graph structure
can result in GCNs performing significantly worse than Multi-Layer Perceptrons
run on the node features alone. In practice, adding a node feature kernel to
the message passing of perturbed graphs results in a significant improvement of
the GCN's performance, thereby rendering it more robust to graph perturbations.
Our code is publicly available at: https://github.com/ChangminWu/RobustGCN. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
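A small sketch of the suggested fix, adding a node feature kernel to the
adjacency in one GCN propagation step; the linear kernel XX^T, its scaling, and
the mixing weight are assumptions:

```python
import torch

def gcn_layer_with_feature_kernel(A: torch.Tensor, X: torch.Tensor,
                                  W: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """One GCN step over A_hat + alpha * K, where K = X X^T is a (here:
    linear) node feature kernel. Sketch under assumed details."""
    n = A.shape[0]
    A_hat = A + torch.eye(n)                                # self-loops
    d = A_hat.sum(dim=1)
    A_norm = A_hat / torch.sqrt(d[:, None] * d[None, :])    # symmetric normalization
    K = X @ X.T                                             # node feature kernel
    K = K / K.abs().max().clamp(min=1e-8)                   # keep scales comparable
    return torch.relu((A_norm + alpha * K) @ X @ W)

A = (torch.rand(10, 10) > 0.7).float(); A = ((A + A.T) > 0).float()
X = torch.randn(10, 8); W = torch.randn(8, 4)
H = gcn_layer_with_feature_kernel(A, X, W)
```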
A graph is a powerful concept for representation of relations between pairs
of entities. Data with underlying graph structure can be found across many
disciplines and there is a natural desire for understanding such data better.
Deep learning (DL) has achieved significant breakthroughs in a variety of
machine learning tasks in recent years, especially where data is structured on
a grid, such as in text, speech, or image understanding. However, surprisingly
little has been done to explore the applicability of DL on arbitrary
graph-structured data directly.
The goal of this thesis is to investigate architectures for DL on graphs and
study how to transfer, adapt or generalize concepts that work well on
sequential and image data to this domain. We concentrate on two important
primitives: embedding graphs or their nodes into a continuous vector space
representation (encoding) and, conversely, generating graphs from such vectors
back (decoding). To that end, we make the following contributions.
First, we introduce Edge-Conditioned Convolutions (ECC), a convolution-like
operation on graphs performed in the spatial domain where filters are
dynamically generated based on edge attributes. The method is used to encode
graphs with arbitrary and varying structure.
Second, we propose SuperPoint Graph, an intermediate point cloud
representation with rich edge attributes encoding the contextual relationship
between object parts. Based on this representation, ECC is employed to segment
large-scale point clouds without major sacrifice in fine details.
Third, we present GraphVAE, a graph generator allowing us to decode graphs
with a variable but upper-bounded number of nodes, making use of approximate graph
matching for aligning the predictions of an autoencoder with its inputs. The
method is applied to the task of molecule generation. | [
"cs.LG",
"cs.CV",
"cs.NE",
"stat.ML"
] |
Recent advances in Generative Adversarial Learning allow for new modalities
of image super-resolution by learning low to high resolution mappings. In this
paper we present our work using Generative Adversarial Networks (GANs) with
applications to overhead and satellite imagery. We have experimented with
several state-of-the-art architectures. We propose a GAN-based architecture
using densely connected convolutional neural networks (DenseNets) to be able to
super-resolve overhead imagery with a factor of up to 8x. We have also
investigated resolution limits of these networks. We report results on several
publicly available datasets, including SpaceNet data and IARPA Multi-View
Stereo Challenge, and compare performance with other state-of-the-art
architectures. | [
"cs.CV"
] |
Learning disentangled representations from unlabelled data is a
non-trivial problem. In this paper we propose Information Maximising
Autoencoder (InfoAE) where the encoder learns powerful disentangled
representation through maximizing the mutual information between the
representation and given information in an unsupervised fashion. We have
evaluated our model on the MNIST dataset and achieved 98.9 ($\pm .1$) $\%$ test
accuracy using completely unsupervised training. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Nowadays, liquid rocket engines use closed-loop control at most near steady
operating conditions. The control of the transient phases is traditionally
performed in open-loop due to highly nonlinear system dynamics. This situation
is unsatisfactory, in particular for reusable engines. The open-loop control
system cannot provide optimal engine performance due to external disturbances
or the degeneration of engine components over time. In this paper, we study a
deep reinforcement learning approach for optimal control of a generic
gas-generator engine's continuous start-up phase. It is shown that the learned
policy can reach different steady-state operating points and convincingly adapt
to changing system parameters. A quantitative comparison with carefully tuned
open-loop sequences and PID controllers is included. The deep reinforcement
learning controller achieves the highest performance and requires only minimal
computational effort to calculate the control action, which is a big advantage
over approaches that require online optimization, such as model predictive
control. | [
"cs.LG",
"cs.SY",
"eess.SY",
"math.OC",
"stat.ML"
] |
Nowadays, Light Detection And Ranging (LiDAR) has been widely used in
autonomous vehicles for perception and localization. However, the cost of a
high-resolution LiDAR is still prohibitively expensive, while its
low-resolution counterpart is much more affordable. Therefore, using
low-resolution LiDAR for autonomous driving perception tasks instead of
high-resolution LiDAR is an economically feasible solution. In this paper, we
propose a novel framework for 3D object detection in Bird-Eye View (BEV) using
a low-resolution LiDAR and a monocular camera. Taking the low-resolution LiDAR
point cloud and the monocular image as input, our depth completion network is
able to produce a dense point cloud that is subsequently processed by a
voxel-based network for 3D object detection. Evaluated on the KITTI dataset,
the experimental results show that the proposed approach performs significantly
better than directly applying the 16-line LiDAR point cloud for object
detection. For both easy and moderate cases, our detection results are
comparable to those from 64-line high-resolution LiDAR. The network
architecture and performance evaluations are analyzed in detail. | [
"cs.CV",
"cs.RO"
] |
For applications such as airport border control, biometric technologies that
can process many capture subjects quickly, efficiently, with weak supervision,
and with minimal discomfort are desirable. Facial recognition is particularly
appealing because it is minimally invasive yet offers relatively good
recognition performance. Unfortunately, the combination of weak supervision and
minimal invasiveness makes even highly accurate facial recognition systems
susceptible to spoofing via presentation attacks. Thus, there is great demand
for an effective and low-cost system capable of rejecting such attacks. To this
end we introduce PARAPH -- a novel hardware extension that exploits different
measurements of light polarization to yield an image space in which
presentation media are readily discernible from Bona Fide facial
characteristics. The PARAPH system is inexpensive with an added cost of less
than 10 US dollars. The system makes two polarization measurements in rapid
succession, allowing them to be approximately pixel-aligned, with a frame rate
limited by the camera, not the system. There are no moving parts above the
molecular level, due to the efficient use of twisted nematic liquid crystals.
We present evaluation images using three presentation attack media next to an
actual face -- high quality photos on glossy and matte paper and a video of the
face on an LCD. In each case, the actual face in the image generated by PARAPH
is structurally discernible from the presentations, which appear either as
noise (print attacks) or saturated images (replay attacks). | [
"cs.CV"
] |
Explanation methods applied to sequential models for multivariate time series
prediction are receiving more attention in machine learning literature. While
current methods perform well at providing instance-wise explanations, they
struggle to efficiently and accurately make attributions over long periods of
time and with complex feature interactions. We propose WinIT, a framework for
evaluating feature importance in time series prediction settings by quantifying
the shift in predictive distribution over multiple instances in a windowed
setting. Comprehensive empirical evidence shows our method improves on the
previous state-of-the-art, FIT, by capturing temporal dependencies in feature
importance. We also demonstrate how the solution improves the appropriate
attribution of features within time steps, which existing interpretability
methods often fail to do. We compare with baselines on simulated and real-world
clinical data. WinIT achieves 2.47x better performance than FIT and other
feature importance methods on the real-world clinical MIMIC mortality task. The
code for this work is available at https://github.com/layer6ai-labs/WinIT. | [
"cs.LG"
] |
In saliency detection, every pixel needs contextual information to make
saliency prediction. Previous models usually incorporate contexts holistically.
However, for each pixel, usually only part of its context region is useful and
contributes to its prediction, while some other part may serve as noises and
distractions. In this paper, we propose a novel pixel-wise contextual attention
network, i.e., PiCANet, to learn to selectively attend to informative context
locations at each pixel. Specifically, PiCANet generates an attention map over
the context region of each pixel, where each attention weight corresponds to
the relevance of a context location w.r.t. the referred pixel. Then, attentive
contextual features can be constructed via selectively incorporating the
features of useful context locations with the learned attention. We propose
three specific formulations of the PiCANet via embedding the pixel-wise
contextual attention mechanism into the pooling and convolution operations,
attending to global or local contexts. All three models are fully
differentiable and can be integrated with CNNs with joint training. We
introduce the proposed PiCANets into a U-Net architecture for salient object
detection. Experimental results indicate that the proposed PiCANets can
significantly improve the saliency detection performance. The generated global
and local attention can learn to incorporate global contrast and smoothness,
respectively, which help localize salient objects more accurately and highlight
them more uniformly. Consequently, our saliency model performs favorably
against other state-of-the-art methods. Moreover, we also validate that
PiCANets can also improve semantic segmentation and object detection
performances, which further demonstrates their effectiveness and generalization
ability. | [
"cs.CV"
] |
We describe the solution of team ISMLL for the ECML-PKDD 2016 Discovery
Challenge on Bank Card Usage for both tasks. Our solution is based on three
pillars: gradient boosted decision trees as a strong regression and
classification model, an intensive search for good hyperparameter
configurations, and strong features that exploit geolocation information. This
approach achieved the best performance on the public leaderboard for the first
task and a decent fourth position for the second task. | [
"cs.LG",
"cs.AI"
] |
Generic object detection algorithms have proven their excellent performance
in recent years. However, object detection on underwater datasets is still less
explored. In contrast to generic datasets, underwater images usually have color
shift and low contrast; sediment would cause blurring in underwater images. In
addition, underwater creatures often appear close to each other in images due
to their living habits. To address these issues, our work investigates
augmentation policies to simulate overlapping, occluded and blurred objects,
and we construct a model capable of achieving better generalization. We propose
an augmentation method called RoIMix, which characterizes interactions among
images. Proposals extracted from different images are mixed together. Previous
data augmentation methods operate on a single image while we apply RoIMix to
multiple images to create enhanced samples as training data. Experiments show
that our proposed method improves the performance of region-based object
detectors on both Pascal VOC and URPC datasets. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
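A hedged sketch of mixing region proposals from different images in the spirit
of RoIMix; the Beta-distributed mixing weight follows the mixup family and may
differ from the paper's exact formulation:

```python
import numpy as np

def roi_mix(roi_a: np.ndarray, roi_b: np.ndarray, alpha: float = 1.5) -> tuple:
    """Mix two RoI crops (resized to the same shape) to simulate
    overlapping/occluded objects. Returns the mixed patch and lambda."""
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)   # keep the dominant proposal recognizable (assumed)
    mixed = lam * roi_a + (1.0 - lam) * roi_b
    return mixed, lam

a = np.random.rand(64, 64, 3); b = np.random.rand(64, 64, 3)
patch, lam = roi_mix(a, b)
```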
Various modifications of the Transformer have recently been used to solve the
time-series forecasting problem. We propose Query Selector, an efficient,
deterministic algorithm for building a sparse attention matrix. Experiments
show it achieves state-of-the-art results on the ETT, Helpdesk and BPI'12
datasets. | [
"cs.LG"
] |
Spatial Independent Component Analysis (ICA) is an increasingly used
data-driven method to analyze functional Magnetic Resonance Imaging (fMRI)
data. To date, it has been used to extract meaningful patterns without prior
information. However, ICA is not robust to mild data variation and remains a
parameter-sensitive algorithm. The validity of the extracted patterns is hard
to establish, as well as the significance of differences between patterns
extracted from different groups of subjects. We start from a generative model
of the fMRI group data to introduce a probabilistic ICA pattern-extraction
algorithm, called CanICA (Canonical ICA). Thanks to an explicit noise model and
canonical correlation analysis, our method is auto-calibrated and identifies
the group-reproducible data subspace before performing ICA. We compare our
method to state-of-the-art multi-subject fMRI ICA methods and show that the
features extracted are more reproducible. | [
"cs.CV",
"stat.AP"
] |
Interpreting how deep neural networks (DNNs) make predictions is a vital
field in artificial intelligence; the lack of interpretability hinders the wide
application of DNNs. Visualization of learned representations helps us humans
understand the vision of DNNs. In this work, visualized images that can
activate the neural network toward the target classes are generated by a
back-propagation method. Here, rotation and scaling operations are applied to
introduce transformation invariance into the image generation process, which we
find significantly improves the visualization effect. Finally, we show some
cases in which this method can help us gain insight into neural networks. | [
"cs.CV",
"68T45",
"I.2.10"
] |
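A minimal sketch of class visualization by back-propagation with random
rotation and scaling applied at each step; the jitter ranges, optimizer, and L2
prior are assumptions:

```python
import torch
import torchvision.transforms as T

def visualize_class(model: torch.nn.Module, target: int,
                    steps: int = 256, lr: float = 0.05) -> torch.Tensor:
    """Optimize an input image to maximally activate `target`, applying
    random rotation/scaling each step for transformation invariance."""
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    jitter = T.RandomAffine(degrees=10, scale=(0.9, 1.1))  # rotation + scaling
    model.eval()
    for _ in range(steps):
        out = model(jitter(img))
        loss = -out[0, target] + 1e-4 * img.pow(2).sum()   # activation + L2 prior
        opt.zero_grad(); loss.backward(); opt.step()
    return img.detach().clamp(0, 1)

# Hypothetical usage:
# visualize_class(torchvision.models.resnet18(weights=None), target=42)
```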
In value-based deep reinforcement learning methods, approximation of value
functions induces overestimation bias and leads to suboptimal policies. We show
that in deep actor-critic methods that aim to overcome the overestimation bias,
if the reinforcement signals received by the agent have a high variance, a
significant underestimation bias arises. To minimize the underestimation, we
introduce a parameter-free, novel deep Q-learning variant. Our Q-value update
rule combines the notions behind Clipped Double Q-learning and Maxmin
Q-learning by computing the critic objective through the nested combination of
maximum and minimum operators to bound the approximate value estimates. We
evaluate our modification on the suite of several OpenAI Gym continuous control
tasks, improving the state-of-the-art in every environment tested. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
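A hedged sketch of a target built from nested minimum and maximum operators
over an ensemble of critics, following the Maxmin ingredient named above; the
exact nesting and how it is combined with Clipped Double Q-learning are not
specified by the abstract:

```python
import torch

def nested_minmax_target(rewards: torch.Tensor, next_q: torch.Tensor,
                         gamma: float = 0.99) -> torch.Tensor:
    """Bounded Q-learning target from an ensemble of critics.

    next_q: (n_critics, batch, n_actions) target-critic estimates at s'.
    Assumed nesting: the minimum across critics per action bounds the
    estimate; the maximum over actions then acts greedily on it.
    """
    per_action_min = next_q.min(dim=0).values           # (batch, n_actions)
    bounded_value = per_action_min.max(dim=-1).values   # (batch,)
    return rewards + gamma * bounded_value

y = nested_minmax_target(torch.randn(8), torch.randn(3, 8, 4))
```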
Reward-free reinforcement learning (RL) is a framework which is suitable for
both the batch RL setting and the setting where there are many reward functions
of interest. During the exploration phase, an agent collects samples without
using a pre-specified reward function. After the exploration phase, a reward
function is given, and the agent uses samples collected during the exploration
phase to compute a near-optimal policy. Jin et al. [2020] showed that in the
tabular setting, the agent only needs to collect a polynomial number of samples
(in terms of the number of states, the number of actions, and the planning
horizon) for reward-free RL. However, in practice, the number of states and
actions can be large, and thus function approximation schemes are required for
generalization. In this work, we give both positive and negative results for
reward-free RL with linear function approximation. We give an algorithm for
reward-free RL in the linear Markov decision process setting where both the
transition and the reward admit linear representations. The sample complexity
of our algorithm is polynomial in the feature dimension and the planning
horizon, and is completely independent of the number of states and actions. We
further give an exponential lower bound for reward-free RL in the setting where
only the optimal $Q$-function admits a linear representation. Our results imply
several interesting exponential separations on the sample complexity of
reward-free RL. | [
"cs.LG",
"cs.AI",
"math.OC",
"stat.ML"
] |
Existing self-supervised learning methods learn representations by means of
pretext tasks that are either (1) discriminating, explicitly specifying which
features should be separated, or (2) aligning, precisely indicating which
features should be brought close together, but they ignore how to jointly and
in a principled way define which features are to be repelled and which to be
attracted. In this work, we combine the positive aspects of the discriminating
and aligning methods and design a hybrid method that addresses the above issue.
Our method explicitly specifies the repulsion and attraction mechanisms,
respectively, via a discriminative predictive task and by concurrently
maximizing mutual information between paired views sharing redundant
information. We qualitatively and quantitatively show that our proposed model
learns better features that are more effective for diverse downstream tasks,
ranging from classification to semantic segmentation. Our experiments on nine
established benchmarks show that the proposed model consistently outperforms
existing state-of-the-art results of self-supervised and transfer learning
protocols. | [
"cs.CV",
"cs.LG"
] |
Following the success of dot-product attention in Transformers, numerous
approximations have been recently proposed to address its quadratic complexity
with respect to the input length. However, all approximations thus far have
ignored the contribution of the $\textit{value vectors}$ to the quality of
approximation. In this work, we argue that research efforts should be directed
towards approximating the true output of the attention sub-layer, which
includes the value vectors. We propose a value-aware objective, and show
theoretically and empirically that an optimal approximation of a value-aware
objective substantially outperforms an optimal approximation that ignores
values, in the context of language modeling. Moreover, we show that the choice
of kernel function for computing attention similarity can substantially affect
the quality of sparse approximations, where kernel functions that are less
skewed are more affected by the value vectors. | [
"cs.LG",
"cs.CL"
] |
Neural ordinary differential equations (NODEs) presented a new paradigm to
construct (continuous-time) neural networks. While showing several good
characteristics in terms of the number of parameters and the flexibility in
constructing neural networks, they also have a couple of well-known
limitations: i) theoretically NODEs learn homeomorphic mapping functions only,
and ii) sometimes NODEs show numerical instability in solving integral
problems. To handle this, many enhancements have been proposed. To our
knowledge, however, integrating attention into NODEs has been overlooked for a
while. To this end, we present a novel method of attentive dual co-evolving
NODE (ACE-NODE): one main NODE for a downstream machine learning task and the
other for providing attention to the main NODE. Our ACE-NODE supports both
pairwise and elementwise attention. In our experiments, our method outperforms
existing NODE-based and non-NODE-based baselines in almost all cases by
non-trivial margins. | [
"cs.LG"
] |
This note discusses a simple modification of cross-conformal prediction
inspired by recent work on e-values. The precursor of conformal prediction
developed in the 1990s by Gammerman, Vapnik, and Vovk was also based on
e-values and is called conformal e-prediction in this note. Replacing e-values
by p-values led to conformal prediction, which has important advantages over
conformal e-prediction without obvious disadvantages. The situation with
cross-conformal prediction is, however, different: whereas for cross-conformal
prediction validity is only an empirical fact (and can be broken with excessive
randomization), this note draws the reader's attention to the obvious fact that
cross-conformal e-prediction enjoys a guaranteed property of validity. | [
"cs.LG",
"stat.ML",
"68T05"
] |
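A short statement of why the averaging step preserves validity (our notation;
the note's formal claim may differ): if each fold yields an e-value $E_k$ with
$\mathbb{E}[E_k] \le 1$ under the null, then linearity of expectation and
Markov's inequality give

```latex
\mathbb{E}\big[\bar{E}\big] = \frac{1}{K}\sum_{k=1}^{K}\mathbb{E}[E_k] \le 1,
\qquad
\mathbb{P}\Big(\bar{E} \ge \tfrac{1}{\alpha}\Big) \le \alpha\,\mathbb{E}\big[\bar{E}\big] \le \alpha .
```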
Iterative Closest Point (ICP) solves the rigid point cloud registration
problem iteratively in two steps: (1) make hard assignments of spatially
closest point correspondences, and then (2) find the least-squares rigid
transformation. The hard assignments of closest point correspondences based on
spatial distances are sensitive to the initial rigid transformation and
noisy/outlier points, which often cause ICP to converge to wrong local minima.
In this paper, we propose RPM-Net -- a deep learning-based approach to rigid
point cloud registration that is less sensitive to initialization and more
robust. To
this end, our network uses the differentiable Sinkhorn layer and annealing to
get soft assignments of point correspondences from hybrid features learned from
both spatial coordinates and local geometry. To further improve registration
performance, we introduce a secondary network to predict optimal annealing
parameters. Unlike some existing methods, our RPM-Net handles missing
correspondences and point clouds with partial visibility. Experimental results
show that our RPM-Net achieves state-of-the-art performance compared to
existing non-deep learning and recent deep learning methods. Our source code is
available at the project website https://github.com/yewzijian/RPMNet . | [
"cs.CV"
] |
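A compact sketch of the differentiable Sinkhorn normalization used to obtain
soft correspondences; the temperature and iteration count are illustrative, and
RPM-Net's outlier handling is omitted:

```python
import torch

def sinkhorn(scores: torch.Tensor, n_iters: int = 20, temperature: float = 0.1):
    """Turn an (N, M) match-score matrix into a doubly-stochastic soft
    assignment by alternating row/column normalization in log space."""
    log_alpha = scores / temperature   # annealing would lower this over time
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)  # rows
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)  # cols
    return log_alpha.exp()             # soft correspondences, differentiable

P = sinkhorn(torch.randn(100, 100))
print(P.sum(dim=1)[:3])  # rows approximately sum to one
```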
Identifying optimal values for a high-dimensional set of hyperparameters is a
problem that has received growing attention given its importance to large-scale
machine learning applications such as neural architecture search. Recently
developed optimization methods can be used to select thousands or even millions
of hyperparameters. Such methods often yield overfit models, however, leading
to poor performance on unseen data. We argue that this overfitting results from
using the standard hyperparameter optimization objective function. Here we
present an alternative objective that is equivalent to a Probably Approximately
Correct-Bayes (PAC-Bayes) bound on the expected out-of-sample error. We then
devise an efficient gradient-based algorithm to minimize this objective; the
proposed method has asymptotic space and time complexity equal to or better
than other gradient-based hyperparameter optimization methods. We show that
this new method significantly reduces out-of-sample error when applied to
hyperparameter optimization problems known to be prone to overfitting. | [
"stat.ML",
"cs.LG"
] |
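For reference, a standard McAllester-style PAC-Bayes bound of the kind alluded
to above (not the paper's exact objective): with probability at least
$1-\delta$ over an i.i.d. sample of size $n$, for all posteriors $Q$,

```latex
\mathbb{E}_{\theta \sim Q}\big[L(\theta)\big]
\;\le\;
\mathbb{E}_{\theta \sim Q}\big[\hat{L}_n(\theta)\big]
+ \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```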
Machine learning systems have received much attention recently for their
ability to achieve expert-level performance on clinical tasks, particularly in
medical imaging. Here, we examine the extent to which state-of-the-art deep
learning classifiers trained to yield diagnostic labels from X-ray images are
biased with respect to protected attributes. We train convolutional neural
networks to predict 14 diagnostic labels in 3 prominent public chest X-ray
datasets: MIMIC-CXR, Chest-Xray8, CheXpert, as well as a multi-site aggregation
of all those datasets. We evaluate the TPR disparity -- the difference in true
positive rates (TPR) -- among different protected attributes such as patient
sex, age, race, and insurance type as a proxy for socioeconomic status. We
demonstrate that TPR disparities exist in the state-of-the-art classifiers in
all datasets, for all clinical tasks, and all subgroups. A multi-source dataset
corresponds to the smallest disparities, suggesting one way to reduce bias. We
find that TPR disparities are not significantly correlated with a subgroup's
proportional disease burden. As clinical models move from papers to products,
we encourage clinical decision makers to carefully audit for algorithmic
disparities prior to deployment. Our code can be found at,
https://github.com/LalehSeyyed/CheXclusion | [
"cs.CV",
"cs.AI",
"cs.LG",
"eess.IV",
"stat.ML"
] |
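A small sketch of the TPR-disparity audit described above; column names and
grouping are hypothetical, and the linked repository is the authoritative
implementation:

```python
import pandas as pd

def tpr_disparity(df: pd.DataFrame, group_col: str = "sex") -> pd.Series:
    """Per-group true positive rate minus the overall TPR, computed on
    positive cases only. Assumes binary columns 'y_true' and 'y_pred'."""
    positives = df[df["y_true"] == 1]
    overall_tpr = positives["y_pred"].mean()
    group_tpr = positives.groupby(group_col)["y_pred"].mean()
    return group_tpr - overall_tpr   # negative values: under-diagnosed subgroups

# Hypothetical usage with model outputs for one diagnostic label:
# print(tpr_disparity(predictions_df, group_col="insurance"))
```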
Interpretation of the underlying mechanisms of Deep Convolutional Neural
Networks has become an important aspect of research in the field of deep
learning due to their applications in high-risk environments. To explain these
black-box architectures there have been many methods applied so the internal
decisions can be analyzed and understood. In this paper, building on top of
Score-CAM, we introduce an enhanced visual explanation in terms of visual
sharpness called SS-CAM, which produces centralized localization of object
features within an image through a smooth operation. We evaluate our method on
the ILSVRC 2012 Validation dataset, which outperforms Score-CAM on both
faithfulness and localization tasks. | [
"cs.CV"
] |
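One plausible reading of the smoothing step, sketched below in the spirit of SmoothGrad: average the base CAM over noise-perturbed copies of the input. The `score_cam` argument is a stand-in for any base CAM routine; the actual SS-CAM operation may differ.

```python
# Hedged sketch: smooth a saliency map by averaging over noisy inputs.
import torch

def smooth_cam(score_cam, model, x, n_samples=16, sigma=0.1):
    """score_cam: callable (model, image) -> saliency map; a stand-in."""
    maps = [score_cam(model, x + sigma * torch.randn_like(x))
            for _ in range(n_samples)]
    return torch.stack(maps).mean(dim=0)
```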
We address the challenging problem of semi-supervised learning in the context
of multiple visual interpretations of the world by finding consensus in a graph
of neural networks. Each graph node is a scene interpretation layer, while each
edge is a deep net that transforms one layer at one node into another from a
different node. During the supervised phase edge networks are trained
independently. During the next unsupervised stage edge nets are trained on the
pseudo-ground truth provided by consensus among multiple paths that reach the
nets' start and end nodes. These paths act as ensemble teachers for any given
edge, and strong consensus is used as a high-confidence supervisory signal. The
unsupervised learning process is repeated over several generations, in which
each edge becomes a "student" and also part of different ensemble "teachers"
for training other students. By optimizing such consensus between different
paths, the graph reaches consistency and robustness over multiple
interpretations and generations, in the face of unknown labels. We give
theoretical justifications of the proposed idea and validate it on a large
dataset. We show how prediction of different representations such as depth,
semantic segmentation, surface normals and pose from RGB input could be
effectively learned through self-supervised consensus in our graph. We also
compare to state-of-the-art methods for multi-task and semi-supervised learning
and show superior performance. | [
"cs.CV"
] |
We present a real-time approach for image-based localization within large
scenes that have been reconstructed offline using structure from motion (SfM).
From monocular video, our method continuously computes a precise 6-DOF camera
pose, by efficiently tracking natural features and matching them to 3D points
in the SfM point cloud. Our main contribution lies in efficiently interleaving
a fast keypoint tracker that uses inexpensive binary feature descriptors with a
new approach for direct 2D-to-3D matching. The 2D-to-3D matching avoids the
need for online extraction of scale-invariant features. Instead, offline we
construct an indexed database containing multiple DAISY descriptors per 3D
point extracted at multiple scales. The key to the efficiency of our method
lies in invoking DAISY descriptor extraction and matching sparingly during
localization, and in distributing this computation over a window of successive
frames. This enables the algorithm to run in real-time, without fluctuations in
the latency over long durations. We evaluate the method in large indoor and
outdoor scenes. Our algorithm runs at over 30 Hz on a laptop and at 12 Hz on a
low-power, mobile computer suitable for onboard computation on a quadrotor
micro aerial vehicle. | [
"cs.CV",
"cs.RO"
] |
One of the major characteristics of financial time series is that they
contain a large amount of non-stationary noise, which is challenging for deep
neural networks. People normally use various features to address this problem.
However, the performance of these features depends on the choice of
hyper-parameters. In this paper, we propose to use neural networks to represent
these indicators and train a large network constructed of smaller networks as
feature layers to fine-tune the prior knowledge represented by the indicators.
During back propagation, prior knowledge is transferred from human logic to
machine logic via gradient descent. Prior knowledge acts as a deep belief for
the neural network and teaches it not to be affected by non-stationary noise.
Moreover, co-distillation is applied to distill the structure into a
much smaller size to reduce redundant features and the risk of overfitting. In
addition, the decisions of the smaller networks in terms of gradient descent
are more robust and cautious than those of large networks. In numerical
experiments, we find that our algorithm is faster and more accurate than
traditional methods on real financial datasets. We also conduct experiments to
verify and comprehend the method. | [
"cs.LG",
"q-fin.TR",
"stat.ML"
] |
This paper proposes an efficient neural network (NN) architecture design
methodology called Chameleon that honors given resource constraints. Instead of
developing new building blocks or using computationally-intensive reinforcement
learning algorithms, our approach leverages existing efficient network building
blocks and focuses on exploiting hardware traits and adapting computation
resources to fit target latency and/or energy constraints. We formulate
platform-aware NN architecture search in an optimization framework and propose
a novel algorithm to search for optimal architectures aided by efficient
accuracy and resource (latency and/or energy) predictors. At the core of our
algorithm lies an accuracy predictor built atop Gaussian Process with Bayesian
optimization for iterative sampling. With a one-time building cost for the
predictors, our algorithm produces state-of-the-art model architectures on
different platforms under given constraints in just minutes. Our results show
that adapting computation resources to building blocks is critical to model
performance. Without the addition of any bells and whistles, our models achieve
significant accuracy improvements against state-of-the-art hand-crafted and
automatically designed architectures. We achieve 73.8% and 75.3% top-1 accuracy
on ImageNet at 20ms latency on a mobile CPU and DSP. At reduced latency, our
models achieve up to 8.5% (4.8%) and 6.6% (9.3%) absolute top-1 accuracy
improvements compared to MobileNetV2 and MnasNet, respectively, on a mobile CPU
(DSP), and 2.7% (4.6%) and 5.6% (2.6%) accuracy gains over ResNet-101 and
ResNet-152, respectively, on an Nvidia GPU (Intel CPU). | [
"cs.CV",
"cs.NE"
] |
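The accuracy-predictor loop at the core of the algorithm can be sketched as follows: fit a Gaussian Process to (encoding, accuracy) pairs and score candidates with expected improvement. Encodings and accuracies here are synthetic placeholders, not Chameleon's actual search space.

```python
# Hedged sketch: GP accuracy predictor with expected-improvement sampling.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.random((20, 6))                      # measured architecture encodings
y = 0.7 + 0.1 * X.mean(axis=1)               # their (synthetic) accuracies

gp = GaussianProcessRegressor().fit(X, y)

candidates = rng.random((500, 6))
mu, std = gp.predict(candidates, return_std=True)
best = y.max()
z = (mu - best) / np.maximum(std, 1e-9)
ei = (mu - best) * norm.cdf(z) + std * norm.pdf(z)   # expected improvement
print("next architecture to evaluate:", candidates[ei.argmax()])
```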
Video prediction methods generally consume substantial computing resources in
training and deployment, among which keypoint-based approaches show promising
improvement in efficiency by simplifying dense image prediction to light
keypoint prediction. However, keypoint locations are often modeled only as
continuous coordinates, so noise from semantically insignificant deviations in
videos easily disrupts learning stability, leading to inaccurate keypoint
modeling. In this paper, we design a new grid keypoint learning framework,
aiming at a robust and explainable intermediate keypoint representation for
long-term efficient video prediction. We have two major technical
contributions. First, we detect keypoints by jumping among candidate locations
in our raised grid space and formulate a condensation loss to encourage
meaningful keypoints with strong representative capability. Second, we
introduce a 2D binary map to represent the detected grid keypoints and then
suggest propagating keypoint locations with stochasticity by selecting entries
in the discrete grid space, thus preserving the spatial structure of keypoints
in the long-term horizon for better future frame generation. Extensive
experiments verify that our method outperforms the state-of-the-art stochastic
video prediction methods while saving more than 98% of computing resources. We
also demonstrate our method on a robotic-assisted surgery dataset with
promising results. Our code is available at
https://github.com/xjgaocs/Grid-Keypoint-Learning. | [
"cs.CV"
] |
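The discrete grid representation can be illustrated with a short sketch (assumption: this mirrors the paper's 2D binary map only in spirit; the grid size and conventions are made up):

```python
# Hedged sketch: snap normalized keypoints to a grid as a 2D binary map.
import torch

def to_grid_map(keypoints: torch.Tensor, grid_size: int = 32) -> torch.Tensor:
    # keypoints: (K, 2) with (x, y) in [0, 1] normalized image coordinates
    grid = torch.zeros(grid_size, grid_size)
    idx = (keypoints.clamp(0, 1 - 1e-6) * grid_size).long()
    grid[idx[:, 1], idx[:, 0]] = 1.0        # row = y cell, column = x cell
    return grid

kps = torch.rand(8, 2)
print(int(to_grid_map(kps).sum()))          # <= 8; colliding keypoints merge
```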
Semantic segmentation and depth completion are two challenging tasks in scene
understanding, and they are widely used in robotics and autonomous driving.
Although several works propose to jointly train these two tasks with small
modifications, such as changing the last layer, the result of one task is not
utilized to improve the performance of the other, despite the similarities
between the two tasks. In this paper, we propose multi-task
generative adversarial networks (Multi-task GANs), which are not only competent
in semantic segmentation and depth completion, but also improve the accuracy of
depth completion through generated semantic images. In addition, we improve the
details of generated semantic images based on CycleGAN by introducing
multi-scale spatial pooling blocks and the structural similarity reconstruction
loss. Furthermore, considering the inner consistency between semantic and
geometric structures, we develop a semantic-guided smoothness loss to improve
depth completion results. Extensive experiments on Cityscapes dataset and KITTI
depth completion benchmark show that the Multi-task GANs are capable of
achieving competitive performance for both semantic segmentation and depth
completion tasks. | [
"cs.CV"
] |
Semantic segmentation of medical images is a crucial step for the
quantification of healthy anatomy and diseases alike. The majority of the
current state-of-the-art segmentation algorithms are based on deep neural
networks and rely on large datasets with full pixel-wise annotations. Producing
such annotations can often only be done by medical professionals and requires
large amounts of valuable time. Training a medical image segmentation network
with weak annotations remains a relatively unexplored topic. In this work we
investigate training strategies to learn the parameters of a pixel-wise
segmentation network from scribble annotations alone. We evaluate the
techniques on public cardiac (ACDC) and prostate (NCI-ISBI) segmentation
datasets. We find that the networks trained on scribbles suffer from a
remarkably small degradation in Dice of only 2.9% (cardiac) and 4.5% (prostate)
with respect to a network trained on full annotations. | [
"cs.CV"
] |
Adversarial Robustness Toolbox (ART) is a Python library supporting
developers and researchers in defending Machine Learning models (Deep Neural
Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random
Forests, Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn
Pipelines, etc.) against adversarial threats and helps make AI systems more
secure and trustworthy. Machine Learning models are vulnerable to adversarial
examples, which are inputs (images, texts, tabular data, etc.) deliberately
modified to produce a desired response by the Machine Learning model. ART
provides the tools to build and deploy defences and test them with adversarial
attacks. Defending Machine Learning models involves certifying and verifying
model robustness and model hardening with approaches such as pre-processing
inputs, augmenting training data with adversarial samples, and leveraging
runtime detection methods to flag any inputs that might have been modified by
an adversary. The attacks implemented in ART allow crafting adversarial
examples against Machine Learning models, which is required to test defenses with
state-of-the-art threat models. Supported Machine Learning Libraries include
TensorFlow (v1 and v2), Keras, PyTorch, MXNet, Scikit-learn, XGBoost, LightGBM,
CatBoost, and GPy. The source code of ART is released with MIT license at
https://github.com/IBM/adversarial-robustness-toolbox. The release includes
code examples, notebooks with tutorials and documentation
(http://adversarial-robustness-toolbox.readthedocs.io). | [
"cs.LG",
"stat.ML"
] |
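A short usage sketch of the library (hedged: the module paths and signatures below follow ART's documented API and may differ across versions, so check the linked docs): wrap a scikit-learn classifier and attack it with the Fast Gradient Method.

```python
# Hedged usage sketch of ART; verify against the docs for your version.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X.astype(np.float32))

print(f"clean accuracy:       {model.score(X, y):.3f}")
print(f"adversarial accuracy: {model.score(X_adv, y):.3f}")
```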
Detecting objects and estimating their viewpoint in images are key tasks of
3D scene understanding. Recent approaches have achieved excellent results on
very large benchmarks for object detection and viewpoint estimation. However,
performance still lags behind for novel object categories with few
samples. In this paper, we tackle the problems of few-shot object detection and
few-shot viewpoint estimation. We propose a meta-learning framework that can be
applied to both tasks, possibly including 3D data. Our models improve the
results on objects of novel classes by leveraging rich feature information
originating from base classes with many samples. A simple joint feature
embedding module is proposed to make the most of this feature sharing. Despite
its simplicity, our method outperforms state-of-the-art methods by a large
margin on a range of datasets, including PASCAL VOC and MS COCO for few-shot
object detection, and Pascal3D+ and ObjectNet3D for few-shot viewpoint
estimation. And for the first time, we tackle the combination of both few-shot
tasks, on ObjectNet3D, showing promising results. Our code and data are
available at http://imagine.enpc.fr/~xiaoy/FSDetView/. | [
"cs.CV"
] |
This paper presents a technique for reduced-order Markov modeling for compact
representation of time-series data. In this work, symbolic dynamics-based tools
have been used to infer an approximate generative Markov model. The time-series
data are first symbolized by partitioning the continuous measurement space of
the signal and then, the discrete sequential data are modeled using symbolic
dynamics. In the proposed approach, the size of temporal memory of the symbol
sequence is estimated from spectral properties of the resulting stochastic
matrix corresponding to a first-order Markov model of the symbol sequence.
Then, hierarchical clustering is used to represent the states of the
corresponding full-state Markov model to construct a reduced-order (reduced-size)
Markov model with a non-deterministic algebraic structure. Subsequently, the
parameters of the reduced-order Markov model are identified from the original
model by making use of a Bayesian inference rule. The final model is selected
using information-theoretic criteria. The proposed concept is elucidated and
validated on two different data sets as examples. The first example analyzes a
set of pressure data from a swirl-stabilized combustor, where controlled
protocols are used to induce flame instabilities. Variations in the complexity
of the derived Markov model represent how the system operating condition
changes from a stable to an unstable combustion regime. In the second example,
the data set is taken from NASA's data repository for prognostics of bearings
on rotating shafts. We show that, even with a very small state-space, the
reduced-order models are able to achieve comparable performance and that the
proposed approach provides flexibility in the selection of a final model for
representation and learning. | [
"stat.ML"
] |
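The first two steps of the pipeline, symbolization by partitioning the measurement space and estimation of the first-order stochastic matrix, can be sketched directly (toy signal; the paper's partitioning and memory estimation are more involved):

```python
# Hedged sketch: quantile symbolization and a first-order transition matrix.
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)

n_symbols = 4
edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
symbols = np.digitize(signal, edges)              # values in {0, ..., 3}

counts = np.zeros((n_symbols, n_symbols))
for a, b in zip(symbols[:-1], symbols[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)    # row-stochastic matrix
print(np.round(P, 3))
```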
Paleness or pallor is a manifestation of blood loss or low hemoglobin
concentrations in the human blood that can be caused by pathologies such as
anemia. This work presents the first automated screening system that utilizes
pallor site images, segments, and extracts color and intensity-based features
for multi-class classification of patients with high pallor due to anemia-like
pathologies, normal patients and patients with other abnormalities. This work
analyzes the pallor sites of conjunctiva and tongue for anemia screening
purposes. First, for the eye pallor site images, the sclera and conjunctiva
regions are automatically segmented for regions of interest. Similarly, for the
tongue pallor site images, the inner and outer tongue regions are segmented.
Then, color-plane based feature extraction is performed followed by machine
learning algorithms for feature reduction and image level classification for
anemia. In this work, a suite of classification algorithms performs image-level
classification for normal (class 0), pallor (class 1) and other abnormalities
(class 2). The proposed method achieves 86% accuracy, 85% precision and 67%
recall in eye pallor site images and 98.2% accuracy and precision with 100%
recall in tongue pallor site images for classification of images with pallor.
The proposed pallor screening system can be further fine-tuned to detect the
severity of anemia-like pathologies using a controlled set of local images that
can then be used for future benchmarking purposes. | [
"cs.CV"
] |
This paper describes the AVA-Kinetics localized human actions video dataset.
The dataset is collected by annotating videos from the Kinetics-700 dataset
using the AVA annotation protocol, and extending the original AVA dataset with
these new AVA annotated Kinetics clips. The dataset contains over 230k clips
annotated with the 80 AVA action classes for each of the humans in key-frames.
We describe the annotation process and provide statistics about the new
dataset. We also include a baseline evaluation using the Video Action
Transformer Network on the AVA-Kinetics dataset, demonstrating improved
performance for action classification on the AVA test set. The dataset can be
downloaded from https://research.google.com/ava/ | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Convolutional neural networks for visual recognition require large amounts of
training samples and usually benefit from data augmentation. This paper
proposes PatchMix, a data augmentation method that creates new samples by
composing patches from pairs of images in a grid-like pattern. These new
samples' ground-truth labels are set proportional to the number of patches
from each image. We then add a set of additional losses at the patch-level to
regularize and to encourage good representations at both the patch and image
levels. A ResNet-50 model trained on ImageNet using PatchMix exhibits superior
transfer learning capabilities across a wide array of benchmarks. Although
PatchMix can rely on random pairings and random grid-like patterns for mixing,
we explore evolutionary search as a guiding strategy to discover optimal
grid-like patterns and image pairing jointly. For this purpose, we conceive a
fitness function that bypasses the need to re-train a model to evaluate each
choice. In this way, PatchMix outperforms a base model on CIFAR-10 (+1.91),
CIFAR-100 (+5.31), Tiny Imagenet (+3.52), and ImageNet (+1.16) by significant
margins, also outperforming previous state-of-the-art pairwise augmentation
strategies. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
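A minimal sketch of the grid mixing operation (assumption: a random per-cell choice between two images, with the label weight equal to each image's patch share; the paper's evolutionary search over patterns is omitted):

```python
# Hedged sketch of grid patch mixing with proportional label weights.
import torch

def patchmix(x1: torch.Tensor, x2: torch.Tensor, grid: int = 4):
    _, H, W = x1.shape
    ph, pw = H // grid, W // grid
    cell = torch.randint(0, 2, (grid, grid)).float()
    mask = cell.repeat_interleave(ph, 0).repeat_interleave(pw, 1)
    mixed = mask * x1 + (1 - mask) * x2
    lam = cell.mean()                      # fraction of patches from x1
    return mixed, lam

x1, x2 = torch.rand(3, 32, 32), torch.rand(3, 32, 32)
mixed, lam = patchmix(x1, x2)
# training loss would be: lam * CE(pred, y1) + (1 - lam) * CE(pred, y2)
```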
Recent works (White et al., 2020a; Yan et al., 2020) demonstrate the
importance of architecture encodings in Neural Architecture Search (NAS). These
encodings encode either structure or computation information of the neural
architectures. Compared to structure-aware encodings, computation-aware
encodings map architectures with similar accuracies to the same region, which
improves the downstream architecture search performance (Zhang et al., 2019;
White et al., 2020a). In this work, we introduce a Computation-Aware
Transformer-based Encoding method called CATE. Different from existing
computation-aware encodings based on fixed transformation (e.g. path encoding),
CATE employs a pairwise pre-training scheme to learn computation-aware
encodings using Transformers with cross-attention. Such learned encodings
contain dense and contextualized computation information of neural
architectures. We compare CATE with eleven encodings under three major
encoding-dependent NAS subroutines in both small and large search spaces. Our
experiments show that CATE is beneficial to the downstream search, especially
in the large search space. Moreover, the outside search space experiment
demonstrates its superior generalization ability beyond the search space on
which it was trained. Our code is available at:
https://github.com/MSU-MLSys-Lab/CATE. | [
"cs.LG"
] |
Motivated by the fact that most of the information relevant to the prediction
of target tokens is drawn from the source sentence $S=s_1, \ldots, s_S$, we
propose truncating the target-side window used for computing self-attention by
making an $N$-gram assumption. Experiments on WMT EnDe and EnFr data sets show
that the $N$-gram masked self-attention model loses very little in BLEU score
for $N$ values in the range $4, \ldots, 8$, depending on the task. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
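The N-gram assumption amounts to a banded causal attention mask, where position t attends only to positions t-N+1 through t; a minimal sketch:

```python
# Hedged sketch: banded causal mask implementing the N-gram assumption.
import torch

def ngram_mask(seq_len: int, n: int) -> torch.Tensor:
    i = torch.arange(seq_len).unsqueeze(1)    # query positions
    j = torch.arange(seq_len).unsqueeze(0)    # key positions
    allowed = (j <= i) & (j > i - n)
    mask = torch.full((seq_len, seq_len), float("-inf"))
    mask[allowed] = 0.0                       # add to logits before softmax
    return mask

print(ngram_mask(6, 3))
```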
This paper presents a semantic planar SLAM system that improves pose
estimation and mapping using cues from an instance planar segmentation network.
While mainstream approaches use RGB-D sensors, employing a monocular
camera with such a system still faces challenges such as robust data
association and precise geometric model fitting. In the majority of existing
work, geometric model estimation problems such as homography estimation and
piece-wise planar reconstruction (PPR) are usually solved by standard (greedy)
RANSAC separately and sequentially. However, setting the inlier-outlier
threshold is difficult in absence of information about the scene (i.e. the
scale). In this work, we revisit these problems and argue that the two aforementioned
geometric models (homographies/3D planes) can be solved by minimizing an energy
function that exploits the spatial coherence, i.e. with graph-cut optimization,
which also tackles the practical issue when the output of a trained CNN is
inaccurate. Moreover, we propose an adaptive parameter setting strategy based
on our experiments, and report a comprehensive evaluation on various
open-source datasets. | [
"cs.CV",
"cs.RO"
] |
Most conventional Reinforcement Learning (RL) algorithms aim to optimize
decision-making rules in terms of the expected returns. However, especially for
risk management purposes, other risk-sensitive criteria such as the
value-at-risk or the expected shortfall are sometimes preferred in real
applications. Here, we describe a parametric method for estimating the density of
the returns, which allows us to handle various criteria in a unified manner. We
first extend the Bellman equation for the conditional expected return to cover
a conditional probability density of the returns. Then we derive an extension
of the TD-learning algorithm for estimating the return densities in an unknown
environment. As test instances, several parametric density estimation
algorithms are presented for the Gaussian, Laplace, and skewed Laplace
distributions. We show that these algorithms lead to risk-sensitive as well as
robust RL paradigms through numerical experiments. | [
"cs.LG",
"stat.ML"
] |
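For the Gaussian case, one simple moment-matching variant of the density TD-update can be sketched as below (an assumption for illustration; the paper derives its own updates):

```python
# Hedged sketch: TD-learning of a per-state Gaussian return density.
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
mu = np.zeros(n_states)          # return means
var = np.ones(n_states)          # return variances

def td_update(s, r, s_next):
    target_mu = r + gamma * mu[s_next]
    target_var = gamma ** 2 * var[s_next]
    delta = target_mu - mu[s]
    mu[s] += alpha * delta
    var[s] += alpha * (target_var + delta ** 2 - var[s])  # match 2nd moment

rng = np.random.default_rng(0)
for _ in range(2000):
    s = rng.integers(n_states)
    td_update(s, rng.normal(1.0, 0.5), (s + 1) % n_states)
print(mu.round(2), var.round(2))
```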
Thompson sampling (TS) has emerged as a robust technique for contextual
bandit problems. However, TS requires posterior inference and optimization for
action generation, prohibiting its use in many internet applications where
latency and ease of deployment are of concern. We propose a novel
imitation-learning-based algorithm that distills a TS policy into an explicit
policy representation by performing posterior inference and optimization
offline. The explicit policy representation enables fast online decision-making
and easy deployment in mobile and server-based environments. Our algorithm
iteratively performs offline batch updates to the TS policy and learns a new
imitation policy. Since we update the TS policy with observations collected
under the imitation policy, our algorithm emulates an off-policy version of TS.
Our imitation algorithm guarantees Bayes regret comparable to TS, up to the sum
of single-step imitation errors. We show these imitation errors can be made
arbitrarily small when unlabeled contexts are cheaply available, which is the
case for most large-scale internet applications. Empirically, we show that our
imitation policy achieves comparable regret to TS, while reducing decision-time
latency by over an order of magnitude. | [
"cs.LG",
"cs.AI"
] |
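The offline distillation idea can be sketched in a few lines: draw Thompson-sampling actions on cheap unlabeled contexts, then fit a fast explicit policy to imitate them. Posterior inference here is replaced by a toy Gaussian stand-in.

```python
# Hedged sketch: distilling a Thompson-sampling policy into a classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_arms = 5, 3
post_mean = rng.normal(size=(n_arms, d))       # toy posterior means per arm

contexts = rng.normal(size=(2000, d))          # cheap unlabeled contexts
ts_actions = []
for x in contexts:
    theta = post_mean + 0.1 * rng.standard_normal((n_arms, d))  # posterior draw
    ts_actions.append(int(np.argmax(theta @ x)))

policy = LogisticRegression(max_iter=1000).fit(contexts, ts_actions)
print("agreement with the recorded TS draws:",
      (policy.predict(contexts) == np.array(ts_actions)).mean())
```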
Query reformulation is the process by which an input search query is refined
by the user to match documents outside the original top-n results. On average,
roughly 50% of text search queries involve some form of reformulation, and term
suggestion tools are used 35% of the time when offered to users. As prevalent
as text search queries are, however, such a feature has yet to be explored at
scale for visual search. This is because reformulation for images presents a
novel challenge to seamlessly transform visual features to match user intent
within the context of a typical user session. In this paper, we present methods
of semantically transforming visual queries, such as utilizing operations in
the latent space of a generative adversarial model for the scenarios of fashion
and product search. | [
"cs.CV",
"cs.IR",
"cs.LG",
"eess.IV"
] |
Recent progress has been made in using an attention-based encoder-decoder
framework for video captioning. However, most existing decoders apply the
attention mechanism to every generated word including both visual words (e.g.,
"gun" and "shooting") and non-visual words (e.g. "the", "a"). However, these
non-visual words can be easily predicted using natural language model without
considering visual signals or attention. Imposing attention mechanism on
non-visual words could mislead and decrease the overall performance of video
captioning. To address this issue, we propose a hierarchical LSTM with adjusted
temporal attention (hLSTMat) approach for video captioning. Specifically, the
proposed framework utilizes the temporal attention for selecting specific
frames to predict the related words, while the adjusted temporal attention is
for deciding whether to depend on the visual information or the language
context information. Also, a hierarchical LSTM is designed to simultaneously
consider both low-level visual information and high-level language context
information to support the video caption generation. To demonstrate the
effectiveness of our proposed framework, we test our method on two prevalent
datasets: MSVD and MSR-VTT, and experimental results show that our approach
outperforms the state-of-the-art methods on both datasets. | [
"cs.CV"
] |
Visual storytelling is a task of generating relevant and interesting stories
for given image sequences. In this work we aim at increasing the diversity of
the generated stories while preserving the informative content from the images.
We propose to foster the diversity and informativeness of a generated story by
using a concept selection module that suggests a set of concept candidates.
Then, we utilize a large scale pre-trained model to convert concepts and images
into full stories. To enrich the candidate concepts, a commonsense knowledge
graph is created for each image sequence from which the concept candidates are
proposed. To obtain appropriate concepts from the graph, we propose two novel
modules that consider the correlation among candidate concepts and the
image-concept correlation. Extensive automatic and human evaluation results
demonstrate that our model can produce reasonable concepts. This enables our
model to outperform the previous models by a large margin on the diversity and
informativeness of the story, while retaining the relevance of the story to the
image sequence. | [
"cs.CV",
"cs.CL"
] |
Deep neural perception and control networks have become key components of
self-driving vehicles. User acceptance is likely to benefit from
easy-to-interpret textual explanations which allow end-users to understand what
triggered a particular behavior. Explanations may be triggered by the neural
controller, namely introspective explanations, or informed by the neural
controller's output, namely rationalizations. We propose a new approach to
introspective explanations which consists of two parts. First, we use a visual
(spatial) attention model to train a convolutional network end-to-end from
images to the vehicle control commands, i.e., acceleration and change of
course. The controller's attention identifies image regions that potentially
influence the network's output. Second, we use an attention-based video-to-text
model to produce textual explanations of model actions. The attention maps of
controller and explanation model are aligned so that explanations are grounded
in the parts of the scene that mattered to the controller. We explore two
approaches to attention alignment, strong- and weak-alignment. Finally, we
explore a version of our model that generates rationalizations, and compare
with introspective explanations on the same video segments. We evaluate these
models on a novel driving dataset with ground-truth human explanations, the
Berkeley DeepDrive eXplanation (BDD-X) dataset. Code is available at
https://github.com/JinkyuKimUCB/explainable-deep-driving. | [
"cs.CV"
] |
In this paper we present a technique to train neural network models on small
amounts of data. Current methods for training neural networks on small amounts
of rich data typically rely on strategies such as fine-tuning a pre-trained
neural network or the use of domain-specific hand-engineered features. Here we
take the approach of treating network layers, or entire networks, as modules
and combine pre-trained modules with untrained modules, to learn the shift in
distributions between data sets. The central impact of using a modular approach
comes from adding new representations to a network, as opposed to replacing
representations via fine-tuning. Using this technique, we are able to surpass
results using standard fine-tuning transfer learning approaches, and we are
also able to significantly increase performance over such approaches when using
smaller amounts of data. | [
"cs.LG",
"cs.CL"
] |
Automatic text recognition from ancient handwritten record images is an
important problem in the genealogy domain. However, critical challenges such as
varying noise conditions, vanishing texts, and variations in handwriting make
the recognition task difficult. We tackle this problem by developing a
handwritten-to-machine-print conditional Generative Adversarial network
(HW2MP-GAN) model that formulates handwritten recognition as a
text-Image-to-text-Image translation problem where a given image, typically in
an illegible form, is converted into another image, close to its machine-print
form. The proposed model consists of three components including a generator,
and word-level and character-level discriminators. The model incorporates
Sliced Wasserstein distance (SWD) and U-Net architectures in HW2MP-GAN for
better quality image-to-image transformation. Our experiments reveal that
HW2MP-GAN outperforms state-of-the-art baseline cGAN models by almost 30 in
Frechet Handwritten Distance (FHD), 0.6 on average Levenshtein distance and 39%
in word accuracy for image-to-image translation on IAM database. Further,
HW2MP-GAN improves handwritten recognition word accuracy by 1.3% compared to
baseline handwritten recognition models on the IAM database. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
We study how to introduce locality mechanisms into vision transformers. The
transformer network originates from machine translation and is particularly
good at modelling long-range dependencies within a long sequence. Although the
global interaction between the token embeddings could be well modelled by the
self-attention mechanism of transformers, what is lacking is a locality mechanism
for information exchange within a local region. Yet, locality is essential for
images since it pertains to structures like lines, edges, shapes, and even
objects.
We add locality to vision transformers by introducing depth-wise convolution
into the feed-forward network. This seemingly simple solution is inspired by
the comparison between feed-forward networks and inverted residual blocks. The
importance of locality mechanisms is validated in two ways: 1) A wide range of
design choices (activation function, layer placement, expansion ratio) are
available for incorporating locality mechanisms and all proper choices can lead
to a performance gain over the baseline, and 2) The same locality mechanism is
successfully applied to 4 vision transformers, which shows the generalization
of the locality concept. In particular, for ImageNet2012 classification, the
locality-enhanced transformers outperform the baselines DeiT-T and PVT-T by
2.6\% and 3.1\% with a negligible increase in the number of parameters and
computational effort. Code is available at
\url{https://github.com/ofsoundof/LocalViT}. | [
"cs.CV"
] |
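The locality mechanism reduces to a small change in the feed-forward block, sketched below: a 1x1 expansion, a depth-wise 3x3 convolution over the token grid, and a 1x1 projection. Dimensions and the placement of activations are illustrative; see the linked code for the exact design.

```python
# Hedged sketch: feed-forward block with a depth-wise convolution.
import torch
import torch.nn as nn

class LocalFFN(nn.Module):
    def __init__(self, dim: int = 192, expansion: int = 4):
        super().__init__()
        hidden = dim * expansion
        self.expand = nn.Conv2d(dim, hidden, kernel_size=1)
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3,
                                padding=1, groups=hidden)   # depth-wise
        self.project = nn.Conv2d(hidden, dim, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        b, n, d = tokens.shape                  # (batch, h*w tokens, dim)
        x = tokens.transpose(1, 2).reshape(b, d, h, w)
        x = self.project(self.act(self.dwconv(self.act(self.expand(x)))))
        return x.reshape(b, d, n).transpose(1, 2)

out = LocalFFN()(torch.randn(2, 14 * 14, 192), 14, 14)
print(out.shape)                                # torch.Size([2, 196, 192])
```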
In this study, we address image retargeting, which is a task that adjusts
input images to arbitrary sizes. In one of the best-performing methods called
MULTIOP, multiple retargeting operators were combined and retargeted images at
each stage were generated to find the optimal sequence of operators that
minimized the distance between original and retargeted images. The limitation
of this method is in its tremendous processing time, which severely prohibits
its practical use. Therefore, the purpose of this study is to find the optimal
combination of operators within a reasonable processing time; we propose a
method of predicting the optimal operator for each step using a reinforcement
learning agent. The technical contributions of this study are as follows.
Firstly, we propose a reward based on self-play, which will be insensitive to
the large variance in the content-dependent distance measured in MULTIOP.
Secondly, we propose to dynamically change the loss weight for each action to
prevent the algorithm from falling into a local optimum and from choosing only
the most frequently used operator in its training. Our experiments showed that
we achieved multi-operator image retargeting with processing time reduced by three
orders of magnitude and the same quality as the original multi-operator-based
method, which was the best-performing algorithm in retargeting tasks. | [
"cs.CV"
] |
In the animation industry, cartoon videos are usually produced at a low frame
rate since hand drawing of such frames is costly and time-consuming. Therefore,
it is desirable to develop computational models that can automatically
interpolate the in-between animation frames. However, existing video
interpolation methods fail to produce satisfying results on animation data.
Compared to natural videos, animation videos possess two unique characteristics
that make frame interpolation difficult: 1) cartoons comprise lines and smooth
color pieces. The smooth areas lack textures and make it difficult to estimate
accurate motions on animation videos. 2) cartoons express stories via
exaggeration. Some of the motions are non-linear and extremely large. In this
work, we formally define and study the animation video interpolation problem
for the first time. To address the aforementioned challenges, we propose an
effective framework, AnimeInterp, with two dedicated modules in a
coarse-to-fine manner. Specifically, 1) Segment-Guided Matching resolves the
"lack of textures" challenge by exploiting global matching among color pieces
that are piece-wise coherent. 2) Recurrent Flow Refinement resolves the
"non-linear and extremely large motion" challenge by recurrent predictions
using a transformer-like architecture. To facilitate comprehensive training and
evaluations, we build a large-scale animation triplet dataset, ATD-12K, which
comprises 12,000 triplets with rich annotations. Extensive experiments
demonstrate that our approach outperforms existing state-of-the-art
interpolation methods for animation videos. Notably, AnimeInterp shows
favorable perceptual quality and robustness for animation scenarios in the
wild. The proposed dataset and code are available at
https://github.com/lisiyao21/AnimeInterp/. | [
"cs.CV"
] |
In the supervised learning domain, considering the recent prevalence of
algorithms with high computational cost, the attention is steering towards
simpler, lighter, and less computationally intensive training and inference
approaches. In particular, randomized algorithms are currently having a
resurgence, given their generalized elementary approach. By using randomized
neural networks, we study distributed classification, which can be employed in
situations where data cannot be stored at a central location nor shared. We
propose a more efficient solution for distributed classification by making use
of a lossy compression approach applied when sharing the local classifiers with
other agents. This approach originates from the framework of hyperdimensional
computing, and is adapted herein. The results of experiments on a collection of
datasets demonstrate that the proposed approach usually has higher accuracy
than local classifiers and gets close to the benchmark - the centralized
classifier. This work can be considered as the first step towards analyzing the
variegated horizon of distributed randomized neural networks. | [
"cs.LG",
"cs.DC"
] |
Deep learning based 3D reconstruction of single view 2D image is becoming
increasingly popular due to their wide range of real-world applications, but
this task is inherently challenging because of the partial observability of an
object from a single perspective. Recently, state-of-the-art probability-based
Occupancy Networks reconstructed 3D surfaces from three different types of
input domains: single view 2D image, point cloud and voxel. In this study, we
extend the work on Occupancy Networks by exploiting cross-domain learning of
image and point cloud domains. Specifically, we first convert the single view
2D image into a simpler point cloud representation, and then reconstruct a 3D
surface from it. Our network, the Double Occupancy Network (D-OccNet)
outperforms Occupancy Networks in terms of visual quality and details captured
in the 3D reconstruction. | [
"cs.CV"
] |
In this paper, we propose an effective method to synthesize speaker-specific
speech waveforms by conditioning on videos of an individual's face. Using a
generative adversarial network (GAN) with linguistic and speaker characteristic
features as auxiliary conditions, our method directly converts face images into
speech waveforms under an end-to-end training framework. The linguistic
features are extracted from lip movements using a lip-reading model, and the
speaker characteristic features are predicted from face images using
cross-modal learning with a pre-trained acoustic model. Since these two
features are uncorrelated and controlled independently, we can flexibly
synthesize speech waveforms whose speaker characteristics vary depending on the
input face images. Therefore, our method can be regarded as a multi-speaker
face-to-speech waveform model. We show the superiority of our proposed model
over conventional methods in terms of both objective and subjective evaluation
results. Specifically, we evaluate the performances of the linguistic feature
and the speaker characteristic generation modules by measuring the accuracy of
automatic speech recognition and automatic speaker/gender recognition tasks,
respectively. We also evaluate the naturalness of the synthesized speech
waveforms using a mean opinion score (MOS) test. | [
"cs.CV",
"cs.LG",
"cs.SD",
"eess.AS"
] |
Most digital cameras use sensors coated with a Color Filter Array (CFA) to
capture channel components at every pixel location, resulting in a mosaic image
that does not contain pixel values in all channels. Current research on
reconstructing these missing channels, also known as demosaicing, introduces
many artifacts, such as zipper effect and false color. Many deep learning
demosaicing techniques outperform other classical techniques in reducing the
impact of artifacts. However, most of these models tend to be
over-parametrized. Consequently, edge implementation of the state-of-the-art
deep learning-based demosaicing algorithms on low-end edge devices is a major
challenge. We provide an exhaustive search of deep neural network architectures
and obtain a pareto front of Color Peak Signal to Noise Ratio (CPSNR) as the
performance criterion versus the number of parameters as the model complexity
that beats the state-of-the-art. Architectures on the pareto front can then be
used to choose the best architecture for a variety of resource constraints.
Simple architecture search methods such as exhaustive search and grid search
require some conditions of the loss function to converge to the optimum. We
clarify these conditions in a brief theoretical study. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
We initiate the study of multi-stage episodic reinforcement learning under
adversarial corruptions in both the rewards and the transition probabilities of
the underlying system, extending recent results for the special case of
stochastic bandits. We provide a framework which modifies the aggressive
exploration enjoyed by existing reinforcement learning approaches based on
"optimism in the face of uncertainty", by complementing them with principles
from "action elimination". Importantly, our framework circumvents the major
challenges posed by naively applying action elimination in the RL setting, as
formalized by a lower bound we demonstrate. Our framework yields efficient
algorithms which (a) attain near-optimal regret in the absence of corruptions
and (b) adapt to unknown levels of corruption, enjoying regret guarantees which
degrade gracefully in the total corruption encountered. To showcase the
generality of our approach, we derive results for both tabular settings (where
states and actions are finite) as well as linear-function-approximation
settings (where the dynamics and rewards admit a linear underlying
representation). Notably, our work provides the first sublinear regret
guarantee which accommodates any deviation from purely i.i.d. transitions in
the bandit-feedback model for episodic reinforcement learning. | [
"cs.LG",
"cs.AI",
"cs.DS",
"stat.ML"
] |
We present Attention Mesh, a lightweight architecture for 3D face mesh
prediction that uses attention to semantically meaningful regions. Our neural
network is designed for real-time on-device inference and runs at over 50 FPS
on a Pixel 2 phone. Our solution enables applications like AR makeup, eye
tracking and AR puppeteering that rely on highly accurate landmarks for eye and
lips regions. Our main contribution is a unified network architecture that
achieves the same accuracy on facial landmarks as a multi-stage cascaded
approach, while being 30 percent faster. | [
"cs.CV"
] |
Energy-Based Models (EBMs), also known as non-normalized probabilistic
models, specify probability density or mass functions up to an unknown
normalizing constant. Unlike most other probabilistic models, EBMs do not place
a restriction on the tractability of the normalizing constant, thus are more
flexible to parameterize and can model a more expressive family of probability
distributions. However, the unknown normalizing constant of EBMs makes training
particularly difficult. Our goal is to provide a friendly introduction to
modern approaches for EBM training. We start by explaining maximum likelihood
training with Markov chain Monte Carlo (MCMC), and proceed to elaborate on
MCMC-free approaches, including Score Matching (SM) and Noise Contrastive
Estimation (NCE). We highlight theoretical connections among these three
approaches, and end with a brief survey on alternative training methods, which
are still under active research. Our tutorial is targeted at an audience with
basic understanding of generative models who want to apply EBMs or start a
research project in this direction. | [
"cs.LG",
"stat.ML"
] |
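The MCMC machinery behind maximum likelihood training can be illustrated with a Langevin sampler on a toy energy (the `energy` function below stands in for a learned network):

```python
# Hedged sketch: Langevin MCMC sampling from a toy energy function.
import torch

def energy(x: torch.Tensor) -> torch.Tensor:
    return ((x - 2.0) ** 2).sum(dim=1) / 2     # quadratic energy, minimum at 2

def langevin_sample(n: int = 256, steps: int = 200, eps: float = 0.1):
    x = torch.randn(n, 1)
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad, = torch.autograd.grad(energy(x).sum(), x)
        x = x - 0.5 * eps * grad + (eps ** 0.5) * torch.randn_like(x)
    return x.detach()

print(float(langevin_sample().mean()))         # close to 2
```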
Models based on self-attention mechanisms have been successful in analyzing
temporal data and have been widely used in the natural language domain. We
propose a new model architecture for video face representation and recognition
based on a self-attention mechanism. Our approach could be used for video with
single and multiple identities. To the best of our knowledge, no one has
explored aggregation approaches that consider videos with multiple
identities. The proposed approach utilizes existing models to get the face
representation for each video frame, e.g., ArcFace and MobileFaceNet, and the
aggregation module produces the aggregated face representation vector for video
by taking into consideration the order of frames and their quality scores. We
demonstrate empirical results on a public dataset for video face recognition
called IJB-C to indicate that the self-attention aggregation network (SAAN)
outperforms naive average pooling. Moreover, we introduce a new multi-identity
video dataset based on the publicly available UMDFaces dataset and collected
GIFs from Giphy. We show that SAAN is capable of producing a compact face
representation for both single and multiple identities in a video. The dataset
and source code will be publicly available. | [
"cs.CV"
] |
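The aggregation step can be sketched as attention-weighted pooling over per-frame embeddings (a generic stand-in, not the SAAN architecture itself; quality scores and frame-order handling are omitted):

```python
# Hedged sketch: attention pooling of frame embeddings into one vector.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.score(frames), dim=1)    # (B, T, 1)
        return (w * frames).sum(dim=1)                  # (B, dim)

video_embedding = AttentionPool()(torch.randn(4, 30, 512))
print(video_embedding.shape)                            # torch.Size([4, 512])
```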
Model compression becomes a recent trend due to the requirement of deploying
neural networks on embedded and mobile devices. Hence, both accuracy and
efficiency are of critical importance. To explore a balance between them, a
knowledge distillation strategy is proposed for general visual representation
learning. It utilizes our well-designed activation map adaptive module to
replace some blocks of the teacher network, exploring the most appropriate
supervisory features adaptively during the training process. The teacher's
hidden-layer output prompts the student network during training so as to
transfer effective semantic information. To verify the effectiveness of our
strategy, we apply the method to the CIFAR-10 dataset. Results
demonstrate that the method can boost the accuracy of the student network by
0.6% with 6.5% loss reduction, and significantly improve its training speed. | [
"cs.CV",
"cs.LG"
] |
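The supervision signal can be sketched as a hidden-feature matching term added to the task loss (shapes and the 1x1 adapter are illustrative; the paper's activation map adaptive module is more elaborate):

```python
# Hedged sketch: MSE distillation on intermediate feature maps.
import torch
import torch.nn.functional as F

teacher_feat = torch.randn(8, 64, 16, 16)            # teacher hidden output
student_feat = torch.randn(8, 32, 16, 16, requires_grad=True)
adapt = torch.nn.Conv2d(32, 64, kernel_size=1)       # match channel widths

distill_loss = F.mse_loss(adapt(student_feat), teacher_feat.detach())
task_loss = torch.tensor(0.0)                        # stand-in for CE loss
(task_loss + 0.5 * distill_loss).backward()
```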
This paper presents the final results of the ICDAR 2021 Competition on
Historical Map Segmentation (MapSeg), encouraging research on a series of
historical atlases of Paris, France, drawn at 1/5000 scale between 1894 and
1937. The competition featured three tasks, awarded separately. Task~1 consists
in detecting building blocks and was won by the L3IRIS team using a
DenseNet-121 network trained in a weakly supervised fashion. This task is
evaluated on 3 large images containing hundreds of shapes to detect. Task~2
consists in segmenting map content from the larger map sheet, and was won by
the UWB team using a U-Net-like FCN combined with a binarization method to
increase detection edge accuracy. Task~3 consists in locating intersection
points of geo-referencing lines, and was also won by the UWB team who used a
dedicated pipeline combining binarization, line detection with Hough transform,
candidate filtering, and template matching for intersection refinement. Tasks~2
and~3 are evaluated on 95 map sheets with complex content. Dataset, evaluation
tools and results are available under permissive licensing at
\url{https://icdar21-mapseg.github.io/}. | [
"cs.CV"
] |
The extraction of a scene graph with objects as nodes and mutual
relationships as edges is the basis for a deep understanding of image content.
Despite recent advances, such as message passing and joint classification, the
detection of visual relationships remains a challenging task due to sub-optimal
exploration of the mutual interaction among the visual objects. In this work,
we propose a novel transformer formulation for scene graph generation and
relation prediction. We leverage the encoder-decoder architecture of the
transformer for rich feature embedding of nodes and edges. Specifically, we
model the node-to-node interaction with the self-attention of the transformer
encoder and the edge-to-node interaction with the cross-attention of the
transformer decoder. Further, we introduce a novel positional embedding
suitable to handle edges in the decoder. Finally, our relation prediction
module classifies the directed relation from the learned node and edge
embedding. We name this architecture the Relation Transformer Network (RTN). On
the Visual Genome and GQA dataset, we have achieved an overall mean of 4.85%
and 3.1% point improvement in comparison with state-of-the-art methods. Our
experiments show that Relation Transformer can efficiently model context across
various datasets with small, medium, and large-scale relation classification. | [
"cs.CV"
] |
Modern deep learning is primarily an experimental science, in which empirical
advances occasionally come at the expense of probabilistic rigor. Here we focus
on one such example; namely the use of the categorical cross-entropy loss to
model data that is not strictly categorical, but rather takes values on the
simplex. This practice is standard in neural network architectures with label
smoothing and actor-mimic reinforcement learning, amongst others. Drawing on
the recently discovered continuous-categorical distribution, we propose
probabilistically-inspired alternatives to these models, providing an approach
that is more principled and theoretically appealing. Through careful
experimentation, including an ablation study, we identify the potential for
outperformance in these models, thereby highlighting the importance of a proper
probabilistic treatment, as well as illustrating some of the failure modes
thereof. | [
"stat.ML",
"cs.LG"
] |
The shortcomings of maximum likelihood estimation in the context of
model-based reinforcement learning have been highlighted by an increasing
number of papers. When the model class is misspecified or has a limited
representational capacity, model parameters with high likelihood might not
necessarily result in high performance of the agent on a downstream control
task. To alleviate this problem, we propose an end-to-end approach for model
learning which directly optimizes the expected returns using implicit
differentiation. We treat a value function that satisfies the Bellman
optimality operator induced by the model as an implicit function of model
parameters and show how to differentiate the function. We provide theoretical
and empirical evidence highlighting the benefits of our approach in the model
misspecification regime compared to likelihood-based methods. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
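The core step can be sketched with the implicit function theorem (a generic form; the paper's operator and regularity conditions may differ):

```latex
% If the value function satisfies the fixed point V^* = B_theta(V^*), then
\[
  V^{*} = \mathcal{B}_{\theta}(V^{*})
  \quad\Longrightarrow\quad
  \frac{dV^{*}}{d\theta}
  = \Bigl(I - \frac{\partial \mathcal{B}_{\theta}}{\partial V}\Big|_{V^{*}}\Bigr)^{-1}
    \frac{\partial \mathcal{B}_{\theta}}{\partial \theta}\Big|_{V^{*}},
\]
% so the gradient of the expected return through V^*(theta) follows by the chain rule.
```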
Self-supervised learning is currently gaining a lot of attention, as it
allows neural networks to learn robust representations from large quantities of
unlabeled data. Additionally, multi-task learning can further improve
representation learning by training networks simultaneously on related tasks,
leading to significant performance improvements. In this paper, we propose
three novel self-supervised auxiliary tasks to train graph-based neural network
models in a multi-task fashion. Since Graph Convolutional Networks are among
the most promising approaches for capturing relationships among structured data
points, we use them as a building block to achieve competitive results on
standard semi-supervised graph classification tasks. | [
"cs.LG"
] |
Monocular depth estimation (MDE) is a fundamental task in many applications
such as scene understanding and reconstruction. However, most of the existing
methods rely on accurately labeled datasets. A weakly-supervised framework
based on an attention nested U-net (ANU), named ANUW, is introduced in this paper
for cases with wrong labels. The ANUW is trained end-to-end to convert an input
single RGB image into a depth image. It consists of a dense residual network
structure, an adaptive weight channel attention (AWCA) module, a patch second
non-local (PSNL) module and a soft label generation method. The dense residual
network is the main body of the network to encode and decode the input. The
AWCA module can adaptively adjust the channel weights to extract important
features. The PSNL module implements the spatial attention mechanism through a
second-order non-local method. The proposed soft label generation method uses
the prior knowledge of the dataset to produce soft labels to replace false
ones. The proposed ANUW is trained on a defective monocular depth dataset and
the trained model is tested on three public datasets, and the results
demonstrate the superiority of ANUW in comparison with the state-of-the-art MDE
methods. | [
"cs.CV"
] |
In this paper, we show how sparse or isoperimetric cuts of a probability
density function relate to Cheeger cuts of its principal eigenfunction, for
appropriate definitions of `sparse cut' and `principal eigenfunction'.
We construct these appropriate definitions of sparse cut and principal
eigenfunction in the probability density setting. Then, we prove Cheeger and
Buser type inequalities similar to those for the normalized graph Laplacian of
Alon-Milman. We demonstrate that no such inequalities hold for most prior
definitions of sparse cut and principal eigenfunction. We apply this result to
generate novel algorithms for cutting probability densities and clustering
data, including a principled variant of spectral clustering. | [
"cs.LG",
"cs.DM",
"stat.ML"
] |
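For reference, the discrete Alon-Milman-type statement the abstract builds on reads as follows (the paper's density analogue replaces the graph quantities with its own definitions):

```latex
% Normalized graph Laplacian with second-smallest eigenvalue lambda_2 and
% Cheeger constant h:
\[
  \frac{\lambda_2}{2} \;\le\; h \;\le\; \sqrt{2\,\lambda_2}.
\]
```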
Deep convolutional neural networks demonstrate impressive results in the
super-resolution domain. A series of studies concentrate on improving peak
signal-to-noise ratio (PSNR) by using much deeper layers, which are not friendly
to constrained resources. Pursuing a trade-off between the restoration capacity
and the simplicity of models is still non-trivial. Recent contributions are
struggling to manually maximize this balance, while our work achieves the same
goal automatically with neural architecture search. Specifically, we handle
super-resolution with a multi-objective approach. We also propose an elastic
search tactic at both micro and macro level, based on a hybrid controller that
profits from evolutionary computation and reinforcement learning. Quantitative
experiments help us to draw a conclusion that our generated models dominate
most of the state-of-the-art methods with respect to the individual FLOPS. | [
"cs.CV",
"cs.LG"
] |
Multi-task learning learns multiple tasks while sharing knowledge and computation
among them. However, it suffers from catastrophic forgetting of previous
knowledge when learned incrementally without access to the old data. Most
existing object detectors are domain-specific and static, while some are
learned incrementally but only within a single domain. Training an object
detector incrementally across various domains has rarely been explored. In this
work, we propose three incremental learning scenarios across various domains
and categories for object detection. To mitigate catastrophic forgetting,
attentive feature distillation is proposed to leverage both bottom-up and
top-down attentions to extract important information for distillation. We then
systematically analyze the proposed distillation method in different scenarios.
We find out that, contrary to common understanding, domain gaps have smaller
negative impact on incremental detection, while category differences are
problematic. For the difficult cases, where the domain gaps and especially
category differences are large, we explore three different exemplar sampling
methods and show the proposed adaptive sampling method is effective to select
diverse and informative samples from entire datasets, to further prevent
forgetting. Experimental results show that we achieve significant
improvements in three different scenarios across seven object detection
benchmark datasets. | [
"cs.CV"
] |
Quantification of uncertainty is one of the most promising approaches to
establish safe machine learning. Despite its importance, it is far from being
generally solved, especially for neural networks. One of the most commonly used
approaches so far is Monte Carlo dropout, which is computationally cheap and
easy to apply in practice. However, it can underestimate the uncertainty. We
propose a new objective, referred to as second-moment loss (SML), to address
this issue. While the full network is encouraged to model the mean, the dropout
networks are explicitly used to optimize the model variance. We analyze the
performance of the new objective on various toy and UCI regression datasets.
Compared to state-of-the-art deep ensembles, SML leads to comparable
prediction accuracies and uncertainty estimates while only requiring a single
model. Under distribution shift, we observe moderate improvements. From a
safety perspective also the study of worst-case uncertainties is crucial. In
this regard we improve considerably. Finally, we show that SML can be
successfully applied to SqueezeDet, a modern object detection network. We
improve on its uncertainty-related scores while not deteriorating regression
quality. As a side result, we introduce an intuitive Wasserstein distance-based
uncertainty measure that is non-saturating and thus allows to resolve quality
differences between any two uncertainty estimates. | [
"cs.LG",
"stat.ML"
] |
Self-supervised, multi-modal learning has been successful in holistic
representation of complex scenarios. This can be useful to consolidate
information from multiple modalities which have multiple, versatile uses. Its
application in surgical robotics can lead to simultaneously developing a
generalised machine understanding of the surgical process and reduce the
dependency on quality, expert annotations which are generally difficult to
obtain. We develop a self-supervised, multi-modal representation learning
paradigm that learns representations for surgical gestures from video and
kinematics. We use an encoder-decoder network configuration that encodes
representations from surgical videos and decodes them to yield kinematics. We
quantitatively demonstrate the efficacy of our learnt representations for
gesture recognition (with accuracy between 69.6 % and 77.8 %), transfer
learning across multiple tasks (with accuracy between 44.6 % and 64.8 %) and
surgeon skill classification (with accuracy between 76.8 % and 81.2 %).
Further, we qualitatively demonstrate that our self-supervised representations
cluster in semantically meaningful properties (surgeon skill and gestures). | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
Predicting motion of surrounding agents is critical to real-world
applications of tactical path planning for autonomous driving. Due to the
complex temporal dependencies and social interactions of agents, on-line
trajectory prediction is a challenging task. With the development of attention
mechanism in recent years, the transformer model has been applied first to
natural language sequence processing and then to image processing. In this paper, we
present a Spatial-Channel Transformer Network for trajectory prediction with
attention functions. Instead of RNN models, we employ a transformer model to
capture the spatial-temporal features of agents. A channel-wise module is
inserted to measure the social interaction between agents. We find that the
Spatial-Channel Transformer Network achieves promising results on real-world
trajectory prediction datasets on the traffic scenes. | [
"cs.CV"
] |