text | label
---|---
The cross-correlator plays a significant role in many visual perception tasks,
such as object detection and tracking. Going beyond the linear
cross-correlator, this paper proposes a kernel cross-correlator (KCC) that
breaks traditional limitations. First, by introducing the kernel trick, KCC
extends linear cross-correlation to non-linear spaces, making it more robust to
signal noise and distortions. Second, its connection to existing works shows
that KCC provides a unified solution for correlation filters. Third, KCC is
applicable to any kernel function and is not limited to a circulant structure
on the training data, so it is able to predict affine transformations with
customized properties. Last, by leveraging the fast Fourier transform (FFT),
KCC eliminates the direct calculation of kernel vectors, thus achieving better
performance at a reasonable computational cost. Comprehensive experiments on
visual tracking and human activity recognition using wearable devices
demonstrate its robustness, flexibility, and efficiency. The source code for
both experiments is released at https://github.com/wang-chen/KCC | [
"cs.CV"
] |
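For intuition, here is a minimal numpy sketch of the linear special case that KCC generalizes: a regularized correlation filter solved entirely in the Fourier domain via the FFT. The random signal, Gaussian target response, and regularization constant are illustrative assumptions, not the paper's kernelized formulation.

```python
# Minimal sketch of a linear correlation filter solved in the Fourier
# domain (the special case that KCC generalizes); values are illustrative.
import numpy as np

def train_filter(x, y, lam=1e-2):
    """Solve min_H |H X - Y|^2 + lam |H|^2 per frequency via the FFT.

    x: training signal, y: desired correlation output (a Gaussian peak).
    """
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return Y * np.conj(X) / (X * np.conj(X) + lam)  # closed-form per frequency

def correlate(H, z):
    """Apply the learned filter to a new signal z."""
    return np.real(np.fft.ifft(H * np.fft.fft(z)))

x = np.random.randn(128)
y = np.exp(-0.5 * ((np.arange(128) - 64) / 2.0) ** 2)  # target peak at 64
H = train_filter(x, y)
print(int(np.argmax(correlate(H, x))))  # peak should land near 64
```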
Deep neural network (DNN) predictions have been shown to be vulnerable to
carefully crafted adversarial perturbations. Specifically, image-agnostic
(universal adversarial) perturbations added to any image can fool a target
network into making erroneous predictions. Departing from existing defense
strategies that work mostly in the image domain, we present a novel defense
which operates in the DNN feature domain and effectively defends against such
universal perturbations. Our approach identifies pre-trained convolutional
features that are most vulnerable to adversarial noise and deploys trainable
feature regeneration units which transform these DNN filter activations into
resilient features that are robust to universal perturbations. Regenerating
only the top 50% adversarially susceptible activations in at most 6 DNN layers
and leaving all remaining DNN activations unchanged, we outperform existing
defense strategies across different network architectures by more than 10% in
restored accuracy. We show that without any additional modification, our
defense trained on ImageNet with one type of universal attack examples
effectively defends against other types of unseen universal attacks. | [
"cs.CV"
] |
Despite the evolution of deep-learning-based visual-textual processing
systems, precise multi-modal matching remains a challenging task. In this work,
we tackle the task of cross-modal retrieval through image-sentence matching
based on word-region alignments, using supervision only at the global
image-sentence level. Specifically, we present a novel approach called
Transformer Encoder Reasoning and Alignment Network (TERAN). TERAN enforces a
fine-grained match between the underlying components of images and sentences,
i.e., image regions and words, respectively, in order to preserve the
informative richness of both modalities. TERAN obtains state-of-the-art results
on the image retrieval task on both MS-COCO and Flickr30k datasets. Moreover,
on MS-COCO, it also outperforms current approaches on the sentence retrieval
task.
Focusing on scalable cross-modal information retrieval, TERAN is designed to
keep the visual and textual data pipelines well separated. Cross-attention
links would preclude separately extracting the visual and textual features
needed for the online search and the offline indexing steps of large-scale
retrieval systems. In this respect, TERAN merges the information from the two
domains only during the final alignment phase, immediately before the loss
computation. We argue that the fine-grained alignments produced by TERAN pave
the way for research on effective and efficient methods for
large-scale cross-modal information retrieval. We compare the effectiveness of
our approach against relevant state-of-the-art methods. On the MS-COCO 1K test
set, we obtain an improvement of 5.7% and 3.5% respectively on the image and
the sentence retrieval tasks on the Recall@1 metric. The code used for the
experiments is publicly available on GitHub at
https://github.com/mesnico/TERAN. | [
"cs.CV"
] |
Reinforcement learning in multi-agent scenarios is important for real-world
applications but presents challenges beyond those seen in single-agent
settings. We present an actor-critic algorithm that trains decentralized
policies in multi-agent settings, using centrally computed critics that share
an attention mechanism which selects relevant information for each agent at
every timestep. This attention mechanism enables more effective and scalable
learning in complex multi-agent environments, when compared to recent
approaches. Our approach is applicable not only to cooperative settings with
shared rewards, but also to individualized reward settings, including
adversarial settings, as well as settings that do not provide global states,
and it makes
no assumptions about the action spaces of the agents. As such, it is flexible
enough to be applied to most multi-agent learning problems. | [
"cs.LG",
"cs.AI",
"cs.MA",
"stat.ML"
] |
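A hedged numpy sketch of the kind of attention step such a centralized critic can use to select relevant information for one agent. The single head, the dimensions, and the exclusion of the querying agent are illustrative assumptions rather than the paper's exact architecture.

```python
# Toy attention over agents' encodings inside a centralized critic.
import numpy as np

def attention_context(encodings, i):
    """encodings: (N, D) per-agent embeddings; returns agent i's context."""
    q = encodings[i]                                 # query of agent i
    keys = np.delete(encodings, i, axis=0)           # the other agents
    logits = keys @ q / np.sqrt(len(q))              # scaled dot-product
    w = np.exp(logits - logits.max())
    w /= w.sum()                                     # softmax weights
    return w @ keys                                  # weighted summary

enc = np.random.randn(4, 8)                          # 4 agents, 8-dim encodings
print(attention_context(enc, 0).shape)               # (8,)
```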
Generative adversarial networks (GANs) are emerging machine learning models
for generating synthesized data similar to real data by jointly training a
generator and a discriminator. In many applications, data and computational
resources are distributed over many devices, so centralized computation with
all data in one location is infeasible due to privacy and/or communication
constraints. This paper proposes a new framework for training GANs in a
distributed fashion: Each device computes a local discriminator using local
data; a single server aggregates their results and computes a global GAN.
Specifically, in each iteration, the server sends the global GAN to the
devices, which then update their local discriminators; the devices send their
results to the server, which then computes their average as the global
discriminator and updates the global generator accordingly. Two different
update schedules are designed with different levels of parallelism between the
devices and the server. Numerical results obtained using three popular datasets
demonstrate that the proposed framework can outperform a state-of-the-art
framework in terms of convergence speed. | [
"cs.LG",
"cs.DC",
"cs.IT",
"cs.NI",
"math.IT"
] |
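The schedule described above can be sketched with toy parameter dictionaries; here a random perturbation stands in for real local discriminator training, and only the averaging logic mirrors the framework.

```python
# Self-contained toy of the synchronous device/server rounds.
import numpy as np

rng = np.random.default_rng(0)

def local_update(disc_params, local_data):
    """Stand-in for a device's local discriminator step (toy perturbation)."""
    return {k: v + 0.1 * local_data.mean() * rng.standard_normal(v.shape)
            for k, v in disc_params.items()}

def server_average(param_list):
    """Server step: global discriminator = element-wise device average."""
    return {k: np.mean([p[k] for p in param_list], axis=0)
            for k in param_list[0]}

disc = {"w": np.zeros((4, 4)), "b": np.zeros(4)}
device_data = [rng.standard_normal(100) for _ in range(5)]

for _ in range(3):                                       # three rounds
    locals_ = [local_update(disc, d) for d in device_data]  # devices update
    disc = server_average(locals_)                          # server aggregates
print(disc["b"])
```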
Quantum physics experiments produce interesting phenomena such as
interference or entanglement, the latter a core property of numerous future
quantum technologies. The complex relationship between a quantum experiment's
structure and its entanglement properties is essential to fundamental research
in quantum optics but is difficult to intuitively understand. We present the
first deep generative model of quantum optics experiments where a variational
autoencoder (QOVAE) is trained on a dataset of experimental setups. In a series
of computational experiments, we investigate the learned representation of the
QOVAE and its internal understanding of the quantum optics world. We
demonstrate that the QOVAE learns an interpretable representation of quantum
optics experiments and the relationship between experiment structure and
entanglement. We show the QOVAE is able to generate novel experiments for
highly entangled quantum states with specific distributions that match its
training data. Importantly, we are able to fully interpret how the QOVAE
structures its latent space, finding curious patterns that we can entirely
explain in terms of quantum physics. The results demonstrate how we can
successfully use and understand the internal representations of deep generative
models in a complex scientific domain. The QOVAE and the insights from our
investigations can be immediately applied to other physical systems throughout
fundamental scientific research. | [
"cs.LG",
"quant-ph"
] |
The success of machine learning stems from its structured data
representation. Similar data have close representations, whether as compressed
codes for classification or as emergent labels for clustering. We observe that the frequency
of the internal representation follows power laws in both supervised and
unsupervised learning. The scale-invariant distribution implies that machine
learning largely compresses frequent typical data, and at the same time,
differentiates many atypical data as outliers. In this study, we derive how the
power laws can naturally arise in machine learning. In terms of information
theory, the scale-invariant representation corresponds to a maximally uncertain
data grouping among possible representations that guarantee pre-specified
learning accuracy. | [
"cs.LG",
"cs.IT",
"math.IT",
"physics.data-an"
] |
The transition from today's mostly human-driven traffic to a purely automated
one will be a gradual evolution, with the effect that we will likely experience
mixed traffic in the near future. Connected and automated vehicles can benefit
human-driven ones and the whole traffic system in different ways, for example
by improving collision avoidance and reducing traffic waves. Many studies have
been carried out to improve intersection management, a significant bottleneck
in traffic, with intelligent traffic signals or exclusively automated vehicles.
However, the problem of how to improve mixed traffic at unsignalized
intersections has received less attention. In this paper, we propose a novel
approach to optimizing traffic flow at intersections in mixed traffic
situations using deep reinforcement learning. Our reinforcement learning agent
learns a policy for a centralized controller to let connected autonomous
vehicles at unsignalized intersections give up their right of way and yield to
other vehicles to optimize traffic flow. We implemented our approach and tested
it in the traffic simulator SUMO based on simulated and real traffic data. The
experimental evaluation demonstrates that our method significantly improves
traffic flow through unsignalized intersections in mixed traffic settings and
also provides better performance on a wide range of traffic situations compared
to the state-of-the-art traffic signal controller for the corresponding
signalized intersection. | [
"cs.LG",
"cs.RO"
] |
In this paper, we present a new feature representation for first-person
videos. In first-person video understanding (e.g., activity recognition), it is
very important to capture both entire scene dynamics (i.e., egomotion) and
salient local motion observed in videos. We describe a representation framework
based on time series pooling, which is designed to abstract
short-term/long-term changes in feature descriptor elements. The idea is to
keep track of how descriptor values are changing over time and summarize them
to represent motion in the activity video. The framework is general, handling
any type of per-frame feature descriptor, including conventional motion
descriptors like histograms of optical flow (HOF) as well as appearance
descriptors from more recent convolutional neural networks (CNNs). We
experimentally confirm that our approach clearly outperforms previous feature
representations including bag-of-visual-words and improved Fisher vector (IFV)
when using identical underlying feature descriptors. We also confirm that our
feature representation has superior performance to existing state-of-the-art
features like local spatio-temporal features and Improved Trajectory Features
(originally developed for 3rd-person videos) when handling first-person videos.
Multiple first-person activity datasets were tested under various settings to
confirm these findings. | [
"cs.CV"
] |
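A hedged sketch of the time-series-pooling idea: track how each descriptor dimension changes over time and summarize the changes into a fixed-length video representation. The specific summary statistics below (max plus sums of positive and negative temporal differences) are illustrative choices, not necessarily the paper's exact operators.

```python
# Toy temporal pooling over per-frame descriptors (HOF, CNN features, ...).
import numpy as np

def pool_time_series(features):
    """features: (T, D) array of per-frame descriptors."""
    diffs = np.diff(features, axis=0)          # frame-to-frame changes
    pos = np.clip(diffs, 0, None).sum(axis=0)  # accumulated increases
    neg = np.clip(diffs, None, 0).sum(axis=0)  # accumulated decreases
    mx = features.max(axis=0)                  # peak response per dimension
    return np.concatenate([mx, pos, neg])      # fixed-length video descriptor

video = np.abs(np.random.randn(300, 64))       # 300 frames, 64-dim descriptors
print(pool_time_series(video).shape)           # (192,)
```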
The ability to detect and count certain substructures in graphs is important
for solving many tasks on graph-structured data, especially in the contexts of
computational chemistry and biology as well as social network analysis.
Inspired by this, we propose to study the expressive power of graph neural
networks (GNNs) via their ability to count attributed graph substructures,
extending recent works that examine their power in graph isomorphism testing
and function approximation. We distinguish between two types of substructure
counting: induced-subgraph-count and subgraph-count, and establish both
positive and negative answers for popular GNN architectures. Specifically, we
prove that Message Passing Neural Networks (MPNNs), 2-Weisfeiler-Lehman (2-WL)
and 2-Invariant Graph Networks (2-IGNs) cannot perform induced-subgraph-count
of substructures consisting of 3 or more nodes, while they can perform
subgraph-count of star-shaped substructures. As an intermediate step, we prove
that 2-WL and 2-IGNs are equivalent in distinguishing non-isomorphic graphs,
partly answering an open problem raised in Maron et al. (2019). We also prove
positive results for k-WL and k-IGNs as well as negative results for k-WL with
a finite number of iterations. We then conduct experiments that support the
theoretical results for MPNNs and 2-IGNs. Moreover, motivated by substructure
counting and inspired by Murphy et al. (2019), we propose the Local Relational
Pooling model and demonstrate that it is not only effective for substructure
counting but also able to achieve competitive performance on molecular
prediction tasks. | [
"cs.LG",
"cs.DM",
"stat.ML"
] |
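The induced-subgraph-count versus subgraph-count distinction can be made concrete with a small brute-force networkx check (illustrative only, no GNN involved): in a triangle, the 2-edge path occurs three times as a subgraph but never as an induced subgraph.

```python
# Brute-force counts of the 2-edge path P3 in the triangle K3.
import itertools
import networkx as nx

G = nx.complete_graph(3)
P3 = nx.path_graph(3)

# induced-subgraph-count: choose nodes, take ALL edges among them
induced = sum(nx.is_isomorphic(G.subgraph(s), P3)
              for s in itertools.combinations(G, 3))

# subgraph-count: choose nodes AND an edge subset isomorphic to P3
subgraph = sum(
    1
    for s in itertools.combinations(G, 3)
    for edges in itertools.combinations(G.subgraph(s).edges, 2)
    if nx.is_isomorphic(nx.Graph(list(edges)), P3))

print(induced, subgraph)  # 0 3
```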
We propose a novel method for imputing missing data by adapting the
well-known Generative Adversarial Nets (GAN) framework. Accordingly, we call
our method Generative Adversarial Imputation Nets (GAIN). The generator (G)
observes some components of a real data vector, imputes the missing components
conditioned on what is actually observed, and outputs a completed vector. The
discriminator (D) then takes a completed vector and attempts to determine which
components were actually observed and which were imputed. To ensure that D
forces G to learn the desired distribution, we provide D with some additional
information in the form of a hint vector. The hint reveals to D partial
information about the missingness of the original sample, which is used by D to
focus its attention on the imputation quality of particular components. This
hint ensures that G does in fact learn to generate according to the true data
distribution. We tested our method on various datasets and found that GAIN
significantly outperforms state-of-the-art imputation methods. | [
"cs.LG",
"stat.ML"
] |
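A short numpy sketch of the data flow described above: the mask, the generator input with noise in the missing slots, and the hint vector that reveals the true mask only at a random subset of components. The hint rate and shapes are illustrative assumptions.

```python
# Mask, generator input, and hint construction in a GAIN-style setup.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5))                  # real data batch
m = (rng.random((8, 5)) > 0.3).astype(float)     # 1 = observed, 0 = missing

# G sees the observed components plus noise in the missing slots
z = rng.standard_normal(x.shape)
g_input = m * x + (1 - m) * z

# Hint: reveal the true mask at a random subset of components and leave
# the rest ambiguous (0.5), so D must judge imputation quality locally.
hint_rate = 0.9
b = (rng.random(x.shape) < hint_rate).astype(float)
hint = b * m + 0.5 * (1 - b)
print(g_input.shape, hint.shape)
```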
The vision for precision medicine is to use individual patient
characteristics to inform a personalized treatment plan that leads to the best
healthcare possible for each patient. Mobile technologies have an important
role to play in this vision as they offer a means to monitor a patient's health
status in real-time and subsequently to deliver interventions if, when, and in
the dose that they are needed. Dynamic treatment regimes formalize
individualized treatment plans as sequences of decision rules, one per stage of
clinical intervention, that map current patient information to a recommended
treatment. However, existing methods for estimating optimal dynamic treatment
regimes are designed for a small number of fixed decision points occurring on a
coarse time-scale. We propose a new reinforcement learning method for
estimating an optimal treatment regime that is applicable to data collected
using mobile technologies in an outpatient setting. The proposed method
accommodates an indefinite time horizon and minute-by-minute decision making
that are common in mobile health applications. We show the proposed estimators
are consistent and asymptotically normal under mild conditions. The proposed
methods are applied to estimate an optimal dynamic treatment regime for
controlling blood glucose levels in patients with type 1 diabetes. | [
"stat.ML"
] |
Recent advances in deep learning from probability distributions successfully
achieve classification or regression from distribution samples, and are thus
invariant under permutation of the samples. The first contribution of this paper is to
extend these neural architectures to achieve invariance under permutation of
the features, too. The proposed architecture, called Dida, inherits the NN
properties of universal approximation, and its robustness w.r.t.
Lipschitz-bounded transformations of the input distribution is established. The
second contribution is to empirically and comparatively demonstrate the merits
of the approach on two tasks defined at the dataset level. On both tasks, Dida
learns meta-features supporting the characterization of a (labelled) dataset.
The first task consists of predicting whether two dataset patches are extracted
from the same initial dataset. The second task consists of predicting whether
the learning performance achieved by a hyper-parameter configuration under a
fixed algorithm (ranging over k-NN, SVM, logistic regression, and a linear
classifier with SGD) dominates that of another configuration, for a dataset
extracted from the OpenML benchmarking suite. On both tasks, Dida outperforms
the state of the art: DSS (Maron et al., 2020) and Dataset2Vec (Jomaa et al.,
2019) architectures, as well as the models based on the hand-crafted
meta-features of the literature. | [
"stat.ML",
"cs.LG"
] |
Offline reinforcement learning (RL) algorithms have shown promising results
in domains where abundant pre-collected data is available. However, prior
methods focus on solving individual problems from scratch with an offline
dataset without considering how an offline RL agent can acquire multiple
skills. We argue that a natural use case of offline RL is in settings where we
can pool large amounts of data collected in various scenarios for solving
different tasks, and utilize all of this data to learn behaviors for all the
tasks more effectively rather than training each one in isolation. However,
sharing data across all tasks in multi-task offline RL performs surprisingly
poorly in practice. Through careful empirical analysis, we find that sharing data can
actually exacerbate the distributional shift between the learned policy and the
dataset, which in turn can lead to divergence of the learned policy and poor
performance. To address this challenge, we develop a simple technique for
data-sharing in multi-task offline RL that routes data based on the improvement
over the task-specific data. We call this approach conservative data sharing
(CDS), and it can be applied with multiple single-task offline RL methods. On a
range of challenging multi-task locomotion, navigation, and vision-based
robotic manipulation problems, CDS achieves the best or comparable performance
compared to prior offline multi-task RL methods and previous data sharing
approaches. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
In this work, we propose TransTrack, a simple but efficient scheme to solve
the multiple object tracking problem. TransTrack leverages the transformer
architecture, which is an attention-based query-key mechanism. It applies
object features from the previous frame as a query of the current frame and
introduces a set of learned object queries to enable detecting newly appearing
objects. It builds up a novel joint-detection-and-tracking paradigm by
accomplishing object detection and object association in a single shot,
simplifying complicated multi-step settings in tracking-by-detection methods.
On the MOT17 and MOT20 benchmarks, TransTrack achieves 74.5\% and 64.5\% MOTA,
respectively, competitive to the state-of-the-art methods. We expect TransTrack
to provide a novel perspective for multiple object tracking. The code is
available at: \url{https://github.com/PeizeSun/TransTrack}. | [
"cs.CV"
] |
Using offline training schemes, researchers have tackled the event
segmentation problem by providing full or weak supervision through manually
annotated labels or self-supervised epoch-based training. Most works consider
videos that are at most tens of minutes long. We present a self-supervised
perceptual prediction framework capable of temporal event segmentation by
building stable representations of objects over time and demonstrate it on long
videos, spanning several days. The approach is deceptively simple but quite
effective. We rely on predictions of high-level features computed by a standard
deep learning backbone. For prediction, we use an LSTM, augmented with an
attention mechanism, trained in a self-supervised manner using the prediction
error. The self-learned attention maps effectively localize and track the
event-related objects in each frame. The proposed approach does not require
labels. It requires only a single pass through the video, with no separate
training set. Given the lack of datasets of very long videos, we demonstrate
our method on video from 10 days (254 hours) of continuous wildlife monitoring
data that we had collected with required permissions. We find that the approach
is robust to various environmental conditions such as day/night conditions,
rain, sharp shadows, and windy conditions. For the task of temporally locating
events, we achieved an 80% recall rate at a 20% false-positive rate for
frame-level segmentation. At the activity level, we achieved an 80% activity recall rate with one
false activity detection every 50 minutes. We will make the dataset, which is
the first of its kind, and the code available to the research community. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
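A hedged sketch of the self-supervised signal described above: an LSTM predicts the next frame's features, and spikes in prediction error mark candidate event boundaries. The model is untrained here, and the sizes and threshold rule are illustrative; the sketch shows only the mechanics.

```python
# Prediction-error-based event boundaries from per-frame features.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=128, hidden_size=128, batch_first=True)
feats = torch.randn(1, 500, 128)          # per-frame backbone features

pred, _ = lstm(feats[:, :-1])             # predict the next frame's features
err = ((pred - feats[:, 1:]) ** 2).mean(dim=-1).squeeze(0)  # (499,)

# Flag frames whose prediction error spikes above a simple threshold
boundaries = (err > err.mean() + 2 * err.std()).nonzero().flatten()
print(boundaries[:10])
```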
In the last few years, there has been a growing interest in taking advantage
of the potential of 360-degree panoramic images, while managing the new
challenges they imply. While several tasks have been improved thanks to the contextual
information these images offer, object recognition in indoor scenes still
remains a challenging problem that has not been deeply investigated. This paper
provides an object recognition system that performs object detection and
semantic segmentation tasks by using a deep learning model adapted to match the
nature of equirectangular images. From these results, instance segmentation
masks are recovered, refined and transformed into 3D bounding boxes that are
placed into the 3D model of the room. Quantitative and qualitative results
support that our method outperforms the state of the art by a large margin and
show a complete understanding of the main objects in indoor scenes. | [
"cs.CV"
] |
Deep convolutional neural networks (DCNNs) are the state-of-the-art method for
image segmentation, one of the key challenging computer vision tasks.
However, DCNNs require many training images with corresponding image masks
to get a good segmentation result. Image annotation software that is easy to
use and allows fast image mask generation is in great demand. To the best of
our knowledge, all existing image annotation software supports only drawing
bounding polygons, bounding boxes, or bounding ellipses to mark target objects.
Such software is inefficient when targeting objects that have
irregular shapes (e.g., defects in fabric images or tire images). In this paper
we design an easy-to-use image annotation software called Mask Editor for image
mask generation. Mask Editor allows drawing any bounding curve to mark objects
and improves efficiency to mark objects with irregular shapes. Mask Editor also
supports drawing bounding polygons, drawing bounding boxes, drawing bounding
ellipses, painting, erasing, super-pixel-marking, image cropping, multi-class
masks, mask loading, and mask modifying. | [
"cs.CV"
] |
Solving the maximum a posteriori problem on Markov random fields (MRF-MAP) is
a prevailing method in recent interactive image segmentation tools. Although
mathematically explicit in its computational targets, and impressive in its
segmentation quality, MRF-MAP is hard to accomplish without interactive
information from users, and so it is rarely adopted in fully automatic settings.
In this paper, we present an automatic image segmentation algorithm,
NegCut, based on the approximation to MRF-MAP. First we prove MRF-MAP is
NP-hard when the probabilistic models are unknown, and then present an
approximation function in the form of minimum cuts on graphs with negative
weights. Finally, the binary segmentation is taken from the largest eigenvector
of the target matrix, with a tuned version of the Lanczos eigensolver. It is
shown competitive at the segmentation quality in our experiments. | [
"cs.CV"
] |
With a monocular visual-inertial odometry (VIO) system, the 3D point cloud and
camera motion can be estimated simultaneously. Because pure sparse 3D points
provide only a structureless representation of the environment, generating a 3D mesh
from sparse points can further model the environment topology and produce dense
mapping. To improve the accuracy of 3D mesh generation and localization, we
propose a tightly-coupled monocular VIO system, PLP-VIO, which exploits point
features and line features as well as plane regularities. The co-planarity
constraints are used to leverage additional structural information for more
accurate estimation of 3D points and spatial lines in the state estimator. To
detect planes and 3D meshes robustly, we combine line features with point
features in the detection method. The effectiveness of the proposed method is
verified on both synthetic data and public datasets and is compared with other
state-of-the-art algorithms. | [
"cs.CV",
"cs.RO"
] |
Time series forecasting is difficult. It is difficult even for recurrent
neural networks with their inherent ability to learn sequentiality. This
article presents a recurrent neural network based time series forecasting
framework covering feature engineering, feature importances, point and interval
predictions, and forecast evaluation. The description of the method is followed
by an empirical study using both LSTM and GRU networks. | [
"cs.LG",
"stat.ML"
] |
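A minimal sketch of one-step-ahead point forecasting with a GRU in the spirit of the framework above; the windowing, architecture, and toy series are illustrative choices.

```python
# One-step-ahead forecasting with a GRU on a sliding window.
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (B, window, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1])      # point prediction for the next step

series = torch.sin(torch.linspace(0, 20, 200))
window = 24
x = torch.stack([series[i:i + window] for i in range(100)]).unsqueeze(-1)
y = series[window:window + 100].unsqueeze(-1)

model = GRUForecaster()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()                           # one illustrative training step
print(float(loss))
```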
Adversarial examples are data points misclassified by neural networks.
Originally, adversarial examples were limited to adding small perturbations to
a given image. Recent work introduced the generalized concept of unrestricted
adversarial examples, without limits on the added perturbations. In this paper,
we introduce a new category of attacks that create unrestricted adversarial
examples for object detection. Our key idea is to generate adversarial objects
that are unrelated to the classes identified by the target object detector.
Different from previous attacks, we use off-the-shelf Generative Adversarial
Networks (GAN), without requiring any further training or modification. Our
method consists of searching over the latent normal space of the GAN for
adversarial objects that are wrongly identified by the target object detector.
We evaluate this method on the commonly used Faster R-CNN ResNet-101, Inception
v2, and SSD MobileNet v1 object detectors, using the logo-generating iWGAN-LC
and an SNGAN trained on CIFAR-10. The empirical results show that the generated
adversarial objects are indistinguishable from non-adversarial objects
generated by the GANs, transferable between the object detectors and robust in
the physical world. This is the first work to study unrestricted false positive
adversarial examples for object detection. | [
"cs.LG",
"cs.CR",
"cs.CV",
"stat.ML"
] |
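A heavily simplified sketch of the search idea: sample GAN latents and keep those whose generated output the detector assigns to a chosen class. The generator and detector below are toy stand-ins, not the iWGAN-LC/SNGAN or Faster R-CNN models used in the paper.

```python
# Toy latent-space search for detector-fooling samples.
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z):                    # stand-in for a pre-trained GAN
    return np.tanh(z)

def toy_detector(img):                   # stand-in detector: a class id
    return int(img.sum() > 0)

target_class = 1
adversarial_latents = [
    z for z in rng.standard_normal((1000, 64))
    if toy_detector(toy_generator(z)) == target_class  # "fooled" latents
]
print(len(adversarial_latents))
```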
Graphs are the natural data structure to represent relational and structural
information in many domains. To cover the broad range of graph-data
applications including graph classification as well as graph generation, it is
desirable to have a general and flexible model consisting of an encoder and a
decoder that can handle graph data. Although the representative encoder-decoder
model, Transformer, shows superior performance in various tasks especially of
natural language processing, it is not immediately available for graphs due to
their non-sequential characteristics. To tackle this incompatibility, we
propose GRaph-Aware Transformer (GRAT), the first Transformer-based model which
can encode and decode whole graphs in end-to-end fashion. GRAT is featured with
a self-attention mechanism adaptive to the edge information and an
auto-regressive decoding mechanism based on the two-path approach consisting of
sub-graph encoding path and node-and-edge generation path for each decoding
step. We empirically evaluated GRAT on multiple setups including encoder-based
tasks such as molecule property predictions on QM9 datasets and
encoder-decoder-based tasks such as molecule graph generation in the organic
molecule synthesis domain. GRAT has shown very promising results including
state-of-the-art performance on 4 regression tasks in QM9 benchmark. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
Federated Learning (FL), arising as a novel secure learning paradigm, has
received notable attention from the public. In each round of synchronous FL
training, only a fraction of available clients are chosen to participate and
the selection decision might have a significant effect on the training
efficiency, as well as the final model performance. In this paper, we
investigate the client selection problem under a volatile context, in which the
local training of heterogeneous clients is likely to fail due to various kinds
of reasons and at different frequencies. Intuitively, too many training
failures might reduce the training efficiency, while selecting clients with
greater stability too often might introduce bias and thereby degrade the
training effectiveness. To tackle this tradeoff,
we formulate the client selection problem under joint
consideration of effective participation and fairness. Further, we propose
E3CS, a stochastic client selection scheme on the basis of an adversarial
bandit solution, and we further corroborate its effectiveness by conducting
real data-based experiments. According to the experimental results, our
proposed selection scheme is able to achieve up to 2x faster convergence to a
fixed model accuracy while maintaining the same level of final model accuracy,
in comparison to the vanilla selection scheme in FL. | [
"cs.LG",
"cs.DC"
] |
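E3CS builds on an adversarial-bandit solution; below is a generic Exp3 sketch for picking one client per round. The real scheme selects a fraction of clients per round and adds fairness guarantees, which this toy version omits.

```python
# Exp3: exponential-weight client selection with importance weighting.
import numpy as np

rng = np.random.default_rng(0)
K, T, gamma = 10, 200, 0.1
weights = np.ones(K)

for t in range(T):
    probs = (1 - gamma) * weights / weights.sum() + gamma / K
    k = rng.choice(K, p=probs)                     # sample a client
    reward = float(rng.random() < 0.5 + 0.04 * k)  # toy "success" feedback
    est = reward / probs[k]                        # importance-weighted estimate
    weights[k] *= np.exp(gamma * est / K)          # exponential update

print(np.argmax(weights))                          # most promising client
```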
We present a reinforcement learning approach for detecting objects within an
image. Our approach performs a step-wise deformation of a bounding box with the
goal of tightly framing the object. It uses a hierarchical tree-like
representation of predefined region candidates, which the agent can zoom in on.
This reduces the number of region candidates that must be evaluated so that the
agent can afford to compute new feature maps before each step to enhance
detection quality. We compare an approach that is based purely on zoom actions
with one that is extended by a second refinement stage to fine-tune the
bounding box after each zoom step. We also improve the fitting ability by
allowing for different aspect ratios of the bounding box. Finally, we propose
different reward functions to better guide the agent along its search
trajectories. Experiments indicate that each of these
extensions leads to more correct detections. The best performing approach
comprises a zoom stage and a refinement stage, uses aspect-ratio modifying
actions and is trained using a combination of three different reward metrics. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Multi-modal Record Linkage is the process of matching multi-modal records
from multiple sources that represent the same entity. This field has been
little explored in research, and we propose two solutions based on Deep
Learning architectures inspired by recent work in Visual Question Answering.
The neural networks we propose use two different fusion modules, the Recurrent
Neural Network + Convolutional Neural Network fusion module and the Stacked
Attention Network fusion module, that jointly combine the visual and the
textual data of the records. The output of these fusion models is the input of
a Siamese Neural Network that computes the similarity of the records. Using
data from the Avito Duplicate Advertisements Detection dataset, we train these
solutions and from the experiments, we concluded that the Recurrent Neural
Network + Convolutional Neural Network fusion module outperforms a simple model
that uses hand-crafted features. We also find that the Recurrent Neural Network
+ Convolutional Neural Network fusion module classifies dissimilar
advertisements as similar more frequently if their average description is
longer than 40 words. We conclude that the reason for this is that longer
advertisements have a different distribution than the shorter advertisements,
which are more prevalent in the dataset. In the end, we also conclude that
further research needs to be done with the Stacked Attention Network, to
further explore the effects of the visual data on the performance of the fusion
modules. | [
"cs.LG",
"cs.AI",
"cs.DB",
"stat.ML"
] |
In this work, we propose a novel approach to prioritize the depth map
computation of multi-view stereo (MVS) to obtain compact 3D point clouds of
high quality and completeness at low computational cost. Our prioritization
approach operates before the MVS algorithm is executed and consists of two
steps. In the first step, we aim to find a good set of matching partners for
each view. In the second step, we rank the resulting view clusters (i.e. key
views with matching partners) according to their impact on the fulfillment of
desired quality parameters such as completeness, ground resolution and
accuracy. In addition to geometric analysis, we use a novel machine learning
technique for training a confidence predictor. The purpose of this confidence
predictor is to estimate the chances of a successful depth reconstruction for
each pixel in each image for one specific MVS algorithm based on the RGB images
and the image constellation. The underlying machine learning technique does not
require any ground truth or manually labeled data for training, but instead
adapts ideas from depth map fusion for providing a supervision signal. The
trained confidence predictor allows us to evaluate the quality of image
constellations and their potential impact on the resulting 3D reconstruction,
and thus builds a solid foundation for our prioritization approach. In our
experiments, we are thus able to reach more than 70% of the maximal reachable
quality fulfillment using only 5% of the available images as key views. For
evaluating our approach within and across different domains, we use two
completely different scenarios, i.e. cultural heritage preservation and
reconstruction of single family houses. | [
"cs.CV"
] |
Graph-based convolutional models such as the non-local block have been shown to be
effective for strengthening the context modeling ability in convolutional
neural networks (CNNs). However, their pixel-wise computational overhead is
prohibitive, which renders them unsuitable for high-resolution imagery. In this
paper, we explore the efficiency of context graph reasoning and propose a novel
framework called Squeeze Reasoning. Instead of propagating information on the
spatial map, we first learn to squeeze the input feature into a channel-wise
global vector and perform reasoning within the single vector where the
computation cost can be significantly reduced. Specifically, we build the node
graph in the vector where each node represents an abstract semantic concept.
The refined features within the same semantic category turn out to be consistent,
which is thus beneficial for downstream tasks. We show that our approach can be
modularized as an end-to-end trained block and can be easily plugged into
existing networks. Despite its simplicity and light weight, the proposed
strategy allows us to establish considerable results on different semantic
segmentation datasets and shows significant improvements with respect to strong
baselines on various other scene understanding tasks, including object
detection, instance segmentation and panoptic segmentation. Code is available
at \url{https://github.com/lxtGH/SFSegNets}. | [
"cs.CV"
] |
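A hedged PyTorch sketch of the squeeze idea: pool the spatial map into one channel-wise vector, reason cheaply in that vector space, and re-excite the map. The reasoning MLP here is a stand-in (closer to squeeze-and-excitation than to the paper's node-graph reasoning), and all sizes are illustrative.

```python
# Squeeze a (B, C, H, W) map into a C-vector, reason, and re-excite.
import torch
import torch.nn as nn

class SqueezeReason(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.reason = nn.Sequential(          # reasoning on the squeezed vector
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):                     # x: (B, C, H, W)
        v = x.mean(dim=(2, 3))                # squeeze: global average per channel
        w = self.reason(v)                    # cheap reasoning in vector space
        return x * w[:, :, None, None]        # broadcast back to the map

print(SqueezeReason(64)(torch.randn(2, 64, 32, 32)).shape)
```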
The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves real
time speeds and can be deployed on a mobile device. On the opposite end in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task. | [
"cs.CV"
] |
Sparse representation learning has recently gained great success in signal
and image processing, thanks to recent advances in dictionary learning. To this
end, the $\ell_0$-norm is often used to control the sparsity level.
Nevertheless, optimization problems based on the $\ell_0$-norm are non-convex
and NP-hard. For these reasons, relaxation techniques have been attracting much
attention from researchers, targeting approximate solutions instead (e.g. the
$\ell_1$-norm, pursuit strategies). In contrast, this paper considers the
exact $\ell_0$-norm optimization problem and proves that it can be solved
effectively, despite its complexity. The proposed method reformulates the
problem as a Mixed-Integer Quadratic Program (MIQP) and obtains the globally
optimal solution by applying existing optimization software. Because the main
difficulty of this approach is its computational time, two techniques are
introduced that improve the computational speed. Finally, our method is applied
to image denoising which shows its feasibility and relevance compared to the
state-of-the-art. | [
"cs.CV",
"eess.IV"
] |
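A sketch of the standard big-M recipe by which an exact $\ell_0$ problem can be cast as an MIQP; the bound $M$ on $|x_i|$ and the cardinality form are illustrative of the general construction rather than the paper's exact formulation.

```latex
% Exact sparse coding (top) and a big-M MIQP reformulation (bottom);
% b_i = 1 marks an active coefficient, M bounds |x_i| (illustrative).
\begin{align*}
  &\min_{x}\ \tfrac{1}{2}\|y - Dx\|_2^2 \quad \text{s.t.} \quad \|x\|_0 \le k,\\
  &\min_{x,b}\ \tfrac{1}{2}\|y - Dx\|_2^2 \quad \text{s.t.} \quad
    -M b_i \le x_i \le M b_i,\ \ \sum_i b_i \le k,\ \ b_i \in \{0,1\}.
\end{align*}
```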
System identification of complex and nonlinear systems is a central problem
for model predictive control and model-based reinforcement learning. Despite
their complexity, such systems can often be approximated well by a set of
linear dynamical systems if broken into appropriate subsequences. This
mechanism not only helps us find good approximations of dynamics, but also
gives us deeper insight into the underlying system. Leveraging Bayesian
inference, Variational Autoencoders and Concrete relaxations, we show how to
learn a richer and more meaningful state space, e.g. encoding joint constraints
and collisions with walls in a maze, from partial and high-dimensional
observations. This representation translates into a gain of accuracy of learned
dynamics showcased on various simulated tasks. | [
"stat.ML",
"cs.LG"
] |
I propose a new tool to characterize the resolution of uncertainty around
FOMC press conferences. It relies on the construction of a measure capturing
the level of discussion complexity between the Fed Chair and reporters during
the Q&A sessions. I show that complex discussions are associated with higher
equity returns and a drop in realized volatility. The method creates an
attention score by quantifying how much the Chair needs to rely on reading
internal documents to be able to answer a question. This is accomplished by
building a novel dataset of video images of the press conferences and
leveraging recent deep learning algorithms from computer vision. This
alternative data provides new information on nonverbal communication that
cannot be extracted from the widely analyzed FOMC transcripts. This paper can
be seen as a proof of concept that certain videos contain valuable information
for the study of financial markets. | [
"stat.ML",
"cs.CV",
"cs.LG",
"q-fin.GN"
] |
Visual relations, such as "person ride bike" and "bike next to car", offer a
comprehensive scene understanding of an image, and have already shown their
great utility in connecting computer vision and natural language. However, due
to the challenging combinatorial complexity of modeling
subject-predicate-object relation triplets, very little work has been done to
localize and predict visual relations. Inspired by the recent advances in
relational representation learning of knowledge bases and convolutional object
detection networks, we propose a Visual Translation Embedding network (VTransE)
for visual relation detection. VTransE places objects in a low-dimensional
relation space where a relation can be modeled as a simple vector translation,
i.e., subject + predicate $\approx$ object. We propose a novel feature
extraction layer that enables object-relation knowledge transfer in a
fully-convolutional fashion that supports training and inference in a single
forward/backward pass. To the best of our knowledge, VTransE is the first
end-to-end relation detection network. We demonstrate the effectiveness of
VTransE over other state-of-the-art methods on two large-scale datasets: Visual
Relationship and Visual Genome. Note that even though VTransE is a purely
visual model, it is still competitive with Lu et al.'s multi-modal model with
language priors. | [
"cs.CV",
"I.4"
] |
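The core relational constraint, subject + predicate ≈ object, can be written as a simple translation residual over learned embeddings; this toy numpy version illustrates the objective only, not the detection network or its training.

```python
# Translation-embedding residual for a relation triplet.
import numpy as np

def translation_loss(s, p, o):
    """Squared residual of the translation s + p ≈ o in relation space."""
    return np.sum((s + p - o) ** 2)

rng = np.random.default_rng(0)
s, p = rng.standard_normal(16), rng.standard_normal(16)
o = s + p + 0.01 * rng.standard_normal(16)   # a near-perfect triplet
print(round(translation_loss(s, p, o), 4))   # small residual
```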
We propose estimation methods for change points in high-dimensional
covariance structures with an emphasis on challenging scenarios with missing
values. We advocate three imputation-like methods and investigate their
implications on common losses used for change point detection. We also discuss
how model selection methods have to be adapted to the setting of incomplete
data. The methods are compared in a simulation study and applied to a time
series from an environmental monitoring system. An implementation of our
proposals within the R-package hdcd is available via the Supplementary
materials. | [
"stat.ML",
"cs.LG",
"stat.AP",
"stat.ME"
] |
Popularized as 'bottom-up' attention, bounding box (or region) based visual
features have recently surpassed vanilla grid-based convolutional features as
the de facto standard for vision and language tasks like visual question
answering (VQA). However, it is not clear whether the advantages of regions
(e.g. better localization) are the key reasons for the success of bottom-up
attention. In this paper, we revisit grid features for VQA, and find they can
work surprisingly well - running more than an order of magnitude faster with
the same accuracy (e.g. if pre-trained in a similar fashion). Through extensive
experiments, we verify that this observation holds true across different VQA
models (reporting a state-of-the-art accuracy on VQA 2.0 test-std, 72.71),
datasets, and generalizes well to other tasks like image captioning. As grid
features make the model design and training process much simpler, this enables
us to train them end-to-end and also use a more flexible network design. We
learn VQA models end-to-end, from pixels directly to answers, and show that
strong performance is achievable without using any region annotations in
pre-training. We hope our findings help further improve the scientific
understanding and the practical application of VQA. Code and features will be
made available. | [
"cs.CV"
] |
We present network embedding algorithms that capture information about a node
from the local distribution over node attributes around it, as observed over
random walks following an approach similar to Skip-gram. Observations from
neighborhoods of different sizes are either pooled (AE) or encoded distinctly
in a multi-scale approach (MUSAE). Capturing attribute-neighborhood
relationships over multiple scales is useful for a diverse range of
applications, including latent feature identification across disconnected
networks with similar attributes. We prove theoretically that matrices of
node-feature pointwise mutual information are implicitly factorized by the
embeddings. Experiments show that our algorithms are robust, computationally
efficient and outperform comparable models on social networks and web graphs. | [
"cs.LG",
"cs.NI",
"cs.SI",
"stat.ML"
] |
Nowadays, nonnegative matrix factorization (NMF) based methods have been
widely applied to blind spectral unmixing. Introducing proper regularizers to
NMF is crucial for mathematically constraining the solutions and physically
exploiting spectral and spatial properties of images. Generally, properly
handcrafting regularizers and solving the associated complex optimization
problem are non-trivial tasks. In our work, we propose an NMF-based unmixing
framework which jointly uses a handcrafted regularizer and a regularizer
learnt from data. We plug in learnt priors on the abundances, where the
associated subproblem can be addressed using various image denoisers, and we
apply an l_2,1-norm regularizer to the abundance matrix to promote sparse
unmixing results. The proposed framework is flexible and extendable.
Experiments on both synthetic data and real airborne data confirm the
effectiveness of our method. | [
"cs.CV",
"eess.IV"
] |
Biomedical imaging is a driver of scientific discovery and core component of
medical care, currently stimulated by the field of deep learning. While
semantic segmentation algorithms enable 3D image analysis and quantification in
many applications, the design of respective specialised solutions is
non-trivial and highly dependent on dataset properties and hardware conditions.
We propose nnU-Net, a deep learning framework that condenses the current domain
knowledge and autonomously takes the key decisions required to transfer a basic
architecture to different datasets and segmentation tasks. Without manual
tuning, nnU-Net surpasses most specialised deep learning pipelines in 19 public
international competitions and sets a new state of the art in the majority of
the 49 tasks. The results demonstrate a vast hidden potential in the systematic
adaptation of deep learning methods to different datasets. We make nnU-Net
publicly available as an open-source tool that can effectively be used
out-of-the-box, rendering state of the art segmentation accessible to
non-experts and catalyzing scientific progress as a framework for automated
method design. | [
"cs.CV"
] |
We present neural architectures that disentangle RGB-D images into objects'
shapes and styles and a map of the background scene, and explore their
applications for few-shot 3D object detection and few-shot concept
classification. Our networks incorporate architectural biases that reflect the
image formation process, 3D geometry of the world scene, and shape-style
interplay. They are trained end-to-end in a self-supervised manner by
predicting views in static scenes, alongside a small number of 3D object
boxes. Objects and scenes
are represented in terms of 3D feature grids in the bottleneck of the network.
We show that the proposed 3D neural representations are compositional: they can
generate novel 3D scene feature maps by mixing object shapes and styles,
resizing and adding the resulting object 3D feature maps over background scene
feature maps. We show that classifiers for object categories, color, materials,
and spatial relationships trained over the disentangled 3D feature sub-spaces
generalize better with dramatically fewer examples than the current
state-of-the-art, and enable a visual question answering system that uses them
as its modules to generalize one-shot to novel objects in the scene. | [
"cs.CV"
] |
Recently, Reinforcement Learning (RL) approaches have demonstrated advanced
performance in image captioning by directly optimizing the metric used for
testing. However, this shaped reward introduces learning biases, which reduces
the readability of generated text. In addition, the large sample space makes
training unstable and slow. To alleviate these issues, we propose a simple
coherent solution that constrains the action space using an n-gram language
prior. Quantitative and qualitative evaluations on benchmarks show that RL with
the simple add-on module performs favorably against its counterpart in terms of
both readability and speed of convergence. Human evaluation results show that
our model's output is more readable and fluent. The implementation will become
publicly available upon the acceptance of the paper. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
In this paper, we address the Online Unsupervised Domain Adaptation (OUDA)
problem, where the target data are unlabelled and arriving sequentially. The
traditional methods for the OUDA problem mainly focus on transforming each
arriving target datum to the source domain, and they do not sufficiently
consider the temporal coherency and accumulated statistics among the arriving
target data. We propose a multi-step framework for the OUDA problem, which
introduces a novel method to compute the mean-target subspace, inspired by a
geometrical interpretation in Euclidean space. This mean-target subspace
contains accumulated temporal information from the target data that have
arrived so far. Moreover, the transformation matrix computed from the
mean-target subspace is applied to the next target data as a preprocessing
step, aligning the target data closer to the source domain. Experiments on
four datasets demonstrate the contribution of each step in our proposed
multi-step OUDA framework and its performance over previous approaches. | [
"cs.LG",
"stat.ML"
] |
Ren et al. recently introduced a method for aggregating multiple decision
trees into a strong predictor by interpreting a path taken by a sample down
each tree as a binary vector and performing linear regression on top of these
vectors stacked together. They provided experimental evidence that the method
offers advantages over the usual approaches for combining decision trees
(random forests and boosting). The method truly shines when the regression
target is a large vector with correlated dimensions, such as a 2D face shape
represented with the positions of several facial landmarks. However, we argue
that their basic method is not applicable in many practical scenarios due to
large memory requirements. This paper shows how this issue can be solved
through the use of quantization and architectural changes of the predictor that
maps decision tree-derived encodings to the desired output. | [
"cs.CV",
"cs.NE"
] |
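A hedged sketch of the basic scheme discussed above using scikit-learn: encode each sample by the leaves it reaches in every tree (one-hot, stacked across trees) and fit a linear model on top. The forest size, ridge penalty, and toy data are illustrative choices.

```python
# Leaf-index encodings of a random forest + linear regression on top.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = X[:, 0] * 2 + np.sin(X[:, 1])

forest = RandomForestRegressor(n_estimators=20, max_depth=5).fit(X, y)
leaves = forest.apply(X)                       # (n_samples, n_trees) leaf ids
Z = OneHotEncoder(handle_unknown="ignore").fit_transform(leaves)  # binary codes
print(Ridge(alpha=1.0).fit(Z, y).score(Z, y))  # linear model on encodings
```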
Motion-blurred images challenge many computer vision algorithms, e.g., feature
detection, motion estimation, or object recognition. Deep convolutional neural
networks are state-of-the-art for image deblurring. However, obtaining training
data with corresponding sharp and blurry image pairs can be difficult. In this
paper, we present a differentiable reblur model for self-supervised motion
deblurring, which enables the network to learn from real-world blurry image
sequences without relying on sharp images for supervision. Our key insight is
that motion cues obtained from consecutive images yield sufficient information
to inform the deblurring task. We therefore formulate deblurring as an inverse
rendering problem, taking into account the physical image formation process: we
first predict two deblurred images from which we estimate the corresponding
optical flow. Using these predictions, we re-render the blurred images and
minimize the difference with respect to the original blurry inputs. We use both
synthetic and real datasets for experimental evaluation. Our experiments
demonstrate that self-supervised single-image deblurring is indeed feasible and
leads to visually compelling results. | [
"cs.CV"
] |
We propose a multi-head attention mechanism as a blending layer in a neural
network model that translates natural language to a high level behavioral
language for indoor robot navigation. We follow the framework established by
Zang et al. (2018a), which proposes the use of a navigation graph as a knowledge
base for the task. Our results show significant performance gains when
translating instructions on previously unseen environments, therefore,
improving the generalization capabilities of the model. | [
"cs.LG",
"cs.CL",
"cs.RO",
"stat.ML"
] |
Recent advances in computer graphics and computer vision have found
successful application of deep neural network models for 3D shapes based on
signed distance functions (SDFs) that are useful for shape representation,
retrieval, and completion. However, this approach has been limited by the need
to have query shapes in the same canonical scale and pose as those observed
during training, restricting its effectiveness on real world scenes. We present
a formulation to overcome this issue by jointly estimating shape and similarity
transform parameters. We conduct experiments to demonstrate the effectiveness
of this formulation on synthetic and real datasets and report favorable
comparisons to the state of the art. Finally, we also emphasize the viability
of this approach as a form of 3D model compression. | [
"cs.CV"
] |
In this paper, we propose a color to grayscale image conversion algorithm
(C2G) that aims to preserve the perceptual properties of the color image as
much as possible. To this end, we propose measures for two perceptual
properties based on contemporary research in vision science: brightness and
multi-scale contrast. The brightness measurement is based on the idea that the
brightness of a grayscale image will affect the perception of the probability
of color information. The color contrast measurement is based on the idea that
the contrast of a given pixel to its surroundings can be measured as a linear
combination of color contrast at different scales. Based on these measures we
propose a graph based optimization framework to balance the brightness and
contrast measurements. To solve the optimization, an $\ell_1$-norm based method
is provided which converts color discontinuities to brightness discontinuities.
To validate our methods, we evaluate against the existing Čadík and Color250
datasets, and against NeoColor, a new dataset that improves over existing C2G
datasets. NeoColor contains around 300 images from typical C2G scenarios,
including commercial photographs, prints, books, magazines, masterpiece
artworks, and computer-designed graphics. We show improvements in metrics of
performance, and further through a user study, we validate the performance of
both the algorithm and the metric. | [
"cs.CV"
] |
The problem of representative selection amounts to sampling few informative
exemplars from large datasets. This paper presents MOSAIC, a novel
representative selection approach from high-dimensional data that may exhibit
non-linear structures. Resting upon a novel quadratic formulation, our method
advances a multi-criteria selection approach that maximizes the global
representation power of the sampled subset, ensures diversity, and rejects
disruptive information by effectively detecting outliers. Through theoretical
analyses we characterize the obtained sketch and reveal that the sampled
representatives maximize a well-defined notion of data coverage in a
transformed space. In addition, we present a highly scalable randomized
implementation of the proposed algorithm shown to bring about substantial
speedups. MOSAIC's superiority in achieving the desired characteristics of a
representative subset all at once while exhibiting remarkable robustness to
various outlier types is demonstrated via extensive experiments conducted on
both real and synthetic data with comparisons to state-of-the-art algorithms. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
Human players in professional team sports achieve high level coordination by
dynamically choosing complementary skills and executing primitive actions to
perform these skills. As a step toward creating intelligent agents with this
capability for fully cooperative multi-agent settings, we propose a two-level
hierarchical multi-agent reinforcement learning (MARL) algorithm with
unsupervised skill discovery. Agents learn useful and distinct skills at the
low level via independent Q-learning, while they learn to select complementary
latent skill variables at the high level via centralized multi-agent training
with an extrinsic team reward. The set of low-level skills emerges from an
intrinsic reward that solely promotes the decodability of latent skill
variables from the trajectory of a low-level skill, without the need for
hand-crafted rewards for each skill. For scalable decentralized execution, each
agent independently chooses latent skill variables and primitive actions based
on local observations. Our overall method enables the use of general
cooperative MARL algorithms for training high level policies and single-agent
RL for training low level skills. Experiments on a stochastic high dimensional
team game show the emergence of useful skills and cooperative team play. The
interpretability of the learned skills shows the promise of the proposed method
for achieving human-AI cooperation in team sports games. | [
"cs.LG",
"cs.MA",
"stat.ML"
] |
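A minimal PyTorch sketch of the decodability idea described above, under stated assumptions (discrete skills, a GRU trajectory decoder; this is not the authors' implementation): the intrinsic reward for a low-level skill is the log-probability with which its latent skill variable can be decoded from the trajectory it produced.

```python
import torch
import torch.nn as nn

class SkillDecoder(nn.Module):
    """Predicts which latent skill produced a trajectory (illustrative)."""
    def __init__(self, obs_dim, n_skills, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_skills)

    def forward(self, traj):              # traj: (B, T, obs_dim)
        _, h = self.rnn(traj)
        return self.head(h[-1])           # skill logits: (B, n_skills)

def intrinsic_reward(decoder, traj, z):
    """log q(z | trajectory): high when the skill is decodable."""
    logp = torch.log_softmax(decoder(traj), dim=-1)
    return logp.gather(1, z[:, None]).squeeze(1)

decoder = SkillDecoder(obs_dim=8, n_skills=4)
traj = torch.randn(16, 10, 8)             # 16 trajectories of length 10
z = torch.randint(0, 4, (16,))            # skill each trajectory executed
print(intrinsic_reward(decoder, traj, z).shape)  # torch.Size([16])
```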
State-of-the-art approaches for semantic image segmentation are built on
Convolutional Neural Networks (CNNs). The typical segmentation architecture is
composed of (a) a downsampling path responsible for extracting coarse semantic
features, followed by (b) an upsampling path trained to recover the input image
resolution at the output of the model and, optionally, (c) a post-processing
module (e.g. Conditional Random Fields) to refine the model predictions.
Recently, a new CNN architecture, Densely Connected Convolutional Networks
(DenseNets), has shown excellent results on image classification tasks. The
idea of DenseNets is based on the observation that if each layer is directly
connected to every other layer in a feed-forward fashion then the network will
be more accurate and easier to train.
In this paper, we extend DenseNets to deal with the problem of semantic
segmentation. We achieve state-of-the-art results on urban scene benchmark
datasets such as CamVid and Gatech, without any further post-processing module
nor pretraining. Moreover, due to the smart construction of the model, our
approach has far fewer parameters than the currently published best entries for these
datasets.
Code to reproduce the experiments is available here:
https://github.com/SimJeg/FC-DenseNet/blob/master/train.py | [
"cs.CV"
] |
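For reference, a compact PyTorch sketch of the dense block idea the abstract builds on: each layer receives the concatenation of all preceding feature maps. Growth rate and depth here are illustrative choices, not the FC-DenseNet configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=12, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1))
            for i in range(n_layers))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each layer sees every earlier feature map (dense connectivity).
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

y = DenseBlock(in_ch=16)(torch.randn(1, 16, 32, 32))
print(y.shape)  # (1, 16 + 4*12, 32, 32) -> (1, 64, 32, 32)
```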
Varying density of point clouds increases the difficulty of 3D detection. In
this paper, we present a context-aware dynamic network (CADNet) to capture the
variance of density by considering both point context and semantic context.
Point-level contexts are generated from original point clouds to enlarge the
effective receptive field. They are extracted around the voxelized pillars
based on our extended voxelization method and processed with the context
encoder in parallel with the pillar features. With a large perception range, we
are able to capture the variance of features for potential objects and generate
attentive spatial guidance to help adjust the strengths for different regions.
In the region proposal network, considering the limited representation ability
of traditional convolution, where the same kernels are shared among different
samples and positions, we propose a decomposable dynamic convolutional layer to
adapt to the variance of input features by learning from local semantic
context. It adaptively generates the position-dependent coefficients for
multiple fixed kernels and combines them to convolve with local feature
windows. Based on our dynamic convolution, we design a dual-path convolution
block to further improve the representation ability. We conduct experiments
with our network on the KITTI dataset and achieve good performance on the 3D
detection task in terms of both precision and speed. Our one-stage detector
outperforms SECOND and PointPillars by a large margin and runs at 30 FPS. | [
"cs.CV"
] |
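A hedged sketch of the decomposable dynamic convolution idea described above (illustrative only, not the CADNet code): several fixed kernels are shared across all inputs, and a lightweight branch predicts per-position coefficients that mix their responses.

```python
import torch
import torch.nn as nn

class DynamicConv(nn.Module):
    def __init__(self, in_ch, out_ch, K=4):
        super().__init__()
        # K fixed kernels, shared by every sample and position.
        self.kernels = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
            for _ in range(K))
        # Position-dependent mixing coefficients from local features.
        self.coef = nn.Conv2d(in_ch, K, 1)

    def forward(self, x):
        alpha = torch.softmax(self.coef(x), dim=1)        # (B, K, H, W)
        outs = torch.stack([k(x) for k in self.kernels])  # (K, B, C, H, W)
        alpha = alpha.permute(1, 0, 2, 3).unsqueeze(2)    # (K, B, 1, H, W)
        return (alpha * outs).sum(dim=0)                  # (B, C, H, W)

print(DynamicConv(8, 16)(torch.randn(2, 8, 20, 20)).shape)
# torch.Size([2, 16, 20, 20])
```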
Deep networks often make confident, yet incorrect, predictions when tested
with outlier data that is far removed from their training distributions.
Likelihoods computed by deep generative models are a candidate metric for
outlier detection with unlabeled data. Yet, previous studies have shown that
such likelihoods are unreliable and can be easily biased by simple
transformations to input data. Here, we examine outlier detection with
variational autoencoders (VAEs), among the simplest class of deep generative
models. First, we show that a theoretically-grounded correction readily
ameliorates a key bias with VAE likelihood estimates. The bias correction is
model-free, sample-specific, and accurately computed with the Bernoulli and
continuous Bernoulli visible distributions. Second, we show that a well-known
preprocessing technique, contrast normalization, extends the effectiveness of
bias correction to natural image datasets. Third, we show that the variance of
the likelihoods computed over an ensemble of VAEs also enables robust outlier
detection. We perform a comprehensive evaluation of our remedies with nine
(grayscale and natural) image datasets, and demonstrate significant advantages,
in terms of both speed and accuracy, over four other state-of-the-art methods.
Our lightweight remedies are biologically inspired and may serve to achieve
efficient outlier detection with many types of deep generative models. | [
"cs.LG",
"cs.CV",
"I.2.10; I.4.8; I.5.4"
] |
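A minimal sketch of the ensemble remedy mentioned above: score an input by the variance of likelihoods across several independently trained VAEs, so that high disagreement flags an outlier. The `log_likelihood` callable is a stand-in assumption hiding each model's likelihood computation.

```python
import numpy as np

def ensemble_outlier_score(models, x, log_likelihood):
    """models: trained VAEs; log_likelihood(model, x) -> (N,) per-input scores."""
    lls = np.stack([log_likelihood(m, x) for m in models])  # (M, N)
    return lls.var(axis=0)  # high cross-model variance => likely outlier

# Toy usage with placeholder models and random likelihoods:
scores = ensemble_outlier_score(
    [object()] * 5, None, lambda m, x: np.random.randn(10))
print(scores.shape)  # (10,)
```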
Standard model-free deep reinforcement learning (RL) algorithms sample a new
initial state for each trial, allowing them to optimize policies that can
perform well even in highly stochastic environments. However, problems that
exhibit considerable initial state variation typically produce high-variance
gradient estimates for model-free RL, making direct policy or value function
optimization challenging. In this paper, we develop a novel algorithm that
instead partitions the initial state space into "slices", and optimizes an
ensemble of policies, each on a different slice. The ensemble is gradually
unified into a single policy that can succeed on the whole state space. This
approach, which we term divide-and-conquer RL, is able to solve complex tasks
where conventional deep RL methods are ineffective. Our results show that
divide-and-conquer RL greatly outperforms conventional policy gradient methods
on challenging grasping, manipulation, and locomotion tasks, and exceeds the
performance of a variety of prior methods. Videos of policies learned by our
algorithm can be viewed at http://bit.ly/dnc-rl | [
"cs.LG",
"cs.RO"
] |
Current modeling approaches for hydrological modeling often rely on either
physics-based or data-science methods, including Machine Learning (ML)
algorithms. While physics-based models tend to have rigid structures, resulting
in unrealistic parameter values in certain instances, ML algorithms establish the
input-output relationship while ignoring the constraints imposed by well-known
physical processes. While there is a notion that the physics model enables
better process understanding and ML algorithms exhibit better predictive
skills, scientific knowledge that does not add to predictive ability may be
deceptive. Hence, there is a need for a hybrid modeling approach to couple ML
algorithms and physics-based models in a synergistic manner. Here we develop a
Physics Informed Machine Learning (PIML) model that combines the process
understanding of conceptual hydrological model with predictive abilities of
state-of-the-art ML models. We apply the proposed model to predict the monthly
time series of the target (streamflow) and intermediate variables (actual
evapotranspiration) in the Narmada river basin in India. Our results show the
capability of the PIML model to outperform a purely conceptual model ($abcd$
model) and ML algorithms while ensuring the physical consistency in outputs
validated through water balance analysis. The systematic approach for combining
conceptual model structure with ML algorithms could be used to improve the
predictive accuracy of crucial hydrological processes important for flood risk
assessment. | [
"stat.ML",
"cs.LG",
"stat.AP"
] |
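To make the coupling concrete, here is an illustrative physics-informed loss in PyTorch. The physical constraint assumed here is a monthly water balance (precipitation = streamflow + evapotranspiration + storage change); the weighting and the exact balance term are assumptions, not the paper's formulation.

```python
import torch

def piml_loss(q_pred, et_pred, q_obs, et_obs, precip, d_storage, lam=0.1):
    # Data-fit term on streamflow (Q) and actual evapotranspiration (ET).
    mse = torch.mean((q_pred - q_obs) ** 2) + torch.mean((et_pred - et_obs) ** 2)
    # Penalty for violating the assumed water balance P = Q + ET + dS.
    balance = torch.mean((precip - (q_pred + et_pred + d_storage)) ** 2)
    return mse + lam * balance

vals = [torch.rand(12) for _ in range(6)]  # 12 months of toy data
print(piml_loss(*vals).item())
```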
Panoptic segmentation is a complex full scene parsing task requiring
simultaneous instance and semantic segmentation at high resolution. Current
state-of-the-art approaches cannot run in real-time, and simplifying these
architectures to improve efficiency severely degrades their accuracy. In this
paper, we propose a new single-shot panoptic segmentation network that
leverages dense detections and a global self-attention mechanism to operate in
real-time with performance approaching the state of the art. We introduce a
novel parameter-free mask construction method that substantially reduces
computational complexity by efficiently reusing information from the object
detection and semantic segmentation sub-tasks. The resulting network has a
simple data flow that does not require feature map re-sampling or clustering
post-processing, enabling significant hardware acceleration. Our experiments on
the Cityscapes and COCO benchmarks show that our network works at 30 FPS on
1024x2048 resolution, trading a 3% relative performance degradation from the
current state of the art for up to 440% faster inference. | [
"cs.CV",
"cs.LG"
] |
In this paper, we propose a fast and accurate coordinate regression method
for face alignment. Unlike most existing facial landmark regression methods
which usually employ fully connected layers to convert feature maps into
landmark coordinates, we present a structure coherence component to explicitly
take the relation among facial landmarks into account. Due to the geometric
structure of human face, structure coherence between different facial parts
provides important cues for effectively localizing facial landmarks. However,
the dense connection in the fully connected layers overuses such coherence,
making the important cues unable to be distinguished from all connections.
Instead, our structure coherence component leverages a dynamic sparse graph
structure to pass features among the most related landmarks. Furthermore, we
propose a novel objective function, named Soft Wing loss, to improve the
accuracy. Extensive experiments on three popular benchmarks, including WFLW,
COFW and 300W, demonstrate the effectiveness of the proposed method, achieving
state-of-the-art performance with fast speed. Our approach is especially robust
to challenging cases resulting in impressively low failure rate (0% and 2.88%)
on the COFW and WFLW datasets. | [
"cs.CV"
] |
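The abstract does not give the exact form of the Soft Wing loss, so for context here is a hedged sketch of the standard Wing loss it builds on: errors inside a window are amplified logarithmically, while large errors fall back to a shifted L1.

```python
import math
import torch

def wing_loss(pred, target, w=10.0, eps=2.0):
    x = (pred - target).abs()
    c = w - w * math.log(1 + w / eps)  # makes the two pieces meet at |x| = w
    return torch.where(x < w, w * torch.log(1 + x / eps), x - c).mean()

print(wing_loss(torch.randn(5, 2), torch.randn(5, 2)).item())
```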
Usually, Neural Networks models are trained with a large dataset of images in
homogeneous backgrounds. The issue is that the performance of the network
models trained could be significantly degraded in a complex and heterogeneous
environment. To mitigate the issue, this paper develops a framework that
makes it possible to autonomously generate a training dataset in heterogeneous,
cluttered backgrounds. The learning effectiveness of the proposed framework is
expected to improve in complex and heterogeneous environments, compared with
training on a typical dataset. In our framework, a
state-of-the-art image segmentation technique called DeepLab is used to extract
objects of interest from a picture, and the chroma-key technique is then used to
merge the extracted objects of interest into specific heterogeneous
backgrounds. The performance of the proposed framework is investigated through
empirical tests and compared with that of the model trained with the COCO
dataset. The results show that the proposed framework outperforms the compared
model, implying that the learning effectiveness of the developed framework is
superior to that of models trained on the typical dataset. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Transformers have become one of the most important architectural innovations
in deep learning and have enabled many breakthroughs over the past few years.
Here we propose a simple network architecture, gMLP, based on MLPs with gating,
and show that it can perform as well as Transformers in key language and vision
applications. Our comparisons show that self-attention is not critical for
Vision Transformers, as gMLP can achieve the same accuracy. For BERT, our model
achieves parity with Transformers on pretraining perplexity and is better on
some downstream NLP tasks. On finetuning tasks where gMLP performs worse,
making the gMLP model substantially larger can close the gap with Transformers.
In general, our experiments show that gMLP can scale as well as Transformers
over increased data and compute. | [
"cs.LG",
"cs.CL",
"cs.CV"
] |
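A compact sketch of the spatial gating unit at the heart of gMLP (dimensions are illustrative): the channels are split in half, one half is projected along the sequence axis, and the two halves are multiplied element-wise, which substitutes for self-attention.

```python
import torch
import torch.nn as nn

class SpatialGatingUnit(nn.Module):
    def __init__(self, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        self.spatial = nn.Linear(seq_len, seq_len)  # mixes across positions

    def forward(self, x):                 # x: (B, T, d_ffn)
        u, v = x.chunk(2, dim=-1)
        v = self.norm(v).transpose(1, 2)  # (B, d_ffn//2, T)
        v = self.spatial(v).transpose(1, 2)
        return u * v                      # element-wise gating

sgu = SpatialGatingUnit(d_ffn=128, seq_len=16)
print(sgu(torch.randn(2, 16, 128)).shape)  # torch.Size([2, 16, 64])
```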
Tissue microarray (TMA) images have emerged as an important high-throughput
tool for cancer study and the validation of biomarkers. Efforts have been
dedicated to further improve the accuracy of TACOMA, a cutting-edge automatic
scoring algorithm for TMA images. One major advance is due to deepTacoma, an
algorithm that incorporates suitable deep representations of a group nature.
Inspired by the recent advance in semi-supervised learning and deep learning,
we propose mfTacoma to learn alternative deep representations in the context of
TMA image scoring. In particular, mfTacoma learns the low-dimensional
manifolds, a common latent structure in high dimensional data. Deep
representation learning and manifold learning typically require large data. By
encoding deep representation of the manifolds as regularizing features,
mfTacoma effectively leverages the manifold information that is potentially
crude due to small data. Our experiments show that deep features by manifolds
outperform two alternatives -- deep features by linear manifolds with
principal component analysis or by leveraging the group property. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Recent work has shown significant progress in the direction of synthetic data
generation using Generative Adversarial Networks (GANs). GANs have been applied
in many fields of computer vision including text-to-image conversion, domain
transfer, super-resolution, and image-to-video applications. In computer
vision, traditional GANs are based on deep convolutional neural networks.
However, deep convolutional neural networks can require extensive computational
resources because they are based on multiple operations performed by
convolutional layers, which can consist of millions of trainable parameters.
Training a GAN model can be difficult and it takes a significant amount of time
to reach an equilibrium point. In this paper, we investigate the use of
depthwise separable convolutions to reduce training time while maintaining data
generation performance. Our results show that a DepthwiseGAN architecture can
generate realistic images in shorter training periods when compared to a
StarGAN architecture, but that model capacity still plays a significant role in
generative modelling. In addition, we show that depthwise separable
convolutions perform best when only applied to the generator. For quality
evaluation of generated images, we use the Fr\'echet Inception Distance (FID),
which compares the similarity between the generated image distribution and that
of the training dataset. | [
"cs.CV",
"eess.IV"
] |
We propose a representation learning framework for medical diagnosis domain.
It is based on heterogeneous network-based model of diagnostic data as well as
modified metapath2vec algorithm for learning latent node representation. We
compare the proposed algorithm with other representation learning methods in
two practical case studies: symptom/disease classification and disease
prediction. We observe a significant performance boost in these tasks, resulting
from learning representations of domain data in a form of heterogeneous
network. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
Many real-world tasks such as classification of digital histopathology images
and 3D object detection involve learning from a set of instances. In these
cases, only a group of instances or a set, collectively, contains meaningful
information and therefore only the sets have labels, and not individual data
instances. In this work, we present a permutation invariant neural network
called Memory-based Exchangeable Model (MEM) for learning set functions. The
MEM model consists of memory units that embed an input sequence to high-level
features enabling the model to learn inter-dependencies among instances through
a self-attention mechanism. We evaluated the learning ability of MEM on various
toy datasets, point cloud classification, and classification of lung whole
slide images (WSIs) into two subtypes of lung cancer---Lung Adenocarcinoma and
Lung Squamous Cell Carcinoma. We systematically extracted patches from lung
WSIs downloaded from The Cancer Genome Atlas~(TCGA) dataset, the largest public
repository of WSIs, achieving a competitive accuracy of 84.84\% for
classification of two sub-types of lung cancer. The results on other datasets
are promising as well, and demonstrate the efficacy of our model. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Provenance is a record that describes how entities, activities, and agents
have influenced a piece of data; it is commonly represented as graphs with
relevant labels on both their nodes and edges. With the growing adoption of
provenance in a wide range of application domains, users are increasingly
confronted with an abundance of graph data, which may prove challenging to
process. Graph kernels, on the other hand, have been successfully used to
efficiently analyse graphs. In this paper, we introduce a novel graph kernel
called provenance kernel, which is inspired by and tailored for provenance
data. It decomposes a provenance graph into tree-patterns rooted at a given
node and considers the labels of edges and nodes up to a certain distance from
the root. We employ provenance kernels to classify provenance graphs from three
application domains. Our evaluation shows that they perform well in terms of
classification accuracy and yield competitive results when compared against
existing graph kernel methods and the provenance network analytics method,
while being more efficient in computation time. Moreover, the provenance types used by
provenance kernels also help improve the explainability of predictive models
built on them. | [
"cs.LG",
"cs.AI",
"cs.DB",
"I.2.6"
] |
It has been widely recognized that the success of deep learning in image
segmentation relies overwhelmingly on vast amounts of densely annotated
training data, which, however, are difficult to obtain due to the tremendous
labor and expertise required, particularly for annotating 3D medical images.
Although self-supervised learning (SSL) has shown great potential to address
this issue, most SSL approaches focus only on image-level global consistency,
but ignore the local consistency which plays a pivotal role in capturing
structural information for dense prediction tasks such as segmentation. In this
paper, we propose a Prior-Guided Local (PGL) self-supervised model that learns
the region-wise local consistency in the latent feature space. Specifically, we
use the spatial transformations, which produce different augmented views of the
same image, as a prior to deduce the location relation between two views, which
is then used to align the feature maps of the same local region extracted
from the two views. Next, we construct a local consistency loss to minimize
the voxel-wise discrepancy between the aligned feature maps. Thus, our PGL
model learns the distinctive representations of local regions, and hence is
able to retain structural information. This ability is conducive to downstream
segmentation tasks. We conducted an extensive evaluation on four public
computerized tomography (CT) datasets that cover 11 kinds of major human organs
and two tumors. The results indicate that using pre-trained PGL model to
initialize a downstream network leads to a substantial performance improvement
over both random initialization and the initialization with global
consistency-based models. Code and pre-trained weights will be made available
at: https://git.io/PGL. | [
"cs.CV"
] |
Recently, state-of-the-art results have been achieved in semantic
segmentation using fully convolutional networks (FCNs). Most of these networks
employ encoder-decoder style architecture similar to U-Net and are trained with
images and the corresponding segmentation maps as a pixel-wise classification
task. Such frameworks only exploit class information by using the ground truth
segmentation maps. In this paper, we propose a multi-task learning framework
with the main aim of exploiting structural and spatial information along with
the class information. We modify the decoder part of the FCN to exploit class
information and the structural information as well. We intend to do this while
also keeping the parameters of the network as low as possible. We obtain the
structural information using either of the two ways: i) using the contour map
and ii) using the distance map, both of which can be obtained from ground truth
segmentation maps with no additional annotation costs. We also explore
different ways in which distance maps can be computed and study the effects of
different distance maps on the segmentation performance. We also experiment
extensively on two different medical image segmentation applications: i)
using color fundus images for optic disc and cup segmentation and ii) using
endoscopic images for polyp segmentation. Through our experiments, we report
results comparable to, and in some cases better than, the current
state-of-the-art architectures, with roughly a 2x reduction in the number of
parameters. | [
"cs.CV"
] |
We tackle the challenge of disentangled representation learning in generative
adversarial networks (GANs) from the perspective of regularized optimal
transport (OT). Specifically, a smoothed OT loss gives rise to an implicit
transportation plan between the latent space and the data space. Based on this
theoretical observation, we exploit a structured regularization on the
transportation plan to encourage a prescribed latent subspace to be
informative. This yields the formulation of a novel informative OT-based GAN.
By convex duality, we obtain the equivalent view that this leads to perturbed
ground costs favoring sparsity in the informative latent dimensions.
Practically, we devise a stable training algorithm for the proposed informative
GAN. Our experiments support the hypothesis that such regularizations
effectively yield the discovery of disentangled and interpretable latent
representations. Our work showcases potential power of a regularized OT
framework in the context of generative modeling through its access to the
transport plan. Further challenges along this line of work are also discussed. | [
"cs.LG",
"stat.ML"
] |
Kendall transformation is a conversion of an ordered feature into a vector of
pairwise order relations between individual values. This way, it preserves
ranking of observations and represents it in a categorical form.
Such a transformation allows for the generalisation of methods requiring strictly
categorical input, especially in the limit of small number of observations,
when discretisation becomes problematic. In particular, many approaches of
information theory can be directly applied to Kendall-transformed continuous
data without relying on differential entropy or any additional parameters.
Moreover, by restricting information to that contained in the ranking, Kendall
transformation leads to better robustness at a reasonable cost of dropping
sophisticated interactions which are anyhow unlikely to be correctly estimated.
In bivariate analysis, Kendall transformation can be related to popular
non-parametric methods, showing the soundness of the approach. The paper also
demonstrates its efficiency in multivariate problems, as well as provides an
example analysis of real-world data. | [
"cs.LG",
"stat.ML"
] |
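The transformation itself is easy to state in code; a direct sketch: a length-n feature becomes the vector of n(n-1) pairwise order relations between its values, keeping only the ranking.

```python
import itertools

def kendall_transform(x):
    """Map a numeric vector to its pairwise order relations ('<', '>', '=')."""
    rel = lambda a, b: '<' if a < b else ('>' if a > b else '=')
    return [rel(x[i], x[j])
            for i, j in itertools.permutations(range(len(x)), 2)]

print(kendall_transform([0.3, 1.7, 1.7, -2.0]))
# ['<', '<', '>', '>', '=', '>', '>', '=', '>', '<', '<', '<']
```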
The lack of interpretability is an inevitable problem when using neural
network models in real applications. In this paper, an explainable neural
network based on generalized additive models with structured interactions
(GAMI-Net) is proposed to pursue a good balance between prediction accuracy and
model interpretability. GAMI-Net is a disentangled feedforward network with
multiple additive subnetworks; each subnetwork consists of multiple hidden
layers and is designed for capturing one main effect or one pairwise
interaction. Three interpretability aspects are further considered, including
a) sparsity, to select the most significant effects for parsimonious
representations; b) heredity, a pairwise interaction could only be included
when at least one of its parent main effects exists; and c) marginal clarity,
to make main effects and pairwise interactions mutually distinguishable. An
adaptive training algorithm is developed, where main effects are first trained
and then pairwise interactions are fitted to the residuals. Numerical
experiments on both synthetic functions and real-world datasets show that the
proposed model enjoys superior interpretability and it maintains competitive
prediction accuracy in comparison to the explainable boosting machine and other
classic machine learning models. | [
"stat.ML",
"cs.LG",
"stat.CO"
] |
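A structural sketch of such an additive network (simplified: main effects plus a single illustrative pairwise interaction; the sparsity, heredity, and marginal-clarity constraints described above are omitted).

```python
import torch
import torch.nn as nn

def subnet(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, 1))

class AdditiveNet(nn.Module):
    def __init__(self, n_features, pairs=((0, 1),)):
        super().__init__()
        # One subnetwork per main effect, one per pairwise interaction.
        self.mains = nn.ModuleList(subnet(1) for _ in range(n_features))
        self.pairs = pairs
        self.inters = nn.ModuleList(subnet(2) for _ in pairs)

    def forward(self, x):  # x: (B, n_features)
        out = sum(f(x[:, [j]]) for j, f in enumerate(self.mains))
        out = out + sum(g(x[:, list(p)])
                        for p, g in zip(self.pairs, self.inters))
        return out.squeeze(-1)

print(AdditiveNet(n_features=4)(torch.randn(8, 4)).shape)  # torch.Size([8])
```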
Deep learning is attracting significant interest in the neuroimaging
community as a means to diagnose psychiatric and neurological disorders from
structural magnetic resonance images. However, there is a tendency amongst
researchers to adopt architectures optimized for traditional computer vision
tasks, rather than design networks customized for neuroimaging data. We address
this by introducing NEURO-DRAM, a 3D recurrent visual attention model tailored
for neuroimaging classification. The model comprises an agent which, trained by
reinforcement learning, learns to navigate through volumetric images,
selectively attending to the most informative regions for a given task. When
applied to Alzheimer's disease prediction, NEURO-DRAM achieves state-of-the-art
classification accuracy on an out-of-sample dataset, significantly
outperforming a baseline convolutional neural network. When further applied to
the task of predicting which patients with mild cognitive impairment will be
diagnosed with Alzheimer's disease within two years, the model achieves
state-of-the-art accuracy with no additional training. Encouragingly, the agent
learns, without explicit instruction, a search policy in agreement with
standardized radiological hallmarks of Alzheimer's disease, suggesting a route
to automated biomarker discovery for more poorly understood disorders. | [
"cs.LG",
"stat.ML"
] |
While convolutional neural networks (CNNs) trained by back-propagation have
seen unprecedented success at semantic segmentation tasks, they are known to
struggle on out-of-distribution data. Markov random fields (MRFs) on the other
hand, encode simpler distributions over labels that, although less flexible
than UNets, are less prone to over-fitting. In this paper, we propose to fuse
both strategies by computing the product of distributions of a UNet and an MRF.
As this product is intractable, we solve for an approximate distribution using
an iterative mean-field approach. The resulting MRF-UNet is trained jointly by
back-propagation. Compared to other works using conditional random fields
(CRFs), the MRF has no dependency on the imaging data, which should allow for
less over-fitting. We show on 3D neuroimaging data that this novel network
improves generalisation to out-of-distribution samples. Furthermore, it allows
the overall number of parameters to be reduced while preserving high accuracy.
These results suggest that a classic MRF smoothness prior can allow for less
over-fitting when principally integrated into a CNN model. Our implementation
is available at https://github.com/balbasty/nitorch. | [
"cs.CV",
"eess.IV"
] |
An important goal in deep learning is to learn versatile, high-level feature
representations of input data. However, standard networks' representations seem
to possess shortcomings that, as we illustrate, prevent them from fully
realizing this goal. In this work, we show that robust optimization can be
re-cast as a tool for enforcing priors on the features learned by deep neural
networks. It turns out that representations learned by robust models address
the aforementioned shortcomings and make significant progress towards learning
a high-level encoding of inputs. In particular, these representations are
approximately invertible, while allowing for direct visualization and
manipulation of salient input features. More broadly, our results indicate
adversarial robustness as a promising avenue for improving learned
representations. Our code and models for reproducing these results are available
at https://git.io/robust-reps. | [
"stat.ML",
"cs.CV",
"cs.LG",
"cs.NE"
] |
In real-world maintenance applications, deep generative models have shown
promising performance in detecting anomalous events of entities from
time-series signals collected from multiple sensors. Nevertheless, we outline
two important challenges of leveraging such models for time-series anomaly
detection: 1) developing effective and efficient reconstruction models and 2)
exploiting the similarity and interrelation structures among the multivariate
time series data channels. To address these challenges, in this paper we
propose a stacking variational auto-encoder (VAE) model with graph neural
networks for the effective and interpretable time-series anomaly detection.
Specifically, we propose a stacking block-wise reconstruction framework with a
weight-sharing scheme for the multivariate time series data with similarities
among channels. Moreover, with a graph learning module, our model learns a
sparse adjacency matrix to explicitly capture the stable interrelation
structure information among multiple time series data channels for
interpretable reconstruction of series patterns. Experimental results show that
our proposed model outperforms the strong baselines on three public datasets
with considerable improvements and meanwhile still maintains the training
efficiency. Furthermore, we demonstrate that the intuitive stable structure
learned by our model significantly improves the interpretability of our
detection results. | [
"cs.LG"
] |
The control of nonlinear dynamical systems remains a major challenge for
autonomous agents. Current trends in reinforcement learning (RL) focus on
complex representations of dynamics and policies, which have yielded impressive
results in solving a variety of hard control tasks. However, this new
sophistication and extremely over-parameterized models have come with the cost
of an overall reduction in our ability to interpret the resulting policies. In
this paper, we take inspiration from the control community and apply the
principles of hybrid switching systems in order to break down complex dynamics
into simpler components. We exploit the rich representational power of
probabilistic graphical models and derive an expectation-maximization (EM)
algorithm for learning a sequence model to capture the temporal structure of
the data and automatically decompose nonlinear dynamics into stochastic
switching linear dynamical systems. Moreover, we show how this framework of
switching models enables extracting hierarchies of Markovian and
auto-regressive locally linear controllers from nonlinear experts in an
imitation learning scenario. | [
"cs.LG",
"stat.ML"
] |
Motion estimation across low-resolution frames and the reconstruction of
high-resolution images are two coupled subproblems of multi-frame
super-resolution. This paper introduces a new joint optimization approach for
motion estimation and image reconstruction to address this interdependence. Our
method is formulated via non-linear least squares optimization and combines two
principles of robust super-resolution. First, to enhance the robustness of the
joint estimation, we propose a confidence-aware energy minimization framework
augmented with sparse regularization. Second, we develop a tailor-made
Levenberg-Marquardt iteration scheme to jointly estimate motion parameters and
the high-resolution image along with the corresponding model confidence
parameters. Our experiments on simulated and real images confirm that the
proposed approach outperforms decoupled motion estimation and image
reconstruction as well as related state-of-the-art joint estimation algorithms. | [
"cs.CV"
] |
In this paper, we tackle a fully unsupervised super-resolution problem, i.e.,
neither paired images nor ground truth HR images. We assume that low resolution
(LR) images are relatively easy to collect compared to high resolution (HR)
images. By allowing multiple LR images, we build a set of pseudo pairs by
denoising and downsampling LR images and cast the original unsupervised problem
into a supervised learning problem, but one level lower. Though this line of
study is straightforward and thus should have been investigated before any
complicated unsupervised method, surprisingly, no such study exists. Even
more, we show that this simple method outperforms the state-of-the-art
unsupervised method with a dramatically shorter latency at runtime, and
significantly reduces the gap to the HR supervised models. We submitted our
method to the NTIRE 2020 super-resolution challenge and won 1st place in PSNR, 2nd in
SSIM, and 13th in LPIPS. This simple method should be used as the baseline to
beat in the future, especially when multiple LR images are allowed during the
training phase. However, even in the zero-shot condition, we argue that this
method can serve as a useful baseline to see the gap between supervised and
unsupervised frameworks. | [
"cs.CV"
] |
In recent years, the idea of using morphological operations as networks has
received much attention. Mathematical morphology provides very efficient and
useful image processing and image analysis tools based on basic operators like
dilation and erosion, defined in terms of kernels. Many other morphological
operations are built up using the dilation and erosion operations. Although the
learning of structuring elements such as dilation or erosion using the
backpropagation algorithm is not new, the order and the way these morphological
operations are used is not standard. In this paper, we have theoretically
analyzed the use of morphological operations for processing 1D feature vectors
and shown that this gets extended to the 2D case in a simple manner. Our
theoretical results show that a morphological block represents a sum of hinge
functions. Hinge functions are used in many places for classification and
regression tasks (Breiman (1993)). We have also proved a universal
approximation theorem -- a stack of two morphological blocks can approximate
any continuous function over arbitrary compact sets. To experimentally validate
the efficacy of this network in real-life applications, we have evaluated its
performance on satellite image classification datasets since morphological
operations are very sensitive to geometrical shapes and structures. We have
also shown results on a few tasks like segmentation of blood vessels from
fundus images, segmentation of lungs from chest X-rays, and image dehazing. The
results are encouraging and further establish the potential of morphological
networks. | [
"cs.LG",
"cs.CV",
"cs.NE",
"stat.ML"
] |
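A small sketch of a learnable 1D morphological dilation layer of the kind analyzed above; the max of sums below is exactly the hinge-like form the theory refers to (sizes are illustrative).

```python
import torch
import torch.nn as nn

class Dilation1D(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)

    def forward(self, x):  # x: (B, in_dim)
        # Dilation: y_k = max_i (x_i + w_{k,i}); erosion would use min(x - w).
        return (x.unsqueeze(1) + self.w).max(dim=-1).values

print(Dilation1D(8, 4)(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```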
The widespread use of automated decision processes in many areas of our
society raises serious ethical issues concerning the fairness of the process
and the possible resulting discriminations. In this work, we propose a novel
approach called GANsan whose objective is to prevent the possibility of any
discrimination (i.e., direct and indirect) based on a sensitive attribute by
removing the attribute itself as well as the existing correlations with the
remaining attributes. Our sanitization algorithm GANsan is partially inspired
by the powerful framework of generative adversarial networks (in particular the
Cycle-GANs), which offers a flexible way to learn a distribution empirically or
to translate between two different distributions.
In contrast to prior work, one of the strengths of our approach is that the
sanitization is performed in the same space as the original data by only
modifying the other attributes as little as possible and thus preserving the
interpretability of the sanitized data. As a consequence, once the sanitizer is
trained, it can be applied to new data, for instance locally by an individual
on their profile before releasing it. Finally, experiments on a real
dataset demonstrate the effectiveness of the proposed approach as well as the
achievable trade-off between fairness and utility. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
Inspired by the tremendous success of deep Convolutional Neural Networks as
generic feature extractors for images, we propose TimeNet: a deep recurrent
neural network (RNN) trained on diverse time series in an unsupervised manner
using sequence to sequence (seq2seq) models to extract features from time
series. Rather than relying on data from the problem domain, TimeNet attempts
to generalize time series representation across domains by ingesting time
series from several domains simultaneously. Once trained, TimeNet can be used
as a generic off-the-shelf feature extractor for time series. The
representations or embeddings given by a pre-trained TimeNet are found to be
useful for time series classification (TSC). For several publicly available
datasets from UCR TSC Archive and an industrial telematics sensor data from
vehicles, we observe that a classifier learned over the TimeNet embeddings
yields significantly better performance compared to (i) a classifier learned
over the embeddings given by a domain-specific RNN, as well as (ii) a nearest
neighbor classifier based on Dynamic Time Warping. | [
"cs.LG"
] |
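A minimal sketch of a seq2seq autoencoder in the TimeNet spirit (single-layer GRUs; the multi-domain training loop and exact architecture are assumptions): the final encoder state serves as the fixed-dimensional embedding reused for downstream classification.

```python
import torch
import torch.nn as nn

class Seq2SeqAE(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.enc = nn.GRU(n_features, hidden, batch_first=True)
        self.dec = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):          # x: (B, T, n_features)
        _, h = self.enc(x)         # h: (1, B, hidden) -- the embedding
        y, _ = self.dec(torch.zeros_like(x), h)  # decode from the embedding
        return self.out(y), h.squeeze(0)

recon, emb = Seq2SeqAE()(torch.randn(4, 50, 1))
print(recon.shape, emb.shape)  # torch.Size([4, 50, 1]) torch.Size([4, 32])
```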
On-device learning enables edge devices to continually adapt the AI models to
new data, which requires a small memory footprint to fit the tight memory
constraint of edge devices. Existing work solves this problem by reducing the
number of trainable parameters. However, this does not directly translate into
memory savings, since the major bottleneck is the activations, not the
parameters. In this work, we present Tiny-Transfer-Learning (TinyTL) for
memory-efficient on-device learning. TinyTL freezes the weights and learns only
the bias modules, so there is no need to store the intermediate activations. To
maintain the
adaptation capacity, we introduce a new memory-efficient bias module, the lite
residual module, to refine the feature extractor by learning small residual
feature maps adding only 3.8% memory overhead. Extensive experiments show that
TinyTL significantly saves the memory (up to 6.5x) with little accuracy loss
compared to fine-tuning the full network. Compared to fine-tuning the last
layer, TinyTL provides significant accuracy improvements (up to 34.1%) with
little memory overhead. Furthermore, combined with feature extractor
adaptation, TinyTL provides 7.3-12.9x memory saving without sacrificing
accuracy compared to fine-tuning the full Inception-V3. | [
"cs.CV",
"cs.LG"
] |
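The freeze-weights/train-biases idea is simple to express; a framework-level sketch (not the released TinyTL code, and omitting the lite residual module):

```python
import torch.nn as nn

def freeze_all_but_bias(model: nn.Module):
    """Leave only bias parameters trainable; weight gradients (and hence the
    activations needed to compute them) are no longer required."""
    for name, p in model.named_parameters():
        p.requires_grad = name.endswith("bias")

net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
freeze_all_but_bias(net)
print([n for n, p in net.named_parameters() if p.requires_grad])
# ['0.bias', '2.bias']
```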
Face recognition has been of great importance in many applications as a
biometric for its throughput, convenience, and non-invasiveness. Recent
advancements in deep Convolutional Neural Network (CNN) architectures have
boosted significantly the performance of face recognition based on
two-dimensional (2D) facial texture images and outperformed the previous state
of the art using conventional methods. However, the accuracy of 2D face
recognition is still challenged by the change of pose, illumination, make-up,
and expression. On the other hand, the geometric information contained in
three-dimensional (3D) face data has the potential to overcome the fundamental
limitations of 2D face data.
We propose a multi-channel deep 3D face network for face recognition based on
3D face data. We compute the geometric information of a 3D face based on its
piecewise-linear triangular mesh structure and then conformally flatten
geometric information along with the color from 3D to 2D plane to leverage the
state-of-the-art deep CNN architectures. We modify the input layer of the
network to take images with nine channels instead of only three, so that more
geometric information can be explicitly fed to it. We pre-train the network
using images from the VGG-Face \cite{Parkhi2015} and then fine-tune it with the
generated multi-channel face images. The face recognition accuracy of the
multi-channel deep 3D face network has achieved 98.6%. The experimental results
also clearly show that the network performs much better when a 9-channel image
is flattened to plane based on the conformal map compared with the orthographic
projection. | [
"cs.CV",
"cs.CG"
] |
We introduce Performers, Transformer architectures which can estimate regular
(softmax) full-rank-attention Transformers with provable accuracy, but using
only linear (as opposed to quadratic) space and time complexity, without
relying on any priors such as sparsity or low-rankness. To approximate softmax
attention-kernels, Performers use a novel Fast Attention Via positive
Orthogonal Random features approach (FAVOR+), which may be of independent
interest for scalable kernel methods. FAVOR+ can be also used to efficiently
model kernelizable attention mechanisms beyond softmax. This representational
power is crucial to accurately compare softmax with other kernels for the first
time on large-scale tasks, beyond the reach of regular Transformers, and
investigate optimal attention-kernels. Performers are linear architectures
fully compatible with regular Transformers and with strong theoretical
guarantees: unbiased or nearly-unbiased estimation of the attention matrix,
uniform convergence and low estimation variance. We tested Performers on a rich
set of tasks stretching from pixel-prediction through text models to protein
sequence modeling. We demonstrate competitive results with other examined
efficient sparse and dense attention methods, showcasing effectiveness of the
novel attention-learning paradigm leveraged by Performers. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
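For intuition, a compact numpy sketch of positive random features for the softmax kernel, the core identity behind FAVOR+ (plain Gaussian features here; the paper's orthogonal features and the full attention pipeline are omitted).

```python
import numpy as np

def positive_features(x, W):
    """phi(x) such that E[phi(q) @ phi(k)] = exp(q . k), with W ~ N(0, I)."""
    m = W.shape[0]
    return np.exp(x @ W.T - 0.5 * np.sum(x**2, -1, keepdims=True)) / np.sqrt(m)

rng = np.random.default_rng(0)
d, m = 8, 4096
q = rng.normal(size=d) / d**0.25
k = rng.normal(size=d) / d**0.25
W = rng.normal(size=(m, d))
approx = positive_features(q[None], W) @ positive_features(k[None], W).T
print(approx.item(), np.exp(q @ k))  # the two values should be close
```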
In this paper we explore two ways of using context for object detection. The
first model focuses on people and the objects they commonly interact with,
such as fashion and sports accessories. The second model considers more general
object detection and uses the spatial relationships between objects and between
objects and scenes. Our models are able to capture precise spatial
relationships between the context and the object of interest, and make
effective use of the appearance of the contextual region. On the newly released
COCO dataset, our models provide relative improvements of up to 5% over
CNN-based state-of-the-art detectors, with the gains concentrated on hard cases
such as small objects (10% relative improvement). | [
"cs.CV"
] |
Human-Object Interaction (HOI) detection is important to human-centric scene
understanding tasks. Existing works tend to assume that the same verb has
similar visual characteristics in different HOI categories, an approach that
ignores the diverse semantic meanings of the verb. To address this issue, in
this paper, we propose a novel Polysemy Deciphering Network (PD-Net) that
decodes the visual polysemy of verbs for HOI detection in three distinct ways.
First, we refine features for HOI detection to be polysemy-aware through the use
of two novel modules: namely, Language Prior-guided Channel Attention (LPCA)
and Language Prior-based Feature Augmentation (LPFA). LPCA highlights important
elements in human and object appearance features for each HOI category to be
identified; moreover, LPFA augments human pose and spatial features for HOI
detection using language priors, enabling the verb classifiers to receive
language hints that reduce intra-class variation for the same verb. Second, we
introduce a novel Polysemy-Aware Modal Fusion module (PAMF), which guides
PD-Net to make decisions based on feature types deemed more important according
to the language priors. Third, we propose to relieve the verb polysemy problem
through sharing verb classifiers for semantically similar HOI categories.
Furthermore, to expedite research on the verb polysemy problem, we build a new
benchmark dataset named HOI-VerbPolysemy (HOI-VP), which includes common verbs
(predicates) that have diverse semantic meanings in the real world. Finally,
through deciphering the visual polysemy of verbs, our approach is demonstrated
to outperform state-of-the-art methods by significant margins on the HICO-DET,
V-COCO, and HOI-VP databases. Code and data in this paper are available at
https://github.com/MuchHair/PD-Net. | [
"cs.CV"
] |
Advances in visual tracking have continuously been driven by deep
learning models. Typically, supervised learning is employed to train these
models with expensive labeled data. In order to reduce the workload of manual
annotations and learn to track arbitrary objects, we propose an unsupervised
learning method for visual tracking. The motivation of our unsupervised
learning is that a robust tracker should be effective in bidirectional
tracking. Specifically, the tracker is able to forward localize a target object
in successive frames and backtrace to its initial position in the first frame.
Based on such a motivation, in the training process, we measure the consistency
between forward and backward trajectories to learn a robust tracker from
scratch merely using unlabeled videos. We build our framework on a Siamese
correlation filter network, and propose a multi-frame validation scheme and a
cost-sensitive loss to facilitate unsupervised learning. Without bells and
whistles, the proposed unsupervised tracker achieves accuracy comparable to
classic fully supervised trackers while running at real-time speed.
Furthermore, our unsupervised framework exhibits the potential to leverage more
unlabeled or weakly labeled data to further improve the tracking accuracy. | [
"cs.CV"
] |
Mesh is a powerful data structure for 3D shapes. Representation learning for
3D meshes is important in many computer vision and graphics applications. The
recent success of convolutional neural networks (CNNs) for structured data
(e.g., images) suggests the value of adapting insight from CNN for 3D shapes.
However, 3D shape data are irregular since each node's neighbors are unordered.
Various graph neural networks for 3D shapes have been developed with isotropic
filters or predefined local coordinate systems to overcome the node
inconsistency on graphs. However, isotropic filters or predefined local
coordinate systems limit the representation power. In this paper, we propose a
local structure-aware anisotropic convolutional operation (LSA-Conv) that
learns adaptive weighting matrices for each node according to the local
neighboring structure and performs shared anisotropic filters. In fact, the
learnable weighting matrix is similar to the attention matrix in the random
synthesizer -- a new Transformer model for natural language processing (NLP).
Comprehensive experiments demonstrate that our model produces significant
improvement in 3D shape reconstruction compared to state-of-the-art methods. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Assigning meaning to parts of image data is the goal of semantic image
segmentation. Machine learning methods, specifically supervised learning is
commonly used in a variety of tasks formulated as semantic segmentation. One of
the major challenges in the supervised learning approaches is expressing and
collecting the rich knowledge that experts have with respect to the meaning
present in the image data. Towards this, typically a fixed set of labels is
specified and experts are tasked with annotating the pixels, patches or
segments in the images with the given labels. In general, however, the set of
classes does not fully capture the rich semantic information present in the
images. For example, in medical imaging such as histology images, the different
parts of cells could be grouped and sub-grouped based on the expertise of the
pathologist.
To achieve such a precise semantic representation of the concepts in the
image, we need access to the full depth of knowledge of the annotator. In this
work, we develop a novel approach to collect segmentation annotations from
experts based on psychometric testing. Our method consists of the psychometric
testing procedure, active query selection, query enhancement, and a deep metric
learning model to achieve a patch-level image embedding that allows for
semantic segmentation of images. We show the merits of our method with
evaluations on synthetically generated images, aerial images, and histology
images. | [
"cs.CV",
"cs.AI"
] |
Human-object interaction (HOI) detection is an important task for
understanding human activity. Graph structure is appropriate to denote the HOIs
in the scene. Since there is a subordination between human and object---humans
play the subjective role and objects play the objective role in HOI---the
relations between homogeneous entities and heterogeneous entities in the scene
should not be treated identically. However, previous graph models regard humans
and objects as the same kind of nodes and do not consider that messages between
different kinds of entities should differ. In this work, we address such a
problem for HOI task by proposing a heterogeneous graph network that models
humans and objects as different kinds of nodes and incorporates intra-class
messages between homogeneous nodes and inter-class messages between
heterogeneous nodes. In addition, a graph attention mechanism based on the
intra-class context and inter-class context is exploited to improve the
learning. Extensive experiments on the benchmark datasets V-COCO and HICO-DET
demonstrate that the intra-class and inter-class messages are very important in
HOI detection and verify the effectiveness of our method. | [
"cs.CV"
] |
Recently, deep-learning based approaches have achieved impressive performance
for autonomous driving. However, end-to-end vision-based methods typically have
limited interpretability, making the behaviors of the deep networks difficult
to explain. Hence, their potential applications could be limited in practice.
To address this problem, we propose an interpretable end-to-end vision-based
motion planning approach for autonomous driving, referred to as IVMP. Given a
set of past surrounding-view images, our IVMP first predicts future egocentric
semantic maps in bird's-eye-view space, which are then employed to plan
trajectories for self-driving vehicles. The predicted future semantic maps not
only provide useful interpretable information, but also allow our motion
planning module to handle objects with low probability, thus improving the
safety of autonomous driving. Moreover, we also develop an optical flow
distillation paradigm, which can effectively enhance the network while still
maintaining its real-time performance. Extensive experiments on the nuScenes
dataset and closed-loop simulation show that our IVMP significantly outperforms
the state-of-the-art approaches in imitating human drivers with a much higher
success rate. Our project page is available at
https://sites.google.com/view/ivmp. | [
"cs.CV",
"cs.RO"
] |
Deep learning is a feature learning method with strong nonlinear feature
transformation and is becoming increasingly important in many fields of
artificial intelligence. The deep autoencoder is one representative deep
learning method and can effectively extract abstract information from datasets.
However, it does not consider the complementarity between the deep features and
the original features during deep feature transformation, and it suffers from
the small-sample problem. To solve these problems, a novel deep autoencoder,
the hybrid feature embedded stacked sparse autoencoder (HFESAE), is proposed in
this paper. HFESAE is capable of learning discriminant deep features with the
help of embedding original features to filter weak hidden-layer outputs during
training. Because the class representation ability of the abstract information
is limited by the small-sample problem, a feature fusion strategy has been
designed that combines the abstract information learned by HFESAE with the
original features to obtain hybrid features for feature reduction. The strategy
is a hybrid feature selection strategy based on L1 regularization followed by a
support vector machine (SVM) ensemble model, in which a weighted local
discriminant preservation projection (w_LPPD) is designed and employed on each
base classifier. At the end of this paper, several
representative public datasets are used to verify the effectiveness of the
proposed algorithm. The experimental results demonstrate that the proposed
feature learning method yields superior performance compared to other existing
and state-of-the-art feature learning algorithms, including some representative deep
autoencoder methods. | [
"cs.LG"
] |
Unsupervised image-to-image translation consists of learning a pair of
mappings between two domains without known pairwise correspondences between
points. The current convention is to approach this task with cycle-consistent
GANs: using a discriminator to encourage the generator to change the image to
match the target domain, while training the generator to be inverted with
another mapping. While ending up with paired inverse functions may be a good
end result, enforcing this restriction at all times during training can be a
hindrance to effective modeling. We propose an alternate approach that directly
restricts the generator to performing a simple sparse transformation in a
latent layer, motivated by recent work from cognitive neuroscience suggesting
an architectural prior on representations corresponding to consciousness. Our
biologically motivated approach leads to representations more amenable to
transformation by disentangling high-level abstract concepts in the latent
space. We demonstrate that image-to-image domain translation with many
different domains can be learned more effectively with our architecturally
constrained, simple transformation than with previous unconstrained
architectures that rely on a cycle-consistency loss. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Vision transformers have attracted much attention from computer vision
researchers as they are not restricted to the spatial inductive bias of
ConvNets. However, although Transformer-based backbones have achieved much
progress on ImageNet classification, it is still unclear whether the learned
representations are as transferable as or even more transferable than ConvNets'
features. To address this point, we systematically investigate the transfer
learning ability of ConvNets and vision transformers in 15 single-task and
multi-task performance evaluations. Given the strong correlation between the
performance of pre-trained models and transfer learning, we include 2 residual
ConvNets (i.e., R-101x3 and R-152x4) and 3 Transformer-based visual backbones
(i.e., ViT-B, ViT-L and Swin-B), which have close error rates on ImageNet and
thus would be expected to show similar transfer learning performance on
downstream datasets.
We observe consistent advantages of Transformer-based backbones on 13
downstream tasks (out of 15), including but not limited to fine-grained
classification, scene recognition (classification, segmentation and depth
estimation), open-domain classification, face recognition, etc. More
specifically, we find that two ViT models heavily rely on whole network
fine-tuning to achieve performance gains while Swin Transformer does not have
such a requirement. Moreover, vision transformers behave more robustly in
multi-task learning, i.e., bringing more improvements when managing mutually
beneficial tasks and reducing performance losses when tackling irrelevant
tasks. We hope our discoveries can facilitate the exploration and exploitation
of vision transformers in the future. | [
"cs.CV"
] |
In this paper we investigate the use of model-based reinforcement learning to
assist people with Type 1 Diabetes with insulin dose decisions. The proposed
architecture consists of multiple Echo State Networks to predict blood glucose
levels, combined with a Model Predictive Controller for planning. The Echo
State Network is a variant of recurrent neural networks that allows us to learn
long-term dependencies in time series data in an online manner.
Additionally, we address the quantification of uncertainty for a more robust
control. Here, we used ensembles of Echo State Networks to capture model
(epistemic) uncertainty. We evaluated the approach with the FDA-approved
UVa/Padova Type 1 Diabetes simulator and compared the results against baseline
algorithms such as Basal-Bolus controller and Deep Q-learning. The results
suggest that the model-based reinforcement learning algorithm can perform
as well as or better than the baseline algorithms for the majority of virtual Type
1 Diabetes person profiles tested. | [
"cs.LG"
] |
Advances in face rotation, along with other face-based generative tasks, are
more frequent as we advance further in topics of deep learning. Even as
impressive milestones are achieved in synthesizing faces, the importance of
preserving identity is needed in practice and should not be overlooked. Also,
the difficulty should not be more for data with obscured faces, heavier poses,
and lower quality. Existing methods tend to focus on samples with variation in
pose, but with the assumption data is high in quality. We propose a generative
adversarial network (GAN) -based model to generate high-quality, identity
preserving frontal faces from one or multiple low-resolution (LR) faces with
extreme poses. Specifically, we propose SuperFront-GAN (SF-GAN) to synthesize a
high-resolution (HR), frontal face from one-to-many LR faces with various
poses while preserving identity. We integrate a super-resolution (SR)
side-view module into SF-GAN to preserve identity information and the fine
details of the side views in HR space, which helps the model reconstruct
high-frequency facial information (i.e., the periocular, nose, and mouth
regions). Moreover, SF-GAN accepts multiple LR faces as input and improves
with each added sample. We
squeeze additional gain in performance with an orthogonal constraint in the
generator to penalize redundant latent representations and, hence, diversify
the learned feature space. Quantitative and qualitative results demonstrate
the superiority of SF-GAN over others. | [
"cs.CV"
] |
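The orthogonal constraint mentioned above is often realized as a soft penalty of the form $\|WW^T - I\|_F^2$ on a weight or latent matrix. The sketch below shows this generic regularizer; it illustrates the technique but may differ from the paper's exact formulation.

```python
import torch

def orthogonality_penalty(W):
    # Soft orthogonality regularizer ||W W^T - I||_F^2: pushing the rows of W
    # toward orthonormality penalizes redundant (correlated) representations.
    WWt = W @ W.t()
    I = torch.eye(W.size(0), device=W.device)
    return ((WWt - I) ** 2).sum()
```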
We present ViLBERT (short for Vision-and-Language BERT), a model for learning
task-agnostic joint representations of image content and natural language. We
extend the popular BERT architecture to a multi-modal two-stream model,
processing both visual and textual inputs in separate streams that interact
through co-attentional transformer layers. We pretrain our model through two
proxy tasks on the large, automatically collected Conceptual Captions dataset
and then transfer it to multiple established vision-and-language tasks --
visual question answering, visual commonsense reasoning, referring expressions,
and caption-based image retrieval -- by making only minor additions to the base
architecture. We observe significant improvements across tasks compared to
existing task-specific models -- achieving state-of-the-art on all four tasks.
Our work represents a shift away from learning groundings between vision and
language only as part of task training and towards treating visual grounding as
a pretrainable and transferable capability. | [
"cs.CV",
"cs.CL"
] |
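A co-attentional transformer layer of the kind described above exchanges queries between the two streams, so each modality attends over the other's features. The sketch below uses standard multi-head attention; the dimensions and residual wiring are assumptions, not ViLBERT's exact configuration.

```python
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    # Minimal co-attention sketch: visual tokens attend over text tokens and
    # vice versa (queries from one stream, keys/values from the other).
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.vis_attends_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_attends_vis = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis, txt):
        vis2, _ = self.vis_attends_txt(query=vis, key=txt, value=txt)
        txt2, _ = self.txt_attends_vis(query=txt, key=vis, value=vis)
        return vis + vis2, txt + txt2   # residual connections per stream
```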
Most of the existing single-stage and two-stage 3D object detectors are
anchor-based methods, while the efficient but challenging anchor-free
single-stage 3D object detection is not well investigated. Recent studies on
2D object detection show that anchor-free methods also hold great potential.
However, the unordered and sparse properties of point clouds prevent us from
directly leveraging the advanced 2D methods on 3D point clouds. We overcome
this by converting the voxel-based sparse 3D feature volumes into the sparse 2D
feature maps. We propose an attentive module that densifies the sparse
feature maps, mainly on the object regions, through a deformable convolution
tower and supervised mask-guided attention. By directly regressing the 3D bounding
box from the enhanced and dense feature maps, we construct a novel single-stage
3D detector for point clouds in an anchor-free manner. We propose an IoU-based
detection confidence re-calibration scheme to improve the correlation between
the detection confidence score and the accuracy of the bounding box regression.
Our code is publicly available at \url{https://github.com/jialeli1/MGAF-3DSSD}. | [
"cs.CV"
] |
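A common form of IoU-based confidence re-calibration blends the classification score with a predicted IoU so that the final confidence tracks localization quality. The one-liner below shows this generic scheme with a hypothetical trade-off parameter alpha; the paper's exact formula may differ.

```python
def recalibrate_confidence(cls_score, iou_pred, alpha=0.5):
    # Geometric blend of classification score and predicted IoU: boxes with
    # poor predicted localization get their confidence suppressed.
    return (cls_score ** alpha) * (iou_pred ** (1.0 - alpha))
```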
We propose a notation for tensors with named axes, which relieves the author,
reader, and future implementers from the burden of keeping track of the order
of axes and the purpose of each. It also makes it easy to extend operations on
low-order tensors to higher order ones (e.g., to extend an operation on images
to minibatches of images, or extend the attention mechanism to multiple
attention heads). After a brief overview of our notation, we illustrate it
through several examples from modern machine learning, from building blocks
like attention and convolution to full models like Transformers and LeNet.
Finally, we give formal definitions and describe some extensions. Our proposals
build on ideas from many previous papers and software libraries. We hope that
this document will encourage more authors to use named tensors, resulting in
clearer papers and less bug-prone implementations.
The source code for this document can be found at
https://github.com/namedtensor/notation/. We invite anyone to make comments on
this proposal by submitting issues or pull requests on this repository. | [
"cs.LG",
"cs.CL"
] |
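To make the idea concrete, here is a toy named-axes wrapper that contracts along an axis by name instead of position; it is an illustrative sketch, not the notation's reference implementation.

```python
import numpy as np

class NamedTensor:
    """Toy named-axes tensor: operations refer to axes by name."""

    def __init__(self, data, names):
        self.data, self.names = np.asarray(data), tuple(names)

    def contract(self, other, name):
        # Sum-product contraction along the shared axis `name`; the caller
        # never tracks axis order.
        i, j = self.names.index(name), other.names.index(name)
        data = np.tensordot(self.data, other.data, axes=(i, j))
        names = [n for n in self.names if n != name] + \
                [n for n in other.names if n != name]
        return NamedTensor(data, names)

x = NamedTensor(np.ones((2, 3)), ["batch", "feature"])
w = NamedTensor(np.ones((3, 4)), ["feature", "hidden"])
y = x.contract(w, "feature")   # result axes: ("batch", "hidden")
```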
We consider the exploration-exploitation trade-off in reinforcement learning
and we show that an agent imbued with an epistemic-risk-seeking utility
function is able to explore efficiently, as measured by regret. The parameter
that controls how risk-seeking the agent is can be optimized to minimize
regret, or annealed according to a schedule. We call the resulting algorithm
K-learning and we show that the K-values that the agent maintains are
optimistic for the expected optimal Q-values at each state-action pair. The
utility function approach induces a natural Boltzmann exploration policy for
which the 'temperature' parameter is equal to the risk-seeking parameter. This
policy achieves a Bayesian regret bound of $\tilde O(L^{3/2} \sqrt{SAT})$,
where L is the time horizon, S is the number of states, A is the number of
actions, and T is the total number of elapsed time-steps. K-learning can be
interpreted as mirror descent in the policy space, and it is similar to other
well-known methods in the literature, including Q-learning, soft-Q-learning,
and maximum entropy policy gradient. K-learning is simple to implement, as it
only requires adding a bonus to the reward at each state-action and then
solving a Bellman equation. We conclude with a numerical example demonstrating
that K-learning is competitive with other state-of-the-art algorithms in
practice. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
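The induced Boltzmann exploration policy mentioned above can be written directly: action probabilities are a softmax of the K-values, with the risk-seeking parameter acting as the temperature. A minimal, numerically stable sketch:

```python
import numpy as np

def boltzmann_policy(k_values, tau):
    # Softmax over K-values with temperature tau (the risk-seeking
    # parameter); subtracting the max keeps the exponentials stable.
    z = (k_values - k_values.max()) / tau
    p = np.exp(z)
    return p / p.sum()
```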
Graph neural networks have become a staple in problems addressing learning
and analysis of data defined over graphs. However, several results suggest an
inherent difficulty in extracting better performance by increasing the number
of layers. Recent works attribute this to a phenomenon peculiar to the
extraction of node features in graph-based tasks, i.e., the need to consider
multiple neighborhood sizes at the same time and adaptively tune them. In this
paper, we investigate the recently proposed randomly wired architectures in the
context of graph neural networks. Instead of building deeper networks by
stacking many layers, we prove that employing a randomly-wired architecture can
be a more effective way to increase the capacity of the network and obtain
richer representations. We show that such architectures behave like an ensemble
of paths, which are able to merge contributions from receptive fields of varied
size. Moreover, these receptive fields can also be modulated to be wider or
narrower through the trainable weights over the paths. We also provide
extensive experimental evidence of the superior performance of randomly wired
architectures over multiple tasks and four graph convolution definitions, using
recent benchmarking frameworks that address the reliability of previous
testing methodologies. | [
"cs.LG",
"cs.CV"
] |
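A randomly wired architecture of the kind studied above can be generated by sampling edges only from lower- to higher-indexed nodes, which yields a DAG by construction; each node would host a graph-convolution layer that aggregates the outputs of its predecessors. The edge probability below is an assumption, not the paper's setting.

```python
import random

def random_dag(n_nodes, p=0.4, seed=0):
    # Sample edges (i -> j) with i < j, so the node index order is already
    # a topological order; distinct source-to-sink paths act like an
    # ensemble with receptive fields of varied depth.
    rng = random.Random(seed)
    return [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
            if rng.random() < p]
```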
Recent breakthroughs in Deep Neural Networks (DNNs) have fueled a
tremendously growing demand for bringing DNN-powered intelligence into mobile
platforms. While the potential of deploying DNNs on resource-constrained
platforms has been demonstrated by DNN compression techniques, the current
practice suffers from two limitations: 1) only stand-alone compression
schemes are investigated, even though each compression technique suits only
certain types of DNN layers; and 2) compression techniques are mostly optimized
for DNNs' inference accuracy, without explicitly considering other
application-driven system performance (e.g., latency and energy cost) and the
varying resource availability across platforms (e.g., storage and processing
capability). To this end, we propose AdaDeep, a usage-driven, automated DNN
compression framework for systematically exploring the desired trade-off
between performance and resource constraints, from a holistic system level.
Specifically, in a layer-wise manner, AdaDeep automatically selects the most
suitable combination of compression techniques and the corresponding
compression hyperparameters for a given DNN. Thorough evaluations on six
datasets and across twelve devices demonstrate that AdaDeep can achieve up to
$18.6\times$ latency reduction, $9.8\times$ energy-efficiency improvement, and
$37.3\times$ storage reduction in DNNs while incurring negligible accuracy
loss. Furthermore, AdaDeep also uncovers multiple novel combinations of
compression techniques. | [
"cs.LG"
] |
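Conceptually, the layer-wise selection above searches, per layer, for the compression technique that best trades accuracy off against latency, energy, and storage. The greedy loop below is only a schematic with a hypothetical score callable; AdaDeep automates this search rather than enumerating greedily.

```python
def select_compression_plan(layers, techniques, score):
    # For each layer, pick the technique maximizing a usage-driven score
    # that combines accuracy with latency/energy/storage constraints.
    # `score(layer, technique)` is a hypothetical user-supplied callable.
    return {layer: max(techniques, key=lambda t: score(layer, t))
            for layer in layers}
```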
Learning interpretable and transferable subpolicies and performing task
decomposition from a single, complex task is difficult. Some traditional
hierarchical reinforcement learning techniques enforce this decomposition in a
top-down manner, while meta-learning techniques require a task distribution at
hand to learn such decompositions. This paper presents a framework for using
diverse suboptimal world models to decompose complex task solutions into
simpler modular subpolicies. This framework performs automatic decomposition of
a single source task in a bottom-up manner, concurrently learning the required
modular subpolicies as well as a controller to coordinate them. We perform a
series of experiments on high dimensional continuous action control tasks to
demonstrate the effectiveness of this approach at both complex single task
learning and lifelong learning. Finally, we perform ablation studies to
understand the importance and robustness of different elements in the framework
and limitations to this approach. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
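The controller-plus-subpolicies arrangement described above can be pictured as a learned mixture: the controller weighs each subpolicy's proposed action for the current state. The sketch below is schematic, with hypothetical callables, and is not the paper's training procedure.

```python
import numpy as np

def hierarchical_act(state, subpolicies, controller):
    # `controller(state)` returns mixture weights over subpolicies (shape
    # (n,)); each subpolicy proposes a continuous action (shape (a,)).
    weights = controller(state)
    actions = np.stack([pi(state) for pi in subpolicies])  # (n, a)
    return weights @ actions                               # blended action
```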
Segmentation of organs of interest in 3D medical images is necessary for
accurate diagnosis and longitudinal studies. Though recent advances using deep
learning have shown success for many segmentation tasks, large datasets are
required for high performance and the annotation process is both time consuming
and labor intensive. In this paper, we propose a 3D few shot segmentation
framework for accurate organ segmentation using limited training samples of the
target organ annotation. To achieve this, a U-Net like network is designed to
predict segmentation by learning the relationship between 2D slices of support
data and a query image, including a bidirectional gated recurrent unit (GRU)
that learns consistency of encoded features between adjacent slices. Also, we
introduce a transfer learning method that adapts to the characteristics of
the target image and organ by updating the model before testing, using
arbitrary support and query pairs sampled from the support data. We evaluate
our proposed model using
three 3D CT datasets with annotations of different organs. Our model yielded
significantly improved performance over state-of-the-art few shot segmentation
models and was comparable to a fully supervised model trained with more target
training data. | [
"cs.CV",
"cs.AI"
] |
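The bidirectional GRU over adjacent slices can be sketched as below; the slice count and feature dimension are assumptions, and in the actual network the GRU runs over encoded 2D slice features inside the U-Net-like model.

```python
import torch
import torch.nn as nn

# Encoded features for each 2D slice of a 3D volume: (batch, slices, features).
n_slices, feat_dim, hidden = 16, 256, 128
slice_feats = torch.randn(1, n_slices, feat_dim)

# Bidirectional GRU propagates context in both slice directions, encouraging
# consistency of features between adjacent slices.
gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
context, _ = gru(slice_feats)   # (1, n_slices, 2 * hidden)
```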