| text (string, lengths 29 to 3.31k) | label (sequence, lengths 1 to 11) |
|---|---|
One-stage object detection is commonly implemented by optimizing two
sub-tasks: object classification and localization, using heads with two
parallel branches, which might lead to a certain level of spatial misalignment
in predictions between the two tasks. In this work, we propose a Task-aligned
One-stage Object Detection (TOOD) that explicitly aligns the two tasks in a
learning-based manner. First, we design a novel Task-aligned Head (T-Head)
which offers a better balance between learning task-interactive and
task-specific features, as well as a greater flexibility to learn the alignment
via a task-aligned predictor. Second, we propose Task Alignment Learning (TAL)
to explicitly pull closer (or even unify) the optimal anchors for the two tasks
during training via a designed sample assignment scheme and a task-aligned
loss. Extensive experiments are conducted on MS-COCO, where TOOD achieves a
51.1 AP at single-model single-scale testing. This surpasses recent one-stage
detectors such as ATSS (47.7 AP), GFL (48.2 AP), and PAA (49.0 AP) by a large
margin, with fewer parameters and FLOPs. Qualitative results also
demonstrate the effectiveness of TOOD for better aligning the tasks of object
classification and localization. Code is available at
https://github.com/fcjian/TOOD. | [
"cs.CV"
] |
Researchers have proposed a wide variety of model explanation approaches, but
it remains unclear how most methods are related or when one method is
preferable to another. We establish a new class of methods, removal-based
explanations, that are based on the principle of simulating feature removal to
quantify each feature's influence. These methods vary in several respects, so
we develop a framework that characterizes each method along three dimensions:
1) how the method removes features, 2) what model behavior the method explains,
and 3) how the method summarizes each feature's influence. Our framework
unifies 25 existing methods, including several of the most widely used
approaches (SHAP, LIME, Meaningful Perturbations, permutation tests). This new
class of explanation methods has rich connections that we examine using tools
that have been largely overlooked by the explainability literature. To anchor
removal-based explanations in cognitive psychology, we show that feature
removal is a simple application of subtractive counterfactual reasoning. Ideas
from cooperative game theory shed light on the relationships and trade-offs
among different methods, and we derive conditions under which all removal-based
explanations have information-theoretic interpretations. Through this analysis,
we develop a unified framework that helps practitioners better understand model
explanation tools, and that offers a strong theoretical foundation upon which
future explainability research can build. | [
"cs.LG",
"stat.ML"
] |
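As a purely illustrative companion to the abstract above, the sketch below instantiates the three dimensions of a removal-based explanation on a toy model: features are "removed" by mean imputation, the explained behaviour is the model's raw output, and influence is summarized as the mean absolute change. The toy model and these particular removal/summary choices are assumptions for illustration, not the paper's prescribed method.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
model = lambda X: 3 * X[:, 0] - X[:, 2]        # toy black box: only features 0 and 2 matter

def removal_importance(model, X, j):
    """Simulate removal of feature j (here: mean imputation) and summarize its
    influence as the mean absolute change in the model output."""
    X_removed = X.copy()
    X_removed[:, j] = X[:, j].mean()
    return np.mean(np.abs(model(X) - model(X_removed)))

print([round(removal_importance(model, X, j), 2) for j in range(4)])
# features 0 and 2 receive large scores; features 1 and 3 are near zero
```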
With the advent of state-of-the-art, nature-inspired, pure attention-based
models, i.e., transformers, and their success in natural language processing
(NLP), their extension to machine vision (MV) tasks was inevitable and widely
anticipated. Subsequently, vision transformers (ViTs) were introduced, and they
are posing a serious challenge to established deep learning based machine
vision techniques. However, pure attention-based models/architectures like
transformers require huge amounts of data, long training times, and large
computational resources. Some recent works suggest that combining these two
varied fields can yield systems that have the advantages of both. Accordingly,
this state-of-the-art survey paper is introduced, which will hopefully help
readers obtain useful information about this interesting and promising research
area. A gentle introduction to attention mechanisms is given, followed by a
discussion of the popular attention-based deep architectures. Subsequently, the
major categories at the intersection of attention mechanisms and deep learning
for machine vision (MV) are discussed. Afterwards, the major algorithms, issues,
and trends within the scope of the paper are discussed. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Supervised object detection and semantic segmentation require object-level or
even pixel-level annotations. When only image-level labels are available, it is
challenging for weakly supervised algorithms to achieve accurate predictions.
The accuracy achieved by top weakly supervised algorithms is still
significantly lower than their fully supervised counterparts. In this paper, we
propose a novel weakly supervised curriculum learning pipeline for multi-label
object recognition, detection and semantic segmentation. In this pipeline, we
first obtain intermediate object localization and pixel labeling results for
the training images, and then use such results to train task-specific deep
networks in a fully supervised manner. The entire process consists of four
stages, including object localization in the training images, filtering and
fusing object instances, pixel labeling for the training images, and
task-specific network training. To obtain clean object instances in the
training images, we propose a novel algorithm for filtering, fusing and
classifying object instances collected from multiple solution mechanisms. In
this algorithm, we incorporate both metric learning and density-based
clustering to filter detected object instances. Experiments show that our
weakly supervised pipeline achieves state-of-the-art results in multi-label
image classification as well as weakly supervised object detection and very
competitive results in weakly supervised semantic segmentation on MS-COCO,
PASCAL VOC 2007 and PASCAL VOC 2012. | [
"cs.CV",
"cs.AI",
"cs.LG",
"stat.ML"
] |
In existing visual representation learning tasks, deep convolutional neural
networks (CNNs) are often trained on images annotated with single tags, such as
ImageNet. However, a single tag cannot describe all important contents of one
image, and some useful visual information may be wasted during training. In
this work, we propose to train CNNs from images annotated with multiple tags,
to enhance the quality of visual representation of the trained CNN model. To
this end, we build a large-scale multi-label image database with 18M images and
11K categories, dubbed Tencent ML-Images. We efficiently train the ResNet-101
model with multi-label outputs on Tencent ML-Images, taking 90 hours for 60
epochs, based on a large-scale distributed deep learning framework, i.e., TFplus.
The good quality of the visual representation of the Tencent ML-Images
checkpoint is verified through three transfer learning tasks, including
single-label image classification on ImageNet and Caltech-256, object detection
on PASCAL VOC 2007, and semantic segmentation on PASCAL VOC 2012. The Tencent
ML-Images database, the checkpoints of ResNet-101, and all the training
code have been released at https://github.com/Tencent/tencent-ml-images. It is
expected to promote other vision tasks in the research and industry community. | [
"cs.CV"
] |
CNNs are a powerful tool for many computer vision tasks, achieving much better
results than traditional methods. Since a CNN has a very large capacity,
training such a neural network often requires a large amount of data, but it is
often expensive to obtain labeled images in real practice, especially for
object detection, where collecting bounding boxes for every object in the
training set requires much human effort. This is the case in the detection of
retail products, where there can be many different categories. In this paper,
we focus on applying a CNN to detect products from 324 categories in situ,
while requiring no extra effort of labeling bounding boxes for any image. Our
approach is based on an algorithm that extracts bounding boxes from an in-vitro
dataset and an algorithm to simulate occlusion. We have successfully shown the
effectiveness and usefulness of our methods to build up a Faster RCNN detection
model. A similar idea is also applicable in other scenarios. | [
"cs.CV"
] |
Deep learning approaches have become the standard solution to many problems
in computer vision and robotics, but obtaining sufficient training data in high
enough quality is challenging, as human labor is error prone, time consuming,
and expensive. Solutions based on simulation have become more popular in recent
years, but the gap between simulation and reality is still a major issue. In
this paper, we introduce a novel method for augmenting synthetic image data
through unsupervised image-to-image translation by applying the style of real
world images to simulated images with open source frameworks. The generated
dataset is combined with conventional augmentation methods and is then applied
to a neural network model running in real-time on autonomous soccer robots. Our
evaluation shows a significant improvement compared to models trained on images
generated entirely in simulation. | [
"cs.LG",
"cs.CV",
"cs.RO"
] |
We develop a model using deep learning techniques and natural language
processing on unstructured text from medical records to predict hospital-wide
$30$-day unplanned readmission, with a c-statistic of $.70$. Our model is
constructed to allow physicians to interpret the significant features for
prediction. | [
"stat.ML"
] |
In this paper we propose a novel square root sliding-window bundle adjustment
suitable for real-time odometry applications. The square root formulation
pervades three major aspects of our optimization-based sliding-window
estimator: for bundle adjustment we eliminate landmark variables with nullspace
projection; to store the marginalization prior we employ a matrix square root
of the Hessian; and when marginalizing old poses we avoid forming normal
equations and update the square root prior directly with a specialized QR
decomposition. We show that the proposed square root marginalization is
algebraically equivalent to the conventional use of Schur complement (SC) on
the Hessian. Moreover, it elegantly deals with rank-deficient Jacobians,
producing a prior equivalent to SC with the Moore-Penrose inverse. Our evaluation
of visual and visual-inertial odometry on real-world datasets demonstrates that
the proposed estimator is 36% faster than the baseline. It furthermore shows
that in single precision, conventional Hessian-based marginalization leads to
numeric failures and reduced accuracy. We analyse numeric properties of the
marginalization prior to explain why our square root form does not suffer from
the same effect and therefore entails superior performance. | [
"cs.CV"
] |
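Independent of the full sliding-window estimator above, the square-root idea itself is easy to demonstrate: instead of accumulating a Hessian H = J1ᵀJ1 + J2ᵀJ2, keep an upper-triangular factor R with RᵀR = H and fold in new residual Jacobians by a QR decomposition of the stacked factor. The NumPy toy below only checks this algebraic equivalence on random matrices; the paper's nullspace projection and specialized marginalization are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
J1 = rng.normal(size=(6, 3))                    # Jacobian of a first block of residuals
_, R = np.linalg.qr(J1)                         # square-root factor: R^T R = J1^T J1

J2 = rng.normal(size=(4, 3))                    # later residuals to fold into the prior
_, R_new = np.linalg.qr(np.vstack([R, J2]))     # QR on the stacked factor updates the square root

H_direct = J1.T @ J1 + J2.T @ J2                # conventional Hessian accumulation
print(np.allclose(R_new.T @ R_new, H_direct))   # True: algebraically equivalent, better conditioned
```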
We propose a new modeling approach that is a generalization of generative and
discriminative models. The core idea is to use an implicit parameterization of
a joint probability distribution by specifying only the conditional
distributions. The proposed scheme combines the advantages of both worlds -- it
can use powerful complex discriminative models as its parts, having at the same
time better generalization capabilities. We thoroughly evaluate the proposed
method for a simple classification task with artificial data and illustrate its
advantages for real-world scenarios on a semantic image segmentation problem. | [
"cs.LG"
] |
While successful for various computer vision tasks, deep neural networks have
been shown to be vulnerable to texture style shifts and small perturbations to which
humans are robust. In this work, we show that the robustness of neural networks
can be greatly improved through the use of random convolutions as data
augmentation. Random convolutions are approximately shape-preserving and may
distort local textures. Intuitively, randomized convolutions create an infinite
number of new domains with similar global shapes but random local textures.
Therefore, we explore using outputs of multi-scale random convolutions as new
images or mixing them with the original images during training. When applying a
network trained with our approach to unseen domains, our method consistently
improves the performance on domain generalization benchmarks and is scalable to
ImageNet. In particular, in the challenging scenario of generalizing to the
sketch domain in PACS and to ImageNet-Sketch, our method outperforms
state-of-the-art methods by a large margin. More interestingly, our method can
benefit downstream tasks by providing a more robust pretrained visual
representation. | [
"cs.CV",
"cs.LG"
] |
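A minimal NumPy/SciPy sketch of the augmentation described above: convolve an image with a randomly sampled kernel (roughly shape-preserving, texture-distorting) and mix the result with the original. The kernel sizes, normalization, and uniform mixing weight are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def randconv_augment(img, max_kernel=5, rng=None):
    """Apply a random convolution to an HxWxC image and mix it with the original."""
    rng = np.random.default_rng() if rng is None else rng
    k = int(rng.choice(np.arange(1, max_kernel + 1, 2)))        # random odd kernel size
    kernel = rng.normal(size=(k, k))
    kernel /= np.abs(kernel).sum()                              # keep output magnitudes in range
    filtered = np.stack([convolve2d(img[..., c], kernel, mode="same", boundary="symm")
                         for c in range(img.shape[-1])], axis=-1)
    alpha = rng.uniform()                                       # mix with the original image
    return alpha * img + (1 - alpha) * filtered

img = np.random.default_rng(1).uniform(size=(32, 32, 3)).astype(np.float32)
print(randconv_augment(img).shape)                              # (32, 32, 3)
```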
Motivated by the success of Transformers in natural language processing (NLP)
tasks, some attempts (e.g., ViT and DeiT) have emerged to apply Transformers to
the vision domain. However, pure Transformer architectures often require a
large amount of training data or extra supervision to obtain comparable
performance with convolutional neural networks (CNNs). To overcome these
limitations, we analyze the potential drawbacks when directly borrowing
Transformer architectures from NLP. Then we propose a new
\textbf{Convolution-enhanced image Transformer (CeiT)} which combines the
advantages of CNNs in extracting low-level features, strengthening locality,
and the advantages of Transformers in establishing long-range dependencies.
Three modifications are made to the original Transformer: \textbf{1)} instead
of the straightforward tokenization from raw input images, we design an
\textbf{Image-to-Tokens (I2T)} module that extracts patches from generated
low-level features; \textbf{2)} the feed-forward network in each encoder block
is replaced with a \textbf{Locally-enhanced Feed-Forward (LeFF)} layer that
promotes the correlation among neighboring tokens in the spatial dimension;
\textbf{3)} a \textbf{Layer-wise Class token Attention (LCA)} is attached at
the top of the Transformer that utilizes the multi-level representations.
Experimental results on ImageNet and seven downstream tasks show the
effectiveness and generalization ability of CeiT compared with previous
Transformers and state-of-the-art CNNs, without requiring a large amount of
training data and extra CNN teachers. Besides, CeiT models also demonstrate
better convergence with $3\times$ fewer training iterations, which can reduce
the training cost significantly\footnote{Code and models will be released upon
acceptance.}. | [
"cs.CV"
] |
We introduce the Precise Synthetic Image and LiDAR (PreSIL) dataset for
autonomous vehicle perception. Grand Theft Auto V (GTA V), a commercial video
game, has a large detailed world with realistic graphics, which provides a
diverse data collection environment. Existing works creating synthetic LiDAR
data for autonomous driving with GTA V have not released their datasets, rely
on an in-game raycasting function which represents people as cylinders, and can
fail to capture vehicles past 30 metres. Our work creates a precise LiDAR
simulator within GTA V which collides with detailed models for all entities no
matter the type or position. The PreSIL dataset consists of over 50,000 frames
and includes high-definition images with full resolution depth information,
semantic segmentation (images), point-wise segmentation (point clouds), and
detailed annotations for all vehicles and people. Collecting additional data
with our framework is entirely automatic and requires no human annotation of
any kind. We demonstrate the effectiveness of our dataset by showing an
improvement of up to 5% average precision on the KITTI 3D Object Detection
benchmark challenge when state-of-the-art 3D object detection networks are
pre-trained with our data. The data and code are available at
https://tinyurl.com/y3tb9sxy | [
"cs.CV",
"cs.RO"
] |
Long sequence time-series forecasting (LSTF) has become increasingly popular
for its wide range of applications. Though superior models have been proposed
to enhance the prediction effectiveness and efficiency, it is reckless to
neglect or underestimate one of the most natural and basic temporal properties
of time-series. In this paper, we introduce a new baseline for LSTF, the
historical inertia (HI), which refers to the most recent historical data-points
in the input time series. We experimentally evaluate the power of historical
inertia on four public real-world datasets. The results demonstrate that up to
82\% relative improvement over state-of-the-art works can be achieved even by
adopting HI directly as output. | [
"cs.LG"
] |
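The HI baseline above is simple enough to state in two lines. The sketch below reflects one plausible reading of "adopting HI directly as output" (echoing the most recent horizon-many points of the input window); it is an assumption for illustration, not the authors' exact code.

```python
import numpy as np

def historical_inertia(history, horizon):
    """Forecast the next `horizon` steps as the most recent `horizon` observations."""
    return np.asarray(history)[-horizon:]

window = np.sin(np.arange(48) / 4.0)       # toy input window
print(historical_inertia(window, horizon=6))
```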
Reinforcement learning has recently shown promise as a technique for training
an artificial neural network to parse sentences in some unknown format. A key
aspect of this approach is that rather than explicitly inferring a grammar that
describes the format, the neural network learns to perform various parsing
actions (such as merging two tokens) over a corpus of sentences, with the goal
of maximizing the total reward, which is roughly based on the estimated
frequency of the resulting parse structures. This can allow the learning
process to more easily explore different action choices, since a given choice
may change the optimality of the parse (as expressed by the total reward), but
will not result in the failure to parse a sentence. However, the approach also
exhibits limitations: first, the neural network does not provide production
rules for the grammar that it uses during parsing; second, because this neural
network can successfully parse any sentence, it cannot be directly used to
identify sentences that deviate from the format of the training sentences,
i.e., that are anomalous. In this paper, we address these limitations by
presenting procedures for extracting production rules from the neural network,
and for using these rules to determine whether a given sentence is nominal or
anomalous, when compared to structures observed within training data. In the
latter case, an attempt is made to identify the location of the anomaly.
Additionally, a two-pass mechanism is presented for dealing with formats
containing high-entropy information. We empirically evaluate the approach on
artificial formats, demonstrating effectiveness, but also identifying
limitations. By further improving parser learning, and leveraging rule
extraction and anomaly detection, one might begin to understand common errors,
either benign or malicious, in practical formats. | [
"cs.LG",
"cs.FL",
"68T07 (Primary) 68Q42 (Secondary)",
"I.2.6; F.4.2"
] |
Unsupervised representation learning holds the promise of exploiting large
amounts of unlabeled data to learn general representations. A promising
technique for unsupervised learning is the framework of Variational
Auto-encoders (VAEs). However, unsupervised representations learned by VAEs are
significantly outperformed by those learned by supervised learning for
recognition. Our hypothesis is that to learn useful representations for
recognition the model needs to be encouraged to learn about repeating and
consistent patterns in data. Drawing inspiration from the mid-level
representation discovery work, we propose PatchVAE, which reasons about images
at patch level. Our key contribution is a bottleneck formulation that
encourages mid-level style representations in the VAE framework. Our
experiments demonstrate that representations learned by our method perform much
better on the recognition tasks compared to those learned by vanilla VAEs. | [
"cs.CV",
"cs.LG"
] |
In this work, we address the problem of cross-view geo-localization, which
estimates the geospatial location of a street view image by matching it with a
database of geo-tagged aerial images. The cross-view matching task is extremely
challenging due to drastic appearance and geometry differences across views.
Unlike existing methods that predominantly fall back on CNN, here we devise a
novel evolving geo-localization Transformer (EgoTR) that utilizes the
properties of self-attention in Transformer to model global dependencies, thus
significantly decreasing visual ambiguities in cross-view geo-localization. We
also exploit the positional encoding of Transformer to help the EgoTR
understand and correspond geometric configurations between ground and aerial
images. Compared to state-of-the-art methods that impose strong assumptions on
geometric knowledge, the EgoTR flexibly learns the positional embeddings through
the training objective and hence becomes more practical in many real-world
scenarios. Although Transformer is well suited to our task, its vanilla
self-attention mechanism independently interacts within image patches in each
layer, which overlooks correlations between layers. Instead, this paper proposes
a simple yet effective self-cross attention mechanism to improve the quality of
learned representations. The self-cross attention models global dependencies
between adjacent layers, relating image patches while modeling how
features evolve in the previous layer. As a result, the proposed self-cross
attention leads to more stable training, improves the generalization ability
and encourages representations to keep evolving as the network goes deeper.
Extensive experiments demonstrate that our EgoTR performs favorably against
state-of-the-art methods on standard, fine-grained and cross-dataset cross-view
geo-localization tasks. | [
"cs.CV"
] |
Given a measurement graph $G= (V,E)$ and an unknown signal $r \in
\mathbb{R}^n$, we investigate algorithms for recovering $r$ from pairwise
measurements of the form $r_i - r_j$; $\{i,j\} \in E$. This problem arises in a
variety of applications, such as ranking teams in sports data and time
synchronization of distributed networks. Framed in the context of ranking, the
task is to recover the ranking of $n$ teams (induced by $r$) given a small
subset of noisy pairwise rank offsets. We propose a simple SVD-based
algorithmic pipeline for both the problem of time synchronization and ranking.
We provide a detailed theoretical analysis in terms of robustness against both
sampling sparsity and noise perturbations with outliers, using results from
matrix perturbation and random matrix theory. Our theoretical findings are
complemented by a detailed set of numerical experiments on both synthetic and
real data, showcasing the competitiveness of our proposed algorithms with other
state-of-the-art methods. | [
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] |
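The low-rank structure an SVD-based pipeline can exploit is easy to verify numerically: in the noiseless, fully observed case the measurement matrix C with C_ij = r_i - r_j equals r 1ᵀ - 1 rᵀ, has rank two, and its top two left singular vectors span {r, 1}. The toy below recovers the ranking from that span; it only illustrates the structure, not the paper's algorithm, which additionally handles sparsity, noise, and outliers.

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(size=8)                       # unknown signal inducing the ranking
C = r[:, None] - r[None, :]                  # C_ij = r_i - r_j (skew-symmetric, rank 2)

U, S, Vt = np.linalg.svd(C)
ones = np.ones(len(r)) / np.sqrt(len(r))
# strip the all-ones component from the top-2 left singular vectors;
# the residual is parallel to the centred signal r - mean(r)
cands = [u - (u @ ones) * ones for u in (U[:, 0], U[:, 1])]
est = max(cands, key=np.linalg.norm)
if est @ (r - r.mean()) < 0:                 # global sign is ambiguous; fix it for the check
    est = -est
print(np.array_equal(np.argsort(-est), np.argsort(-r)))   # True: ranking recovered
```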
Advancing research in the emerging field of deep graph learning requires new
tools to support tensor computation over graphs. In this paper, we present the
design principles and implementation of Deep Graph Library (DGL). DGL distills
the computational patterns of GNNs into a few generalized sparse tensor
operations suitable for extensive parallelization. By advocating the graph as the
central programming abstraction, DGL can perform optimizations transparently.
By cautiously adopting a framework-neutral design, DGL allows users to easily
port and leverage the existing components across multiple deep learning
frameworks. Our evaluation shows that DGL significantly outperforms other
popular GNN-oriented frameworks in both speed and memory consumption over a
variety of benchmarks and has little overhead for small scale workloads. | [
"cs.LG",
"stat.ML"
] |
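To make the "generalized sparse tensor operations" above concrete, here is a framework-agnostic SciPy sketch (not DGL's actual API): mean-aggregation message passing on a toy graph expressed as a sparse matrix product, the computational pattern such libraries parallelize.

```python
import numpy as np
from scipy.sparse import coo_matrix

src = np.array([0, 1, 2, 3, 0])              # toy directed graph: src -> dst
dst = np.array([1, 2, 3, 0, 2])
h = np.random.default_rng(0).normal(size=(4, 8))             # node features

A = coo_matrix((np.ones(len(src)), (dst, src)), shape=(4, 4)).tocsr()
deg = np.asarray(A.sum(axis=1)).ravel().clip(min=1)
h_new = (A @ h) / deg[:, None]               # mean over in-neighbours as one sparse matmul
print(h_new.shape)                           # (4, 8)
```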
We propose a new approach for Zero-Shot Human-Object Interaction Recognition
in the challenging setting that involves interactions with unseen actions (as
opposed to just unseen combinations of seen actions and objects). Our approach
makes use of knowledge external to the image content in the form of a graph
that models affordance relations between actions and objects, i.e., whether an
action can be performed on the given object or not. We propose a loss function
with the aim of distilling the knowledge contained in the graph into the model,
while also using the graph to regularise learnt representations by imposing a
local structure on the latent space. We evaluate our approach on several
datasets (including the popular HICO and HICO-DET) and show that it outperforms
the current state of the art. | [
"cs.CV"
] |
This paper provides a unifying view of a wide range of problems of interest
in machine learning by framing them as the minimization of functionals defined
on the space of probability measures. In particular, we show that generative
adversarial networks, variational inference, and actor-critic methods in
reinforcement learning can all be seen through the lens of our framework. We
then discuss a generic optimization algorithm for our formulation, called
probability functional descent (PFD), and show how this algorithm recovers
existing methods developed independently in the settings mentioned earlier. | [
"cs.LG",
"stat.ML"
] |
Accurate prediction of pedestrian crossing behaviors by autonomous vehicles
can significantly improve traffic safety. Existing approaches often model
pedestrian behaviors using trajectories or poses but do not offer a deeper
semantic interpretation of a person's actions or how actions influence a
pedestrian's intention to cross in the future. In this work, we follow the
neuroscience and psychological literature to define pedestrian crossing
behavior as a combination of an unobserved inner will (a probabilistic
representation of binary intent of crossing vs. not crossing) and a set of
multi-class actions (e.g., walking, standing, etc.). Intent generates actions,
and the future actions in turn reflect the intent. We present a novel
multi-task network that predicts future pedestrian actions and uses predicted
future action as a prior to detect the present intent and action of the
pedestrian. We also designed an attention relation network to incorporate
external environmental contexts, thus further improving intent and action
detection performance. We evaluated our approach on two naturalistic driving
datasets, PIE and JAAD, and extensive experiments show significantly improved
and more explainable results for both intent detection and action prediction
over state-of-the-art approaches. Our code is available at:
https://github.com/umautobots/pedestrian_intent_action_detection. | [
"cs.CV"
] |
Machine learning methods have been adopted in the literature as contenders to
conventional methods for solving energy time series forecasting (TSF) problems.
Recently, deep learning methods have emerged in the artificial intelligence
field, attaining astonishing performance in a wide range of applications. Yet,
the evidence about their performance in solving energy TSF problems, in terms
of accuracy and computational requirements, is scanty. Most of the review
articles that handle the energy TSF problem are systematic reviews; however, a
qualitative and quantitative study of the energy TSF problem is not yet
available in the literature. The purpose of this paper is twofold: first, it
provides a comprehensive analytical assessment of conventional, machine
learning, and deep learning methods that can be utilized to solve various
energy TSF problems. Second, the paper carries out an empirical assessment of
many selected methods through three real-world datasets. These datasets relate
to an electrical energy consumption problem, a natural gas problem, and the
electric power consumption of an individual household. The first two problems
are univariate TSF and the third is a multivariate TSF problem. Compared to
both conventional and machine learning contenders, the deep learning methods
attain a significant improvement in terms of accuracy over the forecasting
horizons examined. In the meantime, their computational requirements are
notably greater than those of the other contenders. Eventually, the paper
identifies a number of challenges, potential research directions, and
recommendations for the research community that may serve as a basis for
further research in the energy forecasting domain. | [
"cs.LG"
] |
We present an adversarial active exploration for inverse dynamics model
learning, a simple yet effective learning scheme that incentivizes exploration
in an environment without any human intervention. Our framework consists of a
deep reinforcement learning (DRL) agent and an inverse dynamics model
contesting with each other. The former collects training samples for the
latter, with an objective to maximize the error of the latter. The latter is
trained with samples collected by the former, and generates rewards for the
former when it fails to predict the actual action taken by the former. In such
a competitive setting, the DRL agent learns to generate samples that the
inverse dynamics model fails to predict correctly, while the inverse dynamics
model learns to adapt to the challenging samples. We further propose a reward
structure that ensures that the DRL agent collects only moderately hard samples,
but not overly hard ones that prevent the inverse model from predicting
effectively. We evaluate the effectiveness of our method on several robotic arm
and hand manipulation tasks against multiple baseline models. Experimental
results show that our method is comparable to those directly trained with
expert demonstrations, and superior to the other baselines even without any
human priors. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Seeking effective neural networks is a critical and practical field in deep
learning. Besides designing the depth, type of convolution, normalization, and
nonlinearities, the topological connectivity of neural networks is also
important. Previous principles of rule-based modular design simplify the
difficulty of building an effective architecture, but constrain the possible
topologies in limited spaces. In this paper, we attempt to optimize the
connectivity in neural networks. We propose a topological perspective to
represent a network as a complete graph for analysis, where nodes carry out
aggregation and transformation of features, and edges determine the flow of
information. By assigning learnable parameters to the edges, which reflect the
magnitude of connections, the learning process can be performed in a
differentiable manner. We further attach an auxiliary sparsity constraint to the
distribution of connectedness, which encourages the learned topology to focus on
critical connections. This learning process is compatible with existing
networks and adapts to larger search spaces and different tasks.
Quantitative experimental results show that the learned connectivity is
superior to traditional rule-based ones, such as random, residual, and
complete. In addition, it obtains significant improvements in image
classification and object detection without introducing excessive computation
burden. | [
"cs.CV"
] |
We propose the Graph Context Encoder (GCE), a simple but efficient approach
for graph representation learning based on graph feature masking and
reconstruction.
GCE models are trained to efficiently reconstruct input graphs similarly to a
graph autoencoder where node and edge labels are masked. In particular, our
model is also allowed to change graph structures by masking and reconstructing
graphs augmented by random pseudo-edges.
We show that GCE can be used for novel graph generation, with applications
for molecule generation. Used as a pretraining method, we also show that GCE
improves baseline performances in supervised classification tasks tested on
multiple standard benchmark graph datasets. | [
"cs.LG",
"68T07"
] |
Interference effects of tall buildings have attracted numerous studies due to
the boom of clusters of tall buildings in megacities. To fully understand the
interference effects of buildings, it often requires a substantial amount of
wind tunnel tests. Limited wind tunnel tests that only cover part of
interference scenarios are unable to fully reveal the interference effects.
This study used machine learning techniques to resolve the conflicting
requirements between limited wind tunnel tests that produce unreliable results
and a complete investigation of the interference effects that is costly and
time-consuming. Four machine learning models, including decision tree, random
forest, XGBoost, and generative adversarial networks (GANs), were trained based on
30% of a dataset to predict both mean and fluctuating pressure coefficients on
the principal building. The GANs model exhibited the best performance in
predicting these pressure coefficients. A number of GANs models were then
trained based on different portions of the dataset ranging from 10% to 90%. It
was found that the GANs model based on 30% of the dataset is capable of
predicting both mean and fluctuating pressure coefficients under unseen
interference conditions accurately. By using this GANs model, 70% of the wind
tunnel test cases can be saved, largely alleviating the cost of this kind of
wind tunnel testing study. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
Applying probabilistic models to reinforcement learning (RL) enables the
application of powerful optimisation tools such as variational inference to RL.
However, existing inference frameworks and their algorithms pose significant
challenges for learning optimal policies, e.g., the absence of mode capturing
behaviour in pseudo-likelihood methods and difficulties learning deterministic
policies in maximum entropy RL based approaches. We propose VIREL, a novel,
theoretically grounded probabilistic inference framework for RL that utilises a
parametrised action-value function to summarise future dynamics of the
underlying MDP. This gives VIREL a mode-seeking form of KL divergence, the
ability to learn deterministic optimal policies naturally from inference and the
ability to optimise value functions and policies in separate, iterative steps.
In applying variational expectation-maximisation to VIREL we thus show that the
actor-critic algorithm can be reduced to expectation-maximisation, with policy
improvement equivalent to an E-step and policy evaluation to an M-step. We then
derive a family of actor-critic methods from VIREL, including a scheme for
adaptive exploration. Finally, we demonstrate that actor-critic algorithms from
this family outperform state-of-the-art methods based on soft value functions
in several domains. | [
"cs.LG",
"stat.ML"
] |
Generative adversarial networks (GANs) have emerged as a powerful
unsupervised method to model the statistical patterns of real-world data sets,
such as natural images. These networks are trained to map random inputs in
their latent space to new samples representative of the learned data. However,
the structure of the latent space is hard to intuit due to its high
dimensionality and the non-linearity of the generator, which limits the
usefulness of the models. Understanding the latent space requires a way to
identify input codes for existing real-world images (inversion), and a way to
identify directions with known image transformations (interpretability). Here,
we use a geometric framework to address both issues simultaneously. We develop
an architecture-agnostic method to compute the Riemannian metric of the image
manifold created by GANs. The eigen-decomposition of the metric isolates axes
that account for different levels of image variability. An empirical analysis
of several pretrained GANs shows that image variation around each position is
concentrated along surprisingly few major axes (the space is highly
anisotropic) and the directions that create this large variation are similar at
different positions in the space (the space is homogeneous). We show that many
of the top eigenvectors correspond to interpretable transforms in the image
space, with a substantial part of eigenspace corresponding to minor transforms
which could be compressed out. This geometric understanding unifies key
previous results related to GAN interpretability. We show that the use of this
metric allows for more efficient optimization in the latent space (e.g. GAN
inversion) and facilitates unsupervised discovery of interpretable axes. Our
results illustrate that defining the geometry of the GAN image manifold can
serve as a general framework for understanding GANs. | [
"cs.LG",
"cs.NA",
"cs.NE",
"math.NA",
"I.2.10; I.3.3; I.3.5; G.1.4"
] |
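The central computation described above, pulling the Euclidean metric of image space back through the generator and eigen-decomposing it, can be sketched in a few lines. The toy below uses a hand-written two-dimensional "generator" and a finite-difference Jacobian purely for illustration; the paper's architecture-agnostic procedure for real GANs is more involved.

```python
import numpy as np

def generator(z):
    """Toy stand-in for a GAN generator mapping a 2-D latent to a 4-D 'image'."""
    return np.array([np.sin(z[0]), z[0] * z[1], np.cos(z[1]), z[0] + z[1] ** 2])

def riemannian_metric(g, z, eps=1e-5):
    """Pullback metric M = J^T J of the generator at z (finite-difference Jacobian)."""
    d = len(z)
    J = np.stack([(g(z + eps * np.eye(d)[i]) - g(z - eps * np.eye(d)[i])) / (2 * eps)
                  for i in range(d)], axis=1)
    return J.T @ J

M = riemannian_metric(generator, np.array([0.3, -0.7]))
evals, evecs = np.linalg.eigh(M)             # axes of image variability around this latent point
print(evals)                                 # anisotropy shows up as widely spread eigenvalues
```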
In performative prediction, predictions guide decision-making and hence can
influence the distribution of future data. To date, work on performative
prediction has focused on finding performatively stable models, which are the
fixed points of repeated retraining. However, stable solutions can be far from
optimal when evaluated in terms of the performative risk, the loss experienced
by the decision maker when deploying a model. In this paper, we shift attention
beyond performative stability and focus on optimizing the performative risk
directly. We identify a natural set of properties of the loss function and
model-induced distribution shift under which the performative risk is convex, a
property which does not follow from convexity of the loss alone. Furthermore,
we develop algorithms that leverage our structural assumptions to optimize the
performative risk with better sample efficiency than generic methods for
derivative-free convex optimization. | [
"cs.LG",
"stat.ML"
] |
Humans and animals have the ability to continually acquire, fine-tune, and
transfer knowledge and skills throughout their lifespan. This ability, referred
to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms
that together contribute to the development and specialization of our
sensorimotor skills as well as to long-term memory consolidation and retrieval.
Consequently, lifelong learning capabilities are crucial for autonomous agents
interacting in the real world and processing continuous streams of information.
However, lifelong learning remains a long-standing challenge for machine
learning and neural network models since the continual acquisition of
incrementally available information from non-stationary data distributions
generally leads to catastrophic forgetting or interference. This limitation
represents a major drawback for state-of-the-art deep neural network models
that typically learn representations from stationary batches of training data,
thus without accounting for situations in which information becomes
incrementally available over time. In this review, we critically summarize the
main challenges linked to lifelong learning for artificial learning systems and
compare existing neural network approaches that alleviate, to different
extents, catastrophic forgetting. We discuss well-established and emerging
research motivated by lifelong learning factors in biological systems such as
structural plasticity, memory replay, curriculum and transfer learning,
intrinsic motivation, and multisensory integration. | [
"cs.LG",
"q-bio.NC",
"stat.ML"
] |
In recent years, attention models have been extensively used for person and
vehicle re-identification. Most re-identification methods are designed to focus
attention on key-point locations. However, depending on the orientation, the
contribution of each key-point varies. In this paper, we present a novel
dual-path adaptive attention model for vehicle re-identification (AAVER). The
global appearance path captures macroscopic vehicle features while the
orientation conditioned part appearance path learns to capture localized
discriminative features by focusing attention on the most informative
key-points. Through extensive experimentation, we show that the proposed AAVER
method is able to accurately re-identify vehicles in unconstrained scenarios,
yielding state-of-the-art results on the challenging VeRi-776 dataset. As a
byproduct, the proposed system is also able to accurately predict vehicle
key-points and shows an improvement of more than 7% over the state of the art.
The code for the key-point estimation model is available at
https://github.com/Pirazh/Vehicle_Key_Point_Orientation_Estimation. | [
"cs.CV"
] |
Despite the remarkable success of generative adversarial networks, their
performance seems less impressive for diverse training sets, requiring learning
of discontinuous mapping functions. Though multi-mode prior or multi-generator
models have been proposed to alleviate this problem, such approaches may fail
depending on the empirically chosen initial mode components. In contrast to
such bottom-up approaches, we present GAN-Tree, which follows a hierarchical
divisive strategy to address such discontinuous multi-modal data. Devoid of any
assumption on the number of modes, GAN-Tree utilizes a novel mode-splitting
algorithm to effectively split the parent mode to semantically cohesive
children modes, facilitating unsupervised clustering. Further, it also enables
incremental addition of new data modes to an already trained GAN-Tree, by
updating only a single branch of the tree structure. As compared to prior
approaches, the proposed framework offers a higher degree of flexibility in
choosing a large variety of mutually exclusive and exhaustive tree nodes called
GAN-Set. Extensive experiments on synthetic and natural image datasets
including ImageNet demonstrate the superiority of GAN-Tree over the prior
state of the art. | [
"cs.CV",
"cs.LG"
] |
Photo-realistic re-rendering of a human from a single image with explicit
control over body pose, shape and appearance enables a wide range of
applications, such as human appearance transfer, virtual try-on, motion
imitation, and novel view synthesis. While significant progress has been made
in this direction using learning-based image generation tools, such as GANs,
existing approaches yield noticeable artefacts such as blurring of fine
details, unrealistic distortions of the body parts and garments as well as
severe changes of the textures. We, therefore, propose a new method for
synthesising photo-realistic human images with explicit control over pose and
part-based appearance, i.e., StylePoseGAN, where we extend a non-controllable
generator to accept conditioning of pose and appearance separately. Our network
can be trained in a fully supervised way with human images to disentangle pose,
appearance and body parts, and it significantly outperforms existing single
image re-rendering methods. Our disentangled representation opens up further
applications such as garment transfer, motion transfer, virtual try-on, head
(identity) swap and appearance interpolation. StylePoseGAN achieves
state-of-the-art image generation fidelity on common perceptual metrics
compared to the current best-performing methods and convinces in a
comprehensive user study. | [
"cs.CV"
] |
The attention mechanism is a hot spot in the deep learning field. Using a
channel attention model is an effective method for improving the performance of
a convolutional neural network. The Squeeze-and-Excitation block takes advantage
of channel dependence, selectively emphasizing the important channels and
compressing the relatively useless ones. In this paper, we propose a variant of
the SE block based on channel locality. Instead of using fully connected layers
to explore the global channel dependence, we adopt convolutional layers to learn
the correlation between nearby channels. We term this new algorithm the Channel
Locality (C-Local) block. We evaluate the SE block and the C-Local block by
applying them to different CNN architectures on the CIFAR-10 dataset. We
observed that our C-Local block achieved higher accuracy than the SE block. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
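A hedged PyTorch sketch of the idea in the abstract above: keep the SE squeeze-and-excite structure but replace the fully connected excitation with a 1-D convolution over neighbouring channels. The kernel size, pooling, and gating choices here are assumptions for illustration, not the authors' exact block.

```python
import torch
import torch.nn as nn

class CLocalBlock(nn.Module):
    """Channel attention where excitation uses a local 1-D convolution across channels."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                          # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                     # squeeze: global average pooling -> (N, C)
        w = self.conv(w.unsqueeze(1)).squeeze(1)   # local cross-channel interaction
        w = torch.sigmoid(w)                       # excitation weights in (0, 1)
        return x * w[:, :, None, None]             # re-scale each channel

x = torch.randn(2, 64, 32, 32)
print(CLocalBlock()(x).shape)                      # torch.Size([2, 64, 32, 32])
```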
Generative Adversarial Networks (GANs) have recently demonstrated the
capability to synthesize compelling real-world images, such as room interiors,
album covers, manga, faces, birds, and flowers. While existing models can
synthesize images based on global constraints such as a class label or caption,
they do not provide control over pose or object location. We propose a new
model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes
images given instructions describing what content to draw in which location. We
show high-quality 128 x 128 image synthesis on the Caltech-UCSD Birds dataset,
conditioned on both informal text descriptions and also object location. Our
system exposes control over both the bounding box around the bird and its
constituent parts. By modeling the conditional distributions over part
locations, our system also enables conditioning on arbitrary subsets of parts
(e.g. only the beak and tail), yielding an efficient interface for picking part
locations. We also show preliminary results on the more challenging domain of
text- and location-controllable synthesis of images of human actions on the
MPII Human Pose dataset. | [
"cs.CV",
"cs.NE"
] |
The highest strength-to-weight ratio criterion has attracted increasing
interest in virtually all areas where weight reduction is indispensable.
Lightweight materials and their joining processes are also a recent focus of
research demand in the manufacturing industries. Friction Stir Welding (FSW)
is one of the recent advancements for joining materials without adding any
third material (filler rod) and joining below the melting point of the parent
material. The process is widely used for joining similar and dissimilar metals,
especially lightweight non-ferrous materials like aluminum, copper, and
magnesium alloys. This paper presents findings on the optimum process
parameters for attaining enhanced mechanical properties of the weld joint. The
experiment was conducted on a 5 mm 6061 aluminum alloy sheet. The process
parameters tool material, rotational speed, traverse speed, and axial force
were utilized. Mechanical properties of the weld joint were examined using a
tensile test, and the maximum joint strength efficiency reached 94.2%.
Supervised machine learning based regression algorithms such as Decision
Trees, Random Forest, and Gradient Boosting were used. The results showed that
the Random Forest algorithm yielded the highest coefficient of determination
value of 0.926, which means it gives the best fit in comparison to the other
algorithms. | [
"cs.LG"
] |
Over the past two decades, biometric recognition has exploded into a plethora
of different applications around the globe. This proliferation can be
attributed to the high levels of authentication accuracy and user convenience
that biometric recognition systems afford end-users. However, in spite of the
success of biometric recognition systems, there are a number of outstanding
problems and concerns pertaining to the various sub-modules of biometric
recognition systems that create an element of mistrust in their use - both by
the scientific community and also the public at large. Some of these problems
include: i) questions related to system recognition performance, ii) security
(spoof attacks, adversarial attacks, template reconstruction attacks and
demographic information leakage), iii) uncertainty over the bias and fairness
of the systems to all users, iv) explainability of the seemingly black-box
decisions made by most recognition systems, and v) concerns over data
centralization and user privacy. In this paper, we provide an overview of each
of the aforementioned open-ended challenges. We survey work that has been
conducted to address each of these concerns and highlight the issues requiring
further attention. Finally, we provide insights into how the biometric
community can address core biometric recognition systems design issues to
better instill trust, fairness, and security for all. | [
"cs.CV"
] |
Deep Neural Networks (DNNs) have become ubiquitous in medical image
processing and analysis. Among them, U-Nets are very popular in various image
segmentation tasks. Yet, little is known about how information flows through
these networks and whether they are indeed properly designed for the tasks they
are being proposed for. In this paper, we employ information-theoretic tools in
order to gain insight into information flow through U-Nets. In particular, we
show how mutual information between input/output and an intermediate layer can
be a useful tool to understand information flow through various portions of a
U-Net, assess its architectural efficiency, and even propose more efficient
designs. | [
"cs.LG",
"cs.CV",
"eess.IV"
] |
This paper proposes a framework dedicated to the construction of what we call
discrete elastic inner product allowing one to embed sets of non-uniformly
sampled multivariate time series or sequences of varying lengths into inner
product space structures. This framework is based on a recursive definition
that covers the case of multiple embedded time elastic dimensions. We prove
that such inner products exist in our general framework and show how a simple
instance of this inner product class operates on some prospective applications,
while generalizing the Euclidean inner product. Classification experiments
on time series and symbolic sequences datasets demonstrate the benefits that we
can expect by embedding time series or sequences into elastic inner spaces
rather than into classical Euclidean spaces. These experiments show good
accuracy when compared to the Euclidean distance or even dynamic programming
algorithms while maintaining a linear algorithmic complexity at exploitation
stage, although a quadratic indexing phase beforehand is required. | [
"cs.LG",
"cs.DB"
] |
We introduce a class of learning problems where the agent is presented with a
series of tasks. Intuitively, if there is relation among those tasks, then the
information gained during execution of one task has value for the execution of
another task. Consequently, the agent is intrinsically motivated to explore its
environment beyond the degree necessary to solve the current task it has at
hand. We develop a decision theoretic setting that generalises standard
reinforcement learning tasks and captures this intuition. More precisely, we
consider a multi-stage stochastic game between a learning agent and an
opponent. We posit that the setting is a good model for the problem of
life-long learning in uncertain environments, where while resources must be
spent learning about currently important tasks, there is also the need to
allocate effort towards learning about aspects of the world which are not
relevant at the moment. This is due to the fact that unpredictable future
events may lead to a change of priorities for the decision maker. Thus, in some
sense, the model "explains" the necessity of curiosity. Apart from introducing
the general formalism, the paper provides algorithms. These are evaluated
experimentally in some exemplary domains. In addition, performance bounds are
proven for some cases of this problem. | [
"cs.LG",
"stat.ML"
] |
Humans are incredibly good at transferring knowledge from one domain to
another, enabling rapid learning of new tasks. Likewise, transfer learning has
enabled enormous success in many computer vision problems using pretraining.
However, the benefits of transfer in multi-domain learning, where a network
learns multiple tasks defined by different datasets, has not been adequately
studied. Learning multiple domains could be beneficial or these domains could
interfere with each other given limited network capacity. In this work, we
decipher the conditions where interference and knowledge transfer occur in
multi-domain learning. We propose new metrics disentangling interference and
transfer and set up experimental protocols. We further examine the roles of
network capacity, task grouping, and dynamic loss weighting in reducing
interference and facilitating transfer. We demonstrate our findings on the
CIFAR-100, MiniPlaces, and Tiny-ImageNet datasets. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Multiple Object Tracking (MOT) is an important task in computer vision. MOT
is still challenging due to the occlusion problem, especially in dense scenes.
Following the tracking-by-detection framework, we propose the Box-Plane
Matching (BPM) method to improve the MOT performance in dense scenes. First, we
design the Layer-wise Aggregation Discriminative Model (LADM) to filter the
noisy detections. Then, to associate remaining detections correctly, we
introduce the Global Attention Feature Model (GAFM) to extract appearance
feature and use it to calculate the appearance similarity between history
tracklets and current detections. Finally, we propose the Box-Plane Matching
strategy to achieve data association according to the motion similarity and
appearance similarity between tracklets and detections. With the effectiveness
of the three modules, our team achieves the 1st place on the Track-1
leaderboard in the ACM MM Grand Challenge HiEve 2020. | [
"cs.CV"
] |
In the prepress department, an RGB image has to be converted to a CMYK image.
The amounts of black, cyan, magenta, and yellow are controlled using a color
separation method. The gray color separation method is selected to control the
amounts of these colors because it also increases printing quality. A single
printer used for printing the same image on different papers also results in
different printed images. To remove this problem, a different ICC profile based
on gray-level control is developed, a sheet offset printer is calibrated using
that profile, and a subjective evaluation shows satisfactory results for papers
of different quality. | [
"cs.CV"
] |
We revisit the structure learning problem for dynamic Bayesian networks and
propose a method that simultaneously estimates contemporaneous (intra-slice)
and time-lagged (inter-slice) relationships between variables in a time-series.
Our approach is score-based, and revolves around minimizing a penalized loss
subject to an acyclicity constraint. To solve this problem, we leverage a
recent algebraic result characterizing the acyclicity constraint as a smooth
equality constraint. The resulting algorithm, which we call DYNOTEARS,
outperforms other methods on simulated data, especially in high-dimensions as
the number of variables increases. We also apply this algorithm on real
datasets from two different domains, finance and molecular biology, and analyze
the resulting output. Compared to state-of-the-art methods for learning dynamic
Bayesian networks, our method is both scalable and accurate on real data. The
simple formulation and competitive performance of our method make it suitable
for a variety of problems where one seeks to learn connections between
variables across time. | [
"stat.ML",
"cs.LG"
] |
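The "recent algebraic result" referenced above is, in the NOTEARS line of work this method builds on, the characterization h(W) = tr(exp(W∘W)) - d, which is zero exactly when the weighted graph W is acyclic; DYNOTEARS applies such a constraint to the intra-slice weight matrix. A minimal numerical check of that characterization (an assumption-level sketch, not the full penalized optimization):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """Smooth acyclicity score h(W) = tr(exp(W * W)) - d; zero iff W encodes a DAG."""
    return np.trace(expm(W * W)) - W.shape[0]

dag    = np.array([[0., 1.], [0., 0.]])      # single edge 0 -> 1: acyclic
cyclic = np.array([[0., 1.], [1., 0.]])      # 0 -> 1 -> 0: contains a cycle
print(acyclicity(dag), acyclicity(cyclic))   # ~0.0 and > 0
```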
We propose a method for converting a single RGB-D input image into a 3D photo
- a multi-layer representation for novel view synthesis that contains
hallucinated color and depth structures in regions occluded in the original
view. We use a Layered Depth Image with explicit pixel connectivity as
underlying representation, and present a learning-based inpainting model that
synthesizes new local color-and-depth content into the occluded region in a
spatial context-aware manner. The resulting 3D photos can be efficiently
rendered with motion parallax using standard graphics engines. We validate the
effectiveness of our method on a wide range of challenging everyday scenes and
show fewer artifacts compared with the state of the art. | [
"cs.CV",
"eess.IV"
] |
Deep neural networks (DNNs) trained on one set of medical images often
experience a severe performance drop on unseen test images, due to various domain
discrepancies between the training images (source domain) and the test images
(target domain), which raises a domain adaptation issue. In clinical settings,
it is difficult to collect enough annotated target domain data in a short
period. Few-shot domain adaptation, i.e., adapting a trained model with a
handful of annotations, is highly practical and useful in this case. In this
paper, we propose a Polymorphic Transformer (Polyformer), which can be
incorporated into any DNN backbones for few-shot domain adaptation.
Specifically, after the polyformer layer is inserted into a model trained on
the source domain, it extracts a set of prototype embeddings, which can be
viewed as a "basis" of the source-domain features. On the target domain, the
polyformer layer adapts by only updating a projection layer which controls the
interactions between image features and the prototype embeddings. All other
model weights (except BatchNorm parameters) are frozen during adaptation. Thus,
the chance of overfitting the annotations is greatly reduced, and the model can
perform robustly on the target domain after being trained on a few annotated
images. We demonstrate the effectiveness of Polyformer on two medical
segmentation tasks (i.e., optic disc/cup segmentation, and polyp segmentation).
The source code of Polyformer is released at
https://github.com/askerlee/segtran. | [
"cs.CV"
] |
While generative adversarial networks (GANs) can successfully produce
high-quality images, they can be challenging to control. Simplifying GAN-based
image generation is critical for their adoption in graphic design and artistic
work. This goal has led to significant interest in methods that can intuitively
control the appearance of images generated by GANs. In this paper, we present
HistoGAN, a color histogram-based method for controlling GAN-generated images'
colors. We focus on color histograms as they provide an intuitive way to
describe image color while remaining decoupled from domain-specific semantics.
Specifically, we introduce an effective modification of the recent StyleGAN
architecture to control the colors of GAN-generated images specified by a
target color histogram feature. We then describe how to expand HistoGAN to
recolor real images. For image recoloring, we jointly train an encoder network
along with HistoGAN. The recoloring model, ReHistoGAN, is an unsupervised
approach trained to encourage the network to keep the original image's content
while changing the colors based on the given target histogram. We show that
this histogram-based approach offers a better way to control GAN-generated and
real images' colors while producing more compelling results compared to
existing alternative strategies. | [
"cs.CV"
] |
Dynamic high resolution data on human population distribution is of great
importance for a wide spectrum of activities and real-life applications, but is
too difficult and expensive to obtain directly. Therefore, generating
fine-scaled population distributions from coarse population data is of great
significance. However, there are three major challenges: 1) the complexity in
spatial relations between high and low resolution population; 2) the dependence
of population distributions on other external information; 3) the difficulty in
retrieving temporal distribution patterns. In this paper, we first propose the
idea to generate dynamic population distributions in full-time series, then we
design dynamic population mapping via a deep neural network (DeepDPM), a model
that describes both spatial and temporal patterns using coarse data and point
of interest information. In DeepDPM, we utilize a super-resolution convolutional
neural network (SRCNN) based model to directly map coarse data into higher
resolution data, and a time-embedded long short-term memory model to
effectively capture the periodicity nature to smooth the finer-scaled results
from the previous static SRCNN model. We perform extensive experiments on a
real-life mobile dataset collected from Shanghai. Our results demonstrate that
DeepDPM outperforms previous state-of-the-art methods and a suite of frequent
data-mining approaches. Moreover, DeepDPM breaks through the limitation from
previous works in time dimension so that dynamic predictions in all-day time
slots can be obtained. | [
"cs.CV"
] |
Differentiable Architecture Search (DARTS) has attracted extensive attention
due to its efficiency in searching for cell structures. DARTS mainly focuses on
the operation search and derives the cell topology from the operation weights.
However, the operation weights cannot indicate the importance of the cell topology,
which results in poor topology rating correctness. To tackle this, we propose to
Decouple the Operation and Topology Search (DOTS), which decouples the topology
representation from operation weights and makes an explicit topology search.
DOTS is achieved by introducing a topology search space that contains
combinations of candidate edges. The proposed search space directly reflects
the search objective and can be easily extended to support a flexible number of
edges in the searched cell. Existing gradient-based NAS methods can be
incorporated into DOTS for further improvement by the topology search.
Considering that some operations (e.g., Skip-Connection) can affect the
topology, we propose a group operation search scheme to preserve
topology-related operations for a better topology search. The experiments on
CIFAR10/100 and ImageNet demonstrate that DOTS is an effective solution for
differentiable NAS. | [
"cs.CV"
] |
We present a determinantal point process (DPP) inspired alternative to
non-maximum suppression (NMS) which has become an integral step in all
state-of-the-art object detection frameworks. DPPs have been shown to encourage
diversity in subset selection problems. We pose NMS as a subset selection
problem and posit that directly incorporating DPP like framework can improve
the overall performance of the object detection system. We propose an
optimization problem which takes the same inputs as NMS, but introduces a novel
sub-modularity based diverse subset selection functional. Our results strongly
indicate that the modifications proposed in this paper can provide consistent
improvements to state-of-the-art object detection pipelines. | [
"cs.CV"
] |
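As a rough illustration of the abstract above, the sketch below replaces standard NMS with a greedy diverse subset selection that trades detection score against overlap with already selected boxes; the scoring rule is a generic stand-in, not the paper's exact submodular functional, and the threshold `lam` is an assumed hyperparameter.

```python
# Hedged sketch of diversity-aware suppression: greedily keep a box only if its
# score exceeds lam times its maximum IoU with boxes already kept. This is a
# generic DPP/submodular-flavoured stand-in for plain NMS, not the paper's
# exact objective. Inputs match standard NMS: boxes (N, 4), scores (N,).
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def diverse_nms(boxes, scores, lam=1.0, max_keep=100):
    selected = []
    for i in np.argsort(-scores):                       # high-score boxes first
        redundancy = max((iou(boxes[i], boxes[j]) for j in selected), default=0.0)
        if scores[i] - lam * redundancy > 0:            # marginal gain still positive
            selected.append(int(i))
        if len(selected) >= max_keep:
            break
    return selected
```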
In many practical applications, it is often difficult and expensive to obtain
enough large-scale labeled data to train deep neural networks to their full
capability. Therefore, transferring the learned knowledge from a separate,
labeled source domain to an unlabeled or sparsely labeled target domain becomes
an appealing alternative. However, direct transfer often results in significant
performance decay due to domain shift. Domain adaptation (DA) addresses this
problem by minimizing the impact of domain shift between the source and target
domains. Multi-source domain adaptation (MDA) is a powerful extension in which
the labeled data may be collected from multiple sources with different
distributions. Due to the success of DA methods and the prevalence of
multi-source data, MDA has attracted increasing attention in both academia and
industry. In this survey, we define various MDA strategies and summarize
available datasets for evaluation. We also compare modern MDA methods in the
deep learning era, including latent space transformation and intermediate
domain generation. Finally, we discuss future research directions for MDA. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Mining frequent episodes aims at recovering sequential patterns from temporal
data sequences, which can then be used to predict the occurrence of related
events in advance. On the other hand, gradual patterns that capture
co-variation of complex attributes in the form of "when X increases/decreases,
Y increases/decreases" play an important role in many real world applications
where huge volumes of complex numerical data must be handled. Recently, these
patterns have received attention from the data mining community exploring
temporal data who proposed methods to automatically extract gradual patterns
from temporal data. However, to the best of our knowledge, no method has been
proposed to extract gradual patterns that regularly appear at identical time
intervals in many sequences of temporal data, despite the fact that such
patterns may add knowledge to certain applications, such as e-commerce. In this
paper, we propose to extract co-variations of periodically repeating attributes
from the sequences of temporal data that we call seasonal gradual patterns. For
this purpose, we formulate the task of mining seasonal gradual patterns as the
problem of mining periodic patterns in multiple sequences and then we exploit
periodic pattern mining algorithms to extract seasonal gradual patterns. We
discuss specific features of these patterns and propose an approach for their
extraction based on mining periodic frequent patterns common to multiple
sequences. We also propose a new anti-monotonous support definition associated
to these seasonal gradual patterns. The illustrative results obtained from some
real-world data sets show that the proposed approach is efficient and that it
can extract small sets of patterns by filtering numerous non-seasonal patterns
to identify the seasonal ones. | [
"cs.LG",
"cs.AI"
] |
In this work, we evaluate the use of superpixel pooling layers in deep
network architectures for semantic segmentation. Superpixel pooling is a
flexible and efficient replacement for other pooling strategies that
incorporates spatial prior information. We propose a simple and efficient
GPU-implementation of the layer and explore several designs for the integration
of the layer into existing network architectures. We provide experimental
results on the IBSR and Cityscapes datasets, demonstrating that superpixel
pooling can be leveraged to consistently increase network accuracy with minimal
computational overhead. Source code is available at
https://github.com/bermanmaxim/superpixPool | [
"cs.CV",
"cs.LG"
] |
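A minimal PyTorch sketch of the superpixel pooling idea from the abstract above: average the features of all pixels belonging to each superpixel, given a precomputed superpixel label map. This uses plain `index_add_` rather than the paper's dedicated GPU kernel.

```python
import torch

def superpixel_avg_pool(features, sp_labels, num_superpixels):
    """features: (C, H, W) float tensor; sp_labels: (H, W) long tensor in [0, S)."""
    C = features.shape[0]
    flat_feat = features.reshape(C, -1)                       # (C, H*W)
    flat_lab = sp_labels.reshape(-1)                          # (H*W,)
    sums = torch.zeros(C, num_superpixels,
                       dtype=features.dtype, device=features.device)
    sums.index_add_(1, flat_lab, flat_feat)                   # sum per superpixel
    counts = torch.bincount(flat_lab, minlength=num_superpixels).clamp(min=1)
    return sums / counts.unsqueeze(0)                         # (C, S) descriptors
```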
Deep encoder-decoder based CNNs have advanced image inpainting methods for
hole filling. While existing methods recover structures and textures
step-by-step in the hole regions, they typically use two encoder-decoders for
separate recovery. The CNN features of each encoder are learned to capture
either missing structures or textures without considering them as a whole. The
insufficient utilization of these encoder features limits the performance of
recovering both structures and textures. In this paper, we propose a mutual
encoder-decoder CNN for joint recovery of both. We use CNN features from the
deep and shallow layers of the encoder to represent structures and textures of
an input image, respectively. The deep layer features are sent to a structure
branch and the shallow layer features are sent to a texture branch. In each
branch, we fill holes in multiple scales of the CNN features. The filled CNN
features from both branches are concatenated and then equalized. During feature
equalization, we reweigh channel attentions first and propose a bilateral
propagation activation function to enable spatial equalization. To this end,
the filled CNN features of structure and texture mutually benefit each other to
represent image content at all feature levels. We use the equalized feature to
supplement decoder features for output image generation through skip
connections. Experiments on the benchmark datasets show the proposed method is
effective to recover structures and textures and performs favorably against
state-of-the-art approaches. | [
"cs.CV"
] |
Advanced machine learning techniques have been used in remote sensing (RS)
applications such as crop mapping and yield prediction, but remain
under-utilized for tracking crop progress. In this study, we demonstrate the
use of agronomic knowledge of crop growth drivers in a Long Short-Term
Memory-based, Domain-guided neural network (DgNN) for in-season crop progress
estimation. The DgNN uses a branched structure and attention to separate
independent crop growth drivers and capture their varying importance throughout
the growing season. The DgNN is implemented for corn, using RS data in Iowa for
the period 2003-2019, with USDA crop progress reports used as ground truth.
State-wide DgNN performance shows significant improvement over sequential and
dense-only NN structures, and a widely-used Hidden Markov Model method. The
DgNN had a 3.5% higher Nash-Sutcliffe efficiency over all growth stages and 33%
more weeks with highest cosine similarity than the other NNs during test years.
The DgNN and Sequential NN were more robust during periods of abnormal crop
progress, though estimating the Silking-Grainfill transition was difficult for
all methods. Finally, Uniform Manifold Approximation and Projection
visualizations of layer activations showed how LSTM-based NNs separate crop
growth time-series differently from a dense-only structure. Results from this
study exhibit both the viability of NNs in crop growth stage estimation (CGSE)
and the benefits of using domain knowledge. The DgNN methodology presented here
can be extended to provide near-real time CGSE of other crops. | [
"cs.LG",
"cs.CV"
] |
Continuous symmetries and their breaking play a prominent role in
contemporary physics. Effective low-energy field theories around symmetry
breaking states explain diverse phenomena such as superconductivity, magnetism,
and the mass of nucleons. We show that such field theories can also be a useful
tool in machine learning, in particular for loss functions with continuous
symmetries that are spontaneously broken by random initializations. In this
paper, we illuminate our earlier published work (Bamler & Mandt, 2018) on this
topic more from the perspective of theoretical physics. We show that the
analogies between superconductivity and symmetry breaking in temporal
representation learning are rather deep, allowing us to formulate a gauge
theory of `charged' embedding vectors in time series models. We show that
making the loss function gauge invariant speeds up convergence in such models. | [
"stat.ML",
"cond-mat.stat-mech",
"cs.LG"
] |
Can we improve detection in the thermal domain by borrowing features from
rich domains like visual RGB? In this paper, we propose a pseudo-multimodal
object detector trained on natural image domain data to help improve the
performance of object detection in thermal images. We assume access to a
large-scale dataset in the visual RGB domain and relatively smaller dataset (in
terms of instances) in the thermal domain, as is common today. We propose the
use of well-known image-to-image translation frameworks to generate pseudo-RGB
equivalents of a given thermal image and then use a multi-modal architecture
for object detection in the thermal image. We show that our framework
outperforms existing benchmarks without the explicit need for paired training
examples from the two domains. We also show that our framework has the ability
to learn with less data from the thermal domain when using our approach. Our code
and pre-trained models are made available at
https://github.com/tdchaitanya/MMTOD | [
"cs.CV"
] |
Segmenting an entire 3D image often has high computational complexity and
requires large memory consumption; by contrast, performing volumetric
segmentation in a slice-by-slice manner is efficient but does not fully
leverage the 3D data. To address this challenge, we propose a multi-dimensional
attention network (MDA-Net) to efficiently integrate slice-wise, spatial, and
channel-wise attention into a U-Net based network, which results in high
segmentation accuracy with a low computational cost. We evaluate our model on
the MICCAI iSeg and IBSR datasets, and the experimental results demonstrate
consistent improvements over existing methods. | [
"cs.CV",
"eess.IV"
] |
Nuclear imaging has emerged as a promising research area in the medical field.
Images from different modalities present their own challenges. Positron Emission
Tomography (PET) images may help to precisely localize disease, assisting in
planning the right treatment for each case and saving valuable time. In this
paper, a novel Spatial Fuzzy C-Means (PET SFCM) clustering algorithm is
introduced on PET scan image datasets. The proposed algorithm incorporates
spatial neighborhood information into traditional FCM and updates the objective
function of each cluster. The algorithm is implemented and tested on a large
collection of data from patients with neurodegenerative brain disorders such as
Alzheimer's disease, and its effectiveness is demonstrated on real-world patient
data sets. Experimental results are compared with the conventional FCM and
K-Means clustering algorithms. The performance of PET SFCM provides satisfactory
results compared with the other two algorithms. | [
"cs.CV"
] |
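One common way to combine fuzzy c-means with spatial neighborhood information, in the spirit of the abstract above, is to re-weight each pixel's memberships by the memberships of its neighbours; the sketch below shows one such membership update under that assumption (the exponents `p` and `q` and the 3x3 box neighbourhood are illustrative, not necessarily the paper's).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sfcm_membership(image, centers, m=2.0, p=1.0, q=1.0):
    """image: (H, W) intensities; centers: (C,) current cluster centers."""
    d = np.abs(image[None, :, :] - centers[:, None, None]) + 1e-9     # (C, H, W)
    u = d ** (-2.0 / (m - 1.0))
    u = u / u.sum(axis=0, keepdims=True)                 # standard FCM memberships
    # Spatial function: average membership over each pixel's 3x3 neighbourhood.
    h = np.stack([uniform_filter(u[c], size=3) for c in range(len(centers))])
    u_spatial = (u ** p) * (h ** q)
    return u_spatial / u_spatial.sum(axis=0, keepdims=True)
```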
Video salient object detection aims at discovering the most visually
distinctive objects in a video. How to effectively take object motion into
consideration during video salient object detection is a critical issue.
Existing state-of-the-art methods either do not explicitly model and harvest
motion cues or ignore spatial contexts within optical flow images. In this
paper, we develop a multi-task motion guided video salient object detection
network, which learns to accomplish two sub-tasks using two sub-networks, one
sub-network for salient object detection in still images and the other for
motion saliency detection in optical flow images. We further introduce a series
of novel motion guided attention modules, which utilize the motion saliency
sub-network to attend and enhance the sub-network for still images. These two
sub-networks learn to adapt to each other by end-to-end training. Experimental
results demonstrate that the proposed method significantly outperforms existing
state-of-the-art algorithms on a wide range of benchmarks. We hope our simple
and effective approach will serve as a solid baseline and help ease future
research in video salient object detection. Code and models will be made
available. | [
"cs.CV"
] |
In this paper we propose a data augmentation method for time series with
irregular sampling, Time-Conditional Generative Adversarial Network (T-CGAN).
Our approach is based on Conditional Generative Adversarial Networks (CGAN),
where the generative step is implemented by a deconvolutional NN and the
discriminative step by a convolutional NN. Both the generator and the
discriminator are conditioned on the sampling timestamps, to learn the hidden
relationship between data and timestamps, and consequently to generate new time
series. We evaluate our model with synthetic and real-world datasets. For the
synthetic data, we compare the performance of a classifier trained with
T-CGAN-generated data, against the performance of the same classifier trained
on the original data. Results show that classifiers trained on T-CGAN-generated
data perform the same as classifiers trained on real data, even with very short
time series and small training sets. For the real world datasets, we compare
our method with other techniques of data augmentation for time series, such as
time slicing and time warping, over a classification problem with unbalanced
datasets. Results show that our method always outperforms the other approaches,
both in the case of regularly sampled and irregularly sampled time series. We
achieve particularly good performance in cases with a small training set and
short, noisy, irregularly-sampled time series. | [
"cs.LG",
"stat.ML"
] |
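A toy PyTorch sketch of the conditioning idea in the abstract above: both the generator and the discriminator receive the irregular sampling timestamps alongside the noise or the series values. Fully connected layers stand in for the (de)convolutional networks of the paper, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=16, series_len=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + series_len, hidden), nn.ReLU(),
            nn.Linear(hidden, series_len))               # one value per timestamp

    def forward(self, z, timestamps):                     # both (B, ...)
        return self.net(torch.cat([z, timestamps], dim=1))

class Discriminator(nn.Module):
    def __init__(self, series_len=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * series_len, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))                         # real/fake logit

    def forward(self, values, timestamps):
        return self.net(torch.cat([values, timestamps], dim=1))
```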
Convolutional Neural Networks have been the backbone of recent rapid progress
in Single-Image Super-Resolution. However, existing networks are very deep with
many network parameters, thus having a large memory footprint and being
challenging to train. We propose Large Receptive Field Networks which strive to
directly expand the receptive field of Super-Resolution networks without
increasing depth or parameter count. In particular, we use two different
methods to expand the network receptive field: 1-D separable kernels and atrous
convolutions. We conduct considerable experiments to study the performance of
various arrangement schemes of the 1-D separable kernels and atrous convolution
in terms of accuracy (PSNR / SSIM), parameter count, and speed, while focusing
on the more challenging high upscaling factors. Extensive benchmark evaluations
demonstrate the effectiveness of our approach. | [
"cs.CV"
] |
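The two receptive-field-expanding ingredients named in the abstract above are easy to write down; the sketch below shows a 1-D separable convolution pair (k x 1 followed by 1 x k) and an atrous (dilated) convolution in PyTorch, with illustrative channel counts rather than the paper's.

```python
import torch.nn as nn

def separable_1d_block(channels, k=5):
    """Two 1-D kernels covering a k x k receptive field with fewer parameters."""
    pad = k // 2
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=(k, 1), padding=(pad, 0)),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=(1, k), padding=(0, pad)),
        nn.ReLU(inplace=True))

def atrous_block(channels, k=3, dilation=2):
    """Dilated convolution: same parameter count, larger receptive field."""
    pad = dilation * (k - 1) // 2
    return nn.Conv2d(channels, channels, kernel_size=k,
                     padding=pad, dilation=dilation)
```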
Anomaly detection, along with the strongly related task of outlier detection,
plays a very important role in many fields of research. Especially within the
context of the automated analysis of video material recorded by surveillance
cameras, abnormal situations can be of a very different nature. To this end,
this work investigates Generative-Adversarial-Network-based (GAN) methods for
anomaly detection related to surveillance applications. The focus is on the
usage of static camera setups, since this kind of camera is one of the most
often used and belongs to the lower price segment. In order to address this
task, multiple subtasks are evaluated, including the influence of existing
optical flow methods for the incorporation of short-term temporal information,
different forms of network setups and losses for GANs, and the use of
morphological operations for further performance improvement. With these
extensions, we achieved up to 2.4% better results. Furthermore, the final method
reduced the anomaly detection error for GAN-based methods by about 42.8%. | [
"cs.CV"
] |
This paper presents a new 3D point cloud classification benchmark data set
with over four billion manually labelled points, meant as input for data-hungry
(deep) learning methods. We also discuss first submissions to the benchmark
that use deep convolutional neural networks (CNNs) as a work horse, which
already show remarkable performance improvements over state-of-the-art. CNNs
have become the de-facto standard for many tasks in computer vision and machine
learning like semantic segmentation or object detection in images, but have not
yet led to a true breakthrough for 3D point cloud labelling tasks due to a lack
of training data. With the massive data set presented in this paper, we aim at
closing this data gap to help unleash the full potential of deep learning
methods for 3D labelling tasks. Our semantic3D.net data set consists of dense
point clouds acquired with static terrestrial laser scanners. It contains 8
semantic classes and covers a wide range of urban outdoor scenes: churches,
streets, railroad tracks, squares, villages, soccer fields and castles. We
describe our labelling interface and show that our data set provides more dense
and complete point clouds with much higher overall number of labelled points
compared to those already available to the research community. We further
provide baseline method descriptions and comparison between methods submitted
to our online system. We hope semantic3D.net will pave the way for deep
learning methods in 3D point cloud labelling to learn richer, more general 3D
representations, and first submissions after only a few months indicate that
this might indeed be the case. | [
"cs.CV",
"cs.LG",
"cs.NE",
"cs.RO"
] |
We consider the problem of estimating a ranking on a set of items from noisy
pairwise comparisons given item features. We address the fact that pairwise
comparison data often reflects irrational choice, e.g. intransitivity. Our key
observation is that two items compared in isolation from other items may be
compared based on only a salient subset of features. Formalizing this
framework, we propose the salient feature preference model and prove a finite
sample complexity result for learning the parameters of our model and the
underlying ranking with maximum likelihood estimation. We also provide
empirical results that support our theoretical bounds and illustrate how our
model explains systematic intransitivity. Finally we demonstrate strong
performance of maximum likelihood estimation of our model on both synthetic
data and two real data sets: the UT Zappos50K data set and comparison data
about the compactness of legislative districts in the US. | [
"stat.ML",
"cs.LG"
] |
Face images are rich data items that are useful and can easily be collected
in many applications, such as in 1-to-1 face verification tasks in the domain
of security and surveillance systems. Multiple methods have been proposed to
protect an individual's privacy by perturbing the images to remove traces of
identifiable information, such as gender or race. However, significantly less
attention has been given to the problem of protecting images while maintaining
optimal task utility. In this paper, we study the novel problem of creating
privacy-preserving image representations with respect to a given utility task
by proposing a principled framework called the Adversarial Image Anonymizer
(AIA). AIA first creates an image representation using a generative model, then
enhances the learned image representations using adversarial learning to
preserve privacy and utility for a given task. Experiments were conducted on a
publicly available data set to demonstrate the effectiveness of AIA as a
privacy-preserving mechanism for face images. | [
"cs.CV",
"cs.CR"
] |
Segmentation of magnetic resonance (MR) images is a fundamental step in many
medical imaging-based applications. The recent implementation of deep
convolutional neural networks (CNNs) in image processing has been shown to have
significant impacts on medical image segmentation. Network training of
segmentation CNNs typically requires images and paired annotation data
representing pixel-wise tissue labels referred to as masks. However, the
supervised training of highly efficient CNNs with deeper structure and more
network parameters requires a large number of training images and paired tissue
masks. Thus, there is great need to develop a generalized CNN-based
segmentation method which would be applicable for a wide variety of MR image
datasets with different tissue contrasts. The purpose of this study was to
develop and evaluate a generalized CNN-based method for fully-automated
segmentation of different MR image datasets using a single set of annotated
training data. A technique called cycle-consistent generative adversarial
network (CycleGAN) is applied as the core of the proposed method to perform
image-to-image translation between MR image datasets with different tissue
contrasts. A joint segmentation network is incorporated into the adversarial
network to obtain additional segmentation functionality. The proposed method
was evaluated for segmenting bone and cartilage on two clinical knee MR image
datasets acquired at our institution using only a single set of annotated data
from a publicly available knee MR image dataset. The new technique may further
improve the applicability and efficiency of CNN-based segmentation of medical
images while eliminating the need for large amounts of annotated training data. | [
"cs.CV",
"cs.AI"
] |
Message passing neural networks have become a method of choice for learning
on graphs, in particular the prediction of chemical properties and the
acceleration of molecular dynamics studies. While they readily scale to large
training data sets, previous approaches have proven to be less data efficient
than kernel methods. We identify limitations of invariant representations as a
major reason and extend the message passing formulation to rotationally
equivariant representations. On this basis, we propose the polarizable atom
interaction neural network (PaiNN) and improve on common molecule benchmarks
over previous networks, while reducing model size and inference time. We
leverage the equivariant atomwise representations obtained by PaiNN for the
prediction of tensorial properties. Finally, we apply this to the simulation of
molecular spectra, achieving speedups of 4-5 orders of magnitude compared to
the electronic structure reference. | [
"cs.LG",
"physics.chem-ph"
] |
Point clouds and images could provide complementary information when
representing 3D objects. Fusing the two kinds of data usually helps to improve
the detection results. However, it is challenging to fuse the two data
modalities, due to their different characteristics and the interference from
the non-interest areas. To solve this problem, we propose a Multi-Branch Deep
Fusion Network (MBDF-Net) for 3D object detection. The proposed detector has
two stages. In the first stage, our multi-branch feature extraction network
utilizes Adaptive Attention Fusion (AAF) modules to produce cross-modal fusion
features from single-modal semantic features. In the second stage, we use a
region of interest (RoI)-pooled fusion module to generate enhanced local
features for refinement. A novel attention-based hybrid sampling strategy is
also proposed for selecting key points in the downsampling process. We evaluate
our approach on two widely used benchmark datasets including KITTI and
SUN-RGBD. The experimental results demonstrate the advantages of our method
over state-of-the-art approaches. | [
"cs.CV"
] |
Human gaze is known to be an intention-revealing signal in human
demonstrations of tasks. In this work, we use gaze cues from human
demonstrators to enhance the performance of agents trained via three popular
imitation learning methods -- behavioral cloning (BC), behavioral cloning from
observation (BCO), and Trajectory-ranked Reward EXtrapolation (T-REX). Based on
similarities between the attention of reinforcement learning agents and human
gaze, we propose a novel approach for utilizing gaze data in a computationally
efficient manner, as part of an auxiliary loss function, which guides a network
to have higher activations in image regions where the human's gaze fixated.
This work is a step towards augmenting any existing convolutional imitation
learning agent's training with auxiliary gaze data. Our auxiliary
coverage-based gaze loss (CGL) guides learning toward a better reward function
or policy, without adding any additional learnable parameters and without
requiring gaze data at test time. We find that our proposed approach improves
the performance by 95% for BC, 343% for BCO, and 390% for T-REX, averaged over
20 different Atari games. We also find that compared to a prior
state-of-the-art imitation learning method assisted by human gaze (AGIL), our
method achieves better performance, and is more efficient in terms of learning
with fewer demonstrations. We further interpret trained CGL agents with a
saliency map visualization method to explain their performance. At last, we
show that CGL can help alleviate a well-known causal confusion problem in
imitation learning. | [
"cs.LG",
"cs.AI"
] |
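The exact coverage-based gaze loss (CGL) is not specified in the abstract above; the sketch below is only one plausible reading, in which a parameter-free spatial attention map derived from a convolutional layer is pushed, via a KL term, to cover the regions of a human gaze heatmap of the same spatial size.

```python
import torch

def gaze_coverage_loss(conv_features, gaze_heatmap, eps=1e-8):
    """conv_features: (B, C, H, W); gaze_heatmap: (B, H, W), non-negative."""
    attn = conv_features.abs().sum(dim=1).flatten(1)           # (B, H*W)
    attn = attn / (attn.sum(dim=1, keepdim=True) + eps)
    gaze = gaze_heatmap.flatten(1)
    gaze = gaze / (gaze.sum(dim=1, keepdim=True) + eps)
    # KL(gaze || attn): large when the network ignores fixated regions.
    kl = (gaze * (torch.log(gaze + eps) - torch.log(attn + eps))).sum(dim=1)
    return kl.mean()

# total_loss = imitation_loss + lambda_gaze * gaze_coverage_loss(feats, gaze)
```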
To have a better understanding and usage of Convolution Neural Networks
(CNNs), the visualization and interpretation of CNNs has attracted increasing
attention in recent years. In particular, several Class Activation Mapping
(CAM) methods have been proposed to discover the connection between CNN's
decision and image regions. In spite of the reasonable visualization, lack of
clear and sufficient theoretical support is the main limitation of these
methods. In this paper, we introduce two axioms -- Conservation and Sensitivity
-- to the visualization paradigm of the CAM methods. Meanwhile, a dedicated
Axiom-based Grad-CAM (XGrad-CAM) is proposed to satisfy these axioms as much as
possible. Experiments demonstrate that XGrad-CAM is an enhanced version of
Grad-CAM in terms of conservation and sensitivity. It is able to achieve better
visualization performance than Grad-CAM, while also be class-discriminative and
easy-to-implement compared with Grad-CAM++ and Ablation-CAM. The code is
available at https://github.com/Fu0511/XGrad-CAM. | [
"cs.CV",
"cs.AI",
"cs.LG",
"eess.IV"
] |
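A short sketch of the XGrad-CAM weighting as we understand it from the abstract above: each channel's weight is its gradient averaged against the channel's own normalized activations (rather than the plain spatial mean used by Grad-CAM), and the weighted activation sum is passed through a ReLU.

```python
import torch
import torch.nn.functional as F

def xgrad_cam(activations, gradients, eps=1e-8):
    """activations, gradients: (B, K, H, W), gradients of the target class score."""
    norm_act = activations / (activations.sum(dim=(2, 3), keepdim=True) + eps)
    weights = (norm_act * gradients).sum(dim=(2, 3))            # (B, K)
    cam = (weights[:, :, None, None] * activations).sum(dim=1)
    return F.relu(cam)                                          # (B, H, W) saliency
```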
While Generative Adversarial Networks (GANs) are fundamental to many
generative modelling applications, they suffer from numerous issues. In this
work, we propose a principled framework to simultaneously mitigate two
fundamental issues in GANs: catastrophic forgetting of the discriminator and
mode collapse of the generator. We achieve this by employing for GANs a
contrastive learning and mutual information maximization approach, and perform
extensive analyses to understand sources of improvements. Our approach
significantly stabilizes GAN training and improves GAN performance for image
synthesis across five datasets under the same training and evaluation
conditions against state-of-the-art works. In particular, compared to the
state-of-the-art SSGAN, our approach does not suffer from poorer performance on
image domains such as faces, and instead improves performance significantly.
Our approach is simple to implement and practical: it involves only one
auxiliary objective, has a low computational cost, and performs robustly across
a wide range of training settings and datasets without any hyperparameter
tuning. For reproducibility, our code is available in Mimicry:
https://github.com/kwotsin/mimicry. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Exponential-family harmoniums (EFHs), which extend restricted Boltzmann
machines (RBMs) from Bernoulli random variables to other exponential families
(Welling et al., 2005), are generative models that can be trained with
unsupervised-learning techniques, like contrastive divergence (Hinton et al.
2006; Hinton, 2002), as density estimators for static data. Methods for
extending RBMs--and likewise EFHs--to data with temporal dependencies have been
proposed previously (Sutskever and Hinton, 2007; Sutskever et al., 2009), the
learning procedure being validated by qualitative assessment of the generative
model. Here we propose and justify, from a very different perspective, an
alternative training procedure, proving sufficient conditions for optimal
inference under that procedure. The resulting algorithm can be learned with
only forward passes through the data--backprop-through-time is not required, as
in previous approaches. The proof exploits a recent result about information
retention in density estimators (Makin and Sabes, 2015), and applies it to a
"recurrent EFH" (rEFH) by induction. Finally, we demonstrate optimality by
simulation, testing the rEFH: (1) as a filter on training data generated with a
linear dynamical system, the position of which is noisily reported by a
population of "neurons" with Poisson-distributed spike counts; and (2) with the
qualitative experiments proposed by Sutskever et al. (2009). | [
"cs.LG",
"stat.ML"
] |
This paper presents HoughNet, a one-stage, anchor-free, voting-based,
bottom-up object detection method. Inspired by the Generalized Hough Transform,
HoughNet determines the presence of an object at a certain location by the sum
of the votes cast on that location. Votes are collected from both near and
long-distance locations based on a log-polar vote field. Thanks to this voting
mechanism, HoughNet is able to integrate both near and long-range,
class-conditional evidence for visual recognition, thereby generalizing and
enhancing current object detection methodology, which typically relies on only
local evidence. On the COCO dataset, HoughNet's best model achieves 46.4 $AP$
(and 65.1 $AP_{50}$), performing on par with the state-of-the-art in bottom-up
object detection and outperforming most major one-stage and two-stage methods.
We further validate the effectiveness of our proposal in another task, namely,
"labels to photo" image generation by integrating the voting module of HoughNet
to two different GAN models and showing that the accuracy is significantly
improved in both cases. Code is available at
https://github.com/nerminsamet/houghnet. | [
"cs.CV"
] |
Variational autoencoders (VAEs) defined over SMILES string and graph-based
representations of molecules promise to improve the optimization of molecular
properties, thereby revolutionizing the pharmaceuticals and materials
industries. However, these VAEs are hindered by the non-unique nature of SMILES
strings and the computational cost of graph convolutions. To efficiently pass
messages along all paths through the molecular graph, we encode multiple SMILES
strings of a single molecule using a set of stacked recurrent neural networks,
pooling hidden representations of each atom between SMILES representations, and
use attentional pooling to build a final fixed-length latent representation. By
then decoding to a disjoint set of SMILES strings of the molecule, our All
SMILES VAE learns an almost bijective mapping between molecules and latent
representations near the high-probability-mass subspace of the prior. Our
SMILES-derived but molecule-based latent representations significantly surpass
the state-of-the-art in a variety of fully- and semi-supervised property
regression and molecular property optimization tasks. | [
"cs.LG",
"stat.ML"
] |
Existing popular unsupervised embedding learning methods focus on enhancing
the instance-level local discrimination of the given unlabeled images by
exploring various negative data. However, existing sample outliers, which
exhibit large intra-class divergences or small inter-class variations, severely
limit their learning performance. We justify that the performance limitation is
caused by the gradient vanishing on these sample outliers. Moreover, the
shortage of positive data and disregard for global discrimination consideration
also pose critical issues for unsupervised learning but are always ignored by
existing methods. To handle these issues, we propose a novel solution to
explicitly model and directly explore the uncertainty of the given unlabeled
learning samples. Instead of learning a deterministic feature point for each
sample in the embedding space, we propose to represent a sample by a stochastic
Gaussian with the mean vector depicting its space localization and covariance
vector representing the sample uncertainty. We leverage such uncertainty
modeling as momentum for the learning, which is helpful for tackling the outliers.
Furthermore, abundant positive candidates can be readily drawn from the learned
instance-specific distributions which are further adopted to mitigate the
aforementioned issues. Thorough rationale analyses and extensive experiments
are presented to verify our superiority. | [
"cs.CV"
] |
Human action analysis and understanding in videos is an important and
challenging task. Although substantial progress has been made in past years,
the explainability of existing methods is still limited. In this work, we
propose a novel action reasoning framework that uses prior knowledge to explain
semantic-level observations of video state changes. Our method takes advantage
of both classical reasoning and modern deep learning approaches. Specifically,
prior knowledge is defined as the information of a target video domain,
including a set of objects, attributes and relationships in the target video
domain, as well as relevant actions defined by the temporal attribute and
relationship changes (i.e. state transitions). Given a video sequence, we first
generate a scene graph on each frame to represent concerned objects, attributes
and relationships. Then those scene graphs are linked by tracking objects
across frames to form a spatio-temporal graph (also called video graph), which
represents semantic-level video states. Finally, by sequentially examining each
state transition in the video graph, our method can detect and explain how
those actions are executed with prior knowledge, just like the logical manner
of thinking by humans. Compared to previous works, the action reasoning results
of our method can be explained by both logical rules and semantic-level
observations of video content changes. Besides, the proposed method can be used
to detect multiple concurrent actions with detailed information, such as who
(particular objects), when (time), where (object locations) and how (what kind
of changes). Experiments on a re-annotated dataset CAD-120 show the
effectiveness of our method. | [
"cs.CV"
] |
We introduce a family of multilayer graph kernels and establish new links
between graph convolutional neural networks and kernel methods. Our approach
generalizes convolutional kernel networks to graph-structured data, by
representing graphs as a sequence of kernel feature maps, where each node
carries information about local graph substructures. On the one hand, the
kernel point of view offers an unsupervised, expressive, and easy-to-regularize
data representation, which is useful when limited samples are available. On the
other hand, our model can also be trained end-to-end on large-scale data,
leading to new types of graph convolutional neural networks. We show that our
method achieves competitive performance on several graph classification
benchmarks, while offering simple model interpretation. Our code is freely
available at https://github.com/claying/GCKN. | [
"stat.ML",
"cs.LG"
] |
Graph neural networks (GNNs) use graph convolutions to exploit network
invariances and learn meaningful features from network data. However, on
large-scale graphs, convolutions incur a high computational cost, leading to
scalability limitations. Leveraging the graphon -- the limit object of a graph
-- in this paper we consider the problem of learning a graphon neural network
(WNN) -- the limit object of a GNN -- by training GNNs on graphs sampled from
the graphon via Bernoulli sampling. Under smoothness conditions, we show that: (i) the
expected distance between the learning steps on the GNN and on the WNN
decreases asymptotically with the size of the graph, and (ii) when training on
a sequence of growing graphs, gradient descent follows the learning direction
of the WNN. Inspired by these results, we propose a novel algorithm to learn
GNNs on large-scale graphs that, starting from a moderate number of nodes,
successively increases the size of the graph during training. This algorithm is
benchmarked on both a recommendation system and a decentralized control problem
where it is shown to retain performance comparable to its large-scale
counterpart at a reduced computational cost. | [
"cs.LG",
"eess.SP"
] |
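The training curriculum described in the abstract above can be sketched as a loop over growing graph sizes; `sample_graph`, `gnn`, and `loss_fn` below are placeholders supplied by the user, and the size schedule is illustrative.

```python
import torch

def train_on_growing_graphs(gnn, sample_graph, loss_fn,
                            sizes=(100, 200, 400, 800),
                            steps_per_size=500, lr=1e-3):
    """Reuse the same GNN weights while the sampled graph grows."""
    opt = torch.optim.Adam(gnn.parameters(), lr=lr)
    for n in sizes:                                   # successively larger graphs
        adj, feats, targets = sample_graph(n)         # e.g. Bernoulli graphon sample
        for _ in range(steps_per_size):
            opt.zero_grad()
            loss = loss_fn(gnn(adj, feats), targets)
            loss.backward()
            opt.step()
    return gnn
```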
We present a method for calibrating the Ensemble of Exemplar SVMs model.
Unlike the standard approach, which calibrates each SVM independently, our
method optimizes their joint performance as an ensemble. We formulate joint
calibration as a constrained optimization problem and devise an efficient
optimization algorithm to find its global optimum. The algorithm dynamically
discards parts of the solution space that cannot contain the optimum early on,
making the optimization computationally feasible. We experiment with EE-SVM
trained on state-of-the-art CNN descriptors. Results on the ILSVRC 2014 and
PASCAL VOC 2007 datasets show that (i) our joint calibration procedure
outperforms independent calibration on the task of classifying windows as
belonging to an object class or not; and (ii) this improved window classifier
leads to better performance on the object detection task. | [
"cs.CV"
] |
General unsupervised learning is a long-standing conceptual problem in
machine learning. Supervised learning is successful because it can be solved by
the minimization of the training error cost function. Unsupervised learning is
not as successful, because the unsupervised objective may be unrelated to the
supervised task of interest. For example, density modelling and
reconstruction have often been used for unsupervised learning, but they did not
produce the sought-after performance gains, because they have no knowledge of
the supervised tasks.
In this paper, we present an unsupervised cost function which we name the
Output Distribution Matching (ODM) cost, which measures a divergence between
the distribution of predictions and distributions of labels. The ODM cost is
appealing because it is consistent with the supervised cost in the following
sense: a perfect supervised classifier is also perfect according to the ODM
cost. Therefore, by aggressively optimizing the ODM cost, we are almost
guaranteed to improve our supervised performance whenever the space of possible
predictions is exponentially large.
We demonstrate that the ODM cost works well on a number of small and
semi-artificial datasets using no (or almost no) labelled training cases.
Finally, we show that the ODM cost can be used for one-shot domain adaptation,
which allows the model to classify inputs that differ from the input
distribution in significant ways without the need for prior exposure to the new
domain. | [
"cs.LG"
] |
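One simple instance of matching output distributions to label distributions, in the spirit of the ODM cost above, is a KL divergence between the batch-averaged predicted class distribution and a known marginal label distribution; the sketch below implements that marginal-matching variant, which is not necessarily the exact divergence used in the paper.

```python
import torch
import torch.nn.functional as F

def odm_marginal_cost(logits, label_marginal, eps=1e-8):
    """logits: (B, C) on unlabelled inputs; label_marginal: (C,) target prior."""
    pred_marginal = F.softmax(logits, dim=1).mean(dim=0)        # (C,)
    kl = (label_marginal *
          (torch.log(label_marginal + eps) - torch.log(pred_marginal + eps))).sum()
    return kl
```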
The quality of the image representations obtained from self-supervised
learning depends strongly on the type of data augmentations used in the
learning formulation. Recent papers have ported these methods from still images
to videos and found that leveraging both audio and video signals yields strong
gains; however, they did not find that spatial augmentations such as cropping,
which are very important for still images, work as well for videos. In this
paper, we improve these formulations in two ways unique to the spatio-temporal
aspect of videos. First, for space, we show that spatial augmentations such as
cropping do work well for videos too, but that previous implementations, due to
the high processing and memory cost, could not do this at a scale sufficient
for it to work well. To address this issue, we first introduce Feature Crop, a
method to simulate such augmentations much more efficiently directly in feature
space. Second, we show that as opposed to naive average pooling, the use of
transformer-based attention improves performance significantly, and is well
suited for processing feature crops. Combining both of our discoveries into a
new method, Space-time Crop & Attend (STiCA) we achieve state-of-the-art
performance across multiple video-representation learning benchmarks. In
particular, we achieve new state-of-the-art accuracies of 67.0% on HMDB-51 and
93.1% on UCF-101 when pre-training on Kinetics-400. | [
"cs.CV"
] |
Vehicle Re-identification is a challenging task due to intra-class
variability and inter-class similarity across non-overlapping cameras. To
tackle these problems, recently proposed methods require additional annotation
to extract more features for false positive image exclusion. In this paper, we
propose a model powered by adaptive attention modules that requires fewer label
annotations but still out-performs the previous models. We also include a
re-ranking method that takes account of the importance of metadata feature
embeddings in our paper. The proposed method is evaluated on CVPR AI City
Challenge 2020 dataset and achieves mAP of 37.25% in Track 2. | [
"cs.CV"
] |
Molecules have seemed like a natural fit for deep learning's ability to handle
complex structure through representation learning, given enough data. However,
such a typically continuous representation is not natural for understanding
chemical space as a domain and is particular to samples and their differences.
We focus on exploring a natural structure for representing chemical space as a
structured domain: embedding drug-like chemical space into an enumerable
hypergraph based on scaffold classes linked through an inclusion operator. This
paper shows how molecules form classes of scaffolds, how scaffolds relate to
each other in a hypergraph, and how this structure of scaffolds is natural for drug
discovery workflows such as predicting properties and optimizing molecular
structures. We compare the assumptions and utility of various embeddings of
molecules, such as their respective induced distance metrics, their
extendibility to represent chemical space as a structured domain, and the
consequences of utilizing the structure for learning tasks. | [
"cs.LG"
] |
A unified system integrating a compact object detector and a surrounding
environmental condition classifier for enhancing the robustness of object
detection scheme in advanced driver assistance systems (ADAS) is proposed in
this paper. ADAS are invented to improve traffic safety and effectiveness in
autonomous driving systems where object detection plays an extremely important
role. However, modern object detectors integrated in ADAS are still unstable
due to high latency and the variation of the environmental contexts in the
deployment phase. Our system is proposed to address the aforementioned
problems. The proposed system includes two main components: (1) a compact
one-stage object detector which is expected to be able to perform at a
comparable accuracy compared to state-of-the-art object detectors, and (2) an
environmental condition detector that helps to send a warning signal to the
cloud in case the self-driving car needs human actions due to the significance
of the situation. The empirical results prove the reliability and the
scalability of the proposed system to realistic scenarios. | [
"cs.CV"
] |
We present a new pipeline for holistic 3D scene understanding from a single
image, which could predict object shapes, object poses, and scene layout. As it
is a highly ill-posed problem, existing methods usually suffer from inaccurate
estimation of both shapes and layout especially for the cluttered scene due to
the heavy occlusion between objects. We propose to utilize the latest deep
implicit representation to solve this challenge. We not only propose an
image-based local structured implicit network to improve the object shape
estimation, but also refine the 3D object pose and scene layout via a novel
implicit scene graph neural network that exploits the implicit local object
features. A novel physical violation loss is also proposed to avoid incorrect
context between objects. Extensive experiments demonstrate that our method
outperforms the state-of-the-art methods in terms of object shape, scene layout
estimation, and 3D object detection. | [
"cs.CV"
] |
Improving students' academic performance is not an easy task for the academic
community of higher learning. The academic performance of engineering and
science students during their first year at university is a turning point in
their educational path and usually affects their Grade Point Average (GPA) in a
decisive manner. The students' evaluation factors, such as class quizzes,
mid-term and final exams, assignments, and lab work, are studied. It is
recommended that all this correlated information be conveyed to the class
teacher before the final exam is conducted. This study will help teachers to
reduce the drop-out ratio to a significant level and improve the performance of
students. In this paper, we present a hybrid procedure based on the Decision
Tree data mining method and data clustering that enables academicians to
predict students' GPA, based on which the instructor can take the necessary
steps to improve student academic performance. | [
"cs.LG"
] |
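A hedged scikit-learn sketch of the hybrid procedure described above: cluster students on their in-term assessment marks, append the cluster id as an extra feature, and fit a decision tree to predict a GPA band. Column meanings, the number of clusters, and the tree depth are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def fit_hybrid(marks, gpa_band, n_clusters=3):
    """marks: (N, F) quiz/mid/assignment/lab scores; gpa_band: (N,) class labels."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(marks)
    X = np.hstack([marks, km.labels_[:, None]])        # add cluster id as a feature
    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, gpa_band)
    return km, tree

def predict_hybrid(km, tree, marks):
    X = np.hstack([marks, km.predict(marks)[:, None]])
    return tree.predict(X)
```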
Reinforcement learning is a framework for interactive decision-making with
incentives sequentially revealed across time without a system dynamics model.
Due to its scaling to continuous spaces, we focus on policy search where one
iteratively improves a parameterized policy with stochastic policy gradient
(PG) updates. In tabular Markov Decision Problems (MDPs), under persistent
exploration and suitable parameterization, global optimality may be obtained.
By contrast, in continuous space, the non-convexity poses a pathological
challenge as evidenced by existing convergence results being mostly limited to
stationarity or arbitrary local extrema. To close this gap, we step towards
persistent exploration in continuous space through policy parameterizations
defined by heavier-tailed distributions with tail-index parameter
alpha, which increases the likelihood of jumping in state space. Doing so
invalidates smoothness conditions of the score function common to PG. Thus, we
establish how the convergence rate to stationarity depends on the policy's tail
index alpha, a Holder continuity parameter, integrability conditions, and an
exploration tolerance parameter introduced here for the first time. Further, we
characterize the dependence of the set of local maxima on the tail index
through an exit and transition time analysis of a suitably defined Markov
chain, identifying that policies associated with Levy Processes of a heavier
tail converge to wider peaks. This phenomenon yields improved stability to
perturbations in supervised learning, which we corroborate also manifests in
improved performance of policy search, especially when myopic and farsighted
incentives are misaligned. | [
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY",
"math.OC",
"stat.ML"
] |
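To make the heavy-tailed parameterization above concrete, the sketch below uses a Student-t policy whose degrees-of-freedom play the role of a tail index (smaller values give heavier tails and larger exploratory jumps); this is only a convenient stand-in for the alpha-stable / Levy-type tails analyzed in the paper.

```python
import torch
import torch.nn as nn
from torch.distributions import StudentT

class HeavyTailedPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, df=1.5, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.loc = nn.Linear(hidden, act_dim)
        self.log_scale = nn.Parameter(torch.zeros(act_dim))
        self.df = df                                     # tail-index-like parameter

    def dist(self, obs):
        h = self.body(obs)
        return StudentT(self.df, self.loc(h), self.log_scale.exp())

    def reinforce_loss(self, obs, actions, returns):
        logp = self.dist(obs).log_prob(actions).sum(dim=-1)     # (B,)
        return -(logp * returns).mean()                         # score-function PG
```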
Wavelet scattering networks, which are convolutional neural networks (CNNs)
with fixed filters and weights, are promising tools for image analysis.
Imposing symmetry on image statistics can improve human interpretability, aid
in generalization, and provide dimension reduction. In this work, we introduce
a fast-to-compute, translationally invariant and rotationally equivariant
wavelet scattering network (EqWS) and filter bank of wavelets (triglets). We
demonstrate the interpretability and quantify the invariance/equivariance of
the coefficients, briefly commenting on difficulties with implementing scale
equivariance. On MNIST, we show that training on a rotationally invariant
reduction of the coefficients maintains rotational invariance when generalized
to test data and visualize residual symmetry breaking terms. Rotation
equivariance is leveraged to estimate the rotation angle of digits and
reconstruct the full rotation dependence of each coefficient from a single
angle. We benchmark EqWS with linear classifiers on EMNIST and CIFAR-10/100,
introducing a new second-order, cross-color channel coupling for the color
images. We conclude by comparing the performance of an isotropic reduction of
the scattering coefficients and RWST, a previous coefficient reduction, on an
isotropic classification of magnetohydrodynamic simulations with astrophysical
relevance. | [
"cs.CV",
"astro-ph.IM"
] |
The existing biclustering algorithms for finding feature relation based
biclusters often depend on assumptions like monotonicity or linearity. Though a
few algorithms overcome this problem by using density-based methods, they tend
to miss many biclusters because they use global criteria for identifying
dense regions. The proposed method, RelDenClu uses the local variations in
marginal and joint densities for each pair of features to find the subset of
observations, which forms the bases of the relation between them. It then finds
the set of features connected by a common set of observations, resulting in a
bicluster.
To show the effectiveness of the proposed methodology, experimentation has
been carried out on fifteen types of simulated datasets. Further, it has been
applied to six real-life datasets. For three of these real-life datasets, the
proposed method is used for unsupervised learning, while for other three
real-life datasets it is used as an aid to supervised learning. For all the
datasets the performance of the proposed method is compared with that of seven
different state-of-the-art algorithms and the proposed algorithm is seen to
produce better results. The efficacy of the proposed algorithm is also seen in its
use on a COVID-19 dataset for identifying some features (genetic, demographic
and others) that are likely to affect the spread of COVID-19. | [
"cs.CV",
"cs.LG"
] |
Deep neural networks are known to be vulnerable to adversarial examples,
i.e., images that are maliciously perturbed to fool the model. Generating
adversarial examples has been mostly limited to finding small perturbations
that maximize the model prediction error. Such images, however, contain
artificial perturbations that make them somewhat distinguishable from natural
images. This property is used by several defense methods to counter adversarial
examples by applying denoising filters or training the model to be robust to
small perturbations.
In this paper, we introduce a new class of adversarial examples, namely
"Semantic Adversarial Examples," as images that are arbitrarily perturbed to
fool the model, but in such a way that the modified image semantically
represents the same object as the original image. We formulate the problem of
generating such images as a constrained optimization problem and develop an
adversarial transformation based on the shape bias property of human cognitive
system. In our method, we generate adversarial images by first converting the
RGB image into the HSV (Hue, Saturation and Value) color space and then
randomly shifting the Hue and Saturation components, while keeping the Value
component the same. Our experimental results on CIFAR10 dataset show that the
accuracy of VGG16 network on adversarial color-shifted images is 5.7%. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
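The colour-space manipulation described above is straightforward to reproduce; the sketch below shifts hue (with wrap-around) and saturation while leaving the value channel untouched, using scikit-image for the conversions. The shift ranges are illustrative.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def semantic_color_shift(image_rgb, rng=None):
    """image_rgb: float array in [0, 1] with shape (H, W, 3)."""
    rng = rng or np.random.default_rng()
    hsv = rgb2hsv(image_rgb)
    hsv[..., 0] = (hsv[..., 0] + rng.uniform(0.0, 1.0)) % 1.0           # shift hue
    hsv[..., 1] = np.clip(hsv[..., 1] + rng.uniform(-0.5, 0.5), 0.0, 1.0)
    return hsv2rgb(hsv)                                  # value channel unchanged
```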
We present an unsupervised learning approach to recover 3D human pose from 2D
skeletal joints extracted from a single image. Our method does not require any
multi-view image data, 3D skeletons, correspondences between 2D-3D points, or
use previously learned 3D priors during training. A lifting network accepts 2D
landmarks as inputs and generates a corresponding 3D skeleton estimate. During
training, the recovered 3D skeleton is reprojected on random camera viewpoints
to generate new "synthetic" 2D poses. By lifting the synthetic 2D poses back to
3D and re-projecting them in the original camera view, we can define
self-consistency loss both in 3D and in 2D. The training can thus be self
supervised by exploiting the geometric self-consistency of the
lift-reproject-lift process. We show that self-consistency alone is not
sufficient to generate realistic skeletons, however adding a 2D pose
discriminator enables the lifter to output valid 3D poses. Additionally, to
learn from 2D poses "in the wild", we train an unsupervised 2D domain adapter
network to allow for an expansion of 2D data. This improves results and
demonstrates the usefulness of 2D pose data for unsupervised 3D lifting.
Results on Human3.6M dataset for 3D human pose estimation demonstrate that our
approach improves upon the previous unsupervised methods by 30% and outperforms
many weakly supervised approaches that explicitly use 3D data. | [
"cs.CV"
] |
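A hedged sketch of the lift-reproject-lift self-consistency idea above: the 2D pose is lifted to 3D, rotated by a random camera rotation and projected to a synthetic 2D pose, lifted again, rotated back, and compared to the original estimates in both 3D and 2D. `lifter` and the rotation sampler are placeholders, and an orthographic projection (dropping depth) is assumed.

```python
import torch

def project(pose3d):                        # (B, J, 3) -> (B, J, 2), orthographic
    return pose3d[..., :2]

def self_consistency_losses(lifter, pose2d, sample_rotation):
    pose3d = lifter(pose2d)                              # (B, J, 3)
    R = sample_rotation(pose3d.shape[0])                 # (B, 3, 3) random cameras
    synth2d = project(pose3d @ R.transpose(1, 2))        # synthetic 2D view
    pose3d_back = lifter(synth2d) @ R                    # lift again, undo rotation
    loss3d = (pose3d_back - pose3d).pow(2).mean()
    loss2d = (project(pose3d_back) - pose2d).pow(2).mean()
    return loss3d, loss2d
```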
We aim at developing and improving the imbalanced business risk modeling via
jointly using proper evaluation criteria, resampling, cross-validation,
classifier regularization, and ensembling techniques. Area Under the Receiver
Operating Characteristic Curve (AUC of ROC) is used for model comparison based
on 10-fold cross validation. Two undersampling strategies including random
undersampling (RUS) and cluster centroid undersampling (CCUS), as well as two
oversampling methods including random oversampling (ROS) and Synthetic Minority
Oversampling Technique (SMOTE), are applied. Three highly interpretable
classifiers, including logistic regression without regularization (LR),
L1-regularized LR (L1LR), and decision tree (DT) are implemented. Two
ensembling techniques, including Bagging and Boosting, are applied on the DT
classifier for further model improvement. The results show that Boosting on DT
using the oversampled data containing 50% positives via SMOTE is the optimal
model, achieving AUC, recall, and F1 scores of 0.8633, 0.9260, and 0.8907,
respectively. | [
"stat.ML",
"cs.LG"
] |
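The reported best configuration (SMOTE to roughly 50% positives, then boosting
on a decision tree, evaluated with 10-fold cross-validated AUC) can be sketched
with scikit-learn and imbalanced-learn. The synthetic data, AdaBoost as the
boosting variant, and all hyperparameters are assumptions for illustration
rather than the authors' exact setup.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

# Imbalanced toy data standing in for the business risk dataset (hypothetical).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)

# Oversample only inside each training fold to avoid leakage into validation.
pipeline = Pipeline([
    ("smote", SMOTE(sampling_strategy=1.0, random_state=0)),   # 50% positives
    # AdaBoost's default base learner is a depth-1 decision tree (a stump).
    ("boosted_dt", AdaBoostClassifier(n_estimators=200, random_state=0)),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(pipeline, X, y, scoring="roc_auc", cv=cv)
print(f"10-fold AUC: {auc.mean():.4f} +/- {auc.std():.4f}")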
Recently deep residual learning with residual units for training very deep
neural networks advanced the state-of-the-art performance on 2D image
recognition tasks, e.g., object detection and segmentation. However, how to
fully leverage contextual representations for recognition tasks from volumetric
data has not been well studied, especially in the field of medical image
computing, where a majority of image modalities are in volumetric format. In
this paper we explore the deep residual learning on the task of volumetric
brain segmentation. There are at least two main contributions in our work.
First, we propose a deep voxelwise residual network, referred to as VoxResNet,
which borrows the spirit of deep residual learning in 2D image recognition
tasks, and is extended into a 3D variant for handling volumetric data. Second,
an auto-context version of VoxResNet is proposed by seamlessly integrating the
low-level image appearance features, implicit shape information and high-level
context together for further improving the volumetric segmentation performance.
Extensive experiments on the challenging benchmark of brain segmentation from
magnetic resonance (MR) images corroborated the efficacy of our proposed method
in dealing with volumetric data. We believe this work unravels the potential of
3D deep learning to advance the recognition performance on volumetric image
segmentation. | [
"cs.CV"
] |
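A minimal PyTorch sketch of a voxelwise residual unit in the spirit of
VoxResNet is shown below: the 2D residual unit extended to volumetric data with
Conv3d. The layer ordering, channel width, and pre-activation design are
assumptions for illustration, not the paper's exact configuration.

import torch
import torch.nn as nn


class VoxResBlock(nn.Module):
    """Pre-activation residual unit operating on 5D tensors (N, C, D, H, W)."""

    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # identity skip connection


# Example: one block applied to a toy MR sub-volume.
volume = torch.randn(1, 64, 32, 32, 32)
print(VoxResBlock(64)(volume).shape)   # torch.Size([1, 64, 32, 32, 32])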
Deep convolutional neural network (DCNN) is the state-of-the-art method for
image segmentation, one of the key challenging computer vision tasks.
However, DCNN requires a lot of training images with corresponding image masks
to get a good segmentation result. Image annotation software which is easy to
use and allows fast image mask generation is in great demand. To the best of
our knowledge, all existing image annotation software support only drawing
bounding polygons, bounding boxes, or bounding ellipses to mark target objects.
These existing software are inefficient when targeting objects that have
irregular shapes (e.g., defects in fabric images or tire images). In this paper
we design an easy-to-use image annotation software called Mask Editor for image
mask generation. Mask Editor allows drawing any bounding curve to mark objects
and improves efficiency to mark objects with irregular shapes. Mask Editor also
supports drawing bounding polygons, drawing bounding boxes, drawing bounding
ellipses, painting, erasing, super-pixel-marking, image cropping, multi-class
masks, mask loading, and mask modifying. | [
"cs.CV"
] |
Imaging the atmosphere using ground-based sky cameras is a popular approach
to study various atmospheric phenomena. However, it usually focuses on the
daytime. Nighttime sky/cloud images are darker and noisier, and thus harder to
analyze. An accurate segmentation of sky/cloud images is already challenging
because of the clouds' non-rigid structure and size, and the lower and less
stable illumination of the night sky increases the difficulty. Nonetheless,
nighttime cloud imaging is essential in certain applications, such as
continuous weather analysis and satellite communication.
In this paper, we propose a superpixel-based method to segment nighttime
sky/cloud images. We also release the first nighttime sky/cloud image
segmentation database to the research community. The experimental results show
the efficacy of our proposed algorithm for nighttime images. | [
"cs.CV"
] |
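A superpixel-based sky/cloud segmentation pipeline of the kind described above
can be sketched with scikit-image: group pixels with SLIC, then label each
superpixel from a simple per-superpixel statistic. The blue/red-ratio feature
and the fixed threshold are illustrative assumptions; the paper's actual
features and decision rule may differ.

import numpy as np
from skimage.segmentation import slic


def segment_night_sky(image_rgb, n_segments=300, threshold=0.8):
    """Return a boolean cloud mask for an RGB image with values in [0, 1]."""
    segments = slic(image_rgb, n_segments=n_segments, compactness=10,
                    start_label=0)
    # Blue/red ratio: clear sky tends to be relatively bluer than cloud.
    ratio = (image_rgb[..., 2] + 1e-6) / (image_rgb[..., 0] + 1e-6)
    mask = np.zeros(segments.shape, dtype=bool)
    for label in np.unique(segments):
        region = segments == label
        mask[region] = ratio[region].mean() < threshold   # low ratio -> cloud
    return mask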
Health care is one of the most exciting frontiers in data mining and machine
learning. Successful adoption of electronic health records (EHRs) created an
explosion in digital clinical data available for analysis, but progress in
machine learning for healthcare research has been difficult to measure because
of the absence of publicly available benchmark data sets. To address this
problem, we propose four clinical prediction benchmarks using data derived from
the publicly available Medical Information Mart for Intensive Care (MIMIC-III)
database. These tasks cover a range of clinical problems including modeling
risk of mortality, forecasting length of stay, detecting physiologic decline,
and phenotype classification. We propose strong linear and neural baselines for
all four tasks and evaluate the effect of deep supervision, multitask training
and data-specific architectural modifications on the performance of neural
models. | [
"stat.ML",
"cs.LG"
] |
Compared to RGB semantic segmentation, RGBD semantic segmentation can achieve
better performance by taking depth information into consideration. However, it
is still problematic for contemporary segmenters to effectively exploit RGBD
information since the feature distributions of RGB and depth (D) images vary
significantly in different scenes. In this paper, we propose an Attention
Complementary Network (ACNet) that selectively gathers features from RGB and
depth branches. The main contributions lie in the Attention Complementary
Module (ACM) and the architecture with three parallel branches. More precisely,
ACM is a channel attention-based module that extracts weighted features from
RGB and depth branches. The architecture preserves the inference of the
original RGB and depth branches, and enables the fusion branch at the same
time. Based on the above structures, ACNet is capable of exploiting more
high-quality features from different channels. We evaluate our model on
SUN-RGBD and NYUDv2 datasets, and prove that our model outperforms
state-of-the-art methods. In particular, an mIoU score of 48.3% on the NYUDv2 test
set is achieved with ResNet50. We will release our source code based on PyTorch
and the trained segmentation model at https://github.com/anheidelonghu/ACNet. | [
"cs.CV"
] |
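The channel attention-based fusion described above can be sketched in PyTorch:
squeeze each branch with global pooling, produce per-channel weights, and fuse
the re-weighted RGB and depth features. The exact gating, reduction ratio, and
fusion rule are assumptions for illustration, not the released ACNet code.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # (N, C, H, W)
        return x * self.gate(x)           # per-channel re-weighting


class AttentionComplementaryFusion(nn.Module):
    """Fuse RGB and depth features with branch-wise channel attention."""

    def __init__(self, channels):
        super().__init__()
        self.rgb_attn = ChannelAttention(channels)
        self.depth_attn = ChannelAttention(channels)

    def forward(self, rgb_feat, depth_feat):
        return self.rgb_attn(rgb_feat) + self.depth_attn(depth_feat)


rgb = torch.randn(2, 64, 60, 80)
depth = torch.randn(2, 64, 60, 80)
print(AttentionComplementaryFusion(64)(rgb, depth).shape)  # (2, 64, 60, 80)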
Finding a path free from obstacles that poses minimal risk is critical for
safe navigation. People who are sighted and people who are visually impaired
require navigation safety while walking on a sidewalk. In this research, we
developed an assistive sidewalk navigation system by integrating sensory inputs
using reinforcement learning. We trained a Sidewalk Obstacle Avoidance Agent
(SOAA) through reinforcement learning in a simulated robotic environment. A
Sidewalk Obstacle Conversational Agent (SOCA) is built by training a natural
language conversation agent with real conversation data. The SOAA along with
SOCA was integrated in a prototype device called augmented guide (AG).
Empirical analysis showed that this prototype improved the obstacle avoidance
experience by about 5% over a base case of 81.29%. | [
"cs.CV"
] |
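A minimal tabular Q-learning loop illustrates the kind of reinforcement-learning
training the abstract describes for an obstacle-avoidance agent. The toy grid
of sensor states, the reward shaping, and the hyperparameters are all
illustrative assumptions; the paper's simulated robotic environment and the
conversational agent are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 25, 3            # discretized sensor states; left/forward/right
q_table = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1


def step(state, action):
    """Hypothetical environment: penalize collisions, reward forward progress."""
    collision = (rng.random() < 0.1) if action == 1 else (rng.random() < 0.05)
    reward = -10.0 if collision else (1.0 if action == 1 else 0.0)
    return rng.integers(N_STATES), reward


for episode in range(500):
    state = rng.integers(N_STATES)
    for _ in range(50):
        if rng.random() < epsilon:                      # epsilon-greedy exploration
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward = step(state, action)
        # Standard Q-learning update.
        q_table[state, action] += alpha * (
            reward + gamma * q_table[next_state].max() - q_table[state, action])
        state = next_state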