text | label |
---|---|
Previous RGB-D salient object detection (SOD) methods have widely adopted
deep learning tools to automatically strike a trade-off between RGB and D
(depth), whose key rationale is to take full advantage of their complementary
nature, aiming for better SOD performance than that of using either modality
alone. However, such fully automatic fusion may not always be helpful
for the SOD task because the D quality itself usually varies from scene to
scene. It may easily lead to a suboptimal fusion result if the D quality is not
considered beforehand. Moreover, as an objective factor, the D quality has long
been overlooked by previous work. As a result, it is becoming a clear
performance bottleneck. Thus, we propose a simple yet effective scheme to
measure D quality in advance, the key idea of which is to devise a series of
features in accordance with the common attributes of high-quality D regions. To
be more concrete, we conduct D quality assessments for each image region,
following a multi-scale methodology that includes low-level edge consistency,
mid-level regional uncertainty and high-level model variance. All these
components will be computed independently and then be assembled with RGB and D
features, applied as implicit indicators, to guide the selective fusion.
Compared with the state-of-the-art fusion schemes, our method can achieve a
more reasonable fusion of RGB and D. Specifically, the proposed D
quality measurement method achieves steady performance improvements of almost
2.0\% in general. | [
"cs.CV",
"cs.LG"
]
|
Graph Neural Networks (GNNs) are a popular approach for predicting graph
structured data. As GNNs tightly entangle the input graph into the neural
network structure, common explainable AI approaches are not applicable. To a
large extent, GNNs have remained black-boxes for the user so far. In this
paper, we show that GNNs can in fact be naturally explained using higher-order
expansions, i.e. by identifying groups of edges that jointly contribute to the
prediction. Practically, we find that such explanations can be extracted using
a nested attribution scheme, where existing techniques such as layer-wise
relevance propagation (LRP) can be applied at each step. The output is a
collection of walks into the input graph that are relevant for the prediction.
Our novel explanation method, which we denote by GNN-LRP, is applicable to a
broad range of graph neural networks and lets us extract practically relevant
insights on sentiment analysis of text data, structure-property relationships
in quantum chemistry, and image classification. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
Convolutional neural networks (CNNs) have shown good performance in
polarimetric synthetic aperture radar (PolSAR) image classification due to the
automation of feature engineering. Excellent hand-crafted architectures of CNNs
incorporated the wisdom of human experts, which is an important reason for
CNN's success. However, the design of the architectures is a difficult problem,
which needs a lot of professional knowledge as well as computational resources.
Moreover, the architecture designed by hand might be suboptimal, because it is
only one of thousands of unobserved but objectively existing candidates. Considering
that the success of deep learning is largely due to its automation of the
feature engineering process, how to design automatic architecture searching
methods to replace the hand-crafted ones is an interesting topic. In this
paper, we explore the application of neural architecture search (NAS) in PolSAR
area for the first time. Different from the utilization of existing NAS
methods, we propose a differentiable architecture search (DAS) method which is
customized for PolSAR classification. The proposed DAS is equipped with a
PolSAR tailored search space and an improved one-shot search strategy. By DAS,
the weight parameters and architecture parameters (corresponding to the
hyperparameters but not the topologies) can be optimized by stochastic gradient
descent during training. The optimized architecture parameters are then
transformed into the corresponding CNN architecture, which is re-trained to
achieve high-precision PolSAR classification. In addition, a complex-valued DAS
is developed to take into account the characteristics of PolSAR images so as to
further improve the performance. Experiments on three PolSAR benchmark datasets
show that the CNNs obtained by searching have better classification performance
than the hand-crafted ones. | [
"cs.CV"
]
|
We consider learning a sequence classifier without labeled data by using
sequential output statistics. The problem is highly valuable since obtaining
labels in training data is often costly, while the sequential output statistics
(e.g., language models) could be obtained independently of input data and thus
with low or no cost. To address the problem, we propose an unsupervised
learning cost function and study its properties. We show that, compared to
earlier works, it is less inclined to be stuck in trivial solutions and avoids
the need for a strong generative model. Although it is harder to optimize in
its functional form, a stochastic primal-dual gradient method is developed to
effectively solve the problem. Experiment results on real-world datasets
demonstrate that the new unsupervised learning method gives drastically lower
errors than other baseline methods. Specifically, it reaches test errors about
twice those obtained by fully supervised learning. | [
"cs.LG"
]
|
While convolutional neural networks are dominating the field of computer
vision, one usually does not have access to the large amount of domain-relevant
data needed for their training. It thus became common to use available
synthetic samples along with domain adaptation schemes to prepare algorithms for the
target domain. Tackling this problem from a different angle, we introduce a
pipeline to map unseen target samples into the synthetic domain used to train
task-specific methods. Denoising the data and retaining only the features these
recognition algorithms are familiar with, our solution greatly improves their
performance. As this mapping is easier to learn than the opposite one (i.e., to
learn to generate realistic features to augment the source samples), we
demonstrate how our whole solution can be trained purely on augmented synthetic
data, and still perform better than methods trained with domain-relevant
information (e.g., real images or realistic textures for the 3D models). Applying
our approach to object recognition from texture-less CAD data, we present a
custom generative network which fully utilizes the purely geometrical
information to learn robust features and achieve a more refined mapping for
unseen color images. | [
"cs.CV"
]
|
Recent work on adversarial learning has focused mainly on neural networks and
domains where those networks excel, such as computer vision, or audio
processing. The data in these domains is typically homogeneous, whereas
heterogeneous tabular datasets remain underexplored despite their
prevalence. When searching for adversarial patterns within heterogeneous input
spaces, an attacker must simultaneously preserve the complex domain-specific
validity rules of the data, as well as the adversarial nature of the identified
samples. As such, applying adversarial manipulations to heterogeneous datasets
has proved to be a challenging task, and no generic attack method was suggested
thus far. We, however, argue that machine learning models trained on
heterogeneous tabular data are as susceptible to adversarial manipulations as
those trained on continuous or homogeneous data such as images. To support our
claim, we introduce a generic optimization framework for identifying
adversarial perturbations in heterogeneous input spaces. We define
distribution-aware constraints for preserving the consistency of the
adversarial examples and incorporate them by embedding the heterogeneous input
into a continuous latent space. Due to the nature of the underlying datasets, we
focus on $\ell_0$ perturbations and demonstrate their applicability in real
life. We demonstrate the effectiveness of our approach using three datasets
from different content domains. Our results demonstrate that despite the
constraints imposed on input validity in heterogeneous datasets, machine
learning models trained using such data are still equally susceptible to
adversarial examples. | [
"cs.LG",
"cs.CR"
]
|
Crowdsourcing provides an efficient label collection schema for supervised
machine learning. However, to control annotation cost, each instance in the
crowdsourced data is typically annotated by a small number of annotators. This
creates a sparsity issue and limits the quality of machine learning models
trained on such data. In this paper, we study how to handle sparsity in
crowdsourced data using data augmentation. Specifically, we propose to directly
learn a classifier by augmenting the raw sparse annotations. We implement two
principles of high-quality augmentation using Generative Adversarial Networks:
1) the generated annotations should follow the distribution of authentic ones,
which is measured by a discriminator; 2) the generated annotations should have
high mutual information with the ground-truth labels, which is measured by an
auxiliary network. Extensive experiments and comparisons against an array of
state-of-the-art learning-from-crowds methods on three real-world datasets
prove the effectiveness of our data augmentation framework and show the
potential of our algorithm for low-budget crowdsourcing in general. | [
"cs.LG",
"cs.CV",
"cs.HC"
]
|
As machine learning approaches are increasingly used to augment human
decision-making, eXplainable Artificial Intelligence (XAI) research has
explored methods for communicating system behavior to humans. However, these
approaches often fail to account for the emotional responses of humans as they
interact with explanations. Facial affect analysis, which examines human facial
expressions of emotions, is one promising lens for understanding how users
engage with explanations. Therefore, in this work, we aim to (1) identify which
facial affect features are pronounced when people interact with XAI interfaces,
and (2) develop a multitask feature embedding for linking facial affect signals
with participants' use of explanations. Our analyses and results show that the
occurrence and values of facial AU1 and AU4, and Arousal are heightened when
participants fail to use explanations effectively. This suggests that facial
affect analysis should be incorporated into XAI to personalize explanations to
individuals' interaction styles and to adapt explanations based on the
difficulty of the task performed. | [
"cs.CV",
"cs.HC"
]
|
We present an approach to perform 3D pose estimation of multiple people from
a few calibrated camera views. Our architecture, leveraging the recently
proposed unprojection layer, aggregates feature-maps from a 2D pose estimator
backbone into a comprehensive representation of the 3D scene. This intermediate
representation is then processed by a fully-convolutional volumetric network
and a decoding stage to extract 3D skeletons with sub-voxel accuracy. Our
method achieves state-of-the-art MPJPE on the CMU Panoptic dataset using a few
unseen views and obtains competitive results even with a single input view. We
also assess the transfer learning capabilities of the model by testing it
on the publicly available Shelf dataset, obtaining good performance
metrics. The proposed method is inherently efficient: as a pure bottom-up
approach, it is computationally independent of the number of people in the
scene. Furthermore, even though the computational burden of the 2D part scales
linearly with the number of input views, the overall architecture is able to
exploit a very lightweight 2D backbone which is orders of magnitude faster than
the volumetric counterpart, resulting in fast inference time. The system can
run at 6 FPS, processing up to 10 camera views on a single 1080Ti GPU. | [
"cs.CV",
"cs.NE"
]
|
Accurate and reliable tracking of multiple moving objects in 3D space is an
essential component of urban scene understanding. This is a challenging task
because it requires the assignment of detections in the current frame to the
predicted objects from the previous one. Existing filter-based approaches tend
to struggle if this initial assignment is not correct, which can happen easily.
We propose a novel optimization-based approach that does not rely on explicit
and fixed assignments. Instead, we represent the result of an off-the-shelf 3D
object detector as a Gaussian mixture model, which is incorporated into a factor
graph framework. This gives us the flexibility to assign all detections to all
objects simultaneously. As a result, the assignment problem is solved
implicitly and jointly with the 3D spatial multi-object state estimation using
non-linear least squares optimization. Despite its simplicity, the proposed
algorithm achieves robust and reliable tracking results and can be applied for
offline as well as online tracking. We demonstrate its performance on the real
world KITTI tracking dataset and achieve better results than many
state-of-the-art algorithms. In particular, the consistency of the estimated
tracks is superior in both the offline and online settings. | [
"cs.CV",
"cs.RO"
]
|
This research approaches the task of handwritten text recognition with attention
encoder-decoder networks trained on the Kazakh and Russian languages. We
developed a novel deep neural network model based on a fully gated CNN, supported
by multiple bidirectional GRU and attention mechanisms to manipulate
sophisticated features, achieving 0.045 Character Error Rate (CER), 0.192
Word Error Rate (WER) and 0.253 Sequence Error Rate (SER) on the first test
dataset and 0.064 CER, 0.24 WER and 0.361 SER on the second test dataset.
We also propose fully gated layers that take advantage of multiple tanh output
features together with the input features, which further improves the results.
We evaluated our model on the Handwritten Kazakh and Russian Database (HKR). Our
research is the first work on the HKR dataset and demonstrates state-of-the-art
results compared to most existing models. | [
"cs.CV",
"cs.LG"
]
|
With rising concerns about AI systems that are given direct access to
abundant sensitive information, researchers seek to develop more reliable AI
that relies on implicit information sources. To this end, in this paper, we introduce a
new task called video description via two multi-modal cooperative dialog
agents, whose ultimate goal is for one conversational agent to describe an
unseen video based on the dialog and two static frames. Specifically, one of
the intelligent agents - Q-BOT - is given two static frames from the beginning
and the end of the video, as well as a finite number of opportunities to ask
relevant natural language questions before describing the unseen video. A-BOT,
the other agent who has already seen the entire video, assists Q-BOT to
accomplish the goal by providing answers to those questions. We propose a
QA-Cooperative Network with a dynamic dialog history update learning mechanism
to transfer knowledge from A-BOT to Q-BOT, thus helping Q-BOT to better
describe the video. Extensive experiments demonstrate that, with the proposed
model and the cooperative learning method, Q-BOT can effectively learn to
describe an unseen video, achieving promising performance close to the setting
where Q-BOT is given the full ground-truth dialog history. | [
"cs.CV"
]
|
Existing methods for structure discovery in time series data construct
interpretable, compositional kernels for Gaussian process regression models.
While the learned Gaussian process model provides posterior mean and variance
estimates, typically the structure is learned via a greedy optimization
procedure. This restricts the space of possible solutions and leads to
over-confident uncertainty estimates. We introduce a fully Bayesian approach,
inferring a full posterior over structures, which more reliably captures the
uncertainty of the model. | [
"stat.ML",
"cs.LG"
]
|
Facial micro-expressions indicate brief and subtle facial movements that
appear during emotional communication. In comparison to macro-expressions,
micro-expressions are more challenging to analyze due to their short duration
and fine-grained changes. In recent years, micro-expression
recognition (MER) has drawn much attention because it can benefit a wide range
of applications, e.g. police interrogation, clinical diagnosis, depression
analysis, and business negotiation. In this survey, we offer a fresh overview
of recent research directions and challenges for MER tasks. For
example, we review MER approaches from three novel aspects: macro-to-micro
adaptation, recognition based on key apex frames, and recognition based on
facial action units. Moreover, to mitigate the problem of limited and biased ME
data, synthetic data generation is surveyed for the diversity enrichment of
micro-expression data. Since micro-expression spotting can boost
micro-expression analysis, the state-of-the-art spotting works are also
introduced in this paper. Finally, we discuss the challenges in MER research
and provide potential solutions as well as possible directions for further
investigation. | [
"cs.CV",
"cs.LG"
]
|
Recent work has attempted to interpret residual networks (ResNets) as one
step of a forward Euler discretization of an ordinary differential equation,
focusing mainly on syntactic algebraic similarities between the two systems.
Discrete dynamical integrators of continuous dynamical systems, however, have a
much richer structure. We first show that ResNets fail to be meaningful
dynamical integrators in this richer sense. We then demonstrate that neural
network models can learn to represent continuous dynamical systems, with this
richer structure and properties, by embedding them into higher-order numerical
integration schemes, such as the Runge-Kutta schemes. Based on these insights,
we introduce ContinuousNet as a continuous-in-depth generalization of ResNet
architectures. ContinuousNets exhibit an invariance to the particular
computational graph manifestation. That is, the continuous-in-depth model can
be evaluated with different discrete time step sizes, which changes the number
of layers, and different numerical integration schemes, which changes the graph
connectivity. We show that this can be used to develop an incremental-in-depth
training scheme that improves model quality, while significantly decreasing
training time. We also show that, once trained, the number of units in the
computational graph can even be decreased, for faster inference with
little-to-no accuracy drop. | [
"cs.LG",
"math.DS",
"stat.ML"
]
|
The richness in the content of various information networks such as social
networks and communication networks provides the unprecedented potential for
learning high-quality expressive representations without external supervision.
This paper investigates how to preserve and extract the abundant information
from graph-structured data into embedding space in an unsupervised manner. To
this end, we propose a novel concept, Graphical Mutual Information (GMI), to
measure the correlation between input graphs and high-level hidden
representations. GMI generalizes the idea of conventional mutual information
computations from vector space to the graph domain where measuring mutual
information from two aspects of node features and topological structure is
indispensable. GMI exhibits several benefits: First, it is invariant to the
isomorphic transformation of input graphs---an inevitable constraint in many
existing graph representation learning algorithms; Besides, it can be
efficiently estimated and maximized by current mutual information estimation
methods such as MINE; Finally, our theoretical analysis confirms its
correctness and rationality. With the aid of GMI, we develop an unsupervised
learning model trained by maximizing GMI between the input and output of a
graph neural encoder. Extensive experiments on transductive as well as
inductive node classification and link prediction demonstrate that our method
outperforms state-of-the-art unsupervised counterparts, and even sometimes
exceeds the performance of supervised ones. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
Skeleton-based human action recognition has attracted much attention with the
prevalence of accessible depth sensors. Recently, graph convolutional networks
(GCNs) have been widely used for this task due to their powerful capability to
model graph data. The topology of the adjacency graph is a key factor for
modeling the correlations of the input skeletons. Thus, previous methods mainly
focus on the design/learning of the graph topology. But once the topology is
learned, only a single-scale feature and one transformation exist in each layer
of the networks. Many insights, such as multi-scale information and multiple
sets of transformations, that have been proven to be very effective in
convolutional neural networks (CNNs), have not been investigated in GCNs. The
reason is that, due to the gap between graph-structured skeleton data and
conventional image/video data, it is very challenging to embed these insights
into GCNs. To overcome this gap, we reinvent the split-transform-merge strategy
in GCNs for skeleton sequence processing. Specifically, we design a simple and
highly modularized graph convolutional network architecture for skeleton-based
action recognition. Our network is constructed by repeating a building block
that aggregates multi-granularity information from both the spatial and
temporal paths. Extensive experiments demonstrate that our network outperforms
state-of-the-art methods by a significant margin with only 1/5 of the
parameters and 1/10 of the FLOPs. Code is available at
https://github.com/yellowtownhz/STIGCN. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.MM"
]
|
Nowadays, scene text recognition has attracted more and more attention due to
its various applications. Most state-of-the-art methods adopt an
encoder-decoder framework with attention mechanism, which generates text
autoregressively from left to right. Despite the convincing performance, the
speed is limited because of the one-by-one decoding strategy. As opposed to
autoregressive models, non-autoregressive models predict the results in
parallel with a much shorter inference time, but the accuracy falls behind the
autoregressive counterpart considerably. In this paper, we propose a Parallel,
Iterative and Mimicking Network (PIMNet) to balance accuracy and efficiency.
Specifically, PIMNet adopts a parallel attention mechanism to predict the text
faster and an iterative generation mechanism to make the predictions more
accurate. In each iteration, the context information is fully explored. To
improve the learning of the hidden layer, we exploit mimicking learning in the
training phase, where an additional autoregressive decoder is adopted and the
parallel decoder mimics the autoregressive decoder by fitting the outputs of the
hidden layer. With a shared backbone between the two decoders, the proposed
PIMNet can be trained end-to-end without pre-training. During inference, the
branch of the autoregressive decoder is removed for a faster speed. Extensive
experiments on public benchmarks demonstrate the effectiveness and efficiency
of PIMNet. Our code will be available at https://github.com/Pay20Y/PIMNet. | [
"cs.CV"
]
|
Recent advances in depth imaging sensors provide easy access to the
synchronized depth with color, called RGB-D image. In this paper, we propose an
unsupervised method for indoor RGB-D image segmentation and analysis. We
consider a statistical image generation model based on the color and geometry
of the scene. Our method consists of a joint color-spatial-directional
clustering method followed by a statistical planar region merging method. We
evaluate our method on the NYU depth database and compare it with existing
unsupervised RGB-D segmentation methods. Results show that it is comparable
with state-of-the-art methods while requiring less computation time. Moreover,
it opens interesting perspectives to fuse color and geometry in an unsupervised
manner. | [
"cs.CV"
]
|
Existing deep neural networks, say for image classification, have been shown
to be vulnerable to adversarial images that can cause a DNN misclassification,
without any perceptible change to an image. In this work, we propose shock
absorbing robust features such as binarization, e.g., rounding, and group
extraction, e.g., color or shape, to augment the classification pipeline,
resulting in more robust classifiers. Experimentally, we show that augmenting
ML models with these techniques leads to improved overall robustness on
adversarial inputs as well as significant improvements in training time. On the
MNIST dataset, we achieved a 14x speedup in training time to obtain 90%
adversarial accuracy compared to the state-of-the-art adversarial training
method of Madry et al., while retaining higher adversarial accuracy over a
broader range of attacks. We also find robustness improvements on traffic sign
classification using robust feature augmentation. Finally, we give theoretical
insights for why one can expect robust feature augmentation to reduce
adversarial input space. | [
"cs.LG",
"stat.ML"
]
|
This work examines the use of a fully convolutional net (FCN) to find an
image segment, given a pixel within this segment region. The net receives an
image, a point in the image, and a region of interest (RoI) mask. The net
output is a binary mask of the segment in which the point is located. The
region where the segment can be found is contained within the input RoI mask.
Full image segmentation can be achieved by running this net sequentially,
region-by-region on the image, and stitching the output segments into a single
segmentation map. This simple method addresses two major challenges of image
segmentation: 1) Segmentation of unknown categories that were not included in
the training set. 2) Segmentation of both individual object instances (things)
and non-objects (stuff), such as sky and vegetation. Hence, if the pointer
pixel is located within a person in a group, the net will output a mask that
covers that individual person; if the pointer pixel is located within the sky
region, the net returns the region of the sky in the image. This is true even
if no example for sky or person appeared in the training set. The net was
tested and trained on the COCO panoptic dataset and achieved 67% IOU for
segmentation of familiar classes (that were part of the net training set) and
53% IOU for segmentation of unfamiliar classes (that were not included in the
training). | [
"cs.CV"
]
|
This notebook paper presents our model in the VATEX video captioning
challenge. In order to capture multi-level aspects in the video, we propose to
integrate both temporal and spatial attentions for video captioning. The
temporal attentive module focuses on global action movements, while the spatial
attentive module describes more fine-grained objects. Considering that
these two types of attentive modules are complementary, we fuse them via a
late fusion strategy. The proposed model significantly outperforms baselines
and achieves 73.4 CIDEr score on the testing set which ranks the second place
at the VATEX video captioning challenge leaderboard 2019. | [
"cs.CV",
"cs.LG"
]
|
This paper explores conditional image generation with a One-Vs-All classifier
based on the Generative Adversarial Networks (GANs). Instead of the real/fake
discriminator used in vanilla GANs, we propose to extend the discriminator to a
One-Vs-All classifier (GAN-OVA) that can assign each input sample to its
category label. Specifically, we feed certain additional information as
conditions to the generator and take the discriminator as a One-Vs-All
classifier to identify each conditional category. Our model can be applied to
different divergence or distances used to define the objective function, such
as Jensen-Shannon divergence and Earth-Mover (or called Wasserstein-1)
distance. We evaluate GAN-OVAs on MNIST and CelebA-HQ datasets, and the
experimental results show that GAN-OVAs make progress toward stable training
over regular conditional GANs. Furthermore, GAN-OVAs effectively accelerate the
generation process of different classes and improve generation quality. | [
"cs.CV"
]
|
When confronted with objects of unknown types in an image, humans can
effortlessly and precisely tell their visual boundaries. This recognition
mechanism and underlying generalization capability seem to contrast with
state-of-the-art image segmentation networks that rely on large-scale
category-aware annotated training samples. In this paper, we make an attempt
towards building models that explicitly account for visual boundary knowledge,
in the hope of reducing the training effort for segmenting unseen categories.
Specifically, we investigate a new task termed as Boundary Knowledge
Translation (BKT). Given a set of fully labeled categories, BKT aims to
translate the visual boundary knowledge learned from the labeled categories, to
a set of novel categories, each of which is provided with only a few labeled
samples. To this end, we propose a Translation Segmentation Network
(Trans-Net), which comprises a segmentation network and two boundary
discriminators. The segmentation network, combined with a boundary-aware
self-supervised mechanism, is devised to conduct foreground segmentation, while
the two discriminators work together in an adversarial manner to ensure an
accurate segmentation of the novel categories under light supervision.
Exhaustive experiments demonstrate that, with only tens of labeled samples as
guidance, Trans-Net achieves results on par with fully supervised
methods. | [
"cs.CV"
]
|
Deep learning techniques hold promise to develop dense topography
reconstruction and pose estimation methods for endoscopic videos. However,
currently available datasets do not support effective quantitative
benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM
dataset consisting of 3D point cloud data for six porcine organs, capsule and
standard endoscopy recordings as well as synthetically generated data. A Panda
robotic arm, two commercially available capsule endoscopes, two conventional
endoscopes with different camera properties, and two high precision 3D scanners
were employed to collect data from 8 ex-vivo porcine gastrointestinal
(GI)-tract organs. In total, 35 sub-datasets are provided with 6D pose ground
truth for the ex-vivo part: 18 sub-datasets for colon, 12 sub-datasets for
stomach and 5 sub-datasets for small intestine, while four of these contain
polyp-mimicking elevations carried out by an expert gastroenterologist.
Synthetic capsule endoscopy frames from GI-tract with both depth and pose
annotations are included to facilitate the study of simulation-to-real transfer
learning algorithms. Additionally, we propound Endo-SfMLearner, an unsupervised
monocular depth and pose estimation method that combines residual networks with
a spatial attention module in order to direct the network to focus on
distinguishable and highly textured tissue regions. The proposed approach makes
use of a brightness-aware photometric loss to improve the robustness under fast
frame-to-frame illumination changes. To exemplify the use-case of the EndoSLAM
dataset, the performance of Endo-SfMLearner is extensively compared with the
state-of-the-art. The codes and the link for the dataset are publicly available
at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the
experimental setup and procedure is accessible through
https://www.youtube.com/watch?v=G_LCe0aWWdQ. | [
"cs.CV"
]
|
The drug discovery stage is a vital aspect of the drug development process
and forms part of the initial stages of the development pipeline. In recent
times, machine learning-based methods are actively being used to model
drug-target interactions for rational drug discovery due to the successful
application of these methods in other domains. In machine learning approaches,
the numerical representation of molecules is critical to the performance of the
model. While significant progress has been made in molecular representation
engineering, this has resulted in several descriptors for both targets and
compounds. Also, the interpretability of model predictions is a vital feature
that could have several pharmacological applications. In this study, we propose
a self-attention-based multi-view representation learning approach for modeling
drug-target interactions. We evaluated our approach using three benchmark
kinase datasets and compared the proposed method to some baseline models. Our
experimental results demonstrate the ability of our method to achieve
competitive prediction performance and offer biologically plausible drug-target
interaction interpretations. | [
"cs.LG",
"stat.ML"
]
|
One of the ways to improve the performance of a target task is to transfer the
abundant knowledge of a pre-trained network. However, training such a
pre-trained network requires high computational capability and a large-scale
labeled dataset. To mitigate the burden of large-scale labeling, learning in an
un/self-supervised manner can be a solution. In addition, using unsupervised
multi-task learning, a generalized feature representation can be learned.
However, unsupervised multi-task learning can be biased to a specific task. To
overcome this problem, we propose the metric-based regularization term and
temporal task ensemble (TTE) for multi-task learning. Since these two
techniques prevent the entire network from deviating toward a
specific task, it is possible to learn a generalized feature representation
that appropriately reflects the characteristics of each task without bias.
Experimental results for three target tasks such as classification, object
detection and embedding clustering prove that the TTE-based multi-task
framework is more effective than the state-of-the-art (SOTA) method in
improving the performance of a target task. | [
"cs.CV"
]
|
Generative Adversarial Networks (GANs) are commonly used for modeling complex
distributions of data. Both the generators and discriminators of GANs are often
modeled by neural networks, posing a non-transparent optimization problem which
is non-convex and non-concave over the generator and discriminator,
respectively. Such networks are often heuristically optimized with gradient
descent-ascent (GDA), but it is unclear whether the optimization problem
contains any saddle points, or whether heuristic methods can find them in
practice. In this work, we analyze the training of Wasserstein GANs with
two-layer neural network discriminators through the lens of convex duality, and
for a variety of generators expose the conditions under which Wasserstein GANs
can be solved exactly with convex optimization approaches, or can be
represented as convex-concave games. Using this convex duality interpretation,
we further demonstrate the impact of different activation functions of the
discriminator. Our observations are verified with numerical results
demonstrating the power of the convex interpretation, with applications in
progressive training of convex architectures corresponding to linear generators
and quadratic-activation discriminators for CelebA image generation. The code
for our experiments is available at https://github.com/ardasahiner/ProCoGAN. | [
"cs.LG",
"cs.CV",
"eess.IV",
"math.OC",
"stat.ML"
]
|
Knowledge graph embedding methods learn embeddings of entities and relations
in a low dimensional space which can be used for various downstream machine
learning tasks such as link prediction and entity matching. Various graph
convolutional network methods have been proposed which use different types of
information to learn the features of entities and relations. However, these
methods assign the same weight (importance) to the neighbors when aggregating
the information, ignoring the role of different relations with the neighboring
entities. To this end, we propose a relation-aware graph attention model that
leverages relation information to compute different weights for the neighboring
nodes for learning embeddings of entities and relations. We evaluate our
proposed approach on link prediction and entity matching tasks. Our
experimental results on link prediction on three datasets (one proprietary and
two public) and results on unsupervised entity matching on one proprietary
dataset demonstrate the effectiveness of the relation-aware attention. | [
"cs.LG",
"cs.AI"
]
|
In this paper we present our scientific discovery that good representation
can be learned via continuous attention during the interaction between
Unsupervised Learning (UL) and Reinforcement Learning (RL) modules driven by
intrinsic motivation. Specifically, we designed intrinsic rewards generated
from UL modules for driving the RL agent to focus on objects for a period of
time and to learn good representations of objects for later object recognition
task. We evaluate our proposed algorithm in settings both with and without
extrinsic rewards. Experiments with end-to-end training in simulated environments
with applications to few-shot object recognition demonstrated the effectiveness
of the proposed algorithm. | [
"cs.LG",
"stat.ML"
]
|
Embedding image features into a binary Hamming space can improve both the
speed and accuracy of large-scale query-by-example image retrieval systems.
Supervised hashing aims to map the original features to compact binary codes in
a manner which preserves the label-based similarities of the original data.
Most existing approaches apply a single form of hash function, and an
optimization process which is typically deeply coupled to this specific form.
This tight coupling restricts the flexibility of those methods, and can result
in complex optimization problems that are difficult to solve. In this work we
proffer a flexible yet simple framework that is able to accommodate different
types of loss functions and hash functions. The proposed framework allows a
number of existing approaches to hashing to be placed in context, and
simplifies the development of new problem-specific hashing methods. Our
framework decomposes the problem into two steps: binary code (hash bits) learning, and
hash function learning. The first step can typically be formulated as a binary
quadratic problem, and the second step can be accomplished by training standard
binary classifiers. For solving large-scale binary code inference, we show how
to ensure that the binary quadratic problems are submodular such that an
efficient graph cut approach can be used. To achieve efficiency as well as
efficacy on large-scale high-dimensional data, we propose to use boosted
decision trees as the hash functions, which are nonlinear, highly descriptive,
and very fast to train and evaluate. Experiments demonstrate that our proposed
method significantly outperforms most state-of-the-art methods, especially on
high-dimensional data. | [
"cs.LG",
"cs.CV"
]
|
Existing deep learning models may encounter great challenges in handling
graph structured data. In this paper, we introduce a new deep learning model
for graph data specifically, namely the deep loopy neural network.
Significantly different from the previous deep models, inside the deep loopy
neural network, there exist a large number of loops created by the extensive
connections among nodes in the input graph data, which makes model learning an
infeasible task. To resolve such a problem, in this paper, we will introduce a
new learning algorithm for the deep loopy neural network specifically. Instead
of learning the model variables based on the original model, in the proposed
learning algorithm, errors will be back-propagated through the edges in a group
of extracted spanning trees. Extensive numerical experiments have been done on
several real-world graph datasets, and the experimental results demonstrate the
effectiveness of both the proposed model and the learning algorithm in handling
graph data. | [
"cs.LG",
"cs.AI",
"cs.NE",
"stat.ML"
]
|
Zero-shot learning uses semantic attributes to connect the search space of
unseen objects. In recent years, although the deep convolutional network brings
powerful visual modeling capabilities to the ZSL task, its visual features have
severe pattern inertia and lack representation of semantic relationships,
which leads to severe bias and ambiguity. In response to this, we propose the
Graph-based Visual-Semantic Entanglement Network to conduct graph modeling of
visual features, which is mapped to semantic attributes by using a knowledge
graph, it contains several novel designs: 1. it establishes a multi-path
entangled network with the convolutional neural network (CNN) and the graph
convolutional network (GCN), which feeds the visual features from the CNN to the
GCN to model implicit semantic relations, and the GCN then feeds the graph-modeled
information back to the CNN features; 2. it uses attribute word vectors as the target
for the graph semantic modeling of the GCN, which forms a self-consistent
regression for graph modeling and supervises the GCN to learn more personalized
attribute relations; 3. it fuses and supplements the hierarchical
visual-semantic features refined by graph modeling into visual embedding. Our
method outperforms state-of-the-art approaches on multiple representative ZSL
datasets: AwA2, CUB, and SUN by promoting the semantic linkage modelling of
visual features. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
In plug-and-play image restoration, the regularization is performed using
powerful denoisers such as nonlocal means (NLM) or BM3D. This is done within
the framework of alternating direction method of multipliers (ADMM), where the
regularization step is formally replaced by an off-the-shelf denoiser. Each
plug-and-play iteration involves the inversion of the forward model followed by
a denoising step. In this paper, we present a couple of ideas for improving the
efficiency of the inversion and denoising steps. First, we propose to use
linearized ADMM, which generally allows us to perform the inversion at a lower
cost than standard ADMM. Moreover, we can easily incorporate hard constraints
into the optimization framework as a result. Second, we develop a fast
algorithm for doubly stochastic NLM, originally proposed by Sreehari et al.
(IEEE TCI, 2016), which is about 80x faster than brute-force computation. This
particular denoiser can be expressed as the proximal map of a convex
regularizer and, as a consequence, we can guarantee convergence for linearized
plug-and-play ADMM. We demonstrate the effectiveness of our proposals for
super-resolution and single-photon imaging. | [
"cs.CV"
]
|
The Neural Tangent Kernel (NTK) has recently attracted intense study, as it
describes the evolution of an over-parameterized Neural Network (NN) trained by
gradient descent. However, it is now well-known that gradient descent is not
always a good optimizer for NNs, which can partially explain the unsatisfactory
practical performance of the NTK regression estimator. In this paper, we
introduce the Weighted Neural Tangent Kernel (WNTK), a generalized and improved
tool, which can capture an over-parameterized NN's training dynamics under
different optimizers. Theoretically, in the infinite-width limit, we prove: i)
the stability of the WNTK at initialization and during training, and ii) the
equivalence between the WNTK regression estimator and the corresponding NN
estimator with different learning rates on different parameters. With the
proposed weight update algorithm, both empirical and analytical WNTKs
outperform the corresponding NTKs in numerical experiments. | [
"cs.LG"
]
|
Neural architecture search (NAS) has recently attracted much research attention
because of its ability to identify better architectures than handcrafted ones.
However, many NAS methods, which optimize the search process in a discrete
search space, need many GPU days for convergence. Recently, DARTS, which
constructs a differentiable search space and then optimizes it by gradient
descent, can obtain high-performance architectures and reduces the search time
to several days. However, DARTS is still slow as it updates an ensemble of all
operations and keeps only one after convergence. Besides, DARTS can converge to
inferior architectures due to the strong correlation among operations. In this
paper, we propose a new differentiable Neural Architecture Search method based
on Proximal gradient descent (denoted as NASP). Different from DARTS, NASP
reformulates the search process as an optimization problem with a constraint
that only one operation is allowed to be updated during forward and backward
propagation. Since the constraint is hard to deal with, we propose a new
algorithm inspired by proximal iterations to solve it. Experiments on various
tasks demonstrate that NASP can obtain high-performance architectures with a
tenfold speedup in computation time over DARTS. | [
"cs.LG",
"stat.ML"
]
|
Few/zero-shot learning is a major challenge for many classification tasks,
where a classifier is required to recognise instances of classes that have very
few or even no training samples. It becomes more difficult in multi-label
classification, where each instance is labelled with more than one class. In
this paper, we present a simple multi-graph aggregation model that fuses
knowledge from multiple label graphs encoding different semantic label
relationships in order to study how the aggregated knowledge can benefit
multi-label zero/few-shot document classification. The model utilises three
kinds of semantic information, i.e., the pre-trained word embeddings, label
description, and pre-defined label relations. Experimental results derived on
two large clinical datasets (i.e., MIMIC-II and MIMIC-III) and the EU
legislation dataset show that methods equipped with the multi-graph knowledge
aggregation achieve significant performance improvement across almost all the
measures on few/zero-shot labels. | [
"cs.LG",
"cs.AI"
]
|
Object reconstruction from a single image -- in the wild -- is a problem
where we can make progress and get meaningful results today. This is the main
message of this paper, which introduces an automated pipeline with pixels as
inputs and 3D surfaces of various rigid categories as outputs in images of
realistic scenes. At the core of our approach are deformable 3D models that can
be learned from 2D annotations available in existing object detection datasets,
that can be driven by noisy automatic object segmentations and which we
complement with a bottom-up module for recovering high-frequency shape details.
We perform a comprehensive quantitative analysis and ablation study of our
approach using the recently introduced PASCAL 3D+ dataset and show very
encouraging automatic reconstructions on PASCAL VOC. | [
"cs.CV"
]
|
In this paper, we investigate the problem of weakly supervised 3D vehicle
detection. Conventional methods for 3D object detection need vast amounts of
manually labelled 3D data as supervision signals. However, annotating large
datasets requires huge human effort, especially in the 3D domain. To tackle this
problem, we propose frustum-aware geometric reasoning (FGR) to detect vehicles
in point clouds without any 3D annotations. Our method consists of two stages:
coarse 3D segmentation and 3D bounding box estimation. For the first stage, a
context-aware adaptive region growing algorithm is designed to segment objects
based on 2D bounding boxes. Leveraging predicted segmentation masks, we develop
an anti-noise approach to estimate 3D bounding boxes in the second stage.
Finally, 3D pseudo labels generated by our method are utilized to train a 3D
detector. Independent of any 3D groundtruth, FGR reaches comparable performance
with fully supervised methods on the KITTI dataset. The findings indicate that
it is able to accurately detect objects in 3D space with only 2D bounding boxes
and sparse point clouds. | [
"cs.CV"
]
|
This study analyzed the performance of different machine learning methods for
winter wheat yield prediction using extensive datasets of weather, soil, and
crop phenology. To address the seasonality, weekly features were used that
explicitly take soil moisture conditions and meteorological events into
account. Our results indicated that nonlinear models such as deep neural
networks (DNN) and XGboost are more effective in finding the functional
relationship between the crop yield and input data compared to linear models.
The results also revealed that the deep neural networks often had a higher
prediction accuracy than XGboost. One of the main limitations of machine
learning models is their black box property. As a result, we moved beyond
prediction and performed feature selection, as it provides key results towards
explaining yield prediction (variable importance by time). The feature
selection method estimated the individual effect of weather components, soil
conditions, and phenology variables as well as the time that these variables
become important. As such, our study indicates which variables have the most
significant effect on winter wheat yield. | [
"cs.LG"
]
|
As the decisions made or influenced by machine learning models increasingly
impact our lives, it is crucial to detect, understand, and mitigate unfairness.
But even simply determining what "unfairness" should mean in a given context is
non-trivial: there are many competing definitions, and choosing between them
often requires a deep understanding of the underlying task. It is thus tempting
to use model explainability to gain insights into model fairness; however,
existing explainability tools do not reliably indicate whether a model is
indeed fair. In this work we present a new approach to explaining fairness in
machine learning, based on the Shapley value paradigm. Our fairness
explanations attribute a model's overall unfairness to individual input
features, even in cases where the model does not operate on sensitive
attributes directly. Moreover, motivated by the linearity of Shapley
explainability, we propose a meta algorithm for applying existing training-time
fairness interventions, wherein one trains a perturbation to the original
model, rather than a new model entirely. By explaining the original model, the
perturbation, and the fair-corrected model, we gain insight into the
accuracy-fairness trade-off that is being made by the intervention. We further
show that this meta algorithm enjoys both flexibility and stability benefits
with no loss in performance. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
The aim of a Content-Based Image Retrieval (CBIR) system, also known as Query
by Image Content (QBIC), is to help users to retrieve relevant images based on
their contents. CBIR technologies provide a method to find images in large
databases by using unique descriptors from a trained image. The image
descriptors include texture, color, intensity and shape of the object inside an
image. Several feature-extraction techniques viz., Average RGB, Color Moments,
Co-occurrence, Local Color Histogram, Global Color Histogram and Geometric
Moment have been critically compared in this paper. However, individually these
techniques result in poor performance. So, combinations of these techniques
have also been evaluated and results for the most efficient combination of
techniques have been presented and optimized for each class of image query. We
also propose an improvement in image retrieval performance by introducing the
idea of Query modification through image cropping. It enables the user to
identify a region of interest and modify the initial query to refine and
personalize the image retrieval results. | [
"cs.CV",
"cs.AI",
"cs.IR",
"cs.LG",
"cs.MM"
]
|
While the performance of crowd counting via deep learning has been improved
dramatically in recent years, it remains an ingrained problem due to
cluttered backgrounds and varying scales of people within an image. In this
paper, we propose a Shallow feature based Dense Attention Network (SDANet) for
crowd counting from still images, which diminishes the impact of backgrounds
via involving a shallow feature based attention model, and meanwhile, captures
multi-scale information via densely connecting hierarchical image features.
Specifically, inspired by the observation that backgrounds and human crowds
generally have noticeably different responses in shallow features, we decide to
build our attention model upon shallow-feature maps, which results in accurate
background-pixel detection. Moreover, considering that the most representative
features of people across different scales can appear in different layers of a
feature extraction network, to better keep them all, we propose to densely
connect hierarchical image features of different layers and subsequently encode
them for estimating crowd density. Experimental results on three benchmark
datasets clearly demonstrate the superiority of SDANet when dealing with
different scenarios. Particularly, on the challenging UCF CC 50 dataset, our
method outperforms other existing methods by a large margin, as evidenced by
a remarkable 11.9% drop in Mean Absolute Error (MAE) achieved by our SDANet. | [
"cs.CV"
]
|
Exploring fine-grained relationships between entities (e.g., objects in an image
or words in a sentence) contributes greatly to understanding multimedia content
precisely. Previous attention mechanisms employed in image-text matching either
take multiple self-attention steps to gather correspondences or use image
objects (or words) as context to infer image-text similarity. However, they
only take advantage of semantic information without considering that objects'
relative position also contributes to image understanding. To this end, we
introduce a novel position-aware relation module to model both the semantic and
spatial relationship simultaneously for image-text matching in this paper.
Given an image, our method utilizes the locations of different objects to
capture spatial relationships. With the combination of semantic and
spatial relationship, it's easier to understand the content of different
modalities (images and sentences) and capture fine-grained latent
correspondences of image-text pairs. Besides, we employ a two-step aggregated
relation module to capture interpretable alignments of image-text pairs. In the
first step, which we call the intra-modal relation mechanism, we compute
responses between different objects in an image or different words in a
sentence separately; in the second step, which we call the inter-modal relation
mechanism, the query plays the role of textual context to refine the
relationships among object proposals in an image. In this way, our
position-aware aggregated relation network (ParNet) not only knows which
entities are relevant by adaptively attending to different objects (words), but
also adjusts the inter-modal correspondence according to the latent alignments
with the query's content. Our approach achieves state-of-the-art
results on the MS-COCO dataset. | [
"cs.CV",
"cs.CL",
"cs.LG",
"cs.MM"
]
|
Off-policy evaluation (OPE) is the problem of evaluating new policies using
historical data obtained from a different policy. In the recent OPE context,
most studies have focused on single-player cases, and not on multi-player
cases. In this study, we propose OPE estimators constructed by the doubly
robust and double reinforcement learning estimators in two-player zero-sum
Markov games. The proposed estimators project exploitability, which is often used
as a metric for determining how close a policy profile (i.e., a tuple of
policies) is to a Nash equilibrium in two-player zero-sum games. We prove the
exploitability estimation error bounds for the proposed estimators. We then
propose methods to find the best candidate policy profile by selecting the
policy profile that minimizes the estimated exploitability from a given policy
profile class. We prove the regret bounds of the policy profiles selected by
our methods. Finally, we demonstrate the effectiveness and performance of the
proposed estimators through experiments. | [
"cs.LG",
"cs.GT",
"econ.EM",
"stat.ML"
]
|
Generative adversarial networks (GANs) have attained photo-realistic quality.
However, it remains an open challenge of how to best control the image content.
We introduce LatentKeypointGAN, a two-stage GAN that is trained end-to-end on
the classical GAN objective yet internally conditioned on a set of sparse
keypoints with associated appearance embeddings that respectively control the
position and style of the generated objects and their parts. A major difficulty
that we address with suitable network architectures and training schemes is
disentangling the image into spatial and appearance factors without any
supervision signals for either factor or any domain knowledge. We demonstrate that
LatentKeypointGAN provides an interpretable latent space that can be used to
re-arrange the generated images by re-positioning and exchanging keypoint
embeddings, such as combining the eyes, nose, and mouth from different images
for generating portraits. In addition, the explicit generation of keypoints and
matching images enables a new, GAN-based methodology for unsupervised keypoint
detection. | [
"cs.CV"
]
|
Retrosynthesis is one of the fundamental problems in organic chemistry. The
task is to identify reactants that can be used to synthesize a specified
product molecule. Recently, computer-aided retrosynthesis is finding renewed
interest from both chemistry and computer science communities. Most existing
approaches rely on template-based models that define subgraph matching rules,
but whether or not a chemical reaction can proceed is not defined by hard
decision rules. In this work, we propose a new approach to this task using the
Conditional Graph Logic Network, a conditional graphical model built upon graph
neural networks that learns when rules from reaction templates should be
applied, implicitly considering whether the resulting reaction would be both
chemically feasible and strategic. We also propose an efficient hierarchical
sampling to alleviate the computation cost. While achieving a significant
improvement of $8.1\%$ over current state-of-the-art methods on the benchmark
dataset, our model also offers interpretations for the prediction. | [
"cs.LG",
"stat.ML"
]
|
Video captioning aims to automatically generate natural language descriptions
of video content, and has drawn a lot of attention in recent years. Generating
accurate and fine-grained captions needs to not only understand the global
content of video, but also capture the detailed object information. Meanwhile,
video representations have great impact on the quality of generated captions.
Thus, it is important for video captioning to capture salient objects with
their detailed temporal dynamics, and represent them using discriminative
spatio-temporal representations. In this paper, we propose a new video
captioning approach based on object-aware aggregation with bidirectional
temporal graph (OA-BTG), which captures detailed temporal dynamics for salient
objects in video, and learns discriminative spatio-temporal representations by
performing object-aware local feature aggregation on detected object regions.
The main novelties and advantages are: (1) Bidirectional temporal graph: A
bidirectional temporal graph is constructed along and reversely along the
temporal order, which provides complementary ways to capture the temporal
trajectories for each salient object. (2) Object-aware aggregation: Learnable
VLAD (Vector of Locally Aggregated Descriptors) models are constructed on
object temporal trajectories and the global frame sequence, which perform
object-aware aggregation to learn discriminative representations. A
hierarchical attention mechanism is also developed to distinguish different
contributions of multiple objects. Experiments on two widely-used datasets
demonstrate our OA-BTG achieves state-of-the-art performance in terms of
BLEU@4, METEOR and CIDEr metrics. | [
"cs.CV"
]
|
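A minimal sketch of a learnable VLAD aggregation layer, in the spirit of the object-aware aggregation used by OA-BTG. This is an illustrative NetVLAD-style re-implementation, not the authors' code; the cluster count and feature size are assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableVLAD(nn.Module):
    def __init__(self, dim=512, num_clusters=16):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_clusters, dim) * 0.01)
        self.assign = nn.Linear(dim, num_clusters)   # soft-assignment logits

    def forward(self, x):
        # x: (B, T, dim) local features along an object trajectory or frame sequence
        soft = F.softmax(self.assign(x), dim=-1)                                # (B, T, K)
        resid = x.unsqueeze(2) - self.centers.view(1, 1, *self.centers.shape)   # (B, T, K, dim)
        vlad = (soft.unsqueeze(-1) * resid).sum(dim=1)                          # (B, K, dim)
        vlad = F.normalize(vlad, dim=-1)                                        # intra-normalization
        return F.normalize(vlad.flatten(1), dim=-1)                             # (B, K*dim) descriptor

desc = LearnableVLAD()(torch.randn(2, 20, 512))   # 20 time steps per trajectory
```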
Accurate load forecasting has always been one of the main indispensable
parts of the operation and planning of power systems. Among the different
forecasting time horizons, while short-term load forecasting (STLF) and long-term
load forecasting (LTLF) have benefited from accurate predictors
and probabilistic forecasting, respectively, medium-term load forecasting (MTLF) demands more
attention due to its vital role in power system operation and planning, such as
optimal scheduling of generation units, robust planning program for customer
service, and economic supply. In this study, a hybrid method, composed of
Support Vector Regression (SVR) and Symbiotic Organism Search Optimization
(SOSO) method, is proposed for MTLF. In the proposed forecasting model, SVR is
the main part of the forecasting algorithm while SOSO is embedded into it to
optimize the parameters of SVR. In addition, a minimum redundancy-maximum
relevance feature selection algorithm is used in the preprocessing of the input
data. The proposed method is tested on the EUNITE competition dataset to
demonstrate its performance. Furthermore, it is compared with some
previous works to show the competitiveness of our method. | [
"stat.ML",
"cs.LG",
"cs.NE"
]
|
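The forecasting core described above can be sketched as an SVR whose hyperparameters are tuned by a search procedure. The paper uses Symbiotic Organism Search plus mRMR feature selection; in this illustrative stand-in, a randomized search over a time-series split tunes the SVR on synthetic data.
```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))            # e.g., lagged loads, temperature, calendar features
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=300)   # synthetic load signal

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = RandomizedSearchCV(
    pipe,
    param_distributions={
        "svr__C": np.logspace(-1, 3, 50),
        "svr__gamma": np.logspace(-3, 1, 50),
        "svr__epsilon": np.linspace(0.01, 0.5, 25),
    },
    n_iter=40,
    cv=TimeSeriesSplit(n_splits=4),      # respect temporal order when validating
    scoring="neg_mean_absolute_percentage_error",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```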
The task of video object segmentation with referring expressions
(language-guided VOS) is to, given a linguistic phrase and a video, generate
binary masks for the object to which the phrase refers. Our work argues that
existing benchmarks used for this task are mainly composed of trivial cases, in
which referents can be identified with simple phrases. Our analysis relies on a
new categorization of the phrases in the DAVIS-2017 and Actor-Action datasets
into trivial and non-trivial REs, with the non-trivial REs annotated with seven
RE semantic categories. We leverage this data to analyze the results of RefVOS,
a novel neural network that obtains competitive results for the task of
language-guided image segmentation and state-of-the-art results for
language-guided VOS. Our study indicates that the major challenges for the task
are related to understanding motion and static actions. | [
"cs.CV"
]
|
Edge devices, such as cameras and mobile units, are increasingly capable of
performing sophisticated computation in addition to their traditional roles in
sensing and communicating signals. The focus of this paper is on collaborative
object detection, where deep features computed on the edge device from input
images are transmitted to the cloud for further processing. We consider the
impact of packet loss on the transmitted features and examine several ways for
recovering the missing data. In particular, through theory and experiments, we
show that methods for image inpainting based on partial differential equations
work well for the recovery of missing features in the latent space. The
obtained results represent the new state of the art for missing data recovery
in collaborative object detection. | [
"cs.CV",
"cs.MM"
]
|
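A minimal sketch of PDE-style (harmonic/diffusion) inpainting applied to a feature tensor with missing entries, illustrating the kind of latent-space recovery studied above. The iteration solves Laplace's equation on the missing region by Jacobi updates; shapes and the mask pattern are assumptions.
```python
import numpy as np

def diffuse_inpaint(feat, mask, iters=200):
    # feat: (H, W) one feature channel; mask: True where values were lost in transmission.
    x = feat.copy()
    x[mask] = feat[~mask].mean()                   # crude initialization from observed values
    for _ in range(iters):
        up    = np.roll(x,  1, axis=0)
        down  = np.roll(x, -1, axis=0)
        left  = np.roll(x,  1, axis=1)
        right = np.roll(x, -1, axis=1)
        avg = (up + down + left + right) / 4.0
        x[mask] = avg[mask]                        # update only the missing entries
    return x

feat = np.random.randn(56, 56).astype(np.float32)
mask = np.zeros_like(feat, dtype=bool)
mask[20:30, :] = True                              # simulate a lost packet covering a band of rows
recovered = diffuse_inpaint(feat, mask)
```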
While many visual ego-motion algorithm variants have been proposed in the
past decade, learning-based ego-motion estimation methods have seen
increasing attention because of their desirable properties of robustness to image
noise and independence from camera calibration. In this work, we propose a
data-driven approach to fully trainable visual ego-motion estimation for a
monocular camera. We use an end-to-end learning approach that allows the model
to map directly from input image pairs to an estimate of ego-motion
(parameterized as 6-DoF transformation matrices). We introduce a novel
two-module long-term recurrent convolutional neural network called
PoseConvGRU, with an explicit sequence pose estimation loss to achieve this.
The feature-encoding module encodes the short-term motion feature in an image
pair, while the memory-propagating module captures the long-term motion feature
in the consecutive image pairs. The visual memory is implemented with
convolutional gated recurrent units, which allows propagating information over
time. At each time step, two consecutive RGB images are stacked together to
form a six-channel tensor for the feature-encoding module to learn how to extract motion
information and estimate poses. The sequence of output maps is then passed
through a stacked ConvGRU module to generate the relative transformation pose
of each image pair. We also augment the training data by randomly skipping
frames to simulate the velocity variation which results in a better performance
in turning and high-velocity situations. We evaluate the performance of our
proposed approach on the KITTI Visual Odometry benchmark. The experiments show
that the proposed method performs competitively with geometric methods and
encourage further exploration of learning-based methods for the purpose of
estimating camera ego-motion, even though geometric methods demonstrate
promising results. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
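A minimal sketch of a convolutional GRU cell, the recurrent building block that PoseConvGRU-style models use to propagate visual memory over time. Channel sizes and kernel size below are illustrative assumptions, not the paper's exact configuration.
```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update & reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate state
        self.hid_ch = hid_ch

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

# Feed a sequence of feature maps (e.g., encoded from stacked image pairs) step by step.
cell, h = ConvGRUCell(64, 128), None
for t in range(5):
    h = cell(torch.randn(1, 64, 24, 78), h)
```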
Micro-expression, for its high objectivity in emotion detection, has emerged
to be a promising modality in affective computing. Recently, deep learning
methods have been successfully introduced into the micro-expression recognition
area. Whilst the higher recognition accuracy achieved, substantial challenges
in micro-expression recognition remain. The existence of micro expression in
small-local areas on face and limited size of available databases still
constrain the recognition accuracy on such emotional facial behavior. In this
work, to tackle such challenges, we propose a novel attention mechanism called
micro-attention cooperating with residual network. Micro-attention enables the
network to learn to focus on facial areas of interest covering different action
units. Moreover, to cope with small datasets, the micro-attention is designed
without adding noticeable parameters, while a simple yet efficient transfer
learning approach is together utilized to alleviate the overfitting risk. With
extensive experimental evaluations on three benchmarks (CASMEII, SAMM and SMIC)
and post-hoc feature visualizations, we demonstrate the effectiveness of the
proposed micro-attention and push the boundary of automatic recognition of
micro-expression. | [
"cs.CV"
]
|
We introduce the Graph Mixture Density Networks, a new family of machine
learning models that can fit multimodal output distributions conditioned on
graphs of arbitrary topology. By combining ideas from mixture models and graph
representation learning, we address a broader class of challenging conditional
density estimation problems that rely on structured data. In this respect, we
evaluate our method on a new benchmark application that leverages random graphs
for stochastic epidemic simulations. We show a significant improvement in the
likelihood of epidemic outcomes when taking into account both multimodality and
structure. The empirical analysis is complemented by two real-world regression
tasks showing the effectiveness of our approach in modeling the output
prediction uncertainty. Graph Mixture Density Networks open appealing research
opportunities in the study of structure-dependent phenomena that exhibit
non-trivial conditional output distributions. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
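A minimal sketch of a mixture density output head: given a graph-level embedding produced by any graph encoder, it predicts mixture weights, means and variances and evaluates the negative log-likelihood of a scalar target. Sizes and the number of components are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureDensityHead(nn.Module):
    def __init__(self, in_dim=64, n_components=5):
        super().__init__()
        self.pi = nn.Linear(in_dim, n_components)
        self.mu = nn.Linear(in_dim, n_components)
        self.log_sigma = nn.Linear(in_dim, n_components)

    def nll(self, h, y):
        # h: (B, in_dim) graph embeddings; y: (B,) scalar targets (e.g., final epidemic size)
        log_pi = F.log_softmax(self.pi(h), dim=-1)
        mu, log_sigma = self.mu(h), self.log_sigma(h).clamp(-5, 5)
        comp = torch.distributions.Normal(mu, log_sigma.exp())
        log_prob = comp.log_prob(y.unsqueeze(-1))             # (B, K) per-component log-likelihood
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

head = MixtureDensityHead()
loss = head.nll(torch.randn(16, 64), torch.rand(16))
loss.backward()
```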
With the rapid growth of video data and the increasing demands of various
applications such as intelligent video search and assistance toward
visually-impaired people, video captioning task has received a lot of attention
recently in computer vision and natural language processing fields. The
state-of-the-art video captioning methods focus more on encoding the temporal
information, while lacking effective ways to remove irrelevant temporal
information and neglecting spatial details. The current RNN
encoding module, which processes frames in a single temporal order, can be
influenced by irrelevant temporal information, especially when it appears at the
beginning of the encoding. In addition, neglecting spatial information
leads to confusion of word relationships and loss of detail. Therefore,
in this paper, we propose a novel recurrent video encoding method and a novel
visual spatial feature for the video captioning task. The recurrent encoding
module encodes the video twice with the predicted key frame to avoid the
irrelevant temporal information often occurring at the beginning and the end of
a video. The novel spatial features represent the spatial information in
different regions of a video and enrich the details of a caption. Experiments
on two benchmark datasets show superior performance of the proposed method. | [
"cs.CV"
]
|
In this work, we introduce a deep-structured conditional random field
(DS-CRF) model for the purpose of state-based object silhouette tracking. The
proposed DS-CRF model consists of a series of state layers, where each state
layer spatially characterizes the object silhouette at a particular point in
time. The interactions between adjacent state layers are established by
inter-layer connectivity dynamically determined based on inter-frame optical
flow. By incorporating both spatial and temporal context in a dynamic fashion
within such a deep-structured probabilistic graphical model, the proposed
DS-CRF model allows us to develop a framework that can accurately and
efficiently track object silhouettes that can change greatly over time, as well
as under different situations such as occlusion and multiple targets within the
scene. Experimental results using video surveillance datasets containing
different scenarios such as occlusion and multiple targets showed that the
proposed DS-CRF approach provides strong object silhouette tracking performance
when compared to baseline methods such as mean-shift tracking, as well as
state-of-the-art methods such as context tracking and boosted particle
filtering. | [
"cs.CV",
"cs.LG",
"stat.ML"
]
|
We introduce GANHopper, an unsupervised image-to-image translation network
that transforms images gradually between two domains, through multiple hops.
Instead of executing translation directly, we steer the translation by
requiring the network to produce in-between images that resemble weighted
hybrids between images from the input domains. Our network is trained on
unpaired images from the two domains only, without any in-between images. All
hops are produced using a single generator along each direction. In addition to
the standard cycle-consistency and adversarial losses, we introduce a new
hybrid discriminator, which is trained to classify the intermediate images
produced by the generator as weighted hybrids, with weights based on a
predetermined hop count. We also add a smoothness term to constrain the
magnitude of each hop, further regularizing the translation. Compared to
previous methods, GANHopper excels at image translations involving
domain-specific image features and geometric variations while also preserving
non-domain-specific features such as general color schemes. | [
"cs.CV"
]
|
Larger networks generally have greater representational power at the cost of
increased computational complexity. Sparsifying such networks has been an
active area of research but has been generally limited to static regularization
or dynamic approaches using reinforcement learning. We explore a mixture of
experts (MoE) approach to deep dynamic routing, which activates certain experts
in the network on a per-example basis. Our novel DeepMoE architecture increases
the representational power of standard convolutional networks by adaptively
sparsifying and recalibrating channel-wise features in each convolutional
layer. We employ a multi-headed sparse gating network to determine the
selection and scaling of channels for each input, leveraging exponential
combinations of experts within a single convolutional network. Our proposed
architecture is evaluated on four benchmark datasets and tasks, and we show
that DeepMoEs are able to achieve higher accuracy with lower computation than
standard convolutional networks. | [
"cs.CV"
]
|
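A minimal sketch of the gating idea described above: a small gate head produces sparse, non-negative per-channel scales for a convolutional layer, so only a subset of channel "experts" fires for each input. This is an illustrative reconstruction, not the authors' code; the embedding size and gate design are assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, embed_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.gate = nn.Linear(embed_dim, out_ch)   # one gating head per convolutional layer

    def forward(self, x, z):
        # x: (B, in_ch, H, W) features; z: (B, embed_dim) embedding from a shared gating network
        g = F.relu(self.gate(z))                   # ReLU induces exact zeros -> sparse expert selection
        return F.relu(self.conv(x)) * g.unsqueeze(-1).unsqueeze(-1)   # per-channel scaling

block = GatedConvBlock(32, 64)
y = block(torch.randn(8, 32, 28, 28), torch.randn(8, 64))
```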
This paper presents a new vision Transformer, called Swin Transformer, that
capably serves as a general-purpose backbone for computer vision. Challenges in
adapting Transformer from language to vision arise from differences between the
two domains, such as large variations in the scale of visual entities and the
high resolution of pixels in images compared to words in text. To address these
differences, we propose a hierarchical Transformer whose representation is
computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme
brings greater efficiency by limiting self-attention computation to
non-overlapping local windows while also allowing for cross-window connection.
This hierarchical architecture has the flexibility to model at various scales
and has linear computational complexity with respect to image size. These
qualities of Swin Transformer make it compatible with a broad range of vision
tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and
dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP
on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its
performance surpasses the previous state-of-the-art by a large margin of +2.7
box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the
potential of Transformer-based models as vision backbones. The hierarchical
design and the shifted window approach also prove beneficial for all-MLP
architectures. The code and models are publicly available
at~\url{https://github.com/microsoft/Swin-Transformer}. | [
"cs.CV",
"cs.LG"
]
|
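A minimal sketch of the two core tensor operations behind shifted-window attention: partitioning a feature map into non-overlapping windows, and cyclically shifting it before partitioning so that the next block mixes information across window borders. The resolutions below are illustrative; a full implementation also builds an attention mask for the wrapped-around positions.
```python
import torch

def window_partition(x, ws):
    # x: (B, H, W, C) -> (num_windows * B, ws*ws, C)
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

x = torch.randn(2, 56, 56, 96)           # stage-1 resolution of a Swin-like backbone
ws = 7
windows = window_partition(x, ws)         # (2*64, 49, 96): self-attention runs inside each window

# Shifted windows for the next block: roll by ws // 2 along both spatial axes.
shift = ws // 2
shifted = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
shifted_windows = window_partition(shifted, ws)
```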
LiDAR point clouds and RGB images are both essential for 3D object
detection, so many state-of-the-art 3D detection algorithms are dedicated to fusing
these two types of data effectively. However, their fusion methods based on the
Bird's Eye View (BEV) or voxel format are not accurate. In this paper, we
propose a novel fusion approach named the Point-based Attentive Cont-conv
Fusion (PACF) module, which fuses multi-sensor features directly on 3D points.
In addition to continuous convolution, we add a Point-Pooling and an
Attentive Aggregation operation to make the fused features more expressive. Moreover,
based on the PACF module, we propose a 3D multi-sensor multi-task network
called Pointcloud-Image RCNN (PI-RCNN for short), which handles the image
segmentation and 3D object detection tasks. PI-RCNN employs a segmentation
sub-network to extract full-resolution semantic feature maps from images and
then fuses the multi-sensor features via the PACF module. Benefiting from
the effectiveness of the PACF module and the expressive semantic features from
the segmentation module, PI-RCNN achieves substantial improvements in 3D object detection. We
demonstrate the effectiveness of the PACF module and PI-RCNN on the KITTI 3D
Detection benchmark, and our method achieves state-of-the-art results on the
3D AP metric. | [
"cs.CV"
]
|
Knowledge distillation is a potential solution for model compression. The
idea is to make a small student network imitate the target of a large teacher
network, then the student network can be competitive to the teacher one. Most
previous studies focus on model distillation in the classification task, where
they propose different architectures and initializations for the student network.
However, the classification task alone is not enough, and other related tasks
such as regression and retrieval are barely considered. To solve the problem,
in this paper, we take face recognition as a breaking point and propose model
distillation with knowledge transfer from face classification to alignment and
verification. By selecting appropriate initializations and targets in the
knowledge transfer, the distillation can be easier in non-classification tasks.
Experiments on the CelebA and CASIA-WebFace datasets demonstrate that the
student network can be competitive to the teacher one in alignment and
verification, and even surpasses the teacher network under specific compression
rates. In addition, to achieve stronger knowledge transfer, we also use a
common initialization trick to improve the distillation performance of
classification. Evaluations on the CASIA-Webface and large-scale MS-Celeb-1M
datasets show the effectiveness of this simple trick. | [
"cs.CV"
]
|
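A minimal sketch of the classification distillation objective commonly used in this line of work: a temperature-softened KL term between teacher and student logits plus the usual cross-entropy on ground-truth labels. The temperature, weighting and class count are assumptions, not the paper's exact settings.
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale so gradients match the hard term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(32, 10575), torch.randn(32, 10575),
                         torch.randint(0, 10575, (32,)))
```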
We propose FlowReg, a deep learning-based framework for unsupervised image
registration for neuroimaging applications. The system is composed of two
architectures that are trained sequentially: FlowReg-A which affinely corrects
for gross differences between moving and fixed volumes in 3D followed by
FlowReg-O which performs pixel-wise deformations on a slice-by-slice basis for
fine tuning in 2D. The affine network regresses the 3D affine matrix based on a
correlation loss function that enforces global similarity. The deformable
network operates on 2D image slices based on the optical flow network
FlowNet-Simple but with three loss components. The photometric loss minimizes
pixel intensity differences, the smoothness loss encourages similar
magnitudes between neighbouring vectors, and the correlation loss is used to
maintain the intensity similarity between fixed and moving image slices. The
proposed method is compared to four open-source registration techniques: ANTs,
Demons, SE, and Voxelmorph. In total, 4643 FLAIR MR imaging volumes are used
from dementia and vascular disease cohorts, acquired from over 60 international
centres with varying acquisition parameters. A battery of novel quantitative
registration validation metrics is proposed, focusing on the structural
integrity of tissues, spatial alignment, and intensity similarity. Experimental
results show FlowReg (FlowReg-A+O) performs better than iterative-based
registration algorithms for intensity and spatial alignment metrics with a
Pixelwise Agreement of 0.65, correlation coefficient of 0.80, and Mutual
Information of 0.29. Among the deep learning frameworks, FlowReg-A or
FlowReg-A+O provided the highest performance over all but one of the metrics.
Results show that FlowReg is able to obtain high intensity and spatial
similarity while maintaining the shape and structure of anatomy and pathology. | [
"cs.CV",
"eess.IV"
]
|
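A minimal sketch of the three FlowReg-O loss components described above: a photometric term on the warped versus fixed slice, a smoothness term on the flow field, and a correlation term. This is an illustrative reconstruction; the loss weights and tensor shapes are assumptions.
```python
import torch

def photometric(warped, fixed):                 # L1 intensity difference
    return (warped - fixed).abs().mean()

def smoothness(flow):                           # flow: (B, 2, H, W), penalize spatial gradients
    dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    return dx + dy

def correlation(warped, fixed, eps=1e-6):       # 1 - Pearson correlation over each slice
    w = warped.flatten(1) - warped.flatten(1).mean(dim=1, keepdim=True)
    f = fixed.flatten(1) - fixed.flatten(1).mean(dim=1, keepdim=True)
    corr = (w * f).sum(dim=1) / (w.norm(dim=1) * f.norm(dim=1) + eps)
    return (1 - corr).mean()

warped = torch.rand(4, 1, 256, 256)
fixed = torch.rand(4, 1, 256, 256)
flow = torch.randn(4, 2, 256, 256)
total = photometric(warped, fixed) + 0.5 * smoothness(flow) + 0.5 * correlation(warped, fixed)
```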
The structured time series (STS) classification problem requires the modeling
of interweaved spatiotemporal dependency. Most previous STS classification
methods model the spatial and temporal dependencies independently. Due to the
complexity of the STS data, we argue that a desirable STS classification method
should be a holistic framework that can be made as adaptive and flexible as
possible. This motivates us to design a deep neural network with such merits.
Inspired by the dual-stream hypothesis in neural science, we propose a novel
dual-stream framework for modeling the interweaved spatiotemporal dependency,
and develop a convolutional neural network within this framework that aims to
achieve high adaptability and flexibility in STS configurations from various
aspects, i.e., sequential order, dependency range and features. The proposed
architecture is highly modularized and scalable, making it easy to be adapted
to specific tasks. The effectiveness of our model is demonstrated through
experiments on synthetic data as well as benchmark datasets for skeleton based
activity recognition. | [
"cs.CV"
]
|
Image segmentation and 3D pose estimation are two key cogs in any algorithm
for scene understanding. However, state-of-the-art CRF-based models for image
segmentation rely mostly on 2D object models to construct top-down high-order
potentials. In this paper, we propose new top-down potentials for image
segmentation and pose estimation based on the shape and volume of a 3D object
model. We show that these complex top-down potentials can be easily decomposed
into standard forms for efficient inference in both the segmentation and pose
estimation tasks. Experiments on a car dataset show that knowledge of
segmentation helps perform pose estimation better and vice versa. | [
"cs.CV"
]
|
In this paper, we propose a novel multi-task learning method based on the
deep convolutional network. The proposed deep network has four convolutional
layers, three max-pooling layers, and two parallel fully connected layers. To
adjust the deep network to the multi-task learning problem, we propose to learn a
low-rank deep network so that the relations among different tasks can be
explored. We propose to minimize the number of independent parameter rows of
one fully connected layer to explore the relations among different tasks, which
is measured by the nuclear norm of the parameter of one fully connected layer,
and seek a low-rank parameter matrix. Meanwhile, we also propose to regularize
another fully connected layer by sparsity penalty, so that the useful features
learned by the lower layers can be selected. The learning problem is solved by
an iterative algorithm based on gradient descent and back-propagation
algorithms. The proposed algorithm is evaluated over benchmark data sets of
multiple face attribute prediction, multi-task natural language processing, and
joint economics index predictions. The evaluation results show the advantage of
the low-rank deep CNN model over multi-task problems. | [
"cs.LG",
"stat.ML"
]
|
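A minimal sketch of the two regularizers described above: a nuclear-norm penalty on one fully connected layer (to encourage a low-rank, task-sharing parameter matrix) and an L1 penalty on the other (to select useful features). The layer sizes and coefficients are illustrative assumptions.
```python
import torch
import torch.nn as nn

shared_fc = nn.Linear(512, 10 * 4)   # e.g., 4 tasks x 10 outputs stacked row-wise
select_fc = nn.Linear(512, 512)

def regularizer(lam_rank=1e-3, lam_sparse=1e-4):
    nuclear = torch.linalg.svdvals(shared_fc.weight).sum()   # nuclear norm = sum of singular values
    sparse = select_fc.weight.abs().sum()                    # L1 sparsity penalty
    return lam_rank * nuclear + lam_sparse * sparse

# Added to the task losses before back-propagation:
loss = regularizer()
loss.backward()
```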
The self-organizing map (SOM) is a type of unsupervised learning whose goal is to
discover the underlying structure of the data. In this paper, a new feature
extraction method based on the main idea of Concurrent Self-Organizing Maps (CSOM), a
winner-takes-all collection of small SOM networks, is proposed. Each SOM of the
system is trained individually to provide the best results for one class only. The
experiments confirm that the proposed CSOM-based features are capable of
representing image content better than features extracted from a single big
SOM, and that these features improve the final decision of the CAD system.
Experiments were held on the Mammographic Image Analysis Society (MIAS) dataset. | [
"cs.CV"
]
|
Real-world machine learning systems are achieving remarkable performance in
terms of coarse-grained metrics like overall accuracy and F-1 score. However,
model improvement and development often require fine-grained modeling on
individual data subsets or slices, for instance, the data slices where the
models have unsatisfactory results. In practice, there is tangible value in
developing models that can pay extra attention to critical or interesting
slices while retaining the original overall performance. This work extends the
recent slice-based learning (SBL)~\cite{chen2019slice} with a mixture of
attentions (MoA) to learn slice-aware dual attentive representations. We
empirically show that the MoA approach outperforms the baseline method as well
as the original SBL approach on monitored slices with two natural language
understanding (NLU) tasks. | [
"cs.LG",
"cs.CL"
]
|
We present compositional nearest neighbors (CompNN), a simple approach to
visually interpreting distributed representations learned by a convolutional
neural network (CNN) for pixel-level tasks (e.g., image synthesis and
segmentation). It does so by reconstructing both a CNN's input and output image
by copy-pasting corresponding patches from the training set with similar
feature embeddings. To do so efficiently, it makes use of a patch-match-based
algorithm that exploits the fact that the patch representations learned by a
CNN for pixel-level tasks vary smoothly. Finally, we show that CompNN can be
used to establish semantic correspondences between two images and control
properties of the output image by modifying the images contained in the
training set. We present qualitative and quantitative experiments for semantic
segmentation and image-to-image translation that demonstrate that CompNN is a
good tool for interpreting the embeddings learned by pixel-level CNNs. | [
"cs.CV"
]
|
While representation learning aims to derive interpretable features for
describing visual data, representation disentanglement further results in such
features so that particular image attributes can be identified and manipulated.
However, one cannot easily address this task without observing ground truth
annotation for the training data. To address this problem, we propose a novel
deep learning model of Cross-Domain Representation Disentangler (CDRD). By
observing fully annotated source-domain data and unlabeled target-domain data
of interest, our model bridges the information across data domains and
transfers the attribute information accordingly. Thus, cross-domain joint
feature disentanglement and adaptation can be jointly performed. In the
experiments, we provide qualitative results to verify our disentanglement
capability. Moreover, we further confirm that our model can be applied for
solving classification tasks of unsupervised domain adaptation, and performs
favorably against state-of-the-art image disentanglement and translation
methods. | [
"cs.CV"
]
|
Manual visual inspection performed by certified inspectors is still the main
form of road pothole detection. This process is, however, not only tedious,
time-consuming and costly, but also dangerous for the inspectors. Furthermore,
the road pothole detection results are always subjective, because they depend
entirely on the individual experience. Our recently introduced disparity (or
inverse depth) transformation algorithm allows better discrimination between
damaged and undamaged road areas, and it can be easily deployed to any semantic
segmentation network for better road pothole detection results. To boost the
performance, we propose a novel attention aggregation (AA) framework, which
takes advantage of different types of attention modules. In addition, we
develop an effective training set augmentation technique based on adversarial
domain adaptation, where the synthetic road RGB images and transformed road
disparity (or inverse depth) images are generated to enhance the training of
semantic segmentation networks. The experimental results demonstrate that,
firstly, the transformed disparity (or inverse depth) images become more
informative; secondly, AA-UNet and AA-RTFNet, our best performing
implementations, respectively outperform all other state-of-the-art
single-modal and data-fusion networks for road pothole detection; and finally,
the training set augmentation technique based on adversarial domain adaptation
not only improves the accuracy of the state-of-the-art semantic segmentation
networks, but also accelerates their convergence. | [
"cs.CV",
"cs.RO"
]
|
We present a framework to systematically analyze convolutional neural
networks (CNNs) used in classification of cars in autonomous vehicles. Our
analysis procedure comprises an image generator that produces synthetic
pictures by sampling in a lower dimension image modification subspace and a
suite of visualization tools. The image generator produces images which can be
used to test the CNN and hence expose its vulnerabilities. The presented
framework can be used to extract insights of the CNN classifier, compare across
classification models, or generate training and validation datasets. | [
"cs.CV",
"cs.AI"
]
|
The paper describes our proposed methodology for the seven basic expression
classification track of Affective Behavior Analysis in-the-wild (ABAW)
Competition 2021. In this task, facial expression recognition (FER) methods aim
to classify the correct expression category from a diverse background, but
there are several challenges. First, to adapt the model to in-the-wild
scenarios, we use the knowledge from pre-trained large-scale face recognition
data. Second, we propose an ensemble model with a convolutional neural network
(CNN), a CNN-recurrent neural network (CNN-RNN), and a CNN-Transformer,
to incorporate both spatial and temporal information. Our
ensemble model achieved an F1 score of 0.4133, an accuracy of 0.6216, and a final
metric of 0.4821 on the validation set. | [
"cs.CV",
"eess.IV"
]
|
Convolutional neural networks are nowadays achieving major success in
different pattern recognition problems. These learning models were basically
designed to handle vectorial data such as images but their extension to
non-vectorial and semi-structured data (namely graphs with variable sizes,
topology, etc.) remains a major challenge, though a few interesting solutions
are currently emerging. In this paper, we introduce MLGCN; a novel spectral
Multi-Laplacian Graph Convolutional Network. The main contribution of this
method resides in a new design principle that learns graph Laplacians as convex
combinations of elementary Laplacians, each one dedicated to a particular
topology of the input graphs. We also introduce a novel pooling operator, on
graphs, that proceeds in two steps: context-dependent node expansion is
achieved, followed by a global average pooling; the strength of this two-step
process resides in its ability to preserve the discrimination power of nodes
while achieving permutation invariance. Experiments conducted on SBU and
UCF-101 datasets, show the validity of our method for the challenging task of
action recognition. | [
"cs.CV"
]
|
Humans perceive the world by concurrently processing and fusing
high-dimensional inputs from multiple modalities such as vision and audio.
Machine perception models, in stark contrast, are typically modality-specific
and optimised for unimodal benchmarks, and hence late-stage fusion of final
representations or predictions from each modality (`late-fusion') is still a
dominant paradigm for multimodal video classification. Instead, we introduce a
novel transformer based architecture that uses `fusion bottlenecks' for
modality fusion at multiple layers. Compared to traditional pairwise
self-attention, our model forces information between different modalities to
pass through a small number of bottleneck latents, requiring the model to
collate and condense the most relevant information in each modality and only
share what is necessary. We find that such a strategy improves fusion
performance, at the same time reducing computational cost. We conduct thorough
ablation studies, and achieve state-of-the-art results on multiple audio-visual
classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All
code and models will be released. | [
"cs.CV"
]
|
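A minimal sketch of one bottleneck-fusion layer: each modality attends only over its own tokens concatenated with a small set of shared bottleneck tokens, so cross-modal information must pass through the bottleneck. The sizes, the use of vanilla multi-head attention, and the averaging of bottleneck updates are assumptions.
```python
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tok_a, tok_b, bottleneck):
        # tok_a: audio tokens (B, Na, D); tok_b: visual tokens (B, Nv, D); bottleneck: (B, Nb, D)
        ctx_a = torch.cat([tok_a, bottleneck], dim=1)
        ctx_b = torch.cat([tok_b, bottleneck], dim=1)
        out_a, _ = self.attn_a(ctx_a, ctx_a, ctx_a)           # audio attends over audio + bottleneck
        out_b, _ = self.attn_b(ctx_b, ctx_b, ctx_b)           # video attends over video + bottleneck
        new_a, bott_a = out_a[:, :tok_a.size(1)], out_a[:, tok_a.size(1):]
        new_b, bott_b = out_b[:, :tok_b.size(1)], out_b[:, tok_b.size(1):]
        return new_a, new_b, (bott_a + bott_b) / 2            # fuse the two bottleneck updates

layer = BottleneckFusionLayer()
a, v, z = layer(torch.randn(2, 196, 256), torch.randn(2, 98, 256), torch.randn(2, 4, 256))
```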
Autonomous driving applications use two types of sensor systems to identify
vehicles - depth sensing LiDAR and radiance sensing cameras. We compare the
performance (average precision) of a ResNet for vehicle detection in complex,
daytime, driving scenes when the input is a depth map (D = d(x,y)), a radiance
image (L = r(x,y)), or both [D,L]. (1) When the spatial sampling resolution of
the depth map and radiance image are equal to typical camera resolutions, a
ResNet detects vehicles at higher average precision from depth than radiance.
(2) As the spatial sampling of the depth map declines to the range of current
LiDAR devices, the ResNet average precision is higher for radiance than depth.
(3) For a hybrid system that combines a depth map and radiance image, the
average precision is higher than using depth or radiance alone. We established
these observations in simulation and then confirmed them using real-world data.
The advantage of combining depth and radiance can be explained by noting that
the two types of information have complementary weaknesses. The radiance data
are limited by dynamic range and motion blur. The LiDAR data have relatively
low spatial resolution. The ResNet combines the two data sources effectively to
improve overall vehicle detection. | [
"cs.CV"
]
|
One essential problem in skeleton-based action recognition is how to extract
discriminative features over all skeleton joints. However, the
State-Of-The-Art (SOTA) models for this task tend to be exceedingly
sophisticated and over-parameterized, and their low efficiency in model
training and inference has obstructed development in the field, especially
for large-scale action datasets. In this work, we propose an efficient but
strong baseline based on Graph Convolutional Network (GCN), where three main
improvements are aggregated, i.e., early fused Multiple Input Branches (MIB),
Residual GCN (ResGCN) with bottleneck structure and Part-wise Attention
(PartAtt) block. Firstly, an MIB is designed to enrich informative skeleton
features and retain compact representations at an early fusion stage. Then,
inspired by the success of the ResNet architecture in Convolutional Neural
Network (CNN), a ResGCN module is introduced in GCN to alleviate computational
costs and reduce learning difficulties in model training while maintaining the
model accuracy. Finally, a PartAtt block is proposed to discover the most
essential body parts over a whole action sequence and obtain more explainable
representations for different skeleton action sequences. Extensive experiments
on two large-scale datasets, i.e., NTU RGB+D 60 and 120, validate that the
proposed baseline slightly outperforms other SOTA models and meanwhile requires
much fewer parameters during training and inference procedures, e.g., at most
34 times less than DGNN, which is one of the best SOTA methods. | [
"cs.CV"
]
|
Probabilistic circuits (PCs) are a promising avenue for probabilistic
modeling, as they permit a wide range of exact and efficient inference
routines. Recent ``deep-learning-style'' implementations of PCs strive for a
better scalability, but are still difficult to train on real-world data, due to
their sparsely connected computational graphs. In this paper, we propose Einsum
Networks (EiNets), a novel implementation design for PCs, improving prior art
in several regards. At their core, EiNets combine a large number of arithmetic
operations in a single monolithic einsum-operation, leading to speedups and
memory savings of up to two orders of magnitude, in comparison to previous
implementations. As an algorithmic contribution, we show that the
implementation of Expectation-Maximization (EM) can be simplified for PCs, by
leveraging automatic differentiation. Furthermore, we demonstrate that EiNets
scale well to datasets which were previously out of reach, such as SVHN and
CelebA, and that they can be used as faithful generative image models. | [
"cs.LG",
"stat.ML"
]
|
With the prosperity of the digital video industry, video frame interpolation has
drawn continuous attention in the computer vision community and become a new
focus in industry. Many learning-based methods have been proposed and
achieved progressive results. Among them, a recent algorithm named quadratic
video interpolation (QVI) achieves appealing performance. It exploits
higher-order motion information (e.g. acceleration) and successfully models the
estimation of interpolated flow. However, its produced intermediate frames
still contain some unsatisfactory ghosting, artifacts and inaccurate motion,
especially when large and complex motion occurs. In this work, we further
improve the performance of QVI from three facets and propose an enhanced
quadratic video interpolation (EQVI) model. In particular, we adopt a rectified
quadratic flow prediction (RQFP) formulation with least squares method to
estimate the motion more accurately. Complementary with image pixel-level
blending, we introduce a residual contextual synthesis network (RCSN) to employ
contextual information in high-dimensional feature space, which could help the
model handle more complicated scenes and motion patterns. Moreover, to further
boost the performance, we devise a novel multi-scale fusion network (MS-Fusion)
which can be regarded as a learnable augmentation process. The proposed EQVI
model won the first place in the AIM2020 Video Temporal Super-Resolution
Challenge. | [
"cs.CV"
]
|
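A minimal sketch of rectified quadratic flow prediction with least squares: per-pixel displacements observed at several time offsets are fit to d(t) = v*t + 0.5*a*t^2, and the fitted motion is then evaluated at the interpolation instant. The specific offsets and resolution are assumptions.
```python
import numpy as np

def fit_quadratic_flow(flows, times, t_query):
    # flows: (K, H, W, 2) displacements from the reference frame to K neighbouring frames
    # times: (K,) signed time offsets of those frames; t_query: interpolation instant
    K, H, W, _ = flows.shape
    A = np.stack([times, 0.5 * times ** 2], axis=1)          # (K, 2) design matrix [t, t^2/2]
    b = flows.reshape(K, -1)                                  # (K, H*W*2) observations
    params, *_ = np.linalg.lstsq(A, b, rcond=None)            # least-squares velocity, acceleration
    v, a = params
    d = v * t_query + 0.5 * a * t_query ** 2                  # displacement at the query time
    return d.reshape(H, W, 2)

flows = np.random.randn(3, 64, 64, 2).astype(np.float32)      # e.g., flows to offsets -1, +1, +2
flow_to_t = fit_quadratic_flow(flows, np.array([-1.0, 1.0, 2.0]), t_query=0.5)
```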
While data sharing is crucial for knowledge development, privacy concerns and
strict regulation (e.g., European General Data Protection Regulation (GDPR))
unfortunately limit its full effectiveness. Synthetic tabular data emerges as
an alternative to enable data sharing while fulfilling regulatory and privacy
constraints. The state-of-the-art tabular data synthesizers draw methodologies
from generative Adversarial Networks (GAN) and address two main data types in
the industry, i.e., continuous and categorical. In this paper, we develop
CTAB-GAN, a novel conditional table GAN architecture that can effectively model
diverse data types, including a mix of continuous and categorical variables.
Moreover, we address data imbalance and long-tail issues, i.e., certain
variables have drastic frequency differences across large values. To achieve
those aims, we first introduce the information loss and classification loss to
the conditional GAN. Secondly, we design a novel conditional vector, which
efficiently encodes the mixed data types and skewed distributions of data
variables. We extensively evaluate CTAB-GAN against state-of-the-art GANs that
generate synthetic tables, in terms of data similarity and analysis utility.
The results on five datasets show that the synthetic data of CTAB-GAN
remarkably resembles the real data for all three types of variables and results
in higher accuracy for five machine learning algorithms, by up to 17%. | [
"cs.LG",
"I.2.m"
]
|
We develop a language-guided navigation task set in a continuous 3D
environment where agents must execute low-level actions to follow natural
language navigation directions. By being situated in continuous environments,
this setting lifts a number of assumptions implicit in prior work that
represents environments as a sparse graph of panoramas with edges corresponding
to navigability. Specifically, our setting drops the presumptions of known
environment topologies, short-range oracle navigation, and perfect agent
localization. To contextualize this new task, we develop models that mirror
many of the advances made in prior settings as well as single-modality
baselines. While some of these techniques transfer, we find significantly lower
absolute performance in the continuous setting -- suggesting that performance
in prior `navigation-graph' settings may be inflated by the strong implicit
assumptions. | [
"cs.CV",
"cs.CL",
"cs.RO"
]
|
This paper proposes a joint segmentation and deconvolution Bayesian method
for medical ultrasound (US) images. Contrary to piecewise homogeneous images,
US images exhibit heavy characteristic speckle patterns correlated with the
tissue structures. The generalized Gaussian distribution (GGD) has been shown
to be one of the most relevant distributions for characterizing the speckle in
US images. Thus, we propose a GGD-Potts model defined by a label map coupling
US image segmentation and deconvolution. The Bayesian estimators of the unknown
model parameters, including the US image, the label map and all the
hyperparameters, are difficult to express in closed form. Thus, we
investigate a Gibbs sampler to generate samples distributed according to the
posterior of interest. These generated samples are finally used to compute the
Bayesian estimators of the unknown parameters. The performance of the proposed
Bayesian model is compared with existing approaches via several experiments
conducted on realistic synthetic data and in vivo US images. | [
"cs.CV"
]
|
Designing effective architectures is one of the key factors behind the
success of deep neural networks. Existing deep architectures are either
manually designed or automatically searched by some Neural Architecture Search
(NAS) methods. However, even a well-designed/searched architecture may still
contain many nonsignificant or redundant modules/operations. Thus, it is
necessary to optimize the operations inside an architecture to improve the
performance without introducing extra computational cost. To this end, we have
proposed a Neural Architecture Transformer (NAT) method which casts the
optimization problem into a Markov Decision Process (MDP) and seeks to replace
the redundant operations with more efficient operations, such as skip or null
connection. Note that NAT only considers a small number of possible transitions
and thus comes with a limited search/transition space. As a result, such a
small search space may hamper the performance of architecture optimization. To
address this issue, we propose a Neural Architecture Transformer++ (NAT++)
method which further enlarges the set of candidate transitions to improve the
performance of architecture optimization. Specifically, we present a two-level
transition rule to obtain valid transitions, i.e., allowing operations to have
more efficient types (e.g., convolution->separable convolution) or smaller
kernel sizes (e.g., 5x5->3x3). Note that different operations may have
different valid transitions. We further propose a Binary-Masked Softmax
(BMSoftmax) layer to omit the possible invalid transitions. Extensive
experiments on several benchmark datasets show that the transformed
architecture significantly outperforms both its original counterpart and the
architectures optimized by existing methods. | [
"cs.CV"
]
|
Salient object detection has been attracting a lot of interest, and recently
various heuristic computational models have been designed. In this paper, we
formulate saliency map computation as a regression problem. Our method, which
is based on multi-level image segmentation, utilizes the supervised learning
approach to map the regional feature vector to a saliency score. Saliency
scores across multiple levels are finally fused to produce the saliency map.
The contributions are two-fold. One is that we propose a discriminative
regional feature integration approach for salient object detection. Compared
with existing heuristic models, our proposed method is able to automatically
integrate high-dimensional regional saliency features and choose discriminative
ones. The other is that by investigating standard generic region properties as
well as two widely studied concepts for salient object detection, i.e.,
regional contrast and backgroundness, our approach significantly outperforms
state-of-the-art methods on six benchmark datasets. Meanwhile, we demonstrate
that our method runs as fast as most existing algorithms. | [
"cs.CV"
]
|
This paper focuses on inverse reinforcement learning for autonomous
navigation using distance and semantic category observations. The objective is
to infer a cost function that explains demonstrated behavior while relying only
on the expert's observations and state-control trajectory. We develop a map
encoder, that infers semantic category probabilities from the observation
sequence, and a cost encoder, defined as a deep neural network over the
semantic features. Since the expert cost is not directly observable, the model
parameters can only be optimized by differentiating the error between
demonstrated controls and a control policy computed from the cost estimate. We
propose a new model of expert behavior that enables error minimization using a
closed-form subgradient computed only over a subset of promising states via a
motion planning algorithm. Our approach allows generalizing the learned
behavior to new environments with new spatial configurations of the semantic
categories. We analyze the different components of our model in a minigrid
environment. We also demonstrate that our approach learns to follow traffic
rules in the autonomous driving CARLA simulator by relying on semantic
observations of buildings, sidewalks, and road lanes. | [
"cs.LG",
"cs.RO"
]
|
Learning discriminative features is crucial for various robotic applications
such as object detection and classification. In this paper, we present a
general framework for the analysis of the discriminative properties of haptic
signals. Our focus is on two crucial components of a robotic perception system:
discriminative feature extraction and metric-based feature transformation to
enhance the separability of haptic signals in the projected space. We propose a
set of hand-crafted haptic features (generated only from acceleration data),
which enables discrimination of real-world textures. Since the Euclidean space
does not reflect the underlying pattern in the data, we propose to learn an
appropriate transformation function to project the feature onto the new space
and apply different pattern recognition algorithms for texture classification
and discrimination tasks. Unlike other existing methods, we use a triplet-based
method for improved discrimination in the embedded space. We further
demonstrate how to build a haptic vocabulary by selecting a compact set of the
most distinct and representative signals in the embedded space. The
experimental results show that the proposed features augmented with learned
embeddings improve the performance of semantic discrimination tasks such as
classification and clustering, and outperform the related state-of-the-art. | [
"cs.LG",
"cs.HC",
"stat.ML"
]
|
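A minimal sketch of the triplet-based metric learning step: an embedding network is trained so that a haptic signal (anchor) lies closer to a sample of the same texture (positive) than to one of a different texture (negative) by a margin. The feature and embedding dimensions are assumptions.
```python
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))   # 128-d hand-crafted features
criterion = nn.TripletMarginLoss(margin=1.0)

anchor, positive, negative = (torch.randn(32, 128) for _ in range(3))
loss = criterion(embed(anchor), embed(positive), embed(negative))
loss.backward()
# At test time, textures are compared by Euclidean distance in the 32-d embedded space,
# and a compact haptic vocabulary can be built by clustering the embedded signals.
```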
Data corruption is an impediment to modern machine learning deployments.
Corrupted data can severely bias the learned model and can also lead to invalid
inferences. We present, Picket, a simple framework to safeguard against data
corruptions during both training and deployment of machine learning models over
tabular data. For the training stage, Picket identifies and removes corrupted
data points from the training data to avoid obtaining a biased model. For the
deployment stage, Picket flags, in an online manner, corrupted query points to
a trained machine learning model that due to noise will result in incorrect
predictions. To detect corrupted data, Picket uses a self-supervised deep
learning model for mixed-type tabular data, which we call PicketNet. To
minimize the burden of deployment, learning a PicketNet model does not require
any human-labeled data. Picket is designed as a plugin that can increase the
robustness of any machine learning pipeline. We evaluate Picket on a diverse
array of real-world data considering different corruption models that include
systematic and adversarial noise during both training and testing. We show that
Picket consistently safeguards against corrupted data during both training and
deployment of various models ranging from SVMs to neural networks, beating a
diverse array of competing methods that span from data quality validation
models to robust outlier-detection models. | [
"cs.LG",
"stat.ML",
"68U35, 68T05, 68-04",
"H.2.8"
]
|
Representation learning from 3D point clouds is challenging due to their
inherent nature of permutation invariance and irregular distribution in space.
Existing deep learning methods follow a hierarchical feature extraction
paradigm in which high-level abstract features are derived from low-level
features. However, they fail to exploit different granularity of information
due to the limited interaction between these features. To this end, we propose
Multi-Abstraction Refinement Network (MARNet) that ensures an effective
exchange of information between multi-level features to gain local and global
contextual cues while effectively preserving them till the final layer. We
empirically show the effectiveness of MARNet in terms of state-of-the-art
results on two challenging tasks: Shape classification and Coarse-to-fine
grained semantic segmentation. MARNet significantly improves the classification
performance by 2% over the baseline and outperforms the state-of-the-art
methods on semantic segmentation task. | [
"cs.CV"
]
|
The multivariate time series generated from merchant transaction history can
provide critical insights for payment processing companies. The capability of
predicting merchants' future is crucial for fraud detection and recommendation
systems. Conventionally, this problem is formulated to predict one multivariate
time series under the multi-horizon setting. However, real-world applications
often require more than one future trend prediction considering the
uncertainties, where more than one multivariate time series needs to be
predicted. This problem is called multi-future prediction. In this work, we
combine the two research directions and propose to study this new problem:
multi-future, multi-horizon and multivariate time series prediction. This
problem is crucial as it has broad use cases in the financial industry to
reduce the risk while improving user experience by providing alternative
futures. This problem is also challenging as now we not only need to capture
the patterns and insights from the past but also train a model that has a
strong inference capability to project multiple possible outcomes. To solve
this problem, we propose a new model using convolutional neural networks and a
simple yet effective encoder-decoder structure to learn the time series pattern
from multiple perspectives. We use experiments on real-world merchant
transaction data to demonstrate the effectiveness of our proposed model. We
also provide extensive discussions on different model design choices in our
experimental section. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
Conventional RGB-D salient object detection methods aim to leverage depth as
complementary information to find the salient regions in both modalities.
However, the salient object detection results heavily rely on the quality of
captured depth data which sometimes are unavailable. In this work, we make the
first attempt to solve the RGB-D salient object detection problem with a novel
depth-awareness framework. This framework only relies on RGB data in the
testing phase, utilizing captured depth data as supervision for representation
learning. To construct our framework as well as achieving accurate salient
detection results, we propose a Ubiquitous Target Awareness (UTA) network to
solve three important challenges in RGB-D SOD task: 1) a depth awareness module
to excavate depth information and to mine ambiguous regions via adaptive
depth-error weights, 2) a spatial-aware cross-modal interaction and a
channel-aware cross-level interaction, exploiting the low-level boundary cues
and amplifying high-level salient channels, and 3) a gated multi-scale
predictor module to perceive the object saliency in different contextual
scales. Besides its high performance, our proposed UTA network is depth-free
for inference and runs in real-time with 43 FPS. Experimental evidence
demonstrates that our proposed network not only surpasses the state-of-the-art
methods on five public RGB-D SOD benchmarks by a large margin, but also
verifies its extensibility on five public RGB SOD benchmarks. | [
"cs.CV"
]
|
Disentanglement is at the forefront of unsupervised learning, as disentangled
representations of data improve generalization, interpretability, and
performance in downstream tasks. Current unsupervised approaches remain
inapplicable for real-world datasets since they are highly variable in their
performance and fail to reach levels of disentanglement of (semi-)supervised
approaches. We introduce population-based training (PBT) for improving
consistency in training variational autoencoders (VAEs) and demonstrate the
validity of this approach in a supervised setting (PBT-VAE). We then use
Unsupervised Disentanglement Ranking (UDR) as an unsupervised heuristic to
score models in our PBT-VAE training and show how models trained this way tend
to consistently disentangle only a subset of the generative factors. Building
on top of this observation we introduce the recursive rPU-VAE approach. We
train the model until convergence, remove the learned factors from the dataset
and reiterate. In doing so, we can label subsets of the dataset with the
learned factors and consecutively use these labels to train one model that
fully disentangles the whole dataset. With this approach, we show striking
improvement in state-of-the-art unsupervised disentanglement performance and
robustness across multiple datasets and metrics. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
Object detection for robot guidance is a crucial mission for autonomous
robots, which has attracted extensive attention from researchers. However, the
changing viewpoint during robot movement and the limited available data hinder research
in this area. To address these matters, we propose a new vision system for
robots, the model adaptation object detection system. Instead of using a single
network to solve all problems, we make use of different object detection neural
networks to guide the robot in accordance with various situations, with the
help of a meta neural network that allocates among the object detection neural networks.
Furthermore, taking advantage of transfer learning technology and depthwise
separable convolutions, our model is easy to train and can address small
dataset problems. | [
"cs.CV",
"cs.RO"
]
|
Endoscopic videos from multicentres often have different imaging conditions,
e.g., color and illumination, which make the models trained on one domain
usually fail to generalize well to another. Domain adaptation is one of the
potential solutions to address the problem. However, few existing works have
focused on the translation of video-based data. In this work, we propose a
novel generative adversarial network (GAN), namely VideoGAN, to transfer the
video-based data across different domains. As the frames of a video may have
similar content and imaging conditions, the proposed VideoGAN has an X-shape
generator to preserve the intra-video consistency during translation.
Furthermore, a loss function, namely color histogram loss, is proposed to tune
the color distribution of each translated frame. Two colonoscopic datasets from
different centres, i.e., CVC-Clinic and ETIS-Larib, are adopted to evaluate the
performance of domain adaptation of our VideoGAN. Experimental results
demonstrate that the adapted colonoscopic video generated by our VideoGAN can
significantly boost the segmentation accuracy of colorectal polyps on
multicentre datasets, i.e., an improvement of 5%. As our VideoGAN is a general network
architecture, we also evaluate its performance with the CamVid driving video
dataset on the cloudy-to-sunny translation task. Comprehensive experiments show
that the domain gap could be substantially narrowed down by our VideoGAN. | [
"cs.CV"
]
|
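The color histogram loss is described only at a high level, so here is a hedged sketch of one way to realise it: per-channel soft (differentiable) histograms of the translated and reference frames, compared with an L1 distance. The soft-binning trick and the bin count and sigma values are assumptions, not the paper's exact formulation.

```python
import torch

def soft_histogram(img, bins=64, sigma=0.02):
    # img: (C, H, W) tensor with values in [0, 1]
    centers = torch.linspace(0.0, 1.0, bins, device=img.device)
    x = img.flatten(1).unsqueeze(-1)                 # (C, H*W, 1)
    weights = torch.exp(-0.5 * ((x - centers) / sigma) ** 2)
    hist = weights.sum(dim=1)                        # (C, bins)
    return hist / hist.sum(dim=1, keepdim=True)      # per-channel distribution

def color_histogram_loss(translated, reference, bins=64):
    # L1 distance between per-channel soft colour histograms
    h_t = soft_histogram(translated, bins)
    h_r = soft_histogram(reference, bins)
    return (h_t - h_r).abs().mean()
```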
Differentiable Architecture Search (DARTS) has attracted a lot of attention
due to its simplicity and small search costs achieved by a continuous
relaxation and an approximation of the resulting bi-level optimization problem.
However, DARTS does not work robustly for new problems: we identify a wide
range of search spaces for which DARTS yields degenerate architectures with
very poor test performance. We study this failure mode and show that, while
DARTS successfully minimizes validation loss, the found solutions generalize
poorly when they coincide with high validation loss curvature in the
architecture space. We show that by adding one of various types of
regularization we can robustify DARTS to find solutions with less curvature and
better generalization properties. Based on these observations, we propose
several simple variations of DARTS that perform substantially more robustly in
practice. Our observations are robust across five search spaces on three image
classification tasks and also hold for the very different domains of disparity
estimation (a dense regression task) and language modelling. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
]
|
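As one concrete example of "adding regularization", the sketch below applies a stronger explicit L2 penalty on the network weights during the inner update of the bi-level problem. The learning rate and weight-decay values are placeholders, and other regularisers (e.g. data augmentation or drop-path) mentioned as alternatives are not shown.

```python
import torch

def inner_weight_step(model, batch, criterion, lr=0.025, weight_decay=3e-3):
    """One inner-loop update of the network weights with an explicit L2 penalty."""
    x, y = batch
    loss = criterion(model(x), y)
    l2 = sum((p ** 2).sum() for p in model.parameters())
    (loss + weight_decay * l2).backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad          # plain SGD step on the regularised loss
                p.grad = None
    return loss.item()
```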
Speech-driven facial animation is useful for a variety of applications such
as telepresence, chatbots, etc. The necessary attributes of having a realistic
face animation are (1) audio-visual synchronization, (2) identity preservation of
the target individual, (3) plausible mouth movements, and (4) the presence of natural eye
blinks. The existing methods mostly address the audio-visual lip
synchronization, and few recent works have addressed the synthesis of natural
eye blinks for overall video realism. In this paper, we propose a method for
identity-preserving realistic facial animation from speech. We first generate
person-independent facial landmarks from audio using DeepSpeech features for
invariance to different voices, accents, etc. To add realism, we impose eye
blinks on the facial landmarks using unsupervised learning and retarget the
person-independent landmarks to person-specific landmarks to preserve the
identity-related facial structure, which helps in the generation of plausible
mouth shapes of the target identity. Finally, we use LSGAN to generate the
facial texture from person-specific facial landmarks, using an attention
mechanism that helps to preserve identity-related texture. An extensive
comparison of our proposed method with the current state-of-the-art methods
demonstrates a significant improvement in terms of lip synchronization
accuracy, image reconstruction quality, sharpness, and identity-preservation. A
user study also reveals improved realism of our animation results over the
state-of-the-art methods. To the best of our knowledge, this is the first work
in speech-driven 2D facial animation that simultaneously addresses all the
above-mentioned attributes of a realistic speech-driven face animation. | [
"cs.CV"
]
|
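Since the texture generator above is trained with LSGAN, the least-squares adversarial objectives are worth spelling out. The sketch below is the standard LSGAN loss pair (real label 1, fake label 0), independent of the paper's specific architecture and attention mechanism.

```python
import torch

def lsgan_d_loss(d_real, d_fake):
    # Least-squares GAN discriminator objective (real -> 1, fake -> 0)
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake):
    # Generator tries to push discriminator outputs on fakes toward 1
    return 0.5 * ((d_fake - 1.0) ** 2).mean()
```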
We present a novel system that takes as input video frames of a musician
playing the piano and generates the music for that video. Generation of music
from visual cues is a challenging problem and it is not clear whether it is an
attainable goal at all. Our main aim in this work is to explore the
plausibility of such a transformation and to identify cues and components able
to carry the association of sounds with visual events. To achieve the
transformation we built a full pipeline named `\textit{Audeo}' containing three
components. We first translate the video frames of the keyboard and the
musician's hand movements into a raw mechanical symbolic musical representation,
a Piano-Roll (Roll), for each video frame, which represents the keys pressed at
each time step. We then adapt the Roll to be amenable to audio synthesis by
including temporal correlations. This step turns out to be critical for
meaningful audio generation. As a last step, we implement Midi synthesizers to
generate realistic music. \textit{Audeo} converts video to audio smoothly and
clearly with only a few setup constraints. We evaluate \textit{Audeo} on `in
the wild' piano performance videos and find that the generated music is of
reasonable audio quality and can be successfully recognized with high precision
by popular music identification software. | [
"cs.CV",
"cs.LG",
"cs.MM",
"cs.SD",
"eess.AS",
"eess.IV"
]
|
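To make the Piano-Roll intermediate concrete, the sketch below converts a binary (pitch x frame) Roll into note events with onset and offset times, which a MIDI synthesizer could then render. The frame rate and the grouping of consecutive active frames into single key presses are simplifying assumptions, not Audeo's exact post-processing.

```python
import numpy as np

def roll_to_notes(roll, fps=25):
    # roll: (num_pitches, num_frames) binary matrix, 1 = key pressed in that frame
    notes = []
    for pitch in range(roll.shape[0]):
        active = np.flatnonzero(roll[pitch])
        if active.size == 0:
            continue
        # split consecutive active frames into separate key presses
        segments = np.split(active, np.where(np.diff(active) > 1)[0] + 1)
        for seg in segments:
            # (pitch index, onset in seconds, offset in seconds)
            notes.append((pitch, seg[0] / fps, (seg[-1] + 1) / fps))
    return notes
```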
We present a generative adversarial learning approach to synthesize gaze
behavior of a given personality. We train the model using an existing data set
that comprises eye-tracking data and personality traits of 42 participants
performing an everyday task. Given the values of Big-Five personality traits
(openness, conscientiousness, extroversion, agreeableness, and neuroticism),
our model generates time series data consisting of gaze target, blinking times,
and pupil dimensions. We use the generated data to synthesize the gaze motion
of virtual agents on a game engine. | [
"cs.CV",
"cs.GR",
"I.3.0; J.4; I.2.6"
]
|
Most achievements in artificial intelligence so far have been accomplished
by supervised learning, which requires large amounts of annotated training data
and thus considerable manual labeling effort. Unsupervised learning is one of
the effective ways to overcome such difficulties. In our work, we propose
AugNet, a new deep learning training paradigm to learn image features from a
collection of unlabeled pictures. We develop a method to construct the
similarities between pictures as distance metrics in the embedding space by
leveraging the inter-correlation between augmented versions of samples. Our
experiments demonstrate that the method is able to represent the image in low
dimensional space and performs competitively in downstream tasks such as image
classification and image similarity comparison. Specifically, we achieved over
60% and 27% accuracy on the STL10 and CIFAR100 datasets with unsupervised
clustering, respectively. Moreover, unlike many deep-learning-based image
retrieval algorithms, our approach does not require access to external
annotated datasets to train the feature extractor, but still shows comparable
or even better feature representation ability and easy-to-use characteristics.
In our evaluations, the method outperforms all the state-of-the-art image
retrieval algorithms on some out-of-domain image datasets. The code for the
model implementation is available at
https://github.com/chenmingxiang110/AugNet. | [
"cs.CV"
]
|
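The description of "similarities between augmented versions as distance metrics in the embedding space" suggests a contrastive-style objective. The sketch below uses a standard NT-Xent formulation as a stand-in, which may differ from AugNet's actual loss; it simply pulls embeddings of two augmented views of the same image together and pushes all other pairs in the batch apart.

```python
import torch
import torch.nn.functional as F

def augmentation_contrastive_loss(z_a, z_b, temperature=0.5):
    # z_a, z_b: (N, D) embeddings of two augmented views of the same N images.
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    z = torch.cat([z_a, z_b], dim=0)                   # (2N, D)
    sim = z @ z.t() / temperature                      # cosine similarities
    n = z_a.shape[0]
    sim.fill_diagonal_(float("-inf"))                  # ignore self-similarity
    # positive of row i is its augmented counterpart in the other half
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets.to(z.device))
```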
Real-world time series data often present recurrent or repetitive patterns
and are often generated in real time, such as transportation passenger
volume, network traffic, system resource consumption, energy usage, and human
gait. Detecting anomalous events based on machine learning approaches in such
time series data has been an active research topic in many different areas.
However, most machine learning approaches require labeled datasets and offline
training, and may suffer from high computational complexity, consequently
hindering their applicability. Providing a lightweight self-adaptive approach
that does not need offline training in advance and meanwhile is able to detect
anomalies in real time could be highly beneficial. Such an approach could be
immediately applied and deployed on any commodity machine to provide timely
anomaly alerts. To facilitate such an approach, this paper introduces SALAD,
which is a Self-Adaptive Lightweight Anomaly Detection approach based on a
special type of recurrent neural networks called Long Short-Term Memory (LSTM).
Instead of using offline training, SALAD converts a target time series into a
series of average absolute relative error (AARE) values on the fly and predicts
an AARE value for every upcoming data point based on short-term historical AARE
values. If the difference between a calculated AARE value and its corresponding
forecast AARE value is higher than a self-adaptive detection threshold, the
corresponding data point is considered anomalous. Otherwise, the data point is
considered normal. Experiments based on two real-world open-source time series
datasets demonstrate that SALAD outperforms five other state-of-the-art anomaly
detection approaches in terms of detection accuracy. In addition, the results
also show that SALAD is lightweight and can be deployed on a commodity machine. | [
"cs.LG"
]
|
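A minimal sketch of the detection rule described above: the AARE of a new point is compared against a threshold derived from the recent AARE history. The k-sigma rule used here is only an illustrative choice of "self-adaptive detection threshold"; SALAD's exact thresholding and LSTM forecaster are not reproduced.

```python
import numpy as np

def aare(actual, predicted):
    # Average Absolute Relative Error over a short window of points
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / np.maximum(np.abs(actual), 1e-8))

def is_anomalous(aare_value, recent_aares, k=3.0):
    # Self-adaptive rule: flag the point if its AARE deviates from the recent
    # AARE history by more than k standard deviations; the threshold adapts as
    # the history evolves.
    mu, sigma = np.mean(recent_aares), np.std(recent_aares)
    return abs(aare_value - mu) > k * sigma
```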
Visual question answering (VQA) is a challenging multi-modal task that
requires not only the semantic understanding of both images and questions, but
also a sound step-by-step reasoning process that would lead
to the correct answer. So far, most successful attempts in VQA have been
focused on only one aspect, either the interaction of visual pixel features of
images and word features of questions, or the reasoning process of answering
the question in an image with simple objects. In this paper, we propose a deep
reasoning VQA model with explicit visual structure-aware textual information,
which works well in capturing the step-by-step reasoning process and detecting
complex object relationships in photo-realistic images. The REXUP network consists
of two branches, image object-oriented and scene graph-oriented, which jointly
work with a super-diagonal fusion compositional attention network. We
quantitatively and qualitatively evaluate REXUP on the GQA dataset and conduct
extensive ablation studies to explore the reasons behind REXUP's effectiveness.
Our best model significantly outperforms the previous state-of-the-art,
delivering 92.7% on the validation set and 73.1% on the test-dev set. | [
"cs.CV",
"cs.AI"
]
|
Alarm root cause analysis is a significant component in the day-to-day
telecommunication network maintenance, and it is critical for efficient and
accurate fault localization and failure recovery. In practice, accurate and
self-adjustable alarm root cause analysis is a great challenge due to network
complexity and the vast number of alarms. A popular approach for failure root
cause identification is to construct a graph with approximate edges, commonly
based on either event co-occurrences or conditional independence tests.
However, considerable expert knowledge is typically required for edge pruning.
We propose a novel data-driven framework for root cause alarm localization,
combining both causal inference and network embedding techniques. In this
framework, we design a hybrid causal graph learning method (HPCI), which
combines Hawkes Process with Conditional Independence tests, and propose
a novel Causal Propagation-Based Embedding algorithm (CPBE) to infer edge
weights. We subsequently discover root cause alarms in a real-time data stream
by applying an influence maximization algorithm on the weighted graph. We
evaluate our method on artificial data and real-world telecom data, showing a
significant improvement over the best baselines. | [
"cs.LG",
"cs.AI",
"cs.SI"
]
|
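The final step, applying an influence maximization algorithm on the weighted causal graph, can be sketched with a standard greedy seed-selection routine under the independent cascade model, where the learned edge weights act as propagation probabilities. This is a generic sketch under those assumptions, not the paper's specific algorithm; `graph` is a plain dict mapping each alarm to a list of (neighbor, probability) pairs.

```python
import random

def simulate_spread(graph, seeds, trials=200):
    # Monte-Carlo estimate of spread under the independent cascade model,
    # interpreting edge weights as propagation probabilities.
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr, p in graph.get(node, []):
                if nbr not in active and random.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials

def greedy_root_causes(graph, k=3):
    # Greedily pick the k alarms whose activation explains the largest spread.
    seeds = []
    for _ in range(k):
        best = max(set(graph) - set(seeds),
                   key=lambda v: simulate_spread(graph, seeds + [v]))
        seeds.append(best)
    return seeds
```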