text (string, 29–3.31k chars) | label (list, 1–11 labels)
---|---|
We address the problem of text-guided video temporal grounding, which aims to
identify the time interval of a certain event based on a natural language
description. Different from most existing methods that only consider RGB images
as visual features, we propose a multi-modal framework to extract complementary
information from videos. Specifically, we adopt RGB images for appearance,
optical flow for motion, and depth maps for image structure. While RGB images
provide abundant visual cues for a certain event, the performance may be
affected by background clutter. Therefore, we use optical flow to focus on large motion
and depth maps to infer the scene configuration when the action is related to
objects recognizable with their shapes. To integrate the three modalities more
effectively and enable inter-modal learning, we design a dynamic fusion scheme
with transformers to model the interactions between modalities. Furthermore, we
apply intra-modal self-supervised learning to enhance feature representations
across videos for each modality, which also facilitates multi-modal learning.
We conduct extensive experiments on the Charades-STA and ActivityNet Captions
datasets, and show that the proposed method performs favorably against
state-of-the-art approaches. | [
"cs.CV"
]
|
Biology is both an important application area and a source of motivation for
development of advanced machine learning techniques. Although much attention
has been paid to large and complex data sets resulting from high-throughput
sequencing, advances in high-quality video recording technology have begun to
generate similarly rich data sets requiring sophisticated techniques from both
computer vision and time-series analysis. Moreover, just as studying gene
expression patterns in one organism can reveal general principles that apply to
other organisms, the study of complex social interactions in an experimentally
tractable model system, such as a laboratory ant colony, can provide general
principles about the dynamics of other social groups. Here, we focus on one
such example from the study of reproductive regulation in small laboratory
colonies of more than 50 Harpegnathos ants. These ants can be artificially
induced to begin a ~20-day process of hierarchy reformation. Although the
conclusion of this process is conspicuous to a human observer, it remains
unclear which behaviors during the transient period are contributing to the
process. To address this issue, we explore the potential application of
One-class Classification (OC) to the detection of abnormal states in ant
colonies for which behavioral data is only available for the normal societal
conditions during training. Specifically, we build upon the Deep Support Vector
Data Description (DSVDD) and introduce the Inner-Outlier Generator (IO-GEN)
that synthesizes fake "inner outlier" observations during training that are
near the center of the DSVDD data description. We show that IO-GEN increases
the reliability of the final OC classifier relative to other DSVDD baselines.
This method can be used to screen video frames for which additional human
observation is needed. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
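As a rough sketch of the Deep SVDD objective that IO-GEN builds on, the following minimal PyTorch example pulls embeddings of normal observations toward a fixed center (the network, sizes, and names are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=784, feat_dim=32):
        super().__init__()
        # Bias-free layers are standard in Deep SVDD to avoid the
        # trivial solution of mapping everything exactly onto c.
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128, bias=False), nn.ReLU(),
            nn.Linear(128, feat_dim, bias=False),
        )

    def forward(self, x):
        return self.net(x)

def dsvdd_loss(phi_x, center):
    # One-class objective: pull normal samples toward the center.
    return ((phi_x - center) ** 2).sum(dim=1).mean()

encoder = Encoder()
x = torch.randn(16, 784)             # a batch of "normal" observations
with torch.no_grad():
    center = encoder(x).mean(dim=0)  # center c from an initial pass
loss = dsvdd_loss(encoder(x), center)
loss.backward()
```

IO-GEN then trains a generator to synthesize "inner outlier" samples near this center, which the final one-class classifier must separate from real normal data.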
Nowadays, deep neural networks (DNNs) have become the main instrument for
machine learning tasks within a wide range of domains, including vision, NLP,
and speech. Meanwhile, in the important case of heterogeneous tabular data, the
advantage of DNNs over shallow counterparts remains questionable. In
particular, there is insufficient evidence that deep learning machinery allows
for constructing methods that outperform gradient boosting decision trees (GBDT),
which are often the top choice for tabular problems. In this paper, we
introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning
architecture, designed to work with any tabular data. In a nutshell, the
proposed NODE architecture generalizes ensembles of oblivious decision trees,
but benefits from both end-to-end gradient-based optimization and the power of
multi-layer hierarchical representation learning. With an extensive
experimental comparison to the leading GBDT packages on a large number of
tabular datasets, we demonstrate the advantage of the proposed NODE
architecture, which outperforms the competitors on most of the tasks. We
open-source the PyTorch implementation of NODE and believe that it will become
a universal framework for machine learning on tabular data. | [
"cs.LG",
"stat.ML"
]
|
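The core building block of NODE is a differentiable oblivious decision tree. Below is a simplified PyTorch sketch of one such tree, with softmax/sigmoid relaxations standing in for the entmax transformations used in the paper (all names and sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftObliviousTree(nn.Module):
    def __init__(self, in_dim, depth=3, out_dim=1):
        super().__init__()
        self.depth = depth
        self.feature_logits = nn.Parameter(torch.randn(depth, in_dim))
        self.thresholds = nn.Parameter(torch.zeros(depth))
        # An oblivious tree of depth d has 2^d leaves, one response each.
        self.leaf_values = nn.Parameter(torch.randn(2 ** depth, out_dim))

    def forward(self, x):
        # Soft feature selection: one shared split per tree level.
        feat = x @ F.softmax(self.feature_logits, dim=-1).t()  # (B, depth)
        gates = torch.sigmoid(feat - self.thresholds)          # P(go right)
        # Probability of reaching each leaf = product of level decisions.
        probs = torch.ones(x.size(0), 1, device=x.device)
        for d in range(self.depth):
            g = gates[:, d:d + 1]
            probs = torch.cat([probs * (1 - g), probs * g], dim=1)
        return probs @ self.leaf_values                        # (B, out_dim)

tree = SoftObliviousTree(in_dim=10)
y = tree(torch.randn(32, 10))  # trainable end-to-end with any loss
```

NODE stacks many such trees, and layers of them, and trains the whole ensemble end-to-end by gradient descent.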
This paper is concerned with the inverse problem of recovering the unknown
signal components, along with extraction of their instantaneous frequencies
(IFs), governed by the adaptive harmonic model (AHM), from discrete (and
possibly non-uniform) samples of the blind-source composite signal.
None of the existing decomposition methods and algorithms, including the most
popular empirical mode decomposition (EMD) computational scheme and its current
modifications, is capable of solving this inverse problem.
In order to meet the AHM formulation and to extract the IFs of the decomposed
components, called intrinsic mode functions (IMFs), each IMF of EMD is extended
to an analytic function in the upper half of the complex plane via the Hilbert
transform, followed by taking the real part of the polar form of the analytic
extension.
Unfortunately, this approach most often fails to resolve the inverse problem
satisfactorily.
More recently, to resolve the inverse problem, the notion of synchrosqueezed
wavelet transform (SST) was proposed by Daubechies and Maes, and further
developed in many other papers, while a more direct method, called signal
separation operation (SSO), was proposed and developed in our previous work
published in the journal, Applied and Computational Harmonic Analysis, vol.
30(2):243-261, 2016.
In the present paper, we propose a synthesis of SSO using a deep neural
network, based directly on a discrete (and possibly non-uniform) sample set of
the blind-source signal.
Our method is localized, as illustrated by a number of numerical examples,
including components with different signal arrival and departure times.
It also yields short-term prediction of the signal components, along with
their IFs.
Our neural networks are inspired by theory, designed so that they do not
require any training in the traditional sense. | [
"cs.LG",
"eess.SP",
"stat.ML"
]
|
Prior work on training generative Visual Dialog models with reinforcement
learning (Das et al.) has explored a Qbot-Abot image-guessing game and shown
that this 'self-talk' approach can lead to improved performance at the
downstream dialog-conditioned image-guessing task. However, this improvement
saturates and starts degrading after a few rounds of interaction, and does not
lead to a better Visual Dialog model. We find that this is due in part to
repeated interactions between Qbot and Abot during self-talk, which are not
informative with respect to the image. To improve this, we devise a simple
auxiliary objective that incentivizes Qbot to ask diverse questions, thus
reducing repetitions and in turn enabling Abot to explore a larger state space
during RL, i.e., be exposed to more visual concepts to talk about, and varied
questions to answer. We evaluate our approach via a host of automatic metrics
and human studies, and demonstrate that it leads to better dialog, i.e., dialog
that is more diverse (i.e., less repetitive), consistent (i.e., has fewer
conflicting exchanges), fluent (i.e., more human-like), and detailed, while still
being comparably image-relevant as prior work and ablations. | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.CV",
"stat.ML"
]
|
Compression and efficient storage of neural network (NN) parameters is
critical for applications that run on resource-constrained devices. Although NN
model compression has made significant progress, there has been considerably
less investigation into the actual physical storage of NN parameters.
Conventionally, model compression and physical storage are decoupled, as
digital storage media with error correcting codes (ECCs) provide robust
error-free storage. This decoupled approach is inefficient, as it forces the
storage to treat each bit of the compressed model equally, and to dedicate the
same amount of resources to each bit. We propose a radically different approach
that: (i) employs analog memories to maximize the capacity of each memory cell,
and (ii) jointly optimizes model compression and physical storage to maximize
memory utility. We investigate the challenges of analog storage by studying
model storage on phase change memory (PCM) arrays and develop a variety of
robust coding strategies for NN model storage. We demonstrate the efficacy of
our approach on MNIST, CIFAR-10 and ImageNet datasets for both existing and
novel compression methods. Compared to conventional error-free digital storage,
our method has the potential to reduce the memory size by one order of
magnitude, without significantly compromising the stored model's accuracy. | [
"cs.LG"
]
|
RGB-Infrared (IR) person re-identification is an important and challenging
task due to large cross-modality variations between RGB and IR images. Most
conventional approaches aim to bridge the cross-modality gap with feature
alignment by feature representation learning. Different from existing methods,
in this paper, we propose a novel and end-to-end Alignment Generative
Adversarial Network (AlignGAN) for the RGB-IR RE-ID task. The proposed model
enjoys several merits. First, it can exploit pixel alignment and feature
alignment jointly. To the best of our knowledge, this is the first work to
model the two alignment strategies jointly for the RGB-IR RE-ID problem.
Second, the proposed model consists of a pixel generator, a feature generator,
and a joint discriminator. By playing a min-max game among the three
components, our model is able to not only alleviate the cross-modality and
intra-modality variations but also learn identity-consistent features.
Extensive experimental results on two standard benchmarks demonstrate that the
proposed model performs favorably against state-of-the-art methods. In
particular, on the SYSU-MM01 dataset, our model achieves absolute gains of
15.4% and 12.9% in terms of Rank-1 accuracy and mAP, respectively. | [
"cs.CV"
]
|
In computer science, there exist a large number of optimization problems
defined on graphs, that is, finding the best node-state configuration or
network structure such that a designed objective function is optimized under
some constraints. However, these problems are notoriously hard to solve because
most of them are NP-hard or NP-complete. Although traditional general methods
such as simulated annealing (SA) and genetic algorithms (GA) have been devised
for these hard problems, their accuracy and time consumption are not
satisfactory in practice. In this work, we propose a simple, fast, and general
algorithm framework based on advanced automatic differentiation techniques
empowered by deep learning frameworks. By introducing the Gumbel-softmax
technique, we can optimize the objective function directly by gradient descent
regardless of the discrete nature of the variables. We also introduce an
evolution strategy into a parallel version of our algorithm. We test our
algorithm on three representative optimization problems on graphs: modularity
optimization from network science, the Sherrington-Kirkpatrick (SK) model from
statistical physics, and the maximum independent set (MIS) and minimum vertex
cover (MVC) problems from combinatorial optimization. High-quality solutions
can be obtained in much less time compared to traditional approaches. | [
"cs.LG",
"stat.ML"
]
|
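To illustrate the idea, the following toy sketch relaxes maximum independent set on a 4-cycle with the Gumbel-softmax trick and optimizes it by gradient descent (penalty weight, temperature, and sizes are illustrative, not the paper's setup):

```python
import torch
import torch.nn.functional as F

edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0]])  # a 4-cycle
logits = torch.zeros(4, 2, requires_grad=True)          # per-node {out, in}
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(500):
    # Differentiable "binary" samples via the Gumbel-softmax trick.
    s = F.gumbel_softmax(logits, tau=0.5, hard=False)[:, 1]  # P(node in set)
    independence_penalty = (s[edges[:, 0]] * s[edges[:, 1]]).sum()
    loss = -s.sum() + 2.0 * independence_penalty  # maximize set size
    opt.zero_grad(); loss.backward(); opt.step()

solution = logits.argmax(dim=1)  # discretize after optimization
```

After optimization, the relaxed node states are discretized by argmax; the paper additionally runs many such instances in parallel with an evolution strategy.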
Recent work has shown that exploiting relations between labels improves the
performance of multi-label classification. We propose a novel framework based
on generative adversarial networks (GANs) to model label dependency. The
discriminator learns to model label dependency by discriminating real and
generated label sets. To fool the discriminator, the classifier, or generator,
learns to generate label sets with dependencies close to real data. Extensive
experiments and comparisons on two large-scale image classification benchmark
datasets (MS-COCO and NUS-WIDE) show that the discriminator improves
generalization ability for different kinds of models. | [
"cs.LG",
"stat.ML"
]
|
Modern approaches for multi-person pose estimation in video require large
amounts of dense annotations. However, labeling every frame in a video is
costly and labor intensive. To reduce the need for dense annotations, we
propose a PoseWarper network that leverages training videos with sparse
annotations (every k frames) to learn to perform dense temporal pose
propagation and estimation. Given a pair of video frames---a labeled Frame A
and an unlabeled Frame B---we train our model to predict human pose in Frame A
using the features from Frame B by means of deformable convolutions to
implicitly learn the pose warping between A and B. We demonstrate that we can
leverage our trained PoseWarper for several applications. First, at inference
time we can reverse the application direction of our network in order to
propagate pose information from manually annotated frames to unlabeled frames.
This makes it possible to generate pose annotations for the entire video given
only a few manually-labeled frames. Compared to modern label propagation
methods based on optical flow, our warping mechanism is much more compact (6M
vs 39M parameters), and also more accurate (88.7% mAP vs 83.8% mAP). We also
show that we can improve the accuracy of a pose estimator by training it on an
augmented dataset obtained by adding our propagated poses to the original
manual labels. Lastly, we can use our PoseWarper to aggregate temporal pose
information from neighboring frames during inference. This allows our system to
achieve state-of-the-art pose detection results on the PoseTrack2017 and
PoseTrack2018 datasets. Code has been made available at:
https://github.com/facebookresearch/PoseWarper. | [
"cs.CV"
]
|
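A rough sketch of the warping mechanism described above, using torchvision's deformable convolution: offsets predicted from the frame pair steer a deformable convolution over Frame B's features (single scale and illustrative channel sizes; not the released model):

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class WarpModule(nn.Module):
    def __init__(self, c=64, k=3):
        super().__init__()
        # 2 offset values (dy, dx) per kernel sampling location.
        self.offset_pred = nn.Conv2d(2 * c, 2 * k * k, kernel_size=3, padding=1)
        self.deform = DeformConv2d(c, c, kernel_size=k, padding=k // 2)

    def forward(self, feat_a, feat_b):
        offsets = self.offset_pred(torch.cat([feat_a, feat_b], dim=1))
        # Warp Frame B's features toward Frame A's pose configuration.
        return self.deform(feat_b, offsets)

warp = WarpModule()
fa, fb = torch.randn(1, 64, 48, 64), torch.randn(1, 64, 48, 64)
warped = warp(fa, fb)  # decode pose heatmaps for Frame A from this
```

At inference, the application direction can be reversed to propagate annotations from labeled to unlabeled frames, as the abstract describes.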
Many optimizers have been proposed for training deep neural networks, and
they often have multiple hyperparameters, which makes it tricky to benchmark
their performance. In this work, we propose a new benchmarking protocol to
evaluate both end-to-end efficiency (training a model from scratch without
knowing the best hyperparameter) and data-addition training efficiency (the
previously selected hyperparameters are used for periodically re-training the
model with newly collected data). For end-to-end efficiency, unlike previous
work that assumes random hyperparameter tuning, which over-emphasizes the
tuning time, we propose to evaluate with a bandit hyperparameter tuning
strategy. A human study is conducted to show that our evaluation protocol
matches human tuning behavior better than the random search. For data-addition
training, we propose a new protocol for assessing the hyperparameter
sensitivity to data shift. We then apply the proposed benchmarking framework to
7 optimizers and various tasks, including computer vision, natural language
processing, reinforcement learning, and graph mining. Our results show that
there is no clear winner across all the tasks. | [
"cs.LG",
"math.OC",
"stat.ML"
]
|
We study the challenging problem of releasing a robot in a previously unseen
environment, and having it follow unconstrained natural language navigation
instructions. Recent work on the task of Vision-and-Language Navigation (VLN)
has achieved significant progress in simulation. To assess the implications of
this work for robotics, we transfer a VLN agent trained in simulation to a
physical robot. To bridge the gap between the high-level discrete action space
learned by the VLN agent, and the robot's low-level continuous action space, we
propose a subgoal model to identify nearby waypoints, and use domain
randomization to mitigate visual domain differences. For accurate sim and real
comparisons in parallel environments, we annotate a 325 m² office space with
1.3km of navigation instructions, and create a digitized replica in simulation.
We find that sim-to-real transfer to an environment not seen in training is
successful if an occupancy map and navigation graph can be collected and
annotated in advance (success rate of 46.8% vs. 55.9% in sim), but much more
challenging in the hardest setting with no prior mapping at all (success rate
of 22.5%). | [
"cs.CV",
"cs.CL",
"cs.RO"
]
|
Annotating videos with object segmentation masks typically involves a
two-stage procedure of drawing polygons per object instance for all the frames
and then linking them through time. While simple, this is a very tedious,
time-consuming and expensive process, making the creation of accurate annotations at
scale only possible for well-funded labs. What if we were able to segment an
object in the full video with only a single click? This will enable video
segmentation at scale with a very low budget opening the door to many
applications. Towards this goal, in this paper we propose a bottom-up approach
where given a single click for each object in a video, we obtain the
segmentation masks of these objects in the full video. In particular, we
construct a correlation volume that assigns each pixel in a target frame to
either one of the objects in the reference frame or the background. We then
refine this correlation volume via a recurrent attention module and decode the
final segmentation. To evaluate the performance, we label the popular and
challenging Cityscapes dataset with video object segmentations. Results on this
new CityscapesVideo dataset show that our approach outperforms all the
baselines in this challenging setting. | [
"cs.CV"
]
|
Recently, the leading performance in human pose estimation has been dominated
by heatmap-based methods. Despite being a fundamental component of heatmap
processing, heatmap decoding (i.e., transforming heatmaps to coordinates) has
received only limited investigation, to the best of our knowledge. This work
fills the gap by studying heatmap decoding with a particular focus on the
errors introduced throughout the prediction process. We found that the errors
of heatmap-based methods are surprisingly significant, yet they were
universally ignored before. In view of this discovery, we further reveal the
intrinsic limitations of the previously widely used heatmap decoding methods
and propose Distribution-Aware and Error-Compensation Coordinate Decoding
(DAEC). Serving as a model-agnostic plug-in, DAEC learns its decoding strategy
from training data and remarkably improves the performance of a variety of
state-of-the-art human pose estimation models with negligible extra
computation. Specifically, equipped with DAEC, SimpleBaseline-ResNet152-256x192
and HRNet-W48-256x192 are significantly improved by 2.6 AP and 2.9 AP,
achieving 72.6 AP and 75.7 AP on COCO, respectively. Moreover, the
HRNet-W32-256x256 and ResNet-152-256x256 frameworks enjoy even more dramatic
improvements of 8.4% and 7.8% on MPII with the PCKh0.1 metric. Extensive
experiments on these two common benchmarks demonstrate that DAEC exceeds its
competitors by considerable margins, backing
up the rationality and generality of our novel heatmap decoding idea. The
project is available at https://github.com/fyang235/DAEC. | [
"cs.CV"
]
|
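DAEC itself learns its decoding strategy from training data, but the flavor of distribution-aware decoding can be illustrated with a common training-free baseline: a second-order Taylor expansion around the heatmap argmax for sub-pixel refinement (an illustrative sketch, not the DAEC algorithm):

```python
import numpy as np

def decode_subpixel(heatmap, eps=1e-10):
    h = np.log(np.maximum(heatmap, eps))        # work in log space
    y, x = np.unravel_index(h.argmax(), h.shape)
    if 0 < y < h.shape[0] - 1 and 0 < x < h.shape[1] - 1:
        # Finite-difference gradient and Hessian at the peak.
        dx = 0.5 * (h[y, x + 1] - h[y, x - 1])
        dy = 0.5 * (h[y + 1, x] - h[y - 1, x])
        dxx = h[y, x + 1] - 2 * h[y, x] + h[y, x - 1]
        dyy = h[y + 1, x] - 2 * h[y, x] + h[y - 1, x]
        dxy = 0.25 * (h[y + 1, x + 1] - h[y + 1, x - 1]
                      - h[y - 1, x + 1] + h[y - 1, x - 1])
        hess = np.array([[dxx, dxy], [dxy, dyy]])
        if np.linalg.det(hess) != 0:
            offset = -np.linalg.solve(hess, np.array([dx, dy]))
            return np.array([x, y]) + np.clip(offset, -1, 1)
    return np.array([x, y], dtype=float)

coords = decode_subpixel(np.random.rand(64, 48))
```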
Recent contrastive representation learning methods rely on estimating mutual
information (MI) between multiple views of an underlying context. E.g., we can
derive multiple views of a given image by applying data augmentation, or we can
split a sequence into views comprising the past and future of some step in the
sequence. Contrastive lower bounds on MI are easy to optimize, but have a
strong underestimation bias when estimating large amounts of MI. We propose
decomposing the full MI estimation problem into a sum of smaller estimation
problems by splitting one of the views into progressively more informed
subviews and by applying the chain rule on MI between the decomposed views.
This expression contains a sum of unconditional and conditional MI terms, each
measuring modest chunks of the total MI, which facilitates approximation via
contrastive bounds. To maximize the sum, we formulate a contrastive lower bound
on the conditional MI which can be approximated efficiently. We refer to our
general approach as Decomposed Estimation of Mutual Information (DEMI). We show
that DEMI can capture a larger amount of MI than standard non-decomposed
contrastive bounds in a synthetic setting, and learns better representations in
a vision domain and for dialogue generation. | [
"cs.LG",
"cs.AI"
]
|
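A minimal sketch of the InfoNCE-style contrastive bound that such estimators build on; DEMI sums terms of this form over unconditional and conditional MI between subviews (shapes and the toy data are illustrative):

```python
import torch
import torch.nn.functional as F

def infonce_lower_bound(z1, z2, temperature=0.1):
    # z1, z2: (B, D) paired views; off-diagonal pairs act as negatives.
    logits = z1 @ z2.t() / temperature                  # (B, B) scores
    labels = torch.arange(z1.size(0), device=z1.device)
    # I(z1; z2) >= log(B) - cross_entropy  (the InfoNCE bound).
    return torch.log(torch.tensor(float(z1.size(0)))) - \
        F.cross_entropy(logits, labels)

z1 = F.normalize(torch.randn(128, 64), dim=1)
z2 = F.normalize(z1 + 0.1 * torch.randn(128, 64), dim=1)
print(infonce_lower_bound(z1, z2))
```

DEMI replaces this single bound, which saturates near log(B), with a sum of such bounds, each measuring a modest chunk of the total MI.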
t-SNE is one of the most commonly used force-based nonlinear dimensionality
reduction methods. This paper has two contributions: the first is forceful
colorings, an idea that is also applicable to other force-based methods (UMAP,
ForceAtlas2,...). In every equilibrium, the attractive and repulsive forces
acting on a particle cancel out; however, both the size and the direction of
the attractive (or repulsive) forces acting on a particle are related to its
properties: the force vector can serve as an additional feature. Secondly, we
analyze the case of t-SNE acting on a single homogeneous cluster (modeled by
affinities coming from the adjacency matrix of a random k-regular graph); we
derive a mean-field model that leads to interesting questions in classical
calculus of variations. The model predicts that, in the limit, the t-SNE
embedding of a single perfectly homogeneous cluster is not a point but a thin
annulus of diameter $\sim k^{-1/4} n^{-1/4}$. This is supported by numerical
results. The mean field ansatz extends to other force-based dimensionality
reduction methods. | [
"cs.LG"
]
|
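A small sketch of the "forceful colorings" idea: compute the attractive force each embedded point feels and use its magnitude as an extra per-point feature (at equilibrium, attraction and repulsion balance, so either magnitude carries the signal; P, Y, and the toy data are illustrative):

```python
import numpy as np

def attractive_forces(P, Y):
    diff = Y[:, None, :] - Y[None, :, :]   # (n, n, 2) pairwise differences
    w = 1.0 / (1.0 + (diff ** 2).sum(-1))  # Student-t kernel weights
    np.fill_diagonal(w, 0.0)
    # Attraction on point i: sum_j p_ij * w_ij * (y_i - y_j).
    return (P * w)[:, :, None] * diff

n = 200
P = np.random.rand(n, n); P = P + P.T; P /= P.sum()  # toy affinities
Y = np.random.randn(n, 2)                            # toy embedding
F_attr = attractive_forces(P, Y).sum(axis=1)         # (n, 2) force vectors
feature = np.linalg.norm(F_attr, axis=1)             # per-point magnitude
```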
High accuracy in cancer prediction is important for improving the quality of
treatment and the survival rate of patients. As data volume increases rapidly
in healthcare research, the analytical challenge is twofold. The use of an
effective sampling technique in classification algorithms always yields good
prediction accuracy. The SEER public use cancer database provides various
prominent class labels for prognosis prediction. The main objective of this
paper is to study the effect of sampling techniques on classifying the
prognosis variable and to propose an ideal sampling method based on the outcome
of the experimentation. In the first phase of this work, the traditional random
sampling and stratified sampling techniques were used. At the next level,
balanced stratified sampling with variations according to the choice of the
prognosis class labels was tested. Much of the initial effort was focused on
preprocessing the SEER data set. The classification model for experimentation
was built using the breast cancer, respiratory cancer, and mixed cancer data
sets with three traditional classifiers, namely Decision Tree, Naive Bayes, and
K-Nearest Neighbor. The three prognosis factors survival, stage, and metastasis
were used as class labels for experimental comparisons. The results show a
steady increase in the prediction accuracy of the balanced stratified model as
the sample size increases, whereas the traditional approach fluctuates before
reaching the optimum results. | [
"cs.LG",
"62D05",
"I.2.6; H.2.8"
]
|
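A small scikit-learn sketch contrasting plain random and stratified splits on an imbalanced label, the distinction at the heart of the comparison above (toy data; the paper works with SEER cancer records):

```python
import numpy as np
from sklearn.model_selection import train_test_split

y = np.array([0] * 900 + [1] * 100)  # imbalanced prognosis label
X = np.random.randn(1000, 5)

# Plain random sampling: minority proportion fluctuates between splits.
_, _, _, y_rand = train_test_split(X, y, test_size=0.2, random_state=0)

# Stratified sampling: each split preserves the class proportions.
_, _, _, y_strat = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

print(y_rand.mean(), y_strat.mean())  # stratified stays at ~0.10
```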
Object detection in drone-captured scenarios is a popular recent task. As
drones always navigate at different altitudes, the object scale varies
drastically, which burdens the optimization of networks. Moreover, high-speed
and low-altitude flight introduces motion blur on densely packed objects,
which makes object distinction a great challenge. To solve the two issues
mentioned above, we propose TPH-YOLOv5. Based on YOLOv5, we add one more
prediction head to detect objects at different scales. We then replace the
original prediction heads with Transformer Prediction Heads (TPH) to explore
the prediction potential of the self-attention mechanism. We also integrate
the Convolutional Block Attention Module (CBAM) to find attention regions in
scenarios with dense objects. To further improve the proposed TPH-YOLOv5, we
provide a bag of useful strategies such as data augmentation, multi-scale
testing, multi-model integration, and an extra classifier. Extensive
experiments on the VisDrone2021 dataset show that TPH-YOLOv5 performs well,
with impressive interpretability, on drone-captured scenarios. On the
DET-test-challenge dataset, the AP of TPH-YOLOv5 is 39.18%, which is better
than the previous SOTA method (DPNetV3) by 1.81%. In the VisDrone Challenge
2021, TPH-YOLOv5 won 5th place and achieved results well matched with the
1st-place model (AP 39.43%). Compared to the baseline model (YOLOv5),
TPH-YOLOv5 improves by about 7%, which is encouraging and competitive. | [
"cs.CV",
"cs.AI",
"I.2.10; I.4.8"
]
|
Passive methods for object detection and segmentation treat images of the
same scene as individual samples and do not exploit object permanence across
multiple views. Generalization to novel or difficult viewpoints thus requires
additional training with lots of annotations. In contrast, humans often
recognize objects by simply moving around, to get more informative viewpoints.
In this paper, we propose a method for improving object detection in testing
environments, assuming nothing but an embodied agent with a pre-trained 2D
object detector. Our agent collects multi-view data, generates 2D and 3D
pseudo-labels, and fine-tunes its detector in a self-supervised manner.
Experiments on both indoor and outdoor datasets show that (1) our method
obtains high-quality 2D and 3D pseudo-labels from multi-view RGB-D data; (2)
fine-tuning with these pseudo-labels improves the 2D detector significantly in
the test environment; (3) training a 3D detector with our pseudo-labels
outperforms a prior self-supervised method by a large margin; (4) given weak
supervision, our method can generate better pseudo-labels for novel objects. | [
"cs.CV",
"cs.AI",
"cs.LG"
]
|
Transformers have achieved success in both language and vision domains.
However, it is prohibitively expensive to scale them to long sequences such as
long documents or high-resolution images, because self-attention mechanism has
quadratic time and memory complexities with respect to the input sequence
length. In this paper, we propose Long-Short Transformer (Transformer-LS), an
efficient self-attention mechanism for modeling long sequences with linear
complexity for both language and vision tasks. It aggregates a novel long-range
attention with dynamic projection to model distant correlations and a
short-term attention to capture fine-grained local correlations. We propose a
dual normalization strategy to account for the scale mismatch between the two
attention mechanisms. Transformer-LS can be applied to both autoregressive and
bidirectional models without additional complexity. Our method outperforms the
state-of-the-art models on multiple tasks in language and vision domains,
including the Long Range Arena benchmark, autoregressive language modeling, and
ImageNet classification. For instance, Transformer-LS achieves 0.97 test BPC on
enwik8 using half as many parameters as the previous method, while being
faster and able to handle sequences 3x as long as its
full-attention version on the same hardware. On ImageNet, it can obtain
state-of-the-art results (e.g., a moderately sized 55.8M-parameter model
trained solely on 224x224 ImageNet-1K achieves 84.1% Top-1 accuracy), while being more
scalable on high-resolution images. The source code and models are released at
https://github.com/NVIDIA/transformer-ls . | [
"cs.CV",
"cs.CL",
"cs.LG",
"cs.MM"
]
|
We study how an offline dataset of prior (possibly random) experience can be
used to address two challenges that autonomous systems face when they endeavor
to learn from, adapt to, and collaborate with humans: (1) identifying the
human's intent and (2) safely optimizing the autonomous system's behavior to
achieve this inferred intent. First, we use the offline dataset to efficiently
infer the human's reward function via pool-based active preference learning.
Second, given this learned reward function, we perform offline reinforcement
learning to optimize a policy based on the inferred human intent. Crucially,
our proposed approach does not require actual physical rollouts or an accurate
simulator for either the reward learning or policy optimization steps, enabling
both safe and efficient apprenticeship learning. We identify and evaluate our
approach on a subset of existing offline RL benchmarks that are well suited for
offline reward learning and also evaluate extensions of these benchmarks which
allow more open-ended behaviors. Our experiments show that offline
preference-based reward learning followed by offline reinforcement learning
enables efficient and high-performing policies, while only requiring small
numbers of preference queries. Videos available at
https://sites.google.com/view/offline-prefs. | [
"cs.LG"
]
|
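A sketch of the preference-based reward learning step described above: fit a reward network with the Bradley-Terry likelihood on preference pairs over offline trajectory segments (shapes, names, and the synthetic data are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

def preference_loss(seg_a, seg_b, prefer_a):
    # Segment return = sum of predicted per-step rewards.
    r_a = reward_net(seg_a).sum(dim=1).squeeze(-1)  # (B,)
    r_b = reward_net(seg_b).sum(dim=1).squeeze(-1)
    # Bradley-Terry: P(a > b) = exp(r_a) / (exp(r_a) + exp(r_b)).
    logits = torch.stack([r_a, r_b], dim=1)
    return F.cross_entropy(logits, 1 - prefer_a.long())

seg_a = torch.randn(32, 50, 8)  # batch of 50-step state-action segments
seg_b = torch.randn(32, 50, 8)
prefer_a = torch.randint(0, 2, (32,))  # preference labels
loss = preference_loss(seg_a, seg_b, prefer_a)
opt.zero_grad(); loss.backward(); opt.step()
```

The learned reward can then be used to relabel the offline dataset before running an off-the-shelf offline RL algorithm, matching the two-step recipe in the abstract.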
Deep learning (DL) has gained much attention and become increasingly popular
in modern data science. Computer scientists led the way in developing deep
learning techniques, so the ideas and perspectives can seem alien to
statisticians. Nonetheless, it is important that statisticians become involved
-- many of our students need this expertise for their careers. In this paper,
developed as part of a program on DL held at the Statistical and Applied
Mathematical Sciences Institute, we address this culture gap and provide tips
on how to teach deep learning to statistics graduate students. After some
background, we list ways in which DL and statistical perspectives differ,
provide a recommended syllabus that evolved from teaching two iterations of a
DL graduate course, offer examples of suggested homework assignments, give an
annotated list of teaching resources, and discuss DL in the context of two
research areas. | [
"stat.ML",
"cs.CY",
"cs.LG"
]
|
Learning meaningful representations of data is an important aspect of machine
learning and has recently been successfully applied to many domains like
language understanding or computer vision. Instead of training a model for one
specific task, representation learning is about training a model to capture all
useful information in the underlying data and make it accessible for a
predictor. For predictive process analytics, it is essential to have all
explanatory characteristics of a process instance available when making
predictions about the future, as well as for clustering and anomaly detection.
Due to the large variety of perspectives and types within business process
data, generating a good representation is a challenging task. In this paper, we
propose a novel approach for representation learning of business process
instances which can process and combine most perspectives in an event log. In
conjunction with a self-supervised pre-training method, we show the
capabilities of the approach through a visualization of the representation
space and case retrieval. Furthermore, the pre-trained model is fine-tuned to
multiple process prediction tasks and demonstrates its effectiveness in
comparison with existing approaches. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
Graph-based analyses have gained a lot of relevance in the past years due to
their high potential in describing complex systems by detailing the actors
involved, their relations and their behaviours. Nevertheless, in scenarios
where these aspects are evolving over time, it is not easy to extract valuable
information or to correctly characterize all the actors. In this study, a
two-phase approach for exploiting the potential of graph structures in the
cybersecurity domain is presented. The main idea is to convert a network
classification problem into a graph-based behavioural one. We extract these
graph structures that can represent the evolution of both normal and attack
entities and apply a temporal dissection approach in order to highlight their
micro-dynamics. Further, three clustering techniques are applied to the normal
entities in order to aggregate similar behaviours, mitigate the imbalance
problem and reduce noisy data. Our approach suggests the implementation of two
promising deep learning paradigms for entity classification based on Graph
Convolutional Networks. | [
"cs.LG",
"cs.IT",
"math.IT"
]
|
Text detection and localization are important for video analysis and
understanding. Scene text in video contains semantic information and thus can
contribute significantly to video retrieval and understanding. However, most
approaches detect scene text in still images or in a single video frame.
Videos differ from images in their temporal redundancy. This paper proposes a novel
hybrid method to robustly localize the texts in natural scene images and videos
based on fusion of discrete wavelet transform and gradient difference. A set of
rules and geometric properties have been devised to localize the actual text
regions. Then, morphological operation is performed to generate the text
regions and finally the connected component analysis is employed to localize
the text in a video frame. The experimental results obtained on publicly
available standard ICDAR 2003 and Hua dataset illustrate that the proposed
method can accurately detect and localize texts of various sizes, fonts and
colors. Experiments on a huge collection of video databases reveal the
suitability of the proposed method for video databases. | [
"cs.CV"
]
|
Gait recognition is an important biometric technique for video surveillance
tasks, due to the advantage of using it at a distance. In this paper, we present
a persistent homology-based method to extract topological features (the
so-called {\it topological gait signature}) from the body silhouettes of a
gait sequence. It has been used before in several conference papers by the
same authors for human identification, gender classification, carried object
detection, and monitoring human activities at a distance. The novelty of this
paper is the study of the stability of the topological gait signature under
small perturbations and under the number of gait cycles contained in a gait
sequence. In other words, we show that the topological gait signature is robust
to the presence of noise in the body silhouettes and to the number of gait
cycles contained in a given gait sequence. We also show that by computing our
topological gait signature from only the lowest fourth of the body silhouette,
we avoid the upper-body movements that are unrelated to the natural dynamics of
the gait, caused for example by carrying a bag or wearing a coat. | [
"cs.CV"
]
|
Recently, deep reinforcement learning (DRL) has achieved outstanding success
in solving many difficult and large-scale RL problems. However, the high sample
cost required for effective learning often makes DRL unaffordable in
resource-limited applications. With the aim of improving sample efficiency and
learning performance, we develop a new DRL algorithm in this paper that
seamlessly integrates entropy-induced and bootstrap-induced techniques for
efficient and deep exploration of the learning environment. Specifically, a
general form of Tsallis entropy regularizer will be utilized to drive
entropy-induced exploration based on efficient approximation of optimal
action-selection policies. Different from many existing works that rely on
action dithering strategies for exploration, our algorithm is efficient in
exploring actions with clear exploration value. Meanwhile, by employing an
ensemble of Q-networks under varied Tsallis entropy regularization, the
diversity of the ensemble can be further enhanced to enable effective
bootstrap-induced exploration. Experiments on Atari game playing tasks clearly
demonstrate that our new algorithm can achieve more efficient and effective
exploration for DRL, in comparison to recently proposed exploration methods
including Bootstrapped Deep Q-Network and UCB Q-Ensemble. | [
"cs.LG",
"stat.ML"
]
|
Self-attention (SA) mechanisms can effectively capture global dependencies in
deep neural networks, and have been successfully applied to natural language
processing and image processing. However, SA modules for image reconstruction
have high time and space complexity, which restricts their application to
higher-resolution images. In this paper, we refine the SA module in
self-attention generative adversarial networks (SAGAN) via adapting a non-local
operation, revising the connectivity among the units in SA module and
re-implementing its computational pattern, such that its time and space
complexity is reduced from $\text{O}(n^2)$ to $\text{O}(n)$, but it is still
equivalent to the original SA module. Further, we explore the principles behind
the module and discover that our module is a special kind of channel attention
mechanisms. Experimental results on two benchmark image reconstruction
datasets verify that, under the same computational environment, the two
models achieve comparable effectiveness for image reconstruction, but the
proposed one runs faster and takes up less memory space. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
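The O(n^2)-to-O(n) reduction can be illustrated with the well-known reordering trick used in efficient-attention variants: normalize queries and keys separately, then multiply K^T with V first so that no n x n attention map is ever formed (a sketch of the general idea; the paper's exact module differs in details):

```python
import torch
import torch.nn.functional as F

def efficient_attention(q, k, v):
    # q, k: (B, n, d_k), v: (B, n, d_v); n = number of pixels.
    q = F.softmax(q, dim=-1)          # normalize over channels
    k = F.softmax(k, dim=1)           # normalize over positions
    context = k.transpose(1, 2) @ v   # (B, d_k, d_v) -- O(n * d_k * d_v)
    return q @ context                # (B, n, d_v)   -- no (n x n) map

b, n, dk, dv = 2, 64 * 64, 32, 64
out = efficient_attention(torch.randn(b, n, dk),
                          torch.randn(b, n, dk),
                          torch.randn(b, n, dv))
```

Written this way, the keys aggregate the values into a small d_k x d_v context matrix, which is consistent with the abstract's observation that the resulting module behaves like a kind of channel attention.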
With the prosperity of the digital video industry, video frame interpolation
has attracted continuous attention in the computer vision community and become
a new focus in industry. Many learning-based methods have been proposed and
have achieved promising results. Among them, a recent algorithm named quadratic
video interpolation (QVI) achieves appealing performance. It exploits
higher-order motion information (e.g. acceleration) and successfully models the
estimation of interpolated flow. However, its produced intermediate frames
still contain some unsatisfactory ghosting, artifacts and inaccurate motion,
especially when large and complex motion occurs. In this work, we further
improve the performance of QVI from three facets and propose an enhanced
quadratic video interpolation (EQVI) model. In particular, we adopt a rectified
quadratic flow prediction (RQFP) formulation with a least-squares method to
estimate the motion more accurately. Complementary with image pixel-level
blending, we introduce a residual contextual synthesis network (RCSN) to employ
contextual information in high-dimensional feature space, which could help the
model handle more complicated scenes and motion patterns. Moreover, to further
boost the performance, we devise a novel multi-scale fusion network (MS-Fusion)
which can be regarded as a learnable augmentation process. The proposed EQVI
model won the first place in the AIM2020 Video Temporal Super-Resolution
Challenge. | [
"cs.CV"
]
|
Reinforcement learning (RL) has gained increasing interest since the
demonstration that it was able to reach human-level performance on video game
benchmarks using deep Q-learning (DQN). The current consensus for training
neural networks on such complex environments is to rely on gradient-based
optimization. Although alternative Bayesian deep learning methods exist, most
of them still rely on gradient-based optimization, and they typically do not
scale to benchmarks such as the Atari game environment. Moreover, none of these
approaches allows performing analytical inference for the weights and biases
defining the neural network. In this paper, we present how we can adapt the
temporal difference Q-learning framework to make it compatible with the
tractable approximate Gaussian inference (TAGI), which allows learning the
parameters of a neural network using a closed-form analytical method.
Throughout the experiments with on- and off-policy reinforcement learning
approaches, we demonstrate that TAGI can reach a performance comparable to
backpropagation-trained networks while using fewer hyperparameters, and without
relying on gradient-based optimization. | [
"cs.LG",
"stat.ML"
]
|
Image segmentation and classification are the two main fundamental steps in
pattern recognition. Performing medical image segmentation or classification
with deep learning models requires training on large annotated image datasets.
The dermoscopy images (ISIC archive) considered for this work do not have
ground truth information for lesion segmentation. Performing manual labelling
on this dataset is time-consuming. To overcome this issue, a self-learning
annotation scheme is proposed in the two-stage deep learning
algorithm. The two-stage deep learning algorithm consists of U-Net segmentation
model with the annotation scheme and CNN classifier model. The annotation
scheme uses a K-means clustering algorithm along with merging conditions to
achieve initial labelling information for training the U-Net model. The
classifier models namely ResNet-50 and LeNet-5 were trained and tested on the
image dataset without segmentation for comparison and with the U-Net
segmentation for implementing the proposed self-learning Artificial
Intelligence (AI) framework. The classification results of the proposed AI
framework achieved training accuracy of 93.8% and testing accuracy of 82.42%
when compared with the two classifier models directly trained on the input
images. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
We present VideoGPT: a conceptually simple architecture for scaling
likelihood-based generative modeling to natural videos. VideoGPT uses a VQ-VAE
that learns downsampled discrete latent representations of a raw video by
employing 3D convolutions and axial self-attention. A simple GPT-like
architecture is then used to autoregressively model the discrete latents using
spatio-temporal position encodings. Despite the simplicity in formulation and
ease of training, our architecture is able to generate samples competitive with
state-of-the-art GAN models for video generation on the BAIR Robot dataset, and
generate high-fidelity natural videos from UCF-101 and the Tumblr GIF dataset
(TGIF). We hope our proposed architecture serves as a reproducible reference
for a minimalistic implementation of transformer based video generation models.
Samples and code are available at
https://wilson1yan.github.io/videogpt/index.html | [
"cs.CV",
"cs.LG"
]
|
This study provides an efficient approach for using text data to calculate
patent-to-patent (p2p) technological similarity, and presents a hybrid
framework for leveraging the resulting p2p similarity for applications such as
semantic search and automated patent classification. We create embeddings using
Sentence-BERT (SBERT) based on patent claims. To further increase the patent
embedding quality, we use transformer models based on SBERT and RoBERTa, and
apply the augmented approach for fine-tuning SBERT on in-domain supervised
patent claims data. We leverage SBERT's efficiency in creating embedding
distance measures to map p2p similarity in large sets of patent data. We deploy
our framework for classification with a simple K-Nearest Neighbors (KNN) model
that predicts the Cooperative Patent Classification (CPC) of a patent based on
the CPC assignments of the K patents with the highest p2p similarity. We
thereby validate that p2p similarity captures technological features in terms
of CPC overlap, and at the same time demonstrate the usefulness of this approach for
automatic patent classification based on text data. In the out-of-sample model
validation, we are able to perform a multi-label prediction of all assigned CPC
classes on the subclass (640) level on 163,269 patents with an accuracy of 54%
and F1 score > 63%, which suggests that our model outperforms the current
state-of-the-art in text-based multi-label and multi-class patent
classification by a margin of > 18% F1 score. We furthermore discuss the
applicability of the presented framework for semantic IP search, patent
landscaping, and technology intelligence. We finally point towards a future
research agenda for leveraging multi-source patent embeddings, their
appropriateness across applications, as well as to improve and validate patent
embeddings by creating domain-expert curated Semantic Textual Similarity (STS)
benchmark datasets. | [
"cs.LG",
"econ.EM",
"H.0"
]
|
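A minimal sketch of the embed-then-KNN pipeline described above, using the sentence-transformers and scikit-learn libraries (the model name, k, and toy claims are illustrative; the paper fine-tunes SBERT on in-domain patent claims first):

```python
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MultiLabelBinarizer

claims = ["A battery electrode comprising ...",
          "A method for wireless handover ..."]
cpc_labels = [["H01M"], ["H04W"]]  # CPC codes per patent (toy example)

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(claims)  # (n_patents, dim)

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(cpc_labels)  # binarized multi-label targets

knn = KNeighborsClassifier(n_neighbors=1, metric="cosine")
knn.fit(embeddings, Y)             # multi-label KNN over p2p similarity

query = model.encode(["An anode material ..."])
pred = mlb.inverse_transform(knn.predict(query))
```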
Deep learning has achieved a remarkable performance breakthrough in several
fields, most notably in speech recognition, natural language processing, and
computer vision. In particular, convolutional neural network (CNN)
architectures currently produce state-of-the-art performance on a variety of
image analysis tasks such as object detection and recognition. Most deep
learning research has so far focused on dealing with 1D, 2D, or 3D
Euclidean-structured data such as acoustic signals, images, or videos.
Recently, there has been an increasing interest in geometric deep learning,
attempting to generalize deep learning methods to non-Euclidean structured data
such as graphs and manifolds, with a variety of applications from the domains
of network analysis, computational social science, or computer graphics. In
this paper, we propose a unified framework allowing to generalize CNN
architectures to non-Euclidean domains (graphs and manifolds) and learn local,
stationary, and compositional task-specific features. We show that various
non-Euclidean CNN methods previously proposed in the literature can be
considered as particular instances of our framework. We test the proposed
method on standard tasks from the realms of image-, graph- and 3D shape
analysis and show that it consistently outperforms previous approaches. | [
"cs.CV"
]
|
Object detection in aerial images is an active yet challenging task in
computer vision because of the birdview perspective, the highly complex
backgrounds, and the variant appearances of objects. Especially when detecting
densely packed objects in aerial images, methods relying on horizontal
proposals for common object detection often introduce mismatches between the
Region of Interests (RoIs) and objects. This leads to the common misalignment
between the final object classification confidence and localization accuracy.
Although rotated anchors have been used to tackle this problem, the design of
them always multiplies the number of anchors and dramatically increases the
computational complexity. In this paper, we propose a RoI Transformer to
address these problems. More precisely, to improve the quality of region
proposals, we first design a Rotated RoI (RRoI) learner to transform a
Horizontal Region of Interest (HRoI) into a Rotated Region of Interest (RRoI).
Based on the RRoIs, we then propose a Rotated Position Sensitive RoI Align
(RPS-RoI-Align) module to extract rotation-invariant features from them for
boosting subsequent classification and regression. Our RoI Transformer is
lightweight and can be easily embedded into detectors for oriented object
detection. A simple implementation of the RoI Transformer has achieved
state-of-the-art performances on two common and challenging aerial datasets,
i.e., DOTA and HRSC2016, with a negligible reduction in detection speed. Our
RoI Transformer exceeds the deformable Position Sensitive RoI pooling when
oriented bounding-box annotations are available. Extensive experiments have
also validated the flexibility and effectiveness of our RoI Transformer. The
results demonstrate that it can be easily integrated with other detector
architectures and significantly improve the performances. | [
"cs.CV"
]
|
Generative Adversarial Networks (GANs) are formulated as minimax game
problems, whereby generators attempt to approach real data distributions by
virtue of adversarial learning against discriminators. The intrinsic problem
complexity poses the challenge to enhance the performance of generative
networks. In this work, we aim to boost model learning from the perspective of
network architectures, by incorporating recent progress on automated
architecture search into GANs. To this end, we propose a fully differentiable
search framework for generative adversarial networks, dubbed alphaGAN. The
searching process is formalized as solving a bi-level minimax optimization
problem, in which the outer-level objective aims at seeking a suitable network
architecture towards pure Nash Equilibrium conditioned on the generator and the
discriminator network parameters optimized with a traditional GAN loss in the
inner level. The entire optimization performs a first-order method by
alternately minimizing the two-level objective in a fully differentiable
manner, enabling architecture search to be completed in an enormous search
space. Extensive experiments on CIFAR-10 and STL-10 datasets show that our
algorithm can obtain high-performing architectures only with 3-GPU hours on a
single GPU in the search space comprised of approximate 2 ? 1011 possible
configurations. We also provide a comprehensive analysis on the behavior of the
searching process and the properties of searched architectures, which would
benefit further research on architectures for generative models. Pretrained
models and codes are available at https://github.com/yuesongtian/AlphaGAN. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
We present a new model-based algorithm for reinforcement learning (RL) which
consists of explicit exploration and exploitation phases, and is applicable in
large or infinite state spaces. The algorithm maintains a set of dynamics
models consistent with current experience and explores by finding policies
which induce high disagreement between their state predictions. It then
exploits using the refined set of models or experience gathered during
exploration. We show that under realizability and optimal planning assumptions,
our algorithm provably finds a near-optimal policy with a number of samples
that is polynomial in a structural complexity measure which we show to be low
in several natural settings. We then give a practical approximation using
neural networks and demonstrate its performance and sample efficiency in
practice. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
The class of Gaussian Process (GP) methods for Temporal Difference learning
has shown promise for data-efficient model-free Reinforcement Learning. In this
paper, we consider a recent variant of the GP-SARSA algorithm, called Sparse
Pseudo-input Gaussian Process SARSA (SPGP-SARSA), and derive recursive formulas
for its predictive moments. This extension promotes greater memory efficiency,
since previous computations can be reused and, interestingly, it provides a
technique for updating value estimates on multiple timescales. | [
"cs.LG",
"stat.ML"
]
|
Facial landmarks (FLM) estimation is a critical component in many
face-related applications. In this work, we aim to optimize for both accuracy
and speed and explore the trade-off between them. Our key observation is that
not all faces are created equal. Frontal faces with neutral expressions
converge faster than faces with extreme poses or expressions. To differentiate
among samples, we train our model to predict the regression error after each
iteration. If the current iteration is accurate enough, we stop iterating,
saving redundant iterations while keeping the accuracy in check. We also
observe that as neighboring patches overlap, we can infer all facial landmarks
(FLMs) with only a small number of patches without a major accuracy sacrifice.
Architecturally, we offer a multi-scale, patch-based, lightweight feature
extractor with a fine-grained local patch attention module, which computes a
patch weighting according to the information in the patch itself and enhances
the expressive power of the patch features. We analyze the patch attention data
to infer where the model is attending when regressing facial landmarks and
compare it to face attention in humans. Our model runs in real-time on a mobile
device GPU, with 95 Mega Multiply-Add (MMA) operations, outperforming all
state-of-the-art methods under 1000 MMA, with a normalized mean error of 8.16
on the 300W challenging dataset. | [
"cs.CV"
]
|
In recent years, the huge amount of unstructured textual data on the Internet
has made it difficult for AI algorithms to provide the best recommendations for
users and their search queries. Since the Internet became widespread, a lot of
research has been done in the field of Natural Language Processing (NLP) and
machine learning. Almost every solution transforms documents into Vector Space
Models (VSM) in order to apply AI algorithms over them. One such approach is
based on Case-Based Reasoning (CBR). Therefore, the most important part of
those systems is to compute the similarity between numerical data points. In
2016, the new similarity TS-SS metric is proposed, which showed
state-of-the-art results in the field of textual mining for unsupervised
learning. However, no one before has investigated its performances for
supervised learning (classification task). In this work, we devised a CBR
system capable of finding the most similar documents for a given query aiming
to investigate performances of the new state-of-the-art metric, TS-SS, in
addition to the two other geometrical similarity measures --- Euclidean
distance and Cosine similarity --- that showed the best predictive results over
several benchmark corpora. The results show the surprising inappropriateness
of the TS-SS measure for high-dimensional features. | [
"cs.LG",
"cs.CL",
"cs.IR",
"stat.ML"
]
|
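For reference, a sketch of the three similarity measures compared in the study; the TS-SS formula follows the original 2016 paper (triangle area times sector area, with its 10-degree smoothing term):

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def ts_ss(a, b):
    theta = np.degrees(np.arccos(np.clip(cosine_sim(a, b), -1.0, 1.0))) + 10.0
    # Triangle's Area Similarity (TS).
    ts = np.linalg.norm(a) * np.linalg.norm(b) * np.sin(np.radians(theta)) / 2
    # Sector's Area Similarity (SS): Euclidean distance + magnitude difference.
    ed = np.linalg.norm(a - b)
    md = abs(np.linalg.norm(a) - np.linalg.norm(b))
    ss = np.pi * (ed + md) ** 2 * theta / 360.0
    return ts * ss  # a dissimilarity: lower value = more similar

a, b = np.random.rand(300), np.random.rand(300)  # e.g., TF-IDF vectors
print(cosine_sim(a, b), np.linalg.norm(a - b), ts_ss(a, b))
```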
This paper presents an end-to-end differentiable algorithm for robust and
detail-preserving surface normal estimation on unstructured point-clouds. We
utilize graph neural networks to iteratively parameterize an adaptive
anisotropic kernel that produces point weights for weighted least-squares plane
fitting in local neighborhoods. The approach retains the interpretability and
efficiency of traditional sequential plane fitting while benefiting from
adaptation to data set statistics through deep learning. This results in a
state-of-the-art surface normal estimator that is robust to noise, outliers and
point density variation, preserves sharp features through anisotropic kernels
and equivariance through a local quaternion-based spatial transformer. Contrary
to previous deep learning methods, the proposed approach does not require any
hand-crafted features or preprocessing. It improves on the state-of-the-art
results while being more than two orders of magnitude faster and more parameter
efficient. | [
"cs.CV",
"cs.CG"
]
|
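A sketch of the weighted least-squares plane fit at the core of the method: given per-point weights (fixed here; the paper predicts them with a graph neural network), the normal is the eigenvector of the weighted neighborhood covariance with the smallest eigenvalue:

```python
import numpy as np

def weighted_plane_normal(neighbors, weights):
    # neighbors: (k, 3) points; weights: (k,) nonnegative values.
    w = weights / weights.sum()
    centroid = (w[:, None] * neighbors).sum(axis=0)
    centered = neighbors - centroid
    cov = centered.T @ (w[:, None] * centered)  # weighted covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    return eigvecs[:, 0]                        # smallest-eigenvalue direction

pts = np.random.randn(50, 3) * np.array([1.0, 1.0, 0.01])  # noisy plane z~0
normal = weighted_plane_normal(pts, np.ones(50))
print(normal)  # approximately +/- [0, 0, 1]
```

Down-weighting points across a sharp edge or among outliers is what lets the learned weights preserve detail where a uniform fit would smooth it away.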
Generating schema labels automatically for column values of data tables has
many data science applications such as schema matching, and data discovery and
linking. For example, automatically extracted tables with missing headers can
be filled by the predicted schema labels which significantly minimizes human
effort. Furthermore, the predicted labels can reduce the impact of inconsistent
names across multiple data tables. Understanding the connection between column
values and contextual information is an important yet neglected aspect as
previously proposed methods treat each column independently. In this paper, we
propose a context-aware semantic labeling method using both the column values
and context. Our new method is based on a new setting for semantic labeling,
where we sequentially predict labels for an input table with missing headers.
We incorporate both the values and context of each data column using the
pre-trained contextualized language model, BERT, that has achieved significant
improvements in multiple natural language processing tasks. To our knowledge,
we are the first to successfully apply BERT to solve the semantic labeling
task. We evaluate our approach using two real-world datasets from different
domains, and we demonstrate substantial improvements in terms of evaluation
metrics over state-of-the-art feature-based methods. | [
"cs.LG",
"cs.DB",
"cs.IR"
]
|
Deep generative priors offer powerful models for complex-structured data,
such as images, audio, and text. Using these priors in inverse problems
typically requires estimating the input and/or hidden signals in a multi-layer
deep neural network from observation of its output. While these approaches have
been successful in practice, rigorous performance analysis is complicated by
the non-convex nature of the underlying optimization problems. This paper
presents a novel algorithm, Multi-Layer Vector Approximate Message Passing
(ML-VAMP), for inference in multi-layer stochastic neural networks. ML-VAMP can
be configured to compute maximum a posteriori (MAP) or approximate minimum
mean-squared error (MMSE) estimates for these networks. We show that the
performance of ML-VAMP can be exactly predicted in a certain high-dimensional
random limit. Furthermore, under certain conditions, ML-VAMP yields estimates
that achieve the minimum (i.e., Bayes-optimal) MSE as predicted by the replica
method. In this way, ML-VAMP provides a computationally efficient method for
multi-layer inference with an exact performance characterization and testable
conditions for optimality in the large-system limit. | [
"cs.LG",
"cs.IT",
"cs.NE",
"eess.SP",
"math.IT",
"stat.ML"
]
|
Data privacy is an increasingly important aspect of many real-world
applications. Data sources that contain sensitive information may have immense
potential which
could be unlocked using the right privacy enhancing transformations, but
current methods often fail to produce convincing output. Furthermore, finding
the right balance between privacy and utility is often a tricky trade-off. In
this work, we propose a novel approach for data privatization, which involves
two steps: in the first step, it removes the sensitive information, and in the
second step, it replaces this information with an independent random sample.
Our method builds on adversarial representation learning which ensures strong
privacy by training the model to fool an increasingly strong adversary. While
previous methods only aim at obfuscating the sensitive information, we find
that adding new random information in its place strengthens the provided
privacy and provides better utility at any given level of privacy. The result
is an approach that can provide stronger privatization on image data while
preserving both the domain and the utility of the inputs, entirely independent
of the downstream task. | [
"cs.LG",
"cs.CR",
"stat.ML"
]
|
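A minimal sketch of the adversarial representation learning game described above: an encoder is trained to fool an adversary that tries to recover the sensitive attribute from the code. All architectures and dimensions are illustrative; the second step of substituting an independent random sample is indicated only in a comment.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
adv = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
opt_enc = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, sensitive):
    # 1) Adversary learns to recover the sensitive attribute from the code.
    opt_adv.zero_grad()
    ce(adv(enc(x).detach()), sensitive).backward()
    opt_adv.step()
    # 2) Encoder learns to fool it by maximizing the adversary's loss.
    opt_enc.zero_grad()
    (-ce(adv(enc(x)), sensitive)).backward()
    opt_enc.step()

x, s = torch.randn(128, 64), torch.randint(0, 2, (128,))
for _ in range(100):
    train_step(x, s)
# Second step of the method (sketched): replace the removed information
# with an independent random sample, e.g.
# z_private = torch.cat([enc(x), torch.randn(x.size(0), 4)], dim=1)
```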
In many sequence learning tasks, such as program synthesis and document
summarization, a key problem is searching over a large space of possible output
sequences. We propose to learn representations of the outputs that are
specifically meant for search: rich enough to specify the desired output but
compact enough to make search more efficient. Discrete latent codes are
appealing for this purpose, as they naturally allow sophisticated combinatorial
search strategies. The latent codes are learned using a self-supervised
learning principle, in which first a discrete autoencoder is trained on the
output sequences, and then the resulting latent codes are used as intermediate
targets for the end-to-end sequence prediction task. Based on these insights,
we introduce the \emph{Latent Programmer}, a program synthesis method that
first predicts a discrete latent code from input/output examples, and then
generates the program in the target language. We evaluate the Latent Programmer
on two domains: synthesis of string transformation programs, and generation of
programs from natural language descriptions. We demonstrate that the discrete
latent representation significantly improves synthesis accuracy. | [
"cs.LG",
"cs.AI"
]
|
Facial expression recognition (FER) of 3D face scans has received a
significant amount of attention in recent years. Most of the facial expression
recognition methods have been proposed using mainly 2D images. These methods
suffer from several issues like illumination changes and pose variations.
Moreover, 2D mapping from 3D images may lack some geometric and topological
characteristics of the face. Hence, to overcome this problem, a multi-modal 2D
+ 3D feature-based method is proposed. We extract shallow features from the 3D
images, and deep features using Convolutional Neural Networks (CNN) from the
transformed 2D images. To combine these features into a compact
representation, we use covariance matrices as descriptors for both feature
types rather than the features themselves. A covariance matrix learning layer
is used as a manifold layer to reduce the size of the deep covariance matrices
and enhance their discrimination power while preserving their manifold
structure. We then use the
Bag-of-Features (BoF) paradigm to quantize the covariance matrices after
flattening. Accordingly, we obtained two codebooks using shallow and deep
features. The global codebook is then used to feed an SVM classifier. High
classification performances have been achieved on the BU-3DFE and Bosphorus
datasets compared to the state-of-the-art methods. | [
"cs.CV"
]
|
The technology of image segmentation is widely used in medical image
processing, face recognition, pedestrian detection, etc. The current image
segmentation techniques include region-based segmentation, edge detection
segmentation, segmentation based on clustering, segmentation based on
weakly-supervised learning in CNN, etc. This paper analyzes and summarizes
these algorithms of image segmentation, and compares the advantages and
disadvantages of different algorithms. Finally, we make a prediction of the
development trend of image segmentation with the combination of these
algorithms. | [
"cs.CV"
]
|
Gradient boosting methods based on Structured Categorical Decision Trees
(SCDT) have been demonstrated to outperform numerical and one-hot-encodings on
problems where the categorical variable has a known underlying structure.
However, the enumeration procedure in the SCDT is infeasible except for
categorical variables with low or moderate cardinality. We propose and
implement two methods to overcome the computational obstacles and efficiently
perform Gradient Boosting on complex structured categorical variables. The
resulting package, called StructureBoost, is shown to outperform established
packages such as CatBoost and LightGBM on problems with categorical predictors
that contain sophisticated structure. Moreover, we demonstrate that
StructureBoost can make accurate predictions on unseen categorical values due
to its knowledge of the underlying structure. | [
"stat.ML",
"cs.LG",
"stat.AP",
"stat.CO"
]
|
Contrastive learning (CL) is an emerging analysis approach that aims to
discover unique patterns in one dataset relative to another. By applying this
approach to network analysis, we can reveal unique characteristics in one
network by contrasting with another. For example, with networks of protein
interactions obtained from normal and cancer tissues, we can discover unique
types of interactions in cancer tissues. However, existing CL methods cannot be
directly applied to networks. To address this issue, we introduce a novel
approach called contrastive network representation learning (cNRL). This
approach embeds network nodes into a low-dimensional space that reveals the
uniqueness of one network compared to another. Within this approach, we also
design a method, named i-cNRL, that offers interpretability in the learned
results, allowing for understanding which specific patterns are found in one
network but not the other. We demonstrate the capability of i-cNRL with
multiple network models and real-world datasets. Furthermore, we provide
quantitative and qualitative comparisons across i-cNRL and other potential cNRL
algorithm designs. | [
"cs.LG",
"cs.SI",
"stat.ML"
]
|
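As a linear illustration of the contrast operation such methods build on, the following cPCA-style sketch finds directions that carry variance in a target network's node features but not in a background network's; the actual cNRL/i-cNRL pipeline is more general than this.

```python
import numpy as np

def contrastive_directions(target_feats, background_feats, alpha=1.0, k=2):
    """Top eigenvectors of C_target - alpha * C_background (cPCA-style).

    Both inputs are (n, d) node-feature matrices, e.g. per-node metrics
    computed on the two networks being contrasted.
    """
    Ct = np.cov(target_feats, rowvar=False)
    Cb = np.cov(background_feats, rowvar=False)
    _, eigvecs = np.linalg.eigh(Ct - alpha * Cb)
    return eigvecs[:, -k:][:, ::-1]  # directions of largest contrast

rng = np.random.default_rng(1)
bg = rng.normal(size=(500, 5))
tg = bg.copy()
tg[:, 3] += rng.normal(scale=3.0, size=500)  # extra variance only in target
W = contrastive_directions(tg, bg)
print(np.abs(W[:, 0]).argmax())  # expected: feature index 3
```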
Algorithms based on spectral graph cut objectives such as normalized cuts,
ratio cuts and ratio association have become popular in recent years because
they are widely applicable and simple to implement via standard eigenvector
computations. Despite strong performance for a number of clustering tasks,
spectral graph cut algorithms still suffer from several limitations: first,
they require the number of clusters to be known in advance, but this
information is often unknown a priori; second, they tend to produce clusters
with uniform sizes. In some cases, the true clusters exhibit a known size
distribution; in image segmentation, for instance, human-segmented images tend
to yield segment sizes that follow a power-law distribution. In this paper, we
propose a general framework of power-law graph cut algorithms that produce
clusters whose sizes are power-law distributed and that do not fix the
number of clusters upfront. To achieve our goals, we treat the Pitman-Yor
exchangeable partition probability function (EPPF) as a regularizer to graph
cut objectives. Because the resulting objectives cannot be solved by relaxing
via eigenvectors, we derive a simple iterative algorithm to locally optimize
the objectives. Moreover, we show that our proposed algorithm can be viewed as
performing MAP inference on a particular Pitman-Yor mixture model. Our
experiments on various data sets show the effectiveness of our algorithms. | [
"cs.CV",
"cs.LG",
"stat.ML"
]
|
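The regularizer at the heart of this framework is the Pitman-Yor EPPF evaluated on the cluster sizes. A small NumPy sketch of its log value is given below; how it is weighted against the graph-cut term follows the paper, not this snippet.

```python
import numpy as np
from scipy.special import gammaln

def log_rising(x, m):
    # log of the rising factorial (x)_m = x (x+1) ... (x+m-1)
    return gammaln(x + m) - gammaln(x)

def pitman_yor_log_eppf(sizes, theta=1.0, d=0.5):
    """Log EPPF of a partition with block sizes `sizes` under PY(theta, d)."""
    sizes = np.asarray(sizes)
    n, K = sizes.sum(), len(sizes)
    lp = np.sum(np.log(theta + d * np.arange(1, K)))  # prod_{i<K}(theta + i*d)
    lp -= log_rising(theta + 1, n - 1)
    lp += np.sum(log_rising(1 - d, sizes - 1))
    return lp

# Skewed (power-law-like) sizes score higher than uniform sizes:
print(pitman_yor_log_eppf([60, 20, 10, 5, 3, 2]))
print(pitman_yor_log_eppf([17, 17, 17, 17, 16, 16]))
```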
Deep Neural Networks (DNNs) typically require massive amounts of computational
resources in inference tasks for computer vision applications. Quantization can
significantly reduce DNN computation and storage by decreasing the bitwidth of
network encodings. Recent research affirms that carefully selecting the
quantization levels for each layer can preserve the accuracy while pushing the
bitwidth below eight bits. However, without arduous manual effort, this deep
quantization can lead to significant accuracy loss, leaving it in a position of
questionable utility. As such, deep quantization opens a large hyper-parameter
space (bitwidth of the layers), the exploration of which is a major challenge.
We propose a systematic approach to tackle this problem, by automating the
process of discovering the quantization levels through an end-to-end deep
reinforcement learning framework (ReLeQ). We adapt policy optimization methods
to the problem of quantization, and focus on finding the best design decisions
in choosing the state and action spaces, network architecture and training
framework, as well as the tuning of various hyperparameters. We show how ReLeQ
can balance speed and quality, and provide an asymmetric general solution for
quantization of a large variety of deep networks (AlexNet, CIFAR-10, LeNet,
MobileNet-V1, ResNet-20, SVHN, and VGG-11) that virtually preserves the
accuracy (<= 0.3% loss) while minimizing the computation and storage cost. With
these DNNs, ReLeQ enables conventional hardware to achieve 2.2x speedup over
8-bit execution. Similarly, a custom DNN accelerator achieves 2.0x speedup and
energy reduction compared to 8-bit runs. These encouraging results mark ReLeQ
as the initial step towards automating the deep quantization of neural
networks. | [
"cs.LG",
"stat.ML"
]
|
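The per-layer action the agent selects is a bitwidth; the following is a generic sketch of uniform symmetric quantization at a chosen bitwidth (the paper's exact quantizer may differ), showing how reconstruction error grows as bits shrink.

```python
import torch

def quantize_uniform(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale

w = torch.randn(256, 256)
for bits in (8, 4, 2):
    err = (w - quantize_uniform(w, bits)).pow(2).mean()
    print(f"{bits}-bit MSE: {err:.6f}")  # error grows as the bitwidth shrinks
```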
We consider a framework involving behavioral economics and machine learning.
Rationally inattentive Bayesian agents make decisions based on their posterior
distribution, utility function, and information acquisition cost (a Renyi
divergence, which generalizes Shannon mutual information). By observing these
decisions, how can an observer estimate the utility function and information
acquisition cost? Using deep learning, we estimate framing information
(essential extrinsic features) that determines the agent's attention strategy.
Then we present a preference based inverse reinforcement learning algorithm to
test for rational inattention: is the agent a utility maximizer, attention
maximizer, and does an information cost function exist that rationalizes the
data? The test imposes a Renyi mutual information constraint which impacts how
the agent can select attention strategies to maximize their expected utility.
The test provides constructive estimates of the utility function and
information acquisition cost of the agent. We illustrate these methods on a
massive YouTube dataset for characterizing the commenting behavior of users. | [
"cs.LG",
"cs.HC",
"stat.ML"
]
|
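For reference, the Renyi divergence referred to above is (a standard definition, not a formula quoted from the paper):

```latex
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \sum_x p(x)^{\alpha}\, q(x)^{1-\alpha},
\qquad \alpha > 0,\ \alpha \neq 1,
```

which recovers the Kullback-Leibler divergence as alpha tends to 1; the Renyi mutual information used as an information-acquisition cost takes Q to be the product of the marginals.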
Image generation has been heavily investigated in computer vision, where one
core research challenge is to generate images from arbitrarily complex
distributions with little supervision. Generative Adversarial Networks (GANs)
as an implicit approach have achieved great successes in this direction and
therefore been employed widely. However, GANs are known to suffer from issues
such as mode collapse, non-structured latent space, being unable to compute
likelihoods, etc. In this paper, we propose a new unsupervised non-parametric
method named mixture of infinite conditional GANs or MIC-GANs, to tackle
several GAN issues together, aiming for image generation with parsimonious
prior knowledge. Through comprehensive evaluations across different datasets,
we show that MIC-GANs are effective in structuring the latent space and
avoiding mode collapse, and outperform state-of-the-art methods. MIC-GANs are
adaptive, versatile, and robust. They offer a promising solution to several
well-known GAN issues. Code available: github.com/yinghdb/MICGANs. | [
"cs.CV"
]
|
We present DetectFusion, an RGB-D SLAM system that runs in real-time and can
robustly handle semantically known and unknown objects that can move
dynamically in the scene. Our system detects, segments and assigns semantic
class labels to known objects in the scene, while tracking and reconstructing
them even when they move independently in front of the monocular camera. In
contrast to related work, we achieve real-time computational performance on
semantic instance segmentation with a novel method combining 2D object
detection and 3D geometric segmentation. In addition, we propose a method for
detecting and segmenting the motion of semantically unknown objects, thus
further improving the accuracy of camera tracking and map reconstruction. We
show that our method performs on par or better than previous work in terms of
localization and object reconstruction accuracy, while achieving about 20 FPS
even if the objects are segmented in each frame. | [
"cs.CV"
]
|
We introduce Few-Shot Video Object Detection (FSVOD) with three important
contributions: 1) a large-scale video dataset FSVOD-500 comprising 500
classes with class-balanced videos in each category for few-shot learning; 2) a
novel Tube Proposal Network (TPN) to generate high-quality video tube proposals
to aggregate feature representation for the target video object; 3) a
strategically improved Temporal Matching Network (TMN+) to match representative
query tube features and supports with better discriminative ability. Our TPN
and TMN+ are jointly and end-to-end trained. Extensive experiments demonstrate
that our method produces significantly better detection results on two few-shot
video object detection datasets compared to image-based methods and other naive
video-based extensions. Codes and datasets will be released at
https://github.com/fanq15/FewX. | [
"cs.CV"
]
|
In this paper, we propose a new video representation learning method, named
Temporal Squeeze (TS) pooling, which can extract the essential movement
information from a long sequence of video frames and map it into a set of few
images, named Squeezed Images. By embedding the Temporal Squeeze pooling as a
layer into off-the-shelf Convolutional Neural Networks (CNNs), we design a new
video classification model, named Temporal Squeeze Network (TeSNet). The
resulting Squeezed Images contain the essential movement information from the
video frames, corresponding to the optimization of the video classification
task. We evaluate our architecture on two video classification benchmarks, and
the results achieved are compared to the state-of-the-art. | [
"cs.CV"
]
|
Recommendation problems with large numbers of discrete items, such as
products, webpages, or videos, are ubiquitous in the technology industry. Deep
neural networks are being increasingly used for these recommendation problems.
These models use embeddings to represent discrete items as continuous vectors,
and although the vocabulary sizes and embedding dimensions heavily influence
the model's accuracy, they are often selected manually in a heuristic manner. We
present Neural Input Search (NIS), a technique for learning the optimal
vocabulary sizes and embedding dimensions for categorical features. The goal is
to maximize prediction accuracy subject to a constraint on the total memory
used by all embeddings. Moreover, we argue that the traditional Single-size
Embedding (SE), which uses the same embedding dimension for all values of a
feature, suffers from inefficient usage of model capacity and training data. We
propose a novel type of embedding, namely Multi-size Embedding (ME), which
allows the embedding dimension to vary for different values of the feature.
During training we use reinforcement learning to find the optimal vocabulary
size for each feature and embedding dimension for each value of the feature. In
experiments on two common types of large scale recommendation problems, i.e.
retrieval and ranking problems, NIS automatically found better vocabulary and
embedding sizes that result in $6.8\%$ and $1.8\%$ relative improvements on
Recall@1 and ROC-AUC over manually optimized ones. | [
"cs.LG",
"cs.IR",
"stat.ML"
]
|
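A minimal PyTorch sketch of a Multi-size Embedding: values are bucketed (here by id range, standing in for frequency), each bucket gets its own embedding width, and all are projected to a common output dimension. Bucket boundaries and widths are illustrative; in NIS the RL controller chooses them.

```python
import torch
import torch.nn as nn

class MultiSizeEmbedding(nn.Module):
    """Frequent values get wide embeddings, rare values narrow ones,
    all projected to a shared output dimension."""

    def __init__(self, bucket_sizes=(1000, 100000), dims=(64, 8), out_dim=64):
        super().__init__()
        self.boundaries = torch.tensor(bucket_sizes).cumsum(0)
        self.tables = nn.ModuleList(
            nn.Embedding(v, d) for v, d in zip(bucket_sizes, dims))
        self.projs = nn.ModuleList(
            nn.Linear(d, out_dim, bias=False) for d in dims)

    def forward(self, ids):
        out = torch.zeros(ids.size(0), self.projs[0].out_features)
        start = 0
        for table, proj, end in zip(self.tables, self.projs, self.boundaries):
            mask = (ids >= start) & (ids < end)
            if mask.any():
                out[mask] = proj(table(ids[mask] - start))
            start = end
        return out

emb = MultiSizeEmbedding()
print(emb(torch.tensor([3, 50000, 999])).shape)  # torch.Size([3, 64])
```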
We describe and study a model for an Automated Online Recommendation System
(AORS) in which a user's preferences can be time-dependent and can also depend
on the history of past recommendations and play-outs. The three key features of
the model that make it more realistic compared to existing models for
recommendation systems are (1) user preference is inherently latent, (2)
current recommendations can affect future preferences, and (3) it allows for
the development of learning algorithms with provable performance guarantees.
The problem is cast as an average-cost restless multi-armed bandit for a given
user, with an independent partially observable Markov decision process (POMDP)
for each item of content. We analyze the POMDP for a single arm, describe its
structural properties, and characterize its optimal policy. We then develop a
Thompson sampling-based online reinforcement learning algorithm to learn the
parameters of the model and optimize utility from the binary responses of the
users to continuous recommendations. We then analyze the performance of the
learning algorithm and characterize the regret. Illustrative numerical results
and directions for extension to the restless hidden Markov multi-armed bandit
problem are also presented. | [
"cs.LG"
]
|
In this technical report, we briefly introduce the solution of our team
"TAL-ai" for (semi-)supervised face detection under low-light conditions in
the UG2+ Challenge at CVPR 2021. By conducting several experiments with
popular image enhancement and image transfer methods, we brought the low-light
images and the normal images into closer domains, and we observed that
training on these data achieves better performance. We also adapted several
popular object detection frameworks, e.g., DetectoRS and Cascade-RCNN, and
large backbones such as Swin Transformer. Finally, we ensembled several
models, which achieved 74.89 mAP on the testing set, ranking 1st on the final
leaderboard. | [
"cs.CV"
]
|
Research on group activity recognition mostly leans on the standard
two-stream approach (RGB and Optical Flow) as input features. Few have
explored explicit pose information, and none has used it directly to reason
about the persons' interactions. In this paper, we leverage the skeleton
information to learn the interactions between the individuals directly from it.
With our proposed method GIRN, multiple relationship types are inferred from
independent modules that describe the relations between the body joints
pair-by-pair. In addition to the joint relations, we also experiment with the
previously unexplored relationship between individuals and relevant objects
(e.g. volleyball). The individuals' distinct relations are then merged through
an attention mechanism that gives more importance to those individuals more
relevant for distinguishing the group activity. We evaluate our method on the
Volleyball dataset, obtaining results competitive with the state-of-the-art. Our
experiments demonstrate the potential of skeleton-based approaches for modeling
multi-person interactions. | [
"cs.CV"
]
|
Fine-grained facial expression manipulation is a challenging problem, as
fine-grained expression details are difficult to capture. Most existing
expression manipulation methods resort to discrete expression labels, which
mainly edit global expressions and ignore the manipulation of fine details. To
tackle this limitation, we propose an end-to-end expression-guided generative
adversarial network (EGGAN), which utilizes structured latent codes and
continuous expression labels as input to generate images with expected
expressions. Specifically, we adopt an adversarial autoencoder to map a source
image into a structured latent space. Then, given the source latent code and
the target expression label, we employ a conditional GAN to generate a new
image with the target expression. Moreover, we introduce a perceptual loss and
a multi-scale structural similarity loss to preserve identity and global shape
during generation. Extensive experiments show that our method can manipulate
fine-grained expressions, and generate continuous intermediate expressions
between source and target expressions. | [
"cs.CV"
]
|
Localization technology is important for the development of indoor
location-based services (LBS). Global Positioning System (GPS) becomes invalid
in indoor environments due to the non-line-of-sight issue, so it is urgent to
develop a real-time high-accuracy localization approach for smartphones.
However, accurate localization is challenging due to issues such as real-time
response requirements, limited fingerprint samples and mobile device storage.
To address these problems, we propose a novel deep learning architecture:
Tensor-Generative Adversarial Network (TGAN).
We first introduce a transform-based 3D tensor to model fingerprint samples.
Instead of passive methods that construct a fingerprint database as a prior,
our model applies deep neural networks to train classifiers and then outputs
estimates. Then we propose a novel tensor-based super-resolution scheme using
the generative adversarial network (GAN) that adopts sparse coding as the
generator network and a residual learning network as the discriminator.
Further, we analyze the performance of TGAN and implement a trace-based
localization experiment that achieves better performance. Compared to existing
smartphone indoor positioning methods, which are energy-consuming and demanding
on devices, TGAN offers an improved solution in terms of localization accuracy,
response time, and implementation complexity. | [
"cs.LG",
"cs.NI",
"eess.SP"
]
|
According to observations, different visual objects have different salient
features in different scenarios. Even for the same object, its salient shape
and appearance features may change greatly from time to time in a long-term
tracking task. Motivated by these observations, we propose an end-to-end
feature fusion framework based on a Siamese network, named FF-Siam, which can
effectively fuse
different features for adaptive visual tracking. The framework consists of four
layers. A feature extraction layer is designed to extract the different
features of the target region and search region. The extracted features are
then put into a weight generation layer to obtain the channel weights, which
indicate the importance of different feature channels. Both features and the
channel weights are utilized in a template generation layer to generate a
discriminative template. Finally, the corresponding response maps created by
the convolution of the search region features and the template are applied with
a fusion layer to obtain the final response map for locating the target.
Experimental results demonstrate that the proposed framework achieves
state-of-the-art performance on the popular Temple-Color, OTB50 and UAV123
benchmarks. | [
"cs.CV"
]
|
Fast linear transforms are ubiquitous in machine learning, including the
discrete Fourier transform, discrete cosine transform, and other structured
transformations such as convolutions. All of these transforms can be
represented by dense matrix-vector multiplication, yet each has a specialized
and highly efficient (subquadratic) algorithm. We ask to what extent
hand-crafting these algorithms and implementations is necessary, what
structural priors they encode, and how much knowledge is required to
automatically learn a fast algorithm for a provided structured transform.
Motivated by a characterization of fast matrix-vector multiplication as
products of sparse matrices, we introduce a parameterization of
divide-and-conquer methods that is capable of representing a large class of
transforms. This generic formulation can automatically learn an efficient
algorithm for many important transforms; for example, it recovers the $O(N \log
N)$ Cooley-Tukey FFT algorithm to machine precision, for dimensions $N$ up to
$1024$. Furthermore, our method can be incorporated as a lightweight
replacement of generic matrices in machine learning pipelines to learn
efficient and compressible transformations. On a standard task of compressing a
single hidden-layer network, our method exceeds the classification accuracy of
unconstrained matrices on CIFAR-10 by 3.9 points -- the first time a structured
approach has done so -- with 4X faster inference speed and 40X fewer
parameters. | [
"cs.LG",
"stat.ML"
]
|
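The starting observation can be checked in a few lines of NumPy: the dense N x N DFT matrix and the FFT compute the same linear map, the latter via exactly the kind of sparse-factor (butterfly) decomposition the proposed parameterization learns.

```python
import numpy as np

N = 1024
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)  # dense N x N DFT matrix
x = np.random.default_rng(0).normal(size=N)
# O(N^2) dense matvec vs. O(N log N) FFT: same linear map.
print(np.allclose(F @ x, np.fft.fft(x)))  # True (to machine precision)
```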
The success of deep neural networks relies on significant architecture
engineering. Recently, neural architecture search (NAS) has emerged as a
promising way to greatly reduce manual effort in network design by
automatically searching
for optimal architectures, although typically such algorithms need an excessive
amount of computational resources, e.g., a few thousand GPU-days. To date, on
challenging vision tasks such as object detection, NAS, especially fast
versions of NAS, is less studied. Here we propose to search for the decoder
structure of object detectors with search efficiency being taken into
consideration. To be more specific, we aim to efficiently search for the
feature pyramid network (FPN) as well as the prediction head of a simple
anchor-free object detector, namely FCOS, using a tailored reinforcement
learning paradigm. With carefully designed search space, search algorithms and
strategies for evaluating network quality, we are able to efficiently search a
top-performing detection architecture within 4 days using 8 V100 GPUs. The
discovered architecture surpasses state-of-the-art object detection models
(such as Faster R-CNN, RetinaNet and FCOS) by 1.5 to 3.5 points in AP on the
COCO dataset, with comparable computation complexity and memory footprint,
demonstrating the efficacy of the proposed NAS for object detection. | [
"cs.CV"
]
|
This paper investigates two techniques for developing efficient
self-supervised vision transformers (EsViT) for visual representation learning.
First, we show through a comprehensive empirical study that multi-stage
architectures with sparse self-attentions can significantly reduce modeling
complexity but with a cost of losing the ability to capture fine-grained
correspondences between image regions. Second, we propose a new pre-training
task of region matching which allows the model to capture fine-grained region
dependencies and as a result significantly improves the quality of the learned
vision representations. Our results show that combining the two techniques,
EsViT achieves 81.3% top-1 on the ImageNet linear probe evaluation,
outperforming prior art with around an order of magnitude higher throughput.
When transferring to downstream linear classification tasks, EsViT outperforms
its supervised counterpart on 17 out of 18 datasets. The code and models will
be publicly available. | [
"cs.CV",
"cs.AI",
"cs.LG"
]
|
We propose a novel Active Learning framework capable of effectively training
a convolutional neural network for semantic segmentation of medical imaging, with
a limited amount of training labeled data. Our contribution is a practical
Cost-Effective Active Learning approach using dropout at test time as Monte
Carlo sampling to model the pixel-wise uncertainty and to analyze the image
information to improve the training performance. The source code of this
project is available at
https://marc-gorriz.github.io/CEAL-Medical-Image-Segmentation/ . | [
"cs.CV"
]
|
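A minimal sketch of the Monte Carlo dropout step: keep dropout active at test time, run several stochastic forward passes, and use the variance across passes as the uncertainty signal for selecting samples to label. The network here is a toy stand-in for the segmentation model.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                    nn.Dropout(p=0.5), nn.Linear(64, 2))

def mc_dropout_predict(model, x, passes=20):
    model.train()  # keeps Dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(passes)])
    return probs.mean(0), probs.var(0)

x = torch.randn(4, 10)
mean, var = mc_dropout_predict(net, x)
print(var.sum(dim=-1))  # higher variance -> better labeling candidates
```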
Depth Completion can produce a dense depth map from a sparse input and
provide a more complete 3D description of the environment. Despite great
progress made in depth completion, the sparsity of the input and low density of
the ground truth still make this problem challenging. In this work, we propose
DenseLiDAR, a novel real-time pseudo-depth guided depth completion neural
network. We exploit a dense pseudo-depth map obtained from simple morphological
operations to guide the network in three aspects: (1) Constructing a residual
structure for the output; (2) Rectifying the sparse input data; (3) Providing
dense structural loss for training the network. Thanks to these novel designs,
higher output quality can be achieved. In addition, two new
metrics for better evaluating the quality of the predicted depth map are also
presented. Extensive experiments on KITTI depth completion benchmark suggest
that our model is able to achieve the state-of-the-art performance at the
highest frame rate of 50Hz. The predicted dense depth is further evaluated by
several downstream robotic perception or positioning tasks. For the task of 3D
object detection, 3~5 percent performance gains on small-object categories are
achieved on the KITTI 3D object detection dataset. For RGB-D SLAM, higher
accuracy of the vehicle's trajectory is also obtained on the KITTI Odometry
dataset. These
promising results not only verify the high quality of our depth prediction, but
also demonstrate the potential of improving the related downstream tasks by
using depth completion results. | [
"cs.CV"
]
|
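A sketch of the kind of morphology-based pseudo-depth the network could be guided by, using OpenCV; the exact operator chain and kernel sizes in the paper may differ, and dilation (a max filter) is only a rough choice for depth data.

```python
import cv2
import numpy as np

def pseudo_depth(sparse_depth, kernel_size=9):
    """Densify a sparse depth map with simple morphology.

    sparse_depth: float32 HxW map with 0 where no LiDAR return exists.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    # Dilation propagates valid depths into nearby empty pixels;
    # a closing pass fills remaining small holes.
    dense = cv2.dilate(sparse_depth, kernel)
    dense = cv2.morphologyEx(dense, cv2.MORPH_CLOSE, kernel)
    return np.where(sparse_depth > 0, sparse_depth, dense)

sparse = np.zeros((64, 64), np.float32)
sparse[::8, ::8] = (np.random.rand(8, 8) * 80.0).astype(np.float32)
print((pseudo_depth(sparse) > 0).mean())  # coverage rises far above 1/64
```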
In this paper, we focus on the problem of unsupervised image-sentence
matching. Existing research explores utilizing document-level structural
information to sample positive and negative instances for model training.
Although the approach achieves positive results, it introduces a sampling bias
and fails to distinguish instances with high semantic similarity. To alleviate
the bias, we propose a new sampling strategy to select additional
intra-document image-sentence pairs as positive or negative samples.
Furthermore, to recognize the complex pattern in intra-document samples, we
propose a Transformer based model to capture fine-grained features and
implicitly construct a graph for each document, where concepts in a document
are introduced to bridge the representation learning of images and sentences in
the context of a document. Experimental results show the effectiveness of our
approach to alleviate the bias and learn well-aligned multimodal
representations. | [
"cs.CV",
"cs.AI",
"cs.MM"
]
|
Anomaly detection is a challenging problem in machine learning, and is even
more so when dealing with instances that are captured in low-level, raw data
representations without a well-behaved set of engineered features. The Radial
Basis Function Data Descriptor (RBFDD) network is an effective solution for
anomaly detection, however, it is a shallow model that does not deal
effectively with raw data representations. This paper investigates approaches
to modifying the RBFDD network to transform it into a deep one-class classifier
suitable for anomaly detection problems with low-level raw data
representations. We show that approaches based on transfer learning are not
effective and our results suggest that this is because the latent
representations learned by generic classification models are not suitable for
anomaly detection. Instead we show that an approach that adds multiple
convolutional layers before the RBF layer, to form a Deep Radial Basis Function
Data Descriptor (D-RBFDD) network, is very effective. This is shown in a set of
evaluation experiments using multiple anomaly detection scenarios created from
publicly available image classification datasets, and a real-world anomaly
detection dataset in which different types of arrhythmia are detected in
electrocardiogram (ECG) data. Our experiments show that the D-RBFDD network
outperforms state-of-the-art anomaly detection methods, including the Deep
Support Vector Data Descriptor (Deep SVDD), One-Class SVM, and Isolation Forest
on the image datasets, and produces competitive results for the ECG dataset. | [
"cs.LG"
]
|
Automatic image matting (AIM) refers to estimating the soft foreground from
an arbitrary natural image without any auxiliary input like trimap, which is
useful for image editing. Prior methods try to learn semantic features to aid
the matting process while being limited to images with salient opaque
foregrounds such as humans and animals. In this paper, we investigate the
difficulties when extending them to natural images with salient
transparent/meticulous foregrounds or non-salient foregrounds. To address the
problem, a novel end-to-end matting network is proposed, which can predict a
generalized trimap for any image of the above types as a unified semantic
representation. Simultaneously, the learned semantic features guide the matting
network to focus on the transition areas via an attention mechanism. We also
construct a test set AIM-500 that contains 500 diverse natural images covering
all types along with manually labeled alpha mattes, making it feasible to
benchmark the generalization ability of AIM models. Results of the experiments
demonstrate that our network trained on available composite matting datasets
outperforms existing methods both objectively and subjectively. The source code
and dataset are available at https://github.com/JizhiziLi/AIM. | [
"cs.CV",
"cs.AI"
]
|
Change detection, or anomaly detection, from street-view images acquired by
an autonomous robot at multiple different times, is a major problem in robotic
mapping and autonomous driving. Formulation as an image comparison task, which
operates on a given pair of query and reference images, is common to many
existing approaches to this problem. Unfortunately, providing relevant
reference images is not straightforward. In this paper, we propose a novel
formulation for change detection, termed compressive change retrieval, which
can operate on a query image and similar reference images retrieved from the
web. Compared to previous formulations, there are two sources of difficulty.
First, the retrieved reference images may frequently contain non-relevant
reference images, because even state-of-the-art place-recognition techniques
suffer from retrieval noise. Second, image comparison needs to be conducted in
a compressed domain to minimize the storage cost of large collections of
street-view images. To address the above issues, we also present a practical
change detection algorithm that uses compressed bag-of-words (BoW) image
representation as a scalable solution. The results of experiments conducted on
a practical change detection task, "moving object detection (MOD)," using the
publicly available Malaga dataset validate the effectiveness of the proposed
approach. | [
"cs.CV"
]
|
Lacking the ability to sense ambient environments effectively, blind and
visually impaired people (BVIP) face difficulty in walking outdoors, especially
in urban areas. Therefore, tools for assisting BVIP are of great importance. In
this paper, we propose a novel "flying guide dog" prototype for BVIP assistance
using drone and street view semantic segmentation. Based on the walkable areas
extracted from the segmentation prediction, the drone can adjust its movement
automatically and thus lead the user to walk along the walkable path. By
recognizing the color of pedestrian traffic lights, our prototype can help the
user to cross a street safely. Furthermore, we introduce a new dataset named
Pedestrian and Vehicle Traffic Lights (PVTL), which is dedicated to traffic
light recognition. The result of our user study in real-world scenarios shows
that our prototype is effective and easy to use, providing new insight into
BVIP assistance. | [
"cs.CV",
"cs.HC",
"cs.RO",
"eess.IV"
]
|
Rotation moment invariants have been of great interest in image processing
and pattern recognition. This paper presents a novel kind of rotation moment
invariants based on the Slepian functions, which were originally introduced in
the method of separation of variables for Helmholtz equations. They were first
proposed for time series by Slepian and his coworkers in the 1960s. Recent
studies have shown that these functions perform well in local approximation
compared to other approximation bases. Motivated by this good approximation
performance, we construct the Slepian-based moments and derive the rotation
invariants. We not only theoretically prove the invariance, but
also discuss the experiments on real data. The proposed rotation invariants are
robust to noise and yield decent performance in facial expression
classification. | [
"cs.CV",
"30E05, 33E10, 14L24"
]
|
Neural network based models for collaborative filtering have started to gain
attention recently. One branch of research is based on using deep generative
models to model user preferences where variational autoencoders were shown to
produce state-of-the-art results. However, there are some potentially
problematic characteristics of the current variational autoencoder for CF. The
first is the overly simplistic prior that VAEs incorporate for learning the latent
representations of user preference. The other is the model's inability to learn
deeper representations with more than one hidden layer for each network. Our
goal is to incorporate appropriate techniques to mitigate the aforementioned
problems of variational autoencoder CF and further improve the recommendation
performance. Our work is the first to apply flexible priors to collaborative
filtering and show that simple priors (in original VAEs) may be too restrictive
to fully model user preferences and setting a more flexible prior gives
significant gains. We experiment with the VampPrior, originally proposed for
image generation, to examine the effect of flexible priors in CF. We also show
that VampPriors coupled with gating mechanisms outperform SOTA results
including the Variational Autoencoder for Collaborative Filtering by meaningful
margins on 2 popular benchmark datasets (MovieLens & Netflix). | [
"stat.ML",
"cs.IR",
"cs.LG"
]
|
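A compact PyTorch sketch of the VampPrior idea: the prior is a mixture of the encoder's posteriors at K learned pseudo-inputs, in place of the standard normal prior. The encoder here is a toy stand-in; shapes and the diagonal-Gaussian assumption follow the usual VAE setup.

```python
import torch
import torch.nn as nn

class VampPrior(nn.Module):
    """Prior p(z) = (1/K) * sum_k q(z | u_k) over learned pseudo-inputs u_k."""

    def __init__(self, encoder, n_items, K=50):
        super().__init__()
        self.encoder = encoder  # maps x -> (mu, logvar)
        self.pseudo_inputs = nn.Parameter(torch.randn(K, n_items) * 0.01)

    def log_prob(self, z):  # z: (batch, latent_dim)
        mu, logvar = self.encoder(self.pseudo_inputs)  # each (K, latent_dim)
        z = z.unsqueeze(1)  # broadcast against the K mixture components
        log2pi = torch.log(torch.tensor(2 * torch.pi))
        log_comp = -0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                           + log2pi).sum(-1)  # (batch, K)
        return torch.logsumexp(log_comp, dim=1) - torch.log(
            torch.tensor(float(self.pseudo_inputs.size(0))))

lin = nn.Linear(100, 16)
enc = lambda x: lin(x).chunk(2, dim=-1)  # toy encoder -> (mu, logvar)
prior = VampPrior(enc, n_items=100)
print(prior.log_prob(torch.randn(4, 8)).shape)  # torch.Size([4])
```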
Unsupervised domain mapping aims to learn a function GXY that translates
domain X to domain Y in the absence of paired examples. Finding the optimal GXY
without paired data is an ill-posed problem, so appropriate constraints are
required to obtain reasonable solutions. One of the most prominent constraints
is cycle consistency, which enforces the translated image by GXY to be
translated back to the input image by an inverse mapping GYX. While cycle
consistency requires the simultaneous training of GXY and GYX, recent studies
have shown that one-sided domain mapping can be achieved by preserving pairwise
distances between images. Although cycle consistency and distance preservation
successfully constrain the solution space, they overlook the special property
that simple geometric transformations do not change the semantic structure of
images. Based on this special property, we develop a geometry-consistent
generative adversarial network (GcGAN), which enables one-sided unsupervised
domain mapping. GcGAN takes the original image and its counterpart image
transformed by a predefined geometric transformation as inputs and generates
two images in the new domain coupled with the corresponding
geometry-consistency constraint. The geometry-consistency constraint reduces
the space of possible solutions while keeping the correct solutions in the search
space. Quantitative and qualitative comparisons with the baseline (GAN alone)
and the state-of-the-art methods including CycleGAN and DistanceGAN demonstrate
the effectiveness of our method. | [
"cs.CV"
]
|
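The geometry-consistency constraint itself fits in one line: translating a transformed input should equal transforming the translated input. A sketch with a 90-degree rotation as the predefined transformation; the generator below is a stand-in for any image-to-image network.

```python
import torch

def rot90(x):  # the predefined geometric transformation f
    return torch.rot90(x, k=1, dims=(2, 3))

def geometry_consistency_loss(G, x):
    # Translating a rotated input should equal rotating the translated
    # input: G(f(x)) == f(G(x)).
    return torch.nn.functional.l1_loss(G(rot90(x)), rot90(G(x)))

G = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in generator
x = torch.randn(2, 3, 32, 32)
print(geometry_consistency_loss(G, x).item())
```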
Shape and texture are two prominent and complementary cues for recognizing
objects. Nonetheless, Convolutional Neural Networks are often biased towards
either texture or shape, depending on the training dataset. Our ablation shows
that such bias degrades model performance. Motivated by this observation, we
develop a simple algorithm for shape-texture debiased learning. To prevent
models from exclusively attending on a single cue in representation learning,
we augment training data with images with conflicting shape and texture
information (e.g., an image of chimpanzee shape but with lemon texture) and, most
importantly, provide the corresponding supervisions from shape and texture
simultaneously.
Experiments show that our method successfully improves model performance on
several image recognition benchmarks and adversarial robustness. For example,
by training on ImageNet, it helps ResNet-152 achieve substantial improvements
on ImageNet (+1.2%), ImageNet-A (+5.2%), ImageNet-C (+8.3%) and
Stylized-ImageNet (+11.1%), and on defending against FGSM adversarial attacker
on ImageNet (+14.4%). Our method is also compatible with other
advanced data augmentation strategies, e.g., Mixup and CutMix. The code is
available here: https://github.com/LiYingwei/ShapeTextureDebiasedTraining. | [
"cs.CV"
]
|
In Machine Learning, feature selection entails selecting a subset of the
available features in a dataset to use for model development. There are many
motivations for feature selection: it may result in better models, provide
insight into the data, and deliver economies in data gathering or data
processing. For these reasons feature selection has received a lot of
attention in data analytics research. In this paper we provide an overview of
the main methods and present practical examples with Python implementations.
While the main focus is on supervised feature selection techniques, we also
cover some feature transformation methods. | [
"cs.LG"
]
|
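In the spirit of the paper's Python examples, here is a minimal filter-style supervised selection with scikit-learn, ranking features by mutual information with the target; the dataset is only for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
# Rank features by mutual information with the target; keep the top 5.
selector = SelectKBest(score_func=mutual_info_classif, k=5).fit(X, y)
print(load_breast_cancer().feature_names[selector.get_support()])
```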
3D human shape and pose estimation is an essential task for human motion
analysis, which is widely used in many 3D applications. However, existing
methods cannot simultaneously capture the relations at multiple levels,
including spatial-temporal level and human joint level. Therefore they fail to
make accurate predictions in some hard scenarios when there is cluttered
background, occlusion, or extreme pose. To this end, we propose Multi-level
Attention Encoder-Decoder Network (MAED), including a Spatial-Temporal Encoder
(STE) and a Kinematic Topology Decoder (KTD) to model multi-level attentions in
a unified framework. STE consists of a series of cascaded blocks based on
Multi-Head Self-Attention, and each block uses two parallel branches to learn
spatial and temporal attention respectively. Meanwhile, KTD aims at modeling
the joint level attention. It regards pose estimation as a top-down
hierarchical process similar to SMPL kinematic tree. With the training set of
3DPW, MAED outperforms previous state-of-the-art methods by 6.2, 7.2, and 2.4
mm of PA-MPJPE on the three widely used benchmarks 3DPW, MPI-INF-3DHP, and
Human3.6M respectively. Our code is available at
https://github.com/ziniuwan/maed. | [
"cs.CV"
]
|
We present an approach which takes advantage of both structure and semantics
for unsupervised monocular learning of depth and ego-motion. More specifically,
we model the motion of individual objects and learn their 3D motion vector
jointly with depth and ego-motion. We obtain more accurate results, especially
for challenging dynamic scenes not addressed by previous approaches. This is an
extended version of Casser et al. [AAAI'19]. Code and models have been open
sourced at https://sites.google.com/corp/view/struct2depth. | [
"cs.CV",
"cs.RO"
]
|
Most recent gains in visual recognition have originated from the inclusion of
attention mechanisms in deep convolutional networks (DCNs). Because these
networks are optimized for object recognition, they learn where to attend using
only a weak form of supervision derived from image class labels. Here, we
demonstrate the benefit of using stronger supervisory signals by teaching DCNs
to attend to image regions that humans deem important for object recognition.
We first describe a large-scale online experiment (ClickMe) used to supplement
ImageNet with nearly half a million human-derived "top-down" attention maps.
Using human psychophysics, we confirm that the identified top-down features
from ClickMe are more diagnostic than "bottom-up" saliency features for rapid
image categorization. As a proof of concept, we extend a state-of-the-art
attention network and demonstrate that adding ClickMe supervision significantly
improves its accuracy and yields visual features that are more interpretable
and more similar to those used by human observers. | [
"cs.CV"
]
|
We present an approach for polarimetric Synthetic Aperture Radar (SAR) image
region boundary detection based on the use of B-Spline active contours and a
new model for polarimetric SAR data: the GHP distribution. In order to detect
the boundary of a region, initial B-Spline curves are specified, either
automatically or manually, and the proposed algorithm uses a deformable
contours technique to find the boundary. In doing this, the parameters of the
polarimetric GHP model for the data are estimated, in order to find the
transition points between the region being segmented and the surrounding area.
This is a local algorithm since it works only on the region to be segmented.
Results of its performance are presented. | [
"cs.CV",
"stat.ML"
]
|
Most state-of-the-art methods applied to time series are deep learning
methods that are too complex to be interpreted. This lack of interpretability
is a major drawback, as several real-world applications are critical tasks,
such as in the medical field or the autonomous driving field. The
explainability of models applied to time series has not gathered as much
attention as in the computer vision or natural language processing fields. In
this paper, we present an overview of existing explainable AI (XAI) methods
applied to time series and illustrate the types of explanations they produce.
We also provide a reflection on the impact of these explanation methods in
providing confidence and trust in AI systems. | [
"cs.LG",
"cs.AI"
]
|
We show that existing upsampling operators can be unified using the notion of
the index function. This notion is inspired by an observation in the decoding
process of deep image matting where indices-guided unpooling can often recover
boundary details considerably better than other upsampling operators such as
bilinear interpolation. By viewing the indices as a function of the feature
map, we introduce the concept of "learning to index", and present a novel
index-guided encoder-decoder framework where indices are self-learned
adaptively from data and are used to guide the downsampling and upsampling
stages, without extra training supervision. At the core of this framework is a
new learnable module, termed Index Network (IndexNet), which dynamically
generates indices conditioned on the feature map itself. IndexNet can be used
as a plug-in applying to almost all off-the-shelf convolutional networks that
have coupled downsampling and upsampling stages, giving the networks the
ability to dynamically capture variations of local patterns. In particular, we
instantiate and investigate five families of IndexNet and demonstrate their
effectiveness on four dense prediction tasks, including image denoising, image
matting, semantic segmentation, and monocular depth estimation. Code and models
have been made available at: https://tinyurl.com/IndexNetV1 | [
"cs.CV"
]
|
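The observation that motivates the framework is easy to reproduce: pooling indices, reused at unpooling time, restore values to their exact spatial locations. A minimal PyTorch illustration follows; IndexNet then replaces the hard argmax indices with a learned index function.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
pooled, indices = F.max_pool2d(x, kernel_size=2, return_indices=True)
restored = F.max_unpool2d(pooled, indices, kernel_size=2)
print((restored != 0).sum().item())  # 4 maxima back at their original positions
# IndexNet generalizes this: the indices become a learned function of the
# feature map instead of the hard argmax of max pooling.
```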
In this work we present a novel system for generation of virtual PET images
using CT scans. We combine a fully convolutional network (FCN) with a
conditional generative adversarial network (GAN) to generate simulated PET data
from given input CT data. The synthesized PET can be used for false-positive
reduction in lesion detection solutions. Clinically, such solutions may enable
lesion detection and drug treatment evaluation in a CT-only environment, thus
reducing the need for the more expensive and radioactive PET/CT scan. Our
dataset includes 60 PET/CT scans from Sheba Medical center. We used 23 scans
for training and 37 for testing. Different schemes to achieve the synthesized
output were qualitatively compared. Quantitative evaluation was conducted using
an existing lesion detection software, combining the synthesized PET as a false
positive reduction layer for the detection of malignant lesions in the liver.
Current results look promising, showing a 28% reduction in the average false
positive per case from 2.9 to 2.1. The suggested solution is comprehensive and
can be expanded to additional body organs, and different modalities. | [
"cs.CV",
"cs.AI"
]
|
Most problems involving simultaneous localization and mapping can nowadays be
solved using one of two fundamentally different approaches. The traditional
approach is given by a least-squares objective, which minimizes many local
photometric or geometric residuals over explicitly parametrized structure and
camera parameters. Unmodeled effects violating the Lambertian surface
assumption or geometric invariances of individual residuals are countered
through statistical averaging or the addition of robust kernels and smoothness
terms. Aiming at more accurate measurement models and the inclusion of
higher-order shape priors, the community more recently shifted its attention to
deep end-to-end models for solving geometric localization and mapping problems.
However, at test-time, these feed-forward models ignore the more traditional
geometric or photometric consistency terms, thus leading to a low ability to
recover fine details and potentially complete failure in corner case scenarios.
With an application to dense object modeling from RGBD images, our work aims at
taking the best of both worlds by embedding modern higher-order object shape
priors into classical iterative residual minimization objectives. We
demonstrate a general ability to improve mapping accuracy with respect to each
modality alone, and present a successful application to real data. | [
"cs.CV",
"cs.RO"
]
|
Recently, Transformer networks have redefined the state of the art in many
NLP tasks. However, these models suffer from quadratic computational cost in
the input sequence length $n$ to compute pairwise attention in each layer. This
has prompted recent research into sparse Transformers that sparsify the
connections in the attention layers. While empirically promising for long
sequences, fundamental questions remain unanswered: Can sparse Transformers
approximate any arbitrary sequence-to-sequence function, similar to their dense
counterparts? How does the sparsity pattern and the sparsity level affect their
performance? In this paper, we address these questions and provide a unifying
framework that captures existing sparse attention models. We propose sufficient
conditions under which we prove that a sparse attention model can universally
approximate any sequence-to-sequence function. Surprisingly, our results show
that sparse Transformers with only $O(n)$ connections per attention layer can
approximate the same function class as the dense model with $n^2$ connections.
Lastly, we present experiments comparing different patterns/levels of sparsity
on standard NLP tasks. | [
"cs.LG",
"stat.ML"
]
|
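One of the simplest sparsity patterns covered by such a framework is a fixed local window, giving O(n) connections in total per layer. A small sketch of building such a mask; whether a given pattern satisfies the paper's sufficient conditions is a separate question.

```python
import torch

def banded_attention_mask(n, window=2):
    """Each token attends only to a fixed local window: O(n) connections."""
    i = torch.arange(n)
    return (i[:, None] - i[None, :]).abs() <= window  # True = kept

m = banded_attention_mask(8)
print(m.sum(dim=-1))  # per-row connections stay O(1) rather than O(n)
# Typical use: scores.masked_fill(~m, float('-inf')) before the softmax.
```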
Over the past decade, multivariate time series classification (MTSC) has
received great attention with the advance of sensing techniques. Current deep
learning methods for MTSC are based on convolutional and recurrent neural
networks, with the assumption that time series variables have the same effect
on each other. Thus they cannot model the pairwise dependencies among variables
explicitly. Moreover, current spatial-temporal modeling methods based on
GNNs are inherently flat and lack the capability of aggregating node
information in a hierarchical manner. To address this limitation and attain
expressive global representation of MTS, we propose a graph pooling based
framework MTPool and view MTSC task as graph classification task. With graph
structure learning and temporal convolution, MTS slices are converted to graphs
and spatial-temporal features are extracted. Then, we propose a novel graph
pooling method, which uses an ``encoder-decoder'' mechanism to generate
adaptive centroids for cluster assignments. GNNs and graph pooling layers are
used for joint graph representation learning and graph coarsening. With
multiple graph pooling layers, the input graphs are hierarchically coarsened to
one node. Finally, a differentiable classifier takes this coarsened one-node
graph as input to get the final predicted class. Experiments on 10 benchmark
datasets demonstrate MTPool outperforms state-of-the-art methods in MTSC tasks. | [
"cs.LG",
"cs.AI"
]
|
Deep Convolutional Neural Networks (CNNs) have achieved significant success in
the computer vision field. However, the high computational cost of deep complex
models prevents their deployment on edge devices with limited memory and
computational resources. In this paper, we propose a novel filter pruning
method for convolutional neural network compression, namely spectral clustering
filter pruning with soft self-adaption manners (SCSP). We first apply spectral
clustering on filters layer by layer to explore their intrinsic connections and
keep only the efficient groups. Through the self-adaption manners, the pruning
operations can be done in a few epochs, letting the network gradually choose
meaningful groups. With this strategy, we not only achieve model
compression while keeping considerable performance, but also find a novel angle
to interpret the model compression process. | [
"cs.CV"
]
|
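A sketch of the per-layer clustering step with scikit-learn: flatten each filter and group filters by spectral clustering; the pruning and self-adaption schedule built on top of these groups is not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_filters(conv_weight, n_clusters=8):
    flat = conv_weight.reshape(conv_weight.shape[0], -1)  # one row per filter
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="rbf").fit_predict(flat)

w = np.random.randn(64, 32, 3, 3)  # a conv layer's weights (out, in, kH, kW)
print(np.bincount(cluster_filters(w)))  # number of filters in each group
```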
This study investigates the effects of Markov chain Monte Carlo (MCMC)
sampling in unsupervised Maximum Likelihood (ML) learning. Our attention is
restricted to the family of unnormalized probability densities for which the
negative log density (or energy function) is a ConvNet. We find that many of
the techniques used to stabilize training in previous studies are not
necessary. ML learning with a ConvNet potential requires only a few
hyper-parameters and no regularization. Using this minimal framework, we
identify a variety of ML learning outcomes that depend solely on the
implementation of MCMC sampling.
On one hand, we show that it is easy to train an energy-based model which can
sample realistic images with short-run Langevin. ML can be effective and stable
even when MCMC samples have much higher energy than true steady-state samples
throughout training. Based on this insight, we introduce an ML method with
purely noise-initialized MCMC, high-quality short-run synthesis, and the same
budget as ML with informative MCMC initialization such as CD or PCD. Unlike
previous models, our energy model can obtain realistic high-diversity samples
from a noise signal after training.
On the other hand, ConvNet potentials learned with non-convergent MCMC do not
have a valid steady-state and cannot be considered approximate unnormalized
densities of the training data because long-run MCMC samples differ greatly
from observed images. We show that it is much harder to train a ConvNet
potential to learn a steady-state over realistic images. To our knowledge,
long-run MCMC samples of all previous models lose the realism of short-run
samples. With correct tuning of Langevin noise, we train the first ConvNet
potentials for which long-run and steady-state MCMC samples are realistic
images. | [
"stat.ML",
"cs.CV",
"cs.LG"
]
|
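A generic sketch of the short-run Langevin sampler at the center of this study, on a toy quadratic energy; the paper's findings concern precisely how the step size and noise of this loop are tuned relative to the model being learned.

```python
import torch

def short_run_langevin(energy, x, steps=100, step_size=0.1, noise=0.01):
    """x_{t+1} = x_t - (step_size / 2) * grad E(x_t) + noise * N(0, I)."""
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - 0.5 * step_size * grad + noise * torch.randn_like(x)
    return x.detach()

# Toy quadratic energy with its minimum at the origin.
energy = lambda x: (x ** 2).sum(dim=1)
samples = short_run_langevin(energy, torch.randn(16, 2) * 3)
print(samples.norm(dim=1).mean())  # far below the initial average norm
```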
Humans are good at compositional zero-shot reasoning; someone who has never
seen a zebra before could nevertheless recognize one when we tell them it looks
like a horse with black and white stripes. Machine learning systems, on the
other hand, usually leverage spurious correlations in the training data, and
while such correlations can help recognize objects in context, they hurt
generalization. To be able to deal with underspecified datasets while still
leveraging contextual clues during classification, we propose ProtoProp, a
novel prototype propagation graph method. First we learn prototypical
representations of objects (e.g., zebra) that are conditionally independent
w.r.t. their attribute labels (e.g., stripes) and vice versa. Next we propagate
the independent prototypes through a compositional graph, to learn
compositional prototypes of novel attribute-object combinations that reflect
the dependencies of the target distribution. The method does not rely on any
external data, such as class hierarchy graphs or pretrained word embeddings. We
evaluate our approach on AO-Clever, a synthetic and strongly visual dataset
with clean labels, and UT-Zappos, a noisy real-world dataset of fine-grained
shoe types. We show that in the generalized compositional zero-shot setting we
outperform state-of-the-art results, and through ablations we show the
importance of each part of the method and their contribution to the final
results. | [
"cs.CV",
"cs.LG"
]
|
Particle Image Velocimetry (PIV) estimates the flow of fluid by analyzing
the motion of injected particles. The problem is challenging as the particles
lie at different depths but have similar appearance, and tracking a large number
of particles is particularly difficult. In this paper, we present a PIV
solution that uses densely sampled light field to reconstruct and track 3D
particles. We exploit the refocusing capability and focal symmetry constraint
of the light field for reliable particle depth estimation. We further propose a
new motion-constrained optical flow estimation scheme by enforcing local motion
rigidity and the Navier-Stokes constraint. Comprehensive experiments on
synthetic and real data show that using a single light field camera, our
technique can recover dense and accurate 3D fluid flows in small to medium
volumes. | [
"cs.CV"
]
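One concrete reading of the Navier-Stokes constraint in this setting is incompressibility (zero divergence). A minimal sketch of a soft divergence penalty on an estimated 2D flow field; the finite differences and the [2, H, W] layout are assumptions.

```python
import torch

def divergence_penalty(flow):
    """Soft incompressibility prior (div u ~ 0) for a flow field of shape [2, H, W]."""
    u, v = flow[0], flow[1]                 # x- and y-components (assumed layout)
    du_dx = u[:, 1:] - u[:, :-1]            # forward differences along x
    dv_dy = v[1:, :] - v[:-1, :]            # forward differences along y
    return (du_dx[:-1, :] + dv_dy[:, :-1]).pow(2).mean()
```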
|
The ability to look multiple times through a series of pose-adjusted glimpses
is fundamental to human vision. This critical faculty allows us to understand
highly complex visual scenes. Short term memory plays an integral role in
aggregating the information obtained from these glimpses and informing our
interpretation of the scene. Computational models have attempted to address
glimpsing and visual attention but have failed to incorporate the notion of
memory. We introduce a novel, biologically inspired visual working memory
architecture that we term the Hebb-Rosenblatt memory. We subsequently introduce
a fully differentiable Short Term Attentive Working Memory model (STAWM) which
uses transformational attention to learn a memory over each image it sees. The
state of our Hebb-Rosenblatt memory is embedded in STAWM as the weights space
of a layer. By projecting different queries through this layer we can obtain
goal-oriented latent representations for tasks including classification and
visual reconstruction. Our model obtains highly competitive classification
performance on MNIST and CIFAR-10. As demonstrated on the CelebA dataset, to
perform reconstruction the model learns to make a sequence of updates to a
canvas, which together constitute a parts-based representation. Classification
with the self-supervised representation obtained from MNIST is shown to be in
line with state-of-the-art models (none of which use a visual attention mechanism).
Finally, we show that STAWM can be trained under the dual constraints of
classification and reconstruction to provide an interpretable visual sketchpad
which helps open the 'black-box' of deep learning. | [
"cs.CV",
"cs.LG",
"stat.ML"
]
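A schematic sketch of a memory held as a layer's weight matrix and read by projecting queries through it, as described above; the outer-product write rule with decay, and the lr/decay values, are assumptions standing in for the exact Hebb-Rosenblatt update.

```python
import torch
import torch.nn as nn

class HebbRosenblattMemorySketch(nn.Module):
    """Memory held as a weight matrix; written with a Hebbian outer-product rule."""

    def __init__(self, dim, lr=0.1, decay=0.9):
        super().__init__()
        self.lr, self.decay = lr, decay
        self.register_buffer("M", torch.zeros(dim, dim))

    def write(self, pre, post):
        # Strengthen weights between co-active pre/post units, with decay.
        self.M = self.decay * self.M + self.lr * post.t() @ pre

    def read(self, query):
        # Project a query through the memory to get a goal-oriented representation.
        return query @ self.M.t()
```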
|
A technique for object localization based on pose estimation and camera
calibration is presented. The 3-dimensional (3D) coordinates are estimated by
collecting multiple 2-dimensional (2D) images of the object and are utilized
for the calibration of the camera. The calibration steps, which involve
computing a number of parameters, including the intrinsic and extrinsic
parameters for the removal of lens distortion, the size of the object, and the
position of the camera, are discussed. A transformation strategy to estimate the 3D pose
using the 2D images is presented. The proposed method is implemented on MATLAB
and validation experiments are carried out for both pose estimation and camera
calibration. | [
"cs.CV",
"cs.RO"
]
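The paper's implementation is in MATLAB; purely for illustration, an equivalent pipeline in Python with OpenCV, calibrating intrinsics and distortion from multiple 2D views and then estimating pose via PnP. The wrapper function and its inputs are hypothetical; the cv2 calls are standard.

```python
import cv2
import numpy as np

def calibrate_and_estimate_pose(object_points, image_points_list, image_size):
    """object_points: (N, 3) world coordinates; image_points_list: per-view (N, 2)."""
    obj_pts = [object_points.astype(np.float32)] * len(image_points_list)
    img_pts = [p.astype(np.float32) for p in image_points_list]
    # Intrinsics K and lens-distortion coefficients from multiple 2D views.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, image_size, None, None
    )
    # Extrinsic pose (rotation R, translation t) for one view via PnP.
    ok, rvec, tvec = cv2.solvePnP(obj_pts[0], img_pts[0], K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return K, dist, R, tvec
```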
|
Games such as go, chess and checkers have multiple equivalent game states,
i.e. multiple board positions where symmetrical and opposite moves should be
made. These equivalences are not exploited by current state-of-the-art neural
agents, which instead must relearn similar information, thereby wasting
computing time. Group-equivariant CNNs in existing work create networks that
can exploit symmetries to improve learning; however, they lack the
expressiveness to correctly reflect the move embeddings necessary for games. We
introduce Finite Group Neural Networks (FGNNs), a method for creating agents
with an innate understanding of these board positions. FGNNs are shown to
improve the performance of networks playing checkers (draughts), and can be
easily adapted to other games and learning problems. Additionally, FGNNs can be
created from existing network architectures. These include, for the first time,
those with skip connections and arbitrary layer types. We demonstrate that an
equivariant version of U-Net (FGNN-U-Net) outperforms the unmodified network in
image segmentation. | [
"cs.LG",
"stat.ML"
]
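FGNN construction details aside, the underlying idea of exploiting board symmetries can be sketched by averaging a network over the dihedral group D4 (natural for go-like boards; checkers' true symmetry group is smaller). Note this gives invariance only, not the move-embedding equivariance FGNNs provide.

```python
import torch

def d4_orbit(board):
    """All eight dihedral-group (D4) transforms of a square board tensor [..., H, W]."""
    views = []
    for k in range(4):
        r = torch.rot90(board, k, dims=(-2, -1))
        views.append(r)
        views.append(torch.flip(r, dims=(-1,)))
    return torch.stack(views)

def invariant_value(net, board):
    # Group-average the value network over the orbit -> symmetry-invariant estimate.
    return net(d4_orbit(board)).mean()
```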
|
With the goal of making high-resolution forecasts of regional rainfall,
precipitation nowcasting has become an important and fundamental technology
underlying various public services ranging from rainstorm warnings to flight
safety. Recently, the Convolutional LSTM (ConvLSTM) model has been shown to
outperform traditional optical flow based methods for precipitation nowcasting,
suggesting that deep learning models have a huge potential for solving the
problem. However, the convolutional recurrence structure in ConvLSTM-based
models is location-invariant while natural motion and transformation (e.g.,
rotation) are location-variant in general. Furthermore, since
deep-learning-based precipitation nowcasting is a newly emerging area, clear
evaluation protocols have not yet been established. To address these problems,
we propose both a new model and a benchmark for precipitation nowcasting.
Specifically, we go beyond ConvLSTM and propose the Trajectory GRU (TrajGRU)
model that can actively learn the location-variant structure for recurrent
connections. In addition, we provide a benchmark that includes a real-world
large-scale dataset from the Hong Kong Observatory, a new training loss, and a
comprehensive evaluation protocol to facilitate future research and gauge the
state of the art. | [
"cs.CV"
]
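A simplified sketch of the TrajGRU idea: generate L flow fields from the input and state, warp the hidden state along each, and aggregate the warped states with a 1x1 convolution inside GRU gating. The kernel sizes, L, and the flow-channel convention are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajGRUCellSketch(nn.Module):
    """GRU cell whose state-to-state connections follow L learned flow fields."""

    def __init__(self, in_ch, hid_ch, L=5):
        super().__init__()
        self.L = L
        self.flow_net = nn.Conv2d(in_ch + hid_ch, 2 * L, 5, padding=2)
        self.x2h = nn.Conv2d(in_ch, 3 * hid_ch, 3, padding=1)
        self.h2h = nn.Conv2d(L * hid_ch, 3 * hid_ch, 1)

    def warp(self, h, flow):
        # Bilinearly sample h along one flow field (channel 0 = dx, 1 = dy; assumed).
        B, _, H, W = h.shape
        ys, xs = torch.meshgrid(
            torch.arange(H, device=h.device, dtype=h.dtype),
            torch.arange(W, device=h.device, dtype=h.dtype),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1) + flow.permute(0, 2, 3, 1)
        gx = 2 * grid[..., 0] / (W - 1) - 1      # normalize to [-1, 1]
        gy = 2 * grid[..., 1] / (H - 1) - 1
        return F.grid_sample(h, torch.stack((gx, gy), dim=-1), align_corners=True)

    def forward(self, x, h):
        flows = self.flow_net(torch.cat([x, h], dim=1)).chunk(self.L, dim=1)
        warped = torch.cat([self.warp(h, f) for f in flows], dim=1)
        zx, rx, nx = self.x2h(x).chunk(3, dim=1)
        zh, rh, nh = self.h2h(warped).chunk(3, dim=1)
        z, r = torch.sigmoid(zx + zh), torch.sigmoid(rx + rh)
        n = torch.tanh(nx + r * nh)
        return (1 - z) * n + z * h
```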
|
In this paper, we carry out a comparative study of the efficacy of wavelets
belonging to the Daubechies and Coiflet families in achieving image segmentation
through a fast statistical algorithm. The fact that wavelets of the Daubechies
family optimally capture polynomial trends, while those of the Coiflet family
satisfy the minimax condition, makes this comparison interesting. In the
context of the present algorithm, it is found that Coiflet wavelets perform
better than Daubechies wavelets. | [
"cs.CV"
]
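A small sketch of how the two families might be compared in practice, using PyWavelets subband-energy features as a stand-in for the paper's fast statistical algorithm; the feature choice and the wavelet names ("db2", "coif1") are illustrative.

```python
import numpy as np
import pywt

def subband_energy_features(image, wavelet="coif1", level=2):
    """Mean-absolute-coefficient features per wavelet subband of a 2D image."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]           # approximation subband
    for cH, cV, cD in coeffs[1:]:                  # detail subbands per level
        feats += [np.mean(np.abs(c)) for c in (cH, cV, cD)]
    return np.array(feats)

# Family comparison, e.g.: subband_energy_features(img, "db2") vs. "coif1".
```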
|
Real-time object detection in videos using lightweight hardware is a crucial
component of many robotic tasks. Detectors using different modalities and with
varying computational complexities offer different trade-offs. One option is to
have a very lightweight model that can predict from all modalities at once for
each frame. However, in some situations (e.g., in static scenes) it might be
better to have a more complex but more accurate model and to extrapolate from
previous predictions for the frames coming in at processing time. We formulate
this task as a sequential decision-making problem and use reinforcement
learning (RL) to generate a policy that decides from the RGB input which
detector out of a portfolio of different object detectors to take for the next
prediction. The objective of the RL agent is to maximize the accuracy of the
predictions per image. We evaluate the approach on the Waymo Open Dataset and
show that it exceeds the performance of each single detector. | [
"cs.LG",
"cs.CV",
"cs.RO"
]
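A minimal sketch of the decision component: a small policy network mapping an RGB frame to a categorical choice over the detector portfolio. The architecture and the policy-gradient training hinted at in the comment are assumptions, not the paper's exact RL setup.

```python
import torch
import torch.nn as nn

class DetectorSelectionPolicy(nn.Module):
    """Maps an RGB frame to a categorical choice over a portfolio of detectors."""

    def __init__(self, n_detectors):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_detectors),
        )

    def forward(self, frame):
        # Returns a distribution; sample an action, run that detector (or reuse
        # the previous prediction), and reward with the per-image accuracy.
        return torch.distributions.Categorical(logits=self.backbone(frame))
```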
|
A first-person camera placed at a person's head captures which objects are
important to the camera wearer. Most prior methods for this task learn to
detect such important objects from the manually labeled first-person data in a
supervised fashion. However, important objects are strongly related to the
camera wearer's internal state such as his intentions and attention, and thus,
only the person wearing the camera can provide the importance labels. Such a
constraint makes the annotation process costly and limited in scalability.
In this work, we show that we can detect important objects in first-person
images without supervision by the camera wearer or even third-person labelers.
We formulate the important object detection problem as an interplay between
the 1) segmentation and 2) recognition agents. The segmentation agent first
proposes a possible important object segmentation mask for each image, and then
feeds it to the recognition agent, which learns to predict an important object
mask using visual semantics and spatial features.
We implement such an interplay between both agents via an alternating
cross-pathway supervision scheme inside our proposed Visual-Spatial Network
(VSN). Our VSN consists of spatial ("where") and visual ("what") pathways, one
of which learns common visual semantics while the other focuses on the spatial
location cues. Our unsupervised learning is accomplished via a cross-pathway
supervision, where one pathway feeds its predictions to a segmentation agent,
which proposes a candidate important object segmentation mask that is then used
by the other pathway as a supervisory signal. We show our method's success on
two different important object datasets, where our method achieves similar or
better results as the supervised methods. | [
"cs.CV"
]
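A schematic sketch of one alternating cross-pathway step, where each pathway's prediction becomes the pseudo-label for the other; the pathway interfaces, the BCE loss, and the update order are assumptions.

```python
import torch
import torch.nn.functional as F

def cross_pathway_step(what_net, where_net, image, opt_what, opt_where):
    """One alternating update: each pathway supervises the other with its mask."""
    with torch.no_grad():
        target_for_where = what_net(image).sigmoid()   # pseudo-label from "what"
    loss_where = F.binary_cross_entropy_with_logits(where_net(image), target_for_where)
    opt_where.zero_grad(); loss_where.backward(); opt_where.step()

    with torch.no_grad():
        target_for_what = where_net(image).sigmoid()   # pseudo-label from "where"
    loss_what = F.binary_cross_entropy_with_logits(what_net(image), target_for_what)
    opt_what.zero_grad(); loss_what.backward(); opt_what.step()
```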
|
The purpose of this paper is to describe one-shot-learning gesture
recognition systems developed on the \textit{ChaLearn Gesture Dataset}. We use
RGB and depth images and combine appearance (Histograms of Oriented Gradients)
and motion descriptors (Histogram of Optical Flow) for parallel temporal
segmentation and recognition. The Quadratic-Chi distance family is used to
measure differences between histograms to capture cross-bin relationships. We
also propose a new algorithm for trimming videos, i.e., removing all the
unimportant frames. We present two methods that use a combination of HOG-HOF
descriptors together with variants of the Dynamic Time Warping technique.
Both methods outperform other published methods and help narrow down the gap
between human performance and algorithms on this task. The code has been made
publicly available in the MLOSS repository. | [
"cs.CV"
]
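The Quadratic-Chi family has a closed form (Pele and Werman); a sketch with bin-similarity matrix A and normalization exponent m, both of which the caller must supply; the default m and the zero-bin handling are illustrative.

```python
import numpy as np

def quadratic_chi(p, q, A, m=0.9):
    """Quadratic-Chi distance between histograms p, q with symmetric bin-similarity A."""
    d = p - q
    z = (A @ (p + q)) ** m          # cross-bin normalization term
    z[z == 0] = 1.0                 # empty bins: d is 0 there too (A has unit diagonal)
    u = d / z
    return np.sqrt(max(float(u @ A @ u), 0.0))
```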
|