text stringlengths 29–3.31k | label sequencelengths 1–11 |
---|---|
In this paper we study the concentration properties for the eigenvalues of
kernel matrices, which are central objects in a wide range of kernel methods
and, more recently, in network analysis. We present a set of concentration
inequalities tailored for each individual eigenvalue of the kernel matrix with
respect to its known asymptotic limit. The inequalities presented here are of the
relative type, meaning that they scale with the eigenvalue under consideration,
which results in convergence rates that vary across the spectrum. The rates we
obtain here are faster than the typical $\mathcal{O}(\frac{1}{\sqrt{n}})$ and are often
exponential, depending on regularity assumptions of Sobolev type. One key
feature of our results is that they apply to non-positive kernels, which is
fundamental in the context of network analysis. We show how our results are
well suited for the study of dot product kernels, which are related to random
geometric graphs on the sphere, via the graphon formalism. We illustrate our
results by applying them to a variety of dot product kernels on the sphere and
to the one-dimensional Gaussian kernel. | [
"stat.ML",
"cs.LG"
] |
Despite their importance in training artificial intelligence systems, large
datasets remain challenging to acquire. For example, the ImageNet dataset
required fourteen million labels of basic human knowledge, such as whether an
image contains a chair. Unfortunately, this knowledge is so simple that it is
tedious for human annotators to provide, yet tacit enough that they remain
necessary. However, human collaborative efforts for tasks like labeling massive
amounts of data are costly, inconsistent, and prone to failure, and this method
does not resolve the issue of the resulting dataset being static in nature.
What if we asked people questions they want to answer and collected their
responses as data? This would mean we could gather data at a much lower cost,
and expanding a dataset would simply become a matter of asking more questions.
We focus on the task of Visual Question Answering (VQA) and propose a system
that uses Visual Question Generation (VQG) to produce questions, poses them to
social media users, and collects their responses. We present two models that
can then parse clean answers from the noisy human responses significantly
better than our baselines, with the goal of eventually incorporating the
answers into a VQA dataset. By demonstrating how
our system can collect large amounts of data at little to no cost, we envision
similar systems being used to improve performance on other tasks in the future. | [
"cs.CV",
"cs.CL",
"cs.HC",
"cs.LG",
"I.2, I.4, I.6, I.7",
"I.2; I.4; I.6; I.7"
] |
Person re-identification is an important task that requires learning
discriminative visual features for distinguishing different person identities.
Diverse auxiliary information has been utilized to improve the visual feature
learning. In this paper, we propose to exploit natural language descriptions as
additional training supervision for effective visual features. Compared with
other auxiliary information, language can describe a specific person from more
compact and semantic visual aspects, and is thus complementary to the pixel-level
image data. Our method not only learns a better global visual feature with the
supervision of the overall description but also enforces semantic consistencies
between local visual and linguistic features, which is achieved by building
global and local image-language associations. The global image-language
association is established according to the identity labels, while the local
association is based upon the implicit correspondences between image regions
and noun phrases. Extensive experiments demonstrate the effectiveness of
employing language as training supervision with the two association schemes.
Our method achieves state-of-the-art performance without utilizing any
auxiliary information during testing and shows better performance than other
joint embedding methods for the image-language association. | [
"cs.CV"
] |
In neural architecture search (NAS), differentiable architecture search
(DARTS) has recently attracted much attention due to its high efficiency. It
defines an over-parameterized network with mixed edges, each of which
represents all operator candidates, and jointly optimizes the weights of the
network and its architecture in an alternating manner. However, this method
tends to find a model whose weights converge faster than those of the others,
and such fast convergence often leads to overfitting. Accordingly, the
resulting model does not always generalize well. To overcome this problem, we
propose a method called minimum stable rank DARTS (MSR-DARTS) for finding a
model with the best generalization error by replacing architecture optimization
with the selection process using the minimum stable rank criterion.
Specifically, a convolution operator is represented by a matrix, and MSR-DARTS
selects the one with the smallest stable rank. We evaluated MSR-DARTS on
CIFAR-10 and ImageNet datasets. It achieves an error rate of 2.54% with 4.0M
parameters within 0.3 GPU-days on CIFAR-10, and a top-1 error rate of 23.9% on
ImageNet. The official code is available at
https://github.com/mtaecchhi/msrdarts.git. | [
"cs.CV",
"cs.AI"
] |
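The abstract above selects the convolution operator with the smallest stable rank. A minimal sketch of that selection step follows; the way the kernel is flattened into a matrix and the candidate names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def stable_rank(weight):
    """Stable rank ||W||_F^2 / ||W||_2^2 of a convolution kernel flattened to a matrix."""
    mat = weight.reshape(weight.shape[0], -1)        # out_channels x (in_channels * k * k)
    fro2 = np.sum(mat ** 2)                          # squared Frobenius norm
    spec2 = np.linalg.norm(mat, ord=2) ** 2          # squared spectral norm (largest singular value)
    return fro2 / spec2

# Hypothetical candidate operators on one mixed edge; keep the one with minimum stable rank.
candidates = {name: np.random.randn(16, 16, 3, 3) for name in ("sep_conv_a", "sep_conv_b")}
selected = min(candidates, key=lambda name: stable_rank(candidates[name]))
print("selected operator:", selected)
```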
We introduce a structured low rank algorithm for the calibration-free
compensation of field inhomogeneity artifacts in Echo Planar Imaging (EPI) MRI
data. We acquire the data using two EPI readouts that differ in echo-time (TE).
Using time segmentation, we reformulate the field inhomogeneity compensation
problem as the recovery of an image time series from highly undersampled
Fourier measurements. The temporal profile at each pixel is modeled as a single
exponential, which is exploited to fill in the missing entries. We show that
the exponential behavior at each pixel, along with the spatial smoothness of
the exponential parameters, can be exploited to derive a 3D annihilation
relation in the Fourier domain. This relation translates to a low rank property
on a structured multi-fold Toeplitz matrix, whose entries correspond to the
measured k-space samples. We introduce a fast two-step algorithm for the
completion of the Toeplitz matrix from the available samples. In the first
step, we estimate the null space vectors of the Toeplitz matrix using only its
fully sampled rows. The null space is then used to estimate the signal
subspace, which facilitates the efficient recovery of the time series of
images. We finally demonstrate the proposed approach on spherical MR phantom
data and human data and show that the artifacts are significantly reduced. The
proposed approach could potentially be used to compensate for time-varying
field map variations in dynamic applications such as functional MRI. | [
"cs.CV"
] |
Ultrasound imaging makes use of backscattering of waves during their
interaction with scatterers present in biological tissues. Simulation of
synthetic ultrasound images is a challenging problem because of the inability to
accurately model various factors, including intra-/inter-scanline interference,
transducer-to-surface coupling, artifacts on transducer elements,
inhomogeneous shadowing, and nonlinear attenuation. Current approaches typically
solve wave-space equations, making them computationally expensive and slow to
operate. We propose a generative adversarial network (GAN) inspired approach
for fast simulation of patho-realistic ultrasound images. We apply the
framework to intravascular ultrasound (IVUS) simulation. A stage 0 simulation
performed using pseudo B-mode ultrasound image simulator yields speckle mapping
of a digitally defined phantom. The stage I GAN subsequently refines them to
preserve tissue specific speckle intensities. The stage II GAN further refines
them to generate high resolution images with patho-realistic speckle profiles.
We evaluate the patho-realism of the simulated images with a visual Turing test,
which indicates equivocal confusion in discriminating simulated images from real ones. We
also quantify the shift in tissue specific intensity distributions of the real
and simulated images to prove their similarity. | [
"cs.CV"
] |
Adversarial machine learning has attracted a great amount of attention in
recent years. In a poisoning attack, the adversary can inject a small number of
specially crafted samples into the training data which make the decision
boundary severely deviate and cause unexpected misclassification. Due to the
great importance and popular use of support vector machines (SVM), we consider
defending SVM against poisoning attacks in this paper. We study two commonly
used strategies for defending: designing robust SVM algorithms and data
sanitization. Though several robust SVM algorithms have been proposed before,
most of them either lack adversarial resilience or rely on strong
assumptions about the data distribution or the attacker's behavior. Moreover,
research on their complexities is still quite limited. We are the first, to
the best of our knowledge, to prove that even the simplest hard-margin
one-class SVM with outliers problem is NP-complete and admits no fully
polynomial-time approximation scheme unless P$=$NP (meaning that even an
approximate solution is hard to achieve).
For the data sanitization defense, we link it to the intrinsic dimensionality
of data; in particular, we provide a sampling theorem in doubling metrics for
explaining the effectiveness of DBSCAN (as a density-based outlier removal
method) for defending against poisoning attacks. In our empirical experiments,
we compare several defenses including the DBSCAN and robust SVM methods, and
investigate how the intrinsic dimensionality and data density influence their
performance. | [
"cs.LG",
"cs.CG",
"cs.CR",
"stat.ML"
] |
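A minimal sketch of the data-sanitization defense discussed above: DBSCAN flags low-density points as noise, and those points are removed before fitting the SVM. The parameter values are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

def sanitize_then_fit(X, y, eps=0.5, min_samples=5):
    """Drop DBSCAN noise points (label -1), then train an SVM on the cleaned data."""
    keep = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X) != -1
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X[keep], y[keep])
    return clf, keep
```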
Anomaly detection in computer vision is the task of identifying images which
deviate from a set of normal images. A common approach is to train deep
convolutional autoencoders to inpaint covered parts of an image and compare the
output with the original image. By training on anomaly-free samples only, the
model is assumed to be unable to reconstruct anomalous regions properly.
For anomaly detection by inpainting, we suggest that it is beneficial to
incorporate information from potentially distant regions. In particular, we pose
anomaly detection as a patch-inpainting problem and propose to solve it with a
purely self-attention-based approach that discards convolutions. The proposed
Inpainting Transformer (InTra) is trained to inpaint covered patches in a large
sequence of image patches, thereby integrating information across large regions
of the input image. When training from scratch, in comparison to other methods
not using extra training data, InTra achieves results on par with the current
state-of-the-art on the MVTec AD dataset for detection and surpasses it on
segmentation. | [
"cs.CV"
] |
We present a novel unsupervised learning framework for single view depth
estimation using monocular videos. It is well known in 3D vision that enlarging
the baseline can increase the depth estimation accuracy, and jointly optimizing
a set of camera poses and landmarks is essential. In previous monocular
unsupervised learning frameworks, only part of the photometric and geometric
constraints within a sequence are used as supervisory signals. This may result
in a short baseline and overfitting. Besides, previous works generally estimate
a low-resolution depth map from a low-resolution input image. The low-resolution
depth is then interpolated to recover the original resolution. This strategy
may generate large errors on object boundaries, as the depths of background and
foreground are mixed to yield the high-resolution depth. In this paper, we
introduce a bundle adjustment framework and a super-resolution network to solve
the above two problems. In bundle adjustment, depths and poses of an image
sequence are jointly optimized, which increases the baseline by establishing
the relationship between farther frames. The super-resolution network learns to
estimate a high-resolution depth map from a low-resolution image. Additionally, we
introduce the clip loss to deal with moving objects and occlusion. Experimental
results on the KITTI dataset show that the proposed algorithm outperforms the
state-of-the-art unsupervised methods using monocular sequences, and achieves
comparable or even better results compared to unsupervised methods using stereo
sequences. | [
"cs.CV"
] |
We focus on contrastive methods for self-supervised video representation
learning. A common paradigm in contrastive learning is to construct positive
pairs by sampling different data views for the same instance, with different
data instances as negatives. These methods implicitly assume a set of
representational invariances to the view-selection mechanism (e.g., sampling
frames with temporal shifts), which may lead to poor performance on downstream
tasks that violate these invariances (e.g., fine-grained video action recognition,
which would benefit from temporal information). To overcome this limitation, we
propose an 'augmentation aware' contrastive learning framework, where we
explicitly provide a sequence of augmentation parameterisations (such as the
values of the time shifts used to create data views) as composable augmentation
encodings (CATE) to our model when projecting the video representations for
contrastive learning. We show that representations learned by our method encode
valuable information about specified spatial or temporal augmentation, and in
doing so also achieve state-of-the-art performance on a number of video
benchmarks. | [
"cs.CV"
] |
Detection and localization of image manipulations like splices are gaining in
importance with the easy accessibility of image editing software. While
detection generates a verdict for an image, it provides no insight into the
manipulation. Localization helps explain a positive detection by identifying
the pixels of the image which have been tampered with. We propose a deep-learning-based
method for splice localization without prior knowledge of a test image's
camera model. It comprises a novel approach for learning rich filters and for
suppressing image-edges. Additionally, we train our model on a surrogate task
of camera model identification, which allows us to leverage large and widely
available, unmanipulated, camera-tagged image databases. During inference, we
assume that the spliced and host regions come from different camera models and
we segment these regions using a Gaussian-mixture model. Experiments on three
test databases demonstrate results on par with and above the state-of-the-art
and a good generalization ability to unknown datasets. | [
"cs.CV"
] |
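The final segmentation step described above can be sketched with a two-component Gaussian mixture over per-pixel camera-model features; the feature extractor itself is not shown, and the array shapes are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_spliced_region(pixel_features):
    """Split per-pixel features into two groups (host vs. spliced) with a 2-component GMM."""
    h, w, d = pixel_features.shape
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(pixel_features.reshape(-1, d))
    return labels.reshape(h, w)   # binary map separating the two camera-model regions
```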
A central capability of a long-lived reinforcement learning (RL) agent is to
incrementally adapt its behavior as its environment changes, and to
incrementally build upon previous experiences to facilitate future learning in
real-world scenarios. In this paper, we propose LifeLong Incremental
Reinforcement Learning (LLIRL), a new incremental algorithm for efficient
lifelong adaptation to dynamic environments. We develop and maintain a library
that contains an infinite mixture of parameterized environment models, which is
equivalent to clustering environment parameters in a latent space. The prior
distribution over the mixture is formulated as a Chinese restaurant process
(CRP), which incrementally instantiates new environment models without any
external information to signal environmental changes in advance. During
lifelong learning, we employ the expectation maximization (EM) algorithm with
online Bayesian inference to update the mixture in a fully incremental manner.
In EM, the E-step involves estimating the posterior expectation of
environment-to-cluster assignments, while the M-step updates the environment
parameters for future learning. This method allows for all environment models
to be adapted as necessary, with new models instantiated for environmental
changes and old models retrieved when previously seen environments are
encountered again. Experiments demonstrate that LLIRL outperforms relevant
existing methods, and enables effective incremental adaptation to various
dynamic environments for lifelong learning. | [
"cs.LG",
"cs.AI"
] |
The goal of this paper is to formulate a general framework for a
constraint-based refinement of the optical flow using variational methods. We
demonstrate that for a particular choice of the constraint, formulated as a
minimization problem with quadratic regularization, our results are close
to the continuity-equation-based fluid flow. This closeness to the continuity
model is theoretically justified through a modified augmented Lagrangian method
and validated numerically. Further, along with the continuity constraint, our
model can include geometric constraints as well. The correctness of our process
is studied in the Hilbert space setting. Moreover, a special feature of our
system is the possibility of a diagonalization by the Cauchy-Riemann operator
and transforming it to a diffusion process on the curl and the divergence of
the flow. Using the theory of semigroups on the decoupled system, we show that
our process preserves the spatial characteristics of the divergence and the
vorticities. We perform several numerical experiments and show the results on
different datasets. | [
"cs.CV",
"math.AP",
"35J50, 35Q68"
] |
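One illustrative way to write the constraint-based refinement described above (notation assumed, not the paper's exact functional) is as a quadratically regularized problem with the continuity equation as the constraint:

```latex
\min_{\mathbf{w}=(u,v)} \;
\int_{\Omega} \Big( |\mathbf{w}-\mathbf{w}_0|^{2}
 + \alpha \left( |\nabla u|^{2} + |\nabla v|^{2} \right) \Big)\, dx
\quad \text{subject to} \quad
\partial_t \rho + \nabla \cdot (\rho\, \mathbf{w}) = 0 ,
```

where $\mathbf{w}_0$ is an initial flow estimate and $\rho$ the image intensity; an augmented Lagrangian would then handle the constraint, in line with the abstract.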
Finger vein recognition has drawn increasing attention as one of the most
popular and promising biometrics due to its high distinguishing ability,
security, and non-invasive procedure. The main idea of traditional schemes is to
directly extract features from finger vein images or patterns and then compare
the features to find the best match. However, the features extracted from images
contain much redundant data, while the features extracted from patterns are
greatly influenced by image segmentation methods. To tackle these problems, this
paper proposes a new finger vein recognition method based on generated codes. The
proposed method does not require an image segmentation algorithm, is simple to
compute, and produces a small amount of data. First, the finger vein images are
divided into blocks to calculate the mean values. Then, centrosymmetric coding is
performed using the generated eigenmatrix. The obtained codewords are
concatenated as the feature codewords of the image. The similarity between vein
codes is measured by the ratio of minimum Hamming distance to codeword length.
Extensive experiments on two public finger vein databases verify the
effectiveness of the proposed method. The results indicate that our method
outperforms the state-of-the-art methods and has competitive potential in
performing the matching task. | [
"cs.CV",
"cs.IT",
"math.IT",
"Computing methodologies for image processing",
"I.4"
] |
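A toy sketch of the block-mean coding and Hamming-ratio matching described above; the binarisation rule used here (compare to the global mean) stands in for the paper's centrosymmetric coding and is only illustrative.

```python
import numpy as np

def block_means(img, block=8):
    """Mean intensity of non-overlapping blocks of a grayscale vein image."""
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    patches = img[:h, :w].reshape(h // block, block, w // block, block)
    return patches.mean(axis=(1, 3))

def binary_code(means):
    """Illustrative binarisation: compare each block mean to the global mean."""
    return (means > means.mean()).astype(np.uint8).ravel()

def similarity(code_a, code_b):
    """Ratio of Hamming distance to codeword length (smaller means more similar)."""
    return np.count_nonzero(code_a != code_b) / code_a.size
```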
Deep learning is ubiquitous across many areas of computer vision. It
often requires large scale datasets for training before being fine-tuned on
small-to-medium scale problems. Activity, or, in other words, action
recognition, is one of many application areas of deep learning. While there
exist many Convolutional Neural Network architectures that work with the RGB
and optical flow frames, training on the time sequences of 3D body skeleton
joints is often performed via recurrent networks such as LSTM.
In this paper, we propose a new representation which encodes sequences of 3D
body skeleton joints in texture-like representations derived from
mathematically rigorous kernel methods. Such a representation becomes the first
layer of a standard CNN, e.g., ResNet-50, which is then used in the
supervised domain adaptation pipeline to transfer information from the source
to target dataset. This lets us leverage the available Kinect-based data beyond
training on a single dataset and outperform simple fine-tuning on any two
datasets combined in a naive manner. More specifically, in this paper we
utilize the overlapping classes between datasets. We associate datapoints of
the same class via so-called commonality, known from the supervised domain
adaptation. We demonstrate state-of-the-art results on three publicly available
benchmarks. | [
"cs.CV"
] |
Hierarchical image segmentation provides a region-oriented scale-space, i.e., a
set of image segmentations at different detail levels in which the
segmentations at finer levels are nested with respect to those at coarser
levels. Most image segmentation algorithms, such as region merging algorithms,
rely on a criterion for merging that does not lead to a hierarchy, and for
which the tuning of the parameters can be difficult. In this work, we propose a
hierarchical graph-based image segmentation relying on a criterion popularized
by Felzenszwalb and Huttenlocher. We illustrate with both real and synthetic
images, showing efficiency, ease of use, and robustness of our method. | [
"cs.CV"
] |
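For reference, the flat (non-hierarchical) Felzenszwalb-Huttenlocher criterion that the method above builds on is available in scikit-image; a minimal usage sketch follows, with placeholder parameter values.

```python
from skimage import data, segmentation

image = data.astronaut()
# scale controls the merging threshold, min_size removes tiny regions.
labels = segmentation.felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
print(labels.max() + 1, "regions")
```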
Recently, data-driven deep saliency models have achieved high performance and
have outperformed classical saliency models, as demonstrated by results on
datasets such as the MIT300 and SALICON. Yet, there remains a large gap between
the performance of these models and the inter-human baseline. Some outstanding
questions include what have these models learned, how and where they fail, and
how they can be improved. This article attempts to answer these questions by
analyzing the representations learned by individual neurons located at the
intermediate layers of deep saliency models. To this end, we follow the steps
of existing deep saliency models, that is, borrowing a pre-trained object
recognition model to encode the visual features and learning a decoder to
infer the saliency. We consider two cases, when the encoder is used as a fixed
feature extractor and when it is fine-tuned, and compare the inner
representations of the network. To study how the learned representations depend
on the task, we fine-tune the same network using the same image set but for two
different tasks: saliency prediction versus scene classification. Our analyses
reveal that: 1) some visual regions (e.g. head, text, symbol, vehicle) are
already encoded within various layers of the network pre-trained for object
recognition, 2) using modern datasets, we find that fine-tuning pre-trained
models for saliency prediction makes them favor some categories (e.g. head)
over others (e.g. text), 3) although deep models of saliency outperform
classical models on natural images, the converse is true for synthetic stimuli
(e.g. pop-out search arrays), evidence of a significant difference between
human and data-driven saliency models, and 4) we confirm that, after fine-tuning,
the change in inner representations is mostly due to the task and not
the domain shift in the data. | [
"cs.CV"
] |
Graph Neural Networks (GNNs) have been widely applied to various fields due
to their powerful representations of graph-structured data. Despite the success
of GNNs, most existing GNNs are designed to learn node representations on the
fixed and homogeneous graphs. The limitations especially become problematic
when learning representations on a misspecified graph or a heterogeneous graph
that consists of various types of nodes and edges. To address these limitations,
we propose Graph Transformer Networks (GTNs) that are capable of generating new
graph structures, which preclude noisy connections and include useful
connections (e.g., meta-paths) for tasks, while learning effective node
representations on the new graphs in an end-to-end fashion. We further propose an
enhanced version of GTNs, Fast Graph Transformer Networks (FastGTNs), that
improves the scalability of graph transformations. Compared to GTNs, FastGTNs are
230x faster and use 100x less memory while allowing the same graph
transformations as GTNs. In addition, we extend graph transformations to the
semantic proximity of nodes, allowing non-local operations beyond meta-paths.
Extensive experiments on both homogeneous graphs and heterogeneous graphs show
that GTNs and FastGTNs with non-local operations achieve the state-of-the-art
performance for node classification tasks. The code is available at:
https://github.com/seongjunyun/Graph_Transformer_Networks | [
"cs.LG",
"cs.AI",
"cs.SI"
] |
Fake face detection is a significant challenge for intelligent systems as
generative models become more powerful every single day. As the quality of fake
faces increases, trained models become increasingly ineffective at detecting
novel fake faces, since the corresponding training data is considered
outdated. In this case, robust one-shot learning methods are more compatible
with the requirements of changeable training data. In this paper, we propose a
universal one-shot GAN-generated fake face detection method which can be used
in significantly different areas of anomaly detection. The proposed method is
based on extracting out-of-context objects from faces via scene understanding
models. To do so, we use state-of-the-art scene understanding and object
detection methods as a pre-processing tool to detect anomalous objects in the
face. Second, we create a bag of words from all the out-of-context objects
detected across the training data. This way, we transform each image into a sparse
vector where each feature represents the confidence score of the corresponding
detected object in the image. Our experiments show that we can discriminate
fake faces from real ones in terms of out-of-context features: different sets
of objects are detected in fake faces compared to real ones when we analyze them
with scene understanding and object detection models. We show that the proposed
method outperforms previous methods in our experiments on StyleGAN-generated
fake faces. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
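The bag-of-words construction described above can be sketched as follows; the object detector is abstracted away, and the vocabulary is assumed to be the set of object classes observed during training. The example objects are hypothetical.

```python
import numpy as np

def bag_of_objects(detections_per_image, vocabulary):
    """Turn per-image (class_name, confidence) detections into sparse confidence vectors."""
    index = {name: i for i, name in enumerate(vocabulary)}
    features = np.zeros((len(detections_per_image), len(vocabulary)))
    for row, detections in enumerate(detections_per_image):
        for name, confidence in detections:
            col = index[name]
            features[row, col] = max(features[row, col], confidence)
    return features

# Hypothetical example: a cap and a phone detected on one (fake) face image.
vocab = ["cap", "phone", "glasses"]
print(bag_of_objects([[("cap", 0.9), ("phone", 0.4)]], vocab))
```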
In this work, we propose a new segmentation algorithm for images containing
convex objects present in multiple shapes with a high degree of overlap. The
proposed algorithm is carried out in two steps: first, we identify the visible
contours, segment them using concave points, and group the segments
belonging to the same object; the next step is to assign a shape identity to
these grouped contour segments. For images containing objects in multiple
shapes, we begin by identifying the shape classes of the contours, followed by
assigning a shape entity to these classes. We provide a comprehensive
experimentation of our algorithm on two crystal image datasets. One dataset
comprises images containing objects in multiple shapes overlapping each
other and the other dataset contains standard images with objects present in a
single shape. We test our algorithm against two baselines, with our proposed
algorithm outperforming both the baselines. | [
"cs.CV"
] |
We present a conceptually simple, flexible and general framework for
cross-dataset training in object detection. Given two or more already labeled
datasets that target different object classes, cross-dataset training aims
to detect the union of the different classes, so that we do not have to label
all the classes for all the datasets. By cross-dataset training, existing
datasets can be utilized to detect the merged object classes with a single
model. Furthermore, in industrial applications, the object classes usually
increase on demand. So when adding new classes, it is quite time-consuming if
we label the new classes on all the existing datasets. While using
cross-dataset training, we only need to label the new classes on the new
dataset. We experiment on PASCAL VOC, COCO, WIDER FACE and WIDER Pedestrian
with both solo and cross-dataset settings. Results show that our cross-dataset
pipeline achieves similarly impressive performance on all these datasets
simultaneously, compared with training on each independently. | [
"cs.CV"
] |
Skeleton sequences are lightweight and compact, thus are ideal candidates for
action recognition on edge devices. Recent skeleton-based action recognition
methods extract features from 3D joint coordinates as spatial-temporal cues,
using these representations in a graph neural network for feature fusion to
boost recognition performance. The use of first- and second-order features,
i.e., joint and bone representations, has led to high accuracy. Nonetheless,
many models are still confused by actions that have similar motion
trajectories. To address these issues, we propose fusing third-order features
in the form of angular encoding into modern architectures to robustly capture
the relationships between joints and body parts. This simple fusion with
popular spatial-temporal graph neural networks achieves new state-of-the-art
accuracy on two large benchmarks, NTU60 and NTU120, while employing
fewer parameters and reduced run time. Our source code is publicly available
at: https://github.com/ZhenyueQin/Angular-Skeleton-Encoding. | [
"cs.CV"
] |
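A minimal sketch of the third-order (angular) features described above: the angle at a joint formed by two bones, computed from 3D joint coordinates. The joint triples and coordinates below are placeholders.

```python
import numpy as np

def joint_angles(joints, triples):
    """Angle at joint b formed by bones (b->a) and (b->c), for each (a, b, c) triple."""
    angles = []
    for a, b, c in triples:
        u, v = joints[a] - joints[b], joints[c] - joints[b]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(angles)

# Example: angle at the elbow given shoulder (0), elbow (1) and wrist (2) coordinates.
skeleton = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.3, 0.3, 0.0]])
print(joint_angles(skeleton, [(0, 1, 2)]))   # roughly pi/2
```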
Novelty detection using deep generative models such as autoencoders and
generative adversarial networks mostly takes the image reconstruction error as
the novelty score function. However, image data, high-dimensional as it is,
contains many features other than class information, which makes it hard for
models to detect novel data. The problem gets harder in the multi-modal
normality case. To address this challenge, we propose a new way of measuring
the novelty score in multi-modal normality cases using an orthogonalized latent space.
Specifically, we employ orthogonal low-rank embedding in the latent space to
disentangle the features in the latent space using mutual class information.
With the orthogonalized latent space, the novelty score is defined by the change of
each latent vector. The proposed algorithm was compared with state-of-the-art
novelty detection algorithms using GANs, such as RaPP and OCGAN, and experimental
results show that ours outperforms those algorithms. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
As deep reinforcement learning (RL) is applied to more tasks, there is a need
to visualize and understand the behavior of learned agents. Saliency maps
explain agent behavior by highlighting the features of the input state that are
most relevant for the agent in taking an action. Existing perturbation-based
approaches to compute saliency often highlight regions of the input that are
not relevant to the action taken by the agent. Our proposed approach, SARFA
(Specific and Relevant Feature Attribution), generates more focused saliency
maps by balancing two aspects (specificity and relevance) that capture
different desiderata of saliency. The first captures the impact of perturbation
on the relative expected reward of the action to be explained. The second
downweighs irrelevant features that alter the relative expected rewards of
actions other than the action to be explained. We compare SARFA with existing
approaches on agents trained to play board games (Chess and Go) and Atari games
(Breakout, Pong and Space Invaders). We show through illustrative examples
(Chess, Atari, Go), human studies (Chess), and automated evaluation methods
(Chess) that SARFA generates saliency maps that are more interpretable for
humans than existing approaches. For the code release and demo videos, see
https://nikaashpuri.github.io/sarfa-saliency/. | [
"cs.CV",
"cs.AI"
] |
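A rough sketch of combining the two desiderata described above into a single saliency score for one perturbed state: specificity as the drop in the chosen action's probability, relevance as a penalty when the remaining actions' probabilities change. The softmax conversion of Q-values and the harmonic-mean combination follow the description, but the exact normalisation is an assumption.

```python
import numpy as np

def sarfa_score(q_original, q_perturbed, action):
    """Saliency of a perturbation: high only if it is both specific and relevant."""
    def softmax(q):
        e = np.exp(q - q.max())
        return e / e.sum()
    p, q = softmax(q_original), softmax(q_perturbed)
    specificity = max(p[action] - q[action], 0.0)            # impact on the explained action
    rest_p = np.delete(p, action) / max(1.0 - p[action], 1e-12)
    rest_q = np.delete(q, action) / max(1.0 - q[action], 1e-12)
    kl = np.sum(rest_p * np.log((rest_p + 1e-12) / (rest_q + 1e-12)))
    relevance = 1.0 / (1.0 + kl)                              # down-weights irrelevant changes
    total = specificity + relevance
    return 0.0 if total == 0 else 2 * specificity * relevance / total
```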
Medical time-series datasets have unique characteristics that make prediction
tasks challenging. Most notably, patient trajectories often contain
longitudinal variations in their input-output relationships, generally referred
to as temporal conditional shift. Designing sequence models capable of adapting
to such time-varying distributions remains a prevailing problem. To address
this we present Model-Attentive Ensemble learning for Sequence modeling (MAES).
MAES is a mixture of time-series experts which leverages an attention-based
gating mechanism to specialize the experts on different sequence dynamics and
adaptively weight their predictions. We demonstrate that MAES significantly
outperforms popular sequence models on datasets subject to temporal shift. | [
"cs.LG",
"cs.AI"
] |
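An illustrative attention-gated mixture of sequence experts in the spirit of the description above; the LSTM experts, gating on the final time step, and the scalar output are assumptions for the sketch, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentiveMixture(nn.Module):
    """Minimal mixture of sequence experts with an attention-style gate over experts."""
    def __init__(self, input_dim, hidden_dim, n_experts):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.LSTM(input_dim, hidden_dim, batch_first=True) for _ in range(n_experts)])
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, 1) for _ in range(n_experts)])
        self.gate = nn.Linear(input_dim, n_experts)

    def forward(self, x):                                  # x: (batch, time, input_dim)
        weights = torch.softmax(self.gate(x[:, -1]), dim=-1)              # (batch, n_experts)
        preds = torch.stack(
            [head(expert(x)[0][:, -1]) for expert, head in zip(self.experts, self.heads)],
            dim=1).squeeze(-1)                                             # (batch, n_experts)
        return (weights * preds).sum(dim=-1)               # adaptively weighted prediction
```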
In this paper, we propose MINE to perform novel view synthesis and depth
estimation via dense 3D reconstruction from a single image. Our approach is a
continuous-depth generalization of Multiplane Images (MPI), obtained by introducing
Neural Radiance Fields (NeRF). Given a single image as input, MINE predicts
a 4-channel image (RGB and volume density) at arbitrary depth values to jointly
reconstruct the camera frustum and fill in occluded contents. The reconstructed
and inpainted frustum can then be easily rendered into novel RGB or depth views
using differentiable rendering. Extensive experiments on RealEstate10K, KITTI
and Flowers Light Fields show that our MINE outperforms the state of the art by a
large margin in novel view synthesis. We also achieve competitive results in
depth estimation on iBims-1 and NYU-v2 without annotated depth supervision. Our
source code is available at https://github.com/vincentfung13/MINE | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
In real-world practice, medical images acquired in different phases possess
complementary information, e.g., radiologists often refer to both
arterial and venous scans in order to make the diagnosis. However, in medical
image analysis, fusing prediction from two phases is often difficult, because
(i) there is a domain gap between two phases, and (ii) the semantic labels are
not aligned pixel-wise, even for images scanned from the same patient. This
paper studies organ segmentation in two-phase CT scans. We propose Phase
Collaborative Network (PCN), an end-to-end framework that contains both
generative and discriminative modules. PCN can be mathematically explained to
formulate phase-to-phase and data-to-label relations jointly. Experiments are
performed on a two-phase CT dataset, on which PCN outperforms the baselines
working with one-phase data by a large margin, and we empirically verify that
the gain comes from inter-phase collaboration. Besides, PCN transfers well to
two public single-phase datasets, demonstrating its potential applications. | [
"cs.CV"
] |
Current speech enhancement techniques operate on the spectral domain and/or
exploit some higher-level feature. The majority of them tackle a limited number
of noise conditions and rely on first-order statistics. To circumvent these
issues, deep networks are being increasingly used, thanks to their ability to
learn complex functions from large example sets. In this work, we propose the
use of generative adversarial networks for speech enhancement. In contrast to
current techniques, we operate at the waveform level, training the model
end-to-end, and incorporate 28 speakers and 40 different noise conditions into
the same model, such that model parameters are shared across them. We evaluate
the proposed model using an independent, unseen test set with two speakers and
20 alternative noise conditions. The enhanced samples confirm the viability of
the proposed model, and both objective and subjective evaluations confirm its
effectiveness. With that, we open the exploration of generative
architectures for speech enhancement, which may progressively incorporate
further speech-centric design choices to improve their performance. | [
"cs.LG",
"cs.NE",
"cs.SD"
] |
We propose a novel tree-based ensemble method named Selective Cascade of
Residual ExtraTrees (SCORE). SCORE draws inspiration from representation
learning, incorporates regularized regression with variable selection features,
and utilizes boosting to improve prediction and reduce generalization errors.
We also develop a variable importance measure to increase the explainability of
SCORE. Our computer experiments show that SCORE provides predictive performance
comparable or superior to ExtraTrees, random forests, gradient boosting
machines, and neural networks; and the proposed variable importance measure for
SCORE is comparable to studied benchmark methods. Finally, the predictive
performance of SCORE remains stable across hyperparameter values, suggesting
potential robustness to hyperparameter specification. | [
"cs.LG",
"stat.ML"
] |
Deep generative models seek to recover the process with which the observed
data was generated. They may be used to synthesize new samples or to
subsequently extract representations. Successful approaches in the domain of
images are driven by several core inductive biases. However, a bias to account
for the compositional way in which humans structure a visual scene in terms of
objects has frequently been overlooked. In this work, we investigate object
compositionality as an inductive bias for Generative Adversarial Networks
(GANs). We present a minimal modification of a standard generator to
incorporate this inductive bias and find that it reliably learns to generate
images as compositions of objects. Using this general design as a backbone, we
then propose two useful extensions to incorporate dependencies among objects
and background. We extensively evaluate our approach on several multi-object
image datasets and highlight the merits of incorporating structure for
representation learning purposes. In particular, we find that our structured
GANs are better at generating multi-object images that are more faithful to the
reference distribution. Moreover, we demonstrate how, by leveraging the
structure of the learned generative process, one can `invert' the learned
generative model to perform unsupervised instance segmentation. On the
challenging CLEVR dataset, it is shown how our approach is able to improve over
other recent purely unsupervised object-centric approaches to image generation. | [
"cs.CV",
"cs.NE",
"I.2.6"
] |
We present a Fourier neural network (FNN) that can be mapped directly to the
Fourier decomposition. The choice of activation and loss function yields
results that replicate a Fourier series expansion closely while preserving a
straightforward architecture with a single hidden layer. The simplicity of this
network architecture facilitates the integration with any other
higher-complexity networks, at a data pre- or postprocessing stage. We validate
this FNN on naturally periodic smooth functions and on piecewise continuous
periodic functions. We showcase the use of this FNN for modeling or solving
partial differential equations with periodic boundary conditions. The main
advantages of the current approach are the validity of the solution outside the
training region, interpretability of the trained model, and simplicity of use. | [
"cs.LG",
"cs.NA",
"math.NA",
"stat.ML",
"35CXX, 68T20",
"I.2.0"
] |
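The correspondence with a truncated Fourier series noted above can be illustrated with a fixed layer of sine/cosine units and a least-squares fit of the output weights; the paper trains with a particular activation and loss, so this is only a sketch of the underlying idea.

```python
import numpy as np

def fourier_features(x, n_terms):
    """A single 'hidden layer' of fixed sine/cosine units, one pair per harmonic, plus a bias."""
    k = np.arange(1, n_terms + 1)
    return np.concatenate([np.cos(np.outer(x, k)), np.sin(np.outer(x, k)),
                           np.ones((len(x), 1))], axis=1)

# Fitting the output weights by least squares recovers a truncated Fourier series.
x = np.linspace(-np.pi, np.pi, 400)
y = np.sign(np.sin(x))                         # a piecewise-continuous periodic target
H = fourier_features(x, n_terms=25)
weights, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ weights                            # approximation of the square wave
```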
In this article, we focus on the analysis of the potential factors driving
the spread of influenza, and possible policies to mitigate the adverse effects
of the disease. To be precise, we first invoke discrete Fourier transform (DFT)
to establish a yearly periodic regional structure in influenza activity,
thus safely restricting ourselves to the analysis of yearly influenza
behavior. Then we collect a massive number of possible region-wise indicators
contributing to the influenza mortality, such as consumption, immunization,
sanitation, water quality, and other indicators from external data, with $1170$
dimensions in total. We extract significant features from the high dimensional
indicators using a combination of data analysis techniques, including matrix
completion, support vector machines (SVM), autoencoders, and principal
component analysis (PCA). Furthermore, we model the international flow of
migration and trade as a convolution on regional influenza activity, and solve
the deconvolution problem as higher-order perturbations to the linear
regression, thus separating regional and international factors related to the
influenza mortality. Finally, both the original model and the perturbed model
are tested on regional examples, as validations of our models. Pertaining to
the policy, we make a proposal based on the connectivity data along with the
previously extracted significant features to alleviate the impact of influenza,
as well as efficiently propagate and carry out the policies. We conclude that
environmental features and economic features are of significance to the
influenza mortality. The model can be easily adapted to model other types of
infectious diseases. | [
"cs.LG",
"stat.AP",
"stat.ML"
] |
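A small sketch of the DFT step described above, locating the dominant frequency of an influenza-activity series to check for yearly periodicity; the weekly sampling rate and the synthetic signal are assumptions for illustration.

```python
import numpy as np

def dominant_frequency(series, samples_per_year=52):
    """Strongest non-DC frequency (in cycles per year) found by the real DFT."""
    spectrum = np.abs(np.fft.rfft(series - np.mean(series)))
    freqs = np.fft.rfftfreq(len(series), d=1.0 / samples_per_year)  # cycles per year
    return freqs[np.argmax(spectrum[1:]) + 1]                       # skip the DC bin

# Ten years of synthetic weekly activity with a yearly cycle plus noise.
signal = np.sin(2 * np.pi * np.arange(520) / 52) + 0.1 * np.random.randn(520)
print(dominant_frequency(signal))   # close to 1.0 cycle per year
```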
The quantification of emotional states is an important step to understanding
wellbeing. Time series data from multiple modalities such as physiological and
motion sensor data have proven to be integral for measuring and quantifying
emotions. Monitoring emotional trajectories over long periods of time inherits
some critical limitations in relation to the size of the training data. This
shortcoming may hinder the development of reliable and accurate machine
learning models. To address this problem, this paper proposes a framework to
tackle the limitation in performing emotional state recognition on multiple
multimodal datasets: 1) encoding multivariate time series data into coloured
images; 2) leveraging pre-trained object recognition models to apply a Transfer
Learning (TL) approach using the images from step 1; 3) utilising a 1D
Convolutional Neural Network (CNN) to perform emotion classification from
physiological data; 4) concatenating the pre-trained TL model with the 1D CNN.
Furthermore, the possibility of performing TL to infer stress from
physiological data is explored by initially training a 1D CNN using a large
physical activity dataset and then applying the learned knowledge to the target
dataset. We demonstrate that model performance when inferring real-world
wellbeing rated on a 5-point Likert scale can be enhanced using our framework,
resulting in up to 98.5% accuracy, outperforming a conventional CNN by 4.5%.
Subject-independent models using the same approach resulted in an average of
72.3% accuracy (SD 0.038). The proposed CNN-TL-based methodology may overcome
problems with small training datasets, thus improving on the performance of
conventional deep learning methods. | [
"cs.CV",
"cs.LG"
] |
Directed networks appear in various areas, such as biology, sociology,
physiology and computer science. However, at present, most network analysis
ignores the direction. In this paper, we construct a spectral clustering method
based on the singular value decomposition of the adjacency matrix to detect
communities in the directed stochastic block model (DiSBM). By considering a
sparsity parameter, under some mild conditions, we show the proposed approach can
consistently recover hidden row and column communities for different scalings of
degrees.
By considering the degree heterogeneity of both row and column nodes, we
further establish a theoretical framework for the directed degree-corrected
stochastic block model (DiDCSBM). We show that the spectral clustering method
stably yields consistent community detection for row clusters and column
clusters under mild constraints on the degree heterogeneity. Our theoretical
results under DiSBM and DiDCSBM provide some innovations for special
directed networks, such as directed networks with balanced clusters, directed
networks whose nodes have similar degrees, and the directed Erd\"os-R\'enyi
graph. Furthermore, our theoretical results under DiDCSBM are consistent with
those under DiSBM when DiDCSBM degenerates to DiSBM. | [
"stat.ML",
"cs.IT",
"cs.LG",
"math.IT"
] |
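A bare-bones version of the SVD-based spectral clustering described above: row communities from the leading left singular vectors and column communities from the leading right ones. The normalisation and sparsity-adjustment steps analysed in the paper are omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

def directed_spectral_clustering(adjacency, k):
    """Cluster row and column nodes of a directed graph from the top-k singular vectors."""
    u, _, vt = np.linalg.svd(adjacency)
    row_labels = KMeans(n_clusters=k, n_init=10).fit_predict(u[:, :k])
    col_labels = KMeans(n_clusters=k, n_init=10).fit_predict(vt[:k, :].T)
    return row_labels, col_labels
```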
This paper studies a recent proposal to use randomized value functions to
drive exploration in reinforcement learning. These randomized value functions
are generated by injecting random noise into the training data, making the
approach compatible with many popular methods for estimating parameterized
value functions. By providing a worst-case regret bound for tabular
finite-horizon Markov decision processes, we show that planning with respect to
these randomized value functions can induce provably efficient exploration. | [
"cs.LG",
"cs.AI",
"cs.SY",
"stat.ML"
] |
Generating font glyphs of consistent style from one or a few reference
glyphs, i.e., font completion, is an important task in typographical design. As
the problem is more well-defined than general image style transfer tasks, it has
received interest from both the vision and machine learning communities.
Existing approaches address this problem as a direct image-to-image translation
task. In this work, we explore the generation of font glyphs as 2D
graphic objects with the graph as an intermediate representation, so that more
intrinsic graphic properties of font styles can be captured. Specifically, we
formulate a cross-modality cycled image-to-image model structure with a graph
constructor between an image encoder and an image renderer. The novel graph
constructor maps a glyph's latent code to its graph representation that matches
expert knowledge, which is trained to help the translation task. Our model
generates better results than both the image-to-image baseline and previous
state-of-the-art methods for glyph completion. Furthermore, the graph
representation output by our model also provides an intuitive interface for
users to do local editing and manipulation. Our proposed cross-modality cycled
representation learning has the potential to be applied to other domains with
prior knowledge from different data modalities. Our code is available at
https://github.com/VITA-Group/Font_Completion_Graph. | [
"cs.CV"
] |
This work presents a method to predict a geometric surface image from a
photograph to assist in image recognition. To recognize objects, several images
from different conditions are required for training a model or fine-tuning a
pre-trained model. In this work, a geometric surface image is introduced as a
better representation than its color image counterpart to overcome lighting
conditions. The surface image is predicted from a color image. To do so, the
geometric surface image together with its color photographs is first used to train
a Generative Adversarial Network (GAN) model. The trained generator model
is then used to predict the geometric surface image from the input color image.
The evaluation on a case study of an amulet recognition shows that the
predicted geometric surface images contain less ambiguity than their color
images counterpart under different lighting conditions and can be used
effectively for assisting in the image recognition task. | [
"cs.CV",
"cs.GR"
] |
This paper aims at high-accuracy 3D object detection in autonomous driving
scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework
that takes both LIDAR point cloud and RGB images as input and predicts oriented
3D bounding boxes. We encode the sparse 3D point cloud with a compact
multi-view representation. The network is composed of two subnetworks: one for
3D object proposal generation and another for multi-view feature fusion. The
proposal network generates 3D candidate boxes efficiently from the bird's eye
view representation of 3D point cloud. We design a deep fusion scheme to
combine region-wise features from multiple views and enable interactions
between intermediate layers of different paths. Experiments on the challenging
KITTI benchmark show that our approach outperforms the state-of-the-art by
around 25% and 30% AP on the tasks of 3D localization and 3D detection. In
addition, for 2D detection, our approach obtains 10.3% higher AP than the
state-of-the-art on the hard data among the LIDAR-based methods. | [
"cs.CV"
] |
Recently, contrastive learning (CL) has emerged as a successful method for
unsupervised graph representation learning. Most graph CL methods first perform
stochastic augmentation on the input graph to obtain two graph views and
maximize the agreement of representations in the two views. Despite the
prosperous development of graph CL methods, the design of graph augmentation
schemes -- a crucial component in CL -- remains rarely explored. We argue that
the data augmentation schemes should preserve intrinsic structures and
attributes of graphs, which will force the model to learn representations that
are insensitive to perturbation on unimportant nodes and edges. However, most
existing methods adopt uniform data augmentation schemes, like uniformly
dropping edges and uniformly shuffling features, leading to suboptimal
performance. In this paper, we propose a novel graph contrastive representation
learning method with adaptive augmentation that incorporates various priors for
topological and semantic aspects of the graph. Specifically, on the topology
level, we design augmentation schemes based on node centrality measures to
highlight important connective structures. On the node attribute level, we
corrupt node features by adding more noise to unimportant node features, to
force the model to recognize underlying semantic information. We perform
extensive experiments of node classification on a variety of real-world
datasets. Experimental results demonstrate that our proposed method
consistently outperforms existing state-of-the-art baselines and even surpasses
some supervised counterparts, which validates the effectiveness of the proposed
contrastive framework with adaptive augmentation. | [
"cs.LG"
] |
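A simplified sketch of the topology-level augmentation described above, dropping edges with probability inversely related to a centrality score; mean endpoint degree is used here as a cheap centrality proxy, and the normalisation is an assumption rather than the paper's exact scheme.

```python
import numpy as np

def adaptive_edge_drop(edges, degrees, p_max=0.7, seed=0):
    """Keep important (high-centrality) edges with higher probability."""
    rng = np.random.default_rng(seed)
    centrality = np.array([(degrees[u] + degrees[v]) / 2.0 for u, v in edges])
    weight = np.log(centrality + 1.0)
    drop_prob = p_max * (weight.max() - weight) / (weight.max() - weight.min() + 1e-8)
    keep = rng.random(len(edges)) > drop_prob
    return [edge for edge, kept in zip(edges, keep) if kept]

# Tiny example: node 0 is a hub, so its incident edges are more likely to survive.
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
degrees = {0: 3, 1: 1, 2: 1, 3: 2, 4: 1}
print(adaptive_edge_drop(edges, degrees))
```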
The Pommerman simulation was recently developed to mimic the classic Japanese
game Bomberman, and focuses on competitive gameplay in a multi-agent setting.
We focus on the 2$\times$2 team version of Pommerman, developed for a
competition at NeurIPS 2018. Our methodology involves training an agent
initially through imitation learning on a noisy expert policy, followed by a
proximal-policy optimization (PPO) reinforcement learning algorithm. The basic
PPO approach is modified for stable transition from the imitation learning
phase through reward shaping, action filters based on heuristics, and
curriculum learning. The proposed methodology is able to beat heuristic and
pure reinforcement learning baselines with a combined 100,000 training games,
significantly faster than other non-tree-search methods in literature. We
present results against multiple agents provided by the developers of the
simulation, including some that we have enhanced. We include a sensitivity
analysis over different parameters, and highlight undesirable effects of some
strategies that initially appear promising. Since Pommerman is a complex
multi-agent competitive environment, the strategies developed here provide
insights into several real-world problems with characteristics such as partial
observability, decentralized execution (without communication), and very sparse
and delayed rewards. | [
"cs.LG",
"stat.ML"
] |
Many seemingly unrelated computer vision tasks can be viewed as a special
case of image decomposition into separate layers. For example, image
segmentation (separation into foreground and background layers); transparent
layer separation (into reflection and transmission layers); Image dehazing
(separation into a clear image and a haze map), and more. In this paper we
propose a unified framework for unsupervised layer decomposition of a single
image, based on coupled "Deep-image-Prior" (DIP) networks. It was shown
[Ulyanov et al] that the structure of a single DIP generator network is
sufficient to capture the low-level statistics of a single image. We show that
coupling multiple such DIPs provides a powerful tool for decomposing images
into their basic components, for a wide variety of applications. This
capability stems from the fact that the internal statistics of a mixture of
layers is more complex than the statistics of each of its individual
components. We show the power of this approach for Image-Dehazing, Fg/Bg
Segmentation, Watermark-Removal, Transparency Separation in images and video,
and more. These capabilities are achieved in a totally unsupervised way, with
no training examples other than the input image/video itself. | [
"cs.CV",
"cs.LG"
] |
Finding an anomalous subsequence in a long time series is a very important but
difficult problem. Existing state-of-the-art methods have been focusing on
searching for the subsequence that is the most dissimilar to the rest of the
subsequences; however, they do not take into account the background patterns
that contain the anomalous candidates. As a result, such approaches are likely
to miss local anomalies. We introduce a new definition named \textit{semantic
discord}, which incorporates the context information from larger subsequences
containing the anomaly candidates. We propose an efficient algorithm with a
derived lower bound that is up to 3 orders of magnitude faster than the brute
force algorithm in real world data. We demonstrate that our method
significantly outperforms the state-of-the-art methods in locating anomalies by
extensive experiments. We further explain the interpretability of semantic
discord. | [
"cs.LG",
"stat.ML"
] |
In recent years, many explanation methods have been proposed to explain
individual classifications of deep neural networks. However, how to leverage
the created explanations to improve the learning process has been less
explored. As the privileged information, the explanations of a model can be
used to guide the learning process of the model itself. In the community,
another intensively investigated privileged information used to guide the
training of a model is the knowledge from a powerful teacher model. The goal of
this work is to leverage the self-explanation to improve the learning process
by borrowing ideas from knowledge distillation. We start by investigating the
effective components of the knowledge transferred from the teacher network to
the student network. Our investigation reveals that both the responses in
non-ground-truth classes and class-similarity information in teacher's outputs
contribute to the success of the knowledge distillation. Motivated by the
conclusion, we propose an implementation of introspective learning by
distilling knowledge from online self-explanations. The models trained with the
introspective learning procedure outperform the ones trained with the standard
learning procedure, as well as the ones trained with different regularization
methods. When compared to the models learned from peer networks or teacher
networks, our models also show competitive performance and require neither
peers nor teachers. | [
"cs.CV",
"cs.LG"
] |
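The distillation component borrowed from knowledge distillation, as described above, can be sketched as the usual soft-target loss; here the "teacher" logits would come from the model's own online self-explanations, and the temperature and weighting are placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy on hard labels plus KL divergence to temperature-softened targets."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # scale to keep gradients comparable
    return alpha * hard + (1.0 - alpha) * soft
```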
We present an approach for encoding visual task relationships to improve
model performance in an Unsupervised Domain Adaptation (UDA) setting. Semantic
segmentation and monocular depth estimation are shown to be complementary
tasks; in a multi-task learning setting, a proper encoding of their
relationships can further improve performance on both tasks. Motivated by this
observation, we propose a novel Cross-Task Relation Layer (CTRL), which encodes
task dependencies between the semantic and depth predictions. To capture the
cross-task relationships, we propose a neural network architecture that
contains task-specific and cross-task refinement heads. Furthermore, we propose
an Iterative Self-Learning (ISL) training scheme, which exploits semantic
pseudo-labels to provide extra supervision on the target domain. We
experimentally observe improvements in both tasks' performance because the
complementary information present in these tasks is better captured.
Specifically, we show that: (1) our approach improves performance on all tasks
when they are complementary and mutually dependent; (2) the CTRL helps to
improve both semantic segmentation and depth estimation tasks performance in
the challenging UDA setting; (3) the proposed ISL training scheme further
improves the semantic segmentation performance. The implementation is available
at https://github.com/susaha/ctrl-uda. | [
"cs.CV",
"cs.LG"
] |
The objective of this work is to annotate sign instances across a broad
vocabulary in continuous sign language. We train a Transformer model to ingest
a continuous signing stream and output a sequence of written tokens on a
large-scale collection of signing footage with weakly-aligned subtitles. We
show that through this training it acquires the ability to attend to a large
vocabulary of sign instances in the input sequence, enabling their
localisation. Our contributions are as follows: (1) we demonstrate the ability
to leverage large quantities of continuous signing videos with weakly-aligned
subtitles to localise signs in continuous sign language; (2) we employ the
learned attention to automatically generate hundreds of thousands of
annotations for a large sign vocabulary; (3) we collect a set of 37K manually
verified sign instances across a vocabulary of 950 sign classes to support our
study of sign language recognition; (4) by training on the newly annotated data
from our method, we outperform the prior state of the art on the BSL-1K sign
language recognition benchmark. | [
"cs.CV"
] |
HyperGraph Convolutional Neural Networks (HGCNNs) have demonstrated their
potential in modeling high-order relations preserved in graph structured data.
However, most existing convolution filters are localized and determined by the
pre-defined initial hypergraph topology, neglecting to explore implicit and
long-range relations in real-world data. In this paper, we propose the first
learning-based method tailored for constructing adaptive hypergraph structure,
termed HypERgrAph Laplacian aDaptor (HERALD), which serves as a generic
plug-and-play module for improving the representational power of HGCNNs.
Specifically, HERALD adaptively optimizes the adjacency relationship between
hypernodes and hyperedges in an end-to-end manner and thus the task-aware
hypergraph is learned. Furthermore, HERALD employs the self-attention mechanism
to capture non-local relations between node pairs. Extensive experiments on
various popular hypergraph datasets for node classification and graph
classification tasks demonstrate that our approach obtains consistent and
considerable performance enhancement, proving its effectiveness and
generalization ability. | [
"cs.LG"
] |
We formulate a general framework for competitive gradient-based learning that
encompasses a wide breadth of multi-agent learning algorithms, and analyze the
limiting behavior of competitive gradient-based learning algorithms using
dynamical systems theory. For both general-sum and potential games, we
characterize a non-negligible subset of the local Nash equilibria that will be
avoided if each agent employs a gradient-based learning algorithm. We also shed
light on the issue of convergence to non-Nash strategies in general- and
zero-sum games, which may have no relevance to the underlying game, and arise
solely due to the choice of algorithm. The existence and frequency of such
strategies may explain some of the difficulties encountered when using gradient
descent in zero-sum games as, e.g., in the training of generative adversarial
networks. To reinforce the theoretical contributions, we provide empirical
results that highlight the frequency of linear quadratic dynamic games (a
benchmark for multi-agent reinforcement learning) that admit global Nash
equilibria that are almost surely avoided by policy gradient. | [
"cs.LG",
"stat.ML"
] |
Incentive salience attribution can be understood as a psychobiological
process ascribing relevance to potentially rewarding objects and actions.
Despite being an important component of the motivational process guiding our
everyday behaviour, its study in naturalistic contexts is not straightforward.
Here we propose a methodology based on artificial neural networks (ANNs) for
approximating latent states produced by this process in situations where large
volumes of behavioural data are available but no strict experimental control is
possible. Leveraging knowledge derived from theoretical and computational
accounts of incentive salience attribution we designed an ANN for estimating
duration and intensity of future interactions between individuals and a series
of video games in a large-scale ($N> 3 \times 10^6$) longitudinal dataset.
Through model comparison and inspection we show that our approach outperforms
competing ones while also generating a representation that well approximates
some of the functions of attributed incentive salience. We discuss our findings
with reference to the adopted theoretical and computational frameworks and
suggest how our methodology could be an initial step for estimating attributed
incentive salience in large scale behavioural studies. | [
"cs.LG",
"stat.ML"
] |
We propose Deep Hierarchical Machine (DHM), a model inspired by the
divide-and-conquer strategy while emphasizing representation learning ability
and flexibility. A stochastic routing framework as used by recent deep neural
decision/regression forests is incorporated, but we remove the need to evaluate
unnecessary computation paths by utilizing a different topology and introducing
a probabilistic pruning technique. We also show a specified version of DHM
(DSHM) for efficiency, which inherits the sparse feature extraction process as
in traditional decision trees with pixel-difference features. To achieve sparse
feature extraction, we propose to utilize sparse convolution operation in DSHM
and show one possibility of introducing sparse convolution kernels by using
local binary convolution layer. DHM can be applied to both classification and
regression problems, and we validate it on standard image classification and
face alignment tasks to show its advantages over past architectures. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
We propose an approach for learning optimal tree-based prescription policies
directly from data, combining methods for counterfactual estimation from the
causal inference literature with recent advances in training globally-optimal
decision trees. The resulting method, Optimal Policy Trees, yields
interpretable prescription policies, is highly scalable, and handles both
discrete and continuous treatments. We conduct extensive experiments on both
synthetic and real-world datasets and demonstrate that these trees offer
best-in-class performance across a wide variety of problems. | [
"cs.LG"
] |
The availability of large amounts of time series data, paired with the
performance of deep-learning algorithms on a broad class of problems, has
recently led to significant interest in the use of sequence-to-sequence models
for time series forecasting. We provide the first theoretical analysis of this
time series forecasting framework. We include a comparison of
sequence-to-sequence modeling to classical time series models, and as such our
theory can serve as a quantitative guide for practitioners choosing between
different modeling methodologies. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Skeleton-based action recognition has attracted increasing attention due to
its strong adaptability to dynamic circumstances and potential for broad
applications such as autonomous and anonymous surveillance. With the help of
deep learning techniques, it has also witnessed substantial progress and
currently achieved around 90\% accuracy in benign environments. On the other
hand, research on the vulnerability of skeleton-based action recognition under
different adversarial settings remains scant, which may raise security concerns
about deploying such techniques into real-world systems. However, filling this
research gap is challenging due to the unique physical constraints of skeletons
and human actions. In this paper, we attempt to conduct a thorough study
towards understanding the adversarial vulnerability of skeleton-based action
recognition. We first formulate the generation of adversarial skeleton actions as a
constrained optimization problem by representing or approximating the
physiological and physical constraints with mathematical formulations. Since
the primal optimization problem with equality constraints is intractable, we
propose to solve it by optimizing its unconstrained dual problem using ADMM. We
then specify an efficient plug-in defense, inspired by recent theories and
empirical observations, against the adversarial skeleton actions. Extensive
evaluations demonstrate the effectiveness of the attack and defense method
under different settings. | [
"cs.CV",
"cs.LG"
] |
Digitization techniques for biomedical images yield different visual patterns
in radiological exams. These differences may hamper the use of data-driven
approaches for inference over these images, such as Deep Neural Networks.
Another noticeable difficulty in this field is the lack of labeled data, even
though in many cases there is an abundance of unlabeled data available.
Therefore an important step in improving the generalization capabilities of
these methods is to perform Unsupervised and Semi-Supervised Domain Adaptation
between different datasets of biomedical images. In order to tackle this
problem, in this work we propose an Unsupervised and Semi-Supervised Domain
Adaptation method for segmentation of biomedical images using Generative
Adversarial Networks for Unsupervised Image Translation. We merge these
unsupervised networks with supervised deep semantic segmentation architectures
in order to create a semi-supervised method capable of learning from both
unlabeled and labeled data, whenever labeling is available. We compare our
method using several domains, datasets, segmentation tasks and traditional
baselines, such as unsupervised distance-based methods and reusing pretrained
models both with and without Fine-tuning. We perform both quantitative and
qualitative analysis of the proposed method and baselines in the distinct
scenarios considered in our experimental evaluation. The proposed method shows
consistently better results than the baselines in scarce labeled data
scenarios, achieving Jaccard values greater than 0.9 and good segmentation
quality in most tasks. Unsupervised Domain Adaptation results were observed to
be close to the Fully Supervised Domain Adaptation used in the traditional
procedure of Fine-tuning pretrained networks. | [
"cs.CV"
] |
Semantic segmentation aims to robustly predict coherent class labels for
entire regions of an image. It is a scene understanding task that powers
real-world applications (e.g., autonomous navigation). One important
application, the use of imagery for automated semantic understanding of
pedestrian environments, provides remote mapping of accessibility features in
street environments. This application (and others like it) requires detailed
geometric information of geographical objects. Semantic segmentation is a
prerequisite for this task since it maps contiguous regions of the same class
as single entities. Importantly, applications of semantic segmentation such as ours
are not concerned with pixel-wise outcomes; however, most quantitative evaluation
metrics (e.g., mean Intersection over Union) are based on pixel-wise similarity to a
ground truth, which fails to emphasize the over- and under-segmentation properties
of a segmentation model. Here, we introduce a new metric to assess region-based
over- and under-segmentation. We analyze and compare it to other metrics,
demonstrating that the use of our metric lends greater explainability to
semantic segmentation model performance in real-world applications. | [
"cs.CV"
] |
Knowledge tracing, the act of modeling a student's knowledge through learning
activities, is an extensively studied problem in the field of computer-aided
education. Although models with attention mechanism have outperformed
traditional approaches such as Bayesian knowledge tracing and collaborative
filtering, they share two limitations. Firstly, the models rely on shallow
attention layers and fail to capture complex relations among exercises and
responses over time. Secondly, different combinations of queries, keys and
values for the self-attention layer for knowledge tracing were not extensively
explored. The usual practice of using exercises and interactions (exercise-response
pairs) as queries and keys/values, respectively, lacks empirical support. In this
paper, we propose a novel Transformer based model for knowledge tracing, SAINT:
Separated Self-AttentIve Neural Knowledge Tracing. SAINT has an encoder-decoder
structure where the exercise and response embedding sequences separately enter the
encoder and the decoder, respectively, allowing attention layers to be stacked
multiple times. To the best of our knowledge, this is the first work to suggest
an encoder-decoder model for knowledge tracing that applies deep self-attentive
layers to exercises and responses separately. The empirical evaluations on a
large-scale knowledge tracing dataset show that SAINT achieves the
state-of-the-art performance in knowledge tracing with the improvement of AUC
by 1.8% compared to the current state-of-the-art models. | [
"cs.LG",
"cs.AI",
"cs.CY"
] |
Individuality is essential in human society: it induces the division of labor and
thus improves efficiency and productivity. Similarly, it should also be key to
multi-agent cooperation. Inspired by the notion that individuality means being an
individual separate from others, we propose a simple yet efficient
method for the emergence of individuality (EOI) in multi-agent reinforcement
learning (MARL). EOI learns a probabilistic classifier that predicts a
probability distribution over agents given their observation and gives each
agent an intrinsic reward of being correctly predicted by the classifier. The
intrinsic reward encourages the agents to visit their own familiar
observations, and learning the classifier by such observations makes the
intrinsic reward signals stronger and the agents more identifiable. To further
enhance the intrinsic reward and promote the emergence of individuality, two
regularizers are proposed to increase the discriminability of the classifier.
We implement EOI on top of popular MARL algorithms. Empirically, we show that
EOI significantly outperforms existing methods in a variety of multi-agent
cooperative scenarios. | [
"cs.LG",
"cs.AI",
"cs.MA",
"stat.ML"
] |
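A minimal sketch of the classifier-based intrinsic reward idea described above: a small network predicts which agent an observation belongs to, and each agent is rewarded for being correctly identified. The network size and the way the reward is mixed with the environment reward are illustrative assumptions.

```python
# Hedged sketch: intrinsic reward p(agent_id | observation) from a learned classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AgentClassifier(nn.Module):
    def __init__(self, obs_dim, n_agents, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_agents))

    def forward(self, obs):
        return self.net(obs)  # logits over agent identities

def intrinsic_reward(classifier, obs, agent_id):
    # probability that the observation is (correctly) attributed to this agent
    probs = F.softmax(classifier(obs), dim=-1)
    return probs[..., agent_id]

# The total reward would mix extrinsic and intrinsic terms, e.g.
# r = r_env + beta * intrinsic_reward(classifier, obs, agent_id)  # beta is an assumption
```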
Recent works find that AI algorithms learn biases from data. Therefore, it is
urgent and vital to identify biases in AI algorithms. However, the previous
bias identification pipeline overly relies on human experts to conjecture
potential biases (e.g., gender), which may neglect other underlying biases not
realized by humans. To help human experts better find the AI algorithms'
biases, we study a new problem in this work -- for a classifier that predicts a
target attribute of the input image, discover its unknown biased attribute.
To solve this challenging problem, we use a hyperplane in the generative
model's latent space to represent an image attribute; thus, the original
problem is transformed to optimizing the hyperplane's normal vector and offset.
We propose a novel total-variation loss within this framework as the objective
function and a new orthogonalization penalty as a constraint. The latter
prevents trivial solutions in which the discovered biased attribute is
identical with the target or one of the known-biased attributes. Extensive
experiments on both disentanglement datasets and real-world datasets show that
our method can discover biased attributes and achieve better disentanglement
w.r.t. target attributes. Furthermore, the qualitative results show that our
method can discover unnoticeable biased attributes for various object and scene
classifiers, proving our method's generalizability for detecting biased
attributes in diverse domains of images. The code is available at
https://git.io/J3kMh. | [
"cs.CV"
] |
Methods for supervised principal component analysis (SPCA) aim to incorporate
label information into principal component analysis (PCA), so that the
extracted features are more useful for a prediction task of interest. Prior
work on SPCA has focused primarily on optimizing prediction error, and has
neglected the value of maximizing variance explained by the extracted features.
We propose a new method for SPCA that addresses both of these objectives
jointly, and demonstrate empirically that our approach dominates existing
approaches, i.e., outperforms them with respect to both prediction error and
variation explained. Our approach accommodates arbitrary supervised learning
losses and, through a statistical reformulation, provides a novel low-rank
extension of generalized linear models. | [
"stat.ML",
"cs.LG"
] |
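A minimal sketch of a joint SPCA-style objective that trades off prediction error against variance explained, in the spirit of the abstract above; the squared-error loss, the trade-off weight, and the treatment of the projection matrix are illustrative assumptions rather than the paper's actual formulation.

```python
# Hedged sketch: a combined supervised-PCA objective (lower is better).
import numpy as np

def spca_objective(W, beta, X, y, lam=0.5):
    """W: (p, k) projection (assumed orthonormal), beta: (k,) regression weights,
    X: (n, p) centered data, y: (n,) targets, lam: trade-off weight (assumption)."""
    Z = X @ W                                    # extracted features
    explained = np.sum(Z ** 2) / np.sum(X ** 2)  # fraction of total variance captured
    pred_loss = np.mean((y - Z @ beta) ** 2)     # supervised loss (squared error here)
    return (1 - lam) * pred_loss - lam * explained
```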
Reinforcement learning optimizes policies for expected cumulative reward.
Need the supervision be so narrow? Reward is delayed and sparse for many tasks,
making it a difficult and impoverished signal for end-to-end optimization. To
augment reward, we consider a range of self-supervised tasks that incorporate
states, actions, and successors to provide auxiliary losses. These losses offer
ubiquitous and instantaneous supervision for representation learning even in
the absence of reward. While current results show that learning from reward
alone is feasible, pure reinforcement learning methods are constrained by
computational and data efficiency issues that can be remedied by auxiliary
losses. Self-supervised pre-training and joint optimization improve the data
efficiency and policy returns of end-to-end reinforcement learning. | [
"cs.LG"
] |
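A minimal sketch of how self-supervised auxiliary losses over states, actions, and successors can be added to a policy loss, as described above; the specific heads (forward dynamics and reward prediction) and their weighting are illustrative assumptions.

```python
# Hedged sketch: auxiliary self-supervised heads on top of a shared state encoding phi(s).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxHeads(nn.Module):
    def __init__(self, feat_dim, act_dim):
        super().__init__()
        self.dynamics = nn.Linear(feat_dim + act_dim, feat_dim)  # predict successor features
        self.reward = nn.Linear(feat_dim + act_dim, 1)           # predict immediate reward

    def aux_loss(self, phi_s, a, phi_next, r):
        x = torch.cat([phi_s, a], dim=-1)
        dyn_loss = F.mse_loss(self.dynamics(x), phi_next.detach())
        rew_loss = F.mse_loss(self.reward(x).squeeze(-1), r)
        return dyn_loss + rew_loss

# total_loss = policy_loss + aux_weight * heads.aux_loss(phi_s, a, phi_next, r)
```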
Few-shot learning, especially few-shot image classification, has received
increasing attention and witnessed significant advances in recent years. Some
recent studies implicitly show that many generic techniques or ``tricks'', such
as data augmentation, pre-training, knowledge distillation, and
self-supervision, may greatly boost the performance of a few-shot learning
method. Moreover, different works may employ different software platforms,
different training schedules, different backbone architectures and even
different input image sizes, making fair comparisons difficult and
practitioners struggle with reproducibility. To address these situations, we
propose a comprehensive library for few-shot learning (LibFewShot) by
re-implementing seventeen state-of-the-art few-shot learning methods in a
unified framework with the same single codebase in PyTorch. Furthermore, based
on LibFewShot, we provide comprehensive evaluations on multiple benchmark
datasets with multiple backbone architectures to evaluate common pitfalls and
effects of different training tricks. In addition, given the recent doubts on
the necessity of the meta- or episodic-training mechanism, our evaluation results
show that such a mechanism is still necessary, especially when combined
with pre-training. We hope our work can not only lower the barriers for
beginners to work on few-shot learning but also remove the effects of the
nontrivial tricks to facilitate intrinsic research on few-shot learning. The
source code is available from https://github.com/RL-VIG/LibFewShot. | [
"cs.CV"
] |
This paper is based on a machine learning project at the Norwegian University
of Science and Technology, fall 2020. The project was initiated with a
literature review on the latest developments within time-series forecasting
methods in the scientific community over the past five years. The paper
summarizes the essential aspects of this research. Furthermore, in this paper,
we introduce an LSTM cell's architecture, and explain how different components
go together to alter the cell's memory and predict the output. Also, the paper
provides the necessary formulas and foundations to calculate a forward
iteration through an LSTM. Then, the paper refers to some practical
applications and research that emphasize the strength and weaknesses of LSTMs,
shown within the time-series domain and the natural language processing (NLP)
domain. Finally, alternative statistical methods for time series predictions
are highlighted, where the paper outlines ARIMA and exponential smoothing.
Nevertheless, as LSTMs can be viewed as a complex architecture, the paper
assumes that the reader has some knowledge of essential machine learning
aspects, such as the multi-layer perceptron, activation functions,
backpropagation, bias, over- and underfitting, and more. | [
"cs.LG",
"cs.AI"
] |
Although few-shot learning and one-class classification (OCC), i.e., learning
a binary classifier with data from only one class, have been separately well
studied, their intersection remains rather unexplored. Our work addresses the
few-shot OCC problem and presents a method to modify the episodic data sampling
strategy of the model-agnostic meta-learning (MAML) algorithm to learn a model
initialization particularly suited for learning few-shot OCC tasks. This is
done by explicitly optimizing for an initialization which only requires few
gradient steps with one-class minibatches to yield a performance increase on
class-balanced test data. We provide a theoretical analysis that explains why
our approach works in the few-shot OCC scenario, while other meta-learning
algorithms fail, including the unmodified MAML. Our experiments on eight
datasets from the image and time-series domains show that our method leads to
better results than classical OCC and few-shot classification approaches, and
demonstrate the ability to learn unseen tasks from only few normal class
samples. Moreover, we successfully train anomaly detectors for a real-world
application on sensor readings recorded during industrial manufacturing of
workpieces with a CNC milling machine, by using few normal examples. Finally,
we empirically demonstrate that the proposed data sampling technique increases
the performance of more recent meta-learning algorithms in few-shot OCC and
yields state-of-the-art results in this problem setting. | [
"cs.LG",
"stat.ML"
] |
We propose a transductive Laplacian-regularized inference for few-shot tasks.
Given any feature embedding learned from the base classes, we minimize a
quadratic binary-assignment function containing two terms: (1) a unary term
assigning query samples to the nearest class prototype, and (2) a pairwise
Laplacian term encouraging nearby query samples to have consistent label
assignments. Our transductive inference does not re-train the base model, and
can be viewed as a graph clustering of the query set, subject to supervision
constraints from the support set. We derive a computationally efficient bound
optimizer of a relaxation of our function, which computes independent
(parallel) updates for each query sample, while guaranteeing convergence.
Following a simple cross-entropy training on the base classes, and without
complex meta-learning strategies, we conducted comprehensive experiments over
five few-shot learning benchmarks. Our LaplacianShot consistently outperforms
state-of-the-art methods by significant margins across different models,
settings, and data sets. Furthermore, our transductive inference is very fast,
with computational times that are close to inductive inference, and can be used
for large-scale few-shot tasks. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
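A minimal sketch of the transductive objective described above: a unary cost to class prototypes plus a pairwise Laplacian term over a kNN graph of the query set. The simple iterative soft-assignment update shown is an illustration of the idea, not the paper's exact bound optimizer; k, the regularization weight, and the iteration count are assumptions.

```python
# Hedged sketch: Laplacian-regularized transductive assignment of query samples.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def laplacian_shot(query_feats, prototypes, k=3, lam=0.7, n_iter=20):
    Q = len(query_feats)
    unary = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (Q, C)
    # kNN affinity graph over the query set
    d = ((query_feats[:, None, :] - query_feats[None, :, :]) ** 2).sum(-1)
    W = np.zeros((Q, Q))
    nn_idx = np.argsort(d, axis=1)[:, 1:k + 1]
    for i in range(Q):
        W[i, nn_idx[i]] = 1.0
    y = softmax(-unary)                     # initial soft assignments
    for _ in range(n_iter):                 # parallel updates over all query samples
        y = softmax(-unary + lam * (W @ y))
    return y.argmax(axis=1)
```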
Weakly supervised image segmentation trained with image-level labels usually
suffers from inaccurate coverage of object areas during the generation of the
pseudo groundtruth. This is because the object activation maps are trained with
the classification objective and lack the ability to generalize. To improve the
generality of the object activation maps, we propose a region prototypical
network RPNet to explore the cross-image object diversity of the training set.
Similar object parts across images are identified via region feature
comparison. Object confidence is propagated between regions to discover new
object areas while background regions are suppressed. Experiments show that the
proposed method generates more complete and accurate pseudo object masks, while
achieving state-of-the-art performance on PASCAL VOC 2012 and MS COCO. In
addition, we investigate the robustness of the proposed method on reduced
training sets. | [
"cs.CV"
] |
Unsupervised meta-learning approaches rely on synthetic meta-tasks that are
created using techniques such as random selection, clustering and/or
augmentation. Unfortunately, clustering and augmentation are domain-dependent,
and thus they require either manual tweaking or expensive learning. In this
work, we describe an approach that generates meta-tasks using generative
models. A critical component is a novel approach of sampling from the latent
space that generates objects grouped into synthetic classes forming the
training and validation data of a meta-task. We find that the proposed
approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM),
outperforms or is competitive with current unsupervised learning baselines on
few-shot classification tasks on the most widely used benchmark datasets. In
addition, the approach promises to be applicable without manual tweaking over a
wider range of domains than previous approaches. | [
"cs.LG",
"stat.ML"
] |
We present an approach to explain the decisions of black box models for image
classification. While using the black box to label images, our explanation
method exploits the latent feature space learned through an adversarial
autoencoder. The proposed method first generates exemplar images in the latent
feature space and learns a decision tree classifier. Then, it selects and
decodes exemplars respecting local decision rules. Finally, it visualizes them
in a manner that shows to the user how the exemplars can be modified to either
stay within their class, or to become counter-factuals by "morphing" into
another class. Since we focus on black box decision systems for image
classification, the explanation obtained from the exemplars also provides a
saliency map highlighting the areas of the image that contribute to its
classification, and areas of the image that push it into another class. We
present the results of an experimental evaluation on three datasets and two
black box models. Besides providing the most useful and interpretable
explanations, we show that the proposed method outperforms existing explainers
in terms of fidelity, relevance, coherence, and stability. | [
"cs.CV",
"cs.LG"
] |
Constructing high-quality generative models for 3D shapes is a fundamental
task in computer vision with diverse applications in geometry processing,
engineering, and design. Despite the recent progress in deep generative
modelling, synthesis of finely detailed 3D surfaces, such as high-resolution
point clouds, from scratch has not been achieved with existing approaches. In
this work, we propose to employ the latent-space Laplacian pyramid
representation within a hierarchical generative model for 3D point clouds. We
combine the recently proposed latent-space GAN and Laplacian GAN architectures
to form a multi-scale model capable of generating 3D point clouds at increasing
levels of detail. Our evaluation demonstrates that our model outperforms the
existing generative models for 3D point clouds. | [
"cs.CV",
"eess.IV"
] |
Human activity, which usually consists of several actions, generally covers
interactions among persons and/or objects. In particular, human actions involve
certain spatial and temporal relationships, are the components of more
complicated activity, and evolve dynamically over time. Therefore, the
description of a single human action and the modeling of the evolution of
successive human actions are two major issues in human activity recognition. In
this paper, we develop a method for human activity recognition that tackles
these two issues. In the proposed method, an activity is divided into several
successive actions represented by spatio-temporal patterns, and the evolution
of these actions is captured by a sequential model. A refined comprehensive
spatio-temporal graph is utilized to represent a single action, which is a
qualitative representation of a human action incorporating both the spatial and
temporal relations of the participant objects. Next, a discrete hidden Markov
model is applied to model the evolution of action sequences. Moreover, a fully
automatic partition method is proposed to divide a long-term human activity
video into several human actions based on variational objects and qualitative
spatial relations. Finally, a hierarchical decomposition of the human body is
introduced to obtain a discriminative representation for a single action.
Experimental results on the Cornell Activity Dataset demonstrate the efficiency
and effectiveness of the proposed approach, which will enable long videos of
human activity to be better recognized. | [
"cs.CV",
"cs.LG"
] |
This work proposes a scheme that allows learning complex multi-agent
behaviors in a sample efficient manner, applied to 2v2 soccer. The problem is
formulated as a Markov game, and solved using deep reinforcement learning. We
propose a basic multi-agent extension of TD3 for learning the policy of each
player, in a decentralized manner. To ease learning, the task of 2v2 soccer is
divided in three stages: 1v0, 1v1 and 2v2. The process of learning in
multi-agent stages (1v1 and 2v2) uses agents trained on a previous stage as
fixed opponents. In addition, we propose experience sharing, a method that reuses
experience from a fixed opponent trained in a previous stage to train the currently
learning agent, together with a form of frame-skipping, to raise performance
significantly. Our results show that high-quality soccer play can
be obtained with our approach in just under 40M interactions. A summarized
video of the resulting game play can be found in https://youtu.be/f25l1j1U9RM. | [
"cs.LG",
"cs.MA"
] |
Conditional GANs are at the forefront of natural image synthesis. The main
drawback of such models is the necessity for labeled data. In this work we
exploit two popular unsupervised learning techniques, adversarial training and
self-supervision, and take a step towards bridging the gap between conditional
and unconditional GANs. In particular, we allow the networks to collaborate on
the task of representation learning, while being adversarial with respect to
the classic GAN game. The role of self-supervision is to encourage the
discriminator to learn meaningful feature representations which are not
forgotten during training. We test empirically both the quality of the learned
image representations, and the quality of the synthesized images. Under the
same conditions, the self-supervised GAN attains a similar performance to
state-of-the-art conditional counterparts. Finally, we show that this approach
to fully unsupervised learning can be scaled to attain an FID of 23.4 on
unconditional ImageNet generation. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
As moving objects always draw more attention from human eyes, temporal motion
information is typically exploited complementarily with spatial information to
detect salient objects in videos. Although efficient tools such as optical flow
have been proposed to extract temporal motion information, they often encounter
difficulties when used for saliency detection due to the movement of the camera
or the partial movement of salient objects. In this paper, we investigate the
complementary roles of spatial and temporal information and
propose a novel dynamic spatiotemporal network (DS-Net) for more effective
fusion of spatiotemporal information. We construct a symmetric two-bypass
network to explicitly extract spatial and temporal features. A dynamic weight
generator (DWG) is designed to automatically learn the reliability of
corresponding saliency branch. And a top-down cross attentive aggregation (CAA)
procedure is designed so as to facilitate dynamic complementary aggregation of
spatiotemporal features. Finally, the features are modified by spatial
attention with the guidance of coarse saliency map and then go through decoder
part for final saliency map. Experimental results on five benchmarks VOS,
DAVIS, FBMS, SegTrack-v2, and ViSal demonstrate that the proposed method
achieves superior performance than state-of-the-art algorithms. The source code
is available at https://github.com/TJUMMG/DS-Net. | [
"cs.CV"
] |
Variational autoencoders (VAE) are a powerful and widely-used class of models
to learn complex data distributions in an unsupervised fashion. One important
limitation of VAEs is the prior assumption that latent sample representations
are independent and identically distributed. However, for many important
datasets, such as time-series of images, this assumption is too strong:
accounting for covariances between samples, such as those in time, can yield
a more appropriate model specification and improve performance in downstream
tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior
Variational Autoencoder (GPPVAE), to specifically address this issue. The
GPPVAE aims to combine the power of VAEs with the ability to model correlations
afforded by GP priors. To achieve efficient inference in this new class of
models, we leverage structure in the covariance matrix, and introduce a new
stochastic backpropagation strategy that allows for computing stochastic
gradients in a distributed and low-memory fashion. We show that our method
outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two
image data applications. | [
"cs.LG",
"stat.ML"
] |
Conditional Generative Adversarial Networks (cGANs) are finding increasingly
widespread use in many application domains. Despite outstanding progress,
quantitative evaluation of such models often involves multiple distinct metrics
to assess different desirable properties, such as image quality, conditional
consistency, and intra-conditioning diversity. In this setting, model
benchmarking becomes a challenge, as each metric may indicate a different
"best" model. In this paper, we propose the Frechet Joint Distance (FJD), which
is defined as the Frechet distance between joint distributions of images and
conditioning, allowing it to implicitly capture the aforementioned properties
in a single metric. We conduct proof-of-concept experiments on a controllable
synthetic dataset, which consistently highlight the benefits of FJD when
compared to currently established metrics. Moreover, we use the newly
introduced metric to compare existing cGAN-based models for a variety of
conditioning modalities (e.g. class labels, object masks, bounding boxes,
images, and text captions). We show that FJD can be used as a promising single
metric for cGAN benchmarking and model selection. Code can be found at
https://github.com/facebookresearch/fjd. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
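A minimal sketch of a Frechet distance computed over joint (image, conditioning) embeddings under a Gaussian assumption, mirroring the FID computation; how the joint embedding is formed (e.g., concatenating image and conditioning embeddings) is an assumption here, and the official repository should be consulted for the actual procedure.

```python
# Hedged sketch: Frechet distance between Gaussian fits of two sets of joint embeddings.
import numpy as np
from scipy import linalg

def frechet_distance(joint_real, joint_fake):
    """joint_*: (N, d) arrays, e.g. np.concatenate([img_emb, cond_emb], axis=1) (assumption)."""
    mu1, mu2 = joint_real.mean(0), joint_fake.mean(0)
    c1 = np.cov(joint_real, rowvar=False)
    c2 = np.cov(joint_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):          # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))
```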
Despite recent advances in representation learning in hypercomplex (HC)
space, this subject is still vastly unexplored in the context of graphs.
Motivated by the complex and quaternion algebras, which have been found in
several contexts to enable effective representation learning that inherently
incorporates a weight-sharing mechanism, we develop graph neural networks that
leverage the properties of hypercomplex feature transformation. In particular,
in our proposed class of models, the multiplication rule specifying the algebra
itself is inferred from the data during training. Given a fixed model
architecture, we present empirical evidence that our proposed model
incorporates a regularization effect, alleviating the risk of overfitting. We
also show that for fixed model capacity, our proposed method outperforms its
corresponding real-formulated GNN, providing additional confirmation for the
enhanced expressivity of HC embeddings. Finally, we test our proposed
hypercomplex GNN on several open graph benchmark datasets and show that our
models reach state-of-the-art performance while consuming a much lower memory
footprint with 70% fewer parameters. Our implementations are available at
https://github.com/bayer-science-for-a-better-life/phc-gnn. | [
"cs.LG"
] |
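A minimal sketch of a linear layer whose weight matrix is assembled from a learned multiplication rule, in the spirit of the hypercomplex feature transformations described above: small "algebra" matrices are combined with feature matrices via Kronecker products, so the algebra itself is trained from data. The initialization and the choice of n are illustrative assumptions.

```python
# Hedged sketch: linear layer with a data-inferred hypercomplex multiplication rule.
import torch
import torch.nn as nn

class HypercomplexLinear(nn.Module):
    def __init__(self, in_features, out_features, n=4):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)   # learned "algebra" matrices
        self.S = nn.Parameter(torch.randn(n, out_features // n, in_features // n) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # W = sum_i kron(A_i, S_i): weight sharing induced by the Kronecker structure
        W = sum(torch.kron(self.A[i], self.S[i]) for i in range(self.n))
        return x @ W.t() + self.bias
```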
Deep learning offers state of the art solutions for image recognition.
However, deep models are vulnerable to adversarial perturbations in images that
are subtle but significantly change the model's prediction. In a white-box
attack, these perturbations are generally learned for deep models that operate
on RGB images and, hence, the perturbations are equally distributed in the RGB
color space. In this paper, we show that the adversarial perturbations prevail
in the Y-channel of the YCbCr space. Our finding is motivated from the fact
that the human vision and deep models are more responsive to shape and texture
rather than color. Based on our finding, we propose a defense against
adversarial images. Our defense, coined ResUpNet, removes perturbations only
from the Y-channel by exploiting ResNet features in an upsampling framework
without the need for a bottleneck. At the final stage, the untouched
CbCr-channels are combined with the refined Y-channel to restore the clean
image. Note that ResUpNet is model agnostic as it does not modify the DNN
structure. ResUpNet is trained end-to-end in Pytorch and the results are
compared to existing defense techniques in the input transformation category.
Our results show that our approach achieves the best balance between defending
against adversarial attacks such as FGSM, PGD and DDN, and maintaining the
original accuracies of VGG-16, ResNet50 and DenseNet121 on clean images. We
perform another experiment to show that learning adversarial perturbations only
for the Y-channel results in higher fooling rates for the same perturbation
magnitude. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
In decision making problems for continuous state and action spaces, linear
dynamical models are widely employed. Specifically, policies for stochastic
linear systems subject to quadratic cost functions capture a large number of
applications in reinforcement learning. Selected randomized policies have been
studied in the literature recently that address the trade-off between
identification and control. However, little is known about policies based on
bootstrapping observed states and actions. In this work, we show that
bootstrap-based policies achieve a square root scaling of regret with respect
to time. We also obtain results on the accuracy of learning the model's
dynamics. Corroborative numerical analysis that illustrates the technical
results is also provided. | [
"cs.LG",
"cs.SY",
"stat.ML"
] |
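A minimal sketch of bootstrapping observed states and actions to obtain resampled least-squares estimates of linear dynamics, which is the kind of bootstrap-based ingredient discussed above; the resampling scheme is an illustrative assumption, and turning each resampled estimate into a controller (e.g., via a Riccati solve) is omitted.

```python
# Hedged sketch: bootstrap estimates of x_{t+1} = A x_t + B u_t + w_t from observed transitions.
import numpy as np

def bootstrap_dynamics(X, U, X_next, n_boot=100, rng=None):
    """X, U, X_next: arrays of shape (T, n), (T, m), (T, n) of observed transitions."""
    rng = np.random.default_rng(rng)
    T, n = X.shape
    Z = np.hstack([X, U])                              # regressors [x_t, u_t]
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, T, size=T)               # resample transitions with replacement
        theta, *_ = np.linalg.lstsq(Z[idx], X_next[idx], rcond=None)
        A_hat, B_hat = theta[:n].T, theta[n:].T        # since X_next = X A^T + U B^T
        estimates.append((A_hat, B_hat))
    return estimates                                   # one (A, B) estimate per bootstrap draw
```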
Deep learning approaches to 3D shape segmentation are typically formulated as
a multi-class labeling problem. Existing models are trained for a fixed set of
labels, which greatly limits their flexibility and adaptivity. We opt for
top-down recursive decomposition and develop the first deep learning model for
hierarchical segmentation of 3D shapes, based on recursive neural networks.
Starting from a full shape represented as a point cloud, our model performs
recursive binary decomposition, where the decomposition networks at all nodes in
the hierarchy share weights. At each node, a node classifier is trained to
determine the type (adjacency or symmetry) and stopping criteria of its
decomposition. The features extracted in higher level nodes are recursively
propagated to lower level ones. Thus, the meaningful decompositions in higher
levels provide strong contextual cues constraining the segmentations in lower
levels. Meanwhile, to increase the segmentation accuracy at each node, we
enhance the recursive contextual feature with the shape feature extracted for
the corresponding part. Our method segments a 3D shape in point cloud into an
unfixed number of parts, depending on the shape complexity, showing strong
generality and flexibility. It achieves the state-of-the-art performance, both
for fine-grained and semantic segmentation, on the public benchmark and a new
benchmark of fine-grained segmentation proposed in this work. We also
demonstrate its application for fine-grained part refinements in image-to-shape
reconstruction. | [
"cs.CV"
] |
We present the first end-to-end approach for real-time material estimation for
general object shapes with uniform material that only requires a single color
image as input. In addition to Lambertian surface properties, our approach fully
automatically computes the specular albedo, material shininess, and a foreground
segmentation. We tackle this challenging and ill-posed inverse rendering problem
using recent advances in image-to-image translation techniques based on deep
convolutional encoder-decoder architectures. The underlying core representations
of our approach are specular shading, diffuse shading and mirror images, which
allow learning the effective and accurate separation of diffuse and specular
albedo. In addition, we propose a novel, highly efficient perceptual rendering
loss that mimics real-world image formation and obtains intermediate results even
during run time. The estimation of material parameters at real-time frame rates
enables exciting mixed reality applications, such as seamless
illumination-consistent integration of virtual objects into real-world scenes,
and virtual material cloning. We demonstrate our approach in a live setup,
compare it to the state of the art, and
demonstrate its effectiveness through quantitative and qualitative evaluation. | [
"cs.CV"
] |
Human ratings are currently the most accurate way to assess the quality of an
image captioning model, yet most often the only outcome used from an expensive
human rating evaluation is a few overall statistics over the evaluation
dataset. In this paper, we show that the signal from instance-level human
caption ratings can be leveraged to improve captioning models, even when the
amount of caption ratings is several orders of magnitude less than the caption
training data. We employ a policy gradient method to maximize the human ratings
as rewards in an off-policy reinforcement learning setting, where policy
gradients are estimated by samples from a distribution that focuses on the
captions in a caption ratings dataset. Our empirical evidence indicates that
the proposed method learns to generalize the human raters' judgments to a
previously unseen set of images, as judged by a different set of human judges,
and additionally on a different, multi-dimensional side-by-side human
evaluation procedure. | [
"cs.CV",
"cs.CL"
] |
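A minimal sketch of an off-policy REINFORCE-style loss in which human caption ratings serve as rewards and gradients are importance-weighted because the rated captions come from a behaviour distribution rather than the current policy; the clipping constant and the simple weighted baseline are illustrative assumptions.

```python
# Hedged sketch: importance-weighted policy-gradient loss with human ratings as rewards.
import torch

def offpolicy_pg_loss(logp_policy, logq_behaviour, ratings, clip=10.0):
    """All inputs are 1-D tensors over a batch of rated captions.
    logp_policy: log-prob of each caption under the current captioning policy.
    logq_behaviour: log-prob under the distribution the rated captions were drawn from.
    ratings: human rating used as the reward for each caption."""
    weights = torch.exp(logp_policy.detach() - logq_behaviour).clamp(max=clip)
    baseline = (weights * ratings).sum() / weights.sum()        # simple weighted baseline
    return -(weights * (ratings - baseline) * logp_policy).mean()
```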
Unmanned Aerial Vehicles (UAVs), especially drones, equipped with vision
techniques have become very popular in recent years, with their extensive use
in wide range of applications. Many of these applications require use of
computer vision techniques, particularly object detection from the information
captured by the on-board camera. In this paper, we propose an end-to-end object
detection model running on a UAV platform which is suitable for real-time
applications. We propose a deep feature pyramid architecture which makes use of
inherent properties of features extracted from Convolutional Networks by
capturing more generic features in the images (such as edge, color etc.) along
with the minute detailed features specific to the classes contained in our
problem. We use the VisDrone-18 dataset for our studies, which contains different
objects such as pedestrians, vehicles, bicycles etc. We provide software and
hardware architecture of our platform used in this study. We implemented our
model with both ResNet and MobileNet as convolutional bases. Our model combined
with a modified focal loss function, produced a desirable performance of 30.6 mAP
for object detection with an inference speed of 14 fps. We compared our results
with RetinaNet-ResNet-50 and HAL-RetinaNet and showed that our model combined
with MobileNet as the backbone feature extractor gave the best results in terms of
accuracy, speed and memory efficiency, and is best suited for real-time object
detection with drones. | [
"cs.CV"
] |
Many different classification tasks need to manage structured data, which are
usually modeled as graphs. Moreover, these graphs can be dynamic, meaning that
the vertices/edges of each graph may change over time. Our goal is to jointly
exploit structured data and temporal information through the use of a neural
network model. To the best of our knowledge, this task has not been addressed
using these kinds of architectures. For this reason, we propose two novel
approaches, which combine Long Short-Term Memory networks and Graph
Convolutional Networks to learn long short-term dependencies together with
graph structure. The quality of our methods is confirmed by the promising
results achieved. | [
"cs.LG",
"stat.ML"
] |
In Digital Holography (DH), it is crucial to extract the object distance from
a hologram in order to reconstruct its amplitude and phase. This step is called
auto-focusing and it is conventionally solved by first reconstructing a stack
of images and then by sharpening each reconstructed image using a focus metric
such as entropy or variance. The distance corresponding to the sharpest image
is considered the focal position. This approach, while effective, is
computationally demanding and time-consuming. In this paper, the determination
of the distance is performed by Deep Learning (DL). Two DL architectures are
compared: a Convolutional Neural Network (CNN) and a Vision Transformer (ViT).
ViT and CNN are used to treat auto-focusing as a classification problem.
Compared to a first attempt [11] in
which the distance between two consecutive classes was 100{\mu}m, our proposal
allows us to drastically reduce this distance to 1{\mu}m. Moreover, ViT reaches
similar accuracy and is more robust than CNN. | [
"cs.CV",
"eess.IV"
] |
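A minimal sketch of auto-focusing cast as classification over discretized focal distances, as described above; the backbone, input format, and number of distance bins are illustrative assumptions (the paper compares a CNN against a ViT on this task).

```python
# Hedged sketch: hologram auto-focusing as classification over distance bins.
import torch
import torch.nn as nn

class AutofocusCNN(nn.Module):
    def __init__(self, n_distance_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_distance_classes)

    def forward(self, hologram):                 # hologram: (B, 1, H, W) intensity image
        x = self.features(hologram).flatten(1)
        return self.classifier(x)                # logits over candidate focal distances

# Training would minimize nn.CrossEntropyLoss between these logits and the true distance bin.
```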
Probabilistic point cloud registration methods are becoming more popular
because of their robustness. However, unlike point-to-plane variants of
iterative closest point (ICP) which incorporate local surface geometric
information such as surface normals, most probabilistic methods (e.g., coherent
point drift (CPD)) ignore such information and build Gaussian mixture models
(GMMs) with isotropic Gaussian covariances. This results in sphere-like GMM
components which only penalize the point-to-point distance between the two
point clouds. In this paper, we propose a novel method called CPD with Local
Surface Geometry (LSG-CPD) for rigid point cloud registration. Our method
adaptively adds different levels of point-to-plane penalization on top of the
point-to-point penalization based on the flatness of the local surface. This
results in GMM components with anisotropic covariances. We formulate point
cloud registration as a maximum likelihood estimation (MLE) problem and solve
it with the Expectation-Maximization (EM) algorithm. In the E step, we
demonstrate that the computation can be recast into simple matrix manipulations
and efficiently computed on a GPU. In the M step, we perform an unconstrained
optimization on a matrix Lie group to efficiently update the rigid
transformation of the registration. The proposed method outperforms
state-of-the-art algorithms in terms of accuracy and robustness on various
datasets captured with range scanners, RGBD cameras, and LiDARs. Also, it is
significantly faster than modern implementations of CPD. The source code is
available at https://github.com/ChirikjianLab/LSG-CPD.git. | [
"cs.CV"
] |
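A minimal sketch of the core geometric idea above: estimate local surface flatness and a normal from neighbouring points, then add a point-to-plane penalty, weighted by flatness, on top of the point-to-point residual. The neighbourhood size and weighting are illustrative, and the paper embeds this idea inside an EM/GMM formulation rather than the plain cost shown here.

```python
# Hedged sketch: flatness-adaptive mix of point-to-point and point-to-plane residuals.
import numpy as np

def local_normal_and_flatness(points, idx, k=10):
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(d)[:k]]            # k nearest neighbours (includes the point itself)
    cov = np.cov(nbrs.T)
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    normal = evecs[:, 0]                        # direction of least variance
    flatness = 1.0 - evals[0] / (evals.sum() + 1e-12)   # close to 1 on planar patches
    return normal, flatness

def anisotropic_residual(source_pt, target_pts, idx, alpha=5.0):
    n, f = local_normal_and_flatness(target_pts, idx)
    diff = source_pt - target_pts[idx]
    point_to_point = np.dot(diff, diff)
    point_to_plane = np.dot(diff, n) ** 2
    return point_to_point + alpha * f * point_to_plane  # stronger plane penalty on flat regions
```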
This paper proposes an automated method to obtain the extrinsic calibration
parameters between a camera and a 3D lidar with as low as 16 beams. We use a
checkerboard as a reference to obtain features of interest in both sensor
frames. The calibration board centre point and normal vector are automatically
extracted from the lidar point cloud by exploiting the geometry of the board.
The corresponding features in the camera image are obtained from the camera's
extrinsic matrix. We explain the reasons behind selecting these features, and
why they are more robust compared to other possibilities. To obtain the optimal
extrinsic parameters, we choose a genetic algorithm to address the highly
non-linear state space. The process is automated after defining the bounds of
the 3D experimental region relative to the lidar, and the true board
dimensions. In addition, the camera is assumed to be intrinsically calibrated.
Our method requires a minimum of 3 checkerboard poses, and the calibration
accuracy is demonstrated by evaluating our algorithm using real world and
simulated features. | [
"cs.CV",
"cs.RO"
] |
In open set learning, a model must be able to generalize to novel classes
when it encounters a sample that does not belong to any of the classes it has
seen before. Open set learning poses a realistic learning scenario that is
receiving growing attention. Existing studies on open set learning mainly
focused on detecting novel classes, but few studies tried to model them for
differentiating novel classes. In this paper, we recognize that novel classes
should be different from each other, and propose distribution networks for open
set learning that can model different novel classes based on probability
distributions. We hypothesize that, through a certain mapping, samples from
different classes with the same classification criterion should follow
different probability distributions from the same distribution family. A deep
neural network is learned to map the samples in the original feature space to a
latent space where the distributions of known classes can be jointly learned
with the network. We additionally propose a distribution parameter transfer and
updating strategy for novel class modeling when a novel class is detected in
the latent space. By novel class modeling, the detected novel classes can serve
as known classes to the subsequent classification. Our experimental results on
image datasets MNIST and CIFAR10 show that the distribution networks can detect
novel classes accurately, and model them well for the subsequent classification
tasks. | [
"cs.LG",
"stat.ML"
] |
Deep co-training has recently been proposed as an effective approach for
image segmentation when annotated data is scarce. In this paper, we improve
existing approaches for semi-supervised segmentation with a self-paced and
self-consistent co-training method. To help distillate information from
unlabeled images, we first design a self-paced learning strategy for
co-training that lets jointly-trained neural networks focus on
easier-to-segment regions first, and then gradually consider harder ones. This
is achieved via an end-to-end differentiable loss in the form of a generalized
Jensen-Shannon Divergence (JSD). Moreover, to encourage predictions from
different networks to be both consistent and confident, we enhance this
generalized JSD loss with an uncertainty regularizer based on entropy. The
robustness of individual models is further improved using a self-ensembling
loss that enforces their prediction to be consistent across different training
iterations. We demonstrate the potential of our method on three challenging
image segmentation problems with different image modalities, using small
fraction of labeled data. Results show clear advantages in terms of performance
compared to the standard co-training baselines and recently proposed
state-of-the-art approaches for semi-supervised segmentation. | [
"cs.CV"
] |
Simultaneous localisation and categorization of objects in medical images,
also referred to as medical object detection, is of high clinical relevance
because diagnostic decisions often depend on rating of objects rather than e.g.
pixels. For this task, the cumbersome and iterative process of method
configuration constitutes a major research bottleneck. Recently, nnU-Net has
tackled this challenge for the task of image segmentation with great success.
Following nnU-Net's agenda, in this work we systematize and automate the
configuration process for medical object detection. The resulting
self-configuring method, nnDetection, adapts itself without any manual
intervention to arbitrary medical detection problems while achieving results on
par with or superior to the state-of-the-art. We demonstrate the effectiveness
of nnDetection on two public benchmarks, ADAM and LUNA16, and propose 10
further medical object detection tasks on public data sets for comprehensive
method evaluation. Code is at https://github.com/MIC-DKFZ/nnDetection . | [
"cs.CV"
] |
Deep neural decision forest (NDF) achieved remarkable performance on various
vision tasks via combining decision tree and deep representation learning. In
this work, we first trace the decision-making process of this model and
visualize saliency maps to understand which portion of the input influences it
more for both classification and regression problems. We then apply NDF on a
multi-task coordinate regression problem and demonstrate the distribution of
routing probabilities, which is vital for interpreting NDF yet not shown for
regression problems. The pre-trained model and code for visualization will be
available at https://github.com/Nicholasli1995/VisualizingNDF | [
"cs.CV",
"cs.AI"
] |
It has been shown that image descriptors extracted by convolutional neural
networks (CNNs) achieve remarkable results for retrieval problems. In this
paper, we apply attention mechanism to CNN, which aims at enhancing more
relevant features that correspond to important keypoints in the input image.
The generated attention-aware features are then aggregated by the previous
state-of-the-art generalized mean (GeM) pooling followed by normalization to
produce a compact global descriptor, which can be efficiently compared to other
image descriptors by the dot product. An extensive comparison of our proposed
approach with state-of-the-art methods is performed on the new challenging
ROxford5k and RParis6k retrieval benchmarks. Results indicate significant
improvement over previous work. In particular, our attention-aware GeM (AGeM)
descriptor outperforms the state-of-the-art method on ROxford5k under the `Hard'
evaluation protocol. | [
"cs.CV"
] |
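A minimal sketch of attention-weighted generalized-mean (GeM) pooling followed by L2 normalization, the descriptor construction described above; how the attention map is produced and combined with the features is an illustrative assumption.

```python
# Hedged sketch: attention-reweighted GeM pooling of CNN feature maps into a global descriptor.
import torch
import torch.nn.functional as F

def attention_gem(feature_map, attention, p=3.0, eps=1e-6):
    """feature_map: (B, C, H, W) CNN activations; attention: (B, 1, H, W) non-negative weights."""
    x = feature_map.clamp(min=eps) * attention             # emphasize salient locations
    pooled = x.pow(p).mean(dim=(2, 3)).pow(1.0 / p)         # GeM pooling per channel
    return F.normalize(pooled, dim=1)                       # unit-norm global descriptor
```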
Crowd counting has drawn more attention because of its wide application in
smart cities. Recent works achieved promising performance but relied on the
supervised paradigm with expensive crowd annotations. To alleviate annotation
cost, in this work we proposed a semi-supervised learning framework S4-Crowd,
which can leverage both unlabeled/labeled data for robust crowd modelling. In
the unsupervised pathway, two self-supervised losses were proposed to simulate
crowd variations such as scale, illumination, etc.; based on these and the
supervised information, pseudo labels were generated and gradually refined. We
also proposed a crowd-driven recurrent unit Gated-Crowd-Recurrent-Unit (GCRU),
which can preserve discriminant crowd information by extracting second-order
statistics, yielding pseudo labels with improved quality. A joint loss
including both unsupervised/supervised information was proposed, and a dynamic
weighting strategy was employed to balance the importance of the unsupervised
loss and supervised loss at different training stages. We conducted extensive
experiments on four popular crowd counting datasets in semi-supervised
settings. Experimental results suggested the effectiveness of each proposed
component in our S4-Crowd framework. Our method also outperformed other
state-of-the-art semi-supervised learning approaches on these crowd datasets. | [
"cs.CV",
"cs.LG"
] |
Deep reinforcement learning (DRL) on Markov decision processes (MDPs) with
continuous action spaces is often approached by directly training parametric
policies along the direction of estimated policy gradients (PGs). Previous
research revealed that the performance of these PG algorithms depends heavily
on the bias-variance tradeoffs involved in estimating and using PGs. A notable
approach towards balancing this tradeoff is to merge both on-policy and
off-policy gradient estimations. However, existing PG merging methods can be
sample inefficient and are not suitable to train deterministic policies
directly. To address these issues, this paper introduces elite PGs and
strengthens their variance reduction effect by adopting elitism and policy
consolidation techniques to regularize policy training based on policy
behavioral knowledge extracted from elite trajectories. Meanwhile, we propose a
two-step method to merge elite PGs and conventional PGs as a new extension of
the conventional interpolation merging method. At both the theoretical and
experimental levels, we show that both two-step merging and interpolation
merging can induce varied bias-variance tradeoffs during policy training. They
enable us to effectively use elite PGs and mitigate their performance impact on
trained policies. Our experiments also show that two-step merging can
outperform interpolation merging and several state-of-the-art algorithms on six
benchmark control tasks. | [
"cs.LG",
"stat.ML"
] |
3D detection plays an indispensable role in environment perception. Due to
the high cost of commonly used LiDAR sensor, stereo vision based 3D detection,
as an economical yet effective setting, attracts more attention recently. For
these approaches based on 2D images, accurate depth information is the key to
achieve 3D detection, and most existing methods resort to a preliminary stage
for depth estimation. They mainly focus on the global depth and neglect the
property of depth information in this specific task, namely its sparsity and
locality: accurate depth is only needed within the 3D bounding boxes.
Motivated by this finding, we propose a stereo-image based anchor-free
3D detection method, called structure-aware stereo 3D detector (termed as
SIDE), where we explore the instance-level depth information via constructing
the cost volume from RoIs of each object. Due to the information sparsity of
local cost volume, we further introduce match reweighting and structure-aware
attention, to make the depth information more concentrated. Experiments
conducted on the KITTI dataset show that our method achieves state-of-the-art performance compared to existing methods without depth map
supervision. | [
"cs.CV"
] |
The task of writer verification is to provide a likelihood score for whether
the queried and known handwritten image samples belong to the same writer or
not. Such a task calls for the neural network to make its outcome interpretable, i.e., provide a view into the network's decision-making process.
We implement and integrate cross-attention and soft-attention mechanisms to
capture the highly correlated and salient points in feature space of 2D inputs.
The attention maps serve as an explanation premise for the network's output
likelihood score. The attention mechanism also allows the network to focus more
on relevant areas of the input, thus improving the classification performance.
Our proposed approach achieves a precision of 86\% for detecting intra-writer
cases in the CEDAR cursive "AND" dataset. Furthermore, we generate meaningful
explanations for the provided decision by extracting attention maps from
multiple levels of the network. | [
"cs.CV",
"cs.AI",
"cs.LG",
"eess.IV"
] |
In this paper, we propose an efficient semantic segmentation framework for
indoor scenes, tailored to the application on a mobile robot. Semantic
segmentation can help robots to gain a reasonable understanding of their
environment, but to reach this goal, the algorithms not only need to be
accurate, but also fast and robust. Therefore, we developed an optimized 3D
point cloud processing framework based on a Randomized Decision Forest,
achieving competitive results at sufficiently high frame rates. We evaluate the
capabilities of our method on the popular NYU depth dataset and our own data
and demonstrate its feasibility by deploying it on a mobile service robot, for
which we could optimize an object search procedure using our results. | [
"cs.CV",
"cs.RO"
] |
We use matrix iteration theory to characterize acceleration in smooth games.
We define the spectral shape of a family of games as the set containing all
eigenvalues of the Jacobians of standard gradient dynamics in the family.
Shapes restricted to the real line represent well-understood classes of
problems, like minimization. Shapes spanning the complex plane capture the
added numerical challenges in solving smooth games. In this framework, we
describe gradient-based methods, such as extragradient, as transformations on
the spectral shape. Using this perspective, we propose an optimal algorithm for
bilinear games. For smooth and strongly monotone operators, we identify a
continuum between convex minimization, where acceleration is possible using
Polyak's momentum, and the worst case where gradient descent is optimal.
Finally, going beyond first-order methods, we propose an accelerated version of
consensus optimization. | [
"cs.LG",
"math.OC",
"stat.ML",
"G.1.6, I.2.6",
"G.1.6; I.2.6"
] |
Dilated Convolutions have been shown to be highly useful for the task of
image segmentation. By introducing gaps into convolutional filters, they enable
the use of larger receptive fields without increasing the original kernel size.
Even though this allows for the inexpensive capturing of features at different
scales, the structure of the dilated convolutional filter leads to a loss of
information. We hypothesise that inexpensive modifications to Dilated
Convolutional Neural Networks, such as additional averaging layers, could
overcome this limitation. In this project we test this hypothesis by evaluating
the effect of these modifications for a state-of-the-art image segmentation
system and compare them to existing approaches with the same objective. Our
experiments show that our proposed methods improve the performance of dilated
convolutions for image segmentation. Crucially, our modifications achieve these
results at a much lower computational cost than previous smoothing approaches. | [
"cs.CV"
] |
Current reinforcement learning (RL) methods can successfully learn single
tasks but often generalize poorly to modest perturbations in task domain or
training procedure. In this work, we present a decoupled learning strategy for
RL that creates a shared representation space where knowledge can be robustly
transferred. We separate learning the task representation, the forward
dynamics, the inverse dynamics and the reward function of the domain, and show
that this decoupling improves performance within the task, transfers well to
changes in dynamics and reward, and can be effectively used for online
planning. Empirical results show good performance in both continuous and
discrete RL domains. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Real-world image super-resolution (SR) is a challenging image translation
problem. Low-resolution (LR) images are often generated by various unknown
transformations rather than by applying simple bilinear down-sampling on
high-resolution (HR) images. To address this issue, this paper proposes a novel
pipeline which exploits style and attention mechanism in real-world SR. Our
pipeline consists of a style Variational Autoencoder (styleVAE) and a SR
network incorporated with attention mechanism. To get real-world-like
low-quality images paired with the HR images, we design the styleVAE to
transfer the complex nuisance factors in real-world LR images to the generated
LR images. We also use mutual information estimation (MI) to get better style
information. For our SR network, we first propose a global attention residual
block to learn long-range dependencies in images. Then another local attention
residual block is proposed to direct the attention of the SR network to local areas of images in which texture detail will be filled. It is worth noting that styleVAE can be used in a plug-and-play manner and thus can
help to improve the generalization and robustness of our SR method as well as
other SR methods. Extensive experiments demonstrate that our method surpasses state-of-the-art methods, both quantitatively and qualitatively. | [
"cs.CV",
"eess.IV"
] |
Person re-identification (\textit{re-id}) refers to matching pedestrians
across disjoint yet non-overlapping camera views. The most effective way to
match these pedestrians undertaking significant visual variations is to seek
reliably invariant features that can describe the person of interest
faithfully. Most existing methods are presented in a supervised manner to
produce discriminative features by relying on labeled paired images in
correspondence. However, annotating pair-wise images is prohibitively labor-intensive, and thus not practical in large-scale camera networks. Moreover,
seeking comparable representations across camera views demands a flexible model
to address the complex distributions of images. In this work, we study the
co-occurrence statistic patterns between pairs of images, and propose a crossing Generative Adversarial Network (Cross-GAN) for learning a joint distribution for cross-image representations in an unsupervised manner. Given a
pair of person images, the proposed model consists of the variational
auto-encoder to encode the pair into respective latent variables, a proposed
cross-view alignment to reduce the view disparity, and an adversarial layer to
seek the joint distribution of latent representations. The learned latent
representations are well-aligned to reflect the co-occurrence patterns of
paired images. We empirically evaluate the proposed model against challenging
datasets, and our results show the importance of joint invariant features in
improving matching rates of person re-id in comparison to semi-/unsupervised state-of-the-art methods. | [
"cs.CV"
] |
Supervised learning (SL) has achieved remarkable success in numerous
artificial intelligence applications. In the current literature, by referring
to the properties of the ground-truth labels prepared for a training data set,
SL is roughly categorized as fully supervised learning (FSL) and weakly
supervised learning (WSL). However, solutions for various FSL tasks have shown
that the given ground-truth labels are not always learnable, and the target
transformation from the given ground-truth labels to learnable targets can
significantly affect the performance of the final FSL solutions. Without
considering the properties of the target transformation from the given
ground-truth labels to learnable targets, the roughness of the FSL category
conceals some details that can be critical to building the optimal solutions
for some specific FSL tasks. Thus, it is desirable to reveal these details.
This article attempts to achieve this goal by expanding the categorization of
FSL and investigating the subtype that plays the central role in FSL. Taking
into consideration the properties of the target transformation from the given
ground-truth labels to learnable targets, we first categorize FSL into three
narrower subtypes. Then, we focus on the subtype moderately supervised learning
(MSL). MSL concerns the situation where the given ground-truth labels are
ideal, but due to the simplicity in annotation of the given ground-truth
labels, careful designs are required to transform the given ground-truth labels
into learnable targets. From the perspectives of definition and framework, we
comprehensively illustrate MSL to reveal what details are concealed by the
roughness of the FSL category. Finally, discussions on the revealed details
suggest that MSL should be given more attention. | [
"cs.CV",
"cs.LG",
"eess.IV",
"68T20",
"A.1"
] |