text (string, 29–3.31k characters) | label (sequence of 1–11 categories)
---|---
How do groups of individuals achieve consensus in movement decisions? Do
individuals follow their friends, a single predetermined leader, or whoever
happens to be nearby? To address these questions computationally, we formalize
the "Coordination Strategy Inference Problem". In this setting, a group of
individuals moves in a coordinated manner along a target path. Each individual
uses a specific strategy to follow others (e.g., nearest neighbors, predefined
leaders, preferred friends). Given a set of time series of coordinated
movement and a set of candidate strategies as inputs, we provide the first
methodology (to the best of our knowledge) to infer whether each individual
uses a local-agreement or a dictatorship-like strategy to achieve movement
coordination at the group level. We evaluate and demonstrate the performance
of the proposed framework by predicting the direction of movement of an
individual in a group on both simulated datasets and two real-world datasets:
a school of fish and a troop of baboons.
Moreover, since there is no prior methodology for inferring individual-level
strategies, we compare our framework with the state-of-the-art approach for
the task of classifying group-level coordination models. The results show that
our approach is highly accurate in inferring the correct strategy on simulated
datasets, even in complicated mixed-strategy settings that no existing method
can handle. In the group-level classification task, our framework outperforms
the state-of-the-art approach on all datasets. Animal-data experiments show
that fish, as expected, follow their neighbors, while baboons prefer to follow
specific individuals. Our methodology generalizes to arbitrary real-valued
time series beyond movement data. | [
"stat.ML",
"cs.AI",
"cs.LG",
"cs.MA",
"physics.data-an",
"37M10, 62F07, 92B99, 91C99, 68P99",
"G.3; I.2.3; I.2.6; I.2.11; J.4"
] |
This paper develops a hierarchical reinforcement learning architecture for
multi-mission spaceflight campaign design under uncertainty, including vehicle
design, infrastructure deployment planning, and space transportation
scheduling. This problem involves a high-dimensional design space and is
challenging especially with uncertainty present. To tackle this challenge, the
developed framework has a hierarchical structure with reinforcement learning
(RL) and network-based mixed-integer linear programming (MILP), where the
former optimizes campaign-level decisions (e.g., design of the vehicle used
throughout the campaign, destination demand assigned to each mission in the
campaign), whereas the latter optimizes the detailed mission-level decisions
(e.g., when to launch what from where to where). The framework is applied to a
set of human lunar exploration campaign scenarios with uncertain in-situ
resource utilization (ISRU) performance as a case study. The main value of this
work is its integration of the rapidly growing RL research and the existing
MILP-based space logistics methods through a hierarchical framework to handle
the otherwise intractable complexity of space mission design under uncertainty.
We expect this unique framework to be a critical stepping stone for the emerging
research direction of artificial intelligence for space mission design. | [
"cs.LG",
"cs.SY",
"eess.SY",
"math.OC"
] |
Point estimation of class prevalences in the presence of data set shift has
been a popular research topic for more than two decades. Less attention has
been paid to the construction of confidence and prediction intervals for
estimates of class prevalences. One little-considered question is whether or
not it is necessary for practical purposes to distinguish confidence and
prediction intervals. Another question not yet conclusively answered is
whether or not the discriminatory power of the classifier or score underlying
an estimation method matters for the accuracy of the estimates of the class
prevalences. This paper presents a simulation study aimed at shedding some
light on these and other related questions. | [
"stat.ML",
"cs.LG",
"stat.AP",
"65C60, 68U20"
] |
Bounding box regression is an important component in object detection. Recent
work has shown promising performance by optimizing the Intersection over Union
(IoU) as the loss. However, IoU-based losses suffer from vanishing gradients in
the case of low-overlapping bounding boxes, and the model can easily ignore
these cases. In this paper, we propose the Side Overlap (SO) loss, which
maximizes the side overlap of two bounding boxes and thereby puts more penalty
on low-overlapping cases. Besides, to speed up convergence, the Corner Distance
(CD) is added to the objective function. Combining the Side Overlap and the
Corner Distance, we obtain a new regression objective, the Side and Corner
Align Loss (SCALoss). SCALoss is well correlated with the IoU loss, which
benefits the evaluation metric, while producing a larger penalty for
low-overlapping cases. It can serve as a comprehensive similarity measure,
leading to better localization performance and faster convergence. Experiments
on the COCO and PASCAL VOC benchmarks show that SCALoss brings consistent
improvement and outperforms $\ell_n$ loss and IoU-based losses with popular
object detectors such as YOLOv3, SSD, RepPoints, and Faster R-CNN. | [
"cs.CV"
] |
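As a concrete illustration of the loss described above, here is a minimal PyTorch sketch assuming axis-aligned boxes as `(x1, y1, x2, y2)` tensors; the normalizations and the equal weighting of the two terms are our assumptions, not the paper's exact formulation.

```python
import torch

def sca_like_loss(pred, target, eps=1e-7):
    """Sketch of a Side Overlap + Corner Distance objective.

    pred, target: (N, 4) boxes as (x1, y1, x2, y2).
    Unlike plain IoU, the side-overlap term still yields gradients when
    boxes barely overlap, and the corner term speeds up convergence.
    """
    # Side overlap: intersection length over union length, per axis.
    ix = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(min=0)
    iy = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(min=0)
    ux = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    uy = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    side_overlap = 0.5 * (ix / (ux + eps) + iy / (uy + eps))

    # Corner distance, normalized by the enclosing box diagonal.
    corner_sq = ((pred - target) ** 2).sum(dim=1)
    corner_term = corner_sq / (ux ** 2 + uy ** 2 + eps)

    return (1.0 - side_overlap) + corner_term  # assumed equal weighting
```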
In this paper, we propose Contextual Guided Segmentation (CGS) framework for
video instance segmentation in three passes. In the first pass, i.e., preview
segmentation, we propose Instance Re-Identification Flow to estimate main
properties of each instance (i.e., human/non-human, rigid/deformable,
known/unknown category) by propagating its preview mask to other frames. In the
second pass, i.e., contextual segmentation, we introduce multiple contextual
segmentation schemes. For human instances, we develop skeleton-guided
segmentation in a frame along with object flow to correct and refine the
results across frames. For non-human instances, if the instance has a wide
variation in
appearance and belongs to known categories (which can be inferred from the
initial mask), we adopt instance segmentation. If the non-human instance is
nearly rigid, we train FCNs on synthesized images from the first frame of a
video sequence. In the final pass, i.e., guided segmentation, we develop a
novel fine-grained segmentation method on non-rectangular regions of interest
(ROIs). The natural-shaped ROI is generated by applying guided attention from
the frames neighboring the current one to reduce ambiguity in the
segmentation of different overlapping instances. Forward mask propagation is
followed by backward mask propagation to further restore missing instance
fragments due to re-appeared instances, fast motion, occlusion, or heavy
deformation. Finally, instances in each frame are merged based on their depth
values, together with human and non-human object interaction and rare instance
priority. Experiments conducted on the DAVIS Test-Challenge dataset demonstrate
the effectiveness of our proposed framework. We consistently placed 3rd in the
DAVIS Challenges 2017-2019, with 75.4%, 72.4%, and 78.4% in terms of
global score, region similarity, and contour accuracy, respectively. | [
"cs.CV"
] |
Synchronization of coupled oscillators is observed at multiple levels of
neural systems and has been shown to play an important role in visual
perception. We propose a computing system based on locally coupled oscillator
networks for image segmentation. The system can serve as the preprocessing
front-end of an image processing pipeline where the common frequencies of
clusters of oscillators reflect the segmentation results. To demonstrate the
feasibility of our design, the system is simulated and tested on a human face
image dataset and its performance is compared with traditional intensity
threshold based algorithms. Our system shows both better performance and higher
noise tolerance than traditional methods. | [
"cs.CV",
"q-bio.NC",
"C.1.3"
] |
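To make the idea concrete, below is a toy NumPy sketch of locally coupled phase oscillators over a pixel grid; the Gaussian coupling, the constants, and the crude phase-based grouping at the end are all illustrative assumptions rather than the paper's design.

```python
import numpy as np

def oscillator_segment(img, steps=200, dt=0.05, k=5.0, sigma=0.1):
    """Kuramoto-style sketch: each pixel hosts a phase oscillator coupled
    to its 4-neighbors; coupling is strong where intensities are similar,
    so oscillators inside a uniform region synchronize while couplings
    across region boundaries stay weak. (np.roll wraps at the image
    borders; a real implementation would handle edges explicitly.)"""
    h, w = img.shape
    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, size=(h, w))  # random initial phases
    omega = 1.0 + img                               # intensity-dependent frequencies

    for _ in range(steps):
        drive = np.zeros_like(theta)
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            nb_theta = np.roll(theta, shift, axis=axis)
            nb_img = np.roll(img, shift, axis=axis)
            coupling = k * np.exp(-((img - nb_img) ** 2) / (2 * sigma ** 2))
            drive += coupling * np.sin(nb_theta - theta)
        theta = theta + dt * (omega + drive)

    # Crude readout: synchronized pixels end up with similar phases.
    return np.round(np.cos(theta), 1)

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # toy two-region image
segments = oscillator_segment(img)
```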
Multi-task regression attempts to exploit the task similarity in order to
achieve knowledge transfer across related tasks for performance improvement.
The application of Gaussian process (GP) in this scenario yields the
non-parametric yet informative Bayesian multi-task regression paradigm.
Multi-task GP (MTGP) provides not only the prediction mean but also the
associated prediction variance to quantify uncertainty, thus gaining popularity
in various scenarios. The linear model of coregionalization (LMC) is a
well-known MTGP paradigm which exploits the dependency of tasks through linear
combination of several independent and diverse GPs. The LMC, however, suffers
from high model complexity and limited model capability when handling
complicated multi-task cases. To this end, we develop the neural embedding of
coregionalization that transforms the latent GPs into a high-dimensional latent
space to induce rich yet diverse behaviors. Furthermore, we use advanced
variational inference as well as sparse approximation to devise a tight and
compact evidence lower bound (ELBO) for higher quality of scalable model
inference. Extensive numerical experiments have been conducted to verify the
higher prediction quality and better generalization of our model, named NSVLMC,
on various real-world multi-task datasets and the cross-fluid modeling of
unsteady fluidized beds. | [
"stat.ML",
"cs.LG"
] |
Monocular depth estimation is a challenging task in complex compositions
depicting multiple objects of diverse scales. Despite the recent great progress
thanks to deep convolutional neural networks (CNNs), state-of-the-art monocular
depth estimation methods still fall short of handling such real-world
challenging scenarios. In this paper, we propose a deep end-to-end learning
framework to tackle these challenges, which learns the direct mapping from a
color image to the corresponding depth map. First, we cast monocular depth
estimation as a multi-category dense labeling task, in contrast to the
regression-based formulation. In this way, we can build upon the recent
progress in dense labeling such as semantic segmentation. Second, we fuse
different side-outputs from our front-end dilated convolutional neural network
in a hierarchical way to exploit the multi-scale depth cues for depth
estimation, which is critical to achieve scale-aware depth estimation. Third,
we propose to use soft-weighted-sum inference instead of hard-max inference,
transforming the discretized depth scores into continuous depth values.
Thus, we reduce the influence of quantization error and improve the robustness
of our method. Extensive experiments on the NYU Depth V2 and KITTI datasets
show the superiority of our method compared with current state-of-the-art
methods. Furthermore, experiments on the NYU V2 dataset reveal that our model
is able to learn the probability distribution of depth. | [
"cs.CV"
] |
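The soft-weighted-sum inference mentioned above is easy to state in code. Below is a minimal PyTorch sketch, assuming per-pixel logits over K discretized depth bins with known bin-center depths:

```python
import torch

def soft_weighted_sum_depth(logits, bin_centers):
    """Convert per-pixel depth-bin scores to continuous depth values.

    logits:      (N, K, H, W) scores over K depth bins.
    bin_centers: (K,) depth value represented by each bin.
    Instead of hard-max (argmax over bins), the softmax probabilities
    weight the bin centers, which reduces quantization error.
    """
    probs = torch.softmax(logits, dim=1)                         # (N, K, H, W)
    return (probs * bin_centers.view(1, -1, 1, 1)).sum(dim=1)    # (N, H, W)
```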
Graph neural networks (GNNs) have been widely used to analyze the
graph-structured data in various application domains, e.g., social networks,
molecular biology, and anomaly detection. With great power, GNN models, usually
valuable intellectual property of their owners, also become attractive targets
for attackers. Recent studies show that machine learning models face a severe
threat called Model Extraction Attacks, where a well-trained private model
owned by a service provider can be stolen by an attacker pretending to be a
client. Unfortunately, existing works focus on models trained on Euclidean
data, e.g., images and texts, while how to
extract a GNN model that contains a graph structure and node features is yet to
be explored. In this paper, we explore and develop model extraction attacks
against GNN models. Given only black-box access to a target GNN model, the
attacker aims to reconstruct a duplicate via several nodes it has obtained
(called attacker nodes). We first systematically formalise the threat modeling
in the context of GNN model extraction and classify the adversarial threats
into seven categories by considering different background knowledge of the
attacker, e.g., attributes and/or neighbor connections of the attacker nodes.
Then we present the detailed methods which utilize the accessible knowledge in
each threat to implement the attacks. By evaluating over three real-world
datasets, our attacks are shown to extract duplicated models effectively, i.e.,
more than 89% of the inputs in the target domain receive the same output
predictions as the victim model. | [
"cs.LG",
"cs.CR"
] |
Recently, deep convolutional neural networks (CNNs) have demonstrated
remarkable progress on single-image super-resolution. However, as the depth and
width of the networks increase, CNN-based super-resolution methods face
challenges of computational complexity and memory consumption in practice. To
address these issues, we propose a deep but compact convolutional network to
directly reconstruct the high-resolution image from the original low-resolution
image. The proposed model consists of three parts: a feature extraction block,
stacked information distillation blocks, and a reconstruction block. By
combining an enhancement unit with a compression unit into a distillation
block, local long- and short-path features can be effectively extracted.
Specifically, the
proposed enhancement unit mixes together two different types of features and
the compression unit distills more useful information for the sequential
blocks. In addition, the proposed network has the advantage of fast execution
due to the comparatively small number of filters per layer and the use of group
convolution. Experimental results demonstrate that the proposed method is
superior to the state-of-the-art methods, especially in terms of time
performance. | [
"cs.CV"
] |
An unsupervised point cloud registration method, called salient points
analysis (SPA), is proposed in this work. The proposed SPA method can register
two point clouds effectively using only a small subset of salient points. It
first applies the PointHop++ method to point clouds, finds corresponding
salient points in two point clouds based on the local surface characteristics
of points and performs registration by matching the corresponding salient
points. The SPA method offers several advantages over the recent deep learning
based solutions for registration. Deep learning methods such as PointNetLK and
DCP train end-to-end networks and rely on full supervision (namely, ground
truth transformation matrix and class label). In contrast, the SPA is
completely unsupervised. Furthermore, SPA's training time and model size are
much smaller. The effectiveness of the SPA method is demonstrated by experiments
on seen and unseen classes and noisy point clouds from the ModelNet-40 dataset. | [
"cs.CV"
] |
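Once corresponding salient points are matched across the two clouds, the rigid transform has a closed-form solution. A minimal NumPy sketch of that final alignment step is below (the classic Kabsch/SVD solution; the correspondence search itself, which SPA performs via local surface descriptors, is assumed already done):

```python
import numpy as np

def align_matched_points(src, dst):
    """Closed-form rigid alignment of matched salient points.

    src, dst: (M, 3) arrays of corresponding points, so that the returned
    R, t satisfy R @ src[i] + t ~= dst[i] in the least-squares sense.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```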
Bayesian priors offer a compact yet general means of incorporating domain
knowledge into many learning tasks. The correctness of the Bayesian analysis
and inference, however, largely depends on the accuracy and correctness of these
priors. PAC-Bayesian methods overcome this problem by providing bounds that
hold regardless of the correctness of the prior distribution. This paper
introduces the first PAC-Bayesian bound for the batch reinforcement learning
problem with function approximation. We show how this bound can be used to
perform model-selection in a transfer learning scenario. Our empirical results
confirm that PAC-Bayesian policy evaluation is able to leverage prior
distributions when they are informative and, unlike standard Bayesian RL
approaches, ignore them when they are misleading. | [
"cs.LG",
"stat.ML"
] |
Designing efficient exploration is central to Reinforcement Learning due to
the fundamental problem posed by the exploration-exploitation dilemma. Bayesian
exploration strategies like Thompson Sampling resolve this trade-off in a
principled way by modeling and updating the distribution of the parameters of
the action-value function, the outcome model of the environment. However,
this technique becomes infeasible for complex environments due to the
difficulty of representing and updating probability distributions over
parameters of outcome models of corresponding complexity. Moreover, the
approximation techniques introduced to mitigate this issue typically result in
poor exploration-exploitation trade-offs, as observed in the case of deep
neural network models with approximate posterior methods that have been shown
to underperform in the deep bandit scenario.
In this paper we introduce Sample Average Uncertainty (SAU), a simple and
efficient uncertainty measure for contextual bandits. While Bayesian approaches
like Thompson Sampling estimate outcome uncertainty indirectly by first
quantifying the variability over the parameters of the outcome model, SAU is a
frequentist approach that directly estimates the uncertainty of the outcomes
based on the value predictions. Importantly, we show theoretically that the
uncertainty measure estimated by SAU asymptotically matches the uncertainty
provided by Thompson Sampling, as well as its regret bounds. Because of its
simplicity, SAU can be seamlessly applied to deep contextual bandits as a very
scalable drop-in replacement for epsilon-greedy exploration. Finally, we
empirically confirm our theory by showing that SAU-based exploration
outperforms current state-of-the-art deep Bayesian bandit methods on several
real-world datasets at modest computation cost. | [
"cs.LG",
"stat.ML"
] |
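To convey the flavor of the frequentist uncertainty estimate described above, here is a toy sketch for a K-armed bandit; the running average of squared prediction errors stands in for the paper's SAU estimator, and the sampling rule is a simplifying assumption of ours:

```python
import numpy as np

class SAULikeBandit:
    """Toy bandit whose exploration is driven by sample-average uncertainty:
    uncertainty is read directly off observed value-prediction errors rather
    than off a posterior over model parameters (as in Thompson Sampling)."""

    def __init__(self, k):
        self.value = np.zeros(k)    # running value predictions per arm
        self.sq_err = np.ones(k)    # running mean of squared prediction errors
        self.count = np.ones(k)

    def select(self):
        # Sample around the value estimate with error-driven spread,
        # which shrinks as predictions for an arm stabilize.
        spread = np.sqrt(self.sq_err / self.count)
        return int(np.argmax(self.value + spread * np.random.randn(len(self.value))))

    def update(self, arm, reward):
        err = reward - self.value[arm]
        self.count[arm] += 1
        self.value[arm] += err / self.count[arm]
        self.sq_err[arm] += (err ** 2 - self.sq_err[arm]) / self.count[arm]
```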
Convolutional networks have been the paradigm of choice in many computer
vision applications. The convolution operation however has a significant
weakness in that it only operates on a local neighborhood, thus missing global
information. Self-attention, on the other hand, has emerged as a recent advance
to capture long range interactions, but has mostly been applied to sequence
modeling and generative modeling tasks. In this paper, we consider the use of
self-attention for discriminative visual tasks as an alternative to
convolutions. We introduce a novel two-dimensional relative self-attention
mechanism that proves competitive in replacing convolutions as a stand-alone
computational primitive for image classification. We find in control
experiments that the best results are obtained when combining both convolutions
and self-attention. We therefore propose to augment convolutional operators
with this self-attention mechanism by concatenating convolutional feature maps
with a set of feature maps produced via self-attention. Extensive experiments
show that Attention Augmentation leads to consistent improvements in image
classification on ImageNet and object detection on COCO across many different
models and scales, including ResNets and a state-of-the-art mobile-constrained
network, while keeping the number of parameters similar. In particular, our
method achieves a $1.3\%$ top-1 accuracy improvement on ImageNet classification
over a ResNet50 baseline and outperforms other attention mechanisms for images
such as Squeeze-and-Excitation. It also achieves an improvement of 1.4 mAP in
COCO Object Detection on top of a RetinaNet baseline. | [
"cs.CV"
] |
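The augmentation itself, concatenating convolutional feature maps with self-attention feature maps, can be sketched compactly in PyTorch. The version below substitutes the library's standard multi-head attention for the paper's two-dimensional relative self-attention (relative position logits omitted), so it illustrates the wiring rather than the exact mechanism:

```python
import torch
import torch.nn as nn

class AttentionAugmentedConv(nn.Module):
    """Concatenate conv features with self-attention features over pixels."""

    def __init__(self, in_ch, conv_ch, attn_ch, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, conv_ch, 3, padding=1)
        self.proj = nn.Conv2d(in_ch, attn_ch, 1)   # pixel embeddings
        self.attn = nn.MultiheadAttention(attn_ch, heads, batch_first=True)

    def forward(self, x):
        conv_out = self.conv(x)                           # (B, conv_ch, H, W)
        b, _, h, w = x.shape
        tokens = self.proj(x).flatten(2).transpose(1, 2)  # (B, H*W, attn_ch)
        attn_out, _ = self.attn(tokens, tokens, tokens)   # global interactions
        attn_out = attn_out.transpose(1, 2).reshape(b, -1, h, w)
        return torch.cat([conv_out, attn_out], dim=1)     # augmented features
```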
Airborne light detection and ranging (LiDAR) plays an increasingly
significant role in urban planning, topographic mapping, environmental
monitoring, power line detection and other fields thanks to its capability to
quickly acquire large-scale and high-precision ground information. To achieve
point cloud classification, previous studies proposed point cloud deep learning
models that can directly process raw point clouds based on PointNet-like
architectures. Some recent works proposed graph convolutional neural networks
based on the inherent topology of point clouds. However, the above point cloud
deep learning models only pay attention to exploring local geometric
structures, yet ignore global contextual relationships among all points. In
this paper, we present a graph attention convolution neural network (GACNN)
that can be directly applied to the classification of unstructured 3D point
clouds obtained by airborne LiDAR. Specifically, we first introduce a graph
attention convolution module that incorporates global contextual information
and local structural features. Based on the proposed graph attention
convolution module, we further design an end-to-end encoder-decoder network,
named GACNN, to capture multiscale features of the point clouds and therefore
enable more accurate airborne point cloud classification. Experiments on the
ISPRS 3D labeling dataset show that the proposed model achieves a new
state-of-the-art performance in terms of average F1 score (71.5\%) and a
satisfactory overall accuracy (83.2\%). Additionally, experiments further
conducted on the 2019 Data Fusion Contest Dataset by comparing with other
prevalent point cloud deep learning models demonstrate the favorable
generalization capability of the proposed model. | [
"cs.CV"
] |
The success of deep learning heavily depends on the availability of large
labeled training sets. However, it is hard to obtain large labeled datasets in
the medical image domain because of strict privacy concerns and costly labeling
efforts. Contrastive learning, an unsupervised learning technique, has proven
powerful in learning image-level representations from unlabeled data.
The learned encoder can then be transferred or fine-tuned to improve the
performance of downstream tasks with limited labels. A critical step in
contrastive learning is the generation of contrastive data pairs, which is
relatively simple for natural image classification but quite challenging for
medical image segmentation due to the existence of the same tissue or organ
across the dataset. As a result, when applied to medical image segmentation,
most state-of-the-art contrastive learning frameworks inevitably introduce a
lot of false-negative pairs and result in degraded segmentation quality. To
address this issue, we propose a novel positional contrastive learning (PCL)
framework to generate contrastive data pairs by leveraging the position
information in volumetric medical images. Experimental results on CT and MRI
datasets demonstrate that the proposed PCL method can substantially improve the
segmentation performance compared to existing methods in both the
semi-supervised and the transfer learning settings. | [
"cs.CV"
] |
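The core of the pair-generation idea is simple: slices at similar normalized positions in their volumes likely show the same anatomy. A minimal PyTorch sketch is below; the threshold value and the hard cutoff are illustrative assumptions, not the paper's exact scheme:

```python
import torch

def positional_pair_masks(positions, thresh=0.1):
    """Build contrastive pair masks from slice positions.

    positions: (N,) normalized axial position of each 2D slice within its
    volume (0 = bottom, 1 = top). Slices at nearby positions are treated
    as positives, distant ones as negatives, avoiding the false negatives
    that arise when the same organ appears across a medical dataset.
    """
    diff = (positions.unsqueeze(0) - positions.unsqueeze(1)).abs()   # (N, N)
    eye = torch.eye(len(positions), dtype=torch.bool)
    pos_mask = (diff < thresh) & ~eye    # positives: close, excluding self
    neg_mask = diff >= thresh            # negatives: far apart
    return pos_mask, neg_mask
```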
In recent years, neural network approaches have been widely adopted for
machine learning tasks, with applications in computer vision. More recently,
unsupervised generative models based on neural networks have been successfully
applied to model data distributions via low-dimensional latent spaces. In this
paper, we use Generative Adversarial Networks (GANs) to impose structure in
compressed sensing problems, replacing the usual sparsity constraint. We
propose to train the GANs in a task-aware fashion, specifically for
reconstruction tasks. We also show that it is possible to train our model
without using any (or much) non-compressed data. Finally, we show that the
latent space of the GAN carries discriminative information and can further be
regularized to generate input features for general inference tasks. We
demonstrate the effectiveness of our method on a variety of reconstruction and
classification problems. | [
"cs.LG",
"stat.ML"
] |
Nuclei segmentation is a fundamental task that is critical for various
computational pathology applications including nuclei morphology analysis, cell
type classification, and cancer grading. Conventional vision-based methods for
nuclei segmentation struggle in challenging cases and deep learning approaches
have proven to be more robust and generalizable. However, CNNs require large
amounts of labeled histopathology data. Moreover, conventional CNN-based
approaches lack structured prediction capabilities which are required to
distinguish overlapping and clumped nuclei. Here, we present an approach to
nuclei segmentation that overcomes these challenges by utilizing a conditional
generative adversarial network (cGAN) trained with synthetic and real data. We
generate a large dataset of H&E training images with perfect nuclei
segmentation labels using an unpaired GAN framework. This synthetic data along
with real histopathology data from six different organs are used to train a
conditional GAN with spectral normalization and gradient penalty for nuclei
segmentation. This adversarial regression framework enforces higher order
consistency when compared to conventional CNN models. We demonstrate that this
nuclei segmentation approach generalizes across different organs, sites,
patients and disease states, and outperforms conventional approaches,
especially in isolating individual and overlapping nuclei. | [
"cs.CV"
] |
We present the first edition of "VIPriors: Visual Inductive Priors for
Data-Efficient Deep Learning" challenges. We offer four data-impaired
challenges, where models are trained from scratch, and we reduce the number of
training samples to a fraction of the full set. Furthermore, to encourage data
efficient solutions, we prohibit the use of pre-trained models and other
transfer learning techniques. The majority of top ranking solutions make heavy
use of data augmentation, model ensembling, and novel and efficient network
architectures to achieve significant performance increases compared to the
provided baselines. | [
"cs.CV"
] |
Clinical finding summaries from an orthopantomogram, or a dental panoramic
radiograph, have significant potential to improve patient communication and
speed up clinical judgments. While the orthopantomogram is a first-line tool for
dental examinations, no existing work has explored the summarization of
findings from it. A finding summary has to find teeth in the imaging study and
label the teeth with several types of past treatments. To tackle the problem,
we develop DeepOPG, which breaks the summarization process into functional
segmentation and tooth localization, the latter of which is further refined by
a novel dental coherence module. We also leverage weak supervision labels to
improve detection results in a reinforcement learning scenario. Experiments
show high efficacy of DeepOPG on finding summarization, achieving an overall
AUC of 88.2% in detecting six types of findings. The proposed dental coherence
module and weak supervision are both shown to improve DeepOPG, adding 5.9% and
0.4% to AP@IoU=0.5, respectively. | [
"cs.CV",
"cs.LG"
] |
We present iNeRF, a framework that performs mesh-free pose estimation by
"inverting" a Neural RadianceField (NeRF). NeRFs have been shown to be
remarkably effective for the task of view synthesis - synthesizing
photorealistic novel views of real-world scenes or objects. In this work, we
investigate whether we can apply analysis-by-synthesis via NeRF for mesh-free,
RGB-only 6DoF pose estimation - given an image, find the translation and
rotation of a camera relative to a 3D object or scene. Our method assumes that
no object mesh models are available during either training or test time.
Starting from an initial pose estimate, we use gradient descent to minimize the
residual between pixels rendered from a NeRF and pixels in an observed image.
In our experiments, we first study 1) how to sample rays during pose refinement
for iNeRF to collect informative gradients and 2) how different batch sizes of
rays affect iNeRF on a synthetic dataset. We then show that for complex
real-world scenes from the LLFF dataset, iNeRF can improve NeRF by estimating
the camera poses of novel images and using these images as additional training
data for NeRF. Finally, we show iNeRF can perform category-level object pose
estimation, including object instances not seen during training, with RGB
images by inverting a NeRF model inferred from a single view. | [
"cs.CV",
"cs.RO"
] |
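The analysis-by-synthesis loop described above reduces to gradient descent on a photometric residual. A hedged PyTorch sketch follows; `render_rays` is an assumed differentiable NeRF renderer, and the flat pose parameterization is a simplification (the paper optimizes on SE(3)):

```python
import torch

def inerf_style_refine(render_rays, observed, pose0, iters=300, lr=1e-2, n_rays=1024):
    """Refine a 6DoF pose by minimizing the NeRF photometric residual.

    render_rays(pose, uv): assumed differentiable function rendering RGB
    along pixel coordinates uv for the given pose -> (n_rays, 3).
    observed: (H, W, 3) target image; pose0: initial pose tensor.
    """
    pose = pose0.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    h, w = observed.shape[:2]
    for _ in range(iters):
        # Random ray batch; the paper studies smarter sampling strategies.
        uv = torch.stack([torch.randint(0, w, (n_rays,)),
                          torch.randint(0, h, (n_rays,))], dim=1)
        rendered = render_rays(pose, uv)              # (n_rays, 3)
        target = observed[uv[:, 1], uv[:, 0]]         # (n_rays, 3)
        loss = ((rendered - target) ** 2).mean()      # photometric residual
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pose.detach()
```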
Facial attribute editing aims to manipulate attributes on the human face,
e.g., adding a mustache or changing the hair color. Existing approaches suffer
from a serious compromise between correct attribute generation and preservation
of other information, such as identity and background, because they edit the
attributes in imprecise areas. To resolve this dilemma, we propose a
progressive attention GAN (PA-GAN) for facial attribute editing. In our
approach, the editing is progressively conducted from high to low feature level
while being constrained inside a proper attribute area by an attention mask at
each level. This prevents undesired modifications to the irrelevant
regions from the beginning, and then the network can focus more on correctly
generating the attributes within a proper boundary at each level. As a result,
our approach achieves correct attribute editing, with irrelevant details much
better preserved compared with state-of-the-art methods. Code is released at
https://github.com/LynnHo/PA-GAN-Tensorflow. | [
"cs.CV"
] |
We consider the task of classification when a significantly reduced amount of
labelled data is available. This problem is of great interest in several
real-world applications, as obtaining large amounts of labelled data is
expensive and time-consuming. We present a novel semi-supervised framework for
multi-class classification that is based on the non-smooth $\ell_1$ norm of the
normalised graph 1-Laplacian. Our transductive framework is framed under a
novel functional with carefully selected class priors - that enforces a
sufficiently smooth solution and strengthens the intrinsic relation between the
labelled and unlabelled data. We provide theoretical results of our new
optimisation model and show its connections with deep learning for handling
large-scale datasets. We demonstrate through extensive experimental results on
large datasets - CIFAR-10, CIFAR-100 and ChestX-Ray14 - that our method
outperforms classic methods and readily competes with recent deep-learning
approaches. | [
"cs.LG",
"stat.ML"
] |
Traffic forecasting is a fundamental and challenging task in the field of
intelligent transportation. Accurate forecasting not only depends on the
historical traffic flow information but also needs to consider the influence of
a variety of external factors, such as weather conditions and surrounding POI
distribution. Recently, spatiotemporal models integrating graph convolutional
networks and recurrent neural networks have become traffic forecasting research
hotspots and have made significant progress. However, few works integrate
external factors. Therefore, based on the assumption that introducing external
factors can improve both spatiotemporal prediction accuracy and
interpretability, we propose an attribute-augmented spatiotemporal
graph convolutional network (AST-GCN). We model the external factors as dynamic
attributes and static attributes and design an attribute-augmented unit to
encode and integrate those factors into the spatiotemporal graph convolution
model. Experiments on real datasets show the effectiveness of considering
external information on traffic forecasting tasks when compared to traditional
traffic prediction methods. Moreover, under different attribute-augmented
schemes and prediction horizon settings, the forecasting accuracy of the
AST-GCN is higher than that of the baselines. | [
"cs.LG"
] |
Given an image, generating its natural language description (i.e., caption)
is a well studied problem. Approaches proposed to address this problem usually
rely on image features that are difficult to interpret. Particularly, these
image features are subdivided into global and local features, where global
features are extracted from the global representation of the image, while local
features are extracted from the objects detected locally in an image. Although
local features extract rich visual information from the image, existing models
generate captions in a black-box manner, and humans have difficulty
interpreting which local objects a caption is meant to represent. Hence, in
this paper, we
propose a novel framework for the image captioning with an explicit object
(e.g., knowledge graph entity) selection process while still maintaining its
end-to-end training ability. The model first explicitly selects which local
entities to include in the caption according to a human-interpretable mask,
then generates proper captions by attending to the selected entities. Experiments
conducted on the MSCOCO dataset demonstrate that our method achieves good
performance in terms of the caption quality and diversity with a more
interpretable generating process than previous counterparts. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
With the wide use of intelligent systems in different domains, and in order to
increase driver and pedestrian safety, road and traffic sign recognition has
been a challenging issue and an important task for many years. However, studies
on the detection and recognition of traffic signs in images that address the
Arabic context remain insufficient. Detecting the road signs present in the
scene is one of the main stages of traffic sign detection and recognition. In
this paper, we present an efficient solution to enhance road sign detection
performance, including in the Arabic context, based on color segmentation, the
Randomized Hough Transform, and the combination of Zernike moments and Haralick
features. The segmentation stage determines the Region of Interest (ROI) in the
image. The Randomized Hough Transform (RHT) is used to detect circular and
octagonal shapes. This stage is improved by the extraction of Haralick features
and Zernike moments, which we then use as input to an SVM-based classifier.
Experimental results show that the proposed approach improves detection
precision. | [
"cs.CV",
"eess.IV"
] |
This manuscript proposes a posterior mean (PM) super-resolution (SR) method
with a compound Gaussian Markov random field (MRF) prior. SR is a technique to
estimate a spatially high-resolution image from observed multiple
low-resolution images. A compound Gaussian MRF model provides a preferable
prior for natural images that preserves edges. PM is the optimal estimator for
the objective function of peak signal-to-noise ratio (PSNR). This estimator is
numerically determined by using variational Bayes (VB). We then solve the
conjugate prior problem on VB and the exponential-order calculation cost
problem of a compound Gaussian MRF prior with simple Taylor approximations. In
experiments, the proposed method broadly outperforms existing methods. | [
"cs.CV"
] |
Face super-resolution (SR) has become an indispensable function in security
solutions such as video surveillance and identification systems, but distortion
of facial components remains a great challenge. Most state-of-the-art methods
utilize facial priors with deep neural networks; these methods require extra
labels, longer training time, and more memory. In this paper, we propose a
novel Edge and Identity Preserving Network for face SR, named EIPNet, to
minimize the distortion by utilizing a lightweight edge block and identity
information. We
present an edge block to extract perceptual edge information, and concatenate
it to the original feature maps in multiple scales. This structure
progressively provides edge information in reconstruction to aggregate local
and global structural information. Moreover, we define an identity loss
function to preserve the identity of SR images. The identity loss function
compares feature distributions between SR images and their ground truth to
recover identities in SR images. In addition, we provide a
luminance-chrominance error (LCE) to separately infer brightness and color
information in SR images. The LCE method not only reduces the dependency of
color information by dividing brightness and color components but also enables
our network to reflect differences between SR images and their ground truth in
two color spaces of RGB and YUV. The proposed method facilitates the proposed
SR network to elaborately restore facial components and generate high quality
8x scaled SR images with a lightweight network structure. Furthermore, our
network is able to reconstruct a 128x128 SR image at 215 fps on a GTX 1080Ti
GPU. Extensive experiments demonstrate that our network qualitatively and
quantitatively outperforms state-of-the-art methods on two challenging
datasets: CelebA and VGGFace2. | [
"cs.CV",
"eess.IV"
] |
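The luminance-chrominance error lends itself to a short sketch: compare the SR output and its ground truth in RGB and in YUV, so brightness and color are penalized separately. The BT.601 conversion and equal term weights below are our assumptions:

```python
import torch

def rgb_to_yuv(img):
    """Approximate BT.601 RGB -> YUV for (B, 3, H, W) tensors in [0, 1]."""
    r, g, b = img[:, 0], img[:, 1], img[:, 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance
    u = -0.147 * r - 0.289 * g + 0.436 * b         # chrominance
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return torch.stack([y, u, v], dim=1)

def lce_like_loss(sr, hr):
    """Penalize SR errors in both RGB and YUV color spaces."""
    rgb_term = (sr - hr).abs().mean()
    yuv_term = (rgb_to_yuv(sr) - rgb_to_yuv(hr)).abs().mean()
    return rgb_term + yuv_term                     # assumed equal weighting
```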
We present an end-to-end model using streaming physiological time series to
accurately predict near-term risk for hypoxemia, a rare, but life-threatening
condition known to cause serious patient harm during surgery. Our proposed
model makes inference on both hypoxemia outcomes and future input sequences,
enabled by a joint sequence autoencoder that simultaneously optimizes a
discriminative decoder for label prediction, and two auxiliary decoders trained
for data reconstruction and forecasting, which seamlessly learn
future-indicative latent representations. All decoders share a memory-based
encoder that helps
capture the global dynamics of patient data. In a large surgical cohort of
73,536 surgeries at a major academic medical center, our model outperforms all
baselines and gives a large performance gain over the state-of-the-art
hypoxemia prediction system. With a high sensitivity cutoff at 80%, it presents
99.36% precision in predicting hypoxemia and 86.81% precision in predicting the
much more severe and rare hypoxemic condition, persistent hypoxemia. With an
exceptionally low rate of false alarms, our proposed model is promising for
improving clinical decision making and easing the burden on the health system. | [
"cs.LG"
] |
With the current universal development of computing, user interaction
approaches based on the mouse, keyboard, touch pens, etc. are no longer
sufficient. Directly using hands or hand gestures as an input device, enabled
by Machine Learning and Computer Vision, is an appealing way to provide
applications. We consider human-computer interaction applications in which one
can simply draw different shapes, fill in colors, move a folder from one place
to another, and rotate an image by rotating a hand gesture, all without
touching the device. In this paper, Machine Learning based hand gesture
recognition is presented, and different types of gesture applications have
been created with the use of Computer Vision. | [
"cs.CV"
] |
Unsupervised image translation aims to learn the transformation from a source
domain to a target domain given unpaired training data. Several
state-of-the-art works have yielded impressive results in GAN-based
unsupervised image-to-image translation. However, these methods fail to capture
strong geometric or structural changes between domains, or produce
unsatisfactory results for complex scenes, compared to local texture-mapping
tasks such as style transfer. Recently, SAGAN (Han Zhang, 2018) showed that
self-attention networks produce better results than convolution-based GANs.
However, the effectiveness of self-attention networks in unsupervised
image-to-image translation tasks has not been verified. In this paper, we
propose an
unsupervised image-to-image translation with self-attention networks, in which
long range dependency helps to not only capture strong geometric change but
also generate details using cues from all feature locations. In experiments, we
qualitatively and quantitatively show the superiority of the proposed method
compared to existing state-of-the-art unsupervised image-to-image translation
methods. The source code and our results are online:
https://github.com/itsss/img2img_sa and
http://itsc.kr/2019/01/24/2019_img2img_sa | [
"cs.CV"
] |
We propose two new techniques for training Generative Adversarial Networks
(GANs). Our objectives are to alleviate mode collapse in GANs and improve the
quality of the generated samples. First, we propose neighbor embedding, a
manifold learning-based regularization to explicitly retain local structures of
latent samples in the generated samples. This prevents the generator from producing
nearly identical data samples from different latent samples, and reduces mode
collapse. We propose an inverse t-SNE regularizer to achieve this. Second, we
propose a new technique, gradient matching, to align the distributions of the
generated samples and the real samples. As it is challenging to work with
high-dimensional sample distributions, we propose to align these distributions
through the scalar discriminator scores. We constrain the difference between
the discriminator scores of the real samples and generated ones. We further
constrain the difference between the gradients of these discriminator scores.
We derive these constraints from Taylor approximations of the discriminator
function. We perform experiments to demonstrate that our proposed techniques
are computationally simple and easy to incorporate into existing systems. When
gradient matching and neighbor embedding are applied together, our GN-GAN
achieves outstanding results on 1D/2D synthetic, CIFAR-10, and STL-10 datasets,
e.g. FID score of $30.80$ for the STL-10 dataset. Our code is available at:
https://github.com/tntrung/gan | [
"cs.CV"
] |
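The gradient-matching constraint above can be sketched directly with autograd: penalize the difference of discriminator scores between real and generated samples, and the difference of the gradients of those scores. The equal weighting of the two terms is an assumption:

```python
import torch

def gradient_matching_penalty(disc, real, fake):
    """Align real/fake distributions through scalar discriminator scores.

    disc: discriminator mapping a batch to per-sample scores.
    real, fake: (N, ...) sample batches. Returns a penalty on the score
    difference and on the difference of input gradients of the scores.
    """
    real = real.clone().requires_grad_(True)
    fake = fake.clone().requires_grad_(True)
    s_real, s_fake = disc(real).mean(), disc(fake).mean()

    g_real = torch.autograd.grad(s_real, real, create_graph=True)[0]
    g_fake = torch.autograd.grad(s_fake, fake, create_graph=True)[0]

    score_term = (s_real - s_fake) ** 2
    grad_term = ((g_real.mean(dim=0) - g_fake.mean(dim=0)) ** 2).mean()
    return score_term + grad_term   # assumed equal weighting
```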
Scattering networks are a class of designed Convolutional Neural Networks
(CNNs) with fixed weights. We argue they can serve as generic representations
for modelling images. In particular, by working in scattering space, we achieve
competitive results both for supervised and unsupervised learning tasks, while
making progress towards constructing more interpretable CNNs. For supervised
learning, we demonstrate that the early layers of CNNs do not necessarily need
to be learned, and can be replaced with a scattering network instead. Indeed,
using hybrid architectures, we achieve the best results with predefined
representations to date, while being competitive with end-to-end learned CNNs.
Specifically, even applying a shallow cascade of small-windowed scattering
coefficients followed by 1$\times$1-convolutions results in AlexNet accuracy on
the ILSVRC2012 classification task. Moreover, by combining scattering networks
with deep residual networks, we achieve a single-crop top-5 error of 11.4% on
ILSVRC2012. Also, we show they can yield excellent performance in the small
sample regime on CIFAR-10 and STL-10 datasets, exceeding their end-to-end
counterparts, through their ability to incorporate geometrical priors. For
unsupervised learning, scattering coefficients can be a competitive
representation that permits image recovery. We use this fact to train hybrid
GANs to generate images. Finally, we empirically analyze several properties
related to stability and reconstruction of images from scattering coefficients. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Recent research on graph neural network (GNN) models successfully applied
GNNs to classical graph algorithms and combinatorial optimisation problems.
This has numerous benefits, such as allowing applications of algorithms when
preconditions are not satisfied, or reusing learned models when sufficient
training data is not available or cannot be generated. Unfortunately, a key
hindrance of these approaches is their lack of explainability, since GNNs are
black-box models that cannot be interpreted directly. In this work, we address
this limitation by applying existing work on concept-based explanations to GNN
models. We introduce concept-bottleneck GNNs, which rely on a modification to
the GNN readout mechanism. Using three case studies we demonstrate that: (i)
our proposed model is capable of accurately learning concepts and extracting
propositional formulas based on the learned concepts for each target class;
(ii) our concept-based GNN models achieve comparable performance with
state-of-the-art models; (iii) we can derive global graph concepts, without
explicitly providing any supervision on graph-level concepts. | [
"cs.LG"
] |
Graph neural networks are currently leading the performance charts in
learning-based molecule property prediction and classification. Computational
chemistry has, therefore, become a prominent testbed for generic graph
neural networks, as well as for specialized message passing methods. In this
work, we demonstrate that the replacement of the underlying networks with
hypernetworks leads to a boost in performance, obtaining state of the art
results in various benchmarks. A major difficulty in the application of
hypernetworks is their lack of stability. We tackle this by combining the
current message and the first message. A recent work has tackled the training
instability of hypernetworks in the context of error correcting codes, by
replacing the activation function of the message passing network with a
low-order Taylor approximation of it. We demonstrate that our generic solution
can replace this domain-specific solution. | [
"cs.LG",
"stat.ML"
] |
Generative adversarial networks (GANs) can implicitly learn rich
distributions over images, audio, and other data that are hard to model with an
explicit likelihood. We present a practical Bayesian formulation for
unsupervised and semi-supervised learning with GANs. Within this framework, we
use stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of
the generator and discriminator networks. The resulting approach is
straightforward and obtains good performance without any standard interventions
such as feature matching, or mini-batch discrimination. By exploring an
expressive posterior over the parameters of the generator, the Bayesian GAN
avoids mode-collapse, produces interpretable and diverse candidate samples, and
provides state-of-the-art quantitative results for semi-supervised learning on
benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN,
Wasserstein GANs, and DCGAN ensembles. | [
"stat.ML",
"cs.AI",
"cs.CV",
"cs.LG"
] |
Causal discovery is at the core of human cognition. It enables us to reason
about the environment and make counterfactual predictions about unseen
scenarios that can vastly differ from our previous experiences. We consider the
task of causal discovery from videos in an end-to-end fashion without
supervision on the ground-truth graph structure. In particular, our goal is to
discover the structural dependencies among environmental and object variables:
inferring the type and strength of interactions that have a causal effect on
the behavior of the dynamical system. Our model consists of (a) a perception
module that extracts a semantically meaningful and temporally consistent
keypoint representation from images, (b) an inference module for determining
the graph distribution induced by the detected keypoints, and (c) a dynamics
module that can predict the future by conditioning on the inferred graph. We
assume access to different configurations and environmental conditions, i.e.,
data from unknown interventions on the underlying system; thus, we can hope to
discover the correct underlying causal graph without explicit interventions. We
evaluate our method in a planar multi-body interaction environment and
scenarios involving fabrics of different shapes like shirts and pants.
Experiments demonstrate that our model can correctly identify the interactions
from a short sequence of images and make long-term future predictions. The
causal structure assumed by the model also allows it to make counterfactual
predictions and extrapolate to systems of unseen interaction graphs or graphs
of various sizes. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
This paper presents an end-to-end semi-supervised object detection approach,
in contrast to previous more complex multi-stage methods. The end-to-end
training gradually improves pseudo label qualities during the curriculum, and
the more and more accurate pseudo labels in turn benefit object detection
training. We also propose two simple yet effective techniques within this
framework: a soft teacher mechanism where the classification loss of each
unlabeled bounding box is weighted by the classification score produced by the
teacher network; a box jittering approach to select reliable pseudo boxes for
the learning of box regression. On the COCO benchmark, the proposed approach
outperforms previous methods by a large margin under various labeling ratios,
i.e., 1\%, 5\%, and 10\%. Moreover, our approach also proves to perform well when
the amount of labeled data is relatively large. For example, it can improve a
40.9 mAP baseline detector trained using the full COCO training set by +3.6
mAP, reaching 44.5 mAP, by leveraging the 123K unlabeled images of COCO. On the
state-of-the-art Swin Transformer based object detector (58.9 mAP on test-dev),
it can still significantly improve the detection accuracy by +1.5 mAP, reaching
60.4 mAP, and improve the instance segmentation accuracy by +1.2 mAP, reaching
52.4 mAP. Further incorporating the Object365 pre-trained model, the
detection accuracy reaches 61.3 mAP and the instance segmentation accuracy
reaches 53.0 mAP, pushing the new state-of-the-art. | [
"cs.CV",
"cs.AI"
] |
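The soft teacher mechanism reduces to a weighted classification loss. The PyTorch sketch below follows the spirit of the description (teacher-produced scores weighting per-box losses); the exact assignment and score definition in the paper differ in detail:

```python
import torch
import torch.nn.functional as F

def soft_teacher_cls_loss(student_logits, pseudo_labels, teacher_scores):
    """Weight each unlabeled box's classification loss by a teacher score.

    student_logits: (N, C) class logits for N box candidates.
    pseudo_labels:  (N,)   hard pseudo labels derived from the teacher.
    teacher_scores: (N,)   reliability weights in [0, 1] from the teacher.
    """
    per_box = F.cross_entropy(student_logits, pseudo_labels, reduction="none")
    return (teacher_scores * per_box).sum() / teacher_scores.sum().clamp(min=1e-6)
```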
Hashing has been widely used in approximate nearest neighbor search for
large-scale database retrieval because of its computation and storage
efficiency. Deep hashing, which devises convolutional neural network
architectures to exploit and extract the semantic information or features of
images, has received increasing attention recently. In this survey, several
deep supervised hashing methods for image retrieval are evaluated, and I
summarize three main directions for deep supervised hashing methods. Several
comments are made at the end. Moreover, to break through the bottleneck of
existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as
an attempt. Specifically, I devise a CNN architecture to extract the semantic
features of images and design a loss function to encourage similar images to
be projected close together. To this end, I propose a concept: the shadow of
the CNN output. During the optimization process, the CNN output and its shadow
guide each other so as to approach the optimal solution as closely as possible.
Several experiments on the CIFAR-10 dataset show the satisfying performance of
SRH. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Machine learning models are susceptible to adversarial perturbations: small
changes to input that can cause large changes in output. It is also
demonstrated that there exist input-agnostic perturbations, called universal
adversarial perturbations, which can change the inference of target model on
most of the data samples. However, existing methods to craft universal
perturbations are (i) task specific, (ii) require samples from the training
data distribution, and (iii) perform complex optimizations. Additionally,
because of the data dependence, fooling ability of the crafted perturbations is
proportional to the available training data. In this paper, we present a novel,
generalizable, and data-free approach for crafting universal adversarial
perturbations. Independent of the underlying task, our objective achieves
fooling via corrupting the extracted features at multiple layers. Therefore,
the proposed objective is generalizable to craft image-agnostic perturbations
across multiple vision tasks such as object recognition, semantic segmentation,
and depth estimation. In the practical black-box attack setting (when the
attacker does not have access to the target model and its training
data), we show that our objective outperforms the data dependent objectives to
fool the learned models. Further, via exploiting simple priors related to the
data distribution, our objective remarkably boosts the fooling ability of the
crafted perturbations. Significant fooling rates achieved by our objective
emphasize that the current deep learning models are now at an increased risk,
since our objective generalizes across multiple tasks without the requirement
of training data for crafting the perturbations. To encourage reproducible
research, we have released the codes for our proposed algorithm. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
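A data-free feature-corruption objective of the kind described above can be sketched in a few lines: feed the perturbation itself through the network and push up the activation norms it produces at several layers. The log formulation, layer list, and clamping below are illustrative assumptions:

```python
import torch

def feature_corruption_step(layers, delta, opt, eps=10 / 255):
    """One optimization step for a data-free universal perturbation.

    layers: assumed list of successive feature extractors (e.g., stages of
    the target CNN). delta: (C, H, W) learnable perturbation. opt: optimizer
    over [delta]. Maximizing feature norms produced by delta alone tends to
    corrupt features for most inputs, without needing training data.
    """
    x = delta.unsqueeze(0)                        # perturbation as the input
    loss = 0.0
    for layer in layers:
        x = layer(x)
        loss = loss - torch.log(x.norm() + 1e-8)  # push activation norms up
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                         # keep delta imperceptible
        delta.clamp_(-eps, eps)
    return float(loss)
```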
Previous methods on estimating detailed human depth often require supervised
training with `ground truth' depth data. This paper presents a self-supervised
method that can be trained on YouTube videos without known depth, which makes
training data collection simple and improves the generalization of the learned
network. The self-supervised learning is achieved by minimizing a
photo-consistency loss, which is evaluated between a video frame and its
neighboring frames warped according to the estimated depth and the 3D non-rigid
motion of the human body. To solve this non-rigid motion, we first estimate a
rough SMPL model at each video frame and compute the non-rigid body motion
accordingly, which enables self-supervised learning on estimating the shape
details. Experiments demonstrate that our method enjoys better generalization
and performs much better on data in the wild. | [
"cs.CV"
] |
Predicting the movement of objects while the actions of the learning agent
interact with the dynamics of the scene remains a key challenge in robotics. We
propose a multi-layer Long Short-Term Memory (LSTM) autoencoder network that
predicts future frames for a robot navigating in a dynamic environment with
moving obstacles. The autoencoder network is composed of a state and action
conditioned decoder network that reconstructs the future frames of video,
conditioned on the action taken by the agent. The input image frames are first
transformed into low dimensional feature vectors with a pre-trained encoder
network and then reconstructed with the LSTM autoencoder network to generate
the future frames. A virtual environment, based on the OpenAI Gym framework for
robotics, is used to gather training data and test the proposed network. The
initial experiments show promising results indicating that these predicted
frames can be used by an appropriate reinforcement learning framework in the future
to navigate around dynamic obstacles. | [
"cs.LG",
"cs.CV",
"cs.RO",
"stat.ML",
"68T05"
] |
The majority of the existing methods for non-rigid 3D surface regression from
monocular 2D images require an object template or point tracks over multiple
frames as an input, and are still far from real-time processing rates. In this
work, we present the Isometry-Aware Monocular Generative Adversarial Network
(IsMo-GAN) - an approach for direct 3D reconstruction from a single image,
trained for the deformation model in an adversarial manner on a light-weight
synthetic dataset. IsMo-GAN reconstructs surfaces from real images under
varying illumination, camera poses, textures and shading at over 250 Hz. In
multiple experiments, it consistently outperforms several approaches in the
reconstruction accuracy, runtime, generalisation to unknown surfaces and
robustness to occlusions. In comparison to the state-of-the-art, we reduce the
reconstruction error by 10-30% including the textureless case and our surfaces
evince fewer artefacts qualitatively. | [
"cs.CV"
] |
We propose a temporally coherent generative model addressing the
super-resolution problem for fluid flows. Our work represents a first approach
to synthesize four-dimensional physics fields with neural networks. Based on a
conditional generative adversarial network that is designed for the inference
of three-dimensional volumetric data, our model generates consistent and
detailed results by using a novel temporal discriminator, in addition to the
commonly used spatial one. Our experiments show that the generator is able to
infer more realistic high-resolution details by using additional physical
quantities, such as low-resolution velocities or vorticities. Besides
improvements in the training process and in the generated outputs, these inputs
offer means for artistic control as well. We additionally employ a
physics-aware data augmentation step, which is crucial to avoid overfitting and
to reduce memory requirements. In this way, our network learns to generate
advected quantities with highly detailed, realistic, and temporally coherent
features. Our method works instantaneously, using only a single time-step of
low-resolution fluid data. We demonstrate the abilities of our method using a
variety of complex inputs and applications in two and three dimensions. | [
"cs.LG",
"cs.GR"
] |
Convolutional neural networks with many layers have recently been shown to
achieve excellent results on many high-level tasks such as image
classification, object detection and more recently also semantic segmentation.
Particularly for semantic segmentation, a two-stage procedure is often
employed: convolutional networks are first trained to provide good local
pixel-wise features for the second step, traditionally a more global
graphical model. In this work we unify this two-stage process into a single
joint training algorithm. We demonstrate our method on the semantic image
segmentation task and show encouraging results on the challenging PASCAL VOC
2012 dataset. | [
"cs.CV",
"cs.LG"
] |
Inspired by the recent work of Islamov et al. (2021), we propose a family of
Federated Newton Learn (FedNL) methods, which we believe is a marked step in
the direction of making second-order methods applicable to FL. In contrast to
the aforementioned work, FedNL employs a different Hessian learning technique
which i) enhances privacy as it does not require the training data to be
revealed to the coordinating server, ii) makes it applicable beyond generalized
linear models, and iii) provably works with general contractive compression
operators for compressing the local Hessians, such as Top-$K$ or Rank-$R$,
which are vastly superior in practice. Notably, we do not need to rely on error
feedback for our methods to work with contractive compressors. Moreover, we
develop FedNL-PP, FedNL-CR and FedNL-LS, which are variants of FedNL that
support partial participation, and globalization via cubic regularization and
line search, respectively, and FedNL-BC, which is a variant that can further
benefit from bidirectional compression of gradients and models, i.e., smart
uplink gradient and smart downlink model compression. We prove local
convergence rates that are independent of the condition number, the number of
training data points, and compression variance. Our communication efficient
Hessian learning technique provably learns the Hessian at the optimum. Finally,
we perform a variety of numerical experiments that show that our FedNL methods
have state-of-the-art communication complexity when compared to key baselines. | [
"cs.LG",
"cs.DC",
"math.OC"
] |
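The Hessian learning idea above can be illustrated in a few lines: each client repeatedly sends only a compressed difference between its true local Hessian and the current estimate. This is a simplified sketch of that pattern with a Top-$K$ contractive compressor; the step size and loop are illustrative assumptions, not the exact FedNL algorithm.

```python
# Sketch of compressed Hessian learning with a Top-K contractive
# compressor. Step size and iteration count are illustrative.
import numpy as np

def top_k(M, k):
    """Keep the k largest-magnitude entries of M, zero out the rest."""
    flat = M.flatten()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(M.shape)

def hessian_learning_step(H_est, H_true, k, alpha=1.0):
    """The client transmits only top_k(H_true - H_est); the server
    moves its estimate toward the true local Hessian."""
    return H_est + alpha * top_k(H_true - H_est, k)

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10))
H_true = A @ A.T                     # a fixed SPD "local Hessian"
H = np.zeros_like(H_true)
for _ in range(50):
    H = hessian_learning_step(H, H_true, k=20)
print(np.linalg.norm(H - H_true))    # shrinks toward 0
```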
Generative Adversarial Networks (GANs) produce impressive results on
unconditional image generation when powered with large-scale image datasets.
Yet generated images are still easy to spot, especially on datasets with high
variance (e.g., bedroom, church). In this paper, we propose various improvements
to further push the boundaries in image generation. Specifically, we propose a
novel dual contrastive loss and show that, with this loss, the discriminator
learns more generalized and distinguishable representations to incentivize
generation.
In addition, we revisit attention and extensively experiment with different
attention blocks in the generator. We find attention to be still an important
module for successful image generation even though it was not used in the
recent state-of-the-art models. Lastly, we study different attention
architectures in the discriminator, and propose a reference attention
mechanism. By combining the strengths of these remedies, we improve the
compelling state-of-the-art Fr\'{e}chet Inception Distance (FID) by at least
17.5% on several benchmark datasets. We obtain even more significant
improvements on compositional synthetic scenes (up to 47.5% in FID). | [
"cs.CV",
"cs.GR"
] |
Object detection is a crucial task for autonomous driving. In addition to
requiring high accuracy to ensure safety, object detection for autonomous
driving also requires real-time inference speed to guarantee prompt vehicle
control, as well as small model size and energy efficiency to enable embedded
system deployment.
In this work, we propose SqueezeDet, a fully convolutional neural network for
object detection that aims to simultaneously satisfy all of the above
constraints. In our network, we use convolutional layers not only to extract
feature maps but also as the output layer to compute bounding boxes and class
probabilities. The detection pipeline of our model only contains a single
forward pass of a neural network, thus it is extremely fast. Our model is
fully-convolutional, which leads to a small model size and better energy
efficiency. While achieving the same accuracy as previous baselines, our model
is 30.4x smaller, 19.7x faster, and consumes 35.2x lower energy. The code is
open-sourced at \url{https://github.com/BichenWuUCB/squeezeDet}. | [
"cs.CV"
] |
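The single-pass design hinges on using a convolution as the detection output layer: one convolution emits, for every feature-map cell and anchor, box deltas, a confidence score, and class probabilities. The sketch below illustrates such a head in the spirit of the abstract; the channel, anchor, and class counts are illustrative assumptions.

```python
# Sketch of a fully convolutional detection head: a single conv layer
# predicts boxes, confidences, and class scores per anchor per cell.
import torch
import torch.nn as nn

class ConvDetHead(nn.Module):
    def __init__(self, in_ch=512, num_anchors=9, num_classes=3):
        super().__init__()
        self.k, self.c = num_anchors, num_classes
        # per anchor: 4 box deltas + 1 confidence + num_classes scores
        self.conv = nn.Conv2d(in_ch, num_anchors * (5 + num_classes),
                              kernel_size=3, padding=1)

    def forward(self, feat):
        B, _, H, W = feat.shape
        out = self.conv(feat).view(B, self.k, 5 + self.c, H, W)
        deltas = out[:, :, :4]                     # box regression
        conf = torch.sigmoid(out[:, :, 4])         # objectness
        cls = torch.softmax(out[:, :, 5:], dim=2)  # class probabilities
        return deltas, conf, cls

head = ConvDetHead()
d, cf, cl = head(torch.randn(1, 512, 24, 78))
print(d.shape, cf.shape, cl.shape)
```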
Vehicle Re-identification aims to identify a specific vehicle across time and
camera view. With the rapid growth of intelligent transportation systems and
smart cities, vehicle Re-identification technology gets more and more
attention. However, due to differences in shooting angle and the high
similarity of vehicles belonging to the same brand, vehicle re-identification
remains a great challenge for existing methods. In this paper, we propose a
vehicle attribute-guided method to re-rank vehicle Re-ID result. The attributes
used include vehicle orientation and vehicle brand. We also focus on the
camera information and introduce camera mutual exclusion theory to further
fine-tune the search results. In terms of feature extraction, we combine
multi-resolution data augmentation with a large model ensemble to obtain more
robust vehicle features. Our method achieves an mAP of 63.73% and a rank-1
accuracy of 76.61% in the CVPR 2021 AI City Challenge. | [
"cs.CV",
"cs.AI"
] |
As a unique and promising biometric, video-based gait recognition has broad
applications. The key step of this methodology is to learn the walking pattern
of individuals, which, however, poses the challenge of extracting behavioral
features directly from a sequence. Most existing methods focus
on either the appearance or the motion pattern. To overcome these limitations,
we propose a sequential convolutional network (SCN) from a novel perspective,
where spatiotemporal features can be learned by a basic convolutional backbone.
In SCN, behavioral information extractors (BIE) are constructed to comprehend
intermediate feature maps in time series through motion templates where the
relationship between frames can be analyzed, thereby distilling the information
of the walking pattern. Furthermore, a multi-frame aggregator in SCN performs
feature integration on a sequence whose length is uncertain, via a mobile 3D
convolutional layer. To demonstrate the effectiveness, experiments have been
conducted on two popular public benchmarks, CASIA-B and OU-MVLP, and our
approach demonstrates superior performance compared with state-of-the-art
methods. | [
"cs.CV"
] |
Using deep learning techniques to process 3D objects has achieved many
successes. However, few methods focus on the representation of 3D objects,
which could be more effective for specific tasks than traditional
representations, such as point clouds, voxels, and multi-view images. In this
paper, we propose a Sphere Node Graph (SN-Graph) to represent 3D objects.
Specifically, we extract a certain number of internal spheres (as nodes) from
the signed distance field (SDF), and then establish connections (as edges)
among the sphere nodes to construct a graph, which is seamlessly suitable for
3D analysis using graph neural network (GNN). Experiments conducted on the
ModelNet40 dataset show that when there are fewer nodes in the graph or the
tested objects are rotated arbitrarily, the classification accuracy of SN-Graph
is significantly higher than the state-of-the-art methods. | [
"cs.CV",
"cs.LG"
] |
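A toy version of the graph construction step is easy to write down once the sphere centres and radii are available (the abstract extracts them from the signed distance field, which is omitted here); the k-nearest-neighbour edge rule below is an illustrative simplification of the paper's connection strategy.

```python
# Sketch: turn internal spheres (centres + radii) into a graph for a GNN.
# The kNN edge rule is an illustrative stand-in for the paper's rule.
import numpy as np

def sn_graph(centers, radii, k=3):
    n = len(centers)
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    edges = set()
    for i in range(n):
        for j in np.argsort(d[i])[:k]:
            edges.add((min(i, j), max(i, j)))   # undirected edge
    node_feats = np.concatenate([centers, radii[:, None]], axis=1)
    return node_feats, sorted(edges)

rng = np.random.default_rng(1)
centers = rng.uniform(size=(8, 3))
radii = rng.uniform(0.05, 0.2, size=8)
feats, edges = sn_graph(centers, radii)
print(feats.shape, len(edges))       # (8, 4) node features, edge list
```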
State-of-the-art semantic segmentation approaches increase the receptive
field of their models by using either a downsampling path composed of
poolings/strided convolutions or successive dilated convolutions. However, it
is not clear which operation leads to best results. In this paper, we
systematically study the differences introduced by distinct receptive field
enlargement methods and their impact on the performance of a novel
architecture, called Fully Convolutional DenseResNet (FC-DRN). FC-DRN has a
densely connected backbone composed of residual networks. Following standard
image segmentation architectures, receptive field enlargement operations that
change the representation level are interleaved among residual networks. This
allows the model to exploit the benefits of both residual and dense
connectivity patterns, namely: gradient flow, iterative refinement of
representations, multi-scale feature combination and deep supervision. In order
to highlight the potential of our model, we test it on the challenging CamVid
urban scene understanding benchmark and make the following observations: 1)
downsampling operations outperform dilations when the model is trained from
scratch, 2) dilations are useful during the finetuning step of the model, 3)
coarser representations require less refinement steps, and 4) ResNets (by model
construction) are good regularizers, since they can reduce the model capacity
when needed. Finally, we compare our architecture to alternative methods and
report state-of-the-art result on the Camvid dataset, with at least twice fewer
parameters. | [
"cs.CV"
] |
Humans excel in continuously learning with small data without forgetting how
to solve old problems. However, neural networks require large datasets to
compute latent representations across different tasks while minimizing a loss
function. For example, a natural language understanding (NLU) system will often
deal with emerging entities during its deployment as interactions with users in
realistic scenarios will generate new and infrequent names, events, and
locations. Here, we address this scenario by introducing an RL trainable
controller that disentangles the representation learning of a neural encoder
from its memory management role.
Our proposed solution is straightforward and simple: we train a controller to
execute an optimal sequence of reading and writing operations on an external
memory with the goal of leveraging diverse activations from the past and
provide accurate predictions. Our approach is named Learning to Control (LTC)
and allows few-shot learning with two degrees of memory plasticity. We
experimentally show that our system obtains accurate results for few-shot
learning of entity recognition in the Stanford Task-Oriented Dialogue dataset. | [
"cs.LG"
] |
We consider the problem of modeling cardiovascular responses to physical
activity and sleep changes captured by wearable sensors in free living
conditions. We use an attentional convolutional neural network to learn
parsimonious signatures of individual cardiovascular response from data
recorded at the minute level resolution over several months on a cohort of 80k
people. We demonstrate internal validity by showing that signatures generated
on an individual's 2017 data generalize to predict minute-level heart rate from
physical activity and sleep for the same individual in 2018, outperforming
several time-series forecasting baselines. We also show external validity
demonstrating that signatures outperform plain resting heart rate (RHR) in
predicting variables associated with cardiovascular functions, such as age and
Body Mass Index (BMI). We believe that the computed cardiovascular signatures
have utility in monitoring cardiovascular health over time, including detecting
abnormalities and quantifying recovery from acute events. | [
"cs.LG",
"cs.CY",
"stat.ML"
] |
Gait as a biometric trait has attracted much attention in many security and
privacy applications such as identity recognition and authentication, during
the last few decades. Because of its nature as a long-distance biometric trait,
gait can be easily collected and used to identify individuals non-intrusively
through CCTV cameras. However, it is very difficult to develop robust automated
gait recognition systems, since gait may be affected by many covariate factors
such as clothing, walking speed, camera view angle, etc. Among these, large
view angle changes have been deemed the most challenging factor, as they can
alter the overall gait appearance substantially.
Existing works on gait recognition fall short of providing satisfactory
performance under such view changes. Furthermore, very few works have
considered evidence -- the demonstrable information revealing the reliability
of decisions, which is regarded as an important demand in machine
learning-based recognition/authentication applications. To address these
issues, in this paper we propose a Discriminant Gait Generative Adversarial
Network, namely DiGGAN, which can effectively extract view-invariant features
for cross-view gait recognition; and more importantly, to transfer gait images
to different views -- serving as evidence and showing how the decisions have
been made. Quantitative experiments have been conducted on the two most popular
cross-view gait datasets, the OU-MVLP and CASIA-B, where the proposed DiGGAN
has outperformed state-of-the-art methods. Qualitative analysis has also been
provided and demonstrates the proposed DiGGAN's capability in providing
evidences. | [
"cs.CV"
] |
Low-rank structures play an important role in recent advances of many problems
in image science and data science. As a natural extension of low-rank
structures for data with nonlinear structures, the concept of the
low-dimensional manifold structure has been considered in many data processing
problems. Inspired by this concept, we consider a manifold based low-rank
regularization as a linear approximation of manifold dimension. This
regularization is less restrictive than the global low-rank regularization,
and thus enjoys more flexibility in handling data with nonlinear structures.
As applications, we demonstrate the proposed regularization on classical
inverse
problems in image sciences and data sciences including image inpainting, image
super-resolution, X-ray computer tomography (CT) image reconstruction and
semi-supervised learning. We conduct intensive numerical experiments in several
image restoration problems and a semi-supervised learning problem of
classifying handwritten digits using the MNIST data. Our numerical tests
demonstrate the effectiveness of the proposed methods and illustrate that the
new regularization methods produce outstanding results by comparing with many
existing methods. | [
"cs.CV",
"math.NA",
"65D18, 65J22, 68U10, 68Q32"
] |
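As background for the contrast drawn above, the global low-rank regularization that the manifold-based approach generalizes can be sketched in a few lines: matrix completion by iterative singular value thresholding with a data-fidelity projection. The threshold, rank, and iteration count below are illustrative assumptions.

```python
# Background sketch: global low-rank matrix completion via singular
# value thresholding (SVT) with projection onto the observed entries.
import numpy as np

def svt(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
X_true = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 30))  # rank 4
mask = rng.random((30, 30)) < 0.5                             # observed
X = np.where(mask, X_true, 0.0)
for _ in range(100):
    X = svt(X, tau=0.5)          # shrink singular values (low rank)
    X[mask] = X_true[mask]       # keep observed entries fixed
print(np.linalg.norm((X - X_true)[~mask]))  # error on missing entries
```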
Transfer Learning (TL) aims to transfer knowledge acquired in one problem,
the source problem, onto another problem, the target problem, dispensing with
the bottom-up construction of the target model. Due to its relevance, TL has
gained significant interest in the Machine Learning community since it paves
the way to devise intelligent learning models that can easily be tailored to
many different applications. As it is natural in a fast evolving area, a wide
variety of TL methods, settings and nomenclature have been proposed so far.
However, a wide range of works report different names for the same concepts.
This mixture of concepts and terminology obscures the TL field and hinders its
proper consideration. In this paper we present a
review of the literature on the majority of classification TL methods, and also
a distribution-based categorization of TL with a common nomenclature suitable
to classification problems. Under this perspective three main TL categories are
presented, discussed and illustrated with examples. | [
"cs.LG"
] |
The recent success of deep neural networks (DNNs) for function approximation
in reinforcement learning has triggered the development of Deep Reinforcement
Learning (DRL) algorithms in various fields, such as robotics, computer games,
natural language processing, computer vision, sensing systems, and wireless
networking. Unfortunately, DNNs suffer from high computational cost and memory
consumption, which limits the use of DRL algorithms in systems with limited
hardware resources. In recent years, pruning algorithms have demonstrated
considerable success in reducing the redundancy of DNNs in classification
tasks. However, existing algorithms suffer from a significant performance
reduction in the DRL domain. In this paper, we develop the first effective
solution to the performance reduction problem of pruning in the DRL domain, and
establish a working algorithm, named Policy Pruning and Shrinking (PoPS), to
train DRL models with strong performance while achieving a compact
representation of the DNN. The framework is based on a novel iterative policy
pruning and shrinking method that leverages the power of transfer learning when
training the DRL model. We present an extensive experimental study that
demonstrates the strong performance of PoPS using the popular Cartpole, Lunar
Lander, Pong, and Pacman environments. Finally, we release open-source
software for the benefit of researchers and developers in related fields. | [
"cs.LG",
"cs.AI"
] |
Large-scale non-convex sparsity-constrained problems have recently gained
extensive attention. Most existing deterministic optimization methods (e.g.,
GraSP) are not suitable for large-scale and high-dimensional problems, and thus
stochastic optimization methods with hard thresholding (e.g., SVRGHT) become
more attractive. Inspired by GraSP, this paper proposes a new general relaxed
gradient support pursuit (RGraSP) framework, in which the sub-algorithm is
only required to satisfy a slack descent condition. We also design two specific
semi-stochastic gradient hard thresholding algorithms. In particular, our
algorithms have much less hard thresholding operations than SVRGHT, and their
average per-iteration cost is much lower (i.e., O(d) vs. O(d log(d)) for
SVRGHT), which leads to faster convergence. Our experimental results on both
synthetic and real-world datasets show that our algorithms are superior to the
state-of-the-art gradient hard thresholding methods. | [
"cs.LG",
"cs.CV",
"math.OC",
"stat.ML"
] |
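The hard thresholding operation that dominates SVRGHT's per-iteration cost simply keeps the $s$ largest-magnitude coordinates. Below is a minimal sketch of plain gradient hard thresholding on a sparsity-constrained least-squares toy problem; it illustrates the operator itself, not the specific RGraSP sub-algorithms or their semi-stochastic variants.

```python
# Sketch: iterative gradient hard thresholding for
# min ||Ax - b||^2  subject to  ||x||_0 <= s.
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -s)[-s:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 500))
x_true = hard_threshold(rng.normal(size=500), 10)   # 10-sparse signal
b = A @ x_true
x = np.zeros(500)
step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz const.
for _ in range(200):
    x = hard_threshold(x - step * A.T @ (A @ x - b), 10)
print(np.linalg.norm(x - x_true))                   # close to 0
```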
Deep neural network (DNN) based approaches have been widely investigated and
deployed in medical image analysis. For example, fully convolutional neural
networks (FCN) achieve the state-of-the-art performance in several applications
of 2D/3D medical image segmentation. Even the baseline neural network models
(U-Net, V-Net, etc.) have been proven to be very effective and efficient when
the training process is set up properly. Nevertheless, to fully exploit the
potential of neural networks, we propose an automated searching approach for
the optimal training strategy with reinforcement learning. The proposed
approach can be utilized for tuning hyper-parameters, and selecting necessary
data augmentation with certain probabilities. The proposed approach is
validated on several tasks of 3D medical image segmentation. The performance of
the baseline model is boosted after searching, and it can achieve comparable
accuracy to other manually-tuned state-of-the-art segmentation approaches. | [
"cs.CV"
] |
We present a novel approach for unsupervised learning of depth and ego-motion
from monocular video. Unsupervised learning removes the need for separate
supervisory signals (depth or ego-motion ground truth, or multi-view video).
Prior work in unsupervised depth learning uses pixel-wise or gradient-based
losses, which only consider pixels in small local neighborhoods. Our main
contribution is to explicitly consider the inferred 3D geometry of the scene,
enforcing consistency of the estimated 3D point clouds and ego-motion across
consecutive frames. This is a challenging task and is solved by a novel
(approximate) backpropagation algorithm for aligning 3D structures.
We combine this novel 3D-based loss with 2D losses based on photometric
quality of frame reconstructions using estimated depth and ego-motion from
adjacent frames. We also incorporate validity masks to avoid penalizing areas
in which no useful information exists.
We test our algorithm on the KITTI dataset and on a video dataset captured on
an uncalibrated mobile phone camera. Our proposed approach consistently
improves depth estimates on both datasets, and outperforms the state-of-the-art
for both depth and ego-motion. Because we only require a simple video, learning
depth and ego-motion on large and varied datasets becomes possible. We
demonstrate this by training on the low quality uncalibrated video dataset and
evaluating on KITTI, ranking among top performing prior methods which are
trained on KITTI itself. | [
"cs.CV"
] |
Cancer is a complex disease, the understanding and treatment of which are
being aided through increases in the volume of collected data and in the scale
of deployed computing power. Consequently, there is a growing need for the
development of data-driven and, in particular, deep learning methods for
various tasks such as cancer diagnosis, detection, prognosis, and prediction.
Despite recent successes, however, designing high-performing deep learning
models for nonimage and nontext cancer data is a time-consuming,
trial-and-error, manual task that requires both cancer domain and deep learning
expertise. To that end, we develop a reinforcement-learning-based neural
architecture search to automate deep-learning-based predictive model
development for a class of representative cancer data. We develop custom
building blocks that allow domain experts to incorporate the
cancer-data-specific characteristics. We show that our approach discovers deep
neural network architectures that have significantly fewer trainable
parameters, shorter training time, and accuracy similar to or higher than those
of manually designed architectures. We study and demonstrate the scalability of
our approach on up to 1,024 Intel Knights Landing nodes of the Theta
supercomputer at the Argonne Leadership Computing Facility. | [
"cs.LG",
"stat.ML"
] |
Transformation Equivariant Representations (TERs) aim to capture the
intrinsic visual structures that equivary to various transformations by
expanding the notion of {\em translation} equivariance underlying the success
of Convolutional Neural Networks (CNNs). For this purpose, we present both
deterministic AutoEncoding Transformations (AET) and probabilistic AutoEncoding
Variational Transformations (AVT) models to learn visual representations from
generic groups of transformations. While the AET is trained by directly
decoding the transformations from the learned representations, the AVT is
trained by maximizing the joint mutual information between the learned
representation and transformations. This results in Generalized TERs (GTERs)
equivariant against transformations in a more general fashion by capturing
complex patterns of visual structures beyond the conventional linear
equivariance under a transformation group. The presented approach can be
extended to (semi-)supervised models by jointly maximizing the mutual
information of the learned representation with both labels and transformations.
Experiments demonstrate the proposed models outperform the state-of-the-art
models in both unsupervised and (semi-)supervised tasks. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Recently, neural network compression schemes like channel pruning have been
widely used to reduce the model size and computational complexity of deep
neural network (DNN) for applications in power-constrained scenarios such as
embedded systems. Reinforcement learning (RL)-based auto-pruning has been
further proposed to automate the DNN pruning process to avoid expensive
hand-crafted work. However, the RL-based pruner involves a time-consuming
training process and the high expense of each sample further exacerbates this
problem. These impediments have greatly restricted the real-world application
of RL-based auto-pruning. Thus, in this paper, we propose an efficient
auto-pruning framework which solves this problem by taking advantage of the
historical data from the previous auto-pruning process. In our framework, we
first boost the convergence of the RL-pruner by transfer learning. Then, an
augmented transfer learning scheme is proposed to further speed up the training
process by improving the transferability. Finally, an assistant learning
process is proposed to improve the sample efficiency of the RL agent. The
experiments have shown that our framework can accelerate the auto-pruning
process by 1.5-2.5 times for ResNet20, and 1.81-2.375 times for other neural
networks like ResNet56, ResNet18, and MobileNet v1. | [
"cs.LG",
"cs.AI"
] |
This paper presents Poisoning MorphNet, the first backdoor attack method on
point clouds. Conventional adversarial attack takes place in the inference
stage, often fooling a model by perturbing samples. In contrast, backdoor
attack aims to implant triggers into a model during the training stage, such
that the victim model acts normally on the clean data unless a trigger is
present in a sample. This work follows a typical setting of clean-label
backdoor attack, where a few poisoned samples (with their content tampered yet
labels unchanged) are injected into the training set. The unique contributions
of MorphNet are two-fold. First, it is key to ensure that the implanted
triggers are both visually imperceptible to humans and lead to a high attack
success rate on the point clouds. To this end, MorphNet jointly optimizes two objectives for
sample-adaptive poisoning: a reconstruction loss that preserves the visual
similarity between benign and poisoned point clouds, and a classification loss
that encourages a modern point cloud recognition model to misclassify the
poisoned sample into a pre-specified target category. This implicitly
conducts spectral separation over point clouds, hiding sample-adaptive triggers
in fine-grained high-frequency details. Second, existing backdoor attack
methods are mainly designed for image data and are easily defended against by
point-cloud-specific operations (such as denoising). We propose a third loss in MorphNet
for suppressing isolated points, leading to improved resistance to
denoising-based defense. Comprehensive evaluations are conducted on ModelNet40
and ShapeNetCore v2. Our proposed Poisoning MorphNet outstrips all previous
methods by clear margins. | [
"cs.CV"
] |
The problems of shape classification and part segmentation from 3D point
clouds have garnered increasing attention in the last few years. Both of these
problems, however, suffer from relatively small training sets, creating the
need for statistically efficient methods to learn 3D shape representations. In
this paper, we investigate the use of Approximate Convex Decompositions (ACD)
as a self-supervisory signal for label-efficient learning of point cloud
representations. We show that using ACD to approximate ground truth
segmentation provides excellent self-supervision for learning 3D point cloud
representations that are highly effective on downstream tasks. We report
improvements over the state-of-the-art for unsupervised representation learning
on the ModelNet40 shape classification dataset and significant gains in
few-shot part segmentation on the ShapeNetPart dataset. Code available at
https://github.com/matheusgadelha/PointCloudLearningACD | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
State-of-the-art pedestrian detectors have achieved significant progress on
non-occluded pedestrians, yet they are still struggling under heavy occlusions.
The recent occlusion handling strategy of popular two-stage approaches is to
build a two-branch architecture with the help of additional visible body
annotations. Nonetheless, these methods still have some weaknesses: either the
two branches are trained independently with only score-level fusion, which
cannot guarantee that the detectors learn sufficiently robust pedestrian
features, or attention mechanisms are exploited only to emphasize the visible
body features. However, the visible body features of heavily occluded pedestrians
are concentrated on a relatively small area, which will easily cause missing
detections. To address the above issues, we propose in this paper a novel
Mutual-Supervised Feature Modulation (MSFM) network, to better handle occluded
pedestrian detection. The key MSFM module in our network calculates the
similarity loss of full body boxes and visible body boxes corresponding to the
same pedestrian so that the full-body detector could learn more complete and
robust pedestrian features with the assistance of contextual features from the
occluding parts. To facilitate the MSFM module, we also propose a novel
two-branch architecture, consisting of a standard full body detection branch
and an extra visible body classification branch. These two branches are trained
in a mutual-supervised way with full body annotations and visible body
annotations, respectively. To verify the effectiveness of our proposed method,
extensive experiments are conducted on two challenging pedestrian datasets:
Caltech and CityPersons, and our approach achieves superior performance
compared to other state-of-the-art methods on both datasets, especially in
heavy occlusion case. | [
"cs.CV"
] |
Cluster structure detection is a fundamental task for the analysis of graphs,
in order to understand and to visualize their functional characteristics. Among
the different cluster structure detection methods, spectral clustering is
currently one of the most widely used due to its speed and simplicity. Yet,
there are few theoretical guarantees for recovering the underlying partitions
of the graph under general models. This paper therefore presents a variant of
spectral clustering, called $\ell_1$-spectral clustering, performed on a new
random model closely related to the stochastic block model. Its goal is to
promote a sparse eigenbasis solution of an $\ell_1$ minimization problem
revealing the natural structure of the graph. The effectiveness and robustness
to small noise perturbations of our technique are confirmed through a
collection of simulated and real data
examples. | [
"stat.ML",
"cs.LG"
] |
The motivation of this paper is to address the problem of registering
airborne LiDAR data and optical aerial or satellite imagery acquired from
different platforms, at different times, with different points of view and
levels of detail. In this paper, we present a robust registration method based
on building regions, which are extracted from optical images using mean shift
segmentation, and from LiDAR data using a 3D point cloud filtering process. The
matching of the extracted building segments is then carried out using Graph
Transformation Matching (GTM) which allows to determine a common pattern of
relative positions of segment centers. Thanks to this registration, the
relative shifts between the data sets are significantly reduced, which enables
a subsequent fine registration and a resulting high-quality data fusion. | [
"cs.CV"
] |
Model-free reinforcement learning methods such as the Proximal Policy
Optimization algorithm (PPO) have been successfully applied to complex
decision-making problems such as Atari games. However, these methods suffer
from high variance and high sample complexity. On the other hand, model-based
reinforcement learning methods that learn the transition dynamics are more
sample efficient, but they often suffer from the bias of the transition
estimation. How to make use of both model-based and model-free learning is a
central problem in reinforcement learning. In this paper, we present a new
technique to address the trade-off between exploration and exploitation, which
regards the difference between model-free and model-based estimations as a
measure of exploration value. We apply this new technique to the PPO algorithm
and arrive at a new policy optimization method, named Policy Optimization with
Model-based Explorations (POME). POME uses two components to predict the
actions' target values: a model-free one estimated by Monte-Carlo sampling and
a model-based one which learns a transition model and predicts the value of the
next state. POME adds the error of these two target estimations as the
additional exploration value for each state-action pair, i.e., it encourages
the algorithm to explore states with larger target errors, which are hard to
estimate. We compare POME with PPO on Atari 2600 games, and the results show
that POME outperforms PPO on 33 out of 49 games. | [
"cs.LG",
"stat.ML"
] |
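Schematically, the exploration value is the absolute discrepancy between the two target estimates for the same state-action pair. The sketch below shows that computation; the centring and clipping are illustrative assumptions about how such a bonus would be kept well scaled, not the paper's exact recipe.

```python
# Sketch of a POME-style exploration bonus: the gap between model-free
# and model-based target estimates is added to the target.
import numpy as np

def targets_with_exploration(mc_returns, model_targets, clip=1.0):
    """mc_returns: Monte-Carlo (model-free) estimates of Q(s, a).
    model_targets: model-based estimates r + gamma * V(predicted s')."""
    discrepancy = np.abs(mc_returns - model_targets)
    # Centre so the bonus redistributes, rather than inflates, targets.
    bonus = np.clip(discrepancy - discrepancy.mean(), -clip, clip)
    return mc_returns + bonus

mc = np.array([1.0, 0.5, 2.0, 0.1])
mb = np.array([0.8, 0.6, 1.0, 0.1])
print(targets_with_exploration(mc, mb))  # hard-to-model pairs get a boost
```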
Any clustering algorithm must synchronously learn to model the clusters and
allocate data to those clusters in the absence of labels. Mixture model-based
methods model clusters with pre-defined statistical distributions and allocate
data to those clusters based on the cluster likelihoods. They iteratively
refine those distribution parameters and member assignments following the
Expectation-Maximization (EM) algorithm. However, the cluster representability
of such hand-designed distributions that employ a limited number of parameters
is not adequate for most real-world clustering tasks. In this paper, we realize
mixture model-based clustering with a neural network where the final layer
neurons, with the aid of an additional transformation, approximate cluster
distribution outputs. The network parameters pose as the parameters of those
distributions. The result is an elegant, far more general representation of
clusters than a restricted mixture of hand-designed distributions. We train the
network end-to-end via batch-wise EM iterations where the forward pass acts as
the E-step and the backward pass acts as the M-step. In image clustering, the
mixture-based EM objective can be used as the clustering objective along with
existing representation learning methods. In particular, we show that when
mixture-EM optimization is fused with consistency optimization, it improves the
sole consistency optimization performance in clustering. Our trained networks
outperform single-stage deep clustering methods that still depend on k-means,
with unsupervised classification accuracy of 63.8% in STL10, 58% in CIFAR10,
25.9% in CIFAR100, and 98.9% in MNIST. | [
"cs.LG",
"cs.AI",
"cs.CV",
"68T10, 62H30",
"I.2; I.4; I.5"
] |
Sparse reward is one of the most challenging problems in reinforcement
learning (RL). Hindsight Experience Replay (HER) attempts to address this issue
by converting a failed experience to a successful one by relabeling the goals.
Despite its effectiveness, HER has limited applicability because it lacks a
compact and universal goal representation. We present Augmenting experienCe via
TeacheR's adviCE (ACTRCE), an efficient reinforcement learning technique that
extends the HER framework using natural language as the goal representation. We
first analyze the differences among goal representation, and show that ACTRCE
can efficiently solve difficult reinforcement learning problems in challenging
3D navigation tasks, whereas HER with non-language goal representation failed
to learn. We also show that with language goal representations, the agent can
generalize to unseen instructions, and even generalize to instructions with
unseen lexicons. We further demonstrate it is crucial to use hindsight advice
to solve challenging tasks, and even small amount of advice is sufficient for
the agent to achieve good performance. | [
"cs.LG",
"cs.AI",
"cs.NE",
"stat.ML"
] |
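The mechanism ACTRCE adds to HER is easy to state in code: after a failed episode, a teacher describes what was actually achieved in natural language, and the episode is replayed with that description as the goal. In the sketch below, the `describe` function is a stub standing in for the teacher's advice, and the transition format is an illustrative assumption.

```python
# Sketch of hindsight relabelling with natural-language goals.
def describe(final_state):
    """Teacher's advice: a language description of what was achieved.
    A stub here; in the real setting this comes from a teacher/oracle."""
    return f"reach the {final_state['object']} in the {final_state['room']}"

def relabel_episode(transitions, achieved_state):
    """Replay a failed episode as if the achieved outcome was the goal."""
    hindsight_goal = describe(achieved_state)
    return [
        dict(t, goal=hindsight_goal,
             reward=1.0 if t["is_final"] else 0.0)
        for t in transitions
    ]

episode = [
    {"obs": 0, "action": 1, "is_final": False},
    {"obs": 1, "action": 0, "is_final": True},
]
print(relabel_episode(episode, {"object": "red key", "room": "hallway"}))
```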
First-person vision is gaining interest as it offers a unique viewpoint on
people's interaction with objects, their attention, and even intention.
However, progress in this challenging domain has been relatively slow due to
the lack of sufficiently large datasets. In this paper, we introduce
EPIC-KITCHENS, a large-scale egocentric video benchmark recorded by 32
participants in their native kitchen environments. Our videos depict
non-scripted daily activities: we simply asked each participant to start
recording every time they entered their kitchen. Recording took place in 4
cities (in North America and Europe) by participants belonging to 10 different
nationalities, resulting in highly diverse cooking styles. Our dataset features
55 hours of video consisting of 11.5M frames, which we densely labeled for a
total of 39.6K action segments and 454.3K object bounding boxes. Our annotation
is unique in that we had the participants narrate their own videos (after
recording), thus reflecting true intention, and we crowd-sourced ground-truths
based on these. We describe our object, action and anticipation challenges, and
evaluate several baselines over two test splits, seen and unseen kitchens.
Dataset and Project page: http://epic-kitchens.github.io | [
"cs.CV"
] |
Graph Convolutional Networks (GCNs) have been drawing significant attention
with the power of representation learning on graphs. Unlike Convolutional
Neural Networks (CNNs), which are able to take advantage of stacking very deep
layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting
issues when going deeper. These challenges limit the representation power of
GCNs on large-scale graphs. This paper proposes DeeperGCN that is capable of
successfully and reliably training very deep GCNs. We define differentiable
generalized aggregation functions to unify different message aggregation
operations (e.g. mean, max). We also propose a novel normalization layer namely
MsgNorm and a pre-activation version of residual connections for GCNs.
Extensive experiments on Open Graph Benchmark (OGB) show DeeperGCN
significantly boosts performance over the state-of-the-art on the large scale
graph learning tasks of node property prediction and graph property prediction.
Please visit https://www.deepgcns.org for more information. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
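One concrete instance of a differentiable generalized aggregation function is a softmax aggregator whose inverse temperature interpolates between the mean and max aggregators the abstract mentions. The sketch below shows the idea for a single node's incoming messages; the parameter name `beta` and the example values are illustrative.

```python
# Sketch of softmax aggregation: beta -> 0 recovers mean aggregation,
# beta -> infinity recovers (per-feature) max aggregation.
import torch

def softmax_aggregate(msgs, beta=1.0):
    """msgs: (num_neighbours, feat_dim) messages for one node."""
    w = torch.softmax(beta * msgs, dim=0)  # per-feature neighbour weights
    return (w * msgs).sum(dim=0)

msgs = torch.tensor([[1.0, -2.0], [3.0, 0.5], [2.0, 0.0]])
print(softmax_aggregate(msgs, beta=0.001))  # ~ mean: [2.0, -0.5]
print(softmax_aggregate(msgs, beta=100.0))  # ~ max:  [3.0,  0.5]
```

Because the aggregator is differentiable in `beta`, the trade-off between mean- and max-like behaviour can itself be learned.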
We introduce CAFLOW, a new diverse image-to-image translation model that
simultaneously leverages the power of auto-regressive modeling and the modeling
efficiency of conditional normalizing flows. We transform the conditioning
image into a sequence of latent encodings using a multi-scale normalizing flow
and repeat the process for the conditioned image. We model the conditional
distribution of the latent encodings by modeling the auto-regressive
distributions with an efficient multi-scale normalizing flow, where each
conditioning factor affects image synthesis at its respective resolution scale.
Our proposed framework performs well on a range of image-to-image translation
tasks. It outperforms former designs of conditional flows because of its
expressive auto-regressive structure. | [
"cs.CV",
"cs.AI",
"cs.LG",
"stat.ML"
] |
Explaining complex or seemingly simple machine learning models is an
important practical problem. We want to explain individual predictions from a
complex machine learning model by learning simple, interpretable explanations.
The Shapley value is a game-theoretic concept that can be used for this purpose.
The Shapley value framework has a series of desirable theoretical properties,
and can in principle handle any predictive model. Kernel SHAP is a
computationally efficient approximation to Shapley values in higher dimensions.
Like several other existing methods, this approach assumes that the features
are independent, which may give very wrong explanations. This is the case even
if a simple linear model is used for predictions. In this paper, we extend the
Kernel SHAP method to handle dependent features. We provide several examples of
linear and non-linear models with various degrees of feature dependence, where
our method gives more accurate approximations to the true Shapley values. We
also propose a method for aggregating individual Shapley values, such that the
prediction can be explained by groups of dependent variables. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
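For reference, the weighting that makes Kernel SHAP's weighted least-squares problem recover Shapley values depends only on the coalition size: a coalition of $s$ out of $M$ features receives weight $(M-1)/\big(\binom{M}{s}\, s\, (M-s)\big)$. The dependence-aware extension in the paper changes how absent features are imputed, not this kernel. A minimal computation:

```python
# The Shapley kernel weights used by Kernel SHAP, by coalition size.
from math import comb

def shapley_kernel_weight(M, s):
    if s == 0 or s == M:
        # Empty and full coalitions are enforced exactly in practice
        # (effectively infinite weight).
        return float("inf")
    return (M - 1) / (comb(M, s) * s * (M - s))

M = 5
for s in range(M + 1):
    print(s, shapley_kernel_weight(M, s))
```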
During the last years, computer vision-based diagnosis systems have been
widely used in several hospitals and dermatology clinics, aiming at the early
detection of malignant melanoma tumor, which is among the most frequent types
of skin cancer. In this work, we present an automated diagnosis system based on
the ABCD rule used in clinical diagnosis in order to discriminate benign from
malignant skin lesions. First, to reduce the influence of small structures, a
preprocessing step based on morphological and fast marching schemes is used. In
the second step, an unsupervised approach for lesion segmentation is proposed.
Iterative thresholding is applied to initialize the level set automatically.
As automated border detection is an important step for the correctness of
subsequent phases in computerized melanoma recognition systems, we compare its
accuracy with the GrowCut and mean shift algorithms, and discuss how these
results may influence the following steps: feature extraction and final
lesion classification. Relying on visual diagnosis, four features --
Asymmetry (A), Border (B), Color (C) and Diversity (D) -- are computed and
used to construct a classification module based on an artificial neural
network for the
recognition of malignant melanoma. This framework has been tested on a
dermoscopic database [16] of 320 images. The classification results show an
increased true detection rate and a decreased false positive rate. | [
"cs.CV"
] |
We present a novel blind source separation (BSS) method, called information
geometric blind source separation (IGBSS). Our formulation is based on the
log-linear model equipped with a hierarchically structured sample space, which
has theoretical guarantees to uniquely recover a set of source signals by
minimizing the KL divergence from a set of mixed signals. Source signals,
received signals, and mixing matrices are realized as different layers in our
hierarchical sample space. Our empirical results on images and time series
data demonstrate that our approach is superior to well-established techniques
and is able to separate signals with complex interactions. | [
"stat.ML",
"cs.LG"
] |
We propose a unified game-theoretical framework to perform classification and
conditional image generation given limited supervision. It is formulated as a
three-player minimax game consisting of a generator, a classifier and a
discriminator, and therefore is referred to as Triple Generative Adversarial
Network (Triple-GAN). The generator and the classifier characterize the
conditional distributions between images and labels to perform conditional
generation and classification, respectively. The discriminator solely focuses
on identifying fake image-label pairs. Under a nonparametric assumption, we
prove the unique equilibrium of the game is that the distributions
characterized by the generator and the classifier converge to the data
distribution. As a byproduct of the three-player mechanism, Triple-GAN is
flexible to incorporate different semi-supervised classifiers and GAN
architectures. We evaluate Triple-GAN in two challenging settings, namely,
semi-supervised learning and the extreme low data regime. In both settings,
Triple-GAN can achieve excellent classification results and generate meaningful
samples in a specific class simultaneously. In particular, using a commonly
adopted 13-layer CNN classifier, Triple-GAN substantially outperforms
extensive semi-supervised learning methods on more than 10 benchmarks,
regardless of whether data augmentation is applied. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Convolutional Neural Networks (CNNs) have become common in many fields
including computer vision, speech recognition, and natural language processing.
Although CNN hardware accelerators are already included as part of many SoC
architectures, the task of achieving high accuracy on resource-restricted
devices is still considered challenging, mainly due to the vast number of
design parameters that need to be balanced to achieve an efficient solution.
Quantization techniques, when applied to the network parameters, lead to a
reduction of power and area and may also change the ratio between communication
and computation. As a result, some algorithmic solutions may suffer from a
lack of memory bandwidth or computational resources and fail to achieve the expected
performance due to hardware constraints. Thus, the system designer and the
micro-architect need to understand at early development stages the impact of
their high-level decisions (e.g., the architecture of the CNN and the amount of
bits used to represent its parameters) on the final product (e.g., the expected
power saving, area, and accuracy). Unfortunately, existing tools fall short of
supporting such decisions.
This paper introduces a hardware-aware complexity metric that aims to assist
the system designer of the neural network architectures, through the entire
project lifetime (especially at its early stages) by predicting the impact of
architectural and micro-architectural decisions on the final product. We
demonstrate how the proposed metric can help evaluate different design
alternatives of neural network models on resource-restricted devices such as
real-time embedded systems, and to avoid making design mistakes at early
stages. | [
"cs.LG",
"cs.AR"
] |
Fine-grained classification remains a challenging task because distinguishing
categories requires learning complex and local differences. Diversity in the pose,
scale, and position of objects in an image makes the problem even more
difficult. Although the recent Vision Transformer models achieve high
performance, they need an extensive volume of input data. To address this
problem, we made the best use of GAN-based data augmentation to generate extra
dataset instances. Oxford-IIIT Pets was our dataset of choice for this
experiment. It consists of 37 breeds of cats and dogs with variations in scale,
poses, and lighting, which intensifies the difficulty of the classification
task. Furthermore, we enhanced the performance of the recent StyleGAN2-ADA
Generative Adversarial Network (GAN) model to generate more realistic images
while preventing overfitting to the training set. We did this by
training a customized version of MobileNetV2 to predict animal facial
landmarks; then, we cropped images accordingly. Lastly, we combined the
synthetic images with the original dataset and compared our proposed method
with standard GANs augmentation and no augmentation with different subsets of
training data. We validated our work by evaluating the accuracy of fine-grained
image classification on the recent Vision Transformer (ViT) model. | [
"cs.CV",
"I.2.10; I.4; I.5"
] |
Many radiological studies can reveal the presence of several co-existing
abnormalities, each one represented by a distinct visual pattern. In this
article we address the problem of learning a distance metric for plain
radiographs that captures a notion of "radiological similarity": two chest
radiographs are considered to be similar if they share similar abnormalities.
Deep convolutional neural networks (DCNs) are used to learn a low-dimensional
embedding for the radiographs that is equipped with the desired metric. Two
loss functions are proposed to deal with multi-labelled images and potentially
noisy labels. We report on a large-scale study involving over 745,000 chest
radiographs whose labels were automatically extracted from free-text
radiological reports through a natural language processing system. Using 4,500
validated exams, we demonstrate that the methodology performs satisfactorily on
clustering and image retrieval tasks. Remarkably, the learned metric separates
normal exams from those having radiological abnormalities. | [
"stat.ML",
"cs.CV"
] |
The recently proposed Lottery Ticket Hypothesis of Frankle and Carbin (2019)
suggests that the performance of over-parameterized deep networks is due to the
random initialization seeding the network with a small fraction of favorable
weights. These weights retain their dominant status throughout training -- in a
very real sense, this sub-network "won the lottery" during initialization. The
authors find sub-networks via unstructured magnitude pruning with 85-95% of
parameters removed that train to the same accuracy as the original network at a
similar speed, which they call winning tickets. In this paper, we extend the
Lottery Ticket Hypothesis to a variety of transfer learning tasks. We show that
sparse sub-networks with approximately 90-95% of weights removed achieve (and
often exceed) the accuracy of the original dense network in several realistic
settings. We experimentally validate this by transferring the sparse
representation found via pruning on CIFAR-10 to SmallNORB and FashionMNIST for
object recognition tasks. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
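The prune-and-rewind recipe behind winning tickets fits in a few lines: train, build a global magnitude mask, then reset the surviving weights to their initial values before retraining. The sketch below is illustrative (toy model, 90% global unstructured sparsity matching the range quoted above).

```python
# Sketch of finding a "winning ticket": global magnitude pruning
# followed by rewinding surviving weights to their initialization.
import torch
import torch.nn as nn

def magnitude_masks(model, sparsity=0.9):
    all_w = torch.cat([p.detach().abs().flatten()
                       for p in model.parameters()])
    threshold = torch.quantile(all_w, sparsity)
    return [(p.detach().abs() > threshold).float()
            for p in model.parameters()]

def rewind(model, init_state, masks):
    model.load_state_dict(init_state)        # back to initialization
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)                         # keep only the ticket

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
init_state = {k: v.clone() for k, v in model.state_dict().items()}
# ... train `model` here (or on a source task, in the transfer setting) ...
masks = magnitude_masks(model, sparsity=0.9)
rewind(model, init_state, masks)  # sparse sub-network with original init
```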
We present new algorithms for computing and approximating bisimulation
metrics in Markov Decision Processes (MDPs). Bisimulation metrics are an
elegant formalism that capture behavioral equivalence between states and
provide strong theoretical guarantees on differences in optimal behaviour.
Unfortunately, their computation is expensive and requires a tabular
representation of the states, which has thus far rendered them impractical for
large problems. In this paper we present a new version of the metric that is
tied to a behavior policy in an MDP, along with an analysis of its theoretical
properties. We then present two new algorithms for approximating bisimulation
metrics in large, deterministic MDPs. The first does so via sampling and is
guaranteed to converge to the true metric. The second is a differentiable loss
which allows us to learn an approximation even for continuous state MDPs, which
prior to this work had not been possible. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
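For a deterministic MDP, the policy-tied metric can be computed by iterating a contraction: the distance between two states is their reward gap plus the discounted distance between their on-policy successors. The sampling-based algorithm in the paper estimates this from trajectories; the toy chain below iterates the exact operator for illustration.

```python
# Sketch: fixed-point iteration of d(x, y) <- |r(x) - r(y)|
#         + gamma * d(next(x), next(y)) on a toy deterministic chain.
import numpy as np

n, gamma = 6, 0.9
rewards = np.array([0.0, 0.1, 0.1, 0.5, 0.5, 1.0])
nxt = np.array([1, 2, 3, 4, 5, 5])   # on-policy deterministic successor

d = np.zeros((n, n))
for _ in range(200):                 # the operator is a gamma-contraction
    d = np.abs(rewards[:, None] - rewards[None, :]) + gamma * d[nxt][:, nxt]
print(np.round(d, 2))                # behaviourally similar states stay close
```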
Many inference problems in structured prediction can be modeled as maximizing
a score function on a space of labels, where graphs are a natural
representation to decompose the total score into a sum of unary (nodes) and
pairwise (edges) scores. Given a generative model with an undirected connected
graph $G$ and true vector of binary labels, it has been previously shown that
when $G$ has good expansion properties, such as complete graphs or $d$-regular
expanders, one can exactly recover the true labels (with high probability and
in polynomial time) from a single noisy observation of each edge and node. We
analyze the previously studied generative model by Globerson et al. (2015)
under a notion of statistical parity. That is, given a fair binary node
labeling, we ask the question whether it is possible to recover the fair
assignment, with high probability and in polynomial time, from single edge and
node observations. We find that, in contrast to the known trade-offs between
fairness and model performance, the addition of the fairness constraint
improves the probability of exact recovery. We effectively explain this
phenomenon and empirically show how graphs with poor expansion properties, such
as grids, are now capable of achieving exact recovery with high probability.
Finally, as a byproduct of our analysis, we provide a tighter
minimum-eigenvalue bound than that of Weyl's inequality. | [
"stat.ML",
"cs.LG"
] |
We present a transformer-based image anomaly detection and localization
network. Our proposed model is a combination of a reconstruction-based approach
and patch embedding. The use of transformer networks helps to preserve the
spatial information of the embedded patches, which are later processed by a
Gaussian mixture density network to localize the anomalous areas. In addition,
we also publish BTAD, a real-world industrial anomaly dataset. Our results are
compared with other state-of-the-art algorithms using publicly available
datasets like MNIST and MVTec. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Computer models play a key role in many scientific and engineering problems.
One major source of uncertainty in computer model experiment is input parameter
uncertainty. Computer model calibration is a formal statistical procedure to
infer input parameters by combining information from model runs and
observational data. The existing standard calibration framework suffers from
inferential issues when the model output and observational data are
high-dimensional dependent data such as large time series due to the difficulty
in building an emulator and the non-identifiability between effects from input
parameters and data-model discrepancy. To overcome these challenges we propose
a new calibration framework based on a deep neural network (DNN) with
long short-term memory layers that directly emulates the inverse relationship
between the model output and input parameters. Adopting the 'learning with
noise' idea, we train our DNN model to filter out the effects of data-model
discrepancy on input parameter inference. We also formulate a new way to
construct interval predictions for DNN using quantile regression to quantify
the uncertainty in input parameter estimates. Through a simulation study and
a real data application with the WRF-Hydro model, we show that our approach
can yield accurate point estimates and well-calibrated interval estimates for
input parameters. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
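The interval-prediction idea can be implemented with the pinball (quantile) loss: training one network output per quantile level yields lower and upper bounds on the input-parameter estimate. A minimal sketch, with illustrative quantile levels for a 90% interval:

```python
# Sketch of the pinball loss for quantile regression.
import torch

def pinball_loss(pred, target, q):
    """Asymmetric absolute error; minimized by the q-th quantile."""
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

pred_lo, pred_hi = torch.tensor([0.8]), torch.tensor([1.3])
target = torch.tensor([1.0])
loss = (pinball_loss(pred_lo, target, 0.05)
        + pinball_loss(pred_hi, target, 0.95))
print(loss)   # train both quantile heads jointly with this loss
```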
Representation learning (RL) methods learn objects' latent embeddings where
information is preserved by distances. Since distances are invariant to certain
linear transformations, one may obtain different embeddings while preserving
the same information. In dynamic systems, a temporal difference in embeddings
may be explained by the stability of the system or by the misalignment of
embeddings due to arbitrary transformations. In the literature, embedding
alignment has not been defined formally, explored theoretically, or analyzed
empirically. Here, we explore the embedding alignment and its parts, provide
the first formal definitions, propose novel metrics to measure alignment and
stability, and show their suitability through synthetic experiments. Real-world
experiments show that both static and dynamic RL methods are prone to produce
misaligned embeddings and such misalignment worsens the performance of dynamic
network inference tasks. By ensuring alignment, the prediction accuracy rises
by up to 90% in static and by 40% in dynamic RL methods. | [
"cs.LG",
"cs.SI"
] |
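A standard way to remove the distance-preserving part of the misalignment is an orthogonal Procrustes fit between two embedding snapshots; what remains after alignment is genuine drift. The sketch below illustrates this general idea, not the specific alignment and stability metrics proposed in the paper.

```python
# Sketch: map snapshot X onto snapshot Y with the best orthogonal
# transform, removing rotation/reflection misalignment.
import numpy as np

def procrustes_align(X, Y):
    """Find orthogonal R minimizing ||X R - Y||_F and return X R."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return X @ (U @ Vt)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))                  # embeddings at time t
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # arbitrary rotation
Y = X @ Q                                       # same info, misaligned
aligned = procrustes_align(X, Y)
print(np.linalg.norm(aligned - Y))              # ~ 0: misalignment removed
```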
To explore the robustness of recommender systems, researchers have proposed
various shilling attack models and analyzed their adverse effects. Primitive
attacks are highly feasible but less effective due to simplistic handcrafted
rules, while upgraded attacks are more powerful but costly and difficult to
deploy because they require more knowledge from recommendations. In this paper,
we explore a novel shilling attack called Graph cOnvolution-based generative
shilling ATtack (GOAT) to balance the attacks' feasibility and effectiveness.
GOAT adopts the primitive attacks' paradigm that assigns items for fake users
by sampling and the upgraded attacks' paradigm that generates fake ratings by a
deep learning-based model. It deploys a generative adversarial network (GAN)
that learns the real rating distribution to generate fake ratings.
Additionally, the generator combines a tailored graph convolution structure
that leverages the correlations between co-rated items to smooth the fake
ratings and enhance their authenticity. The extensive experiments on two public
datasets evaluate GOAT's performance from multiple perspectives. Our study of
GOAT demonstrates the technical feasibility of building a more powerful and
intelligent attack model at a much-reduced cost, enables analysis of the threat
of such attacks, and guides the investigation of necessary prevention measures. | [
"cs.LG",
"cs.CR",
"cs.IR",
"cs.SI"
] |
Text detection in scenes based on deep neural networks has shown promising
results. Instead of using word bounding box regression, recent state-of-the-art
methods have started focusing on character bounding box and pixel-level
prediction. This necessitates the need to link adjacent characters, which we
propose in this paper using a novel Graph Neural Network (GNN) architecture
that allows us to learn both node and edge features, as opposed to only the
node features in a typical GNN. The main advantage of using GNN for link
prediction lies in its ability to connect characters which are spatially
separated and have an arbitrary orientation. We show our concept on the well
known SynthText dataset, achieving top results as compared to state-of-the-art
methods. | [
"cs.LG",
"cs.CL",
"cs.CV"
] |
Both assistant driving and self-driving have attracted a great amount of
attention in the last few years. However, the majority of research efforts
focus on safe driving; little research has been conducted on in-vehicle climate
control, or assistant driving based on travellers' personal habits or
preferences. In this paper, we propose a novel approach for climate control,
driver behavior recognition and driving recommendation for better fitting
drivers' preferences in their daily driving. The algorithm consists of three
components: (1) an in-vehicle sensing and context feature enriching component
with an Internet of Things (IoT) platform for collecting related environment,
vehicle-running, and traffic parameters that affect drivers' behaviors; (2) a
non-intrusive intelligent driver behaviour and vehicle status detection
component, which can automatically label the vehicle's status (open windows,
air conditioning on, etc.) based on the results of further feature extraction
and machine learning algorithms; and (3) a personalized driver habit learning
and preference recommendation component for healthier and more comfortable
experiences. A prototype using a client-server architecture with an
iOS app and an air-quality monitoring sensor has been developed for collecting
heterogeneous data and testing our algorithms. Real-world experiments on
driving data of 11,370 km (320 hours) by different drivers in multiple cities
worldwide have been conducted, which demonstrate the effective and accuracy of
our approach. | [
"cs.LG",
"stat.ML"
] |
Color constancy is the problem of inferring the color of the light that
illuminated a scene, usually so that the illumination color can be removed.
Because this problem is underconstrained, it is often solved by modeling the
statistical regularities of the colors of natural objects and illumination. In
contrast, in this paper we reformulate the problem of color constancy as a 2D
spatial localization task in a log-chrominance space, thereby allowing us to
apply techniques from object detection and structured prediction to the color
constancy problem. By directly learning how to discriminate between correctly
white-balanced images and poorly white-balanced images, our model is able to
improve performance on standard benchmarks by nearly 40%. | [
"cs.CV"
] |
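The log-chrominance reformulation can be sketched in a few lines: each pixel
maps to the 2D point (log(g/r), log(g/b)), and a global illuminant tint merely
translates the resulting histogram, turning illuminant estimation into 2D
localization. The chrominance convention and bin settings below are
illustrative assumptions.

```python
import numpy as np

def log_chroma_histogram(img, bins=64, lim=2.0):
    """img: (H, W, 3) linear RGB with positive values."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    u = np.log(g / r)
    v = np.log(g / b)
    hist, _, _ = np.histogram2d(u.ravel(), v.ravel(), bins=bins,
                                range=[[-lim, lim], [-lim, lim]])
    return hist / hist.sum()

img = np.random.rand(32, 32, 3) + 1e-3       # toy stand-in for an image
tint = np.array([1.5, 1.0, 0.7])             # global illuminant color
h_true = log_chroma_histogram(img)
h_tinted = log_chroma_histogram(img * tint)  # same shape, translated in (u, v)
```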
DETR has been recently proposed to eliminate the need for many hand-designed
components in object detection while demonstrating good performance. However,
it suffers from slow convergence and limited feature spatial resolution, due to
the limitation of Transformer attention modules in processing image feature
maps. To mitigate these issues, we propose Deformable DETR, whose attention
modules only attend to a small set of key sampling points around a reference.
Deformable DETR can achieve better performance than DETR (especially on small
objects) with 10 times fewer training epochs. Extensive experiments on the COCO
benchmark demonstrate the effectiveness of our approach. Code is released at
https://github.com/fundamentalvision/Deformable-DETR. | [
"cs.CV"
] |
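Below is a deliberately simplified, single-head sketch of the deformable
attention idea: each query attends to only K sampled points around its
reference location instead of all pixels. It uses nearest-neighbor sampling for
brevity (the actual method uses bilinear interpolation, multiple heads, and
multi-scale features), so treat it as a conceptual illustration only.

```python
import torch
import torch.nn as nn

class DeformableAttention(nn.Module):
    def __init__(self, dim, K=4):
        super().__init__()
        self.K = K
        self.offsets = nn.Linear(dim, 2 * K)   # (dx, dy) per sampled point
        self.weights = nn.Linear(dim, K)       # attention weight per point
        self.value = nn.Linear(dim, dim)

    def forward(self, query, ref, feat):
        """query: (Q, dim); ref: (Q, 2) pixel coords; feat: (H, W, dim)."""
        H, W, _ = feat.shape
        off = self.offsets(query).view(-1, self.K, 2)
        w = self.weights(query).softmax(dim=-1)          # (Q, K)
        pts = (ref.unsqueeze(1) + off).round().long()    # sampled locations
        pts[..., 0] = pts[..., 0].clamp(0, W - 1)        # x coordinate
        pts[..., 1] = pts[..., 1].clamp(0, H - 1)        # y coordinate
        v = self.value(feat)[pts[..., 1], pts[..., 0]]   # (Q, K, dim)
        return (w.unsqueeze(-1) * v).sum(dim=1)          # (Q, dim)

attn = DeformableAttention(dim=32)
out = attn(torch.randn(10, 32), torch.rand(10, 2) * 16, torch.randn(16, 16, 32))
```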
Learning useful representations is a key ingredient to the success of modern
machine learning. Currently, representation learning mostly relies on embedding
data into Euclidean space. However, recent work has shown that data in some
domains is better modeled by non-Euclidean metric spaces, and inappropriate
geometry can result in inferior performance. In this paper, we aim to eliminate
the inductive bias imposed by the embedding space geometry. Namely, we propose
to map data into more general non-vector metric spaces: a weighted graph with a
shortest path distance. By design, such graphs can model arbitrary geometry
with a proper configuration of edges and weights. Our main contribution is
PRODIGE: a method that learns a weighted graph representation of data
end-to-end by gradient descent. Greater generality and fewer model assumptions
make PRODIGE more powerful than existing embedding-based approaches. We confirm
the superiority of our method via extensive experiments on a wide range of
tasks, including classification, compression, and collaborative filtering. | [
"cs.LG",
"stat.ML"
] |
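A conceptual sketch of learning a weighted-graph representation follows: edge
weights are learnable parameters, distances are shortest-path lengths, and
gradients flow through the edges on the current shortest path (a minimum over
paths is piecewise differentiable in the weights). PRODIGE additionally learns
which edges exist via a probabilistic formulation; that part is omitted, so
this is only a toy approximation under assumed targets.

```python
import networkx as nx
import torch
import torch.nn.functional as F

n = 6
theta = torch.randn(n, n, requires_grad=True)   # parameterizes edge weights
target = torch.rand(n, n) * 3                   # pairwise distances to preserve
opt = torch.optim.Adam([theta], lr=0.05)

for step in range(100):
    w = F.softplus(theta)                       # positive edge weights
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=float(w[i, j]))
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            path = nx.shortest_path(G, i, j, weight="weight")
            # Differentiable path length: sum of learnable edge weights.
            d = sum(w[a, b] if a < b else w[b, a]
                    for a, b in zip(path[:-1], path[1:]))
            loss = loss + (d - target[i, j]) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
```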
Attention mechanisms and non-local mean operations in general are key
ingredients in many state-of-the-art deep learning techniques. In particular,
the Transformer model based on multi-head self-attention has recently achieved
great success in natural language processing and computer vision. However, the
vanilla algorithm computing the Transformer of an image with n pixels has
O(n^2) complexity, which is often painfully slow and sometimes prohibitively
expensive for large-scale image data. In this paper, we propose a fast
randomized algorithm --- SCRAM --- that only requires O(n log(n)) time to
produce an image attention map. Such a dramatic acceleration is attributed to
our insight that attention maps on real-world images usually exhibit (1)
spatial coherence and (2) sparse structure. The central idea of SCRAM is to
employ PatchMatch, a randomized correspondence algorithm, to quickly pinpoint
the most compatible key (argmax) for each query first, and then exploit that
knowledge to design a sparse approximation to non-local mean operations. Using
the argmax (mode) to dynamically construct the sparse approximation
distinguishes our algorithm from all existing sparse approximation methods
and makes it very efficient. Moreover, SCRAM is a broadly applicable
approximation to any non-local mean layer in contrast to some other sparse
approximations that can only approximate self-attention. Our preliminary
experimental results suggest that SCRAM is indeed promising for speeding up or
scaling up the computation of attention maps in the Transformer. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
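A toy sketch of the sparse attention step follows. It assumes a
PatchMatch-style search has already returned, for each query, the index of its
approximately most compatible key; attention is then computed only over a small
window around that key. The PatchMatch search itself is elided, and all
parameters are illustrative.

```python
import torch

def sparse_attention(Q, K, V, best, radius=2):
    """Q, K, V: (n, d); best: (n,) approximate argmax key index per query."""
    n, d = Q.shape
    offs = torch.arange(-radius, radius + 1)
    idx = (best.unsqueeze(1) + offs).clamp(0, n - 1)   # (n, 2r+1) key window
    k, v = K[idx], V[idx]                              # (n, 2r+1, d)
    scores = (k @ Q.unsqueeze(-1)).squeeze(-1) / d ** 0.5
    w = scores.softmax(dim=-1)                         # attend inside window
    return (w.unsqueeze(-1) * v).sum(dim=1)            # (n, d)

n, d = 256, 32
Q, K, V = torch.randn(n, d), torch.randn(n, d), torch.randn(n, d)
best = (K @ Q.T).argmax(dim=0)   # stand-in for the PatchMatch result
out = sparse_attention(Q, K, V, best)
```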
The segmentation of animals from camera-trap images is a difficult task. For
instance, these images pose various challenges due to environmental conditions
and hardware limitations. We propose a multi-layer robust principal
component analysis (multi-layer RPCA) approach for background subtraction. Our
method computes sparse and low-rank images from a weighted sum of descriptors,
using color and texture features, with camera-trap image segmentation as a
case study. The segmentation algorithm is composed of histogram equalization
or Gaussian filtering as pre-processing, and morphological filters with active
contour as post-processing. The parameters of our multi-layer RPCA were
optimized with an exhaustive search. The database consists of camera-trap
images from the Colombian forest taken by the Instituto de Investigaci\'on de
Recursos Biol\'ogicos Alexander von Humboldt. We analyzed the performance of
our method in the inherently challenging conditions of camera-trap
images. Furthermore, we compared our method with some state-of-the-art
algorithms of background subtraction, where our multi-layer RPCA outperformed
these other methods. Our multi-layer RPCA reached 76.17% and 69.97% average
fine-grained F-measure for color and infrared sequences, respectively. To the
best of our knowledge, this is the first work to propose multi-layer RPCA and
to use it for camera-trap image segmentation. | [
"cs.CV"
] |
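The RPCA building block can be sketched generically: split a matrix of
vectorized frames into a low-rank part (static background) and a sparse part
(moving foreground) by alternating singular-value thresholding and soft
thresholding. This is a plain single-layer RPCA with assumed thresholds, not
the paper's multi-layer, descriptor-weighted formulation.

```python
import numpy as np

def rpca(M, lam=None, n_iter=100, tau=1.0):
    """Split M into low-rank L (background) + sparse S (foreground)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))   # standard PCP-style weight
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank step: singular value thresholding of M - S.
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sig - tau, 0)) @ Vt
        # Sparse step: soft-threshold the residual.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * tau, 0)
    return L, S

frames = np.random.rand(20, 100)  # 20 vectorized frames (toy stand-in)
L, S = rpca(frames)               # L: background model, S: moving objects
```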
The VAT method is a visual technique for determining the potential cluster
structure and the possible number of clusters in numerical data. Its improved
version, iVAT, uses a path-based distance transform to improve the
effectiveness of VAT for "tough" cases. Both VAT and iVAT have also been used
in conjunction with a single-linkage (SL) hierarchical clustering algorithm.
However, they are sensitive to noise and bridge points between clusters in the
dataset, and consequently, the corresponding VAT/iVAT images are often
inconclusive for such cases. In this paper, we propose a constraint-based
version of iVAT, which we call ConiVAT, that makes use of background knowledge
in the form of constraints, to improve VAT/iVAT for challenging and complex
datasets. ConiVAT uses the input constraints to learn the underlying similarity
metric and builds a minimum transitive dissimilarity matrix, before applying
VAT to it. We demonstrate the ConiVAT approach to visual assessment and
single-linkage clustering on nine datasets, showing that it improves the
quality of iVAT images for complex datasets and overcomes the limitation of SL
clustering with VAT/iVAT due to "noisy" bridges between clusters. Extensive
experimental results on nine datasets suggest that ConiVAT outperforms three
other semi-supervised clustering algorithms in terms of clustering
accuracy. | [
"cs.LG",
"stat.ML"
] |
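The path-based distance transform that iVAT applies can be written directly:
the transformed dissimilarity between i and j is the minimax path cost, i.e.
the smallest achievable largest hop over all paths from i to j, computed here
with a Floyd-Warshall-style update. A minimal sketch:

```python
import numpy as np

def ivat_transform(D):
    """D: (n, n) symmetric dissimilarity matrix -> minimax path distances."""
    Dp = D.copy()
    n = D.shape[0]
    for k in range(n):
        # A route through k costs max(Dp[i,k], Dp[k,j]); keep the cheaper one.
        Dp = np.minimum(Dp, np.maximum(Dp[:, k:k + 1], Dp[k:k + 1, :]))
    return Dp

X = np.random.rand(10, 2)
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Dp = ivat_transform(D)   # feed to VAT reordering to obtain the iVAT image
```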
In this work, we present HyperFlow - a novel generative model that leverages
hypernetworks to create continuous 3D object representations in the form of
lightweight surfaces (meshes), directly out of point clouds. Efficient object
representations are essential for many computer vision applications, including
robotic manipulation and autonomous driving. However, creating those
representations is often cumbersome, because it requires processing unordered
sets of point clouds. Therefore, it is either computationally expensive, due to
additional optimization constraints such as permutation invariance, or leads to
quantization losses introduced by binning point clouds into discrete voxels.
Inspired by mesh-based representations of objects used in computer graphics, we
postulate a fundamentally different approach and represent 3D objects as a
family of surfaces. To that end, we devise a generative model that uses a
hypernetwork to return the weights of a Continuous Normalizing Flows (CNF)
target network. The goal of this target network is to map points from a
probability distribution into a 3D mesh. To avoid numerical instability of the
CNF on compact support distributions, we propose a new Spherical Log-Normal
function which models density of 3D points around object surfaces mimicking
noise introduced by 3D capturing devices. As a result, we obtain continuous
mesh-based object representations that yield better qualitative results than
competing approaches, while reducing training time by over an order of
magnitude. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
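The hypernetwork pattern at the core of this model can be sketched as follows:
an embedding of a point cloud is mapped to the weights of a small target
network that transforms base-distribution samples into object points. A plain
MLP stands in for the Continuous Normalizing Flow target network here, so this
is only a structural illustration with assumed sizes.

```python
import math
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Maps a point-cloud embedding to the weights of a small target net."""
    def __init__(self, emb=128, hidden=64):
        super().__init__()
        self.shapes = [(hidden, 3), (hidden,), (3, hidden), (3,)]
        self.net = nn.Linear(emb, sum(math.prod(s) for s in self.shapes))

    def forward(self, z, pts):
        flat, ws, i = self.net(z), [], 0
        for s in self.shapes:            # slice the flat parameter vector
            n = math.prod(s)             # into the target net's tensors
            ws.append(flat[i:i + n].view(*s)); i += n
        W1, b1, W2, b2 = ws
        # Target net: base-distribution samples -> points on the object.
        return torch.tanh(pts @ W1.T + b1) @ W2.T + b2

hn = HyperNet()
z = torch.randn(128)          # embedding of one point cloud (assumed given)
pts = torch.randn(1000, 3)    # samples from the base distribution
obj = hn(z, pts)              # (1000, 3) generated object points
```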
Eye movements are intricate and dynamic biosignals that contain a wealth of
cognitive information about the subject. However, these are ambiguous signals
and therefore require meticulous feature engineering to be used by machine
learning algorithms. We instead propose to learn feature vectors of eye
movements in a self-supervised manner. We adopt a contrastive learning approach
and propose a set of data transformations that encourage a deep neural network
to discern salient and granular gaze patterns. This paper presents a novel
experiment utilizing six eye-tracking data sets despite their different data
specifications and experimental conditions. We assess the learned features on
biometric tasks with only a linear classifier, achieving 84.6% accuracy on a
mixed dataset, and up to 97.3% accuracy on a single dataset. Our work advances
the state of machine learning for eye movements and provides insights into a
general representation learning method not only for eye movements but also for
similar biosignals. | [
"cs.CV"
] |
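A generic contrastive objective of the kind described above (NT-Xent style) is
sketched below: two transformed views of the same eye-movement segment should
embed close together, and views of different segments far apart. The
gaze-specific transformations are elided, and this is not the paper's exact
loss code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (B, d) embeddings of two views of the same B segments."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=-1)     # (2B, d), unit norm
    sim = z @ z.T / temperature
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool), float("-inf"))
    # The positive for row i is the other view of the same segment.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)  # two views, 8 segments
loss = nt_xent(z1, z2)
```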
Vision-Language Navigation (VLN) is a task where agents learn to navigate
following natural language instructions. The key to this task is to perceive
both the visual scene and natural language sequentially. Conventional
approaches exploit the vision and language features in cross-modal grounding.
However, the VLN task remains challenging, since previous works have neglected
the rich semantic information contained in the environment (such as implicit
navigation graphs or sub-trajectory semantics). In this paper, we introduce
Auxiliary Reasoning Navigation (AuxRN), a framework with four self-supervised
auxiliary reasoning tasks to take advantage of the additional training signals
derived from the semantic information. The auxiliary tasks have four reasoning
objectives: explaining the previous actions, estimating the navigation
progress, predicting the next orientation, and evaluating the trajectory
consistency. As a result, these additional training signals help the agent to
acquire knowledge of semantic representations in order to reason about its
activity and build a thorough perception of the environment. Our experiments
indicate that auxiliary reasoning tasks improve both the performance of the
main task and the model generalizability by a large margin. Empirically, we
demonstrate that an agent trained with self-supervised auxiliary reasoning
tasks substantially outperforms the previous state-of-the-art method, being the
best existing approach on the standard benchmark. | [
"cs.CV"
] |
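Schematically, the four auxiliary reasoning objectives are extra heads on the
agent's shared features, each contributing a training signal alongside the main
navigation loss. All head definitions and sizes below are illustrative
assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AuxHeads(nn.Module):
    def __init__(self, dim=512, n_actions=6):
        super().__init__()
        self.explain = nn.Linear(dim, n_actions)   # explain previous action
        self.progress = nn.Linear(dim, 1)          # navigation progress in [0,1]
        self.orient = nn.Linear(dim, 12)           # next orientation (12 bins)
        self.consist = nn.Linear(dim, 2)           # trajectory-instruction match

    def forward(self, h):
        return (self.explain(h), torch.sigmoid(self.progress(h)),
                self.orient(h), self.consist(h))

h = torch.randn(4, 512)         # shared agent features for a batch of steps
outputs = AuxHeads()(h)         # each head yields one auxiliary loss term
```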