text | label
---|---|
Despite tremendous progress in computer vision, there has been no attempt at
machine learning on very large-scale medical image databases. We present an
interleaved text/image deep learning system to extract and mine the semantic
interactions of radiology images and reports from a national research
hospital's Picture Archiving and Communication System. With natural language
processing, we mine a collection of representative ~216K two-dimensional key
images selected by clinicians for diagnostic reference, and match the images
with their descriptions in an automated manner. Our system interleaves between
unsupervised learning and supervised learning on document- and sentence-level
text collections, to generate semantic labels and to predict them given an
image. Given an image of a patient scan, semantic topics at the radiology level
are predicted, and associated keywords are generated. Also, a number of
frequent disease types are detected as present or absent, to provide more
specific interpretation of a patient scan. This shows the potential of
large-scale learning and prediction in electronic patient records available in
most modern clinical institutions. | [
"cs.CV",
"cs.LG"
] |
Bayesian nonparametric approaches, in particular the Pitman-Yor process and
the associated two-parameter Chinese Restaurant process, have been successfully
used in applications where the data exhibit a power-law behavior. Examples
include natural language processing, natural images or networks. There is also
growing empirical evidence that some datasets exhibit a two-regime power-law
behavior: one regime for small frequencies, and a second regime, with a
different exponent, for high frequencies. In this paper, we introduce a class
of completely random measures which are doubly regularly-varying. Contrary to
the Pitman-Yor process, we show that when completely random measures in this
class are normalized to obtain random probability measures and associated
random partitions, such partitions exhibit a double power-law behavior. We
discuss in particular three models within this class: the beta prime process
(Broderick et al., 2015, 2018), a novel process called generalized BFRY
process, and a mixture construction. We derive efficient Markov chain Monte
Carlo algorithms to estimate the parameters of these models. Finally, we show
that the proposed models provide a better fit than the Pitman-Yor process on
various datasets. | [
"stat.ML",
"cs.LG"
] |
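As background for the partition behavior discussed above, the sketch below simulates the standard two-parameter Chinese Restaurant process (the single power-law baseline that the proposed doubly regularly-varying models generalize) and tabulates cluster sizes; for large sizes, the number of clusters of size $m$ decays roughly like $m^{-(1+\sigma)}$. All names and parameter values here are illustrative, not taken from the paper.

```python
import numpy as np
from collections import Counter

def sample_crp(n, alpha=1.0, sigma=0.5, rng=None):
    """Cluster sizes after seating n customers in a CRP(alpha, sigma)."""
    rng = rng or np.random.default_rng(0)
    sizes = []                       # sizes[k] = customers at table k
    for i in range(n):               # i customers already seated
        k = len(sizes)
        # P(table j) ∝ sizes[j] - sigma;  P(new table) ∝ alpha + k * sigma
        weights = np.array([s - sigma for s in sizes] + [alpha + k * sigma])
        choice = rng.choice(k + 1, p=weights / (i + alpha))
        if choice == k:
            sizes.append(1)
        else:
            sizes[choice] += 1
    return sizes

freq = Counter(sample_crp(20_000))   # freq[m] = number of clusters of size m
print(sorted(freq.items())[:10])     # single-regime power-law decay in m
```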
We present a method for depth estimation with monocular images, which can
predict high-quality depth on diverse scenes up to an affine transformation,
thus preserving accurate shapes of a scene. Previous methods that predict
metric depth often work well only for a specific scene. In contrast, learning
relative depth (information of being closer or further) can enjoy better
generalization, with the price of failing to recover the accurate geometric
shape of the scene. In this work, we propose a dataset and methods to tackle
this dilemma, aiming to predict accurate depth up to an affine transformation
with good generalization to diverse scenes. First, we construct a large-scale
and diverse dataset, termed Diverse Scene Depth dataset (DiverseDepth), which
has a broad range of scenes and foreground contents. Compared with previous
learning objectives, i.e., learning metric depth or relative depth, we propose
to learn the affine-invariant depth using our diverse dataset to ensure both
generalization and high-quality geometric shapes of scenes. Furthermore, in
order to train the model on the complex dataset effectively, we propose a
multi-curriculum learning method. Experiments show that our method outperforms
previous methods on 8 datasets by a large margin with the zero-shot test
setting, demonstrating the excellent generalization capacity of the learned
model to diverse scenes. The reconstructed point clouds with the predicted
depth show that our method can recover high-quality 3D shapes. Code and dataset
are available at: https://tinyurl.com/DiverseDepth | [
"cs.CV"
] |
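A hedged sketch of what "depth up to an affine transformation" means as a training loss: before comparing prediction and ground truth, solve per-image for the scale and shift that best align them in the least-squares sense, so the loss is invariant to affine depth transformations. This is the generic recipe, not necessarily DiverseDepth's exact objective; the function name and masking convention are assumptions.

```python
import torch

def affine_invariant_loss(pred, gt, mask=None):
    """pred, gt: (B, H, W) depth maps; mask: optional (B, H, W) validity mask."""
    B = pred.shape[0]
    pred, gt = pred.reshape(B, -1), gt.reshape(B, -1)
    m = mask.reshape(B, -1).float() if mask is not None else torch.ones_like(pred)
    n = m.sum(dim=1).clamp(min=1)
    # Closed-form least squares for a, b in: a * pred + b ≈ gt (per sample).
    sp, sg = (m * pred).sum(1) / n, (m * gt).sum(1) / n
    spp, spg = (m * pred * pred).sum(1) / n, (m * pred * gt).sum(1) / n
    a = (spg - sp * sg) / (spp - sp * sp).clamp(min=1e-6)
    b = sg - a * sp
    aligned = a.unsqueeze(1) * pred + b.unsqueeze(1)
    return ((m * (aligned - gt).abs()).sum(1) / n).mean()
```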
Pre-trained models, e.g., from ImageNet, have proven to be effective in
boosting the performance of many downstream applications. It is too demanding
to acquire large-scale annotations to build such models for medical imaging.
Meanwhile, there are numerous clinical data (in the form of images and text
reports) stored in the hospital information systems. The paired image-text data
from the same patient study could be utilized for the pre-training task in a
weakly supervised manner. However, the integrity, accessibility, and amount of
such raw data vary across different institutes, e.g., paired vs. unpaired
(image-only or text-only). In this work, we introduce an image-text
pre-training framework that can learn from these raw data with mixed data
inputs, i.e., paired image-text data or a mixture of paired and unpaired data.
The unpaired data can be sourced from one or multiple institutes (e.g., images
from one institute coupled with texts from another). Specifically, we propose a
transformer-based training framework for jointly learning the representation of
both the image and text data. In addition to the existing masked language
modeling, multi-scale masked vision modeling is introduced as a self-supervised
training task for image patch regeneration. We not only demonstrate the
feasibility of pre-training across mixed data inputs but also illustrate the
benefits of adopting such pre-trained models in 3 chest X-ray applications,
i.e., classification, retrieval, and image regeneration. Superior results are
reported in comparison to prior art using MIMIC-CXR, NIH14-CXR, and OpenI-CXR
datasets. | [
"cs.CV"
] |
Reinforcement Learning has been able to solve many complicated robotics tasks
without any need for feature engineering in an end-to-end fashion. However,
learning the optimal policy directly from the sensory inputs, i.e., the
observations, often requires processing and storage of a huge amount of data.
In the context of robotics, the cost of data from real robotics hardware is
usually very high, thus solutions that achieve high sample-efficiency are
needed. We propose a method that aims at learning a mapping from the
observations into a lower-dimensional state space. This mapping is learned with
unsupervised learning using loss functions shaped to incorporate prior
knowledge of the environment and the task. Using the samples from the state
space, the optimal policy is quickly and efficiently learned. We test the
method on several mobile robot navigation tasks in a simulation environment and
also on a real robot. | [
"cs.LG",
"cs.AI"
] |
Best group subset selection aims to choose a small part of non-overlapping
groups to achieve the best interpretability on the response variable. It is
practically attractive for group variable selection; however, owing to its
computational intractability in high-dimensional settings, it has received
little attention. To fill this gap in efficient algorithms for best group
subset selection, in this paper we propose a group-splicing algorithm that
iteratively detects effective groups and excludes ineffective ones. Moreover,
coupled with a novel Bayesian group information criterion, an adaptive
algorithm is developed to determine the true group subset size. We prove that
our algorithms identify the optimal group subset in polynomial time under mild
conditions. We demonstrate the efficiency and accuracy of our proposal by
comparing against state-of-the-art algorithms on both
synthetic and real-world datasets. | [
"cs.LG",
"stat.ML"
] |
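A hypothetical toy of the splicing idea: keep a set of active groups and repeatedly exchange an active group for an inactive one whenever the swap lowers the least-squares loss. This toy fixes the group subset size and omits the paper's Bayesian group information criterion and complexity guarantees; all names are illustrative.

```python
import numpy as np

def group_loss(X, y, groups, active):
    """Residual sum of squares after regressing y on the active groups."""
    cols = np.concatenate([groups[g] for g in sorted(active)])
    beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    r = y - X[:, cols] @ beta
    return float(r @ r)

def splice(X, y, groups, k, max_iter=20):
    """groups: list of column-index arrays; k: number of active groups."""
    active = set(range(k))
    loss = group_loss(X, y, groups, active)
    for _ in range(max_iter):
        improved = False
        for g_out in list(active):
            for g_in in set(range(len(groups))) - active:
                cand = (active - {g_out}) | {g_in}
                cand_loss = group_loss(X, y, groups, cand)
                if cand_loss < loss - 1e-10:      # accept improving swap
                    active, loss, improved = cand, cand_loss, True
                    break
            if improved:
                break
        if not improved:
            break
    return sorted(active), loss
```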
Reinforcement learning (RL) can be used to learn treatment policies and aid
decision making in healthcare. However, given the need for generalization over
complex state/action spaces, the incorporation of function approximators (e.g.,
deep neural networks) requires model selection to reduce overfitting and
improve policy performance at deployment. Yet a standard validation pipeline
for model selection requires running a learned policy in the actual
environment, which is often infeasible in a healthcare setting. In this work,
we investigate a model selection pipeline for offline RL that relies on
off-policy evaluation (OPE) as a proxy for validation performance. We present
an in-depth analysis of popular OPE methods, highlighting the additional
hyperparameters and computational requirements (fitting/inference of auxiliary
models) when used to rank a set of candidate policies. We compare the utility
of different OPE methods as part of the model selection pipeline in the context
of learning to treat patients with sepsis. Among all the OPE methods we
considered, fitted Q evaluation (FQE) consistently leads to the best validation
ranking, but at a high computational cost. To balance this trade-off between
accuracy of ranking and computational efficiency, we propose a simple two-stage
approach to accelerate model selection by avoiding potentially unnecessary
computation. Our work serves as a practical guide for offline RL model
selection and can help RL practitioners select policies using real-world
datasets. To facilitate reproducibility and future extensions, the code
accompanying this paper is available online at
https://github.com/MLD3/OfflineRL_ModelSelection. | [
"cs.LG"
] |
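For readers unfamiliar with it, fitted Q evaluation is conceptually simple; a minimal sketch for discrete actions follows. It repeatedly regresses the Bellman target r + γ·Q(s', π(s')) onto (s, a), then scores the candidate policy by averaging Q over start states. The regressor choice and iteration count are exactly the extra hyperparameters the analysis above highlights; the dataset layout here is an assumption.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fqe(dataset, policy, gamma=0.99, n_iters=50):
    """dataset: dict of arrays s, a, r, s2, done; policy(states) -> actions."""
    s, a, r, s2, done = (dataset[k] for k in ("s", "a", "r", "s2", "done"))
    X, q = np.column_stack([s, a]), None
    for _ in range(n_iters):
        if q is None:
            target = r                       # first pass: immediate reward
        else:
            X2 = np.column_stack([s2, policy(s2)])
            target = r + gamma * (1 - done) * q.predict(X2)
        q = ExtraTreesRegressor(n_estimators=50).fit(X, target)
    return q   # policy value ≈ q.predict on (s0, policy(s0)), averaged
```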
Without a method designed specifically for imbalanced data classification,
artificial intelligence algorithms cannot easily recognize data from minority
classes. In general, the only way to handle imbalanced data is to modify an
existing algorithm under the assumption that the training data are imbalanced;
however, when the data are in fact balanced, this mostly produces a deficient
result. In this research, we propose a class expert generative adversarial
network (CE-GAN) as a solution for imbalanced data classification. CE-GAN is a
modified deep learning architecture that does not assume the training data are
imbalanced. Moreover, CE-GAN is designed to identify the character of each
class in more detail before the classification step. In this research, CE-GAN
is shown to give good performance for imbalanced data classification. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
We propose a variational approach to obtain super-resolution images from
multiple low-resolution frames extracted from video clips. First, the
displacements between the low-resolution frames and the reference frame are
computed by an optical flow algorithm. Then a low-rank model is used to
construct the reference frame in high-resolution by incorporating the
information of the low-resolution frames. The model has two terms: a 2-norm
data fidelity term and a nuclear-norm regularization term. Alternating
direction method of multipliers is used to solve the model. Comparisons of our
method with other models on synthetic and real video clips show that our
resulting images are more accurate with fewer artifacts. They also provide much
finer and more discernible details. | [
"cs.CV",
"65K10",
"G.1.6"
] |
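In assumed notation (the paper's precise operators may differ), the two-term model has the generic form below, where $X$ is the high-resolution reference frame, $y_k$ the $k$-th low-resolution frame, $W_k$ the warping operator built from the optical-flow displacements, $B$ a blur, and $D$ the down-sampling operator:

```latex
\min_{X}\; \frac{1}{2} \sum_{k} \bigl\| D B W_k X - y_k \bigr\|_2^2
          \;+\; \lambda \, \| X \|_{*}
```

Splitting the nuclear-norm term from the smooth quadratic term is what makes the alternating direction method of multipliers a natural solver for this objective.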
In recent years we have seen an upsurge in terror attacks around the world.
Such attacks usually happen in public places with large crowds to cause the
most damage possible and get the most attention. Even though surveillance
cameras are assumed to be a powerful tool, their effect in preventing crime is
far from clear, owing either to the limited ability of humans to vigilantly
monitor video surveillance or to the simple fact that the cameras operate
passively. In this paper, we present a weapon detection system based on an
ensemble of semantic Convolutional Neural Networks that decomposes the problem
of detecting and locating a weapon into a set of smaller problems concerned
with the individual component parts of a weapon. This approach has
computational and practical advantages: a set of simpler neural networks
dedicated to specific tasks requires less computational resources and can be
trained in parallel; the overall output of the system given by the aggregation
of the outputs of individual networks can be tuned by a user to trade off false
positives and false negatives; finally, according to ensemble theory, the
output of the overall system will be robust and reliable even in the presence
of weak individual models. We evaluated our system running simulations aimed at
assessing the accuracy of individual networks and the whole system. The results
on synthetic data and real-world data are promising, and they suggest that our
approach may have advantages compared to the monolithic approach based on a
single deep convolutional neural network. | [
"cs.CV",
"cs.LG",
"68T01",
"I.2.6; I.2.10; J.7"
] |
Increasing the scale of reinforcement learning experiments has allowed
researchers to achieve unprecedented results in both training sophisticated
agents for video games, and in sim-to-real transfer for robotics. Typically
such experiments rely on large distributed systems and require expensive
hardware setups, limiting wider access to this exciting area of research. In
this work we aim to solve this problem by optimizing the efficiency and
resource utilization of reinforcement learning algorithms instead of relying on
distributed computation. We present the "Sample Factory", a high-throughput
training system optimized for a single-machine setting. Our architecture
combines a highly efficient, asynchronous, GPU-based sampler with off-policy
correction techniques, allowing us to achieve throughput higher than $10^5$
environment frames/second on non-trivial control problems in 3D without
sacrificing sample efficiency. We extend Sample Factory to support self-play
and population-based training and apply these techniques to train highly
capable agents for a multiplayer first-person shooter game. The source code is
available at https://github.com/alex-petrenko/sample-factory | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Learning to flexibly follow task instructions in dynamic environments poses
interesting challenges for reinforcement learning agents. We focus here on the
problem of learning control flow that deviates from a strict step-by-step
execution of instructions -- that is, control flow that may skip forward over
parts of the instructions or return backward to previously completed or skipped
steps. Demand for such flexible control arises in two fundamental ways:
explicitly when control is specified in the instructions themselves (such as
conditional branching and looping) and implicitly when stochastic environment
dynamics require re-completion of instructions whose effects have been
perturbed, or opportunistic skipping of instructions whose effects are already
present. We formulate an attention-based architecture that meets these
challenges by learning, from task reward only, to flexibly attend to and
condition behavior on an internal encoding of the instructions. We test the
architecture's ability to learn both explicit and implicit control in two
illustrative domains -- one inspired by Minecraft and the other by StarCraft --
and show that the architecture exhibits zero-shot generalization to novel
instructions of length greater than those in a training set, at a performance
level unmatched by two baseline recurrent architectures and one ablation
architecture. | [
"cs.LG"
] |
Recent research has shown remarkable success in revealing "steering"
directions in the latent spaces of pre-trained GANs. These directions
correspond to semantically meaningful image transformations (e.g., shift, zoom,
color manipulations), and have similar interpretable effects across all
categories that the GAN can generate. Some methods focus on user-specified
transformations, while others discover transformations in an unsupervised
manner. However, all existing techniques rely on an optimization procedure to
expose those directions, and offer no control over the degree of allowed
interaction between different transformations. In this paper, we show that
"steering" trajectories can be computed in closed form directly from the
generator's weights without any form of training or optimization. This applies
to user-prescribed geometric transformations, as well as to unsupervised
discovery of more complex effects. Our approach allows determining both linear
and nonlinear trajectories, and has many advantages over previous methods. In
particular, we can control whether one transformation is allowed to come at the
expense of another (e.g. zoom-in with or without allowing translation to keep
the object centered). Moreover, we can determine the natural end-point of the
trajectory, which corresponds to the largest extent to which a transformation
can be applied without incurring degradation. Finally, we show how transferring
attributes between images can be achieved without optimization, even across
different categories. | [
"cs.CV"
] |
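One well-known closed-form recipe in this family (the SeFa-style one; the paper derives richer, possibly nonlinear trajectories that this toy does not reproduce) takes latent directions from the singular vectors of an early generator weight matrix. `G.mapping[0].weight` below is a hypothetical accessor, not a real API.

```python
import torch

def closed_form_directions(weight, k=5):
    """weight: (out_dim, latent_dim) first-layer matrix; returns k directions."""
    # Directions d maximizing ||W d|| are the top right-singular vectors of W.
    _, _, vh = torch.linalg.svd(weight, full_matrices=False)
    return vh[:k]                       # (k, latent_dim), unit-norm rows

# Usage (hypothetical accessor):
# z_edit = z + alpha * closed_form_directions(G.mapping[0].weight)[0]
```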
Most provably-efficient learning algorithms introduce optimism about
poorly-understood states and actions to encourage exploration. We study an
alternative approach for efficient exploration, posterior sampling for
reinforcement learning (PSRL). This algorithm proceeds in repeated episodes of
known duration. At the start of each episode, PSRL updates a prior distribution
over Markov decision processes and takes one sample from this posterior. PSRL
then follows the policy that is optimal for this sample during the episode. The
algorithm is conceptually simple, computationally efficient and allows an agent
to encode prior knowledge in a natural way. We establish an $\tilde{O}(\tau S
\sqrt{AT})$ bound on the expected regret, where $T$ is time, $\tau$ is the
episode length and $S$ and $A$ are the cardinalities of the state and action
spaces. This bound is one of the first for an algorithm not based on optimism,
and close to the state of the art for any reinforcement learning algorithm. We
show through simulation that PSRL significantly outperforms existing algorithms
with similar regret bounds. | [
"stat.ML",
"cs.LG"
] |
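A minimal tabular sketch of one PSRL episode, in a discounted toy variant with known rewards (the paper analyzes the episodic finite-horizon setting): sample a transition model from a Dirichlet posterior, solve it, and act greedily for the episode.

```python
import numpy as np

def psrl_episode(counts, R, gamma=0.95, n_vi=200, rng=None):
    """counts: (S, A, S) observed transition counts; R: (S, A) known rewards."""
    rng = rng or np.random.default_rng()
    S, A, _ = counts.shape
    # Posterior sample: each (s, a) row ~ Dirichlet(1 + counts[s, a]).
    P = np.stack([[rng.dirichlet(1 + counts[s, a]) for a in range(A)]
                  for s in range(S)])            # (S, A, S)
    V = np.zeros(S)
    for _ in range(n_vi):                        # value iteration on the sample
        Q = R + gamma * P @ V                    # (S, A)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)                      # greedy policy for this episode
```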
Many irregular domains such as social networks, financial transactions,
neuron connections, and natural language constructs are represented using graph
structures. In recent years, a variety of graph neural networks (GNNs) have
been successfully applied for representation learning and prediction on such
graphs. In many real-world applications, the underlying graph changes over
time; however, most of the existing GNNs are inadequate for handling such
dynamic graphs. In this paper we propose a novel technique for learning
embeddings of dynamic graphs using a tensor algebra framework. Our method
extends the popular graph convolutional network (GCN) for learning
representations of dynamic graphs using the recently proposed tensor M-product
technique. The theoretical results presented establish a connection between the
proposed tensor approach and spectral convolution of tensors. The proposed
method TM-GCN is consistent with the Message Passing Neural Network (MPNN)
framework, accounting for both spatial and temporal message passing. Numerical
experiments on real-world datasets demonstrate the performance of the proposed
method for edge classification and link prediction tasks on dynamic graphs. We
also consider an application related to the COVID-19 pandemic, and show how our
method can be used for early detection of infected individuals from contact
tracing data. | [
"cs.LG",
"stat.ML"
] |
We propose a metalearning approach for learning gradient-based reinforcement
learning (RL) algorithms. The idea is to evolve a differentiable loss function,
such that an agent, which optimizes its policy to minimize this loss, will
achieve high rewards. The loss is parametrized via temporal convolutions over
the agent's experience. Because this loss is highly flexible in its ability to
take into account the agent's history, it enables fast task learning. Empirical
results show that our evolved policy gradient algorithm (EPG) achieves faster
learning on several randomized environments compared to an off-the-shelf policy
gradient method. We also demonstrate that EPG's learned loss can generalize to
out-of-distribution test-time tasks, and exhibits qualitatively different
behavior from other popular metalearning algorithms. | [
"cs.LG",
"cs.AI"
] |
Despite their great success in recent years, deep neural networks (DNN) are
mainly black boxes where the results obtained by running through the network
are difficult to understand and interpret. Compared to, e.g., decision trees or
Bayesian classifiers, DNNs suffer from poor interpretability, where by
interpretability we mean that a human can easily derive the relations modeled
by the network. A reasonable way to provide interpretability for humans is
logical rules. In this paper we propose neural logic rule layers (NLRL) which
are able to represent arbitrary logic rules in terms of their conjunctive and
disjunctive normal forms. Using various NLRL within one layer and
correspondingly stacking various layers, we are able to represent arbitrarily
complex rules with the resulting neural network architecture. The NLRL are
end-to-end trainable, allowing logic rules to be learned directly from
available data sets. Experiments show that NLRL-enhanced neural networks can
learn to model arbitrarily complex logic and perform arithmetic operations over
the input values. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
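One common way to make conjunctions and disjunctions differentiable (the NLRL paper's exact parameterization may differ) is via membership weights m in [0, 1]: AND(x) = prod_i (1 - m_i * (1 - x_i)) and OR(x) = 1 - prod_i (1 - m_i * x_i), so m_i = 0 drops input i from a rule and m_i = 1 includes it. A sketch under these assumptions:

```python
import torch
import torch.nn as nn

class SoftLogicLayer(nn.Module):
    def __init__(self, in_dim, n_rules, kind="and"):
        super().__init__()
        self.logits = nn.Parameter(torch.randn(n_rules, in_dim))
        self.kind = kind

    def forward(self, x):                        # x: (B, in_dim) in [0, 1]
        m = torch.sigmoid(self.logits)           # memberships in (0, 1)
        x = x.unsqueeze(1)                       # (B, 1, in_dim), broadcasts
        if self.kind == "and":
            return (1 - m * (1 - x)).prod(dim=-1)    # (B, n_rules)
        return 1 - (1 - m * x).prod(dim=-1)

# Stacking an AND layer and then an OR layer expresses rules in DNF.
dnf = nn.Sequential(SoftLogicLayer(8, 16, "and"), SoftLogicLayer(16, 4, "or"))
```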
Standard deep reinforcement learning algorithms use a shared representation
for the policy and value function, especially when training directly from
images. However, we argue that more information is needed to accurately
estimate the value function than to learn the optimal policy. Consequently, the
use of a shared representation for the policy and value function can lead to
overfitting. To alleviate this problem, we propose two approaches which are
combined to create IDAAC: Invariant Decoupled Advantage Actor-Critic. First,
IDAAC decouples the optimization of the policy and value function, using
separate networks to model them. Second, it introduces an auxiliary loss which
encourages the representation to be invariant to task-irrelevant properties of
the environment. IDAAC shows good generalization to unseen environments,
achieving a new state-of-the-art on the Procgen benchmark and outperforming
popular methods on DeepMind Control tasks with distractors. Our implementation
is available at https://github.com/rraileanu/idaac. | [
"cs.LG",
"cs.AI"
] |
In recent years, graph neural networks (GNNs) have been widely adopted in the
representation learning of graph-structured data and provided state-of-the-art
performance in various applications such as link prediction, node
classification, and recommendation. Motivated by recent advances of
self-supervision for representation learning in natural language processing and
computer vision, self-supervised learning has been recently studied to leverage
unlabeled graph-structured data. However, employing self-supervision tasks as
auxiliary tasks to assist a primary task has been less explored in the
literature on graphs. In this paper, we propose a novel self-supervised
auxiliary learning framework to effectively learn graph neural networks.
Moreover, this work is the first study showing that a meta-path prediction is
beneficial as a self-supervised auxiliary task for heterogeneous graphs. Our
method learns to learn a primary task with various auxiliary tasks to
improve generalization performance. The proposed method identifies an effective
combination of auxiliary tasks and automatically balances them to improve the
primary task. Our methods can be applied to any graph neural network in a
plug-in manner without manual labeling or additional data. Also, it can be
extended to any other auxiliary tasks. Our experiments demonstrate that the
proposed method consistently improves the performance of node classification
and link prediction. | [
"cs.LG"
] |
In continuous action domains, standard deep reinforcement learning algorithms
like DDPG suffer from inefficient exploration when facing sparse or deceptive
reward problems. Conversely, evolutionary and developmental methods focusing on
exploration like Novelty Search, Quality-Diversity or Goal Exploration
Processes explore more robustly but are less efficient at fine-tuning policies
using gradient descent. In this paper, we present the GEP-PG approach, taking
the best of both worlds by sequentially combining a Goal Exploration Process
and two variants of DDPG. We study the learning performance of these components
and their combination on a low dimensional deceptive reward problem and on the
larger Half-Cheetah benchmark. We show that DDPG fails on the former and that
GEP-PG improves over the best DDPG variant in both environments. Supplementary
videos and discussion can be found at http://frama.link/gep_pg, the code at
http://github.com/flowersteam/geppg. | [
"cs.LG"
] |
We propose a fully convolutional one-stage object detector (FCOS) to solve
object detection in a per-pixel prediction fashion, analogous to semantic
segmentation. Almost all state-of-the-art object detectors such as RetinaNet,
SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes. In contrast,
our proposed detector FCOS is anchor box free, as well as proposal free. By
eliminating the predefined set of anchor boxes, FCOS completely avoids the
complicated computation related to anchor boxes such as calculating overlapping
during training. More importantly, we also avoid all hyper-parameters related
to anchor boxes, which are often very sensitive to the final detection
performance. With non-maximum suppression (NMS) as the only post-processing, FCOS
with ResNeXt-64x4d-101 achieves 44.7% in AP with single-model and single-scale
testing, surpassing previous one-stage detectors with the advantage of being
much simpler. For the first time, we demonstrate a much simpler and more
flexible detection framework achieving improved detection accuracy. We hope
that the proposed FCOS framework can serve as a simple and strong alternative
for many other instance-level tasks. Code is available at:
https://tinyurl.com/FCOSv1 | [
"cs.CV"
] |
Both high-level and high-resolution feature representations are of great
importance in various visual understanding tasks. To acquire high-resolution
feature maps with high-level semantic information, one common strategy is to
adopt dilated convolutions in the backbone networks to extract high-resolution
feature maps, such as the dilatedFCN-based methods for semantic segmentation.
However, because many convolution operations are conducted on the
high-resolution feature maps, such methods have large computational complexity
and memory consumption. In this paper, we propose a novel holistically-guided
decoder to obtain the high-resolution semantic-rich feature
maps via the multi-scale features from the encoder. The decoding is achieved
via novel holistic codeword generation and codeword assembly operations, which
take advantage of both the high-level and low-level features from the encoder
features. With the proposed holistically-guided decoder, we implement the
EfficientFCN architecture for semantic segmentation and HGD-FPN for object
detection and instance segmentation. The EfficientFCN achieves comparable or
even better performance than state-of-the-art methods with only 1/3 of their
computational costs for semantic segmentation on PASCAL Context, PASCAL VOC,
ADE20K datasets. Meanwhile, the proposed HGD-FPN achieves $>2\%$ higher mean
Average Precision (mAP) when integrated into several object detection
frameworks with ResNet-50 encoding backbones. | [
"cs.CV"
] |
While generative models have shown great success in generating
high-dimensional samples conditional on low-dimensional descriptors (learning
e.g. stroke thickness in MNIST, hair color in CelebA, or speaker identity in
Wavenet), their generation out-of-sample poses fundamental problems. The
conditional variational autoencoder (CVAE) as a simple conditional generative
model does not explicitly relate conditions during training and, hence, has no
incentive to learn a compact joint distribution across conditions. We
overcome this limitation by matching their distributions using maximum mean
discrepancy (MMD) in the decoder layer that follows the bottleneck. This
introduces a strong regularization both for reconstructing samples within the
same condition and for transforming samples across conditions, resulting in
much improved generalization. We refer to the architecture as
\emph{transformer} VAE (trVAE). Benchmarking trVAE on high-dimensional image
and tabular data, we demonstrate higher robustness and higher accuracy than
existing approaches. In particular, we show qualitatively improved predictions
for cellular perturbation response to treatment and disease based on
high-dimensional single-cell gene expression data, by tackling previously
problematic minority classes and multiple conditions. For generic tasks, we
improve Pearson correlations of high-dimensional estimated means and variances
with their ground truths from 0.89 to 0.97 and 0.75 to 0.87, respectively. | [
"cs.LG",
"eess.IV",
"q-bio.CB",
"q-bio.GN",
"stat.ML"
] |
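The distribution-matching penalty is a standard kernel MMD; a minimal estimator of the kind trVAE applies to the first decoder layer's activations is sketched below (the Gaussian kernel and bandwidth are illustrative choices, not necessarily the paper's).

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of MMD^2 between samples x: (n, d) and y: (m, d)."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())
```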
Electronic Health Records (EHR) are high-dimensional data with implicit
connections among thousands of medical concepts. These connections, for
instance, the co-occurrence of diseases and lab-disease correlations can be
informative when only a subset of these variables is documented by the
clinician. A feasible approach to improving the representation learning of EHR
data is to associate relevant medical concepts and utilize these connections.
Existing medical ontologies can be the reference for EHR structures, but they
place numerous constraints on the data source. Recent progress on graph neural
networks (GNN) enables end-to-end learning of topological structures for
non-grid or non-sequential data. However, open problems remain in
how to learn the medical graph adaptively and how to understand the effect of
the medical graph on representation learning. In this paper, we propose a
variationally regularized encoder-decoder graph network that achieves more
robustness in graph structure learning by regularizing node representations.
Our model outperforms the existing graph and non-graph based methods in various
EHR predictive tasks based on both public data and real-world clinical data.
Besides the improvements in empirical experiment performances, we provide an
interpretation of the effect of variational regularization compared to a
standard graph neural network, using singular value analysis. | [
"cs.LG",
"stat.ML"
] |
The objective of this paper is self-supervised learning from video, in
particular for representations for action recognition. We make the following
contributions: (i) We propose a new architecture and learning framework
Memory-augmented Dense Predictive Coding (MemDPC) for the task. It is trained
with a predictive attention mechanism over the set of compressed memories, such
that any future states can always be constructed by a convex combination of the
condensed representations, allowing multiple hypotheses to be made efficiently.
(ii) We investigate visual-only self-supervised video representation learning
from RGB frames, or from unsupervised optical flow, or both. (iii) We
thoroughly evaluate the quality of learnt representation on four different
downstream tasks: action recognition, video retrieval, learning with scarce
annotations, and unintentional action classification. In all cases, we
demonstrate state-of-the-art or comparable performance over other approaches
with orders of magnitude less training data. | [
"cs.CV"
] |
A physical modeling method, based on simulating and visualizing physical
principles, is introduced for shape extraction with active contours. The
objective of adopting this concept is to address several major difficulties in
the application of Active Contours. Primarily, a technique is developed to
realize topological changes of Parametric Active Contours (Snakes). The key
strategy is to imitate the process of a balloon expanding and filling a closed
space containing several objects. After removing the touching balloon surfaces,
the objects can be identified by the remaining balloon surfaces that surround
them. A burned region swept by the Snakes is utilized to trace the contour and
to give a criterion for stopping the movement of the Snake curve. When the
Snakes terminate their evolution entirely, ignoring this criterion allows a
connected area to be formed by evolving the Snakes again and continuing the
region burning. The contours extracted from the boundaries of the burned area
then represent the child snakes of the individual objects.
Secondly, a novel scheme is designed to solve the problems of leakage of the
contour from the large gaps, and the segmentation error in Geometric Active
Contours (GAC). It divides the segmentation procedure into two processing
stages. By simulating wave propagation in an isotropic medium at the final
stage, it can significantly enhance the effect of the image force in Level
Set-based GAC and gives satisfactory solutions to the two problems.
Thirdly, to support the physical models for active contours above, we introduce
a general image force field created on a template plane over the image plane.
This force is more adaptable to noisy images with complicated geometric shapes. | [
"cs.CV",
"cs.GR"
] |
Recent work has shown that biologically plausible Hebbian learning can be
integrated with backpropagation learning (backprop), when training deep
convolutional neural networks. In particular, it has been shown that Hebbian
learning can be used for training the lower or the higher layers of a neural
network. For instance, Hebbian learning is effective for re-training the higher
layers of a pre-trained deep neural network, achieving comparable accuracy
w.r.t. SGD, while requiring fewer training epochs, suggesting potential
applications for transfer learning. In this paper we build on these results and
we further improve Hebbian learning in these settings, by using a nonlinear
Hebbian Principal Component Analysis (HPCA) learning rule, in place of the
Hebbian Winner Takes All (HWTA) strategy used in previous work. We test this
approach in the context of computer vision. In particular, the HPCA rule is
used to train Convolutional Neural Networks in order to extract relevant
features from the CIFAR-10 image dataset. The HPCA variant that we explore
further improves the previous results, motivating further interest towards
biologically plausible learning algorithms. | [
"cs.CV"
] |
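A hedged sketch of a Sanger-style Hebbian PCA update in the spirit of the HPCA rule: each neuron subtracts the reconstruction formed by all neurons up to and including itself, dW_i = lr * y_i * (x - sum_{j<=i} y_j W_j) with y = f(Wx); the choice f = tanh below is illustrative (f = identity recovers the classical generalized Hebbian algorithm).

```python
import numpy as np

def hpca_step(W, x, lr=1e-3, f=np.tanh):
    """W: (k, d) weight rows; x: (d,) input sample. Returns updated W."""
    y = f(W @ x)                                  # (k,) nonlinear responses
    # Cumulative reconstruction: row i holds sum_{j<=i} y_j * W_j.
    recon = np.cumsum(y[:, None] * W, axis=0)     # (k, d)
    W += lr * y[:, None] * (x[None, :] - recon)   # Sanger-style Hebbian update
    return W
```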
We set out to capture data about hand movement in pen spinning using MediaPipe
Hands and OpenCV. The purpose is to create a system that can be used to
objectively evaluate performances in pen spinning competitions. Evaluating
execution, smoothness, and control in competitions is quite difficult and often
subjective. Therefore, we aimed to fully automate the process by using
objective numerical values for evaluation. Uncertainty still exists in
MediaPipe's skeletal recognition, and hands tend to be more difficult to
recognize against brightly colored backgrounds. However, we could improve the
recognition accuracy by changing the saturation and brightness in the program.
Furthermore, automatic detection and adjustment of brightness is now possible.
As the next step to systematize the evaluation of pen spinning using objective
numerical values, we adopted "hand movements". We were able to visualize the
ups and downs of the hand movements by calculating the standard deviation and
L2 norm of the hand's coordinates in each frame. The results of hand movements
are quite accurate, and we feel that it is a big step toward our goal. In the
future, we would like to make great efforts to fully automate the grading of
pen spinning. | [
"cs.CV"
] |
We survey in this article the connections between Machine Learning and
Control Theory. Control Theory provides useful concepts and tools for Machine
Learning. Conversely, Machine Learning can be used to solve large control
problems. In the first part of the paper, we develop the connections between
reinforcement learning and Markov Decision Processes, which are discrete time
control problems. In the second part, we review the concept of supervised
learning and its relation to static optimization. Deep learning, which extends
supervised learning, can be viewed as a control problem. In the third part, we
present the links between stochastic gradient descent and mean-field theory.
Conversely, in the fourth and fifth parts, we review machine learning
approaches to stochastic control problems, and focus on the deterministic case,
to explain, more easily, the numerical algorithms. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
We consider the task of photo-realistic unconditional image generation
(generate high quality, diverse samples that carry the same visual content as
the image) on mobile platforms using Generative Adversarial Networks (GANs). In
this paper, we propose a novel approach to trade-off image generation accuracy
of a GAN for the energy consumed (compute) at run-time called Scale-Energy
Tradeoff GAN (SETGAN). GANs usually take a long time to train and consume a
huge amount of memory, making them difficult to run on edge devices. The key
idea behind SETGAN for an image generation task is that, for a given input
image, we train a GAN on a remote server and use the trained model on edge
devices. We use
SinGAN, a single image unconditional generative model, that contains a pyramid
of fully convolutional GANs, each responsible for learning the patch
distribution at a different scale of the image. During the training process, we
determine the optimal number of scales for a given input image and the energy
constraint from the target edge device. Results show that with SETGAN's unique
client-server-based architecture, we were able to achieve a 56% gain in energy
for a loss of 3% to 12% SSIM accuracy. Also, with the parallel multi-scale
training, we obtain around 4x gain in training time on the server. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Time series sequence prediction and modelling has proven to be a challenging
endeavor on real-world datasets. Two key issues are the multi-dimensionality of
data and the interaction of independent dimensions forming a latent output
signal, as well as the representation of multi-dimensional temporal data inside
of a predictive model. This paper proposes a multi-branch deep neural network
approach to tackling the aforementioned problems by modelling a latent state
vector representation of data windows through the use of a recurrent
autoencoder branch and subsequently feeding the trained latent vector
representation into a predictor branch of the model. This model is henceforth
referred to as Multivariate Temporal Autoencoder (MvTAe). The framework in this
paper utilizes a synthetic multivariate temporal dataset which contains
dimensions that combine to create a hidden output target. | [
"cs.LG"
] |
We present a self-supervised Contrastive Video Representation Learning (CVRL)
method to learn spatiotemporal visual representations from unlabeled videos.
Our representations are learned using a contrastive loss, where two augmented
clips from the same short video are pulled together in the embedding space,
while clips from different videos are pushed away. We study what makes for good
data augmentations for video self-supervised learning and find that both
spatial and temporal information are crucial. We carefully design data
augmentations involving spatial and temporal cues. Concretely, we propose a
temporally consistent spatial augmentation method to impose strong spatial
augmentations on each frame of the video while maintaining the temporal
consistency across frames. We also propose a sampling-based temporal
augmentation method to avoid overly enforcing invariance on clips that are
distant in time. On Kinetics-600, a linear classifier trained on the
representations learned by CVRL achieves 70.4% top-1 accuracy with a
3D-ResNet-50 (R3D-50) backbone, outperforming ImageNet supervised pre-training
by 15.7% and SimCLR unsupervised pre-training by 18.8% using the same inflated
R3D-50. The performance of CVRL can be further improved to 72.9% with a larger
R3D-152 (2x filters) backbone, significantly closing the gap between
unsupervised and supervised video representation learning. Our code and models
will be available at
https://github.com/tensorflow/models/tree/master/official/. | [
"cs.CV",
"cs.LG"
] |
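The contrastive loss itself is the standard InfoNCE objective; a minimal version for one positive pair per video is sketched below (CVRL's full objective also uses the second clip's view as anchors; the temperature is an assumed value).

```python
import torch
import torch.nn.functional as F

def infonce(z1, z2, tau=0.1):
    """z1, z2: (B, d) embeddings of two augmented clips per video."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                    # (B, B) cosine similarities
    labels = torch.arange(z1.shape[0], device=z1.device)
    return F.cross_entropy(logits, labels)        # diagonal pairs are positives
```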
In this work we propose a novel self-attention mechanism model to address
electricity theft detection on an imbalanced realistic dataset containing
daily electricity consumption data provided by the State Grid Corporation of
China. Our
key contribution is the introduction of a multi-head self-attention mechanism
concatenated with dilated convolutions and unified by a convolution of kernel
size $1$. Moreover, we introduce a binary input channel (Binary Mask) to
identify the position of the missing values, allowing the network to learn how
to deal with these values. Our model achieves an AUC of $0.926$, which is an
improvement of more than $17\%$ over previous baseline work. The
code is available on GitHub at
https://github.com/neuralmind-ai/electricity-theft-detection-with-self-attention. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
Processing point clouds using deep neural networks is still a challenging
task. Most existing models focus on object detection and registration with deep
neural networks using point clouds. In this paper, we propose a deep model that
learns to estimate odometry in driving scenarios using point cloud data. The
proposed model consumes raw point clouds in order to extract frame-to-frame
odometry estimation through a hierarchical model architecture. Also, a local
bundle adjustment variation of this model using LSTM layers is implemented.
These two approaches are comprehensively evaluated and are compared against the
state-of-the-art. | [
"cs.CV",
"cs.CG",
"cs.RO"
] |
Semi-supervised learning is pervasive in real-world applications, where only
a few labeled data are available and large amounts of instances remain
unlabeled. Since AUC is an important model evaluation metric in classification,
directly optimizing AUC in semi-supervised learning scenario has drawn much
attention in the machine learning community. Recently, it has been shown that
one could find an unbiased solution for the semi-supervised AUC maximization
problem without knowing the class prior distribution. However, this method is
hardly scalable for nonlinear classification problems with kernels. To address
this problem, in this paper, we propose a novel scalable quadruply stochastic
gradient algorithm (QSG-S2AUC) for nonlinear semi-supervised AUC optimization.
In each iteration of the stochastic optimization process, our method randomly
samples a positive instance, a negative instance, an unlabeled instance and
their random features to compute the gradient and then update the model by
using this quadruply stochastic gradient to approach the optimal solution. More
importantly, we prove that QSG-S2AUC converges to the optimal solution at a
rate of O(1/t), where t is the iteration number. Extensive experimental results
on a
variety of benchmark datasets show that QSG-S2AUC is far more efficient than
the existing state-of-the-art algorithms for semi-supervised AUC maximization
while retaining similar generalization performance. | [
"cs.LG",
"stat.ML"
] |
Recent research has revealed that deep generative models including flow-based
models and variational autoencoders may assign higher likelihood to
out-of-distribution (OOD) data than to in-distribution (ID) data. However, we
cannot sample OOD data from the model. This counterintuitive phenomenon has
not been satisfactorily explained. In this paper, we prove theorems to
investigate the divergences in flow-based model and give two explanations to
the above phenomenon from divergence and geometric perspectives, respectively.
Based on our analysis, we propose two group anomaly detection methods.
Furthermore, we decompose the KL divergence and propose a point-wise anomaly
detection method. We have conducted extensive experiments on prevalent
benchmarks to evaluate our methods. For group anomaly detection (GAD), our
method can achieve near 100\% AUROC on all problems and has robustness against
data manipulations. In contrast, the state-of-the-art (SOTA) GAD method
performs no better than random guessing on challenging problems and can be
attacked by data manipulation in almost all cases. For point-wise anomaly
detection (PAD), our method is comparable to the SOTA PAD method on one
category of problems and outperforms the baseline significantly on another
category of problems. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
We propose a novel framework for multi-task reinforcement learning (MTRL).
Using a variational inference formulation, we learn policies that generalize
across both changing dynamics and goals. The resulting policies are
parametrized by shared parameters that allow for transfer between different
dynamics and goal conditions, and by task-specific latent-space embeddings that
allow for specialization to particular tasks. We show how the latent-spaces
enable generalization to unseen dynamics and goal conditions. Additionally,
policies equipped with such embeddings serve as a space of skills (or options)
for hierarchical reinforcement learning. Since we can change task dynamics and
goals independently, we name our framework Disentangled Skill Embeddings (DSE). | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Conditional kernel mean embeddings form an attractive nonparametric framework
for representing conditional means of functions, describing the observation
processes for many complex models. However, the recovery of the original
underlying function of interest whose conditional mean was observed is a
challenging inference task. We formalize deconditional kernel mean embeddings
as a solution to this inverse problem, and show that it can be naturally viewed
as a nonparametric Bayes' rule. Critically, we introduce the notion of task
transformed Gaussian processes and establish deconditional kernel means as
their posterior predictive mean. This connection provides Bayesian
interpretations and uncertainty estimates for deconditional kernel mean
embeddings, explains their regularization hyperparameters, and reveals a
marginal likelihood for kernel hyperparameter learning. These revelations
further enable practical applications such as likelihood-free inference and
learning sparse representations for big data. | [
"stat.ML",
"cs.LG"
] |
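For orientation, the standard empirical conditional mean embedding (the forward map whose inversion the abstract formalizes) has the form below, in assumed notation: $K_{XX}$ is the Gram matrix of the inputs $x_i$, $\mathbf{k}_X(x)$ the vector of evaluations $k(x_i, x)$, and $\lambda$ the regularization hyperparameter the abstract refers to.

```latex
\hat{\mu}_{Y \mid X = x} \;=\; \sum_{i=1}^{n} \beta_i(x)\, \phi(y_i),
\qquad
\boldsymbol{\beta}(x) \;=\; \bigl(K_{XX} + n\lambda I\bigr)^{-1} \mathbf{k}_X(x).
```

Deconditioning asks the reverse question: given such conditional means, recover the embedding of the original underlying function.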
Zero-Shot Learning (ZSL) aims at recognizing unseen categories by
leveraging auxiliary information, such as attribute embedding. Despite the
encouraging results achieved, prior ZSL approaches focus on improving the
discriminant power of seen-class features, yet have largely overlooked the
geometric structure of the samples and the prototypes. The subsequent
attribute-based generative adversarial network (GAN), as a result, also
neglects the topological information in sample generation and further yields
inferior performances in classifying the visual features of unseen classes. In
this paper, we introduce a novel structure-aware feature generation scheme,
termed as SA-GAN, to explicitly account for the topological structure in
learning both the latent space and the generative networks. Specifically, we
introduce a constraint loss to preserve the initial geometric structure when
learning a discriminative latent space, and carry out our GAN training with
additional supervising signals from a structure-aware discriminator and a
reconstruction module. The former supervision distinguishes fake and real
samples based on their affinity to class prototypes, while the latter aims to
reconstruct the original feature space from the generated latent space. This
topology-preserving mechanism enables our method to significantly enhance the
generalization capability on unseen classes and consequently improve the
classification performance. Experiments on four benchmarks demonstrate that the
proposed approach consistently outperforms the state of the art. Our code can
be found in the supplementary material and will also be made publicly
available. | [
"cs.CV"
] |
This work proposes a method for depth completion of sparse LiDAR data using a
convolutional neural network which can be used to generate semi-dense depth
maps and "almost" full 3D point-clouds with significantly lower root mean
squared error (RMSE) over state-of-the-art methods. We add an "Error
Prediction" unit to our network and present a novel and simple end-to-end
method that learns to predict an error-map of the depth regression task. An
"almost" dense high-confidence/low-variance point-cloud is more valuable for
safety-critical applications, particularly real-world autonomous driving, than
a full point-cloud with a high error rate and high error variance. Using our
predicted error-map, we demonstrate that by up-filling a LiDAR point cloud from
18,000 points to 285,000 points, versus 300,000 points for full depth, we can
reduce the RMSE error from 1004 to 399. This error is approximately 60% less
than the state-of-the-art and 50% less than the state-of-the-art with RGB
guidance (we did not use RGB guidance in our algorithm). In addition to
analyzing our results on the KITTI depth completion dataset, we also demonstrate
the ability of our proposed method to extend to new tasks by deploying our
"Error Prediction" unit to improve upon the state-of-the-art for monocular
depth estimation. Codes and demo videos are available at
http://github.com/hekmak/Conf-net. | [
"cs.CV",
"cs.LG"
] |
We present SentiMATE, a novel end-to-end Deep Learning model for Chess,
employing Natural Language Processing that aims to learn an effective
evaluation function assessing move quality. This function is pre-trained on the
sentiment of commentary associated with the training moves and is used to guide
and optimize the agent's game-playing decision making. The contributions of
this research are three-fold: we build and put forward both a classifier which
extracts commentary describing the quality of Chess moves in vast commentary
datasets, and a Sentiment Analysis model trained on Chess commentary to
accurately predict the quality of said moves, to then use those predictions to
evaluate the optimal next move of a Chess agent. Both classifiers achieve over
90% classification accuracy. Lastly, we present a Chess engine, SentiMATE,
which evaluates Chess moves based on a pre-trained sentiment evaluation
function. Our results provide strong evidence to support our initial hypothesis
- "Can Natural Language Processing be used to train a novel and sample
efficient evaluation function in Chess Engines?" - as we integrate our
evaluation function into modern Chess engines and play against agents with
traditional Chess move evaluation functions, beating both random agents and a
DeepChess implementation at a level-one search depth - representing the number
of moves a traditional Chess agent (employing the alpha-beta search algorithm)
looks ahead in order to evaluate a given chess state. | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
Softmax is widely used in neural networks for multiclass classification, gate
structure and attention mechanisms. The statistical assumption that the input
is normally distributed supports the gradient stability of Softmax. However,
when used in attention mechanisms such as transformers, since the correlation
scores between embeddings are often not normally distributed, the gradient
vanishing problem appears, and we confirm this point experimentally. In this
work, we suggest replacing the exponential function with periodic functions,
and we delve into some potential periodic alternatives of Softmax from the
perspective of value and gradient. Through experiments on a simply designed
demo based on LeViT, our method is shown to alleviate the
gradient problem and yield substantial improvements compared to Softmax and its
variants. Further, we analyze the impact of pre-normalization for Softmax and
our methods through mathematics and experiments. Lastly, we increase the depth
of the demo and prove the applicability of our method in deep structures. | [
"cs.CV",
"cs.LG"
] |
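A toy numerical illustration of the saturation issue and the proposed substitution; the specific periodic weight function sin(x) + 1 + eps is an assumption for the demo, not the paper's proposal.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def periodic_softmax(x, eps=1e-3):
    w = np.sin(x) + 1.0 + eps         # periodic, strictly positive weights
    return w / w.sum()

x = np.array([3.0, 10.0, 30.0])       # widely spread, non-normalized scores
print(softmax(x))                     # ≈ [0, 0, 1]: exp saturates, gradients vanish
print(periodic_softmax(x))            # bounded weights keep gradients alive
```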
Deep convolutional neural networks achieve excellent image up-sampling
performance. However, CNN-based methods tend to depend heavily on traditional
interpolations (e.g. bicubic) when restoring high-resolution results. In this paper,
we present a deep sampling network (DSN) for down-sampling and up-sampling
without any cheap interpolation. First, the down-sampling subnetwork is trained
without supervision, thereby preserving more information and producing better
visual effects in the low-resolution image. Second, the up-sampling subnetwork
learns a sub-pixel residual with dense connections to accelerate convergence
and improve performance. DSN's down-sampling subnetwork can be used to generate
photo-realistic low-resolution images and replace the traditional
down-sampling method in image processing. With the powerful down-sampling
process, the co-trained DSN sets a new state-of-the-art performance for image
super-resolution. Moreover, DSN is compatible with existing image codecs to
improve image compression. | [
"cs.CV"
] |
While cloud/sky image segmentation has extensive real-world applications, a
large amount of labelled data is needed to train highly accurate models to
perform the task. The scarcity of such volumes of cloud/sky images with
corresponding ground-truth binary maps makes it highly difficult to train such
complex image segmentation models. In this paper, we demonstrate the
effectiveness of using Generative Adversarial Networks (GANs) to generate data
to augment the training set in order to increase the prediction accuracy of
the image segmentation model. We further present a way to estimate ground-truth
binary maps for the GAN-generated images to facilitate their effective use as
augmented images. Finally, we validate our work with different statistical
techniques. | [
"cs.CV",
"eess.IV"
] |
One practice of employing deep neural networks is to apply the same
architecture to all the input instances. However, a fixed architecture may not
be representative enough for data with high diversity. To promote the model
capacity, existing approaches usually employ larger convolutional kernels or
deeper network structure, which may increase the computational cost. In this
paper, we address this issue by proposing the Dynamic Graph Network (DG-Net). The
network learns the instance-aware connectivity, which creates different forward
paths for different instances. Specifically, the network is initialized as a
complete directed acyclic graph, where the nodes represent convolutional blocks
and the edges represent the connection paths. We generate edge weights by a
learnable module \textit{router} and select the edges whose weights are larger
than a threshold, to adjust the connectivity of the neural network structure.
Instead of using the same path of the network, DG-Net aggregates features
dynamically in each node, which allows the network to have more representation
ability. To facilitate the training, we represent the network connectivity of
each sample in an adjacency matrix. The matrix is updated to aggregate features
in the forward pass, cached in the memory, and used for gradient computing in
the backward pass. We verify the effectiveness of our method with several
static architectures, including MobileNetV2, ResNet, ResNeXt, and RegNet.
Extensive experiments are performed on ImageNet classification and COCO object
detection, which shows the effectiveness and generalization ability of our
approach. | [
"cs.CV"
] |
Point clouds acquired from scanning devices are often perturbed by noise,
which affects downstream tasks such as surface reconstruction and analysis. The
distribution of a noisy point cloud can be viewed as the distribution of a set
of noise-free samples $p(x)$ convolved with some noise model $n$, leading to
$(p * n)(x)$ whose mode is the underlying clean surface. To denoise a noisy
point cloud, we propose to increase the log-likelihood of each point from $p *
n$ via gradient ascent -- iteratively updating each point's position. Since $p
* n$ is unknown at test-time, and we only need the score (i.e., the gradient of
the log-probability function) to perform gradient ascent, we propose a neural
network architecture to estimate the score of $p * n$ given only noisy point
clouds as input. We derive objective functions for training the network and
develop a denoising algorithm leveraging on the estimated scores. Experiments
demonstrate that the proposed model outperforms state-of-the-art methods under
a variety of noise models, and shows the potential to be applied in other tasks
such as point cloud upsampling. The code is available at
\url{https://github.com/luost26/score-denoise}. | [
"cs.CV"
] |
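The gradient-ascent denoising loop described above is simple enough to sketch.
This assumes a trained `score_net` that returns per-point estimates of the
gradient of log (p * n); the step size and iteration count are illustrative:

```python
import torch

def denoise(points, score_net, step_size=0.2, num_steps=30):
    """Iteratively move each point uphill on the estimated log-likelihood."""
    x = points.clone()
    for _ in range(num_steps):
        with torch.no_grad():
            score = score_net(x)        # (N, 3): estimated gradient of log(p * n)
        x = x + step_size * score       # gradient ascent toward the clean surface
    return x
```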
We present a new practical framework based on deep reinforcement learning and
decision-time planning for real-world vehicle repositioning on ride-hailing (a
type of mobility-on-demand, MoD) platforms. Our approach learns the
spatiotemporal state-value function using a batch training algorithm with deep
value networks. The optimal repositioning action is generated on-demand through
value-based policy search, which combines planning and bootstrapping with the
value networks. For the large-fleet problems, we develop several algorithmic
features that we incorporate into our framework and that we demonstrate to
induce coordination among the algorithmically-guided vehicles. We benchmark our
algorithm with baselines in a ride-hailing simulation environment to
demonstrate its superiority in improving income efficiency measured by
income-per-hour. We have also designed and run a real-world experiment program
with regular drivers on a major ride-hailing platform. We have observed
significantly positive results on key metrics comparing our method with
experienced drivers who performed idle-time repositioning based on their own
expertise. | [
"cs.LG",
"cs.AI",
"cs.MA"
] |
Machine Learning models should ideally be compact and robust. Compactness
provides efficiency and comprehensibility whereas robustness provides
resilience. Both topics have been studied in recent years but in isolation.
Here we present a robust model compression scheme which is independent of model
types: it can compress ensembles, neural networks and other types of models
into diverse types of small models. The main building block is the notion of
depth derived from robust statistics. Originally, depth was introduced as a
measure of the centrality of a point in a sample such that the median is the
deepest point. This concept was extended to classification functions which
makes it possible to define the depth of a hypothesis and the median
hypothesis. Algorithms have been suggested to approximate the median but they
have been limited to binary classification. In this study, we present a new
algorithm, the Multiclass Empirical Median Optimization (MEMO) algorithm that
finds a deep hypothesis in multi-class tasks, and prove its correctness. This
leads to our Compact Robust Estimated Median Belief Optimization (CREMBO)
algorithm for robust model compression. We demonstrate the success of this
algorithm empirically by compressing neural networks and random forests into
small decision trees, which are interpretable models, and show that they are
more accurate and robust than other comparable methods. In addition, our
empirical study shows that our method outperforms Knowledge Distillation on DNN
to DNN compression. | [
"cs.LG"
] |
Near infrared (NIR) imaging has been widely applied in low-light imaging
scenarios; however, it is difficult for humans and algorithms to perceive the
real scene in the colorless NIR domain. While Generative Adversarial Network
(GAN) has been widely employed in various image colorization tasks, it is
challenging for a direct mapping mechanism, such as a conventional GAN, to
transform an image from the NIR to the RGB domain with correct semantic
reasoning, well-preserved textures, and vivid color combinations concurrently.
In this work, we propose a novel Attention-based NIR image colorization
framework via Adaptive Fusion of Semantic and Texture clues, aiming at
achieving these goals within the same framework. The tasks of texture transfer
and semantic reasoning are carried out in two separate network blocks.
Specifically, the Texture Transfer Block (TTB) aims at extracting texture
features from the NIR image's Laplacian component and transferring them for
subsequent color fusion. The Semantic Reasoning Block (SRB) extracts semantic
clues and maps the NIR pixel values to the RGB domain. Finally, a Fusion
Attention Block (FAB) is proposed to adaptively fuse the features from the two
branches and generate an optimized colorization result. In order to enhance the
network's learning capacity in semantic reasoning as well as mapping precision
in texture transfer, we have proposed the Residual Coordinate Attention Block
(RCAB), which incorporates coordinate attention into a residual learning
framework, enabling the network to capture long-range dependencies along the
channel direction while preserving precise positional information along the
spatial directions. RCAB is also incorporated into FAB to facilitate
accurate texture alignment during fusion. Both quantitative and qualitative
evaluations show that the proposed method outperforms state-of-the-art NIR
image colorization methods. | [
"cs.CV"
] |
A multi-scale greedy-based object proposal generation approach is presented.
Based on the multi-scale nature of objects in images, our approach is built on
top of a hierarchical segmentation. We first identify the representative and
diverse exemplar clusters within each scale by using a diversity ranking
algorithm. Object proposals are obtained by selecting a subset from the
multi-scale segment pool via maximizing a submodular objective function, which
consists of a weighted coverage term, a single-scale diversity term and a
multi-scale reward term. The weighted coverage term forces the selected set of
object proposals to be representative and compact; the single-scale diversity
term encourages choosing segments from different exemplar clusters so that they
will cover as many object patterns as possible; the multi-scale reward term
encourages the selected proposals to be discriminative and selected from
multiple layers generated by the hierarchical image segmentation. The
experimental results on the Berkeley Segmentation Dataset and PASCAL VOC2012
segmentation dataset demonstrate the accuracy and efficiency of our object
proposal model. Additionally, we validate our object proposals in simultaneous
segmentation and detection and outperform the state-of-art performance. | [
"cs.CV"
] |
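Maximizing a submodular objective of the kind described above is typically done
greedily, which enjoys a (1 - 1/e) approximation guarantee for monotone
submodular functions. A minimal sketch, where `objective` stands in for the
weighted sum of the coverage, diversity, and multi-scale reward terms:

```python
def greedy_select(segments, objective, budget):
    """Pick `budget` segments, each time adding the one with largest marginal gain."""
    selected = []
    candidates = set(range(len(segments)))
    current = objective(selected)
    while len(selected) < budget and candidates:
        gains = {i: objective(selected + [i]) - current for i in candidates}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:            # no positive marginal gain left
            break
        selected.append(best)
        candidates.remove(best)
        current += gains[best]
    return selected
```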
Visual voice activity detection (V-VAD) uses visual features to predict
whether a person is speaking or not. V-VAD is useful whenever audio VAD (A-VAD)
is inefficient either because the acoustic signal is difficult to analyze or
because it is simply missing. We propose two deep architectures for V-VAD, one
based on facial landmarks and one based on optical flow. Moreover, available
datasets, used for learning and for testing V-VAD, lack content variability. We
introduce a novel methodology to automatically create and annotate very large
datasets in-the-wild -- WildVVAD -- based on combining A-VAD with face
detection and tracking. A thorough empirical evaluation shows the advantage of
training the proposed deep V-VAD models with this dataset. | [
"cs.CV"
] |
The rise of collaborative robotics has led to a wide range of sensor
technologies to detect human-machine interactions: at short distances,
proximity sensors detect nontactile gestures virtually occlusion-free, while at
medium distances, active depth sensors are frequently used to infer human
intentions. We describe an optical system for large workspaces to capture human
pose based on a single panoramic color camera. Despite the two-dimensional
input, our system is able to predict metric 3D pose information over larger
fields of view than would be possible with active depth measurement cameras. We
merge posture context with proximity perception to reduce occlusions and
improve accuracy at long distances. We demonstrate the capabilities of our
system in two use cases involving multiple humans and robots. | [
"cs.CV",
"cs.HC",
"cs.RO"
] |
Not all neural network architectures are created equal; some perform much
better than others for certain tasks. But how important are the weight
parameters of a neural network compared to its architecture? In this work, we
question to what extent neural network architectures alone, without learning
any weight parameters, can encode solutions for a given task. We propose a
search method for neural network architectures that can already perform a task
without any explicit weight training. To evaluate these networks, we populate
the connections with a single shared weight parameter sampled from a uniform
random distribution, and measure the expected performance. We demonstrate that
our method can find minimal neural network architectures that can perform
several reinforcement learning tasks without weight training. On a supervised
learning domain, we find network architectures that achieve much higher than
chance accuracy on MNIST using random weights. Interactive version of this
paper at https://weightagnostic.github.io/ | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
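The evaluation protocol described above, populating all connections with a
single shared weight and measuring expected performance, can be sketched as
follows; `build_network`, `evaluate`, and the sampling range are placeholder
assumptions:

```python
import numpy as np

def expected_performance(build_network, evaluate, num_samples=10,
                         low=-2.0, high=2.0):
    """Score an architecture by its mean performance over shared-weight samples."""
    scores = []
    for _ in range(num_samples):
        w = np.random.uniform(low, high)     # one shared weight for all connections
        net = build_network(shared_weight=w)
        scores.append(evaluate(net))
    return float(np.mean(scores))            # architectures are ranked by this mean
```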
Graph Neural Networks (GNNs), which generalize the deep neural networks to
graph-structured data, have achieved great success in modeling graphs. However,
as an extension of deep learning for graphs, GNNs lack explainability, which
largely limits their adoption in scenarios that demand the transparency of
models. Though many efforts have been made to improve the explainability of deep
learning, they mainly focus on i.i.d. data and cannot be directly applied to
explain the predictions of GNNs, because GNNs utilize both node features and
graph topology to make predictions. There are only very few works on the
explainability of GNNs, and they focus on post-hoc explanations. Since post-hoc
explanations are not directly obtained from the GNNs, they can be biased and
misrepresent the true explanations. Therefore, in this paper, we study a novel
problem of self-explainable GNNs which can simultaneously give predictions and
explanations. We propose a new framework which can find $K$-nearest labeled
nodes for each unlabeled node to give explainable node classification, where
nearest labeled nodes are found by interpretable similarity module in terms of
both node similarity and local structure similarity. Extensive experiments on
real-world and synthetic datasets demonstrate the effectiveness of the proposed
framework for explainable node classification. | [
"cs.LG"
] |
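A minimal sketch of the K-nearest-labeled-nodes idea above, assuming node
embeddings `h` and a dense adjacency matrix. The concrete similarity choices
(cosine over raw and neighborhood-averaged features) and the mixing weight
`alpha` are illustrative stand-ins for the paper's interpretable similarity
module:

```python
import torch
import torch.nn.functional as F

def knn_explain(h, adj, labeled_idx, labels, k=3, alpha=0.5):
    """Predict via the K most similar labeled nodes, which also explain the prediction."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    h_nbr = adj @ h / deg                          # neighborhood-averaged features
    feat_sim = F.normalize(h, dim=1) @ F.normalize(h[labeled_idx], dim=1).T
    struct_sim = F.normalize(h_nbr, dim=1) @ F.normalize(h_nbr[labeled_idx], dim=1).T
    sim = alpha * feat_sim + (1 - alpha) * struct_sim
    topk = sim.topk(k, dim=1)                      # the K most similar labeled nodes
    pred = labels[labeled_idx][topk.indices].mode(dim=1).values
    return pred, topk.indices                      # prediction + its explanation
```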
The unprecedented increase in the usage of computer vision technology in
society goes hand in hand with an increased concern in data privacy. In many
real-world scenarios like people tracking or action recognition, it is
important to be able to process the data while taking careful consideration in
protecting people's identity. We propose and develop CIAGAN, a model for image
and video anonymization based on conditional generative adversarial networks.
Our model is able to remove the identifying characteristics of faces and bodies
while producing high-quality images and videos that can be used for any
computer vision task, such as detection or tracking. Unlike previous methods,
we have full control over the de-identification (anonymization) procedure,
ensuring both anonymization as well as diversity. We compare our method to
several baselines and achieve state-of-the-art results. | [
"cs.CV"
] |
Exploration is an essential component of reinforcement learning algorithms,
where agents need to learn how to predict and control unknown and often
stochastic environments. Reinforcement learning agents depend crucially on
exploration to obtain informative data for the learning process as the lack of
enough information could hinder effective learning. In this article, we provide
a survey of modern exploration methods in (sequential) reinforcement learning,
as well as a taxonomy of exploration methods. | [
"cs.LG",
"cs.AI"
] |
A neural radiance field (NeRF) is a scene model supporting high-quality view
synthesis, optimized per scene. In this paper, we explore enabling user editing
of a category-level NeRF - also known as a conditional radiance field - trained
on a shape category. Specifically, we introduce a method for propagating coarse
2D user scribbles to the 3D space, to modify the color or shape of a local
region. First, we propose a conditional radiance field that incorporates new
modular network components, including a shape branch that is shared across
object instances. Observing multiple instances of the same category, our model
learns underlying part semantics without any supervision, thereby allowing the
propagation of coarse 2D user scribbles to the entire 3D region (e.g., chair
seat). Next, we propose a hybrid network update strategy that targets specific
network components, which balances efficiency and accuracy. During user
interaction, we formulate an optimization problem that both satisfies the
user's constraints and preserves the original object structure. We demonstrate
our approach on various editing tasks over three shape datasets and show that
it outperforms prior neural editing approaches. Finally, we edit the appearance
and shape of a real photograph and show that the edit propagates to
extrapolated novel views. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
Deep CNN-based methods have so far achieved state-of-the-art results in
multi-view 3D object reconstruction. Despite the considerable progress, the two
core modules of these methods - multi-view feature extraction and fusion, are
usually investigated separately, and the object relations in different views
are rarely explored. In this paper, inspired by the recent great success in
self-attention-based Transformer models, we reformulate the multi-view 3D
reconstruction as a sequence-to-sequence prediction problem and propose a new
framework named 3D Volume Transformer (VolT) for such a task. Unlike previous
CNN-based methods using a separate design, we unify the feature extraction and
view fusion in a single Transformer network. A natural advantage of our design
lies in the exploration of view-to-view relationships using self-attention
among multiple unordered inputs. On ShapeNet - a large-scale 3D reconstruction
benchmark dataset, our method achieves a new state-of-the-art accuracy in
multi-view reconstruction with fewer parameters ($70\%$ less) than other
CNN-based methods. Experimental results also suggest the strong scaling
capability of our method. Our code will be made publicly available. | [
"cs.CV"
] |
A method to predict time-series using multiple deep learners and a Bayesian
network is proposed. In this study, the input explanatory variables serve as
Bayesian network nodes that are associated with the learners. Training data are
divided using K-means clustering, and multiple deep learners are trained
depending on the cluster. A Bayesian network is used to determine which deep
learner is in charge of predicting a time-series. We determine a threshold
value and select learners with a posterior probability equal to or greater than
the threshold value, which could facilitate more robust prediction. The
proposed method is applied to financial time-series data, and the predicted
results for the Nikkei 225 index are demonstrated. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
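The routing step described above, selecting which learners are in charge of a
prediction via a posterior threshold, can be sketched as follows. A softmax
over distances to K-means centers stands in for the paper's Bayesian network
posterior, so that part is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def route_and_predict(x, kmeans, learners, threshold=0.2):
    """Combine the learners whose (proxy) posterior for x clears the threshold."""
    d = np.linalg.norm(kmeans.cluster_centers_ - x, axis=1)
    post = np.exp(-d) / np.exp(-d).sum()       # stand-in for the BN posterior
    active = np.where(post >= threshold)[0]
    if active.size == 0:                       # fall back to the closest cluster
        active = np.array([np.argmin(d)])
    preds = [learners[i].predict(x[None])[0] for i in active]
    return np.average(preds, weights=post[active])

# Usage: kmeans = KMeans(n_clusters=5).fit(X_train), one learner per cluster.
```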
Unsupervised learning poses one of the most difficult challenges in computer
vision today. The task has an immense practical value with many applications in
artificial intelligence and emerging technologies, as large quantities of
unlabeled videos can be collected at relatively low cost. In this paper, we
address the unsupervised learning problem in the context of detecting the main
foreground objects in single images. We train a student deep network to predict
the output of a teacher pathway that performs unsupervised object discovery in
videos or large image collections. Our approach is different from published
methods on unsupervised object discovery. We move the unsupervised learning
phase to training time; at test time we apply the standard
feed-forward processing along the student pathway. This strategy has the
benefit of allowing increased generalization possibilities during training,
while remaining fast at testing. Our unsupervised learning algorithm can run
over several generations of student-teacher training. Thus, a group of student
networks trained in the first generation collectively create the teacher at the
next generation. In experiments our method achieves top results on three
current datasets for object discovery in video, unsupervised image segmentation
and saliency detection. At test time the proposed system is fast, being one to
two orders of magnitude faster than published unsupervised methods. | [
"cs.CV"
] |
Detecting and localizing objects in the real 3D space, which plays a crucial
role in scene understanding, is particularly challenging given only a single
RGB image due to the geometric information loss during imagery projection. We
propose MonoGRNet for the amodal 3D object detection from a monocular RGB image
via geometric reasoning in both the observed 2D projection and the unobserved
depth dimension. MonoGRNet is a single, unified network composed of four
task-specific subnetworks, responsible for 2D object detection, instance depth
estimation (IDE), 3D localization and local corner regression. Unlike the
pixel-level depth estimation that needs per-pixel annotations, we propose a
novel IDE method that directly predicts the depth of the targeting 3D bounding
box's center using sparse supervision. The 3D localization is further achieved
by estimating the position in the horizontal and vertical dimensions. Finally,
MonoGRNet is jointly learned by optimizing the locations and poses of the 3D
bounding boxes in the global context. We demonstrate that MonoGRNet achieves
state-of-the-art performance on challenging datasets. | [
"cs.CV"
] |
The way features propagate in Fully Convolutional Networks is critically
important for capturing multi-scale contexts to obtain precise segmentation
masks. This paper proposes a novel series-parallel hybrid paradigm called the
Chained Context Aggregation Module (CAM) to diversify feature propagation. CAM
gains features of various spatial scales through chain-connected ladder-style
information flows and fuses them in a two-stage process, namely pre-fusion and
re-fusion. The serial flow continuously increases receptive fields of output
neurons and those in parallel encode different region-based contexts. Each
information flow is a shallow encoder-decoder with appropriate down-sampling
scales to sufficiently capture contextual information. We further adopt an
attention model in CAM to guide feature re-fusion. Based on these developments,
we construct the Chained Context Aggregation Network (CANet), which employs an
asymmetric decoder to recover precise spatial details of prediction maps. We
conduct extensive experiments on six challenging datasets, including Pascal VOC
2012, Pascal Context, Cityscapes, CamVid, SUN-RGBD and GATECH. The results show
that CANet achieves state-of-the-art performance. | [
"cs.CV"
] |
Leveraging advances in natural language processing, most recent scene
text recognizers adopt an encoder-decoder architecture where text images are
first converted to representative features and then a sequence of characters
via `direct decoding'. However, scene text images suffer from rich noises of
different sources such as complex background and geometric distortions which
often confuse the decoder and lead to incorrect alignment of visual features at
noisy decoding time steps. This paper presents I2C2W, a novel scene text
recognizer that is accurate and tolerant to various noises in scenes. I2C2W
consists of an image-to-character module (I2C) and a character-to-word module
(C2W) which are complementary and can be trained end-to-end. I2C detects
characters and predicts their relative positions in a word. It strives to
detect all possible characters including incorrect and redundant ones based on
different alignments of visual features without the restriction of time steps.
Taking the detected characters as input, C2W learns from character semantics
and their positions to filter out incorrect and redundant detection and produce
the final word recognition. Extensive experiments over seven public datasets
show that I2C2W achieves superior recognition performance and outperforms the
state-of-the-art by large margins on challenging irregular scene text datasets. | [
"cs.CV"
] |
In this work, we study the image transformation problem by learning the
underlying transformations from a collection of images using Generative
Adversarial Networks (GANs). Specifically, we propose an unsupervised learning
framework, termed as TrGAN, to project images onto a transformation space that
is shared by the generator and the discriminator. Any two points in this
projected space define a transformation that can guide the image generation
process, leading to continuous semantic change. By projecting a pair of images
onto the transformation space, we are able to adequately extract the semantic
variation between them and further apply the extracted semantic to facilitating
image editing, including not only transferring image styles (e.g., changing day
to night) but also manipulating image contents (e.g., adding clouds in the
sky). Code and models are available at https://genforce.github.io/trgan. | [
"cs.CV"
] |
In this paper, we propose a \textbf{Tr}ansformer-based RGB-D
\textbf{e}gocentric \textbf{a}ction \textbf{r}ecognition framework, called
Trear. It consists of two modules, inter-frame attention encoder and
mutual-attentional fusion block. Instead of using optical flow or recurrent
units, we adopt a self-attention mechanism to model the temporal structure of the
data from different modalities. Input frames are cropped randomly to mitigate
the effect of the data redundancy. Features from each modality are interacted
through the proposed fusion block and combined through a simple yet effective
fusion operation to produce a joint RGB-D representation. Empirical experiments
on two large egocentric RGB-D datasets, THU-READ and FPHA, and one small
dataset, WCVS, have shown that the proposed method outperforms the
state-of-the-art results by a large margin. | [
"cs.CV"
] |
Person re-identification (ReID) is aimed at identifying the same person
across videos captured from different cameras. Since networks that extract
global features with ordinary architectures struggle to capture local features
due to their weak attention mechanisms, researchers have proposed many
elaborately designed ReID networks; while these greatly improve accuracy, the
model size and feature extraction latency also soar. We argue that a relatively
compact ordinary network extracting
globally pooled features has the capability to extract discriminative local
features and can achieve state-of-the-art precision if only the model's
parameters are properly learnt. In order to reduce the difficulty in learning
hard identity labels, we propose a novel knowledge distillation method:
Factorized Distillation, which factorizes both feature maps and retrieval
features of holistic ReID network to mimic representations of multiple partial
ReID models, thus transferring the knowledge from partial ReID models to the
holistic network. Experiments show that the performance of model trained with
the proposed method can outperform state-of-the-art with relatively few network
parameters. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Deep learning has driven a great progress in natural and biological image
processing. However, in material science and engineering, there are often some
flaws and indistinctions in material microscopic images induced from complex
sample preparation, even due to the material itself, hindering the detection of
target objects. In this work, we propose WPU-net that redesigns the
architecture and weighted loss of U-Net, which forces the network to integrate
information from adjacent slices and pays more attention to the topology in
boundary detection task. Then, WPU-net is applied to a typical material
example, i.e., the grain boundary detection of polycrystalline material.
Experiments demonstrate that the proposed method achieves promising performance
and outperforms state-of-the-art methods. Besides, we propose a new method for
object tracking between adjacent slices, which can effectively reconstruct 3D
structure of the whole material. Finally, we present a material microscopic
image dataset with the goal of advancing the state-of-the-art in image
processing for material science. | [
"cs.CV"
] |
Neural style transfer (NST) is a powerful image generation technique that
uses a convolutional neural network (CNN) to merge the content of one image
with the style of another. Contemporary methods of NST use first or second
order statistics of the CNN's features to achieve transfers with relatively
little computational cost. However, these methods cannot fully extract the
style from the CNN's features. We present a new algorithm for style transfer
that fully extracts the style from the features by redefining the style loss as
the Wasserstein distance between the distribution of features. Thus, we set a
new standard in style transfer quality. In addition, we state two important
interpretations of NST. The first is a re-emphasis from Li et al., which states
that style is simply the distribution of features. The second states that NST
is a type of generative adversarial network (GAN) problem. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
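The redefined style loss above, a Wasserstein distance between feature
distributions, can be approximated cheaply with the sliced 1-D Wasserstein
distance, where sorting yields the exact 1-D optimal transport. This sketch
treats each spatial location as a sample from the feature distribution and
assumes both feature maps share the same spatial size; the number of random
projections is illustrative:

```python
import torch

def sliced_wasserstein_loss(feat_a, feat_b, num_proj=64):
    """Sliced Wasserstein-2 between two (C, H, W) feature maps of equal size."""
    a = feat_a.flatten(1).T                  # (H*W, C) samples from distribution A
    b = feat_b.flatten(1).T                  # (H*W, C) samples from distribution B
    proj = torch.randn(a.shape[1], num_proj, device=a.device)
    proj = proj / proj.norm(dim=0, keepdim=True)   # random unit directions
    # Project to 1-D, sort, and compare: exact 1-D optimal transport per slice.
    pa, _ = (a @ proj).sort(dim=0)
    pb, _ = (b @ proj).sort(dim=0)
    return ((pa - pb) ** 2).mean()
```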
Recently, road scene-graph representations used in conjunction with graph
learning techniques have been shown to outperform state-of-the-art deep
learning techniques in tasks including action classification, risk assessment,
and collision prediction. To enable the exploration of applications of road
scene-graph representations, we introduce roadscene2vec: an open-source tool
for extracting and embedding road scene-graphs. The goal of roadscene2vec is to
enable research into the applications and capabilities of road scene-graphs by
providing tools for generating scene-graphs, graph learning models to generate
spatio-temporal scene-graph embeddings, and tools for visualizing and analyzing
scene-graph-based methodologies. The capabilities of roadscene2vec include (i)
customized scene-graph generation from either video clips or data from the
CARLA simulator, (ii) multiple configurable spatio-temporal graph embedding
models and baseline CNN-based models, (iii) built-in functionality for using
graph and sequence embeddings for risk assessment and collision prediction
applications, (iv) tools for evaluating transfer learning, and (v) utilities
for visualizing scene-graphs and analyzing the explainability of graph learning
models. We demonstrate the utility of roadscene2vec for these use cases with
experimental results and qualitative evaluations for both graph learning models
and CNN-based models. roadscene2vec is available at
https://github.com/AICPS/roadscene2vec. | [
"cs.CV"
] |
Benefiting from the spatial cues embedded in depth images, recent progress on
RGB-D saliency detection shows impressive ability on some challenge scenarios.
However, there are still two limitations. One hand is that the pooling and
upsampling operations in FCNs might cause blur object boundaries. On the other
hand, using an additional depth-network to extract depth features might lead to
high computation and storage cost. The reliance on depth inputs during testing
also limits the practical applications of current RGB-D models. In this paper,
we propose a novel collaborative learning framework where edge, depth and
saliency are leveraged in a more efficient way, which solves those problems
tactfully. The explicitly extracted edge information goes together with
saliency to give more emphasis to the salient regions and object boundaries.
Depth and saliency learning is innovatively integrated into the high-level
feature learning process in a mutual-benefit manner. This strategy enables the
network to be free of using extra depth networks and depth inputs to make
inference. To this end, it makes our model more lightweight, faster and more
versatile. Experiment results on seven benchmark datasets show its superior
performance. | [
"cs.CV"
] |
Meta and transfer learning are two successful families of approaches to
few-shot learning. Despite highly related goals, state-of-the-art advances in
each family are measured largely in isolation of each other. As a result of
diverging evaluation norms, a direct or thorough comparison of different
approaches is challenging. To bridge this gap, we perform a cross-family study
of the best transfer and meta learners on both a large-scale meta-learning
benchmark (Meta-Dataset, MD), and a transfer learning benchmark (Visual Task
Adaptation Benchmark, VTAB). We find that, on average, large-scale transfer
methods (Big Transfer, BiT) outperform competing approaches on MD, even when
trained only on ImageNet. In contrast, meta-learning approaches struggle to
compete on VTAB when trained and validated on MD. However, BiT is not without
limitations, and pushing for scale does not improve performance on highly
out-of-distribution MD tasks. In performing this study, we reveal a number of
discrepancies in evaluation norms and study some of these in light of the
performance gap. We hope that this work facilitates sharing of insights from
each community, and accelerates progress on few-shot learning. | [
"cs.LG",
"cs.CV"
] |
The use of synthetic data generated by Generative Adversarial Networks (GANs)
has become a popular method for data augmentation in many
applications. While practitioners celebrate this as an economical way to get
more synthetic data that can be used to train downstream classifiers, it is not
clear that they recognize the inherent pitfalls of this technique. In this
paper, we aim to exhort practitioners against deriving any false sense of
security against data biases based on data augmentation. To drive this point
home, we show that starting with a dataset consisting of head-shots of
engineering researchers, GAN-based augmentation "imagines" synthetic engineers,
most of whom have masculine features and white skin color (inferred from a
human subject study conducted on Amazon Mechanical Turk). This demonstrates how
biases inherent in the training data are reinforced, and sometimes even
amplified, by GAN-based data augmentation; it should serve as a cautionary tale
for the lay practitioners. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Instance segmentation and panoptic segmentation have received more and more
attention in recent years. In comparison with bounding-box-based object
detection and semantic segmentation, instance segmentation can provide more
analytical results at pixel level. Given the insight that pixels belonging to
one instance have one or more common attributes of current instance, we bring
up an one-stage instance segmentation network named Common Attribute Support
Network (CASNet), which realizes instance segmentation by predicting and
clustering common attributes. CASNet is designed in the manner of fully
convolutional and can implement training and inference end to end. Moreover,
CASNet predicts instances without overlaps and holes, a problem that exists in
most current instance segmentation algorithms.
Furthermore, it can be easily extended to panoptic segmentation through minor
modifications with little computation overhead. CASNet builds a bridge between
semantic and instance segmentation from finding pixel class ID to obtaining
class and instance ID by operations on common attribute. Through experiment for
instance and panoptic segmentation, CASNet gets mAP 32.8% and PQ 59.0% on
Cityscapes validation dataset by joint training, and mAP 36.3% and PQ 66.1% by
separated training mode. For panoptic segmentation, CASNet gets
state-of-the-art performance on the Cityscapes validation dataset. | [
"cs.CV"
] |
It is abundantly clear that time-dependent data is a vital source of
information in the world. The challenge has been for applications in machine
learning to gain access to a considerable amount of quality data needed for
algorithm development and analysis. Modeling synthetic data using a Generative
Adversarial Network (GAN) has been at the heart of providing a viable solution.
Our work focuses on one-dimensional time series and explores the few-shot
approach, which is the ability of an algorithm to perform well with limited
data. This work attempts to ease the frustration by proposing a new
architecture, Time Series GAN (TSGAN), to model realistic time series data. We
evaluate TSGAN on 70 data sets from a benchmark time series database. Our
results demonstrate that TSGAN performs better than the competition both
quantitatively using the Fréchet Inception Distance (FID) metric, and
qualitatively when classification is used as the evaluation criteria. | [
"cs.LG",
"stat.ML"
] |
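For reference, the FID metric used above compares Gaussian fits to real and
generated feature embeddings in closed form:
FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2}).
A minimal sketch, assuming features extracted by an Inception-style embedding
network:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """Frechet distance between Gaussians fit to two (N, d) feature arrays."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):       # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(((mu_r - mu_f) ** 2).sum()
                 + np.trace(cov_r + cov_f - 2 * covmean))
```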
Sleep staging plays an important role in the diagnosis of sleep disorders. In
general, experts classify sleep stages manually based on polysomnography (PSG),
which is quite time-consuming. Meanwhile, the acquisition process of multiple
signals is rather complex, which can affect the subject's sleep. Therefore, the
use of single-channel electroencephalogram (EEG) for automatic sleep staging
has become a popular research topic. In the literature, a large number of sleep
staging methods based on single-channel EEG have been proposed with promising
results and achieve the preliminary automation of sleep staging. However, the
performance of most of these methods in the N1 stage does not satisfy the needs
of diagnosis. In this paper, we propose a deep learning model, the multi-scale
dual attention network (MSDAN), based on raw EEG, which utilizes multi-scale
convolution to extract features in different waveforms contained in the EEG
signal, connects channel attention and spatial attention mechanisms in series
to filter and highlight key information, and uses soft thresholding to remove
redundant information. Experiments were conducted using two datasets with
5-fold cross-validation and hold-out validation method. The final average
accuracy, overall accuracy, macro F1 score and Cohen's Kappa coefficient of the
model reach 96.70%, 91.74%, 0.8231 and 0.8723 on the Sleep-EDF dataset, 96.14%,
90.35%, 0.7945 and 0.8284 on the Sleep-EDFx dataset. Significantly, our model
performed superiorly in the N1 stage, with F1 scores of 54.41% and 52.79% on
the two datasets respectively. The results show the superiority of our network
over the existing methods, reaching a new state-of-the-art. In particular, the
proposed method achieves excellent results in the N1 sleep stage compared to
other methods. | [
"cs.LG",
"eess.SP"
] |
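The soft-thresholding step mentioned above shrinks activation magnitudes and
zeroes out anything below a threshold. A minimal sketch; in MSDAN-style
networks the threshold would typically be produced adaptively by an attention
branch, whereas here it is a fixed scalar for illustration:

```python
import torch

def soft_threshold(x, tau=0.1):
    """Shrink |x| by tau and zero out small activations (redundant information)."""
    return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)
```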
Deep neural networks are prone to adversarial examples that maliciously alter
the network's outcome. Due to the increasing popularity of 3D sensors in
safety-critical systems and the vast deployment of deep learning models for 3D
point sets, there is a growing interest in adversarial attacks and defenses for
such models. So far, the research has focused on the semantic level, namely,
deep point cloud classifiers. However, point clouds are also widely used in a
geometric-related form that includes encoding and reconstructing the geometry.
In this work, we explore adversarial examples at a geometric level. That is, a
small change to a clean source point cloud leads, after passing through an
autoencoder model, to a shape from a different target class. On the defense
side, we show that remnants of the attack's target shape are still present at
the reconstructed output after applying the defense to the adversarial input.
Our code is publicly available at https://github.com/itailang/geometric_adv. | [
"cs.CV"
] |
This work explores conditional image generation with a new image density
model based on the PixelCNN architecture. The model can be conditioned on any
vector, including descriptive labels or tags, or latent embeddings created by
other networks. When conditioned on class labels from the ImageNet database,
the model is able to generate diverse, realistic scenes representing distinct
animals, objects, landscapes and structures. When conditioned on an embedding
produced by a convolutional network given a single image of an unseen face, it
generates a variety of new portraits of the same person with different facial
expressions, poses and lighting conditions. We also show that conditional
PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally,
the gated convolutional layers in the proposed model improve the log-likelihood
of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet,
with greatly reduced computational cost. | [
"cs.CV",
"cs.LG"
] |
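The gated convolutional layer mentioned above combines two convolutions through
a tanh-sigmoid gate, with the conditioning vector injected into both halves. A
minimal sketch that omits the autoregressive masking for brevity, so it
illustrates the gating only, not the full PixelCNN layer:

```python
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    """Gated activation unit: tanh(conv) * sigmoid(conv), with conditioning."""
    def __init__(self, channels, cond_dim, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv = nn.Conv2d(channels, 2 * channels, kernel_size, padding=pad)
        self.cond = nn.Linear(cond_dim, 2 * channels)

    def forward(self, x, h):
        # Add the conditioning vector h to both the filter and gate halves.
        y = self.conv(x) + self.cond(h)[:, :, None, None]
        a, b = y.chunk(2, dim=1)
        return torch.tanh(a) * torch.sigmoid(b)
```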
In 1869, the first draft of the periodic table was published by Russian
chemist Dmitri Mendeleev. In terms of data science, his achievement can be
viewed as a successful example of feature embedding based on human cognition:
chemical properties of all known elements at that time were compressed onto the
two-dimensional grid system for tabular display. In this study, we seek to
answer the question of whether machine learning can reproduce or recreate the
periodic table by using observed physicochemical properties of the elements. To
achieve this goal, we developed a periodic table generator (PTG). The PTG is an
unsupervised machine learning algorithm based on the generative topographic
mapping (GTM), which can automate the translation of high-dimensional data into
a tabular form with varying layouts on-demand. The PTG autonomously produced
various arrangements of chemical symbols, which organized a two-dimensional
array such as Mendeleev's periodic table or three-dimensional spiral table
according to the underlying periodicity in the given data. We further showed
what the PTG learned from the element data and how the element features, such
as melting point and electronegativity, are compressed to the lower-dimensional
latent spaces. | [
"stat.ML",
"cs.LG"
] |
Reinforcement learning is about learning agent models that make the best
sequential decisions in unknown environments. In an unknown environment, the
agent needs to explore the environment while exploiting the collected
information, which usually forms a sophisticated problem to solve.
Derivative-free optimization, meanwhile, is capable of solving sophisticated
problems. It commonly uses a sampling-and-updating framework to iteratively
improve the solution, where exploration and exploitation are also needed to be
well balanced. Therefore, derivative-free optimization deals with a similar
core issue as reinforcement learning, and has been introduced in reinforcement
learning approaches, under the names of learning classifier systems and
neuroevolution/evolutionary reinforcement learning. Although such methods have
been developed for decades, derivative-free reinforcement learning has recently
been attracting increasing attention. However, a recent survey on this topic
is still lacking. In this article, we summarize methods of derivative-free
reinforcement learning to date, and organize the methods in aspects including
parameter updating, model selection, exploration, and parallel/distributed
methods. Moreover, we discuss some current limitations and possible future
directions, hoping that this article could bring more attention to this topic
and serve as a catalyst for developing novel and efficient approaches. | [
"cs.LG",
"cs.AI"
] |
We propose HOI Transformer to tackle human object interaction (HOI) detection
in an end-to-end manner. Current approaches either decouple HOI task into
separated stages of object detection and interaction classification or
introduce surrogate interaction problem. In contrast, our method, named HOI
Transformer, streamlines the HOI pipeline by eliminating the need for many
hand-designed components. HOI Transformer reasons about the relations of
objects and humans from global image context and directly predicts HOI
instances in parallel. A quintuple matching loss is introduced to force HOI
predictions in a unified way. Our method is conceptually much simpler and
demonstrates improved accuracy. Without bells and whistles, HOI Transformer
achieves $26.61\% $ $ AP $ on HICO-DET and $52.9\%$ $AP_{role}$ on V-COCO,
surpassing previous methods with the advantage of being much simpler. We hope
our approach will serve as a simple and effective alternative for HOI tasks.
Code is available at https://github.com/bbepoch/HoiTransformer . | [
"cs.CV"
] |
We study the many-class few-shot (MCFS) problem in both supervised learning and
meta-learning settings. Compared to the well-studied many-class many-shot and
few-class few-shot problems, the MCFS problem commonly occurs in practical
applications but has been rarely studied in previous literature. It brings new
challenges of distinguishing between many classes given only a few training
samples per class. In this paper, we leverage the class hierarchy as a prior
knowledge to train a coarse-to-fine classifier that can produce accurate
predictions for the MCFS problem in both settings. The proposed model,
"memory-augmented hierarchical-classification network (MahiNet)", performs
coarse-to-fine classification where each coarse class can cover multiple fine
classes. Since it is challenging to directly distinguish a variety of fine
classes given few-shot data per class, MahiNet starts from learning a
classifier over coarse-classes with more training data whose labels are much
cheaper to obtain. The coarse classifier reduces the searching range over the
fine classes and thus alleviates the challenges from "many classes". On
architecture, MahiNet firstly deploys a convolutional neural network (CNN) to
extract features. It then integrates a memory-augmented attention module and a
multi-layer perceptron (MLP) together to produce the probabilities over coarse
and fine classes. While the MLP extends the linear classifier, the attention
module extends the KNN classifier, both together targeting the "few-shot"
problem. We design several training strategies of MahiNet for supervised
learning and meta-learning. In addition, we propose two novel benchmark
datasets "mcfsImageNet" and "mcfsOmniglot" specially designed for MCFS problem.
In experiments, we show that MahiNet outperforms several state-of-the-art
models on MCFS problems in both supervised learning and meta-learning. | [
"cs.LG",
"stat.ML"
] |
The need for large annotated image datasets for training Convolutional Neural
Networks (CNNs) has been a significant impediment for their adoption in
computer vision applications. We show that with transfer learning an effective
object detector can be trained almost entirely on synthetically rendered
datasets. We apply this strategy for detecting packaged food products
clustered in refrigerator scenes. Our CNN trained only with 4000 synthetic
images achieves mean average precision (mAP) of 24 on a test set with 55
distinct products as objects of interest and 17 distractor objects. A further
increase of 12% in the mAP is obtained by adding only 400 real images to these
4000 synthetic images in the training set. A high degree of photorealism in the
synthetic images was not essential in achieving this performance. We analyze
factors like training data set size and 3D model dictionary size for their
influence on detection performance. Additionally, training strategies like
fine-tuning with selected layers and early stopping which affect transfer
learning from synthetic scenes to real scenes are explored. Training CNNs with
synthetic datasets is a novel application of high-performance computing and a
promising approach for object detection applications in domains where there is
a dearth of large annotated image data. | [
"cs.CV"
] |
While representation learning has yielded great success on many graph
learning tasks, there is little understanding behind the structures that are
being captured by these embeddings. For example, we wonder if the topological
features, such as the Triangle Count, the Degree of the node and other
centrality measures are concretely encoded in the embeddings. Furthermore, we
ask if the presence of these structures in the embeddings is necessary for a
better performance on the downstream tasks, such as clustering and
classification. To address these questions, we conduct an extensive empirical
study over three classes of unsupervised graph embedding models and seven
different variants of Graph Autoencoders. Our results show that five
topological features: the Degree, the Local Clustering Score, the Betweenness
Centrality, the Eigenvector Centrality, and Triangle Count are concretely
preserved in the first layer of the graph autoencoder that employs the SUM
aggregation rule, under the condition that the model preserves the second-order
proximity. We supplement further evidence for the presence of these features by
revealing a hierarchy in the distribution of the topological features in the
embeddings of the aforementioned model. We also show that a model with such
properties can outperform other models on certain downstream tasks, especially
when the preserved features are relevant to the task at hand. Finally, we
evaluate the suitability of our findings through a test case study related to
social influence prediction. | [
"cs.LG",
"cs.AI"
] |
LiDAR odometry is a fundamental task for various areas such as robotics,
autonomous driving. This problem is difficult since it requires the system to
be highly robust when running on noisy real-world data. Existing methods are mostly
local iterative methods. Feature-based global registration methods are not
preferred since extracting accurate matching pairs in the nonuniform and sparse
LiDAR data remains challenging. In this paper, we present Deep Matching LiDAR
Odometry (DMLO), a novel learning-based framework which makes the feature
matching method applicable to LiDAR odometry task. Unlike many recent
learning-based methods, DMLO explicitly enforces geometry constraints in the
framework. Specifically, DMLO decomposes the 6-DoF pose estimation into two
parts, a learning-based matching network which provides accurate
correspondences between two scans and rigid transformation estimation with a
close-formed solution by Singular Value Decomposition (SVD). Comprehensive
experimental results on real-world datasets KITTI and Argoverse demonstrate
that our DMLO dramatically outperforms existing learning-based methods and is
comparable with state-of-the-art geometry-based approaches. | [
"cs.CV",
"cs.RO"
] |
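The closed-form rigid transformation estimation mentioned above is the
classical SVD-based (Kabsch) solution. A minimal sketch for matched point pairs
such as those produced by the learned matching network:

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """src, dst: (N, 3) matched points. Returns R (3x3), t (3,) with dst ~ R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)       # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```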
Due to an increase in the number of image archives, Content-Based Image
Retrieval (CBIR) has gained attention from the computer vision research
community. An image's visual contents are represented in a feature space in the
form of numerical values that constitute the feature vector of the image.
Images belonging to different classes may share common visual content and
shapes, which can make the computed feature vectors of two images from separate
classes lie close together. For this reason, feature extraction
and image representation is selected with appropriate features as it directly
affects the performance of image retrieval system. The commonly used visual
features are image spatial layout, color, texture and shape. Image feature
space is combined to achieve the discriminating ability that is not possible to
achieve when the features are used separately. Due to this reason, in this
paper, we aim to explore the low-level feature combination that are based on
color and shape features. We selected color moments and color histogram to
represent color while shape is represented by using invariant moments. We
selected this combination, as these features are reported intuitive, compact
and robust for image representation. We evaluated the performance of our
proposed research by using the Corel, Coil and Ground Truth (GT) image
datasets. We evaluated the proposed low-level feature fusion by calculating the
precision, recall and time required for feature extraction. The precision,
recall and feature extraction values obtained from the proposed low-level
feature fusion outperform the existing research on CBIR. | [
"cs.CV"
] |
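The color-and-shape fusion described above can be sketched with OpenCV: color
moments and a joint color histogram for color, and Hu's invariant moments for
shape, concatenated into a single descriptor. The histogram bin count is an
illustrative choice:

```python
import cv2
import numpy as np

def extract_features(image_bgr, bins=8):
    """Fused low-level descriptor: color moments + color histogram + Hu moments."""
    # Color moments: per-channel mean and standard deviation.
    means, stds = cv2.meanStdDev(image_bgr)
    color_moments = np.concatenate([means.ravel(), stds.ravel()])
    # Joint color histogram over all three channels, normalized.
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3).ravel()
    hist = hist / (hist.sum() + 1e-8)
    # Shape: Hu's seven invariant moments of the grayscale image.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    hu = cv2.HuMoments(cv2.moments(gray)).ravel()
    return np.concatenate([color_moments, hist, hu])
```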
Thalamic alterations are relevant to many neurological disorders including
Alzheimer's disease, Parkinson's disease and multiple sclerosis. Routine
interventions to improve symptom severity in movement disorders, for example,
often consist of surgery or deep brain stimulation to diencephalic nuclei.
Therefore, accurate delineation of grey matter thalamic subregions is of the
upmost clinical importance. MRI is highly appropriate for structural
segmentation as it provides different views of the anatomy from a single
scanning session. Though with several contrasts potentially available, it is
also of increasing importance to develop new image segmentation techniques that
can operate multi-spectrally. We hereby propose a new segmentation method for
use with multi-modality data, which we evaluated for automated segmentation of
major thalamic subnuclear groups using T1-, T2*-weighted and quantitative
susceptibility mapping (QSM) information. The proposed method consists of four
steps: highly iterative image co-registration, manual segmentation on the
average training-data template, supervised learning for pattern recognition,
and a final convex optimisation step imposing further spatial constraints to
refine the solution. This led to solutions in greater agreement with manual
segmentation than the standard Morel atlas based approach. Furthermore, we show
that the multi-contrast approach boosts segmentation performance. We then
investigated whether prior knowledge using the training-template contours could
further improve convex segmentation accuracy and robustness, which led to
highly precise multi-contrast segmentations in single subjects. This approach
can be extended to most 3D imaging data types and any region of interest
discernible in single scans or multi-subject templates. | [
"cs.CV",
"math.NA"
] |
Deep Neural Networks have shown tremendous success in the area of object
recognition, image classification and natural language processing. However,
designing optimal Neural Network architectures that can learn and output
arbitrary graphs is an ongoing research problem. The objective of this survey
is to summarize and discuss the latest advances in methods to Learn
Representations of Graph Data. We start by identifying commonly used types of
graph data and review basics of graph theory. This is followed by a discussion
of the relationships between graph kernel methods and neural networks. Next we
identify the major approaches used for learning representations of graph data
namely: Kernel approaches, Convolutional approaches, Graph neural networks
approaches, Graph embedding approaches and Probabilistic approaches. A variety
of methods under each of the approaches are discussed and the survey is
concluded with a brief discussion of the future of learning representation of
graph data. | [
"cs.LG",
"stat.ML"
] |
In offline reinforcement learning (RL), the goal is to learn a highly
rewarding policy based solely on a dataset of historical interactions with the
environment. The ability to train RL policies offline can greatly expand the
applicability of RL, its data efficiency, and its experimental velocity. Prior
work in offline RL has been confined almost exclusively to model-free RL
approaches. In this work, we present MOReL, an algorithmic framework for
model-based offline RL. This framework consists of two steps: (a) learning a
pessimistic MDP (P-MDP) using the offline dataset; and (b) learning a
near-optimal policy in this P-MDP. The learned P-MDP has the property that for
any policy, the performance in the real environment is approximately
lower-bounded by the performance in the P-MDP. This enables it to serve as a
good surrogate for purposes of policy evaluation and learning, and overcome
common pitfalls of model-based RL like model exploitation. Theoretically, we
show that MOReL is minimax optimal (up to log factors) for offline RL. Through
experiments, we show that MOReL matches or exceeds state-of-the-art results in
widely studied offline RL benchmarks. Moreover, the modular design of MOReL
enables future advances in its components (e.g. generative modeling,
uncertainty estimation, planning etc.) to directly translate into advances for
offline RL. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
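The pessimistic MDP construction above can be sketched with an ensemble of
learned dynamics models: wherever the ensemble disagrees too much (an unknown
state-action pair), the rollout is diverted to an absorbing halt state with a
large negative reward, which is what yields the lower-bound property. The
threshold and penalty values here are illustrative:

```python
import numpy as np

def pmdp_step(state, action, ensemble, reward_fn,
              threshold=0.1, halt_penalty=-100.0):
    """One step of a pessimistic MDP built from a learned dynamics ensemble."""
    preds = np.stack([model(state, action) for model in ensemble])  # (M, state_dim)
    disagreement = preds.std(axis=0).max()      # ensemble spread as uncertainty
    if disagreement > threshold:                # unknown region: halt pessimistically
        return state, halt_penalty, True        # done=True -> absorbing halt state
    next_state = preds.mean(axis=0)
    return next_state, reward_fn(state, action), False
```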
Cameras that can measure the depth of each pixel in addition to its color
have become easily available and are used in many consumer products worldwide.
Often the depth channel is captured at lower quality compared to the RGB
channels and different algorithms have been proposed to improve the quality of
the D channel given the RGB channels. Typically these approaches work by
assuming that edges in RGB are correlated with edges in D.
In this paper we approach this problem from the standpoint of natural image
statistics. We obtain examples of high quality RGBD images from a computer
graphics generated movie (MPI-Sintel) and we use these examples to compare
different probabilistic generative models of RGBD image patches. We then use
the generative models together with a degradation model and obtain a Bayes
Least Squares (BLS) estimator of the D channel given the RGB channels. Our
results show that learned generative models outperform the state-of-the-art in
improving the quality of depth channels given the color channels in natural
images even when training is performed on artificially generated images. | [
"cs.CV"
] |
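Under the simplest generative model, a single joint Gaussian over vectorized
RGBD patches, the Bayes Least Squares estimate of D given RGB described above
reduces to the Gaussian conditional mean. A minimal sketch of that special case
(the paper compares richer generative models):

```python
import numpy as np

def fit_gaussian(patches_rgbd, rgb_dim):
    """Fit a joint Gaussian to (N, rgb_dim + d_dim) vectorized RGBD patches."""
    mu = patches_rgbd.mean(0)
    cov = np.cov(patches_rgbd, rowvar=False)
    return (mu[:rgb_dim], mu[rgb_dim:],
            cov[:rgb_dim, :rgb_dim], cov[rgb_dim:, :rgb_dim])

def bls_depth(rgb_patch, mu_rgb, mu_d, cov_rr, cov_dr):
    """Posterior mean of D given RGB for a joint Gaussian: the BLS estimator."""
    return mu_d + cov_dr @ np.linalg.solve(cov_rr, rgb_patch - mu_rgb)
```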
We present an algorithm for learning from unlabeled text, based on the Vector
Space Model (VSM) of information retrieval, that can solve verbal analogy
questions of the kind found in the Scholastic Aptitude Test (SAT). A verbal
analogy has the form A:B::C:D, meaning "A is to B as C is to D"; for example,
mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B,
and the problem is to select the most analogous word pair, C:D, from a set of
five choices. The VSM algorithm correctly answers 47% of a collection of 374
college-level analogy questions (random guessing would yield 20% correct). We
motivate this research by relating it to work in cognitive science and
linguistics, and by applying it to a difficult problem in natural language
processing, determining semantic relations in noun-modifier pairs. The problem
is to classify a noun-modifier pair, such as "laser printer", according to the
semantic relation between the noun (printer) and the modifier (laser). We use a
supervised nearest-neighbour algorithm that assigns a class to a given
noun-modifier pair by finding the most analogous noun-modifier pair in the
training data. With 30 classes of semantic relations, on a collection of 600
labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5%
(random guessing: 3.3%). With 5 classes of semantic relations, the F value is
43.2% (random: 20%). The performance is state-of-the-art for these challenging
problems. | [
"cs.LG",
"cs.CL",
"cs.IR",
"H.3.1; I.2.6; I.2.7"
] |
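The VSM selection rule above can be sketched directly: represent each word pair
by a vector of corpus-derived statistics (e.g., frequencies of joining patterns
such as "X of Y"), then pick the choice pair whose vector is most
cosine-similar to the stem pair. Here `pair_vector` is a placeholder for that
corpus statistic:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def answer_analogy(stem_pair, choice_pairs, pair_vector):
    """Return the index of the choice pair most analogous to the stem pair."""
    stem_vec = pair_vector(*stem_pair)
    scores = [cosine(stem_vec, pair_vector(a, b)) for a, b in choice_pairs]
    return int(np.argmax(scores))
```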
Convolutional Neural Networks (CNNs) have reigned for a decade as the de
facto approach to automated medical image diagnosis. Recently, vision
transformers (ViTs) have appeared as a competitive alternative to CNNs,
yielding similar levels of performance while possessing several interesting
properties that could prove beneficial for medical imaging tasks. In this work,
we explore whether it is time to move to transformer-based models or if we
should keep working with CNNs - can we trivially switch to transformers? If so,
what are the advantages and drawbacks of switching to ViTs for medical image
diagnosis? We consider these questions in a series of experiments on three
mainstream medical image datasets. Our findings show that, while CNNs perform
better when trained from scratch, off-the-shelf vision transformers using
default hyperparameters are on par with CNNs when pretrained on ImageNet, and
outperform their CNN counterparts when pretrained using self-supervision. | [
"cs.CV",
"cs.LG"
] |
Transfer learning has received a lot of attention in the machine learning
community in recent years, and several effective algorithms have been
developed. However, relatively little is known about their theoretical
properties, especially in the setting of lifelong learning, where the goal is
to transfer information to tasks for which no data have been observed so far.
In this work we study lifelong learning from a theoretical perspective. Our
main result is a PAC-Bayesian generalization bound that offers a unified view
on existing paradigms for transfer learning, such as the transfer of parameters
or the transfer of low-dimensional representations. We also use the bound to
derive two principled lifelong learning algorithms, and we show that these
yield results comparable with existing methods. | [
"stat.ML",
"cs.LG",
"68T05"
] |
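For context, a standard single-task PAC-Bayesian bound has the following shape (this is the classical McAllester/Maurer-style bound, not the paper's lifelong result, which generalizes bounds of this form across tasks): for a prior $\pi$ fixed before seeing the $m$ training samples, with probability at least $1-\delta$, simultaneously for all posteriors $\rho$,

```latex
\mathbb{E}_{h \sim \rho}\big[L(h)\big]
\;\le\;
\mathbb{E}_{h \sim \rho}\big[\widehat{L}(h)\big]
+ \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```

where $L$ and $\widehat{L}$ are the true and empirical risks.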
Object detection in autonomous cars is commonly based on camera images and
Lidar inputs, which are often used to train prediction models such as deep
artificial neural networks for decision making for object recognition,
adjusting speed, etc. A mistake in such decision making can be damaging; thus,
it is vital to measure the reliability of decisions made by such prediction
models via uncertainty measurement. Uncertainty, in deep learning models, is
often measured for classification problems. However, deep learning models in
autonomous driving are often multi-output regression models. Hence, we propose
a novel method called PURE (Prediction sURface uncErtainty) for measuring
prediction uncertainty of such regression models. We formulate the object
recognition problem as a regression model with more than one output for
finding object locations in a 2-dimensional camera view. For evaluation, we
modified three widely-applied object recognition models (i.e., YOLO, SSD300,
and SSD512) and used the KITTI, Stanford Cars, Berkeley DeepDrive, and NEXET
datasets. Results showed a statistically significant negative correlation
between prediction surface uncertainty and prediction accuracy, suggesting
that uncertainty significantly impacts the decisions made in autonomous driving. | [
"cs.CV",
"cs.AI"
] |
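A simplified stand-in for prediction-surface uncertainty (not necessarily PURE's exact construction, which the abstract does not detail): perturb the input slightly, collect the multi-output regressions, and score the prediction by their spread.

```python
# Higher spread over small input perturbations -> less stable prediction
# surface -> higher uncertainty score.
import torch

def surface_uncertainty(model, image, n_samples=16, noise_std=0.01):
    """Std. dev. of the regression outputs over small input perturbations."""
    model.eval()
    outs = []
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = image + noise_std * torch.randn_like(image)
            outs.append(model(noisy))           # shape: (1, num_outputs)
    outs = torch.stack(outs)                    # (n_samples, 1, num_outputs)
    return outs.std(dim=0).mean().item()        # one scalar score

# Usage sketch with a toy multi-output regressor (x, y, w, h):
toy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 4))
img = torch.randn(1, 3, 32, 32)
print(surface_uncertainty(toy, img))
```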
We propose an efficient transfer learning method for adapting an ImageNet
pre-trained Convolutional Neural Network (CNN) to a fine-grained image
classification task. Conventional transfer learning methods typically face a
trade-off between training time and accuracy. By adding an "attention module"
to each convolutional filter of the pre-trained network, we are able to rank
and adjust the importance of each convolutional signal in an end-to-end
pipeline. In this report, we show our method can adapt a pre-trained ResNet50
to a fine-grained transfer learning task within a few epochs, achieving
accuracy above that of conventional transfer learning methods and close to
that of models trained from scratch. Our model also offers interpretable
results, because the ranking of the convolutional signals shows which
convolution channels are utilized and amplified to achieve better
classification results, as well as which signals should be treated as noise
for the specific transfer learning task and could be pruned to reduce model size. | [
"cs.CV"
] |
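A hedged sketch of a per-channel attention module attached to a frozen pretrained convolution. The exact architecture is not specified in the abstract, so a simple learned sigmoid gate per channel stands in here:

```python
# Learned gates rescale each channel; near-zero gates mark channels that act
# as noise for the transfer task and could be pruned.
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        # one learnable importance weight per convolutional channel
        self.logits = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):                       # x: (B, C, H, W)
        gate = torch.sigmoid(self.logits)       # (C,), in (0, 1)
        return x * gate.view(1, -1, 1, 1)       # rescale each channel

# Usage: wrap a frozen "pretrained" conv layer with a trainable gate.
conv = nn.Conv2d(3, 64, 3, padding=1)
for p in conv.parameters():
    p.requires_grad = False                     # keep pretrained weights fixed
gated = nn.Sequential(conv, ChannelGate(64))
print(gated(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```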
The aim of inverse chemical design is to develop new molecules with given
optimized molecular properties or objectives. Recently, generative deep
learning (DL) networks have been considered the state of the art in inverse
chemical design and have achieved early success in generating molecular
structures with desired properties in the pharmaceutical and material
chemistry fields. However, satisfying a large number of molecular objectives
(more than 10) is a limitation of current generative models. To improve the
model's ability to handle a large number of molecular design objectives, we
developed a Reinforcement Learning (RL) based generative framework to optimize
chemical molecule generation. Our use of Curriculum Learning (CL) to fine-tune
the pre-trained generative network allowed the model to satisfy up to 21
objectives and increase the generative network's robustness. The experiments
show that the proposed multiple-objective RL-based generative model can
correctly identify unknown molecules with an 83 to 100 percent success rate,
compared to a 0 percent success rate for the baseline approach. Additionally, this proposed
generative model is not limited to just chemistry research challenges; we
anticipate that problems that utilize RL with multiple-objectives will benefit
from this framework. | [
"cs.LG",
"cs.AI"
] |
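A hedged sketch of the multi-objective reward idea: many molecular property objectives are collapsed into one scalar reward that drives a REINFORCE-style update of the generator. The property functions, targets, and weights below are placeholders, not the paper's actual objectives.

```python
import torch

def multi_objective_reward(mol, objectives):
    """objectives: list of (property_fn, target, weight) triples."""
    reward = 0.0
    for prop_fn, target, weight in objectives:
        # reward each objective by closeness of the property to its target
        reward += weight * (1.0 - abs(prop_fn(mol) - target))
    return reward

# Toy usage: two stand-in property functions on a fake "molecule".
objectives = [(lambda m: m["logp"], 2.0, 0.5),
              (lambda m: m["qed"], 0.9, 0.5)]
r = multi_objective_reward({"logp": 1.8, "qed": 0.85}, objectives)

log_prob = torch.tensor(-3.2, requires_grad=True)  # log p(mol) from generator
loss = -r * log_prob                               # REINFORCE-style objective
loss.backward()
print(r, log_prob.grad)
```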
In this article, a novel algorithm for color image segmentation is developed.
The proposed algorithm combines two existing methods in a novel way to
partition a color image into significant regions. In the first phase, the
traditional Otsu method for gray-level image segmentation is applied to each
of the R, G, and B channels separately to determine a suitable automatic
threshold for each channel. The thresholded channels are then recombined to
form a new color image. The resulting image suffers from some distortion; to
remove it, the second phase applies a median filter, which smooths the image
and enlarges the segmented regions. The result appears clearly improved to
the naked eye. Experimental results are presented on a variety of test images
to support the proposed algorithm. | [
"cs.CV"
] |
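The two-phase pipeline above maps directly onto standard OpenCV calls; a minimal sketch follows (the file names are placeholders):

```python
# Phase 1: Otsu's threshold per B, G, R channel, merged back into one image.
# Phase 2: median filter to smooth out the resulting distortion.
import cv2

img = cv2.imread("input.png")                  # OpenCV loads as B, G, R
channels = []
for ch in cv2.split(img):
    # Otsu picks the threshold automatically (the 0 is ignored)
    _, binarized = cv2.threshold(ch, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    channels.append(binarized)

segmented = cv2.merge(channels)                # phase 1 output
smoothed = cv2.medianBlur(segmented, 5)        # phase 2: 5x5 median filter
cv2.imwrite("segmented.png", smoothed)
```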
Deep learning based single image super-resolution methods use large training
datasets and have recently achieved great progress in quality, both
quantitatively and qualitatively. Most deep networks focus on the nonlinear
mapping from low-resolution inputs to high-resolution outputs via residual
learning, without exploring feature abstraction and analysis. We propose a
Hierarchical Back Projection Network (HBPN) that cascades multiple HourGlass
(HG) modules to process features bottom-up and top-down across all scales,
capturing various spatial correlations and then consolidating the best
representation for reconstruction. We adopt back projection blocks in our
proposed network to provide an error-correlated up- and down-sampling
process, replacing simple deconvolution and pooling for better estimation. A
new Softmax-based Weighted Reconstruction (WR) process is used to combine the
outputs of the HG modules to further improve super-resolution. Experimental
results on various datasets (including the validation dataset of the
NTIRE2019 Real Image Super-Resolution Challenge) show that our proposed
approach matches or improves on the performance of state-of-the-art methods
for different scaling factors. | [
"cs.CV",
"eess.IV"
] |
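A hedged sketch of the Softmax-based Weighted Reconstruction step: the outputs of the cascaded HG modules are combined with learned, softmax-normalized weights (the HG module internals are omitted here).

```python
import torch
import torch.nn as nn

class WeightedReconstruction(nn.Module):
    def __init__(self, num_modules):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_modules))

    def forward(self, hg_outputs):              # list of (B, C, H, W) tensors
        w = torch.softmax(self.logits, dim=0)   # convex combination weights
        stacked = torch.stack(hg_outputs)       # (M, B, C, H, W)
        return (w.view(-1, 1, 1, 1, 1) * stacked).sum(dim=0)

wr = WeightedReconstruction(num_modules=3)
outs = [torch.randn(1, 3, 64, 64) for _ in range(3)]
print(wr(outs).shape)                           # torch.Size([1, 3, 64, 64])
```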
Though there is a growing body of literature on fairness for supervised
learning, the problem of incorporating fairness into unsupervised learning has
been less well-studied. This paper studies fairness in the context of principal
component analysis (PCA). We first present a definition of fairness for
dimensionality reduction, and our definition can be interpreted as saying that
a reduction is fair if information about a protected class (e.g., race or
gender) cannot be inferred from the dimensionality-reduced data points. Next,
we develop convex optimization formulations that can improve the fairness (with
respect to our definition) of PCA and kernel PCA. These formulations are
semidefinite programs (SDPs), and we demonstrate the effectiveness of our
formulations using several datasets. We conclude by showing how our approach
can be used to perform a fair (with respect to age) clustering of health data
that may be used to set health insurance rates. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
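A heavily simplified sketch of fair PCA as a semidefinite program in cvxpy. This is not the paper's exact formulation: as a stand-in fairness constraint, the projected group-mean gap is bounded, and the rank-d projection is relaxed to 0 ≤ P ≤ I (in the PSD order) with tr(P) = d.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
group = rng.integers(0, 2, size=200)            # protected attribute
Sigma = np.cov(X, rowvar=False)
gap = X[group == 0].mean(axis=0) - X[group == 1].mean(axis=0)

d, eps = 2, 1e-3
P = cp.Variable((5, 5), PSD=True)
constraints = [P << np.eye(5), cp.trace(P) == d,
               cp.quad_form(gap, P) <= eps]     # fairness surrogate
prob = cp.Problem(cp.Maximize(cp.trace(Sigma @ P)), constraints)
prob.solve()
print(prob.value)
```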
We propose a self-supervised visual learning method by predicting the
variable playback speeds of a video. Without semantic labels, we learn the
spatio-temporal visual representation of the video by leveraging the variations
in the visual appearance according to different playback speeds under the
assumption of temporal coherence. To learn the spatio-temporal visual
variations across the entire video, we not only predict a single playback
speed but also generate clips of various playback speeds and directions with
randomized starting points. Hence, the visual representation can be
successfully learned from the meta information (playback speeds and
directions) of the video. We also propose a new layer-dependent temporal
group normalization method that can be applied to 3D convolutional networks
to improve representation learning performance, in which we divide the
temporal features into several groups and normalize each one with its own
corresponding parameters. We validate the effectiveness of our method by
fine-tuning it on the action recognition and video retrieval tasks on UCF-101 and HMDB-51. | [
"cs.CV"
] |
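A minimal sketch of the pretext-task data generation: clips of varying playback speed (and direction) are produced by strided frame sampling from randomized starting points, with the speed index serving as the self-supervised label.

```python
import numpy as np

def make_clip(video, clip_len=16, speeds=(1, 2, 4), rng=None):
    """video: (T, H, W, C) array. Returns (clip, speed_label)."""
    rng = rng or np.random.default_rng()
    label = rng.integers(len(speeds))
    stride = speeds[label]
    span = clip_len * stride
    start = rng.integers(len(video) - span + 1)    # randomized start point
    clip = video[start:start + span:stride]        # subsample at this speed
    if rng.random() < 0.5:                         # random playback direction
        clip = clip[::-1]
    return clip, label

video = np.zeros((128, 112, 112, 3), dtype=np.uint8)
clip, label = make_clip(video)
print(clip.shape, label)   # (16, 112, 112, 3), speed index in {0, 1, 2}
```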
To achieve parsimonious inference in per-pixel labeling tasks with a limited
computational budget, we propose a \emph{Pixel-wise Attentional Gating} unit
(\emph{PAG}) that learns to selectively process a subset of spatial locations
at each layer of a deep convolutional network. PAG is a generic,
architecture-independent, problem-agnostic mechanism that can be readily
"plugged in" to an existing model with fine-tuning. We utilize PAG in two ways:
1) learning spatially varying pooling fields that improve model performance
without the extra computation cost associated with multi-scale pooling, and 2)
learning a dynamic computation policy for each pixel to decrease total
computation while maintaining accuracy.
We extensively evaluate PAG on a variety of per-pixel labeling tasks,
including semantic segmentation, boundary detection, monocular depth and
surface normal estimation. We demonstrate that PAG allows competitive or
state-of-the-art performance on these tasks. Our experiments show that PAG
learns dynamic spatial allocation of computation over the input image which
provides better performance trade-offs compared to related approaches (e.g.,
truncating deep models or dynamically skipping whole layers). Generally, we
observe that PAG can reduce computation by $10\%$ without noticeable loss in
accuracy, and performance degrades gracefully when stronger computational
constraints are imposed. | [
"cs.CV"
] |
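A hedged sketch of the pixel-wise gating idea: a lightweight head predicts a per-pixel mask deciding which spatial locations a layer processes, with gated-off pixels carried through a residual path. A hard top-k or Gumbel mask would be used in practice to actually save computation; a soft sigmoid stands in here.

```python
import torch
import torch.nn as nn

class PixelGatedConv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.gate_head = nn.Conv2d(channels, 1, 1)   # per-pixel gate logits

    def forward(self, x):
        mask = torch.sigmoid(self.gate_head(x))      # (B, 1, H, W) in (0, 1)
        return mask * self.conv(x) + (1 - mask) * x  # process or pass through

layer = PixelGatedConv(32)
print(layer(torch.randn(2, 32, 64, 64)).shape)       # torch.Size([2, 32, 64, 64])
```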