text | label
---|---|
Electron microscopic connectomics is an ambitious research direction with the
goal of studying comprehensive brain connectivity maps by using
high-throughput, nano-scale microscopy. One of the main challenges in
connectomics research is developing scalable image analysis algorithms that
require minimal user intervention. Recently, deep learning has drawn much
attention in computer vision because of its exceptional performance in image
classification tasks. For this reason, its application to connectomic analyses
holds great promise as well. In this paper, we introduce a novel deep neural
network architecture, FusionNet, for the automatic segmentation of neuronal
structures in connectomics data. FusionNet leverages the latest advances in
machine learning, such as semantic segmentation and residual neural networks,
with the novel introduction of summation-based skip connections to allow a much
deeper network architecture for a more accurate segmentation. We demonstrate
the performance of the proposed method by comparing it with state-of-the-art
electron microscopy (EM) segmentation methods from the ISBI EM segmentation
challenge. We also show segmentation results on two different tasks, cell
membrane and cell body segmentation, together with a statistical analysis of
cell morphology. | [
"cs.CV"
] |
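To make the summation-based skip connection concrete, here is a minimal PyTorch sketch; the module name, shapes, and layer choices are illustrative assumptions, not FusionNet's exact architecture:

```python
# Sketch of a summation-based skip connection: the encoder feature map is
# added (not concatenated) to the decoder feature map at the same resolution,
# keeping the channel count fixed. Names and shapes are illustrative.
import torch
import torch.nn as nn

class SumSkipBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, decoder_feat, encoder_feat):
        # Element-wise addition instead of torch.cat, so the fused tensor
        # keeps the same channel count as both inputs.
        return self.conv(decoder_feat + encoder_feat)

x_dec = torch.randn(1, 64, 128, 128)   # upsampled decoder features
x_enc = torch.randn(1, 64, 128, 128)   # matching encoder features
out = SumSkipBlock(64)(x_dec, x_enc)
print(out.shape)                       # torch.Size([1, 64, 128, 128])
```

Because addition keeps the channel count fixed, unlike concatenation, blocks of this form can be stacked residual-style into much deeper encoder-decoder networks.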
Existing knowledge distillation methods focus on convolutional neural
networks (CNNs), where the input samples like images lie in a grid domain, and
have largely overlooked graph convolutional networks (GCN) that handle non-grid
data. In this paper, we propose, to the best of our knowledge, the first dedicated
approach to distilling knowledge from a pre-trained GCN model. To enable the
knowledge transfer from the teacher GCN to the student, we propose a local
structure preserving module that explicitly accounts for the topological
semantics of the teacher. In this module, the local structure information of
both the teacher and the student is extracted as distributions, and hence
minimizing the distance between these distributions enables topology-aware
knowledge transfer from the teacher, yielding a compact yet high-performance
student model. Moreover, the proposed approach is readily extendable to dynamic
graph models, where the input graphs for the teacher and the student may
differ. We evaluate the proposed method on two different datasets using GCN
models of different architectures, and demonstrate that our method achieves the
state-of-the-art knowledge distillation performance for GCN models. Code is
publicly available at https://github.com/ihollywhy/DistillGCN.PyTorch. | [
"cs.CV"
] |
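As a hedged illustration of the local-structure-preserving idea (not the authors' exact module), each node's similarities to a fixed set of k neighbors can be softmax-normalized into a distribution, and the student matched to the teacher with a KL term:

```python
# Topology-aware distillation sketch: per-node neighbor similarities become
# distributions, and the student matches the teacher's distribution.
# The fixed-k neighbor format and dot-product similarity are assumptions.
import torch
import torch.nn.functional as F

def local_structure_loss(h_teacher, h_student, neighbors):
    """neighbors: (N, k) index tensor giving each node's k neighbors."""
    # Similarities between each node and its k neighbors, per model.
    sim_t = torch.einsum('nd,nkd->nk', h_teacher, h_teacher[neighbors])
    sim_s = torch.einsum('nd,nkd->nk', h_student, h_student[neighbors])
    # Turn similarities into local-structure distributions.
    p_t = F.softmax(sim_t, dim=-1)
    log_p_s = F.log_softmax(sim_s, dim=-1)
    # KL(teacher || student), averaged over nodes.
    return F.kl_div(log_p_s, p_t, reduction='batchmean')

h_t = torch.randn(100, 64)                 # frozen teacher embeddings
h_s = torch.randn(100, 32)                 # student embeddings (any dim)
nbrs = torch.randint(0, 100, (100, 5))     # placeholder neighbor indices
print(local_structure_loss(h_t, h_s, nbrs))
```

Because the loss compares distributions rather than raw embeddings, the teacher and student may use different embedding dimensions, as in the sketch.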
We propose a novel Synergistic Attention Network (SA-Net) to address light
field salient object detection by establishing a synergistic effect
between multi-modal features with advanced attention mechanisms. Our SA-Net
exploits the rich information of focal stacks via 3D convolutional neural
networks, decodes the high-level features of multi-modal light field data with
two cascaded synergistic attention modules, and predicts the saliency map using
an effective feature fusion module in a progressive manner. Extensive
experiments on three widely-used benchmark datasets show that our SA-Net
outperforms 28 state-of-the-art models, clearly demonstrating its
effectiveness and superiority. Our code will be made publicly available. | [
"cs.CV"
] |
Caricature is an artistic drawing created to abstract or exaggerate facial
features of a person. Rendering visually pleasing caricatures is a difficult
task that requires professional skills, and thus it is of great interest to
design a method to automatically generate such drawings. To deal with large
shape changes, we propose an algorithm based on a semantic shape transform to
produce diverse and plausible shape exaggerations. Specifically, we predict
pixel-wise semantic correspondences and perform image warping on the input
photo to achieve dense shape transformation. We show that the proposed
framework is able to render visually pleasing shape exaggerations while
maintaining their facial structures. In addition, our model allows users to
manipulate the shape via the semantic map. We demonstrate the effectiveness of
our approach on a large photograph-caricature benchmark dataset with
comparisons to the state-of-the-art methods. | [
"cs.CV"
] |
This paper presents the first non-asymptotic result showing that a model-free
algorithm can achieve a logarithmic cumulative regret for episodic tabular
reinforcement learning if there exists a strictly positive sub-optimality gap
in the optimal $Q$-function. We prove that the optimistic $Q$-learning studied
in [Jin et al. 2018] enjoys a ${\mathcal{O}}\left(\frac{SA\cdot
\mathrm{poly}\left(H\right)}{\Delta_{\min}}\log\left(SAT\right)\right)$
cumulative regret bound, where $S$ is the number of states, $A$ is the number
of actions, $H$ is the planning horizon, $T$ is the total number of steps, and
$\Delta_{\min}$ is the minimum sub-optimality gap. This bound matches the
information-theoretic lower bound in terms of $S,A,T$ up to a
$\log\left(SA\right)$ factor. We further extend our analysis to the discounted
setting and obtain a similar logarithmic cumulative regret bound. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
Wrist fracture is the most common type of fracture, with a high incidence
rate. Conventional radiography (i.e. X-ray imaging) is used for wrist fracture
detection routinely, but occasionally fracture delineation poses issues and an
additional confirmation by computed tomography (CT) is needed for diagnosis.
Recent advances in the field of Deep Learning (DL), a subfield of Artificial
Intelligence (AI), have shown that wrist fracture detection can be automated
using Convolutional Neural Networks. However, previous studies did not pay
close attention to the difficult cases which can only be confirmed via CT
imaging. In this study, we have developed and analyzed a state-of-the-art
DL-based pipeline for wrist (distal radius) fracture detection -- DeepWrist,
and evaluated it against one general population test set, and one challenging
test set comprising only cases requiring confirmation by CT. Our results reveal
that a typical state-of-the-art approach, such as DeepWrist, while having a
near-perfect performance on the general independent test set, has a
substantially lower performance on the challenging test set -- average
precision of 0.99 (0.99-0.99) vs 0.64 (0.46-0.83). Similarly, the area under
the ROC curve was 0.99 (0.98-0.99) vs 0.84 (0.72-0.93). Our findings
highlight the importance of a meticulous analysis of
DL-based models before clinical use, and unearth the need for more challenging
settings for testing medical AI systems. | [
"cs.CV",
"cs.LG",
"eess.IV",
"q-bio.QM"
] |
We introduce a new notion of generalization -- Distributional Generalization
-- which roughly states that outputs of a classifier at train and test time are
close *as distributions*, as opposed to close in just their average error. For
example, if we mislabel 30% of dogs as cats in the train set of CIFAR-10, then
a ResNet trained to interpolation will in fact mislabel roughly 30% of dogs as
cats on the *test set* as well, while leaving other classes unaffected. This
behavior is not captured by classical generalization, which would only consider
the average error and not the distribution of errors over the input domain. Our
formal conjectures, which are much more general than this example, characterize
the form of distributional generalization that can be expected in terms of
problem parameters: model architecture, training procedure, number of samples,
and data distribution. We give empirical evidence for these conjectures across
a variety of domains in machine learning, including neural networks, kernel
machines, and decision trees. Our results thus advance our empirical
understanding of interpolating classifiers. | [
"cs.LG",
"cs.NE",
"math.ST",
"stat.ML",
"stat.TH"
] |
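A sketch of the dog/cat experiment described above; `train_to_interpolation` is a hypothetical stand-in for any training loop driven to ~100% train accuracy (CIFAR-10 labels: 3 = cat, 5 = dog):

```python
# Label-corruption setup for probing Distributional Generalization:
# mislabel 30% of dogs as cats, train to interpolation, then check whether
# roughly the same 30% confusion appears on the test set.
import numpy as np

def corrupt_labels(y_train, frac=0.30, src=5, dst=3, seed=0):
    rng = np.random.default_rng(seed)
    y = y_train.copy()
    dogs = np.where(y == src)[0]
    flip = rng.choice(dogs, size=int(frac * len(dogs)), replace=False)
    y[flip] = dst                       # mislabel 30% of dogs as cats
    return y

y_train = np.random.randint(0, 10, size=50000)   # placeholder labels
y_noisy = corrupt_labels(y_train)
print((y_noisy != y_train).mean())               # ~0.03 (30% of one class)

# model = train_to_interpolation(x_train, y_noisy)   # hypothetical helper
# preds = model.predict(x_test)
# dog_idx = (y_test == 5)
# print((preds[dog_idx] == 3).mean())   # conjecture predicts ~0.30
```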
This work proposes a method for depth completion of sparse LiDAR data using a
convolutional neural network which can be used to generate semi-dense depth
maps and "almost" full 3D point-clouds with significantly lower root mean
squared error (RMSE) over state-of-the-art methods. We add an "Error
Prediction" unit to our network and present a novel and simple end-to-end
method that learns to predict an error map for the depth regression task. An
"almost" dense high-confidence/low-variance point cloud is more valuable for
safety-critical applications, particularly real-world autonomous driving, than
a full point cloud with a high error rate and high error variance. Using our
predicted error map, we demonstrate that by up-filling a LiDAR point cloud from
18,000 points to 285,000 points, versus 300,000 points for full depth, we can
reduce the RMSE from 1004 to 399. This error is approximately 60% less
than the state-of-the-art and 50% less than the state-of-the-art with RGB
guidance (we did not use RGB guidance in our algorithm). In addition to
analyzing our results on the KITTI depth completion dataset, we also demonstrate
the ability of our proposed method to extend to new tasks by deploying our
"Error Prediction" unit to improve upon the state-of-the-art for monocular
depth estimation. Codes and demo videos are available at
http://github.com/hekmak/Conf-net. | [
"cs.CV",
"cs.LG"
] |
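A hedged sketch of an "Error Prediction" unit of this kind: the network emits a per-pixel error estimate next to the depth map, supervised by the detached depth residual. The architecture below is a stand-in, not the paper's design:

```python
# Two-headed depth network sketch: one head regresses depth, the other
# regresses a per-pixel error estimate. All layer choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthWithErrorHead(nn.Module):
    def __init__(self, feat_ch=32):
        super().__init__()
        self.backbone = nn.Conv2d(1, feat_ch, 3, padding=1)  # stand-in encoder
        self.depth_head = nn.Conv2d(feat_ch, 1, 1)
        self.error_head = nn.Conv2d(feat_ch, 1, 1)

    def forward(self, sparse_depth):
        f = F.relu(self.backbone(sparse_depth))
        return self.depth_head(f), self.error_head(f)

def loss_fn(depth_pred, err_pred, depth_gt, valid):
    residual = (depth_pred - depth_gt).abs()
    depth_loss = (residual * valid).sum() / valid.sum()
    # The error head regresses the residual; detaching it keeps the
    # error supervision from pushing depth toward larger errors.
    err_loss = ((err_pred - residual.detach()).abs() * valid).sum() / valid.sum()
    return depth_loss + err_loss

model = DepthWithErrorHead()
sparse = torch.randn(1, 1, 64, 64)
gt = torch.randn(1, 1, 64, 64)
valid = (torch.rand(1, 1, 64, 64) > 0.8).float()   # sparse ground-truth mask
d, e = model(sparse)
print(loss_fn(d, e, gt, valid))
```

Thresholding the predicted error map is then what yields the "almost" dense, high-confidence point cloud: pixels whose predicted error exceeds a cutoff are simply dropped.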
This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.NE",
"stat.ML"
] |
This work focuses on the registration, or alignment, of 3D point sets. Although
registration is a well-established problem solved by multiple variants of the
Iterative Closest Point (ICP) algorithm, most current state-of-the-art
approaches still suffer from misalignment when the \textit{Source} and the
\textit{Target} point sets are separated by large rotations and translations.
In this work, we propose a variant of the standard ICP algorithm in which we
introduce a Correntropy Relationship Matrix into the computation of the
rotation and translation components, which attempts to solve the
large-rotation-and-translation problem between the \textit{Source} and
\textit{Target} point sets. This matrix is created through a correntropy
criterion that is updated at every iteration. The correntropy criterion
defined in this approach maintains the relationship between the points in the
\textit{Source} dataset and the \textit{Target} dataset. Through our
experiments and validation, we verify that our approach performs well under
various rotations and translations in comparison to other well-known
state-of-the-art methods available in the Point Cloud Library (PCL), as well
as other open-source methods. We have uploaded our code to a GitHub repository
for readers to validate and verify our approach:
https://github.com/aralab-unr/CoSM-ICP. | [
"cs.CV"
] |
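A minimal sketch of the correntropy idea, assuming a Gaussian kernel (the usual choice for the correntropy criterion); this shows only the per-correspondence weighting, not the full CoSM-ICP update:

```python
# Correntropy-style weighting: correspondences with large residuals get
# near-zero weight, so outliers and bad matches under large motions
# contribute little to the rotation/translation estimate.
import numpy as np

def correntropy_weights(source, target, sigma=1.0):
    """Per-correspondence weights from a Gaussian (correntropy) kernel."""
    residuals = np.linalg.norm(source - target, axis=1)
    return np.exp(-residuals**2 / (2.0 * sigma**2))

src = np.random.rand(100, 3)
tgt = src + 0.01 * np.random.randn(100, 3)
tgt[:5] += 5.0                        # a few gross outliers
w = correntropy_weights(src, tgt)
print(w[:5], w[5:10])                 # outliers receive ~0 weight
# These weights would enter a weighted Procrustes/SVD step and be
# recomputed at every ICP iteration.
```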
Prognostic tumor growth modeling via volumetric medical imaging observations
can potentially lead to better outcomes of tumor treatment and surgical
planning. Recent advances in convolutional networks have demonstrated higher
accuracy than traditional mathematical models in predicting future tumor
volumes. This indicates that deep learning-based techniques may have great
potential for addressing this problem. However, current 2D patch-based modeling
approaches cannot make full use of the spatio-temporal imaging context of the
tumor's longitudinal 4D (3D + time) data. Moreover, they are incapable of
predicting clinically relevant tumor properties other than volumes. In this
paper, we propose to formulate the tumor growth process through a convolutional
Long Short-Term Memory (ConvLSTM) network that extracts the tumor's static
imaging appearance and captures its temporal dynamics within a single network.
We extend ConvLSTM into the spatio-temporal domain (ST-ConvLSTM) by jointly
learning the inter-slice 3D contexts and the longitudinal or temporal dynamics
from multiple patient studies. Our approach can incorporate other non-imaging
patient information in an end-to-end trainable manner. Experiments are
conducted on the largest 4D longitudinal tumor dataset of 33 patients to date.
Results validate that the ST-ConvLSTM produces a Dice score of 83.2%+-5.1% and
an RVD of 11.2%+-10.8%, both significantly outperforming (p<0.05) other compared
methods of linear model, ConvLSTM, and generative adversarial network (GAN)
under the metric of predicting future tumor volumes. Additionally, our new
method enables the prediction of both cell density and CT intensity numbers.
Last, we demonstrate the generalizability of ST-ConvLSTM by employing it in a 4D
medical image segmentation task, where it achieves an average Dice score of
86.3%+-1.2% for left-ventricle segmentation in 4D ultrasound at 3 seconds per
patient. | [
"cs.CV"
] |
Generative Adversarial Networks (GANs) coupled with self-supervised tasks
have shown promising results in unconditional and semi-supervised image
generation. We propose a self-supervised approach (LT-GAN) to improve the
generation quality and diversity of images by estimating the GAN-induced
transformation (i.e. transformation induced in the generated images by
perturbing the latent space of the generator). Specifically, given two pairs of
images, where each pair comprises a generated image and its transformed
version, the self-supervision task aims to identify whether the latent
transformation applied in one pair is the same as that in the other pair.
Hence, this auxiliary loss encourages the generator to produce images that are
distinguishable by the auxiliary network, which in turn promotes the synthesis
of semantically consistent images with respect to latent transformations. We
show the efficacy of this pretext task by improving the image generation
quality in terms of FID on state-of-the-art models for both conditional and
unconditional settings on CIFAR-10, CelebA-HQ and ImageNet datasets. Moreover,
we empirically show that LT-GAN helps in improving controlled image editing for
CelebA-HQ and ImageNet over baseline models. We experimentally demonstrate that
our proposed LT self-supervision task can be effectively combined with other
state-of-the-art training techniques for added benefits. Consequently, we show
that our approach achieves the new state-of-the-art FID score of 9.8 on
conditional CIFAR-10 image generation. | [
"cs.CV"
] |
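A hedged, toy-scale sketch of the LT self-supervision task: two pairs of generated images are produced by perturbing two latent codes, and an auxiliary classifier decides whether both pairs used the same perturbation. The generator and auxiliary network below are tiny stand-ins, not LT-GAN's actual modules:

```python
# LT self-supervision sketch: predict whether two (image, perturbed-image)
# pairs share the same latent perturbation. G and aux are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim, batch = 32, 16
G = nn.Sequential(nn.Linear(z_dim, 3 * 8 * 8), nn.Tanh(),
                  nn.Unflatten(1, (3, 8, 8)))          # stand-in generator
aux = nn.Sequential(nn.Flatten(), nn.Linear(2 * 6 * 8 * 8, 1))

def lt_task_loss():
    z_a, z_b = torch.randn(batch, z_dim), torch.randn(batch, z_dim)
    eps = 0.1 * torch.randn(batch, z_dim)               # latent perturbation
    same = torch.rand(batch) < 0.5                      # binary labels
    eps_b = torch.where(same[:, None], eps, 0.1 * torch.randn(batch, z_dim))
    pair_a = torch.cat([G(z_a), G(z_a + eps)], dim=1)   # (B, 6, 8, 8)
    pair_b = torch.cat([G(z_b), G(z_b + eps_b)], dim=1)
    logits = aux(torch.cat([pair_a, pair_b], dim=1)).squeeze(-1)
    return F.binary_cross_entropy_with_logits(logits, same.float())

print(lt_task_loss())
```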
In this article, we present a new imitation-based machine learning model built
on the linguistic description of complex phenomena. The idea consists of,
first, capturing the behaviour of human players by creating a computational
perception network based on the execution traces of the games and, second,
representing it using fuzzy logic (linguistic variables and if-then rules).
From this knowledge, a set of data (dataset) is automatically created to
generate a learning model based on decision trees. This model will be used
later to automatically control the movements of a bot. The result is an
artificial agent that mimics the human player. We have implemented, tested and
evaluated this technology. The results obtained are interesting and promising,
showing that this method can be a good alternative to design and implement the
behaviour of intelligent agents in video game development. | [
"cs.LG"
] |
In this paper, we propose a novel controllable text-to-image generative
adversarial network (ControlGAN), which can effectively synthesise high-quality
images and also control parts of the image generation according to natural
language descriptions. To achieve this, we introduce a word-level spatial and
channel-wise attention-driven generator that can disentangle different visual
attributes, and allow the model to focus on generating and manipulating
subregions corresponding to the most relevant words. Also, a word-level
discriminator is proposed to provide fine-grained supervisory feedback by
correlating words with image regions, facilitating training an effective
generator which is able to manipulate specific visual attributes without
affecting the generation of other content. Furthermore, perceptual loss is
adopted to reduce the randomness involved in the image generation, and to
encourage the generator to manipulate specific attributes required in the
modified text. Extensive experiments on benchmark datasets demonstrate that our
method outperforms the existing state of the art and is able to effectively
manipulate synthetic images using natural language descriptions. Code is
available at https://github.com/mrlibw/ControlGAN. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
Multi-task Inverse Reinforcement Learning (IRL) is the problem of inferring
multiple reward functions from expert demonstrations. Prior work, built on
Bayesian IRL, is unable to scale to complex environments due to computational
constraints. This paper contributes a formulation of multi-task IRL in the more
computationally efficient Maximum Causal Entropy (MCE) IRL framework.
Experiments show our approach can perform one-shot imitation learning in a
gridworld environment that single-task IRL algorithms need hundreds of
demonstrations to solve. We outline preliminary work using meta-learning to
extend our method to the function approximator setting of modern MCE IRL
algorithms. Evaluating on multi-task variants of common simulated robotics
benchmarks, we discover serious limitations of these IRL algorithms, and
conclude with suggestions for further work. | [
"cs.LG",
"cs.AI",
"stat.ML",
"I.2.6"
] |
Inference in discrete graphical models with variational methods is difficult
because of the inability to re-parameterize gradients of the Evidence Lower
Bound (ELBO). Many sampling-based methods have been proposed for estimating
these gradients, but they suffer from high bias or variance. In this paper, we
propose a new approach that leverages the tractability of probabilistic circuit
models, such as Sum Product Networks (SPN), to compute ELBO gradients exactly
(without sampling) for a certain class of densities. In particular, we show
that selective-SPNs are suitable as an expressive variational distribution, and
prove that when the log-density of the target model is a polynomial, the
corresponding ELBO can be computed analytically. To scale to graphical models
with thousands of variables, we develop an efficient and effective construction
of selective-SPNs with size $O(kn)$, where $n$ is the number of variables and
$k$ is an adjustable hyperparameter. We demonstrate our approach on three types
of graphical models -- Ising models, Latent Dirichlet Allocation, and factor
graphs from the UAI Inference Competition. Selective-SPNs give a better lower
bound than mean-field and structured mean-field, and are competitive with
approximations that do not provide a lower bound, such as Loopy Belief
Propagation and Tree-Reweighted Belief Propagation. Our results show that
probabilistic circuits are promising tools for variational inference in
discrete graphical models as they combine tractability and expressivity. | [
"cs.LG",
"cs.AI"
] |
Recent progress in Generative Adversarial Networks (GANs) has shown promising
signs of improving GAN training via architectural change. Despite some early
success, at present the design of GAN architectures requires human expertise,
laborious trial-and-error testing, and often draws inspiration from its image
classification counterpart. In the current paper, we present the first neural
architecture search algorithm, automated neural architecture search for deep
generative models, or AGAN for short, that is specifically suited for
GAN training. For unsupervised image generation tasks on CIFAR-10, our
algorithm finds architectures that outperform state-of-the-art models under
the same regularization techniques. For supervised tasks, the automatically
searched architectures also achieve highly competitive performance,
outperforming best human-invented architectures at resolution $32\times32$.
Moreover, we empirically demonstrate that the modules learned by AGAN are
transferable to other image generation tasks such as STL-10. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Humanitarian actions require accurate information to efficiently delegate
support operations. Such information can be maps of building footprints,
building functions, and population densities. While the access to this
information is comparably easy in industrialized countries thanks to reliable
census data and national geo-data infrastructures, this is not the case for
developing countries, where that data is often incomplete or outdated. Building
maps derived from remote sensing images may partially remedy this challenge in
such countries, but are not always accurate due to different landscape
configurations and lack of validation data. Even when they exist, building
footprint layers usually do not reveal more fine-grained building properties,
such as the number of stories or the building's function (e.g., office,
residential, school, etc.). In this project we aim to automate building
footprint and function mapping using heterogeneous data sources. In a first
step, we intend to delineate buildings from satellite data, using deep learning
models for semantic image segmentation. Building functions shall be retrieved
by parsing social media data such as tweets, as well as ground-based
imagery, to automatically identify different building functions and retrieve
further information such as the number of building stories. Building maps
augmented with those additional attributes make it possible to derive more
accurate population density maps, needed to support the targeted provision of
humanitarian aid. | [
"cs.CV",
"eess.IV"
] |
Arbitrary-oriented objects exist widely in natural scenes, and thus oriented
object detection has received extensive attention in recent years. The
mainstream rotation detectors use oriented bounding boxes (OBB) or
quadrilateral bounding boxes (QBB) to represent the rotating objects. However,
these methods suffer from the representation ambiguity for oriented object
definition, which leads to suboptimal regression optimization and the
inconsistency between the loss metric and the localization accuracy of the
predictions. In this paper, we propose a Representation Invariance Loss (RIL)
to optimize the bounding box regression for the rotating objects. Specifically,
RIL treats multiple representations of an oriented object as multiple
equivalent local minima, and hence transforms bounding box regression into an
adaptive matching process with these local minima. Then, the Hungarian matching
algorithm is adopted to obtain the optimal regression strategy. We also propose
a normalized rotation loss to alleviate the weak correlation between different
variables and their unbalanced loss contribution in OBB representation.
Extensive experiments on remote sensing datasets and scene text datasets show
that our method achieves consistent and substantial improvement. The source
code and trained models are available at https://github.com/ming71/RIDet. | [
"cs.CV"
] |
Recent advances in deep learning have enabled the development of automated
frameworks for analysing medical images and signals, including analysis of
cervical cancer. Many previous works focus on the analysis of isolated cervical
cells, or do not offer sufficient methods to explain and understand how the
proposed models reach their classification decisions on multi-cell images.
Here, we evaluate various state-of-the-art deep learning models and
attention-based frameworks for the classification of images of multiple
cervical cells. As we aim to provide interpretable deep learning models to
address this task, we also compare their explainability through the
visualization of their gradients. We demonstrate the importance of using images
that contain multiple cells over using isolated single-cell images. We show the
effectiveness of the residual channel attention model for extracting important
features from a group of cells, and demonstrate this model's efficiency for
this classification task. This work highlights the benefits of channel
attention mechanisms in analyzing multiple-cell images for potential relations
and distributions within a group of cells. It also provides interpretable
models to address the classification of cervical cells. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
We present a method to reconstruct the three-dimensional trajectory of a
moving instance of a known object category in monocular video data. We track
the two-dimensional shape of objects on pixel level exploiting instance-aware
semantic segmentation techniques and optical flow cues. We apply Structure from
Motion techniques to object and background images to determine for each frame
camera poses relative to object instances and background structures. By
combining object and background camera pose information, we restrict the object
trajectory to a one-parameter family of possible solutions. We compute a ground
representation by fusing background structures and corresponding semantic
segmentations. This allows us to determine an object trajectory consistent with
the image observations and the reconstructed environment model. Our method is robust to
occlusion and handles temporarily stationary objects. We show qualitative
results using drone imagery. Due to the lack of suitable benchmark datasets, we
present a new dataset to evaluate the quality of reconstructed
three-dimensional object trajectories. The video sequences contain vehicles in
urban areas and are rendered using the path-tracing render engine Cycles to
achieve realistic results. We perform a quantitative evaluation of the
presented approach using this dataset. Our algorithm achieves an average
reconstruction-to-ground-truth distance of 0.31 meters. | [
"cs.CV"
] |
We address the problem of restoring a high-resolution face image from a
blurry low-resolution input. This problem is difficult as super-resolution and
deblurring need to be tackled simultaneously. Moreover, existing algorithms
cannot handle face images well, as low-resolution face images do not have much
texture, which is especially critical for deblurring. In this paper, we propose
an effective algorithm by utilizing the domain-specific knowledge of human
faces to recover high-quality faces. We first propose a facial-component-guided
deep Convolutional Neural Network (CNN) to restore a coarse face image, denoted
as the base image, where the facial component is automatically generated from
the input face image. However, the CNN-based method cannot
handle image details well. We further develop a novel exemplar-based detail
enhancement algorithm via facial component matching. Extensive experiments show
that the proposed method outperforms the state-of-the-art algorithms both
quantitatively and qualitatively. | [
"cs.CV"
] |
In recent years, manifold methods have moved into focus as tools for
dimension reduction. Assuming that the high-dimensional data actually lie on or
close to a low-dimensional nonlinear manifold, these methods have shown
convincing results in several settings. This manifold assumption is often
reasonable for functional data, i.e., data representing continuously observed
functions, as well. However, the performance of manifold methods recently
proposed for tabular or image data has not been systematically assessed in the
case of functional data yet. Moreover, it is unclear how to evaluate the
quality of learned embeddings that do not yield invertible mappings, since the
reconstruction error cannot be used as a performance measure for such
representations. In this work, we describe and investigate the specific
challenges for nonlinear dimension reduction posed by the functional data
setting. The contributions of the paper are three-fold: First of all, we define
a theoretical framework which allows us to systematically assess specific
challenges that arise in the functional data context, transfer several
nonlinear dimension reduction methods for tabular and image data to functional
data, and show that manifold methods can be used successfully in this setting.
Secondly, we subject performance assessment and tuning strategies to a thorough
and systematic evaluation based on several different functional data settings
and point out some previously undescribed weaknesses and pitfalls which can
jeopardize reliable judgment of embedding quality. Thirdly, we propose a
nuanced approach to make trustworthy decisions for or against competing
nonconforming embeddings more objectively. | [
"stat.ML",
"cs.LG"
] |
Recurrent neural networks (RNNs) are a widely used tool for modeling
sequential data, yet they are often treated as inscrutable black boxes. Given a
trained recurrent network, we would like to reverse engineer it--to obtain a
quantitative, interpretable description of how it solves a particular task.
Even for simple tasks, a detailed understanding of how recurrent networks work,
or a prescription for how to develop such an understanding, remains elusive. In
this work, we use tools from dynamical systems analysis to reverse engineer
recurrent networks trained to perform sentiment classification, a foundational
natural language processing task. Given a trained network, we find fixed points
of the recurrent dynamics and linearize the nonlinear system around these fixed
points. Despite their theoretical capacity to implement complex,
high-dimensional computations, we find that trained networks converge to highly
interpretable, low-dimensional representations. In particular, the topological
structure of the fixed points and corresponding linearized dynamics reveal an
approximate line attractor within the RNN, which we can use to quantitatively
understand how the RNN solves the sentiment analysis task. Finally, we find
this mechanism present across RNN architectures (including LSTMs, GRUs, and
vanilla RNNs) trained on multiple datasets, suggesting that our findings are
not unique to a particular architecture or dataset. Overall, these results
demonstrate that surprisingly universal and human interpretable computations
can arise across a range of recurrent networks. | [
"cs.LG",
"stat.ML"
] |
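A minimal sketch of the fixed-point procedure, assuming a frozen RNN cell and a constant (e.g., neutral) input: candidate fixed points minimize q(h) = ||F(h, x*) - h||^2, and the Jacobian at each solution gives the linearized dynamics. The cell, input choice, and optimizer settings below are illustrative:

```python
# Reverse-engineering sketch: find a fixed point of an RNN cell by gradient
# descent on q(h) = ||F(h, x*) - h||^2, then linearize via the Jacobian.
import torch

def find_fixed_point(cell, x_star, h_init, steps=2000, lr=1e-2):
    h = h_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([h], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        q = ((cell(x_star, h) - h) ** 2).sum()
        q.backward()
        opt.step()
    return h.detach()

cell = torch.nn.RNNCell(10, 32)       # stand-in for a trained cell
x_star = torch.zeros(1, 10)           # e.g. a neutral "empty" input
h_star = find_fixed_point(cell, x_star, torch.randn(1, 32))
# Linearize: eigenvalues of dF/dh at h* reveal slow modes such as an
# approximate line attractor (eigenvalues near 1).
J = torch.autograd.functional.jacobian(
    lambda h: cell(x_star, h), h_star).squeeze()
print(torch.linalg.eigvals(J).abs().max())
```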
Convolutional Neural Networks (CNNs) have performed extremely well on data
represented by regularly arranged grids such as images. However, directly
leveraging the classic convolution kernels or parameter sharing mechanisms on
sparse 3D point clouds is inefficient due to their irregular and unordered
nature. We propose a point attention network that learns rich local shape
features and their contextual correlations for 3D point cloud semantic
segmentation. Since the geometric distribution of the neighboring points is
invariant to the point ordering, we propose a Local Attention-Edge Convolution
(LAE Conv) to construct a local graph based on the neighborhood points searched
in multi-directions. We assign attention coefficients to each edge and then
aggregate the point features as a weighted sum of its neighbors. The learned
LAE-Conv layer features are then given to a point-wise spatial attention module
to generate an interdependency matrix of all points regardless of their
distances, which captures long-range spatial contextual features contributing
to more precise semantic information. The proposed point attention network
consists of an encoder and decoder which, together with the LAE-Conv layers and
the point-wise spatial attention modules, make it an end-to-end trainable
network for predicting dense labels for 3D point cloud segmentation.
Experiments on challenging benchmarks of 3D point clouds show that our
algorithm performs on par with or better than existing state-of-the-art
methods. | [
"cs.CV"
] |
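As a hedged illustration of attention-weighted edge aggregation in this spirit (the exact LAE-Conv parameterization differs), each point attends over its k nearest neighbors and aggregates their features as a weighted sum:

```python
# Attention-weighted edge aggregation sketch: per-edge attention scores are
# softmax-normalized over each point's k neighbors. Parameterization is
# illustrative, not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionEdgeConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.lin = nn.Linear(in_ch, out_ch)
        self.att = nn.Linear(2 * out_ch, 1)   # score from (center, neighbor)

    def forward(self, feats, knn_idx):
        """feats: (N, C); knn_idx: (N, k) neighbor indices."""
        h = self.lin(feats)                       # (N, C')
        nbr = h[knn_idx]                          # (N, k, C')
        ctr = h.unsqueeze(1).expand_as(nbr)       # (N, k, C')
        alpha = F.softmax(
            F.leaky_relu(self.att(torch.cat([ctr, nbr], -1))), dim=1)
        return (alpha * nbr).sum(dim=1)           # attention-weighted sum

pts = torch.randn(1024, 6)
knn = torch.randint(0, 1024, (1024, 16))          # placeholder k-NN indices
out = AttentionEdgeConv(6, 64)(pts, knn)
print(out.shape)                                  # torch.Size([1024, 64])
```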
Video games are a compelling source of annotated data as they can readily
provide fine-grained groundtruth for diverse tasks. However, it is not clear
whether the synthetically generated data has enough resemblance to the
real-world images to improve the performance of computer vision models in
practice. We present experiments assessing the effectiveness on real-world data
of systems trained on synthetic RGB images that are extracted from a video
game. We collected over 60,000 synthetic samples from a modern video game with
similar conditions to the real-world CamVid and Cityscapes datasets. We provide
several experiments to demonstrate that the synthetically generated RGB images
can be used to improve the performance of deep neural networks on both image
segmentation and depth estimation. These results show that a convolutional
network trained on synthetic data achieves a similar test error to a network
that is trained on real-world data for dense image classification. Furthermore,
the synthetically generated RGB images can provide similar or better results
compared to the real-world datasets if a simple domain adaptation technique is
applied. Our results suggest that collaboration with game developers for an
accessible interface to gather data is potentially a fruitful direction for
future work in computer vision. | [
"cs.CV"
] |
We review basic concepts of convex duality, focusing on the very general and
supremely useful Fenchel-Rockafellar duality. We summarize how this duality may
be applied to a variety of reinforcement learning (RL) settings, including
policy evaluation or optimization, online or offline learning, and discounted
or undiscounted rewards. The derivations yield a number of intriguing results,
including the ability to perform policy evaluation and on-policy policy
gradient with behavior-agnostic offline data and methods to learn a policy via
max-likelihood optimization. Although many of these results have appeared
previously in various forms, we provide a unified treatment and perspective on
these results, which we hope will enable researchers to better use and apply
the tools of convex duality to make further progress in RL. | [
"cs.LG",
"stat.ML"
] |
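For reference, Fenchel-Rockafellar duality in its standard form (regularity conditions omitted): for convex $f$ and $g$ and a linear map $A$,

$$\min_{x}\; f(x) + g(Ax) \;=\; \max_{y}\; -f^{*}(A^{\top}y) - g^{*}(-y),$$

where $f^{*}(u)=\sup_{x}\,\langle u,x\rangle - f(x)$ denotes the convex conjugate. In the RL applications summarized above, $f$ and $g$ typically encode the policy objective and the Bellman flow constraints, respectively, though the precise assignment varies by setting.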
The state-of-the-art unsupervised contrastive visual representation learning
methods that have emerged recently (SimCLR, MoCo, SwAV) all make use of data
augmentations in order to construct a pretext task of instance discrimination
consisting of similar and dissimilar pairs of images. Similar pairs are
constructed by randomly extracting patches from the same image and applying
several other transformations such as color jittering or blurring, while
transformed patches from different image instances in a given batch are
regarded as dissimilar pairs. We argue that this approach can result in similar
pairs that are \textit{semantically} dissimilar. In this work, we address this
problem by introducing a \textit{batch curation} scheme that selects batches
during the training process that are more in line with the underlying
contrastive objective. We provide insights into what constitutes beneficial
similar and dissimilar pairs as well as validate \textit{batch curation} on
CIFAR10 by integrating it in the SimCLR model. | [
"cs.LG"
] |
In this paper, we develop a new weakly-supervised learning algorithm to learn
to segment cancerous regions in histopathology images. Our work is under a
multiple instance learning framework (MIL) with a new formulation, deep weak
supervision (DWS); we also propose an effective way to introduce constraints to
our neural networks to assist the learning process. The contributions of our
algorithm are threefold: (1) We build an end-to-end learning system that
segments cancerous regions with fully convolutional networks (FCN) in which
image-to-image weakly-supervised learning is performed. (2) We develop a deep
weak supervision formulation to exploit multi-scale learning under weak
supervision within fully convolutional networks. (3) Constraints about positive
instances are introduced in our approach to effectively explore additional
weakly-supervised information that is easy to obtain and provides a significant
boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL,
is easy to implement and can be trained efficiently. Our system demonstrates
state-of-the-art results on large-scale histopathology image datasets and can
be applied to various applications in medical imaging beyond histopathology
images such as MRI, CT, and ultrasound images. | [
"cs.CV"
] |
Deep learning systems extensively use convolution operations to process input
data. Though convolution is clearly defined for structured data such as 2D
images or 3D volumes, this is not true for other data types such as sparse
point clouds. Previous techniques have developed approximations to convolutions
for restricted conditions. Unfortunately, their applicability is limited and
cannot be used for general point clouds. We propose an efficient and effective
method to learn convolutions for non-uniformly sampled point clouds, as they
are obtained with modern acquisition techniques. Learning is enabled by four
key novelties: first, representing the convolution kernel itself as a
multilayer perceptron; second, phrasing convolution as a Monte Carlo
integration problem; third, using this notion to combine information from
multiple samplings at different levels; and fourth, using Poisson disk sampling
as a scalable means of hierarchical point cloud learning. The key idea across
all these contributions is to guarantee adequate consideration of the
underlying non-uniform sample distribution function from a Monte Carlo
perspective. To make the proposed concepts applicable to real-world tasks, we
furthermore propose an efficient implementation which significantly reduces the
GPU memory required during the training process. By employing our method in
hierarchical network architectures we can outperform most of the
state-of-the-art networks on established point cloud segmentation,
classification and normal estimation benchmarks. Furthermore, in contrast to
most existing approaches, we also demonstrate the robustness of our method with
respect to sampling variations, even when training with uniformly sampled data
only. To support the direct application of these concepts, we provide a
ready-to-use TensorFlow implementation of these layers at
https://github.com/viscom-ulm/MCCNN | [
"cs.CV"
] |
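A minimal sketch of convolution as Monte Carlo integration under these ideas: the kernel is an MLP over relative point offsets, and each neighbor's contribution is divided by an estimate of the local sample density so that non-uniform samplings approximate the same underlying integral. Shapes and the density input are illustrative assumptions:

```python
# Monte Carlo convolution sketch: kernel-as-MLP over offsets, with
# inverse-density weighting of each neighbor's contribution.
import torch
import torch.nn as nn

class MonteCarloConv(nn.Module):
    def __init__(self, in_ch, out_ch, hidden=16):
        super().__init__()
        # Kernel as an MLP: R^3 offset -> (in_ch * out_ch) weights.
        self.kernel = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, in_ch * out_ch))
        self.in_ch, self.out_ch = in_ch, out_ch

    def forward(self, xyz, feats, knn_idx, density):
        """xyz: (N,3); feats: (N,C); knn_idx: (N,k); density: (N,)."""
        offsets = xyz[knn_idx] - xyz.unsqueeze(1)             # (N, k, 3)
        w = self.kernel(offsets).view(*knn_idx.shape, self.in_ch, self.out_ch)
        contrib = torch.einsum('nkc,nkco->nko', feats[knn_idx], w)
        # Monte Carlo estimate: average neighbor contributions weighted
        # by the inverse of the local sample density.
        return (contrib / density[knn_idx][..., None]).mean(dim=1)

xyz = torch.rand(512, 3)
feats = torch.randn(512, 8)
knn = torch.randint(0, 512, (512, 16))   # placeholder neighbor indices
density = torch.ones(512)                # in practice, e.g. a KDE estimate
out = MonteCarloConv(8, 32)(xyz, feats, knn, density)
print(out.shape)                         # torch.Size([512, 32])
```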
Knowledge distillation, which involves extracting the "dark knowledge" from a
teacher network to guide the learning of a student network, has emerged as an
important technique for model compression and transfer learning. Unlike
previous works that exploit architecture-specific cues such as activation and
attention for distillation, here we wish to explore a more general and
model-agnostic approach for extracting "richer dark knowledge" from the
pre-trained teacher model. We show that the seemingly different
self-supervision task can serve as a simple yet powerful solution. For example,
when performing contrastive learning between transformed entities, the noisy
predictions of the teacher network reflect its intrinsic composition of
semantic and pose information. By exploiting the similarity between those
self-supervision signals as an auxiliary task, one can effectively transfer the
hidden information from the teacher to the student. In this paper, we discuss
practical ways to exploit those noisy self-supervision signals with selective
transfer for distillation. We further show that self-supervision signals
improve conventional distillation with substantial gains under few-shot and
noisy-label scenarios. Given the richer knowledge mined from self-supervision,
our knowledge distillation approach achieves state-of-the-art performance on
standard benchmarks, i.e., CIFAR100 and ImageNet, under both
similar-architecture and cross-architecture settings. The advantage is even
more pronounced under the cross-architecture setting, where our method
outperforms the state-of-the-art CRD by an average of 2.3% in accuracy on
CIFAR100 across six different teacher-student pairs. | [
"cs.CV"
] |
As the class size grows, maintaining a balanced dataset across many classes
is challenging because the data are long-tailed in nature; it is even
impossible when the samples of interest co-exist in one
collectable unit, e.g., multiple visual instances in one image. Therefore,
long-tailed classification is the key to deep learning at scale. However,
existing methods are mainly based on re-weighting/re-sampling heuristics that
lack a fundamental theory. In this paper, we establish a causal inference
framework, which not only unravels the whys of previous methods, but also
derives a new principled solution. Specifically, our theory shows that the SGD
momentum is essentially a confounder in long-tailed classification. On one
hand, it has a harmful causal effect that misleads the tail prediction biased
towards the head. On the other hand, its induced mediation also benefits the
representation learning and head prediction. Our framework elegantly
disentangles the paradoxical effects of the momentum, by pursuing the direct
causal effect caused by an input sample. In particular, we use causal
intervention in training, and counterfactual reasoning in inference, to remove
the "bad" while keep the "good". We achieve new state-of-the-arts on three
long-tailed visual recognition benchmarks: Long-tailed CIFAR-10/-100,
ImageNet-LT for image classification and LVIS for instance segmentation. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Self-supervised learning of depth has been a highly studied topic of research
as it alleviates the requirement of having ground truth annotations for
predicting depth. Depth is learnt as an intermediate solution to the task of
view synthesis, utilising warped photometric consistency. Although it gives
good results when trained using stereo data, the predicted depth is still
sensitive to noise, illumination changes and specular reflections. Also,
occlusion can be tackled better by learning depth from a single camera. We
propose ADAA, utilising depth augmentation as depth supervision for learning
accurate and robust depth. We propose a relational self-attention module that
learns rich contextual features and further enhances depth results. We also
optimize the auto-masking strategy across all losses by enforcing L1
regularisation over mask. Our novel progressive training strategy first learns
depth at a lower resolution and then progresses to the original resolution with
slight training. We utilise a ResNet18 encoder, learning features for
prediction of both depth and pose. We evaluate our predicted depth on the
standard KITTI driving dataset and achieve state-of-the-art results for
monocular depth estimation whilst having a significantly lower number of
trainable parameters in our deep learning framework. We also evaluate our model
on the Make3D dataset, showing better generalization than other methods. | [
"cs.CV"
] |
Since its discovery in 2013, the phenomenon of adversarial examples has
attracted a growing amount of attention from the machine learning community. A
deeper understanding of the problem could lead to a better comprehension of how
information is processed and encoded in neural networks and, more generally,
could help to solve the issue of interpretability in machine learning. Our idea
to increase adversarial resilience starts with the observation that artificial
neurons can be divided into two broad categories: AND-like neurons and OR-like
neurons. Intuitively, the former are characterised by a relatively low number
of combinations of input values which trigger neuron activation, while for the
latter the opposite is true. Our hypothesis is that the presence in a network
of a sufficiently high number of OR-like neurons could lead to classification
"brittleness" and increase the network's susceptibility to adversarial attacks.
After constructing an operational definition of a neuron's AND-like behaviour, we
proceed to introduce several measures to increase the proportion of AND-like
neurons in the network: L1 norm weight normalisation; application of an input
filter; comparison between the neuron output's distribution obtained when the
network is fed with the actual data set and the distribution obtained when the
network is fed with a randomised version of the former called "scrambled data
set". Tests performed on the MNIST data set hint that the proposed measures
could represent an interesting direction to explore. | [
"cs.LG",
"cs.AI",
"cs.CR"
] |
Although the inherently ambiguous task of predicting what resides beyond all
four edges of an image has rarely been explored before, we demonstrate that
GANs hold powerful potential in producing reasonable extrapolations. Two
outpainting methods are proposed that aim to instigate this line of research:
the first approach uses a context encoder inspired by common inpainting
architectures and paradigms, while the second approach adds an extra
post-processing step using a single-image generative model. This way, the
hallucinated details are integrated with the style of the original image, in an
attempt to further boost the quality of the result and possibly allow for
arbitrary output resolutions to be supported. | [
"cs.CV"
] |
Patients with diabetes who are self-monitoring have to decide right before
each meal how much insulin they should take. A standard bolus advisor exists,
but has never actually been proven to be optimal in any sense. We challenged
this rule applying Reinforcement Learning techniques on data simulated with
T1DM, an FDA-approved simulator developed by Kovatchev et al. modeling the
gluco-insulin interaction. Results show that the optimal bolus rule is fairly
different from the standard bolus advisor, and if followed can actually avoid
hypoglycemia episodes. | [
"stat.ML",
"cs.LG"
] |
We present a simple yet powerful neural network that implicitly represents
and renders 3D objects and scenes only from 2D observations. The network models
3D geometries as a general radiance field, which takes a set of 2D images with
camera poses and intrinsics as input, constructs an internal representation for
each point of the 3D space, and then renders the corresponding appearance and
geometry of that point viewed from an arbitrary position. The key to our
approach is to learn local features for each pixel in 2D images and to then
project these features to 3D points, thus yielding general and rich point
representations. We additionally integrate an attention mechanism to aggregate
pixel features from multiple 2D views, such that visual occlusions are
implicitly taken into account. Extensive experiments demonstrate that our
method can generate high-quality and realistic novel views for novel objects,
unseen categories and challenging real-world scenes. | [
"cs.CV",
"cs.AI",
"cs.GR",
"cs.LG",
"cs.RO"
] |
In real-world maintenance applications, deep generative models have shown
promising performance in detecting anomalous events of entities from
time-series signals collected from multiple sensors. Nevertheless, we outline
two important challenges of leveraging such models for time-series anomaly
detection: 1) developing effective and efficient reconstruction models and 2)
exploiting the similarity and interrelation structures among the multivariate
time series data channels. To address these challenges, in this paper we
propose a stacking variational auto-encoder (VAE) model with graph neural
networks for effective and interpretable time-series anomaly detection.
Specifically, we propose a stacking block-wise reconstruction framework with a
weight-sharing scheme for the multivariate time series data with similarities
among channels. Moreover, with a graph learning module, our model learns a
sparse adjacency matrix to explicitly capture the stable interrelation
structure information among multiple time series data channels for
interpretable reconstruction of series patterns. Experimental results show that
our proposed model outperforms the strong baselines on three public datasets
with considerable improvements and meanwhile still maintains the training
efficiency. Furthermore, we demonstrate that the intuitive stable structure
learned by our model significantly improves the interpretability of our
detection results. | [
"cs.LG"
] |
In unsupervised data generation tasks, besides the generation of a sample
based on previous observations, one would often like to give hints to the model
in order to bias the generation towards desirable metrics. We propose a method
that combines Generative Adversarial Networks (GANs) and reinforcement learning
(RL) in order to accomplish exactly that. While RL biases the data generation
process towards arbitrary metrics, the GAN component of the reward function
ensures that the model still remembers information learned from data. We build
upon previous results that incorporated GANs and RL in order to generate
sequence data and test this model in several settings for the generation of
molecules encoded as text sequences (SMILES) and in the context of music
generation, showing for each case that we can effectively bias the generation
process towards desired metrics. | [
"stat.ML",
"cs.LG"
] |
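A hedged sketch of the blended reward: the RL term pushes generation toward an arbitrary metric while the discriminator term keeps samples close to the data distribution. `task_metric` and `D` below are toy stand-ins for, e.g., a molecular property scorer and a trained discriminator:

```python
# Blended GAN+RL reward sketch: lam trades off an arbitrary task metric
# against staying close to the data distribution (via the discriminator).
import torch

def blended_reward(samples, task_metric, D, lam=0.5):
    r_task = task_metric(samples)             # arbitrary desired metric
    r_gan = torch.log(D(samples) + 1e-8)      # "remember the data"
    return lam * r_task + (1.0 - lam) * r_gan

# Toy stand-ins (hypothetical): a property scorer and a discriminator.
task_metric = lambda x: -((x - 1.0) ** 2).mean(dim=-1)
D = lambda x: torch.sigmoid(x.mean(dim=-1))
x = torch.randn(8, 10)                        # batch of generated sequences
r = blended_reward(x, task_metric, D)
# A REINFORCE-style generator update would then use:
#   loss = -(r * logp).mean()
print(r.shape)                                # torch.Size([8])
```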
In this paper we explore methods to exploit symmetries for ensuring sample
efficiency in reinforcement learning (RL); this problem deserves
ever-increasing attention given the recent advances in the use of deep networks
for complex RL tasks, which require large amounts of training data. We introduce a
novel method to detect symmetries using reward trails observed during episodic
experience and prove its completeness. We also provide a framework to
incorporate the discovered symmetries for function approximation. Finally we
show that the use of potential based reward shaping is especially effective for
our symmetry exploitation mechanism. Experiments on various classical problems
show that our method improves the learning performance significantly by
utilizing symmetry information. | [
"stat.ML",
"cs.AI",
"cs.LG"
] |
A common problem in Machine Learning and statistics consists in detecting
whether the current sample in a stream of data belongs to the same distribution
as previous ones, is an isolated outlier or inaugurates a new distribution of
data. We present a hierarchical Bayesian algorithm that aims at learning a
time-specific approximate posterior distribution of the parameters describing
the distribution of the data observed. We derive the update equations of the
variational parameters of the approximate posterior at each time step for
models from the exponential family, and show that these updates find
interesting correspondents in Reinforcement Learning (RL). In this perspective,
our model can be seen as a hierarchical RL algorithm that learns a posterior
distribution according to a certain stability confidence that is, in turn,
learned according to its own stability confidence. Finally, we show some
applications of our generic model, first in an RL context, next with an adaptive
Bayesian Autoregressive model, and finally in the context of Stochastic
Gradient Descent optimization. | [
"stat.ML",
"cs.LG"
] |
Feature-based time series representations have attracted substantial
attention in a wide range of time series analysis methods. Recently, the use of
time series features for forecast model averaging has been an emerging research
focus in the forecasting community. Nonetheless, most of the existing
approaches depend on the manual choice of an appropriate set of features.
Exploiting machine learning methods to extract features from time series
automatically becomes crucial in state-of-the-art time series analysis. In this
paper, we introduce an automated approach to extract time series features based
on time series imaging. We first transform time series into recurrence plots,
from which local features can be extracted using computer vision algorithms.
The extracted features are used for forecast model averaging. Our experiments
show that forecasting based on automatically extracted features, with less
human intervention and a more comprehensive view of the raw time series data,
yields performance highly comparable to the best methods in the largest
forecasting competition dataset (M4) and outperforms the top methods in the
Tourism forecasting competition dataset. | [
"stat.ML",
"cs.CV",
"cs.LG",
"stat.CO"
] |
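A minimal sketch of the imaging step: a univariate series is delay-embedded and turned into a recurrence plot R[i, j] = 1 if ||x_i - x_j|| < eps, which can then be fed to standard computer-vision feature extractors. The embedding dimension and threshold below are illustrative choices:

```python
# Recurrence plot construction: delay-embed the series, then threshold
# pairwise distances into a binary image.
import numpy as np

def recurrence_plot(x, dim=2, tau=1, eps=0.1):
    # Delay embedding: x_i = (x[i], x[i+tau], ..., x[i+(dim-1)*tau]).
    n = len(x) - (dim - 1) * tau
    emb = np.stack([x[i : i + n] for i in range(0, dim * tau, tau)], axis=1)
    dists = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    return (dists < eps).astype(float)          # (n, n) binary image

x = np.sin(np.linspace(0, 8 * np.pi, 200))
rp = recurrence_plot(x, eps=0.25)
print(rp.shape)                                 # (199, 199)
```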
Counterfactual regret minimization (CFR) is a popular method to deal with
decision-making problems of two-player zero-sum games with imperfect
information. Unlike existing studies that mostly focus on solving larger-scale
problems or accelerating solution efficiency, we propose a framework,
RLCFR, which aims at improving the generalization ability of the CFR method. In
RLCFR, the game strategy is solved by CFR in a reinforcement learning
framework, and the dynamic procedure of iterative interactive strategy updating
is modeled as a Markov decision process (MDP). Our method then learns a
policy to select the appropriate way of regret updating in the process of
iteration. In addition, a stepwise reward function is formulated to learn the
action policy, which is proportional to how well the iteration strategy is at
each step. Extensive experimental results on various games have shown that the
generalization ability of our method is significantly improved compared with
existing state-of-the-art methods. | [
"cs.LG",
"cs.GT",
"stat.ML"
] |
We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | [
"cs.LG"
] |
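The clipped surrogate objective at the core of PPO, as a short loss sketch (r_t is the probability ratio between the new and old policies, A_t the advantage estimate):

```python
# PPO clipped surrogate: maximize E[min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t)],
# where r_t = pi_theta(a|s) / pi_theta_old(a|s); multiple minibatch epochs
# reuse the same sampled data.
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    ratio = torch.exp(logp_new - logp_old)          # r_t
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()    # negate: we minimize

logp_old = torch.randn(64)
logp_new = logp_old + 0.05 * torch.randn(64)
adv = torch.randn(64)
print(ppo_clip_loss(logp_new, logp_old.detach(), adv))
```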
Asynchronous and parallel implementation of standard reinforcement learning
(RL) algorithms is a key enabler of the tremendous success of modern RL. Among
many asynchronous RL algorithms, arguably the most popular and effective one is
the asynchronous advantage actor-critic (A3C) algorithm. Although A3C is
becoming the workhorse of RL, its theoretical properties are still not
well-understood, including the non-asymptotic analysis and the performance gain
of parallelism (a.k.a. speedup). This paper revisits the A3C algorithm with
TD(0) for the critic update, termed A3C-TD(0), with provable convergence
guarantees. With linear value function approximation for the TD update, the
convergence of A3C-TD(0) is established under both i.i.d. and Markovian
sampling. Under i.i.d. sampling, A3C-TD(0) obtains sample complexity of
$\mathcal{O}(\epsilon^{-2.5}/N)$ per worker to achieve $\epsilon$ accuracy,
where $N$ is the number of workers. Compared to the best-known sample
complexity of $\mathcal{O}(\epsilon^{-2.5})$ for two-timescale AC, A3C-TD(0)
achieves \emph{linear speedup}, which justifies the advantage of parallelism
and asynchrony in AC algorithms theoretically for the first time. Numerical
tests on synthetically generated instances and OpenAI Gym environments have
been provided to verify our theoretical analysis. | [
"cs.LG",
"math.OC"
] |
In this paper, we propose a new capsule network architecture called Attention
Routing CapsuleNet (AR CapsNet). We replace the dynamic routing and squash
activation function of the capsule network (CapsuleNet) with attention routing
and capsule activation. The attention routing is a routing between capsules
through an attention module; it is a fast forward pass that keeps spatial
information. On the other hand, the intuitive interpretation of dynamic routing
is finding a centroid of the prediction capsules. Thus, the squash activation
function and its variants focus on preserving a vector orientation, whereas the
capsule activation focuses on performing a capsule-scale activation function.
We evaluate our proposed model on the MNIST, affNIST, and CIFAR-10
classification tasks. The proposed model achieves higher accuracy with fewer
parameters (0.65x on MNIST, 0.82x on CIFAR-10) and less training time than
CapsuleNet (0.19x on MNIST, 0.35x on CIFAR-10). These results validate that
designing a capsule-scale operation is a key factor in implementing the
capsule concept.
Our experiments also show that the proposed model is transformation-equivariant,
as CapsuleNet is. As we perturb each element of the output capsule,
the decoder attached to the output capsules shows global variations. Further
experiments show that the difference in the capsule features caused by applying
affine transformations on an input image is significantly aligned in one
direction. | [
"cs.CV"
] |
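For reference, the squash nonlinearity that AR CapsNet replaces is the standard vector-scale activation from the original CapsuleNet; a sketch:

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """Standard capsule squash: preserves vector orientation while
    mapping norms into [0, 1). AR CapsNet replaces this vector-scale
    nonlinearity with a capsule-scale activation."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)
```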
The classification decisions of neural networks can be misled by small
imperceptible perturbations. This work aims to explain the misled
classifications using saliency methods. The idea behind saliency methods is to
explain the classification decisions of neural networks by creating so-called
saliency maps. Unfortunately, a number of recent publications have shown that
many of the proposed saliency methods do not provide insightful explanations. A
prominent example is Guided Backpropagation (GuidedBP), which simply performs
(partial) image recovery. However, our numerical analysis shows that the
saliency maps created by GuidedBP do indeed contain class-discriminative
information. We propose a simple and efficient way to enhance the saliency
maps. The proposed enhanced GuidedBP achieves state-of-the-art performance in
explaining adversarial classifications. | [
"cs.CV"
] |
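Guided Backpropagation itself is a one-line modification of the backward pass: negative gradients are zeroed at every ReLU. A hedged PyTorch sketch (the paper's proposed enhancement is a separate step not shown here):

```python
import torch
import torch.nn as nn

def make_guided(model):
    """Turn a trained model into its GuidedBP variant by clamping
    negative gradients at every ReLU during the backward pass."""
    def hook(module, grad_input, grad_output):
        # grad_input is already zero where the forward input was negative;
        # additionally zeroing negative gradients gives guided backprop.
        return (torch.clamp(grad_input[0], min=0.0),)
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            m.register_full_backward_hook(hook)
    return model
```

The saliency map is then the gradient of the target class score with respect to the input image of the hooked model.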
Deep learning architectures have an extremely high-capacity for modeling
complex data in a wide variety of domains. However, these architectures have
been limited in their ability to support complex prediction problems using
insurance claims data, such as readmission at 30 days, mainly due to the data
sparsity issue. Consequently, classical machine learning methods, especially
those that embed domain knowledge in handcrafted features, are often on par
with, and sometimes outperform, deep learning approaches. In this paper, we
illustrate how the potential of deep learning can be achieved by blending
domain knowledge within deep learning architectures to predict adverse events
at hospital discharge, including readmissions. More specifically, we introduce
a learning architecture that fuses a representation of patient data computed by
a self-attention based recurrent neural network, with clinically relevant
features. We conduct extensive experiments on a large claims dataset and show
that the blended method outperforms the standard machine learning approaches. | [
"cs.LG"
] |
In recent years, there has been a surge of interest in developing deep
learning methods for non-Euclidean structured data such as graphs. In this
paper, we propose Dual-Primal Graph CNN, a graph convolutional architecture
that alternates convolution-like operations on the graph and its dual. Our
approach allows learning both vertex and edge features and generalizes the
previous graph attention (GAT) model. We provide extensive experimental
validation showing state-of-the-art results on a variety of tasks tested on
established graph benchmarks, including CORA and Citeseer citation networks as
well as MovieLens, Flixster, Douban and Yahoo Music graph-guided recommender
systems. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Environment perception is crucial for autonomous vehicle (AV) safety. Most
existing AV perception algorithms have not studied the complexity of the
surrounding environment and fail to include an environment complexity
parameter. This
paper proposes a novel attention-based neural network model to predict the
complexity level of the surrounding driving environment. The proposed model
takes naturalistic driving videos and corresponding vehicle dynamics parameters
as input. It consists of a Yolo-v3 object detection algorithm, a heat map
generation algorithm, CNN-based feature extractors, and attention-based feature
extractors for both video and time-series vehicle dynamics data inputs to
extract features. The output from the proposed algorithm is a surrounding
environment complexity parameter. The Berkeley DeepDrive dataset (BDD Dataset)
and subjectively labeled surrounding environment complexity levels are used for
model training and validation to evaluate the algorithm. The proposed
attention-based network achieves 91.22% average classification accuracy to
classify the surrounding environment complexity. This demonstrates that the
environment complexity level can be accurately predicted and applied in future
AVs' environment perception studies. | [
"cs.LG",
"cs.CV",
"cs.RO",
"eess.IV"
] |
Semantic segmentation using convolutional neural networks (CNN) is a crucial
component in image analysis. Training a CNN to perform semantic segmentation
requires a large amount of labeled data, where the production of such labeled
data is both costly and labor intensive. Semi-supervised learning algorithms
address this issue by utilizing unlabeled data and so reduce the amount of
labeled data needed for training. In particular, data augmentation techniques
such as CutMix and ClassMix generate additional training data from existing
labeled data. In this paper, we propose a new approach for data augmentation,
termed ComplexMix, which incorporates aspects of CutMix and ClassMix with
improved performance. The proposed approach has the ability to control the
complexity of the augmented data while attempting to be semantically-correct
and address the tradeoff between complexity and correctness. The proposed
ComplexMix approach is evaluated on standard datasets for semantic
segmentation and compared to other state-of-the-art techniques. Experimental
results show that our method improves on the state of the art for semantic
image segmentation. | [
"cs.CV"
] |
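To make the mixing idea concrete, below is a sketch of the ClassMix-style operation that ComplexMix builds on: the pixels of a random half of the classes predicted in one image are pasted onto another. The complexity control that distinguishes ComplexMix is not reproduced here.

```python
import torch

def classmix(img_a, img_b, pred_a):
    """ClassMix-style augmentation: paste pixels of a random half of the
    classes predicted in image A onto image B.

    img_a, img_b: (C, H, W) tensors; pred_a: (H, W) hard predictions for A.
    Returns the mixed image and the binary paste mask.
    """
    classes = pred_a.unique()
    chosen = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]
    mask = torch.isin(pred_a, chosen).float()                    # (H, W)
    mixed = mask.unsqueeze(0) * img_a + (1 - mask.unsqueeze(0)) * img_b
    return mixed, mask
```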
Many stochastic processes are defined on special geometrical objects like
spheres and cones. We describe how tools from harmonic analysis, i.e. Fourier
analysis on groups, can be used to investigate probability density functions
(pdfs) on groups and homogeneous spaces. We consider the special case of the
Lorentz group SU(1,1) and the unit disk with its hyperbolic geometry, but the
procedure can be generalized to a much wider class of Lie-groups. We mainly
concentrate on the Mehler-Fock transform which is the radial part of the
Fourier transform on the disk. Some of the characteristic features of this
transform are the relation to group-convolutions, the isometry between signal
and transform space, the relation to the Laplace-Beltrami operator and the
relation to group representation theory. We will give an overview of these
properties and their applications in signal processing. We will illustrate the
theory with two examples from low-level vision and color image processing. | [
"cs.CV",
"43A32",
"I.5.4"
] |
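For reference, a commonly used form of the Mehler-Fock transform pair for a radial function on the disk, written with the conical Legendre functions; normalization conventions vary between sources and may differ from the paper's:

```latex
% Mehler-Fock transform of a radial function f (substituting x = cosh t),
% with conical Legendre functions P_{-1/2 + i tau}.
\hat{f}(\tau) = \int_{0}^{\infty} f(\cosh t)\,
    P_{-\frac{1}{2}+i\tau}(\cosh t)\, \sinh t \, dt,
\qquad
f(\cosh t) = \int_{0}^{\infty} \hat{f}(\tau)\,
    P_{-\frac{1}{2}+i\tau}(\cosh t)\, \tau \tanh(\pi\tau)\, d\tau .
```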
Motion synthesis in a dynamic environment has been a long-standing problem
for character animation. Methods using motion capture data tend to scale poorly
in complex environments because of their large capture and labeling
requirements. Physics-based controllers are effective in this regard, albeit
less controllable. In this paper, we present CARL, a quadruped agent that can
be controlled with high-level directives and react naturally to dynamic
environments. Starting with an agent that can imitate individual animation
clips, we use Generative Adversarial Networks to adapt high-level controls,
such as speed and heading, to action distributions that correspond to the
original animations. Further fine-tuning through deep reinforcement learning
enables the agent to recover from unseen external perturbations while
producing smooth transitions. It then becomes straightforward to create
autonomous agents in dynamic environments by adding navigation modules over the
entire process. We evaluate our approach by measuring the agent's ability to
follow user control and provide a visual analysis of the generated motion to
show its effectiveness. | [
"cs.LG",
"cs.GR",
"stat.ML"
] |
Active learning for object detection is conventionally achieved by applying
techniques developed for classification in a way that aggregates individual
detections into image-level selection criteria. This is typically coupled with
the costly assumption that every image selected for labelling must be
exhaustively annotated. This yields incremental improvements on well-curated
vision datasets but struggles in the presence of the data imbalance and visual
clutter that occur in real-world imagery. Alternatives to the image-level
approach are surprisingly under-explored in the literature. In this work, we
introduce a new strategy that subsumes previous Image-level and Object-level
approaches into a generalized, Region-level approach that promotes
spatial-diversity by avoiding nearby redundant queries from the same image and
minimizes context-switching for the labeler. We show that this approach
significantly decreases labeling effort and improves rare object search on
realistic data with inherent class-imbalance and cluttered scenes. | [
"cs.CV"
] |
Time series analysis plays a vital role in various applications, such as
healthcare, weather prediction, and disaster forecasting. However, obtaining
sufficient shapelets from a feature network is still challenging. To this
end, we propose a novel robust temporal feature network (RTFN) that contains
temporal feature networks and attentional LSTM networks. The temporal feature
networks are built to extract basic features from input data while the
attentional LSTM networks are devised to capture complicated shapelets and
relationships to enrich features. In experiments, we embed RTFN into a
supervised structure as a feature extraction network and into unsupervised
clustering as an encoder, respectively. The results show that the RTFN-based
supervised structure wins on 40 out of 85 datasets and the RTFN-based
unsupervised clustering performs best on 4 out of 11 datasets in the UCR2018
archive. | [
"cs.LG",
"stat.ML"
] |
Autonomous Driving and Simultaneous Localization and Mapping (SLAM) are
becoming increasingly important in the real world, and point cloud-based
large-scale place recognition is a key component of both. Previous place
recognition methods have achieved acceptable performance by treating the task
as a point cloud retrieval problem. However, all of them suffer from a common
defect: they cannot handle rotated point clouds, which commonly occur, e.g.,
when viewpoints or motorcycle types are changed. To
tackle this issue, we propose an Attentive Rotation Invariant Convolution
(ARIConv) in this paper. ARIConv adopts three kinds of Rotation Invariant
Features (RIFs): Spherical Signals (SS), Individual-Local Rotation Invariant
Features (ILRIF) and Group-Local Rotation Invariant Features (GLRIF) in its
structure to learn rotation-invariant convolutional kernels, which are robust
for learning rotation-invariant point cloud features. Moreover, to highlight
pivotal RIFs, we inject an attentive module into ARIConv that assigns
different importance to different RIFs when learning kernels. Finally,
utilizing ARIConv, we build a DenseNet-like network architecture to learn
rotation-insensitive global descriptors for retrieval. We experimentally
demonstrate that our model achieves state-of-the-art performance on the
large-scale place recognition task when the point cloud scans are rotated, and
achieves comparable results with most existing methods on the original
non-rotated datasets. | [
"cs.CV"
] |
Text-based games -- in which an agent interacts with the world through
textual natural language -- present us with the problem of
combinatorially-sized action-spaces. Most current reinforcement learning
algorithms are not capable of effectively handling such a large number of
possible actions per turn. Poor sample efficiency, consequently, results in
agents that cannot pass bottleneck states, where they fail to proceed because
they do not encounter the right action sequence enough times for it to be
sufficiently reinforced. Building on prior work using knowledge graphs in
reinforcement learning, we introduce two new game state exploration
strategies. We compare our exploration strategies against strong baselines on
the classic text-adventure game Zork1, where prior agents have been unable to
get past a bottleneck in which the agent is eaten by a Grue. | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
Pose-guided person image synthesis aims to synthesize person images by
transforming reference images into target poses. In this paper, we observe that
the commonly used spatial transformation blocks have complementary advantages.
We propose a novel model by combining the attention operation with the
flow-based operation. Our model not only takes advantage of the attention
operation to generate accurate target structures but also uses the flow-based
operation to sample realistic source textures. Both objective and subjective
experiments demonstrate the superiority of our model. Meanwhile, comprehensive
ablation studies verify our hypotheses and show the efficacy of the proposed
modules. Besides, additional experiments on the portrait image editing task
demonstrate the versatility of the proposed combination. | [
"cs.CV"
] |
RGB-Infrared (IR) person re-identification aims to retrieve
persons-of-interest from heterogeneous cameras and easily suffers from a large
image modality discrepancy caused by different sensing wavelength ranges.
Existing work usually minimizes such discrepancy by aligning domain
distribution of global features, while neglecting the intra-modality structural
relations between semantic parts. This could result in the network overly
focusing on local cues, without considering long-range body part dependencies,
leading to meaningless region representations. In this paper, we propose a
graph-enabled distribution matching solution, dubbed Geometry-Guided
Dual-Alignment (G2DA) learning, for RGB-IR ReID. It can jointly encourage the
cross-modal consistency between part semantics and structural relations for
fine-grained modality alignment by solving a graph matching task within a
multi-scale skeleton graph that embeds human topology information.
Specifically, we propose to build a semantic-aligned complete graph into which
all cross-modality images can be mapped via a pose-adaptive graph construction
mechanism. This graph represents extracted whole-part features by nodes and
expresses the node-wise similarities with associated edges. To achieve the
graph-based dual-alignment learning, an Optimal Transport (OT) based structured
metric is further introduced to simultaneously measure point-wise relations and
group-wise structural similarities across modalities. By minimizing the cost of
an inter-modality transport plan, G2DA can learn a consistent and
discriminative feature subspace for cross-modality image retrieval.
Furthermore, we advance a Message Fusion Attention (MFA) mechanism to
adaptively reweight the information flow of semantic propagation, effectively
strengthening the discriminability of extracted semantic features. | [
"cs.CV",
"68T07 (Primary)",
"I.4.9"
] |
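OT-based structured metrics of this kind are typically computed with entropic regularization. The sketch below shows generic Sinkhorn iterations over a cross-modality cost matrix, not G2DA's exact structured formulation:

```python
import torch

def sinkhorn(cost, eps=0.1, iters=50):
    """Entropic-regularized optimal transport via Sinkhorn iterations.
    cost: (n, m) pairwise cost between RGB-part and IR-part features.
    Returns a transport plan whose inner product with `cost` gives the
    OT distance used for cross-modality alignment."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)   # uniform source marginal
    nu = torch.full((m,), 1.0 / m)   # uniform target marginal
    K = torch.exp(-cost / eps)       # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return torch.diag(u) @ K @ torch.diag(v)
```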
Deep convolutional neural networks trained for image object categorization
have shown remarkable similarities with representations found across the
primate ventral visual stream. Yet, artificial and biological networks still
exhibit important differences. Here we investigate one such property:
increasing invariance to identity-preserving image transformations found along
the ventral stream. Despite theoretical evidence that invariance should emerge
naturally from the optimization process, we present empirical evidence that the
activations of convolutional neural networks trained for object categorization
are not robust to identity-preserving image transformations commonly used in
data augmentation. As a solution, we propose data augmentation invariance, an
unsupervised learning objective which improves the robustness of the learned
representations by promoting the similarity between the activations of
augmented image samples. Our results show that this approach is a simple yet
effective and efficient (only a 10% increase in training time) way of increasing the
invariance of the models while obtaining similar categorization performance. | [
"cs.CV",
"cs.LG"
] |
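A minimal sketch of a data augmentation invariance objective of the kind described above: penalize dissimilarity between the activations of two augmented views of the same images. The layer choice, augmentation pipeline, and weighting are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def invariance_loss(model, images, augment, layer_fn):
    """Unsupervised objective promoting similar activations for two
    augmented views of the same image (a sketch; the paper may apply
    it at several layers with additional weighting)."""
    a1 = layer_fn(model, augment(images))   # activations for view 1
    a2 = layer_fn(model, augment(images))   # activations for view 2
    a1 = F.normalize(a1.flatten(1), dim=1)
    a2 = F.normalize(a2.flatten(1), dim=1)
    return (1 - (a1 * a2).sum(dim=1)).mean()   # 1 - cosine similarity
```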
Since the advent of online real estate database companies like Zillow, Trulia
and Redfin, the problem of automatic estimation of market values for houses has
received considerable attention. Several real estate websites provide such
estimates using a proprietary formula. Although these estimates are often close
to the actual sale prices, in some cases they are highly inaccurate. One of the
key factors that affects the value of a house is its interior and exterior
appearance, which is not considered in calculating automatic value estimates.
In this paper, we evaluate the impact of visual characteristics of a house on
its market value. Using deep convolutional neural networks on a large dataset
of photos of home interiors and exteriors, we develop a method for estimating
the luxury level of real estate photos. We also develop a novel framework for
automated value assessment using the above photos in addition to home
characteristics including size, offered price and number of bedrooms. Finally,
by applying our proposed method for price estimation to a new dataset of real
estate photos and metadata, we show that it outperforms Zillow's estimates. | [
"cs.CV",
"cs.LG"
] |
A recently introduced text classifier, called SS3, has obtained
state-of-the-art performance on the CLEF's eRisk tasks. SS3 was created to deal
with risk detection over text streams and, therefore, not only supports
incremental training and classification but also can visually explain its
rationale. However, little attention has been paid to the potential use of SS3
as a general classifier. We believe this could be due to the unavailability of
an open-source implementation of SS3. In this work, we introduce PySS3, a
package that implements SS3 and also comes with visualization tools that allow
researchers to deploy robust, explainable, and trustworthy machine learning models
for text classification. | [
"cs.LG",
"cs.AI",
"cs.IR",
"cs.SE",
"stat.ML"
] |
Model parallelism has become a necessity for training modern large-scale deep
language models. In this work, we identify a new and orthogonal dimension from
existing model parallel approaches: it is possible to perform pipeline
parallelism within a single training sequence for Transformer-based language
models thanks to its autoregressive property. This enables a more fine-grained
pipeline compared with previous work. With this key idea, we design TeraPipe, a
high-performance token-level pipeline parallel algorithm for synchronous
model-parallel training of Transformer-based language models. We develop a
novel dynamic programming-based algorithm to calculate the optimal pipelining
execution scheme given a specific model and cluster configuration. We show that
TeraPipe can speed up the training by 5.0x for the largest GPT-3 model with 175
billion parameters on an AWS cluster with 48 p3.16xlarge instances compared
with state-of-the-art model-parallel methods. | [
"cs.LG",
"cs.CL",
"cs.DC"
] |
Convolutional neural networks have witnessed remarkable improvements in
computational efficiency in recent years. A key driving force has been the idea
of trading off model expressivity and efficiency through a combination of
$1\times 1$ and depth-wise separable convolutions in lieu of a standard
convolutional layer. The price of the efficiency, however, is the sub-optimal
flow of information across space and channels in the network. To overcome this
limitation, we present MUXConv, a layer that is designed to increase the flow
of information by progressively multiplexing channel and spatial information in
the network, while mitigating computational complexity. Furthermore, to
demonstrate the effectiveness of MUXConv, we integrate it within an efficient
multi-objective evolutionary algorithm to search for the optimal model
hyper-parameters while simultaneously optimizing accuracy, compactness, and
computational efficiency. On ImageNet, the resulting models, dubbed MUXNets,
match the performance (75.3% top-1 accuracy) and multiply-add operations (218M)
of MobileNetV3 while being 1.6$\times$ more compact, and outperform other
mobile models in all the three criteria. MUXNet also performs well under
transfer learning and when adapted to object detection. On the ChestX-Ray 14
benchmark, its accuracy is comparable to the state-of-the-art while being
$3.3\times$ more compact and $14\times$ more efficient. Similarly, detection on
PASCAL VOC 2007 is 1.2% more accurate, 28% faster and 6% more compact compared
to MobileNetV2. Code is available from
https://github.com/human-analysis/MUXConv | [
"cs.CV",
"cs.LG",
"cs.NE",
"eess.IV"
] |
Age progression and regression refer to aesthetically rendering a given
face image to present effects of face aging and rejuvenation, respectively.
Although numerous studies have been conducted on this topic, there are two
major problems: 1) multiple models are usually trained to simulate different
age mappings, and 2) the photo-realism of generated face images is heavily
influenced by the variation of training images in terms of pose, illumination,
and background. To address these issues, in this paper, we propose a framework
based on conditional Generative Adversarial Networks (cGANs) to achieve age
progression and regression simultaneously. Particularly, since face aging and
rejuvenation are largely different in terms of image translation patterns, we
model these two processes using two separate generators, each dedicated to one
age changing process. In addition, we exploit spatial attention mechanisms to
limit image modifications to regions closely related to age changes, so that
images with high visual fidelity could be synthesized for in-the-wild cases.
Experiments on multiple datasets demonstrate the ability of our model in
synthesizing lifelike face images at desired ages with personalized features
well preserved, and keeping age-irrelevant regions unchanged. | [
"cs.CV"
] |
In certain situations, Neural Networks (NN) are trained upon data that obey
underlying physical symmetries. However, it is not guaranteed that NNs will
obey the underlying symmetry unless embedded in the network structure. In this
work, we explore a special kind of symmetry where functions are invariant with
respect to involutory linear/affine transformations up to parity $p=\pm 1$. We
develop mathematical theorems and propose NN architectures that ensure
invariance and universal approximation properties. Numerical experiments
indicate that the proposed models outperform baseline networks while respecting
the imposed symmetry. An adaptation of our technique to convolutional NN
classification tasks for datasets with inherent horizontal/vertical reflection
symmetry has also been proposed. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
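One simple construction in this spirit (a sketch under our own assumptions, not necessarily the architecture proposed in the paper): symmetrizing a base network over an involutory linear map A with parity p yields exact invariance, since f(Ax) = p f(x) whenever A A = I and p^2 = 1.

```python
import torch
import torch.nn as nn

class InvolutorySymmetrize(nn.Module):
    """Wrap a base network g so that f(x) = (g(x) + p * g(A x)) / 2
    satisfies f(A x) = p * f(x) exactly when A is involutory (A A = I)."""
    def __init__(self, base, A, parity=1):
        super().__init__()
        self.base, self.parity = base, parity
        self.register_buffer("A", A)

    def forward(self, x):
        # x @ A.t() applies A to each row vector in the batch.
        return 0.5 * (self.base(x) + self.parity * self.base(x @ self.A.t()))
```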
Digital watermarking is a commonly used technique to protect the copyright of
media. At the same time, to increase the robustness of watermarks, attacking
techniques such as watermark removal have also drawn attention from the
community. Previous watermark removal methods either require the watermark
location from users or train a multi-task network to recover the background
indiscriminately. However, when trained jointly, the network performs better
at watermark detection than at recovering the texture. Inspired by this
observation, and to erase visible watermarks blindly, we propose a novel
two-stage framework with stacked attention-guided ResUNets to simulate the
process of detection, removal and refinement. In the first stage, we design a
multi-task network called SplitNet. It learns the basis features for the three
sub-tasks jointly, while the task-specific features are learned separately
using multiple channel attentions. Then, with the predicted mask and the
coarsely restored image, we design RefineNet to smooth the watermarked region
with a mask-guided spatial
attention. Besides network structure, the proposed algorithm also combines
multiple perceptual losses for better quality both visually and numerically. We
extensively evaluate our algorithm over four different datasets under various
settings and the experiments show that our approach outperforms other
state-of-the-art methods by a large margin. The code is available at
http://github.com/vinthony/deep-blind-watermark-removal. | [
"cs.CV",
"eess.IV"
] |
Development of metrics for structural data-generating mechanisms is
fundamental in machine learning and the related fields. In this paper, we give
a general framework to construct metrics on random nonlinear dynamical systems,
defined with the Perron-Frobenius operators in vector-valued reproducing kernel
Hilbert spaces (vvRKHSs). We employ vvRKHSs to design mathematically manageable
metrics and also to introduce operator-valued kernels, which enables us to
handle randomness in systems. Our metric provides an extension of the existing
metrics for deterministic systems, and gives a specification of the kernel
maximum mean discrepancy of random processes. Moreover, by considering the
time-wise independence of random processes, we clarify a connection between our
metric and the independence criteria with kernels such as Hilbert-Schmidt
independence criteria. We empirically illustrate our metric with synthetic
data, and evaluate it in the context of the independence test for random
processes. We also evaluate its performance on real time-series data via
clustering tasks. | [
"stat.ML",
"cs.LG",
"math.DS",
"math.PR",
"62-07, 37H99"
] |
This paper presents a robust filter called quaternion Hardy filter (QHF) for
color image edge detection. The QHF is capable of enhancing color edge
features and resisting noise. By selecting suitable parameters, it can
flexibly handle different levels of noise. In particular, the
quaternion analytic signal, which is an effective tool in color image
processing, can also be produced by quaternion Hardy filtering with specific
parameters. Based on the QHF and the improved Di Zenzo gradient operator, a
novel color edge detection algorithm is proposed. Importantly, it can be
efficiently implemented by using the fast discrete quaternion Fourier transform
technique. From the experimental results, we conclude that the minimum PSNR
improvement rate is 2.3% and the minimum SSIM improvement rate is 30.2% on
Dataset 3. The experiments demonstrate that the proposed algorithm outperforms
several widely used algorithms. | [
"cs.CV",
"eess.IV"
] |
Benefiting from convenient cycling and flexible parking locations, the
Dockless Public Bicycle-sharing (DL-PBS) network has become increasingly
popular in many countries. However, redundant and low-utility stations waste
public urban space and increase the maintenance costs of DL-PBS vendors. In
this paper, we propose
a Bicycle Station Dynamic Planning (BSDP) system to dynamically provide the
optimal bicycle station layout for the DL-PBS network. The BSDP system contains
four modules: bicycle drop-off location clustering, bicycle-station graph
modeling, bicycle-station location prediction, and bicycle-station layout
recommendation. In the bicycle drop-off location clustering module, candidate
bicycle stations are clustered from each spatio-temporal subset of the
large-scale cycling trajectory records. In the bicycle-station graph modeling
module, a weighted digraph model is built based on the clustering results and
inferior stations with low station revenue and utility are filtered. Then,
graph models across time periods are combined to create a graph sequence model.
In the bicycle-station location prediction module, the GGNN model is used to
train the graph sequence data and dynamically predict bicycle stations in the
next period. In the bicycle-station layout recommendation module, the predicted
bicycle stations are fine-tuned according to the government urban management
plan, which ensures that the recommended station layout is conducive to city
management, vendor revenue, and user convenience. Experiments on actual DL-PBS
networks verify the effectiveness, accuracy and feasibility of the proposed
BSDP system. | [
"cs.LG",
"cs.AI"
] |
Much attention has been devoted recently to the generalization puzzle in deep
learning: large, deep networks can generalize well, but existing theories
bounding generalization error are exceedingly loose, and thus cannot explain
this striking performance. Furthermore, a major hope is that knowledge may
transfer across tasks, so that multi-task learning can improve generalization
on individual tasks. However we lack analytic theories that can quantitatively
predict how the degree of knowledge transfer depends on the relationship
between the tasks. We develop an analytic theory of the nonlinear dynamics of
generalization in deep linear networks, both within and across tasks. In
particular, our theory provides analytic solutions to the training and testing
error of deep networks as a function of training time, number of examples,
network size and initialization, and the task structure and SNR. Our theory
reveals that deep networks progressively learn the most important task
structure first, so that generalization error at the early stopping time
primarily depends on task structure and is independent of network size. This
suggests any tight bound on generalization error must take into account task
structure, and explains observations about real data being learned faster than
random data. Intriguingly, our theory also reveals the existence of a learning
algorithm that provably outperforms neural network training through gradient
descent. Finally, for transfer learning, our theory reveals that knowledge
transfer depends sensitively, but computably, on the SNRs and input feature
alignments of pairs of tasks. | [
"stat.ML",
"cs.LG",
"I.2.6; F.m"
] |
Integrating outside knowledge for reasoning in visio-linguistic tasks such as
visual question answering (VQA) is an open problem. Given that pretrained
language models have been shown to include world knowledge, we propose to use a
unimodal (text-only) train and inference procedure based on automatic
off-the-shelf captioning of images and pretrained language models. Our results
on a visual question answering task which requires external knowledge (OK-VQA)
show that our text-only model outperforms pretrained multimodal (image-text)
models of comparable number of parameters. In contrast, our model is less
effective in a standard VQA task (VQA 2.0), confirming that our text-only
method is especially effective for tasks requiring external knowledge. In
addition, we
show that our unimodal model is complementary to multimodal models in both
OK-VQA and VQA 2.0, and yields the best result to date in OK-VQA among systems
not using external knowledge graphs, and comparable to systems that do use
them. Our qualitative analysis on OK-VQA reveals that automatic captions often
fail to capture relevant information in the images, which seems to be balanced
by the better inference ability of the text-only language models. Our work
opens up possibilities to further improve inference in visio-linguistic tasks. | [
"cs.CV",
"cs.AI"
] |
To evaluate their performance, existing dehazing approaches generally rely on
distance measures between the generated image and its corresponding ground
truth. Despite its ability to produce visually good images, using pixel-based
or even perceptual metrics does not guarantee, in general, that the produced
image is fit to be used as input for low-level computer vision tasks such
as segmentation. To overcome this weakness, we propose a novel end-to-end
as segmentation. To overcome this weakness, we are proposing a novel end-to-end
approach for image dehazing, fit for being used as input to an image
segmentation procedure, while maintaining the visual quality of the generated
images. Inspired by the success of Generative Adversarial Networks (GAN), we
propose to optimize the generator by introducing a discriminator network and a
loss function that evaluates segmentation quality of dehazed images. In
addition, we make use of a supplementary loss function that verifies that the
visual and the perceptual quality of the generated image are preserved in hazy
conditions. Results obtained using the proposed technique are appealing, with a
favorable comparison to state-of-the-art approaches when considering the
performance of segmentation algorithms on the hazy images. | [
"cs.CV",
"eess.IV"
] |
Dual-energy (DE) chest radiographs provide greater diagnostic information
than standard radiographs by separating the image into bone and soft tissue,
revealing suspicious lesions which may otherwise be obstructed from view.
However, acquisition of DE images requires two physical scans, necessitating
specialized hardware and processing, and images are prone to motion artifact.
Generation of virtual DE images from standard, single-shot chest radiographs
would expand the diagnostic value of standard radiographs without changing the
acquisition procedure. We present a Multi-scale Conditional Adversarial Network
(MCA-Net) which produces high-resolution virtual DE bone images from standard,
single-shot chest radiographs. Our proposed MCA-Net is trained using the
adversarial network so that it learns sharp details for the production of
high-quality bone images. Then, the virtual DE soft tissue image is generated
by processing the standard radiograph with the virtual bone image using a cross
projection transformation. Experimental results from 210 patient DE chest
radiographs demonstrated that the algorithm can produce high-quality virtual DE
chest radiographs. Important structures were preserved, such as coronary
calcium in bone images and lung lesions in soft tissue images. The average
structure similarity index and the peak signal to noise ratio of the produced
bone images in testing data were 96.4 and 41.5, which are significantly better
than results from previous methods. Furthermore, our clinical evaluation on
the publicly available dataset indicates the clinical value of our algorithm.
Thus, our algorithm can produce high-quality DE
images that are potentially useful for radiologists, computer-aided
diagnostics, and other diagnostic tasks. | [
"cs.CV"
] |
Deep Learning (DL) has become a crucial technology for Artificial
Intelligence (AI). It is a powerful technique to automatically extract
high-level features from complex data which can be exploited for applications
such as computer vision, natural language processing, cybersecurity,
communications, and so on. For the particular case of computer vision, several
algorithms like object detection in real time videos have been proposed and
they work well on desktop GPUs and distributed computing platforms. However,
these algorithms are still too heavy for mobile and embedded visual applications.
The rapid spreading of smart portable devices and the emerging 5G network are
introducing new smart multimedia applications in mobile environments. As a
consequence, the possibility of implementing deep neural networks to mobile
environments has attracted a lot of researchers. This paper presents emerging
deep learning acceleration techniques that can enable the delivery of real time
visual recognition into the hands of end users, anytime and anywhere. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
Many recent medical segmentation systems rely on powerful deep learning
models to solve highly specific tasks. To maximize performance, it is standard
practice to evaluate numerous pipelines with varying model topologies,
optimization parameters, pre- & postprocessing steps, and even model cascades.
It is often not clear how the resulting pipeline transfers to different tasks.
We propose a simple and thoroughly evaluated deep learning framework for
segmentation of arbitrary medical image volumes. The system requires no
task-specific information, no human interaction and is based on a fixed model
topology and a fixed hyperparameter set, eliminating the process of model
selection and its inherent tendency to cause method-level over-fitting. The
system is available in open source and does not require deep learning expertise
to use. Without task-specific modifications, the system performed better than
or similar to highly specialized deep learning methods across 3 separate
segmentation tasks. In addition, it ranked 5th and 6th in the first and
second round of the 2018 Medical Segmentation Decathlon comprising another 10
tasks. The system relies on multi-planar data augmentation which facilitates
the application of a single 2D architecture based on the familiar U-Net.
Multi-planar training combines the parameter efficiency of a 2D fully
convolutional neural network with a systematic train- and test-time
augmentation scheme, which allows the 2D model to learn a representation of the
3D image volume that fosters generalization. | [
"cs.LG",
"eess.IV"
] |
Computer vision-based accident detection through video surveillance has
become a beneficial but daunting task. In this paper, a neoteric framework for
detection of road accidents is proposed. The proposed framework capitalizes on
Mask R-CNN for accurate object detection followed by an efficient centroid
based object tracking algorithm for surveillance footage. The probability of an
accident is determined based on speed and trajectory anomalies in a vehicle
after an overlap with other vehicles. The proposed framework provides a robust
method to achieve a high Detection Rate and a low False Alarm Rate on general
road-traffic CCTV surveillance footage. This framework was evaluated on diverse
conditions such as broad daylight, low visibility, rain, hail, and snow using
the proposed dataset. The framework was found to be effective and paves the
way for the development of general-purpose vehicular accident detection
algorithms in real time. | [
"cs.CV"
] |
Crowd estimation is a very challenging problem. The most recent study tries
to exploit auditory information to aid the visual models; however, the
performance is limited due to the lack of an effective approach for feature
extraction and integration. The paper proposes a new audiovisual multi-task
network to address the critical challenges in crowd counting by effectively
utilizing both visual and audio inputs for better modalities association and
productive feature extraction. The proposed network introduces the notion of
auxiliary and explicit image patch-importance ranking (PIR) and patch-wise
crowd estimate (PCE) information to produce a third (run-time) modality. These
modalities (audio, visual, run-time) undergo a transformer-inspired
cross-modality co-attention mechanism to finally output the crowd estimate. To
acquire rich visual features, we propose a multi-branch structure with
transformer-style fusion in-between. Extensive experimental evaluations show
that the proposed scheme outperforms the state-of-the-art networks under all
evaluation settings with up to 33.8% improvement. We also analyze and compare
the vision-only variant of our network and empirically demonstrate its
superiority over previous approaches. | [
"cs.CV"
] |
This paper studies efficient means for dealing with intra-category diversity
in object detection. Strategies for occlusion and orientation handling are
explored by learning an ensemble of detection models from visual and
geometrical clusters of object instances. An AdaBoost detection scheme is
employed with pixel lookup features for fast detection. The analysis provides
insight into the design of a robust vehicle detection system, showing promise
in terms of detection performance and orientation estimation accuracy. | [
"cs.CV"
] |
Policies for complex visual tasks have been successfully learned with deep
reinforcement learning, using an approach called deep Q-networks (DQN), but
relatively large (task-specific) networks and extensive training are needed to
achieve good performance. In this work, we present a novel method called policy
distillation that can be used to extract the policy of a reinforcement learning
agent and train a new network that performs at the expert level while being
dramatically smaller and more efficient. Furthermore, the same method can be
used to consolidate multiple task-specific policies into a single policy. We
demonstrate these claims using the Atari domain and show that the multi-task
distilled agent outperforms the single-task teachers as well as a
jointly-trained DQN agent. | [
"cs.LG"
] |
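The distillation objective reported to work best in this setting is a KL divergence between the teacher's temperature-softened Q-values and the student's output distribution; a sketch (the temperature value here is illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_loss(teacher_q, student_logits, tau=0.01):
    """Policy distillation loss: KL between the teacher's softened
    Q-value distribution and the student's action distribution.
    A small temperature tau sharpens the teacher's policy."""
    teacher = F.softmax(teacher_q / tau, dim=1)
    student_log = F.log_softmax(student_logits, dim=1)
    return F.kl_div(student_log, teacher, reduction="batchmean")
```

For multi-task consolidation, the same loss is applied per task with task-specific teacher networks and a shared (or multi-headed) student.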
Optimal surface segmentation is a state-of-the-art method used for
segmentation of multiple globally optimal surfaces in volumetric datasets. The
method is widely used in numerous medical image segmentation applications.
However, nodes in the graph based optimal surface segmentation method typically
encode uniformly distributed orthogonal voxels of the volume. Thus the
segmentation cannot attain an accuracy greater than a single unit voxel, i.e.
the distance between two adjoining nodes in graph space. Segmentation accuracy
higher than a unit voxel is achievable by exploiting partial volume information
in the voxels, which results in non-equidistant spacing between adjoining
graph nodes. This paper reports a generalized graph-based multiple surface
segmentation method with convex priors which can optimally segment the target
surfaces in an irregularly sampled space. The proposed method allows
non-equidistant spacing between the adjoining graph nodes to achieve subvoxel
segmentation accuracy by utilizing the partial volume information in the
voxels. The partial volume information in the voxels is exploited by computing
a displacement field from the original volume data to identify the
subvoxel-accurate centers within each voxel resulting in non-equidistant
spacing between the adjoining graph nodes. The smoothness of each surface
modeled as a convex constraint governs the connectivity and regularity of the
surface. We employ an edge-based graph representation to incorporate the
necessary constraints and the globally optimal solution is obtained by
computing a minimum s-t cut. The proposed method was validated on 10
intravascular multi-frame ultrasound image datasets for subvoxel segmentation
accuracy. In all cases, the approach yielded highly accurate results. Our
approach can be readily extended to higher-dimensional segmentations. | [
"cs.CV"
] |
Symbolic regression is a powerful technique that can discover analytical
equations that describe data, which can lead to explainable models and
generalizability outside of the training data set. In contrast, neural networks
have achieved amazing levels of accuracy on image recognition and natural
language processing tasks, but are often seen as black-box models that are
difficult to interpret and typically extrapolate poorly. Here we use a neural
network-based architecture for symbolic regression called the Equation Learner
(EQL) network and integrate it with other deep learning architectures such that
the whole system can be trained end-to-end through backpropagation. To
demonstrate the power of such systems, we study their performance on several
substantially different tasks. First, we show that the neural network can
perform symbolic regression and learn the form of several functions. Next, we
present an MNIST arithmetic task where a separate part of the neural network
extracts the digits. Finally, we demonstrate prediction of dynamical systems
where an unknown parameter is extracted through an encoder. We find that the
EQL-based architecture can extrapolate quite well outside of the training data
set compared to a standard neural network-based architecture, paving the way
for deep learning to be applied in scientific exploration and discovery. | [
"cs.LG",
"cs.NE",
"physics.data-an",
"stat.ML"
] |
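A minimal sketch of one EQL-style layer: a linear map feeding a bank of primitive functions (identity, sine, cosine, and a pairwise product unit here; the paper's exact choice of primitives and its sparsity regularization differ in detail):

```python
import torch
import torch.nn as nn

class EQLLayer(nn.Module):
    """One Equation Learner layer: a linear map followed by a bank of
    primitive functions, so a trained network can be read off as an
    analytic expression."""
    def __init__(self, in_dim, units=4):
        super().__init__()
        # One linear output per primitive, plus two inputs for the product unit.
        self.linear = nn.Linear(in_dim, 5 * units)
        self.units = units

    def forward(self, x):
        z = self.linear(x)
        u = self.units
        ident, sin_in, cos_in = z[:, :u], z[:, u:2*u], z[:, 2*u:3*u]
        a, b = z[:, 3*u:4*u], z[:, 4*u:5*u]
        return torch.cat(
            [ident, torch.sin(sin_in), torch.cos(cos_in), a * b], dim=1)
```

Stacking a few such layers and training end-to-end with backpropagation, as described above, is what lets the surrounding deep network remain differentiable while the EQL part stays interpretable.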
Person re-identification (Re-ID) aims to match person images across
non-overlapping camera views. The majority of Re-ID methods focus on
small-scale surveillance systems in which each pedestrian is captured in
different camera views of adjacent scenes. However, in large-scale surveillance
systems that cover larger areas, it is required to track a pedestrian of
interest across distant scenes (e.g., a criminal suspect escapes from one city
to another). Since most pedestrians appear in limited local areas, it is
difficult to collect training data with cross-camera pairs of the same person.
In this work, we study intra-camera supervised person re-identification across
distant scenes (ICS-DS Re-ID), which uses cross-camera unpaired data with
intra-camera identity labels for training. It is challenging as cross-camera
paired data plays a crucial role in learning camera-invariant features in most
existing Re-ID methods. To learn camera-invariant representations from
cross-camera unpaired training data, we propose a cross-camera feature
prediction method that mines cross-camera self-supervision information from
the camera-specific feature distribution by transforming fake cross-camera
positive feature pairs and minimizing the distances of the fake pairs.
Furthermore, we automatically localize and extract local-level features with a
transformer. Joint
learning of global-level and local-level features forms a global-local
cross-camera feature prediction scheme for mining fine-grained cross-camera
self supervision information. Finally, cross-camera self supervision and
intra-camera supervision are aggregated in a framework. The experiments are
conducted in the ICS-DS setting on Market-SCT, Duke-SCT and MSMT17-SCT
datasets. The evaluation results demonstrate the superiority of our method,
which gains significant improvements of 15.4 Rank-1 and 22.3 mAP on Market-SCT
as compared to the second best method. | [
"cs.CV",
"cs.AI"
] |
In Bayesian learning of Gaussian graphical model structure, it is common to
restrict attention to certain classes of graphs and approximate the posterior
distribution by repeatedly moving from one graph to another, using MCMC or
methods such as stochastic shotgun search (SSS). I give two corrected versions
of an algorithm for non-decomposable graphs and discuss random graph
distributions, in particular as prior distributions. The main topic of the
thesis is Bayesian structure-learning with forests or trees. Restricting
attention to these graphs can be justified using theorems on random graphs. I
describe how to use the Chow-Liu algorithm and the Matrix Tree
Theorem to find the MAP forest and certain quantities in the posterior
distribution on trees. I give adapted versions of MCMC and SSS for
approximating the posterior distribution for forests and trees, and systems for
storing these graphs so that it is easy to choose moves to neighbouring graphs.
Experiments show that SSS with trees does well when the true graph is a tree or
sparse graph. SSS with trees or forests does better than SSS with decomposable
graphs in certain cases. Graph priors improve detection of hubs but need large
ranges of probabilities. MCMC on forests fails to mix well and MCMC on trees is
slower than SSS. (For a longer abstract see the thesis.) | [
"stat.ML",
"cs.LG"
] |
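A sketch of the Chow-Liu step mentioned above for Gaussian data: the MAP tree is the maximum-weight spanning tree under pairwise mutual information, which for Gaussians reduces to I(i, j) = -1/2 log(1 - rho_ij^2). The prior terms discussed in the thesis are omitted:

```python
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(X):
    """Chow-Liu tree for Gaussian data X (rows = samples): the
    maximum-weight spanning tree under pairwise mutual information."""
    corr = np.corrcoef(X, rowvar=False)
    d = corr.shape[0]
    mi = np.zeros((d, d))
    for i, j in combinations(range(d), 2):
        mi[i, j] = -0.5 * np.log(1.0 - corr[i, j] ** 2)
    # Max-weight spanning tree = min spanning tree on negated weights.
    tree = minimum_spanning_tree(-mi)
    return [(int(i), int(j)) for i, j in zip(*tree.nonzero())]
```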
Competitions play an invaluable role in the field of forecasting, as
exemplified through the recent M4 competition. The competition received
attention from both academics and practitioners and sparked discussions around
the representativeness of the data for business forecasting. Several
competitions featuring real-life business forecasting tasks on the Kaggle
platform have, however, been largely ignored by the academic community. We
believe the learnings from these competitions have much to offer to the
forecasting community and provide a review of the results from six Kaggle
competitions. We find that most of the Kaggle datasets are characterized by
higher intermittence and entropy than the M-competitions and that global
ensemble models tend to outperform local single models. Furthermore, we find
the strong performance of gradient boosted decision trees, the increasing
success of neural networks for forecasting, and a variety of techniques for
adapting machine learning models to the forecasting task. | [
"stat.ML",
"cs.LG",
"stat.AP"
] |
Psychological studies have found that the human visual tracking system involves
learning, memory, and planning. Despite recent successes, not many works have
focused on memory and planning in deep learning based tracking. We are thus
interested in memory augmented network, where an external memory remembers the
evolving appearance of the target (foreground) object without backpropagation
for updating weights. Our Dual Augmented Memory Network (DAWN) is unique in
remembering both target and background, and using an improved attention LSTM
memory to guide the focus on memorized features. DAWN is effective in
unsupervised tracking in handling total occlusion, severe motion blur, abrupt
changes in target appearance, multiple object instances, and similar foreground
and background features. We present extensive quantitative and qualitative
experimental comparison with state-of-the-art methods including top contenders
in recent VOT challenges. Notably, despite its straightforward implementation,
DAWN ranked third in both the VOT2016 and VOT2017 challenges, with an
excellent success rate among all VOT fast trackers running at fps > 10 in
unsupervised tracking. We propose DAWN-RPN, where we simply augment our
memory and attention LSTM modules to the state-of-the-art SiamRPN, and report
immediate performance gain, thus demonstrating DAWN can work well with and
directly benefit other models to handle difficult cases as well. | [
"cs.CV"
] |
Recently, graph neural networks for semi-supervised classification have been
widely studied. However, existing methods only use the information of limited
neighbors and do not deal with the inter-class connections in graphs. In this
paper, we propose Adaptive aggregation with Class-Attentive Diffusion (AdaCAD),
a new aggregation scheme that adaptively aggregates nodes probably of the same
class among K-hop neighbors. To this end, we first propose a novel stochastic
process, called Class-Attentive Diffusion (CAD), that strengthens attention to
intra-class nodes and attenuates attention to inter-class nodes. In contrast to
the existing diffusion methods with a transition matrix determined solely by
the graph structure, CAD considers both the node features and the graph
structure with the design of our class-attentive transition matrix that
utilizes a classifier. Then, we further propose an adaptive update scheme that
leverages different reflection ratios of the diffusion result for each node
depending on the local class-context. As the main advantage, AdaCAD alleviates
the problem of undesired mixing of inter-class features caused by discrepancies
between node labels and the graph topology. Built on AdaCAD, we construct a
simple model called Class-Attentive Diffusion Network (CAD-Net). Extensive
experiments on seven benchmark datasets consistently demonstrate the efficacy
of the proposed method and our CAD-Net significantly outperforms the
state-of-the-art methods. Code is available at
https://github.com/ljin0429/CAD-Net. | [
"cs.LG",
"stat.ML"
] |
Current work in explainable reinforcement learning generally produces
policies in the form of a decision tree over the state space. Such policies can
be used for formal safety verification, agent behavior prediction, and manual
inspection of important features. However, existing approaches fit a decision
tree after training or use a custom learning procedure which is not compatible
with new learning techniques, such as those which use neural networks. To
address this limitation, we propose a novel Markov Decision Process (MDP) type
for learning decision tree policies: Iterative Bounding MDPs (IBMDPs). An IBMDP
is constructed around a base MDP so each IBMDP policy is guaranteed to
correspond to a decision tree policy for the base MDP when using a
method-agnostic masking procedure. Because of this decision tree equivalence,
any function approximator can be used during training, including a neural
network, while yielding a decision tree policy for the base MDP. We present the
required masking procedure as well as a modified value update step which allows
IBMDPs to be solved using existing algorithms. We apply this procedure to
produce IBMDP variants of recent reinforcement learning methods. We empirically
show the benefits of our approach by solving IBMDPs to produce decision tree
policies for the base MDPs. | [
"cs.LG",
"cs.AI"
] |
A key challenge in video enhancement and action recognition is to fuse useful
information from neighboring frames. Recent works suggest establishing accurate
correspondences between neighboring frames before fusing temporal information.
However, the generated results heavily depend on the quality of correspondence
estimation. In this paper, we propose a more robust solution: \emph{sampling
and fusing multi-level features} across neighborhood frames to generate the
results. Based on this idea, we introduce a new module to improve the
capability of 3D convolution, namely, learnable sampling 3D convolution
(\emph{LS3D-Conv}). We add learnable 2D offsets to 3D convolution which aims to
sample locations on spatial feature maps across frames. The offsets can be
learned for specific tasks. The \emph{LS3D-Conv} can flexibly replace 3D
convolution layers in existing 3D networks, yielding new architectures that
learn the sampling at multiple feature levels. The experiments on video
interpolation, video super-resolution, video denoising, and action recognition
demonstrate the effectiveness of our approach. | [
"cs.CV"
] |
Successive Subspace Learning (SSL) offers a light-weight unsupervised feature
learning method based on inherent statistical properties of data units (e.g.
image pixels and points in point cloud sets). It has shown promising results,
especially on small datasets. In this paper, we intuitively explain this
method, provide an overview of its development, and point out some open
questions and challenges for future research. | [
"cs.CV"
] |
We present a new method for few-shot human motion transfer that achieves
realistic human image generation with only a small number of appearance inputs.
Despite recent advances in single-person motion transfer, prior methods often
require a large number of training images and take a long time to train. One
promising direction is to perform few-shot human motion transfer, which only
needs a few of source images for appearance transfer. However, it is
particularly challenging to obtain satisfactory transfer results. In this
paper, we address this issue by rendering a human texture map to a surface
geometry (represented as a UV map), which is personalized to the source person.
Our geometry generator combines the shape information from source images, and
the pose information from 2D keypoints to synthesize the personalized UV map. A
texture generator then generates the texture map conditioned on the texture of
source images to fill out invisible parts. Furthermore, we may fine-tune the
texture map on the manifold of the texture generator from a few source images
at the test time, which improves the quality of the texture map without
over-fitting or artifacts. Extensive experiments show the proposed method
outperforms state-of-the-art methods both qualitatively and quantitatively. Our
code is available at https://github.com/HuangZhiChao95/FewShotMotionTransfer. | [
"cs.CV",
"cs.GR"
] |
Although convolutional neural networks (CNNs) have recently demonstrated
high-quality reconstruction for single-image super-resolution (SR), recovering
natural and realistic texture remains a challenging problem. In this paper, we
show that it is possible to recover textures faithful to semantic classes. In
particular, we only need to modulate features of a few intermediate layers in a
single network conditioned on semantic segmentation probability maps. This is
made possible through a novel Spatial Feature Transform (SFT) layer that
generates affine transformation parameters for spatial-wise feature modulation.
SFT layers can be trained end-to-end together with the SR network using the
same loss function. During testing, it accepts an input image of arbitrary size
and generates a high-resolution image with just a single forward pass
conditioned on the categorical priors. Our final results show that an SR
network equipped with SFT can generate more realistic and visually pleasing
textures in comparison to state-of-the-art SRGAN and EnhanceNet. | [
"cs.CV"
] |
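The core operation is compact enough to sketch directly: a conditioning branch maps segmentation probability maps to per-position scale and shift parameters that modulate the features. Layer widths below are illustrative, not the paper's exact configuration.

```python
# A sketch of the Spatial Feature Transform idea: small conv heads turn
# segmentation probability maps into spatial-wise affine parameters
# (gamma, beta) that modulate intermediate SR features.
import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    def __init__(self, feat_ch, cond_ch, hidden=32):
        super().__init__()
        self.gamma = nn.Sequential(
            nn.Conv2d(cond_ch, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, feat_ch, 1))
        self.beta = nn.Sequential(
            nn.Conv2d(cond_ch, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, feat_ch, 1))

    def forward(self, feat, cond):
        # Spatial-wise affine modulation: feat * gamma(cond) + beta(cond).
        return feat * self.gamma(cond) + self.beta(cond)

feat = torch.randn(1, 64, 48, 48)                # intermediate SR features
seg = torch.rand(1, 8, 48, 48)                   # segmentation probabilities
print(SFTLayer(64, 8)(feat, seg).shape)          # torch.Size([1, 64, 48, 48])
```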
In recent years, Convolutional Neural Networks (CNNs) have enabled ubiquitous
image processing applications. As such, CNNs require fast runtime (forward
propagation) to process high-resolution visual streams in real time. This is
still a challenging task even with state-of-the-art graphics and tensor
processing units. The bottleneck in computational efficiency primarily occurs
in the convolutional layers. Performing operations in the Fourier domain is a
promising way to accelerate forward propagation since it transforms
convolutions into elementwise multiplications, which are considerably faster to
compute for large kernels. Furthermore, such computation could be implemented
using an optical 4f system with orders of magnitude faster operation. However,
a major challenge in using this spectral approach, as well as in an optical
implementation of CNNs, is the inclusion of a nonlinearity between each
convolutional layer, without which CNN performance drops dramatically. Here, we
propose a Spectral CNN Linear Counterpart (SCLC) network architecture and
develop a Knowledge Distillation (KD) approach to circumvent the need for a
nonlinearity and successfully train such networks. While the KD approach is
known in machine learning as an effective process for model compression, we adapt
the approach to transfer the knowledge from a nonlinear network (teacher) to a
linear counterpart (student). We show that the KD approach can achieve
performance that easily surpasses the standard linear version of a CNN and
could approach the performance of the nonlinear network. Our simulations show
that increasing the resolution of the input image allows our
proposed 4f optical linear network to operate more efficiently than a nonlinear
network of the same accuracy on two fundamental image processing tasks: (i)
object classification and (ii) semantic segmentation. | [
"cs.CV",
"cs.ET",
"cs.LG"
] |
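The speed argument rests on the convolution theorem: convolution in the spatial domain becomes elementwise multiplication of spectra. Below is a generic sketch of FFT-based (circular, depthwise) convolution, not the SCLC architecture itself.

```python
# A sketch of Fourier-domain convolution: zero-pad the kernel to the
# image size, take real FFTs, multiply spectra elementwise per channel,
# and invert. This computes a circular depthwise convolution.
import torch

def fft_conv2d(x, kernel):
    """x: (B, C, H, W); kernel: (C, kh, kw); circular convolution."""
    b, c, h, w = x.shape
    k = torch.zeros(c, h, w, device=x.device, dtype=x.dtype)
    kh, kw = kernel.shape[-2:]
    k[:, :kh, :kw] = kernel                      # zero-pad kernel to image size
    X = torch.fft.rfft2(x)
    K = torch.fft.rfft2(k)
    return torch.fft.irfft2(X * K, s=(h, w))     # elementwise spectral product

x = torch.randn(2, 3, 64, 64)
kernel = torch.randn(3, 7, 7)
print(fft_conv2d(x, kernel).shape)               # torch.Size([2, 3, 64, 64])
```

The payoff grows with kernel size: a spatial k x k convolution costs O(k^2) per output pixel, while the FFT route costs O(log HW) per pixel independent of k, which is why large kernels (and optical 4f systems) benefit most.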
Domain specific neural network accelerators have garnered attention because
of their improved energy efficiency and inference performance compared to CPUs
and GPUs. Such accelerators are thus well suited for resource-constrained
embedded systems. However, mapping sophisticated neural network models on these
accelerators still entails significant energy and memory consumption, along
with high inference time overhead. Binarized neural networks (BNNs), which
utilize single-bit weights, represent an efficient way to implement and deploy
neural network models on accelerators. In this paper, we present a novel
optical-domain BNN accelerator, named ROBIN, which intelligently integrates
heterogeneous microring resonator optical devices with complementary
capabilities to efficiently implement the key functionalities in BNNs. We
perform detailed fabrication-process variation analyses at the optical device
level, explore efficient corrective tuning for these devices, and integrate
circuit-level optimization to counter thermal variations. As a result, our
proposed ROBIN architecture is robust, energy-efficient, low-latency, and
high-throughput when executing BNN models.
Our analysis shows that ROBIN can outperform the best-known optical BNN
accelerators and also many electronic accelerators. Specifically, our
energy-efficient ROBIN design exhibits energy-per-bit values that are ~4x lower
than electronic BNN accelerators and ~933x lower than a recently proposed
photonic BNN accelerator, while a performance-efficient ROBIN design shows ~3x
and ~25x better performance than electronic and photonic BNN accelerators,
respectively. | [
"cs.LG",
"cs.AR",
"cs.ET"
] |
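The abstract leans on the standard BNN ingredient of single-bit weights. A common way to train such weights is sketched below, assuming sign binarization with a straight-through estimator (STE); ROBIN's optical hardware is not modeled here.

```python
# A sketch of the single-bit weights BNNs rely on: sign binarization in
# the forward pass, with a straight-through estimator in the backward
# pass so gradient-based training remains possible.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                     # weights become {-1, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float() # pass gradients near zero

class BinaryLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.1)

    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight))

layer = BinaryLinear(16, 4)
out = layer(torch.randn(2, 16))
out.sum().backward()                             # STE keeps training viable
print(out.shape)                                 # torch.Size([2, 4])
```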
The generation of synthetic images is currently being dominated by Generative
Adversarial Networks (GANs). Despite their outstanding success in generating
realistic-looking images, they still suffer from major drawbacks, including an
unstable and highly sensitive training procedure, mode-collapse and
mode-mixture, and dependency on large training sets. In this work we present a
novel non-adversarial generative method - Clustered Optimization of LAtent
space (COLA), which overcomes some of the limitations of GANs, and outperforms
GANs when training data is scarce. In the full data regime, our method is
capable of generating diverse multi-class images with no supervision,
surpassing previous non-adversarial methods in terms of image quality and
diversity. In the small-data regime, where only a small sample of labeled
images is available for training with no access to additional unlabeled data,
our results surpass state-of-the-art GAN models trained on the same amount of
data. Finally, when utilizing our model to augment small datasets, we surpass
the state-of-the-art performance in small-sample classification tasks on
challenging datasets, including CIFAR-10, CIFAR-100, STL-10 and Tiny-ImageNet.
A theoretical analysis supporting the essence of the method is presented. | [
"cs.CV",
"cs.LG"
] |
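The abstract does not detail COLA's update rules; the sketch below shows only the general non-adversarial latent-optimization recipe (in the spirit of GLO) that such methods build on, with toy modules throughout.

```python
# A hedged sketch of non-adversarial latent optimization: learn one
# latent code per training image jointly with a generator by minimizing
# a reconstruction loss -- no discriminator involved.
import torch
import torch.nn as nn

n_images, z_dim = 100, 32
images = torch.rand(n_images, 3 * 16 * 16)       # toy flattened images
codes = nn.Parameter(torch.randn(n_images, z_dim))
generator = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                          nn.Linear(256, 3 * 16 * 16), nn.Sigmoid())

opt = torch.optim.Adam([codes, *generator.parameters()], lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(generator(codes), images)
    loss.backward()
    opt.step()
print(f"final reconstruction loss: {loss.item():.4f}")
```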
Following a navigation instruction such as 'Walk down the stairs and stop at
the brown sofa' requires embodied AI agents to ground scene elements referenced
via language (e.g. 'stairs') to visual content in the environment (pixels
corresponding to 'stairs').
We ask the following question -- can we leverage abundant 'disembodied'
web-scraped vision-and-language corpora (e.g. Conceptual Captions) to learn
visual groundings (what do 'stairs' look like?) that improve performance on a
relatively data-starved embodied perception task (Vision-and-Language
Navigation)? Specifically, we develop VLN-BERT, a visiolinguistic
transformer-based model for scoring the compatibility between an instruction
('...stop at the brown sofa') and a sequence of panoramic RGB images captured
by the agent. We demonstrate that pretraining VLN-BERT on image-text pairs from
the web before fine-tuning on embodied path-instruction data significantly
improves performance on VLN -- outperforming the prior state-of-the-art in the
fully-observed setting by 4 absolute percentage points on success rate.
Ablations of our pretraining curriculum show each stage to be impactful -- with
their combination resulting in further positive synergistic effects. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
] |
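As a rough illustration of the scoring interface described above, here is a toy compatibility scorer: both modalities are projected to a shared width, fused by a small transformer, and reduced to a single path-instruction score. All sizes and modules are stand-ins, not VLN-BERT itself.

```python
# A toy sketch of scoring instruction/trajectory compatibility with a
# transformer over concatenated text and image tokens. Encoders and
# dimensions are illustrative stand-ins.
import torch
import torch.nn as nn

class CompatibilityScorer(nn.Module):
    def __init__(self, txt_dim=64, img_dim=128, hid=64):
        super().__init__()
        self.txt_proj = nn.Linear(txt_dim, hid)
        self.img_proj = nn.Linear(img_dim, hid)
        self.fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hid, nhead=4,
                                       batch_first=True), num_layers=2)
        self.score = nn.Linear(hid, 1)

    def forward(self, txt_tokens, img_feats):
        seq = torch.cat([self.txt_proj(txt_tokens),
                         self.img_proj(img_feats)], dim=1)
        fused = self.fusion(seq)                 # joint visiolinguistic tokens
        return self.score(fused.mean(dim=1))     # one score per pair

txt = torch.randn(2, 12, 64)                     # 12 instruction tokens
imgs = torch.randn(2, 6, 128)                    # 6 panoramic view features
print(CompatibilityScorer()(txt, imgs).shape)    # torch.Size([2, 1])
```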
Video super-resolution (SR) aims to generate a sequence of high-resolution
(HR) frames with plausible and temporally consistent details from their
low-resolution (LR) counterparts. The generation of accurate correspondence
plays a significant role in video SR. Traditional video SR methods have
demonstrated that simultaneous SR of both images and optical flows can provide
accurate correspondences and better SR results. However, existing deep
learning based methods use only LR optical flows for correspondence generation. In
this paper, we propose an end-to-end trainable video SR framework to
super-resolve both images and optical flows. Specifically, we first propose an
optical flow reconstruction network (OFRnet) to infer HR optical flows in a
coarse-to-fine manner. Then, motion compensation is performed according to the
HR optical flows. Finally, compensated LR inputs are fed to a super-resolution
network (SRnet) to generate the SR results. Extensive experiments demonstrate
that HR optical flows provide more accurate correspondences than their LR
counterparts and improve both accuracy and consistency performance. Comparative
results on the Vid4 and DAVIS-10 datasets show that our framework achieves
state-of-the-art performance. | [
"cs.CV"
] |
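The motion-compensation step mentioned above is the standard backward-warping operation: given an HR flow field, a neighboring frame is resampled toward the reference frame. A self-contained sketch of generic flow warping follows (not OFRnet itself).

```python
# A sketch of flow-based motion compensation: add the flow to a pixel
# grid, normalize to grid_sample's [-1, 1] range, and resample the
# neighboring frame toward the reference frame.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """frame: (B, C, H, W); flow: (B, 2, H, W) in pixel units."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=frame.device),
                            torch.arange(w, device=frame.device),
                            indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    coords = grid + flow
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, coords.permute(0, 2, 3, 1), align_corners=True)

frame = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)                 # zero flow -> identity warp
print(torch.allclose(warp(frame, flow), frame, atol=1e-5))  # True
```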
Given an outfit, what small changes would most improve its fashionability?
This question presents an intriguing new vision challenge. We introduce
Fashion++, an approach that proposes minimal adjustments to a full-body
clothing outfit that will have maximal impact on its fashionability. Our model
consists of a deep image generation neural network that learns to synthesize
clothing conditioned on learned per-garment encodings. The latent encodings are
explicitly factorized according to shape and texture, thereby allowing direct
edits for both fit/presentation and color/patterns/material, respectively. We
show how to bootstrap Web photos to automatically train a fashionability model,
and develop an activation maximization-style approach to transform the input
image into its more fashionable self. The edits suggested range from swapping
in a new garment to tweaking its color, how it is worn (e.g., rolling up
sleeves), or its fit (e.g., making pants baggier). Experiments demonstrate that
Fashion++ provides successful edits, both according to automated metrics and
human opinion. Project page is at
http://vision.cs.utexas.edu/projects/FashionPlus. | [
"cs.CV"
] |
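The activation-maximization-style editing can be sketched generically: freeze a generator and a fashionability scorer, then take a few gradient steps on a garment's latent encoding to raise the predicted score. All modules below are toy stand-ins for illustration.

```python
# A hedged sketch of the activation-maximization editing loop: only the
# latent encoding is updated; the generator and fashionability model
# stay fixed, so the edit remains close to the original outfit.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                          nn.Linear(64, 3 * 8 * 8))
fashionability = nn.Sequential(nn.Linear(3 * 8 * 8, 32), nn.ReLU(),
                               nn.Linear(32, 1))

z = torch.randn(1, 16, requires_grad=True)       # latent garment encoding
opt = torch.optim.Adam([z], lr=0.05)
for step in range(50):                           # minimal edit: few small steps
    opt.zero_grad()
    score = fashionability(generator(z))
    (-score).mean().backward()                   # ascend the fashionability score
    opt.step()
print(f"edited score: {fashionability(generator(z)).item():.3f}")
```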
U-net-like FCNs currently dominate biomedical image segmentation
applications and attain promising performance, largely due to their elegant
architectures, e.g., symmetric contracting and expansive paths as well as
lateral skip-connections. Devising novel architectures that further improve
segmentation remains an open research direction. In this paper, we develop an
ACE-net that aims to enhance the feature representation and utilization by
augmenting the contracting and expansive paths. In particular, we augment the
paths by the recently proposed advanced techniques including ASPP, dense
connection and deep supervision mechanisms, and novel connections such as
directly connecting the raw image to the expansive side. With these
augmentations, ACE-net can utilize features from multiple sources, scales and
receptive fields for segmentation while still maintaining a relatively simple
architecture. Experiments on two typical biomedical segmentation tasks validate
its effectiveness: highly competitive results are obtained in both tasks
while ACE-net still runs fast at inference. | [
"cs.CV"
] |
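Of the named augmentations, ASPP is the easiest to illustrate: parallel dilated convolutions capture multiple receptive fields and are fused by a 1x1 convolution. Rates and widths below are illustrative.

```python
# A sketch of an ASPP block: parallel 3x3 convolutions with different
# dilation rates see different receptive fields; their outputs are
# concatenated and fused by a 1x1 projection.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Concatenate multi-rate responses, then fuse with a 1x1 conv.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 32, 64, 64)
print(ASPP(32, 32)(x).shape)                     # torch.Size([1, 32, 64, 64])
```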
Multi-view clustering has recently attracted increasing attention by
utilizing information from multiple views. However, existing multi-view
clustering methods either suffer from high computation and space complexities
or lack representation capability. To address these issues, we propose deep
embedded multi-view clustering with collaborative training (DEMVC) in this
paper. Firstly, the embedded representations of multiple views are learned
individually by deep autoencoders. Then, both the consensus and complementary
information of multiple views are taken into account, and a novel collaborative
training scheme is proposed. Concretely, the feature representations and
cluster assignments of
all views are learned collaboratively. A new consistency strategy for cluster
centers initialization is further developed to improve the multi-view
clustering performance with collaborative training. Experimental results on
several popular multi-view datasets show that DEMVC achieves significant
improvements over state-of-the-art methods. | [
"cs.LG",
"stat.ML"
] |
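Two building blocks implied by the abstract can be sketched: a per-view autoencoder for the embedded representations, and a DEC-style soft assignment from embeddings to shared cluster centers. The collaborative training schedule itself is omitted; everything below is a hedged illustration, not DEMVC's exact design.

```python
# A hedged sketch of the per-view pieces: one autoencoder per view plus
# a Student's t soft assignment (as in DEC) from embeddings to shared
# cluster centers.
import torch
import torch.nn as nn

class ViewAutoencoder(nn.Module):
    def __init__(self, in_dim, z_dim=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)                    # embedding + reconstruction

def soft_assign(z, centers, alpha=1.0):
    # Student's t kernel between embeddings and cluster centers.
    d2 = torch.cdist(z, centers) ** 2
    q = (1.0 + d2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(dim=1, keepdim=True)

views = [torch.rand(8, 100), torch.rand(8, 50)]  # two views, 8 samples
centers = torch.randn(3, 10)                     # 3 shared clusters
for x in views:
    z, recon = ViewAutoencoder(x.shape[1])(x)
    print(soft_assign(z, centers).shape)         # torch.Size([8, 3])
```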