text (string, length 29–3.31k) | label (sequence of 1–11 tags) |
---|---|
In this paper, we propose an end-to-end learning network to predict future
frames in a point cloud sequence. As its main novelty, an initial layer learns
topological information of point clouds as geometric features, to form
representative spatio-temporal neighborhoods. This module is followed by
multiple Graph-RNN cells. Each cell learns point dynamics (i.e., RNN states)
by processing each point jointly with its spatio-temporal neighbouring points.
We tested the network performance on a moving-digit MNIST dataset, a synthetic
human body motion dataset, and the JPEG dynamic bodies dataset. Simulation
results demonstrate that our method outperforms baselines that neglect
geometric feature information. | [
"cs.CV"
] |
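As a side note on the neighborhood construction described in the abstract above, the following is a minimal, illustrative sketch (not the authors' Graph-RNN code) of gathering spatio-temporal neighborhoods with a plain k-nearest-neighbor search over the current and previous point cloud frames; the function name, frame sizes, and k are assumptions. The abstract suggests the paper forms neighborhoods in a learned geometric feature space rather than raw coordinates, but the gathering step has the same shape.

```python
import numpy as np

def spatio_temporal_knn(frame_t, frame_t_prev, k=8):
    """For each point in the current frame, gather k nearest neighbours from
    the same frame (spatial) and k from the previous frame (temporal)."""
    def knn(query, reference, kk):
        # pairwise squared Euclidean distances, shape (N_query, N_reference)
        d2 = ((query[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
        return np.argsort(d2, axis=1)[:, :kk]

    spatial_idx = knn(frame_t, frame_t, k + 1)[:, 1:]   # drop each point itself
    temporal_idx = knn(frame_t, frame_t_prev, k)
    return spatial_idx, temporal_idx

# toy usage: two consecutive frames of 128 random 3D points
f_prev, f_curr = np.random.rand(128, 3), np.random.rand(128, 3)
s_idx, t_idx = spatio_temporal_knn(f_curr, f_prev)
print(s_idx.shape, t_idx.shape)  # (128, 8) (128, 8)
```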
Belief propagation (BP) is a popular method for performing probabilistic
inference on graphical models. In this work, we enhance BP and propose
self-guided belief propagation (SBP) that incorporates the pairwise potentials
only gradually. This homotopy continuation method converges to a unique
solution and increases the accuracy without increasing the computational
burden. We provide a formal analysis to demonstrate that SBP finds the global
optimum of the Bethe approximation for attractive models where all variables
favor the same state. Moreover, we apply SBP to various graphs with random
potentials and empirically show that: (i) SBP is superior in terms of accuracy
whenever BP converges, and (ii) SBP obtains a unique, stable, and accurate
solution whenever BP does not converge. | [
"stat.ML",
"cs.LG"
] |
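To make the homotopy idea in the abstract above concrete, here is a minimal sketch of loopy belief propagation on a small binary pairwise model in which the pairwise couplings are scaled by a factor lambda that is ramped from 0 to 1, with messages warm-started between steps. The Ising-style parameterization, damping, and schedule are illustrative assumptions, not the authors' exact SBP algorithm.

```python
import numpy as np

def loopy_bp(theta, J, lam, msgs, iters=50, damp=0.5):
    """Sum-product BP on a binary pairwise MRF (states +/-1) whose pairwise
    couplings J are scaled by lam; msgs[(i, j)] is the log-ratio message i->j."""
    edges = list(msgs.keys())
    for _ in range(iters):
        for (i, j) in edges:
            # cavity field at i, excluding the message coming back from j
            h = theta[i] + sum(msgs[(k, i)] for (k, l) in edges if l == i and k != j)
            new = np.arctanh(np.tanh(lam * J[i, j]) * np.tanh(h))
            msgs[(i, j)] = damp * msgs[(i, j)] + (1 - damp) * new
    # single-variable beliefs p(x_i = +1)
    fields = [theta[i] + sum(m for (k, l), m in msgs.items() if l == i)
              for i in range(len(theta))]
    return 1.0 / (1.0 + np.exp(-2.0 * np.array(fields)))

# toy 3-cycle model with local fields theta and couplings J
theta = np.array([0.2, -0.1, 0.05])
J = np.array([[0.0, 0.8, 0.3], [0.8, 0.0, -0.5], [0.3, -0.5, 0.0]])
msgs = {(i, j): 0.0 for i in range(3) for j in range(3) if i != j and J[i, j] != 0}
for lam in np.linspace(0.0, 1.0, 11):        # gradually switch on the couplings
    beliefs = loopy_bp(theta, J, lam, msgs)  # messages warm-started across lam
print(beliefs)
```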
Recently, researchers have utilized neural networks to accurately solve
partial differential equations (PDEs), enabling the mesh-free method for
scientific computation. Unfortunately, network performance drops when
encountering highly nonlinear domains. To improve generalizability, we
introduce the novel approach of employing multi-task learning techniques,
namely an uncertainty-weighted loss and gradient surgery, in the context of
learning PDE solutions. The multi-task scheme exploits the benefits of learning
shared representations, controlled by cross-stitch modules, between multiple
related PDEs, which are obtainable by varying the PDE parameterization
coefficients, to generalize better on the original PDE. To encourage the
network to pay closer attention to the highly nonlinear regions that are more
challenging to learn, we also propose adversarial training for generating
supplementary high-loss samples that are similarly distributed to the original
training distribution. In the experiments, our proposed methods are found to be
effective and reduce the error on the unseen data points as compared to the
previous approaches in various PDE examples, including high-dimensional
stochastic PDEs. | [
"cs.LG",
"cs.NA",
"math.NA"
] |
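The uncertainty-weighting component mentioned in the abstract above can be illustrated with a small PyTorch sketch that combines several task losses with learned homoscedastic log-variances (in the style of Kendall et al.); the network, the placeholder residuals, and all names are assumptions, and gradient surgery and cross-stitch modules are omitted.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine K task losses with learned homoscedastic uncertainties:
    total = sum_k exp(-s_k) * L_k + s_k, where s_k = log(sigma_k^2)."""
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        losses = torch.stack(task_losses)
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()

# toy usage: two placeholder "PDE residual" losses sharing one network
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
criterion = UncertaintyWeightedLoss(num_tasks=2)
opt = torch.optim.Adam(list(net.parameters()) + list(criterion.parameters()), lr=1e-3)

x = torch.rand(256, 2)
u = net(x)
# placeholder residuals standing in for the residuals of two related PDEs
res_a = (u - torch.sin(x[:, :1])).pow(2).mean()
res_b = (u - torch.cos(x[:, :1])).pow(2).mean()
loss = criterion([res_a, res_b])
loss.backward()
opt.step()
```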
Image attribute transfer aims to change an input image to a target one with
expected attributes, which has received significant attention in recent years.
However, most of the existing methods lack the ability to de-correlate the
target attributes and irrelevant information, i.e., the other attributes and
background information, thus often suffering from blurs and artifacts. To
address these issues, we propose a novel Attribute Manifold Encoding GAN
(AME-GAN) for fully-featured attribute transfer, which can modify and adjust
every detail in the images. Specifically, our method divides the input image
into an image attribute part and an image background part on manifolds, which
are controlled by attribute latent variables and background latent variables,
respectively. By constraining the attribute latent variables to follow Gaussian
distributions and the background latent variables to follow uniform
distributions, the attribute transfer procedure becomes controllable and image
generation is more photo-realistic. Furthermore, we adopt a conditional
multi-scale discriminator to render accurate and high-quality target attribute
images. Experimental results on three popular datasets demonstrate the
superiority of our proposed method in both attribute transfer performance and
image generation quality. | [
"cs.CV"
] |
We study pure exploration in multi-armed bandits with graph side-information.
In particular, we consider the best arm (and near-best arm) identification
problem in the fixed confidence setting under the assumption that the arm
rewards are smooth with respect to a given arbitrary graph. This captures a
range of real world pure-exploration scenarios where one often has information
about the similarity of the options or actions under consideration. We propose
a novel algorithm GRUB (GRaph based UcB) for this problem and provide a
theoretical characterization of its performance that elicits the benefit of the
graph-side information. We complement our theory with experimental results that
show that capitalizing on available graph side information yields significant
improvements over pure exploration methods that are unable to use this
information. | [
"cs.LG",
"stat.ML"
] |
Deep net architectures have constantly evolved over the past few years,
leading to significant advancements in a wide array of computer vision tasks.
However, besides high accuracy, many applications also require a low
computational load and limited memory footprint. To date, efficiency has
typically been achieved either by architectural choices at the macro level
(e.g. using skip connections or pruning techniques) or modifications at the
level of the individual layers (e.g. using depth-wise convolutions or channel
shuffle operations). Interestingly, much less attention has been devoted to the
role of the activation functions in constructing efficient nets. Recently,
Kligvasser et al. showed that incorporating spatial connections within the
activation functions enables a significant boost in performance on image
restoration tasks at any given budget of parameters. However, the
effectiveness of their xUnit module has only been tested on simple small
models, which are not characteristic of those used in high-level vision tasks.
In this paper, we adopt and improve the xUnit activation, show how it can be
incorporated into the DenseNet architecture, and illustrate its high
effectiveness for classification and image restoration tasks alike. While the
DenseNet architecture is extremely efficient to begin with, our dense xUnit net
(DxNet) can typically achieve the same performance with far fewer parameters.
For example, on ImageNet, our DxNet outperforms a ReLU-based DenseNet having
30% more parameters and achieves state-of-the-art results for this budget of
parameters. Furthermore, in denoising and super-resolution, DxNet significantly
improves upon all existing lightweight solutions, including the xUnit-based
nets of Kligvasser et al. | [
"cs.CV"
] |
Knowledge distillation has become increasingly important in model
compression. It boosts the performance of a miniaturized student network with
the supervision of the output distribution and feature maps from a
sophisticated teacher network. Some recent works introduce multi-teacher
distillation to provide more supervision to the student network. However, the
effectiveness of multi-teacher distillation methods comes with costly
computational resources. To address both the efficiency and the effectiveness
of knowledge distillation, we introduce feature aggregation to imitate the
multi-teacher distillation in the single-teacher distillation framework by
extracting informative supervision from multiple teacher feature maps.
Specifically, we introduce DFA, a two-stage Differentiable Feature Aggregation
search method motivated by DARTS in neural architecture search, to
efficiently find the aggregations. In the first stage, DFA formulates the
searching problem as a bi-level optimization and leverages a novel bridge loss,
which consists of a student-to-teacher path and a teacher-to-student path, to
find appropriate feature aggregations. The two paths act as two players against
each other, trying to optimize the unified architecture parameters in opposite
directions while guaranteeing both the expressivity and learnability of
the feature aggregation simultaneously. In the second stage, DFA performs
knowledge distillation with the derived feature aggregation. Experimental
results show that DFA outperforms existing methods on CIFAR-100 and CINIC-10
datasets under various teacher-student settings, verifying the effectiveness
and robustness of the design. | [
"cs.LG",
"cs.CV"
] |
Classical regression has a simple geometric description in terms of a
projection of the training labels onto the column space of the design matrix.
However, for over-parameterized models -- where the number of fit parameters is
large enough to perfectly fit the training data -- this picture becomes
uninformative. Here, we present an alternative geometric interpretation of
regression that applies to both under- and over-parameterized models. Unlike
the classical picture which takes place in the space of training labels, our
new picture resides in the space of input features. This new feature-based
perspective provides a natural geometric interpretation of the double-descent
phenomenon in the context of bias and variance, explaining why it can occur
even in the absence of label noise. Furthermore, we show that adversarial
perturbations -- small perturbations to the input features that result in large
changes in label values -- are a generic feature of biased models, arising from
the underlying geometry. We demonstrate these ideas by analyzing three minimal
models for over-parameterized linear least squares regression: without basis
functions (input features equal model features) and with linear or nonlinear
basis functions (two-layer neural networks with linear or nonlinear activation
functions, respectively). | [
"stat.ML",
"cond-mat.dis-nn",
"cs.LG"
] |
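The simplest of the three minimal models mentioned above (no basis functions) can be explored numerically with minimum-norm least squares; the following sketch is purely illustrative, with arbitrary sizes and noise level, and is not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def min_norm_fit_error(n_train, n_features, n_test=500, noise=0.1):
    """Fit y = X w with the minimum-norm least-squares solution (pseudoinverse)
    and return the test error; over-parameterized when n_features > n_train."""
    w_true = rng.normal(size=n_features) / np.sqrt(n_features)
    X, Xt = rng.normal(size=(n_train, n_features)), rng.normal(size=(n_test, n_features))
    y = X @ w_true + noise * rng.normal(size=n_train)
    w_hat = np.linalg.pinv(X) @ y          # minimum-norm interpolant
    return np.mean((Xt @ w_hat - Xt @ w_true) ** 2)

# sweep the number of features past the interpolation threshold (n_train = 40)
for p in [10, 30, 40, 50, 80, 200]:
    print(p, round(min_norm_fit_error(40, p), 4))
```

Sweeping the number of features past the interpolation threshold (here n_train = 40) typically shows the test error peaking near that threshold and falling again, which is the double-descent shape discussed in the abstract.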
Recurrent neural networks (RNNs) are instrumental in modelling sequential and
time-series data. Yet, when using RNNs to inform decision-making, predictions
by themselves are not sufficient; we also need estimates of predictive
uncertainty. Existing approaches for uncertainty quantification in RNNs are
based predominantly on Bayesian methods; these are computationally prohibitive,
and require major alterations to the RNN architecture and training.
Capitalizing on ideas from classical jackknife resampling, we develop a
frequentist alternative that: (a) does not interfere with model training or
compromise its accuracy, (b) applies to any RNN architecture, and (c) provides
theoretical coverage guarantees on the estimated uncertainty intervals. Our
method derives predictive uncertainty from the variability of the (jackknife)
sampling distribution of the RNN outputs, which is estimated by repeatedly
deleting blocks of (temporally-correlated) training data, and collecting the
predictions of the RNN re-trained on the remaining data. To avoid exhaustive
re-training, we utilize influence functions to estimate the effect of removing
training data blocks on the learned RNN parameters. Using data from a critical
care setting, we demonstrate the utility of uncertainty quantification in
sequential decision-making. | [
"cs.LG",
"stat.ML"
] |
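To illustrate the resampling idea above without the influence-function machinery, here is a toy block-jackknife sketch that re-fits a simple linear model (standing in for an RNN) with one temporal block deleted at a time and reads an interval off the spread of the held-out predictions; the model, block sizes, and quantile construction are assumptions and do not reproduce the paper's coverage guarantees.

```python
import numpy as np

def block_jackknife_interval(x_blocks, y_blocks, x_query, alpha=0.1):
    """Leave-one-block-out refits of a 1D linear model; the spread of the
    held-out predictions gives a rough (1 - alpha) interval at x_query."""
    preds = []
    for b in range(len(x_blocks)):
        x = np.concatenate([xb for i, xb in enumerate(x_blocks) if i != b])
        y = np.concatenate([yb for i, yb in enumerate(y_blocks) if i != b])
        slope, intercept = np.polyfit(x, y, deg=1)   # stand-in for re-training an RNN
        preds.append(slope * x_query + intercept)
    lo, hi = np.quantile(preds, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# toy temporally-blocked data: 10 contiguous blocks of 20 points each
t = np.arange(200.0)
y = 0.05 * t + np.random.default_rng(1).normal(scale=1.0, size=200)
x_blocks, y_blocks = np.split(t, 10), np.split(y, 10)
print(block_jackknife_interval(x_blocks, y_blocks, x_query=210.0))
```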
There is a growing demand for planning redirected walking techniques and
applying them to physical environments with obstacles. Such techniques are
mainly managed using three kinds of methods: direct scripting, generalized
controller, and physical- or virtual-environment analysis to determine user
redirection. The first approach is effective when a user's path and both
physical and virtual environments are fixed; however, it is difficult to handle
irregular movements and reuse other environments. The second approach has the
potential of reusing any environment but is less optimized. The last approach
is highly anticipated and versatile, although it has not been sufficiently
developed. In this study, we propose a novel redirection controller using
reinforcement learning with advanced plannability/versatility. Our simulation
experiments show that the proposed strategy can reduce the number of resets by
20.3% for physical-space conditions with multiple obstacles. | [
"cs.LG",
"cs.RO",
"eess.SP"
] |
Feature-based time series representations have attracted substantial
attention in a wide range of time series analysis methods. Recently, the use of
time series features for forecast model averaging has been an emerging research
focus in the forecasting community. Nonetheless, most of the existing
approaches depend on the manual choice of an appropriate set of features.
Exploiting machine learning methods to extract features from time series
automatically becomes crucial in state-of-the-art time series analysis. In this
paper, we introduce an automated approach to extract time series features based
on time series imaging. We first transform time series into recurrence plots,
from which local features can be extracted using computer vision algorithms.
The extracted features are used for forecast model averaging. Our experiments
show that forecasting based on automatically extracted features, with less
human intervention and a more comprehensive view of the raw time series data,
yields highly comparable performances with the best methods in the largest
forecasting competition dataset (M4) and outperforms the top methods in the
Tourism forecasting competition dataset. | [
"stat.ML",
"cs.CV",
"cs.LG",
"stat.CO"
] |
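As a concrete illustration of the imaging step described above, the sketch below turns a 1D series into a binary recurrence plot by delay-embedding it and thresholding pairwise distances; the embedding dimension, delay, and threshold rule are illustrative choices, not the paper's settings.

```python
import numpy as np

def recurrence_plot(series, dim=3, delay=1, eps=None):
    """Delay-embed a 1D series and threshold pairwise distances into a
    binary recurrence matrix usable as an image."""
    n = len(series) - (dim - 1) * delay
    emb = np.stack([series[i * delay : i * delay + n] for i in range(dim)], axis=1)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = np.percentile(dists, 10)      # keep the closest 10% of pairs
    return (dists <= eps).astype(np.uint8)

# toy usage: a noisy sine wave becomes a textured binary image
t = np.linspace(0, 8 * np.pi, 400)
rp = recurrence_plot(np.sin(t) + 0.1 * np.random.default_rng(2).normal(size=t.size))
print(rp.shape, rp.mean())  # (398, 398), density around 0.1
```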
Many decisions involve choosing an uncertain course of actions in deep and
wide decision trees, as when we plan to visit an exotic country for vacation.
In these cases, exhaustive search for the best sequence of actions is not
tractable due to the large number of possibilities and limited time or
computational resources available to make the decision. Therefore, planning
agents need to balance breadth (exploring many actions at each level of the
tree) and depth (exploring many levels in the tree) to optimally allocate their
finite search capacity. We provide efficient analytical solutions and numerical
analysis to the problem of allocating finite sampling capacity in one shot to
large decision trees. We find that in general the optimal policy is to allocate
few samples per level so that deep levels can be reached, thus favoring depth
over breadth search. In contrast, in poor environments and at low capacity, it
is best to broadly sample branches at the cost of not sampling deeply, although
this policy is marginally better than deep allocations. Our results provide a
theoretical foundation for the optimality of deep imagination for planning and
show that it is a generally valid heuristic that could have evolved from the
finite constraints of cognitive systems. | [
"stat.ML",
"cs.LG",
"q-bio.NC"
] |
The core operation of current Graph Neural Networks (GNNs) is the aggregation
enabled by the graph Laplacian or message passing, which filters the
neighborhood node information. Though effective for various tasks, in this
paper, we show that they are potentially a problematic factor underlying all
GNN methods for learning on certain datasets, as they force the node
representations to become similar, making the nodes gradually lose their
identity and become indistinguishable. Hence, we augment the aggregation
operations with their dual, i.e., diversification operators that make the nodes
more distinct and preserve their identity. Such augmentation replaces the aggregation with a
two-channel filtering process that, in theory, is beneficial for enriching the
node representations. In practice, the proposed two-channel filters can be
easily patched on existing GNN methods with diverse training strategies,
including spectral and spatial (message passing) methods. In the experiments,
we observe desired characteristics of the models and significant performance
boost upon the baselines on 9 node classification tasks. | [
"cs.LG",
"stat.ML"
] |
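A minimal sketch of the two-channel idea above: one low-pass aggregation channel mixes each node with its neighbors, and its high-pass dual keeps the deviation from the neighborhood mean, preserving node identity. The row-normalized propagation matrix and the mixing weight alpha are assumptions, not the paper's exact operators.

```python
import numpy as np

def two_channel_filter(A, X, alpha=0.5):
    """One propagation step combining a low-pass (aggregation) channel P X
    with its high-pass dual (I - P) X that sharpens node identity."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    P = A / deg                                  # row-normalized adjacency
    aggregated = P @ X                           # smoothing: mix with neighbors
    diversified = X - P @ X                      # sharpening: deviation from neighbors
    return alpha * aggregated + (1 - alpha) * diversified

# toy 4-node path graph with 2-dimensional node features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.arange(8, dtype=float).reshape(4, 2)
print(two_channel_filter(A, X))
```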
With the exponential increase in the amount of digital information over the
internet, online shops, online music, video and image libraries, search engines
and recommendation systems have become the most convenient ways to find
relevant information within a short time. In recent times, advances in deep
learning have gained significant attention in the fields of speech recognition, image
processing and natural language processing. Meanwhile, several recent studies
have shown the utility of deep learning in the area of recommendation systems
and information retrieval as well. In this short review, we cover the recent
advances made in the field of recommendation using various variants of deep
learning technology. We organize the review in three parts: Collaborative
system, Content based system and Hybrid system. The review also discusses the
contribution of deep learning integrated recommendation systems into several
application domains. The review concludes with a discussion of the impact of
deep learning on recommendation systems in various domains and whether deep
learning has shown any significant improvement over conventional systems for
recommendation. Finally, we also provide future directions of research which
are possible based on the current state of use of deep learning in
recommendation systems. | [
"cs.LG",
"cs.IR"
] |
Language Models (LMs) are important components in several Natural Language
Processing systems. Recurrent Neural Network LMs composed of LSTM units,
especially those augmented with an external memory, have achieved
state-of-the-art results. However, these models still struggle to process long
sequences which are more likely to contain long-distance dependencies because
of information fading and a bias towards more recent information. In this paper
we demonstrate an effective mechanism for retrieving information in a memory
augmented LSTM LM based on attending to information in memory in proportion to
the number of timesteps the LSTM gating mechanism persisted the information. | [
"cs.LG",
"stat.ML"
] |
We capitalize on large amounts of unlabeled video in order to learn a model
of scene dynamics for both video recognition tasks (e.g. action classification)
and video generation tasks (e.g. future prediction). We propose a generative
adversarial network for video with a spatio-temporal convolutional architecture
that untangles the scene's foreground from the background. Experiments suggest
this model can generate tiny videos up to a second at full frame rate better
than simple baselines, and we show its utility at predicting plausible futures
of static images. Moreover, experiments and visualizations show the model
internally learns useful features for recognizing actions with minimal
supervision, suggesting scene dynamics are a promising signal for
representation learning. We believe generative video models can impact many
applications in video understanding and simulation. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
Classification of the extent of damage suffered by a building in a seismic
event is crucial from the perspective of safety and repair work. In this study,
the authors propose a CNN-based autonomous damage detection model. Over 1200
images of different types of buildings (1000 for training and 200 for testing)
were classified into 4 categories according to the extent of damage suffered,
namely: no damage, minor damage, major damage, and collapse. The trained
network was tested with various algorithms and different learning rates. The
best results were obtained with the VGG16 transfer learning model at a learning
rate of 1e-5, which gave a training accuracy of 97.85% and a validation
accuracy of up to 89.38%. The developed model has real-time application in the
event of an earthquake. | [
"cs.CV"
] |
Learning functions on point clouds has applications in many fields, including
computer vision, computer graphics, physics, and chemistry. Recently, there has
been a growing interest in neural architectures that are invariant or
equivariant to all three shape-preserving transformations of point clouds:
translation, rotation, and permutation.
In this paper, we present a first study of the approximation power of these
architectures. We first derive two sufficient conditions for an equivariant
architecture to have the universal approximation property, based on a novel
characterization of the space of equivariant polynomials. We then use these
conditions to show that two recently suggested models are universal, and to
devise two other novel universal architectures. | [
"cs.LG",
"cs.CG"
] |
This paper proposes a two-view deterministic geometric model fitting method,
termed Superpixel-based Deterministic Fitting (SDF), for multiple-structure
data. SDF starts from superpixel segmentation, which effectively captures prior
information of feature appearances. The feature appearances are beneficial to
reduce the computational complexity for deterministic fitting methods. SDF also
includes two original elements, i.e., a deterministic sampling algorithm and a
novel model selection algorithm. The two algorithms are tightly coupled to
boost the performance of SDF in both speed and accuracy. Specifically, the
proposed sampling algorithm leverages the grouping cues of superpixels to
generate reliable and consistent hypotheses. The proposed model selection
algorithm further makes use of desirable properties of the generated
hypotheses, to improve the conventional fit-and-remove framework for more
efficient and effective performance. The key characteristic of SDF is that it
can efficiently and deterministically estimate the parameters of model
instances in multi-structure data. Experimental results demonstrate that the
proposed SDF shows superiority over several state-of-the-art fitting methods
for real images with single-structure and multiple-structure data. | [
"cs.CV"
] |
We present a simple yet effective general-purpose framework for modeling 3D
shapes by leveraging recent advances in 2D image generation using CNNs. Using
just a single depth image of the object, we can output a dense multi-view depth
map representation of 3D objects. Our simple encoder-decoder framework,
comprised of a novel identity encoder and class-conditional viewpoint
generator, generates 3D consistent depth maps. Our experimental results
demonstrate the two-fold advantage of our approach. First, we can directly
borrow architectures that work well in the 2D image domain to 3D. Second, we
can effectively generate high-resolution 3D shapes with low computational
memory. Our quantitative evaluations show that our method is superior to
existing depth map methods for reconstructing and synthesizing 3D objects and
is competitive with other representations, such as point clouds, voxel grids,
and implicit functions. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
Convolution is the main building block of convolutional neural networks
(CNN). We observe that an optimized CNN often has highly correlated filters as
the number of channels increases with depth, reducing the expressive power of
feature representations. We propose Tied Block Convolution (TBC) that shares
the same thinner filters over equal blocks of channels and produces multiple
responses with a single filter. The concept of TBC can also be extended to
group convolution and fully connected layers, and can be applied to various
backbone networks and attention modules. Our extensive experimentation on
classification, detection, instance segmentation, and attention demonstrates
TBC's significant across-the-board gain over standard convolution and group
convolution. The proposed TiedSE attention module can even use 64 times fewer
parameters than the SE module to achieve comparable performance. In particular,
standard CNNs often fail to accurately aggregate information in the presence of
occlusion and result in multiple redundant partial object proposals. By sharing
filters across channels, TBC reduces correlation and can effectively handle
highly overlapping instances. TBC increases the average precision for object
detection on MS-COCO by 6% when the occlusion ratio is 80%. Our code will be
released. | [
"cs.CV"
] |
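A minimal PyTorch sketch of the channel-sharing idea behind TBC described above: the input channels are split into B equal blocks and a single thin convolution is applied to every block. Layer names and hyperparameters are assumptions, and the attention and group-convolution variants are omitted; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class TiedBlockConv2d(nn.Module):
    """Share one thin conv across B equal blocks of the input channels,
    so the filter bank is reused B times instead of being B times wider."""
    def __init__(self, in_channels, out_channels, kernel_size, blocks=2, padding=1):
        super().__init__()
        assert in_channels % blocks == 0 and out_channels % blocks == 0
        self.blocks = blocks
        self.conv = nn.Conv2d(in_channels // blocks, out_channels // blocks,
                              kernel_size, padding=padding)

    def forward(self, x):
        chunks = torch.chunk(x, self.blocks, dim=1)          # split channel blocks
        return torch.cat([self.conv(c) for c in chunks], dim=1)

# toy usage: same output shape as a standard conv, roughly 1/B of the weights
x = torch.randn(2, 64, 32, 32)
tbc = TiedBlockConv2d(64, 64, kernel_size=3, blocks=4)
print(tbc(x).shape, sum(p.numel() for p in tbc.parameters()))
```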
When it comes to addressing the safety/security related needs at different
production/construction sites, accurate detection of the presence of workers,
vehicles, and equipment is important and forms an integral part of computer
vision-based surveillance systems (CVSS). Traditional CVSS systems focus on the
use of different computer vision and pattern recognition algorithms that are overly
reliant on manual extraction of features and small datasets, limiting their
usage because of low accuracy, need for expert knowledge and high computational
costs. The main objective of this paper is to provide decision makers at sites
with a practical yet comprehensive deep learning and IoT based solution to
tackle various computer vision related problems such as scene classification,
object detection in scenes, semantic segmentation, scene captioning etc. Our
overarching goal is to address the central question of "What is happening at
this site, and where is it happening?" in an automated fashion, minimizing the need
for human resources dedicated to surveillance. We developed Deep ExxonMobil Eye
for Video Analysis (DEEVA) package to handle scene classification, object
detection, semantic segmentation and captioning of scenes in a hierarchical
approach. The results reveal that transfer learning with the RetinaNet object
detector is able to detect the presence of workers, different types of
vehicles/construction equipment, safety related objects at a high level of
accuracy (above 90%). With the help of deep learning to automatically extract
features and IoT technology to automatically capture, transfer, and process
vast amounts of real-time images, this framework is an important step towards
the development of intelligent surveillance systems aimed at addressing a
myriad of open-ended problems in the realm of security/safety monitoring, productivity
assessments and future decision making. | [
"cs.CV"
] |
The world is increasingly urbanizing and the building industry accounts for
more than 40% of energy consumption in the United States. To improve urban
sustainability, many cities adopt ambitious energy-saving strategies through
retrofitting existing buildings and constructing new communities. In this
situation, an accurate urban building energy model (UBEM) is the foundation to
support the design of energy-efficient communities. However, current UBEMs are
limited in their ability to capture inter-building interdependencies due to
their dynamic and non-linear characteristics. These models either ignore or
oversimplify the building interdependencies, which can substantially affect
the accuracy of urban energy modeling. To fill the research gap, this study
proposes a novel data-driven UBEM synthesizing the solar-based building
interdependency and spatial-temporal graph convolutional network (ST-GCN)
algorithm. Specifically, we took a university campus located in downtown Atlanta
as an example to predict the hourly energy consumption. Furthermore, we tested
the feasibility of the proposed model by comparing the performance of the
ST-GCN model with other common time-series machine learning models. The results
indicate that the ST-GCN model overall outperforms all others. In addition, the
physical knowledge embedded in the model is well interpreted. After discussion,
it is found that data-driven models that integrate engineering or physical
knowledge can significantly improve urban building energy simulation. | [
"cs.LG"
] |
Domain adaptation aims to leverage information from the source domain to
improve the classification performance in the target domain. It mainly utilizes
two schemes: sample reweighting and feature matching. While the first scheme
allocates different weights to individual samples, the second scheme matches
the feature of two domains using global structural statistics. The two schemes
are complementary with each other, which are expected to jointly work for
robust domain adaptation. Several methods combine the two schemes, but the
underlying relationship of samples is insufficiently analyzed due to the
neglect of the hierarchy of samples and the geometric properties between
samples. To better combine the advantages of the two schemes, we propose a
Grassmannian graph-attentional landmark selection (GGLS) framework for domain
adaptation. GGLS presents a landmark selection scheme using attention-induced
neighbors of the graphical structure of samples and performs distribution
adaptation and knowledge adaptation over the Grassmann manifold. The former treats
the landmarks of each sample differently, and the latter avoids feature
distortion and achieves better geometric properties. Experimental results on
different real-world cross-domain visual recognition tasks demonstrate that
GGLS provides better classification accuracies compared with state-of-the-art
domain adaptation methods. | [
"cs.CV"
] |
Stereo matching is an important problem in computer vision which has drawn
tremendous research attention for decades. In recent years, data-driven methods
with convolutional neural networks (CNNs) have continuously pushed stereo
matching to new heights. However, data-driven methods require large amounts of
training data, which is not easy to obtain for real stereo data due to the
difficulty of annotating per-pixel ground-truth disparity. Though synthetic
datasets have been proposed to fill the gap in data demand, fine-tuning on real
datasets is still needed due to the domain variance between synthetic data
and real data. In this paper, we found that in synthetic datasets,
close-to-real-scene texture rendering is a key factor to boost up stereo
matching performance, while close-to-real-scene 3D modeling is less important.
We then propose semi-synthetic, an effective and fast way to synthesize large
amounts of data with close-to-real-scene texture to minimize the gap between
synthetic data and real data. Extensive experiments demonstrate that models
trained with our proposed semi-synthetic datasets achieve significantly better
performance than with general synthetic datasets, especially on real data
benchmarks with limited training data. With further fine-tuning on the real
dataset, we also achieve SOTA performance on Middlebury and competitive results
on KITTI and ETH3D datasets. | [
"cs.CV"
] |
Although a number of studies have explored deep learning in neuroscience, the
application of these algorithms to neural systems on a microscopic scale, i.e.
parameters relevant to lower scales of organization, remains relatively novel.
Motivated by advances in whole-brain imaging, we examined the performance of
deep learning models on microscopic neural dynamics and resulting emergent
behaviors using calcium imaging data from the nematode C. elegans. We show that
neural networks perform remarkably well on both neuron-level dynamics
prediction, and behavioral state classification. In addition, we compared the
performance of structure agnostic neural networks and graph neural networks to
investigate if graph structure can be exploited as a favorable inductive bias.
To perform this experiment, we designed a graph neural network which explicitly
infers relations between neurons from neural activity and leverages the
inferred graph structure during computations. In our experiments, we found that
graph neural networks generally outperformed structure agnostic models and
excel in generalization on unseen organisms, implying a potential path to
generalizable machine learning in neuroscience. | [
"cs.LG",
"q-bio.NC"
] |
There are many ways to represent a molecule as input to a machine learning
model and each is associated with loss and retention of certain kinds of
information. In the interest of preserving three-dimensional spatial
information, including bond angles and torsions, we have developed libmolgrid,
a general-purpose library for representing three-dimensional molecules using
multidimensional arrays. This library also provides functionality for composing
batches of data suited to machine learning workflows, including data
augmentation, class balancing, and example stratification according to a
regression variable or data subgroup, and it further supports temporal and
spatial recurrences over that data to facilitate work with recurrent neural
networks, dynamical data, and size extensive modeling. It was designed for
seamless integration with popular deep learning frameworks, including Caffe,
PyTorch, and Keras, providing good performance by leveraging graphical
processing units (GPUs) for computationally-intensive tasks and efficient
memory usage through the use of memory views over preallocated buffers.
libmolgrid is a free and open source project that is actively supported,
serving the growing need in the molecular modeling community for tools that
streamline the process of data ingestion, representation construction, and
principled machine learning model development. | [
"cs.LG",
"physics.chem-ph",
"q-bio.BM"
] |
In many advanced video-based applications, background modeling is a
pre-processing step to eliminate redundant data, for instance in tracking or
video surveillance applications. Over the past years, background subtraction
has usually been based on low-level or hand-crafted features such as raw color
components, gradients, or local binary patterns. The performance of background
subtraction algorithms suffers in the presence of various challenges such as
dynamic backgrounds, photometric variations, camera jitter, and shadows. To
handle these challenges for the purpose of accurate background modeling, we
propose a unified framework based on image inpainting. It is an unsupervised
visual feature learning hybrid Generative Adversarial algorithm based on
context prediction. We also present a solution for random region inpainting by
fusing center region inpainting and random region inpainting with the help of
the Poisson blending technique. Furthermore, we also evaluate foreground object
detection by fusing our proposed method with morphological operations. The
comparison of our proposed method with 12
state-of-the-art methods shows its stability in the application of background
estimation and foreground detection. | [
"cs.CV"
] |
Recent studies on unsupervised image-to-image translation have made remarkable
progress by training a pair of generative adversarial networks with
a cycle-consistent loss. However, such unsupervised methods may generate
inferior results when the image resolution is high or the two image domains are
of significant appearance differences, such as the translations between
semantic layouts and natural images in the Cityscapes dataset. In this paper,
we propose novel Stacked Cycle-Consistent Adversarial Networks (SCANs) by
decomposing a single translation into multi-stage transformations, which not
only boost the image translation quality but also enable higher resolution
image-to-image translations in a coarse-to-fine manner. Moreover, to properly
exploit the information from the previous stage, an adaptive fusion block is
devised to learn a dynamic integration of the current stage's output and the
previous stage's output. Experiments on multiple datasets demonstrate that our
proposed approach can improve the translation quality compared with previous
single-stage unsupervised methods. | [
"cs.CV"
] |
Interpretability in Graph Convolutional Networks (GCNs) has been explored to
some extent in computer vision in general, yet, in the medical domain, it
requires further examination. Moreover, most of the interpretability approaches
for GCNs, especially in the medical domain, focus on interpreting the model in
a post hoc fashion. In this paper, we propose an interpretable graph
learning-based model which 1) interprets the clinical relevance of the input
features towards the task, 2) uses the explanation to improve the model
performance and, 3) learns a population level latent graph that may be used to
interpret the cohort's behavior. In a clinical scenario, such a model can
assist the clinical experts in better decision-making for diagnosis and
treatment planning. The main novelty lies in the interpretable attention module
(IAM), which directly operates on multi-modal features. Our IAM learns the
attention for each feature based on the unique interpretability-specific
losses. We show the application on two publicly available datasets, Tadpole and
UKBB, for three tasks of disease, age, and gender prediction. Our proposed
model shows superior performance with respect to the compared methods, with an
average accuracy increase of 3.2% for Tadpole, 1.6% for UKBB Gender, and
2% for the UKBB Age prediction task. Further, we show exhaustive validation and
clinical interpretation of our results. | [
"cs.CV",
"cs.LG"
] |
Severe infectious diseases such as the novel coronavirus (COVID-19) pose a
huge threat to public health. Stringent control measures, such as school
closures and stay-at-home orders, while having significant effects, also bring
huge economic losses. A crucial question for policymakers around the world is
how to make the trade-off and implement the appropriate interventions. In this
work, we propose a Multi-Objective Reinforcement Learning framework to
facilitate the data-driven decision making and minimize the long-term overall
cost. Specifically, at each decision point, a Bayesian epidemiological model is
first learned as the environment model, and then we use the proposed
model-based multi-objective planning algorithm to find a set of Pareto-optimal
policies. This framework, combined with the prediction bands for each policy,
provides a real-time decision support tool for policymakers. The application is
demonstrated with the spread of COVID-19 in China. | [
"cs.LG",
"stat.ML"
] |
In this paper, we present a self-training method, named ST3D++, with a
holistic pseudo label denoising pipeline for unsupervised domain adaptation on
3D object detection. ST3D++ aims at reducing noise in pseudo label generation
as well as alleviating the negative impacts of noisy pseudo labels on model
training. First, ST3D++ pre-trains the 3D object detector on the labeled source
domain with random object scaling (ROS) which is designed to reduce target
domain pseudo label noise arising from object scale bias of the source domain.
Then, the detector is progressively improved through alternating between
generating pseudo labels and training the object detector with pseudo-labeled
target domain data. Here, we equip the pseudo label generation process with a
hybrid quality-aware triplet memory to improve the quality and stability of
generated pseudo labels. Meanwhile, in the model training stage, we propose a
source data assisted training strategy and a curriculum data augmentation
policy to effectively rectify noisy gradient directions and avoid model
over-fitting to noisy pseudo labeled data. These specific designs enable the
detector to be trained on meticulously refined pseudo labeled target data with
denoised training signals, and thus effectively facilitate adapting an object
detector to a target domain without requiring annotations. Finally, our method
is assessed on four 3D benchmark datasets (i.e., Waymo, KITTI, Lyft, and
nuScenes) for three common categories (i.e., car, pedestrian and bicycle).
ST3D++ achieves state-of-the-art performance on all evaluated settings,
outperforming the corresponding baseline by a large margin (e.g., 9.6% $\sim$
38.16% on Waymo $\rightarrow$ KITTI in terms of AP$_{\text{3D}}$), and even
surpasses the fully supervised oracle results on the KITTI 3D object detection
benchmark with target prior. Code will be available. | [
"cs.CV",
"cs.AI"
] |
Genetic Programming (GP) has been primarily used to tackle optimization,
classification, and feature selection related tasks. The widespread use of GP
is due to its flexible and comprehensible tree-type structure. Similarly,
research is also gaining momentum in the field of Image Processing, because of
its promising results over vast areas of applications ranging from medical
Image Processing to multispectral imaging. Image Processing is mainly involved
in applications such as computer vision, pattern recognition, image
compression, storage, and medical diagnostics. This universal nature of images
and the complexity of their associated algorithms gave an impetus to the
exploration of GP. GP has thus been used in different ways for Image Processing
since its inception. Many interesting GP techniques have been developed and
employed in the field of Image Processing, and consequently, we aim to provide
the research community an extensive view of these techniques. This survey thus
presents the diverse applications of GP in Image Processing and provides useful
resources for further research. Also, the comparison of different parameters
used in different applications of Image Processing is summarized in tabular
form. Moreover, analysis of the different parameters used in Image Processing
related tasks is carried out to save the time needed in the future for
evaluating the parameters of GP. As more advancement is made in GP
methodologies, its success in solving complex tasks, not only in Image
Processing but also in other fields, may increase. Additionally, guidelines are
provided for applying GP in Image Processing related tasks, the pros and cons
of GP techniques are discussed, and some future directions are also set. | [
"cs.CV",
"cs.AI"
] |
With the rise of Transformers as the standard for language processing, and
their advancements in computer vision, along with their unprecedented size and
amounts of training data, many have come to believe that they are not suitable
for small sets of data. This trend leads to great concerns, including but not
limited to: limited availability of data in certain scientific domains and the
exclusion of those with limited resource from research in the field. In this
paper, we dispel the myth that transformers are "data hungry" and therefore can
only be applied to large sets of data. We show for the first time that with the
right size and tokenization, transformers can perform head-to-head with
state-of-the-art CNNs on small datasets, often with better accuracy and fewer
parameters. Our model eliminates the requirement for class token and positional
embeddings through a novel sequence pooling strategy and the use of
convolution/s. It is flexible in terms of model size, and can have as few as
0.28M parameters while achieving good results. Our model can reach 98.00%
accuracy when training from scratch on CIFAR-10, which is a significant
improvement over previous Transformer based models. It also outperforms many
modern CNN based approaches, such as ResNet, and even some recent NAS-based
approaches, such as Proxyless-NAS. Our simple and compact design democratizes
transformers by making them accessible to those with limited computing
resources and/or dealing with small datasets. Our method also works on larger
datasets, such as ImageNet (82.71% accuracy with 29% parameters of ViT), and
NLP tasks as well. Our code and pre-trained models are publicly available at
https://github.com/SHI-Labs/Compact-Transformers. | [
"cs.CV",
"cs.LG"
] |
The Encoder-Decoder architecture is a mainstream deep learning model for
biomedical image segmentation. The encoder fully compresses the input and
generates encoded features, and the decoder then produces dense predictions
using encoded features. However, decoders are still under-explored in such
architectures. In this paper, we comprehensively study the state-of-the-art
Encoder-Decoder architectures, and propose a new universal decoder, called
cascade decoder, to improve semantic segmentation accuracy. Our cascade decoder
can be embedded into existing networks and trained altogether in an end-to-end
fashion. The cascade decoder structure aims to conduct more effective decoding
of hierarchically encoded features and is more compatible with common encoders
than the known decoders. We replace the decoders of state-of-the-art models
with our cascade decoder for several challenging biomedical image segmentation
tasks, and the considerable improvements achieved demonstrate the efficacy of
our new decoding method. | [
"cs.CV"
] |
Generating images according to natural language descriptions is a challenging
task. Prior research has mainly focused on enhancing the quality of generation
by investigating the use of spatial attention and/or textual attention, thereby
neglecting the relationship between channels. In this work, we propose the
Combined Attention Generative Adversarial Network (CAGAN) to generate
photo-realistic images according to textual descriptions. The proposed CAGAN
utilises two attention models: word attention to draw different sub-regions
conditioned on related words; and squeeze-and-excitation attention to capture
non-linear interaction among channels. With spectral normalisation to stabilise
training, our proposed CAGAN improves the state of the art on the IS and FID on
the CUB dataset and the FID on the more challenging COCO dataset. Furthermore,
we demonstrate that judging a model by a single evaluation metric can be
misleading by developing an additional model adding local self-attention which
scores a higher IS, outperforming the state of the art on the CUB dataset, but
generates unrealistic images through feature repetition. | [
"cs.CV"
] |
Unsupervised learning of generative models has seen tremendous progress over
recent years, in particular due to generative adversarial networks (GANs),
variational autoencoders, and flow-based models. GANs have dramatically
improved sample quality, but suffer from two drawbacks: (i) they mode-drop,
i.e., do not cover the full support of the train data, and (ii) they do not
allow for likelihood evaluations on held-out data. In contrast,
likelihood-based training encourages models to cover the full support of the
train data, but yields poorer samples. These mutual shortcomings can in
principle be addressed by training generative latent variable models in a
hybrid adversarial-likelihood manner. However, we show that commonly made
parametric assumptions create a conflict between them, making successful hybrid
models non-trivial. As a solution, we propose to use deep invertible
transformations in the latent variable decoder. This approach allows for
likelihood computations in image space, is more efficient than fully invertible
models, and can take full advantage of adversarial training. We show that our
model significantly improves over existing hybrid models: offering GAN-like
samples, IS and FID scores that are competitive with fully adversarial models,
and improved likelihood scores. | [
"cs.CV",
"cs.LG"
] |
Fine-grained action segmentation and recognition is an important yet
challenging task. Given a long, untrimmed sequence of kinematic data, the task
is to classify the action at each time frame and segment the time series into
the correct sequence of actions. In this paper, we propose a novel framework
that combines a temporal Conditional Random Field (CRF) model with a powerful
frame-level representation based on discriminative sparse coding. We introduce
an end-to-end algorithm for jointly learning the weights of the CRF model,
which include action classification and action transition costs, as well as an
overcomplete dictionary of mid-level action primitives. This results in a CRF
model that is driven by sparse coding features obtained using a discriminative
dictionary that is shared among different actions and adapted to the task of
structured output learning. We evaluate our method on three surgical tasks
using kinematic data from the JIGSAWS dataset, as well as on a food preparation
task using accelerometer data from the 50 Salads dataset. Our results show that
the proposed method performs on par or better than state-of-the-art methods. | [
"cs.CV"
] |
Many previous methods have shown the importance of considering semantically
relevant objects for performing event recognition, yet none of the methods have
exploited the power of deep convolutional neural networks to directly integrate
relevant object information into a unified network. We present a novel unified
deep CNN architecture which integrates architecturally different, yet
semantically-related object detection networks to enhance the performance of
the event recognition task. Our architecture allows the sharing of the
convolutional layers and a fully connected layer which effectively integrates
event recognition, rigid object detection and non-rigid object detection. | [
"cs.CV"
] |
Imagining a colored realistic image from an arbitrarily drawn sketch is one
of the human capabilities that we are eager for machines to mimic. Unlike
previous methods that either require sketch-image pairs or utilize low-quantity
detected edges as sketches, we study the exemplar-based sketch-to-image (s2i)
synthesis task in a self-supervised learning manner, eliminating the necessity
of the paired sketch data. To this end, we first propose an unsupervised method
to efficiently synthesize line-sketches for general RGB-only datasets. With the
synthetic paired-data, we then present a self-supervised Auto-Encoder (AE) to
decouple the content/style features from sketches and RGB-images, and
synthesize images that are both content-faithful to the sketches and
style-consistent to the RGB-images. While prior works employ either the
cycle-consistence loss or dedicated attentional modules to enforce the
content/style fidelity, we show AE's superior performance with pure
self-supervisions. To further improve the synthesis quality in high resolution,
we also leverage an adversarial network to refine the details of synthetic
images. Extensive experiments on 1024*1024 resolution demonstrate a new
state-of-the-art performance of the proposed model on CelebA-HQ and Wiki-Art
datasets. Moreover, with the proposed sketch generator, the model shows a
promising performance on style mixing and style transfer, which require
synthesized images to be both style-consistent and semantically meaningful. Our
code is available on
https://github.com/odegeasslbc/Self-Supervised-Sketch-to-Image-Synthesis-PyTorch,
and please visit https://create.playform.io/my-projects?mode=sketch for an
online demo of our model. | [
"cs.CV",
"cs.GR",
"cs.MM"
] |
Outlier feature matches and loop-closures that survived front-end data
association can lead to catastrophic failures in the back-end optimization of
large-scale point cloud based 3D reconstruction. To alleviate this problem, we
propose a probabilistic approach for robust back-end optimization in the
presence of outliers. More specifically, we model the problem as a Bayesian
network and solve it using the Expectation-Maximization algorithm. Our approach
leverages a long-tailed Cauchy distribution to suppress outlier feature
matches in the odometry constraints, and a Cauchy-Uniform mixture model with a
set of binary latent variables to simultaneously suppress outlier loop-closure
constraints and outlier feature matches in the inlier loop-closure constraints.
Furthermore, we show that by using a Gaussian-Uniform mixture model, our
approach degenerates to the formulation of a state-of-the-art approach for
robust indoor reconstruction. Experimental results demonstrate that our
approach has comparable performance with the state-of-the-art on a benchmark
indoor dataset, and outperforms it on a large-scale outdoor dataset. Our source
code can be found on the project website. | [
"cs.CV",
"cs.RO"
] |
This paper explores non-convex composition optimization in which both the
inner and outer functions are finite sums with a large number of component
functions. This problem arises in some important applications such as nonlinear
embedding and reinforcement learning. Although existing approaches such as
stochastic gradient descent (SGD) and stochastic variance reduced gradient
(SVRG) descent can be applied to solve this problem, their query complexity
tends to be high, especially when the number of inner component functions is
large. In this paper, we apply the variance-reduced technique to derive two
variance reduced algorithms that significantly improve the query complexity if
the number of inner component functions is large. To the best of our knowledge,
this is the first work that establishes the query complexity analysis for
non-convex stochastic composition. Experiments validate the proposed algorithms
and theoretical analysis. | [
"stat.ML",
"math.OC"
] |
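For reference, the nested finite-sum objective described above can be written out as follows, with the chain rule showing why each gradient query touches both the inner and outer sums (the symbols F, f_i, g_j follow the usual compositional-optimization notation and are not necessarily the paper's exact ones):

```latex
\min_{x \in \mathbb{R}^d} \; F(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} f_i\!\Big(\frac{1}{m}\sum_{j=1}^{m} g_j(x)\Big),
\qquad
\nabla F(x) \;=\; \Big(\frac{1}{m}\sum_{j=1}^{m} \nabla g_j(x)\Big)^{\!\top}
\frac{1}{n}\sum_{i=1}^{n} \nabla f_i\!\Big(\frac{1}{m}\sum_{j=1}^{m} g_j(x)\Big).
```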
We investigated the effect of different training scenarios on predicting the
(retro)synthesis of chemical compounds using a text-like representation of
chemical reactions (SMILES) and Natural Language Processing neural network
Transformer architecture. We showed that data augmentation, which is a powerful
method used in image processing, eliminated the effect of data memorization by
neural networks, and improved their performance for the prediction of new
sequences. This effect was observed when augmentation was applied to the input
and the target data simultaneously. The top-5 accuracy was 84.8% for
the prediction of the largest fragment (thus identifying principal
transformation for classical retro-synthesis) for the USPTO-50k test dataset
and was achieved by a combination of SMILES augmentation and a beam search
algorithm. The same approach provided significantly better results for the
prediction of direct reactions from the single-step USPTO-MIT test set. Our
model achieved 90.6% top-1 and 96.1% top-5 accuracy for its challenging mixed
set and 97% top-5 accuracy for the USPTO-MIT separated set. It also
significantly improved results for USPTO-full set single-step retrosynthesis
for both top-1 and top-10 accuracies. The appearance frequency of the most
abundantly generated SMILES was well correlated with the prediction outcome and
can be used as a measure of the quality of reaction prediction. | [
"cs.LG",
"stat.ML"
] |
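The SMILES augmentation idea above (many textually different strings for one molecule) can be sketched with RDKit's randomized SMILES output; this assumes an RDKit version whose MolToSmiles accepts the doRandom flag and is not the authors' exact augmentation pipeline.

```python
from rdkit import Chem  # assumes an RDKit build where MolToSmiles supports doRandom

def augment_smiles(smiles, n_variants=5):
    """Generate non-canonical SMILES for the same molecule by randomizing
    the atom traversal order -- a common text-level augmentation trick."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return []
    variants = {Chem.MolToSmiles(mol, canonical=False, doRandom=True)
                for _ in range(3 * n_variants)}   # oversample, then deduplicate
    return sorted(variants)[:n_variants]

# toy usage: several textually different but chemically identical strings
print(augment_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```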
In this paper, we aim at addressing two critical issues in the 3D detection
task, including the exploitation of multiple sensors~(namely LiDAR point cloud
and camera image), as well as the inconsistency between the localization and
classification confidence. To this end, we propose a novel fusion module to
enhance the point features with semantic image features in a point-wise manner
without any image annotations. Besides, a consistency enforcing loss is
employed to explicitly encourage the consistency of both the localization and
classification confidence. We design an end-to-end learnable framework named
EPNet to integrate these two components. Extensive experiments on the KITTI and
SUN-RGBD datasets demonstrate the superiority of EPNet over the
state-of-the-art methods. Codes and models are available at:
\url{https://github.com/happinesslz/EPNet}. | [
"cs.CV"
] |
Time series classification using novel techniques has experienced a recent
resurgence and growing interest from statisticians, subject-domain scientists,
and decision makers in business and industry. This is primarily due to the ever
increasing amount of big and complex data produced as a result of technological
advances. A motivating example is that of Google trends data, which exhibit
highly nonlinear behavior. Although a rich literature exists for addressing
this problem, existing approaches mostly rely on first and second order
properties of the time series, since they typically assume linearity of the
underlying process. Often, these are inadequate for effective classification of
nonlinear time series data such as Google Trends data. Given these
methodological deficiencies and the abundance of nonlinear time series that
persist among real-world phenomena, we introduce an approach that merges higher
order spectral analysis (HOSA) with deep convolutional neural networks (CNNs)
for classifying time series. The effectiveness of our approach is illustrated
using simulated data and two motivating industry examples that involve Google
trends data and electronic device energy consumption data. | [
"stat.ML",
"cs.LG"
] |
Inverse Tone Mapping (ITM) methods attempt to reconstruct High Dynamic Range
(HDR) information from Low Dynamic Range (LDR) image content. The dynamic range
of well-exposed areas must be expanded and any missing information due to
over/under-exposure must be recovered (hallucinated). The majority of methods
focus on the former and are relatively successful, while most attempts on the
latter are not of sufficient quality, even ones based on Convolutional Neural
Networks (CNNs). A major factor for the reduced inpainting quality in some
works is the choice of loss function. Work based on Generative Adversarial
Networks (GANs) shows promising results for image synthesis and LDR inpainting,
suggesting that GAN losses can improve inverse tone mapping results. This work
presents a GAN-based method that hallucinates missing information from badly
exposed areas in LDR images and compares its efficacy with alternative
variations. The proposed method is quantitatively competitive with
state-of-the-art inverse tone mapping methods, providing good dynamic range
expansion for well-exposed areas and plausible hallucinations for saturated and
under-exposed areas. A density-based normalisation method, targeted for HDR
content, is also proposed, as well as an HDR data augmentation method targeted
for HDR hallucination. | [
"cs.CV",
"cs.GR"
] |
In this paper, we introduce a novel conditional generative adversarial
network that creates dense 3D point clouds, with color, for assorted classes of
objects in an unsupervised manner. To overcome the difficulty of capturing
intricate details at high resolutions, we propose a point transformer that
progressively grows the network through the use of graph convolutions. The
network is composed of a leaf output layer and an initial set of branches.
Every training iteration evolves a point vector into a point cloud of
increasing resolution. After a fixed number of iterations, the number of
branches is increased by replicating the last branch. Experimental results show
that our network is capable of learning and mimicking a 3D data distribution,
and produces colored point clouds with fine details at multiple resolutions. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Object detection with transformers (DETR) reaches competitive performance
with Faster R-CNN via a transformer encoder-decoder architecture. Inspired by
the great success of pre-training transformers in natural language processing,
we propose a pretext task named random query patch detection to Unsupervisedly
Pre-train DETR (UP-DETR) for object detection. Specifically, we randomly crop
patches from the given image and then feed them as queries to the decoder. The
model is pre-trained to detect these query patches from the original image.
During the pre-training, we address two critical issues: multi-task learning
and multi-query localization. (1) To trade off classification and localization
preferences in the pretext task, we freeze the CNN backbone and propose a patch
feature reconstruction branch which is jointly optimized with patch detection.
(2) To perform multi-query localization, we introduce UP-DETR from single-query
patch and extend it to multi-query patches with object query shuffle and
attention mask. In our experiments, UP-DETR significantly boosts the
performance of DETR with faster convergence and higher average precision on
object detection, one-shot detection and panoptic segmentation. Code and
pre-training models: https://github.com/dddzg/up-detr. | [
"cs.CV"
] |
Segmentation of findings in the gastrointestinal tract is a challenging yet
important task and a key building block for reliable automatic decision support
systems. In this work, we present our solution for
the Medico 2020 task, which focused on the problem of colon polyp segmentation.
We present our simple but efficient idea of using an augmentation method that
uses grids in a pyramid-like manner (large to small) for segmentation. Our
results show that the proposed methods work as intended and can also lead to
comparable results when competing with other methods. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.MM"
] |
In this paper, we propose a novel data augmentation technique (ANDA) applied
to the Salient Object Detection (SOD) context. Standard data augmentation
techniques proposed in the literature, such as image cropping, rotation,
flipping, and resizing, only generate variations of the existing examples,
providing a limited generalization. Our method has the novelty of creating new
images, by combining an object with a new background while retaining part of
its salience in this new context. To do so, the ANDA technique relies on the
linear combination between labeled salient objects and new backgrounds,
generated by removing the original salient object in a process known as image
inpainting. Our proposed technique allows for more precise control of the
object's position and size while preserving background information. Aiming to
evaluate our proposed method, we trained multiple deep neural networks and
compared the effect that our technique has in each one. We also compared our
method with other data augmentation techniques. Our findings show that,
depending on the network, the improvement can be up to 14.1% in the F-measure,
with a decrease of up to 2.6% in the Mean Absolute Error. | [
"cs.CV"
] |
Although binary visual representations are traditionally designed mainly to
reduce computational and storage costs in image retrieval research, this
paper argues that binary visual representations can be applied to large scale
recognition and detection problems in addition to hashing in retrieval.
Furthermore, the binary nature may make it generalize better than its
real-valued counterparts. Existing binary hashing methods are either two-stage
or hinge on loss-term regularization or saturated functions, and hence converge
slowly and only emit soft binary values. This paper proposes Approximately
Binary Clamping (ABC), which is non-saturating, end-to-end trainable, with fast
convergence and can output true binary visual representations. ABC achieves
comparable accuracy in ImageNet classification as its real-valued counterpart,
and even generalizes better in object detection. On benchmark image retrieval
datasets, ABC also outperforms existing hashing methods. | [
"cs.CV"
] |
At present, deep learning is increasingly applied to monocular image depth
estimation and has shown promising results. The currently preferred method for
monocular depth estimation is supervised learning based on ground-truth depth,
but it requires an abundance of expensive ground-truth depth as supervised
labels. Therefore, researchers began to work on unsupervised depth estimation
methods. Although the accuracy of unsupervised depth estimation methods is
still lower than that of supervised methods, they remain a promising research
direction.
In this paper, based on the experimental finding that stereo matching models
outperform monocular models under the same unsupervised depth estimation
framework, we propose an unsupervised monocular stereo matching method. To
achieve monocular stereo matching, we construct two unsupervised deep
convolutional network models: one reconstructs the right view from the left
view, and the other estimates the depth map from the reconstructed right view
and the original left view. The two network models are piped together during
the test phase. The output of this method outperforms the current mainstream
unsupervised depth estimation methods on the challenging KITTI dataset. | [
"cs.CV"
] |
Chinese word segmentation (CWS) is the basis of Chinese natural language
processing (NLP), and the quality of word segmentation directly affects
downstream NLP tasks. Recently, with the renewed rise of artificial
intelligence, the Long Short-Term Memory (LSTM) neural network, which lends
itself readily to sequence modeling, has been widely utilized in various NLP
tasks and performs well. The attention mechanism is an ingenious method to
address the memory compression problem of LSTMs. Furthermore, inspired by the
powerful abilities of bidirectional LSTM models for sequence modeling and of
CRF models for decoding, we propose a Bidirectional LSTM-CRF Attention-based
Model in this paper. Experiments on the PKU and MSRA benchmark datasets show
that our model performs better than baseline methods built on other neural
networks. | [
"cs.LG"
] |
In this paper, we present a fast exemplar-based image colorization approach
using color embeddings named Color2Embed. Generally, due to the difficulty of
obtaining input and ground-truth image pairs, it is hard to train an
exemplar-based colorization model in an unsupervised and unpaired manner.
Current algorithms usually involve two procedures: i)
retrieving a large number of reference images with high similarity for
preparing training dataset, which is inevitably time-consuming and tedious; ii)
designing complicated modules to transfer the colors of the reference image to
the target image, by calculating and leveraging the deep semantic
correspondence between them (e.g., non-local operation), which is
computationally expensive during testing. In contrast to previous methods, we
adopt a self-augmented self-reference learning scheme, where the reference
image is generated from the original color image by graphical transformations,
so that training can be formulated in a paired manner. Second, in order to
reduce the process time, our method explicitly extracts the color embeddings
and exploits a progressive style feature Transformation network, which injects
the color embeddings into the reconstruction of the final image. This design is
much more lightweight and intelligible, achieving appealing performance with a
fast processing speed. | [
"cs.CV",
"cs.MM"
] |
Machine learning plays an increasing role in intelligent tutoring systems as
both the amount of data available and specialization among students grow.
Nowadays, these systems are frequently deployed on mobile applications. Users
on such mobile education platforms are dynamic, frequently being added,
accessing the application with varying levels of focus, and changing while
using the service. The education material itself, on the other hand, is often
static and is an exhaustible resource whose use in tasks such as problem
recommendation must be optimized. The ability to update user models with
respect to educational material in real-time is thus essential; however,
existing approaches require time-consuming re-training of user features
whenever new data is added. In this paper, we introduce a neural pedagogical
agent for real-time user modeling in the task of predicting user response
correctness, a central task for mobile education applications. Our model,
inspired by work in natural language processing on sequence modeling and
machine translation, updates user features in real-time via bidirectional
recurrent neural networks with an attention mechanism over embedded
question-response pairs. We experiment on the mobile education application
SantaTOEIC, which has 559k users, 66M response data points as well as a set of
10k study problems each expert-annotated with topic tags and gathered since
2016. Our model outperforms existing approaches over several metrics in
predicting user response correctness, notably outperforming other methods on
new users without large question-response histories. Additionally, our
attention mechanism and annotated tag set allow us to create an interpretable
education platform, with a smart review system that addresses the
aforementioned issue of varied user attention and problem exhaustion. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
Conventional methods of 3D object generative modeling learn volumetric
predictions using deep networks with 3D convolutional operations, which are
direct analogies to classical 2D ones. However, these methods are
computationally wasteful when attempting to predict 3D shapes, where information is
rich only on the surfaces. In this paper, we propose a novel 3D generative
modeling framework to efficiently generate object shapes in the form of dense
point clouds. We use 2D convolutional operations to predict the 3D structure
from multiple viewpoints and jointly apply geometric reasoning with 2D
projection optimization. We introduce the pseudo-renderer, a differentiable
module to approximate the true rendering operation, to synthesize novel depth
maps for optimization. Experimental results for single-image 3D object
reconstruction tasks show that our method outperforms state-of-the-art methods in terms
of shape similarity and prediction density. | [
"cs.CV",
"cs.LG"
] |
We study a recent class of models which uses graph neural networks (GNNs) to
improve forecasting in multivariate time series.
The core assumption behind these models is that there is a latent graph
between the time series (nodes) that governs the evolution of the multivariate
time series.
By parameterizing a graph in a differentiable way, the models aim to improve
forecasting quality.
We compare four recent models of this class on the forecasting task. Further,
we perform ablations to study their behavior under changing conditions, e.g.,
when disabling the graph-learning modules and providing the ground-truth
relations instead. Based on our findings, we propose novel ways of combining
the existing architectures. | [
"cs.LG"
] |
The convolution operation suffers from a limited receptive field, while
global modeling is fundamental to dense prediction tasks, such as semantic
segmentation. In this paper, we apply graph convolution into the semantic
segmentation task and propose an improved Laplacian. The graph reasoning is
directly performed in the original feature space organized as a spatial
pyramid. Different from existing methods, our Laplacian is data-dependent and
we introduce an attention diagonal matrix to learn a better distance metric. It
gets rid of projecting and re-projecting processes, which makes our proposed
method a light-weight module that can be easily plugged into current computer
vision architectures. More importantly, performing graph reasoning directly in
the feature space retains spatial relationships and makes spatial pyramid
possible to explore multiple long-range contextual patterns from different
scales. Experiments on Cityscapes, COCO Stuff, PASCAL Context and PASCAL VOC
demonstrate the effectiveness of our proposed methods on semantic segmentation.
We achieve comparable performance with advantages in computational and memory
overhead. | [
"cs.CV"
] |
The compression of Generative Adversarial Networks (GANs) has lately drawn
attention, due to the increasing demand for deploying GANs into mobile devices
for numerous applications such as image translation, enhancement and editing.
However, compared to the substantial efforts to compressing other deep models,
the research on compressing GANs (usually the generators) remains in its
infancy. Existing GAN compression algorithms are limited to handling
specific GAN architectures and losses. Inspired by the recent success of AutoML
in deep compression, we introduce AutoML to GAN compression and develop an
AutoGAN-Distiller (AGD) framework. Starting with a specifically designed
efficient search space, AGD performs an end-to-end discovery for new efficient
generators, given the target computational resource constraints. The search is
guided by the original GAN model via knowledge distillation, therefore
fulfilling the compression. AGD is fully automatic, standalone (i.e., needing
no trained discriminators), and generically applicable to various GAN models.
We evaluate AGD in two representative GAN tasks: image translation and super
resolution. Without bells and whistles, AGD yields remarkably lightweight yet
more competitive compressed models, that largely outperform existing
alternatives. Our codes and pretrained models are available at
https://github.com/TAMU-VITA/AGD. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Online garment shopping has gained many customers in recent years. Describing
a dress using keywords does not always yield the proper results, which in turn
leads to customer dissatisfaction. A visual search-based system would be
enormously beneficial to the industry. Hence, we propose a framework that can
retrieve similar clothes that can be found in an image. The first task is to
extract the garment from the input image (street photo). There are various
challenges for that, including pose, illumination, and background clutter. We
use a Generative Adversarial Network for the task of retrieving the garment
that the person in the image was wearing. It has been shown that GAN can
retrieve the garment very efficiently despite the challenges of street photos.
Finally, a Siamese-based matching system takes the retrieved cloth image and
matches it with the clothes in the dataset, giving us the top k matches. We
take a pre-trained Inception-ResNet v1 module as a Siamese network (trained
using triplet loss for face detection) and fine-tune it on the shopping dataset
using center loss. The dataset has been collected in-house. For training the
GAN, we use the LookBook dataset, which is publicly available. | [
"cs.CV"
] |
Deep learning algorithms excel at extracting patterns from raw data, and with
large datasets, they have been very successful in computer vision and natural
language applications. However, in other domains, large datasets from which to
learn representations may not exist. In this work, we develop a novel
multimodal CNN-MLP neural network architecture that utilizes both
domain-specific feature engineering as well as learned representations from raw
data. We illustrate the effectiveness of such network designs in the chemical
sciences, for predicting biodegradability. DeepBioD, a multimodal CNN-MLP
network, is more accurate than either standalone network design and achieves a
classification error rate of 0.125, which is 27% lower than the current
state-of-the-art. Thus, our work indicates that combining traditional feature
engineering with representation learning can be effective, particularly in
situations where labeled data is limited. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
This paper tackles the problem of table structure parsing (TSP) from images
in the wild. In contrast to existing studies that mainly focus on parsing
well-aligned tabular images with simple layouts from scanned PDF documents, we
aim to establish a practical table structure parsing system for real-world
scenarios where tabular input images are taken or scanned with severe
deformation, bending or occlusions. For designing such a system, we propose an
approach named Cycle-CenterNet on top of CenterNet with a novel
cycle-pairing module to simultaneously detect and group tabular cells into
structured tables. In the cycle-pairing module, a new pairing loss function is
proposed for the network training. Alongside with our Cycle-CenterNet, we also
present a large-scale dataset, named Wired Table in the Wild (WTW), which
includes well-annotated structure parsing for tables of multiple styles in
several scenes such as photos, scanned files, web pages, \emph{etc.} In
experiments, we demonstrate that our Cycle-CenterNet consistently achieves the
best table structure parsing accuracy on the new WTW dataset, with a 24.6\%
absolute improvement as evaluated by the TEDS metric. A more comprehensive experimental
analysis also validates the advantages of our proposed methods for the TSP
task. | [
"cs.CV"
] |
Currently, developments in deep learning techniques are proving instrumental
in identifying, classifying, and quantifying patterns in medical images.
Segmentation is one of the important applications in medical image analysis. In
this regard, U-Net is the predominant approach to medical image segmentation
tasks. However, we found that U-Net-based models have limitations in several
aspects: the millions of parameters in the U-Net consume considerable
computational resources and memory, global information is lacking, and some
difficult objects are missed. Therefore, we applied two modifications to improve
the U-Net model: 1) designed and added the dilated channel-wise CNN module, 2)
simplified the U shape network. Based on these two modifications, we proposed a
novel light-weight architecture -- Channel-wise Feature Pyramid Network for
Medicine (CFPNet-M). To evaluate our method, we selected five datasets with
different modalities: thermography, electron microscopy, endoscopy, dermoscopy,
and digital retinal images. We compared its performance with several models of
different parameter scales, including our previously studied DC-UNet and some
commonly used light-weight neural networks. We
applied the Tanimoto similarity instead of the Jaccard index for gray-level
image measurements. By comparison, CFPNet-M achieves comparable segmentation
results on all five medical datasets with only 0.65 million parameters, which
is about 2% of U-Net, and 8.8 MB memory. Meanwhile, the inference speed can
reach 80 FPS on a single RTX 2070Ti GPU with a 256 by 192 pixel input size. | [
"cs.CV",
"eess.IV"
] |
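The CFPNet-M abstract above reports gray-level results with the Tanimoto similarity in place of the Jaccard index. As a point of reference, here is a minimal sketch of the standard continuous Tanimoto coefficient (which reduces to the Jaccard index for binary masks); the function name and the example array are illustrative, not taken from the paper's code.

```python
import numpy as np

def tanimoto_similarity(pred: np.ndarray, target: np.ndarray) -> float:
    """Continuous Tanimoto similarity between two gray-level images in [0, 1]:
    T(a, b) = <a, b> / (||a||^2 + ||b||^2 - <a, b>).
    For binary masks this reduces to the Jaccard index."""
    a = pred.astype(np.float64).ravel()
    b = target.astype(np.float64).ravel()
    inter = np.dot(a, b)
    denom = np.dot(a, a) + np.dot(b, b) - inter
    return float(inter / denom) if denom > 0 else 1.0

# Sanity check: identical images give similarity 1.0
x = np.random.rand(192, 256)
print(tanimoto_similarity(x, x))
```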
Most graph-network-based meta-learning approaches model instance-level
relation of examples. We extend this idea further to explicitly model the
distribution-level relation of one example to all other examples in a 1-vs-N
manner. We propose a novel approach named distribution propagation graph
network (DPGN) for few-shot learning. It conveys both the distribution-level
relations and instance-level relations in each few-shot learning task. To
combine the distribution-level relations and instance-level relations for all
examples, we construct a dual complete graph network which consists of a point
graph and a distribution graph with each node standing for an example. Equipped
with dual graph architecture, DPGN propagates label information from labeled
examples to unlabeled examples within several update generations. In extensive
experiments on few-shot learning benchmarks, DPGN outperforms state-of-the-art
results by a large margin of 5% $\sim$ 12% under the supervised setting and 7%
$\sim$ 13% under the semi-supervised setting. Code will be released. | [
"cs.CV"
] |
Wasserstein distances are increasingly used in a wide variety of applications
in machine learning. Sliced Wasserstein distances form an important subclass
which may be estimated efficiently through one-dimensional sorting operations.
In this paper, we propose a new variant of sliced Wasserstein distance, study
the use of orthogonal coupling in Monte Carlo estimation of Wasserstein
distances and draw connections with stratified sampling, and evaluate our
approaches experimentally in a range of large-scale experiments in generative
modelling and reinforcement learning. | [
"stat.ML",
"cs.LG"
] |
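For context on the abstract above, a minimal sketch of the standard Monte Carlo estimator of the sliced Wasserstein distance (random 1D projections plus sorting) follows. It assumes both point clouds have the same number of samples and does not implement the paper's new variant or its orthogonal-coupling scheme.

```python
import numpy as np

def sliced_wasserstein(x: np.ndarray, y: np.ndarray, n_projections: int = 100,
                       p: int = 2, rng=None) -> float:
    """Monte Carlo estimate of the sliced Wasserstein-p distance between two
    point sets x, y of shape (n, d), via 1D projections and sorting."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # Random directions on the unit sphere
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples, sort along each direction, compare 1D distributions
    xp = np.sort(x @ theta.T, axis=0)
    yp = np.sort(y @ theta.T, axis=0)
    return float(np.mean(np.abs(xp - yp) ** p) ** (1.0 / p))

x = np.random.randn(512, 3)
y = np.random.randn(512, 3) + 1.0
print(sliced_wasserstein(x, y))
```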
Exploration in environments with sparse rewards is difficult for artificial
agents. Curiosity-driven learning -- using feed-forward prediction errors as
intrinsic rewards -- has achieved some success in these scenarios, but fails
when faced with action-dependent noise sources. We present aleatoric mapping
agents (AMAs), a neuroscience inspired solution modeled on the cholinergic
system of the mammalian brain. AMAs aim to explicitly ascertain which dynamics
of the environment are unpredictable, regardless of whether those dynamics are
induced by the actions of the agent. This is achieved by generating separate
forward predictions for the mean and variance of future states and reducing
intrinsic rewards for those transitions with high aleatoric variance. We show
AMAs are able to effectively circumvent action-dependent stochastic traps that
immobilise conventional curiosity-driven agents. The code for all experiments
presented in this paper is open sourced:
http://github.com/self-supervisor/Escaping-Stochastic-Traps-With-Aleatoric-Mapping-Agents. | [
"cs.LG",
"cs.AI"
] |
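The aleatoric mapping agents described above predict both the mean and variance of future states and shrink intrinsic rewards on high-variance transitions. Below is a hedged sketch of one plausible formulation in PyTorch; the network sizes, the precision-weighted reward, and the training details are assumptions, not the paper's exact design (see the linked repository for the official code).

```python
import torch
import torch.nn as nn

class AleatoricForwardModel(nn.Module):
    """Forward model predicting mean and log-variance of the next state."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, state_dim)
        self.logvar_head = nn.Linear(hidden, state_dim)

    def forward(self, state, action):
        h = self.body(torch.cat([state, action], dim=-1))
        return self.mean_head(h), self.logvar_head(h)

def intrinsic_reward(model, state, action, next_state):
    """Prediction error down-weighted by predicted aleatoric variance, so that
    action-dependent noisy transitions stop paying out curiosity."""
    mean, logvar = model(state, action)
    var = logvar.exp()
    return (((next_state - mean) ** 2) / var).mean(dim=-1).detach()

# The model itself would typically be fitted with the Gaussian negative
# log-likelihood: 0.5 * (logvar + (next_state - mean) ** 2 / var).
```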
The stereo-matching problem, i.e., matching corresponding features in two
different views to reconstruct depth, is efficiently solved in biology. Yet, it
remains the computational bottleneck for classical machine vision approaches.
By exploiting the properties of event cameras, recently proposed Spiking Neural
Network (SNN) architectures for stereo vision have the potential of simplifying
the stereo-matching problem. Several solutions that combine event cameras with
spike-based neuromorphic processors already exist. However, they are either
simulated on digital hardware or tested on simplified stimuli. In this work, we
use the Dynamic Vision Sensor 3D Human Pose Dataset (DHP19) to validate a
brain-inspired event-based stereo-matching architecture implemented on a
mixed-signal neuromorphic processor with real-world data. Our experiments show
that this SNN architecture, composed of coincidence detectors and disparity
sensitive neurons, is able to provide a coarse estimate of the input disparity
instantaneously, thereby detecting the presence of a stimulus moving in depth
in real-time. | [
"cs.CV",
"cs.AI"
] |
The recognition of unseen attribute-object compositions is critical for making
machines learn to decompose and compose complex concepts as people do. Most of
the existing methods are limited to the composition recognition of
single-attribute-object, and can hardly distinguish the compositions with
similar appearances. In this paper, a graph-based model is proposed that can
flexibly recognize both single- and multi-attribute-object compositions. The
model maps the visual features of images and the attribute-object category
labels represented by word embedding vectors into a latent space. Then,
according to the constraints of the attribute-object semantic association,
distances are calculated between visual features and the corresponding label
semantic features in the latent space. During the inference, the composition
that is closest to the given image feature among all compositions is used as
the reasoning result. In addition, we build a large-scale Multi-Attribute
Dataset (MAD) with 116,099 images and 8,030 composition categories. Experiments
on MAD and two other single-attribute-object benchmark datasets demonstrate the
effectiveness of our approach. | [
"cs.CV"
] |
Superpixel decomposition methods are widely used in computer vision and image
processing applications. By grouping homogeneous pixels, accuracy can be
increased, and the reduced number of elements to process can drastically lower
the computational burden. For most superpixel methods, a trade-off is
computed between 1) color homogeneity, 2) adherence to the image contours and
3) shape regularity of the decomposition. In this paper, we propose a framework
that jointly enforces all these aspects and provides accurate and regular
Superpixels with Contour Adherence using Linear Path (SCALP). During the
decomposition, we propose to consider color features along the linear path
between the pixel and the corresponding superpixel barycenter. A contour prior
is also used to prevent the crossing of image boundaries when associating a
pixel to a superpixel. Finally, in order to improve the decomposition accuracy
and the robustness to noise, we propose to integrate the pixel neighborhood
information, while preserving the same computational complexity. SCALP is
extensively evaluated on a standard segmentation dataset, and the obtained
results outperform the ones of the state-of-the-art methods. SCALP is also
extended for supervoxel decomposition on MRI images. | [
"cs.CV"
] |
Graph Neural Networks (GNNs) have received significant attention due to their
state-of-the-art performance on various graph representation learning tasks.
However, recent studies reveal that GNNs are vulnerable to adversarial attacks,
i.e. an attacker is able to fool the GNNs by perturbing the graph structure or
node features deliberately. While being able to successfully decrease the
performance of GNNs, most existing attacking algorithms require access to
either the model parameters or the training data, which is not practical in the
real world.
In this paper, we develop deeper insights into the Mettack algorithm, which
is a representative grey-box attacking method, and then we propose a
gradient-based black-box attacking algorithm. Firstly, we show that the Mettack
algorithm will perturb the edges unevenly, thus the attack will be highly
dependent on a specific training set. As a result, a simple yet useful strategy
to defend against Mettack is to train the GNN with the validation set.
Secondly, to overcome the drawbacks, we propose the Black-Box Gradient Attack
(BBGA) algorithm. Extensive experiments demonstrate that our proposed method is
able to achieve stable attack performance without accessing the training sets
of the GNNs. Further results show that our proposed method is also applicable
when attacking against various defense methods. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Real-world applications require RL algorithms to act safely. During the learning
process, it is likely that the agent executes sub-optimal actions that may lead
to unsafe/poor states of the system. Exploration is particularly brittle in
high-dimensional state/action space due to increased number of low-performing
actions. In this work, we consider risk-averse exploration in approximate RL
setting. To ensure safety during learning, we propose the distributionally
robust policy iteration scheme that provides lower bound guarantee on
state-values. Our approach induces a dynamic level of risk to prevent poor
decisions and yet preserves the convergence to the optimal policy. Our
formulation results in an efficient algorithm that accounts for a simple
re-weighting of policy actions in the standard policy iteration scheme. We
extend our approach to continuous state/action space and present a practical
algorithm, distributionally robust soft actor-critic, that implements a
different exploration strategy: it acts conservatively at short-term and it
explores optimistically in a long-run. We provide promising experimental
results on continuous control tasks. | [
"stat.ML",
"cs.LG"
] |
Some reinforcement learning methods suffer from high sample complexity
causing them to not be practical in real-world situations. $Q$-function reuse,
a transfer learning method, is one way to reduce the sample complexity of
learning, potentially improving usefulness of existing algorithms. Prior work
has shown the empirical effectiveness of $Q$-function reuse for various
environments when applied to model-free algorithms. To the best of our
knowledge, there has been no theoretical work showing the regret of
$Q$-function reuse when applied to the tabular, model-free setting. We aim to
bridge the gap between theoretical and empirical work in $Q$-function reuse by
providing some theoretical insights on the effectiveness of $Q$-function reuse
when applied to the $Q$-learning with UCB-Hoeffding algorithm. Our main
contribution is showing that in a specific case if $Q$-function reuse is
applied to the $Q$-learning with UCB-Hoeffding algorithm it has a regret that
is independent of the state or action space. We also provide empirical results
supporting our theoretical findings. | [
"cs.LG"
] |
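To make the setting in the abstract above concrete, here is a hedged sketch of tabular Q-learning with UCB-Hoeffding bonuses (in the style of Jin et al., 2018) warm-started from a source Q-function. The environment interface (`reset`, `step`) and the choice to reuse the source values as initialisation are assumptions for illustration; the paper's exact conditions and constants may differ.

```python
import numpy as np

def q_learning_ucb_with_reuse(env, Q_source, H, K, c=1.0, delta=0.05):
    """Episodic tabular Q-learning with UCB-Hoeffding bonuses, warm-started from
    a source Q-function (one simple form of Q-function reuse). env must expose
    reset() -> s and step(s, a, h) -> (r, s'); Q_source has shape (H, S, A).
    These interfaces are assumptions for this sketch."""
    S, A = Q_source.shape[1], Q_source.shape[2]
    Q = Q_source.copy()                      # reuse instead of optimistic init H
    N = np.zeros((H, S, A), dtype=int)       # visit counts per (step, state, action)
    iota = np.log(S * A * H * K / delta)
    for _ in range(K):                       # K episodes
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))      # greedy w.r.t. optimistic Q
            r, s_next = env.step(s, a, h)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)        # standard UCB-Hoeffding step size
            bonus = c * np.sqrt(H**3 * iota / t)
            V_next = min(H, Q[h + 1, s_next].max()) if h + 1 < H else 0.0
            Q[h, s, a] += alpha * (r + V_next + bonus - Q[h, s, a])
            s = s_next
    return Q
```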
Spatio-temporal forecasting is an open research field in which interest is
growing rapidly. In this work we focus on creating a complex deep neural
framework for spatio-temporal traffic forecasting that achieves comparatively
strong performance, adapts to several spatio-temporal conditions, and remains
easy to understand and interpret. Our proposal is
based on an interpretable attention-based neural network in which several
modules are combined in order to capture key spatio-temporal time series
components. Through extensive experimentation, we show how the results of our
approach are stable and better than those of other state-of-the-art
alternatives. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
Neural networks have achieved state of the art performance across a wide
variety of machine learning tasks, often with large and computation-heavy
models. Inducing sparseness as a way to reduce the memory and computation
footprint of these models has seen significant research attention in recent
years. In this paper, we present a new method for \emph{dynamic sparseness},
whereby part of the computations are omitted dynamically, based on the input.
For efficiency, we combined the idea of dynamic sparseness with block-wise
matrix-vector multiplications. In contrast to static sparseness, which
permanently zeroes out selected positions in weight matrices, our method
preserves the full network capabilities by potentially accessing any trained
weights. Yet, matrix vector multiplications are accelerated by omitting a
pre-defined fraction of weight blocks from the matrix, based on the input.
Experimental results on the task of language modeling, using recurrent and
quasi-recurrent models, show that the proposed method can outperform a
magnitude-based static sparseness baseline. In addition, our method achieves
similar language modeling perplexities as the dense baseline, at half the
computational cost at inference time. | [
"cs.LG",
"stat.ML"
] |
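The core computational idea in the abstract above is a block-wise matrix-vector product in which some weight blocks are skipped based on the input. A minimal sketch follows; the gating network that predicts which blocks to keep, and the training procedure, are not shown, and the gate here is random purely for illustration.

```python
import torch

def dynamic_block_matvec(W: torch.Tensor, x: torch.Tensor, gate: torch.Tensor,
                         block: int) -> torch.Tensor:
    """Block-wise matrix-vector product that skips weight blocks whose
    input-dependent gate is off. W: (out, in), x: (in,),
    gate: boolean tensor of shape (out // block, in // block)."""
    out_dim, in_dim = W.shape
    y = torch.zeros(out_dim, dtype=W.dtype)
    for i in range(out_dim // block):
        for j in range(in_dim // block):
            if gate[i, j]:   # only touch blocks selected for this input
                y[i * block:(i + 1) * block] += (
                    W[i * block:(i + 1) * block, j * block:(j + 1) * block]
                    @ x[j * block:(j + 1) * block])
    return y

W, x = torch.randn(8, 8), torch.randn(8)
gate = torch.rand(4, 4) > 0.5                      # in practice predicted from x
y_sparse = dynamic_block_matvec(W, x, gate, 2)     # omits the gated-off blocks
full = dynamic_block_matvec(W, x, torch.ones(4, 4, dtype=torch.bool), 2)
print(torch.allclose(full, W @ x))                 # all blocks on == dense matvec
```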
Attention maps, a popular heatmap-based explanation method for Visual
Question Answering (VQA), are supposed to help users understand the model by
highlighting portions of the image/question used by the model to infer answers.
However, we see that users are often misled by current attention map
visualizations that point to relevant regions despite the model producing an
incorrect answer. Hence, we propose Error Maps that clarify the error by
highlighting image regions where the model is prone to err. Error maps can
indicate when a correctly attended region may be processed incorrectly leading
to an incorrect answer, and hence, improve users' understanding of those cases.
To evaluate our new explanations, we further introduce a metric that simulates
users' interpretation of explanations to evaluate their potential helpfulness
to understand model correctness. We finally conduct user studies to see that
our new explanations help users understand model correctness better than
baselines by an expected 30% and that our proxy helpfulness metrics correlate
strongly ($\rho$>0.97) with how well users can predict model correctness. | [
"cs.CV",
"cs.AI",
"cs.CY",
"cs.HC"
] |
Recently, Generative Adversarial Networks (GANs) have found wide application
in style transfer, image-to-image translation and image super-resolution. In
this paper, a color-depth conditional GAN is proposed to
concurrently resolve the problems of depth super-resolution and color
super-resolution in 3D videos. Firstly, given the low-resolution depth image
and low-resolution color image, a generative network is proposed to leverage
mutual information of color image and depth image to enhance each other in
consideration of the geometry structural dependency of color-depth image in the
same scene. Secondly, three loss functions, including data loss, total
variation loss, and 8-connected gradient difference loss are introduced to
train this generative network in order to keep generated images close to the
real ones, in addition to the adversarial loss. Experimental results
demonstrate that the proposed approach produces high-quality color image and
depth image from low-quality image pair, and it is superior to several other
leading methods. Besides, we use the same neural network framework to resolve
the problem of image smoothing and edge detection at the same time. | [
"cs.CV"
] |
Estimation of parameters in differential equation models can be achieved by
applying learning algorithms to quantitative time-series data. However,
sometimes it is only possible to measure qualitative changes of a system in
response to a controlled condition. In dynamical systems theory, such change
points are known as bifurcations and lie on a function of the controlled
condition called the bifurcation diagram. In this work, we propose a
gradient-based semi-supervised approach for inferring the parameters of
differential equations that produce a user-specified bifurcation diagram. The
cost function contains a supervised error term that is minimal when the model
bifurcations match the specified targets and an unsupervised bifurcation
measure which has gradients that push optimisers towards bifurcating parameter
regimes. The gradients can be computed without the need to differentiate
through the operations of the solver that was used to compute the diagram. We
demonstrate parameter inference with minimal models which explore the space of
saddle-node and pitchfork diagrams and the genetic toggle switch from synthetic
biology. Furthermore, the cost landscape allows us to organise models in terms
of topological and geometric equivalence. | [
"cs.LG",
"math.DS",
"q-bio.QM"
] |
At present, designing convolutional neural network (CNN) architectures
requires both human expertise and labor. New architectures are handcrafted by
careful experimentation or modified from a handful of existing networks. We
introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to
automatically generate high-performing CNN architectures for a given learning
task. The learning agent is trained to sequentially choose CNN layers using
$Q$-learning with an $\epsilon$-greedy exploration strategy and experience
replay. The agent explores a large but finite space of possible architectures
and iteratively discovers designs with improved performance on the learning
task. On image classification benchmarks, the agent-designed networks
(consisting of only standard convolution, pooling, and fully-connected layers)
beat existing networks designed with the same layer types and are competitive
against the state-of-the-art methods that use more complex layer types. We also
outperform existing meta-modeling approaches for network design on image
classification tasks. | [
"cs.LG"
] |
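A hedged sketch of the sampling loop behind the abstract above: $\epsilon$-greedy Q-learning over layer choices, with the validation accuracy of the trained architecture backed up along the sampled trajectory. The layer vocabulary, the state encoding, and the absence of experience replay are simplifications for illustration, not MetaQNN's exact design.

```python
import random
from collections import defaultdict

LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool", "fc", "terminate"]
MAX_DEPTH = 6
Q = defaultdict(lambda: {a: 0.5 for a in LAYER_CHOICES})  # Q-table over (depth, prev layer)

def sample_architecture(epsilon: float):
    """Sample a layer sequence epsilon-greedily; returns (state, action) pairs."""
    trajectory, prev = [], "start"
    for depth in range(MAX_DEPTH):
        state = (depth, prev)
        if random.random() < epsilon:
            action = random.choice(LAYER_CHOICES)
        else:
            action = max(Q[state], key=Q[state].get)
        trajectory.append((state, action))
        if action == "terminate":
            break
        prev = action
    return trajectory

def update_q(trajectory, reward: float, lr: float = 0.1):
    """Q-learning backup with discount 1: intermediate rewards are 0 and the
    validation accuracy of the trained architecture arrives at termination."""
    for i, (state, action) in enumerate(trajectory):
        if i + 1 < len(trajectory):
            target = max(Q[trajectory[i + 1][0]].values())
        else:
            target = reward
        Q[state][action] += lr * (target - Q[state][action])

# Usage: sample, train the resulting CNN, then call update_q(trajectory, val_acc).
```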
Recent work in the multi-agent domain has shown the promise of Graph Neural
Networks (GNNs) to learn complex coordination strategies. However, most current
approaches use minor variants of a Graph Convolutional Network (GCN), which
applies a convolution to the communication graph formed by the multi-agent
system. In this paper, we investigate whether the performance and
generalization of GCNs can be improved upon. We introduce ModGNN, a
decentralized framework which serves as a generalization of GCNs, providing
more flexibility. To test our hypothesis, we evaluate an implementation of
ModGNN against several baselines in the multi-agent flocking problem. We
perform an ablation analysis to show that the most important component of our
framework is one that does not exist in a GCN. By varying the number of agents,
we also demonstrate that an application-agnostic implementation of ModGNN
possesses an improved ability to generalize to new environments. | [
"cs.LG",
"cs.MA",
"cs.RO"
] |
To acquire a new skill, humans learn better and faster if a tutor, based on
their current knowledge level, informs them of how much attention they should
pay to particular content or practice problems. Similarly, a machine learning
model could potentially be trained better with a scorer that "adapts" to its
current learning state and estimates the importance of each training data
instance. Training such an adaptive scorer efficiently is a challenging
problem; in order to precisely quantify the effect of a data instance at a
given time during the training, it is typically necessary to first complete the
entire training process. To efficiently optimize data usage, we propose a
reinforcement learning approach called Differentiable Data Selection (DDS). In
DDS, we formulate a scorer network as a learnable function of the training
data, which can be efficiently updated along with the main model being trained.
Specifically, DDS updates the scorer with an intuitive reward signal: it should
up-weight the data that has a gradient similar to that of a dev set upon which we
would finally like to perform well. Without significant computing overhead, DDS
delivers strong and consistent improvements over several strong baselines on
two very different tasks of machine translation and image classification. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
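The reward signal described in the DDS abstract above can be illustrated with a short sketch: the gradient on a candidate training batch is compared with the gradient on the dev set via cosine similarity. The scorer network and its REINFORCE-style update are omitted, and the batch format is an assumption for this sketch.

```python
import torch

def gradient_alignment_reward(model, loss_fn, train_batch, dev_batch):
    """Reward for the DDS scorer: cosine similarity between the gradient on a
    training batch and the gradient on the dev set. train_batch and dev_batch
    are (inputs, targets) tuples (an assumed interface)."""
    def flat_grad(batch):
        model.zero_grad()
        inputs, targets = batch
        loss_fn(model(inputs), targets).backward()
        return torch.cat([p.grad.reshape(-1) for p in model.parameters()
                          if p.grad is not None])

    g_train = flat_grad(train_batch)
    g_dev = flat_grad(dev_batch)
    return torch.nn.functional.cosine_similarity(g_train, g_dev, dim=0).item()
```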
Neural Architecture Search (NAS) is quickly becoming the standard methodology
to design neural network models. However, NAS is typically compute-intensive
because multiple models need to be evaluated before choosing the best one. To
reduce the computational power and time needed, a proxy task is often used for
evaluating each model instead of full training. In this paper, we evaluate
conventional reduced-training proxies and quantify how well they preserve
ranking between multiple models during search when compared with the rankings
produced by final trained accuracy. We propose a series of zero-cost proxies,
based on recent pruning literature, that use just a single minibatch of
training data to compute a model's score. Our zero-cost proxies use 3 orders of
magnitude less computation but can match and even outperform conventional
proxies. For example, Spearman's rank correlation coefficient between final
validation accuracy and our best zero-cost proxy on NAS-Bench-201 is 0.82,
compared to 0.61 for EcoNAS (a recently proposed reduced-training proxy).
Finally, we use these zero-cost proxies to enhance existing NAS search
algorithms such as random search, reinforcement learning, evolutionary search
and predictor-based search. For all search methodologies and across three
different NAS datasets, we are able to significantly improve sample efficiency,
and thereby decrease computation, by using our zero-cost proxies. For example
on NAS-Bench-101, we achieved the same accuracy 4$\times$ quicker than the best
previous result. Our code is made public at:
https://github.com/mohsaied/zero-cost-nas. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
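In the spirit of the abstract above, the sketch below computes a simple single-minibatch score (the sum of parameter gradient norms). This is only one of several zero-cost proxies the paper evaluates (for example snip and synflow), whose exact definitions differ; the official implementations are in the linked repository.

```python
import torch

def grad_norm_proxy(model: torch.nn.Module, loss_fn, minibatch) -> float:
    """Single-minibatch zero-cost score: one forward/backward pass, then the
    sum of gradient norms over all parameters. minibatch is an assumed
    (inputs, targets) tuple."""
    model.zero_grad()
    inputs, targets = minibatch
    loss_fn(model(inputs), targets).backward()
    return sum(p.grad.norm().item() for p in model.parameters()
               if p.grad is not None)
```

Candidate architectures can then be ranked by this score (e.g. via Spearman correlation against final accuracy) without training any of them.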
The stereo correspondence and reconstruction of endoscopic data sub-challenge
was organized during the Endovis challenge at MICCAI 2019 in Shenzhen, China.
The task was to perform dense depth estimation using 7 training datasets and 2
test sets of structured light data captured using porcine cadavers. These were
provided by a team at Intuitive Surgical. Ten teams participated on the
challenge day. This paper contains three additional methods which were submitted
after the challenge finished as well as a supplemental section from these teams
on issues they found with the dataset. | [
"cs.CV"
] |
Synthesizing high-quality realistic images from text descriptions is a
challenging task. Almost all existing text-to-image Generative Adversarial
Networks employ stacked architecture as the backbone. They utilize cross-modal
attention mechanisms to fuse text and image features, and introduce extra
networks to ensure text-image semantic consistency. In this work, we propose a
much simpler, but more effective text-to-image model than previous works.
Corresponding to the above three limitations, we propose: 1) a novel one-stage
text-to-image backbone which is able to synthesize high-quality images directly
by one pair of generator and discriminator, 2) a novel fusion module called
deep text-image fusion block which deepens the text-image fusion process in
generator, 3) a novel target-aware discriminator composed of matching-aware
gradient penalty and one-way output which promotes the generator to synthesize
more realistic and text-image semantic consistent images without introducing
extra networks. Compared with existing text-to-image models, our proposed
method (i.e., DF-GAN) is simpler but more efficient to synthesize realistic and
text-matching images and achieves better performance. Extensive experiments on
both Caltech-UCSD Birds 200 and COCO datasets demonstrate the superiority of
the proposed model in comparison to state-of-the-art models. | [
"cs.CV"
] |
Neural sequence-to-sequence models are finding increasing use in editing of
documents, for example in correcting a text document or repairing source code.
In this paper, we argue that common seq2seq models (with a facility to copy
single tokens) are not a natural fit for such tasks, as they have to explicitly
copy each unchanged token. We present an extension of seq2seq models capable of
copying entire spans of the input to the output in one step, greatly reducing
the number of decisions required during inference. This extension means that
there are now many ways of generating the same output, which we handle by
deriving a new objective for training and a variation of beam search for
inference that explicitly handles this problem. In our experiments on a range
of editing tasks of natural language and source code, we show that our new
model consistently outperforms simpler baselines. | [
"cs.LG",
"stat.ML"
] |
We show that when a third party, the adversary, steps into the two-party
setting (agent and operator) of safely interruptible reinforcement learning, a
trade-off has to be made between the probability of following the optimal
policy in the limit, and the probability of escaping a dangerous situation
created by the adversary. So far, the work on safely interruptible agents has
assumed a perfect perception of the agent about its environment (no adversary),
and therefore implicitly set the second probability to zero, by explicitly
seeking a value of one for the first probability. We show that (1) agents can
be made both interruptible and adversary-resilient, and (2) the
interruptibility can be made safe in the sense that the agent itself will not
seek to avoid it. We also solve the problem that arises when the agent does not
become completely greedy, i.e. issues with safe exploration in the limit.
Resilience to perturbed perception, safe exploration in the limit, and safe
interruptibility are the three pillars of what we call \emph{virtuously safe
reinforcement learning}. | [
"cs.LG",
"cs.AI",
"cs.GT",
"stat.ML"
] |
Medical image segmentation models are typically supervised by expert
annotations at the pixel-level, which can be expensive to acquire. In this
work, we propose a method that combines the high quality of pixel-level expert
annotations with the scale of coarse DNN-generated saliency maps for training
multi-label semantic segmentation models. We demonstrate the application of our
semi-supervised method, which we call CheXseg, on multi-label chest X-ray
interpretation. We find that CheXseg improves upon the performance (mIoU) of
fully-supervised methods that use only pixel-level expert annotations by 9.7%
and weakly-supervised methods that use only DNN-generated saliency maps by
73.1%. Our best method is able to match radiologist agreement on three out of
ten pathologies and reduces the overall performance gap by 57.2% as compared to
weakly-supervised methods. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
We present a novel resizing module for neural networks: shape adaptor, a
drop-in enhancement built on top of traditional resizing layers, such as
pooling, bilinear sampling, and strided convolution. Whilst traditional
resizing layers have fixed and deterministic reshaping factors, our module
allows for a learnable reshaping factor. Our implementation enables shape
adaptors to be trained end-to-end without any additional supervision, through
which network architectures can be optimised for each individual task, in a
fully automated way. We performed experiments across seven image classification
datasets, and results show that by simply using a set of our shape adaptors
instead of the original resizing layers, performance increases consistently
over human-designed networks, across all datasets. Additionally, we show the
effectiveness of shape adaptors on two other applications: network compression
and transfer learning. The source code is available at:
https://github.com/lorenmt/shape-adaptor. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
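A heavily hedged sketch of the idea in the abstract above: a learnable scalar softly interpolates the output spatial size between two candidate reshaping factors, and two resized branches are blended so that the reshaping factor receives gradients end-to-end. The official shape-adaptor formulation in the linked repository differs in its exact parameterisation; the factors and module names below are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeAdaptor(nn.Module):
    """Learnable resizing layer: a sigmoid-gated scalar picks a reshaping factor
    between two candidates (e.g. 0.5x pooling and identity), and the two resized
    branches are blended so the factor is trained without extra supervision."""
    def __init__(self, r1: float = 0.5, r2: float = 1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1))    # learned end-to-end
        self.r1, self.r2 = r1, r2

    def forward(self, x):
        s = torch.sigmoid(self.alpha)
        factor = (1 - s) * self.r1 + s * self.r2     # learnable reshaping factor
        f = float(factor.item())
        size = (max(1, round(x.shape[-2] * f)), max(1, round(x.shape[-1] * f)))
        branch_small = F.interpolate(F.avg_pool2d(x, 2), size=size,
                                     mode="bilinear", align_corners=False)
        branch_same = F.interpolate(x, size=size, mode="bilinear", align_corners=False)
        return (1 - s) * branch_small + s * branch_same

# Usage: y = ShapeAdaptor()(torch.randn(1, 16, 32, 32))  # output size is learned
```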
Inverse reinforcement learning (IRL) is the problem of learning the
preferences of an agent from the observations of its behavior on a task. While
this problem has been well investigated, the related problem of {\em online}
IRL---where the observations are incrementally accrued, yet the demands of the
application often prohibit a full rerun of an IRL method---has received
relatively less attention. We introduce the first formal framework for online
IRL, called incremental IRL (I2RL), and a new method that advances maximum
entropy IRL with hidden variables, to this setting. Our formal analysis shows
that the new method has a monotonically improving performance with more
demonstration data, as well as probabilistically bounded error, both under full
and partial observability. Experiments in a simulated robotic application of
penetrating a continuous patrol under occlusion shows the relatively improved
performance and speed up of the new method and validates the utility of online
IRL. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
In this paper, we propose the pyramid fusion dark channel prior (PF-DCP) for
single image dehazing. Based on the well-known Dark Channel Prior (DCP), we
introduce an easy yet effective approach PF-DCP by employing the DCP algorithm
at a pyramid of multi-scale images to alleviate the problem of patch size
selection. In this case, we obtain the final transmission map by fusing
transmission maps at each level to recover a high-quality haze-free image.
Experiments on RESIDE SOTS show that PF-DCP not only outperforms the
traditional prior-based methods by a large margin but also achieves comparable
or even better results than state-of-the-art deep learning approaches.
Furthermore, the visual quality is also greatly improved with much fewer color
distortions and halo artifacts. | [
"cs.CV"
] |
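To illustrate the pyramid idea in the abstract above, the sketch below estimates a DCP transmission map at several image scales and fuses them at full resolution. The simple mean fusion, nearest-neighbour upsampling and parameter values (patch size, omega) are assumptions; the paper's fusion rule may differ.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, patch: int) -> np.ndarray:
    """Dark channel of an RGB image (H, W, 3) with values in [0, 1]."""
    return minimum_filter(img.min(axis=2), size=patch)

def pyramid_fused_transmission(img: np.ndarray, A: np.ndarray,
                               levels: int = 3, patch: int = 15, omega: float = 0.95):
    """Estimate DCP transmission maps on a pyramid of downscaled images and fuse
    them at full resolution by a simple mean (an assumed fusion rule).
    A is the estimated atmospheric light, shape (3,)."""
    H, W = img.shape[:2]
    maps = []
    for level in range(levels):
        scale = 2 ** level
        small = img[::scale, ::scale]                       # crude downscaling
        t = 1.0 - omega * dark_channel(small / A, patch)    # per-level transmission
        # Upsample back to full resolution by repetition (nearest neighbour)
        t_full = np.repeat(np.repeat(t, scale, axis=0), scale, axis=1)[:H, :W]
        maps.append(t_full)
    return np.clip(np.mean(maps, axis=0), 0.1, 1.0)
```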
Optical flow is a regression task where convolutional neural networks (CNNs)
have led to major breakthroughs. However, this comes at major computational
demands due to the use of cost-volumes and pyramidal representations. This was
mitigated by producing flow predictions at quarter the resolution, which are
upsampled using bilinear interpolation during test time. Consequently, fine
details are usually lost and post-processing is needed to restore them. We
propose the Normalized Convolution UPsampler (NCUP), an efficient joint
upsampling approach to produce the full-resolution flow during the training of
optical flow CNNs. Our proposed approach formulates the upsampling task as a
sparse problem and employs the normalized convolutional neural networks to
solve it. We evaluate our upsampler against existing joint upsampling
approaches when trained end-to-end with a coarse-to-fine optical flow CNN
(PWCNet) and we show that it outperforms all other approaches on the
FlyingChairs dataset while having at least one order of magnitude fewer parameters.
Moreover, we test our upsampler with a recurrent optical flow CNN (RAFT) and we
achieve state-of-the-art results on Sintel benchmark with ~6% error reduction,
and on-par on the KITTI dataset, while having 7.5% fewer parameters (see Figure
1). Finally, our upsampler shows better generalization capabilities than RAFT
when trained and evaluated on different datasets. | [
"cs.CV"
] |
Mass spectrometry (MS) is an important technique for chemical profiling which
calculates for a sample a high dimensional histogram-like spectrum. A crucial
step of MS data processing is the peak picking which selects peaks containing
information about molecules with high concentrations which are of interest in
an MS investigation. We present a new peak picking procedure based on a sparse
coding algorithm. Given a set of spectra of different classes, i.e. with
different positions and heights of the peaks, this procedure can extract peaks
by means of unsupervised learning. Instead of an $l_1$-regularization penalty
term used in the original sparse coding algorithm we propose using an
elastic-net penalty term for better regularization. The evaluation is done by
means of simulation. We show that for a large region of parameters the proposed
peak picking method based on the sparse coding features outperforms a mean
spectrum-based method. Moreover, we demonstrate the procedure applying it to
two real-life datasets. | [
"stat.ML",
"physics.med-ph",
"stat.AP",
"stat.ME"
] |
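The coding step behind the abstract above can be sketched with scikit-learn's ElasticNet, treating spectrum bins as samples and dictionary atoms as features. Dictionary learning, the simulation study and the paper's exact thresholds are not reproduced, and the parameter values below are illustrative.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def sparse_code_spectrum(spectrum: np.ndarray, dictionary: np.ndarray,
                         alpha: float = 0.01, l1_ratio: float = 0.7):
    """Encode one MS spectrum against a dictionary of peak-like atoms using an
    elastic-net penalty (l1 for sparsity, l2 for extra regularization).
    spectrum: (n_bins,), dictionary: (n_bins, n_atoms)."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, positive=True,
                       fit_intercept=False, max_iter=5000)
    model.fit(dictionary, spectrum)
    codes = model.coef_                 # sparse activations of dictionary atoms
    peaks = np.flatnonzero(codes > 0)   # atoms (candidate peaks) actually used
    return codes, peaks
```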
In geology, a key activity is the characterisation of geological structures
(surface formation topology and rock units) using Planar Orientation
measurements such as Strike, Dip and Dip Direction. In general these
measurements are collected manually using basic equipment; usually a
compass/clinometer and a backboard, recorded on a map by hand. Various
computing techniques and technologies, such as Lidar, have been utilised in
order to automate this process and update the collection paradigm for these
types of measurements. Techniques such as Structure from Motion (SfM)
reconstruct scenes and objects by generating a point cloud from input images,
with detailed reconstruction possible at the decimetre scale. SfM-type
techniques provide advantages in areas of cost and usability in more varied
environmental conditions, while sacrificing the extreme levels of data
fidelity. Here is presented a methodology of data acquisition and a Machine
Learning-based software system, GeoStructure, developed to automate the
collection of orientation measurements. Rather than deriving measurements
using a method applied to the input images, such as the Hough Transform, this
method takes measurements directly from the reconstructed point cloud surfaces.
Point cloud noise is mitigated using a Mahalanobis distance implementation.
Significant structure is characterised using a k-nearest neighbour region
growing algorithm, and final surface orientations are quantified using the
plane, and normal direction cosines. | [
"cs.LG",
"cs.CV"
] |
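The final step in the abstract above, quantifying surface orientations from plane normals and direction cosines, can be sketched as follows. The PCA plane fit and the dip/dip-direction conventions are standard; the Mahalanobis-distance denoising and the k-nearest-neighbour region-growing stages are not shown.

```python
import numpy as np

def plane_orientation(points: np.ndarray):
    """Fit a plane to an (n, 3) patch of surface points (x=east, y=north, z=up)
    via PCA and return (dip_direction, dip) in degrees."""
    centred = points - points.mean(axis=0)
    # The plane normal is the direction of smallest variance
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                        # make the normal point upwards
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    # Horizontal component of the upward normal points down-dip
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_direction, dip

# Example: a plane dipping 30 degrees towards the east
xy = np.random.rand(200, 2)
pts = np.column_stack([xy[:, 0], xy[:, 1], -xy[:, 0] * np.tan(np.radians(30))])
print(plane_orientation(pts))           # approximately (90.0, 30.0)
```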
Image classification is a major application domain for conventional deep
learning (DL). Quantum machine learning (QML) has the potential to
revolutionize image classification. In any typical DL-based image
classification, we use convolutional neural network (CNN) to extract features
from the image and multi-layer perceptron network (MLP) to create the actual
decision boundaries. On one hand, QML models can be useful in both of these
tasks. Convolution with parameterized quantum circuits (Quanvolution) can
extract rich features from the images. On the other hand, quantum neural
network (QNN) models can create complex decision boundaries. Therefore,
Quanvolution and QNN can be used to create an end-to-end QML model for image
classification. Alternatively, we can extract image features separately using
classical dimension reduction techniques such as, Principal Components Analysis
(PCA) or Convolutional Autoencoder (CAE) and use the extracted features to
train a QNN. We review two proposals on quantum-classical hybrid ML models for
image classification namely, Quanvolutional Neural Network and dimension
reduction using a classical algorithm followed by QNN. Particularly, we make a
case for trainable filters in Quanvolution and CAE-based feature extraction for
image datasets (instead of dimension reduction using linear transformations
such as, PCA). We discuss various design choices, potential opportunities, and
drawbacks of these models. We also release a Python-based framework to create
and explore these hybrid models with a variety of design choices. | [
"cs.CV",
"cs.LG"
] |
Road extraction is an essential step in building autonomous navigation
systems. Detecting road segments is challenging as they are of varying widths,
bifurcated throughout the image, and are often occluded by terrain, cloud, or
other weather conditions. Using just convolutional neural networks (ConvNets) for
this problem is not effective as it is inefficient at capturing distant
dependencies between road segments in the image which is essential to extract
road connectivity. To this end, we propose a Spatial and Interaction Space
Graph Reasoning (SPIN) module which when plugged into a ConvNet performs
reasoning over graphs constructed on spatial and interaction spaces projected
from the feature maps. Reasoning over spatial space extracts dependencies
between different spatial regions and other contextual information. Reasoning
over a projected interaction space helps in appropriate delineation of roads
from other topographies present in the image. Thus, SPIN extracts long-range
dependencies between road segments and effectively delineates roads from other
semantics. We also introduce a SPIN pyramid which performs SPIN graph reasoning
across multiple scales to extract multi-scale features. We propose a network
based on stacked hourglass modules and SPIN pyramid for road segmentation which
achieves better performance compared to existing methods. Moreover, our method
is computationally efficient and significantly boosts the convergence speed
during training, making it feasible for applying on large-scale high-resolution
aerial images. Code available at:
https://github.com/wgcban/SPIN_RoadMapper.git. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
Enabling bi-directional retrieval of images and texts is important for
understanding the correspondence between vision and language. Existing methods
leverage the attention mechanism to explore such correspondence in a
fine-grained manner. However, most of them consider all semantics equally and
thus align them uniformly, regardless of their diverse complexities. In fact,
semantics are diverse (i.e., involving different kinds of semantic concepts),
and humans usually follow a latent structure to combine them into
understandable language. Existing methods may therefore struggle to optimally
capture such sophisticated correspondences. In this paper, to address this
deficiency, we propose an Iterative Matching with Recurrent Attention Memory
(IMRAM) method, in which correspondences between images and texts are captured
with multiple steps of alignment. Specifically, we introduce an iterative
matching scheme to explore such fine-grained correspondence progressively. A
memory distillation unit is used to refine alignment knowledge from early steps
to later ones. Experimental results on three benchmark datasets, i.e.,
Flickr8K, Flickr30K, and MS COCO, show that our IMRAM achieves state-of-the-art
performance, demonstrating its effectiveness. Experiments on a practical
business advertisement dataset, named \Ads{}, further validate the
applicability of our method in practical scenarios. | [
"cs.CV"
] |
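The sketch below illustrates the general shape of iterative cross-attention
matching with a memory-style refinement, assuming random word and region
features; the gated update merely stands in for IMRAM's memory distillation
unit and is not the paper's implementation.

import torch
import torch.nn.functional as F

def attend(query, context, smooth=9.0):
    """Cross-attention: for each query vector, a weighted sum of context vectors."""
    attn = torch.einsum('bqd,bcd->bqc', F.normalize(query, dim=-1),
                                        F.normalize(context, dim=-1))
    attn = F.softmax(smooth * attn, dim=-1)
    return torch.einsum('bqc,bcd->bqd', attn, context)

def iterative_matching(words, regions, steps=3):
    """Sketch of iterative text-to-image matching with a memory-like update.

    words: word features (b, n_w, d); regions: region features (b, n_r, d).
    The gated refinement below is only a stand-in for IMRAM's memory unit.
    """
    query = words
    scores = []
    for _ in range(steps):
        attended = attend(query, regions)                      # image evidence per word
        sim = F.cosine_similarity(query, attended, dim=-1)     # (b, n_w)
        scores.append(sim.mean(dim=-1))                        # sentence-image score
        gate = torch.sigmoid((query * attended).sum(-1, keepdim=True))
        query = gate * attended + (1 - gate) * query           # refine query for next step
    return torch.stack(scores, dim=-1).sum(-1)                 # accumulate over steps

# Usage with random features:
w, r = torch.randn(4, 12, 256), torch.randn(4, 36, 256)
print(iterative_matching(w, r).shape)    # torch.Size([4])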
Resistance Spot Welding (RSW) is an important manufacturing process that
attracts increasing attention in the automotive industry. However, due to the
complexity of the manufacturing process, the corresponding product quality
shows significant inconsistencies even under the same process setup. This paper
develops a statistical method to capture the inconsistency of welding quality
measurements (e.g., nugget width) based on process parameters, in order to
efficiently monitor product quality. The proposed method provides engineering
efficiency and cost-saving benefits by reducing the physical testing required
for weldability and verification. The developed method is applied to a
real-world welding process. | [
"cs.LG"
] |
Contrastive learning between multiple views of the data has recently achieved
state-of-the-art performance in the field of self-supervised representation
learning. Despite its success, the influence of different view choices has been
less studied. In this paper, we use theoretical and empirical analysis to
better understand the importance of view selection, and argue that we should
reduce the mutual information (MI) between views while keeping task-relevant
information intact. To verify this hypothesis, we devise unsupervised and
semi-supervised frameworks that learn effective views by aiming to reduce their
MI. We also consider data augmentation as a way to reduce MI, and show that
increasing data augmentation indeed leads to decreasing MI and improves
downstream classification accuracy. As a by-product, we achieve a new
state-of-the-art accuracy on unsupervised pre-training for ImageNet
classification ($73\%$ top-1 linear readout with a ResNet-50). In addition,
transferring our models to PASCAL VOC object detection and COCO instance
segmentation consistently outperforms supervised pre-training.
Code: http://github.com/HobbitLong/PyContrast | [
"cs.CV",
"cs.LG"
] |
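For context, a standard InfoNCE objective between two view embeddings looks as
follows; minimizing it maximizes a lower bound on the mutual information
between the views, which is the quantity the view-selection argument above is
concerned with. This is a generic sketch, not the PyContrast code.

import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.07):
    """InfoNCE loss between two batches of view embeddings of shape (b, d).

    Matching rows of z1 and z2 are positives; all other rows act as negatives.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature            # (b, b) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Usage: embeddings of two augmented views of the same batch.
z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce(z_a, z_b).item())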
The widely used ChestX-ray14 dataset addresses an important medical image
classification problem and has the following caveats: 1) many lung pathologies
are visually similar, 2) a variety of diseases, including lung cancer,
tuberculosis, and pneumonia, can be present in a single scan, i.e., multiple
labels per image, and 3) the incidence of healthy images is much larger than
that of diseased samples, creating imbalanced data. These properties are common
in the medical domain. The existing literature uses state-of-the-art
DenseNet/ResNet models with transfer learning, where the output neurons of the
networks are trained for individual diseases to cater for multiple disease
labels in each image. However, most of these approaches do not consider the
relationships between classes. In this work we propose a novel error function,
the Multi-label Softmax Loss (MSML), to specifically address the properties of
multiple labels and imbalanced data. Moreover, we design a deep network
architecture based on fine-grained classification concepts that incorporates
MSML. We evaluate our proposed method on various network backbones and show
consistent improvements in AUC-ROC scores on the ChestX-ray14 dataset. The
proposed error function provides a new way to gain improved performance across
wider medical datasets. | [
"cs.CV"
] |
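The abstract does not give the formula for MSML, so the sketch below only shows
one common way to adapt a softmax loss to multi-hot targets (averaging the
log-softmax over the positive labels of each image); it is an illustrative
stand-in, not the proposed loss.

import torch
import torch.nn.functional as F

def multilabel_softmax_loss(logits, targets):
    """Illustrative multi-label softmax-style loss (NOT the paper's exact MSML).

    logits: (b, c) class scores; targets: (b, c) multi-hot labels.
    The log-softmax is averaged over the positive labels of each image, which
    handles imbalance differently from per-class sigmoid BCE.
    """
    log_p = F.log_softmax(logits, dim=-1)
    pos = targets.sum(dim=-1).clamp(min=1)              # positives per image
    return -(targets * log_p).sum(dim=-1).div(pos).mean()

# Usage with a random batch of 14-class multi-hot labels (as in ChestX-ray14):
logits = torch.randn(8, 14)
labels = (torch.rand(8, 14) < 0.2).float()
print(multilabel_softmax_loss(logits, labels).item())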
We propose a method of improving detection precision (mAP) with the help of
prior knowledge about the scene geometry: we assume the scene to be a plane
with objects placed on it. We focus our attention on autonomous robots, so
given the robot's dimensions and the inclination angles of the camera, it is
possible to predict the spatial scale for each pixel of the input frame. With
a slightly modified YOLOv3-tiny, we demonstrate that detection supplemented by
the scale channel, further referred to as S, outperforms standard RGB-based
detection with a small computational overhead. | [
"cs.CV",
"cs.RO"
] |
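A minimal sketch of how such a per-pixel scale channel could be computed from
flat-ground pinhole geometry is given below; the specific formula, parameter
names, and the metres-per-pixel convention are our assumptions, since the
paper's exact definition of S is not stated in the abstract.

import numpy as np

def scale_channel(height_m, tilt_rad, fy, cy, rows, cols, eps=1e-6):
    """One plausible per-pixel 'scale' channel from flat-ground geometry.

    Assumes a pinhole camera at height `height_m` above a horizontal ground
    plane, pitched down by `tilt_rad`.  For image row v, the viewing ray points
    down by tilt + arctan((v - cy) / fy); intersecting it with the ground gives
    a distance d = height_m / tan(.), and d / fy approximates the ground extent
    covered by one pixel (metres per pixel).  Rows at or above the horizon are
    set to 0.
    """
    v = np.arange(rows)
    angle = tilt_rad + np.arctan((v - cy) / fy)               # angle below horizontal
    dist = np.where(angle > eps, height_m / np.tan(np.maximum(angle, eps)), 0.0)
    scale = dist / fy                                         # metres per pixel (approx.)
    return np.tile(scale[:, None], (1, cols))                 # constant along each row

# Usage: a camera 0.4 m above the ground, tilted 15 degrees downwards.
S = scale_channel(height_m=0.4, tilt_rad=np.radians(15), fy=500.0, cy=240.0,
                  rows=480, cols=640)
print(S.shape, S[300, 0])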
Time series analysis is quickly moving towards long and complex tasks. In
recent years, fast approximate algorithms for discord search have been proposed
to compensate for the increasing size of time series. It is more interesting,
however, to find quick exact solutions. In this research, we improved HOT SAX
by exploiting two main ideas: a warm-up process and the similarity between
sequences that are close in time. The resulting algorithm, called HOT SAX Time
(HST), has been validated on real and synthetic time series and successfully
compared with HOT SAX, RRA, SCAMP, and DADD. The complexity of a discord search
is evaluated with a new indicator, the cost per sequence (cps), which allows
one to compare searches on time series of different lengths. Numerical evidence
suggests that two factors determine the complexity of a discord search in a
non-trivial way: the length of the discords and the noise-to-signal ratio. For
complex searches, HST can be more than 100 times faster than HOT SAX, placing
it at the forefront of exact discord search. | [
"cs.LG"
] |
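For reference, the sketch below shows a plain exact discord search with early
abandoning together with a cost-per-sequence (cps) count; HOT SAX's heuristic
orderings and HST's warm-up and time-locality ideas are deliberately omitted,
so this is only a baseline illustration, not the published algorithms.

import numpy as np

def discord_search(ts, m):
    """Brute-force exact discord search with early abandoning, plus a cps counter.

    The discord of length m is the subsequence whose distance to its nearest
    non-overlapping neighbour is largest.  'calls' counts distance evaluations,
    and calls / n is the cost-per-sequence indicator mentioned in the abstract.
    """
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    best_dist, best_idx, calls = -1.0, -1, 0
    for i in range(n):
        nn_dist = np.inf
        for j in range(n):
            if abs(i - j) < m:           # skip overlapping (self-match) windows
                continue
            calls += 1
            d = np.linalg.norm(subs[i] - subs[j])
            if d < nn_dist:
                nn_dist = d
            if nn_dist < best_dist:      # early abandoning: i cannot be the discord
                break
        if nn_dist > best_dist and np.isfinite(nn_dist):
            best_dist, best_idx = nn_dist, i
    return best_idx, best_dist, calls / n   # discord position, distance, cps

# Usage: a sine wave with an injected anomaly around index 250.
t = np.sin(np.linspace(0, 10 * np.pi, 500))
t[250:270] += 1.5
idx, dist, cps = discord_search(t, m=25)
print(idx, round(dist, 3), round(cps, 1))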