text (string, lengths 29 to 3.31k) | label (sequence, lengths 1 to 11)
---|---|
While designing inductive bias in neural architectures has been widely
studied, we hypothesize that transformer networks are flexible enough to learn
inductive bias from suitable generic tasks. Here, we replace architecture
engineering by encoding inductive bias in the form of datasets. Inspired by
Peirce's view that deduction, induction, and abduction form an irreducible set
of reasoning primitives, we design three synthetic tasks that are intended to
require the model to have these three abilities. We specifically design these
synthetic tasks in a way that they are devoid of mathematical knowledge to
ensure that only the fundamental reasoning biases can be learned from these
tasks. This defines a new pre-training methodology called "LIME" (Learning
Inductive bias for Mathematical rEasoning). Models trained with LIME
significantly outperform vanilla transformers on three very different large
mathematical reasoning benchmarks. Unlike traditional pre-training approaches,
whose computation cost dominates that of the downstream task, LIME requires
only a small fraction of the computation cost of the typical downstream task. | [
"cs.LG",
"cs.AI",
"cs.LO"
] |
Molecular property prediction plays a fundamental role in drug discovery to
discover candidate molecules with target properties. However, molecular
property prediction is essentially a few-shot problem, which makes it hard to
train conventional models. In this paper, we propose a property-aware adaptive
relation network (PAR) for the few-shot molecular property prediction problem.
In contrast to existing works, we leverage the fact that both the relevant
substructures and the relationships among molecules differ across molecular
properties. Our PAR is compatible with existing graph-based molecular encoders,
and is further equipped with the ability to obtain property-aware molecular
embeddings and to model the molecular relation graph adaptively. The resultant relation
graph also facilitates effective label propagation within each task. Extensive
experiments on benchmark molecular property prediction datasets show that our
method consistently outperforms state-of-the-art methods and is able to obtain
property-aware molecular embeddings and to model the molecular relation graph properly. | [
"cs.LG"
] |
Neural ordinary differential equations (NODEs) have recently attracted
increasing attention; however, their empirical performance on benchmark tasks
(e.g., image classification) is significantly inferior to that of discrete-layer
models. We demonstrate that an explanation for their poorer performance is the
inaccuracy of existing gradient estimation methods: the adjoint method has
numerical errors in reverse-mode integration; the naive method directly
back-propagates through ODE solvers, but suffers from a redundantly deep
computation graph when searching for the optimal stepsize. We propose the
Adaptive Checkpoint Adjoint (ACA) method: in automatic differentiation, ACA
applies a trajectory checkpoint strategy which records the forward-mode
trajectory as the reverse-mode trajectory to guarantee accuracy; ACA deletes
redundant components for shallow computation graphs; and ACA supports adaptive
solvers. On image classification tasks, compared with the adjoint and naive
methods, ACA achieves half the error rate in half the training time; a NODE
trained with ACA outperforms ResNet in both accuracy and test-retest
reliability. On time-series modeling, ACA outperforms competing methods.
Finally, in an example of the three-body problem, we show NODE with ACA can
incorporate physical knowledge to achieve better accuracy. We provide the
PyTorch implementation of ACA:
\url{https://github.com/juntang-zhuang/torch-ACA}. | [
"stat.ML",
"cs.LG"
] |
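A minimal sketch of the trajectory-checkpoint idea behind ACA, assuming a fixed-step RK4 solver for illustration (the helper names here are hypothetical; the actual package at the linked repo supports adaptive solvers and has a different API):

```python
import torch

def rk4_step(f, t, h, y):
    """One classical RK4 step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def forward_checkpointed(f, y0, ts):
    """Integrate without building a graph, checkpointing the state at every
    accepted step, so the backward pass can replay the *same* trajectory
    (reverse-mode re-integration is where the adjoint method loses accuracy)."""
    ckpts = [y0.detach()]
    with torch.no_grad():
        y = y0
        for t0, t1 in zip(ts[:-1], ts[1:]):
            y = rk4_step(f, t0, t1 - t0, y)
            ckpts.append(y.detach())
    return ckpts

def backward_through_trajectory(f, ckpts, ts, grad_out, params):
    """Replay one step at a time under autograd; only one step's graph is
    alive at any moment, avoiding the redundantly deep graph of naive
    backprop. Assumes every tensor in `params` participates in f."""
    acc = [torch.zeros_like(p) for p in params]
    adj = grad_out
    for i in reversed(range(len(ts) - 1)):
        y = ckpts[i].clone().requires_grad_(True)
        y1 = rk4_step(f, ts[i], ts[i + 1] - ts[i], y)
        grads = torch.autograd.grad(y1, [y] + list(params), grad_outputs=adj)
        adj = grads[0]
        for a, g in zip(acc, grads[1:]):
            a += g
    return adj, acc
```

With an adaptive solver, the step sizes accepted in the forward pass are the ones checkpointed and replayed, which is what guarantees the reverse-mode trajectory matches the forward one.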
Performance of deep learning algorithms decreases drastically if the data
distributions of the training and testing sets are different. Due to variations
in staining protocols, reagent brands, and habits of technicians, color
variation in digital histopathology images is quite common. Color variation
causes problems for the deployment of deep learning-based solutions for
automatic diagnosis systems in histopathology. Previously proposed color
normalization methods consider a small patch as a reference for normalization,
which creates artifacts on out-of-distribution source images. These methods are
also slow as most of the computation is performed on CPUs instead of the GPUs.
We propose a color normalization technique, which is fast during its
self-supervised training as well as inference. Our method is based on a
lightweight fully-convolutional neural network and can be easily attached to a
deep learning-based pipeline as a pre-processing block. For classification and
segmentation tasks on CAMELYON17 and MoNuSeg datasets respectively, the
proposed method is faster and gives a greater increase in accuracy than
state-of-the-art methods. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Visual intelligence at the edge is becoming a growing necessity for low
latency applications and situations where real-time decision is vital. Object
detection, the first step in visual data analytics, has enjoyed significant
improvements in terms of state-of-the-art accuracy due to the emergence of
Convolutional Neural Networks (CNNs) and Deep Learning. However, such complex
paradigms introduce increasing computational demands and hence prevent their
deployment on resource-constrained devices. In this work, we propose a
hierarchical framework that enables detecting objects in high-resolution video
frames, and maintain the accuracy of state-of-the-art CNN-based object
detectors while outperforming existing works in terms of processing speed when
targeting a low-power embedded processor using an intelligent data reduction
mechanism. Moreover, a use-case for pedestrian detection from an
Unmanned-Aerial-Vehicle (UAV) is presented, showing the impact that the proposed
approach has on sensitivity, average processing time and power consumption when
implemented on different platforms. Using the proposed selection process, our
framework manages to reduce the processed data by 100x leading to under 4W
power consumption on different edge devices. | [
"cs.CV"
] |
The rapid development of Industrial Internet of Things (IIoT) requires
industrial production towards digitalization to improve network efficiency.
Digital Twin is a promising technology to empower the digital transformation of
IIoT by creating virtual models of physical objects. However, the provision of
network efficiency in IIoT is very challenging due to resource-constrained
devices, stochastic tasks, and resource heterogeneity. Distributed resources
in IIoT networks can be efficiently exploited through computation offloading to
reduce energy consumption while enhancing data processing efficiency. In this
paper, we first propose a new paradigm, Digital Twin Networks (DTN), to build
the network topology and the stochastic task arrival model in IIoT systems.
Then, we formulate the stochastic computation offloading and resource
allocation problem to optimize the long-term energy efficiency. As the
formulated problem is a stochastic programming problem, we leverage the
Lyapunov optimization technique to transform the original problem into a
deterministic per-time-slot problem. Finally, we present an Asynchronous
Actor-Critic (AAC) algorithm to find the optimal stochastic computation
offloading policy. Illustrative results demonstrate that our proposed scheme
significantly outperforms the benchmarks. | [
"cs.LG",
"cs.AI"
] |
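For the transformation mentioned above, the standard Lyapunov drift-plus-penalty step is the usual route (generic notation assumed here, not taken from the entry: $Q_i(t)$ are virtual queue backlogs and $p(t)$ the per-slot penalty such as energy cost):

```latex
L(t) = \tfrac{1}{2} \sum_i Q_i(t)^2, \qquad
\Delta(t) = \mathbb{E}\big[ L(t+1) - L(t) \,\big|\, \mathbf{Q}(t) \big],
```

and in each slot one greedily minimizes an upper bound on $\Delta(t) + V\,\mathbb{E}[p(t) \mid \mathbf{Q}(t)]$, where the weight $V$ trades queue stability against the penalty; this is what turns the stochastic long-term problem into a deterministic per-time-slot one.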
Deep learning-based health status representation learning and clinical
prediction have raised much research interest in recent years. Existing models
have shown superior performance, but there are still several major issues that
have not been fully taken into consideration. First, the historical variation
pattern of the biomarker in diverse time scales plays a vital role in
indicating the health status, but it has not been explicitly extracted by
existing works. Second, key factors that strongly indicate the health risk are
different among patients. It is still challenging to adaptively make use of the
features for patients in diverse conditions. Third, using prediction models as
black boxes limits their reliability in clinical practice. However, none of
the existing works can provide satisfying interpretability and meanwhile
achieve high prediction performance. In this work, we develop a general health
status representation learning model, named AdaCare. It can capture the long-
and short-term variations of biomarkers as clinical features to depict the
health status in multiple time scales. It also models the correlation between
clinical features to enhance the ones which strongly indicate the health status
and thus can maintain a state-of-the-art performance in terms of prediction
accuracy while providing qualitative interpretability. We conduct a health risk
prediction experiment on two real-world datasets. Experiment results indicate
that AdaCare outperforms state-of-the-art approaches and provides effective
interpretability, which is verifiable by clinical experts. | [
"cs.LG",
"stat.ML"
] |
The prediction of behavior in dynamical systems frequently depends on the
design of models. When a time series obtained from observing the system is
available, the task can be performed by designing the model from these
observations without additional assumptions, or by assuming a preconceived
structure in the model with the help of additional information about the
system. In the second case, the question is how to adequately combine theory
with observations and subsequently optimize the mixture. In this work, we
propose the design of time-continuous models of dynamical systems as solutions
of differential equations, from non-uniformly sampled or noisy observations,
using machine learning techniques. The performance of the strategy is shown on
both several simulated data sets and experimental data from the Hare-Lynx
population and the Coronavirus 2019 outbreak. Our results suggest that this
approach to modeling systems can be a useful technique in the case of
synthetic or experimental data. | [
"cs.LG"
] |
Explaining the unreasonable effectiveness of deep learning has eluded
researchers around the globe. Various authors have described multiple metrics
to evaluate the capacity of deep architectures. In this paper, we allude to the
radius margin bounds described for a support vector machine (SVM) with hinge
loss, apply the same to the deep feed-forward architectures and derive the
Vapnik-Chervonenkis (VC) bounds, which differ from the earlier bounds
proposed in terms of the number of weights of the network. In doing so, we also
relate the effectiveness of techniques like Dropout and DropConnect in bringing
down the capacity of the network. Finally, we describe the effect of maximizing
the input as well as the output margin to achieve an input noise-robust deep
architecture. | [
"cs.LG",
"stat.ML"
] |
The need for simulated data in autonomous driving applications has become
increasingly important, both for validation of pretrained models and for
training new models. In order for these models to generalize to real-world
applications, it is critical that the underlying dataset contains a variety of
driving scenarios and that simulated sensor readings closely mimic real-world
sensors. We present the Carla Automated Dataset Extraction Tool (CADET), a
novel tool for generating training data from the CARLA simulator to be used in
autonomous driving research. The tool is able to export high-quality,
synchronized LIDAR and camera data with object annotations, and offers
configuration to accurately reflect a real-life sensor array. Furthermore, we
use this tool to generate a dataset consisting of 10 000 samples and use this
dataset in order to train the 3D object detection network AVOD-FPN, with
finetuning on the KITTI dataset in order to evaluate the potential for
effective pretraining. We also present two novel LIDAR feature map
configurations in Bird's Eye View for use with AVOD-FPN that can be easily
modified. These configurations are tested on the KITTI and CADET datasets in
order to evaluate their performance as well as the usability of the simulated
dataset for pretraining. Although insufficient to fully replace the use of
real-world data, and generally not able to exceed the performance of systems fully
trained on real data, our results indicate that simulated data can considerably
reduce the amount of training on real data required to achieve satisfactory
levels of accuracy. | [
"cs.CV",
"cs.LG"
] |
We study estimation of a gradient-sparse parameter vector
$\boldsymbol{\theta}^* \in \mathbb{R}^p$, having strong gradient-sparsity
$s^*:=\|\nabla_G \boldsymbol{\theta}^*\|_0$ on an underlying graph $G$. Given
observations $Z_1,\ldots,Z_n$ and a smooth, convex loss function $\mathcal{L}$
for which $\boldsymbol{\theta}^*$ minimizes the population risk
$\mathbb{E}[\mathcal{L}(\boldsymbol{\theta};Z_1,\ldots,Z_n)]$, we propose to
estimate $\boldsymbol{\theta}^*$ by a projected gradient descent algorithm that
iteratively and approximately projects gradient steps onto spaces of vectors
having small gradient-sparsity over low-degree spanning trees of $G$. We show
that, under suitable restricted strong convexity and smoothness assumptions for
the loss, the resulting estimator achieves the squared-error risk
$\frac{s^*}{n} \log (1+\frac{p}{s^*})$ up to a multiplicative constant that is
independent of $G$. In contrast, previous polynomial-time algorithms have only
been shown to achieve this guarantee in more specialized settings, or under
additional assumptions for $G$ and/or the sparsity pattern of $\nabla_G
\boldsymbol{\theta}^*$. As applications of our general framework, we apply our
results to the examples of linear models and generalized linear models with
random design. | [
"stat.ML",
"cs.LG",
"math.ST",
"stat.ME",
"stat.TH"
] |
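A worked restatement of the iteration described above may help orient the reader; this is the generic projected gradient template, with the approximate tree-based projection abstracted into $\mathrm{P}_s$ and an assumed step size $\eta$:

```latex
\theta^{t+1} = \mathrm{P}_s\big( \theta^t - \eta\, \nabla \mathcal{L}(\theta^t; Z_1,\ldots,Z_n) \big),
\qquad
\mathrm{P}_s(v) \approx \operatorname*{arg\,min}_{\|\nabla_G \theta\|_0 \le s} \|\theta - v\|_2^2 .
```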
State-of-the-art self-supervised learning approaches for monocular depth
estimation usually suffer from scale ambiguity. They do not generalize well
when applied on distance estimation for complex projection models such as in
fisheye and omnidirectional cameras. This paper introduces a novel multi-task
learning strategy to improve self-supervised monocular distance estimation on
fisheye and pinhole camera images. Our contribution is threefold:
Firstly, we introduce a novel distance estimation network architecture using a
self-attention based encoder coupled with robust semantic feature guidance to
the decoder that can be trained in a one-stage fashion. Secondly, we integrate
a generalized robust loss function, which improves performance significantly
while removing the need for hyperparameter tuning with the reprojection loss.
Finally, we reduce the artifacts caused by dynamic objects violating static
world assumptions using a semantic masking strategy. We significantly improve
upon previous work on fisheye images, reducing the RMSE by 25%. As there is
little work on fisheye cameras, we evaluated the proposed method on KITTI using
a pinhole model. We achieved state-of-the-art performance among self-supervised
methods without requiring an external scale estimation. | [
"cs.CV",
"cs.RO"
] |
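For reference, if the "generalized robust loss" above is the general and adaptive robust loss of Barron (2019) -- an assumption, since the entry does not name it -- the per-residual penalty with shape $\alpha$ and scale $c$ is

```latex
\rho(x, \alpha, c) = \frac{|\alpha - 2|}{\alpha}
\left[ \left( \frac{(x/c)^2}{|\alpha - 2|} + 1 \right)^{\alpha/2} - 1 \right],
```

which recovers the L2 loss at $\alpha = 2$, a Charbonnier-like loss at $\alpha = 1$, and the Cauchy loss in the limit $\alpha \to 0$.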
There is a vast body of theoretical research on lifted inference in
probabilistic graphical models (PGMs). However, few demonstrations exist where
lifting is applied in conjunction with top-of-the-line applied algorithms. We
pursue the applicability of lifted inference for computer vision (CV), with the
insight that a globally optimal (MAP) labeling will likely have the same label
for two symmetric pixels. The success of our approach lies in efficiently
handling a distinct unary potential on every node (pixel), typical of CV
applications. This allows us to lift the large class of algorithms that model a
CV problem via PGM inference. We propose a generic template for coarse-to-fine
(C2F) inference in CV, which progressively refines an initial coarsely lifted
PGM for varying quality-time trade-offs. We demonstrate the performance of C2F
inference by developing lifted versions of two near state-of-the-art CV
algorithms for stereo vision and interactive image segmentation. We find that,
against flat algorithms, the lifted versions have a much superior anytime
performance, without any loss in final solution quality. | [
"cs.CV"
] |
Clustering is an important facet of explorative data mining and finds
extensive use in several fields. In this paper, we propose an extension of the
classical Fuzzy C-Means clustering algorithm. The proposed algorithm,
abbreviated as VFC, adopts a multi-dimensional membership vector for each data
point instead of the traditional, scalar membership value defined in the
original algorithm. The membership vector for each point is obtained by
considering each feature of that point separately and obtaining individual
membership values for the same. We also propose an algorithm to efficiently
allocate the initial cluster centers close to the actual centers, so as to
facilitate rapid convergence. Further, we propose a scheme to achieve crisp
clustering using the VFC algorithm. The proposed novel clustering scheme has
been tested on two standard data sets in order to analyze its performance. We
also examine the efficacy of the proposed scheme by analyzing its performance
on image segmentation examples and comparing it with the classical Fuzzy
C-means clustering algorithm. | [
"cs.CV"
] |
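A minimal sketch of the per-feature membership idea from the entry above, assuming the classical FCM membership formula is applied independently to each feature (initialization and the crisp-clustering step of VFC are omitted):

```python
import numpy as np

def vector_memberships(X, centers, m=2.0, eps=1e-12):
    """Membership *vector* per point: for point i, cluster k and feature f,
    apply the classical FCM membership update to the 1-D distance
    |X[i, f] - centers[k, f]| instead of the full Euclidean distance.

    X: (n_points, n_features), centers: (n_clusters, n_features).
    Returns memberships of shape (n_points, n_clusters, n_features)."""
    d = np.abs(X[:, None, :] - centers[None, :, :]) + eps  # (n, K, F)
    # u[i, k, f] = 1 / sum_j (d[i, k, f] / d[i, j, f]) ** (2 / (m - 1))
    ratio = (d[:, :, None, :] / d[:, None, :, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)
```

For each feature the memberships still sum to one over clusters, so stacking them over features yields the multi-dimensional membership vector that replaces the scalar value of the original algorithm.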
Accurate and robust detection of multi-class objects in optical remote
sensing images is essential to many real-world applications such as urban
planning, traffic control, searching and rescuing, etc. However,
state-of-the-art object detection techniques designed for images captured using
ground-level sensors usually experience a sharp performance drop when directly
applied to remote sensing images, largely due to the object appearance
differences in remote sensing images in terms of sparse texture, low contrast,
arbitrary orientations, large scale variations, etc. This paper presents a
novel object detection network (CAD-Net) that exploits attention-modulated
features as well as global and local contexts to address the new challenges in
detecting objects from remote sensing images. The proposed CAD-Net learns
global and local contexts of objects by capturing their correlations with the
global scene (at scene-level) and the local neighboring objects or features (at
object-level), respectively. In addition, it incorporates a spatial-and-scale-aware
attention module that guides the network to focus on more informative regions
and features as well as more appropriate feature scales. Experiments over two
publicly available object detection datasets for remote sensing images
demonstrate that the proposed CAD-Net achieves superior detection performance.
The implementation code will be made publicly available to facilitate
future research. | [
"cs.CV"
] |
Very recently, a variety of vision transformer architectures for dense
prediction tasks have been proposed and they show that the design of spatial
attention is critical to their success in these tasks. In this work, we revisit
the design of the spatial attention and demonstrate that a carefully-devised
yet simple spatial attention mechanism performs favourably against the
state-of-the-art schemes. As a result, we propose two vision transformer
architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures
are highly-efficient and easy to implement, only involving matrix
multiplications that are highly optimized in modern deep learning frameworks.
More importantly, the proposed architectures achieve excellent performance on a
wide range of visual tasks including image-level classification as well as dense
detection and segmentation. The simplicity and strong performance suggest that
our proposed architectures may serve as stronger backbones for many vision
tasks. Our code will be released soon at
https://github.com/Meituan-AutoML/Twins . | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Boosting techniques and neural networks are particularly effective machine
learning methods for insurance pricing. Often in practice, there are
nevertheless endless debates about the choice of the right loss function to be
used to train the machine learning model, as well as about the appropriate
metric to assess the performances of competing models. Also, the sum of fitted
values can depart from the observed totals to a large extent and this often
confuses actuarial analysts. The lack of balance inherent to training models by
minimizing deviance outside the familiar GLM with canonical link setting has
been empirically documented in W\"uthrich (2019, 2020) who attributes it to the
early stopping rule in gradient descent methods for model fitting. The present
paper aims to further study this phenomenon when learning proceeds by
minimizing Tweedie deviance. It is shown that minimizing deviance involves a
trade-off between the integral of weighted differences of lower partial moments
and the bias measured on a specific scale. Autocalibration is then proposed as
a remedy. This new method to correct for bias adds an extra local GLM step to
the analysis. Theoretically, it is shown that it implements the autocalibration
concept in pure premium calculation and ensures that balance also holds on a
local scale, not only at portfolio level as with existing bias-correction
techniques. The convex order appears to be the natural tool to compare
competing models, putting a new light on the diagnostic graphs and associated
metrics proposed by Denuit et al. (2019). | [
"stat.ML",
"cs.LG",
"econ.EM"
] |
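For context, the Tweedie unit deviance minimized above has, for power parameter $p \notin \{1, 2\}$ (the standard formula, stated here for reference rather than taken from the entry), the form

```latex
d_p(y, \mu) = 2 \left(
\frac{\max(y, 0)^{2-p}}{(1-p)(2-p)}
- \frac{y\, \mu^{1-p}}{1-p}
+ \frac{\mu^{2-p}}{2-p}
\right),
```

with the Poisson ($p = 1$) and gamma ($p = 2$) deviances recovered as limits.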
We design a novel fully convolutional network architecture for shapes,
denoted by Shape Fully Convolutional Networks (SFCN). 3D shapes are represented
as graph structures in the SFCN architecture, based on novel graph convolution
and pooling operations, which are similar to convolution and pooling operations
used on images. Meanwhile, to build our SFCN architecture on the original image
segmentation fully convolutional network (FCN) architecture, we also design and
implement a generating operation with a bridging function. This ensures that the
convolution and pooling operations we have designed can be successfully applied
in the original FCN architecture. In this paper, we also present a new shape
segmentation approach based on SFCN. Furthermore, we allow more general and
challenging input, such as mixed datasets of different categories of shapes,
which demonstrates the generalisation ability of our approach. In our approach,
SFCNs are trained triangles-to-triangles by using three low-level geometric features as
input. Finally, feature-voting-based multi-label graph cuts are adopted to
optimise the segmentation results obtained by SFCN prediction. The experimental
results show that our method can effectively learn and predict mixed shape
datasets of either similar or different characteristics, and achieve excellent
segmentation results. | [
"cs.CV"
] |
Analyzing deep neural networks (DNNs) via information plane (IP) theory has
gained tremendous attention recently as a tool to gain insight into, among
others, their generalization ability. However, it is by no means obvious how to
estimate mutual information (MI) between each hidden layer and the
input/desired output, to construct the IP. For instance, hidden layers with
many neurons require MI estimators with robustness towards the high
dimensionality associated with such layers. MI estimators should also be able
to naturally handle convolutional layers, while at the same time being
computationally tractable to scale to large networks. None of the existing IP
methods to date have been able to study truly deep Convolutional Neural
Networks (CNNs), such as VGG-16. In this paper, we propose an IP
analysis using the new matrix-based R\'enyi's entropy coupled with tensor
kernels over convolutional layers, leveraging the power of kernel methods to
represent properties of the probability distribution independently of the
dimensionality of the data. The obtained results shed new light on the previous
literature concerning small-scale DNNs, albeit using a completely new
approach. Importantly, the new framework enables us to provide the first
comprehensive IP analysis of contemporary large-scale DNNs and CNNs,
investigating the different training phases and providing new insights into the
training dynamics of large-scale neural networks. | [
"stat.ML",
"cs.LG"
] |
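As a pointer for the entry above, the matrix-based Rényi entropy of order $\alpha$ (in the standard formulation of Sánchez Giraldo et al., which the entry presumably builds on; the normalization shown is assumed) is computed directly from a kernel Gram matrix $K$:

```latex
A = \frac{K}{\operatorname{tr}(K)}, \qquad
S_\alpha(A) = \frac{1}{1 - \alpha} \log_2 \left( \sum_{i=1}^{n} \lambda_i(A)^{\alpha} \right),
```

where $\lambda_i(A)$ are the eigenvalues of the normalized Gram matrix, so no explicit density estimation over the layer activations is required.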
Generative adversarial networks (GAN) have shown remarkable results in image
generation tasks. High fidelity class-conditional GAN methods often rely on
stabilization techniques by constraining the global Lipschitz continuity. Such
regularization leads to less expressive models and slower convergence speed;
other techniques, such as the large batch training, require unconventional
computing power and are not widely accessible. In this paper, we develop an
efficient algorithm, namely FastGAN (Free AdverSarial Training), to improve the
speed and quality of GAN training based on the adversarial training technique.
We benchmark our method on CIFAR10, a subset of ImageNet, and the full ImageNet
datasets. We choose strong baselines such as SNGAN and SAGAN; the results
demonstrate that our training algorithm can achieve better generation quality
(in terms of the Inception Score and Fr\'echet Inception Distance) with less
overall training time. Most notably, our training algorithm brings ImageNet
training to the broader public by requiring 2-4 GPUs. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Simulation-to-simulation and simulation-to-real world transfer of neural
network models has been a difficult problem. To close the reality gap, prior
methods for simulation-to-real-world transfer focused on domain adaptation,
decoupling perception and dynamics and solving each problem separately, and
randomization of agent parameters and environment conditions to expose the
learning agent to a variety of conditions. While these methods provide
acceptable performance, the computational complexity required to capture a
large variation of parameters for comprehensive scenarios on a given task such
as autonomous driving or robotic manipulation is high. Our key contribution is
to theoretically prove and empirically demonstrate that a deep attention
convolutional neural network (DACNN) with specific visual sensor configuration
performs as well as training on a dataset with high domain and parameter
variation at lower computational complexity. Specifically, the attention
network weights are learned through policy optimization to focus on local
dependencies that lead to optimal actions, and do not require tuning in the
real world for generalization. Our new architecture adapts perception with
respect to the control objective, resulting in zero-shot learning without
pre-training a perception network. To measure the impact of our new deep
network architecture on domain adaptation, we consider autonomous driving as a
use case. We perform an extensive set of experiments in
simulation-to-simulation and simulation-to-real scenarios to compare our
approach to several baselines including the current state-of-art models. | [
"cs.LG",
"cs.RO",
"cs.SY",
"eess.SY"
] |
A current trend in the reinforcement learning for healthcare
literature is that, in order to prepare clinical datasets, researchers carry
forward the last result of a non-administered test as the
last-observation-carried-forward (LOCF) value to fill in gaps, assuming that it
is still an accurate indicator of the patient's current state. These values are
carried forward without maintaining information about exactly how they
were imputed, leading to ambiguity. Our approach models this problem using
OpenAI Gym's Mountain Car and aims to address when to observe the patient's
physiological state and partly how to intervene, as we have assumed we can only
act after following an observation. So far, we have found that, for a
last-observation-carried-forward implementation of the state space, augmenting
the state with per-variable counters tracking the time since the last
observation improves the predictive performance of an agent, supporting the
notion of "informative missingness". Furthermore, using a neural-network-based
dynamics model to predict the most probable next value of unobserved state
variables, instead of carrying forward the last observed value through LOCF,
further improves the agent's performance, leading to faster convergence and
reduced variance. | [
"cs.LG",
"cs.AI"
] |
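A minimal sketch of the state augmentation described above, written as a Gymnasium observation wrapper (the masking probability, class name and LOCF mechanics are illustrative assumptions, not taken from the entry):

```python
import numpy as np
import gymnasium as gym

class LOCFCounterWrapper(gym.ObservationWrapper):
    """Each state variable is unobserved with probability p_miss; unobserved
    entries are carried forward (LOCF) and a per-variable counter of steps
    since the last true observation is appended to the state."""

    def __init__(self, env, p_miss=0.5, seed=0):
        super().__init__(env)
        self.p_miss = p_miss
        self.rng = np.random.default_rng(seed)
        n = env.observation_space.shape[0]
        low = np.concatenate([env.observation_space.low, np.zeros(n)])
        high = np.concatenate([env.observation_space.high, np.full(n, np.inf)])
        self.observation_space = gym.spaces.Box(low, high, dtype=np.float64)
        self.last = None
        self.counters = np.zeros(n)

    def reset(self, **kwargs):
        self.last = None                     # new episode: fresh LOCF state
        return super().reset(**kwargs)

    def observation(self, obs):
        if self.last is None:                # first observation is always seen
            self.last = obs.astype(np.float64)
            self.counters[:] = 0
        else:
            missing = self.rng.random(obs.shape) < self.p_miss
            self.last = np.where(missing, self.last, obs)
            self.counters = np.where(missing, self.counters + 1, 0)
        return np.concatenate([self.last, self.counters])

env = LOCFCounterWrapper(gym.make("MountainCar-v0"))
```

The counters make the missingness pattern itself observable to the agent, which is exactly the "informative missingness" signal the entry reports as beneficial.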
In this paper, we propose a feature transformation ensemble model with batch
spectral regularization for the Cross-domain few-shot learning (CD-FSL)
challenge. Specifically, we propose to construct an ensemble prediction model
by performing diverse feature transformations after a feature extraction
network. On each branch prediction network of the model we use a batch spectral
regularization term to suppress the singular values of the feature matrix
during pre-training to improve the generalization ability of the model. The
proposed model can then be fine-tuned in the target domain to address few-shot
classification. We also further apply label propagation, entropy minimization
and data augmentation to mitigate the shortage of labeled data in target
domains. Experiments are conducted on a number of CD-FSL benchmark tasks with
four target domains and the results demonstrate the superiority of our proposed
model. | [
"cs.CV"
] |
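A plausible sketch of the batch spectral regularization term described above, assuming the penalty is the sum of squared singular values of the batch feature matrix (the exact weighting or truncation used in the paper may differ):

```python
import torch

def batch_spectral_reg(features, k=None):
    """Penalize the singular values of a (batch, dim) feature matrix.
    `k` optionally restricts the penalty to the top-k singular values
    (an assumption, not stated in the entry)."""
    s = torch.linalg.svdvals(features)  # singular values, descending order
    if k is not None:
        s = s[:k]
    return (s ** 2).sum()

# usage sketch: loss = task_loss + lam * batch_spectral_reg(encoder(x))
```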
We tackle the problem of object detection and pose estimation in a shared
space downtown environment. For perception multiple laser scanners with
360{\deg} coverage were fused in a dynamic occupancy grid map (DOGMa). A
single-stage deep convolutional neural network is trained to provide object
hypotheses comprising shape, position, orientation and an existence score
from a single input DOGMa. Furthermore, an algorithm for offline object
extraction was developed to automatically label several hours of training data.
The algorithm is based on a two-pass trajectory extraction, forward and
backward in time. Typical for engineered algorithms, the automatic label
generation suffers from misdetections, which makes hard negative mining
impractical. Therefore, we propose a loss function counteracting the high
imbalance between mostly static background and extremely rare dynamic grid
cells. Experiments indicate that the trained network has good generalization
capabilities, since it detects objects occasionally lost by the label algorithm.
Evaluation reaches an average precision (AP) of 75.9%. | [
"cs.CV",
"cs.RO"
] |
This paper studies the nonparametric modal regression problem systematically
from a statistical learning view. Originally motivated by pursuing a
theoretical understanding of the maximum correntropy criterion based regression
(MCCR), our study reveals that MCCR with a tending-to-zero scale parameter is
essentially modal regression. We show that the nonparametric modal regression
problem can be approached via classical empirical risk minimization. Some
efforts are then made to develop a framework for analyzing and implementing
modal regression. For instance, the modal regression function is described, the
modal regression risk is defined explicitly and its \textit{Bayes} rule is
characterized; for the sake of computational tractability, the surrogate modal
regression risk, which is termed the generalization risk in our study, is
introduced. On the theoretical side, the excess modal regression risk, the
excess generalization risk, the function estimation error, and the relations
among the above three quantities are studied rigorously. It turns out that
under mild conditions, function estimation consistency and convergence may be
pursued in modal regression as in vanilla regression protocols, such as mean
regression, median regression, and quantile regression. However, it outperforms
these regression models in terms of robustness as shown in our study from a
re-descending M-estimation view. This coincides with and in return explains the
merits of MCCR on robustness. On the practical side, the implementation issues
of modal regression including the computational algorithm and the tuning
parameters selection are discussed. Numerical assessments on modal regression
are also conducted to verify our findings empirically. | [
"stat.ML",
"math.ST",
"stat.ME",
"stat.TH"
] |
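To make the object of study above concrete: the modal regression function picks the conditional mode rather than the conditional mean (a standard definition, stated for orientation; the MCCR link uses a Gaussian kernel of bandwidth $\sigma$):

```latex
f_{\mathrm{mod}}(x) = \operatorname*{arg\,max}_{t}\; p_{Y \mid X}(t \mid x),
\qquad
\text{MCCR:}\quad \max_{f}\; \mathbb{E}\!\left[ \exp\!\big( -(Y - f(X))^2 / 2\sigma^2 \big) \right],
```

and as the scale parameter $\sigma \to 0$ the correntropy objective concentrates on the conditional mode, which is the sense in which MCCR with a tending-to-zero scale is modal regression.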
Bayesian optimization (BO) is a sample efficient approach to automatically
tune the hyperparameters of machine learning models. In practice, one
frequently has to solve similar hyperparameter tuning problems sequentially.
For example, one might have to tune a type of neural network learned across a
series of different classification problems. Recent work on multi-task BO
exploits knowledge gained from previous tuning tasks to speed up a new tuning
task. However, previous approaches do not account for the fact that BO is a
sequential decision making procedure. Hence, there is in general a mismatch
between the number of evaluations collected in the current tuning task compared
to the number of evaluations accumulated in all previously completed tasks. In
this work, we enable multi-task BO to compensate for this mismatch, such that
the transfer learning procedure is able to handle different data regimes in a
principled way. We propose a new multi-task BO method that learns a set of
ordered, non-linear basis functions of increasing complexity via nested
drop-out and automatic relevance determination. Experiments on a variety of
hyperparameter tuning problems show that our method improves the sample efficiency. | [
"cs.LG",
"stat.ML"
] |
Recent works in medical image segmentation have actively explored various
deep learning architectures or objective functions to encode high-level
features from volumetric data owing to limited image annotations. However, most
existing approaches tend to ignore cross-volume global context and define
context relations in the decision space. In this work, we propose a novel
voxel-level Siamese representation learning method for abdominal multi-organ
segmentation to improve representation space. The proposed method enforces
voxel-wise feature relations in the representation space for leveraging limited
datasets more comprehensively to achieve better performance. Inspired by recent
progress in contrastive learning, we encourage voxel-wise features from the
same class to be projected to the same point, without using negative samples.
Moreover, we introduce a multi-resolution context aggregation method that
aggregates features from multiple hidden layers, which encodes both the global
and local contexts for segmentation. Our experiments on the multi-organ dataset
outperformed the existing approaches by 2% in Dice similarity coefficient. The
qualitative visualizations of the representation spaces demonstrate that the
improvements were gained primarily by a disentangled feature space. | [
"cs.CV"
] |
We propose a new deep architecture for person re-identification (re-id).
While re-id has seen much recent progress, spatial localization and
view-invariant representation learning for robust cross-view matching remain
key, unsolved problems. We address these questions by means of a new
attention-driven Siamese learning architecture, called the Consistent Attentive
Siamese Network. Our key innovations compared to existing, competing methods
include (a) a flexible framework design that produces attention with only
identity labels as supervision, (b) explicit mechanisms to enforce attention
consistency among images of the same person, and (c) a new Siamese framework
that integrates attention and attention consistency, producing principled
supervisory signals as well as the first mechanism that can explain the
reasoning behind the Siamese framework's predictions. We conduct extensive
evaluations on the CUHK03-NP, DukeMTMC-ReID, and Market-1501 datasets and
report competitive performance. | [
"cs.CV",
"cs.LG"
] |
Segmentation of 3D images is a fundamental problem in biomedical image
analysis. Deep learning (DL) approaches have achieved state-of-the-art
segmentation performance. To exploit the 3D contexts using neural networks,
known DL segmentation methods, including 3D convolution, 2D convolution on
planes orthogonal to 2D image slices, and LSTM in multiple directions, all
suffer from incompatibility with the highly anisotropic dimensions in common 3D
biomedical images. In this paper, we propose a new DL framework for 3D image
segmentation, based on a combination of a fully convolutional network (FCN)
and a recurrent neural network (RNN), which are responsible for exploiting the
intra-slice and inter-slice contexts, respectively. To our best knowledge, this
is the first DL framework for 3D image segmentation that explicitly leverages
3D image anisotropism. Evaluated on a dataset from the ISBI Neuronal
Structure Segmentation Challenge and in-house image stacks for 3D fungus
segmentation, our approach achieves promising results compared to the known
DL-based 3D segmentation approaches. | [
"cs.CV"
] |
The policy gradient approach is a flexible and powerful reinforcement
learning method particularly for problems with continuous actions such as robot
control. A common challenge in this scenario is how to reduce the variance of
policy gradient estimates for reliable policy updates. In this paper, we
combine the following three ideas and give a highly effective policy gradient
method: (a) the policy gradients with parameter based exploration, which is a
recently proposed policy search method with low variance of gradient estimates,
(b) an importance sampling technique, which allows us to reuse previously
gathered data in a consistent way, and (c) an optimal baseline, which minimizes
the variance of gradient estimates with their unbiasedness being maintained.
For the proposed method, we give theoretical analysis of the variance of
gradient estimates and show its usefulness through extensive experiments. | [
"cs.LG",
"stat.ML"
] |
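For the optimal baseline in (c) above, the standard variance-minimizing choice in parameter-based exploration notation (where $R(\theta)$ is the return of a policy whose parameters are sampled as $\theta \sim p(\theta \mid \rho)$; stated here as the usual formula rather than quoted from the entry) is

```latex
\nabla_\rho J = \mathbb{E}\big[ (R(\theta) - b)\, \nabla_\rho \log p(\theta \mid \rho) \big],
\qquad
b^{*} = \frac{\mathbb{E}\big[ R(\theta)\, \| \nabla_\rho \log p(\theta \mid \rho) \|^2 \big]}
             {\mathbb{E}\big[ \| \nabla_\rho \log p(\theta \mid \rho) \|^2 \big]},
```

which keeps the gradient estimator unbiased while minimizing its variance.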
Pansharpening is a widely used image enhancement technique for remote
sensing. Its principle is to fuse the input high-resolution single-channel
panchromatic (PAN) image and low-resolution multi-spectral image and to obtain
a high-resolution multi-spectral (HRMS) image. Existing deep learning
pansharpening methods have two shortcomings. First, features of the two input images
need to be concatenated along the channel dimension to reconstruct the HRMS
image, which makes the importance of PAN images not prominent, and also leads
to high computational cost. Second, the implicit information of features is
difficult to extract through the manually designed loss function. To this end,
we propose a generative adversarial network via the fast guided filter (FGF)
for pansharpening. In the generator, traditional channel concatenation is replaced
by FGF to better retain the spatial information while reducing the number of
parameters. Meanwhile, the fusion objects can be highlighted by the spatial
attention module. In addition, the latent information of features can be
preserved effectively through adversarial training. Numerous experiments
illustrate that our network generates high-quality HRMS images that can surpass
existing methods, and with fewer parameters. | [
"cs.CV",
"eess.IV"
] |
Egocentric segmentation has attracted recent interest in the computer vision
community due to its potential in Mixed Reality (MR) applications. While most
previous works have been focused on segmenting egocentric human body parts
(mainly hands), little attention has been given to egocentric objects. Due to
the lack of datasets with pixel-wise annotations of egocentric objects, in this
paper we contribute a semantic-wise labeling of a subset of 2124 images
from the RGB-D THU-READ Dataset. We also report benchmarking results using
Thundernet, a real-time semantic segmentation network, that could allow future
integration with end-to-end MR applications. | [
"cs.CV"
] |
Drug repositioning is an attractive cost-efficient strategy for the
development of treatments for human diseases. Here, we propose an interpretable
model that learns disease self-representations for drug repositioning. Our
self-representation model represents each disease as a linear combination of a
few other diseases. We enforce proximity in the learnt representations in a way
to preserve the geometric structure of the human phenome network - a
domain-specific knowledge that naturally adds relational inductive bias to the
disease self-representations. We prove that our method is globally optimal and
show results outperforming state-of-the-art drug repositioning approaches. We
further show that the disease self-representations are biologically
interpretable. | [
"cs.LG",
"stat.ML"
] |
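One compact way to read the model above is as a generic self-expressive objective with a graph-smoothness term (an illustrative formulation; the exact objective and constraints of the paper may differ):

```latex
\min_{C}\; \| X - C X \|_F^2
+ \lambda \sum_{i,j} W_{ij}\, \| c_i - c_j \|_2^2
\quad \text{s.t.} \quad \operatorname{diag}(C) = 0,
```

where $X$ stacks disease feature vectors as rows, row $c_i$ of $C$ expresses disease $i$ as a linear combination of the other diseases, and $W$ encodes the human phenome network that supplies the relational inductive bias.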
Off-policy evaluation (OPE) is the task of estimating the expected reward of
a given policy based on offline data previously collected under different
policies. Therefore, OPE is a key step in applying reinforcement learning to
real-world domains such as medical treatment, where interactive data collection
is expensive or even unsafe. As the observed data tends to be noisy and
limited, it is essential to provide rigorous uncertainty quantification, not
just a point estimation, when applying OPE to make high stakes decisions. This
work considers the problem of constructing non-asymptotic confidence intervals
in infinite-horizon off-policy evaluation, which remains a challenging open
question. We develop a practical algorithm through a primal-dual
optimization-based approach, which leverages the kernel Bellman loss (KBL) of
Feng et al. (2019) and a new martingale concentration inequality for the KBL
applicable to time-dependent data with unknown mixing conditions. Our algorithm
makes minimal assumptions on the data and the function class of the Q-function,
and works for the behavior-agnostic settings where the data is collected under
a mix of arbitrary unknown behavior policies. We present empirical results that
clearly demonstrate the advantages of our approach over existing methods. | [
"cs.LG",
"stat.ML"
] |
The novel DISTributed Artificial neural Network Architecture (DISTANA) is a
generative, recurrent graph convolution neural network. It implements a grid or
mesh of locally parameterizable laterally connected network modules. DISTANA is
specifically designed to identify the causality behind spatially distributed,
non-linear dynamical processes. We show that DISTANA is very well-suited to
denoise data streams, given that re-occurring patterns are observed,
significantly outperforming alternative approaches, such as temporal
convolution networks and ConvLSTMs, on a complex spatial wave propagation
benchmark. It produces stable and accurate closed-loop predictions even over
hundreds of time steps. Moreover, it is able to effectively filter noise -- an
ability that can be improved further by applying denoising autoencoder
principles or by actively tuning latent neural state activities
retrospectively. Results confirm that DISTANA is ready to model real-world
spatio-temporal dynamics such as brain imaging, supply networks, water flow, or
soil and weather data patterns. | [
"cs.LG",
"stat.ML"
] |
Despite recent success of object detectors using deep neural networks, their
deployment on safety-critical applications such as self-driving cars remains
questionable. This is partly due to the absence of reliable estimation for
detectors' failure under operational conditions such as night, fog, dusk, dawn
and glare. Such unquantifiable failures could lead to safety violations. In
order to solve this problem, we created an algorithm that, without requiring
manual labeling, predicts a pixel-level invisibility map for color images: the
probability that a pixel/region contains objects that are invisible in the
color domain under various lighting conditions such as day, night and fog. We
propose a novel use of cross-modal knowledge distillation from the color to the
infra-red domain using weakly-aligned image pairs from the day, and construct
indicators of pixel-level invisibility based on the distances between their
intermediate-level features. Quantitative experiments show the strong
performance of our pixel-level invisibility mask and also the
effectiveness of distilled mid-level features on object detection in infra-red
imagery. | [
"cs.CV",
"cs.RO",
"eess.IV"
] |
Several Convolutional Deep Learning models have been proposed to classify the
cognitive states utilizing several neuro-imaging domains. These models have
achieved significant results, but they are heavily designed with millions of
parameters, which increases train and test time, making the model complex and
less suitable for real-time analysis. This paper proposes a simple, lightweight
CNN model to classify cognitive states from Electroencephalograph (EEG)
recordings. We develop a novel two-stage pipeline to learn distinct cognitive
representations. The first stage generates 2D spectral images from neural
time-series signals in a particular frequency band. Images are generated to
preserve the relationship between the neighboring electrodes and the spectral
properties of the cognitive events. The second stage develops a time-efficient,
computationally light, and high-performing model. We design a network
containing 4 blocks whose major components are standard and depth-wise
convolutions to increase performance, followed by separable convolutions to
decrease the number of parameters while maintaining the tradeoff between time
and performance. We experiment on an open-access EEG meditation dataset
comprising expert meditative, non-expert meditative, and control states.
We compare performance with six commonly used machine learning classifiers and
four state-of-the-art deep learning models. We attain comparable performance
utilizing less than 4\% of the parameters of other models. This model can be
employed in a real-time computation environment such as neurofeedback. | [
"cs.LG",
"eess.SP"
] |
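A minimal sketch of the depth-wise/separable building block the entry above relies on (channel sizes and kernel size are illustrative assumptions; the paper's exact 4-block layout is not reproduced):

```python
import torch.nn as nn

class SeparableBlock(nn.Module):
    """Depth-wise conv (one filter per channel) followed by a 1x1 point-wise
    conv: roughly k*k*cin + cin*cout weights instead of k*k*cin*cout for a
    standard convolution, which is where the parameter saving comes from."""

    def __init__(self, cin, cout, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(cin, cin, k, padding=k // 2, groups=cin)
        self.pointwise = nn.Conv2d(cin, cout, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))
```

With cin = cout = 64 and k = 3, the block has 576 + 4096 weights (ignoring biases) versus 36864 for the standard convolution, a roughly 8x reduction.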
We present a deep reinforcement learning method of progressive view
inpainting for 3D point scene completion under volume guidance, achieving
high-quality scene reconstruction from only a single depth image with severe
occlusion. Our approach is end-to-end, consisting of three modules: 3D scene
volume reconstruction, 2D depth map inpainting, and multi-view selection for
completion. Given a single depth image, our method first goes through the 3D
volume branch to obtain a volumetric scene reconstruction as a guide to the
next view inpainting step, which attempts to make up the missing information;
the third step involves projecting the volume under the same view of the input,
concatenating them to complete the current view depth, and integrating all
depth into the point cloud. Since the occluded areas are unavailable, we resort
to a deep Q-Network to glance around and pick the next best view for large hole
completion progressively until a scene is adequately reconstructed while
guaranteeing validity. All steps are learned jointly to achieve robust and
consistent results. We perform qualitative and quantitative evaluations with
extensive experiments on the SUNCG data, obtaining better results than the
state of the art. | [
"cs.CV"
] |
Physical adversarial examples for camera-based computer vision have so far
been achieved through visible artifacts -- a sticker on a Stop sign, colorful
borders around eyeglasses or a 3D printed object with a colorful texture. An
implicit assumption here is that the perturbations must be visible so that a
camera can sense them. By contrast, we contribute a procedure to generate, for
the first time, physical adversarial examples that are invisible to human eyes.
Rather than modifying the victim object with visible artifacts, we modify light
that illuminates the object. We demonstrate how an attacker can craft a
modulated light signal that adversarially illuminates a scene and causes
targeted misclassifications on a state-of-the-art ImageNet deep learning model.
Concretely, we exploit the radiometric rolling shutter effect in commodity
cameras to create precise striping patterns that appear on images. To human
eyes, it appears like the object is illuminated, but the camera creates an
image with stripes that will cause ML models to output the attacker-desired
classification. We conduct a range of simulation and physical experiments with
LEDs, demonstrating targeted attack rates up to 84%. | [
"cs.CV",
"cs.CR",
"cs.LG"
] |
Today, there are two major understandings for graph convolutional networks,
i.e., in the spectral and spatial domain. But both lack transparency. In this
work, we introduce a new understanding for it -- data augmentation, which is
more transparent than the previous understandings. Inspired by it, we propose a
new graph learning paradigm -- Monte Carlo Graph Learning (MCGL). The core idea
of MCGL comprises: (1) Data augmentation: propagate the labels of the training
set through the graph structure and expand the training set; (2) Model
training: use the expanded training set to train traditional classifiers. We
use synthetic datasets to compare the strengths of MCGL and graph convolutional
operation on clean graphs. In addition, we show that MCGL's tolerance to graph
structure noise is weaker than GCN's on noisy graphs (four real-world datasets).
Moreover, inspired by MCGL, we re-analyze the reasons why the performance of
GCN becomes worse when deepened too much: rather than the mainstream view of
over-smoothing, we argue that the main reason is the graph structure noise, and
experimentally verify our view. The code is available at
https://github.com/DongHande/MCGL. | [
"cs.LG",
"stat.ML"
] |
The future of Mobility-as-a-Service (MaaS) should embrace an integrated system
of ride-hailing, street-hailing and ride-sharing with optimised intelligent
vehicle routing in response to a real-time, stochastic demand pattern. We aim
to optimise routing policies for a large fleet of vehicles for street-hailing
services, given a stochastic demand pattern in small to medium-sized road
networks. A model-based dispatch algorithm, a high-performance model-free
reinforcement-learning-based algorithm, and a novel hybrid algorithm combining
the benefits of both the top-down approach and the model-free reinforcement
learning have been proposed to route the \emph{vacant} vehicles. We design our
reinforcement learning based routing algorithm using proximal policy
optimisation and combined intrinsic and extrinsic rewards to strike a balance
between exploration and exploitation. Using a large-scale agent-based
microscopic simulation platform to evaluate our proposed algorithms, our
model-free reinforcement learning and hybrid algorithms show excellent
performance on both an artificial road network and a community-based Singapore road
network with empirical demands, and our hybrid algorithm can significantly
accelerate the model-free learner in the process of learning. | [
"cs.LG",
"nlin.AO",
"physics.soc-ph"
] |
One of the challenges in the study of Generative Adversarial Networks (GANs)
is the difficulty of its performance control. Lipschitz constraint is essential
in guaranteeing training stability for GANs. Although heuristic methods such as
weight clipping, gradient penalty and spectral normalization have been proposed
to enforce Lipschitz constraint, it is still difficult to achieve a solution
that is both practically effective and theoretically provably satisfying a
Lipschitz constraint. In this paper, we introduce the boundedness and
continuity ($BC$) conditions to enforce the Lipschitz constraint on the
discriminator functions of GANs. We prove theoretically that GANs with
discriminators meeting the BC conditions satisfy the Lipschitz constraint. We
present a practically very effective implementation of a GAN based on a
convolutional neural network (CNN) by forcing the CNN to satisfy the $BC$
conditions (BC-GAN). We show that as compared to recent techniques including
gradient penalty and spectral normalization, BC-GANs not only have better
performances but also lower computational complexity. | [
"cs.CV"
] |
We propose a human pose estimation framework that solves the task in the
regression-based fashion. Unlike previous regression-based methods, which often
fall behind those state-of-the-art methods, we formulate the pose estimation
task into a sequence prediction problem that can effectively be solved by
transformers. Our framework is simple and direct, bypassing the drawbacks of
the heatmap-based pose estimation. Moreover, with the attention mechanism in
transformers, our proposed framework is able to adaptively attend to the
features most relevant to the target keypoints, which largely overcomes the
feature misalignment issue of previous regression-based methods and
considerably improves the performance. Importantly, our framework can
inherently take advantage of the structured relationships between keypoints.
Experiments on the MS-COCO and MPII datasets demonstrate that our method can
significantly improve the state-of-the-art of regression-based pose estimation
and perform comparably with the best heatmap-based pose estimation methods. | [
"cs.CV"
] |
Learning latent representations of nodes in graphs is an important and
ubiquitous task with widespread applications such as link prediction, node
classification, and graph visualization. Previous methods on graph
representation learning mainly focus on static graphs, however, many real-world
graphs are dynamic and evolve over time. In this paper, we present Dynamic
Self-Attention Network (DySAT), a novel neural architecture that operates on
dynamic graphs and learns node representations that capture both structural
properties and temporal evolutionary patterns. Specifically, DySAT computes
node representations by jointly employing self-attention layers along two
dimensions: structural neighborhood and temporal dynamics. We conduct link
prediction experiments on two classes of graphs: communication networks and
bipartite rating networks. Our experimental results show that DySAT has a
significant performance gain over several different state-of-the-art graph
embedding baselines. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
A signed distance function (SDF) as the 3D shape description is one of the
most effective approaches to represent 3D geometry for rendering and
reconstruction. Our work is inspired by the state-of-the-art method DeepSDF
that learns and analyzes the 3D shape as the iso-surface of its shell and this
method has shown promising results especially in the 3D shape reconstruction
and compression domain. In this paper, we consider the degeneration problem of
reconstruction coming from the capacity decrease of the DeepSDF model, which
approximates the SDF with a neural network and a single latent code. We propose
Local Geometry Code Learning (LGCL), a model that improves the original DeepSDF
results by learning from a local shape geometry of the full 3D shape. We add an
extra graph neural network to split the single transmittable latent code into a
set of local latent codes distributed on the 3D shape. These latent codes
are used to approximate the SDF in their local regions, which alleviates
the complexity of the approximation compared to the original DeepSDF.
Furthermore, we introduce a new geometric loss function to facilitate the
training of these local latent codes. Note that other local shape-adjusting
methods use the 3D voxel representation, which is highly
difficult or even infeasible to work with. In contrast, our architecture
implicitly operates on a graph and performs the regression
process directly in the latent code space, making the proposed architecture
more flexible and simpler to realize. Our experiments on 3D shape
reconstruction demonstrate that our LGCL method can keep more details with a
significantly smaller size of the SDF decoder and outperforms considerably the
original DeepSDF method under the most important quantitative metrics. | [
"cs.CV"
] |
Artificial neural networks (ANNs) are commonly labelled as black-boxes,
lacking interpretability. This hinders human understanding of ANNs' behaviors.
A need exists to generate a meaningful sequential logic for the production of a
specific output. Decision trees exhibit better interpretability and expressive
power due to their representation language and the existence of efficient
algorithms to generate rules. Growing a decision tree based on the available
data could produce larger than necessary trees or trees that do not generalise
well. In this paper, we introduce two novel multivariate decision tree (MDT)
algorithms for rule extraction from an ANN: an Exact-Convertible Decision Tree
(EC-DT) and an Extended C-Net algorithm to transform a neural network with
Rectified Linear Unit activation functions into a representative tree which can
be used to extract multivariate rules for reasoning. While the EC-DT translates
the ANN in a layer-wise manner to represent exactly the decision boundaries
implicitly learned by the hidden layers of the network, the Extended C-Net
inherits the decompositional approach from EC-DT and combines it with a C5
tree-learning algorithm to construct the decision rules. The results suggest that
while EC-DT is superior in preserving the structure and the accuracy of ANN,
Extended C-Net generates the most compact and highly effective trees from ANN.
Both proposed MDT algorithms generate rules including combinations of multiple
attributes for precise interpretation of decision-making processes. | [
"cs.LG",
"stat.ML"
] |
Developmental Dyslexia (DD) is a learning disability related to the
acquisition of reading skills that affects about 5% of the population. DD can
have an enormous impact on the intellectual and personal development of
affected children, so early detection is key to implementing preventive
strategies for teaching language. Research has shown that there may be
biological underpinnings to DD that affect phoneme processing, and hence these
symptoms may be identifiable before reading ability is acquired, allowing for
early intervention. In this paper we propose a new methodology to assess the
risk of DD before students learn to read. For this purpose, we propose a mixed
neural model that calculates risk levels of dyslexia from tests that can be
completed at the age of 5 years. Our method first trains an auto-encoder, and
then combines the trained encoder with an optimized ordinal regression neural
network devised to ensure consistency of predictions. Our experiments show that
the system is able to detect unaffected subjects two years before it can assess
the risk of DD based mainly on phonological processing, giving a specificity of
0.969 and a correct rate of more than 0.92. In addition, the trained encoder
can be used to transform test results into an interpretable subject spatial
distribution that facilitates risk assessment and validates the methodology. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
A commonly used paradigm for representing graphs is to use a vector that
contains normalized frequencies of occurrence of certain motifs or sub-graphs.
This vector representation can be used in a variety of applications, such as
computing similarity between graphs. The graphlet kernel of Shervashidze et
al. [32] uses induced sub-graphs of k nodes (christened as graphlets by Przulj
[28]) as motifs in the vector representation, and computes the kernel via a dot
product between these vectors. One can easily show that this is a valid kernel
between graphs. However, such a vector representation suffers from a few
drawbacks. As k becomes larger we encounter the sparsity problem; most higher
order graphlets will not occur in a given graph. This leads to diagonal
dominance, that is, a given graph is similar to itself but not to any other
graph in the dataset. On the other hand, since lower order graphlets tend to be
more numerous, using lower values of k does not provide enough discrimination
ability. We propose a smoothing technique to tackle the above problems. Our
method is based on a novel extension of Kneser-Ney and Pitman-Yor smoothing
techniques from natural language processing to graphs. We use the relationships
between lower order and higher order graphlets in order to derive our method.
Consequently, our smoothing algorithm not only respects the dependency between
sub-graphs but also tackles the diagonal dominance problem by distributing the
probability mass across graphlets. In our experiments, the smoothed graphlet
kernel outperforms graph kernels based on raw frequency counts. | [
"cs.LG"
] |
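A small numpy sketch of the raw graphlet-kernel pipeline above, restricted to the two connected 3-node graphlets, with a simple absolute-discounting step standing in for the paper's Kneser-Ney/Pitman-Yor smoothing (which additionally exploits the relationships between lower- and higher-order graphlets).

import numpy as np

def graphlet3_counts(A):
    """Count the two connected 3-node graphlets from an adjacency matrix:
    triangles and induced 2-paths (wedges whose closing edge is absent)."""
    A = (np.asarray(A) > 0).astype(float)
    deg = A.sum(axis=1)
    triangles = np.trace(A @ A @ A) / 6.0
    wedges = (deg * (deg - 1) / 2.0).sum()          # all length-2 paths
    induced_paths = wedges - 3.0 * triangles        # remove closed wedges
    return np.array([triangles, induced_paths])

def smoothed_frequencies(counts, discount=0.5):
    """Absolute-discounting stand-in for the smoothing in the paper: subtract
    a small mass from observed graphlets and spread it uniformly, so no
    frequency is exactly zero and diagonal dominance is reduced."""
    counts = np.asarray(counts, dtype=float)
    observed = counts > 0
    total = counts.sum()
    freq = np.maximum(counts - discount, 0.0) / total
    freq += discount * observed.sum() / (total * len(counts))
    return freq

def graphlet_kernel(A1, A2):
    return float(smoothed_frequencies(graphlet3_counts(A1))
                 @ smoothed_frequencies(graphlet3_counts(A2)))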
This paper is based on a machine learning project at the Norwegian University
of Science and Technology, fall 2020. The project was initiated with a
literature review on the latest developments within time-series forecasting
methods in the scientific community over the past five years. The paper
summarizes the essential aspects of this research. Furthermore, in this paper,
we introduce an LSTM cell's architecture, and explain how different components
go together to alter the cell's memory and predict the output. Also, the paper
provides the necessary formulas and foundations to calculate a forward
iteration through an LSTM. Then, the paper refers to some practical
applications and research that emphasize the strength and weaknesses of LSTMs,
shown within the time-series domain and the natural language processing (NLP)
domain. Finally, alternative statistical methods for time series predictions
are highlighted, where the paper outlines ARIMA and exponential smoothing.
Nevertheless, as LSTMs can be viewed as a complex architecture, the paper
assumes that the reader has some knowledge of essential machine learning
aspects, such as the multi-layer perceptron, activation functions,
backpropagation, bias, over- and underfitting, and more. | [
"cs.LG",
"cs.AI"
] |
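Since the paper walks through a forward iteration of an LSTM cell, a self-contained numpy version of the standard equations may help; the gate ordering, random toy parameters, and sequence length are conventions chosen for this sketch.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward iteration of a standard LSTM cell.

    x: (d_in,) input; h_prev/c_prev: (d_h,) previous hidden/cell state.
    W: (4*d_h, d_in), U: (4*d_h, d_h), b: (4*d_h,) stacked gate parameters
    in the order [input, forget, output, candidate]."""
    z = W @ x + U @ h_prev + b
    d_h = h_prev.shape[0]
    i = sigmoid(z[0*d_h:1*d_h])        # input gate
    f = sigmoid(z[1*d_h:2*d_h])        # forget gate
    o = sigmoid(z[2*d_h:3*d_h])        # output gate
    g = np.tanh(z[3*d_h:4*d_h])        # candidate cell update
    c = f * c_prev + i * g             # new cell memory
    h = o * np.tanh(c)                 # new hidden state / output
    return h, c

rng = np.random.default_rng(1)
d_in, d_h = 3, 5
W = rng.normal(size=(4*d_h, d_in))
U = rng.normal(size=(4*d_h, d_h))
b = np.zeros(4*d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(7, d_in)):   # unroll over a length-7 sequence
    h, c = lstm_step(x, h, c, W, U, b)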
In self-supervised learning, one trains a model to solve a so-called pretext
task on a dataset without the need for human annotation. The main objective,
however, is to transfer this model to a target domain and task. Currently, the
most effective transfer strategy is fine-tuning, which restricts one to using the
same model or parts thereof for both pretext and target tasks. In this paper,
we present a novel framework for self-supervised learning that overcomes
limitations in designing and comparing different tasks, models, and data
domains. In particular, our framework decouples the structure of the
self-supervised model from the final task-specific fine-tuned model. This
allows us to: 1) quantitatively assess previously incompatible models including
handcrafted features; 2) show that deeper neural network models can learn
better representations from the same pretext task; 3) transfer knowledge
learned with a deep model to a shallower one and thus boost its learning. We
use this framework to design a novel self-supervised task, which achieves
state-of-the-art performance on the common benchmarks in PASCAL VOC 2007,
ILSVRC12 and Places by a significant margin. Our learned features shrink the
mAP gap between models trained via self-supervised learning and supervised
learning from 5.9% to 2.6% in object detection on PASCAL VOC 2007. | [
"cs.CV"
] |
Neural Architecture Search (NAS) is a promising and rapidly evolving research
area. Training a large number of neural networks requires an exceptional amount
of computational power, which makes NAS unreachable for those researchers who
have limited or no access to high-performance clusters and supercomputers. A
few benchmarks with precomputed neural architectures performances have been
recently introduced to overcome this problem and ensure more reproducible
experiments. However, these benchmarks are only for the computer vision domain
and, thus, are built from the image datasets and convolution-derived
architectures. In this work, we step outside the computer vision domain by
leveraging the language modeling task, which is the core of natural language
processing (NLP). Our main contribution is as follows: we have provided a
search space of recurrent neural networks on text datasets and trained 14k
architectures within it; we have conducted both intrinsic and extrinsic
evaluation of the trained models using datasets for semantic relatedness and
language understanding evaluation; finally, we have tested several NAS
algorithms to demonstrate how the precomputed results can be utilized. We
believe that our results have high potential for use by both the NAS and NLP
communities. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
Movie graphs play an important role in bridging the heterogeneous modalities of
videos and texts in human-centric retrieval. In this work, we propose Graph
Wasserstein Correlation Analysis (GWCA) to deal with the core issue therein,
i.e., comparison across heterogeneous graphs. Spectral graph filtering is
introduced to encode graph signals, which are then embedded as probability
distributions in a Wasserstein space, called graph Wasserstein metric learning.
Such a seamless integration of graph signal filtering together with metric
learning results in a surprising consistency between both learning processes, in which
the goal of metric learning is just to optimize signal filters or vice versa.
Further, we derive the solution of the graph comparison model as a classic
generalized eigenvalue decomposition problem, which has an exactly closed-form
solution. Finally, GWCA together with movie/text graphs generation are unified
into the framework of movie retrieval to evaluate our proposed method.
Extensive experiments on the MovieGraphs dataset demonstrate the effectiveness of
our GWCA as well as the entire framework. | [
"cs.LG",
"stat.ML"
] |
Neural machine translation (NMT) systems have been shown to give undesirable
translation when a small change is made in the source sentence. In this paper,
we study the behaviour of NMT systems when multiple changes are made to the
source sentence. In particular, we ask the following question "Is it possible
for an NMT system to predict same translation even when multiple words in the
source sentence have been replaced?". To this end, we propose a soft-attention
based technique to make the aforementioned word replacements. The experiments
are conducted on two language pairs: English-German (en-de) and English-French
(en-fr) and two state-of-the-art NMT systems: BLSTM-based encoder-decoder with
attention and Transformer. The proposed soft-attention based technique achieves
a high success rate and outperforms existing methods like HotFlip by a
significant margin for all the conducted experiments. The results demonstrate
that state-of-the-art NMT systems are unable to capture the semantics of the
source language. The proposed soft-attention based technique is an
invariance-based adversarial attack on NMT systems. To better evaluate such
attacks, we propose an alternate metric and argue its benefits in comparison
with success rate. | [
"cs.LG",
"cs.CL",
"cs.CR",
"stat.ML"
] |
With the popularity of multimedia technology, information is always
represented or transmitted from multiple views. Most existing algorithms are
graph-based ones that learn the complex structures within multiview data but
overlook the information within the data representations. Furthermore, many
existing works treat multiple views discriminatively by introducing some
hyperparameters, which is undesirable in practice. To this end, abundant
multiview-based methods have been proposed for dimension reduction. However,
there is still no research that leverages the existing work into a unified
framework. To address this issue, in this paper, we propose a general framework
for multiview data dimension reduction, named Kernelized Multiview Subspace
Analysis (KMSA). It directly handles the multi-view feature representation in
the kernel space, which provides a feasible channel for direct manipulations on
multiview data with different dimensions. Meanwhile, compared with those
graph-based methods, KMSA can fully exploit information from multiview data
without loss. Furthermore, since different views have different influences on
KMSA, we propose a self-weighted strategy to treat different views
discriminatively according to their contributions. A co-regularized term is
proposed to promote mutual learning across views. KMSA combines
self-weighted learning with the co-regularized term to learn appropriate
weights for all views. We also discuss the influence of the parameters in KMSA
regarding the weights of multi-views. We evaluate our proposed framework on 6
multiview datasets for classification and image retrieval. The experimental
results validate the advantages of our proposed method. | [
"cs.LG",
"cs.MM"
] |
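A hedged sketch of the self-weighted idea above: per-view kernels are combined with weights derived from kernel-target alignment, a simple stand-in for KMSA's co-regularized objective. The RBF kernels, the alignment criterion, and the label-based ideal kernel are assumptions of this sketch, not the paper's formulation.

import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def self_weighted_kernels(views, labels):
    """Weight each view's kernel by how well it aligns with the label
    similarity matrix, then fuse the kernels with those weights."""
    K = [rbf_kernel(X) for X in views]
    Y = (labels[:, None] == labels[None, :]).astype(float)  # ideal kernel
    align = np.array([np.sum(k * Y) / np.linalg.norm(k) for k in K])
    w = align / align.sum()            # better-aligned views get more weight
    fused = sum(wv * k for wv, k in zip(w, K))
    return fused, w

rng = np.random.default_rng(0)
views = [rng.normal(size=(30, 5)), rng.normal(size=(30, 8))]  # two views
labels = rng.integers(0, 3, 30)
fused, weights = self_weighted_kernels(views, labels)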
A fundamental problem on graph-structured data is that of quantifying
similarity between graphs. Graph kernels are an established technique for such
tasks; in particular, those based on random walks and return probabilities have
proven to be effective in wide-ranging applications, from bioinformatics to
social networks to computer vision. However, random walk kernels generally
suffer from slowness and tottering, an effect which causes walks to
overemphasize local graph topology, undercutting the importance of global
structure. To correct for these issues, we recast return probability graph
kernels under the more general framework of density of states -- a framework
which uses the lens of spectral analysis to uncover graph motifs and properties
hidden within the interior of the spectrum -- and use our interpretation to
construct scalable, composite density of states based graph kernels which
balance local and global information, leading to higher classification
accuracies on a host of benchmark datasets. | [
"cs.LG",
"cs.NA",
"cs.SI",
"math.NA"
] |
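A toy numpy illustration of the return-probability ingredient above: t-step return probabilities of a random walk, averaged over nodes, give a crude density-of-states style signature whose dot product defines a kernel. The composite kernels in the paper are considerably richer; this sketch only shows the basic quantity.

import numpy as np

def return_probability_features(A, steps=5):
    """Mean t-step return probabilities of a random walk on the graph.

    The i-th diagonal entry of P^t is the probability that a walk started at
    node i is back at i after t steps."""
    A = np.asarray(A, dtype=float)
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    feats, Pt = [], np.eye(len(A))
    for _ in range(steps):
        Pt = Pt @ P
        feats.append(np.trace(Pt) / len(A))   # mean return probability
    return np.array(feats)

def rp_kernel(A1, A2, steps=5):
    f1, f2 = (return_probability_features(a, steps) for a in (A1, A2))
    return float(f1 @ f2)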
Neural shape representations have recently shown to be effective in shape
analysis and reconstruction tasks. Existing neural network methods require
point coordinates and corresponding normal vectors to learn the implicit level
sets of the shape. Normal vectors are often not provided as raw data,
therefore, approximation and reorientation are required as pre-processing
stages, both of which can introduce noise. In this paper, we propose a
divergence guided shape representation learning approach that does not require
normal vectors as input. We show that incorporating a soft constraint on the
divergence of the distance function favours smooth solutions that reliably
orient gradients to match the unknown normal at each point, in some cases even
better than approaches that use ground truth normal vectors directly.
Additionally, we introduce a novel geometric initialization method for
sinusoidal shape representation networks that further improves convergence to
the desired solution. We evaluate the effectiveness of our approach on the task
of surface reconstruction and show state-of-the-art performance compared to
other unoriented methods and on-par performance compared to oriented methods. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
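A minimal PyTorch sketch of the loss described above: an eikonal term plus a soft penalty on the divergence of the learned distance field, computed with nested autograd. The small Softplus MLP and the 0.1 weighting are assumptions of this sketch; the paper additionally uses sinusoidal networks with a dedicated geometric initialization, not shown here.

import torch

def sdf_losses(model, points):
    """Eikonal plus soft divergence penalty for an SDF network mapping
    (n, 3) points to (n, 1) signed distances."""
    points = points.clone().requires_grad_(True)
    d = model(points)
    grad = torch.autograd.grad(d.sum(), points, create_graph=True)[0]  # (n, 3)
    eikonal = ((grad.norm(dim=1) - 1.0) ** 2).mean()   # unit-norm gradients
    div = 0.0
    for k in range(3):                                 # divergence = sum of
        div = div + torch.autograd.grad(               # second derivatives
            grad[:, k].sum(), points, create_graph=True)[0][:, k]
    divergence = div.abs().mean()
    return eikonal + 0.1 * divergence                  # weighting is assumed

model = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Softplus(),
                            torch.nn.Linear(64, 1))
loss = sdf_losses(model, torch.randn(128, 3))
loss.backward()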
The general method of graph coarsening or graph reduction has been a
remarkably useful and ubiquitous tool in scientific computing and it is now
just starting to have a similar impact in machine learning. The goal of this
paper is to take a broad look into coarsening techniques that have been
successfully deployed in scientific computing and see how similar principles
are finding their way in more recent applications related to machine learning.
In scientific computing, coarsening plays a central role in algebraic multigrid
methods as well as the related class of multilevel incomplete LU
factorizations. In machine learning, graph coarsening goes under various names,
e.g., graph downsampling or graph reduction. Its goal in most cases is to
replace some original graph by one which has fewer nodes, but whose structure
and characteristics are similar to those of the original graph. As will be
seen, a common strategy in these methods is to rely on spectral properties to
define the coarse graph. | [
"cs.LG",
"cs.DS",
"cs.NA",
"math.NA"
] |
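One classical coarsening strategy from the scientific-computing side of this survey, sketched in numpy: heavy-edge matching followed by a Galerkin-style projection of the adjacency matrix. Spectral coarsening methods mentioned above follow the same project-and-reduce pattern with differently chosen partitions.

import numpy as np

def heavy_edge_coarsen(A):
    """One level of heavy-edge matching: greedily pair each unmatched node
    with its heaviest unmatched neighbor, then build the coarse graph via
    the projection P.T @ A @ P."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    match = -np.ones(n, dtype=int)
    for i in np.argsort(-A.sum(axis=1)):          # visit heavy nodes first
        if match[i] >= 0:
            continue
        nbrs = [j for j in np.argsort(-A[i])
                if A[i, j] > 0 and match[j] < 0 and j != i]
        match[i] = nbrs[0] if nbrs else i         # pair up, or stay a singleton
        match[match[i]] = i
    coarse_id, next_id = {}, 0                    # one coarse index per pair
    for i in range(n):
        key = tuple(sorted((int(i), int(match[i]))))
        if key not in coarse_id:
            coarse_id[key] = next_id
            next_id += 1
    P = np.zeros((n, next_id))
    for i in range(n):
        P[i, coarse_id[tuple(sorted((int(i), int(match[i]))))]] = 1.0
    return P.T @ A @ P, P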
We propose a universal and physically realizable adversarial attack on a
cascaded multi-modal deep learning network (DNN), in the context of
self-driving cars. DNNs have achieved high performance in 3D object detection,
but they are known to be vulnerable to adversarial attacks. These attacks have
been heavily investigated in the RGB image domain and more recently in the
point cloud domain, but rarely in both domains simultaneously - a gap to be
filled in this paper. We use a single 3D mesh and differentiable rendering to
explore how perturbing the mesh's geometry and texture can reduce the
robustness of DNNs to adversarial attacks. We attack a prominent cascaded
multi-modal DNN, the Frustum-Pointnet model. Using the popular KITTI benchmark,
we showed that the proposed universal multi-modal attack was successful in
reducing the model's ability to detect a car by nearly 73%. This work can aid
in the understanding of what the cascaded RGB-point cloud DNN learns and its
vulnerability to adversarial attacks. | [
"cs.CV",
"eess.IV"
] |
Social reviews are indispensable resources for modern consumers' decision
making. For financial gain, companies pay fraudsters, preferably in groups, to
demote or promote products and services since consumers are more likely to be
misled by a large number of similar reviews from groups. Recent approaches on
fraudster group detection employed handcrafted features of group behaviors
without considering the semantic relation between reviews from the reviewers in
a group. In this paper, we propose the first neural approach, HIN-RNN, a
Heterogeneous Information Network (HIN) Compatible RNN for fraudster group
detection that requires no handcrafted features. HIN-RNN provides a unifying
architecture for representation learning of each reviewer, with the initial
vector as the sum of word embeddings of all review text written by the same
reviewer, concatenated by the ratio of negative reviews. Given a co-review
network representing reviewers who have reviewed the same items with the same
ratings and the reviewers' vector representation, a collaboration matrix is
acquired through HIN-RNN training. The proposed approach is confirmed to be
effective with marked improvement over state-of-the-art approaches on both the
Yelp (22% and 12% in terms of recall and F1-value, respectively) and Amazon (4%
and 2% in terms of recall and F1-value, respectively) datasets. | [
"cs.LG",
"cs.NE",
"cs.SI"
] |
Graph neural networks have recently achieved great successes in predicting
quantum mechanical properties of molecules. These models represent a molecule
as a graph using only the distance between atoms (nodes). They do not, however,
consider the spatial direction from one atom to another, despite directional
information playing a central role in empirical potentials for molecules, e.g.
in angular potentials. To alleviate this limitation we propose directional
message passing, in which we embed the messages passed between atoms instead of
the atoms themselves. Each message is associated with a direction in coordinate
space. These directional message embeddings are rotationally equivariant since
the associated directions rotate with the molecule. We propose a message
passing scheme analogous to belief propagation, which uses the directional
information by transforming messages based on the angle between them.
Additionally, we use spherical Bessel functions and spherical harmonics to
construct theoretically well-founded, orthogonal representations that achieve
better performance than the currently prevalent Gaussian radial basis
representations while using fewer than 1/4 of the parameters. We leverage these
innovations to construct the directional message passing neural network
(DimeNet). DimeNet outperforms previous GNNs on average by 76% on MD17 and by
31% on QM9. Our implementation is available online. | [
"cs.LG",
"physics.comp-ph",
"stat.ML"
] |
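A heavily simplified sketch of directional message passing as described above: the message on edge j->i is updated from messages on incoming edges k->j, modulated by the angle between the two edge directions. Cosine-power features stand in for DimeNet's spherical Bessel/harmonic basis, and the aggregation is deliberately naive.

import numpy as np

def directional_messages(pos, edges, msg, n_basis=4):
    """One directional message-passing step over edge messages.

    pos:   {node: (3,) coordinates}; edges: list of (source, target) tuples;
    msg:   {(source, target): message vector}."""
    new_msg = {}
    for (j, i) in edges:
        d_ji = pos[i] - pos[j]
        acc = np.zeros_like(msg[(j, i)])
        for (k, jj) in edges:
            if jj != j or k == i:                  # only incoming k->j, k != i
                continue
            d_kj = pos[j] - pos[k]
            cos = d_ji @ d_kj / (np.linalg.norm(d_ji) * np.linalg.norm(d_kj))
            angle_feats = np.array([cos ** p for p in range(n_basis)])
            acc += angle_feats.mean() * msg[(k, jj)]   # angle-weighted aggregation
        new_msg[(j, i)] = msg[(j, i)] + acc
    return new_msg

pos = {0: np.array([0.0, 0, 0]), 1: np.array([1.0, 0, 0]), 2: np.array([1.0, 1, 0])}
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]
msg = {e: np.ones(4) for e in edges}
msg = directional_messages(pos, edges, msg)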
Adversarial examples can cause catastrophic mistakes in Deep Neural Network
(DNNs) based vision systems e.g., for classification, segmentation and object
detection. The vulnerability of DNNs against such attacks can prove a major
roadblock towards their real-world deployment. Transferability of adversarial
examples demand generalizable defenses that can provide cross-task protection.
Adversarial training that enhances robustness by modifying target model's
parameters lacks such generalizability. On the other hand, different input
processing based defenses fall short in the face of continuously evolving
attacks. In this paper, we take the first step to combine the benefits of both
approaches and propose a self-supervised adversarial training mechanism in the
input space. By design, our defense is a generalizable approach and provides
significant robustness against \textbf{unseen} adversarial attacks (e.g., by
reducing the success rate of translation-invariant \textbf{ensemble} attack
from 82.6\% to 31.9\% in comparison to previous state-of-the-art). It can be
deployed as a plug-and-play solution to protect a variety of vision systems, as
we demonstrate for the case of classification, segmentation and detection. Code
is available at: {\small\url{https://github.com/Muzammal-Naseer/NRP}}. | [
"cs.CV"
] |
With the wide use of intelligent systems in different domains, and in order to
increase driver and pedestrian safety, road and traffic sign recognition has
been a challenging and important task for many years. However, studies in the
field of traffic sign detection and recognition that address the Arabic context
are still insufficient. Detecting the road signs present in the scene is one of
the main stages of traffic sign detection and recognition. In this paper, we
present an efficient solution to enhance road sign detection performance,
including in the Arabic context, based on color segmentation, the Randomized
Hough Transform, and the combination of Zernike moments and Haralick features.
Segmentation stage is useful to determine the Region of Interest (ROI) in the
image. The Randomized Hough Transform (RHT) is used to detect the circular and
octagonal shapes. This stage is improved by the extraction of the Haralick
features and Zernike moments, which we then use as input to an SVM-based
classifier. Experimental results show that the proposed approach improves
detection precision. | [
"cs.CV",
"eess.IV"
] |
Model selection has been proven an effective strategy for improving accuracy
in time series forecasting applications. However, when dealing with
hierarchical time series, apart from selecting the most appropriate forecasting
model, forecasters have also to select a suitable method for reconciling the
base forecasts produced for each series to make sure they are coherent.
Although some hierarchical forecasting methods like minimum trace are strongly
supported both theoretically and empirically for reconciling the base
forecasts, there are still circumstances under which they might not produce the
most accurate results, being outperformed by other methods. In this paper we
propose an approach for dynamically selecting the most appropriate hierarchical
forecasting method, achieving better forecasting accuracy along with
coherence. The approach, to be called conditional hierarchical forecasting, is
based on Machine Learning classification methods and uses time series features
as leading indicators for performing the selection for each hierarchy examined
considering a variety of alternatives. Our results suggest that conditional
hierarchical forecasting leads to significantly more accurate forecasts than
standard approaches, especially at lower hierarchical levels. | [
"cs.LG",
"stat.ML"
] |
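A sketch of the selection mechanism described above, assuming a backtest has already labeled which reconciliation method (e.g. 0 = bottom-up, 1 = minimum trace) was most accurate per hierarchy; the feature set and the random stand-in labels are purely illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def series_features(y):
    """A few cheap time-series features used as leading indicators."""
    d = np.diff(y)
    return [y.mean(), y.std(), d.std(),                  # level / volatility
            np.corrcoef(y[:-1], y[1:])[0, 1]]            # lag-1 autocorrelation

# Hypothetical training data: per-hierarchy features plus the label of the
# reconciliation method that won a backtest (the backtest itself is assumed
# to exist elsewhere).
rng = np.random.default_rng(0)
X = np.array([series_features(rng.normal(size=100).cumsum()) for _ in range(200)])
best_method = rng.integers(0, 2, size=200)

selector = RandomForestClassifier(n_estimators=100).fit(X, best_method)
choice = selector.predict([series_features(rng.normal(size=100).cumsum())])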
In order to satisfy safety conditions, an agent may be constrained from
acting freely. A safe controller can be designed a priori if an environment is
well understood, but not when learning is employed. In particular,
reinforcement learned (RL) controllers require exploration, which can be
hazardous in safety critical situations. We study the benefits of giving
structure to the constraints of a constrained Markov decision process by
specifying them in formal languages as a step towards using safety methods from
software engineering and controller synthesis. We instantiate these constraints
as finite automata to efficiently recognise constraint violations. Constraint
states are then used to augment the underlying MDP state and to learn a dense
cost function, easing the problem of quickly learning joint MDP/constraint
dynamics. We empirically evaluate the effect of these methods on training a
variety of RL algorithms over several constraints specified in Safety Gym,
MuJoCo, and Atari environments. | [
"cs.LG",
"stat.ML"
] |
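A plain-Python sketch of the constraint machinery above: a finite automaton consumes environment events, its state augments the MDP state, and a dense cost grows as the automaton approaches a violating state. The example constraint and the linear cost schedule are assumptions of this sketch.

class ConstraintAutomaton:
    """Finite automaton recognising constraint violations; its state augments
    the MDP state and provides a dense cost signal."""

    def __init__(self, transitions, violating, start=0):
        self.transitions = transitions     # {(state, event): next_state}
        self.violating = violating         # absorbing "unsafe" states
        self.state = start

    def step(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state in self.violating

# Constraint: "never observe event 'hot' twice in a row".
auto = ConstraintAutomaton(
    transitions={(0, "hot"): 1, (1, "hot"): 2, (1, "cool"): 0},
    violating={2},
)

for event in ["hot", "cool", "hot", "hot"]:
    violated = auto.step(event)
    dense_cost = auto.state / 2.0      # states ordered by proximity to violation
    augmented_state = ("mdp_state_here", auto.state)  # state augmentation
    print(event, auto.state, dense_cost, violated)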
With the wide applications of Unmanned Aerial Vehicle (UAV) in engineering
such as the inspection of the electrical equipment from distance, the demands
of efficient object detection algorithms for abundant images acquired by UAV
have also been significantly increased in recent years. In this work, we study
the performance of the region-based CNN for the electrical equipment defect
detection by using the UAV images. In order to train the detection model, we
collect a UAV image dataset composed of four classes of electrical equipment
defects with thousands of annotated labels. Then, based on the region-based
faster R-CNN model, we present a multi-class defects detection model for
electrical equipment which is more efficient and accurate than traditional
single class detection methods. Technically, we have replaced the RoI pooling
layer with a similar operation in Tensorflow and increased the mini-batch size to 128
per image in the training procedure. These improvements have slightly increased
the speed of detection without any accuracy loss. Therefore, the modified
region-based CNN could simultaneously detect multi-class of defects of the
electrical devices in nearly real time. Experimental results on real-world
electrical equipment images demonstrate that the proposed method achieves
better performance than the traditional object detection algorithms in defect
detection. | [
"cs.CV"
] |
Much of the recent work on learning molecular representations has been based
on Graph Convolution Networks (GCN). These models rely on local aggregation
operations and can therefore miss higher-order graph properties. To remedy
this, we propose Path-Augmented Graph Transformer Networks (PAGTN) that are
explicitly built on longer-range dependencies in graph-structured data.
Specifically, we use path features in molecular graphs to create global
attention layers. We compare our PAGTN model against the GCN model and show
that our model consistently outperforms GCNs on molecular property prediction
datasets including quantum chemistry (QM7, QM8, QM9), physical chemistry (ESOL,
Lipophilicity) and biochemistry (BACE, BBBP). | [
"cs.LG",
"stat.ML"
] |
We prove performance guarantees of two algorithms for approximating $Q^\star$
in batch reinforcement learning. Compared to classical iterative methods such
as Fitted Q-Iteration---whose performance loss incurs quadratic dependence on
horizon---these methods estimate (some forms of) the Bellman error and enjoy
linear-in-horizon error propagation, a property established for the first time
for algorithms that rely solely on batch data and output stationary policies.
One of the algorithms uses a novel and explicit importance-weighting correction
to overcome the infamous "double sampling" difficulty in Bellman error
estimation, and does not use any squared losses. Our analyses reveal its
distinct characteristics and potential advantages compared to classical
algorithms. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We propose an efficient and straightforward method for compressing deep
convolutional neural networks (CNNs) that uses basis filters to represent the
convolutional layers, and optimizes the performance of the compressed network
directly in the basis space. Specifically, any spatial convolution layer of the
CNN can be replaced by two successive convolution layers: the first is a set of
three-dimensional orthonormal basis filters, followed by a layer of
one-dimensional filters that represents the original spatial filters in the
basis space. We jointly fine-tune both the basis and the filter representation
to directly mitigate any performance loss due to the truncation. Generality of
the proposed approach is demonstrated by applying it to several well known deep
CNN architectures and data sets for image classification and object detection.
We also present the execution time and power usage at different compression
levels on the Xavier Jetson AGX processor. | [
"cs.CV"
] |
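A numpy sketch of the factorization above: truncated SVD splits a spatial convolution into a set of orthonormal basis filters plus a 1x1 combination layer. The joint fine-tuning of basis and coefficients that the paper performs afterwards is not shown here.

import numpy as np

def factorize_conv(W, rank):
    """Split a conv layer W of shape (out, in, k, k) into `rank` orthonormal
    3D basis filters plus a 1x1 combination layer, via truncated SVD."""
    out_c, in_c, k, _ = W.shape
    flat = W.reshape(out_c, -1)                    # (out, in*k*k)
    U, S, Vt = np.linalg.svd(flat, full_matrices=False)
    basis = Vt[:rank].reshape(rank, in_c, k, k)    # orthonormal basis filters
    mix = U[:, :rank] * S[:rank]                   # (out, rank) 1x1 filters
    return basis, mix

# sanity check: reconstruction error drops as the rank grows
W = np.random.default_rng(0).normal(size=(64, 32, 3, 3))
for r in (8, 32, 64):
    basis, mix = factorize_conv(W, r)
    W_hat = (mix @ basis.reshape(r, -1)).reshape(W.shape)
    print(r, np.linalg.norm(W - W_hat) / np.linalg.norm(W))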
A point cloud serves as a representation of the surface of a
three-dimensional (3D) shape. Deep generative models have been adapted to model
their variations typically using a map from a ball-like set of latent
variables. However, previous approaches did not pay much attention to the
topological structure of a point cloud, despite the fact that a continuous map
cannot express varying numbers of holes and intersections. Moreover, a point
cloud is often composed of multiple subparts, which are also difficult to express. In
this study, we propose ChartPointFlow, a flow-based generative model with
multiple latent labels for 3D point clouds. Each label is assigned to points in
an unsupervised manner. Then, a map conditioned on a label is assigned to a
continuous subset of a point cloud, similar to a chart of a manifold. This
enables our proposed model to preserve the topological structure with clear
boundaries, whereas previous approaches tend to generate blurry point clouds
and fail to generate holes. The experimental results demonstrate that
ChartPointFlow achieves state-of-the-art performance in terms of generation and
reconstruction compared with other point cloud generators. Moreover,
ChartPointFlow divides an object into semantic subparts using charts, and it
demonstrates superior performance in case of unsupervised segmentation. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
Prediction of seizures before they occur is vital for bringing normalcy to the
lives of patients. Researchers have employed machine learning methods with
hand-crafted features for seizure prediction; however, with such methods it is
complicated to select the best model or the best features. Deep learning
methods are beneficial in the sense of automatic feature extraction. One of the
roadblocks to accurate seizure prediction is the scarcity of epileptic seizure
data. This paper addresses this problem by proposing a deep convolutional
generative adversarial network to generate synthetic EEG samples. We use two
methods to validate synthesized data namely, one-class SVM and a new proposal
which we refer to as convolutional epileptic seizure predictor (CESP). Another
objective of our study is to evaluate performance of well-known deep learning
models (e.g., VGG16, VGG19, ResNet50, and Inceptionv3) by training models on
augmented data using transfer learning, with an average time of 10 min between true
prediction and seizure onset. Our results show that CESP model achieves
sensitivity of 78.11% and 88.21%, and FPR of 0.27/h and 0.14/h for training on
synthesized and testing on real Epilepsyecosystem and CHB-MIT datasets,
respectively. The effective results of CESP trained on synthesized data show
that the synthetic data captured the correlation between features and labels
very well. We also show that employing transfer learning and data augmentation
in a patient-specific manner provides the highest accuracy, with sensitivity of 90.03%
and 0.03 FPR/h which was achieved using Inceptionv3, and that augmenting data
with samples generated from DCGAN increased prediction results of our CESP
model and Inceptionv3 by 4-5% as compared to state-of-the-art traditional
augmentation techniques. Finally, we note that prediction results of CESP
achieved by using augmented data are better than chance level for both
datasets. | [
"cs.LG"
] |
Edge detection is among the most fundamental vision problems for its role in
perceptual grouping and its wide applications. Recent advances in
representation learning have led to considerable improvements in this area.
Many state-of-the-art edge detection models are learned with fully
convolutional networks (FCNs). However, FCN-based edge learning tends to be
vulnerable to misaligned labels due to the delicate structure of edges. While
such a problem has been considered in evaluation benchmarks, a similar issue has not
been explicitly addressed in general edge learning. In this paper, we show that
label misalignment can cause considerably degraded edge learning quality, and
address this issue by proposing a simultaneous edge alignment and learning
framework. To this end, we formulate a probabilistic model where edge alignment
is treated as latent variable optimization, and is learned end-to-end during
network training. Experiments show several applications of this work, including
improved edge detection with state-of-the-art performance, and automatic
refinement of noisy annotations. | [
"cs.CV",
"cs.LG",
"cs.MM",
"cs.RO"
] |
We introduce a novel deep learning framework for data-driven motion
retargeting between skeletons, which may have different structure, yet
corresponding to homeomorphic graphs. Importantly, our approach learns how to
retarget without requiring any explicit pairing between the motions in the
training set. We leverage the fact that different homeomorphic skeletons may be
reduced to a common primal skeleton by a sequence of edge merging operations,
which we refer to as skeletal pooling. Thus, our main technical contribution is
the introduction of novel differentiable convolution, pooling, and unpooling
operators. These operators are skeleton-aware, meaning that they explicitly
account for the skeleton's hierarchical structure and joint adjacency, and
together they serve to transform the original motion into a collection of deep
temporal features associated with the joints of the primal skeleton. In other
words, our operators form the building blocks of a new deep motion processing
framework that embeds the motion into a common latent space, shared by a
collection of homeomorphic skeletons. Thus, retargeting can be achieved simply
by encoding to, and decoding from this latent space. Our experiments show the
effectiveness of our framework for motion retargeting, as well as motion
processing in general, compared to existing approaches. Our approach is also
quantitatively evaluated on a synthetic dataset that contains pairs of motions
applied to different skeletons. To the best of our knowledge, our method is the
first to perform retargeting between skeletons with differently sampled
kinematic chains, without any paired examples. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
We introduce the active audio-visual source separation problem, where an
agent must move intelligently in order to better isolate the sounds coming from
an object of interest in its environment. The agent hears multiple audio
sources simultaneously (e.g., a person speaking down the hall in a noisy
household) and it must use its eyes and ears to automatically separate out the
sounds originating from a target object within a limited time budget. Towards
this goal, we introduce a reinforcement learning approach that trains movement
policies controlling the agent's camera and microphone placement over time,
guided by the improvement in predicted audio separation quality. We demonstrate
our approach in scenarios motivated by both augmented reality (system is
already co-located with the target object) and mobile robotics (agent begins
arbitrarily far from the target object). Using state-of-the-art realistic
audio-visual simulations in 3D environments, we demonstrate our model's ability
to find minimal movement sequences with maximal payoff for audio source
separation. Project: http://vision.cs.utexas.edu/projects/move2hear. | [
"cs.CV",
"cs.LG",
"cs.RO",
"cs.SD",
"eess.AS"
] |
A crucial factor to trust Machine Learning (ML) algorithm decisions is a good
representation of its application field by the training dataset. This is
particularly true when parts of the training data have been artificially
generated to overcome common training problems such as lack of data or
imbalanced dataset. Over the last few years, Generative Adversarial Networks
(GANs) have shown remarkable results in generating realistic data. However,
this ML approach lacks an objective function to evaluate the quality of the
generated data. Numerous GAN applications focus on generating image data mostly
because they can be easily evaluated by the human eye. Fewer efforts have been
made to generate time series data. Assessing their quality is more complicated,
particularly for technical data. In this paper, we propose a human-centered
approach supporting a ML or domain expert to accomplish this task using Visual
Analytics (VA) techniques. The presented approach consists of two views, namely
a GAN Iteration View showing similarity metrics between real and generated data
over the iterations of the generation process and a Detailed Comparative View
equipped with different time series visualizations such as TimeHistograms, to
compare the generated data at different iteration steps. Starting from the GAN
Iteration View, the user can choose suitable iteration steps for detailed
inspection. We evaluate our approach with a usage scenario that enabled an
efficient comparison of two different GAN models. | [
"cs.LG",
"cs.HC",
"eess.IV"
] |
In this paper, we introduce a Point Recurrent Neural Network (PointRNN) for
moving point cloud processing. At each time step, PointRNN takes point
coordinates $\boldsymbol{P} \in \mathbb{R}^{n \times 3}$ and point features
$\boldsymbol{X} \in \mathbb{R}^{n \times d}$ as input ($n$ and $d$ denote the
number of points and the number of feature channels, respectively). The state
of PointRNN is composed of point coordinates $\boldsymbol{P}$ and point states
$\boldsymbol{S} \in \mathbb{R}^{n \times d'}$ ($d'$ denotes the number of state
channels). Similarly, the output of PointRNN is composed of $\boldsymbol{P}$
and new point features $\boldsymbol{Y} \in \mathbb{R}^{n \times d''}$ ($d''$
denotes the number of new feature channels). Since point clouds are orderless,
point features and states from two time steps cannot be directly combined.
Therefore, a point-based spatiotemporally-local correlation is adopted to
aggregate point features and states according to point coordinates. We further
propose two variants of PointRNN, i.e., Point Gated Recurrent Unit (PointGRU)
and Point Long Short-Term Memory (PointLSTM). We apply PointRNN, PointGRU and
PointLSTM to moving point cloud prediction, which aims to predict the future
trajectories of points in a set given their history movements. Experimental
results show that PointRNN, PointGRU and PointLSTM are able to produce correct
predictions on both synthetic and real-world datasets, demonstrating their
ability to model point cloud sequences. The code has been released at
\url{https://github.com/hehefan/PointRNN}. | [
"cs.CV"
] |
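A simplified numpy version of one PointRNN update, assuming a k-nearest-neighbor, inverse-distance aggregation as the point-based spatiotemporally-local correlation; the actual model uses learned point convolutions and the gated PointGRU/PointLSTM variants.

import numpy as np

def pointrnn_step(P_t, X_t, P_prev, S_prev, W, k=4):
    """For each point at time t, gather the states of its k nearest points
    from time t-1, then merge them with the current features."""
    n = len(P_t)
    S_t = np.zeros((n, S_prev.shape[1]))
    for i in range(n):
        dists = np.linalg.norm(P_prev - P_t[i], axis=1)
        nbrs = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nbrs] + 1e-8)
        agg = (weights[:, None] * S_prev[nbrs]).sum(0) / weights.sum()
        S_t[i] = np.tanh(np.concatenate([X_t[i], agg]) @ W)  # merge and update
    return P_t, S_t

rng = np.random.default_rng(0)
n, d, d_s = 16, 8, 32
W = rng.normal(size=(d + d_s, d_s)) * 0.1
P, S = rng.normal(size=(n, 3)), np.zeros((n, d_s))
for _ in range(5):                                 # unroll over a sequence
    P_new = P + 0.05 * rng.normal(size=(n, 3))     # points drift over time
    X_new = rng.normal(size=(n, d))
    P, S = pointrnn_step(P_new, X_new, P, S, W)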
Active vision is inherently attention-driven: The agent actively selects
views to attend in order to fast achieve the vision task while improving its
internal representation of the scene being observed. Inspired by the recent
success of attention-based models in 2D vision tasks based on single RGB
images, we propose to address the multi-view depth-based active object
recognition using attention mechanism, through developing an end-to-end
recurrent 3D attentional network. The architecture takes advantage of a
recurrent neural network (RNN) to store and update an internal representation.
Our model, trained with 3D shape datasets, is able to iteratively attend to the
best views targeting an object of interest for recognizing it. To realize 3D
view selection, we derive a 3D spatial transformer network which is
differentiable for training with backpropagation, achieving much faster
convergence than the reinforcement learning employed by most existing
attention-based models. Experiments show that our method, with only depth
input, achieves state-of-the-art next-best-view performance in time efficiency
and recognition accuracy. | [
"cs.CV"
] |
Introduction: Real-world data generated from clinical practice can be used to
analyze the real-world evidence (RWE) of COVID-19 pharmacotherapy and validate
the results of randomized clinical trials (RCTs). Machine learning (ML) methods
are being used in RWE and are promising tools for precision-medicine. In this
study, ML methods are applied to study the efficacy of therapies on COVID-19
hospital admissions in the Valencian Region in Spain. Methods: 5244 and 1312
COVID-19 hospital admissions, dated between January 2020 and January 2021 from
10 health departments, were used respectively for training and validation of
separate treatment-effect models (TE-ML) for remdesivir, corticosteroids,
tocilizumab, lopinavir-ritonavir, azithromycin and
chloroquine/hydroxychloroquine. 2390 admissions from 2 additional health
departments were reserved as an independent test to analyze retrospectively the
survival benefits of therapies in the population selected by the TE-ML models
using cox-proportional hazard models. TE-ML models were adjusted using
treatment propensity scores to control for pre-treatment confounding variables
associated with outcome, and further evaluated for futility. The ML architecture was
based on boosted decision-trees. Results: In the populations identified by the
TE-ML models, only Remdesivir and Tocilizumab were significantly associated
with an increase in survival time, with hazard ratios of 0.41 (P = 0.04) and
0.21 (P = 0.001), respectively. No survival benefits from chloroquine
derivatives, lopinavir-ritonavir and azithromycin were demonstrated. Tools to
explain the predictions of TE-ML models are explored at patient-level as
potential tools for personalized decision making and precision medicine.
Conclusion: ML methods are suitable tools toward RWE analysis of COVID-19
pharmacotherapies. Results obtained reproduce published results on RWE and
validate the results from RCTs. | [
"cs.LG",
"stat.AP"
] |
Due to the increasing use of machine learning in practice it becomes more and
more important to be able to explain the prediction and behavior of machine
learning models. An instance of explanations are counterfactual explanations
which provide an intuitive and useful explanations of machine learning models.
In this survey we review model-specific methods for efficiently computing
counterfactual explanations of many different machine learning models and
propose methods for models that have not been considered in literature so far. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
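As a concrete instance of the model-specific methods this survey covers, the counterfactual of a linear classifier has a well-known closed form: project the input onto the separating hyperplane and step just across it. The margin constant here is an assumption of this sketch.

import numpy as np

def linear_counterfactual(x, w, b, target=1, margin=1e-4):
    """Closest point (in Euclidean norm) classified as `target` by the
    linear classifier sign(w.x + b): the orthogonal projection onto the
    decision hyperplane, pushed across by a small margin."""
    score = w @ x + b
    desired_sign = 1.0 if target == 1 else -1.0
    if np.sign(score) == desired_sign:
        return x.copy()                        # already classified as desired
    shift = (score / (w @ w)) * w              # projection onto the hyperplane
    return x - shift + desired_sign * margin * w / np.linalg.norm(w)

w, b = np.array([2.0, -1.0]), -0.5
x = np.array([0.0, 1.0])                       # w.x + b = -1.5 -> class 0
x_cf = linear_counterfactual(x, w, b, target=1)
print(x_cf, w @ x_cf + b)                      # lands just above the boundary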
We present a method to populate an unknown environment with models of
previously seen objects, placed in a Euclidean reference frame that is inferred
causally and on-line using monocular video along with inertial sensors. The
system we implement returns a sparse point cloud for the regions of the scene
that are visible but not recognized as a previously seen object, and a detailed
object model and its pose in the Euclidean frame otherwise. The system includes
bottom-up and top-down components, whereby deep networks trained for detection
provide likelihood scores for object hypotheses provided by a nonlinear filter,
whose state serves as memory. Additional networks provide likelihood scores for
edges, which complements detection networks trained to be invariant to small
deformations. We test our algorithm on existing datasets, and also introduce
the VISMA dataset, that provides ground truth pose, point-cloud map, and object
models, along with time-stamped inertial measurements. | [
"cs.CV",
"cs.RO"
] |
Many applications require the ability to judge uncertainty of time-series
forecasts. Uncertainty is often specified as point-wise error bars around a
mean or median forecast. Due to temporal dependencies, such a method obscures
some information. We would ideally have a way to query the posterior
probability of the entire time-series given the predictive variables, or at a
minimum, be able to draw samples from this distribution. We use a Bayesian
dictionary learning algorithm to statistically generate an ensemble of
forecasts. We show that the algorithm performs as well as a physics-based
ensemble method for temperature forecasts for Houston. We conclude that the
method shows promise for scenario forecasting where physics-based methods are
absent. | [
"stat.ML",
"cs.LG",
"stat.AP"
] |
We introduce MAgent, a platform to support research and development of
many-agent reinforcement learning. Unlike previous research platforms on single
or multi-agent reinforcement learning, MAgent focuses on supporting the tasks
and the applications that require hundreds to millions of agents. Within the
interactions among a population of agents, it enables not only the study of
learning algorithms for agents' optimal policies, but more importantly, the
observation and understanding of individual agent's behaviors and social
phenomena emerging from the AI society, including communication languages,
leaderships, altruism. MAgent is highly scalable and can host up to one million
agents on a single GPU server. MAgent also provides flexible configurations for
AI researchers to design their customized environments and agents. In this
demo, we present three environments designed on MAgent and show emerged
collective intelligence by learning from scratch. | [
"cs.LG",
"cs.AI",
"cs.MA"
] |
There has been an increased interest in discovering heuristics for
combinatorial problems on graphs through machine learning. While existing
techniques have primarily focused on obtaining high-quality solutions,
scalability to billion-sized graphs has not been adequately addressed. In
addition, the impact of budget-constraint, which is necessary for many
practical scenarios, remains to be studied. In this paper, we propose a
framework called GCOMB to bridge these gaps. GCOMB trains a Graph Convolutional
Network (GCN) using a novel probabilistic greedy mechanism to predict the
quality of a node. To further facilitate the combinatorial nature of the
problem, GCOMB utilizes a Q-learning framework, which is made efficient through
importance sampling. We perform extensive experiments on real graphs to
benchmark the efficiency and efficacy of GCOMB. Our results establish that
GCOMB is 100 times faster and marginally better in quality than
state-of-the-art algorithms for learning combinatorial algorithms.
Additionally, a case-study on the practical combinatorial problem of Influence
Maximization (IM) shows GCOMB is 150 times faster than the specialized IM
algorithm IMM with similar quality. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Action-value estimation is a critical component of many reinforcement
learning (RL) methods whereby sample complexity relies heavily on how fast a
good estimator for action value can be learned. By viewing this problem through
the lens of representation learning, good representations of both state and
action can facilitate action-value estimation. While advances in deep learning
have seamlessly driven progress in learning state representations, given the
specificity of the notion of agency to RL, little attention has been paid to
learning action representations. We conjecture that leveraging the
combinatorial structure of multi-dimensional action spaces is a key ingredient
for learning good representations of action. To test this, we set forth the
action hypergraph networks framework -- a class of functions for learning
action representations in multi-dimensional discrete action spaces with a
structural inductive bias. Using this framework we realise an agent class based
on a combination with deep Q-networks, which we dub hypergraph Q-networks. We
show the effectiveness of our approach on a myriad of domains: illustrative
prediction problems under minimal confounding effects, Atari 2600 games, and
discretised physical control benchmarks. | [
"cs.LG",
"stat.ML"
] |
Learned communication makes multi-agent systems more effective by aggregating
distributed information. However, it also exposes individual agents to the
threat of erroneous messages they might receive. In this paper, we study the
setting proposed in V2VNet, where nearby self-driving vehicles jointly perform
object detection and motion forecasting in a cooperative manner. Despite a huge
performance boost when the agents solve the task together, the gain is quickly
diminished in the presence of pose noise since the communication relies on
spatial transformations. Hence, we propose a novel neural reasoning framework
that learns to communicate, to estimate potential errors, and finally, to reach
a consensus about those errors. Experiments confirm that our proposed framework
significantly improves the robustness of multi-agent self-driving perception
and motion forecasting systems under realistic and severe localization noise. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
We present a multiple instance learning class activation map (MIL-CAM)
approach for pixel-level minirhizotron image segmentation given weak
image-level labels. Minirhizotrons are used to image plant roots in situ.
Minirhizotron imagery is often composed of soil containing a few long and thin
root objects of small diameter. The roots prove to be challenging for existing
semantic image segmentation methods to discriminate. In addition to learning
from weak labels, our proposed MIL-CAM approach re-weights the root versus soil
pixels during analysis for improved performance due to the heavy imbalance
between soil and root pixels. The proposed approach outperforms other attention
map and multiple instance learning methods for localization of root objects in
minirhizotron imagery. | [
"cs.CV"
] |
Recursive stochastic algorithms have gained significant attention in the
recent past due to data driven applications. Examples include stochastic
gradient descent for solving large-scale optimization problems and empirical
dynamic programming algorithms for solving Markov decision problems. These
recursive stochastic algorithms approximate certain contraction operators and
can be viewed within the framework of iterated random operators. Accordingly,
we consider iterated random operators over a Polish space that simulate
iterated contraction operator over that Polish space. Assume that the iterated
random operators are indexed by certain batch sizes such that as batch sizes
grow to infinity, each realization of the random operator converges (in some
sense) to the contraction operator it is simulating. We show that starting from
the same initial condition, the distribution of the random sequence generated
by the iterated random operators converges weakly to the trajectory generated
by the contraction operator. We further show that under certain conditions, the
time average of the random sequence converges to the spatial mean of the
invariant distribution. We then apply these results to logistic regression,
empirical value iteration, and empirical Q value iteration for finite state
finite action MDPs to illustrate the general theory developed here. | [
"cs.LG",
"cs.SY",
"math.OC",
"math.PR",
"stat.ML"
] |
Convolutional neural networks (CNNs) have recently been very successful in a
variety of computer vision tasks, especially on those linked to recognition.
Optical flow estimation has not been among the tasks where CNNs were
successful. In this paper we construct appropriate CNNs which are capable of
solving the optical flow estimation problem as a supervised learning task. We
propose and compare two architectures: a generic architecture and another one
including a layer that correlates feature vectors at different image locations.
Since existing ground truth data sets are not sufficiently large to train a
CNN, we generate a synthetic Flying Chairs dataset. We show that networks
trained on this unrealistic data still generalize very well to existing
datasets such as Sintel and KITTI, achieving competitive accuracy at frame
rates of 5 to 10 fps. | [
"cs.CV",
"cs.LG",
"I.2.6; I.4.8"
] |
Deep learning (DL) based semantic segmentation methods have been providing
state-of-the-art performance in the last few years. More specifically, these
techniques have been successfully applied to medical image classification,
segmentation, and detection tasks. One deep learning technique, U-Net, has
become one of the most popular for these applications. In this paper, we
propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well
as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net
models, which are named RU-Net and R2U-Net respectively. The proposed models
utilize the power of U-Net, Residual Network, as well as RCNN. There are
several advantages of these proposed architectures for segmentation tasks.
First, a residual unit helps when training deep architecture. Second, feature
accumulation with recurrent residual convolutional layers ensures better
feature representation for segmentation tasks. Third, it allows us to design
better U-Net architectures with the same number of network parameters and
better performance for medical image segmentation. The proposed models are
tested on three benchmark datasets: blood vessel segmentation in retina images,
skin cancer segmentation, and lung lesion segmentation. The experimental
results show superior performance on segmentation tasks compared to equivalent
models including U-Net and residual U-Net (ResU-Net). | [
"cs.CV"
] |
In this paper, we address the problem of car detection from aerial images
using Convolutional Neural Networks (CNN). This problem presents additional
challenges as compared to car (or any object) detection from ground images
because features of vehicles from aerial images are more difficult to discern.
To investigate this issue, we assess the performance of two state-of-the-art
CNN algorithms, namely Faster R-CNN, which is the most popular region-based
algorithm, and YOLOv3, which is known to be the fastest detection algorithm. We
analyze two datasets with different characteristics to check the impact of
various factors, such as UAV's altitude, camera resolution, and object size. A
total of 39 training experiments were conducted to account for the effect of
different hyperparameter values. The objective of this work is to conduct the
most robust and exhaustive comparison between these two cutting-edge algorithms
on the specific domain of aerial images. By using a variety of metrics, we show
that YOLOv3 yields better performance in most configurations, except that it
exhibits a lower recall and less confident detections when object sizes and
scales in the testing dataset differ largely from those in the training
dataset. | [
"cs.CV",
"cs.LG",
"cs.NE",
"eess.IV"
] |
Remarkable performance from Transformer networks in Natural Language
Processing has promoted the development of these models for computer
vision tasks such as image recognition and segmentation. In this paper, we
introduce a novel framework, called Multi-level Multi-scale Point Transformer
(MLMSPT) that works directly on the irregular point clouds for representation
learning. Specifically, a point pyramid transformer is investigated to model
features with the diverse resolutions or scales that we define, followed by a
multi-level transformer module to aggregate contextual information from
different levels of each scale and enhance their interactions. While a
multi-scale transformer module is designed to capture the dependencies among
representations across different scales. Extensive evaluation on public
benchmark datasets demonstrate the effectiveness and the competitive
performance of our methods on 3D shape classification, part segmentation and
semantic segmentation tasks. | [
"cs.CV"
] |
Consider $N$ points in $\mathbb{R}^d$ and $M$ local coordinate systems that
are related through unknown rigid transforms. For each point we are given
(possibly noisy) measurements of its local coordinates in some of the
coordinate systems. Alternatively, for each coordinate system, we observe the
coordinates of a subset of the points. The problem of estimating the global
coordinates of the $N$ points (up to a rigid transform) from such measurements
comes up in distributed approaches to molecular conformation and sensor network
localization, and also in computer vision and graphics.
The least-squares formulation of this problem, though non-convex, has a well
known closed-form solution when $M=2$ (based on the singular value
decomposition). However, no closed form solution is known for $M\geq 3$.
In this paper, we demonstrate how the least-squares formulation can be
relaxed into a convex program, namely a semidefinite program (SDP). By setting
up connections between the uniqueness of this SDP and results from rigidity
theory, we prove conditions for exact and stable recovery for the SDP
relaxation. In particular, we prove that the SDP relaxation can guarantee
recovery under more adversarial conditions compared to earlier proposed
spectral relaxations, and derive error bounds for the registration error
incurred by the SDP relaxation.
We also present results of numerical experiments on simulated data to confirm
the theoretical findings. We empirically demonstrate that (a) unlike the
spectral relaxation, the relaxation gap is mostly zero for the semidefinite
program (i.e., we are able to solve the original non-convex least-squares
problem) up to a certain noise threshold, and (b) the semidefinite program
performs significantly better than spectral and manifold-optimization methods,
particularly at large noise levels. | [
"cs.CV",
"cs.NA",
"math.NA",
"math.OC",
"90C22, 52C25, 05C50"
] |
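The $M=2$ closed form referenced above is the classical SVD-based (orthogonal
Procrustes) alignment; a minimal numpy sketch on synthetic, noiseless points
follows, with the point count and random seed as assumptions.

```python
# Recover the rigid transform mapping point set P (system 1) onto Q (system 2).
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((10, 3))
R_true = np.linalg.qr(rng.standard_normal((3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                            # force a proper rotation
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])     # same points, second system

# Center both clouds, then solve for the optimal rotation via the SVD.
Pc, Qc = P - P.mean(0), Q - Q.mean(0)
U, _, Vt = np.linalg.svd(Qc.T @ Pc)
D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])    # guard against reflections
R = U @ D @ Vt
t = Q.mean(0) - R @ P.mean(0)

print(np.allclose(P @ R.T + t, Q))                # True: exact recovery
```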
Continual learning (CL) -- the ability to continuously learn, building on
previously acquired knowledge -- is a natural requirement for long-lived
autonomous reinforcement learning (RL) agents. While building such agents, one
needs to balance opposing desiderata, such as constraints on capacity and
compute, the ability to not catastrophically forget, and to exhibit positive
transfer on new tasks. Understanding the right trade-off is conceptually and
computationally challenging, which we argue has led the community to overly
focus on catastrophic forgetting. In response to these issues, we advocate for
the need to prioritize forward transfer and propose Continual World, a
benchmark consisting of realistic and meaningfully diverse robotic tasks built
on top of Meta-World as a testbed. Following an in-depth empirical evaluation
of existing CL methods, we pinpoint their limitations and highlight unique
algorithmic challenges in the RL setting. Our benchmark aims to provide a
meaningful and computationally inexpensive challenge for the community and thus
help better understand the performance of existing and future solutions. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
Recent progress in semantic segmentation is driven by deep Convolutional
Neural Networks and large-scale labeled image datasets. However, data labeling
for pixel-wise segmentation is tedious and costly. Moreover, a trained model
can only make predictions within a set of pre-defined classes. In this paper,
we present CANet, a class-agnostic segmentation network that performs few-shot
segmentation on new classes with only a few annotated images available. Our
network consists of a two-branch dense comparison module which performs
multi-level feature comparison between the support image and the query image,
and an iterative optimization module which iteratively refines the predicted
results. Furthermore, we introduce an attention mechanism to effectively fuse
information from multiple support examples under the setting of k-shot
learning. Experiments on PASCAL VOC 2012 show that our method achieves a mean
Intersection-over-Union score of 55.4% for 1-shot segmentation and 57.1% for
5-shot segmentation, outperforming state-of-the-art methods by a large margin
of 14.6% and 13.2%, respectively. | [
"cs.CV"
] |
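A hedged sketch of the dense comparison idea behind the two-branch module
above: pool the support features over the foreground mask into a prototype,
then compare it densely against the query features with cosine similarity.
Shapes and random features are synthetic stand-ins, not the CANet
implementation.

```python
import torch
import torch.nn.functional as F

support_feat = torch.randn(1, 256, 32, 32)               # support CNN features
support_mask = (torch.rand(1, 1, 32, 32) > 0.5).float()  # foreground mask
query_feat = torch.randn(1, 256, 32, 32)                 # query CNN features

# Masked average pooling: one prototype vector for the support class.
prototype = (support_feat * support_mask).sum((2, 3)) \
    / support_mask.sum((2, 3)).clamp(min=1e-6)

# Dense comparison: cosine similarity at every query location.
proto_map = prototype[:, :, None, None].expand_as(query_feat)
sim = F.cosine_similarity(query_feat, proto_map, dim=1)
print(sim.shape)  # torch.Size([1, 32, 32])
```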
We present a generalised self-supervised learning approach for monocular
estimation of the real depth across scenes with diverse depth ranges from
1--100s of meters. Existing supervised methods for monocular depth estimation
require accurate depth measurements for training. This limitation has led to
the introduction of self-supervised methods that are trained on stereo image
pairs with a fixed camera baseline to estimate disparity which is transformed
to depth given known calibration. Self-supervised approaches have demonstrated
impressive results but do not generalise to scenes with different depth ranges
or camera baselines. In this paper, we introduce RealMonoDepth, a
self-supervised monocular depth estimation approach that learns to estimate
the real scene depth for a diverse range of indoor and outdoor scenes. We
propose a novel loss function on the true scene depth, based on relative
depth scaling and warping. This allows self-supervised training of a
single network with multiple data sets for scenes with diverse depth ranges
from both stereo-pair and in-the-wild moving-camera data sets. A comprehensive
performance evaluation across five benchmark data sets demonstrates that
RealMonoDepth provides a single trained network which generalises depth
estimation across indoor and outdoor scenes, consistently outperforming
previous self-supervised approaches. | [
"cs.CV"
] |
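As a hedged illustration of the relative depth scaling idea above, the sketch
below normalises predicted and reference depths by a robust per-image scale
(the median) before comparing them, so scenes with very different depth
ranges contribute comparably; this is an assumption for illustration, not the
paper's exact loss.

```python
import torch

def relative_depth_loss(pred, target, eps=1e-6):
    # Divide each depth map by its own median before the L1 comparison.
    pred_n = pred / (pred.flatten(1).median(dim=1).values.view(-1, 1, 1) + eps)
    tgt_n = target / (target.flatten(1).median(dim=1).values.view(-1, 1, 1) + eps)
    return torch.abs(pred_n - tgt_n).mean()

pred = torch.rand(4, 192, 640) * 100.0   # e.g. an outdoor-scale scene
target = torch.rand(4, 192, 640) * 5.0   # e.g. an indoor-scale scene
print(relative_depth_loss(pred, target))
```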
Graph neural networks (GNNs) have been widely used in representation learning
on graphs and achieved superior performance in tasks such as node
classification. However, analyzing heterogeneous graphs with different types
of nodes and links still poses great challenges for injecting the
heterogeneity into a graph neural network. A general remedy is to manually or
automatically
design meta-paths to transform a heterogeneous graph into a homogeneous graph,
but this is suboptimal since the features from the first-order neighbors are
not fully leveraged for training and inference. In this paper, we propose
simple and effective graph neural networks for heterogeneous graphs that
exclude the use of meta-paths. Specifically, our models relax the
heterogeneity stress on model parameters by expanding the model capacity of
general GNNs in an effective way. Extensive experimental results on six
real-world graphs not only show the superior performance of our proposed
models over state-of-the-art methods, but also demonstrate a potentially
good balance
between reducing the heterogeneity stress and increasing the parameter size.
Our code is freely available for reproducing our results. | [
"cs.LG",
"cs.AI"
] |
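A hedged sketch of injecting heterogeneity without meta-paths: one linear
transform per edge type, with messages summed over first-order neighbours.
This RGCN-style layer only illustrates the general idea of expanding
per-relation capacity; it is not the authors' model.

```python
import torch
import torch.nn as nn

class RelationGNNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(num_relations))

    def forward(self, x, edges_by_relation):
        # edges_by_relation: one (src, dst) index pair per edge type.
        out = torch.zeros(x.size(0), self.weights[0].out_features)
        for rel, (src, dst) in enumerate(edges_by_relation):
            msg = self.weights[rel](x[src])   # relation-specific transform
            out.index_add_(0, dst, msg)       # sum messages at destinations
        return torch.relu(out)

x = torch.randn(6, 16)                                   # 6 nodes
edges = [(torch.tensor([0, 1]), torch.tensor([2, 2])),   # relation 0
         (torch.tensor([3, 4]), torch.tensor([5, 5]))]   # relation 1
layer = RelationGNNLayer(16, 32, num_relations=2)
print(layer(x, edges).shape)  # torch.Size([6, 32])
```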
Millions of unsolicited medical inquiries are received by pharmaceutical
companies every year. It has been hypothesized that these inquiries represent a
treasure trove of information, potentially giving insight into matters
regarding medicinal products and the associated medical treatments. However,
due to the large volume and specialized nature of the inquiries, it is
difficult to perform timely, recurrent, and comprehensive analyses. Here, we
propose a machine learning approach based on natural language processing and
unsupervised learning to automatically discover key topics in real-world
medical inquiries from customers. This approach requires neither ontologies
nor annotations. The discovered topics are meaningful and medically relevant,
as
judged by medical information specialists, thus demonstrating that unsolicited
medical inquiries are a source of valuable customer insights. Our work paves
the way for the machine-learning-driven analysis of medical inquiries in the
pharmaceutical industry, which ultimately aims at improving patient care. | [
"cs.LG",
"cs.CL",
"cs.IR"
] |
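A hedged sketch of ontology-free topic discovery on free-text inquiries:
TF-IDF features factorised with NMF. The toy inquiries and the number of
topics are illustrative assumptions, not the paper's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

inquiries = [
    "What is the recommended dose for elderly patients?",
    "Can this product be used during pregnancy?",
    "Reported dizziness after increasing the dose.",
    "Is the tablet safe together with anticoagulants?",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(inquiries)

nmf = NMF(n_components=2, random_state=0)
W = nmf.fit_transform(X)                    # inquiry-topic weights

terms = tfidf.get_feature_names_out()
for k, row in enumerate(nmf.components_):   # topic-term weights
    top = [terms[i] for i in row.argsort()[-3:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```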
We introduce WyPR, a Weakly-supervised framework for Point cloud Recognition,
requiring only scene-level class tags as supervision. WyPR jointly addresses
three core 3D recognition tasks: point-level semantic segmentation, 3D proposal
generation, and 3D object detection, coupling their predictions through self
and cross-task consistency losses. We show that in conjunction with standard
multiple-instance learning objectives, WyPR can detect and segment objects in
point cloud data without access to any spatial labels at training time. We
demonstrate its efficacy using the ScanNet and S3DIS datasets, outperforming
prior state of the art on weakly-supervised segmentation by more than 6% mIoU.
In addition, we set up the first benchmark for weakly-supervised 3D object
detection on both datasets, where WyPR outperforms standard approaches and
establishes strong baselines for future work. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.MM"
] |
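As a hedged illustration of the standard multiple-instance learning objective
mentioned above: per-point class logits are pooled into a single scene-level
prediction and trained against the scene tags. Shapes and the max-pooling
choice are assumptions, not WyPR's exact losses.

```python
import torch
import torch.nn.functional as F

num_points, num_classes = 2048, 20
point_logits = torch.randn(num_points, num_classes, requires_grad=True)
scene_tags = torch.zeros(num_classes)
scene_tags[[3, 7]] = 1.0                    # classes tagged in the scene

# Max pooling: a class is present if any point strongly supports it.
scene_logits = point_logits.max(dim=0).values
mil_loss = F.binary_cross_entropy_with_logits(scene_logits, scene_tags)
mil_loss.backward()                         # gradients flow to every point
print(float(mil_loss))
```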
This paper presents deformable templates as a tool for segmentation and
localization of biological structures in medical images. Structures are
represented by a prototype template, combined with a parametric warp mapping
used to deform the original shape. The localization procedure is achieved using
a multi-stage, multi-resolution algorithm designed to reduce computational
complexity and time. The algorithm initially identifies regions in the image
most likely to contain the desired objects and then examines these regions at
progressively increasing resolutions. The final stage of the algorithm involves
warping the prototype template to match the localized objects. The algorithm is
presented along with the results of four example applications using MRI, x-ray
and ultrasound images. | [
"cs.CV"
] |
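A hedged sketch of the coarse-to-fine search strategy: match at a reduced
resolution first, then re-match at full resolution in a window around the
coarse hit. OpenCV's normalised cross-correlation stands in for the template
matching step; file names and the search margin are assumptions, and the
final template-warping stage is omitted.

```python
import cv2

image = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("prototype.png", cv2.IMREAD_GRAYSCALE)

# Coarse stage: match at half resolution to cut the search cost.
small_img, small_tpl = cv2.pyrDown(image), cv2.pyrDown(template)
res = cv2.matchTemplate(small_img, small_tpl, cv2.TM_CCOEFF_NORMED)
_, _, _, (cx, cy) = cv2.minMaxLoc(res)

# Fine stage: full-resolution match in a window around the coarse hit.
x, y, h, w = 2 * cx, 2 * cy, *template.shape
pad = 16                                     # search margin (assumed)
roi = image[max(0, y - pad): y + h + pad, max(0, x - pad): x + w + pad]
res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
_, score, _, loc = cv2.minMaxLoc(res)
print(score, loc)
```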
With the advancement of remote-sensed imaging, large volumes of very high
resolution land cover images can now be obtained. Automating object
recognition in these 2D images, however, remains a key issue. High
intra-class variance and low inter-class variance in Very High Resolution
(VHR) images hamper prediction accuracy in object recognition tasks.
Recently, the most successful techniques in various computer vision tasks
have been based on deep supervised learning. In this work, a deep
Convolutional Neural Network (CNN)
based on symmetric encoder-decoder architecture with skip connections is
employed for the 2D semantic segmentation of most common land cover object
classes - impervious surface, buildings, low vegetation, trees and cars. Atrous
convolutions are employed to obtain a large receptive field in the proposed
CNN model. Further, the CNN outputs are post-processed with a Fully Connected
Conditional Random Field (FCRF) model to refine the CNN pixel label
predictions. The proposed CNN-FCRF model achieves an overall accuracy of 90.5%
on the ISPRS Vaihingen Dataset. | [
"cs.CV",
"eess.IV"
] |
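A small sketch of the atrous (dilated) convolution idea used above: with
dilation 2, a 3x3 kernel covers a 5x5 receptive field at the same parameter
cost and output size. Channel counts and input size are illustrative
assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 128, 128)

conv_standard = nn.Conv2d(64, 64, kernel_size=3, padding=1)            # 3x3
conv_atrous = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)  # 5x5

# Same output size and parameter count; only the receptive field grows.
print(conv_standard(x).shape, conv_atrous(x).shape)
print(sum(p.numel() for p in conv_standard.parameters()),
      sum(p.numel() for p in conv_atrous.parameters()))
```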
This paper introduces data augmentation for point clouds by interpolation
between examples. Data augmentation by interpolation has been shown to be a simple
and effective approach in the image domain. Such a mixup is however not
directly transferable to point clouds, as we do not have a one-to-one
correspondence between the points of two different objects. In this paper, we
define data augmentation between point clouds as a shortest path linear
interpolation. To that end, we introduce PointMixup, an interpolation method
that generates new examples through an optimal assignment of the path function
between two point clouds. We prove that our PointMixup finds the shortest path
between two point clouds and that the interpolation is assignment invariant and
linear. With this definition of interpolation, PointMixup makes it possible
to introduce
strong interpolation-based regularizers such as mixup and manifold mixup to the
point cloud domain. Experimentally, we show the potential of PointMixup for
point cloud classification, especially when examples are scarce, as well as
increased robustness to noise and geometric transformations to points. The code
for PointMixup and the experimental details are publicly available. | [
"cs.CV"
] |
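A hedged sketch of interpolating between two point clouds through an optimal
one-to-one assignment (the Hungarian algorithm on squared pairwise
distances), in the spirit of the shortest-path interpolation above; the exact
assignment objective and mixing-coefficient sampling in PointMixup may
differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
cloud_a = rng.standard_normal((128, 3))
cloud_b = rng.standard_normal((128, 3))

# Match each point in A to a unique point in B at minimal total cost.
cost = cdist(cloud_a, cloud_b) ** 2
rows, cols = linear_sum_assignment(cost)

lam = 0.3                                  # mixing coefficient (assumed fixed)
mixed = (1 - lam) * cloud_a[rows] + lam * cloud_b[cols]
print(mixed.shape)  # (128, 3): a new example on the path from A to B
```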