text (string, lengths 29–3.31k) | label (sequence, lengths 1–11) |
---|---|
Time-of-flight (TOF) cameras are sensors that can measure the depths of
scene-points, by illuminating the scene with a controlled laser or LED source,
and then analyzing the reflected light. In this paper, we will first describe
the underlying measurement principles of time-of-flight cameras, including: (i)
pulsed-light cameras, which measure directly the time taken for a light pulse
to travel from the device to the object and back again, and (ii)
continuous-wave modulated-light cameras, which measure the phase difference
between the emitted and received signals, and hence obtain the travel time
indirectly. We review the main existing designs, including prototypes as well
as commercially available devices. We also review the relevant camera
calibration principles, and how they are applied to TOF devices. Finally, we
discuss the benefits and challenges of combined TOF and color camera systems. | [
"cs.CV",
"cs.RO"
] |
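The continuous-wave principle described in the abstract above can be made concrete with a small numerical sketch. The four-bucket demodulation below follows one common textbook convention and is not taken from the paper; the helper name, modulation frequency, and sample values are assumed for illustration.

```python
import numpy as np

C = 299_792_458.0   # speed of light [m/s]
F_MOD = 20e6        # assumed modulation frequency [Hz]

def cw_tof_depth(a0, a1, a2, a3):
    """Depth from four intensity samples taken at 0, 90, 180, 270 degrees
    of the modulation period (one common demodulation convention)."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)  # phase shift of the return signal
    return C * phase / (4 * np.pi * F_MOD)              # phase -> round-trip time -> depth

# Unambiguous range at 20 MHz is c / (2 * F_MOD), roughly 7.5 m.
print(cw_tof_depth(1.0, 0.2, 0.1, 0.9))
```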
To make advanced learning machines such as Deep Neural Networks (DNNs) more
transparent in decision making, explainable AI (XAI) aims to provide
interpretations of DNNs' predictions. These interpretations are usually given
in the form of heatmaps, each one illustrating relevant patterns regarding the
prediction for a given instance. Bayesian approaches such as Bayesian Neural
Networks (BNNs) so far have a limited form of transparency (model transparency)
already built-in through their prior weight distribution, but notably, they
lack explanations of their predictions for given instances. In this work, we
bring together these two perspectives of transparency into a holistic
explanation framework for explaining BNNs. Within the Bayesian framework, the
network weights follow a probability distribution. Hence, the standard
(deterministic) prediction strategy of DNNs extends in BNNs to a predictive
distribution, and thus the standard explanation extends to an explanation
distribution. Exploiting this view, we uncover that BNNs implicitly employ
multiple heterogeneous prediction strategies. While some of these are inherited
from standard DNNs, others are revealed to us by considering the inherent
uncertainty in BNNs. Our quantitative and qualitative experiments on
toy/benchmark data and real-world data from pathology show that the proposed
approach of explaining BNNs can lead to more effective and insightful
explanations. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
We present a novel and practical deep fully convolutional neural network
architecture for semantic pixel-wise segmentation termed SegNet. This core
trainable segmentation engine consists of an encoder network, a corresponding
decoder network followed by a pixel-wise classification layer. The architecture
of the encoder network is topologically identical to the 13 convolutional
layers in the VGG16 network. The role of the decoder network is to map the low
resolution encoder feature maps to full input resolution feature maps for
pixel-wise classification. The novelty of SegNet lies in the manner in which
the decoder upsamples its lower resolution input feature map(s). Specifically,
the decoder uses pooling indices computed in the max-pooling step of the
corresponding encoder to perform non-linear upsampling. This eliminates the
need for learning to upsample. The upsampled maps are sparse and are then
convolved with trainable filters to produce dense feature maps. We compare our
proposed architecture with the widely adopted FCN and also with the well known
DeepLab-LargeFOV, DeconvNet architectures. This comparison reveals the memory
versus accuracy trade-off involved in achieving good segmentation performance.
SegNet was primarily motivated by scene understanding applications. Hence, it
is designed to be efficient both in terms of memory and computational time
during inference. It is also significantly smaller in the number of trainable
parameters than other competing architectures. We also performed a controlled
benchmark of SegNet and other architectures on both road scenes and SUN RGB-D
indoor scene segmentation tasks. We show that SegNet provides good performance
with competitive inference time and more efficient inference memory-wise as
compared to other architectures. We also provide a Caffe implementation of
SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
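A minimal PyTorch sketch of the pooling-indices upsampling described in the SegNet abstract above; this illustrates the mechanism only and is not the authors' Caffe implementation, with the layer sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)   # encoder remembers argmax locations
unpool = nn.MaxUnpool2d(2, stride=2)                     # decoder reuses them: no learned upsampling
dense_conv = nn.Conv2d(64, 64, 3, padding=1)             # trainable filters densify the sparse map

x = torch.randn(1, 64, 32, 32)        # an encoder feature map
down, indices = pool(x)               # low-resolution features + pooling indices
sparse = unpool(down, indices)        # non-linear upsampling: values only at remembered positions
dense = dense_conv(sparse)            # dense feature map for pixel-wise classification
print(sparse.shape, dense.shape)
```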
A method is presented that significantly reduces the character error rates
for OCR text obtained from OCRopus models trained on early printed books when
only small amounts of diplomatic transcriptions are available. This is achieved
by building from already existing models during training instead of starting
from scratch. To overcome the discrepancies between the character set of the
pretrained model and that of the additional ground truth, the OCRopus code is
adapted to allow for alphabet expansion or reduction. The character set is now
capable of flexibly adding and deleting characters from the pretrained alphabet
when an existing model is loaded. For our experiments we use a self-trained
mixed model on early Latin prints and the two standard OCRopus models on modern
English and German Fraktur texts. The evaluation on seven early printed books
showed that training from the Latin mixed model reduces the average error by
43% and 26% compared to training from scratch with 60 and 150 lines of ground
truth, respectively. Furthermore, it is shown that even
building from mixed models trained on data unrelated to the newly added
training and test data can lead to significantly improved recognition results. | [
"cs.CV"
] |
Graph similarity computation is one of the core operations in many
graph-based applications, such as graph similarity search, graph database
analysis, graph clustering, etc. Since computing the exact distance/similarity
between two graphs is typically NP-hard, a series of approximate methods have
been proposed with a trade-off between accuracy and speed. Recently, several
data-driven approaches based on neural networks have been proposed, most of
which model the graph-graph similarity as the inner product of their
graph-level representations, with different techniques proposed for generating
one embedding per graph. However, using one fixed-dimensional embedding per
graph may fail to fully capture graphs of varying sizes and link structures, a
limitation that is especially problematic for the task of graph similarity
computation, where the goal is to find the fine-grained difference between two
graphs. In this paper, we address the problem of graph similarity computation
from another perspective, by directly matching two sets of node embeddings
without the need to use fixed-dimensional vectors to represent whole graphs for
their similarity computation. The model, GraphSim, achieves
state-of-the-art performance on four real-world graph datasets under six out of
eight settings (here we count a specific dataset and metric combination as one
setting), compared to existing popular methods for approximate Graph Edit
Distance (GED) and Maximum Common Subgraph (MCS) computation. | [
"cs.LG",
"stat.ML"
] |
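A hedged sketch of the node-level matching idea in the abstract above: instead of one graph-level vector per graph, the two graphs are compared through pairwise similarities of their node embeddings. The helper name and the mean-pooling readout below are illustrative stand-ins, not the actual GraphSim architecture.

```python
import torch
import torch.nn.functional as F

def node_set_similarity(h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
    # h1: [n1, d] node embeddings of graph 1; h2: [n2, d] node embeddings of graph 2
    sim = F.normalize(h1, dim=-1) @ F.normalize(h2, dim=-1).t()  # [n1, n2] node-node similarities
    return sim.mean()   # reduce the fine-grained matching matrix to one graph-graph score

print(node_set_similarity(torch.randn(5, 16), torch.randn(8, 16)).item())
```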
We propose a new family of optimization criteria for variational
auto-encoding models, generalizing the standard evidence lower bound. We
provide conditions under which they recover the data distribution and learn
latent features, and formally show that common issues such as blurry samples
and uninformative latent features arise when these conditions are not met.
Based on these new insights, we propose a new sequential VAE model that can
generate sharp samples on the LSUN image dataset based on pixel-wise
reconstruction loss, and propose an optimization criterion that encourages
unsupervised learning of informative latent features. | [
"cs.LG",
"stat.ML"
] |
Existing vision-based action recognition is susceptible to occlusion and
appearance variations, while wearable sensors can alleviate these challenges by
capturing human motion with one-dimensional time-series signals. For the same
action, the knowledge learned from vision sensors and wearable sensors may be
related and complementary. However, there exists a significant modality
difference between action data captured by wearable sensors and vision sensors
in terms of data dimension, data distribution, and inherent information content. In this
paper, we propose a novel framework, named Semantics-aware Adaptive Knowledge
Distillation Networks (SAKDN), to enhance action recognition in vision-sensor
modality (videos) by adaptively transferring and distilling the knowledge from
multiple wearable sensors. The SAKDN uses multiple wearable-sensors as teacher
modalities and uses RGB videos as the student modality. To preserve local temporal
relationships and facilitate the use of visual deep learning models, we transform
one-dimensional time-series signals of wearable sensors into two-dimensional
images by designing a Gramian Angular Field based virtual image generation
model. Then, we build a novel Similarity-Preserving Adaptive Multi-modal Fusion
Module to adaptively fuse intermediate representation knowledge from different
teacher networks. Finally, to fully exploit and transfer the knowledge of
multiple well-trained teacher networks to the student network, we propose a
novel Graph-guided Semantically Discriminative Mapping loss, which utilizes
graph-guided ablation analysis to produce a good visual explanation
highlighting the important regions across modalities and concurrently
preserving the interrelations of original data. Experimental results on
Berkeley-MHAD, UTD-MHAD and MMAct datasets demonstrate the effectiveness
of our proposed SAKDN. | [
"cs.CV"
] |
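For reference, a minimal NumPy sketch of a Gramian Angular (Summation) Field, the kind of signal-to-image transform referred to in the abstract above; the actual virtual image generation model in SAKDN may differ in its details, and the helper name is illustrative.

```python
import numpy as np

def gasf(x: np.ndarray) -> np.ndarray:
    """Gramian Angular Summation Field of a 1D signal."""
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1        # rescale into [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))                 # encode values as angles
    return np.cos(phi[:, None] + phi[None, :])             # [T, T] image of pairwise angular sums

signal = np.sin(np.linspace(0, 4 * np.pi, 64))             # toy wearable-sensor-like signal
print(gasf(signal).shape)                                  # (64, 64)
```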
With the ongoing rise of machine learning, the need for methods for
explaining decisions made by artificial intelligence systems is becoming a more
and more important topic. Especially for image classification tasks, many
state-of-the-art tools to explain such classifiers rely on visual highlighting
of important areas of the input data. In contrast, counterfactual explanation
systems enable counterfactual reasoning by modifying the input image
in a way such that the classifier would have made a different prediction. By
doing so, the users of counterfactual explanation systems are equipped with a
completely different kind of explanatory information. However, methods for
generating realistic counterfactual explanations for image classifiers are
still rare. Especially in medical contexts, where relevant information often
consists of textural and structural information, high-quality counterfactual
images have the potential to give meaningful insights into decision processes.
In this work, we present GANterfactual, an approach to generate such
counterfactual image explanations based on adversarial image-to-image
translation techniques. Additionally, we conduct a user study to evaluate our
approach in an exemplary medical use case. Our results show that, in the chosen
medical use-case, counterfactual explanations lead to significantly better
results regarding mental models, explanation satisfaction, trust, emotions, and
self-efficacy than two state-of-the-art systems that work with saliency maps,
namely LIME and LRP. | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.HC",
"cs.NE"
] |
How can intelligent agents solve a diverse set of tasks in a data-efficient
manner? The disentangled representation learning approach posits that such an
agent would benefit from separating out (disentangling) the underlying
structure of the world into disjoint parts of its representation. However,
there is no generally agreed-upon definition of disentangling, not least
because it is unclear how to formalise the notion of world structure beyond toy
datasets with a known ground truth generative process. Here we propose that a
principled solution to characterising disentangled representations can be found
by focusing on the transformation properties of the world. In particular, we
suggest that those transformations that change only some properties of the
underlying world state, while leaving all other properties invariant, are what
give exploitable structure to any kind of data. Similar ideas have already
been successfully applied in physics, where the study of symmetry
transformations has revolutionised the understanding of the world structure. By
connecting symmetry transformations to vector representations using the
formalism of group and representation theory we arrive at the first formal
definition of disentangled representations. Our new definition is in agreement
with many of the current intuitions about disentangling, while also providing
principled resolutions to a number of previous points of contention. While this
work focuses on formally defining disentangling - as opposed to solving the
learning problem - we believe that the shift in perspective to studying data
transformations can stimulate the development of better representation learning
algorithms. | [
"cs.LG",
"stat.ML"
] |
This paper describes a novel method to solve average-reward semi-Markov
decision processes, by reducing them to a minimal sequence of cumulative reward
problems. The usual solution methods for this type of problem update the gain
(optimal average reward) immediately after observing the result of taking an
action. The alternative introduced, optimal nudging, relies instead on setting
the gain to some fixed value, which transitorily makes the problem a
cumulative-reward task, solving it by any standard reinforcement learning
method, and only then updating the gain in a way that minimizes uncertainty in
a minmax sense. The rule for optimal gain update is derived by exploiting the
geometric features of the w-l space, a simple mapping of the space of policies.
The total number of cumulative reward tasks that need to be solved is shown to
be small. Some experiments are presented to explore the features of the
algorithm and to compare its performance with other approaches. | [
"cs.LG",
"cs.AI"
] |
Multi-agent reinforcement learning (MARL) has been increasingly explored to
learn the cooperative policy towards maximizing a certain global reward. Many
existing studies take advantage of graph neural networks (GNN) in MARL to
propagate critical collaborative information over the interaction graph, built
upon inter-connected agents. Nevertheless, the vanilla GNN approach yields
substantial defects in dealing with complex real-world scenarios since the
generic message passing mechanism is ineffective between heterogeneous vertices
and, moreover, simple message aggregation functions are incapable of accurately
modeling the combinational interactions from multiple neighbors. While adopting
complex GNN models with more informative message passing and aggregation
mechanisms can obviously benefit heterogeneous vertex representations and
cooperative policy learning, it could, on the other hand, increase the training
difficulty of MARL and demand more intense and direct reward signals compared
to the original global reward. To address these challenges, we propose a new
cooperative learning framework with pre-trained heterogeneous observation
representations. Particularly, we employ an encoder-decoder based graph
attention to learn the intricate interactions and heterogeneous representations
that can be more easily leveraged by MARL. Moreover, we design a pre-training
stage with a local actor-critic algorithm to ease the difficulty of cooperative policy
learning. Extensive experiments over real-world scenarios demonstrate that our
new approach can significantly outperform existing MARL baselines as well as
operational research solutions that are widely-used in industry. | [
"cs.LG",
"cs.AI"
] |
3D point cloud generation by the deep neural network from a single image has
been attracting more and more researchers' attention. However,
recently proposed methods require the objects to be captured against relatively clean
backgrounds and from a fixed viewpoint, which highly limits their application in
real environments. To overcome these drawbacks, we propose to integrate
prior 3D shape knowledge into the network to guide the 3D generation. By taking
additional 3D information, the proposed network can handle the 3D object
generation from a single real image captured from any viewpoint and complex
background. Specifically, given a query image, we retrieve the nearest shape
model from a pre-prepared 3D model database. Then, the image together with the
retrieved shape model is fed into the proposed network to generate the
fine-grained 3D point cloud. The effectiveness of our proposed framework has
been verified on different kinds of datasets. Experimental results show that
the proposed framework achieves state-of-the-art accuracy compared to other
volumetric-based and point set generation methods. Furthermore, the proposed
framework works well for real images in complex backgrounds with various view
angles. | [
"cs.CV"
] |
Despite recent impressive results on single-object and single-domain image
generation, the generation of complex scenes with multiple objects remains
challenging. In this paper, we start with the idea that a model must be able to
understand individual objects and relationships between objects in order to
generate complex scenes well. Our layout-to-image-generation method, which we
call Object-Centric Generative Adversarial Network (or OC-GAN), relies on a
novel Scene-Graph Similarity Module (SGSM). The SGSM learns representations of
the spatial relationships between objects in the scene, which lead to our
model's improved layout-fidelity. We also propose changes to the conditioning
mechanism of the generator that enhance its object instance-awareness. Apart
from improving image quality, our contributions mitigate two failure modes in
previous approaches: (1) spurious objects being generated without corresponding
bounding boxes in the layout, and (2) overlapping bounding boxes in the layout
leading to merged objects in images. Extensive quantitative evaluation and
ablation studies demonstrate the impact of our contributions, with our model
outperforming previous state-of-the-art approaches on both the COCO-Stuff and
Visual Genome datasets. Finally, we address an important limitation of
evaluation metrics used in previous works by introducing SceneFID -- an
object-centric adaptation of the popular Fr{\'e}chet Inception Distance metric,
that is better suited for multi-object images. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Structured-output learning is a challenging problem; particularly so because
of the difficulty in obtaining large datasets of fully labelled instances for
training. In this paper we try to overcome this difficulty by presenting a
multi-utility learning framework for structured prediction that can learn from
training instances with different forms of supervision. We propose a unified
technique for inferring the loss functions most suitable for quantifying the
consistency of solutions with the given weak annotation. We demonstrate the
effectiveness of our framework on the challenging semantic image segmentation
problem for which a wide variety of annotations can be used. For instance, the
popular training datasets for semantic segmentation are composed of images with
hard-to-generate full pixel labellings, as well as images with easy-to-obtain
weak annotations, such as bounding boxes around objects, or image-level labels
that specify which object categories are present in an image. Experimental
evaluation shows that the use of annotation-specific loss functions
dramatically improves segmentation accuracy compared to the baseline system
where only one type of weak annotation is used. | [
"cs.CV",
"cs.LG"
] |
Ideally, what confuses a neural network should also be confusing to humans. However,
recent experiments have shown that small, imperceptible perturbations can
change the network prediction. To address this gap in perception, we propose a
novel approach for learning a robust classifier. Our main idea is that adversarial
examples for the robust classifier should be indistinguishable from the regular
data of the adversarial target. We formulate the problem of learning a robust
classifier in the framework of Generative Adversarial Networks (GAN), where the
adversarial attack on classifier acts as a generator, and the critic network
learns to distinguish between regular and adversarial images. The classifier
cost is augmented with the objective that its adversarial examples should
confuse the adversary critic. To improve the stability of the adversarial
mapping, we introduce adversarial cycle-consistency constraint which ensures
that the adversarial mapping of the adversarial examples is close to the
original. In the experiments, we show the effectiveness of our defense. In terms
of robustness, our method surpasses networks trained with adversarial
training. Additionally, we verify in experiments with human annotators on
MTurk that adversarial examples are indeed visually confusing. Code for the
project is available at https://github.com/aam-at/adversary_critic. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
To accelerate deep CNN models, this paper proposes a novel spatially adaptive
framework that can dynamically generate pixel-wise sparsity according to the
input image. The sparsity scheme is pixel-wise refined and regionally adaptive under a
unified importance map, which makes it friendly to hardware implementation. A
sparse controlling method is further presented to enable online adjustment for
applications with different precision/latency requirements. The sparse model is
applicable to a wide range of vision tasks. Experimental results show that this
method effectively improves computing efficiency for both image
classification using ResNet-18 and super-resolution using SRResNet. On the image
classification task, our method can save 30%-70% of MACs with a slight drop in
top-1 and top-5 accuracy. On the super-resolution task, our method can reduce more
than 90% of MACs while only causing decreases of around 0.1 dB in PSNR and 0.01 in
SSIM. Hardware validation is also included. | [
"cs.CV",
"eess.IV"
] |
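A hedged PyTorch sketch of the spatially adaptive, pixel-wise sparsity idea above: a cheap branch predicts an importance map and only pixels above a tunable threshold keep the convolution output. The module name, gating, and threshold shown here are illustrative, not the paper's exact scheme.

```python
import torch
import torch.nn as nn

class SparseConvBlock(nn.Module):
    def __init__(self, channels: int, threshold: float = 0.5):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.importance = nn.Conv2d(channels, 1, 1)   # cheap per-pixel importance predictor
        self.threshold = threshold                    # online knob for precision/latency trade-off

    def forward(self, x):
        mask = (torch.sigmoid(self.importance(x)) > self.threshold).float()
        # On real hardware the masked positions would be skipped to save MACs;
        # here we simply zero the expensive branch and pass the input through instead.
        return self.conv(x) * mask + x * (1.0 - mask)

print(SparseConvBlock(16)(torch.randn(1, 16, 32, 32)).shape)
```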
Research has demonstrated that low bit-width (e.g., INT8) quantization can
be employed to accelerate the inference process. This makes gradient
quantization very promising, since backward propagation requires
approximately twice as much computation as the forward pass. Due to the variability
and uncertainty of gradient distribution, a lot of methods have been proposed
to attain training stability. However, most of them ignore the channel-wise
gradient distributions and the impact of gradients with different magnitudes,
resulting in the degradation of final accuracy. In this paper, we propose a
novel INT8 quantization training framework for convolutional neural network to
address the above issues. Specifically, we adopt Gradient Vectorized
Quantization to quantize the gradient, based on the observation that layer-wise
gradients contain multiple distributions along the channel dimension. Then,
Magnitude-aware Clipping Strategy is introduced by taking the magnitudes of
gradients into consideration when minimizing the quantization error, and we
present a theoretical derivation to solve the quantization parameters of
different distributions. Experimental results on a broad range of computer vision
tasks, such as image classification, object detection and video classification,
demonstrate that the proposed Distribution Adaptive INT8 Quantization training
method has achieved almost lossless training accuracy for different backbones,
including ResNet, MobileNetV2, InceptionV3, VGG and AlexNet, which is superior
to the state-of-the-art techniques. Moreover, we further implement an INT8
kernel that can accelerate the training iteration by more than 200% under the
latest Turing architecture, i.e., our method excels in both training accuracy
and speed. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
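A hedged sketch of symmetric per-channel INT8 quantization of a gradient tensor; the paper's Gradient Vectorized Quantization and Magnitude-aware Clipping Strategy choose the clipping values more carefully than the simple max-magnitude clip assumed here, and the function name is illustrative.

```python
import torch

def quantize_grad_int8(grad: torch.Tensor, clip: torch.Tensor) -> torch.Tensor:
    """grad: [C, ...] gradient tensor; clip: [C] per-channel clipping magnitudes."""
    clip = clip.clamp_min(1e-8).view(-1, *([1] * (grad.dim() - 1)))
    scale = clip / 127.0
    q = torch.clamp(grad, -clip, clip).div(scale).round()   # integer levels in [-127, 127]
    return q * scale                                         # dequantize to simulate INT8 training

g = torch.randn(8, 3, 3, 3)                                  # e.g. a conv weight gradient
g_q = quantize_grad_int8(g, clip=g.abs().flatten(1).amax(dim=1))
print((g - g_q).abs().max().item())                          # worst-case quantization error
```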
We propose a novel Generative Adversarial Network (XingGAN or CrossingGAN)
for person image generation tasks, i.e., translating the pose of a given person
to a desired one. The proposed Xing generator consists of two generation
branches that model the person's appearance and shape information,
respectively. Moreover, we propose two novel blocks to effectively transfer and
update the person's shape and appearance embeddings in a crossing way to
mutually improve each other, which has not been considered by any other
existing GAN-based image generation work. Extensive experiments on two
challenging datasets, i.e., Market-1501 and DeepFashion, demonstrate that the
proposed XingGAN advances the state-of-the-art performance both in terms of
objective quantitative scores and subjective visual realness. The source code
and trained models are available at https://github.com/Ha0Tang/XingGAN. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Image-to-image translation is a long-established and difficult problem in
computer vision. In this paper we propose an adversarial based model for
image-to-image translation. The regular deep neural-network based methods
perform the task of image-to-image translation by comparing gram matrices and
using image segmentation which requires human intervention. Our generative
adversarial network based model works on a conditional probability approach.
This approach makes the image translation independent of any local, global and
content or style features. In our approach we use a bidirectional
reconstruction model appended with the affine transform factor that helps in
conserving the content and photorealism as compared to other models. The
advantage of using such an approach is that the image-to-image translation is
semi-supervised, independent of image segmentation, and inherits the tendency
of generative adversarial networks to produce realistic results. This method
has proven to produce better results than Multimodal Unsupervised
Image-to-Image Translation. | [
"cs.CV"
] |
Robust geometric and semantic scene understanding is ever more important in
many real-world applications such as autonomous driving and robotic navigation.
In this paper, we propose a multi-task learning-based approach capable of
jointly performing geometric and semantic scene understanding, namely depth
prediction (monocular depth estimation and depth completion) and semantic scene
segmentation. Within a single temporally constrained recurrent network, our
approach uniquely takes advantage of a complex series of skip connections,
adversarial training and the temporal constraint of sequential frame recurrence
to produce consistent depth and semantic class labels simultaneously. Extensive
experimental evaluation demonstrates the efficacy of our approach compared to
other contemporary state-of-the-art techniques. | [
"cs.CV"
] |
The polyhedral model allows a structured way of defining semantics-preserving
transformations to improve the performance of a large class of loops. Finding
profitable points in this space is a hard problem which is usually approached
by heuristics that generalize from domain-expert knowledge. Existing problem
formulations in state-of-the-art heuristics depend on the shape of particular
loops, making it hard to leverage generic and more powerful optimization
techniques from the machine learning domain. In this paper, we propose PolyGym,
a shape-agnostic formulation for the space of legal transformations in the
polyhedral model as a Markov Decision Process (MDP). Instead of using
transformations, the formulation is based on an abstract space of possible
schedules. In this formulation, states model partial schedules, which are
constructed by actions that are reusable across different loops. With a simple
heuristic to traverse the space, we demonstrate that our formulation is
powerful enough to match and outperform state-of-the-art heuristics. On the
Polybench benchmark suite, we found transformations that led to a speedup of
3.39x over LLVM O3, which is 1.83x better than the speedup achieved by ISL. Our
generic MDP formulation enables using reinforcement learning to learn
optimization policies over a wide range of loops. This also contributes to the
emerging field of machine learning in compilers, as it exposes a novel problem
formulation that can push the limits of existing methods. | [
"cs.LG",
"cs.DC",
"cs.DM",
"cs.PF"
] |
Estimation of 3D motion in a dynamic scene from a temporal pair of images is
a core task in many scene understanding problems. In real world applications, a
dynamic scene is commonly captured by a moving camera (i.e., panning, tilting
or hand-held), increasing the task complexity because the scene is observed
from different view points. The main challenge is the disambiguation of the
camera motion from scene motion, which becomes more difficult as the amount of
rigidity observed decreases, even with successful estimation of 2D image
correspondences. Compared to other state-of-the-art 3D scene flow estimation
methods, in this paper we propose to \emph{learn} the rigidity of a scene in a
supervised manner from a large collection of dynamic scene data, and directly
infer a rigidity mask from two sequential images with depths. With the learned
network, we show how we can effectively estimate camera motion and projected
scene flow using computed 2D optical flow and the inferred rigidity mask. For
training and testing the rigidity network, we also provide a new semi-synthetic
dynamic scene dataset (synthetic foreground objects with a real background) and
an evaluation split that accounts for the percentage of observed non-rigid
pixels. Through our evaluation we show the proposed framework outperforms
current state-of-the-art scene flow estimation methods in challenging dynamic
scenes. | [
"cs.CV"
] |
Automatic pain recognition is paramount for medical diagnosis and treatment.
The existing works fall into three categories: assessing facial appearance
changes, exploiting physiological cues, or fusing them in a multi-modal manner.
However, (1) appearance changes are easily affected by subjective factors, which
impedes objective pain recognition. Besides, the appearance-based approaches
ignore long-range spatial-temporal dependencies that are important for modeling
expressions over time; (2) the physiological cues are obtained by attaching
sensors on human body, which is inconvenient and uncomfortable. In this paper,
we present a novel multi-task learning framework which encodes both appearance
changes and physiological cues in a non-contact manner for pain recognition.
The framework is able to capture both local and long-range dependencies via the
proposed attention mechanism for the learned appearance representations, which
are further enriched by temporally attended physiological cues (remote
photoplethysmography, rPPG) that are recovered from videos in the auxiliary
task. This framework is dubbed rPPG-enriched Spatio-Temporal Attention Network
(rSTAN) and allows us to establish the state-of-the-art performance of
non-contact pain recognition on publicly available pain databases. It
demonstrates that rPPG predictions can be used as an auxiliary task to
facilitate non-contact automatic pain recognition. | [
"cs.CV"
] |
In preference-based reinforcement learning (RL), an agent interacts with the
environment while receiving preferences instead of absolute feedback. While
there is increasing research activity in preference-based RL, the design of
formal frameworks that admit tractable theoretical analysis remains an open
challenge. Building upon ideas from preference-based bandit learning and
posterior sampling in RL, we present DUELING POSTERIOR SAMPLING (DPS), which
employs preference-based posterior sampling to learn both the system dynamics
and the underlying utility function that governs the preference feedback. As
preference feedback is provided on trajectories rather than individual
state-action pairs, we develop a Bayesian approach for the credit assignment
problem, translating preferences to a posterior distribution over state-action
reward models. We prove an asymptotic Bayesian no-regret rate for DPS with a
Bayesian linear regression credit assignment model. This is the first regret
guarantee for preference-based RL to our knowledge. We also discuss possible
avenues for extending the proof methodology to other credit assignment models.
Finally, we evaluate the approach empirically, showing competitive performance
against existing baselines. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Computational prediction of in-hospital mortality in the setting of an
intensive care unit can help clinical practitioners to guide care and make
early decisions for interventions. As clinical data are complex and varied in
their structure and components, continued innovation of modeling strategies is
required to identify architectures that can best model outcomes. In this work,
we train a Heterogeneous Graph Model (HGM) on Electronic Health Record data and
use the resulting embedding vector as additional information added to a
Convolutional Neural Network (CNN) model for predicting in-hospital mortality.
We show that the additional information provided by including time as a vector
in the embedding captures the relationships between medical concepts, lab
tests, and diagnoses, which enhances predictive performance. We find that
adding HGM to a CNN model increases the mortality prediction accuracy up to
4\%. This framework serves as a foundation for future experiments involving
different EHR data types on important healthcare prediction tasks. | [
"cs.LG"
] |
The clinical management of several cardiovascular conditions, such as
pulmonary hypertension, require the assessment of the right ventricular (RV)
function. This work addresses the fully automatic and robust access to one of
the key RV biomarkers, its ejection fraction, from the gold standard imaging
modality, MRI. The problem becomes the accurate segmentation of the RV blood
pool from cine MRI sequences. This work proposes a solution based on Fully
Convolutional Neural Networks (FCNN), where our first contribution is the
optimal combination of three concepts (convolutional Gated Recurrent Units
(GRU), the Generative Adversarial Networks (GAN), and the L1 loss function)
that achieves an improvement of 0.05 and 3.49 mm in Dice Index and Hausdorff
Distance respectively with respect to the baseline FCNN. This improvement is
then doubled by our second contribution, the ROI-GAN, which sets two GANs to
cooperate, working at two fields of view of the image: its full resolution and
the region of interest (ROI). Our rationale here is to better guide the FCNN
learning by combining global (full resolution) and local Region Of Interest
(ROI) features. The study is conducted on a large in-house dataset of $\sim$23,000
segmented MRI slices, and its generality is verified on a publicly
available dataset. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Graph-structured data ubiquitously appears in science and engineering. Graph
neural networks (GNNs) are designed to exploit the relational inductive bias
exhibited in graphs; they have been shown to outperform other forms of neural
networks in scenarios where structure information supplements node features.
The most common GNN architecture aggregates information from neighborhoods
based on message passing. Its generality has made it broadly applicable. In
this paper, we focus on a special, yet widely used, type of graphs -- DAGs --
and inject a stronger inductive bias -- partial ordering -- into the neural
network design. We propose the \emph{directed acyclic graph neural network},
DAGNN, an architecture that processes information according to the flow defined
by the partial order. DAGNN can be considered a framework that entails earlier
works as special cases (e.g., models for trees and models updating node
representations recurrently), but we identify several crucial components that
prior architectures lack. We perform comprehensive experiments, including
ablation studies, on representative DAG datasets (i.e., source code, neural
architectures, and probabilistic graphical models) and demonstrate the
superiority of DAGNN over simpler DAG architectures as well as general graph
architectures. | [
"cs.LG",
"cs.AI"
] |
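A hedged sketch of the partial-order processing described above: node states are computed in a topological order so each node only aggregates from already-processed predecessors. The class name and the mean-plus-GRU aggregation are simplifications assumed here; DAGNN itself uses attention-based aggregators and further components.

```python
import torch
import torch.nn as nn

class TinyDagLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)

    def forward(self, feats, topo_order, predecessors):
        # feats: [N, dim] node features; topo_order: node ids in topological order;
        # predecessors: dict node id -> list of parent node ids
        h = [None] * feats.size(0)
        for v in topo_order:
            parents = predecessors[v]
            msg = (torch.stack([h[p] for p in parents]).mean(0)
                   if parents else torch.zeros(feats.size(1)))
            h[v] = self.cell(feats[v].unsqueeze(0), msg.unsqueeze(0)).squeeze(0)
        return torch.stack(h)   # [N, dim] DAG-aware node representations

layer = TinyDagLayer(8)
print(layer(torch.randn(3, 8), [0, 1, 2], {0: [], 1: [0], 2: [0, 1]}).shape)
```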
Sign language is commonly used by deaf or speech impaired people to
communicate but requires significant effort to master. Sign Language
Recognition (SLR) aims to bridge the gap between sign language users and others
by recognizing signs from given videos. It is an essential yet challenging task
since sign language is performed with the fast and complex movement of hand
gestures, body posture, and even facial expressions. Recently, skeleton-based
action recognition has attracted increasing attention due to its independence
from subject and background variation. However, skeleton-based SLR is
still under exploration due to the lack of annotations on hand keypoints. Some
efforts have been made to use hand detectors with pose estimators to extract
hand key points and learn to recognize sign language via Neural Networks, but
none of them outperforms RGB-based methods. To this end, we propose a novel
Skeleton Aware Multi-modal SLR framework (SAM-SLR) to take advantage of
multi-modal information towards a higher recognition rate. Specifically, we
propose a Sign Language Graph Convolution Network (SL-GCN) to model the
embedded dynamics and a novel Separable Spatial-Temporal Convolution Network
(SSTCN) to exploit skeleton features. RGB and depth modalities are also
incorporated and assembled into our framework to provide global information
that is complementary to the skeleton-based methods SL-GCN and SSTCN. As a
result, SAM-SLR achieves the highest performance in both RGB (98.42\%) and
RGB-D (98.53\%) tracks in 2021 Looking at People Large Scale Signer Independent
Isolated SLR Challenge. Our code is available at
https://github.com/jackyjsy/CVPR21Chal-SLR | [
"cs.CV"
] |
Many reinforcement learning algorithms can be seen as versions of approximate
policy iteration (API). While standard API often performs poorly, it has been
shown that learning can be stabilized by regularizing each policy update by the
KL-divergence to the previous policy. Popular practical algorithms such as
TRPO, MPO, and VMPO replace regularization by a constraint on KL-divergence of
consecutive policies, arguing that this is easier to implement and tune. In
this work, we study this implementation choice in more detail. We compare the
use of KL divergence as a constraint vs. as a regularizer, and point out
several optimization issues with the widely-used constrained approach. We show
that the constrained algorithm is not guaranteed to converge even on simple
problem instances where the constrained problem can be solved exactly, and in
fact incurs linear expected regret. With approximate implementation using
softmax policies, we show that regularization can improve the optimization
landscape of the original objective. We demonstrate these issues empirically on
several bandit and RL environments. | [
"cs.LG",
"stat.ML"
] |
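Schematically, the two policy-update variants compared in the abstract above can be written as follows; this is one common formulation with advantage estimates under the current policy $\pi_k$, and the exact direction of the KL term differs between TRPO, MPO, and V-MPO.

```latex
\pi_{k+1} = \arg\max_{\pi}\; \mathbb{E}_{s,a\sim\pi}\!\left[A^{\pi_k}(s,a)\right]
            - \alpha\,\mathrm{KL}\!\left(\pi \,\|\, \pi_k\right)
\qquad \text{(regularized)}

\pi_{k+1} = \arg\max_{\pi}\; \mathbb{E}_{s,a\sim\pi}\!\left[A^{\pi_k}(s,a)\right]
\quad \text{s.t.}\quad \mathrm{KL}\!\left(\pi \,\|\, \pi_k\right) \le \epsilon
\qquad \text{(constrained)}
```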
The COVID-19 pandemic continues to have a devastating global impact, and has
placed a tremendous burden on struggling healthcare systems around the world.
Given the limited resources, accurate patient triaging and care planning is
critical in the fight against COVID-19, and one crucial task within care
planning is determining if a patient should be admitted to a hospital's
intensive care unit (ICU). Motivated by the need for transparent and
trustworthy ICU admission clinical decision support, we introduce COVID-Net
Clinical ICU, a neural network for ICU admission prediction based on patient
clinical data. Driven by a transparent, trust-centric methodology, the proposed
COVID-Net Clinical ICU was built using a clinical dataset from Hospital
Sirio-Libanes comprising 1,925 COVID-19 patients, and is able to predict
when a COVID-19 positive patient would require ICU admission with an accuracy
of 96.9% to facilitate better care planning for hospitals amidst the on-going
pandemic. We conducted system-level insight discovery using a quantitative
explainability strategy to study the decision-making impact of different
clinical features and gain actionable insights for enhancing predictive
performance. We further leveraged a suite of trust quantification metrics to
gain deeper insights into the trustworthiness of COVID-Net Clinical ICU. By
digging deeper into when and why clinical predictive models make certain
decisions, we can uncover key factors in decision making for critical clinical
decision support tasks such as ICU admission prediction and identify the
situations under which clinical predictive models can be trusted for greater
accountability. | [
"cs.LG",
"cs.AI"
] |
Object pose estimation is frequently achieved by first segmenting an RGB
image and then, given depth data, registering the corresponding point cloud
segment against the object's 3D model. Despite the progress due to CNNs,
semantic segmentation output can be noisy, especially when the CNN is only
trained on synthetic data. This causes registration methods to fail in
estimating a good object pose. This work proposes a novel stochastic
optimization process that treats the segmentation output of CNNs as a
confidence probability. The algorithm, called Stochastic Congruent Sets
(StoCS), samples pointsets on the point cloud according to the soft
segmentation distribution and so as to agree with the object's known geometry.
The pointsets are then matched to congruent sets on the 3D object model to
generate pose estimates. StoCS is shown to be robust on an APC dataset, despite
the fact that the CNN is trained only on synthetic data. On the YCB dataset, StoCS
outperforms a recent network for 6D pose estimation and alternative pointset
matching techniques. | [
"cs.CV"
] |
This paper studies learning node representations with GNNs for unsupervised
scenarios. We provide a theoretical analysis and an empirical demonstration
of the unstable performance of GNNs over different graph datasets when
the supervision signals are not appropriately defined. The performance of GNNs
depends on both the node feature smoothness and the graph locality. To smooth
the discrepancy of node proximity measured by graph topology and node features,
we propose KS2L - a novel graph \underline{K}nowledge distillation regularized
\underline{S}elf-\underline{S}upervised \underline{L}earning framework, with
two complementary regularization modules, for intra- and cross-model graph
knowledge distillation. We demonstrate the competitive performance of KS2L on a
variety of benchmarks. Even with a single GCN layer, KS2L has consistently
competitive or even better performance on various benchmark datasets. | [
"cs.LG",
"stat.ML"
] |
Every segmentation algorithm has parameters that need to be adjusted in order
to achieve good results. Evolving fuzzy systems for adjustment of segmentation
parameters have been proposed recently (Evolving Fuzzy Image Segmentation --
EFIS) [1]. However, like any other algorithm, EFIS suffers from a few
limitations when used in practice. As a major drawback, EFIS depends on
detection of the object of interest for feature calculation, a task that is
highly application-dependent. In this paper, a new version of EFIS is proposed
to overcome these limitations. The new EFIS, called self-configuring EFIS
(SC-EFIS), uses available training data to auto-configure the parameters that
are fixed in EFIS. As well, the proposed SC-EFIS relies on a feature selection
process that does not require the detection of a region of interest (ROI). | [
"cs.CV"
] |
Deep convolutional neural networks (DCNNs) have recently demonstrated
high-quality results in single-image super-resolution (SR). DCNNs often suffer
from over-parametrization and large amounts of redundancy, which results in
inefficient inference and high memory usage, preventing massive applications on
mobile devices. As a way to significantly reduce model size and computation
time, binarized neural network has only been shown to excel on semantic-level
tasks such as image classification and recognition. However, little network
quantization effort has been devoted to image enhancement tasks like SR, as
network quantization is usually assumed to sacrifice pixel-level accuracy. In
this work, we explore a network-binarization approach for SR tasks without
sacrificing much reconstruction accuracy. To achieve this, we binarize the
convolutional filters in only residual blocks, and adopt a learnable weight for
each binary filter. We evaluate this idea on several state-of-the-art
DCNN-based architectures, and show that binarized SR networks achieve
comparable qualitative and quantitative results as their real-weight
counterparts. Moreover, the proposed binarized strategy could help reduce model
size by 80% when applied to SRResNet, and could potentially speed up inference
by 5 times. | [
"cs.CV",
"cs.AI"
] |
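A hedged PyTorch sketch of the binarization scheme described above: residual-block convolution weights are replaced by their sign, scaled by a learnable per-filter weight, with a straight-through estimator so gradients still reach the real-valued weights. The class name is illustrative and this is not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryConv2d(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # one learnable scale per binary filter, as the abstract describes
        self.alpha = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))

    def forward(self, x):
        w_bin = self.alpha * torch.sign(self.weight)
        # straight-through estimator: forward uses binary weights,
        # backward flows through the real-valued weights
        w = self.weight + (w_bin - self.weight).detach()
        return F.conv2d(x, w, self.bias, self.stride, self.padding,
                        self.dilation, self.groups)

print(BinaryConv2d(16, 16, 3, padding=1)(torch.randn(1, 16, 8, 8)).shape)
```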
Person re-identification has achieved great progress with deep convolutional
neural networks. However, most previous methods focus on learning individual
appearance feature embedding, and it is hard for the models to handle difficult
situations with different illumination, large pose variance and occlusion. In
this work, we take a step further and consider employing context information
for person search. For a probe-gallery pair, we first propose a contextual
instance expansion module, which employs a relative attention module to search
and filter useful context information in the scene. We also build a graph
learning framework to effectively employ context pairs to update target
similarity. These two modules are built on top of a joint detection and
instance feature learning framework, which improves the discriminativeness of
the learned features. The proposed framework achieves state-of-the-art
performance on two widely used person search datasets. | [
"cs.CV"
] |
Many important real-world problems have action spaces that are
high-dimensional, continuous or both, making full enumeration of all possible
actions infeasible. Instead, only small subsets of actions can be sampled for
the purpose of policy evaluation and improvement. In this paper, we propose a
general framework to reason in a principled way about policy evaluation and
improvement over such sampled action subsets. This sample-based policy
iteration framework can in principle be applied to any reinforcement learning
algorithm based upon policy iteration. Concretely, we propose Sampled MuZero,
an extension of the MuZero algorithm that is able to learn in domains with
arbitrarily complex action spaces by planning over sampled actions. We
demonstrate this approach on the classical board game of Go and on two
continuous control benchmark domains: DeepMind Control Suite and Real-World RL
Suite. | [
"cs.LG"
] |
In this paper, we present Co-scale conv-attentional image Transformers
(CoaT), a Transformer-based image classifier equipped with co-scale and
conv-attentional mechanisms. First, the co-scale mechanism maintains the
integrity of Transformers' encoder branches at individual scales, while
allowing representations learned at different scales to effectively communicate
with each other; we design a series of serial and parallel blocks to realize
the co-scale mechanism. Second, we devise a conv-attentional mechanism by
realizing a relative position embedding formulation in the factorized attention
module with an efficient convolution-like implementation. CoaT empowers image
Transformers with enriched multi-scale and contextual modeling capabilities. On
ImageNet, relatively small CoaT models attain superior classification results
compared with similar-sized convolutional neural networks and image/vision
Transformers. The effectiveness of CoaT's backbone is also illustrated on
object detection and instance segmentation, demonstrating its applicability to
downstream computer vision tasks. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
The last decade has shown a tremendous success in solving various computer
vision problems with the help of deep learning techniques. Lately, many works
have demonstrated that learning-based approaches with suitable network
architectures even exhibit superior performance for the solution of (ill-posed)
image reconstruction problems such as deblurring, super-resolution, or medical
image reconstruction. The drawback of purely learning-based methods, however,
is that they cannot provide provable guarantees for the trained network to
follow a given data formation process during inference. In this work we propose
energy dissipating networks that iteratively compute a descent direction with
respect to a given cost function or energy at the currently estimated
reconstruction. Therefore, an adaptive step size rule such as a line-search,
along with a suitable number of iterations can guarantee the reconstruction to
follow a given data formation model encoded in the energy to arbitrary
precision, and hence control the model's behavior even during test time. We
prove that under standard assumptions, descent using the direction predicted by
the network converges (linearly) to the global minimum of the energy. We
illustrate the effectiveness of the proposed approach in experiments on single
image super resolution and computed tomography (CT) reconstruction, and further
illustrate extensions to convex feasibility problems. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
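A hedged NumPy sketch of the descent scheme described above: a learned network (here replaced by a placeholder that simply returns the data-term gradient) proposes a direction, and a backtracking line search accepts only steps that actually decrease the given energy, which is what yields the dissipation guarantee. All names and the toy energy are assumed for illustration.

```python
import numpy as np

def energy(x, A, y, lam):
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x))

def dissipating_step(x, A, y, lam, direction_net, t0=1.0, beta=0.5, max_tries=20):
    d = direction_net(x)                        # network-predicted descent direction
    e0, t = energy(x, A, y, lam), t0
    for _ in range(max_tries):                  # backtracking line search
        if energy(x - t * d, A, y, lam) < e0:   # accept only energy-decreasing steps
            return x - t * d
        t *= beta
    return x                                    # no dissipating step found

rng = np.random.default_rng(0)
A, y = rng.standard_normal((10, 5)), rng.standard_normal(10)
direction_net = lambda x: A.T @ (A @ x - y)     # placeholder "network" for illustration
x = dissipating_step(np.zeros(5), A, y, lam=0.1, direction_net=direction_net)
print(energy(x, A, y, 0.1))
```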
We demonstrate how graph neural networks can be used to solve combinatorial
optimization problems. Our approach is broadly applicable to canonical NP-hard
problems in the form of quadratic unconstrained binary optimization problems,
such as maximum cut, minimum vertex cover, maximum independent set, as well as
Ising spin glasses and higher-order generalizations thereof in the form of
polynomial unconstrained binary optimization problems. We apply a relaxation
strategy to the problem Hamiltonian to generate a differentiable loss function
with which we train the graph neural network and apply a simple projection to
integer variables once the unsupervised training process has completed. We
showcase our approach with numerical results for the canonical maximum cut and
maximum independent set problems. We find that the graph neural network
optimizer performs on par or outperforms existing solvers, with the ability to
scale beyond the state of the art to problems with millions of variables. | [
"cs.LG",
"cond-mat.dis-nn",
"cs.AI",
"math.OC",
"quant-ph"
] |
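A hedged sketch of the relaxation strategy described above, shrunk to its core: relax binary variables to probabilities, minimize the resulting differentiable QUBO Hamiltonian (here with plain logits standing in for the graph neural network's node outputs), then project back to integers by rounding. The toy problem and helper name are assumed for illustration.

```python
import torch

def qubo_loss(p: torch.Tensor, Q: torch.Tensor) -> torch.Tensor:
    return p @ Q @ p    # relaxed Hamiltonian x^T Q x with soft assignments p in [0, 1]

# Maximum independent set on a triangle: H = -sum_i x_i + 2 * sum_{(i,j) in E} x_i x_j
Q = torch.tensor([[-1., 2., 2.],
                  [ 0., -1., 2.],
                  [ 0., 0., -1.]])
logits = torch.randn(3, requires_grad=True)      # stand-in for GNN node outputs
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(300):                             # unsupervised training on the relaxed loss
    opt.zero_grad()
    qubo_loss(torch.sigmoid(logits), Q).backward()
    opt.step()
x = (torch.sigmoid(logits) > 0.5).float()        # projection back to integer variables
print(x.tolist(), qubo_loss(x, Q).item())        # typically a single selected vertex, energy -1
```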
Prior probability models are a fundamental component of many image processing
problems, but density estimation is notoriously difficult for high-dimensional
signals such as photographic images. Deep neural networks have provided
state-of-the-art solutions for problems such as denoising, which implicitly
rely on a prior probability model of natural images. Here, we develop a robust
and general methodology for making use of this implicit prior. We rely on a
statistical result due to Miyasawa (1961), who showed that the least-squares
solution for removing additive Gaussian noise can be written directly in terms
of the gradient of the log of the noisy signal density. We use this fact to
develop a stochastic coarse-to-fine gradient ascent procedure for drawing
high-probability samples from the implicit prior embedded within a CNN trained
to perform blind (i.e., with unknown noise level) least-squares denoising. A
generalization of this algorithm to constrained sampling provides a method for
using the implicit prior to solve any linear inverse problem, with no
additional training. We demonstrate this general form of transfer learning in
multiple applications, using the same algorithm to produce state-of-the-art
levels of unsupervised performance for deblurring, super-resolution,
inpainting, and compressive sensing. | [
"cs.CV",
"eess.IV",
"stat.ML"
] |
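The statistical result referred to above (Miyasawa, 1961; closely related to Tweedie's formula) states that for an observation $y = x + n$ with $n \sim \mathcal{N}(0, \sigma^2 I)$, the least-squares (posterior-mean) denoiser can be written in terms of the density of the noisy signal:

```latex
\hat{x}(y) \;=\; \mathbb{E}[x \mid y] \;=\; y + \sigma^{2}\,\nabla_{y} \log p(y)
```

Hence the residual $f(y) - y$ of a trained blind least-squares denoiser provides an estimate of $\sigma^{2}\nabla_{y}\log p(y)$, which the coarse-to-fine gradient ascent procedure uses to draw samples from the implicit prior.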
Given an input video, its associated audio, and a brief caption, the
audio-visual scene aware dialog (AVSD) task requires an agent to indulge in a
question-answer dialog with a human about the audio-visual content. This task
thus poses a challenging multi-modal representation learning and reasoning
scenario, advancements into which could influence several human-machine
interaction applications. To solve this task, we introduce a
semantics-controlled multi-modal shuffled Transformer reasoning framework,
consisting of a sequence of Transformer modules, each taking a modality as
input and producing representations conditioned on the input question. Our
proposed Transformer variant uses a shuffling scheme on their multi-head
outputs, demonstrating better regularization. To encode fine-grained visual
information, we present a novel dynamic scene graph representation learning
pipeline that consists of an intra-frame reasoning layer producing
spatio-semantic graph representations for every frame, and an inter-frame
aggregation module capturing temporal cues. Our entire pipeline is trained
end-to-end. We present experiments on the benchmark AVSD dataset, both on
answer generation and selection tasks. Our results demonstrate state-of-the-art
performances on all evaluation metrics. | [
"cs.CV",
"cs.CL"
] |
Humans learn from life events to form intuitions towards the understanding of
visual environments and languages. Envision that you are instructed by a
high-level instruction, "Go to the bathroom in the master bedroom and replace
the blue towel on the left wall", what would you possibly do to carry out the
task? Intuitively, we comprehend the semantics of the instruction to form an
overview of where a bathroom is and what a blue towel is in mind; then, we
navigate to the target location by consistently matching the bathroom
appearance in mind with the current scene. In this paper, we present an agent
that mimics such human behaviors. Specifically, we focus on the Remote Embodied
Visual Referring Expression in Real Indoor Environments task, called REVERIE,
where an agent is asked to correctly localize a remote target object specified
by a concise high-level natural language instruction, and propose a two-stage
training pipeline. In the first stage, we pretrain the agent with two
cross-modal alignment sub-tasks, namely the Scene Grounding task and the Object
Grounding task. The agent learns where to stop in the Scene Grounding task and
what to attend to in the Object Grounding task respectively. Then, to generate
action sequences, we propose a memory-augmented attentive action decoder to
smoothly fuse the pre-trained vision and language representations with the
agent's past memory experiences. Without bells and whistles, experimental
results show that our method significantly outperforms the previous state-of-the-art
(SOTA), demonstrating the effectiveness of our method. | [
"cs.CV"
] |
In this paper, we tackle the problem of estimating the depth of a scene from
a monocular video sequence. In particular, we handle challenging scenarios,
such as non-translational camera motion and dynamic scenes, where traditional
structure from motion and motion stereo methods do not apply. To this end, we
first study the problem of depth estimation from a single image. In this
context, we exploit the availability of a pool of images for which the depth is
known, and formulate monocular depth estimation as a discrete-continuous
optimization problem, where the continuous variables encode the depth of the
superpixels in the input image, and the discrete ones represent relationships
between neighboring superpixels. The solution to this discrete-continuous
optimization problem is obtained by performing inference in a graphical model
using particle belief propagation. To handle video sequences, we then extend
our single image model to a two-frame one that naturally encodes short-range
temporal consistency and inherently handles dynamic objects. Based on the
prediction of this model, we then introduce a fully-connected pairwise CRF that
accounts for longer range spatio-temporal interactions throughout a video. We
demonstrate the effectiveness of our model in both the indoor and outdoor
scenarios. | [
"cs.CV"
] |
When tasks change over time, meta-transfer learning seeks to improve the
efficiency of learning a new task via both meta-learning and transfer-learning.
While the standard attention has been effective in a variety of settings, we
question its effectiveness in improving meta-transfer learning since the tasks
being learned are dynamic and the amount of context can be substantially
smaller. In this paper, using a recently proposed meta-transfer learning model,
Sequential Neural Processes (SNP), we first empirically show that it suffers
from a similar underfitting problem observed in the functions inferred by
Neural Processes. We further demonstrate that, unlike in the meta-learning
setting, standard attention mechanisms are not effective in the meta-transfer
setting. To resolve this, we propose a new attention mechanism, Recurrent
Memory Reconstruction (RMR), and demonstrate that providing an imaginary
context that
is recurrently updated and reconstructed with interaction is crucial in
achieving effective attention for meta-transfer learning. Furthermore,
incorporating RMR into SNP, we propose Attentive Sequential Neural
Processes-RMR (ASNP-RMR) and demonstrate in various tasks that ASNP-RMR
significantly outperforms the baselines. | [
"cs.LG",
"stat.ML"
] |
Despite their successes in the field of self-learning AI, Convolutional
Neural Networks (CNNs) suffer from having too many trainable parameters,
impacting computational performance. Several approaches have been proposed to
reduce the number of parameters in the visual domain, the Inception
architecture [Szegedy et al., 2016] being a prominent example. This raises the
question whether the number of trainable parameters in CNNs can also be reduced
for 1D inputs, such as time-series data, without incurring a substantial loss
in classification performance. We propose and examine two methods for
complexity reduction in AstroNet [Shallue & Vanderburg, 2018], a CNN for
automatic classification of time-varying brightness data of stars to detect
exoplanets. The first method makes only a tactical reduction of layers in
AstroNet while the second method also modifies the original input data by means
of a Gaussian pyramid. We conducted our experiments with various degrees of
dropout regularization. Our results show only a marginal loss in accuracy
compared to the original AstroNet, while reducing training time by up to
85 percent. These results show potential for similar reductions in other CNN
applications while largely retaining accuracy. | [
"cs.LG",
"stat.ML",
"I.2.6"
] |
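
The following sketch illustrates the Gaussian-pyramid input reduction mentioned in the preceding abstract: a 1D light curve is low-pass filtered and decimated, shrinking the input that a 1D CNN such as AstroNet would consume. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation; the synthetic light curve is hypothetical.

import numpy as np

def gaussian_kernel_1d(sigma=1.0, radius=4):
    # Discrete 1D Gaussian kernel, normalized to sum to one.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_pyramid_1d(series, levels=3, sigma=1.0):
    # Progressively smooth and decimate a 1D light curve; each coarser level
    # has half the samples of the previous one.
    kernel = gaussian_kernel_1d(sigma)
    pyramid, current = [series], series
    for _ in range(levels):
        smoothed = np.convolve(current, kernel, mode="same")  # low-pass filter
        current = smoothed[::2]                               # decimate by 2
        pyramid.append(current)
    return pyramid

# Example: a synthetic light curve with a transit-like dip plus noise.
t = np.linspace(0.0, 1.0, 2001)
flux = 1.0 - 0.01 * np.exp(-((t - 0.5) ** 2) / 0.0005) + 0.002 * np.random.randn(t.size)
for level, s in enumerate(gaussian_pyramid_1d(flux)):
    print(f"level {level}: {s.size} samples")
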
Accurate and robust prediction of patient's response to drug treatments is
critical for developing precision medicine. However, it is often difficult to
obtain a sufficient amount of coherent drug response data from patients
directly for training a generalized machine learning model. Although the
utilization of rich cell line data provides an alternative solution, it is
challenging to transfer the knowledge obtained from cell lines to patients due
to various confounding factors. Few existing transfer learning methods can
reliably disentangle common intrinsic biological signals from confounding
factors in the cell line and patient data. In this paper, we develop a Coherent
Deconfounding Autoencoder (CODE-AE) that can extract both common biological
signals shared by incoherent samples and private representations unique to each
data set, transfer knowledge learned from cell line data to tissue data, and
separate confounding factors from them. Extensive studies on multiple data sets
demonstrate that CODE-AE significantly improves the accuracy and robustness
over state-of-the-art methods in both predicting patient drug response and
de-confounding biological signals. Thus, CODE-AE provides a useful framework to
take advantage of in vitro omics data for developing generalized patient
predictive models. The source code is available at
https://github.com/XieResearchGroup/CODE-AE. | [
"cs.LG",
"q-bio.GN"
] |
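
A minimal sketch of the shared/private encoding idea described above: one encoder captures signals common to cell-line and tissue samples while per-domain encoders absorb confounders, and the decoder reconstructs from both. This is a generic PyTorch illustration under our own assumptions, not the actual CODE-AE architecture or loss; module names and dimensions are hypothetical.

import torch
import torch.nn as nn

class SharedPrivateAE(nn.Module):
    # Toy autoencoder with a shared encoder (common biological signal) and
    # per-domain private encoders (confounders); reconstruction uses both.
    def __init__(self, in_dim=512, shared_dim=64, private_dim=32):
        super().__init__()
        self.shared_enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                        nn.Linear(128, shared_dim))
        self.private_enc = nn.ModuleDict({
            d: nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                             nn.Linear(128, private_dim))
            for d in ("cell_line", "tissue")})
        self.decoder = nn.Sequential(nn.Linear(shared_dim + private_dim, 128),
                                     nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x, domain):
        z_shared = self.shared_enc(x)             # transferable representation
        z_private = self.private_enc[domain](x)   # domain-specific confounders
        recon = self.decoder(torch.cat([z_shared, z_private], dim=-1))
        return recon, z_shared, z_private

model = SharedPrivateAE()
x = torch.randn(8, 512)                           # 8 mock expression profiles
recon, z_s, z_p = model(x, "cell_line")
loss = nn.functional.mse_loss(recon, x)           # plus alignment/orthogonality terms in practice
print(recon.shape, z_s.shape, loss.item())
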
As an important technology in artificial intelligence, Granular Computing
(GrC) has emerged as a new multi-disciplinary paradigm and received much
attention in recent years. Information granules, which form an abstract and
efficient characterization of large volumes of numeric data, are considered
the fundamental constructs of GrC. By generating prototypes and a partition
matrix, fuzzy clustering is a commonly encountered way of performing
information granulation. Degranulation involves data reconstruction carried
out on the basis of the granular representatives. Previous studies have shown
that there is a relationship between the reconstruction error and the
performance of the granulation process. Typically, the lower the
degranulation error, the better the granulation performance. However, the
existing methods of
degranulation usually cannot restore the original numeric data, which is one of
the important reasons behind the occurrence of the reconstruction error. To
enhance the quality of degranulation, in this study, we develop an augmented
scheme through modifying the partition matrix. By proposing the augmented
scheme, we dwell on a novel collection of granulation-degranulation mechanisms.
In the constructed approach, the prototypes can be expressed as the product of
the dataset matrix and the partition matrix. Then, in the degranulation
process, the reconstructed numeric data can be decomposed into the product of
the partition matrix and the matrix of prototypes. Both the granulation and
degranulation are regarded as generalized rotation between the data subspace
and the prototype subspace with the partition matrix and the fuzzification
factor. By modifying the partition matrix, the new partition matrix is
constructed through a series of matrix operations. We offer a thorough analysis
of the developed scheme. The experimental results are in agreement with the
underlying conceptual framework. | [
"cs.LG",
"cs.SY",
"eess.SP",
"eess.SY"
] |
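
To make the granulation-degranulation mechanism concrete, the sketch below computes fuzzy-clustering prototypes as a (fuzzifier-weighted) product of the partition matrix and the data matrix, reconstructs the data from the partition matrix and the prototypes, and reports the reconstruction error. It is a standard FCM-style illustration, not the augmented scheme proposed in the abstract; the random partition matrix is purely for demonstration.

import numpy as np

def fcm_prototypes(X, U, m=2.0):
    # Granulation: prototypes from the (fuzzifier-weighted) product of the
    # partition matrix U (c x N) and the data matrix X (N x d).
    Um = U ** m
    return (Um @ X) / Um.sum(axis=1, keepdims=True)

def degranulate(U, V, m=2.0):
    # Degranulation: reconstruct the data as a weighted combination of the
    # prototypes V (c x d), driven by the partition matrix.
    Um = U ** m
    return (Um.T @ V) / Um.sum(axis=0)[:, None]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # N=100 samples, d=3 features
U = rng.dirichlet(np.ones(4), size=100).T        # random c=4 partition matrix (columns sum to 1)
V = fcm_prototypes(X, U)
X_hat = degranulate(U, V)
print("reconstruction error:", np.mean(np.sum((X - X_hat) ** 2, axis=1)))
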
The identification of nerves is difficult, as nerve structures are
challenging to image and detect in ultrasound images. Nevertheless,
identifying nerves in ultrasound images is a crucial step for improving the
performance of regional anesthesia. In this paper, a network called Brachial
Plexus Multi-instance Segmentation Network (BPMSegNet) is proposed to identify
different tissues (nerves, arteries, veins, muscles) in ultrasound images. The
BPMSegNet has three novel modules. The first is the spatial local contrast
feature, which computes contrast features at different scales. The second one
is the self-attention gate, which reweights the channels in feature maps by
their importance. The third is the addition of a skip concatenation with
transposed convolution within a feature pyramid network. The proposed BPMSegNet
is evaluated by conducting experiments on our constructed Ultrasound Brachial
Plexus Dataset (UBPD). Quantitative experimental results show the proposed
network can segment multiple tissues from the ultrasound images with good
performance. | [
"cs.CV"
] |
In the context of statistical supervised learning, the noiseless linear model
assumes that there exists a deterministic linear relation $Y = \langle
\theta_*, \Phi(U) \rangle$ between the random output $Y$ and the random
feature vector $\Phi(U)$, a potentially non-linear transformation of the
inputs $U$. We
analyze the convergence of single-pass, fixed step-size stochastic gradient
descent on the least-square risk under this model. The convergence of the
iterates to the optimum $\theta_*$ and the decay of the generalization error
follow polynomial convergence rates with exponents that both depend on the
regularities of the optimum $\theta_*$ and of the feature vectors $\Phi(u)$. We
interpret our result in the reproducing kernel Hilbert space framework. As a
special case, we analyze an online algorithm for estimating a real function on
the unit interval from the noiseless observation of its value at randomly
sampled points; the convergence depends on the Sobolev smoothness of the
function and of a chosen kernel. Finally, we apply our analysis beyond the
supervised learning setting to obtain convergence rates for the averaging
process (a.k.a. gossip algorithm) on a graph depending on its spectral
dimension. | [
"cs.LG",
"cs.MA",
"math.OC",
"stat.ML"
] |
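
The setting above can be simulated directly: the sketch below runs single-pass, fixed step-size SGD on the least-squares risk under a noiseless linear model and tracks the distance of the iterate to the optimum. A toy NumPy illustration with Gaussian features chosen by us; it does not reproduce the paper's rates or kernel setting.

import numpy as np

rng = np.random.default_rng(0)
d, n, step = 50, 20000, 0.01
theta_star = rng.normal(size=d) / np.sqrt(d)     # unknown optimum

theta = np.zeros(d)
distances = []
for t in range(n):                               # single pass: each sample used once
    x = rng.normal(size=d)                       # random feature vector Phi(u)
    y = x @ theta_star                           # noiseless linear observation
    grad = (x @ theta - y) * x                   # gradient of the squared loss on one sample
    theta -= step * grad                         # fixed step-size SGD update
    if (t + 1) % 5000 == 0:
        distances.append(np.linalg.norm(theta - theta_star))
print("distance to optimum during the pass:", distances)
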
We propose a new approach to interactive image segmentation based on some
properties of a family of quadratic optimization problems related to dominant
sets, a well-known graph-theoretic notion of a cluster which generalizes the
concept of a maximal clique to edge-weighted graphs. In particular, we show
that by properly controlling a regularization parameter which determines the
structure and the scale of the underlying problem, we are in a position to
extract groups of dominant-set clusters which are constrained to contain
user-selected elements. The resulting algorithm can deal naturally with any
type of input modality, including scribbles, sloppy contours, and bounding
boxes, and is able to robustly handle noisy annotations on the part of the
user. Experiments on standard benchmark datasets show the effectiveness of our
approach as compared to state-of-the-art algorithms on a variety of natural
images under several input conditions. | [
"cs.CV"
] |
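
One standard way to extract a dominant-set cluster from the quadratic program mentioned above is replicator dynamics on the simplex, with a regularization parameter alpha that controls the scale of the extracted group. The sketch below is a generic, unconstrained illustration (it does not enforce the user-selected seeds of the interactive method); the toy similarity matrix is hypothetical.

import numpy as np

def dominant_set_replicator(A, alpha=0.2, iters=500):
    # Maximize x^T (A - alpha*I) x over the simplex via replicator dynamics;
    # alpha determines the structure and scale of the extracted cluster.
    B = A - alpha * np.eye(A.shape[0])
    x = np.full(A.shape[0], 1.0 / A.shape[0])     # start from the barycenter
    for _ in range(iters):
        g = B @ x
        x = x * g / (x @ g)                       # multiplicative (replicator) update
    return x                                       # the support of x is the cluster

rng = np.random.default_rng(1)
A = rng.random((8, 8))
A = (A + A.T) / 2.0
A[:4, :4] += 1.0                                   # nodes 0-3 form a tight cluster
np.fill_diagonal(A, 0.0)                           # zero self-similarities
# (assumes B @ x stays positive along the iterations; in practice A may be
#  shifted by a constant to guarantee nonnegativity of the update)
print(np.round(dominant_set_replicator(A), 3))
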
Computational facial models that capture properties of facial cues related to
aging and kinship increasingly attract the attention of the research community,
enabling the development of reliable methods for age progression, age
estimation, age-invariant facial characterization, and kinship verification
from visual data. In this paper, we review recent advances in modeling of
facial aging and kinship. In particular, we provide an up-to-date, complete
list of available annotated datasets and an in-depth analysis of geometric,
hand-crafted, and learned facial representations that are used for facial aging
and kinship characterization. Moreover, evaluation protocols and metrics are
reviewed and notable experimental results for each surveyed task are analyzed.
This survey allows us to identify challenges and discuss future research
directions for the development of robust facial models in real-world
conditions. | [
"cs.CV"
] |
A method for the local and global interpretation of a black-box model on the
basis of the well-known generalized additive models is proposed. It can be
viewed as an extension or a modification of the algorithm using the neural
additive model. The method is based on using an ensemble of gradient boosting
machines (GBMs) such that each GBM is learned on a single feature and produces
a shape function of the feature. The ensemble is composed as a weighted sum of
separate GBMs, resulting in a weighted sum of shape functions which forms the
generalized additive model. GBMs are built in parallel using randomized
decision trees of depth 1, which provide a very simple architecture. Weights of
GBMs as well as features are computed in each iteration of boosting by using
the Lasso method and then updated by means of a specific smoothing procedure.
In contrast to the neural additive model, the method provides feature weights
in explicit form and is simple to train. Numerous numerical experiments with
an algorithm implementing the proposed method on synthetic and
real datasets demonstrate its efficiency and properties for local and global
interpretation. | [
"cs.LG",
"stat.ML"
] |
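
The core construction described above - one shallow GBM per feature whose predictions act as shape functions, combined by Lasso weights into an additive model - can be sketched with scikit-learn as follows. This one-shot illustration omits the iterative boosting/weighting and smoothing procedure of the proposed method; the synthetic data and hyperparameters are our own assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 4))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)   # additive ground truth

# One shallow GBM (depth-1 trees) per feature -> per-feature shape functions.
gbms = [GradientBoostingRegressor(max_depth=1, n_estimators=100).fit(X[:, [j]], y)
        for j in range(X.shape[1])]
shape_values = np.column_stack([g.predict(X[:, [j]]) for j, g in enumerate(gbms)])

# Lasso assigns (sparse) weights to the shape functions -> weighted additive model.
lasso = Lasso(alpha=0.01).fit(shape_values, y)
print("feature weights:", np.round(lasso.coef_, 3))
print("train MSE:", np.mean((lasso.predict(shape_values) - y) ** 2))
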
We present variational generative adversarial networks, a general learning
framework that combines a variational auto-encoder with a generative
adversarial network, for synthesizing images in fine-grained categories, such
as faces of a specific person or objects in a category. Our approach models an
image as a composition of label and latent attributes in a probabilistic model.
By varying the fine-grained category label fed into the resulting generative
model, we can generate images in a specific category with randomly drawn values
on a latent attribute vector. Our approach has two novel aspects. First, we
adopt a cross entropy loss for the discriminative and classifier networks, but a
mean discrepancy objective for the generative network. This kind of asymmetric
loss function makes the GAN training more stable. Second, we adopt an encoder
network to learn the relationship between the latent space and the real image
space, and use pairwise feature matching to keep the structure of generated
images. We experiment with natural images of faces, flowers, and birds, and
demonstrate that the proposed models are capable of generating realistic and
diverse samples with fine-grained category labels. We further show that our
models can be applied to other tasks, such as image inpainting,
super-resolution, and data augmentation for training better face recognition
models. | [
"cs.CV"
] |
Model compression is a critical technique to efficiently deploy neural
network models on mobile devices which have limited computation resources and
tight power budgets. Conventional model compression techniques rely on
hand-crafted heuristics and rule-based policies that require domain experts to
explore the large design space trading off among model size, speed, and
accuracy, which is usually sub-optimal and time-consuming. In this paper, we
propose AutoML for Model Compression (AMC), which leverages reinforcement
learning to provide the model compression policy. This learning-based
compression policy outperforms conventional rule-based compression policies by
achieving a higher compression ratio, better preserving the accuracy, and
freeing
human labor. Under 4x FLOPs reduction, we achieved 2.7% better accuracy than
the handcrafted model compression policy for VGG-16 on ImageNet. We applied
this automated, push-the-button compression pipeline to MobileNet and achieved
1.81x speedup of measured inference latency on an Android phone and 1.43x
speedup on the Titan XP GPU, with only 0.1% loss of ImageNet Top-1 accuracy. | [
"cs.CV"
] |
Graphs are playing a crucial role in different fields since they are powerful
tools to unveil intrinsic relationships among signals. In many scenarios, an
accurate graph structure representing the signals is not available at all,
which motivates learning a reliable graph structure directly from observed
signals. However, in real life, uncertainty in the observed signals is
inevitable due to noisy measurements or limited observability, which reduces
the reliability of the learned graph. To this end, we
propose a graph learning framework using Wasserstein distributionally robust
optimization (WDRO) which handles uncertainty in data by defining an
uncertainty set on distributions of the observed data. Specifically, two models
are developed, one of which assumes all distributions in uncertainty set are
Gaussian distributions and the other one has no prior distributional
assumption. Instead of using the interior point method directly, we propose
two algorithms to solve the corresponding models and show that our algorithms
are more time-efficient. In addition, we also reformulate both models as
Semi-Definite Programs (SDPs) and illustrate that these formulations become
intractable for large-scale graphs. Experiments on both synthetic and real-world
data are carried out to validate the proposed framework, which show that our
scheme can learn a reliable graph in the context of uncertainty. | [
"cs.LG",
"eess.SP"
] |
Image super-resolution is a challenging task and has attracted increasing
attention in research and industrial communities. In this paper, we propose a
novel end-to-end Attention-based DenseNet with Residual Deconvolution named as
ADRD. In our ADRD, a weighted dense block, in which the current layer receives
weighted features from all previous levels, is proposed to adaptively capture
valuable features residing in dense layers. In addition, a novel spatial
attention module
is presented to generate a group of attentive maps for emphasizing informative
regions. In addition, we design an innovative strategy to upsample residual
information via the deconvolution layer, so that the high-frequency details can
be accurately upsampled. Extensive experiments conducted on publicly available
datasets demonstrate the promising performance of the proposed ADRD against the
state-of-the-arts, both quantitatively and qualitatively. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
Approximate linear programming (ALP) represents one of the major algorithmic
families to solve large-scale Markov decision processes (MDP). In this work, we
study a primal-dual formulation of the ALP, and develop a scalable, model-free
algorithm called bilinear $\pi$ learning for reinforcement learning when a
sampling oracle is provided. This algorithm enjoys a number of advantages.
First, it adopts (bi)linear models to represent the high-dimensional value
function and state-action distributions, using given state and action features.
Its run-time complexity depends on the number of features, not the size of the
underlying MDPs. Second, it operates in a fully online fashion without having
to store any sample, thus having minimal memory footprint. Third, we prove that
it is sample-efficient, solving for the optimal policy to high precision with a
sample complexity linear in the dimension of the parameter space. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
Hyperspectral images show similar statistical properties to natural grayscale
or color photographic images. However, the classification of hyperspectral
images is more challenging because of the very high dimensionality of the
pixels and the small number of labeled examples typically available for
learning. These peculiarities lead to particular signal processing problems,
mainly characterized by indetermination and complex manifolds. The framework of
statistical learning has gained popularity in the last decade. New methods have
been presented to account for the spatial homogeneity of images, to include
user's interaction via active learning, to take advantage of the manifold
structure with semisupervised learning, to extract and encode invariances, or
to adapt classifiers and image representations to unseen yet similar scenes.
This tutorial reviews the main advances in hyperspectral remote sensing image
classification through illustrative examples. | [
"cs.CV"
] |
Machine learning has shown potential for optimizing existing molecules with
more desirable properties, a critical step towards accelerating new chemical
discovery. In this work, we propose QMO, a generic query-based molecule
optimization framework that exploits latent embeddings from a molecule
autoencoder. QMO improves the desired properties of an input molecule based on
efficient queries, guided by a set of molecular property predictions and
evaluation metrics. We show that QMO outperforms existing methods in the
benchmark tasks of optimizing molecules for drug-likeness and solubility
under similarity constraints. We also demonstrate significant property
improvement using QMO on two new and challenging tasks that are also important
in real-world discovery problems: (i) optimizing existing SARS-CoV-2 Main
Protease inhibitors toward higher binding affinity; and (ii) improving known
antimicrobial peptides towards lower toxicity. Results from QMO show high
consistency with external validations, suggesting effective means of
facilitating molecule optimization problems with design constraints. | [
"cs.LG",
"q-bio.BM"
] |
In this paper, we propose MINE to perform novel view synthesis and depth
estimation via dense 3D reconstruction from a single image. Our approach is a
continuous depth generalization of the Multiplane Images (MPI) by introducing
the NEural radiance fields (NeRF). Given a single image as input, MINE predicts
a 4-channel image (RGB and volume density) at arbitrary depth values to jointly
reconstruct the camera frustum and fill in occluded contents. The reconstructed
and inpainted frustum can then be easily rendered into novel RGB or depth views
using differentiable rendering. Extensive experiments on RealEstate10K, KITTI
and Flowers Light Fields show that our MINE outperforms state-of-the-art by a
large margin in novel view synthesis. We also achieve competitive results in
depth estimation on iBims-1 and NYU-v2 without annotated depth supervision. Our
source code is available at https://github.com/vincentfung13/MINE | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
This paper presents a new approach to 3D object detection that leverages the
properties of the data obtained by a LiDAR sensor. State-of-the-art detectors
use neural network architectures based on assumptions valid for camera images.
However, point clouds obtained from LiDAR are fundamentally different. Most
detectors use shared filter kernels to extract features which do not take into
account the range-dependent nature of the point cloud features. To show this,
different detectors are trained on two splits of the KITTI dataset: close range
(objects up to 25 meters from LiDAR) and long-range. Top view images are
generated from point clouds as input for the networks. Combined results
outperform the baseline network trained on the full dataset with a single
backbone. Additional research compares the effect of using different input
features when converting the point cloud to image. The results indicate that
the network focuses on the shape and structure of the objects, rather than
exact values of the input. This work proposes an improvement for 3D object
detectors by taking into account the properties of LiDAR point clouds over
distance. Results show that training separate networks for close-range and
long-range objects boosts performance for all KITTI benchmark difficulties. | [
"cs.CV"
] |
While probabilistic models are an important tool for studying causality,
doing so suffers from the intractability of inference. As a step towards
tractable causal models, we consider the problem of learning interventional
distributions using sum-product networks (SPNs) that are over-parameterized by
gate functions, e.g., neural networks. Providing an arbitrarily intervened
causal graph as input, effectively subsuming Pearl's do-operator, the gate
function predicts the parameters of the SPN. The resulting interventional SPNs
are motivated and illustrated by a structural causal model themed around
personal health. Our empirical evaluation on three benchmark data sets as well
as a synthetic health data set clearly demonstrates that interventional SPNs
indeed are both expressive in modelling and flexible in adapting to the
interventions. | [
"cs.LG"
] |
We apply Reinforcement Learning algorithms to solve the classic quantitative
finance Market Making problem, in which an agent provides liquidity to the
market by placing buy and sell orders while maximizing a utility function. The
optimal agent has to find a delicate balance between the price risk of her
inventory and the profits obtained by capturing the bid-ask spread. We design
an environment with a reward function that determines an order relation between
policies equivalent to the original utility function. When comparing our agents
with the optimal solution and a benchmark symmetric agent, we find that the
Deep Q-Learning algorithm manages to recover the optimal agent. | [
"cs.LG",
"q-fin.ST"
] |
Supervised hashing aims to map the original features to compact binary codes
that are able to preserve label based similarity in the Hamming space.
Non-linear hash functions have demonstrated the advantage over linear ones due
to their powerful generalization capability. In the literature, kernel
functions are typically used to achieve non-linearity in hashing, which achieve
encouraging retrieval performance at the price of slow evaluation and training
time. Here we propose to use boosted decision trees for achieving non-linearity
in hashing, which are fast to train and evaluate, hence more suitable for
hashing with high dimensional data. In our approach, we first propose
sub-modular formulations for the hashing binary code inference problem and an
efficient GraphCut based block search method for solving large-scale inference.
Then we learn hash functions by training boosted decision trees to fit the
binary codes. Experiments demonstrate that our proposed method significantly
outperforms most state-of-the-art methods in retrieval precision and training
time. Especially for high-dimensional data, our method is orders of magnitude
faster than many methods in terms of training time. | [
"cs.CV",
"cs.LG"
] |
We propose a selective learning method using meta-learning and deep
reinforcement learning for medical image interpretation in the setting of
limited labeling resources. Our method, MedSelect, consists of a trainable deep
learning selector that uses image embeddings obtained from contrastive
pretraining for determining which images to label, and a non-parametric
selector that uses cosine similarity to classify unseen images. We demonstrate
that MedSelect learns an effective selection strategy outperforming baseline
selection strategies across seen and unseen medical conditions for chest X-ray
interpretation. We also perform an analysis of the selections performed by
MedSelect comparing the distribution of latent embeddings and clinical
features, and find significant differences compared to the strongest performing
baseline. We believe that our method may be broadly applicable across medical
imaging settings where labels are expensive to acquire. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
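
The non-parametric selector mentioned above classifies unseen images by cosine similarity to the labeled pool; a minimal NumPy sketch of that step follows. The embeddings and labels here are random placeholders, not outputs of the contrastive pretraining used by MedSelect.

import numpy as np

def cosine_classify(query_emb, support_embs, support_labels):
    # Non-parametric classification: assign each query the label of the
    # labeled image whose embedding has the highest cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = q @ s.T                                 # cosine similarity matrix
    return support_labels[np.argmax(sims, axis=1)]

rng = np.random.default_rng(0)
support = rng.normal(size=(20, 128))               # embeddings of the labeled pool
labels = rng.integers(0, 2, size=20)               # e.g. normal vs. abnormal X-ray
queries = rng.normal(size=(5, 128))
print(cosine_classify(queries, support, labels))
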
The recurring context in which objects appear holds valuable information that
can be employed to predict their existence. This intuitive observation indeed
led many researchers to endow appearance-based detectors with explicit
reasoning about context. The underlying thesis suggests that stronger
contextual relations would facilitate greater improvements in detection
capacity. In practice, however, the observed improvement in many cases is
modest at best, and often only marginal. In this work we seek to improve our
understanding of this phenomenon, in part by pursuing an opposite approach.
Instead of attempting to improve detection scores by employing context, we
treat the utility of context as an optimization problem: to what extent can
detection scores be improved by considering context or any other kind of
additional information? With this approach we explore the bounds on improvement
by using contextual relations between objects and provide a tool for
identifying the most helpful ones. We show that simple co-occurrence relations
can often provide large gains, while in other cases a significant improvement
is simply impossible or impractical with either co-occurrence or more precise
spatial relations. To better understand these results we then analyze the
ability of context to handle different types of false detections, revealing
that tested contextual information cannot ameliorate localization errors,
severely limiting its gains. These and additional insights further our
understanding on where and why utilization of context for object detection
succeeds and fails. | [
"cs.CV"
] |
Spherical videos, also known as 360° (panorama) videos, can be viewed
with various virtual reality devices such as computers and head-mounted
displays. They attract a large amount of interest since a strong sense of
immersion can be experienced when watching spherical videos. However,
capturing, storing and
transmitting high-resolution spherical videos are extremely expensive. In this
paper, we propose a novel single frame and multi-frame joint network (SMFN) for
recovering high-resolution spherical videos from low-resolution inputs. To take
advantage of pixel-level inter-frame consistency, deformable convolutions are
used to eliminate the motion difference between feature maps of the target
frame and its neighboring frames. A mixed attention mechanism is devised to
enhance the feature representation capability. The dual learning strategy is
exerted to constrain the space of solution so that a better solution can be
found. A novel loss function based on the weighted mean square error is
proposed to emphasize on the super-resolution of the equatorial regions. This
is the first attempt to settle the super-resolution of spherical videos, and we
collect a novel dataset from the Internet, MiG Panorama Video, which includes
204 videos. Experimental results on 4 representative video clips demonstrate
the efficacy of the proposed method. The dataset and code are available at
https://github.com/lovepiano/SMFN_For_360VSR. | [
"cs.CV",
"cs.AI",
"stat.ML"
] |
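
A loss that emphasizes equatorial regions of an equirectangular frame can be written as a latitude-weighted MSE, as sketched below in PyTorch. This is a plausible instance of such a weighting (cosine of latitude), not necessarily the exact weighted mean square error used in SMFN; the tensor shapes are assumptions.

import math
import torch

def latitude_weighted_mse(pred, target):
    # MSE over an equirectangular frame weighted by cos(latitude), so that
    # equatorial rows dominate the loss and polar rows are down-weighted.
    b, c, h, w_cols = pred.shape
    lat = (torch.arange(h, dtype=pred.dtype) + 0.5) / h * math.pi - math.pi / 2
    w = torch.cos(lat).clamp(min=0.0).view(1, 1, h, 1)       # per-row weight
    return (w * (pred - target) ** 2).sum() / (w.sum() * b * c * w_cols)

sr = torch.rand(2, 3, 64, 128)       # super-resolved spherical frames (B, C, H, W)
hr = torch.rand(2, 3, 64, 128)       # high-resolution ground truth
print(latitude_weighted_mse(sr, hr).item())
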
By analyzing the motion of people and other objects in a scene, we
demonstrate how to infer depth, occlusion, lighting, and shadow information
from video taken from a single camera viewpoint. This information is then used
to composite new objects into the same scene with a high degree of automation
and realism. In particular, when a user places a new object (2D cut-out) in the
image, it is automatically rescaled, relit, occluded properly, and casts
realistic shadows in the correct direction relative to the sun, and which
conform properly to scene geometry. We demonstrate results (best viewed in
supplementary video) on a range of scenes and compare to alternative methods
for depth estimation and shadow compositing. | [
"cs.CV"
] |
Automatic document content processing is affected by artifacts caused by the
shape of the paper and by non-uniform, diversely colored lighting conditions.
Fully-supervised methods on real data are impractical due to the large amount
of data needed. Hence, the current state-of-the-art deep learning models are
trained on fully or partially synthetic images. However, document shadow or
shading removal results still suffer because: (a) prior methods rely on
uniformity of local color statistics, which limits their application to
real-world scenarios with complex document shapes and textures; and (b)
synthetic or
hybrid datasets with non-realistic, simulated lighting conditions are used to
train the models. In this paper we tackle these problems with our two main
contributions. First, a physically constrained learning-based method that
directly estimates document reflectance based on intrinsic image formation
which generalizes to challenging illumination conditions. Second, a new dataset
that clearly improves previous synthetic ones, by adding a large range of
realistic shading and diverse multi-illuminant conditions, uniquely customized
to deal with documents in-the-wild. The proposed architecture works in a
self-supervised manner where only the synthetic texture is used as a weak
training signal (obviating the need for very costly ground truth with
disentangled versions of shading and reflectance). The proposed approach leads
to a significant generalization of document reflectance estimation in real
scenes with challenging illumination. We extensively evaluate on the real
benchmark datasets available for intrinsic image decomposition and document
shadow removal tasks. Our reflectance estimation scheme, when used as a
pre-processing step of an OCR pipeline, shows a 26% improvement of character
error rate (CER), thus, proving the practical applicability. | [
"cs.CV"
] |
Recently, transformer has achieved remarkable performance on a variety of
computer vision applications. Compared with mainstream convolutional neural
networks, vision transformers often have sophisticated architectures for
extracting powerful feature representations, which makes them more difficult
to deploy on mobile devices. In this paper, we present an effective
post-training quantization algorithm for reducing the memory storage and
computational costs of vision transformers. Basically, the quantization task
can be regarded as finding the optimal low-bit quantization intervals for
weights and inputs, respectively. To preserve the functionality of the
attention mechanism, we introduce a ranking loss into the conventional
quantization objective that aims to keep the relative order of the
self-attention results after quantization. Moreover, we thoroughly analyze the
relationship between quantization loss of different layers and the feature
diversity, and explore a mixed-precision quantization scheme by exploiting the
nuclear norm of each attention map and output feature. The effectiveness of the
proposed method is verified on several benchmark models and datasets, which
outperforms the state-of-the-art post-training quantization algorithms. For
instance, we can obtain an 81.29\% top-1 accuracy using DeiT-B model on
ImageNet dataset with about 8-bit quantization. | [
"cs.CV"
] |
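
A ranking loss that keeps the relative order of self-attention results after quantization can be instantiated as a pairwise hinge on attention scores, as in the PyTorch sketch below. This is our own generic formulation for illustration, not the paper's exact objective; the tensors stand in for full-precision and quantized attention maps.

import torch

def attention_ranking_loss(attn_fp, attn_q, margin=0.0):
    # Pairwise hinge loss penalizing quantized attention scores whose relative
    # order disagrees with the full-precision scores.
    # attn_fp / attn_q: (..., N) rows of attention scores.
    d_fp = attn_fp.unsqueeze(-1) - attn_fp.unsqueeze(-2)     # sign gives the target order
    d_q = attn_q.unsqueeze(-1) - attn_q.unsqueeze(-2)
    # penalize pairs that were ordered one way in FP but flipped after quantization
    return torch.relu(margin - torch.sign(d_fp) * d_q).mean()

fp = torch.randn(4, 8, 16)                         # e.g. (heads, queries, keys)
q = fp + 0.3 * torch.randn_like(fp)                # stand-in for quantized scores
print(attention_ranking_loss(fp, q).item())
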
Transfer learning aims at improving the performance of target learners on
target domains by transferring the knowledge contained in different but related
source domains. In this way, the dependence on a large number of target domain
data can be reduced for constructing target learners. Due to the wide
application prospects, transfer learning has become a popular and promising
area in machine learning. Although there are already some valuable and
impressive surveys on transfer learning, these surveys introduce approaches in
a relatively isolated way and lack the recent advances in transfer learning.
Due to the rapid expansion of the transfer learning area, it is both necessary
and challenging to comprehensively review the relevant studies. This survey
attempts to connect and systematize existing transfer learning research,
as well as to summarize and interpret the mechanisms and the strategies of
transfer learning in a comprehensive way, which may help readers have a better
understanding of the current research status and ideas. Unlike previous
surveys, this survey paper reviews more than forty representative transfer
learning approaches, especially homogeneous transfer learning approaches, from
the perspectives of data and model. The applications of transfer learning are
also briefly introduced. In order to show the performance of different transfer
learning models, over twenty representative transfer learning models are used
for experiments. The models are evaluated on three different datasets, i.e.,
Amazon Reviews, Reuters-21578, and Office-31. The experimental results
demonstrate the importance of selecting appropriate transfer learning models
for different applications in practice. | [
"cs.LG",
"stat.ML"
] |
Convolutional Neural Networks (CNNs) have dominated computer vision for
years, due to their ability to capture locality and translation invariance.
Recently, many vision transformer architectures have been proposed and they
show promising performance. A key component in vision transformers is the
fully-connected self-attention, which is more powerful than CNNs in modelling
long range dependencies. However, since the current dense self-attention uses
all image patches (tokens) to compute the attention matrix, it may neglect the
locality of image patches and involve noisy tokens (e.g., cluttered background
and occlusion), leading to a slow training process and potential degradation
of
performance. To address these problems, we propose a sparse attention scheme,
dubbed k-NN attention, for boosting vision transformers. Specifically, instead
of involving all the tokens for attention matrix calculation, we only select
the top-k similar tokens from the keys for each query to compute the attention
map. The proposed k-NN attention naturally inherits the local bias of CNNs
without introducing convolutional operations, as nearby tokens tend to be more
similar than others. In addition, the k-NN attention allows for the exploration
of long range correlation and at the same time filters out irrelevant tokens by
choosing the most similar tokens from the entire image. Despite its simplicity,
we verify, both theoretically and empirically, that $k$-NN attention is
powerful in distilling noise from input tokens and in speeding up training.
Extensive experiments are conducted by using ten different vision transformer
architectures to verify that the proposed k-NN attention can work with any
existing transformer architectures to improve its prediction performance. | [
"cs.CV"
] |
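
The k-NN attention described above is straightforward to sketch: each query keeps only its top-k most similar keys and masks the rest before the softmax. A minimal PyTorch illustration follows; shapes and the choice of k are assumptions, and production implementations would fold this into multi-head attention.

import torch

def knn_attention(q, k, v, topk=8):
    # Sparse self-attention: each query attends only to its top-k most similar
    # keys; all other scores are masked out before the softmax.
    # q, k, v: (B, N, D) tensors.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)    # (B, N, N)
    topk_vals, _ = scores.topk(topk, dim=-1)
    threshold = topk_vals[..., -1:]                            # k-th largest score per query
    masked = scores.masked_fill(scores < threshold, float("-inf"))
    return masked.softmax(dim=-1) @ v

q = torch.randn(2, 64, 32)
k = torch.randn(2, 64, 32)
v = torch.randn(2, 64, 32)
print(knn_attention(q, k, v, topk=8).shape)        # torch.Size([2, 64, 32])
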
In this work, we introduce a novel weakly supervised object detection (WSOD)
paradigm to detect objects belonging to rare classes with few examples, using
transferable knowledge from human-object interactions (HOI).
While WSOD shows lower performance than full supervision, we mainly focus on
HOI as the main context which can strongly supervise complex semantics in
images. Therefore, we propose a novel module called RRPN (relational region
proposal network) which outputs an object-localizing attention map only with
human poses and action verbs. In the source domain, we fully train an object
detector and the RRPN with full supervision of HOI. With transferred knowledge
about localization map from the trained RRPN, a new object detector can learn
unseen objects with weak verbal supervision of HOI without bounding box
annotations in the target domain. Because the RRPN is designed as an add-on
type, we can apply it not only to the object detection but also to other
domains such as semantic segmentation. The experimental results on HICO-DET
dataset show the possibility that the proposed method can be a cheap
alternative for the current supervised object detection paradigm. Moreover,
qualitative results demonstrate that our model can properly localize unseen
objects on HICO-DET and V-COCO datasets. | [
"cs.CV",
"cs.LG"
] |
We propose a novel and principled method to learn a nonparametric graph model
called graphon, which is defined in an infinite-dimensional space and
represents arbitrary-size graphs. Based on the weak regularity lemma from the
theory of graphons, we leverage a step function to approximate a graphon. We
show that the cut distance of graphons can be relaxed to the Gromov-Wasserstein
distance of their step functions. Accordingly, given a set of graphs generated
by an underlying graphon, we learn the corresponding step function as the
Gromov-Wasserstein barycenter of the given graphs. Furthermore, we develop
several enhancements and extensions of the basic algorithm, $e.g.$, the
smoothed Gromov-Wasserstein barycenter for guaranteeing the continuity of the
learned graphons and the mixed Gromov-Wasserstein barycenters for learning
multiple structured graphons. The proposed approach overcomes drawbacks of
prior state-of-the-art methods, and outperforms them on both synthetic and
real-world data. The code is available at
https://github.com/HongtengXu/SGWB-Graphon. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
Image segmentation and image restoration are two important topics in image
processing with great achievements. In this paper, we propose a new multiphase
segmentation model by combining image restoration and image segmentation
models. Utilizing image restoration aspects, the proposed segmentation model
can effectively and robustly tackle high noisy images, blurry images, images
with missing pixels, and vector-valued images. In particular, one of the most
important segmentation models, the piecewise constant Mumford-Shah model, can
be extended easily in this way to segment gray and vector-valued images
corrupted for example by noise, blur or missing pixels after coupling a new
data fidelity term which comes from image restoration topics. It can be solved
efficiently using the alternating minimization algorithm, and we prove the
convergence of this algorithm with three variables under mild conditions.
Experiments on many synthetic and real-world images demonstrate that our method
gives better segmentation results in comparison to other state-of-the-art
segmentation models, especially for blurry images and images with missing pixel
values. | [
"cs.CV",
"math.NA",
"65Kxx, 65Yxx",
"G.1.0; G.1.6"
] |
Graph Convolutional Networks (GCNs) achieved tremendous success by
effectively gathering local features for nodes. However, GCNs commonly focus
more on node features and less on graph structures within the neighborhood,
especially higher-order structural patterns, even though such local structural
patterns are shown to be indicative of node properties in numerous fields. In
addition, it is not just single patterns but the distribution over all these
patterns that matters, because networks are complex and the neighborhood of
each node consists of a mixture of various nodes and structural patterns.
Correspondingly, in this paper, we propose Graph Structural-topic Neural
Network, abbreviated GraphSTONE, a GCN model that utilizes topic models of
graphs, such that the structural topics capture indicative graph structures
broadly from a probabilistic aspect rather than merely a few structures.
Specifically, we build topic models upon graphs using anonymous walks and Graph
Anchor LDA, an LDA variant that selects significant structural patterns first,
so as to alleviate the complexity and generate structural topics efficiently.
In addition, we design multi-view GCNs to unify node features and structural
topic features and utilize structural topics to guide the aggregation. We
evaluate our model through both quantitative and qualitative experiments, where
our model exhibits promising performance, high efficiency, and clear
interpretability. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
Despite great progress in 3D human pose estimation from videos, it is still
an open problem to take full advantage of redundant 2D pose sequences to learn
a representative representation for generating a single 3D pose. To this end,
we propose an improved Transformer-based architecture, called Strided
Transformer, for 3D human pose estimation in videos to lift a sequence of 2D
joint locations to a 3D pose. Specifically, a vanilla Transformer encoder (VTE)
is adopted to model long-range dependencies of 2D pose sequences. To reduce
redundancy of the sequence and aggregate information from local context,
strided convolutions are incorporated into VTE to progressively reduce the
sequence length. The modified VTE is termed as strided Transformer encoder
(STE) which is built upon the outputs of VTE. STE not only effectively
aggregates long-range information to a single-vector representation in a
hierarchical global and local fashion but also significantly reduces the
computation cost. Furthermore, a full-to-single supervision scheme is designed
at both the full sequence scale and single target frame scale, applied to the
outputs of VTE and STE, respectively. This scheme imposes extra temporal
smoothness constraints in conjunction with the single target frame supervision
and improves the representation ability of features for the target frame. The
proposed architecture is evaluated on two challenging benchmark datasets,
Human3.6M and HumanEva-I, and achieves state-of-the-art results with much fewer
parameters. | [
"cs.CV"
] |
Dams impact downstream river dynamics through flow regulation and disruption
of upstream-downstream linkages. However, current dam operation is far from
satisfactory due to the inability to respond to the complicated and uncertain
dynamics of the upstream-downstream system and the various usages of the
reservoir. Even worse, unsatisfactory dam operation can cause floods in
downstream areas. Therefore, we leverage reinforcement learning (RL) methods
to compute
efficient dam operation guidelines in this work. Specifically, we build offline
simulators with real data and different mathematical models for the upstream
inflow, i.e., generalized least square (GLS) and dynamic linear model (DLM),
then use the simulator to train the state-of-the-art RL algorithms, including
DDPG, TD3 and SAC. Experiments show that the simulator with DLM can efficiently
model the inflow dynamics in the upstream and the dam operation policies
trained by RL algorithms significantly outperform the human-generated policy. | [
"cs.LG"
] |
The success of deep neural networks in real-world problems has prompted many
attempts to explain their training dynamics and generalization performance, but
more guiding principles for the training of neural networks are still needed.
Motivated by the edge of chaos principle behind the optimal performance of
neural networks, we study the role of various hyperparameters in modern neural
network training algorithms in terms of the order-chaos phase diagram. In
particular, we study a fully analytical feedforward neural network trained on
the widely adopted Fashion-MNIST dataset, and study the dynamics associated
with the hyperparameters in back-propagation during the training process. We
find that for the basic algorithm of stochastic gradient descent with momentum,
in the range around the commonly used hyperparameter values, clear scaling
relations are present with respect to the training time during the ordered
phase in the phase diagram, and the model's optimal generalization power at the
edge of chaos is similar across different training parameter combinations. In
the chaotic phase, the same scaling no longer exists. The scaling allows us to
choose the training parameters to achieve faster training without sacrificing
performance. In addition, we find that the commonly used model regularization
method - weight decay - effectively pushes the model towards the ordered phase
to achieve better performance. Leveraging on this fact and the scaling
relations in the other hyperparameters, we derived a principled guideline for
hyperparameter determination, such that the model can achieve optimal
performance by saturating it at the edge of chaos. Demonstrated on this simple
neural network model and training algorithm, our work improves the
understanding of neural network training dynamics, and can potentially be
extended to guiding principles of more complex model architectures and
algorithms. | [
"cs.LG",
"cs.AI",
"nlin.CD",
"physics.data-an"
] |
Any intelligent traffic monitoring system must be able to detect anomalies
such as traffic accidents in real time. In this paper, we propose a
Decision-Tree-enabled approach powered by Deep Learning for extracting
anomalies from traffic cameras while accurately estimating the start and end
time of the anomalous event. Our approach included creating a detection model,
followed by anomaly detection and analysis. YOLOv5 served as the foundation for
our detection model. The anomaly detection and analysis step entailed traffic
scene background estimation, road mask extraction, and adaptive thresholding.
Candidate anomalies were passed through a decision tree to detect and analyze
final anomalies. The proposed approach yielded an F1 score of 0.8571, and an S4
score of 0.5686, per the experimental validation. | [
"cs.CV"
] |
We focus on solving the univariate time series point forecasting problem
using deep learning. We propose a deep neural architecture based on backward
and forward residual links and a very deep stack of fully-connected layers. The
architecture has a number of desirable properties, being interpretable,
applicable without modification to a wide array of target domains, and fast to
train. We test the proposed architecture on several well-known datasets,
including M3, M4 and TOURISM competition datasets containing time series from
diverse domains. We demonstrate state-of-the-art performance for two
configurations of N-BEATS for all the datasets, improving forecast accuracy by
11% over a statistical benchmark and by 3% over last year's winner of the M4
competition, a domain-adjusted hand-crafted hybrid between neural network and
statistical time series models. The first configuration of our model does not
employ any time-series-specific components and its performance on heterogeneous
datasets strongly suggests that, contrarily to received wisdom, deep learning
primitives such as residual blocks are by themselves sufficient to solve a wide
range of forecasting problems. Finally, we demonstrate how the proposed
architecture can be augmented to provide outputs that are interpretable without
considerable loss in accuracy. | [
"cs.LG",
"stat.ML"
] |
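
The backward and forward residual links described above can be sketched with a generic N-BEATS-style block: each block emits a backcast that is subtracted from the running residual and a forecast that is added to the running prediction. A simplified PyTorch illustration under our own assumptions (layer sizes, horizon) follows; it omits the basis-expansion variants of the interpretable configuration.

import torch
import torch.nn as nn

class NBeatsBlock(nn.Module):
    # Generic block: a stack of fully-connected layers producing a backcast
    # (removed from the residual input) and a forecast (added to the output).
    def __init__(self, backcast_len=64, forecast_len=8, hidden=128, layers=4):
        super().__init__()
        dims = [backcast_len] + [hidden] * layers
        self.fc = nn.Sequential(*[m for i in range(layers)
                                  for m in (nn.Linear(dims[i], dims[i + 1]), nn.ReLU())])
        self.backcast = nn.Linear(hidden, backcast_len)
        self.forecast = nn.Linear(hidden, forecast_len)

    def forward(self, x):
        h = self.fc(x)
        return self.backcast(h), self.forecast(h)

blocks = nn.ModuleList([NBeatsBlock() for _ in range(3)])
x = torch.randn(16, 64)                       # batch of input windows
residual, forecast = x, 0.0
for block in blocks:                          # doubly residual stacking
    back, fore = block(residual)
    residual = residual - back                # backward residual link
    forecast = forecast + fore                # forward residual link
print(forecast.shape)                         # torch.Size([16, 8])
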
Generative models such as Variational Auto Encoders (VAEs) and Generative
Adversarial Networks (GANs) are typically trained for a fixed prior
distribution in the latent space, such as uniform or Gaussian. After a trained
model is obtained, one can sample the Generator in various forms for
exploration and understanding, such as interpolating between two samples,
sampling in the vicinity of a sample or exploring differences between a pair of
samples applied to a third sample. In this paper, we show that the latent space
operations used in the literature so far induce a distribution mismatch between
the resulting outputs and the prior distribution the model was trained on. To
address this, we propose to use distribution matching transport maps to ensure
that such latent space operations preserve the prior distribution, while
minimally modifying the original operation. Our experimental results validate
that the proposed operations give higher quality samples compared to the
original operations. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Priority dispatching rule (PDR) is widely used for solving real-world
Job-shop scheduling problem (JSSP). However, the design of effective PDRs is a
tedious task, requiring a myriad of specialized knowledge and often delivering
limited performance. In this paper, we propose to automatically learn PDRs via
an end-to-end deep reinforcement learning agent. We exploit the disjunctive
graph representation of JSSP, and propose a Graph Neural Network based scheme
to embed the states encountered during solving. The resulting policy network is
size-agnostic, effectively enabling generalization on large-scale instances.
Experiments show that the agent can learn high-quality PDRs from scratch with
elementary raw features, and demonstrates strong performance against the best
existing PDRs. The learned policies also perform well on much larger instances
that are unseen in training. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Convolutional neural networks have become state-of-the-art in a wide range of
image recognition tasks. The interpretation of their predictions, however, is
an active area of research. Whereas various interpretation methods have been
suggested for image classification, the interpretation of image segmentation
still remains largely unexplored. To that end, we propose SEG-GRAD-CAM, a
gradient-based method for interpreting semantic segmentation. Our method is an
extension of the widely-used Grad-CAM method, applied locally to produce
heatmaps showing the relevance of individual pixels for semantic segmentation. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
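
The idea of applying Grad-CAM locally to segmentation can be sketched by back-propagating the class score summed over a pixel region and weighting the hooked feature maps by channel-wise pooled gradients. The code below is a generic illustration on a toy network, not the authors' SEG-GRAD-CAM implementation; the tiny model and region mask are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    # Toy segmentation network used only to make the sketch runnable.
    def __init__(self, classes=3):
        super().__init__()
        self.features = nn.Conv2d(3, 8, 3, padding=1)
        self.classifier = nn.Conv2d(8, classes, 1)
    def forward(self, x):
        return self.classifier(F.relu(self.features(x)))

def seg_grad_cam(model, feature_layer, image, class_idx, region_mask):
    # Back-propagate the class score restricted to a pixel region, pool the
    # gradients per channel, and weight the feature maps accordingly.
    feats = {}
    handle = feature_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    logits = model(image)                                    # (1, C, H, W) per-pixel logits
    handle.remove()
    score = (logits[:, class_idx] * region_mask).sum()       # score restricted to the region
    grads = torch.autograd.grad(score, feats["a"])[0]        # d score / d feature maps
    weights = grads.mean(dim=(2, 3), keepdim=True)           # channel-wise pooled gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)

model = TinySegNet()
img = torch.rand(1, 3, 32, 32)
mask = torch.zeros(1, 32, 32)
mask[:, 8:24, 8:24] = 1.0                                    # region of interest
heatmap = seg_grad_cam(model, model.features, img, class_idx=1, region_mask=mask)
print(heatmap.shape)                                          # torch.Size([1, 1, 32, 32])
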
Video question answering has recently received a lot of attention from
multimodal video researchers. Most video question answering datasets are in
the form of multiple-choice. However, a model for the multiple-choice task
does not infer the answer; rather, it compares the answer candidates to pick
the correct one. Furthermore, this makes it difficult to extend to
other tasks. In this paper, we challenge the existing multiple-choice video
question answering by changing it to open-ended video question answering. To
tackle open-ended question answering, we use the pretrained GPT2 model. The
model is fine-tuned with video inputs and subtitles. An ablation study is
performed by changing the existing DramaQA dataset to an open-ended question
answering, and it shows that performance can be improved using video metadata. | [
"cs.CV",
"cs.AI"
] |
In this paper, we introduce a novel approach for diagnosis of Parkinson's
Disease (PD) based on deep Echo State Networks (ESNs). The identification of PD
is performed by analyzing the whole time-series collected from a tablet device
during the sketching of spiral tests, without the need for feature extraction
and data preprocessing. We evaluated the proposed approach on a public dataset
of spiral tests. The results of the experimental analysis show that DeepESNs
perform significantly better than the shallow ESN model. Overall, the proposed
approach obtains state-of-the-art results in the identification of PD on this
kind of temporal data. | [
"cs.LG"
] |
Compared with model architectures, the training process, which is also
crucial to the success of detectors, has received relatively less attention in
object detection. In this work, we carefully revisit the standard training
practice of detectors, and find that the detection performance is often limited
by imbalance during the training process, which generally exists at three
levels - sample level, feature level, and objective level. To mitigate the
adverse effects caused thereby, we propose Libra R-CNN, a simple but effective
framework towards balanced learning for object detection. It integrates three
novel components: IoU-balanced sampling, balanced feature pyramid, and balanced
L1 loss, respectively for reducing the imbalance at sample, feature, and
objective level. Benefiting from the overall balanced design, Libra R-CNN
significantly improves the detection performance. Without bells and whistles,
it achieves 2.5 points and 2.0 points higher Average Precision (AP) than FPN
Faster R-CNN and RetinaNet respectively on MSCOCO. | [
"cs.CV"
] |
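
Of the three components above, the balanced L1 loss is the easiest to illustrate in isolation: it promotes the gradient contribution of inlier regression errors relative to plain smooth L1. The sketch below follows the commonly used formulation with default alpha and gamma; the random box offsets are placeholders.

import math
import torch

def balanced_l1_loss(pred, target, alpha=0.5, gamma=1.5, beta=1.0):
    # Balanced L1 loss: increases the contribution of inliers (|error| < beta)
    # relative to plain smooth L1, while outliers keep a linear penalty.
    diff = torch.abs(pred - target)
    b = math.exp(gamma / alpha) - 1.0               # from the constraint alpha*ln(b+1) = gamma
    loss = torch.where(
        diff < beta,
        alpha / b * (b * diff + 1.0) * torch.log(b * diff / beta + 1.0) - alpha * diff,
        gamma * diff + gamma / b - alpha * beta,
    )
    return loss.mean()

pred = torch.randn(100, 4)                           # predicted box regression offsets
target = pred + 0.3 * torch.randn(100, 4)            # noisy regression targets
print(balanced_l1_loss(pred, target).item())
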
Knowing the pressure at all times in each node of a water distribution system
(WDS) facilitates safe and efficient operation. Yet, complete measurement data
cannot be collected due to the limited number of instruments in a real-life
WDS. The data-driven methodology of reconstructing all the nodal pressures by
observing only a limited number of nodes is presented in the paper. The
reconstruction method is based on K-localized spectral graph filters, wherewith
graph convolution on water networks is possible. The effect of the number of
layers, layer depth and the degree of the Chebyshev-polynomial applied in the
kernel is discussed taking into account the peculiarities of the application.
In addition, a weighting method is shown, wherewith information on friction
loss can be embedded into the spectral graph filters through the adjacency
matrix. The performance of the proposed model is presented on 3 WDSs at
different ratios of observed nodes to the total number of nodes. The weighted
connections provide no benefit over the binary connections, but the proposed
model reconstructs the nodal pressure with at most 5% relative error on average
at an observation ratio of 5% at least. The results are achieved with shallow
graph neural networks by following the considerations discussed in the paper. | [
"cs.LG"
] |
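
The K-localized spectral graph filters mentioned above apply Chebyshev polynomials of the rescaled graph Laplacian to nodal signals, so the filter's receptive field is limited to K-hop neighbourhoods. A minimal NumPy sketch on a toy ring-shaped network follows; the coefficients are fixed here, whereas in a graph neural network they would be learned per layer.

import numpy as np

def chebyshev_graph_filter(A, X, theta):
    # K-localized spectral filter: y = sum_k theta_k T_k(L_hat) X, where T_k
    # are Chebyshev polynomials of the Laplacian rescaled to [-1, 1].
    L = np.diag(A.sum(axis=1)) - A                   # combinatorial graph Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / lam_max - np.eye(A.shape[0])
    T_prev, T_cur = X, L_hat @ X                     # T_0 X and T_1 X
    out = theta[0] * T_prev + theta[1] * T_cur
    for k in range(2, len(theta)):
        T_prev, T_cur = T_cur, 2.0 * L_hat @ T_cur - T_prev   # Chebyshev recurrence
        out = out + theta[k] * T_cur
    return out

n = 6
A = np.zeros((n, n))
for i in range(n):                                   # toy looped pipe network (ring graph)
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
X = np.random.default_rng(0).normal(size=(n, 2))     # two nodal signals (e.g. pressure, demand)
print(chebyshev_graph_filter(A, X, theta=[0.5, 0.3, 0.2]).shape)   # (6, 2)
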
We address the problem of modeling and prediction of a set of temporal events
in the context of intelligent transportation systems. To leverage the
information shared by different events, we propose a multi-task learning
framework. We develop a support vector regression model for joint learning of
mutually dependent time series. It is the regularization-based multi-task
learning previously developed for the classification case and extended to time
series. We discuss the relatedness of observed time series and first deploy the
dynamic time warping distance measure to identify groups of similar series.
Then we take into account both time and scale warping and propose to align
multiple time series by inferring their common latent representation. We test
the proposed models on the problem of travel demand prediction in Nancy
(France) public transport system and analyze the benefits of multi-task
learning. | [
"cs.LG",
"cs.AI"
] |
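
The dynamic time warping distance used above to group similar demand series can be computed with the classic dynamic-programming recursion, sketched below in NumPy. The synthetic demand profiles are placeholders for the Nancy transport data.

import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a)*len(b)) dynamic time warping distance between two
    # univariate series, used to identify groups of similar series.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0.0, 2.0 * np.pi, 50)
s1 = np.sin(t)                           # demand profile on line A
s2 = np.sin(t - 0.4)                     # time-shifted profile on line B
s3 = np.cos(3.0 * t)                     # dissimilar profile
print(dtw_distance(s1, s2), dtw_distance(s1, s3))
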
In this paper, we focus on the task of one-shot sign spotting, i.e. given an
example of an isolated sign (query), we want to identify whether/where this
sign appears in a continuous, co-articulated sign language video (target). To
achieve this goal, we propose a transformer-based network, called SignLookup.
We employ 3D Convolutional Neural Networks (CNNs) to extract spatio-temporal
representations from video clips. To solve the temporal scale discrepancies
between the query and the target videos, we construct multiple queries from a
single video clip using different frame-level strides. Self-attention is
applied across these query clips to simulate a continuous scale space. We also
utilize another self-attention module on the target video to learn the
contextual information within the sequence. Finally, mutual attention is used
to match the
temporal scales to localize the query within the target sequence. Extensive
experiments demonstrate that the proposed approach can not only reliably
identify isolated signs in continuous videos, regardless of the signers'
appearance, but can also generalize to different sign languages. By taking
advantage of the attention mechanism and the adaptive features, our model
achieves state-of-the-art performance on the sign spotting task with accuracy
as high as 96% on challenging benchmark datasets and significantly
outperforming other approaches. | [
"cs.CV"
] |
Sequential assembly with geometric primitives has drawn attention in robotics
and 3D vision since it yields a practical blueprint to construct a target
shape. However, due to its combinatorial property, a greedy method falls short
of generating a sequence of volumetric primitives. To alleviate this
consequence induced by a huge number of feasible combinations, we propose a
combinatorial 3D shape generation framework. The proposed framework reflects an
important aspect of human generation processes in real life -- we often create
a 3D shape by sequentially assembling unit primitives with geometric
constraints. To find the desired combination under these evaluations, we adopt
Bayesian optimization, which can efficiently exploit and explore the feasible
regions constrained by the current primitive placements. An evaluation function
simultaneously conveys global structural guidance for the assembly process and
stability with respect to gravity and external forces. Experimental results
demonstrate that our method successfully
generates combinatorial 3D shapes and simulates more realistic generation
processes. We also introduce a new dataset for combinatorial 3D shape
generation. All the codes are available at
\url{https://github.com/POSTECH-CVLab/Combinatorial-3D-Shape-Generation}. | [
"cs.CV",
"cs.GR",
"cs.LG",
"stat.ML"
] |
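A minimal sketch of a Bayesian-optimization loop over candidate primitive
placements is given below; the Gaussian-process surrogate, UCB acquisition, and
the toy evaluation function (target closeness plus a crude stability penalty)
are stand-ins, not the framework's actual components.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def evaluate(placement, target_center=np.array([0.5, 0.5, 0.5])):
    # Stand-in evaluation: reward closeness to a target structure and
    # penalize height (a crude proxy for gravity/stability).
    return -np.linalg.norm(placement - target_center) - 0.1 * placement[2]

rng = np.random.default_rng(0)
X = rng.uniform(size=(5, 3))                    # initial primitive placements (x, y, z)
y = np.array([evaluate(p) for p in X])

for step in range(10):
    gp = GaussianProcessRegressor().fit(X, y)
    cand = rng.uniform(size=(256, 3))           # feasible candidate placements
    mu, sigma = gp.predict(cand, return_std=True)
    nxt = cand[np.argmax(mu + 1.0 * sigma)]     # UCB acquisition
    X = np.vstack([X, nxt])
    y = np.append(y, evaluate(nxt))

print("best placement:", X[np.argmax(y)])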
The detection of anomalies is an essential mining task for the security and
reliability of computer systems. Logs are a common and major data source for
anomaly detection methods in almost every computer system. They collect a range
of significant events describing the runtime system status. Recent studies have
focused predominantly on one-class deep learning methods on predefined
non-learnable numerical log representations. The main limitation is that these
models are not able to learn log representations describing the semantic
differences between normal and anomaly logs, leading to a poor generalization
of unseen logs. We propose Logsy, a classification-based method that learns log
representations so as to distinguish between normal data from the system of
interest and anomaly samples from auxiliary log datasets, easily accessible via
the internet. The idea behind such an approach to anomaly detection is that the
auxiliary dataset is sufficiently informative to enhance the representation of
the normal data, yet diverse enough to regularize against overfitting and improve
generalization. We propose an attention-based encoder model with a new
hyperspherical loss function. This enables learning compact log representations
capturing the intrinsic differences between normal and anomaly logs.
Empirically, we show an average improvement of 0.25 in the F1 score, compared
to the previous methods. To investigate the properties of Logsy, we perform
additional experiments including evaluation of the effect of the auxiliary data
size, the influence of expert knowledge, and the quality of the learned log
representations. The results show that the learned representations boost the
performance of the previous methods such as PCA with a relative improvement of
28.2%. | [
"cs.LG",
"cs.IR",
"stat.ML"
] |
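One possible reading of a hyperspherical objective over log embeddings is
sketched below: normal logs are pulled toward the origin while auxiliary
(anomalous) logs are pushed beyond a margin. The exact form of Logsy's loss may
differ; this is an assumption for illustration only.

import torch
import torch.nn.functional as F

def hyperspherical_loss(embeddings, is_anomaly, radius=1.0):
    # Normal logs: minimize distance to the center (origin).
    # Auxiliary/anomalous logs: penalize falling inside the margin (radius).
    dist = embeddings.norm(dim=1)
    normal_term = (1 - is_anomaly) * dist.pow(2)
    anomaly_term = is_anomaly * F.relu(radius - dist).pow(2)
    return (normal_term + anomaly_term).mean()

emb = torch.randn(8, 64, requires_grad=True)          # encoder outputs for 8 log lines
labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1.0])     # 0 = target system, 1 = auxiliary
loss = hyperspherical_loss(emb, labels)
loss.backward()
print(float(loss))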
One of the difficulties in 3D reconstruction of molecules from images in
single particle Cryo-Electron Microscopy (Cryo-EM), in addition to high levels
of noise and unknown image orientations, is heterogeneity in samples: in many
cases, the samples contain a mixture of molecules, or multiple conformations of
one molecule. Many algorithms for the reconstruction of molecules from images
in heterogeneous Cryo-EM experiments are based on iterative approximations of
the molecules in a non-convex optimization that is prone to reaching suboptimal
local minima. Other algorithms require an alignment in order to perform
classification, or vice versa. The recently introduced Non-Unique Games
framework provides a representation theoretic approach to studying problems of
alignment over compact groups, and offers convex relaxations for alignment
problems which are formulated as semidefinite programs (SDPs) with certificates
of global optimality under certain circumstances. In this manuscript, we
propose to extend Non-Unique Games to the problem of simultaneous alignment and
classification with the goal of simultaneously classifying Cryo-EM images and
aligning them within their respective classes. Our proposed approach can also
be extended to the case of continuous heterogeneity. | [
"cs.CV",
"math.OC"
] |
Meta-learning is a tool that allows us to build sample-efficient learning
systems. Here we show that, once meta-trained, LSTM Meta-Learners aren't just
faster learners than their sample-inefficient deep learning (DL) and
reinforcement learning (RL) brethren, but that they actually pursue
fundamentally different learning trajectories. We study their learning dynamics
on three sets of structured tasks for which the corresponding learning dynamics
of DL and RL systems have been previously described: linear regression (Saxe et
al., 2013), nonlinear regression (Rahaman et al., 2018; Xu et al., 2018), and
contextual bandits (Schaul et al., 2019). In each case, while
sample-inefficient DL and RL Learners uncover the task structure in a staggered
manner, meta-trained LSTM Meta-Learners uncover almost all task structure
concurrently, congruent with the patterns expected from Bayes-optimal inference
algorithms. This has implications for research areas wherever the learning
behaviour itself is of interest, such as safety, curriculum design, and
human-in-the-loop machine learning. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
In this paper, we present ViP-DeepLab, a unified model attempting to tackle
the long-standing and challenging inverse projection problem in vision, which
we model as restoring the point clouds from perspective image sequences while
providing each point with instance-level semantic interpretations. Solving this
problem requires the vision models to predict the spatial location, semantic
class, and temporally consistent instance label for each 3D point. ViP-DeepLab
approaches it by jointly performing monocular depth estimation and video
panoptic segmentation. We name this joint task Depth-aware Video Panoptic
Segmentation, and propose a new evaluation metric along with two derived
datasets for it, which will be made available to the public. On the individual
sub-tasks, ViP-DeepLab also achieves state-of-the-art results, outperforming
previous methods by 5.1% VPQ on Cityscapes-VPS, ranking 1st on the KITTI
monocular depth estimation benchmark, and 1st on KITTI MOTS pedestrian. The
datasets and the evaluation codes are made publicly available. | [
"cs.CV"
] |
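As a small illustration of the inverse-projection view, the snippet below
back-projects a predicted depth map to a 3D point cloud and attaches a panoptic
label to every point; the intrinsics and maps are toy values, and this is not
ViP-DeepLab's code.

import numpy as np

def depth_to_labeled_points(depth, panoptic, fx, fy, cx, cy):
    # Back-project per-pixel depth to 3D camera coordinates and attach the
    # panoptic (semantic + instance) label of each pixel to its 3D point.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points, panoptic.reshape(-1)

depth = np.full((4, 6), 2.0)                      # toy predicted depth (metres)
panoptic = np.arange(24).reshape(4, 6)            # toy per-pixel instance ids
pts, labels = depth_to_labeled_points(depth, panoptic, fx=500, fy=500, cx=3, cy=2)
print(pts.shape, labels.shape)                    # (24, 3) (24,)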
In recent years, numerous bottom-up attention models have been proposed, based
on different assumptions. However, the produced saliency maps may differ from
each other even for the same input image. We also observe that the human
fixation map varies greatly over time. When people freely view an image, they
tend to allocate attention at salient regions of large scale at first, and then
search more and more detailed regions. In this paper, we argue that, for a
given input image, visual attention cannot be described by a single saliency
map, and that this mechanism should be modeled as a dynamic process. Under the
frequency-domain paradigm, we propose a global inhibition model to mimic this
process by suppressing the {\it non-saliency} in the input image; we also show
that the dynamic process is influenced by one parameter in the frequency
domain. Experiments illustrate that the proposed model is capable of predicting
human dynamic fixation distribution. | [
"cs.CV",
"q-bio.NC"
] |
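A hypothetical single-parameter frequency-domain suppression is sketched below:
shrinking the amplitude spectrum inhibits globally repeated (non-salient)
structure, and varying the parameter moves the map from coarse toward more
detailed saliency. The exact inhibition model in the paper may differ; this is
an assumption for illustration.

import numpy as np

def spectral_suppression_saliency(image, alpha=1.0):
    # Suppress the amplitude spectrum by an exponent; the phase carries the
    # residual (salient) layout. alpha controls the degree of inhibition.
    F = np.fft.fft2(image.astype(float))
    amplitude, phase = np.abs(F), np.angle(F)
    suppressed = (amplitude ** (1.0 - alpha)) * np.exp(1j * phase)
    saliency = np.abs(np.fft.ifft2(suppressed)) ** 2
    return saliency / saliency.max()

img = np.random.rand(64, 64)
coarse = spectral_suppression_saliency(img, alpha=0.5)   # earlier, large-scale fixations
fine = spectral_suppression_saliency(img, alpha=1.0)     # later, more detailed fixations
print(coarse.shape, fine.shape)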
With the widespread adoption of machine learning in the real world, the
impact of the discriminatory bias has attracted attention. In recent years,
various methods to mitigate the bias have been proposed. However, most of them
have not considered intersectional bias, which brings unfair situations where
people belonging to specific subgroups of a protected group are treated worse
when multiple sensitive attributes are taken into consideration. To mitigate
this bias, in this paper, we propose a method called One-vs.-One Mitigation by
applying a process of comparison between each pair of subgroups related to
sensitive attributes to the fairness-aware machine learning for binary
classification. We compare our method and the conventional fairness-aware
binary classification methods in comprehensive settings using three approaches
(pre-processing, in-processing, and post-processing), six metrics (the ratio
and difference of demographic parity, equalized odds, and equal opportunity),
and two real-world datasets (Adult and COMPAS). As a result, our method
mitigates the intersectional bias much better than conventional methods in all
the settings. With this result, we open up the potential of fairness-aware
binary classification for solving more realistic problems occurring when there
are multiple sensitive attributes. | [
"cs.LG",
"cs.AI",
"cs.CY",
"I.6.5; I.2.6"
] |
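The sketch below computes pairwise demographic-parity gaps between
intersectional subgroups, the kind of one-vs.-one comparison the method builds
on; the group names and predictions are toy values, and the actual mitigation
procedure is not reproduced here.

import itertools
import numpy as np
import pandas as pd

def one_vs_one_parity_gaps(y_pred, sensitive):
    # For every pair of intersectional subgroups (e.g. gender x race), compare
    # positive-prediction rates; the worst pairwise gap exposes intersectional
    # bias that a single-attribute demographic-parity check can miss.
    df = pd.DataFrame({"pred": y_pred, "group": sensitive})
    rates = df.groupby("group")["pred"].mean()
    gaps = {}
    for g1, g2 in itertools.combinations(rates.index, 2):
        gaps[(g1, g2)] = abs(rates[g1] - rates[g2])
    return gaps

preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = ["f_black", "f_black", "f_white", "m_black",
          "m_black", "m_white", "m_white", "f_white"]
print(one_vs_one_parity_gaps(preds, groups))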
Style transfer has recently received a lot of attention, since it allows to
study fundamental challenges in image understanding and synthesis. Recent work
has significantly improved the representation of color and texture and
computational speed and image resolution. The explicit transformation of image
content has, however, been mostly neglected: while artistic style affects
formal characteristics of an image, such as color, shape or texture, it also
deforms, adds or removes content details. This paper explicitly focuses on a
content- and style-aware stylization of a content image. Therefore, we introduce
a content transformation module between the encoder and decoder. Moreover, we
utilize similar content appearing in photographs and style samples to learn how
style alters content details and we generalize this to other class details.
Additionally, this work presents a novel normalization layer critical for high
resolution image synthesis. The robustness and speed of our model enables a
video stylization in real-time and high definition. We perform extensive
qualitative and quantitative evaluations to demonstrate the validity of our
approach. | [
"cs.CV"
] |
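A minimal sketch of a content-transformation module inserted between encoder
and decoder is shown below; the residual convolutional block and the use of
instance normalization are assumptions for illustration, not the paper's
specific layers or its novel normalization.

import torch
import torch.nn as nn

class ContentTransform(nn.Module):
    # Hypothetical content-transformation block placed between a style-transfer
    # encoder and decoder: a small residual bottleneck that is free to deform,
    # add or remove content details instead of only recoloring them.
    def __init__(self, channels=256):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels, affine=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, features):
        return features + self.block(features)     # residual: keep global layout

encoder_features = torch.randn(1, 256, 64, 64)     # output of a pretrained encoder
transformed = ContentTransform()(encoder_features) # fed to the decoder afterwards
print(transformed.shape)                           # torch.Size([1, 256, 64, 64])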
We present a method for depth estimation with monocular images, which can
predict high-quality depth on diverse scenes up to an affine transformation,
thus preserving accurate shapes of a scene. Previous methods that predict
metric depth often work well only for a specific scene. In contrast, learning
relative depth (information of being closer or further) can enjoy better
generalization, with the price of failing to recover the accurate geometric
shape of the scene. In this work, we propose a dataset and methods to tackle
this dilemma, aiming to predict accurate depth up to an affine transformation
with good generalization to diverse scenes. First we construct a large-scale
and diverse dataset, termed Diverse Scene Depth dataset (DiverseDepth), which
has a broad range of scenes and foreground contents. Compared with previous
learning objectives, i.e., learning metric depth or relative depth, we propose
to learn the affine-invariant depth using our diverse dataset to ensure both
generalization and high-quality geometric shapes of scenes. Furthermore, in
order to train the model on the complex dataset effectively, we propose a
multi-curriculum learning method. Experiments show that our method outperforms
previous methods on 8 datasets by a large margin with the zero-shot test
setting, demonstrating the excellent generalization capacity of the learned
model to diverse scenes. The reconstructed point clouds with the predicted
depth show that our method can recover high-quality 3D shapes. Code and dataset
are available at: https://tinyurl.com/DiverseDepth | [
"cs.CV"
] |
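To make "depth up to an affine transformation" concrete, the sketch below fits
the best scale and shift from prediction to ground truth and measures the
residual; this is a generic scale-and-shift-invariant error, assumed here only
for illustration and not the paper's exact training objective.

import numpy as np

def affine_invariant_depth_error(pred, gt):
    # Fit the scale/shift that best maps the prediction onto the ground truth
    # (least squares), then measure the residual: the metric is insensitive to
    # the unknown affine transformation but still penalizes wrong shapes.
    p, g = pred.ravel(), gt.ravel()
    A = np.stack([p, np.ones_like(p)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, g, rcond=None)
    return np.abs(scale * p + shift - g).mean()

rng = np.random.default_rng(0)
gt = np.linspace(1.0, 5.0, 100)
good_shape = 2.0 * gt + 3.0                           # same geometry up to an affine map
bad_shape = rng.permutation(gt)                       # wrong geometry
print(affine_invariant_depth_error(good_shape, gt))   # ~0
print(affine_invariant_depth_error(bad_shape, gt))    # clearly larger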
Exponential growth in Electronic Healthcare Records (EHR) has resulted in new
opportunities and urgent needs for discovery of meaningful data-driven
representations and patterns of diseases in Computational Phenotyping research.
Deep Learning models have shown superior performance for robust prediction in
computational phenotyping tasks, but suffer from the issue of model
interpretability which is crucial for clinicians involved in decision-making.
In this paper, we introduce a novel knowledge-distillation approach called
Interpretable Mimic Learning, to learn interpretable phenotype features for
making robust prediction while mimicking the performance of deep learning
models. Our framework uses Gradient Boosting Trees to learn interpretable
features from deep learning models such as Stacked Denoising Autoencoder and
Long Short-Term Memory. Exhaustive experiments on a real-world clinical
time-series dataset show that our method obtains similar or better performance
than the deep learning models, and it provides interpretable phenotypes for
clinical decision making. | [
"stat.ML",
"cs.LG"
] |
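A toy version of the mimic-learning pipeline is sketched below: a neural
network (here a plain MLP standing in for the paper's Stacked Denoising
Autoencoder or LSTM) is trained first, and a Gradient Boosting model is then
fit to its soft predictions, yielding interpretable feature importances. Data
and model sizes are illustrative only.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                      # e.g. clinical time-series features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(float)

# Step 1: train the deep model on the original task.
deep = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
soft_targets = deep.predict(X)                      # knowledge to distill

# Step 2: fit an interpretable mimic model on the deep model's soft outputs.
mimic = GradientBoostingRegressor(random_state=0).fit(X, soft_targets)
print("feature importances:", np.round(mimic.feature_importances_, 2))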