text | label |
---|---|
Computer vision algorithms, e.g. for face recognition, favour groups of
individuals that are better represented in the training data. This happens
because classifiers must generalize: it is simpler to fit the majority groups,
and that fit matters more to the overall error. We propose to create a balanced
training dataset, consisting of the original dataset plus new data points in
which group memberships are intervened upon, so that minorities become
majorities and vice versa. We show that current generative
adversarial networks are a powerful tool for learning these data points, called
contrastive examples. We experiment with the equalized odds bias measure on
tabular data as well as image data (CelebA and Diversity in Faces datasets).
Contrastive examples allow us to expose correlations between group membership
and other seemingly neutral features. Whenever a causal graph is available, we
can put those contrastive examples in the perspective of counterfactuals. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Deep convolutional neural networks have achieved great successes over recent
years, particularly in the domain of computer vision. They are fast,
convenient, and -- thanks to mature frameworks -- relatively easy to implement
and deploy. However, their reasoning is hidden inside a black box, in spite of
a number of proposed approaches that try to provide human-understandable
explanations for the predictions of neural networks. It is still a matter of
debate which of these explainers are best suited for which situations, and how
to quantitatively evaluate and compare them. In this contribution, we focus on
the capabilities of explainers for convolutional deep neural networks in an
extreme situation: a setting in which humans and networks fundamentally
disagree. Deep neural networks are susceptible to adversarial attacks that
deliberately modify input samples to mislead a neural network's classification,
without affecting how a human observer interprets the input. Our goal with this
contribution is to evaluate explainers by investigating whether they can
identify adversarially attacked regions of an image. In particular, we
quantitatively and qualitatively investigate the capability of three popular
explainers of classifications -- classic salience, guided backpropagation, and
LIME -- with respect to their ability to identify regions of attack as the
explanatory regions for the (incorrect) prediction in representative examples
from image classification. We find that LIME outperforms the other explainers. | [
"cs.LG",
"stat.ML"
] |
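As a concrete reading of the evaluation described above, the sketch below uses the `lime` package's image explainer and measures the overlap between LIME's explanatory superpixels and the attacked region. The classifier, image, and attacked patch are toy stand-ins; a real attack mask would come from the adversarial perturbation itself.

```python
# Sketch: does LIME's explanatory region overlap the adversarially attacked region?
import numpy as np
from lime import lime_image

rng = np.random.default_rng(0)
attacked_image = rng.random((64, 64, 3))
attack_mask = np.zeros((64, 64), dtype=bool)
attack_mask[20:40, 20:40] = True                 # hypothetical attacked patch

def predict_fn(images):
    # Stand-in for a real classifier: must return (N, num_classes) probabilities.
    brightness = images[..., 0].mean(axis=(1, 2))
    return np.column_stack([brightness, 1.0 - brightness])

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(attacked_image, predict_fn,
                                         top_labels=1, num_samples=500)
label = explanation.top_labels[0]
_, lime_mask = explanation.get_image_and_mask(label, positive_only=True,
                                              num_features=5, hide_rest=False)

# Intersection-over-union between LIME's explanation and the attack region.
inter = np.logical_and(lime_mask.astype(bool), attack_mask).sum()
union = np.logical_or(lime_mask.astype(bool), attack_mask).sum()
print("IoU(explanation, attack):", inter / max(union, 1))
```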
We propose a novel and unsupervised representation learning model, i.e.,
Robust Block-Diagonal Adaptive Locality-constrained Latent Representation
(rBDLR). rBDLR is able to recover multi-subspace structures and extract the
adaptive locality-preserving salient features jointly. Leveraging on the
Frobenius-norm based latent low-rank representation model, rBDLR jointly learns
the coding coefficients and salient features, and improves the results by
enhancing the robustness to outliers and errors in given data, preserving local
information of salient features adaptively and ensuring the block-diagonal
structures of the coefficients. To improve robustness, we perform latent
representation learning and adaptive weighting in a recovered clean data space.
To force the coefficients to be block-diagonal, we perform auto-weighting by
minimizing the reconstruction error based on salient features, constrained
using a block-diagonal regularizer. This ensures that a strict block-diagonal
weight matrix can be obtained and salient features will possess the adaptive
locality preserving ability. By minimizing the difference between the
coefficient and weight matrices, we can obtain a block-diagonal coefficient
matrix while also propagating and exchanging useful information between
salient features and coefficients. Extensive results demonstrate the
superiority of rBDLR over other state-of-the-art methods. | [
"cs.CV"
] |
Interpretable surrogates of black-box predictors trained on high-dimensional
tabular datasets can struggle to generate comprehensible explanations in the
presence of correlated variables. We propose a model-agnostic interpretable
surrogate that provides global and local explanations of black-box classifiers
to address this issue. We introduce the idea of concepts as intuitive groupings
of variables that are either defined by a domain expert or automatically
discovered using correlation coefficients. Concepts are embedded in a surrogate
decision tree to enhance its comprehensibility. First experiments on FRED-MD, a
macroeconomic database with 134 variables, show improvement in
human-interpretability while accuracy and fidelity of the surrogate model are
preserved. | [
"stat.ML",
"cs.LG"
] |
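One plausible instantiation of this pipeline (the clustering method, threshold, and concept aggregation below are our choices, not necessarily the paper's) groups correlated variables into concepts, averages each group into a single concept feature, and fits a decision-tree surrogate against the black-box predictions, reporting fidelity:

```python
# Sketch: correlation-based concepts feeding a decision-tree surrogate.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
base = rng.normal(size=(500, 10))
X = np.column_stack([base, base[:, :5] + 0.1 * rng.normal(size=(500, 5))])
y = (X[:, 0] + X[:, 5] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)  # stand-in black box

def build_concepts(X, threshold=0.7):
    """Cluster columns whose absolute correlation exceeds `threshold`."""
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
    condensed = np.clip(dist[np.triu_indices_from(dist, k=1)], 0, None)
    groups = fcluster(linkage(condensed, method="average"),
                      t=1.0 - threshold, criterion="distance")
    return [np.where(groups == g)[0] for g in np.unique(groups)]

concepts = build_concepts(X)
Z = np.column_stack([X[:, idx].mean(axis=1) for idx in concepts])  # one feature per concept
surrogate = DecisionTreeClassifier(max_depth=4).fit(Z, black_box.predict(X))
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(Z))
print(f"{len(concepts)} concepts, surrogate fidelity = {fidelity:.3f}")
```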
When tuning the architecture and hyperparameters of large machine learning
models for on-device deployment, it is desirable to understand the optimal
trade-offs between on-device latency and model accuracy. In this work, we
leverage recent methodological advances in Bayesian optimization over
high-dimensional search spaces and multi-objective Bayesian optimization to
efficiently explore these trade-offs for a production-scale on-device natural
language understanding model at Facebook. | [
"cs.LG"
] |
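The trade-off exploration ultimately reasons about a Pareto frontier over (latency, accuracy). The helper below, a generic sketch unrelated to any internal tooling, extracts the non-dominated configurations from a set of evaluated candidates:

```python
# Sketch: extract Pareto-optimal (latency, accuracy) configurations.
# A point dominates another if it is no slower and no less accurate,
# and strictly better in at least one of the two objectives.
import numpy as np

def pareto_front(latency, accuracy):
    pts = np.column_stack([latency, accuracy])
    keep = np.ones(len(pts), dtype=bool)
    for i, (l, a) in enumerate(pts):
        dominated = (pts[:, 0] <= l) & (pts[:, 1] >= a)
        dominated &= (pts[:, 0] < l) | (pts[:, 1] > a)   # strictness
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

lat = np.array([12.0, 15.0, 9.0, 20.0, 9.5])
acc = np.array([0.81, 0.84, 0.78, 0.85, 0.77])
print("Pareto-optimal configs:", pareto_front(lat, acc))   # drops the dominated point
```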
Abnormal behavior detection in surveillance video is a pivotal part of the
intelligent city. Most existing methods only consider how to detect anomalies,
paying little attention to explaining why they occur. We investigate an
orthogonal perspective based on the reasons for these abnormal behaviors. To this
end, we propose a multivariate fusion method that analyzes each target through
three branches: object, action and motion. The object branch focuses on the
appearance information, the motion branch focuses on the distribution of the
motion features, and the action branch focuses on the action category of the
target. The information that these branches focus on is different, and they can
complement each other and jointly detect abnormal behavior. The final abnormal
score can then be obtained by combining the abnormal scores of the three
branches. | [
"cs.CV"
] |
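The fusion step admits a very small sketch; the per-branch scorers and the equal weighting below are placeholders, since the abstract does not specify how the three scores are combined:

```python
# Sketch: late fusion of per-branch anomaly scores for one target.
def fused_anomaly_score(target, scorers, weights=None):
    """scorers: dict of branch name -> function returning a score in [0, 1]."""
    weights = weights or {name: 1.0 / len(scorers) for name in scorers}
    return sum(weights[name] * fn(target) for name, fn in scorers.items())

scorers = {                       # toy stand-ins for the three branches
    "object": lambda t: t["appearance_score"],
    "action": lambda t: t["action_score"],
    "motion": lambda t: t["motion_score"],
}
target = {"appearance_score": 0.2, "action_score": 0.9, "motion_score": 0.7}
score = fused_anomaly_score(target, scorers)
print(score, "-> abnormal" if score > 0.5 else "-> normal")
```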
Eddy current testing (ECT) is an effective technique in the evaluation of the
depth of metal surface defects. However, in practice, the evaluation primarily
relies on the experience of an operator and is often carried out by manual
inspection. In this paper, we address the challenges of automatic depth
evaluation of metal surface defects by virtue of state-of-the-art deep
learning (DL) techniques. The main contributions are three-fold. Firstly, a
highly-integrated portable ECT device is developed, which takes advantage of an
advanced field programmable gate array (Zynq-7020 system on chip) and provides
fast data acquisition and in-phase/quadrature demodulation. Secondly, a
dataset, termed MDDECT, is constructed using the ECT device by human
operators and made openly available. It contains 48,000 scans from 18 defects
of different depths and lift-offs. Thirdly, the depth evaluation problem is
formulated as a time series classification problem, and various
state-of-the-art 1-d residual convolutional neural networks are trained and
evaluated on the MDDECT dataset. A 38-layer 1-d ResNeXt achieves an accuracy of
93.58% in discriminating the surface defects in a stainless steel sheet. The
depths of the defects vary from 0.3 mm to 2.0 mm at a resolution of 0.1 mm. In
addition, results show that the trained ResNeXt1D-38 model is immune to
lift-off signals. | [
"cs.LG",
"eess.IV",
"eess.SP"
] |
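To ground the architecture choice, here is a minimal 1-d residual block in PyTorch of the kind a ResNet/ResNeXt-style time series classifier stacks; the channel sizes are illustrative, and a true ResNeXt variant would add grouped convolutions:

```python
# Sketch: a basic 1-d residual block for time series classification.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # A ResNeXt-style block would pass groups=cardinality to the convolutions.
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):               # x: (batch, channels, time)
        return self.act(x + self.body(x))

x = torch.randn(8, 32, 1024)            # e.g. 8 ECT scans, 32 channels, 1024 samples
print(ResBlock1d(32)(x).shape)           # torch.Size([8, 32, 1024])
```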
We consider the problem of predicting edges in a graph from node attributes
in an e-commerce setting. Specifically, given nodes labelled with search query
text, we want to predict links to related queries that share products.
Experiments with a range of deep neural architectures show that simple
feedforward networks with an attention mechanism perform best for learning
embeddings. The simplicity of these models allows us to explain the performance
of attention.
We propose an analytically tractable model of query generation, AttEST, that
views both products and the query text as vectors embedded in a latent space.
We prove (and empirically validate) that the point-wise mutual information
(PMI) matrix of the AttEST query text embeddings displays a low-rank behavior
analogous to that observed in word embeddings. This low-rank property allows us
to derive a loss function that maximizes the mutual information between related
queries which is used to train an attention network to learn query embeddings.
This AttEST network beats traditional memory-based LSTM architectures by over
20% on F1 score. We justify this outperformance by showing that the weights
from the attention mechanism correlate strongly with the weights of the best
linear unbiased estimator (BLUE) for the product vectors, and conclude that
attention plays an important role in variance reduction. | [
"cs.LG",
"stat.ML"
] |
We consider an agent interacting with an environment in a single stream of
actions, observations, and rewards, with no reset. This process is not assumed
to be a Markov Decision Process (MDP). Rather, the agent has several
representations (mapping histories of past interactions to a discrete state
space) of the environment with unknown dynamics, only some of which result in
an MDP. The goal is to minimize the average regret criterion against an agent
who knows an MDP representation giving the highest optimal reward, and acts
optimally in it. Recent regret bounds for this setting are of order
$O(T^{2/3})$ with an additive term constant yet exponential in some
characteristics of the optimal MDP. We propose an algorithm whose regret after
$T$ time steps is $O(\sqrt{T})$, with all constants reasonably small. This is
optimal in $T$ since $O(\sqrt{T})$ is the optimal regret in the setting of
learning in a (single discrete) MDP. | [
"cs.LG"
] |
The future landscape of modern farming and plant breeding is rapidly changing
due to the complex needs of our society. The explosion of collectable data has
started a revolution in agriculture to the point where innovation must occur.
To a commercial organization, the accurate and efficient collection of
information is necessary to ensure that optimal decisions are made at key
points of the breeding cycle. However, due to the sheer size of a breeding
program and current resource limitations, the ability to collect precise data
on individual plants is not possible. In particular, efficient phenotyping of
crops to record their color, shape, chemical properties, disease susceptibility,
etc. is severely limited due to labor requirements and, oftentimes, expert
domain knowledge. In this paper, we propose a deep learning based approach,
named DeepStand, for image-based corn stand counting at early phenological
stages. The proposed method adopts a truncated VGG-16 network as a backbone
feature extractor and merges multiple feature maps with different scales to
make the network robust against scale variation. Our extensive computational
experiments suggest that our proposed method can successfully count corn stands
and outperform other state-of-the-art methods. Our goal is for this work to be
used by the larger agricultural community to enable high-throughput phenotyping
without extensive time and labor requirements. | [
"cs.CV",
"cs.LG",
"q-bio.QM"
] |
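A hedged sketch of the described backbone: truncate torchvision's VGG-16 feature extractor and merge feature maps from several depths by upsampling and concatenation. The cut points and the merge strategy below are our guesses; the paper's exact configuration may differ.

```python
# Sketch: truncated VGG-16 backbone with multi-scale feature fusion.
import torch
import torch.nn.functional as F
from torch import nn
from torchvision.models import vgg16

class MultiScaleVGG(nn.Module):
    def __init__(self):
        super().__init__()
        feats = vgg16(weights=None).features   # load pretrained weights in practice
        # Illustrative cut points: end of conv3, conv4, and conv5 stages.
        self.stage1 = feats[:16]
        self.stage2 = feats[16:23]
        self.stage3 = feats[23:30]

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        size = f1.shape[-2:]
        # Upsample the coarser maps to the finest resolution and concatenate.
        up = lambda f: F.interpolate(f, size=size, mode="bilinear",
                                     align_corners=False)
        return torch.cat([f1, up(f2), up(f3)], dim=1)

x = torch.randn(1, 3, 256, 256)
print(MultiScaleVGG()(x).shape)   # fused multi-scale feature map, 1280 channels
```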
Recent advances in 3D perception have shown impressive progress in
understanding the geometric structures of 3D shapes and even scenes. Inspired by
these advances in geometric understanding, we aim to imbue image-based
perception with representations learned under geometric constraints. We
introduce an approach to learn view-invariant, geometry-aware representations
for network pre-training, based on multi-view RGB-D data, that can then be
effectively transferred to downstream 2D tasks. We propose to employ
contrastive learning under both multi-view image constraints and
image-geometry constraints to encode 3D priors into learned 2D representations.
This results not only in improvement over 2D-only representation learning on
the image-based tasks of semantic segmentation, instance segmentation, and
object detection on real-world indoor datasets, but moreover, provides
significant improvement in the low data regime. We show a significant
improvement of 6.0% on semantic segmentation on full data as well as 11.9% on
20% data against baselines on ScanNet. | [
"cs.CV"
] |
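As a reference for the contrastive objective mentioned here, the following is a standard InfoNCE loss over paired embeddings (e.g., two views of the same point); it is a generic sketch, not the paper's exact multi-view/image-geometry formulation:

```python
# Sketch: InfoNCE contrastive loss for paired multi-view embeddings.
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.07):
    """z_a, z_b: (N, D) embeddings; row i of z_a matches row i of z_b."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)       # diagonal entries are positives

loss = info_nce(torch.randn(128, 256), torch.randn(128, 256))
print(float(loss))
```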
Progress in generative modelling, especially generative adversarial networks,
has made it possible to efficiently synthesize and alter media at scale.
Malicious individuals now rely on these machine-generated media, or deepfakes,
to manipulate social discourse. In order to ensure media authenticity, existing
research is focused on deepfake detection. Yet, the adversarial nature of
frameworks used for generative modeling suggests that progress towards
detecting deepfakes will enable more realistic deepfake generation. Therefore,
it comes as no surprise that developers of generative models are under the
scrutiny of stakeholders dealing with misinformation campaigns. At the same
time, generative models have a lot of positive applications. As such, there is
a clear need to develop tools that ensure the transparent use of generative
modeling, while minimizing the harm caused by malicious applications.
Our technique optimizes over the source of entropy of each generative model
to probabilistically attribute a deepfake to one of the models. We evaluate our
method on the seminal example of face synthesis, demonstrating that our
approach achieves 97.62% attribution accuracy, and is less sensitive to
perturbations and adversarial examples. We discuss the ethical implications of
our work, identify where our technique can be used, and highlight that a more
meaningful legislative framework is required for a more transparent and ethical
use of generative modeling. Finally, we argue that model developers should be
capable of claiming plausible deniability and propose a second framework to do
so -- this allows a model developer to produce evidence that they did not
produce media that they are being accused of having produced. | [
"cs.LG",
"cs.CR",
"cs.CV",
"cs.CY"
] |
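One way to read "optimizing over the source of entropy" is latent inversion: for each candidate generator, search for a latent code that reproduces the image, then attribute the deepfake to the generator with the lowest reconstruction error. The sketch below takes that reading; the optimizer settings and toy generators are ours, not the paper's.

```python
# Sketch: attribute an image to one of several candidate generators by
# optimizing each generator's latent input to reconstruct it (latent inversion).
import torch

def inversion_error(generator, image, latent_dim, steps=200, lr=0.05):
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((generator(z) - image) ** 2)
        loss.backward()
        opt.step()
    return loss.item()

def attribute(image, generators, latent_dim):
    errors = [inversion_error(g, image, latent_dim) for g in generators]
    return min(range(len(errors)), key=errors.__getitem__), errors

# Toy demo: two linear "generators"; the image truly comes from generator 1.
torch.manual_seed(0)
gens = [torch.nn.Linear(16, 64) for _ in range(2)]
image = gens[1](torch.randn(1, 16)).detach()
print(attribute(image, gens, latent_dim=16))   # expected: index 1
```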
In this paper, we propose a two-steps approach to partition instances of the
Conjunctive Normal Form (CNF) Syntactic Formula Isomorphism problem (CSFI) into
groups of different complexity. First, we build a model, based on the
Transformer architecture, that attempts to solve instances of the CSFI problem.
Then, we leverage the errors of this model and train a second Transformer-based
model to partition the problem instances into groups of different complexity,
thus detecting the ones that can be solved without using too expensive
resources. We evaluate the proposed approach on a pseudo-randomly generated
dataset and obtain promising results. Finally, we discuss the possibility of
extending this approach to other problems based on the same type of textual
representation. | [
"cs.LG"
] |
Since the proposal of big data analysis and Graphic Processing Unit (GPU),
the deep learning technology has received a great deal of attention and has
been widely applied in the field of image processing. In this paper, we aim to
comprehensively review and summarize the deep learning technologies for
image denoising proposed in recent years. Moreover, we systematically analyze
the conventional machine learning methods for image denoising. Finally, we
point out some research directions for the deep learning technologies in image
denoising. | [
"cs.CV"
] |
Recently deep neural networks (DNNs) have achieved tremendous success for
object detection in overhead (e.g., satellite) imagery. One ongoing challenge
however is the acquisition of training data, due to high costs of obtaining
satellite imagery and annotating objects in it. In this work we present a
simple approach - termed Synthetic object IMPLantation (SIMPL) - to easily and
rapidly generate large quantities of synthetic overhead training data for
custom target objects. We demonstrate the effectiveness of using SIMPL
synthetic imagery for training DNNs in zero-shot scenarios where no real
imagery is available; and few-shot learning scenarios, where limited real-world
imagery is available. We also conduct experiments to study the sensitivity of
SIMPL's effectiveness to some key design parameters, providing users with
insights when designing synthetic imagery for custom objects. We release a
software implementation of our SIMPL approach so that others can build upon it,
or use it for their own custom problems. | [
"cs.CV"
] |
Can performance on the task of action quality assessment (AQA) be improved by
exploiting a description of the action and its quality? Current AQA and skills
assessment approaches propose to learn features that serve only one task -
estimating the final score. In this paper, we propose to learn spatio-temporal
features that explain three related tasks - fine-grained action recognition,
commentary generation, and estimating the AQA score. A new multitask-AQA
dataset, the largest to date, comprising 1412 diving samples, was collected
to evaluate our approach (https://github.com/ParitoshParmar/MTL-AQA). We show
that our MTL approach outperforms the STL approach using two different kinds of
architectures: C3D-AVG and MSCADC. The C3D-AVG-MTL approach achieves the new
state-of-the-art performance with a rank correlation of 90.44%. Detailed
experiments were performed to show that MTL offers better generalization than
STL, and representations from action recognition models are not sufficient for
the AQA task and instead should be learned. | [
"cs.CV"
] |
We investigate whether post-hoc model explanations are effective for
diagnosing model errors--model debugging. In response to the challenge of
explaining a model's prediction, a vast array of explanation methods have been
proposed. Despite increasing use, it is unclear if they are effective. To
start, we categorize \textit{bugs}, based on their source, into:~\textit{data,
model, and test-time} contamination bugs. For several explanation methods, we
assess their ability to: detect spurious correlation artifacts (data
contamination), diagnose mislabeled training examples (data contamination),
differentiate between a (partially) re-initialized model and a trained one
(model contamination), and detect out-of-distribution inputs (test-time
contamination). We find that the methods tested are able to diagnose a spurious
background bug, but not conclusively identify mislabeled training examples. In
addition, a class of methods that modify the back-propagation algorithm is
invariant to the higher-layer parameters of a deep network and hence ineffective
for diagnosing model contamination. We complement our analysis with a human
subject study, and find that subjects fail to identify defective models using
attributions, but instead rely, primarily, on model predictions. Taken
together, our results provide guidance for practitioners and researchers
turning to explanations as tools for model debugging. | [
"cs.CV",
"cs.LG"
] |
Singular Value Decomposition (SVD) serves as an exploratory tool for identifying
the dominant features in the form of the top rank-r singular factors
corresponding to the largest singular values. For Big Data applications it is
well known that SVD is restrictive due to main memory requirements. However, a
number of applications such as community detection, clustering, or bottleneck
identification in large scale graph data-sets rely upon identifying the lowest
singular values and the corresponding singular vectors. For example, the lowest
singular values of a graph Laplacian reveal the number of isolated clusters
(zero singular values) or bottlenecks (lowest non-zero singular values) for
undirected, acyclic graphs. A naive approach here would be to perform a full
SVD; however, this quickly becomes infeasible for practical big data
applications due to the enormous memory requirements. Furthermore, for such
applications only a few lowest singular factors are desired making a full
decomposition computationally exorbitant. In this work, we trivially extend the
previously proposed Range-Net to \textbf{Tail-Net} for a memory and compute
efficient extraction of lowest singular factors of a given big dataset and a
specified rank-r. We present a number of numerical experiments on both
synthetic and practical data-sets for verification and bench-marking using
conventional SVD as the baseline. | [
"cs.LG",
"cs.AI"
] |
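For comparison with the baselines mentioned here, the conventional sparse-solver route to the lowest singular factors is SciPy's `svds` with `which='SM'`; this is a usage sketch of the baseline, not of Tail-Net itself, and its cost on truly large matrices is exactly the pain point the abstract targets.

```python
# Sketch: baseline extraction of the lowest rank-r singular factors with SciPy.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

A = sparse_random(800, 600, density=0.01, random_state=0)
# which='SM' requests the smallest singular values; this can be slow and
# memory-hungry in the big-data regime discussed above.
u, s, vt = svds(A, k=5, which='SM')
print(np.sort(s))   # the five smallest singular values
```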
Geometry-aware modules are widely applied in recent deep learning
architectures for scene representation and rendering. However, these modules
require intrinsic camera information that might not be obtained accurately. In
this paper, we propose a Spatial Transformation Routing (STR) mechanism to
model the spatial properties without applying any geometric prior. The STR
mechanism treats the spatial transformation as the message passing process, and
the relation between the view poses and the routing weights is modeled by an
end-to-end trainable neural network. Besides, an Occupancy Concept Mapping
(OCM) framework is proposed to provide explainable rationales for scene-fusion
processes. We conducted experiments on several datasets and show that the
proposed STR mechanism improves the performance of the Generative Query Network
(GQN). The visualization results reveal that the routing process can pass the
observed information from one location of some view to the associated location
in the other view, which demonstrates the advantage of the proposed model in
terms of spatial cognition. | [
"cs.CV",
"I.4.8"
] |
Deep neural networks have achieved great success in many real-world
applications, yet it remains unclear and difficult to explain their
decision-making process to an end-user. In this paper, we address the
explainable AI problem for deep neural networks with our proposed framework,
named IASSA, which generates an importance map indicating how salient each
pixel is for the model's prediction with an iterative and adaptive sampling
module. We employ an affinity matrix calculated on multi-level deep learning
features to explore long-range pixel-to-pixel correlation, which can shift the
saliency values guided by our long-range and parameter-free spatial attention.
Extensive experiments on the MS-COCO dataset show that our proposed approach
matches or exceeds the performance of state-of-the-art black-box explanation
methods. | [
"cs.CV"
] |
Generative Adversarial Networks have recently shown promise for video
generation, building off of the success of image generation while also
addressing a new challenge: time. Although time was analyzed in some early
work, the literature has not kept pace with developments in temporal modeling.
We study the effects of Neural Differential Equations to model
the temporal dynamics of video generation. The paradigm of Neural Differential
Equations presents many theoretical strengths including the first continuous
representation of time within video generation. In order to address the effects
of Neural Differential Equations, we investigate how changes in temporal models
affect generated video quality. Our results give support to the usage of Neural
Differential Equations as a simple replacement for older temporal generators.
While keeping run times similar and decreasing parameter count, we produce a
new state-of-the-art model in 64$\times$64 pixel unconditional video
generation, with an Inception Score of 15.20. | [
"cs.CV"
] |
Robotic apple harvesting has received much research attention in the past few
years due to growing shortage and rising cost in labor. One key enabling
technology towards automated harvesting is accurate and robust apple detection,
which poses great challenges as a result of the complex orchard environment
that involves varying lighting conditions and foliage/branch occlusions. This
letter reports on the development of a novel deep learning-based apple
detection framework named DeepApple. Specifically, we first collect a
comprehensive apple orchard dataset for 'Gala' and 'Blondee' apples, using a
color camera, under different lighting conditions (sunny vs. overcast and front
lighting vs. back lighting). We then develop a novel suppression Mask R-CNN for
apple detection, in which a suppression branch is added to the standard Mask
R-CNN to suppress non-apple features generated by the original network.
Comprehensive evaluations are performed, which show that the developed
suppression Mask R-CNN network outperforms state-of-the-art models with a
higher F1-score of 0.905 and a detection time of 0.25 second per frame on a
standard desktop computer. | [
"cs.CV",
"cs.LG"
] |
We present FlowMO: an open-source Python library for molecular property
prediction with Gaussian Processes. Built upon GPflow and RDKit, FlowMO enables
the user to make predictions with well-calibrated uncertainty estimates, an
output central to active learning and molecular design applications. Gaussian
Processes are particularly attractive for modelling small molecular datasets, a
characteristic of many real-world virtual screening campaigns where
high-quality experimental data is scarce. Computational experiments across
three small datasets demonstrate comparable predictive performance to deep
learning methods but with superior uncertainty calibration. | [
"cs.LG",
"stat.ML"
] |
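In the same spirit as FlowMO, but using scikit-learn rather than GPflow and a generic RBF kernel rather than a molecule-specific one, a minimal GP regression over RDKit Morgan fingerprints with predictive uncertainties might look as follows (the molecules and property values are toy data):

```python
# Sketch: GP regression on Morgan fingerprints with predictive uncertainty.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def featurize(smiles_list, n_bits=2048):
    fps = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
        fps.append(np.array(fp))
    return np.stack(fps)

smiles = ["CCO", "CCN", "c1ccccc1", "CC(=O)O"]   # toy molecules
y = np.array([0.1, 0.2, 0.9, 0.3])              # toy property values

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(featurize(smiles), y)
mean, std = gp.predict(featurize(["CCCO"]), return_std=True)
print(mean, std)   # prediction with a calibrated uncertainty estimate
```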
Recent studies have shown that the efficiency of deep neural networks in
mobile applications can be significantly improved by distributing the
computational workload between the mobile device and the cloud. This paradigm,
termed collaborative intelligence, involves communicating feature data between
the mobile and the cloud. The efficiency of such approach can be further
improved by lossy compression of feature data, which has not been examined to
date. In this work we focus on collaborative object detection and study the
impact of both near-lossless and lossy compression of feature data on its
accuracy. We also propose a strategy for improving the accuracy under lossy
feature compression. Experiments indicate that using this strategy, the
communication overhead can be reduced by up to 70% without sacrificing
accuracy. | [
"cs.CV"
] |
Typical architectures of Generative Adversarial Networks make use of a
unimodal latent distribution transformed by a continuous generator.
Consequently, the modeled distribution always has connected support which is
cumbersome when learning a disconnected set of manifolds. We formalize this
problem by establishing a no free lunch theorem for disconnected manifold
learning, stating an upper bound on the precision of the targeted distribution.
This is done by building on the necessary existence of a low-quality region
where the generator continuously samples data between two disconnected modes.
Finally, we derive a rejection sampling method based on the norm of the
generator's Jacobian and show its efficiency on several generators, including BigGAN. | [
"stat.ML",
"cs.LG"
] |
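A hedged sketch of that rejection rule: estimate the Frobenius norm of the generator's Jacobian at each sampled latent and reject samples whose norm exceeds a threshold, on the view that large norms flag the low-quality region between modes. The threshold and the use of `torch.autograd.functional.jacobian` are our choices.

```python
# Sketch: rejection sampling driven by the norm of the generator's Jacobian.
import torch
from torch.autograd.functional import jacobian

def jacobian_norm(generator, z):
    J = jacobian(lambda v: generator(v).flatten(), z)   # (out_dim, latent_dim)
    return J.norm()                                      # Frobenius norm

def sample_with_rejection(generator, latent_dim, threshold, max_tries=100):
    for _ in range(max_tries):
        z = torch.randn(latent_dim)
        if jacobian_norm(generator, z) <= threshold:
            return generator(z)
    raise RuntimeError("no sample accepted; threshold may be too tight")

# Toy demo with a small MLP "generator".
torch.manual_seed(0)
gen = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 64))
sample = sample_with_rejection(gen, latent_dim=8, threshold=10.0)
print(sample.shape)   # torch.Size([64])
```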
Bayesian reinforcement learning (BRL) offers a decision-theoretic solution
for reinforcement learning. While "model-based" BRL algorithms have focused
either on maintaining a posterior distribution on models or value functions and
combining this with approximate dynamic programming or tree search, previous
Bayesian "model-free" value function distribution approaches implicitly make
strong assumptions or approximations. We describe a novel Bayesian framework,
Inferential Induction, for correctly inferring value function distributions
from data, which leads to the development of a new class of BRL algorithms. We
design an algorithm, Bayesian Backwards Induction, with this framework. We
experimentally demonstrate that the proposed algorithm is competitive with
respect to the state of the art. | [
"cs.LG",
"stat.ML"
] |
Meta learning algorithms have been widely applied in many tasks for efficient
learning, such as few-shot image classification and fast reinforcement
learning. During meta training, the meta learner develops a common learning
strategy, or experience, from a variety of learning tasks. Therefore, during
meta test, the meta learner can use the learned strategy to quickly adapt to
new tasks even with a few training samples. However, meta learning still has a
dark side in terms of reliability and robustness. In particular, is
meta learning vulnerable to adversarial attacks? In other words, would a
well-trained meta learner utilize its learned experience to build wrong or
likely useless knowledge, if an adversary unnoticeably manipulates the given
training set? Without the understanding of this problem, it is extremely risky
to apply meta learning in safety-critical applications. Thus, in this paper, we
perform the initial study about adversarial attacks on meta learning under the
few-shot classification problem. In particular, we formally define key elements
of adversarial attacks unique to meta learning and propose the first attacking
algorithm against meta learning under various settings. We evaluate the
effectiveness of the proposed attacking strategy as well as the robustness of
several representative meta learning algorithms. Experimental results
demonstrate that the proposed attacking strategy can easily break the meta
learner and meta learning is vulnerable to adversarial attacks. The
implementation of the proposed framework will be released upon the acceptance
of this paper. | [
"cs.LG",
"stat.ML"
] |
The growing interest in both the automation of machine learning and deep
learning has inevitably led to the development of a wide variety of automated
methods for neural architecture search. The choice of the network architecture
has proven to be critical, and many advances in deep learning spring directly
from improvements to it. However, deep learning techniques are computationally
intensive and their application requires a high level of domain knowledge.
Therefore, even partial automation of this process helps to make deep learning
more accessible to both researchers and practitioners. With this survey, we
provide a formalism which unifies and categorizes the landscape of existing
methods along with a detailed analysis that compares and contrasts the
different approaches. We achieve this via a comprehensive discussion of the
commonly adopted architecture search spaces and architecture optimization
algorithms based on principles of reinforcement learning and evolutionary
algorithms along with approaches that incorporate surrogate and one-shot
models. Additionally, we address the new research directions which include
constrained and multi-objective architecture search as well as automated data
augmentation, optimizer and activation function search. | [
"cs.LG",
"cs.CV",
"cs.NE",
"stat.ML"
] |
Fair representation learning is an attractive approach that promises fairness
of downstream predictors by encoding sensitive data. Unfortunately, recent work
has shown that strong adversarial predictors can still exhibit unfairness by
recovering sensitive attributes from these representations. In this work, we
present Fair Normalizing Flows (FNF), a new approach offering more rigorous
fairness guarantees for learned representations. Specifically, we consider a
practical setting where we can estimate the probability density for sensitive
groups. The key idea is to model the encoder as a normalizing flow trained to
minimize the statistical distance between the latent representations of
different groups. The main advantage of FNF is that its exact likelihood
computation allows us to obtain guarantees on the maximum unfairness of any
potentially adversarial downstream predictor. We experimentally demonstrate the
effectiveness of FNF in enforcing various group fairness notions, as well as
other attractive properties such as interpretability and transfer learning, on
a variety of challenging real-world datasets. | [
"cs.LG",
"cs.AI"
] |
Manipulation and re-use of images in scientific publications is a concerning
problem that currently lacks a scalable solution. Current tools for detecting
image duplication are mostly manual or semi-automated, despite the availability
of an overwhelming target dataset for a learning-based approach. This paper
addresses the problem of determining if, given two images, one is a manipulated
version of the other by means of copy, rotation, translation, scale,
perspective transform, histogram adjustment, or partial erasing. We propose a
data-driven solution based on a 3-branch Siamese Convolutional Neural Network.
The ConvNet model is trained to map images into a 128-dimensional space, where
the Euclidean distance between duplicate images is smaller than or equal to 1,
and the distance between unique images is greater than 1. Our results suggest
that such an approach has the potential to improve surveillance of the
published and in-peer-review literature for image manipulation. | [
"cs.CV"
] |
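The decision rule described above (Euclidean distance at most 1 means duplicate) is straightforward once an embedding network exists; the sketch below uses a toy stand-in for the trained ConvNet branch, whose 128-d output space matches the paper's stated design:

```python
# Sketch: the duplicate decision rule in the learned 128-d embedding space.
import torch

def is_duplicate(embed, img_a, img_b, threshold=1.0):
    """Duplicates are trained to lie within distance 1; unique images beyond it."""
    with torch.no_grad():
        e_a = embed(img_a.unsqueeze(0)).squeeze(0)
        e_b = embed(img_b.unsqueeze(0)).squeeze(0)
    return torch.dist(e_a, e_b).item() <= threshold

# Toy stand-in for the trained ConvNet branch (flatten + linear to 128-d).
embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
img_a = torch.rand(3, 32, 32)
print(is_duplicate(embed, img_a, img_a))   # identical images -> True
```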
Semantic segmentation is a challenging task that needs to handle large scale
variations, deformations and different viewpoints. In this paper, we develop a
novel network named Gated Path Selection Network (GPSNet), which aims to learn
adaptive receptive fields. In GPSNet, we first design a two-dimensional
multi-scale network - SuperNet, which densely incorporates features from
growing receptive fields. To dynamically select desirable semantic context, a
gate prediction module is further introduced. In contrast to previous works
that focus on optimizing sample positions on the regular grids, GPSNet can
adaptively capture free form dense semantic contexts. The derived adaptive
receptive fields are data-dependent and flexible enough to model different
geometric transformations of objects. On two representative semantic segmentation
datasets, i.e., Cityscapes, and ADE20K, we show that the proposed approach
consistently outperforms previous methods and achieves competitive performance
without bells and whistles. | [
"cs.CV"
] |
\textit{Attention} computes the dependency between representations, and it
encourages the model to focus on the important selective features.
Attention-based models, such as Transformer and graph attention network (GAT),
are widely utilized for sequential data and graph-structured data. This paper
suggests a new interpretation and generalized structure of the attention in
Transformer and GAT. For the attention in Transformer and GAT, we derive that
the attention is a product of two parts: 1) the RBF kernel to measure the
similarity of two instances and 2) the exponential of $L^{2}$ norm to compute
the importance of individual instances. From this decomposition, we generalize
the attention in three ways. First, we propose implicit kernel attention with
an implicit kernel function instead of manual kernel selection. Second, we
generalize $L^{2}$ norm as the $L^{p}$ norm. Third, we extend our attention to
structured multi-head attention. Our generalized attention shows better
performance on classification, translation, and regression tasks. | [
"cs.LG",
"stat.ML"
] |
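The decomposition claimed here follows from expanding the squared distance inside the RBF kernel; for an (unscaled) dot-product attention score between query $q$ and key $k$, since $\lVert q-k \rVert_2^2 = \lVert q \rVert_2^2 + \lVert k \rVert_2^2 - 2\langle q, k \rangle$:

```latex
% Unnormalized attention weight between query q and key k:
\exp(\langle q, k \rangle)
  = \underbrace{\exp\!\Big(-\tfrac{\lVert q-k \rVert_2^2}{2}\Big)}_{\text{RBF similarity}}
    \cdot
    \underbrace{\exp\!\Big(\tfrac{\lVert q \rVert_2^2}{2}\Big)
    \exp\!\Big(\tfrac{\lVert k \rVert_2^2}{2}\Big)}_{\text{instance importance}}
```

The scaled dot-product variant used in Transformer replaces $\langle q, k \rangle$ by $\langle q, k \rangle / \sqrt{d}$, which rescales all three factors accordingly.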
The Gaussian process (GP) regression can be severely biased when the data are
contaminated by outliers. This paper presents a new robust GP regression
algorithm that iteratively trims the most extreme data points. While the new
algorithm retains the attractive properties of the standard GP as a
nonparametric and flexible regression method, it can greatly improve the model
accuracy for contaminated data even in the presence of extreme or abundant
outliers. It is also easier to implement compared with previous robust GP
variants that rely on approximate inference. Applied to a wide range of
experiments with different contamination levels, the proposed method
significantly outperforms the standard GP and the popular robust GP variant
with the Student-t likelihood in most test cases. In addition, as a practical
example in the astrophysical study, we show that this method can precisely
determine the main-sequence ridge line in the color-magnitude diagram of star
clusters. | [
"cs.LG",
"astro-ph.IM",
"stat.ML"
] |
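A minimal sketch of the trimming loop, using scikit-learn's GP (with its default kernel) in place of whatever implementation the paper uses; the trimming fraction and iteration count are illustrative:

```python
# Sketch: robust GP regression by iteratively trimming extreme residuals.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def trimmed_gp(X, y, trim_frac=0.1, n_iters=5):
    keep = np.ones(len(y), dtype=bool)
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iters):
        gp.fit(X[keep], y[keep])
        residuals = np.abs(y - gp.predict(X))
        # Keep the (1 - trim_frac) fraction of points with the smallest residuals.
        cutoff = np.quantile(residuals, 1.0 - trim_frac)
        keep = residuals <= cutoff
    return gp, keep

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
y[:10] += 5.0                      # inject extreme outliers
gp, inliers = trimmed_gp(X, y)
print("points kept:", inliers.sum())
```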
This dissertation investigates integer linear programming (ILP) formulation
of Bayesian Network structure learning problem. We review the definition and
key properties of Bayesian network and explain score metrics used to measure
how well certain Bayesian network structure fits the dataset. We outline the
integer linear programming formulation based on the decomposability of score
metrics. In order to ensure acyclicity of the structure, we add ``cluster
constraints'' developed specifically for Bayesian networks, in addition to cycle
constraints applicable to directed acyclic graphs in general. Since there would
be an exponential number of these constraints if we specified them fully, we explain
the methods to add them as cutting planes without declaring them all in the
initial model. Also, we develop a heuristic algorithm that finds a feasible
solution based on the idea of sink node on directed acyclic graphs. We
implemented the ILP formulation and cutting planes as a \textsf{Python}
package, and present the results of experiments with different settings on
reference datasets. | [
"stat.ML",
"cs.AI",
"cs.LG"
] |
Model-free reinforcement learning has been successfully applied to a range of
challenging problems, and has recently been extended to handle large neural
network policies and value functions. However, the sample complexity of
model-free algorithms, particularly when using high-dimensional function
approximators, tends to limit their applicability to physical systems. In this
paper, we explore algorithms and representations to reduce the sample
complexity of deep reinforcement learning for continuous control tasks. We
propose two complementary techniques for improving the efficiency of such
algorithms. First, we derive a continuous variant of the Q-learning algorithm,
which we call normalized advantage functions (NAF), as an alternative to the
more commonly used policy gradient and actor-critic methods. The NAF representation
allows us to apply Q-learning with experience replay to continuous tasks, and
substantially improves performance on a set of simulated robotic control tasks.
To further improve the efficiency of our approach, we explore the use of
learned models for accelerating model-free reinforcement learning. We show that
iteratively refitted local linear models are especially effective for this, and
demonstrate substantially faster learning on domains where such models are
applicable. | [
"cs.LG",
"cs.AI",
"cs.RO",
"cs.SY"
] |
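For reference, the NAF construction factors the Q-function so that its maximizer is available in closed form; with a state-dependent positive-definite matrix $P(s)$, the standard parameterization reads:

```latex
Q(s, a) = V(s) + A(s, a), \qquad
A(s, a) = -\tfrac{1}{2}\,\big(a - \mu(s)\big)^{\top} P(s)\,\big(a - \mu(s)\big)
```

Since $A(s,a) \le 0$ with equality at $a = \mu(s)$, we get $\arg\max_a Q(s,a) = \mu(s)$, which is what makes Q-learning tractable for continuous actions.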
DNNs are achieving human-level performance on many complex intelligent tasks
in real-world applications. However, it also introduces ever-increasing
security concerns. For example, the emerging adversarial attacks indicate that
even very small and often imperceptible adversarial input perturbations can
easily mislead the cognitive function of deep learning systems (DLS). Existing
DNN adversarial studies are narrowly performed on the ideal software-level DNN
models with a focus on a single uncertainty factor, i.e. input perturbations.
However, the impact of DNN model reshaping on adversarial attacks, which is
introduced by various hardware-favorable techniques such as hash-based weight
compression during modern DNN hardware implementation, has never been
discussed. In this work, we for the first time investigate the multi-factor
adversarial attack problem in practical model optimized deep learning systems
by jointly considering the DNN model-reshaping (e.g. HashNet based deep
compression) and the input perturbations. We first augment adversarial example
generating method dedicated to the compressed DNN models by incorporating the
software-based approaches and mathematical modeled DNN reshaping. We then
conduct a comprehensive robustness and vulnerability analysis of deep
compressed DNN models under derived adversarial attacks. A defense technique
named "gradient inhibition" is further developed to ease the generating of
adversarial examples thus to effectively mitigate adversarial attacks towards
both software and hardware-oriented DNNs. Simulation results show that
"gradient inhibition" can decrease the average success rate of adversarial
attacks from 87.99% to 4.77% (from 86.74% to 4.64%) on MNIST (CIFAR-10)
benchmark with marginal accuracy degradation across various DNNs. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
As a consequence of an ever-increasing number of service robots, there is a
growing demand for highly accurate real-time 3D object recognition. Considering
the expansion of robot applications in more complex and dynamic environments, it
is evident that it is not possible to pre-program all object categories and
anticipate all exceptions in advance. Therefore, robots should have the
functionality to learn about new object categories in an open-ended fashion
while working in the environment. Towards this goal, we propose a deep transfer
learning approach to generate a scale- and pose-invariant object representation
by considering shape and texture information in multiple colorspaces. The
obtained global object representation is then fed to an instance-based object
category learning and recognition, where a non-expert human user exists in the
learning loop and can interactively guide the process of experience acquisition
by teaching new object categories, or by correcting insufficient or erroneous
categories. In this work, shape information encodes the common patterns of all
categories, while texture information is used to describe the appearance of
each instance in detail. Multiple color space combinations and network
architectures are evaluated to find the most descriptive system. Experimental
results showed that the proposed network architecture outperformed the
selected state-of-the-art approaches in terms of object classification accuracy
and scalability. Furthermore, we performed a real robot experiment in the
context of serve-a-beer scenario to show the real-time performance of the
proposed approach. | [
"cs.CV",
"cs.RO"
] |
Video captioning is a popular task that challenges models to describe events
in videos using natural language. In this work, we investigate the ability of
various visual feature representations derived from state-of-the-art
convolutional neural networks to capture high-level semantic context. We
introduce the Weighted Additive Fusion Transformer with Memory Augmented
Encoders (WAFTM), a captioning model that incorporates memory in a transformer
encoder and uses a novel feature-fusion method that ensures due importance
is given to more significant representations. We illustrate a gain in
performance realized by applying Word-Piece Tokenization and a popular
REINFORCE algorithm. Finally, we benchmark our model on two datasets and obtain
a CIDEr of 92.4 on MSVD and a METEOR of 0.091 on the ActivityNet Captions
Dataset. | [
"cs.CV"
] |
Humans are remarkably proficient at controlling their limbs and tools from a
wide range of viewpoints and angles, even in the presence of optical
distortions. In robotics, this ability is referred to as visual servoing:
moving a tool or end-point to a desired location using primarily visual
feedback. In this paper, we study how viewpoint-invariant visual servoing
skills can be learned automatically in a robotic manipulation scenario. To this
end, we train a deep recurrent controller that can automatically determine
which actions move the end-point of a robotic arm to a desired object. The
problem that must be solved by this controller is fundamentally ambiguous:
under severe variation in viewpoint, it may be impossible to determine the
actions in a single feedforward operation. Instead, our visual servoing system
must use its memory of past movements to understand how the actions affect the
robot motion from the current viewpoint, correcting mistakes and gradually
moving closer to the target. This ability is in stark contrast to most visual
servoing methods, which either assume known dynamics or require a calibration
phase. We show how we can learn this recurrent controller using simulated data
and a reinforcement learning objective. We then describe how the resulting
model can be transferred to a real-world robot by disentangling perception from
control and only adapting the visual layers. The adapted model can servo to
previously unseen objects from novel viewpoints on a real-world Kuka IIWA
robotic arm. For supplementary videos, see:
https://fsadeghi.github.io/Sim2RealViewInvariantServo | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
Deep Neural Networks (DNNs) have an enormous potential to learn from complex
biomedical data. In particular, DNNs have been used to seamlessly fuse
heterogeneous information from neuroanatomy, genetics, biomarkers, and
neuropsychological tests for highly accurate Alzheimer's disease diagnosis. On
the other hand, their black-box nature is still a barrier for the adoption of
such a system in the clinic, where interpretability is absolutely essential. We
propose Shapley Value Explanation of Heterogeneous Neural Networks (SVEHNN) for
explaining the Alzheimer's diagnosis made by a DNN from the 3D point cloud of
the neuroanatomy and tabular biomarkers. Our explanations are based on the
Shapley value, which is the unique method that satisfies all fundamental axioms
for local explanations previously established in the literature. Thus, SVEHNN
has many desirable characteristics that previous work on interpretability for
medical decision making is lacking. To avoid the exponential time complexity of
the Shapley value, we propose to transform a given DNN into a Lightweight
Probabilistic Deep Network without re-training, thus achieving a complexity
only quadratic in the number of features. In our experiments on synthetic and
real data, we show that we can closely approximate the exact Shapley value with
a dramatically reduced runtime and can reveal the hidden knowledge the network
has learned from the data. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
We propose a tree ensemble method, referred to as time series forest (TSF),
for time series classification. TSF employs a combination of the entropy gain
and a distance measure, referred to as the Entrance (entropy and distance)
gain, for evaluating the splits. Experimental studies show that the Entrance
gain criterion improves the accuracy of TSF. TSF randomly samples features at
each tree node, has a computational complexity linear in the length of the
time series, and can be built using parallel computing techniques such as the
multi-core computing used here. The temporal importance curve is also proposed
to capture the important temporal characteristics useful for classification.
Experimental studies show that TSF using simple features such as the mean,
standard deviation and slope outperforms strong competitors such as one-nearest-neighbor
classifiers with dynamic time warping, is computationally efficient, and can
provide insights into the temporal characteristics. | [
"cs.LG"
] |
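The three interval features are simple enough to state exactly; the sketch below computes the mean, standard deviation, and least-squares slope over a randomly sampled interval, which is the feature pool TSF draws from at each node (the interval-sampling scheme is simplified here):

```python
# Sketch: the mean / standard deviation / slope features over a random interval.
import numpy as np

def interval_features(series, start, end):
    window = series[start:end]
    t = np.arange(len(window))
    slope = np.polyfit(t, window, 1)[0]          # least-squares linear trend
    return window.mean(), window.std(), slope

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=300))         # toy time series
start = rng.integers(0, 250)
end = start + rng.integers(10, 50)
print(interval_features(series, start, end))
```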
With the recent advancement in the deep learning technologies such as CNNs
and GANs, there is significant improvement in the quality of the images
reconstructed by deep learning based super-resolution (SR) techniques. In this
work, we propose a robust loss function based on the preservation of edges
obtained by the Canny operator. This loss function, when combined with the
existing loss function such as mean square error (MSE), gives better SR
reconstruction measured in terms of PSNR and SSIM. Our proposed loss function
guarantees improved performance on any existing algorithm using MSE loss
function, without any increase in the computational complexity during testing. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
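A hedged sketch of the combined loss: since the Canny operator used in the paper is not differentiable, this version substitutes a differentiable Sobel gradient magnitude as the edge map inside the training loss; the weighting lambda is illustrative.

```python
# Sketch: MSE plus an edge-preservation term for super-resolution training.
# Canny (as in the paper) is non-differentiable, so a Sobel gradient
# magnitude stands in for the edge map here.
import torch
import torch.nn.functional as F

def sobel_edges(img):                      # img: (N, 1, H, W), grayscale
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_preserving_loss(sr, hr, lam=0.1):
    mse = F.mse_loss(sr, hr)
    edge = F.mse_loss(sobel_edges(sr), sobel_edges(hr))
    return mse + lam * edge

sr = torch.rand(2, 1, 64, 64, requires_grad=True)
hr = torch.rand(2, 1, 64, 64)
edge_preserving_loss(sr, hr).backward()    # fully differentiable
```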
Exploring contextual information in the local region is important for shape
understanding and analysis. Existing studies often employ hand-crafted or
explicit ways to encode contextual information of local regions. However, it is
hard to capture fine-grained contextual information in hand-crafted or explicit
manners, such as the correlation between different areas in a local region,
which limits the discriminative ability of learned features. To resolve this
issue, we propose a novel deep learning model for 3D point clouds, named
Point2Sequence, to learn 3D shape features by capturing fine-grained contextual
information in a novel implicit way. Point2Sequence employs a novel sequence
learning model for point clouds to capture the correlations by aggregating
multi-scale areas of each local region with attention. Specifically,
Point2Sequence first learns the feature of each area scale in a local region.
Then, it captures the correlation between area scales in the process of
aggregating all area scales using a recurrent neural network (RNN) based
encoder-decoder structure, where an attention mechanism is proposed to
highlight the importance of different area scales. Experimental results show
that Point2Sequence achieves state-of-the-art performance in shape
classification and segmentation tasks. | [
"cs.CV"
] |
Recent studies have witnessed that self-supervised methods based on view
synthesis obtain clear progress on multi-view stereo (MVS). However, existing
methods rely on the assumption that the corresponding points among different
views share the same color, which may not always be true in practice. This may
lead to unreliable self-supervised signal and harm the final reconstruction
performance. To address the issue, we propose a framework integrated with more
reliable supervision guided by semantic co-segmentation and data-augmentation.
Specifically, we extract mutual semantics from multi-view images to guide
semantic consistency, and we devise an effective data-augmentation mechanism
that ensures transformation robustness by treating the predictions on regular
samples as pseudo ground truth to regularize the predictions on augmented
samples. Experimental results on DTU dataset show that our proposed methods
achieve the state-of-the-art performance among unsupervised methods, and even
compete on par with supervised methods. Furthermore, extensive experiments on
Tanks&Temples dataset demonstrate the effective generalization ability of the
proposed method. | [
"cs.CV",
"cs.AI"
] |
Model compression and knowledge distillation have been successfully applied
for cross-architecture and cross-domain transfer learning. However, a key
requirement is that training examples are in correspondence across the domains.
We show that in many scenarios of practical importance such aligned data can be
synthetically generated using computer graphics pipelines allowing domain
adaptation through distillation. We apply this technique to learn models for
recognizing low-resolution images using labeled high-resolution images,
non-localized objects using labeled localized objects, line-drawings using
labeled color images, etc. Experiments on various fine-grained recognition
datasets demonstrate that the technique improves recognition performance on the
low-quality data and beats strong baselines for domain adaptation. Finally, we
present insights into workings of the technique through visualizations and
relating it to existing literature. | [
"cs.CV"
] |
Automatic event detection from time series signals has wide applications,
such as abnormal event detection in video surveillance and event detection in
geophysical data. Traditional detection methods detect events primarily by the
use of similarity and correlation in data. Those methods can be inefficient and
yield low accuracy. In recent years, because of the significantly increased
computational power, machine learning techniques have revolutionized many
science and engineering domains. In this study, we apply a deep-learning-based
method to the detection of events from time series seismic signals. However, a
direct adaptation of the similar ideas from 2D object detection to our problem
faces two challenges. The first is that the duration of an earthquake event
varies significantly; the other is that the generated proposals are
temporally correlated. To address these challenges, we propose a novel cascaded
region-based convolutional neural network to capture earthquake events in
different sizes, while incorporating contextual information to enrich features
for each individual proposal. To achieve a better generalization performance,
we use densely connected blocks as the backbone of our network. Because some
positive events are not correctly annotated, we further
formulate the detection problem as a learning-from-noise problem. To verify the
performance of our detection methods, we employ our methods to seismic data
generated from a bi-axial "earthquake machine" located at Rock Mechanics
Laboratory, and we acquire labels with the help of experts. Through our
numerical tests, we show that our novel detection techniques yield high
accuracy. Therefore, our novel deep-learning-based detection methods can
potentially be powerful tools for locating events from time series data in
various applications. | [
"cs.LG",
"cs.CV"
] |
Non-uniform and multi-illuminant color constancy are important tasks, the
solution of which would make it possible to discard information about lighting
conditions in an image. Non-uniform illumination and shadows distort the colors of real-world
objects and mostly do not contain valuable information. Thus, many computer
vision and image processing techniques would benefit from automatic discarding
of this information at the pre-processing step. In this work we propose a novel
view on this classical problem via a generative end-to-end algorithm based on
an image-conditioned Generative Adversarial Network. We also demonstrate the
potential of the given approach for joint shadow detection and removal. Motivated
by the lack of training data, we render the largest existing shadow removal
dataset and make it publicly available. It consists of approximately 6,000
pairs of wide field of view synthetic images with and without shadows. | [
"cs.CV"
] |
Extrapolating fine-grained pixel-level correspondences in a fully
unsupervised manner from a large set of misaligned images can benefit several
computer vision and graphics problems, e.g. co-segmentation, super-resolution,
image edit propagation, structure-from-motion, and 3D reconstruction. Several
joint image alignment and congealing techniques have been proposed to tackle
this problem, but robustness to initialisation, ability to scale to large
datasets, and alignment accuracy seem to hamper their wide applicability. To
overcome these limitations, we propose an unsupervised joint alignment method
leveraging a densely fused spatial transformer network to estimate the warping
parameters for each image and a low-capacity auto-encoder whose reconstruction
error is used as an auxiliary measure of joint alignment. Experimental results
on digits from multiple versions of MNIST (i.e., original, perturbed, affNIST
and infiMNIST) and faces from LFW, show that our approach is capable of
aligning millions of images with high accuracy and robustness to different
levels and types of perturbation. Moreover, qualitative and quantitative
results suggest that the proposed method outperforms state-of-the-art
approaches both in terms of alignment quality and robustness to initialisation. | [
"cs.CV"
] |
Representation learning on static graph-structured data has shown a
significant impact on many real-world applications. However, less attention has
been paid to the evolving nature of temporal networks, in which the edges are
often changing over time. The embeddings of such temporal networks should
encode both graph-structured information and the temporally evolving pattern.
Existing approaches in learning temporally evolving network representations
fail to capture the temporal interdependence. In this paper, we propose Toffee,
a novel approach for temporal network representation learning based on tensor
decomposition. Our method exploits the tensor-tensor product operator to encode
the cross-time information, so that the periodic changes in the evolving
networks can be captured. Experimental results demonstrate that Toffee
outperforms existing methods on multiple real-world temporal networks in
generating effective embeddings for the link prediction tasks. | [
"cs.LG"
] |
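For readers unfamiliar with the tensor-tensor product the method builds on, the standard t-product can be computed in the Fourier domain: FFT along the third (time) axis, slice-wise matrix products, inverse FFT. This is a generic sketch of the operator, not Toffee's implementation:

```python
# Sketch: the tensor-tensor product (t-product) via FFT along the time axis.
import numpy as np

def t_product(A, B):
    """A: (n1, n2, n3), B: (n2, n4, n3) -> (n1, n4, n3)."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)   # frontal-slice matmuls in Fourier domain
    return np.real(np.fft.ifft(Cf, axis=2))

A = np.random.rand(4, 5, 8)    # e.g. 8 time snapshots of a 4x5 factor
B = np.random.rand(5, 3, 8)
print(t_product(A, B).shape)   # (4, 3, 8)
```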
Whether to attract viewer attention to a particular object, give the
impression of depth or simply reproduce human-like scene perception, shallow
depth of field images are used extensively by professional and amateur
photographers alike. To this end, high quality optical systems are used in DSLR
cameras to focus on a specific depth plane while producing visually pleasing
bokeh. We propose a physically motivated pipeline to mimic this effect from
all-in-focus stereo images, typically retrieved by mobile cameras. It is
capable of changing the focal plane a posteriori at 76 FPS on KITTI images to
enable real-time applications. As our portmanteau suggests, SteReFo
interrelates stereo-based depth estimation and refocusing efficiently. In
contrast to other approaches, our pipeline is simultaneously fully
differentiable, physically motivated, and agnostic to scene content. It also
enables computational video focus tracking for moving objects in addition to
refocusing of static images. We evaluate our approach on the publicly available
datasets SceneFlow, KITTI, CityScapes and quantify the quality of architectural
changes. | [
"cs.CV"
] |
Probabilistic Boolean Networks (PBNs) were introduced as a computational
model for the study of complex dynamical systems, such as Gene Regulatory
Networks (GRNs). Controllability in this context is the process of making
strategic interventions to the state of a network in order to drive it towards
some other state that exhibits favourable biological properties. In this paper
we study the ability of a Double Deep Q-Network with Prioritized Experience
Replay in learning control strategies within a finite number of time steps that
drive a PBN towards a target state, typically an attractor. The control method
is model-free and does not require knowledge of the network's underlying
dynamics, making it suitable for applications where inference of such dynamics
is intractable. We present extensive experimental results on two synthetic
PBNs and the PBN model constructed directly from gene-expression data of a
study on metastatic melanoma. | [
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY"
] |
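
For reference, the Double DQN target that such a controller bootstraps from
decouples action selection (online network) from action evaluation (target
network); a minimal PyTorch sketch follows, with prioritized replay
additionally weighting sampled transitions by their TD error.

```python
import torch

def double_dqn_target(rewards, next_states, dones, q_online, q_target, gamma=0.99):
    """Compute Double DQN regression targets: the online network picks the
    next action, the target network evaluates it, which curbs the
    overestimation bias of vanilla Q-learning."""
    with torch.no_grad():
        best = q_online(next_states).argmax(dim=1, keepdim=True)   # select
        next_q = q_target(next_states).gather(1, best).squeeze(1)  # evaluate
        return rewards + gamma * (1.0 - dones) * next_q
```
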
In order to operate autonomously, a robot should explore the environment and
build a model of each of the surrounding objects. A common approach is to
carefully scan the whole workspace. This is time-consuming. It is also often
impossible to reach all the viewpoints required to acquire full knowledge about
the environment. Humans can perform shape completion of occluded objects by
relying on past experience. Therefore, we propose a method that generates
images of an object from various viewpoints using a single input RGB image. A
deep neural network is trained to imagine the object appearance from many
viewpoints. We present the whole pipeline, which takes a single RGB image as
input and returns a sequence of RGB and depth images of the object. The method
utilizes a CNN-based object detector to extract the object from the natural
scene. Then, the proposed network generates a set of RGB and depth images. We
show the results both on a synthetic dataset and on real images. | [
"cs.CV",
"cs.RO"
] |
Incremental few-shot learning has emerged as a new and challenging area in
deep learning, whose objective is to train deep learning models using very few
samples of new class data, and none of the old class data. In this work we
tackle the problem of batch incremental few-shot road object detection using
data from the India Driving Dataset (IDD). Our approach, DualFusion, combines
object detectors in a manner that allows us to learn to detect rare objects
with very limited data, all without severely degrading the performance of the
detector on the abundant classes. In the IDD OpenSet incremental few-shot
detection task, we achieve a mAP50 score of 40.0 on the base classes and an
overall mAP50 score of 38.8, both of which are the highest to date. In the COCO
batch incremental few-shot detection task, we achieve a novel AP score of 9.9,
surpassing the state-of-the-art novel class performance on the same task by
over 6.6 times. | [
"cs.CV",
"cs.AI"
] |
Video and action classification have evolved rapidly thanks to deep neural
networks, especially two-stream CNNs that take RGB and optical flow as inputs
and deliver outstanding performance in video analysis. One shortcoming of
these methods is that motion information is extracted outside the CNN, which
is relatively time-consuming even on GPUs. End-to-end methods that learn
motion representations internally, such as 3D-CNNs, can therefore be both
faster and more accurate. We present novel deep CNNs with 3D architectures
that model actions and motion representations efficiently, aiming to be
accurate and as fast as real-time. Our new networks learn distinctive models
that combine deep motion features with the appearance model by learning
optical flow features inside the network. | [
"cs.CV"
] |
We consider an Intelligent Reflecting Surface (IRS)-aided multiple-input
single-output (MISO) system for downlink transmission. We compare the
performance of Deep Reinforcement Learning (DRL) and conventional optimization
methods in finding optimal phase shifts of the IRS elements to maximize the
user signal-to-noise ratio (SNR). Furthermore, we evaluate the robustness of
these methods to channel impairments and changes in the system. We demonstrate
numerically that DRL solutions show more robustness to noisy channels and user
mobility. | [
"cs.LG",
"cs.IT",
"math.IT"
] |
This work proposes a novel Graph-based neural ArchiTecture Encoding Scheme,
a.k.a. GATES, to improve the predictor-based neural architecture search.
Specifically, different from existing graph-based schemes, GATES models the
operations as the transformation of the propagating information, which mimics
the actual data processing of a neural architecture. GATES provides a more
reasonable modeling of neural architectures, and can encode architectures from both
the "operation on node" and "operation on edge" cell search spaces
consistently. Experimental results on various search spaces confirm GATES's
effectiveness in improving the performance predictor. Furthermore, equipped
with the improved performance predictor, the sample efficiency of the
predictor-based neural architecture search (NAS) flow is boosted. Codes are
available at https://github.com/walkerning/aw_nas. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
Exact recovery of tensor decomposition (TD) methods is a desirable property
in both unsupervised learning and scientific data analysis. The numerical
defects of TD methods, however, limit their practical applications on
real-world data. As an alternative, convex tensor decomposition (CTD) was
proposed to alleviate these problems, but its exact-recovery property has not
been properly addressed so far. To this end, we focus on latent convex tensor
decomposition (LCTD), a practically widely-used CTD model, and rigorously prove
a sufficient condition for its exact-recovery property. Furthermore, we show
that such property can be also achieved by a more general model than LCTD. In
the new model, we generalize the classic tensor (un-)folding into a
reshuffling operation, a more flexible mapping that relocates the entries of a
matrix into a
tensor. Armed with the reshuffling operations and exact-recovery property, we
explore a totally novel application for (generalized) LCTD, i.e., image
steganography. Experimental results on synthetic data validate our theory, and
results on image steganography show that our method outperforms the
state-of-the-art methods. | [
"cs.LG",
"stat.ML"
] |
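
For context, the commonly used latent tensor nuclear-norm formulation of LCTD
can be written as below; the paper's generalized model replaces the classical
mode-$k$ unfolding with the reshuffling operation.

```latex
\min_{\mathcal{Z}^{(1)},\dots,\mathcal{Z}^{(K)}}
  \sum_{k=1}^{K} \lambda_k \bigl\| \mathbf{Z}^{(k)}_{(k)} \bigr\|_*
\quad \text{s.t.} \quad
  \sum_{k=1}^{K} \mathcal{Z}^{(k)} = \mathcal{X},
```

where $\mathbf{Z}^{(k)}_{(k)}$ is the mode-$k$ unfolding of the $k$-th latent
tensor and $\|\cdot\|_*$ the matrix nuclear norm.
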
Recently, Minimum Cost Multicut Formulations have been proposed and proven to
be successful in both motion trajectory segmentation and multi-target tracking
scenarios. Both tasks benefit from decomposing a graphical model into an
optimal number of connected components based on attractive and repulsive
pairwise terms. The two tasks are formulated on different levels of granularity
and, accordingly, leverage mostly local information for motion segmentation and
mostly high-level information for multi-target tracking. In this paper we argue
that point trajectories and their local relationships can contribute to the
high-level task of multi-target tracking and also argue that high-level cues
from object detection and tracking are helpful to solve motion segmentation. We
propose a joint graphical model for point trajectories and object detections
whose Multicuts are solutions to motion segmentation {\it and} multi-target
tracking problems at once. Results on the FBMS59 motion segmentation benchmark
as well as on pedestrian tracking sequences from the 2D MOT 2015 benchmark
demonstrate the promise of this joint approach. | [
"cs.CV"
] |
For classification tasks, dictionary learning based methods have attracted
considerable attention in recent years. One popular way to achieve this purpose is
to introduce label information to generate a discriminative dictionary to
represent samples. However, compared with traditional dictionary learning, this
category of methods only achieves significant improvements in supervised
learning, and has little positive influence on semi-supervised or unsupervised
learning. To tackle this issue, we propose a Dynamic Label Dictionary Learning
(DLDL) algorithm to generate the soft label matrix for unlabeled data.
Specifically, we employ hypergraph manifold regularization to keep the
relations among original data, transformed data, and soft labels consistent. We
demonstrate the efficiency of the proposed DLDL approach on two remote sensing
datasets. | [
"cs.CV"
] |
High resolution depth-maps, obtained by upsampling sparse range data from a
3D-LIDAR, find applications in many fields ranging from sensory perception to
semantic segmentation and object detection. Upsampling is often based on
combining data from a monocular camera to compensate for the low resolution of a
LIDAR. This paper, on the other hand, introduces a novel framework to obtain
dense depth-map solely from a single LIDAR point cloud; which is a research
direction that has been barely explored. The formulation behind the proposed
depth-mapping process relies on local spatial interpolation, using
sliding-window (mask) technique, and on the Bilateral Filter (BF) where the
variable of interest, the distance from the sensor, is considered in the
interpolation problem. In particular, the BF is conveniently modified to
perform depth-map upsampling such that the edges (foreground-background
discontinuities) are better preserved by means of a proposed method which
influences the range-based weighting term. Other methods for spatial upsampling
are discussed, evaluated and compared in terms of different error measures.
This paper also investigates the role of the mask size in the performance of
the implemented methods. Quantitative and qualitative results from experiments
on the KITTI Database, using LIDAR point clouds only, show very satisfactory
performance of the approach introduced in this work. | [
"cs.CV"
] |
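
A minimal sketch of the sliding-window bilateral interpolation idea: the
range term is computed on depth itself, so weights collapse across
foreground-background discontinuities and edges are preserved. The paper's
actual modification of the range weighting differs in detail; the window size
and sigmas here are illustrative.

```python
import numpy as np

def bf_depth_upsample(depth, mask, win=7, sigma_s=2.0, sigma_r=0.5):
    """Fill unmeasured pixels of a sparse LIDAR depth map by bilateral
    interpolation over a sliding window. `mask` marks measured returns."""
    H, W = depth.shape
    r = win // 2
    out = depth.copy()
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    w_spatial = np.exp(-(yy ** 2 + xx ** 2) / (2 * sigma_s ** 2))
    for y in range(r, H - r):
        for x in range(r, W - r):
            if mask[y, x]:
                continue  # keep measured LIDAR returns untouched
            patch = depth[y - r:y + r + 1, x - r:x + r + 1]
            valid = mask[y - r:y + r + 1, x - r:x + r + 1]
            if not valid.any():
                continue
            ref = np.median(patch[valid])  # local depth reference
            w_range = np.exp(-(patch - ref) ** 2 / (2 * sigma_r ** 2))
            w = w_spatial * w_range * valid
            out[y, x] = (w * patch).sum() / w.sum()
    return out
```
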
Machine learning models tend to over-rely on statistical shortcuts: spurious
correlations between parts of the input and the output labels that do not
hold in real-world settings. We target this issue on the recent open-ended
visual counting task which is well suited to study statistical shortcuts. We
aim to develop models that learn a proper mechanism of counting regardless of
the output label. First, we propose the Modifying Count Distribution (MCD)
protocol, which penalizes models that over-rely on statistical shortcuts. It is
based on pairs of training and testing sets that do not follow the same count
label distribution such as the odd-even sets. Intuitively, models that have
learned a proper mechanism of counting on odd numbers should perform well on
even numbers. Secondly, we introduce the Spatial Counting Network (SCN), which
is dedicated to visual analysis and counting based on natural language
questions. Our model selects relevant image regions, scores them with fusion
and self-attention mechanisms, and provides a final counting score. We apply
our protocol on the recent dataset, TallyQA, and show superior performances
compared to state-of-the-art models. We also demonstrate the ability of our
model to select the correct instances to count in the image. Code and datasets
are available: https://github.com/cdancette/spatial-counting-network | [
"cs.CV",
"cs.CL",
"cs.LG",
"eess.IV"
] |
Object detection in natural environments is still a very challenging task,
even though deep learning has brought a tremendous improvement in performance
over the last years. A fundamental problem of object detection based on deep
learning is that neither the training data nor the suggested models are
intended for the challenge of fragmented occlusion. Fragmented occlusion is
much more challenging than ordinary partial occlusion and occurs frequently in
natural environments such as forests. A motivating example of fragmented
occlusion is object detection through foliage which is an essential requirement
in green border surveillance. This paper presents an analysis of
state-of-the-art detectors with imagery of green borders and proposes to train
Mask R-CNN on new training data which captures explicitly the problem of
fragmented occlusion. The results show clear improvements of Mask R-CNN with
this new training strategy (also against other detectors) for data showing
slight fragmented occlusion. | [
"cs.CV",
"eess.IV"
] |
Automatic segmentation of medical images is an important task for many
clinical applications. In practice, a wide range of anatomical structures are
visualised using different imaging modalities. In this paper, we investigate
whether a single convolutional neural network (CNN) can be trained to perform
different segmentation tasks.
A single CNN is trained to segment six tissues in MR brain images, the
pectoral muscle in MR breast images, and the coronary arteries in cardiac CTA.
The CNN therefore learns to identify the imaging modality, the visualised
anatomical structures, and the tissue classes.
For each of the three tasks (brain MRI, breast MRI and cardiac CTA), this
combined training procedure resulted in a segmentation performance equivalent
to that of a CNN trained specifically for that task, demonstrating the high
capacity of CNN architectures. Hence, a single system could be used in clinical
practice to automatically perform diverse segmentation tasks without
task-specific training. | [
"cs.CV"
] |
Underwater image enhancement is such an important low-level vision task with
many applications that numerous algorithms have been proposed in recent years.
These algorithms developed upon various assumptions demonstrate successes from
various aspects using different data sets and different metrics. In this work,
we set up an undersea image capturing system and construct a large-scale
Real-world Underwater Image Enhancement (RUIE) data set divided into three
subsets. The three subsets target three challenging aspects of enhancement,
i.e., image visibility quality, color casts, and higher-level
detection/classification, respectively. We conduct extensive and systematic
experiments on RUIE to evaluate the effectiveness and limitations of various
algorithms to enhance visibility and correct color casts on images with
hierarchical categories of degradation. Moreover, underwater image enhancement
in practice usually serves as a preprocessing step for mid-level and high-level
vision tasks. We thus exploit the object detection performance on enhanced
images as a brand new task-specific evaluation criterion. The findings from
these evaluations not only confirm what is commonly believed, but also suggest
promising solutions and new directions for visibility enhancement, color
correction, and object detection on real-world underwater images. | [
"cs.CV"
] |
Implicit surface representations, such as signed-distance functions, combined
with deep learning have led to impressive models which can represent detailed
shapes of objects with arbitrary topology. Since a continuous function is
learned, the reconstructions can also be extracted at any arbitrary resolution.
However, large datasets such as ShapeNet are required to train such models. In
this paper, we present a new mid-level patch-based surface representation. At
the level of patches, objects across different categories share similarities,
which leads to more generalizable models. We then introduce a novel method to
learn this patch-based representation in a canonical space, such that it is as
object-agnostic as possible. We show that our representation trained on one
category of objects from ShapeNet can also well represent detailed shapes from
any other category. In addition, it can be trained using much fewer shapes,
compared to existing approaches. We show several applications of our new
representation, including shape interpolation and partial point cloud
completion. Due to explicit control over positions, orientations and scales of
patches, our representation is also more controllable compared to object-level
representations, which enables us to deform encoded shapes non-rigidly. | [
"cs.CV",
"cs.GR"
] |
Endoscopic artefact detection challenge consists of 1) Artefact detection, 2)
Semantic segmentation, and 3) Out-of-sample generalisation. For the semantic
segmentation task, we propose a multi-plateau ensemble of FPN (Feature Pyramid
Network) with EfficientNet as the feature extractor/encoder. For the object
detection task, we use a three-model ensemble of RetinaNet with a ResNet50
backbone and Faster R-CNN (FPN + DC5) with a ResNeXt101 backbone. A PyTorch
implementation of our approach is available at
https://github.com/ubamba98/EAD2020. | [
"cs.CV",
"eess.IV"
] |
Dynamic adaptation in single-neuron response plays a fundamental role in
neural coding in biological neural networks. Yet, most neural activation
functions used in artificial networks are fixed and mostly considered as an
inconsequential architecture choice. In this paper, we investigate nonlinear
activation function adaptation over the large time scale of learning, and
outline its impact on sequential processing in recurrent neural networks. We
introduce a novel parametric family of nonlinear activation functions, inspired
by input-frequency response curves of biological neurons, which allows
interpolation between well-known activation functions such as ReLU and sigmoid.
Using simple numerical experiments and tools from dynamical systems and
information theory, we study the role of neural activation features in learning
dynamics. We find that activation adaptation provides distinct task-specific
solutions and in some cases, improves both learning speed and performance.
Importantly, we find that optimal activation features emerging from our
parametric family are considerably different from typical functions used in the
literature, suggesting that exploiting the gap between these usual
configurations can help learning. Finally, we outline situations where neural
activation adaptation alone may help mitigate changes in input statistics in a
given task, suggesting mechanisms for transfer learning optimization. | [
"cs.LG",
"cs.NE",
"q-bio.NC",
"stat.ML"
] |
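
One simple way to realize such a parametric family, sketched below, blends a
softplus (which tends to ReLU as the gain grows) with a sigmoid using two
learnable scalars. This is an illustrative parameterization, not necessarily
the paper's exact family.

```python
import torch
import torch.nn.functional as F

def adaptive_activation(x, n, s):
    """Two-parameter nonlinearity interpolating between a rectifying and a
    saturating regime. `n` (gain) and `s` (blend) can be learned jointly
    with the network weights over the long time scale of training."""
    rectify = F.softplus(n * x) / n   # -> ReLU(x) as n grows large
    saturate = torch.sigmoid(n * x)   # sigmoid-like saturating regime
    return (1.0 - s) * rectify + s * saturate
```
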
Lidar-based 3D object detection and classification are essential tasks for
autonomous driving (AD). A lidar sensor can provide a 3D point cloud
reconstruction of the surrounding environment, but real-time detection in 3D
point clouds still requires strong algorithms. This paper proposes a 3D object
detection method based on point clouds and images which consists of three
parts: (1) lidar-camera calibration and undistorted image transformation;
(2) YOLO-based detection and point cloud extraction; (3) K-means-based point
cloud segmentation, with detection experiments and evaluation on depth images.
In our approach, the camera captures images for real-time 2D object detection
with YOLO, and the resulting bounding boxes are transferred to a node that
performs 3D object detection on the lidar point cloud data. By checking
whether the 2D coordinate projected from a 3D point lies inside an object
bounding box, high-speed 3D object recognition can be achieved on the GPU.
Accuracy and precision improve after K-means clustering of the point cloud,
and our detection method is faster than PointNet. | [
"cs.CV"
] |
In this paper we propose a deep architecture for detecting people attributes
(e.g. gender, race, clothing ...) in surveillance contexts. Our proposal
explicitly deals with poor resolution and occlusion issues that often occur in
surveillance footage by enhancing the images by means of Deep Convolutional
Generative Adversarial Networks (DCGAN). Experiments show that by combining
both our Generative Reconstruction and Deep Attribute Classification Network we
can effectively extract attributes even when resolution is poor and in presence
of strong occlusions up to 80\% of the whole person figure. | [
"cs.CV"
] |
Monitoring the technical condition of infrastructure is a crucial element to
its maintenance. Currently applied methods are outdated, labour-intensive and
inaccurate. At the same time, the latest methods using Artificial Intelligence
techniques are severely limited in their application due to two main factors --
labour-intensive gathering of new datasets and high demand for computing power.
We propose to utilize a custom-made framework, KrakN, to overcome these
limiting factors. It enables the development of unique infrastructure defect
detectors on digital images, achieving an accuracy above 90%. The framework
supports semi-automatic creation of new datasets and has modest computing power
requirements. It is implemented in the form of a ready-to-use software package
openly distributed to the public. Thus, it can be used to immediately implement
the methods proposed in this paper in the process of infrastructure management
by government units, regardless of their financial capabilities. | [
"cs.CV",
"cs.NE"
] |
We address the problem of localizing waste objects from a color image and an
optional depth image, which is a key perception component for robotic
interaction with such objects. Specifically, our method integrates the
intensity and depth information at multiple levels of spatial granularity.
Firstly, a scene-level deep network produces an initial coarse segmentation,
based on which we select a few potential object regions to zoom in and perform
fine segmentation. The results of the above steps are further integrated into a
densely connected conditional random field that learns to respect the
appearance, depth, and spatial affinities with pixel-level accuracy. In
addition, we create a new RGBD waste object segmentation dataset, MJU-Waste,
that is made public to facilitate future research in this area. The efficacy of
our method is validated on both MJU-Waste and the Trash Annotation in Context
(TACO) dataset. | [
"cs.CV"
] |
Motivation: Traditional image attribution methods struggle to satisfactorily
explain predictions of neural networks. Prediction explanation is important,
especially in medical imaging, for avoiding the unintended consequences of
deploying AI systems when false positive predictions can impact patient care.
Thus, there is a pressing need to develop improved models for model
explainability and introspection. Specific problem: A new approach is to
transform input images to increase or decrease features which cause the
prediction. However, current approaches are difficult to implement as they are
monolithic or rely on GANs. These hurdles prevent wide adoption. Our approach:
Given an arbitrary classifier, we propose a simple autoencoder and gradient
update (Latent Shift) that can transform the latent representation of a
specific input image to exaggerate or curtail the features used for prediction.
We use this method to study chest X-ray classifiers and evaluate their
performance. We conduct a reader study with two radiologists assessing 240
chest X-ray predictions to identify which ones are false positives (half are)
using traditional attribution maps or our proposed method. Results: We found
low overlap with ground truth pathology masks for models with reasonably high
accuracy. However, the results from our reader study indicate that these models
are generally looking at the correct features. We also found that the Latent
Shift explanation allows a user to have more confidence in true positive
predictions compared to traditional approaches (0.15$\pm$0.95 in a 5 point
scale with p=0.01) with only a small increase in false positive predictions
(0.04$\pm$1.06 with p=0.57).
Accompanying webpage: https://mlmed.org/gifsplanation
Source code: https://github.com/mlmed/gifsplanation | [
"cs.CV",
"cs.AI",
"eess.IV"
] |
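
A minimal sketch of the Latent Shift update as described above, assuming a
pretrained autoencoder (`encoder`, `decoder`) and a frozen `classifier`
returning one score per sample; names are illustrative, and the linked
repository contains the authors' implementation.

```python
import torch

def latent_shift(x, encoder, decoder, classifier, lambdas=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """Move an input's latent code along the classifier's gradient to
    exaggerate (positive lambda) or curtail (negative lambda) the features
    driving the prediction, then decode each shifted code for viewing."""
    z = encoder(x).detach().requires_grad_(True)
    score = classifier(decoder(z)).sum()
    grad = torch.autograd.grad(score, z)[0]
    return [decoder(z + lam * grad).detach() for lam in lambdas]
```
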
Temporal information is essential to learning effective policies with
Reinforcement Learning (RL). However, current state-of-the-art RL algorithms
either assume that such information is given as part of the state space or,
when learning from pixels, use the simple heuristic of frame-stacking to
implicitly capture temporal information present in the image observations. This
heuristic is in contrast to the current paradigm in video classification
architectures, which utilize explicit encodings of temporal information through
methods such as optical flow and two-stream architectures to achieve
state-of-the-art performance. Inspired by leading video classification
architectures, we introduce the Flow of Latents for Reinforcement Learning
(Flare), a network architecture for RL that explicitly encodes temporal
information through latent vector differences. We show that Flare (i) recovers
optimal performance in state-based RL without explicit access to the state
velocity, solely with positional state information, (ii) achieves
state-of-the-art performance on challenging pixel-based continuous control
tasks within the DeepMind control benchmark suite, namely quadruped walk,
hopper hop, finger turn hard, pendulum swing, and walker run, (iii) is the
most sample-efficient model-free pixel-based RL algorithm, outperforming the
prior model-free state of the art by 1.9X and 1.5X on the 500k and 1M step
benchmarks, respectively, and (iv), when augmented over Rainbow DQN,
outperforms this state-of-the-art baseline on 5 of 8 challenging Atari games
at the 100M time step benchmark. | [
"cs.LG"
] |
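
A minimal sketch of the core Flare idea, concatenating per-frame latents with
their temporal differences before the policy head; the shapes and names are
assumptions.

```python
import torch

def flare_features(latents):
    """Fuse T consecutive per-frame latents (B, T, D) with their pairwise
    differences, giving the policy explicit 'velocity' information instead
    of relying on implicit cues from frame stacking."""
    diffs = latents[:, 1:] - latents[:, :-1]            # (B, T-1, D)
    fused = torch.cat([latents[:, 1:], diffs], dim=-1)  # align frames & diffs
    return fused.flatten(start_dim=1)                   # policy/critic input
```
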
Can meta-learning discover generic ways of processing time series (TS) from a
diverse dataset so as to greatly improve generalization on new TS coming from
different datasets? This work provides positive evidence to this using a broad
meta-learning framework which we show subsumes many existing meta-learning
algorithms. Our theoretical analysis suggests that residual connections act as
a meta-learning adaptation mechanism, generating a subset of task-specific
parameters based on a given TS input, thus gradually expanding the expressive
power of the architecture on-the-fly. The same mechanism is shown via
linearization analysis to have the interpretation of a sequential update of the
final linear layer. Our empirical results on a wide range of data emphasize the
importance of the identified meta-learning mechanisms for successful zero-shot
univariate forecasting, suggesting that it is viable to train a neural network
on a source TS dataset and deploy it on a different target TS dataset without
retraining, resulting in performance that is at least as good as that of
state-of-practice univariate forecasting models. | [
"cs.LG",
"stat.ML"
] |
Many machine learning models operate on images, but ignore the fact that
images are 2D projections formed by 3D geometry interacting with light, in a
process called rendering. Enabling ML models to understand image formation
might be key for generalization. However, due to an essential rasterization
step involving discrete assignment operations, rendering pipelines are
non-differentiable and thus largely inaccessible to gradient-based ML
techniques. In this paper, we present {\emph DIB-R}, a differentiable rendering
framework which allows gradients to be analytically computed for all pixels in
an image. Key to our approach is to view foreground rasterization as a weighted
interpolation of local properties and background rasterization as a
distance-based aggregation of global geometry. Our approach allows for accurate
optimization over vertex positions, colors, normals, light directions and
texture coordinates through a variety of lighting models. We showcase our
approach in two ML applications: single-image 3D object prediction, and 3D
textured object generation, both trained exclusively using 2D
supervision. Our project website is: https://nv-tlabs.github.io/DIB-R/ | [
"cs.CV"
] |
Endoscopic videos from multicentres often have different imaging conditions,
e.g., color and illumination, which make the models trained on one domain
usually fail to generalize well to another. Domain adaptation is one of the
potential solutions to address the problem. However, few existing works have
focused on the translation of video-based data. In this work, we propose a
novel generative adversarial network (GAN), namely VideoGAN, to transfer the
video-based data across different domains. As the frames of a video may have
similar content and imaging conditions, the proposed VideoGAN has an X-shape
generator to preserve the intra-video consistency during translation.
Furthermore, a loss function, namely color histogram loss, is proposed to tune
the color distribution of each translated frame. Two colonoscopic datasets from
different centres, i.e., CVC-Clinic and ETIS-Larib, are adopted to evaluate the
performance of domain adaptation of our VideoGAN. Experimental results
demonstrate that the adapted colonoscopic video generated by our VideoGAN can
significantly boost the segmentation accuracy, i.e., an improvement of 5%, of
colorectal polyps on multicentre datasets. As our VideoGAN is a general network
architecture, we also evaluate its performance with the CamVid driving video
dataset on the cloudy-to-sunny translation task. Comprehensive experiments show
that the domain gap could be substantially narrowed down by our VideoGAN. | [
"cs.CV"
] |
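
The color histogram loss can be made differentiable by letting every pixel
cast a soft, Gaussian-weighted vote into each bin; a minimal sketch of that
common construction follows (the paper's exact loss may differ).

```python
import torch

def soft_histogram(channel, bins=64, sigma=0.02):
    """Differentiable histogram of a single channel with values in [0, 1]:
    each pixel contributes a Gaussian vote to every bin."""
    centers = torch.linspace(0.0, 1.0, bins, device=channel.device)
    votes = torch.exp(-(channel.reshape(-1, 1) - centers) ** 2 / (2 * sigma ** 2))
    hist = votes.sum(dim=0)
    return hist / hist.sum()

def color_histogram_loss(frame, reference, bins=64):
    """Penalize per-channel histogram mismatch between a translated frame
    (C, H, W) and a reference, steering the generator's color distribution."""
    loss = 0.0
    for c in range(frame.shape[0]):
        loss = loss + torch.mean(
            (soft_histogram(frame[c], bins) - soft_histogram(reference[c], bins)) ** 2)
    return loss
```
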
We consider the problem of finding optimal policies for a Markov Decision
Process with almost sure constraints on state transitions and action triplets.
We define value and action-value functions that satisfy a barrier-based
decomposition which allows for the identification of feasible policies
independently of the reward process. We prove that, given a policy {\pi},
certifying whether certain state-action pairs lead to feasible trajectories
under {\pi} is equivalent to solving an auxiliary problem aimed at finding the
probability of performing an unfeasible transition. Using this
interpretation, we develop a Barrier-learning algorithm, based on Q-Learning,
that identifies such unsafe state-action pairs. Our analysis motivates the need
to enhance the Reinforcement Learning (RL) framework with an additional signal,
besides rewards, called here damage function that provides feasibility
information and enables the solution of RL problems with model-free
constraints. Moreover, our Barrier-learning algorithm wraps around existing RL
algorithms, such as Q-Learning and SARSA, giving them the ability to solve
almost-surely constrained problems. | [
"cs.LG",
"cs.SY",
"eess.SY"
] |
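
A plausible tabular instantiation of the barrier-learning wrapper: alongside
the reward-driven Q-table, a barrier table B estimates the chance that a
state-action pair leads to an unfeasible transition (signalled by the damage
function), and bootstrapping is restricted to actions deemed feasible. The
recursion here is illustrative, not the paper's exact update.

```python
import numpy as np

def barrier_q_update(Q, B, s, a, r, s_next, damage,
                     alpha=0.1, gamma=0.99, thresh=0.05):
    """One transition of Q-learning wrapped with a barrier estimate.
    `damage` in [0, 1] signals whether the transition was unfeasible."""
    # Barrier: propagate the violation signal through the safest action.
    B[s, a] += alpha * (max(damage, B[s_next].min()) - B[s, a])
    # Q-learning: bootstrap only from actions currently deemed feasible.
    feasible = np.where(B[s_next] <= thresh)[0]
    best = Q[s_next, feasible].max() if feasible.size else Q[s_next].max()
    Q[s, a] += alpha * (r + gamma * best - Q[s, a])
```
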
Every segmentation algorithm has parameters that need to be adjusted in order
to achieve good results. Evolving fuzzy systems for adjustment of segmentation
parameters have been proposed recently (Evolving fuzzy image segmentation --
EFIS) [1]. However, similar to any other algorithm, EFIS suffers from a few
limitations when used in practice. As a major drawback, EFIS depends on
detection of the object of interest for feature calculation, a task that is
highly application-dependent. In this paper, a new version of EFIS is proposed
to overcome these limitations. The new EFIS, called self-configuring EFIS
(SC-EFIS), uses available training data to auto-configure the parameters that
are fixed in EFIS. In addition, the proposed SC-EFIS relies on a feature selection
process that does not require the detection of a region of interest (ROI). | [
"cs.CV"
] |
Deep neural networks have shown their profound impact on achieving human
level performance in visual saliency prediction. However, it is still unclear
how they learn the task and what it means in terms of understanding human
visual system. In this work, we develop a technique to derive explainable
saliency models from their corresponding deep neural architecture based
saliency models by applying human perception theories and the conventional
concepts of saliency. This technique helps us understand the learning pattern
of the deep network at its intermediate layers through their activation maps.
Initially, we consider two state-of-the-art deep saliency models, namely UNISAL
and MSI-Net for our interpretation. We use a set of biologically plausible
log-Gabor filters to identify and reconstruct their activation maps using our
explainable saliency model. The final saliency map is generated
using these reconstructed activation maps. We also build our own deep saliency
model named cross-concatenated multi-scale residual block based network
(CMRNet) for saliency prediction. Then, we evaluate and compare the performance
of the explainable models derived from UNISAL, MSI-Net and CMRNet on three
benchmark datasets with other state-of-the-art methods. Hence, we propose that
this approach to explainability can be applied to any deep visual saliency
model for interpretation, which makes it generic. | [
"cs.CV",
"cs.LG"
] |
Depth estimation and 3D object detection are critical for scene understanding
but remain challenging to perform with a single image due to the loss of 3D
information during image capture. Recent models using deep neural networks have
improved monocular depth estimation performance, but there is still difficulty
in predicting absolute depth and generalizing outside a standard dataset. Here
we introduce the paradigm of deep optics, i.e. end-to-end design of optics and
image processing, to the monocular depth estimation problem, using coded
defocus blur as an additional depth cue to be decoded by a neural network. We
evaluate several optical coding strategies along with an end-to-end
optimization scheme for depth estimation on three datasets, including NYU Depth
v2 and KITTI. We find an optimized freeform lens design yields the best
results, but chromatic aberration from a singlet lens offers significantly
improved performance as well. We build a physical prototype and validate that
chromatic aberrations improve depth estimation on real-world results. In
addition, we train object detection networks on the KITTI dataset and show that
the lens optimized for depth estimation also results in improved 3D object
detection performance. | [
"cs.CV",
"eess.IV"
] |
Generative Adversarial Networks have shown remarkable success in learning a
distribution that faithfully recovers a reference distribution in its entirety.
However, in some cases, we may want to only learn some aspects (e.g., cluster
or manifold structure), while modifying others (e.g., style, orientation or
dimension). In this work, we propose an approach to learn generative models
across such incomparable spaces, and demonstrate how to steer the learned
distribution towards target properties. A key component of our model is the
Gromov-Wasserstein distance, a notion of discrepancy that compares
distributions relationally rather than absolutely. While this framework
subsumes current generative models in identically reproducing distributions,
its inherent flexibility allows application to tasks in manifold learning,
relational learning and cross-domain learning. | [
"cs.LG",
"stat.ML"
] |
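
For reference, the Gromov-Wasserstein objective underlying the relational
comparison, evaluated here for a fixed coupling T; practical solvers avoid
materializing the 4-D tensor and alternate on T.

```python
import numpy as np

def gromov_wasserstein_cost(C1, C2, T):
    """GW objective for a coupling T (n x m): distributions are compared
    through their intra-space distance matrices C1 (n x n) and C2 (m x m),
    so the two spaces need not share a common metric. O(n^2 m^2) memory;
    intended as a sketch only."""
    D = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2  # D[i, j, k, l]
    return np.einsum('ijkl,ij,kl->', D, T, T)
```
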
Self-attention learns pairwise interactions to model long-range dependencies,
yielding great improvements for video action recognition. In this paper, we
seek a deeper understanding of self-attention for temporal modeling in videos.
We first demonstrate that the entangled modeling of spatio-temporal information
by flattening all pixels is sub-optimal, failing to capture temporal
relationships among frames explicitly. To this end, we introduce Global
Temporal Attention (GTA), which performs global temporal attention on top of
spatial attention in a decoupled manner. We apply GTA on both pixels and
semantically similar regions to capture temporal relationships at different
levels of spatial granularity. Unlike conventional self-attention that computes
an instance-specific attention matrix, GTA directly learns a global attention
matrix that is intended to encode temporal structures that generalize across
different samples. We further augment GTA with a cross-channel multi-head
fashion to exploit channel interactions for better temporal modeling. Extensive
experiments on 2D and 3D networks demonstrate that our approach consistently
enhances temporal modeling and provides state-of-the-art performance on three
video action recognition datasets. | [
"cs.CV"
] |
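
A minimal single-head sketch of the decoupled global temporal attention:
unlike query-key self-attention, the (T x T) attention matrix is a shared
learned parameter rather than instance-specific. The layer structure is
illustrative.

```python
import torch
import torch.nn as nn

class GlobalTemporalAttention(nn.Module):
    """Apply a learned, sample-independent (T x T) attention matrix over
    the temporal axis of per-frame spatial features."""
    def __init__(self, num_frames):
        super().__init__()
        self.attn = nn.Parameter(torch.eye(num_frames))  # global, learned

    def forward(self, x):
        # x: (B, T, N, C) spatial features for T frames
        w = self.attn.softmax(dim=-1)
        return torch.einsum('st,btnc->bsnc', w, x)
```
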
Visual dialog is a challenging task that requires the comprehension of the
semantic dependencies among implicit visual and textual contexts. This task can
refer to the relation inference in a graphical model with sparse contexts and
unknown graph structure (relation descriptor), and how to model the underlying
context-aware relation inference is critical. To this end, we propose a novel
Context-Aware Graph (CAG) neural network. Each node in the graph corresponds to
a joint semantic feature, including both object-based (visual) and
history-related (textual) context representations. The graph structure
(relations in dialog) is iteratively updated using an adaptive top-$K$ message
passing mechanism. Specifically, in every message passing step, each node
selects the $K$ most relevant nodes and only receives messages from them.
Then, after the update, we impose graph attention on all the nodes to get the
final graph embedding and infer the answer. In CAG, each node has dynamic
relations in the graph (different related $K$ neighbor nodes), and only the
most relevant nodes contribute to the context-aware relational graph
inference. Experimental results on VisDial v0.9 and v1.0 datasets show that CAG
outperforms comparative methods. Visualization results further validate the
interpretability of our method. | [
"cs.CV"
] |
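
A minimal sketch of the adaptive top-$K$ message passing step, assuming a
precomputed pairwise relevance matrix; the residual update and normalization
are illustrative choices.

```python
import torch

def topk_message_passing(h, scores, k=4):
    """One step of adaptive top-K message passing: each node keeps only its
    K most relevant neighbours and aggregates messages from them alone.
    h: (N, D) node features; scores: (N, N) pairwise relevance."""
    no_self = torch.eye(h.size(0), dtype=torch.bool, device=h.device)
    scores = scores.masked_fill(no_self, float('-inf'))
    topv, topi = scores.topk(k, dim=-1)            # (N, K) kept edges
    w = topv.softmax(dim=-1)                       # normalize over kept edges
    msgs = (w.unsqueeze(-1) * h[topi]).sum(dim=1)  # (N, D)
    return h + msgs                                # residual node update
```
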
Semantic segmentation is the pixel-wise labelling of an image. Since the
problem is defined at the pixel level, determining image-level class labels
alone is not sufficient; labels must be localised at the original image pixel
resolution. Boosted by the extraordinary ability of convolutional neural
networks (CNN) in creating semantic, high level and hierarchical image
features; several deep learning-based 2D semantic segmentation approaches have
been proposed within the last decade. In this survey, we mainly focus on the
recent scientific developments in semantic segmentation, specifically on deep
learning-based methods using 2D images. We start with an analysis of the
public image sets and leaderboards for 2D semantic segmentation, with an
overview of the techniques employed in performance evaluation. In examining the
evolution of the field, we chronologically categorise the approaches into
three main periods, namely the pre- and early deep learning era, the fully
convolutional era, and the post-FCN era. We technically analyse the solutions
put forward in terms of solving the fundamental problems of the field, such as
fine-grained localisation and scale invariance. Before drawing our conclusions,
we present a table of methods from all mentioned eras, with a summary of each
approach that explains their contribution to the field. We conclude the survey
by discussing the current challenges of the field and to what extent they have
been solved. | [
"cs.CV"
] |
It is often desired to train 6D pose estimation systems on synthetic data
because manual annotation is expensive. However, due to the large domain gap
between the synthetic and real images, synthesizing realistic color images is expensive.
In contrast, this domain gap is considerably smaller and easier to fill for
depth information. In this work, we present a system that regresses 6D object
pose from depth information represented by point clouds, and a lightweight data
synthesis pipeline that creates synthetic point cloud segments for training. We
use an augmented autoencoder (AAE) for learning a latent code that encodes 6D
object pose information for pose regression. The data synthesis pipeline only
requires texture-less 3D object models and desired viewpoints, and it is cheap
in terms of both time and hardware storage. Our data synthesis process is up to
three orders of magnitude faster than commonly applied approaches that render
RGB image data. We show the effectiveness of our system on the LineMOD, LineMOD
Occlusion, and YCB Video datasets. The implementation of our system is
available at: https://github.com/GeeeG/CloudAAE. | [
"cs.CV"
] |
Widespread applications of deep learning have led to a plethora of
pre-trained neural network models for common tasks. Such models are often
adapted from other models via transfer learning. The models may have varying
training sets, training algorithms, network architectures, and
hyper-parameters. For a given application, what is the most suitable model in a
model repository? This is a critical question for practical deployments but it
has not received much attention. This paper introduces the novel problem of
searching and ranking models based on suitability relative to a target dataset
and proposes a ranking algorithm called \textit{neuralRank}. The key idea
behind this algorithm is to base model suitability on the discriminating power
of a model, using a novel metric to measure it. With experimental results on
the MNIST, Fashion, and CIFAR10 datasets, we demonstrate that (1) neuralRank is
independent of the domain, the training set, or the network architecture and
(2) that the models ranked highly by neuralRank tend to have higher
model accuracy in practice. | [
"cs.LG",
"stat.ML"
] |
Finding the camera pose is an important step in many egocentric video
applications. It has been widely reported that state-of-the-art SLAM
algorithms fail on egocentric videos. In this paper, we propose a robust method
for camera pose estimation, designed specifically for egocentric videos. In an
egocentric video, the camera views the same scene point multiple times as the
wearer's head sweeps back and forth. We use this specific motion profile to
perform short loop closures aligned with wearer's footsteps. For egocentric
videos, depth estimation is usually noisy. In an important departure, we use 2D
computations for rotation averaging which do not rely upon depth estimates.
These two modifications result in a much more stable algorithm, as is evident
from our
experiments on various egocentric video datasets for different egocentric
applications. The proposed algorithm resolves a long standing problem in
egocentric vision and unlocks new usage scenarios for future applications. | [
"cs.CV"
] |
RGB-D salient object detection (SOD) demonstrates superior detection in
complex environments due to the additional depth information introduced in
the data. Inevitably, an independent stream is introduced to extract features
from depth images, leading to extra computation and parameters. This
methodology, which sacrifices model size to improve detection accuracy, may
impede the practical application of SOD. To tackle this dilemma,
we propose a dynamic distillation method along with a lightweight framework,
which significantly reduces the parameters. This method considers the factors
of both teacher and student performance within the training stage and
dynamically assigns the distillation weight instead of applying a fixed weight
on the student model. Extensive experiments are conducted on five public
datasets to demonstrate that our method can achieve competitive performance
compared to 10 prior methods through a 78.2MB lightweight structure. | [
"cs.CV"
] |
In this paper, we investigate the degree of explainability of graph neural
networks (GNNs). Existing explainers work by finding global/local subgraphs to
explain a prediction, but they are applied after a GNN has already been
trained. Here, we propose a meta-learning framework for improving the level of
explainability of a GNN directly at training time, by steering the optimization
procedure towards what we call `interpretable minima'. Our framework (called
MATE, MetA-Train to Explain) jointly trains a model to solve the original task,
e.g., node classification, and to provide easily processable outputs for
downstream algorithms that explain the model's decisions in a human-friendly
way. In particular, we meta-train the model's parameters to quickly minimize
the error of an instance-level GNNExplainer trained on-the-fly on randomly
sampled nodes. The final internal representation relies upon a set of features
that can be `better' understood by an explanation algorithm, e.g., another
instance of GNNExplainer. Our model-agnostic approach can improve the
explanations produced for different GNN architectures and use any
instance-based explainer to drive this process. Experiments on synthetic and
real-world datasets for node and graph classification show that we can produce
models that are consistently easier to explain by different algorithms.
Furthermore, this increase in explainability comes at no cost for the accuracy
of the model. | [
"cs.LG",
"stat.ML"
] |
Information-theoretic bounded rationality describes utility-optimizing
decision-makers whose limited information-processing capabilities are
formalized by information constraints. One of the consequences of bounded
rationality is that resource-limited decision-makers can join together to solve
decision-making problems that are beyond the capabilities of each individual.
Here, we study an information-theoretic principle that drives division of labor
and specialization when decision-makers with information constraints are joined
together. We devise an on-line learning rule of this principle that learns a
partitioning of the problem space such that it can be solved by specialized
linear policies. We demonstrate the approach for decision-making problems whose
complexity exceeds the capabilities of individual decision-makers, but can be
solved by combining the decision-makers optimally. The strength of the model is
that it is abstract and principled, yet has direct applications in
classification, regression, reinforcement learning and adaptive control. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
In this paper, we apply a dynamic model for manoeuvring targets in the SIR
particle filter algorithm to improve the tracking accuracy of multiple
manoeuvring targets. In our proposed approach, a color distribution model is
used to detect changes of the target's model, and the deformation of the
target's model is monitored: if it exceeds a predetermined threshold, the
model is updated. The Global Nearest Neighbor (GNN) algorithm is used for
data association. We name our proposed method Deformation Detection Particle
Filter (DDPF). DDPF is compared with the basic SIR-PF algorithm on real
airshow videos. The comparisons show that the basic SIR-PF algorithm is
unable to track manoeuvring targets when rotation or scaling occurs in the
target's model, whereas DDPF updates the target's model in such cases. Thus,
the proposed approach tracks manoeuvring targets more efficiently and
accurately. | [
"cs.CV",
"cs.AI"
] |
Automatic video captioning aims to train models to generate text descriptions
for all segments in a video, however, the most effective approaches require
large amounts of manual annotation which is slow and expensive. Active learning
is a promising way to efficiently build a training set for video captioning
tasks while reducing the need to manually label uninformative examples. In this
work we both explore various active learning approaches for automatic video
captioning and show that a cluster-regularized ensemble strategy provides the
best active learning approach to efficiently gather training sets for video
captioning. We evaluate our approaches on the MSR-VTT and LSMDC datasets using
both transformer and LSTM based captioning models and show that our novel
strategy can achieve high performance while using up to 60% less training data
than strong state-of-the-art baselines. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
There has recently been significant interest in training reinforcement
learning (RL) agents in vision-based environments. This poses many challenges,
such as high dimensionality and potential for observational overfitting through
spurious correlations. A promising approach to solve both of these problems is
a self-attention bottleneck, which provides a simple and effective framework
for learning high performing policies, even in the presence of distractions.
However, due to poor scalability of attention architectures, these methods do
not scale beyond low resolution visual inputs, using large patches (thus small
attention matrices). In this paper we make use of new efficient attention
algorithms, recently shown to be highly effective for Transformers, and
demonstrate that these new techniques can be applied in the RL setting. This
allows our attention-based controllers to scale to larger visual inputs, and
facilitate the use of smaller patches, even individual pixels, improving
generalization. In addition, we propose a new efficient algorithm approximating
softmax attention with what we call hybrid random features, leveraging the
theory of angular kernels. We show theoretically and empirically that hybrid
random features is a promising approach when using attention for vision-based
RL. | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.RO"
] |
Searching for objects in indoor organized environments such as homes or
offices is part of our everyday activities. When looking for a target object,
we jointly reason about the rooms and containers the object is likely to be in;
the same type of container will have a different probability of having the
target depending on the room it is in. We also combine geometric and semantic
information to infer what container is best to search, or what other objects
are best to move, if the target object is hidden from view. We propose to use a
3D scene graph representation to capture the hierarchical, semantic, and
geometric aspects of this problem. To exploit this representation in a search
process, we introduce Hierarchical Mechanical Search (HMS), a method that
guides an agent's actions towards finding a target object specified with a
natural language description. HMS is based on a novel neural network
architecture that uses neural message passing of vectors with visual,
geometric, and linguistic information to allow HMS to reason across layers of
the graph while combining semantic and geometric cues. HMS is evaluated on a
novel dataset of 500 3D scene graphs with dense placements of semantically
related objects in storage locations, and is shown to be significantly better
than several baselines at finding objects and close to the oracle policy in
terms of the median number of actions required. Additional qualitative results
can be found at https://ai.stanford.edu/mech-search/hms. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] |
Molecular structure-property relationships are key to molecular engineering
for materials and drug discovery. The rise of deep learning offers a new viable
solution to elucidate the structure-property relationships directly from
chemical data. Here we show that the performance of graph convolutional
networks (GCNs) for the prediction of molecular properties can be improved by
incorporating attention and gate mechanisms. The attention mechanism enables a
GCN to identify atoms in different environments. The gated skip-connection
further improves the GCN by updating feature maps at an appropriate rate. We
demonstrate that the resulting attention- and gate-augmented GCN could extract
better structural features related to a target molecular property such as
solubility, polarity, synthetic accessibility and photovoltaic efficiency
compared to the vanilla GCN. More interestingly, it identified two distinct
parts of molecules as essential structural features for high photovoltaic
efficiency, and each of them coincided with the areas of donor and acceptor
orbitals for charge-transfer excitations, respectively. As a result, the new
model could accurately predict molecular properties and place molecules with
similar properties close to each other in a well-trained latent space, which is
critical for successful molecular engineering. | [
"cs.LG",
"stat.ML"
] |
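
A compact sketch of one attention- and gate-augmented GCN layer in the spirit
described above, for dense molecular graphs; the exact layer structure is
illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GatedAttentionGCNLayer(nn.Module):
    """GCN layer with (i) neighbour attention, so atoms in different
    environments are weighted differently, and (ii) a gated skip-connection,
    so feature maps update at a learned rate."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # h: (N, D) atom features; adj: (N, N) 0/1 adjacency with self-loops
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = self.attn(pairs).squeeze(-1).masked_fill(adj == 0, float('-inf'))
        a = e.softmax(dim=-1)                   # attention over neighbours
        h_new = torch.relu(a @ self.W(h))       # attended aggregation
        z = torch.sigmoid(self.gate(torch.cat([h, h_new], dim=-1)))
        return z * h_new + (1.0 - z) * h        # gated skip-connection
```
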
Generative Adversarial Networks (GANs) have seen steep ascension to the peak
of ML research zeitgeist in recent years. Mostly catalyzed by its success in
the domain of image generation, the technique has seen a wide range of adoption
in a variety of other problem domains. Although GANs have had a lot of success
in producing more realistic images than other approaches, they have only seen
limited use for text sequences. Generation of longer sequences compounds this
problem. Most recently, SeqGAN (Yu et al., 2017) has shown improvements in
adversarial evaluation and results with human evaluation compared to an
MLE-trained baseline. The main contributions of this paper are three-fold: 1.
We show results for sequence generation using a GAN architecture with efficient
policy gradient estimators, 2. We attain improved training stability, and 3. We
perform a comparative study of recent unbiased low variance gradient estimation
techniques such as REBAR (Tucker et al., 2017), RELAX (Grathwohl et al., 2018)
and REINFORCE (Williams, 1992). Using a simple grammar on synthetic datasets
with varying length, we indicate the quality of sequences generated by the
model. | [
"stat.ML",
"cs.LG"
] |
Consider learning a policy from example expert behavior, without interaction
with the expert or access to reinforcement signal. One approach is to recover
the expert's cost function with inverse reinforcement learning, then extract a
policy from that cost function with reinforcement learning. This approach is
indirect and can be slow. We propose a new general framework for directly
extracting a policy from data, as if it were obtained by reinforcement learning
following inverse reinforcement learning. We show that a certain instantiation
of our framework draws an analogy between imitation learning and generative
adversarial networks, from which we derive a model-free imitation learning
algorithm that obtains significant performance gains over existing model-free
methods in imitating complex behaviors in large, high-dimensional environments. | [
"cs.LG",
"cs.AI"
] |
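
To our understanding, the GAN analogy described above instantiates as the
saddle-point objective of generative adversarial imitation learning, with a
discriminator $D$ separating policy from expert state-action pairs and
$H(\pi)$ the policy's causal entropy:

```latex
\min_{\pi}\;\max_{D}\;
\mathbb{E}_{\pi}\bigl[\log D(s,a)\bigr]
+ \mathbb{E}_{\pi_E}\bigl[\log\bigl(1 - D(s,a)\bigr)\bigr]
- \lambda H(\pi)
```
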
Deep generative modelling for human body analysis is an emerging problem with
many interesting applications. However, the latent space learned by such
approaches is typically not interpretable, resulting in less flexibility. In
this work, we present deep generative models for human body analysis in which
the body pose and the visual appearance are disentangled. Such a
disentanglement allows independent manipulation of pose and appearance, and
hence enables applications such as pose-transfer without specific training for
such a task. Our proposed models, the Conditional-DGPose and the Semi-DGPose,
have different characteristics. In the first, body pose labels are taken as
conditioners, from a fully-supervised training set. In the second, our
structured semi-supervised approach allows for pose estimation to be performed
by the model itself and relaxes the need for labelled data. Therefore, the
Semi-DGPose aims for the joint understanding and generation of people in
images. It is not only capable of mapping images to interpretable latent
representations but also able to map these representations back to the image
space. We compare our models with relevant baselines, the ClothNet-Body and the
Pose Guided Person Generation networks, demonstrating their merits on the
Human3.6M, ChictopiaPlus and DeepFashion benchmarks. | [
"cs.CV",
"stat.ML"
] |
We propose a differentially private data generation paradigm using random
feature representations of kernel mean embeddings when comparing the
distribution of true data with that of synthetic data. We exploit the random
feature representations for two important benefits. First, we require a minimal
privacy cost for training deep generative models. This is because unlike
kernel-based distance metrics that require computing the kernel matrix on all
pairs of true and synthetic data points, we can detach the data-dependent term
from the term solely dependent on synthetic data. Hence, we need to perturb the
data-dependent term only once and then use it repeatedly during the generator
training. Second, we can obtain an analytic sensitivity of the kernel mean
embedding as the random features are norm bounded by construction. This removes
the necessity of hyper-parameter search for a clipping norm to handle the
unknown sensitivity of a generator network. We provide several variants of our
algorithm, differentially-private mean embeddings with random features
(DP-MERF) to jointly generate labels and input features for datasets such as
heterogeneous tabular data and image data. Our algorithm achieves drastically
better privacy-utility trade-offs than existing methods when tested on several
datasets. | [
"cs.LG",
"stat.ML"
] |
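
A minimal sketch of the random-feature mean embedding that the privacy
argument rests on: the feature map is norm-bounded by construction, so the
mean's sensitivity is analytic (2/n for neighbouring datasets) and the
data-dependent term can be perturbed once and reused throughout generator
training. `W` is an assumed (m/2, d) matrix of Gaussian frequencies.

```python
import numpy as np

def rff_mean_embedding(X, W):
    """Random Fourier feature mean embedding of a dataset X (n, d).
    Each phi(x) has unit norm, so replacing one record changes the mean
    by at most 2/n -- an analytic sensitivity, no clipping norm needed."""
    proj = X @ W.T                                  # (n, m/2) projections
    phi = np.hstack([np.cos(proj), np.sin(proj)])   # (n, m)
    phi /= np.sqrt(W.shape[0])                      # ||phi(x)||_2 = 1
    return phi.mean(axis=0)

# The Gaussian mechanism is applied once to this data-dependent embedding;
# the generator is then trained to match the noisy embedding at every step.
```
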
Understanding and explaining deep learning models is an imperative task.
Towards this, we propose a method that obtains gradient-based certainty
estimates that also provide visual attention maps. In particular, we address
the visual question answering task. We incorporate modern probabilistic deep
learning methods that we further improve by using the gradients for these
estimates. These have two-fold benefits: a) improvement in obtaining the
certainty estimates that correlate better with misclassified samples and b)
improved attention maps that provide state-of-the-art results in terms of
correlation with human attention regions. The improved attention maps result in
consistent improvement for various methods for visual question answering.
Therefore, the proposed technique can be thought of as a recipe for obtaining
improved certainty estimates and explanations for deep learning models. We
provide detailed empirical analysis for the visual question answering task on
all standard benchmarks and comparison with state of the art methods. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |