text (stringlengths 29–3.31k) | label (sequencelengths 1–11)
---|---|
Employing deep neural networks as natural image priors to solve inverse
problems either requires large amounts of data to sufficiently train expressive
generative models or can succeed with no data via untrained neural networks.
However, very few works have considered how to interpolate between these no- to
high-data regimes. In particular, how can one use the availability of a small
amount of data (even $5-25$ examples) to one's advantage in solving these
inverse problems and can a system's performance increase as the amount of data
increases as well? In this work, we consider solving linear inverse problems
when given a small number of examples of images that are drawn from the same
distribution as the image of interest. Compared to untrained neural networks
that use no data, we show how one can pre-train a neural network with a few
given examples to improve reconstruction results in compressed sensing and
semantic image recovery problems such as colorization. Our approach leads to
improved reconstruction as the amount of available data increases and is on par
with fully trained generative models, while requiring less than $1 \%$ of the
data needed to train a generative model. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
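The abstract above builds on using a (possibly pre-trained) neural network as an image prior for linear inverse problems. As a minimal, hypothetical sketch of that general recipe (not the authors' exact method), the PyTorch snippet below recovers a signal from compressed measurements by optimizing the latent input of a small stand-in generator; the decoder architecture and all dimensions are placeholders.

```python
import torch

# Sketch: recover x* from compressed measurements y = A @ x* by optimizing the
# latent input z of a (pre)trained generator G. G here is a stand-in MLP; in
# practice it would be the network pre-trained on the few available examples.
torch.manual_seed(0)
n, m, k = 256, 64, 16                       # signal dim, #measurements, latent dim
G = torch.nn.Sequential(                    # hypothetical decoder G: R^k -> R^n
    torch.nn.Linear(k, 128), torch.nn.ReLU(), torch.nn.Linear(128, n)
)
A = torch.randn(m, n) / m ** 0.5            # random Gaussian measurement matrix
x_true = G(torch.randn(k)).detach()         # "ground truth" lying in the range of G
y = A @ x_true                              # compressed measurements

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for step in range(500):                     # minimize ||A G(z) - y||^2 over z
    opt.zero_grad()
    loss = torch.sum((A @ G(z) - y) ** 2)
    loss.backward()
    opt.step()

print("reconstruction error:", torch.norm(G(z).detach() - x_true).item())
```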
Synthesizing images from a given text description involves engaging two types
of information: the content, which includes information explicitly described in
the text (e.g., color, composition, etc.), and the style, which is usually not
well described in the text (e.g., location, quantity, size, etc.). However,
previous works typically treat synthesis as a process of generating images from
the content alone, i.e., without learning meaningful style
representations. In this paper, we aim to learn two variables that are
disentangled in the latent space, representing content and style respectively.
We achieve this by augmenting current text-to-image synthesis frameworks with a
dual adversarial inference mechanism. Through extensive experiments, we show
that our model learns, in an unsupervised manner, style representations
corresponding to meaningful information present in the image that is
not well described in the text. The new framework also improves the quality of
synthesized images when evaluated on Oxford-102, CUB and COCO datasets. | [
"cs.CV"
] |
Place recognition is indispensable for a drift-free localization system. Due
to variations in the environment, place recognition using a single modality
has limitations. In this paper, we propose a bi-modal place recognition method,
which can extract a compound global descriptor from the two modalities, vision
and LiDAR. Specifically, we first build the elevation image generated from 3D
points as a structural representation. Then, we derive the correspondences
between 3D points and image pixels that are further used in merging the
pixel-wise visual features into the elevation map grids. In this way, we fuse
the structural features and visual features in a consistent bird's-eye view
frame, yielding a semantic representation, namely CORAL; the whole network
is called CORAL-VLAD. Comparisons on the Oxford RobotCar dataset show that CORAL-VLAD
has superior performance against other state-of-the-art methods. We also
demonstrate that our network can be generalized to other scenes and sensor
configurations on cross-city datasets. | [
"cs.CV",
"cs.RO"
] |
The rapid growth of research in exploiting machine learning to predict
chaotic systems has revived a recent interest in Hamiltonian Neural Networks
(HNNs) with physical constraints defined by Hamilton's equations of motion,
which represent a major class of physics-enhanced neural networks. We introduce
a class of HNNs capable of adaptable prediction of nonlinear physical systems:
by training the neural network based on time series from a small number of
bifurcation-parameter values of the target Hamiltonian system, the HNN can
predict the dynamical states at other parameter values, where the network has
not been exposed to any information about the system at these parameter values.
The architecture of the HNN differs from previous ones in that we
incorporate an input parameter channel, rendering the HNN parameter-cognizant.
We demonstrate, using paradigmatic Hamiltonian systems, that training the HNN
using time series from as few as four parameter values bestows the neural
machine with the ability to predict the state of the target system in an entire
parameter interval. Utilizing the ensemble maximum Lyapunov exponent and the
alignment index as indicators, we show that our parameter-cognizant HNN can
successfully predict the route of transition to chaos. Physics-enhanced machine
learning is a forefront area of research, and our adaptable HNNs provide an
approach to understanding machine learning with broad applications. | [
"cs.LG",
"nlin.CD"
] |
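For reference, the physical constraint that Hamiltonian Neural Networks enforce is Hamilton's equations of motion; a common, schematic formulation of the training loss is shown below (notation ours, with the bifurcation-parameter input channel written explicitly as mu; the paper's exact formulation may differ).

```latex
% Schematic HNN constraint and loss: the network outputs a scalar
% H_theta(q, p; mu) and its derivatives are taken by automatic differentiation.
\begin{align}
  \dot{q} = \frac{\partial \mathcal{H}_\theta(q,p;\mu)}{\partial p}, \qquad
  \dot{p} = -\frac{\partial \mathcal{H}_\theta(q,p;\mu)}{\partial q}, \\
  \mathcal{L}(\theta) = \left\lVert \dot{q}_{\text{data}} - \partial_p \mathcal{H}_\theta \right\rVert^2
  + \left\lVert \dot{p}_{\text{data}} + \partial_q \mathcal{H}_\theta \right\rVert^2 .
\end{align}
```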
The typical bottom-up human pose estimation framework includes two stages,
keypoint detection and grouping. Most existing works focus on developing
grouping algorithms, e.g., associative embedding, and pixel-wise keypoint
regression that we adopt in our approach. We present several schemes that have
rarely or only cursorily been studied before for improving keypoint detection and
grouping (keypoint regression) performance. First, we exploit the keypoint
heatmaps for pixel-wise keypoint regression instead of separating them for
improving keypoint regression. Second, we adopt a pixel-wise spatial
transformer network to learn adaptive representations for handling the scale
and orientation variance to further improve keypoint regression quality. Last,
we present a joint shape and heatvalue scoring scheme to promote the estimated
poses that are more likely to be true poses. Together with the tradeoff heatmap
estimation loss for balancing the background and keypoint pixels and thus
improving heatmap estimation quality, we get the state-of-the-art bottom-up
human pose estimation result. Code is available at
https://github.com/HRNet/HRNet-Bottom-up-Pose-Estimation. | [
"cs.CV"
] |
Currently, deep reinforcement learning (RL) shows impressive results in
complex gaming and robotic environments. Often these results are achieved at
the expense of huge computational costs and require an incredible number of
episodes of interaction between the agent and the environment. There are two
main approaches to improving the sample efficiency of reinforcement learning
methods - using hierarchical methods and expert demonstrations. In this paper,
we propose a combination of these approaches that allows the agent to use
low-quality demonstrations in complex vision-based environments with multiple
related goals. Our forgetful experience replay (ForgER) algorithm effectively
handles errors in expert data and reduces quality losses when adapting the
action space and state representation to the agent's capabilities. Our
proposed goal-oriented structuring of replay buffer allows the agent to
automatically highlight sub-goals for solving complex hierarchical tasks in
demonstrations. Our method is universal and can be integrated into various
off-policy methods. It surpasses all known state-of-the-art RL methods
using expert demonstrations on various model environments. The solution based
on our algorithm beats all the solutions for the famous MineRL competition and
allows the agent to mine a diamond in the Minecraft environment. | [
"cs.LG",
"cs.AI"
] |
Depth estimation from a stereo image pair has become one of the most explored
applications in computer vision, with most of the previous methods relying on
fully supervised learning settings. However, due to the difficulty in acquiring
accurate and scalable ground truth data, the training of fully supervised
methods is challenging. As an alternative, self-supervised methods are becoming
more popular to mitigate this challenge. In this paper, we introduce the H-Net,
a deep-learning framework for unsupervised stereo depth estimation that
leverages epipolar geometry to refine stereo matching. For the first time, a
Siamese autoencoder architecture is used for depth estimation which allows
mutual information between the rectified stereo images to be extracted. To
enforce the epipolar constraint, a mutual epipolar attention mechanism has
been designed that gives more emphasis to correspondences between features that
lie on the same epipolar line while learning mutual information between the
input stereo pair. Stereo correspondences are further enhanced by incorporating
semantic information to the proposed attention mechanism. More specifically,
the optimal transport algorithm is used to suppress attention and eliminate
outliers in areas not visible in both cameras. Extensive experiments on
KITTI 2015 and Cityscapes show that our method outperforms the state-of-the-art
unsupervised stereo depth estimation methods while closing the gap with the
fully supervised approaches. | [
"cs.CV"
] |
Many real-world applications, such as those in medical domains,
recommendation systems, etc., can be formulated as large state space
reinforcement learning problems with only a small budget of the number of
policy changes, i.e., low switching cost. This paper focuses on the linear
Markov Decision Process (MDP) recently studied in [Yang et al 2019, Jin et al
2020] where the linear function approximation is used for generalization on the
large state space. We present the first algorithm for linear MDP with a low
switching cost. Our algorithm achieves an
$\widetilde{O}\left(\sqrt{d^3H^4K}\right)$ regret bound with a near-optimal
$O\left(d H\log K\right)$ global switching cost where $d$ is the feature
dimension, $H$ is the planning horizon and $K$ is the number of episodes the
agent plays. Our regret bound matches the best existing polynomial algorithm by
[Jin et al 2020] and our switching cost is exponentially smaller than theirs.
When specialized to tabular MDP, our switching cost bound improves those in
[Bai et al 2019, Zhang et al 2020]. We complement our positive result with an
$\Omega\left(dH/\log d\right)$ global switching cost lower bound for any
no-regret algorithm. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
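For clarity, a standard definition of the global switching cost used in this line of work (notation assumed, not quoted from the paper) is:

```latex
% Global switching cost over K episodes, where pi_k denotes the policy
% deployed in episode k:
\begin{equation}
  N_{\mathrm{switch}} \;=\; \sum_{k=1}^{K-1} \mathbb{1}\{\pi_{k+1} \neq \pi_k\},
\end{equation}
% which the paper bounds by O(dH log K) while keeping regret
% \widetilde{O}(\sqrt{d^3 H^4 K}).
```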
Spectral graph theory is well known and widely used in computer vision. In
this paper, we analyze image segmentation algorithms that are based on spectral
graph theory, e.g., normalized cut, and show that there is a natural connection
between spectral-graph-theory-based image segmentation and edge-preserving
filtering. Based on this connection, we show that the normalized cut algorithm
is equivalent to repeated iterations of bilateral filtering. Then, using this
equivalence we present and implement a fast normalized cut algorithm for image
segmentation. Experiments show that our implementation can solve the original
optimization problem in the normalized cut algorithm 10 to 100 times faster.
Furthermore, we present a new algorithm called conditioned normalized cut for
image segmentation that can easily incorporate color image patches and
demonstrate how this segmentation problem can be solved with edge preserving
filtering. | [
"cs.CV"
] |
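As background, the baseline that the abstract above accelerates is the spectral relaxation of the normalized cut. The NumPy sketch below illustrates that relaxation (thresholding the Fiedler vector of the normalized Laplacian) on toy data; it is illustrative only and does not implement the paper's bilateral-filtering equivalence.

```python
import numpy as np

# Minimal sketch of the spectral relaxation of normalized cut: bipartition a
# weighted graph by thresholding the second-smallest eigenvector (Fiedler
# vector) of the symmetric normalized Laplacian.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
W = np.exp(-d2 / 0.5)                                  # Gaussian affinities
np.fill_diagonal(W, 0.0)

deg = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
_, vecs = np.linalg.eigh(L_sym)                        # eigenvalues in ascending order
labels = (vecs[:, 1] > 0).astype(int)                  # threshold the Fiedler vector
print(labels)
```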
Medical image segmentation is inherently an ambiguous task due to factors
such as partial volumes and variations in anatomical definitions. While in most
cases the segmentation uncertainty is around the border of structures of
interest, there can also be considerable inter-rater differences. The class of
conditional variational autoencoders (cVAE) offers a principled approach to
inferring distributions over plausible segmentations that are conditioned on
input images. Segmentation uncertainty estimated from samples of such
distributions can be more informative than using pixel level probability
scores. In this work, we propose a novel conditional generative model that is
based on conditional Normalizing Flow (cFlow). The basic idea is to increase
the expressivity of the cVAE by introducing a cFlow transformation step after
the encoder. This yields improved approximations of the latent posterior
distribution, allowing the model to capture richer segmentation variations.
With this we show that the quality and diversity of samples obtained from our
conditional generative model is enhanced. Performance of our model, which we
call cFlow Net, is evaluated on two medical imaging datasets demonstrating
substantial improvements in both qualitative and quantitative measures when
compared to a recent cVAE based model. | [
"stat.ML",
"cs.CV",
"cs.LG"
] |
With machine learning models being increasingly applied to various
decision-making scenarios, people have spent growing efforts to make machine
learning models more transparent and explainable. Among various explanation
techniques, counterfactual explanations have the advantages of being
human-friendly and actionable -- a counterfactual explanation tells the user
how to gain the desired prediction with minimal changes to the input. Besides,
counterfactual explanations can also serve as efficient probes to the models'
decisions. In this work, we exploit the potential of counterfactual
explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and
explore a model's decisions on individual instances and data subsets,
supporting users ranging from decision-subjects to model developers. DECE
supports exploratory analysis of model decisions by combining the strengths of
counterfactual explanations at instance- and subgroup-levels. We also introduce
a set of interactions that enable users to customize the generation of
counterfactual explanations to find more actionable ones that can suit their
needs. Through three use cases and an expert interview, we demonstrate the
effectiveness of DECE in supporting decision exploration tasks and instance
explanations. | [
"cs.LG",
"cs.HC",
"stat.ML",
"I.2.0; H.5.2"
] |
Point cloud based place recognition is still an open issue due to the
difficulty in extracting local features from the raw 3D point cloud and
generating the global descriptor, and it is even harder in large-scale
dynamic environments. In this paper, we develop a novel deep neural network,
named LPD-Net (Large-scale Place Description Network), which can extract
discriminative and generalizable global descriptors from the raw 3D point
cloud. Two modules, the adaptive local feature extraction module and the
graph-based neighborhood aggregation module, are proposed, which contribute to
extracting the local structures and revealing the spatial distribution of local
features in the large-scale point cloud, in an end-to-end manner. We
implement the proposed global descriptor in solving point cloud based retrieval
tasks to achieve the large-scale place recognition. Comparison results show
that our LPD-Net is much better than PointNetVLAD and reaches the
state-of-the-art. We also compare our LPD-Net with the vision-based solutions
to show the robustness of our approach to different weather and light
conditions. | [
"cs.CV"
] |
Learning discriminative features is crucial for various robotic applications
such as object detection and classification. In this paper, we present a
general framework for the analysis of the discriminative properties of haptic
signals. Our focus is on two crucial components of a robotic perception system:
discriminative feature extraction and metric-based feature transformation to
enhance the separability of haptic signals in the projected space. We propose a
set of hand-crafted haptic features (generated only from acceleration data),
which enables discrimination of real-world textures. Since the Euclidean space
does not reflect the underlying pattern in the data, we propose to learn an
appropriate transformation function to project the feature onto the new space
and apply different pattern recognition algorithms for texture classification
and discrimination tasks. Unlike other existing methods, we use a triplet-based
method for improved discrimination in the embedded space. We further
demonstrate how to build a haptic vocabulary by selecting a compact set of the
most distinct and representative signals in the embedded space. The
experimental results show that the proposed features, augmented with the learned
embedding, improve the performance of semantic discrimination tasks such as
classification and clustering, and outperform the related state-of-the-art. | [
"cs.LG",
"cs.HC",
"stat.ML"
] |
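The triplet-based discrimination mentioned above can be illustrated with a minimal PyTorch sketch using the built-in triplet margin loss; the feature dimensions, embedding network, and data below are placeholders, not the authors' pipeline.

```python
import torch

# Sketch of triplet-based metric learning on hand-crafted haptic features
# (placeholder data): pull an anchor and a positive of the same texture
# together in the embedding space, push a negative of a different texture away.
torch.manual_seed(0)
feat_dim, embed_dim = 32, 8
embed = torch.nn.Sequential(torch.nn.Linear(feat_dim, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, embed_dim))
criterion = torch.nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

anchor, positive, negative = torch.randn(3, 16, feat_dim)   # dummy mini-batches
for _ in range(100):
    opt.zero_grad()
    loss = criterion(embed(anchor), embed(positive), embed(negative))
    loss.backward()
    opt.step()
```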
Convolutional neural networks (CNNs) learn filters in order to capture local
correlation patterns in feature space. We propose to learn these filters as
combinations of preset spectral filters defined by the Discrete Cosine
Transform (DCT). Our proposed DCT-based harmonic blocks replace conventional
convolutional layers to produce partially or fully harmonic versions of new or
existing CNN architectures. Using DCT energy compaction properties, we
demonstrate how the harmonic networks can be efficiently compressed by
truncating high-frequency information in harmonic blocks thanks to the
redundancies in the spectral domain. We report extensive experimental
validation demonstrating benefits of the introduction of harmonic blocks into
state-of-the-art CNN models in image classification, object detection and
semantic segmentation applications. | [
"cs.CV",
"cs.LG"
] |
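A minimal sketch of the core idea behind harmonic blocks, as described above, is a fixed DCT filter bank whose responses are mixed by a learned 1x1 convolution; the layer sizes below are placeholders and the exact block design in the paper may differ. Truncating the highest-frequency filters is what enables the compression mentioned in the abstract.

```python
import numpy as np
import torch
from scipy.fft import dct

# Sketch of a "harmonic" layer: depthwise convolution with a fixed 3x3 DCT-II
# filter bank, followed by a learned 1x1 convolution that mixes the responses.
K = 3
basis_1d = dct(np.eye(K), norm='ortho', axis=0)          # rows: 1D DCT basis
filters = np.stack([np.outer(basis_1d[i], basis_1d[j])   # K*K separable 2D filters
                    for i in range(K) for j in range(K)])

in_ch, out_ch = 3, 16
dct_filters = torch.tensor(filters, dtype=torch.float32)             # (9, 3, 3)
dct_conv = torch.nn.Conv2d(in_ch, in_ch * K * K, K, padding=1,
                           groups=in_ch, bias=False)
dct_conv.weight.data = dct_filters.repeat(in_ch, 1, 1).unsqueeze(1)  # fixed filters
dct_conv.weight.requires_grad_(False)
mix = torch.nn.Conv2d(in_ch * K * K, out_ch, kernel_size=1)          # learned 1x1 mix

x = torch.randn(1, in_ch, 32, 32)
print(mix(dct_conv(x)).shape)   # torch.Size([1, 16, 32, 32])
```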
Sequence models assign probabilities to variable-length sequences such as
natural language texts. The ability of sequence models to capture temporal
dependence can be characterized by the temporal scaling of correlation and
mutual information. In this paper, we study the mutual information of recurrent
neural networks (RNNs) including long short-term memories and self-attention
networks such as Transformers. Through a combination of theoretical study of
linear RNNs and empirical study of nonlinear RNNs, we find their mutual
information decays exponentially in temporal distance. On the other hand,
Transformers can capture long-range mutual information more efficiently, making
them preferable in modeling sequences with slow power-law mutual information,
such as natural languages and stock prices. We discuss the connection of these
results with statistical mechanics. We also point out the non-uniformity
problem in many natural language datasets. We hope this work provides a new
perspective for understanding the expressive power of sequence models and sheds
new light on improving their architectures. | [
"cs.LG",
"cond-mat.dis-nn",
"cs.IT",
"math.IT",
"stat.ML"
] |
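Schematically, the two temporal scalings contrasted in the abstract can be written as follows (notation ours):

```latex
% Temporal scaling of mutual information between tokens at distance t.
\begin{align}
  I_{\mathrm{RNN}}(t) &\sim e^{-t/\xi}   &&\text{(exponential decay)}, \\
  I_{\mathrm{text}}(t) &\sim t^{-\alpha} &&\text{(slow power-law decay, e.g.\ natural language)}.
\end{align}
```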
Recently, the concept of teaching has been introduced into machine learning,
in which a teacher model is used to guide the training of a student model
(which will be used in real tasks) through data selection, loss function
design, etc. Learning to reweight, which is a specific kind of teaching that
reweights training data using a teacher model, receives much attention due to
its simplicity and effectiveness. In existing learning to reweight works, the
teacher model only utilizes shallow/surface information such as training
iteration number and loss/accuracy of the student model from
training/validation sets, but ignores the internal states of the student model,
which limits the potential of learning to reweight. In this work, we propose an
improved data reweighting algorithm, in which the student model provides its
internal states to the teacher model, and the teacher model returns adaptive
weights of training samples to enhance the training of the student model. The
teacher model is jointly trained with the student model using meta gradients
propagated from a validation set. Experiments on image classification with
clean/noisy labels and neural machine translation empirically demonstrate that
our algorithm achieves significant improvements over previous methods. | [
"cs.LG",
"stat.ML"
] |
To enable a deep learning-based system to be used in the medical domain as a
computer-aided diagnosis system, it is essential to not only classify diseases
but also present the locations of the diseases. However, collecting
instance-level annotations for various thoracic diseases is expensive.
Therefore, weakly supervised localization methods have been proposed that use
only image-level annotation. While the previous methods presented the disease
location as the most discriminative part for classification, this causes a deep
network to localize wrong areas for indistinguishable X-ray images. To solve
this issue, we propose a spatial attention method using disease masks that
describe the areas where diseases mainly occur. We then apply the spatial
attention to find the precise disease area by highlighting the highest
probability of disease occurrence. Meanwhile, the various sizes, rotations and
noise in chest X-ray images make generating the disease masks challenging. To
reduce the variation among images, we employ an alignment module to transform
an input X-ray image into a generalized image. Through extensive experiments on
the NIH-Chest X-ray dataset with eight kinds of diseases, we show that the
proposed method results in superior localization performances compared to
state-of-the-art methods. | [
"cs.CV",
"cs.LG"
] |
In medical image segmentation, it is difficult to mark ambiguous areas
accurately with binary masks, especially when dealing with small lesions.
Therefore, it is a challenge for radiologists to reach a consensus by using
binary masks under the condition of multiple annotations. However, these areas
may contain anatomical structures that are conducive to diagnosis. Uncertainty
is introduced to study these situations. Nevertheless, the uncertainty is
usually measured by the variances between predictions in a multiple trial way.
It is not intuitive, and there is no exact correspondence in the image.
Inspired by image matting, we introduce matting as a soft segmentation method
and a new perspective for dealing with and representing uncertain regions in medical
scenes, namely medical matting. More specifically, because there is no
available medical matting dataset, we first labeled two medical datasets with
alpha mattes. Secondly, matting methods designed for natural images are not
suitable for the medical scene, so we propose a new architecture to generate
binary masks and alpha mattes in a row. Thirdly, an uncertainty map is
introduced to highlight the ambiguous regions from the binary results and
improve the matting performance. Evaluated on these datasets, the proposed
model outperformed state-of-the-art matting algorithms by a large margin, and
alpha matte is proved to be a more efficient labeling form than a binary mask. | [
"cs.CV"
] |
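For reference, the soft-segmentation view borrowed from image matting is based on the standard compositing model, in which a binary mask is just the special case of a hard alpha matte:

```latex
% Compositing (matting) model: pixel i mixes a foreground and a background
% value according to the alpha matte; binary masks force alpha_i in {0, 1}.
\begin{equation}
  I_i = \alpha_i F_i + (1 - \alpha_i) B_i, \qquad \alpha_i \in [0, 1].
\end{equation}
```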
Unlike image or video data, which can be easily labeled by humans,
sensor data annotation is a time-consuming process. However, traditional
methods of human activity recognition require a large amount of such strictly
labeled data for training classifiers. In this paper, we present an
attention-based convolutional neural network for human activity recognition from weakly
labeled data. The proposed attention model can focus on the labeled activity within
a long sequence of sensor data while filtering out a large amount of
background noise signals. In experiments on the weakly labeled dataset, we show
that our attention model outperforms classical deep learning methods in
accuracy. Besides, we determine the specific locations of the labeled activity
in a long sequence of weakly labeled data by converting the compatibility score
which is generated from the attention model into a compatibility density. Our method
greatly facilitates the process of sensor data annotation and makes data
collection easier. | [
"cs.LG",
"stat.ML"
] |
Active Traffic Management strategies are often adopted in real time to
address sudden flow breakdowns. When queuing is imminent, Speed
Harmonization (SH), which adjusts speeds in upstream traffic to mitigate
traffic shockwaves downstream, can be applied. However, because SH depends on
driver awareness and compliance, it may not always be effective in mitigating
congestion. The use of multi-agent reinforcement learning for collaborative
learning is a promising solution to this challenge. By incorporating this
technique in the control algorithms of connected and autonomous vehicles (CAVs),
it may be possible to train the CAVs to make joint decisions that can mitigate
highway bottleneck congestion without human driver compliance to altered speed
limits. In this regard, we present an RL-based multi-agent CAV control model to
operate in mixed traffic (both CAVs and human-driven vehicles (HDVs)). The
results suggest that even at CAV percent share of corridor traffic as low as
10%, CAVs can significantly mitigate bottlenecks in highway traffic. Another
objective was to assess the efficacy of the RL-based controller vis-\`a-vis
that of the rule-based controller. In addressing this objective, we duly
recognize that one of the main challenges of RL-based CAV controllers is the
variety and complexity of inputs that exist in the real world, such as the
information provided to the CAV by other connected entities and sensed
information. These translate as dynamic length inputs which are difficult to
process and learn from. For this reason, we propose the use of Graph
Convolutional Networks (GCNs) to preserve the information network topology and
the corresponding dynamic-length inputs. We then use this,
combined with Deep Deterministic Policy Gradient (DDPG), to carry out
multi-agent training for congestion mitigation using the CAV controllers. | [
"cs.LG",
"cs.SY",
"eess.SY"
] |
Among 2D convolutional networks on point clouds, point-based approaches
consume point clouds of fixed size directly. By analysis of PointNet, a pioneer
in introducing deep learning into point sets, we reveal that current
point-based methods are essentially spatial relationship processing networks.
In this paper, we take a different approach. Our architecture, named PE-Net,
learns the representation of point clouds in high-dimensional space, and
encodes the unordered input points to feature vectors, which standard 2D CNNs
can be applied to. The proposed network can adapt to changes in the number
of input points, which is a limitation of current methods. Experiments show that in
the tasks of classification and part segmentation, PE-Net achieves the
state-of-the-art performance in multiple challenging datasets, such as ModelNet
and ShapeNetPart. | [
"cs.CV"
] |
In order to answer semantically-complicated questions about an image, a
Visual Question Answering (VQA) model needs to fully understand the visual
scene in the image, especially the interactive dynamics between different
objects. We propose a Relation-aware Graph Attention Network (ReGAT), which
encodes each image into a graph and models multi-type inter-object relations
via a graph attention mechanism, to learn question-adaptive relation
representations. Two types of visual object relations are explored: (i)
Explicit Relations that represent geometric positions and semantic interactions
between objects; and (ii) Implicit Relations that capture the hidden dynamics
between image regions. Experiments demonstrate that ReGAT outperforms prior
state-of-the-art approaches on both VQA 2.0 and VQA-CP v2 datasets. We further
show that ReGAT is compatible with existing VQA architectures, and can be used as
a generic relation encoder to boost the model performance for VQA. | [
"cs.CV",
"cs.AI"
] |
We present an end-to-end, interpretable, deep-learning architecture to learn
a graph kernel that predicts the outcome of chronic disease drug prescription.
This is achieved through deep metric learning combined with a Support
Vector Machine objective, using a graphical representation of Electronic Health
Records. We formulate the predictive model as a binary graph classification
problem with an adaptive learned graph kernel through novel cross-global
attention node matching between patient graphs, simultaneously computing on
multiple graphs without training pair or triplet generation. Results using the
Taiwanese National Health Insurance Research Database demonstrate that our
approach outperforms current state-of-the-art models both in terms of accuracy
and interpretability. | [
"cs.LG",
"stat.ML"
] |
Monocular depth estimation and semantic segmentation are two fundamental
goals of scene understanding. Due to the advantages of task interaction, many
works study the joint task learning algorithm. However, most existing methods
fail to fully leverage the semantic labels, ignoring the provided context
structures and only using them to supervise the prediction of the segmentation
split, which limits the performance of both tasks. In this paper, we propose a
network injected with contextual information (CI-Net) to solve the problem.
Specifically, we introduce a self-attention block in the encoder to generate an
attention map. With supervision from the ideal attention map created from the
semantic labels, the network is embedded with contextual information so that it
can understand the scene better and utilize correlated features to make accurate
prediction. Besides, a feature sharing module is constructed to make the
task-specific features deeply fused and a consistency loss is devised to make
the features mutually guided. We evaluate the proposed CI-Net on the
NYU-Depth-v2 and SUN-RGBD datasets. The experimental results validate that our
proposed CI-Net could effectively improve the accuracy of semantic segmentation
and depth estimation. | [
"cs.CV"
] |
Hyperspectral unmixing, the process of estimating a common set of spectral
bases and their corresponding composite percentages at each pixel, is an
important task for hyperspectral analysis, visualization and understanding.
From an unsupervised learning perspective, this problem is very
challenging---both the spectral bases and their composite percentages are
unknown, making the solution space too large. To reduce the solution space,
many approaches have been proposed that exploit various priors. In practice,
these priors can easily lead to unsuitable solutions, because they apply an
identical strength of constraint to all the factors, which does not hold in
practice. To overcome this limitation, we
propose a novel sparsity based method by learning a data-guided map to describe
the individual mixed level of each pixel. Through this data-guided map, the
$\ell_{p}(0<p<1)$ constraint is applied in an adaptive manner. Such
implementation not only meets the practical situation, but also guides the
spectral bases toward the pixels under a highly sparse constraint. Moreover,
an elegant optimization scheme, as well as its convergence proof, is
provided in this paper. Extensive experiments on several datasets also
demonstrate that the data-guided map is feasible, and high quality unmixing
results could be obtained by our method. | [
"cs.CV"
] |
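To fix notation, a generic sparse linear-unmixing objective of the kind the abstract refers to can be written as below; note this is a hedged, generic form, and the paper's data-guided map makes the sparsity constraint pixel-adaptive rather than identical across factors.

```latex
% Generic sparse linear-unmixing objective (Y: observed pixels, W: spectral
% bases, H = [h_1, ..., h_N]: per-pixel abundances).
\begin{equation}
  \min_{W \ge 0,\, H \ge 0} \;\tfrac{1}{2}\lVert Y - WH \rVert_F^2
  + \lambda \sum_{i} \lVert h_i \rVert_p^p, \qquad 0 < p < 1 .
\end{equation}
```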
Semi-supervised learning has been gaining attention as it allows for
performing image analysis tasks such as classification with limited labeled
data. Some popular algorithms using Generative Adversarial Networks (GANs) for
semi-supervised classification share a single architecture for classification
and discrimination. However, this may require a model to converge to a separate
data distribution for each task, which may reduce overall performance. While
progress in semi-supervised learning has been made, less addressed are
small-scale, fully-supervised tasks where even unlabeled data is unavailable
and unattainable. We therefore propose a novel GAN model, namely External
Classifier GAN (EC-GAN), that utilizes GANs and semi-supervised algorithms to
improve classification in fully-supervised regimes. Our method leverages a GAN
to generate artificial data used to supplement supervised classification. More
specifically, we attach an external classifier, hence the name EC-GAN, to the
GAN's generator, as opposed to sharing an architecture with the discriminator.
Our experiments demonstrate that EC-GAN's performance is comparable to the
shared architecture method, far superior to the standard data augmentation and
regularization-based approach, and effective on a small, realistic dataset. | [
"cs.LG",
"cs.CV"
] |
In semi-supervised graph-based binary classifier learning, a subset of known
labels $\hat{x}_i$ are used to infer unknown labels, assuming that the label
signal $x$ is smooth with respect to a similarity graph specified by a
Laplacian matrix. When restricting labels $x_i$ to binary values, the problem
is NP-hard. While a conventional semi-definite programming (SDP) relaxation can
be solved in polynomial time using, for example, the alternating direction
method of multipliers (ADMM), the complexity of iteratively projecting a
candidate matrix $M$ onto the positive semi-definite (PSD) cone ($M \succeq 0$)
remains high. In this paper, leveraging a recent linear algebraic theory called
Gershgorin disc perfect alignment (GDPA), we propose a fast projection-free
method by solving a sequence of linear programs (LP) instead. Specifically, we
first recast the SDP relaxation to its SDP dual, where a feasible solution $H
\succeq 0$ can be interpreted as a Laplacian matrix corresponding to a balanced
signed graph sans the last node. To achieve graph balance, we split the last
node into two that respectively contain the original positive and negative
edges, resulting in a new Laplacian $\bar{H}$. We repose the SDP dual for
solution $\bar{H}$, then replace the PSD cone constraint $\bar{H} \succeq 0$
with linear constraints derived from GDPA -- sufficient conditions to ensure
$\bar{H}$ is PSD -- so that the optimization becomes an LP per iteration.
Finally, we extract predicted labels from our converged LP solution $\bar{H}$.
Experiments show that our algorithm enjoyed a $40\times$ speedup on average
over the next fastest scheme while retaining comparable label prediction
performance. | [
"cs.LG"
] |
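The Gershgorin-style sufficient condition behind the linear constraints can be stated schematically as follows (notation ours; the precise GDPA construction in the paper fixes the per-node scalings and off-diagonal signs via the balanced signed graph, which is what makes the condition linear).

```latex
% Sufficient condition for PSD-ness: for symmetric H-bar and positive scalars
% s_1, ..., s_n, if every Gershgorin disc of S H-bar S^{-1}, S = diag(s_i),
% has a non-negative left end, then H-bar is PSD.
\begin{equation}
  \bar{h}_{ii} - \sum_{j \neq i} \Big| \frac{s_i}{s_j}\, \bar{h}_{ij} \Big| \;\ge\; 0
  \quad \forall i
  \quad \Longrightarrow \quad \bar{H} \succeq 0 .
\end{equation}
```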
Neural architecture search (NAS) has attracted a lot of attention and has
been illustrated to bring tangible benefits in a large number of applications
in the past few years. Architecture topology and architecture size have been
regarded as two of the most important aspects for the performance of deep
learning models and the community has spawned lots of searching algorithms for
both aspects of the neural architectures. However, the performance gain from
these searching algorithms is achieved under different search spaces and
training setups. This makes the overall performance of the algorithms to some
extent incomparable and the improvement from a sub-module of the searching
model unclear. In this paper, we propose NATS-Bench, a unified benchmark on
searching for both topology and size, for (almost) any up-to-date NAS
algorithm. NATS-Bench includes the search space of 15,625 neural cell
candidates for architecture topology and 32,768 for architecture size on three
datasets. We analyze the validity of our benchmark in terms of various criteria
and performance comparison of all candidates in the search space. We also show
the versatility of NATS-Bench by benchmarking 13 recent state-of-the-art NAS
algorithms on it. All logs and diagnostic information trained using the same
setup for each candidate are provided. This facilitates a much larger community
of researchers to focus on developing better NAS algorithms in a more
comparable and computationally cost-friendly environment. All code is
publicly available at: https://xuanyidong.com/assets/projects/NATS-Bench. | [
"cs.LG",
"stat.ML"
] |
We explore encoding brain symmetry into a neural network for a brain tumor
segmentation task. A healthy human brain is symmetric at a high level of
abstraction, and the high-level asymmetric parts are more likely to be tumor
regions. Paying more attention to asymmetries has the potential to boost the
performance in brain tumor segmentation. We propose a method to encode brain
symmetry into existing neural networks and apply the method to a
state-of-the-art neural network for medical imaging segmentation. We evaluate
our symmetry-encoded network on the dataset from a brain tumor segmentation
challenge and verify that the new model extracts information in the training
images more efficiently than the original model. | [
"cs.CV"
] |
This paper provides an empirical evaluation of recently developed exploration
algorithms within the Arcade Learning Environment (ALE). We study the use of
different reward bonuses that incentivize exploration in reinforcement learning.
We do so by fixing the learning algorithm used and focusing only on the impact
of the different exploration bonuses in the agent's performance. We use
Rainbow, the state-of-the-art algorithm for value-based agents, and focus on
some of the bonuses proposed in the last few years. We consider the impact
these algorithms have on performance within the popular game Montezuma's
Revenge which has gathered a lot of interest from the exploration community,
across the set of seven games identified by Bellemare et al. (2016) as
challenging for exploration, and easier games where exploration is not an
issue. We find that, in our setting, recently developed bonuses do not provide
significantly improved performance on Montezuma's Revenge or hard exploration
games. We also find that existing bonus-based methods may negatively impact
performance on games in which exploration is not an issue and may even perform
worse than $\epsilon$-greedy exploration. | [
"cs.LG",
"stat.ML"
] |
Nowadays, digital image compression and decompression techniques are very
important. Our aim is therefore to assess the quality of the face and other
regions of a compressed image with respect to the original image. Image
segmentation is typically used to locate objects and boundaries (lines, curves,
etc.) in images. After segmentation, the image is changed into something that is
more meaningful to analyze. Using the Universal Image Quality Index (Q), the
Structural Similarity Index (SSIM) and the Gradient-based Structural Similarity
Index (G-SSIM), it can be shown that the face region is less compressed than any
other region of the image. | [
"cs.CV"
] |
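For reference, the Structural Similarity index used in the comparison is defined, for local image patches x and y, as:

```latex
% SSIM (mu: local means, sigma^2: local variances, sigma_xy: local covariance,
% C1, C2: small stabilizing constants):
\begin{equation}
  \mathrm{SSIM}(x, y) =
  \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} .
\end{equation}
```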
Modern computer vision requires processing large amounts of data, both while
training the model and/or during inference, once the model is deployed.
Scenarios where images are captured and processed in physically separated
locations are increasingly common (e.g. autonomous vehicles, cloud computing).
In addition, many devices suffer from limited resources to store or transmit
data (e.g. storage space, channel capacity). In these scenarios, lossy image
compression plays a crucial role to effectively increase the number of images
collected under such constraints. However, lossy compression entails some
undesired degradation of the data that may harm the performance of the
downstream analysis task at hand, since important semantic information may be
lost in the process. Moreover, we may only have compressed images at training
time but are able to use original images at inference time, or vice versa, and
in such a case, the downstream model suffers from covariate shift. In this
paper, we analyze this phenomenon, with a special focus on vision-based
perception for autonomous driving as a paradigmatic scenario. We see that loss
of semantic information and covariate shift do indeed exist, resulting in a
drop in performance that depends on the compression rate. In order to address
the problem, we propose dataset restoration, based on image restoration with
generative adversarial networks (GANs). Our method is agnostic to both the
particular image compression method and the downstream task; and has the
advantage of not adding additional cost to the deployed models, which is
particularly important in resource-limited devices. The presented experiments
focus on semantic segmentation as a challenging use case, cover a broad range
of compression rates and diverse datasets, and show how our method is able to
significantly alleviate the negative effects of compression on the downstream
visual task. | [
"cs.CV",
"eess.IV",
"I.4.2"
] |
Many theoretical results on estimation of high dimensional time series
require specifying an underlying data generating model (DGM). Instead, following
the footsteps of~\cite{wong2017lasso}, this paper relies only on (strict)
stationarity and $ \beta $-mixing condition to establish consistency of lasso
when data comes from a $\beta$-mixing process with marginals having subgaussian
tails. Because of the general assumptions, the data can come from DGMs
different than standard time series models such as VAR or ARCH. When the true
DGM is not VAR, the lasso estimates correspond to those of the best linear
predictors using the past observations. We establish non-asymptotic
inequalities for estimation and prediction errors of the lasso estimates.
Together with~\cite{wong2017lasso}, we provide lasso guarantees that cover full
spectrum of the parameters in specifications of $ \beta $-mixing subgaussian
time series. Applications of these results potentially extend to non-Gaussian,
non-Markovian and non-linear time series models, as the examples we provide
demonstrate. In order to prove our results, we derive a novel Hanson-Wright
type concentration inequality for $\beta$-mixing subgaussian random vectors
that may be of independent interest. | [
"stat.ML",
"cs.LG"
] |
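The lasso estimator whose errors are bounded takes the usual form (notation ours; x_t collects the lagged observations used as predictors for the response y_t):

```latex
% Lasso estimator over n effective observations with regularization lambda.
\begin{equation}
  \hat{\beta} = \arg\min_{\beta}\;
  \frac{1}{n}\sum_{t=1}^{n}\big(y_t - \langle x_t, \beta\rangle\big)^2
  + \lambda \lVert \beta \rVert_1 .
\end{equation}
```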
Zero-shot hyperparameter optimization (HPO) is a simple yet effective use of
transfer learning for constructing a small list of hyperparameter (HP)
configurations that complement each other. That is to say, for any given
dataset, at least one of them is expected to perform well. Current techniques
for obtaining this list are computationally expensive as they rely on running
training jobs on a diverse collection of datasets and a large collection of
randomly drawn HPs. This cost is especially problematic in environments where
the space of HPs is regularly changing due to new algorithm versions, or
changing architectures of deep networks. We provide an overview of available
approaches and introduce two novel techniques to handle the problem. The first
is based on a surrogate model and adaptively chooses (dataset, configuration)
pairs to query. The second, for settings where finding, tuning and
testing a surrogate model is problematic, is a multi-fidelity technique
combining HyperBand with submodular optimization. We benchmark our methods
experimentally on five tasks (XGBoost, LightGBM, CatBoost, MLP and AutoML) and
show significant improvement in accuracy compared to standard zero-shot HPO
with the same training budget. In addition to contributing new algorithms, we
provide an extensive study of the zero-shot HPO technique resulting in (1)
default hyper-parameters for popular algorithms that would benefit the
community using them, (2) massive lookup tables to further the research of
hyper-parameter tuning. | [
"stat.ML",
"cs.LG"
] |
Online Tensor Factorization (OTF) is a fundamental tool in learning
low-dimensional interpretable features from streaming multi-modal data. While
various algorithmic and theoretical aspects of OTF have been investigated
recently, general convergence guarantee to stationary points of the objective
function without any incoherence or sparsity assumptions is still lacking even
for the i.i.d. case. In this work, we introduce a novel OTF algorithm that
learns a CANDECOMP/PARAFAC (CP) basis from a given stream of tensor-valued data
under general constraints, including nonnegativity constraints that induce
interpretability of learned CP basis. We prove that our algorithm converges
almost surely to the set of stationary points of the objective function under
the hypothesis that the sequence of data tensors is generated by some
underlying Markov chain. Our setting covers the classical i.i.d. case as well
as a wide range of application contexts including data streams generated by
independent or MCMC sampling. Our result closes a gap between OTF and Online
Matrix Factorization in global convergence analysis. Experimentally, we show
that our OTF algorithm converges much faster than standard algorithms for
nonnegative tensor factorization tasks on both synthetic and real-world data.
Also, we demonstrate the utility of our algorithm on a diverse set of examples
from image, video, and time-series data, illustrating how one may learn
qualitatively different CP-dictionaries from the same tensor data by exploiting
the tensor structure in multiple ways. | [
"stat.ML",
"cs.LG",
"math.OC"
] |
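For reference, the CP (CANDECOMP/PARAFAC) model learned online expresses each data tensor as a sum of rank-one outer products of the dictionary factors, optionally with nonnegativity constraints for interpretability (schematic third-order case):

```latex
% CP decomposition of a third-order tensor into R rank-one components.
\begin{equation}
  \mathcal{X} \approx \sum_{r=1}^{R} a_r \circ b_r \circ c_r,
  \qquad a_r, b_r, c_r \ge 0 .
\end{equation}
```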
Reinforcement Learning has been able to solve many complicated robotics tasks
without any need for feature engineering in an end-to-end fashion. However,
learning the optimal policy directly from the sensory inputs, i.e., the
observations, often requires processing and storage of a huge amount of data.
In the context of robotics, the cost of data from real robotics hardware is
usually very high, thus solutions that achieve high sample-efficiency are
needed. We propose a method that aims at learning a mapping from the
observations into a lower-dimensional state space. This mapping is learned with
unsupervised learning using loss functions shaped to incorporate prior
knowledge of the environment and the task. Using the samples from the state
space, the optimal policy is quickly and efficiently learned. We test the
method on several mobile robot navigation tasks in a simulation environment and
also on a real robot. | [
"cs.LG",
"cs.AI"
] |
By moving a depth sensor around a room, we compute a 3D CAD model of the
environment, capturing the room shape and contents such as chairs, desks,
sofas, and tables. Rather than reconstructing geometry, we match, place, and
align each object in the scene to thousands of CAD models of objects. In
addition to the fully automatic system, the key technical contribution is a
novel approach for aligning CAD models to 3D scans, based on deep reinforcement
learning. This approach, which we call Learning-based ICP, outperforms prior
ICP methods in the literature, by learning the best points to match and
conditioning on object viewpoint. LICP learns to align using only synthetic
data and does not require ground truth annotation of object pose or keypoint
pair matching in real scene scans. While LICP is trained on synthetic data and
without 3D real scene annotations, it outperforms both learned local deep
feature matching and geometric based alignment methods in real scenes. The
proposed method is evaluated on real scenes datasets of SceneNN and ScanNet as
well as synthetic scenes of SUNCG. High quality results are demonstrated on a
range of real world scenes, with robustness to clutter, viewpoint, and
occlusion. | [
"cs.CV",
"cs.LG"
] |
Metric learning algorithms aim to learn a distance function that brings the
semantically similar data items together and keeps dissimilar ones at a
distance. Traditional Mahalanobis distance learning is equivalent to finding a
linear projection. In contrast, Deep Metric Learning (DML) methods have been proposed
that automatically extract features from data and learn a non-linear
transformation from input space to a semantically embedding space. Recently,
many DML methods have been proposed that focus on enhancing the discrimination power of
the learned metric by providing novel sampling strategies or loss functions.
This approach is very helpful when both the training and test examples are
coming from the same set of categories. However, it is less effective in many
applications of DML such as image retrieval and person re-identification. Here,
the DML should learn general semantic concepts from observed classes and employ
them to rank or identify objects from unseen categories. Neglecting the
generalization ability of the learned representation and merely emphasizing
learning a more discriminative embedding on the observed classes may lead to the
overfitting problem. To address this limitation, we propose a framework to
enhance the generalization power of existing DML methods in a Zero-Shot
Learning (ZSL) setting by general yet discriminative representation learning
and employing a class adversarial neural network. To learn a more general
representation, we propose to employ feature maps of intermediate layers in a
deep neural network and enhance their discrimination power through an attention
mechanism. Besides, a class adversarial network is utilized to enforce the deep
model to seek class invariant features for the DML task. We evaluate our work
on widely used machine vision datasets in a ZSL setting. | [
"cs.CV",
"cs.IR",
"cs.LG",
"6804 (Primary)"
] |
Semi-supervised learning has been an effective paradigm for leveraging
unlabeled data to reduce the reliance on labeled data. We propose CoMatch, a
new semi-supervised learning method that unifies dominant approaches and
addresses their limitations. CoMatch jointly learns two representations of the
training data, their class probabilities and low-dimensional embeddings. The
two representations interact with each other to jointly evolve. The embeddings
impose a smoothness constraint on the class probabilities to improve the
pseudo-labels, whereas the pseudo-labels regularize the structure of the
embeddings through graph-based contrastive learning. CoMatch achieves
state-of-the-art performance on multiple datasets. It achieves substantial
accuracy improvements on the label-scarce CIFAR-10 and STL-10. On ImageNet with
1% labels, CoMatch achieves a top-1 accuracy of 66.0%, outperforming FixMatch
by 12.6%. Furthermore, CoMatch achieves better representation learning
performance on downstream tasks, outperforming both supervised learning and
self-supervised learning. Code and pre-trained models are available at
https://github.com/salesforce/CoMatch. | [
"cs.LG",
"cs.CV"
] |
We introduce AutoGluon-Tabular, an open-source AutoML framework that requires
only a single line of Python to train highly accurate machine learning models
on an unprocessed tabular dataset such as a CSV file. Unlike existing AutoML
frameworks that primarily focus on model/hyperparameter selection,
AutoGluon-Tabular succeeds by ensembling multiple models and stacking them in
multiple layers. Experiments reveal that our multi-layer combination of many
models offers better use of allocated training time than seeking out the best.
A second contribution is an extensive evaluation of public and commercial
AutoML platforms including TPOT, H2O, AutoWEKA, auto-sklearn, AutoGluon, and
Google AutoML Tables. Tests on a suite of 50 classification and regression
tasks from Kaggle and the OpenML AutoML Benchmark reveal that AutoGluon is
faster, more robust, and much more accurate. We find that AutoGluon often even
outperforms the best-in-hindsight combination of all of its competitors. In two
popular Kaggle competitions, AutoGluon beat 99% of the participating data
scientists after merely 4h of training on the raw data. | [
"stat.ML",
"cs.LG"
] |
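The "single line of Python" mentioned above looks roughly as follows in recent AutoGluon-Tabular releases (the label column name and CSV paths are placeholders; earlier versions exposed a different, task-based API):

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Roughly the one-liner the abstract refers to: fit an ensemble of stacked
# models on a raw tabular dataset, with 'class' as the target column.
train_data = TabularDataset("train.csv")
predictor = TabularPredictor(label="class").fit(train_data)

# Evaluate on held-out data.
test_data = TabularDataset("test.csv")
print(predictor.evaluate(test_data))
```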
Establishing mathematical models is a ubiquitous and effective method to
understand the objective world. Due to complex physiological structures and
dynamic behaviors, mathematical representation of the human face is an
especially challenging task. A mathematical model for face image representation
called GmFace is proposed in the form of a multi-Gaussian function in this
paper. The model utilizes the advantages of the two-dimensional Gaussian function,
which provides a symmetric bell surface whose shape can be controlled by
parameters. The GmNet is then designed using Gaussian functions as neurons,
with parameters that correspond to each of the parameters of GmFace in order to
transform the problem of GmFace parameter solving into a network optimization
problem of GmNet. The face modeling process can be described by the following
steps: (1) GmNet initialization; (2) feeding GmNet with face image(s); (3)
training GmNet until convergence; (4) drawing out the parameters of GmNet (as
the same as GmFace); (5) recording the face model GmFace. Furthermore, using
GmFace, several face image transformation operations can be realized
mathematically through simple parameter computation. | [
"cs.CV",
"cs.LG"
] |
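A generic m-term 2D Gaussian image model of the kind described takes the form below; this is a hedged, schematic form, and the exact GmFace parameterization (e.g., per-Gaussian rotation) may differ. The amplitudes, means, and widths are the parameters fit by GmNet.

```latex
% Generic multi-Gaussian image model over pixel coordinates (x, y).
\begin{equation}
  \mathrm{Gm}(x, y) = \sum_{k=1}^{m} A_k
  \exp\!\left(-\frac{(x-\mu_{x,k})^2}{2\sigma_{x,k}^2}
              -\frac{(y-\mu_{y,k})^2}{2\sigma_{y,k}^2}\right).
\end{equation}
```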
Self-supervised learning allows for better utilization of unlabelled data.
The feature representation obtained by self-supervision can be used in
downstream tasks such as classification, object detection, segmentation, and
anomaly detection. While classification, object detection, and segmentation
have been investigated with self-supervised learning, anomaly detection needs
more attention. We consider the problem of anomaly detection in images and
videos, and present a new visual anomaly detection technique for videos.
Numerous seminal and state-of-the-art self-supervised methods are evaluated for
anomaly detection on a variety of image datasets. The best performing
image-based self-supervised representation learning method is then used for
video anomaly detection to see the importance of spatial features in visual
anomaly detection in videos. We also propose a simple self-supervision approach
for learning temporal coherence across video frames without the use of any
optical flow information. At its core, our method identifies the frame indices
of a jumbled video sequence allowing it to learn the spatiotemporal features of
the video. This intuitive approach shows superior visual anomaly detection
performance compared to numerous image- and video-based methods on the UCF101 and
ILSVRC2015 video datasets. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
LiDAR point cloud analysis is a core task for 3D computer vision, especially
for autonomous driving. However, due to the severe sparsity and noise
interference in the single sweep LiDAR point cloud, the accurate semantic
segmentation is non-trivial to achieve. In this paper, we propose a novel
sparse LiDAR point cloud semantic segmentation framework assisted by learned
contextual shape priors. In practice, an initial semantic segmentation (SS) of
a single sweep point cloud can be achieved by any appealing network and then
flows into the semantic scene completion (SSC) module as the input. By merging
multiple frames in the LiDAR sequence as supervision, the optimized SSC module
has learned the contextual shape priors from sequential LiDAR data, completing
the sparse single sweep point cloud to the dense one. Thus, it inherently
improves SS optimization through fully end-to-end training. Besides, a
Point-Voxel Interaction (PVI) module is proposed to further enhance the
knowledge fusion between SS and SSC tasks, i.e., promoting the interaction of
incomplete local geometry of point cloud and complete voxel-wise global
structure. Furthermore, the auxiliary SSC and PVI modules can be discarded
during inference without extra burden for SS. Extensive experiments confirm
that our JS3C-Net achieves superior performance on both SemanticKITTI and
SemanticPOSS benchmarks, i.e., 4% and 3% improvement, respectively. | [
"cs.CV"
] |
The major part of the vanilla vision transformer (ViT) is the attention block
that brings the power of mimicking the global context of the input image. For
better performance, ViT needs large-scale training data. To overcome this data
hunger limitation, many ViT-based networks, or hybrid-ViT, have been proposed
to include local context during training. The robustness of ViTs and their
variants against adversarial attacks has not been as widely investigated in the
literature as that of CNNs. This work studies the robustness of ViT variants 1)
against different Lp-based adversarial attacks in comparison with CNNs, 2)
under adversarial examples (AEs) after applying preprocessing defense methods
and 3) under the adaptive attacks using expectation over transformation (EOT)
framework. To that end, we run a set of experiments on 1000 images from
ImageNet-1k and then provide an analysis that reveals that vanilla ViT or
hybrid-ViT are more robust than CNNs. For instance, we found that 1) Vanilla
ViTs or hybrid-ViTs are more robust than CNNs under Lp-based attacks and under
adaptive attacks. 2) Unlike hybrid-ViTs, vanilla ViTs do not respond to
preprocessing defenses that mainly reduce the high-frequency components.
Furthermore, feature maps, attention maps, and Grad-CAM visualization jointly
with image quality measures, and the perturbations' energy spectrum are provided
for an insightful understanding of attention-based models. | [
"cs.CV"
] |
Recently, infrared human action recognition has attracted increasing
attention for it has many advantages over visible light, that is, being robust
to illumination change and shadows. However, the infrared action data is
limited until now, which degrades the performance of infrared action
recognition. Motivated by the idea of transfer learning, an infrared human
action recognition framework using auxiliary data from visible light is
proposed to solve the problem of limited infrared action data. In the proposed
framework, we first construct a novel Cross-Dataset Feature Alignment and
Generalization (CDFAG) framework to map the infrared data and visible light
data into a common feature space, where Kernel Manifold Alignment (KEMA) and a
dual aligned-to-generalized encoders (AGE) model are employed to represent the
feature. Then, a support vector machine (SVM) is trained, using both the
infrared data and visible light data, and can classify the features derived
from infrared data. The proposed method is evaluated on InfAR, which is a
publicly available infrared human action dataset. To build up auxiliary data,
we set up a novel visible light action dataset XD145. Experimental results show
that the proposed method can achieve state-of-the-art performance compared with
several transfer learning and domain adaptation methods. | [
"cs.CV"
] |
Multiple sclerosis (MS) is a demyelinating disease of the central nervous
system (CNS). A reliable measure of the tissue myelin content is therefore
essential for the understanding of the physiopathology of MS, tracking
progression and assessing treatment efficacy. Positron emission tomography
(PET) with $[^{11} \mbox{C}] \mbox{PIB}$ has been proposed as a promising
biomarker for measuring myelin content changes in-vivo in MS. However, PET
imaging is expensive and invasive due to the injection of a radioactive tracer.
On the contrary, magnetic resonance imaging (MRI) is a non-invasive, widely
available technique, but existing MRI sequences do not provide, to date, a
reliable, specific, or direct marker of either demyelination or remyelination.
In this work, we therefore propose Sketcher-Refiner Generative Adversarial
Networks (GANs) with specifically designed adversarial loss functions to
predict the PET-derived myelin content map from a combination of MRI
modalities. The prediction problem is solved by a sketch-refinement process in
which the sketcher generates the preliminary anatomical and physiological
information and the refiner refines and generates images reflecting the tissue
myelin content in the human brain. We evaluated the ability of our method to
predict myelin content at both global and voxel-wise levels. The evaluation
results show that the demyelination in lesion regions and myelin content in
normal-appearing white matter (NAWM) can be well predicted by our method. The
method has the potential to become a useful tool for clinical management of
patients with MS. | [
"cs.CV"
] |
Super resolution (SR) methods typically assume that the low-resolution (LR)
image was downscaled from the unknown high-resolution (HR) image by a fixed
'ideal' downscaling kernel (e.g. Bicubic downscaling). However, this is rarely
the case in real LR images, in contrast to synthetically generated SR datasets.
When the assumed downscaling kernel deviates from the true one, the performance
of SR methods significantly deteriorates. This gave rise to Blind-SR - namely,
SR when the downscaling kernel ("SR-kernel") is unknown. It was further shown
that the true SR-kernel is the one that maximizes the recurrence of patches
across scales of the LR image. In this paper we show how this powerful
cross-scale recurrence property can be realized using Deep Internal Learning.
We introduce "KernelGAN", an image-specific Internal-GAN, which trains solely
on the LR test image at test time, and learns its internal distribution of
patches. Its Generator is trained to produce a downscaled version of the LR
test image, such that its Discriminator cannot distinguish between the patch
distribution of the downscaled image, and the patch distribution of the
original LR image. The Generator, once trained, constitutes the downscaling
operation with the correct image-specific SR-kernel. KernelGAN is fully
unsupervised, requires no training data other than the input image itself, and
leads to state-of-the-art results in Blind-SR when plugged into existing SR
algorithms. | [
"cs.CV"
] |
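The KernelGAN abstract above hinges on an image-specific internal GAN whose generator acts as a learnable downscaler trained only on the LR test image. Below is a minimal, hypothetical PyTorch sketch of that patch-adversarial idea; the layer sizes, fixed crops and training loop are illustrative stand-ins, not the published architecture (which additionally regularizes the implicit kernel).

```python
import torch
import torch.nn as nn

# Sketch: a linear generator downscales the single LR image by 2x, and a patch
# discriminator tries to tell crops of the downscaled output from crops of the
# original LR image. All sizes and crops are illustrative.
G = nn.Sequential(                                  # no activations, so G amounts to
    nn.Conv2d(3, 64, 7, padding=3, bias=False),     # an implicit linear downscaling kernel
    nn.Conv2d(64, 64, 5, padding=2, bias=False),
    nn.Conv2d(64, 3, 1, stride=2, bias=False),
)
D = nn.Sequential(                                  # small fully-convolutional patch critic
    nn.Conv2d(3, 64, 7), nn.ReLU(),
    nn.Conv2d(64, 1, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

lr_img = torch.rand(1, 3, 128, 128)                 # stands in for the single LR test image
crop = lambda x, s: x[..., :s, :s]                  # stand-in for random patch cropping
for _ in range(3):                                  # a few illustrative iterations
    real_patch = crop(lr_img, 32)
    fake_patch = crop(G(lr_img), 32)
    d_real, d_fake = D(real_patch), D(fake_patch.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_out = D(fake_patch)                           # generator tries to fool the critic
    g_loss = bce(g_out, torch.ones_like(g_out))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```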
We present a novel framework for mesh reconstruction from unstructured point
clouds by taking advantage of the learned visibility of the 3D points in the
virtual views and traditional graph-cut based mesh generation. Specifically, we
first propose a three-step network that explicitly employs depth completion for
visibility prediction. Then the visibility information of multiple views is
aggregated to generate a 3D mesh model by solving an optimization problem
considering visibility in which a novel adaptive visibility weighting in
surface determination is also introduced to suppress line of sight with a large
incident angle. Compared to other learning-based approaches, our pipeline only
exercises learning on a 2D binary classification task, i.e., whether points are
visible in a view, which is much more generalizable, practically more efficient,
and able to handle a large number of points. Experiments demonstrate that our
method offers favorable transferability and robustness, achieves competitive
performance w.r.t. state-of-the-art learning-based approaches on small complex
objects, and outperforms them on large indoor and outdoor scenes.
Code is available at https://github.com/GDAOSU/vis2mesh. | [
"cs.CV"
] |
Mainstream crowd counting methods usually utilize a convolutional neural
network (CNN) to regress a density map, requiring point-level annotations.
However, annotating each person with a point is an expensive and laborious
process. During the testing phase, the point-level annotations are not
considered to evaluate the counting accuracy, which means the point-level
annotations are redundant. Hence, it is desirable to develop weakly-supervised
counting methods that just rely on count level annotations, a more economical
way of labeling. Current weakly-supervised counting methods adopt the CNN to
regress a total count of the crowd by an image-to-count paradigm. However,
having limited receptive fields for context modeling is an intrinsic limitation
of these weakly-supervised CNN-based methods. These methods thus cannot
achieve satisfactory performance, which limits their applications in the real
world. The
Transformer is a popular sequence-to-sequence prediction model in NLP, which
contains a global receptive field. In this paper, we propose TransCrowd, which
reformulates the weakly-supervised crowd counting problem from the perspective
of sequence-to-count based on Transformer. We observe that the proposed
TransCrowd can effectively extract the semantic crowd information by using the
self-attention mechanism of Transformer. To the best of our knowledge, this is
the first work to adopt a pure Transformer for crowd counting research.
Experiments on five benchmark datasets demonstrate that the proposed TransCrowd
achieves superior performance compared with all the weakly-supervised CNN-based
counting methods and gains highly competitive counting performance compared
with some popular fully-supervised counting methods. Code is available at
https://github.com/dk-liang/TransCrowd. | [
"cs.CV"
] |
Traditional single image super-resolution (SISR) methods that focus on
solving single and uniform degradation (i.e., bicubic down-sampling), typically
suffer from poor performance when applied to real-world low-resolution (LR)
images due to the complicated realistic degradations. The key to solving this
more challenging real image super-resolution (RealSR) problem lies in learning
feature representations that are both informative and content-aware. In this
paper, we propose an Omni-frequency Region-adaptive Network (ORNet) to address
both challenges; here we call features covering all low, middle and high
frequencies omni-frequency features. Specifically, we start from the frequency
perspective
and design a Frequency Decomposition (FD) module to separate different
frequency components to comprehensively compensate the information lost for
real LR image. Then, considering the different regions of real LR image have
different frequency information lost, we further design a Region-adaptive
Frequency Aggregation (RFA) module by leveraging dynamic convolution and
spatial attention to adaptively restore frequency components for different
regions. Extensive experiments confirm the effectiveness and scenario-agnostic
nature of our ORNet for RealSR. | [
"cs.CV",
"eess.IV"
] |
An increasing number of model-agnostic interpretation techniques for machine
learning (ML) models such as partial dependence plots (PDP), permutation
feature importance (PFI) and Shapley values provide insightful model
interpretations, but can lead to wrong conclusions if applied incorrectly. We
highlight many general pitfalls of ML model interpretation, such as using
interpretation techniques in the wrong context, interpreting models that do not
generalize well, ignoring feature dependencies, interactions, uncertainty
estimates and issues in high-dimensional settings, or making unjustified causal
interpretations, and illustrate them with examples. We focus on pitfalls for
global methods that describe the average model behavior, but many pitfalls also
apply to local methods that explain individual predictions. Our paper addresses
ML practitioners by raising awareness of pitfalls and identifying solutions for
correct model interpretation, but also addresses ML researchers by discussing
open issues for further research. | [
"stat.ML",
"cs.LG"
] |
With the surge of available but unlabeled data, Positive Unlabeled (PU) learning
is becoming a thriving challenge. This work deals with this demanding task for
which recent GAN-based PU approaches have demonstrated promising results.
Generative Adversarial Networks (GANs) are not hampered by deterministic bias
or the need for a specific dimensionality. However, existing GAN-based PU
approaches also present some drawbacks, such as sensitive dependence on prior
knowledge, a
cumbersome architecture or first-stage overfitting. To settle these issues, we
propose to incorporate a biased PU risk within the standard GAN discriminator
loss function. In this manner, the discriminator is constrained to request the
generator to converge towards the unlabeled samples distribution while
diverging from the positive samples distribution. This enables the proposed
model, referred to as D-GAN, to exclusively learn the counter-examples
distribution without prior knowledge. Experiments demonstrate that our approach
outperforms state-of-the-art PU methods without prior by overcoming their
issues. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Electronic health record (EHR) data is sparse and irregular as it is recorded
at irregular time intervals, and different clinical variables are measured at
each observation point. In this work, we propose a multi-view features
integration learning from irregular multivariate time series data by
self-attention mechanism in an imputation-free manner. Specifically, we devise
a novel multi-integration attention module (MIAM) to extract complex
information inherent in irregular time series data. In particular, we
explicitly learn the relationships among the observed values, missing
indicators, and time interval between the consecutive observations,
simultaneously. The rationale behind our approach is the use of human knowledge
such as what to measure and when to measure in different situations, which are
indirectly represented in the data. In addition, we build an attention-based
decoder as a missing value imputer that helps empower the representation
learning of the inter-relations among multi-view observations for the
prediction task, which operates at the training phase only. We validated the
effectiveness of our method over the public MIMIC-III and PhysioNet challenge
2012 datasets by comparing with and outperforming the state-of-the-art methods
for in-hospital mortality prediction. | [
"cs.LG",
"cs.AI"
] |
Previously, statistical textbook wisdom has held that interpolating noisy
data will generalize poorly, but recent work has shown that data interpolation
schemes can generalize well. This could explain why overparameterized deep nets
do not necessarily overfit. Optimal data interpolation schemes have been
exhibited that achieve theoretical lower bounds for excess risk in any
dimension for large data (Statistically Consistent Interpolation). These are
non-parametric Nadaraya-Watson estimators with singular kernels. The recently
proposed weighted interpolating nearest neighbors method (wiNN) is in this
class, as is the previously studied Hilbert kernel interpolation scheme, in
which the estimator has the form $\hat{f}(x)=\sum_i y_i w_i(x)$, where $w_i(x)=
\|x-x_i\|^{-d}/\sum_j \|x-x_j\|^{-d}$. This estimator is unique in being
completely parameter-free. While statistical consistency was previously proven,
convergence rates were not established. Here, we comprehensively study the
finite sample properties of Hilbert kernel regression. We prove that the excess
risk is asymptotically equivalent pointwise to $\sigma^2(x)/\ln(n)$ where
$\sigma^2(x)$ is the noise variance. We show that the excess risk of the plugin
classifier is less than $2|f(x)-1/2|^{1-\alpha}\,(1+\varepsilon)^\alpha
\sigma^\alpha(x)(\ln(n))^{-\frac{\alpha}{2}}$, for any $0<\alpha<1$, where $f$
is the regression function $x\mapsto\mathbb{E}[y|x]$. We derive asymptotic
equivalents of the moments of the weight functions $w_i(x)$ for large $n$, for
instance for $\beta>1$, $\mathbb{E}[w_i^{\beta}(x)]\sim_{n\rightarrow
\infty}((\beta-1)n\ln(n))^{-1}$. We derive an asymptotic equivalent for the
Lagrange function and exhibit the nontrivial extrapolation properties of this
estimator. We present heuristic arguments for a universal $w^{-2}$ power-law
behavior of the probability density of the weights in the large $n$ limit. | [
"cs.LG",
"cond-mat.stat-mech",
"math.FA",
"stat.ML"
] |
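For reference, the Hilbert kernel estimator quoted in the abstract above is simple to state in code. A minimal NumPy sketch of the parameter-free interpolator follows; the toy data and query point are illustrative, not the paper's experiments.

```python
import numpy as np

def hilbert_kernel_regression(X_train, y_train, x_query):
    """f_hat(x) = sum_i y_i w_i(x), with w_i(x) = ||x - x_i||^{-d} / sum_j ||x - x_j||^{-d}.
    Completely parameter-free; interpolates the training labels exactly."""
    d = X_train.shape[1]
    dists = np.linalg.norm(X_train - x_query, axis=1)
    if np.any(dists == 0):                       # query coincides with a training point
        return float(y_train[np.argmin(dists)])
    w = dists ** (-d)
    return float(np.dot(w / w.sum(), y_train))

# Toy usage on noisy 1-D data
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)
print(hilbert_kernel_regression(X, y, np.array([0.25])))
```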
We propose the first learning-based approach for fast moving objects
detection. Such objects are highly blurred and move over large distances within
one video frame. Fast moving objects are associated with a deblurring and
matting problem, also called deblatting. We show that the separation of
deblatting into consecutive matting and deblurring allows achieving real-time
performance, i.e. an order of magnitude speed-up, and thus enables new classes
of applications. The proposed method detects fast moving objects as a truncated
distance function to the trajectory by learning from synthetic data. For the
sharp appearance estimation and accurate trajectory estimation, we propose a
matting and fitting network that estimates the blurred appearance without
background, followed by an energy minimization based deblurring. The
state-of-the-art methods are outperformed in terms of recall, precision,
trajectory estimation, and sharp appearance reconstruction. Compared to other
methods, such as deblatting, the inference is of several orders of magnitude
faster and allows applications such as real-time fast moving object detection
and retrieval in large video collections. | [
"cs.CV"
] |
Object detection has recently achieved a breakthrough by removing the last
non-differentiable component in the pipeline, Non-Maximum Suppression
(NMS), and building an end-to-end system. However, what makes for its
one-to-one prediction has not been well understood. In this paper, we first
point out that one-to-one positive sample assignment is the key factor, while,
one-to-many assignment in previous detectors causes redundant predictions in
inference. Second, we surprisingly find that even training with one-to-one
assignment, previous detectors still produce redundant predictions. We identify
that classification cost in matching cost is the main ingredient: (1) previous
detectors only consider location cost, (2) by additionally introducing
classification cost, previous detectors immediately produce one-to-one
prediction during inference. We introduce the concept of score gap to explore
the effect of matching cost. Classification cost enlarges the score gap by
choosing positive samples as those of highest score in the training iteration
and reducing noisy positive samples brought by only location cost. Finally, we
demonstrate the advantages of end-to-end object detection on crowded scenes.
The code is available at: \url{https://github.com/PeizeSun/OneNet}. | [
"cs.CV"
] |
Machine-learning systems such as self-driving cars or virtual assistants are
composed of a large number of machine-learning models that recognize image
content, transcribe speech, analyze natural language, infer preferences, rank
options, etc. Models in these systems are often developed and trained
independently, which raises an obvious concern: Can improving a
machine-learning model make the overall system worse? We answer this question
affirmatively by showing that improving a model can deteriorate the performance
of downstream models, even after those downstream models are retrained. Such
self-defeating improvements are the result of entanglement between the models
in the system. We perform an error decomposition of systems with multiple
machine-learning models, which sheds light on the types of errors that can lead
to self-defeating improvements. We also present the results of experiments
which show that self-defeating improvements emerge in a realistic stereo-based
detection system for cars and pedestrians. | [
"cs.LG"
] |
For the task of subdecimeter aerial imagery segmentation, fine-grained
semantic segmentation results are usually difficult to obtain because of
complex remote sensing content and optical conditions. Recently, convolutional
neural networks (CNNs) have shown outstanding performance on this task.
Although many deep neural network structures and techniques have been applied
to improve the accuracy, few have paid attention to better differentiating the
easily confused classes. In this paper, we propose TreeSegNet which adopts an
adaptive network to increase the classification rate at the pixelwise level.
Specifically, based on the infrastructure of DeepUNet, a Tree-CNN block in
which each node represents a ResNeXt unit is constructed adaptively according
to the confusion matrix and the proposed TreeCutting algorithm. By transporting
feature maps through concatenating connections, the Tree-CNN block fuses
multiscale features and learns best weights for the model. In experiments on
the ISPRS 2D semantic labeling Potsdam dataset, the results obtained by
TreeSegNet are better than those of other published state-of-the-art methods.
Detailed comparison and analysis show that the improvement brought by the
adaptive Tree-CNN block is significant. | [
"cs.CV"
] |
Confinement during COVID-19 has caused serious effects on agriculture all
over the world. As one of the efficient solutions, mechanical
harvesting/auto-harvesting based on object detection and robotic harvesters
has become an urgent need. Within the auto-harvest system, a robust few-shot
object detection model is one of the bottlenecks, since the system is required
to deal
with new vegetable/fruit categories and the collection of large-scale annotated
datasets for all the novel categories is expensive. There are many few-shot
object detection models that were developed by the community. Yet whether they
could be employed directly for real life agricultural applications is still
questionable, as there is a context-gap between the commonly used training
datasets and the images collected in real life agricultural scenarios. To this
end, in this study, we present a novel cucumber dataset and propose two data
augmentation strategies that help to bridge the context-gap. Experimental
results show that 1) the state-of-the-art few-shot object detection model
performs poorly on the novel `cucumber' category; and 2) the proposed
augmentation strategies outperform the commonly used ones. | [
"cs.CV",
"cs.LG"
] |
We suggest ways to enforce given constraints in the output of a Generative
Adversarial Network (GAN) generator both for interpolation and extrapolation
(prediction). For the case of dynamical systems, given a time series, we wish
to train GAN generators that can be used to predict trajectories starting from
a given initial condition. In this setting, the constraints can be in algebraic
and/or differential form. Even though we are predominantly interested in the
case of extrapolation, we will see that the tasks of interpolation and
extrapolation are related. However, they need to be treated differently.
For the case of interpolation, the incorporation of constraints is built into
the training of the GAN. The incorporation of the constraints respects the
primary game-theoretic setup of a GAN so it can be combined with existing
algorithms. However, it can exacerbate the problem of instability during
training that is well-known for GANs. We suggest adding small noise to the
constraints as a simple remedy that has performed well in our numerical
experiments.
The case of extrapolation (prediction) is more involved. During training, the
GAN generator learns to interpolate a noisy version of the data and we enforce
the constraints. This approach has connections with model reduction that we can
utilize to improve the efficiency and accuracy of the training. Depending on
the form of the constraints, we may enforce them also during prediction through
a projection step. We provide examples of linear and nonlinear systems of
differential equations to illustrate the various constructions. | [
"cs.LG",
"stat.ML",
"68T05, 65L05, 37M10, 62M45, 68Q32"
] |
Modern neural network architectures use structured linear transformations,
such as low-rank matrices, sparse matrices, permutations, and the Fourier
transform, to improve inference speed and reduce memory usage compared to
general linear maps. However, choosing which of the myriad structured
transformations to use (and its associated parameterization) is a laborious
task that requires trading off speed, space, and accuracy. We consider a
different approach: we introduce a family of matrices called kaleidoscope
matrices (K-matrices) that provably capture any structured matrix with
near-optimal space (parameter) and time (arithmetic operation) complexity. We
empirically validate that K-matrices can be automatically learned within
end-to-end pipelines to replace hand-crafted procedures, in order to improve
model quality. For example, replacing channel shuffles in ShuffleNet improves
classification accuracy on ImageNet by up to 5%. K-matrices can also simplify
hand-engineered pipelines -- we replace filter bank feature computation in
speech data preprocessing with a learnable kaleidoscope layer, resulting in
only 0.4% loss in accuracy on the TIMIT speech recognition task. In addition,
K-matrices can capture latent structure in models: for a challenging permuted
image classification task, a K-matrix based representation of permutations is
able to learn the right latent structure and improves accuracy of a downstream
convolutional model by over 9%. We provide a practically efficient
implementation of our approach, and use K-matrices in a Transformer network to
attain 36% faster end-to-end inference speed on a language translation task. | [
"cs.LG",
"stat.ML"
] |
Learning the kernel functions used in kernel methods has been a vastly
explored area in machine learning. It is now widely accepted that to obtain
'good' performance, learning a kernel function is the key challenge. In this
work we focus on learning kernel representations for structured regression. We
propose the use of polynomial expansions of kernels, referred to as Schoenberg
transforms and Gegenbauer transforms, which arise from the seminal result of
Schoenberg (1938). These kernels can be thought of as polynomial combination of
input features in a high dimensional reproducing kernel Hilbert space (RKHS).
We learn kernels over input and output for structured data, such that,
dependency between kernel features is maximized. We use Hilbert-Schmidt
Independence Criterion (HSIC) to measure this. We also give an efficient,
matrix decomposition-based algorithm to learn these kernel transformations, and
demonstrate state-of-the-art results on several real-world datasets. | [
"cs.LG",
"stat.ML"
] |
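The dependency measure used in the abstract above, HSIC, has a standard biased empirical form, trace(KHLH)/(n-1)^2 with centering matrix H. A small NumPy sketch is given below with generic RBF kernels on toy data; it illustrates the criterion only, not the Schoenberg/Gegenbauer kernel transforms the paper learns.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def hsic(K, L):
    """Biased empirical HSIC between input kernel K and output kernel L."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = X[:, :1] ** 2 + 0.1 * rng.normal(size=(100, 1))   # structured output depends on X
print(hsic(rbf_kernel(X), rbf_kernel(Y)))             # larger value = stronger dependence
```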
Convolutional neural networks (CNNs) have been widely used in various vision
tasks, e.g. image classification, semantic segmentation, etc. Unfortunately,
standard 2D CNNs are not well suited for spherical signals such as panorama
images or spherical projections, as the sphere is an unstructured grid. In this
paper, we present Spherical Transformer which can transform spherical signals
into vectors that can be directly processed by standard CNNs such that many
well-designed CNNs architectures can be reused across tasks and datasets by
pretraining. To this end, the proposed method first uses locally structured
sampling methods such as HEALPix to construct a transformer grid by using the
information of spherical points and its adjacent points, and then transforms
the spherical signals to the vectors through the grid. By building the
Spherical Transformer module, we can use multiple CNN architectures directly.
We evaluate our approach on the tasks of spherical MNIST recognition, 3D object
classification and omnidirectional image semantic segmentation. For 3D object
classification, we further propose a rendering-based projection method to
improve the performance and a rotational-equivariant model to improve the
anti-rotation ability. Experimental results on three tasks show that our
approach achieves superior performance over state-of-the-art methods. | [
"cs.CV"
] |
We present a simple method for assessing the quality of generated images in
Generative Adversarial Networks (GANs). The method can be applied in any kind
of GAN without interfering with the learning procedure or affecting the
learning objective. The central idea is to define a likelihood function that
correlates with the quality of the generated images. In particular, we derive a
Gaussian likelihood function from the distribution of the embeddings (hidden
activations) of the real images in the discriminator, and based on this, define
two simple measures of how likely it is that the embeddings of generated images
are from the distribution of the embeddings of the real images. This yields a
simple measure of fitness for generated images, for all varieties of GANs.
Empirical results on CIFAR-10 demonstrate a strong correlation between the
proposed measures and the perceived quality of the generated images. | [
"cs.LG",
"cs.AI"
] |
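A plausible reading of the measure described above: fit a Gaussian to the discriminator embeddings of real images, then score generated images by their likelihood under that fit. The NumPy sketch below uses random placeholder embeddings; in practice they would be hidden activations taken from the trained discriminator.

```python
import numpy as np

def fit_gaussian(embeddings):
    """Fit a Gaussian to discriminator embeddings of real images."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + 1e-6 * np.eye(embeddings.shape[1])
    return mu, cov

def mean_log_likelihood(embeddings, mu, cov):
    """Average Gaussian log-likelihood of generated-image embeddings under the real fit."""
    d = embeddings.shape[1]
    diff = embeddings - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return float(np.mean(-0.5 * (mahal + logdet + d * np.log(2 * np.pi))))

rng = np.random.default_rng(0)
real_emb = rng.normal(size=(500, 64))               # placeholder discriminator embeddings
fake_emb = rng.normal(loc=0.5, size=(500, 64))
mu, cov = fit_gaussian(real_emb)
print(mean_log_likelihood(fake_emb, mu, cov))       # lower values suggest poorer samples
```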
Can generative adversarial networks (GANs) generate roses of various colors
given only roses of red petals as input? The answer is negative, since GANs'
discriminator would reject all roses of unseen petal colors. In this study, we
propose knowledge-guided GAN (KG-GAN) to fuse domain knowledge with the GAN
framework. KG-GAN trains two generators; one learns from data whereas the other
learns from knowledge with a constraint function. Experimental results
demonstrate the effectiveness of KG-GAN in generating unseen flower categories
from seen categories given textual descriptions of the unseen ones. | [
"cs.CV"
] |
In this paper, we present an Improved Data Augmentation (IDA) technique
focused on Salient Object Detection (SOD). Standard data augmentation
techniques proposed in the literature, such as image cropping, rotation,
flipping, and resizing, only generate variations of the existing examples,
providing a limited generalization. Our method combines image inpainting,
affine transformations, and the linear combination of different generated
background images with salient objects extracted from labeled data. Our
proposed technique enables more precise control of the object's position and
size while preserving background information. The background choice is based on
an inter-image optimization, while object size follows a uniform random
distribution within a specified interval, and the object position is
intra-image optimal. We show that our method improves the segmentation quality
when used for training state-of-the-art neural networks on several famous
datasets of the SOD field. Combining our method with others surpasses
traditional techniques such as horizontal flipping by 0.52% in F-measure and
1.19% in Precision. We also provide an evaluation on 7 different SOD datasets,
with
9 distinct evaluation metrics and an average ranking of the evaluated methods. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Policy gradient methods are very attractive in reinforcement learning due to
their model-free nature and convergence guarantees. These methods, however,
suffer from high variance in gradient estimation, resulting in poor sample
efficiency. To mitigate this issue, a number of variance-reduction approaches
have been proposed. Unfortunately, in the challenging problems with delayed
rewards, these approaches either bring a relatively modest improvement or
reduce variance at the expense of introducing a bias and undermining
convergence.
The unbiased methods of gradient estimation, in general, only partially reduce
variance, without eliminating it completely even in the limit of exact
knowledge of the value functions and problem dynamics, as one might have
wished. In this work we propose an unbiased method that does completely
eliminate variance under some, commonly encountered, conditions. Of practical
interest is the limit of deterministic dynamics and small policy stochasticity.
In the case of a quadratic value function, as in linear quadratic Gaussian
models, the policy randomness need not be small. We use such a model to analyze
performance of the proposed variance-elimination approach and compare it with
standard variance-reduction methods. The core idea behind the approach is to
use control variates at all future times down the trajectory. We present both a
model-based and model-free formulations. | [
"cs.LG",
"stat.ML"
] |
There is a large literature explaining why AdaBoost is a successful
classifier. The literature on AdaBoost focuses on classifier margins and
boosting's interpretation as the optimization of an exponential likelihood
function. These existing explanations, however, have been pointed out to be
incomplete. A random forest is another popular ensemble method for which there
is substantially less explanation in the literature. We introduce a novel
perspective on AdaBoost and random forests that proposes that the two
algorithms work for similar reasons. While both classifiers achieve similar
predictive accuracy, random forests cannot be conceived as a direct
optimization procedure. Rather, random forests is a self-averaging,
interpolating algorithm which creates what we denote as a "spikey-smooth"
classifier, and we view AdaBoost in the same light. We conjecture that both
AdaBoost and random forests succeed because of this mechanism. We provide a
number of examples and some theoretical justification to support this
explanation. In the process, we question the conventional wisdom that suggests
that boosting algorithms for classification require regularization or early
stopping and should be limited to low complexity classes of learners, such as
decision stumps. We conclude that boosting should be used like random forests:
with large decision trees and without direct regularization or early stopping. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
The ability to generate natural language explanations conditioned on the
visual perception is a crucial step towards autonomous agents which can explain
themselves and communicate with humans. While the research efforts in image and
video captioning are giving promising results, this is often done at the
expense of the computational requirements of the approaches, limiting their
applicability to real contexts. In this paper, we propose a fully-attentive
captioning algorithm which can provide state-of-the-art performances on
language generation while restricting its computational demands. Our model is
inspired by the Transformer model and employs only two Transformer layers in
the encoding and decoding stages. Further, it incorporates a novel memory-aware
encoding of image regions. Experiments demonstrate that our approach achieves
competitive results in terms of caption quality while featuring reduced
computational demands. Further, to evaluate its applicability on autonomous
agents, we conduct experiments on simulated scenes taken from the perspective
of domestic robots. | [
"cs.CV",
"cs.CL",
"cs.RO"
] |
Automated detection of cervical cancer cells or cell clumps has the potential
to significantly reduce error rate and increase productivity in cervical cancer
screening. However, most traditional methods rely on the success of accurate
cell segmentation and discriminative hand-crafted features extraction. Recently
there are emerging deep learning-based methods which train convolutional neural
networks (CNN) to classify image patches, but they are computationally
expensive. In this paper we propose an efficient CNN-based object detection
methods for cervical cancer cells/clumps detection. Specifically, we utilize
the state-of-the-art two-stage object detection method, the Faster-RCNN with
Feature Pyramid Network (FPN) as the baseline and propose a novel comparison
detector to deal with the limited-data problem. The key idea is to classify
the proposals by comparing them with the reference samples of each category in
object detection. In addition, we propose to learn the reference samples of the
background from data instead of manually choosing them by some heuristic rules.
Experimental results show that the proposed Comparison Detector yields
significant improvement on the small dataset, achieving a mean Average
Precision (mAP) of 26.3% and an Average Recall (AR) of 35.7%, both improving
about 20 points compared to the baseline. Moreover, Comparison Detector
improved AR by 4.6 points and achieved marginally better performance in terms
of mAP compared with baseline model when training on the medium dataset. Our
method is promising for the development of automation-assisted cervical cancer
screening systems. Code is available at
https://github.com/kuku-sichuan/ComparisonDetector. | [
"cs.CV"
] |
The optical flow of humans is well known to be useful for the analysis of
human action. Given this, we devise an optical flow algorithm specifically for
human motion and show that it is superior to generic flow methods. Designing a
method by hand is impractical, so we develop a new training database of image
sequences with ground truth optical flow. For this we use a 3D model of the
human body and motion capture data to synthesize realistic flow fields. We then
train a convolutional neural network to estimate human flow fields from pairs
of images. Since many applications in human motion analysis depend on speed,
and we anticipate mobile applications, we base our method on SpyNet with
several modifications. We demonstrate that our trained network is more accurate
than a wide range of top methods on held-out test data and that it generalizes
well to real image sequences. When combined with a person detector/tracker, the
approach provides a full solution to the problem of 2D human flow estimation.
Both the code and the dataset are available for research. | [
"cs.CV"
] |
Counterfactual explanations and adversarial examples have emerged as critical
research areas for addressing the explainability and robustness goals of
machine learning (ML). While counterfactual explanations were developed with
the goal of providing recourse to individuals adversely impacted by algorithmic
decisions, adversarial examples were designed to expose the vulnerabilities of
ML models. While prior research has hinted at the commonalities between these
frameworks, there has been little to no work on systematically exploring the
connections between the literature on counterfactual explanations and
adversarial examples. In this work, we make one of the first attempts at
formalizing the connections between counterfactual explanations and adversarial
examples. More specifically, we theoretically analyze salient counterfactual
explanation and adversarial example generation methods, and highlight the
conditions under which they behave similarly. Our analysis demonstrates that
several popular counterfactual explanation and adversarial example generation
methods such as the ones proposed by Wachter et al. and Carlini and Wagner
(with mean squared error loss), and C-CHVAE and natural adversarial examples by
Zhao et al. are equivalent. We also bound the distance between counterfactual
explanations and adversarial examples generated by Wachter et al. and DeepFool
methods for linear models. Finally, we empirically validate our theoretical
findings using extensive experimentation with synthetic and real world
datasets. | [
"cs.LG",
"cs.AI"
] |
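For concreteness, the Wachter et al. style objective analyzed in that abstract is typically solved by gradient descent on lambda * (f(x') - y')^2 + d(x', x). Below is a small PyTorch sketch of that recipe; the loss weight, L1 distance, step counts and the toy linear model are illustrative assumptions, not choices taken from the paper.

```python
import torch

def wachter_counterfactual(model, x, target, lam=1.0, steps=500, lr=0.05):
    """Gradient-based counterfactual in the style of Wachter et al.:
    minimize lam * (model(x') - target)^2 + ||x' - x||_1 over x'."""
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = lam * (model(x_cf) - target).pow(2).sum() + (x_cf - x).abs().sum()
        loss.backward()
        opt.step()
    return x_cf.detach()

# Toy usage with a hypothetical linear scoring model
model = torch.nn.Linear(4, 1)
x = torch.zeros(1, 4)
x_cf = wachter_counterfactual(model, x, target=torch.tensor([[1.0]]))
print(x_cf)
```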
Terrestrial laser scanning (TLS) can obtain tree point cloud with high
precision and high density. Efficient classification of wood points and leaf
points is essential to study tree structural parameters and ecological
characteristics. By using both the intensity and spatial information, a
three-step classification and verification method was proposed to achieve
automated wood-leaf classification. Tree point cloud was classified into wood
points and leaf points by using intensity threshold, neighborhood density and
voxelization successively. The experiment was carried out in Haidian Park, Beijing, and
24 trees were scanned by using the RIEGL VZ-400 scanner. The tree point clouds
were processed by using the proposed method, whose classification results were
compared with the manual classification results which were used as standard
results. To evaluate the classification accuracy, three indicators were used in
the experiment, which are Overall Accuracy (OA), Kappa coefficient (Kappa) and
Matthews correlation coefficient (MCC). The ranges of OA, Kappa and MCC of the
proposed method are from 0.9167 to 0.9872, from 0.7276 to 0.9191, and from
0.7544 to 0.9211 respectively. The average values of OA, Kappa and MCC are
0.9550, 0.8547 and 0.8627 respectively. Time cost of wood-leaf classification
was also recorded to evaluate the algorithm's efficiency. The average
processing time is 1.4 seconds per million points. The results showed that the
proposed
method performed well automatically and quickly on wood-leaf classification
based on the experimental dataset. | [
"cs.CV"
] |
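A very rough schematic of the three-step idea described above (intensity threshold, neighborhood-density check, voxel check) is sketched below in NumPy/SciPy. All thresholds and the exact verification logic are invented placeholders for illustration, not the paper's values.

```python
import numpy as np
from scipy.spatial import cKDTree

def wood_leaf_sketch(points, intensity, int_thresh=-8.0, radius=0.05,
                     k_min=30, vox=0.1, v_min=3):
    """Schematic three-step wood/leaf split with placeholder thresholds."""
    cand = intensity > int_thresh                                  # step 1: intensity threshold
    idx = np.flatnonzero(cand)
    tree = cKDTree(points[idx])
    counts = np.array([len(tree.query_ball_point(p, radius)) for p in points[idx]])
    idx = idx[counts >= k_min]                                     # step 2: neighborhood density
    keys = [tuple(v) for v in np.floor(points[idx] / vox).astype(int)]
    occupancy = {}
    for k in keys:                                                 # step 3: voxel occupancy
        occupancy[k] = occupancy.get(k, 0) + 1
    keep = np.array([occupancy[k] >= v_min for k in keys])
    wood = np.zeros(len(points), dtype=bool)
    wood[idx[keep]] = True
    return wood                                                    # True = wood, False = leaf
```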
Persistence diagrams concisely represent the topology of a point cloud whilst
having strong theoretical guarantees, but the question of how to best integrate
this information into machine learning workflows remains open. In this paper we
extend the ubiquitous Fuzzy c-Means (FCM) clustering algorithm to the space of
persistence diagrams, enabling unsupervised learning that automatically
captures the topological structure of data without the topological prior
knowledge or additional processing of persistence diagrams that many other
techniques require. We give theoretical convergence guarantees that correspond
to the Euclidean case, and empirically demonstrate the capability of our
algorithm to capture topological information via the fuzzy RAND index. We end
with experiments on two datasets that utilise both the topological and fuzzy
nature of our algorithm: pre-trained model selection in machine learning and
lattice structures from materials science. As pre-trained models can perform
well on multiple tasks, selecting the best model is a naturally fuzzy problem;
we show that fuzzy clustering persistence diagrams allows for model selection
using the topology of decision boundaries. In materials science, we classify
transformed lattice structure datasets for the first time, whilst the
probabilistic membership values let us rank candidate lattices in a scenario
where further investigation requires expensive laboratory time and expertise. | [
"cs.LG",
"stat.ML"
] |
For many practical, high-risk applications, it is essential to quantify
uncertainty in a model's predictions to avoid costly mistakes. While predictive
uncertainty is widely studied for neural networks, the topic seems to be
under-explored for models based on gradient boosting. However, gradient
boosting often achieves state-of-the-art results on tabular data. This work
examines a probabilistic ensemble-based framework for deriving uncertainty
estimates in the predictions of gradient boosting classification and regression
models. We conducted experiments on a range of synthetic and real datasets and
investigated the applicability of ensemble approaches to gradient boosting
models that are themselves ensembles of decision trees. Our analysis shows that
ensembles of gradient boosting models successfully detect anomalous inputs
while having limited ability to improve the predicted total uncertainty.
Importantly, we also propose a concept of a virtual ensemble to get the
benefits of an ensemble via only one gradient boosting model, which
significantly reduces complexity. | [
"cs.LG",
"stat.ML"
] |
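The "virtual ensemble" idea above can be imitated with a single boosted model by treating truncations of it (different numbers of trees) as ensemble members and using their disagreement as an uncertainty proxy. A scikit-learn sketch follows; the hyperparameters and truncation points are arbitrary, and this is only one simple reading of the concept.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

gbm = GradientBoostingRegressor(n_estimators=400, learning_rate=0.05).fit(X, y)

X_test = np.array([[0.5], [10.0]])                   # in-distribution vs. anomalous input
staged = np.stack(list(gbm.staged_predict(X_test)))  # predictions after each added tree
members = staged[200::50]                            # truncated sub-models act as members
print(members.mean(axis=0))                          # ensemble-style mean prediction
print(members.std(axis=0))                           # disagreement as an uncertainty proxy
```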
In this paper, we present a simple yet effective Boolean map based
representation that exploits connectivity cues for visual tracking. We describe
a target object with histogram of oriented gradients and raw color features, of
which each one is characterized by a set of Boolean maps generated by uniformly
thresholding their values. The Boolean maps effectively encode multi-scale
connectivity cues of the target with different granularities. The fine-grained
Boolean maps capture spatially structural details that are effective for
precise target localization while the coarse-grained ones encode global shape
information that are robust to large target appearance variations. Finally, all
the Boolean maps form together a robust representation that can be approximated
by an explicit feature map of the intersection kernel, which is fed into a
logistic regression classifier with online update, and the target location is
estimated within a particle filter framework. The proposed representation
scheme is computationally efficient and facilitates achieving favorable
performance in terms of accuracy and robustness against the state-of-the-art
tracking methods on a large benchmark dataset of 50 image sequences. | [
"cs.CV"
] |
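The core representation in that tracker, Boolean maps obtained by uniformly thresholding a feature channel, is easy to illustrate. A minimal NumPy sketch is below; the number of thresholds and the synthetic image are arbitrary choices.

```python
import numpy as np

def boolean_maps(feature_map, n_thresholds=8):
    """Generate a stack of Boolean maps by uniformly thresholding one feature channel
    (e.g. a color or HOG channel) scaled to [0, 1]."""
    thresholds = np.linspace(0.0, 1.0, n_thresholds + 2)[1:-1]    # uniform interior thresholds
    return np.stack([feature_map > t for t in thresholds])         # (n_thresholds, H, W) bools

# Toy usage: Boolean maps of a synthetic horizontal gradient
img = np.tile(np.linspace(0, 1, 64), (64, 1))
maps = boolean_maps(img)
print(maps.shape, maps.dtype)   # (8, 64, 64) bool
```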
We consider the problem of learning a tree-structured Ising model from data,
such that subsequent predictions computed using the model are accurate.
Concretely, we aim to learn a model such that posteriors $P(X_i|X_S)$ for small
sets of variables $S$ are accurate. Since its introduction more than 50 years
ago, the Chow-Liu algorithm, which efficiently computes the maximum likelihood
tree, has been the benchmark algorithm for learning tree-structured graphical
models. A bound on the sample complexity of the Chow-Liu algorithm with respect
to the prediction-centric local total variation loss was shown in [BK19]. While
those results demonstrated that it is possible to learn a useful model even
when recovering the true underlying graph is impossible, their bound depends on
the maximum strength of interactions and thus does not achieve the
information-theoretic optimum. In this paper, we introduce a new algorithm that
carefully combines elements of the Chow-Liu algorithm with tree metric
reconstruction methods to efficiently and optimally learn tree Ising models
under a prediction-centric loss. Our algorithm is robust to model
misspecification and adversarial corruptions. In contrast, we show that the
celebrated Chow-Liu algorithm can be arbitrarily suboptimal. | [
"cs.LG",
"cs.DS",
"cs.IT",
"math.IT",
"math.ST",
"stat.TH"
] |
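For context, the Chow-Liu baseline that the abstract builds on computes pairwise empirical mutual information and takes a maximum-weight spanning tree over it. A compact NumPy sketch of that classical algorithm follows (the paper's new prediction-centric method adds tree-metric reconstruction on top and is not shown here).

```python
import numpy as np
from itertools import combinations

def chow_liu_tree(samples):
    """Classic Chow-Liu: empirical pairwise mutual information + max spanning tree.
    samples: (n, p) array of binary observations (e.g. +/-1 spins)."""
    n, p = samples.shape
    mi = np.zeros((p, p))
    for i, j in combinations(range(p), 2):
        xi, xj = (samples[:, i] > 0).astype(int), (samples[:, j] > 0).astype(int)
        joint = np.array([[np.mean((xi == a) & (xj == b)) for b in (0, 1)]
                          for a in (0, 1)]) + 1e-12
        pi, pj = joint.sum(axis=1), joint.sum(axis=0)
        mi[i, j] = mi[j, i] = np.sum(joint * np.log(joint / np.outer(pi, pj)))
    in_tree, edges = {0}, []                       # Prim's algorithm on MI weights
    while len(in_tree) < p:
        i, j = max(((a, b) for a in in_tree for b in range(p) if b not in in_tree),
                   key=lambda e: mi[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Toy usage: a chain X0 -> X1 -> X2 of correlated +/-1 spins
rng = np.random.default_rng(0)
x0 = rng.choice([-1, 1], size=2000)
x1 = x0 * rng.choice([1, -1], p=[0.9, 0.1], size=2000)
x2 = x1 * rng.choice([1, -1], p=[0.9, 0.1], size=2000)
print(chow_liu_tree(np.stack([x0, x1, x2], axis=1)))   # expected edges: (0, 1) and (1, 2)
```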
Quantifying uncertainty in medical image segmentation applications is
essential, as it is often connected to vital decision-making. Compelling
attempts have been made in quantifying the uncertainty in image segmentation
architectures, e.g. to learn a density segmentation model conditioned on the
input image. Typical work in this field restricts these learnt densities to be
strictly Gaussian. In this paper, we propose to use a more flexible approach by
introducing Normalizing Flows (NFs), which enables the learnt densities to be
more complex and facilitate more accurate modeling for uncertainty. We prove
this hypothesis by adopting the Probabilistic U-Net and augmenting the
posterior density with an NF, allowing it to be more expressive. Our
qualitative as well as quantitative (GED and IoU) evaluations on the
multi-annotated and single-annotated LIDC-IDRI and Kvasir-SEG segmentation
datasets, respectively, show a clear improvement. This is mostly apparent in
the quantification of aleatoric uncertainty and the increased predictive
performance of up to 14 percent. This result strongly indicates that a more
flexible density model should be seriously considered in architectures that
attempt to capture segmentation ambiguity through density modeling. The benefit
of this improved modeling will increase human confidence in annotation and
segmentation, and enable eager adoption of the technology in practice. | [
"cs.CV",
"cs.LG"
] |
The wide adoption of Electronic Health Records (EHR) has resulted in large
amounts of clinical data becoming available, which promises to support service
delivery and advance clinical and informatics research. Deep learning
techniques have demonstrated performance in predictive analytic tasks using
EHRs yet they typically lack model result transparency or explainability
functionalities and require cumbersome pre-processing tasks. Moreover, EHRs
contain heterogeneous and multi-modal data points such as text, numbers and
time series which further hinder visualisation and interpretability. This paper
proposes a deep learning framework to: 1) encode patient pathways from EHRs
into images, 2) highlight important events within pathway images, and 3) enable
more complex predictions with additional intelligibility. The proposed method
relies on a deep attention mechanism for visualisation of the predictions and
allows predicting multiple sequential outcomes. | [
"cs.LG",
"cs.AI"
] |
Unsupervised pretraining has been widely used to aid human action
recognition. However, existing methods focus on reconstructing the already
present frames rather than generating frames that happen in the future. In
this paper, we propose an improved Variational Autoencoder model to extract
features that are highly connected to the coming scenarios, also known as
Predictive Learning. Our framework is as follows: two-stream 3D convolutional
neural networks are used to extract both spatial and temporal information as
latent variables. Then a resampling method is introduced to create new
normally distributed probabilistic latent variables, and finally a
deconvolutional neural network uses these latent variables to generate the
next frames. Through this process, we train the model to focus more on how to
generate the future, and thus it extracts features highly connected to the
future. In the experimental stage, a large number of experiments on the UT and
UCF101 datasets reveal that future generation does improve prediction
performance. Moreover, the Future Representation Learning Network reaches a
higher score than other methods under half observation. This means that Future
Representation Learning is better than traditional Representation Learning and
other state-of-the-art methods at solving human action prediction problems to
some extent. | [
"cs.CV"
] |
We present diffusion-convolutional neural networks (DCNNs), a new model for
graph-structured data. Through the introduction of a diffusion-convolution
operation, we show how diffusion-based representations can be learned from
graph-structured data and used as an effective basis for node classification.
DCNNs have several attractive qualities, including a latent representation for
graphical data that is invariant under isomorphism, as well as polynomial-time
prediction and learning that can be represented as tensor operations and
efficiently implemented on the GPU. Through several experiments with real
structured datasets, we demonstrate that DCNNs are able to outperform
probabilistic relational models and kernel-on-graph methods at relational node
classification tasks. | [
"cs.LG"
] |
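The diffusion-convolution operation summarized above stacks node features diffused over increasing numbers of hops of the degree-normalized transition matrix; a weight tensor and nonlinearity then act on that stack. A minimal NumPy sketch of the hop-wise diffusion part is given below on a toy graph.

```python
import numpy as np

def diffusion_features(A, X, n_hops=3):
    """Build hop-wise diffused features P^k X for k = 1..n_hops, where P is the
    row-normalized transition matrix of adjacency A. Stacking these gives the
    DCNN-style tensor that weights and a nonlinearity subsequently act on."""
    P = A / A.sum(axis=1, keepdims=True)          # row-normalized transition matrix
    feats, Pk = [], np.eye(A.shape[0])
    for _ in range(n_hops):
        Pk = Pk @ P                                # P^1, P^2, ...
        feats.append(Pk @ X)                       # features diffused over k hops
    return np.stack(feats, axis=1)                 # (n_nodes, n_hops, n_features)

# Toy graph: 4-cycle with 2-d node features
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)
print(diffusion_features(A, X).shape)              # (4, 3, 2)
```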
We propose a combination of a variational autoencoder and a transformer based
model which fully utilises graph convolutional and graph pooling layers to
operate directly on graphs. The transformer model implements a novel node
encoding layer, replacing the position encoding typically used in transformers,
to create a transformer with no position information that operates on graphs,
encoding adjacent node properties into the edge generation process. The
proposed model builds on graph generative work operating on graphs with edge
features, creating a model that offers improved scalability with the number of
nodes in a graph. In addition, our model is capable of learning a disentangled,
interpretable latent space that represents graph properties through a mapping
between latent variables and graph properties. In experiments we chose a
benchmark task of molecular generation, given the importance of both generated
node and edge features. Using the QM9 dataset we demonstrate that our model
performs strongly across the task of generating valid, unique and novel
molecules. Finally, we demonstrate that the model is interpretable by
generating molecules controlled by molecular properties, and we then analyse
and visualise the learned latent representation. | [
"cs.LG"
] |
The Region Proposal Network (RPN) is the cornerstone of two-stage object
detectors: it generates a sparse set of object proposals and alleviates the
extreme foreground-background class imbalance problem during training. However,
we find that the potential of the detector has not been fully exploited due to
the IoU distribution imbalance and inadequate quantity of the training samples
generated by RPN. With the increasing intersection over union (IoU), the
exponentially smaller number of positive samples leads to a distribution
skewed towards lower IoUs, which hinders the optimization of the detector at
high IoU levels. In this paper, to break through the limitations of
RPN, we propose IoU-Uniform R-CNN, a simple but effective method that directly
generates training samples with uniform IoU distribution for the regression
branch as well as the IoU prediction branch. Besides, we improve the
performance of IoU prediction branch by eliminating the feature offsets of RoIs
at inference, which helps the NMS procedure by preserving accurately localized
bounding box. Extensive experiments on the PASCAL VOC and MS COCO dataset show
the effectiveness of our method, as well as its compatibility and adaptivity to
many object detection architectures. The code is made publicly available at
https://github.com/zl1994/IoU-Uniform-R-CNN. | [
"cs.CV"
] |
Value factorisation proves to be a very useful technique in multi-agent
reinforcement learning (MARL), but the underlying mechanism is not yet fully
understood. This paper explores a theoretic basis for value factorisation. We
generalise the Shapley value in the coalitional game theory to a Markov convex
game (MCG) and use it to guide value factorisation in MARL. We show that the
generalised Shapley value possesses several features such as (1) accurate
estimation of the maximum global value, (2) fairness in the factorisation of
the global value, and (3) being sensitive to dummy agents. The proposed theory
yields a new learning algorithm called Shapley Q-learning (SHAQ), which
inherits the important merits of ordinary Q-learning but extends it to MARL. In
comparison with prior-arts, SHAQ has a much weaker assumption (MCG) that is
more compatible with real-world problems, but has superior explainability and
performance in many cases. We demonstrate SHAQ and verify the theoretical
claims on Predator-Prey and the StarCraft Multi-Agent Challenge (SMAC). | [
"cs.LG",
"cs.AI",
"cs.MA"
] |
This paper develops a Pontryagin Differentiable Programming (PDP)
methodology, which establishes a unified framework to solve a broad class of
learning and control tasks. The PDP distinguishes from existing methods by two
novel techniques: first, we differentiate through Pontryagin's Maximum
Principle, and this allows to obtain the analytical derivative of a trajectory
with respect to tunable parameters within an optimal control system, enabling
end-to-end learning of dynamics, policies, or/and control objective functions;
and second, we propose an auxiliary control system in the backward pass of the
PDP framework, and the output of this auxiliary control system is the
analytical derivative of the original system's trajectory with respect to the
parameters, which can be iteratively solved using standard control tools. We
investigate three learning modes of the PDP: inverse reinforcement learning,
system identification, and control/planning. We demonstrate the capability of
the PDP in each learning mode on different high-dimensional systems, including
multi-link robot arm, 6-DoF maneuvering quadrotor, and 6-DoF rocket powered
landing. | [
"cs.LG",
"cs.RO",
"cs.SY",
"eess.SY",
"math.OC"
] |
This paper describes InfoGAN, an information-theoretic extension to the
Generative Adversarial Network that is able to learn disentangled
representations in a completely unsupervised manner. InfoGAN is a generative
adversarial network that also maximizes the mutual information between a small
subset of the latent variables and the observation. We derive a lower bound to
the mutual information objective that can be optimized efficiently, and show
that our training procedure can be interpreted as a variation of the Wake-Sleep
algorithm. Specifically, InfoGAN successfully disentangles writing styles from
digit shapes on the MNIST dataset, pose from lighting of 3D rendered images,
and background digits from the central digit on the SVHN dataset. It also
discovers visual concepts that include hair styles, presence/absence of
eyeglasses, and emotions on the CelebA face dataset. Experiments show that
InfoGAN learns interpretable representations that are competitive with
representations learned by existing fully supervised methods. | [
"cs.LG",
"stat.ML"
] |
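The mutual-information term in InfoGAN is usually realized with an auxiliary recognition head Q that reconstructs the latent code from generated samples; its negative log-likelihood lower-bounds the mutual information being maximized. A short, hypothetical PyTorch sketch of that term for a categorical code is below; G, D, Q and the loss weight are placeholders, not the paper's exact networks.

```python
import torch
import torch.nn.functional as F

def info_loss(q_logits, c_true):
    """Auxiliary InfoGAN term for a categorical code: cross-entropy between Q's
    prediction on a generated sample and the code actually fed to the generator.
    Minimizing it w.r.t. both G and Q maximizes a lower bound on I(c; G(z, c))."""
    return F.cross_entropy(q_logits, c_true)

# Illustrative use inside a generator update (G, D, Q, lambda_info are placeholders):
#   z = torch.randn(batch, z_dim); c = torch.randint(0, n_categories, (batch,))
#   x_fake = G(z, c)
#   g_loss = adversarial_loss(D(x_fake)) + lambda_info * info_loss(Q(x_fake), c)
```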
Normalizing flows provide a general mechanism for defining expressive
probability distributions, only requiring the specification of a (usually
simple) base distribution and a series of bijective transformations. There has
been much recent work on normalizing flows, ranging from improving their
expressive power to expanding their application. We believe the field has now
matured and is in need of a unified perspective. In this review, we attempt to
provide such a perspective by describing flows through the lens of
probabilistic modeling and inference. We place special emphasis on the
fundamental principles of flow design, and discuss foundational topics such as
expressive power and computational trade-offs. We also broaden the conceptual
framing of flows by relating them to more general probability transformations.
Lastly, we summarize the use of flows for tasks such as generative modeling,
approximate inference, and supervised learning. | [
"stat.ML",
"cs.LG"
] |
Translating machine learning (ML) models effectively to clinical practice
requires establishing clinicians' trust. Explainability, or the ability of an
ML model to justify its outcomes and assist clinicians in rationalizing the
model prediction, has been generally understood to be critical to establishing
trust. However, the field suffers from the lack of concrete definitions for
usable explanations in different settings. To identify specific aspects of
explainability that may catalyze building trust in ML models, we surveyed
clinicians from two distinct acute care specialties (Intensive Care Unit and
Emergency Department). We use their feedback to characterize when
explainability helps to improve clinicians' trust in ML models. We further
identify the classes of explanations that clinicians identified as most
relevant and crucial for effective translation to clinical practice. Finally,
we discern concrete metrics for rigorous evaluation of clinical explainability
methods. By integrating perceptions of explainability between clinicians and ML
researchers we hope to facilitate the endorsement and broader adoption and
sustained use of ML systems in healthcare. | [
"cs.LG",
"stat.ML"
] |
Depth completion aims at inferring a dense depth image from sparse depth
measurements, since glossy, transparent or distant surfaces cannot be scanned
properly by the sensor. Most existing methods directly interpolate the
missing depth measurements based on pixel-wise image content and the
corresponding neighboring depth values. Consequently, this leads to blurred
boundaries or inaccurate structure of object. To address these problems, we
propose a novel self-guided instance-aware network (SG-IANet) that: (1)
utilizes a self-guided mechanism to extract the instance-level features needed
for depth restoration, (2) exploits geometric and context information in
network learning to conform to the underlying constraints for edge clarity and
structure consistency, (3) regularizes the depth estimation and mitigates the
impact of noise by instance-aware learning, and (4) trains with synthetic data
only, using domain randomization to bridge the reality gap. Extensive
experiments
on synthetic and real world dataset demonstrate that our proposed method
outperforms previous works. Further ablation studies give more insights into
the proposed method and demonstrate the generalization capability of our model. | [
"cs.CV",
"cs.RO"
] |
Sequence data is challenging for machine learning approaches, because the
lengths of the sequences may vary between samples. In this paper, we present an
unsupervised learning model for sequence data, called the Integrated Sequence
Autoencoder (ISA), to learn a fixed-length vectorial representation by
minimizing the reconstruction error. Specifically, we propose to integrate two
classical mechanisms for sequence reconstruction which take into account both the global silhouette information and the local temporal dependencies.
Furthermore, we propose a stop feature that serves as a temporal stamp to guide
the reconstruction process, which results in a higher-quality representation.
The learned representation is able to effectively summarize not only the
apparent features, but also the underlying and high-level style information.
Take for example a speech sequence sample: our ISA model can not only recognize
the spoken text (an apparent feature), but can also discriminate the speaker who utters the audio (a higher-level style). One promising application of the ISA
model is that it can be readily used in the semi-supervised learning scenario,
in which a large amount of unlabeled data is leveraged to extract high-quality
sequence representations and thus to improve the performance of the subsequent
supervised learning tasks on limited labeled data. | [
"cs.CV",
"cs.AI"
] |
Graphs are the most ubiquitous form of structured data representation used in
machine learning. They model, however, only pairwise relations between nodes
and are not designed for encoding the higher-order relations found in many
real-world datasets. To model such complex relations, hypergraphs have proven
to be a natural representation. Learning the node representations in a
hypergraph is more complex than in a graph as it involves information
propagation at two levels: within every hyperedge and across the hyperedges.
Most current approaches first transform a hypergraph structure to a graph for
use in existing geometric deep learning algorithms. This transformation leads
to information loss, and sub-optimal exploitation of the hypergraph's
expressive power. We present HyperSAGE, a novel hypergraph learning framework
that uses a two-level neural message passing strategy to accurately and
efficiently propagate information through hypergraphs. The flexible design of
HyperSAGE facilitates different ways of aggregating neighborhood information.
Unlike the majority of related work, which is transductive, our approach,
inspired by the popular GraphSAGE method, is inductive. Thus, it can also be
used on previously unseen nodes, facilitating deployment in problems such as
evolving or partially observed hypergraphs. Through extensive experimentation,
we show that HyperSAGE outperforms state-of-the-art hypergraph learning methods
on representative benchmark datasets. We also demonstrate that the higher
expressive power of HyperSAGE makes it more stable in learning node
representations as compared to the alternatives. | [
"cs.LG",
"stat.ML"
] |
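The two-level message passing described above can be sketched schematically; the snippet below is a minimal NumPy illustration (mean aggregation, a single layer), not the authors' exact HyperSAGE formulation, which offers richer aggregation choices and neighborhood sampling.

```python
import numpy as np

def two_level_aggregation(X, hyperedges, W):
    """Schematic HyperSAGE-style layer: aggregate within hyperedges, then across them.

    X          : (num_nodes, d_in) node feature matrix
    hyperedges : list of lists of node indices, one list per hyperedge
    W          : (2 * d_in, d_out) weight matrix
    """
    num_nodes, d_in = X.shape
    # Level 1: one message per hyperedge, aggregated over its member nodes.
    edge_msgs = np.stack([X[e].mean(axis=0) for e in hyperedges])
    # Level 2: per node, aggregate the messages of its incident hyperedges.
    node_msgs = np.zeros((num_nodes, d_in))
    counts = np.zeros(num_nodes)
    for j, e in enumerate(hyperedges):
        for v in e:
            node_msgs[v] += edge_msgs[j]
            counts[v] += 1
    node_msgs /= np.maximum(counts, 1)[:, None]
    # Combine self features with the aggregated message and apply a nonlinearity.
    return np.maximum(np.concatenate([X, node_msgs], axis=1) @ W, 0.0)

# Toy usage: 5 nodes, 2 hyperedges, 4-dim features mapped to 3 dims.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
W = rng.normal(size=(8, 3))
print(two_level_aggregation(X, [[0, 1, 2], [2, 3, 4]], W).shape)  # (5, 3)
```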
In this paper, we adapt the Faster-RCNN framework for the detection of
underground buried objects (i.e. hyperbola reflections) in B-scan ground
penetrating radar (GPR) images. Due to the lack of real data for training, we
propose to incorporate more simulated radargrams generated from different
configurations using the gprMax toolbox. Our designed CNN is first pre-trained
on the grayscale Cifar-10 database. Then, the Faster-RCNN framework based on
the pre-trained CNN is trained and fine-tuned on both real and simulated GPR
data. Preliminary detection results show that the proposed technique can provide significant improvements over classical computer vision methods and is therefore promising for handling this specific kind of GPR data, even with few training samples. | [
"cs.CV"
] |
Zero-shot detection, namely localizing both seen and unseen objects, is increasingly important for large-scale applications with large numbers of object classes, since collecting sufficient annotated data with ground-truth bounding boxes is simply not scalable. While vanilla deep neural networks
deliver high performance for objects available during training, unseen object
detection degrades significantly. At a fundamental level, while vanilla
detectors are capable of proposing bounding boxes, which include unseen
objects, they are often incapable of assigning high confidence to unseen objects, due to the inherent precision/recall tradeoff that requires rejecting background objects. We propose a novel detection algorithm, Don't Even Look Once
(DELO), that synthesizes visual features for unseen objects and augments
existing training algorithms to incorporate unseen object detection. Our
proposed scheme is evaluated on Pascal VOC and MSCOCO, and we demonstrate
significant improvements in test accuracy over vanilla and other state-of-the-art zero-shot detectors. | [
"cs.CV"
] |
We consider the problem of semi-supervised 3D action recognition, which has rarely been explored before. Its major challenge lies in how to effectively
learn motion representations from unlabeled data. Self-supervised learning
(SSL) has proven very effective at learning representations from unlabeled
data in the image domain. However, few effective self-supervised approaches
exist for 3D action recognition, and directly applying SSL for semi-supervised
learning suffers from misalignment of representations learned from SSL and
supervised learning tasks. To address these issues, we present Adversarial
Self-Supervised Learning (ASSL), a novel framework that tightly couples SSL and
the semi-supervised scheme via neighbor relation exploration and adversarial
learning. Specifically, we design an effective SSL scheme to improve the
discrimination capability of learned representations for 3D action recognition,
through exploring the data relations within a neighborhood. We further propose
an adversarial regularization to align the feature distributions of labeled and
unlabeled samples. To demonstrate the effectiveness of the proposed ASSL in
semi-supervised 3D action recognition, we conduct extensive experiments on NTU
and N-UCLA datasets. The results confirm its advantageous performance over
state-of-the-art semi-supervised methods in the few-label regime for 3D action
recognition. | [
"cs.CV"
] |
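The "adversarial regularization to align the feature distributions of labeled and unlabeled samples" mentioned above is commonly realized with a gradient-reversal discriminator; the PyTorch sketch below shows that generic DANN-style construction, which may differ from the exact loss used by ASSL.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

# Small discriminator that tries to tell labeled features from unlabeled ones.
discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def alignment_loss(feat_labeled, feat_unlabeled, lamb=1.0):
    # Reversed gradients push the feature extractor to make both sets indistinguishable.
    feats = GradReverse.apply(torch.cat([feat_labeled, feat_unlabeled], dim=0), lamb)
    logits = discriminator(feats).squeeze(1)
    targets = torch.cat([torch.ones(len(feat_labeled)), torch.zeros(len(feat_unlabeled))])
    return bce(logits, targets)

# Toy usage with random 128-dimensional features.
loss = alignment_loss(torch.randn(16, 128), torch.randn(32, 128))
loss.backward()
print(float(loss))
```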
Tremendous progress in deep generative models has led to photorealistic image
synthesis. While achieving compelling results, most approaches operate in the
two-dimensional image domain, ignoring the three-dimensional nature of our
world. Several recent works therefore propose generative models which are
3D-aware, i.e., scenes are modeled in 3D and then rendered differentiably to
the image plane. This leads to impressive 3D consistency, but incorporating
such a bias comes at a price: the camera needs to be modeled as well. Current
approaches assume fixed intrinsics and a predefined prior over camera pose
ranges. As a result, parameter tuning is typically required for real-world
data, and results degrade if the data distribution is not matched. Our key
hypothesis is that learning a camera generator jointly with the image generator
leads to a more principled approach to 3D-aware image synthesis. Further, we
propose to decompose the scene into a background and foreground model, leading
to more efficient and disentangled scene representations. While training from
raw, unposed image collections, we learn a 3D- and camera-aware generative
model which faithfully recovers not only the image but also the camera data
distribution. At test time, our model generates images with explicit control
over the camera as well as the shape and appearance of the scene. | [
"cs.CV",
"cs.LG"
] |
We present a modular neural network architecture, Main, that learns algorithms given a set of input-output examples. Main consists of a neural controller that
interacts with a variable-length input tape and learns to compose modules
together with their corresponding argument choices. Unlike previous approaches,
Main uses a general domain-agnostic mechanism for selection of modules and
their arguments. It uses a general input tape layout together with a parallel
history tape to indicate most recently used locations. Finally, it uses a
memoryless controller with a length-invariant self-attention based input tape
encoding to allow for random access to tape locations. The Main architecture is
trained end-to-end using reinforcement learning from a set of input-output
examples. We evaluate Main on five algorithmic tasks and show that it can learn
policies that generalizes perfectly to inputs of much longer lengths than the
ones used for training. | [
"cs.LG",
"cs.AI"
] |
Solving the optimal power flow (OPF) problem in real-time electricity markets improves the efficiency and reliability of integrating low-carbon energy resources into power grids. To address the scalability and adaptivity
issues of existing end-to-end OPF learning solutions, we propose a new graph
neural network (GNN) framework for predicting the electricity market prices
from solving OPFs. The proposed GNN-for-OPF framework innovatively exploits the
locality property of prices and introduces physics-aware regularization, while
attaining reduced model complexity and fast adaptivity to varying grid
topology. Numerical tests have validated the learning efficiency and adaptivity
improvements of our proposed method over existing approaches. | [
"cs.LG",
"cs.SY",
"eess.SY",
"math.OC"
] |
Representation learning (RL) methods learn objects' latent embeddings where
information is preserved by distances. Since distances are invariant to certain
linear transformations, one may obtain different embeddings while preserving
the same information. In dynamic systems, a temporal difference in embeddings
may be explained by the stability of the system or by the misalignment of
embeddings due to arbitrary transformations. In the literature, embedding
alignment has not been defined formally, explored theoretically, or analyzed
empirically. Here, we explore the embedding alignment and its parts, provide
the first formal definitions, propose novel metrics to measure alignment and
stability, and show their suitability through synthetic experiments. Real-world
experiments show that both static and dynamic RL methods are prone to produce
misaligned embeddings and such misalignment worsens the performance of dynamic
network inference tasks. By ensuring alignment, the prediction accuracy rises by up to 90% for static and by 40% for dynamic RL methods. | [
"cs.LG",
"cs.SI"
] |
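The entry does not spell out its alignment metrics, but the underlying issue can be illustrated with a classical tool: if two embeddings differ only by an orthogonal transformation, the orthogonal Procrustes solution recovers it. The snippet below is such an illustration, not the authors' proposed method.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))                    # embeddings at time t
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))    # an arbitrary orthogonal map
Y = X @ Q + 0.01 * rng.normal(size=X.shape)       # "same" embeddings at t+1, rotated plus noise

# Solve min_R ||X R - Y||_F over orthogonal R, then re-express X in Y's frame.
R, _ = orthogonal_procrustes(X, Y)
print("before alignment:", np.linalg.norm(X - Y))
print("after  alignment:", np.linalg.norm(X @ R - Y))
```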
Modern optical flow methods make use of salient feature points detected and matched within the scene as a basis for sparse-to-dense optical flow estimation. Current feature detectors, however, either give sparse, non-uniform point clouds (resulting in flow inaccuracies) or lack the efficiency needed for
frame-rate real-time applications. In this work we use the novel Dense Gradient
Based Features (DeGraF) as the input to a sparse-to-dense optical flow scheme.
This consists of three stages: 1) efficient detection of uniformly distributed
Dense Gradient Based Features (DeGraF); 2) feature tracking via robust local
optical flow; and 3) edge preserving flow interpolation to recover overall
dense optical flow. The tunable density and uniformity of DeGraF features yield
superior dense optical flow estimation compared to other popular feature
detectors within this three stage pipeline. Furthermore, the comparable speed
of feature detection also lends itself well to the aim of real-time optical
flow recovery. Evaluations on established real-world benchmark datasets show
test performance in an autonomous vehicle setting where DeGraF-Flow shows
promising results in terms of accuracy with competitive computational
efficiency among non-GPU based methods, including a marked increase in speed
over the conceptually similar EpicFlow approach. | [
"cs.CV",
"cs.AI"
] |
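The three-stage pipeline above maps onto standard OpenCV building blocks; the sketch below is only a stand-in (the DeGraF detector is not packaged with OpenCV, so Shi-Tomasi corners replace it, and stage 3 assumes the opencv-contrib ximgproc module is installed).

```python
import cv2
import numpy as np

# Synthetic pair of frames: a blurred random texture shifted 3 px to the right,
# standing in for two consecutive camera frames.
rng = np.random.default_rng(0)
prev_img = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (7, 7), 0)
next_img = np.roll(prev_img, 3, axis=1)

# 1) Feature detection (Shi-Tomasi corners as a stand-in for DeGraF features).
pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=2000, qualityLevel=0.01, minDistance=7)

# 2) Sparse feature tracking with pyramidal Lucas-Kanade optical flow.
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, pts, None)
ok = status.ravel() == 1
src = pts[ok].reshape(-1, 2).astype(np.float32)
dst = next_pts[ok].reshape(-1, 2).astype(np.float32)

# 3) Edge-preserving sparse-to-dense interpolation (requires opencv-contrib).
interp = cv2.ximgproc.createEdgeAwareInterpolator()
dense_flow = interp.interpolate(prev_img, src, next_img, dst, None)
print(dense_flow.shape, dense_flow[..., 0].mean())  # mean horizontal flow should be roughly 3
```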
We design multi-horizon forecasting models for limit order book (LOB) data by
using deep learning techniques. Unlike standard structures where a single
prediction is made, we adopt encoder-decoder models with sequence-to-sequence
and Attention mechanisms to generate a forecasting path. Our methods achieve
comparable performance to state-of-the-art algorithms at short prediction horizons.
Importantly, they outperform when generating predictions over long horizons by
leveraging the multi-horizon setup. Given that encoder-decoder models rely on
recurrent neural layers, they generally suffer from slow training processes. To
remedy this, we experiment with utilising novel hardware, so-called Intelligent
Processing Units (IPUs) produced by Graphcore. IPUs are specifically designed
for machine intelligence workloads with the aim of speeding up the computation
process. We show that in our setup this leads to significantly faster training
times when compared to training models with GPUs. | [
"cs.LG",
"cs.NE",
"q-fin.TR"
] |
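A minimal PyTorch sketch of the encoder-decoder idea described above (illustration only: the entry's models additionally use Attention and were benchmarked on Graphcore IPUs, neither of which appears here, and the feature width of 40 is just a placeholder).

```python
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    """Minimal encoder-decoder that emits a multi-horizon forecasting path."""

    def __init__(self, n_features, hidden=64, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, lookback, n_features) of LOB snapshots.
        _, (h, c) = self.encoder(x)
        h, c = h[-1], c[-1]
        y = x.new_zeros(x.size(0), 1)      # seed the decoder with zeros
        path = []
        for _ in range(self.horizon):      # roll the decoder forward step by step
            h, c = self.decoder(y, (h, c))
            y = self.head(h)
            path.append(y)
        return torch.cat(path, dim=1)      # (batch, horizon) forecasting path

# Toy usage: batch of 8 sequences, lookback of 100, 40 input features.
model = Seq2SeqForecaster(n_features=40, horizon=10)
print(model(torch.randn(8, 100, 40)).shape)  # torch.Size([8, 10])
```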