text (string, 29 to 3.31k chars) | label (sequence, 1 to 11 items)
---|---|
Cluster analysis plays a very important role in data analysis. In recent
years, cluster ensemble, as a cluster analysis tool, has drawn much attention
for its robustness, stability, and accuracy. Much effort has been devoted to
combining different initial clustering results into a single clustering solution
with better performance. However, these methods neglect the structure information of the
raw data when performing the cluster ensemble. In this paper, we propose a
Structural Cluster Ensemble (SCE) algorithm for data partitioning, formulated as
a set-covering problem. In particular, we construct a Laplacian regularized
objective function to capture the structure information among clusters.
Moreover, considering the importance of the discriminative information
underlying the initial clustering results, we add a discriminative
constraint to our proposed objective function. Finally, we verify the
performance of the SCE algorithm on both synthetic and real data sets. The
experimental results show the effectiveness of the proposed SCE algorithm. | [
"cs.CV",
"cs.MM"
] |
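A minimal sketch of a graph Laplacian regularizer of the form tr(H^T L H), in the spirit of the Laplacian-regularized objective mentioned in the SCE abstract above; the similarity graph W and the soft assignment matrix H are illustrative placeholders, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical similarity graph over 5 samples (symmetric, non-negative).
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Graph Laplacian L = D - W, with D the diagonal degree matrix.
L = np.diag(W.sum(axis=1)) - W

# Hypothetical soft cluster-assignment matrix H (5 samples x 2 clusters).
H = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.6, 0.4],
              [0.2, 0.8],
              [0.1, 0.9]])

# Laplacian regularizer: tr(H^T L H) penalizes assignments that differ
# across strongly connected samples, encouraging structural smoothness.
reg = np.trace(H.T @ L @ H)
print(f"Laplacian regularization value: {reg:.3f}")
```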
A robust and accurate 3D detection system is an integral part of autonomous
vehicles. Traditionally, a majority of 3D object detection algorithms focus on
processing 3D point clouds using voxel grids or bird's eye view (BEV). Recent
works, however, demonstrate the utilization of the graph neural network (GNN)
as a promising approach to 3D object detection. In this work, we propose an
attention-based feature aggregation technique in GNNs for detecting objects in
LiDAR scans. We first employ a distance-aware down-sampling scheme that not only
enhances the algorithmic performance but also retains maximum geometric
features of objects even if they lie far from the sensor. In each layer of the
GNN, apart from the linear transformation that maps the per-node input
features to the corresponding higher-level features, a per-node masked attention
is also performed, which assigns different weights to different nodes in the
node's first-ring neighborhood. The masked attention implicitly accounts for
the underlying neighborhood graph structure of every node and also eliminates
the need for costly matrix operations, thereby improving the detection accuracy
without compromising performance. Experiments on the KITTI dataset show
that our method yields comparable results for 3D object detection. | [
"cs.CV",
"cs.LG",
"I.2.10; I.4; I.5"
] |
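A minimal NumPy sketch of per-node masked attention restricted to a first-ring neighborhood, in the spirit of the GNN layer described in the abstract above; the adjacency, features, and GAT-style attention vector are toy assumptions rather than the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

N, F_in, F_out = 6, 4, 8
X = rng.normal(size=(N, F_in))            # per-node input features
A = (rng.random((N, N)) < 0.4).astype(float)
np.fill_diagonal(A, 1.0)                  # include self-loops in the first ring
A = np.maximum(A, A.T)                    # make the neighborhood graph undirected

W = rng.normal(size=(F_in, F_out))        # linear map to higher-level features
a = rng.normal(size=(2 * F_out,))         # attention vector (GAT-style, assumed)

H = X @ W                                 # per-node linear transformation

# Raw attention logits for every ordered pair (i, j).
logits = np.array([[a[:F_out] @ H[i] + a[F_out:] @ H[j] for j in range(N)]
                   for i in range(N)])

# Masked softmax: nodes outside i's first-ring neighborhood get zero weight.
logits = np.where(A > 0, logits, -np.inf)
alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
alpha = alpha / alpha.sum(axis=1, keepdims=True)

H_out = alpha @ H                         # attention-weighted neighborhood aggregation
print(H_out.shape)                        # (6, 8)
```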
Nowadays, there is a general agreement on the need to better characterize
agricultural monitoring systems in response to global changes. Timely and
accurate land use/land cover mapping can support this vision by providing
useful information at fine scale. Here, a deep learning approach is proposed to
deal with multi-source land cover mapping at object level. The approach is
based on an extension of Recurrent Neural Network enriched via an attention
mechanism dedicated to multi-temporal data context. Moreover, a new
hierarchical pretraining strategy designed to exploit specific domain knowledge
available under hierarchical relationships within land cover classes is
introduced. Experiments carried out on Reunion Island, a French overseas
department, demonstrate the significance of the proposal compared to standard
remote sensing approaches for land cover mapping. | [
"cs.CV"
] |
This paper proposes a retinal image segmentation method based on a conditional
Generative Adversarial Network (cGAN) to segment the optic disc. The proposed model
consists of two successive networks: a generator and a discriminator. The generator
learns to map information from the observed input (i.e., the retinal fundus color
image) to the output (i.e., a binary mask). Then, the discriminator learns, as a
loss function, to train this mapping by comparing the ground truth and the
predicted output while observing the input image as a condition. Experiments were
performed on two publicly available datasets: DRISHTI GS1 and RIM-ONE. The
proposed model outperformed state-of-the-art methods by achieving Jaccard and
Dice coefficients of around 0.96% and 0.98%, respectively. Moreover, image
segmentation is performed in less than a second on a recent GPU. | [
"cs.CV"
] |
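The Jaccard and Dice coefficients reported above can be computed from a binary prediction and its ground-truth mask as in the following small sketch (illustrative, not the paper's evaluation code).

```python
import numpy as np

def jaccard_and_dice(pred, gt, eps=1e-7):
    """Jaccard (IoU) and Dice coefficients for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jaccard = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    return jaccard, dice

# Toy 4x4 masks standing in for an optic-disc segmentation and its ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 1],
               [0, 1, 1, 0],
               [0, 0, 0, 0]])
print(jaccard_and_dice(pred, gt))  # approx (0.857, 0.923)
```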
Visual attention mechanisms are a key component of neural network models for
computer vision. By focusing on a discrete set of objects or image regions,
these mechanisms identify the most relevant features and use them to build more
powerful representations. Recently, continuous-domain alternatives to discrete
attention models have been proposed, which exploit the continuity of images.
These approaches model attention as simple unimodal densities (e.g. a
Gaussian), making them less suitable to deal with images whose region of
interest has a complex shape or is composed of multiple non-contiguous patches.
In this paper, we introduce a new continuous attention mechanism that produces
multimodal densities, in the form of mixtures of Gaussians. We use the EM
algorithm to obtain a clustering of relevant regions in the image, and a
description length penalty to select the number of components in the mixture.
Our densities decompose as a linear combination of unimodal attention
mechanisms, enabling closed-form Jacobians for the backpropagation step.
Experiments on visual question answering in the VQA-v2 dataset show competitive
accuracies and a selection of regions that mimics human attention more closely
in VQA-HAT. We present several examples that suggest how multimodal attention
maps are naturally more interpretable than their unimodal counterparts, showing
the ability of our model to automatically segregate objects from ground in
complex scenes. | [
"cs.CV",
"cs.LG"
] |
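A rough sketch of fitting a mixture of Gaussians to 2D attention mass with EM, here via scikit-learn's GaussianMixture on sampled image locations; the component count is fixed by hand, whereas the paper above selects it with a description-length penalty.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Pretend these are 2D image locations sampled in proportion to an
# attention map whose region of interest has two non-contiguous patches.
region_a = rng.normal(loc=[10, 10], scale=2.0, size=(300, 2))
region_b = rng.normal(loc=[40, 25], scale=3.0, size=(200, 2))
locations = np.vstack([region_a, region_b])

# EM fit of a 2-component Gaussian mixture (component count assumed here).
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(locations)

# The fitted density acts as a continuous, multimodal attention map.
grid_y, grid_x = np.mgrid[0:50, 0:50]
grid = np.stack([grid_y.ravel(), grid_x.ravel()], axis=1).astype(float)
density = np.exp(gmm.score_samples(grid)).reshape(50, 50)
attention = density / density.sum()      # normalize to a probability map
print(gmm.means_.round(1), attention.max())
```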
Stereo matching is a key component of autonomous driving perception. Recent
unsupervised stereo matching approaches have received adequate attention due to
their advantage of not requiring disparity ground truth. These approaches,
however, perform poorly near occlusions. To overcome this drawback, in this
paper, we propose CoT-Stereo, a novel unsupervised stereo matching approach.
Specifically, we adopt a co-teaching framework where two networks interactively
teach each other about the occlusions in an unsupervised fashion, which greatly
improves the robustness of unsupervised stereo matching. Extensive experiments
on the KITTI Stereo benchmarks demonstrate the superior performance of
CoT-Stereo over all other state-of-the-art unsupervised stereo matching
approaches in terms of both accuracy and speed. Our project webpage is
https://sites.google.com/view/cot-stereo. | [
"cs.CV",
"cs.RO"
] |
In a class of piecewise-constant image segmentation models, we propose to
incorporate a weighted difference of anisotropic and isotropic total variation
(AITV) to regularize the partition boundaries in an image. In particular, we
replace the total variation regularization in the Chan-Vese segmentation model
and a fuzzy region competition model by the proposed AITV. To deal with the
nonconvex nature of AITV, we apply the difference-of-convex algorithm (DCA), in
which the subproblems can be minimized by the primal-dual hybrid gradient
method with linesearch. The convergence of the DCA scheme is analyzed. In
addition, a generalization to color image segmentation is discussed. In the
numerical experiments, we compare the proposed models with the classic convex
approaches and the two-stage segmentation methods (smoothing and then
thresholding) on various images, showing that our models are effective in image
segmentation and robust with respect to impulsive noise. | [
"cs.CV"
] |
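A small sketch of the weighted anisotropic-isotropic TV quantity (AITV) on a discrete image, assuming forward differences and a weight alpha; the segmentation models and the DCA solver from the abstract above are not reproduced here.

```python
import numpy as np

def aitv(u, alpha=0.5):
    """Weighted difference of anisotropic and isotropic total variation.

    anisotropic TV: sum_i |u_x(i)| + |u_y(i)|
    isotropic TV:   sum_i sqrt(u_x(i)^2 + u_y(i)^2)
    AITV = TV_aniso - alpha * TV_iso   (forward differences assumed).
    """
    ux = np.diff(u, axis=1, append=u[:, -1:])   # horizontal forward difference
    uy = np.diff(u, axis=0, append=u[-1:, :])   # vertical forward difference
    tv_aniso = np.abs(ux).sum() + np.abs(uy).sum()
    tv_iso = np.sqrt(ux**2 + uy**2).sum()
    return tv_aniso - alpha * tv_iso

# Piecewise-constant toy image with a single square region.
u = np.zeros((32, 32))
u[8:24, 8:24] = 1.0
print(aitv(u, alpha=0.5))
```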
Detecting manipulations in digital documents is becoming increasingly
important for information verification purposes. Due to the proliferation of
image editing software, altering key information in documents has become widely
accessible. Nearly all approaches in this domain rely on a procedural approach,
using carefully generated features and a hand-tuned scoring system, rather than
a data-driven and generalizable approach. We frame this issue as a graph
comparison problem using the character bounding boxes, and propose a model that
leverages graph features using OCR (Optical Character Recognition). Our model
relies on a data-driven approach to detect alterations by training a random
forest classifier on the graph-based OCR features. We evaluate our algorithm's
forgery detection performance on a dataset constructed from real business
documents with slight forgery imperfections. Our proposed model dramatically
outperforms the most closely-related document manipulation detection model on
this task. | [
"cs.CV",
"cs.CR",
"cs.LG",
"cs.MM"
] |
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is
able to learn a controllable generative model by performing latent
representation disentanglement learning. The learning objective is achieved by
providing signals to the latent encoding/embedding in VAE without changing its
main backbone architecture, hence retaining the desirable properties of the
VAE. We design an unsupervised strategy and a supervised strategy in Guided-VAE
and observe enhanced modeling and controlling capability over the vanilla VAE.
In the unsupervised strategy, we guide the VAE learning by introducing a
lightweight decoder that learns latent geometric transformation and principal
components; in the supervised strategy, we use an adversarial excitation and
inhibition mechanism to encourage the disentanglement of the latent variables.
Guided-VAE offers transparency and simplicity for the general
representation learning task, as well as for disentanglement learning. In a number
of representation learning experiments, we observe improved synthesis/sampling, better
disentanglement for classification, and reduced classification errors in
meta-learning. | [
"cs.CV",
"cs.LG"
] |
Convolutional Neural Networks (CNNs) have achieved tremendous success in a
number of learning tasks including image classification. Recent advanced models
in CNNs, such as ResNets, mainly focus on the skip connection to avoid gradient
vanishing. DenseNet designs suggest creating additional bypasses to transfer
features as an alternative strategy in network design. In this paper, we design
Attentive Feature Integration (AFI) modules, which are widely applicable to
most recent network architectures, leading to new architectures named AFI-Nets.
AFI-Nets explicitly model the correlations among different levels of features
and selectively transfer features with little overhead. AFI-ResNet-152 obtains
a 1.24% relative improvement on the ImageNet dataset while decreasing the FLOPs
by about 10% and the number of parameters by about 9.2% compared to ResNet-152. | [
"cs.CV"
] |
A number of problems in the processing of sound and natural language, as well
as in other areas, can be reduced to simultaneously reading an input sequence
and writing an output sequence of generally different length. There are
well-developed methods that produce the output sequence based on an entirely known
input. However, efficient methods that enable such transformations on-line do
not exist. In this paper we introduce an architecture that uses reinforcement
learning to decide whether to read a token or to write another
token. This architecture is able to transform potentially infinite sequences
on-line. In an experimental study we compare it with state-of-the-art methods
for neural machine translation. While it produces slightly worse translations
than Transformer, it outperforms the autoencoder with attention, even though
our architecture translates texts on-line thereby solving a more difficult
problem than both reference methods. | [
"cs.LG",
"cs.CL",
"I.2.6"
] |
Online Multi-Object Tracking (MOT) from videos is a challenging computer
vision task which has been extensively studied for decades. Most of the
existing MOT algorithms are based on the Tracking-by-Detection (TBD) paradigm
combined with popular machine learning approaches which largely reduce the
human effort to tune algorithm parameters. However, the commonly used
supervised learning approaches require the labeled data (e.g., bounding boxes),
which is expensive for videos. Also, the TBD framework is usually suboptimal
since it is not end-to-end, i.e., it considers the task as detection and
tracking, but not jointly. To achieve both label-free and end-to-end learning
of MOT, we propose a Tracking-by-Animation framework, where a differentiable
neural model first tracks objects from input frames and then animates these
objects into reconstructed frames. Learning is then driven by the
reconstruction error through backpropagation. We further propose a
Reprioritized Attentive Tracking to improve the robustness of data association.
Experiments conducted on both synthetic and real video datasets show the
potential of the proposed model. Our project page is publicly available at:
https://github.com/zhen-he/tracking-by-animation | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Generating high fidelity identity-preserving faces with different facial
attributes has a wide range of applications. Although a number of generative
models have been developed to tackle this problem, there is still much room for
further improvement. In particular, current solutions usually ignore the
perceptual information of images, which we argue benefits the generation of
high-quality images while preserving identity information, especially in the
facial attribute learning area. To this end, we propose to train the GAN
iteratively by regularizing the min-max process with an integrated loss, which
includes not only the per-pixel loss but also the perceptual loss. In contrast
to existing methods, which deal with either image generation or
transformation, our proposed iterative architecture can achieve both.
Experiments on the multi-label facial dataset CelebA demonstrate that the
proposed model has excellent performance in recognizing multiple attributes,
generating high-quality images, and transforming images with controllable
attributes. | [
"cs.CV"
] |
In the fields of nanoscience and nanotechnology, it is important to be able
to functionalize surfaces chemically for a wide variety of applications.
Scanning tunneling microscopes (STMs) are important instruments in this area
used to measure the surface structure and chemistry with better than molecular
resolution. Self-assembly is frequently used to create monolayers that redefine
the surface chemistry in just a single-molecule-thick layer. Indeed, STM images
reveal rich information about the structure of self-assembled monolayers since
they convey chemical and physical properties of the studied material.
In order to assist in and to enhance the analysis of STM and other images, we
propose and demonstrate an image-processing framework that produces two image
segmentations: one is based on intensities (apparent heights in STM images) and
the other is based on textural patterns. The proposed framework begins with a
cartoon+texture decomposition, which separates an image into its cartoon and
texture components. Afterward, the cartoon image is segmented by a modified
multiphase version of the local Chan-Vese model, while the texture image is
segmented by a combination of 2D empirical wavelet transform and a clustering
algorithm. Overall, our proposed framework contains several new features,
specifically in presenting a new application of cartoon+texture decomposition
and of the empirical wavelet transforms and in developing a specialized
framework to segment STM images and other data. To demonstrate the potential of
our approach, we apply it to actual STM images of cyanide monolayers on
Au\{111\} and present their corresponding segmentation results. | [
"cs.CV"
] |
We propose an end-to-end variational generative model for scene layout
synthesis conditioned on scene graphs. Unlike unconditional scene layout
generation, we use scene graphs as an abstract but general representation to
guide the synthesis of diverse scene layouts that satisfy relationships
included in the scene graph. This gives rise to more flexible control over the
synthesis process, allowing various forms of inputs such as scene layouts
extracted from sentences or inferred from a single color image. Using our
conditional layout synthesizer, we can generate various layouts that share the
same structure of the input example. In addition to this conditional generation
design, we also integrate a differentiable rendering module that enables layout
refinement using only 2D projections of the scene. Given a depth and a
semantics map, the differentiable rendering module enables optimizing over the
synthesized layout to fit the given input in an analysis-by-synthesis fashion.
Experiments suggest that our model achieves higher accuracy and diversity in
conditional scene synthesis and allows exemplar-based scene generation from
various input forms. | [
"cs.CV"
] |
Shape correspondence from 3D deformation learning has recently attracted
considerable academic interest. Nevertheless, current deep learning based methods
require the supervision of dense annotations to learn per-point translations,
which severely overparameterize the deformation process. Moreover, they fail to
capture local geometric details of the original shape via global feature embedding.
To address these challenges, we develop a new Unsupervised Dense Deformation
Embedding Network (i.e., UD^2E-Net), which learns to predict deformations
between non-rigid shapes from dense local features. Since it is non-trivial to
match deformation-variant local features for deformation prediction, we develop
an Extrinsic-Intrinsic Autoencoder to first encode extrinsic geometric features
from the source into intrinsic coordinates in a shared canonical shape, with which
the decoder then synthesizes corresponding target features. Moreover, a bounded
maximum mean discrepancy loss is developed to mitigate the distribution
divergence between the synthesized and original features. To learn natural
deformation without dense supervision, we introduce a coarse parameterized
deformation graph, for which a novel trace and propagation algorithm is
proposed to improve both the quality and efficiency of the deformation. Our
UD^2E-Net outperforms state-of-the-art unsupervised methods by 24% on Faust
Inter challenge and even supervised methods by 13% on Faust Intra challenge. | [
"cs.CV",
"cs.AI",
"cs.GR"
] |
Conditional Variational Auto-Encoders (CVAEs) are gathering significant
attention as an Explainable Artificial Intelligence (XAI) tool. The codes in
the latent space provide a theoretically sound way to produce counterfactuals,
i.e., alterations resulting from an intervention on a targeted semantic feature.
To be applied to real images, more complex models are needed, such as the
Hierarchical CVAE. This comes with a challenge, as naive conditioning is no
longer effective. In this paper we show how relaxing the effect of the
posterior leads to successful counterfactuals, and we introduce VAEX, a
Hierarchical VAE designed for this approach that can visually audit a
classifier in applications. | [
"cs.LG"
] |
We present AutoPose, a novel neural architecture search (NAS) framework that
is capable of automatically discovering multiple parallel branches of
cross-scale connections towards accurate and high-resolution 2D human pose
estimation. Recently, high-performance hand-crafted convolutional networks for
pose estimation show growing demands on multi-scale fusion and high-resolution
representations. However, current NAS works exhibit limited flexibility in
scale search, as they predominantly adopt simplified search spaces of
single-branch architectures. Such simplification limits the fusion of
information at different scales and fails to maintain high-resolution
representations. The presented AutoPose framework is able to search for
multi-branch scales and network depth, in addition to the cell-level
microstructure. Motivated by the search space, a novel bi-level optimization
method is presented, where the network-level architecture is searched via
reinforcement learning, and the cell-level search is conducted by the
gradient-based method. Within 2.5 GPU days, AutoPose is able to find very
competitive architectures on the MS COCO dataset that are also transferable to
the MPII dataset. Our code is available at
https://github.com/VITA-Group/AutoPose. | [
"cs.CV"
] |
Recently, Neural Architecture Search (NAS) has successfully identified neural
network architectures that exceed human designed ones on large-scale image
classification. In this paper, we study NAS for semantic image segmentation.
Existing works often focus on searching the repeatable cell structure, while
hand-designing the outer network structure that controls the spatial resolution
changes. This choice simplifies the search space, but becomes increasingly
problematic for dense image prediction which exhibits a lot more network level
architectural variations. Therefore, we propose to search the network level
structure in addition to the cell level structure, which forms a hierarchical
architecture search space. We present a network level search space that
includes many popular designs, and develop a formulation that allows efficient
gradient-based architecture search (3 P100 GPU days on Cityscapes images). We
demonstrate the effectiveness of the proposed method on the challenging
Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Auto-DeepLab, our
architecture searched specifically for semantic image segmentation, attains
state-of-the-art performance without any ImageNet pretraining. | [
"cs.CV",
"cs.LG"
] |
Event detection has been an important task in transportation, which aims to
detect points in time when large events disrupt a large portion of the
urban traffic network. Origin-Destination (OD) travel matrix data provided
by map service vendors has great potential to give us insights to discover
historic patterns and distinguish anomalies. However, fully capturing the
spatial and temporal traffic patterns remains a challenge, yet serves a crucial
role for effective anomaly detection. Meanwhile, existing anomaly detection
methods have not well addressed the extreme data sparsity and high-dimensionality
challenges, which are common in OD matrix datasets. To tackle these challenges,
we formulate the problem in a novel way, as detecting anomalies in a set of
directed weighted graphs representing the traffic conditions at each time
interval. We further propose the \textit{Context augmented Graph Autoencoder}
(\textbf{Con-GAE}), which leverages graph embedding and context embedding
techniques to capture the spatial traffic network patterns while working around
the data sparsity and high-dimensionality issue. Con-GAE adopts an autoencoder
framework and detects anomalies via semi-supervised learning. Extensive
experiments show that our method can achieve a 0.1-0.4
improvement in the area under the curve (AUC) score over state-of-the-art anomaly
detection baselines when applied to several real-world large-scale OD matrix
datasets. | [
"cs.LG",
"cs.AI"
] |
Object detection or localization is an incremental step in progression from
coarse to fine digital image inference. It not only provides the classes of the
image objects, but also provides the location of the image objects which have
been classified. The location is given in the form of bounding boxes or
centroids. Semantic segmentation gives fine inference by predicting labels for
every pixel in the input image. Each pixel is labelled according to the object
class within which it is enclosed. Furthering this evolution, instance
segmentation gives different labels for separate instances of objects belonging
to the same class. Hence, instance segmentation may be defined as the technique
of simultaneously solving the problem of object detection as well as that of
semantic segmentation. In this survey paper on instance segmentation -- its
background, issues, techniques, evolution, popular datasets, related work up to
the state of the art and future scope have been discussed. The paper provides
valuable information for those who want to do research in the field of instance
segmentation. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
We explore deep reinforcement learning methods for multi-agent domains. We
begin by analyzing the difficulty of traditional algorithms in the multi-agent
case: Q-learning is challenged by an inherent non-stationarity of the
environment, while policy gradient suffers from a variance that increases as
the number of agents grows. We then present an adaptation of actor-critic
methods that considers action policies of other agents and is able to
successfully learn policies that require complex multi-agent coordination.
Additionally, we introduce a training regimen utilizing an ensemble of policies
for each agent that leads to more robust multi-agent policies. We show the
strength of our approach compared to existing methods in cooperative as well as
competitive scenarios, where agent populations are able to discover various
physical and informational coordination strategies. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
We present a system for automatically converting 2D mask object predictions
and raw LiDAR point clouds into full 3D bounding boxes of objects. Because the
LiDAR point clouds are partial, directly fitting bounding boxes to the point
clouds is meaningless. Instead, we suggest that obtaining good results requires
sharing information between \emph{all} objects in the dataset jointly, over
multiple frames. We then make three improvements to the baseline. First, we
address ambiguities in predicting the object rotations via direct optimization
in this space while still backpropagating rotation prediction through the
model. Second, we explicitly model outliers and task the network with learning
their typical patterns, thus better discounting them. Third, we enforce
temporal consistency when video data is available. With these contributions,
our method significantly outperforms previous work despite the fact that those
methods use significantly more complex pipelines, 3D models and additional
human-annotated external sources of prior information. | [
"cs.CV",
"cs.LG"
] |
Neural architecture search (NAS) has emerged as a promising avenue for
automatically designing task-specific neural networks. Existing NAS approaches
require one complete search for each deployment specification of hardware or
objective. This is a computationally impractical endeavor given the potentially
large number of application scenarios. In this paper, we propose Neural
Architecture Transfer (NAT) to overcome this limitation. NAT is designed to
efficiently generate task-specific custom models that are competitive under
multiple conflicting objectives. To realize this goal we learn task-specific
supernets from which specialized subnets can be sampled without any additional
training. The key to our approach is an integrated online transfer learning and
many-objective evolutionary search procedure. A pre-trained supernet is
iteratively adapted while simultaneously searching for task-specific subnets.
We demonstrate the efficacy of NAT on 11 benchmark image classification tasks
ranging from large-scale multi-class to small-scale fine-grained datasets. In
all cases, including ImageNet, NATNets improve upon the state-of-the-art under
mobile settings ($\leq$ 600M Multiply-Adds). Surprisingly, small-scale
fine-grained datasets benefit the most from NAT. At the same time, the
architecture search and transfer is orders of magnitude more efficient than
existing NAS methods. Overall, the experimental evaluation indicates that,
across diverse image classification tasks and computational objectives, NAT is
an appreciably more effective alternative to conventional transfer learning, i.e.,
fine-tuning the weights of an existing network architecture learned on standard
datasets. Code is available at
https://github.com/human-analysis/neural-architecture-transfer | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
Distortion quantification of point clouds plays a stealth, yet vital role in
a wide range of human and machine perception tasks. For human perception tasks,
a distortion quantification can substitute for subjective experiments to guide 3D
visualization; while for machine perception tasks, a distortion quantification
can work as a loss function to guide the training of deep neural networks for
unsupervised learning tasks. To handle a variety of demands in many
applications, a distortion quantification needs to be distortion discriminable,
differentiable, and have a low computational complexity. Currently, however,
there is a lack of a general distortion quantification that can satisfy all
three conditions. To fill this gap, this work proposes multiscale potential
energy discrepancy (MPED), a distortion quantification to measure point cloud
geometry and color difference. By evaluating at various neighborhood sizes, the
proposed MPED achieves global-local tradeoffs, capturing distortion in a
multiscale fashion. Extensive experimental studies validate MPED's superiority
for both human and machine perception tasks. | [
"cs.CV",
"eess.IV"
] |
This paper introduces MazeBase: an environment for simple 2D games, designed
as a sandbox for machine learning approaches to reasoning and planning. Within
it, we create 10 simple games embodying a range of algorithmic tasks (e.g.
if-then statements or set negation). A variety of neural models (fully
connected, convolutional network, memory network) are deployed via
reinforcement learning on these games, with and without a procedurally
generated curriculum. Despite the tasks' simplicity, the performance of the
models is far from optimal, suggesting directions for future development. We
also demonstrate the versatility of MazeBase by using it to emulate small
combat scenarios from StarCraft. Models trained on the MazeBase version can be
directly applied to StarCraft, where they consistently beat the in-game AI. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
Correlation filters (CF) have received considerable attention in visual
tracking because of their computational efficiency. Leveraging deep features
via off-the-shelf CNN models (e.g., VGG), CF trackers achieve state-of-the-art
performance while consuming a large amount of computing resources. This limits
deep CF trackers to be deployed to many mobile platforms on which only a
single-core CPU is available. In this paper, we propose to jointly compress and
transfer off-the-shelf CNN models within a knowledge distillation framework. We
formulate a CNN model pretrained from the image classification task as a
teacher network, and distill this teacher network into a lightweight student
network as the feature extractor to speed up CF trackers. In the distillation
process, we propose a fidelity loss to enable the student network to maintain
the representation capability of the teacher network. Meanwhile, we design a
tracking loss to adapt the objective of the student network from object
recognition to visual tracking. The distillation process is performed offline
on multiple layers and adaptively updates the student network using a
background-aware online learning scheme. Extensive experiments on five
challenging datasets demonstrate that the lightweight student network
accelerates the speed of state-of-the-art deep CF trackers to real-time on a
single-core CPU while maintaining almost the same tracking accuracy. | [
"cs.CV"
] |
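A hedged PyTorch-style sketch of the fidelity objective described above, keeping a lightweight student's features close to a frozen teacher's; the tiny networks and the tracking-loss placeholder are assumptions, not the paper's actual architectures or loss forms.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-ins for the teacher (e.g. a VGG-style extractor) and the
# lightweight student; the real networks are assumptions for illustration.
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 64, 3, padding=1))
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 64, 3, padding=1))  # matches teacher channels

x = torch.randn(4, 3, 64, 64)             # a batch of search-region crops

with torch.no_grad():
    t_feat = teacher(x)                   # teacher features (frozen)
s_feat = student(x)                       # student features

# Fidelity loss: keep the student's representation close to the teacher's.
fidelity_loss = F.mse_loss(s_feat, t_feat)

# Placeholder standing in for the tracking-adaptation term of the paper.
tracking_loss = torch.tensor(0.0)
loss = fidelity_loss + tracking_loss
loss.backward()
print(float(fidelity_loss))
```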
Neural sequence generation is typically performed token-by-token and
left-to-right. Whenever a token is generated only previously produced tokens
are taken into consideration. In contrast, for problems such as sequence
classification, bidirectional attention, which takes both past and future
tokens into consideration, has been shown to perform much better. We propose to
make the sequence generation process bidirectional by employing special
placeholder tokens. Treated as a node in a fully connected graph, a placeholder
token can take past and future tokens into consideration when generating the
actual output token. We verify the effectiveness of our approach experimentally
on two conversational tasks where the proposed bidirectional model outperforms
competitive baselines by a large margin. | [
"stat.ML",
"cs.CL",
"cs.LG"
] |
In general, graph representation learning methods assume that the train and
test data come from the same distribution. In this work we consider an
underexplored area of an otherwise rapidly developing field of graph
representation learning: The task of out-of-distribution (OOD) graph
classification, where train and test data have different distributions, with
test data unavailable during training. Our work shows it is possible to use a
causal model to learn approximately invariant representations that better
extrapolate between train and test data. Finally, we conclude with synthetic
and real-world dataset experiments showcasing the benefits of representations
that are invariant to train/test distribution shifts. | [
"cs.LG"
] |
In recent years, deep learning methods have brought incredible progress to the field
of object detection. However, in the field of remote sensing image processing,
existing methods neglect the relationship between imaging configuration and
detection performance, and do not take into account the importance of detection
performance feedback for improving image quality. Therefore, detection
performance is limited by the passive nature of the conventional object
detection framework. In order to solve the above limitations, this paper takes
adaptive brightness adjustment and scale adjustment as examples, and proposes
an active object detection method based on deep reinforcement learning. The
goal of adaptive image attribute learning is to maximize the detection
performance. With the help of active object detection and image attribute
adjustment strategies, low-quality images can be converted into high-quality
images, and the overall performance is improved without retraining the
detector. | [
"cs.CV"
] |
Generative adversarial networks (GANs) provide an algorithmic framework for
constructing generative models with several appealing properties: they do not
require a likelihood function to be specified, only a generating procedure;
they provide samples that are sharp and compelling; and they allow us to
harness our knowledge of building highly accurate neural network classifiers.
Here, we develop our understanding of GANs with the aim of forming a rich view
of this growing area of machine learning---to build connections to the diverse
set of statistical thinking on this topic, of which much can be gained by a
mutual exchange of ideas. We frame GANs within the wider landscape of
algorithms for learning in implicit generative models--models that only specify
a stochastic procedure with which to generate data--and relate these ideas to
modelling problems in related fields, such as econometrics and approximate
Bayesian computation. We develop likelihood-free inference methods and
highlight hypothesis testing as a principle for learning in implicit generative
models, using which we are able to derive the objective function used by GANs,
and many other related objectives. The testing viewpoint directs our focus to
the general problem of density ratio estimation. There are four approaches for
density ratio estimation, one of which is a solution using classifiers to
distinguish real from generated data. Other approaches such as divergence
minimisation and moment matching have also been explored in the GAN literature,
and we synthesise these views to form an understanding in terms of the
relationships between them and the wider literature, highlighting avenues for
future exploration and cross-pollination. | [
"stat.ML",
"cs.LG",
"stat.CO"
] |
Representation learning on graphs has emerged as a powerful mechanism to
automate feature vector generation for downstream machine learning tasks. The
advances in representation learning on graphs have centered on both homogeneous and
heterogeneous graphs, where the latter presents the challenges associated
with multi-typed nodes and/or edges. In this paper, we consider the additional
challenge of evolving graphs. We ask the question of whether the advances in
representation learning for static graphs can be leveraged for dynamic graphs
and how? It is important to be able to incorporate those advances to maximize
the utility and generalization of methods. To that end, we propose the
Framework for Incremental Learning of Dynamic Networks Embedding (FILDNE),
which can utilize any existing static representation learning method for
learning node embeddings, while keeping the computational costs low. FILDNE
integrates the feature vectors computed using the standard methods over
different timesteps into a single representation by developing a convex
combination function and alignment mechanism. Experimental results on several
downstream tasks, over seven real-world data sets, show that FILDNE is able to
reduce memory and computational time costs while providing competitive quality
measure gains with respect to the contemporary methods for representation
learning on dynamic graphs. | [
"stat.ML",
"cs.LG",
"cs.SI"
] |
In the last decade many different algorithms have been proposed to track a
generic object in videos. Their execution on recent large-scale video datasets
can produce a great amount of various tracking behaviours. New trends in
Reinforcement Learning showed that demonstrations of an expert agent can be
efficiently used to speed-up the process of policy learning. Taking inspiration
from such works and from the recent applications of Reinforcement Learning to
visual tracking, we propose two novel trackers, A3CT, which exploits
demonstrations of a state-of-the-art tracker to learn an effective tracking
policy, and A3CTD, that takes advantage of the same expert tracker to correct
its behaviour during tracking. Through an extensive experimental validation on
the GOT-10k, OTB-100, LaSOT, UAV123 and VOT benchmarks, we show that the
proposed trackers achieve state-of-the-art performance while running in
real-time. | [
"cs.CV"
] |
Training recurrent neural networks is known to be difficult when time
dependencies become long. Consequently, training standard gated cells such as
gated recurrent units and long short-term memory on benchmarks where long-term
memory is required remains an arduous task. In this work, we propose a general
way to initialize any recurrent network connectivity through a process called
"warm-up" to improve its capability to learn arbitrarily long time
dependencies. This initialization process is designed to maximize network
reachable multi-stability, i.e. the number of attractors within the network
that can be reached through relevant input trajectories. Warming-up is
performed before training, using stochastic gradient descent on a specifically
designed loss. We show that warming-up greatly improves recurrent neural
network performance on long-term memory benchmarks for multiple recurrent cell
types, but can sometimes impede precision. We therefore introduce a parallel
recurrent network structure with partial warm-up that is shown to greatly
improve learning on long time-series while maintaining high levels of
precision. This approach provides a general framework for improving learning
abilities of any recurrent cell type when long-term memory is required. | [
"cs.LG"
] |
Visual question answering (VQA) is a challenging multi-modal task that
requires not only the semantic understanding of both images and questions, but
also the sound perception of a step-by-step reasoning process that would lead
to the correct answer. So far, most successful attempts in VQA have been
focused on only one aspect, either the interaction of visual pixel features of
images and word features of questions, or the reasoning process of answering
the question in an image with simple objects. In this paper, we propose a deep
reasoning VQA model with explicit visual structure-aware textual information,
which works well in capturing the step-by-step reasoning process and detecting
complex object relationships in photo-realistic images. The REXUP network consists
of two branches, image object-oriented and scene graph-oriented, which jointly
work with a super-diagonal fusion compositional attention network. We
quantitatively and qualitatively evaluate REXUP on the GQA dataset and conduct
extensive ablation studies to explore the reasons behind REXUP's effectiveness.
Our best model significantly outperforms the previous state-of-the-art,
delivering 92.7% on the validation set and 73.1% on the test-dev set. | [
"cs.CV",
"cs.AI"
] |
We present a method for segmenting neuron membranes in 2D electron microscopy
imagery. This segmentation task has been a bottleneck to reconstruction efforts
of the brain's synaptic circuits. One common problem is the misclassification
of blurry membrane fragments as cell interior, which leads to merging of two
adjacent neuron sections into one via the blurry membrane region. Human
annotators can easily avoid such errors by implicitly performing gap
completion, taking into account the continuity of membranes.
Drawing inspiration from these human strategies, we formulate the
segmentation task as an edge labeling problem on a graph with local topological
constraints. We derive an integer linear program (ILP) that enforces membrane
continuity, i.e. the absence of gaps. The cost function of the ILP is the
pixel-wise deviation of the segmentation from a priori membrane probabilities
derived from the data.
Based on membrane probability maps obtained using random forest classifiers
and convolutional neural networks, our method improves the neuron boundary
segmentation accuracy compared to a variety of standard segmentation
approaches. Our method successfully performs gap completion and leads to fewer
topological errors. The method could potentially also be incorporated into
other image segmentation pipelines with known topological constraints. | [
"cs.CV"
] |
Modern retrieval problems are characterised by training sets with potentially
billions of labels, and heterogeneous data distributions across subpopulations
(e.g., users of a retrieval system may be from different countries), each of
which poses a challenge. The first challenge concerns scalability: with a large
number of labels, standard losses are difficult to optimise even on a single
example. The second challenge concerns uniformity: one ideally wants good
performance on each subpopulation. While several solutions have been proposed
to address the first challenge, the second challenge has received relatively
less attention. In this paper, we propose doubly-stochastic mining (S2M), a
stochastic optimization technique that addresses both challenges. In each
iteration of S2M, we compute a per-example loss based on a subset of hardest
labels, and then compute the minibatch loss based on the hardest examples. We
show theoretically and empirically that by focusing on the hardest examples,
S2M ensures that all data subpopulations are modelled well. | [
"cs.LG",
"stat.ML"
] |
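A minimal PyTorch sketch of the doubly-stochastic mining idea described above: per-example losses are computed over only the hardest labels, and the minibatch loss keeps only the hardest examples; the logits, label-set size, and top-k fractions are illustrative.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, num_labels = 32, 10_000
k_labels, k_examples = 100, 8          # how many hardest labels / examples to keep

logits = torch.randn(batch, num_labels)      # scores for a very large label set
targets = torch.randint(num_labels, (batch,))

# Per-example loss over the hardest labels only: keep the top-k scoring
# negatives plus the true label, then apply softmax cross-entropy over them.
pos = logits.gather(1, targets[:, None])                    # (batch, 1)
neg = logits.scatter(1, targets[:, None], float("-inf"))    # mask out positives
hard_neg, _ = neg.topk(k_labels, dim=1)                     # hardest negatives
restricted = torch.cat([pos, hard_neg], dim=1)              # index 0 = positive
per_example = F.cross_entropy(restricted,
                              torch.zeros(batch, dtype=torch.long),
                              reduction="none")

# Minibatch loss over the hardest examples only.
hardest, _ = per_example.topk(k_examples)
loss = hardest.mean()
print(float(loss))
```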
A COllective INtelligence (COIN) is a set of interacting reinforcement
learning (RL) algorithms designed in an automated fashion so that their
collective behavior optimizes a global utility function. We summarize the
theory of COINs, then present experiments using that theory to design COINs to
control internet traffic routing. These experiments indicate that COINs
outperform all previously investigated RL-based, shortest path routing
algorithms. | [
"cs.LG",
"adap-org",
"cond-mat.stat-mech",
"cs.DC",
"cs.NI",
"nlin.AO",
"I.2.6; I.2.11"
] |
Noise reduction is one of the most important and still active research topics in
low-level image processing due to its high impact on object detection and scene
understanding for computer vision systems. Recently, we can observe a
substantial increase of interest in the application of deep learning algorithms
to many computer vision problems due to their impressive capability of automatic
feature extraction and classification. These methods have also been
successfully applied in image denoising, significantly improving the
performance, but most of the proposed approaches were designed for Gaussian
noise suppression. In this paper, we present a switching filtering design
intended for impulsive noise removal using deep learning. In the proposed
method, the impulses are identified using a novel deep neural network
architecture and noisy pixels are restored using the fast adaptive mean filter.
The performed experiments show that the proposed approach is superior to the
state-of-the-art filters designed for impulsive noise removal in digital color
images. | [
"cs.CV"
] |
In this paper, we design two fundamental differential operators for the
derivation of rotation differential invariants of images. Each differential
invariant obtained using the new method can be expressed as a homogeneous
polynomial of image partial derivatives, which preserves its value when the
image is rotated by an arbitrary angle. We produce all possible instances of
homogeneous invariants up to the given order and degree, and discuss their
independence in detail. As far as we know, no previous papers have
published so many explicit forms of high-order rotation differential invariants
of images. In the experimental part, texture classification and image patch
verification are carried out on popular real databases, with these rotation
differential invariants used as image feature vectors. We mainly evaluate
the effects of various factors on their performance. The experimental
results also validate that they have better performance than some commonly used
image features in some cases. | [
"cs.CV"
] |
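Two of the simplest rotation differential invariants, the squared gradient magnitude I_x^2 + I_y^2 and the Laplacian I_xx + I_yy, estimated with finite differences; the paper's systematic construction of higher-order invariants is not reproduced here.

```python
import numpy as np

def simple_rotation_invariants(image):
    """Two classic low-order rotation differential invariants.

    |grad I|^2 = I_x^2 + I_y^2   (order 1, degree 2)
    Laplacian  = I_xx + I_yy     (order 2, degree 1)
    Both keep their values under image rotation (finite-difference estimate).
    """
    iy, ix = np.gradient(image)      # np.gradient returns [d/dy, d/dx] for 2D
    iyy, _ = np.gradient(iy)
    _, ixx = np.gradient(ix)
    grad_sq = ix**2 + iy**2
    laplacian = ixx + iyy
    return grad_sq, laplacian

# Toy texture patch standing in for a real image.
rng = np.random.default_rng(0)
patch = rng.random((64, 64))
g2, lap = simple_rotation_invariants(patch)
# Aggregated over the patch, these serve as simple feature-vector entries.
print(g2.mean(), lap.mean())
```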
Sonography synthesis has a wide range of applications, including medical
procedure simulation, clinical training and multimodality image registration.
In this paper, we propose a machine learning approach to simulate ultrasound
images at given 3D spatial locations (relative to the patient anatomy), based
on conditional generative adversarial networks (GANs). In particular, we
introduce a novel neural network architecture that can sample anatomically
accurate images conditionally on spatial position of the (real or mock)
freehand ultrasound probe. To ensure an effective and efficient spatial
information assimilation, the proposed spatially-conditioned GANs take
calibrated pixel coordinates in global physical space as conditioning input,
and utilise residual network units and shortcuts of conditioning data in the
GANs' discriminator and generator, respectively. Using optically tracked B-mode
ultrasound images, acquired by an experienced sonographer on a fetus phantom,
we demonstrate the feasibility of the proposed method by two sets of
quantitative results: distances were calculated between corresponding
anatomical landmarks identified in the held-out ultrasound images and the
simulated data at the same locations unseen to the networks; a usability study
was carried out to distinguish the simulated data from the real images. In
summary, we present what we believe are state-of-the-art visually realistic
ultrasound images, simulated by the proposed GAN architecture that is stable to
train and capable of generating plausibly diverse image samples. | [
"cs.LG",
"cs.CV"
] |
Social sensing has emerged as a new sensing paradigm where humans (or devices
on their behalf) collectively report measurements about the physical world.
This paper focuses on a quality-cost-aware task allocation problem in
multi-attribute social sensing applications. The goal is to identify a task
allocation strategy (i.e., decide when and where to collect sensing data) to
achieve an optimized tradeoff between the data quality and the sensing cost.
While recent progress has been made to tackle similar problems, three important
challenges have not been well addressed: (i) "online task allocation": the task
allocation schemes need to respond quickly to the potentially large dynamics of
the measured variables in social sensing; (ii) "multi-attribute constrained
optimization": minimizing the overall sensing error given the dependencies and
constraints of multiple attributes of the measured variables is a non-trivial
problem to solve; (iii) "nonuniform task allocation cost": the task allocation
cost in social sensing often has a nonuniform distribution which adds
additional complexity to the optimized task allocation problem. This paper
develops a Quality-Cost-Aware Online Task Allocation (QCO-TA) scheme to address
the above challenges using a principled online reinforcement learning
framework. We evaluate the QCO-TA scheme through a real-world social sensing
application and the results show that our scheme significantly outperforms the
state-of-the-art baselines in terms of both sensing accuracy and cost. | [
"cs.LG",
"stat.ML"
] |
Unsupervised learning methods based on contrastive learning have drawn
increasing attention and achieved promising results. Most of them aim to learn
representations invariant to instance-level variations, which are provided by
different views of the same instance. In this paper, we propose Invariance
Propagation to focus on learning representations invariant to category-level
variations, which are provided by different instances from the same category.
Our method recursively discovers semantically consistent samples residing in
the same high-density regions in representation space. We demonstrate a hard
sampling strategy to concentrate on maximizing the agreement between the anchor
sample and its hard positive samples, which provide more intra-class variations
to help capture more abstract invariance. As a result, with a ResNet-50 as the
backbone, our method achieves 71.3% top-1 accuracy on ImageNet linear
classification and 78.2% top-5 accuracy when fine-tuning with only 1% of labels,
surpassing previous results. We also achieve state-of-the-art performance on
other downstream tasks, including linear classification on Places205 and Pascal
VOC, and transfer learning on small scale datasets. | [
"cs.CV",
"cs.LG"
] |
Most previous learning-based visual odometry (VO) methods take VO as a pure
tracking problem. In contrast, we present a VO framework by incorporating two
additional components called Memory and Refining. The Memory component
preserves global information by employing an adaptive and efficient selection
strategy. The Refining component ameliorates previous results with the contexts
stored in the Memory by adopting a spatial-temporal attention mechanism for
feature distilling. Experiments on the KITTI and TUM-RGBD benchmark datasets
demonstrate that our method outperforms state-of-the-art learning-based methods
by a large margin and produces competitive results against classic monocular VO
approaches. Especially, our model achieves outstanding performance in
challenging scenarios such as texture-less regions and abrupt motions, where
classic VO algorithms tend to fail. | [
"cs.CV"
] |
This paper presents our color constancy investigation in the hybridization of
Wireless LAN and camera positioning on mobile phones. Five typical color
constancy schemes are analyzed in different location environments. The results
can be combined with RF signals from Wireless LAN positioning by using a
model-fitting approach in order to establish absolute positioning output. No
conventional searching algorithm is required, so the approach is expected to reduce
the computational complexity. Finally, we present our preliminary results to
illustrate the indoor positioning algorithm's performance evaluation for an
indoor environment set-up. | [
"cs.CV",
"cs.HC"
] |
This paper presents a game, controlled by computer vision, based on the
identification of hand gestures (hand tracking). The proposed work is based on image
segmentation and the construction of a convex hull with the Jarvis algorithm, and
determination of the pattern based on the extraction of area characteristics in
the convex hull. | [
"cs.CV"
] |
3D semantic scene labeling is a fundamental task for Autonomous Driving.
Recent work shows the capability of Deep Neural Networks in labeling 3D point
sets provided by sensors like LiDAR, and Radar. Imbalanced distribution of
classes in the dataset is one of the challenges that face the 3D semantic scene
labeling task. This leads to misclassification of the non-dominant classes, which
suffer from two main problems: a) rare appearance in the dataset, and b) few
sensor points reflected from one object of these classes. This paper proposes
Weighted Self-Incremental Transfer Learning as a generalized methodology that
solves the imbalanced training dataset problems. It re-weights the components
of the loss function computed from individual classes based on their
frequencies in the training dataset, and applies Self-Incremental Transfer
Learning by running the Neural Network model on non-dominant classes first,
then adding dominant classes one by one. The experimental results introduce
a new 3D point cloud semantic segmentation benchmark for the KITTI dataset. | [
"cs.CV",
"cs.LG"
] |
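A small sketch of re-weighting a loss by inverse class frequency, as in the weighted component of the method above; the class counts and the normalization rule are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical per-class point counts from an imbalanced LiDAR training set.
class_counts = torch.tensor([1_000_000, 50_000, 8_000, 2_000], dtype=torch.float)

# Inverse-frequency weights, normalized so they average to one.
weights = 1.0 / class_counts
weights = weights * len(class_counts) / weights.sum()

logits = torch.randn(16, 4)                 # 16 points, 4 classes
labels = torch.randint(4, (16,))

# Weighted cross-entropy: errors on rare classes cost more.
loss = F.cross_entropy(logits, labels, weight=weights)
print(float(loss))
```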
A quantized neural network (NN) with reduced bit precision is an effective
solution to reduce the computational and memory resource requirements and
plays a vital role in machine learning. However, it is still challenging to
avoid significant accuracy degradation due to its numerical approximation
and lower redundancy. In this paper, a novel robustness-aware 2-bit
quantization scheme is proposed for NNs based on binary NNs and generative
adversarial networks (GANs), which improves the performance by enriching the
information of the binary NN, efficiently extracting structural information, and
considering the robustness of the quantized NN. Specifically, a shift-addition
operation is used to replace the multiply-accumulate in the quantization
process, which can effectively speed up the NN. Meanwhile, a structural loss
between the original NN and the quantized NN is proposed such that the
structural information of the data is preserved after quantization. The structural
information learned from the NN not only plays an important role in improving the
performance but also allows for further fine-tuning of the quantization network
by applying the Lipschitz constraint to the structural loss. In addition, we
also, for the first time, take the robustness of the quantized NN into
consideration and propose a non-sensitive perturbation loss function by
introducing an extraneous term of the spectral norm. The experiments are conducted
on the CIFAR-10 and ImageNet datasets with popular NNs (such as MobileNetV2,
SqueezeNet, ResNet20, etc.). The experimental results show that the proposed
algorithm is more competitive under 2-bit precision than state-of-the-art
quantization methods. Meanwhile, the experimental results also demonstrate that
the proposed method is robust under FGSM adversarial sample attacks. | [
"cs.LG"
] |
We present a Deep Differentiable Simplex Layer (DDSL) for neural networks for
geometric deep learning. The DDSL is a differentiable layer compatible with
deep neural networks for bridging simplex mesh-based geometry representations
(point clouds, line mesh, triangular mesh, tetrahedral mesh) with raster images
(e.g., 2D/3D grids). The DDSL uses Non-Uniform Fourier Transform (NUFT) to
perform differentiable, efficient, anti-aliased rasterization of simplex-based
signals. We present a complete theoretical framework for the process as well as
an efficient backpropagation algorithm. Compared to previous differentiable
renderers and rasterizers, the DDSL generalizes to arbitrary simplex degrees
and dimensions. In particular, we explore its applications to 2D shapes and
illustrate two applications of this method: (1) mesh editing and optimization
guided by neural network outputs, and (2) using DDSL for a differentiable
rasterization loss to facilitate end-to-end training of polygon generators. We
are able to validate the effectiveness of gradient-based shape optimization
with the example of airfoil optimization, and using the differentiable
rasterization loss to facilitate end-to-end training, we surpass state of the
art for polygonal image segmentation given ground-truth bounding boxes. | [
"cs.CV",
"cs.CG"
] |
We consider binary classification problems using local features of objects.
One motivating application is time-series classification, where features
reflecting some local closeness measure between a time series and a pattern
sequence called a shapelet are useful. Despite the empirical success of such
approaches using local features, the generalization ability of the resulting
hypotheses is not fully understood, and previous work relies on a bunch of
heuristics. In this paper, we formulate a class of hypotheses using local
features, where the richness of features is controlled by kernels. We derive
generalization bounds of sparse ensembles over the class which is exponentially
better than a standard analysis in terms of the number of possible local
features. The resulting optimization problem is well suited to the boosting
approach and the weak learning problem is formulated as a DC program, for which
practical algorithms exist. In preliminary experiments on time-series data
sets, our method achieves competitive accuracy with the state-of-the-art
algorithms with small parameter-tuning cost. | [
"cs.LG"
] |
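The local closeness measure between a time series and a shapelet referred to above is, in its simplest form, the minimum distance over all aligned subsequences. A minimal sketch (the names and the toy signal are illustrative):

```python
import numpy as np

def shapelet_feature(series, shapelet):
    """Local closeness between a time series and a shapelet: the minimum
    squared Euclidean distance over all aligned subsequences."""
    L = len(shapelet)
    dists = [np.sum((series[t:t + L] - shapelet) ** 2)
             for t in range(len(series) - L + 1)]
    return min(dists)

series = np.sin(np.linspace(0, 6 * np.pi, 120))
shapelet = np.sin(np.linspace(0, np.pi, 20))          # a short pattern
print(shapelet_feature(series, shapelet))
```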
Intelligent diagnosis method based on data-driven and deep learning is an
attractive and meaningful field in recent years. However, in practical
application scenarios, the imbalance of time-series fault data is an urgent
problem to be solved. This paper proposes a novel deep metric learning model in
which imbalanced fault data and a quadruplet data-pair design are considered.
Based on such data pairs, a quadruplet loss function that takes into account
the inter-class distance and the intra-class data distribution is proposed.
This quadruplet loss pays special attention to imbalanced sample pairs. A
reasonable combination of the quadruplet loss and the softmax loss can reduce
the impact of imbalance. Experimental results on two open-source datasets show
that the proposed method can effectively and robustly improve the performance
of imbalanced fault diagnosis. | [
"cs.LG",
"cs.AI"
] |
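For concreteness, below is a generic margin-based quadruplet loss of the kind described, combining an anchor/positive pair with two negatives so that both the inter-class distance and the intra-class spread enter the objective. The margins and the exact form are assumptions; the loss in the paper may differ in detail.

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    """Generic margin-based quadruplet loss (a sketch, not the paper's exact form):
    pull anchor/positive together, push the negatives away, with a second term
    acting purely on inter-class pairs."""
    d = lambda a, b: np.sum((a - b) ** 2)
    term1 = max(0.0, d(anchor, positive) - d(anchor, neg1) + margin1)
    term2 = max(0.0, d(anchor, positive) - d(neg1, neg2) + margin2)
    return term1 + term2

a, p, n1, n2 = (np.random.randn(16) for _ in range(4))
print(quadruplet_loss(a, p, n1, n2))
```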
The collected data from industrial machines are often imbalanced, which has
a negative effect on learning algorithms. However, this problem becomes more
challenging for mixed types of data or when there is overlap between
classes. The class-imbalance problem requires a robust learning system that can
timely predict and classify the data. We propose a new adversarial network for
simultaneous classification and fault detection. In particular, we restore the
balance in the imbalanced dataset by generating faulty samples from the
proposed mixture of data distributions. We design the discriminator of our
model to handle the generated faulty samples in order to prevent outliers and
overfitting. We empirically demonstrate that: (i) the discriminator, trained
jointly with a generator that generates samples from a mixture of the normal
and faulty data distributions, can be used as a fault detector; (ii) the quality
of the generated faulty samples outperforms the other synthetic resampling
techniques. Experimental results show that the proposed model performs well
when compared to other fault diagnosis methods across several evaluation
metrics; in particular, combining a generative adversarial network (GAN) with a
feature-matching function is effective at recognizing faulty samples. | [
"cs.CV",
"cs.LG"
] |
Recent approaches for 3D object detection have made tremendous progress due
to the development of deep learning. However, previous research is mostly
based on individual frames, leading to limited exploitation of information
between frames. In this paper, we attempt to leverage the temporal information
in streaming data and explore 3D streaming based object detection as well as
tracking. Toward this goal, we set up a dual-way network for 3D object
detection based on keyframes, and then propagate predictions to non-key frames
through a motion based interpolation algorithm guided by temporal information.
Our framework is not only shown to have significant improvements on object
detection compared with frame-by-frame paradigm, but also proven to produce
competitive results on KITTI Object Tracking Benchmark, with 76.68% in MOTA and
81.65% in MOTP respectively. | [
"cs.CV"
] |
Reinforcement learning (RL) has shown great success in estimating sequential
treatment strategies which take into account patient heterogeneity. However,
health-outcome information, which is used as the reward for reinforcement
learning methods, is often not well coded but rather embedded in clinical
notes. Extracting precise outcome information is a resource intensive task, so
most of the available well-annotated cohorts are small. To address this issue,
we propose a semi-supervised learning (SSL) approach that efficiently leverages
a small labeled dataset with the true outcome observed and a large unlabeled
dataset with outcome surrogates. In particular, we propose a semi-supervised,
efficient approach to Q-learning and doubly robust off-policy value estimation.
Generalizing SSL to sequential treatment regimes brings interesting challenges:
1) Feature distribution for Q-learning is unknown as it includes previous
outcomes. 2) The surrogate variables we leverage in the modified SSL framework
are predictive of the outcome but not informative to the optimal policy or
value function. We provide theoretical results for our Q-function and value
function estimators to understand to what degree efficiency can be gained from
SSL. Our method is at least as efficient as the supervised approach and,
moreover, safe, as it is robust to mis-specification of the imputation models. | [
"cs.LG",
"cs.AI",
"stat.ME",
"stat.ML"
] |
Recognizing subtle historical patterns is central to modeling and forecasting
problems in time series analysis. Here we introduce and develop a new approach
to quantify deviations in the underlying hidden generators of observed data
streams, resulting in a new efficiently computable universal metric for time
series. The proposed metric is in the sense that we can compare and contrast
data streams regardless of where and how they are generated and without any
feature engineering step. The approach proposed in this paper is conceptually
distinct from our previous work on data smashing, and vastly improves
discrimination performance and computing speed. The core idea here is the
generalization of the notion of KL divergence often used to compare probability
distributions to a notion of divergence in time series. We call this the
sequence likelihood (SL) divergence, which may be used to measure deviations
within a well-defined class of discrete-valued stochastic processes. We devise
efficient estimators of SL divergence from finite sample paths and subsequently
formulate a universal metric useful for computing distance between time series
produced by hidden stochastic generators. | [
"stat.ML",
"cs.LG",
"q-fin.MF"
] |
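To make the idea of comparing hidden generators concrete, the sketch below estimates a KL-style divergence between the empirical next-symbol distributions of two discrete streams. It illustrates the underlying notion only; it is not the SL-divergence estimator of the paper, and the order-one Markov model and the smoothing are assumptions.

```python
import numpy as np
from collections import Counter

def markov_conditional_probs(seq, alphabet):
    """Empirical next-symbol distributions P(x_t | x_{t-1}) with add-one smoothing."""
    counts = {a: Counter() for a in alphabet}
    for prev, nxt in zip(seq[:-1], seq[1:]):
        counts[prev][nxt] += 1
    return {a: np.array([counts[a][b] + 1 for b in alphabet], float)
               / (sum(counts[a].values()) + len(alphabet))
            for a in alphabet}

def kl_style_divergence(seq_p, seq_q, alphabet=(0, 1)):
    """Illustrative divergence between the hidden generators of two discrete
    streams: average KL between their empirical conditional distributions."""
    P, Q = (markov_conditional_probs(s, alphabet) for s in (seq_p, seq_q))
    return np.mean([np.sum(P[a] * np.log(P[a] / Q[a])) for a in alphabet])

rng = np.random.default_rng(0)
s1 = list(rng.choice([0, 1], size=2000, p=[0.7, 0.3]))
s2 = list(rng.choice([0, 1], size=2000, p=[0.4, 0.6]))
print(kl_style_divergence(s1, s2))
```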
Clustering is a class of unsupervised learning methods that has been
extensively applied and studied in computer vision. Little work has been done
to adapt it to the end-to-end training of visual features on large scale
datasets. In this work, we present DeepCluster, a clustering method that
jointly learns the parameters of a neural network and the cluster assignments
of the resulting features. DeepCluster iteratively groups the features with a
standard clustering algorithm, k-means, and uses the subsequent assignments as
supervision to update the weights of the network. We apply DeepCluster to the
unsupervised training of convolutional neural networks on large datasets like
ImageNet and YFCC100M. The resulting model outperforms the current state of the
art by a significant margin on all the standard benchmarks. | [
"cs.CV"
] |
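A minimal sketch of the DeepCluster alternation, assuming PyTorch and scikit-learn are available; the tiny MLP, the toy data, and the hyperparameters are placeholders, and the full method additionally re-initializes the classifier and handles empty clusters.

```python
import torch, torch.nn as nn
from sklearn.cluster import KMeans

X = torch.randn(512, 32)                      # toy unlabeled data
k = 10
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
classifier = nn.Linear(16, k)
opt = torch.optim.SGD(list(backbone.parameters()) + list(classifier.parameters()), lr=0.1)

for epoch in range(5):
    with torch.no_grad():                     # 1) cluster the current features
        feats = backbone(X).numpy()
    pseudo = torch.as_tensor(KMeans(n_clusters=k, n_init=10).fit_predict(feats),
                             dtype=torch.long)
    for _ in range(10):                       # 2) use the assignments as supervision
        opt.zero_grad()
        loss = nn.functional.cross_entropy(classifier(backbone(X)), pseudo)
        loss.backward()
        opt.step()
    print(epoch, float(loss))
```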
Among interpretable machine learning methods, the class of Generalised
Additive Neural Networks (GANNs) is referred to as Self-Explaining Neural
Networks (SENN) because of the linear dependence on explicit functions of the
inputs. In binary classification this shows the precise weight that each input
contributes towards the logit. The nomogram is a graphical representation of
these weights. We show that functions of individual and pairs of variables can
be derived from a functional Analysis of Variance (ANOVA) representation,
enabling an efficient feature selection to be carried out by application of the
logistic Lasso. This process infers the structure of GANNs which otherwise
needs to be predefined. As this method is particularly suited for tabular data,
it starts by fitting a generic flexible model, in this case a Multi-layer
Perceptron (MLP) to which the ANOVA decomposition is applied. This has the
further advantage that the resulting GANN can be replicated as a SENN, enabling
further refinement of the univariate and bivariate component functions to take
place. The component functions are partial responses hence the SENN is a
partial response network. The Partial Response Network (PRN) is equally as
transparent as a traditional logistic regression model, but capable of
non-linear classification with comparable or superior performance to the
original MLP. In other words, the PRN is a fully interpretable representation
of the MLP, at the level of univariate and bivariate effects. The performance
of the PRN is shown to be competitive for benchmark data, against
state-of-the-art machine learning methods including GBM, SVM and Random
Forests. It is also compared with spline-based Sparse Additive Models (SAM)
showing that a semi-parametric representation of the GAM as a neural network
can be as effective as the SAM though less constrained by the need to set
spline nodes. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
Regular pavement inspection plays a significant role in road maintenance for
safety assurance. Existing methods mainly address the tasks of crack detection
and segmentation that are only tailored for long-thin crack disease. However,
there are many other types of diseases with a wider variety of sizes and
patterns that are also essential to segment in practice, bringing more
challenges towards fine-grained pavement inspection. In this paper, our goal is
not only to automatically segment cracks, but also to segment other complex
pavement diseases as well as typical landmarks (markings, runway lights, etc.)
and commonly seen water/oil stains in a single model. To this end, we propose a
three-stream boundary-aware network (TB-Net). It consists of three streams
fusing the low-level spatial and the high-level contextual representations as
well as the detailed boundary information. Specifically, the spatial stream
captures rich spatial features. The context stream, where an attention
mechanism is utilized, models the contextual relationships over local features.
The boundary stream learns detailed boundaries using a global-gated convolution
to further refine the segmentation outputs. The network is trained using a
dual-task loss in an end-to-end manner, and experiments on a newly collected
fine-grained pavement disease dataset show the effectiveness of our TB-Net. | [
"cs.CV"
] |
Image restoration is a crucial computer vision task. This paper describes
two novel methods for the restoration of old degraded handwritten documents
using deep neural networks. In addition, a small-scale dataset of 26
heritage letter images is introduced. The ground truth data to train the
desired network is generated semi-automatically, involving a pragmatic
combination of color transformation, Gaussian mixture model based segmentation
and shape correction using mathematical morphological operators. In the
first approach, a deep neural network has been used for text extraction from
the document image and later background reconstruction has been done using
Gaussian mixture modeling. However, Gaussian mixture modeling requires
parameters to be set manually; to alleviate this, we propose a second approach
in which the background reconstruction and foreground extraction (which
includes extracting text in its original colour) have both been done using a
deep neural network. Experiments demonstrate that the proposed systems perform
well on handwritten document images with severe degradations, even when trained
with a small dataset. Hence, the proposed methods are ideally suited for digital
heritage preservation repositories. It is worth mentioning that these methods
can be extended easily for printed degraded documents. | [
"cs.CV"
] |
Spatiotemporal systems are common in the real world. Forecasting the
multi-step future of these spatiotemporal systems based on the past
observations, or, Spatiotemporal Sequence Forecasting (STSF), is a significant
and challenging problem. Although lots of real-world problems can be viewed as
STSF and many research works have proposed machine learning based methods for
them, no existing work has summarized and compared these methods from a unified
perspective. This survey aims to provide a systematic review of machine
learning for STSF. In this survey, we define the STSF problem and classify it
into three subcategories: Trajectory Forecasting of Moving Point Cloud
(TF-MPC), STSF on Regular Grid (STSF-RG) and STSF on Irregular Grid (STSF-IG).
We then introduce the two major challenges of STSF: 1) how to learn a model for
multi-step forecasting and 2) how to adequately model the spatial and temporal
structures. After that, we review the existing works for solving these
challenges, including the general learning strategies for multi-step
forecasting, the classical machine learning based methods for STSF, and the
deep learning based methods for STSF. We also compare these methods and point
out some potential research directions. | [
"cs.LG",
"stat.ML"
] |
Resampling is a key component of sample-based recursive state estimation in
particle filters. Recent work explores differentiable particle filters for
end-to-end learning. However, resampling remains a challenge in these works, as
it is inherently non-differentiable. We address this challenge by replacing
traditional resampling with a learned neural network resampler. We present a
novel network architecture, the particle transformer, and train it for particle
resampling using a likelihood-based loss function over sets of particles.
Incorporated into a differentiable particle filter, our model can be end-to-end
optimized jointly with the other particle filter components via gradient
descent. Our results show that our learned resampler outperforms traditional
resampling techniques on synthetic data and in a simulated robot localization
task. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
A framework of M-estimation based fuzzy C-means clustering (MFCM) algorithms
is proposed with the iterative reweighted least squares (IRLS) algorithm, and
penalty-constraint and kernelization extensions of the MFCM algorithms are also
developed. By introducing penalty information into the objective functions of
the MFCM algorithms, the spatially constrained fuzzy C-means (SFCM) is extended
to penalty-constrained MFCM algorithms (abbr. pMFCM). Substituting the Euclidean
distance with a kernel method, the MFCM and pMFCM algorithms are extended to
kernelized MFCM (abbr. KMFCM) and kernelized pMFCM (abbr. pKMFCM) algorithms.
The performances of the MFCM, pMFCM, KMFCM and pKMFCM algorithms are evaluated
on three tasks: pattern recognition on 10 standard data sets from the UCI
Machine Learning databases, noise image segmentation on a synthetic image and
a magnetic resonance brain image (MRI), and image segmentation of standard
images from the Berkeley Segmentation Dataset and Benchmark. The experimental
results demonstrate the effectiveness of our proposed algorithms in pattern
recognition and image segmentation. | [
"cs.CV",
"stat.CO"
] |
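The sketch below shows the general IRLS recipe behind an M-estimation based fuzzy C-means: standard membership updates, with centroids re-estimated using robust (here Huber) weights. The Huber choice and the update details are illustrative assumptions, not the exact formulation of the paper.

```python
import numpy as np

def huber_weight(r, c=1.0):
    """IRLS weight of the Huber M-estimator: 1 inside [-c, c], c/|r| outside."""
    r = np.maximum(np.abs(r), 1e-12)
    return np.where(r <= c, 1.0, c / r)

def mfcm(X, n_clusters, m=2.0, n_iter=50, seed=0):
    """Sketch of an M-estimation based fuzzy C-means: usual FCM membership
    updates, centroids re-estimated by IRLS with robust weights."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - C[None], axis=-1) + 1e-12   # (n, k)
        ratio = d[:, :, None] / d[:, None, :]                          # d_ik / d_ij
        U = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)             # memberships
        W = (U ** m) * huber_weight(d)                                 # robust IRLS weights
        C = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, C

X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])
U, C = mfcm(X, 2)
print(C)
```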
We consider the problem of accurately recovering a matrix B of size M by M,
which represents a probability distribution over M^2 outcomes, given access to
an observed matrix of "counts" generated by taking independent samples from the
distribution B. How can structural properties of the underlying matrix B be
leveraged to yield computationally efficient and information theoretically
optimal reconstruction algorithms? When can accurate reconstruction be
accomplished in the sparse data regime? This basic problem lies at the core of
a number of questions that are currently being considered by different
communities, including building recommendation systems and collaborative
filtering in the sparse data regime, community detection in sparse random
graphs, learning structured models such as topic models or hidden Markov
models, and the efforts from the natural language processing community to
compute "word embeddings".
Our results apply to the setting where B has a low rank structure. For this
setting, we propose an efficient algorithm that accurately recovers the
underlying M by M matrix using Theta(M) samples. This result easily translates
to Theta(M) sample algorithms for learning topic models and learning hidden
Markov Models. These linear sample complexities are optimal, up to constant
factors, in an extremely strong sense: even testing basic properties of the
underlying matrix (such as whether it has rank 1 or 2) requires Omega(M)
samples. We provide an even stronger lower bound where distinguishing whether a
sequence of observations were drawn from the uniform distribution over M
observations versus being generated by an HMM with two hidden states requires
Omega(M) observations. This precludes sublinear-sample hypothesis tests for
basic properties, such as identity or uniformity, as well as sublinear sample
estimators for quantities such as the entropy rate of HMMs. | [
"cs.LG"
] |
Spatiotemporal traffic time series (e.g., traffic volume/speed) collected
from sensing systems are often incomplete with considerable corruption and
large amounts of missing values, preventing users from harnessing the full
power of the data. Missing data imputation has been a long-standing research
topic and critical application for real-world intelligent transportation
systems. A widely applied imputation method is low-rank matrix/tensor
completion; however, the low-rank assumption only preserves the global
structure while ignoring the strong local consistency in spatiotemporal data. In
this paper, we propose a low-rank autoregressive tensor completion (LATC)
framework by introducing \textit{temporal variation} as a new regularization
term into the completion of a third-order (sensor $\times$ time of day $\times$
day) tensor. The third-order tensor structure allows us to better capture the
global consistency of traffic data, such as the inherent seasonality and
day-to-day similarity. To achieve local consistency, we design the temporal
variation by imposing an AR($p$) model for each time series with coefficients
as learnable parameters. Different from previous spatial and temporal
regularization schemes, the minimization of temporal variation can better
characterize temporal generative mechanisms beyond local smoothness, allowing
us to deal with more challenging scenarios such as "blackout" missing. To solve
the optimization problem in LATC, we introduce an alternating minimization
scheme that estimates the low-rank tensor and autoregressive coefficients
iteratively. We conduct extensive numerical experiments on several real-world
traffic data sets, and our results demonstrate the effectiveness of LATC in
diverse missing scenarios. | [
"cs.LG",
"stat.ML"
] |
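The temporal-variation regularizer can be pictured as follows: fit AR($p$) coefficients to the series and penalize the squared autoregressive residuals. The sketch below uses shared least-squares coefficients for simplicity, whereas LATC learns them jointly with the low-rank tensor inside an alternating scheme.

```python
import numpy as np

def temporal_variation(Z, coeffs):
    """AR(p) temporal-variation term used as a regularizer (a sketch of the idea):
    sum of squared residuals z_t - sum_i a_i * z_{t-i} over every series (row) of Z."""
    p = len(coeffs)
    resid = Z[:, p:] - sum(a * Z[:, p - i:Z.shape[1] - i]
                           for i, a in enumerate(coeffs, 1))
    return np.sum(resid ** 2)

def fit_ar_coeffs(Z, p=2):
    """Least-squares estimate of shared AR(p) coefficients (learnable parameters)."""
    Y = Z[:, p:].ravel()
    X = np.column_stack([Z[:, p - i:Z.shape[1] - i].ravel() for i in range(1, p + 1)])
    return np.linalg.lstsq(X, Y, rcond=None)[0]

Z = np.cumsum(np.random.randn(5, 200), axis=1)        # toy "traffic" series
a = fit_ar_coeffs(Z, p=2)
print(a, temporal_variation(Z, a))
```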
This paper investigates the problem of pseudo-healthy synthesis that is
defined as synthesizing a subject-specific pathology-free image from a
pathological one. Recent approaches based on Generative Adversarial Network
(GAN) have been developed for this task. However, these methods will inevitably
fall into the trade-off between preserving the subject-specific identity and
generating healthy-like appearances. To overcome this challenge, we propose a
novel adversarial training regime, Generator versus Segmentor (GVS), to
alleviate this trade-off by a divide-and-conquer strategy. We further consider
the deteriorating generalization performance of the segmentor throughout the
training and develop a pixel-wise weighted loss by muting the well-transformed
pixels to promote it. Moreover, we propose a new metric to measure how healthy
the synthetic images look. The qualitative and quantitative experiments on the
public dataset BraTS demonstrate that the proposed method outperforms the
existing methods. Besides, we also verify the effectiveness of our method on
the LiTS dataset. Our implementation and pre-trained networks are publicly
available at https://github.com/Au3C2/Generator-Versus-Segmentor. | [
"cs.CV"
] |
Segmentation of findings in the gastrointestinal tract is a challenging but
important task and a key building block for capable automatic decision support
systems. In this work, we present our solution for the Medico 2020 task, which
focused on the problem of colon polyp segmentation. We present a simple but
efficient idea: an augmentation method that uses grids in a pyramid-like manner
(large to small) for segmentation. Our results show that the proposed method
works as intended and can also lead to results comparable with those of
competing methods. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.MM"
] |
Style transfer usually refers to the task of applying color and texture
information from a specific style image to a given content image while
preserving the structure of the latter. Here we tackle the more generic problem
of semantic style transfer: given two unpaired collections of images, we aim to
learn a mapping between the corpus-level style of each collection, while
preserving semantic content shared across the two domains. We introduce XGAN
("Cross-GAN"), a dual adversarial autoencoder, which captures a shared
representation of the common domain semantic content in an unsupervised way,
while jointly learning the domain-to-domain image translations in both
directions. We exploit ideas from the domain adaptation literature and define a
semantic consistency loss which encourages the model to preserve semantics in
the learned embedding space. We report promising qualitative results for the
task of face-to-cartoon translation. The cartoon dataset, CartoonSet, we
collected for this purpose is publicly available at
google.github.io/cartoonset/ as a new benchmark for semantic style transfer. | [
"cs.CV"
] |
Off-policy reinforcement learning with eligibility traces is challenging
because of the discrepancy between target policy and behavior policy. One
common approach is to measure the difference between two policies in a
probabilistic way, such as importance sampling and tree-backup. However,
existing off-policy learning methods based on probabilistic policy measurement
are inefficient when utilizing traces under a greedy target policy, which is
ineffective for control problems. The traces are cut immediately when a
non-greedy action is taken, which may lose the advantage of eligibility traces
and slow down the learning process. Alternatively, some non-probabilistic
measurement methods such as General Q($\lambda$) and Naive Q($\lambda$) never
cut traces, but face convergence problems in practice. To address the above
issues, this paper introduces a new method named TBQ($\sigma$), which
effectively unifies the tree-backup algorithm and Naive Q($\lambda$). By
introducing a new parameter $\sigma$ to illustrate the \emph{degree} of
utilizing traces, TBQ($\sigma$) creates an effective integration of
TB($\lambda$) and Naive Q($\lambda$) and a continuous role shift between them.
The contraction property of TBQ($\sigma$) is theoretically analyzed for both
policy evaluation and control settings. We also derive the online version of
TBQ($\sigma$) and give the convergence proof. We empirically show that, for
$\epsilon\in(0,1]$ in $\epsilon$-greedy policies, there exists some degree of
utilizing traces for $\lambda\in[0,1]$, which can improve the efficiency in
trace utilization for off-policy reinforcement learning, to both accelerate the
learning process and improve the performance. | [
"cs.LG",
"cs.AI",
"stat.ML",
"68Wxx"
] |
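One natural reading of the unification is a trace decay that interpolates between TB($\lambda$), which scales traces by the target-policy probability, and Naive Q($\lambda$), which never cuts them. The sketch below encodes that interpolation; the exact operator in the paper may differ.

```python
import numpy as np

def trace_decay(lmbda, gamma, pi_a, sigma):
    """One natural interpolation between TB(lambda) and Naive Q(lambda):
    sigma = 0 recovers TB(lambda) (decay by pi(a|s)), sigma = 1 recovers
    Naive Q(lambda) (never cut). A sketch; details may differ from the paper."""
    return gamma * lmbda * (sigma + (1.0 - sigma) * pi_a)

# Eligibility-trace accumulation along a toy trajectory under a greedy target policy.
e = 0.0
for pi_a in [1.0, 0.0, 1.0, 1.0]:          # 0.0 marks a non-greedy (exploratory) action
    e = 1.0 + trace_decay(lmbda=0.9, gamma=0.99, pi_a=pi_a, sigma=0.5) * e
    print(round(e, 3))                     # with sigma > 0 the trace is shrunk, not cut
```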
Salient object detection or salient region detection models, diverging from
fixation prediction models, have traditionally been dealing with locating and
segmenting the most salient object or region in a scene. While the notion of
most salient object is sensible when multiple objects exist in a scene, current
datasets for evaluation of saliency detection approaches often have scenes with
only one single object. We introduce three main contributions in this paper:
First, we take an in-depth look at the problem of salient object detection by
studying the relationship between where people look in scenes and what they
choose as the most salient object when they are explicitly asked. Based on the
agreement between fixations and saliency judgments, we then suggest that the
most salient object is the one that attracts the highest fraction of fixations.
Second, we provide two new less biased benchmark datasets containing scenes
with multiple objects that challenge existing saliency models. Indeed, we
observed a severe drop in performance of 8 state-of-the-art models on our
datasets (40% to 70%). Third, we propose a very simple yet powerful model based
on superpixels to be used as a baseline for model evaluation and comparison.
While on par with the best models on MSRA-5K dataset, our model wins over other
models on our data highlighting a serious drawback of existing models, which is
convoluting the processes of locating the most salient object and its
segmentation. We also provide a review and statistical analysis of some labeled
scene datasets that can be used for evaluating salient object detection models.
We believe that our work can greatly help remedy the over-fitting of models to
existing biased datasets and opens new venues for future research in this
fast-evolving field. | [
"cs.CV"
] |
Noisy labels are ubiquitous in real-world datasets, which poses a challenge
for robustly training deep neural networks (DNNs), as DNNs usually have a high
capacity to memorize noisy labels. In this paper, we find that the test
accuracy can be quantitatively characterized in terms of the noise ratio in
datasets. In particular, the test accuracy is a quadratic function of the noise
ratio in the case of symmetric noise, which explains the experimental findings
previously published. Based on our analysis, we apply cross-validation to
randomly split noisy datasets, which identifies most samples that have correct
labels. Then we adopt the Co-teaching strategy which takes full advantage of
the identified samples to train DNNs robustly against noisy labels. Compared
with extensive state-of-the-art methods, our strategy consistently improves the
generalization performance of DNNs under both synthetic and real-world training
noise. | [
"cs.LG",
"stat.ML"
] |
Capable of automated near real time superpixel detection and quality
assessment in an uncalibrated monitor typical red green blue (RGB) image,
depicted in either true or false colors, an original low level computer vision
(CV) lightweight computer program, called RGB Image Automatic Mapper (RGBIAM),
is designed and implemented. Constrained by the Calibration Validation (CalVal)
requirements of the Quality Assurance Framework for Earth Observation (QA4EO)
guidelines, RGBIAM requires, as a mandatory first stage, uncalibrated RGB image
pre-processing, consisting of an automated statistical model based
color constancy algorithm. The RGBIAM hybrid inference pipeline comprises: (I)
a direct quantitative to nominal (QN) RGB variable transform, where RGB pixel
values are mapped onto a prior dictionary of color names, equivalent to a
static polyhedralization of the RGB cube. Prior color naming is the deductive
counterpart of inductive vector quantization (VQ), whose typical VQ error
function to minimize is a root mean square error (RMSE). In the output multi
level color map domain, superpixels are automatically detected in linear time
as connected sets of pixels featuring the same color label. (II) An inverse
nominal to quantitative (NQ) RGB variable transform, where a superpixelwise
constant RGB image approximation is generated in linear time to assess a VQ
error image. The hybrid direct and inverse RGBIAM QNQ transform is: (i) general
purpose, data- and application-independent. (ii) Automated, i.e., it requires no
user-machine interaction. (iii) Near real time, with a computational complexity
increasing linearly with the image size. (iv) Implemented in tile streaming
mode, to cope with massive images. Collected outcome and process quality
indicators, including degree of automation, computational efficiency, VQ rate
and VQ error, are consistent with theoretical expectations. | [
"cs.CV"
] |
Food volume estimation is an essential step in the pipeline of dietary
assessment and demands the precise depth estimation of the food surface and
table plane. Existing methods based on computer vision require either
multi-image input or additional depth maps, reducing convenience of
implementation and practical significance. Despite the recent advances in
unsupervised depth estimation from a single image, the achieved performance in
the case of large texture-less areas needs to be improved. In this paper, we
propose a network architecture that jointly performs geometric understanding
(i.e., depth prediction and 3D plane estimation) and semantic prediction on a
single food image, enabling a robust and accurate food volume estimation
regardless of the texture characteristics of the target plane. For the training
of the network, only monocular videos with semantic ground truth are required,
while the depth map and 3D plane ground truth are no longer needed.
Experimental results on two separate food image databases demonstrate that our
method performs robustly on texture-less scenarios and is superior to
unsupervised networks and structure from motion based approaches, while it
achieves comparable performance to fully-supervised methods. | [
"cs.CV"
] |
The periodic table is a fundamental representation of chemical elements that
plays essential theoretical and practical roles. The research article discusses
the experiences of unsupervised training of neural networks to represent
elements on the 2D latent space based on their electron configurations while
forcing disentanglement. To emphasize chemical properties of the elements, the
original data of electron configurations has been realigned towards the
outermost valence orbitals. Recognizing seven shells and four subshells, the
input data has been arranged as (7x4) images. Latent space representation has
been performed using a convolutional beta variational autoencoder (beta-VAE).
Despite discrete and sparse input data, the beta-VAE disentangles elements of
different periods, blocks, groups, and types, while retaining the order along
atomic numbers. In addition, it isolates outliers on the latent space that
turned out to be known cases of Madelung's rule violations for lanthanide and
actinide elements. Considering the generative capabilities of beta-VAE and
discrete input data, supervised machine learning has been set up to find out
if there are insightful patterns distinguishing electron configurations between
real elements and decoded artificial ones. Also, the article addresses the
capability of dual representation by autoencoders. Conventionally, autoencoders
represent observations of input data on the latent space. However, by
transposing and duplicating original input data, it is possible to represent
variables on the latent space as well. The latter can lead to the discovery of
meaningful patterns among input variables. Applying that unsupervised learning
for transposed data of electron configurations, the order of input variables
that has been arranged by the encoder on the latent space has turned out to
exactly match the sequence of Madelung's rule. | [
"cs.LG",
"physics.chem-ph",
"68T30"
] |
Pre-training a deep neural network on the ImageNet dataset is a common
practice for training deep learning models, and generally yields improved
performance and faster training times. The technique of pre-training on one
task and then retraining on a new one is called transfer learning. In this
paper we analyse the effectiveness of using deep transfer learning for
character recognition tasks. We perform three sets of experiments with varying
levels of similarity between source and target tasks to investigate the
behaviour of different types of knowledge transfer. We transfer both parameters
and features and analyse their behaviour. Our results demonstrate that no
significant advantage is gained by using a transfer learning approach over a
traditional machine learning approach for our character recognition tasks. This
suggests that using transfer learning does not necessarily yield a better
performing model in all cases. | [
"cs.LG",
"stat.ML"
] |
The normal distributions transform (NDT) is an effective paradigm for the
point set registration. This method is originally designed for pair-wise
registration and it will suffer from great challenges when applied to
multi-view registration. Under the NDT framework, this paper proposes a novel
multi-view registration method, named 3D multi-view registration based on the
normal distributions transform (3DMNDT), which integrates the K-means
clustering and Lie algebra solver to achieve multi-view registration. More
specifically, the multi-view registration is cast into the problem of maximum
likelihood estimation. Then, the K-means algorithm is utilized to divide all
data points into different clusters, where a normal distribution is computed to
locally model the probability of measuring a data point in each cluster.
Subsequently, the registration problem is formulated by the NDT-based
likelihood function. To maximize this likelihood function, the Lie algebra
solver is developed to sequentially optimize each rigid transformation. The
proposed method alternately implements data point clustering, NDT computing,
and likelihood maximization until desired registration results are obtained.
Experimental results tested on benchmark data sets illustrate that the proposed
method can achieve state-of-the-art performance for multi-view registration. | [
"cs.CV",
"cs.RO"
] |
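A sketch of the NDT-style objective that such a method maximizes: cluster the points with K-means, fit a Gaussian per cluster, and score each point under its cluster's distribution. The clustering library, the regularization, and the toy data are assumptions; the rigid-transform optimization via the Lie algebra solver is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def ndt_log_likelihood(points, centers, covariances, labels):
    """NDT-style objective (a sketch): each point is scored under the Gaussian
    fitted to its cluster; registration would maximize this over rigid transforms."""
    ll = 0.0
    for x, k in zip(points, labels):
        diff = x - centers[k]
        cov = covariances[k] + 1e-6 * np.eye(3)          # regularized covariance
        ll += -0.5 * (diff @ np.linalg.solve(cov, diff)
                      + np.log(np.linalg.det(2 * np.pi * cov)))
    return ll

pts = np.random.randn(300, 3)
km = KMeans(n_clusters=5, n_init=10).fit(pts)
covs = [np.cov(pts[km.labels_ == k].T) for k in range(5)]
print(ndt_log_likelihood(pts, km.cluster_centers_, covs, km.labels_))
```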
Action localization networks are often structured as a feature encoder
sub-network and a localization sub-network, where the feature encoder learns to
transform an input video to features that are useful for the localization
sub-network to generate reliable action proposals. While some of the encoded
features may be more useful for generating action proposals, prior action
localization approaches do not include any attention mechanism that enables the
localization sub-network to attend more to the more important features. In this
paper, we propose a novel attention mechanism, the Class Semantics-based
Attention (CSA), that learns from the temporal distribution of semantics of
action classes present in an input video to find the importance scores of the
encoded features, which are used to provide attention to the more useful
encoded features. We demonstrate on two popular action detection datasets that
incorporating our novel attention mechanism provides considerable performance
gains on competitive action detection models (e.g., around 6.2% improvement
over BMN action detection baseline to obtain 47.5% mAP on the THUMOS-14
dataset), and a new state-of-the-art of 36.25% mAP on the ActivityNet v1.3
dataset. Further, the CSA localization model family which includes BMN-CSA, was
part of the second-placed submission at the 2021 ActivityNet action
localization challenge. Our attention mechanism outperforms prior
self-attention modules such as the squeeze-and-excitation in action detection
task. We also observe that our attention mechanism is complementary to such
self-attention modules in that performance improvements are seen when both are
used together. | [
"cs.CV"
] |
Transformer-based models are popularly used in natural language processing
(NLP). Its core component, self-attention, has aroused widespread interest. To
understand the self-attention mechanism, a direct method is to visualize the
attention map of a pre-trained model. Based on the patterns observed, a series
of efficient Transformers with different sparse attention masks have been
proposed. From a theoretical perspective, universal approximability of
Transformer-based models is also recently proved. However, the above
understanding and analysis of self-attention is based on a pre-trained model.
To rethink the importance analysis in self-attention, we study the significance
of different positions in attention matrix during pre-training. A surprising
result is that diagonal elements in the attention map are the least important
compared with other attention positions. We provide a proof showing that these
diagonal elements can indeed be removed without deteriorating model
performance. Furthermore, we propose a Differentiable Attention Mask (DAM)
algorithm, which further guides the design of the SparseBERT. Extensive
experiments verify our interesting findings and illustrate the effect of the
proposed algorithm. | [
"cs.LG"
] |
Spectral graph theory is well known and widely used in computer vision. In
this paper, we analyze image segmentation algorithms that are based on spectral
graph theory, e.g., normalized cut, and show that there is a natural connection
between spectural graph theory based image segmentationand and edge preserving
filtering. Based on this connection we show that the normalized cut algorithm
is equivalent to repeated iterations of bilateral filtering. Then, using this
equivalence we present and implement a fast normalized cut algorithm for image
segmentation. Experiments show that our implementation can solve the original
optimization problem in the normalized cut algorithm 10 to 100 times faster.
Furthermore, we present a new algorithm called conditioned normalized cut for
image segmentation that can easily incorporate color image patches and
demonstrate how this segmentation problem can be solved with edge preserving
filtering. | [
"cs.CV"
] |
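For reference, one pass of a plain brute-force bilateral filter on a grayscale image, the operation the paper relates to repeated normalized-cut iterations; the parameter values and the toy image are arbitrary.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """One pass of a brute-force bilateral filter on a grayscale image in [0, 1]:
    each pixel becomes a weighted mean of its neighbours, with weights combining
    spatial closeness and intensity similarity."""
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(img, radius, mode="edge")
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = spatial * np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

img = np.clip(np.random.rand(64, 64) * 0.2 + np.tri(64), 0, 1)  # noisy step edge
smoothed = bilateral_filter(img)
print(img.std(), smoothed.std())
```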
This paper introduces the offline meta-reinforcement learning (offline
meta-RL) problem setting and proposes an algorithm that performs well in this
setting. Offline meta-RL is analogous to the widely successful supervised
learning strategy of pre-training a model on a large batch of fixed,
pre-collected data (possibly from various tasks) and fine-tuning the model to a
new task with relatively little data. That is, in offline meta-RL, we
meta-train on fixed, pre-collected data from several tasks in order to adapt to
a new task with a very small amount (less than 5 trajectories) of data from the
new task. By nature of being offline, algorithms for offline meta-RL can
utilize the largest possible pool of training data available and eliminate
potentially unsafe or costly data collection during meta-training. This setting
inherits the challenges of offline RL, but it differs significantly because
offline RL does not generally consider a) transfer to new tasks or b) limited
data from the test task, both of which we face in offline meta-RL. Targeting
the offline meta-RL setting, we propose Meta-Actor Critic with Advantage
Weighting (MACAW), an optimization-based meta-learning algorithm that uses
simple, supervised regression objectives for both the inner and outer loop of
meta-training. On offline variants of common meta-RL benchmarks, we empirically
find that this approach enables fully offline meta-reinforcement learning and
achieves notable gains over prior methods. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Contour detection has been a fundamental component in many image segmentation
and object detection systems. Most previous work utilizes low-level features
such as texture or saliency to detect contours and then use them as cues for a
higher-level task such as object detection. However, we claim that recognizing
objects and predicting contours are two mutually related tasks. Contrary to
traditional approaches, we show that we can invert the commonly established
pipeline: instead of detecting contours with low-level cues for a higher-level
recognition task, we exploit object-related features as high-level cues for
contour detection.
We achieve this goal by means of a multi-scale deep network that consists of
five convolutional layers and a bifurcated fully-connected sub-network. The
section from the input layer to the fifth convolutional layer is fixed and
directly lifted from a pre-trained network optimized over a large-scale object
classification task. This section of the network is applied to four different
scales of the image input. These four parallel and identical streams are then
attached to a bifurcated sub-network consisting of two independently-trained
branches. One branch learns to predict the contour likelihood (with a
classification objective) whereas the other branch is trained to learn the
fraction of human labelers agreeing about the contour presence at a given point
(with a regression criterion).
We show that without any feature engineering our multi-scale deep learning
approach achieves state-of-the-art results in contour detection. | [
"cs.CV"
] |
Previous transfer learning methods based on deep network assume the knowledge
should be transferred between the same hidden layers of the source domain and
the target domains. This assumption doesn't always hold true, especially when
the data from the two domains are heterogeneous with different resolutions. In
such case, the most suitable numbers of layers for the source domain data and
the target domain data would differ. As a result, the high level knowledge from
the source domain would be transferred to the wrong layer of target domain.
Based on this observation, "where to transfer" proposed in this paper should be
a novel research frontier. We propose a new mathematic model named DT-LET to
solve this heterogeneous transfer learning problem. In order to select the best
matching of layers to transfer knowledge, we define specific loss function to
estimate the corresponding relationship between high-level features of data in
the source domain and the target domain. To verify this proposed cross-layer
model, experiments for two cross-domain recognition/classification tasks are
conducted, and the achieved superior results demonstrate the necessity of layer
correspondence searching. | [
"cs.LG",
"stat.ML"
] |
The dramatic success of deep neural networks across multiple application
areas often relies on experts painstakingly designing a network architecture
specific to each task. To simplify this process and make it more accessible, an
emerging research effort seeks to automate the design of neural network
architectures, using e.g. evolutionary algorithms or reinforcement learning or
simple search in a constrained space of neural modules.
Considering the typical size of the search space (e.g. $10^{10}$ candidates
for a $10$-layer network) and the cost of evaluating a single candidate,
current architecture search methods are very restricted. They either rely on
static pre-built modules to be recombined for the task at hand, or they define
a static hand-crafted framework within which they can generate new
architectures from the simplest possible operations.
In this paper, we relax these restrictions, by capitalizing on the collective
wisdom contained in the plethora of neural networks published in online code
repositories. Concretely, we (a) extract and publish GitGraph, a corpus of
neural architectures and their descriptions; (b) we create problem-specific
neural architecture search spaces, implemented as a textual search mechanism
over GitGraph; (c) we propose a method of identifying unique common subgraphs
within the architectures solving each problem (e.g., image processing,
reinforcement learning), that can then serve as modules in the newly created
problem specific neural search space. | [
"cs.LG",
"cs.AI"
] |
Knowledge representation of graph-based systems is fundamental across many
disciplines. To date, most existing methods for representation learning
primarily focus on networks with simplex labels, yet real-world objects (nodes)
are inherently complex in nature and often contain rich semantics or labels,
e.g., a user may belong to diverse interest groups of a social network,
resulting in multi-label networks for many applications. In a multi-label
network, not only does each node have multiple labels, but such labels are
often highly correlated, making existing methods ineffective or unable to handle
such correlation for node representation learning. In this paper, we propose a
novel multi-label graph convolutional network (ML-GCN) for learning node
representation for multi-label networks. To fully explore label-label
correlation and network topology structures, we propose to model a multi-label
network as two Siamese GCNs: a node-node-label graph and a label-label-node
graph. The two GCNs each handle one aspect of representation learning for nodes
and labels, respectively, and they are seamlessly integrated under one
objective function. The learned label representations can effectively preserve
the inner-label interaction and node label properties, and are then aggregated
to enhance the node representation learning under a unified training framework.
Experiments and comparisons on multi-label node classification validate the
effectiveness of our proposed approach. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
Understanding shadows from a single image naturally divides into two
types of task in previous studies, namely shadow detection and shadow
removal. In this paper, we present a multi-task perspective, which is not
embraced by any existing work, to jointly learn both detection and removal in
an end-to-end fashion that aims at enjoying the mutually improved benefits from
each other. Our framework is based on a novel STacked Conditional Generative
Adversarial Network (ST-CGAN), which is composed of two stacked CGANs, each
with a generator and a discriminator. Specifically, a shadow image is fed into
the first generator which produces a shadow detection mask. That shadow image,
concatenated with its predicted mask, goes through the second generator in
order to recover its shadow-free image consequently. In addition, the two
corresponding discriminators are very likely to model higher level
relationships and global scene characteristics for the detected shadow region
and reconstruction via removing shadows, respectively. More importantly, for
multi-task learning, our design of stacked paradigm provides a novel view which
is notably different from the commonly used one as the multi-branch version. To
fully evaluate the performance of our proposed framework, we construct the
first large-scale benchmark with 1870 image triplets (shadow image, shadow mask
image, and shadow-free image) under 135 scenes. Extensive experimental results
consistently show the advantages of ST-CGAN over several representative
state-of-the-art methods on two large-scale publicly available datasets and our
newly released one. | [
"cs.CV"
] |
Deep neural networks have been successfully applied to problems such as image
segmentation, image super-resolution, coloration and image inpainting. In this
work we propose the use of convolutional neural networks (CNN) for image
inpainting of large regions in high-resolution textures. Due to limited
computational resources processing high-resolution images with neural networks
is still an open problem. Existing methods separate inpainting of global
structure and the transfer of details, which leads to blurry results and loss
of global coherence in the detail transfer step. Based on advances in texture
synthesis using CNNs we propose patch-based image inpainting by a CNN that is
able to optimize for global as well as detail texture statistics. Our method is
capable of filling large inpainting regions, oftentimes exceeding the quality
of comparable methods for high-resolution images. For reference patch look-up
we propose to use the same summary statistics that are used in the inpainting
process. | [
"cs.CV"
] |
Given an untrimmed video and a natural language query, Natural Language Video
Localization (NLVL) aims to identify the video moment described by the query.
To address this task, existing methods can be roughly grouped into two groups:
1) propose-and-rank models first define a set of hand-designed moment
candidates and then find out the best-matching one. 2) proposal-free models
directly predict two temporal boundaries of the referential moment from frames.
Currently, almost all propose-and-rank methods have inferior performance
to their proposal-free counterparts. In this paper, we argue that the
propose-and-rank approach is underestimated due to its predefined manner: 1) Hand-designed
rules are hard to guarantee the complete coverage of targeted segments. 2)
Densely sampled candidate moments cause redundant computation and degrade the
performance of ranking process. To this end, we propose a novel model termed
LPNet (Learnable Proposal Network for NLVL) with a fixed set of learnable
moment proposals. The position and length of these proposals are dynamically
adjusted during training process. Moreover, a boundary-aware loss has been
proposed to leverage frame-level information and further improve the
performance. Extensive ablations on two challenging NLVL benchmarks have
demonstrated the effectiveness of LPNet over existing state-of-the-art methods. | [
"cs.CV"
] |
Reparameterization (RP) and likelihood ratio (LR) gradient estimators are
used throughout machine and reinforcement learning; however, they are usually
explained as simple mathematical tricks without providing any insight into
their nature. We use a first principles approach to explain LR and RP, and show
a connection between the two via the divergence theorem. The theory motivated
us to derive optimal importance sampling schemes to reduce LR gradient
variance. Our newly derived distributions have analytic probability densities
and can be directly sampled from. The improvement for Gaussian target
distributions was modest, but for other distributions such as a Beta
distribution, our method could lead to arbitrarily large improvements, and was
crucial to obtain competitive performance in evolution strategies experiments. | [
"cs.LG",
"stat.ML"
] |
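The two estimators can be compared on a one-line example: estimating the gradient of $E_{x\sim N(\mu,1)}[x^2]$ with respect to $\mu$ (true value $2\mu$). The sketch below is a standard textbook illustration, not the experimental setup of the paper.

```python
import numpy as np

# Estimating d/d_mu E_{x ~ N(mu, 1)}[x^2] (true gradient: 2 * mu) two ways.
rng = np.random.default_rng(0)
mu, n = 1.5, 100_000
eps = rng.standard_normal(n)

# Likelihood-ratio / score-function estimator: f(x) * d log p(x; mu) / d mu.
x = mu + eps
lr_grad = np.mean(x ** 2 * (x - mu))          # d log N(x; mu, 1)/d mu = (x - mu)

# Reparameterization estimator: x = mu + eps, differentiate f(mu + eps) directly.
rp_grad = np.mean(2 * (mu + eps))

print(lr_grad, rp_grad, 2 * mu)               # RP typically has much lower variance
```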
We are creating multimedia contents everyday and everywhere. While automatic
content generation has played a fundamental challenge to multimedia community
for decades, recent advances of deep learning have made this problem feasible.
For example, the Generative Adversarial Networks (GANs) is a rewarding approach
to synthesize images. Nevertheless, it is not trivial when capitalizing on GANs
to generate videos. The difficulty originates from the intrinsic structure
where a video is a sequence of visually coherent and semantically dependent
frames. This motivates us to explore semantic and temporal coherence in
designing GANs to generate videos. In this paper, we present a novel Temporal
GANs conditioning on Captions, namely TGANs-C, in which the input to the
generator network is a concatenation of a latent noise vector and caption
embedding, and then is transformed into a frame sequence with 3D
spatio-temporal convolutions. Unlike the naive discriminator which only judges
pairs as fake or real, our discriminator additionally notes whether the video
matches the correct caption. In particular, the discriminator network consists
of three discriminators: video discriminator classifying realistic videos from
generated ones and optimizes video-caption matching, frame discriminator
discriminating between real and fake frames and aligning frames with the
conditioning caption, and motion discriminator emphasizing the philosophy that
the adjacent frames in the generated videos should be smoothly connected as in
real ones. We qualitatively demonstrate the capability of our TGANs-C to
generate plausible videos conditioning on the given captions on two synthetic
datasets (SBMG and TBMG) and one real-world dataset (MSVD). Moreover,
quantitative experiments on MSVD are performed to validate our proposal via
Generative Adversarial Metric and human study. | [
"cs.CV"
] |
Order dispatch is one of the central problems to ride-sharing platforms.
Recently, value-based reinforcement learning algorithms have shown promising
performance on this problem. However, in real-world applications, the
non-stationarity of the demand-supply system poses challenges to re-utilizing
data generated in different time periods to learn the value function. In this
work, motivated by the fact that the relative relationship between the values
of some states is largely stable across various environments, we propose a
pattern transfer learning framework for value-based reinforcement learning in
the order dispatch problem. Our method efficiently captures the value patterns
by incorporating a concordance penalty. The superior performance of the
proposed method is supported by experiments. | [
"cs.LG"
] |
In this paper, we introduce a new dataset consisting of 360,001 focused
natural language descriptions for 10,738 images. This dataset, the Visual
Madlibs dataset, is collected using automatically produced fill-in-the-blank
templates designed to gather targeted descriptions about: people and objects,
their appearances, activities, and interactions, as well as inferences about
the general scene or its broader context. We provide several analyses of the
Visual Madlibs dataset and demonstrate its applicability to two new description
generation tasks: focused description generation, and multiple-choice
question-answering for images. Experiments using joint-embedding and deep
learning methods show promising results on these tasks. | [
"cs.CV",
"cs.CL"
] |
The gradient-based optimization method for deep machine learning models
suffers from gradient vanishing and exploding problems, particularly when the
computational graph becomes deep. In this work, we propose the tangent-space
gradient optimization (TSGO) for the probabilistic models to keep the gradients
from vanishing or exploding. The central idea is to guarantee the orthogonality
between the variational parameters and the gradients. The optimization is then
implemented by rotating the parameter vector towards the direction of the gradient.
We explain and verify TSGO in tensor network (TN) machine learning, where the TN
describes the joint probability distribution as a normalized state $\left| \psi
\right\rangle $ in Hilbert space. We show that the gradient can be restricted
in the tangent space of $\left\langle \psi \right.\left| \psi \right\rangle =
1$ hyper-sphere. Instead of additional adaptive methods to control the learning
rate in deep learning, the learning rate of TSGO is naturally determined by the
angle $\theta $ as $\eta = \tan \theta $. Our numerical results reveal better
convergence of TSGO in comparison to the off-the-shelf Adam. | [
"cs.LG",
"cond-mat.dis-nn",
"stat.ML"
] |
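The relation $\eta = \tan \theta$ can be seen directly: stepping along the normalized tangent-space gradient with step size $\tan \theta$ and renormalizing is exactly a rotation of the unit parameter vector by $\theta$. A minimal sketch (the step angle and toy vectors are arbitrary):

```python
import numpy as np

def tsgo_step(v, grad, theta=0.05):
    """One TSGO-style update (a sketch): keep the parameter vector on the unit
    hypersphere, project the gradient onto the tangent space (orthogonal to v),
    then rotate v towards the descent direction by an angle theta."""
    v = v / np.linalg.norm(v)
    g_t = grad - np.dot(grad, v) * v                  # tangent-space component
    g_t = g_t / (np.linalg.norm(g_t) + 1e-12)
    v_new = v - np.tan(theta) * g_t                   # learning rate eta = tan(theta)
    return v_new / np.linalg.norm(v_new)              # == cos(theta)*v - sin(theta)*g_t

v = np.random.randn(10)
g = np.random.randn(10)
v1 = tsgo_step(v, g)
print(np.linalg.norm(v1), np.dot(v / np.linalg.norm(v), v1))   # stays on sphere; cos(theta)
```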
Bayesian change-point detection, together with latent variable models, allows
to perform segmentation over high-dimensional time-series. We assume that
change-points lie on a lower-dimensional manifold where we aim to infer subsets
of discrete latent variables. For this model, full inference is computationally
unfeasible and pseudo-observations based on point-estimates are used instead.
However, if estimation is not certain enough, change-point detection gets
affected. To circumvent this problem, we propose a multinomial sampling
methodology that improves the detection rate and reduces the delay while
keeping complexity stable and inference analytically tractable. Our experiments
show results that outperform the baseline method and we also provide an example
oriented to a human behavior study. | [
"stat.ML",
"cs.LG"
] |
In low-altitude Unmanned Aerial Vehicle (UAV) flights, power lines are
considered one of the most threatening hazards and one of the most difficult
obstacles to avoid. In recent years, many vision-based techniques have been
proposed to detect power lines in order to facilitate self-driving UAVs and
automatic obstacle avoidance. However, most of the proposed methods are
typically based on a common three-step approach: (i) edge detection, (ii) the
Hough transform, and (iii) spurious line elimination based on power line
constraints. These approaches are not only slow and inaccurate but also require
a huge amount of effort in post-processing to distinguish between power lines
and spurious lines. In this paper, we introduce LS-Net, a fast single-shot
line-segment detector, and apply it to power line detection. The LS-Net is by
design fully convolutional and consists of three modules: (i) a fully
convolutional feature extractor, (ii) a classifier, and (iii) a line segment
regressor. Due to the unavailability of large datasets with annotations of
power lines, we render synthetic images of power lines using the Physically
Based Rendering (PBR) approach and propose a series of effective data
augmentation techniques to generate more training data. With a customized
version of the VGG-16 network as the backbone, the proposed approach
outperforms existing state-of-the-art approaches. In addition, the LS-Net can
detect power lines in near real-time (20.4 FPS). This suggests that our
proposed approach has a promising role in automatic obstacle avoidance and as a
valuable component of self-driving UAVs, especially for autonomous power line
inspection. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
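For context, the classical three-step baseline that this entry criticises (edge detection, Hough transform, heuristic filtering) can be sketched with OpenCV as below. The thresholds and the near-horizontal filtering rule are illustrative assumptions; LS-Net itself is a convolutional detector and is not reproduced here.

```python
import cv2
import numpy as np

def classic_power_line_candidates(gray_image):
    """Baseline three-step pipeline (not LS-Net): Canny edges -> probabilistic
    Hough transform -> crude spurious-line filtering."""
    edges = cv2.Canny(gray_image, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return []
    kept = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        # A simple geometric constraint stands in for the
        # "spurious line elimination" step of the classical pipeline.
        if angle < 30 or angle > 150:
            kept.append((x1, y1, x2, y2))
    return kept
```

The post-processing step is exactly where such pipelines become slow and brittle, which motivates the single-shot detector described in the entry.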
Disentangled representations have recently been shown to improve fairness,
data efficiency and generalisation in simple supervised and reinforcement
learning tasks. To extend the benefits of disentangled representations to more
complex domains and practical applications, it is important to enable
hyperparameter tuning and model selection of existing unsupervised approaches
without requiring access to ground truth attribute labels, which are not
available for most datasets. This paper addresses this problem by introducing a
simple yet robust and reliable method for unsupervised disentangled model
selection. Our approach, Unsupervised Disentanglement Ranking (UDR), leverages
the recent theoretical results that explain why variational autoencoders
disentangle (Rolinek et al., 2019), to quantify the quality of disentanglement
by performing pairwise comparisons between trained model representations. We
show that our approach performs comparably to the existing supervised
alternatives across 5,400 models from six state-of-the-art unsupervised
disentangled representation learning model classes. Furthermore, we show that
the ranking produced by our approach correlates well with the final task
performance on two different domains. | [
"cs.LG",
"stat.ML"
] |
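In the spirit of the pairwise model comparison described above, a simplified scoring sketch is given below: it correlates the latent dimensions of two trained models on the same inputs and rewards similarity matrices that are close to a one-to-one mapping. This is an assumed simplification for illustration, not the published UDR score.

```python
import numpy as np

def pairwise_disentanglement_score(z_a, z_b):
    """Compare the representations of two trained models (illustrative sketch).

    z_a, z_b : arrays of shape (n_points, n_latents) of encoded representations
               of the same inputs under two different trained models.
    """
    d = z_a.shape[1]
    # Cross-correlation between latent dimensions of model A and model B.
    corr = np.abs(np.corrcoef(z_a.T, z_b.T)[:d, d:])
    # A well-disentangled pair should map each latent of A to (at most) one
    # latent of B strongly, and to the rest weakly.
    row_score = corr.max(axis=1) / (corr.sum(axis=1) + 1e-12)
    col_score = corr.max(axis=0) / (corr.sum(axis=0) + 1e-12)
    return 0.5 * (row_score.mean() + col_score.mean())
```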
In this work we address the task of semantic image segmentation with Deep
Learning and make three main contributions that are experimentally shown to
have substantial practical merit. First, we highlight convolution with
upsampled filters, or 'atrous convolution', as a powerful tool in dense
prediction tasks. Atrous convolution allows us to explicitly control the
resolution at which feature responses are computed within Deep Convolutional
Neural Networks. It also allows us to effectively enlarge the field of view of
filters to incorporate larger context without increasing the number of
parameters or the amount of computation. Second, we propose atrous spatial
pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP
probes an incoming convolutional feature layer with filters at multiple
sampling rates and effective fields-of-views, thus capturing objects as well as
image context at multiple scales. Third, we improve the localization of object
boundaries by combining methods from DCNNs and probabilistic graphical models.
The commonly deployed combination of max-pooling and downsampling in DCNNs
achieves invariance but has a toll on localization accuracy. We overcome this
by combining the responses at the final DCNN layer with a fully connected
Conditional Random Field (CRF), which is shown both qualitatively and
quantitatively to improve localization performance. Our proposed "DeepLab"
system sets the new state-of-the-art on the PASCAL VOC-2012 semantic image
segmentation task, reaching 79.7% mIOU in the test set, and advances the
results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and
Cityscapes. All of our code is made publicly available online. | [
"cs.CV"
] |
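The core mechanism named in this entry, atrous (dilated) convolution combined into an ASPP module, can be sketched in a few lines of PyTorch. The dilation rates and channel sizes below are illustrative choices, not necessarily those of the DeepLab system.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal Atrous Spatial Pyramid Pooling sketch: parallel 3x3 convolutions
    with different dilation (atrous) rates probe the same feature map at several
    effective fields of view without adding parameters per rate beyond one conv."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])

    def forward(self, x):
        # Fuse the multi-rate responses (summation here); DeepLab combines them
        # before the final per-pixel classifier.
        return torch.stack([branch(x) for branch in self.branches]).sum(dim=0)
```

Setting `padding` equal to the dilation rate keeps the spatial resolution fixed, which is the "explicit control of feature resolution" the entry refers to.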
Image segmentation is one of the principal approaches in image processing, and
choosing the most appropriate binarization algorithm for a given image is a
non-trivial task in itself. In this paper, we present a comparative study of
various binarization algorithms and propose methodologies for their validation.
We also develop two novel algorithms to determine threshold values for the
pixel values of a grayscale image. The performance of the algorithms is
estimated on test images using evaluation metrics for the binarization of
textual and synthetic images. We achieve better image quality by applying
optimum-thresholding binarization techniques. | [
"cs.CV",
"cs.MM"
] |
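As a point of reference for global optimum thresholding, a standard baseline (Otsu's method) is sketched below with OpenCV; the entry's two novel threshold-selection algorithms are not described in enough detail to reproduce here.

```python
import cv2

def binarize_otsu(gray_image):
    """Standard optimum-thresholding baseline (Otsu's method) for an 8-bit
    grayscale image; shown only to illustrate global thresholding."""
    # Otsu picks the threshold that minimizes intra-class intensity variance.
    threshold_value, binary = cv2.threshold(
        gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return threshold_value, binary
```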
Different types of spectroscopies, such as X-ray absorption near edge
structure (XANES) and Raman spectroscopy, play a very important role in
analyzing the characteristics of different materials. In scientific literature,
XANES/Raman data are usually plotted as line graphs, which is a visually
appropriate way to represent the information when the end-user is a human
reader. However, such graphs are not conducive to direct programmatic analysis
due to the lack of automatic tools. In this paper, we develop a plot digitizer,
named Plot2Spectra, to extract data points from spectroscopy graph images in an
automatic fashion, which makes it possible for large scale data acquisition and
analysis. Specifically, the plot digitizer is a two-stage framework. In the
first axis alignment stage, we adopt an anchor-free detector to detect the plot
region and then refine the detected bounding boxes with an edge-based
constraint to locate the positions of the two axes. We also apply a scene text
detector to extract and interpret all tick information below the x-axis. In the
second plot data extraction stage, we first employ semantic segmentation to
separate pixels belonging to plot lines from the background, and from there
apply optical flow constraints to the plot-line pixels to assign them to the
appropriate line (data instance) they encode. Extensive experiments are
conducted to validate the effectiveness of the proposed plot digitizer, which
shows that such a tool could help accelerate the discovery and machine learning
of materials properties. | [
"cs.CV"
] |
Multi-Source Domain Adaptation (MSDA) is a more practical domain adaptation
setting for real-world applications. It relaxes the assumption in conventional
Unsupervised Domain Adaptation (UDA) that source data are sampled from a single
domain and match a uniform data distribution. MSDA is more difficult due to the
existence of different domain shifts between distinct domain pairs. When
considering videos, negative transfer can be provoked by spatial-temporal
features and can be formulated into a more challenging Multi-Source Video
Domain Adaptation (MSVDA) problem. In this paper, we address the MSVDA problem
by proposing a novel Temporal Attentive Moment Alignment Network (TAMAN) which
aims for effective feature transfer by dynamically aligning both spatial and
temporal feature moments. TAMAN further constructs robust global temporal
features by attending to dominant domain-invariant local temporal features with
high local classification confidence and low disparity between global and local
feature discrepancies. To facilitate future research on the MSVDA problem, we
introduce comprehensive benchmarks, covering extensive MSVDA scenarios.
Empirical results demonstrate a superior performance of the proposed TAMAN
across multiple MSVDA benchmarks. | [
"cs.CV"
] |
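The moment-alignment idea behind this entry can be illustrated with a simple first/second-moment matching loss between each source domain and the target, as sketched below. TAMAN additionally uses attention and temporal modelling, which this assumed simplification does not cover.

```python
import torch

def moment_alignment_loss(source_feats, target_feats):
    """Align first and second feature moments of one source domain with the
    target domain (illustrative sketch).

    source_feats, target_feats : tensors of shape (batch, feature_dim)
    """
    mean_gap = (source_feats.mean(dim=0) - target_feats.mean(dim=0)).pow(2).sum()
    var_gap = (source_feats.var(dim=0) - target_feats.var(dim=0)).pow(2).sum()
    return mean_gap + var_gap

def msvda_moment_loss(source_domains, target_feats):
    # Average over all source domains (uniform weighting here; an attentive
    # model would learn per-domain weights instead).
    return torch.stack(
        [moment_alignment_loss(s, target_feats) for s in source_domains]).mean()
```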
Material classification in natural settings is a challenge due to complex
interplay of geometry, reflectance properties, and illumination. Previous work
on material classification relies strongly on hand-engineered features of
visual samples. In this work we use a Convolutional Neural Network (convnet)
that learns descriptive features for the specific task of material recognition.
Specifically, transfer learning from the task of object recognition is
exploited to more effectively train good features for material classification.
The approach of transfer learning using convnets yields significantly higher
recognition rates when compared to previous state-of-the-art approaches. We
then analyze the relative contribution of reflectance and shading information
by a decomposition of the image into its intrinsic components. The use of
convnets for material classification was hindered by the strong demand for
sufficient and diverse training data, even with transfer learning approaches.
Therefore, we present a new data set containing approximately 10k images
divided into 10 material categories. | [
"cs.CV"
] |
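The transfer-learning recipe mentioned in this entry (reuse features learned for object recognition, retrain a classifier for the 10 material categories) can be sketched as below. The ResNet-18 backbone is an assumed stand-in; the paper's exact architecture is not specified here.

```python
import torch.nn as nn
from torchvision import models

def build_material_classifier(num_materials=10, freeze_backbone=True):
    """Transfer-learning sketch: start from an ImageNet-pretrained backbone and
    retrain only the final classification layer for the material categories."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in backbone.parameters():
            p.requires_grad = False      # keep object-recognition features fixed
    # Replace the classification head; only this layer is trainable by default.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_materials)
    return backbone
```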
An optimal feedback controller for a given Markov decision process (MDP) can
in principle be synthesized by value or policy iteration. However, if the
system dynamics and the reward function are unknown, a learning agent must
discover an optimal controller via direct interaction with the environment.
Such interactive data gathering commonly leads to divergence towards dangerous
or uninformative regions of the state space unless additional regularization
measures are taken. Prior works proposed bounding the information loss measured
by the Kullback-Leibler (KL) divergence at every policy improvement step to
eliminate instability in the learning dynamics. In this paper, we consider a
broader family of $f$-divergences, and more concretely $\alpha$-divergences,
which inherit the beneficial property of providing the policy improvement step
in closed form while at the same time yielding a corresponding dual objective
for policy evaluation. Such an entropic proximal policy optimization view gives a
unified perspective on compatible actor-critic architectures. In particular,
common least-squares value function estimation coupled with advantage-weighted
maximum likelihood policy improvement is shown to correspond to the Pearson
$\chi^2$-divergence penalty. Other actor-critic pairs arise for various choices
of the penalty-generating function $f$. On a concrete instantiation of our
framework with the $\alpha$-divergence, we carry out asymptotic analysis of the
solutions for different values of $\alpha$ and demonstrate the effects of the
divergence function choice on common standard reinforcement learning problems. | [
"cs.LG",
"stat.ML"
] |
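To make the advantage-weighted maximum-likelihood view concrete, the sketch below computes per-sample weights for a weighted log-likelihood policy update under two members of the divergence family. The exact weight forms and temperatures in the paper may differ; this is an assumed illustration only.

```python
import numpy as np

def advantage_weights(advantages, eta=1.0, divergence="kl"):
    """Illustrative per-sample weights for advantage-weighted maximum-likelihood
    policy improvement. A KL penalty gives exponential weights; a Pearson chi^2
    penalty gives weights linear in the advantage, clipped at zero."""
    if divergence == "kl":
        w = np.exp(advantages / eta)
    elif divergence == "pearson_chi2":
        w = np.maximum(1.0 + advantages / eta, 0.0)
    else:
        raise ValueError("unsupported divergence")
    return w / w.sum()   # normalized weights for the weighted log-likelihood fit
```

The temperature eta plays the role of the proximal penalty strength: larger eta keeps the new policy closer to the old one, smaller eta concentrates the weights on high-advantage samples.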