text (stringlengths 29–3.31k) | label (sequencelengths 1–11) |
---|---|
Modern deep neural networks are typically highly overparameterized. Pruning
techniques are able to remove a significant fraction of network parameters with
little loss in accuracy. Recently, techniques based on dynamic reallocation of
non-zero parameters have emerged, allowing direct training of sparse networks
without having to pre-train a large dense model. Here we present a novel
dynamic sparse reparameterization method that addresses the limitations of
previous techniques such as high computational cost and the need for manual
configuration of the number of free parameters allocated to each layer. We
evaluate the performance of dynamic reallocation methods in training deep
convolutional networks and show that our method outperforms previous static and
dynamic reparameterization methods, yielding the best accuracy for a fixed
parameter budget, on par with accuracies obtained by iteratively pruning a
pre-trained dense model. We further investigate the mechanisms underlying the superior generalization performance of the resulting sparse networks. We find that neither the structure nor the initialization of the non-zero parameters is sufficient to explain the superior performance. Rather, effective learning crucially depends on continuous exploration of the sparse network structure space during training. Our work suggests that exploring structural
degrees of freedom during training is more effective than adding extra
parameters to the network. | [
"cs.LG",
"stat.ML"
] |
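The abstract above describes a prune-and-regrow scheme. Below is a minimal sketch of one dynamic-reallocation step under a fixed parameter budget; the magnitude-based pruning threshold and random regrowth used here are illustrative assumptions, not the paper's exact heuristic.

```python
import numpy as np

def reallocate(weights, mask, prune_frac=0.2, rng=np.random.default_rng(0)):
    """One dynamic-reallocation step: prune the smallest-magnitude non-zero
    weights, then regrow the same number of connections at currently-zero
    positions so the total parameter budget stays fixed."""
    nonzero = np.flatnonzero(mask)
    k = int(prune_frac * nonzero.size)
    # Prune: zero out the k smallest-magnitude surviving weights.
    prune_idx = nonzero[np.argsort(np.abs(weights[nonzero]))[:k]]
    mask[prune_idx] = 0
    weights[prune_idx] = 0.0
    # Regrow: activate k currently-zero positions, initialized to zero.
    zero_idx = np.flatnonzero(mask == 0)
    grow_idx = rng.choice(zero_idx, size=k, replace=False)
    mask[grow_idx] = 1
    return weights, mask

# Toy usage: a flattened layer with roughly a 10% density budget.
w = np.random.randn(1000)
m = (np.random.rand(1000) < 0.1).astype(int)
w *= m
budget_before = m.sum()
w, m = reallocate(w, m)
assert m.sum() == budget_before  # the parameter budget is preserved
```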
Evaluation of large-scale fingerprint search algorithms has been limited due
to lack of publicly available datasets. To address this problem, we utilize a
Generative Adversarial Network (GAN) to synthesize a fingerprint dataset
consisting of 100 million fingerprint images. In contrast to existing
fingerprint synthesis algorithms, we incorporate an identity loss which guides
the generator to synthesize fingerprints corresponding to more distinct
identities. The characteristics of our synthesized fingerprints are shown to be
more similar to real fingerprints than existing methods via eight different
metrics (minutiae count - block and template, minutiae direction - block and
template, minutiae convex hull area, minutiae spatial distribution, block
minutiae quality distribution, and NFIQ 2.0 scores). Additionally, the
synthetic fingerprints based on our approach are shown to be more distinct than
synthetic fingerprints based on published methods through search results and
imposter distribution statistics. Finally, we report, for the first time in the open literature, search accuracy against a gallery of 100 million fingerprint images
(NIST SD4 Rank-1 accuracy of 89.7%). | [
"cs.CV"
] |
The present paper introduces the $\eta$ and $\mu$ connections in order to add regional information to $\lambda$-flat zones, which only take local information into account. A top-down approach is considered. First, $\lambda$-flat zones are built in a way that leads to a sub-segmentation. Then a finer segmentation is obtained by computing $\eta$-bounded regions and $\mu$-geodesic balls inside the $\lambda$-flat zones. The proposed algorithms for the construction of new partitions are based on queues with an ordered selection of seeds using the cumulative distance. $\eta$-bounded regions offer control over the variations of amplitude in the class from a point, called the center, and $\mu$-geodesic balls control the "size" of the class. These results are
applied to hyperspectral images. | [
"cs.CV",
"math.NA"
] |
Tabular datasets are ubiquitous in data science applications. Given their
importance, it seems natural to apply state-of-the-art deep learning algorithms
in order to fully unlock their potential. Here we propose neural network models that represent tabular time series and can optionally leverage their
hierarchical structure. This results in two architectures for tabular time
series: one for learning representations that is analogous to BERT and can be
pre-trained end-to-end and used in downstream tasks, and one that is akin to
GPT and can be used for generation of realistic synthetic tabular sequences. We
demonstrate our models on two datasets: a synthetic credit card transaction
dataset, where the learned representations are used for fraud detection and
synthetic data generation, and on a real pollution dataset, where the learned
encodings are used to predict atmospheric pollutant concentrations. Code and
data are available at https://github.com/IBM/TabFormer. | [
"cs.LG",
"cs.AI"
] |
Objective: Herein, a neural network-based liver segmentation algorithm is
proposed, and its performance was evaluated using abdominal computed tomography
(CT) images. Methods: A fully convolutional network was developed to address the volumetric image segmentation problem. To guide the neural network to accurately delineate the target liver object, the network was deeply supervised by applying an adaptive self-supervision scheme to derive the essential contour, which acted as a complement to the global shape. The discriminative
contour, shape, and deep features were internally merged for the segmentation
results. Results and Conclusion: 160 abdominal CT images were used for training
and validation. The quantitative evaluation of the proposed network was
performed through eight-fold cross-validation. The results showed that the method, which uses the contour feature, segmented the liver more accurately than the state of the art, with a 2.13% improvement in the Dice score.
Significance: In this study, a new framework was introduced to guide a neural
network and learn complementary contour features. The proposed neural network
demonstrates that the guided contour features can significantly improve the
performance of the segmentation task. | [
"cs.CV",
"68U10"
] |
Q-learning, which seeks to learn the optimal Q-function of a Markov decision
process (MDP) in a model-free fashion, lies at the heart of reinforcement
learning. When it comes to the synchronous setting (such that independent
samples for all state-action pairs are drawn from a generative model in each
iteration), substantial progress has been made recently towards understanding
the sample efficiency of Q-learning. Take a $\gamma$-discounted
infinite-horizon MDP with state space $\mathcal{S}$ and action space
$\mathcal{A}$: to yield an entrywise $\varepsilon$-accurate estimate of the
optimal Q-function, state-of-the-art theory for Q-learning proves that a sample
size on the order of
$\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^5\varepsilon^{2}}$ is sufficient,
which, however, fails to match the existing minimax lower bound. This
gives rise to natural questions: what is the sharp sample complexity of
Q-learning? Is Q-learning provably sub-optimal? In this work, we settle these
questions by (1) demonstrating that the sample complexity of Q-learning is at
most on the order of
$\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ (up to some log
factor) for any $0<\varepsilon <1$, and (2) developing a matching lower bound
to confirm the sharpness of our result. Our findings unveil both the
effectiveness and limitation of Q-learning: its sample complexity matches that
of speedy Q-learning without requiring extra computation and storage, albeit
still being considerably higher than the minimax lower bound. | [
"stat.ML",
"cs.IT",
"cs.LG",
"math.IT",
"math.OC",
"math.ST",
"stat.TH"
] |
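For reference, a minimal sketch of the synchronous Q-learning setting analyzed above: every iteration draws one fresh sample per state-action pair from a generative model and applies the standard Q-learning update. The toy MDP and the rescaled-linear step size are illustrative assumptions, not the paper's analysis setup.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
# Toy MDP: random transition kernel P[s, a, s'] and rewards R[s, a].
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.random((S, A))

Q = np.zeros((S, A))
for t in range(1, 20001):
    eta = 1.0 / (1.0 + (1 - gamma) * t)   # a common rescaled-linear step size
    # Synchronous setting: one fresh next-state sample for EVERY (s, a) pair.
    next_states = np.array([[rng.choice(S, p=P[s, a]) for a in range(A)]
                            for s in range(S)])
    target = R + gamma * Q[next_states].max(axis=-1)
    Q += eta * (target - Q)

print(Q.max(axis=1))   # approximate optimal state values
```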
In this work, we present a novel approach for training Generative Adversarial
Networks (GANs). Using the attention maps produced by a Teacher-Network, we are able to improve the quality of the generated images as well as perform weakly supervised object localization on the generated images. To this end, we generate images of HEp-2 cells captured with Indirect Immunofluorescence (IIF) and study the ability of our network to perform weakly supervised localization of the cell. Firstly, we
demonstrate that whilst GANs can learn the mapping between the input domain and
the target distribution efficiently, the discriminator network is not able to
detect the regions of interest. Secondly, we present a novel attention transfer
mechanism which allows us to force the discriminator to put emphasis on the
regions of interest via transfer learning. Thirdly, we show that this leads to
more realistic images, as the discriminator learns to put emphasis on the area
of interest. Fourthly, the proposed method allows one to generate both images and attention maps, which can be useful for data annotation, e.g., in object
detection. | [
"cs.CV"
] |
Predictive business process monitoring focuses on predicting future
characteristics of a running process using event logs. The foresight into
process execution promises great potential for efficient operations, better
resource management, and effective customer services. Deep learning-based
approaches have been widely adopted in process mining to address the
limitations of classical algorithms for solving multiple problems, especially
the next event and remaining-time prediction tasks. Nevertheless, designing a
deep neural architecture that performs competitively across various tasks is
challenging as existing methods fail to capture long-range dependencies in the
input sequences and perform poorly for lengthy process traces. In this paper,
we propose ProcessTransformer, an approach for learning high-level
representations from event logs with an attention-based network. Our model
incorporates long-range memory and relies on a self-attention mechanism to
establish dependencies between a multitude of event sequences and corresponding
outputs. We evaluate the applicability of our technique on nine real event
logs. We demonstrate that the transformer-based model outperforms several
baselines of prior techniques by obtaining on average above 80% accuracy for
the task of predicting the next activity. Our method also performs competitively, compared to baselines, on the tasks of predicting event time and the remaining time of a running case. | [
"cs.LG",
"cs.AI"
] |
Closing the gap between the hardware requirements of state-of-the-art
convolutional neural networks and the limited resources constraining embedded
applications is the next big challenge in deep learning research. The
computational complexity and memory footprint of such neural networks are
typically daunting for deployment in resource constrained environments. Model
compression techniques, such as pruning, are emphasized among other
optimization methods for solving this problem. Most existing techniques require
domain expertise or result in irregular sparse representations, which increase
the burden of deploying deep learning applications on embedded hardware
accelerators. In this paper, we propose the autoencoder-based low-rank
filter-sharing technique (ALF). When applied to various networks, ALF is compared to state-of-the-art pruning methods, demonstrating its efficient compression capabilities on theoretical metrics as well as on an accurate, deterministic hardware model. In our experiments, ALF showed a reduction of
70\% in network parameters, 61\% in operations and 41\% in execution time, with
minimal loss in accuracy. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
We propose an improved eye center localization method based on the Hough
transform, called Circle-based Eye Center Localization (CECL) that is simple,
robust, and achieves accuracy on a par with typically more complex
state-of-the-art methods. The CECL method relies on color and shape cues that
distinguish the iris from other facial structures. The accuracy of the CECL
method is demonstrated through a comparison with 15 state-of-the-art eye center
localization methods against five error thresholds, as reported in the
literature. The CECL method achieved an accuracy of 80.8% to 99.4% and ranked
first for 2 of the 5 thresholds. It is concluded that the CECL method offers an
attractive alternative to existing methods for automatic eye center
localization. | [
"cs.CV"
] |
Tree-structured data usually contain both topological and geometrical
information, and are necessarily considered on a manifold instead of in Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so that the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded
in the data into the non-negative matrix factorization method to determine meta-trees
from the T-A matrix, and the signature vector of each single tree can then be
extracted by meta-tree decomposition. The meta-tree space turns out to be a
cone space, in which we explore the distance metric and implement the
clustering algorithm based on concepts such as the Fr\'echet mean. Finally, the
T-A matrix based clustering (TAMBAC) framework is evaluated and compared using
both simulated data and real retinal images to illustrate its efficiency and
accuracy. | [
"cs.CV",
"cs.LG",
"68T10, 62H30"
] |
Recent advances in convolutional neural networks have shown promising results
in 3D shape completion. But due to GPU memory limitations, these methods can
only produce low-resolution outputs. To inpaint 3D models with semantic
plausibility and contextual details, we introduce a hybrid framework that
combines a 3D Encoder-Decoder Generative Adversarial Network (3D-ED-GAN) and a
Long-term Recurrent Convolutional Network (LRCN). The 3D-ED-GAN is a 3D
convolutional neural network trained with a generative adversarial paradigm to
fill in missing 3D data at low resolution. LRCN adopts a recurrent neural network
architecture to minimize GPU memory usage and incorporates an Encoder-Decoder
pair into a Long Short-term Memory Network. By handling the 3D model as a
sequence of 2D slices, LRCN transforms a coarse 3D shape into a more complete
and higher resolution volume. While 3D-ED-GAN captures global contextual
structure of the 3D shape, LRCN localizes the fine-grained details.
Experimental results on both real-world and synthetic data show that reconstructions from corrupted models result in complete and high-resolution 3D objects. | [
"cs.CV"
] |
Phrase grounding, the problem of associating image regions to caption words,
is a crucial component of vision-language tasks. We show that phrase grounding
can be learned by optimizing word-region attention to maximize a lower bound on
mutual information between images and caption words. Given pairs of images and
captions, we maximize compatibility of the attention-weighted regions and the
words in the corresponding caption, compared to non-corresponding pairs of
images and captions. A key idea is to construct effective negative captions for
learning through language model guided word substitutions. Training with our
negatives yields a $\sim10\%$ absolute gain in accuracy over randomly-sampled
negatives from the training data. Our weakly supervised phrase grounding model
trained on COCO-Captions shows a healthy gain of $5.7\%$ to achieve $76.7\%$
accuracy on the Flickr30K Entities benchmark. | [
"cs.CV",
"cs.CL",
"cs.LG",
"stat.ML"
] |
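As a rough illustration of the objective described above, here is a generic InfoNCE-style contrastive sketch that scores attention-weighted regions against the words of the true caption versus negative captions; the scoring function, pooling, and loss form are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def grounding_loss(regions, words, neg_words):
    """regions: (B, R, D) region features; words: (B, W, D) caption word
    features; neg_words: (B, N, W, D) word features of N negative captions."""
    def score(w, r):
        # Word-region attention: each word attends over regions, then the
        # attention-weighted regions are compared with the words.
        attn = torch.softmax(torch.einsum('bwd,brd->bwr', w, r), dim=-1)
        pooled = torch.einsum('bwr,brd->bwd', attn, r)
        return F.cosine_similarity(pooled, w, dim=-1).mean(dim=-1)  # (B,)

    pos = score(words, regions)                                     # (B,)
    neg = torch.stack([score(neg_words[:, i], regions)
                       for i in range(neg_words.size(1))], dim=1)   # (B, N)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)
    # Contrastive objective: the true caption should out-score its negatives.
    return F.cross_entropy(logits, torch.zeros(len(logits), dtype=torch.long))

# Toy usage with random features.
loss = grounding_loss(torch.randn(4, 10, 64), torch.randn(4, 7, 64),
                      torch.randn(4, 3, 7, 64))
```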
This paper describes a simple technique to analyze Generative Adversarial
Networks (GANs) and create interpretable controls for image synthesis, such as
change of viewpoint, aging, lighting, and time of day. We identify important
latent directions based on Principal Components Analysis (PCA) applied either
in latent space or feature space. Then, we show that a large number of
interpretable controls can be defined by layer-wise perturbation along the
principal directions. Moreover, we show that BigGAN can be controlled with
layer-wise inputs in a StyleGAN-like manner. We show results on different GANs
trained on various datasets, and demonstrate good qualitative matches to edit
directions found through earlier supervised approaches. | [
"cs.CV",
"cs.GR"
] |
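A minimal sketch of the PCA step described above: sample many latent codes, compute principal directions, and perturb a latent along one direction to obtain an interpretable edit. The generator is left as a stand-in rather than BigGAN or StyleGAN, and the perturbation scale is a placeholder.

```python
import numpy as np

def principal_directions(latent_samples, n_components=10):
    """PCA over a large sample of latent codes (or intermediate features)."""
    z = latent_samples - latent_samples.mean(axis=0, keepdims=True)
    # SVD of the centered samples; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return vt[:n_components]                   # (n_components, latent_dim)

rng = np.random.default_rng(0)
latents = rng.standard_normal((10000, 512))    # stand-in for sampled z's
V = principal_directions(latents)

z = rng.standard_normal(512)
sigma = 3.0
z_edit = z + sigma * V[0]   # move along the first principal direction
# images = generator(z), generator(z_edit)  -> compare to inspect the edit
```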
To cope with high annotation costs, training a classifier only from weakly
supervised data has attracted a great deal of attention these days. Among
various approaches, strengthening supervision from completely unsupervised
classification is a promising direction, which typically employs class priors
as the only supervision and trains a binary classifier from unlabeled (U)
datasets. While existing risk-consistent methods are theoretically grounded
with high flexibility, they can learn only from two U sets. In this paper, we
propose a new approach for binary classification from $m$ U-sets for $m\ge2$.
Our key idea is to consider an auxiliary classification task called surrogate
set classification (SSC), which is aimed at predicting from which U set each
observed data point is drawn. SSC can be solved by a standard (multi-class)
classification method, and we use the SSC solution to obtain the final binary
classifier through a certain linear-fractional transformation. We build our method in a flexible and efficient end-to-end deep learning framework and prove
it to be classifier-consistent. Through experiments, we demonstrate the
superiority of our proposed method over state-of-the-art methods. | [
"cs.LG",
"stat.ML"
] |
Semantic segmentation for aerial platforms has been one of the fundamental scene understanding tasks for earth observation. Most semantic segmentation research has focused on scenes captured in nadir view, in which objects have relatively small scale variation compared with scenes captured in oblique view. The huge scale variation of objects in oblique images limits the performance of deep neural networks (DNNs) that process images in a single-scale fashion. To tackle the scale variation issue, in this paper we propose novel bidirectional multi-scale attention networks, which fuse features from multiple scales bidirectionally for more adaptive and effective feature extraction. Experiments conducted on the UAVid2020 dataset show the effectiveness of our method. Our model achieved the
state-of-the-art (SOTA) result with a mean intersection over union (mIoU) score
of 70.80%. | [
"cs.CV"
] |
Multi-scene reinforcement learning involves training the RL agent across
multiple scenes / levels from the same task, and has become essential for many
generalization applications. However, the inclusion of multiple scenes leads to
an increase in sample variance for policy gradient computations, often
resulting in suboptimal performance with the direct application of traditional
methods (e.g. PPO, A3C). One strategy for variance reduction is to consider
each scene as a distinct Markov decision process (MDP) and learn a joint value
function dependent on both state (s) and MDP (M). However, this is non-trivial
as the agent is usually unaware of the underlying level at train / test times
in multi-scene RL. Recently, Singh et al. [1] tried to address this by
proposing a dynamic value estimation approach that models the true joint value
function distribution as a Gaussian mixture model (GMM). In this paper, we
argue that the error between the true scene-specific value function and the
predicted dynamic estimate can be further reduced by progressively enforcing
sparse cluster assignments once the agent has explored most of the state space.
The resulting agents not only show significant improvements in the final reward
score across a range of OpenAI ProcGen environments, but also exhibit increased
navigation efficiency while completing a game level. | [
"cs.LG",
"stat.ML"
] |
This paper introduces a novel perspective about error in machine learning and
proposes inverse feature learning (IFL) as a representation learning approach
that learns a set of high-level features based on the representation of error
for classification or clustering purposes. The proposed perspective about error
representation is fundamentally different from current learning methods, which interpret the error either as a function of the differences between the true and predicted labels (in classification) or through clustering objective functions such as compactness (in clustering). The inverse feature learning method operates based on a deep clustering
approach to obtain a qualitative form of the representation of error as
features. The performance of the proposed IFL method is evaluated by applying
the learned features along with the original features, or just using the
learned features in different classification and clustering techniques for
several data sets. The experimental results show that the proposed method leads
to promising results in classification and especially in clustering. In
classification, the proposed features along with the primary features improve
the results of most of the classification methods on several popular data sets.
In clustering, the performance of different clustering methods is considerably
improved on different data sets. Interestingly, a few features of the error representation capture highly informative aspects of the primary features. We hope this paper helps bring error representation learning to other feature learning domains. | [
"cs.LG",
"cs.CV",
"cs.NE",
"stat.ML"
] |
Designing a single neural network architecture that performs competitively
across a range of molecule property prediction tasks remains largely an open
challenge, and its solution may unlock a widespread use of deep learning in the
drug discovery industry. To move towards this goal, we propose Molecule
Attention Transformer (MAT). Our key innovation is to augment the attention
mechanism in Transformer using inter-atomic distances and the molecular graph
structure. Experiments show that MAT performs competitively on a diverse set of
molecular prediction tasks. Most importantly, with a simple self-supervised
pretraining, MAT requires tuning of only a few hyperparameter values to achieve
state-of-the-art performance on downstream tasks. Finally, we show that
attention weights learned by MAT are interpretable from the chemical point of
view. | [
"cs.LG",
"physics.comp-ph",
"stat.ML"
] |
Extracting effective deep features to represent content and style information
is the key to universal style transfer. Most existing algorithms use VGG19 as
the feature extractor, which incurs a high computational cost and impedes
real-time style transfer on high-resolution images. In this work, we propose a
lightweight alternative architecture - ArtNet, which is based on GoogLeNet, and
later pruned by a novel channel pruning method named Zero-channel Pruning
specially designed for style transfer approaches. Besides, we propose a
theoretically sound sandwich swap transform (S2) module to transfer deep
features, which can create a pleasing holistic appearance and good local
textures with an improved content preservation ability. By using ArtNet and S2,
our method is 2.3 to 107.4 times faster than state-of-the-art approaches. The
comprehensive experiments demonstrate that ArtNet can achieve universal, real-time, and high-quality style transfer on high-resolution images simultaneously (68.03 FPS on $512 \times 512$ images). | [
"cs.CV",
"eess.IV"
] |
Cross-domain crowd counting (CDCC) is a hot topic due to its importance in
public safety. The purpose of CDCC is to alleviate the domain shift between the
source and target domain. Recently, typical methods attempt to extract
domain-invariant features via image translation and adversarial learning. When
it comes to specific tasks, we find that the domain shifts are reflected in differences in model parameters. To describe the domain gap directly at the parameter level, we propose a Neuron Linear Transformation (NLT) method, exploiting domain factors and bias weights to learn the domain shift. Specifically, for a specific neuron of a source model, NLT exploits a few labeled target samples to learn domain shift parameters. Finally, the target neuron is
generated via a linear transformation. Extensive experiments and analysis on
six real-world datasets validate that NLT achieves top performance compared
with other domain adaptation methods. An ablation study also shows that the NLT
is robust and more effective than supervised training and fine-tuning. Code is
available at: \url{https://github.com/taohan10200/NLT}. | [
"cs.CV"
] |
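A minimal sketch of the neuron linear transformation idea: target-domain weights are generated from frozen source weights via learned scale (domain factor) and bias parameters, which are the only quantities fitted on the few labeled target samples. The element-wise granularity, layer choice, and initialization here are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class NLTLinear(nn.Module):
    """Wraps a frozen source-domain linear layer; target-domain weights are
    produced by an element-wise linear transform of the source weights."""
    def __init__(self, source_layer: nn.Linear):
        super().__init__()
        self.register_buffer('w_src', source_layer.weight.detach())
        self.register_buffer('b_src', source_layer.bias.detach())
        self.factor = nn.Parameter(torch.ones_like(self.w_src))   # domain factor
        self.bias_w = nn.Parameter(torch.zeros_like(self.w_src))  # bias weights

    def forward(self, x):
        w_tgt = self.factor * self.w_src + self.bias_w   # linear transformation
        return nn.functional.linear(x, w_tgt, self.b_src)

# Only `factor` and `bias_w` are optimized on the few labeled target samples.
src = nn.Linear(128, 64)
layer = NLTLinear(src)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
out = layer(torch.randn(8, 128))
```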
In this work, we seek new insights into the underlying challenges of the
Scene Graph Generation (SGG) task. Quantitative and qualitative analysis of the
Visual Genome dataset implies -- 1) Ambiguity: even if inter-object relationships contain the same object (or predicate), they may not be visually or semantically similar; 2) Asymmetry: although relationships inherently embody a direction, this was not well addressed in previous studies; and
3) Higher-order contexts: leveraging the identities of certain graph elements
can help to generate accurate scene graphs. Motivated by the analysis, we
design a novel SGG framework, Local-to-Global Interaction Networks (LOGIN).
Locally, interactions extract the essence between three instances - subject,
object, and background - while baking direction awareness into the network by
constraining the input order. Globally, interactions encode the contexts between all graph components -- nodes and edges. We also introduce an Attract & Repel loss which finely adjusts predicate embeddings. Our framework enables predicting the scene graph in a local-to-global manner by design, leveraging the possible complementarity. To quantify how much LOGIN is aware of
relational direction, we propose a new diagnostic task called Bidirectional
Relationship Classification (BRC). We see that LOGIN can distinguish relational direction more successfully than existing methods (in the BRC task) while showing state-of-the-art results on the Visual Genome benchmark (in the SGG task). | [
"cs.CV"
] |
This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.NE",
"stat.ML"
] |
Recent works in geometric deep learning have introduced neural networks that
allow performing inference tasks on three-dimensional geometric data by
defining convolution, and sometimes pooling, operations on triangle meshes.
These methods, however, either consider the input mesh as a graph, and do not
exploit specific geometric properties of meshes for feature aggregation and
downsampling, or are specialized for meshes, but rely on a rigid definition of
convolution that does not properly capture the local topology of the mesh. We
propose a method that combines the advantages of both types of approaches,
while addressing their limitations: we extend a primal-dual framework drawn
from the graph-neural-network literature to triangle meshes, and define
convolutions on two types of graphs constructed from an input mesh. Our method
takes features for both edges and faces of a 3D mesh as input and dynamically
aggregates them using an attention mechanism. At the same time, we introduce a
pooling operation with a precise geometric interpretation, that allows handling
variations in the mesh connectivity by clustering mesh faces in a task-driven
fashion. We provide theoretical insights into our approach using tools from the mesh-simplification literature. In addition, we experimentally validate our
method in the tasks of shape classification and shape segmentation, where we
obtain comparable or superior performance to the state of the art. | [
"cs.CV",
"cs.CG",
"cs.LG"
] |
Augmented reality (AR) has gained increasing attention from both research
and industry communities. By overlaying digital information and content onto
the physical world, AR enables users to experience the world in a more
informative and efficient manner. As a major building block for AR systems,
localization aims at determining the device's pose from a pre-built "map"
consisting of visual and depth information in a known environment. While the
localization problem has been widely studied in the literature, the "map" for
AR systems is rarely discussed. In this paper, we introduce the AR Map for a
specific scene to be composed of 1) color images with 6-DOF poses; 2) dense
depth maps for each image and 3) a complete point cloud map. We then propose an
efficient end-to-end solution to generating and evaluating AR Maps. Firstly,
for efficient data capture, a backpack scanning device is presented with a
unified calibration pipeline. Secondly, we propose an AR mapping pipeline which
takes the input from the scanning device and produces accurate AR Maps.
Finally, we present an approach to evaluating the accuracy of AR Maps with the
help of the highly accurate reconstruction result from a high-end laser
scanner. To the best of our knowledge, this is the first end-to-end solution for efficient and accurate mapping for AR applications. | [
"cs.CV",
"cs.RO"
] |
Noisy labels are very common in deep supervised learning. Although many
studies aim to improve the robustness of deep training with noisy labels, few works focus on theoretically explaining the training behaviors of learning with noisily labeled data, which is fundamental to understanding its generalization. In this draft, we study two of its phenomena, clean-data-first learning and phase transition, by explaining them from a theoretical viewpoint. Specifically, we first show that during the first epoch of training, the examples with clean labels are learned first. We then show that after the clean-data learning stage, continuing to train the model can achieve further improvement
in testing error when the rate of corrupted class labels is smaller than a
certain threshold; otherwise, extensively training could lead to an increasing
testing error. | [
"cs.LG"
] |
In a Massive Open Online Course (MOOC), predictive models of student behavior
can support multiple aspects of learning, including instructor feedback and
timely intervention. Ongoing courses, when the student outcomes are yet
unknown, must rely on models trained from the historical data of previously
offered courses. It is possible to transfer models, but they often have poor
prediction performance. One reason is features that inadequately represent
predictive attributes common to both courses. We present an automated
transductive transfer learning approach that addresses this issue. It relies on
problem-agnostic, temporal organization of the MOOC clickstream data, where,
for each student, for multiple courses, a set of specific MOOC event types is
expressed for each time unit. It consists of two alternative transfer methods
based on representation learning with auto-encoders: a passive approach using
transductive principal component analysis and an active approach that uses a
correlation alignment loss term. With these methods, we investigate the
transferability of dropout prediction across similar and dissimilar MOOCs and
compare with known methods. Results show improved model transferability and
suggest that the methods are capable of automatically learning a feature
representation that expresses common predictive characteristics of MOOCs. | [
"cs.LG",
"cs.CY",
"stat.ML"
] |
Unsupervised evaluation of segmentation quality is a crucial step in image
segmentation applications. Previous unsupervised evaluation methods usually
lacked the adaptability to multi-scale segmentation. A scale-constrained
evaluation method that evaluates segmentation quality according to the
specified target scale is proposed in this paper. First, regional saliency and
merging cost are employed to describe intra-region homogeneity and inter-region
heterogeneity, respectively. Subsequently, both of them are standardized into
equivalent spectral distances of a predefined region. Finally, by analyzing the
relationship between image characteristics and segmentation quality, we
establish the evaluation model. Experimental results show that the proposed
method outperforms four commonly used unsupervised methods in multi-scale
evaluation tasks. | [
"cs.CV"
] |
Transformer has been widely used for self-supervised pre-training in Natural
Language Processing (NLP) and achieved great success. However, it has not been
fully explored in visual self-supervised learning. Meanwhile, previous methods
only consider high-level features and learn representations from a global perspective, which may fail to transfer to downstream dense prediction
tasks focusing on local features. In this paper, we present a novel Masked
Self-supervised Transformer approach named MST, which can explicitly capture
the local context of an image while preserving the global semantic information.
Specifically, inspired by the Masked Language Modeling (MLM) in NLP, we propose
a masked token strategy based on the multi-head self-attention map, which
dynamically masks some tokens of local patches without damaging the crucial
structure for self-supervised learning. More importantly, the masked tokens
together with the remaining tokens are further recovered by a global image
decoder, which preserves the spatial information of the image and is more
friendly to the downstream dense prediction tasks. The experiments on multiple
datasets demonstrate the effectiveness and generality of the proposed method.
For instance, MST achieves Top-1 accuracy of 76.9% with DeiT-S only using
300-epoch pre-training by linear evaluation, which outperforms supervised methods trained for the same number of epochs by 0.4% and its comparable variant DINO by 1.0%.
For dense prediction tasks, MST also achieves 42.7% mAP on MS COCO object
detection and 74.04% mIoU on Cityscapes segmentation only with 100-epoch
pre-training. | [
"cs.CV"
] |
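A rough sketch of an attention-guided masking strategy in the spirit of the approach above: patch tokens with the lowest attention scores are replaced by a mask token, so tokens crucial to the image structure stay visible. How MST derives the scores and chooses the masking ratio is not reproduced here; the values below are placeholders.

```python
import torch

def attention_guided_mask(tokens, attn_scores, mask_ratio=0.3, mask_token=None):
    """Mask the patch tokens with the LOWEST attention scores.
    tokens: (B, N, D) patch tokens; attn_scores: (B, N) per-token attention,
    e.g. averaged over heads from the class token."""
    B, N, D = tokens.shape
    k = int(mask_ratio * N)
    mask_token = mask_token if mask_token is not None else torch.zeros(D)
    idx = attn_scores.argsort(dim=1)[:, :k]            # least-attended tokens
    masked = tokens.clone()
    masked[torch.arange(B).unsqueeze(1), idx] = mask_token
    return masked, idx

# Toy usage with random tokens and scores.
tok, scores = torch.randn(2, 196, 384), torch.rand(2, 196)
masked_tok, masked_idx = attention_guided_mask(tok, scores)
```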
With fires becoming increasingly frequent and severe across the globe in
recent years, understanding climate change's role in fire behavior is critical
for quantifying current and future fire risk. However, global climate models
typically simulate fire behavior at spatial scales too coarse for local risk
assessments. Therefore, we propose a novel approach towards super-resolution
(SR) enhancement of fire risk exposure maps that incorporates not only 2000 to
2020 monthly satellite observations of active fires but also local information
on land cover and temperature. Inspired by SR architectures, we propose an
efficient deep learning model trained for SR on fire risk exposure maps. We
evaluate this model on resolution enhancement and find it outperforms standard
image interpolation techniques at both 4x and 8x enhancement while having
comparable performance at 2x enhancement. We then demonstrate the
generalizability of this SR model over northern California and New South Wales,
Australia. We conclude with a discussion and application of our proposed model
to climate model simulations of fire risk in 2040 and 2100, illustrating the
potential for SR enhancement of fire risk maps from the latest state-of-the-art
climate models. | [
"cs.LG",
"eess.IV"
] |
Blind image super-resolution (SR), aiming to super-resolve low-resolution
images with unknown degradation, has attracted increasing attention due to its
significance in promoting real-world applications. Many novel and effective
solutions have been proposed recently, especially with the powerful deep
learning techniques. Despite years of effort, it remains a challenging research problem. This paper serves as a systematic review of
recent progress in blind image SR, and proposes a taxonomy to categorize
existing methods into three different classes according to their ways of
degradation modelling and the data used for solving the SR model. This taxonomy
helps summarize and distinguish among existing methods. We hope to provide
insights into current research states, as well as to reveal novel research
directions worth exploring. In addition, we make a summary on commonly used
datasets and previous competitions related to blind image SR. Last but not
least, a comparison among different methods is provided with detailed analysis
on their merits and demerits using both synthetic and real testing images. | [
"cs.CV"
] |
While deep learning has received a surge of interest in a variety of fields
in recent years, major deep learning models barely use complex numbers.
However, speech, signal and audio data are naturally complex-valued after
the Fourier transform, and studies have shown that complex-valued networks offer a potentially richer representation. In this paper, we propose a Complex Transformer, which incorporates the transformer model as a backbone for sequence modeling; we also develop attention and encoder-decoder networks that operate on complex-valued input. The
model achieves state-of-the-art performance on the MusicNet dataset and an
In-phase Quadrature (IQ) signal dataset. | [
"cs.LG",
"cs.SD",
"eess.AS",
"stat.ML"
] |
Semi-supervised learning has recently been attracting attention as an
alternative to fully supervised models that require large pools of labeled
data. Moreover, optimizing a model for multiple tasks can provide better
generalizability than single-task learning. Leveraging self-supervision and
adversarial training, we propose a novel general purpose semi-supervised,
multiple-task model---namely, self-supervised, semi-supervised, multitask
learning (S$^4$MTL)---for accomplishing two important tasks in medical imaging,
segmentation and diagnostic classification. Experimental results on chest and
spine X-ray datasets suggest that our S$^4$MTL model significantly outperforms
semi-supervised single task, semi/fully-supervised multitask, and
fully-supervised single task models, even with a 50\% reduction of class and
segmentation labels. We hypothesize that our proposed model can be effective in
tackling limited annotation problems for joint training, not only in medical
imaging domains, but also for general-purpose vision tasks. | [
"cs.CV"
] |
For infinitesimal learning rates, stochastic gradient descent (SGD) follows
the path of gradient flow on the full-batch loss function. However, moderately
large learning rates can achieve higher test accuracies, and this
generalization benefit is not explained by convergence bounds, since the
learning rate which maximizes test accuracy is often larger than the learning
rate which minimizes training loss. To interpret this phenomenon we prove that
for SGD with random shuffling, the mean SGD iterate also stays close to the
path of gradient flow if the learning rate is small and finite, but on a
modified loss. This modified loss is composed of the original loss function and
an implicit regularizer, which penalizes the norms of the minibatch gradients.
Under mild assumptions, when the batch size is small the scale of the implicit
regularization term is proportional to the ratio of the learning rate to the
batch size. We verify empirically that explicitly including the implicit
regularizer in the loss can enhance the test accuracy when the learning rate is
small. | [
"cs.LG",
"stat.ML"
] |
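A minimal sketch of making the implicit regularizer explicit, as the last sentence describes: each step optimizes the minibatch loss plus a penalty on the squared norm of the minibatch gradient, scaled by the learning-rate-to-batch-size ratio. The exact constant in front of the penalty is an illustrative assumption.

```python
import torch

def regularized_step(model, loss_fn, x, y, opt, lr, batch_size, strength=0.25):
    """One SGD step on loss + (strength * lr / batch_size) * ||grad(loss)||^2,
    i.e. the minibatch-gradient-norm penalty made explicit."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    penalty = sum(g.pow(2).sum() for g in grads)
    total = loss + (strength * lr / batch_size) * penalty
    opt.zero_grad()
    total.backward()
    opt.step()
    return loss.item()

# Toy usage.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
regularized_step(model, torch.nn.functional.mse_loss, x, y, opt,
                 lr=0.1, batch_size=32)
```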
Indoor positioning aims at navigation inside areas with no GPS-data
availability and could be employed in many applications such as augmented
reality and autonomous driving, especially inside closed areas and tunnels. In this
paper, a deep neural network-based architecture has been proposed to address
this problem. In this regard, a tandem set of convolutional neural networks, as
well as a Pix2Pix GAN network, have been leveraged to serve as the scene
classifier, scene RGB image to point cloud converter, and position regressor,
respectively. The proposed architecture outperforms the previous works,
including our recent work, in the sense that it makes the data generation task easier and more robust against small scene variations, whilst the accuracy of the positioning is remarkably good for both the Cartesian position and the quaternion
information of the camera. | [
"cs.CV"
] |
Online searches have been used to study different health-related behaviours,
including monitoring disease outbreaks. An obvious caveat is that several
reasons can motivate individuals to seek online information and models that are
blind to people's motivations are of limited use and can even mislead. This is
particularly true during extraordinary public health crises, such as the
ongoing pandemic, when fear, curiosity and many other reasons can lead
individuals to search for health-related information, masking the
disease-driven searches. However, health crises can also offer an opportunity to disentangle different drivers and learn about human behavior. Here,
we focus on the two pandemics of the 21st century (2009-H1N1 flu and Covid-19)
and propose a methodology to discriminate between search patterns linked to
general information seeking (media driven) and search patterns possibly more
associated with actual infection (disease driven). We show that by learning
from such pandemic periods, with high anxiety and media hype, it is possible to
select online searches and improve model performance both in pandemic and
seasonal settings. Moreover, and despite the common claim that more data is
always better, our results indicate that lower volume of the right data can be
better than including large volumes of apparently similar data, especially in
the long run. Our work provides a general framework that can be applied beyond
specific events and diseases, and argues that algorithms can be improved simply
by using less (better) data. This has important consequences, for example, for solving the accuracy-explainability trade-off in machine learning. | [
"cs.LG"
] |
We attempt to set a mathematical foundation of immunology and amino acid
chains. To measure the similarities of these chains, a kernel on strings is
defined using only the sequence of the chains and a good amino acid
substitution matrix (e.g. BLOSUM62). The kernel is used in learning machines to
predict binding affinities of peptides to human leukocyte antigens DR (HLA-DR)
molecules. On both fixed allele (Nielsen and Lund 2009) and pan-allele (Nielsen
et.al. 2010) benchmark databases, our algorithm achieves the state-of-the-art
performance. The kernel is also used to define a distance on an HLA-DR allele
set based on which a clustering analysis precisely recovers the serotype
classifications assigned by WHO (Nielsen and Lund 2009, and Marsh et.al. 2010).
These results suggest that our kernel relates well the chain structure of both
peptides and HLA-DR molecules to their biological functions, and that it offers
a simple, powerful and promising methodology to immunology and amino acid chain
studies. | [
"stat.ML",
"cs.LG",
"q-bio.GN"
] |
Graph Neural Networks (GNNs) have proved to be an effective representation
learning framework for graph-structured data, and have achieved
state-of-the-art performance on many practical predictive tasks, such as node
classification, link prediction and graph classification. Among the variants of
GNNs, Graph Attention Networks (GATs) learn to assign dense attention
coefficients over all neighbors of a node for feature aggregation, and improve
the performance of many graph learning tasks. However, real-world graphs are
often very large and noisy, and GATs are prone to overfitting if not
regularized properly. Even worse, the local aggregation mechanism of GATs may
fail on disassortative graphs, where nodes within a local neighborhood provide
more noise than useful information for feature aggregation. In this paper, we
propose Sparse Graph Attention Networks (SGATs) that learn sparse attention
coefficients under an $L_0$-norm regularization, and the learned sparse
attentions are then used for all GNN layers, resulting in an edge-sparsified
graph. By doing so, we can identify noisy/task-irrelevant edges, and thus
perform feature aggregation on the most informative neighbors. Extensive
experiments on synthetic and real-world graph learning benchmarks demonstrate
the superior performance of SGATs. In particular, SGATs can remove about
50\%-80\% edges from large assortative graphs, while retaining similar
classification accuracies. On disassortative graphs, SGATs prune the majority of
noisy edges and outperform GATs in classification accuracies by significant
margins. Furthermore, the removed edges can be interpreted intuitively and
quantitatively. To the best of our knowledge, this is the first graph learning algorithm that shows that graphs contain significant redundancies and that edge-sparsified graphs can achieve similar or sometimes higher predictive performance than
original graphs. | [
"cs.LG",
"stat.ML"
] |
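A minimal sketch of sparse per-edge gating in the spirit of SGATs; for simplicity it uses a sigmoid relaxation of the gates and a plain gated sum, rather than the hard-concrete distribution and attention normalization typically used for $L_0$-regularized edge selection, so it only illustrates the idea.

```python
import torch
import torch.nn as nn

class GatedEdgeAggregation(nn.Module):
    """Each edge gets one learnable gate shared across layers; a penalty on
    the expected number of open gates encourages an edge-sparsified graph."""
    def __init__(self, num_edges, in_dim, out_dim):
        super().__init__()
        self.gate_logits = nn.Parameter(torch.zeros(num_edges))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                      # (E,), (E,)
        gates = torch.sigmoid(self.gate_logits)    # relaxed 0/1 edge gates
        msgs = gates.unsqueeze(-1) * self.lin(x)[src]
        out = torch.zeros(x.size(0), msgs.size(-1))
        out.index_add_(0, dst, msgs)               # gated sum over neighbors
        return out

    def l0_penalty(self):
        return torch.sigmoid(self.gate_logits).sum()   # expected open edges

# Toy graph: 4 nodes, 5 directed edges.
edge_index = torch.tensor([[0, 1, 2, 3, 0], [1, 2, 3, 0, 2]])
layer = GatedEdgeAggregation(num_edges=5, in_dim=8, out_dim=16)
h = layer(torch.randn(4, 8), edge_index)
loss = h.pow(2).mean() + 1e-3 * layer.l0_penalty()
```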
Deep Neural Networks have achieved huge success at a wide spectrum of
applications from language modeling, computer vision to speech recognition.
However, nowadays, good performance alone is not sufficient to satisfy the
needs of practical deployment where interpretability is demanded for cases
involving ethics and mission-critical applications. The complex models of Deep Neural Networks make it hard to understand and reason about their predictions, which hinders their further progress. To tackle this problem, we apply the Knowledge
Distillation technique to distill Deep Neural Networks into decision trees in
order to attain good performance and interpretability simultaneously. We
formulate the problem at hand as a multi-output regression problem and the
experiments demonstrate that the student model achieves significantly better
accuracy performance (about 1\% to 5\%) than vanilla decision trees at the same
level of tree depth. The experiments are implemented on the TensorFlow platform
to make it scalable to big datasets. To the best of our knowledge, we are the
first to distill Deep Neural Networks into vanilla decision trees on
multi-class datasets. | [
"cs.LG",
"stat.ML"
] |
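A minimal sketch of the distillation recipe described above: a decision tree is fit as a multi-output regressor on the teacher's predicted class probabilities and compared with a vanilla tree of the same depth. The teacher, dataset, and tree depth below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Teacher: a small neural network trained on hard labels.
teacher = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300,
                        random_state=0).fit(X_tr, y_tr)

# Student 1: vanilla tree trained on hard labels.
vanilla = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)

# Student 2: distilled tree, a multi-output regressor on the teacher's
# predicted class probabilities (the "soft" targets).
soft = teacher.predict_proba(X_tr)
distilled = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X_tr, soft)

acc_vanilla = (vanilla.predict(X_te) == y_te).mean()
acc_distilled = (distilled.predict(X_te).argmax(axis=1) == y_te).mean()
print(acc_vanilla, acc_distilled)
```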
We develop an automated variational inference method for Bayesian structured
prediction problems with Gaussian process (GP) priors and linear-chain
likelihoods. Our approach does not need to know the details of the structured
likelihood model and can scale up to a large number of observations.
Furthermore, we show that the required expected likelihood term and its
gradients in the variational objective (ELBO) can be estimated efficiently by
using expectations over very low-dimensional Gaussian distributions.
Optimization of the ELBO is fully parallelizable over sequences and amenable to
stochastic optimization, which we use along with control variate techniques and
state-of-the-art incremental optimization to make our framework useful in
practice. Results on a set of natural language processing tasks show that our
method can be as good as (and sometimes better than) hard-coded approaches
including SVM-struct and CRFs, and overcomes the scalability limitations of
previous inference algorithms based on sampling. Overall, this is a fundamental
step to developing automated inference methods for Bayesian structured
prediction. | [
"stat.ML"
] |
Recent advances in person re-identification have demonstrated enhanced
discriminability, especially with supervised learning or transfer learning.
However, since the data requirements---including the degree of data
curation---are becoming increasingly complex and laborious, there is a
critical need for unsupervised methods that are robust to large intra-class
variations, such as changes in perspective, illumination, articulated motion,
resolution, etc. Therefore, we propose an unsupervised framework for person
re-identification which is trained in an end-to-end manner without any
pre-training. Our proposed framework leverages a new attention mechanism that
combines group convolutions to (1) enhance spatial attention at multiple scales
and (2) reduce the number of trainable parameters by 59.6%. Additionally, our
framework jointly optimizes the network with agglomerative clustering and
instance learning to tackle hard samples. We perform extensive analysis using
the Market1501 and DukeMTMC-reID datasets to demonstrate that our method
consistently outperforms the state-of-the-art methods (with and without
pre-trained weights). | [
"cs.CV"
] |
Due to the hierarchical structure of many machine learning problems, bilevel
programming has recently become more and more important; however, the complicated correlation between the inner and outer problems makes it extremely challenging to solve. Although several intuitive algorithms based on automatic differentiation have been proposed and have achieved success in some
applications, not much attention has been paid to finding the optimal
formulation of the bilevel model. Whether there exists a better formulation is
still an open problem. In this paper, we propose an improved bilevel model
which converges faster and better compared to the current formulation. We
provide theoretical guarantee and evaluation results over two tasks: Data
Hyper-Cleaning and Hyper Representation Learning. The empirical results show
that our model outperforms the current bilevel model by a large margin.
\emph{This is a concurrent work with \citet{liu2020generic} and we submitted to
ICML 2020. Now we put it on the arxiv for record.} | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Review spam is prevalent in e-commerce and is used to maliciously manipulate product rankings and customers' decisions. While spam generated with simple spamming strategies can be detected effectively, hardened spammers can evade regular
detectors via more advanced spamming strategies. Previous work gave more
attention to evasion against text and graph-based detectors, but evasions
against behavior-based detectors are largely ignored, leading to
vulnerabilities in spam detection systems. Since real evasion data are scarce,
we first propose EMERAL (Evasion via Maximum Entropy and Rating sAmpLing) to
generate evasive spams to certain existing detectors. EMERAL can simulate
spammers with different goals and levels of knowledge about the detectors,
targeting different stages of the life cycle of target products. We show
that in the evasion-defense dynamic, only a few evasion types are meaningful to
the spammers, and any spammer will not be able to evade too many detection
signals at the same time. We reveal that some evasions are quite insidious and
can fail all detection signals. We then propose DETER (Defense via Evasion
generaTion using EmeRal), based on model re-training on diverse evasive samples
generated by EMERAL. Experiments confirm that DETER is more accurate in
detecting both suspicious time window and individual spamming reviews. In terms
of security, DETER is versatile enough to be vaccinated against diverse and
unexpected evasions, is agnostic about evasion strategy and can be released
without privacy concerns. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
Existing neural networks proposed for low-level image processing tasks are
usually implemented by stacking convolution layers with limited kernel size.
Every convolution layer merely involves in context information from a small
local neighborhood. More contextual features can be explored as more
convolution layers are adopted. However it is difficult and costly to take full
advantage of long-range dependencies. We propose a novel non-local module,
Pyramid Non-local Block, to build up connection between every pixel and all
remain pixels. The proposed module is capable of efficiently exploiting
pairwise dependencies between different scales of low-level structures. The
target is fulfilled through first learning a query feature map with full
resolution and a pyramid of reference feature maps with downscaled resolutions.
Then correlations with multi-scale reference features are exploited for
enhancing pixel-level feature representation. The calculation procedure is
economical considering memory consumption and computational cost. Based on the
proposed module, we devise a Pyramid Non-local Enhanced Network for
edge-preserving image smoothing which achieves state-of-the-art performance in
imitating three classical image smoothing algorithms. Additionally, the pyramid
non-local block can be directly incorporated into convolution neural networks
for other image restoration tasks. We integrate it into two existing methods
for image denoising and single image super-resolution, achieving consistently
improved performance. | [
"cs.CV"
] |
We present ShapeVis, a scalable visualization technique for point cloud data
inspired from topological data analysis. Our method captures the underlying
geometric and topological structure of the data in a compressed graphical
representation. Much success has been reported by the data visualization
technique Mapper, which discretely approximates the Reeb graph of a filter
function on the data. However, when using standard dimensionality reduction
algorithms as the filter function, Mapper suffers from considerable
computational cost. This makes it difficult to scale to high-dimensional data.
Our proposed technique relies on finding a subset of points called landmarks
along the data manifold to construct a weighted witness-graph over it. This
graph captures the structural characteristics of the point cloud, and its
weights are determined using a Finite Markov Chain. We further compress this
graph by applying induced maps from standard community detection algorithms.
Using techniques borrowed from manifold tearing, we prune and reinstate edges
in the induced graph based on their modularity to summarize the shape of data.
We empirically demonstrate how our technique captures the structural
characteristics of real and synthetic data sets. Further, we compare our
approach with Mapper using various filter functions like t-SNE, UMAP, LargeVis
and show that our algorithm scales to millions of data points while preserving
the quality of data visualization. | [
"cs.LG",
"cs.HC",
"stat.ML"
] |
We study the robustness of reinforcement learning (RL) with adversarially
perturbed state observations, which aligns with the setting of many adversarial
attacks to deep reinforcement learning (DRL) and is also important for rolling
out real-world RL agents under unpredictable sensing noise. With a fixed agent
policy, we demonstrate that an optimal adversary to perturb state observations
can be found, which is guaranteed to obtain the worst case agent reward. For
DRL settings, this leads to a novel empirical adversarial attack to RL agents
via a learned adversary that is much stronger than previous ones. To enhance
the robustness of an agent, we propose a framework of alternating training with
learned adversaries (ATLA), which trains an adversary online together with the
agent using policy gradient following the optimal adversarial attack framework.
Additionally, inspired by the analysis of state-adversarial Markov decision
process (SA-MDP), we show that past states and actions (history) can be useful
for learning a robust agent, and we empirically find an LSTM-based policy can be
more robust under adversaries. Empirical evaluations on a few continuous
control environments show that ATLA achieves state-of-the-art performance under
strong adversaries. Our code is available at
https://github.com/huanzhang12/ATLA_robust_RL. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Recently, fully-connected and convolutional neural networks have been trained
to achieve state-of-the-art performance on a wide variety of tasks such as
speech recognition, image classification, natural language processing, and
bioinformatics. For classification tasks, most of these "deep learning" models
employ the softmax activation function for prediction and minimize
cross-entropy loss. In this paper, we demonstrate a small but consistent
advantage of replacing the softmax layer with a linear support vector machine.
Learning minimizes a margin-based loss instead of the cross-entropy loss. While
there have been various combinations of neural nets and SVMs in prior art, our
results using L2-SVMs show that simply replacing softmax with linear SVMs gives significant gains on the popular deep learning datasets MNIST, CIFAR-10, and
the ICML 2013 Representation Learning Workshop's face expression recognition
challenge. | [
"cs.LG",
"stat.ML"
] |
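A minimal sketch of the swap described above: keep the network's final linear layer as the class-score producer and train it with the squared hinge (L2-SVM) loss in a one-vs-rest encoding instead of softmax cross-entropy. The margin and lack of per-class weighting below are standard defaults, not tuned values.

```python
import torch
import torch.nn.functional as F

def l2_svm_loss(scores, targets, num_classes):
    """Multiclass squared hinge loss on the raw linear outputs (no softmax):
    one-vs-rest labels in {-1, +1}, loss = mean(max(0, 1 - y * score)^2)."""
    y = F.one_hot(targets, num_classes).float() * 2 - 1      # {-1, +1}
    return torch.clamp(1 - y * scores, min=0).pow(2).mean()

# Toy usage: the final linear layer produces class scores directly.
model = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                            torch.nn.Linear(256, 10))
x, t = torch.randn(64, 784), torch.randint(0, 10, (64,))
loss = l2_svm_loss(model(x), t, num_classes=10)
loss.backward()
# Prediction is unchanged: argmax over the class scores.
pred = model(x).argmax(dim=1)
```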
Evaluating image generation models such as generative adversarial networks
(GANs) is a challenging problem. A common approach is to compare the
distributions of the set of ground truth images and the set of generated test
images. The Frech\'et Inception distance is one of the most widely used metrics
for evaluation of GANs, which assumes that the features from a trained
Inception model for a set of images follow a normal distribution. In this
paper, we argue that this is an over-simplified assumption, which may lead to
unreliable evaluation results, and more accurate density estimation can be
achieved using a truncated generalized normal distribution. Based on this, we
propose a novel metric for accurate evaluation of GANs, named TREND (TRuncated
gEneralized Normal Density estimation of inception embeddings). We demonstrate
that our approach significantly reduces errors of density estimation, which
consequently eliminates the risk of faulty evaluation results. Furthermore, we
show that the proposed metric significantly improves robustness of evaluation
results against variation of the number of image samples. | [
"cs.CV",
"cs.LG"
] |
We consider the learning and prediction of nonlinear time series generated by
a latent symplectic map. A special case is (not necessarily separable)
Hamiltonian systems, whose solution flows give such symplectic maps. For this
special case, both generic approaches based on learning the vector field of the
latent ODE and specialized approaches based on learning the Hamiltonian that
generates the vector field exist. Our method, however, is different as it does
not rely on the vector field nor assume its existence; instead, it directly
learns the symplectic evolution map in discrete time. Moreover, we do so by
representing the symplectic map via a generating function, which we approximate
by a neural network (hence the name GFNN). This way, our approximation of the
evolution map is always \emph{exactly} symplectic. This additional geometric
structure allows the local prediction error at each step to accumulate in a
controlled fashion, and we will prove, under reasonable assumptions, that the
global prediction error grows at most \emph{linearly} with long prediction
time, which significantly improves an otherwise exponential growth. In
addition, as a map-based and thus purely data-driven method, GFNN avoids two
additional sources of inaccuracies common in vector-field based approaches,
namely the error in approximating the vector field by finite difference of the
data, and the error in numerical integration of the vector field for making
predictions. Numerical experiments further demonstrate our claims. | [
"cs.LG",
"cs.NA",
"math.DS",
"math.NA",
"physics.comp-ph"
] |
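One standard way to obtain an exactly symplectic map from a learned scalar function, consistent with the abstract above; the specific parameterization used by GFNN may differ. Assuming a type-2 generating function $S_\theta(q, P)$ represented by a neural network:

```latex
% Sketch (standard Hamiltonian-mechanics construction; the paper's exact
% parameterization may differ): a type-2 generating function S_theta(q, P)
% defines an implicit map (q, p) -> (Q, P) via
\begin{aligned}
  p &= \nabla_q S_\theta(q, P), \\
  Q &= \nabla_P S_\theta(q, P),
\end{aligned}
% which is exactly symplectic for any smooth S_theta. In practice the first
% equation is solved for P (e.g. by fixed-point iteration) and Q then follows
% explicitly, giving a one-step evolution map learned directly from data.
```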
Social networks give free access to their services in exchange for the right
to exploit their users' data. Data sharing is done in an initial context which
is chosen by the users. However, data are used by social networks and third
parties in different contexts which are often not transparent. We propose a new
approach which unveils potential effects of data sharing in impactful real-life
situations. Focus is put on visual content because of its strong influence in
shaping online user profiles. The approach relies on three components: (1) a
set of concepts with associated situation impact ratings obtained by
crowdsourcing, (2) a corresponding set of object detectors used to analyze
users' photos and (3) a ground truth dataset made of 500 visual user profiles
which are manually rated for each situation. These components are combined in
LERVUP, a method which learns to rate visual user profiles in each situation.
LERVUP exploits a new image descriptor which aggregates concept ratings and
object detections at user level. It also uses an attention mechanism to boost
the detections of highly-rated concepts to prevent them from being overwhelmed
by low-rated ones. Performance is evaluated per situation by measuring the
correlation between the automatic ranking of profile ratings and a manual
ground truth. Results indicate that LERVUP is effective since a strong
correlation of the two rankings is obtained. This finding indicates that
providing meaningful automatic situation-related feedback about the effects of
data sharing is feasible. | [
"cs.CV",
"cs.LG",
"cs.SI"
] |
In this work we discuss the incorporation of quadratic neurons into policy
networks in the context of model-free actor-critic reinforcement learning.
Quadratic neurons admit an explicit quadratic function approximation in
contrast to conventional approaches where the non-linearity is induced by the
activation functions. We perform empirical experiments on several MuJoCo
continuous control tasks and find that when quadratic neurons are added to MLP
policy networks, they outperform the baseline MLP whilst admitting a smaller
number of parameters. The top returned reward is on average increased by
$5.8\%$ while being about $21\%$ more sample efficient. Moreover, it can
maintain its advantage against added action and observation noise. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
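A rough PyTorch sketch of a layer of quadratic neurons as described in the abstract above; the class name and the full (unfactorized) quadratic form are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class QuadraticLayer(nn.Module):
    """Each output unit i computes x^T A_i x + w_i^T x + b_i (a full quadratic form).

    Generic sketch only; the paper's exact parameterization of quadratic neurons
    (e.g. factorized or low-rank forms) may differ.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.A = nn.Parameter(0.01 * torch.randn(out_features, in_features, in_features))
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        quad = torch.einsum('bj,ijk,bk->bi', x, self.A, x)  # per-unit quadratic term
        return quad + self.linear(x)

# Example: a small policy head mixing a quadratic layer with ordinary linear layers.
policy = nn.Sequential(QuadraticLayer(17, 64), nn.Tanh(), nn.Linear(64, 6))
```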
Recent improvements in generative adversarial visual synthesis incorporate
real and fake image transformation in a self-supervised setting, leading to
increased stability and perceptual fidelity. However, these approaches
typically involve image augmentations via additional regularizers in the GAN
objective and thus spend valuable network capacity towards approximating
transformation equivariance instead of their desired task. In this work, we
explicitly incorporate inductive symmetry priors into the network architectures
via group-equivariant convolutional networks. Group-convolutions have higher
expressive power with fewer samples and lead to better gradient feedback
between generator and discriminator. We show that group-equivariance integrates
seamlessly with recent techniques for GAN training across regularizers,
architectures, and loss functions. We demonstrate the utility of our methods
for conditional synthesis by improving generation in the limited data regime
across symmetric imaging datasets and even find benefits for natural images
with preferred orientation. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
We discuss a general method to learn data representations from multiple
tasks. We provide a justification for this method in both settings of multitask
learning and learning-to-learn. The method is illustrated in detail in the
special case of linear feature learning. Conditions on the theoretical
advantage offered by multitask representation learning over independent task
learning are established. In particular, focusing on the important example of
half-space learning, we derive the regime in which multitask representation
learning is beneficial over independent task learning, as a function of the
sample size, the number of tasks and the intrinsic data dimensionality. Other
potential applications of our results include multitask feature learning in
reproducing kernel Hilbert spaces and multilayer, deep networks. | [
"stat.ML",
"cs.LG"
] |
Capturing global contextual representations by exploiting long-range
pixel-pixel dependencies has been shown to improve semantic segmentation
performance. However, how to do this efficiently remains an open question, as
current approaches that utilise attention schemes or very deep models to
increase the model's field of view result in complex models with large memory
consumption.
Inspired by recent work on graph neural networks, we propose the
Self-Constructing Graph (SCG) module that learns a long-range dependency graph
directly from the image and uses it to propagate contextual information
efficiently to improve semantic segmentation. The module is optimised via a
novel adaptive diagonal enhancement method and a variational lower bound that
consists of a customized graph reconstruction term and a Kullback-Leibler
divergence regularization term. When incorporated into a neural network
(SCG-Net), semantic segmentation is performed in an end-to-end manner and
competitive performance (mean F1-scores of 92.0% and 89.8% respectively) on the
publicly available ISPRS Potsdam and Vaihingen datasets is achieved, with much
fewer parameters, and at a lower computational cost compared to related pure
convolutional neural network (CNN) based models. | [
"cs.CV"
] |
In this paper, we propose a pipeline to generate 3D point cloud of an object
from a single-view RGB image. Most previous works predict the 3D point
coordinates from single RGB images directly. We decompose this problem into
depth estimation from single images and point cloud completion from partial
point clouds.
Our method sequentially predicts the depth maps from images and then infers
the complete 3D object point clouds based on the predicted partial point
clouds. We explicitly impose the camera model geometrical constraint in our
pipeline and enforce the alignment of the generated point clouds and estimated
depth maps.
Experimental results for the single image 3D object reconstruction task show
that the proposed method outperforms existing state-of-the-art methods. Both
the qualitative and quantitative results demonstrate the generality and
suitability of our method. | [
"cs.CV"
] |
As a powerful statistical image modeling technique, sparse representation has
been successfully used in various image restoration applications. The success
of sparse representation owes to the development of l1-norm optimization
techniques, and the fact that natural images are intrinsically sparse in some
domain. The image restoration quality largely depends on whether the employed
sparse domain can represent well the underlying image. Considering that the
contents can vary significantly across different images or different patches in
a single image, we propose to learn various sets of bases from a pre-collected
dataset of example image patches, and then for a given patch to be processed,
one set of bases are adaptively selected to characterize the local sparse
domain. We further introduce two adaptive regularization terms into the sparse
representation framework. First, a set of autoregressive (AR) models are
learned from the dataset of example image patches. The best fitted AR models to
a given patch are adaptively selected to regularize the image local structures.
Second, the image non-local self-similarity is introduced as another
regularization term. In addition, the sparsity regularization parameter is
adaptively estimated for better image restoration performance. Extensive
experiments on image deblurring and super-resolution validate that by using
adaptive sparse domain selection and adaptive regularization, the proposed
method achieves much better results than many state-of-the-art algorithms in
terms of both PSNR and visual perception. | [
"cs.CV",
"cs.MM",
"68U10"
] |
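A schematic LaTeX rendering of the regularized sparse-coding objective the abstract above describes; the symbols $\mathbf{H}$, $\boldsymbol{\Phi}$, $\mathbf{x}_{\mathrm{AR}}$, $\mathbf{x}_{\mathrm{NL}}$ and the weights are illustrative assumptions rather than the paper's exact formulation.

```latex
% Schematic objective (notation illustrative):
\hat{\boldsymbol{\alpha}} = \arg\min_{\boldsymbol{\alpha}}
  \|\mathbf{y} - \mathbf{H}\boldsymbol{\Phi}\boldsymbol{\alpha}\|_2^2
  + \lambda \|\boldsymbol{\alpha}\|_1
  + \gamma \|\boldsymbol{\Phi}\boldsymbol{\alpha} - \mathbf{x}_{\mathrm{AR}}\|_2^2
  + \eta  \|\boldsymbol{\Phi}\boldsymbol{\alpha} - \mathbf{x}_{\mathrm{NL}}\|_2^2
% y: degraded observation, H: blur/downsampling operator, \Phi: adaptively
% selected set of local bases, x_AR: autoregressive prediction, x_NL: non-local
% self-similarity estimate; \lambda is estimated adaptively per patch.
```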
Traditionally, researchers in automatic face recognition and biometric
technologies have focused on developing accurate algorithms. With this
technology being integrated into operational systems, engineers and scientists
are being asked, do these systems meet societal norms? The origin of this line
of inquiry is `trust' of artificial intelligence (AI) systems. In this paper,
we concentrate on adapting explainable AI to face recognition and biometrics,
and we present four principles of explainable AI to face recognition and
biometrics. The principles are illustrated by $\it{four}$ case studies, which
show the challenges and issues in developing algorithms that can produce
explanations. | [
"cs.CV",
"cs.AI"
] |
We propose Pixel-BERT to align image pixels with text by deep multi-modal
transformers that jointly learn visual and language embedding in a unified
end-to-end framework. We aim to build a more accurate and thorough connection
between image pixels and language semantics directly from image and sentence
pairs, instead of using region-based image features as in most recent vision
and language methods. Our Pixel-BERT, which aligns semantic connections at the
pixel and text level, solves the limitation of task-specific visual
representations for vision and language tasks. It also relieves the cost of
bounding box annotations and overcomes the imbalance between semantic labels in
visual tasks and language semantics. To provide a better representation for down-stream
tasks, we pre-train a universal end-to-end model with image and sentence pairs
from Visual Genome dataset and MS-COCO dataset. We propose to use a random
pixel sampling mechanism to enhance the robustness of visual representation and
to apply the Masked Language Model and Image-Text Matching as pre-training
tasks. Extensive experiments on downstream tasks with our pre-trained model
show that our approach achieves state-of-the-art results in downstream tasks,
including Visual Question Answering (VQA), image-text retrieval, Natural
Language for Visual Reasoning for Real (NLVR). Particularly, we boost the
performance of a single model in VQA task by 2.17 points compared with SOTA
under fair comparison. | [
"cs.CV",
"cs.CL",
"cs.LG",
"cs.MM"
] |
The growing use of Machine Learning has produced significant advances in many
fields. For image-based tasks, however, the use of deep learning remains
challenging in small datasets. In this article, we review, evaluate and compare
the current state-of-the-art techniques in training neural networks to
elucidate which techniques work best for small datasets. We further propose a
path forward for the improvement of model accuracy in medical imaging
applications. We observed best results from one cycle training, discriminative
learning rates with gradual freezing and parameter modification after transfer
learning. We also established that when datasets are small, transfer learning
plays an important role beyond parameter initialization by reusing previously
learned features. Surprisingly, we observed that there is little advantage in
using pre-trained networks in images from another part of the body compared to
Imagenet. On the contrary, if images from the same part of the body are
available then transfer learning can produce a significant improvement in
performance with as little as 50 images in the training data. | [
"cs.LG",
"stat.ML"
] |
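A hedged sketch of the training recipe the abstract above reports working well (gradual freezing and discriminative learning rates after transfer learning), written against torchvision's ResNet layer names; the learning-rate values, the two-stage split, and the number of output classes are assumptions for illustration.

```python
import torch
import torchvision

# Hypothetical setup: fine-tune an ImageNet-pretrained ResNet on a small medical
# dataset with gradual freezing and discriminative learning rates. Layer names
# follow torchvision's ResNet; dataset and training loop are omitted.
model = torchvision.models.resnet34(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new task-specific head

# Stage 1: freeze the pretrained backbone and train only the new head.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
head_optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)

# Stage 2: unfreeze everything and use discriminative learning rates
# (smaller for early layers, larger for later layers and the head).
for p in model.parameters():
    p.requires_grad = True
optimizer = torch.optim.AdamW([
    {"params": model.layer1.parameters(), "lr": 1e-5},
    {"params": model.layer2.parameters(), "lr": 3e-5},
    {"params": model.layer3.parameters(), "lr": 1e-4},
    {"params": model.layer4.parameters(), "lr": 3e-4},
    {"params": model.fc.parameters(),     "lr": 1e-3},
])  # stem layers (conv1/bn1) omitted here for brevity
```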
We explore recurrent encoder multi-decoder neural network architectures for
semi-supervised sequence classification and reconstruction. We find that the
use of multiple reconstruction modules helps models generalize in a
classification task when only a small amount of labeled data is available,
which is often the case in practice. Such models provide useful high-level
representations of motions allowing clustering, searching and faster labeling
of new sequences. We also propose a new, realistic partitioning of a
well-known, high quality motion-capture dataset for better evaluations. We
further explore a novel formulation for future-predicting decoders based on
conditional recurrent generative adversarial networks, for which we propose
both soft and hard constraints for transition generation derived from desired
physical properties of synthesized future movements and desired animation
goals. We find that using such constraints allow to stabilize the training of
recurrent adversarial architectures for animation generation. | [
"cs.CV",
"cs.LG"
] |
The black-box nature of deep learning models prevents them from being
completely trusted in domains like biomedicine. Most explainability techniques
do not capture the concept-based reasoning that human beings follow. In this
work, we attempt to understand the behavior of trained models that perform
image processing tasks in the medical domain by building a graphical
representation of the concepts they learn. Extracting such a graphical
representation of the model's behavior on an abstract, higher conceptual level
would unravel the learnings of these models and would help us to evaluate the
steps taken by the model for predictions. We show the application of our
proposed implementation on two biomedical problems - brain tumor segmentation
and fundus image classification. We provide an alternative graphical
representation of the model by formulating a concept level graph as discussed
above, which makes the problem of intervention to find active inference trails
more tractable. Understanding these trails would provide an understanding of
the hierarchy of the decision-making process followed by the model, as well as
its overall nature. Our framework is available at
https://github.com/koriavinash1/BioExp | [
"cs.CV",
"cs.AI",
"stat.ML"
] |
In representation learning (RL), how to make the learned representations easy
to interpret and less overfitted to training data are two important but
challenging issues. To address these problems, we study a new type of
regularization approach that encourages the supports of weight vectors in RL
models to have small overlap, by simultaneously promoting near-orthogonality
among vectors and sparsity of each vector. We apply the proposed regularizer to
two models: neural networks (NNs) and sparse coding (SC), and develop an
efficient ADMM-based algorithm for regularized SC. Experiments on various
datasets demonstrate that weight vectors learned under our regularizer are more
interpretable and have better generalization performance. | [
"cs.LG",
"stat.ML"
] |
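A simplified PyTorch proxy for the kind of regularizer described in the abstract above: it jointly promotes near-orthogonality and sparsity so that supports tend not to overlap. The exact functional form and the ADMM solver from the paper are not reproduced here.

```python
import torch

def small_overlap_regularizer(W, lam_orth=1.0, lam_sparse=0.1):
    """Simplified proxy for discouraging support overlap among weight vectors.

    W: (num_vectors, dim) matrix whose rows are the weight vectors.
    Combines a near-orthogonality penalty on the Gram matrix with an L1 sparsity
    penalty; the paper's actual regularizer may be formulated differently.
    """
    gram = W @ W.t()
    eye = torch.eye(W.shape[0], device=W.device)
    near_orth = ((gram - eye) ** 2).sum()  # push pairwise inner products towards zero
    sparsity = W.abs().sum()               # push individual entries towards zero
    return lam_orth * near_orth + lam_sparse * sparsity

# Usage sketch: loss = task_loss + small_overlap_regularizer(layer.weight)
```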
In this paper, we study Reinforcement Learning from Demonstrations (RLfD)
that improves the exploration efficiency of Reinforcement Learning (RL) by
providing expert demonstrations. Most existing RLfD methods require
demonstrations to be perfect and sufficient, which is often unrealistic in
practice. To work with imperfect demonstrations, we first define an imperfect
expert setting for RLfD in a formal way, and then point out that previous
methods suffer from two issues in terms of optimality and convergence,
respectively. Upon the theoretical findings we have derived, we tackle these
two issues by regarding the expert guidance as a soft constraint on regulating
the policy exploration of the agent, which eventually leads to a constrained
optimization problem. We further demonstrate that such problem is able to be
addressed efficiently by performing a local linear search on its dual form.
Considerable empirical evaluations on a comprehensive collection of benchmarks
indicate our method attains consistent improvement over other RLfD
counterparts. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
We propose the novel use of a generative adversarial network (GAN) (i) to
make predictions in time (PredGAN) and (ii) to assimilate measurements
(DA-PredGAN). In the latter case, we take advantage of the natural adjoint-like
properties of generative models and the ability to simulate forwards and
backwards in time. GANs have received much attention recently, after achieving
excellent results for their generation of realistic-looking images. We wish to
explore how this property translates to new applications in computational
modelling and to exploit the adjoint-like properties for efficient data
assimilation. To predict the spread of COVID-19 in an idealised town, we apply
these methods to a compartmental model in epidemiology that is able to model
space and time variations. To do this, the GAN is set within a reduced-order
model (ROM), which uses a low-dimensional space for the spatial distribution of
the simulation states. Then the GAN learns the evolution of the low-dimensional
states over time. The results show that the proposed methods can accurately
predict the evolution of the high-fidelity numerical simulation, and can
efficiently assimilate observed data and determine the corresponding model
parameters. | [
"cs.LG",
"stat.ML"
] |
In video object tracking, there exist rich temporal contexts among successive
frames, which have been largely overlooked in existing trackers. In this work,
we bridge the individual video frames and explore the temporal contexts across
them via a transformer architecture for robust object tracking. Different from
classic usage of the transformer in natural language processing tasks, we
separate its encoder and decoder into two parallel branches and carefully
design them within the Siamese-like tracking pipelines. The transformer encoder
promotes the target templates via attention-based feature reinforcement, which
benefits the high-quality tracking model generation. The transformer decoder
propagates the tracking cues from previous templates to the current frame,
which facilitates the object searching process. Our transformer-assisted
tracking framework is neat and trained in an end-to-end manner. With the
proposed transformer, a simple Siamese matching approach is able to outperform
the current top-performing trackers. By combining our transformer with the
recent discriminative tracking pipeline, our method sets several new
state-of-the-art records on prevalent tracking benchmarks. | [
"cs.CV"
] |
Visual odometry networks commonly use pretrained optical flow networks in
order to derive the ego-motion between consecutive frames. The features
extracted by these networks represent the motion of all the pixels between
frames. However, due to the existence of dynamic objects and texture-less
surfaces in the scene, the motion information for every image region might not
be reliable for inferring odometry due to the ineffectiveness of dynamic
objects in derivation of the incremental changes in position. Recent works in
this area lack attention mechanisms in their structures to facilitate dynamic
reweighing of the feature maps for extracting more refined egomotion
information. In this paper, we explore the effectiveness of self-attention in
visual odometry. We report qualitative and quantitative results against the
SOTA methods. Furthermore, saliency-based studies alongside specially designed
experiments are utilized to investigate the effect of self-attention on VO. Our
experiments show that using self-attention allows for the extraction of better
features while achieving a better odometry performance compared to networks
that lack such structures. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
Missing value problem in spatiotemporal traffic data has long been a
challenging topic, in particular for large-scale and high-dimensional data with
complex missing mechanisms and diverse degrees of missingness. Recent studies
based on tensor nuclear norm have demonstrated the superiority of tensor
learning in imputation tasks by effectively characterizing the complex
correlations/dependencies in spatiotemporal data. However, despite the
promising results, these approaches do not scale well to large data tensors. In
this paper, we focus on addressing the missing data imputation problem for
large-scale spatiotemporal traffic data. To achieve both high accuracy and
efficiency, we develop a scalable tensor learning model -- Low-Tubal-Rank
Smoothing Tensor Completion (LSTC-Tubal) -- based on the existing framework of
Low-Rank Tensor Completion, which is well-suited for spatiotemporal traffic
data that is characterized by multidimensional structure of location$\times$
time of day $\times$ day. In particular, the proposed LSTC-Tubal model involves
a scalable tensor nuclear norm minimization scheme by integrating linear
unitary transformation. Therefore, tensor nuclear norm minimization can be
solved by singular value thresholding on the transformed matrix of each day
while the day-to-day correlation can be effectively preserved by the unitary
transform matrix. We compare LSTC-Tubal with state-of-the-art baseline models,
and find that LSTC-Tubal can achieve competitive accuracy with a significantly
lower computational cost. In addition, the LSTC-Tubal will also benefit other
tasks in modeling large-scale spatiotemporal traffic data, such as
network-level traffic forecasting. | [
"stat.ML",
"cs.LG"
] |
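A NumPy sketch of the core update suggested by the abstract above: singular value thresholding applied slice-by-slice after a unitary transform along the day mode. The function name and the choice of transform U are assumptions; the paper's scheme may differ in details.

```python
import numpy as np

def transformed_svt(X, U, tau):
    """Singular value thresholding of a 3-D traffic tensor in a transformed domain.

    X:   array of shape (num_locations, time_of_day, num_days)
    U:   (num_days, num_days) unitary matrix applied along the day mode
         (data-driven in the paper; a DFT matrix is one simple choice)
    tau: singular-value threshold
    """
    Xt = np.tensordot(X, U, axes=([2], [1])).astype(complex)  # mode-3 product with U
    for k in range(Xt.shape[2]):                               # threshold each slice
        u, s, vt = np.linalg.svd(Xt[:, :, k], full_matrices=False)
        Xt[:, :, k] = (u * np.maximum(s - tau, 0.0)) @ vt
    out = np.tensordot(Xt, U.conj().T, axes=([2], [1]))        # invert the transform
    return out.real if np.isrealobj(X) else out
```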
Scene text recognition with arbitrary shape is very challenging due to large
variations in text shapes, fonts, colors, backgrounds, etc. Most
state-of-the-art algorithms rectify the input image into the normalized image,
then treat the recognition as a sequence prediction task. The bottleneck of
such methods is the rectification, which causes errors due to perspective
distortion. In this paper, we find that the rectification is completely
unnecessary; all we need is spatial attention. We therefore propose a
simple but extremely effective scene text recognition method based on the
transformer [50]. Different from previous transformer-based models [56,34],
which just use the decoder of the transformer to decode the convolutional
attention, the proposed method uses convolutional feature maps as word
embeddings input into the transformer. In this way, our method is able to make
full use of the powerful attention mechanism of the transformer. Extensive
experimental results show that the proposed method significantly outperforms
state-of-the-art methods by a very large margin on both regular and irregular
text datasets. On CUTE, one of the most challenging datasets, where the
state-of-the-art prediction accuracy is 89.6%, our method achieves 99.3%, a
surprising result. We will release our source code and believe that
our method will be a new benchmark of scene text recognition with arbitrary
shapes. | [
"cs.CV"
] |
Data augmentation methods have been shown to be a fundamental technique to
improve generalization in tasks such as image, text and audio classification.
Recently, automated augmentation methods have led to further improvements on
image classification and object detection leading to state-of-the-art
performances. Nevertheless, little work has been done on time-series data, an
area that could greatly benefit from automated data augmentation given the
usually limited size of the datasets. We present two sample-adaptive automatic
weighting schemes for data augmentation: the first learns to weight the
contribution of the augmented samples to the loss, and the second method
selects a subset of transformations based on the ranking of the predicted
training loss. We validate our proposed methods on a large, noisy financial
dataset and on time-series datasets from the UCR archive. On the financial
dataset, we show that the methods in combination with a trading strategy lead
to improvements in annualized returns of over 50$\%$, and on the time-series
data we outperform state-of-the-art models on over half of the datasets, and
achieve similar performance in accuracy on the others. | [
"cs.LG",
"stat.ML"
] |
The prevalence of accessible depth sensing and 3D laser scanning techniques
has enabled the convenient acquisition of 3D dynamic point clouds, which
provide efficient representation of arbitrarily-shaped objects in motion.
Nevertheless, dynamic point clouds are often perturbed by noise due to
hardware, software or other causes. While a plethora of methods have been
proposed for static point cloud denoising, few efforts are made for the
denoising of dynamic point clouds with varying number of irregularly-sampled
points in each frame. In this paper, we represent dynamic point clouds
naturally on graphs and address the denoising problem by inferring the
underlying graph via spatio-temporal graph learning, exploiting both the
intra-frame similarity and inter-frame consistency. Firstly, assuming the
availability of a relevant feature vector per node, we pose spatial-temporal
graph learning as optimizing a Mahalanobis distance metric $\mathbf{M}$, which
is formulated as the minimization of a graph Laplacian regularizer. Secondly, to
ease the optimization of the symmetric and positive definite metric matrix
$\mathbf{M}$, we decompose it into $\mathbf{M}=\mathbf{R}^{\top}\mathbf{R}$ and
solve $\mathbf{R}$ instead via proximal gradient. Finally, based on the
spatial-temporal graph learning, we formulate dynamic point cloud denoising as
the joint optimization of the desired point cloud and underlying
spatio-temporal graph, which leverages both intra-frame affinities and
inter-frame consistency and is solved via alternating minimization.
Experimental results show that the proposed method significantly outperforms
independent denoising of each frame from state-of-the-art static point cloud
denoising approaches. | [
"cs.CV",
"cs.MM"
] |
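An illustrative LaTeX rendering of the graph-learning step from the abstract above; the Gaussian edge-weight form, the fidelity term and the trade-off parameter $\mu$ are assumptions consistent with the abstract, not the paper's exact objective.

```latex
% Schematic of the graph-learning step (notation illustrative): edge weights are
% induced by a Mahalanobis metric on per-node features f_i,
w_{ij} = \exp\!\left( -(\mathbf{f}_i - \mathbf{f}_j)^{\top}\mathbf{M}\,(\mathbf{f}_i - \mathbf{f}_j) \right),
\qquad \mathbf{M} = \mathbf{R}^{\top}\mathbf{R},
% and denoising alternates between updating R (via proximal gradient) and the
% point cloud X, e.g. by minimizing a fidelity term plus a graph Laplacian
% regularizer of the form
\min_{\mathbf{X},\,\mathbf{R}} \;
\|\mathbf{X} - \mathbf{Y}\|_F^2 + \mu\,\operatorname{tr}\!\left(\mathbf{X}^{\top}\mathbf{L}(\mathbf{R})\,\mathbf{X}\right),
% with Y the noisy input and L(R) the Laplacian of the learned spatio-temporal graph.
```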
Dynamic scene understanding is a challenging problem and motion segmentation
plays a crucial role in solving it. Incorporating semantics and motion enhances
the overall perception of the dynamic scene. For applications of outdoor
robotic navigation, joint learning methods have not been extensively used for
extracting spatio-temporal features or adding different priors into the
formulation. The task becomes even more challenging without stereo information
being incorporated. This paper proposes an approach to fuse semantic features
and motion clues using CNNs, to address the problem of monocular semantic
motion segmentation. We deduce semantic and motion labels by integrating
optical flow as a constraint with semantic features into dilated convolution
network. The pipeline consists of three main stages i.e Feature extraction,
Feature amplification and Multi Scale Context Aggregation to fuse the semantics
and flow features. Our joint formulation shows significant improvements in
monocular motion segmentation over the state of the art methods on challenging
KITTI tracking dataset. | [
"cs.CV"
] |
This paper addresses the search for a fast and meaningful image segmentation
in the context of $k$-means clustering. The proposed method builds on a
widely-used local version of Lloyd's algorithm, called Simple Linear Iterative
Clustering (SLIC). We propose an algorithm which extends SLIC to dynamically
adjust the local search, adopting superpixel resolution dynamically to
structure existent in the image, and thus provides for more meaningful
superpixels in the same linear runtime as standard SLIC. The proposed method is
evaluated against state-of-the-art techniques and improved boundary adherence
and undersegmentation error are observed, whilst still remaining among the
fastest algorithms which are tested. | [
"cs.CV"
] |
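For context, a minimal example of the baseline the method above builds on, using scikit-image's SLIC implementation; the proposed dynamic adjustment of superpixel resolution is not part of the library and is not shown.

```python
import numpy as np
from skimage import data, segmentation, color

# Baseline SLIC superpixels -- the local Lloyd/k-means clustering the paper
# extends; the dynamic resolution adjustment is not reproduced here.
image = data.astronaut()
labels = segmentation.slic(image, n_segments=400, compactness=10, start_label=1)

mean_color = color.label2rgb(labels, image, kind='avg', bg_label=0)   # superpixel means
overlay = segmentation.mark_boundaries(image, labels)                 # boundary overlay
print("number of superpixels:", len(np.unique(labels)))
```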
Graph convolution network (GCN) attracts intensive research interest with
broad applications. While existing work mainly focused on designing novel GCN
architectures for better performance, few of them studied a practical yet
challenging problem: How to learn GCNs from data with extremely limited
annotation? In this paper, we propose a new learning method by sampling
strategy and model compression to overcome this challenge. Our approach has
multifold advantages: 1) the adaptive sampling strategy largely suppresses the
GCN training deviation over uniform sampling; 2) compressed GCN-based methods
with a smaller scale of parameters need fewer labeled data to train; 3) the
smaller scale of training data is beneficial to reduce the human resource cost
to label them. We choose six popular GCN baselines and conduct extensive
experiments on three real-world datasets. The results show that by applying our
method, all GCN baselines cut down the annotation requirement by as much as
90$\%$ and compress the scale of parameters more than 6$\times$ without
sacrificing their strong performance. It verifies that the training method
could extend the existing semi-supervised GCN-based methods to the scenarios
with the extremely small scale of labeled data. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
Real-time flame detection is crucial in video based surveillance systems. We
propose a vision-based method to detect flames using Deep Convolutional
Generative Adversarial Neural Networks (DCGANs). Many existing supervised
learning approaches using convolutional neural networks do not take temporal
information into account and require substantial amount of labeled data. In
order to have a robust representation of sequences with and without flame, we
propose a two-stage training of a DCGAN exploiting spatio-temporal flame
evolution. Our training framework includes the regular training of a DCGAN with
real spatio-temporal images, namely, temporal slice images, and noise vectors,
and training the discriminator separately using the temporal flame images
without the generator. Experimental results show that the proposed method
effectively detects flame in video with negligible false positive rates in
real-time. | [
"cs.CV"
] |
Learning information-rich and generalizable representations effectively from
unlabeled multivariate cardiac signals to identify abnormal heart rhythms
(cardiac arrhythmias) is valuable in real-world clinical settings but often
challenging due to its complex temporal dynamics. Cardiac arrhythmias can vary
significantly in temporal patterns even for the same patient ($i.e.$, intra
subject difference). Meanwhile, the same type of cardiac arrhythmia can show
different temporal patterns among different patients due to different cardiac
structures ($i.e.$, inter subject difference). In this paper, we address the
challenges by proposing an Intra-inter Subject self-supervised Learning (ISL)
model that is customized for multivariate cardiac signals. Our proposed ISL
model integrates medical knowledge into self-supervision to effectively learn
from intra-inter subject differences. In intra subject self-supervision, ISL
model first extracts heartbeat-level features from each subject using a
channel-wise attentional CNN-RNN encoder. Then a stationarity test module is
employed to capture the temporal dependencies between heartbeats. In inter
subject self-supervision, we design a set of data augmentations according to
the clinical characteristics of cardiac signals and perform contrastive
learning among subjects to learn distinctive representations for various types
of patients. Extensive experiments on three real-world datasets were conducted.
In a semi-supervised transfer learning scenario, our pre-trained ISL model
leads to about 10% improvement over supervised training when only 1% labeled data
is available, suggesting strong generalizability and robustness of the model. | [
"cs.LG",
"cs.AI",
"eess.SP"
] |
Effective training of deep neural networks can be challenging, and there
remain many open questions on how to best learn these models. Recently
developed methods to improve neural network training examine teaching:
providing learned information during the training process to improve downstream
model performance. In this paper, we take steps towards extending the scope of
teaching. We propose a flexible teaching framework using commentaries, learned
meta-information helpful for training on a particular task. We present
gradient-based methods to learn commentaries, leveraging recent work on
implicit differentiation for scalability. We explore diverse applications of
commentaries, from weighting training examples, to parameterising
label-dependent data augmentation policies, to representing attention masks
that highlight salient image regions. We find that commentaries can improve
training speed and/or performance, and provide insights about the dataset and
training process. We also observe that commentaries generalise: they can be
reused when training new models to obtain performance benefits, suggesting a
use-case where commentaries are stored with a dataset and leveraged in future
for improved model training. | [
"cs.LG"
] |
Optimal parameter initialization remains a crucial problem for neural network
training. A poor weight initialization may lead to longer training and/or
convergence to sub-optimal solutions. Here, we propose a method of weight re-initialization
by repeated annealing and injection of noise in the training process. We
implement this through a cyclical batch size schedule motivated by a Bayesian
perspective of neural network training. We evaluate our methods through
extensive experiments on tasks in language modeling, natural language
inference, and image classification. We demonstrate the ability of our method
to improve language modeling performance by up to 7.91 perplexity and reduce
training iterations by up to $61\%$, in addition to its flexibility in enabling
snapshot ensembling and use with adversarial training. | [
"cs.LG"
] |
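A small sketch of what the cyclical batch-size schedule in the abstract above could look like in PyTorch; the geometric ramp, the cycle length and the helper names (train_dataset, train_one_epoch) are assumptions rather than the paper's exact schedule.

```python
from torch.utils.data import DataLoader

def cyclical_batch_sizes(base=32, peak=512, cycles=4, epochs_per_cycle=5):
    """Yield one batch size per epoch, ramping geometrically from `base` to `peak`
    and snapping back at the start of each cycle; the reset to a small batch size
    re-injects gradient noise (a sketch; the paper's exact schedule may differ)."""
    for _ in range(cycles):
        for i in range(epochs_per_cycle):
            yield int(round(base * (peak / base) ** (i / (epochs_per_cycle - 1))))

# Usage sketch (train_dataset / train_one_epoch are illustrative placeholders):
# for epoch, bs in enumerate(cyclical_batch_sizes()):
#     loader = DataLoader(train_dataset, batch_size=bs, shuffle=True)
#     train_one_epoch(model, loader, optimizer)
```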
Understanding a scene by decoding the visual relationships depicted in an
image has been a long studied problem. While the recent advances in deep
learning and the usage of deep neural networks have achieved near human
accuracy on many tasks, there still exists a considerable gap between human and
machine level performance when it comes to various visual relationship
detection tasks. Developing on earlier tasks like object recognition,
segmentation and captioning which focused on a relatively coarser image
understanding, newer tasks have been introduced recently to deal with a finer
level of image understanding. A Scene Graph is one such technique to better
represent a scene and the various relationships present in it. With its wide
number of applications in various tasks like Visual Question Answering,
Semantic Image Retrieval, Image Generation, among many others, it has proved to
be a useful tool for deeper and better visual relationship understanding. In
this paper, we present a detailed survey on the various techniques for scene
graph generation, their efficacy to represent visual relationships and how it
has been used to solve various downstream tasks. We also attempt to analyze the
various future directions in which the field might advance in the future. Being
one of the first papers to give a detailed survey on this topic, we also hope
to give a succinct introduction to scene graphs, and guide practitioners while
developing approaches for their applications. | [
"cs.CV"
] |
The recognition and clustering of coins which have been struck by the same
die is of interest for archeological studies. Nowadays, this work can only be
performed by experts and is very tedious. In this paper, we propose a method to
automatically cluster dies, based on 3D scans of coins. It is based on three
steps: registration, comparison and graph-based clustering. Experimental
results on 90 coins coming from a Celtic treasure from the 2nd-1st century BC
show a clustering quality equivalent to an expert's work. | [
"cs.CV"
] |
Transformer models have advanced the state of the art in many Natural
Language Processing (NLP) tasks. In this paper, we present a new Transformer
architecture, Extended Transformer Construction (ETC), that addresses two key
challenges of standard Transformer architectures, namely scaling input length
and encoding structured inputs. To scale attention to longer inputs, we
introduce a novel global-local attention mechanism between global tokens and
regular input tokens. We also show that combining global-local attention with
relative position encodings and a Contrastive Predictive Coding (CPC)
pre-training objective allows ETC to encode structured inputs. We achieve
state-of-the-art results on four natural language datasets requiring long
and/or structured inputs. | [
"cs.LG",
"stat.ML"
] |
Single-image super-resolution is the process of increasing the resolution of
an image, obtaining a high-resolution (HR) image from a low-resolution (LR)
one. By leveraging large training datasets, convolutional neural networks
(CNNs) currently achieve the state-of-the-art performance in this task. Yet,
during testing/deployment, they fail to enforce consistency between the HR and
LR images: if we downsample the output HR image, it never matches its LR input.
Based on this observation, we propose to post-process the CNN outputs with an
optimization problem that we call TV-TV minimization, which enforces
consistency. As our extensive experiments show, such post-processing not only
improves the quality of the images, in terms of PSNR and SSIM, but also makes
the super-resolution task robust to operator mismatch, i.e., when the true
downsampling operator is different from the one used to create the training
dataset. | [
"cs.CV",
"cs.LG",
"math.OC"
] |
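A schematic statement of the TV-TV post-processing problem as the abstract above describes it; the exact weighting and constraint handling in the paper may differ.

```latex
% Schematic of the post-processing step: given the CNN output w and the LR input b,
% the final HR estimate x is obtained by solving
\min_{\mathbf{x}} \;\; \|\mathbf{x}\|_{\mathrm{TV}} + \beta\,\|\mathbf{x} - \mathbf{w}\|_{\mathrm{TV}}
\quad \text{subject to} \quad \mathbf{A}\mathbf{x} = \mathbf{b},
% where A is the (assumed) downsampling operator. The constraint enforces
% consistency with the LR input, while the two TV terms keep x piecewise smooth
% and close to the CNN output.
```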
Deep learning is gaining instant popularity in computer aided diagnosis of
COVID-19. Due to the high sensitivity of Computed Tomography (CT) to this
disease, CT-based COVID-19 detection with visual models is currently at the
forefront of medical imaging research. Outcomes published in this direction are
frequently claiming highly accurate detection under deep transfer learning.
This is leading medical technologists to believe that deep transfer learning is
the mainstream solution for the problem. However, our critical analysis of the
literature reveals an alarming performance disparity between different
published results. Hence, we conduct a systematic thorough investigation to
analyze the effectiveness of deep transfer learning for COVID-19 detection with
CT images. Exploring 14 state-of-the-art visual models with over 200 model
training sessions, we conclusively establish that the published literature is
frequently overestimating transfer learning performance for the problem, even
in prestigious scientific sources. The roots of overestimation trace back
to inappropriate data curation. We also provide case studies that consider more
realistic scenarios, and establish transparent baselines for the problem. We
hope that our reproducible investigation will help in curbing hype-driven
claims for the critical problem of COVID-19 diagnosis, and pave the way for a
more transparent performance evaluation of techniques for CT-based COVID-19
detection. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Graph Neural Networks(GNNs) are useful deep learning models to deal with the
non-Euclid data. However, recent works show that GNNs are vulnerable to
adversarial attacks. Small perturbations can lead to poor performance in many
GNNs, such as graph attention networks (GATs). Therefore, enhancing the
robustness of GNNs is a critical problem.
Robust GAT (RoGAT) is proposed in this paper to improve the robustness of GNNs.
Since the original GAT uses the attention mechanism for different edges but is
still sensitive to perturbations, RoGAT progressively adjusts the edge weights
to adjust the attention scores. Firstly, RoGAT tunes the edge weights based on
the assumption that adjacent nodes should have similar features. Secondly,
RoGAT further tunes the node features to eliminate feature noise, since even
for a clean graph there exist some unreasonable data. Then, we train the
adjusted GAT model to defend against adversarial attacks. Different experiments
against targeted and untargeted attacks demonstrate that RoGAT significantly
outperforms most state-of-the-art defense methods. The implementation of RoGAT
is based on the DeepRobust repository for adversarial attacks. | [
"cs.LG",
"stat.ML"
] |
In this paper, we propose a novel text-based talking-head video generation
framework that synthesizes high-fidelity facial expressions and head motions in
accordance with contextual sentiments as well as speech rhythm and pauses. To
be specific, our framework consists of a speaker-independent stage and a
speaker-specific stage. In the speaker-independent stage, we design three
parallel networks to generate animation parameters of the mouth, upper face,
and head from texts, separately. In the speaker-specific stage, we present a 3D
face model guided attention network to synthesize videos tailored for different
individuals. It takes the animation parameters as input and exploits an
attention mask to manipulate facial expression changes for the input
individuals. Furthermore, to better establish authentic correspondences between
visual motions (i.e., facial expression changes and head movements) and audios,
we leverage a high-accuracy motion capture dataset instead of relying on long
videos of specific individuals. After attaining the visual and audio
correspondences, we can effectively train our network in an end-to-end fashion.
Extensive experiments on qualitative and quantitative results demonstrate that
our algorithm achieves high-quality photo-realistic talking-head videos
including various facial expressions and head motions according to speech
rhythms and outperforms the state-of-the-art. | [
"cs.CV"
] |
With the rapid development and wide application of computer, camera device,
network and hardware technology, 3D object (or model) retrieval has attracted
widespread attention and it has become a hot research topic in the computer
vision domain. Deep learning features used in 3D object retrieval have been
proven to yield better retrieval performance than hand-crafted features.
However, most existing networks do not take into account the impact
of multi-view image selection on network training, and the use of contrastive
loss alone only forcing the same-class samples to be as close as possible. In
this work, a novel solution named Multi-view Discrimination and Pairwise CNN
(MDPCNN) for 3D object retrieval is proposed to tackle these issues. It can
simultaneously take multiple batches and multiple views as input by adding a
Slice layer and a Concat layer. Furthermore, a highly discriminative network is
obtained by training on samples that are not easy to classify by clustering.
Lastly, we deploy the contrastive-center loss and contrastive loss as the
optimization objective that has better intra-class compactness and inter-class
separability. Large-scale experiments show that the proposed MDPCNN can achieve
a significant performance over the state-of-the-art algorithms in 3D object
retrieval. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
In this paper we present Horizon, Facebook's open source applied
reinforcement learning (RL) platform. Horizon is an end-to-end platform
designed to solve industry applied RL problems where datasets are large
(millions to billions of observations), the feedback loop is slow (vs. a
simulator), and experiments must be done with care because they don't run in a
simulator. Unlike other RL platforms, which are often designed for fast
prototyping and experimentation, Horizon is designed with production use cases
as top of mind. The platform contains workflows to train popular deep RL
algorithms and includes data preprocessing, feature transformation, distributed
training, counterfactual policy evaluation, optimized serving, and a
model-based data understanding tool. We also showcase and describe real
examples where reinforcement learning models trained with Horizon significantly
outperformed and replaced supervised learning systems at Facebook. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
In this paper we investigate the feasibility of using synthetic data to
augment face datasets. In particular, we propose a novel generative adversarial
network (GAN) that can disentangle identity-related attributes from
non-identity-related attributes. This is done by training an embedding network
that maps discrete identity labels to an identity latent space that follows a
simple prior distribution, and training a GAN conditioned on samples from that
distribution. Our proposed GAN allows us to augment face datasets by generating
both synthetic images of subjects in the training set and synthetic images of
new subjects not in the training set. By using recent advances in GAN training,
we show that the synthetic images generated by our model are photo-realistic,
and that training with augmented datasets can indeed increase the accuracy of
face recognition models as compared with models trained with real images alone. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Over the past decade, multivariate time series classification (MTSC) has
received great attention with the advance of sensing techniques. Current deep
learning methods for MTSC are based on convolutional and recurrent neural
network, with the assumption that time series variables have the same effect on
each other. Thus they cannot model the pairwise dependencies among variables
explicitly. What's more, current spatial-temporal modeling methods based on
GNNs are inherently flat and lack the capability of aggregating node
information in a hierarchical manner. To address this limitation and attain
expressive global representation of MTS, we propose a graph pooling based
framework MTPool and view MTSC task as graph classification task. With graph
structure learning and temporal convolution, MTS slices are converted to graphs
and spatial-temporal features are extracted. Then, we propose a novel graph
pooling method, which uses an ``encoder-decoder'' mechanism to generate
adaptive centroids for cluster assignments. GNNs and graph pooling layers are
used for joint graph representation learning and graph coarsening. With
multiple graph pooling layers, the input graphs are hierarchically coarsened to
one node. Finally, a differentiable classifier takes this coarsened one-node
graph as input to get the final predicted class. Experiments on 10 benchmark
datasets demonstrate MTPool outperforms state-of-the-art methods in MTSC tasks. | [
"cs.LG",
"cs.AI"
] |
Traffic forecasting has recently attracted increasing interest due to the
popularity of online navigation services, ridesharing and smart city projects.
Owing to the non-stationary nature of road traffic, forecasting accuracy is
fundamentally limited by the lack of contextual information. To address this
issue, we propose the Hybrid Spatio-Temporal Graph Convolutional Network
(H-STGCN), which is able to "deduce" future travel time by exploiting the data
of upcoming traffic volume. Specifically, we propose an algorithm to acquire
the upcoming traffic volume from an online navigation engine. Taking advantage
of the piecewise-linear flow-density relationship, a novel transformer
structure converts the upcoming volume into its equivalent in travel time. We
combine this signal with the commonly-utilized travel-time signal, and then
apply graph convolution to capture the spatial dependency. Particularly, we
construct a compound adjacency matrix which reflects the innate traffic
proximity. We conduct extensive experiments on real-world datasets. The results
show that H-STGCN remarkably outperforms state-of-the-art methods in various
metrics, especially for the prediction of non-recurring congestion. | [
"cs.LG",
"stat.ML"
] |
Video salient object detection aims at discovering the most visually
distinctive objects in a video. How to effectively take object motion into
consideration during video salient object detection is a critical issue.
Existing state-of-the-art methods either do not explicitly model and harvest
motion cues or ignore spatial contexts within optical flow images. In this
paper, we develop a multi-task motion guided video salient object detection
network, which learns to accomplish two sub-tasks using two sub-networks, one
sub-network for salient object detection in still images and the other for
motion saliency detection in optical flow images. We further introduce a series
of novel motion guided attention modules, which utilize the motion saliency
sub-network to attend and enhance the sub-network for still images. These two
sub-networks learn to adapt to each other by end-to-end training. Experimental
results demonstrate that the proposed method significantly outperforms existing
state-of-the-art algorithms on a wide range of benchmarks. We hope our simple
and effective approach will serve as a solid baseline and help ease future
research in video salient object detection. Code and models will be made
available. | [
"cs.CV"
] |
The parsing of windows in building facades is a long-desired but challenging
task in computer vision. It is crucial to urban analysis, semantic
reconstruction, lifecycle analysis, digital twins, and scene parsing amongst
other building-related tasks that require high-quality semantic data. This
article investigates the usage of the Mask R-CNN framework for window detection
from facade imagery. We utilize transfer learning to train
our proposed method on COCO weights with our own collected dataset of street
view images of facades to produce instance segmentations of our new window
class. Experimental results show that our suggested approach, which trains the
network on a relatively small dataset using only transfer learning and
augmentation, achieves results on par with prior state-of-the-art window detection
approaches, even without post-optimization techniques. | [
"cs.CV",
"cs.LG"
] |
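A hedged sketch of a standard torchvision fine-tuning setup of the kind the abstract above describes (COCO-pretrained Mask R-CNN adapted to a single "window" class); the hyperparameters and the two-class split are assumptions, and the facade dataset and training loop are omitted.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Minimal transfer-learning setup: start from COCO-pretrained Mask R-CNN and
# replace its predictors for two classes (background + window).
num_classes = 2
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)

params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)
```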
The success of deep learning in the computer vision and natural language
processing communities can be attributed to training of very deep neural
networks with millions or billions of parameters which can then be trained with
massive amounts of data. However, a similar trend has largely eluded training of
deep reinforcement learning (RL) algorithms where larger networks do not lead
to performance improvement. Previous work has shown that this is mostly due to
instability during training of deep RL agents when using larger networks. In
this paper, we make an attempt to understand and address training of larger
networks for deep RL. We first show that naively increasing network capacity
does not improve performance. Then, we propose a novel method that consists of
1) wider networks with DenseNet connection, 2) decoupling representation
learning from training of RL, 3) a distributed training method to mitigate
overfitting problems. Using this three-fold technique, we show that we can
train very large networks that result in significant performance gains. We
present several ablation studies to demonstrate the efficacy of the proposed
method and some intuitive understanding of the reasons for performance gain. We
show that our proposed method outperforms other baseline algorithms on several
challenging locomotion tasks. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
Despite data augmentation being a de facto technique for boosting the
performance of deep neural networks, little attention has been paid to
developing augmentation strategies for generative adversarial networks (GANs).
To this end, we introduce a novel augmentation scheme designed specifically for
GAN-based semantic image synthesis models. We propose to randomly warp object
shapes in the semantic label maps used as an input to the generator. The local
shape discrepancies between the warped and non-warped label maps and images
enable the GAN to learn better the structural and geometric details of the
scene and thus to improve the quality of generated images. While benchmarking
the augmented GAN models against their vanilla counterparts, we discover that
the quantification metrics reported in the previous semantic image synthesis
studies are strongly biased towards specific semantic classes as they are
derived via an external pre-trained segmentation network. We therefore propose
to improve the established semantic image synthesis evaluation scheme by
analyzing separately the performance of generated images on the biased and
unbiased classes for the given segmentation network. Finally, we show strong
quantitative and qualitative improvements obtained with our augmentation
scheme, on both class splits, using state-of-the-art semantic image synthesis
models across three different datasets. On average across COCO-Stuff, ADE20K
and Cityscapes datasets, the augmented models outperform their vanilla
counterparts by ~3 mIoU and ~10 FID points. | [
"cs.CV",
"cs.CG",
"cs.LG"
] |
Exploiting relationships among objects has achieved remarkable progress in
interpreting images or videos by natural language. Most existing methods resort
to first detecting objects and their relationships, and then generating textual
descriptions, which heavily depends on pre-trained detectors and leads to
performance drop when facing problems of heavy occlusion, tiny-size objects and
long-tail in object detection. In addition, the separate procedure of detecting
and captioning results in semantic inconsistency between the pre-defined
object/relation categories and the target lexical words. We exploit prior human
commonsense knowledge for reasoning relationships between objects without any
pre-trained detectors and reaching semantic coherency within one image or video
in captioning. The prior knowledge (e.g., in the form of knowledge graph)
provides commonsense semantic correlation and constraint between objects that
are not explicit in the image and video, serving as useful guidance to build
semantic graph for sentence generation. Particularly, we present a joint
reasoning method that incorporates 1) commonsense reasoning for embedding image
or video regions into semantic space to build semantic graph and 2) relational
reasoning for encoding semantic graph to generate sentences. Extensive
experiments on the MS-COCO image captioning benchmark and the MSVD video
captioning benchmark validate the superiority of our method on leveraging prior
commonsense knowledge to enhance relational reasoning for visual captioning. | [
"cs.CV"
] |
We study a recent class of models which uses graph neural networks (GNNs) to
improve forecasting in multivariate time series.
The core assumption behind these models is that there is a latent graph among
the time series (nodes) that governs their joint evolution.
By parameterizing a graph in a differentiable way, the models aim to improve
forecasting quality.
We compare four recent models of this class on the forecasting task. Further,
we perform ablations to study their behavior under changing conditions, e.g.,
when disabling the graph-learning modules and providing the ground-truth
relations instead. Based on our findings, we propose novel ways of combining
the existing architectures. | [
"cs.LG"
] |
This paper seeks to tackle the bin packing problem (BPP) through a learning
perspective. Building on self-attention-based encoding and deep reinforcement
learning algorithms, we propose a new end-to-end learning model for this task
of interest. By decomposing the combinatorial action space, as well as
utilizing a new training technique denoted as prioritized oversampling, which
is a general scheme to speed up on-policy learning, we achieve state-of-the-art
performance in a range of experimental settings. Moreover, although the
proposed approach, attend2pack, targets offline BPP, we strip our method down to
the strict online-BPP setting, where it also achieves state-of-the-art
performance. With a set of ablation studies as well as comparisons against a
range of previous works, we hope to offer a valid baseline approach for this
field of study. | [
"cs.LG",
"cs.AI"
] |
Multi-view clustering methods have been a focus in recent years because of
their superiority in clustering performance. However, typical traditional
multi-view clustering algorithms still have shortcomings in some aspects, such
as removal of redundant information, utilization of various views and fusion of
multi-view features. In view of these problems, this paper proposes a new
multi-view clustering method, low-rank subspace multi-view clustering based on
adaptive graph regularization. We integrate two new data-matrix decomposition
models into a unified optimization framework. In this framework, we account for
both the common knowledge shared across views and the knowledge unique to each
view by imposing new low-rank and sparse constraints on the
sparse subspace matrix. To ensure that we achieve effective sparse
representation and clustering performance on the original data matrix, adaptive
graph regularization and unsupervised clustering constraints are also
incorporated in the proposed model to preserve the internal structural features
of the data. Finally, the proposed method is compared with several
state-of-the-art algorithms. Experimental results for five widely used
multi-view benchmarks show that our proposed algorithm surpasses other
state-of-the-art methods by a clear margin. | [
"cs.LG",
"stat.ML"
] |
The recent developments and growing interest in neural-symbolic models have
shown that hybrid approaches can offer richer models for Artificial
Intelligence. The integration of effective relational learning and reasoning
methods is one of the key challenges in this direction, as neural learning and
symbolic reasoning offer complementary characteristics that can benefit the
development of AI systems. Relational labelling or link prediction on knowledge
graphs has become one of the main problems in deep learning-based natural
language processing research. Moreover, other fields which make use of
neural-symbolic techniques may also benefit from such research endeavours.
There have been several efforts towards the identification of missing facts
from existing ones in knowledge graphs. Two lines of research try to predict
relations between two entities by considering either all known facts
connecting them or several paths of facts connecting them. We propose a
neural-symbolic graph neural network which applies learning over all the paths
by feeding the model with the embedding of the minimal subset of the knowledge
graph containing such paths. By learning to produce representations for
entities and facts corresponding to word embeddings, we show how the model can
be trained end-to-end to decode these representations and infer relations
between entities in a multitask approach. Our contribution is two-fold: we
propose a neural-symbolic methodology that leverages relational inference in
large graphs, and we demonstrate that such a neural-symbolic model is more
effective than path-based approaches. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
This work studies reinforcement learning in the Sim-to-Real setting, in which
an agent is first trained on a number of simulators before being deployed in
the real world, with the aim of decreasing the real-world sample complexity
requirement. Using a dynamic model known as a rich observation Markov decision
process (ROMDP), we formulate a theoretical framework for Sim-to-Real in the
situation where feedback in the real world is not available. We establish
real-world sample complexity guarantees that are smaller than what is currently
known for directly (i.e., without access to simulators) learning a ROMDP with
feedback. | [
"cs.LG",
"stat.ML"
] |
Deep learning-based models have been very successful in achieving
state-of-the-art results in many of the computer vision, speech recognition,
and natural language processing tasks in the last few years. These models seem
a natural fit for handling the ever-increasing scale of biometric recognition
problems, from cellphone authentication to airport security systems. Deep
learning-based models have increasingly been leveraged to improve the accuracy
of different biometric recognition systems in recent years. In this work, we
provide a comprehensive survey of more than 120 promising works on biometric
recognition (including face, fingerprint, iris, palmprint, ear, voice,
signature, and gait recognition), which deploy deep learning models, and show
their strengths and potentials in different applications. For each biometric,
we first introduce the available datasets that are widely used in the
literature and their characteristics. We then discuss several promising
deep learning works developed for that biometric and show their performance on
popular public benchmarks. We also discuss some of the main challenges of
using these models for biometric recognition, and possible future
directions to which research in this area is headed. | [
"cs.CV",
"cs.LG"
] |