text stringlengths 29–3.31k | label sequencelengths 1–11
---|---
Continuous-time event sequences represent discrete events occurring in
continuous time. Such sequences arise frequently in real life. Usually we
expect the sequences to follow some regular pattern over time. However,
sometimes these patterns may be interrupted by unexpected absences or
occurrences of events. Identifying these unexpected cases can be very
important, as they may point to abnormal situations that need human attention.
In this work, we study and develop methods for detecting outliers in
continuous-time event sequences, including unexpected absences and unexpected
occurrences of events. Since the patterns that event sequences tend to follow
may change in different contexts, we develop outlier detection methods based on
point processes that can take context information into account. Our methods are
based on Bayesian decision theory and hypothesis testing with theoretical
guarantees. To test the performance of the methods, we conduct experiments on
both synthetic data and real-world clinical data and show the effectiveness of
the proposed methods. | [
"cs.LG",
"stat.ML"
] |
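A minimal sketch of the hypothesis-testing idea behind such detectors, assuming the simplest possible null model — a homogeneous Poisson process with an intensity estimated from history — rather than the paper's context-dependent point processes and Bayesian decision rules:

```python
import numpy as np

def absence_pvalue(t_start, t_end, rate):
    """P-value of seeing *no* events in [t_start, t_end] under a
    homogeneous Poisson process with the given intensity (null model)."""
    # Under H0 the count in the window is Poisson(rate * window),
    # so P(count = 0) = exp(-rate * window).
    return np.exp(-rate * (t_end - t_start))

# Estimate the intensity from historical events, then test a quiet window.
history = np.array([0.4, 1.1, 1.9, 2.6, 3.2, 4.0])    # event times (hours)
rate_hat = len(history) / (history[-1] - history[0])  # ~1.67 events/hour

p = absence_pvalue(t_start=4.0, t_end=8.0, rate=rate_hat)
if p < 0.05:
    print(f"unexpected absence of events (p = {p:.4f})")
```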
In e-commerce platforms such as Amazon and TaoBao, ranking items in a search
session is a typical multi-step decision-making problem. Learning to rank (LTR)
methods have been widely applied to ranking problems. However, such methods
often treat the different ranking steps in a session as independent, even
though these steps may be highly correlated with each other. To better exploit
the correlation between different ranking steps, in this paper, we propose to use
reinforcement learning (RL) to learn an optimal ranking policy which maximizes
the expected accumulative rewards in a search session. Firstly, we formally
define the concept of search session Markov decision process (SSMDP) to
formulate the multi-step ranking problem. Secondly, we analyze the property of
SSMDP and theoretically prove the necessity of maximizing accumulative rewards.
Lastly, we propose a novel policy gradient algorithm for learning an optimal
ranking policy, which is able to deal with the problem of high reward variance
and unbalanced reward distribution of an SSMDP. Experiments are conducted in
simulation and TaoBao search engine. The results demonstrate that our algorithm
performs much better than online LTR methods, with more than 40% and 30% growth
in total transaction amount in the simulation and the real application,
respectively. | [
"cs.LG"
] |
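As a rough illustration of the core idea — maximizing accumulated session rewards rather than per-step objectives — here is a plain REINFORCE loop with reward-to-go on a toy session. This is not the paper's SSMDP algorithm; the linear softmax policy and the reward model are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d = 5, 4
theta = np.zeros((n_items, d))            # linear softmax scoring policy

def run_session(theta, feats, steps=3):
    """Roll out one search session; each step samples an item to rank
    first and observes a noisy reward (e.g. transaction amount)."""
    grads, rewards = [], []
    for _ in range(steps):
        scores = theta @ feats
        p = np.exp(scores - scores.max()); p /= p.sum()
        a = rng.choice(n_items, p=p)
        g = -p[:, None] * feats           # d log pi(a) / d theta ...
        g[a] += feats                     # ... = (1{i=a} - p_i) * feats
        grads.append(g)
        rewards.append(float(a == 0) + 0.1 * rng.standard_normal())
    return grads, rewards

for _ in range(200):                      # REINFORCE over sessions
    feats = rng.standard_normal(d)
    grads, rews = run_session(theta, feats)
    returns = np.cumsum(rews[::-1])[::-1] # accumulated reward-to-go
    for g, ret in zip(grads, returns):
        theta += 0.1 * ret * g            # ascend the expected return
```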
Despite the wide applications of Adam in reinforcement learning (RL), the
theoretical convergence of Adam-type RL algorithms has not been established.
This paper provides the first such convergence analysis for two fundamental RL
algorithms of policy gradient (PG) and temporal difference (TD) learning that
incorporate AMSGrad updates (a standard alternative to Adam in theoretical
analysis), referred to as PG-AMSGrad and TD-AMSGrad, respectively. Moreover,
our analysis focuses on Markovian sampling for both algorithms. We show that
under general nonlinear function approximation, PG-AMSGrad with a constant
stepsize converges to a neighborhood of a stationary point at the rate of
$\mathcal{O}(1/T)$ (where $T$ denotes the number of iterations), and with a
diminishing stepsize converges exactly to a stationary point at the rate of
$\mathcal{O}(\log^2 T/\sqrt{T})$. Furthermore, under linear function
approximation, TD-AMSGrad with a constant stepsize converges to a neighborhood
of the global optimum at the rate of $\mathcal{O}(1/T)$, and with a diminishing
stepsize converges exactly to the global optimum at the rate of
$\mathcal{O}(\log T/\sqrt{T})$. Our study develops new techniques for analyzing
the Adam-type RL algorithms under Markovian sampling. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
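For reference, the AMSGrad update analyzed in such papers differs from Adam only in keeping a running maximum of the second-moment estimate; a minimal NumPy version (bias correction omitted, as is common in the theory) might look like:

```python
import numpy as np

def amsgrad_step(theta, grad, state, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    v_hat = np.maximum(v_hat, v)               # the key difference from Adam
    theta = theta - lr * m / (np.sqrt(v_hat) + eps)
    return theta, (m, v, v_hat)

# Toy quadratic objective with minimum at [1, -2, 0.5].
theta = np.zeros(3)
state = (np.zeros(3), np.zeros(3), np.zeros(3))
for _ in range(2000):
    grad = 2 * (theta - np.array([1.0, -2.0, 0.5]))
    theta, state = amsgrad_step(theta, grad, state)
print(theta)   # settles in a neighborhood of [1, -2, 0.5], as the theory predicts
```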
Object detection has seen tremendous progress in recent years. However,
current algorithms do not generalize well when tested on diverse data
distributions. We address the problem of incremental learning in object
detection on the India Driving Dataset (IDD). Our approach involves using
multiple domain-specific classifiers and effective transfer learning techniques
focused on avoiding catastrophic forgetting. We evaluate our approach on the
IDD and BDD100K datasets. Results show the effectiveness of our domain-adaptive
approach in the case of domain shifts across environments. | [
"cs.CV"
] |
When an AI system interacts with multiple users, it frequently needs to make
allocation decisions. For instance, a virtual agent decides whom to pay
attention to in a group setting, or a factory robot selects a worker to deliver
a part. Demonstrating fairness in decision making is essential for such systems
to be broadly accepted. We introduce a Multi-Armed Bandit algorithm with
fairness constraints, where fairness is defined as a minimum rate that a task
or a resource is assigned to a user. The proposed algorithm uses contextual
information about the users and the task and makes no assumptions on how the
losses capturing the performance of different users are generated. We provide
theoretical guarantees of performance and empirical results from simulation and
an online user study. The results highlight the benefit of accounting for
contexts in fair decision making, especially when users perform better in some
contexts and worse in others. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
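A toy sketch of one way to enforce such a minimum-rate constraint. The paper's contextual, assumption-free algorithm is more sophisticated; the loss process and the greedy fallback below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, T, min_rate = 3, 10_000, 0.15
counts = np.zeros(n_users)       # how often each user has been assigned
mean_loss = np.zeros(n_users)    # running mean loss per user

for t in range(1, T + 1):
    deficits = min_rate - counts / t
    if deficits.max() > 0:       # fairness floor first: serve the most
        user = int(np.argmax(deficits))           # deficient user
    else:                        # otherwise exploit: lowest estimated loss
        user = int(np.argmin(mean_loss + rng.normal(0, 0.05, n_users)))
    loss = rng.random() * (0.3 + 0.2 * user)      # unknown loss process
    counts[user] += 1
    mean_loss[user] += (loss - mean_loss[user]) / counts[user]

print(counts / T)   # every user is assigned at least ~min_rate of the time
```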
Proceedings of the BMVC 2019 Workshop on Interpretable and Explainable
Machine Vision, Cardiff, UK, September 12, 2019. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
The essence of multivariate sequential learning is all about how to extract
dependencies in data. These data sets, such as hourly medical records in
intensive care units and multi-frequency phonetic time series, often exhibit
not only strong serial dependencies in the individual components (the
"marginal" memory) but also non-negligible memories in the cross-sectional
dependencies (the "joint" memory). Because of the multivariate complexity in
the evolution of the joint distribution that underlies the data generating
process, we take a data-driven approach and construct a novel recurrent network
architecture, termed Memory-Gated Recurrent Networks (mGRN), with gates
explicitly regulating two distinct types of memories: the marginal memory and
the joint memory. Through a combination of comprehensive simulation studies and
empirical experiments on a range of public datasets, we show that our proposed
mGRN architecture consistently outperforms state-of-the-art architectures
targeting multivariate time series. | [
"cs.LG",
"q-fin.ST"
] |
In recent years, with the advent of deep-learning, face recognition has
achieved exceptional success. However, many of these deep face recognition
models perform much better in handling frontal faces compared to profile faces.
The major reason for poor performance in handling profile faces is that it
is inherently difficult to learn pose-invariant deep representations that are
useful for profile face recognition. In this paper, we hypothesize that the
profile face domain possesses a latent connection with the frontal face domain
in a latent feature subspace. We look to exploit this latent connection by
projecting the profile faces and frontal faces into a common latent subspace
and perform verification or retrieval in the latent domain. We leverage a
coupled conditional generative adversarial network (cpGAN) structure to find
the hidden relationship between the profile and frontal images in a latent
common embedding subspace. Specifically, the cpGAN framework consists of two
conditional GAN-based sub-networks, one dedicated to the frontal domain and the
other dedicated to the profile domain. Each sub-network aims to find a
projection that maximizes the pair-wise correlation between the two feature
domains in a common embedding feature subspace. The efficacy of our approach
compared with the state-of-the-art is demonstrated using the CFP, CMU
Multi-PIE, IJB-A, and IJB-C datasets. Additionally, we have also implemented a
coupled convolutional neural network (cpCNN) and an adversarial discriminative
domain adaptation network (ADDA) for profile to frontal face recognition. We
have evaluated the performance of cpCNN and ADDA and compared it with the
proposed cpGAN. Finally, we have also evaluated our cpGAN for reconstruction of
frontal faces from input profile faces contained in the VGGFace2 dataset. | [
"cs.CV",
"cs.AI"
] |
Cosplay has grown from its origins at fan conventions into a billion-dollar
global dress phenomenon. To facilitate imagination and reinterpretation from
animated images to real garments, this paper presents an automatic costume
image generation method based on image-to-image translation. Cosplay items can
be significantly diverse in their styles and shapes, and conventional methods
cannot be directly applied to the wide variation in clothing images that are
the focus of this study. To solve this problem, our method starts by collecting
and preprocessing web images to prepare a cleaned, paired dataset of the anime
and real domains. Then, we present a novel architecture for generative
adversarial networks (GANs) to facilitate high-quality cosplay image
generation. Our GAN consists of several effective techniques to fill the gap
between the two domains and improve both the global and local consistency of
generated images. Experiments demonstrated that, with two types of evaluation
metrics, the proposed GAN achieves better performance than existing methods. We
also showed that the images generated by the proposed method are more realistic
than those generated by conventional methods. Our code and pretrained
model are available on the web. | [
"cs.CV",
"eess.IV"
] |
Federated Learning (FL) is a framework which enables distributed model
training using a large corpus of decentralized training data. Existing methods
aggregate models disregarding their internal representations, which are crucial
for training models in vision tasks. System and statistical heterogeneity
(e.g., highly imbalanced and non-i.i.d. data) further harm model training. To
this end, we introduce a method, called FedProto, which computes client
deviations using margins of prototypical representations learned on distributed
data, and applies them to drive federated optimization via an attention
mechanism. In addition, we propose three methods to analyse statistical
properties of feature representations learned in FL, in order to elucidate the
relationship between accuracy, margins and feature discrepancy of FL models. In
experimental analyses, FedProto demonstrates state-of-the-art accuracy and
convergence rate across image classification and semantic segmentation
benchmarks by enabling maximum margin training of FL models. Moreover, FedProto
reduces uncertainty of predictions of FL models compared to the baseline. To
our knowledge, this is the first work evaluating FL models in dense prediction
tasks, such as semantic segmentation. | [
"cs.LG",
"cs.CV"
] |
Attention operators have been applied on both 1-D data like texts and
higher-order data such as images and videos. Use of attention operators on
high-order data requires flattening of the spatial or spatial-temporal
dimensions into a vector, which is assumed to follow a multivariate normal
distribution. This not only incurs excessive requirements on computational
resources, but also fails to preserve structures in data. In this work, we
propose to avoid flattening by assuming the data follow matrix-variate normal
distributions. Based on this new view, we develop Kronecker attention operators
(KAOs) that operate on high-order tensor data directly. More importantly, the
proposed KAOs lead to dramatic reductions in computational resources.
Experimental results show that our methods reduce the amount of required
computational resources by a factor of hundreds, with larger factors for
higher-dimensional and higher-order data. Results also show that networks with
KAOs outperform models without attention, while achieving performance
competitive with models using the original attention operators. | [
"cs.CV",
"cs.LG"
] |
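A toy NumPy sketch of the factorized idea — attending along rows and then columns of an (H, W) grid instead of flattening it to length H·W, dropping the cost from O(H²W²) to O(HW(H+W)). This illustrates the structure-preserving intuition, not the paper's exact matrix-variate derivation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def factorized_attention(q, k, v):
    """q, k, v: (H, W, d). Attend along the height axis, then the width
    axis, never materializing an (H*W, H*W) attention matrix."""
    d = q.shape[-1]
    # Row stage: for each column w, attend over the H positions.
    s_h = np.einsum('iwd,jwd->wij', q, k) / np.sqrt(d)
    out = np.einsum('wij,jwd->iwd', softmax(s_h), v)
    # Column stage: for each row h, attend over the W positions.
    s_w = np.einsum('hid,hjd->hij', q, k) / np.sqrt(d)
    return np.einsum('hij,hjd->hid', softmax(s_w), out)

x = np.random.randn(8, 8, 16)
print(factorized_attention(x, x, x).shape)   # (8, 8, 16)
```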
The convolution operation is a powerful tool for feature extraction and plays
a prominent role in the field of computer vision. However, when targeting
pixel-wise tasks like image fusion, a uniform convolution kernel applied across
different patches cannot fully capture the particularity of each pixel in the
image. In this paper, we propose a local adaptive
convolution (LAConv), which is dynamically adjusted to different spatial
locations. LAConv enables the network to pay attention to every specific local
area in the learning process. Besides, the dynamic bias (DYB) is introduced to
provide more possibilities for the depiction of features and make the network
more flexible. We further design a residual structure network equipped with the
proposed LAConv and DYB modules, and apply it to two image fusion tasks.
Experiments for pansharpening and hyperspectral image super-resolution (HISR)
demonstrate the superiority of our method over other state-of-the-art methods.
It is worth mentioning that LAConv is also competent for other
super-resolution tasks with less computational effort. | [
"cs.CV",
"eess.IV"
] |
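A minimal PyTorch sketch of a spatially adaptive convolution in this spirit: a small network predicts a distinct k×k kernel (shared across channels here, for brevity) plus a dynamic bias at every pixel. The module names and sizes are invented, and the paper's LAConv/DYB design differs in detail:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAdaptiveConv(nn.Module):
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        self.kernel_net = nn.Conv2d(channels, k * k, 3, padding=1)   # per-pixel kernels
        self.bias_net = nn.Conv2d(channels, channels, 3, padding=1)  # dynamic bias

    def forward(self, x):
        b, c, h, w = x.shape
        kernels = self.kernel_net(x).softmax(dim=1)           # (B, k*k, H, W)
        patches = F.unfold(x, self.k, padding=self.k // 2)    # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        out = (patches * kernels.view(b, 1, -1, h * w)).sum(dim=2)
        return out.view(b, c, h, w) + self.bias_net(x)

x = torch.randn(2, 16, 32, 32)
print(LocalAdaptiveConv(16)(x).shape)   # torch.Size([2, 16, 32, 32])
```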
Color channel selection is essential for accurate segmentation of sky and
clouds in images obtained from ground-based sky cameras. Most prior works in
cloud segmentation use threshold-based methods on color channels selected in an
ad-hoc manner. In this letter, we propose the use of rough sets for color
channel selection in visible-light images. Our proposed approach assesses color
channels with respect to their contribution for segmentation, and identifies
the most effective ones. | [
"cs.CV"
] |
This paper presents a novel approach to table structure recognition by
leveraging guided anchors. The concept differs from current
state-of-the-art approaches for table structure recognition that naively apply
object detection methods. In contrast to prior techniques, first, we estimate
the viable anchors for table structure recognition. Subsequently, these anchors
are exploited to locate the rows and columns in tabular images. Furthermore,
the paper introduces a simple and effective method that improves the results by
using tabular layouts in realistic scenarios. The proposed method is
exhaustively evaluated on the two publicly available datasets of table
structure recognition, i.e., ICDAR-2013 and TabStructDB. We accomplished
state-of-the-art results on the ICDAR-2013 dataset with an average F-Measure of
95.05$\%$ (94.6$\%$ for rows and 96.32$\%$ for columns) and surpassed the
baseline results on the TabStructDB dataset with an average F-Measure of
94.17$\%$ (94.08$\%$ for rows and 95.06$\%$ for columns). | [
"cs.CV"
] |
Recently, recognition of handwritten Bengali letters and digits has captured
a lot of attention among researchers in the AI community. In this work, we
propose a Convolutional Neural Network (CNN) based object detection model which
can recognize and evaluate handwritten Bengali mathematical expressions. This
method is able to detect multiple Bengali digits and operators and locate their
positions in the image. With that information, it is able to construct numbers
from a series of digits and perform mathematical operations on them. For the
object detection task, the state-of-the-art YOLOv3 algorithm was utilized. For
training and evaluating the model, we have engineered a new dataset 'Hishab'
which is the first Bengali handwritten digits dataset intended for object
detection. The model achieved an overall validation mean average precision
(mAP) of 98.6%. Also, the classification accuracy of the feature extractor
backbone CNN used in our model was tested on two publicly available Bengali
handwritten digits datasets: NumtaDB and CMATERdb. The backbone CNN achieved a
test set accuracy of 99.6252% on NumtaDB and 99.0833% on CMATERdb. | [
"cs.CV",
"cs.LG"
] |
Training machine learning models requires feeding input data for models to
ingest. Input pipelines for machine learning jobs are often challenging to
implement efficiently as they require reading large volumes of data, applying
complex transformations, and transferring data to hardware accelerators while
overlapping computation and communication to achieve optimal performance. We
present tf.data, a framework for building and executing efficient input
pipelines for machine learning jobs. The tf.data API provides operators which
can be parameterized with user-defined computation, composed, and reused across
different machine learning domains. These abstractions allow users to focus on
the application logic of data processing, while tf.data's runtime ensures that
pipelines run efficiently.
We demonstrate that input pipeline performance is critical to the end-to-end
training time of state-of-the-art machine learning models. tf.data delivers the
high performance required, while avoiding the need for manual tuning of
performance knobs. We show that tf.data features, such as parallelism, caching,
static optimizations, and non-deterministic execution are essential for high
performance. Finally, we characterize machine learning input pipelines for
millions of jobs that ran in Google's fleet, showing that input data processing
is highly diverse and consumes a significant fraction of job resources. Our
analysis motivates future research directions, such as sharing computation
across jobs and pushing data projection to the storage layer. | [
"cs.LG",
"cs.MS"
] |
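A representative tf.data pipeline showing the composable operators the paper describes; the file path and record schema are hypothetical:

```python
import tensorflow as tf

files = tf.data.Dataset.list_files("/data/train-*.tfrecord")  # hypothetical path

def parse(record):
    feats = tf.io.parse_single_example(record, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64)})
    return tf.io.decode_jpeg(feats["image"], channels=3), feats["label"]

def augment(image, label):
    image = tf.image.random_flip_left_right(image)
    return tf.image.resize(image, [224, 224]), label

ds = (tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)
      .map(parse, num_parallel_calls=tf.data.AUTOTUNE)    # parallel decoding
      .cache()                        # reuse decoded examples across epochs
      .shuffle(10_000)
      .map(augment, num_parallel_calls=tf.data.AUTOTUNE)  # fresh randomness each epoch
      .batch(256)
      .prefetch(tf.data.AUTOTUNE))    # overlap preprocessing with training
```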
Electronic Health Records (EHR) have been heavily used in modern healthcare
systems for recording patients' admission information to hospitals. Many
data-driven approaches employ temporal features in EHR for predicting specific
diseases, readmission times, or diagnoses of patients. However, most existing
predictive models cannot fully utilize EHR data, due to an inherent lack of
labels in supervised training for some temporal events. Moreover, it is hard
for existing works to simultaneously provide generic and personalized
interpretability. To address these challenges, we first propose a hyperbolic
embedding method with information flow to pre-train medical code
representations in a hierarchical structure. We incorporate these pre-trained
representations into a graph neural network to detect disease complications,
and design a multi-level attention method to compute the contributions of
particular diseases and admissions, thus enhancing personalized
interpretability. We present a new hierarchy-enhanced historical prediction
proxy task in our self-supervised learning framework to fully utilize EHR data
and exploit medical domain knowledge. We conduct a comprehensive set of
experiments and case studies on widely used publicly available EHR datasets to
verify the effectiveness of our model. The results demonstrate our model's
strengths in both predictive tasks and interpretable abilities. | [
"cs.LG",
"cs.AI"
] |
The benefits of utilizing spatial context in fast object detection algorithms
have been studied extensively. Detectors increase inference speed by doing a
single forward pass per image, which means they implicitly use contextual
reasoning for their predictions. However, one can show that an adversary can
design adversarial patches which do not overlap with any objects of interest in
the scene and exploit contextual reasoning to fool standard detectors. In this
paper, we examine this problem and design category-specific adversarial patches
which make a widely used object detector like YOLO blind to an attacker chosen
object category. We also show that limiting the use of spatial context during
object detector training improves robustness to such adversaries. We believe
the existence of context-based adversarial attacks is concerning, since the
adversarial patch can affect predictions without being in the vicinity of any
objects of interest. Hence, defending against such attacks becomes challenging
and we urge the research community to give attention to this vulnerability. | [
"cs.CV"
] |
Deep learning generates state-of-the-art semantic segmentation provided that
a large number of images together with pixel-wise annotations are available. To
alleviate the expensive data collection process, we propose a semi-supervised
domain adaptation method for the specific case of images with similar semantic
content but different pixel distributions. A network trained with supervision
on a past dataset is finetuned on the new dataset to conserve its feature
maps. The domain adaptation becomes a simple regression between feature maps
and does not require annotations on the new dataset. This method reaches
performances similar to classic transfer learning on the PASCAL VOC dataset
with synthetic transformations. | [
"cs.CV"
] |
Scene text recognition has received increased attention in the research
community. Text in the wild often possesses irregular arrangements, typically
including perspective text, curved text, and oriented text. Most existing
methods struggle to work well on irregular text, especially severely distorted
text. In this paper, we propose a novel Recurrent Calibration Network (RCN) for
irregular scene text recognition. The RCN progressively calibrates the
irregular text to boost the recognition performance. By decomposing the
calibration process into multiple steps, the irregular text can be calibrated
toward a regular form step by step. Besides, in order to avoid the accumulation of lost
information caused by inaccurate transformation, we further design a
fiducial-point refinement structure to keep the integrity of text during the
recurrent process. Instead of the calibrated images, the coordinates of
fiducial points are tracked and refined, which implicitly models the
transformation information. Based on the refined fiducial points, we estimate
the transformation parameters and sample from the original image at each step.
In this way, the original character information is preserved until the final
transformation. Such designs lead to optimal calibration results to boost the
performance of succeeding recognition. Extensive experiments on challenging
datasets demonstrate the superiority of our method, especially on irregular
benchmarks. | [
"cs.CV"
] |
The performance of federated learning systems is bottlenecked by
communication costs and training variance. The communication overhead problem
is usually addressed by three communication-reduction techniques, namely, model
compression, partial device participation, and periodic aggregation, at the
cost of increased training variance. Different from traditional distributed
learning systems, federated learning suffers from data heterogeneity (since the
devices sample their data from possibly different distributions), which induces
additional variance among devices during training. Various variance-reduced
training algorithms have been introduced to combat the effects of data
heterogeneity, although they usually consume additional communication resources to
deliver necessary control information. Additionally, data privacy remains a
critical issue in FL, and thus there have been attempts at bringing
Differential Privacy to this framework as a mediator between utility and
privacy requirements. This paper investigates the trade-offs between
communication costs and training variance under a resource-constrained
federated system theoretically and experimentally, and how communication
reduction techniques interplay in a differentially private setting. The results
provide important insights into designing practical privacy-aware federated
learning systems. | [
"cs.LG",
"cs.DC"
] |
A novel 3D shape classification scheme, based on collaborative representation
learning, is investigated in this work. A data-driven feature-extraction
procedure, taking the form of a simple projection operator, is at the core of
our methodology. Given a shape database, a graph encapsulating the
structural relationships among all the available shapes is first constructed
and then employed in defining low-dimensional sparse projections. The recently
introduced method of CRPs (collaborative representation based projections),
which is based on the L2-Graph, is the first variant included towards this
end. A second algorithm, which particularizes the CRPs to shape descriptors that
are inherently nonnegative, is also introduced as a potential alternative. In
both cases, the weights in the graph reflecting the database structure are
calculated so as to approximate each shape as a sparse linear combination of
the remaining dataset objects. By way of solving a generalized eigenanalysis
problem, a linear matrix operator is designed that will act as the feature
extractor. Two popular, inherently high dimensional descriptors, namely
ShapeDNA and Global Point Signature (GPS), are employed in our experimentations
with the SHREC10, SHREC11 and SHREC15 datasets, where shape recognition is cast
as a multi-class classification problem that is tackled by means of an SVM
(support vector machine) acting within the reduced dimensional space of the
crafted projections. The results are very promising and outperform
state-of-the-art methods, providing evidence of the highly discriminative nature of the
introduced 3D shape representations. | [
"cs.CV"
] |
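In outline, the projection step amounts to a generalized eigenproblem built from the reconstruction graph. A hedged NumPy/SciPy sketch follows; the weight matrix here is random, standing in for the sparse collaborative-representation weights, and the exact CRPs objective may differ:

```python
import numpy as np
from scipy.linalg import eigh

def graph_projection(X, W, dim):
    """X: (n_shapes, n_features) descriptors (e.g. ShapeDNA or GPS);
    W: (n, n) graph weights reconstructing each shape from the others.
    Returns a linear projection from a generalized eigenanalysis."""
    n = X.shape[0]
    M = (np.eye(n) - W).T @ (np.eye(n) - W)        # reconstruction penalty
    A = X.T @ M @ X
    B = X.T @ X + 1e-6 * np.eye(X.shape[1])        # regularized scatter
    _, vecs = eigh(A, B)                           # solves A p = lambda B p
    return vecs[:, :dim]                           # smallest eigenvectors

X = np.random.randn(50, 20)                        # placeholder descriptors
W = np.random.rand(50, 50); np.fill_diagonal(W, 0)
W /= W.sum(axis=1, keepdims=True)                  # row-normalized weights
P = graph_projection(X, W, dim=5)
features = X @ P             # low-dimensional features, e.g. fed to an SVM
```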
We propose and demonstrate a novel machine learning algorithm that assesses
pulmonary edema severity from chest radiographs. While large publicly available
datasets of chest radiographs and free-text radiology reports exist, only
limited numerical edema severity labels can be extracted from radiology
reports. This is a significant challenge in learning such models for image
classification. To take advantage of the rich information present in the
radiology reports, we develop a neural network model that is trained on both
images and free-text to assess pulmonary edema severity from chest radiographs
at inference time. Our experimental results suggest that the joint image-text
representation learning improves the performance of pulmonary edema assessment
compared to a supervised model trained on images only. We also show the use of
the text for explaining the image classification by the joint model. To the
best of our knowledge, our approach is the first to leverage free-text
radiology reports for improving the image model performance in this
application. Our code is available at
https://github.com/RayRuizhiLiao/joint_chestxray. | [
"cs.CV"
] |
Multi-view time series classification (MVTSC) aims to improve the performance
by fusing the distinctive temporal information from multiple views. Existing
methods mainly focus on fusing multi-view information at an early stage, e.g.,
by learning a common feature subspace among multiple views. However, these
early fusion methods may not fully exploit the unique temporal patterns of each
view in complicated time series. Moreover, the label correlations of multiple
views, which are critical to boosting performance, are usually under-explored
for the MVTSC problem. To address the aforementioned issues, we propose a Correlative
Channel-Aware Fusion (C2AF) network. First, C2AF extracts comprehensive and
robust temporal patterns by a two-stream structured encoder for each view, and
captures the intra-view and inter-view label correlations with a graph-based
correlation matrix. Second, a channel-aware learnable fusion mechanism is
implemented through convolutional neural networks to further explore the global
correlative patterns. These two steps are trained end-to-end in the proposed
C2AF network. Extensive experimental results on three real-world datasets
demonstrate the superiority of our approach over the state-of-the-art methods.
A detailed ablation study is also provided to show the effectiveness of each
model component. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Machine learning applications have become ubiquitous. They range from
machine-embedded control in production, over process optimization in diverse
areas (e.g., traffic, finance, sciences), to direct user interactions like
advertising and recommendations. This has led to an increased effort to make
machine learning trustworthy. Explainable and fair AI have already matured.
They address knowledgeable users and application engineers. However, there are
users that want to deploy a learned model in a similar way as their washing
machine. These stakeholders do not want to spend time understanding the model.
Instead, they want to rely on guaranteed properties. What are the relevant
properties? How can they be expressed to stakeholders without presupposing
machine learning knowledge? How can they be guaranteed for a certain
implementation of a model? These questions move far beyond the current
state-of-the-art and we want to address them here. We propose a unified
framework that certifies learning methods via care labels. They are easy to
understand and draw inspiration from well-known certificates like textile
labels or property cards of electronic devices. Our framework considers both
the machine learning theory and a given implementation. We test the
implementation's compliance with theoretical properties and bounds. In this
paper, we illustrate care labels by a prototype implementation of a
certification suite for a selection of probabilistic graphical models. | [
"cs.LG",
"cs.AI"
] |
We propose a novel type of balanced clustering algorithm to approximate
attention. Attention complexity is reduced from $O(N^2)$ to $O(N \log N)$,
where $N$ is the sequence length. Our algorithm, SMYRF, uses Locality Sensitive
Hashing (LSH) in a novel way by defining new Asymmetric transformations and an
adaptive scheme that produces balanced clusters. The biggest advantage of SMYRF
is that it can be used as a drop-in replacement for dense attention layers
without any retraining. In contrast, prior fast attention methods impose
constraints (e.g. queries and keys share the same vector representations) and
require re-training from scratch. We apply our method to pre-trained
state-of-the-art Natural Language Processing and Computer Vision models and we
report significant memory and speed benefits. Notably, SMYRF-BERT slightly
outperforms BERT on GLUE while using $50\%$ less memory. We also show that
SMYRF can be used interchangeably with dense attention before and after
training. Finally, we use SMYRF to train GANs with attention in high
resolutions. Using a single TPU, we were able to scale attention to 128x128=16k
and 256x256=65k tokens on BigGAN on CelebA-HQ. | [
"cs.LG"
] |
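A toy single-hash version of the clustering idea: project queries and keys onto a random direction, sort, and split into equal-size groups so each group attends over N/C keys instead of all N. SMYRF's asymmetric transformations and multi-round hashing are omitted:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def clustered_attention(q, k, v, n_clusters=4, seed=0):
    n, d = q.shape
    r = np.random.default_rng(seed).standard_normal(d)   # LSH-style direction
    order_q, order_k = np.argsort(q @ r), np.argsort(k @ r)
    out = np.empty_like(v)
    for qs, ks in zip(np.array_split(order_q, n_clusters),
                      np.array_split(order_k, n_clusters)):
        out[qs] = softmax(q[qs] @ k[ks].T / np.sqrt(d)) @ v[ks]
    return out

q = k = v = np.random.randn(64, 32)
exact = softmax(q @ k.T / np.sqrt(32)) @ v
print(np.abs(clustered_attention(q, k, v) - exact).mean())  # approximation gap
```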
There are two main lines of research on visual question answering (VQA):
compositional model with explicit multi-hop reasoning, and monolithic network
with implicit reasoning in the latent feature space. The former excels in
interpretability and compositionality but fails on real-world images, while the
latter usually achieves better performance due to model flexibility and
parameter efficiency. We aim to combine the two to build an interpretable
framework for real-world compositional VQA. In our framework, images and
questions are disentangled into scene graphs and programs, and a symbolic
program executor runs on them with full transparency to select the attention
regions, which are then iteratively passed to a visual-linguistic pre-trained
encoder to predict answers. Experiments conducted on the GQA benchmark
demonstrate that our framework outperforms compositional prior art and
achieves competitive accuracy among monolithic ones. With respect to the
validity, plausibility and distribution metrics, our framework surpasses others
by a considerable margin. | [
"cs.CV"
] |
This paper presents a new model-free algorithm for episodic finite-horizon
Markov Decision Processes (MDP), Adaptive Multi-step Bootstrap (AMB), which
enjoys a stronger gap-dependent regret bound. The first innovation is to
estimate the optimal $Q$-function by combining an optimistic bootstrap with an
adaptive multi-step Monte Carlo rollout. The second innovation is to select the
action with the largest confidence interval length among admissible actions
that are not dominated by any other actions. We show when each state has a
unique optimal action, AMB achieves a gap-dependent regret bound that only
scales with the sum of the inverse of the sub-optimality gaps. In contrast,
Simchowitz and Jamieson (2019) showed that all upper-confidence-bound (UCB)
algorithms suffer an additional $\Omega\left(\frac{S}{\Delta_{min}}\right)$
regret due to over-exploration where $\Delta_{min}$ is the minimum
sub-optimality gap and $S$ is the number of states. We further show that for
general MDPs, AMB suffers an additional $\frac{|Z_{mul}|}{\Delta_{min}}$
regret, where $Z_{mul}$ is the set of state-action pairs $(s,a)$ for which
$a$ is a non-unique optimal action for $s$. We complement our upper bound with
a lower bound showing the dependency on $\frac{|Z_{mul}|}{\Delta_{min}}$ is
unavoidable for any consistent algorithm. This lower bound also implies a
separation between reinforcement learning and contextual bandits. | [
"cs.LG"
] |
Transfer learning is focused on the reuse of supervised learning models in a
new context. Prominent applications can be found in robotics, image processing
or web mining. In these fields, the learning scenarios are naturally changing
but often remain related to each other, motivating the reuse of existing
supervised models. Current transfer learning models are neither sparse nor
interpretable. Sparsity is very desirable if the methods have to be used in
technically limited environments and interpretability is getting more critical
due to privacy regulations. In this work, we propose two transfer learning
extensions integrated into the sparse and interpretable probabilistic
classification vector machine. They are compared to standard benchmarks in the
field and show their relevance either by sparsity or performance improvements. | [
"cs.LG",
"stat.ML"
] |
Proteins are linked to almost every life process. Therefore, analyzing the
biological structure and properties of protein sequences is critical to the
exploration of life, as well as to disease detection and drug discovery.
Traditional protein analysis methods tend to be labor-intensive and
time-consuming. The emergence of deep learning models makes modeling data
patterns in large quantities of data possible. Interdisciplinary researchers
have begun to leverage deep learning methods to model large biological
datasets, e.g. using long short-term memory and convolutional neural network
for protein sequence classification. After millions of years of evolution,
evolutionary information is encoded in protein sequences. Inspired by the
similarity between natural language and protein sequences, we use large-scale
language models to model evolutionary-scale protein sequences, encoding protein
biology information in the learned representations. Significant improvements are observed in
both token-level and sequence-level tasks, demonstrating that our large-scale
model can accurately capture evolution information from pretraining on
evolutionary-scale individual sequences. Our code and model are available at
https://github.com/THUDM/ProteinLM. | [
"cs.LG",
"cs.CL",
"q-bio.BM"
] |
We present a context-aware object detection method based on a
retrieve-and-transform scene layout model. Given an input image, our approach
first retrieves a coarse scene layout from a codebook of typical layout
templates. In order to handle large layout variations, we use a variant of the
spatial transformer network to transform and refine the retrieved layout,
resulting in a set of interpretable and semantically meaningful feature maps of
object locations and scales. The above steps are implemented as a Layout
Transfer Network which we integrate into Faster RCNN to allow for joint
reasoning of object detection and scene layout estimation. Extensive
experiments on three public datasets verified that our approach provides
consistent performance improvements to the state-of-the-art object detection
baselines on a variety of challenging tasks in the traffic surveillance and the
autonomous driving domains. | [
"cs.CV"
] |
A reinforcement-learning-based non-uniform compressed sensing (NCS) framework
for time-varying signals is introduced. The proposed scheme, referred to as
RL-NCS, aims to boost the performance of signal recovery through an optimal and
adaptive distribution of sensing energy among two groups of coefficients of the
signal, referred to as the region of interest (ROI) coefficients and non-ROI
coefficients. The coefficients in ROI usually have greater importance and need
to be reconstructed with higher accuracy compared to non-ROI coefficients. In
order to accomplish this task, the ROI is predicted at each time step using two
specific approaches. One of these approaches incorporates a long short-term
memory (LSTM) network for the prediction. The other approach employs the
previous ROI information for predicting the next step ROI. Using the
exploration-exploitation technique, a Q-network learns to choose the best
approach for designing the measurement matrix. Furthermore, a joint loss
function is introduced for the efficient training of the Q-network as well as
the LSTM network. The result indicates a significant performance gain for our
proposed method, even for rapidly varying signals and a reduced number of
measurements. | [
"cs.LG",
"cs.IT",
"eess.SP",
"math.IT"
] |
Reinforcement learning algorithms struggle when the reward signal is very
sparse. In these cases, naive random exploration methods essentially rely on a
random walk to stumble onto a rewarding state. Recent works utilize intrinsic
motivation to guide the exploration via generative models, predictive forward
models, or discriminative modeling of novelty. We propose EMI, an exploration
method that constructs an embedding representation of states and actions that
does not rely on generative decoding of the full observation but instead
extracts predictive signals that can be used to guide exploration based on
forward prediction in the representation space. Our experiments show
competitive results on challenging locomotion tasks with continuous control and
on image-based exploration tasks with discrete actions on Atari. The source
code is available at https://github.com/snu-mllab/EMI . | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Tracking multiple objects individually differs from tracking groups of
related objects. When an object is a part of the group, its trajectory depends
on the trajectories of the other group members. Most of the current
state-of-the-art trackers follow the approach of tracking each object
independently, with a mechanism to handle overlapping trajectories where
necessary. Such an approach does not take inter-object relations into account,
which may cause unreliable tracking for the members of the groups, especially
in crowded scenarios, where individual cues become unreliable due to
occlusions. To overcome these limitations and to extend such trackers to
crowded scenes, we propose a plug-in Relation Encoding Module (REM). REM
encodes relations between tracked objects by running a message passing over a
corresponding spatio-temporal graph, computing relation embeddings for the
tracked objects. Our experiments on MOT17 and MOT20 demonstrate that the
baseline tracker improves its results after a simple extension with REM. The
proposed module allows for tracking severely or even fully occluded objects by
utilizing relational cues. | [
"cs.CV"
] |
Mixture-of-Experts (MoE) with sparse conditional computation has proven to be
an effective architecture for scaling attention-based models to more parameters
with comparable computation cost. In this paper, we propose Sparse-MLP, scaling
the recent MLP-Mixer model with sparse MoE layers, to achieve a more
computation-efficient architecture. We replace a subset of dense MLP blocks in
the MLP-Mixer model with Sparse blocks. In each Sparse block, we apply two
stages of MoE layers: one with MLP experts mixing information within channels
along the image patch dimension, and one with MLP experts mixing information
within patches along the channel dimension. Besides, to reduce the computational cost of
routing and improve expert capacity, we design Re-represent layers in each
Sparse block. These layers are to re-scale image representations by two simple
but effective linear transformations. When pre-training on ImageNet-1k with
MoCo v3 algorithm, our models can outperform dense MLP models by 2.5\% on
ImageNet Top-1 accuracy with fewer parameters and computational cost. On
small-scale downstream image classification tasks, i.e., CIFAR-10 and CIFAR-100,
our Sparse-MLP can still achieve better performance than baselines. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
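For orientation, a generic top-1-routed sparse MoE layer in PyTorch (not the paper's two-stage Sparse block or its Re-represent layers): each token is dispatched to one MLP expert, so parameters scale with the number of experts while per-token compute stays roughly constant:

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    def __init__(self, dim, n_experts=4, hidden=128):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                          nn.Linear(hidden, dim))
            for _ in range(n_experts)])

    def forward(self, x):                        # x: (tokens, dim)
        gates = self.router(x).softmax(dim=-1)
        top_gate, top_idx = gates.max(dim=-1)    # top-1 routing decision
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top_idx == e                   # tokens routed to expert e
            if sel.any():
                out[sel] = top_gate[sel].unsqueeze(1) * expert(x[sel])
        return out

x = torch.randn(16, 64)
print(Top1MoE(64)(x).shape)   # torch.Size([16, 64])
```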
Multi-view deep neural network is perhaps the most successful approach in 3D
shape classification. However, the fusion of multi-view features based on max
or average pooling lacks a view selection mechanism, limiting its application
in, e.g., multi-view active object recognition by a robot. This paper presents
VERAM, a recurrent attention model capable of actively selecting a sequence of
views for highly accurate 3D shape classification. VERAM addresses an important
issue commonly found in existing attention-based models, i.e., the unbalanced
training of the subnetworks corresponding to next view estimation and shape
classification. The classification subnetwork is easily overfitted while the
view estimation one is usually poorly trained, leading to a suboptimal
classification performance. This is surmounted by three essential
view-enhancement strategies: 1) enhancing the information flow of gradient
backpropagation for the view estimation subnetwork, 2) devising a highly
informative reward function for the reinforcement training of view estimation
and 3) formulating a novel loss function that explicitly circumvents view
duplication. Taking grayscale images as input and AlexNet as the CNN
architecture, VERAM with 9 views achieves instance-level and class-level
accuracies of 95.5% and 95.3% on ModelNet10, and 93.7% and 92.1% on ModelNet40;
both are state-of-the-art performance under the same number of views. | [
"cs.CV",
"cs.GR"
] |
Autonomous driving vehicles and robotic systems rely on accurate perception
of their surroundings. Scene understanding is one of the crucial components of
perception modules. Among all available sensors, LiDARs are one of the
essential sensing modalities of autonomous driving systems due to their active
sensing nature with high resolution of sensor readings. Accurate and fast
semantic segmentation methods are needed to fully utilize LiDAR sensors for
scene understanding. In this paper, we present Lite-HDSeg, a novel real-time
convolutional neural network for semantic segmentation of full $3$D LiDAR point
clouds. Lite-HDSeg achieves the best accuracy vs. computational complexity
trade-off on the SemanticKITTI benchmark and is designed on the basis of a new
encoder-decoder architecture with light-weight harmonic dense convolutions as
its core. Moreover, we introduce ICM, an improved global contextual module to
capture multi-scale contextual features, and MCSPN, a multi-class Spatial
Propagation Network to further refine the semantic boundaries. Our experimental
results show that the proposed method outperforms state-of-the-art real-time
semantic segmentation approaches, and is thus suitable for robotic
and autonomous driving applications. | [
"cs.CV"
] |
Identifying common patterns among events is a key ability in human and
machine perception, as it underlies intelligent decision making. We propose an
approach for learning semantic relational set abstractions on videos, inspired
by human learning. We combine visual features with natural language supervision
to generate high-level representations of similarities across a set of videos.
This allows our model to perform cognitive tasks such as set abstraction (which
general concept is in common among a set of videos?), set completion (which new
video goes well with the set?), and odd one out detection (which video does not
belong to the set?). Experiments on two video benchmarks, Kinetics and
Multi-Moments in Time, show that robust and versatile representations emerge
when learning to recognize commonalities among sets. We compare our model to
several baseline algorithms and show that significant improvements result from
explicitly learning relational abstractions with semantic supervision. | [
"cs.CV"
] |
This paper presents a novel neural network training approach for faster
convergence and better generalization abilities in deep reinforcement learning.
Particularly, we focus on the enhancement of training and evaluation
performance in reinforcement learning algorithms by systematically reducing
gradient variance and thereby providing a more targeted learning process. The
proposed method, which we term Gradient Monitoring (GM), is an approach to
steer the learning in the weight parameters of a neural network based on the
dynamic development and feedback from the training process itself. We propose
different variants of the GM methodology, which have been proven to increase the
underlying performance of the model. One of the proposed variants, Momentum
with Gradient Monitoring (M-WGM), allows for a continuous adjustment of the
quantum of back-propagated gradients in the network based on certain learning
parameters. We further enhance the method with Adaptive Momentum with Gradient
Monitoring (AM-WGM) method which allows for automatic adjustment between
focused learning of certain weights versus a more dispersed learning depending
on the feedback from the rewards collected. As a by-product, it also allows for
automatic derivation of the required deep network sizes during training as the
algorithm automatically freezes trained weights. The approach is applied to two
discrete tasks (a Multi-Robot Coordination problem and Atari games) and one
continuous control task (MuJoCo) using Advantage Actor-Critic (A2C) and Proximal Policy
Optimization (PPO) respectively. The results obtained particularly underline
the applicability and performance improvements of the methods in terms of
generalization capability. | [
"cs.LG",
"stat.ML"
] |
Conditional generative adversarial networks (cGANs) have gained considerable
attention in recent years due to their class-wise controllability
and superior quality on complex generation tasks. We introduce a simple yet
effective approach to improving cGANs by measuring the discrepancy between the
data distribution and the model distribution on given samples. The proposed
measure, coined the gap of log-densities (GOLD), provides an effective
self-diagnosis for cGANs while being efficiently computed from the
discriminator. We propose three applications of the GOLD: example re-weighting,
rejection sampling, and active learning, which improve the training, inference,
and data selection of cGANs, respectively. Our experimental results demonstrate
that the proposed methods outperform corresponding baselines for all three
applications on different image datasets. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
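The intuition can be sketched for the unconditional case: for a (near-)optimal discriminator with sigmoid output, the pre-sigmoid logit equals log p_data(x) − log p_model(x), so the raw logit itself estimates the density gap and can drive example re-weighting. The clipping constant below is an arbitrary choice for the sketch, and the paper's class-conditional GOLD differs:

```python
import numpy as np

def log_density_gap(disc_logits):
    """For an optimal D with sigmoid output, logit(D(x)) =
    log p_data(x) - log p_model(x); the logit is the gap estimate."""
    return disc_logits

def reweight(disc_logits, clip=2.0):
    """Example re-weighting: up-weight samples the model under-covers."""
    gap = np.clip(log_density_gap(disc_logits), -clip, clip)  # stabilize
    w = np.exp(gap)
    return w / w.mean()                                       # normalize

print(reweight(np.array([-1.5, 0.0, 0.7, 2.4])))
```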
Recently, Deep Learning (DL) methods have shown an excellent performance in
image captioning and visual question answering. However, despite their
performance, DL methods do not learn the semantics of the words that are being
used to describe a scene, making it difficult to spot incorrect words used in
captions or to interchange words that have similar meanings. This work proposes
a combination of DL methods for object detection and natural language
processing to validate image captions. We test our method on the FOIL-COCO
data set, since it provides correct and incorrect captions for various images
using only objects represented in the MS-COCO image data set. Results show that
our method has a good overall performance, in some cases similar to the human
performance. | [
"cs.CV"
] |
Recent advances in multi-agent reinforcement learning have been largely
limited to training one model from scratch for every new task. The limitation
is due to restricted model architectures tied to fixed input and output
dimensions. This hinders the experience accumulation and transfer of the
learned agent over tasks with diverse levels of difficulty (e.g. 3 vs 3 or 5 vs
6 multi-agent games). In this paper, we make the first attempt to explore a
universal multi-agent reinforcement learning pipeline, designing one single
architecture to fit tasks with the requirement of different observation and
action configurations. Unlike previous RNN-based models, we utilize a
transformer-based model to generate a flexible policy by decoupling the policy
distribution from the intertwined input observation with an importance weight
measured by the merits of the self-attention mechanism. Compared to a standard
transformer block, the proposed model, named as Universal Policy Decoupling
Transformer (UPDeT), further relaxes the action restriction and makes the
multi-agent task's decision process more explainable. UPDeT is general enough
to be plugged into any multi-agent reinforcement learning pipeline, equipping
it with strong generalization abilities that enable handling multiple
tasks at a time. Extensive experiments on large-scale SMAC multi-agent
competitive games demonstrate that the proposed UPDeT-based multi-agent
reinforcement learning achieves significant improvements over
state-of-the-art approaches, demonstrating advantageous transfer capability in
terms of both performance and training speed (10 times faster). | [
"cs.LG",
"cs.AI"
] |
Training real-world neural network models to achieve high performance and
generalizability typically requires a substantial amount of labeled data,
spanning a broad range of variation. This data-labeling process can be both
labor and cost intensive. To achieve desirable predictive performance, a
trained model is typically applied into a domain where the data distribution is
similar to the training dataset. However, for many agricultural machine
learning problems, training datasets are collected at a specific location,
during a specific period in time of the growing season. Since agricultural
systems exhibit substantial variability in terms of crop type, cultivar,
management, seasonal growth dynamics, lighting condition, sensor type, etc., a
model trained from one dataset often does not generalize well across domains.
To enable more data efficient and generalizable neural network models in
agriculture, we propose a method that translates photorealistic agricultural
images from a synthetic 3D crop model domain into real-world crop domains. The
method uses a semantically constrained GAN (generative adversarial network) to
preserve the fruit position and geometry. We observe that a baseline CycleGAN
method generates visually realistic target domain images but does not preserve
fruit position information while our method maintains fruit positions well.
Image generation results in vineyard grape day and night images show the visual
outputs of our network are much better compared to a baseline network.
Incremental training experiments in vineyard grape detection tasks show that
the images generated by our method can significantly speed up the domain
adaptation process, increase performance for a given number of labeled images
(i.e. data efficiency), and decrease labeling requirements. | [
"cs.CV",
"cs.AI"
] |
Probabilistic point-set registration methods have been gaining more attention
for their robustness to noise, outliers and occlusions. However, these methods
tend to be much slower than the popular iterative closest point (ICP)
algorithms, which severely limits their usability. In this paper, we contribute
a novel probabilistic registration method that achieves state-of-the-art
robustness as well as substantially faster computational performance than
modern ICP implementations. This is achieved using a rigorous yet
computationally-efficient probabilistic formulation. Point-set registration is
cast as a maximum likelihood estimation and solved using the EM algorithm. We
show that with a simple augmentation, the E step can be formulated as a
filtering problem, allowing us to leverage advances in efficient Gaussian
filtering methods. We also propose a customized permutohedral filter for
improved efficiency while retaining sufficient accuracy for our task.
Additionally, we present a simple and efficient twist parameterization that
generalizes our method to the registration of articulated and deformable
objects. For articulated objects, the complexity of our method is almost
independent of the Degrees Of Freedom (DOFs), which makes it highly efficient
even for high DOF systems. The results demonstrate the proposed method
consistently outperforms many competitive baselines on a variety of
registration tasks. | [
"cs.CV"
] |
It is abundantly clear that time dependent data is a vital source of
information in the world. The challenge has been for applications in machine
learning to gain access to a considerable amount of quality data needed for
algorithm development and analysis. Modeling synthetic data using a Generative
Adversarial Network (GAN) has been at the heart of providing a viable solution.
Our work focuses on one-dimensional time series and explores the few-shot
approach, which is the ability of an algorithm to perform well with limited
data. This work attempts to ease the frustration by proposing a new
architecture, Time Series GAN (TSGAN), to model realistic time series data. We
evaluate TSGAN on 70 data sets from a benchmark time series database. Our
results demonstrate that TSGAN performs better than the competition both
quantitatively, using the Fréchet Inception Distance (FID) metric, and
qualitatively, when classification is used as the evaluation criterion. | [
"cs.LG",
"stat.ML"
] |
Increasing the mini-batch size for stochastic gradient descent offers
significant opportunities to reduce wall-clock training time, but there are a
variety of theoretical and systems challenges that impede the widespread
success of this technique. We investigate these issues, with an emphasis on
time to convergence and total computational cost, through an extensive
empirical analysis of network training across several architectures and problem
domains, including image classification, image segmentation, and language
modeling. Although it is common practice to increase the batch size in order to
fully exploit available computational resources, we find a substantially more
nuanced picture. Our main finding is that across a wide range of network
architectures and problem domains, increasing the batch size beyond a certain
point yields no decrease in wall-clock time to convergence for \emph{either}
train or test loss. This batch size is usually substantially below the capacity
of current systems. We show that popular training strategies for large batch
size optimization begin to fail before we can populate all available compute
resources, and we show that the point at which these methods break down depends
more on attributes like model architecture and data complexity than it does
directly on the size of the dataset. | [
"cs.LG",
"cs.DC",
"stat.ML"
] |
Iris recognition systems transform an iris image into a feature vector. The
seminal pipeline segments an image into iris and non-iris pixels, normalizes
this region into a fixed-dimension rectangle, and extracts features which are
stored and called a template (Daugman, 2009). This template is stored on a
system. A future reading of an iris can be transformed and compared against
template vectors to determine or verify the identity of an individual. As
templates are often stored together, they are a valuable target to an attacker.
We show how to invert templates across a variety of iris recognition systems.
That is, we show how to transform templates into realistic looking iris images
that are also deemed as the same iris by the corresponding recognition system.
Our inversion is based on a convolutional neural network architecture we call
RESIST (REconStructing IriSes from Templates). We apply RESIST to a traditional
Gabor filter pipeline, to a DenseNet (Huang et al., CVPR 2017) feature
extractor, and to a DenseNet architecture that works without normalization.
Both DenseNet feature extractors are based on the recent ThirdEye recognition
system (Ahmad and Fuller, BTAS 2019). When training and testing using the
ND-0405 dataset, reconstructed images demonstrate a rank-1 accuracy of 100%,
76%, and 96% respectively for the three pipelines. The core of our approach is
similar to an autoencoder. However, training the core standalone produced low
accuracy. The final architecture integrates the core into a generative
adversarial network (Goodfellow et al., NeurIPS 2014), producing higher accuracy. | [
"cs.CV",
"cs.CR"
] |
In this paper, we focus on category-level 6D pose and size estimation from
monocular RGB-D images. Previous methods suffer from inefficient category-level
pose feature extraction, which leads to low accuracy and slow inference. To
tackle this problem, we propose a fast shape-based network (FS-Net) with
efficient category-level feature extraction for 6D pose estimation. First, we
design an orientation-aware autoencoder with 3D graph convolution for latent
feature extraction. The learned latent feature is insensitive to point shift
and object size thanks to the shift and scale-invariance properties of the 3D
graph convolution. Then, to efficiently decode category-level rotation
information from the latent feature, we propose a novel decoupled rotation
mechanism that employs two decoders to complementarily access the rotation
information. Meanwhile, we estimate translation and size by two residuals,
which are the difference between the mean of object points and ground truth
translation, and the difference between the mean size of the category and
ground truth size, respectively. Finally, to increase the generalization
ability of FS-Net, we propose an online box-cage based 3D deformation mechanism
to augment the training data. Extensive experiments on two benchmark datasets
show that the proposed method achieves state-of-the-art performance in both
category- and instance-level 6D object pose estimation. Especially in
category-level pose estimation, without extra synthetic data, our method
outperforms existing methods by 6.3% on the NOCS-REAL dataset. | [
"cs.CV"
] |
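The residual parameterization of translation and size described above reduces to two additions at inference time. A minimal sketch under assumed names; the network that predicts the residuals is omitted.

```python
import numpy as np

def recover_translation_and_size(object_points, category_mean_size, delta_t, delta_s):
    """object_points: (N, 3) observed points; delta_t, delta_s: predicted residuals."""
    translation = object_points.mean(axis=0) + delta_t  # residual w.r.t. point centroid
    size = category_mean_size + delta_s                 # residual w.r.t. category mean size
    return translation, size
```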
Goal-conditioned policies are used in order to break down complex
reinforcement learning (RL) problems by using subgoals, which can be defined
either in state space or in a latent feature space. This can increase the
efficiency of learning by using a curriculum, and also enables simultaneous
learning and generalization across goals. A crucial requirement of
goal-conditioned policies is to be able to determine whether the goal has been
achieved. Having a notion of distance to a goal is thus a key component of
this approach. However, it is not straightforward to come up with an
appropriate distance, and in some tasks, the goal space may not even be known a
priori. In this work, we learn a distance-to-goal estimate, computed as the
number of actions that would need to be carried out to reach the goal, in a
self-supervised manner. Our method solves complex tasks without prior domain
knowledge in the online setting in three different scenarios in the context of
goal-conditioned policies: (a) the goal space is the same as the state space;
(b) the goal space is given, but an appropriate distance is unknown; and (c)
the state space is accessible, but only a subset of it represents desired
goals, and this subset is known a priori. We also propose a goal-generation
mechanism as a secondary contribution. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
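A simple way to realize the self-supervised distance labels described above is hindsight relabeling of the agent's own trajectories: any pair of visited states is a (state, goal) example whose label is the number of intervening actions. A sketch with hypothetical names; the regressor trained on these labels is omitted.

```python
import random

def action_count_pairs(trajectory, num_samples=64):
    """trajectory: list of states visited in order. Returns (state, goal, steps) labels."""
    pairs = []
    for _ in range(num_samples):
        i = random.randrange(len(trajectory) - 1)
        j = random.randrange(i + 1, len(trajectory))
        pairs.append((trajectory[i], trajectory[j], j - i))  # j - i actions were taken
    return pairs

# A regressor d(state, goal) fit on these labels estimates distance-to-goal;
# "goal achieved" can then be declared when d falls below a threshold.
```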
In the paper, we present the ADD-Lib, our efficient and easy-to-use framework
for Algebraic Decision Diagrams (ADDs). The focus of the ADD-Lib is not so much
on its efficient implementation of individual operations, which are taken from
other established ADD frameworks, but on its ease and flexibility, which arise
at two levels: the level of individual ADD-tools, which come with a dedicated,
user-friendly, web-based graphical user interface, and the meta level, where
such tools are specified. Both levels are described in the paper: the meta
level by explaining how we can construct an ADD-tool tailored for Random Forest
refinement and evaluation, and the accordingly generated Web-based
domain-specific tool, which we also provide as an artifact for cooperative
experimentation. In particular, the artifact allows readers to combine a given
Random Forest with their own ADDs regarded as expert knowledge and to
experience the corresponding effect. | [
"cs.LG",
"cs.AI",
"cs.PL",
"cs.SE"
] |
Optimization in Deep Learning is mainly guided by vague intuitions and strong
assumptions, with a limited understanding of how and why these work in
practice. To shed more light on this, our work provides a deeper understanding
of how SGD behaves by empirically analyzing the trajectory taken by SGD from a
line search perspective. Specifically, a costly quantitative analysis of the
full-batch loss along SGD trajectories of commonly used models trained on a
subset of CIFAR-10 is performed. Our core results include that the full-batch
loss along lines in the update step direction is highly parabolic. Furthermore,
we show that there exists a learning rate with which SGD always performs almost
exact line searches on the full-batch loss. Finally, we provide a different
perspective on why increasing the batch size has almost the same effect as
decreasing the learning rate by the same factor. | [
"cs.LG",
"math.OC"
] |
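The parabolic observation above can be checked empirically by sampling the full-batch loss along the normalized update direction and fitting a quadratic. The sketch below assumes hypothetical helpers for evaluating the full-batch loss and reading/writing flattened parameters.

```python
import numpy as np

def parabola_along_update(full_batch_loss, get_flat_params, set_flat_params,
                          direction, span=1.0, k=9):
    """Sample loss(theta + s * d) at k points and fit loss(s) ~= a s^2 + b s + c."""
    theta = get_flat_params()
    d = direction / np.linalg.norm(direction)
    steps = np.linspace(-span, span, k)
    losses = []
    for s in steps:
        set_flat_params(theta + s * d)
        losses.append(full_batch_loss())
    set_flat_params(theta)                      # restore the original weights
    a, b, c = np.polyfit(steps, losses, deg=2)
    return a, b, c, -b / (2 * a)                # last value: exact line-search minimum
```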
Learning low-dimensional representations for entities and relations in
knowledge graphs using contrastive estimation represents a scalable and
effective method for inferring connectivity patterns. A crucial aspect of
contrastive learning approaches is the choice of corruption distribution that
generates hard negative samples, which force the embedding model to learn
discriminative representations and find critical characteristics of observed
data. While earlier methods either employ overly simple corruption
distributions, i.e., uniform, yielding easy, uninformative negatives, or
sophisticated adversarial distributions with challenging optimization schemes,
they do not explicitly incorporate known graph structure, resulting in
suboptimal negatives.
In this paper, we propose Structure Aware Negative Sampling (SANS), an
inexpensive negative sampling strategy that utilizes the rich graph structure
by selecting negative samples from a node's k-hop neighborhood. Empirically, we
demonstrate that SANS finds semantically meaningful negatives and is
competitive with SOTA approaches while requiring no additional parameters or
difficult adversarial optimization. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
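A minimal form of the structure-aware sampling described above is a breadth-first search out to the k-hop shell followed by uniform sampling from it. The adjacency format and function names below are assumptions, not the SANS reference implementation.

```python
import random
from collections import deque

def k_hop_neighborhood(adj, node, k):
    """adj: dict node -> iterable of neighbors. Nodes within k hops, excluding node."""
    seen, frontier, hood = {node}, deque([(node, 0)]), set()
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                hood.add(v)
                frontier.append((v, d + 1))
    return hood

def sample_negative(adj, head, true_tail, k=2):
    """Corrupt the tail with a node near the head: a structurally hard negative."""
    candidates = k_hop_neighborhood(adj, head, k) - {true_tail}
    return random.choice(sorted(candidates)) if candidates else None
```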
This paper presents an automated pipeline for processing multi-view satellite
images into 3D digital surface models (DSMs). The proposed pipeline performs
automated geo-referencing and generates high-quality densely matched point
clouds. In particular, a novel approach is developed that fuses multiple depth
maps derived by stereo matching to generate high-quality 3D maps. By learning
critical configurations of stereo pairs from sample LiDAR data, we rank the
image pairs based on the proximity of the results to the sample data. Multiple
depth maps derived from individual image pairs are fused with an adaptive 3D
median filter that considers the image spectral similarities. We demonstrate
that the proposed adaptive median filter generally delivers better results
than a standard median filter, achieving an RMSE improvement of 0.36 meters in
the best case. Results and analysis are presented in detail. | [
"cs.CV",
"eess.IV"
] |
Computer vision coupled with Deep Learning (DL) techniques offers substantial
promise in the field of traffic control, monitoring, and law enforcement
activities. This paper presents a YOLOv4 object detection model in which the
Convolutional Neural Network (CNN) is trained and tuned to detect the license
plates of vehicles in Bangladesh and to recognize characters from the detected
license plates using Tesseract. We also present a Graphical User Interface
(GUI) based on Tkinter, a Python package. The license plate detection model
achieves a mean average precision (mAP) of 90.50% and runs on a single Tesla
T4 GPU at an average of 14 frames per second (fps) on real-time video
footage. | [
"cs.CV"
] |
Reinforcement learning would enjoy better success on real-world problems if
domain knowledge could be imparted to the algorithm by the modelers. Most
problems have both hidden state and unknown dynamics. Partially observable
Markov decision processes (POMDPs) allow for the modeling of both.
Unfortunately, they do not provide a natural framework in which to specify
knowledge about the domain dynamics. The designer must either admit to knowing
nothing about the dynamics or completely specify the dynamics (thereby turning
it into a planning problem). We propose a new framework called a partially
known Markov decision process (PKMDP) which allows the designer to specify
known dynamics while still leaving portions of the environment's dynamics
unknown. The model represents not only the environment dynamics but also the
agent's knowledge of the dynamics. We present a reinforcement learning algorithm
for this model based on importance sampling. The algorithm incorporates
planning based on the known dynamics and learning about the unknown dynamics.
Our results clearly demonstrate the ability to add domain knowledge and the
resulting benefits for learning. | [
"cs.LG",
"stat.ML"
] |
Semi-supervised learning has been an effective paradigm for leveraging
unlabeled data to reduce the reliance on labeled data. We propose CoMatch, a
new semi-supervised learning method that unifies dominant approaches and
addresses their limitations. CoMatch jointly learns two representations of the
training data, their class probabilities and low-dimensional embeddings. The
two representations interact with each other to jointly evolve. The embeddings
impose a smoothness constraint on the class probabilities to improve the
pseudo-labels, whereas the pseudo-labels regularize the structure of the
embeddings through graph-based contrastive learning. CoMatch achieves
state-of-the-art performance on multiple datasets. It achieves substantial
accuracy improvements on the label-scarce CIFAR-10 and STL-10. On ImageNet with
1% labels, CoMatch achieves a top-1 accuracy of 66.0%, outperforming FixMatch
by 12.6%. Furthermore, CoMatch achieves better representation learning
performance on downstream tasks, outperforming both supervised learning and
self-supervised learning. Code and pre-trained models are available at
https://github.com/salesforce/CoMatch. | [
"cs.LG",
"cs.CV"
] |
Numerous control and learning problems face the situation where sequences of
high-dimensional highly dependent data are available but no or little feedback
is provided to the learner, which makes any inference rather challenging. To
address this challenge, we formulate the following problem. Given a series of
observations $X_0,\dots,X_n$ coming from a large (high-dimensional) space
$\mathcal X$, find a representation function $f$ mapping $\mathcal X$ to a
finite space $\mathcal Y$ such that the series $f(X_0),\dots,f(X_n)$ preserves
as much information as possible about the original time-series dependence in
$X_0,\dots,X_n$. We show that, for stationary time series, the function $f$ can
be selected as the one maximizing a certain information criterion that we call
time-series information. Some properties of this function are investigated,
including its uniqueness and consistency of its empirical estimates.
Implications for the problem of optimal control are presented. | [
"cs.LG",
"q-bio.QM",
"stat.ML"
] |
Urban spatial-temporal flows prediction is of great importance to traffic
management, land use, public safety, etc. Urban flows are affected by several
complex and dynamic factors, such as patterns of human activities, weather,
events, and holidays. The datasets used to evaluate the flows come from
various sources in different domains, e.g., mobile phone data, taxi trajectory
data, metro/bus swiping data, bike-sharing data, and so on. To summarize these
methodologies of urban flows prediction, in this paper we first introduce four
main factors affecting urban flows. Second, in order to further analyze urban
flows, a preparation process for multi-source spatial-temporal data related to
urban flows is partitioned into three groups. Third, we choose the spatial-temporal
dynamic data as a case study for the urban flows prediction task. Fourth, we
analyze and compare some well-known and state-of-the-art flows prediction
methods in detail, classifying them into five categories: statistics-based,
traditional machine learning-based, deep learning-based, reinforcement
learning-based and transfer learning-based methods. Finally, we give open
challenges of urban flows prediction and an outlook in the future of this
field. This paper will help researchers find suitable methods and open
datasets for addressing urban spatial-temporal flow forecasting problems. | [
"cs.LG",
"stat.ML"
] |
Differential Dynamic Microscopy (DDM) is the combination of optical
microscopy with statistical analysis to obtain information about the dynamical
behaviour of a variety of samples spanning from soft matter physics to biology.
In DDM, the dynamical evolution of the samples is investigated separately at
different length scales and extracted from a set of images recorded at
different times. A specific result of interest is the structure function that
can be computed via spatial Fourier transforms and differences of signals. In
this work, we present an algorithm to efficiently process a set of images
according to the DDM analysis scheme. We benchmarked the new approach against
the state-of-the-art algorithm reported in previous work. The new
implementation computes the DDM analysis faster, thanks to an additional
Fourier transform in time instead of performing differences of signals. This
enables very fast analysis even on CPU-based machines. In order to test
the new code, we performed the DDM analysis over sets of more than 1000 images
with and without the help of GPU hardware acceleration. As an example, for
images of $512 \times 512$ pixels, the new algorithm is 10 times faster than
the previous GPU code. Without GPU hardware acceleration and for the same set
of images, we found that the new algorithm is 300 times faster than the old
one when both run only on the CPU. | [
"cs.CV",
"eess.IV",
"physics.app-ph",
"physics.data-an"
] |
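The temporal-FFT speed-up mentioned above follows from expanding the image structure function into two power terms and a time autocorrelation, the latter computable via the Wiener-Khinchin theorem. Below is a sketch of this computation; the normalization conventions are assumptions and may differ from the authors' code.

```python
import numpy as np

def ddm_structure_function(frames, max_lag):
    """frames: (T, H, W) real image stack; max_lag < T.
    Returns D(q, tau) of shape (max_lag, H, W)."""
    T = frames.shape[0]
    I_q = np.fft.fft2(frames, axes=(1, 2))          # spatial FFT of every frame
    power = np.abs(I_q) ** 2

    # Time autocorrelation of I(q, t) for each spatial mode, zero-padded to
    # avoid circular wrap-around (Wiener-Khinchin theorem).
    F = np.fft.fft(I_q, n=2 * T, axis=0)
    acf = np.fft.ifft(F * np.conj(F), axis=0)[:T].real

    csum = np.cumsum(power, axis=0)
    D = np.empty((max_lag,) + frames.shape[1:])
    for tau in range(1, max_lag + 1):
        n_pairs = T - tau
        # sum of |I(q, t+tau)|^2 + |I(q, t)|^2 over all valid pairs
        self_terms = (csum[-1] - csum[tau - 1]) + csum[n_pairs - 1]
        D[tau - 1] = (self_terms - 2.0 * acf[tau]) / n_pairs
    return D
```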
In recent years, the mobile Internet has accelerated the proliferation of
smart mobile devices. Mobile payment, mobile security, and privacy protection
have become the focus of widespread attention. Iris recognition has become a
high-security authentication technology in these fields and is widely used in
biometric authentication. The Convolutional Neural Network (CNN) is one of the
mainstream deep learning approaches for image recognition, but its noise
robustness is weak and it requires a certain amount of memory to train on
image classification tasks. Under these conditions, we put forward a
fine-tuned neural network model based on the Mask R-CNN and Inception V4
neural network models, which integrates iris detection, extraction, and
recognition into an overall iris recognition system. The proposed framework is
scalable and highly available; it not only learns part-whole relationships of
the iris image but also enhances the robustness of the whole framework.
Importantly, the proposed model can be trained on different spectra of
samples, such as Visible Wavelength (VW) and Near Infrared (NIR) iris
biometric databases. An average recognition accuracy of 99.10% is achieved
while executing on a Jetson Nano mobile edge computing device. | [
"cs.CV",
"cs.LG"
] |
We propose the first qualitative hypothesis characterizing the behavior of
visual transformation based self-supervision, called the VTSS hypothesis. Given
a dataset upon which a self-supervised task is performed while predicting
instantiations of a transformation, the hypothesis states that if the predicted
instantiations of the transformations are already present in the dataset, then
the representation learned will be less useful. The hypothesis was derived by
observing a key constraint in the application of self-supervision using a
particular transformation. This constraint, which we term transformation
conflict in this paper, forces a network to learn degenerate features, thereby
reducing the usefulness of the representation. The VTSS hypothesis helps us
identify transformations that have the potential to be effective as a
self-supervision task. Further, it helps to generally predict whether a
particular transformation based self-supervision technique would be effective
or not for a particular dataset. We provide extensive evaluations on CIFAR 10,
CIFAR 100, SVHN and FMNIST confirming the hypothesis and the trends it
predicts. We also propose novel cost-effective self-supervision techniques
based on translation and scale, which when combined with rotation outperforms
all transformations applied individually. Overall, this paper aims to shed
light on the phenomenon of visual transformation based self-supervision. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Deep learning based LiDAR odometry (LO) estimation has attracted increasing
research interest in the fields of autonomous driving and robotics. Existing
works feed consecutive LiDAR frames into neural networks as point clouds and
match pairs in the learned feature space. In contrast, motivated by the success
of image based feature extractors, we propose to transfer the LiDAR frames to
image space and reformulate the problem as image feature extraction. With the
help of scale-invariant feature transform (SIFT) for feature extraction, we are
able to generate matched keypoint pairs (MKPs) that can be precisely returned
to the 3D space. A convolutional neural network pipeline is designed for LiDAR
odometry estimation by extracted MKPs. The proposed scheme, namely LodoNet, is
then evaluated on the KITTI odometry estimation benchmark, achieving results
on par with or even better than the state-of-the-art. | [
"cs.CV",
"I.5.4"
] |
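The MKP extraction step described above can be prototyped directly with OpenCV once the LiDAR frames have been projected to image space. A minimal sketch; the projection itself and Lowe's ratio threshold are assumptions.

```python
import cv2

def matched_keypoint_pairs(img1, img2, ratio=0.75):
    """SIFT keypoints matched between two range-image projections of LiDAR frames."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        # Lowe's ratio test keeps only distinctive matches
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    # pixel coordinates, which can then be mapped back to 3D points
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```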
Graph embedding provides an efficient solution for graph analysis by
converting the graph into a low-dimensional space which preserves the structure
information. In contrast to the graph structure data, i.i.d. node embeddings
can be processed efficiently in terms of both time and space. Current
semi-supervised graph embedding algorithms assume the labelled nodes are given,
which may not always be true in the real world. Since manually labelling all
training data is infeasible, how to select the subset of training data to
label so as to maximize the graph analysis task performance is of great
importance. This motivates our proposed active graph embedding (AGE) framework,
in which we design a general active learning query strategy for any
semi-supervised graph embedding algorithm. AGE selects the most informative
nodes as the training labelled nodes based on the graphical information (i.e.,
node centrality) as well as the learnt node embedding (i.e., node
classification uncertainty and node embedding representativeness). Different
query criteria are combined with the time-sensitive parameters which shift the
focus from graph based query criteria to embedding based criteria as the
learning progresses. Experiments have been conducted on three public data sets
and the results verified the effectiveness of each component of our query
strategy and the power of combining them using time-sensitive parameters. Our
code is available online at: https://github.com/vwz/AGE. | [
"cs.LG",
"stat.ML"
] |
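The time-sensitive combination of query criteria described above can be sketched with a schedule that shifts weight from the graph-based criterion to the embedding-based ones. The linear schedule below is an illustrative assumption, not the paper's exact parameterization.

```python
import numpy as np

def age_query_ranking(centrality, uncertainty, representativeness, t, T):
    """centrality, uncertainty, representativeness: (N,) scores per unlabelled node.
    t, T: current and total training steps. Early on, graph centrality dominates;
    later, the embedding-based criteria take over."""
    alpha = t / float(T)
    score = (1 - alpha) * centrality + alpha * (uncertainty + representativeness) / 2
    return np.argsort(-score)  # most informative nodes first
```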
We propose a simple but strong baseline for time series classification from
scratch with deep neural networks. Our proposed baseline models are pure
end-to-end without any heavy preprocessing on the raw data or feature crafting.
The proposed Fully Convolutional Network (FCN) achieves performance comparable
to or better than other state-of-the-art approaches, and our exploration of
very deep neural networks with the ResNet structure is also competitive. The
global average
pooling in our convolutional model enables the exploitation of the Class
Activation Map (CAM) to find out the contributing region in the raw data for
the specific labels. Our models provide a simple choice for real-world
applications and a good starting point for future research. An overall
analysis is provided to discuss the generalization capability of our models,
learned features, network structures and the classification semantics. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
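Because the models above end in global average pooling followed by a linear classifier, the Class Activation Map reduces to projecting the pre-pooling features onto the class weights. A minimal sketch with illustrative array names:

```python
import numpy as np

def class_activation_map(conv_features, fc_weights, class_idx):
    """conv_features: (T, C) features over T time steps, before global average
    pooling; fc_weights: (num_classes, C) weights of the final linear layer.
    Returns a (T,) map of each time step's contribution to class_idx."""
    cam = conv_features @ fc_weights[class_idx]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-12)  # normalize to [0, 1]
```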
Access to high-quality data repositories and benchmarks have been
instrumental in advancing the state of the art in many experimental research
domains. While advanced analytics tasks over time series data have been gaining
lots of attention, lack of such community resources severely limits scientific
progress. In this paper, we present Exathlon, the first comprehensive public
benchmark for explainable anomaly detection over high-dimensional time series
data. Exathlon has been systematically constructed based on real data traces
from repeated executions of large-scale stream processing jobs on an Apache
Spark cluster. Some of these executions were intentionally disturbed by
introducing instances of six different types of anomalous events (e.g.,
misbehaving inputs, resource contention, process failures). For each of the
anomaly instances, ground truth labels for the root cause interval as well as
those for the extended effect interval are provided, supporting the development
and evaluation of a wide range of anomaly detection (AD) and explanation
discovery (ED) tasks. We demonstrate the practical utility of Exathlon's
dataset, evaluation methodology, and end-to-end data science pipeline design
through an experimental study with three state-of-the-art AD and ED techniques. | [
"cs.LG",
"cs.DB"
] |
We merge computational mechanics' definition of causal states
(predictively-equivalent histories) with reproducing-kernel Hilbert space
(RKHS) representation inference. The result is a widely-applicable method that
infers causal structure directly from observations of a system's behaviors
whether they are over discrete or continuous events or time. A structural
representation -- a finite- or infinite-state kernel $\epsilon$-machine -- is
extracted by a reduced-dimension transform that gives an efficient
representation of causal states and their topology. In this way, the system
dynamics are represented by a stochastic (ordinary or partial) differential
equation that acts on causal states. We introduce an algorithm to estimate the
associated evolution operator. Paralleling the Fokker-Planck equation, it
efficiently evolves causal-state distributions and makes predictions in the
original data space via an RKHS functional mapping. We demonstrate these
techniques, together with their predictive abilities, on discrete-time,
discrete-value infinite Markov-order processes generated by finite-state hidden
Markov models with (i) finite or (ii) uncountably-infinite causal states and
(iii) a continuous-time, continuous-value process generated by a
thermally-driven chaotic flow. The method robustly estimates causal structure
in the presence of varying external and measurement noise levels. | [
"cs.LG",
"cond-mat.stat-mech",
"stat.ML"
] |
Deep convolutional networks for semantic image segmentation typically require
large-scale labeled data, e.g. ImageNet and MS COCO, for network pre-training.
To reduce annotation efforts, self-supervised semantic segmentation is recently
proposed to pre-train a network without any human-provided labels. The key of
this new form of learning is to design a proxy task (e.g. image colorization),
from which a discriminative loss can be formulated on unlabeled data. Many
proxy tasks, however, lack the critical supervision signals that could induce
discriminative representation for the target image segmentation task. Thus
self-supervision's performance is still far from that of supervised
pre-training. In this study, we overcome this limitation by incorporating a
"mix-and-match" (M&M) tuning stage in the self-supervision pipeline. The
proposed approach is readily pluggable to many self-supervision methods and
does not use more annotated samples than the original process. Yet, it is
capable of boosting the performance of the target image segmentation task to
surpass its fully-supervised pre-trained counterpart. The improvement is made
possible by better harnessing the limited pixel-wise annotations in the target
dataset. Specifically, we first introduce the "mix" stage, which sparsely
samples and mixes patches from the target set to reflect rich and diverse local
patch statistics of target images. A "match" stage then forms a class-wise
connected graph, which can be used to derive a strong triplet-based
discriminative loss for fine-tuning the network. Our paradigm follows the
standard practice in existing self-supervised studies and no extra data or
label is required. With the proposed M&M approach, for the first time, a
self-supervision method can achieve comparable or even better performance
compared to its ImageNet pre-trained counterpart on both PASCAL VOC2012 dataset
and CityScapes dataset. | [
"cs.CV",
"cs.LG"
] |
We present an extension to the model-free anomaly detection algorithm,
Isolation Forest. This extension, named Extended Isolation Forest (EIF),
resolves issues with assignment of anomaly score to given data points. We
motivate the problem using heat maps for anomaly scores. These maps suffer from
artifacts generated by the criteria for branching operation of the binary tree.
We explain this problem in detail and demonstrate the mechanism by which it
occurs visually. We then propose two different approaches for improving the
situation. First, we propose transforming the data randomly before the creation
of each tree, which averages out the bias. The second, preferred, approach is
to allow the slicing of the data to use hyperplanes with random slopes. This
approach remedies the artifact seen in the
anomaly score heat maps. We show that the robustness of the algorithm is much
improved using this method by looking at the variance of scores of data points
distributed along constant level sets. We report AUROC and AUPRC for our
synthetic datasets, along with real-world benchmark datasets. We find no
appreciable difference in the rate of convergence nor in computation time
between the standard Isolation Forest and EIF. | [
"cs.LG",
"astro-ph.IM",
"stat.ML"
] |
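The random-slope split that the preferred approach above introduces can be stated in a few lines; the standard Isolation Forest corresponds to restricting the normal vector to a coordinate axis. A sketch, not the reference implementation:

```python
import numpy as np

def extended_split(X, rng):
    """One Extended Isolation Forest branching step on data X of shape (N, D).
    Points are separated by a hyperplane with a random slope and intercept."""
    normal = rng.normal(size=X.shape[1])                   # random slope
    intercept = rng.uniform(X.min(axis=0), X.max(axis=0))  # random point in the data range
    left_mask = (X - intercept) @ normal <= 0
    return X[left_mask], X[~left_mask]
```

Recursively applying this split until points are isolated yields one tree; anomaly scores then follow from average path lengths exactly as in the standard algorithm.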
While domain adaptation has been used to improve the performance of object
detectors when the training and test data follow different distributions,
previous work has mostly focused on two-stage detectors. This is because their
use of region proposals makes it possible to perform local adaptation, which
has been shown to significantly improve the adaptation effectiveness. Here, by
contrast, we target single-stage architectures, which are better suited to
resource-constrained detection than two-stage ones but do not provide region
proposals. To nonetheless benefit from the strength of local adaptation, we
introduce an attention mechanism that lets us identify the important regions on
which adaptation should focus. Our method gradually adapts the features from
global, image-level to local, instance-level. Our approach is generic and can
be integrated into any single-stage detector. We demonstrate this on standard
benchmark datasets by applying it to both SSD and YOLOv5. Furthermore, for
equivalent single-stage architectures, our method outperforms the
state-of-the-art domain adaptation techniques even though they were designed
for specific detectors. | [
"cs.CV"
] |
In the field of human-robot interaction, teaching learning agents from human
demonstrations via supervised learning has been widely studied and successfully
applied to multiple domains such as self-driving cars and robot manipulation.
However, the majority of the work on learning from human demonstrations
utilizes only behavioral information from the demonstrator, i.e. what actions
were taken, and ignores other useful information. In particular, eye gaze
information can give valuable insight towards where the demonstrator is
allocating their visual attention, and leveraging such information has the
potential to improve agent performance. Previous approaches have only studied
the utilization of attention in simple, synchronous environments, limiting
their applicability to real-world domains. This work proposes a novel imitation
learning architecture to learn concurrently from human action demonstration and
eye tracking data to solve tasks where human gaze information provides
important context. The proposed method is applied to a visual navigation task,
in which an unmanned quadrotor is trained to search for and navigate to a
target vehicle in a real-world, photorealistic simulated environment. When
compared to a baseline imitation learning architecture, results show that the
proposed gaze augmented imitation learning model is able to learn policies that
achieve significantly higher task completion rates, with more efficient paths,
while simultaneously learning to predict human visual attention. This research
aims to highlight the importance of multimodal learning of visual attention
information from additional human input modalities and encourages the community
to adopt them when training agents from human demonstrations to perform
visuomotor tasks. | [
"cs.LG",
"cs.HC",
"cs.RO"
] |
With the growth in social media, there is a huge amount of images of faces
available on the internet. Often, people use other people's pictures on their
own profile. Perceptual hashing is often used to detect whether two images are
identical. Therefore, it can be used to detect whether people are misusing
others' pictures. In perceptual hashing, a hash is calculated for a given
image, and a new test image is mapped to one of the existing hashes if
duplicate features are present. Therefore, it can be used as an image filter
to flag banned image content, even under adversarial attacks -- modifications
made on purpose to deceive the filter. For this reason, it is critical for
perceptual
hashing to be robust enough to take transformations such as resizing, cropping,
and slight pixel modifications into account. In this paper, we propose to
experiment with the effect of Gaussian blurring in perceptual hashing for
detecting the misuse of personal images, specifically face images. We
hypothesize that applying Gaussian blurring to the image before calculating
its hash will increase the accuracy of our filter in detecting adversarial
attacks consisting of image cropping, added text annotations, and image
rotation. | [
"cs.CV",
"I.4.1; I.4.9"
] |
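The hypothesized pipeline above -- blur first, then hash -- can be prototyped with PIL and the imagehash library. The blur radius and matching threshold below are assumptions to be tuned experimentally.

```python
from PIL import Image, ImageFilter
import imagehash

def blurred_phash(path, radius=2):
    """Perceptual hash computed after Gaussian blurring, so that small
    adversarial edits (crops, text overlays, slight rotations) are smoothed out."""
    img = Image.open(path).convert("L")
    img = img.filter(ImageFilter.GaussianBlur(radius=radius))
    return imagehash.phash(img)

# Two images are flagged as the same face when the Hamming distance is small:
# same = (blurred_phash("a.png") - blurred_phash("b.png")) <= 8
```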
Reinforcement learning policies based on deep neural networks are vulnerable
to imperceptible adversarial perturbations to their inputs, in much the same
way as neural network image classifiers. Recent work has proposed several
methods to improve the robustness of deep reinforcement learning agents to
adversarial perturbations based on training in the presence of these
imperceptible perturbations (i.e. adversarial training). In this paper, we
study the effects of adversarial training on the neural policy learned by the
agent. In particular, we follow two distinct parallel approaches to investigate
the outcomes of adversarial training on deep neural policies based on
worst-case distributional shift and feature sensitivity. For the first
approach, we compare the Fourier spectrum of minimal perturbations computed for
both adversarially trained and vanilla trained neural policies. Via experiments
in the OpenAI Atari environments we show that minimal perturbations computed
for adversarially trained policies are more focused on lower frequencies in the
Fourier domain, indicating a higher sensitivity of these policies to low
frequency perturbations. For the second approach, we propose a novel method to
measure the feature sensitivities of deep neural policies and we compare these
feature sensitivity differences in state-of-the-art adversarially trained deep
neural policies and vanilla trained deep neural policies. We believe our
results can be an initial step towards understanding the relationship between
adversarial training and different notions of robustness for neural policies. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
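The first analysis above amounts to inspecting where the energy of a minimal perturbation sits in frequency space. A minimal sketch for one observation; the perturbation itself is assumed to be precomputed by an attack.

```python
import numpy as np

def log_magnitude_spectrum(perturbation):
    """perturbation: (H, W) minimal adversarial perturbation for one frame.
    Returns the log magnitude spectrum with low frequencies shifted to the
    center, so energy concentrated near the center indicates sensitivity to
    low-frequency perturbations."""
    spectrum = np.fft.fftshift(np.fft.fft2(perturbation))
    return np.log1p(np.abs(spectrum))
```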
In this work we seek to bridge the concepts of topographic organization and
equivariance in neural networks. To accomplish this, we introduce the
Topographic VAE: a novel method for efficiently training deep generative models
with topographically organized latent variables. We show that such a model
indeed learns to organize its activations according to salient characteristics
such as digit class, width, and style on MNIST. Furthermore, through
topographic organization over time (i.e. temporal coherence), we demonstrate
how predefined latent space transformation operators can be encouraged for
observed transformed input sequences -- a primitive form of unsupervised
learned equivariance. We demonstrate that this model successfully learns sets
of approximately equivariant features (i.e. "capsules") directly from sequences
and achieves higher likelihood on correspondingly transforming test sequences.
Equivariance is verified quantitatively by measuring the approximate
commutativity of the inference network and the sequence transformations.
Finally, we demonstrate approximate equivariance to complex transformations,
expanding upon the capabilities of existing group equivariant neural networks. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
Inspired by how the human brain employs a higher number of neural pathways
when describing a highly focused subject, we show that deep attentive models
used for the main vision-language task of image captioning, could be extended
to achieve better performance. Image captioning bridges a gap between computer
vision and natural language processing. Automated image captioning is used as
a tool to eliminate the need for a human agent to create descriptive captions
for unseen images. Automated image captioning is challenging yet interesting.
One reason is that AI-based systems capable of generating
sentences that describe an input image could be used in a wide variety of tasks
beyond generating captions for unseen images found on web or uploaded to social
media. For example, in biology and medical sciences, these systems could
provide researchers and physicians with a brief linguistic description of
relevant images, potentially expediting their work. | [
"cs.CV"
] |
We introduce exploration potential, a quantity that measures how much a
reinforcement learning agent has explored its environment class. In contrast to
information gain, exploration potential takes the problem's reward structure
into account. This leads to an exploration criterion that is both necessary and
sufficient for asymptotic optimality (learning to act optimally across the
entire environment class). Our experiments in multi-armed bandits use
exploration potential to illustrate how different algorithms make the tradeoff
between exploration and exploitation. | [
"cs.LG",
"cs.AI"
] |
3D human shape and pose estimation from monocular images has been an active
area of research in computer vision, having a substantial impact on the
development of new applications, from activity recognition to creating virtual
avatars. Existing deep learning methods for 3D human shape and pose estimation
rely on relatively high-resolution input images; however, high-resolution
visual content is not always available in several practical scenarios such as
video surveillance and sports broadcasting. Low-resolution images in real
scenarios can vary in a wide range of sizes, and a model trained in one
resolution does not typically degrade gracefully across resolutions. Two common
approaches to solve the problem of low-resolution input are applying
super-resolution techniques to the input images which may result in visual
artifacts, or simply training one model for each resolution, which is
impractical in many realistic applications. To address the above issues, this
paper proposes a novel algorithm called RSC-Net, which consists of a
Resolution-aware network, a Self-supervision loss, and a Contrastive learning
scheme. The proposed network is able to learn the 3D body shape and pose across
different resolutions with a single model. The self-supervision loss encourages
scale-consistency of the output, and the contrastive learning scheme enforces
scale-consistency of the deep features. We show that both these new training
losses provide robustness when learning 3D shape and pose in a
weakly-supervised manner. Extensive experiments demonstrate that the RSC-Net
can achieve consistently better results than the state-of-the-art methods for
challenging low-resolution images. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
The task of point cloud upsampling aims to acquire dense and uniform point
sets from sparse and irregular point sets. Although significant progress has
been made with deep learning models, they require ground-truth dense point
sets as supervision, meaning they can only be trained on synthetic paired
training data and are not suitable for training on real-scanned sparse data.
However, it is expensive and tedious to obtain large scale paired sparse-dense
point sets for training from real scanned sparse data. To address this problem,
we propose a self-supervised point cloud upsampling network, named SPU-Net, to
capture the inherent upsampling patterns of points lying on the underlying
object surface. Specifically, we propose a coarse-to-fine reconstruction
framework, which contains two main components: point feature extraction and
point feature expansion, respectively. In the point feature extraction, we
integrate a self-attention module with a graph convolution network (GCN) to
simultaneously capture context information inside and among local regions. In
the point feature expansion, we introduce a hierarchically learnable folding
strategy to generate the upsampled point sets with learnable 2D grids.
Moreover, to further optimize the noisy points in the generated point sets, we
propose a novel self-projection optimization associated with uniform and
reconstruction terms, as a joint loss, to facilitate the self-supervised point
cloud upsampling. We conduct various experiments on both synthetic and
real-scanned datasets, and the results demonstrate that we achieve comparable
performance to the state-of-the-art supervised methods. | [
"cs.CV"
] |
Deep neural networks (DNNs) are typically optimized for a specific input
resolution (e.g. $224 \times 224$ px) and their adoption to inputs of higher
resolution (e.g., satellite or medical images) remains challenging, as it leads
to excessive computation and memory overhead, and may require substantial
engineering effort (e.g., streaming). We show that multi-scale hard-attention
can be an effective solution to this problem. We propose a novel architecture,
TNet, which traverses an image pyramid in a top-down fashion, visiting only the
most informative regions along the way. We compare our model against strong
hard-attention baselines, achieving a better trade-off between resources and
accuracy on ImageNet. We further verify the efficacy of our model on satellite
images (fMoW dataset) of size up to $896 \times 896$ px. In addition, our
hard-attention mechanism guarantees predictions with a degree of
interpretability, without extra cost beyond inference. We also show that we can
reduce data acquisition and annotation cost, since our model attends only to a
fraction of the highest resolution content, while using only image-level labels
without bounding boxes. | [
"cs.CV"
] |
Vision Transformers (ViTs) and MLPs signal further efforts on replacing
hand-wired features or inductive biases with general-purpose neural
architectures. Existing works empower the models by massive data, such as
large-scale pretraining and/or repeated strong data augmentations, and still
report optimization-related problems (e.g., sensitivity to initialization and
learning rate). Hence, this paper investigates ViTs and MLP-Mixers from the
lens of loss geometry, intending to improve the models' data efficiency at
training and generalization at inference. Visualization and Hessian reveal
extremely sharp local minima of converged models. By promoting smoothness with
a recently proposed sharpness-aware optimizer, we substantially improve the
accuracy and robustness of ViTs and MLP-Mixers on various tasks spanning
supervised, adversarial, contrastive, and transfer learning (e.g., +5.3\% and
+11.0\% top-1 accuracy on ImageNet for ViT-B/16 and Mixer-B/16, respectively,
with the simple Inception-style preprocessing). We show that the improved
smoothness is attributable to sparser active neurons in the first few layers. The
resultant ViTs outperform ResNets of similar size and throughput when trained
from scratch on ImageNet without large-scale pretraining or strong data
augmentations. They also possess more perceptive attention maps. | [
"cs.CV",
"cs.LG"
] |
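The sharpness-aware optimizer referenced above performs a two-step update: ascend to the worst nearby weights within a small L2 ball, take the gradient there, then apply a base update from the original weights. A generic PyTorch sketch of this optimizer family, not the authors' implementation; rho is a tunable radius.

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """One sharpness-aware minimization step around the current weights."""
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    with torch.no_grad():
        eps = [rho * g / (grad_norm + 1e-12) for g in grads]
        for p, e in zip(params, eps):
            p.add_(e)                        # climb to the sharpest nearby point
    model.zero_grad()
    loss_fn(model(x), y).backward()          # gradient at the perturbed weights
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                        # restore the original weights
    base_optimizer.step()                    # e.g. SGD/Adam update with the SAM gradient
    base_optimizer.zero_grad()
```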
Adversarial examples are inputs to machine learning models designed to cause
the model to make a mistake. They are useful for understanding the shortcomings
of machine learning models, interpreting their results, and for regularisation.
In NLP, however, most example generation strategies produce input text by using
known, pre-specified semantic transformations, requiring significant manual
effort and in-depth understanding of the problem and domain. In this paper, we
investigate the problem of automatically generating adversarial examples that
violate a set of given First-Order Logic constraints in Natural Language
Inference (NLI). We reduce the problem of identifying such adversarial examples
to a combinatorial optimisation problem, by maximising a quantity measuring the
degree of violation of such constraints and by using a language model for
generating linguistically-plausible examples. Furthermore, we propose a method
for adversarially regularising neural NLI models for incorporating background
knowledge. Our results show that, while the proposed method does not always
improve results on the SNLI and MultiNLI datasets, it significantly and
consistently increases the predictive accuracy on adversarially-crafted
datasets -- up to a 79.6% relative improvement -- while drastically reducing
the number of background knowledge violations. Furthermore, we show that
adversarial examples transfer among model architectures, and that the proposed
adversarial training procedure improves the robustness of NLI models to
adversarial examples. | [
"cs.LG",
"cs.AI",
"cs.CL",
"stat.ML"
] |
Cultural heritage is the asset of all the peoples of the world. The
preservation and inheritance of cultural heritage is conducive to the progress
of human civilization. In northwestern China, there is a world heritage site --
Mogao Grottoes -- that has a wealth of mural paintings showing the historical
cultures of ancient China. To study these historical cultures, one critical
procedure is to date the mural paintings, i.e., determining the era when they
were created. Until now, most mural paintings at Mogao Grottoes have been dated
by directly referring to the mural texts or historical documents. However, some
are still left with creation-era undetermined due to the lack of reference
materials. Considering that the drawing style of mural paintings changed over
the course of history and that drawing style can be learned and quantified
from painting data, we formulate the problem of mural-painting dating as a problem
of drawing-style classification. In fact, drawing styles can be expressed not
only in color or curvature, but also in some unknown forms -- the forms that
have not been observed. To this end, besides sophisticated color and shape
descriptors, a deep convolution neural network is designed to encode the
implicit drawing styles. 3860 mural paintings collected from 194 different
grottoes with determined creation-era labels are used to train the
classification model and build the dating method. In experiments, the proposed
dating method is applied to seven mural paintings which were previously dated
with controversies, and the exciting new dating results are approved by
Dunhuang experts. | [
"cs.CV"
] |
Data privacy has become an increasingly important issue in machine learning.
Many approaches have been developed to tackle this issue, e.g., cryptography
(Homomorphic Encryption, Differential Privacy, etc.) and collaborative training
(Secure Multi-Party Computation, Distributed Learning and Federated Learning).
These techniques have a particular focus on data encryption or secure local
computation. They transfer the intermediate information to the third-party to
compute the final result. Gradient exchange is commonly considered to be a
secure way of training a robust model collaboratively in deep learning.
However, recent research has demonstrated that sensitive information can be
recovered from the shared gradient. Generative Adversarial Networks (GANs), in
particular, have been shown to be effective in recovering such information.
However, GAN-based techniques require additional information, such as class
labels, which is generally unavailable for privacy-preserving learning. In this
paper, we show that, in Federated Learning (FL) system, image-based privacy
data can be easily recovered in full from the shared gradient only via our
proposed Generative Regression Neural Network (GRNN). We formulate the attack
to be a regression problem and optimise two branches of the generative model by
minimising the distance between gradients. We evaluate our method on several
image classification tasks. The results illustrate that our proposed GRNN
outperforms state-of-the-art methods with better stability, stronger
robustness, and higher accuracy. It also imposes no convergence requirement on
the global FL model. Moreover, we demonstrate information leakage using face
re-identification. Some defense strategies are also discussed in this work. | [
"cs.LG",
"cs.CR",
"cs.CV",
"I.2.6; I.5.1"
] |
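The core objective described above -- matching the gradient induced by generated data to the gradient shared in federated learning -- can be sketched as follows. The interface is hypothetical and omits the two-branch generator that produces the fake data and labels.

```python
import torch

def gradient_matching_loss(model, loss_fn, fake_x, fake_y, shared_grads):
    """Distance between the gradient produced by generated data and the shared
    gradient. Minimizing this w.r.t. the generator that produced (fake_x,
    fake_y) drives the generated data toward the private training data."""
    loss = loss_fn(model(fake_x), fake_y)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return sum(((g - sg) ** 2).sum() for g, sg in zip(grads, shared_grads))
```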
Deep generative models are able to suggest new organic molecules by
generating strings, trees, and graphs representing their structure. While such
models allow one to generate molecules with desirable properties, they give no
guarantees that the molecules can actually be synthesized in practice. We
propose a new molecule generation model, mirroring a more realistic real-world
process, where (a) reactants are selected, and (b) combined to form more
complex molecules. More specifically, our generative model proposes a bag of
initial reactants (selected from a pool of commercially-available molecules)
and uses a reaction model to predict how they react together to generate new
molecules. We first show that the model can generate diverse, valid and unique
molecules due to the useful inductive biases of modeling reactions.
Furthermore, our model allows chemists to interrogate not only the properties
of the generated molecules but also the feasibility of the synthesis routes. We
conclude by using our model to solve retrosynthesis problems, predicting a set
of reactants that can produce a target product. | [
"cs.LG",
"physics.comp-ph",
"stat.ML"
] |
The goal of computational color constancy is to preserve the perceived
colors of objects under different lighting conditions by removing the effect of
color casts caused by the scene's illumination. With the rapid development of
deep learning based techniques, significant progress has been made in image
semantic segmentation. In this work, we exploit the semantic information
together with the color and spatial information of the input image in order to
remove color casts. We train a convolutional neural network (CNN) model that
learns to estimate the illuminant color and gamma correction parameters based
on the semantic information of the given image. Experimental results show that
feeding the CNN with the semantic information leads to a significant
improvement in the results by reducing the error by more than 40%. | [
"cs.CV"
] |
Floorplans are commonly used to represent the layout of buildings. In
computer-aided design (CAD), floorplans are usually represented in the form of
hierarchical graph structures. Research works towards computational techniques
that facilitate the design process, such as automated analysis and
optimization, often use simple floorplan representations that ignore the
semantics of the space and do not take into account usage related analytics. We
present a floorplan embedding technique that uses an attributed graph to
represent the geometric information as well as design semantics and behavioral
features of the inhabitants as node and edge attributes. A Long Short-Term
Memory (LSTM) Variational Autoencoder (VAE) architecture is proposed and
trained to embed attributed graphs as vectors in a continuous space. A user
study is conducted to evaluate the coupling of similar floorplans retrieved
from the embedding space with respect to a given input (e.g., design layout).
The qualitative, quantitative and user-study evaluations show that our
embedding framework produces meaningful and accurate vector representations for
floorplans. In addition, our proposed model is a generative model. We studied
and showcased its effectiveness for generating new floorplans. We also release
the dataset that we have constructed and which, for each floorplan, includes
the design semantics attributes as well as simulation generated human
behavioral features for further study in the community. | [
"cs.LG",
"cs.AI"
] |
Temporal-difference learning with gradient correction (TDC) is a two
time-scale algorithm for policy evaluation in reinforcement learning. This
algorithm was initially proposed with linear function approximation, and was
later extended to the one with general smooth function approximation. The
asymptotic convergence for the on-policy setting with general smooth function
approximation was established in [bhatnagar2009convergent], however, the
finite-sample analysis remains unsolved due to challenges in the non-linear and
two-time-scale update structure, non-convex objective function and the
time-varying projection onto a tangent plane. In this paper, we develop novel
techniques to explicitly characterize the finite-sample error bound for the
general off-policy setting with i.i.d.\ or Markovian samples, and show that it
converges as fast as $\mathcal O(1/\sqrt T)$ (up to a factor of $\mathcal
O(\log T)$). Our approach can be applied to a wide range of value-based
reinforcement learning algorithms with general smooth function approximation. | [
"cs.LG"
] |
When designing new molecules with particular properties, it is not only
important what to make but crucially how to make it. These instructions form a
synthesis directed acyclic graph (DAG), describing how a large vocabulary of
simple building blocks can be recursively combined through chemical reactions
to create more complicated molecules of interest. In contrast, many current
deep generative models for molecules ignore synthesizability. We therefore
propose a deep generative model that better represents the real world process,
by directly outputting molecule synthesis DAGs. We argue that this provides
sensible inductive biases, ensuring that our model searches over the same
chemical space that chemists would also have access to, as well as
interpretability. We show that our approach is able to model chemical space
well, producing a wide range of diverse molecules, and allows for unconstrained
optimization of an inherently constrained problem: maximize certain chemical
properties such that discovered molecules are synthesizable. | [
"cs.LG",
"q-bio.BM",
"q-bio.QM"
] |
Tubular structure tracking is an important task in the fields of computer
vision and medical image analysis. The minimal paths-based approaches have
exhibited their powerful ability in tracing tubular structures, by which a
tubular structure can be naturally treated as a minimal geodesic path computed
with a suitable geodesic metric. However, existing minimal paths-based tracing
approaches still suffer from difficulties, for instance the shortcut and
short-branch combination problems, especially when dealing with images
involving complicated tubular tree structures or backgrounds. In this paper, we
introduce a new minimal paths-based model for minimally interactive tubular
structure centerline extraction in conjunction with a perceptual grouping
scheme. Basically, we take into account the prescribed tubular trajectories and
curvature-penalized geodesic paths to seek favourable shortest paths. The
proposed approach can benefit from the local smoothness prior on tubular
structures and the global optimality of the used graph-based path searching
scheme. Experimental results on both synthetic and real images demonstrate
that the proposed model indeed outperforms state-of-the-art minimal path-based
tubular structure tracing algorithms. | [
"cs.CV"
] |
In online reinforcement learning (RL), efficient exploration remains
particularly challenging in high-dimensional environments with sparse rewards.
In low-dimensional environments, where tabular parameterization is possible,
count-based upper confidence bound (UCB) exploration methods achieve minimax
near-optimal rates. However, it remains unclear how to efficiently implement
UCB in realistic RL tasks that involve non-linear function approximation. To
address this, we propose a new exploration approach via \textit{maximizing} the
deviation of the occupancy of the next policy from the explored regions. We add
this term as an adaptive regularizer to the standard RL objective to balance
exploration vs. exploitation. We pair the new objective with a provably
convergent algorithm, giving rise to a new intrinsic reward that adjusts
existing bonuses. The proposed intrinsic reward is easy to implement and
combine with other existing RL algorithms to conduct exploration. As a proof of
concept, we evaluate the new intrinsic reward on tabular examples across a
variety of model-based and model-free algorithms, showing improvements over
count-only exploration strategies. When tested on navigation and locomotion
tasks from MiniGrid and DeepMind Control Suite benchmarks, our approach
significantly improves sample efficiency over state-of-the-art methods. Our
code is available at https://github.com/tianjunz/MADE. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
In this paper, we present work in progress on activity recognition and
prediction in real homes using either binary sensor data or depth video data.
We present our field trial and set-up for collecting and storing the data, our
methods, and our current results. We compare the accuracy of predicting the
next binary sensor event using probabilistic methods and Long Short-Term Memory
(LSTM) networks, include the time information to improve prediction accuracy,
as well as predict both the next sensor event and its mean time of occurrence
using one LSTM model. We investigate transfer learning between apartments and
show that it is possible to pre-train the model with data from other apartments
and achieve good accuracy in a new apartment straight away. In addition, we
present preliminary results from activity recognition using low-resolution
depth video data from seven apartments, and classify four activities - no
movement, standing up, sitting down, and TV interaction - by using a relatively
simple processing method where we apply an Infinite Impulse Response (IIR)
filter to extract movements from the frames prior to feeding them to a
convolutional LSTM network for the classification. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
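The IIR movement-extraction step described above can be approximated with a first-order running-average background model. The filter coefficient below is an illustrative assumption, not the field-trial setting.

```python
import numpy as np

def iir_movement(frames, alpha=0.95):
    """frames: (T, H, W) low-resolution depth stack. A first-order IIR filter
    tracks the static background; the residual highlights movement, which can
    then be fed to a convolutional LSTM for classification."""
    background = frames[0].astype(np.float64)
    movements = []
    for frame in frames[1:]:
        movements.append(np.abs(frame - background))
        background = alpha * background + (1 - alpha) * frame
    return np.stack(movements)
```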
Both scientists and children make important structural discoveries, yet their
computational underpinnings are not well understood. Structure discovery has
previously been formalized as probabilistic inference about the right
structural form --- where form could be a tree, ring, chain, grid, etc. [Kemp &
Tenenbaum (2008). The discovery of structural form. PNAS, 105(31), 10687-10692].
While this approach can learn intuitive organizations, including a tree for
animals and a ring for the color circle, it assumes a strong inductive bias
that considers only these particular forms, and each form is explicitly
provided as initial knowledge. Here we introduce a new computational model of
how organizing structure can be discovered, utilizing a broad hypothesis space
with a preference for sparse connectivity. Given that the inductive bias is
more general, the model's initial knowledge shows little qualitative
resemblance to some of the discoveries it supports. As a consequence, the model
can also learn complex structures for domains that lack intuitive description,
as well as predict human property induction judgments without explicit
structural forms. By allowing form to emerge from sparsity, our approach
clarifies how both the richness and flexibility of human conceptual
organization can coexist. | [
"cs.LG",
"stat.ML"
] |
Federated learning improves data privacy and efficiency in machine learning
performed over networks of distributed devices such as mobile phones, IoT
devices, and wearables. Yet models trained with federated learning can still
fail to generalize to new devices due to the problem of domain shift. Domain
shift occurs when the labeled data collected by source nodes statistically
differs from the target node's unlabeled data. In this work, we present a
principled approach to the problem of federated domain adaptation, which aims
to align the representations learned among the different nodes with the data
distribution of the target node. Our approach extends adversarial adaptation
techniques to the constraints of the federated setting. In addition, we devise
a dynamic attention mechanism and leverage feature disentanglement to enhance
knowledge transfer. Empirically, we perform extensive experiments on several
image and text classification tasks and show promising results under
unsupervised federated domain adaptation setting. | [
"cs.CV",
"cs.LG"
] |
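One common way to realize the adversarial alignment described above is a domain discriminator trained through a gradient-reversal layer; the sketch below shows that baseline mechanism and omits the paper's dynamic attention and feature disentanglement, with all dimensions assumed.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad                        # flip gradients for the extractor

feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
domain_disc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

def domain_loss(src_x, tgt_x):
    # 0 = source node, 1 = target node; the discriminator tells nodes apart
    # while reversed gradients push the features of both nodes to align.
    feats = torch.cat([feature_extractor(src_x), feature_extractor(tgt_x)])
    labels = torch.cat([torch.zeros(len(src_x)), torch.ones(len(tgt_x))]).long()
    return nn.CrossEntropyLoss()(domain_disc(GradReverse.apply(feats)), labels)

loss = domain_loss(torch.randn(16, 256), torch.randn(16, 256))
```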
Convolutional neural networks have significantly improved the accuracy of
object detection. As convolutional neural networks become deeper, detection
accuracy improves further, but more floating-point computation is required. In
object detection, many researchers use knowledge distillation to improve the
accuracy of a small student network by transferring knowledge from a deeper and
larger teacher network. Most knowledge distillation methods require carefully
designed, complex cost functions and target two-stage object detection
algorithms. This paper proposes a clean and effective knowledge distillation
method for one-stage object detection. The feature maps generated by the
teacher network and the student network are used as real samples and fake
samples respectively, and adversarial training on both is used to improve the
performance of the student network in one-stage object detection. | [
"cs.CV"
] |
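The adversarial scheme above can be sketched as follows: a discriminator scores teacher feature maps as real and student feature maps as fake, and the student adds the resulting adversarial term to its detection loss; the shapes and discriminator architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

# simple feature-map discriminator (architecture assumed for illustration)
disc = nn.Sequential(
    nn.Conv2d(256, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
teacher_feat = torch.randn(4, 256, 32, 32)   # from the frozen teacher
student_feat = torch.randn(4, 256, 32, 32, requires_grad=True)

# discriminator step: real = teacher, fake = student
d_loss = bce(disc(teacher_feat), torch.ones(4, 1)) + \
         bce(disc(student_feat.detach()), torch.zeros(4, 1))

# student (generator) step: fool the discriminator; this term would be
# added to the usual one-stage detection loss
g_loss = bce(disc(student_feat), torch.ones(4, 1))
```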
Communication is an important factor that enables agents to work cooperatively
in multi-agent reinforcement learning (MARL). Most previous work uses
continuous message communication, whose high representational capacity comes at
the expense of interpretability. Allowing agents to learn their own discrete
message communication protocols, as has emerged in a variety of domains, can
increase interpretability for human designers and other agents. This paper
proposes a method to generate discrete messages analogous to human language,
and achieves communication through a broadcast-and-listen mechanism based on
self-attention. We show that discrete message communication has performance
comparable to continuous message communication but with a much smaller
vocabulary size. Furthermore, we propose an approach that allows humans to
interactively send discrete messages to agents. | [
"cs.LG",
"cs.AI",
"cs.MA"
] |
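A minimal sketch of discrete message generation plus a broadcast-and-listen step is shown below; it uses a straight-through Gumbel-softmax, a standard differentiable discretization that is an assumption here rather than the paper's stated mechanism, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_agents, obs_dim, vocab, d = 4, 16, 8, 32

speak = nn.Linear(obs_dim, vocab)            # per-agent message logits
embed = nn.Linear(vocab, d)                  # token embedding
listen = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

obs = torch.randn(1, n_agents, obs_dim)      # (batch, agents, obs)
tokens = F.gumbel_softmax(speak(obs), tau=1.0, hard=True)  # one-hot messages
msgs = embed(tokens)                         # (1, agents, d)

# broadcast-and-listen: every agent self-attends over all agents' messages
heard, _ = listen(msgs, msgs, msgs)
```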
Recently, clustering with a deep network framework has attracted the attention
of several researchers in the computer vision community. Deep frameworks have
gained extensive attention due to their efficiency and scalability on
large-scale and high-dimensional data. In this paper, we transform a supervised
CNN classifier architecture into an unsupervised clustering model, called
RECAL, which jointly learns a discriminative embedding subspace and cluster
labels. RECAL is made up of convolutional feature extraction layers followed by
fully connected unsupervised classifier layers, with a multinomial logistic
regression function (softmax) stacked on top of the classifier layers. We train
this network using a stochastic gradient descent (SGD) optimizer. However, the
successful implementation of our model revolves around the design of the loss
function. Our loss function uses the heuristic that a true partitioning entails
low entropy, given that the class distribution is not heavily skewed. This is a
trade-off between the situations of "skewed distribution" and "low entropy". To
handle this, we propose classification entropy and class entropy, the two
components of our loss function. In this approach, the mini-batch size should
be kept large. Experimental results indicate the consistent and competitive
behavior of our model in clustering well-known digit, multi-viewed object, and
face datasets. Moreover, we use this model to generate unsupervised patch
segmentations for multi-spectral LISS-IV images and observe that it is able to
distinguish built-up areas, wetland, vegetation, and water bodies in the
underlying scene. | [
"cs.CV"
] |
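The two loss components can be sketched directly from the description above, assuming `probs` are the softmax outputs for a large mini-batch; the relative weighting `lam` is an assumption.

```python
import torch

def recal_loss(probs, eps=1e-8, lam=1.0):
    # classification entropy: mean entropy of each sample's cluster posterior,
    # minimized to sharpen per-sample assignments
    cls_ent = -(probs * (probs + eps).log()).sum(dim=1).mean()
    # class entropy: entropy of the mini-batch's average cluster distribution,
    # maximized to discourage skewed (collapsed) partitions
    mean_p = probs.mean(dim=0)
    class_ent = -(mean_p * (mean_p + eps).log()).sum()
    return cls_ent - lam * class_ent

probs = torch.softmax(torch.randn(256, 10), dim=1)  # batch of 256, 10 clusters
loss = recal_loss(probs)
```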
It is difficult to imitate well in unknown states from only a small amount of
expert data and sampled data. Supervised learning methods such as Behavioral
Cloning do not require sampled data, but usually suffer from distribution
shift. Methods based on reinforcement learning, such as inverse reinforcement
learning and generative adversarial imitation learning (GAIL), can learn from
only a few expert demonstrations, but they often need to interact with the
environment. Soft Q imitation learning addresses these problems, and it was
shown that it can learn efficiently by combining Behavioral Cloning and soft
Q-learning with constant rewards. To make this algorithm more robust to
distribution shift, we propose Discriminator Soft Actor Critic (DSAC), which
uses a reward function based on adversarial inverse reinforcement learning
instead of constant rewards. We evaluate it on PyBullet environments with only
four expert trajectories. | [
"cs.LG",
"stat.ML"
] |
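A minimal sketch of the discriminator-based reward, in the style of adversarial inverse RL, might look as follows; the network sizes and exact reward form are assumptions.

```python
import torch
import torch.nn as nn

# discriminator over (state, action) pairs; dimensions assumed
disc = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))

def reward(state_action):
    # D close to 1 means "looks like expert"; log D - log(1 - D) yields a
    # reward that is positive on expert-like pairs and negative otherwise.
    d = torch.sigmoid(disc(state_action))
    return torch.log(d + 1e-8) - torch.log(1 - d + 1e-8)

sa = torch.randn(32, 24)          # batch of (state, action) pairs
r = reward(sa)                    # replaces the constant reward in soft Q-learning
```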
We propose a simple, intuitive yet powerful method for human-object
interaction (HOI) detection. HOIs are so diverse in spatial distribution in an
image that existing CNN-based methods face the following three major drawbacks:
they cannot leverage image-wide features due to CNN's locality, they rely on a
manually defined location-of-interest for the feature aggregation, which
sometimes does not cover contextually important regions, and they cannot help
but mix up the features for multiple HOI instances if they are located closely.
To overcome these drawbacks, we propose a transformer-based feature extractor,
in which an attention mechanism and query-based detection play key roles. The
attention mechanism is effective in aggregating contextually important
information image-wide, while the queries, which we design in such a way that
each query captures at most one human-object pair, can avoid mixing up the
features from multiple instances. This transformer-based feature extractor
produces such effective embeddings that the subsequent detection heads can be
fairly simple and intuitive. Extensive analysis reveals that the proposed
method successfully extracts contextually important features, and thus
outperforms existing methods by large margins (5.37 mAP on HICO-DET, and 5.7
mAP on V-COCO). The source codes are available at
$\href{https://github.com/hitachi-rd-cv/qpic}{\text{this https URL}}$. | [
"cs.CV",
"cs.LG"
] |
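A skeletal version of the query-based design might look as follows; the head layout mirrors the description above, while the dimensions and the vanilla nn.Transformer backbone are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

d, n_queries, n_obj_classes, n_verbs = 256, 100, 80, 117

transformer = nn.Transformer(d_model=d, batch_first=True)
queries = nn.Parameter(torch.randn(n_queries, d))   # each query: one HOI pair at most
heads = nn.ModuleDict({
    "human_box": nn.Linear(d, 4), "object_box": nn.Linear(d, 4),
    "object_cls": nn.Linear(d, n_obj_classes + 1),  # +1 for "no pair"
    "verb": nn.Linear(d, n_verbs)})

img_feats = torch.randn(1, 49, d)                   # flattened CNN feature map
emb = transformer(img_feats, queries.unsqueeze(0).expand(1, -1, -1))
outputs = {name: head(emb) for name, head in heads.items()}
```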
Real-world binary classification tasks are in many cases imbalanced, where
the minority class is much smaller than the majority class. This skewness is
challenging for machine learning algorithms as they tend to focus on the
majority and greatly misclassify the minority. Adding synthetic minority
samples to the dataset before training the model is a popular technique to
address this difficulty and is commonly achieved by interpolating minority
samples. Tabular datasets are often multi-modal and contain discrete
(categorical) features in addition to continuous ones which makes interpolation
of samples non-trivial. To address this, we propose a latent space
interpolation framework which (1) maps the multi-modal samples to a dense
continuous latent space using an autoencoder; (2) applies oversampling by
interpolation in the latent space; and (3) maps the synthetic samples back to
the original feature space. We defined metrics to directly evaluate the quality
of the minority data generated and showed that our framework generates better
synthetic data than the existing methods. Furthermore, the superior synthetic
data yields better prediction quality in downstream binary classification
tasks, as was demonstrated in extensive experiments with 27 publicly available
real-world datasets. | [
"cs.LG"
] |
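The three-step framework maps naturally onto a short sketch; the autoencoder below is randomly initialized for illustration and would be trained on the data in practice, with all dimensions assumed.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(30, 8), nn.ReLU(), nn.Linear(8, 4))
dec = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 30))

def oversample(minority_x, n_new):
    z = enc(minority_x)                               # (1) map to latent space
    i = torch.randint(0, len(z), (n_new,))
    j = torch.randint(0, len(z), (n_new,))
    alpha = torch.rand(n_new, 1)
    z_new = alpha * z[i] + (1 - alpha) * z[j]         # (2) interpolate pairs
    return dec(z_new)                                 # (3) map back to features

synthetic = oversample(torch.randn(50, 30), n_new=200)
```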
Time-series forecasting is an important task in both academia and industry,
applicable to many real forecasting problems such as stock, water-supply, and
sales prediction. In this paper, we study retailers' sales forecasting on
Tmall, the world's leading online B2C platform. By analyzing the data, we make
two main observations: sales exhibit seasonality once retailers are grouped
into categories, and sales follow a Tweedie distribution after transformation
(the target to forecast). Based on these observations, we design
two mechanisms for sales forecasting, i.e., seasonality extraction and
distribution transformation. First, we adopt Fourier decomposition to
automatically extract the seasonalities for different categories of retailers,
which can further be used as additional features for any established regression
algorithms. Second, we propose to optimize the Tweedie loss of sales after
logarithmic transformations. We apply these two mechanisms to classic
regression models, i.e., neural network and Gradient Boosting Decision Tree,
and the experimental results on Tmall dataset show that both mechanisms can
significantly improve the forecasting results. | [
"cs.LG",
"stat.AP",
"stat.ML"
] |
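The second mechanism can be sketched as a Tweedie deviance loss applied to log-transformed targets, usable as a custom objective for a neural network or GBDT; the power parameter p in (1, 2) is an assumption to be tuned.

```python
import torch

def tweedie_loss(pred_log_mu, y, p=1.5):
    # the model predicts log(mu) for numerical stability; y is the
    # (non-negative) transformed sales target
    mu = torch.exp(pred_log_mu)
    return torch.mean(-y * mu.pow(1 - p) / (1 - p) + mu.pow(2 - p) / (2 - p))

y = torch.rand(64) * 5        # e.g. log1p-transformed sales
pred = torch.randn(64)        # network output, interpreted as log(mu)
loss = tweedie_loss(pred, y)
```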
Computing environments are moving towards human-centered designs instead of
computer-centered designs, and humans tend to communicate a wealth of
information through affective states or expressions. Traditional Human Computer
Interaction (HCI) based systems ignore the bulk of the information communicated
through those affective states and just cater for the user's intentional input.
Generally, for evaluating and benchmarking different facial expression analysis
algorithms, standardized databases are needed to enable meaningful comparison.
In the absence of comparative tests on such standardized databases, it is
difficult to find the relative strengths and weaknesses of different facial
expression recognition algorithms. In this article we present a novel video
database of Children's Spontaneous facial Expressions (LIRIS-CSE). The proposed
video database contains six basic spontaneous facial expressions shown by 12
ethnically diverse children between the ages of 6 and 12 years, with a mean age
of 7.3 years. To the best of our knowledge, this database is the first of its
kind, as it records and shows spontaneous facial expressions of children.
Previously there were few databases of children's expressions, and all of them
show posed or exaggerated expressions, which differ from spontaneous or natural
expressions. Thus, this database will be a milestone for human behavior
researchers and an excellent resource for the vision community for benchmarking
and comparing results. In this article, we also propose a framework for
automatic expression recognition based on a convolutional neural network (CNN)
architecture with a transfer learning approach. The proposed architecture
achieved an average classification accuracy of 75% on our proposed database,
i.e., LIRIS-CSE. | [
"cs.CV",
"cs.LG"
] |
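A minimal sketch of the transfer-learning setup is shown below; the ImageNet-pretrained ResNet-18 backbone is an assumption, not necessarily the paper's architecture, and only the new six-way classification head is trained.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                     # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 6)  # 6 spontaneous expressions

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
x, y = torch.randn(4, 3, 224, 224), torch.randint(0, 6, (4,))
loss = nn.CrossEntropyLoss()(backbone(x), y)
```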