text | label
---|---|
Dense pixel matching is required for many computer vision algorithms such as
disparity, optical flow or scene flow estimation. Feature Pyramid Networks
(FPN) have proven to be a suitable feature extractor for CNN-based dense
matching tasks. FPN generates well localized and semantically strong features
at multiple scales. However, the generic FPN does not utilize its full
potential, as its localization accuracy is reasonable but limited. Thus, we
present ResFPN -- a multi-resolution feature pyramid network with multiple
residual skip connections, where at any scale, we leverage the information from
higher resolution maps for stronger and better localized features. In our
ablation study, we demonstrate the effectiveness of our novel architecture with
clearly higher accuracy than FPN. In addition, we verify the superior accuracy
of ResFPN in many different pixel matching applications on established datasets
like KITTI, Sintel, and FlyingThings3D. | [
"cs.CV"
]
|
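As a rough illustration of the residual skip-connection idea in the ResFPN abstract above, here is a minimal PyTorch-style sketch; the module name, channel layout, and the choice of a strided 3x3 convolution are assumptions, not the authors' implementation.

```python
import torch.nn as nn

class ResidualLateral(nn.Module):
    """Sketch: fuse a coarse pyramid map with a residual drawn from the
    next-higher-resolution map, so coarse scales inherit better-localized
    features (hypothetical layer choices)."""
    def __init__(self, channels):
        super().__init__()
        # Strided conv downsamples the fine map to the coarse resolution.
        self.proj = nn.Conv2d(channels, channels, kernel_size=3,
                              stride=2, padding=1)

    def forward(self, coarse, fine):
        # fine has twice the spatial resolution of coarse.
        return coarse + self.proj(fine)
```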
In this paper, we present IRON (Invariant-based global Robust estimation and
OptimizatioN), a non-minimal and highly robust solution for point cloud
registration with a great number of outliers among the correspondences. To
realize this, we decouple the registration problem into the estimation of
scale, rotation and translation, respectively. Our first contribution is to
propose RANSIC (RANdom Samples with Invariant Compatibility), which employs the
invariant compatibility to seek inliers from random samples while robustly
estimating the scale between the two point clouds. Once the
scale is estimated, our second contribution is to relax the non-convex global
registration problem into a convex Semi-Definite Program (SDP) in a certifiable
way using Sum-of-Squares (SOS) Relaxation and show that the relaxation is
tight. For robust estimation, we further propose RT-GNC (Rough Trimming and
Graduated Non-Convexity), a global outlier rejection heuristic having better
robustness and time-efficiency than traditional GNC, as our third contribution.
Combining these contributions yields our complete registration algorithm, IRON.
Through experiments over real datasets, we show that IRON is efficient, highly
accurate and robust against as many as 99% outliers whether the scale is known
or unknown, outperforming the existing state-of-the-art algorithms. | [
"cs.CV",
"cs.RO"
]
|
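The pairwise-distance invariant behind RANSIC can be illustrated with a small sketch: ratios of pairwise distances are unaffected by rotation and translation, so mutually compatible ratios vote for the scale. This is only a minimal reading of the idea; the sample count, tolerance, and voting scheme are assumptions.

```python
import numpy as np

def estimate_scale(src, dst, trials=1000, tol=0.01, rng=None):
    """Sketch: dst ~ s*R*src + t; for inlier pairs (i, j) the ratio
    ||dst_i - dst_j|| / ||src_i - src_j|| equals the scale s."""
    if rng is None:
        rng = np.random.default_rng(0)
    ratios = []
    for _ in range(trials):
        i, j = rng.choice(len(src), size=2, replace=False)
        d_src = np.linalg.norm(src[i] - src[j])
        if d_src > 1e-9:
            ratios.append(np.linalg.norm(dst[i] - dst[j]) / d_src)
    ratios = np.asarray(ratios)
    # Each ratio votes for all ratios compatible with it; the best-supported
    # ratio is the scale estimate (outlier pairs gather little support).
    support = [(np.abs(ratios - r) < tol * r).sum() for r in ratios]
    return ratios[int(np.argmax(support))]
```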
Dynamic spatial graph construction is a challenge in graph neural networks
(GNNs) for time-series problems. Although some adaptive graphs are
conceivable, only a 2D graph is embedded in the network to reflect the current
spatial relation, regardless of all the previous situations. In this work, we
generate a spatial tensor graph (STG) to collect all the dynamic spatial
relations, as well as a temporal tensor graph (TTG) to find the latent pattern
along time at each node. These two tensor graphs share the same nodes and
edges, which leads us to explore their entangled correlations via Projected
Entangled Pair States (PEPS) to optimize the two graphs. We experimentally
compare accuracy and time cost with state-of-the-art GNN-based
methods on public traffic datasets. | [
"cs.CV"
]
|
We propose a dynamic neighborhood aggregation (DNA) procedure guided by
(multi-head) attention for representation learning on graphs. In contrast to
current graph neural networks which follow a simple neighborhood aggregation
scheme, our DNA procedure allows for a selective and node-adaptive aggregation
of neighboring embeddings of potentially differing locality. In order to avoid
overfitting, we propose to control the channel-wise connections between input
and output by making use of grouped linear projections. In a number of
transductive node-classification experiments, we demonstrate the effectiveness
of our approach. | [
"cs.LG",
"stat.ML"
]
|
Deep learning approaches to optical flow estimation have seen rapid progress
over the recent years. One common trait of many networks is that they refine an
initial flow estimate either through multiple stages or across the levels of a
coarse-to-fine representation. While leading to more accurate results, the
downside of this is an increased number of parameters. Taking inspiration from
both classical energy minimization approaches as well as residual networks, we
propose an iterative residual refinement (IRR) scheme based on weight sharing
that can be combined with several backbone networks. It reduces the number of
parameters, improves the accuracy, or even achieves both. Moreover, we show
that integrating occlusion prediction and bi-directional flow estimation into
our IRR scheme can further boost the accuracy. Our full network achieves
state-of-the-art results for both optical flow and occlusion estimation across
several standard datasets. | [
"cs.CV",
"cs.LG"
]
|
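The core of the IRR scheme, one refinement network applied repeatedly with shared weights, can be sketched as below; `refine_net` is a placeholder for whatever backbone predicts a flow update.

```python
import torch.nn as nn

class IterativeRefiner(nn.Module):
    """Sketch of weight-shared iterative residual refinement."""
    def __init__(self, refine_net, steps=5):
        super().__init__()
        self.refine_net = refine_net   # the SAME module is reused every step
        self.steps = steps

    def forward(self, features, flow):
        for _ in range(self.steps):
            # Predict a residual update; parameters do not grow with steps.
            flow = flow + self.refine_net(features, flow)
        return flow
```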
Monocular depth estimation is a challenging task that aims to predict a
corresponding depth map from a given single RGB image. Recent deep learning
models have been proposed to predict the depth from the image by learning the
alignment of deep features between the RGB image and depth domains. In this
paper, we present a novel approach, named Structure-Attentioned Memory Network,
to more effectively transfer domain features for monocular depth estimation by
taking into account the common structure regularities (e.g., repetitive
structure patterns, planar surfaces, symmetries) in domain adaptation. To this
end, we introduce a new Structure-Oriented Memory (SOM) module to learn and
memorize the structure-specific information between the RGB image domain and the
depth domain. More specifically, in the SOM module, we develop a Memorable Bank
of Filters (MBF) unit to learn a set of filters that memorize the
structure-aware image-depth residual pattern, and also an Attention Guided
Controller (AGC) unit to control the filter selection in the MBF given image
feature queries. Given the query image feature, the trained SOM module is able
to adaptively select the best customized filters for cross-domain feature
transferring with an optimal structural disparity between image and depth. In
summary, we focus on addressing this structure-specific domain adaptation
challenge by proposing a novel end-to-end multi-scale memorable network for
monocular depth estimation. Experiments show that our proposed model
achieves superior performance compared to existing supervised
monocular depth estimation approaches on the challenging KITTI and NYU Depth V2
benchmarks. | [
"cs.CV"
]
|
In this work, we propose a novel straightforward method for medical volume
and sequence segmentation with limited annotations. To avert laborious
annotating, the recent success of self-supervised learning (SSL) motivates the
pre-training on unlabeled data. Despite its success, it is still challenging to
adapt typical SSL methods to volume/sequence segmentation, due to their lack of
mining on local semantic discrimination and rare exploitation on volume and
sequence structures. Based on the continuity between slices/frames and the
common spatial layout of organs across volumes/sequences, we introduce a novel
bootstrap self-supervised representation learning method that leverages the
predictability of neighboring slices. At the core of our method is a
simple and straightforward dense self-supervision on the predictions of local
representations and a strategy of predicting locals based on global context,
which enables stable and reliable supervision for both global and local
representation mining among volumes. Specifically, we first propose an
asymmetric network with an attention-guided predictor to enforce
distance-specific prediction and supervision on slices within and across
volumes/sequences. Second, we introduce a novel prototype-based
foreground-background calibration module to enhance representation consistency.
The two parts are trained jointly on labeled and unlabeled data. When evaluated
on three benchmark datasets of medical volumes and sequences, our model
outperforms existing methods by a large margin of 4.5\% DSC on ACDC, 1.7\% on
Prostate, and 2.3\% on CAMUS. Intensive evaluations reveal the effectiveness
and superiority of our method. | [
"cs.CV"
]
|
Deep neural networks can generate more accurate shale gas production
forecasts in counties with a limited number of sample wells by utilizing
transfer learning. This paper provides a way of transferring the knowledge
gained from other deep neural network models trained on adjacent counties into
the county of interest. The paper uses data from more than 6000 shale gas wells
across 17 counties from Texas Barnett and Pennsylvania Marcellus shale
formations to test the capabilities of transfer learning. The results show a
reduction in forecasting error of between 11% and 47% compared to the widely
used Arps decline curve model. | [
"cs.LG"
]
|
Bird's-eye-view (BEV) is a powerful and widely adopted representation for
road scenes that captures surrounding objects and their spatial locations,
along with overall context in the scene. In this work, we focus on bird's eye
semantic segmentation, a task that predicts pixel-wise semantic segmentation in
BEV from side RGB images. This task is made possible by simulators such as
Carla, which allow for cheap data collection, arbitrary camera placements, and
supervision in ways otherwise not possible in the real world. There are two
main challenges to this task: the view transformation from side view to bird's
eye view, as well as transfer learning to unseen domains. Existing work
transforms between views through fully connected layers and performs transfer
learning via GANs. This suffers from a lack of depth reasoning and performance
degradation
across domains. Our novel 2-staged perception pipeline explicitly predicts
pixel depths and combines them with pixel semantics in an efficient manner,
allowing the model to leverage depth information to infer objects' spatial
locations in the BEV. In addition, we enable transfer learning by abstracting
high-level geometric features and predicting an intermediate representation
that is common across different domains. We publish a new dataset called
BEVSEG-Carla and show that our approach improves state-of-the-art by 24% mIoU
and performs well when transferred to a new domain. | [
"cs.CV"
]
|
A sum-product network (SPN) is a probabilistic model, based on a rooted
acyclic directed graph, in which terminal nodes represent univariate
probability distributions and non-terminal nodes represent convex combinations
(weighted sums) and products of probability functions. They are closely related
to probabilistic graphical models, in particular to Bayesian networks with
multiple context-specific independencies. Their main advantage is the
possibility of building tractable models from data, i.e., models that can
perform several inference tasks in time proportional to the number of links in
the graph. They are somewhat similar to neural networks and can address the
same kinds of problems, such as image processing and natural language
understanding. This paper offers a survey of SPNs, including their definition,
the main algorithms for inference and learning from data, the main
applications, a brief review of software libraries, and a comparison with
related models. | [
"cs.LG",
"cs.AI"
]
|
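To make the definition concrete, here is a toy SPN over two binary variables, with Bernoulli leaves, product nodes over disjoint scopes, and a weighted-sum root; the structure and weights are arbitrary illustrative choices.

```python
# Terminal nodes: univariate Bernoulli distributions.
def bernoulli(p):
    return lambda x: p if x else 1.0 - p

# Product nodes multiply children with disjoint scopes (X1 vs. X2);
# the root is a convex combination (weighted sum) of the two products.
comp1 = lambda x1, x2: bernoulli(0.9)(x1) * bernoulli(0.2)(x2)
comp2 = lambda x1, x2: bernoulli(0.1)(x1) * bernoulli(0.8)(x2)

def root(x1, x2, w=(0.6, 0.4)):
    return w[0] * comp1(x1, x2) + w[1] * comp2(x1, x2)

# Tractable inference: evaluation cost is linear in the number of links.
p_joint = root(1, 0)                            # P(X1=1, X2=0)
p_marg = sum(root(1, x2) for x2 in (0, 1))      # P(X1=1) by marginalization
assert abs(sum(root(a, b) for a in (0, 1) for b in (0, 1)) - 1.0) < 1e-12
```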
We study the challenging task of neural network quantization without
end-to-end retraining, called Post-training Quantization (PTQ). PTQ usually
requires a small subset of training data but produces less powerful quantized
models than Quantization-Aware Training (QAT). In this work, we propose a novel
PTQ framework, dubbed BRECQ, which pushes the limits of bitwidth in PTQ down to
INT2 for the first time. BRECQ leverages the basic building blocks in neural
networks and reconstructs them one-by-one. In a comprehensive theoretical study
of the second-order error, we show that BRECQ achieves a good balance between
cross-layer dependency and generalization error. To further exploit the power
of quantization, a mixed-precision technique is incorporated in our framework by
approximating the inter-layer and intra-layer sensitivity. Extensive
experiments on various handcrafted and searched neural architectures are
conducted for both image classification and object detection tasks. For the
first time, we demonstrate that, without bells and whistles, PTQ can attain 4-bit
ResNet and MobileNetV2 comparable with QAT and enjoy 240 times faster
production of quantized models. Codes are available at
https://github.com/yhhhli/BRECQ. | [
"cs.LG",
"cs.CV"
]
|
We consider the problem of image representation for the tasks of unsupervised
learning and semi-supervised learning. In those learning tasks, the raw image
vectors may not provide enough representation for their intrinsic structures
due to their highly dense feature space. To overcome this problem, the raw
image vectors should be mapped to a proper representation space which can
capture the latent structure of the original data and represent the data
explicitly for further learning tasks such as clustering.
Inspired by recent research on deep neural networks and
representation learning, in this paper, we introduce the multiple-layer
auto-encoder into image representation. We also apply the local invariance
idea to our image representation with auto-encoders and propose a novel
method, called Graph regularized Auto-Encoder (GAE). GAE can provide a compact
representation which uncovers the hidden semantics and simultaneously respects
the intrinsic geometric structure.
Extensive experiments on image clustering show encouraging results of the
proposed algorithm in comparison to the state-of-the-art algorithms on
real-world cases. | [
"cs.LG",
"K.3.2"
]
|
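A common way to write the objective such a graph-regularized auto-encoder optimizes is reconstruction plus a Laplacian smoothness term on the embeddings; the sketch below uses this textbook form, with the weight `lam` as an assumed hyperparameter rather than the paper's exact loss.

```python
import torch

def gae_loss(x, x_hat, h, adj, lam=0.1):
    """Sketch: reconstruction + graph regularizer on embeddings h (n x d).

    sum_ij W_ij * ||h_i - h_j||^2 = 2 * tr(H^T L H) pulls embeddings of
    neighboring images together, preserving local geometric structure.
    """
    recon = torch.mean((x - x_hat) ** 2)
    lap = torch.diag(adj.sum(dim=1)) - adj      # graph Laplacian L = D - W
    reg = torch.trace(h.t() @ lap @ h) / h.shape[0]
    return recon + lam * reg
```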
Self-attention has been successfully applied to video representation learning
due to the effectiveness of modeling long range dependencies. Existing
approaches build the dependencies merely by computing the pairwise correlations
along spatial and temporal dimensions simultaneously. However, spatial
correlations and temporal correlations represent different contextual
information of scenes and temporal reasoning. Intuitively, learning spatial
contextual information first will benefit temporal modeling. In this paper, we
propose a separable self-attention (SSA) module, which models spatial and
temporal correlations sequentially, so that spatial contexts can be efficiently
used in temporal modeling. By adding the SSA module into a 2D CNN, we build an SSA
network (SSAN) for video representation learning. On the task of video action
recognition, our approach outperforms state-of-the-art methods on
Something-Something and Kinetics-400 datasets. Our models often outperform
counterparts with shallower network and fewer modalities. We further verify the
semantic learning ability of our method in visual-language task of video
retrieval, which showcases the homogeneity of video representations and text
embeddings. On MSR-VTT and Youcook2 datasets, video representations learnt by
SSA significantly improve the state-of-the-art performance. | [
"cs.CV"
]
|
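The separable self-attention idea, spatial attention within each frame followed by temporal attention at each location, can be sketched with standard multi-head attention; the tensor layout and head count here are assumptions.

```python
import torch.nn as nn

class SeparableSelfAttention(nn.Module):
    """Sketch of SSA: attend over space first, then over time."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, T, HW, C)
        b, t, s, c = x.shape
        xs = x.reshape(b * t, s, c)             # per-frame spatial attention
        xs, _ = self.spatial(xs, xs, xs)
        xt = xs.reshape(b, t, s, c).transpose(1, 2).reshape(b * s, t, c)
        xt, _ = self.temporal(xt, xt, xt)       # per-location temporal attention
        return xt.reshape(b, s, t, c).transpose(1, 2)
```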
There is a large variation in the activities that humans perform in their
everyday lives. We consider modeling these composite human activities, which
comprise multiple basic-level actions, in a completely unsupervised setting.
Our model learns high-level co-occurrence and temporal relations between the
actions. We consider the video as a sequence of short-term action clips, which
contains human-words and object-words. An activity is about a set of
action-topics and object-topics indicating which actions are present and which
objects are interacted with. We then propose a new probabilistic model
relating the words and the topics. It allows us to model long-range action
relations that commonly exist in the composite activities, which is challenging
in previous works. We apply our model to the unsupervised action segmentation
and clustering, and to a novel application that detects forgotten actions,
which we call action patching. For evaluation, we contribute a new challenging
RGB-D activity video dataset recorded by the new Kinect v2, which contains
several human daily activities as compositions of multiple actions interacting
with different objects. Moreover, we develop a robotic system that watches
people and reminds them by applying our action patching algorithm. Our
robotic setup can be easily deployed on any assistive robot. | [
"cs.CV",
"cs.LG",
"cs.RO"
]
|
In this study, we present an analysis of model-based ensemble learning for 3D
point-cloud object classification and detection. An ensemble of multiple model
instances is known to outperform a single model instance, but there is little
study of the topic of ensemble learning for 3D point clouds. First, an ensemble
of multiple model instances trained on the same part of the
$\textit{ModelNet40}$ dataset was tested for seven deep learning, point
cloud-based classification algorithms: $\textit{PointNet}$,
$\textit{PointNet++}$, $\textit{SO-Net}$, $\textit{KCNet}$,
$\textit{DeepSets}$, $\textit{DGCNN}$, and $\textit{PointCNN}$. Second, the
ensemble of different architectures was tested. Results of our experiments show
that the tested ensemble learning methods improve over state-of-the-art on the
$\textit{ModelNet40}$ dataset, from $92.65\%$ to $93.64\%$ for the ensemble of
single architecture instances, $94.03\%$ for two different architectures, and
$94.15\%$ for five different architectures. We show that the ensemble of two
models with different architectures can be as effective as the ensemble of 10
models with the same architecture. Third, classic bagging (i.e., with
different subsets used for training multiple model instances) was tested, and
sources of ensemble accuracy growth were investigated for the best-performing
architecture, i.e. $\textit{SO-Net}$. We also investigate the ensemble learning
of $\textit{Frustum PointNet}$ approach in the task of 3D object detection,
increasing the average precision of 3D box detection on the $\textit{KITTI}$
dataset from $63.1\%$ to $66.5\%$ using only three model instances. We measure
the inference time of all 3D classification architectures on a $\textit{Nvidia
Jetson TX2}$, a common embedded computer for mobile robots, to allude to the
use of these models in real-life applications. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
]
|
We develop a novel method for carrying out model selection for Bayesian
autoencoders (BAEs) by means of prior hyper-parameter optimization. Inspired by
the common practice of type-II maximum likelihood optimization and its
equivalence to Kullback-Leibler divergence minimization, we propose to optimize
the distributional sliced-Wasserstein distance (DSWD) between the output of the
autoencoder and the empirical data distribution. The advantages of this
formulation are that we can estimate the DSWD based on samples and handle
high-dimensional problems. We carry out posterior estimation of the BAE
parameters via stochastic gradient Hamiltonian Monte Carlo and turn our BAE
into a generative model by fitting a flexible Dirichlet mixture model in the
latent space. Consequently, we obtain a powerful alternative to variational
autoencoders, which are the preferred choice in modern applications of
autoencoders for representation learning with uncertainty. We evaluate our
approach qualitatively and quantitatively using a vast experimental campaign on
a number of unsupervised learning tasks and show that, in small-data regimes
where priors matter, our approach provides state-of-the-art results,
outperforming multiple competitive baselines. | [
"stat.ML",
"cs.LG"
]
|
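The sample-based estimate that makes the DSWD objective practical reduces each comparison to sorted 1D projections. The sketch below is the plain sliced variant (the distributional version additionally optimizes over projection directions) and assumes equal sample counts.

```python
import torch

def sliced_wasserstein(x, y, n_proj=128, p=2):
    """Sketch: Monte Carlo sliced Wasserstein distance between two sample
    sets x, y of shape (n, d). Each random 1D projection admits closed-form
    optimal transport: sort and compare the projected values."""
    d = x.shape[1]
    theta = torch.randn(d, n_proj)
    theta = theta / theta.norm(dim=0, keepdim=True)   # unit directions
    xp, _ = torch.sort(x @ theta, dim=0)
    yp, _ = torch.sort(y @ theta, dim=0)
    return ((xp - yp).abs() ** p).mean() ** (1.0 / p)
```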
We study in this paper the problems of both image captioning and
text-to-image generation, and present a novel turbo learning approach to
jointly training an image-to-text generator (a.k.a. CaptionBot) and a
text-to-image generator (a.k.a. DrawingBot). The key idea behind the joint
training is that image-to-text generation and text-to-image generation as dual
problems can form a closed loop to provide informative feedback to each other.
Based on such feedback, we introduce a new loss metric by comparing the
original input with the output produced by the closed loop. In addition to the
existing loss metrics used in CaptionBot and DrawingBot, this extra loss metric
makes the jointly trained CaptionBot and DrawingBot better than the separately
trained CaptionBot and DrawingBot. Furthermore, the turbo-learning approach
enables semi-supervised learning since the closed loop can provide
pseudo-labels for unlabeled samples. Experimental results on the COCO dataset
demonstrate that the proposed turbo learning can significantly improve the
performance of both CaptionBot and DrawingBot by a large margin. | [
"cs.CV"
]
|
Face forgery by deepfake is widely spread over the internet and this raises
severe societal concerns. In this paper, we propose a novel video transformer
with incremental learning for detecting deepfake videos. To better align the
input face images, we use a 3D face reconstruction method to generate UV
texture from a single input face image. The aligned face image can also provide
pose, eye blink, and mouth movement information that cannot be perceived in the
UV texture image, so we use both face images and their UV texture maps to
extract the image features. We present an incremental learning strategy to
fine-tune the proposed model on a smaller amount of data and achieve better
deepfake detection performance. The comprehensive experiments on various public
deepfake datasets demonstrate that the proposed video transformer model with
incremental learning achieves state-of-the-art performance in the deepfake
video detection task with enhanced feature learning from the sequenced data. | [
"cs.CV"
]
|
Bayesian methods are capable of capturing real-world
uncertainties/incompleteness and properly addressing the over-fitting issue
faced by deep neural networks. In recent years, Bayesian Neural Networks (BNNs)
have drawn tremendous attention from AI researchers and proved to be successful
in many applications. However, their high computational complexity makes
BNNs difficult to deploy in computing systems with a limited power budget.
In this paper, an efficient BNN inference flow is proposed to reduce the
computation cost and is then evaluated by means of both software and hardware
implementations. A feature decomposition and memorization (\texttt{DM})
strategy is utilized to reform the BNN inference flow in a reduced manner.
About half of the computations can be eliminated compared to the traditional
approach, as proved by theoretical analysis and software validations.
Subsequently, in order to resolve the hardware resource limitations, a
memory-friendly computing framework is further deployed to reduce the memory
overhead introduced by \texttt{DM} strategy. Finally, we implement our approach
in Verilog and synthesise it with 45 $nm$ FreePDK technology. Hardware
simulation results on multi-layer BNNs demonstrate that, when compared with the
traditional BNN inference method, it provides an energy consumption reduction
of 73\% and a 4$\times$ speedup at the expense of 14\% area overhead. | [
"cs.LG",
"stat.ML"
]
|
Visual attributes constitute a large portion of information contained in a
scene. Objects can be described using a wide variety of attributes which
portray their visual appearance (color, texture), geometry (shape, size,
posture), and other intrinsic properties (state, action). Existing work is
mostly limited to the study of attribute prediction in specific domains. In this
paper, we introduce a large-scale in-the-wild visual attribute prediction
dataset consisting of over 927K attribute annotations for over 260K object
instances. Formally, object attribute prediction is a multi-label
classification problem where all attributes that apply to an object must be
predicted. Our dataset poses significant challenges to existing methods due to
the large number of attributes, label sparsity, data imbalance, and object
occlusion. To this end, we propose several techniques that systematically
tackle these challenges, including a base model that utilizes both low- and
high-level CNN features with multi-hop attention, reweighting and resampling
techniques, a novel negative label expansion scheme, and a novel supervised
attribute-aware contrastive learning algorithm. Using these techniques, we
achieve nearly 3.7 mAP and 5.7 overall F1 points of improvement over the current
state of the art. Further details about the VAW dataset can be found at
http://vawdataset.com/. | [
"cs.CV"
]
|
Transfer learning is one of the subjects undergoing intense study in the area
of machine learning. In object recognition and object detection there are known
experiments for the transferability of parameters, but not for neural networks
which are suitable for object detection in real time embedded applications,
such as the SqueezeDet neural network. We use transfer learning to accelerate
the training of SqueezeDet to a new group of classes. Also, experiments are
conducted to study the transferability and co-adaptation phenomena introduced
by the transfer learning process. To accelerate training, we propose a new
implementation of the SqueezeDet training which provides a faster pipeline for
data processing and achieves 1.8 times speedup compared to the initial
implementation. Finally, we created a mechanism for automatic hyperparameter
optimization using an empirical method. | [
"cs.CV"
]
|
While many works focus on 3D reconstruction from images, in this paper, we
focus on 3D shape reconstruction and completion from a variety of 3D inputs,
which are deficient in some respect: low and high resolution voxels, sparse and
dense point clouds, complete or incomplete. Processing of such 3D inputs is an
increasingly important problem as they are the output of 3D scanners, which are
becoming more accessible, and are the intermediate output of 3D computer vision
algorithms. Recently, learned implicit functions have shown great promise as
they produce continuous reconstructions. However, we identified two limitations
in reconstruction from 3D inputs: 1) details present in the input data are not
retained, and 2) poor reconstruction of articulated humans. To solve this, we
propose Implicit Feature Networks (IF-Nets), which deliver continuous outputs,
can handle multiple topologies, and can complete shapes from missing or sparse
input data, retaining the nice properties of recent learned implicit functions;
critically, they can also retain detail when it is present in the input data
and can reconstruct articulated humans. Our work differs from prior work in two
crucial aspects. First, instead of using a single vector to encode a 3D shape,
we extract a learnable 3-dimensional multi-scale tensor of deep features, which
is aligned with the original Euclidean space embedding the shape. Second,
instead of classifying x-y-z point coordinates directly, we classify deep
features extracted from the tensor at a continuous query point. We show that
this forces our model to make decisions based on global and local shape
structure, as opposed to point coordinates, which are arbitrary under Euclidean
transformations. Experiments demonstrate that IF-Nets clearly outperform prior
work in 3D object reconstruction in ShapeNet, and obtain significantly more
accurate 3D human reconstructions. | [
"cs.CV",
"cs.LG"
]
|
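The second design choice above, classifying features sampled from an aligned 3D feature tensor at a continuous query point, maps naturally onto trilinear interpolation; a sketch with PyTorch's `grid_sample` follows, where a single-scale grid stands in for the paper's multi-scale tensor.

```python
import torch.nn.functional as F

def query_features(feature_grid, points):
    """Sketch: sample deep features at continuous 3D query points.

    feature_grid: (B, C, D, H, W), aligned with the shape's Euclidean space.
    points:       (B, N, 3) with coordinates in [-1, 1]^3 (x, y, z order).
    Returns (B, C, N) per-point features for inside/outside classification.
    """
    grid = points.view(points.shape[0], -1, 1, 1, 3)
    feats = F.grid_sample(feature_grid, grid, align_corners=True)
    return feats.view(feats.shape[0], feats.shape[1], -1)   # (B, C, N)
```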
Representations of data that are invariant to changes in specified factors
are useful for a wide range of problems: removing potential biases in
prediction problems, controlling the effects of covariates, and disentangling
meaningful factors of variation. Unfortunately, learning representations that
exhibit invariance to arbitrary nuisance factors yet remain useful for other
tasks is challenging. Existing approaches cast the trade-off between task
performance and invariance in an adversarial way, using an iterative minimax
optimization. We show that adversarial training is unnecessary and sometimes
counter-productive; we instead cast invariant representation learning as a
single information-theoretic objective that can be directly optimized. We
demonstrate that this approach matches or exceeds performance of
state-of-the-art adversarial approaches for learning fair representations and
for generative modeling with controllable transformations. | [
"cs.LG",
"stat.ML"
]
|
License plate detection and recognition (LPDR) is of growing importance for
enabling intelligent transportation and ensuring the security and safety of the
cities. However, LPDR faces a big challenge in a practical environment. The
license plates can have extremely diverse sizes, fonts and colors, and the
plate images are usually of poor quality caused by skewed capturing angles,
uneven lighting, occlusion, and blurring. Applications such as surveillance
often require fast processing. To enable real-time and accurate license
plate recognition, in this work, we propose a set of techniques: 1) a contour
reconstruction method along with edge-detection to quickly detect the candidate
plates; 2) a simple zero-one-alternation scheme to effectively remove the fake
top and bottom borders around plates to facilitate more accurate segmentation
of characters on plates; 3) a set of techniques to augment the training data,
incorporate SIFT features into the CNN network, and exploit transfer learning
to obtain the initial parameters for more effective training; and 4) a
two-phase verification procedure to determine the correct plate at low cost:
statistical filtering in the plate detection stage to quickly remove unwanted
candidates, followed by use of the accurate character recognition (CR) results
after the CR process to perform further plate verification without additional
processing. We implement a complete LPDR
system based on our algorithms. The experimental results demonstrate that our
system can accurately recognize license plates in real time. Additionally, it
works robustly under various levels of illumination and noise, and in the
presence of car movement. Compared to peer schemes, our system is not only
among the most accurate ones but is also the fastest, and can be easily applied
to other scenarios. | [
"cs.CV"
]
|
Reinforcement Learning in large action spaces is a challenging problem.
Cooperative multi-agent reinforcement learning (MARL) exacerbates matters by
imposing various constraints on communication and observability. In this work,
we consider the fundamental hurdle affecting both value-based and
policy-gradient approaches: an exponential blowup of the action space with the
number of agents. For value-based methods, it poses challenges in accurately
representing the optimal value function. For policy gradient methods, it makes
training the critic difficult and exacerbates the problem of the lagging
critic. We show that from a learning theory perspective, both problems can be
addressed by accurately representing the associated action-value function with
a low-complexity hypothesis class. This requires accurately modelling the agent
interactions in a sample efficient way. To this end, we propose a novel
tensorised formulation of the Bellman equation. This gives rise to our method
Tesseract, which views the Q-function as a tensor whose modes correspond to the
action spaces of different agents. Algorithms derived from Tesseract decompose
the Q-tensor across agents and utilise low-rank tensor approximations to model
agent interactions relevant to the task. We provide PAC analysis for
Tesseract-based algorithms and highlight their relevance to the class of rich
observation MDPs. Empirical results in different domains confirm Tesseract's
gains in sample efficiency predicted by the theory. | [
"cs.LG"
]
|
We propose a new randomized algorithm for solving L2-regularized
least-squares problems based on sketching. We consider two of the most popular
random embeddings, namely, Gaussian embeddings and the Subsampled Randomized
Hadamard Transform (SRHT). While current randomized solvers for least-squares
optimization prescribe an embedding dimension at least greater than the data
dimension, we show that the embedding dimension can be reduced to the effective
dimension of the optimization problem, and still preserve high-probability
convergence guarantees. In this regard, we derive sharp matrix deviation
inequalities over ellipsoids for both Gaussian and SRHT embeddings.
Specifically, we improve on the constant of a classical Gaussian concentration
bound whereas, for SRHT embeddings, our deviation inequality involves a novel
technical approach. Leveraging these bounds, we are able to design a practical
and adaptive algorithm which does not require knowing the effective dimension
beforehand. Our method starts with an initial embedding dimension equal to 1
and, over iterations, increases the embedding dimension up to the effective one
at most. Hence, our algorithm improves the state-of-the-art computational
complexity for solving regularized least-squares problems. Further, we show
numerically that it outperforms standard iterative solvers such as the
conjugate gradient method and its pre-conditioned version on several standard
machine learning datasets. | [
"cs.LG",
"stat.ML"
]
|
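For intuition, here is a one-shot Gaussian-sketch solver for the ridge problem; it fixes a single embedding dimension `m`, whereas the paper's adaptive method starts from dimension 1 and grows it toward the effective dimension over iterations. Treat this as a sketch of the ingredient, not the algorithm.

```python
import numpy as np

def sketched_ridge(A, b, nu, m, rng=None):
    """Sketch: approximately solve min_x ||Ax - b||^2 + nu * ||x||^2
    using an m x n Gaussian embedding S applied to A (Hessian sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian embedding
    SA = S @ A                                      # m x d sketched data
    H = SA.T @ SA + nu * np.eye(d)                  # approximate Hessian
    return np.linalg.solve(H, A.T @ b)              # exact gradient, cheap solve
```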
An important component for generalization in machine learning is to uncover
underlying latent factors of variation as well as the mechanism through which
each factor acts in the world. In this paper, we test whether 17 unsupervised,
weakly supervised, and fully supervised representation learning approaches
correctly infer the generative factors of variation in simple datasets
(dSprites, Shapes3D, MPI3D). In contrast to prior robustness work that
introduces novel factors of variation during test time, such as blur or other
(un)structured noise, we here recompose, interpolate, or extrapolate only
existing factors of variation from the training data set (e.g., small and
medium-sized objects during training and large objects during testing). Models
that learn the correct mechanism should be able to generalize to this
benchmark. In total, we train and test 2000+ models and observe that all of
them struggle to learn the underlying mechanism regardless of supervision
signal and architectural bias. Moreover, the generalization capabilities of all
tested models drop significantly as we move from artificial datasets towards
more realistic real-world datasets. Despite their inability to identify the
correct mechanism, the models are quite modular as their ability to infer other
in-distribution factors remains fairly stable, provided only a single factor
is out-of-distribution. These results point to an important yet understudied
problem of learning mechanistic models of observations that can facilitate
generalization. | [
"cs.LG",
"cs.CV"
]
|
Timely prediction of clinically critical events in Intensive Care Unit (ICU)
is important for improving care and survival rate. Most of the existing
approaches are based on the application of various classification methods on
explicitly extracted statistical features from vital signals. In this work, we
propose to eliminate the high cost of engineering hand-crafted features from
multivariate time-series of physiologic signals by learning their
representation with a sequence-to-sequence auto-encoder. We then propose to
hash the learned representations to enable signal similarity assessment for the
prediction of critical events. We apply this methodological framework to
predict Acute Hypotensive Episodes (AHE) on a large and diverse dataset of
vital signal recordings. Experiments demonstrate the ability of the presented
framework in accurately predicting an upcoming AHE. | [
"cs.CV",
"cs.AI",
"stat.ML"
]
|
Despite the recent active research on processing point clouds with deep
networks, little attention has been paid to the sensitivity of the networks to
rotations. In this paper, we propose a deep learning architecture that achieves
discrete $\mathbf{SO}(2)$/$\mathbf{SO}(3)$ rotation equivariance for point
cloud recognition. Specifically, the rotation of an input point cloud with
elements of a rotation group is similar to shuffling the feature vectors
generated by our approach. The equivariance is easily reduced to invariance by
eliminating the permutation with operations such as maximum or average. Our
method can be directly applied to any existing point cloud based networks,
resulting in significant improvements in their performance for rotated inputs.
We show state-of-the-art results in the classification tasks with various
datasets under both $\mathbf{SO}(2)$ and $\mathbf{SO}(3)$ rotations. In
addition, we further analyze the necessary conditions of applying our approach
to PointNet based networks. Source code is available at
https://github.com/lijx10/rot-equ-net. | [
"cs.CV"
]
|
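The claim that rotating the input merely shuffles feature vectors, so invariance follows from a permutation-eliminating pool, can be checked with a toy construction: encode every element of a discrete rotation orbit and pool. Here `encoder` is a placeholder for any point-cloud feature extractor.

```python
import numpy as np

def rotate_z(points, angle):
    c, s = np.cos(angle), np.sin(angle)
    return points @ np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]).T

def orbit_features(points, encoder, n=8):
    """Sketch of discrete SO(2) equivariance: rotating the input by 2*pi/n
    cyclically permutes the rows of `feats`, so max-pooling over the orbit
    yields a rotation-invariant descriptor."""
    feats = np.stack([encoder(rotate_z(points, 2 * np.pi * k / n))
                      for k in range(n)])
    return feats.max(axis=0)
```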
Deep learning has made significant impacts on multi-view stereo systems.
State-of-the-art approaches typically involve building a cost volume, followed
by multiple 3D convolution operations to recover the input image's pixel-wise
depth. While such end-to-end learning of plane-sweeping stereo advances
accuracy on public benchmarks, these approaches are typically very slow to
compute. We present
MVS2D, a highly efficient multi-view stereo algorithm that seamlessly
integrates multi-view constraints into single-view networks via an attention
mechanism. Since MVS2D only builds on 2D convolutions, it is at least 4x faster
than all the notable counterparts. Moreover, our algorithm produces precise
depth estimations, achieving state-of-the-art results on challenging benchmarks
ScanNet, SUN3D, and RGBD. Even under inexact camera poses, our algorithm still
outperforms all other algorithms. Supplementary materials and code will be
available at the project page: https://zhenpeiyang.github.io/MVS2D | [
"cs.CV"
]
|
Reinforcement learning (RL) in discrete action space is ubiquitous in
real-world applications, but its complexity grows exponentially with the
action-space dimension, making it challenging to apply existing on-policy
gradient based deep RL algorithms efficiently. To effectively operate in
multidimensional discrete action spaces, we construct a critic to estimate
action-value functions, apply it on correlated actions, and combine these
critic estimated action values to control the variance of gradient estimation.
We follow rigorous statistical analysis to design how to generate and combine
these correlated actions, and how to sparsify the gradients by shutting down
the contributions from certain dimensions. These efforts result in a new
discrete action on-policy RL algorithm that empirically outperforms related
on-policy algorithms relying on variance control techniques. We demonstrate
these properties on OpenAI Gym benchmark tasks, and illustrate how discretizing
the action space could benefit the exploration phase and hence facilitate
convergence to a better local optimal solution thanks to the flexibility of
discrete policy. | [
"stat.ML",
"cs.LG"
]
|
In this article, we introduce a new mode for training Generative Adversarial
Networks (GANs). Rather than minimizing the distance of evidence distribution
$\tilde{p}(x)$ and the generative distribution $q(x)$, we minimize the distance
of $\tilde{p}(x_r)q(x_f)$ and $\tilde{p}(x_f)q(x_r)$. This adversarial pattern
can be interpreted as a Turing test in GANs. It allows us to use information
from real samples when training the generator and accelerates the whole training
procedure. We even find that by just proportionally increasing the sizes of the
discriminator and generator, it succeeds at 256x256 resolution without
carefully adjusting hyperparameters. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
While improvements in deep learning architectures have played a crucial role
in improving the state of supervised and unsupervised learning in computer
vision and natural language processing, neural network architecture choices for
reinforcement learning remain relatively under-explored. We take inspiration
from successful architectural choices in computer vision and generative
modelling, and investigate the use of deeper networks and dense connections for
reinforcement learning on a variety of simulated robotic learning benchmark
environments. Our findings reveal that current methods benefit significantly
from dense connections and deeper networks, across a suite of manipulation and
locomotion tasks, for both proprioceptive and image-based observations. We hope
that our results can serve as a strong baseline and further motivate future
research into neural network architectures for reinforcement learning. The
project website with code is at this link
https://sites.google.com/view/d2rl/home. | [
"cs.LG"
]
|
Disentangled representation learning has been proposed as an approach to
learning general representations. This can be done in the absence of, or with
limited, annotations. A good general representation can be readily fine-tuned
for new target tasks using modest amounts of data, or even be used directly in
unseen domains achieving remarkable performance in the corresponding task. This
alleviation of the data and annotation requirements offers tantalising
prospects for tractable and affordable applications in computer vision and
healthcare. Finally, disentangled representations can offer model
explainability and can help us understand the underlying causal relations of
the factors of variation, increasing their suitability for real-world
deployment. In this tutorial paper, we will offer an overview of
disentangled representation learning, its building blocks and criteria, and
discuss applications in computer vision and medical imaging. We conclude our
tutorial by presenting the identified opportunities for the integration of
recent machine learning advances into disentanglement, as well as the remaining
challenges. | [
"cs.CV",
"cs.LG"
]
|
Neural networks have proven their capabilities by outperforming many other
approaches on regression or classification tasks on various kinds of data.
Other astonishing results have been achieved using neural nets as data
generators, especially in settings of generative adversarial networks (GANs).
One special application is the field of image domain translations. Here, the
goal is to take an image with a certain style (e.g., a photograph) and
transform it into another one (e.g. a painting). If such a task is performed
for unpaired training examples, the corresponding GAN setting is complex, the
neural networks are large, and this leads to high peak memory consumption
during both the training and evaluation phases. This sets a limit on the highest
processable image size. We address this issue by the idea of not processing the
whole image at once, but to train and evaluate the domain translation on the
level of overlapping image subsamples. This new approach not only enables us to
translate high-resolution images that otherwise cannot be processed by the
neural network at once, but also allows us to work with comparably small neural
networks and with limited hardware resources. Additionally, the number of
images required for the training process is significantly reduced. We present
high-quality results on images with a total resolution of up to over 50
megapixels and demonstrate that our method helps to preserve local image details
while it also keeps global consistency. | [
"cs.CV",
"cs.LG",
"stat.ML"
]
|
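A simple version of the overlapping-subsample idea is tiling with blended seams, sketched below; the tile size, overlap, and triangular blending window are assumptions (the image is assumed at least one tile large in each dimension, with `model` the trained translator).

```python
import numpy as np

def translate_tiled(image, model, tile=256, overlap=64):
    """Sketch: run a translator on overlapping tiles of an (H, W, C) image
    and blend the overlaps, so peak memory scales with the tile, not the image."""
    h, w, _ = image.shape
    out = np.zeros(image.shape, dtype=np.float64)
    weight = np.zeros((h, w, 1))
    ramp = np.minimum(np.arange(tile) + 1, np.arange(tile)[::-1] + 1)
    win = np.minimum.outer(ramp, ramp)[..., None]   # triangular 2D window
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y0, x0 = min(y, h - tile), min(x, w - tile)
            patch = model(image[y0:y0 + tile, x0:x0 + tile])
            out[y0:y0 + tile, x0:x0 + tile] += patch * win
            weight[y0:y0 + tile, x0:x0 + tile] += win
    return out / weight                              # normalize blended seams
```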
Pap smear testing has been widely used for detecting cervical cancers based
on the morphology properties of cell nuclei in microscopic images. An accurate
nuclei segmentation could thus improve the success rate of cervical cancer
screening. In this work, a method of automated cervical nuclei segmentation
using Deformable Multipath Ensemble Model (D-MEM) is proposed. The approach
adopts a U-shaped convolutional network as a backbone network, in which dense
blocks are used to transfer feature information more effectively. To increase
the flexibility of the model, we then use deformable convolution to deal with
different nuclei irregular shapes and sizes. To reduce the predictive bias, we
further construct multiple networks with different settings, which form an
ensemble model. The proposed segmentation framework has achieved
state-of-the-art accuracy on Herlev dataset with Zijdenbos similarity index
(ZSI) of 0.933, and has the potential to be extended for solving other medical
image segmentation tasks. | [
"cs.CV"
]
|
The gold standard for discovering causal relations is by means of
experimentation. Over the last decades, alternative methods have been proposed
that can infer causal relations between variables from certain statistical
patterns in purely observational data. We introduce Joint Causal Inference
(JCI), a novel approach to causal discovery from multiple data sets from
different contexts that elegantly unifies both approaches. JCI is a causal
modeling framework rather than a specific algorithm, and it can be implemented
using any causal discovery algorithm that can take into account certain
background knowledge. JCI can deal with different types of interventions (e.g.,
perfect, imperfect, stochastic, etc.) in a unified fashion, and does not
require knowledge of intervention targets or types in case of interventional
data. We explain how several well-known causal discovery algorithms can be seen
as addressing special cases of the JCI framework, and we also propose novel
implementations that extend existing causal discovery methods for purely
observational data to the JCI setting. We evaluate different JCI
implementations on synthetic data and on flow cytometry protein expression data
and conclude that JCI implementations can considerably outperform
state-of-the-art causal discovery algorithms. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
In this paper, we tackle the problem of detecting objects in 3D and
forecasting their future motion in the context of self-driving. Towards this
goal, we design a novel approach that explicitly takes into account the
interactions between actors. To capture their spatial-temporal dependencies, we
propose a recurrent neural network with a novel Transformer architecture, which
we call the Interaction Transformer. Importantly, our model can be trained
end-to-end, and runs in real-time. We validate our approach on two challenging
real-world datasets: ATG4D and nuScenes. We show that our approach can
outperform the state-of-the-art on both datasets. In particular, we
significantly improve the social compliance among the estimated future
trajectories, resulting in far fewer collisions between the predicted actors. | [
"cs.CV",
"cs.RO"
]
|
This paper tackles the problem of motion deblurring of dynamic scenes.
Although end-to-end fully convolutional designs have recently advanced the
state-of-the-art in non-uniform motion deblurring, their performance-complexity
trade-off is still sub-optimal. Existing approaches achieve a large receptive
field by increasing the number of generic convolution layers and the kernel
size, but this comes at the expense of increased model size and slower
inference. In this work, we propose an efficient pixel adaptive and feature
attentive design for handling large blur variations across different spatial
locations and process each test image adaptively. We also propose an effective
content-aware global-local filtering module that significantly improves
performance by considering not only global dependencies but also by dynamically
exploiting neighbouring pixel information. We use a patch-hierarchical
attentive architecture composed of the above module that implicitly discovers
the spatial variations in the blur present in the input image and in turn,
performs local and global modulation of intermediate features. Extensive
qualitative and quantitative comparisons with prior art on deblurring
benchmarks demonstrate that our design offers significant improvements over the
state-of-the-art in accuracy as well as speed. | [
"cs.CV",
"eess.IV"
]
|
Semi-supervised learning lately has shown much promise in improving deep
learning models when labeled data is scarce. Common among recent approaches is
the use of consistency training on a large amount of unlabeled data to
constrain model predictions to be invariant to input noise. In this work, we
present a new perspective on how to effectively noise unlabeled examples and
argue that the quality of noising, specifically those produced by advanced data
augmentation methods, plays a crucial role in semi-supervised learning. By
substituting simple noising operations with advanced data augmentation methods
such as RandAugment and back-translation, our method brings substantial
improvements across six language and three vision tasks under the same
consistency training framework. On the IMDb text classification dataset, with
only 20 labeled examples, our method achieves an error rate of 4.20,
outperforming the state-of-the-art model trained on 25,000 labeled examples. On
a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms
all previous approaches and achieves an error rate of 5.43 with only 250
examples. Our method also combines well with transfer learning, e.g., when
finetuning from BERT, and yields improvements in the high-data regime, such as
ImageNet, whether there is only 10% labeled data or a full labeled set
with 1.3M extra unlabeled examples is used. Code is available at
https://github.com/google-research/uda. | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.CV",
"stat.ML"
]
|
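The consistency-training term at the heart of this approach can be sketched as a KL divergence between predictions on a clean unlabeled example and its strongly augmented version; the temperature sharpening and the `augment` callable (e.g., a RandAugment policy) are assumptions in this minimal form.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, augment, temp=0.4):
    """Sketch of the unsupervised consistency term."""
    with torch.no_grad():
        # Sharpened prediction on the clean example acts as the target.
        p_clean = F.softmax(model(x_unlabeled) / temp, dim=-1)
    logits_aug = model(augment(x_unlabeled))
    # Gradients flow only through the augmented branch.
    return F.kl_div(F.log_softmax(logits_aug, dim=-1), p_clean,
                    reduction="batchmean")
```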
User-intended visual content fills the hole regions of an input image in the
image editing scenario. The coarse low-level inputs, which typically consist of
sparse sketch lines and color dots, convey user intentions for content creation
(i.e., free-form editing). While existing methods combine an input image and
these low-level controls for CNN inputs, the corresponding feature
representations are not sufficient to convey user intentions, leading to
unfaithfully generated content. In this paper, we propose DeFLOCNet which
relies on a deep encoder-decoder CNN to retain the guidance of these controls
in the deep feature representations. In each skip-connection layer, we design a
structure generation block. Instead of attaching low-level controls to an input
image, we inject these controls directly into each structure generation block
for sketch line refinement and color propagation in the CNN feature space. We
then concatenate the modulated features with the original decoder features for
structure generation. Meanwhile, DeFLOCNet involves another decoder branch for
texture generation and detail enhancement. Both structures and textures are
rendered in the decoder, leading to user-intended editing results. Experiments
on benchmarks demonstrate that DeFLOCNet effectively transforms different user
intentions to create visually pleasing content. | [
"cs.CV"
]
|
We propose Embedding Propagation (EP), an unsupervised learning framework for
graph-structured data. EP learns vector representations of graphs by passing
two types of messages between neighboring nodes. Forward messages consist of
label representations such as representations of words and other attributes
associated with the nodes. Backward messages consist of gradients that result
from aggregating the label representations and applying a reconstruction loss.
Node representations are finally computed from the representation of their
labels. With significantly fewer parameters and hyperparameters, an instance of
EP is competitive with and often outperforms state-of-the-art unsupervised and
semi-supervised learning methods on a range of benchmark data sets. | [
"cs.LG"
]
|
In this paper, we propose a novel controllable text-to-image generative
adversarial network (ControlGAN), which can effectively synthesise high-quality
images and also control parts of the image generation according to natural
language descriptions. To achieve this, we introduce a word-level spatial and
channel-wise attention-driven generator that can disentangle different visual
attributes, and allow the model to focus on generating and manipulating
subregions corresponding to the most relevant words. Also, a word-level
discriminator is proposed to provide fine-grained supervisory feedback by
correlating words with image regions, facilitating training an effective
generator which is able to manipulate specific visual attributes without
affecting the generation of other content. Furthermore, perceptual loss is
adopted to reduce the randomness involved in the image generation, and to
encourage the generator to manipulate specific attributes required in the
modified text. Extensive experiments on benchmark datasets demonstrate that our
method outperforms existing state of the art, and is able to effectively
manipulate synthetic images using natural language descriptions. Code is
available at https://github.com/mrlibw/ControlGAN. | [
"cs.CV",
"cs.CL",
"cs.LG"
]
|
The task of accelerating large neural networks on general purpose hardware
has, in recent years, prompted the use of channel pruning to reduce network
size. However, the efficacy of pruning based approaches has since been called
into question. In this paper, we turn to distillation for model
compression---specifically, attention transfer---and develop a simple method
for discovering performance enhanced student networks. We combine channel
saliency metrics with empirical observations of runtime performance to design
more accurate networks for a given latency budget. We apply our methodology to
residual and densely-connected networks, and show that we are able to find
resource-efficient student networks on different hardware platforms while
maintaining very high accuracy. These performance-enhanced student networks
achieve up to 10% boosts in top-1 ImageNet accuracy over their channel-pruned
counterparts for the same inference time. | [
"stat.ML",
"cs.LG",
"cs.PF"
]
|
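Attention transfer, the distillation signal named above, is commonly written as an L2 match between channel-pooled, normalized activation maps of teacher and student; a minimal sketch of that standard form:

```python
import torch.nn.functional as F

def attention_transfer_loss(feat_student, feat_teacher):
    """Sketch: match spatial attention maps of (B, C, H, W) features.

    Each map is reduced over channels by summing squared activations,
    flattened, and L2-normalized before comparison."""
    def attn(f):
        return F.normalize(f.pow(2).sum(dim=1).flatten(1), dim=1)
    return (attn(feat_student) - attn(feat_teacher)).pow(2).mean()
```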
We show the equivalence of discrete choice models and a forest of binary
decision trees. This suggests that standard machine learning techniques based
on random forests can serve to estimate discrete choice models with an
interpretable output: the underlying trees can be viewed as the internal choice
process of customers. Our data-driven theoretical results show that random
forests can predict the choice probability of any discrete choice model
consistently. Moreover, our algorithm predicts unseen assortments with
mechanisms and errors that can be theoretically analyzed. We also prove that
the splitting criterion in random forests, the Gini index, is capable of
recovering preference rankings of customers. The framework has unique practical
advantages: it can capture behavioral patterns such as irrationality or
sequential searches; it handles nonstandard formats of training data that
result from aggregation; it can measure product importance based on how
frequently a random customer would make decisions depending on the presence of
the product; it can also incorporate price information and customer features.
Our numerical results show that using random forests to estimate customer
choices can outperform the best parametric models in synthetic and real
datasets when presented with enough data or when the underlying discrete choice
model cannot be correctly specified by existing parametric models. | [
"cs.LG",
"econ.EM",
"stat.ML"
]
|
Solar cell electroluminescence (EL) defect segmentation is an interesting and
challenging topic. Many methods have been proposed for EL defect detection, but
these methods are still unsatisfactory due to the diversity of defects and
backgrounds. In this paper, we provide a new idea of using a generative
adversarial network (GAN) for defect segmentation. Firstly, the GAN-based
method removes the defect region in the input defective image to get a
defect-free image, while keeping the background almost unchanged. Then, the
subtracted image is obtained by taking the difference between the defective input
image and the generated defect-free image. Finally, the defect region can be
segmented through thresholding the subtracted image. To keep the background
unchanged before and after image generation, we propose a novel strong identity
GAN (SIGAN), which adopts a novel strong identity loss to constrain
background consistency. The SIGAN can be used not only for defect segmentation
but also for augmenting small defective-sample datasets. Moreover, we release a
new solar cell EL image dataset named EL-2019, which includes three types of
images: crack, finger interruption and defect-free. Experiments on EL-2019
dataset show that the proposed method achieves 90.34% F-score, which
outperforms many state-of-the-art methods in terms of solar cell defects
segmentation results. | [
"cs.CV",
"eess.IV"
]
|
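The generate-subtract-threshold pipeline described above is simple enough to sketch end to end; `generator` stands for the trained SIGAN generator and the relative threshold is an assumed choice.

```python
import numpy as np

def segment_defects(defective, generator, thresh=0.2):
    """Sketch: remove defects with the generator, subtract, threshold."""
    defect_free = generator(defective)              # background preserved
    diff = np.abs(defective.astype(np.float32)
                  - defect_free.astype(np.float32))
    return diff > thresh * diff.max()               # binary defect mask
```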
In this paper, we investigate the decentralized statistical inference
problem, where a network of agents cooperatively recover a (structured) vector
from private noisy samples without centralized coordination. Existing
optimization-based algorithms suffer from issues of model mismatch and poor
convergence speed, and thus their performance degrades when the number of
communication rounds is limited. This motivates us to propose a
learning-based framework, which unrolls well-known decentralized optimization
algorithms (e.g., Prox-DGD and PG-EXTRA) into graph neural networks (GNNs). By
minimizing the recovery error via end-to-end training, this learning-based
framework resolves the model mismatch issue. Our convergence analysis (with
PG-EXTRA as the base algorithm) reveals that the learned model parameters may
accelerate the convergence and reduce the recovery error to a large extent. The
simulation results demonstrate that the proposed GNN-based learning methods
prominently outperform several state-of-the-art optimization-based algorithms
in convergence speed and recovery error. | [
"cs.LG",
"math.OC"
]
|
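The following is a deliberately simplified, centralized sketch of algorithm unrolling (the abstract above unrolls decentralized Prox-DGD/PG-EXTRA into GNNs; here plain proximal gradient descent for sparse recovery is unrolled instead, with per-iteration step sizes and shrinkage thresholds made learnable):

```python
import torch
import torch.nn as nn

class UnrolledProxGrad(nn.Module):
    def __init__(self, A, n_layers=10):
        super().__init__()
        self.A = A                                   # measurement matrix
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))
        self.lams = nn.Parameter(torch.full((n_layers,), 0.05))

    def forward(self, y):
        x = torch.zeros(self.A.shape[1])
        for t, lam in zip(self.steps, self.lams):
            grad = self.A.T @ (self.A @ x - y)       # grad of 0.5||Ax - y||^2
            z = x - t * grad
            x = torch.sign(z) * torch.relu(z.abs() - lam)   # soft threshold
        return x

A = torch.randn(20, 50)
model = UnrolledProxGrad(A)
x_hat = model(A @ torch.randn(50))   # training would minimize recovery error
```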
Robustness to transformation is desirable in many computer vision tasks,
given that input data often exhibits pose variance within classes. While
translation invariance and equivariance are documented properties of CNNs,
robustness to other transformations is typically encouraged through data
augmentation. We investigate the modulation of complex valued convolutional
weights with learned Gabor filters to enable orientation robustness. With Gabor
modulation, the designed network is able to generate orientation dependent
features free of interpolation with a single set of rotation-governing
parameters. Moreover, by learning rotation parameters alongside traditional
convolutional weights, the representation space is not constrained and may
adapt to the exact input transformation. We present Learnable Convolutional
Gabor Networks (LCGNs), that are parameter-efficient and offer increased model
complexity while keeping backpropagation simple. We demonstrate that learned
Gabor modulation utilising an end-to-end complex architecture enables rotation
invariance and equivariance on MNIST and a new dataset of simulated images of
galactic cirri. | [
"cs.CV"
]
|
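A hedged sketch of Gabor-modulated convolution in the spirit of the abstract above (real-valued rather than complex, with assumed kernel size and frequency): the convolutional weights are modulated elementwise by a Gabor envelope whose orientation is learned alongside them.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborModulatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=7):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)
        self.theta = nn.Parameter(torch.tensor(0.0))   # learned rotation
        self.pad = k // 2
        ax = torch.linspace(-1.0, 1.0, k)
        self.register_buffer("yy", ax.view(-1, 1).repeat(1, k))
        self.register_buffer("xx", ax.view(1, -1).repeat(k, 1))

    def gabor(self, sigma=0.5, freq=math.pi):
        # Rotate the sampling grid by theta, then evaluate the Gabor kernel.
        xr = self.xx * torch.cos(self.theta) + self.yy * torch.sin(self.theta)
        yr = -self.xx * torch.sin(self.theta) + self.yy * torch.cos(self.theta)
        return torch.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * torch.cos(freq * xr)

    def forward(self, x):
        # Elementwise modulation broadcasts the (k, k) Gabor kernel over
        # every (out_ch, in_ch) filter pair.
        return F.conv2d(x, self.weight * self.gabor(), padding=self.pad)

layer = GaborModulatedConv(3, 8)
out = layer(torch.randn(1, 3, 32, 32))   # -> (1, 8, 32, 32)
```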
Research in action detection has grown in recent years, as it plays a key
role in video understanding. Modelling the interactions (either spatial or
temporal) between actors and their context has proven to be essential for this
task. While recent works use spatial features with aggregated temporal
information, this work proposes to use non-aggregated temporal information.
This is done by adding an attention based method that leverages spatio-temporal
interactions between elements in the scene along the clip. The main contribution
of this work is the introduction of two cross attention blocks to effectively
model the spatial relations and capture short range temporal
interactions. Experiments on the AVA dataset show the advantages of the proposed
approach that models spatio-temporal relations between relevant elements in the
scene, outperforming other methods that model actor interactions with their
context by +0.31 mAP. | [
"cs.CV"
]
|
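As a rough illustration of the kind of module the abstract above describes (a generic sketch, not the authors' block), here is a scaled dot-product cross-attention where queries come from actor features and keys/values from spatio-temporal context features:

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, actors, context):
        # actors: (B, num_actors, D); context: (B, T*H*W, D)
        attn = torch.softmax(
            self.q(actors) @ self.k(context).transpose(1, 2) * self.scale,
            dim=-1)
        return actors + attn @ self.v(context)   # residual connection

block = CrossAttention(dim=64)
out = block(torch.randn(2, 3, 64), torch.randn(2, 100, 64))   # -> (2, 3, 64)
```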
The aim of this work is to evaluate the feasibility of re-implementing some
key parts of the widely used Weather Research and Forecasting WRF-SFIRE
simulator by replacing its core differential equations numerical solvers with
state-of-the-art physics-informed machine learning techniques to solve ODEs and
PDEs, in order to transform it into a real-time simulator for wildfire spread
prediction. The main programming language used is Julia, a compiled language
that offers better performance than interpreted ones, providing Just-in-Time
(JIT) compilation with different optimization levels. Moreover, Julia is
particularly well suited for numerical computation and for the solution of
complex physical models, both considering the syntax and the presence of some
specific libraries such as DifferentialEquations.jl and ModelingToolkit.jl. | [
"cs.LG"
]
|
Recently, language-guided global image editing draws increasing attention
with growing application potentials. However, previous GAN-based methods are
not only confined to domain-specific, low-resolution data but also lacking in
interpretability. To overcome the collective difficulties, we develop a
text-to-operation model to map the vague editing language request into a series
of editing operations, e.g., change contrast, brightness, and saturation. Each
operation is interpretable and differentiable. Furthermore, the only
supervision in the task is the target image, which is insufficient for a stable
training of sequential decisions. Hence, we propose a novel operation planning
algorithm to generate possible editing sequences from the target image as
pseudo ground truth. Comparison experiments on the newly collected MA5k-Req
dataset and GIER dataset show the advantages of our methods. Code is available
at https://jshi31.github.io/T2ONet. | [
"cs.CV"
]
|
Fine-grained image recognition is central to many multimedia tasks such as
search, retrieval and captioning. Unfortunately, these tasks are still
challenging since the appearance of samples of the same class can differ more
than that of samples from different classes. Attention has typically been
implemented in neural networks by selecting the most informative regions of the
image that improve classification. In contrast, in this paper, attention is not
applied at the image level but to the convolutional feature activations. In
essence, with our approach, the neural model learns to attend to lower-level
feature activations without requiring part annotations and uses those
activations to update and rectify the output likelihood distribution. The
proposed mechanism is modular, architecture-independent and efficient in terms
of both parameters and computation required. Experiments demonstrate that
well-known networks such as Wide Residual Networks and ResNeXt, when augmented
with our approach, systematically improve their classification accuracy and
become more robust to changes in deformation and pose and to the presence of
clutter. As a result, our proposal reaches state-of-the-art classification
accuracies in CIFAR-10, the Adience gender recognition task, Stanford Dogs, and
UEC-Food100 while obtaining competitive performance in ImageNet, CIFAR-100,
CUB200 Birds, and Stanford Cars. In addition, we analyze the different
components of our model, showing that the proposed attention modules succeed in
finding the most discriminative regions of the image. Finally, as a proof of
concept, we demonstrate that with only local predictions, an augmented neural
network can successfully classify an image before reaching any fully connected
layer, thus reducing the amount of computation by up to 10%. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
The semantic segmentation of skin lesions is an important and common initial
task in the computer aided diagnosis of dermoscopic images. Although deep
learning-based approaches have considerably improved the segmentation accuracy,
there is still room for improvement by addressing the major challenges, such as
variations in lesion shape, size, color and varying levels of contrast. In this
work, we propose the first deep semantic segmentation framework for dermoscopic
images which incorporates, along with the original RGB images, information
extracted using the physics of skin illumination and imaging. In particular, we
incorporate information from specific color bands, illumination invariant
grayscale images, and shading-attenuated images. We evaluate our method on
three datasets: the ISBI ISIC 2017 Skin Lesion Segmentation Challenge dataset,
the DermoFit Image Library, and the PH2 dataset and observe improvements of
12.02%, 4.30%, and 8.86% respectively in the mean Jaccard index over a baseline
model trained only with RGB images. | [
"cs.CV"
]
|
We present Language-mediated, Object-centric Representation Learning (LORL),
a paradigm for learning disentangled, object-centric scene representations from
vision and language. LORL builds upon recent advances in unsupervised object
discovery and segmentation, notably MONet and Slot Attention. While these
algorithms learn an object-centric representation just by reconstructing the
input image, LORL enables them to further learn to associate the learned
representations to concepts, i.e., words for object categories, properties, and
spatial relationships, from language input. These object-centric concepts
derived from language facilitate the learning of object-centric
representations. LORL can be integrated with various unsupervised object
discovery algorithms that are language-agnostic. Experiments show that the
integration of LORL consistently improves the performance of unsupervised
object discovery methods on two datasets via the help of language. We also show
that concepts learned by LORL, in conjunction with object discovery methods,
aid downstream tasks such as referring expression comprehension. | [
"cs.LG",
"cs.CL",
"cs.CV",
"stat.ML"
]
|
We develop an assisted learning framework for assisting organization-level
learners to improve their learning performance with limited and imbalanced
data. In particular, learners at the organization level usually have sufficient
computation resource, but are subject to stringent collaboration policy and
information privacy. Their limited imbalanced data often cause biased inference
and sub-optimal decision-making. In our assisted learning framework, an
organizational learner purchases assistance service from a service provider and
aims to enhance its model performance within a few assistance rounds. We
develop effective stochastic training algorithms for assisted deep learning and
assisted reinforcement learning. Different from existing distributed algorithms
that need to frequently transmit gradients or models, our framework allows the
learner to only occasionally share information with the service provider, and
still achieve a near-oracle model as if all the data were centralized. | [
"cs.LG"
]
|
Image matting is generally modeled as a space transform from the color space
to the alpha space. By estimating the alpha factor of the model, the foreground
of an image can be extracted. However, there is some dimensional information
redundancy in the alpha space, which often leads to the misjudgment of
pixels near the boundary between the foreground and the background. In this
paper, a manifold matting framework named Patch Alignment Manifold Matting is
proposed for image matting. In particular, we first propose a part modeling of
color space in the local image patch. We then perform whole alignment
optimization for approximating the alpha results using subspace reconstructing
error. Furthermore, we utilize Nesterov's algorithm to solve the optimization
problem. Finally, we apply some manifold learning methods in the framework and
obtain several image matting methods, such as ISOMAP matting and its
derived Cascade ISOMAP matting. The experimental results reveal that the
manifold matting framework and its two examples are effective when compared
with several representative matting methods. | [
"cs.CV"
]
|
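Nesterov's accelerated gradient, which the abstract above uses as its solver, in minimal form; it is applied here to a least-squares surrogate rather than the matting objective itself:

```python
import numpy as np

def nesterov(grad, x0, lr, iters=500):
    """Accelerated gradient descent with the standard momentum schedule."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - lr * grad(y)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # momentum lookahead
        x, t = x_new, t_new
    return x

A, b = np.random.randn(30, 10), np.random.randn(30)
lr = 1 / np.linalg.norm(A, 2) ** 2                  # 1 / Lipschitz constant
alpha = nesterov(lambda z: A.T @ (A @ z - b), np.zeros(10), lr)
```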
Early detection of suicidal ideation in depressed individuals can allow for
adequate medical attention and support, which in many cases is life-saving.
Recent NLP research focuses on classifying, from a given piece of text, if an
individual is suicidal or clinically healthy. However, there have been no major
attempts to differentiate between depression and suicidal ideation, which is an
important clinical challenge. Due to the scarce availability of EHR data,
suicide notes, or other similar verified sources, web query data has emerged as
a promising alternative. Online sources, such as Reddit, allow for anonymity
that prompts honest disclosure of symptoms, making it a plausible source even
in a clinical setting. However, these online datasets also result in lower
performance, which can be attributed to the inherent noise in web-scraped
labels, which necessitates a noise-removal process. Thus, we propose SDCNL, a
suicide versus depression classification method through a deep learning
approach. We utilize online content from Reddit to train our algorithm, and to
verify and correct noisy labels, we propose a novel unsupervised label
correction method which, unlike previous work, does not require prior noise
distribution information. Our extensive experimentation with multiple deep word
embedding models and classifiers display the strong performance of the method
in a new, challenging classification application. We make our code and dataset
available at https://github.com/ayaanzhaque/SDCNL | [
"cs.LG"
]
|
To mitigate the radiologist's workload, computer-aided diagnosis with the
capability to review and analyze medical images is gradually deployed. Deep
learning-based region of interest segmentation is among the most exciting use
cases. However, this paradigm is restricted in real-world clinical applications
due to poor robustness and generalization. The issue is exacerbated by a lack
of training data. In this paper, we address the challenge from the
representation learning point of view. We find that collapsed representations,
one of the main causes of poor robustness and generalization, can be avoided
through transfer learning. Therefore, we
propose a novel two-stage framework for robust generalized segmentation. In
particular, an unsupervised Tile-wise AutoEncoder (T-AE) pretraining
architecture is introduced to learn meaningful representations for improving the
generalization and robustness of the downstream tasks. Furthermore, the learned
knowledge is transferred to the segmentation benchmark. Coupled with an image
reconstruction network, the representation continues to be decoded, encouraging
the model to capture more semantic features. Experiments on lung segmentation
across multiple chest X-ray datasets are conducted. Empirically, the related experimental
results demonstrate the superior generalization capability of the proposed
framework on unseen domains in terms of high performance and robustness to
corruption, especially under the scenario of the limited training data. | [
"cs.CV",
"cs.AI"
]
|
In multiagent environments, several decision-making individuals interact
while adhering to the dynamics constraints imposed by the environment. These
interactions, combined with the potential stochasticity of the agents'
decision-making processes, make such systems complex and interesting to study
from a dynamical perspective. Significant research has been conducted on
learning models for forward-direction estimation of agent behaviors, for
example, pedestrian predictions used for collision-avoidance in self-driving
cars. However, in many settings, only sporadic observations of agents may be
available in a given trajectory sequence. For instance, in football, subsets of
players may come in and out of view of broadcast video footage, while
unobserved players continue to interact off-screen. In this paper, we study the
problem of multiagent time-series imputation, where available past and future
observations of subsets of agents are used to estimate missing observations for
other agents. Our approach, called the Graph Imputer, uses forward- and
backward-information in combination with graph networks and variational
autoencoders to enable learning of a distribution of imputed trajectories. We
evaluate our approach on a dataset of football matches, using a projective
camera module to train and evaluate our model for the off-screen player state
estimation setting. We illustrate that our method outperforms several
state-of-the-art approaches, including those hand-crafted for football. | [
"cs.LG",
"cs.AI",
"cs.MA"
]
|
As an effective data preprocessing step, feature selection has shown its
effectiveness to prepare high-dimensional data for many machine learning tasks.
The proliferation of high-dimensional, huge-volume big data, however, has
brought major challenges, e.g., computational complexity and stability on noisy
data, to existing feature-selection techniques. This paper introduces a novel
neural network-based feature selection architecture, dubbed Attention-based
Feature Selection (AFS). AFS consists of two detachable modules: an attention
module for feature weight generation and a learning module for problem
modeling. The attention module formulates the correlation problem between
features and the supervision target as a binary classification problem,
supported by a shallow attention net for each feature. Feature weights are
generated based on the distribution of the respective feature selection
patterns, adjusted by backpropagation during the training process. The
detachable structure allows existing off-the-shelf models to be directly
reused, which allows for much less training time and lower demands on training
data and expertise. A hybrid initialization method is also introduced to boost
the selection accuracy for datasets without enough samples for feature weight
generation. Experimental results show that AFS achieves the best accuracy and
stability in comparison to several state-of-the-art feature selection
algorithms on MNIST, noisy MNIST, and several datasets with small samples. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
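A hedged sketch of the attention-driven feature weighting described above (layer sizes and the sigmoid head are assumptions, not the paper's AFS architecture): a small net emits one weight per input feature, and the re-weighted features feed an ordinary learning module trained end to end.

```python
import torch
import torch.nn as nn

n_features, n_classes = 20, 3

# Attention module: maps an input to one weight in (0, 1) per feature.
attention = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                          nn.Linear(64, n_features), nn.Sigmoid())
# Learning module: any off-the-shelf model; a linear classifier here.
learner = nn.Linear(n_features, n_classes)

x = torch.randn(32, n_features)
weights = attention(x)             # per-sample feature weights
logits = learner(x * weights)      # learning module sees re-weighted features
# After training, weights.mean(0) gives a ranking usable for selection.
```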
Visual dialog is a task of answering a sequence of questions grounded in an
image using the previous dialog history as context. In this paper, we study how
to address two fundamental challenges for this task: (1) reasoning over
underlying semantic structures among dialog rounds and (2) identifying several
appropriate answers to the given question. To address these challenges, we
propose a Sparse Graph Learning (SGL) method to formulate visual dialog as a
graph structure learning task. SGL infers inherently sparse dialog structures
by incorporating binary and score edges and leveraging a new structural loss
function. Next, we introduce a Knowledge Transfer (KT) method that extracts the
answer predictions from the teacher model and uses them as pseudo labels. We
propose KT to remedy the shortcomings of single ground-truth labels, which
severely limit the ability of a model to obtain multiple reasonable answers. As
a result, our proposed model significantly improves reasoning capability
compared to baseline methods and outperforms the state-of-the-art approaches on
the VisDial v1.0 dataset. The source code is available at
https://github.com/gicheonkang/SGLKT-VisDial. | [
"cs.CV",
"cs.CL",
"cs.LG"
]
|
In this work, we consider the problem of robust parameter estimation from
observational data in the context of linear structural equation models (LSEMs).
LSEMs are a popular and well-studied class of models for inferring causality in
the natural and social sciences. One of the main problems related to LSEMs is
to recover the model parameters from the observational data. Under various
conditions on LSEMs and the model parameters the prior work provides efficient
algorithms to recover the parameters. However, these results are often about
generic identifiability. In practice, generic identifiability is not sufficient
and we need robust identifiability: small changes in the observational data
should not affect the parameters by a huge amount. Robust identifiability has
received far less attention and remains poorly understood. Sankararaman et al.
(2019) recently provided a set of sufficient conditions on parameters under
which robust identifiability is feasible. However, a limitation of their work
is that their results only apply to a small sub-class of LSEMs, called
``bow-free paths.'' In this work, we significantly extend their work along
multiple dimensions. First, for a large and well-studied class of LSEMs, namely
``bow free'' models, we provide a sufficient condition on model parameters
under which robust identifiability holds, thereby removing the restriction of
paths required by prior work. We then show that this sufficient condition holds
with high probability which implies that for a large set of parameters robust
identifiability holds and that for such parameters, existing algorithms already
achieve robust identifiability. Finally, we validate our results on both
simulated and real-world datasets. | [
"cs.LG",
"cs.AI",
"cs.DS",
"stat.ML"
]
|
Many computer vision problems can be formulated as binary quadratic programs
(BQPs). Two classic relaxation methods are widely used for solving BQPs,
namely, spectral methods and semidefinite programming (SDP), each with their
own advantages and disadvantages. Spectral relaxation is simple and easy to
implement, but its bound is loose. Semidefinite relaxation has a tighter bound,
but its computational complexity is high for large scale problems. We present a
new SDP formulation for BQPs, with two desirable properties. First, it has a
similar relaxation bound to conventional SDP formulations. Second, compared
with conventional SDP methods, the new SDP formulation leads to a significantly
more efficient and scalable dual optimization approach, which has the same
degree of complexity as spectral methods. Extensive experiments on various
applications including clustering, image segmentation, co-segmentation and
registration demonstrate the usefulness of our SDP formulation for solving
large-scale BQPs. | [
"cs.CV",
"cs.LG"
]
|
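For reference, the spectral relaxation that the abstract above contrasts with SDP fits in a few lines: relax x in {-1, +1}^n for max x^T A x to the sphere of radius sqrt(n), solve via the leading eigenvector, and round by taking signs.

```python
import numpy as np

def spectral_bqp(A):
    """Spectral relaxation of max x^T A x over x in {-1, +1}^n."""
    A = (A + A.T) / 2                       # symmetrize
    _, vecs = np.linalg.eigh(A)             # eigenvalues in ascending order
    x = np.sign(vecs[:, -1])                # leading eigenvector, rounded
    x[x == 0] = 1                           # break ties deterministically
    return x

A = np.random.randn(8, 8)
x = spectral_bqp(A)
print(x, x @ A @ x)                         # rounded solution and objective
```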
An advanced conceptual validation framework for multimodal multivariate time
series defines multi-level contextual anomaly detection, ranging from a
univariate context definition to a multimodal abstract context representation
learnt by an Autoencoder from heterogeneous data (images, time series, sounds,
etc.) associated to an industrial process. Each level of the framework is
either applicable to historical data and/or live data. The ultimate level is
based on causal discovery to identify causal relations in observational data in
order to exclude biased data to train machine learning models and provide means
to the domain expert to discover unknown causal relations in the underlying
process represented by the data sample. A Long Short-Term Memory Autoencoder is
successfully evaluated on multivariate time series to validate the learnt
representation of abstract contexts associated to multiple assets of a blast
furnace. A research roadmap is identified to combine causal discovery and
representation learning as an enabler for unsupervised Root Cause Analysis
applied to the process industry. | [
"cs.LG",
"stat.ML"
]
|
The judicious employment of RGB and depth data is of great significance for
advancing computer vision tasks and robot-environment
interaction. However, there are different advantages and disadvantages in the
early and late fusion of the two types of data. Besides, due to the diversity
of object information, using a single type of data in a specific scenario tends
to result in semantic misleading. Based on the above considerations, we propose
an adaptively-cooperative fusion network (ACFNet) with ResinRes structure for
salient object detection. This structure is designed to flexibly utilize the
advantages of feature fusion in early and late stages. Secondly, an
adaptively-cooperative semantic guidance (ACG) scheme is designed to suppress
inaccurate features in the guidance phase. Further, we propose a type-based
attention module (TAM) to optimize the network and enhance the multi-scale
perception of different objects. For different objects, the features generated
by different types of convolution are enhanced or suppressed by the gated
mechanism for segmentation optimization. ACG and TAM optimize the transfer of
feature streams according to their data attributes and convolution attributes,
respectively. Sufficient experiments conducted on RGB-D SOD datasets illustrate
that the proposed network performs favorably against 18 state-of-the-art
algorithms. | [
"cs.CV"
]
|
Water quality has a direct impact on industry, agriculture, and public
health. Algae species are common indicators of water quality because algal
communities are sensitive to changes in their habitats, offering valuable
knowledge on variations in water quality. However, water quality analysis
requires professional inspection of algal detection and classification under
microscopes, which is very time-consuming and tedious. In this paper, we
propose a novel multi-target deep learning framework for algal detection and
classification. Extensive experiments were carried out on a large-scale colored
microscopic algal dataset. Experimental results demonstrate that the proposed
method achieves promising performance on algal detection, class
identification and genus identification. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
Volumetric image segmentation with convolutional neural networks (CNNs)
encounters several challenges, which are specific to medical images. Among
these challenges are large volumes of interest, high class imbalances, and
difficulties in learning shape representations. To tackle these challenges, we
propose to improve over traditional CNN-based volumetric image segmentation
through point-wise classification of point clouds. The sparsity of point clouds
allows processing of entire image volumes, balancing highly imbalanced
segmentation problems, and explicitly learning an anatomical shape. We build
upon PointCNN, a neural network proposed to process point clouds, and propose
here to jointly encode shape and volumetric information within the point cloud
in a compact and computationally effective manner. We demonstrate how this
approach can then be used to refine CNN-based segmentation, which yields
significantly improved results in our experiments on the difficult task of
peripheral nerve segmentation from magnetic resonance neurography images. By
synthetic experiments, we further show the capability of our approach in
learning an explicit anatomical shape representation. | [
"cs.CV",
"eess.IV"
]
|
In recent years an increasing number of researchers and practitioners have
been suggesting algorithms for large-scale neural network architecture search:
genetic algorithms, reinforcement learning, learning curve extrapolation, and
accuracy predictors. None of them, however, has demonstrated high performance
on unseen datasets without training new experiments. We propose
a new deep neural network accuracy predictor, that estimates in fractions of a
second classification performance for unseen input datasets, without training.
In contrast to previously proposed approaches, our prediction is not only
calibrated on the topological network information, but also on the
characterization of the dataset-difficulty which allows us to re-tune the
prediction without any training. Our predictor achieves a performance which
exceeds 100 networks per second on a single GPU, thus creating the opportunity
to perform large-scale architecture search within a few minutes. We present
results of two searches performed in 400 seconds on a single GPU. Our best
discovered networks reach 93.67% accuracy for CIFAR-10 and 81.01% for
CIFAR-100, verified by training. These networks are performance-competitive
with other automatically discovered state-of-the-art networks; however, we
needed only a small fraction of the time-to-solution and computational resources.
"cs.LG",
"stat.ML"
]
|
We propose the use of hyperedge replacement graph grammars for factor graphs,
or factor graph grammars (FGGs) for short. FGGs generate sets of factor graphs
and can describe a more general class of models than plate notation, dynamic
graphical models, case-factor diagrams, and sum-product networks can. Moreover,
inference can be done on FGGs without enumerating all the generated factor
graphs. For finite variable domains (but possibly infinite sets of graphs), a
generalization of variable elimination to FGGs allows exact and tractable
inference in many situations. For finite sets of graphs (but possibly infinite
variable domains), an FGG can be converted to a single factor graph amenable to
standard inference techniques. | [
"cs.LG"
]
|
Neural network compression techniques have become increasingly popular as
they can drastically reduce the storage and computation requirements for very
large networks. Recent empirical studies have illustrated that even simple
pruning strategies can be surprisingly effective, and several theoretical
studies have shown that compressible networks (in specific senses) should
achieve a low generalization error. Yet, a theoretical characterization of the
underlying cause that makes the networks amenable to such simple compression
schemes is still missing. In this study, we address this fundamental question
and reveal that the dynamics of the training algorithm has a key role in
obtaining such compressible networks. Focusing our attention on stochastic
gradient descent (SGD), our main contribution is to link compressibility to two
recently established properties of SGD: (i) as the network size goes to
infinity, the system can converge to a mean-field limit, where the network
weights behave independently, (ii) for a large step-size/batch-size ratio, the
SGD iterates can converge to a heavy-tailed stationary distribution. In the
case where these two phenomena occur simultaneously, we prove that the networks
are guaranteed to be '$\ell_p$-compressible', and the compression errors of
different pruning techniques (magnitude, singular value, or node pruning)
become arbitrarily small as the network size increases. We further prove
generalization bounds adapted to our theoretical framework, which indeed
confirm that the generalization error will be lower for more compressible
networks. Our theory and numerical study on various neural networks show that
large step-size/batch-size ratios introduce heavy-tails, which, in combination
with overparametrization, result in compressibility. | [
"stat.ML",
"cs.LG"
]
|
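A small sketch of the magnitude-pruning setting the theory above covers: keep only the largest-magnitude fraction of (here synthetic, heavy-tailed) weights and measure the relative compression error, which the paper argues vanishes as networks grow.

```python
import numpy as np

def magnitude_prune(w, keep=0.1):
    """Zero out all but the `keep` fraction of largest-magnitude weights."""
    k = max(1, int(keep * w.size))
    thresh = np.sort(np.abs(w).ravel())[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

w = np.random.standard_t(df=2, size=10_000)        # heavy-tailed weights
w_pruned = magnitude_prune(w, keep=0.1)
rel_err = np.linalg.norm(w - w_pruned) / np.linalg.norm(w)
print(f"relative l2 pruning error: {rel_err:.3f}")
```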
A deep reinforcement learning (DRL) agent observes its states through
observations, which may contain natural measurement errors or adversarial
noises. Since the observations deviate from the true states, they can mislead
the agent into making suboptimal actions. Several works have shown this
vulnerability via adversarial attacks, but existing approaches on improving the
robustness of DRL under this setting have limited success and lack for
theoretical principles. We show that naively applying existing techniques on
improving robustness for classification tasks, like adversarial training, is
ineffective for many RL tasks. We propose the state-adversarial Markov decision
process (SA-MDP) to study the fundamental properties of this problem, and
develop a theoretically principled policy regularization which can be applied
to a large family of DRL algorithms, including proximal policy optimization
(PPO), deep deterministic policy gradient (DDPG) and deep Q networks (DQN), for
both discrete and continuous action control problems. We significantly improve
the robustness of PPO, DDPG and DQN agents under a suite of strong white box
adversarial attacks, including new attacks of our own. Additionally, we find
that a robust policy noticeably improves DRL performance even without an
adversary in a number of environments. Our code is available at
https://github.com/chenhongge/StateAdvDRL. | [
"cs.LG",
"stat.ML"
]
|
In an effort to overcome limitations of reward-driven feature learning in
deep reinforcement learning (RL) from images, we propose decoupling
representation learning from policy learning. To this end, we introduce a new
unsupervised learning (UL) task, called Augmented Temporal Contrast (ATC),
which trains a convolutional encoder to associate pairs of observations
separated by a short time difference, under image augmentations and using a
contrastive loss. In online RL experiments, we show that training the encoder
exclusively using ATC matches or outperforms end-to-end RL in most
environments. Additionally, we benchmark several leading UL algorithms by
pre-training encoders on expert demonstrations and using them, with weights
frozen, in RL agents; we find that agents using ATC-trained encoders outperform
all others. We also train multi-task encoders on data from multiple
environments and show generalization to different downstream RL tasks. Finally,
we ablate components of ATC, and introduce a new data augmentation to enable
replay of (compressed) latent images from pre-trained encoders when RL requires
augmentation. Our experiments span visually diverse RL benchmarks in DeepMind
Control, DeepMind Lab, and Atari, and our complete code is available at
https://github.com/astooke/rlpyt/tree/master/rlpyt/ul. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
]
|
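A hedged sketch of the contrastive objective behind ATC (the augmentation pipeline and encoder are omitted; the temperature value is an assumption): embeddings of an observation are pulled toward embeddings of the augmented observation a short delay later, via an InfoNCE loss over the batch.

```python
import torch
import torch.nn.functional as F

def infonce(anchors, positives, temperature=0.1):
    # anchors, positives: (B, D); row i of each comes from the same
    # trajectory, a short delay apart, under independent augmentations.
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = a @ p.T / temperature            # (B, B) similarity matrix
    labels = torch.arange(a.size(0))          # the matching pair is the target
    return F.cross_entropy(logits, labels)

loss = infonce(torch.randn(64, 128), torch.randn(64, 128))
```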
Recently, DETR pioneered the solution of vision tasks with transformers, it
directly translates the image feature map into the object detection result.
Though effective, translating the full feature map can be costly due to
redundant computation on some area like the background. In this work, we
encapsulate the idea of reducing spatial redundancy into a novel poll and pool
(PnP) sampling module, with which we build an end-to-end PnP-DETR architecture
that adaptively allocates its computation spatially to be more efficient.
Concretely, the PnP module abstracts the image feature map into fine foreground
object feature vectors and a small number of coarse background contextual
feature vectors. The transformer models information interaction within the
fine-coarse feature space and translates the features into the detection
result. Moreover, the PnP-augmented model can instantly achieve various desired
trade-offs between performance and computation with a single model by varying
the sampled feature length, without needing to train multiple models as
existing methods do. Thus it offers greater flexibility for deployment in
diverse scenarios with varying computation constraints. We further validate the
generalizability of the PnP module on panoptic segmentation and the recent
transformer-based image recognition model ViT and show consistent efficiency
gain. We believe our method makes a step for efficient visual analysis with
transformers, wherein spatial redundancy is commonly observed. Code will be
available at \url{https://github.com/twangnh/pnp-detr}. | [
"cs.CV"
]
|
Normalizing unwanted color variations due to differences in staining
processes and scanner responses has been shown to aid machine learning in
computational pathology. Of the several popular techniques for color
normalization, structure preserving color normalization (SPCN) is
well-motivated, convincingly tested, and published with its code base. However,
SPCN makes occasional errors in color basis estimation leading to artifacts
such as swapping the color basis vectors between stains or giving a colored
tinge to the background with no tissue. We made several algorithmic
improvements to remove these artifacts. Additionally, the original SPCN code is
not readily usable on gigapixel whole slide images (WSIs) due to long run
times, use of proprietary software platform and libraries, and its inability to
automatically handle WSIs. We completely rewrote the software such that it can
automatically handle images of any size in popular WSI formats. Our software
utilizes GPU-acceleration and open-source libraries that are becoming
ubiquitous with the advent of deep learning. We also made several other small
improvements and achieved a multifold overall speedup on gigapixel images. Our
algorithm and software are usable right out of the box by the computational
pathology community. | [
"cs.CV"
]
|
Recommendation is a prevalent application of machine learning that affects
many users; therefore, it is important for recommender models to be accurate
and interpretable. In this work, we propose a method to both interpret and
augment the predictions of black-box recommender systems. In particular, we
propose to interpret feature interactions from a source recommender model and
explicitly encode these interactions in a target recommender model, where both
source and target models are black-boxes. By not assuming the structure of the
recommender system, our approach can be used in general settings. In our
experiments, we focus on a prominent use of machine learning recommendation:
ad-click prediction. We found that our interaction interpretations are both
informative and predictive, e.g., significantly outperforming existing
recommender models. What's more, the same approach to interpret interactions
can provide new insights into domains even beyond recommendation, such as text
and image classification. | [
"stat.ML",
"cs.LG"
]
|
Predicting the future trajectories of multiple interacting agents in a scene
has become an increasingly important problem for many different applications
ranging from control of autonomous vehicles and social robots to security and
surveillance. This problem is compounded by the presence of social interactions
between humans and their physical interactions with the scene. While the
existing literature has explored some of these cues, they mainly ignored the
multimodal nature of each human's future trajectory. In this paper, we present
Social-BiGAT, a graph-based generative adversarial network that generates
realistic, multimodal trajectory predictions by better modelling the social
interactions of pedestrians in a scene. Our method is based on a graph
attention network (GAT) that learns reliable feature representations that
encode the social interactions between humans in the scene, and a recurrent
encoder-decoder architecture that is trained adversarially to predict, based on
the features, the humans' paths. We explicitly account for the multimodal
nature of the prediction problem by forming a reversible transformation between
each scene and its latent noise vector, as in Bicycle-GAN. We show that our
framework achieves state-of-the-art performance compared to several
baselines on existing trajectory forecasting benchmarks. | [
"cs.CV",
"cs.LG"
]
|
Optimal engine operation during a transient driving cycle is the key to
achieving greater fuel economy, engine efficiency, and reduced emissions. In
order to achieve continuously optimal engine operation, engine calibration
methods use a combination of static correlations obtained from dynamometer
tests for steady-state operating points and road and/or track performance data.
As the parameter space of control variables, design variable constraints, and
objective functions increases, the cost and duration for optimal calibration
become prohibitively large. In order to reduce the number of dynamometer tests
required for calibrating modern engines, a large-scale simulation-driven
machine learning approach is presented in this work. A parallel, fast, robust,
physics-based reduced-order engine simulator is used to obtain performance and
emission characteristics of engines over a wide range of control parameters
under various transient driving conditions (drive cycles). We scale the
simulation up to 3,906 nodes of the Theta supercomputer at the Argonne
Leadership Computing Facility to generate data required to train a machine
learning model. The trained model is then used to predict various engine
parameters of interest. Our results show that a deep-neural-network-based
surrogate model achieves high accuracy for various engine parameters such as
exhaust temperature, exhaust pressure, nitric oxide, and engine torque. Once
trained, the deep-neural-network-based surrogate model is fast for inference:
it requires about 16 microseconds to predict the engine performance and
emissions for a single design configuration compared with about 0.5 s per
configuration with the engine simulator. Moreover, we demonstrate that transfer
learning and retraining can be leveraged to incrementally retrain the surrogate
model to cope with new configurations that fall outside the training data
space. | [
"cs.LG",
"stat.ML"
]
|
Scalability in terms of object density in a scene is a primary challenge in
unsupervised sequential object-oriented representation learning. Most of the
previous models have been shown to work only on scenes with a few objects. In
this paper, we propose SCALOR, a probabilistic generative world model for
learning SCALable Object-oriented Representation of a video. With the proposed
spatially-parallel attention and proposal-rejection mechanisms, SCALOR can deal
with orders of magnitude larger numbers of objects compared to the previous
state-of-the-art models. Additionally, we introduce a background module that
allows SCALOR to model complex dynamic backgrounds as well as many foreground
objects in the scene. We demonstrate that SCALOR can deal with crowded scenes
containing up to a hundred objects while jointly modeling complex dynamic
backgrounds. Importantly, SCALOR is the first unsupervised object
representation model shown to work for natural scenes containing several tens
of moving objects. | [
"cs.LG",
"stat.ML"
]
|
We investigate the internal representations that a recurrent neural network
(RNN) uses while learning to recognize a regular formal language. Specifically,
we train an RNN on positive and negative examples from a regular language, and
ask if there is a simple decoding function that maps states of this RNN to
states of the minimal deterministic finite automaton (MDFA) for the language.
Our experiments show that such a decoding function indeed exists, and that it
maps states of the RNN not to MDFA states, but to states of an {\em
abstraction} obtained by clustering small sets of MDFA states into
"superstates". A qualitative analysis reveals that the abstraction often has a
simple interpretation. Overall, the results suggest a strong structural
relationship between internal representations used by RNNs and finite automata,
and explain the well-known ability of RNNs to recognize formal grammatical
structure. | [
"cs.LG",
"cs.FL"
]
|
We present a learning-based method for synthesizing novel views of complex
scenes using only unstructured collections of in-the-wild photographs. We build
on Neural Radiance Fields (NeRF), which uses the weights of a multilayer
perceptron to model the density and color of a scene as a function of 3D
coordinates. While NeRF works well on images of static subjects captured under
controlled settings, it is incapable of modeling many ubiquitous, real-world
phenomena in uncontrolled images, such as variable illumination or transient
occluders. We introduce a series of extensions to NeRF to address these issues,
thereby enabling accurate reconstructions from unstructured image collections
taken from the internet. We apply our system, dubbed NeRF-W, to internet photo
collections of famous landmarks, and demonstrate temporally consistent novel
view renderings that are significantly closer to photorealism than the prior
state of the art. | [
"cs.CV",
"cs.GR",
"cs.LG"
]
|
A* is a popular path-finding algorithm, but it can only be applied to those
domains where a good heuristic function is known. Inspired by recent methods
combining Deep Neural Networks (DNNs) and trees, this study demonstrates how to
train a heuristic represented by a DNN and combine it with A*. This new
algorithm, which we call aleph-star, can be used efficiently in domains where the
input to the heuristic could be processed by a neural network. We compare
aleph-star to N-Step Deep Q-Learning (DQN; Mnih et al., 2013) in a driving
simulation with pixel-based input, and demonstrate significantly better
performance in this scenario. | [
"cs.LG",
"stat.ML"
]
|
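Below is a compact A* in which the heuristic is an arbitrary callable, so a trained DNN can be slotted in as the abstract above proposes; the grid domain and the Manhattan stand-in heuristic are illustrative assumptions, not the aleph-star implementation.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search; `heuristic` can be any callable, e.g. a trained model."""
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in neighbors(node):
            heapq.heappush(frontier, (g + cost + heuristic(nxt),
                                      g + cost, nxt, path + [nxt]))
    return None

# 4-connected 10x10 grid with a Manhattan stand-in for a learned heuristic.
nbrs = lambda p: [((p[0] + dx, p[1] + dy), 1)
                  for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                  if 0 <= p[0] + dx < 10 and 0 <= p[1] + dy < 10]
goal = (9, 9)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # or: model(p)
print(a_star((0, 0), goal, nbrs, h))
```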
Visual Information Extraction (VIE) task aims to extract key information from
multifarious document images (e.g., invoices and purchase receipts). Most
previous methods treat the VIE task simply as a sequence labeling problem or
classification problem, which requires models to carefully identify each kind
of semantics by introducing multimodal features, such as font, color, layout.
But simply introducing multimodal features cannot work well when faced with
numeric semantic categories or ambiguous texts. To address this issue, in
this paper we propose a novel key-value matching model based on a graph neural
network for VIE (MatchVIE). Through key-value matching based on relevancy
evaluation, the proposed MatchVIE can bypass recognition of the various
semantics and simply focus on the strong relevancy between entities.
Besides, we introduce a simple but effective operation, Num2Vec, to tackle the
instability of encoded values, which helps the model converge more smoothly.
Comprehensive experiments demonstrate that the proposed MatchVIE can
significantly outperform previous methods. Notably, to the best of our
knowledge, MatchVIE may be the first attempt to tackle the VIE task by modeling
the relevancy between keys and values and it is a good complement to the
existing methods. | [
"cs.CV",
"cs.AI"
]
|
Deep neural networks (DNN) are black-box algorithms. They are trained using a
gradient-descent backpropagation technique, which trains the weights in each
layer with the sole goal of minimizing training error. Hence, the resulting
weights cannot be directly explained. Using Topological Data Analysis (TDA), we
can gain insight into how the neural network is thinking, specifically by
analyzing the activation values of validation images as they pass through each layer. | [
"cs.LG",
"55U99, 68T05"
]
|
Molecule representation learning (MRL) methods aim to embed molecules into a
real vector space. However, existing SMILES-based (Simplified Molecular-Input
Line-Entry System) or GNN-based (Graph Neural Networks) MRL methods either take
SMILES strings as input that have difficulty in encoding molecule structure
information, or over-emphasize the importance of GNN architectures but neglect
their generalization ability. Here we propose using chemical reactions to
assist learning molecule representation. The key idea of our approach is to
preserve the equivalence of molecules with respect to chemical reactions in the
embedding space, i.e., forcing the sum of reactant embeddings and the sum of
product embeddings to be equal for each chemical equation. This constraint is
proven effective to 1) keep the embedding space well-organized and 2) improve
the generalization ability of molecule embeddings. Moreover, our model can use
any GNN as the molecule encoder and is thus agnostic to GNN architectures.
Experimental results demonstrate that our method achieves state-of-the-art
performance in a variety of downstream tasks, e.g., 17.4% absolute Hit@1 gain
in chemical reaction prediction, 2.3% absolute AUC gain in molecule property
prediction, and 18.5% relative RMSE gain in graph-edit-distance prediction,
respectively, over the best baseline method. The code is available at
https://github.com/hwwang55/MolR. | [
"cs.LG",
"physics.chem-ph",
"q-bio.QM"
]
|
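The core equivalence constraint above can be sketched directly; the margin-based hinge form and the generic random embeddings below are assumptions (the paper's molecule encoder is a GNN):

```python
import torch

def reaction_loss(reactant_embs, product_embs, neg_product_embs, margin=1.0):
    """For a reaction R -> P, pull sum(R) toward sum(P) and push it away
    from the summed embeddings of a mismatched (negative) product set."""
    d_pos = (reactant_embs.sum(0) - product_embs.sum(0)).norm()
    d_neg = (reactant_embs.sum(0) - neg_product_embs.sum(0)).norm()
    return torch.relu(d_pos - d_neg + margin)   # hinge loss

loss = reaction_loss(torch.randn(2, 32), torch.randn(1, 32), torch.randn(1, 32))
```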
Class imbalance presents a major hurdle in the application of data mining
methods. A common practice to deal with it is to create ensembles of
classifiers that learn from resampled balanced data. For example, bagged
decision trees combined with random undersampling (RUS) or the synthetic
minority oversampling technique (SMOTE). However, most of the resampling
methods entail asymmetric changes to the examples of different classes, which
in turn can introduce its own biases in the model. Furthermore, those methods
require a performance measure to be specified a priori before learning. An
alternative is to use a so-called threshold-moving method that a posteriori
changes the decision threshold of a model to counteract the imbalance, and thus
has the potential to adapt to the performance measure of interest. Surprisingly,
little attention has been paid to the potential of combining bagging ensemble
with threshold-moving. In this paper, we present probability thresholding
bagging (PT-bagging), a versatile plug-in method that fills this gap. Contrary
to usual rebalancing practice, our method preserves the natural class
distribution of the data resulting in well calibrated posterior probabilities.
We also extend the proposed method to handle multiclass data. The method is
validated on binary and multiclass benchmark data sets. We perform analyses
that provide insights into the proposed method. | [
"cs.LG",
"stat.AP",
"stat.ML"
]
|
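A hedged sketch of threshold-moving on top of a bagging ensemble trained on the natural class distribution, in the spirit of PT-bagging (the dataset, metric, and threshold grid are illustrative): probabilities stay calibrated, and the decision threshold is tuned a posteriori for the measure of interest.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Imbalanced binary data: ~95% negatives, ~5% positives.
X, y = make_classification(n_samples=3000, weights=[0.95], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Bagged trees trained on the natural (unbalanced) class distribution.
ensemble = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
proba = ensemble.predict_proba(X_val)[:, 1]

# Threshold-moving: pick the threshold that maximizes validation F1.
thresholds = np.linspace(0.01, 0.99, 99)
best = max(thresholds, key=lambda t: f1_score(y_val, (proba >= t).astype(int)))
print(f"moved threshold from 0.50 to {best:.2f}")
```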
In the space of only a few years, deep generative modeling has revolutionized
how we think of artificial creativity, yielding autonomous systems which
produce original images, music, and text. Inspired by these successes,
researchers are now applying deep generative modeling techniques to the
generation and optimization of molecules - in our review we found 45 papers on
the subject published in the past two years. These works point to a future
where such systems will be used to generate lead molecules, greatly reducing
resources spent downstream synthesizing and characterizing bad leads in the
lab. In this review we survey the increasingly complex landscape of models and
representation schemes that have been proposed. The four classes of techniques
we describe are recursive neural networks, autoencoders, generative adversarial
networks, and reinforcement learning. After first discussing some of the
mathematical fundamentals of each technique, we draw high level connections and
comparisons with other techniques and expose the pros and cons of each. Several
important high level themes emerge as a result of this work, including the
shift away from the SMILES string representation of molecules towards more
sophisticated representations such as graph grammars and 3D representations,
the importance of reward function design, the need for better standards for
benchmarking and testing, and the benefits of adversarial training and
reinforcement learning over maximum likelihood based training. | [
"cs.LG",
"physics.chem-ph",
"stat.ML"
]
|
We present here a model to take advantage of the multi-task nature of complex
datasets by learning to separate tasks and subtasks in an end-to-end manner by
biasing competitive interactions in the network. This method does not require
additional labelling or reformatting of data in a dataset. We propose an
alternate view to the monolithic one-task-fits-all learning of multi-task
problems, and describe a model based on a theory of neuronal attention from
neuroscience, proposed by Desimone. We create and exhibit a new toy dataset,
based on the MNIST dataset, which we call MNIST-QA, for testing Visual Question
Answering architectures in a low-dimensional environment while preserving the
more difficult components of the Visual Question Answering task, and
demonstrate the proposed network architecture on this new dataset, as well as
on COCO-QA and DAQUAR-FULL. We then demonstrate that this model eliminates
catastrophic interference between tasks on a newly created toy dataset and
provides competitive results in the Visual Question Answering space. We provide
further evidence that Visual Question Answering can be approached as a
multi-task problem, and demonstrate that this new architecture based on the
Biased Competition model is capable of learning to separate and learn the tasks
in an end-to-end fashion without the need for task labels. | [
"cs.CV",
"cs.LG"
]
|
The recent advances in deep transfer learning reveal that adversarial
learning can be embedded into deep networks to learn more transferable features
to reduce the distribution discrepancy between two domains. Existing
adversarial domain adaptation methods either learn a single domain
discriminator to align the global source and target distributions or pay
attention to align subdomains based on multiple discriminators. However, in
real applications, the marginal (global) and conditional (local) distributions
between domains are often contributing differently to the adaptation. There is
currently no method to dynamically and quantitatively evaluate the relative
importance of these two distributions for adversarial learning. In this paper,
we propose a novel Dynamic Adversarial Adaptation Network (DAAN) to dynamically
learn domain-invariant representations while quantitatively evaluating the
relative importance of global and local domain distributions. To the best of
our knowledge, DAAN is the first attempt to perform dynamic adversarial
distribution adaptation for deep adversarial learning. DAAN is extremely easy
to implement and train in real applications. We theoretically analyze the
effectiveness of DAAN, and it can also be explained in an attention strategy.
Extensive experiments demonstrate that DAAN achieves better classification
accuracy compared to state-of-the-art deep and adversarial methods. Results
also imply the necessity and effectiveness of the dynamic distribution
adaptation in adversarial transfer learning. | [
"cs.LG",
"stat.ML"
]
|
Neurons in the brain communicate with each other through discrete action
spikes as opposed to continuous signal transmission in artificial neural
networks. Therefore, the traditional techniques for optimization of parameters
in neural networks which rely on the assumption of differentiability of
activation functions are no longer applicable to modeling the learning
processes in the brain. In this project, we propose biologically-plausible
alternatives to backpropagation to facilitate the training of spiking neural
networks. We primarily focus on investigating the candidacy of reinforcement
learning (RL) rules in solving the spatial and temporal credit assignment
problems to enable decision-making in complex tasks. In one approach, we
consider each neuron in a multi-layer neural network as an independent RL agent
forming a different representation of the feature space while the network as a
whole forms the representation of the complex policy to solve the task at hand.
In the other approach, we apply the reparameterization trick to enable
differentiation through stochastic transformations in spiking neural networks.
We compare and contrast the two approaches by applying them to traditional RL
domains such as gridworld, cartpole and mountain car. Further we also suggest
variations and enhancements to enable future research in this area. | [
"cs.LG",
"cs.NE",
"stat.ML"
]
|
We introduce deep neural networks for the analysis of anatomical shapes that
learn a low-dimensional shape representation from the given task, instead of
relying on hand-engineered representations. Our framework is modular and
consists of several computing blocks that perform fundamental shape processing
tasks. The networks operate on unordered point clouds and provide invariance to
similarity transformations, avoiding the need to identify point correspondences
between shapes. Based on the framework, we assemble a discriminative model for
disease classification and age regression, as well as a generative model for
the accurate reconstruction of shapes. In particular, we propose a conditional
generative model, where the condition vector provides a mechanism to control
the generative process. For instance, it enables assessing shape variations
specific to a particular diagnosis when passed as side information. Next
to working on single shapes, we introduce an extension for the joint analysis
of multiple anatomical structures, where the simultaneous modeling of multiple
structures can lead to a more compact encoding and a better understanding of
disorders. We demonstrate the advantages of our framework in comprehensive
experiments on real and synthetic data. The key insights are that (i) learning
a shape representation specific to the given task yields higher performance
than alternative shape descriptors, (ii) multi-structure analysis is both more
efficient and more accurate than single-structure analysis, and (iii) point
clouds generated by our model capture morphological differences associated to
Alzheimer's disease, to the point that they can be used to train a
discriminative model for disease classification. Our framework naturally scales
to the analysis of large datasets, giving it the potential to learn
characteristic variations in large populations. | [
"cs.CV",
"cs.LG",
"68T07(Primary)",
"I.2.1"
]
|
The growing urban complexity demands an efficient algorithm to acquire and
process various sensor information from autonomous vehicles. In this paper, we
introduce an algorithm to utilize object detection results from the image to
adaptively sample and acquire radar data using Compressed Sensing (CS). This
novel algorithm is motivated by the hypothesis that with a limited sampling
budget, allocating more sampling budget to areas with the object as opposed to
a uniform sampling ultimately improves relevant object detection performance.
We improve detection performance by dynamically allocating a lower sampling
rate to objects such as buses than to pedestrians, leading to better
reconstruction than the baseline across areas with objects of interest. We automate the sampling
rate allocation using linear programming and show significant time savings
while reducing the radar block size by a factor of 2. We also analyze a Binary
Permuted Diagonal measurement matrix for radar acquisition which is
hardware-efficient and show its performance is similar to Gaussian and Binary
Permuted Block Diagonal matrix. Our experiments on the Oxford radar dataset
show an effective reconstruction of objects of interest with a 10% sampling rate.
Finally, we develop a transformer-based 2D object detection network using the
NuScenes radar and image data. | [
"cs.CV",
"cs.LG"
]
|
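The abstract's linear-programming step can be pictured as a small budget-allocation LP; the exact formulation in the paper is not reproduced here, so the objective, constraints, and numbers below are one plausible reading.

```python
import numpy as np
from scipy.optimize import linprog

def allocate_sampling_rates(areas, weights, total_budget,
                            r_min=0.02, r_max=0.5):
    """Maximize detection-weighted sampling sum(w_i * a_i * r_i) over
    per-region rates r_i, subject to the budget sum(a_i * r_i) <= B."""
    areas = np.asarray(areas, dtype=float)
    weights = np.asarray(weights, dtype=float)
    c = -(weights * areas)            # linprog minimizes, so negate
    res = linprog(c, A_ub=areas[None, :], b_ub=[total_budget],
                  bounds=[(r_min, r_max)] * len(areas))
    return res.x

# Pedestrian regions weighted higher than bus regions, as in the abstract.
rates = allocate_sampling_rates(areas=[400, 1200], weights=[1.0, 0.3],
                                total_budget=300)
```

The solver pushes the high-weight (pedestrian) region to its maximum rate first and spends the remaining budget on the lower-priority region.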
Policy optimization methods are one of the most widely used classes of
Reinforcement Learning (RL) algorithms. Yet, so far, such methods have been
mostly analyzed from an optimization perspective, without addressing the
problem of exploration, or by making strong assumptions on the interaction with
the environment. In this paper we consider model-based RL in the tabular
finite-horizon MDP setting with unknown transitions and bandit feedback. For
this setting, we propose an optimistic trust region policy optimization (TRPO)
algorithm for which we establish $\tilde O(\sqrt{S^2 A H^4 K})$ regret for
stochastic rewards. Furthermore, we prove $\tilde O(\sqrt{S^2 A H^4} K^{2/3})$
regret for adversarial rewards. Interestingly, this result matches previous
bounds derived for the bandit feedback case, yet with known transitions. To the
best of our knowledge, the two results are the first sub-linear regret bounds
obtained for policy optimization algorithms with unknown transitions and bandit
feedback. | [
"cs.LG",
"stat.ML"
]
|
Offline Signature Verification (OSV) is a challenging pattern recognition
task, especially when it is expected to generalize well on the skilled
forgeries that are not available during the training. Its challenges also
include small training sample sizes and large intra-class variations. Considering the
limitations, we suggest a novel transfer learning approach from Persian
handwriting domain to multi-language OSV domain. We train two Residual CNNs on
the source domain separately based on two different tasks of word
classification and writer identification. Since identifying a person's
signature resembles identifying one's handwriting, it is a natural choice to use
handwriting for the feature learning phase. The learned representation on the
more varied and plentiful handwriting dataset can compensate for the lack of
training data in the original task, i.e. OSV, without sacrificing the
generalizability. Our proposed OSV system includes two steps: learning
representation and verification of the input signature. For the first step, the
signature images are fed into the trained Residual CNNs. The output
representations are then used to train SVMs for the verification. We test our
OSV system on three different signature datasets, including MCYT (a Spanish
signature dataset), UTSig (a Persian one) and GPDS-Synthetic (an artificial
dataset). On UTSig, we achieved a 9.80% Equal Error Rate (EER), a substantial
improvement over the best EER in the literature, 17.45%. Our proposed method
surpassed the state of the art by 6% on GPDS-Synthetic, achieving 6.81% EER. On
MCYT, an EER of 3.98% was obtained, which is comparable to the best previously
reported results. | [
"cs.CV",
"cs.LG",
"stat.ML"
]
|
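The two-step pipeline (frozen CNN features, then SVM verification) is simple to reproduce in outline; `extract_features` below is a hypothetical stand-in for a forward pass through one of the pre-trained Residual CNNs, and the RBF kernel and writer-dependent setup are assumptions rather than details from the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(images, cnn):
    # hypothetical helper: one embedding vector per signature image
    return np.stack([cnn(img) for img in images])

def train_verifier(genuine_feats, negative_feats):
    """Binary verifier: genuine signatures vs. forgeries/random negatives."""
    X = np.vstack([genuine_feats, negative_feats])
    y = np.r_[np.ones(len(genuine_feats)), np.zeros(len(negative_feats))]
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, y)
    return clf
```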
The total variation (TV) model and its related variants have already been
proposed for image processing in previous literature. In this paper a novel
total variation model based on kernel functions is proposed. In this novel
model, we first map each pixel value of an image into a Hilbert space by using
a nonlinear map, and then define a coupled image of an original image in order
to construct a kernel function. Finally, the proposed model is solved in a
kernel function space instead of in the projected space of the nonlinear map.
For the proposed model, we theoretically show under what conditions the mapped
image is in the space of bounded variation when the original image is in the
space of bounded variation. It is also found that the proposed model further
extends the generalized TV model and that the information from three different
channels of color images can be fused by adopting various kernel functions. A
series of experiments on some gray and color images are carried out to
demonstrate the effectiveness of the proposed model. | [
"cs.CV"
]
|
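The kernel trick is what lets the model be solved without an explicit nonlinear map: distances between mapped pixel values need only kernel evaluations. The display below is an illustrative discrete form, not the paper's exact definition.

```latex
% Feature-space distances via the kernel trick:
%   \|\phi(x)-\phi(y)\|_H^2 = k(x,x) - 2\,k(x,y) + k(y,y).
% One plausible discrete kernel-induced total variation:
\mathrm{TV}_k(u) \;=\; \sum_{i,j} \Big(
    \|\phi(u_{i+1,j}) - \phi(u_{i,j})\|_H^2
  + \|\phi(u_{i,j+1}) - \phi(u_{i,j})\|_H^2 \Big)^{1/2}
```

Choosing different kernels per channel is also what enables the fusion of the three color channels mentioned in the abstract.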
Increased drone proliferation in civilian and professional settings has
created new threat vectors for airports and national infrastructures. The
economic damage for a single major airport from drone incursions is estimated
to be millions per day. Due to the lack of diverse drone training data,
accurate training of deep learning detection algorithms under scarce data is an
open challenge. Existing methods largely rely on collecting diverse and
comprehensive experimental drone footage data, artificially induced data
augmentation, transfer and meta-learning, as well as physics-informed learning.
However, these methods cannot guarantee capturing diverse drone designs and
fully understanding the deep feature space of drones. Here, we show how
understanding the general distribution of the drone data via a Generative
Adversarial Network (GAN) and explaining the missing features using Topological
Data Analysis (TDA) can allow us to acquire missing data to achieve rapid and
more accurate learning. We demonstrate our results on a drone image dataset,
which contains both real drone images as well as simulated images from
computer-aided design. When compared to random data collection (usual practice
- discriminator accuracy of 94.67\% after 200 epochs), our proposed GAN-TDA
informed data collection method offers a significant 4\% improvement (99.42\%
after 200 epochs). We believe that this approach of exploiting general data
distribution knowledge from neural networks can be applied to a wide range of
scarce-data open challenges. | [
"cs.CV"
]
|
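One way to operationalize the TDA step is to compare persistence diagrams of real versus GAN-generated feature embeddings; a persistent topological feature present only in the real data hints at a missing mode worth collecting. The sketch assumes the `ripser` package and a scalar total-persistence summary, both of which are illustrative choices rather than the paper's method.

```python
import numpy as np
from ripser import ripser  # persistent homology library (assumed available)

def missing_structure_gap(real_feats, fake_feats, dim=1):
    """Difference in total persistence between real and generated
    embeddings; a large positive gap suggests topology (e.g., a drone
    design family) the GAN has not yet captured."""
    def total_persistence(X):
        dgm = ripser(np.asarray(X))["dgms"][dim]
        finite = dgm[np.isfinite(dgm[:, 1])]
        return float(np.sum(finite[:, 1] - finite[:, 0]))
    return total_persistence(real_feats) - total_persistence(fake_feats)
```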
Multi-omic data provides multiple views of the same patients. Integrative
analysis of multi-omic data is crucial to elucidate the molecular underpinning
of disease etiology. However, multi-omic data has the "big p, small N" problem
(the number of features is large, but the number of samples is small), so it is
challenging to train a complicated machine learning model from the multi-omic
data alone and make it generalize well. Here we propose a framework termed
Multi-view Factorization AutoEncoder with network constraints to integrate
multi-omic data with domain knowledge (biological interactions networks). Our
framework employs deep representation learning to learn feature embeddings and
patient embeddings simultaneously, enabling us to integrate feature interaction
network and patient view similarity network constraints into the training
objective. The whole framework is end-to-end differentiable. We applied our
approach to the TCGA Pan-cancer dataset and achieved satisfactory results to
predict disease progression-free interval (PFI) and patient overall survival
(OS) events. Code will be made publicly available. | [
"cs.LG",
"stat.ML"
]
|
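A common way to encode such network constraints is a graph-Laplacian smoothness penalty on the learned embeddings, added to the reconstruction loss; the sketch below shows that pattern in PyTorch, with the weights `lam` and `mu` as illustrative hyperparameters rather than values from the paper.

```python
import torch

def laplacian_reg(emb, adjacency):
    """tr(E^T L E): nodes connected in the network (interacting features,
    or similar patients) are pushed toward similar embeddings."""
    laplacian = torch.diag(adjacency.sum(dim=1)) - adjacency
    return torch.trace(emb.T @ laplacian @ emb)

def total_loss(recon_loss, feat_emb, feat_adj, pat_emb, pat_adj,
               lam=1e-3, mu=1e-3):
    # reconstruction + feature-interaction + patient-similarity constraints
    return (recon_loss
            + lam * laplacian_reg(feat_emb, feat_adj)
            + mu * laplacian_reg(pat_emb, pat_adj))
```

Because every term is differentiable, the constraints train jointly with the autoencoder, matching the framework's stated end-to-end design.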
Statistical methods such as the Box-Jenkins method for time-series
forecasting have been prominent since their development in 1970. Many
researchers rely on such models as they can be efficiently estimated and also
provide interpretability. However, advances in machine learning research
indicate that neural networks can be powerful data modeling techniques, as they
can give higher accuracy for a plethora of learning problems and datasets. In
the past, they have been tried on time-series forecasting as well, but their
overall results have not been significantly better than those of statistical
models, especially for intermediate-length time-series data. Their modeling capacities
are limited in cases where enough data may not be available to estimate the
large number of parameters that these non-linear models require. This paper
presents an easy to implement data augmentation method to significantly improve
the performance of such networks. Our method, Augmented-Neural-Network, which
involves using forecasts from statistical models, can help unlock the power of
neural networks on intermediate-length time-series and produces competitive
results. It shows that data augmentation, when paired with Automated Machine
Learning techniques such as Neural Architecture Search, can help to find the
best neural architecture for a given time-series. Using the combination of
these demonstrates a significant enhancement in the forecasting accuracy of
three neural network-based models on a COVID-19 dataset, with maximum
improvements in forecasting accuracy of 21.41%, 24.29%, and 16.42%,
respectively, over the neural networks that do not use augmented data. | [
"cs.LG",
"stat.AP",
"stat.ME"
]
|
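The augmentation idea reduces to fitting a classical model and handing its forecast to the network as extra input. A minimal sketch with a statsmodels ARIMA model follows; the ARIMA order, horizon, and input layout are placeholders, not the paper's configuration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def arima_forecast(series, horizon, order=(2, 1, 2)):
    """Fit a Box-Jenkins-style model and return its point forecast."""
    fit = ARIMA(series, order=order).fit()
    return np.asarray(fit.forecast(steps=horizon))

# Hypothetical usage: the network sees recent history plus the
# statistical forecast as one augmented input vector.
history = np.sin(np.linspace(0, 20, 200))      # placeholder series
augmented_input = np.concatenate([history[-28:],
                                  arima_forecast(history, horizon=7)])
```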
Neural networks require careful weight initialization to prevent signals from
exploding or vanishing. Existing initialization schemes solve this problem in
specific cases by assuming that the network has a certain activation function
or topology. It is difficult to derive such weight initialization strategies,
and modern architectures therefore often use these same initialization schemes
even though their assumptions do not hold. This paper introduces AutoInit, a
weight initialization algorithm that automatically adapts to different neural
network architectures. By analytically tracking the mean and variance of
signals as they propagate through the network, AutoInit is able to
appropriately scale the weights at each layer to avoid exploding or vanishing
signals. Experiments demonstrate that AutoInit improves performance of various
convolutional and residual networks across a range of activation function,
dropout, weight decay, learning rate, and normalizer settings. Further, in
neural architecture search and activation function meta-learning, AutoInit
automatically calculates specialized weight initialization strategies for
thousands of unique architectures and hundreds of unique activation functions,
and improves performance in vision, language, tabular, multi-task, and transfer
learning scenarios. AutoInit thus serves as an automatic configuration tool
that makes design of new neural network architectures more robust. The AutoInit
package provides a wrapper around existing TensorFlow models and is available
at https://github.com/cognizant-ai-labs/autoinit. | [
"cs.LG"
]
|
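AutoInit derives its per-layer scales analytically; a data-driven approximation of the same variance-preserving idea (in the spirit of LSUV initialization, not the AutoInit algorithm itself) can be sketched in a few lines of PyTorch.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def variance_scaling_init(model, sample_batch):
    """Rescale each Linear/Conv2d layer's weights so its output standard
    deviation is roughly 1 on a sample batch, layer by layer, so that
    signals neither explode nor vanish as they propagate."""
    for layer in model.modules():
        if not isinstance(layer, (nn.Linear, nn.Conv2d)):
            continue
        def hook(module, inputs, output):
            std = output.std()
            if std > 0:
                module.weight.data /= std
        handle = layer.register_forward_hook(hook)
        model(sample_batch)       # forward pass triggers the rescale
        handle.remove()
```

The analytic approach has the advantage of needing no data and of extending to arbitrary activation functions, which is what lets AutoInit serve architecture search and activation function meta-learning.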
Adversarial adaptation models have demonstrated significant progress towards
transferring knowledge from a labeled source dataset to an unlabeled target
dataset. Partial domain adaptation (PDA) investigates the scenarios in which
the source domain is large and diverse, and the target label space is a subset
of the source label space. The main purpose of PDA is to identify the shared
classes between the domains and promote learning transferable knowledge from
these classes. In this paper, we propose a multi-class adversarial architecture
for PDA. The proposed approach jointly aligns the marginal and
class-conditional distributions in the shared label space by minimaxing a novel
multi-class adversarial loss function. Furthermore, we incorporate effective
regularization terms to encourage selecting the most relevant subset of source
domain classes. In the absence of target labels, the proposed approach is able
to effectively learn domain-invariant feature representations, which in turn
can enhance the classification performance in the target domain. Comprehensive
experiments on three benchmark datasets Office-31, Office-Home, and
Caltech-Office corroborate the effectiveness of the proposed approach in
addressing different partial transfer learning tasks. | [
"cs.CV"
]
|
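One common recipe for the class-selection part of PDA is to weight each source sample by an estimate of how likely its class appears in the target domain, computed from averaged target predictions; the sketch below shows that pattern and is a plausible reading, not the paper's exact multi-class adversarial loss.

```python
import torch
import torch.nn.functional as F

def estimate_class_weights(target_probs):
    """Average target softmax outputs over source classes; outlier source
    classes (absent from the target) receive small weights."""
    w = target_probs.mean(dim=0)
    return w / w.max()

def weighted_domain_loss(domain_logits, domain, labels, class_weights):
    # Source samples (domain == 0) are weighted by their class weight;
    # target labels are unknown, so target entries may hold any valid
    # index -- they are masked out by torch.where below.
    w = torch.where(domain == 0,
                    class_weights[labels],
                    torch.ones_like(domain, dtype=torch.float))
    return F.binary_cross_entropy_with_logits(
        domain_logits.squeeze(-1), domain.float(), weight=w)
```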
Event cameras are novel sensors that perceive the per-pixel intensity changes
and output asynchronous event streams with high dynamic range and less motion
blur. It has been shown that events alone can be used for end-task learning,
e.g., semantic segmentation, based on encoder-decoder-like networks. However, as
events are sparse and mostly reflect edge information, it is difficult to
recover original details merely relying on the decoder. Moreover, most methods
resort to pixel-wise loss alone for supervision, which might be insufficient to
fully exploit the visual details from sparse events, thus leading to
suboptimal performance. In this paper, we propose a simple yet flexible two-stream
framework named Dual Transfer Learning (DTL) to effectively enhance the
performance on the end-tasks without adding extra inference cost. The proposed
approach consists of three parts: event to end-task learning (EEL) branch,
event to image translation (EIT) branch, and transfer learning (TL) module that
simultaneously explores the feature-level affinity information and pixel-level
knowledge from the EIT branch to improve the EEL branch. This simple yet novel
method leads to strong representation learning from events and is evidenced by
the significant performance boost on the end-tasks such as semantic
segmentation and depth estimation. | [
"cs.CV"
]
|
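The transfer-learning module can be pictured as two distillation terms added to the end-task loss: one matching feature-level affinities against the EIT branch and one matching pixel-level outputs. The weighting and the specific affinity/pixel losses below are illustrative, assuming a segmentation end-task in PyTorch.

```python
import torch
import torch.nn.functional as F

def affinity(feat):
    """Pairwise affinity (Gram) matrix over spatial positions."""
    b, c, h, w = feat.shape
    f = F.normalize(feat.flatten(2), dim=1)   # B x C x HW
    return f.transpose(1, 2) @ f              # B x HW x HW

def dtl_loss(eel_logits, targets, eel_feat, eit_feat,
             eel_pix, eit_pix, alpha=0.1, beta=0.1):
    task = F.cross_entropy(eel_logits, targets)
    feat_term = F.mse_loss(affinity(eel_feat), affinity(eit_feat).detach())
    pix_term = F.l1_loss(eel_pix, eit_pix.detach())
    return task + alpha * feat_term + beta * pix_term
```

Because the EIT branch and both transfer terms are used only during training, the EEL branch runs alone at test time, which is why no extra inference cost is incurred.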