We introduce Robust Restless Bandits, a challenging generalization of
restless multi-armed bandits (RMABs). RMABs have been widely studied for
intervention planning with limited resources. However, most works make the
unrealistic assumption that the transition dynamics are known perfectly,
restricting the applicability of existing methods to real-world scenarios. To
make RMABs more useful in settings with uncertain dynamics: (i) We introduce
the Robust RMAB problem and develop solutions for a minimax regret objective
when transitions are given by interval uncertainties; (ii) We develop a double
oracle algorithm for solving Robust RMABs and demonstrate its effectiveness on
three experimental domains; (iii) To enable our double oracle approach, we
introduce RMABPPO, a novel deep reinforcement learning algorithm for solving
RMABs. RMABPPO hinges on learning an auxiliary "$\lambda$-network" that allows
each arm's learning to decouple, greatly reducing sample complexity required
for training; (iv) Under minimax regret, the adversary in the double oracle
approach is notoriously difficult to implement due to non-stationarity. To
address this, we formulate the adversary oracle as a multi-agent reinforcement
learning problem and solve it with a multi-agent extension of RMABPPO, which
may be of independent interest as the first known algorithm for this setting.
Code is available at https://github.com/killian-34/RobustRMAB.
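The double oracle loop the abstract relies on can be sketched on a toy zero-sum matrix game. This is a minimal illustration, not the authors' RMABPPO-based oracles: each player's strategy set grows by best responses against the current restricted-game equilibrium (here approximated by fictitious play) until neither side can improve.

```python
import numpy as np

def solve_zero_sum(M, iters=3000):
    """Approximately solve a zero-sum matrix game by fictitious play.
    Returns mixed strategies for the row (maximizer) and column (minimizer)."""
    n, m = M.shape
    row_counts = np.zeros(n); row_counts[0] = 1.0
    col_counts = np.zeros(m); col_counts[0] = 1.0
    for _ in range(iters):
        row_counts[np.argmax(M @ (col_counts / col_counts.sum()))] += 1.0
        col_counts[np.argmin((row_counts / row_counts.sum()) @ M)] += 1.0
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

def double_oracle(M):
    """Grow each player's strategy set with best responses until neither
    side can improve on the restricted-game equilibrium."""
    rows, cols = [0], [0]
    while True:
        p, q = solve_zero_sum(M[np.ix_(rows, cols)])
        br_row = int(np.argmax(M[:, cols] @ q))   # agent best-response oracle
        br_col = int(np.argmin(p @ M[rows, :]))   # adversary best-response oracle
        grew = False
        if br_row not in rows:
            rows.append(br_row); grew = True
        if br_col not in cols:
            cols.append(br_col); grew = True
        if not grew:
            return rows, cols, float(p @ M[np.ix_(rows, cols)] @ q)

M = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies
rows, cols, value = double_oracle(M)
print(sorted(rows), sorted(cols), round(value, 2))
```

On matching pennies the loop discovers both pure strategies for each player and converges near the game value of zero; the paper's contribution is making the two oracles tractable when each "pure strategy" is itself an RMAB policy.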
|
In this paper, we introduce a new framework for unsupervised deep homography
estimation. Our contributions are threefold. First, unlike previous methods that
regress 4 offsets for a homography, we propose a homography flow
representation, which can be estimated by a weighted sum of 8 pre-defined
homography flow bases. Second, considering that a homography has only 8
degrees of freedom (DOFs), far fewer than the rank of the network
features, we propose a Low Rank Representation (LRR) block that reduces the
feature rank, so that features corresponding to the dominant motions are
retained while others are rejected. Last, we propose a Feature Identity Loss
(FIL) to enforce that the learned image features are warp-equivariant, meaning that the
result should be identical if the order of warp operation and feature
extraction is swapped. With this constraint, the unsupervised optimization is
achieved more effectively and more stable features are learned. Extensive
experiments are conducted to demonstrate the effectiveness of all the newly
proposed components, and results show that our approach outperforms the
state-of-the-art on the homography benchmark datasets both qualitatively and
quantitatively. Code is available at
https://github.com/megvii-research/BasesHomo.
|
An irregular cusp of an orthogonal modular variety is a cusp where the lattice
for Fourier expansion is strictly smaller than the lattice of translations.
The presence of such a cusp affects the study of pluricanonical forms on the
modular variety using modular forms. We study toroidal compactification over an
irregular cusp, and clarify there the cusp form criterion for the calculation
of Kodaira dimension. At the same time, we show that irregular cusps do not
arise frequently: besides the cases when the group is neat or contains -1, we
prove that the stable orthogonal groups of most (but not all) even lattices
have no irregular cusp.
|
Collisions electrically charge grains, which promotes growth by coagulation.
We present aggregation experiments with three large ensembles of basalt beads
($150\,\mu\mathrm{m} - 180\,\mu\mathrm{m}$), two of which are charged, while
one remains almost neutral as a control system. In microgravity experiments, free
collisions within these samples are induced with moderate collision velocities
($0 - 0.2 \,\mathrm{m\,s}^{-1}$). In the control system, coagulation stops at
(sub-)mm size while the charged grains continue to grow. A maximum agglomerate
size of 5\,cm is reached, limited only by bead depletion in the free volume.
For the first time, charge-driven growth well into the centimeter range is
directly proven by experiments. In protoplanetary disks, this agglomerate size
is well beyond the critical size needed for hydrodynamic particle concentration
as, e.g., by the streaming instabilities.
|
We offer an embedding of CPython that runs entirely in memory without
"touching" the disk. This in-memory embedding can load Python scripts directly
from memory instead of loading these scripts from files on disk.
Malware that resides only in memory is harder to detect or mitigate against. We
intend for our work to be used by security researchers to rapidly develop and
deploy offensive techniques that are difficult for security products to analyze,
given that these instructions are in bytecode and only translated to machine code by
the interpreter immediately prior to execution. Our work helps security
researchers and enterprise Red Teams who play offense. Red Teams want to
rapidly prototype malware for their periodic campaigns and do not want their
malware to be detected by the Incident Response (IR) teams prior to
accomplishing objectives. Red Teams also have difficulty running malware in
production from files on disk as modern enterprise security products emulate,
inspect, or quarantine such executables given these files have no reputation.
Our work also helps enterprise Hunt and IR teams by making them aware of the
viability of this type of attack. Our approach has been in use in production
for over a year and meets our customers' needs to quickly emulate
threat actors' tactics, techniques, and procedures (TTPs).
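At the Python level, the core trick of executing script text that never touches disk can be sketched with the standard importlib machinery. This is a simplified stand-in for the authors' embedded-CPython approach; the module name "mem_demo" and the source string are made up for illustration.

```python
import importlib.util
import sys

# Script text held only in memory; in a real deployment this would arrive
# over the network or be embedded in the host binary.
SOURCE = "def greet():\n    return 'loaded from memory'\n"

def load_from_memory(name, source):
    # Build a module spec with no file backing, then execute the source
    # string directly into the new module's namespace.
    spec = importlib.util.spec_from_loader(name, loader=None)
    module = importlib.util.module_from_spec(spec)
    exec(compile(source, f"<memory:{name}>", "exec"), module.__dict__)
    sys.modules[name] = module
    return module

mod = load_from_memory("mem_demo", SOURCE)
print(mod.greet())  # -> loaded from memory
```

The same pattern, done inside an embedded interpreter, is what lets the tooling keep both the loader and the loaded scripts entirely off the filesystem.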
|
The tracking and timely resolution of service requests is one of the major
challenges in agile project management. Having an efficient solution to this
problem is a key requirement for Walmart to facilitate seamless collaboration
across its different business units. The Jira software is one of the popular
choices in industries for monitoring such service requests. A service request
once logged into the system by a reporter is referred to as a (Jira) ticket
which is assigned to an engineer for servicing. In this work, we explore how
the tickets which may arise in any of the Walmart stores and offices
distributed over several countries can be assigned to engineers efficiently.
Specifically, we will discuss how the introduction of a bot for automated
ticket assignment has helped in reducing the disparity in ticket assignment to
engineers by human managers and also decreased the average ticket resolution
time, thereby improving the experience for both the reporters and the
engineers. Additionally, the bot sends reminders and status updates over
different business communication platforms for timely tracking of tickets; it
can be suitably modified to provision for human intervention in case of special
needs by some teams. The current study conducted over data collected from
various teams within Walmart shows the efficacy of our bot.
|
We rewrite the numerical ansatz of the Method of Auxiliary Sources (MAS),
typically used in computational electromagnetics, as a neural network, i.e. as
a composed function of linear and activation layers. MAS is a numerical method
for Partial Differential Equations (PDEs) that employs point sources, which are
also exact solutions of the considered PDE, as radial basis functions to match
a given boundary condition. In the framework of neural networks we rely on
optimization algorithms such as Adam to train MAS and find both its optimal
coefficients and positions of the central singularities of the sources. In this
work we also show that the MAS ansatz trained as a neural network can be used,
in the case of an unknown function with a central singularity, to detect the
position of such singularity.
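A minimal numerical sketch of the MAS ansatz for the Laplace equation on the unit disk: point sources placed outside the domain, coefficients fit to the boundary data. For brevity the source positions are fixed and the coefficients are found by least squares; the paper instead trains both coefficients and positions with Adam.

```python
import numpy as np

# Sources on a ring of radius 1.5 outside the unit disk; each term of the
# ansatz is an exact solution of the Laplace equation inside the domain.
n_src, n_bdy = 40, 80
src = 1.5 * np.exp(2j * np.pi * np.arange(n_src) / n_src)
bdy = np.exp(2j * np.pi * np.arange(n_bdy) / n_bdy)   # collocation points

def phi(x, s):
    """2D fundamental solution of the Laplacian (a point source)."""
    return -np.log(np.abs(x - s)) / (2.0 * np.pi)

g = bdy.real**2 - bdy.imag**2          # boundary data of harmonic x^2 - y^2
A = phi(bdy[:, None], src[None, :])    # collocation matrix
c, *_ = np.linalg.lstsq(A, g, rcond=None)

z = 0.3 + 0.2j                         # interior evaluation point
u = phi(z, src) @ c
print(u, z.real**2 - z.imag**2)        # MAS value vs. exact solution
```

Because each basis function already solves the PDE exactly, only the boundary condition needs to be matched, which is what makes the neural-network reformulation of the ansatz natural.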
|
The process of learning a manipulation task depends strongly on the action
space used for exploration: posed in the incorrect action space, solving a task
with reinforcement learning can be drastically inefficient. Additionally,
similar tasks or instances of the same task family impose latent manifold
constraints on the most effective action space: the task family can be best
solved with actions in a manifold of the entire action space of the robot.
Combining these insights we present LASER, a method to learn latent action
spaces for efficient reinforcement learning. LASER factorizes the learning
problem into two sub-problems, namely action space learning and policy learning
in the new action space. It leverages data from similar manipulation task
instances, either from an offline expert or online during policy learning, and
learns from these trajectories a mapping from the original to a latent action
space. LASER is trained as a variational encoder-decoder model to map raw
actions into a disentangled latent action space while maintaining action
reconstruction and latent space dynamic consistency. We evaluate LASER on two
contact-rich robotic tasks in simulation, and analyze the benefit of policy
learning in the generated latent action space. We show improved sample
efficiency compared to the original action space from better alignment of the
action space to the task space, as we observe with visualizations of the
learned action space manifold. Additional details:
https://www.pair.toronto.edu/laser
|
Echocardiography is a non-invasive cardiac imaging tool that produces data,
including images and videos, which cardiologists use to diagnose
cardiac abnormalities in general and myocardial infarction (MI) in particular.
Echocardiography machines can deliver abundant amounts of data that need to be
quickly analyzed by cardiologists to help them make a diagnosis and treat
cardiac conditions. However, the acquired data quality varies depending on the
acquisition conditions and the patient's responsiveness to the setup
instructions. These constraints are challenging to doctors especially when
patients are facing MI and their lives are at stake. In this paper, we propose
an innovative real-time end-to-end fully automated model based on convolutional
neural networks (CNN) to detect MI depending on regional wall motion
abnormalities (RWMA) of the left ventricle (LV) from videos produced by
echocardiography. Our model is implemented as a pipeline consisting of a 2D CNN
that performs data preprocessing by segmenting the LV chamber from the apical
four-chamber (A4C) view, followed by a 3D CNN that performs a binary
classification to detect if the segmented echocardiography shows signs of MI.
We trained both CNNs on a dataset composed of 165 echocardiography videos each
acquired from a distinct patient. The 2D CNN achieved an accuracy of 97.18% on
data segmentation, while the 3D CNN achieved 90.9% accuracy, 100%
precision, and 95% recall on MI detection. Our results demonstrate that
creating a fully automated system for MI detection is feasible and propitious.
|
For a semimartingale with jumps, we propose a new estimation method for
integrated volatility, i.e., the quadratic variation of the continuous
martingale part, based on the global jump filter proposed by Inatsugu and
Yoshida [8]. To decide whether each increment of the process has jumps, the
global jump filter adopts the upper $\alpha$-quantile of the absolute
increments as the threshold. This jump filter is called global since it uses
all the observations to classify one increment. We give a rate of convergence
and prove asymptotic mixed normality of the global realized volatility and its
variant "Winsorized global volatility". By simulation studies, we show that our
estimators outperform previous realized volatility estimators that use a few
adjacent increments to mitigate the effects of jumps.
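The global thresholding rule can be sketched in a few lines on synthetic data. The estimator below is a simplified truncated realized volatility using the upper α-quantile of all absolute increments as the cutoff, not the authors' full Winsorized variant; the path, jump sizes, and α are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 10_000, 0.5
dx = sigma * rng.standard_normal(n) / np.sqrt(n)   # diffusive increments
jump_idx = rng.choice(n, size=20, replace=False)
dx[jump_idx] += 1.0                                # a few large jumps

naive = np.sum(dx**2)                              # inflated by the jumps

# Global jump filter: the threshold is the upper alpha-quantile of ALL
# absolute increments, so every observation helps classify each increment.
alpha = 0.01
threshold = np.quantile(np.abs(dx), 1.0 - alpha)
filtered = np.sum(dx[np.abs(dx) <= threshold] ** 2)

print(round(naive, 2), round(filtered, 2))  # true integrated volatility is 0.25
```

The naive sum of squared increments is dominated by the jump terms, while the globally filtered sum stays close to the true integrated volatility sigma^2 = 0.25.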
|
The singular value decomposition associated with many problems in medical
imaging, non-destructive testing, and geophysics is of central importance.
Unfortunately,
the effective numerical determination of the singular functions in question is
a very ill-posed problem. The best known remedy to this problem goes back to
the work of D. Slepian, H. Landau, and H. Pollak (Bell Labs, 1960-1965). We show
that the master symmetries of the Korteweg-de Vries equation give a way to
extend the remarkable result of D. Slepian in connection with the Bessel
integral kernel and the existence of a differential operator that commutes with
the corresponding integral operator. The original results of the Bell Labs
group have already played an important role in the study of the limited angle
problem in X-ray tomography as well as in Random Matrix theory.
|
We present d3p, a software package designed to help field runtime-efficient,
widely applicable Bayesian inference under differential privacy
guarantees. d3p achieves general applicability to a wide range of probabilistic
modelling problems by implementing the differentially private variational
inference algorithm, allowing users to fit any parametric probabilistic model
with a differentiable density function. d3p adopts the probabilistic
programming paradigm as a powerful way for the user to flexibly define such
models. We demonstrate the use of our software on a hierarchical logistic
regression example, showing the expressiveness of the modelling approach as
well as the ease of running the parameter inference. We also perform an
empirical evaluation of the runtime of the private inference on a complex model
and find a $\sim$10 fold speed-up compared to an implementation using
TensorFlow Privacy.
|
The structural connectome is often represented by fiber bundles generated
from various types of tractography. We propose a method of analyzing
connectomes by representing them as a Riemannian metric, thereby viewing them
as points in an infinite-dimensional manifold. After equipping this space with
a natural metric structure, the Ebin metric, we apply object-oriented
statistical analysis to define an atlas as the Fr\'echet mean of a population
of Riemannian metrics. We demonstrate connectome registration and atlas
formation using connectomes derived from diffusion tensors estimated from a
subset of subjects from the Human Connectome Project.
|
Interest in physical therapy and individual exercises such as yoga/dance has
increased alongside the well-being trend. However, such exercises are hard to
follow without expert guidance (which is impossible to scale for personalized
feedback to every trainee remotely). Thus, automated pose correction systems
are required more than ever, and we introduce a new captioning dataset named
FixMyPose to address this need. We collect descriptions of correcting a
"current" pose to look like a "target" pose (in both English and Hindi). The
collected descriptions have interesting linguistic properties such as
egocentric relations to environment objects, analogous references, etc.,
requiring an understanding of spatial relations and commonsense knowledge about
postures. Further, to avoid ML biases, we maintain a balance across characters
with diverse demographics, who perform a variety of movements in several
interior environments (e.g., homes, offices). From our dataset, we introduce
the pose-correctional-captioning task and its reverse target-pose-retrieval
task. During the correctional-captioning task, models must generate
descriptions of how to move from the current to target pose image, whereas in
the retrieval task, models should select the correct target pose given the
initial pose and correctional description. We present strong cross-attention
baseline models (uni/multimodal, RL, multilingual) and also show that our
baselines are competitive with other models when evaluated on other
image-difference datasets. We also propose new task-specific metrics
(object-match, body-part-match, direction-match) and conduct human evaluation
for more reliable evaluation, and we demonstrate a large human-model
performance gap suggesting room for promising future work. To verify the
sim-to-real transfer of our FixMyPose dataset, we collect a set of real images
and show promising performance on these images.
|
Industrial Internet of Things (IIoT) applications can benefit from leveraging
edge computing. For example, applications underpinned by deep neural networks
(DNN) models can be sliced and distributed across the IIoT device and the edge
of the network for improving the overall performance of inference and for
enhancing privacy of the input data, such as industrial product images.
However, low network performance between IIoT devices and the edge is often a
bottleneck. In this study, we develop ScissionLite, a holistic framework for
accelerating distributed DNN inference using the Transfer Layer (TL). The TL is
a traffic-aware layer inserted at the optimal slicing point between DNN model
slices in order to decrease the outbound network traffic without a significant
accuracy drop. For the TL, we implement a new lightweight down/upsampling
network for performance-limited IIoT devices. In ScissionLite, we develop
ScissionTL, the Preprocessor, and the Offloader for end-to-end activities for
deploying DNN slices with the TL. They decide the optimal slicing point of the
DNN, prepare pre-trained DNN slices including the TL, and execute the DNN
slices on an IIoT device and the edge. Employing the TL for the sliced DNN
models has a negligible overhead. ScissionLite improves the inference latency
by up to 16 and 2.8 times compared to execution on the local device and to an
existing state-of-the-art model slicing approach, respectively.
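The traffic-reduction idea behind the TL can be illustrated with fixed average pooling before transmission and nearest-neighbour upsampling at the edge. The actual TL is a small learned down/upsampling network, so this is only a shape-and-bandwidth sketch with made-up tensor dimensions.

```python
import numpy as np

def downsample(feat, factor=2):
    """Average-pool H x W by `factor` before sending the tensor to the edge."""
    c, h, w = feat.shape
    return feat.reshape(c, h // factor, factor,
                        w // factor, factor).mean(axis=(2, 4))

def upsample(feat, factor=2):
    """Nearest-neighbour upsampling on the receiving side."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

# Intermediate feature tensor produced at the slicing point (illustrative size)
feat = np.random.default_rng(0).standard_normal((8, 16, 16)).astype(np.float32)
sent = downsample(feat).astype(np.float32)   # what crosses the network
restored = upsample(sent)                    # edge-side reconstruction
print(feat.nbytes, sent.nbytes)              # 4x less outbound traffic
```

With a factor-2 spatial reduction the transmitted payload shrinks 4x, which is exactly the bottleneck the TL targets; the learned version additionally preserves accuracy better than this fixed pooling would.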
|
The influence of implantation-induced point defects (PDs) on SiC oxidation is
investigated via molecular dynamics simulations. PDs generally increase the
oxidation rate of crystalline grains. In particular, the accelerations caused by
Si antisites and vacancies are comparable and are followed by those from Si
interstitials, which in turn are higher than those from C antisites and C
interstitials. However, in the grain
boundary (GB) region, defect contribution to oxidation is more complex, with C
antisites decelerating oxidation. The underlying reason is the formation of a
C-rich region along the oxygen diffusion pathway that blocks the access of O to
Si and thus reduces the oxidation rate, as compared to the oxidation along a GB
without defects.
|
In this paper, we study the numerical stabilization of a 1D system of two
wave equations coupled by velocities with an internal, local control acting on
only one equation. In the theoretical part of this study, we distinguish two
cases. In the first, the two waves are assumed to propagate at the same speed;
under appropriate geometric conditions, we proved that the energy decays
exponentially. In the second case, when the waves propagate at different
speeds, under appropriate geometric conditions, we proved that the energy
decays only at a polynomial rate. In this paper, we confirm these two results
in a 1D numerical approximation. However, when the coupling region does not
intersect the damping region, the stabilization of the system is still
theoretically an open problem. However, in both cases we observe an
unexpected behavior: the energy decays at an exponential rate when the
propagation speeds are the same or at a polynomial rate when they are
different.
|
Recently, a generative variational autoencoder (VAE) has been proposed for
speech enhancement to model speech statistics. However, this approach only uses
clean speech in the training phase, making the estimation particularly
sensitive to noise presence, especially in low signal-to-noise ratios (SNRs).
To increase the robustness of the VAE, we propose to include noise information
in the training phase by using a noise-aware encoder trained on noisy-clean
speech pairs. We evaluate our approach on real recordings of different noisy
environments and acoustic conditions using two different noise datasets. We
show that our proposed noise-aware VAE outperforms the standard VAE in terms of
overall distortion without increasing the number of model parameters. At the
same time, we demonstrate that our model is capable of generalizing to unseen
noise conditions better than a supervised feedforward deep neural network
(DNN). Furthermore, we demonstrate the robustness of the model performance to a
reduction of the noisy-clean speech training data size.
|
A long-standing puzzle in the rheology of living cells is the origin of the
experimentally observed long time stress relaxation. The mechanics of the cell
is largely dictated by the cytoskeleton, which is a biopolymer network
consisting of transient crosslinkers, allowing for stress relaxation over time.
Moreover, these networks are internally stressed due to the presence of
molecular motors. In this work we propose a theoretical model that uses a
mode-dependent mobility to describe the stress relaxation of such prestressed
transient networks. Our theoretical predictions agree favorably with
experimental data of reconstituted cytoskeletal networks and may provide an
explanation for the slow stress relaxation observed in cells.
|
The classic censored regression model (tobit model) has been widely used in
the economic literature. This model assumes normality for the error
distribution and is not recommended for cases where positive skewness is
present. Moreover, in regression analysis, it is well-known that a quantile
regression approach allows us to study the influences of the explanatory
variables on the dependent variable considering different quantiles. Therefore,
we propose in this paper a quantile tobit regression model based on
quantile-based log-symmetric distributions. The proposed methodology allows us
to model data with positive skewness (which is not suitable for the classic
tobit model), and to study the influence of the quantiles of interest, in
addition to accommodating heteroscedasticity. The model parameters are
estimated using the maximum likelihood method and an elaborate Monte Carlo
study is performed to evaluate the performance of the estimates. Finally, the
proposed methodology is illustrated using two female labor supply data sets.
The results show that the proposed log-symmetric quantile tobit model has a
better fit than the classic tobit model.
|
We present the first study of cross-correlation between Cosmic Microwave
Background (CMB) gravitational lensing potential map measured by the $Planck$
satellite and $z\geq 0.8$ galaxies from the photometric redshift catalogues
from Herschel Extragalactic Legacy Project (HELP), divided into four sky
patches: NGP, Herschel Stripe-82 and two halves of SGP field, covering in total
$\sim 660$ deg$^{2}$ of the sky. Contrary to previous studies exploiting only
the common area between galaxy surveys and CMB lensing data, we improve the
cross-correlation measurements using the full available area of the CMB lensing
map. We estimate galaxy linear bias parameter, $b$, from joint analysis of
cross-power spectrum and galaxy auto-power spectrum using Maximum Likelihood
Estimation technique to obtain the value averaged over four fields as
$b=2.06_{-0.02}^{+0.02}$, ranging from $1.94_{-0.03}^{+0.04}$ for SGP Part-2 to
$3.03_{-0.09}^{+0.10}$ for NGP. We also estimate the amplitude of
cross-correlation and find the averaged value to be $A=0.52_{-0.08}^{+0.08}$
spanning from $0.34_{-0.19}^{+0.19}$ for NGP to $0.67_{-0.20}^{+0.21}$ for SGP
Part-1, significantly lower than the expected value for the standard
cosmological model. We perform several tests on systematic errors that can
account for this discrepancy. We find that the lower amplitude could to some
extent be explained by a lower median redshift of the catalogue; however, we do
not have any evidence that the redshifts are systematically overestimated.
|
In the theory of local fields we have the well-known filtration of unit
groups. In this short paper we compute the first cohomology groups of unit
groups for a finite Galois extension of local fields. We show that these
cohomology groups are closely related to the ramification indices.
|
We introduce DeepGLEAM, a hybrid model for COVID-19 forecasting. DeepGLEAM
combines the mechanistic stochastic simulation model GLEAM with deep learning. It
uses deep learning to learn the correction terms from GLEAM, which leads to
improved performance. We further integrate various uncertainty quantification
methods to generate confidence intervals. We demonstrate DeepGLEAM on
real-world COVID-19 mortality forecasting tasks.
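The hybrid idea, learning a correction on top of a mechanistic forecast, can be sketched with a linear model standing in for the deep network. All numbers below are synthetic; they only demonstrate the residual-learning structure, not GLEAM itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
gleam = rng.uniform(50.0, 150.0, n)                   # mechanistic forecasts
truth = 1.1 * gleam + 5.0 + rng.normal(0.0, 2.0, n)   # reality with a bias

# Learn the correction term truth - gleam from features of the forecast;
# a least-squares fit stands in for DeepGLEAM's deep network.
X = np.column_stack([gleam, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, truth - gleam, rcond=None)
corrected = gleam + X @ coef                          # hybrid prediction

mae_raw = np.mean(np.abs(truth - gleam))
mae_corr = np.mean(np.abs(truth - corrected))
print(round(mae_raw, 2), round(mae_corr, 2))
```

Keeping the mechanistic model in the loop and only learning its residual is what lets the hybrid retain GLEAM's epidemiological structure while correcting its systematic errors.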
|
In collaborative intelligence, an artificial intelligence (AI) model is
typically split between an edge device and the cloud. Feature tensors produced
by the edge sub-model are sent to the cloud via an imperfect communication
channel. At the cloud side, parts of the feature tensor may be missing due to
packet loss. In this paper we propose a method called Content-Adaptive Linear
Tensor Completion (CALTeC) to recover the missing feature data. The proposed
method is fast, data-adaptive, does not require pre-training, and produces
better results than existing methods for tensor data recovery in collaborative
intelligence.
|
GADTs can be represented either as their Church encodings \`a la Atkey, or as
fixpoints \`a la Johann and Polonsky. While a GADT represented as its Church
encoding need not support a map function satisfying the functor laws, the
fixpoint representation of a GADT must support such a map function even to be
well-defined. The two representations of a GADT thus need not be the same in
general. This observation forces a choice of representation of data types in
languages supporting GADTs. In this paper we show that choosing whether to
represent data types as their Church encodings or as fixpoints determines
whether or not a language supporting GADTs can have parametric models. This
choice thus has important consequences for how we can program with, and reason
about, these advanced data types.
|
Context. The North Ecliptic Pole (NEP) field provides a unique set of
panchromatic data, well suited for active galactic nuclei (AGN) studies.
Selection of AGN candidates is often based on mid-infrared (MIR) measurements.
Such method, despite its effectiveness, strongly reduces a catalog volume due
to the MIR detection condition. Modern machine learning techniques can solve
this problem by finding similar selection criteria using only optical and
near-infrared (NIR) data. Aims. The aim of this work was to create a reliable AGN
candidate catalog from the NEP field using a combination of optical SUBARU/HSC
and NIR AKARI/IRC data and, consequently, to develop an efficient alternative
for the MIR-based AKARI/IRC selection technique. Methods. A set of supervised
machine learning algorithms was tested in order to perform an efficient AGN
selection. The best models were combined into a majority voting scheme, which
used the most popular classification result to produce the final AGN catalog.
Additional analysis of the catalog properties was performed in the form of the spectral
energy distribution (SED) fitting via the CIGALE software. Results. The
obtained catalog of 465 AGN candidates (out of 33 119 objects) is characterized
by 73% purity and 64% completeness. This new classification shows consistency
with the MIR-based selection. Moreover, 76% of the obtained catalog can be
found only with the new method due to the lack of MIR detection for most of the
new AGN candidates. The training data, code, and final catalog are available via
the GitHub repository. The final AGN candidate catalog will also be available via
the CDS service after publication.
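The majority-voting step can be sketched independently of the underlying models. The classifier names in the comments are placeholders, not necessarily the algorithms the authors tested.

```python
import numpy as np

def majority_vote(predictions):
    """Combine 0/1 predictions from several classifiers: an object is
    labelled AGN (1) only if more than half of the models say so."""
    predictions = np.asarray(predictions)
    return (predictions.sum(axis=0) * 2 > predictions.shape[0]).astype(int)

# Three hypothetical classifiers voting on five objects
preds = [
    [1, 0, 1, 0, 1],   # e.g. a random forest
    [1, 1, 0, 0, 1],   # e.g. a logistic regression
    [1, 0, 0, 1, 0],   # e.g. a k-nearest-neighbours model
]
print(majority_vote(preds))  # -> [1 0 0 0 1]
```

Requiring agreement from a majority of models trades a little completeness for higher purity, which matches the 73%/64% balance the catalog reports.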
|
The note concerns the $\bar\partial$ problem on product domains in $\mathbb
C^2$. We show that there exists a bounded solution operator from $C^{k,
\alpha}$ into itself, $k\in \mathbb Z^+\cup \{0\}, 0<\alpha< 1$. The regularity
result is optimal in view of an example of Stein-Kerzman.
|
It is shown numerically, in a chiral U(1) gauge Higgs theory in which the
left and right-handed fermion components have opposite U(1) charges, that the
spectrum of gauge and Higgs fields surrounding a static fermion contains both a
ground state and at least one stable excited state. To bypass the difficulties
associated with dynamical fermions in a lattice chiral gauge theory we consider
only static fermion sources in a quenched approximation, at fixed lattice
spacing and couplings, and with a lattice action along the lines suggested long
ago by Smit and Swift.
|
We investigate the explicit implementation of quantum repeater protocols that
rely on three-qubit repetition codes using nitrogen-vacancy (NV) centers in
diamond as quantum memories. NV centers offer a two-qubit register,
corresponding to their electron and nuclear spins, which makes it possible to
perform deterministic two-qubit operations within one NV center. For quantum
repeater applications, we however need to do joint operations on two separate
NV centers. Here, we study two NV-based repeater structures that enable such
deterministic joint operations. One structure offers less consumption of
classical communication, hence is more resilient to decoherence effects,
whereas the other one relies on fewer numbers of physical resources and
operations. We assess and compare their performance for the task of secret key
generation under the influence of noise and decoherence with current and
near-term experimental parameters. We quantify the regimes of operation, where
one structure outperforms the other, and find the regions where encoded quantum
repeaters (QRs) offer practical advantages over their non-encoded counterparts.
|
Real-world videos contain many complex actions with inherent relationships
between action classes. In this work, we propose an attention-based
architecture that models these action relationships for the task of temporal
action localization in untrimmed videos. As opposed to previous works that
leverage video-level co-occurrence of actions, we distinguish the relationships
between actions that occur at the same time-step and actions that occur at
different time-steps (i.e. those which precede or follow each other). We define
these distinct relationships as action dependencies. We propose to improve
action localization performance by modeling these action dependencies in a
novel attention-based Multi-Label Action Dependency (MLAD) layer. The MLAD layer
consists of two branches: a Co-occurrence Dependency Branch and a Temporal
Dependency Branch to model co-occurrence action dependencies and temporal
action dependencies, respectively. We observe that existing metrics used for
multi-label classification do not explicitly measure how well action
dependencies are modeled, therefore, we propose novel metrics that consider
both co-occurrence and temporal dependencies between action classes. Through
empirical evaluation and extensive analysis, we show improved performance over
state-of-the-art methods on multi-label action localization
benchmarks (MultiTHUMOS and Charades) in terms of f-mAP and our proposed metric.
|
As a basic building block, optical resonant cavities (ORCs) are widely used
in light manipulation; they can confine electromagnetic waves and improve the
interaction between light and matter, which also plays an important role in
cavity quantum electrodynamics, nonlinear optics and quantum optics. Especially
in recent years, the rise of metamaterials, artificial materials composed of
subwavelength unit cells, greatly enriches the design and function of ORCs.
Here, we review zero-index and hyperbolic metamaterials for constructing the
novel ORCs. Firstly, this paper introduces the classification and
implementation of zero-index and hyperbolic metamaterials. Secondly, the
distinctive properties of zero-index and hyperbolic cavities are summarized,
including the geometry-invariance, homogeneous/inhomogeneous field
distribution, and the topological protection (anomalous scaling law, size
independence, continuum of high-order modes, and dispersionless modes) for the
zero-index (hyperbolic) metacavities. Finally, the paper introduces some
typical applications of zero-index and hyperbolic metacavities, and prospects
the research of metacavities.
|
Let $p(x)$ be an integer polynomial with $m\ge 2$ distinct roots
$\rho_1,\ldots,\rho_m$ whose multiplicities are
$\boldsymbol{\mu}=(\mu_1,\ldots,\mu_m)$. We define the D-plus discriminant of
$p(x)$ to be $D^+(p):= \prod_{1\le i<j\le m}(\rho_i-\rho_j)^{\mu_i+\mu_j}$. We
first prove a conjecture that $D^+(p)$ is a $\boldsymbol{\mu}$-symmetric
function of its roots $\rho_1,\ldots,\rho_m$. Our main result gives an explicit
formula for $D^+(p)$, as a rational function of its coefficients. Our proof is
ideal-theoretic, based on re-casting the classic Poisson resultant as the
"symbolic Poisson formula". The D-plus discriminant first arose in the
complexity analysis of a root clustering algorithm from Becker et al. (ISSAC
2016). The bit-complexity of this algorithm is proportional to a quantity
$\log(|D^+(p)|^{-1})$. As an application of our main result, we give an
explicit upper bound on this quantity in terms of the degree of $p$ and its
leading coefficient.
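The defining product for $D^+(p)$ can be checked numerically on small examples. Below is a minimal plain-Python sketch (the roots and multiplicities are illustrative choices, not taken from the paper) that evaluates $D^+(p)$ directly from a list of distinct roots with their multiplicities:

```python
from itertools import combinations

def d_plus(roots_with_mults):
    """D+(p) = prod over i<j of (rho_i - rho_j)^(mu_i + mu_j)."""
    result = 1
    for (r_i, m_i), (r_j, m_j) in combinations(roots_with_mults, 2):
        result *= (r_i - r_j) ** (m_i + m_j)
    return result

# p(x) = (x - 1)^2 (x + 2): distinct roots 1 (mult 2) and -2 (mult 1)
print(d_plus([(1, 2), (-2, 1)]))  # (1 - (-2))^(2 + 1) = 27
```

The paper's main result instead expresses $D^+(p)$ as a rational function of the coefficients, without access to the roots; the sketch above only evaluates the root-side definition.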
|
We introduce Robin boundary conditions for biharmonic operators, which are a
model for elastically supported plates and are closely related to the study of
spaces of traces of Sobolev functions. We study the dependence of the operator,
its eigenvalues, and eigenfunctions on the Robin parameters. We show in
particular that when the parameters go to plus infinity the Robin problem
converges to other biharmonic problems, and obtain estimates on the rate of
divergence when the parameters go to minus infinity. We also analyse the
dependence of the operator on smooth perturbations of the domain, computing the
shape derivatives of the eigenvalues and giving a characterisation for critical
domains under volume and perimeter constraints. We include a number of open
problems arising in the context of our results.
|
Let $h$ be the planar Gaussian free field and let $D_h$ be a supercritical
Liouville quantum gravity (LQG) metric associated with $h$. Such metrics arise
as subsequential scaling limits of supercritical Liouville first passage
percolation (Gwynne-Ding, 2020) and correspond to values of the matter central
charge $\mathbf{c}_{\mathrm M} \in (1,25)$. We show that a.s. the boundary of
each complementary connected component of a $D_h$-metric ball is a Jordan curve
and is compact and finite-dimensional with respect to $D_h$. This is in
contrast to the whole boundary of the $D_h$-metric ball, which is non-compact
and infinite-dimensional with respect to $D_h$ (Pfeffer, 2021). Using our
regularity results for boundary components of $D_h$-metric balls, we extend the
confluence of geodesics results of Gwynne-Miller (2019) to the case of
supercritical Liouville quantum gravity. These results show that two
$D_h$-geodesics with the same starting point and different target points
coincide for a non-trivial initial time interval.
|
Under the validity of the positive mass theorem, the Yamabe flow on a smooth
compact Riemannian manifold of dimension $N \ge 3$ is known to exist for all
time $t$ and converges to a solution to the Yamabe problem as $t \to \infty$.
We prove that if a suitable perturbation, which may be smooth and arbitrarily
small, is imposed on the Yamabe flow on any given Riemannian manifold $M$ of
dimension $N \ge 5$, the resulting flow may blow up at multiple points on $M$
in infinite time. Our proof is constructive, and indeed we construct such a
flow by using solutions of the Yamabe problem on the unit sphere $\mathbb{S}^N$
as blow-up profiles. We also examine the stability of the blow-up phenomena
under a negativity condition on the Ricci curvature at blow-up points.
|
The recent and upcoming releases of the 3rd Generation Partnership Project's
5G New Radio specifications include features that are motivated by providing
connectivity services to a broad set of verticals, including the automotive,
rail, and air transport industries. Currently, several radio access network
features are being further enhanced or newly introduced in NR to improve 5G's
capability to provide fast, reliable, and non-limiting connectivity for
transport applications. In this article, we review the most important
characteristics and requirements of a wide range of services that are driven by
the desire to help the transport sector to become more sustainable,
economically viable, safe, and secure. These requirements will be supported by
the evolving and entirely new features of 5G NR systems, including accurate
positioning, reference signal design to enable multi-transmission and reception
points, service-specific scheduling configuration, and service quality
prediction.
|
3D model generation from single 2D RGB images is a challenging and actively
researched computer vision task. Various techniques using conventional network
architectures have been proposed for the same. However, the body of research
work is limited and there are various issues like using inefficient 3D
representation formats, weak 3D model generation backbones, inability to
generate dense point clouds, dependence on post-processing for the generation of
dense point clouds, and dependence on silhouettes in RGB images. In this paper,
a novel 2D RGB image to point cloud conversion technique is proposed, which
improves the state of the art in the field due to its efficient, robust and simple
model by using the concept of parallelization in network architecture. It not
only uses the efficient and rich 3D representation of point clouds, but also
uses a novel and robust point cloud generation backbone in order to address the
prevalent issues. This involves using a single-encoder multiple-decoder deep
network architecture wherein each decoder generates certain fixed viewpoints.
This is followed by fusing all the viewpoints to generate a dense point cloud.
Various experiments are conducted on the technique and its performance is
compared with those of other state-of-the-art techniques, and impressive gains
in performance are demonstrated. Code is available at
https://github.com/mueedhafiz1982/
|
A large number of processes in the mesoscopic world occur out of equilibrium,
where the time course of the system evolution becomes immensely important --
they being driven principally by dissipative effects. Non-equilibrium steady
states (NESS) represent a crucial category in such systems -- which are widely
observed in biological domains -- especially in chemical kinetics in cellular
processes, and molecular motors. In this study, we employ a model NESS
stochastic system which comprises a colloidal microparticle, optically
trapped in a viscous fluid and externally driven by a temporally correlated
colored noise, and show that the work done on the system and the work
dissipated by it -- both follow the three L\'evy arcsine laws. These statistics
remain unchanged even in the presence of a perturbation generated by a
microbubble at close proximity to the trapped particle. We confirm our
experimental findings with theoretical simulations of the systems. Our work
provides an interesting insight into the NESS statistics of the meso-regime,
where stochastic fluctuations play a pivotal role.
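The first arcsine law can be illustrated with a toy simulation that is independent of the optical-trap setup: for a symmetric random walk, the fraction of time spent above zero concentrates near 0 and 1 rather than near 1/2. The walk length, sample count, and seed below are illustrative choices, not the experimental system of the paper:

```python
import random

def fraction_positive(n_steps, rng):
    """Fraction of time a symmetric random walk spends above zero."""
    x, above = 0, 0
    for _ in range(n_steps):
        x += rng.choice((-1, 1))
        above += x > 0
    return above / n_steps

rng = random.Random(0)
samples = [fraction_positive(300, rng) for _ in range(1000)]
edges = sum(f < 0.1 or f > 0.9 for f in samples) / len(samples)
middle = sum(0.45 <= f <= 0.55 for f in samples) / len(samples)
print(f"near the edges: {edges:.2f}, near one half: {middle:.2f}")
```

Under the arcsine density, values near the edges are far more likely than values near 1/2, which is what the printed frequencies reflect.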
|
Precision medicine involves answering counterfactual questions such as "Would
this patient respond better to treatment A or treatment B?" These types of
questions are causal in nature and require the tools of causal inference to be
answered, e.g., with a structural causal model (SCM). In this work, we develop
an SCM that models the interaction between demographic information, disease
covariates, and magnetic resonance (MR) images of the brain for people with
multiple sclerosis. Inference in the SCM generates counterfactual images that
show what an MR image of the brain would look like if demographic or disease
covariates are changed. These images can be used for modeling disease
progression or used for image processing tasks where controlling for
confounders is necessary.
|
Analysis and clustering of multivariate time-series data attract growing
interest in immunological and clinical studies. In such applications,
researchers are interested in clustering subjects based on potentially
high-dimensional longitudinal features, and in investigating how clinical
covariates may affect the clustering results. These studies are often
challenging due to high dimensionality, as well as the sparse and irregular
nature of sample collection along the time dimension. We propose a smoothed
probabilistic PARAFAC model with covariates (SPACO) to tackle these two
problems while utilizing auxiliary covariates of interest. We provide intensive
simulations to test different aspects of SPACO and demonstrate its use on
immunological data sets from two recent cohorts of SARS-CoV-2 patients.
|
This report formulates a conjectural combinatorial rule that positively
expands Grothendieck polynomials into Lascoux polynomials. It generalizes one
such formula expanding Schubert polynomials into key polynomials, and refines
another one expanding stable Grothendieck polynomials.
|
It is well known that many problems in image recovery, signal processing, and
machine learning can be modeled as finding zeros of the sum of maximal monotone
and Lipschitz continuous monotone operators. Many papers have studied
forward-backward splitting methods for finding zeros of the sum of two monotone
operators in Hilbert spaces. Most of the proposed splitting methods in the
literature have been proposed for the sum of maximal monotone and
inverse-strongly monotone operators in Hilbert spaces. In this paper, we
consider splitting methods for finding zeros of the sum of maximal monotone
operators and Lipschitz continuous monotone operators in Banach spaces. We
obtain weak and strong convergence results for the zeros of the sum of maximal
monotone and Lipschitz continuous monotone operators in Banach spaces. Many
already studied problems in the literature can be considered as special cases
of this paper.
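As a toy illustration of the forward-backward template (in the real line rather than a Banach space, and with a strongly monotone forward operator so that the plain iteration already converges; the merely Lipschitz case treated in the paper requires modified schemes), the sketch below finds the zero of $A + B$ with $A = \partial|\cdot|$ and $B(x) = x - 2$. Both operators and the step size are illustrative choices, not the paper's setting:

```python
def soft_threshold(v, lam):
    # Resolvent (proximal map) of A = the subdifferential of |x|
    return max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0)

def forward_backward(B, lam=0.5, x=0.0, iters=200):
    # Forward-backward step: x_{k+1} = J_{lam A}(x_k - lam * B(x_k))
    for _ in range(iters):
        x = soft_threshold(x - lam * B(x), lam)
    return x

x_star = forward_backward(lambda x: x - 2.0)
print(round(x_star, 6))  # 1.0: indeed 0 is in d|x| + (x - 2) at x = 1
```

At $x = 1$ we have $\partial|x| = \{1\}$ and $B(1) = -1$, so $0 \in A(x) + B(x)$, matching the computed fixed point.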
|
Unstructured pruning reduces the memory footprint in deep neural networks
(DNNs). Recently, researchers proposed different types of structural pruning
intended to also reduce the computational complexity. In this work, we first
suggest a new measure called mask-diversity which correlates with the expected
accuracy of the different types of structural pruning. We focus on the recently
suggested N:M fine-grained block sparsity mask, in which for each block of M
weights, we have at least N zeros. While N:M fine-grained block sparsity allows
acceleration in actual modern hardware, it can be used only to accelerate the
inference phase. In order to allow for similar accelerations in the training
phase, we suggest a novel transposable fine-grained sparsity mask, where the
same mask can be used for both forward and backward passes. Our transposable
mask guarantees that both the weight matrix and its transpose follow the same
sparsity pattern; thus, the matrix multiplication required for passing the
error backward can also be accelerated. We formulate the problem of finding the
optimal transposable-mask as a minimum-cost flow problem. Additionally, to
speed up the minimum-cost flow computation, we also introduce a fast
linear-time approximation that can be used when the masks dynamically change
during training. Our experiments suggest a 2x speed-up in the matrix
multiplications with no accuracy degradation over vision and language models.
Finally, to solve the problem of switching between different structure
constraints, we suggest a method to convert a pre-trained model with
unstructured sparsity to an N:M fine-grained block sparsity model with little
to no training. A reference implementation can be found at
https://github.com/papers-submission/structured_transposable_masks.
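The transposable property can be stated as a simple predicate: a mask is valid when both it and its transpose satisfy the N:M pattern, so forward and backward matrix multiplications both accelerate. A minimal checker for 2:4 blocks (an illustrative sketch, not the reference implementation linked above):

```python
import numpy as np

def is_nm_sparse(mask, n=2, m=4):
    """Each length-m block along the rows has at least n zeros (N:M sparsity)."""
    blocks = mask.reshape(mask.shape[0], -1, m)
    return bool(np.all((blocks == 0).sum(axis=-1) >= n))

def is_transposable(mask, n=2, m=4):
    # Transposable mask: the N:M pattern holds for the mask and its transpose
    return is_nm_sparse(mask, n, m) and is_nm_sparse(mask.T, n, m)

# A 4x4 mask whose row blocks *and* column blocks each contain two zeros
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]])
print(is_transposable(mask))  # True
```

Searching for the transposable mask that keeps the largest-magnitude weights subject to this predicate is exactly the combinatorial problem the paper casts as minimum-cost flow.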
|
Skyrmion-containing devices have been proposed as a promising solution for
low energy data storage. These devices include racetrack or logic structures
and require skyrmions to be confined in regions with dimensions comparable to
the size of a single skyrmion. Here we examine Bloch skyrmions in {FeGe} device
shapes using Lorentz transmission electron microscopy (LTEM) to reveal the
consequences of skyrmion confinement in a device structure. Dumbbell-shaped
devices were created by focused ion beam (FIB) milling to provide regions where
single skyrmions are confined adjacent to areas containing a skyrmion lattice.
Simple block shapes of equivalent dimensions were prepared within the specimen
to allow a direct comparison with skyrmion formation in a less complex, yet
still confined, device geometry. The impact of applying an external magnetic
field and of varying the temperature on skyrmion formation within the shapes
was examined. This revealed that it is not just confinement within a small
device structure that controls the position and number of skyrmions; a complex
device geometry also changes the skyrmion behaviour, including allowing the
formation of skyrmions at lower applied magnetic fields than in simple
shapes. This could allow experimental methods to be developed to control the
positioning and number of skyrmions within device shapes.
|
Solar coronal rain is classified generally into two categories: flare-driven
and quiescent coronal rain. The latter is observed to form along both closed
and open magnetic field structures. Recently, we proposed that some of the
quiescent coronal rain events, detected in the transition region and
chromospheric diagnostics, along loop-like paths could be explained by the
formation mechanism for quiescent coronal rain facilitated by interchange
magnetic reconnection between open and closed field lines. In this study, we
revisited 38 coronal rain reports from the literature. From these earlier
works, we picked 15 quiescent coronal rain events off the solar limb, mostly
suggested to occur in active region closed loops due to thermal nonequilibrium,
to scrutinize their formation mechanism. Employing the extreme ultraviolet
images and line-of-sight magnetograms, the evolution of the quiescent coronal
rain events and their magnetic fields and context coronal structures is
examined. We find that 6 (40%) of the 15 quiescent coronal rain
events could be totally or partially interpreted by the formation mechanism for
quiescent coronal rain along open structures facilitated by interchange
reconnection. The results suggest that the quiescent coronal rain facilitated
by interchange reconnection between open and closed field lines deserves more
attention.
|
We characterize the soliton solutions and their interactions for a system of
coupled evolution equations of nonlinear Schr\"odinger (NLS) type that models
the dynamics in one-dimensional repulsive Bose-Einstein condensates with spin
one, taking advantage of the representation of such model as a special
reduction of a 2 x 2 matrix NLS system. Specifically, we study in detail the
case in which solutions tend to a non-zero background at space infinities.
First we derive a compact representation for the multi-soliton solutions in the
system using the Inverse Scattering Transform (IST). We introduce the notion of
canonical form of a solution, corresponding to the case when the background is
proportional to the identity. We show that solutions for which the asymptotic
behavior at infinity is not proportional to the identity, referred to as being
in non-canonical form, can be reduced to canonical form by unitary
transformations that preserve the symmetric nature of the solution (physically
corresponding to complex rotations of the quantization axes). Then we give a
complete characterization of the two families of one-soliton solutions arising
in this problem, corresponding to ferromagnetic and to polar states of the
system, and we discuss how the physical parameters of the solitons for each
family are related to the spectral data in the IST. We also show that any
ferromagnetic one-soliton solution in canonical form can be reduced to a single
dark soliton of the scalar NLS equation, and any polar one-soliton solution in
canonical form is unitarily equivalent to a pair of oppositely polarized
displaced scalar dark solitons up to a rotation of the quantization axes.
Finally, we discuss two-soliton interactions and we present a complete
classification of the possible scenarios that can arise depending on whether
either soliton is of ferromagnetic or polar type.
|
The volume of data moving through a network increases with new scientific
experiments and simulations. Network bandwidth requirements also increase
proportionally to deliver data within a certain time frame. We observe that a
significant portion of popular datasets is transferred multiple times to
different users, as well as to the same user, for various reasons. In-network
data caching for shared data has been shown to reduce redundant data
transfers and consequently save network traffic volume. In addition, overall
application performance is expected to improve with in-network caching because
access to the locally cached data results in lower latency. This paper shows
how much data was shared over the study period, how much network traffic volume
was consequently saved, and how much temporary in-network caching improved
scientific application performance. It also analyzes data access patterns
in applications and the impacts of caching nodes on the regional data
repository. From the results, we observed that the network bandwidth demand was
reduced by nearly a factor of 3 over the study period.
|
Synthetic data generation is an appealing approach to generate novel traffic
scenarios in autonomous driving. However, deep learning perception algorithms
trained solely on synthetic data encounter serious performance drops when they
are tested on real data. Such performance drops are commonly attributed to the
domain gap between real and synthetic data. Domain adaptation methods that have
been applied to mitigate the aforementioned domain gap achieve visually
appealing results, but usually introduce semantic inconsistencies into the
translated samples. In this work, we propose a novel, unsupervised, end-to-end
domain adaptation network architecture that enables semantically consistent
\textit{sim2real} image transfer. Our method performs content disentanglement
by employing a shared content encoder and a fixed style code.
|
By properly considering the propagation dynamics of the dipole field, we
obtain the full magnetic dipolar interaction between two quantum dipoles for
general situations. With the help of the Maxwell equation and the corresponding
Green function, this result applies for general boundary conditions, and
naturally unifies all the interaction terms between permanent dipoles, resonant
or non-resonant transition dipoles, and even the counter-rotating interaction
terms altogether. In particular, we study the dipolar interaction in a
rectangular 3D cavity with discrete field modes. When the two dipoles are quite
near to each other and far from the cavity boundary, their interaction simply
returns the free-space result; when the distance between the two dipoles is
comparable to their distance to the cavity boundary and the field mode
wavelength, the dipole images and near-resonant cavity modes bring in
significant changes to the free-space interaction. This approach also provides a
general way to study the interaction mediated by other kinds of fields.
|
We present a data-driven approach to construct entropy-based closures for the
moment system from kinetic equations. The proposed closure learns the entropy
function by fitting the map between the moments and the entropy of the moment
system, and thus does not depend on the space-time discretization of the moment
system and specific problem configurations such as initial and boundary
conditions. With convex and $C^2$ approximations, this data-driven closure
inherits several structural properties from entropy-based closures, such as
entropy dissipation, hyperbolicity, and H-Theorem. We construct convex
approximations to the Maxwell-Boltzmann entropy using convex splines and neural
networks, test them on the plane source benchmark problem for linear transport
in slab geometry, and compare the results to the standard, optimization-based
M$_N$ closures. Numerical results indicate that these data-driven closures
provide accurate solutions in much less computation time than the M$_N$
closures.
|
Recently the LHAASO Collaboration published the detection of 12
ultra-high-energy gamma-ray sources above 100 TeV, with the highest energy
photon reaching 1.4 PeV. The first detection of PeV gamma rays from
astrophysical sources may provide a very sensitive probe of the effect of the
Lorentz invariance violation (LIV), which results in decay of high-energy gamma
rays in the superluminal scenario and hence a sharp cutoff of the energy
spectrum. The two highest-energy sources are studied in this work. No signature of
the existence of LIV is found in their energy spectra, and the lower limits on
the LIV energy scale are derived. Our results show that the first-order LIV
energy scale should be higher than about 10^5 times the Planck scale M_{pl} and
that the second-order LIV scale is >10^{-3}M_{pl}. Both limits improve on
previous results by at least one order of magnitude.
|
In 1979 Pisier proved remarkably that a sequence of independent and
identically distributed standard Gaussian random variables determines, via
random Fourier series, a homogeneous Banach algebra $\mathscr{P}$ strictly
contained in $C(\mathbb{T})$, the class of continuous functions on the unit
circle $\mathbb{T}$ and strictly containing the classical Wiener algebra
$\mathbb{A}(\mathbb{T})$, that is, $\mathbb{A}(\mathbb{T}) \subsetneqq
\mathscr{P} \subsetneqq C(\mathbb{T}).$ This improved some previous results
obtained by Zafran in solving a long-standing problem raised by Katznelson. In
this paper we extend Pisier's result by showing that any probability measure on
the unit circle defines a homogeneous Banach algebra contained in
$C(\mathbb{T})$. Thus the Pisier algebra is not an isolated object but rather an
element in a large class of Pisier-type algebras. We consider the case of
spectral measures of stationary sequences of Gaussian random variables and
obtain a sufficient condition for the boundedness of the random Fourier series
$\sum_{n\in \mathbb{Z}}\hat f(n) \,\xi_n \exp(2\pi i n t)$ in the general
setting of dependent random variables $(\xi_n)$.
|
We present MatchKAT, an algebraic language for modeling match-action packet
processing in network switches. Although the match-action paradigm has remained
a popular low-level programming model for specifying packet forwarding
behavior, little has been done towards giving it formal semantics. With
MatchKAT, we hope to embark on the first steps in exploring how network
programs compiled to match-action rules can be reasoned about formally in a
reliable, algebraic way. In this paper, we give details of MatchKAT and its
metatheory, as well as a formal treatment of match expressions on binary
strings that form the basis of "match" in match-action. Through a
correspondence with NetKAT, we show that MatchKAT's equational theory is sound
and complete with regard to a similar packet filtering semantics. We also
demonstrate that deciding equivalence in MatchKAT is
PSPACE-complete.
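For intuition, matching on binary strings behaves like the ternary patterns of match-action tables, where each bit position is 0, 1, or a wildcard. The sketch below is a simplification with made-up rules and actions, not MatchKAT's formal semantics; it shows first-match rule selection over such patterns:

```python
def matches(pattern, packet):
    """Ternary match: pattern over {'0','1','*'}; '*' matches either bit."""
    return len(pattern) == len(packet) and all(
        p == '*' or p == b for p, b in zip(pattern, packet))

# Hypothetical priority-ordered match-action rules over 4-bit headers
rules = [("10*1", "drop"), ("1***", "fwd_port_2"), ("****", "fwd_ctrl")]

def action_for(packet):
    # First matching rule wins, as in match-action rule tables
    return next(act for pat, act in rules if matches(pat, packet))

print(action_for("1011"))  # drop
print(action_for("1100"))  # fwd_port_2
print(action_for("0101"))  # fwd_ctrl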
|
We develop connections between the qualitative dynamics of Hamiltonian
isotopies on a surface $\Sigma$ and their chain-level Floer theory using ideas
drawn from Hofer-Wysocki-Zehnder's theory of finite energy foliations. We
associate to every collection of capped $1$-periodic orbits which is `maximally
unlinked relative the Morse range' a singular foliation on $S^1 \times \Sigma$
which is positively transverse to the vector field $\partial_t \oplus X^H$ and
which is assembled in a straightforward way from the relevant Floer moduli
spaces. We derive a purely topological and Turing-computable characterization
of the spectral invariant $c(H;[\Sigma])$ for generic Hamiltonians on arbitrary
closed surfaces. This completes, for generic Hamiltonians, a project initiated
by Humili\`{e}re-Le Roux-Seyfaddini, in addition to fulfilling a desideratum
expressed by Gambaudo-Ghys seeking a topological characterization of the
Entov-Polterovich quasi-morphism on $Ham(S^2)$.
|
Placing robots outside controlled conditions requires versatile movement
representations that allow robots to learn new tasks and adapt them to
environmental changes. The introduction of obstacles, the placement of
additional robots in the workspace, or the modification of the joint range due
to faults or range-of-motion constraints are typical cases where the adaptation
capabilities play a key role for safely performing the robot's task.
Probabilistic movement primitives (ProMPs) have been proposed for representing
adaptable movement skills, which are modelled as Gaussian distributions over
trajectories. These are analytically tractable and can be learned from a small
number of demonstrations. However, both the original ProMP formulation and the
subsequent approaches only provide solutions to specific movement adaptation
problems, e.g., obstacle avoidance, and a generic, unifying, probabilistic
approach to adaptation is missing. In this paper we develop a generic
probabilistic framework for adapting ProMPs. We unify previous adaptation
techniques, for example, various types of obstacle avoidance, via-points,
mutual avoidance, in one single framework and combine them to solve complex
robotic problems. Additionally, we derive novel adaptation techniques such as
temporally unbound via-points and mutual avoidance. We formulate adaptation as
a constrained optimisation problem where we minimise the Kullback-Leibler
divergence between the adapted distribution and the distribution of the
original primitive while we constrain the probability mass associated with
undesired trajectories to be low. We demonstrate our approach on several
adaptation problems on simulated planar robot arms and 7-DOF Franka-Emika
robots in a dual robot arm setting.
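Since ProMPs model trajectories as Gaussians, the Kullback-Leibler term in the constrained optimisation has a closed form. A small numpy sketch of that term (the dimensions and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) )."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

mu, cov = np.zeros(2), np.eye(2)
print(gaussian_kl(mu, cov, mu, cov))                  # 0.0 for identical Gaussians
print(gaussian_kl(mu, cov, mu + 1.0, 2 * np.eye(2)))  # ln 2 ~ 0.693
```

In the paper's framing this divergence between the adapted distribution and the original primitive is minimised subject to keeping the probability mass of undesired trajectories low.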
|
Many applications require accurate indoor localization. Fingerprint-based
localization methods propose a solution to this problem, but rely on a radio
map that is effort-intensive to acquire. We automate the radio map acquisition
phase using a software-defined radio (SDR) and a wheeled robot. Furthermore, we
open-source a radio map acquired with our automated tool for a 3GPP Long-Term
Evolution (LTE) wireless link. To the best of our knowledge, this is the first
publicly available radio map containing channel state information (CSI).
Finally, we describe first localization experiments on this radio map using a
convolutional neural network to regress for location coordinates.
|
This paper addresses the problem of identifying a linear time-varying (LTV)
system characterized by a (possibly infinite) discrete set of delay-Doppler
shifts without a lattice (or other geometry-discretizing) constraint on the
support set. Concretely, we show that a class of such LTV systems is
identifiable whenever the upper uniform Beurling density of the delay-Doppler
support sets, measured uniformly over the class, is strictly less than 1/2. The
proof of this result reveals an interesting relation between LTV system
identification and interpolation in the Bargmann-Fock space. Moreover, we show
that this density condition is also necessary for classes of systems invariant
under time-frequency shifts and closed under a natural topology on the support
sets. We furthermore show that identifiability guarantees robust recovery of
the delay-Doppler support set, as well as the weights of the individual
delay-Doppler shifts, both in the sense of asymptotically vanishing
reconstruction error for vanishing measurement error.
|
Topological effects exist from a macroscopic system such as the universe to a
microscopic system described by quantum mechanics. We show here that an
interesting geometric structure can be created by the self-replication
procedure of a square with an enclosed circle, in which the sum of the circles'
areas remains the same while the sum of their circumferences increases. It is
demonstrated by means of Monte Carlo simulations that these topological
features have great impacts on the vacuum pumping probability and the photon
absorption probability of the active surface. The results show significant
improvement of the system performance and have application potential in the
vacuum pumping of large research facilities, such as nuclear fusion reactors
and synchrotrons, and in the photovoltaic industry.
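The area/circumference claim is elementary to verify: a square of side $s$ contains a circle of area $\pi s^2/4$ and circumference $\pi s$; splitting it into four half-size squares preserves the total circle area but doubles the total circumference. A quick numerical check (an illustrative sketch, not the paper's Monte Carlo code):

```python
import math

def step(squares):
    # Replace each square of side s by four squares of side s/2,
    # each carrying its own enclosed circle of radius s/4.
    return [s / 2 for s in squares for _ in range(4)]

sides = [1.0]
for level in range(3):
    area = sum(math.pi * (s / 2) ** 2 for s in sides)
    circumference = sum(math.pi * s for s in sides)
    print(f"level {level}: area = {area:.6f}, circumference = {circumference:.6f}")
    sides = step(sides)
```

The printed area stays at $\pi/4$ at every level while the total circumference doubles from $\pi$ to $2\pi$ to $4\pi$, which is the growth in active surface that the abstract exploits.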
|
Abundance ratios involving Y, or other slow neutron-capture elements, are
routinely used to infer stellar ages. We aim to explain the
observed [Y/H] and [Y/Mg] abundance ratios of star clusters located in the
inner disc with a new prescription for mixing in Asymptotic Giant Branch (AGB)
stars. In a Galactic chemical evolution model, we adopt a new set of AGB
stellar yields in which magnetic mixing is included. We compare the results of
the model with a sample of abundances and ages of open clusters located at
different Galactocentric distances. The magnetic mixing causes a less efficient
production of Y at high metallicity. A non-negligible fraction of stars with
super-solar metallicity is produced in the inner disc, and their Y abundances
are affected by the reduced yields. The results of the new AGB model
qualitatively reproduce the observed trends for both [Y/H] and [Y/Mg] vs age at
different Galactocentric distances. Our results confirm from a theoretical point
of view that the relationship between [Y/Mg] and stellar age cannot be
universal, i.e., the same in every part of the Galaxy. It has a strong
dependence on the star formation rate, on the s-process yields and their
relation with metallicity, and thus it varies across the Galactic disc.
|
In this paper, a multi-objective approach for the design of composite
data-driven mathematical models is proposed. It allows automating the
identification of graph-based heterogeneous pipelines that consist of different
blocks: machine learning models, data preprocessing blocks, etc. The
implemented approach is based on a parameter-free genetic algorithm (GA) for
model design called GPComp@Free. It is developed to be part of automated
machine learning solutions and to increase the efficiency of the modeling
pipeline automation. A set of experiments was conducted to verify the
correctness and efficiency of the proposed approach and substantiate the
selected solutions. The experimental results confirm that a multi-objective
approach to the model design allows achieving better diversity and quality of
obtained models. The implemented approach is available as a part of the
open-source AutoML framework FEDOT.
|
Effective human-vehicle collaboration requires an appropriate understanding
of vehicle behavior for safety and trust. Improving on our prior work by adding
a future prediction module, we introduce our framework, called AutoPreview, to
enable humans to preview autopilot behaviors prior to direct interaction with
the vehicle. Previewing autopilot behavior can help to ensure smooth
human-vehicle collaboration during the initial exploration stage with the
vehicle. To demonstrate its practicality, we conducted a case study on
human-vehicle collaboration and built a prototype of our framework with the
CARLA simulator. Additionally, we conducted a between-subject control experiment
(n=10) to study whether our AutoPreview framework can provide a deeper
understanding of autopilot behavior compared to direct interaction. Our results
suggest that the AutoPreview framework does, in fact, help users understand
autopilot behavior and develop appropriate mental models.
In this paper, we prove sharp gradient estimates for positive solutions to
the weighted heat equation on smooth metric measure spaces with compact
boundary. As an application, we prove Liouville theorems for ancient solutions
satisfying the Dirichlet boundary condition and some sharp growth restriction
near infinity. Our results can be regarded as a refinement of recent results
due to Kunikawa and Sakurai.
|
We extend a semantic verification framework for hybrid systems with the
Isabelle/HOL proof assistant by an algebraic model for hybrid program stores, a
shallow expression model for hybrid programs and their correctness
specifications, and domain-specific deductive and calculational support. The
new store model yields clean separations and dynamic local views of variables,
e.g. discrete/continuous, mutable/immutable, program/logical, and enhanced ways
of manipulating them using combinators, projections and framing. This leads to
more local inference rules, procedures and tactics for reasoning with invariant
sets, certifying solutions of hybrid specifications or calculating derivatives
with increased proof automation and scalability. The new expression model
provides more user-friendly syntax, better control of name spaces and
interfaces connecting the framework with real-world modelling languages.
|
Let $(N,\rho)$ be a Riemannian manifold, $S$ a surface of genus at least two
and let $f\colon S \to N$ be a continuous map. We consider the energy spectrum
of $(N,\rho)$ (and $f$) which assigns to each point $[J]\in \mathcal{T}(S)$ in
the Teichm\"uller space of $S$ the infimum of the Dirichlet energies of all
maps $(S,J)\to (N,\rho)$ homotopic to $f$. We study the relation between the
energy spectrum and the simple length spectrum. Our main result is that if
$N=S$, $f=id$ and $\rho$ is a metric of non-positive curvature, then the energy
spectrum determines the simple length spectrum. Furthermore, we prove that the
converse does not hold by exhibiting two metrics on $S$ with equal simple
length spectrum but different energy spectrum. As corollaries to our results we
obtain that the set of hyperbolic metrics and the set of singular flat metrics
induced by quadratic differentials satisfy energy spectrum rigidity, i.e. a
metric in these sets is determined, up to isotopy, by its energy spectrum. We
prove that analogous statements also hold true for Kleinian surface groups.
|
We introduce Trankit, a light-weight Transformer-based Toolkit for
multilingual Natural Language Processing (NLP). It provides a trainable
pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained
pipelines for 56 languages. Built on a state-of-the-art pretrained language
model, Trankit significantly outperforms prior multilingual NLP pipelines over
sentence segmentation, part-of-speech tagging, morphological feature tagging,
and dependency parsing while maintaining competitive performance for
tokenization, multi-word token expansion, and lemmatization over 90 Universal
Dependencies treebanks. Despite the use of a large pretrained transformer, our
toolkit is still efficient in memory usage and speed. This is achieved by our
novel plug-and-play mechanism with Adapters where a multilingual pretrained
transformer is shared across pipelines for different languages. Our toolkit
along with pretrained models and code are publicly available at:
https://github.com/nlp-uoregon/trankit. A demo website for our toolkit is also
available at: http://nlp.uoregon.edu/trankit. Finally, we create a demo video
for Trankit at: https://youtu.be/q0KGP3zGjGc.
|
I summarize evidence against the hypothesis that `Oumuamua is the artificial
creation of an advanced civilization. An appendix discusses the flaws and
inconsistencies of the "Breakthrough" proposal for laser acceleration of
spacecraft to semi-relativistic speeds. Reality is much more challenging, and
interesting.
|
Shared mental models are critical to team success; however, in practice, team
members may have misaligned models due to a variety of factors. In
safety-critical domains (e.g., aviation, healthcare), lack of shared mental
models can lead to preventable errors and harm. Towards the goal of mitigating
such preventable errors, here, we present a Bayesian approach to infer
misalignment in team members' mental models during complex healthcare task
execution. As an exemplary application, we demonstrate our approach using two
simulated team-based scenarios, derived from actual teamwork in cardiac
surgery. In these simulated experiments, our approach inferred model
misalignment with over 75% recall, thereby providing a building block for
enabling computer-assisted interventions to augment human cognition in the
operating room and improve teamwork.
|
We study a high throughput satellite system, where the feeder link uses
free-space optical (FSO) and the user link uses radio frequency (RF)
communication. In particular, we first propose a transmit diversity scheme using
Alamouti space-time block coding to mitigate the atmospheric turbulence in the
feeder link. Then, based on the concept of average virtual
signal-to-interference-plus-noise ratio and one-bit feedback, we propose a
beamforming algorithm for the user link to maximize the ergodic capacity (EC).
Moreover, by assuming that the FSO links follow the Malaga distribution whereas
RF links undergo the shadowed-Rician fading, we derive a closed-form EC
expression of the considered system. Finally, numerical simulations validate
the accuracy of our theoretical analysis, and show that the proposed schemes
can achieve higher capacity compared with the reference schemes.
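The Alamouti scheme mentioned above has a compact description: two complex symbols are sent over two antennas in two time slots, and the resulting code matrix has orthogonal columns, which is what enables simple linear combining at the receiver. A generic, dependency-free sketch (not the authors' implementation; function names are ours):

```python
def alamouti_encode(s1, s2):
    """Map two complex symbols to a 2x2 Alamouti space-time block:
    rows are time slots, columns are transmit antennas."""
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def is_orthogonal(block, tol=1e-12):
    """Check the defining property X^H X = (|s1|^2 + |s2|^2) * I."""
    (a, b), (c, d) = block
    power = abs(a) ** 2 + abs(b) ** 2
    # Gram matrix entries of the two antenna columns
    g00 = abs(a) ** 2 + abs(c) ** 2
    g11 = abs(b) ** 2 + abs(d) ** 2
    g01 = a.conjugate() * b + c.conjugate() * d
    return (abs(g00 - power) < tol and abs(g11 - power) < tol
            and abs(g01) < tol)

blk = alamouti_encode(1 + 2j, 0.5 - 1j)
```

The orthogonality is what makes the scheme attractive for the FSO feeder link: diversity is obtained without channel knowledge at the transmitter.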
|
Of the many modern approaches to calculating evolutionary distance via models
of genome rearrangement, most are tied to a particular set of genomic modelling
assumptions and to a restricted class of allowed rearrangements. The "position
paradigm", in which genomes are represented as permutations signifying the
position (and orientation) of each region, enables a refined model-based
approach, where one can select biologically plausible rearrangements and assign
to them relative probabilities/costs. Here, one must further incorporate any
underlying structural symmetry of the genomes into the calculations and ensure
that this symmetry is reflected in the model. In our recently-introduced
framework of {\em genome algebras}, each genome corresponds to an element that
simultaneously incorporates all of its inherent physical symmetries. The
representation theory of these algebras then provides a natural model of
evolution via rearrangement as a Markov chain. Whilst implementing
this framework to calculate distances for genomes with `practical' numbers of
regions is currently computationally infeasible, we consider it to be a
significant theoretical advance: one can incorporate different genomic
modelling assumptions, calculate various genomic distances, and compare the
results under different rearrangement models. The aim of this paper is to
demonstrate some of these features.
|
The Institute of Materials and Processes, IMP, of the University of Applied
Sciences in Karlsruhe, Germany in cooperation with VDI Verein Deutscher
Ingenieure e.V, AEN Automotive Engineering Network and their cooperation
partners present their competences in AI-based solution approaches in the
production engineering field. The online congress KI 4 Industry on November 12
and 13, 2020, showed what opportunities the use of artificial intelligence
offers for medium-sized manufacturing companies, SMEs, and where potential
fields of application lie. The main purpose of KI 4 Industry is to increase the
transfer of knowledge, research and technology from universities to small and
medium-sized enterprises, to demystify the term AI and to encourage companies
to use AI-based solutions in their own value chain or in their products.
|
Given a permutation $\pi:[k] \to [k]$, a function $f:[n] \to \mathbb{R}$
contains a $\pi$-appearance if there exists $1 \leq i_1 < i_2 < \dots < i_k
\leq n$ such that for all $s,t \in [k]$, it holds that $f(i_s) < f(i_t)$ if and
only if $\pi(s) < \pi(t)$. The function is $\pi$-free if it has no
$\pi$-appearances. In this paper, we investigate the problem of testing whether
an input function $f$ is $\pi$-free or whether at least $\varepsilon n$ values
in $f$ need to be changed in order to make it $\pi$-free. This problem is a
generalization of the well-studied monotonicity testing and was first studied
by Newman, Rabinovich, Rajendraprasad and Sohler (Random Structures and
Algorithms 2019). We show that for all constants $k \in \mathbb{N}$,
$\varepsilon \in (0,1)$, and permutation $\pi:[k] \to [k]$, there is a
one-sided error $\varepsilon$-testing algorithm for $\pi$-freeness of functions
$f:[n] \to \mathbb{R}$ that makes $\tilde{O}(n^{o(1)})$ queries. We improve
significantly upon the previous best upper bound $O(n^{1 - 1/(k-1)})$ by
Ben-Eliezer and Canonne (SODA 2018). Our algorithm is adaptive, while the
earlier best upper bound is known to be tight for nonadaptive algorithms.
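The definition of a $\pi$-appearance above can be made concrete with a brute-force check over all $k$-subsets of indices (exponential in $k$, so purely illustrative next to the sublinear-query testers discussed here; function names are ours):

```python
from itertools import combinations

def has_pi_appearance(f, pi):
    """True if the sequence f contains indices i_1 < ... < i_k whose values
    are order-isomorphic to the permutation pi (given as 1-indexed values)."""
    k = len(pi)
    for idx in combinations(range(len(f)), k):
        vals = [f[i] for i in idx]
        # Require f(i_s) < f(i_t) iff pi(s) < pi(t) for all pairs s, t.
        if all((vals[s] < vals[t]) == (pi[s] < pi[t])
               for s in range(k) for t in range(k) if s != t):
            return True
    return False

def is_pi_free(f, pi):
    return not has_pi_appearance(f, pi)
```

For $\pi = (2,1)$ this reduces to checking for an inversion, which is why $\pi$-freeness generalizes monotonicity testing.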
|
In random quantum magnets, like the random transverse Ising chain, the low
energy excitations are localized in rare regions and there are only weak
correlations between them. It is a fascinating question whether these
correlations are completely irrelevant in the sense of the renormalization
group. To answer this question, we calculate the distribution of the excitation
energy of the random transverse Ising chain in the disordered Griffiths phase
with high numerical precision by the strong disorder renormalization group
method and - for shorter chains - by free-fermion techniques. Asymptotically,
the two methods give identical results, which are well fitted by the Fr\'echet
limit law of the extremes of independent and identically distributed random
numbers. Regarding the finite-size corrections, the two numerical methods give very
similar results, but these differ from the correction term for uncorrelated
random variables. This fact shows that the weak correlations between low-energy
excitations in random quantum magnets are not entirely irrelevant.
|
New words are regularly introduced to communities, yet not all of these words
persist in a community's lexicon. Among the many factors contributing to
lexical change, we focus on the understudied effect of social networks. We
conduct a large-scale analysis of over 80k neologisms in 4420 online
communities across a decade. Using Poisson regression and survival analysis,
our study demonstrates that the community's network structure plays a
significant role in lexical change. Apart from overall size, properties
including dense connections, the lack of local clusters and more external
contacts promote lexical innovation and retention. Unlike offline communities,
these topic-based communities do not experience strong lexical levelling
despite increased contact but accommodate more niche words. Our work provides
support for the sociolinguistic hypothesis that lexical change is partially
shaped by the structure of the underlying network but also uncovers findings
specific to online communities.
|
It is well known that a classical Fubini theorem for Hausdorff dimension
cannot hold; that is, the dimension of the intersections of a fixed set with a
parallel family of planes do not determine the dimension of the set. Here we
prove that a Fubini theorem for Hausdorff dimension does hold modulo sets that
are small on all Lipschitz curves/surfaces.
We say that $G\subset \mathbb{R}^k\times \mathbb{R}^n$ is $\Gamma_k$-null if
for every Lipschitz function $f:\mathbb{R}^k\to \mathbb{R}^n$ the set $\{t\in
\mathbb{R}^k\,:\,(t,f(t))\in G\}$ has measure zero. We show that for every
compact set $E\subset \mathbb{R}^k\times \mathbb{R}^n$ there is a
$\Gamma_k$-null subset $G\subset E$ such that $$\dim (E\setminus G) =
k+\text{ess-}\sup(\dim E_t)$$ where $\text{ess-}\sup(\dim E_t)$ is the
essential supremum of the Hausdorff dimension of the vertical sections
$\{E_t\}_{t\in \mathbb{R}^k}$ of $E$, assuming that $\mathrm{proj}_{\mathbb{R}^k} E$ has
positive measure.
We also obtain more general results by replacing $\mathbb{R}^k$ by an Ahlfors
regular set. Applications of our results include Fubini-type results for unions
of affine subspaces and related projection theorems.
|
Multi-scale, multi-fidelity numerical simulations form the pillar of
scientific applications related to numerically modeling fluids. However,
simulating the fluid behavior characterized by the non-linear Navier-Stokes
equations is often computationally expensive. Physics-informed machine
learning methods are a viable alternative and as such have seen great interest
in the community [refer to Kutz (2017); Brunton et al. (2020); Duraisamy et al.
(2019) for a detailed review on this topic]. For full physics emulators, the
cost of network inference is often trivial. However, in the current paradigm of
data-driven fluid mechanics, models are built as surrogates for complex
sub-processes. These models are then used in conjunction with the Navier-Stokes
solvers, which makes ML model inference an important factor in terms of
algorithmic latency. With the ever-growing size of networks, and the frequent
overparameterization, exploring effective network compression techniques
becomes not only relevant but critical for engineering systems design. In this
study, we explore the applicability of pruning and quantization (FP32 to int8)
methods for one such application relevant to modeling fluid turbulence.
Post-compression, we demonstrate the improvement in the accuracy of network
predictions and build intuition in the process by comparing the compressed to
the original network state.
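The two compression steps named above, magnitude pruning and FP32-to-int8 quantization, can be sketched on a raw weight array (an illustrative toy with made-up weights, not the study's trained turbulence models):

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize_int8(w):
    """Symmetric int8 quantization: w is approximated by scale * q,
    with q an int8 tensor in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.array([0.01, -0.5, 0.3, -0.02, 0.9, 0.004], dtype=np.float32)
pruned = magnitude_prune(w, 0.5)          # half the weights become zero
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale      # dequantized approximation
```

The per-weight quantization error is bounded by half the scale, which is why int8 inference can preserve accuracy while quartering the memory footprint relative to FP32.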
|
We argue that deriving an effective field theory from string theory requires
a Wilsonian perspective with a physical cutoff. Employing proper time
regularization we demonstrate the decoupling of states and contrast this with
what happens in dimensional regularization. In particular we point out that
even if the cosmological constant (CC) calculated from some classical action at
some ultra-violet scale is negative, this does not necessarily imply that the
CC calculated at cosmological scales is also negative, and discuss the possible
criteria for achieving a positive CC starting with a CC at the string/KK scale
which is negative. Obviously this has implications for swampland claims.
|
A central problem in Binary Hypothesis Testing (BHT) is to determine the
optimal tradeoff between the Type I error (referred to as false alarm) and the
Type II error (referred to as miss). In this context, the exponential rate of
convergence of the optimal miss error probability -- as the sample size tends
to infinity -- given some (positive) restrictions on the false alarm
probabilities is a fundamental question to address in theory. Considering the
more realistic context of a BHT with a finite number of observations, this
paper presents a new non-asymptotic result for the scenario with monotonic
(sub-exponential decreasing) restriction on the Type I error probability, which
extends the result presented by Strassen in 2009. Building on the use of
concentration inequalities, we offer new upper and lower bounds to the optimal
Type II error probability for the case of finite observations. Finally, the
derived bounds are evaluated and interpreted numerically (as a function of the
number of samples) for some vanishing Type I error restrictions.
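The finite-observation tradeoff described above can be made concrete for a simple binomial test (toy numbers of our own choosing; the paper's bounds apply far more generally): fix a Type I restriction, pick the Neyman-Pearson threshold, and read off the resulting Type II error.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def np_test(n, p0, p1, alpha):
    """Deterministic Neyman-Pearson threshold test of H0: p = p0 versus
    H1: p = p1 > p0. Returns (threshold, Type I error, Type II error)."""
    for k in range(n + 1):
        a = binom_tail(n, p0, k)             # false-alarm probability
        if a <= alpha:
            beta = 1 - binom_tail(n, p1, k)  # miss probability
            return k, a, beta
    return n + 1, 0.0, 1.0

# Distinguish a fair coin from a 0.7-biased coin with 20 tosses, alpha <= 0.05.
k, a, b = np_test(20, 0.5, 0.7, 0.05)
```

Tightening the Type I restriction pushes the threshold up and inflates the Type II error, which is exactly the tradeoff the non-asymptotic bounds quantify.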
|
Molecular simulations of the forced unfolding and refolding of biomolecules
or molecular complexes allow one to gain important kinetic, structural and
thermodynamic information about the folding process and the underlying energy
landscape. In force probe molecular dynamics (FPMD) simulations, one pulls one
end of the molecule with a constant velocity in order to induce the relevant
conformational transitions. Since the extended configuration of the system has
to fit into the simulation box together with the solvent, such simulations are
very time-consuming. Here, we apply a hybrid scheme in which the solute is
treated with atomistic resolution and the solvent molecules far away from the
solute are described in a coarse-grained manner. We use the adaptive resolution
scheme (AdResS) that has very successfully been applied to various examples of
equilibrium simulations. We perform FPMD simulations using AdResS on a well
studied system, a dimer formed from mechanically interlocked calixarene
capsules. The results of the multiscale simulations are compared to all-atom
simulations of the identical system and we observe that the size of the region
in which atomistic resolution is required depends on the pulling velocity, i.e.
the particular non-equilibrium situation. For large pulling velocities, a larger
all-atom region is required. Our results show that multiscale simulations can
be applied also in the strong non-equilibrium situations that the system
experiences in FPMD simulations.
|
Graph neural networks (GNNs) constitute a class of deep learning methods for
graph data. They have wide applications in chemistry and biology, such as
molecular property prediction, reaction prediction and drug-target interaction
prediction. Despite the interest, GNN-based modeling is challenging as it
requires graph data pre-processing and modeling in addition to programming and
deep learning. Here we present DGL-LifeSci, an open-source package for deep
learning on graphs in life science. DGL-LifeSci is a python toolkit based on
RDKit, PyTorch and Deep Graph Library (DGL). DGL-LifeSci allows GNN-based
modeling on custom datasets for molecular property prediction, reaction
prediction and molecule generation. With its command-line interfaces, users can
perform modeling without any background in programming and deep learning. We
test the command-line interfaces using standard benchmarks MoleculeNet, USPTO,
and ZINC. Compared with previous implementations, DGL-LifeSci achieves a speed
up by up to 6x. For modeling flexibility, DGL-LifeSci provides well-optimized
modules for various stages of the modeling pipeline. In addition, DGL-LifeSci
provides pre-trained models for reproducing the test experiment results and
applying models without training. The code is distributed under an Apache-2.0
License and is freely accessible at https://github.com/awslabs/dgl-lifesci.
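To illustrate the kind of graph computation a GNN layer performs on a molecule (a generic, dependency-free sketch of sum-aggregation message passing; DGL-LifeSci itself builds such layers on RDKit, PyTorch and DGL, and this is not its API):

```python
def gnn_layer(features, adjacency):
    """One round of sum-aggregation message passing: each node's new
    feature is its own feature plus the sum of its neighbours' features."""
    n = len(features)
    return [features[i] + sum(features[j] for j in adjacency[i])
            for i in range(n)]

# A toy "molecule": three atoms in a triangle, with scalar node features.
features = [1.0, 2.0, 3.0]
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
updated = gnn_layer(features, adjacency)
```

Stacking such layers (with learned transformations in place of the identity) and pooling the node features yields a molecular representation suitable for property prediction.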
|
We report the discovery of three new pulsars in the Globular Cluster (GC)
NGC 6517, namely NGC 6517 E, F, and G, made with the Five-hundred-meter Aperture
Spherical radio Telescope (FAST). The spin periods of NGC 6517 E, F, and G are
7.60~ms, 24.89~ms, and 51.59~ms, respectively. Their dispersion measures are
183.29, 183.713, and 185.3~pc~cm$^{-3}$, respectively, all slightly larger than
those of the previously known pulsars in this cluster. The spin period
derivatives are at the level of 1$\times$10$^{-18}$~s~s$^{-1}$, which suggests
these are recycled pulsars. In addition to the discovery of these three new
pulsars, we updated the timing solutions of the known isolated pulsars, NGC
6517 A, C, and D. The solutions are consistent with those from Lynch et al.
(2011) and with smaller timing residuals. From the timing solution, NGC 6517 A,
B (position from Lynch et al. 2011), C, E, and F are very close to each other
on the sky and only a few arcseconds from the optical core of NGC 6517. With
currently published and unpublished discoveries, NGC 6517 now has 9 pulsars,
ranking 5$^{th}$ among the GCs with the most pulsars. The discoveries take
advantage of the high sensitivity of FAST and a new algorithm used to check and
filter possible candidate signals.
|
We report $^{121/123}$Sb nuclear quadrupole resonance (NQR) and $^{51}$V
nuclear magnetic resonance (NMR) measurements on kagome metal CsV$_3$Sb$_5$
with $T_{\rm c}=2.5$ K. Both $^{51}$V NMR spectra and $^{121/123}$Sb NQR
spectra split after a charge density wave (CDW) transition, which demonstrates
a commensurate CDW state. The coexistence of the high temperature phase and the
CDW phase between $91$ K and $94$ K manifests that it is a first order phase
transition. At low temperature, electric-field-gradient fluctuations diminish
and magnetic fluctuations become dominant. Superconductivity emerges in the
charge order state. Knight shift decreases and $1/T_{1}T$ shows a
Hebel--Slichter coherence peak just below $T_{\rm c}$, indicating that
CsV$_3$Sb$_5$ is an s-wave superconductor.
|
Vibrational and electronic absorption spectra calculated at the
(time-dependent) density functional theory level for the bismuth carbide
clusters Bi$_{n}$C$_{2n}$$^+$ ($3 \le n \le 9$) indicate significant
differences in types of bonding that depend on cluster geometry. Analysis of
the electronic charge densities of these clusters highlighted bonding trends
consistent with the spectroscopic information. The combined data suggest that
larger clusters ($n > 5$) are likely to be kinetically unstable in agreement
with the cluster mass distribution obtained in gas-aggregation source
experiments. The spectral fingerprints of the different clusters obtained from
our calculations also suggest that identification of specific
Bi$_{n}$C$_{2n}$$^+$ isomers should be possible based on infra-red and
optical absorption spectroscopy.
|
Observations carried out toward starless and pre-stellar cores have revealed
that complex organic molecules are prevalent in these objects, but it is
unclear what chemical processes are involved in their formation. Recently, it
has been shown that complex organics are preferentially produced at an
intermediate-density shell within the L1544 pre-stellar core at radial
distances of ~4000 au with respect to the core center. However, the spatial
distribution of complex organics has only been inferred toward this core and it
remains unknown whether these species present a similar behaviour in other
cores. We report high-sensitivity observations carried out toward two positions
in the L1498 pre-stellar core, the dust peak and a position located at a
distance of ~11000 au from the center of the core where the emission of
CH$_3$OH peaks. Similarly to L1544, our observations reveal that small
O-bearing molecules and N-bearing species are enhanced by factors ~4-14 toward
the outer shell of L1498. However, unlike L1544, large O-bearing organics such
as CH$_3$CHO, CH$_3$OCH$_3$, or CH$_3$OCHO are not detected within our sensitivity limits.
For N-bearing organics, these species are more abundant toward the outer shell
of the L1498 pre-stellar core than toward the one in L1544. We propose that the
differences observed between O-bearing and N-bearing species in L1498 and L1544
are due to the different physical structure of these cores, which in turn is a
consequence of their evolutionary stage, with L1498 being younger than L1544.
|
Successful active speaker detection requires a three-stage pipeline: (i)
audio-visual encoding for all speakers in the clip, (ii) inter-speaker relation
modeling between a reference speaker and the background speakers within each
frame, and (iii) temporal modeling for the reference speaker. Each stage of
this pipeline plays an important role for the final performance of the created
architecture. Based on a series of controlled experiments, this work presents
several practical guidelines for audio-visual active speaker detection.
Correspondingly, we present a new architecture called ASDNet, which achieves a
new state-of-the-art on the AVA-ActiveSpeaker dataset with a mAP of 93.5%,
outperforming the second best by a large margin of 4.7%. Our code and
pretrained models are publicly available.
|
We investigate the optical response of a hybrid electro-optomechanical system
interacting with a qubit. In our experimentally feasible system, tunable
all-optical-switching, double-optomechanically induced transparency (OMIT) and
optomechanically induced absorption (OMIA) can be realized. The proposed system
is also shown to generate anomalous dispersion. Based on our theoretical
results, we provide a tunable switch between OMIT and OMIA of the probe field
by manipulating the relevant system parameters. Also, the normal-mode-splitting
(NMS) effect induced by the interactions between the subsystems is discussed
in detail and the effects of varying the interactions on the NMS are clarified.
These rich optical properties of the probe field may provide a promising
platform for a controllable all-optical switch and various other quantum photonic
devices.
|
During the early months of the current COVID-19 pandemic, social-distancing
measures effectively slowed disease transmission in many countries in Europe
and Asia, but the same benefits have not been observed in some developing
countries such as Brazil. In part, this is due to a failure to organise
systematic testing campaigns at nationwide or even regional levels. To gain
effective control of the pandemic, decision-makers in developing countries,
particularly those with large populations, must overcome difficulties posed by
an unequal distribution of wealth combined with low daily testing capacities.
The economic infrastructure of the country, often concentrated in a few cities,
forces workers to travel from commuter cities and rural areas, which induces
strong nonlinear effects on disease transmission. In the present study, we
develop a smart testing strategy to identify geographic regions where COVID-19
testing could most effectively be deployed to limit further disease
transmission. The strategy uses readily available anonymised mobility and
demographic data integrated with intensive care unit (ICU) occupancy data and
city-specific social-distancing measures. Taking into account the heterogeneity
of ICU bed occupancy in differing regions and the stages of disease evolution,
we use a data-driven study of the Brazilian state of Sao Paulo as an example to
show that smart testing strategies can rapidly limit transmission while
reducing the need for social-distancing measures, thus returning life to a
so-called new normal, even when testing capacity is limited.
|
Teaching collaborative argumentation is an advanced skill that many K-12
teachers struggle to develop. To address this, we have developed Discussion
Tracker, a classroom discussion analytics system based on novel algorithms for
classifying argument moves, specificity, and collaboration. Results from a
classroom deployment indicate that teachers found the analytics useful, and
that the underlying classifiers perform with moderate to substantial agreement
with humans.
|
As is well known, the smallest neutrino mass turns out to be vanishing in the
minimal seesaw model, since the effective neutrino mass matrix $M^{}_\nu$ is of
rank two due to the fact that only two heavy right-handed neutrinos are
introduced. In this paper, we point out that the one-loop matching condition
for the effective dimension-five neutrino mass operator can make an important
contribution to the smallest neutrino mass. By using the available one-loop
matching condition and two-loop renormalization group equations in the
supersymmetric version of the minimal seesaw model, we explicitly calculate the
smallest neutrino mass in the case of normal neutrino mass ordering and find
$m^{}_1 \in [10^{-10}, 10^{-8}]~{\rm eV}$ at the Fermi scale $\Lambda^{}_{\rm
F} = 91.2~{\rm GeV}$, where the range of $m^{}_1$ results from the
uncertainties on the choice of the seesaw scale $\Lambda^{}_{\rm SS}$ and on
the input values of relevant parameters at $\Lambda^{}_{\rm SS}$.
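The rank argument in the opening sentence is easy to verify numerically: with only two right-handed neutrinos the Dirac mass matrix is $3\times 2$, so the tree-level seesaw formula $M_\nu = -M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^{\rm T}$ has rank at most two and one vanishing mass. A generic sketch with random entries (illustrative numbers, not a fit to data):

```python
import numpy as np

rng = np.random.default_rng(0)
# 3x2 Dirac mass matrix: three left-handed, two right-handed neutrinos.
m_d = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
# 2x2 heavy Majorana mass matrix (taken diagonal for simplicity).
m_r = np.diag([1.0, 2.0])

# Tree-level seesaw formula for the effective light-neutrino mass matrix.
m_nu = -m_d @ np.linalg.inv(m_r) @ m_d.T

# Rank is at most 2, so the smallest singular value vanishes at tree level.
sv = np.linalg.svd(m_nu, compute_uv=False)
```

It is precisely this tree-level zero that the one-loop matching contribution discussed above lifts to a small but non-vanishing $m^{}_1$.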
|
Hypergraphs are used to model higher-order interactions amongst agents and
there exist many practically relevant instances of hypergraph datasets. To
enable efficient processing of hypergraph-structured data, several hypergraph
neural network platforms have been proposed for learning hypergraph properties
and structure, with a special focus on node classification. However, almost all
existing methods use heuristic propagation rules and offer suboptimal
performance on many datasets. We propose AllSet, a new hypergraph neural
network paradigm that represents a highly general framework for (hyper)graph
neural networks and for the first time implements hypergraph neural network
layers as compositions of two multiset functions that can be efficiently
learned for each task and each dataset. Furthermore, AllSet draws on new
connections between hypergraph neural networks and recent advances in deep
learning of multiset functions. In particular, the proposed architecture
utilizes Deep Sets and Set Transformer architectures that allow for significant
modeling flexibility and offer high expressive power. To evaluate the
performance of AllSet, we conduct the most extensive experiments to date
involving ten known benchmarking datasets and three newly curated datasets that
represent significant challenges for hypergraph node classification. The
results demonstrate that AllSet has the unique ability to consistently either
match or outperform all other hypergraph neural networks across the tested
datasets. Our implementation and dataset will be released upon acceptance.
|
The planet-metallicity correlation serves as a potential link between
exoplanet systems as we observe them today and the effects of bulk composition
on the planet formation process. Many observers have noted a tendency for
Jovian planets to form around stars with higher metallicities; however, there
is no consensus on a trend for smaller planets. Here, we investigate the
planet-metallicity correlation for rocky planets in single and multi-planet
systems around Kepler M-dwarf and late K-dwarf stars. Due to molecular
blanketing and the dim nature of these low-mass stars, it is difficult to make
direct elemental abundance measurements via spectroscopy. We instead use a
combination of accurate and uniformly measured parallaxes and photometry to
obtain relative metallicities and validate this method with a subsample of
spectroscopically determined metallicities. We use the Kolmogorov-Smirnov (KS)
test, Mann-Whitney U test, and Anderson-Darling test to compare the compact
multiple planetary systems with single transiting planet systems and systems
with no detected transiting planets. We find that the compact multiple
planetary systems are derived from a statistically more metal-poor population,
with a p-value of 0.015 in the KS test, a p-value of 0.005 in the Mann-Whitney
U test, and a value of 2.574 in the Anderson-Darling test statistic, which
exceeds the derived threshold for significance by a factor of 25. We conclude
that metallicity plays a significant role in determining the architecture of
rocky planet systems. Compact multiples either form more readily, or are more
likely to survive on Gyr timescales, around metal-poor stars.
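Of the three tests used above, the two-sample KS statistic is the simplest to state: the largest gap between the empirical CDFs of the two metallicity samples. A dependency-free sketch (`scipy.stats` provides production versions of all three tests; the function name is ours):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max_x |F_a(x) - F_b(x)|,
    where F_a and F_b are the empirical CDFs of the two samples."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:  # the maximum gap is attained at a sample point
        f_a = sum(v <= x for v in a) / len(a)
        f_b = sum(v <= x for v in b) / len(b)
        d = max(d, abs(f_a - f_b))
    return d
```

A larger statistic indicates a larger distributional shift, here between the metallicities of compact-multiple hosts and the comparison samples.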
|
We construct weight-preserving bijections between column strict shifted plane
partitions with one row and alternating sign trapezoids with exactly one column
in the left half that sums to $1$. Amongst other things, they relate the number
of $-1$s in the alternating sign trapezoids to certain elements in the column
strict shifted plane partitions that generalise the notion of special parts in
descending plane partitions. The advantage of these bijections is that they
include configurations with $-1$s, which is a feature that many of the
bijections in the realm of alternating sign arrays lack.
|
The nonlinear response associated with the current dependence of the
superconducting kinetic inductance was studied in capacitively shunted NbTiN
microstrip transmission lines. It was found that the inductance per unit length
of one microstrip line could be changed by up to 20% by applying a DC current,
corresponding to a single pass time delay of 0.7 ns. To investigate nonlinear
dissipation, Bragg reflectors were placed on either end of a section of this
type of transmission line, creating resonances over a range of frequencies.
From the change in the resonance linewidth and amplitude with DC current, the
ratio of the reactive to the dissipative response of the line was found to be
788. The low dissipation makes these transmission lines suitable for a number
of applications that are microwave and millimeter-wave band analogues of
nonlinear optical processes. As an example, by applying a millimeter-wave pump
tone, very wide band parametric amplification was observed between about 3 and
34 GHz. Use as a current variable delay line for an on-chip millimeter-wave
Fourier transform spectrometer is also considered.
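As a back-of-the-envelope check (our sketch, assuming a fixed capacitance per unit length), the quoted 20% inductance tuning translates into a fractional delay change, since the delay per unit length scales as $\sqrt{L'C'}$:

```python
import math

# Delay per unit length tau' = sqrt(L'C'); with C' fixed, a fractional
# inductance change dL/L shifts the single-pass delay by sqrt(1 + dL/L) - 1.
dL_over_L = 0.20
dtau_over_tau = math.sqrt(1.0 + dL_over_L) - 1.0
print(round(dtau_over_tau, 3))  # ~0.095, i.e. roughly a 9.5% delay tuning
```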
|
Determining the architecture of multi-planetary systems is one of the
cornerstones of understanding planet formation and evolution. Resonant systems
are especially important as the fragility of their orbital configuration
ensures that no significant scattering or collisional event has taken place
since the earliest formation phase when the parent protoplanetary disc was
still present. In this context, TOI-178 has been the subject of particular
attention since the first TESS observations hinted at a 2:3:3 resonant chain.
Here we report the results of observations from CHEOPS, ESPRESSO, NGTS, and
SPECULOOS with the aim of deciphering the peculiar orbital architecture of the
system. We show that TOI-178 harbours at least six planets in the super-Earth
to mini-Neptune regimes, with radii ranging from $1.152^{+0.073}_{-0.070}$ to
$2.87^{+0.14}_{-0.13}$ Earth radii and periods of 1.91, 3.24, 6.56, 9.96, 15.23, and
20.71 days. All planets but the innermost one form a 2:4:6:9:12 chain of
Laplace resonances, and the planetary densities show important variations from
planet to planet, jumping from $1.02^{+0.28}_{-0.23}$ to $0.177^{+0.055}_{-0.061}$ times
the Earth's density between planets c and d. Using Bayesian interior structure
retrieval models, we show that the amount of gas in the planets does not vary
in a monotonic way, contrary to what one would expect from simple formation
and evolution models and unlike other known systems in a chain of Laplace
resonances. The brightness of TOI-178 allows for a precise characterisation of
its orbital architecture as well as of the physical nature of the six presently
known transiting planets it harbours. The peculiar orbital configuration and
the diversity in average density among the planets in the system will enable
the study of interior planetary structures and atmospheric evolution, providing
important clues on the formation of super-Earths and mini-Neptunes.
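The near-commensurability of the chain can be checked directly from the periods quoted above; in this sketch (using only the rounded periods from the abstract), consecutive period ratios track the 2:4:6:9:12 integer ratios to within a few percent:

```python
# Periods (days) of the five outer planets and the Laplace-chain integers.
periods = [3.24, 6.56, 9.96, 15.23, 20.71]
chain = [2, 4, 6, 9, 12]

ratios = [p2 / p1 for p1, p2 in zip(periods, periods[1:])]
targets = [k2 / k1 for k1, k2 in zip(chain, chain[1:])]
for r, t in zip(ratios, targets):
    print(f"period ratio {r:.3f} vs chain ratio {t:.3f}")
```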
|
We advance and experimentally implement a protocol to generate perfect
optical coherence lattices (OCL) that are not modulated by an envelope field.
Structuring the amplitude and phase of an input partially coherent beam in a
Fourier plane of an imaging system lies at the heart of our protocol. In the
proposed approach, the OCL node profile depends solely on the degree of
coherence (DOC) of the input beam such that, in principle, any lattice
structure can be attained via proper manipulations in the Fourier plane.
Moreover, any genuine partially coherent source can serve as an input to our
lattice-generating imaging system. Our results are anticipated to find
applications in optical field engineering and multi-target probing, among
others.
|
We carried out a comprehensive study of electronic transport, thermal and
thermodynamic properties in FeCr$_2$Te$_4$ single crystals. The compound
exhibits bad-metallic behavior and an anomalous Hall effect (AHE) below a
weak-itinerant paramagnetic-to-ferrimagnetic transition at $T_c$ $\sim$ 123 K.
The linear scaling between the anomalous Hall resistivity $\rho_{xy}$ and the
longitudinal resistivity $\rho_{xx}$ implies that the AHE in FeCr$_2$Te$_4$ is
most likely dominated by the extrinsic skew-scattering mechanism rather than
the intrinsic Karplus-Luttinger (KL) or extrinsic side-jump mechanisms, a
conclusion supported by our Berry phase calculations.
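The mechanism assignment rests on the standard scaling $\rho_{xy} \propto \rho_{xx}^{\alpha}$, with $\alpha \approx 1$ indicating skew scattering and $\alpha \approx 2$ indicating intrinsic or side-jump contributions. A minimal sketch with synthetic data shows the exponent fit:

```python
import numpy as np

# Synthetic skew-scattering-like data: rho_xy proportional to rho_xx.
rho_xx = np.linspace(1.0, 3.0, 20)
rho_xy = 0.05 * rho_xx

# Fit the scaling exponent alpha in rho_xy ~ rho_xx**alpha on log-log axes.
alpha = np.polyfit(np.log(rho_xx), np.log(rho_xy), 1)[0]
print(round(alpha, 2))  # 1.0 -> skew-scattering-like scaling
```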
|
In this article, we study the problem of air-to-ground ultra-reliable and
low-latency communication (URLLC) for a moving ground user. This is done by
controlling multiple unmanned aerial vehicles (UAVs) in real time while
avoiding inter-UAV collisions. To this end, we propose a novel multi-agent deep
reinforcement learning (MADRL) framework, coined a graph attention exchange
network (GAXNet). In GAXNet, each UAV constructs an attention graph locally
measuring the level of attention to its neighboring UAVs, while exchanging the
attention weights with other UAVs so as to reduce the attention mismatch
between them. Simulation results corroborate that GAXNet achieves up to 4.5x
higher rewards during training. At execution, without incurring inter-UAV
collisions, GAXNet achieves 6.5x lower latency at a target error rate of
$10^{-7}$, compared to a state-of-the-art baseline framework.
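A heavily hedged sketch of the attention-exchange idea (our assumed form, not GAXNet's exact update): each agent computes softmax attention over its neighbors, then averages its weights with the weights those neighbors report back, shrinking the attention mismatch between them:

```python
import math

# Hedged sketch (assumed form): softmax attention over 3 neighbors, then a
# simple average with the weights the neighbors exchanged back.
def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

local = softmax([1.0, 2.0, 0.5])  # this UAV's attention over its neighbors
reported = [0.2, 0.6, 0.2]        # weights exchanged back by the neighbors
merged = [(a + b) / 2 for a, b in zip(local, reported)]
print([round(w, 3) for w in merged])
```

Since both the local and reported weights sum to one, the merged weights remain a valid attention distribution.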
|
Animals can quickly learn the timing of events with fixed intervals and their
rate of acquisition does not depend on the length of the interval. In contrast,
recurrent neural networks that use gradient-based learning have difficulty
predicting the timing of events that depend on stimuli that occurred long ago.
We present the latent time-adaptive drift-diffusion model (LTDDM), an extension
to the time-adaptive drift-diffusion model (TDDM), a model for animal learning
of timing that exhibits behavioural properties consistent with experimental
data from animals. The performance of LTDDM is compared to that of a
state-of-the-art long short-term memory (LSTM) recurrent neural network across
three timing tasks. Differences in the relative performance of these two models
are discussed, and it is shown that LTDDM can learn these event time series
orders of magnitude faster than recurrent neural networks.
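A minimal sketch of the drift-diffusion timing idea (our simplification, not the paper's LTDDM): an accumulator rises at drift rate $A$ to a fixed threshold, and after reinforcement at time $T$ the drift is nudged toward $1/T$, so intervals of any length are acquired in the same small number of trials:

```python
def tddm_drift(T, alpha=0.5, trials=5, A=0.01):
    # After each reinforced trial at interval T, move the drift rate toward
    # 1/T; the threshold (1.0) is then reached at ~T, whatever T's length.
    for _ in range(trials):
        A += alpha * (1.0 / T - A)
    return A

for T in (10.0, 100.0):
    print(T, round(1.0 / tddm_drift(T), 2))  # estimated interval after 5 trials
```

The point of the sketch is that the number of trials to converge does not depend on $T$, mirroring the animal-learning property the abstract describes.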
|
In the last three decades, memory safety issues in system programming
languages such as C or C++ have been one of the significant sources of security
vulnerabilities. However, there have been only a few attempts, with limited
success, to cope with the complexity of C++ program verification. Here we
describe and
evaluate a novel verification approach based on bounded model checking (BMC)
and satisfiability modulo theories (SMT) to verify C++ programs formally. Our
verification approach analyzes bounded C++ programs by encoding into SMT
various sophisticated features that the C++ programming language offers, such
as templates, inheritance, polymorphism, exception handling, and the Standard
C++ Libraries. We formalize these features within our formal verification
framework using a decidable fragment of first-order logic and then show how
state-of-the-art SMT solvers can efficiently handle the resulting formulas. We
implemented our
verification approach on top of ESBMC. We compare ESBMC to LLBMC and DIVINE,
which are state-of-the-art verifiers to check C++ programs directly from the
LLVM bitcode. Experimental results show that ESBMC can handle a wide range of
C++ programs, producing a higher number of correct verification results while
reducing the verification time compared to the LLBMC and DIVINE tools.
Additionally, ESBMC has been applied to a commercial C++ application in
the telecommunication domain and successfully detected arithmetic overflow
errors, potentially leading to security vulnerabilities.
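As a toy instance of the property class involved (our Python sketch; ESBMC itself checks the C++ source), the arithmetic-overflow safety condition for 32-bit signed addition can be stated as:

```python
INT32_MIN, INT32_MAX = -(2**31), 2**31 - 1

def add_overflows_int32(a, b):
    # The safety property a bounded model checker would verify for every
    # reachable addition: does a + b leave the 32-bit signed range?
    return not (INT32_MIN <= a + b <= INT32_MAX)

print(add_overflows_int32(INT32_MAX, 1))  # overflow: violates the property
print(add_overflows_int32(1, 2))          # safe
```

A BMC tool encodes every such check along bounded program paths into an SMT formula and asks the solver for a violating assignment.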
|
We investigate the asymptotic symmetry group of a scalar field
minimally-coupled to an abelian gauge field using the Hamiltonian formulation.
This extends previous work by Henneaux and Troessaert on the pure
electromagnetic case. We deal with minimally coupled massive and massless
scalar fields and find that they behave differently insofar as the latter do
not allow for canonically implemented asymptotic boost symmetries. We also
consider the abelian Higgs model and show that its asymptotic canonical
symmetries reduce to the Poincar\'e group in an unproblematic fashion.
|
Despite the rich literature on scheduling algorithms for wireless networks,
algorithms that can provide deadline guarantees on packet delivery for general
traffic and interference models are very limited. In this paper, we study the
problem of scheduling real-time traffic under a conflict-graph interference
model with unreliable links due to channel fading. Packets that are not
successfully delivered within their deadlines are of no value. We consider
traffic (packet arrival and deadline) and fading (link reliability) processes
that evolve as an unknown finite-state Markov chain. The performance metric is
the efficiency ratio, i.e., the fraction of packets of each link delivered
within their deadlines, compared to that under the (unknown) optimal policy. We
first prove a conversion result showing that classical non-real-time scheduling
algorithms can be ported to the real-time setting and yield a constant
efficiency ratio; in particular, Max-Weight Scheduling (MWS) yields an
efficiency ratio of 1/2. We then propose randomized algorithms that achieve
efficiency ratios strictly higher than 1/2, by carefully randomizing over the
maximal schedules. We further propose low-complexity and myopic distributed
randomized algorithms, and characterize their efficiency ratios. Simulation
results verify that the randomized algorithms outperform classical algorithms
such as MWS and Greedy Maximal Scheduling (GMS).
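A single Max-Weight Scheduling step under the conflict-graph model can be sketched as follows (toy instance; the link weights, standing in for deadline-weighted backlogs, are made up for illustration):

```python
# Conflict graph: a path over links 0-1-2-3, so adjacent links interfere.
# The maximal independent sets are the feasible maximal schedules.
maximal_schedules = [{0, 2}, {0, 3}, {1, 3}]
weights = [5, 3, 4, 2]  # per-link weights (e.g. deadline-weighted backlogs)

# MWS activates the maximal schedule with the largest total weight.
best = max(maximal_schedules, key=lambda s: sum(weights[i] for i in s))
print(sorted(best))  # [0, 2]
```

The randomized algorithms described above instead draw among the maximal schedules with carefully chosen probabilities rather than always taking the arg-max.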
|