The design of feedback control systems to block observability in a network
synchronization model, i.e. to make the dynamics unobservable from measurements
at a subset of the network's nodes, is studied. First, a general design
algorithm is presented for blocking observability at any specified group of $m$
nodes, by applying state feedback controls at $m+2$ specified actuation nodes.
The algorithm is based on a method for eigenstructure assignment, which allows
surgical modification of particular eigenvectors to block observability while
preserving the remaining open-loop eigenstructure. Next, the topological
structure of the network is exploited to reduce the number of controllers
required for blocking observability; the result is based on blocking
observability on the nodes associated with a vertex-cutset separating the
actuation and measurement locations. Also, the design is modified to encompass
regional feedback controls, which only use data from a subset of accessible
nodes. The regional feedback design does not maintain the open-loop
eigenstructure, but can be guaranteed to preserve stability via a time-scale
argument. The results are illustrated with numerical examples.
|
Context: In the light of the swift and iterative nature of Agile Software
Development (ASD) practices, establishing deeper insights into capability
measurement within the context of team formation is crucial, as the capability
of individuals and teams can affect team performance and productivity. Although
a former Systematic Literature Review (SLR) synthesized the state of the art in
relation to capability measurement in ASD, with a focus on selecting individuals
for agile teams and on capabilities related to team performance and success,
determining to what degree the SLR's results apply in practice can provide
further insights to both research and practice.
Objective: Our study investigates how agile practitioners perceive the
relevance of individual and team level measures for characterizing the
capability of an agile team and its members. Furthermore, to scrutinize
variations in practitioners' perceptions, our study further analyzes
perceptions across stratified demographic groups.
Method: We undertook a Web-based survey using a questionnaire built based on
the capability measures identified from a previously conducted SLR.
Results: Our 60 survey responses indicate that 127 individual and 28 team
capability measures were considered relevant by the majority of
practitioners. We also identified seven individual and one team capability
measure that had not been previously characterized by our SLR. The surveyed
practitioners suggested that an agile team member's responsibility and
questioning skills significantly represent the member's capability.
Conclusion: Results from our survey align with our SLR's findings. Measures
associated with social aspects were observed to be dominant compared to
technical and innovative aspects. Our results can support agile practitioners
in their team composition decisions.
|
Early speculations about the existence of heavy hadron molecules were
grounded in the idea that light-meson exchange forces could lead to binding.
In analogy to the deuteron, the light-mesons usually considered include the
pion, sigma, rho and omega, but not the axial meson $a_1(1260)$. Though it has
been argued in the past that the coupling of the axial meson to the nucleons is
indeed strong, its mass is considerably heavier than that of the vector mesons
and thus its exchange ends up being suppressed. Yet, this is not necessarily
the case in heavy hadron molecules: we find that even though the contribution
to binding from the axial meson is modest, it cannot be neglected in the
isovector sector where vector meson exchange cancels out. This might provide a
natural binding mechanism for molecular candidates such as the $Z_c(3900)$,
$Z_c(4020)$ or the more recently observed $Z_{cs}(3985)$. However, the
$Z_{cs}(3985)$ depends more on a mixture of different factors, which
(besides axial meson exchange) include $\eta$ exchange and the nature of scalar
meson exchange. Together they point towards the existence of two
$Z_{cs}(3985)$-like resonances instead of one, while the observations about the
role of scalar meson exchange in the $Z_{cs}(3985)$ might be relevant for the
$P_{cs}(4459)$. Finally, the combination of axial meson exchange and flavor
symmetry breaking effects indicates that the isovector $J^{PC} = 0^{++}$
$D^*\bar{D}^*$ and the strange $J^P = 2^{+}$ $D^*\bar{D}_s^*$ molecules are the
most attractive configurations and thus the most likely molecular partners of
the $Z_c(3900)$, $Z_c(4020)$ and $Z_{cs}(3985)$.
|
The 44.7 ms X-ray pulsar in the supernova remnant G12.82-0.02/HESS J1813-178
has the second highest spin-down luminosity of known pulsars in the Galaxy,
with E-dot=5.6e37 erg/s. Using the Green Bank Telescope, we have detected radio
pulsations from PSR J1813-1749 at 4.4-10.2 GHz. The pulse is highly scattered,
with an exponential decay timescale \tau longer than that of any other pulsar
at these frequencies. A point source detected at this position by Dzib et al.
in several observations with the Jansky Very Large Array can be attributed to
the pulsed emission. The steep dependence of \tau on observing frequency
explains why all previous pulsation searches at lower frequencies failed
(\tau~0.25 s at 2 GHz). The large dispersion measure, DM=1087 pc/cc, indicates
a distance of either 6.2 or 12 kpc according to two widely used models of the
electron density distribution in the Galaxy. These disfavor a previously
suggested association with a young stellar cluster at the closer distance of
4.8 kpc. The high column density measured in X-rays, ~1e23/cm^2, also supports a
large distance. If at d~12 kpc, HESS J1813-178 would be one of the most
luminous TeV sources in the Galaxy.
|
Two recent papers (Renou et al., arXiv:2101.10873, and Chen et al.,
arXiv:2103.08123) have indicated that complex numbers are necessary for quantum
theory. This short note is a comment on their result.
|
This work analyzes the optical properties of a localized surface plasmon
(LSP) spaser made of a dielectric active wire coated with a graphene monolayer.
Our theoretical results, obtained by using rigorous electromagnetic methods,
illustrate the non-radiative transfer between the active medium and the
localized surface plasmons of the graphene. In particular, we focus on the
lasing conditions and the tunability of the LSP spaser in two cases: when the
wire is made of an infrared/THz transparent dielectric material and when it is
made of a metal-like material. We analyze the results by comparing them with
analytical expressions that we derived using the quasistatic approximation. We
show that the studied systems present a high tunability of the spaser
resonances with the geometrical parameters as well as with the chemical
potential of the graphene.
|
We prove positive mass theorems for asymptotically hyperbolic and
asymptotically locally hyperbolic Riemannian manifolds with black-hole-type
boundaries.
|
Progress in making neural networks more robust against adversarial attacks is
mostly marginal, despite the great efforts of the research community. Moreover,
the robustness evaluation is often imprecise, making it difficult to identify
promising approaches. We analyze the classification decisions of 19 different
state-of-the-art neural networks trained to be robust against adversarial
attacks. Our findings suggest that current untargeted adversarial attacks
induce misclassification towards only a limited number of different classes.
Additionally, we observe that both over- and under-confidence in model
predictions result in an inaccurate assessment of model robustness. Based on
these observations, we propose a novel loss function for adversarial attacks
that consistently improves attack success rate compared to prior loss functions
for 19 out of 19 analyzed models.
|
Estimation of permutation entropy (PE) using Bayesian statistical methods is
presented for systems where the ordinal pattern sampling follows an
independent, multinomial distribution. It is demonstrated that the PE posterior
distribution is closely approximated by a standard Beta distribution, whose
hyperparameters can be estimated directly from moments computed analytically
from observed ordinal pattern counts. Equivalence with expressions derived
previously using frequentist methods is also demonstrated. Because Bayesian
estimation of PE naturally incorporates uncertainty and prior information, the
orthodox requirement that $N \gg D!$ is effectively circumvented, allowing PE
to be estimated even for very short time series. Self-similarity tests on PE
posterior distributions computed for a semiconductor laser with optical
feedback (SLWOF) system show its PE to vary periodically over time.
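For concreteness, a minimal numerical sketch of this moment-matching idea (our
illustration, not the paper's derivation: it assumes a uniform Dirichlet prior
and estimates the posterior variance by sampling rather than analytically):

```python
import numpy as np
from scipy.special import digamma
from scipy.stats import beta

# Sketch of a moment-matched Beta posterior for normalized PE. The uniform
# Dirichlet prior and the Monte-Carlo variance are our choices; the paper
# obtains both moments analytically from the ordinal pattern counts.
def pe_posterior_beta(counts, prior=1.0, n_samples=20000, seed=0):
    rng = np.random.default_rng(seed)
    alpha = np.asarray(counts, dtype=float) + prior   # Dirichlet posterior
    A = alpha.sum()
    # Exact posterior mean of Shannon entropy under Dirichlet(alpha):
    # E[H] = psi(A + 1) - sum_i (alpha_i / A) * psi(alpha_i + 1)
    mean_H = digamma(A + 1) - np.sum((alpha / A) * digamma(alpha + 1))
    p = rng.dirichlet(alpha, size=n_samples)          # variance by sampling
    H = -np.sum(p * np.log(np.clip(p, 1e-300, None)), axis=1)
    m = mean_H / np.log(len(alpha))                   # normalize by log(D!)
    v = H.var() / np.log(len(alpha)) ** 2
    common = m * (1.0 - m) / v - 1.0                  # Beta moment matching
    return beta(m * common, (1.0 - m) * common)

# Example: counts of the D! = 6 ordinal patterns for D = 3.
post = pe_posterior_beta([41, 38, 7, 6, 5, 3])
print(post.mean(), post.interval(0.95))
```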
|
Unconventional fermions, such as three-fold, four-fold, six-fold, and
eight-fold fermions, have attracted intense attention in recent years. However,
concrete materials hosting unconventional fermions remain scarce. In this work,
based on first-principles calculations and symmetry analysis, we reveal rich
unconventional fermions in the existing compounds Re2C3 (Re = Y, La, Ce, Pr,
Nd, Sm, Tb, Dy, Ho, Er, Tm, Yb, Lu). We show that these compounds host
quadratic dispersive three-fold (TP), linear dispersive four-fold (FP) and
six-fold (SP) points near the Fermi level in their electronic band structures
when spin-orbit coupling (SOC) is not included. Notably, the FP is a charge-2
Dirac-like point. More importantly, among the Re2C3 compounds, Yb2C3 has a very
clean band structure, and its unconventional fermions are close to the Fermi
level. We also find that a uniaxial strain can transform the unconventional
fermions into other types of fermions, depending on the direction of the
strain. When SOC is considered, an SP transforms into an eightfold degenerate
point and a fourfold degenerate point. Overall, our work provides a family of
realistic materials in which to study unconventional fermions.
|
We condition a Brownian motion with arbitrary starting point $y \in
\mathbb{R}$ on spending at most $1$ time unit below $0$ and provide an explicit
description of the resulting process. In particular, we provide explicit
formulas for the distributions of its last zero $g=g^y$ and of its occupation
time $\Gamma=\Gamma^y$ below $0$ as functions of $y$. This generalizes a result
of Benjamini and Berestycki from 2011, which covers the special case $y=0$.
Additionally, we study the behavior of the distributions of $g^y$ and
$\Gamma^y$, respectively, for $y \to \pm\infty$.
|
Assessing sensitivity to unmeasured confounding is an important step in
observational studies, which typically estimate effects under the assumption
that all confounders are measured. In this paper, we develop a sensitivity
analysis framework for balancing weights estimators, an increasingly popular
approach that solves an optimization problem to obtain weights that directly
minimize covariate imbalance. In particular, we adapt a sensitivity analysis
framework using the percentile bootstrap for a broad class of balancing weights
estimators. We prove that the percentile bootstrap procedure can, with only
minor modifications, yield valid confidence intervals for causal effects under
restrictions on the level of unmeasured confounding. We also propose an
amplification to allow for interpretable sensitivity parameters in the
balancing weights framework. We illustrate our method through extensive real
data examples.
|
Topological band theory has achieved great success in the high-throughput
search for topological band structures both in paramagnetic and magnetic
crystal materials. However, a significant proportion of materials are
topologically trivial insulators at the Fermi level. In this paper, we show
that, remarkably, for a subset of the topologically trivial insulators, knowing
only their electron number and the Wyckoff positions of the atoms we can
separate them into two groups: the obstructed atomic insulator (OAI) and the
atomic insulator (AI). The interesting group, the OAI, has charge centers
not localized on the atoms. Using the theory of topological quantum chemistry,
in this work we first derive the necessary and sufficient conditions for a
topologically trivial insulator to be a filling enforced obstructed atomic
insulator (feOAI) in the 1651 Shubnikov space groups. Remarkably, the filling
enforced criteria enable the identification of obstructed atomic bands without
knowing the representations of the band structures. Hence, no ab-initio
calculations are needed for the filling enforced criteria, although they are
needed to obtain the band gaps. With the help of the Topological Quantum
Chemistry website, we have performed a high-throughput search for feOAIs and
have found that 957 ICSD entries (638 unique materials) are paramagnetic
feOAIs, among which 738 (475) materials have an indirect gap. The metallic
obstructed surface states of feOAIs are also showcased by several material
examples.
|
Orthogonal drawings, i.e., embeddings of graphs into grids, are a classic
topic in Graph Drawing. Often the goal is to find a drawing that minimizes the
number of bends on the edges. A key ingredient for bend minimization algorithms
is the existence of an orthogonal representation that describes such drawings
combinatorially by only listing the angles between the edges around each vertex
and the directions of bends on the edges, but neglecting any kind of geometric
information such as vertex coordinates or edge lengths.
We generalize this idea to ortho-radial representations of ortho-radial
drawings, which are embeddings into an ortho-radial grid, whose gridlines are
concentric circles around the origin and straight-line spokes emanating from
the origin but excluding the origin itself. Unlike the orthogonal case, there
exist ortho-radial representations that do not admit a corresponding drawing,
for example so-called strictly monotone cycles. An ortho-radial drawing is
called valid if it does not contain a strictly monotone cycle. Our first result
is that an ortho-radial representation admits a corresponding drawing if and
only if it is valid. Previously such a characterization was only known for
ortho-radial drawings of paths, cycles, and theta graphs, and in the special
case of rectangular drawings of cubic graphs, where the contour of each face is
required to be a rectangle. Further, we give a quadratic-time algorithm that
tests for an ortho-radial representation whether it is valid, and we show how
to draw a valid ortho-radial representation in the same running time.
Altogether, this reduces the problem of computing a minimum-bend ortho-radial
drawing to the task of computing a valid ortho-radial representation with the
minimum number of bends, and hence establishes an ortho-radial analogue of the
topology-shape-metrics framework for planar orthogonal drawings by Tamassia.
|
Engineering materials with high thermal conductivity are of fundamental
interest for efficiently dissipating heat in micro/nanoelectronics. Using
first-principles computations, we report an ultra-high thermal conductivity of
2090 Wm-1K-1 (1395 Wm-1K-1) for hexagonal pure (natural) BC6N (h-BC6N). This
value is among the highest thermal conductivities known, after diamond and
cubic boron arsenide. This ultra-high lattice thermal conductivity (k) is
mainly attributed to the high phonon group velocities of both acoustic and
optical phonons, arising from strong C-C and B-N bonds as well as the light
atomic masses of the constituent elements boron (B), carbon (C) and nitrogen
(N). We also
report size dependent thermal conductivity of h-BC6N nanostructures by
including boundary scattering. At room temperature (300 K) and at nanoscale
length (L) of 100 nm, a high k value of 175 Wm-1K-1 is observed (higher than
the bulk k value of silicon). Optical phonons with large group velocities are
mainly responsible for this high thermal conductivity in h-BC6N nanostructures.
High thermal conductivity of h-BC6N makes it a candidate material for heat
dissipation in micro/nano thermal management applications.
|
The generic fluidity observed in political protest movements across the world
during the last decade owes much to the presence of social media. As such, it
is possible to study contemporary movements with an interdisciplinary approach
combining computational analytics with social science perspectives. The present
study seeks to understand such dynamics in the context of the ongoing
nationwide movement in
India opposing the NRC-CAA enactment. The transformative nature of individual
discontent into collective mobilization, especially with a reflective
intervention in social media across a sensitive region of the nation state, is
presented here with a combination of qualitative (fieldwork) and quantitative
(computing) techniques. The study is augmented further by the primary data
generation coupled with real-time application of analytical approaches.
|
We explore the constraints on the three-nucleon force (3NF) of chiral
effective field theory ($\chi$EFT) that are provided by bound-state observables
in the $A=3$ and $A=4$ sectors. Our statistically rigorous analysis
incorporates experimental error, computational method uncertainty, and the
uncertainty due to truncation of the $\chi$EFT expansion at
next-to-next-to-leading order. A consistent solution for the ${}^3$H binding
energy, the ${}^4$He binding energy and radius, and the ${}^3$H $\beta$-decay
rate can only be obtained if $\chi$EFT truncation errors are included in the
analysis. All of these observables except the $\beta$-decay rate give
essentially degenerate constraints on the 3NF low-energy constants, so the
latter is crucial for estimating these parameters. We use eigenvector
continuation for fast and
accurate emulation of No-Core Shell Model calculations of the considered
few-nucleon observables. This facilitates sampling of the posterior probability
distribution, allowing us to also determine the distributions of the
hyperparameters that quantify the truncation error. We find a $\chi$EFT
expansion parameter of $Q=0.33 \pm 0.06$ for these observables.
|
An instance of the multiperiod binary knapsack problem (MPBKP) is given by a
horizon length $T$, a non-decreasing vector of knapsack sizes $(c_1, \ldots,
c_T)$ where $c_t$ denotes the cumulative size for periods $1,\ldots,t$, and a
list of $n$ items. Each item is a triple $(r, q, d)$ where $r$ denotes the
reward of the item, $q$ its size, and $d$ its time index (or, deadline). The
goal is to choose, for each deadline $t$, which items to include to maximize
the total reward, subject to the constraints that for all $t=1,\ldots,T$, the
total size of selected items with deadlines at most $t$ does not exceed the
cumulative capacity of the knapsack up to time $t$. We also consider the
multiperiod binary knapsack problem with soft capacity constraints (MPBKP-S)
where the capacity constraints are allowed to be violated by paying a penalty
that is linear in the violation. The goal is to maximize the total profit,
i.e., the total reward of selected items less the total penalty. Finally, we
consider the multiperiod binary knapsack problem with soft stochastic capacity
constraints (MPBKP-SS), where the non-decreasing vector of knapsack sizes
$(c_1, \ldots, c_T)$ follow some arbitrary joint distribution but we are given
access to the profit as an oracle, and we choose a subset of items to maximize
the total expected profit, i.e., the total reward less the total expected
penalty. For MPBKP, we exhibit a fully polynomial-time approximation scheme
with runtime
$\tilde{\mathcal{O}}\left(\min\left\{n+\frac{T^{3.25}}{\epsilon^{2.25}},n+\frac{T^{2}}{\epsilon^{3}},\frac{nT}{\epsilon^2},\frac{n^2}{\epsilon}\right\}\right)$
that achieves $(1+\epsilon)$ approximation; for MPBKP-S, the $(1+\epsilon)$
approximation can be achieved in $\mathcal{O}\left(\frac{n\log
n}{\epsilon}\cdot\min\left\{\frac{T}{\epsilon},n\right\}\right)$; for MPBKP-SS,
a greedy algorithm is a 2-approximation when items have the same size.
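To make the feasibility structure concrete, below is a small exact dynamic
program for the basic MPBKP (our illustrative sketch, pseudo-polynomial in the
final capacity $c_T$; it is not the paper's FPTAS):

```python
# Illustrative exact dynamic program for the basic MPBKP. Items are
# (r, q, d) triples; c[t-1] is the cumulative capacity c_t, non-decreasing.
def mpbkp_exact(items, c):
    T, C = len(c), c[-1]
    NEG = float("-inf")
    dp = [0.0] + [NEG] * C               # dp[s] = best reward using size s
    by_deadline = {t: [] for t in range(1, T + 1)}
    for r, q, d in items:
        by_deadline[d].append((r, q))
    for t in range(1, T + 1):
        for r, q in by_deadline[t]:      # standard 0/1 knapsack update
            for s in range(C, q - 1, -1):
                if dp[s - q] > NEG:
                    dp[s] = max(dp[s], dp[s - q] + r)
        for s in range(c[t - 1] + 1, C + 1):
            dp[s] = NEG                  # enforce: size(deadline <= t) <= c_t
    return max(dp)

# Three periods with cumulative capacities (3, 5, 8); the optimum picks
# items 1, 3 and 4 for reward 10 + 9 + 4 = 23.
print(mpbkp_exact([(10, 2, 1), (7, 3, 2), (9, 4, 3), (4, 2, 3)], [3, 5, 8]))
```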
|
This manuscript focuses on the transmission problem for one-dimensional
waves with nonlinear weights on the frictional damping and a time-varying delay.
We prove global existence of solutions using Kato's variable norm technique and
we show the exponential stability by the energy method with the construction of
a suitable Lyapunov functional.
|
Predictive state representations (PSRs) are models of controlled non-Markov
observation sequences which exhibit the same generative process governing POMDP
observations without relying on an underlying latent state. In that respect, a
PSR is indistinguishable from the corresponding POMDP. However, PSRs
notoriously ignore the notion of rewards, which undermines the general utility
of PSR models for control, planning, or reinforcement learning. Therefore, we
describe a sufficient and necessary accuracy condition which determines whether
a PSR is able to accurately model POMDP rewards, we show that rewards can be
approximated even when the accuracy condition is not satisfied, and we find
that a non-trivial number of POMDPs taken from a well-known third-party
repository do not satisfy the accuracy condition. We propose reward-predictive
state representations (R-PSRs), a generalization of PSRs which accurately
models both observations and rewards, and develop value iteration for R-PSRs.
We show that there is a mismatch between optimal POMDP policies and the optimal
PSR policies derived from approximate rewards. On the other hand, optimal R-PSR
policies perfectly match optimal POMDP policies, reconfirming R-PSRs as
accurate state-less generative models of observations and rewards.
|
This paper presents Gem, a model-agnostic approach for providing
interpretable explanations for any GNNs on various graph learning tasks.
Specifically, we formulate the problem of providing explanations for the
decisions of GNNs as a causal learning task. Then we train a causal explanation
model equipped with a loss function based on Granger causality. Different from
existing explainers for GNNs, Gem explains GNNs on graph-structured data from a
causal perspective. It has better generalization ability as it has no
requirements on the internal structure of the GNNs or prior knowledge on the
graph learning tasks. In addition, Gem, once trained, can be used to explain
the target GNN very quickly. Our theoretical analysis shows that several recent
explainers fall into a unified framework of additive feature attribution
methods. Experimental results on synthetic and real-world datasets show that
Gem achieves a relative increase of the explanation accuracy by up to $30\%$
and speeds up the explanation process by up to $110\times$ as compared to its
state-of-the-art alternatives.
|
We report new precision measurements of the elastic electron-proton
scattering cross section for momentum transfer squared (Q$^2$) up to
15.75 (GeV/c)$^2$. These data allow for improved extraction of the proton magnetic
form factor at high Q$^2$ and nearly double the Q$^2$ range of direct
longitudinal/transverse separated cross sections. A comparison of our results
to polarization measurements establishes the presence of hard two-photon
exchange in the $e$-$p$ elastic scattering cross section at greater than 95\%
confidence level for Q$^2$ up to 8 (GeV/c)$^2$.
|
Intuition dictates that a very long, very thin cavity (e.g., a fiber optic
cable) could perhaps be modeled as an approximately one dimensional system. In
this paper we rigorously explore the validity of such intuition from the
perspective of a localized probe coupling to a quantum field inside a cavity
(e.g., an atom or an Unruh-DeWitt particle detector in a fiber optic cable). To
do so, we introduce the notion of subfield decomposition in which a $D+1$
dimensional quantum field in an axially-symmetric cavity can be reduced to an
infinite collection of uncoupled, massive $1+1$ dimensional fields. We show
that the ability to approximate a higher-dimensional scenario by a $1+1$
dimensional model is equivalent to making a certain change of the probe's shape
in the higher-dimensional space. The approximation is justified whenever this
change of shape is "small enough". In this light, we identify the dynamically
relevant norm by which the magnitude of these changes in probe shape ought to
be judged. Finally, we explore this approximation in particular setups
corresponding to quantum optics and superconducting circuits.
|
Sound localization aims to find the source of the audio signal in the visual
scene. However, it is labor-intensive to annotate the correlations between the
signals sampled from the audio and visual modalities, thus making it difficult
to supervise the learning of a machine for this task. In this work, we propose
an iterative contrastive learning framework that requires no data annotations.
At each iteration, the proposed method takes the 1) localization results in
images predicted in the previous iteration, and 2) semantic relationships
inferred from the audio signals as the pseudo-labels. We then use the
pseudo-labels to learn the correlation between the visual and audio signals
sampled from the same video (intra-frame sampling) as well as the association
between those extracted across videos (inter-frame relation). Our iterative
strategy gradually encourages the localization of the sounding objects and
reduces the correlation between the non-sounding regions and the reference
audio. Quantitative and qualitative experimental results demonstrate that the
proposed framework performs favorably against existing unsupervised and
weakly-supervised methods on the sound localization task.
|
We examine adaptive strategies adopted by vehicles for route selection
en-route in transportation networks. By studying a model of two-dimensional
cellular automata, we model vehicles characterized by a parameter called
path-greediness, which corresponds to the tendency for them to travel to their
destinations via the shortest path. The path-greediness of each individual
vehicle is updated based on the local traffic conditions, to either keep the
vehicle traveling via a shorter path in an uncongested region or to explore
longer diverted paths in a congested region. We find that the optimal number
of steps to trigger an update of path-greediness is dependent on the density of
vehicles, and the magnitude of path-greediness increment affects the
macroscopic traffic conditions of the system. To better coordinate vehicles in
denser networks, the update on the tendency for vehicles to travel via the
shorter paths should be gradual and less frequent.
|
We realize and investigate a nonlinear metasurface taking advantage of
intersubband transitions in ultranarrow GaN/AlN multi-quantum well
heterostructures. Owing to huge band offsets, the structures offer resonant
transitions in the telecom window around 1.55 $\mu$m. These heterostructures
are functionalized with an array of plasmonic antennas featuring
cross-polarized resonances at these near-infrared wavelengths and their second
harmonic. This kind of nonlinear metasurface allows for substantial
second-harmonic generation at normal incidence which is completely absent for
an antenna array without the multi-quantum well structure underneath. While the
second harmonic is originally radiated only into the plane of the quantum
wells, a proper geometrical arrangement of the plasmonic elements makes it
possible to redirect the second-harmonic light into free space, where it is emitted
perpendicular to the surface.
|
This work deals with thick branes in bulk with a single extra dimension
modeled by a two-field configuration. We first consider the inclusion of the
cuscuton to also control the dynamics of one of the fields and investigate how
it contributes to change the internal structure of the configuration in two
distinct situations, with the standard and the asymmetric Bloch brane. The
results show that the branes get a rich internal structure, with the geometry
presenting a novel behavior which is also governed by the parameter that
controls the strength of the cuscuton term. We also study the case where the
dynamics of one of the two fields is only described by the cuscuton. All the
models support analytical solutions which are stable against fluctuations in
the metric, and the main results unveil significant modifications in the warp
factor and energy density of the branes.
|
We study the metric backreaction of mass and angular momentum accretion on
black holes. We first develop the formalism of monopole and dipole linear
gravitational perturbations around Schwarzschild black holes in
Eddington-Finkelstein coordinates for generic time-dependent matter.
We derive the relation between the time dependence of the mass and angular
momentum of the black hole and the energy-momentum tensors of accreting
matters. As a concrete example, we apply our formalism to the Blandford-Znajek
process around the slowly rotating black holes. We find that the time
dependence of the monopole and dipole perturbations can be interpreted as the
slowly rotating Kerr metric with time-dependent mass and spin parameters, which
are determined from the energy and angular momentum extraction rates of the
Blandford-Znajek process. We also show that the Komar angular momentum and the
area of the apparent horizon are decreasing and increasing in time,
respectively, while they are consistent with the Blandford-Znajek argument of
energy extraction in terms of black hole mechanics if we regard the
time-dependent mass parameter as the energy of the black hole.
|
Offline reinforcement learning (RL) enables learning policies using
pre-collected datasets without environment interaction, which provides a
promising direction to make RL usable in real-world systems. Although recent
offline RL studies have achieved much progress, existing methods still face
many practical challenges in real-world system control tasks, such as
computational restriction during agent training and the requirement of extra
control flexibility. Model-based planning framework provides an attractive
solution for such tasks. However, most model-based planning algorithms are not
designed for offline settings. Simply combining the ingredients of offline RL
with existing methods either provides over-restrictive planning or leads to
inferior performance. We propose a new lightweight model-based offline
planning framework, namely MOPP, which tackles the dilemma between the
restrictions of offline learning and high-performance planning. MOPP encourages
more aggressive trajectory rollout guided by the behavior policy learned from
data, and prunes out problematic trajectories to avoid potential
out-of-distribution samples. Experimental results show that MOPP provides
competitive performance compared with existing model-based offline planning and
RL approaches.
|
The design of metamaterials which support unique optical responses is the
basis for most thin-film nanophotonics applications. In practice this inverse
design problem can be difficult to solve systematically due to the large design
parameter space associated with general multi-layered systems. We apply
convolutional neural networks, a subset of deep machine learning, as a tool to
solve this inverse design problem for metamaterials composed of stacks of thin
films. We demonstrate the remarkable ability of neural networks to probe the
large global design space (up to $10^{12}$ possible parameter combinations) and
resolve all relationships between metamaterial structure and corresponding
ellipsometric and reflectance / transmittance spectra. The applicability of the
approach is further expanded to include the inverse design of synthetic
engineered spectra in general design scenarios. Furthermore, this approach is
compared with traditional optimization methods. We find an increase in the
relative optimization efficiency of the networks with increasing total layer
number, revealing the advantage of the machine learning approach in
many-layered systems where traditional methods become impractical.
|
This paper addresses the challenge of learning to do procedural reasoning
over text to answer "What if..." questions. We propose a novel relational
gating network that learns to filter the key entities and relationships and
learns contextual and cross representations of both procedure and question for
finding the answer. Our relational gating network contains an entity gating
module, relation gating module, and contextual interaction module. These
modules help in solving the "What if..." reasoning problem. We show that
modeling pairwise relationships helps to capture higher-order relations and
find the line of reasoning for causes and effects in the procedural
descriptions. Our proposed approach achieves the state-of-the-art results on
the WIQA dataset.
|
We present MISTIQS, a Multiplatform Software for Time-dependent Quantum
Simulations. MISTIQS delivers end-to-end functionality for simulating the
quantum many-body dynamics of systems governed by time-dependent Heisenberg
Hamiltonians across multiple quantum computing platforms. It provides
high-level programming functionality for generating intermediate
representations of quantum circuits which can be translated into a variety of
industry-standard representations. Furthermore, it offers a selection of
circuit compilation and optimization methods and facilitates execution of the
quantum circuits on currently available cloud-based quantum computing backends.
MISTIQS serves as an accessible and highly flexible research and education
platform, allowing a broader community of scientists and students to perform
quantum many-body dynamics simulations on current quantum computers.
|
Learning a generative model of visual information with sparse and
compositional features has been a challenge for both theoretical neuroscience
and machine learning communities. Sparse coding models have achieved great
success in explaining the receptive fields of mammalian primary visual cortex
with sparsely activated latent representation. In this paper, we focus on a
recently proposed model, sparse coding variational autoencoder (SVAE) (Barello
et al., 2018), and show that the end-to-end training scheme of SVAE leads to a
large group of decoding filters that are not fully optimized, with noise-like receptive
fields. We propose a few heuristics to improve the training of SVAE and show
that a unit $L_2$ norm constraint on the decoder is critical to produce sparse
coding filters. Such normalization can be considered as local lateral
inhibition in the cortex. We verify this claim empirically on both natural
image patches and MNIST dataset and show that projection of the filters onto
unit norm drastically increases the number of active filters. Our results
highlight the importance of weight normalization for learning sparse
representation from data and suggest a new way of reducing the number of
inactive latent components in VAE learning.
|
In this article we derive the Biot-Savart law for an exterior 2D domain. The
no-slip condition is expressed in integral form in terms of vorticity. For the
non-stationary fluid dynamics equations, the no-slip condition generates an
affine invariant manifold, and the no-slip integral relations on vorticity can
be transferred to a Robin-type boundary condition.
|
We consider the semilinear heat equation $$\partial_t u -\Delta u =f(u),
\quad (x,t)\in \mathbb{R}^N\times [0,T),\qquad (1)$$ with
$f(u)=|u|^{p-1}u\log^a (2+u^2)$, where $p>1$ is Sobolev subcritical and $a\in
\mathbb{R}$. We first show an upper bound for any blow-up solution of (1).
Then, using this estimate and the logarithmic property, we prove that the exact
blow-up rate of any singular solution of (1) is given by the ODE solution
associated with (1), namely $u' =|u|^{p-1}u\log^a (2+u^2)$. In other terms, all
blow-up solutions in the Sobolev subcritical range are Type I solutions. Up to
our knowledge, this is the first determination of the blow-up rate for a
semilinear heat equation where the main nonlinear term is not homogeneous.
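For orientation, separating variables in this ODE (a standard step, added here
for the reader) shows that a positive solution blowing up at time $T$ satisfies
$$T-t=\int_{u(t)}^{\infty}\frac{ds}{s^{p}\log^{a}(2+s^{2})},$$
which for $a=0$ recovers the classical Type I rate
$u(t)=\big((p-1)(T-t)\big)^{-1/(p-1)}$.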
|
For the future International Linear Collider (ILC), a helical undulator-based
polarized positron source is expected to be chosen. A high
energy electron beam passes through a superconducting helical undulator in
order to create circularly polarized photons which will be directed to a
conversion target, resulting in electron-positron pairs. The resulting positron
beam is longitudinally polarized. Since the photons are produced with an
opening angle and pass through a long superconducting helical undulator, some
of these photons will strike the undulator walls. Therefore photon masks must
be placed along the undulator line in order to keep the power deposited in the
undulator walls below the acceptable limit of 1 W/m. The baseline design of the
ILC focuses on a 250 GeV center-of-mass energy, and an upgrade to center-of-mass
energies of 350 and 500 GeV is foreseen. This paper presents a detailed study of
the ideal power deposited along the masks for both 350 and 500 GeV
center-of-mass energies.
|
The list of physically motivated urn models that can be solved in terms of
classical orthogonal polynomials is very small. It includes a model proposed by
D. Bernoulli and further analyzed by S. Laplace and a model proposed by P. and
T. Ehrenfest and eventually connected with the Krawtchouk and Hahn polynomials.
This connection was reversed recently in the case of the Jacobi polynomials
where a rather contrived, and later a simpler, urn model was proposed. Here we
consider an urn model associated with the Jacobi-Pi\~neiro multiple orthogonal
polynomials. These polynomials have recently been put forth in connection with
a stochastic matrix.
|
In this article, we take into account our previous calculations based on the
QCD sum rules, and tentatively assign the $X(4630)$ as the
$D_s^*\bar{D}_{s1}-D_{s1}\bar{D}_s^*$ tetraquark molecular state or
$[cs]_P[\bar{c}\bar{s}]_A+[cs]_A[\bar{c}\bar{s}]_P$ tetraquark state with the
$J^{PC}=1^{-+}$, and assign the $X(3915)$ and $X(4500)$ as the 1S and 2S
$[cs]_A[\bar{c}\bar{s}]_A$ tetraquark states respectively with the
$J^{PC}=0^{++}$. Then we extend our previous works to investigate the LHCb's
new tetraquark candidate $X(4685)$ as the first radial excited state of the
$X(4140)$ with the QCD sum rules, and obtain the mass
$M_{X}=4.70\pm0.12\,\rm{GeV}$, which is in very good agreement with the
experimental value $4684 \pm 7 {}^{+13}_{-16}\,\rm{MeV}$. Furthermore, we
investigate the two-meson scattering state contributions in detail, and
observe that the two-meson scattering states alone cannot saturate the QCD sum
rules; the contributions of the tetraquark states play an irreplaceable role,
and we can saturate the QCD sum rules with or without the two-meson
scattering states.
|
An anytime decoding algorithm for tree codes using Monte-Carlo tree search is
proposed. The meaning of anytime decoding here is twofold: 1) the decoding
algorithm is an anytime algorithm, whose decoding performance improves as more
computational resources, measured by decoding time, are allowed, and 2) the
proposed decoding algorithm can approximate the maximum-likelihood sequence
decoding of tree codes, which has the anytime reliability when the code is
properly designed. The above anytime properties are demonstrated through
experiments. The proposed method may be extended to the decoding of
convolutional codes and block codes by Monte-Carlo trellis search, to enable
smooth complexity-performance trade-offs in these decoding tasks. Some other
extensions and possible improvements are also discussed.
|
Dynamic systems that consist of a set of interacting elements can be
abstracted as temporal networks. Recently, higher-order patterns that involve
multiple interacting nodes have been found crucial to indicate domain-specific
laws of different temporal networks. This poses the challenge of designing
more sophisticated hypergraph models for these higher-order patterns and the
associated new learning algorithms. Here, we propose the first model, named
HIT, for higher-order pattern prediction in temporal hypergraphs. Particularly,
we focus on predicting three types of common but important interaction patterns
involving three interacting elements in temporal networks, which could be
extended to even higher-order patterns. HIT extracts the structural
representation of a node triplet of interest on the temporal hypergraph and
uses it to tell what type of, when, and why the interaction expansion could
happen in this triplet. HIT achieves significant improvements (an average 20%
AUC gain in identifying the interaction type and uniformly more accurate time
estimation) compared to both heuristic and other neural-network-based baselines
on 5 large real-world temporal hypergraphs. Moreover, HIT provides a certain
degree of interpretability by identifying the most discriminatory structural
features on the temporal hypergraphs for predicting different higher-order
patterns.
|
Factorizing speech as disentangled speech representations is vital to achieve
highly controllable style transfer in voice conversion (VC). Conventional
speech representation learning methods in VC only factorize speech as speaker
and content, lacking controllability on other prosody-related factors.
State-of-the-art speech representation learning methods for more speech
factors use basic disentangling algorithms such as random resampling and
ad-hoc bottleneck layer size adjustment, which, however, make it hard to
ensure robust speech representation disentanglement. To increase the
robustness of highly controllable style transfer on multiple factors in VC, we
propose a disentangled speech representation learning framework based on
adversarial learning. Four speech representations characterizing content,
timbre, rhythm and pitch are extracted, and further disentangled by an
adversarial Mask-And-Predict (MAP) network inspired by BERT. The adversarial
network is used to minimize the correlations between the speech
representations, by randomly masking and predicting one of the representations
from the others. Experimental results show that the proposed framework
significantly improves the robustness of VC on multiple factors, increasing
the speech quality MOS from 2.79 to 3.30 and decreasing the MCD from 3.89 to
3.58.
|
Data augmentation is an effective way to improve the performance of many
neural text generation models. However, current data augmentation methods need
to define or choose proper data mapping functions that map the original samples
into the augmented samples. In this work, we derive an objective to formulate
the problem of data augmentation on text generation tasks without any use of
augmented data constructed by specific mapping functions. Our proposed
objective can be efficiently optimized and applied to popular loss functions on
text generation tasks with a convergence rate guarantee. Experiments on five
datasets of two text generation tasks show that our approach can approximate or
even surpass popular data augmentation methods.
|
Biomolecular condensates are small droplets forming spontaneously in
biological cells via phase separation. They play a role in many cellular
processes, but it is unclear how cells control them. Cellular regulation often
relies on post-translational modifications of proteins. For biomolecular
condensates, such chemical modifications could alter the molecular interaction
of key condensate components. We here test this idea using a theoretical model
based on non-equilibrium thermodynamics. In particular, we describe the
chemical reactions using transition-state theory, which accounts for the
non-ideality of phase separation. We identify that fast control, like in cell
signaling, is only possible when external energy input drives the reaction out
of equilibrium. If this reaction differs inside and outside the droplet, it is
even possible to control droplet sizes. Such an imbalance in the reaction could
be created by enzymes localizing to the droplet. Since this situation is
typical inside cells, we speculate that our proposed mechanism is used to
stabilize multiple droplets with independently controlled size and count. Our
model provides a novel and thermodynamically consistent framework for
describing droplets subject to non-equilibrium chemical reactions.
|
Fe, Mg, and O are among the most abundant elements in terrestrial planets.
While the behavior of the Fe-O, Mg-O, and Fe-Mg binary systems under pressure
has been investigated, there are still very few studies of the Fe-Mg-O ternary
system at pressures relevant to Earth's core and super-Earth mantles. Here, we
use the adaptive genetic algorithm (AGA) to study ternary Fe$_x$Mg$_y$O$_z$
phases in a wide range of stoichiometries at 200 GPa and 350 GPa. We discovered
three dynamically stable phases with stoichiometries FeMg$_2$O$_4$,
Fe$_2$MgO$_4$, and FeMg$_3$O$_4$ with lower enthalpy than any known combination
of Fe-Mg-O high-pressure compounds at 350 GPa. With the discovery of these
phases, we construct the Fe-Mg-O ternary convex hull. We further clarify the
composition- and pressure-dependence of structural motifs with the analysis of
the AGA-found stable and metastable structures. Analysis of binary and ternary
stable phases suggests that O, Mg, or both could stabilize a BCC iron alloy at
inner core pressures.
|
Contemporary grasp detection approaches employ deep learning to achieve
robustness to sensor and object model uncertainty. The two dominant approaches
design either grasp-quality scoring or anchor-based grasp recognition networks.
This paper presents a different approach to grasp detection by treating it as
keypoint detection in image-space. The deep network detects each grasp
candidate as a pair of keypoints, convertible to the grasp representation
g = {x, y, w, {\theta}}^T, rather than a triplet or quartet of corner points.
Decreasing the detection difficulty by grouping keypoints into pairs boosts
performance. To promote capturing dependencies between keypoints, a non-local
module is incorporated into the network design. A final filtering strategy
based on discrete and continuous orientation prediction removes false
correspondences and further improves grasp detection performance. GKNet, the
approach presented here, achieves a good balance between accuracy and speed on
the Cornell and the abridged Jacquard datasets (96.9% and 98.39% at 41.67 and
23.26 fps). Follow-up experiments on a manipulator evaluate GKNet using 4 types
of grasping experiments reflecting different nuisance sources: static grasping,
dynamic grasping, grasping at varied camera angles, and bin picking. GKNet
outperforms reference baselines in static and dynamic grasping experiments
while showing robustness to varied camera viewpoints and moderate clutter. The
results confirm the hypothesis that grasp keypoints are an effective output
representation for deep grasp networks that provide robustness to expected
nuisance factors.
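For concreteness, a minimal sketch of decoding a keypoint pair into the grasp
representation (our reading of the representation; conventions such as the
angle range and width definition are assumptions, not GKNet's released code):

```python
import numpy as np

# Map two detected keypoints (u, v) to g = {x, y, w, theta}^T.
def keypoints_to_grasp(kp_left, kp_right):
    kp_left = np.asarray(kp_left, dtype=float)
    kp_right = np.asarray(kp_right, dtype=float)
    center = (kp_left + kp_right) / 2.0          # grasp center (x, y)
    delta = kp_right - kp_left
    w = np.linalg.norm(delta)                    # grasp width
    theta = np.arctan2(delta[1], delta[0])       # grasp angle (radians)
    return np.array([center[0], center[1], w, theta])

# Example: keypoints 50 px apart -> center (120, 55), width 50, angle ~0.64 rad
print(keypoints_to_grasp((100, 40), (140, 70)))
```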
|
One of the challenges faced by many video providers is the heterogeneity of
network specifications, user requirements, and content compression performance.
The universal solution of a fixed bitrate ladder is inadequate in ensuring a
high quality of user experience without re-buffering or introducing annoying
compression artifacts. However, a content-tailored solution, based on
extensively encoding across all resolutions and over a wide quality range is
highly expensive in terms of computational, financial, and energy costs.
Inspired by this, we propose an approach that exploits machine learning to
predict a content-optimized bitrate ladder. The method extracts spatio-temporal
features from the uncompressed content, trains machine-learning models to
predict the Pareto front parameters, and, based on that, builds the ladder
within a defined bitrate range. The method has the benefit of significantly
reducing the number of encodes required per sequence. The presented results,
based on 100 HEVC-encoded sequences, demonstrate a reduction in the number of
encodes required when compared to an exhaustive search and an
interpolation-based method, by 89.06% and 61.46%, respectively, at the cost of
an average Bj{\o}ntegaard Delta Rate difference of 1.78% compared to the
exhaustive approach. Finally, a hybrid method is introduced that selects either
the proposed or the interpolation-based method depending on the sequence
features. This results in an overall 83.83% reduction of required encodings at
the cost of an average Bj{\o}ntegaard Delta Rate difference of 1.26%.
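A minimal sketch of the prediction step (illustrative only: the synthetic
features, the regressor choice, and the logarithmic rate-quality
parametrization of the Pareto front are our assumptions, not the paper's exact
model):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in data: 4 spatio-temporal features per training sequence, and
# Pareto-front parameters (a, b) for quality(rate) = a + b * log(rate).
rng = np.random.default_rng(0)
feats = rng.random((100, 4))
a_targets = 20 + 30 * feats[:, 0]   # in practice: fit from exhaustive encodes
b_targets = 5 + 10 * feats[:, 1]

model_a = GradientBoostingRegressor().fit(feats, a_targets)
model_b = GradientBoostingRegressor().fit(feats, b_targets)

def build_ladder(features, rates_kbps=(500, 1500, 4000, 8000)):
    """Predict the front parameters, then place ladder rungs on the front."""
    a = model_a.predict(features[None, :])[0]
    b = model_b.predict(features[None, :])[0]
    return [(r, a + b * np.log(r)) for r in rates_kbps]

for rate, quality in build_ladder(rng.random(4)):
    print(f"{rate} kbps -> predicted quality {quality:.1f}")
```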
|
We discuss a possible method by which the architects and surveyors of Ancient
Egypt used the cubit rod to measure and draw lengths, comparing it with the
other interpretations present in the literature. Instead of the modern decimal
notation, which uses a comma to represent a number or a measure, at that time
there was a wide use of fractions in calculations.
The current work proposes that, through the cubit rod and its partitions of the
finger into fractions, it could be possible to obtain very accurate
measurements.
|
We propose a roadmap for bootstrapping conformal field theories (CFTs)
described by gauge theories in dimensions $d>2$. In particular, we provide a
simple and workable answer to the question of how to detect the gauge group in
the bootstrap calculation. Our recipe is based on the notion of
\emph{decoupling operator}, which has a simple (gauge) group theoretical
origin, and is reminiscent of the null operator of $2d$ Wess-Zumino-Witten CFTs
in higher dimensions. Using the decoupling operator we can efficiently detect
the rank (i.e. color number) of gauge groups, e.g., by imposing gap conditions
in the CFT spectrum. We also discuss the physics of the equation of motion,
which has interesting consequences in the CFT spectrum as well. As an
application of our recipes, we study a prototypical critical gauge theory,
namely scalar QED, which has a $U(1)$ gauge field interacting with critical
bosons. We show that scalar QED can be solved by the conformal bootstrap,
namely we have obtained its kinks and islands in both $d=3$ and $d=2+\epsilon$
dimensions.
|
Self-Attention has become prevalent in computer vision models. Inspired by
fully connected Conditional Random Fields (CRFs), we decompose self-attention
into local and context terms. They correspond to the unary and binary terms in
CRF and are implemented by attention mechanisms with projection matrices. We
observe that the unary terms only make small contributions to the outputs, and
meanwhile standard CNNs that rely solely on the unary terms achieve great
performance on a variety of tasks. Therefore, we propose Locally Enhanced
Self-Attention (LESA), which enhances the unary term by incorporating it with
convolutions, and utilizes a fusion module to dynamically couple the unary and
binary operations. In our experiments, we replace the self-attention modules
with LESA. The results on ImageNet and COCO show the superiority of LESA over
convolution and self-attention baselines for the tasks of image recognition,
object detection, and instance segmentation. The code is made publicly
available.
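A toy numpy sketch of the unary/binary decomposition (our stand-in: in LESA
the unary term is convolutional and the fusion gate is a learned, dynamic
module rather than a fixed scalar):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Local (here pointwise) unary term plus pairwise binary attention term,
# coupled by a fixed gate.
def lesa_like(x, Wq, Wk, Wv, Wu, gate=0.5):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    binary = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v  # context term
    unary = x @ Wu                                # local term (conv in LESA)
    return gate * unary + (1.0 - gate) * binary   # fusion

d, n = 8, 5
rng = np.random.default_rng(0)
x = rng.normal(size=(n, d))
out = lesa_like(x, *(rng.normal(size=(d, d)) for _ in range(4)))
print(out.shape)  # (5, 8)
```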
|
In this work we establish that the Inozemtsev system is the Seiberg-Witten
integrable system encoding the Coulomb branch physics of 4d $\mathcal{N}=2$
USp(2N) gauge theory with four fundamental and (for $N \geq 2$) one
antisymmetric tensor hypermultiplets. We describe the transformation from the
spectral curves and canonical one-form of the Inozemtsev system in the $N=1$
and $N=2$ cases to the Seiberg-Witten curves and differentials explicitly,
along with the explicit matching of the modulus of the elliptic curve of
spectral parameters to the gauge coupling of the field theory, and of the
couplings of the Inozemtsev system to the field theory mass parameters. This
result is a particular instance of a more general correspondence between
crystallographic elliptic Calogero-Moser systems and Seiberg-Witten integrable
systems, which will be explored in future work.
|
We present a multi-camera 3D pedestrian detection method that does not need
to train using data from the target scene. We estimate pedestrian location on
the ground plane using a novel heuristic based on human body poses and person's
bounding boxes from an off-the-shelf monocular detector. We then project these
locations onto the world ground plane and fuse them with a new formulation of a
clique cover problem. We also propose an optional step for exploiting
pedestrian appearance during fusion by using a domain-generalizable person
re-identification model. We evaluated the proposed approach on the challenging
WILDTRACK dataset. It obtained a MODA of 0.569 and an F-score of 0.78, superior
to state-of-the-art generalizable detection techniques.
|
Inefficient traffic control may cause numerous problems such as traffic
congestion and energy waste. This paper proposes a novel multi-agent
reinforcement learning method, named KS-DDPG (Knowledge Sharing Deep
Deterministic Policy Gradient) to achieve optimal control by enhancing the
cooperation between traffic signals. By introducing the knowledge-sharing
enabled communication protocol, each agent can access the collective
representation of the traffic environment collected by all agents. The proposed
method is evaluated through two experiments respectively using synthetic and
real-world datasets. The comparison with state-of-the-art reinforcement
learning-based and conventional transportation methods demonstrate the proposed
KS-DDPG has significant efficiency in controlling large-scale transportation
networks and coping with fluctuations in traffic flow. In addition, the
introduced communication mechanism has also been proven to speed up the
convergence of the model without significantly increasing the computational
burden.
|
We analyze reflection positive representations in terms of positive Hankel
operators. This is motivated by the fact that positive Hankel operators are
described in terms of their Carleson measures, whereas the compatibility
condition between representations and reflection positive Hilbert spaces is
quite intricate. This leads us to the concept of a Hankel positive
representation of triples $(G,S,\tau)$, where $G$ is a group, $\tau$ an
involutive automorphism of $G$ and $S \subseteq G$ a subsemigroup with $\tau(S)
= S^{-1}$. For the triples $(\mathbb Z,\mathbb N,-id_{\mathbb Z})$,
corresponding to reflection positive operators, and $(\mathbb R,\mathbb
R_+,-id_{\mathbb R})$, corresponding to reflection positive one-parameter
groups, we show that every Hankel positive representation can be made
reflection positive by a slight change of the scalar product. A key method
consists in using the measure $\mu_H$ on $\mathbb R_+$ defined by a positive
Hankel operator $H$ on $H^2(\mathbb C_+)$ to define a Pick function whose
imaginary part, restricted to the imaginary axis, provides an operator symbol
for $H$.
|
The goal of metric learning is to learn a function that maps samples to a
lower-dimensional space where similar samples lie closer than dissimilar ones.
Particularly, deep metric learning utilizes neural networks to learn such a
mapping. Most approaches rely on losses that only take the relations between
pairs or triplets of samples into account, which either belong to the same
class or two different classes. However, these methods do not explore the
embedding space in its entirety. To this end, we propose an approach based on
message passing networks that takes all the relations in a mini-batch into
account. We refine embedding vectors by exchanging messages among all samples
in a given batch allowing the training process to be aware of its overall
structure. Since not all samples are equally important to predict a decision
boundary, we use an attention mechanism during message passing to allow samples
to weigh the importance of each neighbor accordingly. We achieve
state-of-the-art results on clustering and image retrieval on the CUB-200-2011,
Cars196, Stanford Online Products, and In-Shop Clothes datasets. To facilitate
further research, we make available the code and the models at
https://github.com/dvl-tum/intra_batch_connections.
|
The Transformer model is widely used in natural language processing for
sentence representation. However, previous Transformer-based models focus
on function words that have limited meaning in most cases and can merely
extract high-level semantic abstraction features. In this paper, two approaches
are introduced to improve the performance of Transformers. We calculated the
attention score by multiplying the part-of-speech weight vector with the
correlation coefficient, which helps extract the words with more practical
meaning. The weight vector is obtained by the input text sequence based on the
importance of the part-of-speech. Furthermore, we fuse the features of each
layer to make the sentence representation results more comprehensive and
accurate. In experiments, we demonstrate the effectiveness of our model
Transformer-F on three standard text classification datasets. Experimental
results show that our proposed model significantly boosts the performance of
text classification compared to the baseline model. Specifically, we obtain
a 5.28% relative improvement over the vanilla Transformer on simple tasks.
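A hedged sketch of how a part-of-speech weight might modulate attention (the multiplicative placement and all names are illustrative assumptions, not the paper's exact formulation):

import torch
import torch.nn.functional as F

def pos_weighted_attention(q, k, v, pos_weights):
    # q, k, v: (seq, dim); pos_weights: (seq,) per-token weight reflecting the
    # importance of each token's part of speech (e.g., nouns over particles).
    scores = q @ k.t() / k.shape[-1] ** 0.5      # standard scaled dot-product scores
    scores = scores * pos_weights.unsqueeze(0)   # re-weight keys by POS importance
    return F.softmax(scores, dim=-1) @ v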
|
The anisotropic optical response of the layered, nodal-line semimetal ZrSiS
at ambient and high pressure is investigated by frequency-dependent
reflectivity measurements for the polarization along and perpendicular to the
layers. The highly anisotropic optical conductivity is in very good agreement
with results from density functional theory calculations and confirms the
anisotropic character of ZrSiS. Whereas the in-plane optical conductivity shows
only modest pressure-induced changes, we found strong effects on the
out-of-plane optical conductivity spectrum of ZrSiS, with the appearance of two
prominent excitations. These pronounced pressure-induced effects can neither be
attributed to a structural phase transition according to our single-crystal
x-ray diffraction measurements, nor can they be explained by electronic
correlation and electron-hole pairing effects, as revealed by theoretical
calculations. Our findings are discussed in the context of the recently
proposed excitonic insulator phase in ZrSiS.
|
Thanks to the availability of powerful computing resources, big data and deep
learning algorithms, we have made great progress on computer vision in the last
few years. Computer vision systems begin to surpass humans in some tasks, such
as object recognition, object detection, face recognition and pose estimation.
Many computer vision algorithms have been deployed in real-world
applications and have started to improve our quality of life. However, big data
and labels are not always available. Sometimes only very limited labeled data
is available, such as medical images, which require experts to label them. In
this paper, we study few-shot image classification, in which only very few
labeled examples are available. Machine learning with little data is a major
challenge. To tackle
this challenge, we propose two methods and test their effectiveness thoroughly.
One method is to augment image features by mixing the style of these images.
The second method is applying spatial attention to explore the relations
between patches of images. We also find that domain shift is a critical issue
in few-shot learning when the training domain and testing domain are different.
So we propose a more realistic cross-domain few-shot learning with unlabeled
data setting, in which some unlabeled data is available in the target domain.
We propose two methods in this setting. Our first method transfers the style
information of the unlabeled target dataset to the samples in the source
dataset and trains a model with stylized images and original images. Our second
method proposes a unified framework to fully utilize all the data. Both of our
methods surpass the baseline method by a large margin.
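As an illustration of the first method, style augmentation by mixing per-channel feature statistics (in the spirit of MixStyle) could look roughly as follows; the assumption that "style" equals channel-wise mean and standard deviation is ours, not necessarily the paper's exact recipe:

import torch

def mix_styles(x, alpha=0.1):
    # x: (batch, channels, h, w) feature maps; style = per-channel mean/std.
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + 1e-6
    perm = torch.randperm(x.size(0))              # partner image for each sample
    lam = torch.distributions.Beta(alpha, alpha).sample((x.size(0), 1, 1, 1))
    mu_mix = lam * mu + (1 - lam) * mu[perm]      # interpolate style statistics
    sigma_mix = lam * sigma + (1 - lam) * sigma[perm]
    return sigma_mix * (x - mu) / sigma + mu_mix  # re-normalize with mixed style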
|
We study the temporal evolution of the circuit complexity after the local
quench where two harmonic chains are suddenly joined, choosing the initial
state as the reference state. We discuss numerical results for the complexity
for the entire chain and the subsystem complexity for a block of consecutive
sites, obtained by exploiting the Fisher information geometry of the covariance
matrices. The qualitative behaviour of the temporal evolutions of the subsystem
complexity depends on whether the joining point is inside the subsystem. The
revivals and a logarithmic growth observed during these temporal evolutions are
discussed. When the joining point is outside the subsystem, the temporal
evolutions of the subsystem complexity and of the corresponding entanglement
entropy are qualitatively similar.
|
The $SU(N)$--invariant matrix model potential is written as a sum of squares
with only four frequencies (whose multiplicities and simple $N$--dependence are
calculated).
|
Codes based on sparse matrices have good performance and can be efficiently
decoded by belief-propagation (BP). Decoding binary stabilizer codes needs a
quaternary BP for (additive) codes over GF(4), which has a higher check-node
complexity compared to a binary BP for codes over GF(2). Moreover, BP decoding
of stabilizer codes suffers a performance loss from the short cycles in the
underlying Tanner graph. In this paper, we propose a refined BP algorithm for
decoding quantum codes by passing scalar messages. For a given error syndrome,
this algorithm decodes to the same output as the conventional quaternary BP but
with a check-node complexity the same as binary BP. As every message is a
scalar, the message normalization can be naturally applied to improve the
performance. Another observation is that the message-update schedule affects
the BP decoding performance against short cycles. We show that running BP with
message normalization according to a serial schedule (or other schedules) may
significantly improve the decoding performance and error floor in computer
simulations.
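For intuition only, a generic normalized min-sum-style check-node update with scalar messages might look like this (it assumes nonzero LLR messages and is not the paper's exact update rule):

import numpy as np

def check_node_update(incoming, syndrome_bit, norm=0.8):
    # incoming: scalar LLR messages from the neighboring variable nodes.
    # Sign comes from the syndrome parity; magnitude from the smallest
    # reliability among the other messages, scaled by a normalization factor.
    sign_all = (-1) ** syndrome_bit * np.prod(np.sign(incoming))
    out = np.empty_like(incoming, dtype=float)
    for i in range(len(incoming)):
        others = np.delete(incoming, i)
        out[i] = sign_all * np.sign(incoming[i]) * norm * np.min(np.abs(others))
    return out

A serial (layered) schedule then applies such updates check by check, propagating each update immediately instead of in one synchronous sweep.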
|
Structural and magnetic transitions in a double perovskite hosting $5d^1$ Re
ions are discussed on the basis of recently published high-resolution x-ray
diffraction patterns [D. Hirai, et al., Phys. Rev. Res. 2, 022063(R) (2020)]. A
reported structural transition below room temperature, from cubic to tetragonal
symmetry, appears not to be driven by $T_{2g}$-type quadrupoles, as suggested. A
magnetic motif at lower temperature is shown to be composed of two order
parameters, associated with propagation vectors k = (0, 0, 1) and k = (0, 0,
0). Findings from our studies, for structural and magnetic properties of
Ba$_2$MgReO$_6$, surface in predicted amplitudes for x-ray diffraction at the
rhenium $L_2$ and $L_3$ absorption edges, and magnetic neutron Bragg diffraction. Specifically,
entanglement of anapole and spatial degrees of freedom creates a quadrupole in
the neutron scattering amplitude. It would be excluded in an unexpected
scenario whereby the rhenium atomic state is a manifold. Also, a chiral
signature visible in resonant x-ray diffraction will be one consequence of
predicted electronic quadrupole and magnetic dipole orders. A model Re wave
function consistent with all current knowledge is a guide to electronic and
magnetic multipoles engaged in x-ray and neutron diffraction investigations.
|
Aiming at facilitating a real-world, ever-evolving and scalable autonomous
driving system, we present a large-scale dataset for standardizing the
evaluation of different self-supervised and semi-supervised approaches by
learning from raw data; it is the first and largest such dataset to date.
Existing autonomous driving systems heavily rely on `perfect' visual perception
models (i.e., detection) trained using extensive annotated data to ensure
safety. However, it is unrealistic to elaborately label instances of all
scenarios and circumstances (i.e., night, extreme weather, cities) when
deploying a robust autonomous driving system. Motivated by recent advances of
self-supervised and semi-supervised learning, a promising direction is to learn
a robust detection model by collaboratively exploiting large-scale unlabeled
data and few labeled data. Existing datasets either provide only a small amount
of data or cover limited domains with full annotation, hindering the
exploration of large-scale pre-trained models. Here, we release a Large-Scale
2D Self/semi-supervised Object Detection dataset for Autonomous driving, named
as SODA10M, containing 10 million unlabeled images and 20K images labeled with
6 representative object categories. To improve diversity, the images are
collected over 27833 driving hours under different weather conditions, time
periods, and location scenes in 32 different cities. We provide extensive
experiments and deep analyses of existing popular self/semi-supervised
approaches, and report several interesting findings within the scope of autonomous driving.
Experiments show that SODA10M can serve as a promising pre-training dataset for
different self-supervised learning methods, which gives superior performance
when fine-tuning with different downstream tasks (i.e., detection,
semantic/instance segmentation) in the autonomous driving domain. More
information is available at https://soda-2d.github.io.
|
In this paper, we propose a novel convolutional neural network model to
extract highly precise depth maps from missing viewpoints, especially well
suited to generating holographic 3D content. The depth map is an essential
element for the phase extraction required to synthesize a computer-generated
hologram (CGH). The proposed model, called HDD Net, uses MSE as its loss
function for better depth-map estimation, and applies bilinear interpolation
in the upsampling layers with ReLU as the activation function. We design and
prepare a total of 8,192 multi-view images, each with a resolution of 640 by
360, for the deep learning study. The proposed model estimates depth maps
through feature extraction and upsampling. For quantitative
assessment, we compare the estimated depth maps with the ground truths by using
the PSNR, ACC, and RMSE. We also compare the CGH patterns made from estimated
depth maps with ones made from ground truths. Furthermore, we present
experimental results that test the quality of the estimated depth maps by
directly reconstructing holographic 3D image scenes from the CGHs.
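A minimal sketch of the kind of upsampling block described above (bilinear interpolation followed by convolution and ReLU); the layer sizes are illustrative assumptions, not the actual HDD Net architecture:

import torch.nn as nn

class UpBlock(nn.Module):
    # Bilinear upsampling followed by a convolution and ReLU activation,
    # a common building block in depth-estimation decoders.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(self.up(x)))

# Training would pair such a decoder with the MSE loss on the predicted depth:
# loss = nn.MSELoss()(predicted_depth, ground_truth_depth)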
|
We study propagation effects due to the finite speed of light in ionization
of extended systems. We present a general quantitative theory of these effects
and show under which conditions such effects should appear. The finite speed of
light propagation effects are encoded in the non-dipole terms of the
time-dependent Schr\"odinger equation and manifest themselves in the
photoelectron momentum distribution projected on the molecular axis. Our
numerical modeling for the \Hp molecular ion and the \Ne dimer shows that the
finite light propagation time from one atomic center to another can be
accurately determined in a table-top laser experiment, which is much more
readily affordable than an earlier synchrotron measurement by Grundmann {\em et
al.} [Science 370, 339 (2020)].
|
Chest X-rays are an important and accessible clinical imaging tool for the
detection of many thoracic diseases. Over the past decade, deep learning, with
a focus on the convolutional neural network (CNN), has become the most powerful
computer-aided diagnosis technology for improving disease identification
performance. However, training an effective and robust deep CNN usually
requires a large amount of data with high annotation quality. For chest X-ray
imaging, annotating large-scale data requires professional domain knowledge and
is time-consuming. Thus, existing public chest X-ray datasets usually adopt
language-pattern-based methods to automatically mine labels from reports.
However, this results in label uncertainty and inconsistency. In this paper, we
propose many-to-one distribution learning (MODL) and K-nearest neighbor
smoothing (KNNS) methods from two perspectives to improve a single model's
disease identification performance, rather than focusing on an ensemble of
models. MODL integrates multiple models to obtain a soft label distribution for
optimizing the single target model, which can reduce the effects of original
label uncertainty. Moreover, KNNS aims to enhance the robustness of the target
model to provide consistent predictions on images with similar medical
findings. Extensive experiments on the public NIH Chest X-ray and CheXpert
datasets show that our model achieves consistent improvements over the
state-of-the-art methods.
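A hedged sketch of the many-to-one idea: several teacher models yield a soft label distribution that supervises a single target model (the sigmoid averaging and loss mixing below are illustrative assumptions, not the exact MODL objective):

import torch
import torch.nn.functional as F

def modl_style_loss(student_logits, teacher_logits_list, mined_labels, alpha=0.5):
    # Soft targets: average the teachers' per-finding probabilities, which
    # smooths out the uncertainty of the original text-mined labels.
    soft = torch.stack([torch.sigmoid(t) for t in teacher_logits_list]).mean(dim=0)
    kd = F.binary_cross_entropy_with_logits(student_logits, soft)
    ce = F.binary_cross_entropy_with_logits(student_logits, mined_labels.float())
    return alpha * kd + (1 - alpha) * ce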
|
Fluorescence microscopy images contain several channels, each indicating a
marker staining the sample. Since many different marker combinations are
utilized in practice, it has been challenging to apply deep learning based
segmentation models, which expect a predefined channel combination for all
training samples as well as at inference for future application. Recent work
circumvents this problem using a modality attention approach to be effective
across any possible marker combination. However, for combinations that do not
exist in a labeled training dataset, one cannot have any estimation of
potential segmentation quality if that combination is encountered during
inference. Without this, one not only lacks quality assurance but also does
not know where to direct additional imaging and labeling effort. We herein
propose a method to estimate segmentation quality on unlabeled images by (i)
estimating both aleatoric and epistemic uncertainties of convolutional neural
networks for image segmentation, and (ii) training a Random Forest model for
the interpretation of uncertainty features via regression to their
corresponding segmentation metrics. Additionally, we demonstrate that including
these uncertainty measures during training can provide an improvement on
segmentation performance.
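Step (ii) admits a compact sketch (feature names and the random data below are placeholders; scikit-learn's RandomForestRegressor is assumed as the Random Forest implementation):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: summary statistics of one image's aleatoric/epistemic uncertainty
# maps (e.g., mean, variance, fraction of high-uncertainty pixels).
uncertainty_features = np.random.rand(200, 6)   # placeholder training features
dice_scores = np.random.rand(200)               # matching segmentation metrics

rf = RandomForestRegressor(n_estimators=100).fit(uncertainty_features, dice_scores)
predicted_quality = rf.predict(uncertainty_features[:5])   # quality estimates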
|
Features of the response of pickup ions (PUIs) to the Kelvin--Helmholtz
instability (KHI) on the heliopause (HP) are examined by means of
two-dimensional hybrid simulations. We assume the supersonic neutral solar wind
as the source of PUIs gyrating about the magnetic field in the outer
heliosheath. These PUIs become energetic neutral atoms (ENAs) via charge
exchange with interstellar hydrogen, and a portion of these ENAs are detected
by spacecraft such as the Interstellar Boundary Explorer (IBEX). To evaluate
the possibility of identifying the KHI on HP from ENA observations, we assume
that an imprint of the KHI may be displayed in spatial and temporal variations
in the observed ENA profile. As an alternative to ENA, the column density of
PUIs integrated across the HP is calculated. The KH-inducing vortex forces not
only background protons but also PUIs to roll up deep in the inner heliosheath.
The KH vortex also results in the emission of magnetosonic pulses that sweep
PUIs in the outer heliosheath and lead to their local confinement. These
effects elongate the spatial distribution of PUIs in the direction normal to
the HP. The appearance of a locally confined structure in the PUI column
density is consequently confirmed, and this feature can be regarded as a
signature of the KHI evolution. Although the simulation cannot be quantitatively compared with the
observations currently available because its resolution is too small, we expect
that the derived properties will be useful for diagnosing the nature of HP
fluctuation in future missions.
|
Carpet-type structures constitute an ideal laboratory to study and analyze
the robustness, against the harmful effects of decoherence, of the
interference process that underlies this phenomenon. Here, without loss of
generality and for simplicity, we consider a particle of mass m described by a
localized state corresponding to the ground state of a square box of width w,
which is released inside a wider cavity (of width L > w). The effects of
decoherence are then numerically investigated by means of
a simple dynamical model that captures the essential features of the phenomenon
under Markovian conditions, leaving aside extra complications associated with a
more detailed dynamical description of the system-environment interaction. As
is shown, this model reproduces the fact that decoherence effects are
stronger as energy levels become more separated in energy, which translates
into a progressive collapse of the density matrix in the energy representation
onto its main diagonal. However, because energy dissipation is not
considered, an analogous behavior is not observed in the position
representation, where a proper spatial localization of the probability density
does not take place; rather, a delocalized distribution persists. This result
emphasizes the fact that classicality is reached only if both decoherence and
dissipation coexist; otherwise, non-classical traits might still persist.
Actually, as is also shown, in the position representation some off-diagonal
correlations indeed survive unless an additional spatial-type factor is
included in the model. This makes evident the rather complex nature of the
decoherence phenomenon and hence the importance of understanding how it
manifests in different representations, particularly for the purpose of
determining and designing reliable control mechanisms.
|
Unmanned Aerial Vehicle (UAV) has already demonstrated its potential in many
civilian applications, and fa\c{c}ade inspection is among the most
promising ones. In this paper, we focus on enabling the autonomous perception
and control of a small UAV for a fa\c{c}ade inspection task. Specifically, we
consider the perception as a planar object pose estimation problem by
simplifying the building structure as concatenation of planes, and the control
as an optimal reference tracking control problem. First, a vision-based
adaptive observer is proposed which can realize stable plane pose estimation
under very mild observation conditions. Second, a model predictive controller
is designed to achieve stable tracking and smooth transition in a multi-plane
scenario, while the persistent excitation (PE) condition of the observer and
the maneuver constraints of the UAV are satisfied. The proposed autonomous
plane pose estimation and plane tracking methods are tested in both simulation
and practical building fa\c{c}ade inspection scenarios, which demonstrate
their effectiveness and practicability.
|
It is well known that combining life annuities and death benefits introduces
opposite effects in payments with respect to the mortality risk on the lifetime
of the insured. In a general multi-state framework with multiple product types,
such joint effects are less trivial. In this paper, we consider a multivariate
payment process in multi-state life insurance, where the components are defined
in terms of the same Markovian state process. The multivariate present value of
future payments is introduced, and we derive differential equations and product
integral representations of its conditional moments and moment generating
function. Special attention is given to pairwise covariances between two
present values, where results closely connected to Hattendorff-type results
for the variance are derived. The results are illustrated in a numerical
example in a disability model.
|
We propose a new algorithm for joint dereverberation and blind source
separation (DR-BSS). Our work builds upon the ILRMA-T framework that applies a
unified filter combining dereverberation and separation. One drawback of this
framework is that it requires several matrix inversions, an operation
inherently costly and with potential stability issues. We leverage the recently
introduced iterative source steering (ISS) updates to propose two algorithms
mitigating this issue. Albeit derived from first principles, the first
algorithm turns out to be a natural combination of weighted prediction error
(WPE) dereverberation and ISS-based BSS, applied alternatingly. In this case,
we manage to reduce the number of matrix inversions to only one per iteration
and source. The second algorithm updates the ILRMA-T matrix using only
sequential ISS updates requiring no matrix inversion at all. Its implementation
is straightforward and memory efficient. Numerical experiments demonstrate that
both methods achieve the same final performance as ILRMA-T in terms of several
relevant objective metrics. In the important case of two sources, the number of
iterations required is also similar.
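For flavor, an inversion-free ISS-style rank-1 update for one source k in a single frequency bin might be sketched as follows (the ILRMA-T source model and dereverberation taps are omitted; names are illustrative):

import numpy as np

def iss_update(Y, r, k):
    # Y: (n_src, n_frames) complex source estimates; r: (n_src, n_frames)
    # nonnegative source variances (e.g., from an NMF model).
    # Rank-1 update that removes source k's contribution from all estimates
    # without any matrix inversion.
    yk = Y[k]
    v = np.empty(Y.shape[0], dtype=complex)
    for j in range(Y.shape[0]):
        den = np.mean(np.abs(yk) ** 2 / r[j])
        if j == k:
            v[j] = 1.0 - 1.0 / np.sqrt(den)
        else:
            v[j] = np.mean(Y[j] * np.conj(yk) / r[j]) / den
    return Y - np.outer(v, yk)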
|
We propose HyperDynamics, a dynamics meta-learning framework that conditions
on an agent's interactions with the environment and optionally its visual
observations, and generates the parameters of neural dynamics models based on
inferred properties of the dynamical system. Physical and visual properties of
the environment that are not part of the low-dimensional state yet affect its
temporal dynamics are inferred from the interaction history and visual
observations, and are implicitly captured in the generated parameters. We test
HyperDynamics on a set of object pushing and locomotion tasks. It outperforms
existing dynamics models in the literature that adapt to environment variations
by learning dynamics over high dimensional visual observations, capturing the
interactions of the agent in recurrent state representations, or using
gradient-based meta-optimization. We also show that our method matches the
performance of an ensemble of separately trained experts, while also being able
to generalize well to unseen environment variations at test time. We attribute
its good performance to the multiplicative interactions between the inferred
system properties -- captured in the generated parameters -- and the
low-dimensional state representation of the dynamical system.
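A minimal sketch of the parameter-generation idea (a conditioning network emits the weights of a small dynamics model; dimensions and module names are assumptions for illustration, not the HyperDynamics architecture):

import torch
import torch.nn as nn

class HyperDynamicsSketch(nn.Module):
    # Maps an inferred system embedding to the parameters of a one-layer
    # dynamics model that predicts the next state from (state, action).
    def __init__(self, embed_dim, state_dim, act_dim):
        super().__init__()
        self.in_dim, self.state_dim = state_dim + act_dim, state_dim
        self.hyper = nn.Linear(embed_dim, self.in_dim * state_dim + state_dim)

    def forward(self, embedding, state, action):
        p = self.hyper(embedding)                         # generated parameters
        W = p[: self.in_dim * self.state_dim].view(self.state_dim, self.in_dim)
        b = p[self.in_dim * self.state_dim :]
        return torch.cat([state, action]) @ W.t() + b     # predicted next state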
|
We give on the 2-torus a characterization of smooth conjugacy of special
Anosov endomorphisms with their linearizations in terms of the regularity of
the stable and unstable foliations. This regularity condition is the uniform
bounded density (UBD) property, which is the uniform version of absolute
continuity for foliations.
|
Several high energy $e^{+}e^{-}$ colliders are proposed as Higgs factories by
the international high energy physics community. One of the most important
goals of these projects is to study the Higgs properties, such as its
couplings, mass, width, and production rate, with unprecedented precision.
Precision studies of the Higgs boson should be the priority and drive the
design and optimization of detectors. A global analysis approach based on the
multinomial distribution and Machine Learning techniques is proposed to realize
an ``end-to-end'' analysis and to enhance the precision of all accessible decay
branching fractions of the Higgs significantly. A proof-of-principle Monte
Carlo simulation study is performed to show the feasibility. This approach
shows that the statistical uncertainties of all branching fractions are
proportional to a single parameter, which can be used as a metric to optimize
the detector design, reconstruction algorithms, and data analyses. In the Higgs
factories, the global analysis approach is valuable both to the Higgs
measurements and to detector R&D, because it has the potential for superior
precision and reduces detector optimization to a single metric.
|
A Kempe swap in a proper coloring interchanges the colors on some maximal
connected 2-colored subgraph. Two $k$-colorings are $k$-equivalent if we can
transform one into the other using Kempe swaps. The triangulated toroidal grid,
$T[m\times n]$, is formed from (a toroidal embedding of) the Cartesian product
of $C_m$ and $C_n$ by adding parallel diagonals inside all 4-faces. Mohar and
Salas showed that not all 4-colorings of $T[m\times n]$ are 4-equivalent. In
contrast, Bonamy, Bousquet, Feghali, and Johnson showed that all 6-colorings of
$T[m\times n]$ are 6-equivalent. They asked whether the same is true for
5-colorings. We answer their question affirmatively when $m,n\ge 6$. Further,
we show that if $G$ is 6-regular with a toroidal embedding where every
non-contractible cycle has length at least 7, then all 5-colorings of $G$ are
5-equivalent. Our results relate to the antiferromagnetic Potts model in
statistical mechanics.
|
Analytic continuation from Minkowski space to $(2,2)$ split signature
spacetime has proven to be a powerful tool for the study of scattering
amplitudes. Here we show that, under this continuation, null infinity becomes
the product of a null interval with a celestial torus (replacing the celestial
sphere) and has only one connected component. Spacelike and timelike infinity
are time-periodic quotients of AdS$_3$. These three components of infinity
combine to an $S^3$ represented as a toric fibration over the interval.
Privileged scattering states of scalars organize into $SL(2,\mathbb{R})_L
\times SL(2,\mathbb{R})_R$ conformal primary wave functions and their
descendants with real integral or half-integral conformal weights, giving the
normally continuous scattering problem a discrete character.
|
In this article, we perform a sensitivity study of an unbinned angular
analysis of the $B\to D^*\ell \nu_\ell$ decay, including the contributions from
the right-handed current. We show that the angular observable can constrain
the right-handed current very strongly without the intervention of the as yet
unsolved $V_{cb}$ puzzle.
|
We describe a novel application of the end-to-end deep learning technique to
the task of discriminating top quark-initiated jets from those originating from
the hadronization of a light quark or a gluon. The end-to-end deep learning
technique combines deep learning algorithms and low-level detector
representation of the high-energy collision event. In this study, we use
low-level detector information from the simulated CMS Open Data samples to
construct the top jet classifiers. To optimize classifier performance, we
progressively add low-level information from the CMS tracking detector,
including pixel detector reconstructed hits and impact parameters, and
demonstrate the value of additional tracking information even when no new
spatial structures are added. Relying only on calorimeter energy deposits and
reconstructed pixel detector hits, the end-to-end classifier achieves an AUC
score of 0.975$\pm$0.002 for the task of classifying boosted top quark jets.
After adding derived track quantities, the classifier AUC score increases to
0.9824$\pm$0.0013, serving as the first performance benchmark for these CMS
Open Data samples. We additionally provide a timing performance comparison of
different processor unit architectures for training the network.
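As a loose illustration of the end-to-end idea (detector hits rasterized into image channels and fed directly to a CNN; the toy architecture below is our assumption, not the paper's network):

import torch.nn as nn

# Toy end-to-end classifier: calorimeter deposits and pixel-hit maps become
# image channels, and the network learns directly from this low-level view.
model = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),   # top-jet vs. light-jet/gluon logit
)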
|
We study the thermodynamic properties of topological Josephson junctions
using a quantum spin Hall (QSH) insulator-based junction as an example. In
particular, we propose that phase-dependent measurements of the heat capacity
offer an alternative to Josephson-current measurements to demonstrate key
topological features. Even in an equilibrium situation, where the fermion
parity is not conserved, the heat capacity exhibits a pronounced double peak in
its phase dependence as a signature of the protected zero-energy crossing in
the Andreev spectrum. This double-peak feature is robust against changes of the
tunneling barrier and thus allows one to distinguish between topological and
trivial junctions. At short time scales fermion parity is conserved and the
heat capacity is $4\pi$-periodic in the superconducting phase difference. We
propose a dispersive setup coupling the Josephson junction to a tank LC circuit
to measure the heat capacity of the QSH-based Josephson junction sufficiently
fast to detect the $4\pi$-periodicity. Although explicitly calculated for a
short QSH-based Josephson junction, our results are also applicable to long as
well as nanowire-based topological Josephson junctions.
|
We compare the ground-state features of alternating ferrimagnetic chains
$(1/2, S)$ with $S=1,3/2,2,5/2$ in a magnetic field and the corresponding
Holstein-Primakoff bosonic models up to order $\sqrt{s/S}$, with $s=1/2$,
considering the fully polarized magnetization as the boson vacuum. The
single-particle Hamiltonian is a Rice-Mele model with uniform hopping and
modified boundaries, while the interactions have a correlated
(density-dependent) hopping term and magnon-magnon repulsion. The
magnon-magnon repulsion increases the many-magnon energy and the
density-dependent hopping decreases the kinetic energy. We use density matrix
renormalization group calculations to investigate the effects of these two
interaction terms in the bosonic model, and display the quantitative agreement
between the results from the spin model and the full bosonic approximation. In
particular, we verify the good agreement in the behavior of the edge states,
associated with the ferrimagnetic plateau, from the spin and from the bosonic
models. Furthermore, we show that the boundary magnon density strongly depends
on the interactions and particle statistics.
|
Stephenson~(2018) established annealed local convergence of Boltzmann planar
maps conditioned to be large. The present work uses results on rerooted
multi-type branching trees to prove a quenched version of this limit.
|
We present the open-source Bayesian Atmospheric Radiative Transfer (BART)
retrieval package, which produces estimates and uncertainties for an
atmosphere's thermal profile and chemical abundances from observations. Several
BART components are also stand-alone packages, including the parallel
Multi-Core Markov chain Monte Carlo (MC3), which implements several Bayesian
samplers; a line-by-line radiative-transfer model, transit; a code that
calculates Thermochemical Equilibrium Abundances, TEA; and a test suite for
verifying radiative-transfer and retrieval codes, BARTTest. The codes are in
Python and C. BART and TEA are under a Reproducible Research (RR) license,
which requires reviewed-paper authors to publish a compendium of all inputs,
codes, and outputs supporting the paper's scientific claims. BART and TEA
produce the compendium's content. Otherwise, these codes are under permissive
open-source terms, as are MC3 and BARTTest, for any purpose. This paper
presents an overview of the code, BARTTest, and an application to eclipse data
for exoplanet HD 189733 b. Appendices address RR methodology for accelerating
science, a reporting checklist for retrieval papers, the spectral resolution
required for synthetic tests, and a derivation of the effective sample size
required to estimate any Bayesian posterior distribution to a given precision,
which determines how many iterations to run. Paper II, by Cubillos et al.,
presents the underlying radiative-transfer scheme and an application to transit
data for exoplanet HAT-P-11b. Paper III, by Blecic et al., discusses the
initialization and post-processing routines, with an application to eclipse
data for exoplanet WASP-43b. We invite the community to use and improve BART
and its components at http://GitHub.com/ExOSPORTS/BART/.
|
We study a class of power-law stored energy functions for spherically
symmetric elastic bodies that includes well-known material models, such as the
Saint Venant-Kirchhoff, Hadamard, Signorini and John models. We identify a
finite subclass of these stored energy functions, which we call Lam\'e type,
that depend on no more material parameters than the bulk modulus $\kappa>0$ and
the Poisson ratio $-1<\nu\leq1/2$. A general theorem proving the existence of
static self-gravitating elastic balls for some power-law materials has been
given elsewhere. In this paper numerical evidence is provided that some
hypotheses in this theorem are necessary, while others are not.
|
A polycentric approach to ecosystem service (ES) governance that combines
individual incentives for interdependent ES providers with collective action is
a promising lever to overcome the decline in ES and generate win-win solutions
in agricultural landscapes. In this study, we explored the effectiveness of
such an approach by focusing on incentives for managed pollination targeting
either beekeepers or farmers who were either in communication with each other
or not. We used a stylized bioeconomic model to simulate (i) the mutual
interdependency through pollination in intensive agricultural landscapes and
(ii) the economic and ecological impacts of introducing two beekeeping
subsidies and one pesticide tax. The findings showed that incentives generated
a spillover effect, affecting not only targeted stakeholders but non-targeted
stakeholders as well as the landscape, and that this effect was amplified by
communication. However, none of the simulated types of polycentric ES
governance proved sustainable overall: subsidies showed excellent economic but
low environmental performance, while the tax led to economic losses but was
beneficial for the landscape. Based on these results, we identified three
conditions for sustainable ES governance based on communication between
stakeholders and incentives: (i) strong mutual interdependency (i.e. few
alternatives exist for stakeholders), (ii) the benefits of communication
outweigh the costs, and (iii) the incentivized ES drivers are not detrimental
to other ES. Further research is needed to systematize which combinations of
individual payments and collaboration are sustainable under which conditions.
|
Inductive logic programming (ILP) is a form of logic-based machine learning.
The goal is to induce a hypothesis (a logic program) that generalises given
training examples. As ILP turns 30, we review the last decade of research. We
focus on (i) new meta-level search methods, (ii) techniques for learning
recursive programs, (iii) new approaches for predicate invention, and (iv) the
use of different technologies. We conclude by discussing current limitations of
ILP and directions for future research.
|
Diffusion of atoms in solids is one of the most fundamental kinetic processes
that ultimately governs many materials properties. Here, we report on a
combined first-principles and kinetic Monte Carlo study of macroscopic
diffusion properties of disordered Ti-Ta alloys over the entire composition
range. Using simple cluster expansion model Hamiltonians parametrized on
density functional theory data, we compute transport properties explicitly
including local interactions between the two atomic species and compare them
with the non-interacting diffusion model for disordered, random alloys.
Surprisingly, we find that although these alloys thermodynamically behave as
nearly random solid solutions, their kinetic properties deviate significantly
from the behavior predicted by diffusion models for non-interacting systems. We
attribute these differences in transport properties to the local interactions
that create a rather corrugated potential energy landscape and consequently
give rise to energetically non-degenerate end-states of diffusion processes
which cannot be realized in a non-interacting disordered or other simpler
diffusion models. The findings emphasize the limitations of the widely known
non-interacting disordered diffusion model for such systems. Furthermore, we
explain that changes in mobility in these alloys are predominantly due to
changes in the correlation factor caused by the local interactions. Our work
thus highlights the importance of explicitly including local interactions when
assessing the transport properties of thermodynamically nearly disordered
alloys.
|
In the context of collaborative robotics, distributed situation awareness is
essential for supporting collective intelligence in teams of robots and human
agents where it can be used for both individual and collective decision
support. This is particularly important in applications pertaining to emergency
rescue and crisis management. During operational missions, data and knowledge
are gathered incrementally and in different ways by heterogeneous robots and
humans. We describe this as the creation of \emph{Hastily Formed Knowledge
Networks} (HFKNs). The focus of this paper is the specification and prototyping
of a general distributed system architecture that supports the creation of
HFKNs by teams of robots and humans. The information collected ranges from
low-level sensor data to high-level semantic knowledge, the latter represented
in part as RDF Graphs. The framework includes a synchronization protocol and
associated algorithms that allow for the automatic distribution and sharing of
data and knowledge between agents. This is done through the distributed
synchronization of RDF Graphs shared between agents. High-level semantic
queries specified in SPARQL can be used by robots and humans alike to acquire
both knowledge and data content from team members. The system is empirically
validated and complexity results of the proposed algorithms are provided.
Additionally, a field robotics case study is described, where a 3D mapping
mission has been executed using several UAVs in a collaborative emergency
rescue scenario while using the full HFKN Framework.
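For flavor, querying a shared RDF graph with SPARQL could look like the following rdflib sketch; the namespace and predicate names are invented for illustration and are not the HFKN vocabulary:

from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/hfkn/")   # hypothetical namespace
g = Graph()
g.add((EX.uav1, EX.observed, EX.victim42))
g.add((EX.victim42, EX.locatedAt, Literal("grid-cell-17")))

# Any team member could ask: which entities has uav1 observed, and where?
results = g.query("""
    PREFIX ex: <http://example.org/hfkn/>
    SELECT ?entity ?loc WHERE {
        ex:uav1 ex:observed ?entity .
        ?entity ex:locatedAt ?loc .
    }""")
for entity, loc in results:
    print(entity, loc)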
|
State-of-the-art performance for many emerging edge applications is achieved
by deep neural networks (DNNs). Often, these DNNs are location and time
sensitive, and the parameters of a specific DNN must be delivered from an edge
server to the edge device rapidly and efficiently to carry out time-sensitive
inference tasks. We introduce AirNet, a novel training and analog transmission
method that allows efficient wireless delivery of DNNs. We first train the DNN
with noise injection to counter the wireless channel noise. We also employ
pruning to reduce the channel bandwidth necessary for transmission, and perform
knowledge distillation from a larger model to achieve satisfactory performance,
despite the channel perturbations. We show that AirNet achieves significantly
higher test accuracy compared to digital alternatives under the same bandwidth
and power constraints. It also exhibits graceful degradation with channel
quality, which reduces the requirement for accurate channel estimation.
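A toy sketch of the noise-injection ingredient (a generic AWGN module inserted during training; where the noise is applied and at what SNR are design choices we assume for illustration):

import torch
import torch.nn as nn

class AWGN(nn.Module):
    # Simulates an additive white Gaussian noise channel during training so
    # that what is delivered over the air remains robust to channel noise.
    def __init__(self, snr_db=10.0):
        super().__init__()
        self.snr = 10 ** (snr_db / 10)

    def forward(self, x):
        power = x.pow(2).mean()
        noise = torch.randn_like(x) * torch.sqrt(power / self.snr)
        return x + noise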
|
It is shown that the infinite tower of tree-level soft graviton symmetries in
asymptotically flat 4D quantum gravity can be organized into a single chiral 2D
Kac-Moody symmetry based on the wedge algebra of w(1+infinity). The infinite
towers of soft photon or gluon symmetries also transform irreducibly under
w(1+infinity).
|
We propose Asymptotic Expansion Conjectures of the relative
Reshetikhin-Turaev invariants, of the relative Turaev-Viro invariants and of
the discrete Fourier transforms of the quantum 6j-symbols, and prove them for
families of special cases. The significance of these expansions is that we do
not specify the way that the sequence of the colorings converges to the limit.
As a consequence, the terms in the expansion will have to depend on the index
r, but the dependence is such that the terms are purely geometric
invariants of the metrics on the underlying manifold and only the metrics vary
with r.
|
Quantum cosmology with a quintom dark energy model has been investigated in the
present work using symmetry analysis of the underlying physical system. In the
background of the flat FLRW model, the quintom cosmological model has been
studied using Noether symmetry and the appropriate conserved charge is obtained. The
Wheeler-DeWitt (WD) equation is constructed on the minisuperspace and solutions
are obtained using the conserved charge.
|
We develop a new orbit equivalence framework for holomorphically mating the
dynamics of complex polynomials with that of Kleinian surface groups. We show
that the only torsion-free Fuchsian groups that can be thus mated are punctured
sphere groups. We describe a new class of maps that are topologically orbit
equivalent to Fuchsian punctured sphere groups. We call these higher
Bowen-Series maps. The existence of this class ensures that the Teichm\"uller
space of matings is disconnected. Further, they also show that, unlike in
higher dimensions, topological orbit equivalence rigidity fails for Fuchsian
groups acting on the circle. We also classify the collection of Kleinian Bers'
boundary groups that are mateable in our framework.
|
The kagome lattice, whose electronic valence band (VB) structure includes two
Dirac bands and one flat band, offers a rich space to realise tuneable
topological and strongly correlated electronic phases in two-dimensional (2D)
and layered materials. While strong electron correlations have been observed in
inorganic kagome crystals, they remain elusive in organic systems. The latter
offer versatile synthesis protocols via molecular self-assembly and
metal-ligand coordination. Here, we report the synthesis of a 2D metal-organic
framework (MOF) where di-cyano-anthracene (DCA) molecules form a kagome
arrangement via coordination with copper (Cu) atoms on a silver surface
[Ag(111)]. Low-temperature scanning tunnelling spectroscopy revealed Kondo
screening--by the Ag(111) conduction electrons--of magnetic moments localised
at Cu and DCA sites of the MOF. Density functional theory and mean-field
Hubbard modelling show that these local moments emerge from strong interactions
between kagome MOF electrons, enabling organic 2D materials as viable platforms
for strongly correlated nanoelectronics and spintronics.
|
Compositional generalization is the ability to generalize systematically to a
new data distribution by combining known components. Although humans seem to
have a great ability to generalize compositionally, state-of-the-art neural
models struggle to do so. In this work, we study compositional generalization
in classification tasks and present two main contributions. First, we study
ways to convert a natural language sequence-to-sequence dataset to a
classification dataset that also requires compositional generalization. Second,
we show that providing structural hints (specifically, providing parse trees
and entity links as attention masks for a Transformer model) helps
compositional generalization.
|
Some 6G use cases include augmented reality and high-fidelity holograms, with
this information flowing through the network. Hence, it is expected that 6G
systems can feed machine learning algorithms with such context information to
optimize communication performance. This paper focuses on the simulation of 6G
MIMO systems that rely on a 3-D representation of the environment as captured
by cameras and eventually other sensors. We present new and improved Raymobtime
datasets, which consist of paired MIMO channels and multimodal data. We also
discuss tradeoffs between speed and accuracy when generating channels via
ray-tracing. We finally provide results of beam selection and channel
estimation to assess the impact of the improvements in the ray-tracing
simulation methodology.
|
Several large stellar spectroscopic surveys are producing overwhelming
amounts of data that can be used for determining stellar atmospheric parameters
and chemical abundances. Nonetheless, the accuracy achieved in the derived
astrophysical parameters is still insufficient, mainly because of the paucity
of adequate calibrators, particularly in the metal-poor regime ([Fe/H] $\leq
-$1.0). Here, we introduce the Titans metal-poor reference stars: a sample of
41 dwarf and subgiant stars with accurate parameters. Effective temperatures
(Teff) were derived by fitting observed H$\alpha$ profiles with synthetic lines
computed using 3D hydrodynamic NLTE models. Surface gravities (logg) were
computed using evolutionary tracks and parallaxes from Gaia EDR3. The same
methods recover the Teff values of the Gaia benchmark stars, which are mostly
based on interferometric measurements, with a 1$\sigma$ dispersion of $\pm 50$
K. We assume this to be the accuracy of the H$\alpha$ profiles computed from 3D
non-LTE models for metal-poor dwarfs and subgiants. We achieved an internal
precision of typically 30-40 K, with these errors dominated by instrumental
effects. The final total uncertainties for the Teff values of the Titans are thus
estimated to be of the order of $1\%$. The typical error for logg is $\leq$
0.04 dex. In addition, we identified a few members of Gaia-Enceladus, of
Sequoia, and of the Helmi stream in our sample. These stars can pave the way
for the accurate chemical characterization of these Galactic substructures.
Using the Titans as reference, large stellar surveys will be able to improve
the internal calibration of their astrophysical parameters. Ultimately, this
sample will help users of data from Gaia and large surveys in reaching their
goal of redefining our understanding of stars, stellar systems, and the Milky
Way.
|
We investigate the nonperturbative structure of Jackiw-Teitelboim gravity at
finite cutoff, as given by its proposed formulation in terms of a
$T\bar{T}$-deformed Schwarzian quantum mechanics. Our starting point is a
careful computation of the disk partition function to all orders in the
perturbative expansion in the cutoff parameter. We show that the perturbative
series is asymptotic and that it admits a precise completion exploiting the
analytical properties of its Borel transform, as prescribed by resurgence
theory. The final result is then naturally interpreted in terms of the
nonperturbative branch of the $T\bar{T}$-deformed spectrum. The finite-cutoff
trumpet partition function is computed by applying the same strategy. In the
second part of the paper, we propose an extension of this formalism to
arbitrary topologies, using the basic gluing rules of the undeformed case. The
Weil-Petersson integrations can be safely performed due to the nonperturbative
corrections and give results that are compatible with the flow equation
associated with the $T\bar{T}$ deformation. We derive exact expressions for
general topologies and show that these are captured by a suitable deformation
of the Eynard-Orantin topological recursion. Finally, we study the "slope" and
"ramp" regimes of the spectral form factor as functions of the cutoff
parameter.
|
Given a {features, target} dataset, we introduce an incremental algorithm
that constructs an aggregate regressor, using an ensemble of neural networks.
It is well known that ensemble methods suffer from the multicollinearity issue,
which is the manifestation of redundancy arising mainly due to the common
training-dataset. In the present incremental approach, at each stage we
optimally blend the aggregate regressor with a newly trained neural network
under a convexity constraint which, if necessary, induces negative
correlations. Under this framework, collinearity issues do not arise at all,
rendering the method both accurate and robust.
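A hedged sketch of the incremental blending step (the closed-form line search over the blend weight is an illustrative choice; the paper's exact optimization may differ):

import numpy as np

def blend(aggregate_pred, new_pred, y):
    # Choose w in [0, 1] minimizing ||y - (w*aggregate + (1-w)*new)||^2;
    # the clip enforces the convexity constraint on the blend.
    d = aggregate_pred - new_pred
    denom = np.dot(d, d)
    w = np.dot(y - new_pred, d) / denom if denom > 0 else 1.0
    w = np.clip(w, 0.0, 1.0)
    return w * aggregate_pred + (1 - w) * new_pred, w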
|
We prove a lower bound for the first closed or Neumann nonzero eigenvalue of
the Laplacian on a compact quaternion-K\"ahler manifold in terms of dimension,
diameter, and a scalar curvature lower bound. It is derived as a large-time
implication of the modulus of continuity estimates for solutions of the heat
equation. We also establish a lower bound for the first Dirichlet eigenvalue in
terms of geometric data, via a Laplace comparison theorem for the distance to
the boundary function.
|
We investigate the spherically symmetric gravitational collapse of a thick
matter shell without radiation in Einstein gravity with a cosmological
constant. The orbit of the infalling thick matter is determined by imposing an
equation of state for the matter near the interface, where the pressure,
constituted of the transverse and longitudinal components, is proportional to
the energy density. We present analytic solutions for the equation of state
and discuss the parameter region free from shell-crossing singularities. We
finally show that, adopting the definition presented in arXiv:2005.13233, the
total energy in this time-dependent system is invariant under the given time
evolution.
|