Traditional telesurgery relies on the surgeon's full control of the robot on
the patient's side, which tends to increase surgeon fatigue and may reduce the
efficiency of the operation. This paper introduces a Robotic Partner (RP) to
facilitate intuitive bimanual telesurgery, aiming to reduce the surgeon's
workload and enhance the ability to assist the surgeon. An interval type-2
polynomial fuzzy-model-based learning algorithm is employed to extract expert
domain knowledge from surgeons and reflect environmental interaction
information. Based on this, a bimanual shared control is developed to interact
with the other robot teleoperated by the surgeon, understanding their control
and providing assistance. Because no prior information about the environment
model is required, the approach reduces reliance on force sensors in the
control design. Experimental results on the DaVinci Surgical System show that
the RP can assist in peg-transfer tasks and reduce the surgeon's workload by 51\% in
force-sensor-free scenarios.
|
Self-driving vehicles must perceive and predict the future positions of
nearby actors in order to avoid collisions and drive safely. A deep
learning module is often responsible for this task, requiring large-scale,
high-quality training datasets. As data collection is often significantly
cheaper than labeling in this domain, the decision of which subset of examples
to label can have a profound impact on model performance. Active learning
techniques, which leverage the state of the current model to iteratively select
examples for labeling, offer a promising solution to this problem. However,
despite the appeal of this approach, there has been little scientific analysis
of active learning approaches for the perception and prediction (P&P) problem.
In this work, we study active learning techniques for P&P and find that the
traditional active learning formulation is ill-suited for the P&P setting. We
thus introduce generalizations that make our approach both cost-aware
and capable of fine-grained selection of examples through partially labeled
scenes. Our experiments on a real-world, large-scale self-driving dataset
suggest that fine-grained selection can improve the performance across
perception, prediction, and downstream planning tasks.
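As a rough illustration of what a cost-aware, fine-grained selection step could look like (a sketch of the general idea, not the paper's implementation; `score_fn`, `cost_fn`, and the scene/actor data layout are hypothetical placeholders):

```python
# Illustrative sketch: score individual actors within scenes so that only the
# most informative parts of a scene are sent for labeling, subject to a budget.
import heapq

def select_for_labeling(unlabeled_scenes, score_fn, cost_fn, budget):
    """Greedily pick (scene_id, actor_id) fragments by uncertainty per unit cost."""
    candidates = []
    for scene in unlabeled_scenes:
        for actor in scene["actors"]:
            utility = score_fn(scene, actor)   # e.g. predictive entropy
            cost = cost_fn(scene, actor)       # e.g. expected labeling time
            heapq.heappush(candidates, (-utility / cost, scene["id"], actor["id"], cost))

    selected, spent = [], 0.0
    while candidates and spent < budget:
        neg_ratio, scene_id, actor_id, cost = heapq.heappop(candidates)
        if spent + cost > budget:
            continue
        selected.append((scene_id, actor_id))
        spent += cost
    return selected
```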
|
We study the problem of determining the minimum number $f(n,k,d)$ of affine
subspaces of codimension $d$ that are required to cover all points of
$\mathbb{F}_2^n\setminus \{\vec{0}\}$ at least $k$ times while covering the
origin at most $k-1$ times. The case $k=1$ is a classic result of Jamison,
which was independently obtained by Brouwer and Schrijver for $d = 1$. The
value of $f(n,1,1)$ also follows from a well-known theorem of Alon and F\"uredi
about coverings of finite grids in affine spaces over arbitrary fields. Here we
determine the value of this function exactly in various ranges of the
parameters. In particular, we prove that for $k \ge 2^{n-d-1}$ we have
$f(n,k,d)=2^d k - \left \lfloor \frac{k}{2^{n-d}} \right \rfloor$, while for $n
> 2^{2^d k-k-d+1}$ we have $f(n,k,d)= n + 2^dk-d-2$, and also study the
transition between these two ranges. While previous work in this direction has
primarily employed the polynomial method, we prove our results through more
direct combinatorial and probabilistic arguments, and also exploit a connection
to coding theory.
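For concreteness, the two closed-form expressions quoted above can be evaluated directly in the parameter ranges stated for them (a small numerical illustration, not part of the paper):

```python
# Evaluating the closed forms quoted in the abstract, each in the range where
# the abstract states it holds.
def f_large_k(n, k, d):
    # stated for k >= 2**(n - d - 1):  f(n,k,d) = 2^d k - floor(k / 2^(n-d))
    assert k >= 2 ** (n - d - 1)
    return (2 ** d) * k - k // (2 ** (n - d))

def f_large_n(n, k, d):
    # stated for n > 2**(2^d k - k - d + 1):  f(n,k,d) = n + 2^d k - d - 2
    assert n > 2 ** ((2 ** d) * k - k - d + 1)
    return n + (2 ** d) * k - d - 2

# n=4, d=2, k=8: each codimension-2 subspace has 4 points, and 15 nonzero
# points each covered 8 times need at least 15*8/4 = 30 subspaces -> matches.
print(f_large_k(n=4, k=8, d=2))   # 30
print(f_large_n(n=6, k=2, d=1))   # 7, since 6 > 2^(4-2-1+1) = 4
```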
|
Automatic speaker verification (ASV) is one of the core technologies in
biometric identification. With the ubiquitous usage of ASV systems in
safety-critical applications, more and more malicious attackers attempt to
launch adversarial attacks against ASV systems. In the midst of the arms race
between attack and defense in ASV, how to effectively improve the robustness of
ASV against adversarial attacks remains an open question. We note that
self-supervised learning models, after pretraining, possess the ability to
mitigate superficial perturbations in the input. Hence, with the goal of effective
defense in ASV against adversarial attacks, we propose a standard and
attack-agnostic method based on cascaded self-supervised learning models to
purify the adversarial perturbations. Experimental results demonstrate that the
proposed method achieves effective defense performance and can successfully
counter adversarial attacks in scenarios where attackers may either be aware or
unaware of the self-supervised learning models.
|
Geographic ranges of communities of species evolve in response to
environmental, ecological, and evolutionary forces. Understanding the effects
of these forces on species' range dynamics is a major goal of spatial ecology.
Previous mathematical models have jointly captured the dynamic changes in
species' population distributions and the selective evolution of
fitness-related phenotypic traits in the presence of an environmental gradient.
These models inevitably include some unrealistic assumptions, and biologically
reasonable ranges of values for their parameters are not easy to specify. As a
result, simulations of the seminal models of this type can lead to markedly
different conclusions about the behavior of such populations, including the
possibility of maladaptation setting stable range boundaries. Here, we
harmonize such results by developing and simulating a continuum model of range
evolution in a community of species that interact competitively while diffusing
over an environmental gradient. Our model extends existing models by
incorporating both competition and freely changing intraspecific trait
variance. Simulations of this model predict a spatial profile of species' trait
variance that is consistent with experimental measurements available in the
literature. Moreover, they reaffirm interspecific competition as an effective
factor in limiting species' ranges, even when trait variance is not
artificially constrained. These theoretical results can inform the design of,
as yet rare, empirical studies to clarify the evolutionary causes of range
stabilization.
|
A natural choice for quantum communication is to use the relative phase
between two paths of a single-photon for information encoding. This method was
nevertheless quickly identified as impractical over long distances, and a
modification based on single-photon time-bins has since become widely adopted.
However, it introduces a fundamental loss that increases with the dimension
and limits its application over long distances. Here, we are able to solve
this long-standing hurdle by employing a few-mode fiber space-division
multiplexing platform working with orbital angular momentum modes. In our
scheme, we maintain the practicability provided by the time-bin scheme, while
the quantum states are transmitted through a few-mode fiber in a configuration
that does not introduce post-selection losses. We experimentally demonstrate
our proposal by successfully transmitting phase-encoded single-photon states
for quantum cryptography over 500 m of few-mode fiber, showing the feasibility
of our scheme.
|
We study the possibility of realizing a Majorana zero mode that is robust and
can be easily manipulated for braiding in quantum computing in the ground state
of the Kitaev model. To achieve this, we first apply a uniform [111] magnetic
field to the gapless Kitaev model, turning it into an effective p + ip
topological superconductor of spinons. We then study possible vortex binding in
such a system to a topologically trivial spot in the ground
state. We consider two cases in the system: one is a vacancy and the other is a
fully polarized spin. We show that in both cases, the system binds a vortex
with the defect and a robust Majorana zero mode in the ground state at a weak
uniform [111] magnetic field. The distribution and asymptotic behavior of these
Majorana zero modes are studied. The Majorana zero modes in both cases decay
exponentially in space and are robust against local perturbations and other
Majorana zero modes far away, which makes them promising candidates for
braiding in topological quantum computing.
|
Timed automata (TA) are used for modeling systems with timing aspects. A TA
extends a finite automaton with a set of real-valued variables called clocks,
which measure time; constraints over the clocks guard the transitions. A
parametric TA (PTA) is a TA extension that allows parameters in clock
constraints. In this paper, we focus on the synthesis of a control strategy and a
parameter valuation for a PTA such that each run of the resulting TA reaches a
target location within the given amount of time while avoiding unsafe
locations. We propose an algorithm based on depth-first analysis combined with
an iterative feasibility check. The algorithm iteratively constructs a symbolic
representation of the possible solutions, and employs a feasibility check to
terminate the exploration along infeasible directions. Once the construction is
completed, a mixed integer linear program is solved for each candidate strategy
to generate a parameter valuation and a control strategy pair. We present a
robotic planning example to motivate the problem and to illustrate the results.
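A schematic sketch of the exploration strategy described above (not the authors' implementation); the PTA semantics and the MILP step are abstracted behind hypothetical callbacks:

```python
# Depth-first construction of candidate symbolic strategies, pruning branches
# whose accumulated constraints are infeasible; a per-candidate MILP extracts
# a parameter valuation. `successors`, `is_target`, `is_unsafe`, `feasible`,
# and `solve_milp` are hypothetical placeholders.

def synthesize(initial_state, successors, is_target, is_unsafe, feasible, solve_milp):
    candidates = []

    def dfs(state, constraints, strategy):
        if is_unsafe(state) or not feasible(constraints):
            return                                   # prune unsafe/infeasible branches
        if is_target(state):
            candidates.append((list(strategy), list(constraints)))
            return
        for action, next_state, guard in successors(state):
            dfs(next_state, constraints + [guard], strategy + [(state, action)])

    dfs(initial_state, [], [])
    # Solve a mixed integer linear program per candidate strategy to generate
    # a parameter valuation and control strategy pair.
    for strategy, constraints in candidates:
        valuation = solve_milp(constraints)
        if valuation is not None:
            return strategy, valuation
    return None
```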
|
Logarithmic number systems (LNS) represent real numbers in many applications
as a constant base raised to a fixed-point exponent, which makes the
distribution of representable values exponential. This greatly simplifies
hardware multiplication, division, and square root. Base-2 LNS is the most
common, but in this paper we show that for low-precision LNS the choice of base
has a significant impact.
We make four main contributions. First, LNS is not closed under addition and
subtraction, so the result is approximate. We show that choosing a suitable
base can manipulate the distribution to reduce the average error. Second, we
show that low-precision LNS addition and subtraction can be implemented
efficiently in logic rather than commonly used ROM lookup tables, the
complexity of which can be reduced by an appropriate choice of base. A similar
effect is shown where the result of arithmetic has greater precision than the
input. Third, where input data from external sources is not expected to be in
LNS, we can reduce the conversion error by selecting an LNS base to match the
expected distribution of the input. Thus, there is no one base which gives the
global optimum, and base selection is a trade-off between different factors.
Fourth, we show that circuits realized in LNS require lower area and power
consumption for short word lengths.
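A minimal software sketch of an LNS with a configurable base, assuming a simple fixed-point exponent format; real designs implement the Gaussian-logarithm step in logic or tables, whose cost is what the base choice trades off:

```python
import math

# A value v > 0 is stored as the fixed-point exponent e with v ~= base**e.
# Multiplication is exact exponent addition; addition needs the Gaussian
# logarithm sb(d) = log_base(1 + base**d), approximated in hardware.

def to_lns(v, base, frac_bits):
    """Quantize log_base(v) to a fixed-point exponent."""
    return round(math.log(v, base) * (1 << frac_bits))

def from_lns(e, base, frac_bits):
    return base ** (e / (1 << frac_bits))

def lns_mul(e1, e2):
    return e1 + e2                      # exact in LNS

def lns_add(e1, e2, base, frac_bits):
    """Approximate addition: e_big + sb(e_small - e_big)."""
    big, small = max(e1, e2), min(e1, e2)
    d = (small - big) / (1 << frac_bits)
    sb = math.log(1 + base ** d, base)  # Gaussian logarithm
    return big + round(sb * (1 << frac_bits))

base, frac_bits = 2.0, 5               # try e.g. base 1.9 vs 2.0
a, b = to_lns(3.5, base, frac_bits), to_lns(1.25, base, frac_bits)
print(from_lns(lns_mul(a, b), base, frac_bits))                   # ~4.375, exact up to quantization
print(from_lns(lns_add(a, b, base, frac_bits), base, frac_bits))  # ~4.75, approximate
```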
|
Ramanujan provided several results involving the modified Bessel function
$K_z(x)$ in his Lost Notebook. One of them is the famous Ramanujan-Guinand
formula, equivalent to the functional equation of the non-holomorphic
Eisenstein series on $SL_2(\mathbb{Z})$. Recently, this formula was generalized by
Dixit, Kesarwani, and Moll. In this article, we first obtain a generalization
of a theorem of Watson and, as an application of it, give a new proof of the
result of Dixit, Kesarwani, and Moll. Watson's theorem is also generalized in a
different direction using ${}_\mu K_z(x,\lambda)$ which is itself a
generalization of $K_z(x)$. Analytic continuations of all these results are
also given.
|
When bars form within galaxy formation simulations in the standard
cosmological context, dynamical friction with dark matter (DM) causes them to
rotate rather slowly. However, almost all observed galactic bars are fast in
terms of the ratio between corotation radius and bar length. Here, we
explicitly display an $8\sigma$ tension between the observed distribution of
this ratio and that in the EAGLE simulation at redshift 0. We also compare the
evolution of Newtonian galactic discs embedded in DM haloes to their evolution
in three extended gravity theories: Milgromian Dynamics (MOND), a model of
non-local gravity, and a scalar-tensor-vector gravity theory (MOG). Although
our models start with the same initial baryonic distribution and rotation
curve, the long-term evolution is different. The bar instability happens more
violently in MOND compared to the other models. There are some common features
between the extended gravity models, in particular the negligible role played
by dynamical friction, which by contrast plays a key role in the DM model. Partly for
this reason, all extended gravity models predict weaker bars and faster bar
pattern speeds compared to the DM case. Although the absence of strong bars in
our idealized, isolated extended gravity simulations is in tension with
observations, they reproduce the strong observational preference for `fast' bar
pattern speeds, which we could not do with DM. We confirm previous findings
that apparently `ultrafast' bars can be due to bar-spiral arm alignment leading
to an overestimated bar length, especially in extended gravity scenarios where
the bar is already fast.
|
In recent years, quantitative investment methods combined with artificial
intelligence have attracted more and more attention from investors and
researchers. Existing related methods based on supervised learning are not
well suited to learning problems with long-term goals and delayed rewards, as
encountered in real futures trading. In this paper, we therefore model the price prediction
problem as a Markov decision process (MDP), and optimize it by reinforcement
learning with expert trajectories. In the proposed method, we employ more than
100 short-term alpha factors, instead of the price, volume, and several technical
factors used in existing methods, to describe the states of the MDP. Furthermore,
unlike DQN (deep Q-learning) and BC (behavior cloning) in related methods, we
introduce expert experience in the training stage and consider both the
expert-environment interaction and the agent-environment interaction when
designing the temporal difference error, so that the agents are more robust to
the inevitable noise in financial data. Experimental results evaluated on share
price index futures in China, including IF (CSI 300) and IC (CSI 500), show
the advantages of the proposed method over three typical technical analysis
methods and two deep learning based methods.
|
Understanding the origin and mechanism of the transverse polarization of
hyperons produced in unpolarized proton-proton collision, $pp\to
\Lambda^\uparrow X$, has been one of the long-standing issues in high-energy
spin physics. In the framework of the collinear factorization applicable to
large-$p_T$ hadron production, this phenomenon is a twist-3 observable which
is caused by multi-parton correlations either in the initial protons or in the
process of fragmentation into the hyperon. We derive the twist-3 gluon
fragmentation function (FF) contribution to this process in the leading order
(LO) with respect to the QCD coupling constant. Combined with the known results
for the contribution from the twist-3 distribution function and the twist-3
quark FF, this completes the LO twist-3 cross section. We also find that the
model-independent relations among the twist-3 gluon FFs based on the QCD
equation of motion and the Lorentz invariance property of the correlation
functions guarantee the color gauge invariance and the frame-independence of
the cross section.
|
Semi-local DFT methods exhibit significant errors for the phase diagrams of
transition-metal oxides that are caused by an incorrect description of
molecular oxygen and the large self-interaction error in materials with
strongly localized electronic orbitals. Empirical and semiempirical corrections
based on the DFT+U method can reduce these errors, but the parameterization and
validation of the correction terms remains an on-going challenge. We develop a
systematic methodology to determine the parameters and to statistically assess
the results by considering thermochemical data across a set of transition metal
compounds. We consider three interconnected levels of correction terms: (1) a
constant oxygen binding correction, (2) Hubbard-U correction, and (3) DFT/DFT+U
compatibility correction. The parameterization is expressed as a unified
optimization problem. We demonstrate this approach for 3d transition metal
oxides, considering a target set of binary and ternary oxides. With a total of
37 measured formation enthalpies taken from the literature, the dataset is
augmented by the reaction energies of 1,710 unique reactions that were derived
from the formation energies by systematic enumeration. To ensure a balanced
dataset across the available data, the reactions were grouped by their
similarity using clustering and suitably weighted. The parameterization is
validated using leave-one-out cross-validation (CV), a standard technique for
the validation of statistical models. We apply the methodology to the SCAN
density functional. Based on the CV score, the error of binary (ternary) oxide
formation energies is reduced by 40% (75%) to 0.10 (0.03) eV/atom. The method
and tools demonstrated here can be applied to other classes of materials or to
parameterize the corrections to optimize DFT+U performance for other target
physical properties.
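A schematic sketch of the unified fitting idea under simplifying assumptions (hypothetical data layout, plain weighted least squares standing in for the paper's optimization):

```python
import numpy as np

# Correction parameters are fit to weighted reaction energies and assessed
# with leave-one-out cross-validation. Each row of `design` holds the
# sensitivities of one reaction energy to the correction terms (O2 binding
# shift, Hubbard-U terms, DFT/DFT+U compatibility terms); all data below are
# synthetic placeholders.

def fit_corrections(design, targets, weights):
    w = np.sqrt(weights)[:, None]
    params, *_ = np.linalg.lstsq(design * w, targets * w.ravel(), rcond=None)
    return params

def loo_cv_score(design, targets, weights):
    errors = []
    for i in range(len(targets)):
        keep = np.arange(len(targets)) != i
        p = fit_corrections(design[keep], targets[keep], weights[keep])
        errors.append(targets[i] - design[i] @ p)
    return np.sqrt(np.mean(np.square(errors)))   # CV root-mean-square error

rng = np.random.default_rng(0)
design = rng.normal(size=(40, 3))                 # 40 reactions, 3 correction terms
true_params = np.array([0.5, -0.2, 0.1])
targets = design @ true_params + 0.01 * rng.normal(size=40)
weights = np.ones(40)                             # cluster-based weights in the paper
print(fit_corrections(design, targets, weights))
print(loo_cv_score(design, targets, weights))
```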
|
We analyze the solution of the Schr\"odinger equation arising in the
treatment of a geometric model introduced to explain the origin of the observed
shallow levels in semiconductors threaded by a dislocation density. We show
(contrary to what the authors claimed) that the model does not support bound
states for any chosen set of model parameters. Assuming a fictitious motion in
the $x$-$y$ plane, there are bound states provided that $k\neq 0$, and not only
for $k>0$ as the authors believed. The truncation condition proposed by the
authors yields only one particular energy for a given value of a chosen model
parameter and misses all the others (a conditionally solvable problem).
|
In this paper, we conjecture a connection between the $A$-polynomial of a
knot in $\mathbb{S}^{3}$ and the hyperbolic volume of its exterior
$\mathcal{M}_{K}$: the knots with zero hyperbolic volume are exactly the knots
with an $A$-polynomial where every irreducible factor is the sum of two
monomials in $L$ and $M$. Herein, we show the forward implication and examine
cases that suggest the converse may also be true. Since the $A$-polynomials of
hyperbolic knots are known to have at least one irreducible factor which is not
the sum of two monomials in $L$ and $M$, this paper considers satellite knots
that are graph knots, as well as some with positive hyperbolic volume.
|
Artificial intelligence (AI) has been successful at solving numerous problems
in machine perception. In radiology, AI systems are rapidly evolving and show
progress in guiding treatment decisions, diagnosing, localizing disease on
medical images, and improving radiologists' efficiency. A critical component to
deploying AI in radiology is to gain confidence in a developed system's
efficacy and safety. The current gold standard approach is to conduct an
analytical validation of performance on a generalization dataset from one or
more institutions, followed by a clinical validation study of the system's
efficacy during deployment. Clinical validation studies are time-consuming, and
best practices dictate limited re-use of analytical validation data, so it is
ideal to know ahead of time if a system is likely to fail analytical or
clinical validation. In this paper, we describe a series of sanity tests to
identify when a system performs well on development data for the wrong reasons.
We illustrate the sanity tests' value by designing a deep learning system to
classify pancreatic cancer seen in computed tomography scans.
|
Science gateways are user-centric, end-to-end cyberinfrastructure for
managing scientific data and executions of computational software on
distributed resources. In order to simplify the creation and management of
science gateways, we have pursued a multi-tenanted, platform-as-a-service
approach that allows multiple gateway front-ends (portals) to be integrated
with a consolidated middleware that manages the movement of data and the
execution of workflows on multiple back-end scientific computing resources. An
important challenge for this approach is to provide an end-to-end data movement
and management solution that allows gateway users to integrate their own data
stores with the gateway platform. These user-provided data stores may include
commercial cloud-based object store systems, third-party data stores accessed
through APIs such as REST endpoints, and users' own local storage resources. In
this paper, we present a solution design and implementation based on the
integration of a managed file transfer (MFT) service (Airavata MFT) into the
platform.
|
Conformal Field Theories (CFTs) have rich dynamics in heavy states. We
describe the constraints due to spontaneously broken boost and dilatation
symmetries in such states. The spontaneously broken boost symmetries require
the existence of new low-lying primaries whose scaling dimension gap, we argue,
scales as $O(1)$. We demonstrate these ideas in various states, including
fluid, superfluid, mean field theory, and Fermi surface states. We end with
some remarks about the large charge limit in 2d and discuss a theory of a
single compact boson with an arbitrary conformal anomaly.
|
This paper proposes a discrete knowledge graph (KG) embedding (DKGE) method,
which projects KG entities and relations into the Hamming space based on a
computationally tractable discrete optimization algorithm, to solve the
formidable storage and computation cost challenges in traditional continuous
graph embedding methods. The convergence of DKGE can be guaranteed
theoretically. Extensive experiments demonstrate that DKGE achieves higher
accuracy than classical hashing functions that map the effective continuous
embeddings into discrete codes. In addition, DKGE reaches comparable accuracy with
much lower computational complexity and storage compared to many continuous
graph embedding methods.
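For reference, the classical post-hoc hashing baseline that DKGE is compared against can be sketched as follows (the sign binarization and the translational Hamming score below are illustrative choices, not necessarily those used in the paper):

```python
import numpy as np

# Continuous entity/relation embeddings are binarized by sign into Hamming-space
# codes, and candidate triples are scored by Hamming distance.

def binarize(embeddings):
    """Map continuous embeddings to {0, 1} codes."""
    return (embeddings > 0).astype(np.uint8)

def hamming_score(h, r, t):
    """Lower is better: Hamming distance between (h XOR r) and t."""
    return np.count_nonzero(np.bitwise_xor(h, r) != t)

rng = np.random.default_rng(1)
entity_emb = rng.normal(size=(1000, 128))      # stand-ins for pretrained embeddings
relation_emb = rng.normal(size=(50, 128))
E, R = binarize(entity_emb), binarize(relation_emb)
print(hamming_score(E[0], R[3], E[42]))        # score of a candidate triple
```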
|
Two of the most sensitive probes of the large scale structure of the universe
are the clustering of galaxies and the tangential shear of background galaxy
shapes produced by those foreground galaxies, so-called galaxy-galaxy lensing.
Combining the measurements of these two two-point functions leads to
cosmological constraints that are independent of the galaxy bias factor. The
optimal choice of foreground, or lens, galaxies is governed by the joint, but
conflicting requirements to obtain accurate redshift information and large
statistics. We present cosmological results from the full 5000 sq. deg. of the
Dark Energy Survey first three years of observations (Y3) combining those
two-point functions, using for the first time a magnitude-limited lens sample
(MagLim) of 11 million galaxies especially selected to optimize such
combination, and 100 million background shapes. We consider two cosmological
models, flat $\Lambda$CDM and $w$CDM. In $\Lambda$CDM we obtain for the matter
density $\Omega_m = 0.320^{+0.041}_{-0.034}$ and for the clustering amplitude
$S_8 = 0.778^{+0.037}_{-0.031}$, at 68\% C.L. The latter is only 1$\sigma$
smaller than the prediction in this model informed by measurements of the
cosmic microwave background by the Planck satellite. In $w$CDM we find
$\Omega_m = 0.32^{+0.044}_{-0.046}$, $S_8=0.777^{+0.049}_{-0.051}$, and dark
energy equation of state $w=-1.031^{+0.218}_{-0.379}$. We find that including
smaller scales while marginalizing over non-linear galaxy bias improves the
constraining power in the $\Omega_m-S_8$ plane by $31\%$ and in the
$\Omega_m-w$ plane by $41\%$, while yielding cosmological parameters consistent
with those in the linear bias case. These results are combined with those from
cosmic shear in a companion paper to present full DES-Y3 constraints from the
three two-point functions (3x2pt).
|
Object detection in aerial images is an important task in environmental,
economic, and infrastructure-related tasks. One of the most prominent
applications is the detection of vehicles, for which deep learning approaches
are increasingly used. A major challenge in such approaches is the limited
amount of data that arises, for example, when more specialized and rarer
vehicles such as agricultural machinery or construction vehicles are to be
detected. This lack of data contrasts with the enormous data hunger of deep
learning methods in general and object recognition in particular. In this
article, we address this issue in the context of the detection of road vehicles
in aerial images. To overcome the lack of annotated data, we propose a
generative approach that generates top-down images by overlaying artificial
vehicles created from 2D CAD drawings on artificial or real backgrounds. Our
experiments with a modified RetinaNet object detection network show that adding
these images to small real-world datasets significantly improves detection
performance. In cases of very limited or even no real-world images, we observe
an improvement in average precision of up to 0.70 points. We address the
remaining performance gap to real-world datasets by analyzing the effect of the
image composition of background and objects and give insights into the
importance of background.
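A minimal sketch of the overlay-style data generation described above, assuming RGBA cut-outs rendered from the CAD drawings; file names and jitter ranges are hypothetical:

```python
from PIL import Image
import random

# Paste vehicle cut-outs onto aerial backgrounds and record bounding boxes.

def compose(background_path, vehicle_paths, n_vehicles=5):
    bg = Image.open(background_path).convert("RGB")
    boxes = []
    for _ in range(n_vehicles):
        car = Image.open(random.choice(vehicle_paths)).convert("RGBA")
        scale = random.uniform(0.5, 1.2)                    # simulate varying ground resolution
        car = car.resize((int(car.width * scale), int(car.height * scale)))
        car = car.rotate(random.uniform(0, 360), expand=True)
        x = random.randint(0, max(0, bg.width - car.width))
        y = random.randint(0, max(0, bg.height - car.height))
        bg.paste(car, (x, y), mask=car)                     # alpha-composite the cut-out
        boxes.append((x, y, x + car.width, y + car.height))
    return bg, boxes

# image, annotations = compose("background.png", ["truck_cad.png", "tractor_cad.png"])
```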
|
While autoregressive models excel at image compression, their sample quality
is often lacking. Although not realistic, generated images often have high
likelihood according to the model, resembling the case of adversarial examples.
Inspired by a successful adversarial defense method, we incorporate randomized
smoothing into autoregressive generative modeling. We first model a smoothed
version of the data distribution, and then reverse the smoothing process to
recover the original data distribution. This procedure drastically improves the
sample quality of existing autoregressive models on several synthetic and
real-world image datasets while obtaining competitive likelihoods on synthetic
datasets.
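Conceptually, the two-stage procedure can be sketched as follows (placeholder models; a simple regression denoiser stands in for the learned reverse model, which in practice is itself a generative model):

```python
import torch

# Stage 1: fit an autoregressive model to data perturbed with Gaussian noise
# of scale `sigma` (the smoothed distribution). Stage 2: fit a model that
# reverses the smoothing. Sampling draws from the smoothed model and then
# applies the reverse step. `ar_model` and `denoiser` are placeholders.

sigma = 0.3

def smooth(x):
    return x + sigma * torch.randn_like(x)

def train_step(ar_model, denoiser, x, opt):
    x_noisy = smooth(x)
    loss_ar = -ar_model.log_prob(x_noisy).mean()        # stage 1: smoothed density
    loss_rev = ((denoiser(x_noisy) - x) ** 2).mean()    # stage 2: reverse the smoothing
    opt.zero_grad()
    (loss_ar + loss_rev).backward()
    opt.step()
    return loss_ar.item(), loss_rev.item()

# Sampling sketch: x_smooth = ar_model.sample(n); x = denoiser(x_smooth)
```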
|
The framework of Baikov-Gazizov-Ibragimov approximate symmetries has proven
useful for many examples where a small perturbation of an ordinary differential
equation (ODE) destroys its local symmetry group. For the perturbed model, some
of the local symmetries of the unperturbed equation may (or may not) re-appear
as approximate symmetries, and new approximate symmetries can appear.
Approximate symmetries are useful as a tool for the construction of approximate
solutions. We show that for algebraic and first-order differential equations,
to every point symmetry of the unperturbed equation, there corresponds an
approximate point symmetry of the perturbed equation. For second and
higher-order ODEs, this is not the case: some point symmetries of the original
ODE may be unstable, that is, they do not arise in the approximate point
symmetry classification of the perturbed ODE. We show that such unstable point
symmetries correspond to higher-order approximate symmetries of the perturbed
ODE, and can be systematically computed. Two detailed examples, including a
fourth-order nonlinear Boussinesq equation, are presented. Examples of the use
of higher-order approximate symmetries and approximate integrating factors to
obtain approximate solutions of higher-order ODEs are provided.
|
Citations are used for research evaluation, and it is therefore important to
know which factors influence or associate with citation impact of articles.
Several citation factors have been studied in the literature. In this study we
propose a new factor, topic growth, that no previous study has taken into
consideration. The growth rate of topics may influence future citation counts,
because a high growth in a topic means there are more publications citing
previous publications in that topic. We construct topics using community
detection in a citation network and use a two-part regression model to
study the association between topic growth and citation counts in eight broad
disciplines. The first part of the model uses quantile regression to estimate
the effect of growth ratio on citation counts for publications with more than
three citations. The second part of the model uses logistic regression to model
the influence of the independent variables on the probability of being lowly
cited versus being modestly or highly cited. Both models control for three
variables that may distort the association between the topic growth and
citations: journal impact, number of references, and number of authors. The
regression model clearly shows that publications in fast-growing topics have a
citation advantage compared to publications in slow-growing or declining topics
in all of the eight disciplines. Using citation indicators for research
evaluation may give incentives for researchers to publish in fast-growing
topics, but this may cause research to become less diversified. The results
also have some implications for citation normalization.
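A sketch of the two-part model in Python, assuming a hypothetical publication table with columns for citations, topic growth ratio, and the three control variables:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df columns (hypothetical): citations, growth_ratio, journal_impact, n_refs, n_authors
df = pd.read_csv("publications.csv")

# Part 1: quantile regression of citations on topic growth for publications
# with more than three citations, controlling for the three covariates.
part1 = smf.quantreg(
    "citations ~ growth_ratio + journal_impact + n_refs + n_authors",
    data=df[df["citations"] > 3],
).fit(q=0.5)

# Part 2: logistic regression for the probability of being lowly cited
# (here: at most three citations) versus modestly or highly cited.
df["lowly_cited"] = (df["citations"] <= 3).astype(int)
part2 = smf.logit(
    "lowly_cited ~ growth_ratio + journal_impact + n_refs + n_authors",
    data=df,
).fit()

print(part1.summary())
print(part2.summary())
```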
|
We have studied spin excitations in a single-domain crystal of
antiferromagnetic LiCoPO4 by THz absorption spectroscopy. By analyzing the
selection rules and comparing the strengths of the absorption peaks in the
different antiferromagnetic domains, we found electromagnons and
magnetoelectric spin resonances besides conventional magnetic-dipole active
spin-wave excitations. Using the sum rule for the magnetoelectric
susceptibility we determined the contribution of the spin excitations to all
the different off-diagonal elements of the static magnetoelectric
susceptibility tensor in zero as well as in finite magnetic fields. We conclude
that the magnetoelectric spin resonances are responsible for the static
magnetoelectric response of the bulk when the magnetic field is along the
x-axis, and the symmetric part of the magnetoelectric tensor with zero diagonal
elements dominates over the antisymmetric components.
|
In the next decades, the gravitational-wave (GW) standard siren observations
and the neutral hydrogen 21-cm intensity mapping (IM) surveys, as two promising
cosmological probes, will play an important role in precisely measuring
cosmological parameters. In this work, we make a forecast for cosmological
parameter estimation with the synergy between the GW standard siren
observations and the 21-cm IM surveys. We choose the Einstein Telescope (ET)
and the Taiji observatory as the representatives of the GW detection projects
and choose the Square Kilometre Array (SKA) phase I mid-frequency array as the
representative of the 21-cm IM experiments. In the simulation of the 21-cm IM
data, we assume perfect foreground removal and calibration. We find that the
synergy of the GW standard siren observations and the 21-cm IM survey could
break the cosmological parameter degeneracies. The joint ET+Taiji+SKA data give
$\sigma(H_0)=0.28\ {\rm km\ s^{-1}\ Mpc^{-1}}$ in the $\Lambda$CDM model,
$\sigma(w)=0.028$ in the $w$CDM model, which are better than the results of
$Planck$+BAO+SNe, and $\sigma(w_0)=0.077$ and $\sigma(w_a)=0.295$ in the CPL
model, which are comparable with the results of $Planck$+BAO+SNe. In the
$\Lambda$CDM model, the constraint precision of $H_0$ and $\Omega_{\rm m}$ is
better than or close to 1%, indicating that these two promising cosmological
probes offer bright prospects for precision cosmology.
|
Many reinforcement learning algorithms rely on value estimation. However, the
most widely used algorithms -- namely temporal difference algorithms -- can
diverge under both off-policy sampling and nonlinear function approximation.
Many algorithms have been developed for off-policy value estimation which are
sound under linear function approximation, based on the linear mean-squared
projected Bellman error (PBE). Extending these methods to the non-linear case
has been largely unsuccessful. Recently, several methods have been introduced
that approximate a different objective, called the mean-squared Bellman error
(BE), which naturally facilitates nonlinear approximation. In this work, we
build on these insights and introduce a new generalized PBE that extends the
linear PBE to the nonlinear setting. We show how this generalized objective
unifies previous work, including previous theory, and obtain new bounds for the
value error of the solutions of the generalized objective. We derive an
easy-to-use, but sound, algorithm to minimize the generalized objective which
is more stable across runs, is less sensitive to hyperparameters, and performs
favorably across four control domains with neural network function
approximation.
|
We present a simple quantum description of the gravitational collapse of a
ball of dust which excludes those states whose width is arbitrarily smaller
than the gravitational radius of the matter source and supports the conclusion
that black holes are macroscopic extended objects. We also comment briefly on
the relevance of this result for the ultraviolet self-completion of gravity and
the corpuscular picture of black holes.
|
Here we deal with the stabilization problem of non-diagonal systems by
boundary control. In the studied setting, the boundary control input is subject
to a constant delay. We use the spectral decomposition method and split the
system into two components: an unstable and a stable one. To stabilize the
unstable part of the system, we connect, for the first time in the literature,
the famous backstepping control design technique with the direct-proportional
control design. More precisely, we construct a proportional open-loop
stabilizer; then, by means of the Artstein transformation, we close the loop. At
the end of the paper, an example is provided in order to illustrate the
acquired results.
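For readers unfamiliar with the construction, the following is a standard finite-dimensional illustration of how the Artstein transformation absorbs a constant input delay (the paper's setting is infinite-dimensional, but the mechanism is analogous):

```latex
% Standard finite-dimensional illustration of the Artstein reduction:
% for a constant input delay \tau,
\[
  \dot{x}(t) = A x(t) + B u(t-\tau), \qquad
  z(t) := x(t) + \int_{t-\tau}^{t} e^{A(t-\tau-s)} B u(s)\,\mathrm{d}s ,
\]
% differentiating z(t) removes the delay from the input channel,
\[
  \dot{z}(t) = A z(t) + e^{-A\tau} B u(t),
\]
% so a proportional feedback u(t) = K z(t) designed for the delay-free pair
% (A, e^{-A\tau}B) closes the loop for the original delayed system.
```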
|
In this paper, a new framework, named the graphical state space model, is
proposed for the real-time optimal estimation of a class of nonlinear state
space models. When this kind of system model is discretized into an equation
that cannot be handled by the extended Kalman filter, factor graph optimization
can outperform the extended Kalman filter in some cases. A simple nonlinear
example is given to demonstrate the efficiency of this framework.
|
We consider four-point correlation functions of protected single-trace scalar
operators in planar N = 4 supersymmetric Yang-Mills (SYM). We conjecture that
all loop corrections derive from an integrand which enjoys a ten-dimensional
symmetry. This symmetry combines spacetime and R-charge transformations. By
considering a 10D light-like limit, we extend the correlator/amplitude duality
by equating large R-charge octagons with Coulomb branch scattering amplitudes.
Using results from integrability, this predicts new finite amplitudes as well
as some Feynman integrals.
|
The interfacial behavior of quantum materials leads to emergent phenomena
such as two dimensional electron gases, quantum phase transitions, and
metastable functional phases. Probes for in situ and real time surface
sensitive characterization are critical for active monitoring and control of
epitaxial synthesis, and hence the atomic-scale engineering of heterostructures
and superlattices. Termination switching, especially as an interfacial process
in ternary complex oxides, has been studied using a variety of probes, often ex
situ; however, direct observation of this phenomenon is lacking. To address this
need, we establish in situ and real time reflection high energy electron
diffraction and Auger electron spectroscopy for pulsed laser deposition, which
provide structural and compositional information of the surface during film
deposition. Using this unique capability, we show, for the first time, the
direct observation and control of surface termination in complex oxide
heterostructures of SrTiO3 and SrRuO3. Density-functional-theory calculations
capture the energetics and stability of the observed structures and elucidate
their electronic behavior. This demonstration opens up a novel approach to
monitor and control the composition of materials at the atomic scale to enable
next-generation heterostructures for control over emergent phenomena, as well
as electronics, photonics, and energy applications.
|
This paper explores serverless cloud computing for double machine learning.
Being based on repeated cross-fitting, double machine learning is particularly
well suited to exploit the high level of parallelism achievable with serverless
computing. It allows fast on-demand estimation without additional cloud
maintenance effort. We provide a prototype Python implementation
\texttt{DoubleML-Serverless} for the estimation of double machine learning
models with the serverless computing platform AWS Lambda and demonstrate its
utility with a case study analyzing estimation times and costs.
|
Bayesian neural networks (BNNs) have shown success in the areas of
uncertainty estimation and robustness. However, a crucial challenge prohibits
their use in practice: Bayesian NNs require a large number of predictions to
produce reliable results, leading to a significant increase in computational
cost. To alleviate this issue, we propose spatial smoothing, a method that
ensembles neighboring feature map points of CNNs. By simply adding a few blur
layers to the models, we empirically show that the spatial smoothing improves
accuracy, uncertainty estimation, and robustness of BNNs across a whole range
of ensemble sizes. In particular, BNNs incorporating the spatial smoothing
achieve high predictive performance merely with a handful of ensembles.
Moreover, this method can also be applied to canonical deterministic neural
networks to improve their performance. Several lines of evidence suggest that the
improvements can be attributed to the smoothing and flattening of the loss
landscape. In addition, we provide a fundamental explanation for prior works -
namely, global average pooling, pre-activation, and ReLU6 - by interpreting
them as special cases of the spatial smoothing. These not only enhance
accuracy, but also improve uncertainty estimation and robustness by making the
loss landscape smoother in the same manner as the spatial smoothing. The code
is available at https://github.com/xxxnell/spatial-smoothing.
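A minimal sketch of such a blur layer in PyTorch (kernel size and placement are illustrative; the repository linked above contains the authors' configuration):

```python
import torch
import torch.nn as nn

# A fixed depthwise box filter that averages neighboring feature-map points.

class Blur2d(nn.Module):
    def __init__(self, channels, kernel_size=2):
        super().__init__()
        kernel = torch.ones(channels, 1, kernel_size, kernel_size) / (kernel_size ** 2)
        self.register_buffer("kernel", kernel)
        self.pad = nn.ReflectionPad2d((0, kernel_size - 1, 0, kernel_size - 1))
        self.channels = channels

    def forward(self, x):
        # Depthwise convolution with a fixed averaging kernel (no learned weights).
        return nn.functional.conv2d(self.pad(x), self.kernel, groups=self.channels)

x = torch.randn(8, 64, 32, 32)
print(Blur2d(64)(x).shape)   # torch.Size([8, 64, 32, 32])
```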
|
The generalized Brillouin zone (GBZ), the core concept of the non-Bloch band
theory used to rebuild the bulk-boundary correspondence in non-Hermitian
topology, generally appears as a closed loop. In this work, we find that even
if the GBZ itself collapses into a point, the recovery of the open-boundary
energy spectrum by the continuum bands remains unchanged. In contrast, when
such singular behavior of the GBZ occurs, the winding number becomes
ill-defined. Namely, we find that when the GBZ has singularities, the
bulk-boundary correspondence can still be established from the perspective of
the energy spectrum, but not from that of the topological invariants.
Meanwhile, even when the GBZ is a closed loop, the bulk-boundary correspondence
cannot be well characterized because of the ill-definition of the topological
number. The results obtained here may be useful for improving the existing
non-Bloch band theory.
|
The Vlasov-Poisson-Boltzmann equation is a classical equation governing the
dynamics of charged particles with the electric force being self-imposed. We
consider the system in a convex domain with the Cercignani-Lampis boundary
condition. We construct a unique local-in-time solution based on an
$L^\infty$-estimate and a $W^{1,p}$-estimate. In particular, we develop a new
iteration scheme along the characteristics with the Cercignani-Lampis boundary
for the $L^\infty$-estimate, and an intrinsic decomposition of the boundary
integral for the $W^{1,p}$-estimate.
|
In this paper, we generalize the embedded homology in \cite{hg1} for
hypergraphs and study the relative embedded homology for hypergraph pairs. We
study the topology for sub-hypergraphs. Using the relative embedded homology
and the topology for sub-hypergraphs, we discuss persistent relative embedded
homology for hypergraph pairs.
|
We report on the trapping of single Rb atoms in tunable arrays of optical
tweezers in a cryogenic environment at $\sim 4$ K. We describe the design and
construction of the experimental apparatus, based on a custom-made, UHV
compatible, closed-cycle cryostat with optical access. We demonstrate the
trapping of single atoms in cryogenic arrays of optical tweezers, with
lifetimes in excess of $\sim6000$ s, despite the fact that the vacuum system
has not been baked out. These results open the way to large arrays of single
atoms with extended coherence, for applications in large-scale quantum
simulation of many-body systems, and more generally in quantum science and
technology.
|
We introduce a new isometric strain model for the study of the dynamics of
cloth garments in a moderate stress environment, such as robotic manipulation
in the neighborhood of humans. This model treats textiles as surfaces which are
inextensible, admitting only isometric motions. Inextensibility is imposed in a
continuous setting, prior to any discretization, which gives consistency with
respect to re-meshing and prevents the problem of locking even with coarse
meshes. The simulations of robotic manipulation using the model are compared to
the actual manipulation in the real world, finding that the error between the
simulated and real position of each point in the garment is below 1 cm on
average, even when a coarse mesh is used. Aerodynamic contributions to motion
are incorporated into the model through the virtual uncoupling of the inertial
and gravitational mass of the garment. This approach results in an accurate, as
compared to reality, description of cloth motion incorporating aerodynamic
effects by using only two parameters.
|
An abstract theory of Fourier series in locally convex topological vector
spaces is developed. An analog of Fej\'{e}r's theorem is proved for these
series. The theory is applied to distributional solutions of Cauchy-Riemann
equations to recover basic results of complex analysis. Some classical results
of function theory are also shown to be consequences of the series expansion.
|
We study the fundamental question of the sample complexity of learning a good
policy in finite Markov decision processes (MDPs) when the data available for
learning is obtained by following a logging policy that must be chosen without
knowledge of the underlying MDP. Our main results show that the sample
complexity, the minimum number of transitions necessary and sufficient to
obtain a good policy, is an exponential function of the relevant quantities
when the planning horizon $H$ is finite. In particular, we prove that the
sample complexity of obtaining $\epsilon$-optimal policies is at least
$\Omega(\mathrm{A}^{\min(\mathrm{S}-1, H+1)})$ for $\gamma$-discounted
problems, where $\mathrm{S}$ is the number of states, $\mathrm{A}$ is the
number of actions, and $H$ is the effective horizon defined as $H=\lfloor
\tfrac{\ln(1/\epsilon)}{\ln(1/\gamma)} \rfloor$; and it is at least
$\Omega(\mathrm{A}^{\min(\mathrm{S}-1, H)}/\epsilon^2)$ for finite horizon
problems, where $H$ is the planning horizon of the problem. This lower bound is
essentially matched by an upper bound. For the average-reward setting we show
that there is no algorithm finding $\epsilon$-optimal policies with a finite
amount of data.
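As a quick numerical illustration of the effective-horizon quantity appearing in the discounted bound (the numbers below are ours, not from the paper):

```latex
% Worked example: with \epsilon = 0.1 and \gamma = 0.9, the effective horizon is
\[
  H = \left\lfloor \frac{\ln(1/\epsilon)}{\ln(1/\gamma)} \right\rfloor
    = \left\lfloor \frac{2.3026}{0.1054} \right\rfloor
    = 21,
\]
% so whenever S - 1 \ge 22, the discounted lower bound scales as A^{H+1} = A^{22}.
```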
|
Under-voltage load shedding has been considered a standard and effective
measure to recover the voltage stability of the electric power grid under
emergency and severe conditions. However, this scheme usually trips a massive
amount of load which can be unnecessary and harmful to customers. Recently,
deep reinforcement learning (RL) has been regarded and adopted as a promising
approach that can significantly reduce the amount of load shedding. However,
like most existing machine learning (ML)-based control techniques, RL control
usually cannot guarantee the safety of the systems under control. In this
paper, we introduce a novel safe RL method for emergency load shedding of power
systems that can enhance the safe voltage recovery of the electric power grid
after experiencing faults. Unlike the standard RL method, the safe RL method
has a reward function consisting of a barrier function that goes to minus
infinity when the system state approaches the safety bounds. Consequently, the
optimal control policy, which maximizes the reward function, can keep the
power system away from the safety bounds. This method is general and can be
applied to other safety-critical control problems. Numerical simulations on the
IEEE 39-bus benchmark are performed to demonstrate the effectiveness of the
proposed safe RL emergency control, as well as its adaptive capability to
faults not seen in training.
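A toy sketch of the barrier-shaped reward described above (voltage bounds, scaling, and the task reward are hypothetical placeholders):

```python
import math

# The task reward is augmented with a log-barrier term that tends to minus
# infinity as a monitored bus voltage approaches its safety bounds.

V_MIN, V_MAX = 0.95, 1.05   # per-unit voltage safety bounds (illustrative)

def barrier(v, eps=1e-9):
    """Tends to -inf as v approaches either bound from inside; -inf outside."""
    if v <= V_MIN or v >= V_MAX:
        return -float("inf")
    return math.log(v - V_MIN + eps) + math.log(V_MAX - v + eps)

def safe_reward(task_reward, bus_voltages, weight=1.0):
    return task_reward + weight * sum(barrier(v) for v in bus_voltages)

print(safe_reward(-2.0, [1.0, 0.99, 1.01]))    # well inside the bounds
print(safe_reward(-2.0, [1.0, 0.951, 1.01]))   # close to a bound: strongly penalized
```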
|
Nambu-Goldstone bosons, or axions, may be ubiquitous. Some of the axions may
have small masses and thus serve as mediators of long-range forces. In this
paper, we study the force mediated by an extremely light axion, $\phi$, between
the visible sector and the dark sector, where dark matter lives. Since nature
does not preserve the CP symmetry, the coupling between dark matter and $\phi$
is generically CP-violating. In this case, the induced force is extremely
long-range and behaves as an effective magnetic field. If the force acts on
electrons or nucleons, their spins on Earth precess around a fixed
direction towards the galactic center. This provides an experimental
opportunity for $\phi$ with mass, $m_\phi$, and decay constant, $f_\phi$,
satisfying $m_\phi\lesssim 10^{-25}\,$ eV, $f_\phi\lesssim 10^{14}\,$GeV if the
daily modulation of the effective magnetic field signals in magnetometers is
measured by using the coherent averaging method. The effective magnetic field
induced by an axionic compact object, such as an axion domain wall, is also
discussed.
|
Texture can be defined as the change of image intensity that forms repetitive
patterns, resulting from the physical properties of the object's roughness or
differences in reflection on the surface. Considering that texture forms a
complex system of patterns in a non-deterministic way, biodiversity concepts
can help texture characterization in images. This paper proposes a novel
approach capable of quantifying such a complex system of diverse patterns
through species diversity and richness and taxonomic distinctiveness. The
proposed approach considers each image channel as a species ecosystem and
computes species diversity and richness measures as well as taxonomic measures
to describe the texture. The proposed approach takes advantage of ecological
patterns' invariance characteristics to build a permutation, rotation, and
translation invariant descriptor. Experimental results on three datasets of
natural texture images and two datasets of histopathological images have shown
that the proposed texture descriptor has advantages over several texture
descriptors and deep methods.
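A minimal sketch of the ecological analogy for one of the measure families (species richness and Shannon diversity per channel; the taxonomic-distinctiveness measures used in the paper are omitted):

```python
import numpy as np

# Treat each image channel as an "ecosystem" whose gray levels play the role
# of species, then compute richness and Shannon diversity from the histogram.

def channel_descriptors(channel, levels=256):
    counts = np.bincount(channel.ravel(), minlength=levels).astype(float)
    p = counts / counts.sum()
    richness = int(np.count_nonzero(counts))            # number of "species" present
    shannon = -np.sum(p[p > 0] * np.log(p[p > 0]))      # Shannon diversity index
    return richness, shannon

def texture_descriptor(image):
    """image: H x W x C uint8 array; concatenates per-channel measures."""
    return np.array([v for c in range(image.shape[2])
                     for v in channel_descriptors(image[..., c])])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(texture_descriptor(img))
```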
|
A pulse oximeter is an optical device that monitors tissue oxygenation
levels. Traditionally, these devices estimate the oxygenation level by
measuring the intensity of the transmitted light through the tissue and are
embedded into everyday devices such as smartphones and smartwatches. However,
these sensors require prior information and are susceptible to unwanted changes
in the intensity, including ambient light, skin tone, and motion artefacts.
Previous experiments have shown the potential of Time-of-Flight (ToF)
techniques in measurements of tissue hemodynamics. Our proposed technology uses
histograms of photon flight paths within the tissue to obtain tissue
oxygenation, regardless of the changes in the intensity of the source. Our
device is based on a 45 ps time-to-digital converter (TDC) which is implemented
in a Xilinx Zynq UltraScale+ field programmable gate array (FPGA), a CMOS
Single Photon Avalanche Diode (SPAD) detector, and a low-cost compact laser
source. All these components including the SPAD detector are manufactured using
the latest commercially available technology, which leads to increased
linearity, accuracy, and stability for ToF measurements. This proof-of-concept
system is approximately 10 cm x 8 cm x 5 cm in size, with a high potential for
shrinkage through further system development and component integration. We
demonstrate preliminary results of ToF pulse measurements and report the
engineering details, trade-offs, and challenges of this design. We discuss the
potential for mass adoption of ToF based pulse oximeters in everyday devices
such as smartphones and wearables.
|
The present study proposes a method for the numerical solution of linear
Volterra integral equations (VIEs) of the third kind; a review of the related
literature shows that previously only analytical solution methods had been
discussed. Given that such analytical solutions are not always feasible, a
numerical method for solving these equations is required. Accordingly,
Krall-Laguerre polynomials are utilized for the numerical solution of these
equations. The main purpose of this method is to approximate the unknown
functions through Krall-Laguerre polynomials. Moreover, an error analysis is
performed on the proposed method.
|
Traffic violations and the flexible, changeable nature of pedestrians make
it more difficult to predict pedestrian behavior or intention, which can pose a
potential safety hazard on the road. Pedestrian motion state (such as walking
and standing) directly affects or reflects its intention. In combination with
pedestrian motion state and other influencing factors, pedestrian intention can
be predicted to avoid unnecessary accidents. In this paper, a pedestrian is
treated as a non-rigid object, which can be represented by a set of
two-dimensional key points, and the movement of key point relative to the torso
is introduced as micro motion. Static and dynamic micro motion features, such
as position, angle and distance, and their differential calculations in time
domain, are used to describe its motion pattern. A gated recurrent unit (GRU)
based seq2seq model is used to learn the dependence of motion state transitions
on previous information, and finally the pedestrian motion state is estimated
via a softmax classifier. The proposed method only needs the previous hidden
state of the GRU and the current feature to evaluate the probability of the
current motion state, and it is computationally efficient to deploy on
vehicles. This paper verifies the
proposed algorithm on the JAAD public dataset, and the accuracy is improved by
11.6% compared with the existing method.
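A minimal sketch of the recurrent classifier described above (feature and state dimensions are illustrative, and the micro-motion feature extraction is assumed to happen upstream):

```python
import torch
import torch.nn as nn

# Per-frame micro-motion features (relative keypoint positions, angles,
# distances and their temporal differences) feed a GRU, and a softmax head
# estimates the motion state from the current hidden state.

class MotionStateEstimator(nn.Module):
    def __init__(self, feat_dim=36, hidden_dim=64, n_states=2):  # e.g. walking / standing
        super().__init__()
        self.gru = nn.GRUCell(feat_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_states)

    def step(self, feat, h_prev):
        """One frame: only the previous hidden state and current feature are needed."""
        h = self.gru(feat, h_prev)
        probs = torch.softmax(self.head(h), dim=-1)
        return probs, h

model = MotionStateEstimator()
h = torch.zeros(1, 64)
for t in range(10):                      # stream of per-frame features
    feat = torch.randn(1, 36)
    probs, h = model.step(feat, h)
print(probs)                             # probability of each motion state
```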
|
We study the impact of the inter-level energy constraints imposed by Haldane
Exclusion Statistics on relaxation processes in 1-dimensional systems coupled
to a bosonic bath. By formulating a second-quantized description of the
relevant Fock space, we identify certain universal features of this relaxation
dynamics, and show that it is generically slower than that of spinless
fermions. Our study focuses on the Calogero-Sutherland model, which realizes
Haldane Exclusion Statistics exactly in one dimension; however, our results
apply to any system that has the associated pattern of inter-level occupancy
constraints in Fock space.
|
There are a number of contradictory findings with regard to whether the theory
describing easy-plane quantum antiferromagnets undergoes a second-order phase
transition. The traditional Landau-Ginzburg-Wilson approach suggests a
first-order phase transition, as there are two different competing order
parameters. On the other hand, it is known that the theory has the property of
self-duality which has been connected to the existence of a deconfined quantum
critical point. The latter regime suggests that order parameters are not the
elementary building blocks of the theory, but rather consist of fractionalized
particles that are confined in both phases of the transition and only appear -
deconfine - at the critical point. Nevertheless, numerical Monte Carlo
simulations disagree with the claim of deconfined quantum criticality in the
system, indicating instead a first-order phase transition. Here these
contradictions are resolved by demonstrating via a duality transformation that
a new critical regime exists analogous to the zero temperature limit of a
certain classical statistical mechanics system. Because of this analogy, we dub
this critical regime "frozen". A renormalization group analysis bolsters this
claim, allowing us to go beyond it and align previous numerical predictions of
the first-order phase transition with the deconfined criticality in a
consistent framework.
|
In the ongoing effort to overcome the limitations of the standard von
Neumann architecture, synaptic electronics is gaining a primary role in the
development of in-memory computing. In this field, Ge-based compounds have been
proposed as switching materials for nonvolatile memory devices and for
selectors. By employing the classical molecular dynamics, we study the
structural features of both the liquid state at 1500 K and the amorphous phase
at 300 K of Ge-rich and Se-rich binary chalcogenide GexSe1-x systems in the
range x = 0.4-0.6. The simulations rely on a model of interatomic potentials
where ions interact through steric repulsion, as well as Coulomb and
charge-dipole interactions given by the large electronic polarizability of Se
ions. Our results indicate the formation of temperature-dependent hierarchical
structures with short-range local orders and medium-range structures, which
vary with the Ge content. Our work demonstrates that nanosecond-long
simulations, not accessible via ab initio techniques, are required to obtain a
realistic amorphous phase from the melt. Our classical molecular dynamics
simulations are able to describe the profound structural differences between
the melt and the glassy structures of GeSe chalcogenides. These results open
the way to understanding the interplay between chemical composition, atomic
structure, and electrical properties in switching materials.
|
Medication recommendation is an essential task of AI for healthcare. Existing
works have focused on recommending drug combinations for patients with complex
health conditions solely based on their electronic health records. Thus, they
have the following limitations: (1) some important data such as drug molecule
structures have not been utilized in the recommendation process. (2) drug-drug
interactions (DDI) are modeled implicitly, which can lead to sub-optimal
results. To address these limitations, we propose a DDI-controllable drug
recommendation model named SafeDrug to leverage drugs' molecule structures and
model DDIs explicitly. SafeDrug is equipped with a global message passing
neural network (MPNN) module and a local bipartite learning module to fully
encode the connectivity and functionality of drug molecules. SafeDrug also has
a controllable loss function to control DDI levels in the recommended drug
combinations effectively. On a benchmark dataset, SafeDrug is shown to reduce
DDI by 19.43% and to improve the Jaccard similarity between recommended and
actually prescribed drug combinations by 2.88% over previous approaches.
Moreover, SafeDrug also requires much fewer parameters than previous deep
learning-based approaches, leading to faster training by about 14% and around
2x speed-up in inference.
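A sketch of a DDI-controllable objective in the spirit described above (not the exact SafeDrug loss; the DDI adjacency matrix and weighting are placeholders):

```python
import torch

# A multi-label recommendation loss is combined with a penalty on the expected
# rate of known drug-drug interactions among the recommended drugs.

def ddi_rate(probs, ddi_adj):
    """Expected fraction of recommended drug pairs that are known DDIs."""
    pair_probs = probs.unsqueeze(-1) * probs.unsqueeze(-2)      # (B, D, D)
    interactions = (pair_probs * ddi_adj).sum(dim=(-2, -1))
    total_pairs = pair_probs.sum(dim=(-2, -1)) + 1e-8
    return (interactions / total_pairs).mean()

def safe_loss(logits, targets, ddi_adj, ddi_weight=0.5):
    probs = torch.sigmoid(logits)
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, targets)
    return bce + ddi_weight * ddi_rate(probs, ddi_adj)

D = 131                                        # number of candidate drugs (illustrative)
ddi_adj = (torch.rand(D, D) > 0.95).float()    # symmetric DDI adjacency in practice
logits, targets = torch.randn(4, D), (torch.rand(4, D) > 0.9).float()
print(safe_loss(logits, targets, ddi_adj))
```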
|
The analysis of animal movement has gained attention recently, and new
continuous-time models and statistical methods have been developed. All of them
are based on the assumption that this movement can be recorded over a long
period of time, which is sometimes infeasible, for instance when the battery
life of the GPS is short. We prove that the estimation of the animal's home
range improves if periods when the GPS is on are alternated with periods when
the GPS is turned off. This is illustrated through a simulation study and
real-life data. We also provide estimators of the stationary distribution, level sets
(which provides estimators of the core area) and the drift function.
|
Bug detection and prevention is one of the most important goals of software
quality assurance. Nowadays, many of the major problems faced by developers can
be detected or even fixed fully or partially with automatic tools. However,
recent works have shown that there exists a substantial number of simple yet very
annoying errors in code-bases, which are easy to fix, but hard to detect as
they do not hinder the functionality of the given product in a major way.
Programmers introduce such errors accidentally, mostly due to inattention.
Using the ManySStuBs4J dataset, which contains many simple, stupid bugs, found
in GitHub repositories written in the Java programming language, we
investigated the history of such bugs. We were interested in properties such
as: How long do such bugs stay unnoticed in code bases? Are they typically
fixed by the same developer who introduced them? Are they introduced with the
addition of new code or caused more by careless modification of existing code?
We found that most of these stupid bugs lurk in the code for a long time before
they get removed. We noticed that the developer who made the mistake tends to
find a solution faster; however, less than half of the SStuBs are fixed by the
same person. We also examined PMD's performance when it came to flagging lines
containing SStuBs, and found that, similarly to SpotBugs, it is
life-cycle of such bugs allows us to better understand their nature and adjust
our development processes and quality assurance methods to better support
avoiding them.
|
Semantic image synthesis, translating semantic layouts to photo-realistic
images, is a one-to-many mapping problem. Though impressive progress has
recently been made, diverse semantic synthesis that can efficiently produce
semantic-level multimodal results still remains a challenge. In this paper, we
propose a novel diverse semantic image synthesis framework from the perspective
of semantic class distributions, which naturally supports diverse generation at
semantic or even instance level. We achieve this by modeling class-level
conditional modulation parameters as continuous probability distributions
instead of discrete values, and sampling per-instance modulation parameters
through instance-adaptive stochastic sampling that is consistent across the
network. Moreover, we propose prior noise remapping, through linear
perturbation parameters encoded from paired references, to facilitate
supervised training and exemplar-based instance style control at test time.
Extensive experiments on multiple datasets show that our method can achieve
superior diversity and comparable quality compared to state-of-the-art methods.
Code will be available at \url{https://github.com/tzt101/INADE.git}
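A hedged sketch of modeling class-level modulation parameters as Gaussians and drawing per-instance scale/shift values from them, reusing one noise draw per instance for consistency across layers; shapes and reparameterization details are assumptions, not the paper's exact module.

```python
# Hedged sketch: distribution-based, instance-adaptive modulation sampling.
import torch
import torch.nn as nn

class ClassDistributionModulation(nn.Module):
    def __init__(self, num_classes, num_channels):
        super().__init__()
        # Per-class mean and log-std for the scale (gamma) and shift (beta).
        self.gamma_mu = nn.Parameter(torch.ones(num_classes, num_channels))
        self.gamma_logstd = nn.Parameter(torch.zeros(num_classes, num_channels))
        self.beta_mu = nn.Parameter(torch.zeros(num_classes, num_channels))
        self.beta_logstd = nn.Parameter(torch.zeros(num_classes, num_channels))

    def sample(self, class_id, noise):
        """`noise` is drawn once per instance and reused across layers so the
        sampled style stays consistent throughout the network."""
        gamma = self.gamma_mu[class_id] + noise * self.gamma_logstd[class_id].exp()
        beta = self.beta_mu[class_id] + noise * self.beta_logstd[class_id].exp()
        return gamma, beta

mod = ClassDistributionModulation(num_classes=10, num_channels=4)
instance_noise = torch.randn(4)               # one draw per instance
gamma, beta = mod.sample(class_id=3, noise=instance_noise)
print(gamma.shape, beta.shape)
```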
|
Image compression using colour densities has historically been impractical to
decompress losslessly. We examine the use of conditional generative adversarial
networks to make this transformation more feasible, by learning a
mapping between the images and a loss function to train on. We show that this
method is effective at producing visually lossless generations, indicating that
efficient colour compression is viable.
|
Discovering new materials with ultrahigh thermal conductivity has been a
critical research frontier, driven by many important technological applications
ranging from thermal management to energy science. Here we have
rigorously investigated the fundamental lattice vibrational spectra in ternary
compounds and determined the thermal conductivity using a predictive ab initio
approach. Phonon transport in B-X-C (X = N, P, As) groups is systematically
quantified with different crystal structures and high-order anharmonicity
involving a four-phonon process. Our calculations found an ultrahigh
room-temperature thermal conductivity of up to 2100 W/mK, arising from strong
carbon-carbon bonding and exceeding that of most common materials and the
recently discovered boron arsenide. This study provides fundamental insight
into the atomistic design of thermal conductivity and opens up opportunities
for searching for new materials among more complex compound structures.
DOI: 10.1103/PhysRevB.103.L041203
|
Path planning has long been one of the major research areas in robotics, with
PRM and RRT being two of the most effective classes of path planners. Though
generally very efficient, these sampling-based planners can become
computationally expensive in the important case of "narrow passages". This
paper develops a path planning paradigm specifically formulated for narrow
passage problems. The core is based on planning for rigid-body robots
encapsulated by unions of ellipsoids. The environmental features are enclosed
geometrically using convex differentiable surfaces (e.g., superquadrics). The
main benefit of doing this is that configuration-space obstacles can be
parameterized explicitly in closed form, thereby allowing prior knowledge to be
used to avoid sampling infeasible configurations. Then, by characterizing a
tight volume bound for multiple ellipsoids, robot transitions involving
rotations are guaranteed to be collision-free without traditional collision
detection. Furthermore, by combining this approach with a stochastic sampling
strategy, the proposed planning framework can be extended to higher-dimensional
problems in which the robot has a moving base and articulated appendages.
Benchmark results show that, remarkably, the proposed framework outperforms the
popular sampling-based planners in terms of computational time and success rate
in finding a path through narrow corridors and in higher dimensional
configuration spaces.
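As a concrete example of the convex differentiable surfaces mentioned above, the sketch below evaluates the standard inside-outside function of an axis-aligned superquadric centered at the origin; rotation, translation, and the paper's closed-form configuration-space obstacle parameterization are omitted.

```python
# Hedged sketch: the inside-outside function of a superquadric.
import numpy as np

def superquadric_F(p, a, b, c, eps1, eps2):
    """Returns < 1 if p lies inside the superquadric with semi-axes (a, b, c)
    and shape exponents (eps1, eps2), == 1 on the surface, > 1 outside."""
    x, y, z = np.abs(p)
    return ((x / a) ** (2 / eps2) + (y / b) ** (2 / eps2)) ** (eps2 / eps1) \
           + (z / c) ** (2 / eps1)

print(superquadric_F(np.array([0.2, 0.1, 0.3]), 1.0, 1.0, 1.0, 1.0, 1.0))  # inside
print(superquadric_F(np.array([1.5, 0.0, 0.0]), 1.0, 1.0, 1.0, 1.0, 1.0))  # outside
```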
|
In the class of strictly convex smooth boundaries, each of which does not have
a strip around its boundary foliated by invariant curves, we prove that the
Taylor coefficients of the "normalized" Mather's $\beta$-function are
invariants under $C^\infty$-conjugacies. In contrast, we prove that any two
elliptic billiard maps are $C^0$-conjugate near their respective boundaries,
and $C^\infty$-conjugate in the open cylinder, near the boundary and away from
a plane passing through the center of the underlying ellipse. We also prove
that if the billiard maps corresponding to two ellipses are topologically
conjugate then the two ellipses are similar.
|
According to the Probability Ranking Principle (PRP), ranking documents in
decreasing order of their probability of relevance leads to an optimal document
ranking for ad-hoc retrieval. The PRP holds when two conditions are met: [C1]
the models are well calibrated, and, [C2] the probabilities of relevance are
reported with certainty. We know however that deep neural networks (DNNs) are
often not well calibrated and have several sources of uncertainty, and thus
[C1] and [C2] might not be satisfied by neural rankers. Given the success of
neural Learning to Rank (L2R) approaches, and especially of BERT-based
approaches, we first analyze under which circumstances deterministic neural
rankers, i.e., rankers that output point estimates, are calibrated. Then,
motivated by our
findings we use two techniques to model the uncertainty of neural rankers
leading to the proposed stochastic rankers, which output a predictive
distribution of relevance as opposed to point estimates. Our experimental
results on the ad-hoc retrieval task of conversation response ranking reveal
that (i) BERT-based rankers are not robustly calibrated and that stochastic
BERT-based rankers yield better calibration; and (ii) uncertainty estimation is
beneficial both for risk-aware neural ranking, i.e., taking into account the
uncertainty when ranking documents, and for predicting unanswerable
conversational contexts.
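The abstract does not spell out the two uncertainty-modeling techniques, so the sketch below shows one standard way to obtain a predictive distribution of relevance scores, Monte Carlo dropout over a toy scorer; treat the architecture and the choice of technique as illustrative assumptions.

```python
# Hedged sketch: a stochastic ranker via Monte Carlo dropout.
import torch
import torch.nn as nn

class TinyRanker(nn.Module):
    def __init__(self, dim=32, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Dropout(p_drop), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def mc_relevance(model, x, n_samples=30):
    """Keep dropout active at inference and sample relevance scores."""
    model.train()  # enables dropout
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)   # point estimate and uncertainty

mean, std = mc_relevance(TinyRanker(), torch.randn(4, 32))
print(mean.shape, std.shape)
```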
|
In this paper, we propose a new class of operator factorization methods to
discretize the integral fractional Laplacian $(-\Delta)^\frac{\alpha}{2}$ for
$\alpha \in (0, 2)$. The main advantage of our method is that the numerical
accuracy can easily be increased by using high-degree Lagrange basis functions,
while the scheme structure and computer implementation remain unchanged.
Moreover, our discretization of the fractional Laplacian results in a symmetric
(multilevel) Toeplitz differentiation matrix, which not only saves memory cost
in simulations but also enables efficient computations via the fast Fourier
transform. The performance of our method in both approximating the fractional
Laplacian and solving fractional Poisson problems is examined in detail. It
shows that our method has an optimal accuracy of ${\mathcal O}(h^2)$ for
constant or linear basis functions, and ${\mathcal O}(h^4)$ if quadratic basis
functions are used, with $h$ a small mesh size. Note that this accuracy holds
for any
$\alpha \in (0, 2)$ and can be further increased if higher-degree basis
functions are used. If the solution of fractional Poisson problem satisfies $u
\in C^{m, l}(\bar{\Omega})$ for $m \in {\mathbb N}$ and $0 < l < 1$, then our
method has an accuracy of ${\mathcal O}\big(h^{\min\{m+l,\, 2\}}\big)$ for
constant and linear basis functions, while ${\mathcal O}\big(h^{\min\{m+l,\,
4\}}\big)$ for quadratic basis functions. Additionally, our method can be
readily applied to study generalized fractional Laplacians with a symmetric
kernel function, and numerical study on the tempered fractional Poisson problem
demonstrates its efficiency.
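As a hedged illustration of the FFT-based computations such a Toeplitz structure enables, the sketch below multiplies a symmetric Toeplitz matrix by a vector in O(n log n) via circulant embedding; this is the generic technique, not the authors' specific discretization code.

```python
# Hedged sketch: symmetric Toeplitz matrix-vector product via the FFT.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(first_col, x):
    """Multiply the symmetric Toeplitz matrix defined by its first column with x,
    using circulant embedding of size 2n - 1 and the FFT."""
    n = len(first_col)
    c = np.concatenate([first_col, first_col[-1:0:-1]])   # circulant first column
    v = np.concatenate([x, np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v))        # circular convolution
    return y[:n].real

# check against a dense product
col = np.array([2.0, -1.0, 0.3, 0.1])
x = np.random.rand(4)
assert np.allclose(toeplitz_matvec(col, x), toeplitz(col) @ x)
print("FFT-based Toeplitz product matches the dense product")
```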
|
The eccentricity of a planet's orbit and the inclination of its orbital plane
encode important information about its formation and history. However,
exoplanets detected via direct-imaging are often only observed over a very
small fraction of their period, making it challenging to perform reliable
physical inferences given wide, unconstrained posteriors. The aim of this
project is to investigate biases (deviation of the median and mode of the
posterior from the true values of orbital parameters, and the width and
coverage of their credible intervals) in the estimation of orbital parameters
of directly-imaged exoplanets, particularly their eccentricities, and to define
general guidelines to perform better estimations of uncertainty. For this, we
constructed various orbits and generated mock data for each spanning $\sim 0.5
\%$ of the orbital period. We used the Orbits For The Impatient (OFTI)
algorithm to compute orbit posteriors, and compared those to the true values of
the orbital parameters. We found that the inclination of the orbital plane is
the parameter that most affects our estimations of eccentricity, with orbits
that appear near edge-on producing eccentricity distributions skewed away from
the true values, and often bi-modal. We also identified a degeneracy between
eccentricity and inclination that makes it difficult to distinguish posteriors
of face-on, eccentric orbits and edge-on, circular orbits. For the
exoplanet-imaging community, we propose practical recommendations, guidelines
and warnings relevant to orbit-fitting.
|
Time-resolved XUV-IR photoion mass spectroscopy of naphthalene conducted with
broadband, as well as with wavelength-selected narrowband XUV pulses reveals a
rising probability of fragmentation characterized by a lifetime of $92\pm4$~fs.
This lifetime is independent of the XUV excitation wavelength and is the same
for all low appearance energy fragments recorded in the experiment. Analysis of
the experimental data in conjunction with a statistical multi-state vibronic
model suggests that the experimental signals track vibrational energy
redistribution on the potential energy surface of the ground state cation. In
particular, populations of the out-of-plane ring twist and the out-of-plane
wave bending modes could be responsible for opening new IR absorption channels
leading to enhanced fragmentation.
|
D$_2$ molecules, excited by linearly cross-polarized femtosecond extreme
ultraviolet (XUV) and near-infrared (NIR) light pulses, reveal highly
structured D$^+$ ion fragment momenta and angular distributions that originate
from two different 4-step dissociative ionization pathways after four photon
absorption (1 XUV + 3 NIR). We show that, even for very low dissociation
kinetic energy release $\le$~240~meV, specific electronic excitation pathways
can be identified and isolated in the final ion momentum distributions. With
the aid of {\it ab initio} electronic structure and time-dependent
Schr\"odinger equation calculations, angular momentum, energy, and parity
conservation are used to identify the excited neutral molecular states and
molecular orientations relative to the polarization vectors in these different
photoexcitation and dissociation sequences of the neutral D$_2$ molecule and
its D$_2^+$ cation. In one sequential photodissociation pathway, molecules
aligned along either of the two light polarization vectors are excluded, while
another pathway selects molecules aligned parallel to the light propagation
direction. The evolution of the nuclear wave packet on the intermediate $B$
electronic state of the neutral D$_2$ molecule is also probed in real time.
|
We prove that for every $N\ge 3$, the group $\mathrm{Out}(F_N)$ of outer
automorphisms of a free group of rank $N$ is superrigid from the point of view
of measure equivalence: any countable group that is measure equivalent to
$\mathrm{Out}(F_N)$, is in fact virtually isomorphic to $\mathrm{Out}(F_N)$.
We introduce three new constructions of canonical splittings associated to a
subgroup of $\mathrm{Out}(F_N)$ of independent interest. They encode
respectively the collection of invariant free splittings, invariant cyclic
splittings, and maximal invariant free factor systems. Our proof also relies on
the following improvement of an amenability result by Bestvina and the authors:
given a free factor system $\mathcal{F}$ of $F_N$, the action of
$\mathrm{Out}(F_N,\mathcal{F})$ (the subgroup of $\mathrm{Out}(F_N)$ that
preserves $\mathcal{F}$) on the space of relatively arational trees with
amenable stabilizer is a Borel amenable action.
|
Let $T$ denote a positive operator with spectral radius $1$ on, say, an
$L^p$-space. A classical result in infinite dimensional Perron--Frobenius
theory says that, if $T$ is irreducible and power bounded, then its peripheral
point spectrum is either empty or a subgroup of the unit circle.
In this note we show that the analogous assertion for the entire peripheral
spectrum fails. More precisely, for every finite union $U$ of finite subgroups
of the unit circle we construct an irreducible stochastic operator on $\ell^1$
whose peripheral spectrum equals $U$.
We also give a similar construction for the $C_0$-semigroup case.
|
Sensitivity to blockages is a key challenge for high-frequency (5G millimeter
wave and 6G sub-terahertz) wireless networks. Since these networks
mainly rely on line-of-sight (LOS) links, sudden link blockages highly threaten
the reliability of the networks. Further, when the LOS link is blocked, the
network typically needs to hand off the user to another LOS base station, which
may incur critical time latency, especially if a search over a large codebook
of narrow beams is needed. A promising way to tackle the reliability and
latency challenges lies in enabling proaction in wireless networks. Proaction
allows the network to anticipate blockages, especially dynamic blockages, and
initiate user hand-off beforehand. This paper presents a
complete machine learning framework for enabling proaction in wireless networks
relying on visual data captured, for example, by RGB cameras deployed at the
base stations. In particular, the paper proposes a vision-aided wireless
communication solution that utilizes bimodal machine learning to perform
proactive blockage prediction and user hand-off. The bedrock of this solution
is a deep learning algorithm that learns from visual and wireless data how to
predict incoming blockages. The predictions of this algorithm are used by the
wireless network to proactively initiate hand-off decisions and avoid any
unnecessary latency. The algorithm is developed on a vision-wireless dataset
generated using the ViWi data-generation framework. Experimental results on two
base stations with different cameras indicate that the algorithm is capable of
accurately detecting incoming blockages more than about $90\%$ of the time. Such
blockage prediction ability is directly reflected in the accuracy of proactive
hand-off, which also approaches $87\%$. This highlights a promising direction
for enabling high reliability and low latency in future wireless networks.
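As an illustration of the bimodal learning idea, the sketch below fuses visual features with wireless features in a small classifier that outputs a blockage probability; the feature dimensions, branches, and head are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch: a bimodal (vision + wireless) blockage predictor.
import torch
import torch.nn as nn

class BlockagePredictor(nn.Module):
    def __init__(self, img_feat_dim=512, wireless_dim=64):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Linear(img_feat_dim, 128), nn.ReLU())
        self.rf_branch = nn.Sequential(nn.Linear(wireless_dim, 32), nn.ReLU())
        self.head = nn.Linear(128 + 32, 1)   # probability of an incoming blockage

    def forward(self, img_feat, rf_feat):
        z = torch.cat([self.img_branch(img_feat), self.rf_branch(rf_feat)], dim=-1)
        return torch.sigmoid(self.head(z))

pred = BlockagePredictor()(torch.randn(2, 512), torch.randn(2, 64))
print(pred.shape)  # (2, 1)
```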
|
For a smooth surface $S$, Porta-Sala defined a categorical Hall algebra
generalizing previous work in K-theory of Zhao and Kapranov-Vasserot. We
construct semi-orthogonal decompositions for categorical Hall algebras of
points on $S$. We refine these decompositions in K-theory for a topological
K-theoretic Hall algebra.
|
Despite constant improvements in efficiency, today's data centers and
networks consume enormous amounts of energy and this demand is expected to rise
even further. An important research question is whether and how fog computing
can curb this trend. As real-life deployments of fog infrastructure are still
rare, a significant part of research relies on simulations. However, existing
power models usually only target particular components such as compute nodes or
battery-constrained edge devices.
Combining analytical and discrete-event modeling, we develop a holistic but
granular energy consumption model that can determine the power usage of compute
nodes as well as network traffic and applications over time. Simulations can
incorporate thousands of devices that execute complex application graphs on a
distributed, heterogeneous, and resource-constrained infrastructure. We
evaluated our publicly available prototype LEAF within a smart city traffic
scenario, demonstrating that it enables research on energy-conserving fog
computing architectures and can be used to assess dynamic task placement
strategies and other energy-saving mechanisms.
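To make the notion of a holistic but granular power model concrete, the sketch below combines a linear utilization-based node model with a per-bit link model; this is a generic illustration and not LEAF's actual API.

```python
# Hedged sketch: simple power models for a compute node and a network link.
from dataclasses import dataclass

@dataclass
class NodePowerModel:
    static_watts: float      # power drawn when idle
    max_watts: float         # power drawn at full utilization

    def power(self, utilization: float) -> float:
        """Linear interpolation between idle and peak power."""
        return self.static_watts + utilization * (self.max_watts - self.static_watts)

@dataclass
class LinkPowerModel:
    joules_per_bit: float

    def power(self, bits_per_second: float) -> float:
        return self.joules_per_bit * bits_per_second

node = NodePowerModel(static_watts=30.0, max_watts=150.0)
link = LinkPowerModel(joules_per_bit=1e-9)
print(node.power(0.4) + link.power(50e6))   # total watts at this instant
```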
|
This monograph introduces key concepts and problems in the new research area
of Periodic Geometry and Topology for materials applications. Periodic
structures such as solid crystalline materials or textiles were previously
classified in discrete and coarse ways that depend on manual choices or are
unstable under perturbations. Since crystal structures are determined in a
rigid form, their finest natural equivalence is defined by rigid motion or
isometry, which preserves inter-point distances. Due to atomic vibrations,
isometry classes of periodic point sets form a continuous space whose geometry
and topology were unknown. The key new problem in Periodic Geometry is to
unambiguously parameterize this space of isometry classes by continuous
coordinates that allow a complete reconstruction of any crystal. The major part
of this monograph reviews the recently developed isometry invariants to
resolve the above problem: (1) density functions computed from higher order
Voronoi zones, (2) distance-based invariants that allow ultra-fast
visualizations of huge crystal datasets, and (3) the complete invariant isoset
(a DNA-type code) with a first continuous metric on all periodic crystals. The
main goal of Periodic Topology is to classify textiles up to periodic isotopy,
which is a continuous deformation of a thickened plane without a fixed lattice
basis. This practical problem substantially differs from past research focused
on links in a fixed thickened torus.
|
Aims. The purpose of this paper is to describe a new post-processing
algorithm dedicated to the reconstruction of the spatial distribution of light
received from off-axis sources, in particular from circumstellar disks.
Methods. Built on the recent PACO algorithm dedicated to the detection of
point-like sources, the proposed method is based on the local learning of patch
covariances capturing the spatial fluctuations of the stellar leakages. From
this statistical modeling, we develop a regularized image reconstruction
algorithm (REXPACO) following an inverse problem approach based on a forward
image formation model of the off-axis sources in the ADI sequences.
Results. Injections of fake circumstellar disks in ADI sequences from the
VLT/SPHERE-IRDIS instrument show that both the morphology and the photometry of
the disks are better preserved by REXPACO compared to standard post-processing
methods like cADI. In particular, the modeling of the spatial covariances
proves useful in reducing typical ADI artifacts and in better disentangling
the signal of these sources from the residual stellar contamination. The
application to stars hosting circumstellar disks with various morphologies
confirms the ability of REXPACO to produce images of the light distribution
with reduced artifacts. Finally, we show how REXPACO can be combined with PACO
to disentangle the signal of circumstellar disks from the signal of candidate
point-like sources.
Conclusions. REXPACO is a novel post-processing algorithm producing
numerically deblurred images of the circumstellar environment. It exploits the
spatial covariances of the stellar leakages and of the noise to efficiently
eliminate this nuisance term.
|
Anomalous electric currents along a magnetic field, first predicted to emerge
during large heavy ion collision experiments, were also observed a few years
ago in condensed matter environments, exploiting the fact that charge carriers
in Dirac/Weyl semi-metals exhibit a relativistic-like behavior. The mechanism
through which such currents are generated relies on an imbalance in the
chirality of systems immersed in a magnetic background, leading to the
so-called chiral magnetic effect (CME). While chiral magnetic currents have
been observed in materials in three space dimensions, in this work we propose
that an analog of the chiral magnetic effect can be constructed in two space
dimensions, corresponding to a novel type of intrinsic half-integer Quantum
Hall effect, thereby also offering a topological protection mechanism for the
current. While the 3D chiral anomaly underpins the CME, its 2D cousin emerges
from the 2D parity anomaly; we therefore call it the parity magnetic
effect (PME). It can occur in disturbed honeycomb lattices where both spin
degeneracy and time reversal symmetry are broken. These configurations harbor
two distinct gap-opening mechanisms that, when occurring simultaneously, drive
slightly different gaps in each valley, establishing an analog of the necessary
chiral imbalance. Some examples of promising material setups that fulfill the
prerequisites of our proposal are also listed.
|
This paper presents a thoroughgoing interpretation of a weak relevant logic
built over the Dunn-Belnap four-valued semantics in terms of the communication
of information in a network of sites of knowledge production (laboratories).
The knowledge communicated concerns experimental data and the regularities
tested using it. There have been many nods to interpretations similar to ours -
for example, in Dunn (1976), Belnap (1977). The laboratory interpretation was
outlined in Bilkova et al. (2010).
Our system is built on the Routley--Meyer semantics for relevant logic
equipped with a four-valued valuation of formulas, where labs stand in for
situations, and the four values reflect the complexity of assessing results of
experiments. This semantics avoids using the Routley star, at the cost of
introducing a further relation, required for evaluating falsity assignments of
implication. We can however provide a natural interpretation of two
accessibility relations - confirmation and refutation of hypotheses are two
independent processes in our laboratory setup. This setup motivates various
basic properties of the accessibility relations, as well as a number of other
possible restrictions. This gives us a flexible modular system which can be
adjusted to specific epistemic contexts.
As perfect regularities are rarely, or perhaps never, actually observed, we
add probabilities to the logical framework. As our logical framework is
non-classical, the probability is non-classical as well, satisfying a weaker
version of Kolmogorov axioms (cf. Priest 2006). We show that these
probabilities allow for a relative frequency as well as for a subjective
interpretation (we provide a Dutch book argument). We further show how to
update the probabilities and to distinguish conditional probabilities from the
probability of conditionals.
|
In 1981 Wyman classified the solutions of the Einstein--Klein--Gordon
equations with static spherically symmetric spacetime metric and vanishing
scalar potential. For one of these classes, the scalar field linearly grows
with time. We generalize this symmetry noninheriting solution, perturbatively,
to a rotating one and extend the static solution exactly to arbitrary spacetime
dimensions. Furthermore, we investigate the existence of nonminimally coupled,
time-dependent real scalar fields on top of static black holes, and prove a
no-hair theorem for stealth scalar fields on the Schwarzschild background.
|
In his work on representations of Thompson's group $F$, Vaughan Jones defined
and studied the $3$-colorable subgroup $\mathcal{F}$ of $F$. Later, Ren showed
that it is isomorphic to the Brown-Thompson group $F_4$. In this paper we
continue the study of the $3$-colorable subgroup and prove that the
quasi-regular representation of $F$ associated with the $3$-colorable subgroup
is irreducible. We show moreover that the preimage of $\mathcal{F}$ under a
certain injective endomorphism of $F$ is contained in three (explicit) maximal
subgroups of $F$ of infinite index. These subgroups are different from the
previously known infinite index maximal subgroups of $F$, namely the parabolic
subgroups that fix a point in $(0,1)$, (up to isomorphism) the Jones' oriented
subgroup $\vec{F}$, and the explicit examples found by Golan.
|
Hybrid entangled states prove to be necessary for quantum information
processing within heterogeneous quantum networks. A method with an irreducible
number of consumed resources that reliably provides hybrid CV-DV entanglement
for any input conditions of the experimental setup is proposed. Namely, a family of
CV states is introduced. Each of such CV states is first superimposed on a
beam-splitter with a delocalized photon and then detected by a photo-detector
behind the beam-splitter. Detection of any photon number heralds generation of
a hybrid CV-DV entangled state in the outputs, independent of
transmission/reflection coefficients of the beam-splitter and size of the input
CV state. Nonclassical properties of the generated state are studied and its
entanglement degree in terms of negativity is calculated. There are wide
domains of values of input parameters of the experimental setup that can be
chosen to make the generated state maximally entangled. The proposed method is
also applicable to truncated versions of the input CV states. We also propose a
simple method to produce even/odd CV states.
|
We present optical follow-up observations for candidate clusters in the
Clusters Hiding in Plain Sight (CHiPS) survey, which is designed to find new
galaxy clusters with extreme central galaxies that were misidentified as bright
isolated sources in the ROSAT All-Sky Survey catalog. We identify 11 cluster
candidates around X-ray, radio, and mid-IR bright sources, including six
well-known clusters, two false associations of foreground and background
clusters, and three new candidates which are observed further with Chandra. Of
the three new candidates, we confirm two newly discovered galaxy clusters:
CHIPS1356-3421 and CHIPS1911+4455. Both clusters are luminous enough to have
been detected in the ROSAT All-Sky Survey data were it not for their bright
central cores. CHIPS1911+4455 is similar in many ways to the Phoenix cluster,
but with a highly-disturbed X-ray morphology on large scales. We find the
occurrence rate for clusters that would appear to be X-ray bright point sources
in the ROSAT All-Sky Survey (and any surveys with similar angular resolution)
to be 2+/-1%, and the occurrence rate of clusters with runaway cooling in their
cores to be <1%, consistent with predictions of Chaotic Cold Accretion. With
the number of new groups and clusters predicted to be found with eROSITA, the
population of clusters that appear to be point sources (due to a central QSO or
a dense cool core) could be around 2000. Finally, this survey demonstrates that
the Phoenix cluster is likely the strongest cool core at z<0.7 -- anything more
extreme would have been found in this survey.
|
Pedestrian attribute recognition in surveillance scenarios is still a
challenging task due to the inaccurate localization of specific attributes. In
this paper, we propose a novel view-attribute localization method based on
attention (VALA), which utilizes view information to guide the recognition
process to focus on specific attributes, and an attention mechanism to localize
the areas corresponding to specific attributes. Concretely, view information is
leveraged by the view prediction branch to generate four view weights that
represent the confidences for attributes from different views. View weights are
then delivered back to compose specific view-attributes, which will participate
and supervise deep feature extraction. In order to explore the spatial location
of a view-attribute, regional attention is introduced to aggregate spatial
information and encode inter-channel dependencies of the view feature.
Subsequently, a fine attentive attribute-specific region is localized, and
regional weights for the view-attribute from different spatial locations are
gained by the regional attention. The final view-attribute recognition outcome
is obtained by combining the view weights with the regional weights.
Experiments on three widely used datasets (RAP, RAPv2, and PA-100K) demonstrate the
effectiveness of our approach compared with state-of-the-art methods.
|
Delaunay triangulation is a well-known geometric combinatorial optimization
problem with various applications. Many algorithms can generate Delaunay
triangulation given an input point set, but most are nontrivial algorithms
requiring an understanding of geometry or the performance of additional
geometric operations, such as the edge flip. Deep learning has been used to
solve various combinatorial optimization problems; however, generating Delaunay
triangulation based on deep learning remains a difficult problem, and very
little research has been conducted due to its complexity. In this paper, we propose a
novel deep-learning-based approach for learning Delaunay triangulation using a
new attention mechanism based on self-attention and domain knowledge. The
proposed model is designed such that the model efficiently learns
point-to-point relationships using self-attention in the encoder. In the
decoder, a new attention score function using domain knowledge is proposed to
provide a high penalty when the geometric requirement is not satisfied. The
strength of the proposed attention score function lies in its ability to extend
its application to solving other combinatorial optimization problems involving
geometry. When the proposed neural net model is well trained, it is simple and
efficient because it automatically predicts the Delaunay triangulation for an
input point set without requiring any additional geometric operations. We
conduct experiments to demonstrate the effectiveness of the proposed model and
conclude that it exhibits better performance compared with other
deep-learning-based approaches.
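To illustrate how a domain-knowledge penalty can enter an attention score, the sketch below subtracts a large penalty from the scores of candidates flagged as violating a geometric requirement; the predicate and penalty value are placeholders, not the paper's exact score function.

```python
# Hedged sketch: attention scores penalized by a geometric-constraint mask.
import torch

def penalized_attention_scores(query, keys, violates, penalty=1e4):
    """query: (d,), keys: (n, d), violates: (n,) boolean mask of candidates
    that break the geometric constraint (e.g. an empty-circumcircle test)."""
    scores = keys @ query / keys.shape[-1] ** 0.5
    scores = scores - penalty * violates.float()       # push invalid choices down
    return torch.softmax(scores, dim=-1)

probs = penalized_attention_scores(torch.randn(16), torch.randn(5, 16),
                                   torch.tensor([False, True, False, False, True]))
print(probs)
```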
|
This paper gives necessary and sufficient conditions for the Tanner graph of
a quasi-cyclic (QC) low-density parity-check (LDPC) code based on the all-one
protograph to have girth 6, 8, 10, and 12, respectively, in the case of
parity-check matrices with column weight 4. These results are a natural
extension of the girth results for the already-studied cases of column weight 2
and 3, and they are based on the connection between the girth of a Tanner graph
given by a parity-check matrix and the properties of powers of the product
between the matrix and its transpose. The girth conditions can be easily
incorporated into fast algorithms that construct codes of desired girth between
6 and 12; our own algorithms are presented for each girth, together with
constructions obtained from them and corresponding computer simulations. More
importantly, this paper emphasizes how the girth conditions of the Tanner graph
corresponding to a parity-check matrix composed of circulants relate to the
matrix obtained by adding (over the integers) the circulant columns of the
parity-check matrix. In particular, we show that imposing girth conditions on a
parity-check matrix is equivalent to imposing conditions on a square circulant
submatrix of size 4 obtained from it.
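A hedged sketch of the simplest instance of the matrix-product viewpoint mentioned above: the Tanner graph has no 4-cycles (girth at least 6) exactly when the off-diagonal entries of the integer product H H^T are at most 1. The paper's conditions for girths 8 through 12 are more involved; this only illustrates the idea.

```python
# Hedged sketch: checking a parity-check matrix for 4-cycles via H H^T.
import numpy as np

def has_girth_at_least_6(H):
    """No 4-cycle iff every pair of rows of H shares at most one column where
    both have a 1, i.e. the off-diagonal entries of H H^T are <= 1."""
    G = H.astype(int) @ H.astype(int).T
    off_diag = G - np.diag(np.diag(G))
    return bool((off_diag <= 1).all())

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(has_girth_at_least_6(H))  # True for this small example
```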
|
The online technical Q&A site Stack Overflow (SO) is popular among developers
to support their coding and diverse development needs. To address shortcomings
in official API documentation resources, several studies have thus focused on
augmenting official API documentation with insights (e.g., code examples) from
SO. These techniques propose to add code examples/insights about APIs to the
official documentation. Reviews are opinionated sentences with
positive/negative sentiments. However, we are aware of no previous research
that attempts to automatically produce API documentation from SO by considering
both API code examples and reviews. In this paper, we present two novel
algorithms that can be used to automatically produce API documentation from SO
by combining code examples and reviews towards those examples. The first
algorithm is called statistical documentation, which shows the distribution of
positivity and negativity around the code examples of an API using different
metrics (e.g., star ratings). The second algorithm is called concept-based
documentation, which clusters similar and conceptually relevant usage
scenarios. An API usage scenario contains a code example, a textual description
of the underlying task addressed by the code example, and the reviews (i.e.,
opinions with positive and negative sentiments) from other developers towards
the code example. We deployed the algorithms in Opiner, a web-based platform to
aggregate information about APIs from online forums. We evaluated the
algorithms by mining all Java JSON-based posts in SO and by conducting three
user studies based on produced documentation from the posts.
|
Constraint programming (CP) is a paradigm used to model and solve constraint
satisfaction and combinatorial optimization problems. In CP, problems are
modeled with constraints that describe acceptable solutions and solved with
backtracking tree search augmented with logical inference. In this paper, we
show how quantum algorithms can accelerate CP, at both the levels of inference
and search. Leveraging existing quantum algorithms, we introduce a
quantum-accelerated filtering algorithm for the $\texttt{alldifferent}$ global
constraint and discuss its applicability to a broader family of global
constraints with similar structure. We propose frameworks for the integration
of quantum filtering algorithms within both classical and quantum backtracking
search schemes, including a novel hybrid classical-quantum backtracking search
method. This work suggests that CP is a promising candidate application for
early fault-tolerant quantum computers and beyond.
|
In this paper, we discuss interesting potential implications for the
supersymmetric (SUSY) universe in light of cosmological problems concerning (1)
the number of satellite galaxies of the Milky Way (the missing satellite
problem) and (2) the value of the matter density fluctuation at the scale around
8$h^{-1}$Mpc (the $S_{8}$ tension). The implications are extracted by assuming
that a gravitino of a particular mass can help alleviate the cosmological
tension. We consider two vastly separated gravitino mass regimes, namely
$m_{3/2}\simeq100{\rm eV}$ and $m_{3/2}\simeq100{\rm GeV}$. We discuss
non-trivial features of each supersymmetric universe associated with a specific
gravitino mass by projecting potential resolutions of the cosmological problems
onto each of the associated SUSY models.
|
Special point defects in semiconductors have been envisioned as suitable
components for quantum-information technology. The identification of new deep
centers in silicon that can be easily activated and controlled is a main target
of the research in the field. Vacancy-related complexes can provide deep
electronic levels, but they are hard to control spatially. In the spirit of
investigating solid-state devices with intentional vacancy-related defects at
controlled positions, here we report on the functionalization of silicon
vacancies by implanting Ge atoms through single-ion implantation, producing
Ge-vacancy (GeV) complexes. We investigate the quantum transport through an
array of GeV complexes in a silicon-based transistor. By exploiting a model
based on an extended Hubbard Hamiltonian derived from ab-initio results we find
anomalous activation energy values of the thermally activated conductance of
both quasi-localized and delocalized many-body states, compared to conventional
dopants. We identify such states, forming the upper Hubbard band, as
responsible for the experimental sub-threshold transport across the transistor.
The combination of our model with the single-ion implantation method enables
future research for the engineering of GeV complexes towards the creation of
spatially controllable individual defects in silicon for applications in
quantum information technologies.
|
From social interactions to the human brain, higher-order networks are key to
describe the underlying network geometry and topology of many complex systems.
While it is well known that network structure strongly affects its function,
the role that network topology and geometry play in the emerging dynamical
properties of higher-order networks is yet to be clarified. In this
perspective, the spectral dimension plays a key role since it determines the
effective dimension for diffusion processes on a network. Despite its
relevance, a theoretical understanding of which mechanisms lead to a finite
spectral dimension, and how this can be controlled, still represents a
challenge and is the object of intense research. Here we introduce two
non-equilibrium models of hyperbolic higher-order networks and we characterize
their network topology and geometry by investigating the intertwined appearance
of small-world behavior, $\delta$-hyperbolicity and community structure. We
show that different topological moves determining the non-equilibrium growth of
the higher-order hyperbolic network models induce tunable values of the
spectral dimension, showing a rich phenomenology which is not displayed in
random graph ensembles. In particular, we observe that, if the topological
moves used to construct the higher-order network increase the area$/$volume
ratio, the spectral dimension continuously decreases, while the opposite effect
is observed if the topological moves decrease the area$/$volume ratio. Our work
reveals a new link between the geometry of a network and its diffusion
properties, contributing to a better understanding of the complex interplay
between network structure and dynamics.
|
Variable active galactic nuclei showing periodic light curves have been
proposed as massive black hole binary (MBHB) candidates. In such scenarios the
periodicity can be due to relativistic Doppler-boosting of the emitted light.
This hypothesis can be tested through the timing of scattered polarized light.
Following the results of polarization studies in type I nuclei and of dynamical
studies of MBHBs with circumbinary discs, we assume a coplanar equatorial
scattering ring, whose elements contribute differently to the total polarized
flux, due to different scattering angles, levels of Doppler boost, and
line-of-sight time delays. We find that in the presence of a MBHB, both the
degree of polarization and the polarization angle have periodic modulations.
The minimum of the polarization degree approximately coincides with the peak of
the light curve, regardless of the scattering ring size. The polarization angle
oscillates around the semi-minor axis of the projected MBHB orbital ellipse,
with a frequency equal either to the binary's orbital frequency (for large
scattering screen radii), or twice this value (for smaller scattering
structures). These distinctive features can be used to probe the nature of
periodic MBHB candidates and to compile catalogs of the most promising sub-pc
MBHBs. The identification of such polarization features in gravitational-wave
detected MBHBs would enormously increase the amount of physical information
about the sources, allowing the measurement of the individual masses of the
binary components, and the orientation of the line of nodes on the sky, even
for monochromatic gravitational wave signals.
|
This article is part of a comprehensive research project on liquidity risk in
asset management, which can be divided into three dimensions. The first
dimension covers liability liquidity risk (or funding liquidity) modeling, the
second dimension focuses on asset liquidity risk (or market liquidity)
modeling, and the third dimension considers asset-liability liquidity risk
management (or asset-liability matching). The purpose of this research is to
propose a methodological and practical framework in order to perform liquidity
stress testing programs, which comply with regulatory guidelines (ESMA, 2019)
and are useful for fund managers. The review of the academic literature and
professional research studies shows that there is a lack of standardized and
analytical models. The aim of this research project is then to fill this gap,
with the goal of developing mathematical and statistical approaches and
providing appropriate answers.
In this first part that focuses on liability liquidity risk modeling, we
propose several statistical models for estimating redemption shocks. The
historical approach must be complemented by an analytical approach based on
zero-inflated models if we want to understand the true parameters that
influence the redemption shocks. Moreover, we must also distinguish aggregate
population models and individual-based models if we want to develop behavioral
approaches. Once these different statistical models are calibrated, the second
big issue is the risk measure to assess normal and stressed redemption shocks.
Finally, the last issue is to develop a factor model that can translate stress
scenarios on market risk factors into stress scenarios on fund liabilities.
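As a small illustration of the zero-inflated modeling advocated here, the sketch below fits a zero-inflated Poisson to a synthetic redemption-count series with statsmodels; the intercept-only specification and the data are placeholders, far simpler than the models developed in the paper.

```python
# Hedged sketch: an intercept-only zero-inflated Poisson fit of redemption counts.
import numpy as np
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(1)
n = 500
is_zero = rng.random(n) < 0.6                    # many periods with no redemptions
counts = np.where(is_zero, 0, rng.poisson(3.0, size=n))

exog = np.ones((n, 1))                           # intercept-only mean model
model = ZeroInflatedPoisson(counts, exog, exog_infl=np.ones((n, 1)))
result = model.fit(disp=False)
print(result.params)                             # inflation and Poisson parameters
```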
|
Van der Waals heterostructures obtained by artificially stacking
two-dimensional crystals represent the frontier of material engineering,
demonstrating properties superior to those of the starting materials. Fine
control of the interlayer twist angle has opened new possibilities for
tailoring the optoelectronic properties of these heterostructures. Twisted
bilayer graphene with a strong interlayer coupling is a prototype of twisted
heterostructure inheriting the intriguing electronic properties of graphene.
Understanding the effects of the twist angle on its out-of-equilibrium optical
properties is crucial for devising optoelectronic applications. With this aim,
we here combine excitation-resolved hot photoluminescence with femtosecond
transient absorption microscopy. The hot charge carrier distribution induced by
photo-excitation results in peaked absorption bleaching and photo-induced
absorption bands, both with pronounced twist angle dependence. Theoretical
simulations of the electronic band structure and of the joint density of states
enable us to assign these bands to the blocking of interband transitions at the
van Hove singularities and to photo-activated intersubband transitions. The
tens of picoseconds relaxation dynamics of the observed bands is attributed to
the angle-dependence of electron and phonon heat capacities of twisted bilayer
graphene.
|
The paper investigates a discrete-time Binomial risk model with different
types of policies, in which shock events may influence some of the claim sizes.
It is shown that this model can be considered a particular case of the
classical compound Binomial model. Since we work with parallel Binomial
counting processes over infinite time, if we treated them as independent, the
probability that they have simultaneous jumps at least once would be equal to
one. We overcome this problem by using a thinning operation instead of
convolution. The bivariate claim counting processes are expressed in two
different ways. The characteristics of the total claim amount processes are
derived. The risk reserve process and the probabilities of ruin are discussed.
The deficit at ruin is thoroughly investigated when the initial capital is
zero. Its mean, probability mass function and probability generating function
are obtained. We show that although the probability generating function of the
global maximum of the random walk is uniquely determined via its probability
mass function and vice versa, any compound geometric distribution with
non-negative summands has uncountably many stochastically equivalent compound
geometric representations. The survival probability in much more general
settings than those discussed here, for example in the Anderson risk model, has
uncountably many Beekman convolution series representations.
|
Anticipating the number of new suspected or confirmed cases of novel
coronavirus disease 2019 (COVID-19) is critical for the prevention and control
of the COVID-19 outbreak. Data on new suspected COVID-19 cases were gathered
from 20 January 2020 to 21 July 2020. We filtered out the countries whose case
curves are converging and used those for training the network. We utilized
SARIMAX and linear regression models to anticipate new suspected COVID-19 cases
for the countries that have not converged yet, and we predict the curves of
non-converged countries with the help of the proposed Statistical SARIMAX model
(SSM). We present new data-analysis-based forecast results that can assist
governments in planning their future activities and help clinical services be
better prepared for what is to come. Our framework can forecast peak corona
cases with an R-squared value of 0.986 using linear regression, and the decline
of this pandemic at various levels for countries like India, the US, and
Brazil. We found that considering more countries for training degrades the
prediction process, as constraints vary from nation to nation. Thus, we expect
that the outcomes reported in this work will help individuals to better
understand the possibilities of this pandemic.
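For concreteness, the sketch below fits a SARIMAX model to a synthetic cumulative case series with statsmodels and forecasts two weeks ahead; the order and the data are toy placeholders rather than the calibrated SSM configuration.

```python
# Hedged sketch: fitting SARIMAX to a synthetic case series and forecasting.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
cumulative_cases = np.cumsum(rng.poisson(50, size=120)).astype(float)  # synthetic data

model = SARIMAX(cumulative_cases, order=(2, 1, 1))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=14)       # two-week-ahead prediction
print(forecast[:3])
```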
|
We study how conserved quantities such as angular momentum and center of mass
evolve with respect to the retarded time at null infinity, which is described
in terms of a Bondi-Sachs coordinate system. These evolution formulae
complement the classical Bondi mass loss formula for gravitational radiation.
They are further expressed in terms of the potentials of the shear and news
tensors. The consequences that follow from these formulae are (1)
Supertranslation invariance of the fluxes of the CWY conserved quantities. (2)
A conservation law of angular momentum \`a la Christodoulou. (3) A duality
paradigm for null infinity. In particular, the supertranslation invariance
distinguishes the CWY angular momentum and center of mass from the classical
definitions.
|
Distributed denial of service (DDoS) attacks are detrimental to businesses
and individuals, as people rely heavily on the Internet. Due to the remarkable
profits involved, attackers favor DDoS as a cyber weapon against victims. Even
worse, edge servers are more vulnerable. Current solutions give inadequate
consideration to the expense incurred by attackers and to inter-defender
collaboration. Hence, we revisit the DDoS attack and defense, clarifying the
advantages and disadvantages of both parties. We further propose a joint
defense framework to defeat attackers by incurring a significant increment of
required bots and enlarging attack expenses. The quantitative evaluation and
experimental assessment showcase that such expense can surge up to thousands of
times. The skyrocketing expense leads to heavy losses for the attacker, which
prevents further attacks.
|
Nondeterministic automata may be viewed as succinct programs implementing
deterministic automata, i.e. complete specifications. Converting a given
deterministic automaton into a small nondeterministic one is known to be
computationally very hard; in fact, the ensuing decision problem is
PSPACE-complete. This paper stands in stark contrast to the status quo. We
restrict attention to subatomic nondeterministic automata, whose individual
states accept unions of syntactic congruence classes. They are general enough
to cover almost all structural results concerning nondeterministic
state-minimality. We prove that converting a monoid recognizing a regular
language into a small subatomic acceptor corresponds to an NP-complete problem.
The NP certificates are solutions of simple equations involving relations over
the syntactic monoid. We also consider the subclass of atomic nondeterministic
automata introduced by Brzozowski and Tamm. Given a deterministic automaton and
another one for the reversed language, computing small atomic acceptors is
shown to be NP-complete with analogous certificates. Our complexity results
emerge from an algebraic characterization of (sub)atomic acceptors in terms of
deterministic automata with semilattice structure, combined with an equivalence
of categories leading to succinct representations.
|
In this paper we consider high-dimensional models based on dependent
observations defined through autoregressive processes. For such models we
develop an adaptive efficient estimation method via robust sequential model
selection procedures. To this end, firstly, using the Van Trees inequality, we
obtain a sharp lower bound for the robust risks in an explicit form given by the
famous Pinsker constant. It should be noted that for such models this constant
is calculated for the first time. Then, using the weighted least squares method
and sharp non-asymptotic oracle inequalities we provide the efficiency property
in the minimax sense for the proposed estimation procedure, i.e. we establish,
that the upper bound for its risk coincides with the obtained lower bound. It
should be emphasized that this property is obtained without using sparse
conditions and in the adaptive setting when the parameter dimension and model
regularity are unknown.
|
Despite recent advances in natural language generation, it remains
challenging to control attributes of generated text. We propose DExperts:
Decoding-time Experts, a decoding-time method for controlled text generation
that combines a pretrained language model with "expert" LMs and/or
"anti-expert" LMs in a product of experts. Intuitively, under the ensemble,
tokens only get high probability if they are considered likely by the experts,
and unlikely by the anti-experts. We apply DExperts to language detoxification
and sentiment-controlled generation, where we outperform existing controllable
generation methods on both automatic and human evaluations. Moreover, because
DExperts operates only on the output of the pretrained LM, it is effective with
(anti-)experts of smaller size, including when operating on GPT-3. Our work
highlights the promise of tuning small LMs on text with (un)desirable
attributes for efficient decoding-time steering.
|
We revisit the problem of permuting an array of length $n$ according to a
given permutation in place, that is, using only a small number of bits of extra
storage. Fich, Munro and Poblete [FOCS 1990, SICOMP 1995] obtained an elegant
$\mathcal{O}(n\log n)$-time algorithm using only $\mathcal{O}(\log^{2}n)$ bits
of extra space for this basic problem by designing a procedure that scans the
permutation and outputs exactly one element from each of its cycles. However,
in the strict sense, in place should be understood as using only an
asymptotically optimal $\mathcal{O}(\log n)$ bits of extra space, or storing a
constant number of indices. The problem of permuting in this version is, in
fact, a well-known interview question, with the expected solution being a
quadratic-time algorithm. Surprisingly, no faster algorithm seems to be known
in the literature.
Our first contribution is a strictly in-place generalisation of the method of
Fich et al. that works in $\mathcal{O}_{\varepsilon}(n^{1+\varepsilon})$ time,
for any $\varepsilon > 0$. Then, we build on this generalisation to obtain a
strictly in-place algorithm for inverting a given permutation on $n$ elements
working in the same complexity. This is a significant improvement on a recent
result of Gu\'spiel [arXiv 2019], who designed an $\mathcal{O}(n^{1.5})$-time
algorithm.
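As a concrete baseline, here is a hedged sketch of the quadratic-time, constant-extra-space cycle-leader routine that the text refers to as the expected interview solution (gather convention: afterwards a[i] holds the old a[pi[i]]); this is the folklore algorithm, not the paper's new method.

```python
# Hedged sketch: in-place permutation via cycle leaders (O(1) extra words).
def permute_in_place(a, pi):
    """Rearrange a so that afterwards a[i] equals the old a[pi[i]]; each cycle
    is processed only from its smallest index."""
    n = len(a)
    for start in range(n):
        # Check whether `start` is the smallest index on its cycle.
        j = pi[start]
        while j > start:
            j = pi[j]
        if j < start:
            continue          # cycle already handled from a smaller index
        # Rotate the values along the cycle starting at `start`.
        tmp = a[start]
        j = start
        while True:
            k = pi[j]
            if k == start:
                a[j] = tmp
                break
            a[j] = a[k]
            j = k

a = ['a', 'b', 'c', 'd']
permute_in_place(a, [2, 0, 3, 1])
print(a)  # ['c', 'a', 'd', 'b']
```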
|
We address a blind source separation (BSS) problem in a noisy reverberant
environment in which the number of microphones $M$ is greater than the number
of sources of interest, and the other noise components can be approximated as
stationary and Gaussian distributed. Conventional BSS algorithms for the
optimization of a multi-input multi-output convolutional beamformer have
suffered from a huge computational cost when $M$ is large. We here propose a
computationally efficient method that integrates a weighted prediction error
(WPE) dereverberation method and a fast BSS method called independent vector
extraction (IVE), which has been developed for less reverberant environments.
We show that, given the power spectrum for each source, the optimization
problem of the new method can be reduced to that of IVE by exploiting the
stationary condition, which makes the optimization easy to handle and
computationally efficient. An experiment of speech signal separation shows
that, compared to a conventional method that integrates WPE and independent
vector analysis, our proposed method achieves much faster convergence while
maintaining its separation performance.
|
Obtaining samples from the posterior distribution of inverse problems with
expensive forward operators is challenging especially when the unknowns involve
the strongly heterogeneous Earth. To meet these challenges, we propose a
preconditioning scheme involving a conditional normalizing flow (NF) capable of
sampling from a low-fidelity posterior distribution directly. This conditional
NF is used to speed up the training of the high-fidelity objective involving
minimization of the Kullback-Leibler divergence between the predicted and the
desired high-fidelity posterior density for indirect measurements at hand. To
minimize costs associated with the forward operator, we initialize the
high-fidelity NF with the weights of the pretrained low-fidelity NF, which is
trained beforehand on available model and data pairs. Our numerical
experiments, including a 2D toy and a seismic compressed sensing example,
demonstrate that, thanks to the preconditioning, considerable speed-ups are
achievable compared to training NFs from scratch.
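A minimal sketch of the warm-start idea, assuming the low- and high-fidelity networks share the same architecture so that the pretrained weights can be copied directly; plain MLPs stand in for the conditional normalizing flows.

```python
# Hedged sketch: initializing the high-fidelity network from a pretrained
# low-fidelity one before fine-tuning on the expensive objective.
import copy
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))

low_fidelity = make_net()
# ... pretrain `low_fidelity` on cheap model/data pairs here ...

high_fidelity = make_net()
high_fidelity.load_state_dict(copy.deepcopy(low_fidelity.state_dict()))  # warm start

# fine-tune `high_fidelity` on the high-fidelity objective
optimizer = torch.optim.Adam(high_fidelity.parameters(), lr=1e-4)
print(sum(p.numel() for p in high_fidelity.parameters()))
```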
|
Rock-salt lead selenide nanocrystals can be used as building blocks for large
scale square superlattices via two-dimensional assembly of nanocrystals at a
liquid-air interface followed by oriented attachment. Here we report
measurements of the local density of states of an atomically coherent
superlattice with square geometry made from PbSe nanocrystals. Controlled
annealing of the sample permits imaging of a clean structure and reproducible
probing of the band gap and of the valence hole and conduction electron
states. The measured band gap and peak positions are compared to the results of
optical spectroscopy and atomistic tight-binding calculations of the square
superlattice band structure. In spite of the crystalline connections between
nanocrystals that induce significant electronic couplings, the electronic
structure of the superlattices remains very strongly influenced by the effects
of disorder and variability.
|
We evaluated the generalization capability of deep neural networks (DNNs),
trained to classify chest X-rays as COVID-19, normal or pneumonia, using a
relatively small and mixed dataset. We proposed a DNN to perform lung
segmentation and classification, stacking a segmentation module (U-Net), an
original intermediate module and a classification module (DenseNet201). To
evaluate generalization, we tested the DNN with an external dataset (from
distinct localities) and used Bayesian inference to estimate probability
distributions of performance metrics. Our DNN achieved 0.917 AUC on the
external test dataset, and a DenseNet without segmentation, 0.906. Bayesian
inference indicated mean accuracy of 76.1% and [0.695, 0.826] 95% HDI (high
density interval, which concentrates 95% of the metric's probability mass) with
segmentation and, without segmentation, 71.7% and [0.646, 0.786]. We proposed a
novel DNN evaluation technique, using Layer-wise Relevance Propagation (LRP)
and Brixia scores. LRP heatmaps indicated that areas where radiologists found
strong COVID-19 symptoms and attributed high Brixia scores are the most
important for the stacked DNN classification. External validation showed
smaller accuracies than internal, indicating difficulty in generalization,
which segmentation improves. Performance in the external dataset and LRP
analysis suggest that DNNs can be trained in small and mixed datasets and
detect COVID-19.
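A hedged sketch of the stacking idea, where a segmentation module produces a soft lung mask that gates the image before a DenseNet201 classifier; a one-layer stand-in replaces the U-Net and the original intermediate module, so this only illustrates the wiring, not the paper's exact model.

```python
# Hedged sketch: stacking a segmenter and a classifier via soft masking.
import torch
import torch.nn as nn
from torchvision.models import densenet201

class StackedClassifier(nn.Module):
    def __init__(self, seg_net):
        super().__init__()
        self.seg_net = seg_net                        # outputs a (B, 1, H, W) map
        self.classifier = densenet201(num_classes=3)  # COVID-19 / normal / pneumonia

    def forward(self, x):
        mask = torch.sigmoid(self.seg_net(x))         # soft lung mask
        return self.classifier(x * mask)              # classify the masked image

toy_segmenter = nn.Conv2d(3, 1, kernel_size=1)        # stand-in for a U-Net
model = StackedClassifier(toy_segmenter)
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # (1, 3)
```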
|