As communities begin to seek human presence in alternative ways, there has
been tremendous research and development in advancing telepresence robots.
People tend to feel closer and more comfortable with telepresence robots when
they sense a human presence in them. In general, many people perceive a sense
of agency from a robot's face, but even telepresence robots without arm and
body motions can convey a sense of human presence. It is therefore important
to identify how a telepresence robot's inclusion of a human face and slight
face and arm motions affects people's sense of presence and agency. We carried
out a web-based experiment to determine which prototype configuration results
in the most comfortable human-robot interaction. The experiment featured
videos of a telepresence robot (n = 128; 2 x 2 between-participants design;
robot face factor: video-conference vs. robot-like face; arm motion factor:
moving vs. static) to investigate which factors significantly affect perceived
human presence and agency. We used two telepresence robots: an affordable
robot platform and a version modified to enhance human interaction. The
findings suggest that participants perceived an agency closer to
human-likeness when the robot's face was replaced with a human's face and the
robot remained static, whereas the robot's motion invoked a feeling of human
presence whether the face was human or robot-like.
|
Dense depth map capture is challenging in existing active sparse illumination
based depth acquisition techniques, such as LiDAR. Various techniques have been
proposed to estimate a dense depth map based on fusion of the sparse depth map
measurement with the RGB image. Recent advances in hardware enable adaptive
depth measurements resulting in further improvement of the dense depth map
estimation. In this paper, we study the problem of estimating dense depth from
adaptively sampled sparse depth. The adaptive sparse depth sampling network is jointly trained
with a fusion network of an RGB image and sparse depth, to generate optimal
adaptive sampling masks. We show that such adaptive sampling masks can
generalize well to many RGB and sparse depth fusion algorithms under a variety
of sampling rates (as low as $0.0625\%$). The proposed adaptive sampling method
is fully differentiable and can flexibly be trained end-to-end with upstream
perception algorithms.
|
For every set of parabolic weights, we construct a Harder-Narasimhan
stratification for the moduli stack of parabolic vector bundles on a curve. It
is based on the notion of parabolic slope, introduced by Mehta and Seshadri. We
also prove that the stratification is schematic, that each stratum is complete,
and establish an analogue of Behrend's conjecture for parabolic vector bundles.
A comparison with recent $\Theta$-stratification approaches is discussed.
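For context, the parabolic slope underlying the stratification can be recalled (a standard formulation after Mehta and Seshadri; the notation below is ours, with weights $\alpha_i(p)$ and multiplicities $m_i(p)$ of the flag at each parabolic point $p$):

```latex
\[
  \operatorname{par\,deg}(E_{*})
  \;=\; \deg(E) \;+\; \sum_{p}\sum_{i} m_{i}(p)\,\alpha_{i}(p),
  \qquad
  \mu_{\mathrm{par}}(E_{*})
  \;=\; \frac{\operatorname{par\,deg}(E_{*})}{\operatorname{rk}(E)} .
\]
```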
|
Counterfactual explanations (CEs) are a practical tool for demonstrating why
machine learning classifiers make particular decisions. For CEs to be useful,
it is important that they are easy for users to interpret. Existing methods for
generating interpretable CEs rely on auxiliary generative models, which may not
be suitable for complex datasets, and incur engineering overhead. We introduce
a simple and fast method for generating interpretable CEs in a white-box
setting without an auxiliary model, by using the predictive uncertainty of the
classifier. Our experiments show that our proposed algorithm generates more
interpretable CEs, according to IM1 scores, than existing methods.
Additionally, our approach allows us to estimate the uncertainty of a CE, which
may be important in safety-critical applications, such as those in the medical
domain.
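The idea of steering an input toward a low-uncertainty counterfactual can be sketched with a hypothetical 2-D logistic classifier; the weights, step size and update rule below are our illustrative choices, not the paper's algorithm:

```python
import math

# Hypothetical 2-D logistic classifier p(y=1|x) = sigmoid(w.x + b);
# weights are illustrative placeholders.
W, B = [2.0, -1.0], 0.0

def predict(x):
    z = W[0] * x[0] + W[1] * x[1] + B
    return 1.0 / (1.0 + math.exp(-z))

def counterfactual(x, lr=0.1, steps=200):
    """Gradient ascent on log p(y=1|x): each step moves x toward the
    target class, so the classifier ends up confident (i.e. has low
    predictive uncertainty) about the counterfactual."""
    x = list(x)
    for _ in range(steps):
        p = predict(x)
        for i in range(len(x)):
            x[i] += lr * (1.0 - p) * W[i]   # d/dx_i log p(y=1|x)
    return x

x0 = [-1.0, 0.5]            # confidently classified as y = 0
x_cf = counterfactual(x0)   # now confidently classified as y = 1
```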
|
This paper presents DeepI2P: a novel approach for cross-modality registration
between an image and a point cloud. Given an image (e.g., from an RGB camera)
and a general point cloud (e.g., from a 3D LiDAR scanner) captured at
different locations in the same scene, our method estimates the relative rigid
transformation between the coordinate frames of the camera and the LiDAR. Learning
common feature descriptors to establish correspondences for the registration is
inherently challenging due to the lack of appearance and geometric correlations
across the two modalities. We circumvent the difficulty by converting the
registration problem into a classification and inverse camera projection
optimization problem. A classification neural network is designed to label
whether the projection of each point in the point cloud is within or beyond the
camera frustum. These labeled points are subsequently passed into a novel
inverse camera projection solver to estimate the relative pose. Extensive
experimental results on the Oxford RobotCar and KITTI datasets demonstrate the
feasibility of our approach. Our source code is available at
https://github.com/lijx10/DeepI2P
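The frustum-labelling step can be sketched with a pinhole projection check; the intrinsics below are illustrative placeholders (in DeepI2P a network predicts this label from learned features, while a geometric test like this one supplies the training labels):

```python
# Illustrative pinhole intrinsics (focal lengths, principal point, size).
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0
IMG_W, IMG_H = 640, 480

def in_frustum(p):
    """p = (x, y, z) in the camera frame; True iff the point lies in
    front of the camera and projects inside the image bounds."""
    x, y, z = p
    if z <= 0:                        # behind the camera
        return False
    u = FX * x / z + CX               # pinhole projection
    v = FY * y / z + CY
    return 0 <= u < IMG_W and 0 <= v < IMG_H

assert in_frustum((0.0, 0.0, 5.0))       # straight ahead
assert not in_frustum((0.0, 0.0, -5.0))  # behind the camera
assert not in_frustum((10.0, 0.0, 1.0))  # far off to the side
```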
|
Nanoscale layered ferromagnets have demonstrated fascinating two-dimensional
magnetism down to atomic layers, providing a peculiar playground of spin orders
for investigating fundamental physics and spintronic applications. However, a
strategy for growing films with designed magnetic properties has not yet been
well established. Herein, we present a versatile method to control the Curie
temperature (T_{C}) and magnetic anisotropy during the growth of ultrathin
Cr_{2}Te_{3} films. We demonstrate an increase of T_{C} from 165 K to 310 K,
in sync with the magnetic anisotropy switching from an out-of-plane
orientation to an in-plane one, by controlling the Te source flux during film
growth, leading to different c-lattice parameters while preserving the
stoichiometries and thicknesses of the films. We attribute this modulation of
magnetic anisotropy to the switching of the orbital magnetic moment, using
X-ray magnetic circular dichroism analysis. We also inferred that different
c-lattice constants might be responsible for the magnetic anisotropy change,
supported by theoretical calculations. These findings emphasize the potential
of ultrathin Cr_{2}Te_{3} films as candidates for developing room-temperature
spintronic applications, and similar growth strategies could be applicable to
the fabrication of other nanoscale layered magnetic compounds.
|
A bargaining game is investigated for cooperative energy management in
microgrids. This game incorporates a fully distributed and realistic
cooperative power scheduling algorithm (CoDES) as well as a distributed Nash
Bargaining Solution (NBS)-based method of allocating the overall power bill
resulting from CoDES. A novel weather-based stochastic renewable generation
(RG) prediction method is incorporated in the power scheduling. We demonstrate
the proposed game using a 4-user grid-connected microgrid model with diverse
user demands, storage, and RG profiles and examine the effect of weather
prediction on day-ahead power scheduling and cost/profit allocation. Finally,
the impact of users' ambivalence about cooperation and/or dishonesty on the
bargaining outcome is investigated, and it is shown that the proposed game is
resilient to malicious users' attempts to avoid payment of their fair share of
the overall bill.
|
The computer vision community has paid much attention to the development of
visible image super-resolution (SR) using deep neural networks (DNNs) and has
achieved impressive results. The advancement of non-visible light sensors, such
as acoustic imaging sensors, has attracted much attention, as they allow people
to visualize the intensity of sound waves beyond the visible spectrum. However,
because of the limitations imposed on acquiring acoustic data, new methods for
improving the resolution of the acoustic images are necessary. At this time,
there is no acoustic imaging dataset designed for the SR problem. This work
proposes a novel backprojection model architecture for the acoustic image
super-resolution problem, together with Acoustic Map Imaging VUB-ULB Dataset
(AMIVU). The dataset provides large simulated and real captured images at
different resolutions. The proposed XCycles BackProjection model (XCBP), in
contrast to the feedforward model approach, fully uses the iterative correction
procedure in each cycle to reconstruct the residual error correction for the
encoded features in both low- and high-resolution space. The proposed approach
was evaluated on the dataset and substantially outperformed both classical
interpolation operators and recent state-of-the-art feedforward models. It
also drastically reduced the sub-sampling error produced during data
acquisition.
|
The symmetry operators generating the hidden $\mathbb{Z}_2$ symmetry of the
asymmetric quantum Rabi model (AQRM) at bias $\epsilon \in
\frac{1}{2}\mathbb{Z}$ have recently been constructed by V. V. Mangazeev et al.
[J. Phys. A: Math. Theor. 54 12LT01 (2021)]. We start with this result to
determine symmetry operators for the $N$-qubit generalisation of the AQRM, also
known as the biased Dicke model, at special biases. We also prove for general
$N$ that the symmetry operators, which commute with the Hamiltonian of the
biased Dicke model, generate a $\mathbb{Z}_2$ symmetry.
|
Weakened random oracle models (WROMs) are variants of the random oracle model
(ROM). The WROMs have the random oracle and the additional oracle which breaks
some property of a hash function. Analyzing the security of cryptographic
schemes in WROMs, we can specify the property of a hash function on which the
security of cryptographic schemes depends. Liskov (SAC 2006) proposed WROMs and
later Numayama et al. (PKC 2008) formalized them as CT-ROM, SPT-ROM, and
FPT-ROM. In each model, an additional oracle breaks collision resistance,
second-preimage resistance, and preimage resistance, respectively. Tan
and Wong (ACISP 2012) proposed the generalized FPT-ROM (GFPT-ROM) which
intended to capture the chosen prefix collision attack suggested by Stevens et
al. (EUROCRYPT 2007). In this paper, in order to analyze the security of
cryptographic schemes more precisely, we formalize GFPT-ROM and propose three
additional WROMs which capture the chosen prefix collision attack and its
variants. In particular, we focus on signature schemes such as RSA-FDH, its
variants, and DSA, in order to understand essential roles of WROMs in their
security proofs.
|
Einstein-Maxwell-dilaton theory with non-trivial dilaton potential is known
to admit asymptotically flat and (Anti-)de Sitter charged black hole solutions.
We investigate the conditions for the presence of horizons as a function of
the parameters mass $M$, charge $Q$ and dilaton coupling strength $\alpha$. We
observe that there is a value of $\alpha$ which separates two regions: one
where the black hole is Reissner-Nordstr\"om-like and one where it is
Schwarzschild-like. We find that for de Sitter and small non-vanishing
$\alpha$, the extremal case is not reached by the solution. We also discuss the
attractive or repulsive nature of the leading long distance interaction between
two such black holes, or a test particle and one black hole, from a world-line
effective field theory point of view. Finally, we discuss possible
modifications of the Weak Gravity Conjecture in the presence of both a
dilatonic coupling and a cosmological constant.
|
Network intrusion attacks are a known threat. To detect such attacks, network
intrusion detection systems (NIDSs) have been developed and deployed. These
systems apply machine learning models to high-dimensional vectors of features
extracted from network traffic to detect intrusions. Advances in NIDSs have
made it challenging for attackers, who must execute attacks without being
detected by these systems. Prior research on bypassing NIDSs has mainly focused
on perturbing the features extracted from the attack traffic to fool the
detection system; however, this may jeopardize the attack's functionality. In
this work, we present TANTRA, a novel end-to-end Timing-based Adversarial
Network Traffic Reshaping Attack that can bypass a variety of NIDSs. Our
evasion attack utilizes a long short-term memory (LSTM) deep neural network
(DNN) which is trained to learn the time differences between the target
network's benign packets. The trained LSTM is used to set the time differences
between the malicious traffic packets (attack), without changing their content,
such that they will "behave" like benign network traffic and will not be
detected as an intrusion. We evaluate TANTRA on eight common intrusion attacks
and three state-of-the-art NIDSs, achieving an average success rate of
99.99\% in network intrusion detection system evasion. We also propose a novel
mitigation technique to address this new evasion attack.
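The reshaping step can be sketched as follows, with the trained LSTM replaced by a stand-in predictor that replays the mean benign inter-arrival time (our simplification for illustration; TANTRA predicts each delta with the LSTM):

```python
def predict_benign_deltas(n, benign_deltas):
    """Stand-in for the trained LSTM: replays the average benign
    inter-arrival time n times (an assumption for illustration)."""
    avg = sum(benign_deltas) / len(benign_deltas)
    return [avg] * n

def reshape(packets, benign_deltas, t0=0.0):
    """packets: list of (timestamp, payload). Only timestamps are
    rewritten to follow benign-looking deltas; payloads are untouched,
    so the attack's functionality is preserved."""
    deltas = predict_benign_deltas(len(packets), benign_deltas)
    out, t = [], t0
    for (_, payload), d in zip(packets, deltas):
        t += d
        out.append((t, payload))
    return out

attack = [(0.000, b"syn"), (0.001, b"syn"), (0.002, b"syn")]  # bursty
reshaped = reshape(attack, benign_deltas=[0.5, 1.5, 1.0])
```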
|
We performed deep observations to search for radio pulsations in the
directions of 375 unassociated Fermi Large Area Telescope (LAT) gamma-ray
sources using the Giant Metrewave Radio Telescope (GMRT) at 322 and 607 MHz. In
this paper we report the discovery of three millisecond pulsars (MSPs), PSR
J0248+4230, PSR J1207$-$5050 and PSR J1536$-$4948. We conducted follow up
timing observations for around 5 years with the GMRT and derived phase coherent
timing models for these MSPs. PSR J0248$+$4230 and J1207$-$5050 are isolated
MSPs having periodicities of 2.60 ms and 4.84 ms. PSR J1536-4948 is a 3.07 ms
pulsar in a binary system with an orbital period of around 62 days, with a
companion of minimum mass 0.32 solar masses. We also present multi-frequency
pulse profiles of these MSPs from the GMRT observations. PSR J1536-4948 is an
MSP with an extremely wide pulse profile having multiple components. Using the
radio timing ephemeris we subsequently detected gamma-ray pulsations from these
three MSPs, confirming them as the sources powering the gamma-ray emission. For
PSR J1536-4948 we performed combined radio-gamma-ray timing using around 11.6
years of gamma-ray pulse times of arrival (TOAs) along with the radio TOAs.
PSR J1536-4948 also shows evidence for pulsed gamma-ray emission out to above
25 GeV, confirming earlier associations of this MSP with a >10 GeV point
source. The multi-wavelength pulse profiles of all three MSPs offer challenges
to models of radio and gamma-ray emission in pulsar magnetospheres.
|
This paper studies the precoder design problem of achieving max-min fairness
(MMF) amongst users in multigateway multibeam satellite communication systems
with feeder link interference. We propose a beamforming strategy based on a
newly introduced transmission scheme known as rate-splitting multiple access
(RSMA). RSMA relies on multi-antenna rate-splitting at the transmitter and
successive interference cancellation (SIC) at the receivers, such that the
intended message for a user is split into a common part and a private part and
the interference is partially decoded and partially treated as noise. In this
paper, we formulate the MMF problem subject to per-antenna power constraints at
the satellite for the system with imperfect channel state information at the
transmitter (CSIT). We also consider the case of two-stage precoding which is
assisted by on-board processing (OBP) at the satellite. Numerical results
obtained through simulations for RSMA and the conventional linear precoding
method are compared. When RSMA is used, an MMF rate gain is achieved, and this
gain increases when OBP is used. RSMA is shown to be promising for
multigateway multibeam satellite systems, which face various practical
challenges such as feeder link interference, CSIT uncertainty, per-antenna
power constraints, uneven user distribution per beam and frame-based
processing.
|
This paper introduces PyMatching, a fast open-source Python package for
decoding quantum error-correcting codes with the minimum-weight perfect
matching (MWPM) algorithm. PyMatching includes the standard MWPM decoder as
well as a variant, which we call local matching, that restricts each syndrome
defect to be matched to another defect within a local neighbourhood. The
decoding performance of local matching is almost identical to that of the
standard MWPM decoder in practice, while reducing the computational complexity
approximately quadratically. We benchmark the performance of PyMatching,
showing that local matching is several orders of magnitude faster than
implementations of the full MWPM algorithm using NetworkX or Blossom V for
problem sizes typically considered in error correction simulations. PyMatching
and its dependencies are open-source, and it can be used to decode any quantum
code for which syndrome defects come in pairs using a simple Python interface.
PyMatching supports the use of weighted edges, hook errors, boundaries and
measurement errors, enabling fast decoding and simulation of fault-tolerant
quantum computing.
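The idea behind local matching can be illustrated with a toy greedy pairing of syndrome defects on a 1-D repetition code, where each defect is matched within a fixed neighbourhood instead of solving full MWPM (this is a sketch of the concept only, not PyMatching's actual implementation, and it assumes a partner always exists within the radius):

```python
def local_match(defects, radius):
    """defects: sorted positions of syndrome defects (even count).
    Greedily pairs each defect with the nearest unmatched defect
    within `radius`; assumes such a partner exists."""
    pairs, unmatched = [], list(defects)
    while unmatched:
        d = unmatched.pop(0)
        # nearest unmatched defect inside the local neighbourhood
        cands = [e for e in unmatched if abs(e - d) <= radius]
        partner = min(cands, key=lambda e: abs(e - d))
        unmatched.remove(partner)
        pairs.append((d, partner))
    return pairs

print(local_match([2, 3, 10, 12], radius=4))  # [(2, 3), (10, 12)]
```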
|
We implement two recently developed fast Coulomb solvers, HSMA3D [J. Chem.
Phys. 149 (8) (2018) 084111] and HSMA2D [J. Chem. Phys. 152 (13) (2020)
134109], into a new user package HSMA for molecular dynamics simulation engine
LAMMPS. The HSMA package is designed for efficient and accurate modeling of
electrostatic interactions in 3D and 2D periodic systems with dielectric
effects at O(N) cost. The implementation is hybrid MPI and OpenMP
parallelized and compatible with existing LAMMPS functionalities. The
vectorization technique following AVX512 instructions is adopted for
acceleration. To establish the validity of our implementation, we present
extensive comparisons to the widely used particle-particle
particle-mesh (PPPM) algorithm in LAMMPS and other dielectric solvers. With the
proper choice of algorithm parameters and parallelization setup, the package
enables calculations of electrostatic interactions that outperform the standard
PPPM in speed for a wide range of particle numbers.
|
The growing share of proactive actors in the electricity markets calls for
more attention to prosumers and more support for their decision-making under
decentralized electricity markets. In view of the changing paradigm, it is
crucial to study the long-term planning under the decentralized and
prosumer-centric markets to unravel the effects of such markets on the planning
decisions. In the first part of the two-part paper, we propose a
prosumer-centric framework for concurrent generation and transmission planning.
Here, three planning models are presented where a peer-to-peer market with
product differentiation, a pool market and a mixed bilateral/pool market and
their associated trading costs are explicitly modeled, respectively. To fully
reveal the individual costs and benefits, we start by formulating the
optimization problems of various actors, i.e. prosumers, transmission system
operator, energy market operator and carbon market operator. Moreover, to
enable decentralized planning where the privacy of the prosumers is preserved,
distributed optimization algorithms are presented based on the corresponding
centralized optimization problems.
|
Previous work mainly focuses on improving cross-lingual transfer for NLU
tasks with a multilingual pretrained encoder (MPE), or improving the
performance on supervised machine translation with BERT. However, it remains
under-explored whether the MPE can facilitate the cross-lingual
transferability of an NMT model. In this paper, we focus on a zero-shot
cross-lingual transfer task in NMT. In this task, the NMT model is trained with
parallel dataset of only one language pair and an off-the-shelf MPE, then it is
directly tested on zero-shot language pairs. We propose SixT, a simple yet
effective model for this task. SixT leverages the MPE with a two-stage training
schedule and gets further improvement with a position disentangled encoder and
a capacity-enhanced decoder. Using this method, SixT significantly outperforms
mBART, a pretrained multilingual encoder-decoder model explicitly designed for
NMT, with an average improvement of 7.1 BLEU on zero-shot any-to-English test
sets across 14 source languages. Furthermore, with much less training
computation cost and training data, our model achieves better performance on 15
any-to-English test sets than CRISS and m2m-100, two strong multilingual NMT
baselines.
|
Particle tracking in large-scale numerical simulations of turbulent flows
presents one of the major bottlenecks in parallel performance and scaling
efficiency. Here, we describe a particle tracking algorithm for large-scale
parallel pseudo-spectral simulations of turbulence which scales well up to
billions of tracer particles on modern high-performance computing
architectures. We summarize the standard parallel methods used to solve the
fluid equations in our hybrid MPI/OpenMP implementation. As the main focus, we
describe the implementation of the particle tracking algorithm and document its
computational performance. To address the extensive inter-process communication
required by particle tracking, we introduce a task-based approach to overlap
point-to-point communications with computations, thereby enabling improved
resource utilization. We characterize the computational cost as a function of
the number of particles tracked and compare it with the flow field computation,
showing that the cost of particle tracking is very small for typical
applications.
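The task-based overlap of point-to-point communication with computation can be sketched in miniature: while a (mock) exchange of particle data is in flight, the process keeps computing on the flow field and collects the communication result afterwards. Real codes would use non-blocking MPI (e.g., Isend/Irecv); the stand-ins below are our illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def exchange_particles(particles):      # stand-in for MPI point-to-point
    time.sleep(0.05)                    # simulated network latency
    return [p + 1 for p in particles]   # "received" data

def local_field_update(steps):          # stand-in for the spectral solve
    return sum(i * i for i in range(steps))

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(exchange_particles, [1, 2, 3])  # start comm
    field = local_field_update(100_000)               # compute meanwhile
    received = fut.result()                           # finish comm

print(received)  # [2, 3, 4]
```

Because the mock exchange sleeps (releasing the GIL), the field computation genuinely proceeds while the "communication" is in flight.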
|
Spontaneous imbibition has been receiving much attention due to its
significance in many subsurface and industrial applications. Unveiling
pore-scale wetting dynamics, and particularly upscaling it to the Darcy scale,
remains an unresolved problem. In this work, we conduct image-based pore-network
modeling of cocurrent spontaneous imbibition and the corresponding quasi-static
imbibition, in homogeneous sintered glass beads as well as heterogeneous
Estaillades. A wide range of viscosity ratios and wettability conditions are
taken into account. Based on our pore-scale results, we show the influence of
pore-scale heterogeneity on imbibition dynamics and nonwetting entrapment. We
elucidate different pore-filling mechanisms in imbibition, which helps us
understand wetting dynamics. Most importantly, we develop a non-equilibrium
model for relative permeability of the wetting phase, which adequately
incorporates wetting dynamics. This is crucial to the final goal of developing
a two-phase imbibition model with measurable material properties such as
capillary pressure and relative permeability. Finally, we propose some future
work on both numerical and experimental verifications of the developed
non-equilibrium permeability model.
|
This paper introduces the notion of an unravelled abstract regular polytope,
and proves that $\SL_3(q) \rtimes \langle t\rangle$, where $t$ is the transpose inverse
automorphism of $\SL_3(q)$, possesses such polytopes for various congruences of
$q$. A large number of small examples of such polytopes are given, along with
extensive details of their various properties.
|
If $R$ is a commutative unital ring and $M$ is a unital $R$-module, then each
element of $\operatorname{End}_R(M)$ determines a left
$\operatorname{End}_{R}(M)[X]$-module structure on $\operatorname{End}_{R}(M)$,
where $\operatorname{End}_{R}(M)$ is the $R$-algebra of endomorphisms of $M$
and $\operatorname{End}_{R}(M)[X] =\operatorname{End}_{R}(M)\otimes_RR[X]$.
These structures provide a very short proof of the Cayley-Hamilton theorem,
which may be viewed as a reformulation of the proof in Algebra by Serge Lang.
Some generalisations of the Cayley-Hamilton theorem can be easily proved using
the proposed method.
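As a sketch (in our notation), the module structure in question lets a polynomial act by composition with the fixed endomorphism $f \in \operatorname{End}_R(M)$:

```latex
\[
  \Bigl(\textstyle\sum_i g_i X^i\Bigr)\cdot h
  \;:=\; \sum_i g_i\, f^{\,i}\, h ,
  \qquad g_i,\, h \in \operatorname{End}_R(M).
\]
```

For $M$ free of finite rank and $\chi_f$ the characteristic polynomial of $f$, the Cayley-Hamilton theorem then reads $\chi_f(X)\cdot \operatorname{id}_M = \chi_f(f) = 0$.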
|
We present a fascinating model that has recently attracted attention among
physicists working in complexity-related fields. Though it originated in
mathematics and was later taken up in economics, the model is very enlightening in many
aspects that we shall highlight in this review. It is called The Stable
Marriage Problem (though the marriage metaphor can be generalized to many other
contexts), and it consists of matching men and women, considering
preference-lists where individuals express their preference over the members of
the opposite gender. This problem appeared for the first time in 1962 in the
seminal paper of Gale and Shapley and has aroused interest in many fields of
science, including economics, game theory, computer science, etc. Recently it
has also attracted many physicists who, using the powerful tools of statistical
mechanics, have also approached it as an optimization problem. Here we present
a complete overview of the Stable Marriage Problem emphasizing its
multidisciplinary aspect, and reviewing the key results in the disciplines that
it has influenced most. We focus, in particular, on the old and recent results
achieved by physicists, finally introducing two new promising models inspired
by the philosophy of the Stable Marriage Problem. Moreover, we present an
innovative reinterpretation of the problem, useful to highlight the
revolutionary role of information in the contemporary economy.
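The Gale-Shapley deferred-acceptance algorithm at the heart of the problem is short enough to state in full; a minimal sketch (men propose, women tentatively accept the best proposal so far):

```python
def gale_shapley(men_prefs, women_prefs):
    """prefs: dicts mapping each person to an ordered preference list.
    Returns a stable matching as a dict {man: woman}."""
    free = list(men_prefs)                 # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                           # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)}
            for w, p in women_prefs.items()}
    while free:
        m = free.pop(0)
        w = men_prefs[m][next_choice[m]]   # next woman m proposes to
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m                 # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])        # w trades up; old fiance freed
            engaged[w] = m
        else:
            free.append(m)                 # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
matching = gale_shapley(men, women)        # a-y, b-x (stable)
```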
|
In this article, we prove that the Buchstaber invariant of the 4-dimensional
real universal complex is no less than 24, as a follow-up to the work of
Ayzenberg [2] and Sun [14]. Moreover, a lower bound for the Buchstaber
invariants of $n$-dimensional real universal complexes is given as an
improvement of Erokhovets's result in [7].
|
The interest in dynamic processes on networks is steadily rising in recent
years. In this paper, we consider the $(\alpha,\beta)$-Thresholded Network
Dynamics ($(\alpha,\beta)$-Dynamics), where $\alpha\leq \beta$, in which only
structural dynamics (dynamics of the network) are allowed, guided by local
thresholding rules executed in each node. In particular, in each discrete round
$t$, each pair of nodes $u$ and $v$ that are allowed to communicate by the
scheduler, computes a value $\mathcal{E}(u,v)$ (the potential of the pair) as a
function of the local structure of the network at round $t$ around the two
nodes. If $\mathcal{E}(u,v) < \alpha$ then the link (if it exists) between $u$
and $v$ is removed; if $\alpha \leq \mathcal{E}(u,v) < \beta$ then an existing
link among $u$ and $v$ is maintained; if $\beta \leq \mathcal{E}(u,v)$ then a
link between $u$ and $v$ is established if not already present.
The microscopic structure of $(\alpha,\beta)$-Dynamics appears to be simple,
so that we are able to rigorously argue about it, but still flexible, so that
we are able to design meaningful microscopic local rules that give rise to
interesting macroscopic behaviors. Our goals are the following: a) to
investigate the properties of the $(\alpha,\beta)$-Thresholded Network Dynamics
and b) to show that $(\alpha,\beta)$-Dynamics is expressive enough to solve
complex problems on networks.
Our contribution in these directions is twofold. First, we rigorously
establish the claim about the expressiveness of $(\alpha,\beta)$-Dynamics,
both by designing a simple protocol that provably computes the $k$-core of the
network and by showing that $(\alpha,\beta)$-Dynamics is in fact
Turing-complete. Second
and most important, we construct general tools for proving stabilization that
work for a subclass of $(\alpha,\beta)$-Dynamics and prove speed of convergence
in a restricted setting.
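One synchronous round of the dynamics can be sketched directly from the thresholding rules, using an illustrative potential $\mathcal{E}(u,v)$ equal to the number of common neighbours (the framework allows any local potential; this choice, and letting the scheduler activate all pairs, are our assumptions):

```python
def round_dynamics(nodes, edges, alpha, beta):
    """One synchronous round over all pairs. edges: set of (u, v)
    tuples with u listed before v in `nodes`."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    new_edges = set()
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            E = len(adj[u] & adj[v])          # potential of the pair
            if E >= beta:                     # establish (or keep) link
                new_edges.add((u, v))
            elif E >= alpha and v in adj[u]:  # maintain existing link
                new_edges.add((u, v))
            # E < alpha: link (if any) is removed
    return new_edges

# Triangle {0,1,2} plus a pendant vertex 3 attached to node 2.
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
print(sorted(round_dynamics([0, 1, 2, 3], edges, alpha=1, beta=2)))
# [(0, 1), (0, 2), (1, 2)]  -- the pendant edge (2, 3) is dropped
```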
|
Deep generative models have emerged as a powerful class of priors for signals
in various inverse problems such as compressed sensing, phase retrieval and
super-resolution. Here, we assume an unknown signal to lie in the range of some
pre-trained generative model. A popular approach for signal recovery is via
gradient descent in the low-dimensional latent space. While gradient descent
has achieved good empirical performance, its theoretical behavior is not well
understood. In this paper, we introduce the use of stochastic gradient Langevin
dynamics (SGLD) for compressed sensing with a generative prior. Under mild
assumptions on the generative model, we prove the convergence of SGLD to the
true signal. We also demonstrate competitive empirical performance to standard
gradient descent.
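An SGLD latent-space update can be sketched on a toy recovery problem: minimise $L(z) = (G(z) - y)^2$ for a stand-in "generator" $G(z) = 2z + 1$, adding annealed Langevin noise to each gradient step (the generator, step size and noise schedule are our illustrative choices):

```python
import math
import random

random.seed(0)

def G(z):                      # stand-in for a pretrained generator
    return 2.0 * z + 1.0

def sgld(y, z=0.0, eta=0.01, steps=2000):
    """Langevin steps on L(z) = (G(z) - y)^2 with annealed noise."""
    for t in range(1, steps + 1):
        grad = 2.0 * (G(z) - y) * 2.0                    # G'(z) = 2
        noise = random.gauss(0.0, math.sqrt(2.0 * eta / t))
        z = z - eta * grad + noise
    return z

z_hat = sgld(y=4.0)            # true latent is z* = 1.5
```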
|
The hull of a linear code over finite fields is the intersection of the code
and its dual, which was introduced by Assmus and Key. In this paper, we develop
a method to construct linear codes with trivial hull (LCD codes) and
one-dimensional hull by employing the positive characteristic analogues of
Gauss sums. These codes are quasi-abelian, and sometimes doubly circulant. Some
sufficient conditions for a linear code to be an LCD code (resp. a linear code
with one-dimensional hull) are presented. It is worth mentioning that we
present a lower bound on the minimum distances of the constructed linear codes.
As an application, using these conditions, we obtain some optimal or almost
optimal LCD codes (resp. linear codes with one-dimensional hull) with respect
to the online Database of Grassl.
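For binary codes, whether a code is LCD can be tested with Massey's nonsingularity criterion (a classical fact, not specific to this paper's Gauss-sum construction): a code with generator matrix $G$ is LCD iff $GG^{T}$ is nonsingular over GF(2). A small pure-Python check:

```python
def matmul_gf2(A, B):
    """Matrix product over GF(2); matrices are lists of 0/1 rows."""
    return [[sum(a * b for a, b in zip(row, col)) % 2
             for col in zip(*B)] for row in A]

def rank_gf2(M):
    """Rank over GF(2) by Gaussian elimination (XOR row reduction)."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [x ^ y for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def is_lcd(G):
    """Massey's criterion: LCD iff G * G^T is nonsingular over GF(2)."""
    GGt = matmul_gf2(G, [list(col) for col in zip(*G)])
    return rank_gf2(GGt) == len(G)

# Row 0110 is self-orthogonal, so it lies in the hull: not LCD.
print(is_lcd([[1, 0, 0, 0], [0, 1, 1, 0]]))  # False
print(is_lcd([[1, 0, 0], [0, 1, 0]]))        # True
```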
|
The performance of visual quality prediction models is commonly assumed to be
closely tied to their ability to capture perceptually relevant image aspects.
Models are thus either based on sophisticated feature extractors carefully
designed from extensive domain knowledge or optimized through feature learning.
In contrast to this, we find feature extractors constructed from random noise
to be sufficient to learn a linear regression model whose quality predictions
reach high correlations with human visual quality ratings, on par with a model
with learned features. We analyze this curious result and show that, besides
the quality of feature extractors, their quantity also plays a crucial role,
with top performance only achieved in highly overparameterized models.
|
The WL-rank of a digraph $\Gamma$ is defined to be the rank of the coherent
configuration of $\Gamma$. We construct a new infinite family of strictly Deza
Cayley graphs for which the WL-rank is equal to the number of vertices. The
graphs from this family are divisible design graphs and are integral.
|
The city of Rio de Janeiro is one of the biggest cities in Brazil. Drug gangs
and paramilitary groups called \textit{mil\'icias} control some regions of the
city where the government is not present, especially in the slums. Due to the
characteristics of such two distinct groups, it was observed that the evolution
of COVID-19 is different in those two regions, in comparison with the regions
controlled by the government. In order to understand those observations
qualitatively, we divide the city into three regions controlled by the
government, by the drug gangs and by the \textit{mil\'icias}, respectively, and
consider an SIRD-like epidemic model in which the three regions are coupled.
Considering different levels of exposure, the model is capable of reproducing
qualitatively the distinct evolution of the COVID-19 disease in the three
regions, suggesting that the organized crime shapes the COVID-19 evolution in
the city of Rio de Janeiro. This case study suggests that the model can be used
in general for any metropolitan region with groups of people that can be
categorized by their level of exposure.
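A minimal coupled-SIRD sketch for three regions with different exposure levels can be written in a few lines; all rates and coupling weights below are illustrative placeholders, not the paper's fitted values:

```python
def sird_step(state, betas, gamma, mu, coupling, dt=0.1):
    """state: per-region [S, I, R, D] fractions (each region sums to 1).
    One forward-Euler step of the coupled SIRD equations."""
    n = len(state)
    new = []
    for i in range(n):
        S, I, R, D = state[i]
        # force of infection: exposure-weighted contact with every region
        lam = betas[i] * sum(coupling[i][j] * state[j][1] for j in range(n))
        dI = lam * S - (gamma + mu) * I
        new.append([S - lam * S * dt,
                    I + dI * dt,
                    R + gamma * I * dt,
                    D + mu * I * dt])
    return new

# Three regions (government-, gang- and milicia-controlled), with
# different exposure encoded as different transmission rates.
betas = [0.3, 0.5, 0.4]
coupling = [[1.0, 0.1, 0.1], [0.1, 1.0, 0.1], [0.1, 0.1, 1.0]]
state = [[0.99, 0.01, 0.0, 0.0] for _ in range(3)]
for _ in range(2000):
    state = sird_step(state, betas, gamma=0.1, mu=0.01, coupling=coupling)
```

With these placeholder rates, the higher-exposure region accumulates more deaths, while each region's compartments still sum to one.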
|
In Specific Power Absorption (SPA) models for Magnetic Fluid Hyperthermia
(MFH) experiments, the magnetic relaxation time of the nanoparticles (NPs) is
known to be a fundamental descriptor of the heating mechanisms. The relaxation
time is mainly determined by the interplay between the magnetic properties of
the NPs and the rheological properties of NPs environment. Although the role of
magnetism in MFH has been extensively studied, the thermal properties of the
NPs medium and their changes during MFH experiments have so far been
underrated. Here, we show that Zn_{x}Fe_{3-x}O_{4} NPs can be dispersed in two
different media that undergo a phase transition within the temperature range
of the experiment: clarified butter oil (CBO) and paraffin. These systems show
non-linear behavior of the heating rate within the temperature range of the
MFH experiments. For CBO, a fast increase at $306$ K is observed, associated
with changes in the viscosity ($\eta(T)$) and specific heat ($c_p(T)$) of the
medium below and above its melting temperature. This increase in the heating
rate takes place around $318$ K for paraffin. Magnetic and morphological
characterizations of the NPs, together with the observed agglomeration of the
nanoparticles above $306$ K, indicate that the fast increase in the MFH curves
cannot be associated with a change in the magnetic relaxation mechanism, with
N\'eel relaxation remaining dominant. In fact,
successive experiment runs performed up to temperatures below and above the CBO
melting point resulted in different MFH curves due to agglomeration of NPs
driven by magnetic field inhomogeneity during the experiments. Similar effects
were observed for paraffin. Our results highlight the relevance of the NPs
medium's thermodynamic properties for an accurate measurement of the heating
efficiency for in vitro and in vivo environments, where the thermal properties
are largely variable within the temperature window of MFH experiments.
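Since the argument rests on Néel relaxation remaining dominant, a minimal sketch of the standard Néel relaxation time $\tau = \tau_0 \exp(K_{\mathrm{eff}}V/k_BT)$ may help; the anisotropy constant, particle diameter and attempt time below are assumed illustrative values, not parameters measured in this work.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def neel_relaxation_time(K_eff, diameter_nm, T, tau0=1e-9):
    """Neel relaxation time tau = tau0 * exp(K_eff * V / (k_B * T)).

    K_eff (J/m^3), diameter_nm (nm) and tau0 (s) are assumed values here,
    not parameters fitted to the NPs of this work.
    """
    radius_m = diameter_nm * 1e-9 / 2
    V = 4.0 / 3.0 * math.pi * radius_m**3  # particle volume, m^3
    return tau0 * math.exp(K_eff * V / (K_B * T))

# Crossing the CBO melting region (~306 K) changes T only slightly, so the
# Neel time itself varies smoothly -- consistent with the claim that the
# jump in the heating rate is not a relaxation-mechanism change.
t_306 = neel_relaxation_time(K_eff=1.1e4, diameter_nm=12, T=306)
t_318 = neel_relaxation_time(K_eff=1.1e4, diameter_nm=12, T=318)
```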
|
Recent studies indicate that hierarchical Vision Transformer with a macro
architecture of interleaved non-overlapped window-based self-attention \&
shifted-window operation is able to achieve state-of-the-art performance in
various visual recognition tasks, challenging the ubiquitous convolutional
neural networks (CNNs) with their densely sliding kernels. Most follow-up works attempt
to replace the shifted-window operation with other kinds of cross-window
communication paradigms, while treating self-attention as the de-facto standard
for window-based information aggregation. In this manuscript, we question
whether self-attention is the only choice for hierarchical Vision
Transformers to attain strong performance, and what the effects of different
kinds of cross-window communication are. To this end, we replace
self-attention layers with embarrassingly simple linear mapping layers; the
resulting proof-of-concept architecture, termed LinMapper, achieves very strong performance on
ImageNet-1k image recognition. Moreover, we find that LinMapper is able to
better leverage the pre-trained representations from image recognition and
demonstrates excellent transfer learning properties on downstream dense
prediction tasks such as object detection and instance segmentation. We also
experiment with other alternatives to self-attention for content aggregation
inside each non-overlapped window under different cross-window communication
approaches, which all give similar competitive results. Our study reveals that
the \textbf{macro architecture} of the Swin model family, rather than the
specific aggregation layers or the specific means of cross-window
communication, may be more responsible for its strong performance and is the real challenger to the
ubiquitous CNN's dense sliding window paradigm. Code and models will be
publicly available to facilitate future research.
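The core idea of replacing window self-attention with a linear map over token positions can be sketched in a few lines of numpy; this is an illustrative reconstruction, not the authors' released implementation, and the window size and weight initialization are arbitrary.

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping (w*w, C) windows."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, w * w, C)

def linmap_window_layer(x, weight, w=4):
    """Mix the w*w token positions of every window with one shared linear
    map -- a stand-in for window self-attention, as described in the text."""
    H, W, C = x.shape
    windows = window_partition(x, w)                 # (num_win, w*w, C)
    out = np.einsum('ts,nsc->ntc', weight, windows)  # linear token mixing
    out = out.reshape(H // w, W // w, w, w, C).transpose(0, 2, 1, 3, 4)
    return out.reshape(H, W, C)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))
W_tok = rng.standard_normal((16, 16)) / 4  # learnable (w*w) x (w*w) weights
y = linmap_window_layer(x, W_tok, w=4)
```

Unlike attention, the token-mixing matrix here is input-independent, which is exactly what makes the layer "embarrassingly simple".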
|
Safe, environmentally conscious and flexible: these are the central
requirements for future mobility. In the European border region between
Germany, France and Luxembourg, mobility in the worlds of work and leisure is
a decisive factor. It must be simple, affordable and available to all. The
automation and intelligent connection of road traffic play an important role
in this. Due to the distributed settlement structure, with many small towns
and villages and a few central hot spots, fully available public transport is
very complex and expensive, and only a few bus and train lines exist. In this
context, the trinational research project TERMINAL aims to establish a
cross-border automated minibus in regular traffic and to explore the user
acceptance for commuter traffic. Additionally, mobility on demand services are
tested, and both will be embedded within the existing public transport
infrastructure.
|
Firing Squad Synchronisation on Cellular Automata is the dynamical
synchronisation of finitely many cells without any prior knowledge of their
range. This can be conceived as a signal with an infinite speed. Most of the
proposed constructions naturally translate to the continuous setting of signal
machines and generate fractal figures with an accumulation on a horizontal
line, i.e. synchronously, in the space-time diagram. Signal machines are
studied in a series of articles named Abstract Geometrical Computation.
In the present article, we design a signal machine that is able to
synchronise/accumulate on any non-infinite slope. The slope is encoded in the
initial configuration. This is done by constructing an infinite tree such that
each node computes the way the tree expands.
The interest of Abstract Geometrical Computation is to do away with the
constraint of discrete space, while tackling the new difficulties that arise in continuous
space. The interest of this paper in particular is to provide basic tools for
further study of computable accumulation lines in the signal machine model.
|
Numerical qualification of an eco-friendly alternative gas mixture for
avalanche-mode operation of Resistive Plate Chambers is the central aim of
this work. To identify the gas mixture, a numerical model developed elsewhere
by the authors has first been validated by comparing the simulated figures of
merit (efficiency and streamer probability) with experimental data for the
gas mixture used in INO-ICAL. It has then been used to simulate the same
properties of a gas mixture based on argon, carbon dioxide and nitrogen,
identified as a potential replacement by studying its different properties. The efficacy of this
eco-friendly gas mixture has been studied by comparing the simulated result
with the standard gas mixture used in INO-ICAL as well as with experimental
data of other eco-friendly hydrofluorocarbon (HFO1234ze) based potential
replacements. To increase the efficacy of the proposed gas mixture, both the
traditional approach (addition of a small amount of SF$_6$) and an
alternative approach (exploring the option of high-end electronics) were studied.
|
We introduce a linearised form of the square root of the Todd class inside
the Verbitsky component of a hyper-K\"ahler manifold using the extended Mukai
lattice. This enables us to define a Mukai vector for certain objects in the
derived category taking values inside the extended Mukai lattice which is
functorial for derived equivalences. As applications, we obtain a structure
theorem for derived equivalences between hyper-K\"ahler manifolds as well as an
integral lattice associated to the derived category of hyper-K\"ahler manifolds
deformation equivalent to the Hilbert scheme of a K3 surface mimicking the
surface case.
|
Robots are becoming more and more commonplace in many industry settings. This
successful adoption can be partly attributed to (1) their increasingly
affordable cost and (2) the possibility of developing intelligent,
software-driven robots. Unfortunately, robotics software consumes significant
amounts of energy. Moreover, robots are often battery-driven, meaning that
even a small energy improvement can help reduce their energy footprint and
increase their autonomy, improving the user experience. In this paper, we study the Robot Operating
System (ROS) ecosystem, the de-facto standard for developing and prototyping
robotics software. We analyze 527 energy-related data points (including
commits, pull-requests, and issues on ROS-related repositories, ROS-related
questions on StackOverflow, ROS Discourse, ROS Answers, and the official ROS
Wiki). Our results include a quantification of roboticists' interest in
software energy efficiency, 10 recurrent causes of energy-related issues and
14 solutions to them, and their implied trade-offs with respect to other
quality attributes. These contributions support roboticists and researchers
towards having energy-efficient software in future robotics projects.
|
We design and analyze an algorithm for first-order stochastic optimization of
a large class of functions on $\mathbb{R}^d$. In particular, we consider the
\emph{variationally coherent} functions which can be convex or non-convex. The
iterates of our algorithm on variationally coherent functions converge almost
surely to the global minimizer $\boldsymbol{x}^*$. Additionally, the very
same algorithm with the same hyperparameters guarantees, after $T$ iterations
on convex functions, that the expected suboptimality gap is bounded by
$\widetilde{O}(\|\boldsymbol{x}^* - \boldsymbol{x}_0\| T^{-1/2+\epsilon})$ for
any $\epsilon>0$. It is the first algorithm to achieve both these properties at
the same time. Also, the rate for convex functions essentially matches the
performance of parameter-free algorithms. Our algorithm is an instance of the
Follow The Regularized Leader algorithm with the added twist of using
\emph{rescaled gradients} and time-varying linearithmic regularizers.
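A generic sketch of a Follow The Regularized Leader update with rescaled gradients is shown below; the regularizer schedule `lam` is a simple stand-in chosen for illustration and does not reproduce the paper's exact linearithmic regularizer or its guarantees.

```python
import numpy as np

def ftrl_rescaled(grad_fn, x0, T, eps=1e-8):
    """Generic Follow The Regularized Leader with rescaled gradients.

    Each gradient is rescaled to unit norm before being accumulated; the
    iterate then minimizes <g_sum, x> + (lam/2) * ||x - x0||^2 in closed
    form. The schedule lam = sqrt(t) * log(t + 1) is an assumed stand-in
    for the paper's time-varying linearithmic regularizer.
    """
    x = np.array(x0, float)
    g_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        g = grad_fn(x)
        g_sum += g / (np.linalg.norm(g) + eps)  # rescaled gradient
        lam = np.sqrt(t) * np.log(t + 1)
        x = x0 - g_sum / lam                    # closed-form FTRL step
    return x

# Minimize f(x) = ||x - 1||^2 starting from the origin.
x_star = ftrl_rescaled(lambda x: 2.0 * (x - 1.0), np.zeros(3), T=2000)
```

Rescaling makes each step insensitive to the gradient's magnitude, while the growing regularizer damps the iterates toward a stable point.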
|
A novel structure for the relativistic hydrodynamics of classical plasmas is
derived following the microscopic dynamics of charged particles. The
derivation starts from the microscopic definition of the concentration. The
concentration evolution leads to the continuity equation and gives the
definition of the particle current. Introducing no arbitrary functions, we
consider the evolution of the current (which does not coincide with the
momentum density). It leads to a set of new functions which, to the best of
our knowledge, have not been considered in the literature earlier. One of
these functions is the average reverse relativistic (gamma) factor. Its
current is also considered one of the basic functions. The evolution of the
new functions appears via the concentration and the particle current, so the
set of equations partially closes itself. Other functions are presented as
functions of the basic functions as a part of the truncation procedure. Two
pairs of chosen functions construct two four-vectors. The evolution of these
four-vectors leads to the appearance of two four-tensors, which are
considered instead of the energy-momentum tensor. The Langmuir waves are
considered within the suggested model.
|
We derive combinatorial necessary conditions for discrete-time quantum walks
defined by regular mixed graphs to be periodic. If the quantum walk is
periodic, all the eigenvalues of the time evolution matrices must be algebraic
integers. Focusing on this, we explore which ring the coefficients of the
characteristic polynomials should belong to. On the other hand, the
coefficients of the characteristic polynomials of $\eta$-Hermitian adjacency
matrices have combinatorial implications. From these, we can find combinatorial
implications in the coefficients of the characteristic polynomials of the time
evolution matrices, and thus derive combinatorial necessary conditions for
mixed graphs to be periodic. For example, if a $k$-regular mixed graph with $n$
vertices is periodic, then $2n/k$ must be an integer. As an application of this
work, we determine periodicity of mixed complete graphs and mixed graphs with a
prime number of vertices.
|
In this paper, we present a multiscale framework for solving the Helmholtz
equation in heterogeneous media without scale separation and in the high
frequency regime where the wavenumber $k$ can be large. The main innovation is
that our methods achieve a nearly exponential rate of convergence with respect
to the computational degrees of freedom, using a coarse grid of mesh size
$O(1/k)$ without suffering from the well-known pollution effect. The key idea
is a coarse-fine scale decomposition of the solution space that adapts to the
media property and wavenumber; this decomposition is inspired by the multiscale
finite element method. We show that the coarse part is of low complexity in the
sense that it can be approximated with a nearly exponential rate of convergence
via local basis functions, while the fine part is local such that it can be
computed efficiently using the local information of the right hand side. The
combination of the two parts yields the overall nearly exponential rate of
convergence. We demonstrate the effectiveness of our methods theoretically and
numerically; an exponential rate of convergence is consistently observed and
confirmed. In addition, we observe the robustness of our methods regarding the
high contrast in the media numerically.
|
In this article we introduce the zero-divisor graphs $\Gamma_\mathscr{P}(X)$
and $\Gamma^\mathscr{P}_\infty(X)$ of the two rings $C_\mathscr{P}(X)$ and
$C^\mathscr{P}_\infty(X)$; here $\mathscr{P}$ is an ideal of closed sets in $X$
and $C_\mathscr{P}(X)$ is the aggregate of those functions in $C(X)$ whose
support lies in $\mathscr{P}$. $C^\mathscr{P}_\infty(X)$ is the
$\mathscr{P}$-analogue of the ring $C_\infty (X)$. We find conditions on the
topology on $X$ under which $\Gamma_\mathscr{P}(X)$ (respectively,
$\Gamma^\mathscr{P}_\infty(X)$) becomes triangulated/hypertriangulated. We
show that $\Gamma_\mathscr{P}(X)$ (respectively,
$\Gamma^\mathscr{P}_\infty(X)$) is a complemented graph if and only if the
space of minimal prime ideals in $C_\mathscr{P}(X)$ (respectively,
$C^\mathscr{P}_\infty(X)$) is compact. This places a special case of this
result, obtained by Azarpanah and Motamedi in \cite{Azarpanah} for the choice
$\mathscr{P}\equiv$ the ideal of closed sets in $X$, in a wider setting. We
also give an example of a non-locally finite graph having finite chromatic
number. Finally it is established with some special choices of the ideals
$\mathscr{P}$ and $\mathscr{Q}$ on $X$ and $Y$ respectively that the rings
$C_\mathscr{P}(X)$ and $C_\mathscr{Q}(Y)$ are isomorphic if and only if
$\Gamma_\mathscr{P}(X)$ and $\Gamma_\mathscr{Q}(Y)$ are isomorphic.
|
Ever since its foundations were laid nearly a century ago, quantum theory has
provoked questions about the very nature of reality. We address these questions
by considering the universe, and the multiverse, fundamentally as complex
patterns, or mathematical structures. Basic mathematical structures can be
expressed more simply in terms of emergent parameters. Even simple mathematical
structures can interact within their own structural environment, in a
rudimentary form of self-awareness, which suggests a definition of reality in a
mathematical structure as simply the complete structure. The absolute
randomness of quantum outcomes is most satisfactorily explained by a multiverse
of discrete, parallel universes. Some of these have to be identical to each
other, but that introduces a dilemma, because each mathematical structure must
be unique. The resolution is that the parallel universes must be embedded
within a mathematical structure, the multiverse, which allows universes to be
identical within themselves, but nevertheless distinct, as determined by their
position in the structure. The multiverse needs more emergent parameters than
our universe and so it can be considered to be a superstructure.
Correspondingly, its reality can be called a super-reality. While every
universe in the multiverse is part of the super-reality, the complete
super-reality is forever beyond the horizon of any of its component universes.
|
Semi-device independent (Semi-DI) quantum random number generators (QRNG)
gained attention for security applications, offering an excellent trade-off
between security and generation rate. This paper presents proof-of-principle
time-bin-encoding semi-DI QRNG experiments based on a prepare-and-measure
scheme. The protocol requires two simple assumptions and a measurable
condition: an upper bound on the prepared pulses' energy. We lower-bound the
conditional min-entropy from the energy bound and the input-output
correlation, determining the amount of genuine randomness that can be
certified. Moreover, we present a generalized optimization problem for
bounding the min-entropy in the case of multiple inputs and outcomes, in the form of a semidefinite program
(SDP). The protocol is tested with a simple experimental setup, capable of
realizing two configurations for the ternary time-bin encoding scheme. The
experimental setup is easy-to-implement and comprises commercially available
off-the-shelf (COTS) components at the telecom wavelength, granting a secure
and certifiable entropy source. The combination of ease of implementation,
scalability, high security level, and output entropy makes our system a
promising candidate for commercial QRNGs.
|
While artificial intelligence provides the backbone for many tools people use
around the world, recent work has brought to attention that the algorithms
powering AI are not free of politics, stereotypes, and bias. While most work in
this area has focused on the ways in which AI can exacerbate existing
inequalities and discrimination, very little work has studied how governments
actively shape training data. We describe how censorship has affected the
development of Wikipedia corpora, text data which are regularly used as
pre-training inputs to NLP algorithms. We show that word embeddings trained on
Baidu Baike, an online Chinese encyclopedia, have very different associations
between adjectives and a range of concepts about democracy, freedom, collective
action, equality, and people and historical events in China than its regularly
blocked but uncensored counterpart - Chinese language Wikipedia. We examine the
implications of these discrepancies by studying their use in downstream AI
applications. Our paper shows how government repression, censorship, and
self-censorship may impact training data and the applications that draw from
them.
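The kind of association measurement described, mean cosine similarity between adjectives and the words standing for a concept in a given embedding space, can be sketched as follows; the toy random vectors stand in for embeddings trained on the two corpora, and the paper's actual test statistic may differ.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(emb, adjectives, concept_words):
    """Mean cosine similarity between adjectives and the words standing
    for a concept, within one embedding space."""
    sims = [cosine(emb[a], emb[c]) for a in adjectives for c in concept_words]
    return sum(sims) / len(sims)

# Toy random vectors standing in for Baidu-Baike- vs Wikipedia-trained
# embeddings; with real corpora, the gap below is the quantity of interest.
rng = np.random.default_rng(1)
words = ["good", "free", "democracy"]
emb_baike = {w: rng.standard_normal(50) for w in words}
emb_wiki = {w: rng.standard_normal(50) for w in words}
gap = (association(emb_baike, ["good", "free"], ["democracy"])
       - association(emb_wiki, ["good", "free"], ["democracy"]))
```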
|
Graphene nanoribbons (GNRs) possess distinct symmetry-protected topological
phases. We show, through first-principles calculations, that by applying an
experimentally accessible transverse electric field (TEF), certain boron and
nitrogen periodically co-doped GNRs have tunable topological phases. The
tunability arises from a field-induced band inversion due to an opposite
response of the conduction- and valence-band states to the electric field.
With a spatially varying applied field, segments of GNRs with distinct
topological phases are created, resulting in a field-programmable array of
topological junction states, each of which may be occupied with charge or spin. Our findings not
only show that electric field may be used as an easy tuning knob for
topological phases in quasi-one-dimensional systems, but also provide new
design principles for future GNR-based quantum electronic devices through their
topological characters.
|
Self-supervised contrastive learning offers a means of learning informative
features from a pool of unlabeled data. In this paper, we delve into another
useful approach -- providing a way of selecting a coreset that is entirely
unlabeled. In this regard, contrastive learning, one of a large number of
self-supervised methods, was recently proposed and has consistently delivered
the highest performance. This prompted us to choose two leading methods for
contrastive learning: the simple framework for contrastive learning of visual
representations (SimCLR) and the momentum contrastive (MoCo) learning
framework. We calculated the cosine similarities for each example of an epoch
for the entire duration of the contrastive learning process and subsequently
accumulated the cosine-similarity values to obtain the coreset score. Our
assumption was that a sample with low similarity would likely behave as a
coreset member. Compared with existing coreset selection methods that require
labels, our approach reduces the cost associated with human annotation. The
unsupervised method implemented in this study for coreset selection obtained
improved results over a randomly chosen subset, and was comparable to
existing supervised coreset selection methods on various classification datasets (e.g., CIFAR,
SVHN, and QMNIST).
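The scoring procedure described, accumulating per-epoch cosine similarities and selecting the lowest-scoring samples, can be sketched as follows; the similarity matrix here is random stand-in data, and the selection size is arbitrary.

```python
import numpy as np

def coreset_scores(sim_per_epoch):
    """Accumulate the per-epoch cosine similarities of every example into
    a single score; low accumulated similarity marks a coreset candidate."""
    return np.sum(sim_per_epoch, axis=0)

def select_coreset(sim_per_epoch, k):
    scores = coreset_scores(sim_per_epoch)
    return np.argsort(scores)[:k]  # the k lowest-scoring examples

# 5 epochs x 8 examples of stand-in cosine similarities in [-1, 1].
rng = np.random.default_rng(0)
sims = rng.uniform(-1.0, 1.0, size=(5, 8))
idx = select_coreset(sims, k=3)
```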
|
We study the action of the homeomorphism group of a surface $S$ on the fine
curve graph ${\mathcal C }^\dagger(S)$. While the definition of
$\mathcal{C}^\dagger(S)$ parallels the classical curve graph for mapping class
groups, we show that the dynamics of the action of ${\mathrm{Homeo}}(S)$ on
$\mathcal{C}^\dagger(S)$ is much richer: homeomorphisms induce parabolic
isometries in addition to elliptics and hyperbolics, and all positive reals are
realized as asymptotic translation lengths.
When the surface $S$ is a torus, we relate the dynamics of the action of a
homeomorphism on $\mathcal{C}^\dagger(S)$ to the dynamics of its action on the
torus via the classical theory of rotation sets. We characterize homeomorphisms
acting hyperbolically, show asymptotic translation length provides a lower
bound for the area of the rotation set, and, while no characterization purely
in terms of rotation sets is possible, give sufficient conditions for
elements to be elliptic or parabolic.
|
A quantum internet aims at harnessing networked quantum technologies, namely
by distributing bipartite entanglement between distant nodes. However,
multipartite entanglement between the nodes may empower the quantum internet
for additional or better applications for communications, sensing, and
computation. In this work, we present an algorithm for generating multipartite
entanglement between different nodes of a quantum network with noisy quantum
repeaters and imperfect quantum memories, where the links are entangled pairs.
Our algorithm is optimal for GHZ states with 3 qubits, maximising
simultaneously the final state fidelity and the rate of entanglement
distribution. Furthermore, we determine the conditions yielding this
simultaneous optimality for GHZ states with a higher number of qubits, and for
other types of multipartite entanglement. Our algorithm is also general in
the sense that it can simultaneously optimise arbitrary parameters. This work opens
the way to optimally generate multipartite quantum correlations over noisy
quantum networks, an important resource for distributed quantum technologies.
|
We present a novel mapping for studying 2D many-body quantum systems by
solving an effective, one-dimensional long-range model in place of the original
two-dimensional short-range one. In particular, we address the problem of
choosing an efficient mapping from the 2D lattice to a 1D chain that optimally
preserves the locality of interactions within the TN structure. By using Matrix
Product States (MPS) and Tree Tensor Network (TTN) algorithms, we compute the
ground state of the 2D quantum Ising model in transverse field with lattice
size up to $64\times64$, comparing the results obtained from different mappings
based on two space-filling curves, the snake curve and the Hilbert curve. We
show that the locality-preserving properties of the Hilbert curve lead to a
clear improvement in numerical precision, especially for large sizes, and
turn out to provide the best performance for the simulation of 2D lattice
systems via 1D TN structures.
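The 2D-to-1D Hilbert-curve mapping underlying the comparison can be sketched with the standard bit-manipulation algorithm; this is a generic implementation of the curve, not the authors' tensor-network code.

```python
def hilbert_index(order, x, y):
    """Position of lattice site (x, y) along the Hilbert curve filling a
    2**order x 2**order grid -- the 2D -> 1D mapping discussed in the text."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x &= s - 1  # keep within-quadrant coordinates
        y &= s - 1
        if ry == 0:  # rotate/reflect so consecutive sub-curves join up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Order the sites of an 8x8 lattice along the curve: consecutive indices
# are nearest neighbours, which is what preserves interaction locality.
n = 8
position = {hilbert_index(3, x, y): (x, y) for x in range(n) for y in range(n)}
path = [position[d] for d in range(n * n)]
```

The snake curve, by contrast, makes long jumps between row ends, so vertically adjacent lattice sites can end up far apart on the chain.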
|
Biological agents have meaningful interactions with their environment despite
the absence of immediate reward signals. In such instances, the agent can learn
preferred modes of behaviour that lead to predictable states -- necessary for
survival. In this paper, we pursue the notion that this learnt behaviour can be
a consequence of reward-free preference learning that ensures an appropriate
trade-off between exploration and preference satisfaction. For this, we
introduce a model-based Bayesian agent equipped with a preference learning
mechanism (pepper) using conjugate priors. These conjugate priors are used to
augment the expected free energy planner for learning preferences over states
(or outcomes) across time. Importantly, our approach enables the agent to learn
preferences that encourage adaptive behaviour at test time. We illustrate this
in the OpenAI Gym FrozenLake and the 3D mini-world environments -- with and
without volatility. Given a constant environment, these agents learn confident
(i.e., precise) preferences and act to satisfy them. Conversely, in a volatile
setting, perpetual preference uncertainty maintains exploratory behaviour. Our
experiments suggest that learnable (reward-free) preferences entail a trade-off
between exploration and preference satisfaction. Pepper offers a
straightforward framework suitable for designing adaptive agents when reward
functions cannot be predefined as in real environments.
|
We introduce a new supervised learning algorithm for training spiking neural
networks to perform classification. The algorithm overcomes a limitation of
existing multi-spike learning methods: it solves the problem of interference
between interacting output spikes during a learning trial. This problem of
learning interference causes learning performance in existing approaches to
decrease as the number of output spikes increases, and represents an important
limitation in existing multi-spike learning approaches. We address learning
interference by introducing a novel mechanism to balance the magnitudes of
weight adjustments during learning, which in theory allows every spike to
converge simultaneously to its desired timing. Our results indicate that our
method achieves significantly higher memory capacity and faster convergence
compared to existing approaches for multi-spike classification. In the
ubiquitous Iris and MNIST datasets, our algorithm achieves competitive
predictive performance with state-of-the-art approaches.
|
We exhibit explicit and easily realisable bijections between Hecke--Kiselman
monoids of type $A_n$/$\widetilde{A}_n$; certain braid diagrams on the
plane/cylinder; and couples of integer sequences of particular types. This
yields a fast solution of the word problem and an efficient normal form for
these HK monoids. Yang--Baxter type actions play an important role in our
constructions.
|
Here, we design two promising schemes to realize high-entropy structures in a
series of quasi-two-dimensional compounds, the transition metal
dichalcogenides (TMDCs). In the intra-layer high-entropy scheme, (HEM)X2
compounds with a high-entropy structure within the MX2 slabs were obtained,
where HEM denotes high-entropy metals such as TiZrNbMoTa; superconductivity
with Tc~7.4 K was found in a Mo-rich (HEM)X2. In the intercalation scheme, we
intercalated HEM atoms into the gap between the sandwiched MX2 slabs,
resulting in a series of (HEM)xMX2 compounds with x in the range of 0~0.5, in
which the HEM is mainly composed of 3d transition metal elements, such as
FeCoCrNiMn. With the introduction of multi-component magnetic atoms,
ferromagnetic spin-glass states with strong 2D characteristics ensued. Tuning
the x content, three kinds of two-dimensional lattices in the high-entropy
intercalated layer were observed, including the 1*1 triangular lattice and
two kinds of superlattices, \sqrt3*\sqrt3 and \sqrt3*2, at x=0.333 and x>0.5,
respectively. Meanwhile, the spin frustration in the two-dimensional
high-entropy magnetic plane is enhanced with the development of the
\sqrt3*\sqrt3 superlattice and is reduced significantly upon changing into
the \sqrt3*2 phase. The high-entropy TMDCs and the versatile two-dimensional
high-entropy structures found here possess great potential for uncovering new
physics in low-dimensional high-entropy structures and for future
applications.
|
Reversible data hiding in encrypted images (RDH-EI) has attracted increasing
attention, since it can protect the privacy of original images while the
embedded data can be exactly extracted. Recently, some RDH-EI schemes with
multiple data hiders have been proposed using the secret sharing technique.
However, these schemes protect the contents of the original images with only
a lightweight security level. In this paper, we propose a high-security
RDH-EI scheme with multiple data hiders. First, we introduce a
cipher-feedback secret sharing (CFSS) technique. It follows cryptography
standards by introducing the cipher-feedback strategy of AES. Then, using the
CFSS technique, we devise a new (r,n)-threshold (r<=n) RDH-EI scheme with
multiple data hiders, called CFSS-RDHEI. It can encrypt an original image
into n encrypted images of reduced size using an encryption key, and sends each encrypted image to one data
hider. Each data hider can independently embed secret data into the encrypted
image to obtain the corresponding marked encrypted image. The original image
can be completely recovered from r marked encrypted images and the encryption
key. Performance evaluations show that our CFSS-RDHEI scheme has high embedding
rate and its generated encrypted images are much smaller, compared to existing
secret sharing-based RDH-EI schemes. Security analysis demonstrates that it
achieves a high security level and can defend against commonly used attacks.
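For background, a plain (r,n)-threshold secret sharing over a prime field (Shamir's scheme) is sketched below; the paper's CFSS technique adds an AES-style cipher-feedback strategy on top, which is not reproduced here, and the field modulus is an arbitrary choice.

```python
import random

P = 2**61 - 1  # prime field modulus; an arbitrary illustrative choice

def share(secret, r, n, rng=random):
    """Split `secret` into n shares so that any r of them recover it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(r - 1)]
    poly = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, r=3, n=5)
```

Because a degree-(r-1) polynomial is determined by r points, any r shares recover the secret while r-1 shares reveal nothing about it.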
|
Narrow linewidth visible light lasers are critical for atomic, molecular and
optical (AMO) applications including atomic clocks, quantum computing, atomic
and molecular spectroscopy, and sensing. Historically, such lasers are
implemented at the tabletop scale, using semiconductor lasers stabilized to
large optical reference cavities. Photonic integration of high spectral-purity
visible light sources will enable experiments to increase in complexity and
scale. Stimulated Brillouin scattering (SBS) is a promising approach to realize
highly coherent on-chip visible light laser emission. While progress has been
made on integrated SBS lasers at telecommunications wavelengths, barriers have
existed to translate this performance to the visible, namely the realization of
Brillouin-active waveguides in ultra-low optical loss photonics. We have
overcome this barrier, demonstrating the first visible light photonic
integrated SBS laser, which operates at 674 nm to address the 88Sr+ optical
clock transition. To guide the laser design, we use a combination of
multi-physics simulation and Brillouin spectroscopy in a 2 meter spiral
waveguide to identify the 25.110 GHz first order Stokes frequency shift and 290
MHz gain bandwidth. The laser is implemented in an 8.9 mm radius silicon
nitride all-waveguide resonator with 1.09 dB per meter loss and a Q of 55.4
million. Lasing is demonstrated, with an on-chip threshold of 14.7 mW, a 45% slope
efficiency, and linewidth narrowing as the pump is increased from below
threshold to 269 Hz. To illustrate the wavelength flexibility of this design,
we also demonstrate lasing at 698 nm, the wavelength for the optical clock
transition in neutral strontium. This demonstration of a waveguide-based,
photonic integrated SBS laser that operates in the visible, and the reduced
size and sensitivity to environmental disturbances, shows promise for diverse
AMO applications.
|
The cosmic-ray ionization rate ($\zeta$, s$^{-1}$) plays an important role in
the interstellar medium. It controls ion-molecular chemistry and provides a
source of heating. Here we perform a grid of calculations using the spectral
synthesis code CLOUDY along nine sightlines towards, HD 169454, HD 110432, HD
204827, $\lambda$ Cep, X Per, HD 73882, HD 154368, Cyg OB2 5, Cyg OB2 12. The
value of $\zeta$ is determined by matching the observed column densities of
H$_3^+$ and H$_2$. The presence of polycyclic aromatic hydrocarbons (PAHs)
affects the free electron density, which changes the H$_3^+$ density and the
derived ionization rate. PAHs are ubiquitous in the Galaxy, but there are also
regions where PAHs do not exist. Hence, we consider clouds with a range of PAH
abundances and show their effects on the H$_3^+$ abundance. We predict an
average cosmic-ray ionization rate for H$_2$ ($\zeta$(H$_2$))= (7.88 $\pm$
2.89) $\times$ 10$^{-16}$ s$^{-1}$ for models with average Galactic PAHs
abundances, (PAH/H =10$^{-6.52}$), except Cyg OB2 5 and Cyg OB2 12. The value
of $\zeta$ is nearly 1 dex smaller for sightlines toward Cyg OB2 12. We
estimate the average value of $\zeta$(H$_2$)= (95.69 $\pm$ 46.56) $\times$
10$^{-16}$ s$^{-1}$ for models without PAHs.
|
The rate-regulation trade-off defined between two objective functions, one
penalizing the packet rate and one the state deviation and control effort, can
express the performance bound of a networked control system. However, the
characterization of the set of globally optimal solutions in this trade-off for
multi-dimensional controlled Gauss-Markov processes has been an open problem.
In the present article, we characterize a policy profile that belongs to this
set. We prove that such a policy profile consists of a symmetric threshold
triggering policy, which can be expressed in terms of the value of information,
and a certainty-equivalent control policy, which uses a conditional expectation
with linear dynamics.
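The described policy profile can be illustrated with a toy scalar simulation; the dynamics, threshold, and gain below are hypothetical stand-ins for exposition, not the article's multi-dimensional solution:

```python
import random

# Toy sketch of the policy profile on a scalar controlled Gauss-Markov
# process x+ = a*x + b*u + w: a symmetric threshold triggering policy
# (transmit when |x - xhat| >= eta) paired with a certainty-equivalent
# controller acting on an estimate with linear dynamics. All parameter
# values are illustrative, not taken from the article.

def simulate(a=1.1, b=1.0, noise_std=0.1, eta=0.5, steps=1000, seed=0):
    rng = random.Random(seed)
    gain = a / b                     # illustrative certainty-equivalent gain
    x, xhat = 0.0, 0.0               # plant state and controller-side estimate
    transmissions = 0
    for _ in range(steps):
        if abs(x - xhat) >= eta:     # symmetric threshold triggering
            xhat = x                 # packet sent: estimate resets to state
            transmissions += 1
        u = -gain * xhat             # certainty-equivalent control
        x = a * x + b * u + rng.gauss(0.0, noise_std)
        xhat = a * xhat + b * u      # estimate propagates linearly
    return transmissions / steps     # empirical packet rate

rate_tight = simulate(eta=0.5)
rate_loose = simulate(eta=2.0)       # a looser threshold transmits less often
```

The trade-off is visible directly: raising the threshold lowers the packet rate at the price of larger state deviations between transmissions.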
|
The results obtained from state of the art human pose estimation (HPE) models
degrade rapidly when evaluated on people depicted at low resolution, but can super
resolution (SR) be used to help mitigate this effect? By using various SR
approaches we enhanced two low resolution datasets and evaluated the change in
performance of both an object and keypoint detector as well as end-to-end HPE
results. We make the following observations. First, we find that for people
who were originally depicted at a low resolution (segmentation area in pixels),
their keypoint detection performance would improve once SR was applied. Second,
the keypoint detection performance gain is dependent on that person's pixel
count in the original image prior to any application of SR; keypoint detection
performance was improved when SR was applied to people with a small initial
segmentation area, but degraded as this area became larger. To address this we
introduced a novel Mask-RCNN approach, utilising a segmentation area threshold
to decide when to use SR during the keypoint detection step. This approach
achieved the best results on our low resolution datasets for each HPE
performance metric.
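The thresholding idea behind this approach can be sketched in a few lines; the cutoff value and the nearest-neighbour "SR" stand-in below are hypothetical placeholders, not the paper's trained models:

```python
# Hypothetical sketch of the segmentation-area threshold: SR is applied
# during keypoint detection only for people whose initial segmentation
# area is small. The cutoff and the 2x upscale stand-in are illustrative
# placeholders, not the actual Mask-RCNN pipeline.

AREA_THRESHOLD = 32 * 32   # hypothetical pixel-area cutoff

def should_apply_sr(segmentation_area_px, threshold=AREA_THRESHOLD):
    """SR helped small person crops but degraded larger ones."""
    return segmentation_area_px < threshold

def upscale(crop):
    """Stand-in for an SR model: 2x nearest-neighbour upscale of a 2-D grid."""
    return [[v for v in row for _ in (0, 1)] for row in crop for _ in (0, 1)]

small_crop = [[1, 2], [3, 4]]
processed = upscale(small_crop) if should_apply_sr(100) else small_crop
```

In the real pipeline the decision would gate a learned SR model inside the keypoint detection step rather than a naive upscale.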
|
The Bloch theorem is a general theorem restricting the persistent current
associated with a conserved U(1) charge in a ground state or in a thermal
equilibrium. It gives an upper bound of the magnitude of the current density,
which is inversely proportional to the system size. In a recent preprint, Else
and Senthil applied the argument for the Bloch theorem to a generalized Gibbs
ensemble, assuming the presence of an additional conserved charge, and
predicted a nonzero current density in the nonthermal steady state [D. V. Else
and T. Senthil, arXiv:2106.15623]. In this work, we provide a complementary
derivation based on the canonical ensemble, given that the additional charge is
strictly conserved within the system by itself. Furthermore, using the example
where the additional conserved charge is the momentum operator, we argue that
the persistent current tends to vanish when the system is in contact with an
external momentum reservoir in the co-moving frame of the reservoir.
|
For a linear algebraic group $G$ over $\bf Q$, we consider the period domains
$D$ classifying $G$-mixed Hodge structures, and construct the extended period
domains $D_{\mathrm{BS}}$, $D_{\mathrm{SL}(2)}$, and $\Gamma \backslash
D_{\Sigma}$. In particular, we give toroidal partial compactifications of mixed
Mumford--Tate domains.
|
In this work, after attempting to improve the formulation of the model of
particle transport within astrophysical plasma outflows and constructing the
appropriate algorithms, we test the reliability and effectiveness of our method
through numerical simulations on well-studied Galactic microquasars such as the SS
433 and the Cyg X-1 systems. Then, we concentrate on predictions of the
associated emissions, focusing on detectable high energy neutrinos and
$\gamma$-rays originating from the extragalactic M33 X-7 system, which is a
recently discovered X-ray binary located in the neighboring galaxy Messier 33
and has not yet been modeled in detail. The particle and radiation energy
distributions, produced from magnetized astrophysical jets in the context of
our method, are assumed to originate from decay and scattering processes taking
place among the secondary particles created when hot (relativistic) protons of
the jet scatter on thermal (cold) ones (p-p interaction mechanism inside the
jet). These distributions are computed by solving the system of coupled
integro-differential transport equations of multi-particle processes (reactions
chain) following the inelastic proton-proton (p-p) collisions. For the
detection of such high energy neutrinos as well as multi-wavelength (radio,
X-ray and gamma-ray) emissions, extremely sensitive detection instruments are
in operation or have been designed, such as CTA, IceCube, ANTARES, KM3NeT,
IceCube-Gen-2, and other space telescopes.
|
With the ever-increasing speed and volume of knowledge production and
consumption, scholarly communication systems have been rapidly transformed into
digitised and networked open ecosystems, where preprint servers have played a
pivotal role. However, evidence is scarce regarding how this paradigm shift has
affected the dynamics of collective attention on scientific knowledge. Herein,
we address this issue by investigating the citation dynamics of more than 1.5
million eprints on arXiv, the most prominent and oldest eprint archive. The
discipline-average citation history curves are estimated by applying a
nonlinear regression model to the long-term citation data. The revealed
spatiotemporal characteristics, including the growth and obsolescence patterns,
are shown to vary across disciplines, reflecting the different publication and
citation practices. The results are used to develop a spatiotemporally
normalised citation index, called the $\gamma$-index, with an approximately
normal distribution. It can be used to compare the citational impact of
individual papers across disciplines and time periods, providing a less biased
measure of research impact than those widely used in the literature and in
practice. Further, a stochastic model for the observed spatiotemporal citation
dynamics is derived, reproducing both the Lognormal Law for the cumulative
citation distribution and the time trajectory of average citations in a unified
formalism.
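The normalisation idea can be sketched as follows; the saturating average-citation curve and its parameters are made-up illustrative stand-ins, not the paper's fitted model:

```python
import math

# Schematic sketch of a spatiotemporally normalised citation index in the
# spirit of the gamma-index: compare a paper's citations with the
# discipline-average citation history curve at the same age, on a log
# scale to tame the heavy-tailed citation distribution. The saturating
# curve and its parameters are illustrative, not the paper's fit.

def avg_citations(age_years, c_inf=30.0, tau=5.0):
    """Illustrative discipline-average cumulative citation curve."""
    return c_inf * (1.0 - math.exp(-age_years / tau))

def normalised_index(citations, age_years):
    baseline = avg_citations(age_years)
    return math.log((citations + 1.0) / (baseline + 1.0))

# a paper cited exactly at the discipline average scores 0;
# above-average papers score positive, below-average ones negative
score_avg = normalised_index(avg_citations(10.0), 10.0)
score_hot = normalised_index(100, 10.0)
```

Normalising against the age- and discipline-matched baseline is what makes scores comparable across fields and publication years.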
|
Behavioural symptoms and urinary tract infections (UTI) are among the most
common problems faced by people with dementia. One of the key challenges in the
management of these conditions is early detection and timely intervention in
order to reduce distress and avoid unplanned hospital admissions. Using in-home
sensing technologies and machine learning models for sensor data integration
and analysis provides opportunities to detect and predict clinically
significant events and changes in health status. We have developed an
integrated platform to collect in-home sensor data and performed an
observational study to apply machine learning models for agitation and UTI risk
analysis. We collected a large dataset from 88 participants with a mean age of
82 and a standard deviation of 6.5 (47 females and 41 males) to evaluate a new
deep learning model that utilises attention and rational mechanisms. The
proposed solution can process a large volume of data over a period of time and
extract significant patterns in time-series data (i.e. attention) and use the
extracted features and patterns to train risk analysis models (i.e. rational).
The proposed model can explain the predictions by indicating which time-steps
and features are used in a long series of time-series data. The model provides
a recall of 91\% and precision of 83\% in detecting the risk of agitation and
UTIs. This model can be used for early detection of conditions such as UTIs and
management of neuropsychiatric symptoms such as agitation in association with
initial treatment and early intervention approaches. In our study we have
developed a set of clinical pathways for early interventions using the alerts
generated by the proposed model and a clinical monitoring team has been set up
to use the platform and respond to the alerts according to the created
intervention plans.
|
The rook graph is a graph whose edges represent all the possible legal moves
of the rook chess piece on a chessboard. The problem we consider is the
following. Given any set $M$ containing pairs of cells such that each cell of
the $m_1 \times m_2$ chessboard is in exactly one pair, we determine the values
of the positive integers $m_1$ and $m_2$ for which it is possible to construct
a closed tour of all the cells of the chessboard which uses all the pairs of
cells in $M$ and some edges of the rook graph. This is an alternative
formulation of a graph-theoretical problem presented in [Electron. J. Combin.
28(1) (2021), #P1.7] involving the Cartesian product $G$ of two complete graphs
$K_{m_1}$ and $K_{m_2}$, which is, in fact, isomorphic to the $m_{1}\times
m_{2}$ rook graph. The problem revolves around determining the values of the
parameters $m_1$ and $m_2$ that would allow any perfect matching of the
complete graph on the same vertex set of $G$ to be extended to a Hamiltonian
cycle by using only edges in $G$.
|
Long-lived storage of arbitrary transverse multimodes is important for
establishing a high-channel-capacity quantum network. Most of the pioneering
works focused on atomic diffusion as the dominant impact on the retrieved
pattern in an atom-based memory. In this work, we demonstrate that the
unsynchronized Larmor precession of atoms in the inhomogeneous magnetic field
dominates the distortion of the pattern stored in a cold-atom-based memory. We
find that this distortion effect can be eliminated by applying a strong uniform
polarization magnetic field. By preparing atoms in magnetically insensitive
states, the destructive interference between different spin-wave components is
diminished, and the stored localized patterns are synchronized further in a
single spin-wave component; then, an obvious enhancement in preserving patterns
for a long time is obtained. The reported results are very promising for
studying transverse multimode decoherence in storage and high-dimensional
quantum networks in the future.
|
In this paper, we consider a simplified model of turbulence for large
Reynolds numbers driven by a constant power energy input on large scales. In
the statistical stationary regime, the behaviour of the kinetic energy is
characterised by two well defined phases: a laminar phase where the kinetic
energy grows linearly for a (random) time $t_w$ followed by abrupt
avalanche-like energy drops of sizes $S$ due to strong intermittent
fluctuations of energy dissipation. We study the probability distribution
$P[t_w]$ and $P[S]$ which both exhibit a quite well defined scaling behaviour.
Although $t_w$ and $S$ are not statistically correlated, we suggest, and
numerically verify, that their scaling properties are related, based on a
simple but non-trivial scaling argument. We propose that the same approach
can be used for other systems showing avalanche-like behaviour such as
amorphous solids and seismic events.
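A minimal extraction of the waiting times $t_w$ and drop sizes $S$ from an energy signal can be sketched as follows; the drop criterion and the synthetic sawtooth signal are illustrative simplifications:

```python
# Minimal sketch: identify avalanche-like energy drops in a kinetic-energy
# time series, recording the drop size S and the waiting time t_w of the
# preceding laminar growth phase. The drop criterion (any decrease) and
# the synthetic signal are illustrative simplifications.

def avalanches(energy):
    waits, sizes = [], []
    t_last = 0
    for t in range(1, len(energy)):
        step = energy[t] - energy[t - 1]
        if step < 0:                    # abrupt energy drop
            sizes.append(-step)         # drop size S
            waits.append(t - t_last)    # laminar waiting time t_w
            t_last = t
    return waits, sizes

# synthetic sawtooth: linear growth interrupted by two drops
signal = [0, 1, 2, 3, 0.5, 1.5, 2.5, 3.5, 4.5, 1.0]
waits, sizes = avalanches(signal)
```

On real data one would histogram `waits` and `sizes` to estimate $P[t_w]$ and $P[S]$ and test their scaling behaviour.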
|
Planar graphs can be represented as intersection graphs of different types of
geometric objects in the plane, e.g., circles (Koebe, 1936), line segments
(Chalopin \& Gon{\c{c}}alves, 2009), \textsc{L}-shapes (Gon{\c{c}}alves et al,
2018). For general graphs, however, even deciding whether such representations
exist is often $NP$-hard. We consider apex graphs, i.e., graphs that can be
made planar by removing one vertex from them. We show, somewhat surprisingly,
that deciding whether geometric representations exist for apex graphs is
$NP$-hard.
More precisely, we show that for every positive integer $k$, the recognition
problem for every graph class $\mathcal{G}$ which satisfies
$\textsc{PURE-2-DIR} \subseteq \mathcal{G} \subseteq \textsc{1-STRING}$ is
$NP$-hard, even when the input graphs are apex graphs of girth at least $k$.
Here, \textsc{PURE-2-DIR} is the class
of intersection graphs of axis-parallel line segments (where intersections are
allowed only between horizontal and vertical segments) and \textsc{1-STRING} is
the class of intersection graphs of simple curves (where two curves share at
most one point) in the plane. This partially answers an open question raised by
Kratochv{\'\i}l \& Pergel (2007).
Most known $NP$-hardness reductions for these problems are from variants of
3-SAT. We reduce from the \textsc{PLANAR HAMILTONIAN PATH COMPLETION} problem,
which uses the more intuitive notion of planarity. As a result, our proof is
much simpler and encapsulates several classes of geometric graphs.
|
The present paper reports on the numerical investigation of lifted turbulent
jet flames with H2/N2 fuel issuing into a vitiated coflow of lean combustion
products of H2/air using conditional moment closure method (CMC). A 2D
axisymmetric formulation has been used for the predictions of fluid flow, while
CMC equations are solved with detailed chemistry to represent the
turbulence-chemistry interaction. Simulations are carried out for different
coflow temperatures, jet and coflow velocities in order to investigate the
impact on the flame lift-off height as well as on the flame stabilization.
Furthermore, the role of conditional velocity models on the flame has also been
investigated. In addition, the effect of mixing is investigated over a range of
coflow temperatures and the stabilization mechanism is determined from the
analysis of the transport budgets. It is found that the lift-off height is
highly sensitive to the coflow temperature, while the predicted lift-off height
using the mixing model constant $C_\Phi = 4$ is found to be the closest to
the experimental results. For all the coflow temperatures, the balance is found
between the chemical, axial convection and molecular diffusion terms while the
contribution from axial and radial diffusion is negligible, thus indicating
auto-ignition as the flame stabilization mechanism.
|
When Rashba and Dresselhaus spin-orbit couplings are both present for a
two-dimensional electron in a perpendicular magnetic field, a striking
resemblance to the anisotropic quantum Rabi model in quantum optics is found. We
perform a generalized Rashba coupling approximation to obtain a solvable
Hamiltonian by keeping the nearest-mixing terms of Landau states, which is
reformulated in a form similar to that with only Rashba coupling. Each Landau
state becomes a new displaced Fock state with a displacement shift instead of
the original harmonic-oscillator Fock state, yielding eigenstates in closed
form. Analytical energies are consistent with numerical ones in a wide range of
coupling strength even for a strong Zeeman splitting. In the presence of an
electric field, the spin conductance and the charge conductance obtained
analytically are in good agreement with the numerical results. As the
component of the Dresselhaus coupling increases, we find that the spin Hall
conductance exhibits a pronounced resonant peak at a larger value of the
inverse of the magnetic field. Meanwhile, the charge conductance exhibits a
series of plateaus as well as a jump at the resonant magnetic field. Our method
provides an easy-to-implement analytical treatment to two-dimensional electron
gas systems with both types of spin-orbit couplings.
|
In recent years, there has been a resurgence in methods that use distributed
(neural) representations to represent and reason about semantic knowledge for
robotics applications. However, while robots often observe previously unknown
concepts, these representations typically assume that all concepts are known a
priori, and incorporating new information requires all concepts to be learned
afresh. Our work relaxes this limiting assumption of existing representations
and tackles the incremental knowledge graph embedding problem by leveraging the
principles of a range of continual learning methods. Through an experimental
evaluation with several knowledge graphs and embedding representations, we
provide insights about trade-offs for practitioners to match a semantics-driven
robotics application to a suitable continual knowledge graph embedding method.
|
As power systems undergo a significant transformation, with more
uncertainty, less inertia, and operation closer to their limits, there is an
increasing risk of large outages. Thus, there is an imperative need to enhance grid
emergency control to maintain system reliability and security. Towards this
end, great progress has been made in developing deep reinforcement learning
(DRL) based grid control solutions in recent years. However, existing DRL-based
solutions have two main limitations: 1) they cannot handle a wide range of
grid operation conditions, system parameters, and contingencies well; 2) they
generally lack the ability to adapt quickly to new grid operation conditions,
system parameters, and contingencies, limiting their applicability for
real-world applications. In this paper, we mitigate these limitations by
developing a novel deep meta reinforcement learning (DMRL) algorithm. The DMRL
combines the meta strategy optimization together with DRL, and trains policies
modulated by a latent space that can quickly adapt to new scenarios. We test
the developed DMRL algorithm on the IEEE 300-bus system. We demonstrate fast
adaptation of the meta-trained DRL policies with latent variables to new
operating conditions and scenarios using the proposed method and achieve
superior performance compared to the state-of-the-art DRL and model predictive
control (MPC) methods.
|
Full-stack autonomous driving perception modules usually consist of
data-driven models based on multiple sensor modalities. However, these models
might be biased to the sensor setup used for data acquisition. This bias can
seriously impair the perception models' transferability to new sensor setups,
which continuously occur due to the market's competitive nature. We envision
sensor data abstraction as an interface between sensor data and machine
learning applications for highly automated vehicles (HAD).
For this purpose, we review the primary sensor modalities, camera, lidar, and
radar, published in autonomous-driving related datasets, examine single sensor
abstraction and abstraction of sensor setups, and identify critical paths
towards an abstraction of sensor data from multiple perception configurations.
|
We reconstruct the Lorentzian graviton propagator in asymptotically safe
quantum gravity from Euclidean data. The reconstruction is applied to both the
dynamical fluctuation graviton and the background graviton propagator. We prove
that the spectral function of the latter necessarily has negative parts,
similar to, and for the same reasons as, the gluon spectral function. In turn, the
spectral function of the dynamical graviton is positive. We argue that the
latter enters cross sections and other observables in asymptotically safe
quantum gravity. Hence, its positivity may hint at the unitarity of
asymptotically safe quantum gravity.
|
Distributed data processing ecosystems are widespread and their components
are highly specialized, such that efficient interoperability is urgent.
Recently, Apache Arrow was chosen by the community to serve as a format
mediator, providing efficient in-memory data representation. Arrow enables
efficient data movement between data processing and storage engines,
significantly improving interoperability and overall performance. In this work,
we design a new zero-cost data interoperability layer between Apache Spark and
Arrow-based data sources through the Arrow Dataset API. Our novel data
interface helps separate the computation (Spark) and data (Arrow) layers. This
enables practitioners to seamlessly use Spark to access data from all Arrow
Dataset API-enabled data sources and frameworks. To benefit our community, we
open-source our work and show that consuming data through Apache Arrow is
zero-cost: our novel data interface is either on-par or more performant than
native Spark.
|
Excessive evaporative loss of water from the topsoil in arid-land agriculture
is compensated via irrigation, which exploits massive freshwater resources. The
cumulative effects of decades of unsustainable freshwater consumption in many
arid regions are now threatening food-water security. While plastic mulches can
reduce evaporation from the topsoil, their cost and non-biodegradability limit
their utility. In response, we report on superhydrophobic sand (SHS), a
bio-inspired enhancement of common sand with a nanoscale wax coating. When SHS
was applied as a 5 mm-thick mulch over the soil, evaporation was dramatically
reduced and crop yields increased. Multi-year field trials of SHS application
with tomato (Solanum lycopersicum), barley (Hordeum vulgare), and wheat
(Triticum aestivum) under normal irrigation enhanced yields by 17%-73%. Under
brackish water irrigation (5500 ppm NaCl), SHS mulching produced 53%-208%
higher fruit and grain yields for tomato and barley. Thus, SHS could
benefit agriculture and city-greening in arid regions.
|
Light curves of the accreting white dwarf pulsator GW Librae spanning a 7.5
month period in 2017 were obtained as part of the Next Generation Transit
Survey. This data set comprises 787 hours of photometry from 148 clear nights,
allowing the behaviour of the long (hours) and short period (20min) modulation
signals to be tracked from night to night over a much longer observing baseline
than has been previously achieved. The long period modulations intermittently
detected in previous observations of GW Lib are found to be a persistent
feature, evolving between states with periods ~83min and 2-4h on time-scales of
several days. The 20min signal is found to have a broadly stable amplitude and
frequency for the duration of the campaign, but the previously noted phase
instability is confirmed. Ultraviolet observations obtained with the Cosmic
Origin Spectrograph onboard the Hubble Space Telescope constrain the
ultraviolet-to-optical flux ratio to ~5 for the 4h modulation, and <=1 for the
20min period, with caveats introduced by non-simultaneous observations. These
results add further observational evidence that these enigmatic signals must
originate from the white dwarf, highlighting our continued gap in theoretical
understanding of the mechanisms that drive them.
|
Code summarization is the task of generating natural language description of
source code, which is important for program understanding and maintenance.
Existing approaches treat the task as a machine translation problem (e.g., from
Java to English) and apply Neural Machine Translation (NMT) models to solve the
problem. These approaches only consider a given code unit (e.g., a method)
without its broader context. The lack of context may hinder the NMT model
from gathering sufficient information for code summarization. Furthermore,
existing approaches use a fixed vocabulary and do not fully consider the words
in code, while many words in the code summary may come from the code. In this
work, we present a neural network model named ToPNN for code summarization,
which uses the topics in a broader context (e.g., class) to guide the neural
networks that combine the generation of new words and the copy of existing
words in code. Based on the model we present an approach for generating natural
language code summaries at the method level (i.e., method comments). We
evaluate our approach using a dataset with 4,203,565 commented Java methods.
The results show significant improvement over state-of-the-art approaches and
confirm the positive effect of class topics and the copy mechanism.
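The copy mechanism mentioned above can be illustrated schematically; the mixing gate and all probabilities below are invented numbers, and this is not ToPNN's actual architecture:

```python
# Toy illustration of a copy mechanism (not ToPNN itself): the output word
# distribution mixes a generation distribution over a fixed vocabulary with
# a copy distribution over tokens of the source code, weighted by a gate
# p_copy. All probabilities below are invented for illustration.

def mix_distributions(gen_probs, source_tokens, copy_weights, p_copy=0.4):
    out = {w: (1.0 - p_copy) * p for w, p in gen_probs.items()}
    z = sum(copy_weights)
    for tok, w in zip(source_tokens, copy_weights):
        # out-of-vocabulary code tokens enter the output via copying
        out[tok] = out.get(tok, 0.0) + p_copy * w / z
    return out

gen = {"returns": 0.6, "the": 0.4}            # generation distribution
dist = mix_distributions(gen, ["maxValue", "the"], [0.7, 0.3])
```

This is what lets summary words like `maxValue` come directly from the code even when they are absent from the fixed vocabulary.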
|
We show that the identification problem for a class of dynamic panel logit
models with fixed effects has a connection to the truncated moment problem in
mathematics. We use this connection to show that the sharp identified set of
the structural parameters is characterized by a set of moment equality and
inequality conditions. This result provides sharp bounds in models where moment
equality conditions do not exist or do not point identify the parameters. We
also show that the sharp identifying content of the non-parametric latent
distribution of the fixed effects is characterized by a vector of its
generalized moments, and that the number of moments grows linearly in T. This
final result lets us point identify, or sharply bound, specific classes of
functionals, without solving an optimization problem with respect to the latent
distribution.
|
Task environments developed in Minecraft are becoming increasingly popular
for artificial intelligence (AI) research. However, most of these are currently
constructed manually, thus failing to take advantage of procedural content
generation (PCG), a capability unique to virtual task environments. In this
paper, we present mcg, an open-source library to facilitate implementing PCG
algorithms for voxel-based environments such as Minecraft. The library is
designed with human-machine teaming research in mind, and thus takes a
'top-down' approach to generation, simultaneously generating low and high level
machine-readable representations that are suitable for empirical research.
These can be consumed by downstream AI applications that consider human spatial
cognition. The benefits of this approach include rapid, scalable, and efficient
development of virtual environments, the ability to control the statistics of
the environment at a semantic level, and the ability to generate novel
environments in response to player actions in real time.
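The 'top-down' generation idea can be sketched generically; note this is not mcg's actual API, only an illustration of keeping a high-level semantic plan and a low-level voxel grid consistent:

```python
# Generic illustration of 'top-down' PCG (not the mcg API): generate a
# high-level, machine-readable semantic plan first, then realise it as a
# low-level voxel grid, so the two representations stay consistent by
# construction. Names and materials are invented for illustration.

def make_room(x, z, width, depth, material="planks"):
    """High-level semantic description of one room."""
    return {"type": "room", "x": x, "z": z,
            "width": width, "depth": depth, "material": material}

def realise(plan, size=16):
    """Low-level voxel realisation (a 2-D floor plan, for brevity)."""
    grid = [["air"] * size for _ in range(size)]
    for room in plan:
        for i in range(room["z"], room["z"] + room["depth"]):
            for j in range(room["x"], room["x"] + room["width"]):
                grid[i][j] = room["material"]
    return grid

plan = [make_room(2, 2, 5, 4), make_room(9, 8, 4, 4, material="stone")]
grid = realise(plan)
```

Because the voxels are derived from the plan, downstream AI applications can consume either the semantic plan or the raw grid without risk of the two diverging.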
|
Aspects of ultrahomogeneous and existentially closed Heyting algebras are
studied. Roelcke non-precompactness, non-simplicity, and non-amenability of the
automorphism group of the Fra\"iss\'e limit of finite Heyting algebras are
examined among others.
|
With the continuing spread of misinformation and disinformation online, it is
of increasing importance to develop combating mechanisms at scale in the form
of automated systems that support multiple languages. One task of interest is
claim veracity prediction, which can be addressed using stance detection with
respect to relevant documents retrieved online. To this end, we present our new
Arabic Stance Detection dataset (AraStance) of 4,063 claim--article pairs from
a diverse set of sources comprising three fact-checking websites and one news
website. AraStance covers false and true claims from multiple domains (e.g.,
politics, sports, health) and several Arab countries, and it is well-balanced
between related and unrelated documents with respect to the claims. We
benchmark AraStance, along with two other stance detection datasets, using a
number of BERT-based models. Our best model achieves an accuracy of 85\% and a
macro F1 score of 78\%, which leaves room for improvement and reflects the
challenging nature of AraStance and the task of stance detection in general.
|
Penrose et al. investigated the physical incoherence of the spacetime with
negative mass via the bending of light. Precise estimates of time-delay of null
geodesics were needed and played a pivotal role in their proof. In this paper,
we construct an intermediate diagonal metric and make a reduction of this
problem to a causality comparison in the compactified spacetimes regarding
timelike connectedness near the conformal infinities. This different approach
allows us to avoid encountering the difficulties and subtle issues Penrose et
al. met. It provides a new, substantially simple, and physically natural
non-PDE viewpoint to understand the positive mass theorem. This elementary
argument modestly applies to asymptotically flat solutions which are vacuum and
stationary near infinity.
|
Traditional channel coding with feedback constructs and transmits a codeword
only after all message bits are available at the transmitter. This paper joins
Guo & Kostina and Lalitha et al. in developing approaches for causal (or
progressive) encoding, where the transmitter may begin transmitting codeword
symbols as soon as the first message bit arrives. Building on the work of
Horstein, Shayevitz and Feder, and Naghshvar et al., this paper extends our
previous computationally efficient systematic algorithm for traditional
posterior matching to produce a four-phase encoder that progressively encodes
using only the message bits causally available. Systematic codes work well with
posterior matching on a channel with feedback, and they provide an immediate
benefit when causal encoding is employed instead of traditional encoding. Our
algorithm captures additional gains in the interesting region where the
transmission rate $\mu$ is higher than the rate $\lambda$ at which message bits
become available. In this region, transmission of additional symbols beyond
systematic bits, before a traditional encoder would have begun transmission,
further improves performance.
|
Recently, there have been efforts towards understanding the sampling
behaviour of event-triggered control (ETC), for obtaining metrics on its
sampling performance and predicting its sampling patterns. Finite-state
abstractions, capturing the sampling behaviour of ETC systems, have proven
promising in this respect. So far, such abstractions have been constructed for
non-stochastic systems. Here, inspired by this framework, we abstract the
sampling behaviour of stochastic narrow-sense linear periodic ETC (PETC)
systems via Interval Markov Chains (IMCs). Particularly, we define functions
over sequences of state-measurements and interevent times that can be expressed
as discounted cumulative sums of rewards, and compute bounds on their expected
values by constructing appropriate IMCs and equipping them with suitable
rewards. Finally, we argue that our results are extendable to more general
forms of functions, thus providing a generic framework to define and study
various ETC sampling indicators.
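The bounding computation can be sketched with a toy interval value iteration; the two-state IMC, its transition intervals, and the rewards below are invented for illustration:

```python
# Toy sketch of bounding an expected discounted cumulative reward on an
# Interval Markov Chain: value iteration in which, at every step, transition
# probabilities are chosen inside the intervals to minimise (lower bound)
# or maximise (upper bound) the value. The IMC below is invented.

def extreme_dist(lo, hi, order):
    """Feasible distribution pushing maximal mass onto early states of `order`."""
    p = list(lo)
    remaining = 1.0 - sum(lo)
    for s in order:
        add = min(hi[s] - lo[s], remaining)
        p[s] += add
        remaining -= add
    return p

def bound(lo, hi, reward, gamma=0.9, minimise=True, iters=500):
    n = len(reward)
    v = [0.0] * n
    for _ in range(iters):
        # adversary favours low-value states (lower bound) or high-value ones
        order = sorted(range(n), key=lambda s: v[s], reverse=not minimise)
        new_v = []
        for i in range(n):
            p = extreme_dist(lo[i], hi[i], order)
            new_v.append(reward[i] + gamma * sum(p[s] * v[s] for s in range(n)))
        v = new_v
    return v

lo = [[0.2, 0.5], [0.1, 0.6]]   # per-state interval lower bounds
hi = [[0.5, 0.8], [0.4, 0.9]]   # per-state interval upper bounds
r = [1.0, 0.0]                  # reward collected in each state
v_lo = bound(lo, hi, r, minimise=True)
v_hi = bound(lo, hi, r, minimise=False)
```

With rewards chosen to encode a sampling indicator (e.g., a function of interevent times), the interval `[v_lo, v_hi]` brackets its expected discounted value.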
|
Various unusual behaviors of artificial materials are governed by their
topological properties, among which the edge state at the boundary of a
photonic or phononic lattice has attracted particular attention. However,
this remarkable bulk-boundary correspondence and the related phenomena are
missing in thermal materials. One reason is that heat diffusion is described in
a non-Hermitian framework because of its dissipative nature. The other is that
the relevant temperature field is mostly composed of modes that extend over
wide ranges, making it difficult to be rendered within the tight-binding theory
as commonly employed in wave physics. Here, we overcome the above challenges
and perform systematic studies on heat diffusion in thermal lattices. Based on
a continuum model, we introduce a state vector to link the Zak phase with the
existence of the edge state, and thereby analytically prove the thermal
bulk-boundary correspondence. We experimentally demonstrate the predicted edge
states with a topologically protected and localized heat dissipation capacity.
Our finding sets up a solid foundation to explore the topology in novel heat
transfer manipulations.
|
Initial hopes of quickly eradicating the COVID-19 pandemic proved futile, and
the goal shifted to controlling the peak of the infection, so as to minimize
the load on healthcare systems. To that end, public health authorities
intervened aggressively to institute social distancing, lock-down policies, and
other Non-Pharmaceutical Interventions (NPIs). Given the high social,
educational, psychological, and economic costs of NPIs, authorities tune them,
alternatively tightening up or relaxing rules, so that, in effect, the
infection rate stays relatively flat. For example, during the summer in
parts of the United States, daily infection numbers dropped to a plateau. This
paper approaches NPI tuning as a control-theoretic problem, starting from a
simple dynamic model for social distancing based on the classical SIR epidemics
model. Using a singular-perturbation approach, the plateau becomes a
Quasi-Steady-State (QSS) of a reduced two-dimensional SIR model regulated by
adaptive dynamic feedback. It is shown that the QSS can be assigned and it is
globally asymptotically stable. Interestingly, the dynamic model for social
distancing can be interpreted as a nonlinear integral controller. Problems of
data fitting and parameter identifiability are also studied for this model. The
paper also discusses how this simple model allows for meaningful study of the
effect of population size, vaccinations, and the emergence of second waves.
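A minimal version of such a model can be simulated directly; the parameters, the distancing law, and the Euler scheme below are illustrative choices, not the paper's calibrated model:

```python
# Illustrative Euler simulation of SIR dynamics with an adaptive social
# distancing factor d(t) acting like an integral controller: distancing
# tightens while infections exceed a target level and relaxes otherwise,
# flattening the infection curve. All parameters are invented.

def simulate(beta=0.3, gamma=0.1, k=2.0, i_target=0.05, T=400.0, dt=0.1):
    s, i, r, d = 0.99, 0.01, 0.0, 0.0
    peak = i
    for _ in range(int(T / dt)):
        eff_beta = beta / (1.0 + d)                # distancing cuts transmission
        new_inf = eff_beta * s * i
        rec = gamma * i
        s, i, r = s - dt * new_inf, i + dt * (new_inf - rec), r + dt * rec
        d = max(0.0, d + dt * k * (i - i_target))  # integral-like NPI tuning
        peak = max(peak, i)
    return peak, s + i + r

peak_controlled, total = simulate()
peak_uncontrolled, _ = simulate(k=0.0)             # no adaptive distancing
```

The adaptive term drives the infection level toward a plateau near `i_target`, which is the quasi-steady-state behaviour the singular-perturbation analysis formalises.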
|
The discovery of superconductivity in infinite-layer nickelates brings us
tantalizingly close to a new material class that mirrors the cuprate
superconductors. Here, we report on magnetic excitations in these nickelates,
measured using resonant inelastic x-ray scattering (RIXS) at the Ni L3-edge, to
shed light on the material complexity and microscopic physics. Undoped NdNiO2
possesses a branch of dispersive excitations with a bandwidth of approximately
200 meV, reminiscent of strongly-coupled, antiferromagnetically aligned spins
on a square lattice, despite a lack of evidence for long range magnetic order.
The significant damping of these modes indicates the importance of coupling to
rare-earth itinerant electrons. Upon doping, the spectral weight and energy
decrease slightly, while the modes become overdamped. Our results highlight the
role of Mottness in infinite-layer nickelates.
|
In the next decades, ultra-high-energy neutrinos in the EeV energy range will
be potentially detected by next-generation neutrino telescopes. Although their
primary goals are to observe cosmogenic neutrinos and to gain insight into
extreme astrophysical environments, they can also indirectly probe the nature
of dark matter. In this paper, we study the projected sensitivity of upcoming
neutrino radio telescopes, such as RNO-G, GRAND, and the IceCube-Gen2 radio
array, to decaying dark matter scenarios. We investigate different dark matter
decay channels and masses, from $10^7$ to $10^{15}$ GeV. By assuming the
observation of cosmogenic or newborn pulsar neutrinos, we forecast conservative
constraints on the lifetime of heavy dark matter particles. We find that these
limits are competitive with and highly complementary to previous
multi-messenger analyses.
|
Decision trees and their ensembles are very popular models of supervised
machine learning. In this paper we merge the ideas underlying decision trees,
their ensembles and FCA by proposing a new supervised machine learning model
which can be constructed in polynomial time and is applicable for both
classification and regression problems. Specifically, we first propose a
polynomial-time algorithm for constructing a part of the concept lattice that
is based on a decision tree. Second, we describe a prediction scheme based on a
concept lattice for solving both classification and regression tasks with
prediction quality comparable to that of state-of-the-art models.
|
This paper presents a new stochastic finite element method for computing
structural stochastic responses. The method provides a new expansion of the
stochastic response, decoupling it into a combination of deterministic
responses with random-variable coefficients. A
dedicated iterative algorithm is proposed to determine the deterministic
responses and corresponding random variable coefficients one by one. The
algorithm computes the deterministic responses and corresponding random
variable coefficients in their individual space and is insensitive to
stochastic dimensions, thus it can be applied to high dimensional stochastic
problems readily without extra difficulties. More importantly, the
deterministic responses can be computed efficiently using existing Finite
Element Method (FEM) solvers, so the proposed method is easy to embed into
existing FEM structural analysis software. Three practical examples,
including low-dimensional and high-dimensional stochastic problems, are given
to demonstrate the accuracy and effectiveness of the proposed method.
|
Controllable person image generation aims to produce realistic human images
with desirable attributes (e.g., the given pose, cloth textures or hair style).
However, the large spatial misalignment between the source and target images
makes the standard architectures for image-to-image translation not suitable
for this task. Most of the state-of-the-art architectures avoid the alignment
step during the generation, which causes many artifacts, especially for person
images with complex textures. To solve this problem, we introduce a novel
Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned
flow-field to warp modulation parameters. This allows us to align the
spatially-adaptive person styles with pose features efficiently. Moreover, we propose a
novel self-training part replacement strategy to refine the pretrained model
for the texture-transfer task, significantly improving the quality of the
generated cloth and the preservation ability of irrelevant regions. Our
experimental results on the widely used DeepFashion dataset demonstrate a
significant improvement of the proposed method over the state-of-the-art
methods on both pose-transfer and texture-transfer tasks. The source code is
available at https://github.com/zhangqianhui/Sawn.
|
We develop the Google matrix analysis of the multiproduct world trade network
obtained from the UN COMTRADE database in recent years. The comparison is done
between this new approach and the usual Import-Export description of this world
trade network. The Google matrix analysis takes into account the multiplicity
of trade transactions thus highlighting in a better way the world influence of
specific countries and products. It shows that after Brexit, the European Union
of 27 countries has the leading position in the world trade network ranking,
being ahead of the USA and China. Our approach also determines the sensitivity
of a country's trade balance to specific products, showing the dominant role of
machinery and mineral fuels in multiproduct exchanges. It also underlines the
growing influence of Asian countries.
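The Google matrix construction behind this ranking can be sketched on a toy trade network. The three "countries" and all trade volumes below are invented for illustration, and this scalar version omits the multiproduct structure the paper analyzes.

```python
import numpy as np

def google_matrix(volume, alpha=0.85):
    """Build a Google matrix G from a matrix of trade volumes, where
    volume[i, j] is the flow received by node i from node j. Each column
    is normalized to a probability distribution; empty (dangling) columns
    become uniform, and alpha is the usual damping factor."""
    n = volume.shape[0]
    col_sums = volume.sum(axis=0)
    safe = np.where(col_sums == 0, 1.0, col_sums)
    S = np.where(col_sums > 0, volume / safe, 1.0 / n)
    return alpha * S + (1 - alpha) / n * np.ones((n, n))

def pagerank(G, tol=1e-12, max_iter=1000):
    """Power iteration for the leading eigenvector of a column-stochastic G."""
    p = np.full(G.shape[0], 1.0 / G.shape[0])
    for _ in range(max_iter):
        p_next = G @ p
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p_next

# Toy 3-node network (rows/cols: "EU", "USA", "China"; volumes invented).
V = np.array([[0., 4., 3.],
              [2., 0., 2.],
              [3., 3., 0.]])
rank = pagerank(google_matrix(V))
```

Because G is column-stochastic, the iteration preserves normalization, and the stationary vector ranks nodes by the trade flows they attract rather than by raw import/export totals.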
|
We establish the correspondence between two apparently unrelated but in fact
complementary approaches of a relativistic deformed kinematics: the geometric
properties of momentum space and the loss of absolute locality in canonical
spacetime, which can be restored with the introduction of a generalized
spacetime. This correspondence is made explicit for the case of
$\kappa$-Poincar\'e kinematics and compared with its properties in the Hopf
algebra framework.
|
We investigate the possible presence of dark matter (DM) in massive and
rotating neutron stars (NSs). For this purpose, we extend our previous work [1]
to introduce a light new physics vector mediator besides a scalar one in order
to ensure feeble interaction between fermionic DM and $\beta$ stable hadronic
matter in NSs. The masses of DM fermion, the mediators and the couplings are
chosen consistent with the self-interaction constraint from Bullet cluster and
from present day relic abundance. Assuming that both the scalar and vector
mediators contribute equally to the relic abundance, we compute the equation of
state (EoS) of the DM admixed NSs to find that the present consideration of the
vector new physics mediator does not bring any significant change to the EoS and
static NS properties of DM admixed NSs compared to the case where only the
scalar mediator was considered [1]. However, the obtained structural properties
in static conditions are in good agreement with the various constraints on them
from massive pulsars like PSR J0348+0432 and PSR J0740+6620, the gravitational
wave (GW170817) data and the recently obtained results of NICER experiments for
PSR J0030+0451 and PSR J0740+6620. We also extend our work to compute the
rotational properties of DM admixed NSs rotating at different angular
velocities. The present results in this regard suggest that the secondary
component of GW190814 may be a rapidly rotating massive DM admixed NS. The
constraints on rotational frequency from pulsars like PSR B1937+21 and PSR
J1748-2446ad are also satisfied by our present results. Also, the constraints
on the moment of inertia are satisfied in the slow-rotation approximation. The
universality relation in terms of the normalized moment of inertia also holds
for our DM admixed EoS.
|
This paper presents a distributed optimization algorithm tailored for solving
optimal control problems arising in multi-building coordination. The buildings,
coordinated by a grid operator, join a demand response program to balance
voltage surges using an energy-cost-based criterion. In order to model the
hierarchical structure of the building network, we formulate a distributed
convex optimization problem with separable objectives and coupled affine
equality constraints. A variant of the Augmented Lagrangian based Alternating
Direction Inexact Newton (ALADIN) method for solving the considered class of
problems is then presented along with a convergence guarantee. To illustrate
the effectiveness of the proposed method, we compare it to the Alternating
Direction Method of Multipliers (ADMM) by running both an ALADIN and an ADMM
based model predictive controller on a benchmark case study.
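The separable-objective, coupled-constraint structure can be made concrete with a toy consensus problem solved by ADMM, the baseline method the paper compares against. The scalar quadratic objectives below are invented for illustration; the paper's ALADIN variant handles a far richer building model.

```python
def consensus_admm(targets, rho=1.0, iters=200):
    """Global-consensus ADMM for min_x sum_i 0.5*(x - a_i)^2, a toy
    stand-in for agents (buildings) coordinating on a shared decision.
    Each agent holds a local copy x_i, a dual variable u_i, and the
    coordinator maintains the consensus variable z."""
    z = 0.0
    x = [0.0 for _ in targets]
    u = [0.0 for _ in targets]
    for _ in range(iters):
        # Local step: argmin 0.5*(x - a)^2 + (rho/2)*(x - z + u)^2,
        # which has the closed form below for quadratic objectives.
        x = [(a + rho * (z - ui)) / (1.0 + rho) for a, ui in zip(targets, u)]
        # Coordinator step: average the shifted local copies.
        z = sum(xi + ui for xi, ui in zip(x, u)) / len(targets)
        # Dual step: accumulate the consensus violation.
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z

z = consensus_admm([1.0, 3.0])  # optimum is the mean of the targets, 2.0
```

For this quadratic instance the iterates converge linearly (the consensus error halves each iteration with rho = 1), which is the kind of first-order behavior ALADIN aims to improve on with second-order information.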
|
The Flatland competition aimed at finding novel approaches to solve the
vehicle re-scheduling problem (VRSP). The VRSP is concerned with scheduling
trips in traffic networks and the re-scheduling of vehicles when disruptions
occur, for example, the breakdown of a vehicle. While solving the VRSP in
various settings has been an active area in operations research (OR) for
decades, the ever-growing complexity of modern railway networks makes dynamic
real-time scheduling of traffic virtually impossible. Recently, multi-agent
reinforcement learning (MARL) has successfully tackled challenging tasks where
many agents need to be coordinated, such as multiplayer video games. However,
the coordination of hundreds of agents in a real-life setting like a railway
network remains challenging and the Flatland environment used for the
competition models these real-world properties in a simplified manner.
Submissions had to bring as many trains (agents) to their target stations in as
little time as possible. While the best submissions were in the OR category,
participants found many promising MARL approaches. Using both centralized and
decentralized learning-based approaches, top submissions used graph
representations of the environment to construct tree-based observations.
Further, different coordination mechanisms were implemented, such as
communication and prioritization between agents. This paper presents the
competition setup, four outstanding solutions to the competition, and a
cross-comparison between them.
|
In this paper, we establish a large deviations principle (LDP) for
interacting particle systems that arise from state and action dynamics of
discrete-time mean-field games under the equilibrium policy of the
infinite-population limit. The LDP is proved under weak Feller continuity of
state and action dynamics. The proof is based on transferring LDP for empirical
measures of initial states and noise variables under setwise topology to the
original game model via the contraction principle, which was first suggested by
Delarue, Lacker, and Ramanan to establish LDP for continuous-time mean-field
games under common noise. We also compare our work with LDP results established
in prior literature for interacting particle systems, which are in a sense
uncontrolled versions of mean-field games.
|
In this paper, we consider enumeration problems for edge-distinct and
vertex-distinct Eulerian trails. Here, two Eulerian trails are
\emph{edge-distinct} if the edge sequences are not identical, and they are
\emph{vertex-distinct} if the vertex sequences are not identical. As the main
result, we propose optimal enumeration algorithms for both problems; that is,
the algorithms run in $\mathcal{O}(N)$ total time, where $N$ is the number of
solutions. Our algorithms are based on the reverse search technique introduced
by [Avis and Fukuda, DAM 1996], and the push out amortization technique
introduced by [Uno, WADS 2015].
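For intuition, the objects being enumerated can be generated on small graphs by plain backtracking. This naive baseline is exponentially slower than the paper's reverse-search algorithm, but it makes the edge-distinct notion concrete.

```python
from collections import defaultdict

def eulerian_trails(edges):
    """Enumerate all edge-distinct Eulerian trails of an undirected
    multigraph given as a list of (u, v) pairs, by naive backtracking.
    Two trails are edge-distinct if their edge-index sequences differ.
    Starts that cannot use every edge simply produce no output."""
    adj = defaultdict(list)  # vertex -> indices of incident edges
    for idx, (u, v) in enumerate(edges):
        adj[u].append(idx)
        adj[v].append(idx)
    used = [False] * len(edges)
    trails = []

    def extend(v, trail):
        if len(trail) == len(edges):
            trails.append(tuple(trail))
            return
        for idx in adj[v]:
            if not used[idx]:
                used[idx] = True
                u, w = edges[idx]
                extend(w if v == u else u, trail + [idx])
                used[idx] = False  # backtrack

    for start in set(adj):
        extend(start, [])
    return trails

# Triangle graph: every Eulerian trail is a 3-cycle, so there are
# 3 starting vertices x 2 directions = 6 edge-distinct trails.
trails = eulerian_trails([(0, 1), (1, 2), (2, 0)])
```

The reverse-search and push-out-amortization machinery cited in the abstract replaces this exponential search with output-sensitive $\mathcal{O}(N)$ total time.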
|
In order to prevent the spread of COVID-19, governments have often required
regional or national lockdowns, which have caused extensive economic stagnation
over broad areas as the shock of the lockdowns has diffused to other regions
through supply chains. Using supply-chain data for 1.6 million firms in Japan,
this study examines how governments can mitigate these economic losses when
they are obliged to implement lockdowns. Through tests of all combinations of
two-region lockdowns, we find that coordinated, i.e., simultaneous, lockdowns
yield smaller GDP losses than uncoordinated lockdowns. Furthermore, we test
practical scenarios in which Japan's 47 regions impose lockdowns over three
months and find that GDP losses are lower if nationwide lockdowns are
coordinated than if they are uncoordinated.
|