The purpose of marine seismic experiments is to provide information about the
structure and physical properties of the subsurface. The P-wave velocity
distribution is the most commonly modelled property, usually obtained by
inversion of waveform attributes. In wide-angle reflection/refraction (WAS)
experiments, arrival times of seismic phases identified in Ocean Bottom
Seismometer (OBS) records are used to image relatively deep structures. Most
experiments have relatively low redundancy and produce limited-resolution
velocity models. As an alternative to WAS experiments, the shallow subsurface
is commonly studied with Multi-Channel Seismic (MCS) data collected with towed
streamers. In this case, the recording of refractions as first arrivals is
limited primarily by the streamer length and by structural features such as
water depth and P-wave velocity structure; in general, a considerable fraction
of the refractions is masked by reflections and noise. The most widely used
tool to extract refraction information from MCS data is the so-called downward
continuation technique, which redatums streamer field data to the seafloor. In
this new virtual configuration, the early refractions become first arrivals
visible from zero offset, which facilitates their identification and use in
travel-time tomography.
This work presents user-friendly, open-source HPC software for redatuming 2D
streamer field data to the sea bottom for any seafloor relief. The main
ingredient is the acoustic wave equation used backward in time, which allows
first the redatuming of the receivers and then that of the sources. Assessment
tools are provided to evaluate the information available after redatuming for
specific data acquisition configurations. We also present a step-by-step
analysis that identifies the most important features influencing the quality
of the redatumed virtual recordings.
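The core mechanism, back-propagating the recorded wavefield with the acoustic wave equation, can be illustrated in a few lines. Below is a minimal numpy sketch of receiver redatuming by time-reversed 2D finite-difference propagation; the grid, velocity model, seafloor depth, and injection scheme are illustrative assumptions, not the actual implementation of the presented software.

```python
import numpy as np

nx, nz, nt = 200, 100, 800
dx, dt = 10.0, 1e-3                           # grid step (m), time step (s)
c = np.full((nz, nx), 1500.0)                 # water-column velocity (m/s)
rec_z, rec_x = 5, np.arange(20, 180)          # streamer receiver positions
sea_z = 40                                    # assumed seafloor depth index
data = np.random.randn(nt, rec_x.size)        # stand-in for recorded traces

p_prev, p = np.zeros((nz, nx)), np.zeros((nz, nx))
virtual = np.zeros((nt, nx))                  # virtual seafloor recordings
for it in range(nt - 1, -1, -1):              # march backward in time
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap
    p_next[rec_z, rec_x] += data[it]          # inject time-reversed records
    virtual[it] = p_next[sea_z]               # sample the redatumed wavefield
    p_prev, p = p, p_next
# Repeating the same procedure over the (redatumed) shot axis moves the
# sources to the seafloor as well.
```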
|
3D point cloud registration is a fundamental problem in computer vision and
robotics. There has been extensive research in this area, but existing methods
face great challenges in situations with a large proportion of outliers, under
time constraints, and without good transformation initialization. Recently, a
series of learning-based algorithms have been introduced and show advantages in
speed. Many of them are based on correspondences between the two point clouds,
so they do not rely on transformation initialization. However, these
learning-based methods are sensitive to outliers, which lead to more incorrect
correspondences. In this paper, we propose a novel deep graph matching-based
framework for point cloud registration. Specifically, we first transform point
clouds into graphs and extract deep features for each point. Then, we develop a
module based on deep graph matching to calculate a soft correspondence matrix.
By using graph matching, not only the local geometry of each point but also its
structure and topology in a larger range are considered in establishing
correspondences, so that more correct correspondences are found. We train the
network with a loss directly defined on the correspondences, and in the test
stage the soft correspondences are transformed into hard one-to-one
correspondences so that registration can be performed by singular value
decomposition. Furthermore, we introduce a transformer-based method to generate
edges for graph construction, which further improves the quality of the
correspondences. Extensive experiments on registering clean, noisy,
partial-to-partial and unseen category point clouds show that the proposed
method achieves state-of-the-art performance. The code will be made publicly
available at https://github.com/fukexue/RGM.
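The final registration step mentioned above, recovering the rigid transform from hard one-to-one correspondences via singular value decomposition, is the classical Kabsch/Umeyama solution. A self-contained numpy sketch of that step (ours, not the released code at the repository above):

```python
import numpy as np

def rigid_from_correspondences(P, Q):
    """P, Q: (n, 3) arrays of matched points; returns R, t with Q ~ P @ R.T + t."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, q0 - R @ p0

rng = np.random.default_rng(0)
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
R_true *= np.linalg.det(R_true)               # force a proper rotation (det = +1)
t_true = np.array([0.1, -0.2, 0.3])
P = rng.random((100, 3))
Q = P @ R_true.T + t_true
R, t = rigid_from_correspondences(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```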
|
We present a purely geometric method for constructing a rank two Killing
tensor in a spacetime with a codimension one foliation that lifts the trivial
Killing tensors from slices to the entire manifold. The resulting Killing
tensor can be nontrivial. A deep connection is found between the existence of
such a Killing tensor and the presence of generalized photon surfaces in
spacetime with two Killing vector fields. This technique generates Killing
tensors in a purely algebraic way, without solving differential equations. The
use of our method is demonstrated for the Kerr and Kerr-Newman-NUT-AdS metrics
and for the Kerr-NUT-AdS multicharge gauged supergravity solution.
|
Most existing image privacy protection works focus mainly on the privacy of
photo owners and their friends, but lack the consideration of other people who
are in the background of the photos and the related location privacy issues. In
fact, when a person is in the background of someone else's photos, he/she may
be unintentionally exposed to the public when the photo owner shares the photo
online. Not only could a single visited place be exposed; attackers may also be
able to piece together a person's travel route from images. In this paper, we
propose a novel image privacy protection system, called LAMP, which aims to
light up the location awareness for people during online image sharing. The
LAMP system is based on a newly designed location-aware multi-party image
access control model. The LAMP system automatically detects a user's
occurrences in photos, regardless of whether the user is the photo owner. Once a
user is identified and the location of the photo is deemed sensitive according
to the user's privacy policy, the LAMP system will intelligently replace the
user's face. A prototype of the system was implemented and evaluated to
demonstrate its applicability in the real world.
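As a rough illustration of the decision flow described above (all function and field names are hypothetical stubs we introduce; the actual LAMP model is richer), the per-photo policy check might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    sensitive_places: set = field(default_factory=set)   # per-user policy

def protect(photo, faces, identify, replace_face, policies):
    """faces: detected face regions; identify: face -> user id or None."""
    for face in faces:
        user = identify(face)
        if user is None:
            continue                          # unrecognized person, no policy
        policy = policies.get(user)
        if policy and photo["location"] in policy.sensitive_places:
            replace_face(photo, face)         # e.g. swap in a synthetic face
    return photo
```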
|
Learning from Demonstration (LfD) has been established as the dominant
paradigm for efficiently transferring skills from human teachers to robots. In
this context, the Federated Learning (FL) conceptualization has very recently
been introduced for developing large-scale human-robot collaborative
environments, aiming to robustly address, among others, the critical
challenges of multi-agent learning and long-term autonomy. In the current work,
the latter scheme is further extended and enhanced, by designing and
integrating a novel user profile formulation for providing a fine-grained
representation of the exhibited human behavior, adopting a Deep Learning
(DL)-based formalism. In particular, a hierarchically organized set of key
information sources is considered, including: a) User attributes (e.g.
demographic, anthropomorphic, educational, etc.), b) User state (e.g. fatigue
detection, stress detection, emotion recognition, etc.) and c)
Psychophysiological measurements (e.g. gaze, electrodermal activity, heart
rate, etc.) related data. Then, a combination of Long Short-Term Memory (LSTM)
and stacked autoencoders, with appropriately defined neural network
architectures, is employed for the modelling step. The overall designed scheme
enables both short- and long-term analysis/interpretation of the human behavior
(as observed during the feedback capturing sessions), so as to adaptively
adjust the importance of the collected feedback samples when aggregating
information originating from the same and different human teachers,
respectively.
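A minimal PyTorch sketch of the kind of model described, an LSTM branch for time-series psychophysiological measurements combined with a stacked autoencoder for static user attributes and state; all layer sizes and names are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class UserProfileModel(nn.Module):
    def __init__(self, n_signals=4, n_attributes=16, latent_dim=8):
        super().__init__()
        # Temporal branch: gaze, electrodermal activity, heart rate, ... over time
        self.lstm = nn.LSTM(input_size=n_signals, hidden_size=32, batch_first=True)
        # Static branch: stacked autoencoder over user attributes/state features
        self.encoder = nn.Sequential(
            nn.Linear(n_attributes, 32), nn.ReLU(),
            nn.Linear(32, latent_dim), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, n_attributes))
        # Joint embedding used to weight feedback samples during aggregation
        self.head = nn.Linear(32 + latent_dim, latent_dim)

    def forward(self, signals, attributes):
        _, (h, _) = self.lstm(signals)        # h: (1, batch, 32)
        z = self.encoder(attributes)          # (batch, latent_dim)
        recon = self.decoder(z)               # reconstruction for the AE loss
        profile = self.head(torch.cat([h[-1], z], dim=1))
        return profile, recon

model = UserProfileModel()
profile, recon = model(torch.randn(2, 100, 4), torch.randn(2, 16))
```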
|
Quantum networks are a new paradigm of complex networks, allowing us to
harness networked quantum technologies and to develop a quantum internet. But
how robust is a quantum network when its links and nodes start failing? We show
that quantum networks based on typical noisy quantum-repeater nodes are prone
to discontinuous phase transitions with respect to the random loss of operating
links and nodes, abruptly compromising the connectivity of the network, and
thus significantly limiting the reach of its operation. Furthermore, we
determine the critical quantum-repeater efficiency necessary to avoid this
catastrophic loss of connectivity as a function of the network topology, the
network size, and the distribution of entanglement in the network. In
particular, our results indicate that a scale-free topology is a crucial design
principle to establish a robust large-scale quantum internet.
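The robustness question can be probed numerically with a simple bond-percolation experiment. The following networkx sketch (our toy illustration, not the paper's model, which additionally accounts for repeater noise) tracks the giant-component fraction of a scale-free versus a random topology under random link loss:

```python
import random
import networkx as nx

def giant_fraction(G):
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

n = 2000
for name, G0 in [("scale-free", nx.barabasi_albert_graph(n, 2, seed=0)),
                 ("random", nx.gnm_random_graph(n, 2 * n, seed=0))]:
    edges = list(G0.edges())
    random.Random(0).shuffle(edges)           # random order of link failures
    for frac in (0.2, 0.5, 0.8):
        H = G0.copy()
        H.remove_edges_from(edges[: int(frac * len(edges))])
        print(f"{name}: {frac:.0%} links lost -> giant component "
              f"{giant_fraction(H):.2f} of nodes")
```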
|
This paper outlines a rigorous variational-based multilevel Global-Local
formulation for ductile fracture. Here, a phase-field formulation is used to
resolve failure mechanisms by regularizing the sharp crack topology on the
local state. The coupling of plasticity to the crack phase-field is realized by
a constitutive work density function, which is characterized through a degraded
stored elastic energy and the accumulated dissipated energy due to plasticity
and damage. Two different Global-Local approaches based on the idea of the
multiplicative Schwarz alternating method are proposed: (i) a global
constitutive model with elastic-plastic behavior is enhanced with a single
local domain, which, in turn, describes an elastic-plastic fracturing
response; (ii) the second model extends the Global-Local approach to a
multilevel local setting. To this end, an elastic-plastic global constitutive
model is augmented with two distinct local domains, in which the first local
domain behaves as an elastic-plastic material and the second local domain
accounts for the fracturing response. To further reduce the computational
cost, predictor-corrector adaptivity within the Global-Local concept is
introduced. An adaptive scheme is devised through the evolution of the
effective global plastic flow (for elastic-plastic adaptivity only) and
through the evolution of the local crack phase-field state (for fracture
adaptivity only). Thus, the two local domains are dynamically updated during
the computation, resulting in a two-way adaptivity procedure. The overall
response of the Global-Local approach
in terms of accuracy/robustness and efficiency is verified using single-scale
problems. The resulting framework is algorithmically described in detail and
substantiated with numerical examples.
|
While it is generally accepted that the magnetic field and its non-ideal
effects play important roles during the stellar formation, simple models of
pure hydrodynamics and angular momentum conservation are still widely employed
in the studies of disk assemblage in the framework of the so-called
"alpha-disk" model due to their simplicity. There has only been a few efforts
trying to bridge the gap between a collapsing prestellar core and a developed
disk. The goal of the present work is to revisit the assemblage of the
protoplanetary disk (PPD), by performing 3D MHD simulations with ambipolar
diffusion and full radiative transfer. We follow the global evolution of the
PPD from the prestellar core collapse for 100 kyr, with a resolution of one AU.
The formed disk is more realistic and is in agreement with recent observations
of disks around class-0 young stellar objects. The mass flux arriving onto the
disk and the radial mass accretion rate within the disk are measured and
compared to analytical self-similar models. The surface mass flux is very
centrally peaked, implying that most of the mass falling onto the star does not
transit through the mid-plane of the disk. The disk mid-plane is almost dead to
turbulence, whereas upper layers and the disk outer edge are very turbulent.
The snow-line is significantly further away than in a passive disk. We
developed a zoomed rerun technique to quickly obtain a reasonable disk that is
highly stratified, weakly magnetized inside, and strongly magnetized outside.
During the class-0 phase of PPD formation, the interaction between the disk and
the infalling envelope is important and ought not be neglected. Accretion onto
the star is found to mostly depend on dynamics of the collapsing envelope,
rather than the detailed disk structure.
|
The thumbnail is the face of an online video. The explosive growth of videos
in both number and variety underpins the importance of a good thumbnail,
because it saves potential viewers time in choosing videos and can even entice
them to click. A good thumbnail should be a frame that best represents the
content of a video while at the same time capturing viewers' attention.
However, past techniques and models focus only on frames within a video, and
we believe such a narrow focus leaves out much useful information that is part
of a video. In this paper, we expand the definition of content to include the title,
description, and audio of a video and utilize information provided by these
modalities in our selection model. Specifically, our model will first sample
frames uniformly in time and return the top 1,000 frames in this subset with
the highest aesthetic scores by a Double-column Convolutional Neural Network,
to avoid the computational burden of processing all frames in downstream tasks.
Then, the model incorporates frame features extracted from VGG16, text features
from ELECTRA, and audio features from TRILL. These models were selected because
of their results on popular datasets as well as their competitive performances.
After feature extraction, the time-series features, frames and audio, will be
fed into Transformer encoder layers to return a vector representing their
corresponding modality. Each of the four features (frames, title, description,
audios) will pass through a context gating layer before concatenation. Finally,
our model will generate a vector in the latent space and select the frame that
is most similar to this vector in the latent space. To the best of our
knowledge, we are the first to propose a multi-modal deep learning model for
video thumbnail selection, and it beats the results of the previous
state-of-the-art models.
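Two of the building blocks named above, the per-modality Transformer encoder and the context gating layer (y = sigmoid(Wx + b) * x), together with the final cosine-similarity frame selection, can be sketched in PyTorch as follows; the dimensions and single-modality simplification are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextGating(nn.Module):
    """y = sigmoid(W x + b) * x."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.sigmoid(self.fc(x)) * x

d = 256
frame_feats = torch.randn(1000, d)        # e.g. projected VGG16 frame features
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(d, nhead=8),
                                num_layers=2)
video_vec = encoder(frame_feats.unsqueeze(1)).mean(dim=0).squeeze(0)  # (d,)
query = ContextGating(d)(video_vec)       # fused latent vector (one modality)
# select the frame closest to the query in the latent space
scores = F.cosine_similarity(frame_feats, query.unsqueeze(0))
best_frame = scores.argmax().item()
```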
|
For every field $F$ which has a quadratic extension $E$ we show there are
non-metabelian infinite-dimensional thin graded Lie algebras all of whose
homogeneous components, except the second one, have dimension $2$. We construct
such Lie algebras as $F$-subalgebras of Lie algebras $M$ of maximal class over
$E$. We characterise the thin Lie $F$-subalgebras of $M$ generated in degree
$1$. Moreover, we show that every thin Lie algebra $L$ whose ring of graded
endomorphisms of degree zero of $L^3$ is a quadratic extension of $F$ can be
obtained in this way from a Lie algebra of maximal class over $E$ which is
ideally $r$-constrained for a positive integer $r$.
|
We survey functional analytic methods for studying subwavelength resonator
systems. In particular, rigorous discrete approximations of Helmholtz
scattering problems are derived in an asymptotic subwavelength regime. This is
achieved by re-framing the Helmholtz equation as a non-linear eigenvalue
problem in terms of integral operators. In the subwavelength limit, resonant
states are described by the eigenstates of the generalized capacitance matrix,
which appears by perturbing the elements of the kernel of the limiting
operator. Using this formulation, we are able to describe subwavelength
resonance and related phenomena. In particular, we demonstrate large-scale
effective parameters with exotic values. We also show that these systems can
exhibit localized and guided waves on very small length scales. Using the
concept of topologically protected edge modes, such localization can be made
robust against structural imperfections.
|
This letter reports the findings of the late time behavior of the
out-of-time-ordered correlators (OTOCs) via a quantum kicked rotor model with
$\cal{PT}$-symmetric driving potential. An analytical expression of the OTOCs'
quadratic growth with time is yielded as $C(t)=G(K)t^2$. Interestingly, the
growth rate $G$ features a quantized response to the increase of the kick
strength $K$, which indicates the chaos-assisted quantization in the OTOCs'
dynamics. The physics behind this is the quantized absorption of energy from
the non-Hermitian driving potential. This discovery and the ensuing
establishment of the quantization mechanism in the dynamics of quantum chaos
with non-Hermiticity will provide insights into chaotic dynamics, promising
unprecedented observations in future experiments.
|
We consider a family of deep neural networks consisting of two groups of
convolutional layers, a downsampling operator, and a fully connected layer. The
network structure depends on two structural parameters which determine the
numbers of convolutional layers and the width of the fully connected layer. We
establish an approximation theory with explicit approximation rates when the
approximated function takes a composite form $f\circ Q$ with a feature
polynomial $Q$ and a univariate function $f$. In particular, we prove that such
a network can outperform fully connected shallow networks in approximating
radial functions with $Q(x) =|x|^2$, when the dimension $d$ of data from
$\mathbb{R}^d$ is large. This gives the first rigorous proof for the
superiority of deep convolutional neural networks in approximating functions
with special structures. Then we carry out generalization analysis for
empirical risk minimization with such a deep network in a regression framework
with the regression function of the form $f\circ Q$. Our network structure
which does not use any composite information or the functions $Q$ and $f$ can
automatically extract features and make use of the composite nature of the
regression function via tuning the structural parameters. Our analysis provides
an error bound which decreases with the network depth to a minimum and then
increases, verifying theoretically a trade-off phenomenon observed for network
depths in many practical applications.
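For concreteness, here is a PyTorch sketch of a network of this general shape, two groups of 1D convolutional layers, a downsampling operator, and fully connected layers, trained on a radial target $f\circ Q$ with $Q(x)=|x|^2$; the specific sizes are illustrative and do not reproduce the paper's construction:

```python
import torch
import torch.nn as nn

class ConvApproxNet(nn.Module):
    def __init__(self, n_conv=3, width=128):
        super().__init__()
        conv_group = lambda: nn.Sequential(
            *[nn.Sequential(nn.Conv1d(1, 1, kernel_size=5, padding=4), nn.ReLU())
              for _ in range(n_conv)])
        self.group1 = conv_group()                 # first convolutional group
        self.group2 = conv_group()                 # second convolutional group
        self.pool = nn.MaxPool1d(kernel_size=2)    # downsampling operator
        self.fc = nn.Sequential(nn.Flatten(), nn.LazyLinear(width),
                                nn.ReLU(), nn.LazyLinear(1))

    def forward(self, x):                          # x: (batch, d)
        h = x.unsqueeze(1)                         # treat the vector as a 1-D signal
        h = self.group2(self.pool(self.group1(h)))
        return self.fc(h)

net = ConvApproxNet()
x = torch.randn(32, 64)
y = torch.sin(x.pow(2).sum(dim=1, keepdim=True))   # f = sin, Q(x) = |x|^2
loss = nn.functional.mse_loss(net(x), y)
```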
|
We revisit the evaluation of the chiral vortical effect in the accelerated
matter. To first order in the acceleration the corresponding matrix element of
the axial current can be reconstructed from the flat-space limit. A crucial
point is the existence in this case of an extra conservation law of fluid
helicity. As a result, the chirality of the microscopic degrees of freedom and
helicity of the macroscopic motion are separately conserved. This separation
persists also in the presence of gravity. Implications for the thermal chiral
vortical effect are discussed.
|
We present a numerical approach to efficiently calculate spin-wave
dispersions and spatial mode profiles in magnetic waveguides of arbitrarily
shaped cross section with any non-collinear equilibrium magnetization which is
translationally invariant along the waveguide. Our method is based on the
propagating-wave dynamic-matrix approach by Henry et al. and extends it to
arbitrary cross sections using a finite-element method. We solve the linearized
equation of motion of the magnetization only in a single waveguide cross
section which drastically reduces computational effort compared to common
three-dimensional micromagnetic simulations. In order to numerically obtain the
dipolar potential of individual spin-wave modes, we present a plane-wave
version of the hybrid finite-element/boundary-element method by Fredkin and
Koehler which, for the first time, we extend to a modified version of the
Poisson equation. Our method is applied to several important examples of
magnonic waveguides including systems with surface curvature, such as magnetic
nanotubes, where the curvature leads to an asymmetric spin-wave dispersion. In
all cases, the validity of our approach is confirmed by other methods. Our
method is of particular interest for the study of curvature-induced or
magnetochiral effects on spin-wave transport but also serves as an efficient
tool to investigate standard magnonic problems.
|
Business processes are continuously evolving in order to adapt to changes due
to various factors. One type of process change is the branching frequency
change, which relates to changes in the frequencies of the different options
when there is an exclusive choice. Existing methods either cannot detect such
changes or cannot provide accurate and comprehensive results. In this paper, we
propose a method which takes both event logs and process models as input and
generates a choice sequence for each exclusive choice in the process model. The
method then identifies change points based on the choice sequences. We evaluate
our method on a real-life event log. Results show that our method can identify
branching frequency changes in process models and provide comprehensive results
to users.
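A minimal sketch of the idea (assumed details, not the paper's algorithm): given the choice sequence for one exclusive choice, one symbol per case, compare option frequencies in adjacent windows and flag points where the distribution shifts.

```python
from collections import Counter

def branching_change_points(choices, window=50, threshold=0.3):
    """Return indices where option frequencies differ strongly
    between the preceding and following window."""
    points = []
    for i in range(window, len(choices) - window):
        left = Counter(choices[i - window:i])
        right = Counter(choices[i:i + window])
        options = set(left) | set(right)
        # total variation distance between the two frequency profiles
        tv = 0.5 * sum(abs(left[o] / window - right[o] / window)
                       for o in options)
        if tv > threshold:
            points.append(i)
    return points

# a frequency shift from option A to option B around case 100
print(branching_change_points(["A"] * 100 + ["B"] * 100))
```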
|
We analyze the LIGO/Virgo GWTC-2 catalog to study the primary mass
distribution of the merging black holes. We perform hierarchical Bayesian
analysis, and examine whether the mass distribution has a sharp cutoff for
primary black hole masses below $65 M_\odot$, as predicted in pulsational pair
instability supernova model. We construct two empirical mass functions. One is
a piecewise function with two power-law segments joined by a sudden drop. The
other consists of a main truncated power-law component, a Gaussian component,
and a third very massive component. Both models can reasonably fit the data and
a sharp drop of the mass distribution is found at $\sim 50M_\odot$, suggesting
that the majority of the observed black holes can be explained by the stellar
evolution scenarios in which the pulsational pair-instability process takes
place. On the other hand, the very massive sub-population, which accounts for
at most several percent of the total, may be formed through hierarchical
mergers or other processes.
|
This study examines the mechanism design problem for public-good provision in
a large economy with $n$ independent agents. We propose a class of
dominant-strategy incentive compatible (DSIC) and ex post individual rational
(EPIR) mechanisms which we call the adjusted mean-thresholding (AMT)
mechanisms. We show that when the cost of provision grows slower than the
$\sqrt{n}$ rate, the AMT mechanisms are both asymptotically ex ante budget
balanced (AEABB) and asymptotically efficient (AE). When the cost grows faster
than the $\sqrt{n}$ rate, in contrast, we show that any DSIC, EPIR, and AEABB
mechanism must have provision probability converging to zero and hence cannot
be AE. Lastly, the AMT mechanisms are more informationally robust when compared
to, for example, the second-best mechanism. This is because the construction of
AMT mechanisms depends only on the first moments of the valuation
distributions.
|
Social Network Analysis is the use of Network and Graph Theory to study
social phenomena, which was found to be highly relevant in areas like
Criminology. This chapter provides an overview of key methods and tools that
may be used for the analysis of criminal networks, which are presented in a
real-world case study. Starting from available juridical acts, we have
extracted data on the interactions among suspects within two Sicilian Mafia
clans, obtaining two weighted undirected graphs. Then, we have investigated the
roles of these weights on the criminal network's properties, focusing on two
key features: weight distribution and shortest path length. We also present an
experiment that aims to construct an artificial network that mirrors criminal
behaviours. To this end, we have conducted a comparative degree distribution
analysis between the real criminal networks and some of the most popular
artificial network models: Watts-Strogatz, Erd\H{o}s-R\'{e}nyi, and
Barab\'{a}si-Albert, with some topology variations. This chapter will be a
valuable tool for researchers who wish to employ social network analysis within
their own area of interest.
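The model comparison described above is straightforward to reproduce with networkx; in this hedged sketch, a built-in social graph stands in for the Mafia-clan networks (which are not bundled with networkx), and the model sizes are illustrative:

```python
import networkx as nx
from collections import Counter

def degree_histogram(G):
    return Counter(dict(G.degree()).values())     # degree -> node count

n, k, p = 150, 4, 0.05                            # illustrative sizes only
real = nx.les_miserables_graph()                  # placeholder for a clan graph
models = {
    "Watts-Strogatz": nx.watts_strogatz_graph(n, k, p, seed=0),
    "Erdos-Renyi": nx.erdos_renyi_graph(n, p, seed=0),
    "Barabasi-Albert": nx.barabasi_albert_graph(n, 2, seed=0),
}
print("real:", sorted(degree_histogram(real).items())[:5])
for name, G in models.items():
    print(name, sorted(degree_histogram(G).items())[:5])
```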
|
As more exoplanets are being discovered around ultracool dwarfs,
understanding their magnetic activity -- and the implications for habitability
-- is of prime importance. To find stellar flares and photometric signatures
related to starspots, continuous monitoring is necessary, which can be achieved
with spaceborne observatories like the Transiting Exoplanet Survey Satellite
(TESS). We present an analysis of TRAPPIST-1 like ultracool dwarfs with TESS
full-frame image photometry from the first two years of the primary mission. A
volume-limited sample up to 50 pc is constructed consisting of 339 stars closer
than 0.5 mag to TRAPPIST-1 on the Gaia colour-magnitude diagram. The 30-min
cadence TESS light curves of 248 stars were analysed, searching for flares and
rotational modulation caused by starspots. The composite flare frequency
distribution of the 94 identified flares shows a power law index similar to
TRAPPIST-1, and contains flares up to $E_\mathrm{TESS} = 3 \times 10^{33}$ erg.
Rotational periods shorter than 5 days were determined for 42 stars, sampling
the regime of fast rotators. The ages of 88 stars from the sample were
estimated using kinematic information. A weak correlation between rotational
period and age is observed, which is consistent with magnetic braking.
|
5G and future cellular networks intend to incorporate low earth orbit (LEO)
satellite communication systems (SatCom) to solve the coverage and availability
problems that cannot be addressed by satellite-based or ground-based
infrastructure alone. This integration of terrestrial and non-terrestrial
networks poses many technical challenges which need to be identified and
addressed. To this aim, we design and simulate the downlink of a LEO SatCom
compatible with 5G NR, with a special focus on the design of the beamforming
codebook at the satellite side. The performance of this approach is evaluated
for the link between a LEO satellite and a mobile terminal in the Ku band,
assuming a realistic channel model and commercial antenna array designs, both
at the satellite and the terminal. Simulation results provide insights into
open research challenges related to analog codebook design and hybrid
beamforming strategies, the antenna terminal requirements needed to provide a
given SNR, and the required beam reconfiguration capabilities, among others.
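To illustrate what a beamforming codebook is, here is a numpy sketch of a DFT codebook for a uniform linear array with beam selection by received power; the paper targets planar arrays on a LEO satellite, so this is a simplified stand-in with assumed parameters:

```python
import numpy as np

n_ant, n_beams = 16, 16                 # elements at half-wavelength spacing
angles = np.arcsin(np.linspace(-1, 1, n_beams, endpoint=False))  # beam grid
k = np.arange(n_ant)
# each column is a steering vector a(theta) = exp(j*pi*k*sin(theta)) / sqrt(N)
codebook = np.exp(1j * np.pi * np.outer(k, np.sin(angles))) / np.sqrt(n_ant)

# beam selection: pick the codeword maximizing received power |w^H h|^2
h = np.exp(1j * np.pi * k * np.sin(np.deg2rad(20)))   # toy line-of-sight channel
best = np.argmax(np.abs(codebook.conj().T @ h) ** 2)
print("selected beam:", best, "pointing:",
      np.rad2deg(angles[best]).round(1), "deg")
```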
|
In this study, we numerically investigated the mechanical responses and
trajectories of frictional granular particles under oscillatory shear in the
reversible phase where particle trajectories form closed loops below the
yielding point. When the friction coefficient is small, the storage modulus
exhibits softening, and the loss modulus remains finite in the quasi-static
limit. As the friction coefficient increases, the softening and residual loss
modulus are suppressed. The storage and loss moduli satisfy scaling laws when
plotted as functions of the areas of the loop trajectories divided by the
strain amplitude and the grain diameter, at least when these areas are small.
|
It is shown that each non-compact locally compact second countable non-(T)
group $G$ possesses non-strongly ergodic weakly mixing IDPFT Poisson actions of
arbitrary Krieger's type. These actions are amenable if and only if $G$ is
amenable. If $G$ has the Haagerup property then (and only then) these actions
can be chosen of 0-type. If $G$ is amenable and unimodular then $G$ has weakly
mixing Bernoulli actions of any possible Krieger's type.
|
It was recently proposed that there is a phase in thermal QCD (IR phase) at
temperatures well above the chiral crossover, featuring elements of scale
invariance in the infrared (IR). Here we study the effective spatial
dimensions, $d_{IR}$, of Dirac low-energy modes in this phase, in the context
of pure-glue QCD. Our $d_{IR}$ is based on the scaling of mode support toward
the thermodynamic limit, and hence is an IR probe. Ordinary extended modes, such as
those at high energy, have $d_{IR}=3$. We find $d_{IR}<3$ in the spectral range
whose lower edge coincides with $\lambda_{IR}=0$, the singularity of spectral
density defining the IR phase, and the upper edge with $\lambda_A$, the
previously identified Anderson-like non-analyticity. Details near
$\lambda_{IR}$ are unexpected in that only exact zero modes are $d_{IR}=3$,
while a thin spectral layer near zero is $d_{IR}=2$, followed by an extended
layer of $d_{IR}=1$ modes. With only integer values appearing, $d_{IR}$ may
have topological origin. We find similar structure at $\lambda_A$, and
associate its adjacent thin layer ($d_{IR} \gtrsim 2$) with Anderson-like
criticality. Our analysis reveals the manner in which non-analyticities at
$\lambda_{IR}$ and $\lambda_A$, originally identified in other quantities,
appear in $d_{IR}(\lambda)$. This dimension structure may be important for
understanding the near-perfect fluidity of the quark-gluon medium seen in
accelerator experiments. The role of $\lambda_A$ in previously conjectured
decoupling of IR component is explained.
|
ART-XC (Astronomical Roentgen Telescope - X-ray Concentrator) is the hard
X-ray instrument with grazing incidence imaging optics on board the
Spektr-Roentgen-Gamma (SRG) observatory. The SRG observatory is the flagship
astrophysical mission of the Russian Federal Space Program, which was
successively launched into orbit around the second Lagrangian point (L2) of the
Earth-Sun system with a Proton rocket from the Baikonur cosmodrome on 13 July
2019. The ART-XC telescope will provide the first ever true imaging all-sky
survey performed with grazing incidence optics in the 4-30 keV energy band and
will obtain the deepest and sharpest map of the sky in the energy range of 4-12
keV. Observations performed during the early calibration and performance
verification phase as well as during the on-going all-sky survey that started
on 12 Dec. 2019 have demonstrated that the in-flight characteristics of the
ART-XC telescope are very close to expectations based on the results of ground
calibrations. Upon completion of its 4-year all-sky survey, ART-XC is expected
to detect ~5000 sources (~3000 active galactic nuclei, including heavily
obscured ones, several hundred clusters of galaxies, ~1000 cataclysmic
variables and other Galactic sources), and to provide a high-quality map of the
Galactic background emission in the 4-12 keV energy band. ART-XC is also well
suited for discovering transient X-ray sources. In this paper, we describe the
telescope, results of its ground calibrations, major aspects of the mission,
the in-flight performance of ART-XC and first scientific results.
|
Linear global instability of the three-dimensional (3-D),
spanwise-homogeneous laminar separation bubble (LSB) induced by
shock-wave/boundary-layer interaction (SBLI) in a Mach 7 flow of nitrogen over
a $30^{\circ}-55^{\circ}$ double wedge is studied. At these conditions
corresponding to a freestream unit Reynolds number, $Re_1=5.2\times 10^{4}$
m$^{-1}$, the flow exhibits rarefaction effects and shock thicknesses
comparable to the size of the boundary layer at separation. This, in
turn, requires the use of the high-fidelity Direct Simulation Monte Carlo
(DSMC) method to accurately resolve unsteady flow features.
We show for the first time that the LSB sustains self-excited,
small-amplitude, 3-D perturbations that lead to spanwise-periodic flow
structures not only in and downstream of the separated region, as seen in a
multitude of experiments and numerical simulations, but also in the internal
structure of the separation and detached shock layers. The spanwise-periodicity
length and growth rate of the structures in the two zones are found to be
identical. It is shown that the linear global instability leads to
low-frequency unsteadiness of the triple point formed by the intersection of
separation and detached shocks, corresponding to a Strouhal number of
$St\sim0.02$. Linear superposition of the spanwise-homogeneous base flow and
the leading 3-D flow eigenmode provides further evidence of the strong coupling
between linear instability in the LSB and the shock layer.
|
Ensuring reliable operation of large power systems subjected to multiple
outages is a challenging task because of the combinatorial nature of the
problem. Traditional approaches for security assessment are often limited by
their scope and/or speed, and may therefore miss critical contingencies that
could lead to cascading failures. This paper proposes a two-component
methodology to enhance power system security. The first component combines an
efficient algorithm to detect cut-set saturation (called the feasibility test
(FT) algorithm) with real-time contingency analysis (RTCA) to create an
integrated corrective action (iCA), whose goal is to secure the system against
cut-set saturation as well as critical branch overloads. The second component
only employs the results of the FT to create a relaxed corrective action (rCA)
to secure the system against post-contingency cut-set saturation. The first
component is more comprehensive, but the latter is computationally more
efficient. The effectiveness of the two components is evaluated based upon the
number of cascade triggering contingencies alleviated, and the computation
time. The results obtained by analyzing different case-studies on the IEEE
118-bus and 2000-bus synthetic Texas systems indicate that the proposed
two-component methodology successfully enhances the scope and speed of power
system security assessment during multiple outages.
|
We analyze the performance of the best-response dynamic across all
normal-form games using a random games approach. The playing sequence -- the
order in which players update their actions -- is essentially irrelevant in
determining whether the dynamic converges to a Nash equilibrium in certain
classes of games (e.g. in potential games) but, when evaluated across all
possible games, convergence to equilibrium depends on the playing sequence in
an extreme way. Our main asymptotic result shows that the best-response dynamic
converges to a pure Nash equilibrium in a vanishingly small fraction of all
(large) games when players take turns according to a fixed cyclic order. By
contrast, when the playing sequence is random, the dynamic converges to a pure
Nash equilibrium if one exists in almost all (large) games.
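A toy sketch of this experiment (our assumed implementation, not the paper's asymptotic analysis): draw random two-player normal-form games, run the best-response dynamic under a cyclic versus a random playing sequence, and record how often it reaches a pure Nash equilibrium.

```python
import numpy as np

rng = np.random.default_rng(0)

def converges(u, order, max_steps=5000):
    """u[i] is player i's payoff matrix; True if the dynamic reaches a pure NE."""
    n = u.shape[1]
    a = [int(rng.integers(n)), int(rng.integers(n))]
    for t in range(max_steps):
        if (u[0, :, a[1]].argmax() == a[0] and
                u[1, a[0], :].argmax() == a[1]):
            return True                       # both players are best-responding
        i = (t % 2) if order == "cyclic" else int(rng.integers(2))
        a[i] = int(u[0, :, a[1]].argmax()) if i == 0 else int(u[1, a[0], :].argmax())
    return False

n_actions, trials = 30, 200
for order in ("cyclic", "random"):
    rate = np.mean([converges(rng.normal(size=(2, n_actions, n_actions)), order)
                    for _ in range(trials)])
    print(order, rate)
```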
|
Natural Human-Robot Interaction (HRI) is one of the key components for
service robots to be able to work in human-centric environments. In such
dynamic environments, the robot needs to understand the intention of the user
to accomplish a task successfully. Towards addressing this point, we propose a
software architecture that segments a target object from a crowded scene,
indicated verbally by a human user. At the core of our system, we employ a
multi-modal deep neural network for visual grounding. Unlike most grounding
methods that tackle the challenge using pre-trained object detectors via a
two-step process, we develop a single-stage zero-shot model that is able to
provide predictions on unseen data. We evaluate the performance of the proposed
model on real RGB-D data collected from public scene datasets. Experimental
results showed that the proposed model performs well in terms of accuracy and
speed, while showcasing robustness to variation in the natural language input.
|
In this paper we propose an improved fast iterative method to solve the
Eikonal equation, which can be implemented in parallel. We improve the fast
iterative method for Eikonal equation in two novel ways, in the value update
and in the error correction. The new value update is very similar to the fast
iterative method in that we selectively update the points, chosen by a
convergence measure, in the active list. However, in order to reduce running
time, the improved algorithm does not run a convergence check of the
neighboring points of the narrow band as the fast iterative method usually
does. The additional error correction step is to correct the errors that the
previous value update step may cause. The error correction step consists of
finding and recalculating the point values in a separate remedy list which is
quite easy to implement on a GPU. In contrast to the fast marching method and
the fast sweeping method for the Eikonal equation, our improved method does not
need to compute the solution with any special ordering in either the remedy
list or the active list. Therefore, our algorithm can be implemented in
parallel. In our experiments, we implement our new algorithm in parallel on a
GPU and compare the elapsed time with other current algorithms. In our
numerical studies, the improved fast iterative method runs faster than the
other algorithms in most cases.
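The local update at the heart of fast-iterative-type Eikonal solvers is the Godunov upwind discretization; a minimal Python sketch for one node of a uniform 2D grid (the standard scheme, not the paper's GPU code):

```python
import math

def godunov_update(u_xmin, u_ymin, h, f):
    """Updated travel time at a node with grid spacing h and local speed f,
    given the smaller upwind neighbor values in x and y."""
    a, b = sorted((u_xmin, u_ymin))
    if b - a >= h / f:               # causality: only the smaller neighbor acts
        return a + h / f
    # two-sided quadratic: (u - a)^2 + (u - b)^2 = (h / f)^2
    return 0.5 * (a + b + math.sqrt(2 * (h / f) ** 2 - (b - a) ** 2))

print(godunov_update(1.0, 1.2, h=0.1, f=1.0))   # -> 1.1 (one-sided case)
```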
|
Point forecast reconciliation of a collection of time series with linear
aggregation constraints has evolved substantially over the last decade. A few
commonly used methods are GLS (generalized least squares), OLS (ordinary least
squares), WLS (weighted least squares), and MinT (minimum trace). GLS and MinT
have similar mathematical expressions, but they differ by the covariance matrix
used. OLS and WLS can be considered as special cases of MinT where they differ
by the assumptions made about the structure of the covariance matrix. All these
methods ensure that the reconciled forecasts are unbiased, provided that the
base forecasts are unbiased. The ERM (empirical risk minimizer) approach was
proposed to relax the assumption of unbiasedness.
This paper proves that (a) GLS and MinT reduce to the same solution; (b) on
average, a method similar to ERM (which we refer to as MinT-U) can produce
better forecasts than MinT (lowest total mean squared error) which is then
followed by OLS and then by base; and (c) the mean squared error of each series
in the structure for MinT-U is smaller than that for MinT which is then
followed by that for either OLS or base forecasts. We show these theoretical
results using a set of simulation studies. We also evaluate them using the
Australian domestic tourism data set.
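The common core of these methods is a single projection: reconciled forecasts are $\tilde{y} = S (S' W^{-1} S)^{-1} S' W^{-1} \hat{y}$, where $S$ is the aggregation ("summing") matrix and $W$ the base-forecast error covariance; $W = I$ gives OLS, a diagonal $W$ gives WLS, and an estimate of the full covariance gives MinT. A hedged numpy sketch on a toy two-series hierarchy:

```python
import numpy as np

# toy hierarchy: total = A + B
S = np.array([[1, 1],
              [1, 0],
              [0, 1]], dtype=float)

def reconcile(y_hat, W):
    Winv = np.linalg.inv(W)
    G = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)   # mapping matrix
    return S @ G @ y_hat                              # coherent forecasts

y_hat = np.array([10.5, 6.0, 3.8])        # incoherent base forecasts
print(reconcile(y_hat, np.eye(3)))        # OLS reconciliation
```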
|
It has been shown previously that the presence of a Dzyaloshinskii-Moriya
interaction in perpendicularly magnetized thin films stabilizes N\'eel type
domain walls. We demonstrate, using micromagnetic simulations and analytical
modeling, that the presence of a uniaxial in-plane magnetic anisotropy can also
lead to the formation of N\'eel walls in the absence of a Dzyaloshinskii-Moriya
interaction. It is possible to switch abruptly between Bloch and N\'eel walls
via a small modulation of the in-plane, but also of the perpendicular,
magnetic anisotropy. This opens up a route towards electric-field control of
the domain wall type with small applied voltages through electric field
controlled anisotropies.
|
3D action recognition refers to the classification of action sequences which
consist of 3D skeleton joints. While much research has been devoted to 3D
action recognition, it mainly suffers from three problems: highly complicated
articulation, a great amount of noise, and low implementation
efficiency. To tackle all these problems, we propose a real-time 3D action
recognition framework by integrating the locally aggregated kinematic-guided
skeletonlet (LAKS) with a supervised hashing-by-analysis (SHA) model. We first
define the skeletonlet as a few combinations of joint offsets grouped in terms
of kinematic principle, and then represent an action sequence using LAKS, which
consists of a denoising phase and a locally aggregating phase. The denoising
phase detects noisy action data and adjusts it by replacing all the features
within it with the features of the corresponding previous frame, while the
locally aggregating phase sums the differences between the offset features of
the skeletonlet and their cluster centers over all the offset features of the
sequence. Finally, the SHA model combines sparse representation with a hashing
model, aiming to promote recognition accuracy while maintaining high
efficiency. Experimental results on MSRAction3D, UTKinectAction3D and
Florence3DAction datasets demonstrate that the proposed method outperforms
state-of-the-art methods in both recognition accuracy and implementation
efficiency.
|
Secant defectivity of projective varieties is classically approached via
dimensions of linear systems with multiple base points in general position. The
latter can be studied via degenerations. We exploit a technique that allows
some of the base points to collapse together. We deduce a general result which
we apply to prove a conjecture by Abo and Brambilla: for $c \geq 3$ and $d \geq
3$, the Segre-Veronese embedding of $\mathbb{P}^m\times\mathbb{P}^n$ in
bidegree $(c,d)$ is non-defective.
|
Memes are one of the most popular types of content used to spread information
online. They can influence a large number of people through rhetorical and
psychological techniques. The task, Detection of Persuasion Techniques in Texts
and Images, is to detect these persuasive techniques in memes. It consists of
three subtasks: (A) Multi-label classification using textual content, (B)
Multi-label classification and span identification using textual content, and
(C) Multi-label classification using visual and textual content. In this paper,
we propose a transfer learning approach to fine-tune BERT-based models in
different modalities. We also explore the effectiveness of ensembles of models
trained in different modalities. We achieve an F1-score of 57.0, 48.2, and 52.1
in the corresponding subtasks.
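A hedged sketch of the subtask-A setup (the label subset and model choice here are our assumptions): multi-label classification of meme text with a BERT-based model from Hugging Face Transformers.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["loaded_language", "name_calling", "smears"]   # illustrative subset
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(labels),
    problem_type="multi_label_classification",   # uses BCEWithLogitsLoss
)

batch = tok(["Make memes great again!"], return_tensors="pt", padding=True)
targets = torch.tensor([[1.0, 0.0, 0.0]])        # multi-hot technique labels
out = model(**batch, labels=targets)             # out.loss for fine-tuning
probs = torch.sigmoid(out.logits)                # per-technique probabilities
```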
|
Quasi-periodic oscillations, often present in the power density spectrum of
accretion disk around black holes, are useful probes for the understanding of
gravitational interaction in the near-horizon regime of black holes. Since the
presence of an extra spatial dimension modifies the near horizon geometry of
black holes, it is expected that the study of these quasi-periodic oscillations
may shed some light on the possible existence of these extra dimensions.
Intriguingly, most of the extra-dimensional models that are of significant
interest to the scientific community predict the existence of a tidal charge
parameter in black hole spacetimes. This tidal charge parameter can have an
overall negative sign and is a distinctive signature of the extra dimensions.
Motivated by this, we have studied the quasi-periodic oscillations for a
rotating braneworld black hole using the available theoretical models.
Subsequently, we have used the observations of the quasi-periodic oscillations
from available black hole sources, e.g., GRO J1655 -- 40, XTE J1550 -- 564, GRS
1915 + 105, H 1743 + 322 and Sgr A* and have compared them with the predictions
from the relevant theoretical models, in order to estimate the tidal charge
parameter. It turns out that among the 11 theoretical models considered here, 8
of them predict a negative value for the tidal charge parameter, while for the
others negative values of the tidal charge parameter are also well within the
1-$\sigma$ confidence interval.
|
We derive a continuum mechanical model to capture the morphological changes
occurring at the pretumoral stage of epithelial tissues. The proposed model
aims to investigate the competition between the bulk elasticity of the
epithelium and the surface tensions of the apical and basal sides. According to
this model, when the apico-basal tension imbalance reaches a critical value, a
subcritical bifurcation is triggered and the epithelium attains its
physiological folded shape. Based on data available in the literature, our
model predicts that pretumoral cells are softer than healthy cells.
|
The 21cm line of neutral hydrogen (HI) opens a new avenue in our exploration
of the structure and evolution of the Universe. It provides complementary data
to the current large-scale structure observations with different systematics,
and thus it will be used to improve our understanding of the $\Lambda$CDM
model. Among several radio cosmological surveys designed to measure this line,
BINGO is a single-dish telescope mainly designed to detect baryon acoustic
oscillations (BAOs) at low redshifts ($0.127< z<0.449$). Our goal is to assess
the fiducial BINGO setup and its capabilities of constraining the cosmological
parameters, and to analyze the effect of different instrument configurations.
We used the Phase 1 fiducial configuration of the BINGO telescope to perform
our cosmological forecasts. In addition, we investigated the impact of several
instrumental setups, taking into account some instrumental systematics, and
different cosmological models. Combining BINGO with Planck temperature and
polarization data, the projected constraint improves from a $13\%$ and $25\%$
precision measurement at the $68\%$ confidence level with Planck only to $1\%$
and $3\%$ for the Hubble constant and the dark energy equation of state (EoS),
respectively, within the wCDM model. Assuming a Chevallier-Polarski-Linder
parameterization, the EoS parameters have standard deviations given by
$\sigma_{w_0} = 0.30$ and $\sigma_{w_a} = 1.2$, which are improvements on the
order of $30\%$ with respect to Planck alone. Also, we can access information
about the HI density and bias, obtaining $\sim 8.5\%$ and $\sim 6\%$ precision,
respectively, assuming they vary with redshift at three independent bins. The
fiducial BINGO configuration will be able to extract significant cosmological
information from the HI distribution and provide constraints competitive with
current and future cosmological surveys. (Abridged)
|
In cases of pressure or volume overload, probing cardiac function may be
difficult because of the interactions between shape and deformations. In this
work, we use the LDDMM framework and parallel transport to estimate and
reorient deformations of the right ventricle. We then propose a normalization
procedure for the amplitude of the deformation, and a second-order spline model
to represent the full cardiac contraction. The method is applied to 3D meshes
of the right ventricle extracted from echocardiographic sequences of 314
patients divided into three disease categories and a control group. We find
significant differences between pathologies in the model parameters, revealing
insights into the dynamics of each disease.
|
In this paper, we mainly deal with sequences of bounded linear operators on
Hilbert space. The main result is the so-called squeeze theorem (or sandwich
rule) for convergent sequences of self-adjoint operators. We show that this
theorem remains valid for all three main topologies on $B(H)$.
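In our paraphrase (the paper's precise hypotheses may differ), the statement reads:

```latex
A_n \le B_n \le C_n \ \text{for all } n, \qquad
A_n \to T, \quad C_n \to T
\;\Longrightarrow\; B_n \to T,
```

with convergence taken in the norm, strong, or weak operator topology on $B(H)$, the same topology on both sides of the implication.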
|
A growing effort in NLP aims to build datasets of human explanations.
However, the term explanation encompasses a broad range of notions, each with
different properties and ramifications. Our goal is to provide an overview of
diverse types of explanations and human limitations, and discuss implications
for collecting and using explanations in NLP. Inspired by prior work in
psychology and cognitive sciences, we group existing human explanations in NLP
into three categories: proximal mechanism, evidence, and procedure. These three
types differ in nature and have implications for the resultant explanations.
For instance, procedures are not considered explanations in psychology, and
they connect with a rich body of work on learning from instructions. The diversity
of explanations is further evidenced by proxy questions that are needed for
annotators to interpret and answer open-ended why questions. Finally,
explanations may require different, often deeper, understandings than
predictions, which casts doubt on whether humans can provide useful
explanations in some tasks.
|
An electron beam is deflected when it passes over a silicon nitride surface,
if the surface is illuminated by a low-power continuous-wave diode laser. A
deflection angle of up to $1.2 \,\textrm{mrad}$ is achieved for an electron
beam of $29 \,\mu\textrm{rad}$ divergence. A mechanical beam-stop is used to
demonstrate that the effect can act as an optical electron switch with a rise
and fall time of $6 \,\mu\textrm{s}$. Such a switch provides an alternative
means to control electron beams, which may be useful in electron lithography
and microscopy.
|
Our main result is to establish stability of martingale couplings: suppose
that $\pi$ is a martingale coupling with marginals $\mu, \nu$. Then, given
approximating marginal measures $\tilde \mu \approx \mu, \tilde \nu\approx \nu$
in convex order, we show that there exists an approximating martingale coupling
$\tilde\pi \approx \pi$ with marginals $\tilde \mu, \tilde \nu$.
In mathematical finance, prices of European call / put options yield
information on the marginal measures of the arbitrage free pricing measures.
The above result asserts that small variations of call / put prices lead only
to small variations on the level of arbitrage free pricing measures.
While these facts have been anticipated for some time, the actual proof
requires somewhat intricate stability results for the adapted Wasserstein
distance. Notably, the result has consequences for several related problems.
Specifically, it is relevant for numerical approximations, it leads to a new
proof of the monotonicity principle of martingale optimal transport and it
implies stability of weak martingale optimal transport as well as optimal
Skorokhod embedding. On the mathematical finance side this yields continuity of
the robust pricing problem for exotic options and VIX options with respect to
market data. These applications will be detailed in two companion papers.
|
In the present paper, we investigate the existence of dark energy scalar field
models within the $f(R, T)$ gravity theory established by Harko et al. (Phys.
Rev. D 84, 024020, 2011) in a flat FRW universe. The correspondence between
the scalar field models has been examined by employing a new generalized
dynamical cosmological term $\Lambda(t)$. In this regard, the best-fit
observational values of the parameters from three distinct data sets are
applied. To solve the field equations, a scale factor $a = \left(\sinh(\beta
t)\right)^{1/n}$ has been considered, where $\beta$ and $n$ are constants.
Here, we employ the recent results ($H_{0}=69.2$ and $q_{0}=-0.52$) from the
(OHD+JLA) observations (Yu et al., Astrophys. J. 856, 3, 2018). Through
numerical estimation and graphical assessment of various cosmological
parameters, we find that the results are comparable with the kinematics and
physical properties of the universe and compatible with recent cosmological
observations. The dynamics and potentials of the scalar fields are clarified
in the FRW scenario in the present model. The reconstructed potentials are
highly reasonable, show a periodic behaviour, and are in agreement with the
latest observations.
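For reference, the kinematic quantities implied by this scale factor follow directly (a short derivation we add for clarity):

```latex
a(t) = \bigl(\sinh(\beta t)\bigr)^{1/n}
\;\Longrightarrow\;
H = \frac{\dot a}{a} = \frac{\beta}{n}\coth(\beta t),
\qquad
q = -\frac{\ddot a\, a}{\dot a^{2}} = -1 + n\,\operatorname{sech}^{2}(\beta t),
```

so the model decelerates at early times ($q \to n-1$ as $t \to 0$) and approaches $q \to -1$ at late times.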
|
We demonstrate the transfer of $^{23}$Na$^{40}$K molecules from a
closed-channel dominated Feshbach-molecule state to the absolute ground state.
The Feshbach molecules are initially created from a gas of sodium and potassium
atoms via adiabatic ramping over a Feshbach resonance at 78.3$\,$G. The
molecules are then transferred to the absolute ground state using stimulated
Raman adiabatic passage with an intermediate state in the spin-orbit-coupled
complex $|c^3 \Sigma^+, v=35, J=1 \rangle \sim |B^1\Pi, v=12, J=1\rangle$. Our
measurements show that the pump transition dipole moment linearly increases
with the closed-channel fraction. Thus, the pump-beam intensity can be two
orders of magnitude lower than is necessary with open-channel dominated
Feshbach molecules. We also demonstrate that the phase noise of the Raman
lasers can be reduced by filter cavities, significantly improving the transfer
efficiency.
|
We propose HERMES, a scalable, secure, and privacy-enhancing system for users
to share and access vehicles. HERMES securely outsources operations of vehicle
access token generation to a set of untrusted servers. It builds on an earlier
proposal, namely SePCAR [1], and extends the system design for improved
efficiency and scalability. To cater to system and user needs for secure and
private computations, HERMES utilizes and combines several cryptographic
primitives with secure multiparty computation efficiently. It conceals secret
keys of vehicles and transaction details from the servers, including vehicle
booking details, access token information, and user and vehicle identities. It
also provides user accountability in case of disputes. Besides, we provide
semantic security analysis and prove that HERMES meets its security and privacy
requirements. Last but not least, we demonstrate that HERMES is efficient and,
in contrast to SePCAR, scales to a large number of users and vehicles, making
it practical for real-world deployments. We build our evaluations with two
different multiparty computation protocols: HtMAC-MiMC and CBC-MAC-AES. Our
results demonstrate that HERMES with HtMAC-MiMC requires only approximately
1.83 ms to generate an access token for a single-vehicle owner and
approximately 11.9 ms for a large branch of rental companies with over a
thousand vehicles. It handles 546 and 84 access token generations per second,
respectively. This makes HERMES 696 (with HtMAC-MiMC) and 42 (with
CBC-MAC-AES) times faster than SePCAR for single-vehicle-owner access token
generation. Furthermore, we show that HERMES is practical on the vehicle side,
too, as access token operations performed on a prototype vehicle on-board unit
take only approximately 62.087 ms.
|
Context. HI filaments are closely related to dusty magnetized structures that
are observable in the far infrared (FIR). Recently it was proposed that the
coherence of oriented HI structures in velocity traces the line of sight
magnetic field tangling. Aims. We study the velocity-dependent coherence
between FIR emission at 857 GHz and HI on angular scales of 18 arcmin. Methods.
We use HI4PI HI data and Planck FIR data and apply the Hessian operator to
extract filaments. For coherence, we require that local orientation angles
{\theta} in the FIR at 857 GHz along the filaments be correlated with the HI.
Results. We find some correlation for HI column densities at |v_LSR| < 50
km/s, but a tight agreement between FIR and HI orientation angles {\theta}
exists only in narrow velocity intervals of 1 km/s. Accordingly, we assign
velocities to FIR filaments. Along the line of sight, these HI structures show
a high degree of local alignment in {\theta}, as well as in velocity space.
Interpreting these aligned structures in analogy to the polarization of dust
emission defines an HI polarization. We observe polarization fractions of up to
80%, with averages of 30%. Orientation angles {\theta} along the filaments,
projected perpendicular to the line of sight, are fluctuating systematically
and allow a characteristic distribution of filament curvatures to be
determined. Conclusions. Local HI and FIR filaments identified by the Hessian
analysis are coherent structures with well-defined radial velocities. HI
structures are also organized along the line of sight with a high degree of
coherence. The observed bending of these structures in the plane of the sky is
consistent with models for magnetic field curvatures induced by a Galactic
small-scale turbulent dynamo.
|
Recent years have witnessed the prosperity of legal artificial intelligence
with the development of technologies. In this paper, we propose a novel legal
application, legal provision prediction (LPP), which aims to predict the
legal provisions related to an affair. We formulate this task as a challenging
knowledge graph completion problem, which requires not only text understanding
but also graph reasoning. To this end, we propose a novel text-guided graph
reasoning approach. We collect a large amount of real-world legal provision data from
the Guangdong government service website and construct a legal dataset called
LegalLPP. Extensive experimental results on the dataset show that our approach
achieves better performance compared with baselines. The code and dataset are
available in \url{https://github.com/zxlzr/LegalPP} for reproducibility.
|
Local SGD is a promising approach to overcome the communication overhead in
distributed learning by reducing the synchronization frequency among worker
nodes. Despite the recent theoretical advances of local SGD in empirical risk
minimization, the efficiency of its counterpart in minimax optimization remains
unexplored. Motivated by large scale minimax learning problems, such as
adversarial robust learning and training generative adversarial networks
(GANs), we propose local Stochastic Gradient Descent Ascent (local SGDA), where
the primal and dual variables can be trained locally and averaged periodically
to significantly reduce the number of communications. We show that local SGDA
can provably optimize distributed minimax problems in both homogeneous and
heterogeneous data settings with a reduced number of communications, and we
establish convergence rates under strongly-convex-strongly-concave and
nonconvex-strongly-concave settings. In addition, we propose a novel variant,
local SGDA+, to solve nonconvex-nonconcave problems. We give corroborating
empirical evidence on different distributed minimax problems.
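A minimal numpy sketch of the communication pattern described above (our toy illustration on a per-worker saddle problem, not the paper's experiments): each worker runs K local stochastic gradient descent-ascent steps on its own data, after which primal and dual variables are averaged across workers.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_sgda(workers_data, rounds=50, K=10, lr=0.05):
    # toy saddle problem per worker i: min_x max_y  x*y + (x - a_i)^2 - (y - b_i)^2
    x = np.zeros(len(workers_data))
    y = np.zeros(len(workers_data))
    for _ in range(rounds):
        for i, (a, b) in enumerate(workers_data):
            for _ in range(K):                     # K local steps, no communication
                gx = y[i] + 2 * (x[i] - a) + 0.01 * rng.normal()
                gy = x[i] - 2 * (y[i] - b) + 0.01 * rng.normal()
                x[i] -= lr * gx                    # descent on the primal variable
                y[i] += lr * gy                    # ascent on the dual variable
        x[:] = x.mean()                            # periodic averaging step
        y[:] = y.mean()
    return x[0], y[0]

print(local_sgda([(1.0, 0.0), (-1.0, 0.5), (0.5, -0.5)]))
```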
|
In [Frobenius1896] it was shown that many important properties of a finite
group could be examined using formulas involving the character ratios of group
elements, i.e., the trace of the element acting in a given irreducible
representation, divided by the dimension of the representation. In
[Gurevich-Howe15] and [Gurevich-Howe17], the current authors introduced the
notion of rank of an irreducible representation of a finite classical group.
One of the motivations for studying rank was to clarify the nature of character
ratios for certain elements in these groups. In fact, in the above-cited papers,
two notions of rank were given. The first is the Fourier theoretic based notion
of U-rank of a representation, which comes up when one looks at its
restrictions to certain abelian unipotent subgroups. The second is the more
algebraic based notion of tensor rank which comes up naturally when one
attempts to equip the representation ring of the group with a grading that
reflects the central role played by the few "smallest" possible representations
of the group. In [Gurevich-Howe17] we conjectured that the two notions of rank
mentioned just above agree on a suitable collection called "low rank"
representations. In this note we review the development of the theory of rank
for the case of the general linear group GL_n over a finite field F_q, and give
a proof of the "agreement conjecture" that holds true for sufficiently large q.
Our proof is Fourier theoretic in nature, and uses a certain curious positivity
property of the Fourier transform of the set of matrices of low enough fixed
rank in the vector space of matrices of size m x n over F_q. In order to make
the story we are trying to tell clear, we choose in this note to follow a
particular example that shows how one might apply the theory of rank to certain
counting problems.
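For reference, the character ratio described above can be written compactly as follows; this display merely restates the definition given in the text.

```latex
\[
  \mathrm{cr}_{\pi}(g) \;=\; \frac{\chi_{\pi}(g)}{\dim(\pi)}
  \;=\; \frac{\operatorname{tr}\bigl(\pi(g)\bigr)}{\dim(\pi)},
  \qquad g \in G,\quad \pi \ \text{an irreducible representation of } G.
\]
```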
|
This study proposes an efficient data collection strategy exploiting a team
of Unmanned Aerial Vehicles (UAVs) to monitor and collect the data of a large
distributed sensor network, of the kind commonly used for environmental monitoring,
meteorology, agriculture, and renewable energy applications. The study develops
a collaborative mission planning system that enables a team of UAVs to conduct
and complete the mission of sensors' data collection collaboratively while
considering existing constraints of the UAV payload and battery capacity. The
proposed mission planner system employs the Differential Evolution (DE)
optimization algorithm, enabling the UAVs to maximize the number of visited
sensor nodes, given the priority of the sensors, while avoiding redundant
collection of sensor data. The proposed mission planner is evaluated through extensive
simulation and comparative analysis. The simulation results confirm the
effectiveness and fidelity of the proposed mission planner to be used for the
distributed sensor network monitoring and data collection.
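As an illustration of the optimization core, here is a minimal sketch of the classic DE/rand/1/bin scheme on a toy stand-in for the visit-planning objective; the priority-weighted fitness, visit budget, and DE hyperparameters are assumptions for illustration, not the paper's mission planner.

```python
import numpy as np

# Minimal Differential Evolution (DE/rand/1/bin) sketch. The fitness is a toy
# stand-in: selection weights over sensor nodes scored by priority, not the
# paper's actual mission-planning objective (an assumption for illustration).
rng = np.random.default_rng(1)
n_nodes, pop_size, F, CR, gens = 20, 30, 0.8, 0.9, 100
priority = rng.random(n_nodes)                 # assumed per-sensor priorities
budget = 8                                     # max nodes a UAV can visit

def fitness(v):
    # Visit the `budget` highest-weighted nodes; reward their total priority.
    chosen = np.argsort(v)[-budget:]
    return priority[chosen].sum()

pop = rng.random((pop_size, n_nodes))
fit = np.array([fitness(p) for p in pop])
for _ in range(gens):
    for i in range(pop_size):
        r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])          # mutation
        cross = rng.random(n_nodes) < CR                    # binomial crossover
        cross[rng.integers(n_nodes)] = True                 # keep one gene
        trial = np.where(cross, mutant, pop[i])
        f = fitness(trial)
        if f >= fit[i]:                                     # greedy selection
            pop[i], fit[i] = trial, f

best = np.argsort(pop[np.argmax(fit)])[-budget:]
print("best visit set:", sorted(best), "score:", fit.max())
```

In the full planner, the fitness would additionally encode the payload and battery constraints and penalize redundant visits across UAVs.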
|
Pauli spin blockade (PSB) in double quantum dots (DQDs) has matured into a
prime technique for precise measurements of nanoscale system parameters. In
this work we demonstrate that systems with site-dependent g-tensors and
spin-polarized leads allow for a complete characterization of the g-tensors in
the dots by magnetotransport experiments alone. Additionally, we show that
special polarization configurations can enhance the often elusive
magnetotransport signal, rendering the proposed technique robust against noise
in the system, and inducing a giant magnetoresistance effect. Finally, we
incorporate the effects of the spin-orbit interaction (SOI) and show that in
this case the leakage current contains information about the degree of spin
polarization in the leads.
|
We study the packing fraction of clusters in free-falling streams of
spherical and irregularly shaped particles using flash X-ray radiography. The
estimated packing fraction of clusters is low enough to correspond to
coordination numbers less than 6. Such coordination numbers in numerical
simulations correspond to aggregates that collide and grow without bouncing.
Moreover, the streams of irregular particles evolved faster and formed clusters
of larger sizes with lower packing fraction. This result on granular streams
suggests that particle shape has a significant effect on the agglomeration
process of granular materials.
|
In this paper we propose a theoretical model including a
susceptible-infected-recovered-dead (SIRD) model of epidemic in a dynamic
macroeconomic general equilibrium framework with agents' mobility. The latter
affects both their income (and consumption) and their probability of infecting
others and of being infected. Strategic complementarities among individual mobility
choices drive the evolution of aggregate economic activity, while infection
externalities caused by individual mobility affect disease diffusion. Rational
expectations of forward looking agents on the dynamics of aggregate mobility
and epidemic determine individual mobility decisions. The model allows us to
evaluate alternative scenarios of mobility restrictions, especially policies
dependent on the state of the epidemic. We prove the existence of an equilibrium
and provide a recursive method for constructing equilibria, which
also guides our numerical investigations. We calibrate the model by using
Italian experience on COVID-19 epidemic in the period February 2020 - May 2021.
We discuss how our economic SIRD (ESIRD) model produces a substantially
different dynamics of economy and epidemic with respect to a SIRD model with
constant agents' mobility. Finally, by numerical explorations we illustrate how
the model can be used to design an efficient policy of
state-of-epidemic-dependent mobility restrictions, which mitigates the epidemic
peaks stressing the health system, and allows for trading off the economic losses
due to reduced mobility against the lower death rate due to the lower spread of
the epidemic.
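For contrast with the full ESIRD model, the following is a minimal sketch of the constant-mobility SIRD benchmark integrated with forward Euler; the parameter values are illustrative, not the Italian calibration, and the quadratic mobility factor is one natural reading of the abstract's point that mobility affects both the probability of infecting and of being infected.

```python
import numpy as np

# Plain SIRD with constant mobility m in [0, 1] scaling the contact rate.
# beta, gamma, delta, m are illustrative, not the paper's calibrated values.
beta, gamma, delta, m = 0.30, 0.10, 0.01, 0.8
S, I, R, D = 0.999, 0.001, 0.0, 0.0
dt, T = 0.1, 300.0

for _ in range(int(T / dt)):
    new_inf = beta * (m ** 2) * S * I    # mobility enters contacts twice:
    dS = -new_inf                        # both parties must be mobile
    dI = new_inf - (gamma + delta) * I
    dR = gamma * I
    dD = delta * I
    S, I, R, D = S + dt * dS, I + dt * dI, R + dt * dR, D + dt * dD

print(f"final susceptible {S:.3f}, recovered {R:.3f}, dead {D:.4f}")
```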
|
New ideas and technologies adopted by a small number of individuals
occasionally spread globally through a complex web of social ties. Here, we
present a simple and general approximation method, namely, a message-passing
approach, that allows us to describe the diffusion processes on complex
networks in an almost exact manner. We consider two classes of binary-action
games in each of which the best pure strategy for individual players is
characterized as a variant of the threshold rule. We show that the dynamics of
diffusion observed on synthetic networks are accurately replicated by the
message-passing equation, whose fixed point corresponds to a Nash equilibrium.
In contrast, the mean-field method tends to overestimate the size and frequency
of diffusion. We extend the framework to analyze multiplex networks in which
social interactions take place in multiple layers.
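The following is a minimal sketch of the message-passing fixed point for a deterministic threshold (best-response) rule on an Erdős–Rényi graph, alongside the cruder mean-field estimate it is compared against; the graph model, absolute threshold, and seed fraction are illustrative assumptions.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
n, theta, seed_frac = 500, 2, 0.05            # nodes, threshold, initial seeds
A = rng.random((n, n)) < 8 / n                # Erdos-Renyi, mean degree ~8
A = np.triu(A, 1); A = A | A.T                # undirected adjacency

seeds = rng.random(n) < seed_frac
msg = {tuple(e): 0 for e in np.argwhere(A)}   # m_{i->j}: i adopts without j
for _ in range(100):                          # iterate to the fixed point
    new = {}
    for (i, j) in msg:
        support = sum(msg[(k, i)] for k in np.flatnonzero(A[i]) if k != j)
        new[(i, j)] = 1 if (seeds[i] or support >= theta) else 0
    if new == msg:
        break                                 # fixed point = Nash equilibrium
    msg = new

adopt = np.array([seeds[i] or
                  sum(msg[(k, i)] for k in np.flatnonzero(A[i])) >= theta
                  for i in range(n)])
print("message-passing adoption fraction:", adopt.mean())

# Mean-field estimate for comparison (tends to overestimate diffusion):
k_mf = int(round(float(A.sum() / n)))         # mean degree
rho = seed_frac
for _ in range(200):
    rho = seed_frac + (1 - seed_frac) * binom.sf(theta - 1, k_mf, rho)
print("mean-field adoption fraction:", rho)
```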
|
The modular design of planar phased arrays arranged on orthogonal
polygon-shaped apertures is addressed and a new method is proposed to
synthesize domino-tiled arrays fitting multiple, generally conflicting,
requirements. Starting from an analytic procedure to check the
domino-tileability of the aperture, two multi-objective optimization techniques
are derived to efficiently and effectively deal with small and medium/large
arrays depending on the values of the bounds for the cardinality of the
solution space of the admissible clustered solutions. A set of representative
numerical examples is reported to assess the effectiveness of the proposed
synthesis approach also through full-wave simulations when considering
non-ideal models for the radiating elements of the array.
|
The prediction of the intensity, location and time of the landfall of a
tropical cyclone well in advance and with high accuracy can reduce human
and material loss immensely. In this article, we develop a Long Short-Term
Memory (LSTM) based recurrent neural network model to predict intensity (in terms of
maximum sustained surface wind speed), location (latitude and longitude), and
time (in hours after the observation period) of the landfall of a tropical
cyclone which originates in the North Indian Ocean. The model takes as input
the best track data of a cyclone, consisting of its location, pressure, sea
surface temperature, and intensity for a certain number of hours (from 12 to 36)
at any time during the course of the cyclone, as a time series, and then provides
predictions with high accuracy. For example, using 24 hours of data of a cyclone
anytime during its course, the model provides state-of-the-art results by
predicting landfall intensity, time, latitude, and longitude with a mean
absolute error of 4.24 knots, 4.5 hours, 0.24 degree, and 0.37 degree
respectively, which resulted in a distance error of 51.7 kilometers from the
landfall location. We further check the efficacy of the model on three recent
devastating cyclones, Bulbul, Fani, and Gaja, and achieve better results than
on the test dataset.
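A minimal sketch of the sequence-to-vector architecture described above, written in PyTorch; the six-feature layout, hidden size, loss choice, and the random stand-in tensors for best track data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sequence-to-vector LSTM: a window of best-track observations in, a 4-vector
# out (landfall intensity, time-to-landfall, latitude, longitude).
class LandfallLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 4)

    def forward(self, x):                  # x: (batch, timesteps, n_features)
        _, (h, _) = self.lstm(x)           # h: (1, batch, hidden)
        return self.head(h[-1])            # (batch, 4)

model = LandfallLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                      # mean absolute error, as reported

# Random stand-ins: 32 cyclones x 24 hourly steps x 6 features (assumption).
x, y = torch.randn(32, 24, 6), torch.randn(32, 4)
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("training MAE:", loss.item())
```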
|
This proof of concept (PoC) assesses the ability of machine learning (ML)
classifiers to predict the presence of a stenosis in a three vessel arterial
system consisting of the abdominal aorta bifurcating into the two common
iliacs. A virtual patient database (VPD) is created using a one-dimensional pulse
wave propagation model of haemodynamics. Four different machine learning (ML)
methods are used to train and test a series of classifiers -- both binary and
multiclass -- to distinguish between healthy and unhealthy virtual patients
(VPs) using different combinations of pressure and flow-rate measurements. It
is found that the ML classifiers achieve specificities larger than 80% and
sensitivities ranging from 50% to 75%. The most balanced classifier also achieves
an area under the receiver operating characteristic curve of 0.75,
outperforming approximately 20 methods used in clinical practice, and thus
placing the method as moderately accurate. Other important observations from
this study are that: i) few measurements can provide similar classification
accuracies compared to the case when more/all the measurements are used; ii)
some measurements are more informative than others for classification; and iii)
a modification of standard methods can result in detection of not only the
presence of stenosis, but also the stenosed vessel.
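To illustrate the classification setup, here is a minimal sketch of training and scoring a binary healthy/unhealthy classifier on synthetic pressure and flow-rate features; the data generator and the random-forest model are stand-ins, not the paper's VPD or its four ML methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# Stand-in "measurements": pressures/flow-rates at a few sites (assumed).
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=n) > 0.8).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)

tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
print("specificity:", tn / (tn + fp))      # paper's target: > 80%
print("sensitivity:", tp / (tp + fn))      # paper's reported range: 50-75%
print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```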
|
Unlabelled data appear in many domains and are particularly relevant to
streaming applications, where even though data is abundant, labelled data is
rare. To address the learning problems associated with such data, one can
ignore the unlabelled data and focus only on the labelled data (supervised
learning); use the labelled data and attempt to leverage the unlabelled data
(semi-supervised learning); or assume some labels will be available on request
(active learning). The first approach is the simplest, yet the amount of
labelled data available will limit the predictive performance. The second
relies on finding and exploiting the underlying characteristics of the data
distribution. The third depends on an external agent to provide the required
labels in a timely fashion. This survey pays special attention to methods that
leverage unlabelled data in a semi-supervised setting. We also discuss the
delayed labelling issue, which impacts both fully supervised and
semi-supervised methods. We propose a unified problem setting, discuss the
learning guarantees and existing methods, and explain the differences between
related problem settings. Finally, we review the current benchmarking practices
and propose adaptations to enhance them.
|
We present CSWin Transformer, an efficient and effective Transformer-based
backbone for general-purpose vision tasks. A challenging issue in Transformer
design is that global self-attention is very expensive to compute whereas local
self-attention often limits the field of interactions of each token. To address
this issue, we develop the Cross-Shaped Window self-attention mechanism for
computing self-attention in the horizontal and vertical stripes in parallel,
which together form a cross-shaped window; each stripe is obtained by splitting
the input feature into stripes of equal width. We provide a detailed mathematical
analysis of the effect of the stripe width and vary the stripe width for
different layers of the Transformer network which achieves strong modeling
capability while limiting the computation cost. We also introduce
Locally-enhanced Positional Encoding (LePE), which handles the local positional
information better than existing encoding schemes. LePE naturally supports
arbitrary input resolutions, and is thus especially effective and friendly for
downstream tasks. Incorporated with these designs and a hierarchical structure,
CSWin Transformer demonstrates competitive performance on common vision tasks.
Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra
training data or labels, 53.9 box AP and 46.4 mask AP on the COCO detection
task, and 51.7 mIoU on the ADE20K semantic segmentation task, surpassing the
previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and
+2.0 respectively under a similar FLOPs setting. By further pretraining on
the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K
and state-of-the-art segmentation performance on ADE20K with 55.7 mIoU. The
code and models will be available at
https://github.com/microsoft/CSWin-Transformer.
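A simplified sketch of the cross-shaped window idea in PyTorch: half of the channels attend within horizontal stripes, the other half within vertical stripes, and the two results are concatenated. Using nn.MultiheadAttention, splitting channels rather than attention heads, and omitting LePE and the surrounding block structure are all simplifying assumptions of this sketch.

```python
import torch
import torch.nn as nn

class CrossShapedAttention(nn.Module):
    def __init__(self, dim=64, sw=2, heads=4):
        super().__init__()
        self.sw = sw                       # stripe width
        self.attn_h = nn.MultiheadAttention(dim // 2, heads, batch_first=True)
        self.attn_v = nn.MultiheadAttention(dim // 2, heads, batch_first=True)

    def _stripes(self, x, attn, sw):
        # x: (B, H, W, c); groups of `sw` consecutive rows form one stripe.
        B, H, W, c = x.shape
        x = x.reshape(B * (H // sw), sw * W, c)   # each stripe = sw*W tokens
        x, _ = attn(x, x, x)                      # attention inside stripe
        return x.reshape(B, H, W, c)

    def forward(self, x):                         # x: (B, H, W, dim)
        c = x.shape[-1] // 2
        xh = self._stripes(x[..., :c], self.attn_h, self.sw)
        # Vertical stripes: transpose spatial axes, reuse the same routine.
        xv = self._stripes(x[..., c:].transpose(1, 2), self.attn_v, self.sw)
        return torch.cat([xh, xv.transpose(1, 2)], dim=-1)

attn = CrossShapedAttention()
out = attn(torch.randn(2, 8, 8, 64))
print(out.shape)                                  # torch.Size([2, 8, 8, 64])
```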
|
In this paper, we discuss asymptotic behavior of the capacity of the range of
symmetric simple random walks on finitely generated groups. We show the
corresponding strong law of large numbers and central limit theorem.
|
Periodic outbursts are observed in several changing-look (CL) active galactic
nuclei (AGNs). \citet{sniegowska_possible_2020} suggested a model to explain
the repeating CL in these AGNs, where the periodic outbursts are triggered in a
narrow unstable zone between an inner ADAF and outer thin disk. In this work,
we intend to investigate the effects of large-scale magnetic fields on the
limit cycle behaviors of CL AGNs. The winds driven by magnetic fields can
significantly change the structure of the thin disk by taking away the angular
momentum and energy of the disk. It is found that the period of outburst in
repeating CL AGNs can be substantially reduced by the magnetic fields.
Conversely, if we keep the period unchanged, the outburst intensity can be
raised several times over. These results can help to explain the observational
properties of multiple CL AGNs. Besides the magnetic fields, the effects of
transition radius $R_{\rm tr}$, the width of transition zone $\Delta R$ and
Shakura-Sunyaev parameter $\alpha$ are also explored in this work.
|
In this paper, we study the long time asymptotic behavior for the initial
value problem of the modified Camassa-Holm (mCH) equation in the solitonic
region
\begin{align}
&m_{t}+\left(m\left(u^{2}-u_{x}^{2}\right)\right)_{x}+\kappa u_{x}=0, \quad m=u-u_{xx}, \nonumber\\
&u(x, 0)=u_{0}(x), \nonumber
\end{align}
where $\kappa$ is a positive
constant. Based on the spectral analysis of the Lax pair associated with the
mCH equation and scattering matrix, the solution of the Cauchy problem is
characterized via the solution of a Riemann-Hilbert (RH) problem. Further using
the $\overline\partial$ generalization of Deift-Zhou steepest descent method,
we derive different long-time asymptotic expansions of the solution $u(x,t)$ in
different space-time solitonic regions of $x/t$. These asymptotic approximations
can be characterized with an $N(\Lambda)$-soliton whose parameters are
modulated by a sum of localized soliton-soliton interactions as one moves
through the region, with different residual error orders from the $\overline\partial$
equation: $\mathcal{O}(|t|^{-1+2\rho})$ for
$\xi=\frac{y}{t}\in(-\infty,-0.25)\cup(2,+\infty)$ and
$\mathcal{O}(|t|^{-3/4})$ for $\xi=\frac{y}{t}\in(-0.25,2)$. Our results also
confirm the soliton resolution conjecture and the asymptotic stability of
$N$-soliton solutions for the mCH equation.
|
Mean-reverting portfolios with few assets, but high variance, are of great
interest for investors in financial markets. Such portfolios are
straightforwardly profitable because they include a small number of assets
whose prices not only oscillate predictably around a long-term mean but also
possess enough volatility. Roughly speaking, sparsity minimizes trading costs,
volatility provides arbitrage opportunities, and mean-reversion property equips
investors with ideal investment strategies. Finding such favorable portfolios
can be formulated as a nonconvex quadratic optimization problem with an
additional sparsity constraint. To the best of our knowledge, there is as yet no
method for solving this problem that enjoys favorable theoretical properties.
In this paper, we develop an effective two-stage algorithm for this
problem. In the first stage, we apply a tailored penalty decomposition method
for finding a stationary point of this nonconvex problem. For a fixed penalty
parameter, the block coordinate descent method is utilized to find a stationary
point of the associated penalty subproblem. In the second stage, we improve the
result from the first stage via a greedy scheme that solves restricted
nonconvex quadratically constrained quadratic programs (QCQPs). We show that
the optimal value of such a QCQP can be obtained by solving its semidefinite
relaxation. Numerical experiments on S\&P 500 data are conducted to demonstrate the
effectiveness of the proposed algorithm.
|
Generalized self-concordance is a key property present in the objective
function of many important learning problems. We establish the convergence rate
of a simple Frank-Wolfe variant that uses the open-loop step size strategy
$\gamma_t = 2/(t+2)$, obtaining a $\mathcal{O}(1/t)$ convergence rate for this
class of functions in terms of primal gap and Frank-Wolfe gap, where $t$ is the
iteration count. This avoids the use of second-order information and the need to
estimate local smoothness parameters required in previous work. We also show improved
convergence rates for various common cases, e.g., when the feasible region
under consideration is uniformly convex or polyhedral.
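As a concrete illustration of the open-loop rule, here is a minimal Frank-Wolfe sketch on the probability simplex with step size $\gamma_t = 2/(t+2)$; the logistic-type objective and random problem data are illustrative assumptions, not the paper's generalized self-concordant test problems.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 10
A = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)

def f(x):                      # logistic loss (generalized self-concordant)
    return np.mean(np.log1p(np.exp(-y * (A @ x))))

def grad(x):
    s = -y / (1.0 + np.exp(y * (A @ x)))
    return A.T @ s / n

x = np.ones(d) / d             # start at the simplex barycenter
for t in range(200):
    g = grad(x)
    v = np.zeros(d); v[np.argmin(g)] = 1.0        # LMO over the simplex
    fw_gap = g @ (x - v)                          # Frank-Wolfe gap
    x = x + (2.0 / (t + 2.0)) * (v - x)           # open-loop step size
print(f"f(x) = {f(x):.4f}, final FW gap = {fw_gap:.2e}")
```

Note that neither the step size nor the linear minimization oracle needs any smoothness or curvature estimate, which is exactly the appeal of the open-loop strategy.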
|
Starphenes are attractive compounds due to their characteristic
physicochemical properties that are inherited from acenes, making them
interesting compounds for organic electronics and optics. However, the
instability and low solubility of larger starphene homologs make their
synthesis extremely challenging. Herein, we present a new strategy leading to
pristine [16]starphene on a preparative scale. Our approach is based on the
synthesis of a carbonyl-protected starphene precursor that is thermally
converted in the solid state to the neat [16]starphene, which is then
characterised with a variety of analytical methods, such as 13C CP-MAS NMR,
TGA, MS MALDI, UV-Vis and FTIR spectroscopy. Furthermore, high-resolution STM
experiments unambiguously confirm its expected structure and reveal a moderate
electronic delocalisation between the pentacene arms. Nucleus-independent
chemical shifts NICS(1) are also calculated to survey its aromatic character.
|
An extended designed regression analysis of experimental data on density and
refractive indices of several classes of ionic liquids yielded statistically
averaged atomic volumes and polarizabilities of the constituting atoms. These
values can be used to predict the molecular volume and polarizability of an
unknown ionic liquid as well as its mass density and refractive index. Our
approach does not need information on the molecular structure of the ionic
liquid, but it turned out that the discrimination of the hybridization state of
the carbons improved the overall result. Our results are not only compared to
experimental data but also to quantum-chemical calculations. Furthermore,
fractional charges of ionic liquid ions and their relation to polarizability
are discussed.
|
We propose an efficient semi-Lagrangian Characteristic Mapping (CM) method
for solving the three-dimensional (3D) incompressible Euler equations. This
method evolves advected quantities by discretizing the flow map associated with
the velocity field. Using the properties of the Lie group of volume preserving
diffeomorphisms SDiff, long-time deformations are computed from a composition
of short-time submaps which can be accurately evolved on coarse grids. This
method is a fundamental extension to the CM method for two-dimensional
incompressible Euler equations [51]. We take a geometric approach in the 3D
case where the vorticity is not a scalar advected quantity, but can be computed
as a differential 2-form through the pullback of the initial condition by the
characteristic map. This formulation is based on the Kelvin circulation theorem
and gives a point-wise Lagrangian description of the vorticity field. We
demonstrate through numerical experiments the validity of the method and show
that energy is not dissipated through artificial viscosity and small scales of
the solution are preserved. We provide error estimates and numerical
convergence tests showing that the method is globally third-order accurate.
|
The first phase of table recognition is to detect the tabular area in a
document. Subsequently, the tabular structures are recognized in the second
phase in order to extract information from the respective cells. Table
detection and structural recognition are pivotal problems in the domain of
table understanding. However, table analysis is a perplexing task due to the
colossal amount of diversity and asymmetry in tables. Therefore, it is an
active area of research in document image analysis. Recent advances in the
computing capabilities of graphical processing units have enabled deep neural
networks to outperform traditional state-of-the-art machine learning methods.
Table understanding has substantially benefited from the recent breakthroughs
in deep neural networks. However, there has not been a consolidated description
of the deep learning methods for table detection and table structure
recognition. This review paper provides a thorough analysis of the modern
methodologies that utilize deep neural networks. This work provides a thorough
understanding of the current state of the art and related challenges of table
understanding in document images. Furthermore, the leading datasets and their
intricacies have been elaborated along with the quantitative results. Moreover,
a brief overview is given regarding the promising directions that can serve as
a guide to further improve table analysis in document images.
|
General Relativity and the $\Lambda$CDM framework are currently the standard
lore and constitute the concordance paradigm. Nevertheless, long-standing open
theoretical issues, as well as possible new observational ones arising from the
explosive development of cosmology over the last two decades, offer the motivation
for a large amount of research to be devoted to constructing various
extensions and modifications. All extended theories and scenarios are first
examined under the light of theoretical consistency, and then are applied to
various geometrical backgrounds, such as the cosmological and the spherical
symmetric ones. Their predictions at both the background and perturbation
levels, and concerning cosmology at early, intermediate and late times, are
then confronted with the huge amount of observational data that astrophysics
and cosmology are able to offer recently. Theories, scenarios and models that
successfully and efficiently pass the above steps are classified as viable and
are candidates for the description of Nature. We list the recent developments
in the fields of gravity and cosmology, presenting the state of the art,
highlighting the open problems, and outlining the directions of future
research. This work has been performed in the framework of the COST European
Action "Cosmology and Astrophysics Network for Theoretical Advances and
Training Actions".
|
The free monoid $A^*$ on a finite totally ordered alphabet $A$ acts at the
left on columns, by Schensted left insertion. This defines a finite monoid,
denoted $Styl(A)$ and called the stylic monoid. It is canonically a quotient of
the plactic monoid. Our main results are: the cardinality of $Styl(A)$ is equal to
the number of partitions of a set of $|A|+1$ elements. We give a bijection with
so-called $N$-tableaux, similar to Schensted's algorithm, explaining this fact.
Presentation of $Styl(A)$: it is generated by $A$ subject to the plactic
(Knuth) relations and the idempotent relations $a^2=a$, $a\in A$. The canonical
involutive anti-automorphism on $A^*$, which reverses the order on $A$, induces
an involution of $Styl(A)$, which similarly to the corresponding involution of
the plactic monoid, may be computed by an evacuation-like operation
(Sch\"utzenberger involution on tableaux) on so-called standard immaculate
tableaux (which are in bijection with partitions). The monoid $Styl(A)$ is
$J$-trivial, and the $J$-order of $Styl(A)$ is graded: the co-rank is given by
the number of elements in the $N$-tableau. The monoid $Styl(A)$ is the
syntactic monoid of the function which associates to each word $w\in A^*$
the length of its longest strictly decreasing subword.
|
Uniaxial anisotropy in nonlinear birefringent crystals limits the efficiency
of nonlinear optical interactions and breaks the spatial symmetry of light
generated in the parametric down-conversion (PDC) process. Therefore, this
effect is usually undesirable and must be compensated for. However, high gain
may be used to overcome the destructive role of anisotropy and instead exploit
it for the generation of bright two-mode correlated twin-beams. In this work,
we provide a rigorous theoretical description of the spatial properties of
bright squeezed light in the presence of strong anisotropy. We investigate a
single-crystal and a two-crystal configuration and demonstrate the generation
of bright correlated twin-beams in such systems at high gain due to anisotropy.
We explore the mode structure of the generated light and show how anisotropy,
together with crystal spacing, can be used for radiation shaping.
|
Today, permissioned blockchains are being adopted by large organizations for
business critical operations. Consequently, they are subject to attacks by
malicious actors. Researchers have discovered and enumerated a number of
attacks that could threaten availability, integrity and confidentiality of
blockchain data. However, currently it remains difficult to detect these
attacks. We argue that security experts need appropriate visualizations to
assist them in detecting attacks on blockchain networks. To achieve this, we
develop HyperSec, a visual analytics monitoring tool that provides relevant
information at a glance to detect ongoing attacks on Hyperledger Fabric. For
evaluation, we connect the HyperSec prototype to a Hyperledger Fabric test
network. The results show that common attacks on Fabric can be detected by a
security expert using HyperSec's visualizations.
|
We present a novel approach for temporal contrast enhancement of energetic
laser pulses by filtered SPM-broadened spectra. A measured temporal contrast
enhancement of at least 7 orders of magnitude has been achieved in a simple
setup. This technique is applicable to a wide range of laser parameters and
poses a highly efficient alternative to existing contrast-enhancement methods.
|
Zero-knowledge succinct non-interactive argument of knowledge (zkSNARK)
allows a party, known as the prover, to convince another party, known as the
verifier, that he knows a private value $v$, without revealing it, such that
$F(u,v)=y$ for some function $F$ and public values $u$ and $y$. There are
various versions of zkSNARK; among them, Quadratic Arithmetic Program
(QAP)-based zkSNARK has been widely used in practice, especially in blockchain
technology. This is attributed to two desirable features: its fixed-size proof
and the very light computation load of the verifier. However, the computation
load of the prover in QAP-based zkSNARKs is very heavy, even though it is
designed to be very efficient. This load can be beyond the prover's computation
power to handle, and has to be offloaded to some external servers. In the
existing offloading solutions, either (i) the load of computation offloaded to
each server is a fraction of the prover's primary computation (e.g., DIZK),
but the servers need to be trusted, or (ii) the servers are not required to be
trusted, but the computation complexity imposed on each one is the same as the
prover's primary computation (e.g., Trinocchio). In this paper, we present a
scheme, which has the benefits of both solutions. In particular, we propose a
secure multi-party proof generation algorithm where the prover can delegate its
task to $N$ servers, where (i) even if a group of $T \in \mathbb{N}$ servers,
$T\le N$, collude, they cannot gain any information about the secret value $v$,
and (ii) the computation complexity of each server is less than $1/(N-T)$ of the
prover's primary computation. The design is such that we do not lose the
efficiency of the prover's algorithm in the process of delegating the tasks to
external servers.
|
We investigate the impact of rotation and magnetic fields on the dynamics and
gravitational wave emission in 2D core-collapse supernova simulations with
neutrino transport. We simulate 17 different models of $15\,M_\odot$ and
$39\,M_\odot$ progenitor stars with various initial rotation profiles and
initial magnetic fields strengths up to $10^{12}\, \mathrm{G}$, assuming a
dipolar field geometry in the progenitor. Strong magnetic fields generally
prove conducive to shock revival, though this trend is not without exceptions.
The impact of rotation on the post-bounce dynamics is more variegated, in line
with previous studies. A significant impact on the time-frequency structure of
the gravitational wave signal is found only for rapid rotation or strong
initial fields. For rapid rotation, the angular momentum gradient at the
proto-neutron star surface can appreciably affect the frequency of the dominant
mode, so that known analytic relations for the high-frequency emission band no
longer hold. In the case of two magnetorotational explosion models, the deviation
from these analytic relations is even more pronounced. One of the
magnetorotational explosions has been evolved to more than half a second after
the onset of the explosion and shows a subsidence of high-frequency emission at
late times. Its most conspicuous gravitational wave signature is a
high-amplitude tail signal. We also estimate the maximum detection distances
for our waveforms. The magnetorotational models do not stand out as more
detectable during the post-bounce and explosion phase.
|
Existing coordinated cyber-attack detection methods have low detection
accuracy and efficiency and poor generalization ability due to difficulties
dealing with unbalanced attack data samples, high data dimensionality, and
noisy data sets. This paper proposes a model for cyber and physical data fusion
using a data link for detecting attacks on a Cyber-Physical Power System
(CPPS). Two-step principal component analysis (PCA) is used for classifying the
system's operating status. An adaptive synthetic sampling algorithm is used to
reduce the imbalance in the categories' samples. The loss function is improved
according to the feature intensity difference of the attack event, and an
integrated classifier is established using a classification algorithm based on
the cost-sensitive gradient boosting decision tree (CS-GBDT). The simulation
results show that the proposed method provides higher accuracy, recall, and
F-Score than comparable algorithms.
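A minimal sketch of the data-side pipeline on synthetic data: PCA for dimensionality reduction, adaptive synthetic sampling (imbalanced-learn's ADASYN) to reduce class imbalance, and a gradient-boosted tree classifier with up-weighted attack samples as a rough stand-in for the paper's cost-sensitive loss. The data generator, weights, and model settings are assumptions for illustration.

```python
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
# Synthetic stand-in for fused cyber-physical features; ~5% attack samples.
X = rng.normal(size=(4000, 40))
y = (rng.random(4000) < 0.05).astype(int)
X[y == 1, :5] += 1.5                      # attacks shift a few features

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y,
                                      random_state=0)
pca = PCA(n_components=10).fit(Xtr)       # dimensionality reduction
Xtr, Xte = pca.transform(Xtr), pca.transform(Xte)

Xbal, ybal = ADASYN(random_state=0).fit_resample(Xtr, ytr)  # rebalance

w = np.where(ybal == 1, 5.0, 1.0)         # cost-sensitivity via weights
clf = GradientBoostingClassifier(random_state=0).fit(Xbal, ybal,
                                                     sample_weight=w)
print(classification_report(yte, clf.predict(Xte), digits=3))
```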
|
We examine the general problem of inter-domain Gaussian Processes (GPs):
problems where the GP realization and the noisy observations of that
realization lie on different domains. When the mapping between those domains is
linear, such as integration or differentiation, inference is still closed form.
However, many of the scaling and approximation techniques that our community
has developed do not apply to this setting. In this work, we introduce the
hierarchical inducing point GP (HIP-GP), a scalable inter-domain GP inference
method that enables us to improve the approximation accuracy by increasing the
number of inducing points to the millions. HIP-GP, which relies on inducing
points with grid structure and a stationary kernel assumption, is suitable for
low-dimensional problems. In developing HIP-GP, we introduce (1) a fast
whitening strategy, and (2) a novel preconditioner for conjugate gradients
which can be helpful in general GP settings. Our code is available at
https://github.com/cunningham-lab/hipgp.
|
Causally consistent distributed storage systems have received significant
recent attention due to their potential for providing low-latency data access
as compared with linearizability. Current causally consistent data stores use
partial or full replication to ensure data access to clients over a distributed
setting.
In this paper, we develop, for the first time, an erasure coding based
algorithm called CausalEC that ensures causal consistency for a collection of
read-write objects stored in a distributed set of nodes over an asynchronous
message passing system. CausalEC can use an arbitrary linear erasure code for
data storage, and ensures liveness and storage properties prescribed by the
erasure code.
CausalEC retains a key benefit of previously designed replication-based
algorithms - every write operation is local, that is, a server performs only
local actions before returning to a client that issued a write operation. For
servers that store certain objects in an uncoded manner, read operations to
those objects also return locally. In general, a read operation to an object
can be returned by a server on contacting a small subset of other servers so
long as the underlying erasure code allows for the object to be decoded from
that subset. As a byproduct, we develop EventualEC, a new eventually consistent
erasure coding based data storage algorithm.
A novel technical aspect of CausalEC is the use of cross-object erasure
coding, where nodes encode values across multiple objects, unlike previous
consistent erasure coding based solutions. CausalEC navigates the technical
challenges of cross-object erasure coding, in particular, pertaining to
re-encoding the objects when writes update the values and ensuring that reads
are served in the transient state where the system transitions to storing the
codeword symbols corresponding to the new object versions.
|
This paper proposes a general enhancement to the Normalizing Flows (NF) used
in neural vocoding. As a case study, we improve expressive speech vocoding with
a revamped Parallel Wavenet (PW). Specifically, we propose to extend the affine
transformation of PW to the more expressive invertible non-affine function. The
greater expressiveness of the improved PW leads to better-perceived signal
quality and naturalness in the waveform reconstruction and text-to-speech (TTS)
tasks. We evaluate the model across different speaking styles on a
multi-speaker, multi-lingual dataset. In the waveform reconstruction task, the
proposed model closes the naturalness and signal quality gap from the original
PW to recordings by $10\%$, and from other state-of-the-art neural vocoding
systems by more than $60\%$. We also demonstrate improvements in objective
metrics on the evaluation test set, with L2 spectral distance and cross-entropy
reduced by $3\%$ and $6$‰ respectively, compared to the affine PW.
Furthermore, we extend the probability density distillation procedure proposed
by the original PW paper, so that it works with any non-affine invertible and
differentiable function.
|
We obtain an exact spherically symmetric and magnetically charged black hole
solution in 4D Einstein-Gauss-Bonnet gravity coupled with rational nonlinear
electrodynamics. The thermodynamics of our model is studied. We calculate the
Hawking temperature and the heat capacity of the black hole. Phase
transitions occur at the point where the Hawking temperature possesses an
extremum. We show that black holes are thermodynamically stable in some range
of event horizon radii when the heat capacity is positive. The logarithmic
correction to the Bekenstein-Hawking entropy is obtained.
|
As photovoltaic (PV) penetration continues to rise and smart inverter
functionality continues to expand, smart inverters and other distributed energy
resources (DERs) will play increasingly important roles in distribution system
power management and security. In this paper, it is demonstrated that a
constellation of smart inverters in a simulated distribution circuit can enable
precise voltage predictions using an asynchronous and decentralized prediction
algorithm. Using simulated data and a constellation of 15 inverters in a ring
communication topology, the COLA algorithm is shown to accomplish the learning
task required for voltage magnitude prediction with far less communication
overhead than fully connected P2P learning protocols. Additionally, a dynamic
stopping criterion is proposed that does not require a regularizer like the
original COLA stopping criterion.
|
We describe a formal approach based on graphical causal models to identify
the "root causes" of the change in the probability distribution of variables.
After factorizing the joint distribution into conditional distributions of each
variable, given its parents (the "causal mechanisms"), we attribute the change
to changes of these causal mechanisms. This attribution analysis accounts for
the fact that mechanisms often change independently and sometimes only some of
them change. Through simulations, we study the performance of our distribution
change attribution method. We then present a real-world case study identifying
the drivers of the difference in the income distribution between men and women.
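The idea can be sketched in a few lines: factorize the joint distribution into causal mechanisms, then swap mechanisms one at a time between the "old" and "new" regimes and measure how much of the distribution change each swap explains. The two-variable linear example and the use of the mean shift as the change measure are illustrative assumptions; single-swap contributions need not sum to the total change, and Shapley-style symmetrization over swap orders is one common remedy.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Causal graph X -> Y; the joint factorizes as P(X) * P(Y | X).
def sample(mech_x, mech_y):
    x = mech_x(n)
    return x, mech_y(x)

old_x = lambda n: rng.normal(0.0, 1.0, n)            # old mechanism for X
new_x = lambda n: rng.normal(0.5, 1.0, n)            # shifted P(X)
old_y = lambda x: 2.0 * x + rng.normal(0.0, 1.0, n)  # old mechanism for Y|X
new_y = lambda x: 2.5 * x + rng.normal(0.0, 1.0, n)  # changed P(Y|X)

target = lambda xy: xy[1].mean()                     # change measure: E[Y]
base = target(sample(old_x, old_y))

# Attribute the total change by switching one mechanism at a time.
dx = target(sample(new_x, old_y)) - base             # contribution of P(X)
dy = target(sample(old_x, new_y)) - base             # contribution of P(Y|X)
total = target(sample(new_x, new_y)) - base
print(f"P(X) change: {dx:.3f}, P(Y|X) change: {dy:.3f}, total: {total:.3f}")
```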
|
Electroencephalography (EEG) signals are often used to learn about brain
structure and about what a person is thinking. EEG signals can be easily
affected by external factors, so various pre-processing steps should be applied
during their analysis. In this study, we use EEG signals recorded from 109
subjects while opening and closing their right or left fists, performing hand
and foot movements, and imagining the same movements. The relationship between
motor activities and the imagination of those motor activities was
investigated. Algorithms with high performance rates have been used for feature
extraction and selection, with classification performed using the nearest
neighbour algorithm.
|
Motivated by the LHCb-group discovery of exotic hadrons in the range (6.2
$\sim$ 6.9) GeV, we present new results for the masses and couplings of
$0^{++}$ fully heavy $(\bar{Q}Q)(Q\bar{Q})$ molecule and $(QQ)(\overline{QQ})$
tetraquark states from relativistic QCD Laplace Sum Rules (LSR) within stability
criteria, where Next-to-Leading Order (NLO) Factorized (F) Perturbative (PT)
corrections are included. As the Operator Product Expansion (OPE) usually
converges for $d\leqslant 6-8$, we evaluate the QCD spectral functions at
Lowest Order (LO) of PT QCD and up to $\langle G^3 \rangle$. We also emphasize
the importance of PT radiative corrections for heavy quark sum rules in order
to justify the use of the running heavy quark mass value in the analysis. We
compare our predictions in Table 3 with the ones from ratio of Moments (MOM).
The broad structure arround (6.2 $\sim$ 6.9) GeV can be described by the
$\overline{\eta}_c\eta_c$, $\overline{J/\psi}J/\psi$ and
$\overline{\chi}_{c1}\chi_{c1}$ molecules or/and $\overline{S}_c S_c$,
$\overline{A}_c A_c$ and $\overline{V}_c V_c$ tetraquarks lowest mass ground
states. The narrow structure at (6.8 $\sim$ 6.9) GeV if it is a $0^{++}$ state
can be a $\overline{\chi}_{c0}\chi_{c0}$ molecules or/and its analogue
$\overline{P}_c P_c$ tetraquark. The $\overline{\chi}_{c1}\chi_{c1}$ predicted
mass is found to be below the $\chi_{c1}\chi_{c1}$ threshold while for the
beauty states, all of the estimated masses are above the $\eta_b \eta_b$ and
$\Upsilon(1S)\Upsilon(1S)$ threshold.
|
We consider quantum effects of gravitational and electromagnetic fields in
spherically symmetric black hole spacetimes in the asymptotic safety scenario.
Introducing both the running gravitational and electromagnetic couplings from
the renormalization group equations and applying a physically sensible scale
identification scheme based on the Kretschmann scalar, we construct a quantum
mechanically corrected, or quantum improved Reissner-Nordstrom metric. We study
the global structure of the quantum improved geometry and show, in particular,
that the central singularity is resolved, being generally replaced with a
regular Minkowski-core, where the curvature tensor vanishes. Exploring cases
with more general scale identifications, we further find that the space of
quantum improved geometries is divided into two regions: one for geometries
with a regular Minkowski-core and the other for those with a weak singularity
at the center. At the boundary of the two regions, the geometry has either
Minkowski-, de Sitter-, or anti-de Sitter(AdS)-core.
|
We investigate phases of 3d ${\cal N}=2$ Chern-Simons-matter theories,
extending to three dimensions the celebrated correspondence between 2d gauged
Wess-Zumino-Witten (GWZW) models and non-linear sigma models (NLSMs) with
geometric targets. We find that although the correspondence in 3d and 2d are
closely related by circle compactification, an important subtlety arises in
this process, changing the phase structure of the 3d theory. Namely, the
effective theory obtained from the circle compactification of a phase of a 3d
${\cal N}=2$ gauge theory is, in general, different from the phase of the 3d
${\cal N}=2$ theory on ${\mathbb R}^2\times S^{1}$, which means taking phases
of a 3d gauge theory does not necessarily commute with compactification. We
compute the Witten index of each effective theory to check this observation.
Furthermore, when the matter fields have the same non-minimal charges, the 3d
${\cal N}=2$ Chern-Simons-matter theory with a proper Chern-Simons level will
decompose into several identical 2d gauged linear sigma models (GLSMs) for the
same target upon reduction to 2d. To illustrate this phenomenon, we investigate
how vacua of the 3d gauge theory for a weighted projective space
$W\mathbb{P}_{[l,\cdots,l]}$ move on the field space when we change the radius
of $S^{1}$.
|
Ransomware has emerged as an infamous malware that has not escaped a lot of
myths and inaccuracies from media hype. Victims are not sure whether or not to
pay a ransom demand without fully understanding the lurking consequences. In
this paper, we present a ransomware classification framework based on
file-deletion and file-encryption attack structures that provides a deeper
comprehension of potential flaws and inadequacies exhibited in ransomware. We
formulate a threat and attack model representative of a typical ransomware
attack process from which we derive the ransomware categorization framework
based on a proposed classification algorithm. The framework classifies the
virulence of a ransomware attack so as to capture both the overall effectiveness of
potential ways of recovering the attacked data without paying the ransom demand
and the technical prowess of the underlying attack structures. Results
of the categorization, in increasing severity from CAT1 through CAT5, show
that many ransomware strains exhibit flaws in their implementation of encryption and
deletion attack structures, which make data recovery possible without paying the
ransom. The most severe categories CAT4 and CAT5 are better mitigated by
exploiting encryption essentials while CAT3 can be effectively mitigated via
reverse engineering. CAT1 and CAT2 are not common and are easily mitigated
without any decryption essentials.
|
Measuring the solar neutrino flux over gigayear timescales could provide a
new window to inform the Solar Standard Model as well as studies of the Earth's
long-term climate. We demonstrate the feasibility of measuring the
time-evolution of the $^8$B solar neutrino flux over gigayear timescales using
paleo detectors, naturally occurring minerals which record neutrino-induced
recoil tracks over geological times. We explore suitable minerals and identify
track lengths of 15--30 nm to be a practical window to detect the $^8$B solar
neutrino flux. A collection of ultra-radiopure minerals of different ages, each
some 0.1 kg by mass, can be used to probe the rise of the $^8$B solar neutrino
flux over the recent gigayear of the Sun's evolution. We also show that models
of the solar abundance problem can be distinguished based on the
time-integrated tracks induced by the $^8$B solar neutrino flux.
|
Metal-organic species can be designed to self-assemble in large-scale,
atomically defined, supramolecular architectures. Hybrid quantum wells, where
inorganic two-dimensional (2D) planes are separated by organic ligands, are a
particular example. The ligands effectively provide an intralayer confinement
for charge carriers resulting in a 2D electronic structure, even in
multilayered assemblies. Air-stable metal-organic chalcogenide hybrid quantum
wells have recently been found to host tightly bound 2D excitons with strong
optical anisotropy in a bulk matrix. Here, we investigate the excited-carrier
dynamics in the prototypical metal organic chalcogenide [AgSePh], disentangling
three excitonic resonances by low temperature transient absorption
spectroscopy. Our analysis suggests a complex relaxation cascade comprising
ultrafast screening and renormalization, inter-exciton relaxation, and
self-trapping of excitons within a few picoseconds. The ps-decay provided by the
self-trapping mechanism may be leveraged to unlock the material's potential for
ultrafast optoelectronic applications.
|
A deeply rooted view in classical and quantum information is that
"information is physical", i.e., to store and process information, we need a
physical body. Here we ask whether quantum information can remain without a
physical body. We answer this question in the affirmative, i.e., we argue that
quantum information can exist without a physical body in volatile form. We
introduce the notion of the volatility of quantum information and show that
indeed the conditions for it are naturally satisfied in the quantum
teleportation protocol. We argue that even if special relativity principles are
not assumed, it is possible to make quantum information volatile. We also
discuss the classical limit of the phenomenon, as well as the multiparty
scenario.
|
One can hardly believe that there is still something to be said about cubic
equations. To dodge this doubt, we will instead try and say something about
Sylvester. He doubtless found a way to solve cubic equations. As mentioned by
Rota, it was the only method in this vein that he could remember. We realize
that Sylvester's magnificent approach for reduced cubic equations boils down to
an easy identity.
|
'Magic'-angle twisted bilayer graphene has received a lot of interest due to
its flat bands with potentially non-trivial topology that lead to intricate
correlated phases. A spectrum with flat bands, however, does not require a
twist between multiple sheets of van der Waals materials, but rather can be
realized with the application of an appropriate periodic potential. Here, we
propose the imposition of a tailored periodic potential onto a single graphene
layer through local perturbations that could be created via lithography or
adatom manipulation, which also results in an energy spectrum featuring flat
bands. Our first-principles calculations for an appropriate decoration of
graphene with adatoms indeed show the presence of flat bands in the spectrum.
Furthermore, we reveal the topological nature of the flat bands through a
symmetry-indicator analysis. This non-trivial topology manifests itself in
corner-localized states with a filling anomaly as we show using a tight-binding
model. Our proposal of a single decorated graphene sheet provides a new
versatile route to study correlated phases in topologically non-trivial, flat
band structures.
|
Data augmentation is a key element of deep learning pipelines, as it informs
the network during training about transformations of the input data that keep
the label unchanged. Manually finding adequate augmentation methods and
parameters for a given pipeline, however, quickly becomes cumbersome. In particular,
while intuition can guide this decision for images, the design and choice of
augmentation policies remains unclear for more complex types of data, such as
neuroscience signals. Moreover, class-dependent augmentation strategies have
remained surprisingly unexplored in the literature, although the idea is quite intuitive:
changing the color of a car image does not change the object class to be
predicted, but doing the same to the picture of an orange does. This paper
investigates gradient-based automatic data augmentation algorithms amenable to
class-wise policies with exponentially larger search spaces. Motivated by
supervised learning applications using EEG signals for which good augmentation
policies are mostly unknown, we propose a new differentiable relaxation of the
problem. In the class-agnostic setting, results show that our new relaxation
leads to optimal performance with faster training than competing gradient-based
methods, while also outperforming gradient-free methods in the class-wise
setting. This work also proposes novel differentiable augmentation operations
relevant for sleep stage classification.
|
The Internet of Things, also known as the IoT, refers to the billions of
devices around the world that are now connected to the Internet, collecting and
sharing data. The data collected through IoT sensors must be securely
controlled. To protect the information collected by IoT sensors, a lightweight
method called Discover the Flooding Attack-RPL (DFA-RPL) has been proposed. The
DFA-RPL method identifies intrusive nodes in several steps so as to exclude
them from further routing operations. DFA-RPL first builds a cluster and
selects the most appropriate node as a cluster head in the DODAG; then, because
the RPL protocol is vulnerable to Flooding attacks, it applies a five-step ant
colony optimization (ACO) algorithm to detect Flooding attacks and prevent
malicious activity on the IoT network. In other words, if it detects a node as
malicious, it puts that node on a detention list and quarantines it for a
certain period of time. The results
obtained from the simulation show the superiority of the proposed method in
terms of Packet Delivery Rate, Detection Rate, False Positive Rate, and False
Negative Rate compared to IRAD and REATO methods.
|
This paper presents an approach to improving computational fluid dynamics
simulation forecasts of air pollution using deep learning. Our method, which
integrates Principal Components Analysis (PCA) and adversarial training, is a
way to improve the forecast skill of reduced order models obtained from the
original model solution. Once the reduced-order model (ROM) is obtained via
PCA, a Long Short-Term Memory network (LSTM) is adversarially trained on the
ROM to make forecasts. Once trained, the adversarially trained LSTM outperforms
an LSTM trained in the classical way. The study area is in London and includes
velocities and a concentration tracer that replicates a busy traffic junction.
This adversarially trained LSTM-based approach is used on the ROM in order to
produce faster forecasts of the air pollution tracer.
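A minimal sketch of the ROM forecasting pipeline without the adversarial component: project synthetic snapshots onto principal components, train an LSTM to predict the next reduced coefficients from a window of past ones, and lift forecasts back to the full space. The snapshot generator, window length, and network sizes are assumptions; the paper additionally trains the LSTM adversarially.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
# Synthetic "CFD" snapshots: 500 time steps of a 1000-dimensional field.
t = np.linspace(0, 20, 500)
snapshots = np.sin(np.outer(t, rng.random(1000))) + 0.1 * rng.normal(
    size=(500, 1000))

pca = PCA(n_components=8)
z = pca.fit_transform(snapshots)               # reduced-order coefficients

# One-step-ahead LSTM on windows of the PCA coefficients.
win = 20
X = torch.tensor(np.stack([z[i:i + win] for i in range(len(z) - win)]),
                 dtype=torch.float32)
Y = torch.tensor(z[win:], dtype=torch.float32)

lstm = nn.LSTM(8, 32, batch_first=True)
head = nn.Linear(32, 8)
opt = torch.optim.Adam([*lstm.parameters(), *head.parameters()], lr=1e-3)
for _ in range(50):
    out, _ = lstm(X)
    pred = head(out[:, -1])                    # next reduced coefficients
    loss = nn.functional.mse_loss(pred, Y)
    opt.zero_grad(); loss.backward(); opt.step()

# Lift the reduced-space forecasts back to the full field.
forecast = pca.inverse_transform(pred.detach().numpy())
print("forecast snapshot shape:", forecast.shape)   # (480, 1000)
```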
|
In this work, we investigate how quickly local perturbations propagate in
interacting boson systems with Bose-Hubbard-type Hamiltonians. In general,
these systems have unbounded local energies, and arbitrarily fast information
propagation may occur. We focus on a specific but experimentally natural
situation in which the number of bosons at any one site in the unperturbed
initial state is approximately limited. We rigorously prove the existence of an
almost-linear information-propagation light-cone, thus establishing a
Lieb--Robinson bound: the wave-front grows at most as $t\log^2 (t)$. We prove
the clustering theorem for gapped ground states and study the time complexity
of classically simulating one-dimensional quench dynamics, a topic of great
practical interest.
|
Massive multiple-input multiple-output (MIMO) is a very important technology for
future fifth-generation systems. However, massive MIMO systems are still limited
by pilot contamination (PC), which impacts the data rate due to the
non-orthogonality of the pilot sequences transmitted by users in the same cell
to the neighboring cells. We propose a channel estimation scheme with complete
knowledge of large-scale fading that uses orthogonal pilot reuse sequences (PRS)
to eliminate PC for edge users with poor channel quality, based on the
estimation of large-scale fading and a performance analysis of the maximum
ratio transmission and zero-forcing precoding methods. We derive lower bounds
on the achievable downlink data rate (DR) and the signal-to-interference-plus-noise
ratio based on assigning PRS to user groups, which mitigates this problem when
the number of antenna elements approaches infinity. The simulation results show
that a high DR can be achieved due to better channel estimation and reduced
performance loss.
|
The goal of this paper is to investigate the validity of a hybrid
embedded/homogenized in-silico approach for modeling perfusion through solid
tumors. The rationale behind this novel idea is that only the larger blood
vessels have to be explicitly resolved while the smaller scales of the
vasculature are homogenized. As opposed to typical discrete or fully-resolved
1D-3D models, the required data can be obtained with in-vivo imaging techniques
since the morphology of the smaller vessels is not necessary. By contrast, the
larger vessels, whose topology and structure is attainable non-invasively, are
resolved and embedded as one-dimensional inclusions into the three-dimensional
tissue domain which is modeled as a porous medium. A sound mortar-type
formulation is employed to couple the two representations of the vasculature.
We validate the hybrid model and optimize its parameters by comparing its
results to a corresponding fully-resolved model based on several well-defined
metrics. These tests are performed on a complex data set of three different
tumor types with heterogeneous vascular architectures. The correspondence of
the hybrid model in terms of mean representative elementary volume blood and
interstitial fluid pressures is excellent with relative errors of less than 4%.
Larger, but less important and explicable, errors are present in terms of blood
flow in the smaller, homogenized vessels. Finally, we discuss and demonstrate
how the hybrid model can be further improved to apply it for studies on tumor
perfusion and the efficacy of drug delivery.
|
We study online change point detection problems under the constraint of local
differential privacy (LDP) where, in particular, the statistician does not have
access to the raw data. As a concrete problem, we study a multivariate
nonparametric regression problem. At each time point $t$, the raw data are
assumed to be of the form $(X_t, Y_t)$, where $X_t$ is a $d$-dimensional
feature vector and $Y_t$ is a response variable. Our primary aim is to detect
changes in the regression function $m_t(x)=\mathbb{E}(Y_t |X_t=x)$ as soon as
the change occurs. We provide algorithms which respect the LDP constraint,
which control the false alarm probability, and which detect changes with a
minimal (minimax rate-optimal) delay. To quantify the cost of privacy, we also
present the optimal rate in the benchmark, non-private setting. These
non-private results are also new to the literature and thus are interesting
\emph{per se}. In addition, we study the univariate mean online change point
detection problem under privacy constraints. This serves as a blueprint for
studying more complicated private change point detection problems.
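For intuition on the univariate mean problem, here is a minimal sketch: each raw observation is privatized locally with the Laplace mechanism before reaching the statistician, who runs a simple CUSUM-style detector on the privatized stream. The privacy budget, clipping bound, noise scale, threshold, and detector are illustrative assumptions, not the paper's minimax rate-optimal procedure.

```python
import numpy as np

rng = np.random.default_rng(8)
eps, B = 1.0, 1.0                        # LDP budget; data clipped to [-B, B]
scale = 2 * B / eps                      # Laplace scale for eps-LDP release

def privatize(v):
    # Each user clips and perturbs locally; the raw value never leaves them.
    return np.clip(v, -B, B) + rng.laplace(scale=scale)

# Stream with a mean shift of 0.5 at t = 500 (assumed ground truth).
raw = np.concatenate([rng.normal(0.0, 0.2, 500), rng.normal(0.5, 0.2, 500)])
private = np.array([privatize(v) for v in raw])

# CUSUM-style detector on the privatized stream.
mu0, drift, h = 0.0, 0.2, 60.0           # pre-change mean, slack, threshold
s = 0.0
for t, z in enumerate(private):
    s = max(0.0, s + (z - mu0 - drift))
    if s > h:
        print("change declared at t =", t, "(true change at 500)")
        break
else:
    print("no change declared")
```

The Laplace noise inflates the detection delay relative to the non-private benchmark, which is exactly the cost of privacy the paper quantifies.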
|