Local volatility is an important quantity in option pricing, portfolio
hedging, and risk management. It is not directly observable from the market;
hence calibrations of local volatility models are necessary using observable
market data. Unlike most existing point-estimate methods, we cast the
large-scale nonlinear inverse problem into the Bayesian framework, yielding a
posterior distribution of the local volatility, which naturally quantifies its
uncertainty. This extra uncertainty information enables traders and risk
managers to make better decisions. To alleviate the computational cost, we
apply the Karhunen--Lo\`eve expansion to reduce the dimensionality of the Gaussian
process prior for local volatility. A modified two-stage adaptive Metropolis
algorithm is applied to sample the posterior probability distribution, which
further reduces the computational burden caused by repeated runs of the numerical
forward option pricing solver and by heuristic tuning. We demonstrate our
methodology with both synthetic and market data.
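As a rough illustration of the dimensionality-reduction step, here is a minimal sketch (not the authors' implementation; the squared-exponential kernel, grid, prior mean level, and truncation level are placeholder assumptions):

```python
import numpy as np

# Hypothetical discretization of the domain on which the local volatility
# surface is defined, flattened to a 1-D grid for simplicity.
grid = np.linspace(0.0, 1.0, 200)

# Illustrative squared-exponential covariance kernel for the GP prior.
def se_kernel(x, y, variance=0.25, length_scale=0.2):
    return variance * np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / length_scale**2)

K = se_kernel(grid, grid)

# Karhunen--Loeve expansion: eigendecomposition of the covariance matrix,
# keeping only the leading modes that capture most of the prior variance.
eigvals, eigvecs = np.linalg.eigh(K)
idx = np.argsort(eigvals)[::-1]                 # sort eigenpairs in decreasing order
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

n_modes = 10                                    # truncation level (assumption)
phi = eigvecs[:, :n_modes] * np.sqrt(np.maximum(eigvals[:n_modes], 0.0))

# A GP prior draw is now parameterized by only n_modes standard normals,
# which is the low-dimensional vector an MCMC sampler would explore.
xi = np.random.standard_normal(n_modes)
local_vol_sample = 0.2 + phi @ xi               # 0.2 = illustrative prior mean level
```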
|
A fundamental objective in quantum information science is to determine the
cost in classical resources of simulating a particular quantum system. The
classical simulation cost is quantified by the signaling dimension which
specifies the minimum amount of classical communication needed to perfectly
simulate a channel's input-output correlations when unlimited shared randomness
is held between encoder and decoder. This paper provides a collection of
device-independent tests that place lower and upper bounds on the signaling
dimension of a channel. Among them, a single family of tests is shown to
determine when a noisy classical channel can be simulated using an amount of
communication strictly less than either its input or its output alphabet size.
In addition, a family of eight Bell inequalities is presented that completely
characterize when any four-outcome measurement channel, such as a Bell
measurement, can be simulated using one communication bit and shared
randomness. Finally, we bound the signaling dimension for all partial replacer
channels in $d$ dimensions. The bounds are found to be tight for the special
case of the erasure channel.
|
This paper provides a critical overview of Georg Kreisel's method of informal
rigour, most famously presented in his 1967 paper `Informal rigour and
completeness proofs'. After first considering Kreisel's own characterization in
historical context, we then present two schemas under which we claim his
various examples of informal rigour can be subsumed. We then present detailed
reconstructions of his three original examples: his squeezing argument in favor
of the adequacy of the model theoretic analysis of logical validity, his
argument for the determinacy of the Continuum Hypothesis, and his refutation of
Markov's principle in intuitionistic analysis. We conclude by offering a
comparison of Kreisel's understanding of informal rigour with Carnap's method
of explication. In an appendix, we also offer briefer reconstructions of
Kreisel's attempts to apply informal rigour to the discovery of set theoretic
axioms, the distinction between standard and nonstandard models of arithmetic,
and the concepts of finitist proof, predicative definability, and
intuitionistic validity.
|
To scale neural speech synthesis to various real-world languages, we present
a multilingual end-to-end framework that maps byte inputs to spectrograms, thus
allowing arbitrary input scripts. Besides strong results on 40+ languages, the
framework demonstrates capabilities to adapt to new languages under extreme
low-resource and even few-shot scenarios of merely 40 seconds of transcribed
recordings, without the need for per-language resources such as a lexicon, extra
corpora, auxiliary models, or linguistic expertise, thus ensuring scalability,
while retaining satisfactory intelligibility and naturalness that match
rich-resource models. Exhaustive comparative and ablation studies are performed to reveal the
potential of the framework for low-resource languages. Furthermore, we propose
a novel method to extract language-specific sub-networks in a multilingual
model for a better understanding of its mechanism.
|
In this paper, we consider the downlink (DL) of a zero-forcing (ZF) precoded
extra-large scale massive MIMO (XL-MIMO) system. The base-station (BS) operates
with a limited number of radio-frequency (RF) transceivers due to the high cost,
power consumption, and interconnection bandwidth associated with a fully digital
implementation. The BS, which is implemented with a subarray switching
architecture, selects groups of active antennas inside each subarray to
transmit the DL signal. This work proposes efficient resource allocation (RA)
procedures to perform joint antenna selection (AS) and power allocation (PA) to
maximize the DL spectral efficiency (SE) of an XL-MIMO system operating under
different loading settings. Two metaheuristic RA procedures based on the
genetic algorithm (GA) are assessed and compared in terms of performance,
coordination data size and computational complexity. One algorithm is based on
a quasi-distributed methodology while the other is based on the conventional
centralized processing. Numerical results demonstrate that the
quasi-distributed GA-based procedure results in a suitable trade-off between
performance, complexity and exchanged coordination data. At the same time, it
outperforms the centralized procedures with appropriate system operation
settings.
|
Knowledge of longitudinal electron bunch profiles is vital to optimize the
performance of plasma wakefield accelerators and x-ray free electron laser
linacs. Because of their importance to these novel applications, noninvasive
frequency domain techniques are often employed to reconstruct longitudinal
bunch profiles from coherent synchrotron, transition, or undulator radiation
measurements. In this paper, we detail several common reconstruction techniques
involving the Kramers-Kronig phase relationship and Gerchberg-Saxton algorithm.
Through statistical analysis, we draw general conclusions about the accuracy of
these reconstruction techniques and the most suitable candidate for
longitudinal bunch reconstruction from spectroscopic data.
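As an illustration of the second technique mentioned above, a minimal Gerchberg-Saxton-style sketch (not the paper's implementation; the non-negativity constraint, iteration count, and random initial phase are placeholder assumptions, and the missing spectral regions of real measurements are ignored):

```python
import numpy as np

def gerchberg_saxton_profile(spectral_magnitude, n_iter=500, seed=0):
    """Illustrative Gerchberg-Saxton-style iteration for recovering a real,
    non-negative longitudinal profile from its measured spectral magnitude."""
    rng = np.random.default_rng(seed)
    n = len(spectral_magnitude)
    # start from the measured magnitude with a random phase guess
    spectrum = spectral_magnitude * np.exp(1j * rng.uniform(0, 2 * np.pi, n))
    for _ in range(n_iter):
        profile = np.fft.ifft(spectrum).real          # back to the time domain
        profile = np.clip(profile, 0.0, None)         # enforce non-negativity
        spectrum = np.fft.fft(profile)
        phase = np.angle(spectrum)                    # keep the iterated phase
        spectrum = spectral_magnitude * np.exp(1j * phase)  # enforce measured magnitude
    return np.clip(np.fft.ifft(spectrum).real, 0.0, None)
```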
|
In this paper we present a conservative cell-centered Lagrangian finite
volume scheme for the solution of the hyper-elasticity equations on
unstructured multidimensional grids. The starting point of the new method is
the Eucclhyd scheme, which is here combined with the a posteriori
Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure
robustness and stability at shock waves with piece-wise linear spatial
reconstruction. The ADER (Arbitrary high order schemes using DERivatives)
approach is adopted to obtain second order of accuracy in time as well. This
method has been tested in a hydrodynamics context, and the present work aims at
extending it to the case of hyper-elasticity models. Such models are presented
in a fully Lagrangian framework and the dedicated Lagrangian numerical scheme
is derived in terms of nodal solver, GCL compliance, subcell forces and
compatible discretization. The Lagrangian numerical method is implemented in 3D
under an MPI parallelization framework, allowing it to handle genuinely large
meshes. A relatively large set of numerical test cases is presented to assess the ability
of the method to achieve effective second order of accuracy on smooth flows,
maintaining an essentially non-oscillatory behavior and general robustness
across discontinuities and ensuring at least physical admissibility of the
solution where appropriate. Pure elastic neo-Hookean and non-linear materials
are considered for our benchmark test problems in 2D and 3D. These test cases
feature material bending, impact, compression, non-linear deformation and
further bouncing/detaching motions.
|
This paper presents a novel method, Zero-Reference Deep Curve Estimation
(Zero-DCE), which formulates light enhancement as a task of image-specific
curve estimation with a deep network. Our method trains a lightweight deep
network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic
range adjustment of a given image. The curve estimation is specially designed,
considering pixel value range, monotonicity, and differentiability. Zero-DCE is
appealing in its relaxed assumption on reference images, i.e., it does not
require any paired or even unpaired data during training. This is achieved
through a set of carefully formulated non-reference loss functions, which
implicitly measure the enhancement quality and drive the learning of the
network. Despite its simplicity, we show that it generalizes well to diverse
lighting conditions. Our method is efficient as image enhancement can be
achieved by an intuitive and simple nonlinear curve mapping. We further present
an accelerated and light version of Zero-DCE, called Zero-DCE++, that takes
advantage of a tiny network with just 10K parameters. Zero-DCE++ has a fast
inference speed (1000/11 FPS on a single GPU/CPU for an image of size
1200*900*3) while keeping the enhancement performance of Zero-DCE. Extensive
experiments on various benchmarks demonstrate the advantages of our method over
state-of-the-art methods qualitatively and quantitatively. Furthermore, the
potential benefits of our method to face detection in the dark are discussed.
The source code will be made publicly available at
https://li-chongyi.github.io/Proj_Zero-DCE++.html.
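A minimal sketch of the iterative quadratic curve mapping underlying Zero-DCE (the curve form follows the cited paper; the constant placeholder parameter maps below merely stand in for DCE-Net outputs):

```python
import numpy as np

def apply_le_curves(image, curve_maps):
    """Iteratively apply the quadratic light-enhancement curve
    LE_n(x) = LE_{n-1}(x) + A_n * LE_{n-1}(x) * (1 - LE_{n-1}(x)),
    where each A_n is a per-pixel parameter map in [-1, 1]."""
    x = image.astype(np.float64)            # pixel values expected in [0, 1]
    for a in curve_maps:
        x = x + a * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

# Usage sketch: in Zero-DCE the curve maps come from DCE-Net; here we use
# eight constant placeholder maps just to illustrate the mapping itself.
img = np.random.rand(256, 256, 3)
curves = [np.full_like(img, 0.3) for _ in range(8)]
enhanced = apply_le_curves(img, curves)
```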
|
The partial (up to 7 %) substitution of Cd for Zn in the Yb-based
heavy-fermion material YbFe$_2$Zn$_{20}$ is known to induce a slight ($\sim 20$
%) reduction of the Sommerfeld specific heat coefficient $\gamma$ and a huge
(up to two orders of magnitude) reduction of the $T^2$ resistivity coefficient
$A$, corresponding to a drastic and unexpected reduction of the Kadowaki-Woods
ratio $A/\gamma ^2$. Here, Yb $L_{3}$-edge X-ray absorption spectroscopy shows
that the Yb valence state is close to $3+$ for all $x$, whereas X-ray
diffraction reveals that Cd replaces the Zn ions only at the $16c$ site of the
$Fd\bar{3}m$ cubic structure, leaving the $48f$ and $96g$ sites with full Zn
occupation. Ab-initio electronic structure calculations in pure and Cd-doped
materials, carried out without considering correlations, show multiple
conduction bands with only minor modifications of the band dispersions near the
Fermi level and therefore do not explain the resistivity drop introduced by Cd
substitution. We propose that the site-selective Cd substitution introduces
light conduction bands with substantial contribution of Cd($16c$) $5p$ levels
that have weak coupling to the Yb$^{3+}$ $4f$ moments. These light fermions
coexist with heavy fermions originating from other conduction bands with larger
participation of Zn($48f$ and $96g$) $4p$ levels that remain strongly coupled
with the Yb$^{3+}$ local moments.
|
We propose a novel generic trust management framework for crowdsourced IoT
services. The framework exploits a multi-perspective trust model that captures
the inherent characteristics of crowdsourced IoT services. Each perspective is
defined by a set of attributes that contribute to the perspective's influence
on trust. The attributes are fed into a machine-learning-based algorithm to
generate a trust model for crowdsourced services in IoT environments. We
demonstrate the effectiveness of our approach by conducting experiments on
real-world datasets.
|
We investigated the radio properties of the host galaxy of the X-ray flash
XRF020903, which is the best example for investigating the off-axis origin
of gamma-ray bursts (GRBs). The dust continuum at 233 GHz and CO were observed
using the Atacama Large Millimeter/submillimeter Array. The molecular gas mass
derived by applying the metallicity-dependent CO-to-H$_{2}$ conversion factor
follows the global trend in redshift and stellar mass of GRB host
galaxies. The estimated gas depletion timescale (pertaining to the potentially
critical characteristics of GRB host galaxies) is comparable to those of GRB
and superluminous supernova hosts in the same redshift range. These properties
of the XRF020903 host galaxy observed in radio resemble those of GRB host
galaxies, thereby supporting the identical origin of XRF020903 and GRBs.
|
It was recently argued that the pigeonhole principle, which states that if
three pigeons are put into two pigeonholes then at least one pigeonhole must
contain more than one pigeon, is violated in quantum systems [Y. Aharonov et
al., PNAS 113, 532 (2016)]. An experimental verification of this effect was
recently reported [M.-C. Chen et al., PNAS 116, 1549 (2019)]. In another recent
experimental work, it was argued that two entities were observed to exchange
properties without meeting each other [Z.-H. Liu et al., Nat. Commun. 11, 3006
(2020)]. Here we describe all these proposals and experiments as simple quantum
interference effects, where no such dramatic conclusions appear. Besides
demystifying some of the conclusions of the cited works, we also present
physical insights for some interesting behaviors present in these treatments.
For instance, we associate the anomalous particle behaviors in the quantum
pigeonhole effect with a quantum interference of force.
|
We have studied the three-body recombination rates on both sides of the
interspecies d-wave Feshbach resonance in the $^{85}$Rb\,-$^{87}$Rb-$^{87}$Rb
system using the $R$-matrix propagation method in the hyperspherical coordinate
frame. Two different mechanisms of recombination rate enhancement for positive
and negative $^{85}$Rb\,-$^{87}$Rb d-wave scattering lengths are analyzed. On
the positive scattering length side, the recombination rate enhancement occurs
due to the existence of a three-body shape resonance, while on the negative
scattering length side, the coupling between the lowest entrance channel and
the highest recombination channel is crucial to the appearance of the
enhancement. In addition, our study shows that the intraspecies interaction
plays a significant role in determining the emergence of recombination rate
enhancements. Compared to the case in which the three pairwise interactions are
all in d-wave resonance, when the $^{87}$Rb-$^{87}$Rb interaction is near the
d-wave resonance, the values of the interspecies scattering length that produce
the recombination enhancement shift. In particular, when the
$^{87}$Rb-$^{87}$Rb interaction is away from the d-wave resonance, the
enhancement disappears on the negative interspecies scattering length side.
|
We discuss the quantum chemical nature of the lead(II) valence basins,
sometimes called the lead "lone pair". Using various chemical interpretation
tools, such as molecular orbital analysis, Natural Bond Orbitals (NBO),
Natural Population Analysis (NPA) and Electron Localization Function (ELF)
topological analysis, we study a variety of lead(II) complexes. A careful
analysis of the results shows that the optimal structures of the lead complexes
are governed only by the 6s and 6p subshells, whereas no involvement of the 5d
orbitals is found. Similarly, we do not find any significant contribution of
the 6d orbitals. Therefore, the Pb(II) complexation with its ligands can be
explained through the interaction of the 6s$^2$ electrons and the accepting 6p
orbitals. We detail the potential structural and dynamical consequences of such
an electronic structure organization of the Pb(II) valence domain.
|
$f(P)$ gravity is a novel extension of Einsteinian cubic gravity (ECG) in which the Ricci scalar in the
action is replaced by a function of the curvature invariant $P$ which
represents the contractions of the Riemann tensor at the cubic order \cite{p}.
The present work is concentrated on bounding some $f(P)$ gravity models using
the concept of energy conditions where the functional forms of $f(P)$ are
represented as \textbf{a)} $f(P) = \alpha \sqrt{P}$, and \textbf{b)} $f(P) =
\alpha \exp (P)$, where $\alpha$ is the sole model parameter. Energy conditions
are interesting linear relationships between pressure and density and have been
extensively employed to derive interesting results in Einstein's gravity, and
are also an excellent tool to impose constraints on any cosmological model. To
place the bounds, we required the energy density to remain positive, the
pressure to remain negative, and the EoS parameter to attain a value close
to $-1$, ensuring that the bounds respect the accelerated expansion of the
Universe and are also in harmony with the latest observational data. We report
that for both the models, suitable parameter spaces exist which satisfy the
aforementioned conditions and therefore posit the $f(P)$ theory of gravity to
be a promising modified theory of gravitation.
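For reference, the standard perfect-fluid forms of these energy conditions, written in terms of the effective energy density $\rho$ and pressure $p$ (a textbook statement, not restated in the abstract), are

\begin{align*}
\text{NEC:}&\quad \rho + p \geq 0,\\
\text{WEC:}&\quad \rho \geq 0,\quad \rho + p \geq 0,\\
\text{SEC:}&\quad \rho + 3p \geq 0,\quad \rho + p \geq 0,\\
\text{DEC:}&\quad \rho \geq 0,\quad \rho \pm p \geq 0.
\end{align*}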
|
In convolutional neural network-based character recognition, pooling layers
play an important role in dimensionality reduction and deformation
compensation. However, their kernel shapes and pooling operations are
empirically predetermined; typically, a fixed-size square kernel shape and max
pooling operation are used. In this paper, we propose a meta-learning framework
for pooling layers. As part of our framework, a parameterized pooling layer is
proposed in which the kernel shape and pooling operation are trainable using
two parameters, thereby allowing flexible pooling of the input data. We also
propose a meta-learning algorithm for the parameterized pooling layer, which
allows us to acquire a suitable pooling layer across multiple tasks. In the
experiment, we applied the proposed meta-learning framework to character
recognition tasks. The results demonstrate that a pooling layer that is
suitable across character recognition tasks was obtained via meta-learning, and
the obtained pooling layer improved the performance of the model in both
few-shot character recognition and noisy image recognition tasks.
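A hedged sketch of a trainable pooling operation in this spirit: generalized-mean pooling, which interpolates between average pooling ($p = 1$) and max pooling ($p \to \infty$). The paper's exact two-parameter kernel-shape/operation parameterization is not reproduced here; only the operation interpolation is illustrated.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralizedMeanPool2d(nn.Module):
    """Trainable generalized-mean (p-norm) pooling over fixed square windows.
    Recovers average pooling at p = 1 and approaches max pooling as p grows."""
    def __init__(self, kernel_size=2, p_init=3.0, eps=1e-6):
        super().__init__()
        self.kernel_size = kernel_size
        self.p = nn.Parameter(torch.tensor(float(p_init)))  # trainable exponent
        self.eps = eps

    def forward(self, x):
        p = self.p.clamp(min=1.0)
        x = x.clamp(min=self.eps).pow(p)          # assumes non-negative features (e.g. post-ReLU)
        x = F.avg_pool2d(x, self.kernel_size)
        return x.pow(1.0 / p)

# Usage sketch: drop-in replacement for nn.MaxPool2d(2) in a small CNN.
pool = GeneralizedMeanPool2d(kernel_size=2)
feat = torch.rand(8, 16, 28, 28)
out = pool(feat)          # shape: (8, 16, 14, 14)
```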
|
Fu and Kane have discovered that a topological insulator with induced s-wave
superconductivity (gap $\Delta_0$, Fermi velocity $v_{\rm F}$, Fermi energy
$\mu$) supports chiral Majorana modes propagating on the surface along the edge
with a magnetic insulator. We show that the direction of motion of the Majorana
fermions can be inverted by the counterflow of supercurrent, when the Cooper
pair momentum along the boundary exceeds $\Delta_0^2/\mu v_{\rm F}$. The
chirality inversion is signaled by a doubling of the thermal conductance of a
channel parallel to the supercurrent. Moreover, the inverted edge can transport
a nonzero electrical current, carried by a Dirac mode that appears when the
Majorana mode switches chirality. The chirality inversion is a unique signature
of Majorana fermions in a spinful topological superconductor: it does not exist
for spinless chiral p-wave pairing.
|
Transport phenomena play an important role in science and technology. In a
wide variety of applications, both advection and diffusion may appear. Regarding
diffusion, different types of long-time decay rates are possible for
different non-equilibrium systems. After summarizing the existing solutions of
the regular diffusion equation, we present less well known solutions derived
from three different trial functions; as a key point, we present a family of
solutions for the case of an infinite horizon. In this way we take a step
toward understanding the different long-time decays of different diffusive
systems.
|
The Theory of Functional Connections (TFC) is a general methodology for
functional interpolation that can embed a set of user-specified linear
constraints. The functionals derived from this method, called \emph{constrained
expressions}, analytically satisfy the imposed constraints and can be leveraged
to transform constrained optimization problems to unconstrained ones. By
simplifying the optimization problem, this technique has been shown to produce
a numerical scheme that is faster, more accurate, and robust to poor
initialization. The content of this dissertation details the complete
development of the Theory of Functional Connections. First, the seminal paper
on the Theory of Functional Connections is discussed and motivates the
discovery of a more general formulation of the constrained expressions.
Leveraging this formulation, a rigorous structure of the constrained expression
is produced with associated mathematical definitions, claims, and proofs.
Furthermore, the second part of this dissertation explains how this technique
can be used to solve ordinary differential equations, providing a wide variety
of examples compared with the state of the art. The final part of this work
focuses on utilizing the techniques and algorithms produced in the prior
sections to explore the feasibility of using the Theory of Functional
Connections to solve real-time optimal control problems, namely optimal landing
problems.
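As a simple illustrative example (not taken from the dissertation), a constrained expression embedding the two point constraints $y(x_0)=y_0$ and $y(x_f)=y_f$ for an arbitrary free function $g(x)$ is

\[
y\bigl(x, g(x)\bigr) = g(x)
  + \frac{x_f - x}{x_f - x_0}\,\bigl(y_0 - g(x_0)\bigr)
  + \frac{x - x_0}{x_f - x_0}\,\bigl(y_f - g(x_f)\bigr),
\]

which returns $y_0$ at $x_0$ and $y_f$ at $x_f$ regardless of the choice of $g$, so the constraints are satisfied analytically while $g$ remains free for optimization.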
|
The evolution of quadrupole and octupole collectivity and their coupling is
investigated in a series of even-even isotopes of the actinides Ra, Th, U, Pu,
Cm, and Cf with neutron number in the interval $130\leqslant N\leqslant 150$.
The Hartree-Fock-Bogoliubov approximation, based on the parametrization D1M of
the Gogny energy density functional, is employed to generate potential energy
surfaces depending upon the axially-symmetric quadrupole and octupole shape
degrees of freedom. The mean-field energy surface is then mapped onto the
expectation value of the $sdf$ interacting-boson-model Hamiltonian in the boson
condensate state so as to determine the strength parameters of the boson
Hamiltonian. Spectroscopic properties related to the octupole degree of freedom
are produced by diagonalizing the mapped Hamiltonian. Calculated low-energy
negative-parity spectra, $B(E3;3^{-}_{1}\to 0^{+}_{1})$ reduced transition
rates, and effective octupole deformation suggest that the transition from
nearly spherical to stable octupole-deformed, and to octupole vibrational
states occurs systematically in the actinide region.
|
We consider a conjecture that identifies two types of base point free
divisors on $\bar{M}_{0,n}$. The first arises from Gromov-Witten theory of a
Grassmannian. The second comes from first Chern classes of vector bundles
associated to simple Lie algebras in type A. Here we reduce this conjecture on
$\bar{M}_{0,n}$ to the same statement for $n=4$. A reinterpretation leads to a
proof of the conjecture on $\bar{M}_{0,n}$ for a large class, and we give
sufficient conditions for the non-vanishing of these divisors.
|
Many modern software-intensive systems employ artificial intelligence /
machine-learning (AI/ML) components and are, thus, inherently data-centric. The
behaviour of such systems depends on typically large amounts of data processed
at run-time, rendering such non-deterministic systems complex. This
growth in complexity affects our understanding of needs and practices in
Requirements Engineering (RE). There is, however, still little guidance on how
to handle requirements for such systems effectively: What are, for example,
typical quality requirements classes? What modelling concepts do we rely on or
which levels of abstraction do we need to consider? In fact, how to integrate
such concepts into more traditional RE approaches still needs profound
investigation. In this research preview paper, we report on ongoing efforts to
establish an artefact-based RE approach for the development of data-centric
systems (DCSs). To this end, we sketch a DCS development process with the newly
proposed requirements categories and data-centric artefacts and briefly report
on an ongoing investigation of current RE challenges in industry developing
data-centric systems.
|
Inspired by the fact that human eyes continue to develop tracking ability in
early and middle childhood, we propose to use tracking as a proxy task for a
computer vision system to learn the visual representations. Modelled on the
Catch game played by children, we design a Catch-the-Patch (CtP) game for a
3D-CNN model to learn visual representations that would help with video-related
tasks. In the proposed pretraining framework, we cut an image patch from a
given video and let it scale and move according to a pre-set trajectory. The
proxy task is to estimate the position and size of the image patch in a
sequence of video frames, given only the target bounding box in the first
frame. We discover that using multiple image patches simultaneously brings
clear benefits. We further increase the difficulty of the game by randomly
making patches invisible. Extensive experiments on mainstream benchmarks
demonstrate the superior performance of CtP against other video pretraining
methods. In addition, CtP-pretrained features are less sensitive to domain gaps
than those trained by a supervised action recognition task. When both trained
on Kinetics-400, we are pleasantly surprised to find that the CtP-pretrained
representation achieves much higher action classification accuracy than its
fully supervised counterpart on the Something-Something dataset. Code is available
online: github.com/microsoft/CtP.
|
The method recently introduced in arXiv:2011.10115 realizes a deep neural
network with just a single nonlinear element and delayed feedback. It is
applicable for the description of physically implemented neural networks. In
this work, we present an infinite-dimensional generalization, which allows for
a more rigorous mathematical analysis and a higher flexibility in choosing the
weight functions. Precisely speaking, the weights are described by Lebesgue
integrable functions instead of step functions. We also provide a functional
back-propagation algorithm, which enables gradient descent training of the
weights. In addition, with a slight modification, our concept realizes
recurrent neural networks.
|
Order dispatch is one of the central problems to ride-sharing platforms.
Recently, value-based reinforcement learning algorithms have shown promising
performance on this problem. However, in real-world applications, the
non-stationarity of the demand-supply system poses challenges to re-utilizing
data generated in different time periods to learn the value function. In this
work, motivated by the fact that the relative relationship between the values
of some states is largely stable across various environments, we propose a
pattern transfer learning framework for value-based reinforcement learning in
the order dispatch problem. Our method efficiently captures the value patterns
by incorporating a concordance penalty. The superior performance of the
proposed method is supported by experiments.
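A hedged sketch of one possible concordance penalty (the paper's exact form is not given in the abstract): a pairwise hinge loss that discourages the new value estimates from reversing the state ordering implied by a reference value function learned in another time period.

```python
import torch

def concordance_penalty(v_new, v_ref, margin=0.0):
    """Penalize pairs of states whose value ordering under v_new disagrees
    with the ordering under the reference values v_ref (illustrative form)."""
    diff_new = v_new.unsqueeze(0) - v_new.unsqueeze(1)   # pairwise differences
    diff_ref = v_ref.unsqueeze(0) - v_ref.unsqueeze(1)
    sign_ref = torch.sign(diff_ref)                      # reference ordering
    return torch.relu(margin - sign_ref * diff_new).mean()

# Usage sketch: add the penalty to the usual value-fitting loss.
v_pred = torch.randn(64, requires_grad=True)   # values of 64 sampled states
v_src = torch.randn(64)                        # values learned in a source period
targets = torch.randn(64)                      # placeholder Bellman targets
loss = ((v_pred - targets) ** 2).mean() + 0.1 * concordance_penalty(v_pred, v_src)
loss.backward()
```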
|
The missing data problem pervasively exists in statistical applications. Even
data as simple as the count data in mortality projections may not be available
for certain age-and-year groups due to budget limitations or difficulties
in tracing research units, resulting in subsequent estimation and prediction
inaccuracies. To circumvent this data-driven challenge, we extend the Poisson
log-normal Lee-Carter model to accommodate a more flexible time structure, and
develop a new sampling algorithm that improves the MCMC convergence when
dealing with incomplete mortality data. Via the overdispersion term and Gibbs
sampler, the extended model can be re-written as a dynamic linear model so
that both Kalman and sequential Kalman filters can be incorporated into the
sampling scheme. Additionally, our meticulous prior settings avoid the
re-scaling step in each MCMC iteration and allow model selection to be
conducted simultaneously with estimation and prediction. The proposed method is
applied to the mortality data of Chinese males during the period 1995-2016 to
yield mortality rate forecasts for 2017-2039. The results are comparable to
those based on the imputed data set, suggesting that our approach could handle
incomplete data well.
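A minimal sketch of the Kalman filtering step on a univariate dynamic linear model with missing observations (illustrative only; the actual sampler operates on the conditionally Gaussian representation obtained via the overdispersion term and Gibbs steps, and its system matrices differ):

```python
import numpy as np

def kalman_filter(y, F=1.0, G=1.0, V=0.1, W=0.05, m0=0.0, C0=1.0):
    """Univariate Kalman filter for y_t = F*theta_t + v_t,
    theta_t = G*theta_{t-1} + w_t, with obs. variance V and state variance W.
    Missing observations (np.nan) are handled by skipping the update step."""
    m, C = m0, C0
    means, variances = [], []
    for obs in y:
        a, R = G * m, G * C * G + W            # predict
        if np.isnan(obs):                      # missing datum: no update
            m, C = a, R
        else:
            f, Q = F * a, F * R * F + V        # one-step forecast
            K = R * F / Q                      # Kalman gain
            m, C = a + K * (obs - f), R - K * F * R
        means.append(m)
        variances.append(C)
    return np.array(means), np.array(variances)

# Usage sketch with a gap in the series.
y = np.array([1.0, 1.2, np.nan, 1.5, 1.4])
filtered_mean, filtered_var = kalman_filter(y)
```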
|
Vanadium tetracyanoethylene (V[TCNE]$_{x}$, $x\approx 2$) is an organic-based
ferrimagnet with a high magnetic ordering temperature $\mathrm{T_C>600 ~K}$,
low magnetic damping, and growth compatibility with a wide variety of
substrates. However, similar to other organic-based materials, it is sensitive
to air. Although encapsulation of V[TCNE]$_{x}$ with glass and epoxy extends
the film lifetime from an hour to a few weeks, what is limiting its lifetime
remains poorly understood. Here we characterize encapsulated V[TCNE]$_{x}$
films using confocal microscopy, Raman spectroscopy, ferromagnetic resonance
and SQUID magnetometry. We identify the relevant features in the Raman spectra
in agreement with \textit{ab initio} theory, reproducing $\mathrm{C=C,C\equiv
N}$ vibrational modes. We correlate changes in the effective dynamic
magnetization with changes in Raman intensity and in photoluminescence. Based
on changes in Raman spectra, we hypothesize possible structural changes and
aging mechanisms in V[TCNE]$_x$. These findings enable a local optical probe of
V[TCNE]$_{x}$ film quality, which is invaluable in experiments where assessing
film quality with local magnetic characterization is not possible.
|
We provide new necessary and sufficient conditions for the convergence of
positive series, developing Bertrand-De Morgan and Cauchy type tests given in [M.
Martin, Bull. Amer. Math. Soc. 47(1941), 452-457] and [L. Bourchtein et al,
Int. J. Math. Anal. 6(2012), 1847-1869]. The obtained result enables us to
extend the known conditions for recurrence and transience of birth-and-death
processes given in [V. M. Abramov, Amer. Math. Monthly 127(2020) 444-448].
|
We propose an anti-parity-time (anti-PT) symmetric non-Hermitian
Su-Schrieffer-Heeger (SSH) model, where large non-Hermiticity
constructively creates nontrivial topology and greatly expands the topological
phase. In the anti-PT-symmetric SSH model, the gain and loss are alternately
arranged in pairs under the inversion symmetry. The appearance of a degenerate
point at the center of the Brillouin zone determines the topological phase
transition, while the exceptional points do not affect the band topology. The large
non-Hermiticity leads to an unbalanced wavefunction distribution in the broken
anti-PT-symmetric phase and induces the nontrivial topology. Our findings can
be verified by introducing dissipation into every other two sites of the
standard SSH model, even in its trivial phase, where the nontrivial topology is
solely induced by the dissipation.
|
The trustworthiness of Robots and Autonomous Systems (RAS) has gained a
prominent position on many research agendas towards fully autonomous systems.
This research systematically explores, for the first time, the key facets of
human-centered AI (HAI) for trustworthy RAS. In this article, five key
properties of trustworthy RAS are initially identified. RAS must be (i)
safe in any uncertain and dynamic surrounding environment; (ii) secure, thus
protecting itself from any cyber-threats; (iii) healthy with fault tolerance;
(iv) trusted and easy to use to allow effective human-machine interaction
(HMI), and (v) compliant with the law and ethical expectations. Then, the
challenges in implementing trustworthy autonomous systems are analytically
reviewed with respect to the five key properties, and the roles of AI
technologies are explored to ensure the trustworthiness of RAS with respect
to safety, security, health and HMI, while reflecting the requirements of
ethics in the design of RAS. While applications of RAS have mainly focused on
performance and productivity, the risks posed by advanced AI in RAS have not
received sufficient scientific attention. Hence, a new acceptance model of RAS
is provided, as a framework for requirements to human-centered AI and for
implementing trustworthy RAS by design. This approach promotes human-level
intelligence to augment human capacity, while focusing on contributions to
humanity.
|
The pandemic of novel Coronavirus Disease 2019 (COVID-19) is widespread all
over the world, causing serious health problems as well as a serious impact on the
global economy. Reliable and fast testing for COVID-19 has been a challenge
for researchers and healthcare practitioners. In this work we present a novel
machine learning (ML) integrated X-ray device in Healthcare Cyber-Physical
System (H-CPS) or smart healthcare framework (called CoviLearn) to allow
healthcare practitioners to perform automatic initial screening of COVID-19
patients. We propose convolutional neural network (CNN) models of X-ray images
integrated into an X-ray device for automatic COVID-19 detection. The proposed
CoviLearn device will be useful in detecting if a person is COVID-19 positive
or negative by considering the chest X-ray image of the individual. CoviLearn will
be a useful tool for doctors to detect potential COVID-19 infections instantaneously
without taking more intrusive healthcare data samples, such as saliva and
blood. Since COVID-19 attacks the endothelial tissues that support the respiratory
tract, X-ray images can be used to analyze the health of a patient's lungs. As all
healthcare centers have X-ray machines, it could be possible to use the proposed
CoviLearn system to test for COVID-19 without special test kits. Our
proposed automated analysis system, CoviLearn, which has 99% accuracy, will be
able to save valuable time of medical professionals, since X-ray machines otherwise
have the drawback of requiring a radiology expert.
|
We introduce the notion of a nonlinear splitting on a fibre bundle as a
generalization of an Ehresmann connection. We present its basic properties and
we pay attention to the special cases of affine, homogeneous and principal
nonlinear splittings. We explain where nonlinear splittings appear in the
context of Lagrangian systems and Finsler geometry and we show their relation
to Routh symmetry reduction, submersive second-order differential equations and
unreduction. We define a curvature map for a nonlinear splitting, and we
indicate where this concept appears in the context of nonholonomic systems with
affine constraints and Lagrangian systems of magnetic type.
|
Although board games and video games have been studied for decades in
artificial intelligence research, challenging word games remain relatively
unexplored. Word games are not as constrained as games like chess or poker.
Instead, word game strategy is defined by the players' understanding of the way
words relate to each other. The word game Codenames provides a unique
opportunity to investigate common sense understanding of relationships between
words, an important open challenge. We propose an algorithm that can generate
Codenames clues from the language graph BabelNet or from any of several
embedding methods - word2vec, GloVe, fastText or BERT. We introduce a new
scoring function that measures the quality of clues, and we propose a weighting
term called DETECT that incorporates dictionary-based word representations and
document frequency to improve clue selection. We develop BabelNet-Word
Selection Framework (BabelNet-WSF) to improve BabelNet clue quality and
overcome the computational barriers that previously prevented leveraging
language graphs for Codenames. Extensive experiments with human evaluators
demonstrate that our proposed innovations yield state-of-the-art performance,
with up to 102.8% improvement in precision@2 in some cases. Overall, this work
advances the formal study of word games and approaches for common sense
language understanding.
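A minimal sketch of embedding-based clue scoring in this spirit (illustrative only; the scoring function, weights, and placeholder embeddings are assumptions, and the DETECT weighting term is not reproduced):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def score_clue(clue_vec, team_vecs, opponent_vecs, assassin_vec, penalty=2.0):
    """Reward similarity to the team's words; penalize similarity to opponent
    words and, more heavily, to the assassin word."""
    reward = sum(cosine(clue_vec, v) for v in team_vecs)
    risk = sum(cosine(clue_vec, v) for v in opponent_vecs)
    return reward - risk - penalty * cosine(clue_vec, assassin_vec)

# Usage sketch with placeholder 50-d embeddings (e.g., loaded from GloVe).
rng = np.random.default_rng(0)
vocab = ["wave", "beach", "bank", "note", "piano", "water", "music"]
embeddings = {w: rng.standard_normal(50) for w in vocab}
team, opp, assassin = ["wave", "beach"], ["bank", "note"], "piano"
candidates = ["water", "music"]   # candidate clues would normally span a large vocabulary
best = max(candidates, key=lambda c: score_clue(
    embeddings[c], [embeddings[w] for w in team],
    [embeddings[w] for w in opp], embeddings[assassin]))
```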
|
A significant number of studies have recently been conducted on
intelligent systems for traffic management, especially OCR-based license plate
recognition, which is considered a main step for any automatic traffic
management system. Good quality data sets are increasingly needed and produced
by the research community to improve the performance of those algorithms.
Furthermore, a special need for data is noted for countries having special
characters on their licence plates, like Morocco, where the Arabic alphabet is
used. In this work, we present a labeled open data set of circulation plates
taken in Morocco, for different types of vehicles, namely cars, trucks and
motorcycles. This data was collected manually and consists of 705 unique and
different images. Furthermore, this data was labeled for plate segmentation and
for matriculation number OCR. Also, as we show in this paper, the data can be
enriched using data augmentation techniques to create training sets with a few
thousand images for different machine learning and AI applications. We
present and compare a set of models built on this data. Also, we publish this
data as open access data to encourage innovation and applications in the
field of OCR and image processing for traffic control and other applications
for transportation and heterogeneous vehicle management.
|
High hardware cost and high power consumption of massive multiple-input
multiple-output (MIMO) are still two challenges for future wireless
communications including beyond 5G. Adopting the low-resolution
analog-to-digital converter (ADC) is viewed as a promising solution.
Additionally, the direction of arrival (DOA) estimation is an indispensable
technology for beam alignment and tracking in massive MIMO systems. Thus, in
this paper, the performance of DOA estimation for massive MIMO receive array
with a mixed-ADC structure is first investigated, where one part of the radio
frequency (RF) chains is connected with high-resolution ADCs and the remaining
ones are connected with low-resolution ADCs. Moreover, the Cramer-Rao lower
bound (CRLB) for this architecture is derived based on the additive
quantization noise model approximation for the effect of low-resolution ADCs.
Then, the root-MUSIC method is designed for such a receive structure.
Eventually, a performance loss factor and the associated energy efficiency
factor are defined and analyzed in detail. Simulation results show that a
mixed-ADC architecture can strike a good balance among RMSE performance,
circuit cost and energy efficiency. More importantly, just 1-4 bits of
low-resolution ADCs can achieve a satisfactory performance for DOA measurement.
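A minimal root-MUSIC sketch for a uniform linear array (full-resolution snapshots; the mixed-ADC quantization model and CRLB analysis are not included):

```python
import numpy as np

def root_music(snapshots, n_sources, d_over_lambda=0.5):
    """Illustrative root-MUSIC DOA estimator for a uniform linear array.
    snapshots: complex array of shape (n_antennas, n_snapshots)."""
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : m - n_sources]                          # noise subspace
    C = En @ En.conj().T
    # polynomial coefficients: sums along the diagonals of C, highest degree first
    coeffs = np.array([np.trace(C, offset=k) for k in range(m - 1, -m, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]                        # keep roots inside the unit circle
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:n_sources]
    psi = np.angle(roots)                                     # signal-root phases
    return np.degrees(np.arcsin(psi / (2 * np.pi * d_over_lambda)))

# Usage sketch: two sources impinging on an 8-element half-wavelength ULA
# would be passed in as the complex baseband snapshot matrix.
```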
|
We use Direct Numerical Simulations (DNS) of the forced Navier-Stokes
equation for a 3-dimensional incompressible fluid in order to test recent
theoretical predictions. We study the two- and three-point spatio-temporal
correlation functions of the velocity field in stationary, isotropic and
homogeneous turbulence. We compare our numerical results to the predictions
from the Functional Renormalization Group (FRG) which were obtained in the
large wavenumber limit. DNS are performed at various Reynolds numbers and the
correlations are analyzed in different time regimes focusing on the large
wavenumbers. At small time delays, we find that the two-point correlation
function decays as a Gaussian in the variable $kt$ where $k$ is the wavenumber
and $t$ the time delay. The three-point correlation function, determined from
the time-dependent advection-velocity correlations, also follows a Gaussian
decay at small $t$ with the same prefactor as the one of the two-point
function. These behaviors are in precise agreement with the FRG results, and
can be simply understood as a consequence of sweeping. At large time delays,
the FRG predicts a crossover to an exponential in $k^2 t$, which we were not
able to resolve in our simulations. However, we analyze the two-point
spatio-temporal correlations of the modulus of the velocity, and show that they
exhibit this crossover from a Gaussian to an exponential decay, although we
lack a theoretical understanding in this case. This intriguing phenomenon
calls for further theoretical investigation.
|
We derive a precise asymptotic formula for the density of the small singular
values of the real Ginibre matrix ensemble shifted by a complex parameter $z$
as the dimension tends to infinity. For $z$ away from the real axis the formula
coincides with that for the complex Ginibre ensemble we derived earlier in
[arXiv:1908.01653]. On the level of the one-point function of the low lying
singular values we thus confirm the transition from real to complex Ginibre
ensembles as the shift parameter $z$ becomes genuinely complex; the analogous
phenomenon has been well known for eigenvalues. We use the superbosonization
formula [arXiv:0707.2929] in a regime where the main contribution comes from a
three dimensional saddle manifold.
|
We develop a variational Bayesian (VB) approach for estimating large-scale
dynamic network models in the network autoregression framework. The VB approach
allows for the automatic identification of the dynamic structure of such a
model and obtains a direct approximation of the posterior density. Compared to
Markov Chain Monte Carlo (MCMC) based sampling approaches, the VB approach
achieves enhanced computational efficiency without sacrificing estimation
accuracy. In the simulation study conducted here, the proposed VB approach
detects various types of proper active structures for dynamic network models.
Compared to the alternative approach, the proposed method achieves similar or
better accuracy, and its computational time is halved. In a real data analysis
scenario of day-ahead natural gas flow prediction in the German gas
transmission network with 51 nodes between October 2013 and September 2015, the
VB approach delivers promising forecasting accuracy along with clearly detected
structures in terms of dynamic dependence.
|
This paper studies the design of a finite-dimensional output feedback
controller for the stabilization of a reaction-diffusion equation in the
presence of a sector nonlinearity in the boundary input. Due to the input
nonlinearity, classical approaches relying on the transfer of the control from
the boundary into the domain with explicit occurrence of the time-derivative of
the control cannot be applied. In this context, we first demonstrate using
Lyapunov direct method how a finite-dimensional observer-based controller can
be designed, without using the time derivative of the boundary input as an
auxiliary command, in order to achieve the boundary stabilization of general
1-D reaction-diffusion equations with Robin boundary conditions and a
measurement selected as a Dirichlet trace. We extend this approach to the case
of a control applied at the boundary through a sector nonlinearity. We show
from the derived stability conditions the existence of a size of the sector (in
which the nonlinearity is confined) so that the stability of the closed-loop
system is achieved when selecting the dimension of the observer large enough.
|
This is a short review of the Kadomtsev-Petviashvili hierarchies of types B
and C. The main objects are the $L$-operator, the wave operator, the auxiliary
linear problems for the wave function, the bilinear identity for the wave
function and the tau-function. All of them are discussed in the paper. The
connections with the usual Kadomtsev-Petviashvili hierarchy (of type A) are
clarified. Examples of soliton solutions and the dispersionless limit of the
hierarchies are also considered.
|
In this article, we introduce a framework for entanglement characterization
by time-resolved single-photon counting with measurement operators defined in
the time domain. For a quantum system with unitary dynamics, we generate
time-continuous measurements by shifting from the Schr\"odinger picture to the
Heisenberg representation. In particular, we discuss this approach in reference
to photonic tomography. To make the measurement scheme realistic, we impose
timing uncertainty on photon counts along with the Poisson noise. Then, the
framework is tested numerically on quantum tomography of qubits. Next, we
investigate the accuracy of the model for polarization-entangled photon pairs.
Entanglement detection and precision of state reconstruction are quantified by
figures of merit and presented on graphs versus the amount of time uncertainty.
|
Magnetic skyrmions are stable topological spin textures with significant
potential for spintronics applications. Merons, as half-skyrmions, have been
discovered by recent observations, which has also raised an upsurge of
research. The main purpose of this work is to further study the lattice forms
of merons and skyrmions. We study a classical spin model with
Dzyaloshinskii-Moriya interaction, easy-axis, and in-plane magnetic
anisotropies on the honeycomb lattice via Monte Carlo simulations. This model
could also describe the low-energy behaviors of a two-component bosonic model
with a synthetic spin-orbit coupling in the deep Mott insulating region or
two-dimensional materials with strong spin-orbit coupling. The results
demonstrate the emergence of spiral phases of different sizes and of skyrmion and
vortex superlattices in the absence of a magnetic field, as well as the emergence
of field-induced meron and skyrmion superlattices. In particular, we give the
simulated evolution of the spin textures driven by the magnetic field, which
could further reveal the effect of the magnetic field in inducing meron and
skyrmion superlattices.
|
We construct Fock and MacMahon modules for the quantum toroidal superalgebra
$\mathcal{E}_\mathbf{s}$ associated with the Lie superalgebra
$\mathfrak{gl}_{m|n}$ and parity $\mathbf{s}$. The bases of the Fock and
MacMahon modules are labeled by super-analogs of partitions and plane
partitions with various boundary conditions, while the action of generators of
$\mathcal{E}_\mathbf{s}$ is given by Pieri type formulas. We study the
corresponding characters.
|
We consider the exact rogue periodic wave (rogue wave on the periodic
background) and periodic wave solutions for the Chen-Lee-Liu equation via the
odd-order Darboux transformation. Then, the multi-layer physics-informed
neural networks (PINNs) deep learning method is applied to study the
data-driven rogue periodic wave, breather wave, soliton wave and periodic wave
solutions of the well-known Chen-Lee-Liu equation. In particular, the data-driven
rogue periodic wave is learned for the first time to solve the partial
differential equation. In addition, using image simulation, the relevant
dynamical behaviors and error analysis for these solutions are presented. The
numerical results indicate that the rogue periodic wave, breather wave, soliton
wave and periodic wave solutions of the Chen-Lee-Liu equation can be generated
well by the PINN deep learning method.
|
Predicting vulnerable road user behavior is an essential prerequisite for
deploying Automated Driving Systems (ADS) in the real-world. Pedestrian
crossing intention should be recognized in real-time, especially for urban
driving. Recent works have shown the potential of using vision-based deep
neural network models for this task. However, these models are not robust and
certain issues still need to be resolved. First, the global spatio-temporal
context that accounts for the interaction between the target pedestrian and the
scene has not been properly utilized. Second, the optimum strategy for fusing
different sensor data has not been thoroughly investigated. This work addresses
the above limitations by introducing a novel neural network architecture to
fuse inherently different spatio-temporal features for pedestrian crossing
intention prediction. We fuse different phenomena such as sequences of RGB
imagery, semantic segmentation masks, and ego-vehicle speed in an optimum way
using attention mechanisms and a stack of recurrent neural networks. The
optimum architecture was obtained through exhaustive ablation and comparison
studies. Extensive comparative experiments on the JAAD pedestrian action
prediction benchmark demonstrate the effectiveness of the proposed method,
where state-of-the-art performance was achieved. Our code is open-source and
publicly available.
|
The likelihood ratio for a continuous gravitational wave signal is viewed
geometrically as a function of the orientation of two vectors; one representing
the optimal signal-to-noise ratio, the other representing the maximised
likelihood ratio or $\mathcal{F}$-statistic. Analytic marginalisation over the
angle between the vectors yields a marginalised likelihood ratio which is a
function of the $\mathcal{F}$-statistic. Further analytic marginalisation over
the optimal signal-to-noise ratio is explored using different choices of prior.
Monte-Carlo simulations show that the marginalised likelihood ratios have
identical detection power to the $\mathcal{F}$-statistic. This approach
demonstrates a route to viewing the $\mathcal{F}$-statistic in a Bayesian
context, while retaining the advantages of its efficient computation.
|
Photonic metamaterials with properties unattainable in base materials are
already beginning to revolutionize optical component design. However, their
exceptional characteristics are often static, as artificially engineered into
the material during the fabrication process. This limits their application for
in-operando adjustable optical devices and active optics in general. Here, for
a hybrid material consisting of a liquid crystal-infused nanoporous solid, we
demonstrate active and dynamic control of its meta-optics by applying
alternating electric fields parallel to the long axes of its cylindrical pores.
First-harmonic Pockels and second-harmonic Kerr birefringence responses,
strongly depending on the excitation frequency and temperature, are observed
in a frequency range from 50 Hz to 50 kHz. This peculiar behavior is
quantitatively traced by a Landau-De Gennes free energy analysis to an
order-disorder orientational transition of the rod-like mesogens and intimately
related changes in the molecular mobilities and polar anchoring at the solid
walls on the single-pore, meta-atomic scale. Thus, our study evidences that
liquid crystal-infused nanopores exhibit integrated multi-physical couplings
and reversible phase changes that make them particularly promising for the
design of photonic metamaterials with thermo-electrically tunable birefringence
in the emerging field of spacetime metamaterials aiming at a full
spatio-temporal control of light.
|
Graph Convolutional Networks (GCNs) are powerful models for node
representation learning tasks. However, the node representation in existing GCN
models is usually generated by performing recursive neighborhood aggregation
across multiple graph convolutional layers with certain sampling methods, which
may lead to redundant feature mixing, needless information loss, and extensive
computations. Therefore, in this paper, we propose a novel architecture named
Non-Recursive Graph Convolutional Network (NRGCN) to improve both the training
efficiency and the learning performance of GCNs in the context of node
classification. Specifically, NRGCN proposes to represent different hops of
neighbors for each node based on inner-layer aggregation and layer-independent
sampling. In this way, each node can be directly represented by concatenating
the information extracted independently from each hop of its neighbors thereby
avoiding the recursive neighborhood expansion across layers. Moreover, the
layer-independent sampling and aggregation can be precomputed before the model
training, thus the training process can be accelerated considerably. Extensive
experiments on benchmark datasets verify that our NRGCN outperforms the
state-of-the-art GCN models, in terms of the node classification performance
and reliability.
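A hedged sketch of the non-recursive idea (precompute hop-wise aggregations once and concatenate; the paper's inner-layer aggregation and layer-independent sampling are not reproduced, and the normalization choice is an assumption):

```python
import numpy as np
import scipy.sparse as sp

def precompute_hop_features(adj, features, n_hops=3):
    """Aggregate each hop of the row-normalized neighborhood independently and
    concatenate, so no recursive expansion is needed during training."""
    adj = sp.csr_matrix(adj, dtype=np.float64)
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv = sp.diags(1.0 / np.maximum(deg, 1.0))
    a_norm = d_inv @ adj                       # row-normalized adjacency
    hops, h = [features], features
    for _ in range(n_hops):
        h = a_norm @ h                         # one more hop of aggregation
        hops.append(h)
    return np.concatenate(hops, axis=1)        # feed this to a plain MLP classifier

# Usage sketch on a toy 4-node graph with 5-dimensional features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
X = np.random.rand(4, 5)
X_multi_hop = precompute_hop_features(A, X, n_hops=2)   # shape: (4, 15)
```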
|
Learning a sequence of tasks without access to i.i.d. observations is a
widely studied form of continual learning (CL) that remains challenging. In
principle, Bayesian learning directly applies to this setting, since recursive
and one-off Bayesian updates yield the same result. In practice, however,
recursive updating often leads to poor trade-off solutions across tasks because
approximate inference is necessary for most models of interest. Here, we
describe an alternative Bayesian approach where task-conditioned parameter
distributions are continually inferred from data. We offer a practical deep
learning implementation of our framework based on probabilistic
task-conditioned hypernetworks, an approach we term posterior meta-replay.
Experiments on standard benchmarks show that our probabilistic hypernetworks
compress sequences of posterior parameter distributions with virtually no
forgetting. We obtain considerable performance gains compared to existing
Bayesian CL methods, and identify task inference as our major limiting factor.
This limitation has several causes that are independent of the considered
sequential setting, opening up new avenues for progress in CL.
|
We introduce a geometric approach of integral curves for functional
inequalities involving directional derivatives in the general context of
differentiable manifolds that are equipped with a volume form. We focus on
Hardy-type inequalities and the explicit optimal Hardy potentials that are
induced by this method. We then apply the method to retrieve some known
inequalities and establish some new ones.
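For orientation (a classical statement, not taken from the paper), the prototypical Hardy inequality on $\mathbb{R}^n$ with its optimal constant reads

\[
\int_{\mathbb{R}^n} |\nabla u|^2 \, dx \;\ge\; \left(\frac{n-2}{2}\right)^2 \int_{\mathbb{R}^n} \frac{u^2}{|x|^2} \, dx,
\qquad u \in C_c^\infty(\mathbb{R}^n),\ n \ge 3,
\]

where $\bigl(\tfrac{n-2}{2}\bigr)^2$ plays the role of the optimal Hardy potential constant.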
|
In this paper, we devise a distributional framework for actor-critic methods as a
solution to distributional instability, action type restriction, and conflation
between samples and statistics. We propose a new method that minimizes the
Cram\'er distance with the multi-step Bellman target distribution generated
from a novel Sample-Replacement algorithm denoted SR($\lambda$), which learns
the correct value distribution under multiple Bellman operations.
Parameterizing a value distribution with Gaussian Mixture Model further
improves the efficiency and the performance of the method, which we name GMAC.
We empirically show that GMAC captures the correct representation of value
distributions and improves the performance of a conventional actor-critic
method with low computational cost, in both discrete and continuous action
spaces, using the Arcade Learning Environment (ALE) and PyBullet environments.
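A minimal sketch of a Cramér-type distance between two empirical one-dimensional return distributions, computed as the integral of the squared difference of their empirical CDFs (illustrative only; GMAC's multi-step Bellman target and GMM parameterization are not shown):

```python
import numpy as np

def cramer_distance(samples_p, samples_q, n_grid=512):
    """Integral of the squared difference of the empirical CDFs of two
    1-D sample sets, evaluated on a shared uniform grid."""
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    grid = np.linspace(lo, hi, n_grid)
    cdf_p = np.searchsorted(np.sort(samples_p), grid, side="right") / len(samples_p)
    cdf_q = np.searchsorted(np.sort(samples_q), grid, side="right") / len(samples_q)
    sq = (cdf_p - cdf_q) ** 2
    return float(np.sum(0.5 * (sq[1:] + sq[:-1]) * np.diff(grid)))  # trapezoid rule

# Usage sketch: distance between a target return distribution and a predicted one.
target = np.random.normal(1.0, 0.5, size=2000)
pred = np.random.normal(0.8, 0.7, size=2000)
d = cramer_distance(target, pred)
```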
|
Advances in integrated photonics open exciting opportunities for
batch-fabricated optical sensors using high quality factor nanophotonic
cavities to achieve ultra-high sensitivities and bandwidths. The sensitivity
improves with higher optical power; however, localized absorption and heating
within a micrometer-scale mode volume prominently distorts the cavity
resonances and strongly couples the sensor response to thermal dynamics,
limiting the sensitivity and hindering the measurement of broadband
time-dependent signals. Here, we derive a frequency-dependent photonic sensor
transfer function that accounts for thermo-optical dynamics and quantitatively
describes the measured broadband optomechanical signal from an integrated
photonic atomic-force-microscopy nanomechanical probe. Using this transfer
function, the probe can be operated in the high optical power, strongly
thermo-optically nonlinear regime, reaching a sensitivity of $\approx$ 0.4
fm/Hz$^{1/2}$, an improvement of $\approx 10\times$ relative to the best
performance in the linear regime. Counterintuitively, we discover that higher
transduction gain and sensitivity are obtained with lower quality factor
optical modes for low signal frequencies. Not limited to optomechanical
transducers, the derived transfer function is generally valid for describing
small-signal dynamic response of a broad range of technologically important
photonic sensors subject to the thermo-optical effect.
|
Understanding and improving mobile broadband deployment is critical to
bridging the digital divide and targeting future investments. Yet accurately
mapping mobile coverage is challenging. In 2019, the Federal Communications
Commission (FCC) released a report on the progress of mobile broadband
deployment in the United States. This report received a significant amount of
criticism with claims that the cellular coverage, mainly available through
Long-Term Evolution (LTE), was over-reported in some areas, especially those
that are rural and/or tribal [12]. We evaluate the validity of this criticism
using a quantitative analysis of both the dataset from which the FCC based its
report and a crowdsourced LTE coverage dataset. Our analysis is focused on the
state of New Mexico, a region characterized by a diverse mix of
demographics and geography and by poor broadband access. We then performed a
controlled measurement campaign in northern New Mexico during May 2019. Our
findings reveal significant disagreement between the crowdsourced dataset and
the FCC dataset regarding the presence of LTE coverage in rural and tribal
census blocks, with the FCC dataset reporting higher coverage than the
crowdsourced dataset. Interestingly, both the FCC and the crowdsourced data
report higher coverage compared to our on-the-ground measurements. Based on
these findings, we discuss our recommendations for improved LTE coverage
measurements, whose importance has only increased in the COVID-19 era of
performing work and school from home, especially in rural and tribal areas.
|
We propose a new approach to probe neutral-current non-standard neutrino
interaction parameter $\varepsilon_{\mu\tau}$ using the oscillation dip and
oscillation valley. Using the simulated ratio of upward-going and
downward-going reconstructed muon events at the upcoming ICAL detector, we
demonstrate that the presence of a non-zero $\varepsilon_{\mu\tau}$ would result
in a shift in the dip location as well as a bending of the oscillation
valley. Thanks to the charge identification capability of ICAL, the opposite
shifts in the locations of oscillation dips as well as the contrast in the
curvatures of oscillation valleys for $\mu^-$ and $\mu^+$ are used to constrain
$|\varepsilon_{\mu\tau}|$ at 90% C.L. to about 2% using a 500 kt$\cdot$yr
exposure. Our procedure incorporates statistical fluctuations, uncertainties in
oscillation parameters, and systematic errors.
|
In this letter, we present an intelligent reflecting surface (IRS) selection
strategy for multiple IRSs aided multiuser multiple-input single-output (MISO)
systems. In particular, we pose the IRS selection problem as a stable matching
problem. A two-stage user-IRS assignment algorithm is proposed, where the main
objective is to carry out a stable user-IRS matching such that the sum rate of
the system is improved. The first stage of the proposed algorithm employs the
well-known Gale-Shapley matching algorithm designed for the stable marriage problem.
However, due to interference in multiuser systems, the matching obtained after
the first stage may not be stable. To overcome this issue, one-sided (i.e.,
only IRSs) blocking pairs (BPs) are identified in the second stage of the
proposed algorithm, where the BP is a pair of IRSs which are better off after
exchanging their partners. Thus, the second stage validates the stable matching
in the proposed algorithm. Numerical results show that the proposed assignment
achieves better sum rate performance compared to distance-based and random
matching algorithms.
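For reference, a minimal sketch of the deferred-acceptance (Gale-Shapley) procedure used in the first stage is given below; the preference lists are plain inputs here, whereas in the paper they would be derived from channel conditions, so the data and function names are illustrative assumptions.

```python
def gale_shapley(user_prefs, irs_prefs):
    """Deferred acceptance: users propose, each IRS keeps its best proposer.

    user_prefs[u] lists IRS indices from most to least preferred;
    irs_prefs[i] lists user indices from most to least preferred.
    Assumes equal numbers of users and IRSs (one-to-one matching).
    """
    n = len(user_prefs)
    rank = [{u: r for r, u in enumerate(prefs)} for prefs in irs_prefs]
    next_choice = [0] * n          # next IRS each user will propose to
    match_of_irs = [None] * n      # current user matched to each IRS
    free_users = list(range(n))

    while free_users:
        u = free_users.pop()
        i = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        if match_of_irs[i] is None:
            match_of_irs[i] = u
        elif rank[i][u] < rank[i][match_of_irs[i]]:
            free_users.append(match_of_irs[i])   # displaced user becomes free again
            match_of_irs[i] = u
        else:
            free_users.append(u)                 # proposal rejected, try next IRS
    return {u: i for i, u in enumerate(match_of_irs)}


# Tiny example with 3 users and 3 IRSs.
users = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
irss = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
print(gale_shapley(users, irss))   # {0: 0, 1: 1, 2: 2} -- a stable user -> IRS assignment
```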
|
Let $V$ be a simple vertex operator superalgebra and $G$ a finite
automorphism group of $V$ containing the canonical automorphism $\sigma$ such
that $V^G$ is regular. It is proved that every irreducible $V^G$-module occurs
in an irreducible $g$-twisted $V$-module for some $g\in G$ and the irreducible
$V^G$-modules are classified. Moreover, the quantum dimensions of irreducible
$V^G$-modules are determined, a global dimension formula for $V$ in terms of
twisted modules is obtained and a super quantum Galois theory is established.
In addition, the $S$-matrix of $V^G$ is computed.
|
In imitation learning from observation (IfO), a learning agent seeks to imitate
a demonstrating agent using only observations of the demonstrated behavior
without access to the control signals generated by the demonstrator. Recent
methods based on adversarial imitation learning have led to state-of-the-art
performance on IfO problems, but they typically suffer from high sample
complexity due to a reliance on data-inefficient, model-free reinforcement
learning algorithms. This issue makes them impractical to deploy in real-world
settings, where gathering samples can incur high costs in terms of time,
energy, and risk. In this work, we hypothesize that we can incorporate ideas
from model-based reinforcement learning with adversarial methods for IfO in
order to increase the data efficiency of these methods without sacrificing
performance. Specifically, we consider time-varying linear Gaussian policies,
and propose a method that integrates the linear-quadratic regulator with path
integral policy improvement into an existing adversarial IfO framework. The
result is a more data-efficient IfO algorithm with better performance, which we
show empirically in four simulation domains: using far fewer interactions with
the environment, the proposed method exhibits similar or better performance
than the existing technique.
|
Floating photovoltaics (FPV) is an emerging technology that is gaining
attention worldwide. However, little information is available on its possible
impacts on aquatic ecosystems or on the durability of its components.
Therefore, this work intends to contribute to this field by analysing possible
obstacles that can compromise the performance of this technology, helping to
increase its reliability, and assessing possible impacts. The problem under
study is the potential submersion of photovoltaic cables, which can lead to a
degradation of their electrical insulation capabilities and, consequently, to
higher energy production losses and water contamination. In the present study,
the submersion of photovoltaic
cables (with two different insulation materials) in freshwater and artificial
seawater was tested, in order to replicate real life conditions, when FPV
systems are located in reservoirs or in the marine environment. Electrical
insulation tests were carried out weekly to assess possible cable degradation,
the physical-chemical characteristics of the water were also periodically
monitored, complemented by analysis to detect traces of copper and
microplastics in the water. The results showed that the submersion of
photovoltaic cables with a rubber sheath in saltwater can lead to accelerated
cable degradation, with a reduction of their electrical insulation and,
consequently, copper release into the aquatic environment.
|
The COVID-19 pandemic has impacted billions of people around the world. To
capture some of these impacts in the United States, we are conducting a
nationwide longitudinal survey collecting information about activity and
travel-related behaviors and attitudes before, during, and after the COVID-19
pandemic. The survey questions cover a wide range of topics including
commuting, daily travel, air travel, working from home, online learning,
shopping, and risk perception, along with attitudinal, socioeconomic, and
demographic information. The survey is deployed over multiple waves to the same
respondents to monitor how behaviors and attitudes evolve over time. Version
1.0 of the survey contains 8,723 Wave 1 responses that are publicly available.
This article details the methodology adopted for the collection, cleaning, and
processing of the data. In addition, the data are weighted to be representative
of national and regional demographics. This survey dataset can aid researchers,
policymakers, businesses, and government agencies in understanding both the
extent of behavioral shifts and the likelihood that changes in behaviors will
persist after COVID-19.
|
This paper shows that, in the definition of Alexandrov space with lower
([BGP]) or upper ([AKP]) curvature bound, the original conditions can be
replaced with much weaker ones, which can be viewed as comparison versions of
the second variation formula in Riemannian geometry (and thus if we define
Alexandrov spaces using these weakened conditions, then the original definition
will become a local version of Toponogov's Comparison Theorem on such spaces).
As an application, we give a new proof for the Doubling Theorem by Perel'man.
|
The thinness of a graph is a width parameter that generalizes some properties
of interval graphs, which are exactly the graphs of thinness one. Graphs with
thinness at most two include, for example, bipartite convex graphs. Many
NP-complete problems can be solved in polynomial time for graphs with bounded
thinness, given a suitable representation of the graph. Proper thinness is
defined analogously, generalizing proper interval graphs, and a larger family
of NP-complete problems is known to be polynomially solvable for graphs with
bounded proper thinness. It is known that the thinness of a graph is at most
its pathwidth plus one. In this work, we prove that the proper thinness of a
graph is at most its bandwidth, for graphs with at least one edge. It is also
known that boxicity is a lower bound for the thinness. The main results of this
work are characterizations of 2-thin and 2-proper thin graphs as intersection
graphs of rectangles in the plane with sides parallel to the Cartesian axes and
other specific conditions. We also bound the bend number of graphs with low
thinness as vertex intersection graphs of paths on a grid ($B_k$-VPG graphs are
the graphs that have a representation in which each path has at most $k$
bends). We show that 2-thin graphs are a subclass of $B_1$-VPG graphs and,
moreover, of monotone L-graphs, and that 3-thin graphs are a subclass of
$B_3$-VPG graphs. We also show that $B_0$-VPG graphs may have arbitrarily large
thinness, and that not every 4-thin graph is a VPG graph. Finally, we
characterize 2-thin graphs by a set of forbidden patterns for a vertex order.
|
This paper contributes a new formal method for the spatial discretization of a
class of nonlinear distributed parameter systems that allow a port-Hamiltonian
representation over a one-dimensional manifold. A specific finite-dimensional
port-Hamiltonian element is defined that enables a structure-preserving
discretization of the infinite-dimensional model, one that inherits the Dirac
structure and the underlying energy balance and matches the Hamiltonian
function on any, possibly nonuniform, mesh of the spatial geometry.
|
The identification of stellar-mass black-hole mergers with up to 80 Msun as
powerful sources of gravitational wave radiation led to increased interest in
the physics of the most massive stars. The largest sample of possible
progenitors of such objects, very massive stars (VMS) with masses up to 300
Msun, have been identified in the 30 Dor star-forming region in the Large
Magellanic Cloud (LMC). The physics and evolution of VMS are highly uncertain,
mainly due to their proximity to the Eddington limit. In this work we
investigate the two most important effects that are thought to occur near the
Eddington limit: enhanced mass loss through optically thick winds, and the
formation of radially inflated stellar envelopes. We compute evolutionary
models for VMS at LMC metallicity and perform a population synthesis of the
young stellar population in 30 Dor. We find that enhanced mass loss and
envelope inflation have a dominant effect on the evolution of the most massive
stars. While the observed mass-loss properties and the associated surface
He-enrichment are well described by our new models, the observed O-star
mass-loss rates are found to cover a much larger range than theoretically
predicted, with particularly low mass-loss rates for the youngest objects.
Also, the (rotational) surface enrichment in the O-star regime appears to be
not well understood. The positions of the most massive stars in the
Hertzsprung-Russell Diagram (HRD) are affected by mass loss and envelope
inflation. For instance, the majority of luminous B-supergiants in 30 Dor, and
the lack thereof at the highest luminosities, can be explained through the
combination of envelope inflation and mass loss. Finally, we find that the
upper limit for the inferred initial stellar masses in the greater 30 Dor
region is significantly lower than in its central cluster R 136, implying a
variable upper limit for the masses of stars.
|
The first half of the paper is devoted to description and implementation of
statistical tests arguing for the presence of a Brownian component in the
inventories and wealth processes of individual traders. We use intra-day data
from the Toronto Stock Exchange to provide empirical evidence of this claim. We
work with regularly spaced time intervals, as well as with asynchronously
observed data. The tests reveal with high significance the presence of a
non-zero Brownian motion component. The second half of the paper is concerned
with the analysis of trader behaviors throughout the day. We extend the
theoretical analysis of an existing optimal execution model to accommodate the
presence of It\^o inventory processes, and we compare empirically the optimal
behavior of traders in such fitted models, to their actual behavior as inferred
from the data.
|
We study the $b$-property of a sublattice (or an order ideal) $F$ of a vector
lattice $E$. In particular, we study the $b$-property of $E$ in $E^\delta$, the
Dedekind completion of $E$; the $b$-property of $E$ in $E^u$, the universal
completion of $E$; and the $b$-property of $E$ in $\hat{E}(\hat{\tau})$, the
completion of $E$.
|
We describe explicitly the chamber structure of the movable cone for a
general smooth complete intersection Calabi-Yau threefold $X$ of Picard number
two in a certain Pr-ruled Fano manifold and hence verify the Morrison-Kawamata
cone conjecture for such $X$. Moreover, all birational minimal models of such
Calabi-Yau threefolds are found, whose number is finite up to isomorphism.
|
Recently, learning a model that generalizes well on out-of-distribution (OOD)
data has attracted great attention in the machine learning community. In this
paper, after defining OOD generalization via Wasserstein distance, we
theoretically show that a model robust to input perturbation generalizes well
on OOD data. Inspired by previous findings that adversarial training helps
improve input-robustness, we theoretically show that adversarially trained
models have converged excess risk on OOD data, and empirically verify it on
both image classification and natural language understanding tasks. Besides, in
the paradigm of first pre-training and then fine-tuning, we theoretically show
that a pre-trained model that is more robust to input perturbation provides a
better initialization for generalization on downstream OOD data. Empirically,
after fine-tuning, this better-initialized model from adversarial pre-training
also has better OOD generalization.
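As a toy illustration of training against input perturbations (not the paper's experimental setup; the model, attack strength, and data below are assumptions), an FGSM-style adversarial training loop for logistic regression might look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.1, epochs=200):
    """FGSM-style adversarial training of logistic regression.

    Each input is perturbed by eps * sign(dLoss/dx) before the parameter update,
    so the model is fit on worst-case (L_inf ball) versions of the data.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        # Gradient of the per-sample logistic loss w.r.t. the input x is (p - y) * w.
        p = sigmoid(X @ w + b)
        X_adv = X + eps * np.sign(np.outer(p - y, w))
        # Standard gradient step on the adversarially perturbed batch.
        p_adv = sigmoid(X_adv @ w + b)
        w -= lr * (X_adv.T @ (p_adv - y) / n)
        b -= lr * float(np.mean(p_adv - y))
    return w, b

# Toy data: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = adversarial_train(X, y)
print(np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1)))  # clean accuracy
```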
|
Strong gravitational lensing is a gravitational wave (GW) propagation effect
that influences the inferred GW source parameters and the cosmological
environment. Identifying strongly-lensed GW images is challenging as waveform
amplitude magnification is degenerate with a shift in the source intrinsic mass
and redshift. However, even in the geometric-optics limit, type II
strongly-lensed images cannot be fully matched by type I (or unlensed) waveform
templates, especially with large binary mass ratios and orbital inclination
angles. We propose to use this mismatch to distinguish individual type II
images. Using planned noise spectra of Cosmic Explorer, Einstein Telescope and
LIGO Voyager, we show that a significant fraction of type II images can be
distinguished from unlensed sources, given sufficient SNR ($\sim 30$).
Incorporating models on GW source population and lens population, we predict
that the yearly detection rate of lensed GW sources with detectable type II
images is 172.2, 118.2 and 27.4 for CE, ET and LIGO Voyager, respectively.
Among these detectable events, 33.1%, 7.3% and 0.22% will be distinguishable
via their type II images with a log Bayes factor larger than 10. We conclude
that such distinguishable events are likely to appear in the third-generation
detector catalog; our strategy will significantly supplement existing strong
lensing search strategies.
|
A/B experimentation is a known technique for data-driven product development
and has demonstrated its value in web-facing businesses. With the
digitalisation of the automotive industry, the focus in the industry is
shifting towards software. For automotive embedded software to continuously
improve, A/B experimentation is considered an important technique. However, the
adoption of such a technique is not without challenge. In this paper, we
present an architecture to enable A/B testing in automotive embedded software.
The design addresses challenges that are unique to the automotive industry in a
systematic fashion. Going from hypothesis to practice, our architecture was
also applied for running online experiments at a considerable scale.
Furthermore, a case study approach was used to compare our proposal with
state-of-practice in the automotive industry. We found our architecture design
to be relevant and applicable in the efforts of adopting continuous A/B
experiments in automotive embedded software.
|
Maintaining security and privacy in real-world enterprise networks is
becoming more and more challenging. Cyber actors are increasingly employing
previously unreported and state-of-the-art techniques to break into corporate
networks. To develop novel and effective methods to thwart these sophisticated
cyberattacks, we need datasets that reflect real-world enterprise scenarios to
a high degree of accuracy. However, precious few such datasets are publicly
available. Researchers still predominantly use the decade-old KDD datasets;
however, studies have shown that these datasets do not adequately reflect
modern attacks such as Advanced Persistent Threats (APTs). In this work, we analyze the
usefulness of the recently introduced DARPA Operationally Transparent Cyber
(OpTC) dataset in this regard. We describe the content of the dataset in detail
and present a qualitative analysis. We show that the OpTC dataset is an
excellent candidate for advanced cyber threat detection research while also
highlighting its limitations. Additionally, we propose several research
directions where this dataset can be useful.
|
In this paper, we propose a scheme that utilizes the optimization ability of
artificial intelligence (AI) for optimal transceiver-joint equalization in
compensating for the optical filtering impairments caused by wavelength
selective switches (WSS). In contrast to adding or replacing a certain module
of existing digital signal processing (DSP), we exploit the similarity between
a communication system and a neural network (NN). By mapping a communication
system to an NN, in which the equalization modules correspond to the
convolutional layers and other modules can be regarded as static layers, the
optimal transceiver-joint equalization coefficients can be obtained. In
particular, the DSP structure of the communication system is not changed.
Extensive numerical simulations are performed to validate the performance of
the proposed method. For a 65 GBaud 16QAM signal, it can achieve a 0.76 dB gain
when the number of WSSs is 16 with a -6 dB bandwidth of 73 GHz.
|
We present a novel framework for designing multiplierless kernel machines
that can be used on resource-constrained platforms like intelligent edge
devices. The framework uses a piecewise linear (PWL) approximation based on a
margin propagation (MP) technique and uses only addition/subtraction, shift,
comparison, and register underflow/overflow operations. We propose a
hardware-friendly MP-based inference and online training algorithm that has
been optimized for a Field Programmable Gate Array (FPGA) platform. Our FPGA
implementation eliminates the need for DSP units and reduces the number of
LUTs. By reusing the same hardware for inference and training, we show that the
platform can overcome classification errors and local minima artifacts that
result from the MP approximation. Using the FPGA platform, we also show that
the proposed multiplierless MP-kernel machine demonstrates superior performance
in terms of power, performance, and area compared to other comparable
implementations.
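For background, a common floating-point reference for the margin-propagation (MP) function finds the threshold $z$ satisfying $\sum_i \max(0, x_i - z) = \gamma$; whether this matches the exact MP formulation used in the paper is an assumption, and a hardware implementation would replace the arithmetic below with add/shift/compare approximations.

```python
import numpy as np

def margin_propagation(x, gamma):
    """Reference solver for the MP constraint: find z with sum_i max(0, x_i - z) = gamma.

    The returned z acts as a piecewise-linear surrogate for log-sum-exp style
    aggregation; solved here exactly in floating point for clarity only.
    """
    x = np.sort(np.asarray(x, dtype=float))[::-1]   # descending order
    csum = np.cumsum(x)
    for k in range(1, len(x) + 1):
        z = (csum[k - 1] - gamma) / k               # candidate with top-k terms active
        if (k == len(x) or z >= x[k]) and z <= x[k - 1]:
            return z
    return (csum[-1] - gamma) / len(x)              # safety fallback

scores = [2.0, 1.0, 0.5, -1.0]
z = margin_propagation(scores, gamma=1.0)
print(z, sum(max(0.0, s - z) for s in scores))      # residual equals gamma
```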
|
This paper is devoted to studying impedance eigenvalues (that is, eigenvalues
of a particular Dirichlet-to-Neumann map) for the time harmonic linear elastic
wave problem, and their potential use as target-signatures for fluid-solid
interaction problems. We first consider several possible families of
eigenvalues of the elasticity problem, focusing on certain impedance
eigenvalues that are an analogue of Steklov eigenvalues. We show that one of
these families arises naturally in inverse scattering. We also analyse their
approximation from far field measurements of the scattered pressure field in
the fluid, and illustrate several alternative methods of approximation in the
case of an isotropic elastic disk.
|
For a finite group $G,$ we define the concept of $G$-partial permutation and
use it to show that the structure coefficients of the center of the wreath
product $G\wr \mathcal{S}_n$ algebra are polynomials in $n$ with non-negative
integer coefficients. Our main tool is a combinatorial algebra which projects
onto the center of the group $G\wr \mathcal{S}_n$ algebra for every $n.$ This
generalizes the Ivanov and Kerov method to prove the polynomiality property for
the structure coefficients of the center of the symmetric group algebra.
|
In this paper, closed-loop entry guidance in a randomly perturbed atmosphere,
using bank angle control, is posed as a stochastic optimal control problem. The
entry trajectory, as well as the closed-loop controls, are both modeled as
random processes with statistics determined by the entry dynamics, the entry
guidance, and the probabilistic structure of altitude-dependent atmospheric
density variations. The entry guidance, which is parameterized as a sequence of
linear feedback gains, is designed to steer the probability distribution of the
entry trajectories while satisfying bounds on the allowable control inputs and
on the maximum allowable state errors. Numerical simulations of a Mars entry
scenario demonstrate improved range targeting performance when using the
developed stochastic guidance scheme as compared to the existing Apollo final
phase algorithm.
|
Graph neural networks (GNNs) have received tremendous attention due to their
power in learning effective representations for graphs. Most GNNs follow a
message-passing scheme where the node representations are updated by
aggregating and transforming the information from the neighborhood. Meanwhile,
they adopt the same strategy in aggregating the information from different
feature dimensions. However, as suggested by social dimension theory and
spectral embedding, there are potential benefits in treating the dimensions
differently during the aggregation process. In this work, we investigate how to
enable heterogeneous contributions of feature dimensions in GNNs. In particular, we
propose a general graph feature gating network (GFGN) based on the graph signal
denoising problem and then correspondingly introduce three graph filters under
GFGN to allow different levels of contributions from feature dimensions.
Extensive experiments on various real-world datasets demonstrate the
effectiveness and robustness of the proposed frameworks.
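To make dimension-wise gating concrete, a hypothetical sketch of one message-passing step in which every feature dimension of each neighbor message receives its own gate is shown below; the gating function and parameter shapes are illustrative assumptions, not the specific GFGN filters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_aggregation(A, H, Wg, W):
    """One message-passing step with per-dimension gates.

    A  : (n, n) 0/1 adjacency matrix, H : (n, d) node features,
    Wg : (2d, d) gate parameters, W : (d, d) message transform.
    Each neighbor message H[j] @ W is modulated element-wise by a gate computed
    from the pair (H[i], H[j]), so feature dimensions contribute differently.
    """
    M = H @ W
    H_new = np.zeros_like(H)
    for i in range(A.shape[0]):
        neigh = np.nonzero(A[i])[0]
        if len(neigh) == 0:
            continue
        pair = np.concatenate(
            [np.repeat(H[i][None, :], len(neigh), axis=0), H[neigh]], axis=1)
        gates = sigmoid(pair @ Wg)              # (num_neighbors, d): one gate per dimension
        H_new[i] = (gates * M[neigh]).mean(axis=0)
    return H_new

rng = np.random.default_rng(0)
n, d = 5, 4
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 0)
H = rng.normal(size=(n, d))
out = gated_aggregation(A, H, rng.normal(size=(2 * d, d)), rng.normal(size=(d, d)))
print(out.shape)   # (5, 4)
```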
|
The purpose of this paper is to introduce the notion of a noncommutative
BiHom-pre-Poisson algebra. We also establish bimodules and matched pairs of
noncommutative BiHom-(pre-)Poisson algebras and give related properties.
Finally, we exploit the notion of an $\mathcal{O}$-operator to illustrate the
relations between noncommutative BiHom-Poisson and noncommutative
BiHom-pre-Poisson algebras.
|
Classical Cepheids (DCEPs) are the most important primary indicators for the
extragalactic distance scale, but they are also important objects per se,
allowing us to put constraints on the physics of intermediate-mass stars and
the pulsation theories. We have investigated the peculiar DCEP HD 344787, which
is known to exhibit the fastest positive period change among DCEPs, along with
a quenching of the amplitude of its light variation. We have used high-resolution
spectra obtained with HARPS-N@TNG for HD 344787 and the more famous Polaris
DCEP, to infer their detailed chemical abundances. Results from the analysis of
new time-series photometry of HD 344787 obtained by the TESS satellite are also
reported. The double-mode nature of the pulsation of HD 344787 is confirmed by
the analysis of the TESS light curve, although with rather tiny amplitudes of a
few tens of millimag. This is an indication that HD 344787 is on the verge of
quenching its pulsation. Analysis of the HARPS-N@TNG spectra reveals an almost solar
abundance and no depletion of carbon and oxygen. Hence, the star appears to
have not gone through the first dredge-up. Similar results are obtained for
Polaris. Polaris and HD344787 are confirmed to be both most likely at their
first crossing of the instability strip (IS). The two stars are likely at the
opposite borders of the IS for first overtone DCEPs with metal abundance
Z=0.008. A comparison with other DCEPs which are also thought to be at their
first crossing allows us to speculate that the differences we see in the
Hertzsprung-Russell diagram might be due to differences in the properties of
the DCEP progenitors during the main sequence phase.
|
Deep neural networks for automatic image colorization often suffer from the
color-bleeding artifact, a problematic color spreading near the boundaries
between adjacent objects. Such color-bleeding artifacts debase the reality of
generated outputs, limiting the applicability of colorization models in
practice. Although previous approaches have attempted to address this problem
in an automatic manner, they tend to work only in limited cases where a high
contrast of gray-scale values is present in the input image. Alternatively,
leveraging user interactions would be a promising approach for solving these
color-bleeding artifacts. In this paper, we propose a novel edge-enhancing
network for the regions of interest via simple user scribbles indicating where
to enhance. In addition, our method requires a minimal amount of effort from
users for their satisfactory enhancement. Experimental results demonstrate that
our interactive edge-enhancing approach effectively improves the color-bleeding
artifacts compared to the existing baselines across various datasets.
|
We consider the distributed training of large-scale neural networks that
serve as PDE solvers producing full field outputs. We specifically consider
neural solvers for the generalized 3D Poisson equation over megavoxel domains.
A scalable framework is presented that integrates two distinct advances. First,
we accelerate training a large model via a method analogous to the multigrid
technique used in numerical linear algebra. Here, the network is trained using
a hierarchy of increasing resolution inputs in sequence, analogous to the 'V',
'W', 'F', and 'Half-V' cycles used in multigrid approaches. In conjunction with
the multigrid approach, we implement a distributed deep learning framework
which significantly reduces the time to solve. We show the scalability of this
approach on both GPU (Azure VMs on Cloud) and CPU clusters (PSC Bridges2). This
approach is deployed to train a generalized 3D Poisson solver that scales well
to predict output full-field solutions up to the resolution of 512x512x512 for
a high dimensional family of inputs.
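As an illustration of the cycle analogy (the level orderings and resolutions below are assumptions, not the paper's exact schedules), the resolution sequences visited by V-, W-, and Half-V-style cycles can be generated as follows:

```python
def v_cycle(levels):
    """Level indices visited by a V-style cycle: finest -> coarsest -> finest."""
    down = list(range(levels - 1, -1, -1))
    return down + down[-2::-1]

def w_cycle(levels):
    """Recursive W-style visiting order over `levels` resolution levels (0 = coarsest)."""
    if levels == 1:
        return [0]
    inner = w_cycle(levels - 1)
    return [levels - 1] + inner + inner + [levels - 1]

def half_v_cycle(levels):
    """Half-V: visit each resolution once, from coarsest to finest."""
    return list(range(levels))

resolutions = [32, 64, 128, 256]        # assumed per-level input resolutions
for name, schedule in [("V", v_cycle(4)), ("W", w_cycle(4)), ("Half-V", half_v_cycle(4))]:
    # Each entry is the input resolution the network would be trained on next.
    print(name, [resolutions[level] for level in schedule])
```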
|
This paper presents a novel supervised approach to detecting the chorus
segments in popular music. Traditional approaches to this task are mostly
unsupervised, with pipelines designed to target some quality that is assumed to
define "chorusness," which usually means seeking the loudest or most frequently
repeated sections. We propose to use a convolutional neural network with a
multi-task learning objective, which simultaneously fits two temporal
activation curves: one indicating "chorusness" as a function of time, and the
other the location of the boundaries. We also propose a post-processing method
that jointly takes into account the chorus and boundary predictions to produce
binary output. In experiments using three datasets, we compare our system to a
set of public implementations of other segmentation and chorus-detection
algorithms, and find our approach performs significantly better.
|
Motivated by the recent LHCb announcement of a $3.1\sigma$ violation of
lepton-flavor universality in the ratio $R_K=\Gamma(B\to
K\mu^+\mu^-)/\Gamma(B\to K e^+ e^-)$, we present an updated, comprehensive
analysis of the flavor anomalies seen in both neutral-current ($b\to
s\ell^+\ell^-$) and charged-current ($b\to c\tau\bar\nu$) decays of $B$ mesons.
Our study starts from a model-independent effective field-theory approach and
then considers both a simplified model and a UV-complete extension of the
Standard Model featuring a vector leptoquark $U_1$ as the main mediator of the
anomalies. We show that the new LHCb data corroborate the emerging pattern of a
new, predominantly left-handed, semileptonic current-current interaction with a
flavor structure respecting a (minimally) broken $U(2)^5$ flavor symmetry. New
aspects of our analysis include a combined analysis of the semileptonic
operators involving tau leptons, including in particular the important
constraint from $B_s$--$\bar B_s$ mixing, a systematic study of the effects of
right-handed leptoquark couplings and of deviations from minimal
flavor-symmetry breaking, a detailed analysis of various rare $B$-decay modes
which would provide smoking-gun signatures of this non-standard framework (LFV
decays, di-tau modes, and $B\to K^{(*)}\nu\bar\nu$), and finally an updated
analysis of collider bounds on the leptoquark mass and couplings.
|
In this paper, the interference mitigation for Frequency Modulated Continuous
Wave (FMCW) radar system with a dechirping receiver is investigated. After
dechirping operation, the scattered signals from targets result in beat
signals, i.e., a sum of complex exponentials, while the interferences lead to
chirp-like short pulses. Taking advantage of these different time and frequency
features between the useful signals and the interferences, the interference
mitigation is formulated as an optimization problem: a sparse and low-rank
decomposition of a Hankel matrix constructed by lifting the measurements. Then,
an iterative optimization algorithm is proposed to tackle it by exploiting the
Alternating Direction Method of Multipliers (ADMM) scheme. Compared to the existing
methods, the proposed approach does not need to detect the interference and
also improves the estimation accuracy of the separated useful signals. Both
numerical simulations with point-like targets and experiment results with
distributed targets (i.e., raindrops) are presented to demonstrate and verify
its performance. The results show that the proposed approach is generally
applicable for interference mitigation in both stationary and moving target
scenarios.
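A generic sketch of this kind of sparse-plus-low-rank splitting is given below: it lifts a 1-D measurement into a Hankel matrix and runs a robust-PCA-style ADMM with singular-value and soft thresholding. The objective weights, stopping rule, and toy signal are illustrative assumptions and do not reproduce the paper's exact update rules.

```python
import numpy as np

def hankel(x, L):
    """Lift a length-N signal into an L x (N - L + 1) Hankel matrix."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

def soft(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def sparse_lowrank_admm(M, lam=0.1, mu=1.0, iters=200):
    """Split M into low-rank L (beat signals) + sparse S (interference)."""
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)       # nuclear-norm proximal step
        S = soft(M - L + Y / mu, lam / mu)      # l1-norm proximal step
        Y = Y + mu * (M - L - S)                # dual (multiplier) update
    return L, S

# Toy example: two sinusoidal beat components plus a short chirp-like burst.
n = np.arange(128)
beat = np.cos(0.2 * n) + 0.5 * np.cos(0.45 * n)
interf = np.zeros_like(beat)
interf[60:70] = np.cos(0.1 * n[60:70] ** 2)
M = hankel(beat + interf, 32)
L, S = sparse_lowrank_admm(M, lam=0.05)
print(np.linalg.norm(M - L - S))   # constraint residual is driven toward zero
```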
|
RGB-D salient object detection (SOD) demonstrates its superiority in detecting
salient objects in complex environments due to the additional depth information introduced in
the data. Inevitably, an independent stream is introduced to extract features
from depth images, leading to extra computation and parameters. This
methodology which sacrifices the model size to improve the detection accuracy
may impede the practical application of SOD problems. To tackle this dilemma,
we propose a dynamic distillation method along with a lightweight framework,
which significantly reduces the parameters. This method considers the factors
of both teacher and student performance within the training stage and
dynamically assigns the distillation weight instead of applying a fixed weight
on the student model. Extensive experiments are conducted on five public
datasets to demonstrate that our method can achieve competitive performance
compared to 10 prior methods through a 78.2MB lightweight structure.
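One plausible (purely hypothetical) way to realize a performance-dependent distillation weight is sketched below; the weighting rule, losses, and temperature are placeholders and are not taken from the paper.

```python
import numpy as np

def dynamic_distillation_weight(teacher_loss, student_loss, temperature=1.0):
    """Hypothetical rule: trust the teacher more when it currently outperforms
    the student, and lean on the ground-truth task loss when it does not.

    Returns a weight in (0, 1) applied to the distillation term.
    """
    gap = (student_loss - teacher_loss) / temperature
    return 1.0 / (1.0 + np.exp(-gap))          # sigmoid of the performance gap

def total_loss(task_loss, distill_loss, teacher_loss, student_loss):
    w = dynamic_distillation_weight(teacher_loss, student_loss)
    return task_loss + w * distill_loss

# Toy numbers: when the teacher is currently better, the distillation term gets
# a weight above 0.5; as the student catches up, the weight decays toward 0.5.
print(dynamic_distillation_weight(teacher_loss=0.2, student_loss=0.9))
print(dynamic_distillation_weight(teacher_loss=0.5, student_loss=0.5))
```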
|
Here we prove a global existence theorem for the solutions of the semi-linear
wave equation with critical non-linearity admitting a positive definite
Hamiltonian. Formulating a parametrix for the wave equation in a globally
hyperbolic curved spacetime, we derive an a priori pointwise bound for the
solution of the nonlinear wave equation in terms of the initial energy, from
which the global existence follows in a straightforward way. This is
accomplished in two steps. First, based on Moncrief's light cone formulation we
derive an expression for the scalar field in terms of integrals over the past
light cone from an arbitrary spacetime point to an `initial', Cauchy
hypersurface and additional integrals over the intersection of this cone with
the initial hypersurface. Second, we obtain a priori estimates for the energy
associated with three quasi-local approximate time-like conformal Killing and
one approximate Killing vector fields. Utilizing these naturally defined
energies associated with the physical stress-energy tensor together with the
integral equation, we show that the spacetime $L^{\infty}$ norm of the scalar
field remains bounded in terms of the initial data and continues to be so as
long as the spacetime remains singularity/Cauchy-horizon free.
|
An analytic formula is given for the total scattering cross section of an
electron and a photon at order $\alpha^3$. This includes both the
double-Compton scattering real-emission contribution as well as the virtual
Compton scattering part. When combined with the recent analytic result for the
pair-production cross section, the complete $\alpha^3$ cross section is now
known. Both the next-to-leading order calculation as well as the
pair-production cross section are computed using modern multiloop calculation
techniques, where cut diagrams are decomposed into a set of master integrals
that are then computed using differential equations.
|
Kernel maximum moment restriction (KMMR) recently emerges as a popular
framework for instrumental variable (IV) based conditional moment restriction
(CMR) models with important applications in conditional moment (CM) testing and
parameter estimation for IV regression and proximal causal learning. The
effectiveness of this framework, however, depends critically on the choice of a
reproducing kernel Hilbert space (RKHS) as the space of instruments. In this
work, we present a systematic way to select the instrument space for
parameter estimation based on a principle of the least identifiable instrument
space (LIIS) that identifies model parameters with the least space complexity.
Our selection criterion combines two distinct objectives to determine such an
optimal space: (i) a test criterion to check identifiability; (ii) an
information criterion based on the effective dimension of RKHSs as a complexity
measure. We analyze the consistency of our method in determining the LIIS, and
demonstrate its effectiveness for parameter estimation via simulations.
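For the complexity side, a standard notion of the effective dimension of an RKHS given $n$ samples is $\operatorname{tr}\big(K(K + n\lambda I)^{-1}\big)$ for the kernel Gram matrix $K$; the snippet below computes it for an RBF kernel. Whether this is the exact information criterion used in the paper is an assumption.

```python
import numpy as np

def rbf_gram(X, lengthscale=1.0):
    """Gram matrix of the Gaussian (RBF) kernel."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * lengthscale ** 2))

def effective_dimension(K, lam):
    """trace(K (K + n*lam*I)^{-1}) -- a common complexity measure for an RKHS."""
    n = K.shape[0]
    return float(np.trace(np.linalg.solve(K + n * lam * np.eye(n), K)))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
K = rbf_gram(X, lengthscale=1.0)
for lam in (1e-1, 1e-2, 1e-3):
    print(lam, effective_dimension(K, lam))   # smaller lam -> larger effective dimension
```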
|
We report the first detection in space of the two doubly deuterated
isotopologues of methyl acetylene. The species CHD2CCH and CH2DCCD were
identified in the dense core L483 through nine and eight, respectively,
rotational lines in the 72-116 GHz range using the IRAM 30m telescope. The
astronomical frequencies observed here were combined with laboratory
frequencies from the literature measured in the 29-47 GHz range to derive more
accurate spectroscopic parameters for the two isotopologues. We derive
beam-averaged column densities of (2.7 +/- 0.5)e12 cm-2 for CHD2CCH and (2.2
+/- 0.4)e12 cm-2 for CH2DCCD, which translate to abundance ratios
CH3CCH/CHD2CCH = 34 +/- 10 and CH3CCH/CH2DCCD = 42 +/- 13. The doubly
deuterated isotopologues of methyl acetylene are only a few times less abundant
than the singly deuterated ones, concretely around 2.4 times less abundant than
CH3CCD. The abundances of the different deuterated isotopologues with respect
to CH3CCH are reasonably accounted for by a gas-phase chemical model in which
deuteration occurs from the precursor ions C3H6D+ and C3H5D+, when the
ortho-to-para ratio of molecular hydrogen is sufficiently low. This points to
gas-phase chemical reactions, rather than grain-surface processes, as
responsible for the formation and deuterium fractionation of CH3CCH in L483.
The abundance ratios CH2DCCH/CH3CCD = 3.0 +/- 0.9 and CHD2CCH/CH2DCCD = 1.25
+/- 0.37 observed in L483 are consistent with the statistically expected values
of three and one, respectively, with the slight overabundance of CHD2CCH
compared to CH2DCCD being well explained by the chemical model.
|
Significant experimental progress has been made recently for observing
long-sought supersolid-like states in Bose-Einstein condensates, where spatial
translational symmetry is spontaneously broken by anisotropic interactions to
form a stripe order. Meanwhile, the superfluid stripe ground state was also
observed by applying a weak optical lattice that forces the symmetry breaking.
Despite the similarity of the ground states, here we show that these two
symmetry breaking mechanisms can be distinguished by their collective
excitation spectra. In contrast to gapless Goldstone modes of the
\textit{spontaneous} stripe state, we propose that the excitation spectra of
the \textit{forced} stripe phase can provide direct experimental evidence for
the long-sought gapped pseudo-Goldstone modes. We characterize the
pseudo-Goldstone mode of such lattice-induced stripe phase through its
excitation spectrum and static structure factor. Our work may pave the way for
exploring spontaneous and forced/approximate symmetry breaking mechanisms in
different physical systems.
|
Inertia effects in magnetization dynamics are theoretically shown to result
in a different type of spin waves, i.e. nutation surface spin waves, which
propagate at terahertz frequencies in in-plane magnetized ferromagnetic thin
films. Considering the magnetostatic limit, i.e. neglecting exchange coupling,
we calculate the dispersion relation and group velocity, which we find to be slower
than the velocity of conventional (precession) spin waves. In addition, we find
that the nutation surface spin waves are backward spin waves. Furthermore, we
show that inertia causes a decrease of the frequency of the precession spin
waves, namely magnetostatic surface spin waves and backward volume
magnetostatic spin waves. The magnitude of the decrease depends on the magnetic
properties of the film and its geometry.
|
This paper introduces a node formulation for multistage stochastic programs
with endogenous (i.e., decision-dependent) uncertainty. Problems with such
structure arise when the choices of the decision maker determine a change in
the likelihood of future random events. The node formulation avoids an explicit
statement of non-anticipativity constraints and thus keeps the dimension of
the model manageable. An exact solution algorithm for a special case is
introduced and tested on a case study. Results show that the algorithm
outperforms a commercial solver as the size of the instances increases.
|
Human activities have recently been hugely restricted by COVID-19. Robots that
can conduct inter-floor navigation attract much public attention, since they
can substitute for human workers in conducting service work. However, current
robots depend on either human assistance or elevator retrofitting, and fully
autonomous inter-floor navigation is still not available. As the very first
step of inter-floor navigation, elevator button segmentation and recognition
play an essential role. Therefore, we release the first large-scale
publicly available elevator panel dataset in this work, containing 3,718 panel
images with 35,100 button labels, to facilitate more powerful algorithms on
autonomous elevator operation. Together with the dataset, a number of deep
learning based implementations for button segmentation and recognition are also
released to benchmark future methods in the community. The dataset will be
available at \url{https://github.com/zhudelong/elevator_button_recognition}.
|
The Reeb space of a continuous map is the space of all (elements
representing) connected components of preimages endowed with the quotient
topology induced from the natural equivalence relation on the domain. These
objects are strong tools in (differential) topological theory of Morse
functions, fold maps, which are their higher dimensional variants, and so on:
they are in general polyhedra whose dimensions are the same as those of the
targets. In suitable cases, Reeb spaces inherit topological information of the
manifolds, such as homology groups and cohomology rings.
This suggests the following problem: what do the global topologies of Reeb
spaces of smooth maps in suitable classes look like? The present paper
constructs families of stable fold maps having Reeb spaces with non-trivial top
homology groups, together with their (co)homology groups (and rings). Related studies on
the global topologies from the viewpoints of the singularity theory of
differentiable maps and differential topology have been presented by various
researchers including the author. The author previously constructed families of
fold maps with Reeb spaces with non-trivial top homology groups and with good
topological properties. This paper presents new families, especially,
generalized situations of some known situations.
|
Let G be a split, simple, simply connected, algebraic group over Q. The
degree 4, weight 2 motivic cohomology group of the classifying space BG of G is
identified with Z. We construct cocycles representing the generator of this
group, known as the second universal motivic Chern class.
If G = SL(m), there is a canonical cocycle, defined by the first author
(1993). For any group G, we define a collection of cocycles parametrised by
cluster coordinate systems on the space of G-orbits on the cube of the
principal affine space G/U. Cocycles for different clusters are related by
explicit coboundaries, constructed using cluster transformations relating the
clusters.
The cocycle has three components. The construction of the last one is
canonical and elementary; it does not use clusters, and provides a canonical
cocycle for the motivic generator of the degree 3 cohomology class of the
complex manifold G(C). However, to lift this component to the whole cocycle we
need cluster coordinates: the construction of the first two components uses
crucially the cluster structure of the moduli spaces A(G,S) related to the
moduli space of G-local systems on S. In retrospect, it partially explains why
the cluster coordinates on the space A(G,S) should exist.
This construction has numerous applications, including an explicit
construction of the universal extension of the group G by K_2, the line bundle
on Bun(G) generating its Picard group, Kac-Moody groups, etc. Another
application is an explicit combinatorial construction of the second motivic
Chern class of a G-bundle. It is a motivic analog of the work of
Gabrielov-Gelfand-Losik (1974), for any G.
|
Reconfigurable intelligent surface (RIS) is an emerging technique employing
metasurface to reflect the signal from the source node to the destination node
without consuming any energy. Not only the spectral efficiency but also the
energy efficiency can be improved through RIS. Essentially, RIS can be
considered as a passive relay between the source and destination node. On the
other hand, a relay node in a traditional relay network has to be active, which
indicates that it will consume energy when it is relaying the signal or
information between the source and destination nodes. In this paper, we compare
the performances between RIS and active relay for a general multiple-input
multiple-output (MIMO) system. To make the comparison fair and comprehensive,
the performance of both the RIS and the active relay is optimized on a
best-effort basis. For the RIS, the transmit beamforming and the reflecting
coefficients at the RIS are jointly optimized so as to maximize the end-to-end
throughput. Although the
optimization problem is non-convex, it is transformed equivalently to a
weighted mean-square error (MSE) minimization problem and an alternating
optimization problem is proposed, which can ensure the convergence to a
stationary point. For the active relay, both half-duplex relay (HDR) and
full-duplex relay (FDR) are considered. The end-to-end throughput is maximized via
an alternating optimization method. Numerical results are presented to
demonstrate the effectiveness of the proposed algorithm. Finally, comparisons
between RIS and relays are investigated from the perspective of system model,
performance, deployment and controlling method.
|
We study theoretical neutrino signals from core-collapse supernova (CCSN)
computed using axisymmetric CCSN simulations that cover the post-bounce phase
up to $\sim 4$~s. We provide basic quantities of the neutrino signals such as
event rates, energy spectra, and cumulative number of events at some
terrestrial neutrino detectors, and then discuss some new features in the late
phase that emerge in our models. Contrary to popular belief, neutrino emissions
in the late phase are not always steady, but rather have temporal fluctuations,
the vigor of which hinges on the CCSN model and neutrino flavor. We find that
such temporal variations are not primarily driven by proto-neutron star (PNS)
convection, but by fallback accretion in exploding models. We assess the
detectability of these temporal variations, and find that IceCube is the most
promising detector with which to resolve them. We also update fitting formulae
first proposed in our previous paper for which the total neutrino energy (TONE)
emitted at the CCSN source is estimated from the cumulative number of events in
each detector. This will be a powerful technique with which to analyze real
observations, particularly for low-statistics data.
|
Model-based evaluation in cybersecurity has a long history. Attack Graphs
(AGs) and Attack Trees (ATs) were the earlier developed graphical security
models for cybersecurity analysis. However, they have limitations (e.g.,
scalability problem, state-space explosion problem, etc.) and lack the ability
to capture other security features (e.g., countermeasures). To address the
limitations and to cope with various security features, a graphical security
model named attack countermeasure tree (ACT) was developed to perform security
analysis by taking into account both attacks and countermeasures. In our
research, we have developed different variants of a hierarchical graphical
security model to solve the complexity, dynamicity, and scalability issues
involved with security models in the security analysis of systems. In this
paper, we summarize and classify security models into the following:
graph-based, tree-based, and hybrid security models. We discuss the development
of a hierarchical attack representation model (HARM) and different variants of
the HARM, its applications, and usability in a variety of domains including the
Internet of Things (IoT), Cloud, Software-Defined Networking, and Moving Target
Defenses. We provide the classification of the security metrics, including
their discussions. Finally, we highlight existing problems and suggest future
research directions in the area of graphical security models and applications.
As a result of this work, a decision-maker can understand which type of HARM
will suit their network or security analysis requirements.
|
For any positive regularity parameter $\beta < \frac 12$, we construct
non-conservative weak solutions of the 3D incompressible Euler equations which
lie in $H^{\beta}$ uniformly in time. In particular, we construct solutions
which have an $L^2$-based regularity index \emph{strictly larger} than $\frac
13$, thus deviating from the $H^{\frac{1}{3}}$-regularity corresponding to the
Kolmogorov-Obukhov $\frac 53$ power spectrum in the inertial range.
|
Using spin-assisted ab-initio random structure searches, we explore an
exhaustive quantum phase diagram of archetypal interfaced Mott insulators, i.e.
lanthanum-iron and lanthanum-titanium oxides. In particular, we report that the
charge transfer induced by the interfacial electronic reconstruction stabilises
a high spin ferrous Fe2+ state. We provide a pathway to control the strength of
correlation in this electronic state by tuning the epitaxial strain, yielding a
manifold of quantum electronic phases, i.e. Mott-Hubbard, charge transfer and
Slater insulating states. Furthermore we report that the electronic
correlations are closely related to the structural oxygen octahedral rotations,
whose control is able to stabilise the low spin state of Fe2+ at low pressure
previously observed only under the extreme high pressure conditions in the
Earth's lower mantle. Thus we provide avenues for magnetic switching via THz
radiation, which has crucial implications for the next generation of spintronic
technologies.
|
The unprecedented worldwide spread of coronavirus disease has significantly
sped up the development of technology-based solutions to prevent, combat,
monitor, or predict pandemics and/or their evolution. The omnipresence of smart
Internet-of-Things (IoT) devices can play a predominant role in designing
advanced techniques that help minimize the risk of contamination. In this
paper, we propose a practical framework that uses the Social IoT (SIoT) concept
to help pedestrians safely navigate through a real-world map of a smart city.
The objective is to mitigate the risks of exposure to the virus in highly dense
areas where social distancing might not be well practiced. The proposed routing
approach recommends pedestrians' routes in real time while considering
other devices' mobility. First, the IoT devices are clustered into communities
according to two SIoT relations that consider the devices' locations and the
friendship levels among their owners. Accordingly, the city map roads are
assigned weights representing their safety levels. Afterward, a navigation
algorithm, namely the Dijkstra algorithm, is applied to recommend the safest
route to follow. Simulation results applied on a real-world IoT data set have
shown the ability of the proposed approach in achieving trade-offs between both
safest and shortest paths according to the pedestrian preference.
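To make the routing step concrete, the sketch below runs Dijkstra's algorithm on a toy road graph whose edge weights mix length with an assumed per-road risk score; the weighting scheme and graph are illustrative placeholders rather than the paper's SIoT-derived safety levels.

```python
import heapq

def safest_route(graph, source, target, risk_weight=2.0):
    """Dijkstra over edges weighted by length + risk_weight * risk.

    graph[u] is a list of (v, length, risk) tuples; a larger risk_weight trades
    extra walking distance for safer (less crowded) roads.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, length, risk in graph.get(u, []):
            nd = d + length + risk_weight * risk
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:          # reconstruct the recommended route
        path.append(node)
        node = prev[node]
    return [source] + path[::-1]

# Toy map: the road through B is shorter but riskier than going through C.
graph = {
    "A": [("B", 1.0, 0.9), ("C", 1.5, 0.1)],
    "B": [("D", 1.0, 0.9)],
    "C": [("D", 1.5, 0.1)],
}
print(safest_route(graph, "A", "D"))   # ['A', 'C', 'D'] once risk is penalized
```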
|