Conformance checking techniques aim to collate observed process behavior with
normative/modeled process models. The majority of existing approaches focuses
on completed process executions, i.e., offline conformance checking. Recently,
novel approaches have been designed to monitor ongoing processes, i.e., online
conformance checking. Such techniques detect deviations of an ongoing process
execution from a normative process model at the moment they occur. Thereby,
countermeasures can be taken immediately to prevent a process deviation from
causing further, undesired consequences. However, most online approaches can
only detect approximations of deviations. This leads to falsely detected
deviations, i.e., detected deviations that are not actual deviations.
We have, therefore, recently introduced a novel approach to compute exact
conformance checking results in an online environment. In this paper, we focus
on the practical application and present a scalable, distributed implementation
of the proposed online conformance checking approach. Moreover, we present two
extensions to said approach that reduce its computational effort and improve
its practical applicability. We evaluate our implementation using data sets
capturing the execution of real processes.
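The difference between approximate and exact online checks can be illustrated with a toy exact prefix check against a model automaton; the model, state names, and the check itself are illustrative sketches, not the distributed algorithm of the paper:

```python
def is_conformant_prefix(transitions, start, trace):
    """Toy exact online conformance check: does the observed event prefix
    follow the model? transitions maps (state, activity) -> next state.
    Illustrative only; real approaches work on richer process models."""
    state = start
    for event in trace:
        key = (state, event)
        if key not in transitions:
            return False  # deviation detected at the moment it occurs
        state = transitions[key]
    return True

# Hypothetical model accepting the sequence a -> b -> c.
model = {("s0", "a"): "s1", ("s1", "b"): "s2", ("s2", "c"): "s3"}
print(is_conformant_prefix(model, "s0", ["a", "b"]),   # True: valid prefix
      is_conformant_prefix(model, "s0", ["a", "c"]))   # False: deviation
```

An exact check like this never reports a false deviation, at the price of tracking the model state for every ongoing case, which is what motivates a scalable, distributed implementation.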
|
Young stellar objects are observed to have large X-ray fluxes and are thought
to produce commensurate luminosities in energetic particles (cosmic rays). This
particle radiation, in turn, can synthesize short-lived radioactive nuclei
through spallation. With a focus on $^{26}$Al, this paper estimates the
expected abundances of radioactive nuclei produced by spallation during the
epoch of planet formation. In this model, cosmic rays are accelerated near the
inner truncation radii of circumstellar disks, $r_{\scriptstyle X}\approx0.1$
AU, where intense magnetic activity takes place. For planets forming in this
region, radioactive abundances can be enhanced over the values inferred for the
early solar system (from meteoritic measurements) by factors of $\sim10-20$.
These short-lived radioactive nuclei influence the process of planet formation
and the properties of planets in several ways. The minimum size required for
planetesimals to become fully molten decreases with increasing levels of
radioactive enrichment, and such melting leads to loss of volatile components
including water. Planets produced with an enhanced radioactive inventory have
significant internal luminosity which can be comparable to that provided by the
host star; this additional heating affects both atmospheric mass loss and
chemical composition. Finally, the habitable zone of red dwarf stars is
coincident with the magnetic reconnection region, so that planets forming at
those locations will experience maximum exposure to particle radiation, and
subsequent depletion of volatiles.
|
Using the recently developed time-dependent Landauer-B\"uttiker formalism and
Jefimenko's retarded solutions to the Maxwell equations, we show how to compute
the time-dependent electromagnetic field produced by the charge and current
densities in nanojunctions out of equilibrium. We then apply this formalism to
a benzene ring junction, and show that geometry-dependent quantum interference
effects can be used to control the magnetic field in the vicinity of the
molecule. Then, treating the molecular junction as a quantum emitter, we
demonstrate clear signatures of the local molecular geometry in the non-local
radiated power.
|
Active galactic nuclei (AGNs) are ideal targets for time-domain
investigations, since they are luminous objects that show strong variability.
A key result from studies of AGN variability is the estimated mass of the
supermassive black hole (SMBH) residing in the center of an AGN. Moreover, the
spectral variability of AGNs can be used to study the structure and physics of
the broad line region, which in general can hardly be observed directly. Here
we review the current status of AGN variability investigations in Serbia, with
a view to present and future monitoring campaigns.
|
In this article, we investigate the problem of state reconstruction of
four-level quantum systems. A realistic scenario is considered with measurement
results distorted by random unitary operators. Two frames which define
injective measurements are applied and compared. By introducing arbitrary
rotations, we can test the performance of the framework versus the amount of
experimental noise. The results of numerical simulations are depicted on graphs
and discussed. In particular, a class of entangled states is reconstructed. The
concurrence is used as a figure of merit in order to quantify how well
entanglement is preserved through noisy measurements.
|
A data representation for system behavior telemetry for scalable big data
security analytics is presented, affording telemetry consumers comprehensive
visibility into workloads at reduced storage and processing overheads. The new
abstraction, SysFlow, is a compact open data format that lifts the
representation of system activities into a flow-centric, object-relational
mapping that records how applications interact with their environment, relating
processes to file accesses, network activities, and runtime information. The
telemetry format supports single-event and volumetric flow representations of
process control flows, file interactions, and network communications.
Evaluation on enterprise-grade benchmarks shows that SysFlow facilitates deeper
introspection into attack kill chains while yielding traces orders of magnitude
smaller than current state-of-the-art system telemetry approaches --
drastically reducing storage requirements and enabling feature-filled system
analytics, process-level provenance tracking, and long-term data archival for
cyber threat discovery and forensic analysis on historical data.
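The flow-centric idea can be illustrated with a simplified record type that aggregates many raw events between one process and one object; the field names below are hypothetical and do not reproduce the actual SysFlow schema:

```python
from dataclasses import dataclass

# Hypothetical, simplified sketch of a flow-centric telemetry record in the
# spirit of SysFlow; field names are illustrative, not the real open format.

@dataclass
class ProcessRef:
    pid: int
    exe: str

@dataclass
class FileFlow:
    """Aggregates many file events between one process and one file."""
    proc: ProcessRef
    path: str
    ops: str            # union of observed operations, e.g. "R" or "RW"
    num_reads: int = 0
    bytes_read: int = 0

    def record_read(self, n: int) -> None:
        self.num_reads += 1
        self.bytes_read += n

# A thousand raw read() events collapse into a single volumetric flow record,
# which is why flow-centric traces are far smaller than per-event logs.
flow = FileFlow(ProcessRef(4321, "/usr/bin/tar"), "/etc/passwd", "R")
for _ in range(1000):
    flow.record_read(4096)
print(flow.num_reads, flow.bytes_read)  # 1000 4096000
```

One record still preserves the process-to-object relation needed for provenance tracking, while the per-event detail is summarized volumetrically.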
|
We introduce a novel class of time integrators for dispersive equations which
allow us to reproduce the dynamics of the solution from the classical $
\varepsilon = 1$ up to long wave limit regime $ \varepsilon \ll 1 $ on the
natural time scale of the PDE $t= \mathcal{O}(\frac{1}{\varepsilon})$. Most
notably our new schemes converge with rates at order $\tau \varepsilon$ over
long times $t= \frac{1}{\varepsilon}$.
|
The goal of this paper is an analysis of the geometry of billiards in
ellipses, based on properties of confocal central conics. The extended sides of
the billiards meet at points which are located on confocal ellipses and
hyperbolas. They define the associated Poncelet grid. If a billiard is periodic
then it closes for any choice of the initial vertex on the ellipse. This gives
rise to a continuous variation of billiards, which is called the billiard's
motion, though it is neither a Euclidean nor a projective motion. The extension of this
motion to the associated Poncelet grid leads to new insights and invariants.
|
This paper corrects the characterisation of biautomatic groups presented in
Lemma 2.5.5 in the book Word Processing in Groups by Epstein et al. We present
a counterexample to the lemma, and we reformulate the lemma to give a valid
characterisation of biautomatic groups.
|
Data from the space missions {\it Gaia}, {\it Kepler}, {\it CoRoT} and {\it
TESS}, make it possible to compare parallax and asteroseismic distances. From
the ratio of two densities $\rho_{\rm sca}/\rho_{\pi}$, we obtain an empirical
relation $f_{\Delta \nu}$ between the asteroseismic large frequency separation
and mean density, which is important for more accurate stellar mass and radius
determinations.
This expression for main-sequence (MS) and subgiant stars with $K$-band
magnitude is very close to the one obtained from interior MS models by
Y{\i}ld{\i}z, \c{C}elik \& Kayhan. We also discuss the effects of effective
temperature and parallax offset as the source of the difference between
asteroseismic and non-asteroseismic stellar parameters. We have obtained our
best results for about 3500 red giants (RGs) by using 2MASS data and model
values for $f_{\Delta \nu}$ from Sharma et al. Another unknown scaling
parameter $f_{\nu_{\rm max}}$ comes from the relationship between the frequency
of maximum amplitude and gravity. Using different combinations of $f_{\nu_{\rm
max}}$ and the parallax offset, we find that the parallax offset is generally a
function of distance. The case in which this slope vanishes is adopted as the
most reasonable solution. By a very careful comparison of asteroseismic and
non-asteroseismic parameters, we obtain very precise values for the parallax
offset and $f_{\nu_{\rm max}}$ for RGs of $-0.0463\pm0.0007$ mas and
$1.003\pm0.001$, respectively. Our results for mass and radius are in perfect
agreement with those of APOKASC-2: the mass and radius of $\sim$3500 RGs are in
the range of about 0.8-1.8 M$_{\odot}$ (96 per cent) and 3.8-38 R$_{\odot}$,
respectively.
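The correction factors $f_{\Delta \nu}$ and $f_{\nu_{\rm max}}$ enter the standard asteroseismic scaling relations for mass and radius; a minimal sketch, assuming typical solar reference values:

```python
# Standard asteroseismic scaling relations with the correction factors
# f_dnu and f_numax discussed in the abstract. The solar reference values
# (numax in uHz, dnu in uHz, Teff in K) are assumed typical ones.
NUMAX_SUN, DNU_SUN, TEFF_SUN = 3090.0, 135.1, 5777.0

def scaling_mass_radius(numax, dnu, teff, f_dnu=1.0, f_numax=1.0):
    """Return (M/Msun, R/Rsun) from the corrected scaling relations:
    dnu ~ f_dnu * sqrt(rho),  numax ~ f_numax * g / sqrt(Teff)."""
    x = numax / (f_numax * NUMAX_SUN)
    y = dnu / (f_dnu * DNU_SUN)
    t = teff / TEFF_SUN
    radius = x * y**-2 * t**0.5
    mass = x**3 * y**-4 * t**1.5
    return mass, radius

# With solar inputs and f-factors of unity the relations return (1, 1).
print(scaling_mass_radius(3090.0, 135.1, 5777.0))
```

Small changes in $f_{\nu_{\rm max}}$ propagate cubically into the mass, which is why a value constrained to the permille level, as in the abstract, matters.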
|
In this paper, we investigate the problem of prescribing Webster scalar
curvatures on compact pseudo-Hermitian manifolds. In terms of the method of
upper and lower solutions and the perturbation theory of self-adjoint
operators, we can describe some sets of Webster scalar curvature functions
which can be realized through pointwise CR conformal deformations and CR
conformally equivalent deformations respectively from a given pseudo-Hermitian
structure.
|
We study normal directions to facets of the Newton polytope of the
discriminant of the Laurent polynomial system via the tropical approach. We use
the combinatorial construction proposed by Dickenstein, Feichtner and Sturmfels
for the tropicalization of algebraic varieties admitting a parametrization by a
linear map followed by a monomial map.
|
While polarisation sensing is vital in many areas of research, with
applications spanning from microscopy to aerospace, traditional approaches are
limited by method-related error amplification or accumulation, placing
fundamental limitations on precision and accuracy in single-shot polarimetry.
Here, we put forward a new measurement paradigm to circumvent this, introducing
the notion of a universal full Poincar\'e generator to map all polarisation
analyser states into a single vectorially structured light field, allowing all
vector components to be analysed in a single-shot with theoretically
user-defined precision. To demonstrate the advantage of our approach, we use a
common GRIN optic as our mapping device and show mean errors of <1% for each
vector component, enhancing the sensitivity by around three times, allowing us
to sense weak polarisation aberrations not measurable by traditional
single-shot techniques. Our work paves the way for next-generation polarimetry,
impacting a wide variety of applications relying on weak vector measurement.
|
This paper establishes sufficient conditions that force a graph to contain a
bipartite subgraph with a given structural property. In particular, let $\beta$
be any of the following graph parameters: Hadwiger number, Haj\'{o}s number,
treewidth, pathwidth, and treedepth. In each case, we show that there exists a
function $f$ such that every graph $G$ with $\beta(G)\geq f(k)$ contains a
bipartite subgraph $\hat{G}\subseteq G$ with $\beta(\hat{G})\geq k$.
|
We present MAPFF1.0, a determination of unpolarised charged-pion
fragmentation functions (FFs) from a set of single-inclusive $e^+e^-$
annihilation and lepton-nucleon semi-inclusive deep-inelastic-scattering
(SIDIS) data. FFs are parametrised in terms of a neural network (NN) and fitted
to data exploiting the knowledge of the analytic derivative of the NN itself
w.r.t. its free parameters. Uncertainties on the FFs are determined by means of
the Monte Carlo sampling method properly accounting for all sources of
experimental uncertainties, including that of parton distribution functions.
Theoretical predictions for the relevant observables, as well as evolution
effects, are computed to next-to-leading order (NLO) accuracy in perturbative
QCD. We exploit the flavour sensitivity of the SIDIS measurements delivered by
the HERMES and COMPASS experiments to determine a minimally-biased set of seven
independent FF combinations. Moreover, we discuss the quality of the fit to the
SIDIS data with low virtuality $Q^2$ showing that, as expected, low-$Q^2$ SIDIS
measurements are generally harder to describe within a NLO-accurate
perturbative framework.
|
Let $G$ be a $2$-generated group. The generating graph $\Gamma(G)$ is the
graph whose vertices are the elements of $G$ and where two vertices $g_1$ and
$g_2$ are adjacent if $G = \langle g_1, g_2 \rangle.$ This graph encodes the
combinatorial structure of the distribution of generating pairs across $G.$ In
this paper we study some graph theoretic properties of $\Gamma(G)$, with
particular emphasis on those properties that can be formulated in terms of
forbidden induced subgraphs. In particular we investigate when the generating
graph $\Gamma(G)$ is a cograph (giving a complete description when $G$ is
soluble) and when it is perfect (giving a complete description when $G$ is
nilpotent and proving, among others, that $\Gamma(S_n)$ and $\Gamma(A_n)$
are perfect if and only if $n\leq 4$). Finally we prove that for a finite group
$G$, the properties that $\Gamma(G)$ is split, chordal or $C_4$-free are
equivalent.
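For a cyclic group $\mathbb{Z}_n$, two elements generate the group exactly when $\gcd(g_1, g_2, n) = 1$, so the generating graph and a forbidden-induced-subgraph property can be checked by brute force; a small sketch (not the paper's methods):

```python
from itertools import combinations, permutations
from math import gcd

def generating_graph(n):
    """Edge set of the generating graph of the cyclic group Z_n:
    g1 ~ g2 iff <g1, g2> = Z_n, i.e. gcd(g1, g2, n) = 1."""
    return {frozenset((a, b)) for a, b in combinations(range(n), 2)
            if gcd(gcd(a, b), n) == 1}

def is_cograph(n, edges):
    """Brute-force cograph test: no induced path on four vertices (P4)."""
    adj = lambda u, v: frozenset((u, v)) in edges
    for quad in combinations(range(n), 4):
        for a, b, c, d in permutations(quad):
            if (adj(a, b) and adj(b, c) and adj(c, d)
                    and not adj(a, c) and not adj(b, d) and not adj(a, d)):
                return False
    return True

edges = generating_graph(6)
# Z_6 is soluble (indeed nilpotent); its generating graph is P4-free.
print(len(edges), is_cograph(6, edges))  # 11 True
```

Brute force only scales to tiny groups, but it makes the forbidden-induced-subgraph formulations in the abstract concrete.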
|
In this work, we develop a systematic method of constructing flat-band models
with and without band crossings. Our construction scheme utilizes the symmetry
and spatial shape of a compact localized state (CLS) and also the singularity
of the flat-band wave function obtained by a Fourier transform of the CLS
(FT-CLS). In order to construct a flat-band model systematically using these
ingredients, we first choose a CLS with a specific symmetry representation in a
given lattice. Then, the singularity of FT-CLS indicates whether the resulting
flat band exhibits a band crossing point or not. A tight-binding Hamiltonian
with the flat band corresponding to the FT-CLS is obtained by introducing a set
of basis molecular orbitals, which are orthogonal to the FT-CLS. Our
construction scheme can be systematically applied to any lattice so that it
provides a powerful theoretical framework to study exotic properties of both
gapped and gapless flat bands arising from their wave function singularities.
|
One important feature of sunspots is the presence of light bridges. These
structures are elongated, bright (compared with the umbra) features that
seem to be related to the formation and evolution of sunspots. In this work, we
studied the long-term evolution and the stratification of different atmospheric
parameters of three light bridges formed in the same host sunspot by different
mechanisms. To accomplish this, we used data taken with the GREGOR Infrared
Spectrograph installed at the GREGOR telescope. These data were inverted to
infer the physical parameters of the atmosphere in which the observed spectral
profiles of the three light bridges were formed. We find that, in general, the
behaviour of the three light bridges is typical of this kind of structure with
the magnetic field strength, inclination, and temperature values between the
values at the umbra and the penumbra. We also find that they are of a
significantly non-magnetic character (particularly at the axis of the light
bridges), as deduced from the filling factor. In addition, within the
common behaviour of the physical properties of light bridges, we observe that
each one exhibits a particular behaviour. Another interesting result is that,
higher in the atmosphere, the light bridges cool down, the magnetic field
weakens, and the field lines become more inclined. Finally, we studied the
magnetic and non-magnetic line-of-sight velocities of the light bridges. The
former shows that the magnetic component is at rest and, interestingly, its
variation with optical depth shows a bi-modal behaviour. For the line-of-sight
velocity of the non-magnetic component, we see that the core of the light
bridge is at rest or shows shallow upflows, with clear downflows sinking
through the edges.
|
A vast concourse of events and phenomena occurring in nature may be
interrelated by an entropy-maximization technique that provides a
comprehensible explanation of a range of physical problems, integrating in a
new framework the universal tendency of energy to a minimum and entropy to a
maximum. The outcome is a modification of Newton's dynamical equation of
motion, grounding the principles of mechanics on the concepts of energy and
entropy instead of on the usual definition of force, and integrating into a
consistent framework the description of translational and vortical motion. The
new method offers a fresh
approach to traditional problems and can be applied with advantage in the
solution of variational problems.
|
The generalized Langevin equation (GLE) overcomes the limiting Markov
approximation of the Langevin equation by an incorporated memory kernel and can
be used to model various stochastic processes in many fields of science ranging
from climate modeling over neuroscience to finance. Generally, Bayesian
estimation facilitates the determination of both suitable model parameters and
their credibility for a measured time series in a straightforward way. In this
work we develop a realization of this estimation technique for the GLE in the
case of white noise. We assume piecewise constant drift and diffusion functions
and represent the characteristics of the data set by only a few coefficients,
which leads to a numerically efficient procedure. The kernel function is an
arbitrary time-discrete function with a fixed length $K$. We show how to
determine a reasonable value of $K$ based on the data. We illustrate the
abilities of both the method and the model by an example from turbulence.
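The time-discrete memory kernel of fixed length $K$ can be illustrated with a simple Euler-type simulation; the drift, noise strength, and kernel values below are made-up examples, not the estimation procedure of the paper:

```python
import random

def simulate_gle(n_steps, kernel, drift, sigma, dt=0.01, seed=1):
    """Euler-type simulation of a time-discrete generalized Langevin equation:
    v[t+1] = v[t] + (drift(v[t]) - sum_k kernel[k] * v[t-k]) * dt + sigma * dW.
    The kernel is an arbitrary discrete function of fixed length K, as in the
    abstract; all specific functional forms here are illustrative only."""
    rng = random.Random(seed)
    K = len(kernel)
    v = [0.0]
    for t in range(n_steps):
        memory = sum(kernel[k] * v[t - k] for k in range(min(K, t + 1)))
        noise = rng.gauss(0.0, 1.0) * (dt ** 0.5)
        v.append(v[t] + (drift(v[t]) - memory) * dt + sigma * noise)
    return v

# Example: linear drift and an exponentially decaying kernel of length K = 5.
traj = simulate_gle(1000, kernel=[2.0 * 0.5**k for k in range(5)],
                    drift=lambda v: -v, sigma=0.3)
print(len(traj))  # 1001
```

The memory sum is what breaks the Markov property: each increment depends on the last $K$ states rather than on the current one alone.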
|
Complex multivariate time series arise in many fields, ranging from computer
vision to robotics or medicine. Often we are interested in the independent
underlying factors that give rise to the high-dimensional data we are
observing. While many models have been introduced to learn such disentangled
representations, only few attempt to explicitly exploit the structure of
sequential data. We investigate the disentanglement properties of Gaussian
process variational autoencoders, a class of models recently introduced that
have been successful in different tasks on time series data. Our model exploits
the temporal structure of the data by modeling each latent channel with a GP
prior and employing a structured variational distribution that can capture
dependencies in time. We demonstrate the competitiveness of our approach
against state-of-the-art unsupervised and weakly-supervised disentanglement
methods on a benchmark task. Moreover, we provide evidence that we can learn
meaningful disentangled representations on real-world medical time series data.
|
In this paper we derive some Edmundson-Lah-Ribari\v{c} type inequalities for
positive linear functionals and 3-convex functions. The main results are
applied to the generalized f-divergence functional. Examples with the
Zipf-Mandelbrot law are used to illustrate the results. In addition, the
obtained results are utilized in
constructing some families of exponentially convex functions and Stolarsky-type
means.
|
We provide the rigorous derivation of the wave kinetic equation from the
cubic nonlinear Schr\"odinger (NLS) equation at the kinetic timescale, under a
particular scaling law that describes the limiting process. This solves a main
conjecture in the theory of wave turbulence, i.e. the kinetic theory of
nonlinear wave systems. Our result is the wave analog of Lanford's theorem on
the derivation of the Boltzmann kinetic equation from particle systems, where
in both cases one takes the thermodynamic limit as the size of the system
diverges to infinity, and as the interaction strength of waves or radius of
particles vanishes to $0$, according to a particular scaling law
(Boltzmann-Grad in the particle case).
More precisely, in dimensions $d\geq 3$, we consider the (NLS) equation in a
large box of size $L$ with a weak nonlinearity of strength $\alpha$. In the
limit $L\to\infty$ and $\alpha\to 0$, under the scaling law $\alpha\sim
L^{-1}$, we show that the long-time behavior of (NLS) is statistically
described by the wave kinetic equation, with well justified approximation, up
to times that are $O(1)$ (i.e., independent of $L$ and $\alpha$) multiples of the
kinetic timescale $T_{\text{kin}}\sim \alpha^{-2}$. This is the first result of
its kind for any nonlinear dispersive system.
|
Fixed points in three dimensions described by conformal field theories with
$MN_{m,n}= O(m)^n\rtimes S_n$ global symmetry have extensive applications in
critical phenomena. Associated experimental data for $m=n=2$ suggest the
existence of two non-trivial fixed points, while the $\varepsilon$ expansion
predicts only one, resulting in a puzzling state of affairs. A recent numerical
conformal bootstrap study has found two kinks for small values of the
parameters $m$ and $n$, with critical exponents in good agreement with
experimental determinations in the $m=n=2$ case. In this paper we investigate
the fate of the corresponding fixed points as we vary the parameters $m$ and
$n$. We find that one family of kinks approaches a perturbative limit as $m$
increases, and using large spin perturbation theory we construct a large $m$
expansion that fits well with the numerical data. This new expansion, akin to
the large $N$ expansion of critical $O(N)$ models, is compatible with the fixed
point found in the $\varepsilon$ expansion. For the other family of kinks, we
find that it persists only for $n=2$, where for large $m$ it approaches a
non-perturbative limit with $\Delta_\phi\approx 0.75$. We investigate the
spectrum in the case $MN_{100,2}$ and find consistency with expectations from
the lightcone bootstrap.
|
A major challenge in applying machine learning to automated theorem proving
is the scarcity of training data, which is a key ingredient in training
successful deep learning models. To tackle this problem, we propose an approach
that relies on training purely with synthetically generated theorems, without
any human data aside from axioms. We use these theorems to train a
neurally-guided saturation-based prover. Our neural prover outperforms the
state-of-the-art E-prover on this synthetic data in both time and search steps,
and shows significant transfer to the unseen human-written theorems from the
TPTP library, where it solves 72\% of first-order problems without equality.
|
Graph embeddings are low dimensional representations of nodes, edges or whole
graphs. Such representations allow for data in a network format to be used
along with machine learning models for a variety of tasks (e.g., node
classification), where using a similarity matrix would be impractical. In
recent years, many methods for graph embedding generation have been created
based on the idea of random walks. We propose MultiWalk, a framework that uses
an ensemble of these methods to generate the embeddings. Our experiments show
that the proposed framework, using an ensemble composed of two state-of-the-art
methods, can generate embeddings that perform better in classification tasks
than each method in isolation.
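The shared first stage of walk-based embedding methods, and the ensemble idea, can be sketched as follows (all names and the concatenation scheme are illustrative assumptions, not MultiWalk's actual implementation):

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=0):
    """Generate uniform random walks, the common first stage of DeepWalk-style
    embedding methods; adj maps each node to its list of neighbors."""
    rng = random.Random(seed)
    walks = []
    for node in adj:
        for _ in range(walks_per_node):
            walk = [node]
            while len(walk) < walk_len and adj[walk[-1]]:
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks

def ensemble(emb_a, emb_b):
    """Schematic ensemble: concatenate per-node embeddings produced by two
    different walk-based methods into one longer vector per node."""
    return {n: emb_a[n] + emb_b[n] for n in emb_a}

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(adj)
print(len(walks), all(len(w) == 5 for w in walks))  # 8 True
```

In a full pipeline the walks would be fed to a skip-gram-style model per method, and the downstream classifier would consume the concatenated vectors.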
|
The Muon g-2 experiment at FERMILAB has confirmed the muon anomalous magnetic
moment anomaly with an error bar 15% smaller and a different central value
compared with the previous Brookhaven result. The combined results from
FERMILAB and Brookhaven show a difference with theory at a significance of
$4.2\sigma$, strongly indicating the presence of new physics. In light of this
new result, we discuss a Two Higgs Doublet model augmented by an Abelian gauge
symmetry that can simultaneously accommodate a light dark matter candidate and
$(g-2)_\mu$, in agreement with existing bounds.
|
We propose and demonstrate a scalable scheme for the simultaneous
determination of internal and motional states in trapped ions with single-site
resolution. The scheme is applied to the study of polaritonic excitations in
the Jaynes-Cummings Hubbard model with trapped ions, in which the internal and
motional states of the ions are strongly correlated. We observe quantum phase
transitions of polaritonic excitations in two ions by directly evaluating their
variances per ion site. Our work establishes an essential technological method
for large-scale quantum simulations of polaritonic systems.
|
We revisit the renormalizable polynomial inflection point model of inflation,
focusing on the small field scenario which can be treated fully analytically.
In particular, the running of the spectral index is predicted to be $\alpha =
-1.43 \times 10^{-3} +5.56 \times 10^{-5} \left(N_{\rm CMB}-65 \right)$, which
might be tested in future. We also analyze reheating through perturbative
inflaton decays to either fermionic or bosonic final states via a trilinear
coupling. The lower bound on the reheating temperature from successful Big Bang
nucleosynthesis gives lower bounds for these couplings; on the other hand
radiative stability of the inflaton potential leads to upper bounds. In
combination this leads to a lower bound on the location $\phi_0$ of the near
inflection point, $\phi_0 > 3 \cdot 10^{-5}$ in Planckian units. The Hubble
parameter during inflation can be as low as $H_{\rm inf} \sim 1$ MeV, or as
high as $\sim 10^{10}$ GeV. Similarly, the reheating temperature can lie
between its lower bound of $\sim 4$ MeV and about $4 \cdot 10^8 \ (10^{11})$
GeV for fermionic (bosonic) inflaton decays. We finally speculate on the
"prehistory" of the universe in this scenario, which might have included an
epoch of eternal inflation.
|
The dynamical history of stars influences the formation and evolution of
planets significantly. To explore the influence of dynamical history on planet
formation and evolution from observations, we assume that stars who experienced
significantly different dynamical histories tend to have different relative
velocities. Utilizing the accurate Gaia-Kepler Stellar Properties Catalog, we
select single main-sequence stars and divide these stars into three groups
according to their relative velocities, i.e. high-V, medium-V, and low-V stars.
After considering the known biases from Kepler data and adopting prior and
posterior correction to minimize the influence of stellar properties on planet
occurrence rate, we find that high-V stars have a lower occurrence rate of
super-Earths and sub-Neptunes (1--4 R$_{\rm \oplus}$, P<100 days) and a higher
occurrence rate of sub-Earths (0.5--1 R$_{\oplus}$, P<30 days) than low-V
stars. Additionally, high-V stars have a lower occurrence rate of hot Jupiter
sized planets (4--20 R$_{\oplus}$, P<10 days) and a slightly higher occurrence
rate of warm or cold Jupiter sized planets (4--20 R$_{\oplus}$, 10<P<400 days).
After investigating the multiplicity and eccentricity, we find that high-V
planet hosts show a higher fraction of multi-planet systems and a lower
average eccentricity, which is consistent with the eccentricity-multiplicity
dichotomy of Kepler planetary systems. All these statistical results favor the
scenario that the high-V stars with large relative velocity may experience
fewer gravitational events, while the low-V stars may be influenced by stellar
clustering significantly.
|
Determining the mechanism by which high-mass stars are formed is essential
for our understanding of the energy budget and chemical evolution of galaxies.
By using the New IRAM KIDs Array 2 (NIKA2) camera on the Institut de Radio
Astronomie Millim\'etrique (IRAM) 30-m telescope, we have conducted
high-sensitivity and large-scale mapping of a fraction of the Galactic plane in
order to search for signatures of the transition between the high- and low-mass
star-forming modes. Here, we present the first results from the Galactic Star
Formation with NIKA2 (GASTON) project, a Large Programme at the IRAM 30-m
telescope which is mapping $\approx$2 deg$^2$ of the inner Galactic plane (GP),
centred on $\ell$=23.9$^\circ$, $b$=0.05$^\circ$, as well as targets in Taurus
and Ophiuchus in 1.15 and 2.00 mm continuum wavebands. In this paper we
describe the first of the GASTON GP data taken and present initial science
results. We
conduct an extraction of structures from the 1.15 mm maps using a dendrogram
analysis and, by comparison to the compact source catalogues from Herschel
survey data, we identify a population of 321 previously-undetected clumps.
Approximately 80 per cent of these new clumps are 70 $\mu$m-quiet, and may be
considered starless candidates. We find that this new population of clumps
is less massive and cooler, on average, than clumps that have already been
identified. Further, by classifying the full sample of clumps based upon their
infrared-bright fraction - an indicator of evolutionary stage - we find
evidence for clump mass growth, supporting models of clump-fed high-mass star
formation.
|
Intersection patterns of convex sets in $\mathbb{R}^d$ have the remarkable
property that for $d+1 \le k \le \ell$, in any sufficiently large family of
convex sets in $\mathbb{R}^d$, if a constant fraction of the $k$-element
subfamilies have nonempty intersection, then a constant fraction of the
$\ell$-element subfamilies must also have nonempty intersection. Here, we prove
that a similar phenomenon holds for any topological set system $\mathcal{F}$ in
$\mathbb{R}^d$. Quantitatively, our bounds depend on how complicated the
intersection of $\ell$ elements of $\mathcal{F}$ can be, as measured by the sum
of the first $\lceil\frac{d}{2}\rceil$ Betti numbers. As an application, we
improve the fractional Helly number of set systems with bounded topological
complexity due to the third author, from a Ramsey number down to $d+1$. We also
shed some light on a conjecture of Kalai and Meshulam on intersection patterns
of sets with bounded homological VC dimension. A key ingredient in our proof is
the use of the stair convexity of Bukh, Matou\v{s}ek and Nivash to recast a
simplicial complex as a homological minor of a cubical complex.
|
In this book chapter, we study a problem of distributed content caching in an
ultra-dense edge caching network (UDCN), in which a large number of small base
stations (SBSs) prefetch popular files to cope with the ever-growing user
demand in 5G and beyond. In a UDCN, even a small misprediction of user demand
may render a large amount of prefetched data obsolete. Furthermore, the
interference variance is high due to the short inter-SBS distances, making it
difficult to quantify data downloading rates. Lastly, since the caching
decision of each SBS interacts with those of all other SBSs, the problem
complexity increases exponentially with the number of SBSs, which is
impractical for UDCNs. To resolve such challenging issues while reflecting
time-varying and
location-dependent user demand, we leverage mean-field game (MFG) theory
through which each SBS interacts only with a single virtual SBS whose state is
drawn from the state distribution of the entire SBS population, i.e.,
mean-field (MF) distribution. This MF approximation asymptotically guarantees
achieving an epsilon-Nash equilibrium as the number of SBSs approaches
infinity. To describe such an MFG-theoretic caching framework, this chapter
aims to provide a brief review of MFG, and demonstrate its effectiveness for
UDCNs.
|
Information overload is a prevalent challenge in many high-value domains. A
prominent case in point is the explosion of the biomedical literature on
COVID-19, which swelled to hundreds of thousands of papers in a matter of
months. In general, biomedical literature expands by two papers every minute,
totalling over a million new papers every year. Search in the biomedical realm,
and many other vertical domains is challenging due to the scarcity of direct
supervision from click logs. Self-supervised learning has emerged as a
promising direction to overcome the annotation bottleneck. We propose a general
approach for vertical search based on domain-specific pretraining and present a
case study for the biomedical domain. Despite being substantially simpler and
not using any relevance labels for training or development, our method performs
comparably or better than the best systems in the official TREC-COVID
evaluation, a COVID-related biomedical search competition. Using distributed
computing in modern cloud infrastructure, our system can scale to tens of
millions of articles on PubMed and has been deployed as Microsoft Biomedical
Search, a new search experience for biomedical literature:
https://aka.ms/biomedsearch.
|
We analyze the thermodynamic Casimir effect occurring in a gas of
non-interacting bosons confined by two parallel walls with a strongly
anisotropic dispersion inherited from an underlying lattice. In the direction
perpendicular to the confining walls the standard quadratic dispersion is
replaced by the term $|{\bf p}|^{\alpha}$ with $\alpha \geq 2$ treated as a
parameter. We derive a closed, analytical expression for the Casimir force
depending on the dimensionality $d$ and the exponent $\alpha$, and analyze it
for thermodynamic states in which the Bose-Einstein condensate is present. For
$\alpha\in\{4,6,8,\dots\}$ the exponent governing the decay of the Casimir
force with increasing distance between the walls becomes modified and the
Casimir amplitude $\Delta_{\alpha}(d)$ exhibits oscillations of sign as a
function of $d$. Otherwise we find that $\Delta_{\alpha}(d)$ features
singularities when viewed as a function of $d$ and $\alpha$. Recovering the
known previous results for the isotropic limit $\alpha=2$ turns out to occur
via a cancellation of singular terms.
|
The classic searches for supersymmetry have not given any strong indication
for new physics. Therefore, CMS is designing dedicated searches to target the
more difficult and specific supersymmetry scenarios. This contribution presents
three such recent searches based on 13 TeV proton-proton collisions recorded
with the CMS detector in 2016, 2017 and 2018: a search for heavy gluinos
cascading via a heavy next-to-lightest neutralino in final states with boosted Z
bosons and missing transverse momentum; a search for compressed supersymmetry
in final states with soft taus; and a search for compressed, long-lived
charginos in hadronic final states with disappearing tracks.
|
More than a decade has passed since the definition of Globular Cluster (GC)
changed, and now we know that they host Multiple Populations (MPs). However, a
few GCs do not share this behaviour, and Ruprecht 106 is one of them. We
analyzed thirteen member red giant branch stars using spectra in the wavelength
range 6120-6405 Angstroms obtained through the GIRAFFE Spectrograph, mounted at
UT2 telescope at Paranal, as well as the whole cluster using C, V, R and I
photometry obtained through the Swope telescope at Las Campanas. Atmospheric
parameters were derived from the photometry and used to determine Fe and Na
abundances. A photometric analysis searching for MPs was also carried out. Both
analyses confirm that Ruprecht 106 is indeed one of the few GCs to host a
Simple Stellar Population, in agreement with previous studies. Finally, a
dynamical study of its orbit was carried out to analyze the possible
extragalactic origin of the cluster. The orbital integration indicates that this GC
belongs to the inner halo, while an Energy plane shows that it cannot be
accurately associated with any known extragalactic progenitor.
|
Phase gradient metagratings/metasurfaces (PGMs) have provided a new paradigm
for light manipulations. In this work, we will show the existence of gauge
invariance in PGMs, i.e., the diffraction law of PGMs is independent of the
choice of initial value of abrupt phase shift that induces the phase gradient.
This gauge invariance ensures that the well-studied ordinary metallic grating
can be regarded as a PGM, whose diffraction properties can be fully predicted
by the generalized diffraction law with phase gradient. The generalized
diffraction law offers a new insight into the famous effects of Wood's
anomalies and the Rayleigh conjecture.
|
Hydraulic blockage of cross-drainage structures such as culverts is
considered one of the main contributors to triggering urban flash floods.
However, due to the lack of data recorded during floods and the highly
non-linear nature of debris interaction, conventional modelling of hydraulic
blockage is not possible.
This paper proposes to use machine learning regression analysis for the
prediction of hydraulic blockage. Relevant data has been collected by
performing a scaled in-lab study and replicating different blockage scenarios.
From the regression analysis, the Artificial Neural Network (ANN) performed
best in hydraulic blockage prediction, with an $R^2$ of 0.89. With the
deployment of hydraulic sensors in smart cities and the availability of Big
Data, regression
analysis may prove helpful in addressing the blockage detection problem which
is difficult to counter using conventional experimental and hydrological
approaches.
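As a toy illustration of the $R^2$ (coefficient of determination) metric used above to rank the regressors, here is a minimal sketch with synthetic stand-in data, not the in-lab measurements from the study:

```python
# Minimal sketch: computing the coefficient of determination (R^2) used to
# rank regression models for hydraulic blockage prediction. The data below
# are synthetic stand-ins, not the scaled in-lab measurements.

def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical observed blockage fractions vs. a model's predictions
y_true = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
y_pred = [0.12, 0.22, 0.43, 0.50, 0.72, 0.88]
print(round(r_squared(y_true, y_pred), 3))
```

A value near 1 indicates the model explains most of the variance in observed blockage, which is the sense in which the ANN's 0.89 was judged best.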
|
Bi-objective search is a well-known algorithmic problem, concerned with
finding a set of optimal solutions in a two-dimensional domain. This problem
has a wide variety of applications such as planning in transport systems or
optimal control in energy systems. Recently, bi-objective A*-based search
(BOA*) has shown state-of-the-art performance in large networks. This paper
develops a bi-directional and parallel variant of BOA*, enriched with several
speed-up heuristics. Our experimental results on 1,000 benchmark cases show
that our bi-directional A* algorithm for bi-objective search (BOBA*) can
optimally solve all of the benchmark cases within the time limit, outperforming
the state-of-the-art BOA*, bi-objective Dijkstra and bi-directional
bi-objective Dijkstra by an average runtime improvement of a factor of five
over all of the benchmark instances.
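The core of bi-objective search is maintaining a set of non-dominated cost pairs per node. A minimal label-setting sketch in Python on an invented toy graph (BOA*'s constant-time dominance checks and BOBA*'s bi-directional and parallel speed-ups are omitted):

```python
import heapq

# Minimal sketch of bi-objective Dijkstra-style search: each node keeps a set
# of non-dominated cost pairs (c1, c2), and labels are expanded in
# lexicographic order. The tiny graph below is illustrative, not a benchmark.

def pareto_costs(graph, source, target):
    frontier = []          # non-dominated (c1, c2) pairs reaching target
    heap = [(0, 0, source)]
    best = {}              # node -> list of non-dominated cost pairs
    while heap:
        c1, c2, node = heapq.heappop(heap)
        if any(b1 <= c1 and b2 <= c2 for b1, b2 in best.get(node, [])):
            continue       # dominated label: prune it
        best.setdefault(node, []).append((c1, c2))
        if node == target:
            frontier.append((c1, c2))
            continue
        for nxt, (w1, w2) in graph.get(node, []):
            heapq.heappush(heap, (c1 + w1, c2 + w2, nxt))
    return frontier

# Two routes from A to D trading off the two criteria (e.g. time vs. cost)
graph = {
    "A": [("B", (1, 4)), ("C", (3, 1))],
    "B": [("D", (1, 4))],
    "C": [("D", (3, 1))],
}
print(sorted(pareto_costs(graph, "A", "D")))   # [(2, 8), (6, 2)]
```

The dominance pruning is what BOA* accelerates with lazy, constant-time checks, and what BOBA* further attacks by searching from both ends in parallel.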
|
For AI technology to fulfill its full promise, we must have effective means
to ensure Responsible AI behavior and curtail potential irresponsible use,
e.g., in areas of privacy protection, human autonomy, robustness, and
prevention of biases and discrimination in automated decision making. Recent
literature in the field has identified serious shortcomings of narrow,
technology-focused and formalism-oriented research and has proposed an
interdisciplinary approach that brings the social context into the scope of
study. In this paper, we take a sociotechnical approach to propose a more
expansive framework for thinking about the Responsible AI challenges in both
technical and social contexts. Effective solutions need to bridge the gap
between a technical system and the social system in which it will be deployed.
To this end, we propose human agency and regulation as main mechanisms of
intervention and propose a decentralized computational infrastructure, or a set
of public utilities, as the computational means to bridge this gap. A
decentralized infrastructure is uniquely suited for meeting this challenge and
enables technical solutions and social institutions, in a mutually reinforcing
dynamic, to achieve Responsible AI goals. Our approach is novel in its
sociotechnical approach and its aim in tackling the structural issues that
cannot be solved within the narrow confines of AI technical research. We then
explore possible features of the proposed infrastructure and discuss how it may
help solve example problems recently studied in the field.
|
In this paper, we argue that models coming from a variety of fields share a
common structure that we call matching function equilibria with partial
assignment. This structure revolves around an aggregate matching function and a
system of nonlinear equations. This encompasses search and matching models,
matching models with transferable, non-transferable and imperfectly
transferable utility, and matching with peer effects. We provide a proof of
existence and uniqueness of an equilibrium as well as an efficient algorithm to
compute it. We show how to estimate parametric versions of these models by
maximum likelihood. We also propose an approach to construct counterfactuals
without estimating the matching functions for a subclass of models. We
illustrate our estimation approach by analyzing the impact of the elimination
of the Social Security Student Benefit Program in 1982 on the marriage market
in the United States.
|
We prove existence and regularity results for free transmission problems
governed by fully nonlinear elliptic equations with nonhomogeneous
degeneracies.
|
Non-minimally coupled scalar field models are well-known for providing
interesting cosmological features. These include a late time dark energy
behavior, a phantom dark energy evolution without singularity, an early time
inflationary universe, scaling solutions, convergence to the standard
$\Lambda$CDM, etc. While the usual stability analysis helps us determine the
evolution of a model geometrically, bifurcation theory allows us to precisely
locate the parameters' values describing the global dynamics without a
fine-tuning of initial conditions. Using the center manifold theory and
bifurcation analysis, we show that the general model undergoes a transcritical
bifurcation, which allows us to tune our models to have certain desired
dynamics. We obtained a class of models and a range of parameters capable of
describing a cosmic evolution from an early radiation era towards a late time
dark energy era over a wide range of initial conditions. There is also a
possible scenario of crossing the phantom divide line. We also find a class of
models where the late time attractor mechanism is indistinguishable from that
of a structurally stable general relativity based model; thus, we can elude the
big rip singularity generically. Therefore, bifurcation theory allows us to
select models that are viable with cosmological observations.
|
The objective of this paper is to present some results about viscosity
subsolutions of the contact Hamilton-Jacobi equations on a connected, closed
manifold $M$: $$ H(x,\partial_x u,u)= 0, \quad x\in M. $$ Based on implicit
variational principles introduced in [12,14], we focus on the monotonicity of
the solution semigroups on viscosity subsolutions and the positive invariance
of the epigraph for viscosity subsolutions. Besides, we show a similar
consequence for strict viscosity subsolutions on $M$.
|
We analyze the influence of the surface passivation produced by oxides on the
superconducting properties of $\gamma$-Mo$_2$N ultra-thin films. The
superconducting critical temperature of thin films grown directly on Si (100)
with those using a buffer and a capping layer of AlN are compared. The results
show that the cover layer avoids the presence of surface oxides, maximizing the
superconducting critical temperature for films with thicknesses of a few
nanometers. We characterize the flux-flow instability measuring current-voltage
curves in a 6.4 nm thick Mo$_2$N film with a superconducting critical
temperature of 6.4 K. The data is analyzed using the Larkin and Ovchinnikov
model. Considering self-heating effects due to finite heat removal from the
substrate, we determine a fast quasiparticle relaxation time $\approx$ 45 ps.
This value is promising for applications in single-photon detectors.
|
Understanding the biological function of knots in proteins and their folding
process is an open and challenging question in biology. Recent studies classify
the topology and geometry of knotted proteins by analysing the distribution of
a protein's planar projections using topological objects called knotoids. We
approach the analysis of proteins with the same topology by introducing a
topologically inspired statistical metric between their knotoid distributions.
We detect geometric differences between trefoil proteins by characterising
their entanglement and we recover a clustering by sequence similarity. By
looking directly at the geometry and topology of their native states, we are
able to probe different folding pathways for proteins forming open-ended
trefoil knots. Interestingly, our pipeline reveals that the folding pathway of
shallow knotted Carbonic Anhydrases involves the creation of a double-looped
structure, differently from what was previously observed for deeply knotted
trefoil proteins. We validate this with Molecular Dynamics simulations.
|
Moving clouds affect the global solar irradiance that reaches the surface of
the Earth. As a consequence, the amount of resources available to meet the
energy demand in a smart grid powered using Photovoltaic (PV) systems depends
on the shadows projected by passing clouds. This research introduces an
algorithm for tracking clouds to predict Sun occlusion. Using thermal images of
clouds, the algorithm is capable of estimating multiple wind velocity fields
with different altitudes, velocity magnitudes and directions.
|
In Quantum Field Theory, we discuss the main features of the (non-local)
contour gauge which extends the local axial-type gauge used in most approaches.
Based on the gluon geometry, we demonstrate that the contour gauge does not
suffer from residual gauge freedom. We discuss the useful correspondence between
the contour gauge conception and the Hamiltonian (Lagrangian) formalism. Having
compared the local and non-local gauges, we again advocate the advantage of
using the contour gauge.
|
Two decades after its unexpected discovery, the properties of the $X(3872)$
exotic resonance are still under intense scrutiny. In particular, there are
doubts about its nature as an ensemble of mesons or having any other internal
structure. We use a Diffusion Monte Carlo method to solve the many-body
Schr\"odinger equation that describes this state as a $c \bar c n \bar n$
($n=u$ or $d$ quark) system. This approach accounts for multi-particle
correlations in physical observables avoiding the usual quark-clustering
assumed in other theoretical techniques. The most general and accepted pairwise
Coulomb$\,+\,$linear-confining$\,+\,$hyperfine spin-spin interaction, with
parameters obtained by a simultaneous fit of around 100 masses of mesons and
baryons, is used. The $X(3872)$ contains light quarks whose masses are given by
the mechanism responsible for the dynamical breaking of chiral symmetry. The
same mechanism gives rise to Goldstone-boson exchange interactions between
quarks that have been fixed in the last 10-20 years reproducing hadron,
hadron-hadron and multiquark phenomenology. It appears that a meson-meson
molecular configuration is preferred but, contrary to the usual assumption of
$D^0\bar{D}^{\ast0}$ molecule for the $X(3872)$, our formalism produces $\omega
J/\psi$ and $\rho J/\psi$ clusters as the most stable ones, which could explain
in a natural way all the observed features of the $X(3872)$.
|
We propose a method to evaluate and improve the validity of required
specifications by comparing models from different viewpoints. Inconsistencies
are automatically extracted from the model in which the analyst defines the
service procedure based on the initial requirement; thereafter, the analyst
automatically compares it with a state transition model from the same initial
requirement that has been created by an evaluator who is different from the
analyst. The identified inconsistencies are reported to the analyst to enable
the improvement of the required specifications. We develop a tool for
extraction and comparison and then discuss its effectiveness by applying the
method to a requirements specification example.
|
Kappa distributions with loss cone features have been frequently observed in
flare emissions with the signatures of lower hybrid waves. We have analysed
plasmas with Kappa distributions and with loss cone features for drift wave
instabilities in perpendicular propagation under large flare, normal flare and
coronal conditions. While analysing the growth/damping rate, we find that the
growth of the propagating EM waves increases with the kappa distribution index
in all three cases. Comparing the three, the large flare shows lesser growth
than the normal flare and coronal plasmas. When the loss cone features are
added to the Kappa distributions, we find that damping of the EM wave
propagation takes place. The damping rate of the EM waves increases with
perpendicular temperature and loss cone index l in all three cases, but the
damping is highest for the large flare, followed by the normal flare, in
comparison with the coronal condition. This shows that lower hybrid damping
may be a source of coronal heating.
|
Many software engineering studies or tasks rely on categorizing software
engineering artifacts. In practice, this is done either by defining simple but
often imprecise heuristics, or by manual labelling of the artifacts.
Unfortunately, errors in these categorizations impact the tasks that rely on
them. To improve the precision of these categorizations, we propose to gather
heuristics in a collaborative heuristic repository, to which researchers can
contribute a large number of diverse heuristics for a variety of tasks on a
variety of SE artifacts. These heuristics are then leveraged by
state-of-the-art weak supervision techniques to train high-quality classifiers,
thus improving the categorizations. We present an initial version of the
heuristic repository, which we applied to the concrete task of commit
classification.
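The idea of contributed heuristics can be sketched as labelling functions that vote on each commit message. The keyword rules below are invented examples, not heuristics from the actual repository, and a real weak-supervision model would learn per-heuristic accuracies rather than take a majority vote:

```python
# Sketch of heuristic-based weak labelling for commit classification, in the
# spirit of labelling functions used by weak-supervision frameworks.
# The keyword rules are illustrative examples only.

ABSTAIN = None

def h_fix(msg):      # heuristic: bugfix vocabulary
    return "bugfix" if any(k in msg.lower() for k in ("fix", "bug", "patch")) else ABSTAIN

def h_feature(msg):  # heuristic: feature vocabulary
    return "feature" if any(k in msg.lower() for k in ("add", "implement")) else ABSTAIN

def h_docs(msg):     # heuristic: documentation vocabulary
    return "docs" if "readme" in msg.lower() or "doc" in msg.lower() else ABSTAIN

HEURISTICS = [h_fix, h_feature, h_docs]

def weak_label(msg):
    """Majority vote over non-abstaining heuristics (a real system would
    instead model per-heuristic accuracies, Snorkel-style)."""
    votes = [h(msg) for h in HEURISTICS if h(msg) is not ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN

print(weak_label("Fix null pointer bug in parser"))   # bugfix
print(weak_label("Update README typo"))               # docs
```

The resulting noisy labels are then fed to a weak-supervision learner to train the final classifier, which is where the quality gain over raw heuristics comes from.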
|
Graph embedding is a general approach to tackling graph-analytic problems by
encoding nodes into low-dimensional representations. Most existing embedding
methods are transductive since the information of all nodes is required in
training, including those to be predicted. In this paper, we propose a novel
inductive embedding method for semi-supervised learning on graphs. This method
generates node representations by learning a parametric function to aggregate
information from the neighborhood using an attention mechanism, and hence
naturally generalizes to previously unseen nodes. Furthermore, adversarial
training serves as an external regularization enforcing the learned
representations to match a prior distribution for improving robustness and
generalization ability. Experiments on real-world clean or noisy graphs are
used to demonstrate the effectiveness of this approach.
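The attention-based neighborhood aggregation can be sketched as a softmax-weighted average of neighbor features. Here the compatibility score is a plain dot product rather than the learned parametric function of the paper:

```python
import math

# Minimal sketch of inductive neighborhood aggregation with attention: a
# node's representation is a weighted average of neighbor features, with
# weights from a softmax over compatibility scores. The dot-product scoring
# below stands in for the trained parametric function.

def attend(node_feat, neighbor_feats):
    scores = [sum(a * b for a, b in zip(node_feat, h)) for h in neighbor_feats]
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]          # numerically stable softmax
    weights = [e / sum(exp) for e in exp]
    dim = len(node_feat)
    return [sum(w * h[i] for w, h in zip(weights, neighbor_feats))
            for i in range(dim)]

# An unseen node aggregates from its neighbors without retraining:
node = [1.0, 0.0]
neighbors = [[1.0, 0.0], [0.0, 1.0]]
rep = attend(node, neighbors)
print([round(x, 3) for x in rep])
```

Because the representation is computed from neighbors through a fixed parametric function, the same function applies to nodes never seen in training, which is what makes the method inductive rather than transductive.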
|
We propose a new diffusion-asymptotic analysis for sequentially randomized
experiments, including those that arise in solving multi-armed bandit problems.
In an experiment with $ n $ time steps, we let the mean reward gaps between
actions scale to the order $1/\sqrt{n}$ so as to preserve the difficulty of the
learning task as $n$ grows. In this regime, we show that the behavior of a
class of sequentially randomized Markov experiments converges to a diffusion
limit, given as the solution of a stochastic differential equation. The
diffusion limit thus enables us to derive refined, instance-specific
characterization of the stochastic dynamics of adaptive experiments. As an
application of this framework, we use the diffusion limit to obtain several new
insights on the regret and belief evolution of Thompson sampling. We show that
a version of Thompson sampling with an asymptotically uninformative prior
variance achieves nearly-optimal instance-specific regret scaling when the
reward gaps are relatively large. We also demonstrate that, in this regime, the
posterior beliefs underlying Thompson sampling are highly unstable over time.
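The diffusion scaling can be illustrated by simulating a two-armed Gaussian bandit whose mean reward gap shrinks as $1/\sqrt{n}$, run with Thompson sampling. The constants and the flat-prior initialization below are illustrative choices, not the paper's exact setup:

```python
import random

# Sketch of the diffusion regime: a two-armed Gaussian bandit with reward gap
# scaled to 1/sqrt(n), played by Thompson sampling under a (nearly) flat,
# asymptotically uninformative prior. Constants are illustrative.

def thompson_run(n, gap_const=1.0, seed=0):
    rng = random.Random(seed)
    gap = gap_const / n ** 0.5        # gap shrinks to keep the task hard
    means = (gap, 0.0)                # arm 0 is the better arm
    counts, sums = [0, 0], [0.0, 0.0]
    pulls_of_best = 0
    for _ in range(n):
        samples = []
        for a in (0, 1):
            if counts[a] == 0:
                samples.append(rng.gauss(0, 1e3))   # flat prior: explore first
            else:
                post_mean = sums[a] / counts[a]
                samples.append(rng.gauss(post_mean, 1 / counts[a] ** 0.5))
        a = 0 if samples[0] >= samples[1] else 1
        pulls_of_best += (a == 0)
        r = rng.gauss(means[a], 1.0)  # unit-variance reward noise
        counts[a] += 1
        sums[a] += r
    return gap * (n - pulls_of_best)  # cumulative regret

print(round(thompson_run(10_000), 2))
```

With the gap at the $1/\sqrt{n}$ scale, the arms remain hard to distinguish over the whole horizon, which is exactly the regime in which the diffusion limit, rather than classical asymptotics, describes the regret and belief dynamics.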
|
The Embedded-Atom Model (EAM) provides a phenomenological description of
atomic arrangements in metallic systems. It consists of a configurational
energy depending on atomic positions and featuring the interplay of two-body
atomic interactions and nonlocal effects due to the corresponding electronic
clouds. The purpose of this paper is to mathematically investigate the
minimization of the EAM energy among lattices in two and three dimensions. We
present a suite of analytical and numerical results under different reference
choices for the underlying interaction potentials. In particular, Gaussian,
inverse-power, and Lennard-Jones-type interactions are addressed.
|
We present a statistical analysis for the characteristics and spatial
evolution of the interplanetary discontinuities (IDs) in the solar wind, from
0.13 to 0.9 au, by using the Parker Solar Probe measurements on Orbits 4 and 5.
3948 IDs have been collected, including 2511 rotational discontinuities (RDs)
and 557 tangential discontinuities (TDs), with the remnant unidentified. The
statistical results show that (1) the ID occurrence rate decreases from 200
events/day at 0.13 au to 1 event/day at 0.9 au, following a spatial scaling of
$r^{-2.00}$, (2) the RD to TD ratio decreases quickly with the heliocentric
distance, from 8 at r<0.3 au to 1 at r>0.4 au, (3) the magnetic field tends to
rotate across the IDs, 45{\deg} for TDs and 30{\deg} for RDs in the pristine
solar wind within 0.3 au, (4) a special subgroup of RDs exists within 0.3 au,
characterized by small field rotation angles and parallel or antiparallel
propagations to the background magnetic fields, (5) the TD thicknesses
normalized by local ion inertial lengths (di) show no clear spatial scaling and
generally range from 5 to 35 di, and the normalized RD thicknesses follow an
$r^{-1.09}$ spatial scaling, (6) the outward (anti-sunward) propagating RDs
predominate among all RDs, with the propagation speeds in the plasma rest frame
proportional to $r^{-1.03}$. This work could improve our understanding of ID
characteristics and evolution and shed light on the study of the turbulent
environment in the pristine solar wind.
|
The paper addresses the problem of defining families of ordered sequences
$\{x_i\}_{i\in N}$ of elements of a compact subset $X$ of $R^d$ whose prefixes
$X_n=\{x_i\}_{i=1}^{n}$, for all orders $n$, have good space-filling properties
as measured by the dispersion (covering radius) criterion. Our ultimate aim is
the definition of incremental algorithms that generate sequences $X_n$ with
small optimality gap, i.e., with a small increase in the maximum distance
between points of $X$ and the elements of $X_n$ with respect to the optimal
solution $X_n^\star$. The paper is a first step in this direction, presenting
incremental design algorithms with proven optimality bound for one-parameter
families of criteria based on coverings and spacings that both converge to
dispersion for large values of their parameter. The examples presented show
that the covering-based method outperforms state-of-the-art competitors,
including coffee-house, suggesting that it benefits from its guaranteed 50\%
optimality gap.
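As a baseline for the incremental constructions discussed here, the classic greedy design (each new point maximizes its minimum distance to the current set, evaluated over a candidate grid) can be sketched as follows; the paper's covering-based criteria refine this idea with proven optimality bounds:

```python
# Sketch of a greedy ("coffee-house"-style) incremental design on [0,1]^2:
# each new point maximizes its minimum distance to the current set, over a
# finite candidate grid. This is the baseline construction only, not the
# covering-based method of the paper.

def greedy_sequence(n_points, grid=11):
    cands = [(i / (grid - 1), j / (grid - 1))
             for i in range(grid) for j in range(grid)]
    seq = [(0.5, 0.5)]                      # start from the centre
    while len(seq) < n_points:
        def min_sq_dist(c):
            return min((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 for p in seq)
        seq.append(max(cands, key=min_sq_dist))   # farthest candidate
    return seq

pts = greedy_sequence(5)
print(pts)
```

Every prefix of the returned sequence is itself a design, which is the incremental property the dispersion criterion is meant to control.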
|
Barchan dunes, or simply barchans, are crescent-shaped dunes found in diverse
environments such as the bottom of rivers, Earth's deserts and the surface of
Mars. In a recent paper [Phys. Rev. E 101, 012905 (2020)], we investigated the
evolution of subaqueous barchans by using computational fluid dynamics -
discrete element method (CFD-DEM), and our simulations captured well the
evolution of an initial pile toward a barchan dune in both the bedform and
grain scales. With the numerical method shown to be adequate, we now obtain
the forces acting on each grain, isolate the contact interactions, and
investigate how forces are distributed and transmitted in a barchan dune. We
present force maps and probability density functions (PDFs) for values in the
streamwise and spanwise directions, and show that stronger forces are
experienced by grains at neither the crest nor leading edge of the barchan, but
in positions just upstream of the dune centroid, on the periphery of the dune. We
show also that a large proportion of the grains undergoes longitudinal forces
of the order of 10$^{-7}$ N, with negative values around the crest, resulting in
decelerations and grain deposition in that region. These data show that the
force distribution tends to route a great part of the grains toward the crest
and horns of subaqueous barchans, which is fundamental to comprehending their
morphodynamics. However, to the best of the authors' knowledge, these forces
are not accessible from current experiments, making our results an important
step toward understanding the behavior of barchan dunes.
|
In this paper, we prove that a compact quasi-Einstein manifold
$(M^n,\,g,\,u)$ of dimension $n\geq 4$ with boundary $\partial M,$ nonnegative
sectional curvature and zero radial Weyl tensor is either isometric, up to
scaling, to the standard hemisphere $\Bbb{S}^n_+,$ or $g=dt^{2}+\psi
^{2}(t)g_{L}$ and $u=u(t),$ where $g_{L}$ is Einstein with nonnegative Ricci
curvature. A similar classification result is obtained by assuming a
fourth-order vanishing condition on the Weyl tensor. Moreover, a new example is
presented in order to justify our assumptions. In addition, the case of
dimension $n=3$ is also discussed.
|
Normal mode decomposition of atomic vibrations has been used to provide
microscopic understanding of thermal transport in amorphous solids for
decades. In normal mode methods, it is naturally assumed that atoms vibrate
around their equilibrium positions and that individual normal modes are the
fundamental vibrational excitations transporting heat. With the abundance of
predictions from normal mode methods and experimental measurements now
available, we carefully analyze these calculations in amorphous silicon, a
model amorphous solid. We find a number of discrepancies, suggesting that
treating individual normal modes as fundamental heat carriers may not be
accurate in amorphous solids. Further, our classical and ab-initio molecular
dynamics simulations of amorphous silicon demonstrate a large degree of atomic
diffusion, especially at high temperatures, leading to the conclusion that
thermal transport in amorphous solids could be better described starting from
the perspective of liquid dynamics rather than from crystalline solids
|
Recent research has confirmed the feasibility of backdoor attacks in deep
reinforcement learning (RL) systems. However, the existing attacks require the
ability to arbitrarily modify an agent's observation, constraining the
application scope to simple RL systems such as Atari games. In this paper, we
migrate backdoor attacks to more complex RL systems involving multiple agents
and explore the possibility of triggering the backdoor without directly
manipulating the agent's observation. As a proof of concept, we demonstrate
that an adversary agent can trigger the backdoor of the victim agent with its
own action in two-player competitive RL systems. We prototype and evaluate
BACKDOORL in four competitive environments. The results show that when the
backdoor is activated, the winning rate of the victim drops by 17% to 37%
compared to when not activated.
|
To complete a previous work, the probability density functions for the errors
in the center-of-gravity as a positioning algorithm are derived with the usual
methods of the cumulative distribution functions. These methods introduce
substantial complications compared to the approaches used in a previous
publication on similar problems. The combinations of random variables
considered are: $X_{g3}=\theta(x_2-x_1)(x_1-x_3)/(x_1+x_2+x_3) +
\theta(x_1-x_2)(x_1+2x_4)/(x_1+x_2+x_4)$ and
$X_{g4}=\theta(x_4-x_5)(2x_4+x_1-x_3)/(x_1+x_2+x_3+x_4)+
\theta(x_5-x_4)(x_1-x_3-2x_5)/(x_1+x_2+x_3+x_5)$. The complete and partial forms
of the probability density functions of these expressions of the
center-of-gravity algorithms are calculated for general probability density
functions of the observation noise. The cumulative probability distributions
are the essential steps in this study, never calculated elsewhere.
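The definition of $X_{g3}$ can be cross-checked numerically by Monte Carlo sampling of its distribution. Here $\theta$ is the Heaviside step function; the Gaussian observation noise and the strip means are invented illustrative choices (the paper derives the PDF for general noise distributions):

```python
import random

# Monte-Carlo cross-check of the variable X_g3 from the abstract, with theta
# the Heaviside step function. Gaussian noise and the strip means are
# illustrative assumptions; the paper treats general noise PDFs.

def theta(x):
    return 1.0 if x > 0 else 0.0

def x_g3(x1, x2, x3, x4):
    return (theta(x2 - x1) * (x1 - x3) / (x1 + x2 + x3)
            + theta(x1 - x2) * (x1 + 2 * x4) / (x1 + x2 + x4))

rng = random.Random(1)
# hypothetical signal means for the strips, with unit Gaussian noise
samples = [x_g3(*(rng.gauss(m, 1.0) for m in (10.0, 8.0, 2.0, 1.0)))
           for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 3))   # empirical mean; the PDF follows from a histogram
```

A histogram of `samples` gives an empirical version of the probability density function whose closed form is the subject of the paper.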
|
In this work we derive the junction conditions for the matching between two
spacetimes at a separation hypersurface in the perfect-fluid version of
$f\left(R,T\right)$ gravity, not only in the usual geometrical representation
but also in a dynamically equivalent scalar-tensor representation. We start
with the general case in which a thin-shell separates the two spacetimes at the
separation hypersurface, for which the general junction conditions are deduced,
and the particular case for smooth matching is considered when the
stress-energy tensor of the thin-shell vanishes. The set of junction conditions
is similar to the one previously obtained for $f\left(R\right)$ gravity but
features also constraints in the continuity of the trace of the stress-energy
tensor $T_{ab}$ and its partial derivatives, which force the thin-shell to
satisfy the equation of state of radiation $\sigma=2p_t$. As a consequence, a
necessary and sufficient condition for spherically symmetric thin-shells to
satisfy all the energy conditions is the positivity of their energy density
$\sigma$. For specific forms of the function $f\left(R,T\right)$, the
continuity of $R$ and $T$ ceases to be mandatory but a gravitational
double-layer arises at the separation hypersurface. The Martinez thin-shell
system and a thin-shell surrounding a central black-hole are provided as
examples of application.
|
We highlight shortcomings of the dynamical dark energy (DDE) paradigm. For
parametric models with equation of state (EOS), $w(z) = w_0 + w_a f(z)$ for a
given function of redshift $f(z)$, we show that the errors in $w_a$ are
sensitive to $f(z)$: if $f(z)$ increases quickly with redshift $z$, then errors
in $w_a$ are smaller, and vice versa. As a result, parametric DDE models suffer
from a degree of arbitrariness and focusing too much on one model runs the risk
that DDE may be overlooked. In particular, we show the ubiquitous
Chevallier-Polarski-Linder model is one of the least sensitive to DDE. We also
comment on ``wiggles" in $w(z)$ uncovered in non-parametric reconstructions.
Concretely, we isolate the most relevant Fourier modes in the wiggles, model
them and fit them back to the original data to confirm the wiggles at
$\lesssim2\sigma$. We delve into the assumptions going into the reconstruction
and argue that the assumed correlations, which clearly influence the wiggles,
place strong constraints on field theory models of DDE.
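For reference, the widely used Chevallier-Polarski-Linder (CPL) parametrization mentioned above corresponds to the choice $f(z) = z/(1+z)$, giving an equation of state that interpolates between two finite limits:

```latex
w(z) = w_0 + w_a \frac{z}{1+z},
\qquad w(0) = w_0, \qquad \lim_{z\to\infty} w(z) = w_0 + w_a .
```

Because this $f(z)$ saturates at high redshift instead of growing, the argument of the abstract implies that CPL yields comparatively large errors in $w_a$ and hence low sensitivity to DDE.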
|
Planning to support widespread transportation electrification depends on
detailed estimates for the electricity demand from electric vehicles in both
uncontrolled and controlled or smart charging scenarios. We present a modeling
approach to rapidly generate charging estimates that include control for
large-scale scenarios with millions of individual drivers. We model
uncontrolled charging demand using statistical representations of real charging
sessions. We model the effect of load modulation control on aggregate charging
profiles with a novel machine learning approach that replaces traditional
optimization approaches. We demonstrate its performance modeling workplace
charging control with multiple electricity rate schedules, achieving small
errors (2.5% to 4.5%), while accelerating computations by more than 4000 times.
We illustrate the methodology by generating scenarios for California's 2030
charging demand including multiple charging segments and controls, with
scenarios run locally in under 50 seconds, and for assisting rate design by
modeling the large-scale impact of a new workplace charging rate.
|
The effects of internal adaptation dynamics on the self-organized aggregation
of chemotactic bacteria are investigated by Monte Carlo (MC) simulations based
on a two-stream kinetic transport equation coupled with a reaction-diffusion
equation of the chemoattractant that bacteria produce. A remarkable finding is
a nonmonotonic behavior of the peak aggregation density with respect to the
adaptation time; more specifically, aggregation is the most enhanced when the
adaptation time is comparable to or moderately larger than the mean run time of
bacteria. Another curious observation is the formation of a trapezoidal
aggregation profile occurring at a very large adaptation time, where the biased
motion of individual cells is rather hindered at the plateau regimes due to the
boundedness of the tumbling frequency modulation. Asymptotic analysis of the
kinetic transport system is also carried out, and a novel asymptotic equation
is obtained at the large adaptation-time regime while the Keller-Segel type
equations are obtained when the adaptation time is moderate. Numerical
comparison of the asymptotic equations with MC results clarifies that
trapezoidal aggregation is well described by the novel asymptotic equation, and
the nonmonotonic behavior of the peak aggregation density is interpreted as the
transient of the asymptotic solutions between different adaptation time
regimes.
|
We investigate the invariance of the Gibbs measure for the fractional
Schrödinger equation of exponential type (expNLS) $i\partial_t u +
(-\Delta)^{\frac{\alpha}2} u = 2\gamma\beta e^{\beta|u|^2}u$ on $d$-dimensional
compact Riemannian manifolds $\mathcal{M}$, for a dispersion parameter
$\alpha>d$, some coupling constant $\beta>0$, and $\gamma\neq 0$. (i) We first
study the construction of the Gibbs measure for (expNLS). We prove that in the
defocusing case $\gamma>0$, the measure is well-defined in the whole regime
$\alpha>d$ and $\beta>0$ (Theorem 1.1 (i)), while in the focusing case
$\gamma<0$ its partition function is always infinite for any $\alpha>d$ and
$\beta>0$, even with a mass cut-off of arbitrarily small size (Theorem 1.1 (ii)).
(ii) We then study the dynamics (expNLS) with random initial data of low
regularity. We first use a compactness argument to prove weak invariance of the
Gibbs measure in the whole regime $\alpha>d$ and $0<\beta < \beta^\star_\alpha$
for some natural parameter $0<\beta^\star_\alpha\sim (\alpha-d)$ (Theorem 1.3
(i)). In the large dispersion regime $\alpha>2d$, we can improve this result by
constructing a local deterministic flow for (expNLS) for any $\beta>0$. Using
the Gibbs measure, we prove that solutions are almost surely global for
$0<\beta \ll\beta^\star_\alpha$, and that the Gibbs measure is invariant
(Theorem 1.3 (ii)). (iii) Finally, in the particular case $d=1$ and
$\mathcal{M}=\mathbb{T}$, we are able to exploit some probabilistic multilinear
smoothing effects to build a probabilistic flow for (expNLS) for
$1+\frac{\sqrt{2}}2<\alpha \leq 2$, locally for arbitrary $\beta>0$ and
globally for $0<\beta \ll \beta^\star_\alpha$ (Theorem 1.5).
|
We propose a novel approximation hierarchy for cardinality-constrained,
convex quadratic programs that exploits the rank-dominating eigenvectors of the
quadratic matrix. Each level of approximation admits a min-max characterization
whose objective function can be optimized over the binary variables
analytically, while preserving convexity in the continuous variables.
Exploiting this property, we propose two scalable optimization algorithms,
coined as the "best response" and the "dual program", that can efficiently
screen the potential indices of the nonzero elements of the original program.
We show that the proposed methods are competitive with existing screening
methods in the current sparse regression literature, and are particularly fast
on instances with a high number of measurements, in experiments with both
synthetic and real datasets.
|
This paper completely characterizes the standard Young tableaux that can be
reconstructed from their sets or multisets of $1$-minors. In particular, any
standard Young tableau with at least $5$ entries can be reconstructed from its
set of $1$-minors.
|
We consider the problem of forecasting the daily number of hospitalized
COVID-19 patients at a single hospital site, in order to help administrators
with logistics and planning. We develop several candidate hierarchical Bayesian
models which directly capture the count nature of data via a generalized
Poisson likelihood, model time-series dependencies via autoregressive and
Gaussian process latent processes, and share statistical strength across
related sites. We demonstrate our approach on public datasets for 8 hospitals
in Massachusetts, U.S.A. and 10 hospitals in the United Kingdom. Further
prospective evaluation compares our approach favorably to baselines currently
used by stakeholders at 3 related hospitals to forecast 2-week-ahead demand by
rescaling state-level forecasts.
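The generalized Poisson likelihood mentioned in this abstract can be written down compactly. Below is a minimal sketch of Consul's generalized Poisson log-pmf, one common parameterization of such a count likelihood (an assumption on our part, not necessarily the paper's exact choice):

```python
import math

def gen_poisson_logpmf(y, theta, lam):
    """Log-pmf of Consul's generalized Poisson: lam = 0 recovers Poisson(theta),
    lam > 0 adds over-dispersion, which is useful for daily hospital counts."""
    return (math.log(theta) + (y - 1) * math.log(theta + lam * y)
            - (theta + lam * y) - math.lgamma(y + 1))

# Sanity checks: the pmf sums to ~1, and lam = 0 at y = 0 gives e^{-theta}.
theta, lam = 5.0, 0.2
total = sum(math.exp(gen_poisson_logpmf(y, theta, lam)) for y in range(200))
poisson0 = math.exp(gen_poisson_logpmf(0, theta, 0.0))
```

In a hierarchical Bayesian model as described above, `theta` would itself be driven by an autoregressive or Gaussian-process latent rate rather than held fixed.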
|
We study theoretically two vibrating quantum emitters trapped near a
one-dimensional waveguide and interacting with propagating photons. We
demonstrate that, in the regime of strong optomechanical interaction, the
light-induced coupling of emitter vibrations can lead to the formation of
spatially localized vibration modes exhibiting parity-time (PT) symmetry breaking.
These localized vibrations can be interpreted as topological defects in the
quasiclassical energy spectrum.
|
The origin of p-type conductivity and the mechanism responsible for low
carrier mobility were investigated in pyrite (FeS2) thin films. Temperature
dependent resistivity measurements were performed on polycrystalline and
nanostructured thin films prepared by three different methods. Films have a
high hole density and low mobility regardless of the method used for their
preparation. The charge transport mechanism is determined to be nearest
neighbour hopping (NNH) at near room temperature with Mott-type variable range
hopping (VRH) of holes via localized states occurring at lower temperatures.
Density functional theory (DFT) predicts that sulfur vacancy induced localized
defect states will be situated within the band gap with the charge remaining
localized around the defect. The data indicate that the electronic properties,
including hopping transport, in pyrite thin films can be correlated with
sulfur-vacancy-related defects. The results provide insights into the
electronic properties of pyrite thin films and their implications for charge
transport.
|
In one-shot weight sharing for NAS, the weights of each operation (at each
layer) are supposed to be identical for all architectures (paths) in the
supernet. However, this rules out the possibility of adjusting operation
weights to cater for different paths, which limits the reliability of the
evaluation results. In this paper, instead of counting on a single supernet, we
introduce $K$-shot supernets and take their weights for each operation as a
dictionary. The operation weight for each path is represented as a convex
combination of items in a dictionary with a simplex code. This enables a matrix
approximation of the stand-alone weight matrix with a higher rank ($K>1$). A
\textit{simplex-net} is introduced to produce architecture-customized code for
each path. As a result, all paths can adaptively learn how to share weights in
the $K$-shot supernets and acquire corresponding weights for better evaluation.
$K$-shot supernets and simplex-net can be iteratively trained, and we further
extend the search to the channel dimension. Extensive experiments on benchmark
datasets validate that K-shot NAS significantly improves the evaluation
accuracy of paths and thus brings in impressive performance improvements.
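The convex-combination idea in this abstract can be sketched in a few lines. Here `simplex_code` is a hypothetical stand-in for the simplex-net output (a softmax onto the $K$-simplex), not the paper's implementation:

```python
import numpy as np

def simplex_code(logits):
    """Hypothetical stand-in for the simplex-net output: a softmax mapping
    architecture-conditioned logits onto the K-simplex."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def path_weight(dictionary, logits):
    """Operation weight for one path: a convex combination of the K supernet
    weights (shape (K, out_dim, in_dim)), which raises the rank of the
    achievable stand-alone weight matrices compared to a single supernet."""
    code = simplex_code(logits)
    return np.tensordot(code, dictionary, axes=1)

K = 3
rng = np.random.default_rng(0)
D = rng.normal(size=(K, 4, 4))                 # dictionary: one weight per supernet
W = path_weight(D, np.array([2.0, 0.0, -1.0])) # architecture-customized combination
```

In the actual method the logits would be produced per path by the trained simplex-net; here they are fixed only to illustrate the mixing.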
|
We derive the main classical gravitational tests for a recently found vacuum
solution with spin and dilation charges in the framework of Metric-Affine gauge
theory of gravity. Using the results of the perihelion precession of the star
S2 by the GRAVITY collaboration and the gravitational redshift of Sirius B
white dwarf we constrain the corrections provided by the torsion and
nonmetricity fields for these effects.
|
We present the results of photometry, linear spectropolarimetry, and imaging
circular polarimetry of comet C/2009 P1 (Garradd) performed at the 6-m
telescope BTA of the Special Astrophysical Observatory (Russia) equipped with
the multi-mode focal reducer SCORPIO-2. The comet was observed at two epochs
post-perihelion: on February 2-14, 2012 at r=1.6 au and {\alpha}=36 deg; and on
April 14-21, 2012 at r=2.2 au and {\alpha}=27 deg. The spatial maps of the
relative intensity and circular polarization as well as the spectral
distribution of linear polarization are presented. There were two features
(dust and gas tails) oriented in the solar and antisolar directions on February
2 and 14 that allowed us to determine the rotation period of the nucleus as
11.1 hours. We detected
emissions of C2 , C3 , CN, CH, NH2 molecules as well as CO+ and H2O+ ions,
along with a high level of the dust continuum. On February 2, the degree of
linear polarization in the continuum, within the wavelength range of 0.67-0.68
{\mu}m, was about 5% in the near-nucleus region up to near 6000 km and
decreased to about 3% at near 40,000 km. The left-handed (negative) circular
polarization at levels from approximately -0.06% to -0.4% was observed at
distances up to 3*10^4 km from the nucleus on February 14 and April 21,
respectively.
|
Prospects of the Cherenkov Telescope Array (CTA) for the study of very high
energy gamma-ray emission from nearby star-forming galaxies are investigated.
In our previous work, we constructed a model to calculate the luminosity and energy
spectrum of pion-decay gamma-ray emission produced by cosmic-ray interaction
with the interstellar medium (ISM), from four physical quantities of galaxies
[star formation rate (SFR), gas mass, stellar mass, and effective radius]. The
model is in good agreement with the observed GeV--TeV emission of several
nearby galaxies. Applying this model to nearby galaxies that are not yet
detected in TeV (mainly from the KINGFISH catalog), their hadronic gamma-ray
luminosities and spectra are predicted. We identify galaxies of the highest
chance of detection by CTA, including NGC 5236, M33, NGC 6946, and IC 342.
Concerning gamma-ray spectra, NGC 1482 is particularly interesting because our
model predicts that this galaxy is close to the calorimetric limit and its
gamma-ray spectral index in GeV--TeV is close to that of cosmic-ray protons
injected into ISM. Therefore this galaxy may be detectable by CTA even though
its GeV flux is below the {\it Fermi} Large Area Telescope sensitivity limit.
In the TeV regime, most galaxies are not in the calorimetric limit, and the
predicted TeV flux is lower than that assuming a simple relation between the
TeV luminosity and SFR of M82 and NGC 253, typically by a factor of 15. This
means that a more sophisticated model beyond the calorimetric limit assumption
is necessary to study TeV emission from star-forming galaxies.
|
Motivated by the needs from an airline crew scheduling application, we
introduce structured convolutional kernel networks (Struct-CKN), which combine
CKNs from Mairal et al. (2014) in a structured prediction framework that
supports constraints on the outputs. CKNs are a particular kind of
convolutional neural networks that approximate a kernel feature map on training
data, thus combining properties of deep learning with the non-parametric
flexibility of kernel methods. Extending CKNs to structured outputs allows us
to obtain useful initial solutions on a flight-connection dataset that can be
further refined by an airline crew scheduling solver. More specifically, we use
a flight-based network modeled as a general conditional random field capable of
incorporating local constraints in the learning process. Our experiments
demonstrate that this approach yields significant improvements for the
large-scale crew pairing problem (50,000 flights per month) over standard
approaches, reducing the solution cost by 17% (a gain of millions of dollars)
and the cost of global constraints by 97%.
|
We study a general convergence theory for the numerical solutions of
compressible viscous and electrically conducting fluids with a focus on
numerical schemes that preserve the divergence free property of magnetic field
exactly. Our strategy utilizes the recent concepts of dissipative weak
solutions and consistent approximations. First, we show the dissipative
weak--strong uniqueness principle, meaning a dissipative weak solution
coincides with a classical solution as long as they emanate from the same
initial data. Next, we show the convergence of consistent approximation towards
the dissipative weak solution and thus the classical solution. Upon
interpreting the consistent approximation as the stability and consistency of
suitable numerical solutions we have established a generalized Lax equivalence
theory: convergence $\Longleftrightarrow$ stability and consistency. Further,
to illustrate the application of this theory, we propose two novel mixed finite
volume-finite element methods with exact divergence-free magnetic field.
Finally, by showing solutions of these two schemes are consistent
approximations, we conclude their convergence towards the dissipative weak
solution and the classical solution.
|
We study the duality between JT gravity and the double-scaled matrix model
including their respective deformations. For these deformed theories we relate
the thermal partition function to the generating function of topological
gravity correlators that are determined as solutions to the KdV hierarchy. We
specialise to those deformations of JT gravity coupled to a gas of defects,
which conforms with known results in the literature. We express the
(asymptotic) thermal partition functions in a low temperature limit, in which
non-perturbative corrections are suppressed and the thermal partition function
becomes exact. In this limit we demonstrate that there is a Hawking-Page phase
transition between connected and disconnected surfaces for this instance of JT
gravity with a transition temperature affected by the presence of defects.
Furthermore, the calculated spectral form factors show the qualitative
behaviour expected for a Hawking-Page phase transition. The considered
deformations cause the ramp to be shifted along the real time axis. Finally, we
comment on recent results related to conical Weil-Petersson volumes and the
analytic continuation to two-dimensional de Sitter space.
|
In this paper, we extend the work of Huang et al. on the Minkowski problem in
Gaussian probability space to the $L_p$-Gaussian Minkowski problem, and obtain
the existence and uniqueness of an $o$-symmetric weak solution in the case
$p\geq1$.
|
Clustering data into meaningful subsets is a major task in scientific data
analysis. To date, various strategies ranging from model-based approaches to
data-driven schemes, have been devised for efficient and accurate clustering.
One important class of clustering methods that is of a particular interest is
the class of exemplar-based approaches. This interest primarily stems from the
amount of compressed information encoded in these exemplars that effectively
reflect the major characteristics of the respective clusters. Affinity
propagation (AP) has proven to be a powerful exemplar-based approach that
refines the set of optimal exemplars by iterative pairwise message updates.
However, a critical limitation is its inability to capitalize on known
networked relations between data points often available for various scientific
datasets. To mitigate this shortcoming, we propose geometric-AP, a novel
clustering algorithm that effectively extends AP to take advantage of the
network topology. Geometric-AP obeys network constraints and uses max-sum
belief propagation to leverage the available network topology for generating
smooth clusters over the network. Extensive performance assessment reveals a
significant enhancement in the quality of the clustering results when compared
to benchmark clustering schemes. Especially, we demonstrate that geometric-AP
performs extremely well even in cases where the original AP fails drastically.
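For context, the iterative pairwise message updates that geometric-AP builds on are those of plain affinity propagation (Frey & Dueck). The sketch below implements standard AP only; the network constraints that geometric-AP adds on top are not shown:

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=100):
    """Plain affinity propagation: damped responsibility/availability message
    updates on a similarity matrix S (diagonal = exemplar preferences)."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # Availabilities: a(i,k) = min(0, r(k,k) + sum of positive r(i',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        colsum = Rp.sum(axis=0)
        Anew = np.minimum(0, colsum[None, :] - Rp)
        np.fill_diagonal(Anew, colsum - Rp.diagonal())
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)  # exemplar index for each point

# Toy data: two well-separated groups on a line.
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -np.abs(pts[:, None] - pts[None, :]) ** 2
np.fill_diagonal(S, np.median(S))   # common preference choice
labels = affinity_propagation(S)
```

Geometric-AP would additionally restrict which points may serve as exemplars for which others, based on the known network topology.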
|
The effectiveness of fingerprint-based authentication systems on good quality
fingerprints is established long back. However, the performance of standard
fingerprint matching systems on noisy and poor quality fingerprints is far from
satisfactory. Towards this, we propose a data uncertainty-based framework which
enables the state-of-the-art fingerprint preprocessing models to quantify noise
present in the input image and identify fingerprint regions with background
noise and poor ridge clarity. Quantification of noise helps the model in two
ways: first, it makes the objective function adaptive to the noise in a
particular input fingerprint and, consequently, helps to achieve robust
performance on noisy and distorted fingerprint regions. Second, it provides a
noise variance map which indicates noisy pixels in the input fingerprint image.
The predicted noise variance map enables the end-users to understand erroneous
predictions due to noise present in the input image. Extensive experimental
evaluation on 13 publicly available fingerprint databases, across different
architectural choices and two fingerprint processing tasks demonstrate
effectiveness of the proposed framework.
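One standard way to make an objective adaptive to predicted noise, in the spirit of this abstract, is a heteroscedastic loss (Kendall & Gal, 2017). This is a generic sketch of the idea, not the paper's exact objective:

```python
import numpy as np

def heteroscedastic_loss(pred, target, log_var):
    """Data-uncertainty objective: residuals are down-weighted where the
    predicted noise variance is high, while the log-variance term stops the
    model from simply declaring every pixel noisy."""
    inv_var = np.exp(-log_var)
    return np.mean(0.5 * inv_var * (pred - target) ** 2 + 0.5 * log_var)

pred = np.array([0.9, 0.1, 0.8])
target = np.array([1.0, 0.0, 1.0])
confident = heteroscedastic_loss(pred, target, np.full(3, -2.0))  # low noise claimed
hedged = heteroscedastic_loss(pred, target, np.full(3, 2.0))      # high noise claimed
```

The predicted `log_var` map plays the role of the noise variance map described above: it both reweights the loss during training and is interpretable at inference time.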
|
We encounter variables with little variation often in educational data mining
(EDM) due to the demographics of higher education and the questions we ask.
Yet, little work has examined how to analyze such data. Therefore, we conducted
a simulation study using logistic regression, penalized regression, and random
forest. We systematically varied the fraction of positive outcomes, feature
imbalances, and odds ratios. We find the algorithms treat features with the
same odds ratios differently based on the features' imbalance and the outcome
imbalance. While none of the algorithms fully solved how to handle imbalanced
data, penalized approaches such as Firth and Log-F reduced the difference
between the built-in odds ratio and the value determined by the algorithm. Our
results suggest that EDM studies might contain false negatives when determining
which variables are related to an outcome. We then apply our findings to a
graduate admissions data set. We end by proposing recommendations that
researchers should consider penalized regression for data sets on the order of
hundreds of cases and should include more context about their data in
publications such as the outcome and feature imbalances.
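The shrinkage effect of penalization on a rare feature can be illustrated with a simple L2-penalized logistic regression. This is a generic stand-in for the Firth and Log-F penalties studied here (those use different penalty terms), fit by plain gradient descent on synthetic data of our own construction:

```python
import numpy as np

def penalized_logreg(X, y, alpha=0.0, lr=0.5, iters=5000):
    """Logistic regression fit by gradient descent with an L2 penalty -- a
    generic stand-in for Firth / Log-F, which likewise pull coefficients
    toward zero when outcomes or features are imbalanced."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / n + alpha * w / n
        w -= lr * grad
    return w

# A rare (~5%) binary feature with a built-in odds ratio of exp(1.5).
rng = np.random.default_rng(0)
n = 400
X = np.c_[np.ones(n), (rng.random(n) < 0.05).astype(float)]
logit = -2.0 + 1.5 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
w_mle = penalized_logreg(X, y, alpha=0.0)  # unpenalized estimate
w_pen = penalized_logreg(X, y, alpha=5.0)  # penalized, shrunk estimate
```

Comparing `w_mle[1]` and `w_pen[1]` to the built-in value 1.5 mirrors the simulation design described above, at toy scale.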
|
The COVID-19 pandemic started in China in December 2019 and quickly spread to
several countries. The consequences of this pandemic are incalculable, causing
the death of millions of people and damaging the global economy. To achieve
large-scale control of this pandemic, fast tools for detection and treatment of
patients are needed. Thus, the demand for alternative tools for the diagnosis
of COVID-19 has increased dramatically, since accurate and automated tools are
not available. In this paper we present ongoing work on a system for
COVID-19 detection using ultrasound imaging and Deep Learning techniques.
Furthermore, such a system is implemented on a Raspberry Pi to make it portable
and easy to use in remote regions without an Internet connection.
|
We prove that an inclusion-exclusion inspired expression of Schubert
polynomials of permutations that avoid the patterns 1432 and 1423 is
nonnegative. Our theorem implies a partial affirmative answer to a recent
conjecture of Yibo Gao about principal specializations of Schubert polynomials.
We propose a general framework for finding inclusion-exclusion inspired
expression of Schubert polynomials of all permutations.
|
It is shown that the slopes of the superhorizon hypermagnetic spectra
produced by the variation of the gauge couplings are practically unaffected by
the relative strength of the parity-breaking terms. A new method is proposed
for estimating the gauge power spectra in the presence of pseudoscalar
interactions during inflation. To corroborate the general results, various
concrete examples are explicitly analyzed. Since the large-scale gauge spectra
also determine the late-time magnetic fields it turns out that the pseudoscalar
contributions have little impact on the magnetogenesis requirement. Conversely
the parity-breaking terms crucially affect the gyrotropic spectra that may
seed, in certain models, the baryon asymmetry of the Universe. In the most
interesting regions of the parameter space the modes reentering prior to
symmetry breaking lead to a sufficiently large baryon asymmetry while the
magnetic power spectra associated with the modes reentering after symmetry
breaking may even be of the order of a few hundredths of a nG over typical
length scales comparable with the Mpc prior to the collapse of the protogalaxy.
From the viewpoint of the effective field theory description of magnetogenesis
scenarios these considerations hold generically for the whole class of
inflationary models where the inflaton is not constrained by any underlying
symmetry.
|
We study the representation theory of non-admissible simple affine vertex
algebra $L_{-5/2} (sl(4))$. We determine an explicit formula for the singular
vector of conformal weight four in the universal affine vertex algebra
$V^{-5/2} (sl(4))$, and show that it generates the maximal ideal in $V^{-5/2}
(sl(4))$. We classify irreducible $L_{-5/2} (sl(4))$--modules in the category
${\mathcal O}$, and determine the fusion rules between irreducible modules in
the category of ordinary modules $KL_{-5/2}$. It turns out that this fusion
algebra is isomorphic to the fusion algebra of $KL_{-1}$. We also prove that
$KL_{-5/2}$ is a semi-simple, rigid braided tensor category.
In our proofs we use the notion of collapsing level for the affine
$\mathcal{W}$--algebra, and the properties of conformal embedding $gl(4)
\hookrightarrow sl(5)$ at level $k=-5/2$ from arXiv:1509.06512. We show that
$k=-5/2$ is a collapsing level with respect to the subregular nilpotent element
$f_{subreg}$, meaning that the simple quotient of the affine
$\mathcal{W}$--algebra $W^{-5/2}(sl(4), f_{subreg})$ is isomorphic to the
Heisenberg vertex algebra $M_J(1)$. We prove certain results on vanishing and
non-vanishing of cohomology for the quantum Hamiltonian reduction functor
$H_{f_{subreg}}$. It turns out that the properties of $H_{f_{subreg}}$ are more
subtle than in the case of minimal reduction.
|
Over the last years, the number of cyber-attacks on industrial control
systems has been steadily increasing. Among several factors, proper software
development plays a vital role in keeping these systems secure. To achieve
secure software, developers need to be aware of secure coding guidelines and
secure coding best practices. This work presents a platform geared towards
software developers in the industry that aims to increase awareness of secure
software development. The authors also introduce an interactive game component,
a virtual coach, which implements a simple artificial intelligence engine based
on the laddering technique for interviews. Through a survey, a preliminary
evaluation of the implemented artifact with real-world players (from academia
and industry) shows a positive acceptance of the developed platform.
Furthermore, the players agree that the platform is adequate for training their
secure coding skills. The impact of our work is to introduce a new automatic
challenge evaluation method together with a virtual coach to improve existing
cybersecurity awareness training programs. These training workshops can be
easily held remotely or off-line.
|
The long wavelength moir\'e superlattices in twisted 2D structures have
emerged as a highly tunable platform for strongly correlated electron physics.
We study the moir\'e bands in twisted transition metal dichalcogenide
homobilayers, focusing on WSe$_2$, at small twist angles using a combination of
first principles density functional theory, continuum modeling, and
Hartree-Fock approximation. We reveal the rich physics at small twist angles
$\theta<4^\circ$, and identify a particular magic angle at which the top
valence moir\'e band achieves almost perfect flatness. In the vicinity of this
magic angle, we predict the realization of a generalized Kane-Mele model with a
topological flat band, interaction-driven Haldane insulator, and Mott
insulators at the filling of one hole per moir\'e unit cell. The combination of
flat dispersion and uniformity of Berry curvature near the magic angle holds
promise for realizing fractional quantum anomalous Hall effect at fractional
filling. We also identify twist angles favorable for quantum spin Hall
insulators and interaction-induced quantum anomalous Hall insulators at other
integer fillings.
|
Due to the rapid emergence of short videos and the requirement for content
understanding and creation, the video captioning task has received increasing
attention in recent years. In this paper, we convert traditional video
captioning task into a new paradigm, \ie, Open-book Video Captioning, which
generates natural language under the prompts of video-content-relevant
sentences, not limited to the video itself. To address the open-book video
captioning problem, we propose a novel Retrieve-Copy-Generate network, where a
pluggable video-to-text retriever is constructed to retrieve sentences as hints
from the training corpus effectively, and a copy-mechanism generator is
introduced to extract expressions from multi-retrieved sentences dynamically.
The two modules can be trained end-to-end or separately, which is flexible and
extensible. Our framework coordinates the conventional retrieval-based methods
with orthodox encoder-decoder methods, which can not only draw on the diverse
expressions in the retrieved sentences but also generate natural and accurate
content of the video. Extensive experiments on several benchmark datasets show
that our proposed approach surpasses the state-of-the-art performance,
indicating the effectiveness and promise of the proposed paradigm in the task
of video captioning.
|
The $\phi^4$ double-well theory admits a kink solution, whose rich
phenomenology is strongly affected by the existence of a single bound
excitation called the shape mode. We find that the leading quantum correction
to the energy needed to excite the shape mode is $-0.115567\lambda/m$ in terms
of the coupling $\lambda/4$ and the meson mass $m$ evaluated at the minimum of
the potential. On the other hand, the correction to the continuum threshold is
$-0.433\lambda/m$. A naive extrapolation to finite coupling then suggests that
the shape mode melts into the continuum at the modest coupling of
$\lambda/4\sim 0.106 m^2$, where the $\mathbb{Z}_2$ symmetry is still broken.
|
Most of the existing formation algorithms for multiagent systems are fully
label-specified, i.e., the desired position for each agent in the formation is
uniquely determined by its label, which would inevitably make the formation
algorithms vulnerable to agent failures. To address this issue, in this paper,
we propose a dynamic leader-follower approach to solving the line marching
problem for a swarm of planar kinematic robots. In contrast to the existing
results, the desired positions for the robots in the line are not fully
label-specified, but determined in a dynamic way according to the current state
of the robot swarm. By constantly forming a chain of leader-follower pairs,
exact formation can be achieved by pairwise leader-following tracking. Since
the order of the chain of leader-follower pairs is constantly updated, the
proposed algorithm shows strong robustness against robot failures.
Comprehensive numerical results are provided to evaluate the performance of the
proposed algorithm.
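The dynamic, non-label-specified pairing can be illustrated in one dimension: each step, the ordering is recomputed from current positions (not labels), and each robot tracks the robot immediately ahead of it. This is an illustrative sketch under our own simplified kinematics, not the paper's control law:

```python
import numpy as np

def line_march_step(positions, leader_vel, spacing=1.0, gain=1.0, dt=0.1):
    """One step of a dynamically ordered chain of leader-follower pairs:
    the frontmost robot leads, and every other robot tracks the robot
    immediately ahead of it at a fixed spacing."""
    order = np.argsort(positions)[::-1]   # re-ordered every step, not by label
    new = positions.copy()
    new[order[0]] += leader_vel * dt      # global leader = frontmost robot
    for rank in range(1, len(positions)):
        me, ahead = order[rank], order[rank - 1]
        err = (positions[ahead] - spacing) - positions[me]
        new[me] += gain * err * dt        # pairwise leader-following tracking
    return new

# Leader held stationary for the demo; followers fall into an evenly spaced line.
pos = np.array([0.0, 2.5, 1.0, 4.0])
for _ in range(400):
    pos = line_march_step(pos, leader_vel=0.0)
gaps = np.diff(np.sort(pos))
```

Because the chain is rebuilt from positions each step, deleting any entry of `pos` and re-running simply produces a shorter line, which is the robustness-to-failure property claimed above.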
|
A central theme in condensed matter physics is to create and understand the
exotic states of matter by incorporating magnetism into topological materials.
One prime example is the quantum anomalous Hall (QAH) state. Recently, MnBi2Te4
has been demonstrated to be an intrinsic magnetic topological insulator and the
QAH effect was observed in exfoliated MnBi2Te4 flakes. Here, we used molecular
beam epitaxy (MBE) to grow MnBi2Te4 films with thickness down to 1 septuple
layer (SL) and performed thickness-dependent transport measurements. We
observed a non-square hysteresis loop in the antiferromagnetic state for films
with thickness greater than 2 SL. The hysteresis loop can be separated into two
AH components. Through careful analysis, we demonstrated that one AH component
with the larger coercive field is from the dominant MnBi2Te4 phase, while the
other AH component with the smaller coercive field is from the minor Mn-doped
Bi2Te3 phase in the samples. The extracted AH component of the MnBi2Te4 phase
shows a clear even-odd layer-dependent behavior, a signature of
antiferromagnetic thin films. Our studies reveal insights on how to optimize
the MBE growth conditions to improve the quality of MnBi2Te4 films, in which
the QAH and other exotic states are predicted.
|
We present the KMOS Galaxy Evolution Survey (KGES), a $K$-band Multi-Object
Spectrograph (KMOS) study of the H$\alpha$ and [NII] emission from 288 $K$
band-selected galaxies at $1.2 \lesssim z \lesssim 1.8$, with stellar masses in
the range $\log_{10}(M_{*}/\rm{M}_{\odot})\approx$9-11.5. In this paper, we
describe the survey design, present the sample, and discuss the key properties
of the KGES galaxies. We combine KGES with appropriately matched samples at
lower redshifts from the KMOS Redshift One Spectroscopic Survey (KROSS) and the
SAMI Galaxy Survey. Accounting for the effects of sample selection, data
quality, and analysis techniques between surveys, we examine the kinematic
characteristics and angular momentum content of star-forming galaxies at
$z\approx1.5$, $\approx1$ and $\approx0$. We find that stellar mass, rather
than redshift, most strongly correlates with the disc fraction amongst
star-forming galaxies at $z \lesssim 1.5$, observing only a modest increase in
the prevalence of discs between $z\approx1.5$ and $z\approx0.04$ at fixed
stellar mass. Furthermore, typical star-forming galaxies follow the same median
relation between specific angular momentum and stellar mass, regardless of
their redshift, with the normalisation of the relation depending more strongly
on how disc-like a galaxy's kinematics are. This suggests that massive
star-forming discs form in a very similar manner across the $\approx$ 10 Gyr
encompassed by our study and that the inferred link between the angular
momentum of galaxies and their haloes does not change significantly across the
stellar mass and redshift ranges probed in this work.
|
A variety of wireless channel estimation methods, e.g., MUSIC and ESPRIT,
rely on prior knowledge of the model order. Therefore, it is important to
correctly estimate the number of multipath components (MPCs) which compose such
channels. However, environments with many scatterers may generate MPCs which
are closely spaced. This clustering of MPCs in addition to noise makes the
model order selection task difficult in practice to currently known algorithms.
In this paper, we exploit the multidimensional characteristics of MIMO
orthogonal frequency division multiplexing (OFDM) systems and propose a machine
learning (ML) method capable of determining the number of MPCs with a higher
accuracy than state-of-the-art methods in almost-coherent scenarios. Moreover,
our results show that our proposed ML method has enhanced reliability.
|
That neural networks may be pruned to high sparsities and retain high
accuracy is well established. Recent research efforts focus on pruning
immediately after initialization so as to allow the computational savings
afforded by sparsity to extend to the training process. In this work, we
introduce a new `DCT plus Sparse' layer architecture, which maintains
information propagation and trainability even with as little as 0.01% trainable
kernel parameters remaining. We show that standard training of networks built
with these layers, and pruned at initialization, achieves state-of-the-art
accuracy for extreme sparsities on a variety of benchmark network architectures
and datasets. Moreover, these results are achieved using only simple heuristics
to determine the locations of the trainable parameters in the network, and thus
without having to initially store or compute with the full, unpruned network,
as is required by competing prune-at-initialization algorithms. Switching from
standard sparse layers to DCT plus Sparse layers does not increase the storage
footprint of a network and incurs only a small additional computational
overhead.
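As an illustrative sketch of the general idea (the class name, the density value, and the random choice of sparse support below are our own assumptions, not the authors' implementation), a layer of this kind can be written as a fixed orthonormal DCT matrix plus a sparse trainable correction, so that only the few nonzero entries need to be stored and updated:

```python
import numpy as np
from scipy.fft import dct

class DCTPlusSparseSketch:
    """Sketch of a 'fixed transform + sparse trainable' linear layer:
    y = (D + S) x, where D is a fixed orthonormal DCT-II matrix and
    S is sparse with a small, fixed support. Only the values of S
    (self.w) would be trained."""

    def __init__(self, n, density=0.0001, seed=0):
        rng = np.random.default_rng(seed)
        # fixed, non-trainable orthonormal DCT-II basis
        self.D = dct(np.eye(n), norm='ortho', axis=0)
        # randomly chosen sparse support for the trainable entries
        k = max(1, int(density * n * n))
        flat = rng.choice(n * n, size=k, replace=False)
        self.rows, self.cols = np.unravel_index(flat, (n, n))
        self.w = np.zeros(k)  # trainable values, initialized to zero

    def forward(self, x):
        y = self.D @ x
        # add the sparse contribution: y[r] += w * x[c] per nonzero (r, c)
        np.add.at(y, self.rows, self.w * x[self.cols])
        return y
```

With the sparse values initialized to zero the layer is an exact orthonormal DCT, so it preserves norms and hence supports information propagation at initialization; the storage cost is only the `k` sparse values plus their indices, never the dense matrix.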
|
In this paper we investigate the one dimensional (1D) logarithmic diffusion
equation with nonlinear Robin boundary conditions, namely, \[ \left\{
\begin{array}{l} \partial_t u=\partial_{xx} \log u\quad \mbox{in}\quad
\left[-l,l\right]\times \left(0, \infty\right)\\ \displaystyle \partial_x
u\left(\pm l, t\right)=\pm 2\gamma u^{p}\left(\pm l, t\right), \end{array}
\right. \] where $\gamma$ is a constant. Let $u_0>0$ be a smooth function
defined on $\left[-l,l\right]$, and which satisfies the compatibility condition
$$\partial_x \log u_0\left(\pm l\right)= \pm 2\gamma u_0^{p-1}\left(\pm
l\right).$$ We show that for $\gamma > 0$, $p\leq \frac{3}{2}$ solutions to the
logarithmic diffusion equation above with initial data $u_0$ are global and
blow-up in infinite time, and for $p>2$ there is finite time blow-up. Also, we
show that in the case of $\gamma<0$, $p\geq \frac{3}{2}$, solutions to the
logarithmic diffusion equation with initial data $u_0$ are global and blow-down
in infinite time, but if $p\leq 1$ there is finite time blow-down. For some of
the cases mentioned above, and some particular families of examples, we provide
blow-up and blow-down rates. Our approach is partly based on studying the Ricci
flow on a cylinder endowed with a $\mathbb{S}^1$-symmetric metric. Then, we
bring our ideas full circle by proving a new long time existence result for the
Ricci flow on a cylinder without any symmetry assumption. Finally, we show a
blow-down result for the logarithmic diffusion equation on a disc.
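A minimal explicit finite-difference sketch of the 1D problem above (the discretization, the one-sided ghost-point treatment of the Robin condition, and the step sizes are our own illustrative choices, not taken from the paper):

```python
import numpy as np

def step_log_diffusion(u, dx, dt, gamma, p):
    """One explicit Euler step for u_t = (log u)_xx on [-l, l] with the
    nonlinear Robin condition u_x(+-l) = +-2*gamma*u**p at the ends,
    enforced via first-order ghost points. Requires u > 0 and a dt
    small relative to dx**2 * min(u) for stability."""
    # ghost values: u_x(-l) = -2*gamma*u**p and u_x(+l) = +2*gamma*u**p
    gl = u[0] + 2 * dx * gamma * u[0]**p
    gr = u[-1] + 2 * dx * gamma * u[-1]**p
    v = np.log(np.concatenate(([gl], u, [gr])))
    lap = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    return u + dt * lap
```

For constant initial data and gamma = 0 the step leaves u unchanged, while for gamma > 0 the boundary values grow, consistent with the blow-up regime described in the abstract.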
|
We present a dynamical mean-field study of antiferromagnetic magnons in the
one-, two-, and three-orbital Hubbard models on square and bcc lattices at
intermediate coupling strength. We investigate the effect of anisotropy
introduced by an external magnetic field or single-ion anisotropy. For the
latter we tune continuously between the easy-axis and easy-plane models. We
also analyze a model with spin-orbit coupling in a cubic site-symmetry setting.
The ordered states as well as the magnetic excitations are sensitive to even a
small breaking of SU(2) symmetry of the model and follow the expectations of
spin-wave theory as well as general symmetry considerations.
|
With Rydberg dipole interactions, a mesoscopic atomic ensemble may behave
like a two-level single atom, resulting in the so-called superatom picture.
It is potentially a strong candidate as a qubit in quantum information
science, especially for efficient coupling with single photons via collective
enhancement that is essential for building quantum internet to connect remote
quantum computers. Previously, preliminary studies have demonstrated the basic
concept of the Rydberg superatom, a single-photon source, and entanglement with
a single photon. However, a crucial element, the single-shot qubit measurement,
has remained missing. Here we realize the deterministic
measurement of a superatom qubit via photon burst in a single shot. We make use
of a low-finesse ring cavity to enhance the atom-photon interaction and obtain
an in-fiber retrieval efficiency of 44%. Harnessing dipole interaction between
two Rydberg levels, we may either create a sequence of multiple single photons
or nothing, conditioned on the initial qubit state. We achieve a single-shot
measurement fidelity of 93.2% within 4.8 µs. Our work complements the experimental
toolbox of harnessing Rydberg superatom for quantum information applications.
|