As of April 20, 2020, only a few new COVID-19 cases were being reported in China, whereas the rest of the world was still showing increases in the number of new cases. It is therefore of great importance to develop an efficient statistical model of COVID-19 spread, which could help in the global fight
against the virus. We propose a clustering-segmented autoregressive sigmoid
(CSAS) model to explore the space-time pattern of the log-transformed
infectious count. Four key characteristics are included in this CSAS model: unknown clusters, change points, stretched S-curves, and
autoregressive terms, in order to understand how this outbreak is spreading in
time and in space, to understand how the spread is affected by epidemic control
strategies, and to apply the model to updated data from an extended period of
time. We propose a nonparametric graph-based clustering method for discovering dissimilarities among the curve time series in space, supported by theoretical guarantees that hold under mild and easily verified conditions. We propose a very strict purity score that penalizes
overestimation of clusters. Simulations show that our nonparametric graph-based
clustering method is faster and more accurate than the parametric clustering
method regardless of the size of data sets. We provide a Bayesian information
criterion (BIC) to identify multiple change points and calculate a confidence
interval for a mean response. By applying the CSAS model to the collected data,
we can explain the differences between prevention and control policies in China
and selected countries.
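As a minimal illustration of the S-curve ingredient of such a model, the sketch below fits a stretched (Richards-type) sigmoid to synthetic log-transformed counts with scipy; the functional form, parameter names, and data are our own assumptions, not the paper's CSAS specification, which additionally includes clusters, change points, and autoregressive terms.

    # Hedged sketch: fit a stretched S-curve (Richards-type generalized
    # logistic) to log-transformed cumulative counts; data are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched_sigmoid(t, K, r, t0, nu):
        # nu stretches the approach to the plateau K.
        return K / (1.0 + np.exp(-r * (t - t0))) ** nu

    t = np.arange(60, dtype=float)
    y_log = stretched_sigmoid(t, K=10.0, r=0.2, t0=25.0, nu=0.7)
    y_log += 0.05 * np.random.default_rng(0).standard_normal(t.size)

    popt, _ = curve_fit(stretched_sigmoid, t, y_log, p0=[8.0, 0.1, 20.0, 1.0])
    print("fitted (K, r, t0, nu):", popt)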
|
In this paper, we study the quantum fluctuation dynamics in a Bose gas on a
torus $\Lambda=([-L/2,L/2]^3/\sim)$ that exhibits Bose-Einstein condensation,
beyond the leading order Hartree-Fock-Bogoliubov (HFB) fluctuations. Given a
mean-field Hamiltonian and Bose-Einstein condensate (BEC) with density $N$, we
extract a quantum Boltzmann type dynamics from a second-order Duhamel expansion
upon subtracting both the HFB dynamics and the BEC dynamics. Using a Fock-space
approach, we provide explicit error bounds. Given an approximately quasi-free
initial state, we determine the time evolution of the centered correlation
functions $\langle a\rangle$, $\langle aa\rangle-\langle a\rangle^2$, $\langle
a^+a\rangle-|\langle a\rangle|^2$ at mesoscopic time scales. For large but
finite $N$, we consider both the case of fixed system size $|\Lambda|\sim1$,
and the case $|\Lambda|\sim (\log(N)/\log\log(N))^{\frac78}$. In the case
$|\Lambda|\sim1$, we show that the Boltzmann collision operator contains
subleading terms that can become dominant, depending on time-dependent
coefficients assuming particular values in $\mathbb{Q}$; this phenomenon is
reminiscent of the Talbot effect. For the case $|\Lambda|\sim
(\log(N)/\log\log(N))^{\frac78}$, we prove that the collision operator is well
approximated by the expression predicted in the literature.
|
By inverting the time-dependent Kohn-Sham equation for a numerically exact
dynamics of the helium atom, we show that the dynamical step and peak features
of the exact correlation potential found previously in one-dimensional models
persist for real three-dimensional systems. We demonstrate that the Kohn-Sham
and true current-densities differ by a rotational component. The results have
direct implications for approximate TDDFT calculations of atoms and molecules
in strong fields, emphasizing the need to go beyond the adiabatic
approximation, and highlighting caution in quantitative use of the Kohn-Sham
current.
|
A large fraction of known exoplanets have short orbital periods where tidal
excitation of gravity waves within the host star causes the planets' orbits to
decay. We study the effects of tidal resonance locking, in which the planet
locks into resonance with a tidally excited stellar gravity mode. Because a
star's gravity mode frequencies typically increase as the star evolves, the
planet's orbital frequency increases in lockstep, potentially causing much
faster orbital decay than predicted by other tidal theories. Due to nonlinear
mode damping, resonance locking in Sun-like stars likely only operates for
low-mass planets ($M \lesssim 0.1 \, M_{\rm Jup}$), but in stars with
convective cores it can likely operate for all planetary masses. The orbital
decay timescale with resonance locking is typically comparable to the star's
main-sequence lifetime, corresponding to a wide range in effective stellar
quality factor ($10^3 \lesssim Q' \lesssim 10^9$), depending on the planet's
mass and orbital period. We make predictions for several individual systems and
examine the orbital evolution resulting from both resonance locking and
nonlinear wave dissipation. Our models demonstrate how short-period massive
planets can be quickly destroyed by nonlinear mode damping, while short-period
low-mass planets can survive, even though they undergo substantial inward tidal
migration via resonance locking.
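For orientation, the sketch below converts an effective stellar quality factor $Q'$ into an orbital decay timescale using the standard constant-phase-lag (constant-$Q'$) formula for a circular orbit; this is textbook tidal theory, not the paper's resonance-locking calculation, and the system parameters are illustrative.

    # Hedged sketch: constant-Q' tidal decay timescale for a circular orbit,
    # via the standard constant-phase-lag formula (not resonance locking).
    import numpy as np

    G = 6.674e-8                     # cgs units
    Msun, Rsun, Mjup = 1.989e33, 6.957e10, 1.898e30
    day, yr = 86400.0, 3.156e7

    def decay_timescale(Q, Mp=1.0 * Mjup, Ms=Msun, Rs=Rsun, P=3.0 * day):
        # a/|da/dt| with da/dt = -(9/2) (Mp/Ms) (Rs/a)^5 n a / Q'
        n = 2.0 * np.pi / P
        a = (G * Ms / n**2) ** (1.0 / 3.0)
        return (2.0 / 9.0) * Q * (Ms / Mp) * (a / Rs) ** 5 / n

    for Q in (1e3, 1e6, 1e9):
        print(f"Q'={Q:.0e}: t_decay = {decay_timescale(Q)/yr:.2e} yr")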
|
A vertex labeling of a hypergraph is sum distinguishing if it uses positive
integers and the sums of labels taken over the distinct hyperedges are
distinct. Let s(H) be the smallest integer N such that there is a
sum-distinguishing labeling of H with each label at most N. The largest value
of s(H) over all hypergraphs on n vertices and m hyperedges is denoted s(n,m).
We prove that s(n,m) is almost-quadratic in m as long as m is not too large.
More precisely, the following holds: If n < m < n^{O(1)} then s(n,m)= m^2/w(m),
where w(m) is a function that goes to infinity and is smaller than any
polynomial in m.
The parameter s(n,m) has close connections to several other graph and
hypergraph functions, such as the irregularity strength of hypergraphs. Our
result has several applications, notably:
1. We answer a question of Gyarfas et al. as to whether there are n-vertex hypergraphs with irregularity strength greater than 2n. In fact, we show that there are n-vertex hypergraphs with irregularity strength at least n^{2-o(1)}.
2. Our results imply that s*(n)=n^2/w(n) where s*(n) is the distinguishing
closed-neighborhood number, i.e., the smallest integer N such that any n-vertex
graph allows for a vertex labeling with positive integers at most N so that the
sums of labels on distinct closed neighborhoods of vertices are distinct.
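To make the definition concrete, here is a brute-force sketch that computes s(H) for a tiny hypergraph by exhaustive search; the example hypergraph is ours, and the exponential-time search is purely for illustration.

    # Hedged sketch: brute-force computation of s(H) for a small hypergraph.
    from itertools import product

    def is_sum_distinguishing(labels, hyperedges):
        # Sums over distinct hyperedges must all differ.
        sums = [sum(labels[v] for v in e) for e in hyperedges]
        return len(sums) == len(set(sums))

    def s_of_H(n, hyperedges):
        # Smallest N such that some labeling with labels in {1,...,N} works.
        N = 1
        while True:
            for labels in product(range(1, N + 1), repeat=n):
                if is_sum_distinguishing(labels, hyperedges):
                    return N
            N += 1

    H = [(0, 1), (1, 2), (0, 2), (0, 1, 2)]   # hyperedges on vertices 0,1,2
    print(s_of_H(3, H))                        # prints 3, e.g. labels (1,2,3)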
|
Molybdenum trioxide (MoO$_3$) in-plane anisotropy has increasingly attracted
the attention of the scientific community in the last few years. Many of the observed in-plane anisotropic properties stem from the anisotropic refractive index and elastic constants of the material, but a comprehensive analysis of these fundamental properties is still lacking. Here we employ Raman and
micro-reflectance measurements, using polarized light, to determine the angular
dependence of the refractive index of thin MoO$_3$ flakes and we study the
directional dependence of the MoO$_3$ Young's modulus using the buckling
metrology method. We find that MoO$_3$ displays one of the largest in-plane mechanical anisotropies reported for 2D materials so far.
|
We present a study on the relationship between the ratio of the depth of a
crater to its diameter and the diameter for lunar craters both on the maria and
on the highlands. We consider craters younger than 1.1 billion years in age,
i.e. of the Copernican period. The aim of this work is to improve our understanding of such relationships based on our new estimates of the craters' depths and diameters. Previous studies considered similar relationships for much older craters (up to 3.2 billion years). We calculated the depths of craters with
diameters from 10 to 100 km based on the altitude profiles derived from data
obtained by the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar
Reconnaissance Orbiter (LRO). The ratio h/D of the depth h of a crater to its
diameter D can differ by up to a factor of two for craters with almost the
same diameters. The linear and power approximations (regressions) of the
dependence of h/D on D were made for simple and complex Copernican craters
selected from the data from Mazrouei et al. (2019) and Losiak et al. (2015).
Separating highland craters into two groups based only on their dependences of h/D on D, we find that craters with D<18 km are mostly simple, although some complex craters can have diameters D>16 km. Depths of mare craters with
D<14 km are greater than 0.15D. Following Pike's (1981) classification, we
group mare craters of D<15 km as simple craters. Mare craters with 15<D<18 km
fit both approximation curves for simple and complex craters. Depths of mare
craters with D>18 km are in a better agreement with the approximation curve of
h/D vs. D for complex craters than for simple craters. At the same diameter, mare craters are deeper than highland craters when D is smaller than 30-40 km; for greater diameters, highland craters are deeper.
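As an illustration of such regressions, the sketch below fits a power law h/D = c D^b by least squares in log-log space; the depth and diameter values are synthetic placeholders, not the LOLA-derived measurements.

    # Hedged sketch: power-law regression of h/D on D via a log-log fit;
    # the values below are synthetic placeholders, illustrative only.
    import numpy as np

    D = np.array([12., 15., 20., 30., 45., 60., 80.])   # diameter, km
    h = np.array([2.3, 2.6, 2.9, 3.2, 3.6, 3.9, 4.2])   # depth, km

    ratio = h / D
    b, log_c = np.polyfit(np.log(D), np.log(ratio), 1)  # slope, intercept
    print(f"h/D ~ {np.exp(log_c):.3f} * D^{b:.3f}")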
|
The minimal $U(1)_X$ extension of the Standard Model (SM) is a well-motivated
new physics scenario, where the anomaly cancellation requirement dictates the
new neutral gauge boson ($Z^\prime$) couplings with the SM fermions in terms of
two scalar charges ($x_H$ and $x_\Phi$). In this paper, we investigate the SM
charged fermion pair production mechanism for different values of these scalar
charges in the $U(1)_X$ scenario at future electron-positron colliders, i.e.
$e^+e^-\to f\bar{f}$. Apart from the standard photon and $Z$ boson exchange for
this process, this model features a $t$-channel (or both $s$ and $t$-channel
for $f=e^-$) $Z^\prime$-boson exchange, which interferes with the SM processes.
Considering the dilepton and dijet signatures from the heavy resonance, we estimate the bounds on the $U(1)_X$ coupling $(g^\prime)$ and the $Z^\prime$ mass $(M_{Z^\prime})$. Using the LEP-II results and prospective
International Linear Collider (ILC) bounds on the effective scale for the four
fermion interaction we estimate the reach on $M_{Z^\prime}/g^\prime$ for
different center of mass energies. We study the angular distributions,
forward-backward $(\mathcal{A}_{\rm{FB}})$, left-right
$(\mathcal{A}_{\rm{LR}})$ and left-right forward-backward
$(\mathcal{A}_{\rm{LR, FB}})$ asymmetries of the $f\bar{f}$ final states which
can show substantial deviations from the SM results, even for a multi-TeV $Z'$.
This provides a powerful complementary way to probe the heavy $Z'$ parameter
space beyond the direct reach of the Large Hadron Collider (LHC), as well as an
effective way to determine the $U(1)_X$ charges.
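For reference, the asymmetries studied here follow the standard textbook definitions (our notation): \begin{equation} \mathcal{A}_{\rm FB}=\frac{\sigma(\cos\theta>0)-\sigma(\cos\theta<0)}{\sigma(\cos\theta>0)+\sigma(\cos\theta<0)}, \qquad \mathcal{A}_{\rm LR}=\frac{\sigma_L-\sigma_R}{\sigma_L+\sigma_R}, \end{equation} where $\theta$ is the scattering angle of the outgoing fermion and $\sigma_{L,R}$ denote cross sections for left- and right-polarized electron beams; $\mathcal{A}_{\rm LR,FB}$ combines the two binnings.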
|
In our previous study, we successfully reproduced the illusory motion of the
rotating snake illusion using deep neural networks incorporating predictive
coding theory. In the present study, we further examined the properties of the
networks using a set of 1500 images, including ordinary static images of
paintings and photographs and images of various types of motion illusions.
Results showed that the networks clearly distinguished illusory images from other images and reproduced illusory motion for various types of illusions, similar to human perception. Notably, the networks occasionally detected anomalous motion vectors even in ordinary static images in which humans perceive no illusory motion. Additionally, illusion-like designs with repeating
patterns were generated using areas where anomalous vectors were detected, and
psychophysical experiments were conducted, in which illusory motion perception
in the generated designs was detected. The observed inaccuracy of the networks
will provide useful information for further understanding information
processing associated with human vision.
|
In this paper, we propose a spectral-spatial graph reasoning network (SSGRN)
for hyperspectral image (HSI) classification. Concretely, the network contains two parts, a spatial graph reasoning subnetwork (SAGRN) and a spectral graph reasoning subnetwork (SEGRN), which capture the spatial and spectral graph contexts, respectively. Unlike previous approaches that implement superpixel segmentation on the original image or attempt to obtain category features under the guidance of the label image, we perform superpixel segmentation on intermediate features of the network to adaptively produce homogeneous regions and obtain effective descriptors. We then apply a similar idea in the spectral part, aggregating channels to generate spectral descriptors for capturing spectral graph contexts. All
graph reasoning procedures in SAGRN and SEGRN are achieved through graph
convolution. To guarantee the global perception ability of the proposed
methods, all adjacent matrices in graph reasoning are obtained with the help of
a non-local self-attention mechanism. Finally, by combining the extracted spatial and spectral graph contexts, we obtain the SSGRN and achieve high-accuracy classification. Extensive quantitative and qualitative experiments on three
public HSI benchmarks demonstrate the competitiveness of the proposed methods
compared with other state-of-the-art approaches.
|
Despite the rapid growth of online advertisement in developing countries,
existing highly over-parameterized Click-Through Rate (CTR) prediction models are difficult to deploy due to limited computing resources. In this paper, by bridging the CTR prediction task and tabular learning, we show that tabular learning models are more efficient and effective in CTR prediction than over-parameterized CTR prediction models.
Extensive experiments on eight public CTR prediction datasets show that tabular
learning models outperform twelve state-of-the-art CTR prediction models.
Furthermore, compared to over-parameterized CTR prediction models, tabular learning models can be trained quickly without expensive computing resources such as high-performance GPUs. Finally, through an A/B test on an actual
online application, we show that tabular learning models improve not only
offline performance but also the CTR of real users.
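A minimal sketch of the tabular-learning baseline idea: a gradient-boosted tree classifier trained on a synthetic click table with scikit-learn. The data, features, and model choice are illustrative assumptions, not the paper's benchmark setup.

    # Hedged sketch: gradient-boosted trees on a synthetic tabular click log.
    import numpy as np
    from sklearn.ensemble import HistGradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10000, 20))                 # stand-in feature table
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=10000) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = HistGradientBoostingClassifier().fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))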
|
In this paper, we propose a novel spoken-text-style conversion method that
can simultaneously execute multiple style conversion modules such as
punctuation restoration and disfluency deletion without preparing matched
datasets. In practice, transcriptions generated by automatic speech recognition
systems are not highly readable because they often include many disfluencies
and do not include punctuation marks. To improve their readability, multiple
spoken-text-style conversion modules that individually model a single
conversion task are cascaded because matched datasets that simultaneously
handle multiple conversion tasks are often unavailable. However, cascading is sensitive to the order of the tasks because conversion errors propagate along the chain. Moreover, the computational cost of cascading is necessarily higher than that of a single conversion. To execute multiple conversion tasks simultaneously without
preparing matched datasets, our key idea is to distinguish individual
conversion tasks using the on-off switch. In our proposed zero-shot joint
modeling, we switch the individual tasks using multiple switching tokens,
enabling us to utilize a zero-shot learning approach to executing simultaneous
conversions. Our experiments on joint modeling of disfluency deletion and
punctuation restoration demonstrate the effectiveness of our method.
|
In this paper we give a few expressions and asymptotics of ruin probabilities for a Markov-modulated risk process in various regimes of the time horizon, the initial reserves, and the claim size distribution. We also consider a few versions of the ruin time.
|
Cadmium Zinc Telluride Imager (CZTI) onboard AstroSat has been a prolific
Gamma-Ray Burst (GRB) monitor. While the 2-pixel Compton scattered events (100
- 300 keV) are used to extract sensitive spectroscopic information, the
inclusion of the low-gain pixels (around 20% of the detector plane) after
careful calibration extends the energy range of Compton energy spectra to 600
keV. The new feature also allows single-pixel spectroscopy of the GRBs to the
sub-MeV range which is otherwise limited to 150 keV. We also introduced a new
noise rejection algorithm in the analysis ('Compton noise'). These new
additions not only enhances the spectroscopic sensitivity of CZTI, but the
sub-MeV spectroscopy will also allow proper characterization of the GRBs not
detected by Fermi. This article describes the methodology of single, Compton
event and veto spectroscopy in 100 - 600 keV for the GRBs detected in the first
year of operation. CZTI in last five years has detected around 20 bright GRBs.
The new methodologies, when applied on the spectral analysis for this large
sample of GRBs, has the potential to improve the results significantly and help
in better understanding the prompt emission mechanism.
|
Spin-orbit interactions, which couple the spin of a particle with its momentum degrees of freedom, lie at the center of spintronic applications. Of special interest in semiconductor physics are the Rashba and Dresselhaus spin-orbit couplings (SOC). When equal in strength, the Rashba and Dresselhaus fields result in SU(2) spin rotation symmetry and the emergence of the persistent spin helix (PSH), so far investigated only for charge carriers in semiconductor quantum wells. Recently, a synthetic Rashba-Dresselhaus Hamiltonian was shown to
describe cavity photons confined in a microcavity filled with optically
anisotropic liquid crystal. In this work, we present a purely optical
realisation of two types of spin patterns corresponding to PSH and the
Stern-Gerlach experiment in such a cavity. We show how the symmetry of the
Hamiltonian results in spatial oscillations of the spin orientation of photons
travelling in the plane of the cavity.
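For orientation, one common convention for such a Hamiltonian is (our notation; sign conventions vary in the literature) \begin{equation} H=\frac{\hbar^2 k^2}{2m}+\alpha(k_y\sigma_x-k_x\sigma_y)+\beta(k_x\sigma_x-k_y\sigma_y), \end{equation} where $\alpha$ and $\beta$ are the Rashba and Dresselhaus coupling strengths; at $\alpha=\pm\beta$ the spin rotation symmetry becomes SU(2) and the persistent spin helix emerges.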
|
Monocular 3D object detection is a key problem for autonomous vehicles, as it
provides a solution with simple configuration compared to typical multi-sensor
systems. The main challenge in monocular 3D detection lies in accurately
predicting object depth, which must be inferred from object and scene cues due
to the lack of direct range measurement. Many methods attempt to directly
estimate depth to assist in 3D detection, but show limited performance as a
result of depth inaccuracy. Our proposed solution, Categorical Depth
Distribution Network (CaDDN), uses a predicted categorical depth distribution
for each pixel to project rich contextual feature information to the
appropriate depth interval in 3D space. We then use the computationally
efficient bird's-eye-view projection and single-stage detector to produce the
final output bounding boxes. We design CaDDN as a fully differentiable
end-to-end approach for joint depth estimation and object detection. We
validate our approach on the KITTI 3D object detection benchmark, where we rank
1st among published monocular methods. We also provide the first monocular 3D detection results on the newly released Waymo Open Dataset. A code release for CaDDN is publicly available.
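A hedged sketch of the core projection step: weighting 2D image features by a per-pixel categorical depth distribution to form frustum features. Tensor shapes and names are our own, not CaDDN's actual implementation.

    # Hedged sketch: per-pixel categorical depth distribution times image
    # features gives frustum features; shapes are illustrative.
    import torch

    B, C, H, W, D = 2, 64, 24, 80, 40          # batch, channels, image, depth bins
    features = torch.randn(B, C, H, W)          # 2D image features
    depth_logits = torch.randn(B, D, H, W)      # per-pixel depth-bin scores

    depth_probs = depth_logits.softmax(dim=1)   # categorical depth distribution
    # Outer product over the depth axis: (B, C, D, H, W) frustum features.
    frustum = features.unsqueeze(2) * depth_probs.unsqueeze(1)
    print(frustum.shape)                        # torch.Size([2, 64, 40, 24, 80])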
|
We consider warm inflation with a Dirac-Born-Infeld (DBI) kinetic term in
which both the nonequilibrium dissipative particle production and the sound
speed parameter slow the motion of the inflaton field. We find that a low sound
speed parameter removes, or at least strongly suppresses, the growing function appearing in the scalar curvature power spectrum of warm inflation, which arises due to the temperature dependence of the dissipation coefficient. As a
consequence of that, a low sound speed helps to push warm inflation into the
strong dissipation regime, which is an attractive regime from a model building
and phenomenological perspective. In turn, the strong dissipation regime of
warm inflation softens the microscopic theoretical constraints on cold DBI
inflation. The present findings, along with the recent results from swampland
criteria, give a strong hint that warm inflation may consistently be embedded
into string theory.
|
We deform the moment map picture on the space of symplectic connections on a
symplectic manifold. To do that, we study a vector bundle of Fedosov star
product algebras on the space of symplectic connections. We describe a natural
formal connection on this bundle adapted to the star product algebras on the
fibers. We study its curvature and show the star product trace of the curvature
is a formal symplectic form on the space of symplectic connections. The action
of Hamiltonian diffeomorphisms on symplectic connections preserves the formal
symplectic structure and we show the star product trace can be interpreted as a
formal moment map for this action. Finally, we apply this picture to study
automorphisms of star products and Hamiltonian diffeomorphisms.
|
We consider variants of the classical Frank-Wolfe algorithm for constrained
smooth convex minimization, that instead of access to the standard oracle for
minimizing a linear function over the feasible set, have access to an oracle
that can find an extreme point of the feasible set that is closest in Euclidean
distance to a given vector. We first show that for many feasible sets of
interest, such an oracle can be implemented with the same complexity as the
standard linear optimization oracle. We then show that with such an oracle we
can design new Frank-Wolfe variants which enjoy significantly improved
complexity bounds in case the set of optimal solutions lies in the convex hull
of a subset of extreme points with small diameter (e.g., a low-dimensional face
of a polytope). In particular, for many $0\text{--}1$ polytopes, under
quadratic growth and strict complementarity conditions, we obtain the first
linearly convergent variant with rate that depends only on the dimension of the
optimal face and not on the ambient dimension.
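As a concrete instance of the first claim, for the 0-1 hypercube the nearest-extreme-point oracle reduces to coordinate-wise rounding, at the same O(n) cost as the standard linear minimization oracle; a minimal sketch (our example, not covering general polytopes):

    # Hedged sketch: nearest-extreme-point and linear minimization oracles
    # for the 0-1 hypercube; both cost O(n).
    import numpy as np

    def nearest_vertex_hypercube(y):
        # argmin over v in {0,1}^n of ||v - y||_2 decomposes per coordinate:
        # pick v_i = 1 iff y_i >= 0.5.
        return (np.asarray(y) >= 0.5).astype(float)

    def lmo_hypercube(c):
        # Standard oracle: argmin over v in {0,1}^n of <c, v>.
        return (np.asarray(c) < 0).astype(float)

    y = np.array([0.2, 0.7, 0.5, -0.3])
    print(nearest_vertex_hypercube(y), lmo_hypercube(y))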
|
This paper proposes a robust beamforming scheme to enhance the physical layer
security (PLS) of multicast transmission in a cognitive satellite and aerial
network (CSAN) operating in the millimeter wave frequency band. Based on
imperfect channel state information (CSI) of both eavesdroppers (Eves) and
primary users (PUs), we maximize the minimum achievable secrecy rate (ASR) of
the secondary users (SUs) in the aerial network under the constraints of the
interference to the PUs in the satellite network, the quality of service (QoS)
requirements of the SUs and per-antenna power budget of the aerial platform. To
tackle this mathematically intractable problem, we first introduce an auxiliary
variable and outage constraints to simplify the complex objective function. We
then convert the non-convex outage constraints into deterministic forms and
adopt penalty function approach to obtain a semi-definite problem such that it
can be solved in an iterative fashion. Finally, simulation results show that, as the transmit power increases, the minimum ASR of the SUs obtained with the proposed beamforming scheme closely approximates the optimal value.
|
When transferring a control policy from simulation to a physical system, the
policy needs to be robust to variations in the dynamics to perform well.
Commonly, the optimal policy overfits to the approximate model and the corresponding state distribution, often resulting in failure to transfer under distributional shifts. In this paper, we present Robust Fitted Value
Iteration, which uses dynamic programming to compute the optimal value function
on the compact state domain and incorporates adversarial perturbations of the
system dynamics. The adversarial perturbations encourage an optimal policy that
is robust to changes in the dynamics. Utilizing the continuous-time perspective
of reinforcement learning, we derive the optimal perturbations for the states,
actions, observations and model parameters in closed-form. Notably, the
resulting algorithm does not require discretization of states or actions.
Therefore, the optimal adversarial perturbations can be efficiently
incorporated in the min-max value function update. We apply the resulting
algorithm to the physical Furuta pendulum and cartpole. By changing the masses
of the systems we evaluate the quantitative and qualitative performance across
different model parameters. We show that robust value iteration is more robust than a deep reinforcement learning algorithm and the non-robust version of the algorithm. Videos of the experiments are shown at
https://sites.google.com/view/rfvi
|
We present a systematic study on multilingual and cross-lingual intent
detection from spoken data. The study leverages a new resource put forth in
this work, termed MInDS-14, a first training and evaluation resource for the
intent detection task with spoken data. It covers 14 intents extracted from a
commercial system in the e-banking domain, associated with spoken examples in
14 diverse language varieties. Our key results indicate that combining machine
translation models with state-of-the-art multilingual sentence encoders (e.g.,
LaBSE) can yield strong intent detectors in the majority of target languages
covered in MInDS-14, and offer comparative analyses across different axes:
e.g., zero-shot versus few-shot learning, translation direction, and impact of
speech recognition. We see this work as an important step towards more
inclusive development and evaluation of multilingual intent detectors from
spoken data, in a much wider spectrum of languages compared to prior work.
|
This work is a probabilistic study of the 'primes' of the Cram\'er model. We
prove that there exists a set of integers $\mathcal S$ of density 1 such that
\begin{equation}\liminf_{ \mathcal S\ni n\to\infty} (\log n)\mathbb{P} \{S_n\
\hbox{prime} \} \ge \frac{1}{\sqrt{2\pi e}\, }, \end{equation} and that for
$b>\frac12$, the formula \begin{equation} \mathbb{P} \{S_n\ \text{prime}\, \}
\, =\, \frac{ (1+ o( 1) )}{ \sqrt{2\pi B_n } } \int_{m_n-\sqrt{ 2bB_n\log
n}}^{m_n+\sqrt{ 2bB_n\log n}} \, e^{-\frac{(t - m_n)^2}{ 2 B_n } }\, {\rm
d}\pi(t), \end{equation} in which $m_n=\mathbb{E} S_n,B_n={\rm Var }\,S_n$,
holds true for all $n\in \mathcal S$, $n\to \infty$. Further we prove that for
any $0<\eta<1$, and all $n$ large enough and $ \zeta_0\le \zeta\le \exp\big\{
\frac{c\log n}{\log\log n}\big\}$, letting $S'_n= \sum_{j= 8}^n \xi_j$,
\begin{eqnarray*} \mathbb{P}\big\{ S'_n\hbox{\ $\zeta$-quasiprime}\big\} \,\ge
\, (1-\eta) \frac{ e^{-\gamma} }{ \log \zeta }, \end{eqnarray*} according to
Pintz's terminology, where $c>0$ and $\gamma$ is Euler's constant. We also test
which infinite sequences of primes are ultimately avoided by the 'primes' of
the Cram\'er model, with probability 1. Moreover, we show that the Cram\'er model has implications for the Prime Number Theorem, since it predicts that the error term is sensitive to subsequences. We obtain sharp results on the length and the number of occurrences of intervals $I$ such that, for some $z>0$,
\begin{equation}\sup_{n\in I} \frac{|S_n-m_n|}{ \sqrt{B_n}}\le z,
\end{equation} which are tied with the spectrum of the Sturm-Liouville
equation.
|
We give an explicit correspondence between Costantino$-$L\^e's stated skein
algebras, which are defined via explicit relations on stated tangles, and
Gunningham$-$Jordan$-$Safronov's internal skein algebras, a.k.a
Ben-Zvi$-$Brochier$-$Jordan's moduli algebras, which are defined as internal
endomorphism algebras. For the sake of accessibility, we do not use
factorisation homology but compute it using Cooke's skein categories. Stated
skein algebras are defined on surfaces with multiple boundary edges and we
generalise internal skein algebras to this context. We prove excision properties of multi-edge internal skein algebras using excision properties of skein categories, agreeing with the excision properties of stated skein algebras when $\mathcal{V} = \mathcal{U}_{q^2}(SL_2)\text{--}mod^{fin}$.
|
Active longitudinal beam optics can help FEL facilities achieve cutting-edge performance by optimizing the beam to produce multi-color pulses, suppress caustics, or support attosecond lasing. As the next generation of
superconducting accelerators comes online, there is a need to find new elements
which can both operate at high beam power and offer multiplexing capabilities at MHz repetition rates. Laser heater shaping promises to satisfy
both criteria by imparting a programmable slice-energy spread on a shot-by-shot
basis. We use a simple kinetic analysis to show how control of the slice energy
spread translates into control of the bunch current profile, and then we
present a collection of start-to-end simulations at LCLS-II in order to
illustrate the technique.
|
A general-purpose model combining concepts from rational continuum mechanics,
fracture and damage mechanics, plasticity, and poromechanics is devised in
Eulerian coordinates, involving objective time derivatives. The model complies
with mass, momentum, and energy conservation as well as entropy inequality and
objectivity. It is devised to cover many diverse phenomena, specifically
rupture of existing lithospheric faults, tectonic earthquakes, generation and
propagation of seismic waves, birth of new tectonic faults, or volcanic
activity, aseismic creep, folding of rocks, aging of rocks, long-distance
saturated water transport and flow in poroelastic rocks, melting of rocks and
formation of magma chambers, or solidification of magma.
|
We present the results of a measurement of the isotopic concentrations and atom number ratio of a double-sided actinide target with alpha-spectroscopy and mass
spectrometry. The double-sided actinide target, with primarily Pu-239 on one
side and U-235 on the other, was used in the fission Time Projection Chamber
(fissionTPC) for a measurement of the neutron-induced fission cross-section
ratio between the two isotopes. The measured atom number ratio is intended to
provide an absolute normalization of the measured fission cross-section ratio.
The Pu-239/U-235 atom number ratio was measured with a combination of mass
spectrometry and alpha-spectroscopy with a planar silicon detector with
uncertainties of less than 1%.
|
Therapeutic protons acting on O18-substituted thymidine increase cytotoxicity
in radio-resistant human cancer cells. We consider here the physics behind the
irradiation during proton beam therapy and diagnosis using O18-enriched thymine
in DNA, with attention to the effect of the presence of thymine-18 on cancer
cell death.
|
We present a collaborative visual simultaneous localization and mapping
(SLAM) framework for service robots. With an edge server maintaining a map
database and performing global optimization, each robot can register to an
existing map, update the map, or build new maps, all with a unified interface
and low computation and memory cost. We design an elegant communication
pipeline to enable real-time information sharing between robots. With a novel
landmark organization and retrieval method on the server, each robot can
acquire landmarks predicted to be in its view, to augment its local map. The
framework is general enough to support both RGB-D and monocular cameras, as
well as robots with multiple cameras, taking the rigid constraints between
cameras into consideration. The proposed framework has been fully implemented
and verified with public datasets and live experiments.
|
Single photons exhibit inherently quantum and unintuitive properties such as
the Hong-Ou-Mandel effect, demonstrating their bosonic and quantized nature,
yet at the same time may correspond to single excitations of spatial or
temporal modes with a very complex structure. Those two features are rarely
seen together. Here we experimentally demonstrate how the Hong-Ou-Mandel effect
can be spectrally resolved and harnessed to characterize a complex temporal mode of a single photon \textendash{} a zero-area pulse \textendash{} obtained
via a resonant interaction of a terahertz-bandwidth photon with a narrow
gigahertz-wide atomic transition of atomic vapor. The combination of bosonic
quantum behavior with bandwidth-mismatched light-atom interaction is of
fundamental importance for a deeper understanding of both phenomena, as well as
their engineering offering applications in the characterization of ultra-fast
transient processes.
|
We present a new technique to obtain outer-bounds on the capacity region of
networks with ultra low-rate feedback. We establish a connection between the
achievable rates in the forward channel and the minimum distortion that can be
attained over the feedback channel.
|
When discussing future concerns within socio-technical systems in work
contexts, we often find descriptions of missed technology development and
integration. The experience of technology that fails whilst being integrated is
often rooted in dysfunctional epistemological approaches within the research and development process, ultimately leading to lasting distrust of technology in work contexts. This is true both for organizations that integrate new technologies and for organizations that invent them.
Organizations in which we find failed technology development and integrations
are, in their very nature, social systems. Nowadays, those complex social
systems act within an even more complex environment. This urges the development
of new anticipation methods for technology development and integration.
Gathering of and dealing with complex information in the described context is
what we call Anticipation Next. This explorative work uses existing literature
from the adjoining research fields of system theory, organizational theory, and
socio-technical research to combine various concepts. We deliberately aim at a
networked way of thinking in scientific contexts and thus combine
multidisciplinary subject areas in one paper to present an innovative way to
deal with multi-faceted problems in a human-centred way. We end with suggesting
a conceptual framework that should be used in the very early stages of
technology development and integration in work contexts.
|
The photovoltaic effect of neutral atoms driven by inhomogeneous light in an open double-trap system is studied theoretically. Using an asymmetric external driving field in place of the original asymmetric chemical potential of the atoms, we create a polarization of the atom population in the double-trap system. This polarization of the atom number distribution induces a net current of atoms, which acts as the collected carriers in the cell. The cell can work even under partially coherent light. The whole configuration is described by a quantum master equation that accounts for weak tunneling between the system and its reservoirs at finite temperature. The model of neutral atoms could in principle be extended to more general quantum particles.
|
Deep learning-based segmentation methods are vulnerable to unforeseen data
distribution shifts during deployment, e.g., changes of image appearance or contrast caused by different scanners, unexpected imaging artifacts, etc. In
this paper, we present a cooperative framework for training image segmentation
models and a latent space augmentation method for generating hard examples.
Both contributions improve model generalization and robustness with limited
data. The cooperative training framework consists of a fast-thinking network
(FTN) and a slow-thinking network (STN). The FTN learns decoupled image
features and shape features for image reconstruction and segmentation tasks.
The STN learns shape priors for segmentation correction and refinement. The two
networks are trained in a cooperative manner. The latent space augmentation
generates challenging examples for training by masking the decoupled latent
space in both channel-wise and spatial-wise manners. We performed extensive
experiments on public cardiac imaging datasets. Using only 10 subjects from a
single site for training, we demonstrated improved cross-site segmentation
performance and increased robustness against various unforeseen imaging
artifacts compared to strong baseline methods. In particular, cooperative training with latent space data augmentation yields a 15% improvement in average Dice score compared to a standard training method.
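A minimal sketch of the latent space augmentation idea: masking a latent code channel-wise and spatially to create harder training examples. The shapes, masking rates, and zero-fill choice are illustrative assumptions, not the paper's exact scheme.

    # Hedged sketch: channel-wise and spatial masking of a latent code.
    import numpy as np

    def mask_latent(z, p_channel=0.2, p_spatial=0.2, rng=None):
        # z: latent tensor of shape (channels, height, width)
        rng = rng or np.random.default_rng()
        z = z.copy()
        ch_mask = rng.random(z.shape[0]) < p_channel   # drop whole channels
        z[ch_mask] = 0.0
        sp_mask = rng.random(z.shape[1:]) < p_spatial  # drop spatial positions
        z[:, sp_mask] = 0.0
        return z

    z = np.random.default_rng(0).normal(size=(16, 8, 8))
    z_aug = mask_latent(z, rng=np.random.default_rng(1))
    print("fraction masked:", (z_aug == 0).mean())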
|
A late (t $\sim$ 1,500 days) multi-wavelength (UV, optical, IR, and X-ray)
flare was found in PS1-10adi, a tidal disruption event (TDE) candidate that
took place in an active galactic nucleus (AGN). TDEs usually involve
super-Eddington accretion, which drives fast mass outflow (disk wind). So here
we explore a possible scenario that such a flare might be produced by the
interaction of the disk wind with a dusty torus for TDEs in AGN. Due to the
high velocity of the disk wind, strong shocks will emerge and convert the bulk
of the kinetic energy of the disk wind to radiation. We calculate the dynamics
and then predict the associated radiation signatures, taking into account the
widths of the wind and torus. We compare our model with the bolometric light
curve of the late flare in PS1-10adi constructed from observations. We find
from our modeling that the disk wind has a total kinetic energy of about
$10^{51}$ erg and a velocity of 0.1 c (i.e., a mass of 0.3 $M_{\odot}$); the
gas number density of the clouds in the torus is $3\times 10^{7}$ $\rm
cm^{-3}$. Observation of such a late flare can be evidence of the disk wind in TDEs and can be used as a tool to explore the nuclear environment of the host galaxy.
|
During a solar eclipse the solar irradiance reaching the top-of-atmosphere
(TOA) is reduced in the Moon shadow. The solar irradiance is commonly measured
by Earth observation satellites before the start of the solar eclipse and is
not corrected for this reduction, which results in a decrease of the computed
TOA reflectances. Consequently, air quality products that are derived from TOA
reflectance spectra, such as the ultraviolet (UV) Absorbing Aerosol Index
(AAI), are distorted or undefined in the shadow of the Moon. The availability
of air quality satellite data in the penumbral and antumbral shadow during
solar eclipses, however, is of particular interest to users studying the
atmospheric response to solar eclipses. Given the time and location of a point
on the Earth's surface, we explain how to compute the obscuration during a
solar eclipse taking into account wavelength-dependent solar limb darkening.
With the calculated obscuration fractions, we restore the TOA reflectances and
the AAI in the penumbral shadow during the annular solar eclipses on 26
December 2019 and 21 June 2020 measured by the TROPOMI/S5P instrument. In the
corrected products, the signature of the Moon shadow disappeared, but only if
wavelength-dependent solar limb darkening is taken into account. We conclude
that the correction method of this paper can be used to detect real AAI rising
phenomena during a solar eclipse and has the potential to restore any other
product that is derived from TOA reflectance spectra. This would resolve the
solar eclipse anomalies in satellite air quality measurements and would allow
for studying the effect of the eclipse obscuration on the composition of the
Earth's atmosphere from space.
|
Absorption spectroscopy studies the absorption of electromagnetic waves in a material sample under test. The interacting electromagnetic wave can propagate in free space or in a waveguide. The waveguide-based absorption spectroscopy method has several advantages compared to the free-space setup. The effect of the waveguide cross section on the interaction between the waveguide mode and the sample can be expressed by a factor called the interaction factor. In this article, a new formulation for the interaction factor is
derived. It is shown that this factor is inversely proportional to the energy
velocity of the waveguide mode.
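In compact form, the result can be stated as (our notation) \begin{equation} K \propto \frac{1}{v_E}, \end{equation} where $K$ is the interaction factor and $v_E$ is the energy velocity of the waveguide mode, so slow-wave modes interact more strongly with the sample.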
|
Through the use of examples, we explain one way in which applied topology has
evolved since the birth of persistent homology in the early 2000s. The first
applications of topology to data emphasized the global shape of a dataset, such
as the three-circle model for $3 \times 3$ pixel patches from natural images,
or the configuration space of the cyclo-octane molecule, which is a sphere with
a Klein bottle attached via two circles of singularity. In these studies of
global shape, short persistent homology bars are disregarded as sampling noise.
More recently, however, persistent homology has been used to address questions
about the local geometry of data. For instance, how can local geometry be
vectorized for use in machine learning problems? Persistent homology and its
vectorization methods, including persistence landscapes and persistence images,
provide popular techniques for incorporating both local geometry and global
topology into machine learning. Our meta-hypothesis is that the short bars are
as important as the long bars for many machine learning tasks. In defense of
this claim, we survey applications of persistent homology to shape recognition,
agent-based modeling, materials science, archaeology, and biology.
Additionally, we survey work connecting persistent homology to geometric
features of spaces, including curvature and fractal dimension, and various
methods that have been used to incorporate persistent homology into machine
learning.
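As an example of such pipelines, here is a hedged sketch using the third-party ripser and persim packages to compute persistence diagrams of a noisy circle and vectorize its H1 diagram as a persistence image; the point cloud and parameters are our illustrative choices.

    # Hedged sketch: persistence diagrams via ripser, vectorized as a
    # persistence image via persim (both third-party packages).
    import numpy as np
    from ripser import ripser
    from persim import PersistenceImager

    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, 200)
    X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

    dgms = ripser(X)['dgms']          # H0 and H1 diagrams of a noisy circle
    pimgr = PersistenceImager(pixel_size=0.1)
    pimgr.fit(dgms[1])
    img = pimgr.transform(dgms[1])    # array usable as an ML feature vector
    print(img.shape)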
|
The training of deep neural networks (DNNs) is usually memory-hungry due to
the limited device memory capacity of DNN accelerators. Characterizing the
memory behaviors of DNN training is critical for optimizing device memory pressure. In this work, we pinpoint the memory behaviors of each device memory
block of GPU during training by instrumenting the memory allocators of the
runtime system. Our results show that the memory access patterns of device
memory blocks are stable and follow an iterative fashion. These observations
are useful for the future optimization of memory-efficient training from the
perspective of raw memory access patterns.
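A coarse analogue of such instrumentation, using PyTorch's built-in caching-allocator counters to log per-iteration memory (requires a CUDA device); the paper instruments the runtime's allocators at the level of individual memory blocks, so this sketch only approximates that methodology.

    # Hedged sketch: per-iteration GPU memory tracking via PyTorch's
    # caching-allocator counters (coarser than allocator instrumentation).
    import torch

    model = torch.nn.Linear(1024, 1024).cuda()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for step in range(3):
        x = torch.randn(256, 1024, device="cuda")
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(step,
              torch.cuda.memory_allocated(),      # bytes in live blocks
              torch.cuda.max_memory_allocated())  # peak since start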
|
To identify the causes of performance problems or to predict process
behavior, it is essential to have correct and complete event data. This is
particularly important for distributed systems with shared resources, e.g., one
case can block another case competing for the same machine, leading to
inter-case dependencies in performance. However, due to a variety of reasons,
real-life systems often record only a subset of all events taking place. To
understand and analyze the behavior and performance of processes with shared
resources, we aim to reconstruct bounds for the timestamps of events in a case that must have happened but were not recorded, by inference over events in other cases in the system. We formulate and solve the problem by systematically
introducing multi-entity concepts in event logs and process models. We
introduce a partial-order based model of a multi-entity event log and a
corresponding compositional model for multi-entity processes. We define
PQR-systems as a special class of multi-entity processes with shared resources
and queues. We then study the problem of inferring from an incomplete event log
unobserved events and their timestamps that are globally consistent with a
PQR-system. We solve the problem by reconstructing unobserved traces of
resources and queues according to the PQR-model and derive bounds for their
timestamps using a linear program. While the problem is illustrated for
material handling systems like baggage handling systems in airports, the
approach can be applied to other settings where recording is incomplete. The
ideas have been implemented in ProM and were evaluated using both synthetic and
real-life event logs.
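A toy sketch of the final bounding step: the tightest lower and upper bounds on one unobserved timestamp consistent with linear precedence constraints, obtained from two small LPs with scipy. The constraints are invented for illustration and are not derived from an actual PQR-model.

    # Hedged sketch: bound an unobserved timestamp t_u with two LPs.
    from scipy.optimize import linprog

    # Illustrative constraints: t_u >= 3.0 + 0.5 (after an observed event plus
    # a minimal service time) and t_u <= 7.0 (before the next observed event).
    A_ub = [[-1.0], [1.0]]
    b_ub = [-(3.0 + 0.5), 7.0]

    lower = linprog(c=[1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)])
    upper = linprog(c=[-1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)])
    print("t_u in [", lower.fun, ",", -upper.fun, "]")   # [3.5, 7.0]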
|
The next frontier for the Internet, led by innovations in mobile computing, in particular 5G, together with blockchains' transparency, immutability, provenance, and authenticity, indicates the potential of running a new generation of applications on the mobile internet. A 5G-enabled blockchain system is structured as a hierarchy and needs to deal with different
challenges such as maintaining blockchain ledger at different spatial domains
and various levels of the networks, efficient processing of cross-domain
transactions, establishing consensus among heterogeneous nodes, and supporting
delay-tolerant mobile transactions. In this paper, we present Saguaro, a
hierarchical permissioned blockchain designed specifically for Internet-scale
mobile networks. Saguaro benefits from the hierarchical structure of mobile
network infrastructure to address the aforementioned challenges. Our extensive
experimental results demonstrate the high potential of Saguaro as the first 5G-enabled permissioned blockchain system in the community.
|
In this paper, we present CogNet, a knowledge base (KB) dedicated to
integrating three types of knowledge: (1) linguistic knowledge from FrameNet,
which schematically describes situations, objects and events. (2) world
knowledge from YAGO, Freebase, DBpedia and Wikidata, which provides explicit
knowledge about specific instances. (3) commonsense knowledge from ConceptNet,
which describes implicit general facts. To model these different types of
knowledge consistently, we introduce a three-level unified frame-styled
representation architecture. To integrate free-form commonsense knowledge with
other structured knowledge, we propose a strategy that combines automated
labeling and crowdsourced annotation. At present, CogNet integrates 1,000+
semantic frames from linguistic KBs, 20,000,000+ frame instances from world
KBs, as well as 90,000+ commonsense assertions from commonsense KBs. All these data can be easily queried and explored on our online platform, and are free to download in RDF format under a CC-BY-SA 4.0 license. The demo and data are available at http://cognet.top/.
|
In this paper, we introduce a proximal-proximal majorization-minimization
(PPMM) algorithm for nonconvex tuning-free robust regression problems. The
basic idea is to apply the proximal majorization-minimization algorithm to
solve the nonconvex problem with the inner subproblems solved by a sparse
semismooth Newton (SSN) method based proximal point algorithm (PPA). We must
emphasize that the main difficulty in the design of the algorithm lies in how
to overcome the singularity of the inner subproblem. Furthermore, we prove that the PPMM algorithm converges to a d-stationary point. Owing to the Kurdyka-Lojasiewicz (KL) property of the problem, we also present the convergence rate of the PPMM algorithm. Numerical experiments demonstrate that
our proposed algorithm outperforms the existing state-of-the-art algorithms.
|
Compiling commonsense knowledge is traditionally an AI topic approached by
manual labor. Recent advances in web data processing have enabled automated
approaches. In this demonstration we will showcase three systems for automated
commonsense knowledge base construction, highlighting each time one aspect of
specific interest to the data management community. (i) We use Quasimodo to
illustrate knowledge extraction systems engineering, (ii) Dice to illustrate
the role that schema constraints play in cleaning fuzzy commonsense knowledge,
and (iii) Ascent to illustrate the relevance of conceptual modelling. The demos
are available online at https://quasimodo.r2.enst.fr,
https://dice.mpi-inf.mpg.de and ascent.mpi-inf.mpg.de.
|
We consider random singlet phases of spin-$\frac{1}{2}$, random,
antiferromagnetic spin chains, in which the universal leading-order divergence
$\frac{\ln 2}{3}\ln\ell$ of the average entanglement entropy of a block of
$\ell$ spins, as well as the closely related leading term $\frac{2}{3}l^{-2}$
in the distribution of singlet lengths are well known from the strong-disorder
renormalization group (SDRG) method. Here, we address the question of how large
the subleading terms of the above quantities are. By an analytical calculation
performed along a special SDRG trajectory of the random XX chain, we identify a
series of integer powers of $1/l$ in the singlet-length distribution with the
subleading term $\frac{4}{3}l^{-3}$. Our numerical SDRG analysis shows that,
for the XX fixed point, the subleading term is generally $O(l^{-3})$ with a
non-universal coefficient and also reveals terms with half-integer powers:
$l^{-7/2}$ and $l^{-5/2}$ for the XX and XXX fixed points, respectively. We
also present how the singlet lengths originating in the SDRG approach can be
interpreted and calculated in the XX chain from the one-particle states of the
equivalent free-fermion model. These results imply that the subleading term
next to the logarithmic one in the entanglement entropy is $O(\ell^{-1})$ for
the XX fixed point and $O(\ell^{-1/2})$ for the XXX fixed point with
non-universal coefficients. For the XX model, where a comparison with exact
diagonalization is possible, the order of the subleading term is confirmed but
we find that the SDRG fails to provide the correct non-universal coefficient.
|
We consider certain systems of three linked simultaneous diagonal equations
in ten variables with total degree exceeding five. By means of a complification
argument, we obtain an asymptotic formula for the number of integral solutions
of this system of bounded height that resolves the associated paucity problem.
|
The coherent-state qubit is a promising candidate for optical quantum
information processing due to the nearly deterministic nature of its Bell-state
measurement (BSM). However, its non-orthogonality incurs difficulties such as
failure of the BSM. One may use a large amplitude ($\alpha$) for the coherent
state to minimize the failure probability, but the qubit then becomes more
vulnerable to dephasing by photon loss. We propose a hardware-efficient
concatenated BSM (CBSM) scheme with modified parity encoding using coherent
states with reasonably small amplitudes ($|\alpha| \lessapprox 2$), which
simultaneously suppresses both failures and dephasing in the BSM procedure. We
numerically show that the CBSM scheme achieves a success probability
arbitrarily close to unity for appropriate values of $\alpha$ and sufficiently
low photon loss rates (e.g., $\lessapprox 5\%$). Furthermore, we verify that
the quantum repeater scheme exploiting the CBSM scheme for quantum error
correction enables one to carry out efficient long-range quantum communication
over 1000 km. We show that the performance is comparable to those of other
up-to-date methods or even outperforms them for some cases. Finally, we present
methods to prepare logical qubits under modified parity encoding and implement
elementary logical operations, which consist of several physical-level
ingredients such as generation of Schr\"odinger's cat state and elementary
gates under coherent-state basis. Our work demonstrates that the encoded
coherent-state qubits in free-propagating fields provide an alternative route
to fault-tolerant information processing, especially long-range quantum
communication.
|
It has been hypothesized that quantum computers may lend themselves well to
applications in machine learning. In the present work, we analyze function
classes defined via quantum kernels. Quantum computers offer the possibility to
efficiently compute inner products of exponentially large density operators
that are classically hard to compute. However, having an exponentially large
feature space renders the problem of generalization hard. Furthermore, being
able to evaluate inner products in high dimensional spaces efficiently by
itself does not guarantee a quantum advantage, as already classically tractable
kernels can correspond to high- or infinite-dimensional reproducing kernel
Hilbert spaces (RKHS).
We analyze the spectral properties of quantum kernels and find that we can
expect an advantage if their RKHS is low dimensional and contains functions
that are hard to compute classically. If the target function is known to lie in
this class, this implies a quantum advantage, as the quantum computer can
encode this inductive bias, whereas there is no classically efficient way to
constrain the function class in the same way. However, we show that finding
suitable quantum kernels is not easy because the kernel evaluation might
require exponentially many measurements.
In conclusion, our message is a somewhat sobering one: we conjecture that
quantum machine learning models can offer speed-ups only if we manage to encode
knowledge about the problem at hand into quantum circuits, while encoding the
same bias into a classical model would be hard. These situations may plausibly
occur when learning on data generated by a quantum process, however, they
appear to be harder to come by for classical datasets.
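A classical stand-in for the spectral analysis described above: eigendecompose a kernel Gram matrix and measure how many eigenvalues carry most of the trace (a flat spectrum signals a hard-to-generalize, effectively high-dimensional RKHS). The RBF kernel and the 95% cut-off are our illustrative choices, not the paper's quantum kernels.

    # Hedged sketch: spectrum of a kernel Gram matrix and a crude effective
    # dimension; RBF is a classical stand-in for a quantum kernel.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq)                        # Gram matrix

    eigvals = np.linalg.eigvalsh(K)[::-1]        # descending eigenvalues
    eigvals = eigvals / eigvals.sum()
    d_eff = np.searchsorted(np.cumsum(eigvals), 0.95) + 1
    print("eigenvalues carrying 95% of the trace:", d_eff)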
|
The Internet of Things (IoT) and smart city paradigm includes ubiquitous
technology to extract context information in order to return useful services to
users and citizens. An essential role in this scenario is often played by
computer vision applications, requiring the acquisition of images from specific
devices. The need for high-end cameras often penalizes this process, since they are power-hungry and require substantial computational resources for processing.
Thus, the availability of novel low-power vision sensors, implementing advanced
features like in-hardware motion detection, is crucial for computer vision in
the IoT domain. Unfortunately, to be highly energy-efficient, these sensors
might worsen the perception performance (e.g., resolution, frame rate, color).
Therefore, domain-specific pipelines are usually delivered in order to exploit
the full potential of these cameras. This paper presents the development, analysis, and embedded implementation of a real-time detection, classification and tracking pipeline able to exploit the full potential of background-filtering Smart Vision Sensors (SVS). The inference requires 8 ms, with a power consumption of 7.5 mW.
|
This paper investigates how a disturbance in the power network affects the
nodal frequencies of certain network buses. To begin with, we show that the
inertia of a single generator is in inverse proportion to the initial rate of
change of frequency (RoCoF) under disturbances. Then, we show how the initial RoCoF of the nodal frequencies is related to the inertia constants of multiple generators in a power network, which leads to a performance metric to
analyze nodal frequency performance. To be specific, the proposed metric
evaluates the impact of disturbances on the nodal frequency performance. The
validity and effectiveness of the proposed metric are illustrated via
simulations on a multi-machine power system.
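For a single machine, this inverse relation is the familiar swing-equation estimate (standard notation, summarized here for orientation): \begin{equation} \dot{f}(0^+)=\frac{f_0\,\Delta P}{2 H S_B}, \end{equation} where $H$ is the inertia constant, $S_B$ the rated power, $f_0$ the nominal frequency, and $\Delta P$ the power disturbance; doubling $H$ halves the initial RoCoF.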
|
In this article we study bicomplex-valued measurable functions on an arbitrary measurable space. We establish the bicomplex version of Lebesgue's dominated convergence theorem and some other results related to this theorem. We also prove the bicomplex version of the Lebesgue-Radon-Nikodym theorem. Finally, we introduce the idea of a hyperbolic version of an invariant measure.
|
Many exoplanets are discovered in binary star systems, in internal or circumbinary orbits. Whether the planet can be habitable or not depends on the possibility of maintaining liquid water on its surface, and therefore on the
luminosity of its host stars and on the dynamical properties of the planetary
orbit. The trajectory of a planet in a double star system can be determined,
approximating stars and planets with point masses, by solving numerically the
equations of motion of the classical three-body system. In this study, we
analyze a large data set of planetary orbits, built from high-precision long-term integrations in which we vary the mass of the planet, its distance from the primary star, the mass ratio of the two stars in the binary system, and the eccentricity of the stellar motion. To simulate the gravitational dynamics, we use
a 15th order integration scheme (IAS15, available within the REBOUND
framework), that provides an optimal solution for long-term integration. In our
data analysis, we evaluate whether an orbit is stable and also provide the
statistics of different types of instability: collisions with the primary or
secondary star and planets ejected away from the binary star system. Concerning
the stability, we find a significant number of orbits that are only marginally
stable, according to the classification introduced by Musielak et al. 2005. For
planets of negligible mass, we estimate the critical semi-major axis $a_c$ as a
function of the mass ratio and the eccentricity of the binary, in agreement
with the results of Holman and Wiegert 1999. However, we find that for very massive planets (super-Jupiters) the critical semi-major axis decreases in some cases by a few percent, compared to cases in which the mass of the planet is negligible.
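A minimal sketch of such a setup with the REBOUND package: a circumbinary planet integrated with the IAS15 scheme. The masses, orbital elements, and integration time are illustrative, not the grid of the study.

    # Hedged sketch: circumbinary planet integrated with IAS15 in REBOUND.
    import rebound

    sim = rebound.Simulation()
    sim.integrator = "ias15"
    sim.add(m=1.0)                    # primary star
    sim.add(m=0.5, a=1.0, e=0.3)      # secondary star
    sim.add(m=3e-3, a=4.0)            # super-Jupiter-like circumbinary planet
    sim.move_to_com()
    sim.integrate(1.0e4)              # code units (G = 1)
    print(sim.particles[2].a, sim.particles[2].e)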
|
We obtain local estimates, also called propagation of smallness or Remez-type
inequalities, for analytic functions in several variables. Using Carleman
estimates, we obtain a three sphere-type inequality, where the outer two
spheres can be any sets satisfying a boundary separation property, and the
inner sphere can be any set of positive Lebesgue measure. We apply this local
result to characterize the dominating sets for Bergman spaces on strongly
pseudoconvex domains in terms of a density condition or a testing condition on
the reproducing kernels. Our methods also yield a sufficient condition for
arbitrary domains and lower-dimensional sets.
|
Recently, the LHCb Collaboration reported a new structure $P_{cs}(4459)$ with
a mass 19 MeV below the $\Xi_c \bar{D}^{*}$ threshold. It may be a
candidate for a molecular state arising from the $\Xi_c \bar{D}^{*}$ interaction. In the
current work, we perform a coupled-channel study of the $\Xi_c^*\bar{D}^*$,
$\Xi'_c\bar{D}^*$, $\Xi^*_c\bar{D}$, $\Xi_c\bar{D}^*$, $\Xi'_c\bar{D}$, and
$\Xi_c\bar{D}$ interactions in the quasipotential Bethe-Salpeter equation
approach. With the help of the heavy quark chiral effective Lagrangian, the
potential is constructed by light meson exchanges. Two $\Xi_c \bar{D}^{*} $
molecular states are produced with spin parities $J^P=1/2^-$ and $3/2^-$. The
lower state with $3/2^-$ can be related to the observed $P_{cs}(4459)$, while a
two-peak structure cannot be excluded. Within the same model, other strange
hidden-charm pentaquarks are also predicted. Two states with spin parities
$1/2^-$ and $3/2^-$ are predicted near the $\Xi'_c\bar{D}$, $\Xi_c\bar{D}$, and
$\Xi_c^*\bar{D}$ thresholds, respectively. As with the two states near the
$\Xi_c\bar{D}^{*}$ threshold, two states with $1/2^-$ and $3/2^-$ are produced
near the $\Xi'_c\bar{D}^*$ threshold. The couplings of the molecular states to
the considered channels are also discussed. Experimental searches for these
states will be helpful for understanding the origin and internal structure of
the $P_{cs}$ and $P_c$ states.
|
We study the morphology of the stellar periphery of the Magellanic Clouds in
search of substructure using near-infrared imaging data from the VISTA
Hemisphere Survey (VHS). Based on the selection of different stellar
populations using the ($J-K_\mathrm{s}$, $K_\mathrm{s}$) colour-magnitude
diagram, we confirm the presence of substructures related to the interaction
history of the Clouds and find new substructures on the eastern side of the LMC
disc, which may be due to the influence of the Milky Way, and on the northern
side of the SMC, which is probably associated with the ellipsoidal structure of
the galaxy. We also study the luminosity function of red clump stars in the SMC
and confirm the presence of a bi-modal distance distribution, in the form of a
foreground population. We find that this bi-modality is still detectable in the
eastern regions of the galaxy out to a 10 deg distance from its centre.
Additionally, a background structure is detected in the North between 7 and 10
deg from the centre which might belong to the Counter Bridge, and a foreground
structure is detected in the South between 6 and 8 deg from the centre which
might be linked to the Old Bridge.
|
This article introduces a representation of dynamic meshes, adapted to some
numerical simulations that require controlling the volume of objects with free
boundaries, such as incompressible fluid simulation, some astrophysical
simulations at cosmological scale, and shape/topology optimization. The
algorithm decomposes the simulated object into a set of convex cells called a
Laguerre diagram, parameterized by the position of $N$ points in 3D and $N$
additional parameters that control the volumes of the cells. These parameters
are found as the (unique) solution of a convex optimization problem --
semi-discrete Monge-Amp\`ere equation -- stemming from optimal transport
theory. In this article, this setting is extended to objects with free
boundaries and arbitrary topology, evolving in a domain of arbitrary shape, by
solving a partial optimal transport problem. The resulting Lagrangian scheme
makes it possible to accurately control the volume of the object, while
precisely tracking interfaces, interactions, collisions, and topology changes.
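For concreteness, the construction referred to above is standard in semi-discrete optimal transport (a textbook formulation, not a quotation from this article): given points $p_1,\dots,p_N$ and weights $\psi_1,\dots,\psi_N$, the Laguerre cells are
$$\mathrm{Lag}_i(\psi)=\left\{x\in\Omega \,:\, \|x-p_i\|^2-\psi_i\le\|x-p_j\|^2-\psi_j\;\;\forall j\right\},$$
and the weights are obtained as the unique solution (up to an additive constant) of the prescribed-volume conditions $\mathrm{vol}(\mathrm{Lag}_i(\psi))=v_i$, i.e. the semi-discrete Monge-Amp\`ere equation, which arises as the optimality condition of a concave functional of $\psi$.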
|
As deep neural networks are applied to increasingly complex tasks, the networks
grow in size and their topology becomes more complicated. At the same time,
training becomes slow and, in some instances, inefficient. This motivated the
introduction of various normalization techniques such as Batch Normalization
and Layer Normalization. These methods use arithmetic operations to compute
approximate statistics (mainly the first and second moments) of a layer's data
and use them to normalize it. In effect, they rely on a plain Monte Carlo
estimate of the statistics, which fails when the underlying distribution is
complex. Here, we propose an approach that uses a weighted sum, implemented
with depth-wise convolutional neural networks, to not only approximate the
statistics but also learn the coefficients of the sum.
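A minimal sketch of this idea (our own illustration, assuming a PyTorch-style depth-wise convolution; the paper's exact layer may differ) replaces the plain average with learnable per-channel averaging coefficients:

```python
import torch
import torch.nn as nn

class LearnedStatNorm(nn.Module):
    """Normalize with a learned weighted sum (depth-wise conv) instead of a
    plain Monte Carlo average; a sketch, not the paper's exact layer."""
    def __init__(self, channels, k=3, eps=1e-5):
        super().__init__()
        # one k x k filter per channel: the learnable averaging coefficients
        self.mean_est = nn.Conv2d(channels, channels, k, padding=k // 2,
                                  groups=channels, bias=False)
        self.var_est = nn.Conv2d(channels, channels, k, padding=k // 2,
                                 groups=channels, bias=False)
        nn.init.constant_(self.mean_est.weight, 1.0 / (k * k))  # start as mean
        nn.init.constant_(self.var_est.weight, 1.0 / (k * k))
        self.eps = eps

    def forward(self, x):
        mu = self.mean_est(x)               # learned first moment
        var = self.var_est((x - mu) ** 2)   # learned second moment
        return (x - mu) / torch.sqrt(var + self.eps)

x = torch.randn(8, 16, 32, 32)
print(LearnedStatNorm(16)(x).shape)         # torch.Size([8, 16, 32, 32])
```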
|
A significant problem with immersive virtual reality (IVR) experiments is the
ability to compare research conditions. VR kits and IVR environments are
complex and diverse but researchers from different fields, e.g. ICT,
psychology, or marketing, often neglect to describe them with a level of detail
sufficient to situate their research on the IVR landscape. Careful reporting of
these conditions may increase the applicability of research results and their
impact on the shared body of knowledge on HCI and IVR. Based on a literature
review, our experience and practice, and a synthesis of key IVR factors, in this
article we present a reference checklist for describing research conditions of
IVR experiments. Including these in publications will contribute to the
comparability of IVR research and help other researchers decide to what extent
reported results are relevant to their own research goals. The compiled
checklist is a ready-to-use reference tool and takes into account key hardware,
software and human factors as well as diverse factors connected to visual,
audio, tactile, and other aspects of interaction.
|
We propose a computational design tool to enable casual end-users to easily
design, fabricate, and assemble flat-pack furniture with guaranteed
manufacturability. Using our system, users select parameterized components from
a library and constrain their dimensions. Then they abstractly specify
connections among components to define the furniture. Once fabrication
specifications (e.g. materials) are designated, the mechanical implementation of
the furniture is handled automatically by leveraging encoded domain expertise.
Afterwards, the system outputs 3D models for visualization and mechanical
drawings for fabrication. We demonstrate the validity of our approach by
designing, fabricating, and assembling a variety of flat-pack (scaled)
furniture on demand.
|
We present results from \textsc{GigaEris}, a cosmological, $N$-body
hydrodynamical ``zoom-in'' simulation of the formation of a Milky Way-sized
galaxy with unprecedented resolution, encompassing of order a billion particles
within the refined region. The simulation employs a modern implementation of
smoothed-particle hydrodynamics, including metal-line cooling and metal and
thermal diffusion. We focus on the early assembly of the galaxy, down to
redshift $z=4.4$. The simulated galaxy has properties consistent with
extrapolations of the main sequence of star-forming galaxies to higher
redshifts and levels off to a star formation rate of $\sim$60$\,
M_{\odot}$yr$^{-1}$ at $z=4.4$. A compact, thin rotating stellar disk with
properties analogous to those of low-redshift systems arises already at $z \sim
8$-9. The galaxy rapidly develops a multi-component structure, and the disk, at
least at these early stages, does not grow upside-down as often reported in the
literature. Rather, at any given time, newly born stars contribute to sustain a
thin disk, while the thick disk grows from stars that are primarily added
through accretion and mergers. The kinematics reflect the early, ubiquitous
presence of a thin disk, as a stellar disk component with $v_\phi/\sigma_R$
larger than unity is already present at $z \sim 9$-10. Our results suggest that
high-resolution spectro-photometric observations of very high-redshift galaxies
should find thin rotating disks, consistent with the recent discovery of cold
rotating gas disks by ALMA. Finally, we present synthetic images for the JWST
NIRCam camera, showing how the early disk would be easily detectable already at
$z \sim 7$.
|
Gaps in protoplanetary disks have long been hailed as signposts of planet
formation. However, a direct link between exoplanets and disks remains hard to
identify. We present a large sample study of ALMA disk surveys of nearby
star-forming regions to disentangle this connection. All disks are classified
as either structured (transition, ring, extended) or non-structured (compact)
disks. Although low-resolution observations may not identify large-scale
substructure, we assume that an extended disk must contain substructure from a
dust evolution argument. A comparison across ages reveals that structured disks
retain high dust masses up to at least 10 Myr, whereas the dust mass of
compact, non-structured disks decreases over time. This can be understood if
the dust mass evolves primarily by radial drift, unless drift is prevented by
pressure bumps. We identify a stellar mass dependence of the fraction of
structured disks. We propose a scenario linking this dependence with that of
giant exoplanet occurrence rates. We show that there are enough exoplanets to
account for the observed disk structures if transitional disks are created by
exoplanets more massive than Jupiter, and ring disks by exoplanets more massive
than Neptune, under the assumption that most of those planets eventually
migrate inwards. On the other hand, the known anti-correlation between
transiting super-Earths and stellar mass implies those planets must form in the
disks without observed structure, consistent with formation through pebble
accretion in drift-dominated disks. These findings support an evolutionary
scenario where the early formation of giant planets determines the disk's dust
evolution and its observational appearance.
|
We present a new family $\left\{ S_{n}(x;q)\right\} _{n\geq 0}$ of monic
polynomials in $x$, orthogonal with respect to a Sobolev-type inner product
related to the $q$-Hermite I orthogonal polynomials, involving a first-order
$q$-derivative at a mass point $\alpha \in \mathbb{R}$ located outside the
corresponding orthogonality interval $[-1,1]$, for some fixed real number $q
\in (0, 1)$. We present connection formulas, and the annihilation operator for
this non-standard orthogonal polynomial family.
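Schematically (the exact normalization belongs to the paper; this is only the generic shape of such products), a Sobolev-type inner product of this kind reads
$$\langle f,g\rangle_S=\int_{-1}^{1}f(x)\,g(x)\,d\mu_q(x)+\lambda\,(D_qf)(\alpha)\,(D_qg)(\alpha),\qquad\lambda>0,$$
where $d\mu_q$ denotes the $q$-Hermite I orthogonality measure on $[-1,1]$ and $D_q$ is the $q$-derivative, $(D_qf)(x)=\frac{f(qx)-f(x)}{(q-1)x}$ for $x\ne0$.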
|
We study the dependence of some relevant tokamak equilibrium quantities on
the toroidal plasma rotation. The Grad-Shafranov equation generalised to the
rotating case is analytically solved employing two different representations
for the homogeneous solution. Using an expression in terms of polynomials, we
describe the separatrix shape by a few geometrical parameters, reproducing
different plasma scenarios such as double-null and inverse triangularity. In
this setting, the introduction of toroidal rotation corresponds to variations
on relevant plasma quantities, most notably an enhancement of the poloidal
beta. Using a more general expression in terms of Bessel functions, we
reconstruct the full plasma boundary of the double-null configuration proposed
for the upcoming DTT experiment, demonstrating how said configuration is
compatible with different values of the plasma velocity.
|
Infertility is becoming an issue for an increasing number of couples. The
most common solution, in vitro fertilization, requires embryologists to
carefully examine light microscopy images of human oocytes to determine their
developmental potential. We propose an automatic system to improve the speed,
repeatability, and accuracy of this process. We first localize individual
oocytes and identify their principal components using CNN (U-Net) segmentation.
Next, we calculate several descriptors based on geometry and texture. The final
step is an SVM classifier. Both the segmentation and the classifier training are
based on expert annotations. The presented approach achieves a classification
accuracy of 70%.
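A minimal sketch of the final classification stage (features and data below are placeholders, not the study's actual descriptors) using scikit-learn:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-oocyte descriptors (geometry and texture) computed from
# the U-Net segmentation masks; expert annotations serve as labels.
rng = np.random.default_rng(0)
X = rng.random((200, 12))        # placeholder feature matrix
y = rng.integers(0, 2, 200)      # placeholder expert labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], y[:150])        # train on annotated examples
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```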
|
The well-known Landau-Yang (LY) theorem on the decay of a neutral particle
into two photons is generalized for analyzing the decay of a neutral or charged
particle into two identical massless particles of any spin. Selection rules
categorized by discrete parity invariance and Bose/Fermi symmetry are worked
out in the helicity formulation. The general form of the Lorentz-covariant
triple vertices is derived and the corresponding decay helicity amplitudes are
explicitly calculated in the Jacob-Wick convention. After checking the
consistency of all the analytic results obtained by the two complementary
approaches, we extract the key aspects of the generalized LY theorem.
|
There is considerable interest in the pH-dependent switchable biocatalytic
properties of cerium oxide nanoparticles (CeNPs) in biomedicine, where these
materials exhibit beneficial antioxidant activity against reactive oxygen
species at neutral and basic physiological pH but cytotoxic prooxidant activity
at acidic pathological pH. Oxygen vacancies play a key role in such
biocatalytic activities. While the general characteristics of the role of
oxygen vacancies are known, the mechanism of their action at the atomic scale
under different pH conditions has yet to be elucidated. The present work
applies density functional theory (DFT) calculations to interpret the
pH-induced behavior of the stable {111} surface of CeO2 at the atomic scale.
Analysis of the surface-adsorbed media species reveals the critical role of pH
on the reversibility of the Ce3+ and Ce4+ redox equilibria and the formation
and annihilation of the oxygen vacancies. Under acidic conditions, this
reversible switching is hindered owing to incomplete volumetric filling of the
oxygen vacancies by the oxygen in the water molecules, hence effective
retention of the oxygen vacancies, and consequent inhibition of redox
biomimetic reactions. Under neutral and basic conditions, the capacity for this
reversible switching is preserved due to complete filling of the oxygen
vacancies by the OH ions owing to their ready size accommodation, thereby
retaining the capacity for performing redox biomimetic reactions cyclically.
|
The mass of a supermassive black hole ($M_\mathrm{BH}$) is a fundamental
property that can be obtained through observational methods. Constraining
$M_\mathrm{BH}$ through multiple methods for an individual galaxy is important
for verifying the accuracy of different techniques, and for investigating the
assumptions inherent in each method. NGC 4151 is one of those rare galaxies for
which multiple methods can be used: stellar and gas dynamical modeling because
of its proximity ($D=15.8\pm0.4$ Mpc from Cepheids), and reverberation mapping
because of its active accretion. In this work, we re-analyzed $H-$band integral
field spectroscopy of the nucleus of NGC 4151 from Gemini NIFS, improving the
analysis at several key steps. We then constructed a wide range of axisymmetric
dynamical models with the new orbit-superposition code Forstand. One of our
primary goals is to quantify the systematic uncertainties in $M_\mathrm{BH}$
arising from different combinations of the deprojected density profile,
inclination, intrinsic flattening, and mass-to-light ratio. As a consequence of
uncertainties on the stellar luminosity profile arising from the presence of
the AGN, our constraints on $M_\mathrm{BH}$ are rather weak. Models with a steep central
cusp are consistent with no black hole; however, in models with more moderate
cusps, the black hole mass lies within the range of $0.25\times10^7\,M_\odot
\lesssim M_\mathrm{BH} \lesssim 3\times10^7\,M_\odot$. This measurement is
somewhat smaller than the value from the earlier analysis by Onken et al., but
agrees with previous $M_\mathrm{BH}$ values from gas dynamical modeling and
reverberation mapping. Future dynamical modeling of reverberation data, as well
as IFU observations with JWST, will aid in further constraining $M_\mathrm{BH}$
in NGC 4151.
|
We extend studies of micro-solvation of carbon monoxide by a combination of
high-resolution IR spectroscopy and ab initio calculations. Spectra of the
(H2O)4-CO and (D2O)4-CO pentamers are observed in the C-O stretch fundamental
region ($\sim$2150 cm$^{-1}$). The H2O-containing spectrum is broadened by
predissociation, but that of D2O is sharp, enabling detailed analysis which
gives a precise band origin and rotational parameters. Ab initio calculations
are employed to confirm the assignment to (water)4-CO and to determine the
structure, in which the geometry of the (water)4 fragment is a cyclic ring very
similar to the isolated water tetramer. The CO fragment is located "above" the
ring plane, with a partial hydrogen bond between the C atom and one of the
"free" protons (deuterons) of the water tetramer. Together with previous
results on D2O-CO, (D2O)2-CO, and (D2O)3-CO, this represents a probe of the
four initial steps in the solvation of carbon monoxide at high resolution.
|
In this work we study the spatio-temporal correlations of photons produced by
spontaneous parametric down-conversion. In particular, we study how the waists
of the detection and pump beams impact the spectral bandwidth of the
photons. Our results indicate that this parameter is greatly affected by the
spatial properties of the detection beam, while not as much by the pump beam.
This allows for a simple experimental implementation to control the bandwidth
of the biphoton spectra, which only entails modifying the optical configuration
to collect the photons. Moreover, we have performed Hong-Ou-Mandel
interferometry measurements that also provide the phase of the biphoton
wavefunction, and thereby its temporal shape. We explain all these results with
a toy model derived under certain approximations, which accurately recovers
most of the interesting experimental details.
|
Recently, quantum oscillation of the resistance in insulating monolayer
WTe$_2$ was reported. An explanation in terms of gap modulation in the
hybridized Landau levels of an excitonic insulator was also proposed by one of
us. However, the previous picture of gap modulation in the Landau levels
spectrum was built on a pair of well nested electron and hole Fermi surfaces,
while the monolayer WTe$_2$ has one hole and two electron Fermi pockets with
relative anisotropy. Here we demonstrate that for a system like monolayer
WTe$_2$, the excitonic insulating state arising from the coupled one hole and
two electron pockets possesses a finite region in interaction parameter space
that shows gap modulation in a magnetic field. In this region, the thermally
activated conductivity displays the $1/B$ periodic oscillation and it can
further develop into discrete peaks at low temperature, in agreement with the
experimental observation. We show that the relative anisotropy of the bands is
a key parameter and the quantum oscillations decrease rapidly if the anisotropy
increases beyond the realistic value for monolayer WTe$_2$.
|
This paper and [29] treat the existence and nonexistence of stable weak
solutions to a fractional Hardy--H\'enon equation $(-\Delta)^s u = |x|^\ell
|u|^{p-1} u$ in $\mathbb{R}^N$, where $0 < s < 1$, $\ell > -2s$, $p>1$, $N \geq
1$ and $N > 2s$. In this paper, when $p$ is critical or supercritical in the
sense of Joseph--Lundgren, we prove the existence of a family of positive
radial stable solutions, which satisfies the separation property. We also show
the existence of multiple Joseph--Lundgren critical exponents for some $\ell
\in (0,\infty)$ and $s \in (0,1)$, a property that does not hold in the case
$s=1$.
|
The Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany,
provides unique possibilities for a new generation of hadron-, nuclear- and
atomic physics experiments. The future antiProton ANnihilations at DArmstadt
(PANDA or $\overline{\rm P}$ANDA) experiment at FAIR will offer a broad physics
programme, covering different aspects of the strong interaction. Understanding
the latter in the non-perturbative regime remains one of the greatest
challenges in contemporary physics. The antiproton-nucleon interaction studied
with PANDA provides crucial tests in this area. Furthermore, the
high-intensity, low-energy domain of PANDA allows for searches for physics
beyond the Standard Model, e.g. through high precision symmetry tests. This
paper takes into account a staged approach for the detector setup and for the
delivered luminosity from the accelerator. The available detector setup at the
time of the delivery of the first antiproton beams in the HESR storage ring is
referred to as the \textit{Phase One} setup. The physics programme that is
achievable during Phase One is outlined in this paper.
|
In this paper, we review data mining approaches for health applications. Our
focus is on hardware-centric approaches. Modern computers consist of multiple
processors, each equipped with multiple cores, each with a set of
arithmetic/logical units. Thus, a modern computer may be composed of several
thousand units capable of doing arithmetic operations like addition and
multiplication. Graphic processors, in addition may offer some thousand such
units. In both cases, single instruction multiple data and multiple instruction
multiple data parallelism must be exploited. We review the principles of
algorithms which exploit this parallelism and focus also on the memory issues
when multiple processing units access main memory through caches. This is
important for many applications of health, such as ECG, EEG, CT, SPECT, fMRI,
DTI, ultrasound, microscopy, dermascopy, etc.
|
Time-lapse fluorescence microscopy (TLFM) is an important and powerful tool
in synthetic biological research. Modeling TLFM experiments based on real data
may enable researchers to repeat certain experiments with minor effort. This
thesis is a study towards deep learning-based modeling of TLFM experiments on
the image level. The modeling of TLFM experiments, by way of the example of
trapped yeast cells, is split into two tasks. The first task is to generate
synthetic image data based on real image data. To approach this problem, a
novel generative adversarial network, for conditionalized and unconditionalized
image generation, is proposed. The second task is the simulation of brightfield
microscopy images over multiple discrete time-steps. To tackle this simulation
task an advanced future frame prediction model is introduced. The proposed
models are trained and tested on a novel dataset that is presented in this
thesis. The obtained results show that modeling TLFM experiments with
deep learning is a viable approach, but further research is required to
model real-world experiments effectively.
|
The goal of quantum circuit transformation is to map a logical circuit to a
physical device by inserting as few additional gates as possible within an
acceptable amount of time. We present an effective approach called TSA to
construct the mapping. It consists of two key steps: one uses a
combination of subgraph isomorphism and completion to initialize candidate
mappings, the other dynamically modifies the mappings using tabu
search-based adjustment. Our experiments show that, compared with
state-of-the-art methods GA, SABRE and FiDLS proposed in the literature, TSA
can generate mappings with a smaller number of additional gates and has
better scalability for large-scale circuits.
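The tabu-search-based adjustment can be sketched generically as follows (a toy illustration of the metaheuristic, not TSA's actual neighborhood or cost function):

```python
from collections import deque

def tabu_search(init, neighbors, cost, iters=200, tenure=10):
    """Generic tabu search: repeatedly move to the best non-tabu neighbor."""
    best = cur = init
    tabu = deque([init], maxlen=tenure)   # recently visited solutions
    for _ in range(iters):
        cand = [n for n in neighbors(cur) if n not in tabu]
        if not cand:
            break
        cur = min(cand, key=cost)         # best admissible neighbor
        tabu.append(cur)
        if cost(cur) < cost(best):
            best = cur
    return best

# Toy usage: adjust a qubit mapping (a permutation) toward a target by swaps.
target = (2, 0, 1, 3)
def neighbors(m):
    return [tuple(m[:i] + (m[j],) + m[i+1:j] + (m[i],) + m[j+1:])
            for i in range(len(m)) for j in range(i + 1, len(m))]
cost = lambda m: sum(a != b for a, b in zip(m, target))
print(tabu_search((0, 1, 2, 3), neighbors, cost))  # -> (2, 0, 1, 3)
```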
|
We study the action of symmetries on geodesics of a control problem on a
Carnot group with growth vector $(4,7)$. We show that there is a subgroup of
symmetries isomorphic to $SO(3)$ and a set of points in the Carnot group with a
nontrivial stabilizer of this action. We prove that each geodesic either lies
in this set or does not intersect it. In the former case the optimality of
geodesics is resolved completely through the identification of the quotient with
the Heisenberg group.
|
Training deep neural networks (DNNs) on datasets with noisy labels is a
challenging problem, because learning on mislabeled examples deteriorates the
performance of the network. As the ground truth availability is limited with
real-world noisy datasets, previous papers created synthetic noisy datasets by
randomly modifying the labels of training examples of clean datasets. However,
no final conclusions can be derived by just using this random noise, since it
excludes feature-dependent noise. Thus, it is imperative to generate
feature-dependent noisy datasets that additionally provide ground truth.
Therefore, we propose an intuitive approach to creating feature-dependent noisy
datasets by utilizing the training predictions of DNNs on clean datasets that
also retain true label information. We refer to these datasets as "Pseudo Noisy
datasets". We conduct several experiments to establish that Pseudo noisy
datasets resemble feature-dependent noisy datasets across different conditions.
We further randomly generate synthetic noisy datasets with the same noise
distribution as that of Pseudo noise (referred to as "Randomized Noise") to
empirically show that i) learning is easier with feature-dependent label noise
compared to random noise, ii) irrespective of noise distribution, Pseudo noisy
datasets mimic feature-dependent label noise and iii) current training methods
are not generalizable to feature-dependent label noise. Therefore, we believe
that Pseudo noisy datasets will be quite helpful to study and develop robust
training methods.
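As a hedged sketch of the idea (the selection rule and rate below are illustrative, not the paper's exact recipe), pseudo-noisy labels can be built by flipping samples to the network's most confident wrong class:

```python
import numpy as np

def pseudo_noisy_labels(probs, true_labels, noise_rate=0.3):
    """Flip a fraction of labels to the model's most confident *wrong* class,
    yielding feature-dependent noise while retaining ground truth."""
    labels = true_labels.copy()
    n_flip = int(noise_rate * len(labels))
    wrong = probs.copy()
    wrong[np.arange(len(labels)), true_labels] = -1.0  # mask the true class
    idx = np.argsort(-wrong.max(axis=1))[:n_flip]      # most confident mistakes
    labels[idx] = wrong[idx].argmax(axis=1)            # assign the wrong class
    return labels

probs = np.random.dirichlet(np.ones(10), size=1000)    # stand-in for DNN softmax
y = np.random.randint(0, 10, 1000)
y_noisy = pseudo_noisy_labels(probs, y)
print("noise rate:", (y_noisy != y).mean())            # 0.3
```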
|
We consider the insulated conductivity problem with two unit balls as
insulating inclusions, a distance of order $\varepsilon$ apart. The solution
$u$ represents the electric potential. In dimensions $n \ge 3$ it is an open
problem to find the optimal bound on the gradient of $u$, the electric field,
in the narrow region between the insulating bodies. Li-Yang recently proved a
bound of order $\varepsilon^{-(1-\gamma)/2}$ for some $\gamma>0$. In this paper
we use a direct maximum principle argument to sharpen the Li-Yang estimate for
$n \ge 4$. Our method gives effective lower bounds on the best constant
$\gamma$, which in particular approach $1$ as $n$ tends to infinity.
|
We develop an algorithm that combines the advantages of priority promotion -
one of the leading approaches to solving large parity games in practice - with
the quasi-polynomial time guarantees offered by Parys' algorithm. Hybridising
these algorithms sounds both natural and difficult, as they both generalise the
classic recursive algorithm in different ways that appear to be irreconcilable:
while the promotion transcends the call structure, the guarantees change on
each level. We show that an interface that respects both is not only effective,
but also efficient.
|
We give an efficient classical algorithm that recovers the distribution of a
non-interacting fermion state over the computational basis. For a system of $n$
non-interacting fermions and $m$ modes, we show that $O(m^2 n^4 \log(m/\delta)/
\varepsilon^4)$ samples and $O(m^4 n^4 \log(m/\delta)/ \varepsilon^4)$ time are
sufficient to learn the original distribution to total variation distance
$\varepsilon$ with probability $1 - \delta$. Our algorithm empirically
estimates the one- and two-mode correlations and uses them to reconstruct a
succinct description of the entire distribution efficiently.
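A sketch of the estimation step (the reconstruction from these moments is the paper's contribution and is not reproduced here; the data below are placeholders):

```python
import numpy as np

def empirical_correlations(samples):
    """Estimate one- and two-mode occupation correlations from bitstring
    samples of shape (shots, modes)."""
    s = samples.astype(float)
    one = s.mean(axis=0)                                # <n_i>
    two = (s[:, :, None] * s[:, None, :]).mean(axis=0)  # <n_i n_j>
    return one, two

samples = (np.random.rand(5000, 6) < 0.5).astype(int)   # placeholder samples
one, two = empirical_correlations(samples)
print(one.shape, two.shape)                             # (6,) (6, 6)
```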
|
In this work we show that, in the class of
$L^\infty((0,T);L^2(\mathbb{T}^3))$ distributional solutions of the
incompressible Navier-Stokes system, the ones which are smooth in some open
interval of times are meagre in the sense of Baire category, and the Leray ones
are a nowhere dense set.
|
The evolution of satellite galaxies is shaped by their constant interaction
with the circumgalactic medium surrounding central galaxies, which in turn may
be affected by gas and energy ejected from the central supermassive black hole.
However, the nature of this coupling between black holes and galaxies is highly
debated and observational evidence remains scarce. Here we report an analysis
of archival data on 124,163 satellite galaxies in the potential wells of 29,631
dark matter halos with masses between 10$^{12}$ and $10^{14}$ solar masses. We
find that quiescent satellites are relatively less frequent along the minor
axis of their central galaxies. This observation might appear counterintuitive
as black hole activity is expected to eject mass and energy preferentially in
the direction of the minor axis of the host galaxy. However, we show that the
observed signal results precisely from the ejective nature of black hole
feedback in massive halos, as active galactic nuclei-powered outflows clear out
the circumgalactic medium, reducing the ram pressure and thus preserving star
formation. This interpretation is supported by the IllustrisTNG suite of
cosmological numerical simulations, where a similar modulation is observed even
though the sub-grid implementation of black hole feedback is effectively
isotropic. Our results provide compelling observational evidence for the role
of black holes in regulating galaxy evolution over spatial scales differing by
several orders of magnitude.
|
This paper integrates non-orthogonal multiple access (NOMA) and over-the-air
federated learning (AirFL) into a unified framework using one simultaneous
transmitting and reflecting reconfigurable intelligent surface (STAR-RIS). The
STAR-RIS plays an important role in adjusting the decoding order of hybrid
users for efficient interference mitigation and omni-directional coverage
extension. To capture the impact of non-ideal wireless channels on AirFL, a
closed-form expression for the optimality gap (a.k.a. convergence upper bound)
between the actual loss and the optimal loss is derived. This analysis reveals
that the learning performance is significantly affected by active and passive
beamforming schemes as well as wireless noise. Furthermore, when the learning
rate diminishes as the training proceeds, the optimality gap is explicitly
characterized to converge with a linear rate. To accelerate convergence while
satisfying QoS requirements, a mixed-integer non-linear programming (MINLP)
problem is formulated by jointly designing the transmit power at users and the
configuration mode of STAR-RIS. Next, a trust region-based successive convex
approximation method and a penalty-based semidefinite relaxation approach are
proposed to handle the decoupled non-convex subproblems iteratively. An
alternating optimization algorithm is then developed to find a suboptimal
solution for the original MINLP problem. Extensive simulation results show that
i) the proposed framework can efficiently support NOMA and AirFL users via
concurrent uplink communications, ii) our algorithms can achieve a faster
convergence rate in both IID and non-IID settings compared to existing baselines,
and iii) both the spectrum efficiency and learning performance can be
significantly improved with the aid of the well-tuned STAR-RIS.
|
We explore the phenomenology of a two-fluid cosmological model, where the
field equations of general relativity (GR) are sourced by baryonic and cold
dark matter. We find that the model allows for a unified description of small
and large scale, late-time cosmological dynamics. Specifically, in the static
regime we recover the flattening of galactic rotation curves by requiring the
matter density profile to scale as $1/r^2$. The same behavior describes the
distribution of matter inhomogeneities at small cosmological scales. This traces
galactic dynamics back to structure formation. At large cosmological scales, we
focus on the back-reaction of the spacetime geometry to the presence of matter
inhomogeneities. We find that a cosmological constant with the observed order
of magnitude emerges by averaging the back-reaction term on spatial scales of
order 100 Mpc, and that it is related in a natural way to the matter
distribution. This
provides a resolution to both the cosmological constant and the coincidence
problems and shows the existence of an intriguing link between the small and
large scale behavior in cosmology.
|
In this study, we considered a moving particle with a magnetic quadrupole
moment in an elastic medium in the presence of a screw dislocation. We assumed
a radial electric field in a rotating frame that leads to a uniform effective
magnetic field perpendicular to the plane of motion. We solved the
Schr\"odinger equation to derive wave and energy eigenvalue functions by
employing analytical methods for two interaction configurations: in the absence
of potential and in the presence of a static scalar potential. Due to the
topological defect in the medium, we observed a shift in the angular momentum
quantum number which affects the energy eigenvalues and the wave function of
the system.
|
We develop a systematic non-perturbative method based on Dyson-Schwinger
theory and the $\Phi$-derivable theory for the Ising model in the broken phase.
Based on these methods, we obtain the critical temperature and the spin-spin
correlation beyond mean-field theory. The spectrum of the Green function
obtained from our methods becomes gapless at the critical point, so the
susceptibility becomes divergent at $T_c$. The critical temperature of the
Ising model obtained from this method compares fairly well with other
non-cluster methods. It is straightforward to extend this method to more
complicated spin models, for example those with continuous symmetry.
|
Active systems evade the rules of equilibrium thermodynamics by constantly
dissipating energy at the level of their microscopic components. This energy
flux stems from the conversion of a fuel, present in the environment, into
sustained individual motion. It can lead to collective effects without any
equilibrium equivalent, such as a phase separation for purely repulsive
particles, or a collective motion (flocking) for aligning particles. Some of
these effects can be rationalized by using equilibrium tools to recapitulate
nonequilibrium transitions. An important challenge is then to delineate
systematically to what extent the character of these active transitions is
genuinely distinct from equilibrium analogs. We review recent works that use
stochastic thermodynamics tools to identify, for active systems, a measure of
irreversibility comprising a coarse-grained or informatic entropy production.
We describe how this relates to the underlying energy dissipation or
thermodynamic entropy production, and how it is influenced by collective
behavior. Then, we review the possibility to construct thermodynamic ensembles
out-of-equilibrium, where trajectories are biased towards atypical values of
nonequilibrium observables. We show that this is a generic route to discovering
unexpected phase transitions in active matter systems, which can also inform
their design.
|
Model brittleness is a key concern when deploying deep learning models in
real-world medical settings. A model that has high performance at one
institution may suffer a significant decline in performance when tested at
other institutions. While pooling datasets from multiple institutions and
retraining may provide a straightforward solution, it is often infeasible and
may compromise patient privacy. An alternative approach is to fine-tune the
model on subsequent institutions after training on the original institution.
Notably, this approach degrades model performance at the original institution,
a phenomenon known as catastrophic forgetting. In this paper, we develop an
approach to address catastrophic forgetting based on elastic weight
consolidation combined with modulation of batch normalization statistics under
two scenarios: first, for expanding the domain from one imaging system's data
to another imaging system's, and second, for expanding the domain from a large
multi-institutional dataset to another single institution dataset. We show that
our approach outperforms several other state-of-the-art approaches and provide
theoretical justification for the efficacy of batch normalization modulation.
The results of this study are generally applicable to the deployment of any
clinical deep learning model which requires domain expansion.
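A minimal sketch of the elastic-weight-consolidation component (standard EWC; the batch-normalization modulation and the paper's exact loss are not reproduced here):

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Quadratic penalty anchoring parameters that are important for the
    original institution (per diagonal Fisher estimates) near their old values."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# During fine-tuning on a new institution (illustrative usage):
#   total_loss = task_loss + ewc_penalty(model, fisher, old_params)
```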
|
The determination of isothermal adsorption curves provides a mechanism
for obtaining information on the process of water adsorption in
organic and inorganic materials. In addition, it is a measure to be considered
when characterizing the physicochemical and structural properties of
materials. We present an approach to the state of knowledge about the
methods to characterize, physically and structurally, seeds and materials
associated with food products, and relate this knowledge to biophysical
processes in these materials. This review considers the papers available since
2001 associated with water adsorption studies on seeds and other food products,
as well as the approaches of different authors to the technical and experimental
models and processes that are needed for the development of this topic. From
these articles, the applied experimental methodologies (obtaining samples,
environmental conditions and laboratory equipment) and the mathematical models
used to give physical, chemical and biological meaning to the results were
analyzed and discussed, concluding with the methodologies that have best adapted
to advances in the technology for obtaining isothermal curves in recent
years.
|
Recent research has used deep learning to develop partial differential
equation (PDE) models in science and engineering. The functional form of the
PDE is determined by a neural network, and the neural network parameters are
calibrated to available data. Calibration of the embedded neural network can be
performed by optimizing over the PDE. Motivated by these applications, we
rigorously study the optimization of a class of linear elliptic PDEs with
neural network terms. The neural network parameters in the PDE are optimized
using gradient descent, where the gradient is evaluated using an adjoint PDE.
As the number of parameters becomes large, the PDE and adjoint PDE converge to a
non-local PDE system. Using this limit PDE system, we are able to prove
convergence of the neural network-PDE to a global minimum during the
optimization. The limit PDE system contains a non-local linear operator whose
eigenvalues are positive but become arbitrarily small. The lack of a spectral
gap for the eigenvalues poses the main challenge for the global convergence
proof. Careful analysis of the spectral decomposition of the coupled PDE and
adjoint PDE system is required. Finally, we use this adjoint method to train a
neural network model for an application in fluid mechanics, in which the neural
network functions as a closure model for the Reynolds-averaged Navier-Stokes
(RANS) equations. The RANS neural network model is trained on several datasets
for turbulent channel flow and is evaluated out-of-sample at different Reynolds
numbers.
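Schematically (a generic adjoint-method identity stated in our own simplified notation, not the paper's precise setting), for a PDE $\mathcal{L}u + f(u;\theta) = h$ with objective $J(\theta)=\frac12\int_\Omega\big(u_\theta-u^{\mathrm{data}}\big)^2\,dx$, the adjoint variable $\hat u$ solves
$$\mathcal{L}^*\hat u + \partial_u f(u_\theta;\theta)\,\hat u = -\big(u_\theta - u^{\mathrm{data}}\big),$$
and the gradient used in the descent step is
$$\nabla_\theta J(\theta) = \int_\Omega \hat u\,\partial_\theta f(u_\theta;\theta)\,dx,$$
so each gradient evaluation requires one forward PDE solve and one adjoint solve.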
|
A majority of the coded matrix-matrix computation literature has broadly focused
on two directions: matrix partitioning for computing a single computation task
and batch processing of multiple distinct computation tasks. While these works
provide codes with good straggler resilience and fast decoding for their
problem spaces, these codes would not be able to take advantage of the natural
redundancy of re-using matrices across batch jobs. In this paper, we introduce
the Variable Coded Distributed Batch Matrix Multiplication (VCDBMM) problem
which tasks a distributed system to perform batch matrix multiplication where
matrices are not necessarily distinct among batch jobs. Inspired in part by
Cross-Subspace Alignment codes, we develop Flexible Cross-Subspace Alignments
(FCSA) codes that are flexible enough to utilize this redundancy. We provide a
full characterization of FCSA codes which allow for a wide variety of system
complexities including good straggler resilience and fast decoding. We
theoretically demonstrate that, under certain practical conditions, FCSA codes
are within a factor of 2 of the optimal solution when it comes to straggler
resilience. Furthermore, our simulations demonstrate that our codes can achieve
even better optimality gaps in practice, even going as low as 1.7.
|
In this paper, we use local fractional derivatives to show the H\"older
continuity of the solution to the following nonlinear time-fractional slow and
fast diffusion equation:
$$\left(\partial^\beta+\frac{\nu}{2}(-\Delta)^{\alpha/2}\right)u(t,x) =
I_t^\gamma\left[\sigma\left(u(t,x)\right)\dot{W}(t,x)\right],\quad t>0,\:
x\in\mathbb{R}^d,$$ where $\dot{W}$ is the space-time white noise,
$\alpha\in(0,2]$, $\beta\in(0,2)$, $\gamma\ge 0$ and $\nu>0$, under the
condition that $2(\beta+\gamma)-1-d\beta/\alpha>0$. The case when
$\beta+\gamma\le 1$ was obtained in \cite{ChHuNu19}. In this paper, we
remove this extra condition, so that in particular all cases
$\beta\in(0,2)$ are covered.
|
For $S \subset \mathbb{R}^n$ and $d > 0$, denote by $G(S, d)$ the graph with
vertex set $S$ with any two vertices being adjacent if and only if they are at
a Euclidean distance $d$ apart. Deem such a graph to be ``non-trivial'' if $d$
is actually realized as a distance between points of $S$. In a 2015 article,
the author asked if there exist distinct $d_1, d_2$ such that the non-trivial
graphs $G(\mathbb{Z}^2, d_1)$ and $G(\mathbb{Z}^2, d_2)$ are isomorphic. In our
current work, we offer a straightforward geometric construction to show that a
negative answer holds for this question.
|
Adaptive control architectures often make use of Lyapunov functions to design
adaptive laws. We are specifically interested in adaptive control methods, such
as the well-known L1 adaptive architecture, which employ a parameter observer
for this purpose. In such architectures, the observation error plays a critical
role in determining analytical bounds on the tracking error as well as
robustness. In this paper, we show how the non-existence of coercive Lyapunov
operators can impact the analytical bounds, and with it the performance and the
robustness of such adaptive systems.
|
High-resolution remote sensing images can provide abundant appearance
information for ship detection. Although several existing methods use image
super-resolution (SR) approaches to improve the detection performance, they
consider image SR and ship detection as two separate processes and overlook the
internal coherence between these two correlated tasks. In this paper, we
explore the potential benefits introduced by image SR to ship detection, and
propose an end-to-end network named ShipSRDet. In our method, we not only feed
the super-resolved images to the detector but also integrate the intermediate
features of the SR network with those of the detection network. In this way,
the informative feature representation extracted by the SR network can be fully
used for ship detection. Experimental results on the HRSC dataset validate the
effectiveness of our method. Our ShipSRDet can recover the missing details from
the input image and achieves promising ship detection performance.
|
Detection of individual molecules is the ultimate goal of any chemical
sensor. In the case of gas detection, such resolution has been achieved in
advanced nanoscale electronic solid-state sensors, but it has not been possible
so far in integrated photonic devices, where the weak light-molecule
interaction is typically hidden by noise. Here, we demonstrate a scheme to
generate ultrasensitive down-conversion four-wave-mixing (FWM) in a graphene
bipolar-junction-transistor heterogeneous D-shaped fiber. In the communication
band, the FWM conversion efficiency can change steeply when the graphene Fermi
level approaches 0.4 eV. In this condition, we exploit our unique two-step
optoelectronic heterodyne detection scheme, and we achieve real-time individual
gas molecule detection in vacuum. Such a combination of graphene's strong
nonlinearities, electrical tunability, and all-fiber integration paves the way
toward the design of versatile high-performance graphene photonic devices.
|
In this paper, we study the problem of efficient service relocation (i.e.,
changing the assigned data center for a selected client node) in elastic optical
networks (EONs) in order to increase network performance (measured by the
volume of accepted traffic). To this end, we first propose a novel traffic model
for cloud-ready transport networks. The model takes into account four flow
types (i.e., city-to-city, city-to-data center, data center-to-city and
data center-to-data center), while the flow characteristics are based on real
economic and geographical parameters of the cities related to network nodes.
Then, we propose a dedicated flow allocation algorithm that can be supported by
the service relocation process. We also introduce 21 different relocation
policies, which use three types of data for decision making: network
topological characteristics, rejection history and traffic prediction.
Eventually, we perform extensive numerical experiments in order to: (i) tune the
proposed optimization approaches and (ii) evaluate and compare their efficiency
and select the best one. The results of the investigation prove the high
efficiency of the proposed policies. A properly designed relocation policy
allowed up to 3% more traffic to be allocated (compared to the allocation
without that policy). The results also reveal that the most efficient relocation
policy bases its decisions on two types of data simultaneously: the rejection
history and traffic prediction.
|
Predictive modeling is a key factor for saving time and resources in
manufacturing processes such as fermentation, arising e.g. in food and
chemical production. According to Zhang et al. (2002),
the open-loop dynamics of yeast are highly dependent on the initial cell mass
distribution. This can be modeled via population balance models describing the
single-cell behavior of the yeast cell. There have already been several
population balance models for wine fermentation in the literature. However, the
new model introduced in this paper is much more detailed than the ones studied
previously. This new model for the white wine fermentation process is based on
a combination of components previously introduced in literature. It turns it
into a system of highly nonlinear weakly hyperbolic partial/ordinary
integro-differential equations. This model becomes very challenging from a
theoretical and numerical point of view. Existence and uniqueness of solutions
to a simplified version of the introduced problem are studied based on semigroup
theory. For its numerical solution, a methodology based on a finite
volume scheme combined with a time-implicit scheme is derived. The impact of
the initial cell mass distribution on the solution is studied and underlined
with numerical results. The detailed model is compared to a simpler model based
on ordinary differential equations. The observed differences for different
initial distributions and the different models turn out to be smaller than
expected. The outcomes of this paper are of practical use to
applied mathematicians, winemakers and process engineers.
|
Non-line-of-sight (NLOS) imaging enables monitoring around corners and is
promising for diverse applications. The resolution of transient NLOS imaging is
limited to a centimeter scale, mainly by the temporal resolution of the
detectors. Here, we construct an up-conversion single-photon detector with a
high temporal resolution of ~1.4 ps and a low noise count rate of 5 counts per
second (cps). Notably, the detector operates at room temperature and at
near-infrared wavelengths. Using this detector, we demonstrate high-resolution
and low-noise NLOS imaging. Our system provides a 180 $\mu$m axial resolution and a 2 mm
lateral resolution, which is more than one order of magnitude better than that
in previous experiments. These results open avenues for high-resolution NLOS
imaging techniques in relevant applications.
|
Card shuffling models have provided simple motivating examples for the
mathematical theory of mixing times for Markov chains. As a complement, we
introduce a more intricate realistic model of a certain observable real-world
scheme for mixing human players onto teams. We quantify numerically the
effectiveness of this mixing scheme over the 7 or 8 steps performed in
practice. We give a combinatorial proof of the non-trivial fact that the chain
is indeed irreducible.
|