Inspired by investigations of Bose-Einstein condensates (BECs) produced in
the Cold Atom Laboratory (CAL) aboard the International Space Station, we
present a study of thermodynamic properties of shell-shaped BECs. Within the
context of a spherically symmetric `bubble trap' potential, we study the
evolution of the system from small filled spheres to hollow, large, thin shells
via the tuning of trap parameters. We analyze the bubble trap spectrum and
states, and track the distinct changes in spectra between radial and angular
modes across the evolution. This separation of the excitation spectrum provides
a basis for quantifying dimensional cross-over to quasi-2D physics at a given
temperature. Using the spectral data, for a range of trap parameters, we
compute the critical temperature for a fixed number of particles to form a BEC.
For a set of initial temperatures, we also evaluate the change in temperature
that would occur in adiabatic expansion from small filled sphere to large thin
shell were the trap to be dynamically tuned. We show that the system cools
during this expansion but that the decrease in critical temperature occurs more
rapidly, thus resulting in depletion of any initial condensate. We contrast our
spectral methods with standard semiclassical treatments, which we find must be
used with caution in the thin-shell limit. With regard to interactions, using
energetic considerations corroborated by Bogoliubov treatments, we demonstrate
that interactions would be less important for thin shells due to reduced
density, while vortex physics would become more prominent. Finally, we apply our
treatments to traps that realistically model CAL experiments and borrow from
the thermodynamic insights found in the idealized bubble case during adiabatic
expansion.
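For reference, the spherically symmetric bubble trap potential used in this context is typically of the radiofrequency-dressed (Zobay-Garraway) form, sketched here with the detuning $\Delta$ and Rabi coupling $\Omega$ as the tunable trap parameters (conventions and prefactors vary between papers):

$$ V(r) \propto \sqrt{\left(\tfrac{1}{2}\, m\, \omega^2 r^2 - \hbar\Delta\right)^2 + (\hbar\Omega)^2}\,, $$

where increasing the detuning moves the potential minimum from the origin to a sphere of finite radius, driving the evolution from filled sphere to thin shell described above.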
|
Hardly any software development process is used as prescribed by authors or
standards. Regardless of company size or industry sector, a majority of project
teams and companies use hybrid development methods (short: hybrid methods) that
combine different development methods and practices. Even though such hybrid
methods are highly individualized, a common understanding of how to
systematically construct synergetic practices is missing. In this article, we
make a first step towards a statistical construction procedure for hybrid
methods. Grounded in 1467 data points from a large-scale practitioner survey,
we study the question: What are hybrid methods made of and how can they be
systematically constructed? Our findings show that only eight methods and few
practices build the core of modern software development. Using an 85% agreement
level in the participants' selections, we provide examples illustrating how
hybrid methods can be characterized by the practices they are made of.
Furthermore, using this characterization, we develop an initial construction
procedure, which allows for defining a method frame and enriching it
incrementally to devise a hybrid method using ranked sets of practices.
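As a toy illustration of the construction idea (a sketch under assumed inputs, not the authors' tooling; the practice names and data layout are hypothetical), one can rank practices by how many respondents selected them, take those reaching the agreement level as the method frame, and keep the rest as a ranked enrichment list:

```python
# Hypothetical sketch of the statistical construction procedure: practices
# reaching the agreement threshold form the method frame; the remainder is a
# ranked list used to enrich the frame incrementally.
from collections import Counter

def build_hybrid_method(responses, agreement=0.85):
    n = len(responses)
    counts = Counter(p for selected in responses for p in selected)
    ranked = counts.most_common()  # practices ranked by usage across teams
    frame = [p for p, c in ranked if c / n >= agreement]       # method frame
    enrichment = [p for p, c in ranked if c / n < agreement]   # ranked extras
    return frame, enrichment

responses = [{"Scrum", "Code Review", "CI"},          # hypothetical survey data
             {"Scrum", "Code Review", "Pair Programming"},
             {"Scrum", "Code Review", "CI"}]
frame, extras = build_hybrid_method(responses)
print(frame)   # practices meeting the 85% agreement level
print(extras)  # candidates for incremental enrichment
```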
|
The remarkable progress in the field of laser spectroscopy induced by the
invention of the frequency-comb laser has enabled many new high-precision tests
of fundamental theory and searches for new physics. Extending frequency-comb
based spectroscopy techniques to the vacuum ultraviolet (VUV) and extreme ultraviolet (XUV)
spectral range would enable measurements in e.g. heavier hydrogen-like systems
and open up new possibilities for tests of quantum electrodynamics and
measurements of fundamental constants. The main approaches rely on
high-harmonic generation (HHG), which is known to induce spurious phase shifts
from plasma formation. After our initial report (Physical Review Letters 123,
143001 (2019)), we give a detailed account of how the Ramsey-comb technique is
used to probe the plasma dynamics with high precision, and enables accurate
spectroscopy in the VUV. A series of Ramsey fringes is recorded to track the
phase evolution of a superposition state in xenon atoms, excited by two
up-converted frequency-comb pulses. Phase shifts of up to 1 rad induced by HHG
were observed at ns timescales and with mrad-level accuracy at $110$ nm. Such
phase shifts could be reduced to a negligible level, enabling us to measure the
$5p^6 \rightarrow 5p^5 8s~^2[3/2]_1$ transition frequency in $^{132}Xe$ at 110
nm (seventh harmonic) with sub-MHz accuracy. The obtained value is $10^4$ times
more precise than the previous determination and the fractional accuracy of
$2.3 \times 10^{-10}$ is $3.6$ times better than the previous best
spectroscopic measurement using HHG. The isotope shifts between $^{132}Xe$ and
two other isotopes were determined with an accuracy of $420$ kHz. The method
can be readily extended to achieve kHz-level accuracy, e.g. to measure the
$1S-2S$ transition in $He^+$. Therefore, the Ramsey-comb method shows great
promise for high-precision spectroscopy of targets requiring VUV and XUV
wavelengths.
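Schematically (a simplified restatement of the Ramsey-comb principle, not the paper's full analysis), the fringe phase accumulated over the delay $\Delta t$ between the two up-converted pulses is

$$ \phi(\Delta t) = 2\pi f_0\, \Delta t + \phi_{\mathrm{HHG}} \pmod{2\pi}, $$

so recording fringes at a series of delays both tracks the HHG-induced phase $\phi_{\mathrm{HHG}}$ and, once that shift is controlled, determines the transition frequency $f_0$.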
|
The Large Area Telescope (LAT) on board the \Fermi Gamma-ray Space Telescope
(\Fermi) shows long-lasting high-energy emission in many gamma-ray bursts
(GRBs), similar to X-ray afterglows observed by the Neil Gehrels Swift
Observatory \citep[\textit{Swift};][]{gehrels2004}. Some LAT light curves (LCs)
show a late-time flattening reminiscent of X-ray plateaus. We explore the
presence of plateaus in the LAT temporally extended emission by analyzing GRBs from
the second \lat GRB Catalog \citep[2FLGC;][]{Ajello2019apj} from 2008 to May
2016 with known redshifts, and check whether they follow closure relations
corresponding to 4 distinct astrophysical environments predicted by the
external forward shock (ES) model. We find that three LCs can be fit by the
same phenomenological model used to fit X-ray plateaus \citep{Willingale2007}
and show tentative evidence for the existence of plateaus in their high-energy
extended emission. The most favorable scenario is a slow cooling regime,
whereas the preferred density profile for each GRB varies from a constant
density ISM to a $r^{-2}$ wind environment. We also compare the end time of the
plateaus in $\gamma$-rays and X-rays using a statistical comparison with 222
\textit{Swift} GRBs with plateaus and known redshifts from January 2005 to
August 2019. Within this comparison, the case of GRB 090510 shows an indication
of chromaticity at the end time of the plateau. Finally, we update the 3-D
fundamental plane relation among the rest frame end time of the plateau, its
correspondent luminosity, and the peak prompt luminosity for 222 GRBs observed
by \textit{Swift}. We find that these three LAT GRBs follow this relation.
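For reference, the phenomenological model of Willingale et al. (2007) fits each light-curve component as (notation simplified here; $T$ marks the end of the plateau, $F$ its flux normalization, $\alpha$ the late-time decay index, and $\tau$ the initial rise timescale):

$$ f(t) = \begin{cases} F \exp\left(\alpha\left(1 - \dfrac{t}{T}\right)\right) \exp\left(-\dfrac{\tau}{t}\right), & t < T, \\[6pt] F \left(\dfrac{t}{T}\right)^{-\alpha} \exp\left(-\dfrac{\tau}{t}\right), & t \geq T. \end{cases} $$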
|
We review the properties of neutron matter in the low-density regime. In
particular, we examine its ground state energy and the superfluid neutron
pairing gap, and analyze their evolution from the weak to the strong coupling
regime. The calculations of the energy and the pairing gap are performed,
respectively, within the Brueckner--Hartree--Fock approach of nuclear matter
and the BCS theory using the chiral nucleon-nucleon interaction of Entem and
Machleidt at N$^3$LO and the Argonne V18 phenomenological potential. Results
for the energy are also shown for a simple Gaussian potential with a strength
and range adjusted to reproduce the $^1S_0$ neutron-neutron scattering length
and effective range. Our results are compared with those of quantum Monte Carlo
calculations for neutron matter and cold atoms. The Tan contact parameter in
neutron matter is also calculated, finding reasonable agreement with
experimental data on ultra-cold atoms only at very low densities. We find
that low-density neutron matter exhibits a behavior close to that of a Fermi
gas at the unitary limit, although this limit is actually never reached. We
also review the properties (energy, effective mass and quasiparticle residue)
of a spin-down neutron impurity immersed in a low-density free Fermi gas of
spin-up neutrons already studied by the author in a recent work where it was
shown that these properties are very close to those of an attractive Fermi
polaron in the unitary limit.
|
Poetry is one of the most important art forms of human languages. Recently,
many studies have focused on incorporating some linguistic features of poetry,
such as style and sentiment, into its understanding or generation system.
However, little attention has been paid to understanding or evaluating the semantics of
poetry. Therefore, we propose a novel task to assess a model's semantic
understanding of poetry by poem matching. Specifically, this task requires the
model to select one line of Chinese classical poetry among four candidates
according to the modern Chinese translation of a line of poetry. To construct
this dataset, we first obtain a set of parallel data of Chinese classical
poetry and modern Chinese translation. Then, for each line, we retrieve similar
lines from a poetry corpus as negative choices. We name the dataset
Chinese Classical Poetry Matching Dataset (CCPM) and release it at
https://github.com/THUNLP-AIPoet/CCPM. We hope this dataset can further enhance
the study on incorporating deep semantics into the understanding and generation
system of Chinese classical poetry. We also run two variants of
BERT on this dataset as preliminary baselines.
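A minimal sketch of how such a matching task is evaluated (illustrative only; the field names are hypothetical, and the actual format is documented in the repository above):

```python
# Each example pairs a modern Chinese translation with four classical
# candidate lines; a model scores each candidate and picks the best.
def accuracy(examples, score):
    correct = 0
    for ex in examples:
        scores = [score(ex["translation"], cand) for cand in ex["choices"]]
        correct += scores.index(max(scores)) == ex["answer"]
    return correct / len(examples)

def overlap_score(translation, candidate):
    # trivial non-neural baseline: character overlap with the translation
    return len(set(translation) & set(candidate))
```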
|
Taking advantage of social media platforms, such as Twitter, this paper
provides an effective framework for emotion detection among those who are
quarantined. Early detection of emotional feelings and their trends help
implement timely intervention strategies. Given the limitations of medical
diagnosis of early emotional change signs during the quarantine period,
artificial intelligence models provide effective mechanisms in uncovering early
signs, symptoms and escalating trends. The novelty of the approach presented
herein is a multitask methodological framework for text data processing,
implemented as a pipeline for meaningful emotion detection and analysis, based
on the Plutchik/Ekman approach to emotion and trend detection. We present an
evaluation of the framework and a pilot system. The results confirm the
effectiveness of the proposed framework for topic trend and emotion detection
in COVID-19 tweets. Our findings reveal that Stay-At-Home restrictions led
people to express both negative and positive emotional semantics on Twitter.
Semantic trends concerning the safety of staying at home decreased rapidly
within the 28-day window, while negative feelings related to friends dying and
quarantined life increased on some days. These findings have the potential to
inform public health policy decisions through monitoring trends in the
emotional feelings of those who are quarantined. The framework presented here
can assist in such monitoring by serving as an online emotion detection
toolkit.
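One stage of such a pipeline can be sketched as lexicon-based emotion tagging (an illustrative toy, not the authors' system; the tiny lexicon below is hypothetical), whose per-day aggregates then feed trend detection:

```python
# Tag a tweet with Plutchik-style basic emotions via a keyword lexicon.
from collections import Counter

LEXICON = {  # hypothetical word -> emotion mapping
    "scared": "fear", "worried": "fear", "happy": "joy",
    "grateful": "joy", "angry": "anger", "sad": "sadness",
}

def tag_emotions(tweet):
    words = tweet.lower().split()
    return Counter(LEXICON[w] for w in words if w in LEXICON)

print(tag_emotions("so worried and sad but grateful to be home"))
# Counter({'fear': 1, 'sadness': 1, 'joy': 1})
```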
|
In this paper, we numerically investigate the propulsive performance of
three-dimensional pitching flexible plates with varying flexibility and
trailing edge shapes. To eliminate the effect of other geometric parameters,
only the trailing edge angle is varied from 45{\deg} (concave), 90{\deg}
(rectangular) to 135{\deg} (convex) while maintaining the constant area of the
flexible plate. We examine the impact of the frequency ratio f* defined as the
ratio of the natural frequency of the flexible plate to the actuated pitching
frequency. Through our numerical simulations, we find that the global maximum
mean thrust occurs near f*=1 corresponding to the resonance condition. However,
the optimal propulsive efficiency is achieved around f*=1.54 instead of the
resonance condition. While the convex plate with low and high bending stiffness
values shows the best performance, the rectangular plate with moderate bending
stiffness is the most efficient propulsion configuration. Through dynamic mode
decomposition, we find that the passive deformation can help in redistributing
the pressure gradient, thus improving the efficiency and thrust production. A
momentum-based thrust evaluation approach is adopted to link the instantaneous
vortical structures with the time-dependent thrust. When the vortices detach
from the trailing edge, the instantaneous thrust shows the largest values due
to the strong momentum change and convection process. Moderate flexibility and
convex shape help transfer momentum to the fluid, thereby improving thrust
generation and promoting the transition from drag to thrust. The increase of
the trailing edge angle can broaden the range of flexibility that produces
positive mean thrust.
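In symbols, the frequency ratio examined above is

$$ f^{*} = \frac{f_{\mathrm{nat}}}{f_{\mathrm{pitch}}}, $$

with the global maximum mean thrust near the resonance value $f^{*}=1$ and the peak propulsive efficiency near $f^{*}=1.54$.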
|
The world is still struggling to control and contain the spread of the
COVID-19 pandemic caused by the SARS-CoV-2 virus. The medical conditions
associated with SARS-CoV-2 infections have resulted in a surge in the number of
patients at clinics and hospitals, leading to a significantly increased strain
on healthcare resources. As such, an important part of managing and handling
patients with SARS-CoV-2 infections within the clinical workflow is severity
assessment, which is often conducted with the use of chest x-ray (CXR) images.
In this work, we introduce COVID-Net CXR-S, a convolutional neural network for
predicting the airspace severity of a SARS-CoV-2 positive patient based on a
CXR image of the patient's chest. More specifically, we leveraged transfer
learning to transfer representational knowledge gained from over 16,000 CXR
images from a multinational cohort of over 15,000 patient cases into a custom
network architecture for severity assessment. Experimental results with a
multinational patient cohort curated by the Radiological Society of North
America (RSNA) RICORD initiative showed that the proposed COVID-Net CXR-S has
potential to be a powerful tool for computer-aided severity assessment of CXR
images of COVID-19 positive patients. Furthermore, radiologist validation on
select cases by two board-certified radiologists with over 10 and 19 years of
experience, respectively, showed consistency between radiologist interpretation
and critical factors leveraged by COVID-Net CXR-S for severity assessment.
While not a production-ready solution, the ultimate goal for the open source
release of COVID-Net CXR-S is to act as a catalyst for clinical scientists,
machine learning researchers, as well as citizen scientists to develop
innovative new clinical decision support solutions for helping clinicians
around the world manage the continuing pandemic.
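The transfer-learning step can be sketched as follows (an illustrative PyTorch snippet, not the actual COVID-Net CXR-S architecture or training recipe; the backbone and number of severity levels are stand-in assumptions):

```python
# Reuse a pretrained backbone and retrain only a new severity head.
import torch.nn as nn
from torchvision import models

NUM_SEVERITY_LEVELS = 2  # stand-in for the airspace severity grades

backbone = models.resnet50(weights="IMAGENET1K_V2")  # stand-in pretraining
for p in backbone.parameters():
    p.requires_grad = False  # freeze the transferred representation
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SEVERITY_LEVELS)
# Fine-tune the new head (and optionally late blocks) on severity labels.
```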
|
The understanding of the ro-vibrational dynamics in molecular
(near)-atmospheric pressure plasmas is essential to investigate the influence
of vibrational excited molecules on the discharge properties. In a companion
paper, results of ro-vibrational coherent anti-Stokes Raman scattering (CARS)
measurements for a nanosecond pulsed plasma jet consisting of two conducting
molybdenum electrodes with a gap of 1 mm in nitrogen at 200 mbar are presented.
Here, those results are discussed and compared to theoretical predictions based
on rate coefficients for the relevant processes found in the literature. It is
found that during the discharge the measured vibrational excitation agrees
well with predictions obtained from the rates for resonant electron collisions
calculated by Laporta et al [Plasma Sources Science and Technology 23, 065002
(2014)]. The predictions are based on the electric field during the discharge,
measured by EFISH, and the electron density, which is deduced from the field and
mobility data calculated with Bolsig+. In the afterglow, a simple kinetic
simulation for the vibrational subsystem of nitrogen is performed and it is
found that the populations of vibrationally excited states develop according to
vibrational-vibrational transfer on timescales of a few $\mu$s, while the
development on timescales of some hundred $\mu$s is determined by the losses at
the walls. No significant influence of electronically excited states on the
populations of the vibrational states visible in the CARS measurements ($v
\lesssim 7$) was observed.
|
Our lives are increasingly filled with Internet of Things (IoT) devices. These devices
often rely on closed or poorly documented protocols, with unknown formats and
semantics. Learning how to interact with such devices in an autonomous manner
is the key for interoperability and automatic verification of their
capabilities. In this paper, we propose RL-IoT, a system that explores how to
automatically interact with possibly unknown IoT devices. We leverage
reinforcement learning (RL) to recover the semantics of protocol messages and
to take control of the device to reach a given goal, while minimizing the
number of interactions. We assume to know only a database of possible IoT
protocol messages, whose semantics are however unknown. RL-IoT exchanges
messages with the target IoT device, learning those commands that are useful to
reach the given goal. Our results show that RL-IoT is able to solve both simple
and complex tasks. With properly tuned parameters, RL-IoT learns how to perform
actions with the target device, a Yeelight smart bulb in our case study,
completing non-trivial patterns with as few as 400 interactions. RL-IoT paves
the road for automatic interactions with poorly documented IoT protocols, thus
enabling interoperable systems.
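The learning loop can be sketched with tabular Q-learning (an illustrative reduction; the message database, reward signal, and episode structure here are hypothetical stand-ins for the real system):

```python
# Learn which protocol messages move the device toward a goal state.
import random

MESSAGES = ["msg_0", "msg_1", "msg_2", "msg_3"]  # hypothetical message database

def q_learning(send, n_episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    Q = {}
    for _ in range(n_episodes):
        state = "init"
        for _ in range(20):  # cap interactions per episode
            q = Q.setdefault(state, {m: 0.0 for m in MESSAGES})
            msg = random.choice(MESSAGES) if random.random() < eps \
                else max(q, key=q.get)
            next_state, reward, done = send(msg, state)  # query the device
            nq = Q.setdefault(next_state, {m: 0.0 for m in MESSAGES})
            q[msg] += alpha * (reward + gamma * max(nq.values()) - q[msg])
            state = next_state
            if done:
                break
    return Q
```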
|
Quantifying and verifying the control level in preparing a quantum state are
central challenges in building quantum devices. The quantum state is
characterized from experimental measurements, using a procedure known as
tomography, which requires a vast number of resources. Furthermore, the
tomography for a quantum device with temporal processing, which is
fundamentally different from the standard tomography, has not been formulated.
We develop a practical and approximate tomography method using a recurrent
machine learning framework for this intriguing situation. The method is based
on repeated quantum interactions between a system called a quantum reservoir and
a stream of quantum states. Measurement data from the reservoir are connected
to a linear readout to train a recurrent relation between quantum channels
applied to the input stream. We demonstrate our algorithms for quantum learning
tasks followed by the proposal of a quantum short-term memory capacity to
evaluate the temporal processing ability of near-term quantum devices.
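The readout step admits a simple classical sketch (an assumption about the general recipe, not the authors' exact algorithm): measurement records from the reservoir are fit to the target outputs by ridge regression.

```python
# Fit a linear readout W so that X @ W approximates the targets Y.
import numpy as np

def train_readout(X, Y, reg=1e-6):
    # X: (T, n_features) reservoir measurements; Y: (T, n_outputs) targets
    A = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)  # predictions: X_new @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))                      # synthetic measurements
Y = X @ rng.normal(size=(16, 2)) + 0.01 * rng.normal(size=(500, 2))
W = train_readout(X, Y)
```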
|
This paper describes the short-term competition on the Components
Segmentation Task of Document Photos that was prepared in the context of the
16th International Conference on Document Analysis and Recognition (ICDAR
2021). This competition aims to bring together researchers working in the field
of identification document image processing and provides them with a suitable
benchmark to compare their techniques on the component segmentation task of
document images. Three challenge tasks were proposed entailing different
segmentation assignments to be performed on a provided dataset. The collected
data are from several types of Brazilian ID documents, whose personal
information was conveniently replaced. Sixteen participants submitted results
for some or all of the three tasks, with widely varying scores on the adopted
metrics, e.g., a Dice Similarity Coefficient ranging from 0.06 to 0.99. Different
Deep Learning models were applied by the entrants with diverse strategies to
achieve the best results in each of the tasks. Obtained results show that the
currently applied methods for solving one of the proposed tasks (document
boundary detection) are already well established. However, for the other two
challenge tasks (text zone and handwritten sign detection) research and
development of more robust approaches are still required to achieve acceptable
results.
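For reference, the Dice Similarity Coefficient quoted above is the standard overlap measure $\mathrm{DSC} = 2|A \cap B| / (|A| + |B|)$, e.g. for binary masks:

```python
# Dice Similarity Coefficient between a predicted and a reference mask.
import numpy as np

def dice(pred, target, eps=1e-8):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```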
|
In this work, the issue of estimation of reachable sets in continuous bimodal
piecewise affine systems is studied. A new method is proposed, in the framework
of ellipsoidal bounding, using piecewise quadratic Lyapunov functions. Although
bimodal piecewise affine systems can be seen as a special class of affine
hybrid systems, reachability methods developed for affine hybrid systems might
be unnecessarily complex for bimodal dynamics. This work exploits the
dynamical structure of the system to propose a
simpler approach. More specifically, because of the piecewise nature of the
Lyapunov function, we first derive conditions to ensure that a given quadratic
function is positive on half spaces. Then, we exploit the property of bimodal
piecewise quadratic functions being continuous on a given hyperplane. Finally,
linear matrix characterizations of the estimate of the reachable set are
derived.
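As background (a standard S-procedure-style condition, sketched here rather than the paper's exact derivation), positivity of a quadratic function on a half-space can be certified as follows: $x^\top P x + 2 q^\top x + r > 0$ for all $x$ with $c^\top x \geq 0$ holds if there exists $\tau \geq 0$ such that

$$ \begin{pmatrix} P & q \\ q^\top & r \end{pmatrix} - \tau \begin{pmatrix} 0 & \tfrac{1}{2} c \\ \tfrac{1}{2} c^\top & 0 \end{pmatrix} \succ 0, $$

which is a linear matrix inequality in $(P, q, r, \tau)$.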
|
Magneto-optical parameters of trions in novel large and symmetric InP-based
quantum dots, uncommon for molecular beam epitaxy grown nanostructures, with
emission in the third telecom window, are measured in Voigt and Faraday
configurations of external magnetic field. The diamagnetic coefficients are
found to be in the ranges of 1.5-4 $\mu$eV/T$^2$ and 8-15 $\mu$eV/T$^2$,
out of plane and in plane of the dots, respectively. The determined values of
diamagnetic shifts are related to the anisotropy of dot sizes. Trion g-factors
are measured to be relatively small, in the ranges of 0.3-0.7 and 0.5-1.3 in
the two configurations, respectively. Analysis of single carrier g-factors, based
on the formalism of spin-correlated orbital currents, leads to similar
values for hole and electron of $\sim 0.25$ for the Voigt and $g_e \approx -5$,
$g_h \approx +6$ for the Faraday configuration of magnetic field. Values of
g-factors close to zero measured in Voigt configuration make the investigated
dots promising for electrical tuning of g-factor sign, required for schemes of
single spin control in qubit applications.
|
Edge devices, such as cameras and mobile units, are increasingly capable of
performing sophisticated computation in addition to their traditional roles in
sensing and communicating signals. The focus of this paper is on collaborative
object detection, where deep features computed on the edge device from input
images are transmitted to the cloud for further processing. We consider the
impact of packet loss on the transmitted features and examine several ways for
recovering the missing data. In particular, through theory and experiments, we
show that methods for image inpainting based on partial differential equations
work well for the recovery of missing features in the latent space. The
obtained results represent the new state of the art for missing data recovery
in collaborative object detection.
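The recovery step can be sketched with OpenCV's Navier-Stokes (PDE-based) inpainting (illustrative; the paper's exact pipeline and any rescaling choices may differ):

```python
# Inpaint a missing region of one float feature channel over a loss mask.
import cv2
import numpy as np

def inpaint_feature_channel(feat, mask):
    # feat: (H, W) float32 channel; mask: (H, W) uint8, nonzero = missing
    lo, hi = float(feat.min()), float(feat.max())
    scale = (hi - lo) or 1.0
    img = ((feat - lo) / scale * 255).astype(np.uint8)  # inpaint needs 8-bit
    filled = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)  # PDE-based variant
    return filled.astype(np.float32) / 255 * scale + lo
```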
|
The long-dreamed-of quantum internet would consist of a network of quantum nodes
(solid-state or atomic systems) linked by flying qubits, naturally based on
photons, travelling over long distances at the speed of light, with negligible
decoherence. A key component is a light source, able to provide single or
entangled photon pairs. Among the different platforms, semiconductor quantum
dots are very attractive, as they can be integrated with other photonic and
electronic components in miniaturized chips. In the early 1990s two approaches
were developed to synthesize self-assembled epitaxial semiconductor quantum
dots (QDs), or artificial atoms, namely the Stranski-Krastanov (SK) and the
droplet epitaxy (DE) method. Because of its robustness and simplicity, the SK
method became the workhorse to achieve several breakthroughs in both
fundamental and technological areas. The need for specific emission wavelengths
or structural and optical properties has nevertheless motivated further
research on the DE method and its more recent development, the
local-droplet-etching (LDE), as complementary routes to obtain high quality
semiconductor nanostructures. The recent reports on the generation of highly
entangled photon pairs, combined with good photon indistinguishability, suggest
that DE and LDE QDs may complement (and sometimes even outperform) conventional
SK InGaAs QDs as quantum emitters. We present here a critical survey of the
state of the art of DE and LDE, highlighting the advantages and weaknesses, the
obtained achievements and the still open challenges, in view of applications in
quantum communication and technology.
|
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm based
on high-dimensional holistic representations of vectors. It recently gained
attention for embedded smart sensing due to its inherent error-resiliency and
suitability to highly parallel hardware implementations. In this work, we
propose a programmable all-digital CMOS implementation of a fully autonomous
HDC accelerator for always-on classification in energy-constrained sensor
nodes. By using energy-efficient standard cell memory (SCM), the design is
easily cross-technology mappable. It achieves extremely low power, 5 $\mu W$ in
typical applications, and an energy-efficiency improvement over the
state-of-the-art (SoA) digital architectures of up to 3$\times$ in post-layout
simulations for always-on wearable tasks such as EMG gesture recognition. As
part of the accelerator's architecture, we introduce novel hardware-friendly
embodiments of common HDC-algorithmic primitives, which result in a 3.3$\times$
technology-scaled area reduction over the SoA, achieving the same accuracy
levels in all examined targets. The proposed architecture also has a fully
configurable datapath using microcode optimized for HDC stored on an integrated
SCM based configuration memory, making the design "general-purpose" in terms of
HDC algorithm flexibility. This flexibility allows usage of the accelerator
across novel HDC tasks, for instance, a newly designed HDC applied to the task
of ball bearing fault detection.
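For context, the textbook binary forms of the HDC primitives that such accelerators implement in hardware-friendly variants are binding (XOR), bundling (majority), and permutation (cyclic shift):

```python
# Classic binary hyperdimensional computing primitives.
import numpy as np

D = 10000  # hypervector dimensionality
rng = np.random.default_rng(0)

def hv():
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    return a ^ b  # XOR binding; self-inverse

def bundle(*hvs):
    counts = np.sum(np.stack(hvs), axis=0, dtype=int)
    return (2 * counts > len(hvs)).astype(np.uint8)  # majority vote

def permute(a, k=1):
    return np.roll(a, k)  # encodes sequence position

def similarity(a, b):
    return 1 - np.count_nonzero(a ^ b) / D  # 1 - normalized Hamming distance

a, b = hv(), hv()
print(similarity(bind(a, b) ^ b, a))  # 1.0: XOR unbinding recovers a exactly
```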
|
The neutron-proton equilibration process in $^{48}$Ca + $^{40}$Ca at 35 MeV/nucleon
bombarding energy has been experimentally estimated by means of the isospin
transport ratio. Experimental data have been collected with a subset of the
FAZIA telescope array, which permitted the determination of Z and N of detected
fragments. For the first time, the QP evaporative channel has been compared
with the QP break-up one in a homogeneous and consistent way, pointing to
comparable n-p equilibration, which suggests similar interaction times between
projectile and target independently of the exit channel. Moreover, in the QP
evaporative channel n-p equilibration has been compared with the prediction of
the Antisymmetrized Molecular Dynamics (AMD) model coupled to the GEMINI
statistical model as an afterburner, showing a larger probability of proton and
neutron transfers in the simulation with respect to the experimental data.
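For context, the isospin transport (imbalance) ratio used above is conventionally defined, for an isospin-sensitive observable $X$ measured in the mixed system and in the neutron-rich ($H$) and neutron-poor ($L$) symmetric systems, as

$$ R(X) = \frac{2X - X_H - X_L}{X_H - X_L}, $$

so that $R = \pm 1$ indicates no equilibration and $R \to 0$ full n-p equilibration.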
|
Turbulent flows over wavy surfaces give rise to the formation of ripples,
dunes and other natural bedforms. To predict how much sediment these flows
transport, research has focused mainly on basal shear stress, which peaks
upstream of the highest topography, and has largely ignored the corresponding
pressure variations. In this article, we reanalyze old literature data, as well
as more recent wind tunnel results, to shed new light on the pressure induced by
a turbulent flow on a sinusoidal surface. While the Bernoulli effect increases
the velocity above crests and reduces it in troughs, pressure exhibits
variations that lag behind the topography. We extract the in-phase and
in-quadrature components from streamwise pressure profiles and compare them to
hydrodynamic predictions calibrated on shear stress data.
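Concretely, for a sinusoidal surface elevation $\zeta(x) = \zeta_0 \cos(kx)$, the surface pressure profile is decomposed as

$$ p(x) = \bar{p} + p_{\mathrm{in}} \cos(kx) + p_{\mathrm{quad}} \sin(kx), $$

where the in-phase component $p_{\mathrm{in}}$ is symmetric about the crests and the in-quadrature component $p_{\mathrm{quad}}$ captures the lag behind the topography.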
|
We report a set of exact formulae for computing Dirac masses, mixings, and
CP-violation parameter(s) from 3$\times$3 Yukawa matrices $Y$ valid when $Y
Y^\dagger \rightarrow U^\dagger \,Y Y^\dagger \, U$ under global $U(3)_{Q_L}$
flavour symmetry transformations $U$. The results apply to the Standard Model
Effective Field Theory (SMEFT) and its `geometric' realization (geoSMEFT). We
thereby complete, in the Dirac flavour sector, the catalogue of geoSMEFT
parameters derived at all orders in the $\sqrt{2 \langle H^\dagger H \rangle} /
\Lambda$ expansion. The formalism is basis-independent, and can be applied to
models with decoupled ultraviolet flavour dynamics, as well as to models whose
infrared dynamics are not minimally flavour violating. We highlight these
points with explicit examples and, as a further demonstration of the
formalism's utility, we derive expressions for the renormalization group flow
of quark masses, mixings, and CP-violation at all mass dimension and
perturbative loop orders in the (geo)SM(EFT) and beyond.
|
A hidden variable (hv) theory is a theory that allows globally dispersion
free ensembles. We demonstrate that the Phase Space formulation of Quantum
Mechanics (QM) is an hv theory with the position q, and momentum p as the hv.
Comparing the Phase Space and Hilbert space formulations of QM, we identify
the assumption that led von Neumann to the Hilbert space formulation of QM
which, in turn, precludes global dispersion free ensembles within the theory.
The assumption, dubbed I, is: "If a physical quantity $\mathbf{A}$ has an
operator $\hat{A}$ then $f(\mathbf{A})$ has the operator $f(\hat{A})$". This
assumption does not hold within the Phase Space formulation of QM.
The hv interpretation of the Phase Space formulation provides novel insight
into the interrelation between dispersion and noncommutativity of position and
momentum (operators) within the Hilbert space formulation of QM, and mitigates
the near-universal criticism of von Neumann's no-hidden-variable theorem.
|
The extremely low threshold voltage (Vth) of native MOSFETs (Vth~0V@300K) is
conducive to the design of cryogenic circuits. Previous research on cryogenic
MOSFETs mainly focused on the standard threshold voltage (SVT) and low
threshold voltage (LVT) MOSFETs. In this paper, we characterize native MOSFETs
within the temperature range from 300K to 4.2K. The cryogenic Vth increases up
to ~0.25V (W/L=10um/10um) and the improved subthreshold swing
(SS)~14.30mV/dec@4.2K. The off-state current (Ioff) and the gate-induced drain
leakage (GIDL) effect are ameliorated greatly. The step-up effect caused by the
substrate charge and the transconductance peak effect caused by the energy
quantization in different sub-bands are also discussed. Based on the EKV model,
we modified the mobility calculation equations and proposed a compact model of
large size native MOSFETs suitable for the range of 300K to 4.2K. The
mobility-related parameters are extracted via a machine learning approach and
the temperature dependences of the scattering mechanisms are analyzed. This
work is beneficial to both the research on cryogenic MOSFETs modeling and the
design of cryogenic CMOS circuits for quantum chips.
|
The majoron, a neutrinophilic pseudo-Goldstone boson conventionally arising
in the context of neutrino mass models, can damp neutrino free-streaming and
inject additional energy density into neutrinos prior to recombination. The
combination of these effects for an eV-scale mass majoron has been shown to
ameliorate the outstanding $H_0$ tension, albeit only if one introduces
additional dark radiation at the level of $\Delta N_{\rm eff} \sim 0.5$. We
show here that models of low-scale leptogenesis can naturally source this dark
radiation by generating a primordial population of majorons from the decays of
GeV-scale sterile neutrinos in the early Universe. Using a posterior predictive
distribution conditioned on Planck2018+BAO data, we show that the value of
$H_0$ observed by the SH$_0$ES collaboration is expected to occur at the level
of $\sim 10\%$ in the primordial majoron cosmology (to be compared with $\sim
0.1\%$ in the case of $\Lambda$CDM). This insight provides an intriguing
connection between the neutrino mass mechanism, the baryon asymmetry of the
Universe, and the discrepant measurements of $H_0$.
|
The classical construction of the Weil representation, that is, with complex
coefficients, has long been expected to work over more general coefficient
rings. This paper exhibits a minimal ring $\mathcal{A}$, the integral
closure of $\mathbb{Z}[\frac{1}{p}]$ in a cyclotomic field, and carries out the
construction of the Weil representation over $\mathcal{A}$-algebras. As a
leitmotiv throughout the work, most of the problems can actually be solved over
the base ring $\mathcal{A}$ and transferred to any $\mathcal{A}$-algebra by
scalar extension. The most striking fact is that these many Weil
representations arise as the scalar extension of a single one with
coefficients in $\mathcal{A}$. In this sense, the Weil module obtained is
universal. Building upon this construction, we speculate about and make
prognoses for an integral theta correspondence.
|
Light curves produced by the Kepler mission demonstrate stochastic brightness
fluctuations (or "flicker") of stellar origin which contribute to the noise
floor, limiting the sensitivity of exoplanet detection and characterization
methods. In stars with surface convection, the primary driver of these
variations on short (sub-eight-hour) timescales is believed to be convective
granulation. In this work, we improve existing models of this granular flicker
amplitude, or $F_8$, by including the effect of the Kepler bandpass on measured
flicker, by incorporating metallicity in determining convective Mach numbers,
and by using scaling relations from a wider set of numerical simulations. To
motivate and validate these changes, we use a recent database of convective
flicker measurements in Kepler stars, which allows us to more fully detail the
remaining model--prediction error. Our model improvements reduce the typical
misprediction of flicker amplitude from a factor of 2.5 to 2. We rule out
rotation period and strong magnetic activity as possible explanations for the
remaining model error, and we show that binary companions may affect convective
flicker. We also introduce an "envelope" model which predicts a range of
flicker amplitudes for any one star to account for some of the spread in
numerical simulations, and we find that this range covers 78% of observed
stars. We note that the solar granular flicker amplitude is lower than that of
most Sun-like stars. This improved model of convective flicker amplitude can better
characterize this source of noise in exoplanet studies as well as better inform
models and simulations of stellar granulation.
|
The idea of a metapopulation has become canonical in ecology. Its original
mean field form provides the important intuition that migration and extinction
interact to determine the dynamics of a population composed of subpopulations.
From its conception, it has been evident that the very essence of the
metapopulation paradigm centers on the process of local extinction. We note
that there are two qualitatively distinct types of extinction, gradual and
catastrophic, and explore their impact on the dynamics of metapopulation
formation using discrete iterative maps. First, by modifying the classic
logistic map with the addition of the Allee effect, we show that catastrophic
local extinctions in subpopulations are a prerequisite of metapopulation
formation. When subpopulations experience gradual extinction, increased
migration rates force synchrony and drive the metapopulation below the Allee
point resulting in migration induced destabilization of the system across
parameter space. Second, a sawtooth map (an extension of the Bernoulli bit
shift map) is employed to simultaneously explore the increasing and decreasing
modes of population behavior. We conclude with four generalizations. 1. At low
migration rates, a metapopulation may go extinct faster than completely
unconnected subpopulations. 2. There exists a gradient between stable
metapopulation formation and population synchrony, with critical transitions
from no metapopulation to metapopulation to synchronization, the latter
frequently inducing metapopulation extinction. 3. Synchronization patterns
emerge through time, resulting in synchrony groups and chimeric populations
existing simultaneously. 4. There are two distinct mechanisms of
synchronization: i) extinction and rescue, and ii) stretch reversals in a
modification of the classic chaotic stretching and folding.
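A minimal sketch of this kind of coupled-map experiment (the precise Allee-modified map and migration scheme in the paper may differ; the form below is an illustrative assumption):

```python
# Logistic-type patches with an Allee threshold, coupled by migration.
import numpy as np

def step(x, r=3.8, allee=0.15, m=0.05):
    # catastrophic extinction: a patch below the Allee point collapses to 0
    x = np.where(x > allee, r * x * (1 - x), 0.0)
    return (1 - m) * x + m * x.mean()  # migration mixes the patches

x = np.array([0.3, 0.6])  # two subpopulations
for _ in range(100):
    x = step(x)
print(x)  # large m can synchronize patches and push both below the Allee point
```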
|
Edge intelligence leverages computing resources on network edge to provide
artificial intelligence (AI) services close to network users. As it enables
fast inference and distributed learning, edge intelligence is envisioned to be
an important component of 6G networks. In this article, we investigate AI
service provisioning for supporting edge intelligence. First, we present the
features and requirements of AI services. Then, we introduce AI service data
management, and customize network slicing for AI services. Specifically, we
propose a novel resource pooling method to jointly manage service data and
network resources for AI services. A trace-driven case study demonstrates the
effectiveness of the proposed resource pooling method. Through this study, we
illustrate the necessity, challenge, and potential of AI service provisioning
on network edge.
|
We explore the degrees of freedom required to jointly fit projected and
redshift-space clustering of galaxies selected in three bins of stellar mass
from the Sloan Digital Sky Survey Main Galaxy Sample (SDSS MGS) using a subhalo
abundance matching (SHAM) model. We employ emulators for relevant clustering
statistics in order to facilitate our analysis, leading to large speed gains
with minimal loss of accuracy. We are able to simultaneously fit the projected
and redshift-space clustering of the two most massive galaxy samples that we
consider with just two free parameters: scatter in stellar mass at fixed SHAM
proxy and the dependence of the SHAM proxy on dark matter halo concentration.
We find some evidence for models that include velocity bias, but including
orphan galaxies improves our fits to the lower mass samples significantly. We
also model the clustering signals of specific star formation rate (SSFR)
selected samples using conditional abundance matching (CAM). We obtain
acceptable fits to projected and redshift-space clustering as a function of
SSFR and stellar mass using two CAM variants, although the fits are worse than
for stellar mass-selected samples alone. By incorporating non-unity
correlations between the CAM proxy and SSFR we are able to resolve previously
identified discrepancies between CAM predictions and SDSS observations of the
environmental dependence of quenching for isolated central galaxies.
|
Recently, AutoRegressive (AR) models for whole-image generation empowered
by transformers have achieved comparable or even better performance to
Generative Adversarial Networks (GANs). Unfortunately, directly applying such
AR models to edit/change local image regions, may suffer from the problems of
missing global information, slow inference speed, and information leakage of
local guidance. To address these limitations, we propose a novel model -- image
Local Autoregressive Transformer (iLAT), to better facilitate the locally
guided image synthesis. Our iLAT learns novel local discrete
representations via a newly proposed local autoregressive (LA) transformer with a
local attention mask and convolution mechanism. Thus, iLAT can efficiently
synthesize the local image regions by key guidance information. Our iLAT is
evaluated on various locally guided image syntheses, such as pose-guided person
image synthesis and face editing. Both the quantitative and qualitative results
show the efficacy of our model.
|
Microwave electromagnetic heating is widely used in many industrial
processes. The mathematics involved is based on Maxwell's equations coupled
with the heat equation. The thermal conductivity is strongly dependent on the
temperature, itself an unknown of the system of PDEs. We propose here a model
which simplifies this coupling using a nonlocal term as the source of heating.
We prove that the corresponding mathematical initial-boundary value problem has
solutions using Schauder's fixed point theorem.
|
Optical flow estimation under occlusion or large displacement is a challenging
problem due to the loss of corresponding pixels between consecutive frames. In
this paper, we discover that a large share of the motion features (more than
40%) computed from the popular discriminative cost-volume feature completely
vanishes due to invalid sampling, leading to low efficiency of optical flow
learning. We call this phenomenon the Vanishing Cost Volume Problem. Inspired
by the fact that local
motion tends to be highly consistent within a short temporal window, we propose
a novel iterative Motion Feature Recovery (MFR) method to address the vanishing
cost volume via modeling motion consistency across multiple frames. In each MFR
iteration, invalid entries from original motion features are first determined
based on the current flow. Then, an efficient network is designed to adaptively
learn the motion correlation to recover invalid features for lost-information
restoration. The final optical flow is then decoded from the recovered motion
features. Experimental results on Sintel and KITTI show that our method
achieves state-of-the-art performance. In fact, MFR currently ranks second on
the Sintel public website.
|
There is increasing pressure on lecturers to work toward two goals. First,
they need to ensure their undergraduate students have a good grasp of the
knowledge and skills of the intellectual field. In addition, they need to
prepare graduates and postgraduates for careers both within and outside of
academia. The problem addressed by this paper is how assessments may reveal a
shift of focus from a mastery of knowledge to a work-focused orientation. This
shift is examined through a case study of physics and the sub-discipline of
theoretical physics as intellectual fields. The evidence is assessment tasks
given to students at different points of their studies from first year to
doctoral levels. By examining and analysing the assessment tasks using concepts
from Legitimation Code Theory (LCT), we demonstrate how the shifts in the
assessments incrementally lead students from a pure disciplinary focus to one
that enables students to potentially pursue employment both within and outside
of academia. In doing so, we also highlight the usefulness of LCT as a
framework for evaluating the preparation of science students for diverse
workplaces.
|
The fabrication of a graphene-silicon (Gr-Si) junction involves the formation of
a parallel metal-insulator-semiconductor (MIS) structure, which is often
disregarded but plays an important role in the optoelectronic properties of the
device. In this work, the transfer of graphene onto a patterned n-type Si
substrate, covered by $Si_3N_4$, produces a Gr-Si device in which the parallel
MIS consists of a $Gr-Si_3N_4-Si$ structure surrounding the Gr-Si junction. The
Gr-Si device exhibits rectifying behavior with a rectification ratio up to
$10^4$. The investigation of its temperature behavior is necessary to
accurately estimate the Schottky barrier height at zero bias, ${\phi}_{b0}=0.24
eV$, the effective Richardson's constant, $A^*=7 \cdot 10^{-10}
AK^{-2}cm^{-2}$, and the diode ideality factor n=2.66 of the Gr-Si junction.
The device is operated as a photodetector in both photocurrent and photovoltage
mode in the visible and infrared (IR) spectral regions. A responsivity up to
350 mA/W and external quantum efficiency (EQE) up to 75% is achieved in the
500-1200 nm wavelength range. A decrease of responsivity to 0.4 mA/W and EQE to
0.03% is observed above 1200 nm, that is in the IR region beyond the silicon
optical bandgap, in which photoexcitation is driven by graphene. Finally, a
model based on two back-to-back diodes, one for the Gr-Si junction, the other
for the $Gr-Si_3N_4-Si$ MIS structure, is proposed to explain the electrical
behavior of the Gr-Si device.
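For context, the quoted parameters enter through the standard thermionic-emission diode law used to fit such junctions:

$$ I = A A^{*} T^{2} \exp\!\left(-\frac{q\phi_{b0}}{k_B T}\right) \left[\exp\!\left(\frac{qV}{n k_B T}\right) - 1\right], $$

with $A$ the junction area, $A^{*}$ the effective Richardson constant, $\phi_{b0}$ the zero-bias barrier height, and $n$ the ideality factor.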
|
We create an artificial system of agents (attention-based neural networks)
which selectively exchange messages with each other in order to study the
emergence of memetic evolution and how memetic evolutionary pressures interact
with genetic evolution of the network weights. We observe that the ability of
agents to exert selection pressures on each other is essential for memetic
evolution to bootstrap itself into a state which has both high-fidelity
replication of memes, as well as continuing production of new memes over time.
However, in this system there is very little interaction between this memetic
'ecology' and underlying tasks driving individual fitness - the emergent meme
layer appears to be neither helpful nor harmful to agents' ability to learn to
solve tasks. Source code for these experiments is available at
https://github.com/GoodAI/memes
|
Populism is a political phenomenon of democratic illiberalism centered on the
figure of a strong leader. By modeling person/node connections of prominent
figures of the recent Colombian political landscape we map, quantify, and
analyze the position and influence of Alvaro Uribe as a populist leader. We
found that Uribe is a central hub in the political alliance networks, cutting
through traditional party alliances, but is not the most central figure in
the state machinery.
The article first presents the framing of the problem, followed by the
historical context of the case under study, the methodology employed and data
collection, analysis, conclusions and further research paths. This study has
implications for offering a new way of applying quantitative methods to the
study of populist regimes.
|
The thermodynamic and structural properties of two dimensional dense Yukawa
liquids are studied with molecular dynamics simulations. The "exact"
thermodynamic properties are simultaneously employed in an advanced scheme for
the determination of an equation of state that shows an unprecedented level of
accuracy for the internal energy, pressure and isothermal compressibility. The
"exact" structural properties are utilized to formulate a novel empirical
correction to the hypernetted-chain approach that leads to a very high accuracy
level in terms of static correlations and thermodynamics.
|
We prove that a tournament and its complement contain the same number of
oriented Hamiltonian paths (resp. cycles) of any given type, as a
generalization of Rosenfeld's result proved for antidirected paths.
|
The binary alloy of titanium-tungsten (TiW) is an established diffusion
barrier in high-power semiconductor devices, owing to its ability to suppress
the diffusion of copper from the metallisation scheme into the surrounding
silicon substructure. However, little is known about the response of TiW to
high temperature events or its behaviour when exposed to air. Here, a combined
soft and hard X-ray photoelectron spectroscopy (XPS) characterisation approach
is used to study the influence of post-deposition annealing and titanium
concentration on the oxidation behaviour of a 300~nm-thick TiW film. The
combination of both XPS techniques allows for the assessment of the chemical
state and elemental composition across the surface and bulk of the TiW layer.
The findings show that in response to high-temperature annealing, titanium
segregates out of the mixed metal system and upwardly migrates, accumulating at
the TiW/air interface. Titanium shows remarkably rapid diffusion under
relatively short annealing timescales and the extent of titanium surface
enrichment is increased through longer annealing periods or by increasing the
precursor titanium concentration. Surface titanium enrichment enhances the
extent of oxidation both at the surface and in the bulk of the alloy due to the
strong gettering ability of titanium. Quantification of the soft X-ray
photoelectron spectra highlights the formation of three tungsten oxidation
environments, attributed to WO$_2$, WO$_3$ and a WO$_3$ oxide coordinated with
a titanium environment. This combinatorial characterisation approach provides
valuable insights into the thermal and oxidation stability of TiW alloys from
two depth perspectives, aiding the development of future device technologies.
|
We consider topological defects for the $\lambda\phi^4$ theory in (1+1)
dimensions with a Lorentz-violating background.
It has been shown by M. Barreto et al. (2006) \cite{barreto2006defect} that one
cannot have original effects in (the leading order of) a single scalar field
model.
Here, we introduce a new Lorentz-violating term, next to leading order which
cannot be absorbed by any redefinition of the scalar field or coordinates.
Our term is the lowest order term which leads to concrete effects on the kink
properties. We calculate the corrections to the kink shape and the
corresponding mass. Quantization of the kink is performed and the revised modes
are obtained. We find that the bound and continuum states are affected by this
Lorentz symmetry violation.
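For orientation, the unperturbed $\lambda\phi^4$ kink to which these corrections apply is the standard solution for the potential $V(\phi) = \frac{\lambda}{4}(\phi^2 - v^2)^2$:

$$ \phi_K(x) = v \tanh\!\left(\sqrt{\frac{\lambda}{2}}\, v x\right), \qquad M_K = \frac{2\sqrt{2}}{3} \sqrt{\lambda}\, v^3. $$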
|
We show that the observed primordial perturbations can be entirely sourced by
a light spectator scalar field with a quartic potential, akin to the Higgs
boson, provided that the field is sufficiently displaced from vacuum during
inflation. The framework relies on the indirect modulation of reheating, which
is implemented without any direct coupling between the spectator field and the
inflaton and does not require non-renormalisable interactions. The scenario
gives rise to local non-Gaussianity with $f_{\rm NL}\simeq 5$ as the typical
signal. As an example model where the indirect modulation mechanism is realised
for the Higgs boson, we study the Standard Model extended with right-handed
neutrinos. For the Standard Model running we find, however, that the scenario
analysed does not seem to produce the observed perturbation.
|
We present mid-infrared observations of comet P/2016 BA14 (PANSTARRS), which
were obtained on UT 2016 March 21.3 at heliocentric and geocentric distances of
1.012 au and 0.026 au, respectively, approximately 30 hours before its closest
approach to Earth (0.024 au) on UT 2016 March 22.6. Low-resolution
($\lambda$/$\Delta \lambda$~250) spectroscopic observations in the N-band and
imaging observations with four narrow-band filters (centered at 8.8, 12.4, 17.7
and 18.8 $\mu$m) in the N- and Q-bands were obtained using the Cooled
Mid-Infrared Camera and Spectrometer (COMICS) mounted on the 8.2-m Subaru
telescope atop Maunakea, Hawaii. The observed spatial profiles of P/2016 BA14
at different wavelengths are consistent with a point-spread function. Owing to
the close approach of the comet to the Earth, the observed thermal emission
from the comet is dominated by the thermal emission from its nucleus rather
than its dust coma. The observed spectral energy distribution of the nucleus at
mid-infrared wavelengths is consistent with a Planck function at temperature
T~350 K, with the effective diameter of P/2016 BA14 estimated as ~0.8 km (by
assuming an emissivity of 0.97). The normalized emissivity spectrum of the
comet exhibits absorption-like features that are not reproduced by the
anhydrous minerals typically found in cometary dust coma, such as olivine and
pyroxene. Instead, the spectral features suggest the presence of large grains
of phyllosilicate minerals and organic materials. Thus, our observations
indicate that an inactive small body covered with these processed materials is
a possible end state of comets.
|
Time-domain reflectometry (TDR) is an established means of measuring
impedance inhomogeneity of a variety of waveguides, providing critical data
necessary to characterize and optimize the performance of high-bandwidth
computational and communication systems. However, TDR systems with both the
high spatial resolution (sub-cm) and voltage resolution (sub-$\mu$V) required
to evaluate high-performance waveguides are physically large and often
cost-prohibitive, severely limiting their utility as testing platforms and
greatly limiting their use in characterizing and trouble-shooting fielded
hardware.
Consequently, there exists a growing technical need for an electronically
simple, portable, and low-cost TDR technology. The receiver of a TDR system
plays a key role in recording reflection waveforms; thus, such a receiver must
have high analog bandwidth, high sampling rate, and high-voltage resolution.
However, these requirements are difficult to meet using low-cost
analog-to-digital converters (ADCs). This article describes a new TDR
architecture, namely, jitter-based APC (JAPC), which obviates the need for
external components, based on the recently proposed alternative concept of
analog-to-probability conversion (APC). These results demonstrate that a
fully reconfigurable and highly integrated TDR (iTDR) can be implemented on a
field-programmable gate array (FPGA) chip without using any external circuit
components. Empirical evaluation of the system was conducted using an HDMI
cable as the device under test (DUT), and the resulting impedance inhomogeneity
pattern (IIP) of the DUT was extracted with spatial and voltage resolutions of
5 cm and 80 $\mu$V, respectively. These results demonstrate the feasibility of
using the prototypical JAPC-based iTDR for real-world waveguide
characterization applications.
|
Silicon nitride (SiN) waveguides with ultra-low optical loss enable
integrated photonic applications including low noise, narrow linewidth lasers,
chip-scale nonlinear photonics, and microwave photonics. Lasers are key
components to SiN photonic integrated circuits (PICs), but are difficult to
fully integrate with low-index SiN waveguides due to their large mismatch with
the high-index III-V gain materials. The recent demonstration of multilayer
heterogeneous integration provides a practical solution and enabled the
first generation of lasers fully integrated with SiN waveguides. However, a
laser with high device yield and high output power at telecommunication
wavelengths, where photonics applications are clustered, is still missing,
hindered by large mode transition loss, nonoptimized cavity design, and a
complicated fabrication process. Here, we report high-performance lasers on SiN
with tens of milliwatts output through the SiN waveguide and sub-kHz
fundamental linewidth, addressing all of the aforementioned issues. We also
show Hertz-level linewidth lasers are achievable with the developed integration
techniques. These lasers, together with high-$Q$ SiN resonators, mark a
milestone towards a fully-integrated low-noise silicon nitride photonics
platform. This laser should find potential applications in LIDAR, microwave
photonics and coherent optical communications.
|
In this paper, we are tackling the proposal-free referring expression
grounding task, aiming at localizing the target object according to a query
sentence, without relying on off-the-shelf object proposals. Existing
proposal-free methods employ a query-image matching branch to select the
highest-score point in the image feature map as the target box center, with its
width and height predicted by another branch. Such methods, however, fail to
utilize the contextual relation between the target and reference objects, and
lack interpretability on its reasoning procedure. To solve these problems, we
propose an iterative shrinking mechanism to localize the target, where the
shrinking direction is decided by a reinforcement learning agent, with all
contents within the current image patch comprehensively considered. Besides,
the sequential shrinking process makes it possible to demonstrate the reasoning
about how to iteratively find the target. Experiments show that the proposed method boosts
the accuracy by 4.32% against the previous state-of-the-art (SOTA) method on
the RefCOCOg dataset, where query sentences are long and complex, with many
targets referred by other reference objects.
|
This paper considers joint device activity detection and channel estimation
in Internet of Things (IoT) networks, where a large number of IoT devices exist
but merely a random subset of them become active for short-packet transmission
at each time slot. In particular, we propose to leverage the temporal
correlation in user activity, i.e., a device active at the previous time slot
is more likely to be still active at the current moment, to improve the
detection performance. Despite the temporally-correlated user activity in
consecutive time slots, it is challenging to unveil the connection between the
activity pattern estimated previously, which is imperfect but the only
available side information (SI), and the true activity pattern at the current
moment due to the unknown estimation error. In this work, we manage to tackle
this challenge under the framework of approximate message passing (AMP).
Specifically, thanks to the state evolution, the correlation between the
activity pattern estimated by AMP at the previous time slot and the real
activity pattern at the previous and current moment is quantified explicitly.
Based on the well-defined temporal correlation, we further manage to embed this
useful SI into the design of the minimum mean-squared error (MMSE) denoisers
and log-likelihood ratio (LLR) test based activity detectors under the AMP
framework. Theoretical comparison between the SI-aided AMP algorithm and its
counterpart without utilizing temporal correlation is provided. Moreover,
numerical results are given to show the significant gain in activity detection
accuracy brought by the SI-aided algorithm.
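As a point of reference, a generic AMP iteration for this kind of sparse activity recovery, with soft thresholding standing in for the MMSE denoiser and no side information; the paper's SI-aided denoisers and LLR-based activity test would replace the thresholding step:

```python
import numpy as np

def soft(u, t):                                   # soft-thresholding denoiser
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, n_iter=30, alpha=2.0):
    M, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                           # pseudo-data: signal plus noise
        tau = alpha * np.linalg.norm(z) / np.sqrt(M)
        x_new = soft(r, tau)
        z = y - A @ x_new + (N / M) * np.mean(x_new != 0) * z  # Onsager term
        x = x_new
    return x

rng = np.random.default_rng(1)
M, N, K = 60, 200, 10                             # 10 of 200 devices active
A = rng.normal(size=(M, N)) / np.sqrt(M)
x_true = np.zeros(N); x_true[rng.choice(N, K, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=M)
print(int(np.sum(amp(y, A) > 0.5)), "devices detected as active")
```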
|
Quantum chromodynamics (QCD) claims that the major source of the nucleon
invariant mass is not the Higgs mechanism but the trace anomaly in QCD energy
momentum tensor. Although experimental and theoretical results support this
conclusion, a direct demonstration is still absent. We present the first
Lattice QCD calculation of the quark and gluon trace anomaly contributions to
the hadron masses, using the overlap fermion on the 2+1 flavor dynamical Domain
wall quark ensemble at $m_{\pi}=340$ MeV and lattice spacing $a=$0.1105 fm. The
result shows that the gluon trace anomaly contributes most of the nucleon
mass, and that its contribution in the pion state is smaller than in the other
hadrons by nearly a factor of $\sim$10, since the gluon trace anomaly density
inside the pion differs from that in the other hadrons and its magnitude is
much smaller. The gluon trace anomaly coefficient we obtain,
$\beta/g^3=-0.056(6)$, agrees perfectly with its regularization-independent
leading-order value $(-11+\frac{2N_f}{3})/(4\pi)^2$.
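As a quick arithmetic check of that comparison, assuming $N_f=3$ light flavors as appropriate for the 2+1-flavor ensemble:

```python
import math

# Leading-order, regularization-independent trace anomaly coefficient
# (-11 + 2*N_f/3) / (4*pi)^2 evaluated at N_f = 3 light flavors.
N_f = 3
print(round((-11 + 2 * N_f / 3) / (4 * math.pi) ** 2, 4))
# -0.057, indeed compatible with the lattice value -0.056(6)
```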
|
Massive black hole binaries (MBHBs) with masses of ~10^4 to ~10^10 solar
masses are one of the main targets for currently operating and forthcoming
space-borne gravitational wave observatories. In this paper, we explore the
effect of the stellar host rotation on the bound binary hardening efficiency,
driven by three-body stellar interactions. As seen in previous studies, we find
that the centre of mass (CoM) of a prograde MBHB embedded in a rotating
environment starts moving on a nearly circular orbit about the centre of the
system shortly after the MBHB binding. In our runs, the oscillation radius is
approximately 0.25 (approximately 0.1) times the binary influence radius for
equal mass MBHBs (MBHBs with mass ratio 1:4). Conversely, retrograde binaries
remain anchored about the centre of the host. The binary shrinking rate is
twice as fast when the binary CoM exhibits a net orbital motion, owing to a
more efficient loss cone repopulation even in our spherical stellar systems. We
develop a model that captures the CoM oscillations of prograde binaries; we
argue that the CoM angular momentum gain per time unit scales with the internal
binary angular momentum, so that most of the displacement is induced by stellar
interactions occurring around the time of MBHB binding, while the subsequent
angular momentum enhancement gets eventually quashed by the effect of dynamical
friction. The effect of the background rotation on the MBHB evolution may be
relevant for LISA sources, which are expected to form in significantly rotating
stellar systems.
|
Time-frequency masking or spectrum prediction computed via short symmetric
windows are commonly used in low-latency deep neural network (DNN) based source
separation. In this paper, we propose the usage of an asymmetric
analysis-synthesis window pair which allows for training with targets with
better frequency resolution, while retaining the low-latency during inference
suitable for real-time speech enhancement or assisted hearing applications. In
order to assess our approach across various model types and datasets, we
evaluate it with both a speaker-independent deep clustering (DC) model and a
speaker-dependent mask inference (MI) model. We report an improvement in
separation performance of up to 1.5 dB in terms of source-to-distortion ratio
(SDR) while maintaining an algorithmic latency of 8 ms.
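A hedged sketch of one way such an asymmetric analysis-synthesis pair can be built; the flank lengths and window shapes below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

# Illustrative construction: an asymmetric analysis window with a long rising
# flank for frequency resolution and a short falling flank so the look-ahead,
# and hence latency, stays short.
N_long, N_short = 512, 64            # assumed frame and short-flank lengths
rise = np.hanning(2 * (N_long - N_short))[: N_long - N_short]   # slow attack
fall = np.hanning(2 * N_short)[N_short:]                        # fast decay
analysis = np.concatenate([rise, fall])                         # length N_long

# A short synthesis window confined to the final samples keeps the algorithmic
# latency near N_short samples (e.g., 64 samples at 8 kHz gives 8 ms).
synthesis = np.zeros(N_long)
synthesis[-2 * N_short:] = np.hanning(2 * N_short)
print(analysis.shape, synthesis.shape)
```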
|
Gravitational waves (GWs) provide unobscured insight into the birthplace of
neutron stars (NSs) and black holes in core-collapse supernovae (CCSNe). The
nuclear equation of state (EOS) describing these dense environments is yet
uncertain, and variations in its prescription affect the proto-neutron star
(PNS) and the post-bounce dynamics in CCSNe simulations, subsequently impacting
the GW emission. We perform axisymmetric simulations of CCSNe with Skyrme-type
EOSs to study how the GW signal and PNS convection zone are impacted by two
experimentally accessible EOS parameters, (1) the effective mass of nucleons,
$m^\star$, which is crucial in setting the thermal dependence of the EOS, and
(2) the isoscalar incompressibility modulus, $K_{\rm{sat}}$. While
$K_{\rm{sat}}$ shows little impact, the peak frequency of the GWs has a strong
effective mass dependence due to faster contraction of the PNS for higher
values of $m^\star$ owing to a decreased thermal pressure. These more compact
PNSs also exhibit more neutrino heating, which drives earlier explosions and
correlates with the GW amplitude via accretion plumes striking the PNS,
exciting the oscillations. We investigate the spatial origin of the GWs and
show the agreement between a frequency-radial distribution of the GW emission
and a perturbation analysis. We do not rule out overshoot from below via PNS
convection as another moderately strong excitation mechanism in our
simulations. We also study the combined effect of effective mass and rotation.
In all our simulations we find evidence for a power gap near $\sim$1250 Hz; we
investigate its origin and report its EOS dependence.
|
NASA's Great Observatories have opened up the electromagnetic spectrum from
space, providing sustained access to wavelengths not accessible from the
ground. Together, Hubble, Compton, Chandra, and Spitzer have provided the
scientific community with an agile and powerful suite of telescopes with which
to attack broad scientific questions, and react to a rapidly changing
scientific landscape. As the existing Great Observatories age, or are
decommissioned, community access to these wavelengths will diminish, with an
accompanying loss of scientific capability. This report, commissioned by the
NASA Cosmic Origins, Physics of the Cosmos and Exoplanet Exploration Program
Analysis Groups (PAGs), analyzes the importance of multi-wavelength
observations from space during the epoch of the Great Observatories, providing
examples that span a broad range of astrophysical investigations.
|
We present a parametric family of semi-implicit second-order accurate
numerical methods for non-conservative and conservative advection equations,
for which the numerical solutions can be obtained in a fixed number of forward
and backward alternating substitutions. The methods use a novel combination of
implicit and explicit time discretizations in the one-dimensional case and the
Strang splitting method in the multi-dimensional case. The methods are
described for advection equations with a continuous variable velocity that can
change its sign inside the computational domain. The methods are
unconditionally stable in the non-conservative case for variable velocity and
for a variable numerical parameter. Several numerical experiments confirm the
advantages of the presented methods, including the use of differentiable
programming to find optimized values of the variable numerical parameter.
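To illustrate the mechanism, a first-order sketch (not the paper's second-order scheme): an implicit upwind step for u_t + a u_x = 0 with a > 0 reduces to a single forward substitution, since the implicit matrix is lower bidiagonal; for a < 0 a backward sweep plays the same role:

```python
import numpy as np

# Implicit upwind step solved by one O(n) forward substitution; it is
# unconditionally stable for any time step, at first-order accuracy only.
def implicit_upwind_step(u, a, dt, dx):
    nu = a * dt / dx                      # Courant number; any size is stable
    v = u.copy()                          # v[0] kept fixed as inflow value
    for i in range(1, len(u)):            # forward substitution sweep
        v[i] = (u[i] + nu * v[i - 1]) / (1.0 + nu)
    return v

x = np.linspace(0, 1, 200)
u = np.exp(-200 * (x - 0.3) ** 2)         # Gaussian pulse advected rightward
for _ in range(50):
    u = implicit_upwind_step(u, a=1.0, dt=0.01, dx=x[1] - x[0])
print(x[np.argmax(u)])                    # peak has drifted right, toward x ~ 0.8
```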
|
Change detection (CD) in remote sensing images has been an ever-expanding
area of research. To date, although many methods have been proposed using
various techniques, accurately identifying changes is still a great challenge,
especially in high-resolution or heterogeneous situations, due to the
difficulties in effectively modeling the features from ground objects with
different patterns. In this paper, a novel CD method based on the graph
convolutional network (GCN) and multiscale object-based technique is proposed
for both homogeneous and heterogeneous images. First, the object-wise high
level features are obtained through a pre-trained U-net and the multiscale
segmentations. Treating each parcel as a node, the graph representations can be
formed and then fed into the proposed multiscale graph convolutional network
with each channel corresponding to one scale. The multiscale GCN propagates the
label information from a small number of labeled nodes to the remaining
unlabeled ones. Further, to comprehensively incorporate the information from the
output channels of multiscale GCN, a fusion strategy is designed using the
father-child relationships between scales. Extensive experiments on optical,
SAR, and heterogeneous optical/SAR data sets demonstrate that the proposed
method outperforms some state-of-the-art methods in both qualitative and
quantitative evaluations. Besides, the influences of some factors are also
discussed.
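For reference, a single graph-convolution layer in the standard normalized form, shown as a generic stand-in for one scale channel of the proposed multiscale GCN; the toy sizes are not the paper's configuration:

```python
import numpy as np

# One GCN layer in the form H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
# where parcel features are the node rows of H.
def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)  # toy parcel adjacency
A = np.triu(A, 1); A = A + A.T                # symmetric, no self-loops yet
H = rng.normal(size=(6, 8))                   # 8-dim features per parcel/node
W = rng.normal(size=(8, 4))
print(gcn_layer(A, H, W).shape)               # (6, 4)
```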
|
The objective of this study was to investigate the importance of multiple
county-level features in the trajectory of COVID-19. We examined feature
importance across 2,787 counties in the United States using a data-driven
machine learning model. We trained random forest models using 23 features
representing six key influencing factors affecting pandemic spread: social
demographics of counties, population activities, mobility within the counties,
movement across counties, disease attributes, and social network structure.
Also, we categorized counties into multiple groups according to their
population densities, and we divided the trajectory of COVID-19 into three
stages: the outbreak stage, the social distancing stage, and the reopening
stage. The study aims to answer two research questions: (1) The extent to which
the importance of heterogeneous features evolves in different stages; (2) The
extent to which the importance of heterogeneous features varies across counties
with different characteristics. We fitted a set of random forest models to
determine weekly feature importance. The results showed that: (1) Social
demographic features, such as gross domestic product, population density, and
minority status, remained highly important throughout the stages of
COVID-19 across the 2,787 studied counties; (2) Within-county mobility features
had the highest importance in county clusters with higher population densities;
(3) The feature reflecting the social network structure (the Facebook social
connectedness index) had higher importance in the models for counties with
higher population densities. The results show that the data-driven machine
learning models could provide important insights to inform policymakers
regarding feature importance for counties with various population densities and
in different stages of a pandemic life cycle.
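A minimal sketch of this kind of importance analysis with scikit-learn, using synthetic data in place of the 23 county-level features and the observed case trajectories:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Fit a random forest for one time window and read off impurity-based
# feature importances; features 0 and 3 drive the synthetic target here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 23))                 # 500 county-weeks, 23 features
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(model.feature_importances_)[::-1]
print(ranked[:3])                              # features 0 and 3 should lead
```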
|
A fuzzy programming approach is used in this article for solving the piece
selection problem in a P2P network with multiple objectives, in which some of
the factors are fuzzy in nature. The piece selection problem is formulated as a
fuzzy mixed integer goal programming problem with the primary goals of
minimizing the download cost and download time and maximizing speed and useful
information transmission, subject to realistic constraints regarding peers'
demand, capacity, dynamicity, etc. The proposed approach has the ability to
handle practical situations in a fuzzy environment and offers a better decision
tool for the piece selection decision in a dynamic, decentralized P2P network.
An extensive simulation is carried out to demonstrate the effectiveness of the
proposed model.
|
We study the properties of a conformal field theory (CFT) driven
periodically with a continuous protocol characterized by a frequency
$\omega_D$. Such a drive, in contrast to its discrete counterparts (such as
square pulses or periodic kicks), does not admit exact analytical solution for
the evolution operator $U$. In this work, we develop a Floquet perturbation
theory which provides an analytic, albeit perturbative, result for $U$ that
matches exact numerics in the large drive amplitude limit. We find that the
drive yields the well-known heating (hyperbolic) and non-heating (elliptic)
phases separated by transition lines (parabolic phase boundary). Using this and
starting from a primary state of the CFT, we compute the return probability
($P_n$), equal ($C_n$) and unequal ($G_n$) time two-point primary correlators,
energy density ($E_n$), and the $m^{\rm th}$ Rényi entropy ($S_n^m$) after $n$
drive cycles. Our results show that below a crossover stroboscopic time scale
$n_c$, $P_n$, $E_n$ and $G_n$ exhibit universal power-law behavior as the
transition is approached either from the heating or the non-heating phase; this
crossover scale diverges at the transition. We also study the emergent spatial
structure of $C_n$, $G_n$ and $E_n$ for the continuous protocol and find
emergence of spatial divergences of $C_n$ and $G_n$ in both the heating and
non-heating phases. We express our results for $S_n^m$ and $C_n$ in terms of
conformal blocks and provide analytic expressions for these quantities in
several limiting cases. Finally, we relate our results to those obtained from
exact numerics of a driven lattice model.
|
We comment on recent claims that recoil in the final stages of Hawking
evaporation gives black hole remnants large velocities, rendering them inviable
as a dark matter candidate. We point out that due to cosmic expansion, such
large velocities at the final stages of evaporation are not in tension with the
cold dark matter paradigm so long as they are attained at sufficiently early
times. In particular, the predicted recoil velocities are robustly compatible
with observations if the remnants form before the epoch of big bang
nucleosynthesis, a requirement which is already imposed by the physics of
nucleosynthesis itself.
|
We present reconstructed convergence maps, \textit{mass maps}, from the Dark
Energy Survey (DES) third year (Y3) weak gravitational lensing data set. The
mass maps are weighted projections of the density field (primarily dark matter)
in the foreground of the observed galaxies. We use four reconstruction methods,
each of which is a \textit{maximum a posteriori} estimate with a different model for the
prior probability of the map: Kaiser-Squires, null B-mode prior, Gaussian
prior, and a sparsity prior. All methods are implemented on the celestial
sphere to accommodate the large sky coverage of the DES Y3 data. We compare the
methods using realistic $\Lambda$CDM simulations with mock data that are
closely matched to the DES Y3 data. We quantify the performance of the methods
at the map level and then apply the reconstruction methods to the DES Y3 data,
performing tests for systematic error effects. The maps are compared with
optical foreground cosmic-web structures and are used to evaluate the lensing
signal from cosmic-void profiles. The recovered dark matter map covers the
largest sky fraction of any galaxy weak lensing map to date.
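For orientation, a flat-sky Kaiser-Squires sketch; the DES Y3 maps themselves are built with curved-sky implementations, so this is the planar analogue only:

```python
import numpy as np

# In Fourier space, shear and convergence satisfy gamma_hat = D * kappa_hat
# with D = (l1^2 - l2^2 + 2i l1 l2)/l^2 and |D| = 1, so kappa is recovered
# by multiplying the shear transform by conj(D).
def kaiser_squires(gamma, pixel_size=1.0):
    n = gamma.shape[0]
    l1, l2 = np.meshgrid(np.fft.fftfreq(n, pixel_size),
                         np.fft.fftfreq(n, pixel_size), indexing="ij")
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                            # avoid division by zero at l=0
    D = (l1**2 - l2**2 + 2j * l1 * l2) / l_sq
    kappa_hat = np.conj(D) * np.fft.fft2(gamma)
    kappa_hat[0, 0] = 0.0                       # mean convergence unconstrained
    return np.fft.ifft2(kappa_hat)

rng = np.random.default_rng(0)
gamma = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
kappa = kaiser_squires(gamma)
print(kappa.real.shape)  # E-mode map; kappa.imag holds the B-mode part
```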
|
In this paper we prove a H\"older regularity estimate for viscosity solutions
of inhomogeneous equations governed by the infinite Laplace operator relative
to a frame of vector fields.
|
Recent experiments (Guan et al. 2016a,b) showed many interesting phenomena in
dynamic contact angle hysteresis, while a complete theoretical interpretation
is still lacking. In this work, we study the time averaging of the
apparent advancing and receding contact angles on surfaces with periodic
chemical patterns. We first derive a Cox-type boundary condition for the
apparent dynamic contact angle on homogeneous surfaces using the Onsager
variational principle. Based on this condition, we propose a reduced model for
some typical moving contact line problems on chemically inhomogeneous surfaces
in two dimensions. Multiscale expansion and averaging techniques are employed
to approximate the model for asymptotically small chemical patterns. We obtain
a quantitative formula for the averaged dynamic contact angles. It gives
explicitly how the advancing and receding contact angles depend on the velocity
and the chemical inhomogeneity of the substrate. The formula is a
coarse-graining version of the Cox-type boundary condition on inhomogeneous
surfaces. Numerical simulations are presented to validate the analytical
results. The numerical results also show that the formula characterizes very
well the complicated behaviour of dynamic contact angle hysteresis observed in
the experiments.
|
Analysing research trends and predicting their impact on academia and
industry is crucial to gain a deeper understanding of the advances in a
research field and to inform critical decisions about research funding and
technology adoption. In recent years, we have seen the emergence of several
publicly-available and large-scale Scientific Knowledge Graphs fostering the
development of many data-driven approaches for performing quantitative analyses
of research trends. This chapter presents an innovative framework for
detecting, analysing, and forecasting research topics based on a large-scale
knowledge graph characterising research articles according to the research
topics from the Computer Science Ontology. We discuss the advantages of a
solution based on a formal representation of topics and describe how it was
applied to produce bibliometric studies and innovative tools for analysing and
predicting research dynamics.
|
Fish target detection suffers from the lack of a good-quality data set, and it
is difficult for detection algorithms to achieve real-time detection with low
power consumption on embedded devices while balancing calculation speed and
identification ability. To this end, this paper collects and annotates a data
set named "Aquarium Fish" of 84 fish species containing 10042 images, and,
based on this data set, proposes a multi-scale input fast fish target detection
network (BTP-yoloV3) and its optimization method. The experiment uses depthwise
convolution to redesign the backbone of the yoloV4 network, which reduces the
amount of calculation by 94.1%, with a test accuracy of 92.34%. Then, the
training is enhanced with MixUp, CutMix, and mosaic augmentation to increase
the test accuracy by 1.27%; finally, the mish, swish, and ELU activation
functions increase the test accuracy by a further 0.76%. As a result, the
accuracy of testing the network with 2000 fish images reached 94.37%, and the
computational complexity of the network was only 5.47 BFLOPS. We compare
against the YoloV3/V4, MobileNetV2-yoloV3, and YoloV3-tiny networks trained by
transfer learning on this data set. The results show that BTP-Yolov3 has
smaller model parameters, faster calculation speed, and lower energy
consumption during operation while maintaining calculation accuracy, providing
a useful reference for the practical application of neural networks.
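A sketch of the depthwise-separable building block behind such backbone slimming; the layer sizes and the Mish placement are illustrative assumptions, not the actual BTP-yoloV3 configuration:

```python
import torch
import torch.nn as nn

# A per-channel (depthwise) 3x3 convolution followed by a 1x1 pointwise
# convolution, replacing one dense 3x3 convolution to cut computation.
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, kernel_size=3, stride=stride,
                                   padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.Mish()          # one of the activations the paper tests

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(32, 64)
print(block(torch.randn(1, 32, 80, 80)).shape)  # torch.Size([1, 64, 80, 80])
```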
|
We describe the ongoing Relativistic Binary programme (RelBin), a part of the
MeerTime large survey project with the MeerKAT radio telescope. RelBin is
primarily focused on observations of relativistic effects in binary pulsars to
enable measurements of neutron star masses and tests of theories of gravity. We
selected 25 pulsars as an initial high priority list of targets based on their
characteristics and observational history with other telescopes. In this paper,
we provide an outline of the programme and present polarisation-calibrated
pulse profiles for all selected pulsars as a reference catalogue, along with updated
dispersion measures. We report Faraday rotation measures for 24 pulsars, twelve
of which have been measured for the first time. More than a third of our
selected pulsars show a flat position angle swing confirming earlier
observations. We demonstrate the ability of the Rotating Vector Model (RVM),
fitted here to seven binary pulsars, including the Double Pulsar (PSR
J0737$-$3039A), to obtain information about the orbital inclination angle. We
present a high time resolution light curve of the eclipse of PSR J0737$-$3039A
by the companion's magnetosphere, a high-phase resolution position angle swing
for PSR J1141$-$6545, an improved detection of the Shapiro delay of PSR
J1811$-$2405, and pulse scattering measurements for PSRs J1227$-$6208,
J1757$-$1854, and J1811$-$1736. Finally, we demonstrate that timing
observations with MeerKAT improve on existing data sets by a factor of,
typically, 2-3, sometimes by an order of magnitude.
|
Device-edge co-inference opens up new possibilities for resource-constrained
wireless devices (WDs) to execute deep neural network (DNN)-based applications
with heavy computation workloads. In particular, the WD executes the first few
layers of the DNN and sends the intermediate features to the edge server that
processes the remaining layers of the DNN. By adapting the model splitting
decision, there exists a tradeoff between local computation cost and
communication overhead. In practice, the DNN model is re-trained and updated
periodically at the edge server. Once the DNN parameters are regenerated, part
of the updated model must be placed at the WD to facilitate on-device
inference. In this paper, we study the joint optimization of the model
placement and online model splitting decisions to minimize the energy-and-time
cost of device-edge co-inference in the presence of wireless channel fading. The
problem is challenging because the model placement and model splitting
decisions are strongly coupled, while involving two different time scales. We
first tackle online model splitting by formulating an optimal stopping problem,
where the finite horizon of the problem is determined by the model placement
decision. In addition to deriving the optimal model splitting rule based on
backward induction, we further investigate a simple one-stage look-ahead rule,
for which we are able to obtain analytical expressions of the model splitting
decision. The analysis is useful for us to efficiently optimize the model
placement decision in a larger time scale. In particular, we obtain a
closed-form model placement solution for the fully-connected multilayer
perceptron with equal neurons. Simulation results validate the superior
performance of the joint optimal model placement and splitting with various DNN
structures.
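A stripped-down backward-induction sketch for a finite-horizon optimal stopping problem of this shape; the per-stage costs are synthetic stand-ins for the local-computation and transmission costs under fading, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 6                                        # horizon: candidate split points
stop_cost = rng.uniform(1.0, 3.0, size=T)    # synthetic cost of splitting at t
cont_cost = 0.2                              # synthetic cost of continuing

V = np.empty(T)
action = ["stop"] * T
V[-1] = stop_cost[-1]                        # must split at the last stage
for t in range(T - 2, -1, -1):               # backward induction
    go = cont_cost + V[t + 1]
    V[t] = min(stop_cost[t], go)
    action[t] = "stop" if stop_cost[t] <= go else "go"
print(round(V[0], 3), action)
```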
|
We use Feireisl-Lions theory to deduce the existence of weak solutions to a
system describing the dynamics of a linear oscillator containing a Newtonian
compressible fluid. The appropriate Navier-Stokes equation is considered on a
domain whose movement has one degree of freedom. The equation is paired with
the Newton law and we assume a no-slip boundary condition.
|
Wavelength transduction of single-photon signals is indispensable to
networked quantum applications, particularly those incorporating quantum
memories. Lithium niobate nanophotonic devices have demonstrated favorable
linear, nonlinear, and electro-optical properties to deliver this crucial
function while offering superior efficiency, integrability, and scalability.
Yet, their quantum noise level--a crucial metric for any single-photon based
application--has yet to be understood. In this work, we report the first study
focused on telecom to near-visible conversion driven by a telecom pump of
small detuning, for practical considerations in distributed quantum processing
over fiber networks. Our results find the noise level to be on the order of
$10^{-4}$ photons per time-frequency mode for high conversion, allowing
faithful pulsed operations. By carefully analyzing the origins of such noise
and the dependence of each on the pump power and wavelength detuning, we have
also identified a formula for noise suppression to $10^{-5}$ photons per
mode. Our results assert a viable, low-cost, and modular approach to networked
quantum processing and beyond using lithium niobate nanophotonics.
|
Walking while using a smartphone is becoming a major pedestrian safety
concern as people may unknowingly bump into various obstacles that could lead
to severe injuries. In this paper, we propose ObstacleWatch, an acoustic-based
obstacle collision detection system to improve the safety of pedestrians who
are engaged in smartphone usage while walking. ObstacleWatch leverages the
advanced audio hardware of the smartphone to sense the surrounding obstacles
and infers fine-grained information about the frontal obstacle for collision
detection. In particular, our system emits well-designed inaudible beep signals
from the smartphone built-in speaker and listens to the reflections with the
stereo recording of the smartphone. By analyzing the reflected signals received
at two microphones, ObstacleWatch is able to extract fine-grained information
of the frontal obstacle including the distance, angle, and size for detecting
the possible collisions and to alert users. Our experimental evaluation under
two real-world environments with different types of phones and obstacles shows
that ObstacleWatch achieves over 92% accuracy in predicting obstacle collisions
with distance estimation errors at about 2 cm. Results also show that
ObstacleWatch is robust to different sizes of objects and is compatible with
different phone models with low energy consumption.
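The core ranging arithmetic such systems rely on is compact enough to state directly; the numbers below are illustrative, not ObstacleWatch's actual parameters:

```python
import math

SPEED_OF_SOUND = 343.0                   # m/s in air

def distance_from_delay(round_trip_s):   # d = c * dt / 2 for an echo
    return SPEED_OF_SOUND * round_trip_s / 2.0

def angle_from_tdoa(tdoa_s, mic_spacing_m=0.12):
    # far-field approximation: path difference = spacing * sin(angle)
    s = max(-1.0, min(1.0, SPEED_OF_SOUND * tdoa_s / mic_spacing_m))
    return math.degrees(math.asin(s))

print(distance_from_delay(0.0116))       # ~1.99 m to the obstacle
print(angle_from_tdoa(0.0002))           # ~34.9 degrees off-axis
```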
|
In this work we look into adding a new language to a multilingual NMT system
in an unsupervised fashion. Utilizing pre-trained cross-lingual word
embeddings, we seek to exploit a language-independent multilingual sentence
representation to easily generalize to a new language. While using
cross-lingual embeddings for word lookup we decode from a yet entirely unseen
source language in a process we call blind decoding. Blindly decoding from
Portuguese using a base system containing several Romance languages, we achieve
scores of 36.4 BLEU for Portuguese-English and 12.8 BLEU for Russian-English.
In an attempt to train the mapping from the encoder sentence representation to
a new target language we use our model as an autoencoder. Merely training to
translate from Portuguese to Portuguese while freezing the encoder we achieve
26 BLEU on English-Portuguese, and up to 28 BLEU when adding artificial noise
to the input. Lastly we explore a more practical adaptation approach through
non-iterative backtranslation, exploiting our model's ability to produce high
quality translations through blind decoding. This yields up to 34.6 BLEU on
English-Portuguese, attaining near parity with a model adapted on real
bilingual data.
|
We report here on experiments and simulations examining the effect of
changing wall friction on the gravity-driven flow of spherical particles in a
vertical hopper. In 2D experiments and simulations, we observe that the
exponent of the expected power-law scaling of mass flow rate with opening size
(known as Beverloo's law) decreases as the coefficient of friction between
particles and wall increases, whereas Beverloo scaling works as expected in 3D.
In our 2D experiments, we find that wall friction plays the biggest role in a
region near the outlet comparable in height to the largest opening size.
However, wall friction is not the only factor determining a constant rate of
flow, as we observe a near-constant mass outflow rate in the 2D simulations
even when wall friction is set to zero. We show in our simulations that an
increase in wall friction leaves packing fractions relatively unchanged, while
average particle velocities become independent of opening size as the
coefficient of friction increases. We track the spatial pattern of
time-averaged particle velocities and accelerations inside the hopper. We
observe that the hemisphere-like region above the opening where particles begin
to accelerate is largely independent of opening size at finite wall friction.
However, the magnitude of particle accelerations decreases significantly as
wall friction increases, which in turn results in mean sphere velocities that
no longer scale with opening size, consistent with our observations of mass
flow rate scaling. The case of zero wall friction is anomalous, in that most of
the acceleration takes place near the outlet.
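For reference, the Beverloo scaling in its usual 3D and 2D forms; the constants C and k below are typical textbook fit values, not those measured in these experiments:

```python
import math

# Beverloo law: mass flow rate vs. opening size D, grain diameter d, and
# bulk density rho; exponent 5/2 in 3D, 3/2 in 2D. It is the 2D exponent
# that the experiments see drift downward as particle-wall friction grows.
def beverloo_3d(D, d, rho, C=0.58, k=1.5, g=9.81):
    return C * rho * math.sqrt(g) * (D - k * d) ** 2.5

def beverloo_2d(D, d, rho_area, C=0.9, k=1.5, g=9.81):
    return C * rho_area * math.sqrt(g) * (D - k * d) ** 1.5

print(beverloo_3d(D=0.05, d=0.005, rho=1500.0))  # ~1.0 kg/s for these inputs
```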
|
This article presents an analytical and numerical analysis of the dynamics of
charged particles in the field of an intense transverse electromagnetic wave in
vacuum. The conditions for resonant acceleration of particles are identified
and formulated. The features and the mechanism of this acceleration are
discussed.
|
The outbreak of COVID-19 led to a record-breaking race to develop a vaccine.
However, the limited vaccine capacity creates another massive challenge: how to
distribute vaccines to mitigate the near-term impact of the pandemic? In the
United States in particular, the new Biden administration is launching mass
vaccination sites across the country, raising the obvious question of where to
locate these clinics to maximize the effectiveness of the vaccination campaign.
This paper tackles this question with a novel data-driven approach to optimize
COVID-19 vaccine distribution. We first augment a state-of-the-art
epidemiological model, called DELPHI, to capture the effects of vaccinations
and the variability in mortality rates across age groups. We then integrate
this predictive model into a prescriptive model to optimize the location of
vaccination sites and subsequent vaccine allocation. The model is formulated as
a bilinear, non-convex optimization model. To solve it, we propose a coordinate
descent algorithm that iterates between optimizing vaccine distribution and
simulating the dynamics of the pandemic. As compared to benchmarks based on
demographic and epidemiological information, the proposed optimization approach
increases the effectiveness of the vaccination campaign by an estimated $20\%$,
saving an extra $4000$ lives in the United States over a three-month
period. The proposed solution achieves critical fairness objectives -- by
reducing the death toll of the pandemic in several states without hurting
others -- and is highly robust to uncertainties and forecast errors -- by
achieving similar benefits under a vast range of perturbations.
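Schematically, the coordinate-descent idea alternates between the two blocks of variables until the objective stops improving; a toy version with a simple quadratic surrogate standing in for the vaccine-allocation and epidemic-simulation subproblems:

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
u, v = np.zeros(10), np.zeros(10)

def f(u, v):                                   # toy surrogate objective
    return np.sum((A @ u - b) ** 2) + np.sum((u - v) ** 2)

prev = np.inf
for _ in range(100):
    u = np.linalg.solve(A.T @ A + np.eye(10), A.T @ b + v)  # block 1, v fixed
    v = u.copy()                                            # block 2, u fixed
    cur = f(u, v)
    if prev - cur < 1e-10:
        break                                  # no further improvement
    prev = cur
print(round(f(u, v), 6))
```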
|
In this paper, we present the use of Model Predictive Control (MPC) based on
Reinforcement Learning (RL) to find the optimal policy for a multi-agent
battery storage system. A time-varying prediction of the power price and
production-demand uncertainty are considered. We focus on optimizing an
economic objective cost while avoiding very low or very high state of charge,
which can damage the battery. We consider the bounded power provided by the
main grid and the constraints on the power input and state of each agent. A
parametrized MPC-scheme is used as a function approximator for the
deterministic policy gradient method and RL optimizes the closed-loop
performance by updating the parameters. Simulation results demonstrate that the
proposed method is able to tackle the constraints and deliver the optimal
policy.
|
The mass contained in an arbitrary spacetime in general relativity is not
well defined. However, for asymptotically flat spacetimes various definitions
of mass have been proposed. In this paper I consider eight masses and show that
some of them correspond to the active gravitational mass while the others
correspond to the inertial mass. For example, the ADM mass corresponds to the
inertial mass while the M$\o$ller mass corresponds to the active gravitational
mass. In general the inertial and active gravitational masses are not equal. If
the spacetime is vacuum at large $r$ the Einstein equations force the inertial
and active gravitational masses to be the same. The Einstein equations also
force the masses to be the same if any matter that extends out to large $r$
satisfies the weak, strong or dominant energy condition. I also examine the
contributions of the inertial and active gravitational masses to the
gravitational redshift, the deflection of light, the Shapiro time delay, the
precession of perihelia and to the motion of test bodies in the spacetime.
|
We test the adequacy of ultraviolet (UV) spectra for characterizing the outer
structure of Type Ia supernova (SN) ejecta. For this purpose, we perform
spectroscopic analysis for ASASSN-14lp, a normal SN Ia showing low continuum in
the mid-UV regime. To explain the strong UV suppression, two possible origins
have been investigated by mapping the chemical profiles over a significant part
of their ejecta. We fit the spectral time series with mid-UV coverage obtained
before and around maximum light by HST, supplemented with ground-based optical
observations for the earliest epochs. The synthetic spectra are calculated with
the one-dimensional Monte Carlo radiative-transfer code TARDIS from self-consistent
ejecta models. Among several physical parameters, we constrain the abundance
profiles of nine chemical elements. We find that a distribution of $^{56}$Ni
(and other iron-group elements) that extends toward the highest velocities
reproduces the observed UV flux well. The presence of radioactive material in
the outer layers of the ejecta, if confirmed, implies strong constraints on the
possible explosion scenarios. We investigate the impact of the inferred
$^{56}$Ni distribution on the early light curves with the radiative transfer
code TURTLS, and confront the results with the observed light curves of
ASASSN-14lp. The inferred abundances are not in conflict with the observed
photometry. We also test whether the UV suppression can be reproduced if the
radiation at the photosphere is significantly lower in the UV regime than the
pure Planck function. In this case, solar metallicity might be sufficient at
the highest velocities to reproduce the UV suppression.
|
Medical image registration and segmentation are two of the most frequent
tasks in medical image analysis. As these tasks are complementary and
correlated, it would be beneficial to apply them simultaneously in a joint
manner. In this paper, we formulate registration and segmentation as a joint
problem via a Multi-Task Learning (MTL) setting, allowing these tasks to
leverage their strengths and mitigate their weaknesses through the sharing of
beneficial information. We propose to merge these tasks not only on the loss
level, but on the architectural level as well. We studied this approach in the
context of adaptive image-guided radiotherapy for prostate cancer, where
planning and follow-up CT images as well as their corresponding contours are
available for training. The study involves two datasets from different
manufacturers and institutes. The first dataset was divided into training (12
patients) and validation (6 patients), and was used to optimize and validate
the methodology, while the second dataset (14 patients) was used as an
independent test set. We carried out an extensive quantitative comparison
between the quality of the automatically generated contours from different
network architectures as well as loss weighting methods. Moreover, we evaluated
the quality of the generated deformation vector field (DVF). We show that MTL
algorithms outperform their Single-Task Learning (STL) counterparts and achieve
better generalization on the independent test set. The best algorithm achieved
a mean surface distance of $1.06 \pm 0.3$ mm, $1.27 \pm 0.4$ mm, $0.91 \pm 0.4$
mm, and $1.76 \pm 0.8$ mm on the validation set for the prostate, seminal
vesicles, bladder, and rectum, respectively. The high accuracy of the proposed
method, combined with its fast inference speed, makes it a promising method for
automatic re-contouring of follow-up scans for adaptive radiotherapy.
|
Transductive zero-shot learning (T-ZSL) which could alleviate the domain
shift problem in existing ZSL works, has received much attention recently.
However, an open problem in T-ZSL: how to effectively make use of unseen-class
samples for training, still remains. Addressing this problem, we first
empirically analyze the roles of unseen-class samples with different degrees of
hardness in the training process based on the uneven prediction phenomenon
found in many ZSL methods, resulting in three observations. Then, we propose
two hardness sampling approaches for selecting a subset of diverse and hard
samples from a given unseen-class dataset according to these observations. The
first one identifies the samples based on the class-level frequency of the
model predictions while the second enhances the former by normalizing the class
frequency via an approximate class prior estimated by an explored prior
estimation algorithm. Finally, we design a new Self-Training framework with
Hardness Sampling for T-ZSL, called STHS, where an arbitrary inductive ZSL
method could be seamlessly embedded and it is iteratively trained with
unseen-class samples selected by the hardness sampling approach. We introduce
two typical ZSL methods into the STHS framework and extensive experiments
demonstrate that the derived T-ZSL methods outperform many state-of-the-art
methods on three public benchmarks. Besides, we note that the unseen-class
dataset is separately used for training in some existing transductive
generalized ZSL (T-GZSL) methods, which is not strict for a GZSL task. Hence,
we suggest a more strict T-GZSL data setting and establish a competitive
baseline on this setting by introducing the proposed STHS framework to T-GZSL.
|
Let $(\xi_k,\eta_k)_{k\in\mathbb{N}}$ be independent identically distributed
random vectors with arbitrarily dependent positive components. We call a
(globally) perturbed random walk a random sequence $T:=(T_k)_{k\in\mathbb{N}}$
defined by $T_k:=\xi_1+\ldots+\xi_{k-1}+\eta_k$ for $k\in\mathbb{N}$. Consider
a general branching process generated by $T$ and denote by $N_j(t)$ the number
of the $j$th generation individuals with birth times $\leq t$. We treat early
generations, that is, fixed generations $j$ which do not depend on $t$. In this
setting we prove counterparts for $\mathbb{E}N_j$ of the Blackwell theorem and
the key renewal theorem, prove a strong law of large numbers for $N_j$, and
find the first-order asymptotics for the variance of $N_j$. Also, we prove a
functional limit theorem for the vector-valued process $(N_1(ut),\ldots,
N_j(ut))_{u\geq 0}$, properly normalized and centered, as $t\to\infty$. The
limit is a vector-valued Gaussian process whose components are integrated
Brownian motions.
|
The augmented Lagrangian method (ALM) is one of the most useful methods for
constrained optimization. Its convergence has been well established under
convexity and/or smoothness assumptions. ALM
may experience oscillations and divergence when the underlying problem is
simultaneously nonconvex and nonsmooth. In this paper, we consider the linearly
constrained problem with a nonconvex (in particular, weakly convex) and
nonsmooth objective. We modify ALM to use a Moreau envelope of the augmented
Lagrangian and establish its convergence under conditions that are weaker than
those in the literature. We call it the Moreau envelope augmented Lagrangian
(MEAL) method. We also show that the iteration complexity of MEAL is
$o(\varepsilon^{-2})$ to yield an $\varepsilon$-accurate first-order stationary
point. We establish its whole sequence convergence (regardless of the initial
guess) and a rate when a Kurdyka-Lojasiewicz property is assumed. Moreover,
when the subproblem of MEAL has no closed-form solution and is difficult to
solve, we propose two practical variants of MEAL, an inexact version called
iMEAL with an approximate proximal update, and a linearized version called
LiMEAL for the constrained problem with a composite objective. Their
convergence is also established.
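For readers unfamiliar with the central object, the Moreau envelope of a function $f$ with parameter $\lambda>0$ is the standard construction below; MEAL applies this smoothing to the augmented Lagrangian, and exactly how the envelope enters the updates is the paper's contribution, not reproduced here:

```latex
% Moreau envelope: a smoothed surrogate that keeps the same infimum and
% the same minimizers as f itself.
\[
  M_{\lambda f}(x) \;=\; \min_{y}\Bigl\{\, f(y)
    + \tfrac{1}{2\lambda}\lVert y - x\rVert^{2} \Bigr\},
  \qquad \lambda > 0 .
\]
```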
|
In this effort, a novel operator theoretic framework is developed for
data-driven solution of optimal control problems. The developed methods focus
on the use of trajectories (i.e., time-series) as the fundamental unit of data
for the resolution of optimal control problems in dynamical systems. Trajectory
information in the dynamical systems is embedded in a reproducing kernel
Hilbert space (RKHS) through what are called occupation kernels. The occupation
kernels are tied to the dynamics of the system through the densely defined
Liouville operator. The pairing of Liouville operators and occupation kernels
allows for lifting of nonlinear finite-dimensional optimal control problems
into the space of infinite-dimensional linear programs over RKHSs.
|
The fifth generation (5G) wireless technology is primarily designed to
address a wide range of use cases categorized into the enhanced mobile
broadband (eMBB), ultra-reliable and low latency communication (URLLC), and
massive machine-type communication (mMTC). Nevertheless, there are a few other
use cases which are in-between these main use cases such as industrial wireless
sensor networks, video surveillance, or wearables. In order to efficiently
serve such use cases, in Release 17, the 3rd generation partnership project
(3GPP) introduced the reduced capability NR devices (NR-RedCap) with lower cost
and complexity, smaller form factor and longer battery life compared to regular
NR devices. However, one key potential consequence of device cost and
complexity reduction is the coverage loss. In this paper, we provide a
comprehensive evaluation of NR RedCap coverage for different physical channels
and initial access messages to identify the channels/messages that are
potentially coverage limiting for RedCap UEs. We perform the coverage
evaluations for RedCap UEs operating in three different scenarios, namely
Rural, Urban and Indoor with carrier frequencies 700 MHz, 2.6 GHz and 28 GHz,
respectively. Our results confirm that for all the considered scenarios, the
amount of required coverage recovery for RedCap channels is either less than
1 dB or can be compensated for by considering smaller data-rate targets for
RedCap use cases.
|
Using an extended version of Quantum Hadrodynamics (QHD), I propose a new
microscopic equation of state (EoS) able to correctly reproduce the main
properties of symmetric nuclear matter at the saturation density, as well as to
produce massive neutron stars and satisfactory results for the radius and the
tidal parameter $\Lambda$. I show that even when hyperons are present, this EoS
is able to reproduce a neutron star of at least 2.00 solar masses. Constraints
on the radius of a $2.00M_\odot$ star and on the minimum mass that enables the
direct Urca effect are also checked.
|
This paper is concerned with high moment and pathwise error estimates for
both velocity and pressure approximations of the Euler-Maruyama scheme for time
discretization and its two fully discrete mixed finite element discretizations.
The main idea for deriving the high moment error estimates for the velocity
approximation is to use a bootstrap technique starting from the second moment
error estimate. The pathwise error estimate, which is sub-optimal in the energy
norm, is obtained by using Kolmogorov's theorem based on the high moment error
estimates. Unlike for the velocity error estimate, the higher moment and
pathwise error estimates for the pressure approximation are derived in a
time-averaged norm. In addition, the impact of noise types on the rates of
convergence for both velocity and pressure approximations is also addressed.
|
Machine Learning (ML) models are known to be vulnerable to adversarial inputs
and researchers have demonstrated that even production systems, such as
self-driving cars and ML-as-a-service offerings, are susceptible. These systems
represent a target for bad actors. Their disruption can cause real physical and
economic harm. When attacks on production ML systems occur, the ability to
attribute the attack to the responsible threat group is a critical step in
formulating a response and holding the attackers accountable. We pose the
following question: can adversarially perturbed inputs be attributed to the
particular methods used to generate the attack? In other words, is there a way
to find a signal in these attacks that exposes the attack algorithm, model
architecture, or hyperparameters used in the attack? We introduce the concept
of adversarial attack attribution and create a simple supervised learning
experimental framework to examine the feasibility of discovering attributable
signals in adversarial attacks. We find that it is possible to differentiate
attacks generated with different attack algorithms, models, and hyperparameters
on both the CIFAR-10 and MNIST datasets.
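A toy version of such an attribution experiment, with a linear victim model and two hand-rolled attacks (FGSM and a crude iterative variant); everything here is a stand-in for the paper's CIFAR-10/MNIST setup, not its protocol:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
victim = torch.nn.Linear(100, 10)            # stand-in for the attacked model
x = torch.randn(256, 100)
y = torch.randint(0, 10, (256,))

def grad_sign(x, y):                         # sign of the input-loss gradient
    x = x.clone().requires_grad_(True)
    F.cross_entropy(victim(x), y).backward()
    return x.grad.sign()

eps = 0.1
delta_fgsm = eps * grad_sign(x, y)           # one-step FGSM perturbations
delta_pgd = torch.zeros_like(x)              # crude 5-step iterative attack
for _ in range(5):
    delta_pgd = (delta_pgd + 0.25 * eps * grad_sign(x + delta_pgd, y)).clamp(-eps, eps)

# Supervised attribution: label each perturbation with its attack of origin.
feats = torch.cat([delta_fgsm, delta_pgd])
labels = torch.cat([torch.zeros(256), torch.ones(256)]).long()
clf = torch.nn.Linear(100, 2)
opt = torch.optim.Adam(clf.parameters(), lr=0.05)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(clf(feats), labels).backward()
    opt.step()
print((clf(feats).argmax(1) == labels).float().mean())  # attribution accuracy
```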
|
In steady-state, the plasma loss rate to the chamber wall is balanced by the
ionization rate in the hot-filament discharges. This balance in the loss rate
and ionization rate maintains the quasi--neutrality of the bulk plasma. In this
report, we have studied the properties of bulk plasma in the presence of an
auxiliary (additional) biased metal disk, which is working as a sink, in
low-pressure helium plasma. A single Langmuir probe and emissive probe are used
to characterize the plasma for various biases (positive and negative) to the
metal disk, which was placed along the discharge axis inside the plasma. It is
observed that only a positively biased disk increases the plasma potential,
electron temperature, and plasma density. Moreover, the plasma parameters
remain unaltered when the disk is negatively biased. The observed results for
two positively biased metal disks of different sizes are compared with an
available theoretical model, and an opposite behavior of the plasma density
variation with the disk bias voltage is found at a given discharge condition.
The role of the
primary energetic electron population in determining the plasma parameters is
discussed. These experimental results are qualitatively explained on the basis
of electrostatic confinement arising due to the loss of electrons to a biased
metal disk electrode.
|
Language models are the foundation of current neural network-based models for
natural language understanding and generation. However, research on the
intrinsic performance of language models on African languages has been
extremely limited, which is made more challenging by the lack of large or
standardised training and evaluation sets that exist for English and other
high-resource languages. In this paper, we evaluate the performance of
open-vocabulary language models on low-resource South African languages, using
byte-pair encoding to handle the rich morphology of these languages. We
evaluate different variants of n-gram models, feedforward neural networks,
recurrent neural networks (RNNs), and Transformers on small-scale datasets.
Overall, well-regularized RNNs give the best performance across two isiZulu and
one Sepedi datasets. Multilingual training further improves performance on
these datasets. We hope that this research will open new avenues for research
into multilingual and low-resource language modelling for African languages.
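For concreteness, a minimal byte-pair-encoding learner of the kind used to build such subword vocabularies; this is a textbook sketch, not the paper's tokenizer:

```python
from collections import Counter

# Repeatedly merge the most frequent adjacent symbol pair; real subword
# vocabularies for morphologically rich languages are learned this way at scale.
def learn_bpe(words, n_merges):
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1]); i += 2
                else:
                    out.append(word[i]); i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

print(learn_bpe(["lower", "lowest", "low", "low"], n_merges=4))
```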
|
We argue that a superconducting state with a Fermi-surface of Bogoliubov
quasiparticles, a Bogoliubov Fermi-surface (BG-FS), can be identified by the
dependence of physical quantities on disorder. In particular, we show that a
linear dependence of the residual density of states at weak disorder
distinguishes a BG-FS state from other nodal superconducting states. We further
demonstrate the stability of supercurrent against impurities and a
characteristic Drude-like behavior of the optical conductivity. Our results can
be directly applied to electron irradiation experiments on candidate materials
of BG-FSs, including Sr$_2$RuO$_4$, FeSe$_{1-x}$S$_x$, and UBe$_{13}$.
|
We study the effects of stochastic resetting on geometric Brownian motion
(GBM), a canonical stochastic multiplicative process for non-stationary and
non-ergodic dynamics. Resetting is a sudden interruption of a process, which
consecutively renews its dynamics. We show that, although resetting renders GBM
stationary, the resulting process remains non-ergodic. Quite surprisingly, the
effect of resetting is pivotal in manifesting the non-ergodic behavior. In
particular, we observe three different long-time regimes: a quenched state, an
unstable and a stable annealed state depending on the resetting strength.
Notably, in the last regime, the system is self-averaging and thus the sample
average will always mimic ergodic behavior, establishing a stand-alone feature
for GBM under resetting. Crucially, the above-mentioned regimes are well
separated by a self-averaging time period which can be minimized by an optimal
resetting rate. Our results can be useful to interpret data emanating from
stock market collapse or reconstitution of investment portfolios.
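A short simulation sketch of GBM under Poissonian resetting, with illustrative parameter values not tied to any particular result:

```python
import numpy as np

# Between resets, dX = X (mu dt + sigma dW); with rate r the process is
# returned to x0. The gap between the sample mean and the typical (geometric
# mean) value reflects the non-ergodicity discussed in the abstract.
rng = np.random.default_rng(0)
mu, sigma, r, x0 = 0.05, 0.3, 0.5, 1.0
dt, n_steps, n_paths = 1e-3, 20000, 2000

X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    X = X * (1.0 + mu * dt + sigma * dW)       # Euler-Maruyama GBM step
    reset = rng.random(n_paths) < r * dt       # Poissonian resetting events
    X[reset] = x0
print(X.mean(), np.exp(np.log(X).mean()))      # sample mean vs. typical value
```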
|
Markov Decision Processes are classically solved using Value Iteration and
Policy Iteration algorithms. Recent interest in Reinforcement Learning has
motivated the study of methods inspired by optimization, such as gradient
ascent. Among these, a popular algorithm is the Natural Policy Gradient, which
is a mirror descent variant for MDPs. This algorithm forms the basis of several
popular Reinforcement Learning algorithms such as Natural actor-critic, TRPO,
PPO, etc, and so is being studied with growing interest. It has been shown that
Natural Policy Gradient with constant step size converges with a sublinear rate
of O(1/k) to the global optimum. In this paper, we present improved finite-time
convergence bounds, and show that this algorithm has geometric (also known as
linear) asymptotic convergence rate. We further improve this convergence result
by introducing a variant of Natural Policy Gradient with adaptive step sizes.
Finally, we compare different variants of policy gradient methods
experimentally.
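For the tabular softmax case, the Natural Policy Gradient step takes the familiar multiplicative-weights form $\pi_{t+1}(a|s) \propto \pi_t(a|s)\, e^{\eta Q^{\pi_t}(s,a)}$; a sketch on a toy MDP follows, where the MDP and step size are illustrative choices:

```python
import numpy as np

gamma, eta = 0.9, 1.0
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # P[s][a] = next-state distribution
              [[0.7, 0.3], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])      # R[s][a]

def q_values(pi):
    # Solve V = r_pi + gamma * P_pi V exactly, then Q = R + gamma * P V.
    P_pi = np.einsum("sa,sat->st", pi, P)
    r_pi = np.einsum("sa,sa->s", pi, R)
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    return R + gamma * np.einsum("sat,t->sa", P, V)

pi = np.full((2, 2), 0.5)
for _ in range(50):
    pi = pi * np.exp(eta * q_values(pi))    # NPG as multiplicative weights
    pi /= pi.sum(axis=1, keepdims=True)     # renormalize per state
print(np.round(pi, 3))                      # approaches a deterministic policy
```

Using Q in place of the advantage changes nothing here: the state-dependent factor $e^{\eta V(s)}$ cancels in the per-state normalization.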
|
In 2017 Skabelund constructed two new examples of maximal curves
$\tilde{\mathcal{S}}_q$ and $\tilde{\mathcal{R}}_q$ as covers of the Suzuki and
Ree curves, respectively. The resulting Skabelund curves are analogous to the
Giulietti-Korchm\'aros cover of the Hermitian curve. In this paper a complete
characterization of all Galois subcovers of the Skabelund curves
$\tilde{\mathcal{S}}_q$ and $\tilde{\mathcal{R}}_q$ is given. Calculating the
genera of the corresponding curves, we find new additions to the list of known
genera of maximal curves over finite fields.
|
Efficient and precise prediction of plasticity by data-driven models relies
on appropriate data preparation and a well-designed model. Here we introduce an
unsupervised machine learning-based data preparation method to maximize the
trainability of crystal orientation evolution data during deformation. For
Taylor model crystal plasticity data, the preconditioning procedure improves
the test score of an artificial neural network from 0.831 to 0.999, while
decreasing the training iterations by an order of magnitude. The efficacy of
the approach was further improved with a recurrent neural network. Electron
backscatter diffraction (EBSD) lab measurements of crystal rotation during rolling were
compared with the results of the surrogate model, and despite error introduced
by Taylor model simplifying assumptions, very reasonable agreement between the
surrogate model and experiment was observed. Our method is foundational for
further data-driven studies, enabling the efficient and precise prediction of
texture evolution from experimental and simulated crystal plasticity results.
|
Cyber threat and attack intelligence information are available in
non-standard format from heterogeneous sources. Comprehending them and
utilizing them for threat intelligence extraction requires engaging security
experts. Knowledge graphs enable converting this unstructured information from
heterogeneous sources into a structured representation of data and factual
knowledge for several downstream tasks such as predicting missing information
and future threat trends. Existing large-scale knowledge graphs mainly focus on
general classes of entities and relationships between them. Open-source
knowledge graphs for the security domain do not exist. To fill this gap, we've
built \textsf{TINKER} - a knowledge graph for threat intelligence
(\textbf{T}hreat \textbf{IN}telligence \textbf{K}nowl\textbf{E}dge
g\textbf{R}aph). \textsf{TINKER} is generated using RDF triples describing
entities and relations from tokenized unstructured natural language text from
83 threat reports published between 2006-2021. We built \textsf{TINKER} using
classes and properties defined by open-source malware ontology and using
hand-annotated RDF triples. We also discuss ongoing research and challenges
faced while creating \textsf{TINKER}.
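A hypothetical flavor of the underlying RDF triples, written with rdflib; the namespace and class/property names are illustrative, not TINKER's actual ontology terms:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Toy threat-intelligence triples: subject-predicate-object statements of the
# kind extracted from threat-report text.
TI = Namespace("http://example.org/threat-intel#")
g = Graph()
g.add((TI.APT29, RDF.type, TI.ThreatActor))
g.add((TI.APT29, TI.uses, TI.WellMess))          # actor -> malware relation
g.add((TI.WellMess, RDF.type, TI.Malware))
g.add((TI.WellMess, TI.targets, Literal("government networks")))
print(g.serialize(format="turtle"))
```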
|
We generalize standard credal set models for imprecise probabilities to
include higher order credal sets -- confidences about confidences. In doing so,
we specify how an agent's higher order confidences (credal sets) update upon
observing an event. Our model begins to address standard issues with imprecise
probability models, like Dilation and Belief Inertia. We conjecture that when
higher order credal sets contain all possible probability functions, then in
the limiting case the highest order confidences converge to form a uniform
distribution over the first order credal set, where we define uniformity in
terms of the statistical distance metric (total variation distance). Finite
simulation supports the conjecture. We further suggest that this convergence
presents the total-variation-uniform distribution as a natural, privileged
prior for statistical hypothesis testing.
|
We consider the inverse Seesaw scenario for neutrino masses with the
approximate Lepton number symmetry broken dynamically by a scalar with Lepton
number two. We show that the Majoron associated to the spontaneous symmetry
breaking can alleviate the Hubble tension through its contribution to $\Delta
N_\text{eff}$ and late decays to neutrinos. Among the additional fermionic
states required for realizing the inverse Seesaw mechanism, sterile neutrinos
at the keV-MeV scale can account for all the dark matter component of the
Universe if produced via freeze-in from the decays of heavier degrees of
freedom.
|
In any type II superstring background, the supergravity vertex operators in
the pure spinor formalism are described by a gauge superfield. In this paper,
we obtain for the first time an explicit expression for this superfield in an
$AdS_5 \times S^5$ background. Previously, the vertex operators were only known
close to the boundary of $AdS_5$ or in the minus eight picture. Our strategy
for the computation was to apply eight picture raising operators in the minus
eight picture vertices. In the process, a huge number of terms are generated
and we have developed numerical techniques to perform intermediary
simplifications. Alternatively, the same numerical techniques can be used to
compute the vertices directly in the zero picture by constructing a basis of
invariants and fitting for the coefficients. One motivation for constructing
the vertex operators is the computation of $AdS_5 \times S^5$ string
amplitudes.
|
In this article we investigate the fibers of relative $D$-modules. In general
we prove that there exists an open, Zariski dense subset of the vanishing set
of the annihilator over which the fibers of a cyclic relative $D$-module are
non-zero. Next we restrict our attention to relatively holonomic $D$-modules.
For this class we prove that the fiber over every point in the vanishing set of
the annihilator is non-zero. As a consequence we obtain new proofs of a
conjecture of Budur which was recently proven by Budur, van der Veer, Wu and
Zhou, as well as a new proof of a theorem of Maisonobe. Moreover, we also
obtain a diagonal specialization result for Bernstein-Sato ideals.
|
Riemannian manifolds provide a principled way to model nonlinear geometric
structure inherent in data. A Riemannian metric on said manifolds determines
geometry-aware shortest paths and provides the means to define statistical
models accordingly. However, these operations are typically computationally
demanding. To ease this computational burden, we advocate probabilistic
numerical methods for Riemannian statistics. In particular, we focus on
Bayesian quadrature (BQ) to numerically compute integrals over normal laws on
Riemannian manifolds learned from data. In this task, each function evaluation
relies on the solution of an expensive initial value problem. We show that by
leveraging both prior knowledge and an active exploration scheme, BQ
significantly reduces the number of required evaluations and thus outperforms
Monte Carlo methods on a wide range of integration problems. As a concrete
application, we highlight the merits of adopting Riemannian geometry with our
proposed framework on a nonlinear dataset from molecular dynamics.
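A one-dimensional Bayesian quadrature sketch against a standard normal measure, where the RBF kernel mean is available in closed form; on the Riemannian manifolds of the chapter, each function evaluation would additionally require solving an initial value problem:

```python
import numpy as np

# With k(x, y) = exp(-(x - y)^2 / (2 l^2)) and measure N(0, 1), the kernel
# mean is z(x) = l / sqrt(l^2 + 1) * exp(-x^2 / (2 (l^2 + 1))), and the BQ
# estimate of E[f] is z^T K^{-1} f(X).
def bq_estimate(f, X, ell=0.7, jitter=1e-9):
    K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * ell**2))
    z = ell / np.sqrt(ell**2 + 1) * np.exp(-X**2 / (2 * (ell**2 + 1)))
    weights = np.linalg.solve(K + jitter * np.eye(len(X)), z)
    return weights @ f(X)

X = np.linspace(-3, 3, 15)                 # only 15 function evaluations
f = lambda x: np.sin(x) ** 2
print(bq_estimate(f, X))                   # ~0.432 = (1 - e^{-2})/2 exactly
```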
|
Gravitational instabilities can drive small-scale turbulence and large-scale
spiral arms in massive gaseous disks under conditions of slow radiative
cooling. These motions affect the observed disk morphology, its mass accretion
rate and variability, and could control the process of planet formation via
dust grain concentration, processing, and collisional fragmentation. We study
gravito-turbulence and its associated spiral structure in thin gaseous disks
subject to a prescribed cooling law. We characterize the morphology, coherence,
and propagation of the spirals and examine when the flow deviates from viscous
disk models. We used the finite-volume code Pluto to integrate the equations of
self-gravitating hydrodynamics in three-dimensional spherical geometry. The gas
was cooled over longer-than-orbital timescales to trigger the gravitational
instability and sustain turbulence. We ran models for various disk masses and
cooling rates. In all cases considered, the turbulent gravitational stress
transports angular momentum outward at a rate compatible with viscous disk
theory. The dissipation of orbital energy happens via shocks in spiral density
wakes, heating the disk back to a marginally stable thermal equilibrium. These
wakes drive vertical motions and contribute to mixing material from the disk
with its corona. They are formed and destroyed intermittently, and they nearly
corotate with the gas at every radius. As a consequence, large-scale spiral
arms exhibit no long-term global coherence, and energy thermalization is an
essentially local process. In the absence of radial substructures or tidal
forcing, and given a local cooling law, gravito-turbulence reduces to a
local phenomenon in thin gaseous disks.
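For concreteness, a toy version of a prescribed cooling law of the kind commonly used to trigger the gravitational instability (the classic "beta cooling" of Gammie 2001, in which internal energy decays over a fixed number of local orbits); whether this matches the exact law used in the paper is an assumption:

```python
# Beta cooling: internal energy decays on t_cool = beta / Omega, i.e. a
# fixed multiple of the local dynamical time at each radius.
import numpy as np

def cool(u, omega, beta, dt):
    """One cooling substep: du/dt = -u / (beta / omega)."""
    t_cool = beta / omega
    return u * np.exp(-dt / t_cool)      # exact solution of the linear decay

r = np.linspace(0.5, 2.0, 5)             # sample radii (code units)
omega = r ** -1.5                        # Keplerian angular frequency
u = np.ones_like(r)                      # initial internal energy density
print(cool(u, omega, beta=10.0, dt=0.1)) # beta >> 1: slow, GI-sustaining cooling
```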
|
For many reinforcement learning (RL) applications, specifying a reward is
difficult. This paper considers an RL setting where the agent obtains
information about the reward only by querying an expert that can, for example,
evaluate individual states or provide binary preferences over trajectories.
From such expensive feedback, we aim to learn a model of the reward that allows
standard RL algorithms to achieve high expected returns with as few expert
queries as possible. To this end, we propose Information Directed Reward
Learning (IDRL), which uses a Bayesian model of the reward and selects queries
that maximize the information gain about the difference in return between
plausibly optimal policies. In contrast to prior active reward learning methods
designed for specific types of queries, IDRL naturally accommodates different
query types. Moreover, it achieves similar or better performance with
significantly fewer queries by shifting the focus from reducing the reward
approximation error to improving the policy induced by the reward model. We
support our findings with extensive evaluations in multiple environments and
with different query types.
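A minimal sketch of the selection rule described above, under a deliberately simplified linear-Gaussian reward model: among candidate single-state queries, pick the one that most reduces the posterior variance of the return difference between two candidate policies. The actual method (GP reward models, multiple query types, construction of the plausibly optimal policy set) is considerably more general; all names below are illustrative:

```python
# IDRL-style query selection in a linear-Gaussian toy model: the return of
# each policy is a linear functional of the reward vector, so information
# gain about the return *difference* reduces to rank-1 Gaussian updates.
import numpy as np

rng = np.random.default_rng(1)
n_states, noise_var = 6, 0.1

Sigma = np.eye(n_states)                 # reward prior covariance
v1 = rng.dirichlet(np.ones(n_states))    # state-visitation freq., policy 1
v2 = rng.dirichlet(np.ones(n_states))    # state-visitation freq., policy 2
w = v1 - v2                              # return difference = w @ reward

def var_after_query(s):
    """Posterior variance of w @ r after a noisy evaluation of state s."""
    k = Sigma[:, s]
    Sigma_post = Sigma - np.outer(k, k) / (Sigma[s, s] + noise_var)
    return w @ Sigma_post @ w

gains = [w @ Sigma @ w - var_after_query(s) for s in range(n_states)]
print("query state:", int(np.argmax(gains)))  # largest uncertainty reduction
```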
|
The growing popularity of online fundraising (aka "crowdfunding") has
attracted significant research on the subject. In contrast to previous studies
that attempt to predict the success of crowdfunded projects based on specific
characteristics of the projects and their creators, we present a more general
approach that focuses on crowd dynamics and is robust to the particularities of
different crowdfunding platforms. We rely on a multi-method analysis to
investigate the correlates, predictive importance, and quasi-causal effects of
features that describe crowd dynamics in determining the success of crowdfunded
projects. By applying a multi-method analysis to a study of fundraising in
three different online markets, we uncover general crowd dynamics that
ultimately decide which projects will succeed. In all analyses and across the
three different platforms, we consistently find that funders' behavioural
signals (1) are significantly correlated with fundraising success; (2)
approximate fundraising outcomes better than the characteristics of projects
and their creators such as credit grade, company valuation, and subject domain;
and (3) have significant quasi-causal effects on fundraising outcomes while
controlling for potentially confounding project variables. By showing that
universal features deduced from crowd behaviour are predictive of fundraising
success on different crowdfunding platforms, our work provides design-relevant
insights about novel types of collective decision-making online. This research
thus suggests potential ways to leverage cues from the crowd and catalyses
further research into crowd-aware system design.
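As one concrete instance of the kind of "predictive importance" comparison described above, a sketch using permutation importance on synthetic data; the feature names and the data-generating process are invented for illustration and are not from the study:

```python
# Compare predictive importance of crowd-dynamics features against a static
# project feature via permutation importance on a synthetic success label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 2000
early_backers = rng.poisson(5, n)            # crowd-dynamics feature
funding_velocity = rng.gamma(2.0, 1.0, n)    # crowd-dynamics feature
credit_grade = rng.integers(1, 8, n)         # static project feature
X = np.column_stack([early_backers, funding_velocity, credit_grade])
y = (early_backers + funding_velocity + rng.normal(0, 1, n) > 6).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, m in zip(["early_backers", "funding_velocity", "credit_grade"],
                   imp.importances_mean):
    print(f"{name:18s} {m:.3f}")             # crowd features dominate here
```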
|
We use a recent numerical security proof technique to analyse three different
postselection strategies for continuous-variable quantum key distribution
protocols (CV-QKD) with quadrature phase-shift keying modulation. For all
postselection strategies studied, we provide novel analytical results for the
operators that define the respective regions in phase space. In both the
untrusted and the trusted detector model, a new cross-shaped postselection
strategy clearly outperforms a state-of-the-art radial postselection scheme at
higher transmission distances and higher noise values. Motivated by the high
computational effort of the error-correction phase, we also studied the case
where a large fraction of the raw key is eliminated by postselection: we
observe that when only $20\%$ of the raw key passes the cross-shaped
postselection, the secure key rate is still roughly $80\%$ of the rate without
postselection for low values of excess noise, and roughly $95\%$ for higher
values. Furthermore, we examine a strategy with radial and angular
postselection that combines the advantages of both state-of-the-art radial
postselection schemes and our cross-shaped strategy, at the cost of higher
complexity. Employing the cross-shaped postselection strategy, which can
easily be introduced in the data processing, both new and existing CV-QKD
systems can improve their achievable secure key rates significantly.
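To illustrate why the cross-shaped strategy "can easily be introduced in the data processing", here is a toy filter on heterodyne outcomes $(q, p)$. One plausible reading of "cross-shaped" is to discard a band around the two quadrature axes, keeping a point only if both $|q|$ and $|p|$ exceed a cutoff; the exact region definition and parameters used in the paper are assumptions here:

```python
# Toy cross-shaped postselection on noisy QPSK-like heterodyne data: keep a
# sample only if both quadratures lie away from the decision boundaries.
import numpy as np

rng = np.random.default_rng(7)
n, cut = 100_000, 0.4

signs = rng.choice([-1.0, 1.0], size=(n, 2))           # random QPSK quadrant
qp = 0.8 * signs + rng.normal(scale=0.6, size=(n, 2))  # add Gaussian noise

keep = (np.abs(qp[:, 0]) > cut) & (np.abs(qp[:, 1]) > cut)
print(f"fraction kept: {keep.mean():.2f}")             # raw-key fraction left
```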
|