Models of front propagation, like the famous FKPP equation, have extensive
applications across scientific disciplines, e.g., in the spread of infectious
diseases. A common feature of such models is the existence of a static state
into which the front propagates, e.g., the uninfected host population. Here, we instead
model an infectious front propagating into a growing host population. The
infectious agent spreads via self-similar waves whereas the amplitude of the
wave of infected organisms increases exponentially. Depending on the population
under consideration, wave speeds are either advanced or retarded compared to
the non-growing case. We identify a novel selection mechanism in which the
shape of the infectious wave controls the speeds of the various waves, and we
propose experiments with bacteria and bacterial viruses to test our
predictions. Our work reveals the complex interplay between population growth
and front propagation.
|
The spin Hall effect and its inverse are important spin-charge conversion
mechanisms. The direct spin Hall effect induces a surface spin accumulation
from a transverse charge current due to spin-orbit coupling, even in
non-magnetic conductors. However, most detection schemes involve additional
interfaces, leading to large scatter in reported data. Here we perform
interface-free X-ray spectroscopy measurements at the Cu L_{3,2} absorption
edges of highly Bi-doped Cu (Cu_{95}Bi_{5}). The detected X-ray magnetic
circular dichroism (XMCD) signal corresponds to an induced magnetic moment of
(2.7 +/- 0.5) x 10^{-12} {\mu}_{B} A^{-1} cm^{2} per Cu atom averaged over the
probing depth, which is of the same order as for Pt measured by magneto-optics.
The results highlight the importance of interface-free measurements to assess
material parameters, and the potential of CuBi for spin-charge conversion
applications.
|
Ensuring that a predictor is not biased against a sensitive feature is the key
goal of Fairness learning. Conversely, Global Sensitivity Analysis is used in
numerous contexts to monitor the influence of any feature on an output
variable. We reconcile these two domains by showing how Fairness can be seen as
a special framework of Global Sensitivity Analysis and how various usual
indicators are common to these two fields. We also present new Global
Sensitivity Analysis indices, as well as rates of convergence, that are useful
as fairness proxies.
|
Virtualization of distributed real-time systems enables the consolidation of
mixed-criticality functions on a shared hardware platform, thus easing system
integration. Time-triggered communication and computation can act as an enabler
of safe hard real-time systems. A time-triggered hypervisor that activates
virtual CPUs according to a global schedule can provide the means to allow for
a resource efficient implementation of the time-triggered paradigm in
virtualized distributed real-time systems. A prerequisite of time-triggered
virtualization for hard real-time systems is providing access to a global time
base to VMs as well as to the hypervisor. A global time base is the result of
clock synchronization with an upper bound on the clock synchronization
precision.
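For reference, a minimal statement of the standard definition underlying this
bound (a textbook formulation, not specific to this paper; $C_i(t)$ denotes the
reading of clock $i$ at reference time $t$):
\begin{equation}
\Pi \;=\; \max_{t}\,\max_{i,j}\,\bigl|\,C_i(t) - C_j(t)\,\bigr|,
\end{equation}
i.e., the precision $\Pi$ is the maximum deviation between any two correct
clocks at any time, and a global time base requires a known upper bound on
$\Pi$.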
We present a formalization of the notion of time in virtualized real-time
systems. We use this formalization to propose a virtual clock condition that
enables us to test the suitability of a virtual clock for the design of
virtualized time-triggered real-time systems. We discuss and model how
virtualization, in particular resource consolidation versus resource
partitioning, degrades clock synchronization precision. Finally, we apply our
insights to model the IEEE 802.1AS clock synchronization protocol and derive an
upper bound on the clock synchronization precision of IEEE 802.1AS. We present
our implementation of a dependent clock for ACRN that can be synchronized to a
grandmaster clock. The results of our experiments illustrate that a type-1
hypervisor implementing a dependent clock yields native clock synchronization
precision. Furthermore, we show that the upper bound derived from our model
holds for a series of experiments featuring native as well as virtualized
setups.
|
We study several parameters of a random Bienaym\'e-Galton-Watson tree $T_n$
of size $n$ defined in terms of an offspring distribution $\xi$ with mean $1$
and nonzero finite variance $\sigma^2$. Let $f(s)={\bf E}\{s^\xi\}$ be the
generating function of the random variable $\xi$. We show that the independence
number is in probability asymptotic to $qn$, where $q$ is the unique solution
to $q = f(1-q)$. One of the many algorithms for finding the largest independent
set of nodes uses a notion of repeated peeling away of all leaves and their
parents. The number of rounds of peeling is shown to be in probability
asymptotic to $\log n / \log\bigl(1/f'(1-q)\bigr)$. Finally, we study a related
parameter which we call the leaf-height. Also sometimes called the protection
number, this is the maximal shortest path length between any node and a leaf in
its subtree. If $p_1 = {\bf P}\{\xi=1\}>0$, then we show that the maximum
leaf-height over all nodes in $T_n$ is in probability asymptotic to $\log
n/\log(1/p_1)$. If $p_1 = 0$ and $\kappa$ is the first integer $i>1$ with ${\bf
P}\{\xi=i\}>0$, then the leaf-height is in probability asymptotic to
$\log_\kappa\log n$.
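As a concrete numerical illustration of the fixed point $q=f(1-q)$ (our own
example, not from the paper), take a Poisson(1) offspring distribution, for
which $f(s)=e^{s-1}$ and hence the fixed-point equation reduces to $q=e^{-q}$;
a minimal sketch in Python:

```python
# Solve q = f(1 - q) for Poisson(1) offspring, where f(s) = exp(s - 1),
# so the fixed-point equation reduces to q = exp(-q).
import math
from scipy.optimize import brentq

f = lambda s: math.exp(s - 1.0)                  # generating function of Poisson(1)
q = brentq(lambda t: t - f(1.0 - t), 0.0, 1.0)
print(q)                                         # ~0.5671 (the omega constant)

# Since f'(s) = f(s) for Poisson(1), the predicted number of peeling
# rounds for a tree of size n is log(n)/log(1/f'(1-q)) = log(n)/log(1/q).
n = 10**6
print(math.log(n) / math.log(1.0 / f(1.0 - q)))  # ~24.4 rounds
```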
|
In this work we ask how an Unruh-DeWitt (UD) detector with harmonic
oscillator internal degrees of freedom $Q$ measuring an evolving quantum matter
field $\Phi(\bm{x}, t)$ in an expanding universe with scale factor $a(t)$
responds. We investigate the detector's response which contains non-Markovian
information about the quantum field squeezed by the dynamical spacetime. The
challenge is in the memory effects accumulated over the evolutionary history.
We first consider a detector $W$, the `\textsl{Witness}', which co-existed and
evolved with the quantum field from the beginning. We derive a non-Markovian
quantum Langevin equation for the detector's $Q$ by integrating over the
squeezed quantum field. Solving this integro-differential equation would answer
our question in principle, but is very challenging in practice.
Striking a compromise, we then ask, to what extent can a detector $D$
introduced at late times, called the `\textsl{Detective}', decipher past
memories. This situation corresponds to many cosmological experiments today
probing specific stages in the past, such as COBE targeting activities at the
surface of last scattering. Somewhat surprisingly, we show that it is possible
to retrieve to some degree certain global physical quantities, such as the
resultant squeezing, particles created, quantum coherence and correlations. The
reason is that the quantum field carries all the fine-grained information from
the beginning in how it was driven by the cosmic dynamics $a(t)$. How long the
details of past history can persist in the quantum field depends on the memory
time. The fact that a squeezed field cannot come to complete equilibrium under
constant driving, as in an evolving spacetime, actually helps to retain the
memory. We discuss interesting features and potentials of this
`\textit{archaeological}' perspective toward cosmological issues.
|
We present updated measurements of the Crab pulsar glitch of 2019 July 23
using a dataset of pulse arrival times spanning $\sim$5 months. On MJD 58687,
the pulsar underwent its seventh largest glitch observed to date, characterised
by an instantaneous spin-up of $\sim$1 $\mu$Hz. Following the glitch, the
pulsar's rotation frequency relaxed exponentially towards pre-glitch values
over a timescale of approximately one week, resulting in a permanent frequency
increment of $\sim$0.5 $\mu$Hz. Due to our semi-continuous monitoring of the
Crab pulsar, we were able to partially resolve a fraction of the total spin-up.
This delayed spin-up occurred exponentially over a timescale of $\sim$18 hours.
This is the sixth Crab pulsar glitch for which part of the initial rise was
resolved in time; this phenomenon has not been observed in any other
glitching pulsar, offering a unique opportunity to study the microphysical
processes governing interactions between the neutron star interior and the
crust.
|
In this work we shall study $k$-inflation theories with non-minimal coupling
of the scalar field to gravity, in the presence of only a higher order kinetic
term of the form $\sim \mathrm{const}\times X^{\mu}$, with
$X=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi$. The study focuses on the
cases where a scalar potential is included or absent, and the
evolution of the scalar field will be assumed to satisfy the slow-roll or the
constant-roll condition. In the case of the slow-roll models with scalar
potential, we shall calculate the slow-roll indices, and the corresponding
observational indices of the theory, and we demonstrate that the resulting
theory is compatible with the latest Planck data. The same results are obtained
in the constant-roll case, at least in the presence of a scalar potential. In
the case that models without potential are considered, the results are less
appealing since these are strongly model dependent, and at least for a
power-law choice of the non-minimal coupling, the theory is non-viable.
Finally, due to the fact that the scalar and tensor power spectra are conformally
invariant quantities, we argue that the Einstein frame counterpart of the
non-minimal $k$-inflation models with scalar potential, can be a viable theory,
due to the conformal invariance of the observational indices. The Einstein
frame theory is more involved and thus more difficult to work with
analytically, so one implication of our work is that we provide evidence for
the viability of another class of $k$-inflation models.
|
Some known fixed point theorems for nonexpansive mappings in metric spaces
are extended here to the case of primitive uniform spaces. The reasoning
presented in the proofs seems to be a natural way to obtain other general
results.
|
Carroll's group is presented as a group of transformations in a 5-dimensional
space ($\mathcal{C}$) obtained by embedding the Euclidean space into a
(4,1)-de Sitter space. Three of the five dimensions of $\mathcal{C}$ are
related to $\mathcal{R}^3$, and the other two to mass and time. A covariant
formulation of Carroll's group, analogous to that introduced by Takahashi for
Galilei's group, is deduced. Unitary representations are studied.
|
In this paper, we consider the spectral dependences of transverse
electromagnetic waves generated in solar plasma by the coalescence of Langmuir
waves. It is shown that different spectra of Langmuir waves lead to
characteristic types of transverse electromagnetic wave spectra, which makes it
possible to diagnose the features of the spectra of Langmuir waves generated in
solar plasma.
|
A duality between an electrostatic problem in a three dimensional world and a
quantum mechanical problem in a one dimensional world which allows one to
obtain the ground state solution of the Schr\"odinger equation by using
electrostatic results is generalized to three dimensions. Here, it is
demonstrated that the same transformation technique is also applicable to the
s-wave Schr\"odinger equation in three dimensions for central potentials. This
approach leads to a general relationship between the electrostatic potential
and the s-wave function, and between the electric energy density and the
quantum mechanical energy.
|
The past year has witnessed the rapid development of applying the Transformer
module to vision problems. While some researchers have demonstrated that
Transformer-based models enjoy a favorable ability to fit data, there is
still a growing body of evidence showing that these models suffer from
over-fitting, especially when the training data is limited. This paper offers
an empirical study by performing step-by-step operations to gradually
transition a Transformer-based model into a convolution-based model. The
results we obtain
during the transition process deliver useful messages for improving visual
recognition. Based on these observations, we propose a new architecture named
Visformer, which is abbreviated from the `Vision-friendly Transformer'. With
the same computational complexity, Visformer outperforms both the
Transformer-based and convolution-based models in terms of ImageNet
classification accuracy, and the advantage becomes more significant when the
model complexity is lower or the training set is smaller. The code is available
at https://github.com/danczs/Visformer.
|
We employ various quantum-mechanical approaches for studying the impact of
electric fields on both nonretarded and retarded noncovalent interactions
between atoms or molecules. To this end, we apply perturbative and
non-perturbative methods within the frameworks of quantum mechanics (QM) as
well as quantum electrodynamics (QED). In addition, to provide a transparent
physical picture of the different types of resulting interactions, we employ a
stochastic electrodynamic approach based on the zero-point fluctuating field.
Atomic response properties are described via harmonic Drude oscillators - an
efficient model system that permits an analytical solution and has been
convincingly shown to yield accurate results when modeling non-retarded
intermolecular interactions. The obtained intermolecular energy contributions
are classified as field-induced (FI) electrostatics, FI polarization, and
dispersion interactions. The interplay between these three types of
interactions enables the manipulation of molecular dimer conformations by
applying transversal or longitudinal electric fields along the intermolecular
axis. Our framework combining four complementary theoretical approaches paves
the way toward a systematic description and improved understanding of molecular
interactions when molecules are subject to both external and vacuum fields.
|
In this short paper we recall the (Garfield) Impact Factor of a journal, we
improve and extend it, and eventually we present the Total Impact Factor,
which reflects the impact of a journal most accurately.
|
This Letter capitalizes on a unique set of total solar eclipse observations,
acquired between 2006 and 2020, in white light, Fe XI 789.2 nm ($\rm T_{fexi}$
= $1.2 \pm 0.1$ MK) and Fe XIV 530.3 nm ($\rm T_{fexiv}$ = $ 1.8 \pm 0.1$ MK)
emission, complemented by in situ Fe charge state and proton speed measurements
from ACE/SWEPAM-SWICS, to identify the source regions of different solar wind
streams. The eclipse observations reveal the ubiquity of open structures,
invariably associated with Fe XI emission from $\rm Fe^{10+}$, hence a constant
electron temperature, $\rm T_{c}$ = $\rm T_{fexi}$, in the expanding corona.
The in situ Fe charge states are found to cluster around $\rm Fe^{10+}$,
independently of the 300 to 700 km $\rm s^{-1}$ stream speeds, referred to as
the continual solar wind. $\rm Fe^{10+}$ thus yields the fiducial link between
the continual solar wind and its $\rm T_{fexi}$ sources at the Sun. While the
spatial distribution of Fe XIV emission, from $\rm Fe^{13+}$, associated with
streamers, changes throughout the solar cycle, the sporadic appearance of
charge states $> \rm Fe^{11+}$, in situ, exhibits no cycle dependence
regardless of speed. These latter streams are conjectured to be released from
hot coronal plasmas at temperatures $\ge \rm T_{fexiv}$ within the bulge of
streamers and from active regions, driven by the dynamic behavior of
prominences magnetically linked to them. The discovery of continual streams of
slow, intermediate and fast solar wind, characterized by the same $\rm
T_{fexi}$ in the expanding corona, places new constraints on the physical
processes shaping the solar wind.
|
Our goal is to estimate the star formation main sequence (SFMS) and the star
formation rate density (SFRD) at z <= 0.017 (d < 75 Mpc) using the Javalambre
Photometric Local Universe Survey (J-PLUS) first data release, that probes
897.4 deg^2 with twelve optical bands. We extract the Halpha emission flux of
805 local galaxies from the J-PLUS filter J0660, with the continuum level
estimated from the other eleven J-PLUS bands, and the dust attenuation and
nitrogen contamination corrected with empirical relations. Stellar masses (M),
Halpha luminosities (L), and star formation rates (SFRs) were estimated by
accounting for parameter covariances. Our sample comprises 689 blue galaxies
and 67 red galaxies, classified in the (u-g) vs (g-z) color-color diagram, plus
49 AGN. The SFMS is explored at log M > 8 and it is clearly defined by the blue
galaxies, with the red galaxies located below them. The SFMS is described as
log SFR = 0.83 log M - 8.44. We find a good agreement with previous estimations
of the SFMS, especially those based on integral field spectroscopy. The Halpha
luminosity function of the AGN-free sample is well described by a Schechter
function with log L* = 41.34, log phi* = -2.43, and alpha = -1.25. Our
measurements provide a lower characteristic luminosity than several previous
studies in the literature. The derived star formation rate density at d < 75
Mpc is log rho_SFR = -2.10 +- 0.11, with red galaxies accounting for 15% of the
SFRD. Our value is lower than previous estimations at similar redshift, and
provides a local reference for evolutionary studies regarding the star
formation history of the Universe.
|
The capacitated vehicle routing problem (CVRP) is one of the most common
optimization problems today, considering the wide usage of routing algorithms
in multiple fields such as transportation, food delivery, and network routing.
The capacitated vehicle routing problem is classified as an NP-hard problem,
hence standard exact optimization algorithms cannot solve it efficiently at
scale. In our paper, we discuss a new way to solve the mentioned problem, using
a recursive approach based on the well-known clustering algorithm K-Means, the
well-known shortest-path algorithm of Dijkstra, and some mathematical
operations. In this paper, we will show how to implement those methods together
in order to approximate the optimal route. Since research and development are
still ongoing, this paper may be extended by a follow-up covering the
implementation results of this theoretical work.
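A minimal sketch of the cluster-first idea described above, assuming Euclidean
customer coordinates and a single vehicle capacity; the binary K-Means
recursion and its stopping rule are illustrative choices, not necessarily the
authors' exact algorithm:

```python
# Recursively split customers with K-Means until each cluster's total
# demand fits one vehicle; each final cluster becomes one route, whose
# stop order can then be refined with a shortest-path (Dijkstra) pass.
import numpy as np
from sklearn.cluster import KMeans

def cluster_customers(points, demands, capacity):
    if demands.sum() <= capacity or len(points) <= 1:
        return [np.arange(len(points))]
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(points)
    routes = []
    for k in (0, 1):
        idx = np.flatnonzero(labels == k)
        for sub in cluster_customers(points[idx], demands[idx], capacity):
            routes.append(idx[sub])          # map back to parent indices
    return routes

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(30, 2))      # 30 customers in a 100x100 area
dem = rng.integers(1, 10, size=30)           # customer demands
print(cluster_customers(pts, dem, capacity=40))
```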
|
This paper explores Google's Edge TPU for implementing a practical network
intrusion detection system (NIDS) at the edge of IoT, based on a deep learning
approach. While there are a significant number of related works that explore
machine learning based NIDS for the IoT edge, they generally do not consider
the issue of the required computational and energy resources. The focus of this
paper is the exploration of deep learning-based NIDS at the edge of IoT, with
a particular focus on computational and energy efficiency. The paper studies
Google's Edge TPU as a hardware platform and considers the following three key
metrics: computation (inference) time, energy efficiency, and the
traffic classification performance. Various scaled model sizes of two major
deep neural network architectures are used to investigate these three metrics.
The performance of the Edge TPU-based implementation is compared with that of
an energy efficient embedded CPU (ARM Cortex A53). Our experimental evaluation
shows some unexpected results, such as the fact that the CPU significantly
outperforms the Edge TPU for small model sizes.
|
We study the nonlinear stability of plane Couette and Poiseuille flows with
the Lyapunov second method by using the classical L2-energy. We prove that the
streamwise perturbations are L2-energy stable for any Reynolds number. This
contradicts the results of Joseph [10], Joseph and Carmi [12] and Busse [4],
and allows us to prove that the critical nonlinear Reynolds numbers are
obtained along two-dimensional perturbations, the spanwise perturbations, as
Orr [16] had supposed. This conclusion combined with recent results by
Falsaperla et al. [8] on the stability with respect to tilted rolls, provides a
possible solution to the Couette-Sommerfeld paradox.
|
We investigate the production of photons from coherently oscillating,
spatially localized clumps of axionic fields (oscillons and axion stars) in the
presence of external electromagnetic fields. We delineate different qualitative
behavior of the photon luminosity in terms of an effective dimensionless
coupling parameter constructed out of the axion-photon coupling, and field
amplitude, oscillation frequency and radius of the axion star. For small values
of this dimensionless coupling, we provide a general analytic formula for the
dipole radiation field and the photon luminosity per solid angle, including a
strong dependence on the radius of the configuration. For moderate to large
coupling, we report on a non-monotonic behavior of the luminosity with the
coupling strength in the presence of external magnetic fields. After an initial
rise in luminosity with the coupling strength, we see a suppression (by an
order of magnitude or more compared to the dipole radiation approximation) at
moderately large coupling. At sufficiently large coupling, we find a transition
to a regime of exponential growth of the luminosity due to parametric
resonance. We carry out 3+1 dimensional lattice simulations of axion
electrodynamics, at small and large coupling, including non-perturbative
effects of parametric resonance as well as backreaction effects when necessary.
We also discuss medium (plasma) effects that lead to resonant axion to photon
conversion, relevance of the coherence of the soliton, and implications of our
results in astrophysical and cosmological settings.
|
Recent studies in three dimensional spintronics propose that the \OE rsted
field plays a significant role in cylindrical nanowires. However, there is no
direct report of its impact on magnetic textures. Here, we use time-resolved
scanning transmission X-ray microscopy to image the dynamic response of
magnetization in cylindrical Co$_{30}$Ni$_{70}$ nanowires subjected to
nanosecond \OE rsted field pulses. We observe the tilting of longitudinally
magnetized domains towards the azimuthal \OE rsted field direction and create a
robust model to reproduce the differential magnetic contrasts and extract the
angle of tilt. Further, we report the compression and expansion, or breathing,
of a Bloch-point domain wall that occurs when weak pulses of opposite sign
are applied. We expect this work to lay the foundation for, and provide an
incentive to, further studies of the complex and fascinating magnetization
dynamics in nanowires, especially the predicted ultra-fast domain wall motion
and the associated spin wave emissions.
|
Using an unquenched quark model, we predict a charmed-strange baryon state,
namely the $\Omega_{c0}^d(1P,1/2^-)$. Its mass is predicted to be 2945 MeV,
which is below the $\Xi_c\bar{K}$ threshold due to the nontrivial
coupled-channel effect. The $\Omega_{c0}^d(1P,1/2^-)$ state could therefore be
regarded as the analog of the charmed-strange meson $D_{s0}^*(2317)$. The
running Belle II experiment has a good opportunity to search for this state in
the $\Omega_c^{(*)}\gamma$ mass spectrum in the future.
|
We present a flexible discretization technique for computational models of
thin tubular networks embedded in a bulk domain, for example a porous medium.
These systems occur in the simulation of fluid flow in vascularized biological
tissue, root water and nutrient uptake in soil, hydrological or petroleum wells
in rock formations, or heat transport in micro-cooling devices. The key
processes, such as heat and mass transfer, are usually dominated by the
exchange between the network system and the embedding domain. By explicitly
resolving the interface between these domains with the computational mesh, we
can accurately describe these processes. The network itself is efficiently
described by a set of line segments. Coupling terms are evaluated by projection
of
the interface variables. The new method is naturally applicable for nonlinear
and time-dependent problems and can therefore be used as a reference method in
the development of novel implicit interface 1D-3D methods and in the design of
verification benchmarks for embedded tubular network methods. Implicit
interface methods, which do not resolve the bulk-network interface explicitly,
have proven to be very efficient but have only been mathematically analyzed for
linear elliptic problems so far. Using two application scenarios, fluid
perfusion of
vascularized tissue and root water uptake from soil, we investigate the effect
of some common modeling assumptions of implicit interface methods numerically.
|
In this paper a family of non-autonomous scalar parabolic PDEs over a general
compact and connected flow is considered. The existence or not of a
neighbourhood of zero where the problems are linear has an influence on the
methods used and on the dynamics of the induced skew-product semiflow. That is
why two cases are distinguished: linear-dissipative and purely dissipative
problems. In both cases, the structure of the global and pullback attractors is
studied using principal spectral theory. Besides, in the purely dissipative
setting, a simple condition is given, involving both the underlying linear
dynamics and some properties of the nonlinear term, to determine the nontrivial
sections of the attractor.
|
In this paper we study the variety of one dimensional representations of a
finite $W$-algebra attached to a classical Lie algebra, giving a precise
description of the dimensions of the irreducible components. We apply this to
prove a conjecture of Losev describing the image of his orbit method map. In
order to do so we first establish new Yangian-type presentations of
semiclassical limits of the $W$-algebras attached to distinguished nilpotent
elements in classical Lie algebras, using Dirac reduction.
|
We examine the possibility that dark matter (DM) consists of a gapped
continuum, rather than ordinary particles. A Weakly-Interacting Continuum (WIC)
model, coupled to the Standard Model via a Z-portal, provides an explicit
realization of this idea. The thermal DM relic density in this model is
naturally consistent with observations, providing a continuum counterpart of
the "WIMP miracle". Direct detection cross sections are strongly suppressed
compared to ordinary Z-portal WIMP, thanks to a unique effect of the continuum
kinematics. Continuum DM states decay throughout the history of the universe,
and observations of cosmic microwave background place constraints on potential
late decays. Production of WICs at colliders can provide a striking
cascade-decay signature. We show that a simple Z-portal WIC model with the gap
scale between 40 and 110 GeV provides a fully viable DM candidate consistent
with all current experimental constraints.
|
We propose a new method with $\mathcal{L}_2$ distance that maps one
$N$-dimensional distribution to another, taking into account available
information about correspondences. We solve the high-dimensional problem in 1D
space using an iterative projection approach. To show the potential of this
mapping, we apply it to colour transfer between two images that exhibit
overlapping scenes. Experiments show quantitatively and qualitatively
competitive results compared with state-of-the-art colour transfer methods.
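A minimal sketch of the iterative 1D projection idea, assuming point sets of
equal size and random projection directions; the step size and iteration count
are illustrative, not the authors' exact scheme:

```python
# Iteratively match 1D projections of two N-dimensional point clouds:
# project both onto a random direction, solve the 1D transport problem
# by sorting, and move the source points along that direction.
import numpy as np

def iterative_projection_transfer(src, tgt, n_iter=200, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = src.astype(float).copy()
    for _ in range(n_iter):
        u = rng.normal(size=x.shape[1])
        u /= np.linalg.norm(u)                      # random unit direction
        px, pt = x @ u, tgt @ u                     # 1D projections
        order_x, order_t = np.argsort(px), np.argsort(pt)
        disp = np.empty_like(px)
        disp[order_x] = pt[order_t] - px[order_x]   # sorted 1D matching
        x += step * disp[:, None] * u               # move along the direction
    return x
```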
|
We propose an optimization scheme for ground-state cooling of a mechanical
mode by coupling to a general three-level system. We formulate the optimization
scheme, using the master equation approach, over a broad range of system
parameters including detunings, decay rates, coupling strengths, and pumping
rate. We implement the optimization scheme on three physical systems: a
colloidal quantum dot coupled to its confined phonon mode, a polariton coupled
to a mechanical resonator mode, and a coupled-cavity system coupled to a
mechanical resonator mode. These three physical systems span a broad range of
mechanical mode frequencies, coupling rates, and decay rates. Our optimization
scheme lowers the steady-state phonon number in all three cases by orders of
magnitude. We also calculate the net cooling rate by estimating the phonon
decay rate and show that the optimized system parameters also result in
efficient cooling. The proposed optimization scheme can be readily extended to
any generic driven three-level system coupled to a mechanical mode.
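A minimal sketch, using QuTiP, of the kind of master-equation computation
described above: a pumped three-level system coupled to a mechanical mode, with
the steady-state phonon number as the figure of merit to be minimized over the
system parameters. The Hamiltonian form, decay rates, and all parameter values
below are illustrative assumptions, not those of the paper:

```python
# Steady-state phonon number of a mechanical mode coupled to a driven
# three-level system (levels |0>, |1>, |2>), truncated at N phonons.
import qutip as qt

N = 10                                            # phonon Fock-space cutoff
b = qt.tensor(qt.qeye(3), qt.destroy(N))          # mechanical mode
sig = lambda i, j: qt.tensor(qt.basis(3, i) * qt.basis(3, j).dag(), qt.qeye(N))

wm, g, Om, Delta = 1.0, 0.1, 0.3, -1.0            # mode freq, coupling, pump, detuning
H = (wm * b.dag() * b
     + Delta * sig(2, 2)
     + g * (b + b.dag()) * sig(1, 1)              # phonon-dressed level |1>
     + Om * (sig(0, 2) + sig(2, 0)))              # coherent pump on |0>-|2>

c_ops = [qt.tensor(qt.qeye(3), 0.05**0.5 * qt.destroy(N)),  # mechanical damping
         0.5**0.5 * sig(0, 1),                    # decay |1> -> |0>
         1.0**0.5 * sig(1, 2)]                    # decay |2> -> |1>

rho_ss = qt.steadystate(H, c_ops)
print(qt.expect(b.dag() * b, rho_ss))             # steady-state phonon number
```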
|
Experimental studies of high-purity kagome-lattice antiferromagnets (KAFM)
are of great importance in attempting to better understand the predicted
enigmatic quantum spin-liquid ground state of the KAFM model. However,
realizations of this model can rarely evade magnetic ordering at low
temperatures due to various perturbations to its dominant isotropic exchange
interactions. Such a situation is for example encountered due to sizable
Dzyaloshinskii-Moriya magnetic anisotropy in YCu$_3$(OH)$_6$Cl$_3$, which
stands out from other KAFM materials by its perfect crystal structure. We find
evidence of magnetic ordering also in the distorted sibling compound
Y$_3$Cu$_9$(OH)$_{18}$[Cl$_8$(OH)], which has recently been proposed to feature
a spin-liquid ground state arising from a spatially anisotropic kagome lattice.
Our findings are based on a combination of bulk susceptibility, specific heat,
and magnetic torque measurements that disclose a N\'eel transition temperature
of $T_N=11$~K in this material, which might feature a coexistence of magnetic
order and persistent spin dynamics as previously found in
YCu$_3$(OH)$_6$Cl$_3$. Contrary to previous studies of single crystals and
powders containing impurity inclusions, we use high-purity single crystals of
Y$_3$Cu$_9$(OH)$_{18}$[Cl$_8$(OH)] grown via an optimized hydrothermal
synthesis route that minimizes such inclusions. This study thus demonstrates
that the lack of magnetic ordering in less pure samples of the investigated
compound does not originate from the reduced symmetry of the spin lattice but
is instead of extrinsic origin.
|
In the present work, we explore analytically and numerically the co-existence
and interactions of ring dark solitons (RDSs) with other RDSs, as well as with
vortices. The azimuthal instabilities of the rings are explored via the
so-called filament method. As a result of their nonlinear interaction, the
vortices are found to play a stabilizing role on the rings, yet their effect is
not sufficient to offer complete stabilization of RDSs. Nevertheless, complete
stabilization of the relevant configuration can be achieved by the presence of
external ring-shaped barrier potentials. Interactions of multiple rings are
also explored, and their equilibrium positions (as a result of their own
curvature and their tail-tail interactions) are identified. In this case too,
stabilization is achieved via multi-ring external barrier potentials.
|
To explain X-ray spectra of active galactic nuclei (AGN), non-thermal
activity in AGN coronae, such as pair cascade models, has been extensively
discussed in the literature. Although X-ray and gamma-ray observations in
the 1990s disfavored such pair cascade models, recent millimeter-wave
observations of nearby Seyferts establish the existence of weak non-thermal
coronal activity. In addition, the IceCube collaboration reported NGC 1068, a
nearby Seyfert, as the hottest spot in their 10-yr survey. These pieces of
evidence are enough to investigate the non-thermal perspective of AGN coronae
in depth again. This article summarizes our current observational
understandings of AGN coronae and describes how AGN coronae generate
high-energy particles. We also provide ways to test the AGN corona model with
radio, X-ray, MeV gamma-ray, and high-energy neutrino observations.
|
We present a method for contraction-based feedback motion planning of locally
incrementally exponentially stabilizable systems with unknown dynamics that
provides probabilistic safety and reachability guarantees. Given a dynamics
dataset, our method learns a deep control-affine approximation of the dynamics.
To find a trusted domain where this model can be used for planning, we obtain
an estimate of the Lipschitz constant of the model error, which is valid with a
given probability, in a region around the training data, providing a local,
spatially-varying model error bound. We derive a trajectory tracking error
bound for a contraction-based controller that is subjected to this model error,
and then learn a controller that optimizes this tracking bound. With a given
probability, we verify the correctness of the controller and tracking error
bound in the trusted domain. We then use the trajectory error bound together
with the trusted domain to guide a sampling-based planner to return
trajectories that can be robustly tracked in execution. We show results on a 4D
car, a 6D quadrotor, and a 22D deformable object manipulation task, showing our
method plans safely with learned models of high-dimensional underactuated
systems, while baselines that plan without considering the tracking error bound
or the trusted domain can fail to stabilize the system and become unsafe.
|
Fairness-aware machine learning for multiple protected attributes (referred
to as multi-fairness hereafter) is receiving increasing attention, as
traditional single-protected-attribute approaches cannot ensure fairness
w.r.t. other protected attributes. Existing methods, however, still ignore the
fact that datasets in this domain are often imbalanced, leading to unfair
decisions towards the minority class. Thus, solutions are needed that achieve
multi-fairness, accurate predictive performance overall, and balanced
performance across the different classes. To this end, we introduce a new
fairness notion, Multi-Max Mistreatment (MMM), which measures unfairness while
considering both the (multi-attribute) protected group and the class
membership of instances. To learn an MMM-fair classifier, we propose a
multi-objective problem formulation. We solve the problem using a boosting
approach that, in training, incorporates multi-fairness treatment in the
distribution update and, post training, finds multiple Pareto-optimal
solutions; it then uses pseudo-weight-based decision making to select optimal
solution(s) among accurate, balanced, and multi-attribute fair solutions.
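A minimal sketch of a fairness-aware distribution update of the kind described
above, written as an AdaBoost-style reweighting with an extra multiplicative
factor for instances in currently mistreated (protected-group, class) cells;
the factor gamma and the mistreatment mask are illustrative stand-ins for the
MMM criterion, not the authors' exact update:

```python
# One boosting round: standard exponential reweighting plus a
# fairness term that up-weights instances in mistreated cells.
import numpy as np

def boosting_round(w, y, y_pred, mistreated, alpha, gamma=0.5):
    """w: instance weights; y, y_pred in {-1, +1};
    mistreated: bool mask of instances in (protected-group, class)
    cells currently flagged as mistreated."""
    w = w * np.exp(-alpha * y * y_pred)      # standard AdaBoost update
    w[mistreated] *= np.exp(gamma)           # multi-fairness treatment
    return w / w.sum()
```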
|
This paper proposes a set of techniques to investigate eye gaze and fixation
patterns while users interact with electronic user interfaces. In particular,
two case studies are presented - one on analysing eye gaze while interacting
with deceptive materials in web pages and another on analysing graphs in
standard computer monitor and virtual reality displays. We analysed spatial and
temporal distributions of eye gaze fixations and sequence of eye gaze
movements. We used this information to propose new design guidelines to avoid
deceptive materials on the web and user-friendly representations of data in 2D
graphs. In the 2D graph study, we identified that the area graph has the lowest
number of clusters for users' gaze fixations and the lowest average response
time. The results of the 2D graph study were implemented in virtual and mixed
reality environments. Along with this, it was observed that the duration of
interaction with deceptive materials in web pages is independent of the number
of fixations. Furthermore, a web-based data visualization tool for analysing
eye tracking data from single and multiple users was developed.
|
We analyze the top Lyapunov exponent of the product of sequences of two by
two matrices that appears in the analysis of several statistical mechanics
models with disorder: for example these matrices are the transfer matrices for
the nearest neighbor Ising chain with random external field, and the free
energy density of this Ising chain is the Lyapunov exponent we consider. We
obtain the sharp behavior of this exponent in the large interaction limit when
the external field is centered: this balanced case turns out to be critical in
many respects. From a mathematical standpoint we precisely identify the
behavior of the top Lyapunov exponent of a product of two dimensional random
matrices close to a diagonal random matrix for which top and bottom Lyapunov
exponents coincide. In particular, the Lyapunov exponent is only
$\log$-H\"older continuous.
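A minimal Monte Carlo sketch of the quantity studied above: the top Lyapunov
exponent of a product of i.i.d. 2x2 transfer matrices for a nearest-neighbour
Ising chain with a centered binary random field. The matrix form
$T(s,s')=\exp\{\beta(Jss'+hs)\}$ is the standard one; the values of beta, J
and h0 are illustrative choices:

```python
# Estimate the top Lyapunov exponent by multiplying random transfer
# matrices and renormalizing a test vector at each step.
import numpy as np

rng = np.random.default_rng(0)
beta, J, h0, N = 1.0, 1.0, 0.5, 200_000

v, lyap = np.array([1.0, 1.0]), 0.0
for _ in range(N):
    h = h0 * rng.choice([-1.0, 1.0])          # centered random field
    T = np.array([[np.exp(beta * ( J + h)), np.exp(beta * (-J + h))],
                  [np.exp(beta * (-J - h)), np.exp(beta * ( J - h))]])
    v = T @ v
    norm = np.linalg.norm(v)
    lyap += np.log(norm)
    v /= norm

print(lyap / N)   # top Lyapunov exponent, proportional to the free energy density
```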
|
The favourable properties of tungsten borides for shielding the central High
Temperature Superconductor (HTS) core of a spherical tokamak fusion power plant
are modelled using the MCNP code. The objectives are to minimize the power
deposition into the cooled HTS core, and to keep HTS radiation damage to
acceptable levels by limiting the neutron and gamma fluxes. The shield
materials compared are W2B, WB, W2B5 and WB4 along with a reactively sintered
boride B0.329C0.074Cr0.024Fe0.274W0.299, monolithic W, and WC. Of all these,
W2B5 gave the most favourable results, with a factor of ~10 or greater
reduction in neutron flux and gamma energy deposition as compared to monolithic
W. These
results are compared with layered water-cooled shields, showing that the
monolithic shields, with moderating boron, gave comparable neutron flux and
power deposition, and (in the case of W2B5) even better performance. Good
performance without water-coolant has advantages from a reactor safety
perspective due to the risks associated with radio-activation of oxygen. 10B
isotope concentrations between 0 and 100% are considered for the boride
shields. The naturally occurring 20% fraction gave much lower energy
depositions than the 0% fraction, but the improvement largely saturated beyond
40%. Thermophysical properties of the candidate materials are discussed, in
particular the thermal strain. To our knowledge, the performance of W2B5 is
unrivalled by other monolithic shielding materials. This is partly because its
trigonal crystal structure gives it a higher atomic density than other
borides. It is also suggested that its high performance depends on it having
just high enough 10B content to maintain a constant neutron energy spectrum
across the shield.
|
We show that the action of the Kauffman bracket skein algebra of a surface
$\Sigma$ on the skein module of the handlebody bounded by $\Sigma$ is faithful
if and only if the quantum parameter is not a root of unity.
|
Shared Memory is a mechanism that allows several processes to communicate
with each other by accessing -- writing or reading -- a set of variables that
they have in common. A Consistency Model defines how each process observes the
state of the Memory, according to the accesses performed by it and by the rest
of the processes in the system. Therefore, it determines what value a read
returns when a given process issues it. This implies that there must be an
agreement among all, or among processes in different subsets, on the order in
which all or a subset of the accesses happened. It is clear that a higher
quantity of accesses or proceses taking part in the agreement makes it possibly
harder or slower to be achieved. This is the main reason for which a number of
Consistency Models for Shared Memory have been introduced. This paper is a
handy summary of [2] and [3] where consistency models (Sequential, Causal,
PRAM, Cache, Processors, Slow), including synchronized ones (Weak, Release,
Entry), were formally defined. This provides a better understanding of those
models and a way to reason and compare them through a concise notation. There
are many papers on this subject in the literature such as [11] with which this
work shares some concepts.
|
Applying an operator product expansion approach we update the Standard Model
prediction of the $B_c$ lifetime from over 20 years ago. The non-perturbative
velocity expansion is carried out up to third order in the relative velocity of
the heavy quarks. The scheme dependence is studied using three different mass
schemes for the $\bar b$ and $c$ quarks, resulting in three different values
consistent with each other and with experiment. Special focus is placed on
renormalon cancellation in the computation. Uncertainties resulting from scale
dependence, neglecting the strange quark mass, non-perturbative matrix elements
and parametric uncertainties are discussed in detail. The resulting
uncertainties are still rather large compared to the experimental ones, and
therefore do not allow for clear-cut conclusions concerning New Physics effects
in the $B_c$ decay.
|
The Helmholtz equation in one dimension, which describes the propagation of
electromagnetic waves in effectively one-dimensional systems, is equivalent to
the time-independent Schr\"odinger equation. The fact that the potential term
entering the latter is energy-dependent obstructs the application of the
results on low-energy quantum scattering in the study of the low-frequency
waves satisfying the Helmholtz equation. We use a recently developed dynamical
formulation of stationary scattering to offer a comprehensive treatment of the
low-frequency scattering of these waves for a general finite-range scatterer.
In particular, we give explicit formulas for the coefficients of the
low-frequency series expansion of the transfer matrix of the system which in
turn allow for determining the low-frequency expansions of its reflection,
transmission, and absorption coefficients. Our general results reveal a number
of interesting physical aspects of low-frequency scattering particularly in
relation to permittivity profiles having balanced gain and loss.
|
We establish the dual equivalence of the category of (potentially nonunital)
operator systems and the category of pointed compact nc (noncommutative) convex
sets, extending a result of Davidson and the first author. We then apply this
dual equivalence to establish a number of results about operator systems, some
of which are new even in the unital setting.
For example, we show that the maximal and minimal C*-covers of an operator
system can be realized in terms of the C*-algebra of continuous nc functions on
its nc quasistate space, clarifying recent results of Connes and van Suijlekom.
We also characterize "C*-simple" operator systems, i.e. operator systems with
simple minimal C*-cover, in terms of their nc quasistate spaces.
We develop a theory of quotients of operator systems that extends the theory
of quotients of unital operator algebras. In addition, we extend results of the
first author and Shamovich relating to nc Choquet simplices. We show that an
operator system is a C*-algebra if and only if its nc quasistate space is an nc
Bauer simplex with zero as an extreme point, and we show that a second
countable locally compact group has Kazhdan's property (T) if and only if for
every action of the group on a C*-algebra, the set of invariant quasistates is
the quasistate space of a C*-algebra.
|
With the growing use of camera devices, the industry has many image datasets
that provide more opportunities for collaboration between the machine learning
community and industry. However, the sensitive information in the datasets
discourages data owners from releasing these datasets. Despite recent research
devoted to removing sensitive information from images, existing methods provide
neither a meaningful privacy-utility trade-off nor provable privacy guarantees.
In this study, with the consideration of perceptual similarity, we propose
perceptual indistinguishability (PI) as a formal privacy notion particularly
for images. We also propose PI-Net, a privacy-preserving mechanism that
achieves image obfuscation with PI guarantee. Our study shows that PI-Net
achieves a significantly better privacy-utility trade-off through public image
data.
|
The backup control barrier function (CBF) was recently proposed as a
tractable formulation that guarantees the feasibility of the CBF quadratic
program (QP) via an implicitly defined control invariant set. The control
invariant set is based on a fixed backup policy and evaluated online by forward
integrating the dynamics under the backup policy. This paper is intended as a
tutorial on the backup CBF approach and a comparative study against some
benchmarks.
First, the backup CBF approach is presented step by step with the underlying
math explained in detail. Second, we prove that the backup CBF always has a
relative degree 1 under mild assumptions. Third, the backup CBF approach is
compared with benchmarks such as Hamilton-Jacobi PDEs and Sum-of-Squares on the
computation of control invariant sets, which shows that one can obtain a
control invariant set close to the maximum control invariant set under a good
backup policy for many practical problems.
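For orientation, the generic CBF quadratic program whose feasibility the
backup construction guarantees, in its standard form (here $h$ is the
implicitly defined backup barrier function, $u_{\mathrm{des}}$ a nominal input,
and $\alpha$ an extended class-$\mathcal{K}$ function):
\begin{equation}
u^*(x) \;=\; \arg\min_{u\in\mathcal{U}} \;\|u-u_{\mathrm{des}}(x)\|^2
\quad \text{s.t.} \quad
\nabla h(x)^{\top}\bigl(f(x)+g(x)\,u\bigr) \;\ge\; -\alpha\bigl(h(x)\bigr).
\end{equation}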
|
Complex fluids flow in complex ways in complex structures. Transport of water
and various organic and inorganic molecules in the central nervous system is
important in a wide range of biological and medical processes [C. Nicholson,
and S. Hrab\v{e}tov\'a, Biophysical Journal, 113(10), 2133(2017)]. However, the
exact driving mechanisms are often not known. In this paper, we investigate
flows induced by action potentials in an optic nerve as a prototype of the
central nervous system (CNS). Different from traditional fluid dynamics
problems, flows in biological tissues such as the CNS are coupled with ion
transport. The flow is driven by osmosis created by the concentration gradients
of ionic solutions, which in turn influence the transport of ions. Our
mathematical
model is based on the known structural and biophysical properties of the
experimental system used by the Harvard group Orkand et al [R.K. Orkand, J.G.
Nicholls, S.W. Kuffler, Journal of Neurophysiology, 29(4), 788(1966)].
Asymptotic analysis and numerical computation show the significant role of
water in convective ion transport. The full model (including water) and the
electrodiffusion model (excluding water) are compared in detail to reveal an
interesting interplay between water and ion transport. In the full model,
convection due to water flow dominates inside the glial domain. This water flow
in the glia contributes significantly to the spatial buffering of potassium in
the extracellular space. Convection in the extracellular domain does not
contribute significantly to spatial buffering. Electrodiffusion is the dominant
mechanism for flows confined to the extracellular domain.
|
A (charged) rotating black hole may be unstable against a (charged) massive
scalar field perturbation due to the existence of superradiance modes. The
stability property depends on the parameters of the system. In this paper, the
superradiant stable parameter space is studied for the four-dimensional
extremal Kerr and Kerr-Newman black holes under massive and charged massive
scalar perturbation. For the extremal Kerr case, it is found that when the
angular frequency and proper mass of the scalar perturbation satisfy the
inequality $\omega<\mu/\sqrt{3}$, the extremal Kerr black hole and scalar
perturbation system is superradiantly stable. For the Kerr-Newman black hole
case, when the angular frequency of the scalar perturbation satisfies
$\omega<qQ/M$ and the product of the mass-to-charge ratios of the black hole
and scalar perturbation satisfies $\frac{\mu}{q}\frac{M}{Q} > \frac{\sqrt{3
k^2+2} }{ \sqrt{k^2+2} },~k=\frac{a}{M}$, the extremal Kerr-Newman black hole
is superradiantly stable under charged massive scalar perturbation.
|
Multi-domain image-to-image translation with conditional Generative
Adversarial Networks (GANs) can generate highly photo-realistic images with
desired target classes, yet these synthetic images have not always been helpful
to improve downstream supervised tasks such as image classification. Improving
downstream tasks with synthetic examples requires generating images with high
fidelity to the unknown conditional distribution of the target class, which
many labeled conditional GANs attempt to achieve by adding soft-max
cross-entropy loss based auxiliary classifier in the discriminator. As recent
studies suggest that the soft-max loss in Euclidean space of deep feature does
not leverage their intrinsic angular distribution, we propose to replace this
loss in auxiliary classifier with an additive angular margin (AAM) loss that
takes benefit of the intrinsic angular distribution, and promotes intra-class
compactness and inter-class separation to help generator synthesize high
fidelity images.
We validate our method on RaFD and CIFAR-100, two challenging face expression
and natural image classification data sets. Our method outperforms
state-of-the-art methods on several different evaluation criteria, including
the recently proposed GAN-train and GAN-test metrics designed to assess the
impact of synthetic data on downstream classification tasks, the usefulness in
data augmentation for supervised tasks measured by prediction accuracy and
average confidence scores, and the well-known FID metric.
|
We consider the problem of minimizing the supplied energy of
infinite-dimensional linear port-Hamiltonian systems and prove that optimal
trajectories exhibit the turnpike phenomenon towards certain subspaces induced
by the dissipation of the dynamics.
|
We investigate the complexity and performance of recurrent neural network
(RNN) models as post-processing units for the compensation of fibre
nonlinearities in digital coherent systems carrying polarization multiplexed
16-QAM and 32-QAM signals. We evaluate three bi-directional RNN models, namely
the bi-LSTM, bi-GRU and bi-Vanilla-RNN, and show that all of them are promising
nonlinearity compensators, especially in dispersion-unmanaged systems. Our
simulations show that during inference the three models provide similar
compensation performance; therefore, in real-life systems the simplest scheme
based on Vanilla-RNN units should be preferred. We compare the bi-Vanilla-RNN
with Volterra nonlinear equalizers and demonstrate its superiority both in
terms of
performance and complexity, thus highlighting that RNN processing is a very
promising pathway for the upgrade of long-haul optical communication systems
utilizing coherent detection.
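A minimal sketch of a bidirectional recurrent equalizer of the kind compared
above, written with tf.keras; the window length, feature layout (I and Q of two
polarizations) and layer sizes are illustrative assumptions, and swapping
tf.keras.layers.LSTM for GRU or SimpleRNN gives the bi-GRU and bi-Vanilla-RNN
variants:

```python
# Map a sliding window of received symbols to the compensated
# centre symbol of the window.
import tensorflow as tf

def make_birnn_equalizer(window=41, n_features=4, units=32):
    inputs = tf.keras.Input(shape=(window, n_features))
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units))(inputs)
    outputs = tf.keras.layers.Dense(n_features)(x)   # corrected I/Q values
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

model = make_birnn_equalizer()
model.summary()
```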
|
The aim of this paper is to show that almost greedy bases induce tighter
embeddings in superreflexive Banach spaces than in general Banach spaces. More
specifically, we show that an almost greedy basis in a superreflexive Banach
space $\mathbb{X}$ induces embeddings that allow squeezing $\mathbb{X}$ between
two superreflexive Lorentz sequence spaces that are close to each other in the
sense that they have the same fundamental function.
|
We provide a new degree bound on the weighted sum-of-squares (SOS)
polynomials for Putinar-Vasilescu's Positivstellensatz. This leads to another
Positivstellensatz saying that if $f$ is a polynomial of degree at most $2 d_f$
nonnegative on a semialgebraic set having nonempty interior defined by finitely
many polynomial inequalities $g_j(x)\ge 0$, $j=1,\dots,m$ with
$g_1:=L-\|x\|_2^2$ for some $L>0$, then there exist positive constants $\bar c$
and $c$ depending on $f,g_j$ such that for any $\varepsilon>0$, for all $k\ge
\bar c \varepsilon^{-c}$, $f$ has the decomposition \begin{equation}
\begin{array}{l} (1+\|x\|_2^2)^k(f+\varepsilon)=\sigma_0+\sum_{j=1}^m
\sigma_jg_j \,, \end{array} \end{equation} for some SOS polynomials $\sigma_j$
being such that the degrees of $\sigma_0,\sigma_jg_j$ are at most $2(d_f+k)$.
Here $\|\cdot\|_2$ denotes the $\ell_2$ vector norm. As a consequence, we
obtain a converging hierarchy of semidefinite relaxations for lower bounds in
polynomial optimization on basic compact semialgebraic sets. The complexity of
this hierarchy is $\mathcal{O}(\varepsilon^{-c})$ for prescribed accuracy
$\varepsilon>0$. In particular, if $m=L=1$ then $c=65$, yielding the complexity
$\mathcal{O}(\varepsilon^{-65})$ for the minimization of a polynomial on the
unit ball. Our result improves the complexity bound
$\mathcal{O}(\exp(\varepsilon^{-c}))$ due to Nie and Schweighofer in [Journal
of Complexity 23.1 (2007): 135-150].
|
Let $\mathcal{F}\subset 2^{[n]}$ be a set family such that the intersection
of any two members of $\mathcal{F}$ has size divisible by $\ell$. The famous
Eventown theorem states that if $\ell=2$ then $|\mathcal{F}|\leq 2^{\lfloor
n/2\rfloor}$, and this bound can be achieved by, e.g., an `atomic'
construction, i.e. splitting the ground set into disjoint pairs and taking
their arbitrary unions. Similarly, splitting the ground set into disjoint sets
of size $\ell$ gives a family with pairwise intersections divisible by $\ell$
and size $2^{\lfloor n/\ell\rfloor}$. Yet, as was shown by Frankl and Odlyzko,
these families are far from maximal. For infinitely many $\ell$, they
constructed families $\mathcal{F}$ as above of size $2^{\Omega(n\log
\ell/\ell)}$. On the other hand, if the intersection of any number of sets in
$\mathcal{F}\subset 2^{[n]}$ has size divisible by $\ell$, then it is easy to
show that $|\mathcal{F}|\leq 2^{\lfloor n/\ell\rfloor}$. In 1983 Frankl and
Odlyzko conjectured that $|\mathcal{F}|\leq 2^{(1+o(1)) n/\ell}$ holds already
if one only requires that for some $k=k(\ell)$ any $k$ distinct members of
$\mathcal{F}$ have an intersection of size divisible by $\ell$. We completely
resolve this old conjecture in a strong form, showing that $|\mathcal{F}|\leq
2^{\lfloor n/\ell\rfloor}+O(1)$ if $k$ is chosen appropriately, and the $O(1)$
error term is not needed if (and only if) $\ell \, | \, n$, and $n$ is
sufficiently large. Moreover the only extremal configurations have `atomic'
structure as above. Our main tool, which might be of independent interest, is a
structure theorem for set systems with small 'doubling'.
|
Visual Object Tracking (VOT) can be seen as an extended task of Few-Shot
Learning (FSL). While the concept of FSL is not new in tracking and has been
previously applied by prior works, most of them are tailored to fit specific
types of FSL algorithms and may sacrifice running speed. In this work, we
propose a generalized two-stage framework that is capable of employing a large
variety of FSL algorithms while presenting faster adaptation speed. The first
stage uses a Siamese Regional Proposal Network to efficiently propose the
potential candidates and the second stage reformulates the task of classifying
these candidates to a few-shot classification problem. Following such a
coarse-to-fine pipeline, the first stage proposes informative sparse samples
for the second stage, where a large variety of FSL algorithms can be conducted
more conveniently and efficiently. As substantiation of the second stage, we
systematically investigate several forms of optimization-based few-shot
learners from previous works with different objective functions, optimization
methods, or solution space. Beyond that, our framework also entails a direct
application of the majority of other FSL algorithms to visual tracking,
enabling mutual communication between researchers on these two topics.
Extensive experiments on the major benchmarks, VOT2018, OTB2015, NFS, UAV123,
TrackingNet, and GOT-10k are conducted, demonstrating a desirable performance
gain and a real-time speed.
|
The fundamental processes by which nuclear energy is generated in the Sun
have been known for many years. However, continuous progress in areas such as
neutrino experiments, stellar spectroscopy and helioseismic data and techniques
requires ever more accurate and precise determination of nuclear reaction cross
sections, a fundamental physical input for solar models. In this work, we
review the current status of (standard) solar models and present a detailed
discussion on the relevance of nuclear reactions for detailed predictions of
solar properties. In addition, we also provide an analytical model that helps
understanding the relation between nuclear cross sections, neutrino fluxes and
the possibility they offer for determining physical characteristics of the
solar interior. The latter is of particular relevance in the context of the
conundrum posed by the solar composition, the solar abundance problem, and in
the light of the first ever direct detection of solar CN neutrinos recently
obtained by the Borexino collaboration. Finally, we present a short list of
wishes about the precision with which nuclear reaction rates should be
determined to allow for further progress in our understanding of the Sun.
|
We construct non-exact operator spaces satisfying the Weak Expectation
Property (WEP) and the Operator space version of the Local Lifting Property
(OLLP). These examples should be compared with the example we recently gave of
a $C^*$-algebra with WEP and LLP. The construction produces several new
analogues among operator spaces of the Gurarii space, extending Oikhberg's
previous work. Each of our "Gurarii operator spaces" is associated to a class
of finite dimensional operator spaces (with suitable properties). In each case
we show the space exists and is unique up to completely isometric isomorphism.
|
In this paper, we propose a novel graph convolutional network architecture,
Graph Stacked Hourglass Networks, for 2D-to-3D human pose estimation tasks. The
proposed architecture consists of repeated encoder-decoder stages, in which
graph-structured features are processed across three different scales of human
skeletal representations. This multi-scale architecture enables the model to
learn both local and global feature representations, which are critical for 3D
human pose estimation. We also introduce a multi-level feature learning
approach using different-depth intermediate features and show the performance
improvements that result from exploiting multi-scale, multi-level feature
representations. Extensive experiments are conducted to validate our approach,
and the results show that our model outperforms the state-of-the-art.
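
For readers unfamiliar with graph convolutions on skeletons, the minimal
sketch below shows a vanilla layer of the form X' = ReLU(A_hat X W); the
hourglass pooling/unpooling across skeletal scales described above is not
reproduced, and all sizes and names are illustrative.

```python
import torch
import torch.nn as nn

# Vanilla graph convolution over a skeleton graph (illustrative only).
class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        a = adj + torch.eye(adj.shape[0])               # add self-loops
        self.A_hat = a / a.sum(1, keepdim=True)         # row-normalized adjacency
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                               # x: (num_joints, in_dim)
        return torch.relu(self.A_hat @ self.lin(x))

adj = torch.zeros(16, 16)
adj[0, 1] = adj[1, 0] = 1.0                             # toy bone between joints 0-1
layer = GraphConv(2, 64, adj)                           # 2D keypoints in, features out
feats = layer(torch.randn(16, 2))                       # per-joint feature vectors
```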
|
The electronic bandstructure of a solid is a collection of allowed bands
separated by forbidden bands, revealing the geometric symmetry of the crystal
structures. Comprehensive knowledge of the bandstructure with band parameters
explains intrinsic physical, chemical and mechanical properties of the solid.
Here we report the artificial polaritonic bandstructures of two-dimensional
honeycomb lattices for microcavity exciton-polaritons using GaAs semiconductors
over a wide range of detuning values, from the cavity-photon-like (red-detuned)
to the exciton-like (blue-detuned) regime. In order to understand the experimental
bandstructures and their band parameters, such as gap energies, bandwidths,
hopping integrals and density of states, we establish a polariton band theory
within an augmented plane-wave method with two kinds of bosons: cavity photons
trapped at the lattice sites and freely moving excitons. In particular,
this two-kind-band theory is essential to elucidate the exciton
effect in the bandstructures of blue-detuned exciton-polaritons, where the
flattened exciton-like dispersion appears at larger in-plane momentum values
captured in our experimental access window. We reach excellent agreement
between theory and experiment at all detuning values.
|
We analyze the observed spatial, chemical and dynamical distributions of
local metal-poor stars, based on photometrically derived metallicity and
distance estimates along with proper motions from the Gaia mission. Along the
Galactic prime meridian, we identify stellar populations with distinct
properties in the metallicity versus rotational velocity space, including Gaia
Sausage/Enceladus (GSE), the metal-weak thick disk (MWTD), and the Splash
(sometimes referred to as the "in situ" halo). We model the observed
phase-space distributions using Gaussian mixtures and refine their positions
and fractional contributions as a function of distances from the Galactic plane
($|Z|$) and the Galactic center ($R_{\rm GC}$), providing a global perspective
of the major stellar populations in the local halo. Within the sample volume
($|Z|<6$ kpc), stars associated with GSE exhibit a larger proportion of
metal-poor stars at greater $R_{\rm GC}$ ($\Delta \langle{\rm[Fe/H]}\rangle
/\Delta R_{\rm GC} =-0.05\pm0.02$ dex kpc$^{-1}$). This observed trend, along
with a mild anticorrelation of the mean rotational velocity with metallicity
($\Delta \langle v_\phi \rangle / \Delta \langle{\rm[Fe/H]} \rangle \sim -10$
km s$^{-1}$ dex$^{-1}$), implies that more metal-rich stars in the inner region
of the GSE progenitor were gradually stripped away, while the prograde orbit of
the merger at infall became radialized by dynamical friction. The metal-rich
GSE stars are causally disconnected from the Splash structure, whose stars are
mostly found on prograde orbits ($>94\%$) and exhibit a more centrally
concentrated distribution than GSE. The MWTD exhibits a similar spatial
distribution to the Splash, suggesting earlier dynamical heating of stars in
the primordial disk of the Milky Way, possibly before the GSE merger.
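
A minimal sketch of the mixture-modeling step, assuming hypothetical arrays of
[Fe/H] and v_phi values for stars in a given |Z| slice; sklearn's
GaussianMixture stands in for whatever fitting machinery the authors used.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical inputs: metallicity and rotational velocity per star.
rng = np.random.default_rng(0)
feh = rng.normal(-1.2, 0.5, 5000)                 # [Fe/H] (dex)
vphi = rng.normal(30.0, 80.0, 5000)               # v_phi (km/s)
X = np.column_stack([feh, vphi])

# Three components as a stand-in for GSE / MWTD / Splash in this slice.
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(X)
print(gmm.means_)                                 # component centroids
print(gmm.weights_)                               # fractional contributions
```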
|
Integrating external language models (LMs) into end-to-end (E2E) models
remains a challenging task for domain-adaptive speech recognition. Recently,
internal language model estimation (ILME)-based LM fusion has shown significant
word error rate (WER) reduction from Shallow Fusion by subtracting a weighted
internal LM score from an interpolation of E2E model and external LM scores
during beam search. However, on different test sets, the optimal LM
interpolation weights vary over a wide range and have to be tuned extensively
on well-matched validation sets. In this work, we perform LM fusion in the
minimum WER (MWER) training of an E2E model to obviate the need for LM weights
tuning during inference. Besides MWER training with Shallow Fusion (MWER-SF),
we propose a novel MWER training with ILME (MWER-ILME) where the ILME-based
fusion is conducted to generate N-best hypotheses and their posteriors.
Additional gradient is induced when internal LM is engaged in MWER-ILME loss
computation. During inference, LM weights pre-determined in MWER training
enable robust LM integration on test sets from different domains. In
experiments with transformer transducers trained on 30K hours of speech,
MWER-ILME achieves on average 8.8% and 5.8% relative WER reductions from MWER
and MWER-SF training, respectively, on 6 different test sets.
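
For concreteness, the usual ILME-based fusion score during beam search can be
sketched as below; the interpolation weights are the quantities that MWER
training pre-determines, and the values here are placeholders.

```python
def ilme_fused_score(log_p_e2e, log_p_ext_lm, log_p_int_lm,
                     lam_ext=0.6, lam_int=0.4):
    """ILME-style fusion: interpolate E2E and external-LM scores and
    subtract a weighted internal-LM score (weights are illustrative)."""
    return log_p_e2e + lam_ext * log_p_ext_lm - lam_int * log_p_int_lm

# Scoring one token extension of a beam hypothesis:
score = ilme_fused_score(-0.9, -1.6, -1.2)
```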
|
Aircraft manufacturing relies on pre-order bookings. The configuration of the
aircraft to be assembled is fixed by design-assisted market surveys. The
sensitivity of the supply chain to market conditions makes the relationship
between the product (aircraft) and the associated service (aviation)
precarious. Traditional models that mitigate this risk to profitability rely
on increasing the scale of operations. However, the emergence of new standards
for air quality monitoring, and the insistence on their implementation,
demands additional corrective measures. In the quest for a solution, this
research commentary establishes a link between airport taxes and the nature of
the transporting unit. It warns that merely increasing the number of
mid-haulage-range aircraft (MHA) in the fleet may not be enough to overcome
this challenge. In a two-pronged approach, the commentary proposes the use of
mostly electric assisted air planes (MEAP) and small-sized airports as the key
to solving this complex problem. As a side note, the appropriateness of the
South Asian region as a test bed for MEAP-based aircraft is also investigated.
The success of this idea can potentially be extended to any other
aviation-friendly region of the world.
|
We present high-angular-resolution radio observations of the Arches cluster
in the Galactic centre, one of the most massive young clusters in the Milky
Way. The data were acquired in two epochs and at 6 and 10 GHz with the Karl G.
Jansky Very Large Array (JVLA). The rms noise reached is three to four times
better than during previous observations and we have almost doubled the number
of known radio stars in the cluster. Nine of them have spectral indices
consistent with thermal emission from ionised stellar winds, one is a confirmed
colliding wind binary (CWB), and two sources are ambiguous cases. Regarding
variability, the radio emission appears to be stable on timescales of a few to
ten years. Finally, we show that the number of radio stars can be used as a
tool for constraining the age and/or mass of a cluster and also its mass
function.
|
We consider an elliptic problem with nonlinear boundary condition involving
nonlinearity with superlinear and subcritical growth at infinity and a
bifurcation parameter as a factor. We use a re-scaling method, degree theory, and a
continuation theorem to prove that there exists a connected branch of positive
solutions bifurcating from infinity when the parameter goes to zero. Moreover,
if the nonlinearity satisfies additional conditions near zero, we establish a
global bifurcation result, and discuss the number of positive solution(s) with
respect to the parameter using bifurcation theory and degree theory.
|
Real-world machine learning systems need to analyze test data that may differ
from training data. In K-way classification, this is crisply formulated as
open-set recognition, core to which is the ability to discriminate open-set
data outside the K closed-set classes. Two conceptually elegant ideas for
open-set discrimination are: 1) discriminatively learning an open-vs-closed
binary discriminator by exploiting some outlier data as the open-set, and 2)
unsupervised learning the closed-set data distribution with a GAN, using its
discriminator as the open-set likelihood function. However, the former
generalizes poorly to diverse open test data due to overfitting to the training
outliers, which are unlikely to exhaustively span the open-world. The latter
does not work well, presumably due to the unstable training of GANs. Motivated
by the above, we propose OpenGAN, which addresses the limitation of each
approach by combining them with several technical insights. First, we show that
a carefully selected GAN-discriminator on some real outlier data already
achieves the state-of-the-art. Second, we augment the available set of real
open training examples with adversarially synthesized "fake" data. Third and
most importantly, we build the discriminator over the features computed by the
closed-world K-way networks. This allows OpenGAN to be implemented via a
lightweight discriminator head built on top of an existing K-way network.
Extensive experiments show that OpenGAN significantly outperforms prior
open-set methods.
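
A minimal sketch of the third ingredient, assuming a frozen K-way backbone
whose penultimate features feed a lightweight open-vs-closed discriminator
head; all names and sizes are hypothetical, and this is not the authors' code.

```python
import torch
import torch.nn as nn

# Lightweight discriminator head on top of frozen classifier features.
class OpenSetHead(nn.Module):
    def __init__(self, feat_dim=512, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                        # closed-set logit
        )

    def forward(self, feats):
        return self.net(feats).squeeze(-1)

head = OpenSetHead()
bce = nn.BCEWithLogitsLoss()
closed_feats = torch.randn(32, 512)                      # closed-set images
open_feats = torch.randn(32, 512)                        # real outliers + GAN fakes
loss = bce(head(closed_feats), torch.ones(32)) + \
       bce(head(open_feats), torch.zeros(32))
loss.backward()
```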
|
Climate change presents an existential threat to human societies and the
Earth's ecosystems more generally. Mitigation strategies naturally require
solving a wide range of challenging problems in science, engineering, and
economics. In this context, rapidly developing quantum technologies in
computing, sensing, and communication could become useful tools to diagnose and
help mitigate the effects of climate change. However, the intersection between
climate and quantum sciences remains largely unexplored. This preliminary
report aims to identify potential high-impact use-cases of quantum technologies
for climate change with a focus on four main areas: simulating physical
systems, combinatorial optimization, sensing, and energy efficiency. We hope
this report provides a useful resource towards connecting the climate and
quantum science communities, and to this end we identify relevant research
questions and next steps.
|
Information theoretic sensor management approaches are an ideal solution to
state estimation problems when considering the optimal control of multi-agent
systems; however, they are too computationally intensive for large state spaces,
especially when considering the limited computational resources typical of
large-scale distributed multi-agent systems. Reinforcement learning (RL) is a
promising alternative which can find approximate solutions to distributed
optimal control problems that take into account the resource constraints
inherent in many systems of distributed agents. However, the RL training can be
prohibitively inefficient, especially in low-information environments where
agents receive little to no feedback in large portions of the state space. We
propose a hybrid information-driven multi-agent reinforcement learning (MARL)
approach that utilizes information theoretic models as heuristics to help the
agents navigate large sparse state spaces, coupled with information based
rewards in an RL framework to learn higher-level policies. This paper presents
our ongoing work towards this objective. Our preliminary findings show that
such an approach can result in a system of agents that are approximately three
orders of magnitude more efficient at exploring a sparse state space than naive
baseline metrics. While the work is still in its early stages, it provides a
promising direction for future research.
|
Harnessing the quantum computational power of present noisy intermediate-scale
quantum (NISQ) devices has received tremendous interest in the
last few years. Here we study the learning power of a one-dimensional
long-range randomly-coupled quantum spin chain, within the framework of
reservoir computing. In time sequence learning tasks, we find the system in the
quantum many-body localized (MBL) phase holds long-term memory, which can be
attributed to the emergent local integrals of motion. On the other hand, the MBL
phase does not provide sufficient nonlinearity in learning highly-nonlinear
time sequences, which we show in a parity check task. This is reversed in the
quantum ergodic phase, which provides sufficient nonlinearity but compromises
memory capacity. In a complex learning task of Mackey-Glass prediction that
requires both sufficient memory capacity and nonlinearity, we find optimal
learning performance near the MBL-to-ergodic transition. This suggests a
guiding principle for quantum reservoir engineering: operating at the edge of
quantum ergodicity yields optimal learning power for generic complex reservoir
learning tasks. Our theoretical findings can be readily tested with present
experiments.
|
We present a study of the influence of magnetic field strength and morphology
on Type Ia supernovae and their late-time light curves and spectra. In order to
both capture self-consistent magnetic field topologies and evolve our models to
late times, a two-stage approach is taken. We study the early
deflagration phase (1s) using a variety of magnetic field strengths, and find
that the topology of the field is set by the burning, independent of the
initial strength. We study late time (~1000 days) light curves and spectra with
a variety of magnetic field topologies, and infer magnetic field strengths from
observed supernovae. Lower limits are found to be 10^6 G. This is determined by
the escape, or lack thereof, of positrons that are tied to the magnetic field.
The first stage employs 3D MHD and a local burning approximation, and uses the
code Enzo. The second stage employs a hybrid approach, with 3D radiation and
positron transport, and spherical hydrodynamics. The second stage uses the code
HYDRA. In our models, magnetic field amplification remains small during the
early deflagration phase. Late-time spectra bear the imprint of both magnetic
field strength and morphology. Implications for alternative explosion scenarios
are discussed.
|
Computational Fluid Dynamics (CFD) is a major sub-field of engineering.
Corresponding flow simulations are typically characterized by heavy
computational resource requirements. Often, very fine and complex meshes are
required to resolve physical effects in an appropriate manner. Since all CFD
algorithms scale at least linearly with the size of the underlying mesh
discretization, finding an optimal mesh is key for computational efficiency.
One methodology used to find optimal meshes is goal-oriented adaptive mesh
refinement. However, this is typically computationally demanding and only
available in a limited number of tools. Within this contribution, we adopt a
machine learning approach to identify optimal mesh densities. We generate
optimized meshes using classical methodologies and propose to train a
convolutional network predicting optimal mesh densities given arbitrary
geometries. The proposed concept is validated on 2D wind tunnel simulations
comprising more than 60,000 runs. Using a training set of 20,000 simulations,
we achieve accuracies of more than 98.7%.
Corresponding predictions of optimal meshes can be used as input for any mesh
generation and CFD tool. Thus, without complex computations, any CFD engineer
can start from a high-quality mesh.
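
A plausible skeleton of such a network is sketched below: a small
encoder-decoder that maps a binary geometry mask to a positive mesh-density
field. Layer counts and sizes are illustrative and not the paper's
architecture.

```python
import torch
import torch.nn as nn

# Minimal encoder-decoder sketch: geometry mask -> mesh-density map.
class MeshDensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Softplus(),
        )

    def forward(self, geometry_mask):            # (B, 1, H, W) binary geometry
        return self.dec(self.enc(geometry_mask)) # positive density per cell

net = MeshDensityNet()
density = net(torch.rand(4, 1, 128, 128))  # train against adaptive-refinement targets
```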
|
Context. The Sun's complex corona is the source of the solar wind and
interplanetary magnetic field. While the large scale morphology is well
understood, the impact of variations in coronal properties on the scale of a
few degrees on properties of the interplanetary medium is not known. Solar
Orbiter, carrying both remote sensing and in situ instruments into the inner
solar system, is intended to make these connections better than ever before.
Aims. We combine remote sensing and in situ measurements from Solar Orbiter's
first perihelion at 0.5 AU to study the fine scale structure of the solar wind
from the equatorward edge of a polar coronal hole with the aim of identifying
characteristics of the corona which can explain the in situ variations.
Methods. We use in situ measurements of the magnetic field, density and solar
wind speed to identify structures on scales of hours at the spacecraft. Using
Potential Field Source Surface mapping we estimate the source locations of the
measured solar wind as a function of time and use EUI images to characterise
these solar sources. Results. We identify small scale stream interactions in
the solar wind with compressed magnetic field and density along with speed
variations which are associated with corrugations in the edge of the coronal
hole on scales of several degrees, demonstrating that fine scale coronal
structure can directly influence solar wind properties and drive variations
within individual streams. Conclusions. This early analysis already
demonstrates the power of Solar Orbiter's combined remote sensing and in situ
payload and shows that with future, closer perihelia it will be possible to
dramatically improve our knowledge of the coronal sources of fine scale
solar wind structure, which is important both for understanding the phenomena
driving the solar wind and predicting its impacts at the Earth and elsewhere.
|
It is crucial for policymakers to understand the community prevalence of
COVID-19 so that resources to combat it can be effectively allocated and prioritized
during the COVID-19 pandemic. Traditionally, community prevalence has been
assessed through diagnostic and antibody testing data. However, despite the
increasing availability of COVID-19 testing, the required level has not been
met in most parts of the globe, introducing a need for an alternative method
for communities to determine disease prevalence. This is further complicated by
the observation that COVID-19 prevalence and spread vary across spatial,
temporal, and demographic dimensions. In this study, we analyze trends in the
spread of COVID-19 by utilizing the results of self-reported COVID-19 symptoms
surveys as an alternative to COVID-19 testing reports. This allows us to assess
community disease prevalence, even in areas with low COVID-19 testing ability.
Using individually reported symptom data from various populations, our method
predicts the likely percentage of the population that tested positive for
COVID-19. We do so with a Mean Absolute Error (MAE) of 1.14 and a Mean Relative
Error (MRE) of 60.40\% with a 95\% confidence interval of (60.12, 60.67). This
implies that our model's predictions deviate by about +/- 1140 cases from the
true count in a population of 1 million. In addition, we forecast the
location-wise percentage of the population testing positive for the next 30
days using self-reported symptoms data from previous days. The MAE for this
method is as low as 0.15
(MRE of 23.61\% with a 95\% confidence interval of (23.6, 13.7)) for New York. We
present an analysis of these results, exposing various clinical attributes of
interest across different demographics. Lastly, we qualitatively analyze how
various policy enactments (testing, curfew) affect the prevalence of COVID-19
in a community.
|
Precise quantitative delineation of tumor hypoxia is essential in radiation
therapy treatment planning to improve the treatment efficacy by targeting
hypoxic sub-volumes. We developed a combined imaging system of positron
emission tomography (PET) and electron paramagnetic resonance imaging (EPRI)
of molecular oxygen to investigate the accuracy of PET imaging in assessing
tumor hypoxia. The PET/EPRI combined imaging system aims to use EPRI to
precisely measure the oxygen partial pressure in tissues. This will evaluate
the validity of PET hypoxic tumor imaging by (near) simultaneously acquired
EPRI as ground truth. The combined imaging system was constructed by
integrating a small animal PET scanner (inner ring diameter 62 mm and axial
field of view 25.6 mm) and an EPRI subsystem (field strength 25 mT and resonant
frequency 700 MHz). The compatibility between the PET and EPRI subsystems was
tested with both phantom and animal imaging. Hypoxic imaging on a tumor mouse
model using $^{18}$F-fluoromisonidazole radio-tracer was conducted with the
developed PET/EPRI system. We report the development and initial imaging
results obtained from the PET/EPRI combined imaging system.
|
The effects of the evolution force are observable in nature at all structural
levels, ranging from small molecular systems to enormous biospheric systems.
However, the evolution force and the work associated with the formation of
biological structures have yet to be described mathematically or theoretically.
In addressing this conundrum, we consider evolution from a unique perspective
and in doing so introduce the Fundamental Theory of the Evolution Force (FTEF).
Herein, we provide a proof of concept of FTEF using a synthetic evolution
artificial intelligence to engineer 14-3-3 {\zeta} docking proteins. Synthetic
genes were engineered by transforming 14-3-3 {\zeta} sequences into time-based
DNA codes that served as templates for random DNA hybridizations and genetic
assembly. Application of time-based DNA codes allowed us to fast forward
evolution, while damping the effect of point mutations. Notably, SYN-AI
engineered a set of three architecturally conserved docking proteins that
retained motion and vibrational dynamics of native Bos taurus 14-3-3 {\zeta}.
|
Quantum cascade lasers (QCLs) facilitate compact optical frequency comb
sources that operate in the mid-infrared and terahertz spectral regions, where
many molecules have their fundamental absorption lines. Enhancing the optical
bandwidth of these chip-sized lasers is of paramount importance to address
their application in broadband high-precision spectroscopy. In this work, we
provide a numerical and experimental investigation of the comb spectral width
and show how it can be optimized to obtain its maximum value defined by the
laser gain bandwidth. The interplay of nonoptimal values of the resonant Kerr
nonlinearity and the cavity dispersion can lead to significant narrowing of the
comb spectrum and reveals the best approach for dispersion compensation. The
implementation of high mirror losses is shown to be favourable and results in
proliferation of the comb sidemodes. Ultimately, injection locking of QCLs by
modulating the laser bias around the roundtrip frequency provides a stable
external knob to control the FM comb state and recover the maximum spectral
width of the unlocked laser state.
|
The lack of an easily realizable complementary circuit technology offering
low static power consumption has been limiting the utilization of other
semiconductor materials than silicon. In this publication, a novel depletion
mode JFET-based complementary circuit technology is presented and hereinafter
referred to as Complementary Semiconductor (CS) circuit technology. The fact
that JFETs are pure semiconductor devices, i.e. a carefully optimized Metal
Oxide Semiconductor (MOS) gate stack is not required, facilitates the
implementation of CS circuit technology in many semiconductor materials, such
as germanium and silicon carbide. Furthermore, when the CS circuit technology
is idle there are neither conductive paths between nodes that are biased at
different potentials nor forward biased p-n junctions and thus it enables low
static power consumption. Moreover, the fact that the operation of depletion
mode JFETs does not necessitate the incorporation of forward biased p-n
junctions means that CS circuit technology is not limited to wide band-gap
semiconductor materials, low temperatures, and/or low voltage spans. In this
paper the operation of the CS logic is described and proven via simulations.
|
Many robot manipulation skills can be represented with deterministic
characteristics and there exist efficient techniques for learning parameterized
motor plans for those skills. However, an active research challenge remains:
sustaining manipulation capabilities in the event of a mechanical
failure. Ideally, like biological creatures, a robotic agent should be able to
reconfigure its control policy by adapting to dynamic adversaries. In this
paper, we propose a method that allows an agent to survive in a situation of
mechanical loss, and adaptively learn manipulation with compromised degrees of
freedom -- we call our method Survivable Robotic Learning (SRL). Our key idea
is to leverage Bayesian policy gradients by encoding a knowledge bias in the
posterior estimation, which in turn reduces the cost of future policy-search
exploration in terms of sample efficiency compared to policy search methods
based on random exploration. SRL represents policy priors as a Gaussian
process, which allows
tractable computation of approximate posterior (when true gradient is
intractable), by incorporating guided bias as proxy from prior replays. We
evaluate our proposed method against an off-the-shelf model-free learning
algorithm (DDPG), testing on a hexapod robot platform which encounters
incremental failure emulation, and our experiments show that our method
improves substantially in terms of sample requirements and quantitative success ratio
in all failure modes. A demonstration video of our experiments can be viewed
at: https://sites.google.com/view/survivalrl
|
Following ideas introduced by Beardon-Minda and by Baribeau-Rivard-Wegert in
the context of the Schwarz-Pick lemma, we use the iterated hyperbolic
difference quotients to prove a multipoint Julia lemma. As applications, we
give a sharp estimate from below of the angular derivative at a boundary point,
generalizing results due to Osserman, Mercer and others; and we prove a
generalization to multiple fixed points of an interesting estimate due to Cowen
and Pommerenke. These applications show that iterated hyperbolic difference
quotients and multipoint Julia lemmas can be useful tools for exploring in a
systematic way the influence of higher order derivatives on the boundary
behaviour of holomorphic self-maps of the unit disk.
|
We develop a mesoscopic lattice model to study the morphology formation in
interacting ternary mixtures with evaporation of one component. As a concrete
application of our model, we wish to capture morphologies as they typically
arise during the fabrication of organic solar cells. In this context, we
consider an evaporating solvent into which two other components are dissolved,
as a model for a 2-component coating solution that is drying on a substrate. We
propose a 3-spin dynamics to describe the evolution of the three interacting
species. As our main tool, we use a Monte Carlo Metropolis-based algorithm, with
the possibility of varying the system's temperature, mixture composition,
interaction strengths, and evaporation kinetics. The main novelty is the
structure of the mesoscopic model -- a bi-dimensional lattice with periodic
boundary conditions, divided into square cells to encode a mesoscopic-range
interaction among the units. We investigate the effect of the model parameters
on the structure of the resulting morphologies. Finally, we compare the results
obtained with the mesoscopic model with corresponding ones based on an
analogous lattice model with a short range interaction among the units, i.e.
when the mesoscopic length scale coincides with the microscopic length scale of
the lattice.
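
A stripped-down sketch of the Metropolis machinery is given below
(nearest-neighbour swap moves only; the mesoscopic cell-based interactions and
the evaporation kinetics of the actual model are omitted, and all parameters
are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta = 64, 1.0
# J[a, b]: pair interaction between species a and b (0 = solvent, 1, 2 = solutes).
J = np.array([[0.0, 0.2, 0.2],
              [0.2, 0.0, 1.0],
              [0.2, 1.0, 0.0]])
grid = rng.integers(0, 3, size=(L, L))

def site_energy(g, i, j):
    s = g[i, j]
    nbrs = (g[(i + 1) % L, j], g[(i - 1) % L, j],
            g[i, (j + 1) % L], g[i, (j - 1) % L])
    return sum(J[s, t] for t in nbrs)

for _ in range(100_000):                         # Kawasaki-style swap moves
    i, j = rng.integers(0, L, size=2)
    di, dj = ((0, 1), (1, 0))[rng.integers(2)]   # pick a neighbour direction
    k, l = (i + di) % L, (j + dj) % L
    e_before = site_energy(grid, i, j) + site_energy(grid, k, l)
    grid[i, j], grid[k, l] = grid[k, l], grid[i, j]      # trial swap
    dE = site_energy(grid, i, j) + site_energy(grid, k, l) - e_before
    if dE > 0 and rng.random() >= np.exp(-beta * dE):
        grid[i, j], grid[k, l] = grid[k, l], grid[i, j]  # reject: undo
```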
|
We consider the direct $s$-channel gravitational production of dark matter
during the reheating process. Independent of the identity of the dark matter
candidate or its non-gravitational interactions, the gravitational process is
always present and provides a minimal production mechanism. During reheating, a
thermal bath is quickly generated with a maximum temperature $T_{\rm max}$, and
the temperature decreases as the inflaton continues to decay until the energy
densities of radiation and inflaton oscillations are equal, at $T_{\rm RH}$.
During these oscillations, $s$-channel gravitational production of dark matter
occurs. We show that the abundance of dark matter (fermionic or scalar) depends
primarily on the combination $T_{\rm max}^4/T_{\rm RH} M_P^3$. We find that a
sufficient density of dark matter can be produced over a wide range of dark
matter masses: from a GeV to a ZeV.
|
Magnetic and crystallographic transitions in the Cairo pentagonal magnet
Bi2Fe4O9 are investigated by means of infrared synchrotron-based spectroscopy
as a function of temperature (20 - 300 K) and pressure (0 - 15.5 GPa). One of
the phonon modes is shown to exhibit an anomalous softening as a function of
temperature in the antiferromagnetic phase below 240 K, highlighting
spin-lattice coupling. Moreover, under applied pressure at 40 K, an even larger
softening is observed through the pressure induced structural transition.
Lattice dynamical calculations reveal that this mode is indeed very peculiar as
it involves a minimal bending of the strongest superexchange path in the
pentagonal planes, as well as a decrease of the distances between second
neighbor irons. The latter confirms the hypothesis made by Friedrich et al.
that an increase in the oxygen coordination of the irons is at the origin of
the pressure-induced structural transition. As a consequence, one expects a new
magnetic superexchange path that may alter the magnetic structure under
pressure.
|
Real-time detections of transients and rapid multi-wavelength follow-up are
at the core of modern multi-messenger astrophysics. MeerTRAP is one such
instrument that has been deployed on the MeerKAT radio telescope in South
Africa to search for fast radio transients in real-time. This, coupled with the
ability to rapidly localize the transient in combination with optical
co-pointing by the MeerLICHT telescope gives the instrument the edge in finding
and identifying the nature of the transient on short timescales. The commensal
nature of the project means that MeerTRAP will keep looking for transients even
if the telescope is not being used specifically for that purpose. Here, we
present a brief overview of the MeerTRAP project. We describe the overall
design, specifications and the software stack required to implement such an
undertaking. We conclude with some science highlights that have been enabled by
this venture over the last 10 months of operation.
|
Zonotopes are widely used for over-approximating forward reachable sets of
uncertain linear systems for verification purposes. In this paper, we use
zonotopes to achieve more scalable algorithms that under-approximate backward
reachable sets of uncertain linear systems for control design. The main
difference is that the backward reachability analysis is a two-player game and
involves Minkowski difference operations, but zonotopes are not closed under
such operations. We under-approximate this Minkowski difference with a
zonotope, which can be obtained by solving a linear optimization problem. We
further develop an efficient zonotope order reduction technique to bound the
complexity of the obtained zonotopic under-approximations. The proposed
approach is evaluated against existing approaches using randomly generated
instances and illustrated with several examples.
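
To fix notation, a zonotope is Z = {c + G u : ||u||_inf <= 1} with center c
and generator matrix G. The sketch below shows the data structure with a
standard Girard-style order reduction; note that this classical reduction
over-approximates, so it illustrates the concept only, not the paper's
under-approximation-preserving technique.

```python
import numpy as np

def reduce_order(c, G, max_gens):
    """Girard-style order reduction (over-approximating, illustrative only):
    replace the smallest generators by an axis-aligned box."""
    d, n = G.shape
    if n <= max_gens:
        return c, G
    n_keep = max_gens - d                       # room left after d box generators
    score = np.linalg.norm(G, 1, axis=0) - np.linalg.norm(G, np.inf, axis=0)
    idx = np.argsort(score)                     # small score: nearly axis-aligned
    keep = G[:, idx[n - n_keep:]]
    box = np.diag(np.abs(G[:, idx[:n - n_keep]]).sum(axis=1))
    return c, np.hstack([keep, box])

c, G = np.zeros(2), np.random.randn(2, 12)
c2, G2 = reduce_order(c, G, max_gens=6)         # 4 kept + 2 box generators
```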
|
Identifying a low-dimensional informed parameter subspace offers a viable
path to alleviating the dimensionality challenge in the sample-based solution
to large-scale Bayesian inverse problems. This paper introduces a novel
gradient-based dimension reduction method in which the informed subspace does
not depend on the data. This permits an online-offline computational strategy
where the expensive low-dimensional structure of the problem is detected in an
offline phase, meaning before observing the data. This strategy is particularly
relevant for multiple inversion problems as the same informed subspace can be
reused. The proposed approach allows controlling the approximation error (in
expectation over the data) of the posterior distribution. We also present
sampling strategies that exploit the informed subspace to efficiently draw
samples from the exact posterior distribution. The method is successfully
illustrated on two numerical examples: a PDE-based inverse problem with a
Gaussian process prior and a tomography problem with Poisson data and a
Besov-$\mathcal{B}^2_{11}$ prior.
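
In the spirit of the method (and of active-subspace analysis generally), the
data-free diagnostic can be sketched as diagonalizing an average of outer
products of model gradients drawn from the prior; the paper's error
certificates and sampling strategies are not reproduced, and the forward model
here is a toy.

```python
import numpy as np

# Detect an informed subspace from prior samples only: diagonalize
# H = E_prior[ grad G(x) grad G(x)^T ] and keep the leading eigenvectors.
def informed_subspace(grad_fn, prior_sampler, n_samples=1000, rank=5):
    xs = prior_sampler(n_samples)                 # (N, d) prior draws
    grads = np.stack([grad_fn(x) for x in xs])    # (N, d) model gradients
    H = grads.T @ grads / n_samples               # (d, d) diagnostic matrix
    eigval, eigvec = np.linalg.eigh(H)
    order = np.argsort(eigval)[::-1]
    return eigvec[:, order[:rank]], eigval[order] # basis + spectrum

# Toy forward model G(x) = sin(w @ x) with only 2 informed directions:
d = 20
w = np.zeros(d); w[:2] = [3.0, 1.0]
grad_fn = lambda x: np.cos(w @ x) * w
basis, spectrum = informed_subspace(grad_fn, lambda n: np.random.randn(n, d))
```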
|
Random Reshuffling (RR), also known as Stochastic Gradient Descent (SGD)
without replacement, is a popular and theoretically grounded method for
finite-sum minimization. We propose two new algorithms: Proximal and Federated
Random Reshuffling (ProxRR and FedRR). The first algorithm, ProxRR, solves
composite convex finite-sum minimization problems in which the objective is the
sum of a (potentially non-smooth) convex regularizer and an average of $n$
smooth objectives. We obtain the second algorithm, FedRR, as a special case of
ProxRR applied to a reformulation of distributed problems with either
homogeneous or heterogeneous data. We study the algorithms' convergence
properties with constant and decreasing stepsizes, and show that they have
considerable advantages over Proximal and Local SGD. In particular, our methods
have superior complexities and ProxRR evaluates the proximal operator once per
epoch only. When the proximal operator is expensive to compute, this small
difference makes ProxRR up to $n$ times faster than algorithms that evaluate
the proximal operator in every iteration. We give examples of practical
optimization tasks where the proximal operator is difficult to compute and
ProxRR has a clear advantage. Finally, we corroborate our results with
experiments on real data sets.
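
A minimal sketch of the ProxRR idea on a composite problem follows; the
stepsizes, the per-epoch prox scaling, and the toy lasso-style problem are
illustrative choices, so consult the paper for the exact algorithm and theory.

```python
import numpy as np

def prox_rr(x0, grads, prox, gamma=0.01, epochs=100, seed=0):
    """ProxRR sketch: reshuffled gradient steps on the smooth terms,
    with the proximal operator applied once per epoch."""
    rng = np.random.default_rng(seed)
    x, n = x0.copy(), len(grads)
    for _ in range(epochs):
        for i in rng.permutation(n):        # fresh permutation each epoch (RR)
            x = x - gamma * grads[i](x)     # n cheap gradient steps
        x = prox(x, n * gamma)              # a single prox, scaled by n*gamma
    return x

# Toy composite problem: average of quadratics plus an l1 regularizer.
rng = np.random.default_rng(1)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
grads = [lambda x, a=A[i], bi=b[i]: a * (a @ x - bi) for i in range(20)]
soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - lam * t, 0.0)
x_hat = prox_rr(np.zeros(5), grads, soft)
```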
|
Conditional on the extended Riemann hypothesis, we show that with high
probability, the characteristic polynomial of a random symmetric $\{\pm
1\}$-matrix is irreducible. This addresses a question raised by Eberhard in
recent work. The main innovation in our work is establishing sharp estimates
regarding the rank distribution of symmetric random $\{\pm 1\}$-matrices over
$\mathbb{F}_p$ for primes $2 < p \leq \exp(O(n^{1/4}))$. Previously, such
estimates were available only for $p = o(n^{1/8})$. At the heart of our proof
is a way to combine multiple inverse Littlewood--Offord-type results to control
the contribution to singularity-type events of vectors in $\mathbb{F}_p^{n}$
with anticoncentration at least $1/p + \Omega(1/p^2)$. Previously, inverse
Littlewood--Offord-type results only allowed control over vectors with
anticoncentration at least $C/p$ for some large constant $C > 1$.
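
The statement is easy to probe empirically; the snippet below samples small
random symmetric {±1} matrices and tests irreducibility of their
characteristic polynomials over the rationals with sympy (the matrix size and
trial count are arbitrary).

```python
import numpy as np
import sympy

# Empirical companion to the theorem: sample random symmetric {+-1} matrices
# and test irreducibility of the characteristic polynomial over Q.
def random_sym_pm1(n, rng):
    a = rng.choice([-1, 1], size=(n, n))
    return sympy.Matrix(np.triu(a) + np.triu(a, 1).T)

rng = np.random.default_rng(0)
x = sympy.Symbol('x')
hits = sum(random_sym_pm1(10, rng).charpoly(x).is_irreducible
           for _ in range(20))
print(f"irreducible in {hits}/20 trials")   # expected: nearly all
```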
|
In recent years, the immiscible polymer blend system has attracted much
attention as a matrix for nanocomposites. Herein, from the perspective of
dynamics, control of carbon nanotube (CNT) migration, aided by the interface
of polystyrene (PS) and poly(methyl methacrylate) (PMMA) blends, was achieved
through a facile melt-mixing method. We then revealed a comprehensive
relationship between several typical CNTs migrating scenarios and the microwave
dielectric properties of their nanocomposites. Based on the unique morphologies
and phase domain structures of the immiscible matrix, we further investigated
the multiple microwave dielectric relaxation processes and shed new light on
the relation between relaxation peak position and the phase domain size
distribution. Moreover, by integrating the CNTs interface localization control
with the matrix co-continuous structure construction, we found that the
interface promotes a double-percolation effect that achieves conductive
percolation at low CNT loading (~1.06 vol%). Overall, the present study
provides a unique nanocomposite material design that harmonizes functional
filler dispersion and location with matrix architecture optimization for
microwave applications.
|
Despite the acclaimed success of the magnetic field (H) formulation for
modeling the electromagnetic behavior of superconductors with the finite
element method, the use of vector-dependent variables in non-conducting domains
leads to unnecessarily long computation times. In order to solve this issue, we
have recently shown how to use a magnetic scalar potential together with the
H-formulation in the COMSOL Multiphysics environment to efficiently and
accurately solve for the magnetic field surrounding superconducting domains.
However, from the definition of the magnetic scalar potential, the
non-conducting domains must be made simply connected in order to obey Ampere's
law. In this work, we use thin cuts to apply a discontinuity in the scalar potential $\phi$ and make
the non-conducting domains simply connected. This approach is shown to be
easily implementable in the COMSOL Multiphysics finite element program, already
widely used by the applied superconductivity community. We simulate three
different models in 2-D and 3-D using superconducting filaments and tapes, and
show that the results are in very good agreement with the H-A and
H-formulations. Finally, we compare the computation times between the
formulations, showing that the H-$\phi$-formulation can be up to seven times
faster than the standard H-formulation in certain applications of interest.
|
Calcium scoring, a process in which arterial calcifications are detected and
quantified in CT, is valuable in estimating the risk of cardiovascular disease
events. Especially when used to quantify the extent of calcification in the
coronary arteries, it is a strong and independent predictor of coronary heart
disease events. Advances in artificial intelligence (AI)-based image analysis
have produced a multitude of automatic calcium scoring methods. While most
early methods closely follow the standard calcium scoring procedure accepted
in clinical practice, recent approaches extend this procedure to enable faster
or more reproducible
calcium scoring. This chapter provides an introduction to AI for calcium
scoring, and an overview of the developed methods and their applications. We
conclude with a discussion on AI methods in calcium scoring and propose
potential directions for future research.
|
Researchers have developed numerous debugging approaches to help programmers
in the debugging process, but these approaches are rarely used in practice. In
this paper, we investigate how programmers debug their code and what
researchers should consider when developing debugging approaches. We conducted
an online questionnaire where 102 programmers provided information about
recently fixed bugs. We found that the majority of bugs (69.6 %) are semantic
bugs. Memory and concurrency bugs do not occur as frequently (6.9 % and 8.8 %),
but they consume more debugging time. Locating a bug is more difficult than
reproducing and fixing it. Programmers often use only built-in IDE tools for
debugging. Furthermore, programmers frequently use a
replication-observation-deduction pattern when debugging. These results suggest
that debugging support is particularly valuable for memory and concurrency
bugs. Furthermore, researchers should focus on the fault localization phase and
integrate their tools into commonly used IDEs.
|
Zero-shot learning (ZSL) refers to the problem of learning to classify
instances from the novel classes (unseen) that are absent in the training set
(seen). Most ZSL methods infer the correlation between visual features and
attributes to train the classifier for unseen classes. However, such models may
have a strong bias towards seen classes during training. Meta-learning has been
introduced to mitigate this bias, but meta-ZSL methods are inapplicable when
tasks used for training are sampled from diverse distributions. In this regard,
we propose a novel Task-aligned Generative Meta-learning model for Zero-shot
learning (TGMZ). TGMZ mitigates the potentially biased training and enables
meta-ZSL to accommodate real-world datasets containing diverse distributions.
TGMZ incorporates an attribute-conditioned task-wise distribution alignment
network that projects tasks into a unified distribution to deliver an unbiased
model. Our comparisons with state-of-the-art algorithms show the improvements
of 2.1%, 3.0%, 2.5%, and 7.6% achieved by TGMZ on AWA1, AWA2, CUB, and aPY
datasets, respectively. TGMZ also outperforms competitors by 3.6% in
generalized zero-shot learning (GZSL) setting and 7.9% in our proposed
fusion-ZSL setting.
|
We consider the Brenier-Schr{\"o}dinger problem on compact manifolds with
boundary. In the spirit of a work by Arnaudon, Cruzeiro, L{\'e}onard and
Zambrini, we study the kinetic property of regular solutions and obtain a link
to the Navier-Stokes equations with an impermeability condition. We also
enhance the class of models for which the problem admits a unique solution.
This involves a method of taking quotients by reflection groups for which we
give several examples.
|
The fast protection of meshed HVDC grids requires the modeling of the
transient phenomena affecting the grid after a fault. In the case of hybrid
lines comprising both overhead and underground parts, the numerous generated
traveling waves may be difficult to describe and evaluate. This paper proposes
a representation of the grid as a graph, making it possible to take into
account all waves traveling through the grid. A relatively compact description of the waves
is then derived, based on a combined physical and behavioral modeling approach.
The obtained model depends explicitly on the characteristics of the grid as
well as on the fault parameters. An application of the model to the
identification of the faulty portion of a hybrid line is proposed. Knowledge
of the faulty portion is valuable, as faults in overhead lines are generally
temporary and can allow the line to be reclosed.
|
We investigate the feasibility of using deep learning techniques, in the form
of a one-dimensional convolutional neural network (1D-CNN), for the extraction
of signals from the raw waveforms produced by the individual channels of liquid
argon time projection chamber (LArTPC) detectors. A minimal generic LArTPC
detector model is developed to generate realistic noise and signal waveforms
used to train and test the 1D-CNN, and evaluate its performance on low-level
signals. We demonstrate that our approach overcomes the inherent shortcomings
of traditional cut-based methods by extending sensitivity to signals with ADC
values below their imposed thresholds. This approach exhibits great promise in
enhancing the capabilities of future generation neutrino experiments like DUNE
to carry out their low-energy neutrino physics programs.
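
A minimal 1D-CNN of the kind described can be sketched as below; the kernel
sizes, channel counts, and toy labels are illustrative and do not reproduce
the paper's network or training setup.

```python
import torch
import torch.nn as nn

# Per-tick signal tagging on raw waveforms with a small 1D-CNN (illustrative).
class WaveformCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=1),            # per-tick signal logit
        )

    def forward(self, adc):                  # adc: (batch, 1, n_ticks)
        return self.body(adc)                # logits; threshold for signal mask

net = WaveformCNN()
waveform = torch.randn(8, 1, 2048)           # simulated noise + signal waveforms
logits = net(waveform)
loss = nn.BCEWithLogitsLoss()(logits, (waveform > 1.5).float())  # toy labels
```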
|
Sensing and metrology play an important role in fundamental science and
applications, by fulfilling the ever-present need for more precise data sets,
and by allowing more reliable conclusions to be drawn on the validity of
theoretical models. Sensors are ubiquitous; they are used in applications
across a diverse range of fields including gravity imaging, geology,
navigation, security, timekeeping, spectroscopy, chemistry, magnetometry,
healthcare, and medicine. Current progress in quantum technologies inevitably
triggers the exploration of quantum systems to be used as sensors with new and
improved capabilities. This perspective initially provides a brief review of
existing and tested quantum sensing systems, before discussing possible future
directions for the use of superconducting quantum circuits in sensing and metrology:
superconducting sensors including many entangled qubits and schemes employing
Quantum Error Correction. The perspective also lists future research directions
that could be of great value beyond quantum sensing, e.g. for applications in
quantum computation and simulation.
|
Over the past two decades, open systems that are described by a non-Hermitian
Hamiltonian have become a subject of intense research. These systems encompass
classical wave systems with balanced gain and loss, semiclassical models with
mode selective losses, and minimal quantum systems, and the meteoric research
on them has mainly focused on the wide range of novel functionalities they
demonstrate. Here, we address the following questions: Does anything remain
constant in the dynamics of such open systems? What are the consequences of
such conserved quantities? Through spectral-decomposition method and explicit,
recursive procedure, we obtain all conserved observables for general
$\mathcal{PT}$-symmetric systems. We then generalize the analysis to
Hamiltonians with other antilinear symmetries, and discuss the consequences of
conservation laws for open systems. We illustrate our findings with several
physically motivated examples.
|
The study of rectified currents induced by active particles has received great
attention due to their possible application to microscopic motors in biological
environments. Insertion of an {\em asymmetric} passive object amid many active
particles has been regarded as an essential ingredient for generating such a
rectified motion. Here, we report that the reverse situation is also possible,
where the motion of an active object can be rectified by its geometric
asymmetry amid many passive particles. This may describe a unidirectional
motion of polar biological agents with asymmetric shapes. We also find a weak
but less diffusive rectified motion in a {\em passive} mode without energy
pump-in. This "moving by dissipation" mechanism could be used as a design
principle for developing more reliable microscopic motors.
|
It has been argued that supergravity models of inflation with vanishing sound
speeds, $c_s$, lead to an unbounded growth in the production rate of
gravitinos. We consider several models of inflation to delineate the conditions
for which $c_s = 0$. In models with unconstrained superfields, we argue that
the mixing of the goldstino and inflatino in a time-varying background prevents
the uncontrolled production of the longitudinal modes. This conclusion is
unchanged if there is a nilpotent field associated with supersymmetry breaking
with constraint ${\bf S^2} =0$, i.e. sgoldstino-less models. Models with a
second orthogonal constraint, ${\bf S(\Phi-\bar{\Phi})} =0$, where $\bf{\Phi}$
is the inflaton superfield, which eliminates the inflatino, may suffer from the
over-production of gravitinos. However, we point out that these models may be
problematic if this constraint originates from a UV Lagrangian, as this may
require using higher derivative operators. These models may also exhibit other
pathologies such as $c_s > 1$, which are absent in theories with the single
constraint or unconstrained fields.
|
Power law size distributions are the hallmarks of nonlinear energy
dissipation processes governed by self-organized criticality. Here we analyze
75 data sets of stellar flare size distributions, mostly obtained from the {\sl
Extreme Ultra-Violet Explorer (EUVE)} and the {\sl Kepler} mission. We aim to
answer the following questions for size distributions of stellar flares: (i)
What are the values and uncertainties of power law slopes? (ii) Do power law
slopes vary with time ? (iii) Do power law slopes depend on the stellar
spectral type? (iv) Are they compatible with solar flares? (v) Are they
consistent with self-organized criticality (SOC) models? We find that the
observed size distributions of stellar flare fluences (or energies) exhibit
power law slopes of $\alpha_E=2.09\pm0.24$ for optical data sets observed with
Kepler. The observed power law slopes do not show much time variability and do
not depend on the stellar spectral type (M, K, G, F, A, Giants). In solar
flares we find that background subtraction lowers the uncorrected value of
$\alpha_E=2.20\pm0.22$ to $\alpha_E=1.57\pm0.19$. Furthermore, most of the
stellar flares are temporally not resolved in low-cadence (30 min) Kepler data,
which causes an additional bias. Taking these two biases into account, the
stellar flare data sets are consistent with the theoretical prediction $N(x)
\propto x^{-\alpha_x}$ of self-organized criticality models, i.e.,
$\alpha_E=1.5$. Thus, accurate power law fits require automated detection of
the inertial range and background subtraction, which can be modeled with the
generalized Pareto distribution, finite-system size effects, and extreme event
outliers.
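
The basic slope estimate underlying such fits is the standard maximum-
likelihood estimator for a power law above a threshold x_min (Clauset-style);
the paper's generalized-Pareto modeling and background subtraction are not
reproduced in this sketch.

```python
import numpy as np

def powerlaw_mle(x, x_min):
    """MLE slope for N(x) ~ x^(-alpha) above x_min, with asymptotic error."""
    x = x[x >= x_min]
    alpha = 1.0 + len(x) / np.sum(np.log(x / x_min))
    err = (alpha - 1.0) / np.sqrt(len(x))
    return alpha, err

# Toy check: sample from alpha = 1.5 via inverse-CDF and recover the slope.
rng = np.random.default_rng(0)
u = rng.random(10_000)
samples = (1.0 - u) ** (-2.0)          # inverse CDF for alpha = 1.5, x_min = 1
print(powerlaw_mle(samples, 1.0))      # approximately (1.5, 0.005)
```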
|
If every vertex in a map has one out of two face-cycle types, then the map is
said to be $2$-semiequivelar. A 2-uniform tiling is an edge-to-edge tiling of
regular polygons having $2$ distinct transitivity classes of vertices. Clearly,
a $2$-uniform map is $2$-semiequivelar. The converse of this is not true in
general. There are 20 distinct 2-uniform tilings (these are of $14$ different
types) on the plane. In this article, we prove that a $2$-semiequivelar
toroidal map $K$ has a finite $2$-uniform cover if the universal cover of $K$
is $2$-uniform, except for two of the types.
|
To keep up with demand, servers will scale up to handle hundreds of thousands
of clients simultaneously. Much of the focus of the community has been on
scaling servers in terms of aggregate traffic intensity (packets transmitted
per second). However, bottlenecks caused by the increasing number of concurrent
clients, resulting in a large number of concurrent flows, have received little
attention. In this work, we focus on identifying such bottlenecks. In
particular, we define two broad categories of problems; namely, admitting more
packets into the network stack than can be handled efficiently, and increasing
per-packet overhead within the stack. We show that these problems contribute to
high CPU usage and network performance degradation in terms of aggregate
throughput and RTT. Our measurement and analysis are performed in the context
of the Linux networking stack, the most widely used publicly available
networking stack. Further, we discuss the relevance of our findings to other
network stacks. The goal of our work is to highlight considerations required in
the design of future networking stacks to enable efficient handling of large
numbers of clients and flows.
|
This paper presents a new technique for disturbing the algebraic structure of
linear codes in code-based cryptography. Specifically, we introduce the
so-called semilinear transformations in coding theory and then creatively apply
them to the construction of code-based cryptosystems. Since
$\mathbb{F}_{q^m}$ can be viewed as an $\mathbb{F}_q$-linear space of dimension
$m$, a semilinear transformation $\varphi$ is defined as an
$\mathbb{F}_q$-linear automorphism of $\mathbb{F}_{q^m}$. We then apply this
transformation to a linear code $\mathcal{C}$ over $\mathbb{F}_{q^m}$. It is
clear that $\varphi(\mathcal{C})$ forms an $\mathbb{F}_q$-linear space, but
generally does not preserve the $\mathbb{F}_{q^m}$-linearity any longer.
Inspired by this observation, a new technique for masking the structure of
linear codes is developed in this paper. Meanwhile, we endow the underlying
Gabidulin code with the so-called partial cyclic structure to reduce the
public-key size. Compared to some other code-based cryptosystems, our proposal
admits a much more compact representation of public keys. For instance, 2592
bytes are enough to achieve the security of 256 bits, almost 403 times smaller
than that of Classic McEliece entering the third round of the NIST PQC project.
|