Dynamical magnets can pump spin currents into superconductors. To understand
such a phenomenon, we develop a method utilizing the generalized Usadel
equation to describe time-dependent situations in superconductors in contact
with dynamical ferromagnets. Our proof-of-concept theory is valid when there is
sufficient dephasing at finite temperatures, and when the ferromagnetic
insulators are weakly polarized. We derive the effective equation of motion for
the Keldysh Green's function focusing on a thin film superconductor sandwiched
between two noncollinear ferromagnetic insulators of which one is dynamical. In
turn, we compute the spin currents in the system as a function of the
temperature and the magnetizations' relative orientations. When the induced
Zeeman splitting is weak, we find that the spin accumulation in the
superconducting state is smaller than in the normal state due to the lack of
quasiparticle states inside the gap. This feature gives a lower backflow spin
current from the superconductor as compared to a normal metal. Furthermore, in
superconductors, we find that the ratio between the backflow spin current in
the parallel and anti-parallel magnetization configuration depends strongly on
temperature, in contrast to the constant ratio in normal metals.
|
Data augmentation is an inexpensive way to increase training data diversity
and is commonly achieved via transformations of existing data. For tasks such
as classification, there is a good case for learning representations of the
data that are invariant to such transformations, yet this is not explicitly
enforced by classification losses such as the cross-entropy loss. This paper
investigates the use of training objectives that explicitly impose this
consistency constraint and how it can impact downstream audio classification
tasks. In the context of deep convolutional neural networks in the supervised
setting, we show empirically that certain measures of consistency are not
implicitly captured by the cross-entropy loss and that incorporating such
measures into the loss function can improve the performance of audio
classification systems. Put another way, we demonstrate how existing
augmentation methods can further improve learning by enforcing consistency.
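As a concrete illustration, a consistency term of this kind can be added on top of the cross-entropy loss, for instance by penalising the divergence between the class distributions a network assigns to a clean input and to its augmented view. The sketch below is a minimal PyTorch version of this idea; the model, the augmentation function, and the weight lam are illustrative placeholders, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def augmentation_consistent_loss(model, x, y, augment, lam=1.0):
    """Cross-entropy on the clean input plus a penalty on the distance
    between class distributions of the clean and augmented views."""
    logits_clean = model(x)
    logits_aug = model(augment(x))
    ce = F.cross_entropy(logits_clean, y)
    p_clean = F.softmax(logits_clean, dim=-1)
    p_aug = F.softmax(logits_aug, dim=-1)
    consistency = F.mse_loss(p_aug, p_clean.detach())  # one possible measure
    return ce + lam * consistency
```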
|
We use numerical simulations to demonstrate third-harmonic generation with
near-unity nonlinear circular dichroism (CD) and high conversion efficiency ($
10^{-2}\ \text{W}^{-2}$) in asymmetric Si-on-SiO$_2$ metasurfaces. The working
principle relies on the selective excitation of a quasi-bound state in the
continuum, characterized by a very high ($>10^5$) quality-factor. By tuning
multi-mode interference with the variation of the metasurface geometrical
parameters, we show the possibility of independent control of linear CD and
nonlinear CD. Our results pave the way for the development of all-dielectric
metasurfaces for nonlinear chiro-optical devices with high conversion
efficiency.
|
Recent advances in representation learning have demonstrated an ability to
represent information from different modalities such as video, text, and audio
in a single high-level embedding vector. In this work we present a
self-supervised learning framework that is able to learn a representation that
captures finer levels of granularity across different modalities such as
concepts or events represented by visual objects or spoken words. Our framework
relies on a discretized embedding space created via vector quantization that is
shared across different modalities. Beyond the shared embedding space, we
propose a Cross-Modal Code Matching objective that forces the representations
from different views (modalities) to have a similar distribution over the
discrete embedding space such that cross-modal objects/actions localization can
be performed without direct supervision. In our experiments we show that the
proposed discretized multi-modal fine-grained representation (e.g.,
pixel/word/frame) can complement high-level summary representations (e.g.,
video/sentence/waveform) for improved performance on cross-modal retrieval
tasks. We also observe that the discretized representation uses individual
clusters to represent the same semantic concept across modalities.
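To make the code-matching idea concrete, the sketch below shows a shared codebook with soft code assignments and a symmetric-KL objective that pushes two modalities toward similar distributions over the discrete codes. The architecture and loss details here are simplified assumptions, not the paper's exact model.

```python
import torch
import torch.nn.functional as F

class SharedCodebook(torch.nn.Module):
    def __init__(self, num_codes=1024, dim=256):
        super().__init__()
        self.codes = torch.nn.Embedding(num_codes, dim)

    def soft_assign(self, z):
        """z: (batch, n_items, dim) -> soft code distributions."""
        flat = z.reshape(-1, z.size(-1))
        d = torch.cdist(flat, self.codes.weight)      # distances to every code
        return F.softmax(-d, dim=-1).reshape(z.size(0), z.size(1), -1)

def code_matching_loss(codebook, z_video, z_audio):
    """Symmetric KL between the per-sample average code distributions of the
    two modalities, pushing them to use the shared codes similarly."""
    p = codebook.soft_assign(z_video).mean(dim=1)     # (batch, num_codes)
    q = codebook.soft_assign(z_audio).mean(dim=1)
    kl_pq = F.kl_div(q.log(), p, reduction="batchmean")   # KL(p || q)
    kl_qp = F.kl_div(p.log(), q, reduction="batchmean")   # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)
```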
|
We obtained high-resolution spectra of the ultra-cool M-dwarf TRAPPIST-1
during the transit of its planet `b' using two high dispersion near-infrared
spectrographs, the IRD instrument on the Subaru 8.2m telescope and the HPF
instrument on the 10m Hobby-Eberly Telescope. These spectroscopic observations
are complemented by a photometric transit observation of planet `b' using
APO/ARCTIC, which helped us capture the correct transit times for our
transit spectroscopy. Using the data obtained by the new IRD and HPF
observations, as well as the prior transit observations of planets `b', `e' and
`f' from IRD, we attempt to constrain the atmospheric escape of the planet
using the He I triplet 10830 {\AA} absorption line. We do not detect evidence
for a primordial extended H-He atmosphere in any of the three planets. To limit
any planet-related absorption, we place an upper limit on the equivalent widths of
<7.754 m{\AA} for planet `b', <10.458 m{\AA} for planet `e', and <4.143 m{\AA}
for planet `f' at 95% confidence from the IRD data, and <3.467 m{\AA} for
planet `b' at 95% confidence from HPF data. Using these limits along with a
solar-like composition isothermal Parker wind model, we attempt to constrain
the mass-loss rates for the three planets. For TRAPPIST-1b, our models exclude
the highest possible energy-limited rate for a wind temperature <5000 K. This
non-detection of extended atmospheres having low mean-molecular weight in all
three planets aids in further constraining their atmospheric composition by
steering the focus towards the search of high molecular weight species in their
atmospheres.
|
With the development of deep networks on various large-scale datasets, a
large zoo of pretrained models is available. When transferring from a model
zoo, applying classic single-model based transfer learning methods to each
source model suffers from high computational burden and cannot fully utilize
the rich knowledge in the zoo. We propose \emph{Zoo-Tuning} to address these
challenges, which learns to adaptively transfer the parameters of pretrained
models to the target task. With the learnable channel alignment layer and
adaptive aggregation layer, Zoo-Tuning \emph{adaptively aggregates channel
aligned pretrained parameters} to derive the target model, which promotes
knowledge transfer by simultaneously adapting multiple source models to
downstream tasks. The adaptive aggregation substantially reduces the
computation cost at both training and inference. We further propose lite
Zoo-Tuning with the temporal ensemble of batch average gating values to reduce
the storage cost at the inference time. We evaluate our approach on a variety
of tasks, including reinforcement learning, image classification, and facial
landmark detection. Experiment results demonstrate that the proposed adaptive
transfer learning approach can transfer knowledge from a zoo of models more
effectively and efficiently.
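The core aggregation step can be pictured as follows: each source kernel is passed through a learnable channel-alignment transform, and the results are combined with learnable gates to form the target layer's weights. The sketch below is a simplified PyTorch rendition; in particular, the paper's gating is data-dependent, whereas a static learnable gate stands in for it here.

```python
import torch
import torch.nn.functional as F

class ZooConv2d(torch.nn.Module):
    """Aggregate channel-aligned pretrained conv kernels with learnable gates."""
    def __init__(self, source_weights):            # list of (out, in, k, k) tensors
        super().__init__()
        self.sources = torch.nn.ParameterList(
            [torch.nn.Parameter(w.clone()) for w in source_weights])
        out_ch = source_weights[0].shape[0]
        # learnable channel-alignment matrix per source (initialised to identity)
        self.align = torch.nn.ParameterList(
            [torch.nn.Parameter(torch.eye(out_ch)) for _ in source_weights])
        self.gate = torch.nn.Parameter(torch.zeros(len(source_weights)))

    def forward(self, x):
        a = torch.softmax(self.gate, dim=0)        # aggregation weights, sum to 1
        weight = 0.0
        for ai, Ai, Wi in zip(a, self.align, self.sources):
            aligned = torch.einsum('po,oikl->pikl', Ai, Wi)  # align output channels
            weight = weight + ai * aligned
        return F.conv2d(x, weight, padding=weight.shape[-1] // 2)

# usage: derive one target conv layer from three pretrained 3x3 kernels
ws = [torch.randn(16, 8, 3, 3) for _ in range(3)]
layer = ZooConv2d(ws)
print(layer(torch.randn(2, 8, 32, 32)).shape)      # torch.Size([2, 16, 32, 32])
```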
|
In recent cross-disciplinary studies involving both optics and computing,
single-photon-based decision-making has been demonstrated by utilizing the
wave-particle duality of light to solve multi-armed bandit problems.
Furthermore, entangled-photon-based decision-making has managed to solve a
competitive multi-armed bandit problem in such a way that conflicts of
decisions among players are avoided while ensuring equality. However, as these
studies are based on the polarization of light, the number of available choices
is limited to two, corresponding to two orthogonal polarization states. Here we
propose a scalable principle to solve competitive decision-making situations by
using the orbital angular momentum of photons based on its high dimensionality,
which theoretically allows an unlimited number of arms. Moreover, by extending
the Hong-Ou-Mandel effect to more than two states, we theoretically establish
an experimental configuration able to generate multi-photon states with orbital
angular momentum and conditions that provide conflict-free selections at every
turn. We numerically examine total rewards for three-armed bandit
problems, for which the proposed strategy accomplishes almost the theoretical
maximum, which is greater than a conventional mixed strategy intending to
realize Nash equilibrium. This is thanks to the quantum interference effect
that achieves no-conflict selections, even in the exploring phase to find the
best arms.
|
We show that the moduli stack of elliptic surfaces of Kodaira dimension one
and without multiple fibers satisfies a weak form of topological
hyperbolicity.
|
We present an analysis of the spatial density structure of the outer disk from
8$-$14\,kpc using 13534 OB-type stars from LAMOST DR5. We observe similar
flaring on the north and south sides of the disk, implying that the flaring
structure is symmetrical about the Galactic plane; the scale height ranges
from 0.14 to 0.5\,kpc at different Galactocentric distances. By using the
average slope to characterize the flaring strength, we find that the thickness
of the OB stellar disk is similar to that of the thin disk as traced by red
giant branch stars, but its flaring is slightly stronger, possibly implying
that secular evolution is not the main contributor to the flaring and that
perturbation scenarios such as interactions with passing dwarf galaxies are
more likely. When comparing the scale height of the OB stellar disk on the
north and south sides with that of the gas disk, the former is slightly
thicker than the latter by $\approx$ 33 and 9\,pc, respectively, meaning that
one could tentatively use young OB-type stars to trace the gas properties.
Meanwhile, we find that the radial scale length of the young OB stellar disk
is 1.17 $\pm$ 0.05\,kpc, which is shorter than that of the gas disk,
confirming that the gas disk is more extended than the stellar disk. Moreover,
by considering the mid-plane displacements ($Z_{0}$) in our density model, we
find that almost all values of $Z_{0}$ are within 100\,pc, with an increasing
trend as the Galactocentric distance increases.
|
Velocity-space anisotropy can significantly modify fusion reactivity. The
nature and magnitude of this modification depends on the plasma temperature, as
well as the details of how the anisotropy is introduced. For plasmas that are
sufficiently cold compared to the peak of the fusion cross-section, anisotropic
distributions tend to have higher yields than isotropic distributions with the
same thermal energy. At higher temperatures, it is instead isotropic
distributions that have the highest yields. However, the details of this
behavior depend on exactly how the distribution differs from an isotropic
Maxwellian. This paper describes the effects of anisotropy on fusion yield for
the class of anisotropic distribution functions with the same energy
distribution as a 3D isotropic Maxwellian, and compares those results with the
yields from bi-Maxwellian distributions. In many cases, especially for plasmas
somewhat below reactor-regime temperatures, the effects of anisotropy can be
substantial.
|
The first generation of blockchain focused on digital currencies and secure
storage, management and transfer of tokenized values. Thereafter, the focus has
been shifting from currencies to a broader application space. In this paper, we
systematically explore marketplace types and properties, and consider the
mechanisms required to support those properties through blockchain. We propose
a generic and configurable framework for blockchain-based marketplaces, and
describe how popular marketplace types, price discovery policies, and other
configuration parameters are implemented within the framework by presenting
concrete event-based algorithms. Finally, we consider three use cases with
widely diverging properties and show how the proposed framework supports them.
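To illustrate what an event-based marketplace algorithm of this kind can look like, the sketch below implements offer, bid, and close events with a pluggable price-discovery policy. The policy names and event structure are illustrative assumptions, not the paper's concrete algorithms.

```python
from dataclasses import dataclass, field

@dataclass
class Offer:
    seller: str
    item: str
    reserve: float

@dataclass
class Marketplace:
    price_policy: str = "first_price_auction"    # a configuration parameter
    offers: dict = field(default_factory=dict)
    bids: dict = field(default_factory=dict)

    def on_offer(self, offer):
        self.offers[offer.item] = offer
        self.bids[offer.item] = []

    def on_bid(self, item, buyer, amount):
        self.bids[item].append((amount, buyer))

    def on_close(self, item):
        """Price discovery on auction close, per the configured policy."""
        offer = self.offers.pop(item)
        bids = sorted(self.bids.pop(item), reverse=True)
        if not bids or bids[0][0] < offer.reserve:
            return None                           # reserve not met: no sale
        if self.price_policy == "first_price_auction":
            price, buyer = bids[0]
        elif self.price_policy == "second_price_auction":
            buyer = bids[0][1]
            price = bids[1][0] if len(bids) > 1 else offer.reserve
        return {"item": item, "seller": offer.seller,
                "buyer": buyer, "price": price}
```

Note how a second-price policy is selected purely through configuration, leaving the event flow itself unchanged.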
|
A tangle is a connected topological space constructed by gluing several
copies of the unit interval $[0, 1]$. We explore which tangles guarantee
envy-free allocations of connected shares for $n$ agents, meaning that such
allocations exist no matter which monotonic and continuous functions represent
agents' valuations. Each single tangle $\mathcal{T}$ corresponds in a natural
way to an infinite topological class $\mathcal{G}(\mathcal{T})$ of multigraphs,
many of which are graphs. This correspondence links EF fair division of tangles
to EFk$_{outer}$ fair division of graphs. We know from Bil\`o et al. that all
Hamiltonian graphs guarantee EF1$_{outer}$ allocations when the number of
agents is 2, 3, 4 and guarantee EF2$_{outer}$ allocations for arbitrarily many
agents. We show that exactly six tangles are stringable; these guarantee EF
connected allocations for any number of agents, and their associated
topological classes contain only Hamiltonian graphs. Any non-stringable tangle
has a finite upper bound r on the number of agents for which EF allocations of
connected shares are guaranteed. Most graphs in the associated non-stringable
topological class are not Hamiltonian, and a negative transfer theorem shows
that for each $k \geq 1$ most of these graphs fail to guarantee EFk$_{outer}$
allocations of vertices for r + 1 or more agents. This answers a question posed
in Bil\`o et al., and explains why a focus on Hamiltonian graphs was necessary.
With bounds on the number of agents, however, we obtain positive results for
some non-stringable classes. An elaboration of Stromquist's moving knife
procedure shows that the non-stringable lips tangle guarantees envy-free
allocations of connected shares for three agents. We then modify the discrete
version of Stromquist's procedure in Bil\`o et al. to show that all graphs in
the topological class guarantee EF1$_{outer}$ allocations for three agents.
|
This paper studies properties of binary runlength-limited sequences with
additional constraints on their Hamming weight and/or their number of runs of
identical symbols. An algebraic and a probabilistic (entropic) characterization
of the exponential growth rate of the number of such sequences, i.e., their
information capacity, are obtained by using the methods of multivariate
analytic combinatorics, and properties of the capacity as a function of its
parameters are stated. The second-order term in the asymptotic expansion of the
rate of these sequences is also given, and the typical values of the relevant
quantities are derived. Several applications of the results are illustrated,
including bounds on codes for weight-preserving and run-preserving channels
(e.g., the run-preserving insertion-deletion channel), a sphere-packing bound
for channels with sparse error patterns, and the asymptotics of constant-weight
sub-block constrained sequences. In addition, the asymptotics of a closely
related notion -- $ q $-ary sequences with fixed Manhattan weight -- is briefly
discussed, and an application in coding for molecular timing channels is
illustrated.
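For intuition on how such counts behave in practice, the following dynamic-programming sketch counts binary sequences with a maximum run length and a fixed Hamming weight, and estimates the per-symbol growth rate. The parameter values are arbitrary examples, and this brute-force enumeration is not the paper's analytic-combinatorics machinery.

```python
import math
from functools import lru_cache

def count_sequences(n, rmax, w):
    """Binary length-n sequences with all runs <= rmax and Hamming weight w."""
    @lru_cache(maxsize=None)
    def go(pos, last, run, weight):
        if weight > w:
            return 0
        if pos == n:
            return int(weight == w)
        total = 0
        for bit in (0, 1):
            if bit == last:
                if run < rmax:                 # extend the current run
                    total += go(pos + 1, bit, run + 1, weight + bit)
            else:                              # start a new run
                total += go(pos + 1, bit, 1, weight + bit)
        return total
    return sum(go(1, bit, 1, bit) for bit in (0, 1))

n = 60
c = count_sequences(n, rmax=3, w=n // 2)
print(c, math.log2(c) / n)                     # count and per-symbol rate
```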
|
We propose novel triple-${\bm q}$ multipole orders as possible candidates for
the two distinct low-temperature symmetry broken phases in quadrupolar system
PrV$_2$Al$_{20}$. An analysis of the experiment under [111] magnetic fields
indicates that the {\it ferro} octupole moments in the lower temperature phase
arise from the {\it antiferro} octupole interactions. We demonstrate that the
triple-${\bm q}$ multipole orders can solve this seemingly inconsistent issue.
Anisotropies of quadrupole moments stabilize a triple-${\bm q}$ order, which
further leads to the second transition to a coexisting phase with triple-${\bm
q}$ octupole moments. The cubic invariant of quadrupole moments formed by the
triple-${\bm q}$ components and the characteristic couplings with the octupole
moments in their free energy play important roles. We analyze a multipolar
exchange model by mean-field approximation and discuss the temperature and
magnetic field phase diagrams. Many of the microscopic results, such as the
number of phases and the magnitudes of critical fields in the phase diagrams,
are qualitatively consistent with the experiments in PrV$_2$Al$_{20}$.
|
Rotational-vibrational transitions of the fundamental vibrational modes of
the $^{12}$C$^{14}$N$^+$ and $^{12}$C$^{15}$N$^+$ cations have been observed
for the first time using a cryogenic ion trap apparatus with an action
spectroscopy scheme. The lines P(3) to R(3) of $^{12}$C$^{14}$N$^+$ and R(1) to
R(3) of $^{12}$C$^{15}$N$^+$ have been measured, limited by the trap
temperature of approximately 4 K and the restricted tuning range of the
infrared laser. Spectroscopic parameters are presented for both isotopologues,
with band origins at 2000.7587(1) and 1970.321(1) cm$^{-1}$, respectively, as
well as an isotope independent fit combining the new and the literature data.
|
WD 0145+234 is a white dwarf that is accreting metals from a circumstellar
disc of planetary material. It has exhibited a substantial and sustained
increase in 3-5 micron flux since 2018. Follow-up Spitzer photometry reveals
that emission from the disc had begun to decrease by late 2019. Stochastic
brightening events superimposed on the decline in brightness suggest the
liberation of dust during collisional evolution of the circumstellar solids. A
simple model is used to show that the observations are indeed consistent with
ongoing collisions. Rare emission lines from circumstellar gas have been
detected at this system, supporting the emerging picture of white dwarf debris
discs as sites of collisional gas and dust production.
|
Magnetic field lines are trapped in black hole event horizons by accreting
plasma. If the trapped field lines are lightly loaded with plasma, then their
motion is controlled by their footpoints on the horizon and thus by the spin of
the black hole. In this paper, we investigate the boundary layer between
lightly loaded polar field lines and a dense, equatorial accretion flow. We
present an analytic model for aligned prograde and retrograde accretion systems
and argue that there is significant shear across this "jet-disk boundary" at
most radii for all black hole spins. Specializing to retrograde aligned
accretion, where the model predicts the strongest shear, we show numerically
that the jet-disk boundary is unstable. The resulting mixing layer episodically
loads plasma onto trapped field lines where it is heated, forced to rotate with
the hole, and permitted to escape outward into the jet. In one case we follow
the mass loading in detail using Lagrangian tracer particles and find a
time-averaged mass-loading rate ~ 0.01 Mdot.
|
Game publishers and anti-cheat companies have been unsuccessful in blocking
cheating in online gaming. We propose a novel, vision-based approach that
captures the final state of the frame buffer and detects illicit overlays. To
this aim, we train and evaluate a DNN detector on a new dataset, collected
using two first-person shooter games and three cheating software packages. We study the
advantages and disadvantages of different DNN architectures operating on a
local or global scale. We use output confidence analysis to avoid unreliable
detections and inform when network retraining is required. In an ablation
study, we show how to use Interval Bound Propagation to build a detector that
is also resistant to potential adversarial attacks and study its interaction
with confidence analysis. Our results show that robust and effective
anti-cheating through machine learning is practically feasible and can be used
to guarantee fair play in online gaming.
|
In this paper, we prove that for the doubly symmetric binary distribution,
the lower increasing envelope and the upper envelope of the
minimum-relative-entropy region are respectively convex and concave. We also
prove that another function induced by the minimum-relative-entropy region is
concave. These two envelopes and this function were previously used to
characterize the optimal exponents in strong small-set expansion problems and
strong Brascamp--Lieb inequalities. The results in this paper, combined with
the strong small-set expansion theorem derived by Yu, Anantharam, and Chen
(2021), and the strong Brascamp--Lieb inequality derived by Yu (2021), confirm
positively Ordentlich--Polyanskiy--Shayevitz's conjecture on the strong
small-set expansion (2019) and Polyanskiy's conjecture on the strong
Brascamp--Lieb inequality (2016). The proofs in this paper are based on the
equivalence between the convexity of a function and the convexity of the set of
minimizers of its Lagrangian dual.
|
Deep learning techniques have the power to identify the degree of
modification of high energy jets traversing deconfined QCD matter on a
jet-by-jet basis. Such knowledge allows us to study jets based on their
initial, rather than final energy. We show how this new technique provides
unique access to the genuine configuration profile of jets over the transverse
plane of the nuclear collision, both with respect to their production point and
their orientation. Effectively removing the selection biases induced by
final-state interactions, one can in this way analyse the potential azimuthal
anisotropies of jet production associated to initial-state effects.
Additionally, we demonstrate the capability of our new method to locate with
remarkable precision the production point of a dijet pair in the nuclear
overlap region, in what constitutes an important step forward towards the long
term quest of using jets as tomographic probes of the quark-gluon plasma.
|
In \cite{CMW19}, the authors introduced $k$-entanglement breaking linear maps
to understand the entanglement breaking property of completely positive maps on
taking composition. In this article, we do a systematic study of
$k$-entanglement breaking maps. We prove many equivalent conditions for a
$k$-positive linear map to be $k$-entanglement breaking, and thereby study the
mapping cone structure of $k$-entanglement breaking maps. We discuss examples
of $k$-entanglement breaking maps and some of their significance. As an
application of our study, we characterize completely positive maps that reduce
Schmidt number on taking composition with another completely positive map.
|
We consider a locally uniformly strictly elliptic second order partial
differential operator in $\mathbb{R}^d$, $d\ge 2$, with low regularity
assumptions on its coefficients, as well as an associated Hunt process and
semigroup. The Hunt process is known to solve a corresponding stochastic
differential equation that is pathwise unique. In this situation, we study the
relation of invariance, infinitesimal invariance, recurrence, transience,
conservativeness and $L^r$-uniqueness, and present sufficient conditions for
non-existence of finite infinitesimally invariant measures as well as finite
invariant measures. Our main result is that recurrence implies uniqueness of
infinitesimally invariant measures, as well as existence and uniqueness of
invariant measures, both in subclasses of locally finite measures. We can hence
make particular use of various explicit analytic criteria for recurrence
that have been previously developed in the context of (generalized) Dirichlet
forms. We present diverse examples and counterexamples for uniqueness of
infinitesimally invariant as well as invariant measures, including an example
where $L^1$-uniqueness fails for one infinitesimally invariant measure but
holds for another while pathwise uniqueness holds. Furthermore, we illustrate how our
results can be applied to related work and vice versa.
|
Superconductors with kagome lattices have been identified for over 40 years,
with a superconducting transition temperature $T_C$ of up to 7 K. Recently,
certain kagome superconductors have been found to exhibit an exotic charge
order, which intertwines with superconductivity and persists up to a
temperature one order of magnitude higher than $T_C$. In this work, we use scanning tunneling
microscopy (STM) to study the charge order in kagome superconductor RbV3Sb5. We
observe both a 2x2 chiral charge order and nematic surface superlattices
(predominantly 1x4). We find that the 2x2 charge order exhibits intrinsic
chirality with magnetic field tunability. Defects can scatter electrons to
introduce standing waves, which couple with the charge order to cause extrinsic
effects. While the chiral charge order resembles that discovered in KV3Sb5, it
further interacts with the nematic surface superlattices that are absent in
KV3Sb5 but exist in CsV3Sb5.
|
Stochastic processes offer a fundamentally different paradigm of dynamics
than the deterministic processes that students are most familiar with, the most
prominent example of the latter being Newton's laws of motion. Here, we discuss
in a pedagogical manner a simple and illustrative example of stochastic
processes in the form of a particle undergoing standard Brownian diffusion,
with the additional feature of the particle resetting repeatedly and at random
times to its initial condition. Over the years, many different variants of this
simple setting have been studied, including extensions to many-body interacting
systems, all of which serve as illustrations of peculiar static and dynamic
features that characterize stochastic dynamics at long times. We will provide
in this work a brief overview of this active and rapidly evolving field by
considering the arguably simplest example of Brownian diffusion in one
dimension. Along the way, we will learn about some of the general techniques
that a physicist employs to study stochastic processes.
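A minimal simulation of this setting, under illustrative parameter values, is given below: a walker diffuses and is reset to the origin at Poissonian times with rate r, and the empirical histogram is compared against the known stationary density (alpha/2) exp(-alpha |x|) with alpha = sqrt(r/D).

```python
import numpy as np

rng = np.random.default_rng(0)
D, r, dt, T, n_traj = 1.0, 0.5, 1e-2, 20.0, 10_000

x = np.zeros(n_traj)                       # all walkers start at the origin
for _ in range(int(T / dt)):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n_traj)
    reset = rng.random(n_traj) < r * dt    # Poissonian resetting events
    x[reset] = 0.0

alpha = np.sqrt(r / D)
hist, edges = np.histogram(x, bins=100, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
exact = 0.5 * alpha * np.exp(-alpha * np.abs(centers))
print(np.max(np.abs(hist - exact)))        # small: stationary state reached
```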
|
In this paper, we prove various radius results and obtain sufficient
conditions using the convolution for the Ma-Minda classes $\mathcal{S}^*(\psi)$
and $\mathcal{C}(\psi)$ of starlike and convex analytic functions. We also
obtain the Bohr radius for the class $ S_{f}(\psi):=
\{g(z)=\sum_{k=1}^{\infty}b_k z^k : g \prec f \}$ of subordinants, where $f\in
\mathcal{S}^*(\psi).$ The results are improvements and generalizations of
several well known results.
|
Checkpointing large amounts of related data concurrently to stable storage is
a common I/O pattern of many HPC applications. However, such a pattern
frequently leads to I/O bottlenecks that result in poor scalability and
performance. As modern HPC infrastructures continue to evolve, there is a
growing gap between compute capacity and I/O capabilities. Furthermore, the
storage hierarchy is becoming increasingly heterogeneous: in addition to
parallel file systems, it comprises burst buffers, key-value stores, deep
memory hierarchies at node level, etc. In this context, the state of the art
is insufficient to deal with the diversity of vendor APIs, performance, and
persistency characteristics. This extended abstract presents an overview of
VeloC (Very Low Overhead Checkpointing System), a checkpointing runtime
specifically designed to address these challenges for next-generation
Exascale HPC applications and systems. VeloC offers a simple API at user level,
while employing an advanced multi-level resilience strategy that transparently
optimizes the performance and scalability of checkpointing by leveraging
heterogeneous storage.
|
We calculate the energy levels of a system of neutrinos undergoing collective
oscillations as functions of an effective coupling strength and radial distance
from the neutrino source using the quantum Lanczos (QLanczos) algorithm
implemented on IBM Q quantum computer hardware. Our calculations are based on
the many-body neutrino interaction Hamiltonian introduced in Ref.\
\cite{Patwardhan2019}. We show that the system Hamiltonian can be separated
into smaller blocks, which can be represented using fewer qubits than those
needed to represent the entire system as one unit, thus reducing the noise in
the implementation on quantum hardware. We also calculate transition
probabilities of collective neutrino oscillations using a Trotterization method
which is simplified before subsequent implementation on hardware. These
calculations demonstrate that energy eigenvalues of a collective neutrino
system and collective neutrino oscillations can both be computed on quantum
hardware, with certain simplifications, in good agreement with exact
results.
|
We study the algorithmic content of Pontryagin-van Kampen duality. We prove
that the dualization is computable in the important cases of compact and
locally compact totally disconnected Polish abelian groups. The applications of
our main results include solutions to questions of Kihara and Ng about
presentations of connected Polish spaces, and an unexpected arithmetical
characterisation of direct products of solenoid groups among all Polish groups.
|
Recent studies have demonstrated a perceptible improvement in the performance
of neural machine translation by applying cross-lingual language model
pretraining (Lample and Conneau, 2019), especially Translation Language
Modeling (TLM). To alleviate TLM's need for expensive parallel corpora, in
this work we incorporate the translation information from dictionaries into
the pretraining process and propose a novel Bilingual Dictionary-based Language
Model (BDLM). We evaluate our BDLM in Chinese, English, and Romanian. For
Chinese-English, we obtained a 55.0 BLEU on WMT-News19 (Tiedemann, 2012) and a
24.3 BLEU on WMT20 news-commentary, outperforming the Vanilla Transformer
(Vaswani et al., 2017) by more than 8.4 BLEU and 2.3 BLEU, respectively.
According to our results, the BDLM also has advantages on convergence speed and
predicting rare words. The increase in BLEU for WMT16 Romanian-English also
shows its effectiveness in low-resource language translation.
|
Near the surface of any neutron star there is a thin heat blanketing envelope
that produces substantial thermal insulation of warm neutron star interiors and
that relates the internal temperature of the star to its effective surface
temperature. Physical processes in the blanketing envelopes are reasonably
clear but the chemical composition is not. The latter circumstance complicates
inferring physical parameters of matter in the stellar interiors from
observations of the thermal surface radiation of the stars and calls for
elaborating the models of blanketing envelopes. We outline physical properties of
these envelopes, in particular the equation of state, thermal conduction, ion
diffusion, and others. Various models of heat blankets are reviewed, such as those
composed of separate layers of different elements, or containing diffusive
binary ion mixtures in or out of diffusion equilibrium. The effects of strong
magnetic fields in the envelopes are outlined as well as the effects of high
temperatures which induce strong neutrino emission in the envelopes themselves.
Finally, we discuss how the properties of the heat blankets affect thermal
evolution of neutron stars and the ability to infer important information on
internal structure of neutron stars from observations.
|
We study the optical-pump induced ultrafast transient change of the X-ray
absorption at the $L_3$ absorption resonances of the transition metals Ni and
Fe in Fe$_{0.5}$Ni$_{0.5}$ alloy. We find the effect for both elements to occur
simultaneously on a femtosecond timescale. This effect may hence be used as a
handy cross-correlation scheme providing a time-zero reference for ultrafast
optical-pump soft X-ray-probe measurements. The method benefits from a
relatively simple experimental setup as the sample itself acts as
time-reference tool. In particular, this technique works with low flux
ultrafast soft X-ray sources. The measurements are compared to the
cross-correlation method introduced in an earlier publication.
|
We propose a system of conservation laws with relaxation source terms (i.e.
balance laws) for non-isothermal viscoelastic flows of Maxwell fluids. The
system is an extension of the polyconvex elastodynamics of hyperelastic bodies
using additional structure variables. It is obtained by writing the Helmholtz
free energy as the sum of a volumetric energy density (function of the
determinant of the deformation gradient $\det F$ and the temperature $\theta$, like
the standard perfect-gas law or Noble-Abel stiffened-gas law) plus a polyconvex
strain energy density function of $F$, $\theta$ and of symmetric
positive-definite structure tensors that relax at a characteristic time scale.
One feature of our model is that it unifies various ideal materials ranging
from hyperelastic solids to perfect fluids, encompassing fluids with memory
like Maxwell fluids. We establish a strictly convex mathematical entropy to
show that the system is symmetric-hyperbolic. Another feature of the proposed
model is therefore the short-time existence and uniqueness of smooth solutions,
which define genuinely causal viscoelastic flows with waves propagating at
finite speed. In heat-conductors, we complement the system by a
Maxwell-Cattaneo equation for an energy-flux variable. The system is still
symmetric-hyperbolic, and smooth evolutions with finite-speed waves remain
well-defined.
|
We introduce an optimisation method for variational quantum algorithms and
experimentally demonstrate a 100-fold improvement in efficiency compared to
naive implementations. The effectiveness of our approach is shown by obtaining
multi-dimensional energy surfaces for small molecules and a spin model. Our
method solves related variational problems in parallel by exploiting the global
nature of Bayesian optimisation and sharing information between different
optimisers. Parallelisation makes our method ideally suited to the next generation
of variational problems with many physical degrees of freedom. This addresses a
key challenge in scaling-up quantum algorithms towards demonstrating quantum
advantage for problems of real-world interest.
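One way to realise such information sharing, sketched below under simplifying assumptions, is to fit a single Gaussian-process surrogate over the joint (parameter, problem) space and draw one acquisition per problem from it each round. The toy energy function stands in for the actual quantum expectation values, and the lower-confidence-bound rule is one acquisition choice among many.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def energy(theta, g):                      # stand-in for a quantum measurement
    return np.cos(theta) + g * np.sin(2 * theta)

tags = [0.2, 0.5, 0.8]                     # related problems (e.g. couplings)
rng = np.random.default_rng(1)
X, y = [], []
for _ in range(3):                         # a few random initial evaluations
    for g in tags:
        t = rng.uniform(0, 2 * np.pi)
        X.append([t, g]); y.append(energy(t, g))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 0.3]), alpha=1e-6)
grid = np.linspace(0, 2 * np.pi, 200)
for _ in range(20):
    gp.fit(np.array(X), np.array(y))       # one surrogate shared by all problems
    for g in tags:                         # one acquisition per problem, in parallel
        mu, sd = gp.predict(np.c_[grid, np.full_like(grid, g)], return_std=True)
        t = grid[np.argmin(mu - 2.0 * sd)]  # lower-confidence-bound acquisition
        X.append([t, g]); y.append(energy(t, g))

for g in tags:
    print(g, min(v for (tt, gg), v in zip(X, y) if gg == g))
```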
|
This work aims to interpolate parametrized Reduced Order Model (ROM) bases
constructed via the Proper Orthogonal Decomposition (POD) to derive a robust
ROM of the system's dynamics for an unseen target parameter value. A novel
non-intrusive Space-Time (ST) POD basis interpolation scheme is proposed, for
which we define ROM spatial and temporal basis \emph{curves on compact Stiefel
manifolds}. An interpolation is finally defined on a \emph{mixed part} encoded
in a square matrix directly deduced using the space part, the singular values
and the temporal part, to obtain an interpolated snapshot matrix, keeping track
of accurate space and temporal eigenvectors. Moreover, in order to establish a
well-defined curve on the compact Stiefel manifold, we introduce a new
procedure, the so-called oriented SVD. Such an oriented SVD produces unique
right and left eigenvectors for generic matrices, for which all singular values
are distinct. It is important to notice that the ST POD basis interpolation
does not require the construction and the subsequent solution of a
reduced-order FEM model, as is classically done. It hence avoids the
bottleneck of standard POD interpolation, which is associated with the
evaluation of the nonlinear terms of the Galerkin projection on the governing
equations. As a proof of concept, the proposed method is demonstrated with the
adaptation of rigid-thermoviscoplastic finite element ROMs applied to a typical
nonlinear open forging metal forming process. Strong correlations of the ST POD
models with respect to their associated high-fidelity FEM counterpart
simulations are reported, highlighting its potential use for near real-time
parametric simulations using off-line computed ROM POD databases.
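The sign ambiguity that the oriented SVD removes can be fixed, for instance, with the convention sketched below: the largest-magnitude entry of each left singular vector is made positive, and the corresponding right vector is flipped with it. This particular convention is an illustrative assumption, not necessarily the paper's construction, but it likewise yields unique factors for generic matrices with distinct singular values.

```python
import numpy as np

def oriented_svd(A):
    """SVD with a deterministic sign convention on each singular-vector pair."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    for j in range(len(s)):
        i = np.argmax(np.abs(U[:, j]))     # anchor entry of left vector j
        if U[i, j] < 0:                    # flip the pair (u_j, v_j) together
            U[:, j] *= -1.0
            Vt[j, :] *= -1.0
    return U, s, Vt

A = np.random.default_rng(2).standard_normal((6, 4))
U, s, Vt = oriented_svd(A)
print(np.allclose(U * s @ Vt, A))          # reconstruction unaffected by signs
```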
|
Forecasting demand is one of the fundamental components of a successful
revenue management system in hospitality. The industry requires understandable
models that contribute to adaptability by a revenue management department to
make data-driven decisions. Data analysis and forecasts prove an essential role
for the time until the check-in date, which differs per day of week. This paper
aims to provide a new model, which is inspired by cubic smoothing splines,
resulting in smooth demand curves per rate class over time until the check-in
date. This model regulates the error between data points and a smooth curve,
and is therefore able to capture natural guest behavior. The forecast is
obtained by solving a linear programming model, which enables the incorporation
of industry knowledge in the form of constraints. Using data from a major hotel
chain, a lower error and 13.3% more revenue are obtained.
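A linear-programming formulation along these lines, sketched below with toy data, minimises the absolute deviation from the observed booking curve plus a penalty on absolute second differences (the smoothing term), with a monotonicity constraint standing in for one piece of industry knowledge. The data, weights, and constraints are illustrative, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

d = np.array([1., 2., 2., 4., 7., 9., 15., 21., 30., 38.])  # toy booking curve
n, lam = len(d), 2.0

# variables: curve s (n), absolute errors e (n), absolute 2nd differences c (n-2)
nv = 2 * n + (n - 2)
cost = np.r_[np.zeros(n), np.ones(n), lam * np.ones(n - 2)]

A, b = [], []
for i in range(n):                         # |s_i - d_i| <= e_i
    for sgn in (1, -1):
        row = np.zeros(nv)
        row[i], row[n + i] = sgn, -1
        A.append(row); b.append(sgn * d[i])
for i in range(1, n - 1):                  # |s_{i-1} - 2 s_i + s_{i+1}| <= c_{i-1}
    for sgn in (1, -1):
        row = np.zeros(nv)
        row[i - 1], row[i], row[i + 1] = sgn, -2 * sgn, sgn
        row[2 * n + i - 1] = -1
        A.append(row); b.append(0.0)
for i in range(n - 1):                     # industry knowledge: s non-decreasing
    row = np.zeros(nv)
    row[i], row[i + 1] = 1, -1
    A.append(row); b.append(0.0)

bounds = [(None, None)] * n + [(0, None)] * (nv - n)   # errors and diffs >= 0
res = linprog(cost, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
print(np.round(res.x[:n], 2))              # the smoothed demand curve
```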
|
We consider a standard federated learning (FL) architecture where a group of
clients periodically coordinate with a central server to train a statistical
model. We develop a general algorithmic framework called FedLin to tackle some
of the key challenges intrinsic to FL, namely objective heterogeneity, systems
heterogeneity, and infrequent and imprecise communication. Our framework is
motivated by the observation that under these challenges, various existing FL
algorithms suffer from a fundamental speed-accuracy conflict: they either
guarantee linear convergence but to an incorrect point, or convergence to the
global minimum but at a sub-linear rate, i.e., fast convergence comes at the
expense of accuracy. In contrast, when the clients' local loss functions are
smooth and strongly convex, we show that FedLin guarantees linear convergence
to the global minimum, despite arbitrary objective and systems heterogeneity.
We then establish matching upper and lower bounds on the convergence rate of
FedLin that highlight the effects of intermittent communication. Finally, we
show that FedLin preserves linear convergence rates under aggressive gradient
sparsification, and quantify the effect of the compression level on the
convergence rate. Our work is the first to provide tight linear convergence
rate guarantees, and constitutes the first comprehensive analysis of gradient
sparsification in FL.
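The drift-correction mechanism behind this kind of guarantee can be illustrated with a toy least-squares problem: each client takes local gradient steps, but corrects its local gradient with the global gradient memorised at the last synchronisation point. The sketch below is a simplified rendition in this spirit, not the paper's exact algorithm; problem sizes and step sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
m, dim = 5, 10                                   # clients, parameters
A = [rng.standard_normal((20, dim)) for _ in range(m)]   # heterogeneous data
b = [rng.standard_normal(20) for _ in range(m)]
grad = lambda i, w: A[i].T @ (A[i] @ w - b[i]) / len(b[i])
global_grad = lambda w: sum(grad(i, w) for i in range(m)) / m

w, eta, local_steps = np.zeros(dim), 0.05, 10
for _ in range(200):                             # communication rounds
    g0, w0 = global_grad(w), w.copy()            # broadcast once per round
    updates = []
    for i in range(m):                           # clients run local steps
        wi = w0.copy()
        for _ in range(local_steps):
            # local gradient corrected by the memorised global direction
            wi -= eta * (grad(i, wi) - grad(i, w0) + g0)
        updates.append(wi)
    w = np.mean(updates, axis=0)                 # server aggregation

print(np.linalg.norm(global_grad(w)))            # ~0: global minimum reached
```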
|
Catacondensed benzenoids (those benzenoids having no carbon atom belonging to
three hexagonal rings) form the simplest class of polycyclic aromatic
hydrocarbons (PAH). They have a long history of study and are of wide chemical
importance. In this paper, mathematical possibilities for natural extension of
the notion of a catacondensed benzenoid are discussed, leading under plausible
chemically and physically motivated restrictions to the notion of a
catacondensed chemical hexagonal complex (CCHC). A general polygonal complex is
a topological structure composed of polygons that are glued together along
certain edges. A polygonal complex is flat if none of its edges belong to more
than two polygons. A connected flat polygonal complex determines an orientable
or nonorientable surface, possibly with boundary. A CCHC is then a connected
flat polygonal complex all of whose polygons are hexagons and each of whose
vertices belongs to at most two hexagonal faces. We prove that all CCHC are
Kekulean and give formulas for counting the perfect matchings in a series of
examples based on expansion of cubic graphs in which the edges are replaced by
linear polyacenes of equal length. As a preliminary assessment of the likely
stability of molecules with CCHC structure, all-electron quantum chemical
calculations are applied to molecular structures based on several CCHC, using
either linear or kinked unbranched catafused polyacenes as the expansion motif.
The systems examined all have closed shells according to H\"uckel theory and
all correspond to minima on the potential surface, thus passing the most basic
test for plausibility as a chemical species.
|
In this work GEM and single-hole Thick GEM structures, composed of different
coating materials, are studied. The foils used incorporate conductive layers
made of copper, aluminium, molybdenum, stainless steel, tungsten and tantalum.
The main focus of the study is the determination of the material dependence of
the formation of electrical discharges in GEM-based detectors. For this task,
discharge probability measurements are conducted with several Thick GEM samples
using a basic electronics readout chain. In addition, optical
spectroscopy methods are employed to study the light emitted during discharges
from the different foils. It is observed that the light spectra of GEMs include
emission lines from the conductive layer material. This indicates the presence
of the foil material in the discharge plasma after the initial spark. However,
no lines associated with the coating material are observed while studying spark
discharges induced in Thick GEMs. It is concluded that the conductive layer
material does not play a substantial role in terms of stability against primary
discharges. However, a strong material dependence is observed in the case of
secondary discharge formation, pointing to molybdenum coating as the one
providing increased stability.
|
Is there a constant $r_0$ such that, in any invariant tree network linking
rate-$1$ Poisson points in the plane, the mean within-network distance between
points at Euclidean distance $r$ is infinite for $r > r_0$? We prove a slightly
weaker result. This is a continuum analog of a result of Benjamini et al (2001)
on invariant spanning trees of the integer lattice.
|
Bose polarons, quasi-particles composed of mobile impurities surrounded by
cold Bose gas, can experience strong interactions mediated by the many-body
environment and form bipolaron bound states. Here we present a detailed study
of heavy polarons in a one-dimensional Bose gas by formulating a
non-perturbative theory and complementing it with exact numerical simulations.
We develop an analytic approach for weak boson-boson interactions and
arbitrarily strong impurity-boson couplings. Our approach is based on a
mean-field theory that accounts for deformations of the superfluid by the
impurities and in this way minimizes quantum fluctuations. The mean-field
equations are solved exactly in Born-Oppenheimer (BO) approximation leading to
an analytic expression for the interaction potential of heavy polarons which is
found to be in excellent agreement with quantum Monte-Carlo (QMC) results. In
the strong-coupling limit the potential substantially deviates from the
exponential form valid for weak coupling and has a linear shape at short
distances. Taking into account the leading-order Born-Huang corrections we
calculate bipolaron binding energies for impurity-boson mass ratios as low as 3
and find excellent agreement with QMC results.
|
This work concerns video-language pre-training and representation learning.
In this now ubiquitous training scheme, a model first performs pre-training on
paired videos and text (e.g., video clips and accompanied subtitles) from a
large uncurated source corpus, before transferring to specific downstream
tasks. This two-stage training process inevitably raises questions about the
generalization ability of the pre-trained model, which is particularly
pronounced when a salient domain gap exists between source and target data
(e.g., instructional cooking videos vs. movies). In this paper, we first bring
to light the sensitivity of pre-training objectives (contrastive vs.
reconstructive) to domain discrepancy. Then, we propose a simple yet effective
framework, CUPID, to bridge this domain gap by filtering and adapting source
data to the target data, followed by domain-focused pre-training. Comprehensive
experiments demonstrate that pre-training on a considerably small subset of
domain-focused data can effectively close the source-target domain gap and
achieve significant performance gain, compared to random sampling or even
exploiting the full pre-training dataset. CUPID yields new state-of-the-art
performance across multiple video-language and video tasks, including
text-to-video retrieval [72, 37], video question answering [36], and video
captioning [72], with consistent performance lift over different pre-training
methods.
|
Automatic image aesthetics assessment is a computer vision problem that deals
with the categorization of images into different aesthetic levels. The
categorization is usually done by analyzing an input image and computing some
measure of the degree to which the image adheres to the key principles of
photography (balance, rhythm, harmony, contrast, unity, look, feel, tone and
texture). Owing to its diverse applications in many areas, automatic image
aesthetic assessment has gained significant research attention in recent years.
This paper presents a literature review of the recent techniques of automatic
image aesthetics assessment. A large number of traditional hand-crafted and
deep-learning-based approaches are reviewed. Key problem aspects are discussed,
such as why some features or models perform better than others and what the
limitations are. A comparison of the quantitative results of different methods is
also provided at the end.
|
We assume a generic real singlet scalar extension of the Standard Model
living in the vacuum $(v,w)$ at the electroweak scale with $v=246$ GeV and $w$
being respectively the Higgs and the singlet scalar vacuum expectation values.
By requiring {\it absolute} vacuum stability for the vacuum $(v,w)$, the
positivity condition and the perturbativity up to the Planck scale, we show
that the viable space of parameters in the model is strongly constrained for
various singlet scalar vacuum expectation values $w=0.1, 1, 10, 100$ TeV. Also,
it turns out that the singlet scalar mass can range from a few GeV up to below a
TeV.
|
The constant increase in the complexity of data networks motivates the search
for strategies that make it possible to reduce current monitoring times. This
paper shows how multilayer network representation and the application of
multiscale analysis techniques to software-defined networks allow for the
visualization of anomalies from "coarse views of the
network topology". This implies the analysis of fewer data, and consequently
the reduction of the time that a process takes to monitor the network. The fact
that software-defined networks allow for obtaining a global view of
network behavior facilitates detail recovery from affected zones detected in
monitoring processes. The method is evaluated by calculating the reduction
factor of nodes, checked during anomaly detection, with respect to the total
number of nodes in the network.
|
This paper explores an efficient solution for Space-time Super-Resolution,
aiming to generate High-resolution Slow-motion videos from Low Resolution and
Low Frame rate videos. A simple solution is to sequentially run Video
Super-Resolution and Video Frame Interpolation models. However, such
solutions are memory inefficient, have high inference times, and do not make
proper use of the space-time relation. To this end, we first
interpolate in LR space using quadratic modeling. The input LR frames are
super-resolved using a state-of-the-art Video Super-Resolution method. The flow
maps and blending mask used to synthesize the LR interpolated frame are reused
in HR space using bilinear upsampling. This leads to a coarse estimate of the HR
intermediate frame, which often contains artifacts along motion boundaries. We
use a refinement network to improve the quality of the HR intermediate frame via
residual learning. Our model is lightweight and performs better than current
state-of-the-art models in REDS STSR Validation set.
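The quadratic modeling step can be made concrete as follows: from the flows of the centre frame to its two neighbours, a constant-acceleration model yields per-pixel velocity and acceleration, from which the flow to any intermediate instant follows. The sketch below assumes the neighbour flows are already available (e.g., from a pretrained flow network); shapes and values are illustrative.

```python
import numpy as np

def quadratic_flow(flow_0_to_prev, flow_0_to_next, t):
    """Flow maps of shape (H, W, 2); t in (0, 1) is the intermediate instant.
    Constant-acceleration model: f(1) = v + a/2 and f(-1) = -v + a/2."""
    accel = flow_0_to_next + flow_0_to_prev           # recovers a
    veloc = 0.5 * (flow_0_to_next - flow_0_to_prev)   # recovers v
    return veloc * t + 0.5 * accel * t ** 2           # flow from frame 0 to time t

H, W = 4, 4
f_prev = np.full((H, W, 2), [-1.0, 0.0])          # uniform leftward motion
f_next = np.full((H, W, 2), [1.2, 0.0])           # accelerating rightward motion
print(quadratic_flow(f_prev, f_next, 0.5)[0, 0])  # per-pixel flow at t = 0.5
```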
|
How has the solar wind evolved to reach what it is today? In this review, I
discuss the long-term evolution of the solar wind, including the evolution of
observed properties that are intimately linked to the solar wind: rotation,
magnetism and activity. Given that we cannot access data from the solar wind 4
billion years ago, this review relies on stellar data, in an effort to better
place the Sun and the solar wind in a stellar context. I overview some clever
detection methods of winds of solar-like stars, and derive from these an
observed evolutionary sequence of solar wind mass-loss rates. I then link these
observational properties (including, rotation, magnetism and activity) with
stellar wind models. I conclude this review then by discussing implications of
the evolution of the solar wind on the evolving Earth and other solar system
planets. I argue that studying exoplanetary systems could open up new avenues
for progress to be made in our understanding of the evolution of the solar
wind.
|
Despite the continuous increase in the number of ICRF sources, their sky
coverage is still not satisfactory. The goal of this study is to discuss some
new considerations for extending the ICRF source list. Statistical analysis of
the ICRF catalog allows us to identify less populated sky regions where new
ICRF sources or additional observations of the current ICRF sources are most
desirable to improve both the uniformity of the source distribution and the
uniformity of the distribution of the position errors. It is also desirable to
include more sources with high redshift in the ICRF list. These sources may be
of interest for astrophysics. To select prospective new ICRF sources, the OCARS
catalog is used. The number of sources in OCARS is about three times greater
than in the ICRF3, which gives us an opportunity to select new ICRF sources
that have already been tested and detected in astrometric and geodetic VLBI
experiments.
|
We give a short and unified proof of the Brundan-Kleshchev isomorphism
between blocks of cyclotomic Hecke algebras and cyclotomic
Khovanov-Lauda-Rouquier algebras of type A.
|
While averages and typical fluctuations often play a major role to understand
the behavior of a non-equilibrium system, this nonetheless is not always true.
Rare events and large fluctuations are also pivotal when a thorough analysis of
the system is being done. In this context, the statistics of extreme
fluctuations in contrast to the average plays an important role, as has been
discussed in fields ranging from statistical and mathematical physics to
climate, finance and ecology. Herein, we study Extreme Value Statistics (EVS)
of stochastic resetting systems, which have recently gained a lot of interest due
to their ubiquitous and enriching applications in physics, chemistry, queuing
theory, search processes and computer science. We present a detailed analysis
for the finite and large time statistics of extremals (maximum and arg-maximum
i.e., the time when the maximum is reached) of the spatial displacement in such
system. In particular, we derive an exact renewal formula that relates the
joint distribution of maximum and arg-maximum of the reset process to the
statistical measures of the underlying process. Benchmarking our results for
the maximum of a reset-trajectory that pertain to the Gumbel class for large
sample size, we show that the arg-maximum density attains a uniform
distribution regardless of the underlying process at a large observation time.
This emerges as a manifestation of the renewal property of the resetting
mechanism. The results are augmented with a wide spectrum of Markov and
non-Markov stochastic processes under resetting namely simple diffusion,
diffusion with drift, Ornstein-Uhlenbeck process and random acceleration
process in one dimension. Rigorous results are presented for the first two
set-ups while the latter two are supported with heuristic and numerical
analysis.
|
We develop the geometrical, analytical, and computational framework for
reactive island theory for three degrees-of-freedom time-independent
Hamiltonian systems. In this setting, the dynamics occurs in a 5-dimensional
energy surface in phase space and is governed by four-dimensional stable and
unstable manifolds of a three-dimensional normally hyperbolic invariant sphere.
The stable and unstable manifolds have the geometrical structure of spherinders
and we provide the means to investigate the ways in which these spherinders and
their intersections determine the dynamical evolution of trajectories. This
geometrical picture is realized through the computational technique of
Lagrangian descriptors. In a set of trajectories, Lagrangian descriptors allow
us to identify the ones closest to a stable or unstable manifold. Using an
approximation of the manifold on a surface of section we are able to calculate
the flux between two regions of the energy surface.
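The following is a minimal illustration of the Lagrangian-descriptor computation on a one-degree-of-freedom saddle (a far simpler setting than the paper's three degrees of freedom): trajectories are integrated forward and backward for a fixed time and the accumulated arc length is recorded; non-smooth features of the descriptor over initial conditions flag points closest to the stable and unstable manifolds.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(s):
    x, p = s
    return p, x                      # saddle system: H = (p^2 - x^2)/2

def descriptor(x0, p0, tau=5.0):
    """Arc length accumulated forward and backward over time tau."""
    total = 0.0
    for direction in (+1.0, -1.0):
        def aug(t, s):               # state = (x, p, accumulated arc length)
            dx, dp = rhs(s[:2])
            return [direction * dx, direction * dp, np.hypot(dx, dp)]
        sol = solve_ivp(aug, (0.0, tau), [x0, p0, 0.0], rtol=1e-8)
        total += sol.y[2, -1]
    return total

xs = np.linspace(-1.0, 1.0, 201)
ld = np.array([descriptor(x, 0.5) for x in xs])
i_left = np.argmax(np.abs(np.diff(ld[:101], 2)))       # strongest kink, x0 < 0
i_right = 100 + np.argmax(np.abs(np.diff(ld[100:], 2)))
print(xs[i_left + 1], xs[i_right + 1])  # ~ -0.5, 0.5: manifold crossings p = -x, p = x
```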
|
Some data analysis applications comprise datasets, where explanatory
variables are expensive or tedious to acquire, but auxiliary data are readily
available and might help to construct an insightful training set. An example is
neuroimaging research on mental disorders, specifically learning a
diagnosis/prognosis model based on variables derived from expensive Magnetic
Resonance Imaging (MRI) scans, which often requires large sample sizes.
Auxiliary data, such as demographics, might help in selecting a smaller sample
that comprises the individuals with the most informative MRI scans. In active
learning literature, this problem has not yet been studied, despite promising
results in related problem settings that concern the selection of instances or
instance-feature pairs.
Therefore, we formulate this complementary problem of Active Selection of
Classification Features (ASCF): Given a primary task, which requires learning a
model f: x -> y to explain/predict the relationship between an
expensive-to-acquire set of variables x and a class label y. Then, the
ASCF-task is to use a set of readily available selection variables z to select
those instances that will improve the primary task's performance most when
acquiring their expensive features x and including them in the primary training
set.
We propose two utility-based approaches for this problem, and evaluate their
performance on three public real-world benchmark datasets. In addition, we
illustrate the use of these approaches to efficiently acquire MRI scans in the
context of neuroimaging research on mental disorders, based on a simulated
study design with real MRI data.
|
Concept drift detection is a crucial task in data stream evolving
environments. Most state-of-the-art approaches designed to tackle this
problem monitor the loss of predictive models. However, this approach falls
short in many real-world scenarios, where the true labels are not readily
available to compute the loss. In this context, there is increasing attention
to approaches that perform concept drift detection in an unsupervised manner,
i.e., without access to the true labels. We propose a novel approach to
unsupervised concept drift detection based on a student-teacher learning
paradigm. Essentially, we create an auxiliary model (student) to mimic the
behaviour of the primary model (teacher). At run-time, our approach is to use
the teacher for predicting new instances and monitoring the mimicking loss of
the student for concept drift detection. In a set of experiments using 19 data
streams, we show that the proposed approach can detect concept drift and
present competitive behaviour relative to state-of-the-art approaches.
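A toy rendition of this detection loop is sketched below: a tree student is fitted to mimic a linear teacher, the stream then undergoes a covariate shift, and a rise of the mimicking error above a simple threshold flags the drift. The data, models, and threshold rule are all illustrative stand-ins, not the paper's experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)

def batch(n=500, shift=(0.0, 0.0)):        # synthetic stream; drift shifts inputs
    X = rng.standard_normal((n, 2)) + shift
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

X0, y0 = batch(n=4000)
teacher = LogisticRegression(max_iter=1000).fit(X0, y0)    # the primary model
student = DecisionTreeClassifier(max_depth=6).fit(X0, teacher.predict(X0))

for t in range(40):                         # true labels are never used online
    X, _ = batch(shift=(3.0, -3.0) if t >= 20 else (0.0, 0.0))
    mimic_err = np.mean(student.predict(X) != teacher.predict(X))
    if mimic_err > 0.15:                    # toy threshold detector
        print(f"drift flagged at batch {t}, mimicking error {mimic_err:.2f}")
        break
```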
|
We describe in this paper an optimal control strategy for shaping a
large-scale swarm of particles using boundary global actuation. This problem
arises as a key challenge in many swarm robotics applications, especially when
the robots are passive particles that need to be guided by external control
fields. The system is large-scale and underactuated, making the control
strategy at the microscopic particle level infeasible. We consider the
Kolmogorov forward equation associated to the stochastic process of the single
particle to encode the macroscopic behaviour of the particles swarm. The
control inputs shape the velocity field of the density dynamics according to
the physical model of the actuators. We find the optimal actuation considering
an optimal control problem whose state dynamics is governed by a linear
parabolic advection-diffusion equation where the control induces a transport
field. From a theoretical standpoint, we show the existence of a solution to
the resulting nonlinear optimal control problem. From a numerical standpoint,
we employ the discrete adjoint method to accurately compute the reduced
gradient and we show how it commutes with the optimize-then-discretize
approach. Finally, numerical simulations show the effectiveness of the control
strategy in driving the density sufficiently close to the target.
|
The aim of this work is to present an overview about the combination of the
Reduced Basis Method (RBM) with two different approaches for Fluid-Structure
Interaction (FSI) problems, namely a monolithic and a partitioned approach. We
provide the details of implementation of two reduction procedures, and we then
apply them to the same test case of interest. We first implement a reduction
technique that is based on a monolithic procedure where we solve the fluid and
the solid problems all at once. We then present another reduction technique
that is based on a partitioned (or segregated) procedure: the fluid and the
solid problems are solved separately and then coupled using a fixed point
strategy. The toy problem that we consider is based on the Turek-Hron benchmark
test case, with a fluid Reynolds number Re = 100.
|
We establish that for any proper action of a Lie group on a manifold the
associated equivariant differentiable cohomology groups with coefficients in
modules of $\mathcal{C}^\infty$-functions vanish in all degrees except zero.
Furthermore, let $G$ be a Lie group of $CR$-automorphisms of a strictly
pseudo-convex $CR$-manifold $M$. We associate to $G$ a canonical class in the
first differential cohomology of $G$ with coefficients in the
$\mathcal{C}^\infty$-functions on $M$. This class is non-zero if and only if
$G$ is essential in the sense that there does not exist a $CR$-compatible
strictly pseudo-convex pseudo-Hermitian structure on $M$ which is preserved by
$G$. We prove that a closed Lie subgroup $G$ of $CR$-automorphisms acts
properly on $M$ if and only if its canonical class vanishes. As a consequence
of Schoen's theorem, it follows that for any strictly pseudo-convex
$CR$-manifold $M$, there exists a compatible strictly pseudo-convex
pseudo-Hermitian structure such that the CR-automorphism group for $M$ and the
group of pseudo-Hermitian transformations coincide, except for two kinds of
spherical $CR$-manifolds. Similar results hold for conformal Riemannian and
K\"ahler manifolds.
|
Recent work has made significant progress in helping users to automate single
data preparation steps, such as string-transformations and table-manipulation
operators (e.g., Join, GroupBy, Pivot, etc.). In this work, we propose to
automate multiple such steps end-to-end, by synthesizing complex data pipelines
with both string transformations and table-manipulation operators. We propose a
novel "by-target" paradigm that allows users to easily specify the desired
pipeline, which is a significant departure from the traditional by-example
paradigm. Using by-target, users would provide input tables (e.g., csv or json
files), and point us to a "target table" (e.g., an existing database table or
BI dashboard) to demonstrate what the output from the desired pipeline should
schematically "look like". While the problem is seemingly underspecified, our
unique insight is that implicit table constraints such as FDs and keys can be
exploited to significantly constrain the space to make the problem tractable.
We develop an Auto-Pipeline system that learns to synthesize pipelines using
reinforcement learning and search. Experiments on large numbers of real
pipelines crawled from GitHub suggest that Auto-Pipeline can successfully
synthesize 60-70% of these complex pipelines with up to 10 steps.
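To make the FD/key insight concrete, here is a hedged sketch of how constraints
mined from the target table could prune candidate pipeline outputs; the tables,
column names, and helper functions are illustrative, not Auto-Pipeline's actual
interface:

```python
import pandas as pd

def holds_fd(df, lhs, rhs):
    """True if the functional dependency lhs -> rhs holds in df."""
    return bool((df.groupby(list(lhs))[rhs].nunique() <= 1).all())

def is_key(df, cols):
    """True if cols uniquely identify every row of df."""
    return not df.duplicated(subset=list(cols)).any()

# Constraints mined from the user's target table.
target = pd.DataFrame({
    "customer": ["a", "b", "c"],
    "region":   ["EU", "US", "US"],
    "total":    [10, 7, 3],
})
assert is_key(target, ["customer"])
assert holds_fd(target, ["customer"], "region")

# Output of one candidate pipeline: a join that accidentally duplicated
# customers. It violates the mined key, so the synthesizer can prune it
# without scoring it further.
candidate = pd.concat([target, target.iloc[[0]]], ignore_index=True)
print(is_key(candidate, ["customer"]))              # False -> prune
print(holds_fd(candidate, ["customer"], "region"))  # True
```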
|
We show that certain global anomalies can be detected in an elementary
fashion by analyzing the way the symmetry algebra is realized on the torus
Hilbert space of the anomalous theory. Distinct anomalous behaviours imprinted
in the Hilbert space are identified with the distinct cohomology "layers" that
appear in the classification of anomalies in terms of cobordism groups. We
illustrate the manifestation of the layers in the Hilbert space for a variety
anomalous symmetries and spacetime dimensions, including time-reversal
symmetry, and both in systems of fermions and in anomalous topological quantum
field theories (TQFTs) in 2+1d. We argue that anomalies can imply an exact
Bose-Fermi degeneracy in the Hilbert space, thus revealing a supersymmetric
spectrum of states; we provide a sharp characterization of when this phenomenon
occurs and give nontrivial examples in various dimensions, including in
strongly coupled QFTs. Unraveling the anomalies of TQFTs leads us to develop
the construction of the Hilbert spaces, the action of operators and the modular
data in spin TQFTs, material that can be read on its own.
|
Detecting human-object interactions (HOI) is an important step toward a
comprehensive visual understanding of machines. While detecting non-temporal
HOIs (e.g., sitting on a chair) from static images is feasible, it is unlikely
even for humans to guess temporal-related HOIs (e.g., opening/closing a door)
from a single video frame, where the neighboring frames play an essential role.
However, conventional HOI methods operating on only static images have been
used to predict temporal-related interactions, which is essentially guessing
without temporal contexts and may lead to sub-optimal performance. In this
paper, we bridge this gap by detecting video-based HOIs with explicit temporal
information. We first show that a naive temporal-aware variant of a common
action detection baseline does not work on video-based HOIs due to a
feature-inconsistency issue. We then propose a simple yet effective
architecture named Spatial-Temporal HOI Detection (ST-HOI) utilizing temporal
information such as human and object trajectories, correctly-localized visual
features, and spatial-temporal masking pose features. We construct a new video
HOI benchmark dubbed VidHOI where our proposed approach serves as a solid
baseline.
|
With the rise of digital currency systems that rely on blockchain to ensure
ledger security, the ability to perform cross-chain transactions is becoming a
crucial interoperability requirement. Such transactions allow not only funds to
be transferred from one blockchain to another (as done in atomic swaps), but
also a blockchain to verify the inclusion of any event on another blockchain.
Cross-chain bridges are protocols that allow on-chain exchange of
cryptocurrencies, on-chain transfer of assets to sidechains, and cross-shard
verification of events in sharded blockchains, many of which rely on Byzantine
fault tolerance (BFT) for scalability. Unfortunately, existing bridge protocols
that can transfer funds from a BFT blockchain incur significant computation
overhead on the destination blockchain, resulting in a high gas cost for smart
contract verification of events. In this paper, we propose Horizon, a
gas-efficient, cross-chain bridge protocol to transfer assets from a BFT
blockchain to another blockchain (e.g., Ethereum) that supports basic smart
contract execution.
|
Teacher-student models provide a framework in which the typical-case
performance of high-dimensional supervised learning can be described in closed
form. The assumptions of Gaussian i.i.d. input data underlying the canonical
teacher-student model may, however, be perceived as too restrictive to capture
the behaviour of realistic data sets. In this paper, we introduce a Gaussian
covariate generalisation of the model where the teacher and student can act on
different spaces, generated with fixed, but generic feature maps. While still
solvable in a closed form, this generalization is able to capture the learning
curves for a broad range of realistic data sets, thus redeeming the potential
of the teacher-student framework. Our contribution is then two-fold: First, we
prove a rigorous formula for the asymptotic training loss and generalisation
error. Second, we present a number of situations where the learning curve of
the model captures the one of a realistic data set learned with kernel
regression and classification, with out-of-the-box feature maps such as random
projections or scattering transforms, or with pre-learned ones - such as the
features learned by training multi-layer neural networks. We discuss both the
power and the limitations of the framework.
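A small empirical sketch of the kind of learning curve the framework is meant
to capture: ridge regression (the student) on a fixed, generic random feature
map, trained on data labelled by a linear teacher acting on the input space.
The data model, nonlinearity, and penalty are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
d, p = 100, 300
w = rng.normal(size=d)                     # teacher weights on input space
F = rng.normal(size=(d, p)) / np.sqrt(d)   # fixed, generic feature map
phi = lambda X: np.tanh(X @ F)             # student acts on feature space

def sample(n):
    X = rng.normal(size=(n, d))
    return X, X @ w / np.sqrt(d) + 0.1 * rng.normal(size=n)

Xte, yte = sample(2000)
for n in (50, 200, 800, 3200):
    Xtr, ytr = sample(n)
    model = Ridge(alpha=1.0).fit(phi(Xtr), ytr)
    err = np.mean((model.predict(phi(Xte)) - yte) ** 2)
    print(f"n = {n:5d}: test MSE = {err:.4f}")
```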
|
In this paper, we propose a formalization of the process of exploitation of
SQL injection vulnerabilities. We consider a simplification of the dynamics of
SQL injection attacks by casting this problem as a security capture-the-flag
challenge. We model it as a Markov decision process, and we implement it as a
reinforcement learning problem. We then deploy reinforcement learning agents
tasked with learning an effective policy to perform SQL injection; we design
our training in such a way that the agent learns not just a specific strategy
to solve an individual challenge but a more generic policy that may be applied
to perform SQL injection attacks against any system instantiated randomly by
our problem generator. We analyze the results in terms of the quality of the
learned policy and in terms of convergence time as a function of the complexity
of the challenge and the learning agent's complexity. Our work fits in the
wider research on the development of intelligent agents for autonomous
penetration testing and white-hat hacking, and our results aim to contribute to
understanding the potential and the limits of reinforcement learning in a
security environment.
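A toy sketch of this formulation: SQL injection as a capture-the-flag MDP in
which a tabular Q-learning agent must discover the escape syntax of a randomly
instantiated challenge. The three candidate escape styles and the reward scheme
are illustrative assumptions, not the paper's environment:

```python
import random
from collections import defaultdict

STYLES = ["'", '"', "')"]          # hypothetical escape syntaxes

def episode(Q, eps=0.1, alpha=0.5, gamma=0.95):
    secret = random.choice(STYLES)  # challenge instantiated at random
    failed = frozenset()            # state: styles already ruled out
    total = 0.0
    while True:
        actions = [s for s in STYLES if s not in failed]
        a = (random.choice(actions) if random.random() < eps
             else max(actions, key=lambda s: Q[(failed, s)]))
        done = (a == secret)
        r = 10.0 if done else -1.0  # per-probe cost, bonus for the flag
        nxt = failed | {a}
        best_next = 0.0 if done else max(
            Q[(nxt, s)] for s in STYLES if s not in nxt)
        Q[(failed, a)] += alpha * (r + gamma * best_next - Q[(failed, a)])
        total += r
        if done:
            return total
        failed = nxt

Q = defaultdict(float)
returns = [episode(Q) for _ in range(2000)]
print("mean return, last 100 episodes:", sum(returns[-100:]) / 100)
```

Because the challenge is re-instantiated every episode, the learned Q-values
encode a generic probing policy rather than the solution to a single instance.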
|
Most quantum information tasks based on Bell tests rely on the assumption of
measurement independence. However, it is difficult to ensure that the
assumption of measurement independence is always met in experimental
operations, so it is crucial to explore the effects of relaxing this assumption
on Bell tests. In this paper, we discuss the effects of relaxing the assumption
of measurement independence on 1-parameter family of Bell (1-PFB) tests. For
both general and factorizable input distributions, we establish the
relationship among measurement dependence, guessing probability, and the
maximum value of 1-PFB correlation function that Eve can fake. The
deterministic strategy when Eve fakes the maximum value is also given. We
compare the unknown information rate of the chained Bell inequality and the
1-PFB inequality, and find the range of the parameter in which it is more
difficult for Eve to fake the maximum quantum violation in the 1-PFB inequality
than in the chained inequality.
|
For task-oriented dialog systems, training a Reinforcement Learning (RL)
based Dialog Management module suffers from low sample efficiency and slow
convergence speed due to the sparse rewards in RL. To solve this problem, many
strategies have been proposed to give proper rewards when training RL, but
their rewards lack interpretability and cannot accurately estimate the
distribution of state-action pairs in real dialogs. In this paper, we propose a
multi-level reward modeling approach that factorizes a reward into a
three-level hierarchy: domain, act, and slot. Based on inverse adversarial
reinforcement learning, our designed reward model can provide more accurate and
explainable reward signals for state-action pairs. Extensive evaluations show
that our approach can be applied to a wide range of reinforcement
learning-based dialog systems and significantly improves both the performance
and the speed of convergence.
|
In this paper, we study the generalized gapped k-mer filters and derive a
closed form solution for their coefficients. We consider nonnegative integers
$\ell$ and $k$, with $k\leq \ell$, and an $\ell$-tuple
$B=(b_1,\ldots,b_{\ell})$ of integers $b_i\geq 2$, $i=1,\ldots,\ell$. We
introduce and study an incidence matrix $A=A_{\ell,k;B}$. We develop a
M\"obius-like function $\nu_B$ which helps us to obtain closed forms for a
complete set of mutually orthogonal eigenvectors of $A^{\top} A$ as well as a
complete set of mutually orthogonal eigenvectors of $AA^{\top}$ corresponding
to nonzero eigenvalues.
The reduced singular value decomposition of $A$ and combinatorial
interpretations of the nullity and rank of $A$ are among the consequences of
this approach.
We then combine the obtained formulas, some results from linear algebra, and
combinatorial identities of elementary symmetric functions and $\nu_B$, to
provide the entries of the Moore-Penrose pseudo-inverse matrix $A^{+}$ and the
Gapped k-mer filter matrix $A^{+} A$.
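For readers who want to experiment, the following hedged sketch builds the
incidence matrix $A_{\ell,k;B}$ for the small binary case $B=(2,2,2)$, $k=2$
(rows indexed by a choice of $k$ positions plus letters at those positions,
columns by full $\ell$-tuples) and checks the spectrum and pseudo-inverse
numerically; the indexing convention is an assumption:

```python
from itertools import combinations, product
import numpy as np

ell, k, b = 3, 2, 2
words = list(product(range(b), repeat=ell))
gkmers = [(pos, letters)
          for pos in combinations(range(ell), k)
          for letters in product(range(b), repeat=k)]

# A[g, w] = 1 when word w agrees with the gapped k-mer g on its positions.
A = np.array([[1 if all(w[p] == c for p, c in zip(pos, letters)) else 0
               for w in words]
              for pos, letters in gkmers], dtype=float)

A_pinv = np.linalg.pinv(A)            # Moore-Penrose pseudo-inverse
print("A shape:", A.shape)            # (12, 8) in this small case
print("rank(A):", np.linalg.matrix_rank(A))
# Nonzero spectrum of A^T A, as referenced in the abstract.
print("nonzero eigenvalues of A^T A:",
      sorted(v for v in np.linalg.eigvalsh(A.T @ A) if v > 1e-9))
print("gapped k-mer filter A^+ A is symmetric:",
      np.allclose(A_pinv @ A, (A_pinv @ A).T))
```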
|
Total charge and energy evaluations for the electron beams generated in the
laser wakefield acceleration (LWFA) is the primary step in the determination of
the required target and laser parameters. Particle-in-cell (PIC) simulation is
an efficient numerical tool that can provide such evaluations, provided that
the effect of numerical dispersion is diminished. The numerical dispersion,
which is specific to PIC modeling, affects not only the dephasing lengths in
LWFA
but also the total amount of the self-injected electrons. A numerical error of
the order of $10^{-4}-10^{-3}$ in the calculation of the speed of light results
in a significant error in the total injected charge and energy gain of the
accelerated electron bunches. In the standard numerical approach, correcting
the speed of light either requires an infinitesimally small spatial grid
resolution (which demands a large computational platform) or forces a
compromise in numerical accuracy. A simple, easy-to-implement numerical scheme
is
shown to suppress the numerical dispersion of the electromagnetic pulse in PIC
simulations even with a modest spatial resolution, and without any special
treatment of the core structure of the numerical algorithm. The evaluated
charges of the self-injected electron bunches turn out to be substantially
lower owing to the improved calculation of the wake phase velocity.
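The scale of the effect can be illustrated with the textbook dispersion
relation of the 1D Yee/FDTD scheme, $\sin(\omega\Delta t/2) = (c\Delta
t/\Delta x)\sin(k\Delta x/2)$, rather than any PIC-specific stencil; the
resolutions and the Courant number below are assumptions:

```python
import numpy as np

c = 1.0
for ppw in (10, 20, 40):                 # points per wavelength
    dx = 1.0 / ppw                       # wavelength set to 1
    dt = 0.5 * dx / c                    # Courant number 0.5
    k = 2 * np.pi                        # wavenumber of that wavelength
    # Solve the numerical dispersion relation for omega.
    w = (2 / dt) * np.arcsin((c * dt / dx) * np.sin(k * dx / 2))
    v_num = w / k                        # numerical phase velocity
    print(f"{ppw:3d} pts/lambda: v_num/c = {v_num/c:.6f}, "
          f"error = {abs(v_num - c)/c:.2e}")
```

Even at 40 points per wavelength the relative phase-velocity error is of order
$10^{-4}$-$10^{-3}$, the very range the abstract identifies as significant for
the injected charge.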
|
We study the scheduling of jobs on a single parallel-batching machine with
non-identical job sizes and incompatible job families. Jobs from the same
family have the same processing time and can be loaded into a batch, as long as
the batch size respects the machine capacity. The objective is to minimize the
total weighted completion time. The problem combines two classic combinatorial
problems, namely bin packing and single machine scheduling. We develop three
new mixed-integer linear-programming formulations, namely an assignment-based
formulation, a time-indexed formulation (TIF), and a set-partitioning
formulation (SPF). We also propose a column generation (CG) algorithm for the
SPF, which is the basis for a branch-and-price (B&P) algorithm and a CG-based
heuristic. We develop a preprocessing method to reduce the formulation size. A
heuristic framework based on proximity search is also developed using the TIF.
The SPF and B&P can solve instances with non-unit and unit job durations to
optimality with up to 80 and 150 jobs within reasonable runtime limits,
respectively. The proposed heuristics perform better than the methods from the
literature.
|
Implementation attacks like side-channel and fault attacks pose a
considerable threat to cryptographic devices that are physically accessible by
an attacker. As a consequence, devices like smart cards implement corresponding
countermeasures like redundant computation and masking. Recently, statistically
ineffective fault attacks (SIFA) were shown to be able to circumvent these
classical countermeasure techniques. We present a new approach for verifying
the SIFA protection of arbitrary masked implementations in both hardware and
software. The proposed method uses Boolean dependency analysis, factorization,
and known properties of masked computations to show whether the fault detection
mechanism of redundant masked circuits can leak information about the processed
secret values. We implemented this new method in a tool called Danira, which
can show the SIFA resistance of cryptographic implementations like AES S-Boxes
within minutes.
|
We demonstrate theoretically and experimentally that a specifically designed
microcavity driven in the optical parametric oscillation regime exhibits
lighthouse-like emission, i.e., an emission focused around a single direction.
Remarkably, the emission direction of this micro-lighthouse is continuously
controlled by the linear polarization of the incident laser, and angular beam
steering over $360^\circ$ is demonstrated. Theoretically, this
unprecedented effect arises from the interplay between the nonlinear optical
response of microcavity exciton-polaritons, the difference in the subcavities
forming the microcavity, and the rotational invariance of the device.
|
Simulation-based Dynamic Traffic Assignment models have important
applications in real-time traffic management and control. The efficacy of these
systems rests on the ability to generate accurate estimates and predictions of
traffic states, which necessitates online calibration. A widely used solution
approach for online calibration is the Extended Kalman Filter (EKF), which --
although appealing in its flexibility to incorporate any class of parameters
and measurements -- poses several challenges with regard to calibration
accuracy and scalability, especially in congested situations for large-scale
networks. This paper addresses these issues in turn so as to improve the
accuracy and efficiency of EKF-based online calibration approaches for large
and congested networks. First, the concept of state augmentation is revisited
to handle violations of the Markovian assumption typically implicit in online
applications of the EKF. Second, a method based on graph-coloring is proposed
to operationalize the partitioned finite-difference approach that enhances
scalability of the gradient computations.
Several synthetic experiments and a real world case study demonstrate that
application of the proposed approaches yields improvements in terms of both
prediction accuracy and computational performance. The work has applications in
real-world deployments of simulation-based dynamic traffic assignment systems.
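A minimal sketch of the graph-colouring idea: parameters that influence no
common measurement receive the same colour and are perturbed together, so one
simulation run yields several columns of the finite-difference Jacobian. The
sparsity pattern below is a random illustrative assumption:

```python
import numpy as np

def greedy_coloring(conflict):
    """Greedy colouring of a symmetric boolean conflict matrix."""
    n = len(conflict)
    colors = [-1] * n
    for v in range(n):
        used = {colors[u] for u in range(n)
                if conflict[v][u] and colors[u] >= 0}
        colors[v] = next(c for c in range(n) if c not in used)
    return colors

# sparsity[m, p] = True if measurement m is sensitive to parameter p.
rng = np.random.default_rng(1)
sparsity = rng.random((30, 12)) < 0.2
conflict = (sparsity.T.astype(int) @ sparsity.astype(int)) > 0
np.fill_diagonal(conflict, False)

colors = greedy_coloring(conflict.tolist())
groups = {c: [p for p, cp in enumerate(colors) if cp == c]
          for c in set(colors)}
print(f"{len(groups)} simulations instead of {len(colors)}:")
for c, params in sorted(groups.items()):
    print(f"  colour {c}: perturb parameters {params} simultaneously")
```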
|
The ground states of an antiferromagnetic Heisenberg model on $L \times L$
clusters joined by a single bond and on balanced Bethe clusters are
investigated with
quantum Monte Carlo and mean field theory. The improved Monte Carlo method of
Sandvik and Evertz is used and the observables include valence bond and loop
valence bond observables introduced by Lin and Sandvik as well as the valence
bond entropy and the second Renyi entropy. For the bisecting of the Bethe
cluster, in disagreement with our previous results and in agreement with mean
field theory, the valence loop entropy and the second Renyi entropy scale as
the logarithm of the number of sites in the cluster. For bisecting the $L
\times L$ - $L \times L$ clusters, the valence bond entropy scales as $L$;
however, the loop entropy and the entanglement entropy scale as $\ln(L)$. The
calculations suggest that
the area law is essentially correct and linking high entanglement objects will
not generate much more entanglement.
|
The formation and presence of clathrate hydrates could influence the
composition and stability of planetary ices and comets; they are at the heart
of the development of numerous complex planetary models, all of which include
the necessary condition imposed by their stability curves, some of which
include the cage occupancy or host-guest content and the hydration number, but
fewer take into account the kinetic aspects. We measure the
temperature-dependent-diffusion-controlled formation of the carbon dioxide
clathrate hydrate in the 155-210~K range in order to establish the clathrate
formation kinetics at low temperature. We exposed thin water ice films of a few
microns in thickness deposited in a dedicated infrared transmitting closed cell
to gaseous carbon dioxide maintained at a pressure of a few times the pressure
at which carbon dioxide clathrate hydrate is thermodynamically stable. The time
dependence of the clathrate formation was monitored with the recording of
specific infrared vibrational modes of CO2 with a Fourier Transform InfraRed
(FTIR) spectrometer. These experiments clearly show a two-step clathrate
formation, particularly at low temperature, within a relatively simple
geometric configuration. We satisfactorily applied a model combining surface
clathration followed by a bulk diffusion-relaxation growth process to the
experiments and derived the temperature-dependent-diffusion coefficient for the
bulk spreading of clathrate. The derived apparent activation energy
corresponding to this temperature-dependent-diffusion coefficient in the
considered temperature range is $E_a = 24.7 \pm 9.7$ kJ/mol. The kinetic
parameters favour possible carbon dioxide clathrate hydrate nucleation, mainly
in planets or satellites.
|
The quantum Cram\'er-Rao bound is a cornerstone of modern quantum metrology,
as it provides the ultimate precision in parameter estimation. In the
multiparameter scenario, this bound becomes a matrix inequality, which can be
cast to a scalar form with a properly chosen weight matrix. Multiparameter
estimation thus elicits tradeoffs in the precision with which each parameter
can be estimated. We show that, if the information is encoded in a unitary
transformation, we can naturally choose the weight matrix as the metric tensor
linked to the geometry of the underlying algebra $\mathfrak{su}(n)$, with
applications in numerous fields. This ensures an intrinsic bound that is
independent of the choice of parametrization.
|
The constant growth in the number of malware samples - software or code
fragments potentially harmful to computers and information networks - and the
use of sophisticated evasion and obfuscation techniques have seriously hindered
classic signature-based approaches. On the other hand, malware detection
systems based on machine learning techniques started offering a promising
alternative to standard approaches, drastically reducing analysis time and
turning out to be more robust against evasion and obfuscation techniques. In
this paper, we propose a malware taxonomic classification pipeline able to
classify Windows Portable Executable files (PEs). Given an input PE sample, it
is first classified as either malicious or benign. If malicious, the pipeline
further analyzes it in order to establish its threat type, family, and
behavior(s). We tested the proposed pipeline on the open source dataset EMBER,
containing approximately 1 million PE samples, analyzed through static
analysis. The obtained malware detection results are comparable to other
academic works at the current state of the art and, in addition, we provide an
in-depth classification of malicious samples. The models used in the pipeline
provide interpretable results, which can help security analysts better
understand the decisions taken by the automated pipeline.
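Schematically, the pipeline can be thought of as a binary detector followed by
a taxonomy classifier applied only to flagged samples. In this hedged sketch
the synthetic vectors stand in for EMBER-style static features, and all model
and label choices are assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 16))                      # static PE features
is_malicious = (X[:, 0] + X[:, 1] > 0).astype(int)  # stage-1 labels
family = np.where(X[:, 2] > 0, "trojan", "worm")    # stage-2 labels

detector = GradientBoostingClassifier().fit(X, is_malicious)
clf_family = GradientBoostingClassifier().fit(
    X[is_malicious == 1], family[is_malicious == 1])

def classify(x):
    x = x.reshape(1, -1)
    if detector.predict(x)[0] == 0:
        return "benign"
    return clf_family.predict(x)[0]   # threat family for malicious PEs

print(classify(rng.normal(size=16)))
```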
|
In analogy with the superalgebra of multivector fields equipped with the
Schouten bracket, we introduce a non-trivial superbracket on the differential
forms of a manifold. We establish properties of this new superalgebra and
extend it by adding one factor; the extended superalgebra deserves to be
studied more widely and in depth. We also study the Betti numbers of double
weighted homology groups defined via the Euler vector field. In the appendix,
we explain how our bracket is produced in a manner analogous to the Schouten
bracket.
|
The charge, spin, and composition degrees of freedom in high-entropy alloys
endow them with tunable valence and spin states, virtually unlimited
combinations, and excellent mechanical performance. Meanwhile, the stacking,
interlayer, and angle degrees of freedom in van der Waals materials provide
them with exceptional features and technological applications. Integrating
these two distinct
material categories while keeping their merits would be tempting. Based on this
heuristic thinking, we design and explore a new range of materials (i.e.,
dichalcogenides, halides and phosphorus trisulfides) with multiple metallic
constitutions and intrinsic layered structure, which are coined as high-entropy
van der Waals materials. Millimeter-scale single crystals with homogeneous
element distribution can be efficiently acquired and easily exfoliated or
intercalated in this materials category. Multifarious physical properties like
superconductivity, magnetic ordering, metal-insulator transition and corrosion
resistance have been exploited. Further research based on the concept of
high-entropy van der Waals materials will enrich the high-throughput design of
new systems with intriguing properties and practical applications.
|
In this paper, we study the Sobolev extension property of Lp-quasidisks, which
generalize the classical quasidisks. We also present some applications of this
extension property.
|
Leakage outside of the qubit computational subspace poses a threatening
challenge to quantum error correction (QEC). We propose a scheme using two
leakage-reduction units (LRUs) that mitigate these issues for a transmon-based
surface code, without requiring an overhead in terms of hardware or QEC-cycle
time as in previous proposals. For data qubits we consider a microwave drive to
transfer leakage to the readout resonator, where it quickly decays, ensuring
that this negligibly affects the coherence within the computational subspace
for realistic system parameters. For ancilla qubits we apply a
$|1\rangle\leftrightarrow|2\rangle$ $\pi$ pulse conditioned on the measurement
outcome. Using density-matrix simulations of the distance-3 surface code we
show that the average leakage lifetime is reduced to almost 1 QEC cycle, even
when the LRUs are implemented with limited fidelity. Furthermore, we show that
this leads to a significant reduction of the logical error rate. This LRU
scheme opens the prospect for near-term scalable QEC demonstrations.
|
For the exactly solvable model of exponential last passage percolation on
$\mathbb{Z}^2$, consider the geodesic $\Gamma_n$ joining $(0,0)$ and $(n,n)$
for large $n$. It is well known that the transversal fluctuation of $\Gamma_n$
around the line $x=y$ is $n^{2/3+o(1)}$ with high probability. We obtain the
exponent governing the decay of the small ball probability for $\Gamma_{n}$ and
establish that for small $\delta$, the probability that $\Gamma_{n}$ is
contained in a strip of width $\delta n^{2/3}$ around the diagonal is $\exp
(-\Theta(\delta^{-3/2}))$ uniformly for all large $n$. We also obtain optimal small
deviation estimates for the one point distribution of the geodesic showing that
for $\frac{t}{2n}$ bounded away from $0$ and $1$, we have
$\mathbb{P}(|x(t)-y(t)|\leq \delta n^{2/3})=\Theta(\delta)$ uniformly for all large
$n$, where $(x(t),y(t))$ is the unique point where $\Gamma_{n}$ intersects the
line $x+y=t$. Our methods are expected to go through for other exactly solvable
models of planar last passage percolation and, upon taking the $n\to \infty$
limit, provide analogous estimates for geodesics in the directed landscape.
|
We propose a modified quantum teleportation scheme to increase the
teleportation accuracy by applying a cubic phase gate to the displaced squeezed
state. We describe the proposed scheme in the Heisenberg picture, evaluate the
error it introduces into teleportation, and show that it achieves a smaller
error than the original scheme. Repeating the description in terms of wave
functions, we find the range of displacement values for which our conclusions
remain valid. Using
the example of teleportation of the vacuum state, we have shown that the scheme
allows one to achieve high fidelity values.
|
Non-Hermitian skin effects and exceptional points are topological phenomena
characterized by integer winding numbers. In this study, we give methods to
theoretically detect skin effects and exceptional points by generalizing
inversion symmetry. The generalization of inversion symmetry is unique to
non-Hermitian systems. We show that parities of the winding numbers can be
determined from energy eigenvalues on the inversion-invariant momenta when
generalized inversion symmetry is present. The simple expressions for the
winding numbers allow us to easily analyze skin effects and exceptional points
in non-Hermitian bands. We also demonstrate the methods for (second-order) skin
effects and exceptional points by using lattice models.
|
Neurofibromatosis type 1 (NF1) is an autosomal dominant tumor predisposition
syndrome that involves the central and peripheral nervous systems. Accurate
detection and segmentation of neurofibromas are essential for assessing tumor
burden and longitudinal tumor size changes. Automated convolutional neural
networks (CNNs) are sensitive and vulnerable to tumors' variable anatomical
locations and heterogeneous appearance on MRI. In this study, we propose deep
interactive networks (DINs) to address the above limitations. User interactions
guide the model to recognize complicated tumors and quickly adapt to
heterogeneous tumors. We introduce a simple but effective Exponential Distance
Transform (ExpDT) that converts user interactions into guide maps regarded as
the spatial and appearance prior. Compared with the popular Euclidean and
geodesic distances, ExpDT is more robust to varying image sizes and preserves
the distribution of the interactive inputs. Furthermore, to enhance the
tumor-related
features, we design a deep interactive module to propagate the guides into
deeper layers. We train and evaluate DINs on three MRI data sets from NF1
patients. The experimental results show significant DSC improvements of 44% and
14% over automated and other interactive methods, respectively. We also
experimentally demonstrate the efficiency of DINs in reducing user burden
compared with conventional interactive methods. The source code of our
method is available at \url{https://github.com/Jarvis73/DINs}.
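A small sketch of the ExpDT idea, assuming the guide value decays as
exp(-d/tau) with the Euclidean distance d from the nearest click; tau and the
normalisation are assumed hyper-parameters and may differ from the paper's:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def exp_dt(shape, clicks, tau=10.0):
    """Guide map in [0, 1]: 1 at a click, exp(-d/tau) elsewhere."""
    seeds = np.ones(shape, dtype=bool)
    for r, c in clicks:
        seeds[r, c] = False
    dist = distance_transform_edt(seeds)  # distance to nearest click
    return np.exp(-dist / tau)

guide = exp_dt((64, 64), clicks=[(20, 30), (40, 45)])
print(guide.max(), guide[20, 30])         # 1.0 at the click locations
# The guide map is stacked with the MRI as an extra input channel; its
# bounded range keeps it insensitive to image size, unlike a raw distance map.
```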
|
Medical terminology normalization aims to map clinical mentions to
terminologies from a knowledge base, which plays an important role in analyzing
Electronic Health Records (EHRs) and in many downstream tasks. In this paper,
we focus on Chinese procedure terminology normalization. Terminology
expressions vary widely, and one medical mention may be linked to multiple
terminologies. Previous studies explore methods such as multi-class
classification or learning to rank (LTR) to sort the terminologies by literal
and semantic information. However, this information is inadequate for finding
the right terminologies, particularly in multi-implication cases. In this work,
we propose a combined recall-and-rank framework to solve the above problems.
The framework is composed of a multi-task candidate generator (MTCG), a
keywords-attentive ranker (KAR), and a fusion block (FB). MTCG is used to
predict the number of implications of a mention and to recall candidates by
semantic similarity. KAR is based on BERT with a keywords-attentive mechanism
that focuses on keywords such as procedure sites and procedure types. FB merges
the similarity scores from MTCG and KAR to sort the terminologies from
different perspectives. Detailed experimental analysis shows that our proposed
framework achieves remarkable improvements in both performance and efficiency.
|
Manipulating valley-dependent Berry phase effects provides remarkable
opportunities for both fundamental research and practical applications. Here,
based on effective model analysis, we propose a general scheme for realizing
topological magneto-valley phase transitions. More importantly, using
valley-half-semiconducting VSi2N4 as an outstanding example, we investigate the
sign change of valley-dependent Berry phase effects, which drives sign-reversed
valley anomalous transport characteristics under external means such as biaxial
strain, electric field, and correlation effects. As a result,
this gives rise to quantized versions of valley anomalous transport phenomena.
Our findings not only uncover a general framework to control valley degree of
freedom, but also motivate further research in the direction of multifunctional
quantum devices in valleytronics and spintronics.
|
The rise of algorithmic decision-making has spawned much research on fair
machine learning (ML). Financial institutions use ML for building risk
scorecards that support a range of credit-related decisions. Yet, the
literature on fair ML in credit scoring is scarce. The paper makes three
contributions. First, we revisit statistical fairness criteria and examine
their adequacy for credit scoring. Second, we catalog algorithmic options for
incorporating fairness goals in the ML model development pipeline. Last, we
empirically compare different fairness processors in a profit-oriented credit
scoring context using real-world data. The empirical results substantiate the
evaluation of fairness measures, identify suitable options to implement fair
credit scoring, and clarify the profit-fairness trade-off in lending decisions.
We find that multiple fairness criteria can be approximately satisfied at once
and recommend separation as a proper criterion for measuring the fairness of a
scorecard. We also find fair in-processors to deliver a good balance between
profit and fairness and show that algorithmic discrimination can be reduced to
a reasonable level at a relatively low cost. The codes corresponding to the
paper are available on GitHub.
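As an illustration of the recommended separation criterion, the following
hedged sketch measures the true- and false-positive-rate gaps of thresholded
scores across two groups; the data, threshold, and mild score unfairness are
synthetic assumptions:

```python
import numpy as np

def separation_gaps(y_true, y_score, group, thr=0.5):
    d = (y_score >= thr).astype(int)
    rates = {}
    for g in np.unique(group):
        m = group == g
        tpr = d[m & (y_true == 1)].mean()  # P(d=1 | y=1, group=g)
        fpr = d[m & (y_true == 0)].mean()  # P(d=1 | y=0, group=g)
        rates[g] = (tpr, fpr)
    (tpr_a, fpr_a), (tpr_b, fpr_b) = rates.values()
    return abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
grp = rng.integers(0, 2, 1000)
score = 0.5 * y + 0.2 * grp + 0.2 * rng.normal(size=1000)  # mildly unfair
print("TPR gap, FPR gap:", separation_gaps(y, score, grp))
```

Separation holds (approximately) when both gaps are close to zero; a fair
processor should shrink them at limited cost in profit.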
|
Complex singularities have been suggested in propagators of confined
particles, e.g., the Landau-gauge gluon propagator. We rigorously reconstruct
Minkowski propagators from Euclidean propagators with complex singularities. As
a result, the analytically continued Wightman function is holomorphic in the
tube, and the Lorentz symmetry and locality are kept intact, whereas the
reconstructed Wightman function violates the temperedness and the positivity
condition. Moreover, we argue that complex singularities correspond to confined
zero-norm states in an indefinite metric state space.
|
For a fixed positive $\epsilon$, we show the existence of a constant
$C_\epsilon$ with the following property: Given a $\pm1$-edge-labeling
$c:E(K_n)\to \{ -1,1\}$ of the complete graph $K_n$ with $c(E(K_n))=0$, and a
spanning forest $F$ of $K_n$ of maximum degree $\Delta$, one can determine in
polynomial time an isomorphic copy $F'$ of $F$ in $K_n$ with $|c(E(F'))|\leq
\left(\frac{3}{4}+\epsilon\right)\Delta+C_\epsilon.$ Our approach is based on
the method of conditional expectation.
|
Current perception systems often carry multimodal imagers and sensors such as
2D cameras and 3D LiDAR sensors. To fuse and utilize the data for downstream
perception tasks, robust and accurate calibration of the multimodal sensor data
is essential. We propose a novel deep learning-driven technique (CalibDNN) for
accurate calibration among multimodal sensors, specifically LiDAR-camera pairs.
The key innovation of the proposed work is that it requires neither specific
calibration targets nor hardware assistance, and the entire processing is fully
automatic with a single model and a single iteration. Comparisons among
different methods and extensive experiments on different datasets demonstrate
state-of-the-art performance.
|
In the present study, we propose to implement a new framework for estimating
generative models via an adversarial process to extend an existing GAN
framework and develop a white-box controllable image cartoonization, which can
generate high-quality cartooned images/videos from real-world photos and
videos. The learning objectives of our system are based on three distinct
representations: surface representation, structure representation, and texture
representation. The surface representation refers to the smooth surface of the
images. The structure representation relates to the sparse colour blocks and
compresses generic content. The texture representation shows the texture,
curves, and features in cartoon images. The Generative Adversarial Network
(GAN) framework decomposes the images into these representations and learns
from them to generate cartoon images. This decomposition makes the framework
more controllable and flexible, allowing users to make changes based on the
required output. This approach surpasses previous systems in maintaining the
clarity, colours, textures, and shapes of images while still exhibiting the
characteristics of cartoon images.
|
The time average expected age of information (AoI) is studied for status
updates sent over an error-prone channel from an energy-harvesting transmitter
with a finite-capacity battery. The energy cost of sensing new status updates
is taken into account, in addition to the transmission energy cost, to better
capture practical systems. The optimal scheduling policy is first studied under
the
hybrid automatic repeat request (HARQ) protocol when the channel and energy
harvesting statistics are known, and the existence of a threshold-based optimal
policy is shown. For the case of unknown environments, average-cost
reinforcement-learning algorithms are proposed that learn the system parameters
and the status update policy in real-time. The effectiveness of the proposed
methods is demonstrated through numerical results.
|
We identify potential early markets for fusion energy and their projected
cost targets, based on analysis and synthesis of many relevant, recent studies
and reports. Because private fusion companies aspire to start commercial
deployment before 2040, we examine cost requirements for fusion-generated
electricity, process heat, and hydrogen production based on today's market
prices but with various adjustments relating to possible scenarios in 2035,
such as "business-as-usual," high renewables penetration, and carbon pricing up
to 100 \$/tCO$_2$. Key findings are that fusion developers should consider
focusing initially on high-priced global electricity markets and including
integrated thermal storage in order to maximize revenue and compete in markets
with high renewables penetration. Process heat and hydrogen production will be
tough early markets for fusion, but may open up to fusion as markets evolve and
if fusion's levelized cost of electricity falls below 50 \$/MWh$_\mathrm{e}$.
Finally, we discuss potential ways for a fusion plant to increase revenue via
cogeneration (e.g., desalination, direct air capture, or district heating) and
to lower capital costs (e.g., by minimizing construction times and interest or
by retrofitting coal plants).
|
We study synchronizing partial DFAs, which extend the classical concept of
synchronizing complete DFAs and are a special case of synchronizing unambiguous
NFAs. A partial DFA is called synchronizing if it has a word (called a reset
word) whose action brings a non-empty subset of states to a unique state and is
undefined for all other states. While in the general case the problem of
checking whether a partial DFA is synchronizing is PSPACE-complete, we show
that in the strongly connected case this problem can be efficiently reduced to
the same problem for a complete DFA. Using combinatorial, algebraic, and formal
language methods, we develop techniques that relate the main synchronization
problems for strongly connected partial DFAs with the same problems for
complete DFAs. In particular, this includes the \v{C}ern\'{y} and the rank
conjectures, the problem of finding a reset word, and upper bounds on the
length of the shortest reset words of literal automata of finite prefix codes.
We conclude that solving fundamental synchronization problems is equally hard
in both models, as an essential improvement of the results for one model
implies an improvement for the other.
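For intuition, here is a compact sketch of the classical greedy reset-word
procedure for a complete DFA (the model to which the strongly connected partial
case is reduced above); the example automaton is the \v{C}ern\'{y} automaton
$C_4$, and the greedy pair choice is an illustrative heuristic:

```python
from collections import deque

def delta(trans, states, word):
    for a in word:
        states = frozenset(trans[q][a] for q in states)
    return states

def merging_word(trans, p, q):
    """BFS for a shortest word w with delta({p}, w) == delta({q}, w)."""
    seen, queue = {(p, q)}, deque([((p, q), "")])
    while queue:
        (u, v), w = queue.popleft()
        if u == v:
            return w
        for a in trans[u]:
            nxt = (trans[u][a], trans[v][a])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, w + a))
    return None                   # pair not mergeable: not synchronizing

def greedy_reset_word(trans):
    current, word = frozenset(trans), ""
    while len(current) > 1:
        p, q, *_ = sorted(current)
        w = merging_word(trans, p, q)
        if w is None:
            return None
        word += w
        current = delta(trans, current, w)
    return word

# Cerny automaton C_4: 'a' is a cyclic shift, 'b' maps state 3 to 0.
trans = {0: {"a": 1, "b": 0}, 1: {"a": 2, "b": 1},
         2: {"a": 3, "b": 2}, 3: {"a": 0, "b": 0}}
w = greedy_reset_word(trans)
print(w, delta(trans, frozenset(trans), w))  # reset word -> singleton set
```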
|
In this work, the antibacterial activity of the polymeric precursor
dicarbonyldichlororuthenium has been studied against Escherichia coli and
Staphylococcus aureus. This Ru carbonyl precursor shows a minimum inhibitory
concentration at the nanogram-per-millilitre level, rendering it a novel
antimicrobial polymer without any organic ligands. Moreover, the antimicrobial
activity of dicarbonyldichlororuthenium is markedly boosted under
photoirradiation, which can be ascribed to the enhanced generation of reactive
oxygen species under UV irradiation. This compound inhibits bacterial growth by
disrupting bacterial membranes and triggering the upregulation of stress
responses, as shown by microscopic measurements. The
activity of polymeric ruthenium as an antibacterial material is significant
even at very low concentrations while remaining biocompatible to the mammalian
cells at much higher concentrations. This study demonstrates that this simple
Ru carbonyl precursor can be used as an antimicrobial compound with high
activity and a low toxicity profile, addressing the need for new antimicrobial
agents to fight bacterial infections.
|
The radio galaxy 1321+045 is a rare example of a young, compact steep
spectrum source located in the center of a z=0.263 galaxy cluster. Using a
combination of Chandra, VLBA, VLA, MERLIN and IRAM 30m observations, we
investigate the conditions which have triggered this outburst. We find that the
previously identified 5 kpc scale radio lobes are probably no longer powered by
the AGN, which seems to have launched a new ~20 pc jet on a different axis,
likely within the last few hundred years. We estimate the enthalpy of the lobes
to be 8.48 [+6.04,-3.56] x10^57 erg, only sufficient to balance cooling in the
surrounding 16 kpc for ~9 Myr. The cluster ICM properties are similar to those
of rapidly cooling nearby clusters, with a low central entropy (8.6 [+2.2,-1.4]
keV cm^2 within 8 kpc), short central cooling time (390 [+170,-150] Myr), and
t_cool/t_ff and t_cool/t_eddy ratios indicative of thermal instability out to
~45 kpc. Despite the previous detection of H$\alpha$ emission from the BCG, our
IRAM 30m observations do not detect CO emission in either the (1-0) or (3-2)
transitions. We place $3\sigma$ limits on the molecular gas mass of
$M_{\rm mol} \leq 7.7\times10^9~M_\odot$ and $\leq 5.6\times10^9~M_\odot$ from
the two lines, respectively. We find
indications of a recent minor cluster merger which has left a ~200 kpc tail of
stripped gas in the ICM, and probably induced sloshing motions.
|
Recent spectroscopic observations by sensitive radio telescopes require
accurate molecular spectral line frequencies to identify molecular species in a
forest of detected lines. To measure the rest frequencies of molecular spectral
lines in the laboratory, an emission-type millimeter and submillimeter-wave
spectrometer utilizing state-of-the-art radio-astronomical technologies is
developed. The spectrometer is equipped with a 200 cm glass cylinder cell, a
two sideband (2SB) Superconductor-Insulator-Superconductor (SIS) receiver in
the 230 GHz band, and wide-band auto-correlation digital spectrometers. By
using four 2.5 GHz digital spectrometers, the full 8 GHz instantaneous
bandwidth of the 2SB SIS receiver can be covered with a frequency resolution of
88.5 kHz. Spectroscopic measurements of CH$_3$CN and HDO are carried out in the
230 GHz band so as to examine the frequency accuracy, stability, sensitivity,
and intensity calibration accuracy of our system. For CH$_3$CN, we confirm that
the frequency accuracy for lines detected with a sufficient signal-to-noise
ratio is better than 1 kHz when the high-resolution spectrometer with a channel
resolution of 17.7 kHz is used. In addition, we
demonstrate the capability of this system by spectral scan measurement of
CH$_3$OH from 216 GHz to 264 GHz. We assign 242 transitions of CH$_3$OH, 51
transitions of $^{13}$CH$_3$OH, and 21 unidentified emission lines for 295
detected lines. Consequently, our spectrometer demonstrates sufficient
sensitivity, spectral resolution, and frequency accuracy for in-situ,
experiment-based rest-frequency measurements of spectral lines of various
molecular species.
|
Achieving human-level performance on some of Machine Reading Comprehension
(MRC) datasets is no longer challenging with the help of powerful Pre-trained
Language Models (PLMs). However, it is necessary to provide both answer
prediction and its explanation to further improve the MRC system's reliability,
especially for real-life applications. In this paper, we propose a new
benchmark called ExpMRC for evaluating the explainability of the MRC systems.
ExpMRC contains four subsets, including SQuAD, CMRC 2018, RACE$^+$, and C$^3$
with additional annotations of the answer's evidence. The MRC systems are
required to give not only the correct answer but also its explanation. We use
state-of-the-art pre-trained language models to build baseline systems and
adopt various unsupervised approaches to extract evidence without a
human-annotated training set. The experimental results show that these models
are still far from human performance, suggesting that the ExpMRC is
challenging. Resources will be available through
https://github.com/ymcui/expmrc
|
We consider thermal effects in the propagation of gravitational waves on a
cosmological background. In particular, we consider scalar field cosmologies
and study gravitational modes near cosmological singularities. We point out
that the contribution of thermal radiation can heavily affect the dynamics of
gravitational waves, giving enhancement or dissipation effects at both the
quantum and classical levels. These effects are considered both in General
Relativity and
in modified theories like $F(R)$ gravity which can be easily reduced to
scalar-tensor cosmology. The possible detection and disentanglement of standard
and scalar gravitational modes on the stochastic background are also discussed.
|
Exceptional points (EPs), i.e., non-Hermitian degeneracies at which
eigenvalues and eigenvectors coalesce, can be realized by tuning the gain/loss
contrast of different modes in non-Hermitian systems or by engineering the
asymmetric coupling of modes. Here we demonstrate a mechanism that can achieve
EPs of arbitrary order by employing the non-reciprocal coupling of spinning
cylinders sitting on a dielectric waveguide. The spinning motion breaks the
time-reversal symmetry and removes the degeneracy of opposite chiral modes of
the cylinders. Under the excitation of a linearly polarized plane wave, the
chiral mode of one cylinder can unidirectionally couple to the same mode of the
other cylinder via the spin-orbit interaction associated with the evanescent
wave of the waveguide. The structure can give rise to arbitrary-order EPs that
are robust against spin-flipping perturbations, in contrast to conventional
systems relying on spin-selective excitations. In addition, we show that
higher-order EPs in the proposed system are accompanied by enhanced optical
isolation, which may find applications in designing novel optical isolators,
nonreciprocal optical devices, and topological photonics.
|
The design of the cross-section of an FRP-reinforced concrete beam is an
iterative process of estimating both its dimensions and the reinforcement
ratio, followed by a check of compliance with a number of strength and
serviceability constraints. The process continues until a suitable solution is
found. Since the problem admits infinitely many solutions, it is convenient
to define some optimality criteria so as to measure the relative goodness of
the different solutions. This paper intends to develop a preliminary least-cost
section design model that follows the recommendations in the ACI 440.1 R-06,
and uses a relatively new artificial intelligence technique called particle
swarm optimization (PSO) to handle the optimization tasks. The latter is based
on the intelligence that emerges from the low-level interactions among a number
of relatively non-intelligent individuals within a population.
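A bare-bones PSO sketch in this spirit: particles encode (width, depth,
reinforcement ratio) and minimise a section cost with a penalised strength
constraint. The cost coefficients, the crude capacity expression, and the PSO
hyper-parameters are all illustrative assumptions, not the ACI 440.1R-06
checks themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
LO = np.array([0.20, 0.30, 0.002])   # b [m], h [m], rho [-]
HI = np.array([0.50, 0.90, 0.020])

def cost(x):
    b, h, rho = x.T
    concrete = 100.0 * b * h                  # hypothetical unit costs
    frp = 4000.0 * rho * b * h
    capacity = 30000.0 * rho * b * h * h      # crude stand-in for strength
    penalty = 1e4 * np.maximum(0.0, 150.0 - capacity)  # demand: 150 kN*m
    return concrete + frp + penalty

n, dim = 30, 3
x = rng.uniform(LO, HI, (n, dim))
v = np.zeros((n, dim))
pbest, pval = x.copy(), cost(x)
for _ in range(200):
    g = pbest[pval.argmin()]                  # swarm's best-known position
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, LO, HI)
    f = cost(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
print("best (b, h, rho):", pbest[pval.argmin()], "cost:", pval.min())
```

The emergent behaviour the abstract alludes to is visible in the velocity
update: each particle blends its own memory with the swarm's best position.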
|
We study the problem of testing the null hypothesis that X and Y are
conditionally independent given Z, where each of X, Y and Z may be functional
random variables. This generalises testing the significance of X in a
regression model of scalar response Y on functional regressors X and Z. We
show, however, that even in the idealised setting where additionally (X, Y, Z)
has a
Gaussian distribution, the power of any test cannot exceed its size. Further
modelling assumptions are needed and we argue that a convenient way of
specifying these is based on choosing methods for regressing each of X and Y on
Z. We propose a test statistic involving inner products of the resulting
residuals that is simple to compute and calibrate: type I error is controlled
uniformly when the in-sample prediction errors are sufficiently small. We show
this requirement is met by ridge regression in functional linear model settings
without requiring any eigen-spacing conditions or lower bounds on the
eigenvalues of the covariance of the functional regressor. We apply our test in
constructing confidence intervals for truncation points in truncated functional
linear models and testing for edges in a functional graphical model for EEG
data.
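A stylised sketch of the residual-based statistic: regress each of X and Y on
Z with ridge regression, form the per-sample inner products of the residuals,
and studentise their mean. The grid discretisation, penalty, and calibration
below are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, grid = 300, 50
Z = rng.normal(size=(n, grid))                 # functional regressor on a grid
X = Z @ rng.normal(size=(grid, grid)) / grid + 0.5 * rng.normal(size=(n, grid))
Y = Z.mean(axis=1) + 0.5 * rng.normal(size=n)  # H0 holds: Y depends on Z only

rx = X - Ridge(alpha=1.0).fit(Z, X).predict(Z)  # residuals of X on Z
ry = Y - Ridge(alpha=1.0).fit(Z, Y).predict(Z)  # residuals of Y on Z

# Per-sample inner products <rx_i, ry_i>; under H0 their mean is ~0.
prods = (rx * ry[:, None]).mean(axis=1)
stat = np.sqrt(n) * prods.mean() / prods.std(ddof=1)
print("studentised statistic:", stat)  # |stat| >> 2 would suggest X matters
```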
|
Multifidelity simulation methodologies are often used in an attempt to
judiciously combine low-fidelity and high-fidelity simulation results in an
accuracy-increasing, cost-saving way. Candidates for this approach are
simulation methodologies for which there are fidelity differences connected
with significant computational cost differences. Physics-informed Neural
Networks (PINNs) are candidates for these types of approaches due to the
significant difference in training times required when different fidelities
(expressed in terms of architecture width and depth as well as optimization
criteria) are employed. In this paper, we propose a particular multifidelity
approach applied to PINNs that exploits low-rank structure. We demonstrate that
width, depth, and optimization criteria can be used as parameters related to
model fidelity, and show numerical justification of cost differences in
training due to fidelity parameter choices. We test our multifidelity scheme on
various canonical forward PDE models that have been presented in the emerging
PINNs literature.
|