In this paper we prove the soliton resolution conjecture for all times, for
all solutions in the energy space, of the co-rotational wave map equation. To
our knowledge this is the first such result for all initial data in the energy
space for a wave-type equation. We also prove the corresponding results for
radial solutions, which remain bounded in the energy norm, of the cubic
(energy-critical) nonlinear wave equation in space dimension 4.
|
The magnetoelectric (ME) effect refers to the coupling between electric and
magnetic fields in a medium resulting in electric polarization induced by
magnetic fields and magnetization induced by electric fields. The linear ME
effect in certain magnetoelectric materials such as multiferroics has been of
great interest due to its application in the fabrication of spintronics
devices, memories, and magnetic sensors. However, studies of the nonlinear ME
effect have mostly centered on the investigation of second-harmonic
generation in chiral materials. Here, we report the demonstration of nonlinear
wave mixing of optical electric fields and radio-frequency (rf) magnetic fields
in thermal atomic vapor, which is the consequence of the higher-order nonlinear
ME effect in the medium. The experimental results are explained by comparing
with density matrix calculations of the system. We also experimentally verify
the expected dependence of the generated field amplitudes on the rf field
magnitude as evidence of the magnetoelectric effect. This study opens up the
possibility of precision rf magnetometry, offering the advantages of a larger
dynamic range and arbitrary frequency resolution.
|
This paper studies point identification of the distribution of the
coefficients in some random coefficients models with exogenous regressors when
their support is a proper subset, possibly discrete but countable. We exhibit
trade-offs between restrictions on the distribution of the random coefficients
and the support of the regressors. We consider linear models including those
with nonlinear transforms of a baseline regressor, with an infinite number of
regressors and deconvolution, the binary choice model, and panel data models
such as single-index panel data models and an extension of the Kotlarski lemma.
|
We report a precision measurement of the parity-violating asymmetry $A_{PV}$
in the elastic scattering of longitudinally polarized electrons from
$^{208}$Pb. We measure $A_{PV}=550\pm 16\ {\rm (stat)}\pm 8\ {\rm (syst)}$ parts
per billion, leading to an extraction of the neutral weak form factor $F_W(Q^2
= 0.00616\ {\rm GeV}^2) = 0.368 \pm 0.013$. Combined with our previous
measurement, the extracted neutron skin thickness is $R_n-R_p=0.283 \pm
0.071$~fm. The result also yields the first significant direct measurement of
the interior weak density of $^{208}$Pb: $\rho^0_W = -0.0796\pm0.0036\ {\rm
(exp.)}\pm0.0013\ {\rm (theo.)}\ {\rm fm}^{-3}$, leading to the interior baryon
density $\rho^0_b = 0.1480\pm0.0036\ {\rm (exp.)}\pm0.0013\ {\rm (theo.)}\ {\rm
fm}^{-3}$. The measurement accurately constrains the density dependence of the
symmetry energy of nuclear matter near saturation density, with implications
for the size and composition of neutron stars.
|
Connected vehicles, whether equipped with advanced driver-assistance systems
or fully autonomous, are currently constrained to visual information in their
lines-of-sight. A cooperative perception system among vehicles increases their
situational awareness by extending their perception ranges. Existing solutions
impose significant network and computation loads, as well as a high volume of
not-always-relevant data received by vehicles. To address these issues, and thus
account for the inherently diverse informativeness of the data, we present
Augmented Informative Cooperative Perception (AICP) as the first fast-filtering
system which optimizes the informativeness of shared data at vehicles. AICP
displays the filtered data to drivers on an augmented reality head-up display.
To this end, an informativeness maximization problem is presented for vehicles
to select a subset of data to display to their drivers. Specifically, we
propose (i) a dedicated system design with custom data structure and
lightweight routing protocol for convenient data encapsulation, fast
interpretation and transmission, and (ii) a comprehensive problem formulation
and efficient fitness-based sorting algorithm to select the most valuable data
to display at the application layer. We implement a proof-of-concept prototype
of AICP with a bandwidth-hungry, latency-constrained real-life augmented
reality application. The prototype realizes informativeness-optimized
cooperative perception with only 12.6 milliseconds of additional latency. Next, we
test the networking performance of AICP at scale and show that AICP effectively
filters out less relevant packets and decreases the channel busy time.
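For illustration, a minimal sketch of the fitness-based selection idea is given below, assuming a hypothetical fitness function that combines object distance and data freshness; the actual AICP formulation, data structure, and weights are not reproduced here.

from dataclasses import dataclass

@dataclass
class PerceptionPacket:
    object_id: str
    distance_m: float   # distance from the ego vehicle to the detected object
    age_s: float        # time since the observation was made

def fitness(p: PerceptionPacket, w_dist: float = 0.7, w_age: float = 0.3) -> float:
    """Toy informativeness score: closer and fresher detections rank higher."""
    return w_dist / (1.0 + p.distance_m) + w_age / (1.0 + p.age_s)

def select_top_k(packets, k):
    """Fitness-based sorting: keep only the k most informative packets for display."""
    return sorted(packets, key=fitness, reverse=True)[:k]

if __name__ == "__main__":
    packets = [
        PerceptionPacket("pedestrian-1", distance_m=12.0, age_s=0.1),
        PerceptionPacket("car-7", distance_m=80.0, age_s=0.5),
        PerceptionPacket("cyclist-3", distance_m=25.0, age_s=2.0),
    ]
    for p in select_top_k(packets, k=2):
        print(p.object_id, round(fitness(p), 3))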
|
In recent years, programming has witnessed a shift towards using standard
libraries as black boxes. However, there has not been a corresponding development
of tools that can help demonstrate the workings of such libraries in general
programs, which poses an impediment to improved learning outcomes and makes
debugging exasperating. We introduce Eye, an interactive pedagogical tool that
visualizes a program's execution as it runs. It demonstrates properties and
usage of data structures in a general environment, thereby helping in learning,
logical debugging, and code comprehension. Eye provides a comprehensive
overview at each stage during run time including the execution stack and the
state of data structures. The modular implementation allows for extension to
other languages and modification of the graphics as desired.
Eye opens up a gateway for CS2 students to more easily understand the myriad
programs that are available on online programming websites, lowering the
barrier to self-directed learning of coding. It expands the scope of visualizing
data structures from standard algorithms to general cases, benefiting both
teachers as well as programmers who face issues in debugging. Line-by-line
interpretation allows Eye to describe the execution, not only the current
state. We also conduct experiments to evaluate the efficacy of Eye for
debugging and comprehending a new piece of code. Our findings show that it
becomes faster and less frustrating to debug certain problems using this tool,
and that it also makes understanding new code a much more pleasant experience.
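As a minimal illustration of the line-by-line interpretation idea (not Eye's actual implementation), Python's built-in tracing hook can report each executed line together with the current local state:

import sys

def trace_lines(frame, event, arg):
    """Print each executed line number and the current local variables."""
    if event == "line":
        print(f"line {frame.f_lineno}: locals = {frame.f_locals}")
    return trace_lines

def demo():
    xs = []
    for i in range(3):
        xs.append(i * i)
    return xs

sys.settrace(trace_lines)
demo()
sys.settrace(None)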
|
The classic Riesz representation theorem characterizes all linear and
increasing functionals on the space $C_{c}(X)$ of continuous compactly
supported functions. A geometric version of this result, which characterizes
all linear increasing functionals on the set of convex bodies in
$\mathbb{R}^{n}$, was essentially known to Alexandrov. This was used by
Alexandrov to prove the existence of mixed area measures in convex geometry.
In this paper we characterize linear and increasing functionals on the class
of log-concave functions on $\mathbb{R}^{n}$. Here "linear" means linear with
respect to the natural addition on log-concave functions which is the
sup-convolution. Equivalently, we characterize pointwise-linear and increasing
functionals on the class of convex functions. For some choices of the exact
class of functions we prove that there are no non-trivial such functionals. For
another choice we obtain the expected analogue of the result for convex bodies.
And most interestingly, for yet another choice we find a new unexpected family
of such functionals.
Finally, we explain the connection between our results and recent work done
in convex geometry regarding the surface area measure of a log-concave
function. An application of our results in this direction is also given.
|
We have analyzed the ALMA archival data of the SO ($J_N=6_5-5_4$ and
$J_N=7_6-6_5$), CO ($J=2-1$), and CCH ($N=3-2, J=7/2-5/2, F=4-3$) lines from
the class 0 protobinary system, NGC1333 IRAS 4A. The images of SO ($J_N =
6_5-5_4$) and CO ($J=2-1$) successfully separate two northern outflow lobes
connected to each protostar, IRAS 4A1 and IRAS 4A2. The outflow from IRAS 4A2
shows an S-shaped morphology, consisting of a flattened envelope around IRAS
4A2 with two outflow lobes connected to both edges of the envelope. The
flattened envelope surrounding IRAS 4A2 has an opposite velocity gradient to
that of the circumbinary envelope. The observed features are reproduced by a
magnetohydrodynamic simulation of a collapsing core whose magnetic field
direction is misaligned with the rotation axis. Our simulation shows that the
intensity of the outflow lobes is enhanced on one side, resulting in the
formation of the S-shaped morphology. The S-shaped outflow can also be explained by
the precessing outflow launched from an unresolved binary with a separation
larger than 12 au (0.04 arcsec). Additionally, we discovered a previously
unknown extremely high velocity component at $\sim$45-90 km/s near IRAS 4A2
with CO. CCH ($J_{N,F}=7/2_{3,4}-5/2_{2,3}$) emission shows two pairs of blobs
attached to the bottom of a shell-like feature, and its morphology is
significantly different from those of the SO and CO lines. Toward IRAS 4A2, the
S-shaped outflow seen in SO overlaps with the edges of the CCH shells, while
the CCH shells have velocity gradients opposite to that of the flattened structure
around IRAS 4A2.
|
We prove the existence of small amplitude time quasi-periodic solutions of
the pure gravity water waves equations with constant vorticity, for a
bidimensional fluid over a flat bottom delimited by a space periodic free
interface. Using a Nash-Moser implicit function iterative scheme we construct
traveling nonlinear waves which pass through each other, slightly deforming and
retaining forever a quasi-periodic structure. These solutions exist for any
fixed value of the depth and gravity, provided the vorticity parameter is
restricted to a Borel set of asymptotically full Lebesgue measure.
|
Quantum annealing is an emerging platform for combinatorial optimization that
requires an Ising model formulation of the optimization problem. This formulation
can be a significant obstacle to the adoption of this technology in broad
areas of everyday life. Our research proposes a Petri net
modeling approach for Ising model formulation. Although the proposed method
requires users to model their optimization problems with Petri nets, this
process can be carried out in a relatively straightforward manner if we know
the target problem and the simple Petri net modeling rules. With our method,
the constraints and objective functions in the target optimization problems are
represented as fundamental characteristics of Petri net models, extracted
systematically from Petri net models, and then converted into binary quadratic
nets, equivalent to Ising models. The proposed method can drastically reduce
the difficulty of the Ising model formulation.
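As a minimal illustration of the final conversion step, the sketch below (independent of the Petri net machinery) shows how a simple one-hot constraint $\sum_i x_i = 1$ becomes a binary quadratic (QUBO) penalty, which is equivalent to an Ising model up to a change of variables; the penalty weight is an assumed illustrative value:

import itertools
import numpy as np

def one_hot_qubo(n, penalty=1.0):
    """QUBO matrix Q for the constraint sum_i x_i == 1, from (sum_i x_i - 1)^2.
    Expansion gives: sum_i (-1) x_i + 2 * sum_{i<j} x_i x_j + constant."""
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -penalty          # linear terms (x_i^2 == x_i for binaries)
        for j in range(i + 1, n):
            Q[i, j] = 2 * penalty   # quadratic coupling terms
    return Q

def energy(Q, x):
    x = np.asarray(x)
    return float(x @ Q @ x)

Q = one_hot_qubo(3)
# Brute-force check: the minimum-energy states are exactly the one-hot vectors.
for x in itertools.product([0, 1], repeat=3):
    print(x, energy(Q, x))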
|
We consider the classical molecular beam epitaxy (MBE) model with a
logarithmic-type potential, known as the no-slope-selection model. We employ a
third-order backward differentiation formula (BDF3) in time with implicit
treatment of the surface diffusion term. The nonlinear term is approximated by
a third-order explicit extrapolation (EP3) formula. We exhibit mild time step
constraints under which
the modified energy dissipation law holds. We break the second Dahlquist
barrier and develop a new theoretical framework to prove unconditional uniform
energy boundedness with no size restrictions on the time step. This is the
first unconditional result for third-order BDF methods applied to MBE
models without introducing any stabilization terms or fictitious variables. A
novel theoretical framework is also established for the error analysis of high
order methods.
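For concreteness, a generic BDF3/EP3 step for a gradient flow of the schematic form $u_t = -\varepsilon^{2}\Delta^{2}u + N(u)$, with $\tau$ the time step and $N$ the nonlinear term, reads

$$\frac{1}{\tau}\Big(\tfrac{11}{6}u^{n+1}-3u^{n}+\tfrac{3}{2}u^{n-1}-\tfrac{1}{3}u^{n-2}\Big) = -\varepsilon^{2}\Delta^{2}u^{n+1}+N\big(3u^{n}-3u^{n-1}+u^{n-2}\big),$$

where the surface diffusion term is treated implicitly and the nonlinear term is evaluated at the third-order extrapolation. The coefficients above are the standard BDF3 and EP3 ones; this is only a schematic of the class of schemes analyzed, not necessarily the paper's exact formulation.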
|
In this draft paper, we introduce a novel architecture for graph networks
which is equivariant to the Euclidean group in $n$-dimensions. The model is
designed to work with graph networks in their general form and can be shown to
include particular variants as special cases. Thanks to its equivariance
properties, we expect the proposed model to be more data-efficient than
classical graph architectures and also to be intrinsically equipped with a better
inductive bias. We defer investigating this matter to future work.
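A minimal numpy sketch of one such update is given below, assuming an illustrative distance-based weighting rather than the model's learned functions; it is E(n)-equivariant because coordinates are only moved along relative position vectors with weights that depend on invariant squared distances:

import numpy as np

def equivariant_layer(x, h, w_scale=0.1):
    """One E(n)-equivariant update.
    x: (N, n) node coordinates, h: (N, d) invariant node features.
    Coordinates move along relative vectors weighted by an invariant function
    of the squared distance, so rotating/translating x rotates/translates the
    output coordinates in the same way."""
    diff = x[:, None, :] - x[None, :, :]          # (N, N, n) relative positions
    dist2 = (diff ** 2).sum(-1)                   # (N, N) invariant distances
    w = np.exp(-dist2)                            # illustrative invariant weights
    np.fill_diagonal(w, 0.0)
    x_new = x + w_scale * (w[..., None] * diff).sum(axis=1)   # equivariant part
    h_new = h + w @ h                             # invariant feature aggregation
    return x_new, h_new

# Sanity check: applying a random rotation + translation before the layer
# matches rotating/translating the layer's output coordinates.
rng = np.random.default_rng(0)
x, h = rng.normal(size=(5, 3)), rng.normal(size=(5, 4))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # random orthogonal matrix
t = rng.normal(size=3)
x1, _ = equivariant_layer(x @ Q.T + t, h)
x2, _ = equivariant_layer(x, h)
print(np.allclose(x1, x2 @ Q.T + t))              # True

The final check confirms numerically that rotating and translating the input coordinates commutes with the layer.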
|
Let $\Sigma_{g}$ be a closed surface of genus $g\geq 2$ and $\Gamma_{g}$
denote the fundamental group of $\Sigma_{g}$. We establish a generalization of
Voiculescu's theorem on the asymptotic $*$-freeness of Haar unitary matrices
from free groups to $\Gamma_{g}$. We prove that for a random representation of
$\Gamma_{g}$ into $\mathsf{SU}(n)$, with law given by the volume form arising
from the Atiyah-Bott-Goldman symplectic form on moduli space, the expected
value of the trace of a fixed non-identity element of $\Gamma_{g}$ is bounded
as $n\to\infty$. The proof involves an interplay between Dehn's work on the
word problem in $\Gamma_{g}$ and classical invariant theory.
|
We prove some Strichartz estimates for the massless, radial Dirac-Coulomb
equation in 3D. The main tools in our argument are the use of a "relativistic
Hankel transform" together with some precise estimates on the generalized
eigenfunctions of the Dirac-Coulomb operator.
|
We derive $q$-versions of Green's theorem from the Leibniz rules of partial
derivatives for the $q$-deformed Euclidean space. Using these results and the
Schr\"{o}dinger equations for a $q$-deformed nonrelativistic particle, we
derive continuity equations for the probability density, the energy density,
and the momentum density of a $q$-deformed nonrelativistic particle.
|
Given a smooth convex cone in the Euclidean $(n+1)$-space ($n\geq2$), we
consider strictly mean convex hypersurfaces with boundary which are star-shaped
with respect to the center of the cone and which meet the cone perpendicularly.
If those hypersurfaces inside the cone evolve by a class of inverse curvature
flows, then, by using the convexity of the cone in the derivation of the
gradient and H\"{o}lder estimates, we can prove that this evolution exists for
all the time and the evolving hypersurfaces converge smoothly to a piece of a
round sphere as time tends to infinity.
|
Emitted photons stemming from the radiative recombination of electron-hole
pairs carry chemical potential in radiative energy converters. This luminescent
effect can substantially alter the local net photogeneration in near-field
thermophotovoltaic cells. Several assumptions involving the luminescent effect
are commonly made in modeling photovoltaic devices; in particular, the photon
chemical potential is assumed to be zero or a constant prescribed by the bias
voltage. The significance of photon chemical potential depends upon the emitter
temperature, the semiconductor properties, and the injection level. Hence,
these assumptions are questionable in thermophotovoltaic devices operating in
the near-field regime. In the present work, an iterative solver that combines
fluctuational electrodynamics with the drift-diffusion model is developed to
tackle the coupled photon and charge transport problem, enabling the
determination of the spatial profile of photon chemical potential beyond the
detailed balance approach. The difference between the results obtained by
allowing the photon chemical potential to vary spatially and by assuming a
constant value demonstrates the limitations of the conventional approaches.
This study is critically important for performance evaluation of near-field
thermophotovoltaic systems.
|
We propose to use tensor diagrams and the Fomin-Pylyavskyy conjectures to
explore the connection between symbol alphabets of $n$-particle amplitudes in
planar $\mathcal{N}=4$ Yang-Mills theory and certain polytopes associated to
the Grassmannian G(4, $n$). We show how to assign a web (a planar tensor
diagram) to each facet of these polytopes. Webs with no inner loops are
associated to cluster variables (rational symbol letters). For webs with a
single inner loop we propose and explicitly evaluate an associated web series
that contains information about algebraic symbol letters. In this manner we
reproduce the results of previous analyses of $n \le 8$, and find that the
polytope $\mathcal{C}^\dagger(4,9)$ encodes all rational letters, and all
square roots of the algebraic letters, of known nine-particle amplitudes.
|
In this study, the stability dependence of turbulent Prandtl number ($Pr_t$)
is quantified via a novel and simple analytical approach. Based on the variance
and flux budget equations, a hybrid length scale formulation is first proposed
and its functional relationships to well-known length scales are established.
Next, the ratios of these length scales are utilized to derive an explicit
relationship between $Pr_t$ and gradient Richardson number. In addition,
theoretical predictions are made for several key turbulence variables (e.g.,
dissipation rates, normalized fluxes). The results from our proposed approach
are compared against other competing formulations as well as published
datasets. Overall, the agreement between the different approaches is rather
good despite their different theoretical foundations and assumptions.
|
The problem of estimating the angular speed of a solid body from attitude
measurements is addressed. To solve this problem, we propose an observer whose
dynamics are not constrained to evolve on any specific manifold. This
drastically simplifies the analysis of the proposed observer. Using Lyapunov
analysis, sufficient conditions for global asymptotic stability of a set
wherein the estimation error is equal to zero are established. In addition, the
proposed methodology is adapted to deal with angular speed estimation for
systems evolving on the unit circle. The approach is illustrated through
several numerical simulations.
|
Exoplanet detection in the past decade by efforts including NASA's Kepler and
TESS missions has discovered many worlds that differ substantially from planets
in our own Solar system, including more than 400 exoplanets orbiting binary or
multi-star systems. This not only broadens our understanding of the diversity
of exoplanets, but also promotes our study of exoplanets in the complex binary
and multi-star systems and provides motivation to explore their habitability.
In this study, we analyze orbital stability of exoplanets in non-coplanar
circumbinary systems using a numerical simulation method, with which a large
number of circumbinary planet samples are generated in order to quantify the
effects of various orbital parameters on orbital stability. We also train a
machine learning model that can quickly determine the stability of the
circumbinary planetary systems. Our results indicate that larger inclinations
of the planet tend to increase the stability of its orbit, but change in the
planet's mass range between Earth and Jupiter has little effect on the
stability of the system. In addition, we find that Deep Neural Networks (DNNs)
have higher accuracy and precision than other machine learning algorithms.
|
Let $M_n$ be a random $n\times n$ matrix with i.i.d. $\text{Bernoulli}(1/2)$
entries. We show that for fixed $k\ge 1$, \[\lim_{n\to
\infty}\frac{1}{n}\log_2\mathbb{P}[\text{corank }M_n\ge k] = -k.\]
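A small Monte Carlo sketch for $k=1$ and modest $n$ is shown below, using floating-point rank as a proxy for the exact corank; since the statement is asymptotic, the finite-$n$ estimates of $\frac{1}{n}\log_2\mathbb{P}$ only loosely track the limiting value $-1$:

import numpy as np

def corank_prob_estimate(n, k=1, trials=20000, seed=0):
    """Monte Carlo estimate of P[corank M_n >= k] for an n x n Bernoulli(1/2) matrix."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        M = rng.integers(0, 2, size=(n, n))
        if np.linalg.matrix_rank(M) <= n - k:
            hits += 1
    return hits / trials

for n in (8, 10, 12):
    p = corank_prob_estimate(n)
    print(n, p, np.log2(p) / n if p > 0 else float("-inf"))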
|
Any binary string can be associated with a unary predicate $P$ on
$\mathbb{N}$. In this paper we investigate subsets named by a predicate $P$
such that the relation $P(x+y)$ has finite VC dimension. This provides a
measure of complexity for binary strings with different properties than the
standard string complexity function (based on diversity of substrings). We
prove that strings of bounded VC dimension are meagre in the topology of the
reals, provide simple rules for bounding the VC dimension of a string, and show
that the bi-infinite strings of VC dimension $d$ are a non-sofic shift space.
Additionally, we characterize the irreducible strings of low VC dimension (0, 1,
and 2), and provide connections to mathematical logic.
|
The goal of this paper is to extend Kottwitz's theory of $B(G)$ to global
fields. In particular, we show how to extend the definition of ``$B(G)$ with
adelic coefficients'' from tori to all connected reductive groups. As an
application, we give an explicit construction of certain transfer factors for
non-regular semisimple elements of non-quasisplit groups. This generalizes some
results of Kaletha and Taibi. These formulas are used in the stabilization of
the cohomology of Shimura and Igusa varieties.
|
As an extension of previous ungraded work, we define a graded $p$-polar ring
to be an analog of a graded commutative ring where multiplication is only
allowed on $p$-tuples (instead of pairs) of elements of equal degree. We show
that the free affine $p$-adic group scheme functor, as well as the free formal
group functor, defined on $k$-algebras for a perfect field $k$ of
characteristic $p$, factors through $p$-polar $k$-algebras. It follows that the
same is true for any affine $p$-adic or formal group functor, in particular for
the functor of $p$-typical Witt vectors. As an application, we show that the
latter is free on the $p$-polar affine line.
|
Bismuth ferrite is one of the most widely studied multiferroic materials
because of its large ferroelectric polarisation coexisting with magnetic order
at room temperature. Using density functional theory (DFT), we identify several
previously unknown polar and non-polar structures within the low-energy phase
space of perovskite-structure bismuth ferrite, BiFeO$_3$. Of particular
interest is a series of non-centrosymmetric structures with polarisation along
one lattice vector, combined with anti-polar distortions, reminiscent of
ferroelectric domains, along a perpendicular direction. We discuss possible
routes to stabilising the new phases using biaxial heteroepitaxial strain or
interfacial electrostatic control in heterostructures.
|
Primordial black holes may have been produced in the early stages of the
thermal history of the Universe after cosmic inflation. If so, dark matter in
the form of elementary particles can be subsequently accreted around these
objects, in particular when it gets non-relativistic and further streams freely
in the primordial plasma. A dark matter mini-spike builds up gradually around
each black hole, with density orders of magnitude larger than the cosmological
one. We improve upon previous work by carefully inspecting the computation of
the mini-spike radial profile as a function of black hole mass, dark matter
particle mass and temperature of kinetic decoupling. We identify a phase-space
contribution that has been overlooked and that leads to changes in the final
results. We also derive complementary analytical formulae using convenient
asymptotic regimes, which allow us to bring out peculiar power-law behaviors
for which we provide tentative physical explanations.
|
The $k$-mappability problem has two integer parameters $m$ and $k$. For
every subword of size $m$ in a text $S$, we wish to report the number of
indices in $S$ at which the word occurs with at most $k$ mismatches.
The problem was recently tackled by Alzamel et al. For a text with a constant-size
alphabet $\Sigma$ and $k \in O(1)$, they present an algorithm with linear space
and $O(n\log^{k+1}n)$ time. For the case in which $k = 1$ and a constant-size
alphabet, a faster algorithm with linear space and $O(n\log(n)\log\log(n))$
time was presented in a 2020 paper by Alzamel et al.
In this work, we enhance the techniques of Alzamel et al.'s 2020 paper to
obtain an algorithm with linear space and $O(n \log(n))$ time for $k = 1$. Our
algorithm removes the constraint of the alphabet being of constant size. We
also present linear algorithms for the case of $k=1$, $|\Sigma|\in O(1)$, and
$m=\Omega(\sqrt{n})$.
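A brute-force reference implementation, useful only to pin down the problem definition (the algorithms above are far more efficient), might look as follows; whether the trivial self-occurrence of each window is counted is a convention, and it is included here:

def k_mappability(S: str, m: int, k: int):
    """For every length-m subword of S, count the indices of S at which it
    occurs with at most k mismatches (Hamming distance <= k)."""
    n = len(S)
    windows = [S[i:i + m] for i in range(n - m + 1)]
    counts = []
    for w in windows:
        c = 0
        for v in windows:
            if sum(a != b for a, b in zip(w, v)) <= k:
                c += 1
        counts.append(c)
    return counts

print(k_mappability("abracadabra", m=3, k=1))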
|
In the past few years, the detection of gravitational waves from compact
binary coalescences with the Advanced LIGO and Advanced Virgo detectors has
become routine. Future observatories will detect even larger numbers of
gravitational-wave signals, which will also spend a longer time in the
detectors' sensitive band. This will eventually lead to overlapping signals,
especially in the case of Einstein Telescope (ET) and Cosmic Explorer (CE).
Using realistic distributions for the merger rate as a function of redshift as
well as for component masses in binary neutron star and binary black hole
coalescences, we map out how often signal overlaps of various types will occur
in an ET-CE network over the course of a year. We find that a binary neutron
star signal will typically have tens of overlapping binary black hole and
binary neutron star signals. Moreover, it will happen up to tens of thousands
of times per year that two signals will have their end times within seconds of
each other. In order to understand to what extent this would lead to
measurement biases with current parameter estimation methodology, we perform
injection studies with overlapping signals from binary black hole and/or binary
neutron star coalescences. Varying the signal-to-noise ratios, the durations of
overlap, and the kinds of overlapping signals, we find that in most scenarios
the intrinsic parameters can be recovered with negligible bias. However, biases
do occur for a short binary black hole or a quieter binary neutron star signal
overlapping with a long and louder binary neutron star event when the merger
times are sufficiently close. Hence our studies show where improvements are
required to ensure reliable estimation of source parameters for all detected
compact binary signals as we go from second-generation to third-generation
detectors.
|
Computer vision (CV) techniques try to mimic human capabilities of visual
perception to support labor-intensive and time-consuming tasks like the
recognition and localization of critical objects. Nowadays, CV increasingly
relies on artificial intelligence (AI) to automatically extract useful
information from images that can be utilized for decision support and business
process automation. However, the focus of extant research is often exclusively
on technical aspects when designing AI-based CV systems while neglecting
socio-technical facets, such as trust, control, and autonomy. To address this gap,
we consider the design of such systems from a hybrid intelligence (HI)
perspective and aim to derive prescriptive design knowledge for CV-based HI
systems. We apply a reflective, practice-inspired design science approach and
accumulate design knowledge from six comprehensive CV projects. As a result, we
identify four design-related mechanisms (i.e., automation, signaling,
modification, and collaboration) that inform our derived meta-requirements and
design principles. This can serve as a basis for further socio-technical
research on CV-based HI systems.
|
Plasmon-free surface-enhanced Raman scattering (SERS) substrates have
attracted tremendous attention for their abundant sources, excellent chemical
stability, superior biocompatibility, good signal uniformity, and unique
selectivity to target molecules. Recently, researchers have made great progress
in fabricating novel plasmon-free SERS substrates and exploring new enhancement
strategies to improve their sensitivity. This review summarizes the recent
developments of plasmon-free SERS substrates and especially focuses on the
enhancement mechanisms and strategies. Furthermore, the promising applications
of plasmon-free SERS substrates in biomedical diagnosis, metal ion and organic
pollutant sensing, chemical and biochemical reaction monitoring, and
photoelectric characterization are introduced. Finally, current challenges and
future research opportunities in plasmon-free SERS substrates are briefly
discussed.
|
This paper studies bandit algorithms under data poisoning attacks in a
bounded reward setting. We consider a strong attacker model in which the
attacker can observe both the selected actions and their corresponding rewards,
and can contaminate the rewards with additive noise. We show that \emph{any}
bandit algorithm with regret $O(\log T)$ can be forced to suffer a regret
$\Omega(T)$ with an expected amount of contamination $O(\log T)$. This amount
of contamination is also necessary, as we prove that there exists an $O(\log
T)$ regret bandit algorithm, specifically the classical UCB, that requires
$\Omega(\log T)$ amount of contamination to suffer regret $\Omega(T)$. To
combat such poisoning attacks, our second main contribution is to propose a novel
algorithm, Secure-UCB, which uses limited \emph{verification} to access a
limited number of uncontaminated rewards. We show that with $O(\log T)$
expected number of verifications, Secure-UCB can restore the order optimal
$O(\log T)$ regret \emph{irrespective of the amount of contamination} used by
the attacker. Finally, we prove that for any bandit algorithm, this number of
verifications $O(\log T)$ is necessary to recover the order-optimal regret. We
can then conclude that Secure-UCB is order-optimal in terms of both the
expected regret and the expected number of verifications, and can save
stochastic bandits from any data poisoning attack.
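For orientation, a minimal sketch of the classical (unprotected) UCB1 algorithm on Bernoulli arms is given below; the paper's Secure-UCB, its verification mechanism, and the attacker model are not reproduced here:

import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Classical UCB1 on Bernoulli arms: pull the arm maximizing
    empirical mean + sqrt(2 ln t / n_pulls)."""
    random.seed(seed)
    k = len(arm_means)
    counts = [0] * k
    sums = [0.0] * k
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:                       # pull each arm once to initialize
            arm = t - 1
        else:
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if random.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    regret = horizon * max(arm_means) - total_reward
    return regret

print(ucb1([0.3, 0.5, 0.7], horizon=10000))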
|
The close-packed AB$_2$ structures called Laves phases constitute the largest
group of intermetallic compounds. In this paper we computationally investigated
the pseudo-binary Laves phase system Y$_{1-x}$Gd$_x$(Fe$_{1-y}$Co$_y$)$_2$
spanning between the YFe$_2$, YCo$_2$, GdFe$_2$, and GdCo$_2$ vertices. While
the vast majority of the Y$_{1-x}$Gd$_x$(Fe$_{1-y}$Co$_y$)$_2$ phase diagram is
ferrimagnetic, YCo$_2$, along with a narrow range of concentrations
around it, is paramagnetic. We presented results obtained by Monte
Carlo simulations of the Heisenberg model with parameters derived from
first-principles calculations. For calculations, we used the Uppsala atomistic
spin dynamics (UppASD) code together with the spin-polarized relativistic
Korringa-Kohn-Rostoker (SPR-KKR) code. From first principles we calculated the
magnetic moments and exchange integrals for the considered pseudo-binary
system, together with spin-polarized densities of states for boundary
compositions. Furthermore, we showed how the compensation point with the
effective zero total moment depends on the concentration in the considered
ferrimagnetic phases. However, the main result of our study was the
determination of the Curie temperature dependence for the system
Y$_{1-x}$Gd$_x$(Fe$_{1-y}$Co$_y$)$_2$. Except for the paramagnetic region
around YCo$_2$, the predicted temperatures were in good qualitative and
quantitative agreement with experimental results, which confirmed the ability
of the method to predict magnetic transition temperatures for systems
containing up to three different magnetic elements (Fe, Co, and Gd)
simultaneously. For the Y(Fe$_{1-y}$Co$_y$)$_2$ and Gd(Fe$_{1-y}$Co$_y$)$_2$
systems our calculations matched the experimentally-confirmed
Slater-Pauling-like behavior of T$_C$ dependence on the Co concentration.
|
Quantum steering refers to correlations that can be classified as
intermediate between entanglement and Bell nonlocality. Every state exhibiting
Bell nonlocality exhibits also quantum steering and every state exhibiting
quantum steering is also entangled. In low dimensional cases similar
hierarchical relations have been observed between the temporal counterparts of
these correlations. Here, we study the hierarchy of such temporal correlations
for a general multilevel quantum system. We demonstrate that the same hierarchy
holds for two definitions of state over time. In order to compare different
types of temporal correlations, we show that temporal counterparts of Bell
nonlocality and entanglement can be quantified with a temporal nonlocality
robustness and temporal entanglement robustness. Our numerical results reveal
that, in contrast to temporal steering, for temporal nonlocality to manifest
itself the initial state must not be completely mixed.
|
The development of respiratory failure is common among patients in intensive
care units (ICU). Large data quantities from ICU patient monitoring systems
make timely and comprehensive analysis by clinicians difficult but are ideal
for automatic processing by machine learning algorithms. Early prediction of
respiratory system failure could alert clinicians to patients at risk of
respiratory failure and allow for early patient reassessment and treatment
adjustment. We propose an early warning system that predicts moderate/severe
respiratory failure up to 8 hours in advance. Our system was trained on
HiRID-II, a data-set containing more than 60,000 admissions to a tertiary care
ICU. An alarm is typically triggered several hours before the beginning of
respiratory failure. Our system outperforms a clinical baseline mimicking
traditional clinical decision-making based on pulse-oximetric oxygen saturation
and the fraction of inspired oxygen. To provide model introspection and
diagnostics, we developed an easy-to-use web browser-based system to explore
model input data and predictions visually.
|
Herein we shall consider Lorentz boosts and Wigner rotations from a
(complexified) quaternionic point of view. We shall demonstrate that for a
suitably defined self-adjoint complex quaternionic 4-velocity, pure Lorentz
boosts can be phrased in terms of the quaternion square root of the relative
4-velocity connecting the two inertial frames. Straightforward computations
then lead to quite explicit and relatively simple algebraic formulae for the
composition of 4-velocities and the Wigner angle. We subsequently relate the
Wigner rotation to the generic non-associativity of the composition of three
4-velocities, and develop a necessary and sufficient condition for
associativity to hold. Finally, we relate the composition of 4-velocities to a
specific implementation of the Baker-Campbell-Hausdorff theorem. As compared to
ordinary 4x4 Lorentz transformations, the use of self-adjoint complexified
quaternions leads, from a computational view, to storage savings and more rapid
computations, and from a pedagogical view to relatively simple and explicit
formulae.
|
C. elegans performs chemotaxis using klinokinesis: the worm senses the
concentration with a single concentration sensor, computes the concentration
gradient, and forages via gradient ascent/descent towards the target
concentration, followed by contour tracking. The biomimetic
implementation requires complex neurons with multiple ion channel dynamics as
well as interneurons for control. While this is a key capability of autonomous
robots, its implementation on energy-efficient neuromorphic hardware like
Intel's Loihi requires adaptation of the network to hardware-specific
constraints, which has not been achieved. In this paper, we demonstrate the
adaptation of chemotaxis based on klinokinesis to Loihi by implementing
necessary neuronal dynamics with only LIF neurons, as well as a complete
spike-based implementation of all functions, e.g., the Heaviside function and
subtraction. Our results show that the Loihi implementation is equivalent to its
software counterpart in Python in terms of performance - both during foraging
and contour tracking. The Loihi results are also resilient in noisy
environments. Thus, we demonstrate a successful adaptation of chemotaxis on
Loihi, which can now be combined with the rich array of SNN blocks for
SNN-based complex robotic control.
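As a minimal illustration of the neuron model mentioned above, a discrete-time leaky integrate-and-fire (LIF) update can be sketched as follows; the parameters and input current are illustrative and unrelated to the Loihi network used in the paper:

import numpy as np

def lif_run(I, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron driven by input current I.
    Returns the membrane trace and the spike times (in steps)."""
    v = v_rest
    trace, spikes = [], []
    for t, i_t in enumerate(I):
        v += dt / tau * (-(v - v_rest) + i_t)    # leaky integration
        if v >= v_th:                            # threshold crossing -> spike
            spikes.append(t)
            v = v_reset                          # hard reset
        trace.append(v)
    return np.array(trace), spikes

I = np.full(200, 1.5)                            # constant suprathreshold drive
trace, spikes = lif_run(I)
print(f"{len(spikes)} spikes, first at step {spikes[0]}")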
|
Deep neural networks (DNNs) have been shown to perform very well on large-scale
object recognition problems, leading to widespread use in real-world
applications, including situations where DNNs are deployed as "black boxes".
A promising approach to secure their use is to accept decisions that are likely
to be correct while discarding the others. In this work, we propose DOCTOR, a
simple method that aims to identify whether the prediction of a DNN classifier
should (or should not) be trusted so that, consequently, it would be possible
to accept it or to reject it. Two scenarios are investigated: Totally Black Box
(TBB) where only the soft-predictions are available and Partially Black Box
(PBB) where gradient-propagation to perform input pre-processing is allowed.
Empirically, we show that DOCTOR outperforms all state-of-the-art methods on
various well-known images and sentiment analysis datasets. In particular, we
observe a reduction of up to $4\%$ of the false rejection rate (FRR) in the PBB
scenario. DOCTOR can be applied to any pre-trained model; it does not require
prior information about the underlying dataset and is as simple as the simplest
available methods in the literature.
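As a rough illustration of the totally-black-box setting, the sketch below accepts or rejects predictions from soft outputs alone, using one minus the maximum softmax probability as the uncertainty score; DOCTOR defines its own discriminators, so this is only a generic stand-in:

import numpy as np

def accept_or_reject(softmax_probs, threshold=0.3):
    """Selective prediction from soft outputs only.
    Reject a prediction when 1 - max probability exceeds the threshold.
    (Illustrative score; DOCTOR's actual discriminators differ.)"""
    softmax_probs = np.asarray(softmax_probs)
    uncertainty = 1.0 - softmax_probs.max(axis=1)
    predictions = softmax_probs.argmax(axis=1)
    accepted = uncertainty <= threshold
    return predictions, accepted

probs = np.array([[0.92, 0.05, 0.03],    # confident -> accepted
                  [0.40, 0.35, 0.25]])   # uncertain -> rejected
preds, accepted = accept_or_reject(probs)
print(list(zip(preds.tolist(), accepted.tolist())))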
|
A search is presented for a heavy vector resonance decaying into a Z boson
and the standard model Higgs boson, where the Z boson is identified through its
leptonic decays to electrons, muons, or neutrinos, and the Higgs boson is
identified through its hadronic decays. The search is performed in a
Lorentz-boosted regime and is based on data collected from 2016 to 2018 at the
CERN LHC, corresponding to an integrated luminosity of 137 fb$^{-1}$. Upper
limits are derived on the production of a narrow heavy resonance Z', and a mass
below 3.5 and 3.7 TeV is excluded at 95% confidence level in models where the
heavy vector boson couples exclusively to fermions and to bosons, respectively.
These are the most stringent limits placed on the Heavy Vector Triplet Z' model
to date. If the heavy vector boson couples exclusively to standard model
bosons, upper limits on the product of the cross section and branching fraction
are set between 23 and 0.3 fb for a Z' mass between 0.8 and 4.6 TeV,
respectively. This is the first limit set on a heavy vector boson coupling
exclusively to standard model bosons in its production and decay.
|
Obtaining labeled data for machine learning tasks can be prohibitively
expensive. Active learning mitigates this issue by exploring the unlabeled data
space and prioritizing the selection of data that can best improve the model
performance. A common approach to active learning is to pick a small sample of
data for which the model is most uncertain. In this paper, we explore the
efficacy of Bayesian neural networks for active learning, which naturally
model uncertainty by learning a distribution over the weights of neural
networks. By performing a comprehensive set of experiments, we show that
Bayesian neural networks are more efficient than ensemble based techniques in
capturing uncertainty. Our findings also reveal some key drawbacks of
ensemble techniques, which were recently shown to be more effective than Monte
Carlo dropout.
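For illustration, the sketch below computes two common acquisition scores, predictive entropy and BALD mutual information, from posterior samples of class probabilities; the samples here are synthetic stand-ins for draws from a Bayesian neural network, and the scoring choices are not necessarily those used in the paper:

import numpy as np

def acquisition_scores(prob_samples, eps=1e-12):
    """prob_samples: (S, N, C) class probabilities from S posterior samples.
    Returns predictive entropy and BALD mutual information per data point."""
    mean_p = prob_samples.mean(axis=0)                                  # (N, C)
    pred_entropy = -(mean_p * np.log(mean_p + eps)).sum(axis=1)
    expected_entropy = -(prob_samples * np.log(prob_samples + eps)).sum(axis=2).mean(axis=0)
    bald = pred_entropy - expected_entropy                              # mutual information
    return pred_entropy, bald

rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 100, 10))                 # 20 posterior samples, 100 points
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
entropy, bald = acquisition_scores(probs)
query_idx = np.argsort(-bald)[:5]                       # pick the 5 most informative points
print(query_idx)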
|
The advent of automated vehicles operating at SAE levels 4 and 5 poses high
fault tolerance demands for all functions contributing to the driving task. At
the actuator level, fault-tolerant vehicle motion control, which exploits
functional redundancies among the actuators, is one means to achieve the
required degree of fault tolerance. Therefore, we give a comprehensive overview
of the state of the art in actuator fault-tolerant vehicle motion control with
a focus on drive, brake, and steering degradations, as well as tire blowouts.
This review shows that actuator fault-tolerant vehicle motion control is a widely
studied field; yet, the presented approaches differ with respect to many
aspects. To provide a starting point for future research, we survey the
employed actuator topologies, the tolerated degradations, the presented control
approaches, as well as the experiments conducted for validation. Overall, and
despite the large number of different approaches, the covered literature
reveals the potential of increasing fault tolerance by fault-tolerant vehicle
motion control. Thus, besides developing novel approaches or demonstrating
real-time applicability, future research should aim at investigating
limitations and enabling comparison of fault-tolerant motion control approaches
in order to allow for a thorough safety argumentation.
|
The odd isotopologues of ytterbium monohydroxide, $^{171,173}$YbOH, have been
identified as promising molecules in which to measure parity (P) and time
reversal (T) violating physics. Here we characterize the
$\tilde{A}^{2}\Pi_{1/2}(0,0,0)-\tilde{X}^2\Sigma^+(0,0,0)$ band near 577 nm for
these odd isotopologues. Both laser-induced fluorescence (LIF) excitation
spectra of a supersonic molecular beam sample and absorption spectra of a
cryogenic buffer-gas cooled sample were recorded. Additionally, a novel
spectroscopic technique based on laser-enhanced chemical reactions is
demonstrated and utilized in the absorption measurements. This technique is
especially powerful for disentangling congested spectra. An effective
Hamiltonian model is used to extract the fine and hyperfine parameters for the
$\tilde{A}^{2}\Pi_{1/2}(0,0,0)$ and $\tilde{X}^2\Sigma^+(0,0,0)$ states. A
comparison of the determined $\tilde{X}^2\Sigma^+(0,0,0)$ hyperfine parameters
with recently predicted values (M. Denis, et al., J. Chem. Phys. $\bf{152}$,
084303 (2020), K. Gaul and R. Berger, Phys. Rev. A $\bf{101}$, 012508 (2020),
J. Liu et al., J. Chem. Phys. $\bf{154}$, 064110 (2021)) is made. The measured
hyperfine parameters provide experimental confirmation of the computational
methods used to compute the P,T-violating coupling constants $W_d$ and $W_M$,
which correlate P,T-violating physics to P,T-violating energy shifts in the
molecule. The dependence of the fine and hyperfine parameters of the
$\tilde{A}^{2}\Pi_{1/2}(0,0,0)$ and $\tilde{X}^2\Sigma^+(0,0,0)$ states for all
isotopologues of YbOH is discussed, and a comparison to isoelectronic YbF is
made.
|
The two-dimensional Helmholtz equation separates in elliptic coordinates
based on two distinct foci, a limit case of which includes polar coordinate
systems when the two foci coalesce. This equation is invariant under the
Euclidean group of translations and orthogonal transformations; we replace the
latter by the discrete dihedral group of N discrete rotations and reflections.
The separation of variables in polar and elliptic coordinates is then used to
define discrete Bessel and Mathieu functions, as approximants to the well-known
continuous Bessel and Mathieu functions, as N-point Fourier transforms
approximate the Fourier transform over the circle, with integrals replaced by
finite sums. We find that these 'discrete' functions approximate the numerical
values of their continuous counterparts very closely and preserve some key
special function relations.
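As a rough illustration of the "integrals replaced by finite sums" idea (not the paper's exact definition of the discrete functions), the integral representation of the integer-order Bessel function can be approximated by an N-point sum over equally spaced angles and compared with the continuous function; scipy is used here only for the comparison:

import numpy as np
from scipy.special import jv

def discrete_bessel(m, x, N=16):
    """N-point approximation of J_m(x) from its integral representation
    J_m(x) = (1/2 pi) * integral of exp(i(x sin(theta) - m theta)) d theta,
    with the integral replaced by a finite sum."""
    theta = 2 * np.pi * np.arange(N) / N
    return np.real(np.exp(1j * (x * np.sin(theta) - m * theta)).mean())

for m in range(4):
    print(m, discrete_bessel(m, 2.5, N=16), jv(m, 2.5))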
|
A schizophrenia relapse has severe consequences for a patient's health, work,
and sometimes even life safety. If an oncoming relapse can be predicted on
time, for example by detecting early behavioral changes in patients, then
interventions could be provided to prevent the relapse. In this work, we
investigated a machine learning based schizophrenia relapse prediction model
using mobile sensing data to characterize behavioral features. A
patient-independent model providing sequential predictions, closely
representing the clinical deployment scenario for relapse prediction, was
evaluated. The model uses the mobile sensing data from the recent four weeks to
predict an oncoming relapse in the next week. We used the behavioral rhythm
features extracted from daily templates of mobile sensing data, self-reported
symptoms collected via EMA (Ecological Momentary Assessment), and demographics
to compare different classifiers for relapse prediction. A Naive Bayes based
model gave the best results, with an F2 score of 0.083 when evaluated on a
dataset consisting of 63 schizophrenia patients, each monitored for up to a
year. The obtained F2 score, though low, is better than the baseline
performance of random classification (F2 score of 0.02 $\pm$ 0.024). Thus,
mobile sensing has predictive value for detecting an oncoming relapse and needs
further investigation to improve the current performance. Towards that end,
further feature engineering and model personalization based on the behavioral
idiosyncrasies of a patient could be helpful.
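For reference, the F2 score is the F-beta score with beta = 2, which weights recall twice as heavily as precision, reflecting that a missed relapse is costlier than a false alarm; a minimal sketch on hypothetical labels (not the study's data) is:

from sklearn.metrics import fbeta_score

# Hypothetical weekly relapse labels (1 = relapse) and predictions.
y_true = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]
y_pred = [0, 1, 1, 0, 0, 0, 0, 1, 0, 1]

# beta=2 weights recall twice as heavily as precision.
print(fbeta_score(y_true, y_pred, beta=2))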
|
We propose a vector dark matter model with an exotic dark SU(2) gauge group.
Two Higgs triplets are introduced to spontaneously break the symmetry. All of
the dark gauge bosons become massive, and the lightest one is a viable vector
DM candidate. Its stability is guaranteed by a remaining Z_2 symmetry. We study
the parameter space constrained by the Higgs measurement data, the dark matter
relic density, and direct and indirect detection experiments. We find numerous
parameter points satisfying all the constraints, and they could be further
tested in future experiments. A similar methodology can be used to construct
vector dark matter models from an arbitrary SO(N) gauge group.
|
Following E. Wigner's original vision, we prove that sampling the eigenvalue
gaps within the bulk spectrum of a fixed (deformed) Wigner matrix $H$ yields
the celebrated Wigner-Dyson-Mehta universal statistics with high probability.
Similarly, we prove universality for a monoparametric family of deformed Wigner
matrices $H+xA$ with a deterministic Hermitian matrix $A$ and a fixed Wigner
matrix $H$, just using the randomness of a single scalar real random variable
$x$. Both results constitute quenched versions of bulk universality, which has so
far only been proven in the annealed sense with respect to the probability space of
the matrix ensemble.
|
The demand for streaming media and live video conferencing is at a peak and is
expected to grow further; hence, low-cost streaming services with better quality
and lower latency are essential. In this paper, we propose a novel peer-to-peer
(P2P) live streaming platform, called fybrrStream, where a logical mesh and
physical tree (i.e., hybrid) topology-based approach is leveraged for low-latency
streaming. fybrrStream distributes the load on
participating peers in a hierarchical manner by considering their network
bandwidth, network latency, and node stability. fybrrStream costs as little as
hosting a lightweight website, and its performance is comparable
to the existing state-of-the-art media streaming services. We evaluated and
tested the proposed fybrrStream platform with real-field experiments using 50+
users spread across India, and the results obtained show significant improvements
in live streaming performance over other schemes.
|
We conjecture the existence of hidden Onsager algebra symmetries in two
interacting quantum integrable lattice models, i.e., the spin-1/2 XXZ model and
the spin-1 Zamolodchikov-Fateev model at arbitrary root of unity values of the
anisotropy. The conjectures relate the Onsager generators to the conserved
charges obtained from semi-cyclic transfer matrices. The conjectures are
motivated by two examples, the spin-1/2 XX model and the spin-1 U(1)-invariant
clock model. A novel construction of the semi-cyclic transfer matrices of the
spin-1 Zamolodchikov-Fateev model at arbitrary root of unity value of the
anisotropy is carried out via transfer matrix fusion procedure.
|
Robust multi-agent trajectory prediction is essential for the safe control of
robots and vehicles that interact with humans. Many existing methods treat
social and temporal information separately and therefore fall short of
modelling the joint future trajectories of all agents in a socially consistent
way. To address this, we propose a new class of Latent Variable Sequential Set
Transformers which autoregressively model multi-agent trajectories. We refer to
these architectures as "AutoBots". AutoBots model the contents of sets (e.g.
representing the properties of agents in a scene) over time and employ
multi-head self-attention blocks over these sequences of sets to encode the
sociotemporal relationships between the different actors of a scene. This
produces either the trajectory of one ego-agent or a distribution over the
future trajectories for all agents under consideration. Our approach works for
general sequences of sets and we provide illustrative experiments modelling the
sequential structure of the multiple strokes that make up symbols in the
Omniglot data. For the single-agent prediction case, we validate our model on
the NuScenes motion prediction task and achieve competitive results on the
global leaderboard. In the multi-agent forecasting setting, we validate our
model on TrajNet. We find that our method outperforms physical extrapolation
and recurrent network baselines and generates scene-consistent trajectories.
|
We present the results of radio observations from the eMERLIN telescope
combined with X-ray data from Swift for the short-duration Gamma-ray burst
(GRB) 200826A, located at a redshift of 0.71. The radio light curve shows
evidence of a sharp rise, a peak around 4-5 days post-burst, followed by a
relatively steep decline. We provide two possible interpretations based on the
time at which the light curve reached its peak. (1) If the light curve peaks
earlier, the peak is produced by the synchrotron self-absorption frequency
moving through the radio band, resulting from the forward shock propagating
into a wind medium and (2) if the light curve peaks later, the turn over in the
light curve is caused by a jet break. In the former case, we find a minimum
equipartition energy of ~3x10^47 erg and bulk Lorentz factor of ~5, while in
the latter case we estimate a jet opening angle of ~9-16 degrees. Due to the
lack of data, it is impossible to determine which interpretation is correct;
however, given its relative simplicity and its consistency with other
multi-wavelength observations, which hint at the possibility that GRB
200826A is in fact a long GRB, we prefer the first scenario over the second.
|
In this paper, we present a positivity-preserving limiter for nodal
Discontinuous Galerkin discretizations of the compressible Euler equations. We
use a Legendre-Gauss-Lobatto (LGL) Discontinuous Galerkin Spectral Element
Method (DGSEM) and blend it locally with a consistent LGL-subcell Finite Volume
(FV) discretization using a hybrid FV/DGSEM scheme that was recently proposed
for entropy stable shock capturing. We show that our strategy is able to ensure
robust simulations with positive density and pressure when using the standard
and the split-form DGSEM. Furthermore, we show the applicability of our FV
positivity limiter in extremely under-resolved vortex dominated simulations and
in problems with shocks.
|
We present covariant symmetry operators for the conformal wave equation in
the (off-shell) Kerr-NUT-AdS spacetimes. These operators, which are constructed
from the principal Killing-Yano tensor, its `symmetry descendants', and the
curvature tensor, guarantee separability of the conformal wave equation in
these spacetimes. We next discuss how these operators give rise to a full set
of conformally invariant mutually commuting operators for the conformally
rescaled spacetimes and underlie the $R$-separability of the conformal wave
equation therein. Finally, by employing the WKB approximation we derive the
associated Hamilton-Jacobi equation with a scalar curvature potential term and
show its separability in the Kerr-NUT-AdS spacetimes.
|
We study the problem of repeatedly auctioning off an item to one of $k$
bidders where: a) bidders have a per-round individual rationality constraint,
b) bidders may leave the mechanism at any point, and c) the bidders' valuations
are adversarially chosen (the prior-free setting). Without these constraints,
the auctioneer can run a second-price auction to "sell the business" and
receive the second highest total value for the entire stream of items. We show
that under these constraints, the auctioneer can attain a constant fraction of
the "sell the business" benchmark, but no more than $2/e$ of this benchmark.
In the course of doing so, we design mechanisms for a single bidder problem
of independent interest: how should you repeatedly sell an item to a (per-round
IR) buyer with adversarial valuations if you know their total value over all
rounds is $V$ but not how their value changes over time? We demonstrate a
mechanism that achieves revenue $V/e$ and show that this is tight.
|
Chemical surfactants are omnipresent in consumers' products but they suffer
from environmental concerns. For this reason, complete replacement of
petrochemical surfactants by biosurfactants constitutes a holy grail, but this is
far from occurring any time soon. If the "biosurfactants revolution" has not
occurred yet, mainly due to the higher cost and lower availability of
biosurfactants, another reason explains this fact: the poor knowledge of their
properties in solution. This tutorial review covers the
self-assembly properties and phase behavior, experimental (sections 2.3 and
2.4) and from molecular modelling (section 5), in water of the most important
microbial biosurfactants (sophorolipids, rhamnolipids, surfactin,
cellobioselipids, glucolipids) as well as their major derivatives. A critical
discussion of such properties in light of the well-known packing parameter of
surfactants is also provided (section 2.5). The relationship between the
nanoscale self-assembly and macroscopic materials properties, including
hydrogelling, solid foaming, templating or encapsulation is specifically
discussed (section 2.7). We also present their self-assembly and adsorption at
flat and complex air/liquid (e.g., foams), air/solid (adhesion), liquid/solid
(nanoparticles) and liquid/liquid (e.g., emulsions) interfaces (section 3). A
critical discussion on the use of biosurfactants as capping agents for the
development of stable nanoparticles is specifically provided (section 3.2.4).
Finally, we discuss the major findings involving biosurfactants and
macromolecules, including proteins, enzymes, polymers and polyelectrolytes.
|
We have deduced the structure of the \ce{bromobenzene}--\ce{I2} heterodimer
and the \ce{(bromobenzene)2} homodimer inside helium droplets using a
combination of laser-induced alignment, Coulomb explosion imaging, and
three-dimensional ion imaging. The complexes were fixed in a variety of
orientations in the laboratory frame, then in each case multiply ionized by an
intense laser pulse. A three-dimensional ion imaging detector, including a
Timepix3 detector, allowed us to measure the correlations between velocity
vectors of different fragments and, in conjunction with classical simulations,
work backward to the initial structure of the complex prior to explosion. For
the heterodimer, we find that the \ce{I2} molecular axis intersects the phenyl
ring of the bromobenzene approximately perpendicularly. The homodimer has a
stacked parallel structure, with the two bromine atoms pointing in opposite
directions. These results illustrate the ability of Coulomb explosion imaging
to determine the structure of large complexes, and point the way toward
real-time measurements of bimolecular reactions inside helium droplets.
|
We present a theoretical investigation of anisotropic superconducting spin
transport at a magnetic interface between a p-wave superconductor and a
ferromagnetic insulator. Our formulation describes the ferromagnetic resonance
modulations due to spin-triplet current generation, including the frequency
shift and enhanced Gilbert damping, in a unified manner. We find that the
Cooper pair symmetry is detectable from the qualitative behavior of the
ferromagnetic resonance modulation. Our theory paves the way toward anisotropic
superconducting spintronics.
|
In this paper, we develop a new free-stream preserving (FP) method for
high-order upwind conservative finite-difference (FD) schemes on
curvilinear grids. This FP method is constructed by subtracting a reference
cell-face flow state from each cell-center value in the local stencil of the
original upwind conservative FD schemes, which effectively leads to a
reformulated dissipation. It is convenient to implement this method, as it does
not require modifying the original forms of the upwind schemes. In addition,
the proposed method removes the constraint in the traditional FP conservative
FD schemes that require a consistent discretization of the mesh metrics and the
fluxes. With this, the proposed method is more flexible in simulating the
engineering problems which usually require a low-order scheme for their
low-quality mesh, while the high-order schemes can be applied to approximate
the flow states to improve the resolution. After demonstrating the strict FP
property and the order of accuracy by two simple test cases, we consider
various validation cases, including the supersonic flow around the cylinder,
the subsonic flow past the three-element airfoil, and the transonic flow around
the ONERA M6 wing, etc., to show that the method is suitable for a wide range
of fluid dynamic problems containing complex geometries. Moreover, these test
cases also indicate that the discretization order of the metrics has no
significant influence on the numerical results if the mesh resolution is not
sufficiently large.
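Read schematically (in notation assumed here, not taken from the paper), the
construction amounts to splitting the numerical flux into a central part and a
dissipation part and evaluating the dissipation on deviations of the stencil
values from a reference cell-face state $u^{\mathrm{ref}}_{i+1/2}$,
\begin{equation*}
\hat{f}_{i+1/2} = \hat{f}^{\,c}_{i+1/2} + \hat{f}^{\,d}_{i+1/2}\left(u_{i-r}-u^{\mathrm{ref}}_{i+1/2},\,\dots,\,u_{i+s}-u^{\mathrm{ref}}_{i+1/2}\right),
\end{equation*}
so that, in a uniform free stream, all arguments of the dissipation vanish and
only the free-stream preserving central part remains.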
|
The traditional setup of link prediction in networks assumes that a test set
of node pairs, which is usually balanced, is available over which to predict
the presence of links. However, in practice, there is no test set: the
ground-truth is not known, so the number of possible pairs to predict over is
quadratic in the number of nodes in the graph. Moreover, because graphs are
sparse, most of these possible pairs will not be links. Thus, link prediction
methods, which often rely on proximity-preserving embeddings or heuristic
notions of node similarity, face a vast search space, with many pairs that are
in close proximity, but that should not be linked. To mitigate this issue, we
introduce LinkWaldo, a framework for choosing from this quadratic,
massively-skewed search space of node pairs, a concise set of candidate pairs
that, in addition to being in close proximity, also structurally resemble the
observed edges. This allows it to ignore some high-proximity but
low-resemblance pairs, and also identify high-resemblance, lower-proximity
pairs. Our framework is built on a model that theoretically combines Stochastic
Block Models (SBMs) with node proximity models. The block structure of the SBM
maps out where in the search space new links are expected to fall, and the
proximity identifies the most plausible links within these blocks, using
locality sensitive hashing to avoid expensive exhaustive search. LinkWaldo can
use any node representation learning or heuristic definition of proximity, and
can generate candidate pairs for any link prediction method, allowing the
representation power of current and future methods to be realized for link
prediction in practice. We evaluate LinkWaldo on 13 networks across multiple
domains, and show that on average it returns candidate sets containing 7-33%
more missing and future links than both embedding-based and heuristic
baselines' sets.
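As a purely illustrative sketch of the locality-sensitive-hashing idea
mentioned above (this is not the LinkWaldo algorithm itself; the embedding,
number of hyperplanes and bucketing strategy are arbitrary choices made here),
candidate pairs can be restricted to nodes that collide under random-hyperplane
hashing of their embeddings:

    import numpy as np
    from collections import defaultdict
    from itertools import combinations

    # Toy LSH-based candidate generation: nodes whose embeddings fall on the
    # same side of a set of random hyperplanes share a bucket, and candidate
    # pairs are emitted only within buckets, avoiding the O(n^2) search space.
    def lsh_candidate_pairs(embeddings, n_planes=8, seed=0):
        rng = np.random.default_rng(seed)
        planes = rng.normal(size=(n_planes, embeddings.shape[1]))
        codes = (embeddings @ planes.T > 0).astype(int)
        buckets = defaultdict(list)
        for node, code in enumerate(codes):
            buckets[tuple(code)].append(node)
        pairs = set()
        for nodes in buckets.values():
            pairs.update(combinations(nodes, 2))
        return pairs

    emb = np.random.default_rng(1).normal(size=(1000, 32))
    print(len(lsh_candidate_pairs(emb)), "candidates out of", 1000 * 999 // 2)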
|
We report in this paper the analysis of the linear and nonlinear versions of
the flux-corrected transport (FEM-FCT) scheme in combination with the backward
Euler time-stepping scheme applied to time-dependent
convection-diffusion-reaction problems. We present stability and error
estimates for the linear and nonlinear FEM-FCT schemes. Numerical results
confirm the theoretical predictions.
|
This paper proposes a new reinforcement learning scheme with hyperbolic
discounting. By combining a new temporal difference error with hyperbolic
discounting in a recursive manner and a reward-punishment framework, a new
scheme for learning the optimal policy is derived. In simulations, the proposal
is found to outperform standard reinforcement learning, although the
performance depends on the design of reward and punishment. In addition, the
average discount factors w.r.t. reward and punishment differ from each other,
resembling the sign effect observed in animal behavior.
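A minimal numerical illustration of the hyperbolic-versus-exponential contrast
underlying this proposal (the parameter values below are arbitrary and not
taken from the paper):

    import numpy as np

    # Hyperbolic discounting weights a reward received t steps ahead by
    # 1 / (1 + k*t), in contrast to the exponential weight gamma**t used in
    # standard reinforcement learning; the hyperbolic weights decay more
    # slowly, so distant outcomes keep a larger influence on the return.
    def hyperbolic_weights(horizon, k=0.1):
        t = np.arange(horizon)
        return 1.0 / (1.0 + k * t)

    def exponential_weights(horizon, gamma=0.95):
        t = np.arange(horizon)
        return gamma ** t

    def discounted_return(rewards, weights):
        rewards = np.asarray(rewards, dtype=float)
        return float(np.sum(weights[:len(rewards)] * rewards))

    rewards = [1.0] * 50
    print("hyperbolic :", discounted_return(rewards, hyperbolic_weights(50)))
    print("exponential:", discounted_return(rewards, exponential_weights(50)))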
|
The discrepancy between theory and experiment severely limits the development
of quantum key distribution (QKD). The reference-frame-independent (RFI)
protocol has been proposed to avoid alignment of the reference frame. However,
multiple optical modes caused by Trojan horse attacks and equipment loopholes
unavoidably lead to imperfect emitted signals. In this paper, we analyze the
security of the RFI-QKD protocol with non-qubit sources by generalizing
loss-tolerant techniques. The simulation results show that our approach can
effectively defend against non-qubit source imperfections, including a
misaligned reference frame, state preparation flaws, multiple optical modes,
and Trojan horse attacks. Moreover, it only requires the preparation of four
quantum states, which reduces the complexity of future experiments.
|
Humans are arguably among the most important subjects in video streams; many
real-world applications, such as video summarization or video editing
workflows, often require the automatic search and retrieval of a person of interest.
Despite tremendous efforts in the person reidentification and retrieval
domains, few works have developed audiovisual search strategies. In this paper,
we present the Audiovisual Person Search dataset (APES), a new dataset composed
of untrimmed videos whose audio (voices) and visual (faces) streams are densely
annotated. APES contains over 1.9K identities labeled along 36 hours of video,
making it the largest dataset available for untrimmed audiovisual person
search. A key property of APES is that it includes dense temporal annotations
that link faces to speech segments of the same identity. To showcase the
potential of our new dataset, we propose an audiovisual baseline and benchmark
for person retrieval. Our study shows that modeling audiovisual cues benefits
the recognition of people's identities. To enable reproducibility and promote
future research, the dataset annotations and baseline code are available at:
https://github.com/fuankarion/audiovisual-person-search
|
The COVID-19 pandemic has influenced the lives of people globally. In the
past year many researchers have proposed different models and approaches to
explore ways in which the spread of the disease could be mitigated. One of the
most widely used models is the Susceptible-Exposed-Infectious-Recovered (SEIR)
model. Some researchers have modified the traditional SEIR model and proposed
new versions of it. However, to the best of our knowledge, state-of-the-art
papers have not considered the effect of different vaccine types, namely
single-shot and double-shot vaccines, in their SEIR models. In this paper, we
propose a modified version of the SEIR model which takes into account the
effect of different vaccine types. We compare how different policies for the
administration of the vaccine can influence the rate at which people are
exposed to the disease, get infected, recover, and pass away. Our results
suggest that double-shot vaccines such as Pfizer-BioNTech and Moderna mitigate
the spread and fatality rate of the disease better than single-shot vaccines,
owing to their higher efficacy.
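A minimal sketch of how such a vaccine-aware SEIR extension can be set up
numerically (the compartments, rates and efficacies below are assumptions made
for illustration and do not reproduce the paper's exact model):

    import numpy as np
    from scipy.integrate import odeint

    # SEIR model extended with two vaccinated compartments: V1 (single-shot)
    # and V2 (double-shot), filled at rates nu1 and nu2 and protected with
    # assumed efficacies eff1 < eff2 against infection.
    def seirv(y, t, beta, sigma, gamma, nu1, nu2, eff1, eff2):
        S, E, I, R, V1, V2 = y
        inf_S = beta * S * I
        inf_V1 = (1 - eff1) * beta * V1 * I   # breakthrough infections
        inf_V2 = (1 - eff2) * beta * V2 * I
        dS = -inf_S - (nu1 + nu2) * S
        dE = inf_S + inf_V1 + inf_V2 - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        dV1 = nu1 * S - inf_V1
        dV2 = nu2 * S - inf_V2
        return [dS, dE, dI, dR, dV1, dV2]

    t = np.linspace(0, 300, 301)
    y0 = [0.99, 0.0, 0.01, 0.0, 0.0, 0.0]   # normalized population
    sol = odeint(seirv, y0, t, args=(0.3, 1/5, 1/10, 0.002, 0.004, 0.7, 0.95))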
|
We study the potential of gravitational wave astronomy to observe the quantum
aspects of black holes. Following Bekenstein's quantization proposal, we find
that black hole area discretization can have observable imprints on the
gravitational wave signal from an inspiraling binary black hole. We study the
impact of quantization on tidal heating. We model the absorption lines and
compute the gravitational wave flux due to tidal heating in such a case. By
including the quantization, we compute the dephasing of the gravitational
wave, which, to our knowledge, has not been done before. We discuss the
observability of the phenomena in different parameter ranges of the binary. We
show that in the inspiral, quantization leads to vanishing tidal heating for
high spin values. Therefore, measuring non-zero tidal heating can rule out
area quantization. We also argue that if area quantization is present in
nature, then our current modeling with reflectivity could probe Hawking
radiation, which may bring important information regarding the quantum nature
of gravity.
|
Summarization evaluation remains an open research problem: current metrics
such as ROUGE are known to be limited and to correlate poorly with human
judgments. To alleviate this issue, recent work has proposed evaluation metrics
which rely on question answering models to assess whether a summary contains
all the relevant information in its source document. Though promising, the
proposed approaches have so far failed to correlate better than ROUGE with
human judgments.
In this paper, we extend previous approaches and propose a unified framework,
named QuestEval. In contrast to established metrics such as ROUGE or BERTScore,
QuestEval does not require any ground-truth reference. Nonetheless, QuestEval
substantially improves the correlation with human judgments over four
evaluation dimensions (consistency, coherence, fluency, and relevance), as
shown in the extensive experiments we report.
|
One of the best ways to understand the gravitation of a massive object is by
studying the photon's motion around it. We study the null geodesic of a regular
black hole in anti-de Sitter spacetime, including a Gaussian matter
distribution. We obtain the effective potential and discuss the possible
motions of the photon for different energy levels. The nature of the effective
potential implies that the photon is prevented from reaching the black hole's
center. Different types of possible orbits are considered. A photon with
negative energy is trapped in a potential well and moves back and forth
between two horizons of the metric. For specific values of positive energy,
the trapped photon still moves back and forth, but it crosses the horizons in
each direction. The effective potential has an
unstable point outside the horizons, which indicates the possible circular
motion of the photon. The closest approach of the photon and the bending angle
are also investigated.
|
We discuss excitation of string oscillation modes by an initial singularity
of inflation. The initial singularity of inflation is known to occur with a
finite Hubble parameter, which is generally lower than the string scale, and
hence it is not clear that stringy effects become significant around it. With
the help of the Penrose limit, we find that infinitely heavy oscillation modes
get excited when a singularity is strong in the sense of Krolak's
classification. We demonstrate that the initial singularities of Starobinsky
and hilltop inflation, assuming slow-roll inflation extends to past infinity,
are strong. Hence stringy corrections are inevitable in the very early stage
of these inflation models. We also find that the initial singularity of
hilltop inflation could be weak in the non-slow-roll case.
|
The leading-order approximation to a Filippov system $f$ about a generic
boundary equilibrium $x^*$ is a system $F$ that is affine on one side of the
boundary and constant on the other side. We prove $x^*$ is exponentially stable
for $f$ if and only if it is exponentially stable for $F$ when the constant
component of $F$ is not tangent to the boundary. We then show exponential
stability and asymptotic stability are in fact equivalent for $F$. We also show
exponential stability is preserved under small perturbations to the pieces of
$F$. Such results are well known for homogeneous systems. To prove the results
here additional techniques are required because the two components of $F$ have
different degrees of homogeneity. The primary function of the results is to
reduce the problem of the stability of $x^*$ from the general Filippov system
$f$ to the simpler system $F$. Yet in general this problem remains difficult.
We provide a four-dimensional example of $F$ for which orbits appear to
converge to $x^*$ in a chaotic fashion. By utilising the presence of both
homogeneity and sliding motion the dynamics of $F$ can in this case be reduced
to the combination of a one-dimensional return map and a scalar function.
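In coordinates where the discontinuity boundary is the hyperplane $\{x_1=0\}$
(a choice of notation made here for illustration rather than taken from the
paper), such a leading-order system has the form
\begin{equation*}
F(x)=\begin{cases} A x + b, & x_1>0,\\ c, & x_1<0, \end{cases}
\end{equation*}
with a constant matrix $A$ and constant vectors $b$ and $c$; the non-tangency
assumption above corresponds to the first component of $c$ being nonzero.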
|
Network design, a cornerstone of mathematical optimization, is about defining
the main characteristics of a network satisfying requirements on connectivity,
capacity, and level-of-service. It finds applications in logistics and
transportation, telecommunications, data sharing, energy distribution, and
distributed computing. In multi-commodity network design, one is required to
design a network minimizing the installation cost of its arcs and the
operational cost to serve a set of point-to-point connections. The definition
of this prototypical problem was recently enriched by additional constraints
imposing that each origin-destination of a connection is served by a single
path satisfying one or more level-of-service requirements, thus defining the
Network Design with Service Requirements [Balakrishnan, Li, and Mirchandani.
Operations Research, 2017]. These constraints are crucial, e.g., in
telecommunications and computer networks, in order to ensure reliable and
low-latency communication. In this paper we provide a new formulation for the
problem, where variables are associated with paths satisfying the end-to-end
service requirements. We present a fast algorithm for enumerating all the
exponentially-many feasible paths and, when this is not viable, we provide a
column generation scheme that is embedded into a branch-and-cut-and-price
algorithm. Extensive computational experiments on a large set of instances show
that our approach is able to move a step further in the solution of the Network
Design with Service Requirements, compared with the current state-of-the-art.
|
To ensure protection of the intellectual property rights of DNN models,
watermarking techniques have been investigated to insert side-information into
the models without seriously degrading the performance of the original task.
One of the threats to DNN watermarking is the pruning attack, in which less
important neurons in the model are pruned to make it faster and more compact
as well as to remove the watermark. In this study, we investigate a channel
coding approach to resist the pruning attack. As the channel model is
completely different from conventional models such as digital images, it has
been an open problem what kind of encoding method is suitable for DNN
watermarking. A novel encoding approach using constant weight codes to
withstand the effects of pruning attacks is presented. To the best of our
knowledge, this is the first
study that introduces an encoding technique for DNN watermarking to make it
robust against pruning attacks.
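As a purely illustrative aside (the embedding procedure itself is not described
in this abstract), a constant weight code is a set of binary codewords sharing
the same Hamming weight, which can be enumerated for small parameters as
follows:

    from itertools import combinations

    # Enumerate all binary words of length n with exactly w ones. Practical
    # constant weight codes additionally impose a minimum pairwise Hamming
    # distance; this sketch only fixes the weight.
    def constant_weight_words(n, w):
        words = []
        for ones in combinations(range(n), w):
            word = [0] * n
            for i in ones:
                word[i] = 1
            words.append(tuple(word))
        return words

    print(constant_weight_words(5, 2))   # 10 words, each with exactly two 1s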
|
Three-dimensional line-nodal superconductors exhibit nontrivial topology,
which is protected by the time-reversal symmetry. Here we investigate four
types of short-range interaction between the gapless line-nodal fermionic
quasiparticles by carrying out a renormalization group analysis. We find that such
interactions can induce the dynamical breaking of time-reversal symmetry, which
alters the topology and might lead to six possible distinct superconducting
states, distinguished by the group representations. After computing the
susceptibilities for all the possible phase-transition instabilities, we
establish that the superconducting pairing characterized by $id_{xz}$-wave gap
symmetry is the leading instability in noncentrosymmetric superconductors.
Appropriate extension of this approach is promising to pick out the most
favorable superconducting pairing during similar topology-changing transition
in the polar phase of $^3$He.
|
Malnutrition is a major public health concern in low-and-middle-income
countries (LMICs). Understanding food and nutrient intake across communities,
households and individuals is critical to the development of health policies
and interventions. To ease the procedure of conducting large-scale dietary
assessments, we propose an intelligent passive food intake assessment system
based on egocentric cameras, aimed particularly at households in Ghana and
Uganda. Algorithms are first designed to remove redundant images in order to
minimise storage requirements. At run time, deep learning-based semantic
segmentation is applied to recognise multiple food types, and newly designed
handcrafted features are extracted to monitor the weight of the consumed food.
Comprehensive experiments are conducted to validate our methods on an
in-the-wild dataset captured under settings that simulate the unique LMIC
conditions, with participants of Ghanaian and Kenyan origin eating common
Ghanaian/Kenyan dishes. To demonstrate the efficacy, experienced dietitians
are involved in this research to perform visual portion size estimation, and
their predictions are compared with those of our proposed method. The
promising results show that our method is able to reliably monitor food intake
and give feedback on users' eating behaviour, which provides guidance for
dietitians in regular dietary assessment.
|
Increased connectivity has made us all more vulnerable. Cyberspace, besides
all its benefits, spawned more devices to hack and more opportunities to commit
cybercrime. Criminals have found it lucrative to target both individuals and
businesses, by holding or stealing their assets via different types of cyber
attacks. The cyber-enabled theft of Intellectual Property (IP), as one of the
most important and critical intangible assets of nations, organizations and
individuals, by foreign countries has been a devastating challenge for the
United States (U.S.) over the past decades. In this study, we conduct a
socio-technical root cause analysis to investigate one of the recent cases of
IP theft by employing a holistic approach. It concludes with a list of root
causes and some corrective actions to stop the impact and prevent the
recurrence of the problem in the future. Building upon the findings of this
study, the U.S. requires a detailed revision of its IP strategies, bringing the
whole socio-technical regulatory system into focus and strengthening IP rights
protection in light of China's indigenous innovation policies. It is critical
that businesses and other organizations take steps to reduce their exposure to
cyber attacks. It is particularly important to train employees on how to spot
potential threats, and to institute policies that encourage workers to report
potential security failures so that action can be taken quickly. Finally, we
discuss how cyber ranges can provide an efficient and safe platform for dealing
with such challenges. The results of this study can be expanded to other
countries in order to protect their IP rights and deter or prevent and respond
to future incidents.
|
Braiding Majorana zero modes (MZMs) is the key procedure toward topological
quantum computation. However, the complexity of the braiding manipulation
hinders its experimental realization. Here we propose an experimental setup
composed of MZMs and a quantum dot state which can substantially simplify the
braiding protocol of MZMs. Such a braiding scheme, which corresponds to a
specific closed loop in the parameter space, is quite universal and can be
realized in various platforms. Moreover, the braiding results can be directly
measured and manifested through electric current, which provides a simple and
novel way to detect the non-Abelian statistics of MZMs.
|
Dense optical flow estimation is challenging when there are large
displacements in a scene with heterogeneous motion dynamics, occlusion, and
scene homogeneity. Traditional approaches to handle these challenges include
hierarchical and multiresolution processing methods. Learning-based optical
flow methods typically use a multiresolution approach with image warping when a
broad range of flow velocities and heterogeneous motion is present. Accuracy of
such coarse-to-fine methods is affected by the ghosting artifacts when images
are warped across multiple resolutions and by the vanishing problem in smaller
scene extents with higher motion contrast. Previously, we devised strategies
for building compact dense prediction networks guided by the effective
receptive field (ERF) characteristics of the network (DDCNet). The DDCNet
design was intentionally simple and compact allowing it to be used as a
building block for designing more complex yet compact networks. In this work,
we extend the DDCNet strategies to handle heterogeneous motion dynamics by
cascading DDCNet based sub-nets with decreasing extents of their ERF. Our
DDCNet with multiresolution capability (DDCNet-Multires) is compact without any
specialized network layers. We evaluate the performance of the DDCNet-Multires
network using standard optical flow benchmark datasets. Our experiments
demonstrate that DDCNet-Multires improves over the DDCNet-B0 and -B1 and
provides optical flow estimates with accuracy comparable to similar lightweight
learning-based methods.
|
Convergence to equilibrium of underdamped Langevin dynamics is studied under
general assumptions on the potential $U$ allowing for singularities. By
modifying the direct approach to convergence in $L^2$ pioneered by F. H\'erau
and developed by Dolbeault, Mouhot and Schmeiser, we show that the dynamics
converges exponentially fast to equilibrium in the topologies $L^2(d\mu)$ and
$L^2(W^* d\mu)$, where $\mu$ denotes the invariant probability measure and
$W^*$ is a suitable Lyapunov weight. In both norms, we make precise how the
exponential convergence rate depends on the friction parameter $\gamma$ in
Langevin dynamics, by providing a lower bound scaling as $\min(\gamma,
\gamma^{-1})$. The results hold for usual polynomial-type potentials as well as
potentials with singularities such as those arising from pairwise Lennard-Jones
interactions between particles.
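For reference, underdamped Langevin dynamics in a standard form (unit mass,
inverse temperature $\beta$), consistent with the setting above, reads
\begin{equation*}
dq_t = p_t\,dt, \qquad dp_t = -\nabla U(q_t)\,dt - \gamma p_t\,dt + \sqrt{2\gamma\beta^{-1}}\,dW_t,
\end{equation*}
with invariant probability measure $\mu(dq\,dp) \propto e^{-\beta\left(U(q)+|p|^2/2\right)}\,dq\,dp$.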
|
Mobile communication networks were designed to mainly support ubiquitous
wireless communications, yet they are expected to also achieve radio sensing
capabilities in the near future. Most prior studies on radar sensing focus on
distant targets, which usually rely on far-field assumption with uniform plane
wave (UPW) models. However, with ever-increasing antenna size, together with
the growing need to also sense nearby targets, the far-field assumption may
become invalid. This paper studies radar sensing with extremely large-scale
(XL) antenna arrays, where a generic model that takes into account both
spherical wavefront and amplitude variations across array elements is
developed. Furthermore, new closed-form expressions of the sensing
signal-to-noise ratios (SNRs) are derived for both XL-MIMO radar and
XL-phased-array radar modes. Our results reveal that, in contrast to the
conventional UPW model, where the SNR scales linearly and unboundedly with N
for MIMO radar and with MN for phased-array radar (M and N being the numbers
of transmit and receive antennas, respectively), more practical SNR scaling laws are
obtained. For XL-phased-array radar with optimal power allocation, the SNR
increases with M and N with diminishing returns, governed by new parameters
called the transmit and receive angular spans. On the other hand, for XL-MIMO
radar, while the same SNR scaling as XL-phased-array radar is obeyed for N, the
SNR first increases and then decreases with M.
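A generic near-field array response of the kind referred to above (in notation
assumed here, not necessarily the paper's) models the channel between a target
at location $\mathbf{q}$ and the $n$-th array element at $\mathbf{w}_n$ as
\begin{equation*}
a_n(\mathbf{q}) \propto \frac{1}{r_n}\,e^{-j 2\pi r_n/\lambda}, \qquad r_n = \lVert \mathbf{q}-\mathbf{w}_n\rVert,
\end{equation*}
which captures both the spherical wavefront (through the element-dependent
phase $2\pi r_n/\lambda$) and the amplitude variation across elements (through
the factor $1/r_n$); the far-field UPW model instead replaces $r_n$ in the
phase by the linear approximation $r-\mathbf{w}_n^{\mathsf T}\hat{\mathbf{q}}$,
with $r=\lVert\mathbf{q}\rVert$ and $\hat{\mathbf{q}}=\mathbf{q}/r$, and
ignores the amplitude variation.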
|
In this paper, LQG control over unreliable communication links is derived.
That is to say, the communication channels between the controller and the
actuators and between the sensors and the controller are unreliable. Previous
solutions to finite-horizon discrete-time hold-input LQG control for this case
do not fully utilize the available information. Here a new solution is
presented which resolves this limitation. The focus is on deriving and
presenting a full mathematical proof of the optimal control sequence.
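A standard formulation of this type of problem (written here in assumed
notation; the paper's exact model may differ) augments a linear plant with
Bernoulli packet-loss variables on both links,
\begin{equation*}
x_{k+1}=A x_k + B u^{a}_k + w_k, \qquad u^{a}_k = \nu_k u_k + (1-\nu_k)\,u^{a}_{k-1},
\end{equation*}
where $\nu_k\in\{0,1\}$ models the controller-to-actuator channel and the
hold-input rule keeps the previously applied input when a packet is lost,
while the measurement $y_k = C x_k + v_k$ reaches the controller only when the
sensor-to-controller variable $\theta_k=1$; the objective is the expected
finite-horizon quadratic cost in the state and the applied inputs.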
|
We add non-linear and state-dependent terms to quantum field theory. We show
that the resulting low-energy theory, non-linear quantum mechanics, is causal,
preserves probability and permits a consistent description of the process of
measurement. We explore the consequences of such terms and show that non-linear
quantum effects can be observed in macroscopic systems even in the presence of
decoherence. We find that current experimental bounds on these non-linearities
are weak and propose several experimental methods to significantly probe these
effects. The locally exploitable effects of these non-linearities have enormous
technological implications. For example, they would allow large scale
parallelization of computing (in fact, any other effort) and enable quantum
sensing beyond the standard quantum limit. We also expose a fundamental
vulnerability of any non-linear modification of quantum mechanics - these
modifications are highly sensitive to cosmic history and their locally
exploitable effects can dynamically disappear if the observed universe has a
tiny overlap with the overall quantum state of the universe, as is predicted in
conventional inflationary cosmology. We identify observables that persist in
this case and discuss opportunities to detect them in cosmic ray experiments,
tests of strong field general relativity and current probes of the equation of
state of the universe. Non-linear quantum mechanics also enables novel
gravitational phenomena and may open new directions to solve the black hole
information problem and uncover the theory underlying quantum field theory and
gravitation.
|
In 2021, Dzhunusov and Zaitseva classified two-dimensional normal affine
commutative algebraic monoids. In this work, we extend this classification to
noncommutative monoid structures on normal affine surfaces. We prove that
two-dimensional algebraic monoids are toric. We also show how to find all
monoid structures on a normal toric surface. Every such structure is induced by
a comultiplication formula involving Demazure roots. We also give descriptions
of opposite monoids, quotient monoids, and boundary divisors.
|
We study nonlinear pantograph-type reaction-diffusion PDEs, which, in
addition to the unknown $u=u(x,t)$, also contain the same functions with
dilated or contracted arguments of the form $w=u(px,t)$, $w=u(x,qt)$, and
$w=u(px,qt)$, where $p$ and $q$ are the free scaling parameters (for equations
with proportional delay we have $0<p<1$, $0<q<1$). A brief review of
publications on pantograph-type ODEs and PDEs and their applications is given.
Exact solutions and reductions of various types of such nonlinear partial
functional differential equations are described for the first time. We present
examples of nonlinear pantograph-type PDEs with proportional delay, which admit
traveling-wave and self-similar solutions (note that PDEs with constant delay
do not have self-similar solutions). Additive, multiplicative and functional
separable solutions, as well as some other exact solutions are also obtained.
Special attention is paid to nonlinear pantograph-type PDEs of a rather general
form, which contain one or two arbitrary functions. In total, more than forty
nonlinear pantograph-type reaction-diffusion PDEs with dilated or contracted
arguments, admitting exact solutions, have been considered. Multi-pantograph
nonlinear PDEs are also discussed. The principle of analogy is formulated,
which makes it possible to efficiently construct exact solutions of nonlinear
pantograph-type PDEs. A number of exact solutions of more complex nonlinear
functional differential equations with varying delay, which arbitrarily depends
on time or spatial coordinate, are also described. The presented equations and
their exact solutions can be used to formulate test problems designed to
evaluate the accuracy of numerical and approximate analytical methods for
solving the corresponding nonlinear initial-boundary value problems for PDEs
with varying delay.
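A representative member of the class described above (one concrete form among
many; the coefficient $a$ and the function $F$ are placeholders) is the
pantograph-type reaction-diffusion equation
\begin{equation*}
u_t = a\,u_{xx} + F\bigl(u(x,t),\,u(px,qt)\bigr), \qquad 0<p\le 1,\quad 0<q\le 1,
\end{equation*}
with the proportional-delay case corresponding to $p,q<1$ and with $q=1$ or
$p=1$ recovering the arguments $u(px,t)$ and $u(x,qt)$ mentioned above.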
|
The regulatory framework of cryptocurrencies (and, in general, blockchain
tokens) is of paramount importance. This framework drives nearly all key
decisions in the respective business areas. In this work, a computational model
is proposed for quantitatively estimating the regulatory stance of countries
with respect to cryptocurrencies. This is conducted via web mining utilizing
web search engines. The proposed model is experimentally validated. In
addition, unsupervised learning (clustering) is applied for better analyzing
the automatically derived estimations. Overall, very good performance is
achieved by the proposed algorithmic approach.
|
Automatic speech recognition systems have been largely improved in the past
few decades and current systems are mainly hybrid-based and end-to-end-based.
The recently proposed CTC-CRF framework inherits the data-efficiency of the
hybrid approach and the simplicity of the end-to-end approach. In this paper,
we further advance CTC-CRF based ASR technique with explorations on modeling
units and neural architectures. Specifically, we investigate techniques to
enable the recently developed wordpiece modeling units and Conformer neural
networks to be successfully applied in CTC-CRFs. Experiments are conducted on
two English datasets (Switchboard, Librispeech) and a German dataset from
CommonVoice. Experimental results suggest that (i) Conformer can improve the
recognition performance significantly; (ii) wordpiece-based systems perform
slightly worse than phone-based systems for target languages with a low degree
of grapheme-phoneme correspondence (e.g., English), while the two systems can
perform equally well when the degree of correspondence is high for the target
language (e.g., German).
|
We state and prove in modern terms a Splitting Principle first claimed by
Beniamino Segre in 1938, which should be regarded as a strong form of the
classical Principle of Connectedness.
|
The flex locus parameterizes plane cubics with three collinear cocritical
points under a projection, and the gothic locus arises from quadratic
differentials with zeros at a fiber of the projection and with poles at the
cocritical points. The flex and gothic loci provide the first example of a
primitive, totally geodesic subvariety of moduli space and new ${\rm
SL}_2(\mathbb{R})$-invariant varieties in Teichm\"uller dynamics, as discovered
by McMullen-Mukamel-Wright. In this paper we determine the divisor class of the
flex locus as well as various tautological intersection numbers on the gothic
locus. For the case of the gothic locus our result confirms numerically a
conjecture of Chen-M\"oller-Sauvaget about computing sums of Lyapunov exponents
for ${\rm SL}_2(\mathbb{R})$-invariant varieties via intersection theory.
|
The concept of E-learning in Universities has grown rapidly over the years to
include not only a learning management system but also tools not initially
designed for learning, such as Facebook, and advanced learning tools, for example
games, simulations and virtualization. As a result, Cloud-based LMS is being
touted as the next evolution of the traditional LMS. It is hoped that Cloud
based LMS will resolve some of the challenges associated with the traditional
LMS implementation process. In a previous study, we reported that lack of
involvement of faculty and students in the LMS implementation process results
in the limited use of the LMS by faculty and students. The question then is:
will the cloud-based LMS resolve these issues? We conducted a review of the
literature, presented an overview of the traditional LMS, cloud computing and
the cloud-based LMS, and described how the cloud-based LMS resolves issues
raised by faculty and students. We find that even though the cloud-based LMS
resolves most of the technical issues associated with the traditional LMS,
some of the human issues remain unresolved. We hope that this study draws
attention to the non-technical issues associated with the LMS implementation
process.
|
Deep learning is vulnerable to adversarial examples. Many defenses based on
randomized neural networks have been proposed to solve the problem, but fail to
achieve robustness against attacks using proxy gradients such as the
Expectation over Transformation (EOT) attack. We investigate the effect of the
adversarial attacks using proxy gradients on randomized neural networks and
demonstrate that it highly relies on the directional distribution of the loss
gradients of the randomized neural network. We show in particular that proxy
gradients are less effective when the gradients are more scattered. To this
end, we propose Gradient Diversity (GradDiv) regularizations that minimize the
concentration of the gradients to build a robust randomized neural network. Our
experiments on MNIST, CIFAR10, and STL10 show that our proposed GradDiv
regularizations improve the adversarial robustness of randomized neural
networks against a variety of state-of-the-art attack methods. Moreover, our
method efficiently reduces the transferability among sample models of
randomized neural networks.
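As an illustrative sketch of the general idea (this is not the paper's exact
GradDiv regularizer; the concentration measure and sampling scheme below are
assumptions made here), one can penalize how strongly the loss-gradient
directions of several stochastic forward passes concentrate around their mean
direction:

    import torch

    # Measure the concentration of input-gradient directions across several
    # stochastic forward passes of a randomized model: the mean resultant
    # length of the unit gradient vectors lies in [0, 1], and lower values
    # mean more scattered (harder-to-proxy) gradient directions.
    def gradient_concentration(model, loss_fn, x, y, n_samples=8):
        directions = []
        for _ in range(n_samples):
            x_in = x.clone().requires_grad_(True)
            loss = loss_fn(model(x_in), y)     # randomness comes from the model
            g = torch.autograd.grad(loss, x_in)[0].flatten(1)
            directions.append(g / (g.norm(dim=1, keepdim=True) + 1e-12))
        d = torch.stack(directions)            # (n_samples, batch, dim)
        return d.mean(dim=0).norm(dim=1).mean()

    # Training sketch: total_loss = task_loss + lam * gradient_concentration(...)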
|
This paper is devoted to showing that the last quarter of the past century can
be considered the golden age of Mathematical Finance. In this period the
collaboration of great economists and the best generation of probabilists,
most of them from the Strasbourg school led by Paul Andr\'e Meyer, gave rise
to the foundations of this discipline. They established the two fundamental
theorems of arbitrage theory, closed formulas for options, the main modelling a
|
Two-dimensional SrTiO3-based interfaces stand out among non-centrosymmetric
superconductors due to their intricate interplay of gate tunable Rashba
spin-orbit coupling and multi-orbital electronic occupations, whose combination
theoretically prefigures various forms of non-standard superconductivity.
However, a convincing demonstration by phase sensitive measurements has been
elusive so far. Here, by employing superconducting transport measurements in
nano-devices, we present clear-cut experimental evidence of unconventional
superconductivity at the LaAlO3/SrTiO3 interface. The central observations are
the substantial anomalous enhancement of the critical current by small magnetic
fields applied perpendicularly to the plane of electron motion, and the
asymmetric response with respect to the magnetic field direction. These
features have a unique trend in intensity and sign upon electrostatic gating
that, together with their dependence on temperature and nanowire dimensions,
cannot be accommodated within a scenario of canonical spin-singlet
superconductivity. We theoretically demonstrate that the hallmarks of the
experimental observations unambiguously indicate a coexistence of Josephson
channels with sign difference and intrinsic phase shift. The character of these
findings establishes the occurrence of independent components of unconventional
pairing in the superconducting state due to inversion symmetry breaking. The
outcomes open new avenues for the investigation of multi-orbital
non-centrosymmetric superconductivity and Josephson-based devices for quantum
technologies.
|
We clarify the undecided case $c_2 = 3$ of a theorem of Ein, Hartshorne and
Vogelaar [Math. Ann. 259 (1982), 541--569] about the restriction of a stable
rank 3 vector bundle with $c_1 = 0$ on the projective 3-space to a general
plane. It turns out that there are more exceptions to the stable restriction
property than those conjectured by the three authors. One of them is a
Schwarzenberger bundle (twisted by $-1$); it has $c_3 = 6$. There are also some
exceptions with $c_3 = 2$ (plus, of course, their duals). We also prove, for
completeness, the basic properties of the corresponding moduli spaces; they are
all nonsingular and connected, of dimension 28.
|
The Variational Quantum Eigensolver (VQE) is a promising algorithm for Noisy
Intermediate Scale Quantum (NISQ) computation. Verification and validation of
NISQ algorithms' performance on NISQ devices is an important task. We consider
the exactly-diagonalizable Lipkin-Meshkov-Glick (LMG) model as a candidate for
benchmarking NISQ computers. We use the Bethe ansatz to construct eigenstates
of the trigonometric LMG model using quantum circuits inspired by the LMG's
underlying algebraic structure. We construct circuits with depth
$\mathcal{O}(N)$ and $\mathcal{O}(\log_2N)$ that can prepare any trigonometric
LMG eigenstate of $N$ particles. The number of gates required for both circuits
is $\mathcal{O}(N)$. The energies of the eigenstates can then be measured and
compared to the exactly-known answers.
|
In this article, we deal with the order of growth of solutions of the
non-homogeneous linear differential-difference equation \begin{equation*}
\sum_{i=0}^{n}\sum_{j=0}^{m}A_{ij}f^{(j)}(z+c_{i})=F(z), \end{equation*} where
$A_{ij}$ and $F\left( z\right)$ are entire or meromorphic functions and
$c_{i}$ $\left( i=0,1,\ldots,n\right)$ are non-zero distinct complex numbers. Under the
sufficient condition that there exists one coefficient having the maximal lower
order or having the maximal lower type strictly greater than the order or the
type of other coefficients, we obtain estimates of the lower bound of the order
of meromorphic solutions of the above equation.
|
Labeling objects at a subordinate level typically requires expert knowledge,
which is not always available when using random annotators. As such, learning
directly from web images for fine-grained recognition has attracted broad
attention. However, label noise and hard examples in web images are two
obstacles to training robust fine-grained recognition models.
Therefore, in this paper, we propose a novel approach for removing irrelevant
samples from real-world web images during training, while employing useful hard
examples to update the network. Thus, our approach can alleviate the harmful
effects of irrelevant noisy web images and hard examples to achieve better
performance. Extensive experiments on three commonly used fine-grained datasets
demonstrate that our approach is far superior to current state-of-the-art
web-supervised methods.
|
In this paper, we investigate joint information-theoretic security and covert
communication on a network in the presence of a single transmitter (Alice), a
friendly jammer, a single untrusted user, two legitimate users, and a single
warden of the channel (Willie). In the considered network, one of the
authorized users, Bob, needs secure and covert communication; therefore, his
message must be sent securely, and at the same time the existence of his
communication with the transmitter should not be detected by the channel's
warden, Willie. Meanwhile, another authorized user, Carol, needs covert
communication. The purpose of secure communication is to prevent the message
from being decoded by the untrusted user present on the network, which leads
us to use a physical-layer security method, namely information-theoretically
secure transmission. In some cases, in addition to protecting the content of
the message, it is important for the user that the existence of the
transmission is not detected by an adversary, which leads us to covert
communication. In the proposed network model, it is assumed that, to meet the
covert communication requirements, Alice sends no messages to the legitimate
users in one time slot and sends to both of them (Bob and Carol) in another
time slot. One of the main challenges in covert communication is the low
transmission rate, because the transmission power must be reduced so that the
main message remains hidden in the background noise.
|
Permutation Mastermind is a version of the classical mastermind game in which
the number of positions $n$ is equal to the number of colors $k$, and
repetition of colors is not allowed, neither in the codeword nor in the
queries. In this paper we solve the main open question from Glazik, J\"ager,
Schiemann and Srivastav (2021), who asked whether their bound of $O(n^{1.525})$
for the static version can be improved to $O(n \log n)$, which would be best
possible. By using a simple probabilistic argument we show that this is indeed
the case.
|
This is the second in a sequence of three papers investigating the question
for which positive integers $m$ there exists a maximal antichain of size $m$ in
the Boolean lattice $B_n$ (the power set of $[n]:=\{1,2,\dots,n\}$, ordered by
inclusion). In the previous paper we characterized those $m$ between
$\binom{n}{\lceil n/2\rceil}-\lceil n/2\rceil^2$ and the maximum size
$\binom{n}{\lceil n/2 \rceil}$ that are not sizes of maximal antichains. In
this paper we show that all smaller $m$ are sizes of maximal antichains.
|
The Planetary Instrument for X-ray Lithochemistry (PIXL) is a micro-focus
X-ray fluorescence spectrometer mounted on the robotic arm of NASA's
Perseverance rover. PIXL will acquire high spatial resolution observations of
rock and soil chemistry, rapidly analyzing the elemental chemistry of a target
surface. In 10 seconds, PIXL can use its powerful 120 micrometer diameter X-ray
beam to analyze a single, sand-sized grain with enough sensitivity to detect
major and minor rock-forming elements, as well as many trace elements. Over a
period of several hours, PIXL can autonomously scan an area of the rock surface
and acquire a hyperspectral map composed of several thousand individual
measured points.
|
We review some recent results concerning the Hartle--Hawking wavefunction of
the universe. We focus on pure Einstein theory of gravity in the presence of a
positive cosmological constant. We carefully implement the gauge-fixing
procedure for the minisuperspace path integral, by identifying the single
modulus and by using diffeomorphism-invariant measures for the ghosts and the
scale factor. Field redefinitions of the scale factor yield different
prescriptions for computing the no-boundary ground-state wavefunction. They
give rise to an infinite set of ground-state wavefunctions, each satisfying a
different Wheeler--DeWitt equation, at the semi-classical level. The
differences in the form of the Wheeler--DeWitt equations can be traced to
ordering ambiguities in constructing the Hamiltonian upon canonical
quantization. However, the inner products of the corresponding Hilbert spaces
turn out to be equivalent, at least semi-classically. Thus, the model yields
universal quantum predictions.
|
The virtual try-on task has drawn considerable attention in the field of
computer vision. However, presenting three-dimensional (3D) physical
characteristics (e.g., pleats and shadows) based on a 2D image is very
challenging. Although there have been several previous studies on 2D-based
virtual try-on, most of them 1) required user-specified target poses that are
not user-friendly and may not be the best for the target clothing, and 2)
failed to address some problematic cases, including facial details, clothing
wrinkles and body occlusions. To address these two
challenges, in this paper, we propose an innovative template-free try-on image
synthesis (TF-TIS) network. The TF-TIS first synthesizes the target pose
according to the user-specified in-shop clothing. Afterward, given an in-shop
clothing image, a user image, and a synthesized pose, we propose a novel model
for synthesizing a human try-on image with the target clothing in the best
fitting pose. The qualitative and quantitative experiments both indicate that
the proposed TF-TIS outperforms the state-of-the-art methods, especially for
difficult cases.
|
We study piece-wise constant signals corrupted by additive Gaussian noise
over a $d$-dimensional lattice. Data of this form naturally arise in a host of
applications, and the tasks of signal detection or testing, de-noising and
estimation have been studied extensively in the statistical and signal
processing literature. In this paper we consider instead the problem of
partition recovery, i.e.~of estimating the partition of the lattice induced by
the constancy regions of the unknown signal, using the
computationally-efficient dyadic classification and regression tree (DCART)
methodology proposed by \citep{donoho1997cart}. We prove that, under
appropriate regularity conditions on the shape of the partition elements, a
DCART-based procedure consistently estimates the underlying partition at a rate
of order $\sigma^2 k^* \log (N)/\kappa^2$, where $k^*$ is the minimal number of
rectangular sub-graphs obtained using recursive dyadic partitions supporting
the signal partition, $\sigma^2$ is the noise variance, $\kappa$ is the minimal
magnitude of the signal difference among contiguous elements of the partition
and $N$ is the size of the lattice. Furthermore, under stronger assumptions,
our method attains a sharper estimation error of order
$\sigma^2\log(N)/\kappa^2$, independent of $k^*$, which we show to be minimax
rate optimal. Our theoretical guarantees further extend to the partition
estimator based on the optimal regression tree estimator (ORT) of
\cite{chatterjee2019adaptive} and to the one obtained through an NP-hard
exhaustive search method. We corroborate our theoretical findings and the
effectiveness of DCART for partition recovery in simulations.
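An informal analogue of this partition-recovery task (not the DCART or ORT
estimators analyzed in the paper; the tree size and noise level are arbitrary
choices made here) is to fit an axis-aligned regression tree to noisy lattice
observations and read off its leaf partition:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Piecewise-constant signal on a 2-d lattice, corrupted by Gaussian noise;
    # the fitted tree's leaves serve as an estimate of the constancy regions.
    n = 64
    xx, yy = np.meshgrid(np.arange(n), np.arange(n))
    signal = np.where(xx < n // 2, 1.0, -1.0)
    obs = signal + 0.5 * np.random.default_rng(0).normal(size=signal.shape)

    X = np.column_stack([xx.ravel(), yy.ravel()])
    tree = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, obs.ravel())
    partition_labels = tree.apply(X).reshape(n, n)   # leaf id = estimated region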
|