The limit order book (LOB) depicts the fine-grained demand and supply
relationship for financial assets and is widely used in market microstructure
studies. Nevertheless, the availability and high cost of LOB data restrict its
wider application. The LOB recreation model (LOBRM) was recently proposed to
bridge this gap by synthesizing the LOB from trades and quotes (TAQ) data.
However, in the original LOBRM study, there were two limitations: (1)
experiments were conducted on a relatively small dataset containing only one
day of LOB data; and (2) the training and testing were performed in a
non-chronological fashion, which essentially re-frames the task as
interpolation and potentially introduces lookahead bias. In this study, we
extend the research on LOBRM and further validate its use in real-world
application scenarios. We first advance the workflow of LOBRM by (1) adding a
time-weighted z-score standardization for the LOB and (2) substituting the
ordinary differential equation kernel with an exponential decay kernel to lower
computational complexity. Experiments are conducted on the extended LOBSTER
dataset in a chronological fashion, as it would be used in a real-world
application. We find that (1) LOBRM with decay kernel is superior to
traditional non-linear models, and module ensembling is effective; (2)
prediction accuracy is negatively related to the volatility of order volumes
resting in the LOB; (3) the proposed sparse encoding method for TAQ exhibits
good generalization ability and can facilitate manifold tasks; and (4) the
influence of stochastic drift on prediction accuracy can be alleviated by
increasing historical samples.
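
For intuition, here is a minimal sketch of the two workflow changes named above: a time-weighted z-score and an exponential decay kernel. The weighting scheme, half-life parametrization, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def time_weighted_zscore(x, t, half_life):
    # Exponentially down-weight stale observations relative to the newest one
    # (an illustrative choice of weights; the paper's scheme may differ).
    w = 0.5 ** ((t[-1] - t) / half_life)
    w /= w.sum()
    mu = np.sum(w * x)                          # time-weighted mean
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))  # time-weighted std
    return (x - mu) / (sigma + 1e-12)

def exp_decay_kernel(dt, tau):
    # Closed-form exponential decay over elapsed time dt: cheaper to evaluate
    # than numerically integrating an ODE-based kernel.
    return np.exp(-np.asarray(dt, dtype=float) / tau)
```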
|
We construct finite dimensional families of non-steady solutions to the Euler
equations, existing for all time, and exhibiting all kinds of qualitative
dynamics in the phase space, for example: strange attractors and chaos,
invariant manifolds of arbitrary topology, and quasiperiodic invariant tori of
any dimension.
The main theorem of the paper, from which these families of solutions are
obtained, states that for any given vector field $X$ on a closed manifold $N$,
there is a Riemannian manifold $M$ on which the following holds: $N$ is
diffeomorphic to a finite dimensional manifold in the phase space of fluid
velocities (the space of divergence-free vector fields on $M$) that is
invariant under the Euler evolution, and on which the Euler equation reduces to
a finite dimensional ODE that is given by an arbitrarily small perturbation of
the vector field $X$ on $N$.
|
The placement of a magnetic monopole into an electrically-neutral chiral
plasma with a non-zero axial density results in an electric polarization of the
matter. The electric current produced by the chiral magnetic effect is balanced
by charge diffusion and Ohmic dissipation, which generates a non-trivial charge
distribution. In turn, the latter induces a separation of chiralities along the
magnetic field of the monopole due to the chiral separation effect. We find the
stationary states of such a system, with vanishing total electric current and
stationary axial current balanced by the chiral anomaly. In this solution, the
monopole becomes "dressed" with an electric charge that is proportional to the
averaged chiral density of the matter -- forming a chiral dyon. The interplay
between the chiral effects on the one hand, and the presence of the monopole's
magnetic field on the other, may affect the evolution of the monopole density in
the early Universe, contribute to the process of baryogenesis, and can also be
instrumental for detection of relic monopoles using chiral materials.
|
Explaining the decision of a multi-modal decision-maker requires determining
the evidence from both modalities. Recent advances in XAI provide explanations
for models trained on still images. However, when it comes to modeling multiple
sensory modalities in a dynamic world, it remains underexplored how to
demystify the mysterious dynamics of a complex multi-modal model. In this work,
we take a crucial step forward and explore learnable explanations for
audio-visual recognition. Specifically, we propose a novel space-time attention
network that uncovers the synergistic dynamics of audio and visual data over
both space and time. Our model is capable of predicting the audio-visual video
events, while justifying its decision by localizing where the relevant visual
cues appear, and when the predicted sounds occur in videos. We benchmark our
model on three audio-visual video event datasets, comparing extensively to
multiple recent multi-modal representation learners and intrinsic explanation
models. Experimental results demonstrate the clearly superior performance of our
model over the existing methods on audio-visual video event recognition.
Moreover, we conduct an in-depth study to analyze the explainability of our
model based on robustness analysis via perturbation tests and pointing games
using human annotations.
|
We consider the following model of degenerate and singular oscillatory
integral operators: \begin{equation*} Tf(x)=\int_{\mathbb{R}} e^{i\lambda
S(x,y)}K(x,y)\psi(x,y)f(y)dy, \end{equation*} where the phase functions are
homogeneous polynomials of degree $n$ and the singular kernel $K(x,y)$
satisfies suitable conditions related to a real parameter $\mu$. We show that
the sharp decay estimates on $L^2$ spaces, obtained in \cite{liu1999model}, can
be preserved on more general $L^p$ spaces with an additional condition imposed
on the singular kernel. In fact, we obtain that \begin{equation*}
\|Tf\|_{L^p}\leq C_{E,S,\psi,\mu,n,p}\lambda^{-\frac{1-\mu}{n}}\|f\|_{L^p},\ \
\frac{n-2\mu}{n-1-\mu}\leq p \leq\frac{n-2\mu}{1-\mu}. \end{equation*} The case
without the additional condition is also discussed.
|
From a geometric point of view, Pauli's exclusion principle defines a
hypersimplex. This convex polytope describes the compatibility of $1$-fermion
and $N$-fermion density matrices, and therefore coincides with the convex hull
of the pure $N$-representable $1$-fermion density matrices. Consequently, the
description of ground state physics through $1$-fermion density matrices may
not necessitate the intricate pure state generalized Pauli constraints. In this
article, we study the generalization of the $1$-body $N$-representability
problem to ensemble states with fixed spectrum $\mathbf{w}$, in order to
describe finite-temperature states and distinctive mixtures of excited states.
By employing ideas from convex analysis and combinatorics, we present a
comprehensive solution to the corresponding convex relaxation, thus
circumventing the complexity of generalized Pauli constraints. In particular,
we adapt and further develop tools such as symmetric polytopes, sweep
polytopes, and Gale order. For both fermions and bosons, generalized exclusion
principles are discovered, which we determine for any number of particles and
dimension of the $1$-particle Hilbert space. These exclusion principles are
expressed as linear inequalities satisfying hierarchies determined by the
non-zero entries of $\mathbf{w}$. The two families of polytopes resulting from
these inequalities are part of the new class of so-called lineup polytopes.
|
We investigate the effect of the Biermann battery during the Epoch of
Reionization (EoR) using cosmological Adaptive Mesh Refinement simulations
within the framework of the SPHINX project. We develop a novel numerical
technique to solve for the Biermann battery term in the Constrained Transport
method, preserving both the zero divergence of the magnetic field and the
absence of Biermann battery for isothermal flows. The structure-preserving
nature of our numerical method turns out to be very important to minimise
numerical errors during validation tests of the propagation of a Str\"omgren
sphere and of a Sedov blast wave. We then use this new method to model the
evolution of 2.5 and 5 co-moving Mpc cosmological boxes with a state-of-the-art
galaxy formation model within the RAMSES code. Contrary to previous findings,
we show that three different Biermann battery channels emerge: the first one is
associated with linear perturbations before the EoR, the second one is the
classical Biermann battery associated with reionization fronts during the EoR,
and the third one is associated with strong, supernova-driven outflows. While
the two former channels generate spontaneously volume-filling magnetic fields
with a strength on the order of or below $10^{-20}$ G, the latter, owing to the
higher plasma temperature and a marginally-resolved turbulent dynamo, reaches a
field strength as high as $10^{-18}$ G in the intergalactic medium around
massive haloes.
|
Several astrophysical scenarios have been proposed to explain the origin of
the population of binary black hole (BBH) mergers detected in gravitational
waves (GWs) by the LIGO/Virgo Collaboration. Among them, BBH mergers assembled
dynamically in young massive and open clusters have been shown to produce
merger rate densities consistent with LIGO/Virgo estimated rates. We use the
results of a suite of direct, high-precision $N$-body evolutionary models of
young massive and open clusters and build the population of BBH mergers, by
accounting for both a cosmologically-motivated model for the formation of young
massive and open clusters and the detection probability of LIGO/Virgo. We show
that our models produce dynamically-paired BBH mergers that are consistent
with the observed masses, mass ratios, effective spin parameters, and final
spins of the second Gravitational Wave Transient Catalog (GWTC-2).
|
We establish a sharp upper-bound for the first non-zero even eigenvalue
(corresponding to an even eigenfunction) of the Hilbert-Brunn-Minkowski
operator associated to a strongly convex $C^2$-smooth origin-symmetric convex
body $K$ in $\mathbb{R}^n$. Our isospectral inequality is centro-affine
invariant, attaining equality if and only if $K$ is a (centered) ellipsoid;
this is reminiscent of the (non affine invariant) classical
Szeg\"{o}--Weinberger isospectral inequality for the Neumann Laplacian. The new
upper-bound complements the conjectural lower-bound, which has been shown to be
equivalent to the log-Brunn-Minkowski inequality and is intimately related to
the uniqueness question in the even log-Minkowski problem. As applications, we
obtain new strong non-uniqueness results in the even $L^p$-Minkowski problem in
the subcritical range $-n < p < 0$, as well as new rigidity results for the
critical exponent $p=-n$ and supercritical regime $p < -n$. In particular, we
show that any $K$ as above which is not an ellipsoid is a witness to
non-uniqueness in the even $L^p$-Minkowski problem for all $p \in (-n,p_K)$ and
some $p_K \in (-n,0)$, and that $K$ can be chosen so that $p_K$ is arbitrarily
close to $0$.
|
We identify points of difference between Invariant Set Theory and standard
quantum theory, and evaluate whether these would lead to noticeable differences in
predictions between the two theories. From this evaluation, we design a number
of experiments, which, if undertaken, would allow us to investigate whether
standard quantum theory or invariant set theory best describes reality.
|
We use numerical simulations and linear stability analysis to study an active
nematic layer where the director is allowed to point out of the plane. Our
results highlight the difference between extensile and contractile systems.
Contractile stress suppresses the flows perpendicular to the layer and favours
in-plane orientations of the director. By contrast, extensile stress promotes
instabilities that can turn the director out of the plane, leaving behind a
population of distinct, in-plane regions that continually elongate and divide.
Our results suggest a mechanism for the initial stages of layer formation in
living systems, and explain the propensity of dislocation lines in
three-dimensional active nematics to be of twist-type in extensile or
wedge-type in contractile materials.
|
The scoring function, which measures the plausibility of triplets in
knowledge graphs (KGs), is the key to ensuring the strong performance of KG
embedding, and its design is an important problem in the literature.
Automated machine learning (AutoML) techniques have recently been introduced
into KG embedding to design task-aware scoring functions, which achieve state-of-the-art
performance in KG embedding. However, the effectiveness of searched scoring
functions is still not as good as desired. In this paper, observing that
existing scoring functions can exhibit distinct performance on different
semantic patterns, we are motivated to explore such semantics by searching
relation-aware scoring functions. But the relation-aware search requires a much
larger search space than the previous one. Hence, we propose to encode the
space as a supernet and propose an efficient alternative minimization algorithm
to search through the supernet in a one-shot manner. Finally, experimental
results on benchmark datasets demonstrate that the proposed method can
efficiently search relation-aware scoring functions, and achieve better
embedding performance than state-of-the-art methods.
|
We show that
1. for every $A\subseteq \{0, 1\}^n$, there exists a polytope $P\subseteq
\mathbb{R}^n$ with $P \cap \{0, 1\}^n = A$ and extension complexity
$O(2^{n/2})$,
2. there exists an $A\subseteq \{0, 1\}^n$ such that the extension complexity
of any $P$ with $P\cap \{0, 1\}^n = A$ must be at least
$2^{\frac{n}{3}(1-o(1))}$.
We also remark that the extension complexity of any 0/1-polytope in
$\mathbb{R}^n$ is at most $O(2^n/n)$ and pose the problem whether the upper
bound can be improved to $O(2^{cn})$, for $c<1$.
|
This work exploits commodity, ultra-low-cost, commercial radio frequency
identification (RFID) tags as the elements of a reconfigurable surface. Such
batteryless tags are powered and controlled by a software-defined radio (SDR) reader,
with properly modified software, so that a source-destination link is assisted,
operating at a different carrier frequency. In terms of theory, the optimal
gain and corresponding best element configuration are offered, with tractable
polynomial complexity (instead of exponential) in the number of elements. In terms
of practice, a concrete way to design and prototype a wireless, batteryless,
RF-powered, reconfigurable surface is offered and a proof-of-concept is
experimentally demonstrated. It is also found that even with perfect channel
estimation, the weak nature of backscattered links limits the performance
gains, even for a large number of surface elements. The impact of channel estimation
errors is also studied. Future extensions at various carrier frequencies could
be directly accommodated, through simple modifications in the antenna and
matching network of each RFID tag/surface element.
|
Particle tracks and differential energy loss measured in high pressure
gaseous detectors can be exploited for event identification in neutrinoless
double beta decay~($0\nu \beta \beta$) searches. We develop a new method based
on a Kalman filter in a Bayesian formalism (KFB) to reconstruct meandering tracks
of MeV-scale electrons. With simulation data, we compare the signal and
background discrimination power of the KFB method assuming different detector
granularities and energy resolutions. Typical background from $^{232}$Th and
$^{238}$U decay chains can be suppressed by an additional order of magnitude
relative to published results, approaching the background-free regime. For the
proposed PandaX-III experiment, the $0\nu \beta \beta$ search half-life
sensitivity at the 90\% confidence level would reach $2.7 \times 10^{26}$~yr
with 5-year live time, a factor of 2.7 improvement over the initial design
target.
|
With the evolution of quantum computing, researchers increasingly seek
solutions to NP-complete problems using quantum algorithms in order to gain an
asymptotic advantage. In this paper, we solve the $k$-coloring problem (an
NP-complete problem) using Grover's algorithm in a quantum system of any
dimension, i.e., any $d$-ary quantum system with $d \ge 2$, for the first time
to the best of our knowledge. A newly proposed comparator-based approach helps
to generalize the implementation of the $k$-coloring problem to quantum systems
of any dimension. To date, the $k$-coloring problem has been implemented only
in binary and ternary quantum systems; hence, we restrict ourselves to $d=2$
and $d=3$, that is, binary and ternary quantum systems, when comparing our
proposed work with the state-of-the-art techniques. The proposed approach
reduces the qubit cost compared to state-of-the-art binary quantum systems.
Further, with the help of the newly proposed ternary comparator, a substantial
reduction in the quantum gate count of the ternary oracle circuit for the
$k$-coloring problem is obtained relative to previous approaches. An end-to-end
automated framework is put forward for implementing the $k$-coloring problem
for any undirected and unweighted graph on any available near-term or Noisy
Intermediate-Scale Quantum (NISQ) device or multi-valued quantum simulator,
which helps in generalizing our approach.
|
The combination of ferromagnetism and semiconducting behavior offers an
avenue for realizing novel spintronics and spin-enhanced thermoelectrics. Here
we demonstrate the synthesis of doped and nanocomposite half Heusler
Fe$_{1+x}$VSb films by molecular beam epitaxy. For dilute excess Fe ($x <
0.1$), we observe a decrease in the Hall electron concentration and no
secondary phases in X-ray diffraction, consistent with Fe doping into FeVSb.
Magnetotransport measurements suggest weak ferromagnetism that onsets at a
temperature of $T_{c} \approx 5$ K. For higher Fe content ($x > 0.1$),
ferromagnetic Fe nanostructures precipitate from the semiconducting FeVSb
matrix. The Fe/FeVSb interfaces are epitaxial, as observed by transmission
electron microscopy and X-ray diffraction. Magnetotransport measurements
suggest proximity-induced magnetism in the FeVSb, from the Fe/FeVSb interfaces,
at an onset temperature of $T_{c} \approx 20$ K.
|
We extend the scheme of quantum teleportation by quantum walks introduced by
Wang et al. (2017). First, we introduce the mathematical definition of the
accomplishment of quantum teleportation by this extended scheme. Secondly, we
rigorously establish a useful necessary and sufficient condition for the quantum
teleportation to be accomplished. Our result classifies the parameters of the setting
for the accomplishment of the quantum teleportation.
|
We describe the application of convolutional neural network style transfer to
the problem of improved visualization of underdrawings and ghost-paintings in
fine art oil paintings. Such underdrawings and hidden paintings are typically
revealed by x-ray or infrared techniques which yield images that are grayscale,
and thus devoid of color and full style information. Past methods for inferring
color in underdrawings have been based on physical x-ray fluorescence spectral
imaging of pigments in ghost-paintings and are thus expensive, time consuming,
and require equipment not available in most conservation studios. Our
algorithmic methods do not need such expensive physical imaging devices. Our
proof-of-concept system, applied to works by Pablo Picasso and Leonardo, reveals
colors and designs that respect the natural segmentation in the ghost-painting.
We believe the computed images provide insight into the artist and associated
oeuvre not available by other means. Our results strongly suggest that future
applications based on larger corpora of paintings for training will display
color schemes and designs that even more closely resemble works of the artist.
For these reasons refinements to our methods should find wide use in art
conservation, connoisseurship, and art analysis.
|
The growth of a pebble-accreting planetary core stops when the core reaches
its \textit{isolation mass}, which is set by a pressure maximum emerging at the
outer edge of the gap opened in the gas. This pressure maximum traps the inward
drifting pebbles, stopping the accretion of solids onto the core. On the other
hand, a large amount of pebbles ($\sim 100M_\oplus$) must flow through the
orbit of the core before it reaches its isolation mass. The efficiency of pebble accretion
increases if the core grows in a dust trap of the protoplanetary disc. Dust
traps are observed as ring-like structures by ALMA, suggesting the existence of
global pressure maxima in discs that can also act as planet migration traps.
This work aims to reveal how large a planetary core can grow in such a pressure
maximum by pebble accretion. In our hydrodynamic simulations, pebbles are
treated as a pressureless fluid mutually coupled to the gas via drag force. Our
results show that in a global pressure maximum the pebble isolation mass for a
planetary core is significantly larger than in discs with power-law surface
density profile. An increased isolation mass shortens the formation time of
giant planets.
|
Parallelization is an algebraic operation that lifts problems to sequences in
a natural way. Given a sequence as an instance of the parallelized problem,
another sequence is a solution of this problem if every component is
instance-wise a solution of the original problem. In the Weihrauch lattice,
parallelization is a closure operator. Here we introduce a dual operation that
we call stashing and that also lifts problems to sequences, but such that only
some component has to be an instance-wise solution. In this case the solution
is stashed away in the sequence. This operation, if properly defined, induces
an interior operator in the Weihrauch lattice. We also study the action of the
monoid induced by stashing and parallelization on the Weihrauch lattice, and we
prove that it leads to at most five distinct degrees, which (in the maximal
case) are always organized in pentagons. We also introduce another closely
related interior operator in the Weihrauch lattice that replaces solutions of
problems by upper Turing cones that are strong enough to compute solutions. It
turns out that on parallelizable degrees this interior operator corresponds to
stashing. This implies that, somewhat surprisingly, all problems which are
simultaneously parallelizable and stashable have computability-theoretic
characterizations. Finally, we apply all these results in order to study the
recently introduced discontinuity problem, which appears as the bottom of a
number of natural stashing-parallelization pentagons. The discontinuity problem
is not only the stashing of several variants of the lesser limited principle of
omniscience, but it also parallelizes to the non-computability problem. This
supports the slogan that "non-computability is the parallelization of
discontinuity".
|
We consider close-packed tiling models of geometric objects -- a mixture of
hardcore dimers and plaquettes -- as a generalisation of the familiar dimer
models. Specifically, on an anisotropic cubic lattice, we demand that each site
be covered by either a dimer on a z-link or a plaquette in the x-y plane. The
space of such fully packed tilings has an extensive degeneracy. This maps onto
a fracton-type `higher-rank electrostatics', which can exhibit a
plaquette-dimer liquid and an ordered phase. We analyse this theory in detail,
using height representations and T-duality to demonstrate that the concomitant
phase transition occurs due to the proliferation of dipoles formed by defect
pairs. The resultant critical theory can be considered as a fracton version of
the Kosterlitz-Thouless transition. A significant new element is its UV-IR
mixing, where the low energy behavior of the liquid phase and the transition
out of it is dominated by local (short-wavelength) fluctuations, rendering the
critical phenomenon beyond the renormalization group paradigm.
|
A complete overview of the surrounding vehicle environment is important for
driver assistance systems and highly autonomous driving. Fusing results of
multiple sensor types like camera, radar and lidar is crucial for increasing
the robustness. The detection and classification of objects like cars, bicycles
or pedestrians has been analyzed in the past for many sensor types. Beyond
that, it is also helpful to refine these classes and distinguish for example
between different pedestrian types or activities. This task is usually
performed on camera data, though recent developments are based on radar
spectrograms. However, for most automotive radar systems, it is only possible
to obtain radar targets instead of the original spectrograms. This work
demonstrates that it is possible to estimate the body height of walking
pedestrians using 2D radar targets. Furthermore, different pedestrian motion
types are classified.
|
We explore the potential of twisted light as a tool to unveil many-body
effects in parabolically confined systems. According to the Generalized Kohn
Theorem, the dipole response of such a multi-particle system to a spatially
homogeneous probe is indistinguishable from the response of a system of
non-interacting particles. Twisted light, however, can excite internal degrees of
freedom, resulting in the appearance of new peaks in the even multipole
spectrum which are not present when the probe is a plane wave. We also
demonstrate the ability of the proposed twisted light probe to capture the
transition of interacting fermions into a strongly correlated regime in a
one-dimensional harmonic trap. We report that by suitable choice of the probe's
parameters, the transition into a strongly correlated phase manifests itself as
an approach and ultimate superposition of peaks in the second order quadrupole
response. These features, observed in exact calculations for two electrons, are
reproduced in adiabatic Time Dependent Density Functional Theory simulations.
|
In this paper, we develop an adaptive high-order surface finite element
method (FEM) incorporating the spectral deferred correction method for chain
contour discretization to solve polymeric self-consistent field equations on
general curved surfaces. The high-order surface FEM is obtained by the
high-order surface geometrical approximation and the high-order function space
approximation. Numerical results demonstrate that the precision order of these
methods is consistent with the theoretical prediction. In order to describe the
sharp interface in the strongly segregated system more accurately, an adaptive
FEM equipped with a new Log marking strategy is proposed. Compared with the
traditional strategy, the Log marking strategy can not only label the elements
that need to be refined or coarsened, but also give the refined or coarsened
times, which makes full use of the information from the a posteriori error
estimator and improves the efficiency of the adaptive algorithm. To demonstrate
the power of our approach, we investigate the self-assembled patterns of
diblock copolymers on several distinct curved surfaces. Numerical results
illustrate the efficiency of the proposed method, especially for strongly
segregated systems with economical discretization nodes.
|
It is well known that the performance of a thermophotovoltaic (TPV) device can be
enhanced if the vacuum gap between the thermal emitter and the TPV cell becomes
nanoscale due to the photon tunneling of evanescent waves. Having multiple
bandgaps, multi-junction TPV cells have received attention as an alternative
way to improve performance by selectively absorbing the spectral radiation
in each subcell. In this work, we comprehensively analyze the optimized
near-field tandem TPV system consisting of the thin-ITO-covered tungsten
emitter (at 1500 K) and GaInAsSb/InAs monolithic interconnected tandem TPV cell
(at 300 K). We develop a simulation model by coupling the near-field radiation
solved by fluctuational electrodynamics and the diffusion-recombination-based
charge transport equations. The optimal configuration of the near-field tandem
TPV system obtained by the genetic algorithm achieves the electrical power
output of 8.41 W/cm$^2$ and the conversion efficiency of 35.6\% at the vacuum
gap of 100 nm. We show that two resonance modes (i.e., surface plasmon
polaritons supported by the ITO-vacuum interface and the confined waveguide
mode in the tandem TPV cell) greatly contribute to the enhanced performance of
the optimized system. We also show that the near-field tandem TPV system is
superior to the single-cell-based near-field TPV system in both power output
and conversion efficiency through loss analysis. Interestingly, the
optimization performed with the objective function of the conversion efficiency
leads to the current matching condition for the tandem TPV system regardless of
the vacuum gap distances.
|
We identify and describe unique early time behavior of a quantum system
initially in a superposition, interacting with its environment. This behavior
-- the copycat process -- occurs after the system begins to decohere, but
before complete einselection. To illustrate this behavior, analytic solutions
for the system density matrix, its eigenvalues, and its eigenstates a short time
after system-environment interactions begin are provided. Features of the
solutions and their connection to observables are discussed, including
predictions for the continued evolution of the eigenstates towards
einselection, time dependence of spin expectation values, and an estimate of
the system's decoherence time. In particular we explore which aspects of the
early stages of decoherence exhibit quadratic evolution to leading order, and
which aspects exhibit more rapid linear behavior. Many features of our early
time perturbative solutions are agnostic of the spectrum of the environment. We
also extend our work beyond short time perturbation theory to compare with
numerical work from a companion paper.
|
A source sequence is to be guessed with some fidelity based on a rate-limited
description of an observed sequence with which it is correlated. The trade-off
between the description rate and the exponential growth rate of the least power
mean of the number of guesses is characterized.
|
We present an end-to-end, model-based deep reinforcement learning agent which
dynamically attends to relevant parts of its state during planning. The agent
uses a bottleneck mechanism over a set-based representation to force the number
of entities to which the agent attends at each planning step to be small. In
experiments, we investigate the bottleneck mechanism with several sets of
customized environments featuring different challenges. We consistently observe
that the design allows the planning agents to generalize their learned
task-solving abilities in compatible unseen environments by attending to the
relevant objects, leading to better out-of-distribution generalization
performance.
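
As a rough illustration of such a bottleneck, the sketch below keeps only the k highest-scoring entity embeddings at each planning step; this hard top-k selection is an assumption for illustration, not necessarily the paper's exact mechanism.

```python
import torch

def entity_bottleneck(entities, scores, k=4):
    # entities: (B, N, D) set-based state; scores: (B, N) learned relevance.
    # Keep the k most relevant entities so planning attends to a small set.
    idx = scores.topk(k, dim=1).indices                        # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, entities.size(-1))  # (B, k, D)
    return entities.gather(1, idx)
```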
|
Quantum state estimation for continuously monitored dynamical systems
involves assigning a quantum state to an individual system at some time,
conditioned on the results of continuous observations. The quality of the
estimation depends on how much observed information is used and on how
optimality is defined for the estimate. In this work, we consider problems of
quantum state estimation where some of the measurement records are not
available, but where the available records come from both before (past) and
after (future) the estimation time, enabling better estimates than is possible
using the past information alone. Past-future information for quantum systems
has been used in various ways in the literature, in particular, the quantum
state smoothing, the most-likely path, and the two-state vector and related
formalisms. To unify these seemingly unrelated approaches, we propose a
framework for partially-observed quantum systems with continuous monitoring,
wherein the first two existing formalisms can be accommodated, with some
generalization. The unifying framework is based on state estimation with
expected cost minimization, where the cost can be defined either in the space
of the unknown record or in the space of the unknown true state. Moreover, we
connect all three existing approaches conceptually by defining five new cost
functions, and thus new types of estimators, which bridge the gaps between
them. We illustrate the applicability of our method by calculating all seven
estimators we consider for the example of a driven two-level system
dissipatively coupled to bosonic baths. Our theory also allows connections to
classical state estimation, which create further conceptual links between our
quantum state estimators.
|
Presence of haze in images obscures underlying information, which is
undesirable in applications requiring accurate environment information. To
recover such an image, a dehazing algorithm should localize and recover
affected regions while ensuring consistency between recovered regions and their
neighbors. However, owing to the fixed receptive field of convolutional
kernels and the non-uniform haze distribution, ensuring consistency between
regions is difficult. In this paper, we utilize an encoder-decoder based network
architecture to perform the task of dehazing and integrate a spatially aware
channel attention mechanism to enhance features of interest beyond the
receptive field of traditional convolutional kernels. To ensure performance
consistency across a diverse range of haze densities, we utilize a greedy
localized data augmentation mechanism. Synthetic datasets are typically used to
provide a large number of paired training samples; however, the methodology
used to generate such samples introduces a gap between them and real images, as
it accounts only for uniform haze distributions and overlooks the more
realistic scenario of non-uniform haze, resulting in inferior dehazing
performance when evaluated on real datasets. Despite this, the abundance of
paired samples within synthetic datasets cannot be ignored. Thus, to ensure
performance consistency across diverse datasets, we train the proposed network
within an adversarial prior-guided framework that relies on a generated image,
along with its low- and high-frequency components, to determine whether the
properties of dehazed images match those of the ground truth. We perform
extensive experiments to validate the dehazing and domain-invariance
performance of the proposed framework across diverse domains and report
state-of-the-art (SoTA) results.
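
For concreteness, the sketch below shows one generic way a "spatially aware channel attention" block can be realized, pooling global spatial context to reweight feature channels. It is an SE-style stand-in under our own assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class SpatiallyAwareChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Map pooled spatial statistics to per-channel gates in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        b, c = x.shape[:2]
        avg = x.mean(dim=(2, 3))                 # global average spatial context
        mx = x.amax(dim=(2, 3))                  # global max spatial context
        gate = self.fc(torch.cat([avg, mx], dim=1))
        return x * gate.view(b, c, 1, 1)         # reweight feature channels
```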
|
In this paper, we extend constructions and results for the Taylor complex to
the generalized Taylor complex constructed by Herzog. We construct an explicit
DG-algebra structure on the generalized Taylor complex and extend a result of
Katth\"an on quotients of the Taylor complex by DG-ideals. We introduce a
generalization of the Scarf complex for families of monomial ideals, and show
that this complex is always a direct summand of the minimal free resolution of
the sum of these ideals. We also give an example of an ideal where the
generalized Scarf complex strictly contains the standard Scarf complex.
Moreover, we introduce the notion of quasitransverse monomial ideals, and prove
a list of results relating to Golodness, Koszul homology, and other homological
properties for such ideals.
|
Using the atomic carbon [CI](1$-$0) and [CI](2$-$1) emission maps observed
with the $Herschel\ Space\ Observatory$, and CO(1$-$0), HI, infrared and submm
maps from the literature, we estimate the [CI]-to-H$_2$ and CO-to-H$_2$ conversion
factors of $\alpha_\mathrm{[CI](1-0)}$, $\alpha_\mathrm{[CI](2-1)}$, and
$\alpha_\mathrm{CO}$ at a linear resolution $\sim1\,$kpc scale for six nearby
galaxies: M 51, M 83, NGC 3627, NGC 4736, NGC 5055, and NGC 6946. This is
perhaps the first effort, to our knowledge, in calibrating both [CI]-to-H$_2$
conversion factors across the spiral disks at spatially resolved $\sim1\,$kpc
scale, though such studies have been discussed globally in galaxies near and
far. In order to derive the conversion factors and achieve these calibrations,
we adopt three different dust-to-gas ratio (DGR) assumptions which scale
approximately with metallicity, taken from prior results. We find that for
all DGR assumptions, the $\alpha_\mathrm{[CI](1-0)}$,
$\alpha_\mathrm{[CI](2-1)}$, and $\alpha_\mathrm{CO}$ are mostly flat with
galactocentric radii, whereas both $\alpha_\mathrm{[CI](2-1)}$ and
$\alpha_\mathrm{CO}$ show a decrease in the inner regions of galaxies. The
central $\alpha_\mathrm{CO}$ and $\alpha_\mathrm{[CI](2-1)}$ values are on
average $\sim 2.2$ and $1.8$ times lower than their galaxy averages. The obtained
carbon abundances from different DGR assumptions show flat profiles with
galactocentric radii, and the average carbon abundance of the galaxies is
comparable to the usually adopted value of $3 \times 10^{-5}$. We find that
both metallicity and infrared luminosity correlate moderately with
$\alpha_\mathrm{CO}$, but only weakly with either $\alpha_\mathrm{[CI](1-0)}$
or the carbon abundance, and not at all with $\alpha_\mathrm{[CI](2-1)}$.
|
The robustness of an ecological network quantifies the resilience of the
ecosystem it represents to species loss. It corresponds to the proportion of
species that are disconnected from the rest of the network when extinctions
occur sequentially. Classically, the robustness is calculated for a given
network, from the simulation of a large number of extinction sequences. The
link between network structure and robustness remains an open question. Setting
a joint probabilistic model on the network and the extinction sequences allows
analysis of this relation.
Bipartite stochastic block models have proven their ability to model
bipartite networks, e.g. plant-pollinator networks: species are divided into
blocks and interaction probabilities are determined by the blocks of
membership. Analytical expressions of the expectation and variance of
robustness are obtained under this model, for different distributions of
primary extinction sequences. The impact of the network structure on the
robustness is analyzed through a set of properties and numerical illustrations.
The analysis of a collection of bipartite ecological networks allows us to
compare the empirical approach to our probabilistic approach, and illustrates
the relevance of the latter when it comes to computing the robustness of a
partially observed or incompletely sampled network.
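
The classical simulation-based estimate that the probabilistic approach replaces can be sketched as follows (a minimal Monte Carlo version; the exact robustness definition varies slightly across the literature):

```python
import numpy as np

def robustness(adj, n_sim=1000, seed=0):
    # adj: binary interaction matrix (rows: primary species, cols: secondary).
    # A secondary species is lost once all of its partners are extinct;
    # robustness is the mean area under the survival curve over random
    # primary-extinction sequences.
    rng = np.random.default_rng(seed)
    n_prim, n_sec = adj.shape
    areas = []
    for _ in range(n_sim):
        remaining = adj.astype(bool).copy()
        surviving = []
        for sp in rng.permutation(n_prim):
            remaining[sp, :] = False                        # primary extinction
            surviving.append(remaining.any(axis=0).mean())  # secondary survival
        areas.append(np.mean(surviving))
    return float(np.mean(areas))
```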
|
The understanding of turbulent flows is one of the biggest current challenges
in physics, as no first-principles theory exists to explain their observed
spatio-temporal intermittency. Turbulent flows may be regarded as an intricate
collection of mutually-interacting vortices. This picture becomes accurate in
quantum turbulence, which is built on tangles of discrete vortex filaments.
Here, we study the statistics of velocity circulation in quantum and classical
turbulence. We show that, in quantum flows, Kolmogorov turbulence emerges from
the correlation of vortex orientations, while deviations -- associated with
intermittency -- originate from their non-trivial spatial arrangement. We then
link the spatial distribution of vortices in quantum turbulence to the
coarse-grained energy dissipation in classical turbulence, enabling the
application of existing models of classical turbulence intermittency to the
quantum case. Our results provide a connection between the intermittency of
quantum and classical turbulence and initiate a promising path to a better
understanding of the latter.
|
This paper presents the development of vision-based robotic arm manipulator
control by applying Proportional Derivative-Pseudoinverse Jacobian (PD-PIJ)
kinematics and Denavit Hartenberg forward kinematics. The task of sorting
objects based on color is carried out to observe error propagation in the
implementation of the manipulator on a real system. The object images captured
by the digital camera were processed based on the HSV color model, and the
centroid coordinate of each detected object was calculated. These coordinates
are the end-effector position targets used to pick each object, which was then
placed in the correct position based on its color. Based on the end-effector
position target, the PD-PIJ inverse kinematics method was used to determine the
correct angle of each joint of the manipulator links. The angles found by
PD-PIJ are the input of the DH forward kinematics. The process was repeated
until the end effector reached the target. The experimental model and its
implementation on the actual manipulator were analyzed using the Probability
Density Function (PDF) and the Weibull Probability Distribution. The results
show that the manipulator navigation system performed well. The real
implementation of the color sorting task on the manipulator shows a success
rate of 94.46% for a Euclidean distance error of less than 1.2 cm.
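
A single PD-pseudoinverse-Jacobian update of the kind described can be sketched as below; the gains, function names, and discrete-derivative D term are illustrative assumptions.

```python
import numpy as np

def pd_pij_step(q, e_prev, jacobian, fk, target, dt=0.01, kp=2.0, kd=0.1):
    # q: joint angles; fk(q): end-effector position from DH forward
    # kinematics; jacobian(q): geometric Jacobian at q.
    e = target - fk(q)                    # task-space position error (P term)
    de = (e - e_prev) / dt                # error derivative (D term)
    v = kp * e + kd * de                  # PD control signal in task space
    dq = np.linalg.pinv(jacobian(q)) @ v  # pseudoinverse-Jacobian mapping
    return q + dq * dt, e                 # iterate until the target is reached
```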
|
We show that the massless integrable sector of the AdS_3 \times S^3 \times
T^4 superstring theory, which admits a non-trivial relativistic limit, provides
a setting where it is possible to determine exact minimal solutions to the form
factor axioms, in integral form, based on analyticity considerations, along the
same lines of ordinary relativistic integrable models. We construct in full
detail the formulas for the two- and three-particle case, and show the
similarities as well as the differences with respect to the off-shell Bethe
ansatz procedure of Babujian et al. We show that our expressions pass a series
of non-trivial consistency checks which are substantially more involved than in
the traditional case. We speculate on the problems involved in a possible
generalisation to an arbitrary number of particles, and on a possible
connection with the hexagon programme.
|
Video dimensions are continuously increasing to provide more realistic and
immersive experiences to global streaming and social media viewers. However,
increments in video parameters such as spatial resolution and frame rate are
inevitably associated with larger data volumes. Transmitting increasingly
voluminous videos through limited bandwidth networks in a perceptually optimal
way is a current challenge affecting billions of viewers. One recent practice
adopted by video service providers is space-time resolution adaptation in
conjunction with video compression. Consequently, it is important to understand
how different levels of space-time subsampling and compression affect the
perceptual quality of videos. Towards making progress in this direction, we
constructed a large new resource, called the ETRI-LIVE Space-Time Subsampled
Video Quality (ETRI-LIVE STSVQ) database, containing 437 videos generated by
applying various levels of combined space-time subsampling and video
compression on 15 diverse video contents. We also conducted a large-scale human
study on the new dataset, collecting about 15,000 subjective judgments of video
quality. We provide a rate-distortion analysis of the collected subjective
scores, enabling us to investigate the perceptual impact of space-time
subsampling at different bit rates. We also evaluated and compared the
performance of leading video quality models on the new database.
|
Topological protection of quantum correlations opens new horizons and
opportunities in quantum technologies. A variety of topological effects has
recently been observed in qubit networks. However, the experimental
identification of the topological phase still remains challenging, especially
in the entangled many-body case. Here, we propose an approach to independently
probe single- and two-photon topological invariants from the time evolution of
the two-photon state in a one-dimensional array of qubits. Extending the
bulk-boundary correspondence to the two-photon scenario, we show that an
appropriate choice of the initial state enables the retrieval of the
topological invariant for the different types of the two-photon states in the
interacting Su-Schrieffer-Heeger model. Our analysis of the Zak phase reveals
additional facets of topological protection in the case of collapse of bound
photon pairs.
|
This paper presents conditions for constructing permutation-invariant quantum
codes for deletion errors and provides a method for constructing them. Our
codes give the first example of quantum codes that can correct two or more
deletion errors. Also, our codes give the first example of quantum codes that
can correct both multiple-qubit errors and multiple-deletion errors. We also
discuss a generalization of the construction of our codes at the end.
|
We derive from the subleading contributions to the chiral three-nucleon
interaction [published in Phys.~Rev.~C77, 064004 (2008) and Phys.~Rev.~C84,
054001 (2011)] their first-order contributions to the energy per particle of
isospin-symmetric nuclear matter and pure neutron matter in an analytical way.
For the variety of short-range and long-range terms that constitute the
subleading chiral 3N-force the pertinent closed 3-ring, 2-ring, and 1-ring
diagrams are evaluated. While 3-ring diagrams vanish by a spin-trace and the
results for 2-ring diagrams can be given in terms of elementary functions of
the ratio of Fermi momentum to pion mass, one ends up in most cases for the
closed 1-ring diagrams with one-parameter integrals. The same treatment is
applied to the subsubleading chiral three-nucleon interactions as far as these
have been constructed up to now.
|
Anaphora and ellipses are two common phenomena in dialogues. Without
resolving referring expressions and information omission, dialogue systems may
fail to generate consistent and coherent responses. Traditionally, anaphora is
resolved by coreference resolution and ellipses by query rewrite. In this work,
we propose a novel joint learning framework of modeling coreference resolution
and query rewriting for complex, multi-turn dialogue understanding. Given an
ongoing dialogue between a user and a dialogue assistant, for the user query,
our joint learning model first predicts coreference links between the query and
the dialogue context, and then generates a self-contained rewritten user query.
To evaluate our model, we annotate a dialogue-based coreference resolution
dataset, MuDoCo, with rewritten queries. Results show that the performance of
query rewrite can be substantially boosted (+2.3% F1) with the aid of
coreference modeling. Furthermore, our joint model outperforms the
state-of-the-art coreference resolution model (+2% F1) on this dataset.
|
In this work we initiate the study of Position Based Quantum Cryptography
(PBQC) from the perspective of geometric functional analysis and its
connections with quantum games. The main question we are interested in asks for
the optimal amount of entanglement that a coalition of attackers has to share
in order to compromise the security of any PBQC protocol. Known upper bounds
for that quantity are exponential in the size of the quantum systems
manipulated in the honest implementation of the protocol. However, known lower
bounds are only linear.
In order to deepen the understanding of this question, here we propose a
Position Verification (PV) protocol and find lower bounds on the resources
needed to break it. The main idea behind the proof of these bounds is the
understanding of cheating strategies as vector valued assignments on the
Boolean hypercube. Then, the bounds follow from the understanding of some
geometric properties of particular Banach spaces, their type constants. Under
some regularity assumptions on the former assignment, these bounds lead to
exponential lower bounds on the quantum resources employed, clarifying the
question in this restricted case. Known attacks indeed satisfy the assumption
we make, although we do not know how universal this feature is. Furthermore, we
show that the understanding of the type properties of some more involved Banach
spaces would allow us to drop the assumptions and lead to unconditional lower
bounds on the resources used to attack our protocol. Unfortunately, we were not
able to estimate the relevant type constant. Despite that, we conjecture an
upper bound for this quantity and show some evidence supporting it. A positive
solution of the conjecture would lead to stronger security guarantees for the
proposed PV protocol providing a better understanding of the question asked
above.
|
The aura of mystery surrounding quantum physics makes it difficult to advance
quantum technologies. Demystification requires methodological techniques that
explain the basics of quantum technologies without metaphors and abstract
mathematics. The article provides an example of such an explanation for the
BB84 quantum key distribution protocol based on phase coding. This allows one
to become seamlessly acquainted with the real cryptographic installation QRate,
used at the WorldSkills competition in the "Quantum Technologies" competence.
|
We study the equilibrium and nonequilibrium electronic transport properties
of multiprobe topological systems using a combination of the
Landauer-B\"uttiker approach and nonequilibrium Green's functions techniques.
We obtain general expressions for both nonequilibrium and equilibrium local
electronic currents that, by suitable projections, allow one to compute charge,
spin, valley, and orbital currents. We show that external magnetic fields give
rise to equilibrium charge currents in mesoscopic systems and study the latter
in the quantum Hall regime. Likewise, a spin-orbit interaction leads to local
equilibrium spin currents, which we analyze in the quantum spin Hall regime.
|
$TESS$ photometric data of LS~Cam from sectors 19, 20 and 26 are analysed.
The obtained power spectra from sectors 19 and 20 show multiple periodicities:
orbital variations ($P_{orb} = 0.14237$ days), slightly fluctuating
superorbital variation ($ P_{so} \approx 4.03$ days) and permanent negative
superhump ($P_{-sh} = 0.1375$ days). In sector 26 an additional positive
superhump ($P_{+sh} = 0.155$ days) is present. Using relations from literature,
the mass ratio and the masses of the two components are estimated to be
$q = 0.24$, $M_1 = 1.26 M_\odot$, and $M_2 = 0.30 M_\odot$, respectively.
|
We investigate the attenuation law in $z\sim 6$ quasars by combining
cosmological zoom-in hydrodynamical simulations of quasar host galaxies, with
multi-frequency radiative transfer calculations. We consider several dust
models differing in terms of grain size distributions, dust mass and chemical
composition, and compare the resulting synthetic Spectral Energy Distributions
(SEDs) with data from bright, early quasars. We show that only dust models with
grain size distributions in which small grains ($a < 0.1~\mu$m, corresponding
to $\approx 60\%$ of the total dust mass) are selectively removed from the
dusty medium provide a good fit to the data. Removal can occur if small grains
are efficiently destroyed in quasar environments and/or early dust production
preferentially results in large grains. Attenuation curves for these models are
close to flat, and consistent with recent data; they correspond to an effective
dust-to-metal ratio $f_d \simeq 0.38$, i.e. close to the Milky Way value.
|
A wheel is a graph consisting of an induced cycle of length at least four and
a single additional vertex with at least three neighbours on the cycle. We
prove that no Burling graph contains an induced wheel. Burling graphs are
triangle-free and have arbitrarily large chromatic number, so this answers a
question of Trotignon and disproves a conjecture of Scott and Seymour.
|
We study the diffusion properties of the strongly interacting quark-gluon
plasma (sQGP) and evaluate the diffusion coefficient matrix for the baryon
($B$), strange ($S$) and electric ($Q$) charges - $\kappa_{qq'}$ ($q,q' = B, S,
Q$) and show their dependence on temperature $T$ and baryon chemical potential
$\mu_B$. The non-perturbative nature of the sQGP is evaluated within the
Dynamical Quasi-Particle Model (DQPM) which is matched to reproduce the
equation of state of the partonic matter above the deconfinement temperature
$T_c$ from lattice QCD. The calculation of diffusion coefficients is based on
two methods: i) the Chapman-Enskog method for the linearized Boltzmann
equation, which allows us to explore non-equilibrium corrections for the
phase-space distribution function in leading order of the Knudsen numbers as
well as ii) the relaxation time approximation (RTA). In this work we explore
the differences between the two methods. We find a good agreement with the
available lattice QCD data in case of the electric charge diffusion coefficient
(or electric conductivity) at vanishing baryon chemical potential as well as a
qualitative agreement with the recent predictions from the holographic approach
for all diagonal components of the diffusion coefficient matrix. The knowledge
of the diffusion coefficient matrix is also of special interest for more
accurate hydrodynamic simulations.
|
Training images with data transformations have been suggested as contrastive
examples to complement the testing set for generalization performance
evaluation of deep neural networks (DNNs). In this work, we propose a practical
framework ContRE (The word "contre" means "against" or "versus" in French.)
that uses Contrastive examples for DNN geneRalization performance Estimation.
Specifically, ContRE follows the assumption in contrastive learning that robust
DNN models with good generalization performance are capable of extracting a
consistent set of features and making consistent predictions from the same
image under varying data transformations. Incorporating with a set of
randomized strategies for well-designed data transformations over the training
set, ContRE adopts classification errors and Fisher ratios on the generated
contrastive examples to assess and analyze the generalization performance of
deep models in complement with a testing set. To show the effectiveness and the
efficiency of ContRE, extensive experiments have been done using various DNN
models on three open source benchmark datasets with thorough ablation studies
and applicability analyses. Our experimental results confirm that (1) the behaviors
of deep models on contrastive examples are strongly correlated with those on the
testing set, and (2) ContRE is a robust measure of generalization performance
that complements the testing set in various settings.
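
The core signal ContRE relies on can be summarized in a few lines: measure the classification error of a trained model on randomly transformed copies of its own training images. The function below is a hedged sketch with placeholder names, not the authors' released code.

```python
import torch

def contrastive_error(model, images, labels, transform, n_rounds=5):
    # For a robust, well-generalizing model, predictions should stay
    # consistent under randomized transformations of the training images;
    # higher error here is expected to track higher testing-set error.
    model.eval()
    errors = []
    with torch.no_grad():
        for _ in range(n_rounds):
            preds = model(transform(images)).argmax(dim=1)
            errors.append((preds != labels).float().mean().item())
    return sum(errors) / len(errors)
```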
|
Computational micromagnetics has become an essential tool in academia and
industry to support fundamental research and the design and development of
devices. Consequently, computational micromagnetics is widely used in the
community, and the fraction of time researchers spend performing computational
studies is growing. We focus on reducing this time by improving the interface
between the numerical simulation and the researcher. We have designed and
developed a human-centred research environment called Ubermag. With Ubermag,
scientists can control an existing micromagnetic simulation package, such as
OOMMF, from Jupyter notebooks. The complete simulation workflow, including
definition, execution, and data analysis of simulation runs, can be performed
within the same notebook environment. Numerical libraries, co-developed by the
computational and data science community, can immediately be used for
micromagnetic data analysis within this Python-based environment. By design, it
is possible to extend Ubermag to drive other micromagnetic packages from the
same environment.
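
A typical notebook workflow of the kind described looks roughly like the following, based on Ubermag's documented interface (argument names vary between versions, so treat this as a sketch):

```python
import micromagneticmodel as mm
import discretisedfield as df
import oommfc as oc  # drives OOMMF from Python/Jupyter

# Define the geometry and its discretization.
region = df.Region(p1=(0, 0, 0), p2=(100e-9, 50e-9, 10e-9))
mesh = df.Mesh(region=region, cell=(5e-9, 5e-9, 5e-9))

# Define the energy equation and the initial magnetization.
system = mm.System(name='example')
system.energy = mm.Exchange(A=13e-12) + mm.Demag() + mm.Zeeman(H=(1e6, 0, 0))
system.m = df.Field(mesh, dim=3, value=(0, 0, 1), norm=8e5)

# Relax the system; OOMMF runs behind the scenes and the results come
# back as Python objects for analysis in the same notebook.
oc.MinDriver().drive(system)
```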
|
The recent wave of detections of interstellar aromatic molecules has sparked
interest in the chemical behavior of aromatic molecules under astrophysical
conditions. In most cases, these detections have been made through chemically
related molecules, called proxies, that implicitly indicate the presence of a
parent molecule. In this study, we present the results of the theoretical
evaluation of the hydrogenation reactions of different aromatic molecules
(benzene, pyridine, pyrrole, furan, thiophene, silabenzene, and phosphorine).
The viability of these reactions allows us to evaluate the resilience of these
molecules to the most important reducing agent in the interstellar medium, the
hydrogen atom (H). All significant reactions are exothermic and most of them
present activation barriers, which are, in several cases, overcome by quantum
tunneling. Instanton reaction rate constants are provided between 50 K and 500
K. For the most efficiently formed radicals, a second hydrogenation step has
been studied. We propose that hydrogenated derivatives of furan, pyrrole, and
especially 2,3-dihydropyrrole, 2,5-dihydropyrrole, 2,3-dihydrofuran, and
2,5-dihydrofuran are promising candidates for future interstellar detections.
|
Colorectal cancer is a leading cause of cancer death for both men and women.
For this reason, histopathological characterization of colorectal polyps is the
pathologist's main instrument for inferring the actual cancer risk and guiding
further follow-up. Colorectal polyp diagnosis includes the evaluation of the
polyp type and, more importantly, the grade of dysplasia. This latter
evaluation represents a critical step for the clinical follow-up. The proposed
deep learning-based classification pipeline is based on a state-of-the-art
convolutional neural network, trained using proper countermeasures to tackle
the high resolution of whole-slide images (WSI) and a very imbalanced dataset.
The experimental results show that one can successfully classify adenoma
dysplasia grade with 70% accuracy, which is in line with the pathologists'
concordance.
|
Coherent configurations are a generalization of association schemes. In this
paper, we introduce the concept of $Q$-polynomial coherent configurations and
study the relationship among intersection numbers, Krein numbers, and
eigenmatrices. Examples of $Q$-polynomial coherent configurations are provided,
arising from Delsarte designs in $Q$-polynomial schemes and from spherical
designs.
|
Robots assisting us in factories or homes must learn to make use of objects
as tools to perform tasks, e.g., a tray for carrying objects. We consider the
problem of learning commonsense knowledge of when a tool may be useful and how
its use may be composed with other tools to accomplish a high-level task
instructed by a human. We introduce a novel neural model, termed TANGO, for
predicting task-specific tool interactions, trained using demonstrations from
human teachers instructing a virtual robot. TANGO encodes the world state,
comprising objects and symbolic relationships between them, using a graph
neural network. The model learns to attend over the scene using knowledge of
the goal and the action history, finally decoding the symbolic action to
execute. Crucially, we address generalization to unseen environments where some
known tools are missing, but alternative unseen tools are present. We show that
by augmenting the representation of the environment with pre-trained embeddings
derived from a knowledge-base, the model can generalize effectively to novel
environments. Experimental results show a 60.5-78.9% absolute improvement over
the baseline in predicting successful symbolic plans in unseen settings for a
simulated mobile manipulator.
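The attention-and-decode step can be caricatured in a few lines (a toy numpy sketch with made-up shapes, not the TANGO architecture itself):

```python
import numpy as np

def attend_and_decode(node_emb, goal_emb, history_emb, W_q, W_k, action_emb):
    """Goal-conditioned attention over graph-encoded object nodes,
    followed by scoring of symbolic actions."""
    query = W_q @ np.concatenate([goal_emb, history_emb])  # task context
    scores = (node_emb @ W_k.T) @ query                    # node relevance
    att = np.exp(scores - scores.max()); att /= att.sum()  # softmax weights
    scene = att @ node_emb                                 # attended summary
    return int(np.argmax(action_emb @ scene))              # symbolic action id
```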
|
For the comparison of inequality and welfare in multiple attributes the use
of generalized Gini indices is proposed. Spectral social evaluation functions
are used in the multivariate setting, and Gini dominance orderings are
introduced that are uniform in attribute weights. Classes of spectral
evaluators are considered that are ordered by their aversion to inequality.
Then a set-valued representative endowment is defined that characterizes
$d$-dimensioned welfare. It consists of all points above the lower border of a
convex compact in $R^d$, while the pointwise ordering of such endowments
corresponds to uniform Gini dominance. An application is given to the welfare
of 28 European countries. Properties of uniform Gini dominance are derived,
including relations to other orderings of $d$-variate distributions such as
convex and dependence orderings. The multi-dimensioned representative endowment
can be efficiently calculated from data; in a sampling context, it consistently
estimates its population version.
|
The principle of entropy increase is not only the basis of statistical
mechanics, but also closely related to the irreversibility of time, the origin
of life, chaos, and turbulence. In this paper, we first discuss the
dynamical-systems definition of entropy from the perspective of symbols and
partitions of information, and propose entropy transfer characteristics based
on set partitions. By introducing the hypothesis of limited measurement
accuracy into continuous dynamical systems, two necessary mechanisms for the
formation of chaos are obtained: the transfer of entropy from small scales to
macroscopic scales (i.e., the increase of local entropy) and the dissipation of
macroscopic information. The relationship between the local entropy increase
and the Lyapunov exponents of the dynamical system is established. The entropy
increase and anomalous dissipation mechanisms in physical systems are then
analyzed and discussed.
|
Adaptive mirrors based on voice-coil technology have force actuators with an
internal metrology that closes a local loop controlling the mirror shape in
position. When actuators must be disabled or slaved, the control matrices have
to be re-computed. This report describes the algorithms to re-compute the
relevant matrices for controlling the mirror without the need for
recalibration. This
is related in particular to MMT, LBT, Magellan, VLT, ELT and GMT adaptive
mirrors that use the voice-coil technology. The technique is successfully used
in practice with LBT and VLT-UT4 adaptive secondary mirror units.
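The generic operation behind such re-computation can be illustrated as dropping the disabled actuators' columns from the interaction matrix and rebuilding the reconstructor by pseudo-inverse (a simplified numpy sketch; the report's actual algorithms, which also handle slaved actuators, are more involved):

```python
import numpy as np

def recompute_reconstructor(interaction, disabled):
    """interaction: (n_sensors x n_actuators) calibrated matrix.
    Remove disabled actuators and recompute the command matrix
    without a new calibration."""
    keep = [j for j in range(interaction.shape[1]) if j not in set(disabled)]
    return np.linalg.pinv(interaction[:, keep]), keep
```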
|
Data quantity and quality are crucial factors for data-driven learning
methods. In some target problem domains, there are not many data samples
available, which could significantly hinder the learning process. While data
from similar domains may be leveraged to help through domain adaptation,
obtaining high-quality labeled data for those source domains themselves could
be difficult or costly. To address such challenges of data insufficiency for
classification problems in a target domain, we propose a weak adaptation
learning (WAL) approach that leverages unlabeled data from a similar source
domain, a low-cost weak annotator that produces labels based on task-specific
heuristics, labeling rules, or other methods (albeit with inaccuracy), and a
small amount of labeled data in the target domain. Our approach first conducts
a theoretical analysis on the error bound of the trained classifier with
respect to the data quantity and the performance of the weak annotator, and
then introduces a multi-stage weak adaptation learning method to learn an
accurate classifier by lowering the error bound. Our experiments demonstrate
the effectiveness of our approach in learning an accurate classifier with
limited labeled data in the target domain and unlabeled data in the source
domain.
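In its simplest form, the multi-stage idea reduces to pretraining on the weak annotator's labels and then adapting on the scarce target labels (a bare-bones PyTorch sketch under that simplification; the paper's actual method is guided by its error-bound analysis):

```python
import torch
import torch.nn as nn

def weak_adaptation(model, src_x, weak_y, tgt_x, tgt_y, epochs=20):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):   # stage 1: learn from the weak annotator's labels
        opt.zero_grad(); ce(model(src_x), weak_y).backward(); opt.step()
    for _ in range(epochs):   # stage 2: adapt on the small labeled target set
        opt.zero_grad(); ce(model(tgt_x), tgt_y).backward(); opt.step()
    return model
```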
|
Physics-Informed Neural Networks (PINNs) are neural networks that encode the
governing equations of a problem, such as Partial Differential Equations
(PDEs), as part of the network itself. PINNs have emerged as a new essential
tool to solve
various challenging problems, including computing linear systems arising from
PDEs, a task for which several traditional methods exist. In this work, we
focus first on evaluating the potential of PINNs as linear solvers in the case
of the Poisson equation, an omnipresent equation in scientific computing. We
characterize PINN linear solvers in terms of accuracy and performance under
different network configurations (depth, activation functions, input data set
distribution). We highlight the critical role of transfer learning. Our results
show that low-frequency components of the solution converge quickly as an
effect of the F-principle. In contrast, an accurate solution of the high
frequencies requires an exceedingly long time. To address this limitation, we
propose integrating PINNs into traditional linear solvers. We show that this
integration leads to the development of new solvers whose performance is on par
with other high-performance solvers, such as PETSc conjugate gradient linear
solvers, in terms of performance and accuracy. Overall, while the accuracy and
computational performance are still a limiting factor for the direct use of
PINN linear solvers, hybrid strategies combining old traditional linear solver
approaches with new emerging deep-learning techniques are among the most
promising methods for developing a new class of linear solvers.
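A minimal PINN solver for a 1D Poisson problem looks like this (an illustrative PyTorch sketch, not the paper's code; here $u''(x) = f(x)$ on $[0,1]$ with homogeneous Dirichlet conditions):

```python
import torch
import torch.nn as nn

# Manufactured problem: f(x) = -pi^2 sin(pi x), exact solution u = sin(pi x)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)      # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -(torch.pi ** 2) * torch.sin(torch.pi * x)
    pde_loss = ((d2u - f) ** 2).mean()              # PDE residual
    bc_loss = (net(torch.tensor([[0.0], [1.0]])) ** 2).mean()
    loss = pde_loss + bc_loss
    opt.zero_grad(); loss.backward(); opt.step()
```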
|
In an article published in 1987 in Combinatorica \cite{MR918397}, Frieze and
Jackson established a lower bound on the length of the longest induced path
(and cycle) in a sparse random graph. Their bound is obtained through a rough
analysis of a greedy algorithm. In the present work, we provide a sharp
asymptotic for the length of the induced path constructed by their algorithm.
To this end, we introduce an alternative algorithm that builds the same induced
path and whose analysis falls into the framework of a previous work by the
authors on depth-first exploration of a configuration model \cite{EFMN}. We
also analyze an extension of our algorithm that mixes depth-first and
breadth-first explorations and generates $m$-induced paths.
|
Extended depth of focus (EDOF) optics can enable lower complexity optical
imaging systems when compared to active focusing solutions. With existing EDOF
optics, however, it is difficult to achieve high resolution and high collection
efficiency simultaneously. The subwavelength pitch of meta-optics enables
engineering very steep phase gradients, and thus meta-optics can achieve both a
large physical aperture and high numerical aperture. Here, we demonstrate a
fast (f/1.75) EDOF meta-optic operating at visible wavelengths, with an
aperture of 2 mm and focal range from 3.5 mm to 14.5 mm (286 diopters to 69
diopters), which is a 250-fold elongation of the depth of focus relative to a
standard lens. Depth-independent performance is shown by imaging at a range of
finite conjugates, with a minimum spatial resolution of ~9.84 $\mu$m (50.8
cycles/mm). We also demonstrate operation of a directly integrated EDOF
meta-optic camera module to evaluate imaging at multiple object distances, a
functionality which would otherwise require a varifocal lens.
|
Non-unitary neutrino mixing in the light neutrino sector is a direct
consequence of type-I seesaw neutrino mass models. In these models, light
neutrino mixing is described by a sub-matrix of the full lepton mixing matrix
and, therefore, it is not unitary in general. In consequence, neutrino oscillations
are characterized by additional parameters, including new sources of CP
violation. Here we perform a combined analysis of short and long-baseline
neutrino oscillation data in this extended mixing scenario. We find no
significant deviation from unitary mixing, and we use the complementary data
sets to constrain the non-unitarity parameters. We also find that
the T2K and NOvA tension in the determination of the Dirac CP-phase is not
alleviated in the context of non-unitary neutrino mixing.
|
Reshaping accurate and realistic 3D human bodies from anthropometric
parameters (e.g., height, chest size, etc.) poses a fundamental challenge for
person identification, online shopping and virtual reality. Existing approaches
for creating such 3D shapes often rely on complex measurements by range cameras
or high-end scanners, which either involve high costs or result in low quality.
Moreover, such equipment limits existing approaches in real applications,
because it is not easily accessible to common users. In this paper, we have
designed a 3D human body
reshaping system by proposing a novel feature-selection-based local mapping
technique, which enables automatic anthropometric parameter modeling for each
body facet. Note that the proposed approach can leverage limited anthropometric
parameters (i.e., 3-5 measurements) as input, which avoids complex measurement,
and thus a more user-friendly experience can be achieved in real scenarios.
Specifically, the proposed reshaping model consists of three steps. First, we
calculate full-body anthropometric parameters from limited user inputs by
imputation technique, and thus essential anthropometric parameters for 3D body
reshaping can be obtained. Second, we select the most relevant anthropometric
parameters for each facet by adopting relevance masks, which are learned
offline by the proposed local mapping technique. Third, we generate the 3D body
meshes by mapping matrices, which are learned by linear regression from the
selected parameters to mesh-based body representation. We conduct experiments
by anthropometric evaluation and a user study with 68 volunteers. Experiments
show the superior results of the proposed system in terms of mean
reconstruction error against the state-of-the-art approaches.
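The three steps can be sketched as follows (a toy numpy version under strong assumptions: Gaussian imputation, boolean relevance masks, and one linear map per facet; all names are illustrative):

```python
import numpy as np

def reshape_body(user_params, mean_params, cov, masks, mappings):
    # 1. impute the full anthropometric vector from the known entries
    #    (conditional mean of a fitted Gaussian)
    known = ~np.isnan(user_params)
    resid = (user_params - mean_params)[known]
    full = mean_params + cov[:, known] @ np.linalg.solve(
        cov[np.ix_(known, known)], resid)
    full[known] = user_params[known]
    # 2.-3. per facet: select the relevant parameters via the learned
    #    relevance mask, then map linearly to mesh coordinates
    return [M @ full[mask] for mask, M in zip(masks, mappings)]
```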
|
Recent work in fair machine learning has proposed dozens of technical
definitions of algorithmic fairness and methods for enforcing these
definitions. However, we still lack an understanding of how to develop machine
learning systems with fairness criteria that reflect relevant stakeholders'
nuanced viewpoints in real-world contexts. To address this gap, we propose a
framework for eliciting stakeholders' subjective fairness notions. Combining a
user interface that allows stakeholders to examine the data and the algorithm's
predictions with an interview protocol to probe stakeholders' thoughts while
they are interacting with the interface, we can identify stakeholders' fairness
beliefs and principles. We conduct a user study to evaluate our framework in
the setting of a child maltreatment predictive system. Our evaluations show
that the framework allows stakeholders to comprehensively convey their fairness
viewpoints. We also discuss how our results can inform the design of predictive
systems.
|
We study the identifiability of the interaction kernels in mean-field
equations for interacting particle systems. The key is to identify function
spaces on which a probabilistic loss functional has a unique minimizer. We
prove that identifiability holds on any subspace of two reproducing kernel
Hilbert spaces (RKHS), whose reproducing kernels are intrinsic to the system
and are data-adaptive. Furthermore, identifiability holds on two ambient $L^2$
spaces if and only if the integral operators associated with the reproducing
kernels are strictly positive. Thus, the inverse problem is ill-posed in
general. We also discuss the implications of identifiability in computational
practice.
|
This paper describes the AISpeech-SJTU system for the accent identification
track of the Interspeech-2020 Accented English Speech Recognition Challenge. In
this challenge track, only 160-hour accented English data collected from 8
countries and the auxiliary Librispeech dataset are provided for training. To
build an accurate and robust accent identification system, we explore the whole
system pipeline in detail. First, we introduce the ASR based phone
posteriorgram (PPG) feature to accent identification and verify its efficacy.
Then, a novel TTS based approach is carefully designed to augment the very
limited accent training data for the first time. Finally, we propose the test
time augmentation and embedding fusion schemes to further improve the system
performance. Our final system ranked first in the challenge, outperforming
all the other participants by a large margin. The submitted system achieves
83.63\% average accuracy on the challenge evaluation data, ahead of the others
by more than 10\% in absolute terms.
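Test-time augmentation of this kind amounts to fusing posteriors over perturbed copies of each utterance (a generic sketch, not the submitted system; `predict_proba` and the augmentation functions are placeholders):

```python
import numpy as np

def tta_predict(predict_proba, utterance, augment_fns):
    """Average class posteriors over augmented copies of the input
    and the original, then take the fused accent decision."""
    probs = [predict_proba(aug(utterance)) for aug in augment_fns]
    probs.append(predict_proba(utterance))  # include the original
    return int(np.argmax(np.mean(probs, axis=0)))
```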
|
Signal predictions for galactic dark matter (DM) searches often rely on
assumptions on the DM phase-space distribution function (DF) in halos. This
applies to both particle (e.g. $p$-wave suppressed or Sommerfeld-enhanced
annihilation, scattering off atoms, etc.) and macroscopic DM candidates (e.g.
microlensing of primordial black holes). As experiments and observations
improve in precision, better assessing theoretical uncertainties becomes
pressing in the prospect of deriving reliable constraints on DM candidates or
trustworthy hints for detection. Most reliable predictions of DFs in halos are
based on solving the steady-state collisionless Boltzmann equation (e.g.
Eddington-like inversions, action-angle methods, etc.) consistently with
observational constraints. One can do so starting from maximal symmetries and a
minimal set of degrees of freedom, and then increasing complexity. Key issues
are then whether adding complexity, which is computationally costly, improves
predictions, and if so where to stop. Clues can be obtained by making
predictions for zoomed-in hydrodynamical cosmological simulations in which one
can access the true (coarse-grained) phase-space information. Here, we test an
axisymmetric extension of the Eddington inversion to predict the full DM DF
from its density profile and the total gravitational potential of the system.
This permits going beyond spherical symmetry, and is a priori well suited for
spiral galaxies. We show that axisymmetry does not necessarily improve over
spherical symmetry because the (observationally unconstrained) angular momentum
of the DM halo is not generically aligned with the baryonic one. Theoretical
errors are similar to those of the Eddington inversion though, at the 10-20%
level for velocity-dependent predictions related to particle DM searches in
spiral galaxies. We extensively describe the approach and comment on the
results.
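For reference, the spherical, isotropic baseline extended here is the classical Eddington inversion, which recovers the DF from the density $\rho$ and the relative potential $\Psi$ (a textbook formula, not specific to this paper):

$$ f(\mathcal{E}) = \frac{1}{\sqrt{8}\,\pi^2}\left[\int_0^{\mathcal{E}}\frac{\mathrm{d}^2\rho}{\mathrm{d}\Psi^2}\,\frac{\mathrm{d}\Psi}{\sqrt{\mathcal{E}-\Psi}} + \frac{1}{\sqrt{\mathcal{E}}}\left(\frac{\mathrm{d}\rho}{\mathrm{d}\Psi}\right)_{\Psi=0}\right], $$

with $\mathcal{E}$ the relative energy.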
|
A system of synchronized radio telescopes is utilized to search for
hypothetical wide bandwidth interstellar communication signals. Transmitted
signals are hypothesized to have characteristics that enable high channel
capacity and minimally low energy per information bit, while containing
energy-efficient signal elements that are readily discoverable, distinct from
random noise. A hypothesized transmitter signal is described. Signal reception
and discovery processes are detailed. Observations using individual and
multiple synchronized radio telescopes, during 2017 - 2021, are described.
Conclusions and further work are suggested.
|
We compare the solutions of two one-dimensional Poisson problems on an
interval with Robin boundary conditions, one with given data, and one where the
data has been symmetrized. When the Robin parameter is positive and the
symmetrization is symmetric decreasing rearrangement, we prove that the
solution to the symmetrized problem has larger increasing convex means. When
the Robin parameter equals zero (so that we have Neumann boundary conditions)
and the symmetrization is decreasing rearrangement, we similarly show that the
solution to the symmetrized problem has larger convex means.
|
In this paper, we connect two types of representations of a permutation
$\sigma$ of the finite field $\F_q$. One type is algebraic, in which the
permutation is represented as the composition of degree-one polynomials and $k$
copies of $x^{q-2}$, for some prescribed value of $k$. The other type is
combinatorial, in which the permutation is represented as the composition of a
degree-one rational function followed by the product of $k$ $2$-cycles on
$\bP^1(\F_q):=\F_q\cup\{\infty\}$, where each $2$-cycle moves $\infty$. We show
that, after modding out by obvious equivalences amongst the algebraic
representations, then for each $k$ there is a bijection between the algebraic
representations of $\sigma$ and the combinatorial representations of $\sigma$.
We also prove analogous results for permutations of $\bP^1(\F_q)$. One
consequence is a new characterization of the notion of Carlitz rank of a
permutation on $\F_q$, which we use elsewhere to provide an explicit formula
for the Carlitz rank. Another consequence involves a classical theorem of
Carlitz, which says that if $q>2$ then the group of permutations of $\F_q$ is
generated by the permutations induced by degree-one polynomials and $x^{q-2}$.
Our bijection provides a new perspective from which the two proofs of this
result in the literature can be seen to arise naturally, without requiring the
clever tricks that previously appeared to be needed in order to discover those
proofs.
|
In a recent paper, Jean Gaudart and colleagues studied the factors associated
with the spatial heterogeneity of the first wave of COVID-19 in France. We make
some critical comments on their work which may be useful for future, similar
studies.
|
Online community moderators are on the front lines of combating problems like
hate speech and harassment, but new modes of interaction can introduce
unexpected challenges. In this paper, we consider moderation practices and
challenges in the context of real-time, voice-based communication through 25
in-depth interviews with moderators on Discord. Our findings suggest that the
affordances of voice-based online communities change what it means to moderate
content and interactions. Not only are there new ways to break rules that
moderators of text-based communities find unfamiliar, such as disruptive noise
and voice raiding, but acquiring evidence of rule-breaking behaviors is also
more difficult due to the ephemerality of real-time voice. While moderators
have developed new moderation strategies, these strategies are limited and
often based on hearsay and first impressions, resulting in problems ranging
from unsuccessful moderation to false accusations. Based on these findings, we
discuss how voice communication complicates current understandings and
assumptions about moderation, and outline ways that platform designers and
administrators can design technology to facilitate moderation.
|
We detect Lyman $\alpha$ absorption from the escaping atmosphere of HD
63433c, an $R=2.67 R_\oplus$, $P=20.5$ d mini Neptune orbiting a young (440 Myr)
solar analogue in the Ursa Major Moving Group. Using HST/STIS, we measure a
transit depth of $11.1 \pm 1.5$% in the blue wing and $8 \pm 3$% in the red.
This signal is unlikely to be due to stellar variability, but should be
confirmed by an upcoming second visit with HST. We do not detect Lyman $\alpha$
absorption from the inner planet, a smaller $R=2.15 R_\oplus$ mini Neptune on a
7.1 d orbit. We use Keck/NIRSPEC to place an upper limit of 0.5% on helium
absorption for both planets. We measure the host star's X-ray spectrum and FUV
flux with XMM-Newton, and model the outflow from both planets using a 3D
hydrodynamic code. This model provides a reasonable match to the light curve in
the blue wing of the Lyman $\alpha$ line and the helium non-detection for
planet c, although it does not explain the tentative red wing absorption or
reproduce the excess absorption spectrum in detail. Its predictions of strong
Lyman $\alpha$ and helium absorption from b are ruled out by the observations.
This model predicts a much shorter mass loss timescale for planet b, suggesting
that b and c are fundamentally different: while the latter still retains its
hydrogen/helium envelope, the former has likely lost its primordial atmosphere.
|
For a group $G$ that is a limit group over Droms RAAGs such that $G$ has
trivial center, we show that $\Sigma^1(G) = \emptyset = \Sigma^1(G,
\mathbb{Q})$. For a group $H$ that is a finitely presented residually Droms
RAAG we calculate $\Sigma^1(H)$ and $\Sigma^2(H)_{dis}$. In addition, we obtain
a necessary condition for $[\chi]$ to belong to $\Sigma^n(H)$.
|
In this paper, we study a general class of causal processes with exogenous
covariates, including many classical processes such as the ARMA-GARCH, APARCH,
ARMAX, GARCH-X and APARCH-X processes.
Under some Lipschitz-type conditions, the existence of a $\tau$-weakly
dependent strictly stationary and ergodic solution is established.
We provide conditions for the strong consistency and derive the asymptotic
distribution of the quasi-maximum likelihood estimator (QMLE), both when the
true parameter is an interior point of the parameter space and when it
belongs to the boundary.
A Wald-type significance test for the parameters is developed. This test is
quite general and includes tests of nullity of the parameter's components,
which in particular allows us to assess the relevance of the exogenous
covariates.
Relying on the QMLE of the model, we also propose a penalized criterion to
address the problem of the model selection for this class. The weak and the
strong consistency of the procedure are established.
Finally, Monte Carlo simulations are conducted to numerically illustrate the
main results.
|
Dissipation of electromagnetic energy through absorption is a fundamental
process that underpins phenomena ranging from photovoltaics to photography,
analytical spectroscopy, photosynthesis, and human vision. Absorption is also a
dynamic process that depends on the duration of the optical illumination. Here
we report on the resonant plasmonic absorption of a nanostructured metamaterial
and the non-resonant absorption of an unstructured gold film at different
optical pulse durations. By examining the absorption in travelling and standing
waves, we observe a plasmonic relaxation time of 11 fs as the characteristic
transition time. The metamaterial acts as a beam-splitter with low absorption
for shorter pulses, while acting as a good absorber for longer pulses. The transient
nature of the absorption puts a frequency limit of ~90 THz on the bandwidth of
coherently-controlled, all-optical switching devices, which is still a thousand
times faster than other leading switching technologies.
|
The paper describes our proposed methodology for the seven basic expression
classification track of Affective Behavior Analysis in-the-wild (ABAW)
Competition 2021. In this task, facial expression recognition (FER) methods aim
to classify the correct expression category from a diverse background, but
there are several challenges. First, to adapt the model to in-the-wild
scenarios, we use the knowledge from pre-trained large-scale face recognition
data. Second, we propose an ensemble model with a convolutional neural network
(CNN), a CNN-recurrent neural network (CNN-RNN), and a CNN-Transformer, to
incorporate both spatial and temporal information. Our ensemble model achieved
an F1 score of 0.4133, an accuracy of 0.6216, and a final metric of 0.4821 on
the validation set.
|
We introduce the problem of \emph{timely} private information retrieval (PIR)
from $N$ non-colluding and replicated servers. In this problem, a user desires
to retrieve a message out of $M$ messages from the servers, whose contents are
continuously updating. The retrieval process should be executed in a timely
manner such that no information is leaked about the identity of the message. To
assess the timeliness, we use the \emph{age of information} (AoI) metric.
Interestingly, the timely PIR problem reduces to an AoI minimization subject to
PIR constraints under \emph{asymmetric traffic}. We explicitly characterize the
optimal tradeoff between the PIR rate and the AoI metric (peak AoI or average
AoI) for the case of $N=2$, $M=3$. Further, we provide some structural insights
on the general problem with arbitrary $N$, $M$.
|
Conflict-avoiding codes (CACs) have been used in the multiple-access collision
channel without feedback. The size of a CAC is the number of potential users
that can be supported in the system. A code with maximum size is called
optimal. The use of an optimal CAC enables the largest possible number of
asynchronous users to transmit information efficiently and reliably. In this
paper, a new upper bound on the maximum size of arbitrary equi-difference CAC
is presented. Furthermore, three optimal constructions of equi-difference CACs
are also given. One is a generalized construction for prime length $L=p$ and
the other two are for two-prime length $L=pq$.
|
We compute the electromagnetic fields generated in relativistic heavy-ion
collisions using the iEBE-VISHNU framework. We calculate the incremental drift
velocity from the four possible sources of electromagnetic force (Coulomb,
Lorentz, Faraday, and plasma-based) on the created particles. The effect of
this external electromagnetic field on the flow harmonics of particles is
investigated, and we find that the flow harmonics are suppressed or enhanced
in a non-uniform fashion throughout the evolution; more precisely, a maximum
increase of close to three percent in elliptic flow is observed. We also find
that mass is a more dominant factor than charge for the change in flow
harmonics due to the created electromagnetic field. On top of that, the
magnetic field perpendicular to the reaction plane is found to be sizable,
while the different radial electric forces are found to cancel each other out.
Overall, the inclusion of the electromagnetic field thus modifies the particle
flow non-uniformly throughout the evolution.
|
Large-scale deep neural networks (DNNs) such as convolutional neural networks
(CNNs) have achieved impressive performance in audio classification for their
powerful capacity and strong generalization ability. However, when training a
DNN model on low-resource tasks, it is usually prone to overfitting the small
data and learning too much redundant information. To address this issue, we
propose to use variational information bottleneck (VIB) to mitigate overfitting
and suppress irrelevant information. In this work, we conduct experiments on a
4-layer CNN. However, the VIB framework is ready-to-use and could be easily
utilized with many other state-of-the-art network architectures. Evaluation on
a few audio datasets shows that our approach significantly outperforms baseline
methods, yielding more than 5.0% improvement in terms of classification
accuracy in some low-resource settings.
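A minimal PyTorch sketch of a VIB head of the kind described (illustrative, not the paper's exact architecture; `beta` weights the bottleneck term):

```python
import torch
import torch.nn as nn

class VIBHead(nn.Module):
    """Variational information bottleneck: encode features into a
    stochastic code z and penalize its KL divergence to N(0, I)."""
    def __init__(self, in_dim, z_dim, n_classes):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)
        self.cls = nn.Linear(z_dim, n_classes)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(1).mean()
        return self.cls(z), kl  # train with CE(logits, y) + beta * kl
```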
|
Outlier detection has gained increasing interest in recent years, due to
newly emerging technologies and the huge amount of high-dimensional data that
are now available. Outlier detection can help practitioners to identify
unwanted noise and/or locate interesting abnormal observations. To address
this, we developed a novel method for outlier detection for use in, possibly
high-dimensional, datasets with both discrete and continuous variables. We
exploit the family of decomposable graphical models in order to model the
relationship between the variables and use this to form an exact likelihood
ratio test for an observation that is considered an outlier. We show that our
method outperforms the state-of-the-art Isolation Forest algorithm on a real
data example.
|
Effective caching is crucial for the performance of modern-day computing
systems. A key optimization problem arising in caching -- which item to evict
to make room for a new item -- cannot be optimally solved without knowing the
future. There are many classical approximation algorithms for this problem, but
more recently researchers started to successfully apply machine learning to
decide what to evict by discovering implicit input patterns and predicting the
future. While machine learning typically does not provide any worst-case
guarantees, the new field of learning-augmented algorithms proposes solutions
that leverage classical online caching algorithms to make the machine-learned
predictors robust. We are the first to comprehensively evaluate these
learning-augmented algorithms on real-world caching datasets and
state-of-the-art machine-learned predictors. We show that a straightforward
method -- blindly following either a predictor or a classical robust algorithm,
and switching whenever one becomes worse than the other -- has only a low
overhead over a well-performing predictor, while competing with classical
methods when the coupled predictor fails, thus providing a cheap worst-case
insurance.
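The switching rule can be stated in a few lines (a hedged sketch; names and bookkeeping are illustrative, and the real evaluation tracks both policies on the same request trace):

```python
def choose_victim(predicted_victim, lru_victim, misses_pred, misses_lru):
    """Follow the machine-learned predictor while its simulated miss
    count stays at or below the classical (LRU) policy's; otherwise
    fall back to the robust classical choice."""
    if misses_pred <= misses_lru:
        return predicted_victim  # trust the predictor
    return lru_victim            # cheap worst-case insurance
```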
|
We give an elementary topological obstruction for a $(2q{+}1)$-manifold $M$
to admit a contact open book with flexible Weinstein pages: if the torsion
subgroup of the $q$-th integral homology group is non-zero, then no such
contact open book exists. We achieve this by proving that a symplectomorphism
of a flexible Weinstein manifold acts trivially on cohomology. We also produce
examples of non-trivial loops of flexible contact structures using related
ideas.
|
Out-of-order speculation, a technique ubiquitous since the early 1990s,
remains a fundamental security flaw. Via attacks such as Spectre and Meltdown,
an attacker can trick a victim, in an otherwise entirely correct program, into
leaking its secrets through the effects of misspeculated execution, in a way
that is entirely invisible to the programmer's model. This has serious
implications for application sandboxing and inter-process communication.
Designing efficient mitigations, that preserve the performance of
out-of-order execution, has been a challenge. The speculation-hiding techniques
in the literature have been shown to not close such channels comprehensively,
allowing adversaries to redesign attacks. Strong, precise guarantees are
necessary, but at the same time mitigations must achieve high performance to be
adopted. We present Strictness Ordering, a new constraint system that shows how
we can comprehensively eliminate transient side channel attacks, while still
allowing complex speculation and data forwarding between speculative
instructions. We then present GhostMinion, a cache modification built using a
variety of new techniques designed to provide Strictness Order at only 2.5%
overhead.
|
Feedback-based control techniques are useful tools in precision measurements
as they allow one to actively shape the mechanical response of the high
quality factor oscillators used in force detection measurements. In this paper
we implement a
feedback technique on a high-stress low-loss SiN membrane resonator, exploiting
the charges trapped on the dielectric membrane. A properly delayed feedback
force (dissipative feedback) enables the narrowing of the thermomechanical
displacement variance in a similar manner to the cooling of the normal
mechanical mode down to an effective temperature $T_{\mathrm{eff}}$. In the
experiment reported here, we started from room temperature and, by gradually
increasing the feedback gain, were able to cool the first normal mode of the
resonator to a minimum temperature of about 124 mK. This limit is imposed by
our experimental set-up, and in particular by the injection of the read-out
noise into the feedback. We discuss the implementation details and possible
improvements to the technique.
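For orientation, in the standard cold-damping picture (a textbook relation, with notation assumed here rather than taken from the paper), a dissipative feedback of gain $g$ reduces the mode temperature roughly as

$$ T_{\mathrm{eff}} \simeq \frac{T}{1+g}, $$

until the injected read-out noise, as observed in this experiment, sets the floor.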
|
The asymmetric skew divergence smooths one of the distributions by mixing it,
to a degree determined by the parameter $\lambda$, with the other distribution.
Such a divergence is an approximation of the KL divergence that does not require
the target distribution to be absolutely continuous with respect to the source
distribution. In this paper, an information geometric generalization of the
skew divergence called the $\alpha$-geodesical skew divergence is proposed, and
its properties are studied.
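Under one common convention (assumed here; the paper's exact notation may differ), the skew divergence reads

$$ D_\lambda(p\,\|\,q) = \mathrm{KL}\!\left(p \,\middle\|\, (1-\lambda)\,p + \lambda\,q\right), \qquad \lambda\in[0,1], $$

which stays finite even when $q$ does not dominate $p$, and recovers the KL divergence at $\lambda=1$.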
|
"Necks" are features of lipid membranes characterized by an uniquley large
curvature, functioning as bridges between different compartments. These
features are ubiquitous in the life-cycle of the cell and instrumental in
processes such as division, extracellular vesicles uptake and cargo transport
between organelles, but also in life-threatening conditions, as in the
endocytosis of viruses and phages. Yet, the very existence of lipid necks
challenges our understanding of membranes biophysics: their curvature, often
orders of magnitude larger than elsewhere, is energetically prohibitive, even
with the arsenal of molecular machineries and signalling pathways that cells
have at their disposal. Using a geometric triality, namely a correspondence
between three different classes of geometric objects, here we demonstrate that
lipid necks are in fact metastable, thus can exist for finite, but potentially
long times even in the absence of stabilizing mechanisms. This framework allows
us to explicitly calculate the forces a corpuscle must overcome in order to
penetrate cellular membranes, thus paving the way for a predictive theory of
endo/exo-cytic processes.
|
Second-order continuous-time dissipative dynamical systems with viscous and
Hessian driven damping have inspired effective first-order algorithms for
solving convex optimization problems. While preserving the fast convergence
properties of the Nesterov-type acceleration, the Hessian driven damping makes
it possible to significantly attenuate the oscillations. To study the stability
of these algorithms with respect to perturbations and errors, we analyze the
behavior of the corresponding continuous systems when the gradient computation
is subject to errors. We provide a quantitative analysis of the asymptotic
behavior of two types of systems, those with implicit and explicit Hessian
driven damping. We consider convex, strongly convex, and non-smooth objective
functions defined on a real Hilbert space and show that, depending on the
formulation, different integrability conditions on the perturbations are
sufficient to maintain the convergence rates of the systems. We highlight the
differences between the implicit and explicit Hessian damping, and in
particular point out that the assumptions on the objective and perturbations
needed in the implicit case are more stringent than in the explicit case.
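A representative formulation with explicit Hessian driven damping (constants and time rescalings vary across the literature; $g(t)$ collects the gradient errors whose integrability the analysis constrains) is

$$ \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \beta\,\nabla^2 f(x(t))\,\dot{x}(t) + \nabla f(x(t)) = g(t). $$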
|
Most of the ongoing projects aimed at the development of specific therapies
and vaccines against COVID-19 use the SARS-CoV-2 spike (S) protein as the main
target [1-3]. The binding of the spike protein with the ACE2 receptor (ACE2) of
the host cell constitutes the first and key step for virus entry. During this
process, the receptor binding domain (RBD) of the S protein plays an essential
role, since it contains the receptor binding motif (RBM), responsible for the
docking to the receptor. So far, mostly biochemical methods are being tested in
order to prevent binding of the virus to ACE2 [4]. Here we show, with the help
of atomistic simulations, that external electric fields of easily achievable
and moderate strengths can dramatically destabilise the S protein, inducing
long-lasting structural damage. One striking field-induced conformational
change occurs at the level of the recognition loop L3 of the RBD where two
parallel beta sheets, believed to be responsible for a high affinity to ACE2
[5], undergo a change into an unstructured coil, which exhibits almost no
binding possibilities to the ACE2 receptor (Figure 1a). Remarkably, while the
structural flexibility of S allows the virus to improve its probability of
entering the cell, it is also the origin of the surprising vulnerability of S
upon application of electric fields of strengths at least two orders of
magnitude smaller than those required for damaging most proteins. Our findings
suggest the existence of a clean physical method to weaken the SARS-CoV-2 virus
without further biochemical processing. Moreover, the effect could be used for
infection prevention purposes and also to develop technologies for in-vitro
structural manipulation of S. Since the method is largely unspecific, it can be
suitable for application to mutations in S, to other proteins of SARS-CoV-2 and
in general to membrane proteins of other virus types.
|
X-ray emission from the gravitational wave transient GW170817 is well
described as non-thermal afterglow radiation produced by a structured
relativistic jet viewed off-axis. We show that the X-ray counterpart continues
to be detected at 3.3 years after the merger. Such a long-lasting signal is not a
prediction of the earlier jet models characterized by a narrow jet core and a
viewing angle of about 20 deg, and is spurring a renewed interest in the origin
of the X-ray emission. We present a comprehensive analysis of the X-ray dataset
aimed at clarifying existing discrepancies in the literature, and in particular
the presence of an X-ray rebrightening at late times. Our analysis does not
find evidence for an increase in the X-ray flux, but confirms a growing tension
between the observations and the jet model. Further observations at radio and
X-ray wavelengths would be critical to break the degeneracy between models.
|
Unmanned aerial vehicle (UAV) based visual tracking has been confronted with
numerous challenges, e.g., object motion and occlusion. These challenges
generally introduce unexpected mutations of target appearance and result in
tracking failure. However, prevalent discriminative correlation filter (DCF)
based trackers are insensitive to target mutations due to a predefined label,
which concentrates on merely the centre of the training region. Meanwhile,
appearance mutations caused by occlusion or similar objects usually lead to the
inevitable learning of wrong information. To cope with appearance mutations,
this paper proposes a novel DCF-based method to enhance the sensitivity and
resistance to mutations with an adaptive hybrid label, i.e., MSCF. The ideal
label is optimized jointly with the correlation filter and maintains temporal
consistency. In addition, a novel measurement of mutations called mutation threat
factor (MTF) is applied to correct the label dynamically. Considerable
experiments are conducted on widely used UAV benchmarks. The results indicate
that the performance of the MSCF tracker surpasses that of 26 other
state-of-the-art DCF-based and deep-based trackers. With a real-time speed of
~38 frames/s, the proposed approach is sufficient for UAV tracking tasks.
|
We present the first full description of Media Cloud, an open source platform
based on crawling hyperlink structure that has been in operation for over 10 years and that for
many uses will be the best way to collect data for studying the media ecosystem
on the open web. We document the key choices behind what data Media Cloud
collects and stores, how it processes and organizes these data, and its open
API access as well as user-facing tools. We also highlight the strengths and
limitations of the Media Cloud collection strategy compared to relevant
alternatives. We give an overview of two sample datasets generated using Media
Cloud and discuss how researchers can use the platform to create their own
datasets.
|
We study three graph complexes related to the higher genus
Grothendieck-Teichm\"uller Lie algebra and diffeomorphism groups of manifolds.
We show how the cohomology of these graph complexes is related, and we compute
the cohomology as the genus $g$ tends to $\infty$. As a byproduct, we find that
the Malcev completion of the genus $g$ mapping class group relative to the
symplectic group is Koszul in the stable limit (partially answering a question
of Hain). Moreover, we obtain that any elliptic associator gives a solution to
the elliptic Kashiwara-Vergne problem.
|
The $n$-queens puzzle is to place $n$ mutually non-attacking queens on an $n
\times n$ chessboard. We present a simple two stage randomized algorithm to
construct such configurations. In the first stage, a random greedy algorithm
constructs an approximate \textit{toroidal} $n$-queens configuration. In this
well-known variant the diagonals wrap around the board from left to right and
from top to bottom. We show that with high probability this algorithm succeeds
in placing $(1-o(1))n$ queens on the board. In the second stage, the method of
absorbers is used to obtain a complete solution to the non-toroidal problem. By
counting the number of choices available at each step of the random greedy
algorithm we conclude that there are more than $\left( \left( 1 - o(1) \right)
n e^{-3} \right)^n$ solutions to the $n$-queens problem. This proves a
conjecture of Rivin, Vardi, and Zimmerman in a strong form.
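A compact sketch of the first (toroidal, random greedy) stage follows — a simplified variant for illustration, not the paper's exact process or the absorber stage:

```python
import random

def toroidal_greedy(n, seed=0):
    """Greedily place mutually non-attacking queens on the toroidal
    n x n board, scanning the cells in a uniformly random order."""
    rng = random.Random(seed)
    rows, cols, diag1, diag2 = set(), set(), set(), set()
    queens = []
    cells = [(r, c) for r in range(n) for c in range(n)]
    rng.shuffle(cells)
    for r, c in cells:
        if (r in rows or c in cols
                or (r - c) % n in diag1 or (r + c) % n in diag2):
            continue  # cell is attacked on the torus
        rows.add(r); cols.add(c)
        diag1.add((r - c) % n); diag2.add((r + c) % n)
        queens.append((r, c))
    return queens  # typically close to n queens placed
```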
|
Experiments have overall suggested that a dilute solid solution of manganese in
body-centered cubic iron transforms from antiferromagnetic coupling into
ferromagnetic coupling at about 2 at.% Mn. Despite long-standing theoretical
studies, this phase transition is poorly understood, and the transition
mechanism is still open. Based on DFT calculations with dense k-point meshes,
we reveal that this "iso-structural" phase transition (IPT) occurs at 1.85 at.%
Mn, originating from the shifting of the 3d $e_g$ level of Mn across the Fermi level
and consequent intra-atomic electron transfer within 3d states of Mn. The IPT
involves a sudden change of the bulk modulus accompanied by a small yet
detectable change of the lattice constant, an inversion of magnetic coupling
between solute Mn and Fe matrix, and a change in bonding strength between Mn
and the first-nearest neighboring Fe atoms. Our interpretation of this IPT
plays an enlightening role in understanding similar IPTs in other materials.
|
The interaction of localised solitary waves with large-scale, time-varying
dispersive mean flows subject to nonconvex flux is studied in the framework of
the modified Korteweg-de Vries (mKdV) equation, a canonical model for nonlinear
internal gravity wave propagation in stratified fluids. The principal feature
of the studied interaction is that both the solitary wave and the large-scale
mean flow -- a rarefaction wave or a dispersive shock wave (undular bore) --
are described by the same dispersive hydrodynamic equation. A recent
theoretical and experimental study of this new type of dynamic soliton-mean
flow interaction has revealed two main scenarios when the solitary wave either
tunnels through the varying mean flow that connects two constant asymptotic
states, or remains trapped inside it. While the previous work considered convex
systems, in this paper it is demonstrated that the presence of a nonconvex
hydrodynamic flux introduces significant modifications to the scenarios for
transmission and trapping. A reduced set of Whitham modulation equations,
termed the solitonic modulation system, is used to formulate a general,
approximate mathematical framework for solitary wave-mean flow interaction with
nonconvex flux. Solitary wave trapping is conveniently stated in terms of
crossing characteristics for the solitonic system. Numerical simulations of the
mKdV equation agree with the predictions of modulation theory. The developed
theory draws upon general properties of dispersive hydrodynamic partial
differential equations, not on the complete integrability of the mKdV equation.
As such, the mathematical framework developed here enables application to other
fluid dynamic contexts subject to nonconvex flux.
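For concreteness, in one standard normalization (sign and scaling conventions vary with the physical setting) the mKdV equation reads

$$ u_t + 6\,u^2 u_x + u_{xxx} = 0, $$

whose cubic, nonconvex hydrodynamic flux is what drives the modified transmission and trapping scenarios.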
|
We compute the cosmological constant of a spherical space in the limit of
weak gravity. To this end we use a duality developed by the present authors in
a previous work. This duality allows one to treat the Newtonian cosmological
fluid as the probability fluid of a single particle in nonrelativistic quantum
mechanics. We apply this duality to the case when the spacetime manifold on
which this quantum mechanics is defined is given by
$\mathbb{R}\times\mathbb{S}^3$. Here $\mathbb{R}$ stands for the time axis and
$\mathbb{S}^3$ is a 3-dimensional sphere endowed with the standard round
metric. A quantum operator $\Lambda$ satisfying all the requirements of a
cosmological constant is identified, and the matrix representing $\Lambda$
within the Hilbert space $L^2\left(\mathbb{S}^3\right)$ of quantum states is
obtained. Numerical values for the expectation value of the operator $\Lambda$
in certain quantum states are obtained that are in good agreement with the
experimentally measured cosmological constant.
|
Given a closed, orientable surface of constant negative curvature and genus
$g \ge 2$, we study a family of generalized Bowen-Series boundary maps and
prove the following rigidity result: in this family the topological entropy is
constant and depends only on the genus of the surface. We give an explicit
formula for this entropy and show that the value of the topological entropy
also stays constant in the Teichm\"uller space of the surface. The proofs use
conjugation to maps of constant slope.
|