Transformers are powerful neural architectures that allow integrating
different modalities using attention mechanisms. In this paper, we leverage
neural transformer architectures for multi-channel speech recognition systems,
where the spectral and spatial information collected from different microphones
is integrated using attention layers. Our multi-channel transformer network
mainly consists of three parts: channel-wise self attention layers (CSA),
cross-channel attention layers (CCA), and multi-channel encoder-decoder
attention layers (EDA). The CSA and CCA layers encode the contextual
relationship within and between channels and across time, respectively. The
channel-attended outputs from CSA and CCA are then fed into the EDA layers to
help decode the next token given the preceding ones. Experiments on a far-field
in-house dataset show that our method outperforms the baseline single-channel
transformer, as well as super-directive and neural beamformers cascaded with
transformers.
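As a rough, hypothetical illustration of how the CSA and CCA layers split the attention (a single-head, unbatched NumPy sketch, not the paper's trained multi-head implementation):

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: softmax(q k^T / sqrt(d)) v.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def channel_wise_self_attention(channels):
    # CSA: each channel attends to its own frames across time.
    return [attention(x, x, x) for x in channels]

def cross_channel_attention(channels):
    # CCA: each channel queries the frames of the other channels.
    out = []
    for i, x in enumerate(channels):
        others = np.concatenate([c for j, c in enumerate(channels) if j != i])
        out.append(attention(x, others, others))
    return out

rng = np.random.default_rng(0)
chans = [rng.standard_normal((5, 8)) for _ in range(3)]  # 3 mics, 5 frames, dim 8
csa = channel_wise_self_attention(chans)
cca = cross_channel_attention(csa)
```

In the paper, the CCA outputs would then feed the encoder-decoder attention (EDA) layers during decoding.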
|
In this paper, we study quantum vacuum fluctuation effects on the mass
density of a classical liquid arising from the conical topology of an effective
idealized cosmic string spacetime, as well as from the mixed, Dirichlet, and
Neumann boundary conditions in Minkowski spacetime. In this context, we
consider a phonon field representing quantum excitations of the liquid density,
which obeys an effective Klein-Gordon equation with the sound velocity replaced
by the light velocity. In the idealized cosmic string spacetime, the phonon
field is subject to a quasi-periodic condition. Moreover, in Minkowski
spacetime, the Dirichlet and Neumann boundary conditions are applied on one and
on two parallel planes. In each case, we obtain closed analytic expressions for
the two-point function and the renormalized mean-squared density fluctuation of
the liquid, and we point out specific characteristics of the latter by plotting
its graph.
|
Bayesian Networks (BNs) have become a powerful technology for reasoning under
uncertainty, particularly in areas that require causal assumptions that enable
us to simulate the effect of intervention. The graphical structure of these
models can be determined by causal knowledge, learnt from data, or a
combination of both. While it seems plausible that the best approach in
constructing a causal graph involves combining knowledge with machine learning,
this approach remains underused in practice. We implement and evaluate 10
knowledge approaches with application to different case studies and BN
structure learning algorithms available in the open-source Bayesys structure
learning system. The approaches enable us to specify pre-existing knowledge
that can be obtained from heterogeneous sources, to constrain or guide
structure learning. Each approach is assessed in terms of structure learning
effectiveness and efficiency, including graphical accuracy, model fitting,
complexity, and runtime; making this the first paper that provides a
comparative evaluation of a wide range of knowledge approaches for BN structure
learning. Because the value of knowledge depends on what data are available, we
illustrate the results both with limited and with big data. While the overall
results show that knowledge becomes less important with big data, because
higher learning accuracy reduces the need for it, some of the knowledge
approaches actually prove more important with big data. Amongst the
main conclusions is the observation that reduced search space obtained from
knowledge does not always imply reduced computational complexity, perhaps
because the relationships implied by the data and knowledge are in tension.
|
A branching process in a Markovian environment consists of an irreducible
Markov chain on a set of "environments" together with an offspring distribution
for each environment. At each time step the chain transitions to a new random
environment, and one individual is replaced by a random number of offspring
whose distribution depends on the new environment. We give a first moment
condition that determines whether this process survives forever with positive
probability. On the event of survival we prove a law of large numbers and a
central limit theorem for the population size. We also define a matrix-valued
generating function for which the extinction matrix (whose entries are the
probability of extinction in state j given that the initial state is i) is a
fixed point, and we prove that iterates of the generating function starting
with the zero matrix converge to the extinction matrix.
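The abstract does not spell out the matrix-valued generating function; one plausible reading for these "one individual replaced per step" dynamics (the population hits zero skip-free downward) is $\Phi(M)_{ij} = \sum_k P_{ik}[\sum_n \mu_k(n) M^n]_{kj}$, and the sketch below iterates it from the zero matrix. The transition matrix and offspring distributions are invented for illustration.

```python
import numpy as np

# Environment chain (2 states) and per-environment offspring distributions.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
mu = [np.array([0.2, 0.0, 0.8]),   # env 0: 0 or 2 offspring (mean 1.6)
      np.array([0.5, 0.3, 0.2])]   # env 1: 0, 1 or 2 offspring (mean 0.7)

def Phi(M):
    # Assumed matrix generating function:
    # Phi(M)[i, j] = sum_k P[i, k] * (sum_n mu_k(n) M^n)[k, j].
    out = np.zeros_like(M)
    for k in range(len(mu)):
        S = sum(p * np.linalg.matrix_power(M, n) for n, p in enumerate(mu[k]))
        out += np.outer(P[:, k], S[k, :])
    return out

# Iterates from the zero matrix converge to the extinction matrix Q, whose
# (i, j) entry is the probability of extinction in state j given initial state i.
Q = np.zeros((2, 2))
for _ in range(2000):
    Q = Phi(Q)
```

With these numbers the stationary mean offspring count exceeds one, so the process survives with positive probability and each row of `Q` sums to strictly less than one.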
|
The kinematics on spatially flat FLRW space-times is presented for the first
time in co-moving local charts with physical coordinates, i.e., the cosmic time
and Painlev\'e-type Cartesian space coordinates. It is shown that there exists
a conserved momentum which determines the form of the covariant four-momentum
on geodesics in terms of physical coordinates. Moreover, with the help of the
conserved momentum one identifies the peculiar momentum separating the peculiar
and recessional motions without ambiguities. It is shown that the energy and
peculiar momentum satisfy the mass-shell condition of special relativity while
the recessional momentum does not produce energy. In this framework, the
measurements of the kinetic quantities along geodesics performed by different
observers are analysed pointing out an energy loss of the massive particles
similar to that giving the photon redshift. The examples of the kinematics on
the de Sitter expanding universe and a new Milne-type space-time are
extensively analysed.
|
Analog quantum simulation offers a hardware-specific approach to studying
quantum dynamics, but mapping a model Hamiltonian onto the available device
parameters requires matching the hardware dynamics. We introduce a paradigm for
quantum Hamiltonian simulation that leverages digital decomposition techniques
and optimal control to perform analog simulation. We validate this approach by
constructing the optimal analog controls for a superconducting transmon device
to emulate the dynamics of an extended Bose-Hubbard model. We demonstrate the
role of control time, digital error, and pulse complexity, and we explore the
accuracy and robustness of these controls. We conclude by discussing the
opportunity for implementing this paradigm in near-term quantum devices.
|
Deterministic classical dynamical systems have an ergodic hierarchy, from
ergodic through mixing, to Bernoulli systems that are "as random as a
coin-toss". Dual-unitary circuits have been recently introduced as solvable
models of many-body nonintegrable quantum chaotic systems having a hierarchy of
ergodic properties. We extend this to include the apex of a putative quantum
ergodic hierarchy which is Bernoulli, in the sense that correlations of single
and two-particle observables vanish at space-time separated points. We derive a
condition based on the entangling power $e_p(U)$ of the basic two-particle
unitary building block, $U$, of the circuit, that guarantees mixing, and when
maximized, corresponds to Bernoulli circuits. Additionally, we show, both
analytically and numerically, how local averaging over random realizations of
the single-particle unitaries $u_i$ and $v_i$, such that the building block is
$U^\prime = (u_1 \otimes u_2) U (v_1 \otimes v_2)$, leads to the identification
of the average mixing rate as being determined predominantly by the entangling
power $e_p(U)$. Finally, we provide several ways, both analytical and numerical,
to construct dual-unitary operators covering the entire possible range of
entangling power. We construct a coupled quantum cat map which is dual-unitary
for all local dimensions and a 2-unitary or perfect tensor for odd local
dimensions, and can be used to build Bernoulli circuits.
|
In this paper, we focus on the weighted Bergman spaces $A_{\varphi}^{p}$ in
$\mathbb{D}$ with $\varphi\in\mathcal{W}_{0}$. We first give characterizations
of those finite positive Borel measures $\mu$ in $\mathbb{D}$ such that the
embedding $A_{\varphi}^{p}\subset L_{\mu}^{q}$ is bounded or compact for
$0<p,q<\infty$. Then we describe bounded or compact Toeplitz operators
$T_{\mu}$ from one Bergman space $A_{\varphi}^{p}$ to another $A_{\varphi}^{q}$
for all possible $0<p,q<\infty$. Finally, we characterize Schatten class
Toeplitz operators on $A_{\varphi}^{2}$.
|
Whereas the positive equilibrium of a mass-action system with deficiency zero
is always globally stable, for deficiency-one networks there are many different
scenarios, mainly involving oscillatory behaviour. We present several examples,
with centers or multiple limit cycles.
|
Climate change has largely impacted our daily lives. As one of its
consequences, we are experiencing more wildfires. In the year 2020, wildfires
burned a record 8,888,297 acres in the US. To raise awareness of climate
change, and to visualize the current risk of wildfires, we developed RtFPS, a
"Real-Time Fire Prediction System". It provides a real-time visualization of
predicted wildfire risk at specific locations based on a machine learning
model, and offers interactive map features that show historical wildfire events
with environmental information.
|
We propose a method of neural evolution structures (NESs) combining
artificial neural networks (ANNs) and evolutionary algorithms (EAs) to generate
High Entropy Alloys (HEAs) structures. Our inverse design approach is based on
pair distribution functions and atomic properties and allows one to train a
model on smaller unit cells and then generate a larger cell. With a speed-up
factor of approximately 1000 with respect to special quasirandom structures
(SQSs), the NESs dramatically reduce computational costs and time, making
possible the generation of very large structures (over 40,000 atoms) in a few
hours. Additionally, unlike the
SQSs, the same model can be used to generate multiple structures with the same
fractional composition.
|
The connection between the Stellar Velocity Ellipsoid (SVE) and the dynamical
evolution of galaxies has been a matter of debate in recent years, and there
is no clear consensus on whether different heating agents (e.g. spiral arms,
giant molecular clouds, bars and mergers) leave clear detectable signatures in
the present-day kinematics. Most of these results are based on a single, global
SVE and have not taken into account that these agents do not necessarily
affect all regions of the stellar disc equally. We study the 2D spatial
distribution of the SVE across the stellar discs of Auriga galaxies, a set of
high resolution magneto-hydrodynamical cosmological zoom-in simulations, to
unveil the connection between local and global kinematic properties in the disc
region. We find very similar global values of $\sigma_{z}/\sigma_{r} = 0.80 \pm
0.08$ for galaxies of different Hubble types. This shows that the global
properties of the SVE at z=0 are not a good indicator of the heating and
cooling events experienced by galaxies. We also find that similar
$\sigma_{z}/\sigma_{r}$ radial profiles are obtained through different
combinations of $\sigma_{z}$ and $\sigma_{r}$ trends: at a local level, the
vertical and radial components can evolve differently, leading to similar
$\sigma_{z}/\sigma_{r}$ profiles at z=0. By contrast, the 2D spatial
distribution of the SVE varies a lot more from galaxy to galaxy. Present day
features in the SVE spatial distribution may be associated with specific
interactions such as fly-by encounters or the accretion of low mass satellites
even in the cases when the global SVE is not affected. The stellar populations
decomposition reveals that young stellar populations present colder and less
isotropic SVEs and more complex 2D distributions than their older and hotter
counterparts.
|
We use a five percent sample of Americans' credit bureau data, combined with
a regression discontinuity approach, to estimate the effect of universal health
insurance at age 65 (when most Americans become eligible for Medicare) at the
national, state, and local level. We find a 30 percent reduction in debt
collections, and a two-thirds reduction in the geographic variation in
collections, with limited effects on other financial outcomes. The areas that
experienced larger reductions in collections debt at age 65 were concentrated
in the Southern United States, and had higher shares of black residents, people
with disabilities, and for-profit hospitals.
|
Beyond standard model (BSM) particles should be included in effective field
theory in order to compute the scattering amplitudes involving these extra
particles. We formulate an extension of Higgs effective field theory which
contains an arbitrary number of scalar and fermion fields with arbitrary electric
and chromoelectric charges. The BSM Higgs sector is described by using the
non-linear sigma model in a manner consistent with the spontaneous electroweak
symmetry breaking. The chiral order counting rule is arranged consistently with
the loop expansion. The leading order Lagrangian is organized in accord with
the chiral order counting rule. We use a geometrical language to describe the
particle interactions. The parametrization redundancy in the effective
Lagrangian is resolved by describing the on-shell scattering amplitudes only
with the covariant quantities in the scalar/fermion field space. We introduce a
useful coordinate (normal coordinate), which simplifies the computations of the
on-shell amplitudes significantly. We show that the high-energy behavior of the
scattering amplitudes determines the "curvature tensors" in the scalar/fermion
field space. The massive spinor-wavefunction formalism is shown to be useful in
the computations of on-shell helicity amplitudes.
|
The QCD axion mass may receive contributions from small-size instantons or
other Peccei-Quinn breaking effects. We show that it is possible for such a
heavy QCD axion to induce slow-roll inflation if the potential is sufficiently
flat near its maximum by balancing the small instanton contribution with
another Peccei-Quinn symmetry breaking term. There are two classes of such
axion hilltop inflation, each giving a different relation between the axion
mass at the minimum and the decay constant. The first class predicts the
relation $m_\phi \sim 10^{-6}f_\phi$, and the axion can decay via the gluon
coupling and reheat the universe. Most of the predicted parameter region will
be covered by various experiments such as CODEX, DUNE, FASER, LHC, MATHUSLA,
and NA62, where the production and decay proceed through the same coupling that
induces reheating. The second class predicts the relation $m_\phi \sim 10^{-6}
f^2_\phi/M_{\rm pl}$. In this case, the axion mass is much lighter than in the
previous case, and one needs another mechanism for successful reheating. The
viable decay constant is restricted to be $10^8\,{\rm GeV}\lesssim f_\phi
\lesssim 10^{10}\,{\rm GeV}$, which will be probed by future experiments on the
electric dipole moment of nucleons. In both cases, requiring axion hilltop
inflation results in a strong CP phase close to zero.
|
Developing an effective flow-control algorithm to avoid congestion is a
hot topic in the computer networking community. This document first gives a
mathematical model for a general network, and then proposes discrete control
theory as a key tool for designing a new flow-control algorithm that avoids
congestion in high-speed computer networks; the proposed algorithm ensures the
stability of the network system. Simulation results show that the proposed
method can adjust the sending rate and the queue level in the buffer quickly
and effectively. In addition, the method is easy to implement and to apply to
high-speed computer networks.
|
Most cosmological structures in the universe spin. Although structures in the
universe form on a wide variety of scales from small dwarf galaxies to large
super clusters, the generation of angular momentum across these scales is
poorly understood. We have investigated the possibility that filaments of
galaxies, cylindrical tendrils of matter hundreds of millions of light-years
across, are themselves spinning. By stacking thousands of filaments together
and examining the velocity of galaxies perpendicular to the filament's axis
(via their redshifts and blueshifts), we have found that these objects too
display motion consistent with rotation, making them the largest objects known
to have angular momentum. The strength of the rotation signal depends directly on
the viewing angle and the dynamical state of the filament. Just as it is
easiest to measure rotation in a spinning disk galaxy viewed edge on, so too is
filament rotation clearly detected under similar geometric alignment.
Furthermore, the mass of the haloes that sit at either end of the filaments
also increases the spin speed. The more massive the haloes, the more rotation
is detected. These results signify that angular momentum can be generated on
unprecedented scales.
|
We show that if a non-amenable, quasi-transitive, unimodular graph $G$ has
all degrees even then it has a factor-of-iid balanced orientation, meaning each
vertex has equal in- and outdegree. This result involves extending earlier
spectral-theoretic results on Bernoulli shifts to the Bernoulli graphings of
quasi-transitive, unimodular graphs. As a consequence, we also obtain that when
$G$ is regular (of either odd or even degree) and bipartite, it has a
factor-of-iid perfect matching. This generalizes a result of Lyons and Nazarov
beyond transitive graphs.
|
The process of selecting points for training a machine learning model is
often challenging. We frequently have abundant data, but training requires
labels, and labeling is often costly. We therefore need to select training
points efficiently, so that a model trained on the selected points outperforms
models trained on other candidate sets. We propose a novel method to select
nodes in graph datasets using the concept of graph centrality. Two methods are
proposed: one using a smart selection strategy, where the model needs to be
trained only once, and another using an active learning method. We have tested
this idea on three popular graph datasets (Cora, Citeseer and Pubmed), and the
results are encouraging.
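A minimal sketch of the "train once" smart-selection strategy, assuming degree centrality as the centrality measure (the abstract does not specify which one is used):

```python
from collections import defaultdict

def degree_centrality(edges):
    # Fraction of the other nodes each node is connected to.
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)
    return {u: d / (n - 1) for u, d in deg.items()}

def select_training_nodes(edges, budget):
    # Smart selection: label only the `budget` most central nodes,
    # then train a single model on them (no active-learning loop).
    c = degree_centrality(edges)
    return sorted(c, key=c.get, reverse=True)[:budget]

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]
selected = select_training_nodes(edges, 2)  # node 0 has the highest degree
```

The active-learning variant would instead re-score the remaining nodes after each round of training and labeling.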
|
Quantitative predictions of the release of volatile radiocontaminants of
ruthenium (Ru) into the environment from nuclear power plant (NPP) or
fuel-recycling accidents carry significant uncertainties when estimated by
severe-accident nuclear analysis codes. Observations of Ru from both
experimental and modeling works suggest that the main limitations relate to the
poor evaluation of the kinetics of gaseous Ru in the form of RuO$_3$ and
RuO$_4$. This work presents relativistic correlated quantum chemical
calculations performed to determine the possible reaction pathways leading to
the formation of gaseous Ru oxides under NPP severe-accident conditions, as a
result of reactions of gaseous RuO$_2$ with air radiolysis products, namely
nitrous and nitrogen oxides. The geometries of the relevant species were
optimized with the TPSSh-5%HF density functional, while the total electronic
energies were computed at the CCSD(T) level with extrapolation to the complete
basis set (CBS) limit. The reaction pathways were fully characterized by
localizing the transition states and all intermediate structures using the
intrinsic reaction coordinate (IRC) algorithm. The rate constants were
determined over the temperature range 250-2500 K. It is revealed that the least
kinetically limited pathway to form the gaseous Ru fraction is the oxidation of
Ru by nitrogen oxide, corroborating experimental observations.
|
Translationally invariant, fine-tuned single-particle lattice Hamiltonians host
flat bands only. Suitable short-range many-body interactions result in complete
suppression of particle transport due to local constraints and Many-Body
Flatband Localization. Heat can still flow between spatially locked charges. We
show that heat transport is forbidden in dimension one. In higher dimensions
heat transport can be unlocked by tuning filling fractions across a percolation
transition for suitable lattice geometries. Transport in percolation clusters
is additionally affected by effective bulk disorder and edge scattering induced
by the local constraints, which work in favor of arresting the heat flow. We
discuss explicit examples in one and two dimensions.
|
The zero-dimensional (0D) metal halides comprise periodically distributed and
isolated metal-halide polyhedra, which act as the smallest inorganic quantum
systems and can accommodate quasi-localized Frenkel excitons. These excitons
exhibit unique photophysics including broadband photon emission, huge Stokes
shift, and long decay lifetime. The polyhedra can have different symmetries due
to the coordination degree of the metal ions. Little is known about how the
polyhedron type affects the characteristics of the 0D metal halide crystals. We
synthesize and comparatively study three novel kinds of 0D organic-inorganic
hybrid tin halide compounds. They are efficient light emitters, with the
highest quantum yield reaching 92.3%. Although they share the same organic
group, the most stable phases are composed of octahedra for the bromide and
iodide but disphenoids (see-saw structures) for the chloride. They separately
exhibit biexponential and monoexponential luminescence decays due to different
symmetries (Ci group for octahedra and C2 group for disphenoids) and
corresponding different electronic structures. The chloride has the largest
absorption photon energy among the three halides, but it has the smallest
emission photon energy. A model regarding the unoccupied energy band degeneracy
is proposed based on the experiments and density functional theory
calculations, which explains well the experimental phenomena and reveals the
crucial role of polyhedron type in determining the optical properties of the 0D
tin halide compounds.
|
In this note, we introduce and study a notion of bi-exactness for creation
operators acting on full, symmetric and anti-symmetric Fock spaces. This is a
generalization of our previous work, in which we studied the case of
anti-symmetric Fock spaces. As a result, we obtain new examples of solid
actions as well as new proofs for some known solid actions. We also study free
wreath product groups in the same context.
|
We investigate the structure of the fixed-point algebra of $\mathcal{O}_n$
under the action of the cyclic permutation of the generating isometries. We
prove that it is $*$-isomorphic with $\mathcal{O}_n$, thus generalizing a
result of Choi and Latr\'emoli\`ere on $\mathcal{O}_2$. As an application of
the technique employed, we also describe the fixed-point algebra of
$\mathcal{O}_{2n}$ under the exchange automorphism.
|
Nuclear reactions of interest for astrophysics and applications often rely on
statistical model calculations for nuclear reaction rates, particularly for
nuclei far from $\beta$-stability. However, statistical model parameters are
often poorly constrained, and experimental constraints are particularly
sparse for exotic nuclides. For example, our understanding of the breakout from
the NiCu cycle in the astrophysical rp-process is currently limited by
uncertainties in the statistical properties of the proton-rich nucleus
$^{60}$Zn. We have determined the nuclear level density of $^{60}$Zn using
neutron evaporation spectra from $^{58}$Ni($^3$He, n) measured at the Edwards
Accelerator Laboratory. We compare our results to a number of theoretical
predictions, including phenomenological, microscopic, and shell model based
approaches. Notably, we find the $^{60}$Zn level density is somewhat lower than
expected for excitation energies populated in the
$^{59}$Cu(p,$\gamma$)$^{60}$Zn reaction under rp-process conditions. This
includes a level density plateau from roughly 5-6 MeV excitation energy, which
is counter to the usual expectation of exponential growth and all theoretical
predictions that we explore. A determination of the spin-distribution at the
relevant excitation energies in $^{60}$Zn is needed to confirm that the
Hauser-Feshbach formalism is appropriate for the $^{59}$Cu(p,$\gamma$)$^{60}$Zn
reaction rate at X-ray burst temperatures.
|
Reconfigurable Intelligent Surfaces (RISs) have recently been attracting wide
interest due to their capability of tuning wireless propagation environments in
order to increase the system performance of wireless networks. In this paper, a
multiuser wireless network assisted by a RIS is studied and resource allocation
algorithms are presented for several scenarios. First of all, the problem of
channel estimation is considered, and an algorithm that permits separate
estimation of the mobile-user-to-RIS and RIS-to-base-station channel components
is proposed. Then, for the special case of a single-user system, three possible
approaches are shown in order to optimize the Signal-to-Noise Ratio with
respect to the beamformer used at the base station and to the RIS phase shifts.
Next, for a multiuser system with two cells, assuming channel-matched
beamforming, the geometric mean of the downlink Signal-to-Interference plus
Noise Ratios across users is maximized with respect to the base stations
transmit powers and RIS phase shifts configurations. In this scenario, the RIS
is placed at the cell-edge and some users are jointly served by two base
stations to increase the system performance. Numerical results show that the
proposed procedures are effective and that the RIS brings substantial
performance improvements to the wireless system.
|
Permanently deformed objects in binary systems can experience complex
rotation evolution, arising from the extensively studied effect of spin-orbit
coupling as well as more nuanced dynamics arising from spin-spin interactions.
The ability of an object to sustain an aspheroidal shape largely determines
whether or not it will exhibit non-trivial rotational behavior. In this work,
we adopt a simplified model of a gravitationally interacting primary and
satellite pair, where each body's quadrupole moment is approximated by two
diametrically opposed point masses. After calculating the net gravitational
torque on the satellite from the primary, and the associated equations of
motion, we employ a Hamiltonian formalism which allows for a perturbative
treatment of the spin-orbit and retrograde and prograde spin-spin coupling
states. By analyzing the resonances individually and collectively, we determine
the criteria for resonance overlap and the onset of chaos, as a function of
orbital and geometric properties of the binary. We extend the 2D planar
geometry to calculate the obliquity evolution, and find that satellites in
spin-spin resonances undergo precession when inclined out of the plane, but do
not tumble. We apply our resonance overlap criteria to the contact binary
system (216) Kleopatra, and find that its satellites, Cleoselene and
Alexhelios, may plausibly be exhibiting chaotic rotational dynamics from the
overlap of the spin-orbit and retrograde spin-spin resonances. While this model
is by construction generalizable to any binary system, it will be particularly
useful to study small bodies in the solar system, whose irregular shapes make
them ideal candidates for exotic rotational states.
|
In many applications, the integrals and derivatives of signals carry valuable
information (e.g., cumulative success over a time window, the rate of change)
regarding the behavior of the underlying system. In this paper, we extend the
expressiveness of Signal Temporal Logic (STL) by introducing predicates that
can define rich properties related to the integral and derivative of a signal.
For control synthesis, the new predicates are encoded into mixed-integer linear
inequalities and are used in the formulation of a mixed-integer linear program
to find a trajectory that satisfies an STL specification. We discuss the
benefits of using the new predicates and illustrate them in a case study
showing the influence of the new predicates on the trajectories of an
autonomous robot.
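For intuition, the new predicate types can be evaluated on a sampled signal as follows. This is only a discrete-time monitoring sketch; the paper's contribution is the mixed-integer linear encoding of such predicates for control synthesis.

```python
import numpy as np

def integral_predicate(signal, dt, window, c):
    # Discrete-time check of the predicate  "integral of s over [t-w, t] >= c",
    # returning one boolean per time step (left Riemann sums).
    w = int(window / dt)
    cs = np.concatenate([[0.0], np.cumsum(signal) * dt])
    vals = np.array([cs[t + 1] - cs[max(t + 1 - w, 0)]
                     for t in range(len(signal))])
    return vals >= c

def derivative_predicate(signal, dt, c):
    # Discrete-time check of the predicate  "ds/dt <= c", via forward differences.
    return np.diff(signal) / dt <= c

# A constant unit signal accumulates area dt per step, so the window
# integral first reaches 2.5 at the third sample.
ok = integral_predicate(np.ones(10), dt=1.0, window=3.0, c=2.5)
```

In the synthesis setting, the cumulative sums above would become auxiliary decision variables constrained by mixed-integer linear inequalities.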
|
The extensive computer-aided search applied in [arXiv:2010.10519] to find the
minimal charge sourced by the fluxes that stabilize all the (flux-stabilizable)
moduli of a smooth K3xK3 compactification uses differential evolutionary
algorithms supplemented by local searches. We present these algorithms in
detail and show that they can also solve our minimization problem for other
lattices. Our results support the Tadpole Conjecture: The minimal charge grows
linearly with the dimension of the lattice and, for K3xK3, this charge is
larger than allowed by tadpole cancellation.
Even if we are faced with an NP-hard lattice-reduction problem at every step
in the minimization process, we find that differential evolution is a good
technique for identifying the regions of the landscape where the fluxes with
the lowest tadpole can be found. We then design a "Spider Algorithm," which is
very efficient at exploring these regions and producing large numbers of
minimal-tadpole configurations.
|
Graph Neural Networks (GNNs) have received significant attention due to their
state-of-the-art performance on various graph representation learning tasks.
However, recent studies reveal that GNNs are vulnerable to adversarial attacks,
i.e. an attacker is able to fool the GNNs by perturbing the graph structure or
node features deliberately. While being able to successfully decrease the
performance of GNNs, most existing attacking algorithms require access to
either the model parameters or the training data, which is not practical in the
real world.
In this paper, we develop deeper insights into the Mettack algorithm, which
is a representative grey-box attacking method, and then we propose a
gradient-based black-box attacking algorithm. Firstly, we show that the Mettack
algorithm will perturb the edges unevenly, thus the attack will be highly
dependent on a specific training set. As a result, a simple yet useful strategy
to defend against Mettack is to train the GNN with the validation set.
Secondly, to overcome these drawbacks, we propose the Black-Box Gradient Attack
(BBGA) algorithm. Extensive experiments demonstrate that our proposed method
achieves stable attack performance without accessing the training sets of the
GNNs. Further results show that our proposed method is also applicable when
attacking various defense methods.
|
Model Order Reduction (MOR) methods enable the generation of
real-time-capable digital twins, which can enable various novel value streams
in industry. While traditional projection-based methods are robust and accurate
for linear problems, incorporating Machine Learning to deal with nonlinearity
becomes a new choice for reducing complex problems. Such methods usually
consist of two steps. The first step is dimension reduction by projection-based
method, and the second is the model reconstruction by Neural Network. In this
work, we apply modifications to both steps and investigate their impact by
testing on three simulation models. In all cases Proper Orthogonal
Decomposition (POD) is used for dimension reduction. For this step, the effects
of generating the input snapshot database with constant input parameters are
compared with those of time-dependent input parameters. For the model
reconstruction step, two types of neural network architectures are compared:
Multilayer Perceptron (MLP) and Runge-Kutta Neural Network (RKNN). The MLP
learns the system state directly, while the RKNN learns the derivative of the
system state and predicts the new state through a Runge-Kutta integration step.
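A minimal sketch of the RKNN idea (learn the derivative, then advance the state with a classical 4th-order Runge-Kutta step). The network size and weights below are untrained placeholders, not the paper's models; an MLP surrogate would instead map the current state directly to the next state.

```python
import numpy as np

# Tiny randomly initialized network f_theta(x) approximating dx/dt.
rng = np.random.default_rng(1)
W1, b1 = 0.1 * rng.standard_normal((16, 2)), np.zeros(16)
W2, b2 = 0.1 * rng.standard_normal((2, 16)), np.zeros(2)

def f_theta(x):
    # One hidden tanh layer: placeholder for the learned derivative model.
    return W2 @ np.tanh(W1 @ x + b1) + b2

def rknn_step(x, dt):
    # RKNN prediction: classical RK4 applied to the learned derivative.
    k1 = f_theta(x)
    k2 = f_theta(x + 0.5 * dt * k1)
    k3 = f_theta(x + 0.5 * dt * k2)
    k4 = f_theta(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x_next = rknn_step(np.ones(2), dt=0.01)
```

Because the integrator structure is fixed, only `f_theta` has to be learned from snapshot data.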
|
We prove a removal lemma for induced ordered hypergraphs, simultaneously
generalizing Alon--Ben-Eliezer--Fischer's removal lemma for ordered graphs and
the induced hypergraph removal lemma. That is, we show that if an ordered
hypergraph $(V,G,<)$ has few induced copies of a small ordered hypergraph
$(W,H,\prec)$ then there is a small modification $G'$ so that $(V,G',<)$ has no
induced copies of $(W,H,\prec)$. (Note that we do \emph{not} need to modify the
ordering $<$.)
We give our proof in the setting of an ultraproduct (that is, a Keisler
graded probability space), where we can give an abstract formulation of
hypergraph removal in terms of sequences of $\sigma$-algebras. We then show
that ordered hypergraphs can be viewed as hypergraphs where we view the
intervals as an additional notion of a ``very structured'' set. Along the way
we give an explicit construction of the bijection between the ultraproduct
limit object and the corresponding hypergraphon.
|
The availability of biomedical text data and advances in natural language
processing (NLP) have made new applications in biomedical NLP possible.
Language models trained or fine-tuned using domain-specific corpora can
outperform general models, but work to date in biomedical NLP has been limited
in terms of corpora and tasks. We present BioALBERT, a domain-specific
adaptation of A Lite Bidirectional Encoder Representations from Transformers
(ALBERT), trained on biomedical (PubMed and PubMed Central) and clinical
(MIMIC-III) corpora and fine-tuned for 6 different tasks across 20 benchmark
datasets. Experiments show that BioALBERT outperforms the state of the art on
named entity recognition (+11.09% BLURB score improvement), relation extraction
(+0.80% BLURB score), sentence similarity (+1.05% BLURB score), document
classification (+0.62% F1-score), and question answering (+2.83% BLURB score).
It represents a new state of the art in 17 out of 20 benchmark datasets. By
making BioALBERT models and data available, our aim is to help the biomedical
NLP community avoid computational costs of training and establish a new set of
baselines for future efforts across a broad range of biomedical NLP tasks.
|
In this paper, we justify the convergence from the two-species
Vlasov-Poisson-Boltzmann (VPB, for short) system to the two-fluid
incompressible Navier-Stokes-Fourier-Poisson (NSFP, for short) system with
Ohm's law in the context of classical solutions. We prove the uniform estimates
with respect to the Knudsen number $\varepsilon$ for the solutions to the
two-species VPB system near equilibrium by treating the strong interspecies
interactions. Consequently, we prove the convergence to the two-fluid
incompressible NSFP system as $\varepsilon$ goes to 0.
|
Inverse design arises in a variety of areas in engineering, such as acoustics,
mechanics, thermal/electronic transport, electromagnetism, and optics. Topology
optimization is a major form of inverse design, where we optimize a designed
geometry to achieve targeted properties and the geometry is parameterized by a
density function. This optimization is challenging, because it has a very high
dimensionality and is usually constrained by partial differential equations
(PDEs) and additional inequalities. Here, we propose a new deep learning method
-- physics-informed neural networks with hard constraints (hPINNs) -- for
solving topology optimization. hPINN leverages the recent development of PINNs
for solving PDEs, and thus does not rely on any numerical PDE solver. However,
all the constraints in PINNs are soft constraints, and hence we impose hard
constraints by using the penalty method and the augmented Lagrangian method. We
demonstrate the effectiveness of hPINN for a holography problem in optics and a
fluid problem of Stokes flow. We achieve the same objective as conventional
PDE-constrained optimization methods based on adjoint methods and numerical PDE
solvers, but find that the design obtained from hPINN is often simpler and
smoother for problems whose solution is not unique. Moreover, the
implementation of inverse design with hPINN can be easier than that of
conventional methods.
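The penalty and augmented Lagrangian machinery for imposing hard constraints can be sketched on a scalar toy problem (hypothetical code; the actual hPINN applies this to a neural-network objective with PDE and inequality constraints). The multiplier update turns an ever-stiffer soft penalty into an effectively hard constraint:

```python
def augmented_lagrangian(objective_grad, constraint, constraint_grad,
                         x, mu=10.0, lam=0.0, outer=20, inner=50):
    """Enforce constraint(x) = 0 by combining a quadratic penalty of
    growing weight mu with a Lagrange multiplier lam (augmented Lagrangian)."""
    for _ in range(outer):
        lr = 1.0 / (2.0 + mu)   # step size tuned to this toy quadratic
        for _ in range(inner):
            c = constraint(x)
            grad = objective_grad(x) + (lam + mu * c) * constraint_grad(x)
            x = x - lr * grad
        lam = lam + mu * constraint(x)   # multiplier update
        mu = mu * 2.0                    # tighten the penalty
    return x

# Toy stand-in for the PDE-constrained problem:
# minimize (x - 2)^2 subject to x - 1 = 0, whose solution is x = 1.
x_star = augmented_lagrangian(
    objective_grad=lambda x: 2.0 * (x - 2.0),
    constraint=lambda x: x - 1.0,
    constraint_grad=lambda x: 1.0,
    x=0.0)
```

With a pure penalty method the constraint is satisfied only in the limit of infinite penalty weight; the multiplier update lets the constraint residual vanish at finite weight, which is why the augmented Lagrangian variant is generally preferred.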
|
This paper is an extension of the paper by Del Popolo, Chan, and Mota (2020)
to take into account the effect of dynamical friction. We show how dynamical
friction changes the threshold of collapse, $\delta_c$, and the turn-around
radius, $R_t$. We find numerically the relationship between the turnaround
radius, $R_{\rm t}$, and mass, $M_{\rm t}$, in $\Lambda$CDM, in dark energy
scenarios, and in a $f(R)$ modified gravity model. Dynamical friction gives
rise to a $R_{\rm t}-M_{\rm t}$ relation differing from that of the standard
spherical collapse. In particular, dynamical friction amplifies the effect of
shear, and vorticity already studied in Del Popolo, Chan, and Mota (2020). A
comparison of the $R_{\rm t}-M_{\rm t}$ relationship for $\Lambda$CDM with
those for the dark energy and modified gravity models shows that the $R_{\rm
t}-M_{\rm t}$ relationship of $\Lambda$CDM is similar to that of the dark
energy models, and small differences are seen when comparing with the $f(R)$
models. The effect of shear, rotation, and dynamical friction is particularly
evident at galactic scales, giving rise to a difference between the $R_{\rm
t}-M_{\rm t}$ relation of the standard spherical collapse of the order of
$\simeq 60\%$. Finally, we show how the new values of the $R_{\rm t}-M_{\rm t}$
influence the constraints to the $w$ parameter of the equation of state.
|
The synthesis of control laws for interacting agent-based dynamics and their
mean-field limit is studied. A linearization-based approach is used for the
computation of sub-optimal feedback laws obtained from the solution of
differential matrix Riccati equations. Quantification of dynamic performance of
such control laws leads to theoretical estimates on suitable linearization
points of the nonlinear dynamics. Subsequently, the feedback laws are embedded
into a nonlinear model predictive control framework, where the control is
updated adaptively in time according to dynamic information on the moments of
the linear mean-field dynamics. The performance and robustness of the proposed
methodology are assessed through different numerical experiments in collective
dynamics.
|
We present an application of the well-known Mask R-CNN approach to the
counting of different types of bacterial colony forming units that were
cultured in Petri dishes. Our model was made available to lab technicians in a
modern SPA (Single-Page Application). Users can upload images of dishes, after
which the Mask R-CNN model that was trained and tuned specifically for this
task detects the number of BVG- and BVG+ colonies and displays these in an
interactive interface for the user to verify. Users can then check the model's
predictions, correct them if deemed necessary, and finally validate them. Our
adapted Mask R-CNN model achieves a mean average precision (mAP) of 94\% at an
intersection-over-union (IoU) threshold of 50\%. With these encouraging
results, we see opportunities to bring the benefits of improved accuracy and
time saved to related problems, such as generalising to other bacteria types
and viral foci counting.
|
This paper develops a non-asymptotic, local approach to quantitative
propagation of chaos for a wide class of mean field diffusive dynamics. For a
system of $n$ interacting particles, the relative entropy between the marginal
law of $k$ particles and its limiting product measure is shown to be
$O((k/n)^2)$ at each time, as long as the same is true at time zero. A simple
Gaussian example shows that this rate is optimal. The main assumption is that
the limiting measure obeys a certain functional inequality, which is shown to
encompass many potentially irregular but not too singular finite-range
interactions, as well as some infinite-range interactions. This unifies the
previously disparate cases of Lipschitz versus bounded measurable interactions,
improving the best prior bounds of $O(k/n)$ which were deduced from global
estimates involving all $n$ particles. We also cover a class of models for
which qualitative propagation of chaos and even well-posedness of the
McKean-Vlasov equation were previously unknown. At the center of a new approach
is a differential inequality, derived from a form of the BBGKY hierarchy, which
bounds the $k$-particle entropy in terms of the $(k+1)$-particle entropy.
|
Intrinsic nonlinear elasticity deals with the deformations of elastic bodies
as isometric immersions of Riemannian manifolds into the Euclidean spaces (see
Ciarlet [9,10]). In this paper, we study the rigidity and continuity properties
of elastic bodies for the intrinsic approach to nonlinear elasticity. We first
establish a geometric rigidity estimate for mappings from Riemannian manifolds
to spheres (in the spirit of Friesecke-James-M\"{u}ller [23]), which is the
first result of this type for the non-Euclidean case as far as we know. Then we
prove the asymptotic rigidity of elastic membranes under suitable geometric
conditions. Finally, we provide a simplified geometric proof of the continuous
dependence of deformations of elastic bodies on the Cauchy-Green tensors and
second fundamental forms, which extends the Ciarlet-Mardare theorem in [18] to
arbitrary dimensions and co-dimensions.
|
We study the nonlinear steady-state transport of spinless fermions through a
quantum dot with a local two-particle interaction. The dot degree of freedom is
in addition coupled to a phonon mode. This setup combines the nonequilibrium
physics of the interacting resonant level model and that of the
Anderson-Holstein model. The fermion-fermion interaction defies a perturbative
treatment. We mainly focus on the antiadiabatic limit, with the phonon
frequency being larger than the lead-dot tunneling rate. In this regime also
the fermion-boson coupling cannot be treated perturbatively. Our goal is
two-fold. We investigate the competing roles of the fermion-fermion and
fermion-boson interactions on the emergent low-energy scale $T_{\rm K}$ and
show how $T_{\rm K}$ manifests in the transport coefficients as well as the
current-voltage characteristics. For small to intermediate interactions, the
latter is in addition directly affected by both interactions independently.
With increasing fermion-boson interaction the Franck-Condon blockade suppresses
the current at small voltages and the emission of phonons leads to shoulders or
steps at multiples of the phonon frequency, while the local fermion-fermion
interaction implies a negative differential conductance at voltages larger than
$T_{\rm K}$. We, in addition, use the model to investigate the limitations of
our low-order truncated functional renormalization group approach on the
Keldysh contour. In particular, we quantify the role of the broken current
conservation.
|
In this review we introduce computer modelling and simulation techniques
which are used for ferrimagnetic materials. We focus on models where thermal
effects are accounted for, atomistic spin dynamics and finite temperature
macrospin approaches. We survey the literature of two of the most commonly
modelled ferrimagnets in the field of spintronics--the amorphous alloy GdFeCo
and the magnetic insulator yttrium iron garnet. We look at how generic models
and material specific models have been applied to predict and understand
spintronic experiments, focusing on the fields of ultrafast magnetisation
dynamics, spincaloritronics and magnetic textures dynamics and give an outlook
for modelling in ferrimagnetic spintronics.
|
The coexistence of diverse services with heterogeneous requirements is a
fundamental feature of 5G. This necessitates efficient radio access network
(RAN) slicing, defined as sharing of the wireless resources among diverse
services while guaranteeing their respective throughput, timing, and/or
reliability requirements. In this paper, we investigate RAN slicing for an
uplink scenario in the form of multiple access schemes for two user types: (1)
broadband users with throughput requirements and (2) intermittently active
users with timing requirements, expressed as either latency-reliability (LR) or
Peak Age of Information (PAoI). Broadband users transmit data continuously,
hence, are allocated non-overlapping parts of the spectrum. We evaluate the
trade-offs between the achievable throughput of a broadband user and the timing
requirements of an intermittent user under Orthogonal Multiple Access (OMA) and
Non-Orthogonal Multiple Access (NOMA), considering capture. Our analysis shows
that NOMA, in combination with packet-level coding, is a superior strategy in
most cases for both LR and PAoI, achieving a similar LR with only a slight 2%
decrease in throughput with respect to the upper bound in performance. However,
there are extreme cases where OMA achieves a slightly greater throughput than
NOMA at the expense of an increased PAoI.
|
This paper presents an extremum seeking control algorithm with an adaptive
step-size that adjusts the aggressiveness of the controller based on the
quality of the gradient estimate. The adaptive step-size ensures that the
integral-action produced by the gradient descent does not destabilize the
closed-loop system. To quantify the quality of the gradient estimate, we
present a batch least squares estimator with a novel weighting and show that it
produces bounded estimation errors, where the uncertainty is due to the
curvature of the unknown cost function. The adaptive step-size then maximizes
the decrease of the combined plant and controller Lyapunov function for the
worst-case estimation error. We prove that our ESC is input-to-state stable
with respect to the dither signal. Finally, we demonstrate our proposed ESC
through five numerical examples; one illustrative, one practical, and three
benchmarks.
|
Here we review the understanding of the classical hydrogen atom in classical
electromagnetic zero-point radiation, and emphasize the importance of special
relativity. The crucial missing ingredient in earlier calculational attempts
(both numerical and analytic) is the use of valid approximations to the full
relativistic analysis. It is pointed out that the nonrelativistic time Fourier
expansion coefficients given by Landau and Lifshitz are in error as the
electromagnetic description of a charged particle in a Coulomb potential, and,
because of this error, Marshall and Claverie's conclusion regarding the failure
of radiation balance is invalid. Rather, using Marshall and Claverie's
calculations, but restricted to lowest nonvanishing order in the orbital
eccentricity (where the nonrelativistic orbit is a valid approximation to the
fully relativistic electromagnetic orbit) radiation balance for classical
electromagnetic zero-point radiation is shown to hold at the fundamental
frequencies and associated first overtones.
|
We study a linear high-dimensional regression model in a semi-supervised
setting, where for many observations only the vector of covariates $X$ is given
with no response $Y$. We do not make any sparsity assumptions on the vector of
coefficients, and aim at estimating $\mathrm{Var}(Y|X)$. We propose an
estimator, which is unbiased, consistent, and asymptotically normal. This
estimator can be improved by adding zero-estimators arising from the unlabelled
data. Adding zero-estimators does not affect the bias and potentially can
reduce variance.
In order to achieve optimal improvement, many zero-estimators should be used,
but this raises the problem of estimating many parameters. Therefore, we
introduce covariate selection algorithms that identify which zero-estimators
should be used in order to improve the above estimator.
We further illustrate our approach for other estimators, and present an
algorithm that improves estimation for any given variance estimator. Our
theoretical results are demonstrated in a simulation study.
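The zero-estimator mechanism is essentially a control-variate correction. The toy sketch below (hypothetical code; it targets the simpler quantity $E[Y]$ rather than $\mathrm{Var}(Y|X)$) subtracts a multiple of a mean-zero statistic built from the covariates, which leaves the bias unchanged while reducing variance for a well-chosen coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)           # covariate; E[X] = 0 is assumed known
y = 2.0 * x + rng.normal(size=n)

naive = y.mean()                 # unbiased estimator of E[Y]

# A zero-estimator: the sample mean of X minus its known expectation.
# Its expectation is zero, so subtracting a multiple of it keeps the
# estimator unbiased while (for a good coefficient) reducing variance.
zero_est = x.mean() - 0.0
coef = np.cov(x, y)[0, 1] / np.var(x)   # variance-minimizing coefficient
improved = naive - coef * zero_est
```

In the semi-supervised setting the expectation of the zero-estimator is learned from the abundant unlabelled covariates, which is what makes the correction available without extra responses.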
|
We present a comprehensive study on building and adapting RNN transducer
(RNN-T) models for spoken language understanding (SLU). These end-to-end (E2E)
models are constructed in three practical settings: a case where verbatim
transcripts are available, a constrained case where the only available
annotations are SLU labels and their values, and a more restrictive case where
transcripts are available but not corresponding audio. We show how RNN-T SLU
models can be developed starting from pre-trained automatic speech recognition
(ASR) systems, followed by an SLU adaptation step. In settings where real audio
data is not available, artificially synthesized speech is used to successfully
adapt various SLU models. When evaluated on two SLU data sets, the ATIS corpus
and a customer call center data set, the proposed models closely track the
performance of other E2E models and achieve state-of-the-art results.
|
Loop scheduling techniques aim to achieve load-balanced executions of
scientific applications. Dynamic loop self-scheduling (DLS) libraries for
distributed-memory systems are typically MPI-based and employ a centralized
chunk calculation approach (CCA) to assign variably-sized chunks of loop
iterations. We present a distributed chunk calculation approach (DCA) that
supports various types of DLS techniques. Using both CCA and DCA, twelve DLS
techniques are implemented and evaluated in different CPU slowdown scenarios.
The results show that the DLS techniques implemented using DCA outperform their
corresponding ones implemented with CCA, especially in extreme system slowdown
scenarios.
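As an illustration of a chunk-calculation rule shared by both approaches (hypothetical code; guided self-scheduling is one classical DLS technique among the twelve the paper covers), each scheduling step hands out a chunk proportional to the remaining iterations. The same rule can be evaluated by a central master (CCA) or by each worker locally from a shared iteration counter (DCA), yielding the same chunk sequence:

```python
from math import ceil

def gss_chunks(total_iters, workers):
    """Guided self-scheduling (GSS): each request receives a chunk equal
    to the remaining iterations divided by the number of workers."""
    remaining = total_iters
    chunks = []
    while remaining > 0:
        chunk = max(1, ceil(remaining / workers))
        chunks.append(chunk)
        remaining -= chunk
    return chunks

sizes = gss_chunks(100, 4)   # decreasing chunk sizes, starting at 25
```

Large early chunks keep scheduling overhead low, while the small final chunks balance the load near the end of the loop, which is the trade-off all variably-sized DLS techniques exploit.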
|
The Off-plane Grating Rocket Experiment (OGRE) is a soft X-ray grating
spectrometer to be flown on a suborbital rocket. The payload is designed to
obtain the highest-resolution soft X-ray spectrum of Capella to date, with a
resolution goal of $R \equiv \lambda/\Delta\lambda > 2000$ at select
wavelengths in its 10--55 Angstrom bandpass of interest. The optical design of
the spectrometer
realizes a theoretical maximum resolution of $R\approx5000$, but this
performance does not consider the finite performance of the individual
spectrometer components, misalignments between components, and in-flight
pointing errors. These errors all degrade the performance of the spectrometer
from its theoretical maximum. A comprehensive line-spread function (LSF) error
budget has been constructed for the OGRE spectrometer to identify contributions
to the LSF, to determine how each of these affects the LSF, and to inform
performance requirements and alignment tolerances for the spectrometer. In this
document, the comprehensive LSF error budget for the OGRE spectrometer is
presented, the resulting errors are validated via raytrace simulations, and the
implications of these results are discussed.
|
We investigate the possibility of inducing double peaks in the gravitational
wave (GW) spectrum from primordial scalar perturbations in inflationary models
with three inflection points, where the inflection points can be generated from
a polynomial potential or from a Higgs-like $\phi^4$ potential with the running
of the quartic coupling. In such models, the inflection point at large scales
predicts a scalar spectral index and tensor-to-scalar ratio consistent with
current CMB constraints, while the other two inflection points generate two
large peaks in the scalar power spectrum at small scales, which can induce GWs
with a double-peaked energy spectrum. We find that for some choices of
parameters the double-peaked spectrum can be detected by future GW detectors,
and one of the peaks, around $f\simeq10^{-9}\sim10^{-8}$\,Hz, can also explain
the recent NANOGrav signal. Moreover, the peaks of the power spectrum allow for
the generation of primordial black holes, which can account for a significant
fraction of dark matter.
|
In this work, we investigate Gaussian process regression used to recover a
function based on noisy observations. We derive upper and lower error bounds
for Gaussian process regression with possibly misspecified correlation
functions. The optimal convergence rate can be attained even if the smoothness
of the imposed correlation function exceeds that of the true correlation
function and the sampling scheme is quasi-uniform. As byproducts, we also
obtain convergence rates of kernel ridge regression with misspecified kernel
function, where the underlying truth is a deterministic function. The
convergence rates of Gaussian process regression and kernel ridge regression
are closely connected, which is aligned with the relationship between sample
paths of Gaussian process and the corresponding reproducing kernel Hilbert
space.
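A small sketch of the kernel ridge regression setting discussed here (hypothetical code and parameters): the imposed Gaussian kernel is infinitely smooth, deliberately smoother than the kinked truth, yet the fit remains accurate, in line with the misspecified-kernel convergence results:

```python
import numpy as np

def krr_fit_predict(x_train, y_train, x_test, lengthscale=0.1, lam=1e-3):
    """Kernel ridge regression with a Gaussian kernel:
    alpha = (K + lam*n*I)^{-1} y,  f(x*) = k(x*, X) @ alpha."""
    def kernel(a, b):
        return np.exp(-((a[:, None] - b[None, :]) ** 2)
                      / (2.0 * lengthscale ** 2))
    n = len(x_train)
    alpha = np.linalg.solve(kernel(x_train, x_train) + lam * n * np.eye(n),
                            y_train)
    return kernel(x_test, x_train) @ alpha

# The truth has a kink (finite smoothness); the Gaussian kernel is
# infinitely smooth, i.e. deliberately "too smooth" for the truth.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.abs(x - 0.5) + 0.05 * rng.normal(size=200)
x_grid = np.linspace(0.0, 1.0, 101)
pred = krr_fit_predict(x, y, x_grid)
```

The same predictor is also the posterior mean of Gaussian process regression with kernel-matched prior, which is the correspondence the abstract alludes to between the two convergence analyses.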
|
Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$, and $L(G)$ be
the line graph of $G$, which has vertex set $E(G)$, and two vertices $e$ and $f$
of $L(G)$ are adjacent if $e$ and $f$ are incident in $G$. The vertex-edge graph
$M(G)$ of $G$ has vertex set $V(G)\cup E(G)$ and edge set $E(L(G))\cup
\{ue,ve|\ \forall\ e=uv\in E(G)\}$. In this paper, by a combinatorial
technique, we show that if $G$ is a connected cubic graph with an even number
of edges, then the number of dimer coverings of $M(G)$ equals
$2^{|V(G)|/2+1}3^{|V(G)|/4}$. As an application, we obtain the exact solution
of the dimer problem for the weighted silicate network obtained from the
hexagonal lattice in the context of statistical physics.
|
Online exams have become widely used to evaluate students' performance in
mastering knowledge in recent years, especially during the COVID-19 pandemic.
However, it is challenging to conduct proctoring for online exams due
to the lack of face-to-face interaction. Also, prior research has shown that
online exams are more vulnerable to various cheating behaviors, which can
damage their credibility. This paper presents a novel visual analytics approach
to facilitate the proctoring of online exams by analyzing the exam video
records and mouse movement data of each student. Specifically, we detect and
visualize suspected head and mouse movements of students in three levels of
detail, which provides course instructors and teachers with convenient,
efficient and reliable proctoring for online exams. Our extensive evaluations,
including usage scenarios, a carefully-designed user study and expert
interviews, demonstrate the effectiveness and usability of our approach.
|
For $M$ a compact Riemannian manifold Brandenbursky and Marcinkowski
constructed a transfer map $H_b^*(\pi_1(M))\to H_b^*(Homeo_{vol,0}(M))$ and
used it to show that for certain $M$ the space
$\overline{EH}_b^3(Homeo_{vol,0}(M))$ is infinite-dimensional. Kimura adapted
the argument to $Diff_{vol}(D^2,\partial D^2)$. We extend both results to the
higher degrees $\overline{EH}_b^{2n}$, $n\geq 1$. We also show that for certain
$M$ the ordinary cohomology $H^*(Homeo_{vol,0}(M))$ is non-trivial in all
degrees. In our computations we view the transfer map as being induced by a
coupling of groups.
|
We define several sorts of mappings on a poset like monotone, strictly
monotone, upper cone preserving and variants of these. Our aim is to
characterize posets in which some of these mappings coincide. We define special
mappings determined by two elements and investigate when these are strictly
monotone or upper cone preserving. If the considered poset is a semilattice
then its monotone mappings coincide with semilattice homomorphisms if and only
if the poset is a chain. Similarly, we study posets which need not be
semilattices but whose upper cones have a minimal element. We extend this
investigation to posets that are direct products of chains or an ordinal sum of
an antichain and a finite chain. We characterize equivalence relations induced
by strongly monotone mappings and show that the quotient set of a poset by such
an equivalence relation is a poset again.
|
In this article, we study several closely related invariants associated to
Dirac operators on odd-dimensional manifolds with boundary with an action of
the compact group $H$ of isometries. In particular, the equality between
equivariant winding numbers, equivariant spectral flow, and equivariant Maslov
indices is established. We also study equivariant $\eta$-invariants which play
a fundamental role in the equivariant analog of Getzler's spectral flow
formula. As a consequence, we establish a relation between equivariant
$\eta$-invariants and equivariant Maslov triple indices in the splitting of
manifolds.
|
The radio source 1146+596 is hosted by the elliptical/S0 galaxy NGC\,3894,
with a low-luminosity active nucleus. The radio structure is compact,
suggesting a very young age of the jets in the system. Recently, the source has
been confirmed as a high-energy (HE, $>0.1$\,GeV) $\gamma$-ray emitter, in the
most recent accumulation of the {\it Fermi} Large Area Telescope (LAT) data.
Here we report on the analysis of the archival {\it Chandra} X-ray Observatory
data for the central part of the galaxy, consisting of a single 40\,ksec-long
exposure. We have found that the core spectrum is best fitted by a combination
of an ionized thermal plasma with the temperature of $\simeq 0.8$\,keV, and a
moderately absorbed power-law component (photon index $\Gamma = 1.4\pm 0.4$,
hydrogen column density $N_{\rm H}/10^{22}$\,cm$^{-2}$\,$= 2.4\pm 0.7$). We
have also detected the iron K$\alpha$ line at $6.5\pm 0.1$\,keV, with a large
equivalent width of EW\,$= 1.0_{-0.5}^{+0.9}$\,keV. Based on the simulations of
the {\it Chandra}'s Point Spread Function (PSF), we have concluded that, while
the soft thermal component is extended on the scale of the galaxy host, the
hard X-ray emission within the narrow photon energy range 6.0--7.0\,keV
originates within the unresolved core (effectively the central kpc radius). The
line is therefore indicative of the X-ray reflection from a cold neutral gas in
the central regions of NGC\,3894. We discuss the implications of our findings
in the context of the X-ray Baldwin effect. NGC\,3894 is the first young radio
galaxy detected in HE $\gamma$-rays with the iron K$\alpha$ line.
|
Particle production from secondary proton-proton collisions, commonly
referred to as pile-up, impairs the sensitivity of both new-physics searches
and precision measurements at LHC experiments. We propose a novel algorithm,
PUMA,
for identifying pile-up objects with the help of deep neural networks based on
sparse transformers. These attention mechanisms were developed for natural
language processing but have become popular in other applications. In a
realistic detector simulation, our method outperforms classical benchmark
algorithms for pile-up mitigation in key observables. It provides a perspective
for mitigating the effects of pile-up in the high luminosity era of the LHC,
where up to 200 proton-proton collisions are expected to occur simultaneously.
|
We formulate a procedure to obtain a gauge-invariant tunneling rate at zero
temperature using the recently developed tunneling potential approach. This
procedure relies on a consistent power counting in gauge coupling and a
derivative expansion. The tunneling potential approach, while numerically more
efficient than the standard bounce solution method, inherits the
gauge-dependence of the latter when naively implemented. Using the Abelian
Higgs model, we show how to obtain a tunneling rate whose residual
gauge-dependence arises solely from the polynomial approximations adopted in
the tunneling potential computation.
|
We consider the two-loop corrections to the $HW^+W^-$ vertex at order
$\alpha\alpha_s$. We construct a canonical basis for the two-loop integrals
using the Baikov representation and the intersection theory. By solving the
$\epsilon$-form differential equations, we obtain fully analytic expressions
for the master integrals in terms of multiple polylogarithms, which allow fast
and accurate numeric evaluation for arbitrary configurations of external
momenta. We apply our analytic results to the decay process $H \to \nu_e e W$,
and study both the integrated and differential decay rates. Our results can
also be applied to the Higgs production process via $W$ boson fusion.
|
There have been considerable research efforts devoted to quantum simulations
of the Fermi-Hubbard model with ultracold atoms loaded in optical lattices. In
such
experiments, the antiferromagnetically ordered quantum state has been achieved
at half filling in recent years. The atomic lattice away from half filling is
expected to host d-wave superconductivity, but its low temperature phases have
not been reached. In a recent work, we proposed an approach of incommensurate
quantum adiabatic doping, using quantum adiabatic evolution of an
incommensurate lattice for preparation of the highly correlated many-body
ground state of the doped Fermi-Hubbard model starting from a unit-filling band
insulator. Its feasibility has been demonstrated with numerical simulations of
the adiabatic preparation for certain incommensurate particle-doping fractions,
where the major problem to circumvent is the atomic localization in the
incommensurate lattice. Here we carry out a systematic study of the quantum
adiabatic doping for a wide range of doping fractions from particle-doping to
hole-doping, including both commensurate and incommensurate cases. We find that
there is still a localization-like slowing-down problem at commensurate
fillings, and that it becomes less harmful in the hole-doped regime. With
interactions, the adiabatic preparation is found to be more efficient for that
interaction effect destabilizes localization. For both free and interacting
cases, we find the adiabatic doping has better performance in the hole-doped
regime than the particle-doped regime. We also study adiabatic doping starting
from the half-filling Mott insulator, which is found to be more efficient for
certain filling fractions.
|
We present results from seven cosmological simulations that have been
extended beyond the present era as far as redshift $z=-0.995$ or
$t\approx96\,{\rm Gyr}$, using the Enzo simulation code. We adopt the
calibrated star formation and feedback prescriptions from our previous work on
reproducing the Milky Way with Enzo with modifications to the simulation code,
chemistry and cooling library. We then consider the future behaviour of the
halo mass function (HMF), the equation of state (EOS) of the IGM, and the
cosmic star formation history (SFH). Consistent with previous work, we find a
freeze-out in the HMF at $z\approx-0.6$. The evolution of the EOS of the IGM
presents an interesting case study of the cosmological coincidence problem,
where there is a sharp decline in the IGM temperature immediately after $z=0$.
For the SFH, the simulations produce a peak and a subsequent decline into the
future. However, we do find a turnaround in the SFH after $z\approx-0.98$ in
some simulations, probably due to the limitations of the criteria used for star
formation. By integrating the SFH in time up to $z=-0.92$, the simulation with
the best spatial resolution predicts an asymptotic total stellar mass that is
very close to that obtained from extrapolating the fit of the observed SFR.
Lastly, we investigate the future evolution of the partition of baryons within
a Milky Way-sized galaxy, using both a zoom and a box simulation. Despite
vastly different resolutions, these simulations predict individual haloes
containing an equal fraction of baryons in stars and gas at the time of
freeze-out ($t\approx30\,{\rm Gyr}$).
|
We study algebraic curves that are envelopes of families of polygons
supported on the unit circle T. We address, in particular, a characterization
of such curves of minimal class and show that all realizations of these curves
are essentially equivalent and can be described in terms of orthogonal
polynomials on the unit circle (OPUC), also known as Szeg\H{o} polynomials. Our
results have connections to classical results from algebraic and projective
geometry, such as theorems of Poncelet, Darboux, and Kippenhahn; numerical
ranges of a class of matrices; and Blaschke products and disk functions.
This paper contains new results, some old results presented from a different
perspective or with a different proof, and a formal foundation for our
analysis. We give a rigorous definition of the Poncelet property, of curves
tangent to a family of polygons, and of polygons associated with Poncelet
curves. As a result, we are able to clarify some misconceptions that appear in
the literature and present counterexamples to some existing assertions along
with necessary modifications to their hypotheses to validate them. For
instance, we show that curves inscribed in some families of polygons supported
on T are not necessarily convex, can have cusps, and can even intersect the
unit circle.
Two ideas play a unifying role in this work. The first is the utility of OPUC
and the second is the advantage of working with tangent coordinates. This
latter idea has been previously exploited in the works of B. Mirman, whose
contribution we have tried to put in perspective.
|
We compute the two-loop four-point form factor of a length-3 half-BPS
operator in planar N=4 SYM, which belongs to the class of two-loop five-point
scattering observables with one off-shell color-singlet leg. A new
bootstrapping strategy is developed to obtain this result by starting with an
ansatz expanded in terms of master integrals and then solving the master
coefficients via various physical constraints. We find that consistency
conditions of infrared divergences and collinear limits, together with the
cancellation of spurious poles, can fix a significant part of the ansatz. The
remaining degrees of freedom can be fixed by one simple type of two-double
unitarity cut. Full analytic results in terms of both symbol and Goncharov
polylogarithms are provided.
|
We reconsider the problem of calculating the vacuum free energy (density) of
QCD and the shift of the quark condensates in the presence of a uniform
background magnetic field using two- and three-flavor chiral perturbation theory
($\chi$PT). Using the free energy, we calculate the degenerate, light quark
condensates in the two-flavor case and the up, down and strange quark
condensates in the three-flavor case. We also use the vacuum free energy to
calculate the (renormalized) magnetization of the QCD vacuum, which shows that
it is paramagnetic. We find that the three-flavor light-quark condensates and
(renormalized) magnetization are improvements on the two-flavor results. We
also find that the average light quark condensate is in agreement with the
lattice up to $eB=0.2 {\rm\ GeV^{2}}$, and the (renormalized) magnetization is
in agreement up to $eB=0.3 {\rm\ GeV^{2}}$, while three-flavor $\chi$PT, which
gives a non-zero shift in the difference between the light quark condensates
unlike two-flavor $\chi$PT, underestimates the difference compared to lattice
QCD.
|
We propose that a broad class of excited-state quantum phase transitions
(ESQPTs) gives rise to two different excited-state quantum phases. These phases
are identified by means of an operator, $\hat{\mathcal{C}}$, which is a
constant of motion only in one of them. Hence, the ESQPT critical energy splits
the spectrum into one phase where the equilibrium expectation values of
physical observables crucially depend on this constant of motion, and another
phase where the energy is the only relevant thermodynamic magnitude. The
trademark feature of this operator is that it has two different eigenvalues,
$\pm1$, and therefore it acts as a discrete symmetry in the first of these two
phases. This scenario is observed in systems with and without an additional
discrete symmetry; in the first case, $\hat{\mathcal{C}}$ explains the change
from degenerate doublets to non-degenerate eigenlevels upon crossing the
critical line. We present stringent numerical evidence in the Rabi and Dicke
models, suggesting that this result is exact in the thermodynamic limit, with
finite-size corrections that decrease as a power law.
|
Several years ago, in the context of the physics of hysteresis in magnetic
materials, a simple stochastic model was introduced: the ABBM model.
Later, the ABBM model was advocated as a paradigm for a broad class of
diverse phenomena, baptised "crackling noise phenomena". The model reproduces
many statistical features of such intermittent signals, such as the statistics
of burst (or avalanche) durations and sizes, with their power-law exponents that
would characterise the dynamics as critical. Beyond such "critical exponents",
the measure of the average shape of the avalanche has also been proposed. Here,
the exact calculation of average and fluctuations of the avalanche shape for
the ABBM model is presented, showing that its normalised shape is independent
from the external drive. Moreover, the average and fluctuations of the
multi-avalanche shape, that is, of a sequence of avalanches of fixed total
duration, are also computed. Surprisingly, the two quantities (avalanche and
multi-avalanche normalised shapes) are identical. This result is obtained from
the exact solution of the ABBM model, derived by leveraging the equivalence with
the Cox-Ingersoll-Ross (CIR) process through an exact "time change". A
presentation of this and other known exact results is provided: notably the
correspondence of the ABBM/CIR model with the generalised Bessel process,
describing the dynamics of the modulus of the multidimensional
Ornstein-Uhlenbeck process. As a consequence, the correspondence between the
excursion (avalanche) and bridge (multi-avalanche) shape distributions turns
out to apply to all the aforementioned stochastic processes. In simple words:
considering the distance from the origin of such diffusive particles, the
(normalised) average shape (and fluctuations) of its trajectory until a return
in a time T is the same, whether it has returned before T or not.
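The CIR equivalence invoked above can be illustrated numerically. Below is a minimal sketch, not the paper's exact calculation: a full-truncation Euler-Maruyama simulation of a CIR-type process, with arbitrary illustrative parameter values.

```python
import math
import random

def simulate_cir(x0, kappa, theta, sigma, dt, n_steps, rng):
    """Full-truncation Euler-Maruyama for the CIR SDE
    dX = kappa*(theta - X) dt + sigma*sqrt(max(X, 0)) dW."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        xp = max(x, 0.0)                       # truncate to keep sqrt real
        dw = rng.gauss(0.0, math.sqrt(dt))     # Brownian increment
        x = x + kappa * (theta - xp) * dt + sigma * math.sqrt(xp) * dw
        path.append(x)
    return path

rng = random.Random(0)
path = simulate_cir(x0=1.0, kappa=2.0, theta=1.0, sigma=0.5,
                    dt=1e-3, n_steps=20000, rng=rng)
# The long-run mean of the CIR process is theta, so the time average
# over the later part of the path should be close to 1.0 here.
avg = sum(path[5000:]) / len(path[5000:])
```

With these parameters the Feller condition ($2\kappa\theta > \sigma^2$) holds, so the exact process stays strictly positive; the truncation only guards against discretization overshoot.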
|
In this paper, we propose a fast second-order approximation to the
variable-order (VO) Caputo fractional derivative, which is developed based on
$L2$-$1_\sigma$ formula and the exponential-sum-approximation technique. The
fast evaluation method can achieve the second-order accuracy and further reduce
the computational cost and the acting memory for the VO Caputo fractional
derivative. This fast algorithm is applied to construct a relevant fast
temporal second-order and spatial fourth-order scheme ($FL2$-$1_{\sigma}$
scheme) for the multi-dimensional VO time-fractional sub-diffusion equations.
Theoretically, the $FL2$-$1_{\sigma}$ scheme is proved to satisfy properties of
the coefficients similar to those of the well-studied $L2$-$1_\sigma$
scheme. Therefore, the $FL2$-$1_{\sigma}$ scheme is rigorously proved to be
unconditionally stable and convergent. A sharp decrease in the computational
cost and the acting memory is shown in the numerical examples to demonstrate
the efficiency of the proposed method.
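The exponential-sum-approximation idea can be sketched on its own: the weakly singular kernel $t^{-\alpha}$ in the Caputo derivative is replaced by a short sum of decaying exponentials, which enables fast recursive evaluation. The construction below (trapezoidal/sinc discretization of an integral representation of $t^{-\alpha}$) is one standard way to build such a sum; it is an illustrative sketch, not the paper's specific algorithm.

```python
import math

def exp_sum_kernel(alpha, h=0.25, L=20.0):
    """Approximate t**(-alpha) by sum_j w_j * exp(-s_j * t), using
    trapezoidal quadrature of the representation
    t**(-alpha) = 1/Gamma(alpha) * int_{-inf}^{inf} e^{alpha*u} e^{-t e^u} du
    on the grid u_j = -L + j*h."""
    n = int(2 * L / h)
    nodes, weights = [], []
    for j in range(n + 1):
        u = -L + j * h
        nodes.append(math.exp(u))                          # decay rates s_j
        weights.append(h * math.exp(alpha * u) / math.gamma(alpha))
    return nodes, weights

alpha = 0.5
s, w = exp_sum_kernel(alpha)
errs = []
for t in [0.01, 0.1, 0.5, 1.0]:
    approx = sum(wi * math.exp(-si * t) for si, wi in zip(s, w))
    errs.append(abs(approx - t ** (-alpha)) / t ** (-alpha))
max_rel_err = max(errs)
```

A sum of a few hundred exponentials reproduces the kernel to high relative accuracy over the interval of interest, which is what lets the history term be updated recursively in $O(1)$ work per step.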
|
Direct Delta Mush is a novel skinning deformation technique introduced by Le
and Lewis (2019). It generalizes the iterative Delta Mush algorithm of
Mancewicz et al. (2014), providing a direct solution with improved efficiency
and control. Compared to Linear Blend Skinning, Direct Delta Mush offers better
quality of deformations and ease of authoring at comparable performance.
However, Direct Delta Mush does not handle non-rigid joint transformations
correctly, which limits its applicability in most production environments. This
paper presents an extension to Direct Delta Mush that integrates the non-rigid
part of joint transformations into the algorithm. In addition, the paper also
describes practical considerations for computing the orthogonal component of
the transformation and stability issues observed during the implementation and
testing.
|
Geodesic orbit spaces (or g.o. spaces) are defined as those homogeneous
Riemannian spaces $(M=G/H,g)$ whose geodesics are orbits of one-parameter
subgroups of $G$. The corresponding metric $g$ is called a geodesic orbit
metric. We study the geodesic orbit spaces of the form $(\Sp(n)/\Sp(n_1)\times
\cdots \times \Sp(n_s), g)$, with $0<n_1+\cdots +n_s\leq n$. Such spaces
include spheres, quaternionic Stiefel manifolds, Grassmann manifolds and
quaternionic flag manifolds. The present work is a contribution to the study of
g.o. spaces $(G/H,g)$ with $H$ semisimple.
|
Assessing population-level effects of vaccines and other infectious disease
prevention measures is important to the field of public health. In infectious
disease studies, one person's treatment may affect another individual's
outcome, i.e., there may be interference between units. For example, use of bed
nets to prevent malaria by one individual may have an indirect or spillover
effect to other individuals living in close proximity. In some settings,
individuals may form groups or clusters where interference only occurs within
groups, i.e., there is partial interference. Inverse probability weighted
estimators have previously been developed for observational studies with
partial interference. Unfortunately, these estimators are not well suited for
studies with large clusters. Therefore, in this paper, the parametric g-formula
is extended to allow for partial interference. G-formula estimators are
proposed of overall effects, spillover effects when treated, and spillover
effects when untreated. The proposed estimators can accommodate large clusters
and do not suffer from the g-null paradox that may occur in the absence of
interference. The large sample properties of the proposed estimators are
derived, and simulation studies are presented demonstrating the finite-sample
performance of the proposed estimators. The Demographic and Health Survey from
the Democratic Republic of the Congo is then analyzed using the proposed
g-formula estimators to assess the overall and spillover effects of bed net use
on malaria.
|
We present graph-based translation models which translate source graphs into
target strings. Source graphs are constructed from dependency trees with extra
links so that non-syntactic phrases are connected. Inspired by phrase-based
models, we first introduce a translation model which segments a graph into a
sequence of disjoint subgraphs and generates a translation by combining
subgraph translations left-to-right using beam search. However, similar to
phrase-based models, this model is weak at phrase reordering. Therefore, we
further introduce a model based on a synchronous node replacement grammar which
learns recursive translation rules. We provide two implementations of the model
with different restrictions so that source graphs can be parsed efficiently.
Experiments on Chinese--English and German--English show that our graph-based
models are significantly better than corresponding sequence- and tree-based
baselines.
|
Most eCommerce applications, such as web-shops, have millions of products. In
this context, the identification of similar products is a common sub-task,
which can be utilized in the implementation of recommendation systems, product
search engines, and internal supply logistics. By providing this data set, we
aim to boost the evaluation of machine learning methods that predict the
category of retail products from tuples of images and descriptions.
|
In this paper, we discuss pre-transformed RM-Polar codes and cyclic
redundancy check (CRC) concatenated pre-transformed RM-Polar codes. The
simulation results show that the pre-transformed RM-Polar (256, 128+9) code
concatenated with 9-bit CRC can achieve a frame error rate (FER) of 0.001 at
Eb/No=1.95dB under the list decoding, which is about 0.05dB from the RCU bound.
The pre-transformed RM-Polar (512, 256+9) concatenated with 9-bit CRC can
achieve a FER of 0.001 at Eb/No=1.6dB under the list decoding, which is 0.15dB
from the RCU bound.
|
In the literature, extensive research has been devoted to the synthesis of
supervisory controllers. Such synthesized supervisors can be distributed for
implementation on multiple physical controllers. This paper discusses a method
for distributing a synthesized supervisory controller. In this method,
dependency structure matrices are used to distribute a system; the supervisor
is then distributed accordingly, using existing localization theory. The
existence of communication delays between supervisory controllers is
unavoidable in a distributed application. The influence of these delays on the
behavior of a supervisor is therefore studied using delay robustness theory.
This paper introduces the use of mutex algorithms to make the distributed
supervisor delay-robust. A case study is used to demonstrate the method, and
hardware-in-the-loop testing is used to validate the resulting distributed
supervisor.
|
For the fourth-order teleparallel $f\left(T,B\right) $ theory of gravity, we
investigate the cosmological evolution for the universe in the case of a
spatially flat Friedmann--Lema\^{\i}tre--Robertson--Walker background space. We
focus on the case for which $f\left(T,B\right) $ is separable, that is,
$f\left(T,B\right) _{,TB}=0$ and $f\left(T,B\right) $ is a nonlinear function
of the scalars $T$ and $B$. For this fourth-order theory we use a Lagrange
multiplier to introduce a scalar field that accounts for the
higher-order derivatives. In order to perform the analysis of the dynamics we
use dimensionless variables which allow the Hubble function to change sign. The
stationary points of the dynamical system are investigated both in the finite
and infinite regimes. The physical properties of the asymptotic solutions and
their stability characteristics are discussed.
|
In this paper, we develop a zonal-based flexible bus service (ZBFBS) by
considering stochastic variations in both the spatial (origin-destination, or
OD) distribution and the volume of passenger demand. Service requests are
grouped by zonal OD pairs
and number of passengers per request, and aggregated into demand categories
which follow certain probability distributions. A two-stage stochastic program
is formulated to minimize the expected operating cost of ZBFBS, in which the
zonal visit sequences of vehicles are determined in Stage-1, whereas in
Stage-2, service requests are assigned to either regular routes determined in
Stage-1 or ad hoc services that incur additional costs. Demand volume
reliability and detour time reliability are introduced to ensure quality of the
services and separate the problem into two phases for efficient solutions. In
phase-1, given the reliability requirements, we minimize the cost of operating
the regular services. In phase-2, we optimize the passenger assignment to
vehicles to minimize the expected ad hoc service cost. The reliabilities are
then optimized by a gradient-based approach to minimize the sum of the regular
service operating cost and expected ad hoc service cost. We conduct numerical
studies on vehicle capacity, detour time limit and demand volume to demonstrate
the potential of ZBFBS, and apply the model to Chengdu, China, based on real
data to illustrate its applicability.
|
Here we revisit an initial orbit determination method introduced by O. F.
Mossotti employing four geocentric sky-plane observations and a linear equation
to compute the angular momentum of the observed body. We then extend the method
to topocentric observations, yielding a quadratic equation for the angular
momentum. The performance of the two versions is compared through numerical
tests with synthetic asteroid data using different time intervals between
consecutive observations and different astrometric errors. We also show a
comparison test with Gauss's method using simulated observations with the
expected cadence of the VRO-LSST telescope.
|
We propose the attraction Indian buffet distribution (AIBD), a distribution
for binary feature matrices influenced by pairwise similarity information.
Binary feature matrices are used in Bayesian models to uncover latent variables
(i.e., features) that explain observed data. The Indian buffet process (IBP) is
a popular exchangeable prior distribution for latent feature matrices. In the
presence of additional information, however, the exchangeability assumption is
not reasonable or desirable. The AIBD can incorporate pairwise similarity
information, yet it preserves many properties of the IBP, including the
distribution of the total number of features. Thus, much of the interpretation
and intuition that one has for the IBP directly carries over to the AIBD. A
temperature parameter controls the degree to which the similarity information
affects feature-sharing between observations. Unlike other nonexchangeable
distributions for feature allocations, the probability mass function of the
AIBD has a tractable normalizing constant, making posterior inference on
hyperparameters straightforward using standard MCMC methods. A novel posterior
sampling algorithm is proposed for the IBP and the AIBD. We demonstrate the
feasibility of the AIBD as a prior distribution in feature allocation models
and compare the performance of competing methods in simulations and an
application.
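For readers unfamiliar with the baseline, the following is a minimal sketch of the standard IBP generative process (the exchangeable prior the AIBD generalizes), not the AIBD itself or the paper's new sampler; parameter values are arbitrary illustrations.

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method for Poisson(lam) draws."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

def sample_ibp(n, alpha, rng):
    """Indian buffet process: customer i takes existing dish k with
    probability m_k / i, then samples Poisson(alpha / i) new dishes."""
    counts = []        # m_k: how many customers took dish k so far
    rows = []
    for i in range(1, n + 1):
        row = [1 if rng.random() < m / i else 0 for m in counts]
        for k, took in enumerate(row):
            counts[k] += took
        new = sample_poisson(alpha / i, rng)
        row.extend([1] * new)
        counts.extend([1] * new)
        rows.append(row)
    k_total = len(counts)
    return [r + [0] * (k_total - len(r)) for r in rows]  # pad to full width

rng = random.Random(0)
alpha, n = 2.0, 10
draws = [sample_ibp(n, alpha, rng) for _ in range(2000)]
# The expected total number of features is alpha * H_n (harmonic number).
mean_features = sum(len(z[0]) for z in draws) / len(draws)
```

The property preserved by the AIBD, as the abstract notes, includes this distribution of the total number of features, so the same sanity check applies there.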
|
The words we use to talk about the current epidemiological crisis on social
media can inform us on how we are conceptualizing the pandemic and how we are
reacting to its development. This paper provides an extensive explorative
analysis of how the discourse about Covid-19 reported on Twitter changes
through time, focusing on the first wave of this pandemic. Based on an
extensive corpus of tweets (produced between 20th March and 1st July 2020),
we first show how the topics associated with the development of the pandemic
changed through time, using topic modeling. Second, we show how the sentiment
polarity of the language used in the tweets changed from a relatively positive
valence during the first lockdown, toward a more negative valence in
correspondence with the reopening. Third, we show how the average subjectivity
of the tweets increased linearly and, fourth, how the popular and frequently
used figurative frame of WAR changed when real riots and fights entered the
discourse.
|
We investigate the distribution of partisanship in a cross-section of ten
diverse States to elucidate how votes translate into seats won and other
metrics. Markov chain simulations taking into account partisanship distribution
agree surprisingly well with a simple model covering only equal voting
population-weighted distributions of precinct results containing no spatial
information. We find asymmetries where Democrats win fewer precincts than
Republicans but do so with large majorities. This skew accounts for persistent
Republican control of State Legislatures and Congressional seats even in some
states with statewide vote majorities for Democrats.
Despite overall results showing Republican advantages in many states, based on
mean results from simulations covering many random scenarios, the simulations
yield a wide range in metrics. This suggests that bias can be minimized more
effectively by selecting districting plans with low efficiency-gap values than
by selecting plans with values near the means for the ensemble of random
simulations.
We examine constraints on county splits to achieve higher compactness and
investigate policies requiring cohesiveness for communities of interest so as
to screen out the most obvious gerrymanders. Minimizing county splits does not
necessarily reduce partisan bias, except for Pennsylvania, where limiting
county splits appears to reduce bias.
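As a concrete illustration of one metric discussed above, the efficiency gap can be computed directly from per-district two-party vote counts. This is the generic textbook definition in terms of wasted votes, not the study's specific pipeline; the example districts are invented.

```python
def efficiency_gap(district_votes):
    """Two-party efficiency gap: (wasted_D - wasted_R) / total votes.
    The losing party wastes all of its votes; the winner wastes votes
    beyond the bare majority (total // 2 + 1) needed to carry the district.
    Ties are counted here as wins for the second party."""
    wasted_d = wasted_r = total = 0
    for d, r in district_votes:
        t = d + r
        total += t
        need = t // 2 + 1
        if d > r:
            wasted_d += d - need
            wasted_r += r
        else:
            wasted_r += r - need
            wasted_d += d
    return (wasted_d - wasted_r) / total

# One packed Democratic district plus two narrow losses skews the gap:
eg = efficiency_gap([(70, 30), (40, 60), (40, 60)])  # 51/300 = 0.17
```

A positive value here indicates more wasted Democratic votes, i.e., a plan tilted toward Republicans, matching the asymmetry the abstract describes.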
|
In recent years, studying and predicting alternative mobility (e.g., sharing
services) patterns in urban environments has become increasingly important as
accurate and timely information on current and future vehicle flows can
successfully increase the quality and availability of transportation services.
This need is aggravated during the current pandemic crisis, which pushes
policymakers and private citizens to seek social-distancing compliant urban
mobility services, such as electric bikes and scooter sharing offerings.
However, predicting the number of incoming and outgoing vehicles for different
city areas is challenging due to the nonlinear spatial and temporal
dependencies typical of urban mobility patterns. In this work, we propose
STREED-Net, a novel deep learning network with a multi-attention (spatial and
temporal) mechanism that effectively captures and exploits complex spatial and
temporal patterns in mobility data. The results of a thorough experimental
analysis using real-life data are reported, indicating that the proposed model
improves the state-of-the-art for this task.
|
Detection and classification of objects in overhead images are two important
and challenging problems in computer vision. Among various research areas in
this domain, the task of fine-grained classification of objects in overhead
images has become ubiquitous in diverse real-world applications, due to recent
advances in high-resolution satellite and airborne imaging systems. The small
inter-class variations and the large intra-class variations caused by the
fine-grained nature make it a challenging task, especially in low-resource
cases. In this paper, we introduce COFGA, a new open dataset for the
advancement of fine-grained classification research. The 2,104 images in the
dataset are collected from an airborne imaging system at 5-15 cm ground
sampling distance, providing higher spatial resolution than most public
overhead imagery datasets. The 14,256 annotated objects in the dataset were
classified into 2 classes, 15 subclasses, 14 unique features, and 8 perceived
colors (37 distinct labels in total), making it more suitable for the task of
fine-grained classification than any other publicly available overhead imagery
dataset. We compare COFGA to other overhead imagery datasets and then describe
some distinctive fine-grained classification approaches that were explored
during an open data-science competition we have conducted for this task.
|
A new instrument is required to accommodate the need for increased
portability and accuracy in laser power measurement above 100 W. Reflection and
absorption of laser light provide a measurable force from photon momentum
exchange that is directly proportional to laser power, which can be measured
with an electrostatic balance traceable to the SI. We aim for a relative
uncertainty of $10^{-3}$ with coverage factor $k=2$. For this purpose, we have
designed a monolithic parallelogram 4-bar linkage incorporating elastic
circular notch flexure hinges. The design is optimized to address the main
factors driving force measurement uncertainty from the balance mechanism:
corner loading errors, balance stiffness, stress in the flexure hinges,
sensitivity to vibration, and sensitivity to thermal gradients. Parasitic
rotations in the free end of the 4-bar linkage during arcuate motion are
constrained by machining tolerances. An analytical model shows this affects the
force measurement less than 0.01 percent. Incorporating an inverted pendulum
reduces the stiffness of the system without unduly increasing tilt sensitivity.
Finite element modeling of the flexures is used to determine the hinge
orientation that minimizes stress which is therefore expected to minimize
hysteresis. Thermal effects are mitigated using an external enclosure to
minimize temperature gradients, although a quantitative analysis of this effect
is not carried out. These analyses show the optimized mechanism is expected to
contribute less than $10^{-3}$ relative uncertainty in the final laser power
measurement.
|
We give a simple and short proof of the fact that the board game of Y cannot
end in a draw. Our proof, based on the analogous result for the game of Hex
(the so-called 'Hex Theorem'), is purely topological and does not depend on the
shape of the board. We also include a simplified version of Gale's proof of Hex
Theorem.
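The no-draw property of Hex, on which the argument rests, can be checked computationally on small boards: every complete two-coloring yields exactly one winner. A minimal sketch of such a brute-force check (on random boards with standard hex adjacency on a rhombus; this is an illustration, not the topological proof):

```python
import random
from collections import deque

def winner(board):
    """board: n x n grid of 'B'/'W'. Black aims to connect the top row to
    the bottom row; White the left column to the right column."""
    n = len(board)
    # Hexagonal adjacency on a rhombus-shaped board
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

    def connected(player, starts, reached_goal):
        seen = {s for s in starts if board[s[0]][s[1]] == player}
        queue = deque(seen)
        while queue:
            r, c = queue.popleft()
            if reached_goal((r, c)):
                return True
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if (0 <= rr < n and 0 <= cc < n
                        and (rr, cc) not in seen and board[rr][cc] == player):
                    seen.add((rr, cc))
                    queue.append((rr, cc))
        return False

    black = connected('B', [(0, c) for c in range(n)], lambda p: p[0] == n - 1)
    white = connected('W', [(r, 0) for r in range(n)], lambda p: p[1] == n - 1)
    return black, white

rng = random.Random(1)
results = [winner([[rng.choice('BW') for _ in range(7)] for _ in range(7)])
           for _ in range(200)]
```

By the Hex Theorem, every filled board has exactly one winner, so each pair in `results` should contain one `True` and one `False`.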
|
We present a Bayesian treatment for deep regression using an
Errors-in-Variables model which accounts for the uncertainty associated with
the input to the employed neural network. It is shown how the treatment can be
combined with already existing approaches for uncertainty quantification that
are based on variational inference. Our approach yields a decomposition of the
predictive uncertainty into an aleatoric and epistemic part that is more
complete and, in many cases, more consistent from a statistical perspective. We
illustrate and discuss the approach along various toy and real world examples.
|
To effectively control large-scale distributed systems online, model
predictive control (MPC) has to swiftly solve the underlying high-dimensional
optimization. There are multiple techniques applied to accelerate the solving
process in the literature, mainly attributed to software-based algorithmic
advancements and hardware-assisted computation enhancements. However, those
methods focus on arithmetic accelerations and overlook the benefits of the
underlying system's structure. In particular, the existing decoupled
software-hardware algorithm design that naively parallelizes the arithmetic
operations by the hardware does not tackle the hardware overheads such as
CPU-GPU and thread-to-thread communications in a principled manner. Also, the
advantages of parallelizable subproblem decomposition in distributed MPC are
not well recognized and exploited. As a result, we have not reached the full
potential of hardware acceleration for MPC. In this paper, we explore those
opportunities by leveraging GPU to parallelize the distributed and localized
MPC (DLMPC) algorithm. We exploit the locality constraints embedded in the
DLMPC formulation to reduce the hardware-intrinsic communication overheads. Our
parallel implementation achieves up to 50x faster runtime than its CPU
counterparts under various parameters. Furthermore, we find that the
locality-aware GPU parallelization could halve the optimization runtime
compared to the naive acceleration. Overall, our results demonstrate the
performance gains brought by software-hardware co-design with the information
exchange structure in mind.
|
Close-in co-orbital planets (in a 1:1 mean motion resonance) can experience
strong tidal interactions with the central star. Here, we develop an analytical
model adapted to the study of the tidal evolution of those systems. We use a
Hamiltonian version of the constant time-lag tidal model, which extends the
Hamiltonian formalism developed for the point-mass case. We show that
co-orbital systems undergoing tidal dissipation either favour the Lagrange or
the anti-Lagrange configurations, depending on the system parameters. However,
for all range of parameters and initial conditions, both configurations become
unstable, although the timescale for the destruction of the system can be
larger than the lifetime of the star. We provide an easy-to-use criterion to
determine if an already known close-in exoplanet may have an undetected
co-orbital companion.
|
The frustrated magnet $\alpha$-RuCl$_3$ constitutes a fascinating quantum
material platform that harbors the intriguing Kitaev physics. However, a
consensus on its intricate spin interactions and field-induced quantum phases
has not been reached yet. Here we exploit multiple state-of-the-art many-body
methods and determine the microscopic spin model that quantitatively explains
major observations in $\alpha$-RuCl$_3$, including the zigzag order,
double-peak specific heat, magnetic anisotropy, and the characteristic M-star
dynamical spin structure, etc. According to our model simulations, the in-plane
field drives the system into the polarized phase at about 7 T and a thermal
fractionalization occurs at finite temperature, reconciling observations in
different experiments. Under out-of-plane fields, the zigzag order is
suppressed at about 35 T; above this field, and below a polarization field at
the 100 T level, a field-induced quantum spin liquid emerges.
algebraic low-temperature specific heat unveil the nature of a gapless spin
liquid, which can be explored in high-field measurements on $\alpha$-RuCl$_3$.
|
Justification of the tight-binding model from the Schroedinger formalism for
various topologies of position-based semiconductor qubits is presented in this
work. The simplistic tight-binding model allows the description of
single-electron devices at large integration scales. This is because the
tight-binding model omits the integro-differential terms that arise from
electron-electron interaction in the Schroedinger model. Two approaches to
deriving the tight-binding model from the Schroedinger equation are given. The
first approach uses Green functions obtained from the Schroedinger equation;
the second applies a Taylor expansion to the Schroedinger equation. The
obtained results can be extended to the case of many Wannier qubits with more
than one electron and can be applied to two- and three-dimensional models.
Furthermore, various correlation functions are proposed in the Schroedinger
formalism that can account for static and time-dependent electric and magnetic
fields polarizing a given Wannier qubit system. One of the central results of
the present work is the emergence of dissipation processes during smooth
bending of semiconductor nanowires, in both the classical and the quantum
picture. The presented results provide the basis for a physical description of
an electrostatic Q-Swap gate of any topology using open-loop nanowires. We
observe strong localization of the wavepacket due to nanowire bending.
|
This is a short technical report introducing the solution of the Team
TCParser for Short-video Face Parsing Track of The 3rd Person in Context (PIC)
Workshop and Challenge at CVPR 2021. In this paper, we introduce a strong
backbone which is cross-window based Shuffle Transformer for presenting
accurate face parsing representation. To further obtain the finer segmentation
results, especially on the edges, we introduce a Feature Alignment Aggregation
(FAA) module. It can effectively relieve the feature misalignment issue caused
by multi-resolution feature aggregation. Benefiting from the stronger backbone
and better feature aggregation, the proposed method achieves a score of
86.9519% in the Short-video Face Parsing track of the 3rd Person in Context
(PIC) Workshop and Challenge, ranking first place.
|
We present a learning-based method for estimating 4D reflectance field of a
person given video footage illuminated under a flat-lit environment of the same
subject. For training data, we use one light at a time to illuminate the
subject and capture the reflectance field data in a variety of poses and
viewpoints. We estimate the lighting environment of the input video footage and
use the subject's reflectance field to create synthetic images of the subject
illuminated by the input lighting environment. We then train a deep
convolutional neural network to regress the reflectance field from the
synthetic images. We also use a differentiable renderer to provide feedback for
the network by matching the relit images with the input video frames. This
semi-supervised training scheme allows the neural network to handle unseen
poses in the dataset as well as compensate for the lighting estimation error.
We evaluate our method on video footage of real Holocaust survivors and
show that our method outperforms the state-of-the-art methods in both realism
and speed.
|
Particle acceleration and heating at mildly relativistic magnetized shocks in
electron-ion plasma are investigated with unprecedentedly high-resolution
two-dimensional particle-in-cell simulations that include ion-scale shock
rippling. Electrons are super-adiabatically heated at the shock, and most of
the energy transfer from protons to electrons takes place at or downstream of
the shock. We are the first to demonstrate that shock rippling is crucial for
the energization of electrons at the shock. They remain well below
equipartition with the protons. The downstream electron spectra are
approximately thermal with a limited supra-thermal power-law component. Our
results are discussed in the context of wakefield acceleration and the
modelling of electromagnetic radiation from blazar cores.
|
This paper introduces a general framework for solving constrained convex
quaternion optimization problems in the quaternion domain. To soundly derive
these new results, the proposed approach leverages the recently developed
generalized $\mathbb{HR}$-calculus together with the equivalence between the
original quaternion optimization problem and its augmented real-domain
counterpart. This new framework simultaneously provides rigorous theoretical
foundations as well as elegant, compact quaternion-domain formulations for
optimization problems in quaternion variables. Our contributions are threefold:
(i) we introduce the general form for convex constrained optimization problems
in quaternion variables, (ii) we extend fundamental notions of convex
optimization to the quaternion case, namely Lagrangian duality and optimality
conditions, (iii) we develop the quaternion alternating direction method of
multipliers (Q-ADMM) as a general purpose quaternion optimization algorithm.
The relevance of the proposed methodology is demonstrated by solving two
typical examples of constrained convex quaternion optimization problems arising
in signal processing. Our results open new avenues in the design, analysis and
efficient implementation of quaternion-domain optimization procedures.
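The Q-ADMM mentioned above follows the standard alternating direction method of multipliers template: alternate closed-form subproblem solves with a dual update on the splitting constraint. A minimal real-valued stand-in illustrating that update structure (not the paper's quaternion-domain algorithm, which replaces these steps with generalized $\mathbb{HR}$-calculus derivations) solves a non-negative least-squares problem by splitting $x = z$:

```python
import numpy as np

def admm_nonneg_ls(b, rho=1.0, iters=200):
    """Solve min ||x - b||^2 s.t. x >= 0 via ADMM with splitting x = z."""
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)  # scaled dual variable
    for _ in range(iters):
        x = (2 * b + rho * (z - u)) / (2 + rho)  # smooth subproblem (closed form)
        z = np.maximum(0.0, x + u)               # projection onto the constraint set
        u = u + x - z                            # dual ascent on the consensus gap
    return z

b = np.array([1.0, -2.0, 0.5])
sol = admm_nonneg_ls(b)  # converges to the projection max(b, 0)
```

The same three-step pattern carries over once the subproblems are posed and solved directly in quaternion variables.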
|
Given a database schema, Text-to-SQL aims to translate a natural language
question into the corresponding SQL query. In the cross-domain setting,
traditional semantic parsing models struggle to adapt to unseen database
schemas. To improve model generalization for rare and unseen
schemas, we propose a new architecture, ShadowGNN, which processes schemas at
abstract and semantic levels. By ignoring the names of semantic items in
databases, ShadowGNN exploits abstract schemas in a well-designed graph
projection neural network to obtain delexicalized representations of the
question and schema. Based on
the domain-independent representations, a relation-aware transformer is
utilized to further extract logical linking between question and schema.
Finally, a SQL decoder with context-free grammar is applied. On the challenging
Text-to-SQL benchmark Spider, empirical results show that ShadowGNN outperforms
state-of-the-art models. When the annotated data is extremely limited (only
10\% of the training set), ShadowGNN achieves an absolute performance gain of
over 5\%, which demonstrates its powerful generalization ability. Our
implementation will be
open-sourced at \url{https://github.com/WowCZ/shadowgnn}.
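The delexicalization idea above can be sketched as a simple name-to-type substitution: concrete schema item names are replaced by abstract type tokens before encoding, and the mapping is kept to re-lexicalize the decoded query. This is an illustrative toy, with a hypothetical token format, not the paper's graph projection network:

```python
def delexicalize(schema):
    """Map concrete schema item names to abstract type tokens.

    schema: list of (name, kind) pairs, kind in {"table", "column"}.
    Returns the abstract token sequence and the token -> name mapping
    needed to re-lexicalize the decoded SQL query.
    """
    mapping = {}
    abstract = []
    for i, (name, kind) in enumerate(schema):
        token = f"[{kind.upper()}_{i}]"  # hypothetical abstract token format
        mapping[token] = name
        abstract.append(token)
    return abstract, mapping

tokens, mapping = delexicalize([("singer", "table"), ("name", "column")])
```

Because the encoder only ever sees the abstract tokens, its representations are domain-independent by construction, which is what enables generalization to unseen schemas.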
|
We construct an infinite family of genus one open book decompositions
supporting Stein-fillable contact structures and show that their monodromies do
not admit positive factorisations. This extends a line of counterexamples in
higher genera and establishes that a correspondence between Stein fillings and
positive factorisations only exists for planar open book decompositions.
|
In heavy atoms and ions, nuclear structure effects are significantly enhanced
due to the overlap of the electron wave functions with the nucleus. This
overlap rapidly increases with the nuclear charge $Z$. We study the energy
level shifts induced by the electric dipole and electric quadrupole nuclear
polarization effects in atoms and ions with $Z \geq 20$. The electric dipole
polarization effect is enhanced by the nuclear giant dipole resonance. The
electric quadrupole polarization effect is enhanced because the electrons in a
heavy atom or ion move faster than the rotation of the deformed nucleus, thus
experiencing significant corrections to the conventional approximation in which
they `see' an averaged nuclear charge density. The electric nuclear
polarization effects are computed numerically for $1s$, $2s$, $2p_{1/2}$ and
high $ns$ electrons. The results are fitted with elementary functions of
nuclear parameters (nuclear charge, mass number, nuclear radius and
deformation). We construct an effective potential which models the energy level
shifts due to nuclear polarization. This effective potential, when added to the
nuclear Coulomb interaction, may be used to find energy level shifts in
multi-electron ions, atoms and molecules. The fitting functions and effective
potentials of the nuclear polarization effects are important for the studies of
isotope shifts and nonlinearity in the King plot which are now used to search
for new interactions and particles.
|
We present online algorithms for directed spanners and Steiner forests. These
problems fall under the unifying framework of online covering linear
programming formulations, developed by Buchbinder and Naor (MOR, 34, 2009),
based on primal-dual techniques. Our results include the following:
For the pairwise spanner problem, in which the pairs of vertices to be
spanned arrive online, we present an efficient randomized
$\tilde{O}(n^{4/5})$-competitive algorithm for graphs with general lengths,
where $n$ is the number of vertices. With uniform lengths, we give an efficient
randomized $\tilde{O}(n^{2/3+\epsilon})$-competitive algorithm, and an
efficient deterministic $\tilde{O}(k^{1/2+\epsilon})$-competitive algorithm,
where $k$ is the number of terminal pairs. These are the first online
algorithms for directed spanners. In the offline setting, the current best
approximation ratio with uniform lengths is $\tilde{O}(n^{3/5 + \epsilon})$,
due to Chlamtac, Dinitz, Kortsarz, and Laekhanukit (TALG 2020).
For the directed Steiner forest problem with uniform costs, in which the
pairs of vertices to be connected arrive online, we present an efficient
randomized $\tilde{O}(n^{2/3 + \epsilon})$-competitive algorithm. The
state-of-the-art online algorithm for general costs is due to Chakrabarty, Ene,
Krishnaswamy, and Panigrahi (SICOMP 2018) and is $\tilde{O}(k^{1/2 +
\epsilon})$-competitive. In the offline version, the current best approximation
ratio with uniform costs is $\tilde{O}(n^{26/45 + \epsilon})$, due to Abboud
and Bodwin (SODA 2018).
A small modification of the online covering framework by Buchbinder and Naor
implies a polynomial-time primal-dual approach with separation oracles, which
might a priori require exponentially many oracle calls. We convert the online
spanner
problem and the online Steiner forest problem into online covering problems and
round in a problem-specific fashion.
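The online covering framework referenced above maintains a fractional solution and, when a new covering constraint arrives unsatisfied, grows the relevant variables multiplicatively until it is met. A minimal Buchbinder–Naor-style sketch (the update rule and constants here are illustrative and make no claim about the competitive ratio, which in the actual framework depends on careful parameter choices):

```python
import numpy as np

def online_covering(costs, constraints):
    """Fractional online covering: min c.x s.t. rows a.x >= 1 arrive online.

    Assumes every arriving constraint has nonempty support.
    """
    x = np.zeros(len(costs), dtype=float)
    for a in constraints:
        support = a > 0
        d = int(support.sum())  # number of variables in this constraint
        while a @ x < 1.0:
            # Multiplicative increase, inversely weighted by cost, plus a
            # small additive term so variables at zero start growing.
            x[support] = (x[support] * (1 + a[support] / costs[support])
                          + a[support] / (costs[support] * d))
    return x

costs = np.array([1.0, 1.0, 1.0])
constraints = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
x = online_covering(costs, constraints)
```

Each variable grows geometrically inside the while-loop, so every constraint is satisfied after O(log) rounds; the problem-specific rounding mentioned in the abstract then converts the fractional solution into an actual spanner or Steiner forest.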
|
We report on the search for very-high-energy gamma-ray emission from the
regions around three nearby supersonic pulsars (PSR B0355+54, PSR J0357+3205
and PSR J1740+1000) that exhibit long X-ray tails. To date there is no clear
detection of TeV emission from any pulsar tail that is prominent in X-ray or
radio. We provide upper limits on the TeV flux and luminosity, and also
compare these limits with other pulsar wind nebulae detected in X-rays and the
tail emission model predictions. We find that at least one of the three tails
is likely to be detected in observations that are a factor of 2-3 more
sensitive. The analysis presented here also has implications for deriving the
properties of pulsar tails, for those pulsars whose tails could be detected in
TeV.
|
Spanning trees are widely used in networks for broadcasting, fault-tolerance,
and securely delivering messages. Hexagonal interconnection networks have a
number of real life applications. Examples are cellular networks, computer
graphics, and image processing. Eisenstein-Jacobi (EJ) networks are a
generalization of hexagonal mesh topology. They have a wide range of potential
applications, and have thus attracted researchers' attention in several areas,
including interconnection networks and coding theory. In this paper, we present
two spanning tree constructions for Eisenstein-Jacobi (EJ) networks. The first
constructs three edge-disjoint node-independent spanning trees, while the
second constructs six node-independent spanning trees that are not edge-disjoint.
Based on the constructed trees, we develop routing algorithms that can securely
deliver a message and tolerate a number of faults in point-to-point or in
broadcast communications. The proposed constructions are also applied to
higher-dimensional EJ networks.
|