Force chains, which are quasi-linear self-organised structures carrying large
stresses, are ubiquitous in jammed amorphous materials, such as granular
materials, foams, emulsions or even assemblies of cells. Predicting where they
will form upon mechanical deformation is crucial in order to describe the
physical properties of such materials, but remains an open question. Here we
demonstrate that graph neural networks (GNN) can accurately infer the location
of these force chains in frictionless materials from the local structure prior
to deformation, without receiving any information about the inter-particle
forces. Once the GNN is trained on a prototypical system, its prediction
accuracy proves robust to changes in packing fraction, mixture composition,
amount of deformation, and the form of the interaction potential. The GNN is also
scalable, as it can make predictions for systems much larger than those it was
trained on. Our results and methodology will be of interest for experimental
realizations of granular matter and jammed disordered systems, e.g. in cases
where direct visualisation of force chains is not possible or contact forces
cannot be measured.
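
For a concrete picture, here is a minimal sketch of an edge-focused message-passing network of the kind described above; the features, widths, and depth are our illustrative assumptions, not the paper's architecture. Because each layer only uses local neighborhoods, the same weights apply to packings of any size, which is what makes such models scalable.

```python
# Minimal message-passing GNN sketch for per-contact (edge) prediction on a
# particle packing graph. Illustrative assumptions throughout, not the
# paper's model.
import torch
import torch.nn as nn

class EdgeGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, e, src, dst):
        # Update each edge from its state and the two incident node states.
        e = self.edge_mlp(torch.cat([h[src], h[dst], e], dim=-1))
        # Sum incoming edge messages at each node, then update node states.
        agg = torch.zeros_like(h).index_add(0, dst, e)
        h = self.node_mlp(torch.cat([h, agg], dim=-1))
        return h, e

class ForceChainGNN(nn.Module):
    def __init__(self, node_in, edge_in, dim=64, depth=3):
        super().__init__()
        self.node_enc = nn.Linear(node_in, dim)
        self.edge_enc = nn.Linear(edge_in, dim)
        self.layers = nn.ModuleList(EdgeGNNLayer(dim) for _ in range(depth))
        self.head = nn.Linear(dim, 1)        # force-chain membership logit

    def forward(self, x, e, src, dst):       # x: node feats, e: edge feats
        h, e = self.node_enc(x), self.edge_enc(e)
        for layer in self.layers:
            h, e = layer(h, e, src, dst)
        return self.head(e).squeeze(-1)      # one logit per contact

# Toy usage: 5 particles (radius feature), 4 contacts (gap feature).
src, dst = torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 4])
model = ForceChainGNN(node_in=1, edge_in=1)
logits = model(torch.rand(5, 1), torch.rand(4, 1), src, dst)
```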
|
In a recent paper published in PNAS, the authors prove that locality and free
choice are equivalent resources which need to be relaxed in order to fully
reproduce some statistics in Bell experiments (while always maintaining
realism). We explain that their assumption of free choice is simply
counterfactual definiteness or noncontextuality. Therefore, the resource in
Bell experiments is contextuality, not the violation of locality and/or of free
choice. This is a far less mind-boggling conclusion, because experimenters'
freedom of choice is a prerequisite of science.
|
We present a solution to multi-robot distributed semantic mapping of novel
and unfamiliar environments. Most state-of-the-art semantic mapping systems are
based on supervised learning algorithms that cannot classify novel observations
online. While unsupervised learning algorithms can invent labels for novel
observations, approaches to detect when multiple robots have independently
developed their own labels for the same new class are prone to erroneous or
inconsistent matches. These issues worsen as the number of robots in the system
increases and prevent fusing the local maps produced by each robot into a
consistent global map, which is crucial for cooperative planning and joint
mission summarization. Our proposed solution overcomes these obstacles by
having each robot learn an unsupervised semantic scene model online and use a
multiway matching algorithm to identify consistent sets of matches between
learned semantic labels belonging to different robots. Compared to the state of
the art, the proposed solution produces 20-60% higher quality global maps that
do not degrade even as many more local maps are fused.
|
Fashion is intertwined with external cultural factors, but identifying these
links remains a manual process limited to only the most salient phenomena. We
propose a data-driven approach to identify specific cultural factors affecting
the clothes people wear. Using large-scale datasets of news articles and
vintage photos spanning a century, we present a multi-modal statistical model
to detect influence relationships between happenings in the world and people's
choice of clothing. Furthermore, on two image datasets we apply our model to
improve the concrete vision tasks of visual style forecasting and photo
timestamping. Our work is a first step towards a computational, scalable, and
easily refreshable approach to link culture to clothing.
|
We calculate the response of a lead-based detector, such as the Helium and
Lead Observatory (HALO) or its planned upgrade HALO-1kt, to a galactic
core-collapse supernova. We pay particular attention to the time dependence of
the reaction rates. All reaction rates decrease as the neutrino luminosity
exponentially drops during the cooling period but the ratio of one-neutron (1n)
to two-neutron (2n) event rates in HALO is independent of this overall
decrease. Nevertheless, we find that this ratio still changes with time due to
the changing character of neutrino flavor transformations with the evolving
conditions in the supernova. In the case of inverted hierarchy, this is caused
by the fact that the spectral splits become less and less sharp with the
decreasing luminosity. In the case of normal hierarchy, it is caused by the
passage of the shock wave through the Mikheyev-Smirnov-Wolfenstein resonance
region. However, in both cases, we find that the change in the ratio of 1n to
2n event rates is limited to a few percent.
|
In this study, a variational method for the inverse problem of self-assembly,
i.e., a reconstruction of the interparticle interaction potential of a given
structure, is applied to three-dimensional crystals. According to the method,
the interaction potential is derived as a function that maximizes the
free-energy functional of the one- and two-particle density distribution
functions. The interaction potentials of the target crystals, including those
with face-centered cubic (fcc), body-centered cubic (bcc), and simple hexagonal
(shx) lattices, are obtained by numerical maximization of the functional. Monte
Carlo simulations for the systems of particles with these interactions were
carried out, and the self-assembly of the target crystals was confirmed for the
bcc and shx cases. However, in the many-particle system with the predicted
interaction for the fcc lattice, the fcc lattice did not form spontaneously
and proved to be only metastable.
|
A formalisation of G\"odel's incompleteness theorems using the Isabelle proof
assistant is described. This is apparently the first mechanical verification of
the second incompleteness theorem. The work closely follows {\'S}wierczkowski
(2003), who gave a detailed proof using hereditarily finite set theory. The
adoption of this theory is generally beneficial, but it poses certain technical
issues that do not arise for Peano arithmetic. The formalisation itself should
be useful to logicians, particularly concerning the second incompleteness
theorem, where existing proofs are lacking in detail.
|
We investigate experimentally the behavior of self-propelled water-in-oil
droplets, confined in capillaries of different square and circular
cross-sections. The droplet's activity comes from the formation of swollen
micelles at its interface. In straight capillaries the velocity of the droplet
decreases with increasing confinement. However, at very high confinement, the
velocity converges toward a non-zero value, so that even very long droplets
swim. Stretched circular capillaries are then used to explore even higher
confinement. The lubrication layer around the droplet then takes a non-uniform
thickness which constitutes a significant difference with usual flow-driven
passive droplets. A neck forms at the rear of the droplet, deepens with
increasing confinement, and eventually undergoes successive spontaneous
splitting events for large enough confinement. Such observations stress the
critical role of the activity of the droplet interface on the droplet's
behavior under confinement. We then propose an analytical formulation by
integrating the interface activity and the swollen micelles transport problem
into the classical Bretherton approach. The model accounts for the convergence
of the droplet's velocity to a finite value for large confinement, and for the
non-classical shape of the lubrication layer. We further discuss the
saturation of the micelles concentration along the interface, which would
explain the divergence of the lubrication layer thickness for long enough
droplets, eventually leading to the spontaneous droplet division.
|
In this methodological paper, we first review the classic cubic Diophantine
equation $a^3 + b^3 + c^3 = d^3$, and consider the specific class of solutions
$q_1^3 + q_2^3 + q_3^3 = q_4^3$ with each $q_i$ being a binary quadratic form.
Next we turn our attention to the familiar sums of powers of the first $n$
positive integers, $S_k = 1^k + 2^k + \cdots + n^k$, and express the squares
$S_k^2$, $S_m^2$, and the product $S_k S_m$ as a linear combination of power
sums. These expressions, along with the above quadratic-form solution for the
cubic equation, allow one to generate an infinite number of relations of the
form $Q_1^3 + Q_2^3 + Q_3^3 = Q_4^3$, with each $Q_i$ being a linear
combination of power sums. Also, we briefly consider the quadratic Diophantine
equations $a^2 + b^2 + c^2 = d^2$ and $a^2 + b^2 = c^2$, and give a family of
corresponding solutions $Q_1^2 + Q_2^2 + Q_3^2 = Q_4^2$ and $Q_1^2 + Q_2^2 =
Q_3^2$ in terms of sums of powers of integers.
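
The simplest instance of the kind of power-sum relation used above is Nicomachus's classical identity, which expresses the square of $S_1$ as a single power sum:

```latex
% Nicomachus's identity: S_1^2 = S_3, the base case of writing squares of
% power sums as linear combinations of power sums.
\[
S_1^2 \;=\; \Bigl(\sum_{i=1}^{n} i\Bigr)^{2}
      \;=\; \frac{n^{2}(n+1)^{2}}{4}
      \;=\; \sum_{i=1}^{n} i^{3} \;=\; S_3 .
\]
```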
|
While reinforcement learning (RL) has proven to be the approach of choice for
tackling many complex problems, it remains challenging to develop and deploy RL
agents in real-life scenarios successfully. This paper presents pH-RL
(personalization in e-Health with RL), a general RL architecture for
personalization to bring RL to health practice. pH-RL allows for various levels
of personalization in health applications and allows for online and batch
learning. Furthermore, we provide a general-purpose implementation framework
that can be integrated with various healthcare applications. We describe a
step-by-step guideline for the successful deployment of RL policies in a mobile
application. We implemented our open-source RL architecture and integrated it
with the MoodBuster mobile application for mental health to provide messages to
increase daily adherence to the online therapeutic modules. We then performed a
comprehensive study with human participants over a sustained period. Our
experimental results show that the developed policies learn to select
appropriate actions consistently using only a few days' worth of data.
Furthermore, we empirically demonstrate the stability of the learned policies
during the study.
|
In this paper we discuss the optimal control of a quasilinear parabolic state
equation. Its form is modeled on the kind of problems that arise, for example,
when controlling the anisotropic Allen-Cahn equation as a model for crystal
growth. Motivated by this application, we consider the state equation as the
result of a gradient flow of an energy functional. The quasilinear term is
strongly monotone and obeys a certain growth condition, while the lower-order
term is non-monotone. The state equation is discretized implicitly in time with
piecewise constant functions. The existence of the control-to-state operator
and its Lipschitz continuity is shown for the time-discretized as well as for
the time-continuous problem. The latter is based on the convergence proof of
the discretized solutions. Finally, we prove the existence of global minimizers
for both problems, and show that a subsequence of time-discrete optimal
controls converges to a global minimizer of the time-continuous problem. Our
results hold in arbitrary space dimensions.
|
We show that general time-local quantum master equations admit an unravelling
in quantum trajectories with jumps. The sufficient condition is to weigh state
vector Monte Carlo averages by a probability pseudo-measure which we call the
"influence martingale". The influence martingale satisfies a $ 1d $ stochastic
differential equation enslaved to the ones governing the quantum trajectories.
Our interpretation is that the influence martingale models interference effects
between distinct realizations of the quantum trajectories at strong
system-environment coupling.
If the master equation generates a completely positive dynamical map, there
is no interference. In such a case the influence martingale becomes positive
definite and the Monte Carlo average straightforwardly reduces to the well
known unravelling of completely positive divisible dynamical maps.
|
Sustainable urban design or planning is not a LEGO-like assembly of
prefabricated elements, but an embryo-like growth with persistent
differentiation and adaptation towards a coherent whole. The coherent whole has
a striking character - called living structure - that consists of far more
small substructures than large ones. To detect the living structure, natural
streets or axial lines have previously been adopted to topologically
represent an urban environment as a coherent whole. This paper develops a new
approach to detecting the underlying living structure of urban environments.
The approach takes an urban environment as a whole and recursively decomposes
it into meaningful subwholes at different levels of hierarchy or scale ranging
from the largest to the smallest. We compared the new approach to natural
street and axial line approaches and demonstrated, through four case studies,
that the new approach is better and more powerful. Based on the study, we
further discuss how the new approach can be used not only for understanding,
but also for effectively designing or planning the living structure of an urban
environment to be more living or more livable. Keywords: Urban design or
planning, structural beauty, space syntax, natural streets, life, wholeness
|
Tree-based models are among the most efficient machine learning techniques
for data mining nowadays due to their accuracy, interpretability, and
simplicity. The recent orthogonal needs for more data and privacy protection
call for collaborative privacy-preserving solutions. In this work, we survey
the literature on distributed and privacy-preserving training of tree-based
models and we systematize its knowledge based on four axes: the learning
algorithm, the collaborative model, the protection mechanism, and the threat
model. We use this to identify the strengths and limitations of these works and
provide for the first time a framework analyzing the information leakage
occurring in distributed tree-based model learning.
|
Let $\mathbb{K}$ be a finite field and $X$ be a complete simplicial toric
variety over $\mathbb{K}$. We give an algorithm relying on elimination theory for finding
generators of the vanishing ideal of a subgroup $Y_Q$ parameterized by a matrix
$Q$ which can be used to study algebraic geometric codes arising from $Y_Q$. We
give a method to compute the lattice $L$ whose ideal $I_L$ is exactly $I(Y_Q)$
under a mild condition. As applications, we give precise descriptions for the
lattices corresponding to some special subgroups. We also prove a
Nullstellensatz type theorem valid over finite fields, and share
\verb|Macaulay2| codes for our algorithms.
|
Vanadium oxides have attracted great attention for over half a century, since
the discovery of the metal-insulator transition near room temperature. Here
Na$_x$VO$_2$ is studied through a systematic comparison with other layered
sodium metal oxides with early 3d transition metals, first disclosing a unified
evolution pattern of Na density waves through in situ XRD analysis. Combining
ab initio simulations and theoretical modeling, a sodium-modulated
Peierls-like transition mechanism is then proposed for the bond formation of
metal-ion dimers. More importantly, the unique trimer structure in
Na$_x$VO$_2$ is shown to be very sensitive to the onsite Coulomb repulsion
value, suggesting a delicate balance between strong electronic correlations and
orbital effects that can be precisely modulated by both Na compositions and
atomic stackings. This unveils a unique opportunity to design strongly
correlated materials with tailored electronic transitions through
electrochemical modulations and crystallographic design, to elegantly balance
various competing effects. We expect this understanding will also help further
elucidate complicated electronic behaviors in other vanadium oxide systems.
|
In Europe, 20% of road crashes occur at intersections. In recent years,
evolving communication technologies are making V2V and V2I faster and more
reliable; with such advancements, these crashes, as well as their economic
cost, can be partially reduced. In this work, we concentrate on straight path
intersection collisions. Connectivity-based algorithms relying on 5G technology
and smart sensors are presented and compared to a commercial radar AEB logic in
order to evaluate performances and effectiveness in collision avoidance or
mitigation. The aforementioned novel safety systems are tested in a blind
intersection and low-adherence scenario. The first proposed algorithm is
obtained by incorporating connectivity information into the original control
scheme, while the second is a novel control logic fully capable of also
utilizing the adherence estimation provided by smart sensors. Test results
show an improvement in terms of safety for both architectures and high
prospects for future developments.
|
This paper deals with the generalized spectrum of continuously invertible
linear operators defined on infinite dimensional Hilbert spaces. More
precisely, we consider two bounded, coercive, and self-adjoint operators
$A, B: V\mapsto V^{\#}$, where $V^{\#}$ denotes the dual of $V$, and
investigate the conditions under which the whole spectrum of
$B^{-1}A: V\mapsto V$ can be approximated to an arbitrary accuracy by
the eigenvalues of the finite dimensional discretization
$B_n^{-1}A_n$. Since $B^{-1}A$ is continuously invertible,
such an investigation cannot use the concept of uniform (normwise) convergence,
and it relies instead on the pointwise (strong) convergence of
$B_n^{-1}A_n$ to $B^{-1}A$.
The paper is motivated by operator preconditioning which is employed in the
numerical solution of boundary value problems. In this context, $A,
B: H_0^1(\Omega) \mapsto H^{-1}(\Omega)$ are the standard
integral/functional representations of the differential operators $ -\nabla
\cdot (k(x)\nabla u)$ and $-\nabla \cdot (g(x)\nabla u)$, respectively, and
$k(x)$ and $g(x)$ are scalar coefficient functions. The investigated question
differs from the eigenvalue problem studied in the numerical PDE literature
which is based on the approximation of the eigenvalues within the framework of
compact operators.
This work follows the path started by the two recent papers published in
[SIAM J. Numer. Anal., 57 (2019), pp.~1369-1394 and 58 (2020), pp.~2193-2211]
and addresses one of the open questions formulated at the end of the second
paper.
|
We prove that if there are $\mathfrak c$ incomparable selective ultrafilters
then, for every infinite cardinal $\kappa$ such that $\kappa^\omega=\kappa$,
there exists a group topology on the free Abelian group of cardinality $\kappa$
without nontrivial convergent sequences and such that every finite power is
countably compact. In particular, there are arbitrarily large countably compact
groups. This answers a 1992 question of D. Dikranjan and D. Shakhmatov.
|
For fixed $q\in\{3,7,11,19, 43,67,163\}$, we consider the density of primes
$p$ congruent to $1$ modulo $4$ such that the class group of the number field
$\mathbb{Q}(\sqrt{-qp})$ has order divisible by $16$. We show that this density
is equal to $1/8$, in line with a more general conjecture of Gerth.
Vinogradov's method is the key analytic tool for our work.
|
The recent proposal of the antidoping scheme breaks new ground in conceiving
conversely functional materials and devices, yet the few available examples
belong to correlated electron systems. Here we demonstrate both
theoretically and experimentally that the main group oxide BaBiO$_3$ is a model
system for antidoping using oxygen vacancies. First-principles calculations
show that the band gap systematically increases due to the strongly enhanced
BiO breathing distortions away from the vacancies and the annihilation of Bi 6s
and O 2p hybridized conduction bands near the vacancies. The spectroscopic
experiments confirm that the band gap increases systematically with electron
doping, with a maximal gap enhancement of 75% when the film's stoichiometry is
reduced to BaBiO$_{2.75}$. The Raman and diffraction experiments show the
suppression of the overall breathing distortion. The study unambiguously
demonstrates the remarkable antidoping effect in a material without strong
electron correlations and underscores the importance of bond disproportionation
in realizing such an effect.
|
We provide a tight result for a fundamental problem arising from packing
squares into a circular container: The critical density of packing squares into
a disk is $\delta=\frac{8}{5\pi}\approx 0.509$. This implies that any set of
(not necessarily equal) squares of total area $A \leq \frac{8}{5}$ can always
be packed into a disk with radius 1; in contrast, for any $\varepsilon>0$ there
are sets of squares of total area $\frac{8}{5}+\varepsilon$ that cannot be
packed, even if squares may be rotated. This settles the last (and arguably,
most elusive) case of packing circular or square objects into a circular or
square container: The critical densities for squares in a square
$\left(\frac{1}{2}\right)$, circles in a square
$\left(\frac{\pi}{(3+2\sqrt{2})}\approx 0.539\right)$ and circles in a circle
$\left(\frac{1}{2}\right)$ have already been established, making use of
recursive subdivisions of a square container into pieces bounded by straight
lines, or the ability to use recursive arguments based on similarity of objects
and container; neither of these approaches can be applied when packing squares
into a circular container. Our proof uses a careful manual analysis,
complemented by a computer-assisted part that is based on interval arithmetic.
Beyond the basic mathematical importance, our result is also useful as a
blackbox lemma for the analysis of recursive packing algorithms. At the same
time, our approach showcases the power of a general framework for
computer-assisted proofs, based on interval arithmetic.
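
To illustrate the interval-arithmetic mechanism underlying such computer-assisted parts, here is a toy sketch (our illustration, not the paper's actual certificate). The verified bound is deliberately slack: because of the dependency problem, naive interval evaluation of $x(1-x)$ cannot certify the tight bound $1/4$ without rewriting the expression.

```python
# Toy sketch of interval arithmetic for computer-assisted proofs: evaluate
# an expression over intervals to get a rigorous enclosure, bisecting until
# an inequality is certified on every piece. Exact rational endpoints
# (fractions) avoid rounding issues in this illustration.
from fractions import Fraction

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = Fraction(lo), Fraction(hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

def certify(lo, hi, bound=Fraction(13, 50), depth=0):
    """Certify x*(1-x) <= bound for all x in [lo, hi] by bisection."""
    lo, hi = Fraction(lo), Fraction(hi)
    enclosure = Interval(lo, hi) * Interval(1 - hi, 1 - lo)   # x * (1 - x)
    if enclosure.hi <= bound:
        return True
    if depth >= 30:    # enclosure never tightens enough: bound tight or false
        return False
    mid = (lo + hi) / 2
    return (certify(lo, mid, bound, depth + 1)
            and certify(mid, hi, bound, depth + 1))

# The bound 13/50 = 0.26 is slack on purpose; the tight bound 1/4 would not
# be certified by this naive evaluation (the "dependency problem").
print(certify(0, 1))   # True
```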
|
It is shown that the dipole moment of polar (water, methanol, formamide,
acetone and acetonitrile) molecules in the neighborhood of a cation is
increased primarily by polarization from the bare electrostatic charge of the
cation, although the effective value of the latter is somewhat reduced by "back
donation" of electrons from neighbouring polar molecules. In other words, the
classical picture may be viewed as if a point charge slightly smaller than the
nominal charge of the cation would be placed at the cation site. It was found
that the geometrical arrangement of the polar molecules in the first solvation
shell is such that their mutual polarization reduces the dipole moments of
individual molecules, so that in some cases they become smaller than the dipole
moment of the free protic or aprotic molecule. We conjecture that this behavior
is essentially a manifestation of the Le Chatelier-Braun principle.
|
It was recently discovered that atoms subject to a time-periodic drive can
give rise to a crystal structure in phase space. In this work, we point out
that atom-atom interactions give rise to collective phonon excitations of the
phase-space crystal via a pairing interaction with intrinsically complex phases
that can lead to a phonon Chern insulator, accompanied by topologically robust
chiral transport along the edge of the phase-space crystal. This topological
phase is realized even in scenarios where the time-reversal transformation is a
symmetry, which is surprising because the breaking of time-reversal symmetry is
a strict precondition for topological chiral transport in the standard setting
of real-space crystals. Our work also has important implications for the
dynamics of 2D charged particles in a strong magnetic field.
|
Background: For its simplicity, the eikonal method is the tool of choice to
analyze nuclear reactions at high energies ($E>100$ MeV/nucleon), including
knockout reactions. However, so far, the effective interactions used in this
method are assumed to be fully local.
Purpose: Given the recent studies on non-local optical potentials, in this
work we assess whether non-locality in the optical potentials is expected to
impact reactions at high energies and then explore different avenues for
extending the eikonal method to include non-local interactions.
Method: We compare angular distributions obtained for non-local interactions
(using the exact R-matrix approach for elastic scattering and the adiabatic
distorted wave approximation for transfer) with those obtained using their
local-equivalent interactions.
Results: Our results show that transfer observables are significantly
impacted by non-locality in the high-energy regime. Because knockout reactions
are dominated by stripping (transfer to inelastic channels), non-locality is
expected to have a large effect on knockout observables too. Three approaches
are explored for extending the eikonal method to non-local interactions,
including an iterative method and a perturbation theory.
Conclusions: None of the derived extensions of the eikonal model provide a
good description of elastic scattering. This work suggests that non-locality
removes the formal simplicity associated with the eikonal model.
|
Over the past decade the in-medium similarity renormalization group (IMSRG)
approach has proven to be a powerful and versatile ab initio many-body method
for studying medium-mass nuclei. So far, the IMSRG was limited to the
approximation in which only up to two-body operators are incorporated in the
renormalization group flow, referred to as the IMSRG(2). In this work, we
extend the IMSRG(2) approach to fully include three-body operators yielding the
IMSRG(3) approximation. We use a perturbative scaling analysis to estimate the
importance of individual terms in this approximation and introduce truncations
that aim to approximate the IMSRG(3) at a lower computational cost. The
IMSRG(3) is systematically benchmarked for different nuclear Hamiltonians for
${}^{4}\text{He}$ and ${}^{16}\text{O}$ in small model spaces. The IMSRG(3)
systematically improves over the IMSRG(2) relative to exact results.
Approximate IMSRG(3) truncations constructed based on computational cost are
able to reproduce much of the systematic improvement offered by the full
IMSRG(3). We also find that the approximate IMSRG(3) truncations behave
consistently with expectations from our perturbative analysis, indicating that
this strategy may also be used to systematically approximate the IMSRG(3).
|
We demonstrate that the reduced Hartree-Fock (REHF) equation with an
Anderson-type background charge distribution has a unique stationary solution
by explicitly computing a screening mass at positive temperature.
|
A* search is an informed search algorithm that uses a heuristic function to
guide the order in which nodes are expanded. Since the computation required to
expand a node and compute the heuristic values for all of its generated
children grows linearly with the size of the action space, A* search can become
impractical for problems with large action spaces. This computational burden
becomes even more apparent when heuristic functions are learned by general, but
computationally expensive, deep neural networks. To address this problem, we
introduce DeepCubeAQ, a deep reinforcement learning and search algorithm that
builds on the DeepCubeA algorithm and deep Q-networks. DeepCubeAQ learns a
heuristic function that, with a single forward pass through a deep neural
network, computes the sum of the transition cost and the heuristic value of all
of the children of a node without explicitly generating any of the children,
eliminating the need for node expansions. DeepCubeAQ then uses a novel variant
of A* search, called AQ* search, that uses the deep Q-network to guide search.
We use DeepCubeAQ to solve the Rubik's cube when formulated with a large action
space that includes 1872 meta-actions and show that this 157-fold increase in
the size of the action space incurs less than a 4-fold increase in computation
time when performing AQ* search and that AQ* search is orders of magnitude
faster than A* search.
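
To make the central trick concrete, below is a minimal sketch (our illustration, with assumed layer sizes and a toy environment interface of encode/step/is_goal) of a Q-network that scores all actions of a state with a single network call at the parent, so the search never evaluates the network on the children themselves. Unit transition costs are assumed for the child's path cost.

```python
# Minimal sketch of the core DeepCubeAQ idea: a Q-network whose single
# forward pass returns cost estimates for *all* actions of a state, plus an
# AQ*-style best-first search that ranks children from those values.
import heapq
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim, num_actions, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions))   # one Q-value per action

    def forward(self, s):                     # s: (batch, state_dim)
        return self.net(s)                    # (batch, num_actions)

def aq_star(start, encode, step, is_goal, qnet):
    """Best-first search guided by per-action Q-values.
    encode(state) -> 1D feature tensor; step(state, a) -> successor state."""
    frontier = [(0.0, 0, start, 0.0)]         # (priority, tie-break, state, g)
    tie = 0
    while frontier:
        _, _, s, g = heapq.heappop(frontier)
        if is_goal(s):
            return g
        with torch.no_grad():
            q = qnet(encode(s).unsqueeze(0)).squeeze(0)  # all actions at once
        for a in range(q.numel()):
            # q[a] already estimates transition cost + child's cost-to-go,
            # so no network call is needed for the child itself.
            tie += 1
            child_g = g + 1.0                 # unit transition cost assumed
            heapq.heappush(frontier,
                           (g + q[a].item(), tie, step(s, a), child_g))
    return None
```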
|
The resolution of $4$-dimensional massless field operators of higher spins
was constructed by Eastwood-Penrose-Wells by using the twistor method.
Recently, physicists have become interested in $6$-dimensional physics,
including the massless field operators of higher spins on Lorentzian space
$\Bbb R^{5,1}$. Its
Euclidean version $\mathscr{D}_0$ and their function theory are discussed in
\cite{wangkang3}. In this paper, we construct an exact sequence of Hilbert
spaces as weighted $L^2$ spaces resolving $\mathscr{D}_0$: $$L^2_\varphi(\Bbb
R^6, \mathscr{V}_0)\overset{\mathscr{D}_0}\longrightarrow L^2_\varphi(\Bbb
R^6,\mathscr{V}_1)\overset{\mathscr{D}_1}\longrightarrow L^2_\varphi(\Bbb R^6,
\mathscr{V}_2)\overset{\mathscr{D}_2}\longrightarrow L^2_\varphi(\Bbb R^6,
\mathscr{V}_3)\longrightarrow 0,$$ with suitable operators $\mathscr{D}_l$ and
vector spaces $\mathscr{V}_l$. Namely, we can solve $\mathscr{D}_{l}u=f$ in
$L^2_\varphi(\Bbb R^6, \mathscr{V}_{l})$ when $\mathscr{D}_{l+1} f=0$ for $f\in
L^2_{\varphi}(\Bbb R^6, \mathscr{V}_{l+1})$. This is proved by using the $L^2$
method in the theory of several complex variables, which is a general framework
to solve overdetermined PDEs under the compatibility condition. To apply this
method here, it is necessary to consider weighted $L^2$ spaces, an advantage of
which is that any polynomial is $L^2_{\varphi}$ integrable. As a corollary, we
prove that $$ P(\Bbb R^6, \mathscr{V}_0)\overset{\mathscr{D}_0}\longrightarrow
P(\Bbb R^6,\mathscr{V}_1)\overset{\mathscr{D}_1}\longrightarrow P(\Bbb R^6,
\mathscr{V}_2)\overset{\mathscr{D}_2}\longrightarrow P(\Bbb R^6,
\mathscr{V}_3)\longrightarrow 0$$ is a resolution, where $P(\Bbb R^6,
\mathscr{V}_l)$ is the space of all $\mathscr{V}_l$-valued polynomials. This
provides an analytic way to construct a resolution of a differential operator
acting on vector valued polynomials.
|
Angular momentum plays a central role in a multitude of phenomena in quantum
mechanics, recurring in every length scale from the microscopic interactions of
light and matter to the macroscopic behavior of superfluids. Vortex beams,
carrying intrinsic orbital angular momentum (OAM), are now regularly generated
with elementary particles such as photons and electrons, and harnessed for
numerous applications including microscopy and communication. Untapped
possibilities remain hidden in vortices of non-elementary particles, as their
composite structure can lead to coupling of OAM with internal degrees of
freedom. However, thus far, the creation of a vortex beam of a non-elementary
particle has never been demonstrated experimentally. We present the first
vortex beams of atoms and molecules, formed by diffracting supersonic beams of
helium atoms and dimers, respectively, off binary masks made from transmission
gratings. By achieving large particle coherence lengths and nanometric grating
features, we observe a series of vortex rings corresponding to different OAM
states in the accumulated images of particles impacting a detector. This method
is general and can be applied to most atomic and molecular gases. Our results
may open new frontiers in atomic physics, utilizing the additional degree of
freedom of OAM to probe collisions and alter fundamental interactions.
|
Using a tight-binding model, we elaborate that the previously discovered
out-of-plane polarized helical edge spin current caused by Rashba spin-orbit
coupling can be attributed to the fact that in a strip geometry, a positive
momentum eigenstate does not always have the same spin polarization at the edge
as the corresponding negative momentum eigenstate. In addition, in the presence
of a magnetization pointing perpendicular to the edge, an edge charge current
is produced, which can be chiral or nonchiral depending on whether the
magnetization lies in-plane or out-of-plane. The spin polarization near the
edge develops a transverse component orthogonal to the magnetization, which is
antisymmetric between the two edges and tends to cause a noncollinear magnetic
order between the two edges. If the magnetization only occupies a region near
one edge, or in an irregular shaped quantum dot, this transverse component has
a nonzero average, rendering a gate voltage-induced magnetoelectric torque
without the need of a bias voltage. We also argue that other types of
spin-orbit coupling that can be obtained from the Rashba type through a unitary
transformation, such as the Dresselhaus spin-orbit coupling, will have similar
effects.
|
Hydrodynamics is a general theoretical framework for describing the long-time
large-distance behaviors of various macroscopic physical systems, with its
equations based on conservation laws such as energy-momentum conservation and
charge conservation. Recently there has been significant interest in
understanding the implications of angular momentum conservation for a
corresponding hydrodynamic theory. In this work, we examine the key conceptual
issues for such a theory in the relativistic regime where the orbital and spin
components get entangled. We derive the equations for relativistic viscous
hydrodynamics with angular momentum through a Navier-Stokes-type gradient
expansion analysis.
|
Split-learning (SL) has recently gained popularity due to its inherent
privacy-preserving capabilities and ability to enable collaborative inference
for devices with limited computational power. Standard SL algorithms assume an
ideal underlying digital communication system and ignore the problem of scarce
communication bandwidth. However, for a large number of agents, limited
bandwidth resources, and time-varying communication channels, the communication
bandwidth can become the bottleneck. To address this challenge, in this work,
we propose a novel SL framework to solve the remote inference problem that
introduces an additional layer at the agent side and constrains the choices of
the weights and the biases to ensure over-the-air aggregation. Hence, the
proposed approach maintains a constant communication cost with respect to the
number of agents, enabling remote inference under limited bandwidth. Numerical
results show that our proposed algorithm significantly outperforms the digital
implementation in terms of communication-efficiency, especially as the number
of agents grows large.
|
Time series analysis is quickly proceeding towards long and complex tasks. In
recent years, fast approximate algorithms for discord search have been proposed
in order to compensate for the increasing size of the time series. It is more
interesting, however, to find quick exact solutions. In this research, we
improved HOT SAX by exploiting two main ideas: the warm-up process, and the
similarity between sequences close in time. The resulting algorithm, called HOT
SAX Time (HST), has been validated with real and synthetic time series, and
successfully compared with HOT SAX, RRA, SCAMP, and DADD. The complexity of a
discord search has been evaluated with a new indicator, the cost per sequence
(cps), which allows one to compare searches on time series of different
lengths. Numerical evidence suggests that two conditions are involved in
determining the complexity of a discord search in a non-trivial way: the length
of the discords, and the noise/signal ratio. In the case of complex searches,
HST can be more than 100 times faster than HOT SAX, placing it at the forefront
of exact discord search.
|
In this work, we revisit the adaptive L1 time-stepping scheme for solving the
time-fractional Allen-Cahn equation in Caputo form. The L1 implicit
scheme is shown to preserve a variational energy dissipation law on arbitrary
nonuniform time meshes by using recent discrete analysis tools, i.e., the
discrete orthogonal convolution kernels and discrete complementary convolution
kernels. The discrete embedding techniques and the fractional Gr\"onwall
inequality are then applied to establish an $L^2$ norm error estimate on
nonuniform time meshes. An adaptive time-stepping strategy based on the
dynamical features of the system is presented to capture the multi-scale
behaviors and to improve the computational performance.
|
Extended decorations on naturally decorated trees were introduced in the work
of Bruned, Hairer and Zambotti on algebraic renormalization of regularity
structures to provide a convenient framework for the renormalization of systems
of singular stochastic PDEs within that setting. This non-dynamical feature of
the trees complicated the analysis of the dynamical counterpart of the
renormalization process. We provide a new proof of the renormalized system,
bypassing the use of extended decorations and working for a large class of
renormalization maps, with the BPHZ renormalization as a special case. The
proof reveals important algebraic properties connected to preparation maps.
|
PYROBOCOP is a lightweight Python-based package for control and optimization
of robotic systems described by nonlinear Differential Algebraic Equations
(DAEs). In particular, the package can handle systems with contacts that are
described by complementarity constraints and provides a general framework for
specifying obstacle avoidance constraints. The package performs direct
transcription of the DAEs into a set of nonlinear equations by performing
orthogonal collocation on finite elements. The resulting optimization problem
belongs to the class of Mathematical Programs with Complementarity Constraints
(MPCCs). MPCCs fail to satisfy commonly assumed constraint qualifications and
require special handling of the complementarity constraints in order for
NonLinear Program (NLP) solvers to solve them effectively. PYROBOCOP provides
automatic reformulation of the complementarity constraints that enables NLP
solvers to perform optimization of robotic systems. The package is interfaced
with ADOL-C for obtaining sparse derivatives by automatic differentiation and
IPOPT for performing optimization. We demonstrate the effectiveness of our
approach in terms of speed and flexibility, providing several numerical
examples of robotic systems with collision avoidance as well as contact
constraints represented using complementarity constraints. We also provide
comparisons with other open-source optimization packages such as CasADi and
Pyomo.
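
As background on why the reformulation matters, the following generic sketch shows the standard relaxation technique for complementarity constraints (an illustration of the technique, not PYROBOCOP's actual interface): replace $0 \le x \perp y \ge 0$ by $x, y \ge 0$ and $xy \le \varepsilon$, then drive $\varepsilon \to 0$ over a sequence of warm-started NLP solves.

```python
# Generic sketch of the standard MPCC relaxation (not PYROBOCOP's API).
import numpy as np
from scipy.optimize import minimize

# Toy MPCC: minimize (x-2)^2 + (y-1)^2  subject to  0 <= x complementary y >= 0.
def solve_relaxed(eps, z0):
    cons = [{"type": "ineq", "fun": lambda z: eps - z[0] * z[1]}]  # x*y <= eps
    res = minimize(lambda z: (z[0] - 2) ** 2 + (z[1] - 1) ** 2, z0,
                   bounds=[(0, None), (0, None)],                  # x, y >= 0
                   constraints=cons, method="SLSQP")
    return res.x

z = np.array([1.0, 1.0])
for eps in [1.0, 1e-1, 1e-2, 1e-4, 1e-6]:
    z = solve_relaxed(eps, z)      # warm-start each tighter relaxation
print(z)   # approaches the MPCC solution near (2, 0), where x*y = 0
```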
|
Demand response (DR) programs engage distributed demand-side resources, e.g.,
controllable residential and commercial loads, in providing ancillary services
for electric power systems. Ensembles of these resources can help reducing
system load peaks and meeting operational limits by adjusting their electric
power consumption. To equip utilities or load aggregators with adequate
decision-support tools for ensemble dispatch, we develop a Markov Decision
Process (MDP) approach to optimally control load ensembles in a
privacy-preserving manner. To this end, the concept of differential privacy is
internalized into the MDP routine to protect transition probabilities and,
thus, privacy of DR participants. The proposed approach also provides a
trade-off between solution optimality and privacy guarantees, and is analyzed
using real-world data from DR events in the New York University microgrid in
New York, NY.
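
As a concrete illustration of internalizing differential privacy into transition probabilities, here is one standard mechanism, sketched with an assumed sensitivity and a toy count tensor; the paper's actual routine may differ.

```python
# Sketch of a standard way to make empirical MDP transition probabilities
# differentially private: perturb the transition counts with Laplace noise
# calibrated to sensitivity / privacy budget epsilon, then re-normalize.
import numpy as np

def dp_transition_matrix(counts, epsilon, rng=None):
    """counts[s, a, s'] = observed transitions; returns private P[s, a, s']."""
    if rng is None:
        rng = np.random.default_rng()
    # Assumption: one participant changes one count by at most 1, giving
    # L1 sensitivity 1 for the Laplace mechanism.
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 1e-9, None)        # keep probabilities valid
    return noisy / noisy.sum(axis=-1, keepdims=True)

counts = np.random.default_rng(0).integers(0, 50, size=(4, 2, 4)).astype(float)
P_private = dp_transition_matrix(counts, epsilon=0.5)
# Smaller epsilon -> stronger privacy but noisier dynamics, which is the
# optimality/privacy trade-off mentioned above.
```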
|
A 3D unit cell model containing eight different spherical particles embedded
in a homogeneous strain gradient plasticity (SGP) matrix material is presented.
The interaction between particles and matrix is controlled by an interface
model formulated within the higher order SGP theory used. Strengthening of the
particle reinforced material is investigated in terms of the increase in
macroscopic yield stress. The results are used to validate a closed form
strengthening relation proposed by the authors, which suggests that the
increase in macroscopic yield stress is governed by the interface strength
times the total surface area of particles in the material volume.
|
The Higgs boson decay modes to $b$ and $c$ quarks are crucial for many Higgs
precision measurements. The presence of semileptonic decays in the jets
originating from $b$ and $c$ quarks causes missing energy due to the
undetectable neutrinos. A correction for the missing neutrino momenta can be
derived from the kinematics of the decay up to a two-fold ambiguity. The
correct solution can be identified by a kinematic fit, which exploits the
well-known initial state at an $e^{+}e^{-}$ collider by adjusting the measured
quantities within their uncertainties to fulfill the kinematic constraints. The
ParticleFlow concept, based on the reconstruction of individual particles in a
jet, allows understanding of the individual jet-level uncertainties at an
unprecedented level. The modeling of the jet uncertainties and the resulting
fit performance will be discussed for the example of the ILD detector. Applied
to $H\rightarrow b\bar{b}/c\bar{c}$ events, the combination of the neutrino
correction with the kinematic fit improves the Higgs mass reconstruction
significantly, both in terms of resolution and peak position.
|
A variational discrete element method is applied to simulate quasi-static
crack propagation. Cracks are considered to propagate between the mesh cells
through the mesh facets. The elastic behaviour is parametrized by the
continuous mechanical parameters (Young's modulus and Poisson's ratio). A
discrete energetic cracking criterion coupled to a discrete kinking criterion
guides the cracking process. Two-dimensional numerical examples are presented to
illustrate the robustness and versatility of the method.
|
In this article, we consider convergence of stochastic gradient descent
schemes (SGD) under weak assumptions on the underlying landscape. More
explicitly, we show that, on the event that the SGD stays local, the SGD
converges if there is only a countable number of critical points or if the
target function/landscape satisfies Łojasiewicz inequalities around all
critical levels, as all analytic functions do. In particular, we show that
for neural networks with analytic activation function such as softplus, sigmoid
and the hyperbolic tangent, SGD converges on the event of staying local, if the
random variables modeling the signal and response in the training are compactly
supported.
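
For reference, the Łojasiewicz inequality invoked above states that around a critical level $c$ of an analytic function $f$ there exist $C > 0$ and $\theta \in [1/2, 1)$ such that, in a neighborhood of the critical set,

```latex
\[
|f(x) - c|^{\theta} \;\le\; C \, \| \nabla f(x) \| .
\]
```

Its classical consequence is that bounded gradient trajectories have finite length and therefore converge; a mechanism of the same flavor underlies the SGD result.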
|
We present a directed variant of the Salop (1979) model to analyze bus transport
dynamics. The players are operators competing in cooperative and
non-cooperative games. Utility, like in most bus concession schemes in emerging
countries, is proportional to the total fare collection. Competition for
picking up passengers leads to well documented and dangerous driving practices
that cause road accidents, traffic congestion and pollution. We obtain
theoretical results that support the existence and implementation of such
practices, and give a qualitative description of how they come to occur. In
addition, our results allow a comparison of the current or base transport
system with a more cooperative one.
|
In reinforcement learning (RL), the goal is to obtain an optimal policy, for
which the optimality criterion is fundamentally important. Two major optimality
criteria are average and discounted rewards, where the latter is typically
considered as an approximation to the former. While the discounted reward is
more popular, it is problematic to apply in environments that have no natural
notion of discounting. This motivates us to revisit a) the progression of
optimality criteria in dynamic programming, b) justification for and
complication of an artificial discount factor, and c) benefits of directly
maximizing the average reward. Our contributions include a thorough examination
of the relationship between average and discounted rewards, as well as a
discussion of their pros and cons in RL. We emphasize that average-reward RL
methods possess the ingredients and mechanisms for developing the general
discounting-free optimality criterion (Veinott, 1969) in RL.
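
For concreteness, the two criteria compared above can be written as follows, for a policy $\pi$ with reward sequence $r_t$ and assuming the limit exists:

```latex
\[
V_\gamma^{\pi}(s) \;=\; \mathbb{E}\Bigl[\sum_{t=0}^{\infty} \gamma^{t} r_t \,\Big|\, s_0 = s\Bigr],
\qquad
\rho^{\pi}(s) \;=\; \lim_{T\to\infty} \frac{1}{T}\,
\mathbb{E}\Bigl[\sum_{t=0}^{T-1} r_t \,\Big|\, s_0 = s\Bigr].
\]
```

Under standard conditions, $(1-\gamma)\,V_\gamma^{\pi} \to \rho^{\pi}$ as $\gamma \to 1$, which is the sense in which the discounted criterion approximates the average one.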
|
We use the crystal isomorphisms of the Fock space to describe two maps on
partitions and multipartitions which naturally appear in the crystal basis
theory for quantum groups in affine type A and in the representation theory of
Hecke algebras of type G(l, l, n).
|
We prove that Strings-and-Coins -- the combinatorial two-player game
generalizing the dual of Dots-and-Boxes -- is strongly PSPACE-complete on
multigraphs. This result improves the best previous result, NP-hardness, argued
in Winning Ways. Our result also applies to the Nimstring variant, where the
winner is determined by normal play; indeed, one step in our reduction is the
standard reduction (also from Winning Ways) from Nimstring to
Strings-and-Coins.
|
Observed rotation curves in star-forming galaxies indicate a puzzling dearth
of dark matter in extended flat cores within haloes of mass $\geq\!
10^{12}M_\odot$ at $z\!\sim\! 2$. This is not reproduced by current
cosmological simulations, and supernova-driven outflows are not effective in
such massive haloes. We address a hybrid scenario where post-compaction merging
satellites heat up the dark-matter cusps by dynamical friction, allowing
AGN-driven outflows to generate cores. Using analytic and semi-analytic models
(SatGen), we estimate the dynamical-friction heating as a function of satellite
compactness for a cosmological sequence of mergers. Cosmological simulations
(VELA) demonstrate that satellites of initial virial masses
$>\!10^{11.3}M_\odot$, that undergo wet compactions, become sufficiently
compact for significant heating. Constituting a major fraction of the accretion
onto haloes $\geq\!10^{12}M_\odot$, these satellites heat up the cusps in half
a virial time at $z\!\sim\! 2$. Using a model for outflow-driven core formation
(CuspCore), we demonstrate that the heated dark-matter cusps develop extended
cores in response to removal of half the gas mass, while the more compact
stellar systems remain intact. The mergers keep the dark matter hot, while the
gas supply, fresh and recycled, is sufficient for the AGN outflows. AGN indeed
become effective in haloes $\geq\!10^{12}M_\odot$, where the black-hole growth
is no longer suppressed by supernovae and its compaction-driven rapid growth is
maintained by a hot CGM. For simulations to reproduce the dynamical-friction
effects, they should resolve the compaction of the massive satellites and avoid
artificial tidal disruption. AGN feedback could be boosted by clumpy black-hole
accretion and clumpy response to AGN.
|
We studied the dynamics of an object sliding down a semi-sphere with radius
$R$. We consider the physical situation where the semi-sphere is free to
move over a horizontal surface and all surfaces in contact are frictionless.
We analyze the values of the last contact angle $\theta^\star$, corresponding
to the angle at which the object and the semi-sphere detach from each other,
considering all possible scenarios with different values of $m_A$ and $m_B$.
We found that the last contact angle depends only on the ratio between the
masses, and is independent of the acceleration of gravity and of the
semi-sphere radius. In addition, we found that the largest possible value of
$\theta^\star$ occurs when the semi-sphere does not move. In the opposite
case, $m_A \gg m_B$, the angle takes its minimum value, with detachment
occurring at the top of the semi-sphere.
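
In the immobile-semi-sphere limit (taking $m_B$ as the semi-sphere mass and $m_B \gg m_A$; this reading of the notation is our assumption), the largest detachment angle reduces to the classical textbook value:

```latex
% Energy conservation and the radial equation of motion at the instant the
% normal force N vanishes give the classic fixed-sphere result.
\[
m_A g R (1-\cos\theta) = \tfrac{1}{2} m_A v^{2}, \qquad
m_A g \cos\theta^\star = \frac{m_A v^{2}}{R}
\;\;\Longrightarrow\;\; \cos\theta^\star = \tfrac{2}{3}.
\]
```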
|
In a legal system, judgment consistency is regarded as one of the most
important manifestations of fairness. However, due to the complexity of the
factual elements that impact sentencing in real-world scenarios, little work
has been done on quantitatively measuring judgment consistency on real-world data.
In this paper, we propose an evaluation metric for judgment inconsistency,
Legal Inconsistency Coefficient (LInCo), which aims to evaluate inconsistency
between data groups divided by specific features (e.g., gender, region, race).
We propose to simulate judges from different groups with legal judgment
prediction (LJP) models and measure the judicial inconsistency with the
disagreement of the judgment results given by LJP models trained on different
groups. Experimental results on the synthetic data verify the effectiveness of
LInCo. We further employ LInCo to explore the inconsistency in real cases and
come to the following observations: (1) Both regional and gender inconsistency
exist in the legal system, but gender inconsistency is much less than regional
inconsistency; (2) The level of regional inconsistency varies little across
different time periods; (3) In general, judicial inconsistency is negatively
correlated with the severity of the criminal charges. In addition, we use LInCo to
evaluate the performance of several de-bias methods, such as adversarial
learning, and find that these mechanisms can effectively help LJP models to
avoid suffering from data bias.
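
A minimal sketch of the measurement recipe described above (train one predictor per group, score inconsistency as average pairwise disagreement on shared cases); the disagreement measure and the stand-in models are our simplifications, not the paper's definition of LInCo.

```python
# Sketch: simulate judges from different groups with per-group predictors
# and score inconsistency as mean pairwise disagreement on shared cases.
from itertools import combinations
import numpy as np

def linco(models, cases):
    """models: list of per-group predictors with .predict(cases) -> labels;
    cases: shared evaluation set. Returns mean pairwise disagreement rate."""
    preds = [np.asarray(m.predict(cases)) for m in models]
    return float(np.mean([np.mean(preds[i] != preds[j])
                          for i, j in combinations(range(len(preds)), 2)]))

class ConstantJudge:                 # toy stand-in for an LJP model
    def __init__(self, label):
        self.label = label
    def predict(self, cases):
        return np.full(len(cases), self.label)

# Two maximally inconsistent "judges" give a score of 1.0.
print(linco([ConstantJudge(0), ConstantJudge(1)], cases=np.zeros(10)))
```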
|
Multiplying matrices is among the most fundamental and compute-intensive
operations in machine learning. Consequently, there has been significant work
on efficiently approximating matrix multiplies. We introduce a learning-based
algorithm for this task that greatly outperforms existing methods. Experiments
using hundreds of matrices from diverse domains show that it often runs
$100\times$ faster than exact matrix products and $10\times$ faster than
current approximate methods. In the common case that one matrix is known ahead
of time, our method also has the interesting property that it requires zero
multiply-adds. These results suggest that a mixture of hashing, averaging, and
byte shuffling$-$the core operations of our method$-$could be a more promising
building block for machine learning than the sparsified, factorized, and/or
scalar quantized matrix products that have recently been the focus of
substantial research and hardware investment.
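
A rough sketch of the lookup-table idea in the spirit of the description above, for the common case that one matrix is known ahead of time; we substitute a PQ-style encoder with exact nearest-prototype search for the paper's learned hash function, so this illustrates the flavor of the approach, not its speed.

```python
# Lookup-table approximate matmul sketch: when B is fixed, A @ B reduces to
# encoding rows of A against per-subspace prototypes ("hashing"), then
# summing precomputed prototype @ B tables ("averaging" of partial products).
import numpy as np

def fit_tables(A_train, B, n_subspaces=4, n_protos=16, seed=0):
    rng = np.random.default_rng(seed)
    splits = np.array_split(np.arange(A_train.shape[1]), n_subspaces)
    protos, tables = [], []
    for idx in splits:
        sub = A_train[:, idx]
        # Crude prototype learning: random sampling stands in for k-means.
        P = sub[rng.choice(len(sub), n_protos, replace=False)]
        protos.append((idx, P))
        tables.append(P @ B[idx, :])        # (n_protos, n_cols) lookup table
    return protos, tables

def approx_matmul(A, protos, tables):
    out = np.zeros((A.shape[0], tables[0].shape[1]))
    for (idx, P), table in zip(protos, tables):
        sub = A[:, idx]
        # Encode: nearest prototype per row (exact NN here, hashing in the paper).
        d2 = ((sub[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        out += table[d2.argmin(1)]          # table lookup + accumulate
    return out

rng = np.random.default_rng(1)
A, B = rng.normal(size=(200, 32)), rng.normal(size=(32, 8))
protos, tables = fit_tables(A, B)
err = np.linalg.norm(approx_matmul(A, protos, tables) - A @ B) / np.linalg.norm(A @ B)
print(f"relative error = {err:.2f}")
```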
|
The ability to automatically extract Knowledge Graphs (KG) from a given
collection of documents is a long-standing problem in Artificial Intelligence.
One way to assess this capability is through the task of slot filling. Given an
entity query in the form [Entity, Slot, ?], a system is asked to `fill' the slot
by generating or extracting the missing value from a relevant passage or
passages. This capability is crucial for creating systems for automatic
knowledge base population, which is in ever-increasing demand, especially in
enterprise applications. Recently, there has been a promising direction in
evaluating language models in the same way we would evaluate knowledge bases,
and the task of slot filling is the most suitable to this intent. The recent
advancements in the field try to solve this task in an end-to-end fashion using
retrieval-based language models. Models like Retrieval Augmented Generation
(RAG) show surprisingly good performance without involving complex information
extraction pipelines. However, the results achieved by these models on the two
slot filling tasks in the KILT benchmark are still not at the level required by
real-world information extraction systems. In this paper, we describe several
strategies we adopted to improve the retriever and the generator of RAG in
order to make it a better slot filler. Our KGI0 system (available at
https://github.com/IBM/retrieve-write-slot-filling) reached the top-1 position
on the KILT leaderboard on both the T-REx and zsRE datasets with a large margin.
|
Let $R$ be a commutative ring with non-zero identity. In this paper, we
introduce the concept of weakly $J$-ideals as a new generalization of
$J$-ideals. We call a proper ideal $I$ of a ring $R$ a weakly $J$-ideal if
whenever $a,b\in R$ with $0\neq ab\in I$ and $a\notin J(R)$, then $a\in I$.
Many of the basic properties and characterizations of this concept are studied.
We investigate weakly $J$-ideals under various contexts of constructions such
as direct products, localizations, homomorphic images. Moreover, a number of
examples and results on weakly $J$-ideals are discussed. Finally, the third
section is devoted to the characterizations of these constructions in an
amalgamated ring along an ideal.
|
We compute the Borel equivariant cohomology ring of the left $K$-action on a
homogeneous space $G/H$, where $G$ is a connected Lie group, $H$ and $K$ are
closed, connected subgroups and $2$ and the torsion primes of the Lie groups
are units of the coefficient ring. As a special case, this gives the singular
cohomology rings of biquotients $H \backslash G / K$.
This depends on a version of the Eilenberg-Moore theorem developed in the
appendix, where a novel multiplicative structure on the two-sided bar
construction $\mathbf{B}(A',A,A'')$ is defined, valid when $A' \leftarrow A \to
A''$ is a pair of maps of homotopy Gerstenhaber algebras.
|
Scaling relations are very useful tools for estimating unknown stellar
quantities. Within this framework, eclipsing binaries are ideal for this goal
because their mass and radius are known with a very good level of accuracy,
leading to improved constraints on the models. We aim to provide empirical
relations for the mass and radius as a function of luminosity, metallicity, and
age. We investigate, in particular, the impact of metallicity and age on those
relations. We used a multi-dimensional fit approach based on the data from
DEBCat, an updated catalogue of eclipsing binary observations such as mass,
radius, luminosity, effective temperature, gravity, and metallicity. We used
the PARAM web interface for the Bayesian estimation of stellar parameters,
along with the stellar evolutionary code MESA, to estimate the binary age,
assuming a coeval hypothesis for both members. We derived the mass and
radius-luminosity-metallicity-age relations using 56 stars, with metallicity
and mass in the ranges -0.34<[Fe/H]<0.27 and 0.66<M/M$_\odot$<1.8. With that,
the observed mass and radius are reproduced with an accuracy of 3.5% and 5.9%,
respectively, which is consistent with other results in the literature. We
conclude that including the age in such relations increases the quality of the
fit, particularly in terms of the mass, as compared to the radius. On the other
hand, as other authors have noted, we observed a higher dispersion in the mass
relation than in that of the radius. We propose that this is due to a stellar
age effect.
|
Traditional normalization techniques (e.g., Batch Normalization and Instance
Normalization) generally and simplistically assume that training and test data
follow the same distribution. As distribution shifts are inevitable in
real-world applications, well-trained models with previous normalization
methods can perform badly in new environments. Can we develop new normalization
methods to improve generalization robustness under distribution shifts? In this
paper, we answer the question by proposing CrossNorm and SelfNorm. CrossNorm
exchanges channel-wise mean and variance between feature maps to enlarge
training distribution, while SelfNorm uses attention to recalibrate the
statistics to bridge gaps between training and test distributions. CrossNorm
and SelfNorm can complement each other, even though they explore different
directions in statistics usage. Extensive experiments on different fields
(vision and language), tasks (classification and segmentation), settings
(supervised and semi-supervised), and distribution shift types (synthetic and
natural) show their effectiveness. Code is available at
https://github.com/amazon-research/crossnorm-selfnorm
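
A standalone sketch of the CrossNorm operation described above; the pairing strategy and where it sits inside a network follow the paper, and this function is our simplification.

```python
# Minimal CrossNorm sketch: swap channel-wise mean/std between two feature
# maps to enlarge the training distribution.
import torch

def crossnorm(x, y, eps=1e-5):
    """x, y: feature maps of shape (N, C, H, W). Returns x re-normalized
    with y's channel statistics, and vice versa."""
    mx, sx = x.mean(dim=(2, 3), keepdim=True), x.std(dim=(2, 3), keepdim=True)
    my, sy = y.mean(dim=(2, 3), keepdim=True), y.std(dim=(2, 3), keepdim=True)
    x_new = (x - mx) / (sx + eps) * sy + my   # x wearing y's statistics
    y_new = (y - my) / (sy + eps) * sx + mx   # y wearing x's statistics
    return x_new, y_new

x, y = torch.randn(4, 3, 8, 8), torch.randn(4, 3, 8, 8) * 2 + 1
x_new, _ = crossnorm(x, y)
print(x_new.mean(dim=(2, 3))[0], x_new.std(dim=(2, 3))[0])  # approx y's stats
```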
|
We use a combination of Hipparcos space mission data with the USNO dedicated
ground-based astrometric program URAT-Bright designed to complement and verify
Gaia results for the brightest stars in the south to estimate the small
perturbations of observed proper motions caused by exoplanets. One of the 1423
bright stars in the program, $\delta$ Pav, stands out with a small proper
motion difference between our long-term estimate and Gaia EDR3 value, which
corresponds to a projected velocity of $(-17,+13)$ m s$^{-1}$. This difference
is significant at a 0.994 confidence in the RA component, owing to the
proximity of the star and the impressive precision of proper motions. The
effect is confirmed by a comparison of long-term EDR3-Hipparcos and short-term
Gaia EDR3 proper motions at a smaller velocity, but with formally absolute
confidence. We surmise that the close Solar analog $\delta$ Pav harbors a
long-period exoplanet similar to Jupiter.
|
This paper proposes a strategy to assess the robustness of different machine
learning models that involve natural language processing (NLP). The overall
approach relies upon a Search and Semantically Replace strategy that consists
of two steps: (1) Search, which identifies important parts in the text; (2)
Semantically Replace, which finds replacements for the important parts, and
constrains the replaced tokens with semantically similar words. We introduce
different types of Search and Semantically Replace methods designed
specifically for particular types of machine learning models. We also
investigate the effectiveness of this strategy and provide a general framework
to assess a variety of machine learning models. Finally, we provide an
empirical comparison of robustness among three model types, each with a
different text representation.
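A minimal sketch of the two-step strategy, assuming pre-computed word vectors
and a black-box model score; the deletion-based saliency and cosine-similarity
replacement are illustrative choices, not the paper's exact methods:

import numpy as np
from typing import Callable, Dict, List

def search_important(tokens: List[str], score: Callable[[List[str]], float]) -> int:
    # Search: find the token whose removal changes the model score the most.
    base = score(tokens)
    drops = [abs(base - score(tokens[:i] + tokens[i + 1:])) for i in range(len(tokens))]
    return int(np.argmax(drops))

def semantically_replace(token: str, vectors: Dict[str, np.ndarray]) -> str:
    # Replace: nearest neighbour of the token in embedding space (cosine similarity).
    v = vectors[token]
    sims = {w: float(v @ u) / (np.linalg.norm(v) * np.linalg.norm(u) + 1e-9)
            for w, u in vectors.items() if w != token}
    return max(sims, key=sims.get)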
|
In this paper, we study gapsets, focusing on how the maximum distance between
two consecutive elements influences the behaviour of the set. In particular, we
prove that the cardinality of the set of gapsets with genus $g$ such that the
maximum distance between two consecutive elements is $k$ equals the cardinality
of the set of gapsets with genus $g+1$ such that the maximum distance between
two consecutive elements is $k+1$, when $2g \leq 3k$.
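Writing $g(G)$ for the genus of a gapset $G$ and $d_{\max}(G)$ for the maximum
distance between two consecutive elements (the notation $d_{\max}$ is ours),
the result can be stated compactly as
\[
\#\{\,G : g(G)=g,\ d_{\max}(G)=k\,\} \;=\; \#\{\,G : g(G)=g+1,\ d_{\max}(G)=k+1\,\}
\qquad \text{whenever } 2g \leq 3k.
\]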
|
Understanding the competition between superconductivity and other ordered
states (such as antiferromagnetic or charge-density-wave (CDW) state) is a
central issue in condensed matter physics. The recently discovered layered
kagome metal AV3Sb5 (A = K, Rb, and Cs) provides a new playground for studying
the interplay of superconductivity and the CDW state in the presence of
nontrivial band topology. Here, we conduct high-pressure electrical transport
and magnetic susceptibility measurements on CsV3Sb5, which has the highest Tc
of 2.7 K in the AV3Sb5 family. While the CDW transition is monotonically
suppressed by pressure, superconductivity is enhanced with increasing pressure
up to P1 ~ 0.7 GPa; superconductivity is then unexpectedly suppressed up to a
pressure of about 1.1 GPa, beyond which Tc rises again with increasing
pressure. The CDW is completely suppressed at a critical pressure P2 ~ 2 GPa,
where Tc reaches a maximum of about 8 K. In contrast to the common dome-like
behavior, the pressure dependence of Tc thus shows an unexpected double-peak
structure. The unusual suppression of Tc at P1 is concomitant with a rapid
damping of quantum oscillations, a sudden enhancement of the residual
resistivity, and a rapid decrease of the magnetoresistance. Our discoveries
indicate an unusual competition between superconductivity and the CDW state in
the pressurized kagome lattice.
|
The missing data issue is ubiquitous in health studies. Variable selection in
the presence of both missing covariates and outcomes is an important
statistical research topic but has been less studied. Existing literature
focuses on parametric regression techniques that provide direct parameter
estimates of the regression model. Flexible nonparametric machine learning
methods considerably mitigate the reliance on parametric assumptions, but do
not provide a variable importance measure as naturally defined as the covariate
effects native to parametric models. We investigate a general variable
selection approach when both the covariates and outcomes can be missing at
random and have general missing data patterns. This approach exploits the
flexibility of machine learning modeling techniques and bootstrap imputation,
which is amenable to nonparametric methods in which the covariate effects are
not directly available. We conduct extensive simulations investigating the
practical operating characteristics of the proposed variable selection
approach, when combined with four tree-based machine learning methods, XGBoost,
Random Forests, Bayesian Additive Regression Trees (BART) and Conditional
Random Forests, and two commonly used parametric methods, lasso and backward
stepwise selection. Numeric results suggest that when combined with bootstrap
imputation, XGBoost and BART have the overall best variable selection
performance with respect to the $F_1$ score and Type I error across various
settings. In general, there is no significant difference in the variable
selection performance due to imputation methods. We further demonstrate the
methods via a case study of risk factors for 3-year incidence of metabolic
syndrome with data from the Study of Women's Health Across the Nation.
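A minimal sketch of the bootstrap-imputation selection loop, assuming
scikit-learn; mean imputation, a random forest learner, complete outcomes, and
the 50% inclusion threshold stand in for the paper's richer choices (e.g.,
XGBoost or BART with imputation of both covariates and outcomes):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer

def bootstrap_select(X, y, n_boot=100, top_k=5, thresh=0.5, seed=0):
    # X: numeric array with np.nan marking missing covariates; y: complete here.
    rng = np.random.default_rng(seed)
    hits = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))                       # bootstrap resample
        Xb = SimpleImputer(strategy="mean").fit_transform(X[idx])   # impute within resample
        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xb, y[idx])
        hits[np.argsort(model.feature_importances_)[-top_k:]] += 1  # tally top-k features
    return np.where(hits / n_boot >= thresh)[0]                     # stably selected variables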
|
We explore coherent control of Penning and associative ionization in cold
collisions of metastable He$^*({2}^3\text{S})$ atoms via the quantum
interference between different states of the He$_2^*$ collision complex. By
tuning the preparation coefficients of the initial atomic spin states, we can
benefit from the quantum interference between molecular channels to maximize or
minimize the cross sections for Penning and associative ionization. In
particular, we find that we can enhance the ionization ratio by 30% in the cold
regime. This work is significant for the coherent control of chemical reactions
in the cold and ultracold regime.
|
To make informed decisions in natural environments that change over time,
humans must update their beliefs as new observations are gathered. Studies
exploring human inference as a dynamical process that unfolds in time have
focused on situations in which the statistics of observations are
history-independent. Yet temporal structure is everywhere in nature, and yields
history-dependent observations. Do humans modify their inference processes
depending on the latent temporal statistics of their observations? We
investigate this question experimentally and theoretically using a change-point
inference task. We show that humans adapt their inference process to fine
aspects of the temporal structure in the statistics of stimuli. As such, humans
behave qualitatively in a Bayesian fashion but, quantitatively, deviate from
optimality. Perhaps more importantly, humans behave suboptimally in that
their responses are not deterministic, but variable. We show that this
variability itself is modulated by the temporal statistics of stimuli. To
elucidate the cognitive algorithm that yields this behavior, we investigate a
broad array of existing and new models that characterize different sources of
suboptimal deviations from Bayesian inference. While models with 'output
noise' that corrupts the response-selection process are natural candidates,
human behavior is best described by sampling-based inference models, in which
the main ingredient is a compressed approximation of the posterior, represented
through a modest set of random samples and updated over time. This result comes
to complement a growing literature on sample-based representation and learning
in humans.
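A minimal sketch of such a sample-based (particle-filter) inference scheme for
a binary change-point task; the Gaussian observation model, hazard rate, and
small particle count are illustrative assumptions, not the paper's fitted model:

import numpy as np

def sample_based_inference(x, n_particles=10, hazard=0.1, mus=(-1.0, 1.0), sigma=1.0, seed=0):
    # Posterior over a binary latent state, compressed into a few random samples.
    rng = np.random.default_rng(seed)
    z = rng.integers(0, 2, n_particles)                     # particle states
    beliefs = []
    for xt in x:
        flip = rng.random(n_particles) < hazard             # propagate through change-points
        z = np.where(flip, 1 - z, z)
        w = np.exp(-0.5 * ((xt - np.take(mus, z)) / sigma) ** 2)
        z = rng.choice(z, size=n_particles, p=w / w.sum())  # reweight and resample
        beliefs.append(z.mean())                            # sample-based P(z=1 | x_1..t)
    return np.array(beliefs)

The modest number of samples itself produces response variability of the kind
described above, which a deterministic Bayesian observer would not show.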
|
Unemployment benefits in the US were extended by up to 73 weeks during the
Great Recession. Equilibrium labor market theory indicates that extensions of
benefit duration impact not only search decisions by job seekers but also job
vacancy creation by employers. Most of the literature has focused on the
former, showing the partial-equilibrium effect that more generous unemployment
benefits discourage job search and lead to a rise in unemployment. To study the
total effect of UI benefit extensions on unemployment, I follow a border-county
identification strategy, exploit a quasi-differenced specification to control
for changes in future benefit policies, and apply an interactive fixed-effects
model to deal with unobserved shocks, so as to obtain unbiased and consistent
estimates. I find that benefit extensions have a statistically significant
positive effect on unemployment, consistent with the results of the prevailing
literature.
|
Human motion characteristics are used to monitor the progression of
neurological diseases and mood disorders. Since perceptions of emotions are
also interleaved with body posture and movements, emotion recognition from
human gait can be used to quantitatively monitor mood changes. Many existing
solutions often use shallow machine learning models with raw positional data or
manually extracted features to achieve this. However, gait is composed of many
highly expressive characteristics that can be used to identify human subjects,
and most solutions fail to address this, disregarding the subject's privacy.
This work introduces a novel deep neural network architecture to disentangle
human emotions and biometrics. In particular, we propose a cross-subject
transfer learning technique for training a multi-encoder autoencoder deep
neural network to learn disentangled latent representations of human motion
features. By disentangling subject biometrics from the gait data, we show that
the subject's privacy is preserved while the affect recognition performance
outperforms traditional methods. Furthermore, we exploit Guided Grad-CAM to
provide global explanations of the model's decision across gait cycles. We
compare the effectiveness of our method with that of existing methods at recognizing
emotions using both 3D temporal joint signals and manually extracted features.
We also show that this data can easily be exploited to expose a subject's
identity. Our method shows up to 7% improvement and highlights the joints with
the most significant influence across the average gait cycle.
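A minimal sketch of the multi-encoder autoencoder idea, assuming PyTorch and
flattened motion-feature vectors; the layer sizes are illustrative, and the
cross-subject transfer losses that actually drive the disentanglement are
omitted:

import torch
import torch.nn as nn

class DisentanglingAutoencoder(nn.Module):
    # One encoder per factor (affect, biometrics); a shared decoder
    # reconstructs the motion features from the concatenated latent codes.
    def __init__(self, d_in=96, d_code=32):
        super().__init__()
        def enc():
            return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_code))
        self.enc_affect, self.enc_biometric = enc(), enc()
        self.decoder = nn.Sequential(nn.Linear(2 * d_code, 64), nn.ReLU(), nn.Linear(64, d_in))

    def forward(self, x):
        z_affect, z_bio = self.enc_affect(x), self.enc_biometric(x)
        recon = self.decoder(torch.cat([z_affect, z_bio], dim=-1))
        return recon, z_affect, z_bio  # only z_affect would feed an emotion classifier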
|
The Standard Model (SM) of particle physics has achieved enormous success in
describing the interactions among the known fundamental constituents of nature,
yet it fails to describe phenomena for which there is very strong experimental
evidence, such as the existence of dark matter, and which point to the
existence of new physics not included in that model; beyond its existence,
experimental data, however, have not provided clear indications as to the
nature of that new physics. The effective field theory (EFT) approach, the
subject of this review, is designed for situations of this type; it provides a
consistent and unbiased framework within which to study new physics effects
whose existence is expected but whose detailed nature is known very
imperfectly. We will provide a description of this approach together with a
discussion of some of its basic theoretical aspects. We then consider
applications to high-energy phenomenology and conclude with a discussion of the
application of EFT techniques to the study of dark matter physics and its
possible interactions with the SM. In several of the applications we also
briefly discuss specific models that are ultraviolet complete and may realize
the effects described by the EFT.
|
To study the heavy quark production processes, we use the transverse momentum
dependent (TMD, or unintegrated) gluon distribution function in a proton
obtained recently using the Kimber-Martin-Ryskin prescription from the
Bessel-inspired behavior of parton densities at small Bjorken $x$ values. Our
results agree well with the latest HERA experimental data
for reduced cross sections $\sigma^{c\overline{c}}_{\rm red}(x,Q^2)$ and
$\sigma^{b\overline{b}}_{\rm red}(x,Q^2)$, and also for deep inelastic
structure functions $F_2^c(x,Q^2)$ and $F_2^b(x,Q^2)$ in a wide range of $x$
and $Q^2$ values. Comparisons with the predictions based on the
Ciafaloni-Catani-Fiorani-Marchesini evolution equation and with the results of
conventional pQCD calculations performed at the first three orders of the perturbative
expansion are presented.
|
Fast and automated inference of binary-lens, single-source (2L1S)
microlensing events with sampling-based Bayesian algorithms (e.g., Markov Chain
Monte Carlo; MCMC) is challenged on two fronts: high computational cost of
likelihood evaluations with microlensing simulation codes, and a pathological
parameter space where the negative-log-likelihood surface can contain a
multitude of local minima that are narrow and deep. Analysis of 2L1S events
usually involves grid searches over some parameters to locate approximate
solutions as a prerequisite to posterior sampling, an expensive process that
often requires human-in-the-loop domain expertise. As the next-generation,
space-based microlensing survey with the Roman Space Telescope is expected to
yield thousands of binary microlensing events, a new fast and automated method
is desirable. Here, we present a likelihood-free inference (LFI) approach named
amortized neural posterior estimation, where a neural density estimator (NDE)
learns a surrogate posterior $\hat{p}(\theta|x)$ as an observation-parametrized
conditional probability distribution, from pre-computed simulations over the
full prior space. Trained on 291,012 simulated Roman-like 2L1S events, the
NDE produces accurate and precise posteriors within seconds for any observation
within the prior support without requiring a domain expert in the loop, thus
allowing for real-time and automated inference. We show that the NDE also
captures expected posterior degeneracies. The NDE posterior could then be
refined into the exact posterior with a downstream MCMC sampler with minimal
burn-in steps.
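A minimal sketch of the amortized training loop, assuming PyTorch; a
diagonal-Gaussian conditional density stands in for the more expressive neural
density estimator used in the paper:

import torch
import torch.nn as nn

class AmortizedPosterior(nn.Module):
    # q(theta | x): a conditional density whose parameters are produced by a
    # network that reads the observation x (e.g., a light curve).
    def __init__(self, d_x, d_theta, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_x, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * d_theta))

    def log_prob(self, theta, x):
        mu, log_sigma = self.net(x).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_sigma.exp()).log_prob(theta).sum(-1)

def train_step(model, optimizer, theta, x):
    # (theta, x) pairs come from simulations drawn over the prior; maximizing
    # E[log q(theta | x)] requires no likelihood evaluations at inference time.
    optimizer.zero_grad()
    loss = -model.log_prob(theta, x).mean()
    loss.backward()
    optimizer.step()
    return loss.item()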
|
After disasters, distribution networks have to be restored by repair,
reconfiguration, and power dispatch. During the restoration process, changes
can occur in real time that deviate from the situations considered in
pre-designed planning strategies. That may result in the pre-designed plan
becoming far from optimal or even unimplementable. This paper proposes a
centralized-distributed bi-level optimization method to solve the real-time
restoration planning problem. The first level determines integer variables
related to routing of the crews and the status of the switches using a genetic
algorithm (GA), while the second level determines the dispatch of
active/reactive power by using distributed model predictive control (DMPC). A
novel Aitken-DMPC solver is proposed to accelerate convergence and to make the
method suitable for real-time decision making. A case study based on the IEEE
123-bus system is considered, and the acceleration performance of the proposed
Aitken-DMPC solver is evaluated and compared with the standard DMPC method.
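For intuition, the acceleration in an Aitken-DMPC-style solver is the classical
Aitken delta-squared extrapolation of a fixed-point iteration; a scalar sketch
(the DMPC consensus iterates would play the role of x):

def aitken(g, x, tol=1e-10, max_iter=100):
    # Accelerate the fixed-point iteration x <- g(x) via Aitken's delta-squared.
    for _ in range(max_iter):
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < tol:
            return x2
        x_next = x - (x1 - x) ** 2 / denom  # delta-squared update
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# e.g., aitken(lambda t: 0.5 * (t + 2.0 / t), 1.0) converges rapidly to sqrt(2)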
|
We present extensive, well-sampled optical and ultraviolet photometry and
optical spectra of the Type Ia supernova (SN Ia) 2017hpa. The light curves
indicate that SN 2017hpa is a normal SN Ia with an absolute peak magnitude of
$M_{\rm max}^{B} \approx -19.12\pm0.11$ mag and a post-peak decline rate
$\Delta m_{15}(B) = 1.02\pm0.07$ mag. According to the quasibolometric light curve, we derive a
peak luminosity of 1.25$\times$10$^{43}$ erg s$^{-1}$ and a $^{56}$Ni mass of
0.63$\pm$0.02 $M_{\odot}$. The spectral evolution of SN 2017hpa is similar to
that of normal SNe Ia, while it exhibits unusually rapid velocity evolution
resembling that of SN 1991bg-like SNe Ia or the high-velocity subclass of SNe
Ia, with a post-peak velocity gradient of $\sim$ 130$\pm$7 km s$^{-1}$
d$^{-1}$. Moreover, its early spectra ($t < -7.9$ d) show a prominent
C II $\lambda$6580 absorption feature, which disappeared in the
near-maximum-light spectra but reemerged at phases from $t \sim +8.7$ d to $t
\sim +11.7$ d after maximum light. This implies that some unburned carbon may
be mixed deep into the inner layers, which is supported by the low
C II $\lambda$6580 to Si II $\lambda$6355 velocity ratio ($\sim 0.81$) observed
in SN 2017hpa. The O I $\lambda$7774 line shows a velocity distribution like that of carbon.
prominent carbon feature, low velocity seen in carbon and oxygen, and large
velocity gradient make SN 2017hpa stand out from other normal SNe Ia, and are
more consistent with predictions from a violent merger of two white dwarfs.
Detailed modelling is still needed to reveal the nature of SN 2017hpa.
|
The law of a positive infinitely divisible process with no drift is
characterized by its L\'evy measure on the paths space. Based on recent results
of the two authors, it is shown that even for simple examples of such
processes, the knowledge of their L\'evy measures allows one to obtain remarkable
distributional identities.
|
Gravitational waves (GWs) at ultra-low frequencies (${\lesssim
100\,\mathrm{nHz}}$) are key to understanding the assembly and evolution of
astrophysical black hole (BH) binaries with masses $\sim
10^{6}-10^{9}\,M_\odot$ at low redshifts. These GWs also offer a unique window
into a wide variety of cosmological processes. Pulsar timing arrays (PTAs) are
beginning to measure this stochastic signal at $\sim 1-100\,\mathrm{nHz}$ and
the combination of data from several arrays is expected to confirm a detection
in the next few years. The dominant physical processes generating gravitational
radiation at $\mathrm{nHz}$ frequencies are still uncertain. PTA observations
alone are currently unable to distinguish a binary BH astrophysical foreground
from a cosmological background due to, say, a first order phase transition at a
temperature $\sim 1-100\,\mathrm{MeV}$ in a weakly-interacting dark sector.
This letter explores the extent to which incorporating integrated bounds on the
ultra-low frequency GW spectrum from any combination of cosmic microwave
background, big bang nucleosynthesis, or astrometric observations can help to
break this degeneracy.
|
Fluctuations of conserved charges are sensitive to the QCD phase transition
and a possible critical endpoint in the phase diagram at finite density. In
this work, we compute the baryon number fluctuations up to tenth order at
finite temperature and density. This is done in a QCD-assisted effective theory
that accurately captures the quantum and in-medium effects of QCD at low
energies. A direct computation at finite density allows us to assess the
applicability of expansions around vanishing density. By using different
freeze-out scenarios in heavy-ion collisions, we translate these results into
baryon number fluctuations as a function of collision energy. We show that a
non-monotonic energy dependence of baryon number fluctuations can arise in the
non-critical crossover region of the phase diagram. Our results compare well
with recent experimental measurements of the kurtosis and the sixth-order
cumulant of the net-proton distribution from the STAR collaboration. They
indicate that the experimentally observed non-monotonic energy dependence of
fourth-order net-proton fluctuations is highly non-trivial. It could be an
experimental signature of an increasingly sharp chiral crossover and may
indicate a QCD critical point. The physics implications and necessary upgrades
of our analysis are discussed in detail.
|
Contributions: This paper investigates the relations between undergraduate
software architecture students' self-confidence and their course expectations,
cognitive levels, preferred learning methods, and critical thinking.
Background: These students often lack self-confidence in their ability to use
their knowledge to design software architectures. Intended Outcomes:
Self-confidence is expected to be related to the students' course expectations,
cognitive levels, preferred learning methods, and critical thinking.
Application Design: We developed a questionnaire with open-ended questions to
assess the self-confidence levels and related factors, which was taken by
110 students over two semesters. The students' answers were coded and
analyzed afterward. Findings: We found that self-confidence is weakly
associated with the students' course expectations and critical thinking and
independent from their cognitive levels and preferred learning methods. The
results suggest that to improve the self-confidence of the students, the
instructors should ensure that the students have "correct" course expectations
and work on improving the students' critical thinking capabilities.
|
High-quality 4D reconstruction of human performance with complex interactions
to various objects is essential in real-world scenarios, which enables numerous
immersive VR/AR applications. However, recent advances still fail to provide
reliable performance reconstruction, suffering from challenging interaction
patterns and severe occlusions, especially for the monocular setting. To fill
this gap, in this paper, we propose RobustFusion, a robust volumetric
performance reconstruction system for human-object interaction scenarios using
only a single RGBD sensor, which combines various data-driven visual and
interaction cues to handle the complex interaction patterns and severe
occlusions. We propose a semantic-aware scene decoupling scheme to model the
occlusions explicitly, with a segmentation refinement and robust object
tracking to prevent disentanglement uncertainty and maintain temporal
consistency. We further introduce a robust performance capture scheme with the
aid of various data-driven cues, which not only enables re-initialization
ability, but also models the complex human-object interaction patterns in a
data-driven manner. To this end, we introduce a spatial relation prior to
prevent implausible intersections, as well as data-driven interaction cues to
maintain natural motions, especially for those regions under severe
human-object occlusions. We also adopt an adaptive fusion scheme for temporally
coherent human-object reconstruction with occlusion analysis and human parsing
cues. Extensive experiments demonstrate the effectiveness of our approach to
achieve high-quality 4D human performance reconstruction under complex
human-object interactions whilst still maintaining the lightweight monocular
setting.
|
This graduate textbook on machine learning tells a story of how patterns in
data support predictions and consequential actions. Starting with the
foundations of decision making, we cover representation, optimization, and
generalization as the constituents of supervised learning. A chapter on
datasets as benchmarks examines their histories and scientific bases.
Self-contained introductions to causality, the practice of causal inference,
sequential decision making, and reinforcement learning equip the reader with
concepts and tools to reason about actions and their consequences. Throughout,
the text discusses historical context and societal impact. We invite readers
from all backgrounds; some experience with probability, calculus, and linear
algebra suffices.
|
System-level test, or SLT, is an increasingly important process step in
today's integrated circuit testing flows. Broadly speaking, SLT aims at
executing functional workloads in operational modes. In this paper, we
consolidate available knowledge about what SLT is precisely and why it is used
despite its considerable costs and complexities. We discuss the types of
failures covered by SLT, and outline approaches to quality assessment, test
generation and root-cause diagnosis in the context of SLT. Observing that the
theoretical understanding for all these questions has not yet reached the level
of maturity of the more conventional structural and functional test methods, we
outline new and promising directions for methodical developments leveraging
recent findings from software engineering.
|
Novel Object Captioning is a zero-shot Image Captioning task requiring
describing objects not seen in the training captions, but for which information
is available from external object detectors. The key challenge is to select and
describe all salient detected novel objects in the input images. In this paper,
we focus on this challenge and propose the ECOL-R model (Encouraging Copying of
Object Labels with Reinforced Learning), a copy-augmented transformer model
that is encouraged to accurately describe the novel object labels. This is
achieved via a specialised reward function in the SCST reinforcement learning
framework (Rennie et al., 2017) that encourages novel object mentions while
maintaining the caption quality. We further restrict the SCST training to the
images where detected objects are mentioned in reference captions to train the
ECOL-R model. We additionally improve our copy mechanism via Abstract Labels,
which transfer knowledge from known to novel object types, and a Morphological
Selector, which determines the appropriate inflected forms of novel object
labels. The resulting model sets a new state of the art on the nocaps (Agrawal et
al., 2019) and held-out COCO (Hendricks et al., 2016) benchmarks.
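A minimal sketch of a copy-encouraging SCST reward of the kind described above;
the additive form, the weight lam, and the whitespace matching are illustrative
assumptions, not the paper's exact reward:

def copy_reward(caption: str, novel_labels: set, quality: float, lam: float = 1.0) -> float:
    # quality: a caption-quality score (e.g., CIDEr) that keeps captions fluent;
    # the bonus counts distinct novel object labels actually mentioned.
    mentioned = {w for w in caption.lower().split() if w in novel_labels}
    return quality + lam * len(mentioned)

# SCST then weights the policy gradient by r(sampled caption) - r(greedy caption).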
|
The Internet of Things (IoT) is becoming an indispensable part of everyday
life, enabling a variety of emerging services and applications. However, the
presence of rogue IoT devices has exposed the IoT to untold risks with severe
consequences. The first step in securing the IoT is detecting rogue IoT devices
and identifying legitimate ones. Conventional approaches use cryptographic
mechanisms to authenticate and verify legitimate devices' identities. However,
cryptographic protocols are not available in many systems. Meanwhile, these
methods are less effective when legitimate devices can be exploited or
encryption keys are disclosed. Therefore, non-cryptographic IoT device
identification and rogue device detection become efficient solutions to secure
existing systems and will provide additional protection to systems with
cryptographic protocols. Non-cryptographic approaches require more effort and
have not yet been adequately investigated. In this paper, we provide a comprehensive
survey on machine learning technologies for the identification of IoT devices
along with the detection of compromised or falsified ones from the viewpoint of
passive surveillance agents or network operators. We classify the IoT device
identification and detection into four categories: device-specific pattern
recognition, Deep Learning enabled device identification, unsupervised device
identification, and abnormal device detection. Meanwhile, we discuss various
ML-related enabling technologies for this purpose. These enabling technologies
include learning algorithms, feature engineering on network traffic traces and
wireless signals, continual learning, and abnormality detection.
|
Nonstationary signals are commonly analyzed and processed in the
time-frequency (T-F) domain that is obtained by the discrete Gabor transform
(DGT). The T-F representation obtained by DGT is spread due to windowing, which
may degrade the performance of T-F domain analysis and processing. To obtain a
well-localized T-F representation, sparsity-aware methods using $\ell_1$-norm
have been studied. However, they need to discretize a continuous parameter onto
a grid, which causes a model mismatch. In this paper, we propose a method of
estimating a sparse T-F representation using atomic norm. The atomic norm
enables sparse optimization without discretization of continuous parameters.
Numerical experiments show that the T-F representation obtained by the proposed
method is sparser than those obtained by conventional methods.
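For reference, the atomic norm of a signal $x$ over an atom set $\mathcal{A}$
(here, Gabor atoms with continuous time-frequency parameters) is the standard
gauge function
\[
\|x\|_{\mathcal{A}} \;=\; \inf\Big\{\, \sum_{k} |c_k| \;:\; x = \sum_{k} c_k a_k,\ a_k \in \mathcal{A} \,\Big\},
\]
so minimizing it promotes representations built from few atoms while leaving
the atom parameters continuous, avoiding the grid mismatch of $\ell_1$-based methods.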
|
This paper explores methods for constructing low multipole temperature and
polarisation likelihoods from maps of the cosmic microwave background
anisotropies that have complex noise properties and partial sky coverage. We
use Planck 2018 High Frequency Instrument (HFI) and updated SRoll2 temperature
and polarisation maps to test our methods. We present three likelihood
approximations based on quadratic cross spectrum estimators: (i) a variant of
the simulation-based likelihood (SimBaL) techniques used in the Planck legacy
papers to produce a low multipole EE likelihood; (ii) a semi-analytical
likelihood approximation (momento) based on the principle of maximum entropy;
(iii) a density-estimation `likelihood-free' scheme (DELFI). Approaches (ii)
and (iii) can be generalised to produce low multipole joint
temperature-polarisation (TTTEEE) likelihoods. We present extensive tests of
these methods on simulations with realistic correlated noise. We then analyse
the Planck data and confirm the robustness of our method and likelihoods on
multiple inter- and intra-frequency detector set combinations of SRoll2 maps.
The three likelihood techniques give consistent results and support a low value
of the optical depth to reionization, tau, from the HFI. Our best estimate of
tau comes from combining the low multipole SRoll2 momento (TTTEEE) likelihood
with the CamSpec high multipole likelihood and is $\tau = 0.0627^{+0.0050}_{-0.0058}$.
This is consistent with the SRoll2 team's determination of tau, though slightly
higher by 0.5 sigma, mainly because of our joint treatment of temperature and
polarisation.
|
Drones are effective for reducing human activity and interactions by
performing tasks such as exploring and inspecting new environments, monitoring
resources and delivering packages. Drones need a controller to maintain
stability and to reach their goal. The most well-known drone controllers are
proportional-integral-derivative (PID) and proportional-derivative (PD)
controllers. However, the controller parameters need to be tuned and optimized.
In this paper, we introduce the use of two evolutionary algorithms,
biogeography-based optimization~(BBO) and particle swarm optimization (PSO),
for multi-objective optimization (MOO) to tune the parameters of the PD
controller of a drone. The combination of MOO, BBO, and PSO results in various
methods for optimization: vector evaluated BBO and PSO, denoted as VEBBO and
VEPSO; and non-dominated sorting BBO and PSO, denoted as NSBBO and NSPSO. The
multi-objective cost function is based on tracking errors for the four states
of the system. Two criteria for evaluating the Pareto fronts of the
optimization methods, normalized hypervolume and relative coverage, are used to
compare performance. Results show that NSBBO generally performs better than the
other methods.
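A minimal sketch of the PD law being tuned and a tracking-error cost vector of
the kind used as the multi-objective fitness; the gains, the toy
double-integrator dynamics, and the absolute-error objective are illustrative
assumptions:

import numpy as np

def pd_control(err, derr, kp, kd):
    return kp * err + kd * derr  # PD law, one (kp, kd) pair per tracked state

def tracking_costs(ref, kp, kd, dt=0.01, steps=2000):
    # Toy 1-D double integrator per state; returns integrated |error| per state.
    x, v = np.zeros_like(ref), np.zeros_like(ref)
    cost = np.zeros_like(ref)
    for _ in range(steps):
        e = ref - x
        u = pd_control(e, -v, kp, kd)
        v += u * dt
        x += v * dt
        cost += np.abs(e) * dt
    return cost  # one objective per tracked state

# e.g., tracking_costs(np.ones(4), kp=np.full(4, 4.0), kd=np.full(4, 2.0));
# a MOO method such as NSBBO/NSPSO searches (kp, kd) to trade off these four
# objectives and build a Pareto front.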
|
Deep neural networks (DNNs) used for brain-computer-interface (BCI)
classification are commonly expected to learn general features when trained
across a variety of contexts, such that these features could be fine-tuned to
specific contexts. While some success is found in such an approach, we suggest
that this interpretation is limited and an alternative would better leverage
the newly (publicly) available massive EEG datasets. We consider how to adapt
techniques and architectures used for language modelling (LM), that appear
capable of ingesting awesome amounts of data, towards the development of
encephalography modelling (EM) with DNNs in the same vein. We specifically
adapt an approach effectively used for automatic speech recognition, which
similarly (to LMs) uses a self-supervised training objective to learn
compressed representations of raw data signals. After adaptation to EEG, we
find that a single pre-trained model is capable of modelling completely novel
raw EEG sequences recorded with differing hardware, and different subjects
performing different tasks. Furthermore, both the internal representations of
this model and the entire architecture can be fine-tuned to a variety of
downstream BCI and EEG classification tasks, outperforming prior work in more
task-specific (sleep stage classification) self-supervision.
|
We investigate the behavior of the Lyapunov spectrum of a linear
discrete-time system under the action of small perturbations in order to obtain
some verifiable conditions for stability and openness of the Lyapunov spectrum.
To this end we introduce the concepts of broken-away solutions and split
systems. The main results obtained are a necessary condition for stability and
a sufficient condition for the openness of the Lyapunov spectrum, which is
given in terms of the system itself. Finally, examples of using the obtained
results are presented.
|
We prove fixed point theorems in a space with a distance function that takes
values in a partially ordered monoid. On the one hand, such an approach allows
one to generalize some fixed point theorems in a broad class of spaces,
including metric and uniform spaces. On the other hand, compared to the
so-called cone metric spaces and $K$-metric spaces, we do not require that the
distance function range has a linear structure. We also consider several
applications of the obtained fixed point theorems. In particular, we consider
the questions of the existence of solutions of the Fredholm integral equation
in $L$-spaces.
|
In this work, we propose augmented KRnets, including both discrete and
continuous models. One difficulty in flow-based generative modeling is to
maintain the invertibility of the transport map, which is often a trade-off
between effectiveness and robustness. The exact invertibility has been achieved
in the real NVP using a specific pattern to exchange information between two
separated groups of dimensions. KRnet has been developed to enhance the
information exchange among data dimensions by incorporating the
Knothe-Rosenblatt rearrangement into the structure of the transport map. Due to
the maintenance of exact invertibility, a full nonlinear update of all data
dimensions needs three iterations in KRnet. To alleviate this issue, we will
add augmented dimensions that act as a channel for communications among the
data dimensions. In the augmented KRnet, a fully nonlinear update is achieved
in two iterations. We also show that the augmented KRnet can be reformulated as
the discretization of a neural ODE, where the exact invertibility is kept such
that the adjoint method can be formulated with respect to the discretized ODE
to obtain the exact gradient. Numerical experiments have been implemented to
demonstrate the effectiveness of our models.
|
In this paper, we study 5d $\mathcal{N}=1$ $Sp(N)$ gauge theory with $N_f (
\leq 2N + 3 )$ flavors based on 5-brane web diagram with $O5$-plane. On the one
hand, we discuss Seiberg-Witten curve based on the dual graph of the 5-brane
web with $O5$-plane. On the other hand, we compute the Nekrasov partition
function based on the topological vertex formalism with $O5$-plane. Rewriting
it in terms of profile functions, we obtain the saddle point equation for the
profile function after taking thermodynamic limit. By introducing the
resolvent, we derive the Seiberg-Witten curve and its boundary conditions as
well as its relation to the prepotential in terms of the cycle integrals. They
coincide with those directly obtained from the dual graph of the 5-brane web
with $O5$-plane. This agreement gives further evidence for mirror symmetry
which relates the Nekrasov partition function with the Seiberg-Witten curve in
the case with an orientifold plane, and sheds light on non-toric Calabi-Yau
3-folds including D-type singularities.
|
In this paper, we study the problem of physical layer security in the uplink
of millimeter-wave massive multiple-input multiple-output (MIMO) networks and
propose a jamming detection and suppression method. The proposed method is
based on directional information of the received signals at the base station
antenna array. The proposed jamming detection method can accurately detect both
the existence and direction of the jammer using the received pilot signals in
the training phase. The obtained information is then exploited to develop a
channel estimator that excludes the jammer's angular subspace from received
training signals. The estimated channel information is then used for designing
a combiner at the base station that is able to effectively cancel out the
deliberate interference of the jammer. By numerical simulations, we evaluate
the performance of the proposed jamming detection method in terms of correct
detection probability and false alarm probability and show its effectiveness
when the jammer's power is substantially lower than the user's power. Also, our
results show that the proposed jamming suppression method can achieve a
spectral efficiency very close to that of the case with no jamming in the network.
|
In this study, using low-temperature scanning tunneling microscopy (STM), we
focus on understanding the native defects in pristine \textit{1T}-TiSe$_2$ at
the atomic scale. We probe how they perturb the charge density waves (CDWs) and
lead to local domain formation. These defects influence the correlation length
of CDWs. We establish a connection between the suppression of CDWs and Ti
intercalation, and show how this supports the exciton condensation model of CDW
formation in \textit{1T}-TiSe$_2$.
|
We developed recently [A. Fert\'e, et al., J. Phys. Chem. Lett. 11, 4359
(2020)] a method to compute single site double core hole (ssDCH or K$^{-2}$)
spectra. We refer to that method as NOTA+CIPSI. In the present paper this
method is applied to the O K$^{-2}$ spectrum of the CO$_2$ molecule, and we use
this as an example to discuss in detail its convergence properties. Using this
approach, a theoretical spectra in excellent agreement with the experimental
one is obtained. Thanks to a thorough interpretation of the shake-up states
responsible for the main satellite peaks and with the help of a comparison with
the O K$^{-2}$ spectrum of CO, we can highlight the clear signature of the two
non-equivalent carbon-oxygen bonds in the oxygen ssDCH CO$_2$ dication.
|
Non-negative matrix factorization (NMF) is a powerful tool for dimensionality
reduction and clustering. Unfortunately, the interpretation of the clustering
results from NMF is difficult, especially for the high-dimensional biological
data without effective feature selection. In this paper, we first introduce a
row-sparse NMF with $\ell_{2,0}$-norm constraint (NMF_$\ell_{20}$), where the
basis matrix $W$ is constrained by the $\ell_{2,0}$-norm, such that $W$ has a
row-sparsity pattern with feature selection. It is a challenge to solve the
model, because the $\ell_{2,0}$-norm is non-convex and non-smooth. Fortunately,
we prove that the $\ell_{2,0}$-norm satisfies the Kurdyka-\L{ojasiewicz}
property. Based on the finding, we present a proximal alternating linearized
minimization algorithm and its monotone accelerated version to solve the
NMF_$\ell_{20}$ model. In addition, we also present an orthogonal NMF with
$\ell_{2,0}$-norm constraint (ONMF_$\ell_{20}$) to enhance the clustering
performance by using a non-negative orthogonal constraint. We propose an
efficient algorithm to solve ONMF_$\ell_{20}$ by transforming it into a series
of constrained and penalized matrix factorization problems. The results on
numerical and scRNA-seq datasets demonstrate the efficiency of our methods in
comparison with existing methods.
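A minimal sketch of the row-hard-thresholding step that enforces an
$\ell_{2,0}$ row-sparsity constraint (at most s nonzero rows of $W$) inside a
projected or proximal update; the gradient step it would follow is omitted:

import numpy as np

def project_row_l20(W: np.ndarray, s: int) -> np.ndarray:
    # Projection onto {W : at most s nonzero rows}: keep the s rows with the
    # largest Euclidean norm (i.e., the selected features), zero out the rest.
    norms = np.linalg.norm(W, axis=1)
    keep = np.argsort(norms)[-s:]
    out = np.zeros_like(W)
    out[keep] = W[keep]
    return out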
|
We propose a program at B-factories of inclusive, multi-track displaced
vertex searches, which are expected to be low background and give excellent
sensitivity to non-minimal hidden sectors. Multi-particle hidden sectors often
include long-lived particles (LLPs) which result from approximate symmetries,
and we classify the possible decays of GeV-scale LLPs in an effective field
theory framework. Considering several LLP production modes, including dark
photons and dark Higgs bosons, we study the sensitivity of LLP searches with
different numbers of displaced vertices per event and track requirements per
displaced vertex, showing that inclusive searches can have sensitivity to a
large range of hidden sector models that are otherwise unconstrained by current
or planned searches.
|
We present a derivation of the integral fluctuation theorem (IFT) for
isolated quantum systems based on some natural assumptions on transition
probabilities. Under these assumptions of "stiffness" and "smoothness" the IFT
immediately follows for microcanonical and pure quantum states. We numerically
check the IFT as well as the validity of our assumptions by analyzing two
exemplary systems. We have been informed that T. Sagawa and his co-workers
found comparable numerical results and are preparing a corresponding
paper, which should be available on the same day as the present text. We
recommend reading their submission.
|
Erbium-doped lithium niobate on insulator (Er:LNOI) is a promising platform
for photonic integrated circuits as it adds gain to the LNOI system and enables
on-chip lasers and amplifiers. A challenge for Er:LNOI laser is to increase its
output power while maintaining single-frequency and single (-transverse)-mode
operation. In this work, we demonstrate that single-frequency and single-mode
operation can be achieved even in a single multi-mode Er:LNOI microring by
introducing mode-dependent loss and gain competition. In a single microring
with a free spectral range of 192 GHz, we have achieved single-mode lasing with
an output power of 2.1 microwatt, a side-mode suppression of 35.5 dB, and a
linewidth of 1.27 MHz.
|
In this paper, we study the response of large models from the BERT family to
incoherent inputs that should confuse any model that claims to understand
natural language. We define simple heuristics to construct such examples. Our
experiments show that state-of-the-art models consistently fail to recognize
them as ill-formed, and instead produce high confidence predictions on them. As
a consequence of this phenomenon, models trained on sentences with randomly
permuted word order perform close to state-of-the-art models. To alleviate
these issues, we show that if models are explicitly trained to recognize
invalid inputs, they can be robust to such attacks without a drop in
performance.
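A minimal sketch of such a heuristic probe, assuming the Hugging Face
transformers pipeline with an off-the-shelf BERT-family classifier; the
sentence and the word-shuffle heuristic are illustrative:

import random
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default fine-tuned model

sentence = "the movie was surprisingly good and I enjoyed every minute"
words = sentence.split()
random.shuffle(words)                        # destroy the word order
shuffled = " ".join(words)

# State-of-the-art models often report similarly high confidence on both inputs.
print(classifier(sentence))
print(classifier(shuffled))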
|
Supergranules create a peak in the spatial spectrum of photospheric velocity
features. They have some properties of convection cells but their origin is
still being debated in the literature. The time-distance helioseismology
constitutes a method that is suitable for investigating the deep structure of
supergranules. Our aim is to construct a model of the flows in the average
supergranular cell using fully consistent time-distance inverse methodology. We
used the Multi-Channel Subtractive Optimally Localised Averaging inversion
method with regularisation of the cross-talk. We combined the difference and
the mean travel-time averaging geometries. We applied this methodology to
travel-time maps averaged over more than 10000 individual supergranular cells.
These cells were detected automatically in travel-time maps computed for 64
quiet days around the disc centre. The ensemble averaging method allows us to
significantly improve the signal-to-noise ratio and to obtain a clear picture
of the flows in the average supergranule. We found near-surface divergent
horizontal flows which quickly and monotonically weakened with depth; they
became particularly weak at the depth of about 7 Mm, where they even apparently
switched sign. To learn about the vertical component, we integrated the
continuity equation from the surface. The derived estimates of the vertical
flow depicted a sub-surface increase from about 5 m/s at the surface to about
35 m/s at the depth of about 3 Mm followed by a monotonic decrease towards greater
depths. The vertical flow remained positive (an upflow) and became
indistinguishable from the background at the depth of about 15 Mm. We further
detected a systematic flow in the longitudinal direction. The course of this
systematic flow with depth agrees well with the model of the solar rotation in
the sub-surface layers.
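For reference, the vertical-flow estimate follows from integrating the
mass-conservation (continuity) equation downward from the surface; a sketch of
the standard relation, up to sign conventions, with $z$ measured as depth,
$\rho$ a background density model, and $\vec{v}_h$ the inverted horizontal flow:
\[
\nabla_h \cdot (\rho\,\vec{v}_h) + \frac{\partial (\rho\, v_z)}{\partial z} = 0
\quad\Longrightarrow\quad
v_z(z) = \frac{1}{\rho(z)}\Big[\rho(0)\,v_z(0) - \int_0^z \nabla_h \cdot (\rho\,\vec{v}_h)\,\mathrm{d}z'\Big].
\]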
|
Statistical learning theory provides the foundation for applied machine
learning and its various successful applications in computer vision, natural
language processing and other scientific domains. The theory, however, does not
take into account the unique challenges of performing statistical learning in
geospatial settings. For instance, it is well known that model errors cannot be
assumed to be independent and identically distributed in geospatial (a.k.a.
regionalized) variables due to spatial correlation; and trends caused by
geophysical processes lead to covariate shifts between the domain where the
model was trained and the domain where it will be applied, which in turn harm
the use of classical learning methodologies that rely on random samples of the
data. In this work, we introduce the geostatistical (transfer) learning
problem, and illustrate the challenges of learning from geospatial data by
assessing widely-used methods for estimating generalization error of learning
models, under covariate shift and spatial correlation. Experiments with
synthetic Gaussian process data as well as with real data from geophysical
surveys in New Zealand indicate that none of the methods are adequate for model
selection in a geospatial context. We provide general guidelines regarding the
choice of these methods in practice while new methods are being actively
researched.
|
We show that the widely used relaxation time approximation to the
relativistic Boltzmann equation contains basic flaws, being incompatible with
microscopic and macroscopic conservation laws. We propose a new approximation
that fixes such fundamental issues and maintains the basic properties of the
linearized Boltzmann collision operator. We show how this correction affects
transport coefficients, such as the bulk viscosity and particle diffusion.
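For context, the approximation being critiqued is the standard
(Anderson-Witting) relaxation time ansatz for the collision term, written here
in its usual form with relaxation time $\tau_R$, fluid velocity $u^\mu$, and
local-equilibrium distribution $f_{\mathrm{eq}}$:
\[
p^{\mu}\partial_{\mu} f \;=\; C[f] \;\approx\; -\,\frac{u_{\mu}p^{\mu}}{\tau_R}\,\bigl(f - f_{\mathrm{eq}}\bigr).
\]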
|
Estimating the data density is one of the challenging problems in deep
learning. In this paper, we present a simple yet effective method for
estimating the data density using a deep neural network and the
Donsker-Varadhan variational lower bound on the KL divergence. We show that the
optimal critic function associated with the Donsker-Varadhan representation on
the KL divergence between the data and the uniform distribution can estimate
the data density. We also present the deep neural network-based modeling and
its stochastic learning. The experimental results demonstrate that the proposed
method is competitive with previous methods and holds promise for a variety of
applications.
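For reference, the Donsker-Varadhan representation underlying the method is the
standard variational identity, with $T$ the critic function:
\[
D_{\mathrm{KL}}(P\,\|\,Q) \;=\; \sup_{T}\Big\{ \mathbb{E}_{x\sim P}\big[T(x)\big] - \log \mathbb{E}_{x\sim Q}\big[e^{T(x)}\big] \Big\},
\]
with the supremum attained at $T^{*}(x) = \log \tfrac{dP}{dQ}(x) + \text{const}$.
Taking $P$ to be the data distribution and $Q$ uniform on the support makes
$e^{T^{*}(x)}$ proportional to the data density $p(x)$.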
|
A unified framework for the Chevalley and equitable presentation of
$U_q(sl_2)$ is introduced. It is given in terms of a system of Freidel-Maillet
type equations satisfied by a pair of quantum K-operators ${\cal K}^\pm$, whose
entries are expressed in terms of either Chevalley or equitable generators. The
Hopf algebra structure is reconsidered in light of this presentation, and
intertwining relations for K-operators are obtained. A K-operator solving a
spectral parameter dependent Freidel-Maillet equation is also considered.
Specializations to $U_q(sl_2)$ admit a decomposition in terms of ${\cal
K}^\pm$. Explicit examples of K-matrices are constructed.
|
Automatic medical image segmentation based on Computed Tomography (CT) has
been widely applied for computer-aided surgery as a prerequisite. With the
development of deep learning technologies, deep convolutional neural networks
(DCNNs) have shown robust performance in automated semantic segmentation of
medical images. However, semantic segmentation algorithms based on DCNNs still
meet the challenges of feature loss between encoder and decoder, multi-scale
objects, the restricted field of view of filters, and the lack of medical image data.
This paper proposes a novel algorithm for automated vertebrae segmentation via
3D volumetric spine CT images. The proposed model is based on an
encoder-decoder structure, using layer normalization to optimize mini-batch training
performance. To address the concern of the information loss between encoder and
decoder, we designed an Atrous Residual Path to pass more features from encoder
to decoder instead of a simple shortcut connection. The proposed model also
applied an attention module in the decoder to extract features at different
scales. The proposed model is evaluated on a publicly available dataset
by a variety of metrics. The experimental results show that our model achieves
competitive performance compared with other state-of-the-art medical semantic
segmentation methods.
|