Given two-dimensional Riemannian manifolds $\mathcal{M},\mathcal{N}$, we
prove a lower bound on the distortion of embeddings $\mathcal{M} \to
\mathcal{N}$, in terms of the areas' discrepancy
$V_{\mathcal{N}}/V_{\mathcal{M}}$, for a certain class of distortion
functionals. For $V_{\mathcal{N}}/V_{\mathcal{M}} \ge 1/4$, homotheties,
provided they exist, are the unique energy minimizing maps attaining the bound,
while for $V_{\mathcal{N}}/V_{\mathcal{M}} \le 1/4$, there are non-homothetic
minimizers. We characterize the maps attaining the bound, and construct
explicit non-homothetic minimizers between disks. We then prove stability
results for the two regimes. We end by analyzing other families of distortion
functionals. In particular we characterize a family of functionals where no
phase transition in the minimizers occurs; homotheties are the energy
minimizers for all values of $V_{\mathcal{N}}/V_{\mathcal{M}}$, provided they
exist.
|
We consider nonlinear scalar conservation laws posed on a network. We
establish $L^1$ stability, and thus uniqueness, for weak solutions satisfying
the entropy condition. We apply standard finite volume methods and show
stability and convergence to the unique entropy solution, thus establishing
existence of a solution in the process. Both our existence theory and our
stability/uniqueness theory are centred around families of stationary states for
the equation. In one important case -- for monotone fluxes with an upwind
difference scheme -- we show that the set of (discrete) stationary solutions is
indeed sufficiently large to suit our general theory. We demonstrate the
method's properties through several numerical experiments.
|
Lattice thermal conductivity ($\kappa_{lat}$) of MgSiO$_3$ postperovskite
(MgPPv) under the Earth's lower mantle high pressure-temperature conditions is
studied using the phonon quasiparticle approach by combining \textit{ab initio}
molecular dynamics and lattice dynamics simulations. Phonon lifetimes are
extracted from the phonon quasiparticle calculations, and the phonon group
velocities are computed from the anharmonic phonon dispersions, which in
principle capture full anharmonicity. It is found that throughout the lowermost
mantle, including the D" region, $\kappa_{lat}$ of MgPPv is ~25% larger than
that of MgSiO$_3$ perovskite (MgPv), mainly due to MgPPv's higher phonon
velocities. Such a difference in phonon velocities between the two phases
originates in MgPPv's smaller primitive cell. Systematic results for the
temperature and pressure dependences of both MgPPv's and MgPv's
$\kappa_{lat}$ are presented.
|
We propose a new scheme for an improved determination of the Newtonian
gravitational constant G and evaluate it by numerical simulations. Cold atoms
in free fall are probed by atom interferometry measurements to characterize the
gravitational field generated by external source masses. Two source mass
configurations having different geometry and using different materials are
compared to identify an optimized experimental setup for the G measurement. The
effects of the magnetic fields used to manipulate the atoms and to control the
interferometer phase are also characterized.
|
Extended Berkeley Packet Filter (BPF) is a language and run-time system that
allows non-superusers to extend the Linux and Windows operating systems by
downloading user code into the kernel. To ensure that user code is safe to run
in kernel context, BPF relies on a static analyzer that proves properties about
the code, such as bounded memory access and the absence of operations that
crash. The BPF static analyzer checks safety using abstract interpretation with
several abstract domains. Among these, the domain of tnums (tristate numbers)
is a key domain used to reason about the bitwise uncertainty in program values.
This paper formally specifies the tnum abstract domain and its arithmetic
operators. We provide the first proofs of soundness and optimality of the
abstract arithmetic operators for tnum addition and subtraction used in the BPF
analyzer. Further, we describe a novel sound algorithm for multiplication of
tnums that is more precise and efficient (runs 33% faster on average) than the
Linux kernel's algorithm. Our tnum multiplication is now merged in the Linux
kernel.
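As a minimal illustration of the data structure involved, the following Python sketch mirrors the kernel-style tnum addition described above; the (value, mask) encoding and the helper names are ours, and the snippet is illustrative rather than the verified kernel code.

```python
# Illustrative sketch (not the kernel source): a tnum tracks a 64-bit value as a
# (value, mask) pair, where mask bits are unknown and value holds the known bits.
M64 = (1 << 64) - 1

def tnum(value, mask):
    return (value & M64, mask & M64)

def tnum_add(a, b):
    """Sound abstract addition of two tnums, following the kernel-style algorithm."""
    (av, am), (bv, bm) = a, b
    sm = (am + bm) & M64            # worst-case carries from unknown bits
    sv = (av + bv) & M64            # sum of the known parts
    sigma = (sm + sv) & M64
    chi = sigma ^ sv                # bit positions whose carries may differ
    mu = chi | am | bm              # all possibly-unknown bits of the result
    return tnum(sv & ~mu, mu)

# {8, 9} + {2, 3} abstracts to (value=8, mask=7), a superset of {10, 11, 12}.
print(tnum_add(tnum(0b1000, 0b0001), tnum(0b0010, 0b0001)))
```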
|
We study the role of ensemble averaging in holography by exploring the
relation between the universal late time behavior of the spectral form factor
and the second Renyi entropy of a thermal mixed state of the doubled system.
Both quantities receive contributions from wormhole saddle-points: in the
former case they lead to non-factorization while in the latter context they
quantify decoherence due to the interaction with an environment. Our reasoning
indicates that the space-time continuity responsible for non-factorization and
space-time continuity through entanglement are in direct competition with each
other. In an accompanying paper, we examine this dual relationship in a general
class of 1D quantum systems with the help of a simple geometric path integral
prescription.
|
Recent advances in unsupervised domain adaptation have seen considerable
progress in semantic segmentation. Existing methods either align different
domains with adversarial training or involve self-learning that utilizes
pseudo labels to conduct supervised training. The former always suffers from
the instability of adversarial training and focuses only on the
inter-domain gap, ignoring intra-domain knowledge. The latter tends to assign
overconfident predictions to wrong categories, which propagates errors to
more samples. To solve these problems, we propose a two-stage adaptive semantic
segmentation method based on the local Lipschitz constraint that satisfies both
domain alignment and domain-specific exploration under a unified principle. In
the first stage, we propose the local Lipschitzness regularization as the
objective function to align different domains by exploiting intra-domain
knowledge, which explores a promising direction for non-adversarial adaptive
semantic segmentation. In the second stage, we use the local Lipschitzness
regularization to estimate the probability of satisfying Lipschitzness for each
pixel, and then dynamically set the threshold of pseudo labels to conduct
self-learning. Such dynamic self-learning effectively avoids the error
propagation caused by noisy labels. Optimization in both stages is based on the
same principle, i.e., the local Lipschitz constraint, so that the knowledge
learned in the first stage can be maintained in the second stage. Further, due
to the model-agnostic property, our method can easily adapt to any CNN-based
semantic segmentation networks. Experimental results demonstrate the excellent
performance of our method on standard benchmarks.
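As a rough illustration of what a local Lipschitz regularizer can look like (a generic perturbation-based sketch, not the paper's exact objective; the model interface and tensor shapes are assumptions), consider:

```python
import torch
import torch.nn.functional as F

def local_lipschitz_loss(model, x, eps=1e-2):
    """Generic local-Lipschitzness regularizer (illustrative only, not the
    paper's exact objective): penalize the change of the per-pixel predictive
    distribution under a small random perturbation of the input.  Assumes
    model(x) returns segmentation logits of shape (B, C, H, W)."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)              # reference prediction
    delta = eps * torch.randn_like(x)               # small input perturbation
    q = F.log_softmax(model(x + delta), dim=1)      # perturbed prediction (log-probs)
    # per-pixel KL divergence, averaged over the batch and spatial positions
    return F.kl_div(q, p, reduction="none").sum(dim=1).mean()
```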
|
In this paper we consider a binaural hearing aid setup, where in addition to
the head-mounted microphones an external microphone is available. For this
setup, we investigate the performance of several relative transfer function
(RTF) vector estimation methods to estimate the direction of arrival (DOA) of
the target speaker in a noisy and reverberant acoustic environment. In
particular, we consider the state-of-the-art covariance whitening (CW) and
covariance subtraction (CS) methods, either incorporating the external
microphone or not, and the recently proposed spatial coherence (SC) method,
requiring the external microphone. To estimate the DOA from the estimated RTF
vector, we propose to minimize the frequency-averaged Hermitian angle between
the estimated head-mounted RTF vector and a database of prototype head-mounted
RTF vectors. Experimental results with stationary and moving speech sources in
a reverberant environment with diffuse-like noise show that the SC method
outperforms the CS method and yields a similar DOA estimation accuracy as the
CW method at a lower computational complexity.
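A minimal sketch of the proposed matching step, assuming an estimated RTF vector per frequency bin and a database of prototype RTF vectors indexed by candidate DOA (function and variable names are illustrative):

```python
import numpy as np

def hermitian_angle(a, b):
    """Hermitian angle between two complex vectors a and b."""
    c = np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(c, 0.0, 1.0))

def estimate_doa(rtf_est, prototypes):
    """rtf_est: (F, M) estimated head-mounted RTF vector, one row per frequency bin.
    prototypes: dict mapping a candidate DOA to its (F, M) prototype RTF vectors.
    Returns the DOA whose prototypes minimize the frequency-averaged Hermitian angle."""
    def avg_angle(proto):
        return np.mean([hermitian_angle(rtf_est[f], proto[f])
                        for f in range(rtf_est.shape[0])])
    return min(prototypes, key=lambda doa: avg_angle(prototypes[doa]))
```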
|
We show that some hard-to-detect properties of quadratic ODEs (e.g., certain
preserved integrals and measures) can be deduced more or less algorithmically
from their Kahan discretization, using Darboux Polynomials (DPs). Somewhat
similar results hold for ODEs of higher degree/order using certain other
birational discretization methods.
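For reference, the Kahan discretization of a quadratic ODE $\dot{x} = Q(x) + Bx + c$ (with $Q$ homogeneous quadratic) is the linearly implicit, birational map
\[
\frac{x_{n+1}-x_{n}}{h} \;=\; \bar{Q}(x_{n}, x_{n+1}) + \tfrac{1}{2}\,B\,(x_{n}+x_{n+1}) + c,
\qquad
\bar{Q}(x,y) \;:=\; \tfrac{1}{2}\bigl(Q(x+y) - Q(x) - Q(y)\bigr),
\]
where $\bar{Q}$ is the symmetric bilinear form obtained from $Q$ by polarization; it is this map whose Darboux polynomials are exploited above.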
|
We discuss the Sherrington-Kirkpatrick mean-field version of a spin glass
within the distributional zeta-function method (DZFM). In the DZFM, since the
dominant contribution to the average free energy is written as a series of
moments of the partition function of the model, the spin-glass multivalley
structure is obtained. Also, an exact expression for the saddle points
corresponding to each valley and a global critical temperature showing the
existence of many stable or at least metastable equilibrium states is
presented. Near the critical point we obtain analytical expressions of the
order parameters that are in agreement with phenomenological results. We
evaluate the linear and nonlinear susceptibility and we find the expected
singular behavior at the spin-glass critical temperature. Furthermore, we
obtain a positive definite expression for the entropy and we show that
ground-state entropy tends to zero as the temperature goes to zero. We show
that our solution is stable for each term in the expansion. Finally, we analyze
the behavior of the overlap distribution, where we find a general expression
for each moment of the partition function.
|
Astrophysical observations show complex organic molecules (COMs) in the gas
phase of protoplanetary disks. X-rays emitted from the central young stellar
object (YSO) that irradiate interstellar ices in the disk, followed by the
ejection of molecules in the gas phase, are a possible route to explain the
abundances observed in the cold regions. This process, known as X-ray
photodesorption, needs to be quantified for methanol-containing ices. We aim at
experimentally measuring X-ray photodesorption yields of methanol and its
photo-products from binary mixed ices: $^{13}$CO:CH$_3$OH ice and
H$_2$O:CH$_3$OH ice. We irradiated these ices at 15 K with X-rays in the 525 -
570 eV range. The release of species in the gas phase was monitored by
quadrupole mass spectrometry, and photodesorption yields were derived. For
$^{13}$CO:CH$_3$OH ice, the CH$_3$OH X-ray photodesorption yield is estimated to be
10$^{-2}$ molecule/photon at 564 eV. X-ray photodesorption of larger COMs,
attributable to ethanol, dimethyl ether, and/or formic acid,
is detected with a yield of 10$^{-3}$ molecule/photon. When methanol is mixed
with water, X-ray photodesorption of methanol and of the previous COMs is not
detected. X-ray induced chemistry, dominated by low-energy secondary electrons,
is found to be the main mechanism that explains these results. We also provide
desorption yields that are applicable to protoplanetary disk environments for
astrochemical models. The X-ray emission from YSOs should participate in the
enrichment of the protoplanetary disk gas phase with COMs such as methanol in
the cold and X-ray dominated regions because of X-ray photodesorption from
methanol-containing ices.
|
We consider fractional parabolic equations with variable coefficients and
establish maximal $L_{q}$-regularity in Bessel potential spaces of arbitrary
nonnegative order. As an application, we show higher order regularity and
instantaneous smoothing for the fractional porous medium equation and for a
nonlocal Kirchhoff equation.
|
Cluster randomized controlled trials (cRCTs) are designed to evaluate
interventions delivered to groups of individuals. A practical limitation of
such designs is that the number of available clusters may be small, resulting
in an increased risk of baseline imbalance under simple randomization.
Constrained randomization overcomes this issue by restricting the allocation to
a subset of randomization schemes where sufficient overall covariate balance
across comparison arms is achieved with respect to a pre-specified balance
metric. However, several aspects of constrained randomization for the design
and analysis of multi-arm cRCTs have not been fully investigated. Motivated by
an ongoing multi-arm cRCT, we provide a comprehensive evaluation of the
statistical properties of model-based and randomization-based tests under both
simple and constrained randomization designs in multi-arm cRCTs, with varying
combinations of design and analysis-based covariate adjustment strategies. In
particular, as randomization-based tests have not been extensively studied in
multi-arm cRCTs, we additionally develop most-powerful permutation tests under
the linear mixed model framework for our comparisons. Our results indicate that
under constrained randomization, both model-based and randomization-based
analyses could gain power while preserving nominal type I error rate, given
proper analysis-based adjustment for the baseline covariates. The choice of
balance metrics and candidate set size and their implications on the testing of
the pairwise and global hypotheses are also discussed. Finally, we caution
against the design and analysis of multi-arm cRCTs with an extremely small
number of clusters, due to insufficient degrees of freedom and the tendency to
obtain an overly restricted randomization space.
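As an illustration of the design step only, the sketch below implements covariate-constrained randomization with a simple sum-of-squared-differences balance metric; the metric, candidate-set size, and equal-arm-size assumption are ours, not the motivating trial's.

```python
import numpy as np

def constrained_randomization(x, n_arms, n_schemes=10_000, keep_frac=0.1, seed=None):
    """Illustrative covariate-constrained randomization for a multi-arm cRCT.
    x: (n_clusters, n_covariates) cluster-level covariates; assumes equal arm
    sizes (n_clusters divisible by n_arms).  Candidate allocations are scored
    by a simple balance metric (sum of squared differences of arm means), and
    the allocation actually used is drawn uniformly from the best-balanced
    subset (the constrained randomization space)."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    base = np.repeat(np.arange(n_arms), n // n_arms)
    schemes, scores = [], []
    for _ in range(n_schemes):
        alloc = rng.permutation(base)
        means = np.stack([x[alloc == a].mean(axis=0) for a in range(n_arms)])
        score = sum(((means[i] - means[j]) ** 2).sum()
                    for i in range(n_arms) for j in range(i + 1, n_arms))
        schemes.append(alloc)
        scores.append(score)
    cutoff = np.quantile(scores, keep_frac)
    candidates = [s for s, v in zip(schemes, scores) if v <= cutoff]
    return candidates[rng.integers(len(candidates))]
```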
|
Minimum spanning trees (MSTs) are used in a variety of fields, from computer
science to geography. Infectious disease researchers have used them to infer
the transmission pathway of certain pathogens. However, these are often the
MSTs of sample networks, not population networks, and surprisingly little is
known about what can be inferred about a population MST from a sample MST. We
prove that if $n$ nodes (the sample) are selected uniformly at random from a
complete graph with $N$ nodes and unique edge weights (the population), the
probability that an edge is in the population graph's MST given that it is in
the sample graph's MST is $\frac{n}{N}$. We use simulation to investigate this
conditional probability for $G(N,p)$ graphs, Barab\'{a}si-Albert (BA) graphs,
graphs whose nodes are distributed in $\mathbb{R}^2$ according to a bivariate
standard normal distribution, and an empirical HIV genetic distance network.
Broadly, results for the complete, $G(N,p)$, and normal graphs are similar, and
results for the BA and empirical HIV graphs are similar. We recommend that
researchers use an edge-weighted random walk to sample nodes from the
population so that they maximize the probability that an edge is in the
population MST given that it is in the sample MST.
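A small Monte-Carlo sketch of the stated result for complete graphs (illustrative only; networkx and i.i.d. uniform edge weights are our assumptions for the demonstration):

```python
import random
import networkx as nx

def sample_mst_containment(N=60, n=20, trials=100, seed=0):
    """Monte-Carlo check of P(edge in population MST | edge in sample MST) for
    a complete graph on N nodes with i.i.d. uniform edge weights (unique a.s.),
    sampling n nodes uniformly at random.  The theorem predicts n / N."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        G = nx.complete_graph(N)
        for u, v in G.edges:
            G[u][v]["weight"] = rng.random()
        pop_mst = {frozenset(e) for e in nx.minimum_spanning_tree(G).edges}
        sample = rng.sample(list(G.nodes), n)
        for u, v in nx.minimum_spanning_tree(G.subgraph(sample)).edges:
            total += 1
            hits += frozenset((u, v)) in pop_mst
    return hits / total

print(sample_mst_containment())   # expected to be close to 20 / 60 = 0.333
```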
|
We study Bergman kernels $K_\Pi$ and projections $P_\Pi$ in unbounded planar
domains $\Pi$, which are periodic in one dimension. In the case where $\Pi$ is
simply connected, we write the kernel $K_\Pi$ in terms of a Riemann mapping $\varphi$
related to the bounded periodic cell $\varpi$ of the domain $\Pi$. We also
introduce and adapt to the Bergman space setting the Floquet transform
technique, which is a standard tool for elliptic spectral problems in periodic
domains. We investigate the boundedness properties of the Floquet transform
operators in Bergman spaces and derive a general formula connecting $P_\Pi$ to
a projection on a bounded domain. We show how this theory can be used to
reproduce the above kernel formula for $K_\Pi$. Finally, we consider weighted
$L^p$-estimates for $P_\Pi$ in periodic domains.
|
The bifurcation theory of ordinary differential equations (ODEs), and its
application to deterministic population models, are by now well established. In
this article, we begin to develop a complementary theory for diffusion-like
perturbations of dynamical systems, with the goal of understanding the space
and time scales of fluctuations near bifurcation points of the underlying
deterministic system. To do so we describe the limit processes that arise in
the vicinity of the bifurcation point. In the present article we focus on the
one-dimensional case.
|
Successful innovations achieve large geographical coverage by spreading
across settlements and distances. For decades, spatial diffusion has been
argued to take place along the urban hierarchy such that the innovation first
spreads from large to medium cities then later from medium to small cities.
Yet, the role of geographical distance, the other major factor of spatial
diffusion, was difficult to identify in hierarchical diffusion due to missing
data on spreading events. In this paper, we exploit spatial patterns of
individual invitations on a social media platform sent from registered users to
new users over the entire life cycle of the platform. This enables us to
disentangle the role of urban hierarchy and the role of distance by observing
the source and target locations of flows over an unprecedented timescale. We
demonstrate that hierarchical diffusion greatly overlaps with diffusion to
close distances and these factors co-evolve over the life cycle; thus, their
joint analysis is necessary. Then, a regression framework is applied to
estimate the number of invitations sent between pairs of towns, by year in the
life cycle, from the population sizes of the source and target towns, their
combinations, and the distance between them. We confirm that hierarchical
diffusion prevails initially across large towns only but emerges in the full
spectrum of settlements in the middle of the life cycle when adoption
accelerates. Unlike in previous gravity estimations, we find that after an
intensifying role of distance in the middle of the life cycle a surprisingly
weak distance effect characterizes the last years of diffusion. Our results
stress the dominance of urban hierarchy in spatial diffusion and inform future
predictions of innovation adoption at local scales.
|
A variety of supergravity and string based models contain hidden sectors
which can play a role in particle physics phenomena and in cosmology. In this
note we discuss the possibility that the visible sector and the hidden sectors
in general live in different heat baths. Further, it is entirely possible that
dark matter resides partially or wholly in hidden sectors in the form of dark
Dirac fermions, dark neutralinos or dark photons. A proper analysis of dark
matter and of dark forces in this case requires that one deals with a
multi-temperature universe. We discuss the basic formalism which includes the
multi-temperature nature of visible and hidden sectors in the analysis of
phenomena observable in the visible sectors. Specifically we discuss the
application of the formalism for explaining the velocity dependence of dark
matter cross sections as one extrapolates from galaxy scales to scales of
galaxy clusters. Here the dark photon exchange among dark fermions can produce
the desired velocity dependent cross sections consistent with existing galactic
cross section data indicating the existence of a new fifth (dark) force. We
also discuss the possibility that the dark photon may constitute a significant
portion of dark matter. We demonstrate a realization of this possibility in a
universe with two hidden sectors and with the visible sector and the hidden
sectors in different heat baths, which allows satisfying the constraints
that the dark photon has a lifetime larger than the age of the universe and
that its relic density is consistent with Planck data. Future directions for
further work are discussed.
|
Intensity maps of the 21cm emission line of neutral hydrogen are lensed by
intervening large-scale structure, similar to the lensing of the cosmic
microwave background temperature map. We extend previous work by calculating
the lensing contribution to the full-sky 21cm bispectrum in redshift space. The
lensing contribution tends to peak when equal-redshift fluctuations are lensed
by a lower redshift fluctuation. At high redshift, lensing effects can become
comparable to the contributions from density and redshift-space distortions.
|
A finite-dimensional quantum system is coupled to a bath of oscillators in
thermal equilibrium at temperature $T>0$. We show that for fixed, small values
of the coupling constant $\lambda$, the true reduced dynamics of the system is
approximated by the completely positive, trace preserving Markovian semigroup
generated by the Davies-Lindblad generator. The difference between the true and
the Markovian dynamics is $O(|\lambda|^{1/4})$ for all times, meaning that the
solution of the Gorini-Kossakowski-Sudarshan-Lindblad master equation is
approximating the true dynamics to accuracy $O(|\lambda|^{1/4})$ for all times.
Our method is based on a recently obtained expansion of the full system-bath
propagator. It applies to reservoirs with correlation functions decaying in
time as $1/t^{4}$ or faster, which is a significant improvement relative to the
previously required exponential decay.
|
A pressure-driven two-layer channel flow of a Newtonian fluid with constant
viscosity (top layer) and a fluid with a time-dependent viscosity (bottom
layer) is numerically investigated. The bottom layer goes through an aging
process in which its viscosity increases due to the formation of internal
structure, which is represented by a Coussot-type relationship. The resultant
flow dynamics is the consequence of the competition between structuration and
destructuration, as characterized by the dimensionless timescale for
structuration (tau) and the dimensionless material property (beta) of the
bottom fluid. The development of Kelvin-Helmholtz type instabilities (roll-up
structures) observed in the Newtonian constant viscosity case was found to be
suppressed as the viscosity of the bottom layer increased over time. It is
found that, for the set of parameters considered in the present study, the
bottom layer almost behaves like a Newtonian fluid with constant viscosity for
tau greater than 10 and beta greater than 1. It is also shown that decreasing
the value of the Froude number stabilizes the interfacial instabilities. The
wavelength of the interfacial wave increases as the capillary number increases.
|
This memo describes NTR/TSU winning submission for Low Resource ASR challenge
at Dialog2021 conference, language identification track.
Spoken Language Identification (LID) is an important step in a multilingual
Automated Speech Recognition (ASR) system pipeline. Traditionally, the ASR task
requires large volumes of labeled data that are unattainable for most of the
world's languages, including most of the languages of Russia. In this memo, we
show that a convolutional neural network with a Self-Attentive Pooling layer
shows promising results in a low-resource setting for the language identification
task and establishes a SOTA for the Low Resource ASR challenge dataset.
Additionally, we compare the structure of the confusion matrices for this and the
significantly more diverse VoxForge dataset, and state and substantiate the
hypothesis that whenever a dataset is diverse enough that other
classification factors, such as gender and age, are well averaged out, the
confusion matrix of the LID system reflects a measure of language similarity.
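A minimal sketch of a self-attentive pooling layer of the kind referred to above (PyTorch; the layer sizes and exact parameterization are assumptions, not the submission's code):

```python
import torch
import torch.nn as nn

class SelfAttentivePooling(nn.Module):
    """Illustrative self-attentive pooling: learn a scalar attention weight per
    time frame and return the attention-weighted mean of the frame features."""
    def __init__(self, feat_dim, attn_dim=128):
        super().__init__()
        self.proj = nn.Linear(feat_dim, attn_dim)
        self.query = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, h):                                  # h: (batch, time, feat_dim)
        scores = self.query(torch.tanh(self.proj(h)))      # (batch, time, 1)
        alpha = torch.softmax(scores, dim=1)               # attention over time
        return (alpha * h).sum(dim=1)                      # (batch, feat_dim)

# Pool 200 frames of 64-dimensional CNN features into one utterance embedding.
pooled = SelfAttentivePooling(64)(torch.randn(8, 200, 64))  # -> (8, 64)
```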
|
Custom hardware accelerators for Deep Neural Networks are increasingly
popular: in fact, the flexibility and performance offered by FPGAs are
well-suited to the computational effort and low latency constraints required by
many image recognition and natural language processing tasks. The gap between
high-level Machine Learning frameworks (e.g., Tensorflow, Pytorch) and
low-level hardware design in Verilog/VHDL creates a barrier to widespread
adoption of FPGAs, which can be overcome with the help of High-Level Synthesis.
hls4ml is a framework that translates Deep Neural Networks into annotated C++
code for High-Level Synthesis, offering a complete and user-friendly design
process that has been enthusiastically adopted in physics research. We analyze
the strengths and weaknesses of hls4ml, drafting a plan to enhance its core
library of components in order to allow more advanced optimizations, target a
wider selection of FPGAs, and support larger Neural Network models.
|
Designing new 2D systems with tunable properties is an important subject for
science and technology. Starting from graphene, we developed an algorithm to
systematically generate 2D carbon crystals belonging to the family of
graphdiynes (GDYs) and having different structures and sp/sp2 carbon ratio. We
analyze how structural and topological effects can tune the relative stability
and the electronic behavior, to propose a rationale for the development of new
systems with tailored properties. A total of 26 structures have been generated,
including the already known polymorphs such as $\alpha$-, $\beta$- and
$\gamma$-GDY. Periodic density functional theory calculations have been
employed to optimize the 2D crystal structures and to compute the total energy,
the band structure, and the density of states. Relative energies with respect
to graphene have been found to increase as the sp/sp2 carbon ratio
increases, following however different trends based on the particular topologies
present in the crystals. These topologies also influence the band structure
giving rise to semiconductors with a finite bandgap, zero-gap semiconductors
displaying Dirac cones, or metallic systems. The different trends allow
identifying some topological effects as possible guidelines in the design of
new 2D carbon materials beyond graphene.
|
A $P_4$-free graph is called a cograph. In this paper we partially
characterize finite groups whose power graph is a cograph. As we will see, this
problem is a generalization of the determination of groups in which every
element has prime power order, first raised by Graham Higman in 1957 and fully
solved very recently.
First we determine all groups $G$ and $H$ for which the power graph of
$G\times H$ is a cograph. We show that groups whose power graph is a cograph
can be characterised by a condition only involving elements whose orders are
prime or the product of two (possibly equal) primes. Some important graph
classes are also taken under consideration. For finite simple groups we show
that in most of the cases their power graphs are not cographs: the only ones
for which the power graphs are cographs are certain groups PSL$(2,q)$ and
Sz$(q)$ and the group PSL$(3,4)$. However, a complete determination of these
groups involves some hard number-theoretic problems.
|
In this paper, we mainly investigate the Cauchy problem of the non-viscous
MHD equations with magnetic diffusion. We first establish the local
well-posedness (existence,~uniqueness and continuous dependence) with initial
data $(u_0,b_0)$ in critical Besov spaces $
{B}^{\frac{d}{p}+1}_{p,1}\times{B}^{\frac{d}{p}}_{p,1}$ with $1\leq
p\leq\infty$, and give a lifespan $T$ of the solution which depends on the norm
of the Littlewood-Paley decomposition of the initial data. Then, we prove the
global existence in critical Besov spaces. In particular, the results of global
existence also hold in Sobolev space $ C([0,\infty);
{H}^{s}(\mathbb{S}^2))\times \Big(C([0,\infty);{H}^{s-1}(\mathbb{S}^2))\cap
L^2\big([0,\infty);{H}^{s}(\mathbb{S}^2)\big)\Big)$ with $s>2$, when the
initial data satisfies $\int_{\mathbb{S}^2}b_0dx=0$ and
$\|u_0\|_{B^1_{\infty,1}(\mathbb{S}^2)}+\|b_0\|_{{B}^{0}_{\infty,1}(\mathbb{S}^2)}\leq
\epsilon$. It is worth noting that our results allow some large and low-regularity
initial data for global existence, which considerably improves
the recent results in \cite{weishen}.
|
We consider a collection of statistically identical two-state continuous time
Markov chains (channels). A controller continuously selects a channel with the
view of maximizing infinite horizon average reward. A switching cost is paid
upon channel changes. We consider two cases: full observation (all channels
observed simultaneously) and partial observation (only the current channel
observed). We analyze the difference in performance between these cases for
various policies. For the partial observation case with two channels, or an
infinite number of channels, we explicitly characterize an optimal threshold
for two sensible policies which we name "call-gapping" and "cool-off". Our
results present a qualitative view on the interaction of the number of
channels, the available information, and the switching costs.
|
We study the cyclotomic exponent sequence of a numerical semigroup $S,$ and
we compute its values at the gaps of $S,$ the elements of $S$ with unique
representations in terms of minimal generators, and the Betti elements $b\in S$
for which the set $\{a \in \operatorname{Betti}(S) : a \le_{S}b\}$ is totally
ordered with respect to $\le_S$ (we write $a \le_S b$ whenever $a - b \in S,$
with $a,b\in S$). This allows us to characterize certain semigroup families,
such as Betti-sorted or Betti-divisible numerical semigroups, as well as
numerical semigroups with a unique Betti element, in terms of their cyclotomic
exponent sequences. Our results also apply to cyclotomic numerical semigroups,
which are numerical semigroups with a finitely supported cyclotomic exponent
sequence. We show that cyclotomic numerical semigroups with certain cyclotomic
exponent sequences are complete intersections, thereby making progress towards
proving the conjecture of Ciolan, Garc\'ia-S\'anchez and Moree (2016) stating
that $S$ is cyclotomic if and only if it is a complete intersection.
|
Ice Ih, the common form of ice in the biosphere, contains proton disorder.
Its proton-ordered counterpart, ice XI, is thermodynamically stable below 72 K.
However, even below this temperature the formation of ice XI is kinetically
hindered and experimentally it is obtained by doping ice with KOH. Doping
creates ionic defects that promote the migration of protons and the associated
change in proton configuration. In this article, we mimic the effect of doping
in molecular dynamics simulations using a bias potential that enhances the
formation of ionic defects. The recombination of the ions thus formed proceeds
through fast migration of the hydroxide and results in the jump of protons
along a hydrogen bond loop. This provides a physical and expedient way to change
the proton configuration, and to accelerate diffusion in proton configuration
space. A key ingredient of this approach is a machine learning potential
trained with density functional theory data and capable of modeling molecular
dissociation. We exemplify the usefulness of this idea by studying the
order-disorder transition using an appropriate order parameter to distinguish
the proton environments in ice Ih and XI. We calculate the changes in free
energy, enthalpy, and entropy associated with the transition. Our estimated
entropy agrees with experiment within the error bars of our calculation.
|
In this work, we present a novel sampling-based path planning method, called
SPRINT. The method finds solutions for high dimensional path planning problems
quickly and robustly. Its efficiency comes from minimizing the number of
collision check samples. This reduction in sampling relies on heuristics that
predict the likelihood that samples will be useful in the search process.
Specifically, heuristics (1) prioritize more promising search regions; (2) cull
samples from local minima regions; and (3) steer the search away from
previously observed collision states. Empirical evaluations show that our
method finds shorter or comparable-length solution paths in significantly less
time than commonly used methods. We demonstrate that these performance gains
can be largely attributed to our approach to achieving sample efficiency.
|
The recent gravity field measurements of Jupiter (Juno) and Saturn (Cassini)
confirm the existence of deep zonal flows reaching to a depth of 5\% and 15\%
of the respective radius. Relating the zonal wind induced density perturbations
to the gravity moments has become a major tool to characterise the interior
dynamics of gas giants. Previous studies differ with respect to the assumptions
made on how the wind velocity relates to density anomalies, on the functional
form of its decay with depth, and on the continuity of antisymmetric winds
across the equatorial plane. Most of the suggested vertical structures exhibit
a rather smooth radial decay of the zonal wind, which seems at odds with the
observed secular variation of the magnetic field and the prevailing geostrophy
of the zonal winds. Moreover, the results relied on an artificial equatorial
regularisation or ignored the equatorial discontinuity altogether. We favour an
alternative structure, where the equatorially antisymmetric zonal wind in an
equatorial latitude belt between $\pm 21^\circ$ remains so shallow that it does
not contribute to the gravity signal. The winds at higher latitudes suffice to
convincingly explain the measured gravity moments. Our results indicate that
the winds are geostrophic, i.e. constant along cylinders, in the outer $3000\,$
km and decay rapidly below. The preferred wind structure is 50\% deeper than
previously thought, agrees with the measured gravity moments, is compliant with
the magnetic constraints and the requirement of an adiabatic atmosphere, and is
unbiased by the treatment of the equatorial discontinuity.
|
Introduction: One of the most important tasks in the Emergency Department
(ED) is to promptly identify the patients who will benefit from hospital
admission. Machine Learning (ML) techniques show promise as diagnostic aids in
healthcare. Material and methods: We examined the following features,
seeking to evaluate their performance in predicting hospital admission:
serum levels of Urea, Creatinine, Lactate Dehydrogenase, Creatine Kinase,
C-Reactive Protein, Complete Blood Count with differential, Activated Partial
Thromboplastin Time, D Dimer, International Normalized Ratio, age, gender,
triage disposition to ED unit and ambulance utilization. A total of 3,204 ED
visits were analyzed. Results: The proposed algorithms generated models which
demonstrated acceptable performance in predicting hospital admission of ED
patients. The range of F-measure and ROC Area values of all eight evaluated
algorithms were [0.679-0.708] and [0.734-0.774], respectively. Discussion: The
main advantages of this tool include easy access, availability, yes/no result,
and low cost. The clinical implications of our approach might facilitate a
shift from traditional clinical decision-making to a more sophisticated model.
Conclusion: Developing robust prognostic models with the utilization of common
biomarkers is a project that might shape the future of emergency medicine. Our
findings warrant confirmation with implementation in pragmatic ED trials.
|
We show that an effective action of the one-dimensional torus $\mathbb{G}_m$
on a normal affine algebraic variety $X$ can be extended to an effective action
of a semi-direct product $\mathbb{G}_m\rightthreetimes\mathbb{G}_a$ with the
same general orbit closures if and only if there is a divisor $D$ on $X$ that
consists of $\mathbb{G}_m$-fixed points. This result is applied to the study of
orbits of the automorphism group $\text{Aut}(X)$ on $X$.
|
We study the amplification of rotation velocity with the Sagnac
interferometer based on the concept of weak-value amplification. By using a
different scheme to perform the Sagnac interferometer with the probe in
momentum space, we demonstrate a new weak-measurement protocol to detect
small rotation velocities by amplifying the phase shift of the Sagnac effect.
Given the maximum incident intensity of the initial spectrum, the
detection limit of the spectrometer intensity, and the required accuracy of the
angular velocity measurement, we can theoretically determine the appropriate
postselection and the minimum optical path area before the experiment. In
addition, we put forward a new optical design that increases the optical path area
and decreases the size of the interferometer, overcoming the limit of instrument
size. Finally, our modified Sagnac interferometer based on weak measurement
provides an innovative and efficient probe of small rotation velocity signals.
|
We consider the task of learning to control a linear dynamical system under
fixed quadratic costs, known as the Linear Quadratic Regulator (LQR) problem.
While model-free approaches are often favorable in practice, thus far only
model-based methods, which rely on costly system identification, have been
shown to achieve regret that scales with the optimal dependence on the time
horizon T. We present the first model-free algorithm that achieves similar
regret guarantees. Our method relies on an efficient policy gradient scheme,
and a novel and tighter analysis of the cost of exploration in policy space in
this setting.
|
We develop a gauge covariant neural network for four dimensional non-abelian
gauge theory, which realizes a map between rank-2 tensor valued vector fields.
We find that the conventional smearing procedure and gradient flow for gauge
fields can be regarded as known neural networks, residual networks and neural
ordinary differential equations for rank-2 tensors with fixed parameters. In
the machine learning context, projection or normalization functions in the
smearing schemes correspond to an activation function in neural networks. Using
the locality of the activation function, we derive the backpropagation for the
gauge covariant neural network. Consequently, the smeared force in hybrid Monte
Carlo (HMC) is naturally derived with the backpropagation. As a demonstration,
we develop the self-learning HMC (SLHMC) with covariant neural network
approximated action for non-abelian gauge theory with dynamical fermions, and
we observe that SLHMC reproduces results from HMC.
|
For arbitrary four-dimensional quantum field theories with scalars and
fermions, renormalisation group equations in the $\overline{\text{MS}}$ scheme
are investigated at three-loop order in perturbation theory. Collecting
literature results, general expressions are obtained for field anomalous
dimensions, Yukawa interactions, as well as fermion masses. The renormalisation
group evolution of scalar quartic, cubic and mass terms is determined up to a
few unknown coefficients. The combined results are applied to compute the
renormalisation group evolution of the gaugeless Litim-Sannino model.
|
Our work focuses on modeling security of systems from their component-level
designs. Towards this goal we develop a categorical formalism to model attacker
actions. Equipping the categorical formalism with algebras produces two
interesting results for security modeling. First, using the Yoneda lemma, we
are able to model attacker reconnaissance missions. In this context, the Yoneda
lemma formally shows us that if two system representations, one being complete
and the other being the attacker's incomplete view, agree at every possible
test, then they behave the same. The implication is that attackers can still
successfully exploit the system even with incomplete information. Second, we
model the possible changes that can occur to the system via an exploit. An
exploit either manipulates the interactions between system components, for
example, providing the wrong values to a sensor, or changes the components
themselves, for example, manipulating the firmware of a global positioning
system (GPS). One additional benefit of using category theory is that
mathematical operations can be represented as formal diagrams, which is useful
for applying this analysis in a model-based design setting. We illustrate this
modeling framework using a cyber-physical system model of an unmanned aerial
vehicle (UAV). We demonstrate and model two types of attacks: (1) a rewiring
attack, which violates data integrity, and (2) a rewriting attack, which
violates availability.
|
Connected and Automated Hybrid Electric Vehicles have the potential to reduce
fuel consumption and travel time in real-world driving conditions. The
eco-driving problem seeks to design optimal speed and power usage profiles
based upon look-ahead information from connectivity and advanced mapping
features. Recently, Deep Reinforcement Learning (DRL) has been applied to the
eco-driving problem. While the previous studies synthesize simulators and
model-free DRL to reduce online computation, this work proposes a Safe
Off-policy Model-Based Reinforcement Learning algorithm for the eco-driving
problem. The advantages over the existing literature are three-fold. First, the
combination of off-policy learning and the use of a physics-based model
improves the sample efficiency. Second, the training does not require any
extrinsic rewarding mechanism for constraint satisfaction. Third, the
feasibility of the trajectory is guaranteed by using a safe set approximated by
deep generative models.
The performance of the proposed method is benchmarked against a baseline
controller representing human drivers, a previously designed model-free DRL
strategy, and the wait-and-see optimal solution. In simulation, the proposed
algorithm leads to a policy with a higher average speed and a better fuel
economy compared to the model-free agent. Compared to the baseline controller,
the learned strategy reduces the fuel consumption by more than 21\% while
keeping the average speed comparable.
|
In this paper, we study slow manifolds for infinite-dimensional evolution
equations. We compare two approaches: an abstract evolution equation framework
and a finite-dimensional spectral Galerkin approximation. We prove that the
slow manifolds constructed within each approach are asymptotically close under
suitable conditions. The proof is based upon Lyapunov-Perron methods and a
comparison of the local graphs for the slow manifolds in scales of Banach
spaces. In summary, our main result allows us to change between different
characterizations of slow invariant manifolds, depending upon the technical
challenges posed by particular fast-slow systems.
|
We propose a nonlocal field theory for gravity in the presence of matter
consistent with perturbative unitarity, quantum finiteness, and other essential
classical properties that we are going to list below. First, the theory exactly
reproduces the same tree-level scattering amplitudes of Einstein's gravity
coupled to matter, ensuring no violation of macro-causality. Second, all the
exact solutions of Einstein's theory are also exact solutions of the
nonlocal theory. Finally, and most importantly, the linear and nonlinear
stability analysis of the exact solutions in nonlocal gravity (with or without
matter) is in one-to-one correspondence with the same analysis in General
Relativity. Therefore, all the exact solutions that are stable in Einstein's theory
are also stable in nonlocal gravity in the presence of matter at any perturbative
order.
|
Every small monoidal category with universal (finite) joins of central
idempotents is monoidally equivalent to the category of global sections of a
sheaf of (sub)local monoidal categories on a topological space. Every small
stiff monoidal category monoidally embeds into such a category of global
sections. These representation results are functorial and subsume the
Lambek-Moerdijk-Awodey sheaf representation for toposes, the Stone
representation of Boolean algebras, and the Takahashi representation of Hilbert
modules as continuous fields of Hilbert spaces. Many properties of a monoidal
category carry over to the stalks of its sheaf, including having a trace,
having exponential objects, having dual objects, having limits of some shape,
and the central idempotents forming a Boolean algebra.
|
We propose a 3+1 Higgs Doublet Model based on the $\Delta(27)$ family
symmetry supplemented by several auxiliary cyclic symmetries leading to viable
Yukawa textures for the Standard Model fermions, consistent with the observed
pattern of fermion masses and mixings. The charged fermion mass hierarchy and
the quark mixing pattern are generated by the spontaneous breaking of the
discrete symmetries due to flavons that act as Froggatt-Nielsen fields. The
tiny neutrino masses arise from a radiative seesaw mechanism at one loop level,
thanks to a preserved $Z_2^{\left( 1\right)}$ discrete symmetry, which also
leads to stable scalar and fermionic dark matter candidates. The leptonic
sector features the predictive cobimaximal mixing pattern, consistent with the
experimental data from neutrino oscillations. For the scenario of normal
neutrino mass hierarchy, the model predicts an effective Majorana neutrino mass
parameter in the range $3$~meV$\lesssim m_{\beta\beta}\lesssim$ $18$ meV, which
is within the declared range of sensitivity of modern experiments. The model
predicts Flavour Changing Neutral Currents that constrain the model, for
instance Kaon mixing and $\mu \to e$ nuclear conversion processes, the latter
of which are found to be within the reach of forthcoming experiments.
|
The constant increase in the amount and complexity of information obtained
from IT data network elements, for its correct monitoring and management, is a
reality. The same happens to data networks in electrical systems that provide
effective supervision and control of substations and hydroelectric plants.
Contributing to this fact is the growing number of installations and new
environments monitored by such data networks and the constant evolution of the
technologies involved. This situation potentially leads to incomplete and/or
contradictory data, issues that must be addressed in order to maintain a good
level of monitoring and, consequently, management of these systems. In this
paper, a prototype of an expert system is developed to monitor the status of
equipment of data networks in electrical systems, which deals with
inconsistencies without trivialising the inferences. This is accomplished in the
context of the remote control of hydroelectric plants and substations by a
Regional Operation Centre (ROC). The expert system is developed with algorithms
defined upon a combination of Fuzzy logic and Paraconsistent Annotated Logic
with Annotation of Two Values (PAL2v) in order to analyse uncertain signals and
generate the operating conditions (faulty, normal, unstable or inconsistent /
indeterminate) of the equipment that are identified as important for the remote
control of hydroelectric plants and substations. A prototype of this expert
system was installed on a virtualised server with CLP500 software (from the
EFACEC manufacturer) that was applied to investigate scenarios consisting of a
Regional (Brazilian) Operation Centre, with a Generic Substation and a Generic
Hydroelectric Plant, representing a remote control environment.
|
We present a class of one-to-one matching models with perfectly transferable
utility. We discuss identification and inference in these separable models, and
we show how their comparative statics are readily analyzed.
|
Digital holography is one of the most widely used label-free microscopy
techniques in biomedical imaging. Recovery of the missing phase information of
a hologram is an important step in holographic image reconstruction. Here we
demonstrate a convolutional recurrent neural network (RNN) based phase recovery
approach that uses multiple holograms, captured at different sample-to-sensor
distances, to rapidly reconstruct the phase and amplitude information of a
sample, while also performing autofocusing through the same network. We
demonstrated the success of this deep learning-enabled holography method by
imaging microscopic features of human tissue samples and Papanicolaou (Pap)
smears. These results constitute the first demonstration of the use of
recurrent neural networks for holographic imaging and phase recovery, and
compared with existing methods, the presented approach improves the
reconstructed image quality, while also increasing the depth-of-field and
inference speed.
|
An important paradigm of natural language processing consists of large-scale
pre-training on general domain data and adaptation to particular tasks or
domains. As we pre-train larger models, full fine-tuning, which retrains all
model parameters, becomes less feasible. Using GPT-3 175B as an example --
deploying independent instances of fine-tuned models, each with 175B
parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or
LoRA, which freezes the pre-trained model weights and injects trainable rank
decomposition matrices into each layer of the Transformer architecture, greatly
reducing the number of trainable parameters for downstream tasks. Compared to
GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable
parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA
performs on-par or better than fine-tuning in model quality on RoBERTa,
DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher
training throughput, and, unlike adapters, no additional inference latency. We
also provide an empirical investigation into rank-deficiency in language model
adaptation, which sheds light on the efficacy of LoRA. We release a package
that facilitates the integration of LoRA with PyTorch models and provide our
implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at
https://github.com/microsoft/LoRA.
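A minimal PyTorch sketch of the idea (illustrative, not the released microsoft/LoRA package): a frozen pretrained weight plus a trainable low-rank update $BA$ scaled by $\alpha/r$, with $B$ initialized to zero so training starts from the pretrained model.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of a LoRA-adapted linear layer (illustrative, not the
    released package): the pretrained weight W is frozen, and a trainable
    low-rank update B @ A, scaled by alpha / r, is added.  B starts at zero,
    so training begins exactly at the pretrained model."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)   # frozen pretrained W
        nn.init.normal_(self.weight, std=0.02)            # stand-in for real weights
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return x @ (self.weight + self.scaling * self.B @ self.A).T

layer = LoRALinear(768, 768)           # only A and B are trainable
out = layer(torch.randn(4, 768))       # (4, 768)
```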
|
Video Instance Segmentation (VIS) is a multi-task problem performing
detection, segmentation, and tracking simultaneously. Extended from image set
applications, video data additionally induces the temporal information, which,
if handled appropriately, is very useful to identify and predict object
motions. In this work, we design a unified model to mutually learn these tasks.
Specifically, we propose two modules, named Temporally Correlated Instance
Segmentation (TCIS) and Bidirectional Tracking (BiTrack), to take advantage
of the temporal correlation between the object's instance masks across adjacent
frames. On the other hand, video data is often redundant due to the frame's
overlap. Our analysis shows that this problem is particularly severe for the
YoutubeVOS-VIS2021 data. Therefore, we propose a Multi-Source Data (MSD)
training mechanism to compensate for the data deficiency. By combining these
techniques with a bag of tricks, the network performance is significantly
boosted compared to the baseline, and outperforms other methods by a
considerable margin on the YoutubeVOS-VIS 2019 and 2021 datasets.
|
MAELAS is a computer program for the calculation of magnetocrystalline
anisotropy energy, anisotropic magnetostrictive coefficients and magnetoelastic
constants in an automated way. The method originally implemented in version 1.0
of MAELAS was based on the length optimization of the unit cell, proposed by Wu
and Freeman, to calculate the anisotropic magnetostrictive coefficients. We
present here a revised and updated version (v2.0) of MAELAS, where we added a
new methodology to compute anisotropic magnetoelastic constants from a linear
fitting of the energy versus applied strain. We analyze and compare the
accuracy of both methods showing that the new approach is more reliable and
robust than the one implemented in version 1.0, especially for non-cubic
crystal symmetries. This analysis also helps us to find that the accuracy of the
method implemented in version 1.0 could be improved by using deformation
gradients derived from the equilibrium magnetoelastic strain tensor, as well as
potential future alternative methods like the strain optimization method.
Additionally, we clarify the role of the demagnetized state in the fractional
change in length, and derive the expression for saturation magnetostriction for
polycrystals with trigonal, tetragonal and orthorhombic crystal symmetry. In
this new version, we also fix some issues related to trigonal crystal symmetry
found in version 1.0.
|
In this paper, we address the problem of speaker verification in conditions
unseen or unknown during development. A standard method for speaker
verification consists of extracting speaker embeddings with a deep neural
network and processing them through a backend composed of probabilistic linear
discriminant analysis (PLDA) and global logistic regression score calibration.
This method is known to result in systems that work poorly on conditions
different from those used to train the calibration model. We propose to modify
the standard backend, introducing an adaptive calibrator that uses duration and
other automatically extracted side-information to adapt to the conditions of
the inputs. The backend is trained discriminatively to optimize binary
cross-entropy. When trained on a number of diverse datasets that are labeled
only with respect to speaker, the proposed backend consistently and, in some
cases, dramatically improves calibration, compared to the standard PLDA
approach, on a number of held-out datasets, some of which are markedly
different from the training data. Discrimination performance is also
consistently improved. We show that joint training of the PLDA and the adaptive
calibrator is essential -- the same benefits cannot be achieved when freezing
PLDA and fine-tuning the calibrator. To our knowledge, the results in this
paper are the first evidence in the literature that it is possible to develop a
speaker verification system with robust out-of-the-box performance on a large
variety of conditions.
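One simple way to realize such a side-information-dependent calibrator (a hedged sketch, not the authors' exact parameterization; the side features and layer shapes are assumptions):

```python
import torch
import torch.nn as nn

class AdaptiveCalibrator(nn.Module):
    """Illustrative duration-aware calibrator (not the authors' exact
    parameterization): the usual affine calibration s' = a * s + b, but with
    a and b predicted from side information such as log durations.  It is
    trained with binary cross-entropy on target/non-target trial labels."""
    def __init__(self, side_dim):
        super().__init__()
        self.scale = nn.Linear(side_dim, 1)
        self.shift = nn.Linear(side_dim, 1)

    def forward(self, score, side):        # score: (N,), side: (N, side_dim)
        return self.scale(side).squeeze(-1) * score + self.shift(side).squeeze(-1)

calib = AdaptiveCalibrator(side_dim=2)     # e.g. log enrollment/test durations
scores, side = torch.randn(16), torch.randn(16, 2)
labels = torch.randint(0, 2, (16,)).float()
loss = nn.BCEWithLogitsLoss()(calib(scores, side), labels)
```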
|
In this study, we propose a laboratory-scale methodology, based on X-ray
micro-tomography and caesium (Cs) as a contrast agent, to advance the
understanding of cracking due to alkali-silica reaction (ASR) in concrete. The
methodology allows achieving a completely non-destructive and time-lapse
characterization of the spatial-temporal patterns of both the cracks and the
ASR products. While Cs addition slightly accelerated the ASR kinetics, the
crack patterns, with and without Cs addition, were statistically equivalent.
Cracks with ASR products appeared first in the aggregates, close to the
interface with the cement paste. They propagated afterwards towards the
aggregates interior. Some products were then extruded for several mm into air
voids and cracks in the cement paste. This process suggests that, in the early
stage, the ASR products may be a low-viscosity gel that can flow away from the
source aggregate and may settle later elsewhere as a rigid phase, upon calcium
uptake.
|
In this paper, we combined linguistic complexity and (dis)fluency features
with pretrained language models for the task of Alzheimer's disease detection
of the 2021 ADReSSo (Alzheimer's Dementia Recognition through Spontaneous
Speech) challenge. An accuracy of 83.1% was achieved on the test set, which
amounts to an improvement of 4.23% over the baseline model. Our best-performing
model that integrated component models using a stacking ensemble technique
performed equally well on cross-validation and test data, indicating that it is
robust against overfitting.
|
Recently, random walks on dynamic graphs have been studied because of their
adaptivity to dynamic settings, including real network analysis. However,
previous works showed a tremendous gap between static and dynamic networks for
the cover time of a lazy simple random walk: although an $O(n^3)$ cover time was
shown for any static graph of $n$ vertices, there is an edge-changing dynamic
graph with an exponential cover time.
We study a lazy Metropolis walk of Nonaka, Ono, Sadakane, and Yamashita
(2010), which is a weighted random walk using local degree information. We show
that this walk is robust to edge changes in dynamic networks: for any
connected edge-changing graphs of $n$ vertices, the lazy Metropolis walk has
the $O(n^2)$ hitting time, the $O(n^2\log n)$ cover time, and the $O(n^2)$
coalescing time, while those times can be exponential for lazy simple random
walks. All of these bounds are tight up to a constant factor. At the heart of
the proof, we give upper bounds of those times for any reversible random walks
with a time-homogeneous stationary distribution.
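For concreteness, one step of the lazy Metropolis walk can be sketched as follows, using only local degree information; the adjacency-dict representation is an assumption of this illustration:

```python
import random

def lazy_metropolis_step(adj, u, rng=random):
    """One step of the lazy Metropolis walk (illustrative): from u, move to a
    neighbour v with probability 1 / (2 * max(deg(u), deg(v))) and stay put
    with the remaining probability.  Only local degree information is used,
    and the stationary distribution is uniform.  adj: {node: list of neighbours}."""
    neighbours = adj[u]
    v = rng.choice(neighbours)                      # propose a uniform neighbour
    accept = len(neighbours) / (2 * max(len(neighbours), len(adj[v])))
    return v if rng.random() < accept else u
```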
|
Camera localization aims to estimate 6 DoF camera poses from RGB images.
Traditional methods detect and match interest points between a query image and
a pre-built 3D model. Recent learning-based approaches encode scene structures
into a specific convolutional neural network (CNN) and thus are able to predict
dense coordinates from RGB images. However, most of them require re-training or
re-adaption for a new scene and have difficulties in handling large-scale
scenes due to limited network capacity. We present a new method for scene
agnostic camera localization using dense scene matching (DSM), where a cost
volume is constructed between a query image and a scene. The cost volume and
the corresponding coordinates are processed by a CNN to predict dense
coordinates. Camera poses can then be solved for with PnP algorithms. In
addition, our method can be extended to the temporal domain, which yields an
extra performance boost at test time. Our scene-agnostic approach achieves
accuracy comparable to existing scene-specific approaches, such as KFNet, on
the 7Scenes and Cambridge benchmarks. It also remarkably outperforms the
state-of-the-art scene-agnostic dense coordinate regression network SANet. The
code is available at https://github.com/Tangshitao/Dense-Scene-Matching.
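Once dense 2D-3D correspondences are predicted, the pose step is standard; a
minimal sketch with OpenCV is shown below (illustrative only: the intrinsics
and correspondences are random placeholders, so the recovered pose is
meaningless; in the actual pipeline the 3D points come from the predicted dense
scene coordinates):

    import cv2
    import numpy as np

    # placeholder dense predictions: pixel locations and predicted scene coordinates
    pts_2d = (np.random.rand(100, 2) * 640).astype(np.float32)
    pts_3d = np.random.rand(100, 3).astype(np.float32)
    K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])  # placeholder intrinsics

    # RANSAC PnP rejects outlier correspondences and returns rotation (rvec) and translation (tvec)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d, K, None)
    if ok:
        R, _ = cv2.Rodrigues(rvec)  # 3x3 camera rotation; tvec is the translation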
|
Loops, seamlessly repeatable musical segments, are a cornerstone of modern
music production. Contemporary artists often mix and match various sampled or
pre-recorded loops based on musical criteria such as rhythm, harmony and
timbral texture to create compositions. Taking such criteria into account, we
present LoopNet, a feed-forward generative model for creating loops conditioned
on intuitive parameters. We leverage Music Information Retrieval (MIR) models
as well as a large collection of public loop samples in our study and use the
Wave-U-Net architecture to map control parameters to audio. We also evaluate
the quality of the generated audio and propose intuitive controls for composers
to map the ideas in their minds to an audio loop.
|
Spin defects in solid-state materials are strong candidate systems for
quantum information technology and sensing applications. Here we explore in
detail the recently discovered negatively charged boron vacancies ($V_B^-$) in
hexagonal boron nitride (hBN) and demonstrate their use as atomic scale sensors
for temperature, magnetic fields and externally applied pressure. These
applications are possible due to the high-spin triplet ground state and bright
spin-dependent photoluminescence (PL) of the $V_B^-$. Specifically, we find
that the frequency shift in optically detected magnetic resonance (ODMR)
measurements is not only sensitive to static magnetic fields, but also to
temperature and pressure changes which we relate to crystal lattice parameters.
Our work is important for the future use of spin-rich hBN layers as intrinsic
sensors in heterostructures of functionalized 2D materials.
|
What does it mean today to study a problem from a computational point of
view? We focus on parameterized complexity and on Column 16 "Graph Restrictions
and Their Effect" of D. S. Johnson's Ongoing guide, where several puzzles were
proposed in a summary table with 30 graph classes as rows and 11 problems as
columns. Several of the 330 entries remain unclassified into Polynomial or
NP-complete after 35 years. We provide a full dichotomy for the Steiner Tree
column by proving that the problem is NP-complete when restricted to Undirected
Path graphs. We revise Johnson's summary table according to the granularity
provided by the parameterized complexity for NP-complete problems.
|
We present the one-loop electroweak renormalization of the CP-violating
2-Higgs-Doublet Model with softly broken $\mathbb{Z}_2$ symmetry (C2HDM). The
existence of CP violation in the scalar sector of the model leads to a quite
unique process of renormalization, since it requires the introduction of
several non-physical parameters. The C2HDM will thus have more independent
counterterms than independent renormalized parameters. As a consequence,
different combinations of counterterms can be taken as independent for the same
set of independent renormalized parameters. We compare the behaviour of
selected combinations in specific NLO processes, which are assured to be gauge
independent via a simple prescription. FeynMaster 2 is used to derive the
Feynman rules, counterterms and one-loop processes in a simultaneously
automatic and flexible way. This illustrates its use as an ideal tool to
renormalize models such as the C2HDM and investigate them at NLO.
|
Spectrum sensing problems are studied by exploiting a priori and a
posteriori information about the received noise variance. First, the
traditional Average Likelihood Ratio (ALR) and Generalized Likelihood Ratio
Test (GLRT) detectors are investigated, for the first time, under
Gamma-distributed channel noise with an available a priori statistical
distribution of the noise variance. Then, two robust detectors are proposed
that use the existing excess bandwidth to deliver a posteriori information
about the received noise variance uncertainty. The first proposed detector,
based on the traditional ALR, employs the marginal distribution of the
observation given the available a priori and a posteriori information about
the received signal, while the second employs the maximum a posteriori (MAP)
estimate of the inverse noise power under the same hypotheses as the first
detector. In addition, analytical expressions for the performance of the
proposed detectors are obtained in terms of the false-alarm and detection
probabilities. Simulation results show the superiority of the proposed
detectors over their traditional counterparts.
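For orientation (a generic textbook form, not the paper's specific statistics
under Gamma noise), the two detector families differ in how they treat the
unknown noise parameters $\theta$: the GLRT maximizes over them, while the ALR
averages them out against the prior $\pi(\theta)$:
$$T_{\mathrm{GLRT}}(\mathbf{y})=\frac{\max_{\theta}\,p(\mathbf{y}\mid\theta,\mathcal{H}_1)}{\max_{\theta}\,p(\mathbf{y}\mid\theta,\mathcal{H}_0)}\ \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\ \gamma,\qquad
T_{\mathrm{ALR}}(\mathbf{y})=\frac{\int p(\mathbf{y}\mid\theta,\mathcal{H}_1)\,\pi(\theta)\,d\theta}{\int p(\mathbf{y}\mid\theta,\mathcal{H}_0)\,\pi(\theta)\,d\theta}\ \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\ \gamma.$$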
|
We consider state changes in quantum theory due to "conditional action" and
relate these to the discussion of entropy decrease due to interventions of
"intelligent beings" and the principles of Szilard and Landauer/Bennett. The
mathematical theory of conditional actions is a special case of the theory of
"instruments" which describes changes of state due to general measurements and
will therefore be briefly outlined in the present paper. As a detailed example
we consider the imperfect erasure of a qubit that can also be viewed as a
conditional action and will be realized by the coupling of a spin to another
small spin system in its ground state.
|
We consider the distribution in residue classes modulo primes $p$ of Euler's
totient function $\phi(n)$ and the sum-of-proper-divisors function
$s(n):=\sigma(n)-n$. We prove that the values $\phi(n)$, for $n\le x$, that are
coprime to $p$ are asymptotically uniformly distributed among the $p-1$ coprime
residue classes modulo $p$, uniformly for $5 \le p \le (\log{x})^A$ (with $A$
fixed but arbitrary). We also show that the values of $s(n)$, for $n$
composite, are uniformly distributed among all $p$ residue classes modulo every
$p\le (\log{x})^A$. These appear to be the first results of their kind where
the modulus is allowed to grow substantially with $x$.
|
Born in the aftermath of core collapse supernovae, neutron stars contain
matter under extraordinary conditions of density and temperature that are
difficult to reproduce in the laboratory. In recent years, neutron star
observations have begun to yield novel insights into the nature of strongly
interacting matter in the high-density regime where current theoretical models
are challenged. At the same time, chiral effective field theory has developed
into a powerful framework to study nuclear matter properties with quantified
uncertainties in the moderate-density regime for modeling neutron stars. In
this article, we review recent developments in chiral effective field theory
and focus on many-body perturbation theory as a computationally efficient tool
for calculating the properties of hot and dense nuclear matter. We also
demonstrate how effective field theory enables statistically meaningful
comparisons between nuclear theory predictions, nuclear experiments, and
observational constraints on the nuclear equation of state.
|
The Semantic Web research community understood since its beginning how
crucial it is to equip practitioners with methods to transform non-RDF
resources into RDF. Proposals focus on either engineering content
transformations or accessing non-RDF resources with SPARQL. Existing solutions
require users to learn specific mapping languages (e.g. RML), to know how to
query and manipulate a variety of source formats (e.g. XPATH, JSON-Path), or to
combine multiple languages (e.g. SPARQL Generate). In this paper, we explore an
alternative solution and contribute a general-purpose meta-model for converting
non-RDF resources into RDF: Facade-X. Our approach can be implemented by
overriding the SERVICE operator and does not require extending the SPARQL
syntax. We compare our approach with the state-of-the-art methods RML and
SPARQL Generate and show that our solution has lower learning demands and
cognitive complexity, and is cheaper to implement and maintain, while having
comparable extensibility and efficiency.
|
In tokamak plasmas, the interaction among the micro-turbulence, zonal flows
(ZFs) and energetic particles (EPs) can affect the turbulence saturation level
and the consequent confinement quality and thus, is important for future
burning plasmas. In this work, the EP anisotropy effects on the ZF residual
level are studied by using anisotropic EP distributions with dependence on
pitch. Significant effects on the long-wavelength ZFs are found when a small
to moderate width around the dominant pitch in the EP distribution function is
assumed. In addition, it is found that the ZF residual level is enhanced by
barely passing/trapped and/or deeply trapped EPs, but is suppressed by well
passing and/or intermediate trapped EPs. Numerical calculation shows that for
ASDEX Upgrade plasmas, typical EP distribution functions can bring about a
$-3\%$ to $+5.5\%$ mitigation or enhancement of the ZF residual level,
depending on the EP distribution function.
|
It was conjectured by White in 1980 that the toric ring associated to a
matroid is defined by symmetric exchange relations. This conjecture was
extended to discrete polymatroids by Herzog and Hibi, who proved that it
holds for polymatroids with the so-called strong exchange property.
In this paper we generalize their result to polymatroids that are products of
polymatroids with the strong exchange property. This also extends a result by
Conca on transversal polymatroids.
|
From paints to food products, solvent evaporation is ubiquitous and
critically impacts product rheological properties. It affects Newtonian
fluids, by concentrating any non-volatile components, as well as viscoelastic
materials, which harden up. In both cases, solvent evaporation leads to a
change in the sample volume, which makes rheological measurements particularly
challenging with traditional shear geometries.
rheological properties of a sample experiencing `slow' evaporation can be
monitored in a time-resolved fashion by using a zero normal-force controlled
protocol in a parallel-plate geometry. Solvent evaporation from the sample
leads to a decrease of the normal force, which is compensated at all times by a
decrease of the gap height between the plates. As a result, the sample
maintains a constant contact area with the plates despite the significant
decrease of its volume. We validate the method under both oscillatory and
continuous shear by accurately monitoring the viscosity of water-glycerol
mixtures experiencing evaporation and a relative volume decrease as large as
70%. Moreover, we apply this protocol to dehydrating suspensions. Specifically,
we monitor a dispersion of charged silica nanoparticles undergoing a glass
transition induced by evaporation. While the decrease in gap height provides a
direct estimate of the increasing particle volume fraction, oscillatory and
continuous shear measurements allow us to monitor the suspension's evolving
viscoelastic properties in real-time. Overall, our study shows that a zero
normal-force protocol provides a simple approach to bulk and time-resolved
rheological characterization for systems experiencing slow volume variations.
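As an illustration of why the gap height tracks the volume fraction (our
notation; assuming the contact area $A$ is held constant by the zero
normal-force protocol and the particle volume is conserved):
$$V(t)=A\,h(t)\quad\Longrightarrow\quad \frac{V(t)}{V(0)}=\frac{h(t)}{h(0)},\qquad \phi(t)=\phi(0)\,\frac{h(0)}{h(t)},$$
where $h(t)$ is the gap height and $\phi(t)$ the particle volume fraction.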
|
We propose cross-modal transformer-based neural correction models that
refine the output of an automatic speech recognition (ASR) system so as to
remove ASR errors. Generally, neural correction models are composed of
encoder-decoder networks, which can directly model sequence-to-sequence mapping
problems. The most successful method is to use both the input speech and its
ASR output text as the input contexts for the encoder-decoder networks.
However, the conventional method cannot take into account the relationships
between these two different modal inputs because the input contexts are encoded
separately for each modality. To effectively leverage the correlated
information between the two modal inputs, our proposed models encode the two
different contexts jointly on the basis of cross-modal self-attention using a
transformer. We expect that cross-modal self-attention can effectively capture
the relationships between the two modalities for refining ASR hypotheses. We
also introduce a shallow fusion technique to efficiently integrate the
first-pass ASR model and our proposed neural correction model. Experiments on
Japanese natural language ASR tasks demonstrated that our proposed models
achieve better ASR performance than conventional neural correction models.
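A minimal sketch of the joint-encoding idea (not the paper's exact model; all
dimensions and layer counts below are placeholder assumptions): speech frames
and ASR hypothesis tokens are projected into a shared space, concatenated, and
passed through one Transformer encoder, so self-attention spans both
modalities.

    import torch
    import torch.nn as nn

    class JointCrossModalEncoder(nn.Module):
        def __init__(self, d_speech=80, vocab=4000, d_model=256, nhead=4, layers=2):
            super().__init__()
            self.speech_proj = nn.Linear(d_speech, d_model)   # e.g. log-mel frames
            self.text_embed = nn.Embedding(vocab, d_model)    # ASR hypothesis tokens
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, layers)

        def forward(self, speech, tokens):
            # speech: (B, T_s, d_speech), tokens: (B, T_t) integer ids
            joint = torch.cat([self.speech_proj(speech), self.text_embed(tokens)], dim=1)
            return self.encoder(joint)   # (B, T_s + T_t, d_model), fed to a decoder

    enc = JointCrossModalEncoder()
    out = enc(torch.randn(2, 100, 80), torch.randint(0, 4000, (2, 20)))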
|
Let $\Gamma$ be a Fuchsian group of the first kind. For an even integer
$m\ge 4$, we study $m/2$-holomorphic differentials in terms of the space of
(holomorphic) cuspidal modular forms $S_m(\Gamma)$. We also give an in-depth
study of Wronskians of cuspidal modular forms and their divisors.
|
In this paper we introduce a wide class of space-fractional and
time-fractional fractional semidiscrete Dirac operators of L\'evy-Leblond type
on the semidiscrete space-time $h\mathbb{Z}^n\times[0,\infty)$ ($h>0$),
resembling fractional semidiscrete counterparts of the so-called parabolic
Dirac operators. The cornerstone of this approach is the study of the null
solutions of the underlying space-fractional resp. time-fractional
fractional semidiscrete Dirac operators, and the representation of the null
solutions of both operators with the aid of the analytic fractional
semidiscrete semigroup $\{\exp(-te^{i\theta}(-\Delta_h)^{\alpha})\}_{t\geq 0}$
carrying the parameter constraints $0<\alpha\leq 1$ and $|\theta|\leq
\frac{\alpha \pi}{2}$. The results obtained involve the study of Cauchy
problems on $h\mathbb{Z}^n\times[0,\infty)$.
|
Simultaneous vibration control and energy harvesting of vehicle suspensions
have attracted significant research attention over the past decades. However,
existing energy harvesting shock absorbers (EHSAs) are mainly designed based on
the principle of linear resonance, thereby compromising suspension performance
for high-efficiency energy harvesting and responding only to narrow-bandwidth
vibrations. In this paper, we propose a new EHSA design -- inerter
pendulum vibration absorber (IPVA) -- that integrates an electromagnetic rotary
EHSA with a nonlinear pendulum vibration absorber. We show that this design
simultaneously improves ride comfort and energy harvesting efficiency by
exploiting the nonlinear effects of pendulum inertia. To further improve the
performance, we develop a novel stochastic linearization model predictive
control (SL-MPC) approach in which we employ stochastic linearization to
approximate the nonlinear dynamics of EHSA that has superior accuracy compared
to standard linearization. In particular, we develop a new stochastic
linearization method with guaranteed stabilizability, which is a prerequisite
for control designs. This leads to an MPC problem that is much more
computationally efficient than the nonlinear MPC counterpart with no major
performance degradation. Extensive simulations are performed to show the
superiority of the proposed new nonlinear EHSA and to demonstrate the efficacy
of the proposed SL-MPC.
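For context, the classical statistical (stochastic) linearization step replaces
a nonlinearity $g(x)$ by an equivalent gain chosen to minimize the mean-square
approximation error (a textbook form; the paper's variant with guaranteed
stabilizability imposes additional structure):
$$K^{\star}=\arg\min_{K}\ \mathbb{E}\!\left[\|g(x)-Kx\|^{2}\right]=\mathbb{E}\!\left[g(x)\,x^{\top}\right]\,\mathbb{E}\!\left[x\,x^{\top}\right]^{-1},$$
after which the linearized dynamics can be handled by a standard MPC solver.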
|
We investigate the Minimal Theory of Massive Gravity (MTMG) in the light of
different observational data sets which are in tension within the $\Lambda$CDM
cosmology. In particular, we analyze the MTMG model, for the first time, with the
Planck-CMB data, and how these precise measurements affect the free parameters
of the theory. The MTMG model can affect the CMB power spectrum at large
angular scales and cause a suppression on the amplitude of the matter power
spectrum. We find that, on adding Planck-CMB data, the graviton has a small,
positive, non-zero mass at the 68\% confidence level, and from this
perspective, we show that the tension between redshift space distortions
measurements and Planck-CMB data in the parametric space $S_8 - \Omega_m$ can
be resolved within the MTMG scenario. Through a robust and accurate analysis,
we find that the $H_0$ tension between the CMB and the local distance ladder
measurements still remains but can be reduced to $\sim3.5\sigma$ within the
MTMG theory. The MTMG is very well consistent with the CMB observations, and
undoubtedly, it can serve as a viable candidate amongst other modified gravity
theories.
|
We present a protocol to construct your own depth validation dataset for
navigation. This protocol, called RDC for Rigid Depth Constructor, aims at
being more accessible and cheaper than already existing techniques, requiring
only a camera and a Lidar sensor to get started. We also develop a test suite
to get insightful information from the evaluated algorithm. Finally, we take
the example of UAV videos, on which we test two depth algorithms that were
initially tested on KITTI and show that the drone context is dramatically
different from in-car videos. This shows that a single-context benchmark
should not be considered reliable, and when developing a depth estimation
algorithm, one should benchmark it on a dataset that best fits one's particular
needs, which often means creating a brand new one. Along with this paper we
provide an open-source implementation of the tool and plan to make it as
user-friendly as possible, so that depth dataset creation becomes possible even
for small teams. Our key contributions are the following: we propose a
complete, open-source and almost fully automatic software application for
creating
validation datasets with densely annotated depth, adaptable to a wide variety
of image, video and range data. It includes selection tools to adapt the
dataset to specific validation needs, and conversion tools to other dataset
formats. Using this application, we propose two new real datasets, outdoor and
indoor, readily usable in a UAV navigation context. Finally, as examples, we
show an evaluation of two depth prediction algorithms, using a collection of
comprehensive (e.g. distribution-based) metrics.
|
Despite their superior performance in modeling complex patterns for
challenging problems, the black-box nature of Deep Learning (DL) methods
imposes limitations on their application in real-world critical domains. The
lack of a smooth way to enable human reasoning about black-box decisions
hinders any preventive action against unexpected events, which may lead to
catastrophic consequences. To tackle the opaqueness of black-box models,
interpretability has become a fundamental requirement in DL-based systems,
building trust and knowledge by providing ways to understand the model's
behavior. Although a current hot topic, further advances are still needed to
overcome the limitations of existing interpretability methods for unsupervised
DL-based models for Anomaly Detection (AD). Autoencoders (AE) are at the core
of unsupervised DL-based AD applications, achieving best-in-class performance.
However, because they obtain their results in a hybrid way (requiring
additional calculations outside the network), only model-agnostic
interpretability methods can be applied to AE-based AD, and these agnostic
methods are computationally expensive when processing a large number of
parameters. In this paper we present RXP (Residual eXPlainer), a new
interpretability method that addresses these limitations of AE-based AD in
large-scale systems. It stands out for its implementation simplicity, low
computational cost and deterministic behavior, with explanations obtained
through deviation analysis of the reconstructed input features. In an
experiment using data from a real heavy-haul railway line, the proposed method
achieved superior performance compared to SHAP, demonstrating its potential to
support decision making in large-scale critical systems.
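A minimal sketch of the residual-based idea (our own illustration of deviation
analysis, not the authors' exact scoring; the feature names are hypothetical):
rank each input feature by how much its reconstruction deviates from the
observation.

    import numpy as np

    def residual_explanation(x, x_hat, feature_names):
        """Rank features of one sample by absolute reconstruction deviation.
        x, x_hat: 1-D arrays of the original and AE-reconstructed features."""
        residuals = np.abs(x - x_hat)           # per-feature deviation
        order = np.argsort(residuals)[::-1]     # largest deviation first
        return [(feature_names[i], float(residuals[i])) for i in order]

    # hypothetical usage: x_hat would come from autoencoder.predict(x) in a real pipeline
    x = np.array([0.2, 1.5, -0.3]); x_hat = np.array([0.25, 0.4, -0.28])
    print(residual_explanation(x, x_hat, ["speed", "axle_temp", "vibration"]))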
|
We study the Hall effect in square, planar type-II superconductors using
numerical simulations of time dependent Ginzburg-Landau (TDGL) equations. The
Hall field in some type-II superconductors displays sign-change behavior at
some magnetic fields due to the induced field of vortex flow, when its
contribution is strong enough to reverse the field direction. In this work, we
use modified TDGL equations which couple an externally applied current, and
also incorporate normal-state and flux-flow Hall effects. We obtain the profile
of the Hall angle as a function of applied magnetic field for four different
sizes ($l \times l$) of the superconductor: $l/\xi \in \{3, 5, 15, 20\}$. We
obtain vastly different profiles for each size, proving that size is an
important parameter that determines Hall behavior. We find that the electric
field dynamics provides insight into several anomalous features, including the
sign change of the Hall angle, and leads us to the precise transient behavior
of the order parameter responsible for them.
|
This paper considers a probabilistic model for floating-point computation in
which the roundoff errors are represented by bounded random variables with mean
zero. Using this model, a probabilistic bound is derived for the forward error
of the computed sum of n real numbers. This work improves upon existing
probabilistic bounds by holding to all orders, and as a result provides
informative bounds for larger problem sizes.
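For orientation (representative statements in the style of Higham and Mary, not
the paper's exact constants), the worst-case and probabilistic first-order
bounds for the recursively computed sum $\hat{s}$ of $x_1,\dots,x_n$ read:
$$\frac{|\hat{s}-s|}{\sum_{i=1}^{n}|x_i|}\le (n-1)\,u+O(u^2)\ \ \text{(worst case)},\qquad \frac{|\hat{s}-s|}{\sum_{i=1}^{n}|x_i|}\lesssim \lambda\sqrt{n}\,u\ \ \text{(with high probability)},$$
where $u$ is the unit roundoff and $\lambda$ a modest constant set by the
target failure probability; bounds that hold to all orders in $u$, as pursued
here, remove the $O(u^2)$ caveat.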
|
We consider those elements of the Schwartz algebra of entire functions which
are Fourier-Laplace transforms of invertible distributions with compact
support on the real line. These functions are called invertible in the sense
of Ehrenpreis. The presented result concerns the properties of the zero sets
of a function f that is invertible in the sense of Ehrenpreis. Namely, we
establish some properties of the subset formed by the zeros of f lying not
far from the real axis.
|
After the discovery of the Higgs boson in 2012, particle physics has entered
an exciting era. An important question is whether the Standard Model of
particle physics correctly describes the scalar sector realized by nature, or
whether it is part of a more extended model, featuring additional particle
content. A prime way to test this is to probe models with extended scalar
sectors at future collider facilities. Here we discuss such models in the
context of the high-luminosity LHC, possible proton-proton colliders with 27
and 100 TeV center-of-mass energy, as well as future lepton colliders with
various center-of-mass energies.
|
Single-channel speech enhancement (SE) is an important task in speech
processing. A widely used framework combines an analysis/synthesis filterbank
with a mask prediction network, such as the Conv-TasNet architecture. In such
systems, the denoising performance and computational efficiency are mainly
affected by the structure of the mask prediction network. In this study, we aim
to improve the sequential modeling ability of Conv-TasNet architectures by
integrating Conformer layers into a new mask prediction network. To make the
model computationally feasible, we extend the Conformer using linear complexity
attention and stacked 1-D dilated depthwise convolution layers. We trained the
model on 3,396 hours of noisy speech data, and show that (i) the use of linear
complexity attention avoids high computational complexity, and (ii) our model
achieves higher scale-invariant signal-to-noise ratio than the improved
time-dilated convolution network (TDCN++), an extended version of Conv-TasNet.
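A generic sketch of kernelized linear-complexity attention (in the style of
Katharopoulos et al.; not necessarily the exact variant used here), which
replaces the $O(T^2)$ softmax attention by an $O(T)$ computation in the
sequence length:

    import numpy as np

    def linear_attention(Q, K, V):
        """Q, K: (T, d), V: (T, d_v); cost is O(T * d * d_v) instead of O(T^2 * d)."""
        phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 feature map
        Qf, Kf = phi(Q), phi(K)
        kv = Kf.T @ V                     # (d, d_v), built once for all queries
        z = Qf @ Kf.sum(axis=0)           # (T,) normalizer
        return (Qf @ kv) / z[:, None]

    T, d = 6, 4
    out = linear_attention(np.random.randn(T, d), np.random.randn(T, d), np.random.randn(T, d))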
|
Mechanical properties of transition metal dichalcogenides (TMDCs) are
relevant to their prospective applications in flexible electronics. So far, the
focus has been on the semiconducting TMDCs, mostly MoX2 and WX2 (X=S, Se) due
to their potential in optoelectronics. A comprehensive understanding of the
elastic properties of metallic TMDCs is needed to complement the semiconducting
TMDCs in flexible optoelectronics. Thus, mechanical testing of metallic TMDCs
is pertinent to the realization of the applications. Here, we report on the
atomic force microscopy-based nano-indentation measurements on ultra-thin
2H-TaS2 crystals to elucidate the stretching and breaking of the metallic
TMDCs. We explored the elastic properties of 2H-TaS2 at different thicknesses
ranging from 3.5 nm to 12.6 nm and find that the Young's modulus is independent
of the thickness, at a value of 85.9 ± 10.6 GPa, which is lower than that of the
semiconducting TMDCs reported so far. We determined the breaking strength to be
5.07 ± 0.10 GPa, which is 6% of the Young's modulus. This value is comparable
to that of other TMDCs. We used ab initio calculations to provide insight into
the high elasticity measured in 2H-TaS2. We also performed measurements on a
small number of 1T-TaTe2, 3R-NbS2 and 1T-NbTe2 samples and extended our ab
initio calculations to these materials to gain a deeper understanding of the
elastic and breaking properties of metallic TMDCs. This work illustrates that
the studied metallic TMDCs are suitable candidates to be used as additives in
composites as functional and structural elements and for flexible conductive
electronic devices.
|
One billion people live in informal settlements worldwide. The complex and
multilayered spaces that characterize this unplanned form of urbanization pose
a challenge to traditional approaches to mapping and morphological analysis.
This study proposes a methodology to study the morphological properties of
informal settlements based on terrestrial LiDAR (Light Detection and Ranging)
data collected in Rocinha, the largest favela in Rio de Janeiro, Brazil. Our
analysis operates at two resolutions, including a \emph{global} analysis
focused on comparing different streets of the favela to one another, and a
\emph{local} analysis unpacking the variation of morphological metrics within
streets. We show that our methodology reveals meaningful differences and
commonalities both in terms of the global morphological characteristics across
streets and their local distributions. Finally, we create morphological maps at
high spatial resolution from LiDAR data, which can inform urban planning
assessments of concerns related to crowding, structural safety, air quality,
and accessibility in the favela. The methods for this study are automated and
can be easily scaled to analyze entire informal settlements, leveraging the
increasing availability of inexpensive LiDAR scanners on portable devices such
as cellphones.
|
We determine the ability of Cosmic Explorer, a proposed third-generation
gravitational-wave observatory, to detect eccentric binary neutron stars and to
measure their eccentricity. We find that for a matched-filter search, template
banks constructed using binaries in quasi-circular orbits are effectual for
eccentric neutron star binaries with $e_{7} \leq 0.004$ ($e_{7} \leq 0.003$)
for CE1 (CE2), where $e_7$ is the binary's eccentricity at a gravitational-wave
frequency of 7~Hz. We show that stochastic template placement can be used to
construct a matched-filter search for binaries with larger eccentricities and
construct an effectual template bank for binaries with $e_{7} \leq 0.05$. We
show that the computational cost of both the search for binaries in
quasi-circular orbits and eccentric orbits is not significantly larger for
Cosmic Explorer than for Advanced LIGO and is accessible with present-day
computational resources. We investigate Cosmic Explorer's ability to
distinguish between circular and eccentric binaries. We estimate that for a
binary with a signal-to-noise ratio of 8 (800), Cosmic Explorer can distinguish
between a circular binary and a binary with eccentricity $e_7 \gtrsim 10^{-2}$
($10^{-3}$) at 90% confidence.
|
Understanding the spread of false or dangerous beliefs through a population
has never seemed so urgent. Network science researchers have often taken a page
from epidemiologists, and modeled the spread of false beliefs as similar to how
a disease spreads through a social network. However, absent from those
disease-inspired models is an internal model of an individual's set of current
beliefs, even though cognitive science has increasingly documented how the
interaction between mental models and incoming messages seems to be crucially
important for their adoption or rejection. Some computational social science
modelers analyze agent-based models where individuals do have simulated
cognition, but these often lack the strengths of network science, namely
empirically driven network structures. We introduce a cognitive cascade model
that combines a network science belief-cascade approach with an internal
cognitive model of the individual agents, as in opinion diffusion models,
yielding a public opinion diffusion (POD) model in which media institutions are
added as agents that begin opinion cascades. We analyze the cognitive cascade
model
with our simple cognitive function across various graph topologies and
institutional messaging patterns. We argue from our results that
population-level aggregate outcomes of the model qualitatively match what has
been reported in COVID-related public opinion polls, and that the model
dynamics lend insights as to how to address the spread of problematic beliefs.
The overall model sets up a framework with which social science misinformation
researchers and computational opinion diffusion modelers can join forces to
understand, and hopefully learn how to best counter, the spread of
disinformation and "alternative facts."
|
This paper aims to invalidate the hypothesis that consciousness is necessary
for the quantum measurement process. To achieve this target, I propose a
considerable modification of the Schroedinger cat and the Dead-Alive Physicist
thought experiments, called 'PIAR', short for Physicist Inside the Ambiguous
Room. A specific strategy has enabled me to plan the experiment in such a way
as to logically justify the inconsistency of the above hypothesis and to oblige
its supporters to rely on an alternative interpretation of quantum mechanics in
which the real-world phenomena exist independently of our conscious mind and
where observers play no special role. Moreover, in my description, the
measurement apparatus will be complete, in the sense that the experiment, given
that it includes also the experimenter, will begin and end exclusively within a
sealed room. Hence, my analysis provides a logical explanation of the
relationship between the observer and the objects of her/his experimental
observation; this and a few other implications are discussed in the fifth
section and the conclusions.
|
Nature-inspired swarm-based algorithms have been widely applied to tackle
high-dimensional and complex optimization problems across many disciplines.
They are general purpose optimization algorithms, easy to use and implement,
flexible and assumption-free. A common drawback of these algorithms is
premature convergence, where the solution found is not a global optimum. We
provide sufficient conditions for an algorithm to converge almost surely (a.s.)
to a global optimum. We then propose a general, simple and effective strategy,
called Perturbation-Projection (PP), to enhance an algorithm's exploration
capability so that our convergence conditions are guaranteed to hold. We
illustrate this approach using three widely used nature-inspired swarm-based
optimization algorithms: particle swarm optimization (PSO), bat algorithm (BAT)
and competitive swarm optimizer (CSO). Extensive numerical experiments show
that each of the three algorithms with the enhanced PP strategy outperforms the
original version in a number of notable ways.
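A minimal sketch of the idea behind the PP strategy, grafted onto a bare-bones
PSO (our own illustration under assumed details: Gaussian perturbations with a
decaying scale and projection back onto the feasible box; the paper's exact
schedule and conditions may differ):

    import numpy as np

    def pso_pp(f, lo, hi, n_particles=30, dim=5, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n_particles, dim)); v = np.zeros_like(x)
        pbest = x.copy(); pval = np.apply_along_axis(f, 1, x)
        g = pbest[pval.argmin()].copy()
        for t in range(1, iters + 1):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)  # standard PSO update
            x = x + v
            x += rng.normal(0.0, (hi - lo) / np.sqrt(t), x.shape)      # Perturbation (assumed decaying scale)
            x = np.clip(x, lo, hi)                                     # Projection onto the feasible box
            vals = np.apply_along_axis(f, 1, x)
            better = vals < pval
            pbest[better], pval[better] = x[better], vals[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    best_x, best_val = pso_pp(lambda z: np.sum(z**2), -5.0, 5.0)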
|
Understanding pedestrian crossing behavior is an essential goal in
intelligent vehicle development, leading to an improvement in their security
and traffic flow. In this paper, we developed a method called IntFormer. It is
based on transformer architecture and a novel convolutional video
classification model called RubiksNet. Following the evaluation procedure in a
recent benchmark, we show that our model reaches state-of-the-art results with
good speed ($\approx 40$ sequences per second) and size ($8\times$ smaller
than the best-performing model), making it suitable for real-time usage. We
also explore each of the input features, finding that ego-vehicle speed is the
most important variable, possibly due to the similarity of crossing cases in
the PIE dataset.
|
We show that the minimal Type-I Seesaw mechanism can successfully account for
the observed dark matter abundance in the form of a keV sterile neutrino. This
population can be produced by the decay of the heavier neutral leptons, with
masses above the Higgs mass scale, while they are in thermal equilibrium in the
early Universe (freeze-in). Moreover, the implementation of the relevant
phenomenological constraints (relic abundance, indirect detection and structure
formation) on this model automatically selects a region of the parameter space
featuring an approximate lepton number symmetry.
|
Normalizing flows learn a diffeomorphic mapping between the target and base
distribution, while the Jacobian determinant of that mapping forms another
real-valued function. In this paper, we show that the Jacobian determinant
mapping is unique for the given distributions, hence the likelihood objective
of flows has a unique global optimum. In particular, the likelihood for a class
of flows is explicitly expressed by the eigenvalues of the auto-correlation
matrix of individual data points, and is independent of the parameterization of
the neural network, which provides a theoretical optimal value of the
likelihood objective and relates to probabilistic PCA. Additionally, the
Jacobian determinant is a measure of local volume change and is maximized when
MLE is used for optimization. To stabilize normalizing flow training, it is
necessary to maintain a balance between the expansion and contraction of
volume, i.e., a Lipschitz constraint on the diffeomorphic mapping and its
inverse. With these theoretical results, we propose several principles for
designing normalizing flows, and we conduct numerical experiments on
high-dimensional datasets (such as CelebA-HQ 1024x1024) to show the improved
stability of training.
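For reference, the likelihood objective in question is the standard
change-of-variables identity for a diffeomorphic flow $f_\theta$ mapping data
$x$ to the base variable $z=f_\theta(x)$:
$$\log p_X(x)=\log p_Z\big(f_\theta(x)\big)+\log\Big|\det\frac{\partial f_\theta(x)}{\partial x}\Big|,$$
where the second term is the Jacobian log-determinant whose uniqueness at the
optimum is the object of the analysis.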
|
Unobserved confounding is one of the main challenges when estimating causal
effects. We propose a novel causal reduction method that replaces an arbitrary
number of possibly high-dimensional latent confounders with a single latent
confounder that lives in the same space as the treatment variable without
changing the observational and interventional distributions entailed by the
causal model. After the reduction, we parameterize the reduced causal model
using a flexible class of transformations, so-called normalizing flows. We
propose a learning algorithm to estimate the parameterized reduced model
jointly from observational and interventional data. This allows us to estimate
the causal effect in a principled way from combined data. We perform a series
of experiments on data simulated using nonlinear causal mechanisms and find
that we can often substantially reduce the number of interventional samples
when adding observational training samples without sacrificing accuracy. Thus,
adding observational data may help to more accurately estimate causal effects
even in the presence of unobserved confounders.
|
Blindspots in APIs can cause software engineers to introduce vulnerabilities,
but such blindspots are, unfortunately, common. We study the effect APIs with
blindspots have on developers in two languages by replicating a 109-developer,
24-Java-API controlled experiment. Our replication applies to Python and
involves 129 new developers and 22 new APIs. We find that using APIs with
blindspots statistically significantly reduces the developers' ability to
correctly reason about the APIs in both languages, but that the effect is more
pronounced for Python. Interestingly, for Java, the effect increased with
complexity of the code relying on the API, whereas for Python, the opposite was
true. Whether the developers considered API uses to be more difficult, less
clear, and less familiar did not have an effect on their ability to correctly
reason about them. Developers with better long-term memory recall were more
likely to correctly reason about APIs with blindspots, but short-term memory,
processing speed, episodic memory, and memory span had no effect. Surprisingly,
professional experience and expertise did not improve the developers' ability
to reason about APIs with blindspots across both languages, with long-term
professionals with many years of experience making mistakes as often as
relative novices. Finally, personality traits did not significantly affect the
Python developers' ability to reason about APIs with blindspots, but less
extraverted and more open developers were better at reasoning about Java APIs
with blindspots. Overall, our findings suggest that blindspots in APIs are a
serious problem across languages, and that experience and education alone do
not overcome that problem, suggesting that tools are needed to help developers
recognize blindspots in APIs as they write code that uses those APIs.
|
We present three families of minimal border rank tensors: they come from
highest weight vectors, smoothable algebras, or monomial algebras. We analyse
them using Strassen's laser method and obtain an upper bound $2.431$ on
$\omega$. We also explain how in certain monomial cases using the laser method
directly is less profitable than first degenerating. Our results form possible
paths in the search for valuable tensors for the laser method away from
Coppersmith-Winograd tensors.
|
We use the 21 cm emission line data from the DINGO-VLA project to study the
atomic hydrogen gas H\,{\textsc i} of the Universe at redshifts $z<0.1$.
Results are obtained using a stacking analysis, combining the H\,{\textsc i}
signals from 3622 galaxies extracted from 267 VLA pointings in the G09 field of
the Galaxy and Mass Assembly Survey (GAMA). Rather than using a traditional
one-dimensional spectral stacking method, a three-dimensional cubelet stacking
method is used to enable deconvolution and the accurate recovery of average
galaxy fluxes from this high-resolution interferometric dataset. By probing
down to galactic scales, this experiment also overcomes confusion corrections
that have been necessary to include in previous single dish studies. After
stacking and deconvolution, we obtain a $30\sigma$ H\,{\textsc i} mass
measurement from the stacked spectrum, indicating an average H\,{\textsc i}
mass of $M_{\rm H\,{\textsc i}}=(1.674\pm 0.183)\times 10^{9}~{\Msun}$. The
corresponding cosmic density of neutral atomic hydrogen is $\Omega_{\rm
H\,{\textsc i}}=(0.377\pm 0.042)\times 10^{-3}$ at a redshift of $z=0.051$. These
values are in good agreement with earlier results, implying there is no
significant evolution of $\Omega_{\rm H\,{\textsc i}}$ at lower redshifts.
|
Mobile robotics is a research area that has witnessed incredible advances over
the last decades. Robot navigation is an essential task for mobile robots, and
many methods have been proposed to allow robots to navigate within different
environments. This thesis studies different deep learning-based approaches,
highlighting the advantages and disadvantages of each scheme. These approaches
are promising in that some of them can navigate the robot in unknown and
dynamic environments. In this thesis, one of the deep learning methods, based
on a convolutional neural network (CNN), is realized in software. Several
preparatory studies were required to complete this thesis, including an
introduction to Linux, the Robot Operating System (ROS), C++, Python, and the
GAZEBO simulator. Within this work, we modified the drone network
(namely, DroNet) approach to be used in an indoor environment by using a ground
robot in different cases. Indeed, the DroNet approach suffers from the absence
of goal-oriented motion. Therefore, this thesis mainly focuses on tackling this
problem via mapping using simultaneous localization and mapping (SLAM) and path
planning techniques using Dijkstra's algorithm. Afterward, the combination of
the ground-robot DroNet, mapping, and path planning leads to goal-oriented
motion, following the shortest path while avoiding dynamic obstacles. Finally,
we propose a low-cost approach for indoor applications such as restaurants and
museums, based on using a monocular camera instead of a laser scanner.
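As a minimal illustration of the path-planning step (a generic grid Dijkstra
sketch, independent of the thesis' ROS implementation), the planner expands the
cheapest frontier cell of the occupancy grid until the goal is reached:

    import heapq

    def dijkstra_grid(grid, start, goal):
        """Shortest path on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
        dist, prev, pq = {start: 0}, {}, [(0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            r, c = u
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    nd = d + 1
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)], prev[(nr, nc)] = nd, u
                        heapq.heappush(pq, (nd, (nr, nc)))
        path, node = [], goal
        while node in prev or node == start:
            path.append(node)
            if node == start:
                break
            node = prev[node]
        return path[::-1]

    grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
    print(dijkstra_grid(grid, (0, 0), (2, 0)))  # path around the obstacle row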
|
Inspired by humans' remarkable ability to master arithmetic and generalize to
unseen problems, we present a new dataset, HINT, to study machines' capability
of learning generalizable concepts at three different levels: perception,
syntax, and semantics. In particular, concepts in HINT, including both digits
and operators, are required to be learned in a weakly supervised fashion: only
the final results of handwritten expressions are provided as supervision.
Learning agents need to figure out how concepts are perceived from raw signals
such as
images (i.e., perception), how multiple concepts are structurally combined to
form a valid expression (i.e., syntax), and how concepts are realized to afford
various reasoning tasks (i.e., semantics). With a focus on systematic
generalization, we carefully design a five-fold test set to evaluate both the
interpolation and the extrapolation of learned concepts. To tackle this
challenging problem, we propose a neural-symbolic system by integrating neural
networks with grammar parsing and program synthesis, learned by a novel
deduction--abduction strategy. In experiments, the proposed neural-symbolic
system demonstrates strong generalization capability and significantly
outperforms end-to-end neural methods like RNN and Transformer. The results
also indicate the significance of recursive priors for extrapolation on syntax
and semantics.
|
The lack of publicly available evaluation data for low-resource languages
limits progress in Spoken Language Understanding (SLU). As key tasks like
intent classification and slot filling require abundant training data, it is
desirable to reuse existing data in high-resource languages to develop models
for low-resource scenarios. We introduce xSID, a new benchmark for
cross-lingual Slot and Intent Detection in 13 languages from 6 language
families, including a very low-resource dialect. To tackle the challenge, we
propose a joint learning approach, with English SLU training data and
non-English auxiliary tasks from raw text, syntax and translation for transfer.
We study two setups which differ by type and language coverage of the
pre-trained embeddings. Our results show that jointly learning the main tasks
with masked language modeling is effective for slots, while machine translation
transfer works best for intent classification.
|
The convolutional neural network (CNN) remains an essential tool in solving
computer vision problems. Standard convolutional architectures consist of
stacked layers of operations that progressively downscale the image. Aliasing
is a well-known side-effect of downsampling that may take place: it causes
high-frequency components of the original signal to become indistinguishable
from its low-frequency components. While downsampling takes place in the
max-pooling layers or in the strided-convolutions in these models, there is no
explicit mechanism that prevents aliasing from taking place in these layers.
Due to the impressive performance of these models, it is natural to suspect
that they, somehow, implicitly deal with this distortion. The question we aim
to answer in this paper is simply: "how and to what extent do CNNs counteract
aliasing?" We explore the question by means of two examples: In the first, we
assess the CNNs capability of distinguishing oscillations at the input, showing
that the redundancies in the intermediate channels play an important role in
succeeding at the task; In the second, we show that an image classifier CNN
while, in principle, capable of implementing anti-aliasing filters, does not
prevent aliasing from taking place in the intermediate layers.
|
A key issue in the solution of partial differential equations via integral
equation methods is the evaluation of possibly singular integrals involving the
Green's function and its derivatives multiplied by simple functions over
discretized representations of the boundary. For the Helmholtz equation, while
many authors use numerical quadrature to evaluate these boundary integrals, we
present analytical expressions for such integrals over flat polygons in the
form of infinite series. These series can be efficiently truncated based on
accurate error bounds, which is key to their integration into methods such as
the Fast Multipole Method.
|
This paper introduces Fast Linearized Adaptive Policy (FLAP), a new
meta-reinforcement learning (meta-RL) method that is able to extrapolate well
to out-of-distribution tasks without the need to reuse data from training, and
adapt almost instantaneously, needing only a few samples during testing. FLAP
builds upon the idea of learning a shared linear representation
of the policy so that when adapting to a new task, it suffices to predict a set
of linear weights. A separate adapter network is trained simultaneously with
the policy such that during adaptation, we can directly use the adapter network
to predict these linear weights instead of updating a meta-policy via gradient
descent, such as in prior meta-RL methods like MAML, to obtain the new policy.
The application of the separate feed-forward network not only speeds up the
adaptation run-time significantly, but also generalizes extremely well to very
different tasks that prior Meta-RL methods fail to generalize to. Experiments
on standard continuous-control meta-RL benchmarks show that FLAP presents
significantly stronger performance on out-of-distribution tasks, with up to
double the average return and up to 8X faster adaptation run-time when
compared to prior methods.
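The core idea reads naturally as a two-headed computation; the schematic NumPy
sketch below uses made-up shapes and random stand-in weights purely to show
where the adapter replaces gradient updates (it is not the released FLAP
implementation): a shared body produces features $\phi(s)$, and an adapter maps
a small context summary to the task-specific linear weights $w$, so the adapted
policy is simply $a = w\,\phi(s)$.

    import numpy as np

    rng = np.random.default_rng(0)
    d_feat, d_ctx, d_act = 16, 8, 2
    W_body = rng.normal(size=(d_feat, 4))                 # maps 4-dim state -> shared features
    W_adapt = rng.normal(size=(d_act * d_feat, d_ctx))    # maps context -> linear policy weights

    def phi(state):
        return np.tanh(W_body @ state)

    def adapt(context):
        # one forward pass replaces gradient-based fine-tuning
        return (W_adapt @ context).reshape(d_act, d_feat)

    def policy(state, w_task):
        return w_task @ phi(state)                        # linear in the shared features

    context = rng.normal(size=d_ctx)                      # e.g. pooled embedding of a few test transitions
    action = policy(rng.normal(size=4), adapt(context))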
|
We study the solutions of a linear equation b=ax in a homomorphic image of a
commutative Bezout domain of stable range 1.5. It is proved that the set of
solutions of a solvable linear equation contains at least one solution that
divides the rest, which is called a generating solution. Generating solutions
are pairwise associates. Using this result, the structure of elements of the
Zelisko group is investigated.
|
With recent high-throughput technology we can synthesize large heterogeneous
collections of DNA structures, and also read them all out precisely in a single
procedure. Can we use these tools, not only to do things faster, but also to
devise new techniques and algorithms? In this paper we examine some DNA
algorithms that assume high-throughput synthesis and sequencing. We aim to
monitor, record, and read out the order in which a number $N$ of events occur,
using $N^2$ redundant detectors, and (after sequencing) to reconstruct the
order by transitive reduction.
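As a small illustration of the read-out step (our own sketch; detector
chemistry aside), the all-pairs precedence records determine a total order: its
transitive reduction is the chain of consecutive events, and equivalently the
order can be recovered by counting how many events each one precedes. The pair
records below are hypothetical.

    def reconstruct_order(precedes):
        """precedes: set of pairs (a, b) meaning event a was observed before event b.
        For a total order observed by all-pairs detectors, an event's rank is the
        number of events it precedes; ties indicate missing or inconsistent detectors."""
        events = {e for pair in precedes for e in pair}
        return sorted(events,
                      key=lambda e: sum((e, other) in precedes for other in events),
                      reverse=True)

    records = {("A", "B"), ("A", "C"), ("B", "C")}   # hypothetical detector read-out
    print(reconstruct_order(records))                # ['A', 'B', 'C']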
|
Label noise and long-tailed distributions are two major challenges in
distantly supervised relation extraction. Recent studies have shown great
progress on denoising, but pay little attention to the problem of long-tailed
relations. In this paper, we introduce constraint graphs to model the
dependencies between relation labels. On top of that, we further propose a
novel constraint graph-based relation extraction framework (CGRE) to handle the
two challenges simultaneously. CGRE employs graph convolution networks (GCNs)
to propagate information from data-rich relation nodes to data-poor relation
nodes, and thus boosts the representation learning of long-tailed relations. To
further improve the noise immunity, a constraint-aware attention module is
designed in CGRE to integrate the constraint information. Experimental results
on a widely-used benchmark dataset indicate that our approach achieves
significant improvements over the previous methods for both denoising and
long-tailed relation extraction. Our dataset and codes are available at
https://github.com/tmliang/CGRE.
|