For a discrete function $f\left( x\right)$ on a discrete set, the finite
difference can be either forward or backward. However, we observe that if
$f\left( x\right)$ is a sum of two functions $f\left( x\right) =f_{1}\left(
x\right) +f_{2}\left( x\right)$ defined on the discrete set, the first order
difference $\Delta f\left( x\right)$ is equivocal, for we may have
$\Delta^{f}f_{1}\left( x\right) +\Delta^{b}f_{2}\left( x\right)$, where
$\Delta^{f}$ and $\Delta^{b}$ denote the forward and backward difference
respectively. Thus, the first order variation equation for this function
$f\left( x\right)$ gives many solutions, which include both true and false
ones. A proper formalism of the discrete calculus of variations is proposed to
single out the true one by examination of the second order variations, and it
is capable of yielding the exact form of the distributions for Boltzmann, Bose
and Fermi systems without requiring the number of particles to be infinitely
large. The advantage and peculiarity of our formalism are explicitly
illustrated by the derivation of the Bose distribution.
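To make the ambiguity concrete, here is a minimal sketch of our own (the integer grid and the choice of $f_1$, $f_2$ are arbitrary, not from the paper):

```python
# Minimal sketch (ours): the forward and backward first differences on an
# integer grid, and the "mixed" differences of a sum f = f1 + f2.
def d_fwd(f, x):   # Delta^f f(x) = f(x + 1) - f(x)
    return f(x + 1) - f(x)

def d_bwd(f, x):   # Delta^b f(x) = f(x) - f(x - 1)
    return f(x) - f(x - 1)

f1 = lambda x: x ** 2
f2 = lambda x: 3 * x

x = 5
print(d_fwd(f1, x) + d_bwd(f2, x))  # Delta^f f1 + Delta^b f2 = 11 + 3 = 14
print(d_bwd(f1, x) + d_fwd(f2, x))  # Delta^b f1 + Delta^f f2 = 9 + 3 = 12
# The two mixed first differences of f = f1 + f2 disagree, which is the
# equivocation that the second-order examination is designed to resolve.
```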
|
The polynomial $f_{2n}(x)=1+x+\cdots+x^{2n}$ and its minimizer on the real
line $x_{2n}=\operatorname{arg\,inf} f_{2n}(x)$ for $n\in\Bbb N$ are studied.
Results show that $x_{2n}$ exists, is unique, corresponds to $\partial_x
f_{2n}(x)=0$, and resides on the interval $[-1,-1/2]$ for all $n$. It is
further shown that $\inf f_{2n}(x)=(1+2n)/(1+2n(1-x_{2n}))$ and $\inf
f_{2n}(x)\in[1/2,3/4]$ for all $n$ with an exact solution for $x_{2n}$ given in
the form of a finite sum of hypergeometric functions of unity argument.
Perturbation theory is applied to generate rapidly converging and
asymptotically exact approximations to $x_{2n}$. Numerical studies are carried
out to show how many terms of the perturbation expansion for $x_{2n}$ are
needed to obtain suitably accurate approximations to the exact value.
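The stated properties are easy to check numerically; the following sketch (ours, using SciPy's bounded scalar minimizer) verifies the interval bounds and the closed form for $\inf f_{2n}(x)$ at small $n$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Numerical sanity check (ours) of the stated properties of
# f_{2n}(x) = 1 + x + ... + x^{2n}: the minimizer x_{2n} lies in
# [-1, -1/2], inf f_{2n} lies in [1/2, 3/4], and
# inf f_{2n} = (1 + 2n) / (1 + 2n(1 - x_{2n})).
for n in range(1, 8):
    f = lambda x: sum(x ** k for k in range(2 * n + 1))
    res = minimize_scalar(f, bounds=(-1.0, -0.5), method='bounded')
    x2n, fmin = res.x, res.fun
    closed = (1 + 2 * n) / (1 + 2 * n * (1 - x2n))
    assert 0.5 - 1e-6 <= fmin <= 0.75 + 1e-6   # tolerance for n=1 boundary case
    print(f"n={n}: x_2n={x2n:.6f}  inf f={fmin:.6f}  closed form={closed:.6f}")
```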
|
Transition metal oxides (TMOs) like MoOx are increasingly explored as hole
transport layers for perovskite-based solar cells. Due to their large work
function, the hole collection mechanism of such solar cells is fundamentally
different from that of other materials like PEDOT:PSS, and the associated
device optimizations are not well elucidated. In addition, the prospects of
such architectures against the challenges posed by ion migration are yet to be
explored, which we critically examine in this contribution through detailed
numerical simulations. Curiously, we find that, for similar ion densities and
interface recombination velocities, ion migration is more detrimental for
perovskite solar cells with TMO contact layers, with much lower achievable
efficiency limits (21%). The insights shared by this work should be of broad
interest to the community in terms of long-term stability and efficiency
degradation, and hence could help critically evaluate the promises and
prospects of TMOs as hole contact layers for perovskite solar cells.
|
In this paper, we propose a novel methodology for better performing
uncertainty and sensitivity analysis of complex mathematical models under
constraints and/or with dependent input variables, including correlated
variables. Our approach allows for assessing the single, overall and
interaction effects of any subset of input variables, accounting for the
dependency structures induced by the constraints. Using the variance as an
importance measure, among others, we define the main-effect and total
sensitivity indices of input(s), with the former index less than the latter.
We also derive consistent estimators and asymptotic distributions of such
indices, distinguishing the cases of multivariate and/or functional outputs,
including spatio-temporal models and dynamic models.
|
We exhibit very small eigenvalues of the quadratic form associated to the
Weil explicit formulas restricted to test functions whose support is within a
fixed interval with upper bound S. We show both numerically and conceptually
that the associated eigenvectors are obtained by a simple arithmetic operation
of finite sum using prolate spheroidal wave functions associated to the scale
S. Then we use these functions to condition the canonical spectral triple of
the circle of length L=2 Log(S) in such a way that they belong to the kernel of
the perturbed Dirac operator. We give numerical evidence that, when one varies
L, the low lying spectrum of the perturbed spectral triple resembles the low
lying zeros of the Riemann zeta function. We justify conceptually this result
and show that, for each eigenvalue, the coincidence is perfect for the special
values of the length L of the circle for which the two natural ways of
realizing the perturbation give the same eigenvalue. This fact is tested
numerically by reproducing the first thirty-one zeros of the Riemann zeta
function from our spectral side and by estimating the probability of having
obtained this agreement at random, a very small number whose first fifty
decimal places are all zero. The theoretical concept which emerges is that of
zeta cycle and our main result establishes its relation with the critical zeros
of the Riemann zeta function and with the spectral realization of these zeros
obtained by the first author.
|
In this paper we solve two problems of Esperet, Kang and Thomasse as well as
Li concerning (i) induced bipartite subgraphs in triangle-free graphs and (ii)
van der Waerden numbers. In each case, random greedy algorithms allow us to go
beyond the Lovasz Local Lemma or the alteration method used in previous work,
illustrating the power of the algorithmic approach to the probabilistic method.
|
Recently, the convolutional weighted power minimization distortionless
response (WPD) beamformer was proposed, which unifies multi-channel weighted
prediction error dereverberation and minimum power distortionless response
beamforming. To optimize the convolutional filter, the desired speech component
is modeled with a time-varying Gaussian model, which promotes the sparsity of
the desired speech component in the short-time Fourier transform domain
compared to the noisy microphone signals. In this paper we generalize the
convolutional WPD beamformer by using an lp-norm cost function, introducing an
adjustable shape parameter that enables control over the sparsity of the desired
speech component. Experiments based on the REVERB challenge dataset show that
the proposed method outperforms the conventional convolutional WPD beamformer
in terms of objective speech quality metrics.
|
The expected number of secondary infections arising from each index case,
known as the reproduction number or $R$ number, is a vital summary statistic
for understanding and managing epidemic diseases. There are many methods for
estimating $R$; however, few of these explicitly model heterogeneous disease
reproduction, which gives rise to superspreading within the population. Here we
propose a parsimonious discrete-time branching process model for epidemic
curves that incorporates heterogeneous individual reproduction numbers. Our
Bayesian approach to inference illustrates that this heterogeneity results in
less certainty on estimates of the time-varying cohort reproduction number
$R_t$. Leave-future-out cross-validation evaluates the predictive performance
of the proposed model, allowing us to assess epidemic curves for evidence of
superspreading. We apply these methods to a COVID-19 epidemic curve for the
Republic of Ireland and find some support for heterogeneous disease
reproduction. We conclude that the 10\% most infectious index cases account for
approximately 40-80\% of the expected secondary infections. Our analysis
highlights the difficulties in identifying heterogeneous disease reproduction
from epidemic curves and that heterogeneity is a vital consideration when
estimating $R_t$.
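For intuition about the final estimate, the following is a standard gamma-mixture illustration (ours, in the spirit of Lloyd-Smith-type models, not the paper's branching-process inference; the mean $R$ and dispersion values are assumptions):

```python
import numpy as np

# Individual reproduction numbers nu ~ Gamma(shape=k, mean=R); smaller k
# means more heterogeneity, i.e. a larger share of expected transmission
# concentrated in the most infectious index cases.
rng = np.random.default_rng(0)
R = 1.5                      # assumed mean reproduction number
for k in [0.1, 0.5, 1.0]:    # assumed dispersion values
    nu = np.sort(rng.gamma(shape=k, scale=R / k, size=1_000_000))
    share = nu[int(0.9 * nu.size):].sum() / nu.sum()
    print(f"k={k}: top 10% of index cases carry {100 * share:.0f}% "
          "of expected secondary infections")
```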
|
Fast and high-order accurate algorithms for three dimensional elastic
scattering are of great importance when modeling physical phenomena in
mechanics, seismic imaging, and many other fields of applied science. In this
paper, we develop a novel boundary integral formulation for the three
dimensional elastic scattering based on the Helmholtz decomposition of elastic
fields, which converts the Navier equation into a coupled system consisting of
Helmholtz and Maxwell equations. An FFT-accelerated separation of variables
solver is proposed to efficiently invert boundary integral formulations of the
coupled system for elastic scattering from axisymmetric rigid bodies. In
particular, by combining the regularization properties of the singular boundary
integral operators and the FFT-based fast evaluation of modal Green's
functions, our numerical solver can rapidly solve the resulting integral
equations with a high-order accuracy. Several numerical examples are provided
to demonstrate the efficiency and accuracy of the proposed algorithm, including
geometries with corners at different wave numbers.
|
We study generalized spin waves in graphene under a strong magnetic field
when the Landau-level filling factor is $\nu=\pm 1$. In this case, the ground
state is a particular SU(4) quantum Hall ferromagnet, in which not only the
physical spin is fully polarized but also the pseudo-spin associated with the
valley degree of freedom. The nature of the ground state and the spin-valley
polarization depend on explicit symmetry breaking terms that are also reflected
in the generalized spin-wave spectrum. In addition to pure spin waves, one
encounters valley-pseudo-spin waves as well as more exotic entanglement waves
that have a mixed spin-valley character. Most saliently, the SU(4)
symmetry-breaking terms not only yield gaps in the spectra but, under certain
circumstances, namely in the case of residual ground-state symmetries, render
the originally quadratic (in the wave vector) spin-wave dispersion linear.
|
In this paper we provide two new semantics for proofs in the constructive
modal logics CK and CD. The first semantics is given by extending the syntax of
combinatorial proofs for propositional intuitionistic logic, in which proofs
are factorised in a linear fragment (arena net) and a parallel
weakening-contraction fragment (skew fibration). In particular we provide an
encoding of modal formulas by means of directed graphs (modal arenas), and an
encoding of linear proofs as modal arenas equipped with vertex partitions
satisfying topological criteria. The second semantics is given by means of
winning innocent strategies of a two-player game over modal arenas. This is
given by extending the Heijltjes-Hughes-Stra{\ss}burger correspondence between
intuitionistic combinatorial proofs and winning innocent strategies in a
Hyland-Ong arena. Using our first result, we provide a characterisation of
winning strategies for games on a modal arena corresponding to proofs with
modalities.
|
Floquet engineering, modulating quantum systems in a time-periodic way, lies
at the heart of realizing novel topological dynamical states. Thanks to
Floquet engineering, various new realms of experimentally simulating
topological materials have emerged. Conventional Floquet engineering, however,
only applies to time-periodic, non-dissipative Hermitian systems, while for
quantum systems in reality, non-Hermitian processes with dissipation usually
occur. So far, it remains unclear how to characterize topological phases of
periodically driven non-Hermitian systems via the frequency-space Floquet
Hamiltonian. Here, we propose a non-Floquet theory to identify different
Floquet topological phases of time-periodic non-Hermitian systems via the
generation of Floquet band gaps in frequency space. In non-Floquet theory, the
eigenstates of the non-Hermitian Floquet Hamiltonian are temporally deformed
to exhibit Wannier-Stark localization. Remarkably, we show that different
choices of the starting point of the driving period can result in different
localization behavior, an effect that can conversely be utilized to design
detectors of quantum phases in dissipative oscillating fields. Our protocols
establish a fundamental rule for describing topological features in
non-Hermitian dynamical systems and can find application in constructing new
types of Floquet topological materials.
|
We study the convective and absolute forms of azimuthal magnetorotational
instability (AMRI) in a Taylor-Couette (TC) flow with an imposed azimuthal
magnetic field. We show that the domain of the convective AMRI is wider than
that of the absolute AMRI. Actually, it is the absolute instability which is
the most relevant and important for magnetic TC flow experiments. The absolute
AMRI, unlike the convective one, stays in the device, displaying a sustained
growth that can be experimentally detected. We also study the global AMRI in a
TC flow of finite height using DNS and find that its emerging butterfly-type
structure -- a spatio-temporal variation in the form of upward and downward
traveling waves -- is in very good agreement with the linear stability
analysis, which indicates the presence of two dominant absolute AMRI modes in
the flow giving rise to this global butterfly pattern.
|
Whilst ``slingshot'' prominences have been observed on M-dwarfs, most if not
all theoretical studies have focused on solar-like stars. We present an
investigation into stellar prominences around rapidly rotating young M-dwarfs.
We have extrapolated the magnetic field in the corona from Zeeman-Doppler maps
and determined the sites of mechanical stability where prominences may form. We
analyse the prominence mass that could be supported and the latitude range over
which this material is distributed. We find that for these maps, much of this
prominence mass may be invisible to observation - typically <1\% transits the
stellar disc. On the rapidly-rotating M-dwarf V374 Peg (P$_{\rm rot}$ = 0.45
days) where prominences have been observed, we find the visible prominence mass
to be around only 10\% of the total mass supported. The mass loss rate per unit
area for prominences scales with the X-ray surface flux as $\dot{M}/A \propto$
$F_X^{1.32}$ which is very close to the observationally-derived value for
stellar winds. This suggests that prominence ejection may contribute
significantly to the overall stellar wind loss and spin down. A planet in an
equatorial orbit in the habitable zone of these stars may experience
intermittent enhancements of the stellar wind due to prominence ejections. On
some stars, this may occur throughout 20\% of the orbit.
|
Let $g$ be a metric on the hemisphere $S^{n}_{+}$ ($n\geq 3$) which is
conformal to the standard round metric $g_0$. Suppose its $Q$-curvature $Q_g$
is bounded below by $Q_0$; we show that $g$ is isometric to $g_0$, provided
that the induced metric on $\partial S^{n}_{+}$ coincides with $g_0$ up to a
certain order.
|
The discovery of superconductivity in copper oxide compounds has attracted
considerable attention over the past three decades. The high transition
temperature in these compounds, exhibiting proximity to an antiferromagnetic
order in their phase diagrams, remains one of the main areas of research. The
present study attempts to introduce Fe, Co and Ni magnetic impurities into the
superconducting Y-123 with the aim of exploring the transition temperature
behavior. Solid-state synthesis is exploited to prepare fully oxygenated
Y$_{1-x}$M$_x$Ba$_2$Cu$_3$O$_7$ (M = Co, Fe, Ni) samples with low levels of
doping (0 < x < 0.03). Systematic measurements are then employed to assess the
synthesized samples using AC magnetic susceptibility, electrical resistivity
and X-ray diffraction. The measurements revealed an increase in $T_c$ as a
result of magnetic substitution for Y. However, the study of non-magnetic
doping in the fully oxygenated Y$_{1-x}$M'$_x$Ba$_2$Cu$_3$O$_7$ (M' = Ca, Sr)
samples showed a decrease in $T_c$. Quantitative XRD analysis further
suggested that the internal pressure could have minor effects on the increase
in $T_c$. The normal-state resistivity vs temperature showed a linear profile,
confirming that the samples are at an optimal doping of the carrier
concentration.
|
We give a new sufficient condition which allows one to test the primality of
Fermat numbers. This characterization uses only values at most equal to the
tested Fermat number. The robustness of this result is due to a strict use of
elementary arithmetic tools, and it may open the gate to the revolutionary
statement that all Fermat numbers are decomposable.
|
Recovering 3D phase features of complex, multiple-scattering biological
samples traditionally sacrifices computational efficiency and processing time
for physical model accuracy and reconstruction quality. This trade-off hinders
the rapid analysis of living, dynamic biological samples that are often of
greatest interest to biological research. Here, we overcome this bottleneck by
combining annular intensity diffraction tomography (aIDT) with an
approximant-guided deep learning framework. Using a novel physics model
simulator-based learning strategy trained entirely on natural image datasets,
we show our network can robustly reconstruct complex 3D biological samples of
arbitrary size and structure. This approach highlights that large-scale
multiple-scattering models can be leveraged in place of acquiring experimental
datasets for achieving highly generalizable deep learning models. We devise a
new model-based data normalization pre-processing procedure for homogenizing
the sample contrast and achieving uniform prediction quality regardless of
scattering strength. To achieve highly efficient training and prediction, we
implement a lightweight 2D network structure that utilizes a multi-channel
input for encoding the axial information. We demonstrate this framework's
capabilities on experimental measurements of epithelial buccal cells and
Caenorhabditis elegans worms. We highlight the robustness of this approach by
evaluating dynamic samples on a living worm video, and we emphasize our
approach's generalizability by recovering algae samples evaluated with
different experimental setups. To assess the prediction quality, we develop a
novel quantitative evaluation metric and show that our predictions are
consistent with our experimental measurements and multiple-scattering physics.
|
We propose a deep learning approach to predicting audio event onsets in
electroencephalogram (EEG) recorded from users as they listen to music. We use
a publicly available dataset containing ten contemporary songs and concurrently
recorded EEG. We generate a sequence of onset labels for the songs in our
dataset and train neural networks (a fully connected network (FCN) and a
recurrent neural network (RNN)) to parse one-second windows of input EEG and
predict one-second windows of onsets in the audio. We compare our RNN to both
the standard spectral-flux based novelty function and the FCN. We find that
our RNN produces results that reflect its ability to generalize better than
the other methods.
Since there are no pre-existing works on this topic, the numbers presented in
this paper may serve as useful benchmarks for future approaches to this
research problem.
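For reference, the spectral-flux novelty baseline mentioned above has a standard form; the following is our own minimal sketch, with frame and hop sizes chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

# Half-wave rectified spectral flux: sum of positive changes in the
# magnitude spectrum between consecutive frames.
def spectral_flux(audio, frame=1024, hop=512):
    win = np.hanning(frame)
    n_frames = 1 + (len(audio) - frame) // hop
    mags = np.array([np.abs(np.fft.rfft(win * audio[i*hop:i*hop+frame]))
                     for i in range(n_frames)])
    diff = np.diff(mags, axis=0)
    flux = np.maximum(diff, 0.0).sum(axis=1)  # half-wave rectification
    return flux / (flux.max() + 1e-12)        # normalized novelty curve

# Peaks of the novelty curve are candidate audio onsets.
audio = np.random.default_rng(0).standard_normal(22050)  # placeholder signal
print(spectral_flux(audio).shape)
```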
|
Methods for extracting the $\psi(3770)\to e^+e^-$ decay width from the data
on the reaction cross section $e^+e^-\to D\bar D$ are discussed. Attention is
drawn to the absence of the generally accepted method for determining
$\Gamma_{\psi(3770)e^+e^-}$ in the presence of interference between the
contributions of the $\psi(3770)$ resonance and background. It is shown that
the model for the experimentally measured $D$ meson form factor, which
satisfies the requirement of the Watson theorem and takes into account the
contribution of the complex of the mixed $\psi(3770)$ and $\psi(2S)$
resonances, allows one to uniquely determine the value of $\Gamma_{\psi(3770)e^+e^-}$
by fitting. The $\Gamma_{\psi(3770)e^+ e^-}$ values found from the data
processing are compared with the estimates in the potential models.
|
A novel reformulation of D=4, N=1 supergravity action in the language of
integral forms is given. We illustrate the construction of the Berezinian in
the supergeometric framework, providing a useful dictionary between mathematics
and physics. We present a unified framework for Berezin-Lebesgue integrals for
functions and for integral forms. As an application, we discuss Volkov-Akulov
theory and its coupling to supergravity from this new perspective.
|
This article establishes the unconditional security of a semi-quantum key
distribution (SQKD) protocol based on 3-dimensional quantum states. By
deriving a lower bound for the key rate, in the asymptotic scenario, as a
function of the quantum channel's noise, we find that this protocol has an
improved secret key rate with much higher noise tolerance compared to the
previous 2-dimensional SQKD protocol. Our results highlight that, as in fully
quantum key distribution protocols, increasing the dimension of the system can
increase the noise tolerance in semi-quantum key distribution as well.
|
Evolving out of a gender-neutral framing of an involuntary celibate identity,
the concept of `incels' has come to refer to an online community of men who
bear antipathy towards themselves, women, and society-at-large for their
perceived inability to find and maintain sexual relationships. By exploring
incel language use on Reddit, a global online message board, we contextualize
the incel community's online expressions of misogyny and real-world acts of
violence perpetrated against women. After assembling around three million
comments from incel-themed Reddit channels, we analyze the temporal dynamics of
a data-driven rank ordering of the glossary of phrases belonging to an emergent
incel lexicon. Our study reveals the generation and normalization of an
extensive coded misogynist vocabulary in service of the group's identity.
|
We present a new higher-order accurate finite difference explicit jump
Immersed Interface Method (HEJIIM) for solving two-dimensional elliptic
problems with singular source and discontinuous coefficients in the irregular
region on a compact Cartesian mesh. We propose a new strategy for discretizing
the solution at irregular points on a nine-point compact stencil such that the
higher-order compactness is maintained throughout the whole computational
domain. The scheme is employed to solve four problems embedded with circular
and star-shaped interfaces in a rectangular region having analytical solutions
and varied discontinuities across the interface in source and the coefficient
terms. We also simulate a plethora of fluid flow problems past bluff bodies in
complex flow situations, which are governed by the Navier-Stokes equations;
they include problems involving multiple bodies immersed in the flow as well.
In the process, we show the superiority of the proposed strategy over the EJIIM
and other existing IIM methods by establishing the rate of convergence and grid
independence of the computed solutions. In all cases, our computed results are
extremely close to the available numerical and experimental results.
|
Pressure and temperature profiles are key data for safe production in oil and
gas wells. In this paper, a bucket-brigade-inspired sensor network protocol is
proposed which can be used to extract a sensed-data profile from the nanoscale
up to kilometer-long structures. The PHY/MAC layers are discussed. This
protocol is best suited for low-data-rate exchanges in small fixed-size
packets, named buckets, transmitted as time-domain bursts among high-precision
smart sensors deployed as a queue. There is only one coordinator, which is not
directly accessible by most of the sensor nodes. The coordinator is
responsible for collecting the measurement profile and sending it to a
supervisory node. There is no need for a complex routing mechanism, as the
network topology is determined during deployment. There are many applications
which require sensors to be deployed as a long queue where sensed data can be
transmitted at low data rates. Examples of such monitoring applications are:
neural connected artificial skin, oil/gas/water pipeline integrity, power
transmission line tower integrity, (rail)road/highway lighting and integrity,
individualized monitoring in vineyards, re-foresting or plantations,
underwater telecommunications cable integrity, oil/gas riser integrity, and
oil/gas well temperature and pressure profiles, among others. For robustness
and reduced electromagnetic interference, a wired network is preferred;
besides, in some harsh environments wireless communication is not feasible. To
reduce wiring, communications can be carried out over the same cable used to
supply electrical power.
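The bucket-brigade idea can be sketched in a few lines (our illustration only; the paper specifies the actual PHY/MAC design, and the class names and sensor model here are invented):

```python
from dataclasses import dataclass, field

# A fixed-size bucket travels down a queue of nodes; each node appends its
# reading and forwards the bucket towards the coordinator.
@dataclass
class Bucket:
    readings: list = field(default_factory=list)

class Node:
    def __init__(self, node_id, sensor):
        self.node_id, self.sensor = node_id, sensor

    def relay(self, bucket):
        bucket.readings.append((self.node_id, self.sensor()))
        return bucket  # forwarded to the next node in the queue

# coordinator -> node_0 -> node_1 -> ... -> node_4 -> coordinator
nodes = [Node(i, sensor=lambda i=i: 20.0 + 0.5 * i) for i in range(5)]
bucket = Bucket()
for node in nodes:
    bucket = node.relay(bucket)
print(bucket.readings)  # the profile collected along the queue
```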
|
The task of image segmentation is inherently noisy due to ambiguities
regarding the exact location of boundaries between anatomical structures. We
argue that this information can be extracted from the expert annotations at no
extra cost, and when integrated into state-of-the-art neural networks, it can
lead to improved calibration between soft probabilistic predictions and the
underlying uncertainty. We build upon label smoothing (LS), where a network is
trained on 'blurred' versions of the ground truth labels, which has been shown
to be effective for calibrating output predictions. However, LS does not take
the local structure into account and results in overly smoothed predictions
with low confidence even for non-ambiguous regions. Here, we propose Spatially
Varying Label Smoothing (SVLS), a soft labeling technique that captures the
structural uncertainty in semantic segmentation. SVLS also naturally lends
itself to incorporate inter-rater uncertainty when multiple labelmaps are
available. The proposed approach is extensively validated on four clinical
segmentation tasks with different imaging modalities, number of classes and
single and multi-rater expert annotations. The results demonstrate that SVLS,
despite its simplicity, obtains superior boundary prediction with improved
uncertainty and model calibration.
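One way to realize the idea, as a rough sketch of ours rather than the paper's exact design (the 3x3 kernel and per-pixel renormalization are our assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

# Soften each one-hot label map with a small spatial kernel so that only
# pixels near class boundaries receive uncertain (smoothed) targets.
def svls_targets(label_map, n_classes):
    k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])  # Gaussian-like
    k /= k.sum()
    soft = np.stack([convolve((label_map == c).astype(float), k, mode='nearest')
                     for c in range(n_classes)])
    return soft / soft.sum(axis=0, keepdims=True)  # per-pixel distribution

labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1                      # vertical boundary between two classes
targets = svls_targets(labels, n_classes=2)
# Interior pixels keep confident targets; only the boundary columns get
# smoothed, unlike uniform label smoothing which softens every pixel.
print(targets[:, 0, 3:6].round(2))
```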
|
Variational Quantum Algorithms (VQAs) have received considerable attention
due to their potential for achieving near-term quantum advantage. However, more
work is needed to understand their scalability. One known scaling result for
VQAs is barren plateaus, where certain circumstances lead to exponentially
vanishing gradients. It is common folklore that problem-inspired ansatzes avoid
barren plateaus, but in fact, very little is known about their gradient
scaling. In this work we employ tools from quantum optimal control to develop a
framework that can diagnose the presence or absence of barren plateaus for
problem-inspired ansatzes. Such ansatzes include the Quantum Alternating
Operator Ansatz (QAOA), the Hamiltonian Variational Ansatz (HVA), and others.
With our framework, we prove that avoiding barren plateaus for these ansatzes
is not always guaranteed. Specifically, we show that the gradient scaling of
the VQA depends on the controllability of the system, and hence can be
diagnosed through the dynamical Lie algebra $\mathfrak{g}$ obtained from the
generators of the ansatz. We analyze the existence of barren plateaus in QAOA
and HVA ansatzes, and we highlight the role of the input state, as different
initial states can lead to the presence or absence of barren plateaus. Taken
together, our results provide a framework for trainability-aware ansatz design
strategies that do not come at the cost of extra quantum resources. Moreover,
we prove no-go results for obtaining ground states with variational ansatzes
for controllable systems such as spin glasses. We finally provide evidence that
barren plateaus can be linked to the dimension of $\mathfrak{g}$.
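The diagnostic quantity, the dynamical Lie algebra $\mathfrak{g}$, is obtained by closing the generators under commutators; a small single-qubit sketch of ours:

```python
import numpy as np
from itertools import combinations

# Compute dim(g) for generators {iX, iZ} by repeatedly adding commutators
# until no new (linearly independent) direction appears.
X = np.array([[0, 1], [1, 0]], complex)
Z = np.array([[1, 0], [0, -1]], complex)

def lie_closure_dim(gens, tol=1e-10):
    basis = []                         # kept orthogonal by construction
    def add(M):
        for B in basis:                # Gram-Schmidt (Hilbert-Schmidt product)
            M = M - np.trace(B.conj().T @ M) / np.trace(B.conj().T @ B) * B
        if np.linalg.norm(M) > tol:
            basis.append(M)
            return True
        return False
    for G in gens:
        add(G)
    grew = True
    while grew:
        grew = False
        for A, B in combinations(list(basis), 2):
            if add(A @ B - B @ A):
                grew = True
    return len(basis)

print(lie_closure_dim([1j * X, 1j * Z]))  # closes on su(2): dimension 3
```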
|
We discuss the usefulness and theoretical consistency of different entropy
variables used in the literature to describe isocurvature perturbations in
multifield inflationary models with a generic curved field space. We clarify
which is the proper entropy variable to be used to match the evolution of
isocurvature modes during inflation to the one after the reheating epoch in
order to compare with observational constraints. In particular, we find that
commonly used variables, such as the relative entropy perturbation or the one
associated with the decomposition into tangent and normal perturbations with
respect to the inflationary trajectory, even if more useful for performing
numerical studies, can lead to results which are wrong by several orders of
magnitude, or even to apparent destabilisation effects which are unphysical for
cases with light kinetically coupled spectator fields.
|
This paper proposes a controller for stable grasping of unknown-shaped
objects by two robotic fingers with tactile fingertips. The grasp is stabilised
by rolling the fingertips on the contact surface and applying a desired
grasping force to reach an equilibrium state. The validation is both in
simulation and on a fully-actuated robot hand (the Shadow Modular Grasper)
fitted with custom-built optical tactile sensors (based on the BRL TacTip). The
controller requires the orientations of the contact surfaces, which are
estimated by regressing a deep convolutional neural network over the tactile
images. Overall, the grasp system is demonstrated to achieve stable equilibrium
poses on various objects ranging in shape and softness, with the system being
robust to perturbations and measurement errors. This approach also has promise
to extend beyond grasping to stable in-hand object manipulation with multiple
fingers.
|
Surveillance cameras are widely applied for indoor occupancy measurement and
human movement perception, which benefit building energy management and social
security. To address the challenges of the limited view angle of a single
camera as well as the lack of inter-camera collaboration, this study presents
a non-overlapping multi-camera system to enlarge the surveillance area and
aims to retrieve the same person appearing in different camera views. The
system is deployed in an office building and four days of video are collected.
By training a deep convolutional neural network, the proposed system first
extracts the appearance feature embeddings of each personal image, detected
from different cameras, for similarity comparison. Then, a stochastic
inter-camera transition matrix is associated with the appearance features for
further improving the person re-identification ranking results. Finally, a
noise-suppression explanation is given for analyzing the matching improvements.
This paper expands the scope of indoor movement perception based on
non-overlapping multiple cameras and improves the accuracy of pedestrian
re-identification without introducing additional types of sensors.
|
This paper proposes semi-discrete and fully discrete hybridizable
discontinuous Galerkin (HDG) methods for the Burgers' equation in two and three
dimensions. In the spatial discretization, we use piecewise polynomials of
degrees $k \ (k \geq 1)$, $k-1$ and $l \ (l=k-1; k)$ to approximate the scalar
function, the flux variable and the interface trace of the scalar function,
respectively. In the fully discrete method, we apply a backward Euler
scheme for the temporal discretization. Optimal a priori error estimates are
derived. Numerical experiments are presented to support the theoretical
results.
|
The Kepler mission has provided a wealth of data, revealing new insights in
time-domain astronomy. However, Kepler's single band-pass has limited studies
to a single wavelength. In this work we build a data-driven, pixel-level model
for the Pixel Response Function (PRF) of Kepler targets, modeling the image
data from the spacecraft. Our model is sufficiently flexible to capture known
detector effects, such as non-linearity, intra-pixel sensitivity variations,
and focus change. In theory, the shape of the Kepler PRF should also be weakly
wavelength dependent, due to optical chromatic aberration and wavelength
dependent detector response functions. We are able to identify these predicted
shape changes to the PRF using the residuals between Kepler data and our model.
In this work, we show that these PRF changes correspond to wavelength
variability in Kepler targets using a small sample of eclipsing binaries. Using
our model, we demonstrate that pixel-level light curves of eclipsing binaries
show variable eclipse depths, ellipsoidal modulation and limb darkening. These
changes at the pixel level are consistent with multi-wavelength photometry. Our
work suggests each pixel in the Kepler data of a single target has a different
effective wavelength, ranging from $\approx$ 550-750 $nm$. In this proof of
concept, we demonstrate our model, and discuss possible use cases for the
wavelength dependent Pixel Response Function of Kepler. These use cases include
characterizing variable systems, and vetting exoplanet discoveries at the pixel
level. The chromatic PRF of Kepler is due to weak wavelength dependence in the
optical systems and detector of the telescope, and similar chromatic PRFs are
expected in other similar telescopes, notably the NASA TESS telescope.
|
Quantum mechanics is well known to accelerate statistical sampling processes
over classical techniques. In quantitative finance, statistical samplings arise
broadly in many use cases. Here we focus on a particular one of such use cases,
credit valuation adjustment (CVA), and identify opportunities and challenges
towards quantum advantage for practical instances. To improve the depth of
quantum circuits for solving such problems, we draw on various heuristics that
indicate the potential for significant improvement over well-known techniques
such as reversible logical circuit synthesis. In minimizing the resource
requirements for amplitude amplification while maximizing the speedup gained
from the quantum coherence of a noisy device, we adopt a recently developed
Bayesian variant of quantum amplitude estimation using engineered likelihood
functions (ELF). We perform numerical analyses to characterize the prospect of
quantum speedup in concrete CVA instances over classical Monte Carlo
simulations.
|
A variational formula for the asymptotic variance of general Markov processes
is obtained. As an application, we obtain an upper bound on the mean exit time
of reversible Markov processes, and some comparison theorems between
reversible and non-reversible diffusion processes.
|
We derive new variants of the quantitative Borel--Cantelli lemma and apply
them to analysis of statistical properties for some dynamical systems. We
consider intermittent maps of $(0,1]$ which have absolutely continuous
invariant probability measures. In particular, we prove that every sequence of
intervals with left endpoints uniformly separated from zero is a strong
Borel--Cantelli sequence with respect to such a map and its invariant measure.
|
The debate surrounding fast magnetic energy dissipation by magnetic
reconnection has remained a fundamental topic in the plasma universe, not only
in the Earth's magnetosphere but also in astrophysical objects such as pulsar
magnetospheres and magnetars, for more than half a century. Recently,
nonthermal particle acceleration and plasma heating during reconnection have
been extensively studied, and it has been argued that rapid energy dissipation
can occur for a collisionless "thin" current sheet, the thickness of which is
of the order of the particle gyro-radius. However, it is an intriguing enigma
as to how the fast energy dissipation can occur for a "thick" current sheet
with thickness larger than the particle gyro-radius. Here we demonstrate, using
a high-resolution particle-in-cell simulation for a pair plasma, that an
explosive reconnection can emerge with the enhancement of the inertia
resistivity due to the magnetization of the meandering particles by the
reconnecting magnetic field and the shrinkage of the current sheet. In
addition, regardless of the initial thickness of the current sheet, the time
scale of the nonlinear explosive reconnection is tens of Alfv\'{e}n transit
times.
|
Let $E$ be an elliptic curve over $\mathbb{Q}$ with discriminant $\Delta_E$.
For primes $p$ of good reduction, let $N_p$ be the number of points modulo $p$
and write $N_p=p+1-a_p$. In 1965, Birch and Swinnerton-Dyer formulated a
conjecture which implies $$\lim_{x\to\infty}\frac{1}{\log
x}\sum_{\substack{p\leq x\\ p\nmid \Delta_{E}}}\frac{a_p\log
p}{p}=-r+\frac{1}{2},$$ where $r$ is the order of the zero of the $L$-function
$L_{E}(s)$ of $E$ at $s=1$, which is predicted to be the Mordell-Weil rank of
$E(\mathbb{Q})$. We show that if the above limit exists, then the limit equals
$-r+1/2$. We also relate this to Nagao's conjecture.
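To make the statistic concrete, here is an illustrative computation of ours for the rank-1 curve 37a: $y^2 + y = x^3 - x$; convergence of the partial sums to $-r+1/2$ is conjectural and extremely slow, so the output is only indicative:

```python
from sympy import primerange
from math import log

# Completing the square (p odd) reduces point counting to Legendre
# symbols, computed here via Euler's criterion.
def chi(a, p):                     # Legendre symbol (a/p), p an odd prime
    t = pow(a % p, (p - 1) // 2, p)
    return t - p if t == p - 1 else t

def a_p(p):                        # trace of Frobenius at an odd good prime
    c = pow(4, -1, p)              # 1/4 mod p
    return -sum(chi(x**3 - x + c, p) for x in range(p))

X = 10**4                          # p = 2 omitted for simplicity
s = sum(a_p(p) * log(p) / p for p in primerange(3, X) if p != 37)
print(s / log(X))  # conjecturally tends to -r + 1/2 = -1/2 here (r = 1)
```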
|
In the Internet of Things, learning is one of the most prominent tasks. In this
paper, we consider an Internet of Things scenario where federated learning is
used with simultaneous transmission of model data and wireless power. We
investigate the trade-off between the number of communication rounds and
communication round time while harvesting energy to compensate for the energy
expenditure. We formulate and solve an optimization problem by considering the
number of local iterations on devices, the time to transmit-receive the model
updates, and to harvest sufficient energy. Numerical results indicate that
maximum ratio transmission and zero-forcing beamforming for the optimization of
the local iterations on devices substantially boost the test accuracy of the
learning task. Moreover, maximum ratio transmission instead of zero-forcing
provides the best test accuracy and communication round time trade-off for
various energy harvesting percentages. Thus, it is possible to learn a model
quickly with few communication rounds without depleting the battery.
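For reference, the two beamformers compared above have standard textbook forms; this sketch is ours and not the paper's system model (antenna counts and channel realizations are arbitrary):

```python
import numpy as np

# Downlink beamforming for M antennas and K single-antenna devices.
rng = np.random.default_rng(1)
M, K = 8, 4
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

# Maximum ratio transmission: match each device's channel (max received power).
W_mrt = H.conj().T / np.linalg.norm(H, axis=1)

# Zero-forcing: pseudo-inverse of H cancels inter-device interference.
W_zf = np.linalg.pinv(H)
W_zf = W_zf / np.linalg.norm(W_zf, axis=0)

for name, W in [("MRT", W_mrt), ("ZF", W_zf)]:
    G = np.abs(H @ W) ** 2          # G[i, j]: power of stream j at device i
    signal = np.diag(G)
    interf = G.sum(axis=1) - signal
    print(name, "mean SIR (dB):",
          10 * np.log10((signal / (interf + 1e-12)).mean()))
```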
|
Following the success in advancing natural language processing and
understanding, transformers are expected to bring revolutionary changes to
computer vision. This work provides the first comprehensive study of the
robustness of vision transformers (ViTs) against adversarial perturbations.
Tested on various white-box and transfer attack settings, we find that ViTs
possess better adversarial robustness when compared with convolutional neural
networks (CNNs). This observation also holds for certified robustness. We
summarize the following main observations contributing to the improved
robustness of ViTs:
1) Features learned by ViTs contain less low-level information and are more
generalizable, which contributes to superior robustness against adversarial
perturbations.
2) Introducing convolutional or tokens-to-token blocks for learning low-level
features in ViTs can improve classification accuracy but at the cost of
adversarial robustness.
3) Increasing the proportion of transformers in the model structure (when the
model consists of both transformer and CNN blocks) leads to better robustness.
But for a pure transformer model, simply increasing the size or adding layers
cannot guarantee a similar effect.
4) Pre-training on larger datasets does not significantly improve adversarial
robustness though it is critical for training ViTs.
5) Adversarial training is also applicable to ViT for training robust models.
Furthermore, feature visualization and frequency analysis are conducted for
explanation. The results show that ViTs are less sensitive to high-frequency
perturbations than CNNs and there is a high correlation between how well the
model learns low-level features and its robustness against different
frequency-based perturbations.
|
Hypothesis: Control of capillary flow through porous media has broad practical
implications. However, achieving accurate and reliable control of such
processes by tuning the pore size or by modification of interface wettability
remains challenging. Here we propose that the flow of liquid by capillary
penetration can be accurately adjusted by tuning the geometry of porous media,
and we develop a numerical method to achieve this. Methodologies: On the basis
of Darcy's law, a general framework is proposed to facilitate the control of
capillary flow in porous systems by tailoring the geometric shape of porous
structures. A numerical simulation approach based on the finite element method
is also employed to validate the theoretical prediction. Findings: A basic
capillary component with a tunable velocity gradient is designed according to
the proposed framework. By using the basic component, two functional capillary
elements, namely (i) a flow amplifier and (ii) a flow resistor, are
demonstrated. Then, multi-functional fluidic devices with controllable
capillary flow are realized by integrating the designed capillary elements.
All the theoretical designs are validated by numerical simulations. Finally,
it is shown that the proposed model can be extended to three-dimensional
designs of porous media.
|
Logical theories in the form of ontologies and similar artefacts in computing
and IT are used for structuring, annotating, and querying data, among others,
and therewith influence data analytics regarding what is fed into the
algorithms. Algorithmic bias is a well-known notion, but what does bias mean in
the context of ontologies that provide a structuring mechanism for an
algorithm's input? What are the sources of bias there and how would they
manifest themselves in ontologies? We examine and enumerate types of bias
relevant for ontologies, and whether they are explicit or implicit. These eight
types are illustrated with examples from extant production-level ontologies and
samples from the literature. We then assessed three concurrently developed
COVID-19 ontologies on bias and detected different subsets of types of bias in
each one, to a greater or lesser extent. This first characterisation aims to
contribute to a sensitisation to the ethical aspects of ontologies, primarily
regarding the representation of information and knowledge.
|
We show that anagram-free vertex colouring a $2\times n$ square grid requires
a number of colours that increases with $n$. This answers an open question in
Wilson's thesis and shows that even graphs of pathwidth $2$ do not have
anagram-free colourings with a bounded number of colours.
|
Despite their success, large pre-trained multilingual models have not
completely alleviated the need for labeled data, which is cumbersome to collect
for all target languages. Zero-shot cross-lingual transfer is emerging as a
practical solution: pre-trained models later fine-tuned on one transfer
language exhibit surprising performance when tested on many target languages.
English is the dominant source language for transfer, as reinforced by popular
zero-shot benchmarks. However, this default choice has not been systematically
vetted. In our study, we compare English against other transfer languages for
fine-tuning, on two pre-trained multilingual models (mBERT and mT5) and
multiple classification and question answering tasks. We find that other
high-resource languages such as German and Russian often transfer more
effectively, especially when the set of target languages is diverse or unknown
a priori. Unexpectedly, this can be true even when the training sets were
automatically translated from English. This finding can have immediate impact
on multilingual zero-shot systems, and should inform future benchmark designs.
|
Recently, chance-constrained stochastic electricity market designs have been
proposed to address the shortcomings of scenario-based stochastic market
designs. In particular, the use of chance-constrained market-clearing avoids
trading off in-expectation and per-scenario characteristics and yields unique
energy and reserves prices. However, current formulations rely on symmetric
control policies based on the aggregated system imbalance, which restricts
balancing reserve providers in their energy and reserve commitments. This paper
extends existing chance-constrained market-clearing formulations by leveraging
node-to-node and asymmetric balancing reserve policies and deriving the
resulting energy and reserve prices. The proposed node-to-node policy allows
for relating the remuneration of balancing reserve providers and payment of
uncertain resources using a marginal cost-based approach. Further, we introduce
asymmetric balancing reserve policies into the chance-constrained electricity
market design and show how this additional degree of freedom affects market
outcomes.
|
Public transport ridership around the world has been hit hard by the COVID-19
pandemic. Travellers are likely to adapt their behaviour to avoid the risk of
transmission and these changes may even be sustained after the pandemic. To
evaluate travellers' behaviour in public transport networks during these times
and assess how they will respond to future changes in the pandemic, we conduct
a stated choice experiment with train travellers in the Netherlands. We
specifically assess behaviour related to three criteria affecting the risk of
COVID-19 transmission: (i) crowding, (ii) exposure duration, and (iii)
prevalent infection rate.
Observed choices are analysed using a latent class choice model which reveals
two, nearly equally sized traveller segments: 'COVID Conscious' and 'Infection
Indifferent'. The former has a significantly higher valuation of crowding,
accepting, on average, 8.75 minutes of extra waiting time to reduce on-board
crowding by one person. Moreover, they demonstrate a strong desire to sit
without anybody in
their neighbouring seat and are quite sensitive to changes in the prevalent
infection rate. By contrast, Infection Indifferent travellers' value of
crowding (1.04 waiting time minutes/person) is only slightly higher than
pre-pandemic estimates and they are relatively unaffected by infection rates.
We find that older and female travellers are more likely to be COVID Conscious
while those reporting to use the trains more frequently during the pandemic
tend to be Infection Indifferent. Further analysis also reveals differences
between the two segments in attitudes towards the pandemic and self-reported
rule-following behaviour. The behavioural insights from this study will not
only contribute to better demand forecasting for service planning but will also
inform public transport policy decisions aimed at curbing the shift to private
modes.
|
There are only a few very-high-energy sources in our Galaxy which might
accelerate particles up to the knee of the cosmic-ray spectrum. To understand
the mechanisms of particle acceleration in these PeVatron candidates,
\textit{Fermi}-LAT and H.E.S.S. observations are essential to characterize
their $\gamma$-ray emission. HESS J1640$-$465 and the PeVatron candidate HESS
J1641$-$463 are two neighboring (\ang[astroang]{0.25}) $\gamma$-ray sources,
spatially coincident with the radio supernova remnants (SNRs) G338.3$-$0.0 and
G338.5+0.1. Both sources are detected by H.E.S.S. and \textit{Fermi}-LAT, and
here we present a morphological and spectral analysis of them using 8 years of
\textit{Fermi}-LAT data between 200 \si{\mega\electronvolt} and 1
\si{\tera\electronvolt} with multi-wavelength observations to assess their
nature. The morphology of HESS J1640$-$465 is described by a 2D Gaussian
($\sigma=$ \ang[astroang]{0.053} $\pm$ \ang[astroang]{0.011}$_{stat}$ $ \pm$
\ang[astroang]{0.03}$_{syst}$) and its spectrum is modeled by a power-law with
a spectral index $\Gamma = 1.8\pm0.1_{\rm stat}\pm0.2_{\rm syst}$. HESS
J1641$-$463 is detected as a point-like source and its GeV emission is
described by a logarithmic-parabola spectrum with $\alpha = 2.7 \pm 0.1_ {\rm
stat} \pm 0.2_ {\rm syst} $ and significant curvature of $\beta = 0.11 \pm
0.03_ {\rm stat} \pm 0.05_ {\rm syst} $. Radio and X-ray flux upper limits were
derived. We investigated scenarios to explain their emission, namely the
emission from accelerated particles within the SNRs spatially coincident with
each source, molecular clouds illuminated by cosmic rays from the close-by
SNRs, and a pulsar/PWN origin. Our new \emph{Fermi}-LAT results and the radio
and X-ray flux upper limits pose severe constraints on some of these models.
|
We consider discrete Schr\"odinger operators with aperiodic potentials given
by a Sturmian word, which is a natural generalisation of the Fibonacci
Hamiltonian. We introduce the finite section method, which is often used to
solve operator equations approximately, and apply it first to periodic
Schr\"odinger operators. It turns out that the applicability of the method is
always guaranteed for integer-valued potentials provided that the operator is
invertible. By using periodic approximations, we find a necessary and
sufficient condition for the applicability of the finite section method for
aperiodic Schr\"odinger operators and a numerical method to check it.
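As a sketch of ours (the potential convention and coupling are one common choice for the Fibonacci Hamiltonian, not necessarily the paper's):

```python
import numpy as np

# Finite sections of a Sturmian Schrodinger operator, here the Fibonacci
# Hamiltonian (H u)(n) = u(n+1) + u(n-1) + lam * v(n) u(n) with
# v(n) = floor((n+1)*alpha) - floor(n*alpha), alpha the golden mean.
alpha = (np.sqrt(5) - 1) / 2
lam = 2.0

def fibonacci_potential(n):
    return np.floor((n + 1) * alpha) - np.floor(n * alpha)

def finite_section(N):
    """Truncation of H to sites 0..N-1 (Dirichlet boundary)."""
    n = np.arange(N)
    return (np.diag(lam * fibonacci_potential(n))
            + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))

# The method replaces H u = f by finite systems H_N u_N = f_N and asks
# whether the u_N converge; uniformly bounded solution norms as N grows
# are the numerical signature of applicability, growing norms of failure.
for N in [100, 200, 400, 800]:
    u = np.linalg.solve(finite_section(N), np.ones(N))
    print(N, np.linalg.norm(u, np.inf))
```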
|
Quantum integrated photonics requires large-scale linear optical circuitry,
and for many applications it is desirable to have a universally programmable
circuit, able to implement an arbitrary unitary transformation on a number of
modes. This has been achieved using the Reck scheme, consisting of a network of
Mach Zehnder interferometers containing a variable phase shifter in one path,
as well as an external phase shifter after each Mach Zehnder. It subsequently
became apparent that with symmetric Mach Zehnders containing a phase shift in
both paths, the external phase shifts are redundant, resulting in a more
compact circuit. The rectangular Clements scheme improves on the Reck scheme in
terms of circuit depth, but it has been thought that an external phase-shifter
was necessary after each Mach Zehnder. Here, we show that the Clements scheme
can be realised using symmetric Mach Zehnders, requiring only a small number of
external phase-shifters that do not contribute to the depth of the circuit.
This will result in a significant saving in the length of these devices,
allowing more complex circuits to fit onto a photonic chip, and reducing the
propagation losses associated with these circuits. We also discuss how similar
savings can be made to alternative schemes which have robustness to imbalanced
beam-splitters.
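For reference, the transfer matrix of a symmetric Mach Zehnder with a phase in each internal path, in one standard 50:50 beam-splitter convention (our sketch, not tied to a specific device):

```python
import numpy as np

# 50:50 beam splitter, one phase shifter per internal path, 50:50 again.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def mzi(theta1, theta2):
    return BS @ np.diag([np.exp(1j * theta1), np.exp(1j * theta2)]) @ BS

U = mzi(0.7, -0.3)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # unitary: True
# The splitting ratio |U[0, 0]|^2 = sin^2((theta1 - theta2) / 2) depends
# only on the phase difference, while the common phase (theta1 + theta2)/2
# plays the role of the external phase shifter of the asymmetric design.
```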
|
The consistency of a response to a given post at semantic-level and
emotional-level is essential for a dialogue system to deliver human-like
interactions. However, this challenge is not well addressed in the literature,
since most of the approaches neglect the emotional information conveyed by a
post while generating responses. This article addresses this problem by
proposing a unified end-to-end neural architecture, which is capable of
simultaneously encoding the semantics and the emotions in a post for generating
more intelligent responses with appropriately expressed emotions. Extensive
experiments on real-world data demonstrate that the proposed method outperforms
the state-of-the-art methods in terms of both content coherence and emotion
appropriateness.
|
The Isolation Lemma of Mulmuley, Vazirani and Vazirani [Combinatorica'87]
provides a self-reduction scheme that allows one to assume that a given
instance of a problem has a unique solution, provided a solution exists at all.
Since its introduction, much effort has been dedicated towards derandomization
of the Isolation Lemma for specific classes of problems. So far, the focus was
mainly on problems solvable in polynomial time.
In this paper, we study a setting that is more typical for
$\mathsf{NP}$-complete problems, and obtain partial derandomizations in the
form of significantly decreasing the number of required random bits. In
particular, motivated by the advances in parameterized algorithms, we focus on
problems on decomposable graphs. For example, for the problem of detecting a
Hamiltonian cycle, we build upon the rank-based approach from [Bodlaender et
al., Inf. Comput.'15] and design isolation schemes that use
- $O(t\log n + \log^2{n})$ random bits on graphs of treewidth at most $t$;
- $O(\sqrt{n})$ random bits on planar or $H$-minor free graphs; and
- $O(n)$ random bits on general graphs.
In all these schemes, the weights are bounded exponentially in the number of
random bits used. As a corollary, for every fixed $H$ we obtain an algorithm
for detecting a Hamiltonian cycle in an $H$-minor-free graph that runs in
deterministic time $2^{O(\sqrt{n})}$ and uses polynomial space; this is the
first algorithm to achieve such complexity guarantees. For problems of more
local nature, such as finding an independent set of maximum size, we obtain
isolation schemes on graphs of treedepth at most $d$ that use $O(d)$ random
bits and assign polynomially-bounded weights.
We also complement our findings with several unconditional and conditional
lower bounds, which show that many of the results cannot be significantly
improved.
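For intuition, the Isolation Lemma itself is easy to demonstrate empirically (our sketch; the set family here is random and purely illustrative):

```python
import random

# Weights drawn uniformly from {1, ..., 2|U|} isolate a unique
# minimum-weight set in any family over U with probability at least 1/2.
random.seed(0)
U = list(range(10))
# A family with many ties under trivial weighting: random 3-element sets.
family = [frozenset(s) for s in
          {tuple(sorted(random.sample(U, 3))) for _ in range(40)}]

trials, isolated = 10_000, 0
for _ in range(trials):
    w = {u: random.randint(1, 2 * len(U)) for u in U}
    weights = [sum(w[u] for u in S) for S in family]
    isolated += weights.count(min(weights)) == 1
print(f"unique minimum in {isolated / trials:.1%} of trials (lemma: >= 50%)")
```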
|
The evolution of epidemiological parameters, such as the instantaneous
reproduction number $R_t$, is important for understanding the transmission
dynamics of infectious diseases. Current estimates of time-varying
epidemiological parameters often face problems such as lagging observations,
averaging inference, and improper quantification of uncertainties. To address
these problems, we propose a Bayesian data assimilation framework for
time-varying parameter estimation. Specifically, this framework is applied to
$R_t$ estimation, resulting in the state-of-the-art DARt system. With DARt,
time misalignment caused by lagging observations is tackled by incorporating
observation delays into the joint inference of infections and $R_t$; the
drawback of averaging is overcome by instantaneously updating upon new
observations and developing a model selection mechanism that captures abrupt
changes; the uncertainty is quantified and reduced by employing Bayesian
smoothing. We validate the performance of DARt and demonstrate its power in
revealing the transmission dynamics of COVID-19. The proposed approach
provides a promising solution for accurate and timely estimation of
transmission dynamics from reported data.
|
While domain adaptation has been used to improve the performance of object
detectors when the training and test data follow different distributions,
previous work has mostly focused on two-stage detectors. This is because their
use of region proposals makes it possible to perform local adaptation, which
has been shown to significantly improve the adaptation effectiveness. Here, by
contrast, we target single-stage architectures, which are better suited to
resource-constrained detection than two-stage ones but do not provide region
proposals. To nonetheless benefit from the strength of local adaptation, we
introduce an attention mechanism that lets us identify the important regions on
which adaptation should focus. Our method gradually adapts the features from
global, image-level to local, instance-level. Our approach is generic and can
be integrated into any single-stage detector. We demonstrate this on standard
benchmark datasets by applying it to both SSD and YOLOv5. Furthermore, for
equivalent single-stage architectures, our method outperforms the
state-of-the-art domain adaptation techniques even though they were designed
for specific detectors.
|
Here we present a computational tool for optical tweezers which calculates
the particle tracking signal measured with a quadrant detector and the
shot-noise limit to position resolution. The tool is a piece of Matlab code
which functions within the freely available Optical Tweezers Toolbox. It allows
the measurements performed in most optical tweezers experiments to be
theoretically characterized in a fast and easy manner. The code supports
particles with arbitrary size, any optical fields and any combination of
objective and condenser, and performs a full vector calculation of the relevant
fields. Example calculations are presented which show the tracking signals for
different particles, and the shot noise limit to position sensitivity as a
function of the effective condenser NA.
|
The validity of our previously proposed conjecture -- that the horizon creates
a local instability which acts as the source of the quantum temperature of a
black hole -- is tested here for the Kerr black hole. Earlier this was
explicitly shown for spherically symmetric static black holes (SSS BHs). The
more realistic situation of Kerr spacetime, being stationary and axisymmetric,
is a non-trivial example to analyze. We show that for a chargeless massless
particle, the near-horizon radial motion in Kerr spacetime, as for an SSS BH,
can be locally unstable. The radial contribution in the corresponding
Hamiltonian is of the $\sim xp$ kind, where $p$ is the canonical momentum and
$x$ is its conjugate position for the particle. Finally, we show that horizon
thermalization can be explained through this Hamiltonian when one does a
semi-classical analysis. This again confirms that the near-horizon instability
is responsible for the horizon's own temperature and, moreover, generalizes
the validity of our conjectured mechanism for black hole horizon
thermalization.
|
Auto-Encoder (AE)-based deep subspace clustering (DSC) methods have achieved
impressive performance due to the powerful representation extracted using deep
neural networks while prioritizing categorical separability. However,
the self-reconstruction loss of an AE ignores rich relational information and
might lead to indiscriminative representations, which inevitably degrades
clustering performance. It is also challenging to learn high-level similarity
without feeding semantic labels. Another unsolved problem facing DSC is the
huge memory cost due to $n\times n$ similarity matrix, which is incurred by the
self-expression layer between an encoder and decoder. To tackle these problems,
we use pairwise similarity to weight the reconstruction loss and thereby
capture local structure information, while the similarity itself is learned by
the self-expression layer. Pseudo-graphs and pseudo-labels, which exploit the
uncertain knowledge acquired during network training, are further employed to
supervise similarity learning. Joint learning and iterative training make it
possible to obtain an overall optimal solution. Extensive experiments on benchmark datasets
demonstrate the superiority of our approach. By combining with the $k$-nearest
neighbors algorithm, we further show that our method can address the
large-scale and out-of-sample problems.
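A minimal PyTorch sketch of the two losses (shapes, weights, and the stand-in tensors are illustrative only; the actual encoder, decoder, and pseudo-label supervision follow the paper):

import torch

n, d, m = 256, 784, 64
X = torch.randn(n, d)                       # input batch
Z = torch.randn(n, m, requires_grad=True)   # stand-in for encoder features
Xr = torch.randn(n, d, requires_grad=True)  # stand-in for decoder output
C = torch.zeros(n, n, requires_grad=True)   # self-expression coefficients

S = 0.5 * (C.abs() + C.abs().t())           # similarity learned from C
# Reconstruction of sample i is weighted by its similarity to every sample j,
# injecting local-structure information that plain self-reconstruction lacks.
pair_recon = (S * torch.cdist(Xr, X).pow(2)).mean()
self_expr = (Z - C @ Z).pow(2).mean() + 1e-3 * C.abs().mean()
(pair_recon + self_expr).backward()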
|
In this paper, we present a network manipulation algorithm based on an
alternating minimization scheme from (Nesterov 2020). In our context, the
latter mimics the natural behavior of agents and organizations operating on a
network. By selecting starting distributions, the organizations determine the
short-term dynamics of the network. While choosing an organization in
accordance with their manipulation goals, agents are prone to errors. This
rational inattentive behavior leads to discrete choice probabilities. We extend
the analysis of our algorithm to the inexact case, where the corresponding
subproblems can only be solved with numerical inaccuracies. The parameters
reflecting the imperfect behavior of agents and the credibility of
organizations, as well as the condition number of the network transition matrix
have a significant impact on the convergence of our algorithm. Namely, they
turn out not only to improve the rate of convergence, but also to reduce the
accumulated errors. From the mathematical perspective, this is due to the
induced strong convexity of an appropriate potential function.
|
Hole spins in semiconductor quantum dots represent a viable route for the
implementation of electrically controlled qubits. In particular, the qubit
implementation based on Si pMOSFETs offers great potentialities in terms of
integration with the control electronics and long-term scalability. Moreover,
the future downscaling of these devices will possibly improve the performance
of both the classical (control) and quantum components of such monolithically
integrated circuits. Here we use a multi-scale approach to simulate a hole-spin
qubit in a downscaled Si-channel pMOSFET, whose structure is based on a
commercial 22nm fully-depleted silicon-on-insulator device. Our calculations
show the formation of well-defined hole quantum dots within the Si channel, and
the possibility of a general electrical control, with Rabi frequencies of the
order of 100 MHz for realistic field values. Our calculations demonstrate the
crucial role of the channel aspect ratio, and the presence of a favorable
parameter range for the qubit manipulation.
|
This study examined a simulated confined space modelled as a hospital waiting
area, where people who could have underlying conditions congregate and mix with
potentially infectious individuals. It further investigated the impact of the
volume of the waiting area, the number of people in the room, and their
placement as well as their weight. The simulation is an agent-based model (ABM).
|
Bipolar resistive switching (BRS) phenomenon has been demonstrated in Mn3O4
using Al (Aluminum)/Mn3O4/FTO (Fluorine doped Tin Oxide) Resistive Random
Access Memory (RRAM) device. The fabricated RRAM device shows good retention,
non volatile behavior and forming free BRS. The Current-Voltage (I-V)
characteristics and the temperature dependence of the resistance (R-T)
measurements were used to explore conduction mechanisms and the thermal
activation energy (Ea). The resistance ratio of the high resistance state (HRS)
to the low resistance state (LRS) is ~10^2. The fabricated RRAM device shows
different conduction mechanisms in the LRS and HRS, such as ohmic conduction
and space charge limited conduction (SCLC). The rupture and formation of conducting
filaments (CF) of oxygen vacancies take place by changing the polarity of
external voltage, which may be responsible for resistive switching
characteristics in the fabricated RRAM device. This fabricated RRAM device is
suitable for application in future high density non-volatile memory (NVM) RRAM
devices.
|
FourPhonon is a computational package that can calculate four-phonon
scattering rates in crystals. It is built within the ShengBTE framework, a
well-recognized lattice thermal conductivity solver based on the Boltzmann
transport equation. An adaptive energy broadening scheme is implemented for the
calculation of four-phonon scattering rates. In analogy with $thirdorder.py$ in
ShengBTE, we also provide a separate Python script, $Fourthorder.py$, to
calculate fourth-order interatomic force constants. The extension module
preserves all the features of ShengBTE, including good parallelism and a
straightforward workflow. In this paper, we discuss the general theory, program
design, and
example calculations on Si, BAs and $\mathrm{LiCoO_2}$.
|
In this paper, we propose a linear polarization coding scheme (LPC) combined
with the phase conjugated twin signals (PCTS) technique, referred to as
LPC-PCTS, for fiber nonlinearity mitigation in coherent optical orthogonal
frequency division multiplexing (CO-OFDM) systems. The LPC linearly combines
the data symbols on the adjacent subcarriers of the OFDM symbol, one at full
amplitude and the other at half amplitude. The linearly coded data is then
transmitted as phase conjugate pairs on the same subcarriers of the two OFDM
symbols on the two orthogonal polarizations. The nonlinear distortions added to
these subcarriers are essentially anti-correlated, since they carry phase
conjugate pairs of data. At the receiver, the coherent superposition of the
information symbols received on these pairs of subcarriers eventually leads to
the cancellation of the nonlinear distortions. We conducted numerical
simulations of a single-channel 200 Gb/s CO-OFDM system employing the LPC-PCTS
technique. The results show a Q-factor improvement of 2.3 dB and 1.7 dB with
and without dispersion symmetry, respectively, when compared to the recently
proposed phase conjugated subcarrier coding (PCSC) technique, at an average
launch power of 3 dBm. In addition, our proposed LPC-PCTS technique
shows a significant performance improvement when compared to the 16-quadrature
amplitude modulation (QAM) with phase conjugated twin waves (PCTW) scheme, at
the same spectral efficiency, for an uncompensated transmission distance of
2800 km.
|
Accurate simulations of flows in stellar interiors are crucial to improving
our understanding of stellar structure and evolution. Because the typically
slow flows are merely tiny perturbations on top of a close balance between
gravity and the pressure gradient, such simulations place heavy demands on
numerical hydrodynamics schemes. We demonstrate how discretization errors on
grids of reasonable size can lead to spurious flows orders of magnitude faster
than the physical flow. Well-balanced numerical schemes can deal with this
problem. Three such schemes were applied in the implicit, finite-volume
Seven-League Hydro (SLH) code in combination with a low-Mach-number numerical
flux function. We compare how the schemes perform in four numerical experiments
addressing some of the challenges imposed by typical problems in stellar
hydrodynamics. We find that the $\alpha$-$\beta$ and deviation well-balancing
methods can accurately maintain hydrostatic solutions provided that
gravitational potential energy is included in the total energy balance. They
accurately conserve minuscule entropy fluctuations advected in an isentropic
stratification, which enables the methods to reproduce the expected scaling of
convective flow speed with the heating rate. The deviation method also
substantially increases the accuracy of maintaining stationary orbital motions in a
Keplerian disk on long timescales. The Cargo-LeRoux method fares substantially
worse in our tests, although its simplicity may still offer some merits in
certain situations. Overall, we find the well-balanced treatment of gravity in
combination with low Mach number flux functions essential to reproducing
correct physical solutions to challenging stellar slow-flow problems on
affordable collocated grids.
|
Improving the performance of deep neural networks (DNNs) is important to both
the compiler and neural architecture search (NAS) communities. Compilers apply
program transformations in order to exploit hardware parallelism and memory
hierarchy. However, legality concerns mean they fail to exploit the natural
robustness of neural networks. In contrast, NAS techniques mutate networks by
operations such as the grouping or bottlenecking of convolutions, exploiting
the resilience of DNNs. In this work, we express such neural architecture
operations as program transformations whose legality depends on a notion of
representational capacity. This allows them to be combined with existing
transformations into a unified optimization framework. This unification allows
us to express existing NAS operations as combinations of simpler
transformations. Crucially, it allows us to generate and explore new tensor
convolutions. We prototyped the combined framework in TVM and were able to find
optimizations across different DNNs that significantly reduce inference time -
by over 3$\times$ in the majority of cases.
Furthermore, our scheme dramatically reduces NAS search time. Code is
available
at~\href{https://github.com/jack-willturner/nas-as-program-transformation-exploration}{this
https url}.
|
This work presents a data-driven reduced-order modeling framework to
accelerate the computations of $N$-body dynamical systems and their pair-wise
interactions. The proposed framework differs from traditional acceleration
methods, like the Barnes-Hut method, which requires online tree building of the
state space, or the fast-multipole method, which requires rigorous $a$ $priori$
analysis of governing kernels and online tree building. Our approach combines
Barnes-Hut hierarchical decomposition, dimensional compression via the
least-squares Petrov-Galerkin (LSPG) projection, and hyper-reduction by way of
the Gauss-Newton with approximated tensor (GNAT) approach. The resulting
$projection-tree$ reduced order model (PTROM) enables a drastic reduction in
operational count complexity by constructing sparse hyper-reduced pairwise
interactions of the $N$-body dynamical system. As a result, the presented
framework is capable of achieving an operational count complexity that is
independent of $N$, the number of bodies in the numerical domain. Capabilities
of the PTROM method are demonstrated on the two-dimensional fluid-dynamic
Biot-Savart kernel within a parametric and reproductive setting. Results show
the PTROM is capable of achieving over 2000$\times$ wall-time speed-up with
respect to the full-order model, where the speed-up increases with $N$. The
resulting solution delivers quantities of interest with errors that are less
than 0.1$\%$ with respect to the full-order model.
|
Recently, we introduced the "Newman-Penrose map," a novel correspondence
between a certain class of solutions of Einstein's equations and self-dual
solutions of the vacuum Maxwell equations, which we showed was closely related
to the classical double copy. Here, we give an alternative definition of this
correspondence in terms of quantities that are naturally defined on both
spacetime and twistor space. The advantage of this reformulation is that it is
purely geometrical in nature, being manifestly invariant under both spacetime
diffeomorphisms and projective transformations on twistor space. While the
original formulation of the map may be more convenient for most explicit
calculations, the twistorial formulation we present here may be of greater
theoretical utility.
|
The ultimate performance of any wireless communication system is limited by
electromagnetic principles and mechanisms. Motivated by this, we start from the
first principles of wave propagation and consider a multiple-input
multiple-output (MIMO) representation of a communication system between two
spatially-continuous volumes of arbitrary shape and position. This is the
concept of holographic MIMO communications. The analysis takes into account the
electromagnetic noise field, generated by external sources, and the constraint
on the physical radiated power. The electromagnetic MIMO model is
particularized for a system with parallel linear sources and receivers in
line-of-sight conditions. Inspired by orthogonal frequency-division
multiplexing, we assume that the spatially-continuous transmit
currents and received fields are represented using the Fourier basis functions.
In doing so, a wavenumber-division multiplexing (WDM) scheme is obtained whose
properties are studied with the conventional tools of linear systems theory.
Particularly, the interplay among the different system parameters (e.g.,
transmission range, wavelength, and sizes of source and receiver) in terms of
number of communication modes and level of interference is studied. Due to the
non-finite support of the electromagnetic channel, we prove that the
interference-free condition can only be achieved when the receiver size grows
to infinity. The spectral efficiency of WDM is evaluated via the singular-value
decomposition architecture with water-filling and compared to that of a
simplified architecture, which uses linear processing at the receiver and
suboptimal power allocation.
|
The computational prediction of atomistic structure is a long-standing
problem in physics, chemistry, materials, and biology. Within conventional
force-field or {\em ab initio} calculations, structure is determined through
energy minimization, which is either approximate or computationally demanding.
Alas, the accuracy-cost trade-off prohibits the generation of synthetic big
data records with meaningful energy based conformational search and structure
relaxation output. Exploiting implicit correlations among relaxed structures,
our kernel ridge regression model, dubbed Graph-To-Structure (G2S), generalizes
across chemical compound space, enabling direct predictions of relaxed
structures for out-of-sample compounds, and effectively bypassing the energy
optimization task. After training on constitutional and compositional isomers
(no conformers), G2S infers atomic coordinates relying solely on stoichiometry
and bond-network information as input (our numerical evidence includes closed-
and open-shell molecules, transition states, and solids). For all data
considered, G2S learning curves reach mean absolute interatomic distance
prediction errors of less than 0.2 {\AA} for less than eight thousand training
structures -- on par with or better than popular empirical methods.
Applicability tests of G2S include generating meaningful structures of
molecules for which standard methods require manual intervention, improved
initial guesses for subsequent conventional {\em ab initio} relaxation, and
input for structure-based representations commonly used in quantum machine
learning models (bridging the gap between graph- and structure-based models).
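The flavor of the regression task can be conveyed by a toy kernel ridge model (the features and targets below are random stand-ins; G2S's actual graph-based representation and kernel choices are those of the paper):

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
# Hypothetical stand-ins: per-pair graph features (bond network, stoichiometry)
# and the corresponding relaxed interatomic distances in Angstrom.
X_train = rng.normal(size=(500, 10))
y_train = rng.uniform(1.0, 3.0, size=500)

g2s_like = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=0.1)
g2s_like.fit(X_train, y_train)
d_pred = g2s_like.predict(rng.normal(size=(5, 10)))  # out-of-sample pairs
print(d_pred)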
|
This paper studies a generalized busy-time scheduling model on heterogeneous
machines. The input to the model includes a set of jobs and a set of machine
types. Each job has a size and a time interval during which it should be
processed. Each job is to be placed on a machine for execution. Different types
of machines have distinct capacities and cost rates. The total size of the jobs
running on a machine must always be kept within the machine's capacity, giving
rise to placement restrictions for jobs of various sizes among the machine
types. Each machine used is charged according to the time duration in which it
is busy, i.e., it is processing jobs. The objective is to schedule the jobs
onto machines to minimize the total cost of all the machines used. We develop
an $O(1)$-approximation algorithm in the offline setting and an
$O(\mu)$-competitive algorithm in the online setting (where $\mu$ is the
max/min job length ratio), both of which are asymptotically optimal.
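To fix the cost model, the following small Python sketch (with our own illustrative data structures) checks the capacity constraint and evaluates the busy-time cost of a candidate assignment:

# Each machine: (cost_rate, capacity, jobs), where jobs = [(start, end, size)].
def busy_cost(machines):
    total = 0.0
    for rate, cap, jobs in machines:
        # feasibility: concurrent load must stay within the machine's capacity
        for t in {t for s, e, _ in jobs for t in (s, e)}:
            load = sum(sz for s, e, sz in jobs if s <= t < e)
            assert load <= cap, "capacity violated"
        # busy time = measure of the union of the jobs' time intervals
        busy, cur_end = 0.0, None
        for s, e, _ in sorted(jobs):
            if cur_end is None or s > cur_end:
                busy += e - s
                cur_end = e
            elif e > cur_end:
                busy += e - cur_end
                cur_end = e
        total += rate * busy
    return total

print(busy_cost([(2.0, 10, [(0, 4, 5), (2, 6, 4)])]))  # union [0, 6]: 2.0 * 6 = 12.0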
|
We give a new simple proof of boundedness of the family of semistable sheaves
with fixed numerical invariants on a fixed smooth projective variety. In
characteristic zero our method gives a quick proof of Bogomolov's inequality
for semistable sheaves on a smooth projective variety of any dimension $\ge 2$
without using any restriction theorems.
|
The standard for Deep Reinforcement Learning in games, following Alpha Zero,
is to use residual networks and to increase the depth of the network to get
better results. We propose to improve mobile networks as an alternative to
residual networks and experimentally show the playing strength of the networks
according to both their width and their depth. We also propose a generalization
of the PUCT search algorithm that improves on standard PUCT.
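For reference, the selection rule being generalized is the standard Alpha Zero PUCT: at state $s$ the move chosen is $\arg\max_a \left[Q(s,a) + c_{puct}\, P(s,a)\, \sqrt{\sum_b N(s,b)}/(1+N(s,a))\right]$, where $P$ is the policy prior, $N$ the visit counts, and $Q$ the mean action value.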
|
We study the Hall response of topologically-trivial mobile impurities (Fermi
polarons) interacting weakly with majority fermions forming a Chern-insulator
background. This setting involves a rich interplay between the genuine
many-body character of the polaron problem and the topological nature of the
surrounding cloud. When the majority fermions are accelerated by an external
field, a transverse impurity current can be induced. To quantify this polaronic
Hall effect, we compute the drag transconductivity, employing controlled
diagrammatic perturbation theory in the impurity-fermion interaction. We show
that the impurity Hall drag is not simply proportional to the Chern number
characterizing the topological transport of the insulator on its own - it also
depends continuously on particle-hole breaking terms, to which the Chern number
is insensitive. However, when the insulator is tuned across a topological phase
transition, a sharp jump of the impurity Hall drag results, for which we derive
an analytical expression. We describe how the Hall drag and jump can be
extracted from a circular dichroic measurement of impurity excitation rates,
particularly suited for ultracold gas experiments.
|
Modern optical satellite sensors enable high-resolution stereo reconstruction
from space. But the challenging imaging conditions when observing the Earth
from space push stereo matching to its limits. In practice, the resulting
digital surface models (DSMs) are fairly noisy and often do not attain the
accuracy needed for high-resolution applications such as 3D city modeling.
Arguably, stereo correspondence based on low-level image similarity is
insufficient and should be complemented with a-priori knowledge about the
expected surface geometry beyond basic local smoothness. To that end, we
introduce ResDepth, a convolutional neural network that learns such an
expressive geometric prior from example data. ResDepth refines an initial, raw
stereo DSM while conditioning the refinement on the images. That is, it acts as a
smart, learned post-processing filter and can seamlessly complement any stereo
matching pipeline. In a series of experiments, we find that the proposed method
consistently improves stereo DSMs both quantitatively and qualitatively. We
show that the prior encoded in the network weights captures meaningful
geometric characteristics of urban design, which also generalize across
different districts and even from one city to another. Moreover, we demonstrate
that, by training on a variety of stereo pairs, ResDepth can acquire a
sufficient degree of invariance against variations in imaging conditions and
acquisition geometry.
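Schematically, the refinement is residual and image-conditioned; a minimal PyTorch sketch (layer choices here are illustrative, not the actual ResDepth architecture):

import torch
import torch.nn as nn

class ResidualRefiner(nn.Module):
    """Predicts a height correction for an initial DSM, conditioned on images."""
    def __init__(self, n_images=2):
        super().__init__()
        self.net = nn.Sequential(                # stand-in for the real CNN
            nn.Conv2d(1 + n_images, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, dsm, images):
        x = torch.cat([dsm, images], dim=1)      # condition on the images
        return dsm + self.net(x)                 # residual update of the DSM

refiner = ResidualRefiner()
dsm = torch.randn(1, 1, 128, 128)                # initial, noisy stereo DSM
imgs = torch.randn(1, 2, 128, 128)               # the two ortho-rectified views
refined = refiner(dsm, imgs)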
|
In this paper we investigate the critical detection efficiency required to
observe Bell nonlocality using multiple copies of the two-qubit maximally entangled
state and local Pauli measurements which act in the corresponding qubit
subspaces. It is known that for the two-qubit maximally entangled state a
symmetric detection efficiency of $82.84\%$ can be tolerated using the
Clauser-Horne-Shimony-Holt (CHSH) Bell test. We show that this threshold can be
lowered by using multiple copies of the two-qubit maximally entangled state. We
get the upper bounds $80.86\%$, $73.99\%$ and $69.29\%$ on the symmetric
detection efficiency threshold for two, three and four copies of the state,
where the respective number of measurements per party are 4, 8 and 16. However,
in the case of four copies the result is partly due to a heuristic method. In
order to get the corresponding Bell inequalities we made use of linear
programming for two copies of the state and convex optimization based on the
Gilbert algorithm for three and four copies of the state.
|
Unsupervised Domain Adaptation (UDA) methods for person Re-Identification
(Re-ID) rely on target domain samples to model the marginal distribution of the
data. To deal with the lack of target domain labels, UDA methods leverage
information from labeled source samples and unlabeled target samples. A
promising approach relies on the use of unsupervised learning as part of the
pipeline, such as clustering methods. The quality of the clusters clearly plays
a major role in these methods' performance, but this point has been largely overlooked. In
this work, we propose a multi-step pseudo-label refinement method to select the
best possible clusters and keep improving them so that these clusters become
closer to the class divisions without knowledge of the class labels. Our
refinement method includes a cluster selection strategy and a camera-based
normalization method which reduces the within-domain variations caused by the
use of multiple cameras in person Re-ID. This allows our method to reach
state-of-the-art UDA results on DukeMTMC-Market1501 (source-target). We surpass
state-of-the-art for UDA Re-ID by 3.4% on Market1501-DukeMTMC datasets, which
is a more challenging adaptation setup because the target domain (DukeMTMC) has
eight distinct cameras. Furthermore, the camera-based normalization method
causes a significant reduction in the number of iterations required for
training convergence.
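One plausible instantiation of the camera-based normalization step is per-camera feature standardization, sketched below (hypothetical shapes; the exact statistics used in our method are specified in the paper):

import numpy as np

def camera_normalize(feats, cam_ids):
    """Standardize features per camera to reduce within-domain variation."""
    out = np.empty_like(feats)
    for c in np.unique(cam_ids):
        idx = cam_ids == c
        mu = feats[idx].mean(axis=0)
        sd = feats[idx].std(axis=0) + 1e-8
        out[idx] = (feats[idx] - mu) / sd
    return out

feats = np.random.randn(1000, 2048)      # Re-ID embeddings (hypothetical)
cams = np.random.randint(0, 8, 1000)     # camera label per image
feats_n = camera_normalize(feats, cams)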
|
This paper addresses Monte Carlo algorithms for calculating the
Shapley-Shubik power index in weighted majority games. First, we analyze a
naive Monte Carlo algorithm and discuss the required number of samples. We then
propose an efficient Monte Carlo algorithm and show that our algorithm reduces
the required number of samples as compared to the naive algorithm.
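The naive estimator referred to above samples random orderings of the players and counts how often each player is pivotal, i.e., the first to push the running coalition weight to the quota; a minimal sketch:

import random

def shapley_shubik_mc(weights, quota, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    n = len(weights)
    pivots = [0] * n
    players = list(range(n))
    for _ in range(n_samples):
        rng.shuffle(players)
        total = 0
        for p in players:
            total += weights[p]
            if total >= quota:        # p is the pivotal player
                pivots[p] += 1
                break
    return [c / n_samples for c in pivots]

print(shapley_shubik_mc([3, 2, 2], quota=4))  # each index is ~1/3 here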
|
CRISPR-Cas is an adaptive immune mechanism that has been harnessed for a
variety of genetic engineering applications: the Cas9 protein recognises a
2-5nt DNA motif, known as the PAM, and a programmable crRNA binds a target DNA
sequence that is then cleaved. While off-target activity is undesirable, it
occurs because cross-reactivity was beneficial in the immune system on which
the machinery is based. Here, a stochastic model of the target recognition
reaction was derived to study the specificity of this immune mechanism in
bacteria. CRISPR systems with Cas9 proteins that recognised PAMs of varying
lengths were tested on self and phage DNA. The model showed that the energy
associated with PAM binding impacted mismatch tolerance, cleavage probability,
and cleavage time. Small PAMs allowed the CRISPR to balance catching mutant
phages, avoiding self-targeting, and quickly dissociating from critically
non-matching sequences. Additionally, the results revealed a lower tolerance to
mismatches in the PAM and a PAM-proximal region known as the seed, as seen in
experiments. This work illustrates the role that the Cas9 protein has in
dictating the specificity of DNA cleavage that can aid in preventing off-target
activity in biotechnology applications.
|
We study the complexity of approximating the number of answers to a small
query $\varphi$ in a large database $\mathcal{D}$. We establish an exhaustive
classification into tractable and intractable cases if $\varphi$ is a
conjunctive query with disequalities and negations:
$\bullet$ If there is a constant bound on the arity of $\varphi$, and if the
randomised Exponential Time Hypothesis (rETH) holds, then the problem has a
fixed-parameter tractable approximation scheme (FPTRAS) if and only if the
treewidth of $\varphi$ is bounded.
$\bullet$ If the arity is unbounded and we allow disequalities only, then the
problem has an FPTRAS if and only if the adaptive width of $\varphi$ (a width
measure strictly more general than treewidth) is bounded; the lower bound
relies on the rETH as well.
Additionally, we show that our results cannot be strengthened to achieve a
fully polynomial randomised approximation scheme (FPRAS): We observe that,
unless $\mathrm{NP} =\mathrm{RP}$, there is no FPRAS even if the treewidth (and
the adaptive width) is $1$. However, if there are neither disequalities nor
negations, we prove the existence of an FPRAS for queries of bounded fractional
hypertreewidth, strictly generalising the recently established FPRAS for
conjunctive queries with bounded hypertreewidth due to Arenas, Croquevielle,
Jayaram and Riveros (STOC 2021).
|
Pentadiamond is a recently proposed new carbon allotrope consisting of a
network of pentagonal rings where both sp$^2$ and sp$^3$ hybridization are
present. In this work we investigated the mechanical and electronic properties,
as well as the thermal stability, of pentadiamond using DFT and fully atomistic
reactive molecular dynamics (MD) simulations. We also investigated its
properties beyond the elastic regime for three different deformation modes:
compression, tension and shear. The behavior of pentadiamond under compressive
deformation showed strong fluctuations in the atomic positions, which are
responsible for the strain softening beyond the linear regime that
characterizes the plastic flow. As temperature increases, Young's modulus
values decrease, as expected, but this variation (up to 300 K) is smaller than
10\% (from 347.5 to 313.6 GPa); the fracture strain, however, is very
sensitive, varying from $\sim$44\% at 1 K to $\sim$5\% at 300 K.
|
We investigate structural and transport properties of highly Ru-deficient
SrRu0.7O3 thin films prepared by molecular beam epitaxy on (001) SrTiO3
substrates. To distinguish the influence of the two types of disorder in the
films, namely Ru vacancies within the lattice and disorder near the interface,
SrRu0.7O3 thin films with various thicknesses (t = 1-60 nm) were prepared. It was found
that the influence of the former dominates the electrical and magnetic
properties when t > 5-10 nm, while that of the latter does when t < 5-10 nm.
Structural characterizations revealed that the crystallinity, in terms of the
Sr and O sublattices, of SrRu0.7O3 thin films, is as high as that of the
ultrahigh-quality SrRuO3 ones. The Curie temperature (TC) analysis elucidated
that SrRu0.7O3 (TC = 140 K) is a material distinct from SrRuO3 (TC = 150 K).
Despite the large Ru deficiency (30%), the SrRu0.7O3 films showed metallic
conduction when t > 5 nm. In high-field magnetoresistance measurements, the
fascinating phenomenon of Weyl fermion transport was not observed for the
SrRu0.7O3 thin films irrespective of thickness, which is in contrast to the
stoichiometric SrRuO3 films. The (magneto)transport properties suggest that a
picture of carrier scattering due to the Ru vacancies is appropriate for
SrRu0.7O3, and also that proper stoichiometry control is a prerequisite to
utilizing the full potential of SrRuO3 as a magnetic Weyl semimetal and
two-dimensional spin-polarized system. Nevertheless, the large tolerance in Ru
composition (30%) to metallic conduction is advantageous for some practical
applications where SrRu1-xO3 is exploited as an epitaxial conducting layer.
|
Recently, mT5 - a massively multilingual version of T5 - leveraged a unified
text-to-text format to attain state-of-the-art results on a wide variety of
multilingual NLP tasks. In this paper, we investigate the impact of
incorporating parallel data into mT5 pre-training. We find that multi-tasking
language modeling with objectives such as machine translation during
pre-training is a straightforward way to improve performance on downstream
multilingual and cross-lingual tasks. However, the gains start to diminish as
the model capacity increases, suggesting that parallel data might not be as
essential for larger models. At the same time, even at larger model sizes, we
find that pre-training with parallel data still provides benefits in the
limited labelled data regime.
|
The effect of the self-energy on the photon-proton elastic scattering is
investigated for the backward and the forward directions. The shape of the
Thomson scattering at the photon energy $\omega \to 0$ is broken by taking into
account the self-energy of the proton. The electromagnetic polarizabilities
$\bar{\alpha}\pm\bar{\beta}$ are calculated by the lowest-order perturbative
treatment.
|
We derive the nucleon-nucleon interaction from the Skyrme model using second
order perturbation theory and the dipole approximation to skyrmion dynamics.
Unlike previous derivations, our derivation accounts for the non-trivial
kinetic and potential parts of the skyrmion-skyrmion interaction Lagrangian and
how they couple in the quantum calculation. We derive the eight low energy
interaction potentials and compare them with the phenomenological Paris model,
finding qualitative agreement in seven cases.
|
This is an expository paper on tensor products where the standard approaches
for constructing concrete instances of algebraic tensor products of linear
spaces, via quotient spaces or via linear maps of bilinear maps, are reviewed
by reducing them to different but isomorphic interpretations of an abstract
notion, viz., the universal property, which is based on a pair of axioms.
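For the reader's orientation, the pair of axioms is the familiar universal property: a tensor product of linear spaces $V$ and $W$ over a field $F$ is a pair $(V\otimes W, \otimes)$ with $\otimes : V\times W\to V\otimes W$ bilinear, such that for every bilinear map $h : V\times W\to U$ there exists a unique linear map $\tilde{h} : V\otimes W\to U$ satisfying $\tilde{h}(v\otimes w)=h(v,w)$ for all $v\in V$ and $w\in W$. Any two pairs with this property are canonically isomorphic, which is why the quotient-space and bilinear-map constructions yield the same object.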
|
This paper is concerned with a novel method allowing communication between
FRET nanonetworks and nerve cells. It is focused on two system components:
fluorophores and channelrhodopsins, which serve as transmitters and receivers,
respectively. Channelrhodopsins additionally act here as a FRET
signal-to-voltage converter. The trade-off between throughput and bit error rate is also
investigated.
|
The performance of grant-free random access (GF-RA) is limited by the number
of accessible random access resources (RRs) due to the absence of collision
resolution. Compressive sensing (CS)-based RA schemes scale up the RRs at the
expense of increased non-orthogonality among transmitted signals. This paper
presents the design of multi-sequence spreading random access (MSRA) which
employs multiple spreading sequences to spread the different symbols of a user
as opposed to the conventional schemes in which a user employs the same
spreading sequence for each symbol. We show that MSRA provides code diversity,
enabling the multi-user detection (MUD) to be modeled into a well-conditioned
multiple measurement vector (MMV) CS problem. The code diversity is quantified
by the decrease in the average Babel mutual coherence among the spreading
sequences. Moreover, we present a two-stage active user detection (AUD) scheme
for both wideband and narrowband implementations. Our theoretical analysis
shows that with MSRA the activity misdetection probability falls exponentially
as the size of the GF-RA frame increases. Finally, the simulation results show
that MSRA supports about an 82% higher utilization of RRs, i.e., more active
users, than the conventional schemes while achieving the RA failure rate lower
bound set by random access collisions.
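The contrast with conventional spreading can be sketched in a few lines of Python (the sequence construction below is a random stand-in for the paper's designed sequences):

import numpy as np

rng = np.random.default_rng(0)
L, K = 16, 4                                  # spreading length, symbols per user

S = rng.choice([-1, 1], size=(K, L)) / np.sqrt(L)   # K distinct sequences
d = rng.choice([-1, 1], size=K)                     # a user's K data symbols

conventional = np.outer(d, S[0])   # every symbol uses the same sequence
msra = d[:, None] * S              # symbol k spread with its own sequence S[k]
# MSRA turns MUD into a multiple-measurement-vector (MMV) CS problem whose
# effective sensing matrix varies across symbols, providing code diversity.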
|
In eDiscovery, it is critical to ensure that each page produced in legal
proceedings conforms with the requirements of court or government agency
production requests. Errors in productions could have severe consequences in a
case, putting a party in an adverse position. The volume of pages produced
continues to increase, and tremendous time and effort have been expended to ensure
quality control of document productions. This has historically been a manual
and laborious process. This paper demonstrates a novel automated production
quality control application which leverages deep learning-based image
recognition technology to extract Bates Number and Confidentiality Stamping
from legal case production images and validate their correctness. Effectiveness
of the method is verified with an experiment using real-world production data.
|
Acute respiratory distress syndrome (ARDS) is a life-threatening condition
that is often undiagnosed or diagnosed late. ARDS is especially prominent in
those infected with COVID-19. We explore the automatic identification of ARDS
indicators and confounding factors in free-text chest radiograph reports. We
present a new annotated corpus of chest radiograph reports and introduce the
Hierarchical Attention Network with Sentence Objectives (HANSO) text
classification framework. HANSO utilizes fine-grained annotations to improve
document classification performance. HANSO can extract ARDS-related information
with high performance by leveraging relation annotations, even if the annotated
spans are noisy. Using annotated chest radiograph images as a gold standard,
HANSO identifies bilateral infiltrates, an indicator of ARDS, in chest
radiograph reports with performance (0.87 F1) comparable to human annotations
(0.84 F1). This algorithm could facilitate more efficient and expeditious
identification of ARDS by clinicians and researchers and contribute to the
development of new therapies to improve patient care.
|
Current deep reinforcement learning (RL) algorithms are still highly
task-specific and lack the ability to generalize to new environments. Lifelong
learning (LLL), however, aims at solving multiple tasks sequentially by
efficiently transferring and using knowledge between tasks. Despite a surge of
interest in lifelong RL in recent years, the lack of a realistic testbed makes
robust evaluation of LLL algorithms difficult. Multi-agent RL (MARL), on the
other hand, can be seen as a natural scenario for lifelong RL due to its
inherent non-stationarity, since the agents' policies change over time. In this
work, we introduce a multi-agent lifelong learning testbed that supports both
zero-shot and few-shot settings. Our setup is based on Hanabi -- a
partially-observable, fully cooperative multi-agent game that has been shown to
be challenging for zero-shot coordination. Its large strategy space makes it a
desirable environment for lifelong RL tasks. We evaluate several recent MARL
methods, and benchmark state-of-the-art LLL algorithms in limited memory and
computation regimes to shed light on their strengths and weaknesses. This
continual learning paradigm also provides us with a pragmatic way of going
beyond centralized training which is the most commonly used training protocol
in MARL. We empirically show that the agents trained in our setup are able to
coordinate well with unseen agents, without any additional assumptions made by
previous works. The code and all pre-trained models are available at
https://github.com/chandar-lab/Lifelong-Hanabi.
|
Based on the spectator expansion of the multiple scattering series we employ
a chiral next-to-next-to-leading order (NNLO) nucleon-nucleon interaction on
the same footing in the structure as well as in the reaction calculation to
obtain an effective potential for nucleon-nucleus elastic scattering that is
consistent at leading order and includes the spin of the struck target nucleon.
As an example we present proton scattering off $^{12}$C.
|
For modules over group rings we introduce the following numerical parameter.
We say that a module A over a ring R has the finite r-generator property if
each f.g. (finitely generated) R-submodule of A can be generated by exactly r
elements and there exists a f.g. R-submodule D of A which has a minimal
generating subset consisting of exactly r elements. Let FG be the group
algebra of a finite group G over a field F. In the present paper, modules over
the algebra FG having the finite generator property are described.
|
The unique optoelectronic properties of black phosphorus (BP) have triggered
great interest in its applications in areas not fulfilled by other layered
materials (LMs). However, its poor stability (fast degradation, i.e. <<1 h for
monolayers) under ambient conditions restricts its practical application. We
demonstrate here, by an experimental-theoretical approach, that the
incorporation of nitrogen molecules (N2) into the BP structure results in a
significant improvement of its stability in air, up to 8 days without signs of
optical degradation. Our strategy involves the generation of defects (phosphorus
vacancies) by electron-beam irradiation, followed by their healing with N2
molecules. As an additional route, N2 plasma treatment is presented as an
alternative for large area application. Our first principles calculations
elucidate the mechanisms involved in the nitrogen incorporation as well as on
the stabilization of the modified BP, which corroborates our experimental
observations. This stabilization approach can be applied in the processing of
BP, allowing for its use in environmentally stable van der Waals
heterostructures with other LMs as well as in optoelectronic and wearable
devices.
|
The paper studies distributed binary hypothesis testing over a two-hop relay
network where both the relay and the receiver decide on the hypothesis. Both
communication links are subject to expected rate constraints, which differs
from the classical assumption of maximum rate constraints. We exactly
characterize the set of type-II error exponent pairs at the relay and the
receiver when both type-I error probabilities are constrained by the same value
$\epsilon>0$. No tradeoff is observed between the two exponents, i.e., one can
simultaneously attain maximum type-II error exponents both at the relay and at
the receiver. For different type-I error constraints $\epsilon_1 \neq
\epsilon_2$ at the relay and the receiver, we present an achievable
exponents region, which we obtain with a scheme that applies different versions
of a basic two-hop scheme that is optimal under maximum rate constraints. We
use the basic two-hop scheme with two choices of parameters and rates,
depending on the transmitter's observed sequence. For $\epsilon_1=\epsilon_2$,
a single choice is shown to be sufficient. Numerical simulations indicate that
extending to three or more parameter choices is never beneficial.
|
The important recent book by G. Schurz appreciates that the no-free-lunch
theorems (NFL) have major implications for the problem of (meta) induction.
Here I review the NFL theorems, emphasizing that they concern not only the case
where there is a uniform prior -- they prove that there are "as many priors"
(loosely speaking) for which any induction algorithm $A$ out-generalizes some
induction algorithm $B$ as vice versa. Importantly though,
in addition to the NFL theorems, there are many \textit{free lunch} theorems.
In particular, the NFL theorems can only be used to compare the
\textit{marginal} expected performance of an induction algorithm $A$ with the
marginal expected performance of an induction algorithm $B$. There is a rich
set of free lunches which instead concern the statistical correlations among
the generalization errors of induction algorithms. As I describe, the
meta-induction algorithms that Schurz advocates as a "solution to Hume's
problem" are just an example of such a free lunch based on correlations among
the generalization errors of induction algorithms. I end by pointing out that
the prior that Schurz advocates, which is uniform over bit frequencies rather
than bit patterns, is contradicted by thousands of experiments in statistical
physics and by the great success of the maximum entropy procedure in inductive
inference.
|
At present, there is still no officially accepted and extensively verified
implementation of computing the gamma difference distribution allowing unequal
shape parameters. We explore four computational ways of the gamma difference
distribution with the different shape parameters resulting from time series
kriging, a forecasting approach based on the best linear unbiased prediction,
and linear mixed models. The results of our numerical study, with emphasis on
using open data science tools, demonstrate that our open tool implemented in
high-performance Python (with Numba) is exponentially fast, highly accurate, and
very reliable. It combines numerical inversion of the characteristic function
and the trapezoidal rule with the double exponential oscillatory transformation
(DE quadrature). At the double 53-bit precision, our tool outperformed the
speed of the analytical computation based on Tricomi's $U(a, b, z)$ function in
CAS software (commercial Mathematica, open SageMath) by 1.5-2 orders of
magnitude. At the precision of scientific numerical computational tools, it was
5-10 times faster than open SciPy, NumPy, and commercial MATLAB. The potential
future application of
our tool for a mixture of characteristic functions could open new possibilities
for fast data analysis based on exact probability distributions in areas like
multidimensional statistics, measurement uncertainty analysis in metrology as
well as in financial mathematics and risk analysis.
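For orientation, a bare-bones Python version of the characteristic-function route (a plain trapezoidal rule on a truncated grid; the production tool's DE oscillatory transformation and Numba acceleration are what make it fast and reliable):

import numpy as np

def gamma_diff_pdf(x, k1, th1, k2, th2, T=200.0, n=200_001):
    """PDF of X = Gamma(k1, scale=th1) - Gamma(k2, scale=th2) via CF inversion."""
    t = np.linspace(0.0, T, n)
    # CF of the difference: phi(t) = (1 - i*th1*t)^(-k1) * (1 + i*th2*t)^(-k2)
    cf = (1 - 1j * th1 * t) ** (-k1) * (1 + 1j * th2 * t) ** (-k2)
    return np.trapz(np.real(cf * np.exp(-1j * t * x)), t) / np.pi

# Sanity check: k1 = k2 = 1, th1 = th2 = 1 gives the Laplace density, f(0) = 1/2.
print(gamma_diff_pdf(0.0, 1.0, 1.0, 1.0, 1.0))  # ~0.5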
|
Advanced machine learning techniques have been used in remote sensing (RS)
applications such as crop mapping and yield prediction, but remain
under-utilized for tracking crop progress. In this study, we demonstrate the
use of agronomic knowledge of crop growth drivers in a Long Short-Term
Memory-based, domain-guided neural network (DgNN) for in-season crop progress
estimation. The DgNN uses a branched structure and attention to separate
independent crop growth drivers and capture their varying importance throughout
the growing season. The DgNN is implemented for corn, using RS data in Iowa for
the period 2003-2019, with USDA crop progress reports used as ground truth.
State-wide DgNN performance shows significant improvement over sequential and
dense-only NN structures, and a widely-used Hidden Markov Model method. The
DgNN had a 4.0% higher Nash-Sutcliffe efficiency over all growth stages and 39%
more weeks with highest cosine similarity than the next best NN during test
years. The DgNN and Sequential NN were more robust during periods of abnormal
crop progress, though estimating the Silking-Grainfill transition was difficult
for all methods. Finally, Uniform Manifold Approximation and Projection
visualizations of layer activations showed how LSTM-based NNs separate crop
growth time-series differently from a dense-only structure. Results from this
study exhibit both the viability of NNs in crop growth stage estimation (CGSE)
and the benefits of using domain knowledge. The DgNN methodology presented here
can be extended to provide near-real time CGSE of other crops.
|
In this paper, we wish to investigate the dynamics of information transfer in
evolutionary dynamics. We use information theoretic tools to track how much
information an evolving population has obtained and managed to retain about
different environments that it is exposed to. By understanding the dynamics of
information gain and loss in a static environment, we predict how that same
evolutionary system would behave when the environment is fluctuating.
Specifically, we anticipate a cross-over between the regime in which
fluctuations improve the ability of the evolutionary system to capture
environmental information and the regime in which the fluctuations inhibit it,
governed by a cross-over in the timescales of information gain and decay.
|
Vision transformer (ViT) models exhibit substandard optimizability. In
particular, they are sensitive to the choice of optimizer (AdamW vs. SGD),
optimizer hyperparameters, and training schedule length. In comparison, modern
convolutional neural networks are easier to optimize. Why is this the case? In
this work, we conjecture that the issue lies with the patchify stem of ViT
models, which is implemented by a stride-p p×p convolution (p=16 by default)
applied to the input image. This large-kernel plus large-stride convolution
runs counter to typical design choices of convolutional layers in neural
networks. To test whether this atypical design choice causes an issue, we
analyze the optimization behavior of ViT models with their original patchify
stem versus a simple counterpart where we replace the ViT stem by a small
number of stacked stride-two 3×3 convolutions. While the vast majority of
computation in the two ViT designs is identical, we find that this small change
in early visual processing results in markedly different training behavior in
terms of the sensitivity to optimization settings as well as the final model
accuracy. Using a convolutional stem in ViT dramatically increases optimization
stability and also improves peak performance (by ~1-2% top-1 accuracy on
ImageNet-1k), while maintaining flops and runtime. The improvement can be
observed across the wide spectrum of model complexities (from 1G to 36G flops)
and dataset scales (from ImageNet-1k to ImageNet-21k). These findings lead us
to recommend using a standard, lightweight convolutional stem for ViT models in
this regime as a more robust architectural choice compared to the original ViT
model design.
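The two stems under comparison can be written down directly; a PyTorch sketch (the channel progression of the convolutional stem is illustrative, chosen only to reach the token width):

import torch
import torch.nn as nn

# Original ViT patchify stem: one stride-16, 16x16 convolution.
patchify_stem = nn.Conv2d(3, 768, kernel_size=16, stride=16)

# Convolutional stem: stacked stride-two 3x3 convolutions covering the same
# overall 16x downsampling (channel widths here are illustrative).
conv_stem = nn.Sequential(
    nn.Conv2d(3, 48, 3, stride=2, padding=1), nn.BatchNorm2d(48), nn.ReLU(),
    nn.Conv2d(48, 96, 3, stride=2, padding=1), nn.BatchNorm2d(96), nn.ReLU(),
    nn.Conv2d(96, 192, 3, stride=2, padding=1), nn.BatchNorm2d(192), nn.ReLU(),
    nn.Conv2d(192, 384, 3, stride=2, padding=1), nn.BatchNorm2d(384), nn.ReLU(),
    nn.Conv2d(384, 768, 1),                     # project to the token width
)

# Either stem maps a 224x224 image to a 14x14 grid of 768-dim features,
# which is then flattened into the transformer's token sequence.
x = torch.randn(1, 3, 224, 224)
assert patchify_stem(x).shape == conv_stem(x).shape == (1, 768, 14, 14)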
|
Image composition plays an important role in the quality of a photo. However,
not every camera user possesses the knowledge and expertise required for
capturing well-composed photos. While post-capture cropping can improve the
composition sometimes, it does not work in many common scenarios in which the
photographer needs to adjust the camera view to capture the best shot. To
address this issue, we propose a deep learning-based approach that provides
suggestions to the photographer on how to adjust the camera view before
capturing. By optimizing the composition before a photo is captured, our system
helps photographers to capture better photos. As there is no publicly-available
dataset for this task, we create a view adjustment dataset by repurposing
existing image cropping datasets. Furthermore, we propose a two-stage
semi-supervised approach that utilizes both labeled and unlabeled images for
training a view adjustment model. Experimental results show that the proposed
semi-supervised approach outperforms the corresponding supervised alternatives,
and our user study results show that the suggested view adjustment improves
image composition 79% of the time.
|
We consider a parabolic sine-Gordon model with periodic boundary conditions.
We prove a fundamental maximum principle which gives a priori uniform control
of the solution. In the one-dimensional case we classify all bounded steady
states and exhibit some explicit solutions. For the numerical discretization we
employ a first-order IMEX scheme and a second-order BDF2 discretization,
without any additional stabilization terms. We rigorously prove the energy stability of the
numerical schemes under nearly sharp and quite mild time step constraints. We
demonstrate the striking similarity of the parabolic sine-Gordon model with the
standard Allen-Cahn equations with double well potentials.
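To illustrate the structure of the first-order scheme, assume schematically that the model takes the form $u_t = \kappa^2\Delta u - \sin u$ (signs and coefficients here are illustrative, not necessarily the paper's exact setting). With time step $\tau$, the IMEX discretization treats diffusion implicitly and the sine term explicitly, $(u^{n+1}-u^n)/\tau = \kappa^2\Delta u^{n+1} - \sin(u^n)$, so each step solves the linear system $(I-\tau\kappa^2\Delta)u^{n+1} = u^n - \tau\sin(u^n)$, which the periodic boundary conditions render diagonal in Fourier space.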
|
Every spacetime is defined by its metric, the mathematical object which
further defines the spacetime curvature. From the relativity principle, we have
the freedom to choose which coordinate system to write our metric in. Some
coordinate systems, however, are better than others. In this text, we begin
with a brief introduction to general relativity, Einstein's masterpiece
theory of gravity. We then discuss some physically interesting spacetimes and
the coordinate systems that the metrics of these spacetimes can be expressed
in. More specifically, we discuss the existence of the rather useful unit-lapse
forms of these spacetimes. Using the metric written in this form then allows us
to conduct further analysis of these spacetimes, which we discuss.
|