Mobile contact tracing apps are -- in principle -- a perfect aid to contain
the human-to-human spread of an infectious disease such as COVID-19 due to the
wide use of smartphones worldwide. Yet, the unknown accuracy of contact
estimation by wireless technologies hinders their broader use. We address this
challenge by conducting a measurement study with a custom testbed to show the
capabilities and limitations of Bluetooth Low Energy (BLE) in different
scenarios. Distance estimation is based on interpreting the signal pathloss
with a basic linear and a logarithmic model. Further, we compare our results
with accurate ultra-wideband (UWB) distance measurements. While the results
indicate that distance estimation by BLE is not accurate enough, a contact
detector can detect contacts below 2.5 m with a true positive rate of 0.65 for
the logarithmic and of 0.54 for the linear model. Further, the measurements
reveal that multi-path signal propagation reduces the effect of body shielding
and thus increases detection accuracy in indoor scenarios.
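As a minimal illustration of the logarithmic model mentioned above, distance
can be estimated from received signal strength via the standard log-distance
path-loss relation; the reference RSSI and path-loss exponent below are
assumed illustrative values, not the paper's calibration constants:

```python
def distance_from_rssi(rssi_dbm, rssi_at_1m=-60.0, path_loss_exponent=2.0):
    """Estimate distance (m) from BLE RSSI via the log-distance model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d).
    rssi_at_1m and path_loss_exponent are assumed values; in practice they
    must be calibrated per device, body orientation, and environment.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))

def is_contact(rssi_dbm, threshold_m=2.5):
    # Contact detection as in the abstract: flag distances below 2.5 m.
    return distance_from_rssi(rssi_dbm) <= threshold_m

print(distance_from_rssi(-68.0))  # ~2.5 m with these assumed parameters
```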
|
The interstellar turbulence is magnetized and thus anisotropic. The
anisotropy of turbulent magnetic fields and velocities is imprinted in the
related observables, rotation measures (RMs), and velocity centroids (VCs).
This anisotropy provides valuable information on both the direction and
strength of the magnetic field. However, its measurement is difficult
especially in highly supersonic turbulence in cold interstellar phases due to
the distortions by isotropic density fluctuations. By using 3D simulations of
supersonic and sub-Alfv\'enic magnetohydrodynamic (MHD) turbulence, we find that
the problem can be alleviated when we selectively sample the volume-filling
low-density regions in supersonic MHD turbulence. Our results show that in
these low-density regions, the anisotropy of RM and VC fluctuations depends on
the Alfv\'enic Mach number as $\rm M_A^{-4/3}$. This anisotropy-$\rm M_A$
relation is theoretically expected for sub-Alfv\'enic MHD turbulence and
confirmed by our synthetic observations of $^{12}$CO emission. It provides a
new method for measuring the plane-of-the-sky magnetic fields in cold
interstellar phases.
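For reference, the reported scaling can be written compactly as follows (with
the standard definition of the Alfv\'enic Mach number; the exact normalization
is not specified in the abstract):

```latex
\text{anisotropy of RM and VC fluctuations} \;\propto\; M_A^{-4/3},
\qquad
M_A \equiv \frac{V_L}{V_A}
\quad \text{(turbulent injection velocity over Alfv\'en speed)}.
```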
|
Robotic nursing aid is one of the heavily researched areas in robotics
nowadays. Several robotic assistants exist, but each focuses only on a
specific function of nurse assistance or patient aid. There is a need for a
unified system that not only performs tasks that assist nurses and reduce
their burden, but also performs tasks that help patients. In recent times, due
to the COVID-19 pandemic, there is also an increased need for robotic
assistants with teleoperation capabilities that provide better protection
against the spread of the virus. To address these requirements, we propose a
novel Multi-purpose Intelligent Nurse Aid (MINA) robotic system that is
capable of providing walking assistance to patients and performing
teleoperation tasks with an easy-to-use and intuitive
Graphical User Interface (GUI). This paper also presents preliminary results
from the walking assistant task that improves upon the current state-of-the-art
methods and shows the developed GUI for teleoperation.
|
In this paper, we propose a direct Eulerian generalized Riemann problem (GRP)
scheme for a blood flow model in arteries. It is an extension of the Eulerian
GRP scheme developed by Ben-Artzi et al. in J. Comput. Phys. 218 (2006). By
using the Riemann invariants, we diagonalize the blood flow system into a
weakly coupled system, which is used to resolve rarefaction waves. We also use
the Rankine-Hugoniot condition to resolve the local GRP formulation. We pay
special attention to the acoustic case as well as the sonic case. The
extension to the two-dimensional case is carefully obtained by using the
dimensional splitting technique. Numerical tests verify that the derived GRP
scheme is second-order accurate.
|
We study the Seebeck effect in the three-dimensional Dirac electron system
based on the linear response theory with Luttinger's gravitational potential.
The Seebeck coefficient $S$ is defined by $S = L_{12} / L_{11} T$, where $T$ is
the temperature, and $L_{11}$ and $L_{12}$ are the longitudinal response
coefficients of the charge current to the electric field and to the temperature
gradient, respectively; $L_{11}$ is the electric conductivity and $L_{12}$ is
the thermo-electric conductivity. We consider randomly-distributed impurity
potentials as the source of the momentum relaxation of electrons and
microscopically calculate the relaxation rate and the vertex corrections of
$L_{11}$ and $L_{12}$ due to the impurities. It is confirmed that $L_{11}$ and
$L_{12}$ are related through Mott's formula at low temperatures when the
chemical potential lies above the gap ($|\mu| > \Delta$), irrespective of the
linear dispersion of the Dirac electrons and unconventional energy dependence
of the lifetime of electrons. On the other hand, when the chemical potential
lies in the band gap ($|\mu| < \Delta$), the Seebeck coefficient behaves just
as in conventional semiconductors: its dependences on the chemical potential
$\mu$ and the temperature $T$ are partially captured by $S \propto (\Delta -
\mu) / k_B T$ for $\mu > 0$. The Seebeck coefficient takes the relatively
large value $|S| \simeq 1.7\,\mathrm{mV/K}$ at $T \simeq 8.7\,\mathrm{K}$ for
$\Delta = 15\,\mathrm{meV}$, assuming doped bismuth.
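For context, Mott's formula mentioned above ties the two response coefficients
together at low temperature; in the abstract's notation it takes the standard
form (up to sign convention, and not specific to this paper's microscopic
derivation):

```latex
S \;=\; \frac{L_{12}}{L_{11} T}
\;\simeq\; \frac{\pi^2}{3}\,\frac{k_B^2 T}{e}\,
\left.\frac{\partial \ln \sigma(\varepsilon)}{\partial \varepsilon}
\right|_{\varepsilon=\mu}.
```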
|
Replacing a fossil fuel-powered car with an electric model can halve
greenhouse gas emissions over the course of the vehicle's lifetime and reduce
the noise pollution in urban areas. In green logistics, a well-scheduled
charging ensures an efficient operation of transportation and power systems
and, at the same time, provides economical and satisfactory charging services
for drivers. This paper presents a taxonomy of current electric vehicle
charging scheduling problems in green logistics by analyzing its unique
features with some typical use cases, such as space assignment, routing and
energy management; discusses the challenges, i.e., the information availability
and stakeholders' strategic behaviors that arise in stochastic and
decentralized environments; and classifies the existing approaches that
address these challenges as centralized, distributed, or decentralized.
Moreover, we discuss research opportunities in applying
market-based mechanisms, which shall be coordinated with stochastic
optimization and machine learning, to the decentralized, dynamic and
data-driven charging scheduling problems for the management of the future green
logistics.
|
We consider an analog of particle production in a quartic $O(N)$ quantum
oscillator with time-dependent frequency, which is a toy model of particle
production in the dynamical Casimir effect and de Sitter space. We calculate
exact quantum averages, Keldysh propagator, and particle number using two
different methods. First, we employ a kind of rotating wave approximation to
estimate these quantities for small deviations from stationarity. Second, we
extend these results to arbitrarily large deviations using the
Schwinger-Keldysh diagrammatic technique. We show that in strongly
nonstationary situations, including resonant oscillations, loop corrections to
the tree-level expressions effectively result in an additional degree of
freedom, $N \to N + \frac{3}{2}$, which modifies the average number and energy
of created particles.
|
In this paper we investigate the commutator relations for prenilpotent roots
which are nested. These commutator relations are trivial in many cases.
|
We extend a recent breakthrough result relating expectation thresholds and
actual thresholds to include rainbow versions.
|
A symmetry-preserving treatment of mesons, within a Dyson-Schwinger and
Bethe-Salpeter equations approach, demands an interconnection between the
kernels of the quark gap equation and meson Bethe-Salpeter equation. Appealing
to those symmetries expressed by the vector and axial-vector
Ward-Green-Takahashi identities (WGTI), we construct a two-body Bethe-Salpeter
kernel and study its implications in the vector channel; particularly, we
analyze the structure of the quark-photon vertex, which explicitly develops a
vector meson pole on the timelike axis and the quark anomalous magnetic moment
term, as well as a variety of $\rho$ meson properties: mass and decay
constants, electromagnetic form factors, and valence-quark distribution
amplitudes.
|
An equation is derived for analyzing the self-action of wave packets with
few optical cycles in multicore fibers (MCF). A new class of stable
out-of-phase spatio-temporal solitons with few-cycle durations in an MCF with
cores located in a ring is found and analyzed. The stability boundary of the
obtained solutions is determined. As an example of using such solitons, we
considered the problem of their self-compression in the process of multisoliton
dynamics in the MCF. The formation of laser pulses with a duration of few
optical cycles at the output of a ten-core MCF is shown.
|
The least squares problem with L1-regularized regressors, called Lasso, is a
widely used approach in optimization problems where sparsity of the regressors
is desired. This formulation is fundamental for many applications in signal
processing, machine learning and control. As a motivating problem, we
investigate a sparse data predictive control problem, run at a cloud service to
control a system with unknown model, using L1-regularization to limit the
behavior complexity. The input-output data collected for the system is
privacy-sensitive, hence, we design a privacy-preserving solution using
homomorphically encrypted data. The main challenges are the non-smoothness of
the L1-norm, which is difficult to evaluate on encrypted data, as well as the
iterative nature of the Lasso problem. We use a distributed ADMM formulation
that enables us to exchange substantial local computation for little
communication between multiple servers. We first give an encrypted multi-party
protocol for solving the distributed Lasso problem, by approximating the
non-smooth part with a Chebyshev polynomial, evaluating it on encrypted data,
and using a more cost-effective distributed bootstrapping operation. For the
example of data predictive control, we prefer a non-homogeneous splitting of
the data for better convergence. We give an encrypted multi-party protocol for
this non-homogeneous splitting of the Lasso problem to a non-homogeneous set of
servers: one powerful server and a few less powerful devices, added for
security reasons. Finally, we provide numerical results for our proposed
solutions.
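To make the underlying optimization concrete, here is a minimal plaintext
sketch of the ADMM splitting for the Lasso problem
$\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$ that such a distributed
protocol builds on; encryption, the multi-party distribution, and the
Chebyshev approximation of the soft-threshold are all omitted:

```python
import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of the L1 norm: the non-smooth step that the paper
    # approximates with a Chebyshev polynomial to evaluate on encrypted data.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=100):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM (plaintext sketch)."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # cached factorization
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)           # non-smooth z-update
        u = u + x - z                                  # dual update
    return z
```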
|
Many platforms for benchmarking optimization algorithms offer users the
possibility of sharing their experimental data with the purpose of promoting
reproducible and reusable research. However, different platforms use different
data models and formats, which drastically inhibits identification of relevant
data sets, their interpretation, and their interoperability. Consequently, a
semantically rich, ontology-based, machine-readable data model is highly
desired.
We report in this paper on the development of such an ontology, which we name
OPTION (OPTImization algorithm benchmarking ONtology). Our ontology provides
the vocabulary needed for semantic annotation of the core entities involved in
the benchmarking process, such as algorithms, problems, and evaluation
measures. It also provides means for automated data integration, improved
interoperability, powerful querying capabilities and reasoning, thereby
enriching the value of the benchmark data. We demonstrate the utility of OPTION
by annotating and querying a corpus of benchmark performance data from the
BBOB workshops, a use case which can be easily extended to cover other
benchmarking data collections.
|
Transmission electron microscopy (TEM) is one of the primary tools for
microstructural characterization of materials as well as for measuring film
thickness.
However, manual determination of film thickness from TEM images is
time-consuming as well as subjective, especially when the films in question are
very thin and the need for measurement precision is very high. Such is the case
for head overcoat (HOC) thickness measurements in the magnetic hard disk drive
industry. It is therefore necessary to develop software to automatically
measure HOC thickness. In this paper, for the first time, we propose a HOC
layer segmentation method using NASNet-Large as an encoder followed by a
decoder, an encoder-decoder design that is one of the most commonly used
architectures in deep learning for image segmentation. To further improve
segmentation results,
we are the first to propose a post-processing layer to remove irrelevant
portions in the segmentation result. To measure the thickness of the segmented
HOC layer, we propose a regressive convolutional neural network (RCNN) model as
well as orthogonal thickness calculation methods. Experimental results
demonstrate that our model achieves a higher Dice score and a lower mean
squared error, and outperforms current state-of-the-art manual measurement.
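For reference, the Dice score used for evaluation is the standard overlap
measure between the predicted segmentation mask $P$ and the ground truth $G$:

```latex
\mathrm{Dice}(P, G) \;=\; \frac{2\,|P \cap G|}{|P| + |G|}.
```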
|
Objectives: Federal open data initiatives that promote increased sharing of
federally collected data are important for transparency, data quality, trust,
and relationships with the public and state, tribal, local, and territorial
(STLT) partners. These initiatives advance understanding of health conditions
and diseases by providing data to more researchers, scientists, and
policymakers for analysis, collaboration, and valuable use beyond CDC
responders. This is particularly true for emerging conditions such as COVID-19
where we have much to learn and have evolving data needs. Since the beginning
of the outbreak, CDC has collected person-level, de-identified data from
jurisdictions and currently has over 8 million records, increasing each day.
This paper describes how CDC designed and produces two de-identified public
datasets from these collected data.
Materials and Methods: Data elements were included based on usefulness,
public requests, and privacy implications; specific field values were suppressed
to reduce risk of reidentification and exposure of confidential information.
Datasets were created and verified for privacy and confidentiality using data
management platform analytic tools as well as R scripts.
Results: Unrestricted data are available to the public through Data.CDC.gov
and restricted data, with additional fields, are available with a data use
agreement through a private repository on GitHub.com.
Practice Implications: Enriched understanding of the available public data,
the methods used to create these data, and the algorithms used to protect
privacy of de-identified individuals allow for improved data use. Automating
data generation procedures allows greater and more timely sharing of data.
|
Non-orthogonal multiple access (NOMA) is a key technology to enable massive
machine type communications (mMTC) in 5G networks and beyond. In this paper,
NOMA is applied to improve the random access efficiency in high-density
spatially-distributed multi-cell wireless IoT networks, where IoT devices
contend for accessing the shared wireless channel using an adaptive
p-persistent slotted Aloha protocol. To enable a capacity-optimal network, a
novel formulation of random channel access management is proposed, in which the
transmission probability of each IoT device is tuned to maximize the geometric
mean of users' expected capacity. It is shown that the network optimization
objective is high dimensional and mathematically intractable, yet it admits
favourable mathematical properties that enable the design of efficient
data-driven algorithmic solutions which do not require a priori knowledge of
the channel model or network topology. A centralized model-based algorithm and
a scalable distributed model-free algorithm are proposed to optimally tune the
transmission probabilities of IoT devices and attain the maximum capacity. The
convergence of the proposed algorithms to the optimal solution is further
established based on convex optimization and game-theoretic analysis. Extensive
simulations demonstrate the merits of the novel formulation and the efficacy of
the proposed algorithms.
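As a simplified illustration of the objective, the sketch below computes the
geometric mean of per-device success rates under a single-cell slotted-Aloha
collision model; the paper's actual capacity expressions, which account for
NOMA and multi-cell interference, are more involved:

```python
import numpy as np

def expected_success(p):
    """Per-device success probability in single-channel slotted Aloha:
    device i succeeds when it transmits and all others stay silent.
    Assumes 0 < p_i < 1 for every device."""
    p = np.asarray(p, dtype=float)
    silent_all = np.prod(1.0 - p)
    return p * silent_all / (1.0 - p)   # p_i * prod_{j != i} (1 - p_j)

def geometric_mean_capacity(p):
    # Objective from the abstract: the geometric mean of users' expected
    # capacity (here proxied by success rates), to be maximized over p.
    return np.exp(np.mean(np.log(expected_success(p))))

print(geometric_mean_capacity([0.2, 0.2, 0.2]))  # 0.128 for this toy case
```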
|
The performance of wireless networks is fundamentally limited by the
aggregate interference, which depends on the spatial distributions of the
interferers, channel conditions, and user traffic patterns (or queueing
dynamics). These factors usually exhibit spatial and temporal correlations and
thus make the performance of large-scale networks environment-dependent (i.e.,
dependent on network topology, locations of the blockages, etc.). The
correlation can be exploited in protocol designs (e.g., spectrum-, load-,
location-, energy-aware resource allocations) to provide efficient wireless
services. For this, accurate system-level performance characterization and
evaluation with spatio-temporal correlation are required. In this context,
stochastic geometry models and random graph techniques have been used to
develop analytical frameworks to capture the spatio-temporal interference
correlation in large-scale wireless networks. The objective of this article is
to provide a tutorial on the stochastic geometry analysis of large-scale
wireless networks that captures the spatio-temporal interference correlation
(and hence the signal-to-interference ratio (SIR) correlation). We first
discuss the importance of spatio-temporal performance analysis, different
parameters affecting the spatio-temporal correlation in the SIR, and the
different performance metrics for spatio-temporal analysis. Then we describe
the methodologies to characterize spatio-temporal SIR correlations for
different network configurations (independent, attractive, repulsive
configurations), shadowing scenarios, user locations, queueing behavior,
relaying, retransmission, and mobility. We conclude by outlining future
research directions in the context of spatio-temporal analysis of emerging
wireless communications scenarios.
|
Vortex-based spin-torque nano-oscillators (STVOs) can present more complex
dynamics than the spin-torque-induced gyrotropic (G) motion of the vortex core.
The respective dynamic modes and the transition between them can be controlled
by experimental parameters such as the applied dc current. An interesting
behavior is the stochastic transition from the G- to a dynamic C-state
occurring for large current densities. Moreover, the C-state oscillations
exhibit a constant active magnetic volume. We present noise measurements in the
different dynamic states that allow accessing specific properties of the
stochastic transition, such as the characteristic state transition frequency.
Furthermore, we confirm, as theoretically predicted, an increase of flicker
noise with $I_{dc}^2$ when the oscillation volume remains constant with the
current. These results bring insight into the potential optimization of noise
properties sought for many potential rf applications with spin torque
oscillators. Furthermore, the investigated stochastic characteristics open up
new potentialities, for instance in the emerging field of neuromorphic
computing schemes.
|
Temperature is a deceptively simple concept that still raises deep questions
at the forefront of quantum physics research. The observation of thermalisation
in completely isolated quantum systems, such as cold-atom quantum simulators,
implies that a temperature can be assigned even to individual, pure quantum
states. Here, we propose a scheme to measure the temperature of such pure
states through quantum interference. Our proposal involves interferometry of an
auxiliary qubit probe, which is prepared in a superposition state and
subsequently undergoes decoherence due to weak coupling with a closed,
thermalised many-body system. Using only a few basic assumptions about chaotic
quantum systems -- namely, the eigenstate thermalisation hypothesis and the
emergence of hydrodynamics at long times -- we show that the qubit undergoes
pure exponential decoherence at a rate that depends on the temperature of its
surroundings. We verify our predictions by numerical experiments on a quantum
spin chain that thermalises after absorbing energy from a periodic drive. Our
work provides a general method to measure the temperature of isolated, strongly
interacting systems under minimal assumptions.
|
The study of hyper-compact (HC) or ultra-compact (UC) HII regions is
fundamental to understanding the process of massive (> 8 M_sun) star formation.
We employed Atacama Large Millimeter/submillimeter Array (ALMA) 1.4 mm Cycle 6
observations to investigate at high angular resolution (~0.050", corresponding
to 330 au) the HC HII region inside molecular core A1 of the high-mass
star-forming cluster G24.78+0.08. We used the H30alpha emission and different
molecular lines of CH3CN and 13CH3CN to study the kinematics of the ionized and
molecular gas, respectively. At the center of the HC HII region, at radii <~500
au, we observe two mutually perpendicular velocity gradients, which are
directed along the axes at PA = 39 deg and PA = 133 deg, respectively. The
velocity gradient directed along the axis at PA = 39 deg has an amplitude of 22
km/s mpc^(-1), which is much larger than the other's, 3 km/s mpc^(-1). We
interpret these velocity gradients as rotation around, and expansion along, the
axis at PA = 39 deg. We propose a scenario where the H30alpha line traces the
ionized heart of a disk-jet system that drives the formation of the massive
star (~20 M_sun) responsible for the HC HII region. Such a scenario is also
supported by the position-velocity plots of the CH3CN and 13CH3CN lines along
the axis at PA = 133 deg, which are consistent with Keplerian rotation around a
20 M_sun star. Toward the HC HII region in G24.78+0.08, the coexistence of mass
infall (at radii of ~5000 au), an outer molecular disk (from <~4000 au to >~500
au), and an inner ionized disk (<~500 au) indicates that the massive ionizing
star is still actively accreting from its parental molecular core. To our
knowledge, this is the first example of a molecular disk around a high-mass
forming star that, while becoming internally ionized after the onset of the HII
region, continues to accrete mass onto the ionizing star.
|
Numerous congruences for partitions with designated summands have been proven
since first being introduced and studied by Andrews, Lewis, and Lovejoy. This
paper explicitly characterizes the number of partitions with designated
summands whose parts are not divisible by $2^\ell$, $2$, and $3^\ell$ working
modulo $2,\ 4,$ and $3$, respectively, greatly extending previous results on
the subject. We provide a few applications of our characterizations throughout
in the form of congruences and a computationally fast recurrence. Moreover, we
illustrate a previously undocumented connection between the number of
partitions with designated summands and the number of partitions with odd
multiplicities.
|
The quadratic rough Heston model provides a natural way to encode the Zumbach
effect in the rough volatility paradigm. We apply multi-factor approximation
and use deep learning methods to build an efficient calibration procedure for
this model. We show that the model is able to reproduce very well both SPX and
VIX implied volatilities. We typically obtain VIX option prices within the
bid-ask spread and an excellent fit of the SPX at-the-money skew. Moreover, we
also explain how to use the trained neural networks for hedging with
instantaneous computation of hedging quantities.
|
Compact objects inspiraling into supermassive black holes, known as
extreme-mass-ratio inspirals, are an important source for future space-borne
gravitational-wave detectors. When constructing waveform templates, usually the
adiabatic approximation is employed to treat the compact object as a test
particle for a short duration, and the radiation reaction is reflected in the
changes of the constants of motion. However, the mass of the compact object
should have contributions to the background. In the present paper, employing
the effective-one-body formalism, we analytically calculate the trajectories of
a compact object around a massive Kerr black hole with generally
three-dimensional orbits and express the fundamental orbital frequencies in
explicit forms. In addition, by constructing an approximate "constant" similar
to the Carter constant, we transfer the dynamical quantities such as energy,
angular momentum, and the "Carter constant" to the semilatus rectum,
eccentricity, and orbital inclination with mass-ratio corrections. The linear
mass-ratio terms in the formalism may not be sufficient for accurate waveforms,
but our analytical method for solving the equations of motion could be useful
in various approaches to building waveform models.
|
We study the electronic phase diagram of the excitonic insulator candidates
Ta$_2$Ni(Se$_{1-x}$S$_x$)$_5$ [x=0, ... ,1] using Raman spectroscopy. Critical
excitonic fluctuations are observed, which diminish with $x$ and ultimately
shift to high energies, characteristic of a quantum phase transition.
Nonetheless, a symmetry-breaking transition at finite temperatures is detected
for all $x$, exposing a cooperating lattice instability that takes over for
large $x$. Our study reveals a failed excitonic quantum phase transition,
masked by a preemptive structural order.
|
Spectropolarimetric measurements of gamma-ray burst (GRB) optical afterglows
contain polarization information for both continuum and absorption lines. Based
on the Zeeman effect, an absorption line in a strong magnetic field is
polarized and split into a triplet. In this paper, we solve the polarization
radiative transfer equations of the absorption lines, and obtain the degree of
linear polarization of the absorption lines as a function of the optical depth.
In order to effectively measure the degree of linear polarization for the
absorption lines, a magnetic field strength of at least $10^3$ G is required.
The metal elements that produce the polarized absorption lines should be
sufficiently abundant and have large oscillator strengths or Einstein
absorption coefficients. We encourage both polarization measurements and
high-dispersion observations of the absorption lines in order to detect the
triplet structure in early GRB optical afterglows.
|
Existing deterministic variational inference approaches for diffusion
processes use simple proposals and target the marginal density of the
posterior. We construct the variational process as a controlled version of the
prior process and approximate the posterior by a set of moment functions. In
combination with moment closure, the smoothing problem is reduced to a
deterministic optimal control problem. Exploiting the path-wise Fisher
information, we propose an optimization procedure that corresponds to a natural
gradient descent in the variational parameters. Our approach allows for richer
variational approximations that extend to state-dependent diffusion terms. The
classical Gaussian process approximation is recovered as a special case.
|
In this paper, a distributed formation flight control topology for
a Leader-Follower formation structure is presented. This topology relies
primarily on the online generation of the trajectories to be followed by the
agents in the formation. The trajectory of each agent is planned during
execution depending on its neighbors, with the desired reference trajectory
given only to the leader. Simulations in MATLAB/Simulink are performed on a
formation of quadrotor UAVs to illustrate the proposed method. The
topology shows very good results in achieving the formation and following the
reference trajectory.
|
A theory of a pseudogap phase of high-temperature superconductors in which
the current carriers are translation-invariant bipolarons is developed. The
temperature T* of the transition from the pseudogap phase to the normal one is
calculated. The isotope coefficient for the pseudogap transition temperature
is also found. It is shown that the results obtained, in particular the
possibility of negative values of the isotope coefficient, are consistent with
experiment. New experiments on the influence of the magnetic field on the
isotope coefficient are proposed.
|
FloodNet is a high-resolution image dataset acquired by a small UAV platform,
DJI Mavic Pro quadcopters, after Hurricane Harvey. The dataset presents a
unique challenge of advancing the damage assessment process for post-disaster
scenarios using unlabeled and limited labeled data. We propose a solution to
address its classification and semantic segmentation challenges. We approach
this problem by generating pseudo labels for both classification and
segmentation during training and slowly increasing the weight by which the
pseudo-label loss affects the final loss. Using this semi-supervised method of
training helped us improve our baseline supervised loss by a large margin for
classification, allowing the model to generalize and perform better on the
validation and test splits of the dataset. In this paper, we compare and
contrast the various methods and models for image classification and semantic
segmentation on the FloodNet dataset.
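A minimal sketch of the loss-weight ramp described above, using the
exponential sigmoid ramp-up common in semi-supervised learning (e.g., Laine &
Aila, 2017); the schedule length and weights are assumptions, not the values
used for FloodNet:

```python
import numpy as np

def pseudo_label_weight(epoch, ramp_epochs=80, max_weight=1.0):
    """Slowly increase the contribution of the pseudo-label loss to the
    final loss over the first ramp_epochs epochs."""
    t = np.clip(epoch / ramp_epochs, 0.0, 1.0)
    return max_weight * float(np.exp(-5.0 * (1.0 - t) ** 2))

# Per training step:
#   total_loss = supervised_loss + pseudo_label_weight(epoch) * pseudo_loss
```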
|
Change detection from synthetic aperture radar (SAR) imagery is a critical
yet challenging task. Existing methods mainly focus on feature extraction in
spatial domain, and little attention has been paid to frequency domain.
Furthermore, in patch-wise feature analysis, some noisy features in the
marginal region may be introduced. To tackle the above two challenges, we
propose a Dual-Domain Network. Specifically, we take features from the discrete
cosine transform domain into consideration and the reshaped DCT coefficients
are integrated into the proposed model as the frequency domain branch. Feature
representations from both frequency and spatial domain are exploited to
alleviate the speckle noise. In addition, we further propose a multi-region
convolution module, which emphasizes the central region of each patch. The
contextual information and central region features are modeled adaptively. The
experimental results on three SAR datasets demonstrate the effectiveness of the
proposed model. Our codes are available at
https://github.com/summitgao/SAR_CD_DDNet.
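As a small sketch of how the frequency-domain branch input can be formed
(patch-wise 2D DCT; the patch size and reshaping are illustrative
assumptions, not the paper's exact configuration):

```python
import numpy as np
from scipy.fft import dctn

def dct_branch_input(patch):
    """Compute 2D DCT coefficients of an image patch and flatten them,
    yielding reshaped coefficients for the frequency-domain branch."""
    coeffs = dctn(patch, norm="ortho")   # type-II DCT along both axes
    return coeffs.reshape(-1)

patch = np.random.rand(7, 7)             # e.g., a 7x7 SAR image patch
print(dct_branch_input(patch).shape)     # (49,)
```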
|
Domain mismatch is a noteworthy issue in acoustic event detection tasks, as
the target domain data is difficult to access in most real applications. In
this study, we propose a novel CNN-based discriminative training framework as a
domain compensation method to handle this issue. It uses a parallel CNN-based
discriminator to learn a pair of high-level intermediate acoustic
representations. Together with a binary discriminative loss, the discriminators
are forced to maximally exploit the discrimination of heterogeneous acoustic
information in each audio clip with target events, which results in robust
paired representations that can well discriminate the target events and
background/domain variations separately. Moreover, to better learn the
transient characteristics of target events, a frame-wise classifier is designed
to perform the final classification. In addition, a two-stage training with the
CNN-based discriminator initialization is further proposed to enhance the
system training. All experiments are performed on the DCASE 2018 Task 3
datasets. Results show that our proposal significantly outperforms the official
baseline under cross-domain conditions by a relative $1.8$-$12.1\%$ in AUC,
without any performance degradation under in-domain evaluation conditions.
|
We introduce a novel inferential framework for marked point processes that
enjoys both scalability and interpretability. The framework is based on
variational inference and it aims to speed up inference for a flexible family
of marked point processes where the joint distribution of times and marks can
be specified in terms of the conditional distribution of times given the
process filtration, and of the conditional distribution of marks given the
process filtration and the current time. We assess the predictive ability of
our proposed method over four real-world datasets where results show its
competitive performance against other baselines. The attractiveness of our
framework for the modelling of marked point processes is illustrated through a
case study of association football data where scalability and interpretability
are exploited for extracting useful informative patterns.
|
Decision-based attacks (DBA), wherein attackers perturb inputs to spoof
learning algorithms by observing solely the output labels, are a type of severe
adversarial attacks against Deep Neural Networks (DNNs) requiring minimal
knowledge of attackers. State-of-the-art DBA attacks relying on zeroth-order
gradient estimation require an excessive number of queries. Recently, Bayesian
optimization (BO) has shown promise in reducing the number of queries in
score-based attacks (SBA), in which attackers need to observe real-valued
probability scores as outputs. However, extending BO to the setting of DBA is
nontrivial because in DBA only output labels instead of real-valued scores, as
needed by BO, are available to attackers. In this paper, we close this gap by
proposing an efficient DBA attack, namely BO-DBA. Different from existing
approaches, BO-DBA generates adversarial examples by searching so-called
\emph{directions of perturbations}. It then formulates the problem as a BO
problem that minimizes the real-valued distortion of perturbations. With the
optimized perturbation generation process, BO-DBA converges much faster than
the state-of-the-art DBA techniques. Experimental results on pre-trained
ImageNet classifiers show that BO-DBA converges within 200 queries while the
state-of-the-art DBA techniques need over 15,000 queries to achieve the same
level of perturbation distortion. BO-DBA also achieves attack success rates
similar to those of BO-based SBA attacks, but with less distortion.
|
In this chapter we give an overview of the consensus-based global
optimization algorithm and its recent variants. We recall the formulation and
analytical results of the original model, then we discuss variants using
component-wise independent or common noise. In combination with mini-batch
approaches, these variants were tailored for machine learning applications.
Moreover, it turns out that the analytical estimates are dimension independent,
which is useful for high-dimensional problems. We discuss the relationship of
consensus-based optimization with particle swarm optimization, a method widely
used in the engineering community. Then we survey a variant of consensus-based
optimization that is proposed for global optimization problems constrained to
hyper-surfaces. We conclude the chapter with remarks on applications, preprints
and open problems.
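For concreteness, here is a minimal Euler-Maruyama sketch of the original
consensus-based optimization dynamics with isotropic noise; all parameter
values are illustrative:

```python
import numpy as np

def cbo_minimize(f, dim, n_particles=100, n_steps=500,
                 lam=1.0, sigma=0.8, alpha=30.0, dt=0.01, seed=0):
    """Consensus-based optimization: particles drift toward a Gibbs-weighted
    consensus point and diffuse in proportion to their distance from it."""
    rng = np.random.default_rng(seed)
    X = 3.0 * rng.normal(size=(n_particles, dim))
    for _ in range(n_steps):
        w = np.exp(-alpha * np.apply_along_axis(f, 1, X))   # Gibbs weights
        v = (w[:, None] * X).sum(axis=0) / w.sum()          # consensus point
        diff = X - v
        dW = np.sqrt(dt) * rng.normal(size=X.shape)
        X += -lam * diff * dt + sigma * np.linalg.norm(diff, axis=1,
                                                       keepdims=True) * dW
    return v

# Example: minimize a shifted sphere function; the minimizer is (1, 1).
print(cbo_minimize(lambda x: np.sum((x - 1.0) ** 2), dim=2))
```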
|
With the growth of the open-source data science community, both the number of
data science libraries and the number of versions for the same library are
increasing rapidly. To match the evolving APIs from those libraries,
open-source organizations often have to exert manual effort to refactor the
APIs used in the code base. Moreover, due to the abundance of similar
open-source libraries, data scientists working on a certain application may
have an abundance of libraries to choose from, maintain, and migrate between. The
manual refactoring between APIs is a tedious and error-prone task. Although
recent research efforts were made on performing automatic API refactoring
between different languages, previous work relies on statistical learning with
collected pairwise training data for the API matching and migration. Using
large statistical data for refactoring is not ideal because such training data
will not be available for a new library or a new version of the same library.
We introduce Synthesis for Open-Source API Refactoring (SOAR), a novel
technique that requires no training data to achieve API migration and
refactoring. SOAR relies only on the documentation that is readily available at
the release of the library to learn API representations and mapping between
libraries. Using program synthesis, SOAR automatically computes the correct
configuration of arguments to the APIs and any glue code required to invoke
those APIs. SOAR also uses the interpreter's error messages when running
refactored code to generate logical constraints that can be used to prune the
search space. Our empirical evaluation shows that SOAR can successfully
refactor 80% of our benchmarks corresponding to deep learning models with up to
44 layers with an average run time of 97.23 seconds, and 90% of the data
wrangling benchmarks with an average run time of 17.31 seconds.
|
This paper considers (partial) identification of a variety of counterfactual
parameters in binary response models with possibly endogenous regressors. Our
framework allows for nonseparable index functions with multi-dimensional latent
variables, and does not require parametric distributional assumptions. We
leverage results on hyperplane arrangements and cell enumeration from the
literature on computational geometry in order to provide a tractable means of
computing the identified set. We demonstrate how various functional form,
independence, and monotonicity assumptions can be imposed as constraints in our
optimization procedure to tighten the identified set, and we show how these
assumptions can be assigned meaningful interpretations in terms of restrictions
on latent response types. Finally, we apply our method to study the effects of
health insurance on the decision to seek medical treatment.
|
Scalar metric fluctuations generically source a spectrum of gravitational
waves at second order in perturbation theory, poising gravitational wave
experiments as potentially powerful probes of the small-scale curvature power
spectrum. We perform a detailed study of the imprint of primordial
non-Gaussianity on these induced gravitational waves, emphasizing the role of
both the disconnected and connected components of the primordial trispectrum.
Specializing to local-type non-Gaussianity, we numerically compute all
contributions and present results for a variety of enhanced primordial
curvature power spectra.
|
The disruption of asteroids and comets produces cm-sized meteoroids that end
up impacting the Earth's atmosphere and producing bright fireballs that might
have associated shock waves or, on geometrically favorable occasions, excavate
craters that put them into unexpectedly hazardous scenarios. The astrometric
reduction of meteors and fireballs to infer their atmospheric trajectories and
heliocentric orbits involves a complex and tedious process that generally
requires many manual tasks. To streamline the process, we present a software
package called SPMN 3D Fireball Trajectory and Orbit Calculator (3D-FireTOC),
an automatic Python code for detection, trajectory reconstruction of meteors,
and heliocentric orbit computation from video recordings. The automatic
3D-FireTOC package comprises of a user interface and a graphic engine that
generates a realistic 3D representation model, which allows users to easily
check the geometric consistency of the results and facilitates scientific
content production for dissemination. The software automatically detects
meteors from digital systems, completes the astrometric measurements, performs
photometry, computes the meteor atmospheric trajectory, calculates the velocity
curve, and obtains the radiant and the heliocentric orbit, quantifying the
measurement errors at each step. The software applies
corrections such as light aberration, refraction, zenith attraction, diurnal
aberration and atmospheric extinction. It also characterizes the atmospheric
flight and consequently determines fireball fates by using the $\alpha - \beta$
criterion that analyses the ability of a fireball to penetrate deep into the
atmosphere and produce meteorites. We demonstrate the performance of the
software by analyzing two bright fireballs recorded by the Spanish Fireball and
Meteorite Network (SPMN).
|
The fermionic quantum emulator (FQE) is a collection of protocols for
emulating quantum dynamics of fermions efficiently, taking advantage of common
symmetries present in chemical, materials, and condensed-matter systems. The
library is fully integrated with the OpenFermion software package and serves as
the simulation backend. The FQE reduces memory footprint by exploiting number
and spin symmetry along with custom evolution routines for sparse and dense
Hamiltonians, allowing us to study significantly larger quantum circuits at
modest computational cost when compared against qubit state vector simulators.
This release paper outlines the technical details of the simulation methods and
key advantages.
|
Musculoskeletal models have the potential to improve diagnosis and optimize
clinical treatment by predicting accurate outcomes on an individual basis.
However, the subject-specific modeling of spinal alignment is often strongly
simplified or is based on radiographic assessments, exposing subjects to
unnecessary radiation. We therefore developed a novel skin marker-based
approach for modeling subject-specific spinal alignment and evaluated its
feasibility by comparing the predicted with the actual intervertebral joint
(IVJ) locations/orientations (ground truth) using lateral-view radiographic
images. Moreover, the predictive performance of the subject-specific models was
evaluated by comparing the predicted L1/L2 spinal loads during various
functional activities with in vivo measured data obtained from the OrthoLoad
database. IVJ locations/orientations were predicted closer to ground truth as
opposed to standard model scaling, with average location prediction errors of
0.99+/-0.68 cm on the frontal and 1.21+/-0.97 cm on the transverse axis as well
as an average orientation prediction error of 4.74{\deg}+/-2.80{\deg}.
Simulated spinal loads showed similar curve patterns but considerably larger
values as compared to in vivo measured data. Differences in spinal loads
between generic and subject-specific models become only apparent on an
individual subject level. These results underline the feasibility of the
proposed method and associated workflow for inter- and intra-subject
investigations using musculoskeletal simulations. When implemented into
standard model scaling workflows, it is expected to improve the accuracy of
muscle activity and joint loading simulations, which is crucial for
investigations of treatment effects or pathology-dependent deviations.
|
We identify Whittaker vectors for $\mathcal{W}_k(\mathfrak{g})$-modules with
partition functions of higher Airy structures. This implies that Gaiotto
vectors, describing the fundamental class in the equivariant cohomology of a
suitable compactification of the moduli space of $G$-bundles over
$\mathbb{P}^2$ for $G$ a complex simple Lie group, can be computed by a
non-commutative version of the Chekhov-Eynard-Orantin topological recursion. We
formulate the connection to higher Airy structures for Gaiotto vectors of type
A, B, C, and D, and explicitly construct the topological recursion for type A
(at arbitrary level) and type B (at self-dual level). On the physics side, it
means that the Nekrasov partition function for pure $\mathcal{N} = 2$
four-dimensional supersymmetric gauge theories can be accessed by topological
recursion methods.
|
I present a strategy for unsupervised manifold learning on local atomic
environments in molecular simulations based on simple rotation- and
permutation-invariant three-body features. These features are highly
descriptive, generalize to multiple chemical species, and are
human-interpretable. The low-dimensional embeddings of each atomic environment
can be used to understand and quantify messy crystal structures such as those
near interfaces and defects or well-ordered crystal lattices such as in bulk
materials without modification. The same method can also yield collective
variables describing collections of particles such as for an entire simulation
domain. I demonstrate the method on colloidal crystallization, ice crystals,
and binary mesophases to illustrate its broad applicability. In each case, the
learned latent space yields insights into the details of the observed
microstructures. For ices and mesophases, supervised classifiers are trained
based on the learned manifolds and directly compared against a recent
neural-network-based approach. Notably, while this method provides comparable
classification performance, it can also be deployed on even a handful of
observed environments without labels or \textit{a priori} knowledge. Thus, the
current approach provides an incredibly versatile strategy to characterize and
classify local atomic environments, and may unlock insights in a wide variety
of molecular simulation contexts.
|
The Weisfeiler-Leman (WL) algorithm is a well-known combinatorial procedure
for detecting symmetries in graphs and it is widely used in graph-isomorphism
tests. It proceeds by iteratively refining a colouring of vertex tuples. The
number of iterations needed to obtain the final output is crucial for the
parallelisability of the algorithm.
We show that there is a constant k such that every planar graph can be
identified (that is, distinguished from every non-isomorphic graph) by the
k-dimensional WL algorithm within a logarithmic number of iterations. This
generalises a result due to Verbitsky (STACS 2007), who proved the same for
3-connected planar graphs.
The number of iterations needed by the k-dimensional WL algorithm to identify
a graph corresponds to the quantifier depth of a sentence that defines the
graph in the (k+1)-variable fragment C^{k+1} of first-order logic with counting
quantifiers. Thus, our result implies that every planar graph is definable with
a C^{k+1}-sentence of logarithmic quantifier depth.
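For illustration, here is the classic 1-dimensional WL colour refinement; the
k-dimensional algorithm in the result refines colourings of k-tuples
analogously, and the iteration count returned below is the quantity the
logarithmic bound concerns:

```python
def wl_refinement(adjacency):
    """1-dimensional Weisfeiler-Leman colour refinement.
    adjacency: dict mapping each vertex to the set of its neighbours.
    Returns the stable colouring and the iteration at which it stabilised."""
    colour = {v: 0 for v in adjacency}        # uniform initial colouring
    for iteration in range(len(adjacency)):   # stabilises within n rounds
        # New colour = old colour plus the multiset of neighbour colours.
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adjacency[v])))
               for v in adjacency}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_colour = {v: palette[sig[v]] for v in adjacency}
        if new_colour == colour:              # stable colouring reached
            return colour, iteration
        colour = new_colour
    return colour, len(adjacency)

# Example: a path on four vertices; endpoints and inner vertices separate.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(wl_refinement(path))  # endpoints get one colour, inner vertices another
```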
|
We present a methodical study of the thermal and nuclear properties of hot
nuclear matter using relativistic mean-field theory. We examine the effects
of temperature on the binding energy, pressure, thermal index, symmetry energy,
and its derivative for the symmetric nuclear matter using temperature-dependent
relativistic mean-field formalism for the well-known G2$^{*}$ and recently
developed IOPB-I parameter sets. The critical temperature for the liquid-gas
phase transition in an asymmetric nuclear matter system has also been
calculated and collated with the experimentally available data. We investigate
the behavior of the thermal index as a function of nucleon density within
relativistic and non-relativistic formalisms. The computation of neutrino
emissivity through the direct Urca process for the supernovae remnants has also
been performed, which manifests some exciting results about the thermal
stabilization and evolution of the newly born proto-neutron star. The central
temperature and the maximum mass of the proto-neutron star have also been
calculated for different entropy values.
|
Let $E/\mathbb{Q}$ be an elliptic curve having multiplicative reduction at a
prime $p$. Let $(g,h)$ be a pair of eigenforms of weight $1$ arising as the
theta series of an imaginary quadratic field $K$, and assume that the
triple-product $L$-function $L(f,g,h,s)$ is self-dual and does not vanish at
the central critical point $s=1$. The main result of this article is a formula
relating the $p$-adic iterated integrals introduced in [DLR] to the Kolyvagin
classes associated by Bertolini and Darmon to a system of Heegner points on
$E$.
|
This paper is the first paper of a series that will present the derivation of
the modal mineralogy of Mars (M3 project) at a global scale from the
near-infrared dataset acquired by the imaging spectrometer OMEGA (Observatoire
pour la Min\'eralogie, l'Eau, les Glaces et l'Activit\'e) on board ESA/Mars
Express. The objective is to create and provide a global 3-D image-cube of Mars
at 32 px/{\deg} covering most of the Martian surface. This product has several
advantages. First, it can be used to instantaneously extract atmospheric- and
aerosol-corrected near-infrared (NIR) spectra from any location on Mars.
Second, several new data maps can be built as discussed here. That includes new
global mineral distributions, quantitative mineral abundance distributions and
maps of Martian surface chemistry (wt % oxide) detailed in a companion paper
(Riu et al., submitted). Here we present the method to derive the global
hyperspectral cube from several hundred million spectra. Global maps of
some mafic minerals are then shown, and compared to previous works.
|
Learning from demonstration (LfD) is commonly considered to be a natural and
intuitive way to allow novice users to teach motor skills to robots. However,
it is important to acknowledge that the effectiveness of LfD is heavily
dependent on the quality of teaching, something that may not be assured with
novices. It remains an open question as to the most effective way of guiding
demonstrators to produce informative demonstrations beyond ad hoc advice for
specific teaching tasks. To this end, this paper investigates the use of
machine teaching to derive an index for determining the quality of
demonstrations and evaluates its use in guiding and training novices to become
better teachers. Experiments with a simple learner robot suggest that guidance
and training of teachers through the proposed approach can lead to up to 66.5%
decrease in error in the learnt skill.
|
Financial markets are difficult to predict due to their complex system
dynamics. Although there have been some recent studies that use machine
learning techniques for financial markets prediction, they do not offer
satisfactory performance on financial returns. We propose a novel
one-dimensional convolutional neural networks (CNN) model to predict financial
market movement. The customized one-dimensional convolutional layers scan
financial trading data through time, while different types of data, such as
prices and volume, share parameters (kernels) with each other. Our model
automatically extracts features instead of using traditional technical
indicators and thus can avoid biases caused by selection of technical
indicators and pre-defined coefficients in technical indicators. We evaluate
the performance of our prediction model with strictly backtesting on historical
trading data of six futures from January 2010 to October 2017. The experiment
results show that our CNN model can effectively extract more generalized and
informative features than traditional technical indicators, and achieves more
robust and profitable financial performance than previous machine learning
approaches.
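A minimal PyTorch sketch of the described model; the channel sizes, kernel
widths, the three-class output, and the layout of price/volume series as
input channels are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PriceVolumeCNN(nn.Module):
    """1D CNN that scans trading data through time; different data types
    (e.g., prices and volume) enter as channels and share kernels."""
    def __init__(self, in_channels=2, window=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Linear(32 * (window // 4), 3)  # e.g., up/flat/down

    def forward(self, x):                # x: (batch, channels, time)
        return self.head(self.features(x).flatten(1))

model = PriceVolumeCNN()
logits = model(torch.randn(8, 2, 64))    # 8 windows of 64 time steps
print(logits.shape)                       # torch.Size([8, 3])
```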
|
The natural world is long-tailed: rare classes are observed orders of
magnitude less frequently than common ones, leading to highly imbalanced data
where rare classes can have only handfuls of examples. Learning from few
examples is a known challenge for deep learning based classification
algorithms, and is the focus of the field of low-shot learning. One potential
approach to increase the training data for these rare classes is to augment the
limited real data with synthetic samples. This has been shown to help, but the
domain shift between real and synthetic hinders the approaches' efficacy when
tested on real data.
We explore the use of image-to-image translation methods to close the domain
gap between synthetic and real imagery for animal species classification in
data collected from camera traps: motion-activated static cameras used to
monitor wildlife. We use low-level feature alignment between source and target
domains to make synthetic data for a rare species generated using a graphics
engine more "realistic". Compared against a system augmented with unaligned
synthetic data, our experiments show a considerable decrease in classification
error rates on a rare species.
|
Efficient discovery of a speaker's emotional states in a multi-party
conversation is significant to design human-like conversational agents. During
a conversation, the cognitive state of a speaker often alters due to certain
past utterances, which may lead to a flip in their emotional state. Therefore,
discovering the reasons (triggers) behind the speaker's emotion-flip during a
conversation is essential to explain the emotion labels of individual
utterances. In this paper, along with addressing the task of emotion
recognition in conversations (ERC), we introduce a novel task - Emotion-Flip
Reasoning (EFR), that aims to identify past utterances which have triggered
one's emotional state to flip at a certain time. We propose a masked memory
network to address the former and a Transformer-based network for the latter
task. To this end, we consider MELD, a benchmark emotion recognition dataset in
multi-party conversations for the task of ERC, and augment it with new
ground-truth labels for EFR. An extensive comparison with five state-of-the-art
models suggests improved performances of our models for both tasks. We further
present anecdotal evidence and both qualitative and quantitative error analyses
to support the superiority of our models compared to the baselines.
|
Neural keyphrase generation models have recently attracted much interest due
to their ability to output absent keyphrases, that is, keyphrases that do not
appear in the source text. In this paper, we discuss the usefulness of absent
keyphrases from an Information Retrieval (IR) perspective, and show that the
commonly drawn distinction between present and absent keyphrases is not made
explicit enough. We introduce a finer-grained categorization scheme that sheds
more light on the impact of absent keyphrases on scientific document retrieval.
Under this scheme, we find that only a fraction (around 20%) of the words that
make up keyphrases actually serves as document expansion, but that this small
fraction of words is behind much of the gains observed in retrieval
effectiveness. We also discuss how the proposed scheme can offer a new angle to
evaluate the output of neural keyphrase generation models.
|
The radio emission in many pulsars shows sudden changes, usually within a
period, that cannot be related to the steady state processes within the inner
acceleration region (IAR) above the polar cap. These changes are often
quasi-periodic in nature, where regular transitions between two or more stable
emission states are seen. The durations of these states show a wide variety
ranging from several seconds to hours at a time. There are strong, small scale
magnetic field structures and huge temperature gradients present at the polar
cap surface. We have considered several processes that can cause temporal
modifications of the local magnetic field structure and strength at the surface
of the polar cap. Using different magnetic field strengths and scales, and also
assuming realistic scales of the temperature gradients, the evolutionary
timescales of different phenomena affecting the surface magnetic field was
estimated. We find that the Hall drift results in faster changes in comparison
to both Ohmic decay and thermoelectric effects. A mechanism based on the
Partially Screened Gap (PSG) model of the IAR has been proposed, where the Hall
and thermoelectric oscillations perturb the polar cap magnetic field to alter
the sparking process in the PSG. This is likely to affect the observed radio
emission resulting in the observed state changes.
|
Robots are used for collecting samples from natural environments to create
models of, for example, temperature or algae fields in the ocean. Adaptive
informative sampling is a proven technique for this kind of spatial field
modeling. This paper compares the performance of humans versus adaptive
informative sampling algorithms for selecting informative waypoints. The humans
and simulated robot are given the same information for selecting waypoints, and
both are evaluated on the accuracy of the resulting model. We developed a
graphical user interface for selecting waypoints and visualizing samples.
Eleven participants iteratively picked waypoints for twelve scenarios. Our
simulated robot used Gaussian Process regression with two entropy-based
optimization criteria to iteratively choose waypoints. Our results show that
the robot can on average perform better than the average human, and
approximately as well as the best human, when the model assumptions correspond
to the actual field. However, when the model assumptions do not correspond as
well to the characteristics of the field, both human and robot performance are
no better than random sampling.
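A minimal sketch of one entropy-based criterion described above: fit a
Gaussian Process to the samples collected so far and pick the candidate
waypoint with the largest predictive uncertainty; the kernel and candidate
grid are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_waypoint(X_sampled, y_sampled, candidates):
    """Choose the candidate location where the GP posterior is most
    uncertain (maximum predictive standard deviation)."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True)
    gp.fit(X_sampled, y_sampled)
    _, std = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(std)]

# Example: pick the next waypoint on a 2D field from a coarse grid.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(5, 2))       # waypoints sampled so far
y = np.sin(X[:, 0]) + np.cos(X[:, 1])     # field values at those points
gx, gy = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
print(next_waypoint(X, y, grid))
```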
|
We present a two-dimensional continuum model of tumor growth, which treats
the tissue as a composition of six distinct fluid phases; their dynamics are
governed by the equations of mass and momentum conservation. Our model divides
the cancer-cell phase into two sub-phases depending on their maturity state.
The same approach is also applied for the vasculature phase, which is divided
into young sprouts (products of angiogenesis) and fully formed, mature vessels.
The remaining two phases correspond to healthy cells and extracellular material
(ECM). Furthermore, the model accounts for nutrient chemical species, which
are transported within the tissue by diffusion or supplied by the vasculature
(blood vessels). The model is solved numerically with the finite element
method, and computations are performed with the commercial
software Comsol Multiphysics. The numerical simulations predict that mature
cancer cells are well separated from young cancer cells, which form a
protective shield for the growing tumor. We study the effect of different
mitosis and death rates for mature and young cancer cells on the tumor growth
rate, and predict accelerated rates when the mitosis rate of young cancer cells
is higher compared to mature cancer cells.
|
This paper presents the equivariant systems theory and observer design for
second order kinematic systems on matrix Lie groups. The state of a second
order kinematic system on a matrix Lie group is naturally posed on the tangent
bundle of the group with the inputs lying in the tangent of the tangent bundle
known as the double tangent bundle. We provide a simple parameterization of
both the tangent bundle state space and the input space (the fiber space of the
double tangent bundle) and then introduce a semi-direct product group and group
actions onto both the state and input spaces. We show that with the proposed
group actions the second order kinematics are equivariant. An equivariant lift
of the kinematics onto the symmetry group is defined and used to design a
nonlinear observer on the lifted state space using nonlinear constructive
design techniques. A simple hovercraft simulation verifies the performance of
our observer.
|
This paper investigates the problem of co-synthesis of edit function and
supervisor for opacity enforcement in the supervisory control of discrete-event
systems (DES), assuming the presence of an external (passive) intruder, where
the following goals need to be achieved: 1) the external intruder should never
infer the system secret, i.e., the system is opaque, and never be sure about
the existence of the edit function, i.e., the edit function remains covert; 2)
the controlled plant behaviors should satisfy some safety and nonblockingness
requirements, in the presence of the edit function. We focus on the class of
edit functions that satisfy the following properties: 1) the observation
capability of the edit function in general can be different from those of the
supervisor and the intruder; 2) the edit function can implement insertion,
deletion, and replacement operations; 3) the edit function performs bounded
edit operations, i.e., the length of each string output of the edit function is
upper bounded by a given constant. We propose an approach to solve this
co-synthesis problem by modeling it as a distributed supervisor synthesis
problem in the Ramadge-Wonham supervisory control framework. By taking the
special structure of this distributed supervisor synthesis problem into
consideration and to improve the possibility of finding a non-empty distributed
supervisor, we propose two novel synthesis heuristics that incrementally
synthesize the supervisor and the edit function. The effectiveness of our
approach is illustrated with an example on the enforcement of location
privacy.
|
In this work we shall study the implications of a subclass of $E$-models
cosmological attractors, namely of $a$-attractors, on hydrodynamically stable
slowly rotating neutron stars. Specifically, we shall present the Jordan frame
theory of the $a$-attractors, and by using a conformal transformation we shall
derive the Einstein frame theory. We discuss the inflationary context of
$a$-attractors in order to specify the allowed range of values for the free
parameters of the model based on the latest cosmic-microwave-background-based
Planck 2018 data. Accordingly, using the notation and physical units frequently
used in theoretical astrophysics contexts, we shall derive the
Tolman-Oppenheimer-Volkoff equations in the Einstein frame. Assuming a
piecewise polytropic equation of state, the lowest-density part of which is
chosen to be the WFF1, APR, or SLy EoS, we numerically solve the
Tolman-Oppenheimer-Volkoff equations using a double-shooting, Python-based
"LSODA" numerical code. The resulting picture depends on the value of the
parameter $a$ characterizing the $a$-attractors. As we show, for large values
of $a$, which do not produce a viable inflationary era, the $M-R$ graphs are
nearly identical to the general relativistic result, and the two are only
discriminated at large central density values. Also, for large $a$-values,
the WFF1 equation of state is excluded, due to the GW170817 constraints. In
addition, the small $a$ cases produce larger masses and radii compared to the
general relativistic case and are compatible with the GW170817 constraints on
the radii of neutron stars. Our results indicate deep, and not yet completely
understood, connections between non-minimal inflationary attractors and
neutron star phenomenology in scalar-tensor theory.
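For illustration, a minimal sketch of integrating the standard general-relativistic Tolman-Oppenheimer-Volkoff equations with SciPy's LSODA integrator, using a single hypothetical polytrope in geometrized units (G = c = 1); the paper's Einstein-frame equations, double-shooting setup, and piecewise-polytropic WFF1/APR/SLy EoS are not reproduced here:

```python
import numpy as np
from scipy.integrate import solve_ivp

K, Gamma = 100.0, 2.0                      # hypothetical polytrope P = K * eps^Gamma
eps = lambda P: (P / K) ** (1.0 / Gamma)   # invert the EoS for the energy density

def tov(r, y):
    P, m = y
    e = eps(max(P, 0.0))
    dP = -(e + P) * (m + 4 * np.pi * r**3 * P) / (r * (r - 2 * m))
    dm = 4 * np.pi * r**2 * e
    return [dP, dm]

surface = lambda r, y: y[0] - 1e-10        # stop when the pressure vanishes
surface.terminal = True
sol = solve_ivp(tov, [1e-6, 50.0], [1e-3, 0.0], method="LSODA",
                events=surface, rtol=1e-8)
print("R =", sol.t[-1], " M =", sol.y[1, -1])  # stellar radius and mass (code units)
```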
|
The observation of Majorana fermions as collective excitations in
condensed-matter systems is an ongoing quest, and several state-of-the-art
experiments have been performed in the last decade. As a potential avenue in
this direction, we simulate the high-harmonic spectrum of Kitaev's
superconducting chain model that hosts Majorana edge modes in its topological
phase, and find their fingerprints in the spectral profiles. It is well-known
that this system exhibits a topological--trivial superconducting phase
transition. We demonstrate that high-harmonic spectroscopy is sensitive to the
phase transition in the presence of open boundary conditions due to the
presence or absence of these edge modes. The population dynamics of the
Majorana edge modes differ from those of the bulk modes, which is the
underlying reason for the distinct harmonic profiles of the two phases. By
contrast, in the presence of periodic boundary conditions, where only bulk
modes exist, high-harmonic spectroscopy becomes insensitive to the phase
transition, with similar harmonic profiles in both phases.
|
Skyrmions are important in topological quantum field theory for being soliton
solutions of a nonlinear sigma model and in information technology for their
attractive applications. Skyrmions are believed to be circular, while the
stripy spin textures that appear in the vicinity of skyrmion crystals are
termed spiral, helical, and cycloidal spin orders, but not skyrmions. Here we
present convincing evidence showing that those stripy spin textures are
skyrmions, "siblings" of
circular skyrmions in skyrmion crystals and "cousins" of isolated circular
skyrmions. Specifically, isolated skyrmions are excitations when skyrmion
formation energy is positive. The skyrmion morphologies are various stripy
structures when the ground states of chiral magnetic films are skyrmions. The
density of skyrmion number determines the morphology of condensed skyrmion
states. At the extreme of one skyrmion in the whole sample, the skyrmion is a
ramified stripe. As the skyrmion number density increases, individual skyrmion
shapes gradually change from ramified stripes to rectangular stripes, and
eventually to disk-like objects. At a low skyrmion number density, the natural
width of stripes is proportional to the ratio between the exchange stiffness
constant and Dzyaloshinskii-Moriya interaction coefficient. At a high skyrmion
number density, skyrmion crystals are the preferred states. Our findings
reveal the nature and properties of stripy spin textures, and open a new
avenue for
manipulating skyrmions, especially condensed skyrmions such as skyrmion
crystals.
|
Theoretical understanding of evolutionary dynamics in spatially structured
populations often relies on non-spatial models. Biofilms are among such
populations where a more accurate understanding is of theoretical interest and
can reveal new solutions to existing challenges. Here, we studied how the
geometry of the environment affects the evolutionary dynamics of expanding
populations, using the Eden model. Our results show that fluctuations of
sub-populations during range expansion in 2D and 3D environments are not
Brownian. Furthermore, we found that the substrate's geometry interferes with
the evolutionary dynamics of populations that grow upon it. Inspired by these
findings, we propose a periodically wedged pattern on surfaces prone to develop
biofilms. On such patterned surfaces, natural selection becomes less
effective and beneficial mutants have a harder time establishing.
Additionally, this modification accelerates genetic drift and leads to less
diverse biofilms. Both effects are highly desirable when combating biofilms.
|
Transformer-based language model approaches to automated story generation
currently provide state-of-the-art results. However, they still suffer from
plot incoherence when generating narratives over time, and critically lack
basic commonsense reasoning. Furthermore, existing methods generally focus only
on single-character stories, or fail to track characters at all. To improve the
coherence of generated narratives and to expand the scope of character-centric
narrative generation, we introduce Commonsense-inference Augmented neural
StoryTelling (CAST), a framework for introducing commonsense reasoning into the
generation process while modeling the interaction between multiple characters.
We find that our CAST method produces significantly more coherent and on-topic
two-character stories, outperforming baselines in dimensions including plot
plausibility and staying on topic. We also show how the CAST method can be used
to further train language models that generate more coherent stories and reduce
computation cost.
|
Prolonged power outages debilitate the economy and threaten public health.
Existing research is generally limited in its scope to a single event, an
outage cause, or a region. Here, we provide one of the most comprehensive
analyses of U.S. power outages for 2002--2019. We categorized all outage data
collected under U.S. federal mandates into four outage causes and computed
industry-standard reliability metrics. Our spatiotemporal analysis reveals six
of the most resilient U.S. states since 2010, improvement of power resilience
against natural hazards in the south and northeast regions, and a
disproportionately large number of human attacks, relative to its population,
in the Western Electricity Coordinating Council region. Our regression analysis
identifies several statistically significant predictors and hypotheses for
power resilience. Furthermore, we propose a novel framework for analyzing
outage data using differential weighting and influential points to better
understand power resilience. We share curated data and code as Supplementary
Materials.
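As a concrete example of the industry-standard reliability metrics mentioned above, a minimal sketch of SAIDI and SAIFI (IEEE 1366) computed from event-level records; the outage events below are hypothetical, and the paper's differential weighting is not reproduced:

```python
def saidi_saifi(events, customers_served):
    """events: list of (customers_interrupted, duration_minutes) tuples."""
    saidi = sum(c * d for c, d in events) / customers_served  # minutes/customer
    saifi = sum(c for c, _ in events) / customers_served      # interruptions/customer
    return saidi, saifi

# Two hypothetical sustained outages in a territory serving 50,000 customers.
print(saidi_saifi([(1_000, 90), (250, 480)], customers_served=50_000))
```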
|
The cross-correlation sensitivity of two identical balanced photodiode
heterodyne receivers is characterized. Both balanced photodiodes receive the
same weak signal split up equally, a situation equivalent to an astronomical
spatial interferometer. A common local oscillator (LO) is also split up
equally, and its phase difference between the two receivers is stabilized.
Using semi-classical photon deletion theory, we show that the post-detection
laser shot noise contributions on the two receivers must be completely
uncorrelated in this case, where the signals pass three power splitters. We
measured the auto- and cross-correlation
outputs as a function of weak signal power (system noise temperature
measurement), and obtain a cross-correlation system noise temperature up to 20
times lower than for the auto-correlation system noise temperature of each
receiver separately. This is supported by Allan plot measurements showing
cross-correlation standard deviations 30 times lower than in auto-correlation.
Careful calibration of the source power shows that the auto-correlation
(regular) noise temperature of the single balanced receivers is already very
near to the quantum limit as expected, which suggests a cross-correlation
system noise temperature below the quantum limit. If validated further, this
experimentally clear finding will not only be relevant for astronomical
instrumentation but also for other fields like telecommunications and medical
imaging.
|
We present a systematic analysis of our ability to tune chiral
Dzyaloshinskii-Moriya Interactions (DMI) in compensated ferrimagnetic
Pt/GdCo/Pt1-xWx trilayers by cap layer composition. Using first principles
calculations, we show that the DMI increases rapidly for only ~ 10% W and
saturates thereafter, in agreement with experiments. The calculated DMI shows a
spread in values around the experimental mean, depending on the atomic
configuration of the cap layer interface. The saturation is attributed to the
vanishing of spin orbit coupling energy at the cap layer and the simultaneous
constancy at the bottom interface. Additionally, we predict the DMI in
Pt/GdCo/X (X=Ta, W, Ir) and find that W in the cap layer favors a higher DMI
than Ta and Ir, which can be attributed to the difference in d-band alignment
around the Fermi level. Our results open up exciting combinatorial
possibilities for controlling the DMI in ferrimagnets towards nucleating and
manipulating ultrasmall high-speed skyrmions.
|
Many speech processing methods based on deep learning require an automatic
and differentiable audio metric for the loss function. The DPAM approach of
Manocha et al. learns a full-reference metric trained directly on human
judgments, and thus correlates well with human perception. However, it requires
a large number of human annotations and does not generalize well outside the
range of perturbations on which it was trained. This paper introduces CDPAM, a
metric that builds on and advances DPAM. The primary improvement is to combine
contrastive learning and multi-dimensional representations to build robust
models from limited data. In addition, we collect human judgments on triplet
comparisons to improve generalization to a broader range of audio
perturbations. CDPAM correlates well with human responses across nine varied
datasets. We also show that adding this metric to existing speech synthesis and
enhancement methods yields significant improvement, as measured by objective
and subjective tests.
|
In upcoming years, the number of Internet-of-Things (IoT) devices is expected
to surge up to tens of billions of physical objects. However, while the IoT is
often presented as a promising solution to tackle environmental challenges, the
direct environmental impacts generated over the life cycle of the physical
devices are usually overlooked. It is implicitly assumed that their
environmental burden is negligible compared to the positive impacts they can
generate. In this paper, we present a parametric framework based on hardware
profiles to evaluate the cradle-to-gate carbon footprint of IoT edge devices.
We exploit our framework in three ways. First, we apply it on four use cases to
evaluate their respective production carbon footprint. Then, we show that the
heterogeneity inherent to IoT edge devices must be considered as the production
carbon footprint between simple and complex devices can vary by a factor of
more than 150x. Finally, we estimate the absolute carbon footprint induced by
the worldwide production of IoT edge devices through a macroscopic analysis
over a 10-year period. Results range from 22 to 562 MtCO2-eq/year in 2027
depending on the deployment scenarios. However, the truncation error
acknowledged for bottom-up LCA approaches usually leads to an undershoot of
the environmental impacts. We compared the results of our use cases with the
few reports available from Google and Apple, which suggests that our estimates
should be revised upwards by a factor of around 2x to compensate for the
truncation error. Worst-case scenarios in 2027 would therefore reach more than
1000 MtCO2-eq/year. This stresses the need to consider environmental
constraints when designing and deploying IoT edge devices.
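A minimal sketch of what a parametric, hardware-profile-based footprint estimate can look like; every emission factor below is a hypothetical placeholder, not a value from the paper:

```python
# Hypothetical cradle-to-gate emission factors (kgCO2-eq per unit); the
# paper's calibrated per-component values are not reproduced here.
HYPOTHETICAL_FACTORS = {
    "ic_area_mm2": 0.05,    # per mm^2 of integrated-circuit die
    "pcb_area_cm2": 0.02,   # per cm^2 of printed circuit board
    "battery_wh": 0.10,     # per Wh of battery capacity
    "casing_g": 0.005,      # per gram of casing
}

def production_footprint(profile: dict) -> float:
    """Sum factor * quantity over the device's hardware profile."""
    return sum(HYPOTHETICAL_FACTORS[k] * v for k, v in profile.items())

sensor = {"ic_area_mm2": 20, "pcb_area_cm2": 10, "battery_wh": 2, "casing_g": 30}
print(f"{production_footprint(sensor):.2f} kgCO2-eq per device")
```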
|
Visual search, recommendation, and contrastive similarity learning power a
wide breadth of technologies that impact billions of users across the world.
The best-performing approaches are often complex and difficult to interpret,
and there are several competing techniques one can use to explain a search
engine's behavior. We show that the theory of fair credit assignment provides a
unique axiomatic solution that generalizes several existing recommendation- and
metric-explainability techniques in the literature. Using this formalism, we
are able to determine in what regimes existing approaches fall short of
fairness and provide variations that are fair in more situations and handle
counterfactual information. More specifically, we show that existing approaches
implicitly approximate second-order Shapley-Taylor indices and use this
perspective to extend CAM, GradCAM, LIME, SHAP, SBSM, and other methods to
search engines. These extensions can extract pairwise correspondences between
images from trained black-box models. We also introduce a fast kernel-based
method for estimating Shapley-Taylor indices that requires orders of magnitude
fewer function evaluations to converge. Finally, we evaluate these methods and
show that these game-theoretic measures yield more consistent explanations for
image similarity architectures.
|
Recently discovered simple quantitative relations, known as bacterial growth
laws, hint at the existence of simple underlying principles at the heart of
bacterial growth. In this work, we provide a unifying picture of how these
known relations, as well as new relations that we derive, stem from a
universal autocatalytic network common to all bacteria, facilitating balanced
exponential growth of individual cells. We show that the core of the cellular
autocatalytic network is the transcription--translation machinery, itself an
autocatalytic network comprising several coupled autocatalytic
cycles, including the ribosome, RNA polymerase, and tRNA charging cycles. We
derive two types of growth laws per autocatalytic cycle, one relating growth
rate to the relative fraction of the catalyst and its catalysis rate, and the
other relating growth rate to all the time scales in the cycle. The structure
of the autocatalytic network generates numerous regimes in state space,
determined by the limiting components, while the number of growth laws can be
much smaller. We also derive a growth law that accounts for the RNA polymerase
autocatalytic cycle, which we use to explain how growth rate depends on the
inducible expression of the rpoB and rpoC genes, which code for the RpoB and
RpoC protein subunits of RNA polymerase, and how the concentration of
rifampicin,
which targets RNA polymerase, affects growth rate without changing the
RNA-to-protein ratio. We derive growth laws for tRNA synthesis and charging,
and predict how growth rate depends on temperature, perturbation to ribosome
assembly, and membrane synthesis.
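A schematic version of the first type of growth law (an illustrative sketch, not the paper's derivation): if a catalyst makes up a fraction $\phi$ of total biomass $M$ and produces new biomass at specific rate $k$, balanced exponential growth gives

```latex
% Illustrative sketch: catalyst fraction \phi, catalysis rate k.
\frac{dM}{dt} = k\,\phi\,M
\quad\Longrightarrow\quad
\lambda = k\,\phi .
```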
|
Image classification is a common step in image recognition for machine
learning in overhead applications. When applying popular model architectures
like MobileNetV2, known vulnerabilities expose the model to adversarial
attacks that either mislabel a known class or alter the box location. This
work proposes
an automated approach to defend these models. We evaluate the use of
multi-spectral image arrays and ensemble learners to combat adversarial
attacks. The original contribution demonstrates the attack, proposes a remedy,
and automates some key outcomes for protecting the model's predictions against
adversaries. In rough analogy to defending cyber-networks, we combine
techniques from both offensive ("red team") and defensive ("blue team")
approaches, thus generating a hybrid protective outcome ("green team"). For
machine learning, we demonstrate these methods with 3-color channels plus
infrared for vehicles. The outcome uncovers vulnerabilities and corrects them
with supplemental data inputs that are commonly available in overhead imaging.
|
While the particle-in-cell (PIC) method is quite mature, verification and
validation of both newly developed methods and individual codes has largely
focused on an idiosyncratic choice of a few test cases. Many of these test
cases involve either one- or two-dimensional simulations. This is either due to
availability of (quasi) analytic solutions or historical reasons. Additionally,
tests often focus on investigation of particular physics problems, such as
particle emission or collisions, and do not necessarily study the combined
impact of the suite of algorithms necessary for a full-featured PIC code. As
three-dimensional (3D) codes become the norm, there is a lack of benchmark
tests that can establish the validity of these codes; existing papers either
do not delve into the details of the numerical experiment or do not provide
measurable numerical metrics (such as noise) that are outcomes of the
simulation. This paper seeks to provide several test cases that can be used
for validation and benchmarking of particle-in-cell codes in 3D. We focus on
examples that are collisionless and can be run with a reasonable amount of
computational power. Four test cases are presented in significant detail:
basic particle motion, beam expansion, adiabatic expansion of a plasma, and
the two-stream instability. All presented cases are compared either against
existing analytical data or other codes. We anticipate that these cases should
help fill the void of benchmarking and validation problems and aid the
development of new particle-in-cell codes.
|
The unscented transform uses a weighted set of samples called sigma points to
propagate the means and covariances of nonlinear transformations of random
variables. However, unscented transforms developed using either the Gaussian
assumption or a minimum set of sigma points typically fall short when the
random variable is not Gaussian distributed and the nonlinearities are
substantial. In this paper, we develop the generalized unscented transform
(GenUT), which uses 2n+1 sigma points to accurately capture up to the diagonal
components of the skewness and kurtosis tensors of most probability
distributions. Constraints can be analytically enforced on the sigma points
while guaranteeing at least second-order accuracy. The GenUT uses the same
number of sigma points as the original unscented transform while also being
applicable to non-Gaussian distributions, including the assimilation of
observations in the modeling of infectious diseases such as the coronavirus
(SARS-CoV-2) that causes COVID-19.
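For contrast, a minimal sketch of the classical 2n+1-point unscented transform with Gaussian-tuned weights; the GenUT extension, which additionally matches the diagonal skewness and kurtosis components, is not reproduced here:

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate (mean, cov) through f with 2n+1 sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)        # matrix square root
    pts = np.vstack([mean, mean + L.T, mean - L.T])  # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    Y = np.array([f(p) for p in pts])
    m = w @ Y                                        # transformed mean
    P = (w[:, None] * (Y - m)).T @ (Y - m)           # transformed covariance
    return m, P

m, P = unscented_transform(lambda x: np.array([x[0] ** 2, x[1]]),
                           np.zeros(2), np.eye(2))
print(m)  # ~[1, 0]: E[x^2] = 1 recovered exactly for a standard Gaussian
```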
|
The hard-sphere model is one of the most extensively studied models in
statistical physics. It describes the continuous distribution of spherical
particles, governed by hard-core interactions. An important quantity of this
model is the normalizing factor of this distribution, called the partition
function. We propose a Markov chain Monte Carlo algorithm for approximating the
grand-canonical partition function of the hard-sphere model in $d$ dimensions.
Up to a fugacity of $\lambda < \text{e}/2^d$, the runtime of our algorithm is
polynomial in the volume of the system. This covers the entire known
real-valued regime for the uniqueness of the Gibbs measure.
Key to our approach is to define a discretization that closely approximates
the partition function of the continuous model. This results in a discrete
hard-core instance that is exponential in the size of the initial hard-sphere
model. Our approximation bound follows directly from the correlation decay
threshold of an infinite regular tree with degree equal to the maximum degree
of our discretization. To cope with the exponential blow-up of the discrete
instance we use clique dynamics, a Markov chain that was recently introduced in
the setting of abstract polymer models. We prove rapid mixing of clique
dynamics up to the tree threshold of the univariate hard-core model. This is
achieved by relating clique dynamics to block dynamics and adapting the
spectral expansion method, which was recently used to bound the mixing time of
Glauber dynamics within the same parameter regime.
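A minimal sketch of single-site Glauber dynamics for a discrete hard-core model on a small toroidal grid at fugacity lambda; the paper's clique dynamics runs on an exponentially larger discretization, so this only illustrates the underlying Markov chain idea:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, steps = 10, 0.5, 50_000
occ = np.zeros((n, n), dtype=bool)   # independent-set indicator on a torus

def neighbors_empty(i, j):
    return not any(occ[(i + di) % n, (j + dj) % n]
                   for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)])

for _ in range(steps):
    i, j = rng.integers(n, size=2)
    occ[i, j] = False                # vacate the site, then resample it
    if neighbors_empty(i, j) and rng.random() < lam / (1 + lam):
        occ[i, j] = True

print("occupation density ~", occ.mean())
```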
|
Novel emergent phenomena are expected to occur under conditions exceeding the
QED critical electric field, where the vacuum becomes unstable to
electron-positron pair production. The required intensity to reach this regime,
$\sim10^{29}\,\mathrm{Wcm^{-2}}$, cannot be achieved even with the most intense
lasers now being planned/constructed without a sizeable Lorentz boost provided
by interactions with ultrarelativistic particles. Seeded laser-laser collisions
may access this strong-field QED regime at laser intensities as low as
$\sim10^{24}\,\mathrm{Wcm^{-2}}$. Counterpropagating e-beam--laser interactions
exceed the QED critical field at still lower intensities
($\sim10^{20}\,\mathrm{Wcm^{-2}}$ at $\sim10\,\mathrm{GeV}$). Novel emergent
phenomena are predicted to occur in the "QED plasma regime", where strong-field
quantum and collective plasma effects play off one another. Here the electron
beam density becomes a decisive factor. Thus, the challenge is not just to
exceed the QED critical field, but to do so with high quality, approaching
solid-density electron beams. Even though laser wakefield accelerators (LWFA)
represent a very promising research field, conventional accelerators still
provide orders of magnitude higher charge densities at energies
$\gtrsim10\,\mathrm{GeV}$. Co-location of extremely dense and highly energetic
electron beams with a multi-petawatt laser system would therefore enable
seminal research opportunities in high-field physics and laboratory
astrophysics. This white paper elucidates the potential scientific impact of
multi-beam capabilities that combine a multi-PW optical laser,
high-energy/density electron beam, and high-intensity x rays and outlines how
to achieve such capabilities by co-locating a 3-10 PW laser with a
state-of-the-art linear accelerator.
|
The key feature of nonlocal kinetic energy functionals is their ability to
reduce to the Thomas-Fermi functional in regions of high density and to the
von Weizs\"acker functional in regions of low density/high density gradient.
This behavior is crucial when these functionals are employed in subsystem DFT
simulations to approximate the nonadditive kinetic energy. We propose a GGA
nonadditive kinetic energy functional which mimics the good behavior of
nonlocal functionals while retaining the computational complexity of typical
semilocal functionals. The new functional reproduces Kohn-Sham DFT and
benchmark CCSD(T) interaction energies of weakly interacting dimers in the
S22-5 and S66 test sets with a mean absolute deviation well below 1 kcal/mol.
|
Federated Bayesian learning offers a principled framework for the definition
of collaborative training algorithms that are able to quantify epistemic
uncertainty and to produce trustworthy decisions. Upon the completion of
collaborative training, an agent may decide to exercise her legal "right to be
forgotten", which calls for her contribution to the jointly trained model to be
deleted and discarded. This paper studies federated learning and unlearning in
a decentralized network within a Bayesian framework. It specifically develops
federated variational inference (VI) solutions based on the decentralized
solution of local free energy minimization problems within exponential-family
models and on local gossip-driven communication. The proposed protocols are
demonstrated to yield efficient unlearning mechanisms.
|
We deploy and demonstrate the CoinTossX low-latency, high-throughput,
open-source matching engine with orders sent using the Julia and Python
languages. We show how this can be deployed for small-scale local desktop
testing and discuss a larger-scale, but still locally hosted, setup with
multiple traded instruments managed concurrently by multiple clients. We then
demonstrate a cloud based deployment using Microsoft Azure, with large-scale
industrial and simulation research use cases in mind. The system is exposed and
interacted with via sockets using UDP SBE message protocols and can be
monitored through a simple web browser interface over HTTP. We give examples
showing how orders can be sent to the system and market data feeds monitored
using the Julia and Python languages. The system is developed in Java with
orders submitted as binary encodings (SBE) via UDP protocols using the Aeron
Media Driver as the low-latency, high throughput message transport. The system
separates the order-generation and simulation environments, e.g. agent-based
model simulation, from the matching of orders, data-feeds and various
modularised components of the order-book system. This ensures a more natural
and realistic asynchronicity between events generating orders, and the events
associated with order-book dynamics and market data-feeds. We promote the use
of Julia as the preferred order submission and simulation environment.
|
Training GANs on videos is even more sophisticated than on images because
videos have a distinguished dimension: time. While recent methods designed a
dedicated architecture considering time, generated videos are still far from
indistinguishable from real videos. In this paper, we introduce the ArrowGAN
framework, in which the discriminator learns to classify the arrow of time as
an auxiliary task and the generator tries to synthesize forward-running
videos. We argue that the auxiliary task should be chosen carefully with
regard to the target domain. In addition, we explore categorical ArrowGAN,
building recent techniques from conditional image generation on top of the
ArrowGAN framework, and achieve state-of-the-art performance on categorical
video generation. Our extensive experiments validate the effectiveness of the
arrow of time as a self-supervisory
task, and demonstrate that all our components of categorical ArrowGAN lead to
improvements in video inception score and Fr\'echet video distance on
three datasets: Weizmann, UCFsports, and UCF-101.
|
Understanding the 3D world from 2D projected natural images is a fundamental
challenge in computer vision and graphics. Recently, an unsupervised learning
approach has garnered considerable attention owing to its advantages in data
collection. However, to mitigate training limitations, typical methods need to
impose assumptions for viewpoint distribution (e.g., a dataset containing
various viewpoint images) or object shape (e.g., symmetric objects). These
assumptions often restrict applications; for instance, the application to
non-rigid objects or images captured from similar viewpoints (e.g., flower or
bird images) remains a challenge. To complement these approaches, we propose
aperture rendering generative adversarial networks (AR-GANs), which equip
aperture rendering on top of GANs, and adopt focus cues to learn the depth and
depth-of-field (DoF) effect of unlabeled natural images. To address the
ambiguities triggered by the unsupervised setting (i.e., ambiguities between
smooth texture and out-of-focus blurs, and between foreground and background
blurs), we develop DoF mixture learning, which enables the generator to learn
the real image distribution while generating diverse DoF images. In addition,
we devise a center focus prior to guide the learning direction. In the
experiments, we
demonstrate the effectiveness of AR-GANs in various datasets, such as flower,
bird, and face images, demonstrate their portability by incorporating them into
other 3D representation learning GANs, and validate their applicability in
shallow DoF rendering.
|
Near-field imaging experiments exist both in optics and microwaves with often
different methods and theoretical supports. For millimeter waves or THz waves,
techniques from both fields can be merged to identify materials at the micron
scale on the surface or in near-surface volumes. The principle of such
near-field vector imaging at the frequency of 60 GHz is discussed in detail
here. We develop techniques for extracting vector voltages and methods for
extracting the normalized near-field vector reflection on the sample. In
particular, the subharmonic IQ mixer imbalance, which produces corrupted
outputs due to either amplitude or phase differences, must be taken into
account and compensated for to avoid systematic errors. We provide a method to
fully characterize these imperfections and to isolate the sole contribution of
the near-field interaction between the probe and the sample. The effects of
the mechanical modulation waveform and harmonic rank used for signal
acquisition are also discussed.
|
The characteristic electron densities, temperatures, and thermal
distributions of 1MK active region loops are now fairly well established, but
their coronal magnetic field strengths remain undetermined. Here we present
measurements from a sample of coronal loops observed by the Extreme-ultraviolet
Imaging Spectrometer (EIS) on Hinode. We use a recently developed diagnostic
technique that involves atomic radiation modeling of the contribution of a
magnetically induced transition (MIT) to the Fe X 257.262A spectral line
intensity. We find coronal magnetic field strengths in the range of 60--150 G.
We discuss some aspects of these new results in the context of previous
measurements using different spectropolarimetric techniques, and their
influence on the derived Alfv\'{e}n speeds and plasma $\beta$ in coronal loops.
|
In this paper, we study the existence of traveling waves for fourth-order
Schr\"odinger equations with mixed dispersion, that is, solutions to
$$\Delta^2 u + \beta \Delta u + i V \cdot \nabla u + \alpha u = |u|^{p-2} u
\quad \text{in } \mathbb{R}^N,\ N \geq 2.$$ We consider this equation in the
Helmholtz regime, when the Fourier
symbol $P$ of our operator is strictly negative at some point. Under suitable
assumptions, we prove the existence of solutions using the dual method of
Evequoz and Weth, provided that $p\in (p_1, 2N/(N-4)_+)$. The real number
$p_1$ depends on the number of principal curvatures of $M$ staying bounded
away from $0$, where $M$ is the hypersurface defined by the roots of $P$. We
also obtain estimates on the Green function of our operator and an $L^p - L^q$
resolvent estimate, which can be of independent interest and applied to other
operators.
|
Polyphenols are natural molecules of crucial importance in many applications,
of which tannic acid (TA) is one of the most abundant and established. Most
high-value applications require precise control of TA interactions with the
system of interest. However, the molecular structure of TA is still not
understood at the atomic level, on which all electronic and reactivity
properties depend. Here, we combine an enhanced sampling global optimization
method with density functional theory (DFT)-based calculations to explore the
conformational space of TA assisted by unsupervised machine learning
visualization, and then investigate its lowest energy conformers. We study the
external environment's effect on the TA structure and properties. We find that
vacuum favors compact structures by stabilizing peripheral atoms' weak
interactions, while in water, the molecule adopts more open conformations. The
frontier molecular orbitals of the conformers with lowest harmonic vibrational
free energy have a HOMO-LUMO energy gap of 2.21 (3.27) eV, increasing to 2.82
(3.88) eV in water, at the DFT generalized gradient approximation (and hybrid)
level of theory. Structural differences also change the distribution of
potential reactive sites. We establish the fundamental importance of accurate
structural consideration in determining TA and related polyphenols interactions
in relevant technological applications.
|
In this paper we state two quantitative Sylvester-Gallai results for high
degree curves. Moreover, we give two constructions which show that these results
are not trivial.
|
The main goal of this paper is to develop MRA theory along with wavelet theory
in L2(Qp). Generalized scaling sets are important in wavelet theory because
they determine multiwavelet sets. The theory of scaling sets and generalized
scaling sets is available on R and on many other local fields of positive
characteristic, but not on Qp. This article discusses some necessary
conditions for scaling sets and characterizes generalized scaling sets with
examples.
|
Lattice QCD calculations of scattering phaseshifts and resonance parameters
in the two-body sector are becoming precision studies. Early calculations
employed L\"uscher's formula for extracting these quantities at lowest order.
As the calculations become more ambitious, higher-order relations are required.
In this study we present a way to validate the higher-order quantization
conditions. This is an important step given the involved derivations of these
formulae. We derive and validate quantization conditions up to $\ell=5$ partial
waves in both cubic and elongated geometries, and for states with zero and
non-zero total momentum. For all 45 quantization conditions we considered (22
in the cubic box, 23 in the elongated box) we find perfect agreement.
|
When analysing multiple time series that may be subject to changepoints, it
is sometimes possible to specify a priori, by means of a graph G, which pairs
of time series are likely to be impacted by simultaneous changepoints. This
article proposes a novel Bayesian changepoint model for multiple time series
that borrows strength across clusters of connected time series in G to detect
weak signals for synchronous changepoints. The graphical changepoint model is
further extended to allow dependence between nearby but not necessarily
synchronous changepoints across neighbour time series in G. A novel reversible
jump MCMC algorithm making use of auxiliary variables is proposed to sample
from the graphical changepoint model. The merit of the proposed model is
demonstrated via a changepoint analysis of real network authentication data
from Los Alamos National Laboratory (LANL), with some success at detecting weak
signals for network intrusions across users that are linked by network
connectivity, whilst limiting the number of false alerts.
|
We use Volterra-Hamilton systems theory and their associated cost functional
to study the population dynamics and productive processes of coral reefs in
recovery from bleaching and show that the cost of production remains the same
after the process. The KCC-theory geometrical invariants are determined for the
model proposed to describe the renewed symbiotic interaction between coral and
algae.
|
In this work we consider a multidimensional KdV type equation, the
Zakharov-Kuznetsov (ZK) equation. We derive the 3-wave kinetic equation from
both the stochastic ZK equation and the deterministic ZK equation with random
initial condition. The equation is given on a hypercubic lattice of size $L$.
In the case of the stochastic ZK equation, we show that the two point
correlation function can be asymptotically expressed as the solution of the
3-wave kinetic equation at the kinetic limit under very general assumptions, in
which the initial condition is out of equilibrium and the size $L$ of the
domain is fixed. In the case of the deterministic ZK equation with random
initial condition, the kinetic equation can also be derived at the kinetic
limit, but under more restrictive assumptions.
|
The commercialization of Virtual Reality (VR) headsets has made immersive and
360-degree video streaming the subject of intense interest in the industry and
research communities. While the basic principles of video streaming are the
same, immersive video presents a set of specific challenges that need to be
addressed. In this survey, we present the latest developments in the relevant
literature on four of the most important ones: (i) omnidirectional video coding
and compression, (ii) subjective and objective Quality of Experience (QoE) and
the factors that can affect it, (iii) saliency measurement and Field of View
(FoV) prediction, and (iv) the adaptive streaming of immersive 360-degree
videos. The final objective of the survey is to provide an overview of the
research on all the elements of an immersive video streaming system, giving the
reader an understanding of their interplay and performance.
|
We introduce twisted arrow categories of operads and of algebras over
operads. Up to equivalence of categories, the simplex category $\Delta$,
Segal's category $\Gamma$, Connes' cyclic category $\Lambda$, Moerdijk--Weiss
dendroidal category $\Omega$, and categories similar to graphical categories of
Hackney--Robertson--Yau are twisted arrow categories of symmetric or cyclic
operads. Twisted arrow categories of operads admit Segal presheaves and 2-Segal
presheaves, or decomposition spaces. The twisted arrow category of an operad
$P$ is the $(\infty, 1)$-localization of the corresponding category $\Omega/P$
by the boundary-preserving morphisms. Under mild assumptions, twisted arrow
categories
of operads, and closely related universal enveloping categories, are
generalized Reedy. We also introduce twisted arrow operads, which are related
to the Baez--Dolan plus construction.
|
In this work we explore the new catalog of galactic open clusters that became
available recently, containing 1750 clusters that have been re-analysed using
the Gaia DR2 catalog to determine the stellar memberships. We used the young
open clusters as tracers of spiral arms and determined the spiral pattern
rotation speed of the Galaxy and the corotation radius, the strongest Galactic
resonance. The sample of open clusters used here increases the last one from
Dias et al. (2019) used in the previous determination of the pattern speed by
dozens objects. In addition, the distances and ages values are better
determined, using improvements to isochrone fitting and including an updated
extinction polynomial for the Gaia DR2 photometric band-passes, and the
Galactic abundance gradient as a prior for metallicity. In addition to the
better age determinations, the catalog contains better positions in the
Galactic plane and better proper motions. This allow us to discuss not only the
present space distribution of the clusters, but also the space distribution of
the clusters's birthplaces, obtained by integration of the orbits for a time
equal to their age. The value of the rotation velocity of the arms ($28.5 \pm
1.0$ km s$^{-1}$ kpc$^{-1}$) implies that the corotation radius ($R_c$) is
close to the solar Galactic orbit ($R_c/R_0 = 1.01\pm0.08$), which is supported
by other observational evidence discussed in this text. A simulation is
presented, illustrating the motion of the clusters in the reference frame of
corotation. We also present general statistics of the catalog of clusters, like
spatial distribution, distribution relative to height from the Galactic plane,
and distribution of ages and metallicity. An important feature of the space
distribution, the corotation gap in the gas distribution and its consequences
for the young clusters, is discussed.
|
We numerically investigate the dynamical properties of $\kappa$-type
molecular conductors in their antiferromagnetic Mott insulating state. By
treating the extended Hubbard model on the two-dimensional $\kappa$-type
lattice within the Lanczos exact diagonalization method, we calculate the
one-particle spectral function $A(\boldsymbol{k}, \omega)$ and the optical
absorption spectra taking advantage of twisted boundary conditions. We find
spin splitting in $A(\boldsymbol{k}, \omega)$ predicted by a previous
Hartree-Fock study [M. Naka et al., Nat. Commun. 10, 4305 (2019)]; namely,
their up- and down-spin components become different in the general
$\boldsymbol{k}$-points of the Brillouin zone, even without the spin-orbit
coupling. Furthermore, we demonstrate how the optical properties near the Mott
gap vary with the presence or absence of the antiferromagnetic order, tuned by
a small staggered magnetic field.
|
The leap eccentric connectivity index of $G$ is defined as
$$L\xi^{C}(G)=\sum_{v\in V(G)}d_{2}(v|G)e(v|G)$$ where $d_{2}(v|G)$ is the
second degree of the vertex $v$ and $e(v|G)$ is the eccentricity of the vertex
$v$ in $G$. In this paper, we first give a counterexample to the claim of
Yarahmadi [\cite{yar14}, Theorem 3.1] that if $G$ is a graph and $S(G)$ is its
subdivision graph, then $e(v|S(G))=2e(v|G)$ for each vertex $v\in V(G)$. We
then describe upper and lower bounds on the leap eccentric connectivity index
of four graphs based on subdivision edges, give expressions for the leap
eccentric connectivity index of the join graph based on subdivision, and
finally give bounds on the leap eccentric connectivity index of four variants
of the corona graph.
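A direct implementation of the definition above, assuming networkx; d2(v) counts the vertices at distance exactly 2 from v and e(v) is the eccentricity:

```python
import networkx as nx

def leap_ecc_connectivity(G):
    ecc = nx.eccentricity(G)
    total = 0
    for v in G:
        dist = nx.single_source_shortest_path_length(G, v)
        d2 = sum(1 for d in dist.values() if d == 2)  # second degree of v
        total += d2 * ecc[v]
    return total

# Each vertex of C6 has two vertices at distance 2 and eccentricity 3: 6*2*3 = 36.
print(leap_ecc_connectivity(nx.cycle_graph(6)))
```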
|
Radially self-accelerating light exhibits an intensity pattern that describes
a spiraling trajectory around the optical axis as the beam propagates. In this
article, we show in simulation and experiment how such beams can be used to
perform a high-accuracy distance measurement with respect to a reference using
simple off-axis intensity detection. We demonstrate that generating beams whose
intensity pattern simultaneously spirals with fast and slow rotation components
enables a distance measurement with high accuracy over a broad range, using the
high and low rotation frequency, respectively. In our experiment, we achieve an
accuracy of around 2~$\mu$m over a longitudinal range of more than 2~mm using a
single beam and only two quadrant detectors. As our method relies on
single-beam interference and only requires a static generation and simple
intensity measurements, it is intrinsically stable and might find applications
in high-speed measurements of longitudinal position.
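A minimal sketch of the two-frequency idea: the slow rotation gives a coarse but unambiguous position and the fast rotation refines it, exactly as in multi-wavelength phase unwrapping. The rotation periods are illustrative placeholders, not the experimental values:

```python
import numpy as np

L_slow, L_fast = 2.5e-3, 50e-6   # rotation periods in meters (placeholders)

def unwrap_distance(theta_slow, theta_fast):
    z_coarse = theta_slow / (2 * np.pi) * L_slow   # coarse, unambiguous
    frac = theta_fast / (2 * np.pi) * L_fast       # fine, ambiguous
    n = round((z_coarse - frac) / L_fast)          # resolve the ambiguity
    return n * L_fast + frac

z_true = 1.234e-3                 # hypothetical longitudinal position
th_s = 2 * np.pi * (z_true % L_slow) / L_slow
th_f = 2 * np.pi * (z_true % L_fast) / L_fast
print(unwrap_distance(th_s, th_f))   # ~1.234e-3 m
```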
|
The parallel strong-scaling of Krylov iterative methods is largely determined
by the number of global reductions required at each iteration. The GMRES and
Krylov-Schur algorithms employ the Arnoldi algorithm for nonsymmetric matrices.
The underlying orthogonalization scheme is left-looking and processes one
column at a time. Thus, at least one global reduction is required per
iteration. The traditional algorithm for generating the orthogonal Krylov basis
vectors for the Krylov-Schur algorithm is classical Gram Schmidt applied twice
with reorthogonalization (CGS2), requiring three global reductions per step. A
new variant of CGS2 that requires only one reduction per iteration is applied
to the Arnoldi-QR iteration. Strong-scaling results are presented for finding
eigenvalue-pairs of nonsymmetric matrices. A preliminary attempt to derive a
similar algorithm (one reduction per Arnoldi iteration with a robust
orthogonalization scheme) was presented by Hernandez et al. (2007). Unlike our
approach, their method is not forward stable for eigenvalues.
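For reference, a minimal sketch of one Arnoldi step with classical Gram-Schmidt applied twice (CGS2); each Q.T @ w product and the final norm correspond to the three global reductions per step that the low-synchronization variant reduces to one:

```python
import numpy as np

def arnoldi_step_cgs2(A, Q):
    """Q: n x j orthonormal basis; returns the next basis vector and H column."""
    w = A @ Q[:, -1]
    h = Q.T @ w                  # first projection    (global reduction 1)
    w = w - Q @ h
    c = Q.T @ w                  # reorthogonalization (global reduction 2)
    w = w - Q @ c
    h = h + c
    beta = np.linalg.norm(w)     # normalization       (global reduction 3)
    return w / beta, np.append(h, beta)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
q = rng.standard_normal(50)
Q = (q / np.linalg.norm(q))[:, None]
q1, h = arnoldi_step_cgs2(A, Q)
print(abs(Q[:, 0] @ q1))         # ~1e-16: orthogonal to the previous basis
```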
|
Recently, the magnetic topological insulator MnBi$_2$Te$_4$ emerged as a
competitive platform to realize quantum anomalous Hall (QAH) states. We report
a Berry-curvature splitting mechanism to realize the QAH effect in the
disordered magnetic TI multilayers when switching from an antiferromagnetic
order to a ferromagnetic order. We reveal that the splitting of spin-resolved
Berry curvature, originating from the separation of the mobility edge during
the magnetic switching, can give rise to a QAH insulator even \emph{without}
closing the band gap. We present a global phase diagram, and also provide a
phenomenological picture to elucidate the Berry curvature splitting mechanism
by the evolution of topological charges. Finally, we predict that the Berry
curvature splitting mechanism will lead to a reentrant QAH effect, which can be
detected by tuning gate voltage. Our theory will be instructive for the studies
of the QAH effect in MnBi$_2$Te$_4$ in future experiments.
|
We study the effect of Dzyaloshinskii-Moriya (DM) interaction on the
triangular lattice $U(1)$ quantum spin liquid (QSL) which is stabilized by
ring-exchange interactions. A weak DM interaction introduces a staggered flux
to the $U(1)$ QSL state and changes the density of states at the spinon Fermi
surface. If the DM vector contains in-plane components, then the spinons gain
a nonzero Berry phase. The resultant thermal conductances $\kappa_{xx}$ and
$\kappa_{xy}$ qualitatively agree with the experimental results on the material
EtMe$_3$Sb[Pd(dmit)$_2]_2$. Furthermore, owing to perfect nesting of the Fermi
surface, a spin density wave state is triggered by larger DM interactions. On
the other hand, when the ring-exchange interaction decreases, another
anti-ferromagnetic (AFM) phase with $120^\circ$ order shows up, which is
proximate to a $U(1)$ Dirac QSL. We discuss the differences between the two
AFM phases in terms of their static structure factors and excitation spectra.
|
We propose a principled method for projecting an arbitrary square matrix to
the non-convex set of asymptotically stable matrices. Leveraging ideas from
large deviations theory, we show that this projection is optimal in an
information-theoretic sense and that it simply amounts to shifting the initial
matrix by an optimal linear quadratic feedback gain, which can be computed
exactly and highly efficiently by solving a standard linear quadratic regulator
problem. The proposed approach allows us to learn the system matrix of a stable
linear dynamical system from a single trajectory of correlated state
observations. The resulting estimator is guaranteed to be stable and offers
explicit statistical bounds on the estimation error.
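A minimal sketch of the shift-by-LQR-gain idea in discrete time, assuming B = I and unit cost weights for illustration (the paper derives the optimal weights from large deviations theory):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def project_to_stable(A):
    n = A.shape[0]
    B, Q, R = np.eye(n), np.eye(n), np.eye(n)  # illustrative LQR weights
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain
    return A - B @ K                                   # shifted, stable matrix

A = np.array([[1.2, 0.5], [0.0, 0.9]])                 # spectral radius > 1
print(max(abs(np.linalg.eigvals(project_to_stable(A)))))  # < 1 after the shift
```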
|
In this paper we shed light on the impact of fine-tuning over social media
data in the internal representations of neural language models. We focus on bot
detection in Twitter, a key task to mitigate and counteract the automatic
spreading of disinformation and bias in social media. We investigate the use of
pre-trained language models to tackle the detection of tweets generated by a
bot or a human account based exclusively on its content. Unlike the general
trend in benchmarks like GLUE, where BERT generally outperforms generative
transformers like GPT and GPT-2 for most classification tasks on regular text,
we observe that fine-tuning generative transformers on a bot detection task
produces higher accuracies. We analyze the architectural components of each
transformer and study the effect of fine-tuning on their hidden states and
output representations. Among our findings, we show that part of the
syntactical information and distributional properties captured by BERT during
pre-training is lost upon fine-tuning, while the generative pre-training
approach manages to preserve these properties.
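A minimal sketch of fine-tuning a generative transformer for bot detection as binary sequence classification, assuming the HuggingFace transformers library; the training corpus `train_ds` is hypothetical and the paper's exact setup is not reproduced:

```python
from transformers import (GPT2ForSequenceClassification, GPT2Tokenizer,
                          Trainer, TrainingArguments)

tok = GPT2Tokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token                  # GPT-2 ships without a pad token
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tok.pad_token_id

# `train_ds` (hypothetical) holds tokenized tweets with binary bot/human labels:
# trainer = Trainer(
#     model=model,
#     args=TrainingArguments(output_dir="out", num_train_epochs=3),
#     train_dataset=train_ds,
# )
# trainer.train()
```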
|
An Unmanned Aerial Vehicle (UAV) is a promising technology for providing
wireless coverage to ground user devices. When infrastructure communication
networks are destroyed in disasters, UAV battery life becomes a challenge
during service delivery in a post-disaster scenario. Therefore, selecting
cluster heads among user devices plays a vital role in detecting UAV signals
and processing data, improving UAV energy efficiency and ensuring reliable
connectivity. This paper evaluates the performance of the clustering approach
in detecting wireless coverage services while improving energy efficiency. The
evaluation is based on a realistic simulation of the ground-to-air
Line-of-Sight (LoS) channel. The results show that the cluster head can
effectively link the UAVs and cluster members at minimal energy expenditure.
The UAVs' altitudes and the path loss exponent affected the ability of user
devices to detect wireless coverage. Moreover, the bit error rate at the
cluster heads is considered for reliable post-disaster connectivity.
Clustering stabilizes the clusters linking the uncovered nodes to the UAV, and
its effectiveness in doing so has led to its ubiquity in emergency
communication systems.
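A minimal sketch of a widely used ground-to-air channel model consistent with the setup above: the LoS probability is a sigmoid of the elevation angle, and the mean path loss adds environment-dependent excess losses to free-space path loss. All parameters are illustrative placeholders, not values from the paper:

```python
import numpy as np

def mean_path_loss_db(h, r, f_hz, a=9.61, b=0.16, eta_los=1.0, eta_nlos=20.0):
    """h: UAV altitude (m), r: ground distance (m), f_hz: carrier frequency."""
    theta = np.degrees(np.arctan2(h, r))                # elevation angle (deg)
    p_los = 1.0 / (1.0 + a * np.exp(-b * (theta - a)))  # sigmoid LoS probability
    d = np.hypot(h, r)                                  # 3D link distance
    fspl = 20 * np.log10(d) + 20 * np.log10(f_hz) - 147.55
    return fspl + p_los * eta_los + (1 - p_los) * eta_nlos

print(mean_path_loss_db(h=100.0, r=300.0, f_hz=2e9))    # dB, illustrative
```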
|