As the number of IoT devices has increased rapidly, IoT botnets have
exploited the vulnerabilities of IoT devices. However, it is still challenging
to detect the initial intrusion on IoT devices prior to massive attacks. Recent
studies have utilized power side-channel information to identify this intrusion
behavior on IoT devices but still lack accurate real-time models for
ubiquitous botnet detection.
We propose the first online intrusion detection system, called DeepAuditor,
for IoT devices via power auditing. To develop the real-time system, we
designed a lightweight power auditing device called Power Auditor. We also
designed a distributed CNN classifier for online inference in a laboratory
setting. In order to prevent data leakage and reduce networking redundancy, we
then designed a privacy-preserving inference protocol via Packed Homomorphic
Encryption and a sliding-window protocol in our system. The classification
accuracy and processing time were measured, and the proposed classifier
outperformed a baseline classifier, especially against unseen patterns. We also
demonstrated that the distributed CNN design is secure against any of the
distributed components. Overall, the measurements demonstrated the feasibility
of our real-time distributed system for intrusion detection on IoT devices.
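As an illustrative aside (our sketch, not the authors' implementation), the
redundancy-reduction idea behind a sliding-window protocol over a power trace
can be expressed as follows; the window length, stride, and classifier are
hypothetical placeholders:

    import numpy as np

    def sliding_windows(trace, window=256, stride=64):
        """Yield overlapping windows of a 1-D power trace. Because
        consecutive windows share window - stride samples, only the
        stride newest samples need to be transmitted per inference."""
        for start in range(0, len(trace) - window + 1, stride):
            yield trace[start:start + window]

    # Hypothetical usage with some trained classifier `clf`:
    # trace = np.load("power_trace.npy")
    # for w in sliding_windows(trace):
    #     label = clf.predict(w[None, :])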
|
Vector control strategies are central to the mitigation and containment of
COVID-19 and have come in the form of municipal ordinances that restrict the
operational status of public and private spaces and associated services. Yet,
little is known about specific population responses in terms of risk behaviors.
To help understand the impact of those variable vector control strategies, a
multi-week, multi-site observational study was undertaken outside of 19 New
York City medical facilities during the peak of the city's initial COVID-19
wave (03/22/20-05/19/20). The aim was to capture perishable data of the touch,
destination choice, and PPE usage behavior of individuals egressing hospitals
and urgent care centers. A major goal was to establish an empirical basis for
future research on the way people interact with three-dimensional vector
environments. Anonymized data were collected via smartphones. Each data record
includes the time, date, and location of an individual leaving a healthcare
facility, their routing, and their interactions with the built environment, other
individuals, and themselves. Most records also note their PPE usage,
destination, intermediary stops, and transportation choices. The records were
linked with 61 socio-economic factors by the facility zip code and 7
contemporaneous weather factors, and then merged into a unified shapefile in an
ArcGIS system. This paper describes the project team and protocols used to
produce over 5,100 publicly accessible observational records and an affiliated
codebook that can be used to study linkages between individual behaviors and
on-the-ground conditions.
|
Deep Neural Networks (DNNs), despite their tremendous success in recent
years, could still cast doubts on their predictions due to the intrinsic
uncertainty associated with their learning process. Ensemble techniques and
post-hoc calibrations are two types of approaches that have individually shown
promise in improving the uncertainty calibration of DNNs. However, the
synergistic effect of the two types of methods has not been well explored. In
this paper, we propose a truth discovery framework to integrate ensemble-based
and post-hoc calibration methods. Using the geometric variance of the ensemble
candidates as a good indicator for sample uncertainty, we design an
accuracy-preserving truth estimator with provably no accuracy drop.
Furthermore, we show that post-hoc calibration can also be enhanced by truth
discovery-regularized optimization. On large-scale datasets including CIFAR and
ImageNet, our method shows consistent improvement against state-of-the-art
calibration approaches on both histogram-based and kernel density-based
evaluation metrics. Our code is available at
https://github.com/horsepurve/truly-uncertain.
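As a hedged illustration (the paper's exact definition may differ; see the
linked repository), one plausible reading of the geometric variance of
ensemble candidates as a per-sample uncertainty score is:

    import numpy as np

    def geometric_variance(probs):
        """Mean squared Euclidean distance of ensemble softmax outputs
        from their centroid; probs has shape (n_models, n_classes)."""
        centroid = probs.mean(axis=0)
        return np.mean(np.sum((probs - centroid) ** 2, axis=1))

    # Hypothetical usage with a 5-member ensemble on one sample:
    # probs = np.stack([m.predict_proba(x)[0] for m in models])
    # uncertainty = geometric_variance(probs)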
|
Space-based transit missions such as Kepler and TESS have demonstrated that
planets are ubiquitous. However, the success of these missions heavily depends
on ground-based radial velocity (RV) surveys, which combined with transit
photometry can yield bulk densities and orbital properties. While most Kepler
host stars are too faint for detailed follow-up observations, TESS is detecting
planets orbiting nearby bright stars that are more amenable to RV
characterization. Here we introduce the TESS-Keck Survey (TKS), an RV program
using ~100 nights on Keck/HIRES to study exoplanets identified by TESS. The
primary survey aims are investigating the link between stellar properties and
the compositions of small planets; studying how the diversity of system
architectures depends on dynamical configurations or planet multiplicity;
identifying prime candidates for atmospheric studies with JWST; and
understanding the role of stellar evolution in shaping planetary systems. We
present a fully-automated target selection algorithm, which yielded 103 planets
in 86 systems for the final TKS sample. Most TKS hosts are inactive,
solar-like, main-sequence stars (4500 K < Teff < 6000 K) at a wide range of
metallicities. The selected TKS sample contains 71 small planets (Rp < 4 Re),
11 systems with multiple transiting candidates, 6 sub-day period planets, and 3
planets that are in or near the habitable zone of their host star. The target
selection described here will facilitate the comparison of measured planet
masses, densities, and eccentricities to predictions from planet population
models. Our target selection software is publicly available (at
https://github.com/ashleychontos/sort-a-survey) and can be adapted for any
survey which requires a balance of multiple science interests within a given
telescope allocation.
|
We present an isomorphism test for graphs of Euler genus $g$ running in time
$2^{O(g^4 \log g)}n^{O(1)}$. Our algorithm provides the first explicit upper
bound on the dependence on $g$ for an fpt isomorphism test parameterized by the
Euler genus of the input graphs. The only previous fpt algorithm runs in time
$f(g)n$ for some function $f$ (Kawarabayashi 2015). Actually, our algorithm
even works when the input graphs only exclude $K_{3,h}$ as a minor. For such
graphs, no fpt isomorphism test was known before.
The algorithm builds on an elegant combination of simple group-theoretic,
combinatorial, and graph-theoretic approaches. In particular, we introduce
$(t,k)$-WL-bounded graphs which provide a powerful tool to combine
group-theoretic techniques with the standard Weisfeiler-Leman algorithm. This
concept may be of independent interest.
|
Consider a branching random walk $(V_u)_{u\in \mathcal T^{IGW}}$ in $\mathbb
Z^d$ with the genealogy tree $\mathcal T^{IGW}$ formed by a sequence of i.i.d.
critical Galton-Watson trees. Let $R_n $ be the set of points in $\mathbb Z^d$
visited by $(V_u)$ when the index $u$ explores the first $n$ subtrees in
$\mathcal T^{IGW}$. Our main result states that for $d\in \{3, 4, 5\}$, the
capacity of $R_n$ is almost surely equal to $n^{\frac{d-2}{2}+o(1)}$ as $n \to
\infty$.
|
We investigate magnetic instabilities in charge-neutral twisted bilayer
graphene close to so-called "magic angles" using a combination of real-space
Hartree-Fock and dynamical mean-field theories. In view of the large size of
the unit cell close to magic angles, we examine a previously proposed rescaling
that makes it possible to mimic the same underlying flat minibands at larger twist
angles. We find that localized magnetic states emerge for values of the Coulomb
interaction $U$ that are significantly smaller than what would be required to
render an isolated layer antiferromagnetic. However, this effect is
overestimated in the rescaled system, hinting at a complex interplay of
flatness of the minibands close to the Fermi level and the spatial extent of
the corresponding localized states. Our findings shed new light on perspectives
for experimental realization of magnetic states in charge-neutral twisted
bilayer graphene.
|
Locating lesions is important in the computer-aided diagnosis of X-ray
images. However, box-level annotation is time-consuming and laborious. How to
locate lesions accurately with few, or even without careful annotations is an
urgent problem. Although several works have approached this problem with
weakly-supervised methods, the performance needs to be improved. One obstacle
is that general weakly-supervised methods have failed to consider the
characteristics of X-ray images, such as the highly-structural attribute. We
therefore propose the Cross-chest Graph (CCG), which improves the performance
of automatic lesion detection by imitating the doctor's training and
decision-making process. CCG models the intra-image relationship between
different anatomical areas by leveraging the structural information to simulate
the doctor's habit of observing different areas. Meanwhile, the relationship
between any pair of images is modeled by a knowledge-reasoning module to
simulate the doctor's habit of comparing multiple images. We integrate
intra-image and inter-image information into a unified end-to-end framework.
Experimental results on the NIH Chest-14 database (112,120 frontal-view X-ray
images with 14 diseases) demonstrate that the proposed method achieves
state-of-the-art performance in weakly-supervised localization of lesions by
absorbing professional knowledge in the medical field.
|
We study effects of higher-order antinematic interactions on the critical
behavior of the antiferromagnetic (AFM) $XY$ model on a triangular lattice,
using Monte Carlo simulations. The parameter $q$ of the generalized antinematic
(ANq) interaction is found to have a pronounced effect on the phase diagram
topology by inducing new quasi-long-range ordered phases due to competition
with the conventional AFM interaction as well as geometrical frustration. For
values of $q$ divisible by 3 the conflict between the two interactions results
in a frustrated canted AFM phase appearing at low temperatures wedged between
the AFM and ANq phases. For $q$ not divisible by 3, increasing $q$ reveals
an evolution of the phase diagram topology featuring two ($q=2$),
three ($q=4,5$) and four ($q \geq 7$) ordered phases. In addition to the two
phases previously found for $q=2$, the first new phase with solely AFM ordering
arises for $q=4$ in the limit of strong AFM coupling and higher temperatures by
separating from the phase with the coexisting AFM and ANq orderings. For $q=7$
another phase with AFM ordering but multimodal spin distribution in each
sublattice appears at intermediate temperatures. All these algebraic phases
also display standard and generalized chiral long-range orderings, which
decouple at higher temperatures in the regime of dominant ANq (AFM) interaction
for $q \geq 4$ ($q \geq 7$) preserving only the generalized (standard) chiral
ordering.
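For concreteness, a Hamiltonian of the generalized type studied here (our
assumed form and sign conventions, not quoted from the paper) is

$$ H = J \sum_{\langle i,j \rangle} \cos(\theta_i - \theta_j) + J_q \sum_{\langle i,j \rangle} \cos\big(q(\theta_i - \theta_j)\big), \qquad J, J_q > 0, $$

where the first term is the AFM coupling, the second the generalized
antinematic (ANq) coupling, and both are geometrically frustrated on the
triangular lattice.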
|
Deep reinforcement learning (DRL) has great potential for acquiring the
optimal action in complex environments such as games and robot control.
However, it is difficult to analyze the decision-making of the agent, i.e., the
reasons it selects the action acquired by learning. In this work, we propose
Mask-Attention A3C (Mask A3C), which introduces an attention mechanism into
Asynchronous Advantage Actor-Critic (A3C), which is an actor-critic-based DRL
method, and can analyze the decision-making of an agent in DRL. A3C consists of
a feature extractor that extracts features from an image, a policy branch that
outputs the policy, and a value branch that outputs the state value. In this
method, we focus on the policy and value branches and introduce an attention
mechanism into them. The attention mechanism applies mask processing to the
feature maps of each branch using mask-attention that expresses the judgment
reason for the policy and state value with a heat map. We visualized
mask-attention maps for games on the Atari 2600 and found we could easily
analyze the reasons behind an agent's decision-making in various game tasks.
Furthermore, experimental results showed that the agent could achieve a higher
performance by introducing the attention mechanism.
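A minimal sketch of the mask-attention idea (our illustrative reconstruction
in PyTorch, not the authors' code): each branch derives a spatial mask from
the shared feature maps and gates them elementwise before its output head,
and the mask itself is what gets visualized as a heat map.

    import torch
    import torch.nn as nn

    class MaskAttentionBranch(nn.Module):
        """Gate shared feature maps with a learned spatial mask."""
        def __init__(self, channels, out_dim):
            super().__init__()
            self.mask_conv = nn.Conv2d(channels, 1, kernel_size=1)
            self.head = nn.Linear(channels, out_dim)

        def forward(self, feats):                        # (B, C, H, W)
            mask = torch.sigmoid(self.mask_conv(feats))  # (B, 1, H, W)
            gated = feats * mask                         # mask processing
            pooled = gated.mean(dim=(2, 3))              # (B, C)
            return self.head(pooled), mask               # mask = heat map

    # policy_branch = MaskAttentionBranch(64, n_actions)  # sizes assumed
    # value_branch  = MaskAttentionBranch(64, 1)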
|
This paper presents Contrastive Reconstruction, ConRec - a self-supervised
learning algorithm that obtains image representations by jointly optimizing a
contrastive and a self-reconstruction loss. We showcase that state-of-the-art
contrastive learning methods (e.g., SimCLR) have shortcomings in capturing
fine-grained visual features in their representations. ConRec extends the
SimCLR framework by adding (1) a self-reconstruction task and (2) an attention
mechanism within the contrastive learning task. This is accomplished by
applying a simple encoder-decoder architecture with two heads. We show that
both extensions contribute towards an improved vector representation for images
with fine-grained visual features. Combining those concepts, ConRec outperforms
SimCLR and SimCLR with Attention-Pooling on fine-grained classification
datasets.
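A minimal sketch of the joint objective (our illustration; the loss weight
and the simplified one-directional NT-Xent are assumptions):

    import torch
    import torch.nn.functional as F

    def conrec_loss(z1, z2, recon, images, temperature=0.5, lam=1.0):
        """Contrastive + self-reconstruction objective.
        z1, z2: (B, D) projected embeddings of two augmented views;
        recon, images: (B, C, H, W) decoder output and its target."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature        # (B, B) similarities
        labels = torch.arange(z1.size(0), device=z1.device)
        contrastive = F.cross_entropy(logits, labels)
        reconstruction = F.mse_loss(recon, images)
        return contrastive + lam * reconstruction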
|
Let $\mathfrak{g}=\mathfrak{g}_{\bar{0}}+\mathfrak{g}_{\bar{1}}$ be a basic
Lie superalgebra, and let $\mathcal{W}_0$ (resp. $\mathcal{W}$) be the finite
W-algebra (resp. W-superalgebra) constructed from a fixed nilpotent element in
$\mathfrak{g}_{\bar{0}}$. Based on a relation between the finite W-algebra
$\mathcal{W}_0$ and the W-superalgebra $\mathcal{W}$ found recently by the
author and Shu, we study the finite dimensional representations of finite
W-superalgebras in this paper. We first formulate and prove a version of
Premet's conjecture for the finite W-superalgebras arising from basic simple
Lie superalgebras. As in the W-algebra case, Premet's conjecture comes very
close to giving a classification of the finite dimensional simple
$\mathcal{W}$-modules. In the case where $\mathfrak{g}$ is a Lie superalgebra
of basic type I, we prove that the set of simple $\mathcal{W}$-supermodules is
in bijection with that of simple $\mathcal{W}_0$-modules; presenting a
triangular decomposition of the tensor product of $\mathcal{W}$ with a
Clifford algebra, we also give an algorithm to compute the characters of the
finite dimensional simple $\mathcal{W}$-supermodules with integral central
character.
|
Pulsating ultra-luminous X-ray sources (PULXs) are characterised by an
extremely large luminosity ($>10^{40}\,\mathrm{erg\,s^{-1}}$). While there is a
general consensus that they host an accreting, magnetized neutron star (NS),
the problem of how to produce luminosities $> 100$ times the Eddington limit,
$L_E$, of a solar mass object is still debated. A promising explanation relies
on the reduction of the opacities in the presence of a strong magnetic field,
which allows for the local flux to be much larger than the Eddington flux.
However, avoiding the onset of the propeller effect may be a serious problem.
Here, we reconsider the problem of column accretion onto a highly magnetized
NS, extending previously published calculations by relaxing the assumption of a
pure dipolar field and allowing for more complex magnetic field topologies. We
find that the maximum luminosity is determined primarily by the magnetic field
strength near the NS surface. We also investigate other factors determining the
accretion column geometry and the emergent luminosity, such as the assumptions
on the parameters governing the accretion flow at the disk-magnetosphere
boundary. We conclude that a strongly magnetized NS with a dipole component of
$\sim 10^{13}\,\mathrm{G}$, an octupole component of $\sim 10^{14}\,\mathrm{G}$,
and a spin period of $\sim 1\,\mathrm{s}$ can produce a luminosity of
$\sim 10^{41}\,\mathrm{erg\,s^{-1}}$ while avoiding the propeller regime. We
apply our model to two PULXs, NGC 5907 ULX-1 and NGC 7793 P13, and discuss how
their luminosity and spin period derivative can be explained in terms of
different configurations, either with or without multipolar magnetic components.
|
Key to successfully deal with complex contemporary datasets is the
development of tractable models that account for the irregular structure of the
information at hand. This paper provides a comprehensive and unifying view of
several sampling, reconstruction, and recovery problems for signals defined on
irregular domains that can be accurately represented by a graph. The workhorse
assumption is that the (partially) observed signals can be modeled as the
output of a graph filter applied to a structured (parsimonious) input graph signal.
When either the input or the filter coefficients are known, this is tantamount
to assuming that the signals of interest live on a subspace defined by the
supporting graph. When neither is known, the model becomes bilinear. Upon
imposing different priors and additional structure on either the input or the
filter coefficients, a broad range of relevant problem formulations arise. The
goal is then to leverage those priors, the shift operator of the supporting
graph, and the samples of the signal of interest to recover: the signal at the
non-sampled nodes (graph-signal interpolation), the input (deconvolution), the
filter coefficients (system identification), or any combination thereof (blind
deconvolution).
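In standard graph-signal-processing notation (consistent with, though not
quoted from, the abstract), the workhorse model reads

$$ y = H(S)\,x, \qquad H(S) = \sum_{k=0}^{K-1} h_k S^k, $$

where $S$ is the shift operator of the supporting graph, $x$ the parsimonious
input, and $h_k$ the filter coefficients. Graph-signal interpolation,
deconvolution, and system identification then amount to recovering the
unobserved entries of $y$, the input $x$, or the coefficients $\{h_k\}$,
respectively, while blind deconvolution recovers $x$ and $\{h_k\}$ jointly
from samples of $y$.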
|
We study the boundary behaviour of solutions to second order parabolic linear
equations in moving domains. Our main result is a higher order boundary Harnack
inequality in $C^1$ and $C^{k,\alpha}$ domains, showing that the quotient of
two solutions vanishing on the boundary of the domain is as smooth as the
boundary.
As a consequence of our result, we provide a new proof of higher order
regularity of the free boundary in the parabolic obstacle problem.
|
In this paper, we analyze the so-called Master Equation of the linear
backreaction of a plasma disk in the central object magnetic field, when small
scale ripples are considered. This study allows us to single out two relevant
physical properties of the linear disk backreaction: (i) the appearance of a
vertical growth of the magnetic flux perturbations; (ii) the emergence of a
sequence of magnetic field O-points, crucial for the triggering of local plasma
instabilities. We first analyze a general Fourier approach to the solution of
the addressed linear partial differential problem. This technique allows us to
show how the vertical gradient of the backreaction is, in general, inverted
with respect to the background one. Instead, the fundamental harmonic solution
constitutes a specific exception for which the background and the perturbed
profiles are both decaying. Then, we study the linear partial differential
system from the point of view of a general variable separation method. The
obtained profile describes the crystalline behavior of the disk. Using a simple
rescaling, the governing equation is reduced to the second order differential
Whittaker equation. The zeros of the radial magnetic field are found by using
the solution written in terms of Kummer functions. The possible implications of
the obtained morphology of the disk magnetic profile are then discussed in view
of the jet formation.
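For reference, the Whittaker equation in its standard form (not quoted from
the paper) is

$$ \frac{d^2 W}{dz^2} + \left( -\frac{1}{4} + \frac{\kappa}{z} + \frac{1/4 - \mu^2}{z^2} \right) W = 0, $$

whose solutions $M_{\kappa,\mu}(z)$ and $W_{\kappa,\mu}(z)$ are expressible
through Kummer confluent hypergeometric functions, e.g.
$M_{\kappa,\mu}(z) = e^{-z/2} z^{\mu+1/2} M(\mu-\kappa+\tfrac{1}{2},\, 1+2\mu,\, z)$.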
|
Identifying academic plagiarism is a pressing problem, among others, for
research institutions, publishers, and funding organizations. Detection
approaches proposed so far analyze lexical, syntactical, and semantic text
similarity. These approaches find copied, moderately reworded, and literally
translated text. However, reliably detecting disguised plagiarism, such as
strong paraphrases, sense-for-sense translations, and the reuse of non-textual
content and ideas, is an open research problem.
The thesis addresses this problem by proposing plagiarism detection
approaches that implement a different concept: analyzing non-textual content in
academic documents, specifically citations, images, and mathematical content.
To validate the effectiveness of the proposed detection approaches, the
thesis presents five evaluations that use real cases of academic plagiarism and
exploratory searches for unknown cases.
The evaluation results show that non-textual content elements contain a high
degree of semantic information, are language-independent, and are largely
invariant to the alterations that authors typically perform to conceal plagiarism.
Analyzing non-textual content complements text-based detection approaches and
increases the detection effectiveness, particularly for disguised forms of
academic plagiarism.
To demonstrate the benefit of combining non-textual and text-based detection
methods, the thesis describes the first plagiarism detection system that
integrates the analysis of citation-based, image-based, math-based, and
text-based document similarity. The system's user interface employs
visualizations that significantly reduce the effort and time users must invest
in examining content similarity.
|
We introduce "$t$-LC triangulated manifolds" as those triangulations
obtainable from a tree of $d$-simplices by recursively identifying two boundary
$(d-1)$-faces whose intersection has dimension at least $d-t-1$. The $t$-LC
notion interpolates between the class of LC manifolds introduced by
Durhuus--Jonsson (corresponding to the case $t=1$), and the class of all
manifolds (case $t=d$). Benedetti--Ziegler proved that there are at most
$2^{d^2 \, N}$ triangulated $1$-LC $d$-manifolds with $N$ facets. Here we prove
that there are at most $2^{\frac{d^3}{2}N}$ triangulated $2$-LC $d$-manifolds
with $N$ facets. This extends to all dimensions an intuition by Mogami for
$d=3$.
We also introduce "$t$-constructible complexes", interpolating between
constructible complexes (the case $t=1$) and all complexes (case $t=d$). We
show that all $t$-constructible pseudomanifolds are $t$-LC, and that all
$t$-constructible complexes have (homotopical) depth larger than $d-t$. This
extends the famous result by Hochster that constructible complexes are
(homotopy) Cohen--Macaulay.
|
This paper revisits the connection between the girth of a protograph-based
LDPC code given by a parity-check matrix and the properties of powers of the
product between the matrix and its transpose. From this connection, we obtain
the necessary and sufficient conditions for a code to have a given girth
between 6 and 12, and we show how these conditions can be incorporated into
simple algorithms to construct codes of that girth. To this end, we highlight
the role that certain
submatrices that appear in these products have in the construction of codes of
desired girth. In particular, we show that imposing girth conditions on a
parity-check matrix is equivalent to imposing conditions on a square submatrix
obtained from it and we show how this equivalence is particularly strong for a
protograph based parity-check matrix of variable node degree 2, where the
cycles in its Tanner graph correspond one-to-one to the cycles in the Tanner
graph of a square submatrix obtained by adding the permutation matrices (or
products of these) in the composition of the parity-check matrix. We end the
paper with exemplary constructions of codes with various girths and computer
simulations. Although we mostly assume the case of fully connected protographs
of variable node degree 2 and 3, the results can be used for any parity-check
matrix/protograph-based Tanner graph.
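As an illustrative aside (our sketch, not the paper's construction
algorithm), the girth of the Tanner graph of a binary parity-check matrix H
can be checked directly by breadth-first search from every vertex:

    import numpy as np
    from collections import deque

    def tanner_girth(H):
        """Length of the shortest cycle in the Tanner graph of a binary
        parity-check matrix H (rows = checks, columns = variables)."""
        m, n = H.shape
        adj = [[] for _ in range(m + n)]        # checks: 0..m-1,
        for r, c in zip(*np.nonzero(H)):        # variables: m..m+n-1
            adj[r].append(m + c)
            adj[m + c].append(r)
        girth = float("inf")
        for s in range(m + n):
            dist, parent = {s: 0}, {s: -1}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v], parent[v] = dist[u] + 1, u
                        q.append(v)
                    elif v != parent[u]:        # non-tree edge: cycle
                        girth = min(girth, dist[u] + dist[v] + 1)
        return girth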
|
Two-dimensional electron systems subjected to a perpendicular magnetic field
absorb electromagnetic radiation via the cyclotron resonance (CR). Here we
report a qualitative breach of this well-known behaviour in graphene. Our study
of the terahertz photoresponse reveals a resonant burst at the main overtone of
the CR, drastically exceeding the signal detected at the position of the
ordinary CR. In accordance with the developed theory, the photoresponse
dependencies on the magnetic field, doping level, and sample geometry suggest
that the origin of this anomaly lies in the near-field magnetoabsorption
facilitated by the Bernstein modes, ultra-slow magnetoplasmonic excitations
reshaped by nonlocal electron dynamics. Close to the CR harmonics, these modes
are characterized by a flat dispersion and a diverging plasmonic density of
states that strongly amplifies the radiation absorption. Besides fundamental
interest, our experimental results and developed theory show that the radiation
absorption via nonlocal collective modes can facilitate a strong photoresponse,
a behaviour potentially useful for infrared and terahertz technology.
|
We obtain results on the condensation principle called local club
condensation. We prove that in extender models an equivalence between the
failure of local club condensation and subcompact cardinals holds. This gives a
characterization of $\square_{\kappa}$ in terms of local club condensation in
extender models. Assuming $\mathsf{GCH}$, given an interval of ordinals $I$, we
verify that, iterating the forcing defined by Holy-Welch-Wu, we can preserve
$\mathsf{GCH}$, cardinals and cofinalities and obtain a model where local club
condensation holds for every ordinal in $I$, modulo those ordinals whose cardinality is a
singular cardinal.
We prove that if $\kappa$ is a regular cardinal in an interval $I$, the above
iteration provides enough condensation for the combinatorial principle
$\operatorname{Dl}_{S}^{*}(\Pi^{1}_{2})$, and in particular $\diamondsuit(S)$, to hold for
any stationary $S \subseteq \kappa$.
|
In this paper we find analytical solutions for the scalar and gauge fields in
the Freedman-Robertson-Walker multiply warped braneworld scenario. With this we
find the precise mass spectra for these fields. We compare these spectra with
those previously found in the literature for the static case.
|
The Mn(Bi$_{1-x}$Sb$_x$)$_2$Te$_4$ series is purported to span from
an antiferromagnetic (AF) topological insulator at x = 0 to a trivial AF insulator
at x = 1. Here we report on neutron diffraction and inelastic neutron
scattering studies of the magnetic interactions across this series. All
compounds measured possess ferromagnetic (FM) triangular layers and we find a
crossover from AF to FM interlayer coupling near x = 1 for our samples. The
large spin gap at x = 0 closes rapidly and the average FM exchange interactions
within the triangular layer increase with Sb substitution. Similar to a
previous study of MnBi$_2$Te$_4$, we find severe spectral broadening which
increases dramatically across the compositional series. In addition to
broadening, we observe an additional sharp magnetic excitation in
MnSb$_2$Te$_4$ that may indicate the development of local magnetic modes based
on recent reports of antisite disorder between Mn and Sb sublattices. The
results suggest that both substitutional and antisite disorder contribute
substantially to the magnetism in Mn(Bi$_{1-x}$Sb$_x$)$_2$Te$_4$.
|
We have built a renormalizable $U(1)_X$ model with a $\Sigma (18)\times Z_4$
symmetry, whose spontaneous breaking yields the observed SM fermion masses and
fermionic mixing parameters. The tiny masses of the light active neutrinos are
produced by the type I seesaw mechanism mediated by very heavy right handed
Majorana neutrinos. To the best of our knowledge, this model is the first
implementation of the $\Sigma (18)$ flavor symmetry in a renormalizable
$U(1)_X$ model. Our model allows a successful fit for the SM fermion masses,
fermionic mixing angles and CP phases for both quark and lepton sectors. The
obtained values for the physical observables of both quark and lepton sectors
are in accordance with the experimental data. We obtain an effective neutrino
mass parameter of $\langle m_{ee}\rangle=1.51\times 10^{-3}\, \mathrm{eV}$ for
normal ordering and $\langle m_{ee}\rangle =4.88\times 10^{-2} \, \mathrm{eV}$
for inverted ordering, both consistent with the recent experimental
limits on neutrinoless double beta decay.
|
Spatial resolution is one of the most important specifications of an imaging
system. Recent results in quantum parameter estimation theory reveal that an
arbitrarily small distance between two incoherent point sources can always be
efficiently determined through the use of a spatial mode sorter. However,
extending this procedure to a general object consisting of many incoherent
point sources remains challenging, due to the intrinsic complexity of
multi-parameter estimation problems. Here, we generalize the Richardson-Lucy
(RL) deconvolution algorithm to address this challenge. We simulate its
application to an incoherent confocal microscope, with a Zernike spatial mode
sorter replacing the pinhole used in a conventional confocal microscope. We
test different spatially incoherent objects of arbitrary geometry, and we find
that the resolution enhancement of sorter-based microscopy is on average over
30% higher than that of a conventional confocal microscope using the standard
RL deconvolution algorithm. Our method could potentially be used in diverse
applications such as fluorescence microscopy and astronomical imaging.
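For orientation, the classical RL iteration that the sorter-based method
generalizes can be sketched as follows (standard algorithm; the PSF and
iteration count are placeholders):

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
        """Classical Richardson-Lucy deconvolution of a 2-D image."""
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full(observed.shape, observed.mean())
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / (blurred + eps)   # data-fit correction
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate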
|
In this article, we study the phenomenology of a two dimensional dilute
suspension of active amphiphilic Janus particles. We analyze how the morphology
of the aggregates emerging from their self-assembly depends on the strength and
the direction of the active forces. We systematically explore and contrast the
phenomenologies resulting from particles with a range of attractive patch
coverages. Finally, we illustrate how the geometry of the colloids and the
directionality of their interactions can be used to control the physical
properties of the assembled active aggregates and suggest possible strategies
to exploit self-propulsion as a tunable driving force for self-assembly.
|
Purpose: Infectious agents, such as SARS-CoV-2, can be carried by droplets
expelled during breathing. The spatial dissemination of droplets varies
according to their initial velocity. After a short literature review, our goal
was to determine the velocity of the exhaled air during vocal exercises.
Methods: A propylene glycol cloud produced by two e-cigarette users allowed
visualization of the exhaled air emitted during vocal exercises. Airflow
velocities were measured during the first 200 ms of a long exhalation, a
sustained vowel /a/ and varied vocal exercises. For the long exhalation and the
sustained vowel /a/, the decrease of airflow velocity was measured until 3 s.
Results were compared with a Computational Fluid Dynamics (CFD) study using
boundary conditions consistent with our experimental study. Results Regarding
the production of vowels, higher velocities were found in loud and whispered
voices than in normal voice. Voiced consonants like /3/ or /v/ generated higher
velocities than vowels. Some voiceless consonants, e.g., /t/ generated high
velocities, but long exhalation had the highest velocities. Semi-occluded vocal
tract exercises generated faster airflow velocities than loud speech, with a
decreased velocity during voicing. The initial velocity quickly decreased as
was shown during a long exhalation or a sustained vowel /a/. Velocities were
consistent with the CFD data. Conclusion Initial velocity of the exhaled air is
a key factor influencing droplets trajectory. Our study revealed that vocal
exercises produce a slower airflow than long exhalation. Speech therapy should,
therefore, not be associated with an increased risk of contamination when
implementing standard recommendations.
|
Various hardware accelerators have been developed for energy-efficient and
real-time inference of neural networks on edge devices. However, most training
is done on high-performance GPUs or servers, and the huge memory and computing
costs prevent training neural networks on edge devices. This paper proposes a
novel tensor-based training framework, which offers orders-of-magnitude memory
reduction in the training process. We propose a novel rank-adaptive tensorized
neural network model, and design a hardware-friendly low-precision algorithm to
train this model. We present an FPGA accelerator to demonstrate the benefits of
this training method on edge devices. Our preliminary FPGA implementation
achieves $59\times$ speedup and $123\times$ energy reduction compared to
embedded CPU, and $292\times$ memory reduction over a standard full-size
training.
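To give a flavor of how tensorized factorizations save memory (a generic
low-rank sketch for illustration, not the paper's rank-adaptive tensor
format):

    import numpy as np

    class LowRankLinear:
        """Linear layer with W factored as U @ V, cutting storage from
        d_in*d_out to rank*(d_in+d_out) parameters; tensor-train formats
        push the same idea further by factoring along several modes."""
        def __init__(self, d_in, d_out, rank, seed=0):
            rng = np.random.default_rng(seed)
            self.U = rng.standard_normal((d_in, rank)) / np.sqrt(d_in)
            self.V = rng.standard_normal((rank, d_out)) / np.sqrt(rank)

        def forward(self, x):                    # x: (batch, d_in)
            return (x @ self.U) @ self.V

    # A rank-16 factorization of a 1024 x 1024 layer stores ~32k instead
    # of ~1M parameters, illustrating the kind of memory reduction that
    # rank-adaptive tensorization targets during training.
    layer = LowRankLinear(1024, 1024, rank=16)
    y = layer.forward(np.ones((2, 1024)))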
|
The reliable detection of neutrons in a harsh gamma-ray environment is an
important aspect of establishing non-destructive methods for the
characterization of spent nuclear fuel. In this study, we present results from
extended in-situ monitoring of detector systems consisting of commercially
available components: EJ-426, a $^6$Li-enriched solid-state scintillator
material sensitive to thermal neutrons, and two different types of Hamamatsu
photomultiplier tubes (PMT). Over the period of eight months, these detectors
were operated in close vicinity to spent nuclear fuel stored at the interim
storage facility CLAB, Oskarshamn, Sweden. At the measurement position the
detectors were continuously exposed to an estimated neutron flux of approx.
280 n cm$^{-2}$ s$^{-1}$ and a gamma-ray dose rate of approx. 6 Sv/h.
Using offline software algorithms, neutron pulses were identified in the
data. Over the entire investigated dose range of up to 35 kGy, the detector
systems were functioning and were delivering detectable neutron signals. Their
performance as measured by the number of identified neutrons degrades down to
about 30% of the initial value. Investigations of the irradiated components
suggest that this degradation is a result of reduced optical transparency of
the involved materials as well as a reduction of PMT gain due to the continuous
high currents. Increasing the gain of the PMT through step-ups of the applied
high voltage allowed us to partially compensate for this loss in detection
sensitivity.
The integrated neutron fluence during the measurement was experimentally
verified to be on the order of $5 \cdot 10^9$ n/cm$^2$. The results were
interpreted with the help of MCNP6.2 simulations of the setup and the neutron
flux.
|
This paper presents a novel three-degree-of-freedom (3-DOF) translational
parallel manipulator (TPM) by using a topological design method of parallel
mechanism (PM) based on position and orientation characteristic (POC)
equations. The proposed PM is composed only of lower-mobility joints and
actuated prismatic joints; three kinematic issues of importance are then
investigated. The first aspect pertains to geometric modeling of the
TPM in connection with its topological characteristics, such as the POC, degree
of freedom and coupling degree, from which its symbolic direct kinematic
solutions are readily obtained. Moreover, the decoupled properties of
input-output motions are directly evaluated without Jacobian analysis.
Sequentially, based upon the inverse kinematics, the singular configurations of
the TPM are identified, wherein the singular surfaces are visualized by means
of a Gr{\"o}bner based elimination operation. Finally, the workspace of the TPM
is evaluated with a geometric approach. This 3-DOF TPM features fewer joints
and links than the well-known Delta robot, which reduces the structural
complexity. Its symbolic direct kinematics and partially-decoupled property
will ease path planning and dynamic analysis. The TPM can be used for
manufacturing large workpieces.
|
We prove that the slice rank of a 3-tensor (a combinatorial notion introduced
by Tao in the context of the cap-set problem), the analytic rank (a
Fourier-theoretic notion introduced by Gowers and Wolf), and the geometric rank
(a recently introduced algebro-geometric notion) are all equivalent up to an
absolute constant. As a corollary, we obtain strong trade-offs on the
arithmetic complexity of a biased bilinear map, and on the separation between
computing a bilinear map exactly and on average. Our result settles open
questions of Haramaty and Shpilka [STOC 2010], and of Lovett [Discrete Anal.,
2019] for 3-tensors.
|
The field of Energy System Analysis (ESA) has experienced exponential growth
in the number of publications since at least the year 2000. This paper presents
a comprehensive bibliometric analysis on ESA by employing different algorithms
in Matlab and R. The focus of results is on quantitative indicators relating to
number and type of publication outputs, collaboration links between
institutions, authors and countries, and dynamic trends within the field. The
five and twelve most productive countries account for 50% and 80% of ESA
publications, respectively. The dominant institutions are even more
concentrated within a
small number of countries. A significant concentration of published papers
within countries and institutions was also confirmed by analysing collaboration
networks. These show dominant collaboration within the same university or at
least the same country. There is also a strong link among the most
successful journals, authors and institutions. The Energy journal has had the
most publications in the field, and its editor-in-chief Lund H is the author
with most of the publications in the field, as well as the author with most of
the highly cited publications in the field. In terms of the dynamics within the
field in the past decade, recent years have seen a higher impact of topics
related to flexibility and hybrid/integrated energy systems alongside a decline
in individual technologies. This paper provides a holistic overview of two
decades' research output and enables interested readers to obtain a
comprehensive overview of the key trends in this active field.
|
Recently room temperature superconductivity with Tc=15 degrees Celsius has
been discovered in a pressurized complex ternary hydride, CSHx, which is a
carbon doped H3S alloy. The nanoscale structure of H3S is a particular
realization of the 1993 patent claim of superlattice of quantum wires for room
temperature superconductors where the maximum Tc occurs at the top of a
superconducting dome. Here we focus on the electronic structure of materials
showing nanoscale heterostructures at atomic limit made of a superlattice of
quantum wires like hole doped cuprate perovskites, organics, A15 intermetallics
and pressurized hydrides. We provide a perspective of the theory of room
temperature multigap superconductivity in heterogeneous materials tuned at a
Fano Feshbach resonance (also called shape resonance) in the superconducting
gaps focusing on H3S where the maximum Tc occurs where the pressure tunes the
chemical pressure near a topological Lifshitz transition. Here the
superconductivity dome of Tc versus pressure is driven by both electron-phonon
coupling and contact exchange interaction. We show that the Tc amplification up
to room temperature is driven by the Fano Feshbach resonance between a
superconducting gap in the anti-adiabatic regime and other gaps in the
adiabatic regime. In these cases the Tc amplification via contact exchange
interaction is the missing term in conventional multiband BCS and anisotropic
Migdal-Eliashberg theories, which include only Cooper pairing.
|
This paper is concerned with a reaction--diffusion system modeling the
fixation and the invasion in a population of a gene drive (an allele biasing
inheritance, increasing its own transmission to offspring). In our model, the
gene drive has a negative effect on the fitness of individuals carrying it, and
may therefore decrease the total carrying capacity of the
population locally in space. This tends to generate an opposing demographic
advection that the gene drive has to overcome in order to invade. While
previous reaction--diffusion models neglected this aspect, here we focus on it
and try to predict the sign of the traveling wave speed. It turns out to be an
analytical challenge, only partial results being within reach, and we complete
our theoretical analysis by numerical simulations. Our results indicate that
taking into account the interplay between population dynamics and population
genetics might actually be crucial, as it can effectively reverse the direction
of the invasion and lead to failure. Our findings can be extended to other
bistable systems, such as the spread of cytoplasmic incompatibilities caused by
Wolbachia.
|
We present a new model of neural networks called Min-Max-Plus Neural Networks
(MMP-NNs) based on operations in tropical arithmetic. In general, an MMP-NN is
composed of three types of alternately stacked layers, namely linear layers,
min-plus layers and max-plus layers. Specifically, the latter two types of
layers constitute the nonlinear part of the network which is trainable and more
sophisticated compared to the nonlinear part of conventional neural networks.
In addition, we show that, owing to this greater capacity for expressing nonlinearity,
MMP-NNs are universal approximators of continuous functions, even when the
number of multiplication operations is tremendously reduced (possibly to none
in certain extreme cases). Furthermore, we formulate the backpropagation
algorithm in the training process of MMP-NNs and introduce an algorithm of
normalization to improve the rate of convergence in training.
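A minimal NumPy sketch of the two tropical layer types (our illustration;
weight shapes and conventions are assumptions):

    import numpy as np

    def min_plus_layer(x, W):
        """y_j = min_i (x_i + W_ij): a (min, +) matrix-vector product.
        Note there are no multiplications, only additions and minima."""
        return np.min(x[:, None] + W, axis=0)

    def max_plus_layer(x, W):
        """y_j = max_i (x_i + W_ij): the (max, +) counterpart."""
        return np.max(x[:, None] + W, axis=0)

    # Alternating linear, min-plus and max-plus layers, MMP-NN style:
    rng = np.random.default_rng(0)
    x = rng.standard_normal(4)
    h = min_plus_layer(x @ rng.standard_normal((4, 8)),
                       rng.standard_normal((8, 8)))
    y = max_plus_layer(h, rng.standard_normal((8, 2)))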
|
In this paper we study a general vector minimization problem (P), defined
via a perturbation mapping between locally convex Hausdorff topological
vector spaces, where "WInf" stands for the weak infimum with respect to the
ordering generated by a convex cone $K$. Several representations of the
epigraph of the conjugate of the perturbation mapping are established. From
these, variants of vector Farkas lemmas are proved. Armed with these basic
tools, the {\it dual} and the so-called {\it loose dual} problems of (P) are
defined, and stable strong duality results between these pairs of
primal-dual problems are established. The results are then applied to a
general class (CCVP) of composed vector optimization problems with cone
constraints. For this class of problems, four perturbation mappings are
suggested. Each of these mappings yields several forms of vector Farkas
lemmas and two forms of dual problems for (CCVP). Concretely, one of the
suggested perturbation mappings gives rise to the well-known {\it Lagrange}
and {\it loose Lagrange dual problems} for (CCVP), while each of the three
others yields two kinds of Fenchel-Lagrange dual problems for (CCVP). Stable
strong duality for these pairs of primal-dual problems is proved. Several
special cases of (CCVP) are considered at the end of the paper, including
vector composite problems (without constraints), cone-constrained vector
problems, and scalar composed problems. When specified to these concrete
vector problems, the results of this paper cover some Lagrange duality
results that appeared recently, and also lead to new results on stable
strong Fenchel-Lagrange duality which, to the best knowledge of the authors,
appear for the first time in the literature.
|
We prove the existence of elements of infinite order in the homotopy groups
of the spaces $\mathcal{R}_{Ric>0}(M)$ and $\mathcal{R}_{sec>0}(M)$ of positive
Ricci and positive sectional curvature, provided that $M$ is high-dimensional
and Spin, admits such a metric and has a non-vanishing rational Pontryagin
class.
|
In this paper we design efficient quadrature rules for finite element
discretizations of nonlocal diffusion problems with compactly supported kernel
functions. Two of the main challenges in nonlocal modeling and simulations are
the prohibitive computational cost and the nontrivial implementation of
discretization schemes, especially in three-dimensional settings. In this work
we circumvent both challenges by introducing a parametrized mollifying function
that improves the regularity of the integrand, utilizing an adaptive
integration technique, and exploiting parallelization. We first show that the
"mollified" solution converges to the exact one as the mollifying parameter
vanishes, then we illustrate the consistency and accuracy of the proposed
method on several two- and three-dimensional test cases. Furthermore, we
demonstrate the good scaling properties of the parallel implementation of the
adaptive algorithm and we compare the proposed method with recently developed
techniques for efficient finite element assembly.
|
Localization is one of the most fundamental interference phenomena caused by
randomness, and its universal aspects have been extensively explored from the
perspective of one-parameter scaling mainly for static properties. We
numerically study the dynamics of fermions in disordered one-dimensional potentials
exhibiting localization and find dynamical one-parameter scaling for surface
roughness, which represents particle-number fluctuations at a given
length scale, and for entanglement entropy when the system is in delocalized
phases. This dynamical scaling corresponds to the Family-Vicsek scaling
originally developed in classical surface growth, and the associated scaling
exponents depend on the type of disorder. Notably, we find that partially
localized states in the delocalized phase of the random-dimer model lead to
anomalous scaling, where destructive interference unique to quantum systems
leads to exponents unknown for classical systems and clean systems.
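For reference, the classical Family-Vicsek form (standard notation, not
quoted from the paper) for the roughness $W(L,t)$ of a system of size $L$ is

$$ W(L,t) \simeq L^{\alpha} f(t/L^{z}), \qquad f(u) \sim u^{\beta} \ \text{for } u \ll 1, \quad f(u) \to \mathrm{const} \ \text{for } u \gg 1, $$

with growth exponent $\beta = \alpha/z$, where $\alpha$ and $z$ are the
roughness and dynamical exponents.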
|
We reemphasize that the ratio $R_{s\mu} \equiv
\overline{\mathcal{B}}(B_s\to\mu\bar\mu)/\Delta M_s$ is a measure of the
tension of the Standard Model (SM) with latest measurements of
$\overline{\mathcal{B}}(B_s\to\mu\bar\mu)$ that does not suffer from the
persistent puzzle on the $|V_{cb}|$ determinations from inclusive versus
exclusive $b\to c\ell\bar\nu$ decays and which affects the value of the CKM
element $|V_{ts}|$ that is crucial for the SM predictions of both
$\overline{\mathcal{B}}(B_s\to\mu\bar\mu)$ and $\Delta M_s$, but cancels out in
the ratio $R_{s\mu}$. In our analysis we include higher order electroweak and
QED corrections and adopt the latest hadronic input to find a tension of about
$2\sigma$ between the $R_{s\mu}$ measurement and the SM, independently of $|V_{ts}|$.
We also discuss the ratio $R_{d\mu}$ which could turn out, in particular in
correlation with $R_{s\mu}$, to be useful for the search for New Physics, when
the data on both ratios improves. Also $R_{d\mu}$ is independent of $|V_{cb}|$
or more precisely $|V_{td}|$.
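Schematically (standard SM expressions in simplified form, shown here for
illustration), both quantities carry the same CKM and decay-constant factors,
which cancel in the ratio:

$$ \overline{\mathcal{B}}(B_s\to\mu\bar\mu) \propto |V_{tb}V_{ts}^*|^2 \, f_{B_s}^2 \, Y^2(x_t), \qquad \Delta M_s \propto |V_{tb}V_{ts}^*|^2 \, f_{B_s}^2 \, \hat B_{B_s} \, S_0(x_t), $$

$$ \Longrightarrow \quad R_{s\mu} \propto \frac{Y^2(x_t)}{\hat B_{B_s}\, S_0(x_t)}, $$

so that $R_{s\mu}$ is free of both $|V_{ts}|$ and $f_{B_s}$, leaving the bag
parameter $\hat B_{B_s}$ as the main hadronic input.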
|
We prove a local version of the noncollapsing estimate for mean curvature
flow. By combining our result with earlier work of X.-J. Wang, it follows that
certain ancient convex solutions that sweep out the entire space are
noncollapsed.
|
The optical spectra of vertically stacked MoSe$_2$/WSe$_2$ heterostructures
contain additional 'interlayer' excitonic peaks that are absent in the
individual monolayer materials and exhibit a significant spatial charge
separation in the out-of-plane direction. Extending a previous study, we used a
many-body perturbation theory approach to simulate and analyse the excitonic
spectra of MoSe$_2$/WSe$_2$ heterobilayers with three stacking orders,
considering both momentum-direct and momentum-indirect excitons. We find that
the small oscillator strengths and corresponding optical responses of the
interlayer excitons are significantly stacking-dependent and give rise to high
radiative lifetimes in the range of 5-200\,ns (at T=4\,K) for the 'bright'
interlayer excitons. Solving the finite-momentum Bethe-Salpeter Equation, we
predict that the lowest-energy excitation should be an indirect exciton over
the fundamental indirect band gap (K$\rightarrow$Q), with a binding energy of
220\,meV. However, in agreement with recent magneto-optics experiments and
previous theoretical studies, our simulations of the effective excitonic
Land\'e g-factors suggest that the low-energy momentum-indirect excitons are
not experimentally observed for MoSe$_2$/WSe$_2$ heterostructures. We further
reveal the existence of 'interlayer' C excitons with significant exciton
binding energies and optical oscillator strengths, which are analogous to the
prominent band nesting excitons in mono- and few-layer transition-metal
dichalcogenides.
|
Strong magnetic fields have a large impact on the dynamics of molecules. In
addition to the changes of the electronic structure, the nuclei are exposed to
the Lorentz force with the magnetic field being screened by the electrons. In
this work, we explore these effects using ab-initio molecular dynamics
simulations based on an effective Hamiltonian calculated at the Hartree-Fock
level of theory. To correctly include these non-conservative forces in the
dynamics, we have designed a series of novel propagators that show both good
efficiency and stability in test cases. As a first application, we analyze
simulations of He and H$_2$ at two field strengths characteristic of magnetic
white dwarfs (0.1 $B_0 = 2.35 \times 10^4$ T and $B_0 = 2.35 \times 10^5$ T).
While the He simulations clearly demonstrate the importance of electron
screening of the Lorentz force in the dynamics, the extracted rovibrational
spectra of H$_2$ reveal a number of fascinating features not observed in the
field-free case: couplings of rotations/vibrations with the cyclotron rotation,
overtones with unusual selection rules, and hindered rotations that transmute
into librations with increasing field strength. We conclude that our presented
framework is a powerful tool to investigate molecules in these extreme
environments.
|
The inverse spectral problem for the second-order differential pencil with
quadratic dependence on the spectral parameter is studied. We obtain sufficient
conditions for the global solvability of the inverse problem, prove its local
solvability and stability. The problem is considered in the general case of
complex-valued pencil coefficients and arbitrary eigenvalue multiplicities.
Studying local solvability and stability, we take the possible splitting of
multiple eigenvalues under a small perturbation of the spectrum into account.
Our approach is constructive. It is based on the reduction of the nonlinear
inverse problem to a linear equation in the Banach space of infinite sequences.
The theoretical results are illustrated by numerical examples.
|
Sgr B1 is a luminous H II region in the Galactic Center immediately next to
the massive star-forming giant molecular cloud Sgr B2 and apparently connected
to it, judging from their similar radial velocities. In 2018 we showed from SOFIA
FIFI-LS observations of the [O III] 52 and 88 micron lines that there is no
central exciting star cluster and that the ionizing stars must be widely spread
throughout the region. Here we present SOFIA FIFI-LS observations of the [O I]
146 and [C II] 158 micron lines formed in the surrounding photodissociation
regions (PDRs). We find that these lines correlate neither with each other nor
with the [O III] lines although together they correlate better with the 70
micron Herschel PACS images from Hi-GAL. We infer from this that Sgr B1
consists of a number of smaller H II regions plus their associated PDRs, some
seen face-on and the others seen more or less edge-on. We used the PDR Toolbox
to estimate densities and the far-ultraviolet intensities exciting the PDRs.
Using models computed with Cloudy, we demonstrate possible appearances of
edge-on PDRs and show that the density difference between the PDR densities and
the electron densities estimated from the [O III] line ratios is incompatible
with pressure equilibrium unless there is a substantial pressure contribution
from either turbulence or magnetic field or both. We likewise conclude that the
hot stars exciting Sgr B1 are widely spaced throughout the region at
substantial distances from the gas with no evidence of current massive star
formation.
|
Abortion is one of the biggest causes of maternal deaths, accounting for 15%
of maternal deaths in Southeast Asia. The increase in and effectiveness of
using contraception are still considered to be the effective method to reduce
abortion rate. Data pertaining to abortion incidence and effective efforts to
reduce abortion rate in Indonesia is limited and difficult to access. Meanwhile
such supporting information is necessary to enable the planning and evaluation
of abortion control programs. This paper exemplifies the use of a mathematical
model to explain an abortion decline scenario. The model employs determinants
proposed by Bongaarts, which include average reproductive period, contraceptive
prevalence and effectiveness, total fertility rate (TFR), and intended total
fertility rate (ITFR), as well as birth and abortion intervals. The data used
is from the 1991-2007 Indonesian Demography and Health Survey (Survei Demografi
dan Kesehatan Indonesia/SDKI), and the unit of analysis is women who had been
married and aged 15-49 years old. Based on the current contraceptive prevalence
level in Indonesia at 59-61%, the estimated total abortion rate is 1.9-2.2.
Based on the plot of this total abortion rate, an abortion decline scenario can
be estimated. At the current TFR level of 2.6, the required contraceptive
prevalence is 69% (9% increase) for a decrease of one abortion case per woman.
With a delay of one year in the age of the first marriage and a birth interval
of three years, it is estimated that the abortion rate will decline from 3.05
to 0.69 cases per woman throughout her reproductive period. Based on the
assumption of contraceptive prevalence growth at 1-1.4%, it can be estimated
that abortion rate will reach nearly 0 between 2018 and 2022.
|
Topological Spatial Model Checking is a recent paradigm that combines Model
Checking with the topological interpretation of Modal Logic. The Spatial Logic
of Closure Spaces, SLCS, extends Modal Logic with reachability connectives
that, in turn, can be used for expressing interesting spatial properties, such
as "being near to" or "being surrounded by". SLCS constitutes the kernel of a
solid logical framework for reasoning about discrete space, such as graphs and
digital images, interpreted as quasi discrete closure spaces. In particular,
the spatial model checker VoxLogicA, that uses an extended version of SLCS, has
been used successfully in the domain of medical imaging. However, SLCS is not
restricted to discrete space. Following a recently developed geometric
semantics of Modal Logic, we show that it is possible to assign an
interpretation to SLCS in continuous space, admitting a model checking
procedure, by resorting to models based on polyhedra. In medical imaging such
representations of space are increasingly relevant, due to recent developments
of 3D scanning and visualisation techniques that exploit mesh processing. We
demonstrate feasibility of our approach via a new tool, PolyLogicA, aimed at
efficient verification of SLCS formulas on polyhedra, while inheriting some
well-established optimization techniques already adopted in VoxLogicA. Finally,
we provide a geometric definition of bisimilarity, proving that it
characterises logical equivalence.
|
We show, in detail, that the only non-trivial black hole (BH) solutions for a
neutral as well as a charged spherically symmetric space-times, using the class
${\textit F(R)}={\textit R}\pm{\textit F_1 (R)}$, must have metric potentials
in the form $h(r)=\frac{1}{2}-\frac{2M}{r}$ and
$h(r)=\frac{1}{2}-\frac{2M}{r}+\frac{q^2}{r^2}$. These BHs have a non-trivial
form of Ricci scalar, i.e., $R=\frac{1}{r^2}$ and the form of ${\textit F_1
(R)}=\mp\frac{\sqrt{\textit R}} {3M} $. We repeat the same procedure for
(Anti-)de Sitter, (A)dS, space-times and obtain the metric potentials of the
neutral as well as the charged cases in the form $h(r)=\frac{1}{2}-\frac{2M}{r}-\frac{2\Lambda r^2}
{3} $ and $h(r)=\frac{1}{2}-\frac{2M}{r}+\frac{q^2}{r^2}-\frac{2\Lambda r^2}
{3} $, respectively. The Ricci scalar of the (A)dS space-times has the form
${\textit R}=\frac{1+8r^2\Lambda}{r^2}$ and the form of ${\textit
F_1(R)}=\mp\frac{\textit 2\sqrt{R-8\Lambda}}{3M}$. We calculate the
thermodynamical quantities, Hawking temperature, entropy, quasi-local energy,
and Gibbs-free energy for all the derived BHs, that behaves asymptotically as
flat and (A)dS, and show that they give acceptable physical thermodynamical
quantities consistent with the literature. Finally, we prove the validity of
the first law of thermodynamics for those BHs.
|
The energy demand is growing daily at an accelerated pace due to the
internationalization and development of civilization. Yet proper economic
utilization of additional energy generated by the Islanded Hybrid Microgrid
System (IHMS) that was not consumed by the load is a major global challenge. To
resolve the above-stated summons, this research focuses on a multi-optimal
combination of IHMS for the Penang Hill Resort located on Penang Island,
Malaysia, with effective use of redundant energy. To avail this excess energy
efficiently, an electrical heater along with a storage tank has been designed
concerning diversion load having proper energy management. Furthermore, the
system design has adopted the HOMER Pro software for profitable and practical
analysis. Alongside, MATLAB Simulink was used to stabilize the whole system;
values of 2068 and 19,072 kW were determined as the approximate peak and
average load per day for the resort. Moreover, the
optimized IHMS comprises Photovoltaic (PV) cells, Diesel Generator,
Wind Turbine, Battery, and Converter. Adjacent to this, the optimized system
ensued in having a Net Present Cost (NPC) of $21.66 million, Renewable Fraction
(RF) of 27.8%, Cost of Energy (COE) of $0.165/kWh, CO2 of 1,735,836 kg/year,
and excess energy of 517.29 MWh per annum. When the diesel-generator-led
system was included in the scheme, a COE of $0.217/kWh, CO2 of 5,124,879
kg/year, and NPC of $23.25 million were attained. The amount of excess energy
is effectively utilized with an electrical heater as a diversion load.
|
We analyze a series of trials that randomly assigned Wikipedia users in
Germany to different web banners soliciting donations. The trials varied the
framing or content of social information about how many other users are
donating. Framing a given number of donors in a negative way increased donation
rates. Variations in the communicated social information had no detectable
effects. The findings are consistent with the results from a survey experiment.
In line with donations being strategic substitutes, the survey documents that
the negative framing lowers beliefs about others' donations. Varying the social
information, in contrast, is ineffective in changing average beliefs.
|
The $\Xi N$ interaction is investigated in the quark mean-field (QMF) model
based on recent observables of the $\Xi^-+^{14}\rm{N}$ ($_{\Xi^-}^{15}\rm{C}$)
system. The experimental data on the binding energy of the $1p$-state $\Xi^-$
hyperon in $_{\Xi^-}^{15}\rm{C}$ hypernuclei from the KISO, IBUKI, E07-T011, and
E176-14-03-35 events are combined as $B_{\Xi^-}(1p)=1.14\pm0.11$ MeV. With
this constraint, the coupling strengths between the vector meson and $\Xi$
hyperon are fixed in three QMF parameter sets. Meanwhile, the $\Xi^-$ binding
energy of $1s$ state in $_{\Xi^-}^{15}\rm{C}$ is predicted as
$B_{\Xi^-}(1s)=5.66\pm0.38$ MeV with the same interactions, which is
completely consistent with the data from the KINKA and IRRAWADDY events.
Finally, the single $\Xi N$ potential is calculated in symmetric nuclear
matter in the framework of QMF models. It is $U_{\Xi N}=-11.96\pm 0.85$ MeV at
nuclear saturation density, which will contribute to the study of the
strangeness degree of freedom in compact stars.
|
Consider the Vlasov--Poisson--Landau system with Coulomb potential in the
weakly collisional regime on a $3$-torus, i.e. $$\begin{aligned} \partial_t
F(t,x,v) + v_i \partial_{x_i} F(t,x,v) + E_i(t,x) \partial_{v_i} F(t,x,v) = \nu
Q(F,F)(t,x,v),\\ E(t,x) = \nabla \Delta^{-1} (\int_{\mathbb R^3} F(t,x,v)\,
\mathrm{d} v - \frac{1}{(2\pi)^3}\int_{\mathbb T^3} \int_{\mathbb R^3}
F(t,x,v)\, \mathrm{d} v \, \mathrm{d} x), \end{aligned}$$ with $\nu\ll 1$. We
prove that for $\epsilon>0$ sufficiently small (but independent of $\nu$),
initial data which are $O(\epsilon \nu^{1/3})$-Sobolev space perturbations from
the global Maxwellians lead to global-in-time solutions which converge to the
global Maxwellians as $t\to \infty$. The solutions exhibit uniform-in-$\nu$
Landau damping and enhanced dissipation.
Our main result is analogous to an earlier result of Bedrossian for the
Vlasov--Poisson--Fokker--Planck equation with the same threshold. However,
unlike in the Fokker--Planck case, the linear operator cannot be inverted
explicitly due to the complexity of the Landau collision operator. For this
reason, we develop an energy-based framework, which combines Guo's weighted
energy method with the hypocoercive energy method and the commuting vector
field method. The proof also relies on pointwise resolvent estimates for the
linearized density equation.
|
A new sensing paradigm known as lab-on-skin, where stretchable and
flexible smart sensor devices are integrated into the skin, provides direct
monitoring and diagnostic interfaces to the body. Distributed lab-on-skin
wireless sensors have the ability to provide continuous long-term assessment of
the skin health. This paper proposes a distributed skin health monitoring
system using a wireless body area network. The system is responsive to
dynamic changes in skin health and reports on them remotely. The
proposed algorithm detects the abnormal skin and creates an energy efficient
data aggregation tree covering the affected area while putting the unnecessary
sensors to sleep mode. The algorithm responds to the changing conditions of the
skin by dynamically adapting the size and shape of the monitoring trees to that
of the abnormal skin areas, thus providing comprehensive monitoring.
Simulation results demonstrate the application and utility of the proposed
algorithm for changing wound shapes and sizes.
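A minimal sketch of the tree-building idea (our own simplification; the paper's
algorithm also optimizes for energy) is a breadth-first aggregation tree over
the sensors reporting abnormal readings, with all remaining sensors put to
sleep:

```python
# Illustrative sketch, not the paper's algorithm: grow a BFS aggregation tree
# over sensors covering the abnormal skin area; other sensors sleep.
from collections import deque

def build_aggregation_tree(adj, abnormal, sink):
    parent, frontier = {sink: None}, deque([sink])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v in abnormal and v not in parent:
                parent[v] = u                  # v forwards its data towards u
                frontier.append(v)
    asleep = set(adj) - set(parent)            # sensors outside the wound sleep
    return parent, asleep
```

Re-running this routine as the wound evolves adapts the tree's size and shape
to the abnormal area.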
|
Astrophysical time series often contain periodic signals. The large and
growing volume of time series data from photometric surveys demands
computationally efficient methods for detecting and characterizing such
signals. The most efficient algorithms available for this purpose are those
that exploit the $\mathcal{O}(N\log N)$ scaling of the Fast Fourier Transform
(FFT). However, these methods are not optimal for non-sinusoidal signal shapes.
Template fits (or periodic matched filters) optimize sensitivity for a priori
known signal shapes but at a significant computational cost. Current
implementations of template periodograms scale as $\mathcal{O}(N_f N_{obs})$,
where $N_f$ is the number of trial frequencies and $N_{obs}$ is the number of
lightcurve observations, and due to non-convexity, they do not guarantee the
best fit at each trial frequency, which can lead to spurious results. In this
work, we present a non-linear extension of the Lomb-Scargle periodogram to
obtain a template-fitting algorithm that is both accurate (globally optimal
solutions are obtained except in pathological cases) and computationally
efficient (scaling as $\mathcal{O}(N_f\log N_f)$ for a given template). The
non-linear optimization of the template fit at each frequency is recast as a
polynomial zero-finding problem, where the coefficients of the polynomial can
be computed efficiently with the non-equispaced fast Fourier transform. We show
that our method, which uses truncated Fourier series to approximate templates,
is an order of magnitude faster than existing algorithms for small problems
($N_{obs}\lesssim 10$ observations) and two orders of magnitude faster for long
base-line time series with $N_{obs} \gtrsim 10^4$ observations. An open-source
implementation of the fast template periodogram is available at
https://www.github.com/PrincetonUniversity/FastTemplatePeriodogram.
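For contrast, the brute-force $\mathcal{O}(N_f N_{obs})$ baseline that the fast
template periodogram improves upon can be written in a few lines; this sketch
(ours, not the linked implementation) grid-searches the template phase at each
trial frequency and fits amplitude and offset by linear least squares:

```python
# Naive template periodogram: O(N_f * N_obs * n_phase); illustration only.
import numpy as np

def naive_template_periodogram(t, y, freqs, template, n_phase=50):
    chi2_0 = np.sum((y - y.mean())**2)
    power = np.zeros(len(freqs))
    for i, f in enumerate(freqs):
        best = np.inf
        for tau in np.linspace(0, 1, n_phase, endpoint=False):
            m = template((t * f + tau) % 1.0)           # template at shifted phase
            A = np.column_stack([m, np.ones_like(m)])   # fit amplitude + offset
            resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            best = min(best, np.sum(resid**2))
        power[i] = 1 - best / chi2_0
    return power
```

The fast algorithm replaces the inner phase search with exact polynomial
zero-finding, removing both the grid approximation and the per-frequency cost.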
|
Near the end of the 16th century, Wilhelm IV, Landgraf von Hessen-Kassel, set
up an observatory with the main goal of increasing the accuracy of stellar
positions, primarily for use in astrology and for calendar purposes. A new star
catalogue was compiled from measurements of altitudes and angles between stars
and a print ready version was prepared listing measurements as well as
equatorial and ecliptic coordinates of stellar positions. Unfortunately, this
catalogue did not appear in print until 1666, long after the dissemination of
Brahe's catalogue. With the data given in the manuscript we are able to analyze
the accuracy of measurements and computations. The measurements and the
computations are very accurate, thanks to the instrument maker and
mathematician Jost B\"urgi. The star catalogue is more accurate by a factor of
two than the later catalogue of Tycho Brahe.
|
With the rapid growth of data, how to extract effective information from data
is one of the most fundamental problems. In this paper, based on Tikhonov
regularization, we propose an effective method for reconstructing the function
and its derivative from scattered data with random noise. Since the noise level
is not assumed to be small, we use the large amount of data to reduce the random
error, and use a relatively small number of knots for interpolation. An
indicator function for our algorithm is constructed. It indicates where the
numerical results are good or may not be good. The corresponding error
estimates are obtained. We show how to choose the number of interpolation knots
in the reconstruction process for balancing the random errors and interpolation
errors. Numerical examples show the effectiveness and speed of our method.
It should be remarked that the algorithm in this paper can also be applied to
online data.
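The core computation is a standard Tikhonov-type linear solve; a generic sketch
(not the paper's exact scheme or parameter choice) that smooths noisy samples
by penalising second differences and then differentiates the reconstruction:

```python
# Generic Tikhonov smoothing sketch: u = argmin ||u - y||^2 + lam*||D2 u||^2.
import numpy as np

def tikhonov_smooth(y, lam):
    n = len(y)
    D2 = np.diff(np.eye(n), 2, axis=0)        # second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

x = np.linspace(0, 2*np.pi, 200)
y = np.sin(x) + 0.1*np.random.randn(len(x))   # noisy samples
u = tikhonov_smooth(y, lam=50.0)              # reconstructed function
du = np.gradient(u, x)                        # reconstructed derivative
```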
|
In the context of the longitudinally boost-invariant Bjorken flow with
transverse expansion, we use three different numerical methods to analyze the
emergence of attractor solutions in an ideal gas of massless particles
exhibiting constant shear viscosity to entropy density ratio $\eta / s$. The
fluid energy density is initialized using a Gaussian profile in the transverse
plane, while the ratio $\chi = \mathcal{P}_L / \mathcal{P}_T$ between the
longitudinal and transverse pressures is set at initial time $\tau_0$ to a
constant value $\chi_0$ throughout the system employing the
Romatschke-Strickland distribution. We introduce the hydrodynamization time
$\delta \tau_H = (\tau_H - \tau_0)/ \tau_0$ based on the time $\tau_H$ when the
standard deviation $\sigma(\chi)$ of a family of solutions with different
$\chi_0$ reaches a minimum value at the point of maximum convergence of the
solutions. In the $0+1{\rm D}$ setup, $\delta \tau_H$ exhibits scale
invariance, being a function only of $(\eta / s) / (\tau_0 T_0)$. With
transverse expansion, we find a similar $\delta \tau_H$ computed with respect
to the local initial temperature, $T_0(r)$. We highlight the transition between
the regimes where the longitudinal and transverse expansions dominate. We find
that the hydrodynamization time required for the attractor solution to be
reached increases with the distance from the origin, as expected based on the
properties of the $0+1{\rm D}$ system defined by the local initial conditions.
We argue that hydrodynamization is predominantly the effect of the longitudinal
expansion, being significantly influenced by the transverse dynamics only for
small systems or for large values of $\eta / s$.
|
Continuum kinetic theories provide an important tool for the analysis and
simulation of particle suspensions. When those particles are anisotropic, the
addition of a particle orientation vector to the kinetic description yields a
$2d-1$ dimensional theory which becomes intractable to simulate, especially in
three dimensions or near states where the particles are highly aligned.
Coarse-grained theories that track only moments of the particle distribution
functions provide a more efficient simulation framework, but require closure
assumptions. For the particular case where the particles are apolar, the
Bingham closure has been found to agree well with the underlying kinetic
theory; yet the closure is non-trivial to compute, requiring the solution of an
often nearly-singular nonlinear equation at every spatial discretization point
at every timestep. In this paper, we present a robust, accurate, and efficient
numerical scheme for evaluating the Bingham closure, with a controllable
error/efficiency tradeoff. To demonstrate the utility of the method, we carry
out high-resolution simulations of a coarse-grained continuum model for a
suspension of active particles in parameter regimes inaccessible to kinetic
theories. Analysis of these simulations reveals that inaccurately computing the
closure can act to effectively limit spatial resolution in the coarse-grained
fields. Pushing these simulations to the high spatial resolutions enabled by
our method reveals a coupling between vorticity and topological defects in the
suspension director field, as well as signatures of energy transfer between
scales in this active fluid model.
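The nature of the closure computation is easiest to see in a heavily simplified
two-dimensional, axisymmetric setting (the paper treats the full 3D,
nearly-singular case with a purpose-built scheme): given the leading eigenvalue
$\mu$ of the second-moment tensor, one solves for the Bingham parameter
$\lambda$ whose distribution reproduces it.

```python
# 2-D toy version of the Bingham closure: psi(theta) ~ exp(lam*cos(2*theta)).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def mu_of_lambda(lam):
    Z = quad(lambda th: np.exp(lam*np.cos(2*th)), 0, np.pi)[0]
    m = quad(lambda th: np.cos(th)**2 * np.exp(lam*np.cos(2*th)), 0, np.pi)[0]
    return m / Z

def bingham_parameter(mu_target, bracket=50.0):
    # The root problem becomes nearly singular as mu_target -> 1 (strong
    # alignment), which is exactly the regime the paper's scheme must handle.
    return brentq(lambda lam: mu_of_lambda(lam) - mu_target, -bracket, bracket)

print(bingham_parameter(0.75))                 # moderate alignment
```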
|
We present forecasted cosmological constraints from combined measurements of
galaxy cluster abundances from the Simons Observatory and galaxy clustering
from a DESI-like experiment on two well-studied modified gravity models, the
chameleon-screened $f(R)$ Hu-Sawicki model and the nDGP braneworld Vainshtein
model.
A Fisher analysis is conducted using $\sigma_8$ constraints derived from
thermal Sunyaev-Zel'dovich (tSZ) selected galaxy clusters, as well as linear
and mildly non-linear redshift-space 2-point galaxy correlation functions. We
find that the cluster abundances drive the constraints on the nDGP model while
$f(R)$ constraints are led by galaxy clustering. The two tracers of the
cosmological gravitational field are found to be complementary, and their
combination significantly improves constraints, on $f(R)$ in particular, in
comparison to each individual tracer alone.
For a fiducial model of $f(R)$ with $\text{log}_{10}(f_{R0})=-6$ and $n=1$ we
find combined constraints of $\sigma(\text{log}_{10}(f_{R0}))=0.48$ and
$\sigma(n)=2.3$, while for the nDGP model with $n_{\text{nDGP}}=1$ we find
$\sigma(n_{\text{nDGP}})=0.087$. Around a fiducial General Relativity (GR)
model, we find a $95\%$ confidence upper limit on $f(R)$ of
$f_{R0}\leq5.68\times 10^{-7}$. Our results present the exciting potential to
utilize upcoming galaxy and CMB survey data available in the near future to
discern and/or constrain cosmic deviations from GR.
|
The measurement of the epicyclic frequencies is a widely used astrophysical
technique to infer information on a given self-gravitating system and on the
related gravity background. We derive their explicit expressions in static and
spherically symmetric wormhole spacetimes. We discuss how these theoretical
results can be applied to: (1) detect the presence of a wormhole,
distinguishing it from a black hole; (2) reconstruct wormhole solutions by
fitting observational data, once available. Finally, we discuss the
physical implications of our proposed epicyclic method.
|
We investigate quantitative estimates in homogenization of the locally
periodic parabolic operator with multiscales
$$ \partial_t- \text{div} (A(x,t,x/\varepsilon,t/\kappa^2) \nabla ),\qquad
\varepsilon>0,\, \kappa>0. $$ Under proper assumptions, we establish the
full-scale interior and boundary Lipschitz estimates. These results are new
even for the case $\kappa=\varepsilon$, and for the periodic operators $
\partial_t-\text{div}(A(x/\varepsilon, t/\varepsilon^{\ell}) \nabla ),$
$0<\varepsilon,\ell<\infty, $ of which the large-scale Lipschitz estimate down
to $\varepsilon+\varepsilon^{\ell/2}$ was recently established by the first
author and Shen in Arch. Ration. Mech. Anal. 236(1): 145--188 (2020). Due to
the non-self-similar structure, the full-scale estimates do not follow directly
from the large-scale estimates and the blow-up argument. As a byproduct, we
also derive the convergence rates for the corresponding initial-Dirichlet
problems, which extend the results in the aforementioned literature to more
general settings.
|
Contact integrators are a family of geometric numerical schemes which
guarantee the conservation of the contact structure. In this work we review the
construction of both the variational and Hamiltonian versions of these methods.
We illustrate some of the advantages of geometric integration in the
dissipative setting by focusing on models inspired by recent studies in
celestial mechanics and cosmology.
|
We present a numerical implementation of the recently developed
unconditionally convergent representation of general Heun functions as integral
series. We produce two codes in Python available for download, one of which is
especially aimed at reproducing the output of Mathematica's HeunG function. We
show that the present code compares favorably with Mathematica's HeunG and with
an Octave/Matlab code of Motygin, in particular when the Heun function is to be
evaluated at a large number of points and lower accuracy is sufficient. We
suggest further improvements concerning the accuracy and discuss the issue of
singularities.
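As a simple cross-check of any Heun implementation (and not the integral-series
method described above), one can integrate the Heun ODE directly from its local
Frobenius solution at $z=0$, where $w(0)=1$ and $w'(0)=q/(a\gamma)$; parameter
values below are arbitrary:

```python
# Numerical cross-check: integrate the general Heun equation with scipy.
import numpy as np
from scipy.integrate import solve_ivp

def heun_rhs(z, w, a, q, al, be, ga, de):
    ep = al + be + 1 - ga - de                 # Fuchs relation fixes epsilon
    w0, w1 = w
    w2 = -((ga/z + de/(z-1) + ep/(z-a)) * w1
           + (al*be*z - q) / (z*(z-1)*(z-a)) * w0)
    return [w1, w2]

a, q, al, be, ga, de = 2.0, 0.5, 1.0, 1.5, 1.0, 1.0
z0 = 1e-6                                      # start just off the singular point
sol = solve_ivp(heun_rhs, [z0, 0.5], [1.0, q/(a*ga)],
                args=(a, q, al, be, ga, de), rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])                            # HeunG at z = 0.5, approximately
```

Such ODE integration degrades near the singular points, which is where
dedicated representations such as the integral series are most valuable.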
|
Thin film lithium niobate (LN) has recently emerged as a playground for
chip-scale nonlinear optics and leads to highly efficient frequency conversions
from near infrared to near-visible bands. For many nonlinear and quantum
photonics applications, it is desirable to operate deep into the visible band
within LN's transparency window. However, the strong material dispersion at
short wavelengths makes phase-matching difficult, necessitating sub-micron
scale control of domain structures for efficient quasi-phase-matching (QPM).
Here we report the operation of thin-film LN at blue wavelengths and
high-fidelity poling of a thin-film LN waveguide in this regime. As a result,
quasi-phase matching is realized between IR (871 nm) and blue (435.5 nm)
wavelengths in a straight waveguide, prompting strong blue light generation
with a conversion efficiency of $2900\pm400\,\%\,\mathrm{W^{-1}cm^{-2}}$.
|
High dimensional categorical data are routinely collected in biomedical and
social sciences. It is of great importance to build interpretable parsimonious
models that perform dimension reduction and uncover meaningful latent
structures from such discrete data. Identifiability is a fundamental
requirement for valid modeling and inference in such scenarios, yet is
challenging to address when there are complex latent structures. In this
article, we propose a class of identifiable multilayer (potentially deep)
discrete latent structure models for discrete data, termed Bayesian pyramids.
We establish the identifiability of Bayesian pyramids by developing novel
transparent conditions on the pyramid-shaped deep latent directed graph. The
proposed identifiability conditions can ensure Bayesian posterior consistency
under suitable priors. As an illustration, we consider the two-latent-layer
model and propose a Bayesian shrinkage estimation approach. Simulation results
for this model corroborate the identifiability and estimability of model
parameters. Applications of the methodology to DNA nucleotide sequence data
uncover useful discrete latent features that are highly predictive of sequence
types. The proposed framework provides a recipe for interpretable unsupervised
learning of discrete data, and can be a useful alternative to popular machine
learning methods.
|
Neural dialogue models suffer from low-quality responses when interacting with
users in practice, demonstrating difficulty in generalizing beyond the training data.
Recently, knowledge distillation has been used to successfully regularize the
student by transferring knowledge from the teacher. However, the teacher and
the student are trained on the same dataset and tend to learn similar feature
representations, whereas the most general knowledge should be found through
differences. The finding of general knowledge is further hindered by the
unidirectional distillation, as the student should obey the teacher and may
discard some knowledge that is truly general but refuted by the teacher. To
this end, we propose a novel training framework, where the learning of general
knowledge is more in line with the idea of reaching consensus, i.e., finding
common knowledge that is beneficial across all of the different datasets through
diversified learning partners. Concretely, the training task is divided into a
group of subtasks with the same number of students. Each student assigned to
one subtask not only is optimized on the allocated subtask but also imitates
multi-view feature representation aggregated from other students (i.e., student
peers), which induces students to capture common knowledge among different
subtasks and alleviates the over-fitting of students on the allocated subtasks.
To further enhance generalization, we extend the unidirectional distillation to
the bidirectional distillation that encourages the student and its student
peers to co-evolve by exchanging complementary knowledge with each other.
Empirical results and analysis demonstrate that our training framework
effectively improves the model generalization without sacrificing training
efficiency.
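The bidirectional exchange can be sketched as a symmetric distillation
objective (an assumed form for illustration; the paper's exact loss may
differ), in which the student and a peer regularize each other through
softened KL terms:

```python
# Sketch of a bidirectional distillation term between a student and one peer.
import torch
import torch.nn.functional as F

def bidirectional_kd_loss(student_logits, peer_logits, T=2.0):
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    log_p_t = F.log_softmax(peer_logits / T, dim=-1)
    kl_sp = F.kl_div(log_p_s, log_p_t.exp(), reduction='batchmean')  # student <- peer
    kl_ps = F.kl_div(log_p_t, log_p_s.exp(), reduction='batchmean')  # peer <- student
    return (T * T) * (kl_sp + kl_ps)
```

Each model then minimizes its own task loss plus such terms averaged over all
of its peers.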
|
Visual explanation methods have an important role in the prognosis of the
patients where the annotated data is limited or unavailable. There have been
several attempts to use gradient-based attribution methods to localize
pathology from medical scans without using segmentation labels. This research
direction has been impeded by the lack of robustness and reliability. These
methods are highly sensitive to the network parameters. In this study, we
introduce a robust visual explanation method to address this problem for
medical applications. We provide an innovative general-purpose visual
explanation algorithm and, as an example application, demonstrate its
effectiveness for quantifying lung lesions caused by COVID-19 with high accuracy
and robustness without using dense segmentation labels. This approach overcomes
the drawbacks of commonly used Grad-CAM and its extended versions. The premise
behind our proposed strategy is that the information flow is minimized while
ensuring the classifier prediction stays similar. Our findings indicate that
the bottleneck condition provides a more stable severity estimation than the
similar attribution methods.
|
The cuprate superconductors are characterized by numerous ordering
tendencies, with the nematic order being the most distinct form of order. Here
the intertwinement of the electronic nematicity with superconductivity in
cuprate superconductors is studied based on the kinetic-energy-driven
superconductivity. It is shown that the optimized Tc takes a dome-like shape
with the weak and strong strength regions on each side of the optimal strength
of the electronic nematicity, where the optimized Tc reaches its maximum. This
dome-like dependence of Tc on the nematic-order strength indicates that the
electronic nematicity enhances superconductivity. Moreover, this nematic order
induces the anisotropy of the electron Fermi surface (EFS), where although the
original EFS with the four-fold rotation symmetry is broken up into that with a
residual two-fold rotation symmetry, this EFS with the two-fold rotation
symmetry is still truncated to form the Fermi arcs, with most of the spectral
weight located at the tips of the Fermi arcs. Concomitantly, these tips of
the Fermi arcs connected by the wave vectors ${\bf q}_{i}$ construct an octet
scattering model; however, the partial wave vectors and their respective
symmetry-corresponding partners occur with unequal amplitudes, leading to
ordered states that break both rotation and translation symmetries. As a
natural consequence, the electronic structure is inequivalent between the
$k_{x}$ and $k_{y}$ directions. These anisotropic features of the electronic
structure are also confirmed via the result of the autocorrelation of the
single-particle excitation spectra, where the breaking of the rotation symmetry
is verified by the inequivalence on the average of the electronic structure at
the two Bragg scattering sites. Furthermore, the strong energy dependence of
the order parameter of the electronic nematicity is also discussed.
|
The unsupervised task of aligning two or more distributions in a shared
latent space has many applications including fair representations, batch effect
mitigation, and unsupervised domain adaptation. Existing flow-based approaches
estimate multiple flows independently, which is equivalent to learning multiple
full generative models. Other approaches require adversarial learning, which
can be computationally expensive and challenging to optimize. Thus, we aim to
jointly align multiple distributions while avoiding adversarial learning.
Inspired by efficient alignment algorithms from optimal transport (OT) theory
for univariate distributions, we develop a simple iterative method to build
deep and expressive flows. Our method decouples each iteration into two
subproblems: 1) form a variational approximation of a distribution divergence
and 2) minimize this variational approximation via closed-form invertible
alignment maps based on known OT results. Our empirical results give evidence
that this iterative algorithm achieves competitive distribution alignment at
low computational cost while being able to naturally handle more than two
distributions.
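The univariate building block is the classical closed-form OT map between
one-dimensional distributions, which is just monotone quantile matching; a toy
version (ours, for intuition only):

```python
# Closed-form 1-D OT map via sorting: push `source` onto `target`'s quantiles.
import numpy as np

def ot_map_1d(source, target):
    order = np.argsort(source)
    idx = np.linspace(0, len(target) - 1, len(source)).round().astype(int)
    aligned = np.empty_like(source)
    aligned[order] = np.sort(target)[idx]      # monotone rearrangement
    return aligned

src = np.random.randn(1000) * 2 + 5
tgt = np.random.randn(800)
aligned = ot_map_1d(src, tgt)
print(aligned.mean(), aligned.std())           # ~0, ~1: matches target quantiles
```

The method composes such closed-form alignment maps into a deep, expressive
invertible flow.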
|
Situations in which immediate self-interest and long-term collective interest
conflict often require some form of influence to prevent them from leading to
undesirable or unsustainable outcomes. Next to sanctioning, social influence
and social structure, it is possible that strategic solutions can exist for
these social dilemmas. However, the existence of strategies that enable a
player to exert control over the long-run outcomes can be difficult to show, and
different situations allow for different levels of strategic influence. Here,
we investigate the effect of threshold nonlinearities on the possibilities of
exerting unilateral control in finitely repeated n-player public goods games
and snowdrift games. These models can describe situations in which a collective
effort is necessary in order for a benefit to be created. We identify
conditions in terms of a cooperator threshold for the existence of generous,
extortionate and equalizing zero-determinant (ZD) strategies. Our results show
that, for both games, the thresholds prevent equalizing ZD strategies from
existing. In the snowdrift game, introducing a cooperator threshold has no
effect on the region of feasible extortionate ZD strategies. For extortionate
strategies in the public goods game, the threshold only restricts the region of
enforceable strategies for small values of the public goods multiplier.
Generous ZD strategies exist for both games, but introducing a cooperator
threshold forces the slope more towards the value of a fair strategy, where the
player has approximately the same payoff as the average payoff of their
opponents.
|
Selfie-based biometrics has great potential for a wide range of applications
from marketing to higher-security environments like online banking. This is now
especially relevant since, e.g., periocular verification is contactless and
therefore safe to use in pandemics such as COVID-19. However, selfie-based
biometrics faces some challenges since there is limited control over the data
acquisition conditions. Therefore, super-resolution has to be used to increase
the quality of the captured images. Most of the state of the art
super-resolution methods use deep networks with large filters, thereby needing
to train and store a correspondingly large number of parameters, which makes
their use difficult on the mobile devices commonly used for selfie-based
biometrics.
In order to achieve an efficient super-resolution method, we propose an
Efficient Single Image Super-Resolution (ESISR) algorithm, which takes into
account a trade-off between the efficiency of the deep neural network and the
size of its filters. To that end, the method implements a novel loss function
based on the Sharpness metric. This metric turns out to be more suitable for
increasing the quality of the eye images. Our method drastically reduces the
number of parameters when compared with the Deep CNN with Skip Connection and
Network in Network (DCSCN): from 2,170,142 to 28,654 parameters when the image size is
increased by a factor of x3. Furthermore, the proposed method keeps the sharp
quality of the images, which is highly relevant for biometric recognition
purposes. The results on remote verification systems with raw images reached an
Equal Error Rate (EER) of 8.7% for FaceNet and 10.05% for VGGFace. When
embedding vectors from periocular images were used, the best results reached an
EER of 8.9% (x3) for FaceNet and 9.90% (x4) for VGGFace.
|
In this work, we use a combination of formal upscaling and data-driven
machine learning for explicitly closing a nonlinear transport and reaction
process in a multiscale tissue. The classical effectiveness factor model is
used to formulate the macroscale reaction kinetics. We train a multilayer
perceptron network using training data generated by direct numerical
simulations over microscale examples. Once trained, the network is used for
numerically solving the upscaled (coarse-grained) differential equation
describing mass transport and reaction in two example tissues. The network is
described as being explicit in the sense that the network is trained using
macroscale concentrations and gradients of concentration as components of the
feature space.
Network training and solutions to the macroscale transport equations were
computed for two different tissues. The two tissue types (brain and liver)
exhibit markedly different geometrical complexity and spatial scale (cell size
and sample size). The upscaled solutions for the average concentration are
compared with numerical solutions derived from the microscale concentration
fields by a posteriori averaging. There are two outcomes of this work of
particular note: 1) we find that the trained network exhibits good
generalizability, and it is able to predict the effectiveness factor with high
fidelity for realistically-structured tissues despite the significantly
different scale and geometry of the two example tissue types; and 2) the
approach results in an upscaled PDE with an effectiveness factor that is
predicted (implicitly) via the trained neural network. This latter result
emphasizes our purposeful connection between conventional averaging methods
with the use of machine learning for closure; this contrasts with some machine
learning methods for upscaling where the exact form of the macroscale equation
remains unknown.
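Schematically, the trained closure is consumed by the macroscale solver as a
plain function of the local features; a sketch with placeholder data (the real
training pairs come from the microscale DNS, and the feature names are ours):

```python
# Sketch: an explicit, learned effectiveness-factor closure for the upscaled PDE.
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: [mean concentration, |grad concentration|]; y: effectiveness factor.
X = np.random.rand(5000, 2)                    # placeholder for DNS-generated data
y = 1.0 / (1.0 + 5.0 * X[:, 0])                # placeholder closure relation
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

def effectiveness_factor(c, grad_c):           # called inside the PDE time-stepper
    return model.predict([[c, grad_c]])[0]
```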
|
In the last few years, Lopez-Permouth and several collaborators have
introduced a new approach in the study of the classical projectivity,
injectivity and flatness of modules. In this way, they introduced subprojectivity
domains of modules as a tool to measure, in some sense, the projectivity level of
such a module (so not just to determine whether or not the module is
projective). In this paper we develop a new treatment of subprojectivity in
any abelian category which sheds more light on some of its various important
aspects. Namely, in terms of subprojectivity, some classical results are
unified and some classical rings are characterized. It is also shown that, in
some categories, subprojectivity measures notions other than
projectivity. Furthermore, this new approach allows us, in addition to
establishing nice generalizations of known results, to construct various new
examples such as the subprojectivity domain of the class of Gorenstein
projective objects, the class of semi-projective complexes and particular types
of representations of a finite linear quiver. The paper ends with a study
showing that the fact that a subprojectivity domain of a class coincides with
its first right Ext-orthogonal class can be characterized in terms of the
existence of preenvelopes and precovers.
|
Clustering is an unsupervised learning technique that is useful when working
with a large volume of unlabeled data. Complex dynamical systems in real life
often entail data streaming from a large number of sources. Although it is
desirable to use all source variables to form accurate state estimates, it is
often impractical due to large computational power requirements, and
sufficiently robust algorithms to handle these cases are not common. We propose
a hierarchical time series clustering technique based on symbolic dynamic
filtering and Granger causality, which serves as a dimensionality reduction and
noise-rejection tool. Our process forms a hierarchy of variables in the
multivariate time series with clustering of relevant variables at each level,
thus separating out noise and less relevant variables. A new distance metric
based on Granger causality is proposed and used for the time series clustering,
as well as validated on empirical data sets. Experimental results from
occupancy detection and building temperature estimation tasks show fidelity to
the empirical data sets while maintaining state-prediction accuracy with
substantially reduced data dimensionality.
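A possible concrete form of such a metric (the paper's definition may differ in
detail) scores a pair of series by the strength of the Granger-causal link, so
strongly linked series end up close and cluster together:

```python
# Sketch of a Granger-causality-based distance between two series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_distance(x, y, maxlag=5):
    data = np.column_stack([y, x])             # tests whether x Granger-causes y
    res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
    pvals = [res[lag][0]['ssr_ftest'][1] for lag in range(1, maxlag + 1)]
    return min(pvals)                          # small p-value => small distance
```

Symmetrizing (e.g. taking the minimum over the two directions) yields a
distance usable by standard hierarchical clustering routines.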
|
With the advancement of IoT and artificial intelligence technologies, and the
need for rapid application growth in fields such as security entrance control
and financial business trade, facial information processing has become an
important means for achieving identity authentication and information security.
In this paper, we propose a multi-feature fusion algorithm based on integral
histograms and a real-time update tracking particle filtering module. First,
edge and colour features are extracted, and the colour histogram and edge
features are weighted to describe facial features; the fusion
of colour and edge features is made adaptive by using fusion coefficients to
improve face tracking reliability. Then, the integral histogram is incorporated
into the particle filtering algorithm to simplify the calculation steps of
complex particles. Finally, the tracking window size is adjusted in real time
according to the change in the average distance from the particle centre to the
edge of the current model and the initial model to reduce the drift problem and
achieve stable tracking with significant changes in the target dimension. The
results show that the algorithm improves video tracking accuracy, simplifies
particle operation complexity, improves the speed, and has good
anti-interference ability and robustness.
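The measurement-update-resample loop at the heart of the tracker can be
sketched as follows (a bare-bones particle filter; the paper's version adds
integral histograms, adaptive colour/edge fusion, and window resizing;
`frame_hist` and `target_hist` are hypothetical helpers):

```python
# Schematic particle filter step with a histogram-similarity likelihood.
import numpy as np

def pf_step(particles, weights, frame_hist, target_hist, noise=5.0):
    particles = particles + np.random.randn(*particles.shape) * noise  # motion model
    for i, p in enumerate(particles):
        cand = frame_hist(p)                   # histogram around particle location
        weights[i] = np.sum(np.sqrt(cand * target_hist))  # Bhattacharyya coeff.
    weights = weights / weights.sum()
    idx = np.random.choice(len(particles), len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

The integral histogram makes the `frame_hist` evaluations cheap, which is what
keeps the filter real-time.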
|
In this article, we present integral representations of diagonals of power
series. Such representations are obtained by lowering the integration
multiplicity for the previously known integral representation. The procedure is
carried out within the framework of Leray's residue theory. The concept of the
amoeba of a complex analytic hypersurface plays an essential role in the
construction of new integral representations.
|
This paper describes an adaptive method in continuous time for the estimation
of external fields by a team of $N$ agents. The agents $i$ each explore
subdomains $\Omega^i$ of a bounded subset of interest $\Omega\subset X :=
\mathbb{R}^d$. Ideal adaptive estimates $\hat{g}^i_t$ are derived for each
agent from a distributed parameter system (DPS) that takes values in the
scalar-valued reproducing kernel Hilbert space $H_X$ of functions over $X$.
Approximations of the evolution of the ideal local estimate $\hat{g}^i_t$ of
agent $i$ are constructed solely using observations made by agent $i$ on a fine
time scale. Since the local estimates on the fine time scale are constructed
independently for each agent, we say that the method is strictly decentralized.
On a coarse time scale, the individual local estimates $\hat{g}^i_t$ are fused
via the expression $\hat{g}_t:=\sum_{i=1}^N\Psi^i \hat{g}^i_t$ that uses a
partition of unity $\{\Psi^i\}_{1\leq i\leq N}$ subordinate to the cover
$\{\Omega^i\}_{i=1,\ldots,N}$ of $\Omega$. Realizable algorithms are obtained
by constructing finite dimensional approximations of the DPS in terms of
scattered bases defined by each agent from samples along their trajectories.
Rates of convergence of the error in the finite dimensional approximations are
derived in terms of the fill distance of the samples that define the scattered
centers in each subdomain. The qualitative performance of the convergence rates
for the decentralized estimation method is illustrated via numerical
simulations.
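The coarse-scale fusion step is a pointwise blend; a one-dimensional toy
version (the paper's estimates live in an RKHS, but the partition-of-unity
mechanics are the same):

```python
# Toy fusion of two local estimates with a partition of unity on [0, 1].
import numpy as np

x = np.linspace(0, 1, 201)
g1 = np.sin(3*x)                  # agent 1's local estimate (reliable on [0, 0.6])
g2 = np.sin(3*x) + 0.02           # agent 2's local estimate (reliable on [0.4, 1])

w1 = np.clip((0.6 - x) / 0.2, 0, 1)       # supported on agent 1's subdomain
w2 = np.clip((x - 0.4) / 0.2, 0, 1)       # supported on agent 2's subdomain
psi1, psi2 = w1/(w1 + w2), w2/(w1 + w2)   # partition of unity: psi1 + psi2 = 1
g_fused = psi1*g1 + psi2*g2               # \hat g = sum_i Psi^i \hat g^i
```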
|
Convolutional Neural Networks (CNNs) deployed in real-life applications such
as autonomous vehicles have been shown to be vulnerable to manipulation attacks,
such as poisoning attacks and fine-tuning. Hence, it is essential to ensure the
integrity and authenticity of CNNs because compromised models can produce
incorrect outputs and behave maliciously. In this paper, we propose a
self-contained tamper-proofing method, called DeepiSign, to ensure the
integrity and authenticity of CNN models against such manipulation attacks.
DeepiSign applies the idea of fragile invisible watermarking to securely embed
a secret and its hash value into a CNN model. To verify the integrity and
authenticity of the model, we retrieve the secret from the model, compute the
hash value of the secret, and compare it with the embedded hash value. To
minimize the effects of the embedded secret on the CNN model, we use a
wavelet-based technique to transform weights into the frequency domain and
embed the secret into less significant coefficients. Our theoretical analysis
shows that DeepiSign can hide up to 1KB secret in each layer with minimal loss
of the model's accuracy. To evaluate the security and performance of DeepiSign,
we performed experiments on four pre-trained models (ResNet18, VGG16, AlexNet,
and MobileNet) using three datasets (MNIST, CIFAR-10, and Imagenet) against
three types of manipulation attacks (targeted input poisoning, output
poisoning, and fine-tuning). The results demonstrate that DeepiSign is
verifiable without degrading the classification accuracy, and robust against
representative CNN manipulation attacks.
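Conceptually, the embedding resembles quantization-index modulation in the
wavelet domain; the sketch below (a simplification with assumed parameters,
omitting DeepiSign's hashing and per-layer spreading, and assuming an even
number of weights) hides bits in the parity of quantized detail coefficients:

```python
# Simplified wavelet-domain bit embedding/extraction in a layer's weights.
import numpy as np
import pywt

def embed_bits(weights, bits, step=1e-3):
    cA, cD = pywt.dwt(weights.ravel(), 'haar')     # approx / detail coefficients
    cD = cD.copy()
    for i, b in enumerate(bits):
        q = int(np.floor(cD[i] / step))
        if q % 2 != b:                             # parity of index encodes the bit
            q += 1
        cD[i] = q * step
    return pywt.idwt(cA, cD, 'haar')[:weights.size].reshape(weights.shape)

def extract_bits(weights, n, step=1e-3):
    _, cD = pywt.dwt(weights.ravel(), 'haar')
    return [int(np.round(cD[i] / step)) % 2 for i in range(n)]
```

Because each detail coefficient moves by at most `step`, the weights, and hence
the model's accuracy, are barely affected, while tampering with the weights
scrambles the recovered bits.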
|
This article develops new closed-form variance expressions for power analyses
for commonly used difference-in-differences (DID) and comparative interrupted
time series (CITS) panel data estimators. The main contribution is to
incorporate variation in treatment timing into the analysis. The power formulas
also account for other key design features that arise in practice:
autocorrelated errors, unequal measurement intervals, and clustering due to the
unit of treatment assignment. We consider power formulas for both
cross-sectional and longitudinal models and allow for covariates. An
illustrative power analysis provides guidance on appropriate sample sizes. The
key finding is that accounting for treatment timing increases required sample
sizes. Further, DID estimators have considerably more power than standard CITS
and ITS estimators. An available Shiny R dashboard performs the sample size
calculations for the considered estimators.
|
In the graph signal processing (GSP) literature, it has been shown that
signal-dependent graph Laplacian regularizer (GLR) can efficiently promote
piecewise constant (PWC) signal reconstruction for various image restoration
tasks. However, for planar image patches, like total variation (TV), GLR may
suffer from the well-known "staircase" effect. To remedy this problem, we
generalize GLR to gradient graph Laplacian regularizer (GGLR) that provably
promotes piecewise planar (PWP) signal reconstruction for the image
interpolation problem -- a 2D grid with random missing pixels that requires
completion. Specifically, we first construct two higher-order gradient graphs
to connect local horizontal and vertical gradients. Each local gradient is
estimated using structure tensor, which is robust using known pixels in a small
neighborhood, mitigating the problem of larger noise variance when computing
gradient of gradients. Moreover, unlike total generalized variation (TGV), GGLR
retains the quadratic form of GLR, leading to an unconstrained quadratic
programming (QP) problem per iteration that can be solved quickly using
conjugate gradient (CG). We derive the mean-square-error minimizing weight
parameter for GGLR, trading off bias and variance of the signal estimate.
Experiments show that GGLR outperformed competing schemes in interpolation
quality for severely damaged images at a reduced complexity.
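Because GGLR keeps the quadratic form, each iteration reduces to the normal
equations $(\mathbf{H}^\top\mathbf{H} + \mu \mathbf{L})\mathbf{x} =
\mathbf{H}^\top\mathbf{y}$, which CG solves quickly; a 1D toy version of this
solve (graph and parameters are illustrative only, not the paper's gradient
graphs):

```python
# Sketch of the per-iteration QP solve in GLR/GGLR-style restoration.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 100
mask = np.random.rand(n) > 0.6                 # observed samples (random missing)
y = np.where(mask, np.sin(np.linspace(0, 6, n)), 0.0)
H = diags(mask.astype(float))                  # sampling operator
L = diags([np.r_[1, 2*np.ones(n-2), 1], -np.ones(n-1), -np.ones(n-1)],
          [0, -1, 1])                          # path-graph Laplacian
mu = 0.5
x, info = cg(H.T @ H + mu * L, H.T @ y)        # unconstrained QP via CG
```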
|
In order to study the ram-pressure interaction between radio galaxies and the
intracluster medium, we analyse a sample of 208 highly-bent narrow-angle tail
radio sources (NATs) in clusters, detected by the LOFAR Two-metre Sky Survey.
For NATs within $7\,R_{500}$ of the cluster centre, we find that their tails
are distributed anisotropically with a strong tendency to be bent radially away
from the cluster, which suggests that they are predominantly on radially
inbound orbits. Within $0.5\,R_{500}$, we also observe an excess of NATs with
their jets bent towards the cluster core, indicating that these outbound
sources fade away soon after passing pericentre. For the subset of NATs with
spectroscopic redshifts, we find the radial bias in the jet angles exists even
out to $10\,R_{500}$, far beyond the virial radius. The presence of NATs at
such large radii implies that significant deceleration of the accompanying
inflowing intergalactic medium must be occurring there to create the ram
pressure that bends the jets, and potentially even triggers the radio source.
|
We consider the problem of preprocessing two strings $S$ and $T$, of lengths
$m$ and $n$, respectively, in order to be able to efficiently answer the
following queries: Given positions $i,j$ in $S$ and positions $a,b$ in $T$,
return the optimal alignment of $S[i \mathinner{.\,.} j]$ and $T[a
\mathinner{.\,.} b]$. Let $N=mn$. We present an oracle with preprocessing time
$N^{1+o(1)}$ and space $N^{1+o(1)}$ that answers queries in $\log^{2+o(1)}N$
time. In other words, we show that we can query the alignment of every two
substrings in almost the same time it takes to compute just the alignment of
$S$ and $T$. Our oracle uses ideas from our distance oracle for planar graphs
[STOC 2019] and exploits the special structure of the alignment graph.
Conditioned on popular hardness conjectures, this result is optimal up to
subpolynomial factors. Our results apply to both edit distance and longest
common subsequence (LCS).
The best previously known oracle with construction time and size
$\mathcal{O}(N)$ has slow $\Omega(\sqrt{N})$ query time [Sakai, TCS 2019], and
the one with size $N^{1+o(1)}$ and query time $\log^{2+o(1)}N$ (using a planar
graph distance oracle) has slow $\Omega(N^{3/2})$ construction time [Long &
Pettie, SODA 2021]. We improve both approaches by roughly a $\sqrt N$ factor.
|
The BINGO telescope was designed to measure the fluctuations of the 21-cm
radiation arising from the hyperfine transition of neutral hydrogen and aims to
measure the Baryon Acoustic Oscillations (BAO) from such fluctuations,
therefore serving as a pathfinder to future deeper intensity mapping surveys.
The requirements for Phase 1 of the project include a large reflector
system (two 40 m-class dishes in a crossed-Dragone configuration), illuminating
a focal plane with 28 horns to measure the sky with two circular polarisations
in a drift scan mode to produce measurements of the radiation in intensity as
well as the circular polarisation. In this paper we present the optical design
for the instrument. We describe the intensity and polarisation properties of
the beams and the optical arrangement of the horns in the focal plane to
produce a homogeneous and well-sampled map after the end of Phase 1. Our
analysis provides an optimal model for the location of the horns in the focal
plane, producing a homogeneous and Nyquist sampled map after the nominal survey
time. We arrive at an optimal configuration for the optical system, including
the focal plane positioning and the beam behavior of the instrument. We present
an estimate of the expected side lobes both for intensity and polarisation, as
well as the effect of band averaging on the final side lobes. The cross
polarisation leakage values for the final configuration allow us to conclude
that the optical arrangement meets the requirements of the project. We conclude
that the chosen optical design meets the requirements for the project in terms
of polarisation purity, area coverage as well as homogeneity of coverage so
that BINGO can perform a successful BAO experiment. We further conclude that
the requirements on the placement and r.m.s. error on the mirrors are also
achievable so that a successful experiment can be conducted. (Abridged)
|
We provide a quantitative asymptotic analysis for the nonlinear
Vlasov--Poisson--Fokker--Planck system with a large linear friction force and
high force-fields. The limiting system is a diffusive model with nonlocal
velocity fields often referred to as aggregation-diffusion equations. We show
that a weak solution to the Vlasov--Poisson--Fokker--Planck system strongly
converges to a strong solution to the diffusive model. Our proof relies on the
modulated macroscopic kinetic energy estimate based on the weak-strong
uniqueness principle together with a careful analysis of the Poisson equation.
|
Recent advances in unsupervised domain adaptation (UDA) show that
transferable prototypical learning presents a powerful means for class
conditional alignment, which encourages the closeness of cross-domain class
centroids. However, the cross-domain inner-class compactness and the underlying
fine-grained subtype structure remained largely underexplored. In this work, we
propose to adaptively carry out the fine-grained subtype-aware alignment by
explicitly enforcing the class-wise separation and subtype-wise compactness
with intermediate pseudo labels. Our key insight is that the unlabeled subtypes
of a class can be divergent to one another with different conditional and label
shifts, while inheriting the local proximity within a subtype. The cases
with and without prior information on the number of subtypes are investigated to
discover the underlying subtype structure in an online fashion. The proposed
subtype-aware dynamic UDA achieves promising results on medical diagnosis
tasks.
|
Viscoelastic fluids are non-Newtonian fluids that exhibit both "viscous" and
"elastic" characteristics in virtue of mechanisms to store energy and produce
entropy. Usually the energy storage properties of such fluids are modelled
using the same concepts as in the classical theory of nonlinear solids.
Recently new models for elastic solids have been successfully developed by
appealing to implicit constitutive relations, and these new models offer a
different perspective to the old topic of elastic response of materials. In
particular, a sub-class of implicit constitutive relations, namely relations
wherein the left Cauchy-Green tensor is expressed as a function of stress is of
interest. We show how to use this new perspective in the development of
mathematical models for viscoelastic fluids, and we provide a discussion of the
thermodynamic underpinnings of such models. We focus on the use of the Gibbs free
energy instead of the Helmholtz free energy, and using the standard
Giesekus/Oldroyd-B models, we show how the alternative approach works in the
case of well-known models. The proposed approach is straightforward to
generalise to more complex settings wherein the classical approach might be
impractical or even inapplicable.
|
Podcast episodes often contain material extraneous to the main content, such
as advertisements, interleaved within the audio and the written descriptions.
We present classifiers that leverage both textual and listening patterns in
order to detect such content in podcast descriptions and audio transcripts. We
demonstrate that our models are effective by evaluating them on the downstream
task of podcast summarization and show that we can substantively improve ROUGE
scores and reduce the extraneous content generated in the summaries.
|
All external electromagnetic fields in which the Klein-Gordon-Fock equation
admits the first-order symmetry operators are found, provided that in the
space-time $V_4$ a group of motion $G_3$ acts simply transitively on a non-null
subspace of transitivity $V_3$. It is shown that in the case of a Riemannian
space $V_n$, in which the group $G_r$ acts simply transitively, the algebra of
symmetry operators of the $n$-dimensional Klein-Gordon-Fock equation in an
external admissible electromagnetic field coincides with the algebra of
operators of the group $G_r$.
|
Air pollution has long been a serious environmental health challenge,
especially in metropolitan cities, where air pollutant concentrations are
exacerbated by the street canyon effect and high building density. Whilst
accurately monitoring and forecasting air pollution are highly crucial,
existing data-driven models fail to fully address the complex interaction
between air pollution and urban dynamics. Deep-AIR, our novel hybrid deep
learning framework that combines a convolutional neural network with a long
short-term memory network, aims to address this gap by providing fine-grained
city-wide air pollution estimation and station-wide forecasts. Our proposed
framework employs 1x1 convolution layers to strengthen the learning of
cross-feature spatial interaction between air pollution and important urban
dynamic features, particularly road density, building density/height, and
street canyon effect. Using Hong Kong and Beijing as case studies, Deep-AIR
achieves a higher accuracy than our baseline models. Our model attains an
accuracy of 67.6%, 77.2%, and 66.1% in fine-grained hourly estimation, 1-hr,
and 24-hr air pollution forecast for Hong Kong, and an accuracy of 65.0%,
75.3%, and 63.5% for Beijing. Our saliency analysis has revealed that for Hong
Kong, street canyon and road density are the best estimators for NO2, while
meteorology is the best estimator for PM2.5.
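The cross-feature interaction block amounts to per-location channel mixing; a
minimal PyTorch sketch of the 1x1-convolution idea (channel counts and layer
arrangement are assumptions, not the published architecture):

```python
# Sketch: 1x1 convolutions mixing pollution and urban-dynamics feature channels.
import torch
import torch.nn as nn

class CrossFeatureMixer(nn.Module):
    def __init__(self, in_channels=8, hidden=16):
        super().__init__()
        self.mix = nn.Sequential(              # 1x1 kernels: no spatial mixing,
            nn.Conv2d(in_channels, hidden, kernel_size=1),  # only cross-feature
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=1))

    def forward(self, x):                      # x: (batch, channels, H, W)
        return self.mix(x)

x = torch.randn(4, 8, 32, 32)                  # AQI, road density, height, ...
print(CrossFeatureMixer()(x).shape)            # torch.Size([4, 16, 32, 32])
```

The mixed feature maps would then feed the LSTM branch for the temporal
forecast.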
|
The counterintuitive fact that wave chaos appears in the bending spectrum of
free rectangular thin plates is presented. After extensive numerical
simulations, varying the ratio between the length of its sides, it is shown
that (i) frequency levels belonging to different symmetry classes cross each
other and (ii) for levels within the same symmetry sector, only avoided
crossings appear. The consequence of anticrossings is studied by calculating
the distributions of the ratio of consecutive level spacings for each symmetry
class. The resulting ratio distributions disagree with the expected Poissonian
result. They are then compared with some well-known transition distributions
between Poisson and the Gaussian orthogonal random matrix ensemble. It is found
that the distribution of the ratio of consecutive level spacings agrees with
the prediction of the Rosenzweig-Porter model. Also, the normal-mode vibration
amplitudes are found experimentally on aluminum plates, before and after an
avoided crossing for symmetrical-symmetrical, symmetrical-antisymmetrical, and
antisymmetrical-symmetrical classes. The measured modes show an excellent
agreement with our numerical predictions. The expected Poissonian distribution
is recovered for the simply supported rectangular plate.
|
Dating back to Euler, in classical analysis and number theory, the Hurwitz
zeta function $$ \zeta(z,q)=\sum_{n=0}^{\infty}\frac{1}{(n+q)^{z}}, $$ the
Riemann zeta function $\zeta(z)$, the generalized Stieltjes constants
$\gamma_k(q)$, the Euler constant $\gamma$, Euler's gamma function $\Gamma(q)$
and the digamma function $\psi(q)$ have many close connections in their
definitions and properties. Many integral, series, and infinite
product representations of them have also been found throughout history.
In this note, we try to provide a parallel story for the alternating Hurwitz
zeta function (also known as the Hurwitz-type Euler zeta function)
$$\zeta_{E}(z,q)=\sum_{n=0}^\infty\frac{(-1)^{n}}{(n+q)^{z}},$$ the alternating
zeta function $\zeta_{E}(z)$ (also known as the Dirichlet's eta function
$\eta(z)$), the modified Stieltjes constants $\tilde\gamma_k(q)$, the modified
Euler constant $\tilde\gamma_{0}$, the modified gamma function
$\tilde\Gamma(q)$ and the modified digamma function $\tilde\psi(q)$ (also known
as Nielsen's $\beta$ function). Many new integral, series and infinite
product representations of these constants and special functions have been
found. Along the way, we also obtain two new series expansions of $\pi$:
\begin{equation*} \frac{\pi^2}{12}=\frac34-\sum_{k=1}^\infty(\zeta_E(2k+2)-1)
\end{equation*} and \begin{equation*} \frac{\pi}{2}=
\log2+2\sum_{k=1}^\infty\frac{(-1)^k}{k!}\tilde\gamma_k(1)\sum_{j=0}^kS(k,j)j!.
\end{equation*}
|
Wearable devices hold great potential for promoting children's health and
well-being. However, research on kids' wearables is sparse and often focuses on
their use in the context of parental surveillance. To gain insight into the
current landscape of kids' wearables, we surveyed 47 wearable devices marketed
for children. We collected rich data on the functionality of these devices,
assessed how different features satisfy parents' information needs, and
identified opportunities for wearables to support children's needs and
interests. We found that many kids' wearables are technologically sophisticated
devices that focus on parents' ability to communicate with their children and
keep them safe, as well as encourage physical activity and nurture good habits.
We discuss how our findings could inform the design of wearables that serve as
more than monitoring devices, and instead support children and parents as equal
stakeholders, providing implications for kids' agency, long-term development,
and overall well-being. Finally, we identify future research efforts related to
designing for kids' self-tracking and collaborative tracking with parents.
|
Elucidating emergent regularities in intriguing crowd dynamics is a
fundamental scientific problem arising in multiple fields. In this work, based
on the social force model, we simulate the typical scenario of collective
escape towards a single exit and reveal the striking analogy of crowd dynamics
and crystallisation. With the outflow of the pedestrians, crystalline order
emerges in the compact crowd. In this process, the local misalignment and
global rearrangement of pedestrians are well rationalized in terms of the
characteristic motions of topological defects in the crystal. Exploiting the
notions from the physics of crystallisation further reveals the emergence of
multiple fast tracks in the collective escape.
|
Supersonic gas jets produced by converging-diverging (C-D) nozzles are
commonly used as targets for laser-plasma acceleration (LPA) experiments. A
major point of interest for these targets is the gas density at the region of
interaction where the laser ionizes the gas plume to create a plasma, providing
the acceleration structure. Tuning the density profiles at this interaction
region is crucial to LPA optimization. A "flat-top" density profile is desired
at this line of interaction to control laser propagation and high energy
electron acceleration, while a short high-density profile is often preferred
for lower-energy, tightly-focused laser-plasma interactions. A
particular design parameter of interest is the curvature of the nozzle's
diverging section. We examine three nozzle designs with different curvatures:
the concave "bell", straight conical and convex "trumpet" nozzles. We
demonstrate that, at mm-scale distances from the nozzle exit, the trumpet and
straight nozzles, if optimized, produce "flat-top" density profiles whereas the
bell nozzle creates focused regions of gas with higher densities. An
optimization procedure for the trumpet nozzle is derived and compared to the
straight nozzle optimization process. We find that the trumpet nozzle, by
providing an extra parameter of control through its curvature, is more
versatile for creating flat-top profiles and its optimization procedure is more
refined compared to the straight nozzle and the straight nozzle optimization
process. We present results for different nozzle designs from computational
fluid dynamics (CFD) simulations performed with the program ANSYS Fluent and
verify them experimentally using neutral density interferometry.
|
Almost 46% of the world's population resides in rural landscapes. Smart
villages, alongside smart cities, are the need of the hour for future economic
growth, improved agriculture, better health, and education. The smart village
is a concept that improves traditional rural life with the help of
digital transformation. The smart village is built up using heterogeneous
digital technologies pillared around the Internet of Things (IoT). There exist
many opportunities in research to design a low-cost, secure, and efficient
technical ecosystem. This article identifies the key application areas, where
the IoT can be applied in the smart village. The article also presents a
comparative study of communication technology options.
|
In many quantum materials, strong electron correlations lead to the emergence
of new states of matter. In particular, the study in the last decades of the
complex phase diagram of high temperature superconducting cuprates highlighted
intra-unit-cell electronic instabilities breaking discrete Ising-like
symmetries, while preserving the lattice translation invariance. Polarized
neutron diffraction experiments have provided compelling evidence supporting a
new form of intra-unit-cell magnetism, emerging concomitantly with the
so-called pseudogap state of these materials. This observation is currently
interpreted as the magnetic hallmark of an intra-unit-cell loop current order,
breaking both parity and time-reversal symmetries. More generally, this
magneto-electric state is likely to exist in a wider class of quantum materials
beyond superconducting cuprates. For instance, it has already been observed in
hole-doped Mott insulating iridates or in the spin liquid state of hole-doped
2-leg ladder cuprates.
|
We reconsider the thermodynamics of AdS black holes in the context of
gauge-gravity duality. In this new setting where both the cosmological constant
$\Lambda$ and the gravitational Newton constant $G$ are varied in the bulk, we
rewrite the first law in a new form containing both $\Lambda$ (associated with
thermodynamic pressure) and the central charge $C$ of the dual CFT and
their conjugate variables. We obtain a novel thermodynamic volume, in turn
leading to a new understanding of the van der Waals behavior of the charged AdS
black holes, in which phase changes are governed by the degrees of freedom in
the CFT. Compared to the "old" $P-V$ criticality, this new criticality is
"universal" (independent of the bulk pressure) and directly relates to the
thermodynamics of the dual field theory and its central charge.
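For concreteness, the rewritten first law described above can be sketched in the following schematic form, common in the holographic black hole chemistry literature; the rescaled charge/potential notation, the four-dimensional identification of $P$, and the proportionality $C \propto L^2/G$ are assumptions of this sketch rather than quotes from the paper.

```latex
% Schematic mixed first law with both bulk pressure and CFT central charge:
\begin{equation}
  dM \;=\; T\,dS \;+\; \tilde{\Phi}\,d\tilde{Q} \;+\; V\,dP \;+\; \mu\,dC ,
  \qquad
  P = -\frac{\Lambda}{8\pi G} = \frac{3}{8\pi G L^{2}} ,
  \qquad
  C \propto \frac{L^{2}}{G} ,
\end{equation}
% where L is the AdS radius, \tilde{\Phi} and \tilde{Q} are rescaled electric
% potential and charge, V is the novel thermodynamic volume, and \mu is the
% chemical potential conjugate to the central charge C.
```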
|
The Standard Model (SM) is augmented with a $\mathrm{U}(1)_{B-3L_\mu} $ gauge
symmetry spontaneously broken above the TeV scale when an SM-singlet scalar
condenses. Scalar leptoquarks $S_{1(3)} = (\overline{\mathbf{3}},\, \mathbf{1}
(\mathbf{3}),\, ^1\!/_3)$ charged under $\mathrm{U}(1)_{B-3L_\mu} $ mediate the
intriguing effects observed in muon $(g-2)$, $R_{K^{(*)}}$ and $b \to s \mu^+
\mu^-$, while generically evading all other phenomenological constraints. The
fermionic sector is minimally extended with three right-handed neutrinos, and a
successful type-I seesaw mechanism is realized. Charged lepton flavor violation
is effectively suppressed, and proton decay, a common prediction of
leptoquarks, is postponed to the dimension-6 effective Lagrangian. Unavoidable
radiative corrections in the Higgs mass and muon Yukawa favor leptoquark masses
interesting for collider searches. The parameters of the model are radiatively
stable and can be evolved by the renormalization group to the Planck scale
without inconsistencies. Alternative lepton-flavored gauge extensions of the
SM, under which leptoquarks become muoquarks, are proposed for comparison.
|
We present a novel spectral machine learning (SML) method in screening for
pancreatic mass using CT imaging. Our algorithm is trained with approximately
30,000 images from 250 patients (50 patients with normal pancreas and 200
patients with abnormal pancreas findings) based on public data sources. A test
accuracy of 94.6% was achieved in the out-of-sample diagnosis
classification based on a total of approximately 15,000 images from 113
patients, whereby 26 out of 32 patients with normal pancreas and all 81
patients with abnormal pancreas findings were correctly diagnosed. SML
automatically selects the fundamental images (on average 5 or 9 images per
patient) used in the diagnosis classification and achieves the above-mentioned
accuracy. The computational time is 75 seconds for diagnosing 113 patients on a
laptop with a standard CPU environment. Factors behind the high performance of
this well-designed integration of spectral learning and machine learning
include: 1) use of the eigenvectors corresponding to several of the largest
eigenvalues of the sample covariance matrix (spike eigenvectors) to choose
input attributes for classification training, retaining only the fundamental,
less noisy information of the raw images; 2) removal of irrelevant pixels based
on a mean-level spectral test, lowering memory requirements and enhancing
computational efficiency while maintaining superior classification accuracy;
and 3) adoption of state-of-the-art machine learning classifiers, gradient
boosting and random forest. Our methodology showcases the
practical utility and improved accuracy of image diagnosis in pancreatic mass
screening in the era of AI.
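The spike-eigenvector feature extraction described in point 1) can be sketched as follows: flattened images are projected onto the leading eigenvectors of the sample covariance matrix, and the resulting low-dimensional attributes are fed to gradient boosting. All names, array shapes, and parameter values here are illustrative assumptions, not the authors' implementation.

```python
# Sketch: "spike" eigenvector features + gradient boosting (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def spike_features(X, k=10):
    """Project rows of X onto the top-k eigenvectors ("spike" eigenvectors)
    of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)                  # center each pixel
    cov = (Xc.T @ Xc) / (X.shape[0] - 1)     # p x p sample covariance
    _, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    return Xc @ vecs[:, -k:]                 # keep the k largest

# Stand-in data: rows would be flattened CT slices after irrelevant pixels
# have been removed (e.g., by a mean-level spectral test); labels 0 = normal,
# 1 = abnormal pancreas findings. Random data used purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

clf = GradientBoostingClassifier(random_state=0).fit(spike_features(X), y)
print(clf.score(spike_features(X), y))       # in-sample sanity check
```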
|
In the Vaidya-Bonner de Sitter black hole space-time, the tunneling radiation
characteristics of fermions and bosons are corrected by taking Lorentz symmetry
breaking into account. The corresponding gamma matrices and ether-like field
vectors of the black hole are constructed, and the modified forms of the Dirac
equation for spin-1/2 fermions and of the Klein-Gordon equation for bosons in
the curved space-time of the black hole are then obtained. By solving these two
equations, corrected expressions for the surface gravity, Hawking temperature,
and tunneling rate of the black hole are obtained and discussed.
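Schematically, the kind of ether-type Lorentz-violating correction used in this line of work enters the semiclassical Hamilton-Jacobi/tunneling analysis through a modified dispersion relation; the form below is a generic illustration, and the coupling $\lambda$, the vector $u^\mu$, and the sign conventions are assumptions that vary between papers.

```latex
% Generic aether-type modified Hamilton-Jacobi equation and tunneling rate:
\begin{equation}
  g^{\mu\nu}\,\partial_{\mu}S\,\partial_{\nu}S
  \;+\; \lambda \bigl(u^{\mu}\partial_{\mu}S\bigr)^{2}
  \;+\; m^{2} \;=\; 0 ,
  \qquad
  \Gamma \;\propto\; \exp\!\bigl(-2\,\mathrm{Im}\,S\bigr) ,
\end{equation}
% where S is the semiclassical action, u^\mu the ether-like field vector, and
% the small parameter \lambda modifies the radial action, thereby correcting
% the surface gravity and Hawking temperature.
```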
|