Delay differential equations are of great importance in science, engineering,
medicine and biological models. These types of models incorporate time-delay
phenomena, which are useful for characterising real-world applications in
machine learning, mechanics, economics, electrodynamics and so on. Moreover,
special classes of functional differential equations have been investigated in
many studies. In this study, a numerical investigation of the retarded type of
these models, together with initial conditions, is introduced. The technique is
based on a polynomial approach with collocation points, which yields
approximate solutions to the problem. An error analysis of the approximate
solutions is also given, and the accuracy of the method is demonstrated by the
results. Illustrative examples are considered and a detailed analysis of the
problem is obtained. Finally, the future outlook is discussed in the
conclusion.
|
We report the results of the analyses of the cosmic ray data collected with a
4 tonne (3$\times$1$\times$1~m$^3$) active mass (volume) Liquid Argon
Time-Projection Chamber (TPC) operated in a dual-phase mode. We present a
detailed study of the TPC's response, its main detector parameters and
performance. The results are important for the understanding and further
developments of the dual-phase technology, thanks to the verification of key
aspects, such as the extraction of electrons from liquid to gas and their
amplification through the entire one square metre readout plane, gain
stability, purity and charge sharing between readout views.
|
Modern vehicles are complex cyber-physical systems made of hundreds of
electronic control units (ECUs) that communicate over controller area networks
(CANs). This inherited complexity has expanded the CAN attack surface which is
vulnerable to message injection attacks. These injections change the overall
timing characteristics of messages on the bus, and thus, to detect these
malicious messages, time-based intrusion detection systems (IDSs) have been
proposed. However, time-based IDSs are usually trained and tested on
low-fidelity datasets with unrealistic, labeled attacks. This makes the task of
evaluating, comparing, and validating IDSs difficult. Here we detail and
benchmark four time-based IDSs against the newly published ROAD dataset, the
first open CAN IDS dataset with real (non-simulated) stealthy attacks with
physically verified effects. We found that methods that perform hypothesis
testing by explicitly estimating message timing distributions have lower
performance than methods that seek anomalies in a distribution-related
statistic. In particular, these "distribution-agnostic" methods
outperform "distribution-based" methods by at least 55% in area under the
precision-recall curve (AUC-PR). Our results expand the body of knowledge of
CAN time-based IDSs by providing details of these methods and reporting their
results when tested on datasets with real advanced attacks. Finally, we develop
an after-market plug-in detector using lightweight hardware, which can be used
to deploy the best performing IDS method on nearly any vehicle.
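Below is a minimal sketch of one distribution-agnostic, time-based CAN detector in the spirit described above; it is not one of the four benchmarked IDSs, and all names and thresholds are assumptions. It tracks per-ID message inter-arrival times and flags deviations of a robust windowed statistic from a benign baseline, without fitting an explicit timing distribution.

```python
# Minimal sketch (not the paper's benchmarked IDSs): flag CAN messages whose
# windowed inter-arrival statistic deviates from a baseline learned on benign traffic.
from collections import defaultdict, deque
import statistics

class TimingIDS:
    def __init__(self, window=32, threshold=4.0):
        self.threshold = threshold                      # allowed deviation, in baseline MADs
        self.last_ts = {}                               # last timestamp per CAN ID
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.baseline = {}                              # (median gap, MAD) per CAN ID

    def train(self, benign_log):
        """benign_log: iterable of (timestamp, can_id) from attack-free traffic."""
        gaps, last = defaultdict(list), {}
        for ts, cid in benign_log:
            if cid in last:
                gaps[cid].append(ts - last[cid])
            last[cid] = ts
        for cid, g in gaps.items():
            med = statistics.median(g)
            mad = statistics.median(abs(x - med) for x in g) or 1e-9
            self.baseline[cid] = (med, mad)

    def observe(self, ts, cid):
        """Return True if this message is flagged as anomalous."""
        flagged = False
        if cid in self.last_ts and cid in self.baseline:
            self.history[cid].append(ts - self.last_ts[cid])
            med, mad = self.baseline[cid]
            # distribution-agnostic statistic: robust deviation of the window median
            flagged = abs(statistics.median(self.history[cid]) - med) / mad > self.threshold
        self.last_ts[cid] = ts
        return flagged
```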
|
A graph $ G $ is said to be $ (H;k) $-vertex stable if $ G $ contains
a~subgraph isomorphic to $ H $ even after removing any $ k $ of its vertices
alongside with their incident edges. We will denote by $ \text{stab}(H;k) $ the
minimum size among sizes of all $ (H;k) $-vertex stable graphs. In this paper
we consider a~case where the structure $ H $ is a~star graph $ K_{1,r} $ and
the number of vertices in $ G $ is exact, i.e., equal to $ 1 + r + k $. We
will show that under the above assumptions $ \text{stab}(K_{1,r};k) $ equals
either $ \frac{1}{2}(k + 1)(2r + k) $, $ \frac{1}{2}\big((r + k)^{2} - 1\big) $
or $ \frac{1}{2}(r + k)^{2} $. Moreover, we will characterize all the extremal
graphs.
|
We present a novel method for reliably explaining the predictions of neural
networks. We consider an explanation reliable if it identifies input features
relevant to the model output by considering the input and the neighboring data
points. Our method is built on the assumption of a smooth landscape of the
loss function of the model prediction: a locally consistent loss and gradient
profile. A theoretical analysis established in this study suggests that such
locally smooth model explanations are learned using a batch of noisy copies of
the input with L1 regularization on the saliency map. Extensive experiments
support the analysis results, revealing that the proposed saliency maps
retrieve the original classes of adversarial examples crafted against both
naturally and adversarially trained models, significantly outperforming
previous methods. We further demonstrate that this good performance results
from the learning capability of this method to identify input features that are
truly relevant to the model output of the input and the neighboring data
points, fulfilling the requirements of a reliable explanation.
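As a rough illustration of the idea of learning a saliency map from noisy copies of the input with L1 regularization, here is one plausible instantiation; the objective, mask parameterization, and all names are assumptions and not the authors' exact formulation.

```python
# A minimal sketch (assumed formulation): optimize a sparse saliency mask so that
# noisy copies of the input, masked by it, are still classified as the target class.
import torch

def smooth_saliency(model, x, target, n_copies=16, sigma=0.1,
                    l1_weight=1e-3, steps=100, lr=0.05):
    """model: a PyTorch classifier taking (B, C, H, W); x: (C, H, W); target: int."""
    saliency = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([saliency], lr=lr)
    labels = torch.full((n_copies,), target, dtype=torch.long, device=x.device)
    for _ in range(steps):
        noise = sigma * torch.randn(n_copies, *x.shape, device=x.device)
        noisy = x.unsqueeze(0) + noise                    # batch of noisy copies
        masked = noisy * torch.sigmoid(saliency)          # apply a soft saliency mask
        loss = torch.nn.functional.cross_entropy(model(masked), labels)
        loss = loss + l1_weight * saliency.abs().sum()    # L1 sparsity on the map
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(saliency).detach()
```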
|
We study certain physically-relevant subgeometries of binary symplectic polar
spaces $W(2N-1,2)$ of small rank $N$, when the points of these spaces
canonically encode $N$-qubit observables. Key characteristics of a subspace of
such a space $W(2N-1,2)$ are: the number of its negative lines, the
distribution of types of observables, the character of the geometric hyperplane
the subspace shares with the distinguished (non-singular) quadric of
$W(2N-1,2)$ and the structure of its Veldkamp space. In particular, we classify
and count polar subspaces of $W(2N-1,2)$ whose rank is $N-1$. $W(3,2)$ features
three negative lines of the same type and its $W(1,2)$'s are of five different
types. $W(5,2)$ is endowed with 90 negative lines of two types and its
$W(3,2)$'s split into 13 types. 279 out of 480 $W(3,2)$'s with three negative
lines are composite, i.e., they all originate from the two-qubit $W(3,2)$.
Given a three-qubit $W(3,2)$ and any of its geometric hyperplanes, there are
three other $W(3,2)$'s possessing the same hyperplane. The same holds if a
geometric hyperplane is replaced by a `planar' tricentric triad. A hyperbolic
quadric of $W(5,2)$ is found to host particular sets of seven $W(3,2)$'s, each
of them being uniquely tied to a Conwell heptad with respect to the quadric.
There is also a particular type of $W(3,2)$'s, a representative of which
features a point such that every line through it is negative. Finally, $W(7,2)$ is
found to possess 1908 negative lines of five types and its $W(5,2)$'s fall into
as many as 29 types. 1524 out of 1560 $W(5,2)$'s with 90 negative lines
originate from the three-qubit $W(5,2)$. Remarkably, the difference in the
number of negative lines for any two distinct types of four-qubit $W(5,2)$'s is
a multiple of four.
|
In this paper we study a broader type of generalized balls, which are
domains in complex projective space with possibly Levi-degenerate boundaries. We
prove rigidity theorems for proper holomorphic mappings among them by
exploring the structure of the moduli spaces of projective linear subspaces,
generalizing some earlier results for the ordinary generalized balls with
Levi-nondegenerate boundaries.
|
Decentralized data storage systems like the Interplanetary Filesystem (IPFS)
are becoming increasingly popular, e.g., as a data layer in blockchain
applications and for sharing content in a censorship-resistant manner. In IPFS,
data is hosted by an open set of peers, requests to which are broadcast to all
directly connected peers and routed via a distributed hash table (DHT). In this
paper, we showcase how the monitoring of said data requests allows for profound
insights about the IPFS network while simultaneously breaching individual
users' privacy. To this end, we present a passive monitoring methodology that
enables us to collect data requests of a significant and upscalable portion of
the total IPFS node population. Using a measurement setup implementing our
approach and data collected over a period of fifteen months, we demonstrate the
estimation of, among other things: the size of the IPFS network, activity
levels and structure, and content popularity distributions. We furthermore
present how our methodology can be abused for attacks on users' privacy. As a
demonstration, we identify and successfully surveil public IPFS/HTTP gateways,
thereby also uncovering their (normally hidden) node identifiers. We find that
the number of requests issued by public gateways is substantial, suggesting
heavy usage of these gateways. We give a detailed analysis of the
mechanics and reasons behind implied privacy threats and discuss possible
countermeasures.
|
The Kerr rotating black hole metric has unstable photon orbits that orbit
around the hole at fixed values of the Boyer-Lindquist coordinate $r$ that
depend on the axial angular momentum of the orbit, as well as on the parameters
of the hole. For zero orbital axial angular momentum, these orbits cross the
rotational axes at a fixed value of $r$ that depends on the mass $M$ and
angular momentum $J$ of the black hole. Nonzero angular momentum of the hole
causes the photon orbit to rotate so that its direction when crossing the north
polar axis changes from one crossing to the next by an angle I shall call
$\Delta\phi$, which depends on the black hole dimensionless rotation parameter
$a/M = cJ/(GM^2)$ by an equation involving a complete elliptic integral of the
first kind. When the black hole has $a/M \approx 0.994\,341\,179\,923\,26$,
which is nearly maximally rotating, a photon sent out in a constant-$r$
direction from the north polar axis at $r \approx 2.423\,776\,210\,035\,73\,
GM/c^2$ returns to the north polar axis in precisely the opposite direction (in
a frame nonrotating with respect to the distant stars), a photon boomerang.
|
Machine-learned potential energy surfaces (PESs) for molecules with more than
10 atoms are typically forced to use lower-level electronic structure methods
such as density functional theory and second-order Møller-Plesset perturbation
theory (MP2). While these are efficient and realistic, they fall short of the
accuracy of the ``gold standard'' coupled-cluster method, especially with
respect to reaction and isomerization barriers. We report a major step forward
in applying a $\Delta$-machine learning method to the challenging case of
acetylacetone, whose MP2 barrier height for H-atom transfer is low by roughly
1.5 kcal/mol relative to the benchmark CCSD(T) barrier of 3.2 kcal/mol. From a
database of 2151 local CCSD(T) energies, and training with as few as 430
energies, we obtain a new PES with a barrier of 3.49 kcal/mol in agreement with
the LCCSD(T) one of 3.54 kcal/mol and close to the benchmark value. Tunneling
splittings due to H-atom transfer are calculated using this new PES, providing
improved estimates over previous ones obtained using an MP2-based PES.
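The generic Delta-machine-learning step described above can be sketched as fitting a correction from the low-level surface to a small set of high-level energies; the regressor, descriptors, and hyperparameters below are assumptions, not the PES code used in the paper.

```python
# A minimal sketch of generic Delta-ML: learn E_high - E_low on a small set of
# geometries, then add the learned correction to the cheap low-level PES.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fit_delta_model(descriptors, e_low, e_high):
    """descriptors: (n, d) geometry features; e_low/e_high: low/high-level energies."""
    delta = np.asarray(e_high) - np.asarray(e_low)      # correction to be learned
    model = KernelRidge(kernel="rbf", alpha=1e-8, gamma=0.5)
    model.fit(descriptors, delta)
    return model

def corrected_energy(model, descriptor, e_low):
    """Delta-ML energy: low-level PES value plus the learned correction."""
    return e_low + model.predict(descriptor.reshape(1, -1))[0]
```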
|
Numerical solutions to high-dimensional partial differential equations (PDEs)
based on neural networks have seen exciting developments. This paper derives
complexity estimates of the solutions of $d$-dimensional second-order elliptic
PDEs in the Barron space, that is, the set of functions admitting an integral
representation of a certain parametric ridge function against a probability
measure on the parameters. We prove under appropriate assumptions that if the
coefficients and the source term of the elliptic PDE lie in Barron spaces, then
the solution of the PDE is $\epsilon$-close with respect to the $H^1$ norm to a
Barron function. Moreover, we prove dimension-explicit bounds for the Barron
norm of this approximate solution, depending at most polynomially on the
dimension $d$ of the PDE. As a direct consequence of the complexity estimates,
the solution of the PDE can be approximated on any bounded domain by a
two-layer neural network with respect to the $H^1$ norm with a
dimension-explicit convergence rate.
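For concreteness, one common form of the Barron-space representation mentioned above is sketched below; the paper's precise definition and choice of norm may differ in details.

```latex
% Sketch of a common Barron-space representation and norm (assumed form):
f(x) \;=\; \int a\,\sigma\!\left(w^{\top}x + b\right)\,\mathrm{d}\mu(a,w,b),
\qquad
\|f\|_{\mathcal{B}} \;=\; \inf_{\mu}\ \mathbb{E}_{\mu}\!\big[\,|a|\,(\|w\|_{1}+|b|)\,\big],
% where the infimum runs over all probability measures \mu realizing f.
```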
|
Layout designs are encountered in a variety of fields. For problems with many
design degrees of freedom, efficiency of design methods becomes a major
concern. In recent years, machine learning methods such as artificial neural
networks have been used increasingly to speed up the design process. A main
issue of many such approaches is the need for a large corpus of training data
that are generated using high-dimensional simulations. The high computational
cost associated with training data generation largely diminishes the efficiency
gained by using machine learning methods. In this work, an adaptive artificial
neural network-based generative design approach is proposed and developed. This
method uses a generative adversarial network to generate design candidates and
thus the number of design variables is greatly reduced. To speed up the
evaluation of the objective function, a convolutional neural network is
constructed as the surrogate model for function evaluation. The inverse design
is carried out using the genetic algorithm in conjunction with two neural
networks. A novel adaptive learning and optimization strategy is proposed,
which allows the design space to be effectively explored for the search for
optimal solutions. As such, the amount of training data needed is greatly
reduced. The performance of the proposed design method is demonstrated on two
heat source layout design problems. In both problems, optimal designs have been
obtained. Compared with several existing approaches, the proposed approach has
the best performance in terms of accuracy and efficiency.
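The overall loop described above (GAN generator for a compact design space, CNN surrogate for cheap evaluation, genetic search, and adaptive refinement with true simulations) can be sketched roughly as follows; all interfaces and hyperparameters here are hypothetical, not the authors' implementation.

```python
# High-level sketch (hypothetical interfaces) of an adaptive generative design loop.
import numpy as np

def adaptive_generative_design(generator, surrogate, simulate,
                               n_rounds=5, pop_size=64, n_refine=8, latent_dim=32):
    """generator(z) -> layouts; surrogate(layouts) -> predicted objective (lower is
    better); simulate(layouts) -> true objective; surrogate.update(x, y) refines it."""
    rng = np.random.default_rng(0)
    population = rng.normal(size=(pop_size, latent_dim))      # GA acts on latent codes
    for _ in range(n_rounds):
        scores = surrogate(generator(population))             # cheap surrogate evaluation
        order = np.argsort(scores)                            # best (lowest) first
        elite = population[order[: pop_size // 2]]            # selection
        children = elite + 0.1 * rng.normal(size=elite.shape) # mutation
        population = np.vstack([elite, children])
        # adaptive step: verify promising candidates with the true solver and
        # refine the surrogate near the current optimum
        best_layouts = generator(population[:n_refine])
        surrogate.update(best_layouts, simulate(best_layouts))
    return generator(population[:1])                          # best design found
```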
|
Accountability is widely understood as a goal for well governed computer
systems, and is a sought-after value in many governance contexts. But how can
it be achieved? Recent work on standards for governable artificial intelligence
systems offers a related principle: traceability. Traceability requires
establishing not only how a system worked but how it was created and for what
purpose, in a way that explains why a system has particular dynamics or
behaviors. It connects records of how the system was constructed and what the
system did mechanically to the broader goals of governance, in a way that
highlights human understanding of that mechanical operation and the decision
processes underlying it. We examine the various ways in which the principle of
traceability has been articulated in AI principles and other policy documents
from around the world, distill from these a set of requirements on software
systems driven by the principle, and systematize the technologies available to
meet those requirements. From our map of requirements to supporting tools,
techniques, and procedures, we identify gaps and needs separating what
traceability requires from the toolbox available for practitioners. This map
reframes existing discussions around accountability and transparency, using the
principle of traceability to show how, when, and why transparency can be
deployed to serve accountability goals and thereby improve the normative
fidelity of systems and their development processes.
|
Recent works have shown that a rich set of semantic directions exist in the
latent space of Generative Adversarial Networks (GANs), which enables various
facial attribute editing applications. However, existing methods may suffer from
poor attribute variation disentanglement, leading to unwanted changes in other
attributes when the desired one is altered. The semantic directions used by
existing methods are at the attribute level, which makes it difficult to model
complex attribute correlations, especially in the presence of attribute
distribution bias in the GAN's training set. In this paper, we propose a novel framework (IALS)
that performs Instance-Aware Latent-Space Search to find semantic directions
for disentangled attribute editing. The instance information is injected by
leveraging the supervision from a set of attribute classifiers evaluated on the
input images. We further propose a Disentanglement-Transformation (DT) metric
to quantify the attribute transformation and disentanglement efficacy and find
the optimal control factor between attribute-level and instance-specific
directions based on it. Experimental results on both GAN-generated and
real-world images collectively show that our method outperforms
state-of-the-art methods proposed recently by a wide margin. Code is available
at https://github.com/yxuhan/IALS.
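A rough sketch of the blending idea described above is given below: an instance-specific direction is obtained from an attribute classifier's gradient and mixed with the global attribute-level direction via a control factor. The interfaces are assumptions; the released IALS code should be consulted for the actual procedure.

```python
# Minimal sketch (assumed interfaces): blend attribute-level and instance-specific
# latent directions, with the instance direction taken from a classifier gradient.
import torch

def edit_latent(w, attr_direction, classifier, generator, alpha=0.5, step=1.0):
    """w: latent code; attr_direction: global semantic direction for the attribute;
    classifier(image) -> attribute score; generator(w) -> image."""
    w = w.detach().clone().requires_grad_(True)
    score = classifier(generator(w)).sum()                   # attribute score on this instance
    score.backward()
    inst_direction = w.grad / (w.grad.norm() + 1e-8)         # instance-aware direction
    attr_direction = attr_direction / (attr_direction.norm() + 1e-8)
    direction = alpha * attr_direction + (1 - alpha) * inst_direction  # control factor alpha
    return (w + step * direction).detach()
```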
|
We discuss the recent results on the muon anomalous magnetic moment in the
context of new physics models with light scalars. We propose a model in which
the one-loop contributions to g-2 of a scalar and a pseudoscalar naturally
cancel in the massless limit due to the symmetry structure of the model. This
model allows us to interpolate between two possible interpretations. In the first
interpretation, the results provide strong evidence of the existence of new
physics, dominated by the positive contribution of a CP-even scalar. In the
second one, supported by the recent lattice result, the data provides a strong
upper bound on new physics, specifically in the case of (negative) pseudoscalar
contributions. We emphasize that tree-level signatures of the new degrees of
freedom of the model are enhanced relative to conventional explanations of the
discrepancy. As a result, this model can be tested in the near future with
accelerator-based experiments and possibly also at the precision frontier.
|
Probabilistic point cloud registration methods are becoming more popular
because of their robustness. However, unlike point-to-plane variants of
iterative closest point (ICP) which incorporate local surface geometric
information such as surface normals, most probabilistic methods (e.g., coherent
point drift (CPD)) ignore such information and build Gaussian mixture models
(GMMs) with isotropic Gaussian covariances. This results in sphere-like GMM
components which only penalize the point-to-point distance between the two
point clouds. In this paper, we propose a novel method called CPD with Local
Surface Geometry (LSG-CPD) for rigid point cloud registration. Our method
adaptively adds different levels of point-to-plane penalization on top of the
point-to-point penalization based on the flatness of the local surface. This
results in GMM components with anisotropic covariances. We formulate point
cloud registration as a maximum likelihood estimation (MLE) problem and solve
it with the Expectation-Maximization (EM) algorithm. In the E step, we
demonstrate that the computation can be recast into simple matrix manipulations
and efficiently computed on a GPU. In the M step, we perform an unconstrained
optimization on a matrix Lie group to efficiently update the rigid
transformation of the registration. The proposed method outperforms
state-of-the-art algorithms in terms of accuracy and robustness on various
datasets captured with range scanners, RGBD cameras, and LiDARs. Also, it is
significantly faster than modern implementations of CPD. The source code is
available at https://github.com/ChirikjianLab/LSG-CPD.git.
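The covariance construction described above can be illustrated with a small sketch: the precision along the local surface normal is inflated according to flatness, so flat neighborhoods are penalized mostly in the point-to-plane sense. This is a simplified illustration with assumed parameter names, not the released LSG-CPD implementation.

```python
# Minimal sketch: anisotropic GMM covariance mixing point-to-point and
# point-to-plane penalties according to local flatness.
import numpy as np

def anisotropic_covariance(normal, sigma2, flatness, alpha_max=10.0):
    """normal: unit surface normal (3,); sigma2: isotropic variance;
    flatness in [0, 1], with 1 for locally planar neighborhoods."""
    n = normal.reshape(3, 1)
    alpha = alpha_max * flatness                  # stronger point-to-plane weight on flat patches
    # precision (inverse covariance): isotropic part plus extra penalty along the normal
    precision = (np.eye(3) + alpha * (n @ n.T)) / sigma2
    return np.linalg.inv(precision)               # variance shrinks along the normal direction
```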
|
Recent work has proven the effectiveness of transformers in many computer
vision tasks. However, the performance of transformers in gaze estimation is
still unexplored. In this paper, we employ transformers and assess their
effectiveness for gaze estimation. We consider two forms of vision transformers:
pure transformers and hybrid transformers. We first follow the
popular ViT and employ a pure transformer to estimate gaze from images. On the
other hand, we preserve the convolutional layers and integrate CNNs as well as
transformers. The transformer serves as a component to complement CNNs. We
compare the performance of the two transformers in gaze estimation. The hybrid
transformer significantly outperforms the pure transformer on all evaluation
datasets with fewer parameters. We further conduct experiments to assess the
effectiveness of the hybrid transformer and explore the advantage of
self-attention mechanism. Experiments show the hybrid transformer can achieve
state-of-the-art performance on all benchmarks with pre-training. To facilitate
further research, we release code and models at
https://github.com/yihuacheng/GazeTR.
|
When fitting N-body models to astronomical data - including transit times,
radial velocity, and astrometric positions at observed times - the derivatives
of the model outputs with respect to the initial conditions can help with model
optimization and posterior sampling. Here we describe a general-purpose
symplectic integrator for arbitrary orbital architectures, including those with
close encounters, which we have recast to maintain numerical stability and
precision for small step sizes. We compute the derivatives of the N-body
coordinates and velocities as a function of time with respect to the initial
conditions and masses by propagating the Jacobian along with the N-body
integration. For the first time we obtain the derivatives of the transit times
with respect to the initial conditions and masses using the chain rule, which
is quicker and more accurate than using finite differences or automatic
differentiation. We implement this algorithm in an open source package,
NbodyGradient.jl, written in the Julia language, which has been used in the
optimization and error analysis of transit-timing variations in the TRAPPIST-1
system. We present tests of the accuracy and precision of the code, and show
that it compares favorably in speed to other integrators which are written in
C.
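The chain-rule step referred to above can be written schematically via the implicit function theorem; the sketch below uses generic notation and is not the exact expression implemented in NbodyGradient.jl.

```latex
% Schematic chain rule for transit-time derivatives (assumed notation): if a
% transit time t_k is defined implicitly by a condition g(\mathbf{q}(t_k)) = 0
% on the instantaneous coordinates and velocities \mathbf{q}, then
\frac{\partial t_k}{\partial \mathbf{q}_0}
  \;=\; -\,\left(\frac{\partial g}{\partial \mathbf{q}}\,
        \frac{\partial \mathbf{q}(t_k)}{\partial \mathbf{q}_0}\right)
      \Big/ \frac{\mathrm{d} g}{\mathrm{d} t},
% where \partial \mathbf{q}(t_k)/\partial \mathbf{q}_0 is the Jacobian propagated
% along with the N-body integration, and \mathbf{q}_0 collects initial conditions and masses.
```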
|
This paper proposes a parallel computation strategy and a posterior-based
lattice expansion algorithm for efficient lattice rescoring with neural
language models (LMs) for automatic speech recognition. First, lattices from
first-pass decoding are expanded by the proposed posterior-based lattice
expansion algorithm. Second, each expanded lattice is converted into a minimal
list of hypotheses that covers every arc. Each hypothesis is constrained to be
the best path for at least one arc it includes. For each lattice, the neural LM
scores of the minimal list are computed in parallel and are then integrated
back to the lattice in the rescoring stage. Experiments on the Switchboard
dataset show that the proposed rescoring strategy obtains comparable
recognition performance and generates more compact lattices than a competitive
baseline method. Furthermore, the parallel rescoring method offers more
flexibility by simplifying the integration of PyTorch-trained neural LMs for
lattice rescoring with Kaldi.
|
Motivated by the appearance of fractional powers of line bundles in studies
of vector-like spectra in 4d F-theory compactifications, we analyze the
structure and origin of these bundles. Fractional powers of line bundles are
also known as root bundles and can be thought of as generalizations of spin
bundles. We explain how these root bundles are linked to inequivalent F-theory
gauge potentials of a $G_4$-flux.
While this observation is interesting in its own right, it is particularly
valuable for F-theory Standard Model constructions. In aiming for MSSMs, it is
desirable to argue for the absence of vector-like exotics. We work out the root
bundle constraints on all matter curves in the largest class of currently-known
F-theory Standard Model constructions without chiral exotics and gauge coupling
unification. On each matter curve, we conduct a systematic "bottom"-analysis of
all solutions to the root bundle constraints and all spin bundles. Thereby, we
derive a lower bound for the number of combinations of root bundles and spin
bundles whose cohomologies satisfy the physical demand of absence of
vector-like pairs.
On a technical level, this systematic study is achieved by a well-known
diagrammatic description of root bundles on nodal curves. We extend this
description by a counting procedure, which determines the cohomologies of
so-called limit root bundles on full blow-ups of nodal curves. By use of
deformation theory, these results constrain the vector-like spectra on the
smooth matter curves in the actual F-theory geometry.
|
The Bayesian approach is effective for inverse problems. The posterior
density distribution provides useful information about the unknowns. However, for
problems with non-unique solutions, classical estimators such as the
maximum a posteriori (MAP) and conditional mean (CM) are not sufficient. We
introduce two new estimators, the local maximum a posteriori (LMAP) and local
conditional mean (LCM). Their applications are demonstrated on three inverse
problems: an inverse spectral problem, an inverse source problem, and an
inverse medium problem.
|
We derive the planar limit of 2- and 3-point functions of single-trace chiral
primary operators of ${\cal N}=2$ SQCD on $S^4$, to all orders in the 't Hooft
coupling. In order to do so, we first obtain a combinatorial expression for the
planar free energy of a hermitian matrix model with an infinite number of
arbitrary single and double trace terms in the potential; this solution might
have applications in many other contexts. We then use these results to evaluate
the analogous planar correlation functions on ${\mathbb R}^4$. Specifically, we
compute all the terms with a single value of the $\zeta$ function for a few
planar 2- and 3-point functions, and conjecture general formulas for these
terms for all 2- and 3-point functions on ${\mathbb R}^4$.
|
The rotational sublevels of the key (000) and (010) vibrational states of the
H2S molecule were modeled with an accuracy close to experimental uncertainty
using the generating function and Euler approaches. The predictive ability of
the Hamiltonian parameters derived is tested against variational calculations.
Comparison of transition wavenumbers obtained from the presently calculated
set of H2S (000) and (010) energy levels with simulated (000)-(000),
(010)-(010) transitions included in the HITRAN 2016 database revealed large
discrepancies of up to 44 cm-1. Large sets of accurate rotational sublevels of the
(000) and (010) states are calculated.
|
The work is devoted to ways of modeling street traffic on a street layout of
established topology without traffic lights. The behavior of traffic
participants takes into account the individual inclinations of drivers to
creatively interpret traffic rules. Participant interactions are described by
game-theoretic models that provide information for simulation algorithms based
on cellular automata. Driver diversification comes down to two types often
considered in such research: DE(fective)-agent and CO(operative)-agent. Various
ways of using this description of traffic participants to examine the impact of
behavior on street traffic dynamics are shown. Directions for further detailed
analysis are indicated, which require basic research in the field of
game-theoretic models.
|
We study the processes $\gamma \gamma \to \eta_c \to \eta' K^+ K^-$, $\eta'
\pi^+ \pi^-$, and $\eta \pi^+ \pi^-$ using a data sample of 519 fb$^{-1}$
recorded with the BaBar detector operating at the SLAC PEP-II asymmetric-energy
$e^+e^-$ collider at center-of-mass energies at and near the $\Upsilon(nS)$ ($n
= 2,3,4$) resonances. This is the first observation of the decay $\eta_c \to
\eta' K^+ K^-$ and we measure the branching fraction $\Gamma(\eta_c \to \eta'
K^+ K^-)/(\Gamma(\eta_c \to \eta' \pi^+ \pi^-)=0.644\pm 0.039_{\rm stat}\pm
0.032_{\rm sys}$. Significant interference is observed between $\gamma \gamma
\to \eta_c\to \eta \pi^+ \pi^-$ and the non-resonant two-photon process $\gamma
\gamma \to \eta \pi^+ \pi^-$. A Dalitz plot analysis is performed of $\eta_c$
decays to $\eta' K^+ K^-$, $\eta' \pi^+ \pi^-$, and $\eta \pi^+ \pi^-$.
Combined with our previous analysis of $\eta_c \to K \bar K \pi$, we measure
the $K^*_0(1430)$ parameters and the ratio between its $\eta' K$ and $\pi K$
couplings. The decay $\eta_c \to \eta' \pi^+ \pi^-$ is dominated by the
$f_0(2100)$ resonance, also observed in $J/\psi$ radiative decays. A new
$a_0(1700) \to \eta \pi$ resonance is observed in the $\eta_c \to \eta \pi^+
\pi^-$ channel. We also compare $\eta_c$ decays to $\eta$ and $\eta'$ final
states in association with scalar mesons as they relate to the identification
of the scalar glueball.
|
Colloidal self-assembly -- the spontaneous organization of colloids into
ordered structures -- has been considered key to produce next-generation
materials. However, the present-day staggering variety of colloidal building
blocks and the limitless number of thermodynamic conditions make a systematic
exploration intractable. The true challenge in this field is to turn this logic
around, and to develop a robust, versatile algorithm to inverse design colloids
that self-assemble into a target structure. Here, we introduce a generic
inverse design method to efficiently reverse-engineer crystals, quasicrystals,
and liquid crystals by targeting their diffraction patterns. Our algorithm
relies on the synergetic use of an evolutionary strategy for parameter
optimization, and a convolutional neural network as an order parameter, and
provides a new way forward for the inverse design of experimentally feasible
colloidal interactions, specifically optimized to stabilize the desired
structure.
|
There has been recently a growing interest in studying adversarial examples
on natural language models in the black-box setting. These methods attack
natural language classifiers by perturbing certain important words until the
classifier label is changed. In order to find these important words, these
methods rank all words by importance by querying the target model word by word
for each input sentence, resulting in high query inefficiency. A new
interesting approach was introduced that addresses this problem through
interpretable learning to learn the word ranking instead of previous expensive
search. The main advantage of using this approach is that it achieves
comparable attack rates to the state-of-the-art methods, yet faster and with
fewer queries, where fewer queries are desirable to avoid suspicion towards the
attacking agent. Nonetheless, this approach sacrificed the useful information
that could be leveraged from the target classifier for the sake of query
efficiency. In this paper we study the effect of leveraging the target model
outputs and data on both attack rates and average number of queries, and we
show that both can be improved, with a limited overhead of additional queries.
|
A growing number of eclipsing binary systems of the "HW Vir" kind (i.e.,
composed of a subdwarf-B/O primary star and an M dwarf secondary) show
variations in their orbital period, also called Eclipse Time Variations (ETVs).
Their physical origin is not yet known with certainty: while some ETVs have
been claimed to arise from dynamical perturbations due to the presence of
circumbinary planetary companions, other authors suggest that the Applegate
effect or other unknown stellar mechanisms could be responsible for them. In
this work, we present twenty-eight unpublished high-precision light curves of
one of the most controversial of these systems, the prototype HW Virginis. We
homogeneously analysed the new eclipse timings together with historical data
obtained between 1983 and 2012, demonstrating that the planetary models
previously claimed do not fit the new photometric data, besides being
dynamically unstable. In an effort to find a new model able to fit all the
available data, we developed a new approach based on a global-search genetic
algorithm and eventually found two new distinct families of solutions that fit
the observed timings very well, yet are dynamically unstable on the 10^5-year time
scale. This serves as a cautionary tale on the existence of formal solutions
that apparently explain ETVs but are not physically meaningful, and on the need
to carefully test their stability. On the other hand, our data confirm the
presence of an ETV on HW Vir that known stellar mechanisms are unable to
explain, pushing towards further observing and modelling efforts.
|
The electricity sector has tended to be one of the first industries to face
technology change motivated by sustainability concerns. Whilst efficient market
designs for electricity have tended to focus upon market power concerns,
environmental externalities pose extra challenges for efficient solutions.
Thus, we show that ad hoc remedies for market power alongside administered
carbon prices are inefficient unless they are integrated. Accordingly, we
develop an incentive-based market clearing design that can include
externalities as well as market power mitigation. A feature of the solution is
that it copes with incomplete information of the system operator regarding
generation costs. It uses a network representation of the power system, and
the proposed incentive mechanism holds even with energy limited technologies
having temporal constraints, e.g., storage. The shortcomings of price caps to
mitigate market power, in the context of sustainability externalities, are
overcome under the proposed incentive mechanism.
|
Full-field imaging through scattering media is fraught with many challenges.
Despite many achievements in recent years, current imaging methods are too slow
to deal with fast dynamics that occur for example in biomedical imaging. Here
we present an ultra-fast all-optical method, where the object to be imaged and
the scattering medium (diffuser) are inserted into a highly multimode
self-imaging laser cavity. We show that the intra-cavity laser light from the
object is mainly focused onto specific regions of the scattering medium where
the phase variations are low. Thus, round trip loss within the laser cavity is
minimized, thereby overcoming most of the scattering effects. The method is
exploited to image objects through scattering media whose diffusion angle is
lower than the numerical aperture of the laser cavity. As our method is based
on optical feedback inside a laser cavity, it can deal with temporal variations
that occur on timescales as short as several cavity round trips, with an upper
bound of 200 ns.
|
High capacity end-to-end approaches for human motion (behavior) prediction
have the ability to represent subtle nuances in human behavior, but struggle
with robustness to out of distribution inputs and tail events. Planning-based
prediction, on the other hand, can reliably output decent-but-not-great
predictions: it is much more stable in the face of distribution shift (as we
verify in this work), but it has high inductive bias, missing important aspects
that drive human decisions, and ignoring cognitive biases that make human
behavior suboptimal. In this work, we analyze one family of approaches that
strive to get the best of both worlds: use the end-to-end predictor on common
cases, but do not rely on it for tail events / out-of-distribution inputs --
switch to the planning-based predictor there. We contribute an analysis of
different approaches for detecting when to make this switch, using an
autonomous driving domain. We find that promising approaches based on
ensembling or generative modeling of the training distribution might not be
reliable, but that there are very simple methods which can perform surprisingly
well -- including training a classifier to pick up on tell-tale issues in
predicted trajectories.
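The switching scheme analyzed above can be summarized by a very small decision rule; the sketch below uses hypothetical interfaces and is not the authors' implementation.

```python
# Minimal sketch (hypothetical interfaces): use the learned predictor by default and
# hand off to the planning-based predictor when a failure/OOD detector flags the case.
def predict_behavior(scene, learned_predictor, planner_predictor, ood_detector,
                     threshold=0.5):
    """Predictors and detector are assumed callables; scene is arbitrary input."""
    trajectory = learned_predictor(scene)
    # the detector may inspect the input, the predicted trajectory, or both
    # (e.g., a classifier trained to spot tell-tale issues in predictions)
    if ood_detector(scene, trajectory) > threshold:
        trajectory = planner_predictor(scene)
    return trajectory
```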
|
We prove the existence of immersed closed curves of constant geodesic
curvature in an arbitrary Riemannian 2-sphere for almost every prescribed
curvature. To achieve this, we develop a min-max scheme for a weighted length
functional.
|
In image anomaly detection, autoencoders are popular methods that
reconstruct an input image that might contain anomalies and output a clean
image with no abnormalities. These autoencoder-based methods usually calculate
the anomaly score from the reconstruction error, the difference between the
input image and the reconstructed image. However, the accuracy of the
reconstruction is insufficient in many of these methods, which leads to
degraded accuracy of anomaly detection. To improve the accuracy of the
reconstruction, we consider defining loss function in the frequency domain. In
general, we know that natural images contain many low-frequency components and
few high-frequency components. Hence, to improve the accuracy of the
reconstruction of high-frequency components, we introduce a new loss function
named weighted frequency domain loss (WFDL). WFDL provides a sharper
reconstructed image, which contributes to improving the accuracy of anomaly
detection. In this paper, we show our method's superiority over conventional
autoencoder methods by comparing AUROC on the MVTec AD dataset.
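One plausible form of such a weighted frequency-domain reconstruction loss is sketched below; the exact weighting used in the paper may differ, and all names here are assumptions.

```python
# Minimal sketch (assumed weighting): compute the reconstruction error on the 2D FFT
# and up-weight high-frequency bins so fine detail is reconstructed more sharply.
import torch

def weighted_frequency_domain_loss(recon, target, high_freq_weight=4.0):
    """recon, target: (B, C, H, W) image tensors."""
    err = (torch.fft.fft2(recon) - torch.fft.fft2(target)).abs()
    h, w = recon.shape[-2:]
    fy = torch.fft.fftfreq(h, device=recon.device).abs().view(-1, 1)
    fx = torch.fft.fftfreq(w, device=recon.device).abs().view(1, -1)
    radius = torch.sqrt(fy ** 2 + fx ** 2)            # radial frequency per bin
    weight = 1.0 + high_freq_weight * radius / radius.max()
    return (weight * err).mean()
```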
|
Background: Poverty among the population of a country is one of the most
disputable topics in social studies. Many researchers devote their work to
identifying the factors that influence it most. Bulgaria is one of the EU
member states with the highest poverty levels. Regional facets of social
exclusion and risks of poverty among the population are a key priority of the
National Development Strategy for the third decade of the 21st century. In order to
mitigate the regional poverty levels it is necessary for the social policy
makers to pay more attention to the various factors expected to influence these
levels. Results: Poverty reduction is observed in most areas of the country.
The regions with obviously favorable developments are Sofia district, Pernik,
Pleven, Lovech, Gabrovo, Veliko Tarnovo, Silistra, Shumen, Stara Zagora,
Smolyan, Kyustendil and others. Increased levels of poverty are found for
Razgrad and Montana districts. It was found that the reduction in the risk of
poverty is associated with increases in employment, investment, and housing.
Conclusion: The social policy making needs to be aware of the fact that the
degree of exposition to risk of poverty and social exclusion significantly
relates to the levels of regional employment, investment and housing.
|
A natural way of increasing our understanding of NP-complete graph problems
is to restrict the input to a special graph class. Classes of $H$-free graphs,
that is, graphs that do not contain some graph $H$ as an induced subgraph, have
proven to be an ideal testbed for such a complexity study. However, if the
forbidden graph $H$ contains a cycle or claw, then these problems often stay
NP-complete. A recent complexity study on the $k$-Colouring problem shows that
we may still obtain tractable results if we also bound the diameter of the
$H$-free input graph. We continue this line of research by initiating a
complexity study on the impact of bounding the diameter for a variety of
classical vertex partitioning problems restricted to $H$-free graphs. We prove
that bounding the diameter does not help for Independent Set, but leads to new
tractable cases for problems closely related to 3-Colouring. That is, we show
that Near-Bipartiteness, Independent Feedback Vertex Set, Independent Odd Cycle
Transversal, Acyclic 3-Colouring and Star 3-Colouring are all polynomial-time
solvable for chair-free graphs of bounded diameter. To obtain these results we
exploit a new structural property of 3-colourable chair-free graphs.
|
The aim of this paper is to present an elementary computable theory of random
variables, based on the approach to probability via valuations. The theory is
based on a type of lower-measurable sets, which are controlled limits of open
sets, and extends existing work in this area by providing a computable theory
of conditional random variables. The theory is developed within the framework of
type-two effectivity, so it has an explicit direct link with Turing computation,
and is expressed in a system of computable types and operations, so has a clean
mathematical description.
|
As the field of superconducting quantum computing approaches maturity,
optimization of single-device performance is proving to be a promising avenue
towards large-scale quantum computers. However, this optimization is possible
only if performance metrics can be accurately compared among measurements,
devices, and laboratories. Currently such comparisons are inaccurate or
impossible due to understudied errors from a plethora of sources. In this
Perspective, we outline the current state of error analysis for qubits and
resonators in superconducting quantum circuits, and discuss what future
investigations are required before superconducting quantum device optimization
can be realized.
|
We address the issues of clustering and non-global logarithms for jet shapes
in the process of production of a Higgs/vector boson associated with a single
hard jet at hadron colliders. We perform an analytical fixed-order calculation
up to second order in the coupling as well as an all-orders estimation for the
specific invariant mass distribution of the highest-$p_t$ jet, for various jet
algorithms. Our results are derived in the eikonal (soft) limit and are valid
up to next-to-leading logarithmic accuracy. We perform a matching of the
resummed distribution to next-to-leading order results from MCFM and compare
our findings with the outputs of the Monte Carlo event generators Pythia 8 and
Herwig 7. After accounting for non-perturbative effects we compare our results
with available experimental data from the CMS collaboration for the Z + jet
production. We find good agreement over a wide range of the observable.
|
Bioenergy with Carbon Capture and Sequestration (BECCS) is critical for
stringent climate change mitigation, but is commercially and technologically
immature and resource-intensive. In California, state and federal fuel and
climate policies can drive first-markets for BECCS. We develop a spatially
explicit optimization model to assess niche markets for renewable natural gas
(RNG) production with carbon capture and sequestration (CCS) from waste biomass
in California. Existing biomass residues produce biogas and RNG and enable
low-cost CCS through the upgrading process and CO$_2$ truck transport. Under
current state and federal policy incentives, we could capture and sequester 2.9
million MT CO$_2$/year (0.7% of California's 2018 CO$_2$ emissions) and produce
93 PJ RNG/year (4% of California's 2018 natural gas demand) with a profit
maximizing objective. Existing federal and state policies produce profits of
\$11/GJ. Distributed RNG production with CCS potentially catalyzes markets and
technologies for CO$_2$ capture, transport, and storage in California.
|
Current and future generations of intensity mapping surveys promise dramatic
improvements in our understanding of galaxy evolution and large-scale
structure. An intensity map provides a census of the cumulative emission from
all galaxies in a given region and redshift, including faint objects that are
undetectable individually. Furthermore, cross-correlations between line
intensity maps and galaxy redshift surveys are sensitive to the line intensity
and clustering bias without the limitation of cosmic variance. Using the Fisher
information matrix, we derive simple expressions describing sensitivities to
the intensity and bias obtainable for cross-correlation surveys, focusing on
cosmic variance evasion. Based on these expressions, we conclude that the
optimal sensitivity is obtained by matching the survey depth, defined by the
ratio of the clustering power spectrum to noise in a given mode, between the
two surveys. We find that mid- to far-infrared space telescopes could benefit
from this technique by cross-correlating with coming galaxy redshift surveys
such as those planned for the Nancy Grace Roman Space Telescope, allowing for
sensitivities beyond the cosmic variance limit. Our techniques can therefore be
applied to survey design and requirements development to maximize the
sensitivities of future intensity mapping experiments to tracers of galaxy
evolution and large-scale structure cosmology.
|
We study real steady state varieties of the dynamics of chemical reaction
networks. The dynamics are derived using mass action kinetics with parametric
reaction rates. The models studied are not inherently parametric in nature.
Rather, our interest in parameters is motivated by parameter uncertainty, as
reaction rates are typically either measured with limited precision or
estimated. We aim at detecting toricity and shifted toricity, using a framework
that has been recently introduced and studied for the non-parametric case over
both the real and the complex numbers. While toricity requires that the variety
specifies a subgroup of the direct power of the multiplicative group of the
underlying field, shifted toricity requires only a coset. In the non-parametric
case these requirements establish real decision problems. In the presence of
parameters we must go further and derive necessary and sufficient conditions in
the parameters for toricity or shifted toricity to hold. Technically, we use
real quantifier elimination methods. Our computations on biological networks
here once more confirm shifted toricity as a relevant concept, while toricity
holds only for degenerate parameter choices.
|
Many-body localization of a disordered interacting boson system in one
dimension is studied numerically at half filling, in terms
of level statistics, local compressibility, correlation functions and
entanglement entropies. The von Neumann entanglement entropy is decomposed into
a particle-number entropy and a configuration entropy. An anomalous volume-law
behavior is found for the configuration entanglement entropy, confirming a
recent experimental observation [A. Lukin, M. Rispoli, R. Schittko, et al.,
Science 364, 256 (2019)] for sufficiently strong disorder, while the particle-number
entropy obeys an area law, corresponding to the total entropy of a
disordered spin chain. The localization length is extracted from a two-body
correlation function for many-body localized states and for the corresponding
time-evolved states as well. A phase diagram is established, consisting
of an ergodic thermalized region and a many-body-localization region in the
parameter space of disorder strength and energy density. The two regions
are separated by a many-body mobility edge deduced from the standard deviation
of the particle-number entanglement entropy, which appears consistent with that
based on the localization length. Slow dynamics characterized by a logarithmic
time dependence is explicitly shown for both the particle-number entropy and
the configuration entropy in an intermediate regime of their time evolution,
which does not show up in the Anderson localization case, i.e. non-interacting
disordered systems.
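For reference, the standard number/configuration decomposition of the entanglement entropy referred to above can be sketched as follows (assuming particle-number conservation; the notation is ours, not necessarily the authors').

```latex
% With particle-number conservation the reduced density matrix is block diagonal
% in the subsystem particle number n, \rho_A = \bigoplus_n p_n\,\rho_A^{(n)}, so
S_{\mathrm{vN}}
  \;=\; \underbrace{-\sum_n p_n \ln p_n}_{S_{\mathrm{num}}}
  \;+\; \underbrace{\sum_n p_n\, S\!\left(\rho_A^{(n)}\right)}_{S_{\mathrm{conf}}},
% i.e., the particle-number entropy plus the configuration entropy.
```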
|
Existing work in counterfactual Learning to Rank (LTR) has focussed on
optimizing feature-based models that predict the optimal ranking based on
document features. LTR methods based on bandit algorithms often optimize
tabular models that memorize the optimal ranking per query. These types of
models have their own advantages and disadvantages. Feature-based models provide
very robust performance across many queries, including those previously unseen;
however, the available features often limit the rankings the model can predict.
In contrast, tabular models can converge on any possible ranking through
memorization. However, memorization is extremely prone to noise, which makes
tabular models reliable only when large numbers of user interactions are
available. Can we develop a robust counterfactual LTR method that pursues
memorization-based optimization whenever it is safe to do so? We introduce the
Generalization and Specialization (GENSPEC) algorithm, a robust feature-based
counterfactual LTR method that pursues per-query memorization when it is safe
to do so. GENSPEC optimizes a single feature-based model for generalization:
robust performance across all queries, and many tabular models for
specialization: each optimized for high performance on a single query. GENSPEC
uses novel relative high-confidence bounds to choose which model to deploy per
query. By doing so, GENSPEC enjoys the high performance of successfully
specialized tabular models with the robustness of a generalized feature-based
model. Our results show that GENSPEC leads to optimal performance on queries
with sufficient click data, while having robust behavior on queries with little
or noisy data.
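The per-query deployment decision described above can be caricatured by a simple confidence-bound rule; this sketch is a simplification with assumed names, not the exact bound used by GENSPEC.

```python
# Minimal sketch (simplified): deploy the specialized tabular model for a query only
# when a high-confidence lower bound on its gain over the generalized model is positive.
import math

def choose_model(est_gain, n_clicks, confidence_width=1.0):
    """est_gain: estimated utility gain of the tabular model over the feature-based
    model on this query; n_clicks: amount of click data behind the estimate."""
    if n_clicks == 0:
        return "feature-based"
    lower_bound = est_gain - confidence_width / math.sqrt(n_clicks)
    return "tabular" if lower_bound > 0 else "feature-based"
```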
|
Common domain shift problem formulations consider the integration of multiple
source domains, or the target domain during training. Regarding the
generalization of machine learning models between different car interiors, we
formulate the criterion of training in a single vehicle: without access to the
target distribution of the vehicle the model would be deployed to, nor with
access to multiple vehicles during training. We performed an investigation on
the SVIRO dataset for occupant classification on the rear bench and propose an
autoencoder based approach to improve the transferability. The autoencoder is
on par with commonly used classification models when trained from scratch and
sometimes outperforms models pre-trained on a large amount of data. Moreover,
the autoencoder can transform images from unknown vehicles into the vehicle it
was trained on. These results are corroborated by an evaluation on real
infrared images from two vehicle interiors.
|
Since its beginning in the 1950s, the field of artificial intelligence has
cycled several times between periods of optimistic predictions and massive
investment ("AI spring") and periods of disappointment, loss of confidence, and
reduced funding ("AI winter"). Even with today's seemingly fast pace of AI
breakthroughs, the development of long-promised technologies such as
self-driving cars, housekeeping robots, and conversational companions has
turned out to be much harder than many people expected. One reason for these
repeating cycles is our limited understanding of the nature and complexity of
intelligence itself. In this paper I describe four fallacies in common
assumptions made by AI researchers, which can lead to overconfident predictions
about the field. I conclude by discussing the open questions spurred by these
fallacies, including the age-old challenge of imbuing machines with humanlike
common sense.
|
Wildfire is one of the biggest disasters that frequently occurs on the west
coast of the United States. Many efforts have been made to understand the
causes of the increases in wildfire intensity and frequency in recent years. In
this work, we propose static and dynamic prediction models to analyze and
assess the areas with high wildfire risks in California by utilizing a
multitude of environmental data including population density, Normalized
Difference Vegetation Index (NDVI), Palmer Drought Severity Index (PDSI), tree
mortality area, tree mortality number, and altitude. Moreover, we focus on a
better understanding of the impacts of different factors so as to inform
preventive actions. To validate our models and findings, we divide the land of
California into 4,242 grids of 0.1 degrees $\times$ 0.1 degrees in latitude and
longitude, and compute the risk of each grid based on spatial and temporal
conditions. To verify the generalizability of our models, we further expand the
scope of wildfire risk assessment from California to Washington without any
fine-tuning. By performing counterfactual analysis, we uncover the effects of
several possible methods on reducing the number of high risk wildfires. Taken
together, our study has the potential to estimate, monitor, and reduce the
risks of wildfires across diverse areas, provided that such environmental data
is available.
|
Among the versatile forms of dynamical patterns of activity exhibited by the
brain, oscillations are one of the most salient and extensively studied, yet
are still far from being well understood. In this paper, we provide various
structural characterizations of the existence of oscillatory behavior in neural
networks using a classical neural mass model of mesoscale brain activity called
linear-threshold dynamics. Exploiting the switched-affine nature of this
dynamics, we obtain various necessary and/or sufficient conditions on the
network structure and its external input for the existence of oscillations in
(i) two-dimensional excitatory-inhibitory networks (E-I pairs), (ii) networks
with one inhibitory but arbitrary number of excitatory nodes, (iii) purely
inhibitory networks with an arbitrary number of nodes, and (iv) networks of E-I
pairs. Throughout our treatment, and given the arbitrary dimensionality of the
considered dynamics, we rely on the lack of stable equilibria as a system-based
proxy for the existence of oscillations, and provide extensive numerical
results to support its tight relationship with the more standard, signal-based
definition of oscillations in computational neuroscience.
|
Thorstensen (2020) recently argued that the cataclysmic variable (CV) LAMOST
J024048.51+195226.9 may be a twin to the unique magnetic propeller system AE
Aqr. If this is the case, two predictions are that it should display a short
period white dwarf spin modulation, and that it should be a bright radio
source. We obtained follow-up optical and radio observations of this CV, in
order to see if this holds true. Our optical high-speed photometry does not
reveal a white dwarf spin signal, but lacks the sensitivity to detect a
modulation similar to the 33-s spin signal seen in AE Aqr. We detect the source
in the radio, and measure a radio luminosity similar to that of AE Aqr and
close to the highest so far reported for a CV. We also find good evidence for
radio variability on a time scale of tens of minutes. Optical polarimetric
observations produce no detection of linear or circular polarization. While we
are not able to provide compelling evidence, our observations are all
consistent with this object being a propeller system.
|
With the growing prevalence of psychological interventions, it is vital to
have measures which rate the effectiveness of psychological care to assist in
training, supervision, and quality assurance of services. Traditionally,
quality assessment is addressed by human raters who evaluate recorded sessions
along specific dimensions, often codified through constructs relevant to the
approach and domain. This is however a cost-prohibitive and time-consuming
method that leads to poor feasibility and limited use in real-world settings.
To facilitate this process, we have developed an automated competency rating
tool able to process the raw recorded audio of a session, analyzing who spoke
when, what they said, and how the health professional used language to provide
therapy. Focusing on a use case of a specific type of psychotherapy called
Motivational Interviewing, our system gives comprehensive feedback to the
therapist, including information about the dynamics of the session (e.g.,
therapist's vs. client's talking time), low-level psychological language
descriptors (e.g., type of questions asked), as well as other high-level
behavioral constructs (e.g., the extent to which the therapist understands the
clients' perspective). We describe our platform and its performance using a
dataset of more than 5,000 recordings drawn from its deployment in a real-world
clinical setting used to assist training of new therapists. Widespread use of
automated psychotherapy rating tools may augment experts' capabilities by
providing an avenue for more effective training and skill improvement,
eventually leading to more positive clinical outcomes.
|
Starting from a recently proposed comprehensive theory for the high-Tc
superconductivity in cuprates, we derive a general analytic expression for the
planar resistivity, in the presence of an applied external magnetic field
$\textbf{H}$ and explore its consequences in the different phases of these
materials. As an initial probe of our result, we show it compares very well
with experimental data for the resistivity of LSCO at different values of the
applied field. We also apply our result to Bi2201 and show that the
magnetoresistivity in the strange metal phase of this material, exhibits the
$H^2$ to $H$ crossover, as we move from the weak to the strong field regime.
Yet, despite of that, the magnetoresistivity does not present a quadrature
scaling. Remarkably, the resistivity H-field derivative does scale as a
function of $\frac{H}{T}$, in complete agreement with recent magneto-transport
measurements made in the strange metal phase of cuprates \cite{Hussey2020}. We,
finally, address the issue of the $T$-power-law dependence of the resistivity
of overdoped cuprates and compare our results with experimental data for
Tl2201. We show that this provides a simple method to determine whether the
quantum critical point associated with the pseudogap temperature $T^*(x)$ belongs
to the SC dome or not.
|
Schooling fish provide a spectacular example of self-organization in Nature.
The most remarkable patterns they form are giant rotating clusters such as
balls, tori, and rings, but the underlying mechanism remains largely unknown.
Here we propose an agent-based model that limits the number of agents that can
interact with each other. We incorporate the characteristic behaviors of fish
by (i) attraction that is weakened in a dense cluster of fish, and (ii)
acceleration with finite duration ("fast-start") when the fish is out of the
cluster. By three-dimensional numerical simulations, we show emergence of giant
rotating clusters (balls, tori, and rings) that are much larger than the radius
of interaction. We present a phase diagram of patterns including polarized
schools and swarms, and propose a physical mechanism that determines the
cluster shape in terms of the interaction capacity and strength of attraction.
Our model also indicates that each fish randomly moves back and forth between
the inner and outer regions of a vortex on a large time-scale. These results
show that fish without inherent chirality form giant rotating clusters
spontaneously and only by short-ranged interactions.
|
The gold standard for COVID-19 is RT-PCR, testing facilities for which are
limited and not always optimally distributed. Test results are delayed, which
impacts treatment. Expert radiologists, one of whom is a co-author, are able to
diagnose COVID-19 positivity from Chest X-Rays (CXR) and CT scans, which can
facilitate timely treatment. Such diagnosis is particularly valuable in
locations lacking radiologists with sufficient expertise and familiarity with
COVID-19 patients. This paper has two contributions. One, we analyse literature
on CXR based COVID-19 diagnosis. We show that popular choices of dataset
selection suffer from data homogeneity, leading to misleading results. We
compile and analyse a viable benchmark dataset from multiple existing
heterogeneous sources. Such a benchmark is important for realistically testing
models. Our second contribution relates to learning from imbalanced data.
Datasets for COVID X-Ray classification face severe class imbalance, since most
subjects are COVID-negative. Twin Support Vector Machines (Twin SVM) and Twin Neural
Networks (Twin NN) have, in recent years, emerged as effective ways of handling
skewed data. We introduce a state-of-the-art technique, termed Twin
Augmentation, for modifying popular pre-trained deep learning models. Twin
Augmentation boosts the performance of a pre-trained deep neural network
without requiring re-training. Experiments show that, across a multitude of
classifiers, Twin Augmentation is very effective in boosting the performance of
a given pre-trained model for classification in imbalanced settings.
|
In this paper we study the time differential dual-phase-lag model of heat
conduction incorporating the microstructural interaction effect in the
fast-transient process of heat transport. We analyse the influence of the delay
times upon some qualitative properties of the solutions of the initial boundary
value problems associated to such a model. Thus, the uniqueness results are
established under the assumption that the conductivity tensor is positive
definite and the delay times $\tau_q$ and $\tau_T$ vary in the set $\{0\leq
\tau_q\leq 2\tau_T\}\cup \{0<2\tau_T< \tau_q\}$. For the continuous dependence
problem we establish two different estimates. The first one is obtained for the
delay times with $0\leq \tau_q \leq 2\tau_T$, which agrees with the
thermodynamic restrictions on the model in question, and the solutions are
stable. The second estimate is established for the delay times with $0<2\tau_T<
\tau_q$ and it allows the solutions to have an exponential growth in time. The
spatial behavior of the transient solutions and the steady-state vibrations is
also addressed. For the transient solutions we establish a domain-of-influence
theorem, under the assumption that the delay times are in $\left\{0<\tau_q\leq
2\tau_T\right\}\cup \left\{0<2\tau_T<\tau_q\right\}$. For the amplitude
of the harmonic vibrations we obtain an exponential decay estimate of
Saint-Venant type, provided the frequency of vibration is lower than a critical
value and without any restrictions upon the delay times.
|
We prove an optimal $\Omega(n^{-1})$ lower bound on the spectral gap of
Glauber dynamics for anti-ferromagnetic two-spin systems with $n$ vertices in
the tree uniqueness regime. This spectral gap holds for all, including
unbounded, maximum degree $\Delta$. Consequently, we have the following mixing
time bounds for the models satisfying the uniqueness condition with a slack
$\delta\in(0,1)$:
$\bullet$ $C(\delta) n^2\log n$ mixing time for the hardcore model with
fugacity $\lambda\le (1-\delta)\lambda_c(\Delta)= (1-\delta)\frac{(\Delta -
1)^{\Delta - 1}}{(\Delta - 2)^\Delta}$;
$\bullet$ $C(\delta) n^2$ mixing time for the Ising model with edge activity
$\beta\in\left[\frac{\Delta-2+\delta}{\Delta-\delta},\frac{\Delta-\delta}{\Delta-2+\delta}\right]$;
where the maximum degree $\Delta$ may depend on the number of vertices $n$, and
$C(\delta)$ depends only on $\delta$.
Our proof is built upon the recently developed connections between the
Glauber dynamics for spin systems and the high-dimensional expander walks. In
particular, we prove a stronger notion of spectral independence, called the
complete spectral independence, and use a novel Markov chain called the field
dynamics to connect this stronger spectral independence to the rapid mixing of
Glauber dynamics for all degrees.
|
Future microcalorimeter X-ray observations will resolve spectral features in
unmatched detail. Understanding the line formation processes in the X-rays
deserves much attention. The purpose of this paper is to discuss such processes
in the presence of a photoionizing source. Line formation processes in one and
two-electron species are broadly categorized into four cases. Case A occurs
when the Lyman line optical depths are very small and photoexcitation does not
occur. Line photons escape the cloud without any scattering. Case B occurs when
the Lyman-line optical depths are large enough for photons to undergo multiple
scatterings. Case C occurs when a broadband continuum source strikes an
optically thin cloud. The Lyman lines are enhanced by induced radiative
excitation of the atoms/ions by continuum photons, also known as continuum
pumping. A fourth less-studied scenario, where the Case B spectrum is enhanced
by continuum pumping, is called Case D. Here, we establish the mathematical
foundation of Cases A, B, C, and D in an irradiated cloud with Cloudy. We also
show the total X-ray emission spectrum for all four cases within the energy
range 0.1 - 10 keV at the resolving power of XRISM around 6 keV. Additionally,
we show that a combined effect of electron scattering and partial blockage of
continuum pumping reduces the resonance line intensities. Such reduction
increases with column density and can serve as an important tool to measure the
column density/optical depth of the cloud.
|
Little is known about the spin-flip diffusion length $l_{\rm sf}$, one of the
most important material parameters in the field of spintronics. We use a
density-functional-theory based scattering approach to determine values of
$l_{\rm sf}$ that result from electron-phonon scattering as a function of
temperature for all 5d transition metal elements. $l_{\rm sf}$ does not
decrease monotonically with the atomic number Z but is found to be inversely
proportional to the density of states at the Fermi level. By using the same
local current methodology to calculate the spin Hall angle $\Theta_{\rm sH}$
that characterizes the efficiency of the spin Hall effect, we show that the
products $\rho(T)l_{\rm sf}(T)$ and $\Theta_{\rm sH}(T)l_{\rm sf}(T)$ are
constant.
|
We propose a predictive $Q_4$ flavored 2HDM model, where the scalar sector is
enlarged by the inclusion of several gauge singlet scalars and the fermion
sector by the inclusion of right handed Majorana neutrinos. In our model, the
$Q_4$ family symmetry is supplemented by several auxiliary cyclic symmetries,
whose spontaneous breaking produces the observed pattern of SM charged fermion
masses and quark mixing angles. The light active neutrino masses are generated
from an inverse seesaw mechanism at one loop level thanks to a remnant
preserved $Z_2$ symmetry. Our model successfully reproduces the measured dark
matter relic abundance only for masses of the DM candidate below $\sim$ 0.8
TeV. Furthermore, our model is also consistent with the lepton and baryon
asymmetries of the Universe as well as with the muon anomalous magnetic moment.
|
We use the modified Richardson-Lucy deconvolution algorithm to reconstruct
the Primordial Power Spectrum from the Weak Lensing Power spectrum
reconstructed from the CMB anisotropies. This provides an independent window to
observe and constrain the PPS $P_R(k)$ along different $k$ scales as compared
to CMB Temperature Power Spectrum. The Weak Lensing Power spectrum does not
contain secondary variations in power and hence is cleaner, unlike the
Temperature Power spectrum, which suffers from lensing that is visible in its
PPS reconstructions. We demonstrate that the physical behaviour of the weak
lensing kernel is different from the temperature kernel and reconstructs broad
features over $k$. We provide an in-depth analysis of the error propagation
using simulated data and Monte-Carlo sampling, based on Planck best-fit
cosmological parameters to simulate the data and cosmic variance limited error
bars. The error and initial condition analysis provides a clear picture of the
optimal reconstruction region for the estimator, and we provide an algorithm
for $P_R(k)$ sampling to be used based on the given data, errors, and their
binning properties. Eventually we plan to use this method on actual mission
data and provide a cross reference to PPS reconstructed from other sectors and
any possible features in them.
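For readers unfamiliar with the estimator family, the sketch below shows a textbook Richardson-Lucy iteration for a generic linear kernel in Python; the modified algorithm, the actual weak-lensing kernel, and the binning strategy used in this work are not reproduced here.

```python
import numpy as np

def richardson_lucy(data, kernel, n_iter=200, p0=None):
    """Textbook Richardson-Lucy iteration for data ≈ kernel @ p with p >= 0.

    `kernel` has shape (n_data, n_p) and should be non-negative; this is the
    standard multiplicative update, not the modified estimator of the paper.
    """
    p = np.ones(kernel.shape[1]) if p0 is None else p0.astype(float).copy()
    norm = kernel.sum(axis=0)                       # column sums, i.e. K^T 1
    for _ in range(n_iter):
        model = kernel @ p                          # forward prediction
        ratio = data / np.maximum(model, 1e-30)     # data-to-model ratio
        p *= (kernel.T @ ratio) / np.maximum(norm, 1e-30)
    return p
```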
|
Data deduplication saves storage space by identifying and removing repeats in
the data stream. Compared with traditional compression methods, data
deduplication schemes are more time efficient and are thus widely used in large
scale storage systems. In this paper, we provide an information-theoretic
analysis on the performance of deduplication algorithms on data streams in
which repeats are not exact. We introduce a source model in which probabilistic
substitutions are considered. More precisely, each symbol in a repeated string
is substituted with a given edit probability. Deduplication algorithms in both
the fixed-length scheme and the variable-length scheme are studied. The
fixed-length deduplication algorithm is shown to be unsuitable for the proposed
source model as it does not take into account the edit probability. Two
modifications are proposed and shown to have performances within a constant
factor of optimal with the knowledge of source model parameters. We also study
the conventional variable-length deduplication algorithm and show that as
source entropy becomes smaller, the size of the compressed string vanishes
relative to the length of the uncompressed string, leading to high compression
ratios.
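As an illustration of the fixed-length scheme analyzed above, here is a minimal Python sketch that deduplicates a byte stream by hashing fixed-size chunks; the chunk size and hash-based index are illustrative choices rather than the exact constructions studied in the paper.

```python
import hashlib

def fixed_length_dedup(data: bytes, chunk_size: int = 64):
    """Split `data` into fixed-size chunks and store each distinct chunk once.

    Returns the chunk store and the ordered fingerprints needed to rebuild
    the stream. Exact repeats are removed, but a chunk that differs by even a
    single substituted symbol is stored separately, which illustrates why the
    fixed-length algorithm struggles with approximate (edited) repeats.
    """
    store = {}            # fingerprint -> chunk bytes
    recipe = []           # ordered fingerprints to rebuild the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)
        recipe.append(fp)
    return store, recipe

# Example: one exact repeat and one repeat with a single substituted symbol.
block = b"A" * 64
edited = b"A" * 32 + b"B" + b"A" * 31
store, recipe = fixed_length_dedup(block + block + edited)
print(len(store), "distinct chunks for", len(recipe), "chunk slots")  # 2 distinct, 3 slots
```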
|
We report multi-epoch radial velocities, rotational velocities, and
atmospheric parameters for 37 T-type brown dwarfs observed with Keck/NIRSPEC.
Using a Markov Chain Monte Carlo forward-modeling method, we achieve median
precisions of 0.5 km s$^{-1}$ and 0.9 km s$^{-1}$ for radial and rotational
velocities, respectively. All of the T dwarfs in our sample are thin disk brown
dwarfs. We confirm previously reported moving group associations for four T
dwarfs. However, the lack of spectral indicators of youth in two of these
sources suggests that these are chance alignments. We confirm two previously
unresolved binary candidates, the T0+T4.5 2MASS J11061197+2754225 and the
L7+T3.5 2MASS J21265916+7617440, with orbital periods of 4 yr and 12 yr,
respectively. We find a kinematic age of 3.5$\pm$0.3 Gyr for local T dwarfs,
consistent with nearby late-M dwarfs (4.1$\pm$0.3 Gyr). Removal of thick disk L
dwarfs in the local ultracool dwarf sample gives a similar age for L dwarfs
(4.2$\pm$0.3 Gyr), largely resolving the local L dwarf age anomaly. The
kinematic ages of local late-M, L, and T dwarfs can be accurately reproduced
with population simulations incorporating standard assumptions of the mass
function, star formation rate, and brown dwarf evolutionary models. A kinematic
dispersion break is found at the L4$-$L6 subtypes, likely reflecting the
terminus of the stellar Main Sequence. We provide a compilation of precise
radial velocities for 172 late-M, L, and T dwarfs within $\sim$20 pc of the
Sun.
|
One of the most intriguing phenomena in active matter has been the gas-liquid
like motility induced phase separation (MIPS) observed in repulsive active
particles. However, experimentally no particle can be a perfect sphere, and the
asymmetric shape, mass distribution or catalysis coating can induce an active
torque on the particle, which makes it a chiral active particle. Here using
computer simulations and dynamic mean-field theory, we demonstrate that the
large enough torque of circle active Brownian particles (cABPs) in two
dimensions generates a dynamical clustering state interrupting the conventional
MIPS. Multiple clusters arise from the combination of the conventional MIPS
cohesion and the disintegration caused by the circulating current. The non-vanishing
current in non-equilibrium steady states microscopically originates from the
motility ``relieved'' by automatic rotation, which breaks the detailed balance
at the continuum level. This suggests that no equilibrium-like phase separation
theory can be constructed for chiral active colloids even with tiny active
torque, in which no visible collective motion exists. This mechanism also sheds
light on the understanding of dynamic clusters observed in a variety of active
matter systems.
|
In this study we investigate the potential of parametric images formed from
ultrasound B-mode scans using the Nakagami distribution for non-invasive
classification of breast lesions. Through a sliding window technique, we
generated seven types of parametric images from each patient scan in our
dataset using basic as well as derived parameters of the Nakagami
distribution. To determine the most suitable window size for image generation,
we conducted an empirical analysis using three windows, and selected the best
one for our study. From the parametric images formed for each patient, we
extracted a total of 72 features. Feature selection was performed to find the
optimum subset of features for the best classification performance.
Incorporating the selected subset of features with the Support Vector Machine
(SVM) classifier, and by tuning the decision threshold, we obtained a maximum
classification accuracy of 93.08%, an Area under the ROC Curve (AUC) of 0.9712,
a False Negative Rate of 0%, and a very low False Positive Rate of 8.65%. Our
results indicate that the high accuracy of such a procedure may assist in the
diagnostic process associated with detection of breast cancer, as well as help
to reduce false positive diagnosis.
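For context, the Nakagami shape and scale parameters are commonly estimated from the envelope moments, and a sliding window maps these local estimates to a parametric image; the window size and the single-parameter map in the sketch below are illustrative assumptions, not the seven-image pipeline used in the study.

```python
import numpy as np

def nakagami_parameters(window: np.ndarray):
    """Moment-based estimates of the Nakagami shape (m) and scale (Omega)
    from the envelope samples in one sliding window."""
    r2 = window.astype(float) ** 2
    omega = r2.mean()                 # scale: E[R^2]
    var_r2 = r2.var()
    m = omega ** 2 / var_r2 if var_r2 > 0 else np.inf   # shape: (E[R^2])^2 / Var(R^2)
    return m, omega

def parametric_image(envelope: np.ndarray, win: int = 15):
    """Slide a win x win window over the envelope image and map each position
    to its local Nakagami m estimate (an illustrative sketch only)."""
    h, w = envelope.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = nakagami_parameters(envelope[i:i + win, j:j + win])[0]
    return out
```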
|
In a previous paper, we presented an extension of our reflection model
RELXILL_NK to include the finite thickness of the accretion disk following the
prescription in Taylor & Reynolds (2018). In this paper, we apply our model to
fit the 2013 simultaneous observations by NuSTAR and XMM-Newton of the
supermassive black hole in MCG-06-30-15 and the 2019 NuSTAR observation of the
Galactic black hole in EXO 1846-031. The high-quality data of these spectra had
previously led to precise black hole spin measurements and very stringent
constraints on possible deviations from the Kerr metric. We find that the disk
thickness does not change previous spin results found with a model employing an
infinitesimally thin disk, which confirms the robustness of spin measurements
in high radiative efficiency disks, where the impact of disk thickness is
minimal. Similar analysis on lower accretion rate systems will be an important
test for measuring the effect of disk thickness on black hole spin
measurements.
|
Searches for new leptophobic resonances at high energy colliders usually
target their decay modes into pairs of light quarks, top quarks, or standard
model bosons. Additional decay modes may also be present, producing signatures
to which current searches are not sensitive. We investigate the performance of
generic searches that look for resonances decaying into two large-radius jets.
As a benchmark for our analysis we use a supersymmetric $\text{U}(1)'$ extension
of the Standard Model, the so-called U$\mu\nu$SSM, where all the SM decay modes
of the $Z'$ boson take place, plus additional (cascade) decays into new
scalars. The generic searches use a generic multi-pronged jet tagger and take
advantage of the presence of $b$ quarks in the large-radius jets, and are
sensitive to all these $Z'$ decay modes (except into light quarks) at once. For
couplings that are well below current experimental constraints, these generic
searches are sensitive at the $3\sigma-4\sigma$ level with Run 2 LHC data.
|
Load side participation can provide support to the power network by
appropriately adapting the demand when required. In addition, it enables an
economically improved power allocation. In this study, we consider the problem
of providing an optimal power allocation among generation and on-off loads
within the secondary frequency control timeframe. In particular, we consider a
mixed integer optimization problem which ensures that the secondary frequency
control objectives (i.e. generation/demand balance and the frequency attaining
its nominal value at steady state) are satisfied. We present analytical
conditions on the generation and on-off load profiles such that an
$\epsilon$-optimality interpretation of the steady state power allocation is
obtained, providing a non-conservative value for $\epsilon$. Moreover, we
develop a hierarchical control scheme that provides on-off load values that
satisfy the proposed conditions. We study the interaction of the proposed
control scheme with the physical dynamics of the power network and provide
analytic stability guarantees. Our results are verified with numerical
simulations on the Northeast Power Coordinating Council (NPCC) 140-bus system,
where it is demonstrated that the proposed algorithm yields a close to optimal
power allocation.
|
Relational Hoare logics (RHL) provide rules for reasoning about relations
between programs. Several RHLs include a rule we call sequential product that
infers a relational correctness judgment from judgments of ordinary Hoare logic
(HL). Other rules embody sensible patterns of reasoning and have been found
useful in practice, but sequential product is relatively complete on its own
(with HL). As a more satisfactory way to evaluate RHLs, a notion of alignment
completeness is introduced, in terms of the inductive assertion method and
product automata. Alignment completeness results are given to account for
several different sets of rules. The notion may serve to guide the design of
RHLs and relational verifiers for richer programming languages and alignment
patterns.
|
Antiferromagnetic PbMnTeO6, also known as mineral kuranakhite, has been
reported recently to have all three cations in trigonal prismatic coordination,
which is extremely unusual for both Mn(4+) and Te(6+). In this work, the phase
was reproduced with the same lattice parameters and N\'eel temperature TN = 20
K. However, powder neutron diffraction unambiguously determined octahedral
(trigonal antiprismatic) coordination for all cations within the chiral space
group P312. The same symmetry was proposed for SrMnTeO6 and PbGeTeO6, instead
of the reported space groups P-62m and P31m, respectively. PbMnTeO6 was found
to be a robust antiferromagnet with a presumably substantial scale of exchange
interactions, since the N\'eel temperature did not show any changes in external
magnetic fields up to 7 T. The determined effective magnetic moment $\mu_{\rm eff}$ = 3.78
$\mu_{\rm B}$ was in excellent agreement with the numerical estimation using the effective
g-factor g = 1.95 directly measured here by electron spin resonance (ESR). Both
specific heat and ESR data indicated the two-dimensional character of magnetism
in the compound under study. The combination of chirality with magnetic order
makes PbMnTeO6 a promising material with possible multiferroic properties.
|
Beam-induced ionization injection (B-III) is currently being explored as a
method for injecting an electron beam with a controlled density profile into a
plasma wakefield accelerator (PWFA). This process is initiated by the fields of
an unmatched drive beam where the slice envelope reaches its minimum value, the
'pinch'. To control the injected beam's qualities, it is crucial to study the
beam-slice envelope oscillations, especially the size and location of the
pinch. In this proceeding, an ansatz based on harmonic motion is proposed
to find the analytical solution to beam-slice envelope evolution in the
nonlinear regime. The size of the pinch is then found through the application
of energy conservation in the transverse direction. The resulting analytical
expressions are shown to be in good agreement with numerical solutions.
|
Topological superconductors (TSCs) are unconventional superconductors with
bulk superconducting gap and in-gap Majorana states on the boundary that may be
used as topological qubits for quantum computation. Despite their importance in
both fundamental research and applications, natural TSCs are very rare. Here,
combining state-of-the-art synchrotron and laser-based angle-resolved
photoemission spectroscopy, we investigated a stoichiometric transition metal
dichalcogenide (TMD), 2M-WS2 with a superconducting transition temperature of
8.8 K (the highest among all TMDs in their natural form to date) and observed
distinctive topological surface states (TSSs). Furthermore, in the
superconducting state, we found that the TSSs acquired a nodeless
superconducting gap with similar magnitude as that of the bulk states. These
discoveries not only evidence 2M-WS2 as an intrinsic TSC without the need for
sensitive composition tuning or sophisticated heterostructure fabrication, but
also provide an ideal platform for device applications thanks to its van der
Waals layered structure.
|
Each knot invariant can be extended to singular knots according to the skein
rule. A Vassiliev invariant of order at most $n$ is defined as a knot invariant
that vanishes identically on knots with more than $n$ double points. A chord
diagram encodes the order of double points along a singular knot. A Vassiliev
invariant of order $n$ gives rise to a function on chord diagrams with $n$
chords. Such a function should satisfy some conditions in order to come from a
Vassiliev invariant. A weight system is a function on chord diagrams that
satisfies so-called 4-term relations. Given a Lie algebra $\mathfrak{g}$
equipped with a non-degenerate invariant bilinear form, one can construct a
weight system with values in the center of the universal enveloping algebra
$U(\mathfrak{g})$. In this paper, we calculate the $\mathfrak{sl}_3$ weight system
for chord diagrams whose intersection graph is the complete bipartite graph
$K_{2,n}$.
|
The interplay of interactions and disorder in low-dimensional superconductors
supports the formation of multiple quantum phases, as possible instabilities of
the Superconductor-Insulator Transition (SIT) at a singular quantum critical
point. We explore a one-dimensional model which exhibits such variety of phases
in the strongly quantum fluctuations regime. Specifically, we study the effect
of weak disorder on a two-leg Josephson ladder with comparable Josephson and
charging energies ($E_J \sim E_C$). An additional key feature of our model is the
requirement of perfect $\mathbb{Z}_2$-symmetry, respected by all parameters
including the disorder. Using a perturbative renormalization-group (RG)
analysis, we derive the phase diagram and identify at least one intermediate
phase between a full-fledged superconductor and a disorder-dominated insulator.
Most prominently, for repulsive interactions on the rungs we identify two
distinct mixed phases: in both of them the longitudinal charge mode is a
gapless superconductor, however one phase exhibits a dipolar charge density
order on the rungs, while the other is disordered. This latter phase is
characterized by coexisting superconducting (phase-locked) and charge-ordered
rungs, and has the potential to evolve into a Griffiths phase
characteristic of the random-field Ising model in the strong disorder limit.
|
The goal of person search is to localize and match query persons from scene
images. For high efficiency, one-step methods have been developed to jointly
handle the pedestrian detection and identification sub-tasks using a single
network. There are two major challenges in the current one-step approaches. One
is the mutual interference between the optimization objectives of multiple
sub-tasks. The other is the sub-optimal identification feature learning caused
by small batch size when end-to-end training. To overcome these problems, we
propose a decoupled and memory-reinforced network (DMRNet). Specifically, to
reconcile the conflicts of multiple objectives, we simplify the standard
tightly coupled pipelines and establish a deeply decoupled multi-task learning
framework. Further, we build a memory-reinforced mechanism to boost the
identification feature learning. By queuing the identification features of
recently accessed instances into a memory bank, the mechanism augments the
similarity pair construction for pairwise metric learning. For better encoding
consistency of the stored features, a slow-moving average of the network is
applied for extracting these features. In this way, the dual networks reinforce
each other and converge to robust solution states. Experimentally, the proposed
method obtains 93.2% and 46.9% mAP on CUHK-SYSU and PRW datasets, which exceeds
all the existing one-step methods.
|
Sequence-to-sequence models have led to significant progress in keyphrase
generation, but it remains unknown whether they are reliable enough to be
beneficial for document retrieval. This study provides empirical evidence that
such models can significantly improve retrieval performance, and introduces a
new extrinsic evaluation framework that allows for a better understanding of
the limitations of keyphrase generation models. Using this framework, we point
out and discuss the difficulties encountered when supplementing documents with
keyphrases that are not present in the text, and when generalizing models across domains.
Our code is available at https://github.com/boudinfl/ir-using-kg
|
Despite the successes of deep neural networks on many challenging vision
tasks, they often fail to generalize to new test domains that are not
distributed identically to the training data. The domain adaptation becomes
more challenging for cross-modality medical data with a notable domain shift,
given that specific annotated imaging modalities may not be accessible or
complete. Our proposed solution is based on the cross-modality synthesis of
medical images to reduce the costly annotation burden by radiologists and
bridge the domain gap in radiological images. We present a novel approach for
image-to-image translation in medical images, capable of supervised or
unsupervised (unpaired image data) setups. Built upon adversarial training, we
propose a learnable self-attentive spatial normalization of the deep
convolutional generator network's intermediate activations. Unlike previous
attention-based image-to-image translation approaches, which are either
domain-specific or require distortion of the source domain's structures, we
unearth the importance of the auxiliary semantic information to handle the
geometric changes and preserve anatomical structures during image translation.
We achieve superior results for cross-modality segmentation between unpaired
MRI and CT data for multi-modality whole heart and multi-modal brain tumor MRI
(T1/T2) datasets compared to the state-of-the-art methods. We also observe
encouraging results in cross-modality conversion for paired MRI and CT images
on a brain dataset. Furthermore, a detailed analysis of the cross-modality
image translation and thorough ablation studies confirm our proposed method's
efficacy.
|
Dynamic Causal Modeling (DCM) is a Bayesian framework for inferring on hidden
(latent) neuronal states, based on measurements of brain activity. Since its
introduction in 2003 for functional magnetic resonance imaging data, DCM has
been extended to electrophysiological data, and several variants have been
developed. Their biophysically motivated formulations make these models
promising candidates for providing a mechanistic understanding of human brain
dynamics, both in health and disease. However, due to their complexity and
reliance on concepts from several fields, fully understanding the mathematical
and conceptual basis behind certain variants of DCM can be challenging. At the
same time, a solid theoretical knowledge of the models is crucial to avoid
pitfalls in the application of these models and interpretation of their
results. In this paper, we focus on one of the most advanced formulations of
DCM, i.e. conductance-based DCM for cross-spectral densities, whose components
are described across multiple technical papers. The aim of the present article
is to provide an accessible exposition of the mathematical background, together
with an illustration of the model's behavior. To this end, we include
step-by-step derivations of the model equations, point to important aspects in
the software implementation of those models, and use simulations to provide an
intuitive understanding of the type of responses that can be generated and the
role that specific parameters play in the model. Furthermore, all code utilized
for our simulations is made publicly available alongside the manuscript to
allow readers an easy hands-on experience with conductance-based DCM.
|
Direct simulation of physical processes on a kinetic level is prohibitively
expensive in aerospace applications due to the extremely high dimension of the
solution spaces. In this paper, we consider the moment system of the Boltzmann
equation, which projects the kinetic physics onto the hydrodynamic scale. The
unclosed moment system can be solved in conjunction with the entropy closure
strategy. Using an entropy closure provides structural benefits to the physical
system of partial differential equations. Computing such a closure of the
system usually accounts for the majority of the total computational cost, since one needs to
solve an ill-conditioned constrained optimization problem. Therefore, we build
a neural network surrogate model to close the moment system, which preserves
the structural properties of the system by design, but reduces the
computational cost significantly. Numerical experiments are conducted to
illustrate the performance of the current method in comparison to the
traditional closure.
|
Document grounded generation is the task of using the information provided in
a document to improve text generation. This work focuses on two different
document grounded generation tasks: Wikipedia Update Generation task and
Dialogue response generation. Our work introduces two novel adaptations of
large scale pre-trained encoder-decoder models focusing on building context
driven representation of the document and enabling specific attention to the
information in the document. Additionally, we provide a stronger BART baseline
for these tasks. Our proposed techniques outperform existing methods on both
automated (at least 48% increase in BLEU-4 points) and human evaluation for
closeness to reference and relevance to the document. Furthermore, we perform
comprehensive manual inspection of the generated output and categorize errors
to provide insights into future directions in modeling these tasks.
|
Physics-Informed Machine Learning (PIML) has gained momentum in the last 5
years with scientists and researchers aiming to utilize the benefits afforded
by advances in machine learning, particularly in deep learning. With large
scientific data sets with rich spatio-temporal data and high-performance
computing providing large amounts of data to be inferred and interpreted, the
task of PIML is to ensure that these predictions, categorizations, and
inferences are enforced by, and conform to the limits imposed by physical laws.
In this work a new approach to utilizing PIML is discussed that deals with the
use of physics-based loss functions. While typical usage of physical equations
in the loss function requires complex layers of derivatives and other functions
to ensure that the known governing equation is satisfied, here we show that a
similar level of enforcement can be achieved by implementing simpler loss
functions on specific kinds of output data. The generalizability that this
approach affords is shown using examples of simple mechanical models that can
be thought of as sufficiently simplified surrogate models for a wide class of
problems.
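As a hedged illustration of a physics-based loss applied directly to output data, the sketch below penalizes a toy surrogate of a linear spring whenever its predicted force violates Hooke's law; the spring constant, network, and loss weighting are placeholders, not the models used in this work.

```python
import torch

# Hypothetical surrogate: predicts the restoring force from the displacement.
model = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
k = 2.0  # assumed spring constant of the simplified mechanical model

x = torch.linspace(-1.0, 1.0, 64).unsqueeze(1)
f_measured = -k * x + 0.05 * torch.randn_like(x)     # noisy training data

f_pred = model(x)
data_loss = torch.mean((f_pred - f_measured) ** 2)
# Physics-based term applied to the outputs themselves: F = -k x must hold,
# so no derivatives of the network are required.
physics_loss = torch.mean((f_pred + k * x) ** 2)
loss = data_loss + 0.1 * physics_loss
loss.backward()
```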
|
We discover a new minimality property of the absolute minimisers of supremal
functionals (also known as $L^\infty$ Calculus of Variations problems).
|
We present a method to tune the resonantly enhanced harmonic emission from
engineered potentials, which would be experimentally feasible in view of
the recent advances in atomic and condensed matter physics. The recombination
of the electron from the potential dependent excited state to the ground state
causes the emission of photons with a specific energy. The energy of the
emitted photons can be controlled by appropriately tweaking the potential
parameters. The resonant enhancement in high-harmonic generation enables the
emission of very intense extreme ultra-violet or soft x-ray radiations. The
scaling law of the resonant harmonic emission with the model parameter of the
potential is also obtained by numerically solving the time-dependent
Schr\"odinger equation in two dimensions.
|
We prove equidistribution theorems for a family of holomorphic Siegel cusp
forms of general degree in the level aspect. Our main contribution is to
estimate unipotent contributions for general degree in the geometric side of
Arthur's invariant trace formula in terms of Shintani zeta functions. Several
applications including the vertical Sato-Tate theorem and low-lying zeros for
standard $L$-functions of holomorphic Siegel cusp forms are discussed. We also
show that the ``non-genuine forms'' which come from non-trivial endoscopic
contributions by Langlands functoriality classified by Arthur are negligible.
|
Image decomposition is a crucial subject in the field of image processing. It
can extract salient features from the source image. We propose a new image
decomposition method based on convolutional neural network. This method can be
applied to many image processing tasks. In this paper, we apply the image
decomposition network to the image fusion task. We input an infrared image and
a visible light image and decompose each into three high-frequency feature images
and one low-frequency feature image. The two sets of feature images
are fused using a specific fusion strategy to obtain fusion feature images.
Finally, the feature images are reconstructed to obtain the fused image.
Compared with the state-of-the-art fusion methods, this method has achieved
better performance in both subjective and objective evaluation.
|
Exascale computing systems will exhibit high degrees of hierarchical
parallelism, with thousands of computing nodes and hundreds of cores per node.
Efficiently exploiting hierarchical parallelism is challenging due to load
imbalance that arises at multiple levels.
OpenMP is the most widely-used standard for expressing and exploiting the
ever-increasing node-level parallelism.
The scheduling options in OpenMP are insufficient to address the load
imbalance that arises during the execution of multithreaded applications.
The limited scheduling options in OpenMP hinder research on novel scheduling
techniques which require comparison with others from the literature.
This work introduces LB4OMP, an open-source dynamic load balancing library
that implements successful scheduling algorithms from the literature.
LB4OMP is a research infrastructure designed to spur and support present and
future scheduling research, for the benefit of multithreaded application
performance.
Through an extensive performance analysis campaign, we assess the
effectiveness and demystify the performance of all loop scheduling techniques
in the library.
We show that, for numerous applications-systems pairs, the scheduling
techniques in LB4OMP outperform the scheduling options in OpenMP.
Node-level load balancing using LB4OMP leads to reduced cross-node load
imbalance and to improved MPI+OpenMP application performance, which is
critical for Exascale computing.
|
We study the steady-state population patterns of coupled oscillators
that sync and swarm, where the interaction distance among oscillators has a
finite cutoff. We examine how the static patterns known
for the infinite cutoff are reproduced or deformed, and explore a new static
pattern that appears only when a finite cutoff is considered. All
steady-state patterns of the infinite-cutoff, static sync, static async, and
static phase wave are respectively repeated in space for proper finite-cutoff
ranges. Their deformation in shape and density takes place for the other
finite-cutoff ranges. Bar-like phase wave states are observed, which is not
the case for the infinite cutoff. All the patterns are investigated via
numerical and theoretical analysis.
|
We consider linear parameter-dependent systems $A(\mu) x(\mu) = b$ for many
different $\mu$, where $A$ is large and sparse, and depends nonlinearly on
$\mu$. Solving such systems individually for each $\mu$ would require great
computational effort. In this work we propose to compute a partial
parameterization $\tilde{x}(\mu) \approx x(\mu)$, where $\tilde{x}(\mu)$ is cheap to
compute for many different $\mu$. Our methods are based on the observation that
a companion linearization can be formed where the dependence on $\mu$ is only
linear. In particular, we develop methods which combine the well-established
Krylov subspace method for linear systems, GMRES, with algorithms for nonlinear
eigenvalue problems (NEPs) to generate a basis for the Krylov subspace. Within
this new approach, the basis matrix is constructed in three different ways,
using a tensor structure and exploiting that certain problems have low-rank
properties. We show convergence factor bounds obtained similarly to those for
the method GMRES for linear systems. More specifically, a bound is obtained
based on the magnitude of the parameter $\mu$ and the spectrum of the linear
companion matrix, which corresponds to the reciprocal solutions to the
corresponding NEP. Numerical experiments illustrate the competitiveness of our
methods for large-scale problems. The simulations are reproducible and publicly
available online.
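To make the computational setting concrete, the sketch below shows the straightforward baseline of one GMRES solve per parameter value, whose cost motivates the methods above; the toy nonlinear dependence on $\mu$ is an assumption for illustration only.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy parameter dependence A(mu) = A0 + exp(mu) * A1 (an arbitrary nonlinear
# example; the paper treats general nonlinear dependence on mu).
n = 1000
A0 = sp.diags([10.0] * n) + sp.random(n, n, density=1e-3, random_state=0)
A1 = sp.random(n, n, density=1e-3, random_state=1)
b = np.ones(n)

# Naive baseline the paper seeks to avoid: one full GMRES solve per mu.
solutions = {}
for mu in np.linspace(0.0, 1.0, 20):
    A_mu = (A0 + np.exp(mu) * A1).tocsr()
    x, info = spla.gmres(A_mu, b)   # info == 0 signals convergence
    solutions[float(mu)] = x
```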
|
We prove a folklore conjecture concerning the sum-of-digits functions in
bases two and three: there are infinitely many positive integers $n$ such that
the binary sum of digits of $n$ equals its ternary sum of digits.
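A quick numerical illustration (not part of the proof) that lists the first few such integers:

```python
def digit_sum(n: int, base: int) -> int:
    """Sum of the digits of n written in the given base."""
    s = 0
    while n:
        s += n % base
        n //= base
    return s

# Small positive integers whose binary and ternary digit sums agree.
matches = [n for n in range(1, 200) if digit_sum(n, 2) == digit_sum(n, 3)]
print(matches)  # starts 1, 6, 7, 10, ...; the theorem says this list never ends
```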
|
Context. Radiation-driven mass loss is key to our understanding of
massive-star evolution. However, for low-luminosity O-type stars there are big
discrepancies between theoretically predicted and empirically derived mass-loss
rates (called the weak-wind problem). Aims. We compute radiation-line-driven
wind models of a typical weak-wind star to determine its temperature structure
and the corresponding impact on ultra-violet (UV) line formation. Methods. We
carried out hydrodynamic simulations of the line-deshadowing instability (LDI)
for a weak-wind star in the Galaxy. Subsequently, we used this LDI model as
input in a short-characteristics radiative transfer code to compute synthetic
UV line profiles. Results. We find that the line-driven weak wind is
significantly shock heated to high temperatures and is unable to cool down
efficiently. This results in a complex temperature structure where more than
half of the wind volume has temperatures significantly higher than the stellar
effective temperature. Therefore, a substantial portion of the weak wind will
be more ionised, resulting in a reduction of the UV line opacity and therefore
in weaker line profiles for a given mass-loss rate. Quantifying this, we find
that weak-wind mass-loss rates derived from unsaturated UV lines could be
underestimated by a factor of between 10 and 100 if the high-temperature gas is
not properly taken into account in the spectroscopic analysis. This offers a
tentative basic explanation for the weak-wind problem: line-driven weak winds
are not really weaker than theoretically expected, but rather a large portion
of their wind volume is much hotter than the stellar effective temperature.
|
In end-to-end optimized learned image compression, it is standard practice to
use a convolutional variational autoencoder with generalized divisive
normalization (GDN) to transform images into a latent space. Recently,
Operational Neural Networks (ONNs) that learn the best non-linearity from a set
of alternatives, and their self-organized variants, Self-ONNs, that approximate
any non-linearity via Taylor series have been proposed to address the
limitations of convolutional layers and a fixed nonlinear activation. In this
paper, we propose to replace the convolutional and GDN layers in the
variational autoencoder with self-organized operational layers, and propose a
novel self-organized variational autoencoder (Self-VAE) architecture that
benefits from stronger non-linearity. The experimental results demonstrate that
the proposed Self-VAE yields improvements in both rate-distortion performance
and perceptual image quality.
|
In wireless sensor networks (WSNs), designing a stable, low-power routing
protocol is a major challenge because successive changes in links or breakdowns
destabilize the network topology. Therefore, choosing the right route in this
type of network due to resource constraints and their operating environment is
one of the most important challenges in these networks. The main
purpose of these networks is to collect appropriate routing information about
the environment around the network sensors while observing the energy
consumption of the sensors. One of the important approaches to reduce energy
consumption in sensor networks is the use of the clustering technique, but most
clustering methods consider only the energy of the cluster or the distance of
members to the cluster.
Therefore, in this paper, a method is presented using the firefly algorithm and
using the four criteria of residual energy, noise rate, number of hops, and
distance. The proposed method, called EM-FIREFLY, selects the
best cluster head with high attractiveness based on the fitness function
and transfers the data packets through this cluster head to the sink. The
proposed method is evaluated with the NS-2 simulator and compared with the
PSO algorithm and optimal clustering methods. The evaluation results show the
efficiency of the EM-FIREFLY method in maximum relative load and network
lifetime criteria compared to other methods discussed in this article.
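To make the four-criterion selection concrete, here is a hedged sketch of one possible fitness function for ranking candidate cluster heads; the weights and the normalization of the inputs are illustrative assumptions, not the values used by EM-FIREFLY.

```python
def cluster_head_fitness(residual_energy: float,
                         noise_rate: float,
                         hop_count: float,
                         distance_to_sink: float,
                         weights=(0.4, 0.2, 0.2, 0.2)) -> float:
    """Higher is better: reward residual energy; penalize noise, hops, and distance.

    All inputs are assumed to be pre-normalized to [0, 1]; the weights are
    illustrative placeholders, not the parameters of EM-FIREFLY.
    """
    w_e, w_n, w_h, w_d = weights
    return (w_e * residual_energy
            - w_n * noise_rate
            - w_h * hop_count
            - w_d * distance_to_sink)

# Example: a node with high remaining energy, low noise, and few hops scores well.
print(cluster_head_fitness(0.9, 0.1, 0.2, 0.3))
```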
|
Employing the time-dependent variational principle combined with the multiple
Davydov $\mathrm{D}_2$ Ansatz, we investigate Landau-Zener (LZ) transitions in
a qubit coupled to a photon mode with various initial photon states at zero
temperature. Thanks to the multiple Davydov trial states, exact photonic
dynamics taking place in the course of the LZ transition is also studied
efficiently. With the qubit driven by a linear external field and the photon
mode initialized with Schr\"odinger-cat states, asymptotic behavior of the
transition probability beyond the rotating-wave approximation is uncovered for
a variety of initial states. Using a sinusoidal external driving field, we also
explore the photon-assisted dynamics of Landau-Zener-St\"{u}ckelberg-Majorana
interferometry. Transition pathways involving multiple energy levels are
unveiled by analyzing the photon dynamics.
|
In this paper, we propose a novel framework for multi-target multi-camera
tracking (MTMCT) of vehicles based on metadata-aided re-identification
(MA-ReID) and the trajectory-based camera link model (TCLM). Given a video
sequence and the corresponding frame-by-frame vehicle detections, we first
address the isolated tracklets issue from single camera tracking (SCT) by the
proposed traffic-aware single-camera tracking (TSCT). Then, after automatically
constructing the TCLM, we solve MTMCT by the MA-ReID. The TCLM is generated
from camera topological configuration to obtain the spatial and temporal
information to improve the performance of MTMCT by reducing the candidate
search of ReID. We also use the temporal attention model to create more
discriminative embeddings of trajectories from each camera to achieve robust
distance measures for vehicle ReID. Moreover, we train a metadata classifier
for MTMCT to obtain the metadata feature, which is concatenated with the
temporal attention based embeddings. Finally, the TCLM and hierarchical
clustering are jointly applied for global ID assignment. The proposed method is
evaluated on the CityFlow dataset, achieving IDF1 76.77%, which outperforms the
state-of-the-art MTMCT methods.
|
We investigate latent-space scalability for multi-task collaborative
intelligence, where one of the tasks is object detection and the other is input
reconstruction. In our proposed approach, part of the latent space can be
selectively decoded to support object detection while the remainder can be
decoded when input reconstruction is needed. Such an approach allows reduced
computational resources when only object detection is required, and this can be
achieved without reconstructing input pixels. By varying the scaling factors of
various terms in the training loss function, the system can be trained to
achieve various trade-offs between object detection accuracy and input
reconstruction quality. Experiments are conducted to demonstrate the adjustable
system performance on the two tasks compared to the relevant benchmarks.
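A minimal sketch of the two ideas described above, assuming hypothetical per-task losses: the latent tensor is split so that only its base portion feeds the detection branch, and scaling factors weight the two training objectives.

```python
import torch

# Toy latent tensor; in the approach above, part of the latent space is decoded
# for object detection while the full latent is used for input reconstruction.
latent = torch.randn(4, 192, 16, 16)                 # (batch, channels, H, W); sizes are assumptions
base, enhancement = latent[:, :96], latent[:, 96:]   # base -> detection, base+enhancement -> reconstruction

def multi_task_loss(det_loss: torch.Tensor,
                    recon_loss: torch.Tensor,
                    w_det: float = 1.0,
                    w_recon: float = 0.5) -> torch.Tensor:
    """Weighted sum of the detection and input-reconstruction losses.

    Varying w_det and w_recon trains the system toward different trade-offs
    between detection accuracy and reconstruction quality.
    """
    return w_det * det_loss + w_recon * recon_loss
```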
|
Observatory publications comprise the work of local astronomers from
observatories around the world and are traditionally exchanged between
observatories through libraries. However, large collections of observatory
publications seem to be rare; or at the least rarely digitally described or
accessible on the Internet. Notable examples to the contrary are the Woodman
Astronomical Library at Wisconsin-Madison and the Dudley Observatory in
Loudonville, New York, both in the US. Due to irregularities in receiving
material, the collections are often incomplete both with respect to
the observatories included as well as volumes. In order to assess the unique
properties of the collections, we summarize and compare observatories present
in our own as well as the collections from the Woodman Library and the Dudley
Observatory.
|
We present a detailed comparison of several recent and new approaches to
multigrid solver algorithms suitable for the solution of 5d chiral fermion
actions such as Domain Wall fermions in the Shamir formulation, and also for
the Partial Fraction and Continued Fraction overlap. Our focus is on the
acceleration of gauge configuration sampling, and a compact nearest neighbour
stencil is required to limit the calculational cost of obtaining a coarse
operator. This necessitates the coarsening of a nearest neighbour operator to
preserve sparsity in coarsened grids, unlike HDCG. We compare the approaches of
HDCR and the Multigrid algorithm and also several new hybrid schemes. In this
work we introduce a new recursive Chebyshev polynomial based setup scheme. We
find that the HDCR approach can both set up and solve standard Shamir Domain
Wall Fermions faster than a single solve with red-black preconditioned
Conjugate Gradients on large volumes and for modern GPU systems such as the
Summit supercomputer. This is promising for the acceleration of HMC,
particularly if setup costs are shared across multiple Hasenbusch determinant
factors. The setup scheme is likely generally applicable to other Fermion
actions.
|
Let $P,Q$ be longest paths in a simple graph. We analyze the possible
connections between the components of $P\cup Q\setminus (V(P)\cap V(Q))$ and
introduce the notion of a bi-traceable graph. We use the results for all the
possible configurations of the intersection points when $\#(V(P)\cap V(Q))\le 5$
in order to prove that if the intersection of three longest paths $P,Q,R$ is
empty, then $\#(V(P)\cap V(Q))\ge 6$. We also prove Hippchen's conjecture for
$k\le 6$: If a graph $G$ is $k$-connected for $k\le 6$, and $P$ and $Q$ are
longest paths in $G$, then $\#(V(P)\cap V(Q))\ge 6$.
|
This paper presents a framework for the design and analysis of an
$\mathcal{L}_1$ adaptive controller with a switching reference system. The use
of a switching reference system allows the desired behavior to be scheduled
across the operating envelope, which is often required in aerospace
applications. The analysis uses a switched reference system that assumes
perfect knowledge of uncertainties and uses a corresponding non-adaptive
controller. Provided that this switched reference system is stable, it is shown
that the closed-loop system with unknown parameters and disturbances and the
$\mathcal{L}_1$ adaptive controller can behave arbitrarily close to this
reference system. Simulations of the short period dynamics of a transport class
aircraft during the approach phase illustrate the theoretical results.
|
Music streaming services heavily rely on recommender systems to improve their
users' experience, by helping them navigate through a large musical catalog and
discover new songs, albums or artists. However, recommending relevant and
personalized content to new users, with few to no interactions with the
catalog, is challenging. This is commonly referred to as the user cold start
problem. In this applied paper, we present the system recently deployed on the
music streaming service Deezer to address this problem. The solution leverages
a semi-personalized recommendation strategy, based on a deep neural network
architecture and on a clustering of users from heterogeneous sources of
information. We extensively show the practical impact of this system and its
effectiveness at predicting the future musical preferences of cold start users
on Deezer, through both offline and online large-scale experiments. Besides, we
publicly release our code as well as anonymized usage data from our
experiments. We hope that this release of industrial resources will benefit
future research on user cold start recommendation.
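A minimal, hedged sketch of the semi-personalized idea (not the production system deployed on Deezer): cluster embeddings of existing users, assign a cold start user to a cluster from whatever onboarding information is available, and serve that cluster's pre-computed recommendations.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder embeddings of existing ("warm") users, e.g. from collaborative filtering.
warm_user_embeddings = rng.normal(size=(1000, 32))
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(warm_user_embeddings)

# A cold start user has no listening history; in practice a model trained on
# registration/onboarding features would map them into the same space.
# Here a random vector stands in as a placeholder.
new_user_embedding = rng.normal(size=(1, 32))
segment = int(kmeans.predict(new_user_embedding)[0])

# Each segment is then served its pre-computed top tracks (the
# "semi-personalized" part): one ranked list per cluster, not per user.
print(f"new user assigned to segment {segment} of {kmeans.n_clusters}")
```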
|
Government agencies always need to carefully consider potential risks of
disclosure whenever they publish statistics based on their data or give
external researchers access to the collected data. For this reason, research on
disclosure avoiding techniques has a long tradition at statistical agencies. In
this context, the promise of formal privacy guarantees offered by concepts such
as differential privacy seem to be the panacea enabling the agencies to exactly
quantify and control the privacy loss incurred by any data release. Still,
despite the excitement in academia and industry, most agencies (with the
prominent exception of the U.S. Census Bureau) have been reluctant to even
consider the concept for their data release strategy.
This paper aims to shed some light on potential reasons for this. We argue
that the requirements when implementing differential privacy approaches at
government agencies are often fundamentally different from the requirements in
industry. This raises many challenging problems and open questions that still
need to be addressed before the concept might be used as an overarching
principle when sharing data with the public. The paper will not offer any
solutions to these challenges. Instead, we hope to stimulate some collaborative
research efforts, as we believe that many of the problems can only be addressed
by inter-disciplinary collaborations.
|
To characterize entanglement of tripartite
$\mathbb{C}^d\otimes\mathbb{C}^d\otimes\mathbb{C}^d$ systems, we employ
algebraic-geometric tools that are invariants under Stochastic Local Operation
and Classical Communication (SLOCC), namely $k$-secant varieties and
one-multilinear ranks. Using these, we present a classification of tripartite
pure states in terms of a finite number of families and subfamilies. At its
core is a fine-structure grouping of three-qutrit
entanglement.
|
Electric currents carrying a net spin polarization are widely used in
spintronics, whereas globally spin-neutral currents are expected to play no
role in spin-dependent phenomena. Here we show that, in contrast to this common
expectation, spin-independent conductance in compensated antiferromagnets and
normal metals can be efficiently exploited in spintronics, provided their
magnetic space group symmetry supports a non-spin-degenerate Fermi surface. Due
to their momentum-dependent spin polarization, such antiferromagnets can be
used as active elements in antiferromagnetic tunnel junctions (AFMTJs) and
produce a giant tunneling magnetoresistance (TMR) effect. Using RuO$_{2}$ as a
representative compensated antiferromagnet exhibiting spin-independent
conductance along the [001] direction but a non-spin-degenerate Fermi surface,
we design a RuO$_{2}$/TiO$_{2}$/RuO$_{2}$ (001) AFMTJ, where a globally
spin-neutral charge current is controlled by the relative orientation of the
N\'eel vectors of the two RuO$_{2}$ electrodes, resulting in the TMR effect as
large as ~500%. These results are expanded to normal metals which can be used
as a counter electrode in AFMTJs with a single antiferromagnetic layer or other
elements in spintronic devices. Our work uncovers an unexplored potential of
the materials with no global spin polarization for utilizing them in
spintronics.
|