We introduce a method whereby particle physics processes in cosmology may be
calculated with the usual perturbative flat-space quantum field theory through
an effective Minkowski space description at small time intervals, provided
that the running of the effective particle masses is sufficiently slow. We
discuss the necessary conditions for the applicability of this method and
illustrate it through a simple example. This method has the advantage of
avoiding the effects of gravitational particle creation in the calculation of
rates and cross sections, i.e., it directly gives the rates and cross sections
due to scattering or decay processes.
|
Complex linear differential equations with entire coefficients are studied in
the situation where one of the coefficients is an exponential polynomial and
dominates the growth of all the other coefficients. If such an equation has an
exponential polynomial solution $f$, then the order of $f$ and of the dominant
coefficient are equal, and the two functions possess a certain duality
property. The results presented in this paper improve earlier results by some
of the present authors, and the paper concludes with two open problems.
|
In this article, we study the logarithm of the central value
$L\left(\frac{1}{2}, \chi_D\right)$ in the symplectic family of Dirichlet
$L$-functions associated with hyperelliptic curves of genus $\delta$ over a
fixed finite field $\mathbb{F}_q$ in the limit as $\delta\to \infty$.
Unconditionally, we show that the distribution of $\log
\big|L\left(\frac{1}{2}, \chi_D\right)\big|$ is asymptotically bounded above by
the Gaussian distribution of mean $\frac{1}{2}\log \deg(D)$ and variance $\log
\deg(D)$. Assuming a mild condition on the distribution of the low-lying zeros
in this family, we obtain the full Gaussian distribution.
|
In this article, we obtain a complete list of inequivalent irreducible
representations of the compact quantum group $U_q(2)$ for non-zero complex
deformation parameters $q$, which are not roots of unity. The matrix
coefficients of these representations are described in terms of the little
$q$-Jacobi polynomials. The Haar state is shown to be faithful and an
orthonormal basis of $L^2(U_q(2))$ is obtained. Thus, we have an explicit
description of the Peter-Weyl decomposition of $U_q(2)$. As an application, we
discuss the Fourier transform and establish the Plancherel formula. We also
describe the decomposition of the tensor product of two irreducible
representations into irreducible components. Finally, we classify the compact
quantum groups $U_q(2)$.
|
In this work, the $\overline{\partial}$ steepest descent method is employed
to investigate the soliton resolution for the Hirota equation with initial
data belonging to the weighted Sobolev space $H^{1,1}(\mathbb{R})=\{f\in
L^{2}(\mathbb{R}): f',xf\in L^{2}(\mathbb{R})\}$. The long-time asymptotic
behavior of the solution $q(x,t)$ is derived in any fixed space-time cone
$C(x_{1},x_{2},v_{1},v_{2})=\left\{(x,t)\in \mathbb{R}\times\mathbb{R}:
x=x_{0}+vt ~\text{with}~ x_{0}\in[x_{1},x_{2}],~v\in[v_{1},v_{2}]\right\}$. We
show that the soliton resolution conjecture for the Hirota equation is
characterized by the leading-order term $\mathcal{O}(t^{-1/2})$ from the
continuous spectrum, $\mathcal{N}(\mathcal{I})$ soliton solutions from the
discrete spectrum, and an error term of order $\mathcal{O}(t^{-3/4})$ from the
$\overline{\partial}$ equation.
|
We present the first 3D radiation-hydrodynamic simulations on the formation
and evolution of born-again planetary nebulae (PNe), with particular emphasis
on the case of HuBi1, the inside-out PN. We use the extensively-tested GUACHO
code to simulate the formation of HuBi1 adopting mass-loss and stellar wind
terminal velocity estimates obtained from observations presented by our group.
We found that, if the inner shell of HuBi1 was formed by an explosive very late
thermal pulse (VLTP) ejecting material with velocities of $\sim$300 km
s$^{-1}$, the age of this structure is consistent with that of $\simeq$200 yr
derived from multi-epoch narrow-band imaging. Our simulations predict that, as
a consequence of the dramatic reduction of the stellar wind velocity and photon
ionizing flux during the VLTP, the velocity and pressure structure of the outer
H-rich nebula are affected, creating turbulent ionized structures surrounding
the inner shell. These are indeed detected in Gran Telescopio Canarias MEGARA
optical observations. Furthermore, we demonstrate that the current relatively
low ionizing photon flux from the central star of HuBi1 is not able to
completely ionize the inner shell, which favors previous suggestions that its
excitation is dominated by shocks. Our simulations suggest that the kinetic
energy of the H-poor ejecta of HuBi1 is at least 30 times that of the clumps
and filaments in the evolved born-again PNe A30 and A78, making it a truly
unique VLTP event.
|
Here we study the effect of an additional interfacial spin-transfer torque,
as well as the well established spin-orbit torque and bulk spin-transfer
torque, on skyrmion collections (groups of skyrmions dense enough that they
are not isolated from one another) in ultrathin heavy metal / ferromagnetic
multilayers, by comparing modelling with experimental results. Using a skyrmion
collection with a range of skyrmion diameters and landscape disorder, we study
the dependence of the skyrmion Hall angle on diameter and velocity, as well as
the velocity as a function of diameter. We show the experimental results are in
good agreement with modelling when including the interfacial spin-transfer
torque, and cannot be reproduced by using the spin-orbit torque alone. We also
show that for skyrmion collections the velocity is approximately independent of
diameter, in marked contrast to the motion of isolated skyrmions, as the group
of skyrmions move together at an average group velocity. Moreover, the
calculated skyrmion velocities are comparable to those obtained in experiments
when the interfacial spin-transfer torque is included, whilst modelling using
the spin-orbit torque alone shows large discrepancies with the experimental
data. Our results thus show the significance of the interfacial spin-transfer
torque in ultrathin magnetic multilayers, which is of similar strength to the
spin-orbit torque, with both being significantly larger than the bulk
spin-transfer torque. Due to the good agreement with experiments, we conclude that the
interfacial spin-transfer torque should be included in numerical modelling for
correct reproduction of experimental results.
|
A method is demonstrated to optimize a stellarator's geometry to eliminate
magnetic islands and achieve other desired physics properties at the same time.
For many physics quantities that have been used in stellarator optimization,
including quasisymmetry, neoclassical transport, and magnetohydrodynamic
stability, it is convenient to use a magnetic equilibrium representation that
assures the existence of magnetic surfaces. However, this representation hides
the possible presence of magnetic islands, which are typically undesirable. To
include both surface-based objectives and island widths in a single
optimization, two fixed-boundary equilibrium calculations are run at each
iteration of the optimization: one that enforces the existence of magnetic
surfaces (VMEC [S. P. Hirshman and J. C. Whitson, Phys. Fluids 26, 3553
(1983)]), and one that does not (SPEC [S. R. Hudson et al., Phys. Plasmas 19,
112502 (2012)]). By penalizing the island residues in the objective function,
the two magnetic field representations are brought into agreement during the
optimization. An example is presented in which, particularly on the surface
where quasisymmetry was targeted, quasisymmetry is achieved more accurately
than in previously published examples.
|
This paper discusses relational operations in the first-order logical
environment {FOLE}. Here we demonstrate how FOLE expresses the relational
operations of database theory in a clear and implementable representation. An
analysis of the representation of database tables/relations in FOLE reveals a
principled way to express the relational operations. This representation is
expressed in terms of a distinction between basic components and composite
relational operations. The 9 basic components fall into three categories:
reflection (2), Booleans or basic operations (3), and adjoint flow (4). Adjoint
flow is given for signatures (2) and for type domains (2), which are then
combined into full adjoint flow. The basic components are used to express
various composite operations, where we illustrate each of these with a
flowchart. Implementation of the composite operations is then expressed in an
input/output table containing four parts: constraint, construction, input, and
output. We explain how limits and colimits are constructed from diagrams of
tables, and then classify composite relational operations into three
categories: limit-like, colimit-like and unorthodox.
|
In this paper, we study the generation of magnetic fields in a nonuniformly
rotating layer of finite thickness of an electrically conducting fluid by
thermomagnetic (TM) instability. This instability arises due to the
temperature gradient $\nabla T_0$ and the thermoelectromotive coefficient
gradient $\nabla\alpha$. The
influence of the generation of a toroidal magnetic field by TM instability on
convective instability in a nonuniformly rotating layer of an electrically
conductive fluid in the presence of a vertical constant magnetic field
${\bf{B}}_0 \| {\rm OZ}$ is established. By applying perturbation theory in
the small supercriticality parameter $\epsilon = \sqrt{(\textrm{Ra}-\textrm{Ra}_c)/\textrm{Ra}_c}$
of the stationary Rayleigh number $\textrm{Ra}_c$, a nonlinear equation of the
Ginzburg-Landau type is obtained. This equation describes the evolution of the
finite amplitude of perturbations. Numerical solutions of this equation made it
possible to determine the heat transfer in the fluid layer with and without TM
effects. It is shown that the amplitude of the stationary toroidal magnetic
field noticeably increases with allowance for TM effects.
|
Head and Neck Squamous Cell Carcinoma (HNSCC) is one of the most distressing
cancer types, leading to acute pain and affecting speech and primary survival
functions such as swallowing and breathing. The morbidity and mortality of
HNSCC patients have not significantly improved even though there have been
advances in surgical and radiotherapy treatments. The high mortality may be
attributed to the complexity and significant changes in the clinical outcomes.
Therefore, it is important to increase the accuracy of predicting the outcome
of cancer survival. Few cancer survival prediction models of HNSCC have been
proposed so far. In this study, genomic data (whole exome sequencing) are
integrated with clinical data to improve the performance of the prediction
model. The somatic mutations of every patient are processed using the
Multifractal Detrended Fluctuation Analysis (MFDFA) algorithm, and the fractal
dimension (Dq) parameter values are included along with clinical data for
cancer survival prediction. Feature ranking shows that the newly engineered
feature is one of the important features in the prediction model. In order to
improve the performance index of the models, hyperparameters were also tuned
in all the classifiers considered. 10-fold cross-validation is implemented,
and XGBoost (98% AUROC, 94% precision, and 93% recall) proves to be the best
model classifier, followed by Random Forest (93% AUROC, 93% precision, and 93%
recall), Support Vector Machine (84% AUROC, 79% precision, and 79% recall),
and Logistic Regression (80% AUROC, 77% precision, and 76% recall).
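As a rough illustration of the evaluation protocol described above (and not the authors' code), the following Python sketch runs 10-fold cross-validation with an XGBoost classifier and reports AUROC; the feature matrix, which in the study would concatenate clinical features with the MFDFA-derived Dq values, is replaced here by synthetic placeholder data.

```python
# Hedged sketch: X stands in for the clinical + MFDFA (Dq) feature matrix
# and y for the binary survival outcome; both are synthetic placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # placeholder features (clinical + Dq)
y = rng.integers(0, 2, size=200)  # placeholder survival labels

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold AUROC: {scores.mean():.3f} +/- {scores.std():.3f}")
```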
|
In this paper, the formation of primordial black holes (PBHs) is
reinvestigated using inflationary $\alpha$-attractors. Instead of using the
conventional Press-Schechter theory to compute the abundance, the optimized
peaks theory is used, which was developed in Ref.
\cite{Yoo:2018kvb,Yoo:2020dkz}. This method takes into account how curvature
perturbations play a r\^{o}le in modifying the mass of primordial black holes.
Analyzing the model proposed in \cite{Mahbub:2019uhl} it is seen that the
horizon mass of the collapsed Hubble patch is larger by $\mathcal{O}(10)$
compared to the usual computation. Moreover, PBHs can be formed from a
curvature power spectrum, $\mathcal{P}_{\zeta}(k)$, peaked at lower values
using numerically favored threshold overdensities. As a result of the
generally larger masses predicted, the peak of the power spectrum can be
placed at larger $k$ modes than is typical, whereby potential future
constraints on the primordial power spectrum through gravitational waves (GWs)
can be evaded.
|
The Einstein field equations for a class of irrotational non-orthogonally
transitive $G_{2}$ cosmologies are written down as a system of partial
differential equations. The equilibrium points are self-similar and can be
written as a one-parameter, five-dimensional, ordinary differential equation.
The corresponding cosmological models both evolve and have one dimension of
inhomogeneity. The major mathematical features of this ordinary differential
equation are derived, and a cosmological interpretation is given. The
relationship to the exceptional Bianchi models is explained and exploited to
provide a conjecture about future generalizations.
|
Polarimetric observations of Fast Radio Bursts (FRBs) are a powerful resource
for better understanding these mysterious sources by directly probing the
emission mechanism of the source and the magneto-ionic properties of its
environment. We present a pipeline for analysing the polarized signal of FRBs
captured by the triggered baseband recording system operating on the FRB survey
of The Canadian Hydrogen Intensity Mapping Experiment (CHIME/FRB). Using a
combination of simulated and real FRB events, we summarize the main features of
the pipeline and highlight the dominant systematics affecting the polarized
signal. We compare parametric (QU-fitting) and non-parametric (rotation measure
synthesis) methods for determining the Faraday rotation measure (RM) and find
the latter method susceptible to systematic errors from known instrumental
effects of CHIME/FRB observations. These errors include a leakage artefact that
appears as polarized signal near $\rm{RM\sim 0 \; rad \, m^{-2}}$ and an RM
sign ambiguity introduced by path length differences in the system's
electronics. We apply the pipeline to a bright burst previously reported by
\citet[FRB 20191219F;][]{Leung2021}, detecting an $\mathrm{RM}$ of $\rm{+6.074
\pm 0.006 \pm 0.050 \; rad \, m^{-2}}$ with a significant linear polarized
fraction ($\gtrsim0.87$) and strong evidence for a non-negligible circularly
polarized component. Finally, we introduce an RM search method that employs a
phase-coherent de-rotation algorithm to correct for intra-channel
depolarization in data that retain electric field phase information, and
successfully apply it to an unpublished FRB, FRB 20200917A, measuring an
$\mathrm{RM}$ of $\rm{-1294.47 \pm 0.10 \pm 0.05 \; rad \, m^{-2}}$ (the second
largest unambiguous RM detection from any FRB source observed to date).
|
We propose a projected Wasserstein gradient descent method (pWGD) for
high-dimensional Bayesian inference problems. The underlying density function
of a particle system of WGD is approximated by kernel density estimation (KDE),
which faces the long-standing curse of dimensionality. We overcome this
challenge by exploiting the intrinsic low-rank structure in the difference
between the posterior and prior distributions. The parameters are projected
into a low-dimensional subspace to alleviate the approximation error of KDE in
high dimensions. We formulate a projected Wasserstein gradient flow and analyze
its convergence property under mild assumptions. Several numerical experiments
illustrate the accuracy, convergence, and complexity scalability of pWGD with
respect to parameter dimension, sample size, and processor cores.
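To make the particle-system mechanics concrete, the following numpy sketch performs plain (unprojected) Wasserstein gradient descent with a Gaussian KDE on a toy standard-normal posterior; the subspace projection that defines pWGD, and all step sizes and bandwidths, are illustrative assumptions rather than the paper's algorithm.

```python
# Minimal WGD-with-KDE sketch: particles follow grad log(posterior/KDE).
# The pWGD projection step onto a low-dimensional subspace is omitted.
import numpy as np

def wgd_step(X, grad_log_post, h=0.5, eps=0.05):
    diff = X[:, None, :] - X[None, :, :]                 # pairwise x_i - x_j
    w = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))   # Gaussian kernel weights
    # grad of log KDE at x_i: sum_j w_ij (x_j - x_i) / (h^2 sum_j w_ij)
    grad_log_kde = (w[:, :, None] * (-diff)).sum(1) / (h**2 * w.sum(1, keepdims=True))
    return X + eps * (grad_log_post(X) - grad_log_kde)

grad_log_post = lambda X: -X                  # standard Gaussian target
X = np.random.default_rng(1).normal(3.0, 1.0, size=(100, 2))
for _ in range(200):
    X = wgd_step(X, grad_log_post)
print(X.mean(axis=0))                         # drifts toward the target mean [0, 0]
```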
|
In this paper, we investigate the computational intelligibility of Boolean
classifiers, characterized by their ability to answer XAI queries in polynomial
time. The classifiers under consideration are decision trees, DNF formulae,
decision lists, decision rules, tree ensembles, and Boolean neural nets. Using
9 XAI queries, including both explanation queries and verification queries, we
show the existence of a large intelligibility gap between the families of
classifiers. On the one hand, all 9 XAI queries are tractable for decision
trees. On the other hand, none of them is tractable for DNF formulae, decision
lists, random forests, boosted decision trees, Boolean multilayer perceptrons,
and binarized neural networks.
|
An approach that extends equilibrium thermodynamics principles to
out-of-equilibrium systems is based on the local equilibrium hypothesis.
However, the validity of the a priori assumption of local equilibrium has been
questioned due to the lack of sufficient experimental evidence. In this paper,
we present experimental results obtained from a pure thermodynamic study of the
non-turbulent Rayleigh-B\'enard convection at steady-state to verify the
validity of the local equilibrium hypothesis. A non-turbulent Rayleigh-B\'enard
convection at steady-state is an excellent `model thermodynamic system' in
which local measurements provide no insights about the spatial heterogeneity
present in the macroscopic thermodynamic landscape. Indeed, the onset of
convection leads to the emergence of spatially stable hot and cold domains. Our
results indicate that these domains, while breaking spatial symmetry
macroscopically, preserve it locally, exhibiting room-temperature
equilibrium-like statistics. Furthermore, the role of the emergent heat flux is
investigated and a linear relationship is observed between the heat flux and
the external driving force following the onset of thermal convection. Finally,
theoretical and conceptual implications of these results are discussed, which
open up new avenues in the study of non-equilibrium steady-states, especially
in complex, soft, and active-matter systems.
|
Graphical languages are symmetric monoidal categories presented by generators
and equations. String diagram notation allows one to transform numerous axioms
into low-dimensional topological rules that we are comfortable with as
citizens of three-dimensional space. This aspect is often referred to as the
Only Topology Matters paradigm (OTM). However, OTM remains quite informal, and
its exact meaning in terms of rewriting rules is ambiguous. In this paper we define
three precise aspects of the OTM paradigm, namely flexsymmetry, flexcyclicity
and flexibility of Frobenius algebras. We investigate how this new framework
can simplify the presentation of known graphical languages based on Frobenius
algebras.
|
We show that there exist K\"ahler-Einstein metrics on two exceptional
two-orbit varieties of Pasquier. As an application, we provide a new example
of a K-unstable Fano manifold with Picard number one.
|
In this letter, we investigate the changes in the quantum vacuum energy
density of a massless scalar field inside a Casimir cavity that orbits a
wormhole, by considering the cosmological model with an isotropic form of the
Morris-Thorne wormhole, embedded in the FLRW universe. In this sense, we
examine the effects of its global curvature and scale factor in an instant of
the cosmic history, besides the influences of the local geometry as well as of
inertial forces, on the Casimir energy density. We also study the behavior of
this quantity when each plate is fixed without rotation at the opposite sides
of the wormhole throat, at zero and finite temperatures, taking into account
the effective distance between the plates through the wormhole throat.
|
Determination of the neutrino mass ordering (NMO) is one of the biggest
priorities in the intensity frontier of high energy particle physics. To
accomplish that goal, substantial efforts are being put together with
atmospheric, solar, reactor, and accelerator neutrinos. In the standard
3-flavor framework, NMO is defined to be normal if $m_1<m_2<m_3$, and inverted
if $m_3<m_1<m_2$, where $m_1$, $m_2$, and $m_3$ are the masses of the three
neutrino mass eigenstates $\nu_1$, $\nu_2$, and $\nu_3$ respectively.
Interestingly, the two long-baseline experiments T2K and NO$\nu$A are playing
a leading role in this direction and, as we find in this work, provide a
$\sim2.4\sigma$ indication in favor of normal ordering (NO). In addition, we
examine how the situation looks in the presence of non-standard interactions
(NSI) of neutrinos, with a special focus on the non-diagonal flavor-changing
parameters $\varepsilon_{e\tau}$ and $\varepsilon_{e\mu}$. We find that the
present indication of NO in the standard 3-flavor framework completely
vanishes in the presence of NSI of the flavor-changing type involving the
$e-\tau$ flavors.
|
The current Siamese network based on region proposal network (RPN) has
attracted great attention in visual tracking due to its excellent accuracy and
high efficiency. However, the design of the RPN involves the selection of the
number, scale, and aspect ratios of anchor boxes, which will affect the
applicability and convenience of the model. Furthermore, these anchor boxes
require complicated calculations, such as calculating their
intersection-over-union (IoU) with ground truth bounding boxes. Due to the
problems related to anchor boxes, we propose a simple yet effective anchor-free
tracker (named Siamese corner networks, SiamCorners), which is end-to-end
trained offline on large-scale image pairs. Specifically, we introduce a
modified corner pooling layer to convert the bounding box estimate of the
target into a pair of corner predictions (the bottom-right and the top-left
corners). By tracking a target as a pair of corners, we avoid the need to
design the anchor boxes. This will make the entire tracking algorithm more
flexible and simple than anchor-based trackers. In our network design, we
further introduce a layer-wise feature aggregation strategy that enables the
corner pooling module to predict multiple corners for a tracking target in deep
networks. We then introduce a new penalty term that is used to select an
optimal tracking box in these candidate corners. Finally, SiamCorners achieves
experimental results that are comparable to state-of-the-art trackers while
maintaining a high running speed. In particular, SiamCorners achieves a 53.7%
AUC on NFS30 and a 61.4% AUC on UAV123, while still running at 42 frames per
second (FPS).
|
Information exchange is a crucial component of many real-world multi-agent
systems. However, the communication between the agents involves two major
challenges: the limited bandwidth, and the shared communication medium between
the agents, which restricts the number of agents that can simultaneously
exchange information. While both of these issues need to be addressed in
practice, the impact of the latter problem on the performance of the
multi-agent systems has often been neglected. This becomes even more important
when the agents' information or observations have different importance, in
which case the agents require different priorities for accessing the medium and
sharing their information. Representing the agents' priorities by fairness
weights and normalizing each agent's share by the assigned fairness weight, the
goal can be expressed as equalizing the agents' normalized shares of the
communication medium. To achieve this goal, we adopt a queueing theoretic
approach and propose a distributed fair scheduling algorithm for providing
weighted fairness in single-hop networks. Our proposed algorithm guarantees an
upper bound on the normalized share disparity between any pair of agents. This
can particularly improve the short-term fairness, which is important in
real-time applications. Moreover, our scheduling algorithm adjusts itself
dynamically to achieve a high throughput at the same time. The simulation
results validate our claims and comparisons with the existing methods show our
algorithm's superiority in providing short-term fairness, while achieving a
high throughput.
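A centralized toy version of the weighted-fairness objective may help fix ideas: each slot is granted to the agent whose normalized share (service count divided by its fairness weight) currently lags the most. This is only an illustration of the target allocation, not the distributed queueing-theoretic algorithm proposed in the paper.

```python
# Toy scheduler: equalize normalized shares served[a] / weights[a].
def weighted_fair_schedule(weights, n_slots):
    served = [0] * len(weights)
    for _ in range(n_slots):
        # grant the medium to the agent with the smallest normalized share
        i = min(range(len(weights)), key=lambda a: served[a] / weights[a])
        served[i] += 1
    return served

print(weighted_fair_schedule([3, 2, 1], 600))  # -> [300, 200, 100]
```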
|
We show that quantum state tomography with perfect knowledge of the
measurement apparatus proves to be, in some instances, inferior to strategies
discarding all information about the measurement at hand, as in the case of
data pattern tomography. In those scenarios, the larger uncertainty about the
measurement is traded for the smaller uncertainty about the reconstructed
signal. This effect is more pronounced for minimal or nearly minimal
informationally complete measurement settings, which are of utmost practical
importance.
|
This letter gives credit to a pioneering paper that is almost unknown to the
scientific community. On the basis of Transmission Electron Microscopy images
and X-ray Diffraction patterns of carbon multi-layer tubular crystals, the
authors suggested a model of nanotube structure formation and a hypothesis on
the various chiralities of carbon nanotubes.
|
Subspace-valued functions arise in a wide range of problems, including
parametric reduced order modeling (PROM). In PROM, each parameter point can be
associated with a subspace, which is used for Petrov-Galerkin projections of
large system matrices. Previous efforts to approximate such functions use
interpolations on manifolds, which can be inaccurate and slow. To tackle this,
we propose a novel Bayesian nonparametric model for subspace prediction: the
Gaussian Process Subspace regression (GPS) model. This method is extrinsic and
intrinsic at the same time: with multivariate Gaussian distributions on the
Euclidean space, it induces a joint probability model on the Grassmann
manifold, the set of fixed-dimensional subspaces. The GPS adopts a simple yet
general correlation structure, and a principled approach for model selection.
Its predictive distribution admits an analytical form, which allows for
efficient subspace prediction over the parameter space. For PROM, the GPS
provides a probabilistic prediction at a new parameter point that retains the
accuracy of local reduced models, at a computational complexity that does not
depend on system dimension, and thus is suitable for online computation. We
give four numerical examples to compare our method to subspace interpolation,
as well as two methods that interpolate local reduced models. Overall, GPS is
the most data efficient, more computationally efficient than subspace
interpolation, and gives smooth predictions with uncertainty quantification.
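For orientation, the sketch below shows the Grassmannian setting GPS operates on (not the GPS model itself): each parameter point is associated with a subspace via an orthonormal basis from an SVD of snapshot data, and subspaces are compared through principal angles. The snapshot matrices here are random placeholders.

```python
# Subspaces as points on a Grassmann manifold, compared by principal angles.
import numpy as np
from scipy.linalg import subspace_angles

def local_basis(snapshots, r):
    # orthonormal basis of the leading r-dimensional subspace (POD)
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

rng = np.random.default_rng(0)
A = local_basis(rng.normal(size=(50, 20)), r=5)  # subspace at parameter p1
B = local_basis(rng.normal(size=(50, 20)), r=5)  # subspace at parameter p2
theta = subspace_angles(A, B)                    # principal angles
print("Grassmann geodesic distance:", np.linalg.norm(theta))
```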
|
Scheduling computational tasks represented by directed acyclic graphs (DAGs)
is challenging because of its complexity. Conventional scheduling algorithms
rely heavily on simple heuristics such as shortest job first (SJF) and critical
path (CP), and are often lacking in scheduling quality. In this paper, we
present a novel learning-based approach to scheduling DAG tasks. The algorithm
employs a reinforcement learning agent to iteratively add directed edges to the
DAG, one at a time, to enforce ordering (i.e., priorities of execution and
resource allocation) of "tricky" job nodes. By doing so, the original DAG
scheduling problem is dramatically reduced to a much simpler proxy problem, on
which heuristic scheduling algorithms such as SJF and CP can be efficiently
improved. Our approach can be easily applied to any existing heuristic
scheduling algorithms. On the benchmark dataset of TPC-H, we show that our
learning-based approach can significantly improve over popular heuristic
algorithms and consistently achieves the best performance among several methods
under a variety of settings.
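To illustrate one of the baseline heuristics the learned agent builds on, the snippet below computes critical-path (CP) priorities for a toy task DAG: each node's rank is the longest remaining path of task durations, and nodes are scheduled in decreasing rank. The graph and durations are made up for illustration.

```python
# Critical-path ranks for a small, hypothetical task DAG.
from functools import lru_cache

durations = {"a": 3, "b": 2, "c": 4, "d": 1}
children = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

@lru_cache(maxsize=None)
def cp_rank(node):
    # longest remaining path starting at `node`, including its own duration
    return durations[node] + max((cp_rank(c) for c in children[node]), default=0)

priority = sorted(durations, key=cp_rank, reverse=True)
print([(n, cp_rank(n)) for n in priority])  # schedule the highest CP rank first
```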
|
Fused deposition modeling is one of the most rapidly developing 3D printing
techniques, with numerous applications, including in the field of applied
electrochemistry. Here, utilization of conductive polylactic acid (C-PLA) for
3D printouts is the most promising, due to its biodegradability, commercial
availability, and ease of processing. To use C-PLA as an electrode material, an
activation process must be performed, removing the polymer matrix and
uncovering the electroactive filler. The most popular chemical or
electrochemical activation routes are carried out in solvents. In this manuscript, we
present a novel, alternative approach towards C-PLA activation with Nd:YAG
(lambda = 1064 nm) laser ablation. We present and discuss the activation
efficiency based on various laser source operating conditions, and the gas
matrix. The XPS, contact angle, and Raman analyses were performed for
evaluation of the surface chemistry and to discuss the mechanism of the
activation process. The ablation process carried out in the inert gas matrix
(helium) delivers a highly electroactive C-PLA electrode surface, while the
resultant charge transfer process is hindered when activated in the air. This
is due to thermally induced oxide layers formation. The electroanalytical
performance of laser-treated C-PLA in He atmosphere was confirmed through
caffeine detection, offering detection limits of 0.49 and 0.40 microM (S/N = 3)
based on CV and DPV studies, respectively.
|
The radio properties of supernovae strongly depend on their circumstellar
environment, and they are an important probe of the mass loss of supernova
progenitors. Recently, core-collapse supernova observations in radio
have been assembled and the rise time and peak luminosity distribution of
core-collapse supernovae at 8.4 GHz has been estimated. In this paper, we
constrain the mass-loss prescriptions for red supergiants by using the rise
time and peak luminosity distribution of Type II supernovae in radio. We take
the de Jager and van Loon mass-loss rates for red supergiants, calculate the
rise time and peak luminosity distribution based on them, and compare the
results with the observed distribution. We found that the de Jager mass-loss
rate explains the widely spread radio rise time and peak luminosity
distribution of Type II supernovae well, while the van Loon mass-loss rate
predicts a relatively narrow range for the rise time and peak luminosity. We
conclude that the mass-loss prescriptions of red supergiants should have strong
dependence on the luminosity as in the de Jager mass-loss rate to reproduce the
widely spread distribution of the rise time and peak luminosity in radio
observed in Type II supernovae.
|
gComm is a step towards developing a robust platform to foster research in
grounded language acquisition in a more challenging and realistic setting. It
comprises a 2-d grid environment with a set of agents (a stationary speaker and
a mobile listener connected via a communication channel) exposed to a
continuous array of tasks in a partially observable setting. The key to solving
these tasks lies in agents developing linguistic abilities and utilizing them
for efficiently exploring the environment. The speaker and listener have access
to information provided in different modalities, i.e. the speaker's input is a
natural language instruction that contains the target and task specifications
and the listener's input is its grid-view. Each must rely on the other to
complete the assigned task, however, the only way they can achieve the same, is
to develop and use some form of communication. gComm provides several tools for
studying different forms of communication and assessing their generalization.
|
Autonomous driving highly depends on capable sensors to perceive the
environment and to deliver reliable information to the vehicles' control
systems. To increase its robustness, a diversified set of sensors is used,
including radar sensors. Radar provides a vital contribution of sensory
information, offering high-resolution range as well as velocity measurements.
The increased
use of radar sensors in road traffic introduces new challenges. As the so far
unregulated frequency band becomes increasingly crowded, radar sensors suffer
from mutual interference between multiple radar sensors. This interference must
be mitigated in order to ensure a high and consistent detection sensitivity. In
this paper, we propose the use of Complex-Valued Convolutional Neural Networks
(CVCNNs) to address the issue of mutual interference between radar sensors. We
extend previously developed methods to the complex domain in order to process
radar data according to its physical characteristics. This not only increases
data efficiency, but also improves the conservation of phase information during
filtering, which is crucial for further processing, such as angle estimation.
Our experiments show that the use of CVCNNs increases data efficiency, speeds
up network training and substantially improves the conservation of phase
information during interference removal.
|
Using a total of $5.25~{\rm fb}^{-1}$ of $e^{+}e^{-}$ collision data with
center-of-mass energies from 4.236 to 4.600 GeV, we report the first
observation of the process $e^{+}e^{-}\to \eta\psi(2S)$ with a statistical
significance of $5\sigma$. The data sets were collected by the BESIII detector
operating at the BEPCII storage ring. We measure the yield of events integrated
over center-of-mass energies and also present the energy dependence of the
measured cross section.
|
The ongoing trend of moving data and computation to the cloud is met with
concerns regarding privacy and protection of intellectual property. Cloud
Service Providers (CSP) must be fully trusted to not tamper with or disclose
processed data, hampering adoption of cloud services for many sensitive or
critical applications. As a result, CSPs and CPU manufacturers are rushing to
find solutions for secure outsourced computation in the Cloud. While enclaves,
like Intel SGX, are strongly limited in terms of throughput and size, AMD's
Secure Encrypted Virtualization (SEV) offers hardware support for transparently
protecting code and data of entire VMs, thus removing the performance, memory
and software adaptation barriers of enclaves. Through attestation of boot code
integrity and means for securely transferring secrets into an encrypted VM,
CSPs are effectively removed from the list of trusted entities. There have been
several attacks on the security of SEV, by abusing I/O channels to encrypt and
decrypt data, or by moving encrypted code blocks at runtime. Yet, none of these
attacks have targeted the attestation protocol, the core of the secure
computing environment created by SEV. We show that the current attestation
mechanism of Zen 1 and Zen 2 architectures has a significant flaw, allowing us
to manipulate the loaded code without affecting the attestation outcome. An
attacker may abuse this weakness to inject arbitrary code at startup -- and
thus take control over the entire VM execution, without any indication to the
VM's owner. Our attack primitives allow the attacker to do extensive
modifications to the bootloader and the operating system, like injecting spy
code or extracting secret data. We present a full end-to-end attack, from the
initial exploit to leaking the key of the encrypted disk image during boot,
giving the attacker unthrottled access to all of the VM's persistent data.
|
We prove the well-posedness of entropy solutions for a wide class of nonlocal
transport equations with nonlinear mobility in one spatial dimension. The
solution is obtained as the limit of approximations constructed via a
deterministic system of interacting particles that exhibits a gradient flow
structure. At the same time, we expose a rigorous gradient flow structure for
this class of equations in terms of an Energy-Dissipation balance, which we
obtain via the asymptotic convergence of functionals.
|
In autonomous microgrids, frequency regulation (FR) is a critical issue,
especially with a high level of penetration of photovoltaic (PV) generation.
In this study, a novel virtual synchronous generator (VSG) control for PV
generation is introduced to provide frequency support without energy storage.
PV generation reserves a part of the active power in accordance with the
pre-defined power-versus-voltage curve. Based on the similarities between the
synchronous generator power-angle characteristic curve and the PV array
characteristic curve, the PV voltage Vpv can be analogized to the power angle
{\delta}. An emulated governor (droop control) and a swing equation control
are designed and applied to the DC-DC converter. The PV voltage deviation is
subsequently generated, and the pre-defined power-versus-voltage curve is
modified to provide primary frequency and inertia support. A simulation
model of an autonomous microgrid with PV, storage, and diesel generator was
built. The feasibility and effectiveness of the proposed VSG strategy are
examined under different operating conditions.
|
One of the paramount advantages of multi-level cache-enabled (MLCE) networks
is pushing content closer to the network edge and proactively caching it at
multiple transmitters (i.e., small base-stations (SBSs), unmanned aerial
vehicles (UAVs), and cache-enabled device-to-device (CE-D2D) users). As such,
the fronthaul congestion between a core network and a large number of
transmitters is alleviated. For this objective, we exploit network coding (NC)
to schedule a set of users to the same transmitter. Focusing on this, we
consider the throughput maximization problem that optimizes jointly the
network-coded user scheduling and power allocation, subject to fronthaul
capacity, transmit power, and NC constraints. Given the intractability of the
problem, we decouple it into two separate subproblems. In the first subproblem,
we consider the network-coded user scheduling problem for the given power
allocation, while in the second subproblem, we use the NC resulting user
schedule to optimize the power levels. We design an innovative
\textit{two-layered rate-aware NC (RA-IDNC)} graph to solve the first
subproblem and evaluate the second subproblem using an iterative function
evaluation (IFE) approach. Simulation results are presented to depict the
throughput gain of the proposed approach over the existing solutions.
|
Latent alignment objectives such as CTC and AXE significantly improve
non-autoregressive machine translation models. Can they improve autoregressive
models as well? We explore the possibility of training autoregressive machine
translation models with latent alignment objectives, and observe that, in
practice, this approach results in degenerate models. We provide a theoretical
explanation for these empirical results, and prove that latent alignment
objectives are incompatible with teacher forcing.
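For readers unfamiliar with latent alignment objectives, the PyTorch snippet below shows CTC (one of the two objectives named above; AXE has no standard library implementation) marginalizing over all monotonic alignments between output frames and a shorter target. It is a generic illustration, not the paper's training setup.

```python
# Generic CTC example; shapes and values are illustrative only.
import torch

T, N, C, S = 12, 2, 5, 4                  # frames, batch, classes, target length
logits = torch.randn(T, N, C, requires_grad=True)
log_probs = logits.log_softmax(dim=-1)    # (T, N, C); class 0 is the blank
targets = torch.randint(1, C, (N, S))     # target labels exclude the blank index
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

loss = torch.nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()                            # gradients flow through all alignments
print(loss.item())
```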
|
Graph convolutional networks (GCNs) are widely used in graph-based
applications such as graph classification and segmentation. However, current
GCNs have implementation limitations, such as constraints on network
architectures, due to their irregular inputs. In contrast, convolutional
neural networks (CNNs) are
capable of extracting rich features from large-scale input data, but they do
not support general graph inputs. To bridge the gap between GCNs and CNNs, in
this paper we study the problem of how to effectively and efficiently map
general graphs to 2D grids that CNNs can be directly applied to, while
preserving graph topology as much as possible. We therefore propose two novel
graph-to-grid mapping schemes, namely, {\em graph-preserving grid layout
(GPGL)} and its extension {\em Hierarchical GPGL (H-GPGL)} for computational
efficiency. We formulate the GPGL problem as integer programming and further
propose an approximate yet efficient solver based on a penalized Kamada-Kawai
method, a well-known optimization algorithm in 2D graph drawing. We propose a
novel vertex separation penalty that encourages graph vertices to lay on the
grid without any overlap. With this image representation, even extra 2D
max-pooling layers contribute to PointNet, a widely applied point-based
neural network. We demonstrate the empirical success of GPGL on general graph
classification with small graphs and H-GPGL on 3D point cloud segmentation with
large graphs, based on 2D CNNs including VGG16, ResNet50 and multi-scale maxout
(MSM) CNN.
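A deliberately crude sketch of the graph-to-grid idea follows: compute a continuous Kamada-Kawai layout with networkx and round vertices to grid cells, nudging collisions apart. The actual GPGL solver instead folds a vertex-separation penalty into the Kamada-Kawai objective; this post-hoc rounding only conveys the goal.

```python
# Naive graph-to-grid mapping: Kamada-Kawai layout + rounding to cells.
import networkx as nx

def graph_to_grid(G, scale=4):
    pos = nx.kamada_kawai_layout(G)   # continuous 2D layout
    taken, grid = set(), {}
    for v, (x, y) in pos.items():
        cell = (int(round(x * scale)), int(round(y * scale)))
        while cell in taken:          # resolve overlaps by shifting right
            cell = (cell[0] + 1, cell[1])
        taken.add(cell)
        grid[v] = cell
    return grid

print(graph_to_grid(nx.karate_club_graph()))
```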
|
The input of almost every machine learning algorithm targeting the properties
of matter at the atomic scale involves a transformation of the list of
Cartesian atomic coordinates into a more symmetric representation. Many of the
most popular representations can be seen as an expansion of the
symmetrized correlations of the atom density, and differ mainly by the choice
of basis. Here we discuss how to build an adaptive, optimal numerical basis
that is chosen to represent most efficiently the structural diversity of the
dataset at hand. For each training dataset, this optimal basis is unique, and
can be computed at no additional cost with respect to the primitive basis by
approximating it with splines. We demonstrate that this construction yields
representations that are accurate and computationally efficient, presenting
examples that involve both molecular and condensed-phase machine-learning
models.
|
We formulate a plausible conjecture for the optimal Ehrhard-type inequality
for convex symmetric sets with respect to the Gaussian measure. Namely, letting
$J_{k-1}(s)=\int^s_0 t^{k-1} e^{-\frac{t^2}{2}}dt$ and
$c_{k-1}=J_{k-1}(+\infty)$, we conjecture that the function
$F:[0,1]\rightarrow\mathbb{R},$ given by $$F(a)= \sum_{k=1}^n 1_{a\in
E_k}\cdot(\beta_k J_{k-1}^{-1}(c_{k-1} a)+\alpha_k)$$ (with an appropriate
choice of a decomposition $[0,1]=\cup_{i} E_i$ and coefficients $\alpha_i,
\beta_i$) satisfies, for all symmetric convex sets $K$ and $L,$ and any
$\lambda\in[0,1]$, $$ F\left(\gamma(\lambda K+(1-\lambda)L)\right)\geq \lambda
F\left(\gamma(K)\right)+(1-\lambda) F\left(\gamma(L)\right). $$ We explain that
this conjecture is ``the most optimistic possible'', and is equivalent to the
fact that for any symmetric convex set $K,$ its \emph{Gaussian concavity power}
$p_s(K,\gamma)$ is greater than or equal to $p_s(RB^k_2\times
\mathbb{R}^{n-k},\gamma),$ for some $k\in \{1,...,n\}$. We call the sets
$RB^k_2\times \mathbb{R}^{n-k}$ round $k$-cylinders; they also appear as the
conjectured Gaussian isoperimetric minimizers for symmetric sets, see Heilman
\cite{Heilman}. In this manuscript, we make progress towards this question, and
prove a certain inequality for which the round $k$-cylinders are the only
equality cases. As an auxiliary result on the way to the equality case characterization,
we characterize the equality cases in the ``convex set version'' of the
Brascamp-Lieb inequality, and moreover, obtain a quantitative stability version
in the case of the standard Gaussian measure; this may be of independent
interest.
|
Federated learning (FL) is a distributed machine learning architecture that
leverages a large number of workers to jointly learn a model with decentralized
data. FL has received increasing attention in recent years thanks to its data
privacy protection, communication efficiency and a linear speedup for
convergence in training (i.e., convergence performance increases linearly with
respect to the number of workers). However, existing studies on linear speedup
for convergence are only limited to the assumptions of i.i.d. datasets across
workers and/or full worker participation, both of which rarely hold in
practice. So far, it remains an open question whether or not the linear speedup
for convergence is achievable under non-i.i.d. datasets with partial worker
participation in FL. In this paper, we show that the answer is affirmative.
Specifically, we show that the federated averaging (FedAvg) algorithm (with
two-sided learning rates) on non-i.i.d. datasets in non-convex settings
achieves a convergence rate $\mathcal{O}(\frac{1}{\sqrt{mKT}} + \frac{1}{T})$
for full worker participation and a convergence rate
$\mathcal{O}(\frac{\sqrt{K}}{\sqrt{nT}} + \frac{1}{T})$ for partial worker
participation, where $K$ is the number of local steps, $T$ is the number of
total communication rounds, $m$ is the total number of workers, and $n$ is the
number of workers participating in one communication round under partial
worker participation.
Our results also reveal that the local steps in FL could help the convergence
and show that the maximum number of local steps can be improved to $T/m$ in
full worker participation. We conduct extensive experiments on MNIST and
CIFAR-10 to verify our theoretical results.
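A schematic of FedAvg with two-sided learning rates and partial participation, on toy quadratic objectives, is given below; it only shows the update structure ($K$ local steps, a server step size applied to averaged client updates) and makes no claim about the paper's analysis or experiments.

```python
# FedAvg sketch: m workers, n sampled per round, K local steps,
# local rate eta_l and server rate eta_s (the "two-sided" rates).
import numpy as np

rng = np.random.default_rng(0)
m, n, K, T = 20, 5, 10, 200
eta_l, eta_s = 0.05, 1.0
opt = rng.normal(size=(m, 3))        # worker i minimizes 0.5 * ||x - opt[i]||^2

x = np.zeros(3)
for _ in range(T):
    sampled = rng.choice(m, size=n, replace=False)   # partial participation
    deltas = []
    for i in sampled:
        xi = x.copy()
        for _ in range(K):            # K local gradient steps on worker i
            xi -= eta_l * (xi - opt[i])
        deltas.append(xi - x)
    x += eta_s * np.mean(deltas, axis=0)   # server applies averaged update
print(x, opt.mean(axis=0))            # x approaches the mean of worker optima
```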
|
We consider a rather general class of multi-level optimization problems,
where a convex objective function is to be minimized subject to constraints
involving optima of a nested convex optimization problem. As a special case, we consider
a trilevel optimization problem, where the objective of the two lower layers
consists of a sum of a smooth and a non-smooth term. Based on fixed-point
theory and related arguments, we present a natural first-order algorithm and
analyze its convergence and rates of convergence in several regimes of
parameters.
|
The aim of this paper is to generalize results known for the symplectic
involutions on K3 surfaces to the order 3 symplectic automorphisms on K3
surfaces. In particular, we will explicitly describe the action induced on the
lattice $\Lambda_{K3}$, isometric to the second cohomology group of a K3
surface, by a symplectic automorphism of order 3; we exhibit the maps $\pi_*$
and $\pi^*$ induced in cohomology by the rational quotient map
$\pi:X\dashrightarrow Y$, where $X$ is a K3 surface admitting an order 3
symplectic automorphism $\sigma$ and $Y$ is the minimal resolution of the
quotient $X/\sigma$; we deduce the relation between the N\'eron--Severi group
of $X$ and the one of $Y$. Applying these results we describe explicit
geometric examples and generalize the Shioda--Inose structures, relating
Abelian surfaces admitting order 3 endomorphisms with certain specific K3
surfaces admitting particular order 3 symplectic automorphisms.
|
This book develops an effective theory approach to understanding deep neural
networks of practical relevance. Beginning from a first-principles
component-level picture of networks, we explain how to determine an accurate
description of the output of trained networks by solving layer-to-layer
iteration equations and nonlinear learning dynamics. A main result is that the
predictions of networks are described by nearly-Gaussian distributions, with
the depth-to-width aspect ratio of the network controlling the deviations from
the infinite-width Gaussian description. We explain how these effectively-deep
networks learn nontrivial representations from training and more broadly
analyze the mechanism of representation learning for nonlinear models. From a
nearly-kernel-methods perspective, we find that the dependence of such models'
predictions on the underlying learning algorithm can be expressed in a simple
and universal way. To obtain these results, we develop the notion of
representation group flow (RG flow) to characterize the propagation of signals
through the network. By tuning networks to criticality, we give a practical
solution to the exploding and vanishing gradient problem. We further explain
how RG flow leads to near-universal behavior and lets us categorize networks
built from different activation functions into universality classes.
Altogether, we show that the depth-to-width ratio governs the effective model
complexity of the ensemble of trained networks. By using information-theoretic
techniques, we estimate the optimal aspect ratio at which we expect the network
to be practically most useful and show how residual connections can be used to
push this scale to arbitrary depths. With these tools, we can learn in detail
about the inductive bias of architectures, hyperparameters, and optimizers.
|
Programming heterogeneous systems efficiently is a major challenge, due to
the complexity of their architectures. Intel oneAPI, a new and powerful
standards-based unified programming model, built on top of SYCL, addresses
these issues. In this paper, oneAPI is provided with co-execution strategies to
run the same kernel between different devices, enabling the exploitation of
static and dynamic policies. On top of that, static and dynamic load-balancing
algorithms are integrated and analyzed.
This work evaluates the performance and energy efficiency for a well-known
set of regular and irregular HPC benchmarks, using an integrated GPU and CPU.
Experimental results show that co-execution is worthwhile when using dynamic
algorithms, improving efficiency even more when using unified shared memory.
|
Open Radio Access Network (ORAN) is being developed with an aim to
democratise access and lower the cost of future mobile data networks,
supporting network services with various QoS requirements, such as massive IoT
and URLLC. In ORAN, network functionality is dis-aggregated into remote units
(RUs), distributed units (DUs) and central units (CUs), which allows flexible
software on Commercial-Off-The-Shelf (COTS) deployments. Furthermore, the
mapping of variable RU requirements to local mobile edge computing centres for
future centralized processing would significantly reduce the power consumption
in cellular networks. In this paper, we study the RU-DU resource assignment
problem in an ORAN system, modelled as a 2D bin packing problem. A deep
reinforcement learning-based self-play approach is proposed to achieve
efficient RU-DU resource management, with AlphaGo Zero inspired neural
Monte-Carlo Tree Search (MCTS). Experiments on representative 2D bin packing
environment and real sites data show that the self-play learning strategy
achieves intelligent RU-DU resource assignment for different network
conditions.
|
This paper presents a novel solution paradigm of general optimization under
both exogenous and endogenous uncertainties. This solution paradigm consists of
a probability distribution (PD)-free method of obtaining deterministic
equivalents and an innovative approach of scenario reduction. First, dislike
the existing methods that use scenarios sampled from pre-known PD functions,
the PD-free method uses historical measurements of uncertain variables as input
to convert the logical models into a type of deterministic equivalents called
General Scenario Program (GSP). Our contributions to the PD-free deterministic
equivalent construction reside in generalization (making it applicable to
general optimization under uncertainty rather than just chance-constrained
optimization) and extension (enabling it to handle problems under endogenous
uncertainty via the development of iterative and non-iterative frameworks).
Second, this paper reveals some unknown properties of the PD-free deterministic
equivalent construction, such as the characteristics of active scenarios and
repeated scenarios. Based on these discoveries, we propose a concept and methods
of strategic scenario selection which can effectively reduce the required
number of scenarios as demonstrated in both mathematical analysis and numerical
experiments. Numerical experiments are conducted on two typical smart grid
optimization problems under exogenous and endogenous uncertainties.
|
The detection of spatial or temporal variations in very thin samples has
important applications in the biological sciences. For example, cellular
membranes exhibit changes in lipid composition and order, which in turn
modulate their function in space and time. Simultaneous measurement of
thickness and refractive index would be one way to observe these variations,
yet doing it noninvasively remains an elusive goal. Here we present a
microscopic-imaging technique to simultaneously measure the thickness and
refractive index of thin layers in a spatially resolved manner using
reflectometry. The heterodyne-detected interference between a light field
reflected by the sample and a reference field allows measurement of the
amplitude and phase of the reflected field and thus determination of the
complex reflection coefficient. Comparing the results with the simulated
reflection of a thin layer under coherent illumination of high numerical
aperture by the microscope objective, the refractive index and thickness of the
layer can be determined. We present results on a layer of polyvinylacetate
(PVA) with a thickness of approximately 80~nm. These results have a precision
better than 10\% in the thickness and better than 1\% in the refractive index
and are consistent within error with measurements by quantitative differential
interference contrast (qDIC) and literature values. We discuss the significance
of these results, and the possibility of performing accurate measurements on
nanometric layers. Notably, the shot-noise limit of the technique is below
0.5~nm in thickness and 0.0005 in refractive index for millisecond measurement
times.
|
We consider the scenario where human-driven/autonomous vehicles with low/high
occupancy are sharing a segment of highway and autonomous vehicles are capable
of increasing the traffic throughput by preserving a shorter headway than
human-driven vehicles. We propose a toll lane framework where a lane on the
highway is reserved freely for autonomous vehicles with high occupancy, which
have the greatest capability to increase social mobility, and the other three
classes of vehicles can choose to use the toll lane with a toll or use the
other regular lanes freely. All vehicles are assumed to be only interested in
minimizing their own travel costs. We explore the resulting lane choice
equilibria under the framework and establish desirable properties of the
equilibria, which implicitly compare high-occupancy vehicles with autonomous
vehicles in terms of their capabilities to increase social mobility. We further
use numerical examples in the optimal toll design, the occupancy threshold
design, and the policy design problems to clarify the various potential
applications of this toll lane framework that unites high-occupancy vehicles
and autonomous vehicles. To the best of our knowledge, this is the first work that
systematically studies a toll lane framework that unites autonomous vehicles
and high-occupancy vehicles on the roads.
|
We explore explicit virtual resolutions, as introduced by Berkesch, Erman,
and Smith, for ideals of sets of points in $\mathbb{P}^1 \times \mathbb{P}^1$.
Specifically, we describe a virtual resolution for a sufficiently general set
of points $X$ in $\mathbb{P}^1 \times \mathbb{P}^1$ that only depends on $|X|$.
We also improve an existence result of Berkesch, Erman, and Smith in the
special case of points in $\mathbb{P}^1 \times \mathbb{P}^1$; more precisely,
we give an effective bound for their construction that gives a virtual
resolution of length two for any set of points in $\mathbb{P}^1 \times
\mathbb{P}^1$.
|
Many defenses have emerged with the development of adversarial attacks.
Models must be objectively evaluated accordingly. This paper systematically
tackles this concern by proposing a new parameter-free benchmark we coin RoBIC.
RoBIC fairly evaluates the robustness of image classifiers using a new
half-distortion measure. It gauges the robustness of the network against white
and black box attacks, independently of its accuracy. RoBIC is faster than the
other available benchmarks. We present the significant differences in the
robustness of 16 recent models as assessed by RoBIC.
|
The automorphism groups of the Fano-Mukai fourfolds of genus 10 were studied
in our previous paper [arXiv:1706.04926]. In particular, we found in
[arXiv:1706.04926] the neutral components of these groups. In the present paper
we finish the description of the discrete parts.
Up to isomorphism, there are two special Fano-Mukai fourfolds of genus 10
with the automorphism groups $GL_2(k)\rtimes\mathbb{Z}/2\mathbb{Z}$ and
$(\mathbb{G}_a\times\mathbb{G}_m)\rtimes\mathbb{Z}/2\mathbb{Z}$, respectively.
For any other Fano-Mukai fourfold $V$ of genus 10 one has
$\mathrm{Aut}(V)=\mathbb{G}_m^2\rtimes \mathbb{Z}/2\mathbb{Z}$, except for
exactly one of them with $\mathrm{Aut}(V)=\mathbb{G}_m^2\rtimes \mathbb{Z}/6
\mathbb{Z}$.
|
Piezo ion channels underlie many forms of mechanosensation in vertebrates,
and have been found to bend the membrane into strongly curved dome shapes. We
develop here a methodology describing the self-assembly of lipids and Piezo
into polyhedral bilayer vesicles. We validate this methodology for bilayer
vesicles formed from bacterial mechanosensitive channels of small conductance,
for which experiments found a polyhedral arrangement of proteins with snub cube
symmetry and a well-defined characteristic vesicle size. On this basis, we
calculate the self-assembly diagram for polyhedral bilayer vesicles formed from
Piezo. We find that the radius of curvature of the Piezo dome provides a
critical control parameter for the self-assembly of Piezo vesicles: with
increasing Piezo dome radius of curvature, vesicles with octahedral,
icosahedral, and snub cube symmetry occur in high abundance.
|
Heart rate variability (HRV), defined as the variability between consecutive
heartbeats, is a surrogate measure of cardiac vagal tone. It is widely accepted
that a decreased HRV is associated with several risk factors and cardiovascular
diseases. However, a possible association between HRV and altered cerebral
hemodynamics is still debated, suffering from HRV short-term measures and the
paucity of high-resolution deep cerebral data. We propose a computational
approach to evaluate the deep cerebral and central hemodynamics subject to
physiological alterations of HRV in an ideal young healthy patient at rest. The
cardiovascular-cerebral model was validated and recently exploited to
understand the hemodynamic mechanisms linking cardiac arrhythmia and cognitive
deficit. Three configurations (baseline, increased HRV, and decreased HRV) are
built based on the standard deviation (SDNN) of RR beats. In the cerebral
circulation, our results show that HRV has overall a stronger impact on
pressure than flow rate mean values but similarly alters pressure and flow rate
in terms of extreme events. Comparing reduced and increased HRV, the latter
induces a higher probability of altered mean and extreme values, and is
therefore more detrimental at the distal cerebral level. On the contrary, at
the central level a decreased HRV induces a higher cardiac effort without improving
the mechano-contractile performance, thus overall reducing the heart
efficiency. Present results suggest that: (i) the increase of HRV per se does
not seem to be sufficient to trigger a better cerebral hemodynamic response;
(ii) by accounting for both central and cerebral circulations, the optimal HRV
configuration is found at baseline. Given the relation inversely linking HRV
and HR, the presence of this optimal condition can contribute to explain why
the mean HR of the general population settles around the baseline value (70
bpm).
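For reference, here is a minimal sketch of the SDNN statistic used above to define the three configurations; the synthetic RR-interval series is purely illustrative.

```python
import numpy as np

# Minimal illustrative sketch: SDNN is the standard deviation of the RR
# (beat-to-beat) intervals. Synthetic data only; not the paper's model.
rng = np.random.default_rng(42)
mean_rr = 60.0 / 70.0                   # mean RR interval [s] at ~70 bpm
rr = rng.normal(mean_rr, 0.05, 300)     # 300 synthetic beats, SDNN ~ 50 ms

sdnn_ms = rr.std(ddof=1) * 1000.0
print(f"mean HR = {60.0 / rr.mean():.1f} bpm, SDNN = {sdnn_ms:.1f} ms")
```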
|
This paper is concerned with the localized behaviors of the solution $u$ to
the Navier-Stokes equations near the potential singular points. We establish
the concentration rate for the $L^{p,\infty}$ norm of $u$ with $3\leq
p\leq\infty$. Namely, we show that if $z_0=(t_0,x_0)$ is a singular point, then
for any $r>0$, it holds that
\begin{align}
\limsup_{t\to t_0^-}\|u(t,x)-u(t)_{x_0,r}\|_{L^{3,\infty}(B_r(x_0))}>\delta^*,\notag
\end{align}
and
\begin{align}
\limsup_{t\to t_0^-}(t_0-t)^{\frac{1}{\mu}}r^{\frac{2}{\nu}-\frac{3}{p}}\|u(t)\|_{L^{p,\infty}(B_r(x_0))}>\delta^*
\quad\text{for } 3<p\leq\infty,\ \frac{1}{\mu}+\frac{1}{\nu}=\frac{1}{2},\ \text{and } 2\leq\nu\leq\frac{2}{3}p,\notag
\end{align}
where $\delta^*$ is a positive constant independent of $p$ and
$\nu$. Our main tools are some $\varepsilon$-regularity criteria in
$L^{p,\infty}$ spaces and an embedding theorem from the $L^{p,\infty}$ space into a
Morrey-type space. These are of independent interest.
|
Multiple Kernel Learning is a conventional way to learn the kernel function
in kernel-based methods. MKL algorithms enhance the performance of kernel
methods. However, these methods have lower complexity than deep
learning models and are inferior to them in terms of recognition
accuracy. Deep learning models can learn complex functions by applying
nonlinear transformations to data through several layers. In this paper, we
show that a typical MKL algorithm can be interpreted as a one-layer neural
network with linear activation functions. By this interpretation, we propose a
Neural Generalization of Multiple Kernel Learning (NGMKL), which extends the
conventional multiple kernel learning framework to a multi-layer neural network
with nonlinear activation functions. Our experiments on several benchmarks show
that the proposed method increases the model complexity of MKL algorithms and leads to
higher recognition accuracy.
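As a rough illustration of the interpretation above (not the authors' code), the sketch below builds an MKL-style feature map from several base kernels and fits a linear readout, i.e., a one-layer network with linear activation; an NGMKL-style generalization would replace this readout by a multi-layer nonlinear network. All names and data are placeholders.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    # K(x, z) = exp(-gamma * ||x - z||^2), evaluated pairwise
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mkl_features(X, anchors, gammas):
    # Concatenate evaluations of several base kernels against anchor points.
    return np.concatenate([rbf_kernel(X, anchors, g) for g in gammas], axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(float)        # toy nonlinear labels
Phi = mkl_features(X, anchors=X[:20], gammas=[0.1, 1.0, 10.0])

# One layer, linear activation = conventional MKL fit (ridge least squares).
w = np.linalg.solve(Phi.T @ Phi + 1e-2 * np.eye(Phi.shape[1]), Phi.T @ y)
print("training accuracy:", ((Phi @ w > 0.5) == y.astype(bool)).mean())
```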
|
The density power divergence (DPD) and related measures have produced many
useful statistical procedures which provide a good balance between model
efficiency on one hand, and outlier stability or robustness on the other. The
large number of citations received by the original DPD paper (Basu et al.,
1998) and its many demonstrated applications indicate the popularity of these
divergences and the related methods of inference. The estimators that are
derived from this family of divergences are all M-estimators where the defining
$\psi$ function is based explicitly on the form of the model density. The
success of the minimum divergence estimators based on the density power
divergence makes it imperative and meaningful to look for other, similar
divergences in the same spirit. The logarithmic density power divergence (Jones
et al., 2001), a logarithmic transform of the density power divergence, has
also been very successful in producing inference procedures with a high degree
of efficiency simultaneously with a high degree of robustness. This further
strengthens the motivation to look for statistical divergences that are
transforms of the density power divergence, or, alternatively, members of the
functional density power divergence class. This note characterizes the
functional density power divergence class, and thus identifies the available
divergence measures within this construct that may possibly be explored for
robust and efficient statistical inference.
|
Experiments observe an enhanced superconducting gap over impurities as
compared to the clean-bulk value. In order to shed more light on this
phenomenon, we perform simulations within the framework of Bogoliubov-de Gennes
theory applied to the attractive Hubbard model. The simulations qualitatively
reproduce the experimentally observed enhancement effect; it can be traced back
to an increased particle density in the metal close to the impurity site. In
addition, the simulations display significant differences between a thin (2D)
and a very thick (3D) film. In 2D, pronounced Friedel oscillations can be
observed; these decay much faster in 3D and are therefore more difficult to
resolve. This feature is also in qualitative agreement with the experiment.
|
Supermassive black hole (SMBH) binaries represent the main target for
missions such as the Laser Interferometer Space Antenna and Pulsar Timing
Arrays. The understanding of their dynamical evolution prior to coalescence is
therefore crucial to improving detection strategies and for the astrophysical
interpretation of the gravitational wave data. In this paper, we use
high-resolution $N$-body simulations to model the merger of two equal-mass
galaxies hosting a central SMBH. In our models, all binaries are initially
prograde with respect to the galaxy's sense of rotation. However, binaries that form
with a high eccentricity, $e\gtrsim 0.7$, quickly reverse their sense of
rotation and become almost perfectly retrograde at the moment of binary
formation. The evolution of these binaries proceeds towards larger
eccentricities, as expected for a binary hardening in a counter-rotating
stellar distribution. Binaries that form with lower eccentricities remain
prograde and at comparatively low eccentricities. We study the origin of the
orbital flip by using an analytical model that describes the early stages of
binary evolution. This model indicates that the orbital plane flip is due to
the torque from the triaxial background mass distribution that naturally arises
from the galactic merger process. Our results imply the existence of a
population of SMBH binaries with a high eccentricity and could have significant
implications for the detection of the gravitational wave signal emitted by
these systems.
|
In this work, we deal with the problem of rating in sports, where the skills
of the players/teams are inferred from the observed outcomes of the games. Our
focus is on the online rating algorithms which estimate the skills after each
new game by exploiting the probabilistic models of the relationship between the
skills and the game outcome. We propose a Bayesian approach which may be seen
as an approximate Kalman filter and which is generic in the sense that it can
be used with any skills-outcome model and can be applied in individual as
well as in group sports. We show how well-known algorithms (such as
the Elo, the Glicko, and the TrueSkill algorithms) may be seen as instances of
the one-fits-all approach we propose. In order to clarify the conditions under
which the gains of the Bayesian approach over the simpler solutions can
actually materialize, we critically compare the known and the new algorithms by
means of numerical examples using synthetic as well as empirical data.
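As a concrete special case of such online updates, here is a minimal sketch of the Elo rule (one of the algorithms named above): the skill estimate moves in proportion to the gap between the observed outcome and the predicted win probability. The K-factor value below is illustrative.

```python
# Illustrative sketch: the Elo update as a simple online skill estimator.
def elo_expected(r_a, r_b):
    # model's predicted win probability for player A
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=24.0):
    # move each rating by K times (observed result - expected result)
    e_a = elo_expected(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * (e_a - score_a)

r1, r2 = 1500.0, 1500.0
for result in [1, 1, 0, 1]:          # wins/losses of player 1
    r1, r2 = elo_update(r1, r2, result)
print(round(r1, 1), round(r2, 1))
```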
|
Our experiments demonstrate that alloying the cubic--phase YbN into the
wurtzite--phase AlN results in clear mechanical softening and enhanced
electromechanical coupling of AlN. First-principles calculations reproduce
experimental results well, and predict a maximum of 270% increase in
electromechanical coupling coefficient caused by 1) enhanced piezoelectric
response induced by the local strain of Yb ions and 2) structural flexibility
of the YbAlN alloy. Extensive calculations suggest that the substitutional
neighbor Yb--Yb pairs in wurtzite AlN are energetically stable along the c
axis and avoid forming on the basal plane of the wurtzite structure due to the
repulsion between them, which explains why YbAlN films with high Yb
concentrations are difficult to fabricate in our sputtering experiments. Moreover, the neighbor
Yb--Yb pair interactions also promote structural flexibility of YbAlN, and are
considered a cause for mechanical softening of YbAlN.
|
In the simplest game-theoretic formulation of Schelling's model of
segregation on graphs, agents of two different types each select their own
vertex in a given graph so as to maximize the fraction of agents of their
type in their occupied neighborhood. Two ways of modeling agent movement here
are to allow two agents to swap their vertices or to allow an agent to
jump to a free vertex. The contributions of this paper are twofold. First, we
prove that deciding the existence of a swap-equilibrium and a jump-equilibrium
in this simplest model of Schelling games is NP-hard, thereby answering
questions left open by Agarwal et al. [AAAI '20] and Elkind et al. [IJCAI '19].
Second, we introduce a measure for the robustness of equilibria in Schelling
games in terms of the minimum number of edges that need to be deleted to make
an equilibrium unstable. We prove tight lower and upper bounds on the
robustness of swap-equilibria in Schelling games on different graph classes.
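For concreteness, below is a small illustrative check of the swap-equilibrium condition in this simplest model (our sketch, assuming a swap requires both involved agents to strictly gain); the graph and type assignment are arbitrary examples.

```python
import itertools

# Toy sketch: utility = fraction of same-type agents in the neighborhood; a
# placement is a swap-equilibrium if no differently-typed pair can both
# strictly improve by exchanging vertices. Graph/types are illustrative.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
types = {0: "A", 1: "A", 2: "B", 3: "B", 4: "A", 5: "B"}

def utility(v, assign):
    nbrs = graph[v]
    return sum(assign[u] == assign[v] for u in nbrs) / len(nbrs)

def is_swap_equilibrium(assign):
    for v, w in itertools.combinations(graph, 2):
        if assign[v] == assign[w]:
            continue                       # same-type swaps change nothing
        swapped = dict(assign)
        swapped[v], swapped[w] = assign[w], assign[v]
        # the agent from v now sits at w, and vice versa
        if utility(w, swapped) > utility(v, assign) and \
           utility(v, swapped) > utility(w, assign):
            return False
    return True

print(is_swap_equilibrium(types))
```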
|
In this work, we study the problem of co-optimizing communication,
pre-computing, and computation costs in one-round multi-way join evaluation. We
propose a multi-way join approach, ADJ (Adaptive Distributed Join), for complex
joins, which finds one optimal query plan to process by exploring cost-effective
partial results in terms of the trade-off between pre-computing, communication,
and computation. We analyze the input relations for a given join query and find
an optimal plan over a set of query plans of a specific form, with high-quality
cost estimation by sampling. Our extensive experiments confirm that ADJ
outperforms the existing multi-way join methods by up to orders of magnitude.
|
This study delves into semi-supervised object detection (SSOD) to improve
detector performance with additional unlabeled data. State-of-the-art SSOD
performance has been achieved recently by self-training, in which training
supervision consists of ground truths and pseudo-labels. In current studies, we
observe that class imbalance in SSOD severely impedes the effectiveness of
self-training. To address the class imbalance, we propose adaptive
class-rebalancing self-training (ACRST) with a novel memory module called
CropBank. ACRST adaptively rebalances the training data with foreground
instances extracted from the CropBank, thereby alleviating the class imbalance.
Owing to the high complexity of detection tasks, we observe that both
self-training and data-rebalancing suffer from noisy pseudo-labels in SSOD.
Therefore, we propose a novel two-stage filtering algorithm to generate
accurate pseudo-labels. Our method achieves satisfactory improvements on the
MS-COCO and VOC benchmarks. When using only 1\% labeled data in MS-COCO, our
method achieves 17.02 mAP improvement over supervised baselines, and 5.32 mAP
improvement compared with state-of-the-art methods.
|
Cyclotron line scattering features are detected in a few tens of X-ray
pulsars (XRPs) and used as direct indicators of a strong magnetic field at the
surface of accreting neutron stars (NSs). In a few cases, cyclotron lines are
known to be variable with accretion luminosity of XRPs. It is accepted that the
observed variations of cyclotron line scattering features are related to
variations of geometry and dynamics of accretion flow above the magnetic poles
of a NS. A positive correlation between the line centroid energy and luminosity
is typical for sub-critical XRPs, where the accretion results in hot spots at
the magnetic poles. A negative correlation was proposed to be a specific
feature of bright super-critical XRPs, where radiation pressure supports
accretion columns above the stellar surface. The cyclotron line in the spectra
of the Be-transient X-ray pulsar GRO J1008-57 is detected at energies of
$\sim 75$--$90$ keV, the highest observed energy of a cyclotron line feature in XRPs. We
report the peculiar relation of cyclotron line centroid energies with
luminosity in GRO J1008-57 during the Type II outburst in August 2017 observed
by Insight-HXMT. The cyclotron line energy was found to be negatively
correlated with the luminosity for $3.2\times 10^{37}\,\mathrm{erg\,s^{-1}}<L<4.2\times
10^{37}\,\mathrm{erg\,s^{-1}}$, and positively correlated for $L\gtrsim 5\times
10^{37}\,\mathrm{erg\,s^{-1}}$. We speculate that the observed peculiar behavior of the
cyclotron line would be due to variations of accretion channel geometry.
|
Reconfiguration aims at recovering a system from a fault by automatically
adapting the system configuration, such that the system goal can be reached
again. Classical approaches typically use a set of pre-defined faults for which
corresponding recovery actions are defined manually. This is not possible for
modern hybrid systems which are characterized by frequent changes. Instead,
AI-based approaches are needed which leverage a model of the non-faulty
system and which search for a set of reconfiguration operations which will
establish a valid behavior again.
This work presents a novel algorithm which solves three main challenges: (i)
Only a model of the non-faulty system is needed, i.e. the faulty behavior does
not need to be modeled. (ii) It discretizes and reduces the search space which
originally is too large -- mainly due to the high number of continuous system
variables and control signals. (iii) It uses a SAT solver for propositional
logic for two purposes: First, it defines the binary concept of validity.
Second, it implements the search itself -- sacrificing the optimal solution for
a quick identification of an arbitrary solution. It is shown that the approach
is able to reconfigure faults on simulated process engineering systems.
|
We study the $T\bar T$ deformation on multi-quantum mechanical systems. By
introducing the dynamical coordinate transformation, we obtain the deformed
theory as well as the solution. We further study the thermo-field-double state
under the $T\bar T$ deformation on these systems, including the conformal quantum
mechanical system, the Sachdev-Ye-Kitaev model, and a model satisfying the
Eigenstate Thermalization Hypothesis. We find common regenesis phenomena, where
the signal injected into one local system can regenerate from the other local
system. From the bulk picture, we study the deformation on Jackiw-Teitelboim
gravity governed by Schwarzian action and find that the regenesis phenomena
here are not related to the causal structure of semi-classical wormhole.
|
Labelled networks are an important class of data, naturally appearing in
numerous applications in science and engineering. A typical inference goal is
to determine how the vertex labels (or features) affect the network's
structure. In this work, we introduce a new generative model, the feature-first
block model (FFBM), that facilitates the use of rich queries on labelled
networks. We develop a Bayesian framework and devise a two-level Markov chain
Monte Carlo approach to efficiently sample from the relevant posterior
distribution of the FFBM parameters. This allows us to infer if and how the
observed vertex-features affect macro-structure. We apply the proposed methods
to a variety of network data to extract the most important features along which
the vertices are partitioned. The main advantages of the proposed approach are
that the whole feature-space is used automatically and that features can be
rank-ordered implicitly according to impact.
|
This paper concerns a new optimization problem arising in the management of a
multi-object spectrometer with a configurable slit unit. The field of view of
the spectrograph is divided into contiguous and parallel spatial bands, each
one associated with two opposite sliding metal bars that can be positioned to
observe one astronomical object. Thus several objects can be analyzed
simultaneously within a configuration of the bars called a mask. Due to the
high demand from astronomers, pointing the spectrograph's field of view to the
sky, rotating it, and selecting the objects to compose a mask is a crucial
optimization problem for the efficient use of the spectrometer. The paper
describes this problem, presents a Mixed Integer Linear Programming formulation
for the case where the rotation angle is fixed, presents a non-convex
formulation for the case where the rotation angle is unfixed, describes a
heuristic approach for the general problem, and discusses computational results
on real-world and randomly-generated instances.
|
In this work, the electrical and spin properties of monolayer MoSi2X4 (X= N,
P, As, and Sb) under vertical strain are investigated. The band structures
show that MoSi2N4 is an indirect semiconductor, whereas the other compounds are
direct semiconductors. The vertical strain has been selected to modify the
electrical properties. The bandgap shows a maximum and decreases for both
tensile and compressive strains. The valence band at K-point displays a large
spin-splitting, whereas the conduction band has a negligible splitting. On the
other hand, the second conduction band has a large spin-splitting and moves
down under vertical strain which leads to a large spin-splitting in both
conduction and valence bands edges. The projected density of states along with
the projected band structure clarifies the origin of these large
spin-splittings. These three spin-splittings can be controlled by vertical
strain.
|
Let $\Gamma$ be a graph with vertex set $V$, and let $a$ and $b$ be
nonnegative integers. A subset $C$ of $V$ is called an $(a,b)$-regular set in
$\Gamma$ if every vertex in $C$ has exactly $a$ neighbors in $C$ and every
vertex in $V\setminus C$ has exactly $b$ neighbors in $C$. In particular, $(0,
1)$-regular sets and $(1, 1)$-regular sets in $\Gamma$ are called perfect codes
and total perfect codes in $\Gamma$, respectively. A subset $C$ of a group $G$ is
said to be an $(a,b)$-regular set of $G$ if there exists a Cayley graph of $G$
which admits $C$ as an $(a,b)$-regular set. In this paper we prove that, for
any generalized dihedral group $G$ or any group $G$ of order $4p$ or $pq$ for
some primes $p$ and $q$, if a nontrivial subgroup $H$ of $G$ is a $(0,
1)$-regular set of $G$, then it must also be an $(a,b)$-regular set of $G$ for
any $0\leqslant a\leqslant|H|-1$ and $0\leqslant b\leqslant |H|$ such that $a$
is even when $|H|$ is odd. A similar result involving $(1, 1)$-regular sets of
such groups is also obtained in the paper.
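The definition is easy to verify computationally; the sketch below (purely illustrative, not from the paper) checks a candidate set against the $(a,b)$-regular condition on a small graph, using the 6-cycle and a perfect code as the example.

```python
# Illustrative check of the (a,b)-regular-set definition: every vertex of C has
# exactly a neighbours in C, every vertex outside C has exactly b neighbours in C.
def is_ab_regular(adj, C, a, b):
    C = set(C)
    return all(
        sum(u in C for u in adj[v]) == (a if v in C else b)
        for v in adj
    )

# 6-cycle; C = {0, 3} is a perfect code, i.e. a (0,1)-regular set: the two
# chosen vertices are non-adjacent and every other vertex has exactly one
# neighbour in C.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(is_ab_regular(cycle6, {0, 3}, a=0, b=1))   # True
```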
|
We consider the convex geometry of the cone of nonnegative quadratics over
Stanley-Reisner varieties. Stanley-Reisner varieties (which are unions of
coordinate planes) are amongst the simplest real projective varieties, so this
is potentially a starting point that can generalize to more complicated real
projective varieties. This subject has some surprising connections to algebraic
topology and category theory, which we exploit heavily in our work.
These questions are also valuable in applied math, because they directly
translate to questions about positive semidefinite (PSD) matrices. In
particular, this relates to a long line of work concerning the extent to which
it is possible to approximately check that a matrix is PSD by checking that
some principal submatrices are PSD, or to check whether a partial matrix can be
approximately completed to a full PSD matrix.
We systematize both these practical and theoretical questions using a
framework based on algebraic topology, category theory, and convex geometry. As
applications of this framework we are able to classify the extreme nonnegative
quadratics over many Stanley-Reisner varieties. We plan to follow these
structural results with a paper that is more focused on quantitative questions
about PSD matrix completion, which have applications in sparse semidefinite
programming.
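A minimal illustration of the phenomenon behind the "approximate checking" question above: all proper principal submatrices of a matrix can be PSD while the matrix itself is not. The example matrix is ours, not from the paper.

```python
import numpy as np

def psd(M, tol=1e-9):
    # PSD iff all eigenvalues are nonnegative (up to numerical tolerance)
    return np.all(np.linalg.eigvalsh(M) >= -tol)

A = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.0],
              [0.9, 0.0, 1.0]])

# every 2x2 principal submatrix is PSD, yet A has a negative eigenvalue
pairs_ok = all(psd(A[np.ix_(I, I)]) for I in [(0, 1), (0, 2), (1, 2)])
print(pairs_ok, psd(A))   # True False
```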
|
Chiral antiferromagnets are currently considered for a broad range of
applications in spintronics, spin-orbitronics and magnonics. In contrast to the
established approach relying on materials screening, the anisotropic and chiral
responses of low-dimensional antiferromagnets can be tailored via the
geometrical curvature. Here, we consider an achiral, anisotropic
antiferromagnetic spin chain and demonstrate that these systems possess
geometry-driven effects stemming not only from the exchange interaction but
also from the anisotropy. Peculiarly, the anisotropy-driven effects are
complementary to the curvature effects stemming from the exchange interaction
and rather strong as they are linear in curvature. These effects are
responsible for the tilt of the equilibrium direction of vector order
parameters and the appearance of the homogeneous Dzyaloshinskii-Moriya
interaction. The latter is a source of the geometry-driven weak ferromagnetism
emerging in curvilinear antiferromagnetic spin chains. Our findings provide a
deeper fundamental insight into the physics of curvilinear antiferromagnets
beyond the $\sigma$-model and offer an additional degree of freedom in the
design of spintronic and magnonic devices.
|
In a recent paper we showed that the collapse to a black hole in
one-parameter families of initial data for massless, minimally coupled scalar
fields in spherically symmetric semi-classical loop quantum gravity exhibited a
universal mass scaling similar to the one in classical general relativity. In
particular, no evidence of a mass gap appeared as had been suggested by
previous studies. The lack of a mass gap indicated the possible existence of a
self-similar critical solution as in general relativity. Here we provide
further evidence for its existence. Using an adaptive mesh refinement code, we
show that "echoes" arise as a result of the discrete self-similarity in
space-time. We also show the existence of "wiggles" in the mass scaling
relation, as in the classical theory. The results from the semi-classical
theory agree well with those of classical general relativity unless one takes
unrealistically large values for the polymerization parameter.
|
Nuclear-powered X-ray millisecond pulsars are the third type of millisecond
pulsars, which are powered by thermonuclear fusion processes. The corresponding
brightness oscillations, known as burst oscillations, are observed during some
thermonuclear X-ray bursts, when the burning and cooling accreted matter gives
rise to an azimuthally asymmetric brightness pattern on the surface of the
spinning neutron star. Apart from providing neutron star spin rates, this X-ray
timing feature can be a useful tool to probe the fundamental physics of neutron
star interior and surface. This chapter presents an overview of the relatively
new field of nuclear-powered X-ray millisecond pulsars.
|
It was recently shown that wavepackets with skewed momentum distribution
exhibit a boomerang-like dynamics in the Anderson model due to Anderson
localization: after an initial ballistic motion, they make a U-turn and
eventually come back to their starting point. In this paper, we study the
robustness of the quantum boomerang effect in various kinds of disordered and
dynamical systems: tight-binding models with pseudo-random potentials, systems
with band random Hamiltonians, and the kicked rotor. Our results show that the
boomerang effect persists in models with pseudo-random potentials. It is also
present in the kicked rotor, although in this case with a specific dependency
on the initial state. On the other hand, we find that random hopping processes
inhibit any drift motion of the wavepacket, and consequently the boomerang
effect. In particular, if the random nearest-neighbor hopping amplitudes have
zero average, the wavepacket remains in its initial position.
|
This article is a response to the continued assumption, cited even in reports
and reviews of recent experimental breakthroughs and advances in theoretical
methods, that the antiJaynes-Cummings (AJC) interaction is an intractable
energy non-conserving component of the quantum Rabi model (QRM). We present
three key features of QRM dynamics: (a) the AJC interaction component has a
conserved excitation number operator and is exactly solvable; (b) the QRM dynamical
space consists of a rotating frame (RF) dominated by an exactly solved
Jaynes-Cummings (JC) interaction specified by a conserved JC excitation number
operator which generates the U(1) symmetry of RF and a correlated
counterrotating frame (CRF) dominated by an exactly solved antiJaynes-Cummings
(AJC) interaction specified by a conserved AJC excitation number operator which
generates the U(1) symmetry of CRF.
|
Smart homes are one of the most promising applications of the emerging
Internet of Things (IoT) technology. With the growing number of IoT related
devices such as smart thermostats, smart fridges, smart speakers, smart light
bulbs, and smart locks, smart homes promise to make our lives easier and more
comfortable. However, the increased deployment of such smart devices brings an
increase in potential security risks and home privacy breaches. In order to
overcome such risks, Intrusion Detection Systems are presented as pertinent
tools that can provide network-level protection for smart devices deployed in
home environments. These systems monitor the network activities of the smart
home-connected devices and focus on alerting on suspicious or malicious activity.
They can also deal with detected abnormal activities by hindering impostors
from accessing the victim devices. However, the employment of such systems in the
context of a smart home can be challenging due to the devices' hardware
limitations, which may restrict their ability to counter the existing and
emerging attack vectors. Therefore, this paper proposes an experimental
comparison between the widely used open-source NIDSs, namely Snort, Suricata,
and Bro IDS, to find the most appropriate one for smart homes in terms of
detection accuracy and resource consumption, including CPU and memory
utilization. Experimental results show that Suricata is the best-performing
NIDS for smart homes.
|
Time-harmonic electromagnetic waves in vacuum are described by the Helmholtz
equation $\Delta u+\omega ^{2}u=0 $ for $ (x,y,z) \in \mathbb{R}^3 $. For the
evolution of such waves along the $z$-axis a Schr\"odinger equation can be
derived through a multiple scaling ansatz. It is the purpose of this paper to
justify this formal approximation by proving bounds between this formal
approximation and true solutions of the original system. The challenge of the
presented validity analysis is the fact that the Helmholtz equation is
ill-posed as an evolutionary system along the $z$-axis.
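As a hedged sketch of the kind of reduction meant here (the paper's multiple scaling ansatz may differ in detail), the standard paraxial-type computation reads:

```latex
% Hedged sketch, not necessarily the paper's exact ansatz. Substituting
% u = v(x,y,z) e^{i\omega z} into the Helmholtz equation gives
\[
  \Delta u + \omega^2 u = 0
  \;\Longrightarrow\;
  \partial_z^2 v + 2i\omega\,\partial_z v + (\partial_x^2 + \partial_y^2)\,v = 0 .
\]
% For slowly modulated envelopes the term \partial_z^2 v is formally of higher
% order, leaving a Schrodinger equation for the evolution along the z-axis:
\[
  2i\omega\,\partial_z v = -(\partial_x^2 + \partial_y^2)\,v .
\]
```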
|
Using detailed synchrotron diffraction, magnetization, thermodynamic and
transport measurements, we investigate the relationship between the mixed
valence of Ir, lattice strain and the resultant structural and magnetic ground
states in the geometrically frustrated triple perovskite iridate
Ba$_{3}$NaIr$_{2}$O$_{9}$. We observe a complex interplay between lattice
strain and structural phase co-existence, which is in sharp contrast to what is
typically observed in this family of compounds. The low temperature magnetic
ground state is characterized by the absence of long range order, and points
towards the condensation of a cluster glass state from an extended regime of
short range magnetic correlations.
|
Modulo-wrapping receivers have attracted interest in several areas of digital
communications, including precoding and lattice coding. The asymptotic capacity
and error performance of the modulo AWGN channel have been well established.
However, due to underlying assumptions of the asymptotic analyses, these
findings might not always be realistic in physical world applications, which
are often dimension- or delay-limited. In this work, the optimum ways to
achieve the minimum probability of error for binary signaling through a scalar
modulo AWGN channel are examined under different scenarios where the receiver
has access to full or partial information. In case of partial information at
the receiver, an iterative estimation rule is proposed to reduce the error
rate, and the performance of different estimators is demonstrated in simulated
experiments.
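As a hedged illustration of the setting (not the paper's estimator), here is a Monte-Carlo sketch of binary antipodal signaling through a scalar modulo AWGN channel with a simple sign detector; all parameters are arbitrary.

```python
import numpy as np

# Illustrative sketch: the channel adds Gaussian noise and wraps the result
# into [-A, A); the receiver decides by the sign of the wrapped observation.
rng = np.random.default_rng(1)
A, sigma, n = 2.0, 0.7, 200_000
bits = rng.integers(0, 2, n)
x = np.where(bits == 1, A / 2, -A / 2)                      # antipodal points
y = np.mod(x + sigma * rng.normal(size=n) + A, 2 * A) - A   # wrap into [-A, A)
bits_hat = (y > 0).astype(int)
print("empirical error rate:", (bits_hat != bits).mean())
```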
|
We estimate the chirality of the cosmological medium due to parity violating
decays of standard model particles, focusing on the example of tau leptons. The
non-trivial chirality is however too small to make a significant contribution
to the cosmological magnetic field via the chiral-magnetic effect.
|
For a bipartite graph $G$ with parts $X$ and $Y$, an $X$-interval coloring is
a proper edge coloring of $G$ by integers such that the colors on the edges
incident to any vertex in $X$ form an interval. Denote by $\chi'_{int}(G,X)$
the minimum $k$ such that $G$ has an $X$-interval coloring with $k$ colors. The
author and Toft conjectured [Discrete Mathematics 339 (2016), 2628--2639] that
there is a polynomial $P(x)$ such that if $G$ has maximum degree at most
$\Delta$, then $\chi'_{int}(G,X) \leq P(\Delta)$. In this short note, we prove
this conjecture; in fact, we prove that a cubic polynomial suffices. We also
deduce some improved upper bounds on $\chi'_{int}(G,X)$ for bipartite graphs
with small maximum degree.
|
In pursuit of explainability, we develop generative models for sequential
data. The proposed models provide state-of-the-art classification results and
robust performance for speech phone classification. We combine modern neural
networks (normalizing flows) and traditional generative models (hidden Markov
models - HMMs). Normalizing flow-based mixture models (NMMs) are used to model
the conditional probability distribution given the hidden state in the HMMs.
Model parameters are learned through judicious combinations of time-tested
Bayesian learning methods and contemporary neural network learning methods. We
mainly combine expectation-maximization (EM) and mini-batch gradient descent.
The proposed generative models can compute the likelihood of the data and hence
are directly suitable for the maximum-likelihood (ML) classification approach. Due to
the structural flexibility of HMMs, we can use different normalizing flow models.
This leads to different types of HMMs providing diversity in data modeling
capacity. The diversity provides an opportunity for easy decision fusion from
different models. For a standard speech phone classification setup involving 39
phones (classes) and the TIMIT dataset, we show that the use of standard
features called mel-frequency cepstral coefficients (MFCCs), the proposed
generative models, and the decision fusion together can achieve $86.6\%$
accuracy by generative training only. This result is close to state-of-the-art
results, for example, the $86.2\%$ accuracy of the PyTorch-Kaldi toolkit [1] and
the $85.1\%$ accuracy using light gated recurrent units [2]. We do not use any
discriminative learning approach and related sophisticated features in this
article.
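As a minimal sketch of the ML decision rule and fusion step described above (our illustration; random numbers stand in for the models' per-class log-likelihoods):

```python
import numpy as np

# Each model m supplies log p_m(x | class); averaging the log-likelihoods and
# taking the argmax is one basic fusion rule. Shapes are placeholders.
n_classes, n_models = 39, 3
rng = np.random.default_rng(0)
loglik = rng.normal(size=(n_models, n_classes))   # one utterance, all models

per_model_pred = loglik.argmax(axis=1)        # each model's ML decision
fused_pred = loglik.mean(axis=0).argmax()     # fusion by log-likelihood averaging
print(per_model_pred, fused_pred)
```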
|
Explaining the predictions of opaque machine learning algorithms is an
important and challenging task, especially as complex models are increasingly
used to assist in high-stakes decisions such as those arising in healthcare and
finance. Most popular tools for post-hoc explainable artificial intelligence
(XAI) are either insensitive to context (e.g., feature attributions) or
difficult to summarize (e.g., counterfactuals). In this paper, I introduce
\emph{rational Shapley values}, a novel XAI method that synthesizes and extends
these seemingly incompatible approaches in a rigorous, flexible manner. I
leverage tools from decision theory and causal modeling to formalize and
implement a pragmatic approach that resolves a number of known challenges in
XAI. By pairing the distribution of random variables with the appropriate
reference class for a given explanation task, I illustrate through theory and
experiments how user goals and knowledge can inform and constrain the solution
set in an iterative fashion. The method compares favorably to state-of-the-art
XAI tools in a range of quantitative and qualitative comparisons.
|
We show that there are 4 infinite families of lattice equable kites, given by
corresponding Pell or Pell-like equations, but up to Euclidean motions, there
are exactly 5 lattice equable trapezoids (2 isosceles, 2 right, 1 singular) and
4 lattice equable cyclic quadrilaterals. We also show that, with one exception,
the interior diagonals of lattice equable quadrilaterals are irrational.
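For illustration of the Pell-type structure mentioned above (the specific equations for equable kites are in the paper), here is a sketch generating solutions of $x^2 - Dy^2 = 1$ from a fundamental solution; $D = 2$ with fundamental solution $(3, 2)$ is chosen only as an example.

```python
# Solutions of x^2 - D y^2 = 1 form an infinite family generated from the
# fundamental solution (x1, y1) via (x, y) -> (x1*x + D*y1*y, x1*y + y1*x),
# i.e. multiplication of x + y*sqrt(D) by x1 + y1*sqrt(D).
def pell_solutions(D, x1, y1, count):
    x, y = x1, y1
    for _ in range(count):
        yield x, y
        x, y = x1 * x + D * y1 * y, x1 * y + y1 * x

for x, y in pell_solutions(2, 3, 2, 4):
    assert x * x - 2 * y * y == 1
    print(x, y)        # (3, 2), (17, 12), (99, 70), (577, 408)
```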
|
A major challenge in the study of cryptography is characterizing the
necessary and sufficient assumptions required to carry out a given
cryptographic task. The focus of this work is the necessity of a broadcast
channel for securely computing symmetric functionalities (where all the parties
receive the same output) when one third of the parties, or more, might be
corrupted. Assuming all parties are connected via a peer-to-peer network, but
no broadcast channel (nor a secure setup phase) is available, we prove the
following characterization:
1) A symmetric $n$-party functionality can be securely computed facing
$n/3\le t<n/2$ corruptions (i.e., an honest majority), if and only if it is
\emph{$(n-2t)$-dominated}; a functionality is $k$-dominated, if \emph{any}
$k$-size subset of its input variables can be set to \emph{determine} its
output.
2) Assuming the existence of one-way functions, a symmetric $n$-party
functionality can be securely computed facing $t\ge n/2$ corruptions (i.e., no
honest majority), if and only if it is $1$-dominated and can be securely
computed with broadcast.
It follows that, in case a third of the parties might be corrupted, broadcast
is necessary for securely computing non-dominated functionalities (in which
"small" subsets of the inputs cannot determine the output), including, as
interesting special cases, the Boolean XOR and coin-flipping functionalities.
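A small illustrative check of the $k$-dominated property as stated above (our sketch): the Boolean XOR is not 1-dominated, while the all-input AND is, since setting any single input to 0 forces the output.

```python
from itertools import combinations, product

# k-dominated (as stated above): for every k-subset S of the inputs there is an
# assignment to S that fixes the output regardless of the remaining inputs.
def is_k_dominated(f, n, k):
    for S in combinations(range(n), k):
        rest = [i for i in range(n) if i not in S]
        if not any(
            len({f(tuple(
                    dict(zip(S, vals)).get(i, None) if i in S else free[rest.index(i)]
                    for i in range(n)))
                 for free in product([0, 1], repeat=n - k)}) == 1
            for vals in product([0, 1], repeat=k)
        ):
            return False
    return True

xor = lambda x: sum(x) % 2
and_all = lambda x: int(all(x))
print(is_k_dominated(xor, 3, 1))      # False: no single input can fix XOR
print(is_k_dominated(and_all, 3, 1))  # True: setting any input to 0 forces 0
```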
|
We consider Shimura varieties for orthogonal or spin groups acting on
hermitian symmetric domains of type IV. We give regular p-adic integral models
for these varieties over odd primes p at which the level subgroup is the
connected stabilizer of a vertex lattice in the orthogonal space. Our
construction is obtained by combining results of Kisin and the first author
with an explicit presentation and resolution of a corresponding local model.
|
To indirectly study the internal structure of giant clumps in main sequence
galaxies at $z \sim 1-3$, we target very turbulent and gas-rich local analogues
from the DYNAMO sample with the Hubble Space Telescope, over a wavelength range
of $\sim 200-480$ nm. We present a catalog of 58 clumps identified in six
DYNAMO galaxies, including the WFC3/UVIS F225W, F336W, and F467M photometry
where the ($225-336$) and ($336-467$) colours are sensitive to extinction and
stellar population age respectively. We measure the internal colour gradients
of clumps themselves to study their age and extinction properties. We find a
marked colour trend within individual clumps, where the resolved colour
distributions show that clumps generally have bluer ($336-467$) colours
(denoting very young ages) in their centers than at their edges, with little
variation in the ($225-336$) colour associated with extinction. Furthermore, we
find that clumps whose colours suggest they are older are preferentially
located closer to the centers of their galaxies, and we find no young
clumps at small galactocentric distances. Both results are consistent with
simulations of high-redshift star forming systems that show clumps form via
violent disk instability, and through dynamic processes migrate to the centers
of their galaxies to contribute to bulge growth on timescales of a few 100 Myr,
while continually forming stars in their centers. When we compare the DYNAMO
clumps to those in these simulations, we find the best agreement with the
long-lived clumps.
|
In this paper, we analyse the causal aspects of evolving marginally trapped
surfaces in a D-dimensional spherically symmetric spacetime, sourced by perfect
fluid with a cosmological constant. The norm of the normal to the marginally
trapped tube is shown to be the product of the Lie derivatives of the expansion
parameter of future outgoing null rays along the incoming and outgoing null
directions. We obtain a closed form expression for this norm in terms of
principal density, pressure, areal radius and cosmological constant. For the
case of a homogeneous fluid distribution, we obtain a simple formula for
determining the causal nature of the evolving horizons. We obtain the causal
phase portraits and highlight the critical radius. We identify many solutions
where the causal signature of the marginally trapped tube or marginally
anti-trapped tube is always null despite having an evolving area. These
solutions do not comply with the standard inner and outer horizon classification
for degenerate horizons. We propose an alternate prescription for the
classification of these degenerate horizons.
|
We examine the feasibility of the Bell test (i.e., detecting a violation of
the Bell inequality) with the ATLAS detector at the Large Hadron Collider (LHC)
at CERN through the flavor entanglement between the B mesons. After addressing the
possible issues that arise associated with the experiment and how they may be
treated based on an analogy with conventional Bell tests, we show in our
simulation study that under realistic conditions (expected from the LHC Run 3
operation) the Bell test is feasible under mild assumptions. The definitive
factor for this promising result lies primarily in the fact that the ATLAS
detector is capable of measuring the decay times of the B mesons independently,
which was not available in the previous experiment with the Belle detector at
KEK. This result suggests the possibility of the Bell test in much higher
energy domains and may open up a new arena for experimental studies of quantum
foundations.
|
Our contributions with this paper are twofold. First, we elucidate the
methodological requirements for a risk framework of custodial operations and
argue for the value of this type of risk model as complementary to
cryptographic and blockchain security models. Second, we present a risk model
in the form of a library of attack-trees for Revault -- an open-source custody
protocol. The model can be used by organisations as a risk quantification
framework for a thorough security analysis in their specific deployment
context. Our work exemplifies an approach that can be used independent of which
custody protocol is being considered, including complex protocols with multiple
stakeholders and active defence infrastructure.
|
Obtaining high-quality parallel corpora is of paramount importance for
training NMT systems. However, as many language pairs lack adequate
gold-standard training data, a popular approach has been to mine so-called
"pseudo-parallel" sentences from paired documents in two languages. In this
paper, we outline some problems with current methods, propose computationally
economical solutions to those problems, and demonstrate success with novel
methods on the Tatoeba similarity search benchmark and on a downstream task,
namely NMT. We uncover the effect of resource-related factors (i.e. how much
monolingual/bilingual data is available for a given language) on the optimal
choice of bitext mining approach, and echo problems with the oft-used BUCC
dataset that have been observed by others. We make the code and data used for
our experiments publicly available.
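The paper's own mining methods are not reproduced here; as one illustration of the general recipe, below is a hedged NumPy sketch of margin-based scoring over sentence embeddings (in the style of Artetxe and Schwenk), a common baseline for pseudo-parallel mining. Embeddings are random stand-ins.

```python
import numpy as np

# "Ratio" margin scoring: cosine similarity of a candidate pair, normalised by
# the average similarity to each side's k nearest neighbours.
def margin_scores(src, tgt, k=4):
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = src @ tgt.T
    knn_s = np.sort(sim, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    knn_t = np.sort(sim, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return sim / ((knn_s + knn_t) / 2.0)

rng = np.random.default_rng(0)
src, tgt = rng.normal(size=(6, 16)), rng.normal(size=(8, 16))
best = margin_scores(src, tgt).argmax(axis=1)   # best target per source sentence
print(best)
```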
|
In this paper we consider a massive scalar perturbation on top of a
small spinning black hole in the context of Einstein-bumblebee modified
gravity in order to probe the role of spontaneous Lorentz symmetry breaking on
the superradiance scattering and corresponding instability. We show that at the
low-frequency limit of the scalar wave the superradiance scattering will be
enhanced with the Lorentz-violating parameter $\alpha<0$ and will be weakened
with $\alpha>0$. Moreover, by addressing the black hole bomb issue, we extract
an improved bound in the instability regime indicating that $\alpha<0$
increases the parameter space of the scalar field instability, while $\alpha>0$
decreases it.
|
We highlight new results on the localization number of a graph, a parameter
derived from the localization graph searching game. After introducing the game
and providing an overview of existing results, we describe recent results on
the localization number. We describe bounds or exact values of the localization
number of incidence graphs of designs, polarity graphs, and Kneser graphs.
|
One of the most critical tasks for startups is to validate their business
model. Therefore, entrepreneurs try to collect information such as feedback
from other actors to assess the validity of their assumptions and make
decisions. However, previous work on decisional guidance for business model
validation provides no solution for the highly uncertain and complex context of
early-stage startups. The purpose of this paper is, thus, to develop design
principles for a Hybrid Intelligence decision support system (HI-DSS) that
combines the complementary capabilities of human and machine intelligence. We
follow a design science research approach to design a prototype artifact and a
set of design principles. Our study provides prescriptive knowledge for HI-DSS
and contributes to previous work on decision support for business models, the
applications of complementary strengths of humans and machines for making
decisions, and support systems for extremely uncertain decision-making
problems.
|
We compute the Chow groups of smooth Gushel-Mukai varieties of dimension $5$.
|
We study the distribution of the Frobenius traces on $K3$ surfaces. We
compare experimental data with the predictions made by the Sato--Tate
conjecture, i.e. with the theoretical distributions derived from the theory of
Lie groups assuming equidistribution. Our sample consists of generic $K3$
surfaces, as well as surfaces having real and complex multiplication. We report
evidence for the Sato--Tate conjecture for the surfaces considered.
|
In this work, the anisotropic variant of the quantum Rabi model with
different coupling strengths of the rotating and counter-rotating wave terms is
studied by the Bogoliubov operator approach. The anisotropy preserves the
parity symmetry of the original model. We derive the corresponding
$G$-function, which yields both the regular and exceptional eigenvalues. The
exceptional eigenvalues correspond to the crossing points of two energy levels
with different parities and are doubly degenerate. We find analytically that
the ground-state and the first excited state can cross several times,
indicating multiple first-order phase transitions as a function of the coupling
strength. These crossing points are related to the manifest parity symmetry of the
Hamiltonian, in contrast to the level crossings in the asymmetric quantum Rabi
model which are caused by a hidden symmetry.
|
We study the connection between risk aversion, number of consumers and
uniqueness of equilibrium. We consider an economy with two goods and $c$
impatience types, where each type has additive separable preferences with HARA
Bernoulli utility function,
$u_H(x):=\frac{\gamma}{1-\gamma}\left(b+\frac{a}{\gamma}x\right)^{1-\gamma}$.
We show that if $\gamma\in \left(1, \frac{c}{c-1}\right]$, the equilibrium is
unique. Moreover, the methods used, involving Newton's symmetric polynomials
and Descartes' rule of signs, enable us to offer new sufficient conditions for
uniqueness in a closed-form expression highlighting the role played by
endowments, patience and specific HARA parameters. Finally, new necessary and
sufficient conditions in ensuring uniqueness are derived for the particular
case of CRRA Bernoulli utility functions with $\gamma =3$.
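As a worked special case of the HARA form above (our illustration, not a claim from the paper beyond the stated theorem):

```latex
% Setting b = 0 in the HARA utility recovers CRRA up to a positive constant:
\[
  u_H(x)\big|_{b=0}
  = \frac{\gamma}{1-\gamma}\left(\frac{a}{\gamma}\right)^{1-\gamma} x^{1-\gamma}
  \;\propto\; \frac{x^{1-\gamma}}{1-\gamma},
\]
% with constant relative risk aversion -x\,u_H''(x)/u_H'(x) = \gamma. For c = 2
% impatience types, the uniqueness condition \gamma \in (1, c/(c-1)] reads
% \gamma \in (1, 2]; the admissible interval shrinks towards 1 as c grows.
```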
|