Active matter comprises individually driven units that convert locally stored
energy into mechanical motion. Interactions between driven units lead to a
variety of non-equilibrium collective phenomena in active matter. One such
phenomenon is anomalously large density fluctuations, which have been observed
in both experiments and theories. Here we show that, on the contrary, density
fluctuations in active matter can also be greatly suppressed. Our experiments
are carried out with marine algae ($\it{Effrenium\ voratum}$) which swim in
circles at the air-liquid interface with two different eukaryotic flagella.
Cell swimming generates fluid flow which leads to effective repulsions between
cells in the far field. The long-range nature of such repulsive interactions
suppresses density fluctuations and generates disordered hyperuniform states
under a wide range of density conditions. The emergence of hyperuniformity and
the associated scaling exponent are quantitatively reproduced in a numerical model
whose main ingredients are effective hydrodynamic interactions and uncorrelated
random cell motion. Our results demonstrate a new form of collective state in
active matter and suggest the possibility of using hydrodynamic flow for
self-assembly in active matter.
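A minimal numerical sketch of the model's two stated ingredients, effective (far-field) hydrodynamic repulsion and uncorrelated random cell motion, assuming illustrative parameter values and a simple $1/r$ repulsive drift that are not taken from the paper. The number variance per window area decreasing with window size signals hyperuniformity:

```python
import numpy as np

# Minimal sketch: N circle swimmers in a periodic box with a far-field 1/r
# pairwise repulsive drift and uncorrelated rotational noise (illustrative
# parameters, not the paper's).
rng = np.random.default_rng(0)
N, L, v0, omega, eps, Dr, dt = 400, 50.0, 1.0, 1.0, 0.5, 0.2, 0.01

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(0, 2 * np.pi, size=N)

def repulsion(pos):
    # Pairwise drift ~ (1/r) away from each neighbor, minimum-image convention.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    r2 = (d ** 2).sum(-1) + np.eye(N)      # pad diagonal to avoid 0/0
    return (d / r2[..., None]).sum(axis=1)

for _ in range(500):
    heading = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    pos = (pos + dt * (v0 * heading + eps * repulsion(pos))) % L
    theta += omega * dt + np.sqrt(2 * Dr * dt) * rng.standard_normal(N)

# Number variance in square windows of side R: for a hyperuniform state the
# variance grows slower than the window area R^2, so the ratio below decays.
for R in (2.0, 4.0, 8.0):
    counts = [np.sum(np.all((pos - o) % L < R, axis=1))
              for o in rng.uniform(0, L, size=(200, 2))]
    print(f"R={R}: variance/area = {np.var(counts) / R**2:.3f}")
```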
|
We consider a fully directed self-avoiding walk model on a cubic lattice to
mimic the conformations of an infinitely long confined flexible polymer chain;
and the confinement condition is achieved by two parallel athermal plates. The
confined polymer system is under good solvent conditions, and we revisit this
problem to solve this real-polymer model for any chain length and any
separation between the plates. The equilibrium statistics of the confined
polymer chain are derived using analytical calculations based on the
generating function technique. The confinement force, the surface tension, and
the monomer density profile of the confined chain are obtained. We propose
that this method of calculation is suitable for understanding the
thermodynamics of a confined polymer chain of arbitrary length.
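As a hedged illustration of the generating-function/transfer-matrix idea, the sketch below uses a simpler surrogate (a partially directed walk whose transverse coordinate is confined between heights 0 and D); the paper's fully directed self-avoiding walk on the cubic lattice differs in detail, so this only shows how the largest transfer-matrix eigenvalue yields the free energy and the entropic confinement force:

```python
import numpy as np

def free_energy(D):
    """Log of the largest transfer-matrix eigenvalue: the entropy per step of
    a directed walk whose transverse height is confined to 0..D (surrogate
    model, illustrative only; not the paper's exact walk)."""
    T = np.zeros((D + 1, D + 1))
    for h in range(D + 1):
        T[h, h] = 1.0              # step straight ahead
        if h > 0:
            T[h, h - 1] = 1.0      # step toward the lower plate
        if h < D:
            T[h, h + 1] = 1.0      # step toward the upper plate
    return np.log(np.max(np.linalg.eigvalsh(T)))

# Entropic repulsion: confinement reduces entropy, so the free energy per
# step grows with plate separation D; its derivative gives the force per
# monomer pushing the plates apart (central finite difference).
for D in (2, 4, 8, 16):
    force = (free_energy(D + 1) - free_energy(D - 1)) / 2.0
    print(f"D={D:2d}: entropy/step={free_energy(D):.4f}, force={force:.5f}")
```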
|
With the recent advances of the Internet of Things, and the increasing
accessibility of ubiquitous computing resources and mobile devices, the
prevalence of rich media contents, and the ensuing social, economic, and
cultural changes, computing technology and applications have evolved quickly
over the past decade. They now go beyond personal computing, facilitating
collaboration and social interactions in general, causing a quick proliferation
of social relationships among IoT entities. The increasing number of these
relationships and their heterogeneous social features have led to computing
and communication bottlenecks that prevent the IoT network from taking
advantage of these relationships to improve the offered services and customize
the delivered content; this problem is known as relationship explosion. On the
other hand, quick advances in artificial intelligence applications in social
computing have led to the emergence of a promising research field known as
Artificial Social Intelligence (ASI), which has the potential to tackle the
relationship explosion problem. This paper discusses the role of IoT in social
relationship detection and management, the problem of relationship explosion
in IoT, and reviews the proposed ASI-based solutions, including
social-oriented machine-learning and deep-learning techniques.
|
The present paper is a continuation of earlier work by Gunnar Carlsson and
the first author on a motivic variant of the classical Becker-Gottlieb transfer
and an additivity theorem for such a transfer by the present authors. Here, we
establish a motivic variant of the classical Segal-Becker theorem relating the
classifying space of a 1-dimensional torus with the spectrum defining
(algebraic) K-theory.
|
Graph Neural Networks have revolutionized many machine learning tasks in
recent years, with applications ranging from drug discovery, recommendation
systems, image classification, and social network analysis to natural language
understanding. This
paper shows their efficacy in modeling relationships between products and
making predictions for unseen product networks. By representing products as
nodes and their relationships as edges of a graph, we show how an inductive
graph neural network approach, named GraphSAGE, can efficiently learn
continuous representations for nodes and edges. These representations also
capture product feature information such as price, brand, or engineering
attributes. They are combined with a classification model for predicting the
existence of the relationship between products. Using a case study of the
Chinese car market, we find that our method yields double the prediction
performance compared to an Exponential Random Graph Model-based method for
predicting the co-consideration relationship between cars. While a vanilla
GraphSAGE requires a partial network to make predictions, we introduce an
`adjacency prediction model' to circumvent this limitation. This enables us to
predict product relationships when no neighborhood information is known.
Finally, we demonstrate how a permutation-based interpretability analysis can
provide insights on how design attributes impact the predictions of
relationships between products. This work provides a systematic method to
predict the relationships between products in many different markets.
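A minimal sketch of the inductive link-prediction setup described above, using the SAGEConv layer from PyTorch Geometric; the layer sizes and the dot-product edge scorer are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class SageLinkPredictor(torch.nn.Module):
    """Two SAGEConv layers produce node embeddings; a dot product scores
    whether a relationship (edge) exists between two products."""
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)
        self.conv2 = SAGEConv(hid_dim, hid_dim)

    def encode(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

    def decode(self, z, pairs):
        # pairs: [2, num_pairs] candidate product pairs to score
        return (z[pairs[0]] * z[pairs[1]]).sum(dim=-1)

# Toy usage: 5 products with 8 features each (price, brand one-hots, ...).
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # known relationships
model = SageLinkPredictor(in_dim=8)
z = model.encode(x, edge_index)
logits = model.decode(z, torch.tensor([[0, 1], [4, 3]]))  # score (0,4), (1,3)
loss = F.binary_cross_entropy_with_logits(logits, torch.tensor([1.0, 0.0]))
```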
|
In Survival Analysis, the observed lifetimes often correspond to individuals
for which the event occurs within a specific calendar time interval. With such
interval sampling, the lifetimes are doubly truncated at values determined by
the birth dates and the sampling interval. This double truncation may induce a
systematic bias in estimation, so specific corrections are needed. A relevant
target in Survival Analysis is the hazard rate function, which represents the
instantaneous rate of occurrence of the event of interest. In this work we introduce
a flexible estimation approach for the hazard rate under double truncation.
Specifically, a kernel smoother is considered, in both a fully nonparametric
setting and a semiparametric setting in which the incidence process fits a
given parametric model. Properties of the kernel smoothers are investigated
both theoretically and through simulations. In particular, an asymptotic
expression of the mean integrated squared error is derived, leading to a
data-driven bandwidth for the estimators. The relevance of the semiparametric
approach is emphasized, in that it is generally more accurate and, importantly,
it avoids the potential issues of nonexistence or nonuniqueness of the fully
nonparametric estimator. Applications to the age of diagnosis of Acute Coronary
Syndrome (ACS) and AIDS incubation times are included.
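A hedged sketch of a kernel-smoothed hazard estimator of the flavor discussed above; the Efron-Petrosian-type weights that would correct for double truncation are stubbed as uniform, since computing them is the substance of the paper:

```python
import numpy as np

def kernel_hazard(t_grid, lifetimes, weights=None, bandwidth=1.0):
    """Kernel-smoothed hazard: a weighted kernel density of event times divided
    by the weighted at-risk mass. Under double truncation the weights would
    come from an Efron-Petrosian-type NPMLE; they are stubbed as uniform here."""
    x = np.asarray(lifetimes, dtype=float)
    w = np.full(len(x), 1.0 / len(x)) if weights is None else np.asarray(weights)
    w = w / w.sum()
    K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    haz = []
    for t in t_grid:
        density = np.sum(w * K((t - x) / bandwidth)) / bandwidth
        at_risk = np.sum(w * (x >= t))
        haz.append(density / at_risk if at_risk > 0 else np.nan)
    return np.array(haz)

# Toy check with exponential lifetimes: the true hazard is constant at 0.5.
rng = np.random.default_rng(1)
t = np.linspace(0.2, 3.0, 8)
print(kernel_hazard(t, rng.exponential(scale=2.0, size=2000), bandwidth=0.3))
```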
|
We are interested in dendrites for which all invariant measures of
zero-entropy mappings have discrete spectrum, and we prove that this holds when
the closure of the endpoint set of the dendrite is countable. This solves an
open question that had been around for a while, almost completing the
characterization of dendrites with this property.
|
The Radial Basis Function-generated Finite Differences (RBF-FD) method has
become a popular variant of local meshless strong-form methods due to its
robustness with respect to node positions and its controllable order of
accuracy. In this paper, we
present a GPU accelerated numerical solution of Poisson's equation on scattered
nodes in 2D for orders from 2 up to 6. We specifically study the effect of
using different orders on GPU acceleration efficiency.
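A hedged sketch of how RBF-FD weights for the 2D Laplacian are commonly computed on scattered nodes, using a cubic polyharmonic spline with degree-2 polynomial augmentation; this mirrors standard RBF-FD practice rather than the paper's GPU implementation:

```python
import numpy as np

def rbf_fd_laplacian_weights(nodes, center):
    """Stencil weights w with sum_i w_i u(x_i) ~ Laplacian(u)(center), using
    a cubic polyharmonic spline phi(r) = r^3 augmented with 2D polynomials
    up to degree 2 (a common RBF-FD recipe; illustrative, not the paper's code)."""
    X = np.asarray(nodes) - center              # shift for conditioning
    n = len(X)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = r ** 3
    x, y = X[:, 0], X[:, 1]
    P = np.column_stack([np.ones(n), x, y, x**2, x*y, y**2])
    M = np.block([[A, P], [P.T, np.zeros((6, 6))]])
    rc = np.linalg.norm(X, axis=-1)
    rhs = np.concatenate([9.0 * rc,             # Laplacian of r^3 in 2D is 9r
                          [0, 0, 0, 2, 0, 2]])  # Laplacian of the poly basis
    return np.linalg.solve(M, rhs)[:n]

# Sanity check on u(x, y) = x^2 + y^2, whose Laplacian is 4 everywhere.
rng = np.random.default_rng(2)
center = np.array([0.5, 0.5])
stencil = center + 0.1 * rng.standard_normal((15, 2))
w = rbf_fd_laplacian_weights(stencil, center)
print(w @ (stencil ** 2).sum(axis=1))           # ~ 4
```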
|
This paper considers optimal control of a quadrotor unmanned aerial vehicle
(UAV) using the discrete-time, finite-horizon, linear quadratic regulator
(LQR). The state of a quadrotor UAV is represented as an element of the matrix
Lie group of double direct isometries, $SE_2(3)$. The nonlinear system is
linearized using a left-invariant error about a reference trajectory, leading
to an optimal gain sequence that can be calculated offline. The reference
trajectory is calculated using the differentially flat properties of the
quadrotor. Monte-Carlo simulations demonstrate robustness of the proposed
control scheme to parametric uncertainty, state-estimation error, and initial
error. Additionally, when compared to an LQR controller that uses a
conventional error definition, the proposed controller demonstrates better
performance when initial errors are large.
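A hedged sketch of the offline gain computation that the discrete-time, finite-horizon LQR entails: the backward Riccati recursion below is standard, while the toy double integrator stands in for the $(A_k, B_k)$ obtained by linearizing the left-invariant error dynamics on $SE_2(3)$ about the reference trajectory:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion: returns the gain sequence K_0..K_{N-1}
    for u_k = -K_k x_k, computable offline as in the paper's setup."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                # K_0 first

# Toy double-integrator stand-in for the linearized left-invariant
# error dynamics (the real A_k, B_k come from linearizing on SE_2(3)).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Ks = finite_horizon_lqr(A, B, Q=np.eye(2), R=np.array([[0.1]]),
                        Qf=10 * np.eye(2), N=50)

x = np.array([1.0, 0.0])              # initial error
for K in Ks:
    x = (A - B @ K) @ x               # closed-loop error propagation
print(np.linalg.norm(x))              # error regulated toward zero
```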
|
We propose an alternative reconstruction for weighted essentially
non-oscillatory schemes with adaptive order (WENO-AO) for solving hyperbolic
conservation laws. The alternative reconstruction has a more concise form than
the original WENO-AO reconstruction. Moreover, it is a strictly convex
combination of polynomials with unequal degrees. Numerical examples show that
the alternative reconstruction maintains the accuracy and robustness of the
WENO-AO schemes.
|
This thesis presents three results in geometric analysis. We first analyze
the curve-shortening flow on figure eight curves in the plane. Afterwards, we
examine the point-wise curvature preserving flow on space curves. Lastly, we
present an abridgment of our work on a family of three-dimensional Lie groups,
which, when equipped with canonical left-invariant metrics, interpolate between
Sol and hyperbolic space.
|
We learn, in an unsupervised way, an embedding from sequences of radar images
that is suitable for solving the place recognition problem using complex radar
data. We experiment on 280 km of data and show performance exceeding
state-of-the-art supervised approaches, localising correctly 98.38% of the time
when using just the nearest database candidate.
|
We study the Monge-Ampère equation with a power nonlinearity. A solution $u$
is said to be Euclidean complete if it is an entire solution defined over the
whole $\mathbb{R}^n$, or if its graph is a large hypersurface satisfying the
large condition on the boundary $\partial\Omega$ in case
$\Omega\neq\mathbb{R}^n$. In this paper, we give various sharp conditions on
$p$ and $\Omega$ classifying the Euclidean complete solutions. Our results
clarify and largely extend the existence theorem of Cirstea-Trombetti (Calc.
Var., 31, 2008, 167-186) for bounded convex domains and $p>n$.
|
Starting from the model in Koch-Vargiolu (2019), we test the real impact of
current renewable installed power on the electricity price in Italy, and assess
how much the renewable installation strategy which was put in place in Italy
deviated from the optimal one obtained from the model in the period 2012--2018.
To do so, we consider the Ornstein-Uhlenbeck (O-U) process, including an
exogenous increasing process influencing the mean reverting term, which is
interpreted as the current renewable installed power. Using real data of
electricity price, photovoltaic and wind energy production from the six main
Italian price zones, we estimate the parameters of the model and obtain
quantitative results: for instance, photovoltaic production impacts the price
in the North zone, wind is significant for Sardinia, and the Central North
zone shows no electricity price impact. Then we implement the solution of
the singular optimal control problem of installing renewable power production
devices, in order to maximize the profit of selling the produced energy in the
market net of installation costs. We extend the results of \cite{KV} to the
case when no impact on power price is present, and to the case when $N$
players can produce electricity by installing renewable power plants. We are
thus able to describe the optimal strategy and compare it with the real
installation strategy that was put in place in Italy.
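A hedged Euler-Maruyama sketch of the flavor of the price model: an O-U process whose mean-reverting level is shifted by an exogenous, increasing installed-power process. The functional form, the sign of the impact, and all parameter values are illustrative and may differ from Koch-Vargiolu (2019):

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 6.0, 6 * 365                  # six years, daily steps (as in 2012-2018)
dt = T / n
theta, mu, sigma, beta = 2.0, 60.0, 8.0, 0.02   # illustrative parameters

t = np.linspace(0, T, n + 1)
installed = 500.0 * t / T            # exogenous increasing installed power (MW)

X = np.empty(n + 1)
X[0] = mu
for k in range(n):
    level = mu - beta * installed[k]             # renewables depress the mean level
    X[k + 1] = X[k] + theta * (level - X[k]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()

print(f"mean price, first year: {X[:365].mean():.1f}")
print(f"mean price, last year:  {X[-365:].mean():.1f}")
```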
|
On 13 May 1787, a convict fleet of 11 ships left Portsmouth, England, on a
24,000 km, 8-month-long voyage to New South Wales. The voyage would take the
"First Fleet" under Captain Arthur Phillip via Tenerife (Canary Islands), the
port of Rio de Janeiro (Brazil), Table Bay at the southern extremity of the
African continent and the southernmost cape of present-day Tasmania to their
destination of Botany Bay. Given the navigation tools available at the time and
the small size of the convoy's ships, their safe arrival within a few days of
each other was a phenomenal achievement. This was particularly so because they
had not lost a single ship and only a relatively small number of crew and
convicts. Phillip and his crew had only been able to ensure their success
because of the presence of crew members who were highly proficient in practical
astronomy, most notably Lieutenant William Dawes. We explore in detail his
educational background and the events leading up to Dawes' appointment by the
Board of Longitude as the convoy's dedicated astronomer-cum-Marine. In addition
to Dawes, John Hunter, second captain of the convoy's flagship H.M.S. Sirius,
Lieutenant William Bradley and Lieutenant Philip Gidley King were also experts
in navigation and longitude determination, using both chronometers and "lunar
distance" measurements. The historical record of the First Fleet's voyage is
remarkably accurate, even by today's standards.
|
In this paper, we introduce a novel iterative algorithm which carries out
$\alpha$-divergence minimisation by ensuring a systematic decrease in the
$\alpha$-divergence at each step. In its most general form, our framework
allows us to simultaneously optimise the weights and components parameters of a
given mixture model. Notably, our approach allows us to build on various methods
previously proposed for $\alpha$-divergence minimisation, such as gradient or
power descent schemes. Furthermore, we shed new light on an integrated
Expectation Maximization algorithm. We provide empirical evidence that our
methodology yields improved results, all the while illustrating the numerical
benefits of having introduced some flexibility through the parameter $\alpha$
of the $\alpha$-divergence.
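For reference, one common convention for the $\alpha$-divergence between normalized densities (the paper may use a variant) is
$$D_{\alpha}(p\,\|\,q)=\frac{1}{\alpha(\alpha-1)}\left(\int p(x)^{\alpha}\,q(x)^{1-\alpha}\,\mathrm{d}x-1\right),$$
which recovers $\mathrm{KL}(p\,\|\,q)$ in the limit $\alpha\to 1$ and $\mathrm{KL}(q\,\|\,p)$ as $\alpha\to 0$.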
|
The scaling behavior for the rectification of bipolar nanopores is studied
using the Nernst-Planck equation coupled to the Local Equilibrium Monte Carlo
method. The bipolar nanopore's wall carries $\sigma$ and $-\sigma$ surface
charge densities in its two half regions axially. Scaling means that the device
function (rectification) depends on the system parameters (pore length, $H$,
pore radius, $R$, concentration, $c$, voltage, $U$, and surface charge density,
$\sigma$) via a single scaling parameter that is a smooth analytical function
of the system parameters. Here, we suggest using a modified Dukhin number,
$\mathrm{mDu}=|\sigma|l_{\mathrm{B}}^{*}\lambda_{\mathrm{D}}HU/(RU_{0})$, where
$l_{\mathrm{B}}^{*}=8\pi l_{\mathrm{B}}$, $l_{\mathrm{B}}$ is the Bjerrum
length, $\lambda_{\mathrm{D}}$ is the Debye length, and $U_{0}$ is a reference
voltage. We show how scaling depends on $H$, $U$, and $\sigma$ and through what
mechanisms these parameters influence the pore's behavior.
|
We present the analytic calculation of two-loop master integrals that are
relevant for $tW$ production at hadron colliders. We focus on the integral
families with only one massive propagator. After choosing a canonical basis,
the differential equations for the master integrals can be transformed into the
$d\ln$ form. The boundaries are determined by simple direct integrations or
regularity conditions at kinematic points without physical singularities. The
analytical results in this work are expressed in terms of multiple
polylogarithms, and have been checked with numerical computations.
|
Deep learning models are vulnerable to adversarial examples. As a more
threatening type for practical deep learning systems, physical adversarial
examples have received extensive research attention in recent years. However,
without exploiting the intrinsic characteristics such as model-agnostic and
human-specific patterns, existing works generate weak adversarial perturbations
in the physical world, which transfer poorly across different models and have
a visually suspicious appearance. Motivated by the viewpoint that
attention reflects the intrinsic characteristics of the recognition process,
this paper proposes the Dual Attention Suppression (DAS) attack to generate
visually-natural physical adversarial camouflages with strong transferability
by suppressing both model and human attention. As for attacking, we generate
transferable adversarial camouflages by distracting the model-shared similar
attention patterns from the target to non-target regions. Meanwhile, based on
the fact that human visual attention always focuses on salient items (e.g.,
suspicious distortions), we evade the human-specific bottom-up attention to
generate visually-natural camouflages which are correlated to the scenario
context. We conduct extensive experiments in both the digital and physical
world for classification and detection tasks on up-to-date models (e.g.,
Yolo-V5), demonstrating that our method significantly outperforms
state-of-the-art methods.
|
In the past few years, research on advanced driver assistance systems (ADASs)
has been carried out, and such systems have been deployed in intelligent
vehicles. Systems
that have been developed can perform different tasks, such as lane keeping
assistance (LKA), lane departure warning (LDW), lane change warning (LCW) and
adaptive cruise control (ACC). Real-time lane detection and tracking (LDT) is
one of the most consequential parts of performing the above tasks. Images
extracted from video contain noise and other unwanted factors, such as
variations in lighting and shadows from nearby objects, which require
robust preprocessing methods for lane marking detection and tracking.
Preprocessing is critical for the subsequent steps and real time performance
because its main function is to remove the irrelevant image parts and enhance
the feature of interest. In this paper, we survey preprocessing methods for
detecting lane marking as well as tracking lane boundaries in real time
focusing on vision-based systems.
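A hedged sketch of the canonical vision-based preprocessing chain that such surveys cover (grayscale conversion, smoothing, edge extraction, region-of-interest masking), using standard OpenCV calls; the thresholds and ROI polygon below are illustrative, not recommendations from the survey:

```python
import cv2
import numpy as np

def preprocess_for_lanes(frame):
    """Typical lane-marking preprocessing: remove irrelevant image parts
    and enhance the feature of interest (lane edges)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # drop color
    blur = cv2.GaussianBlur(gray, (5, 5), 0)            # suppress noise
    edges = cv2.Canny(blur, 50, 150)                    # gradient-based edges

    # Keep only a trapezoidal region in front of the vehicle (illustrative).
    h, w = edges.shape
    roi = np.array([[(0, h), (int(0.45 * w), int(0.6 * h)),
                     (int(0.55 * w), int(0.6 * h)), (w, h)]], dtype=np.int32)
    mask = np.zeros_like(edges)
    cv2.fillPoly(mask, roi, 255)
    return cv2.bitwise_and(edges, mask)

# Usage on one video frame:
# cap = cv2.VideoCapture("road.mp4"); ok, frame = cap.read()
# binary = preprocess_for_lanes(frame)   # feed to a Hough transform / tracker
```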
|
The increasing volume of user-generated human-centric video content and its
applications, such as video retrieval and browsing, require compact
representations that are addressed by the video summarization literature.
Current supervised studies formulate video summarization as a
sequence-to-sequence learning problem and the existing solutions often neglect
the surge of human-centric view, which inherently contains affective content.
In this study, we investigate the affective-information enriched supervised
video summarization task for human-centric videos. First, we train a visual
input-driven state-of-the-art continuous emotion recognition model (CER-NET) on
the RECOLA dataset to estimate emotional attributes. Then, we integrate the
estimated emotional attributes and the high-level representations from the
CER-NET with the visual information to define the proposed affective video
summarization architectures (AVSUM). In addition, we investigate the use of
attention to improve the AVSUM architectures and propose two new architectures
based on temporal attention (TA-AVSUM) and spatial attention (SA-AVSUM). We
conduct video summarization experiments on the TvSum database. The proposed
AVSUM-GRU architecture with an early fusion of high level GRU embeddings and
the temporal attention based TA-AVSUM architecture attain competitive video
summarization performances by bringing strong performance improvements for the
human-centric videos compared to the state-of-the-art in terms of F-score and
self-defined face recall metrics.
|
Accurate lighting estimation is challenging yet critical to many computer
vision and computer graphics tasks such as high-dynamic-range (HDR) relighting.
Existing approaches model lighting in either the frequency domain or the
spatial domain, which is insufficient to represent the complex lighting
conditions in scenes and tends to produce inaccurate estimates. This paper
presents NeedleLight, a
new lighting estimation model that represents illumination with needlets and
allows lighting estimation in both frequency domain and spatial domain jointly.
An optimal thresholding function is designed to achieve sparse needlets, which
trims redundant lighting parameters and demonstrates superior localization
properties for illumination representation. In addition, a novel spherical
transport loss is designed based on optimal transport theory, which guides the
regression of lighting representation parameters while taking spatial
information into account. Furthermore, we propose a new metric that is concise yet effective
by directly evaluating the estimated illumination maps rather than rendered
images. Extensive experiments show that NeedleLight achieves superior lighting
estimation consistently across multiple evaluation metrics as compared with
state-of-the-art methods.
|
We evaluate the rotational velocity of stars observed by the Pristine survey
towards the Galactic anticentre, spanning a wide range of metallicities from
the extremely metal-poor regime ($\mathrm{[Fe/H]}<-3$) to nearly solar
metallicity. In the Galactic anticentre direction, the rotational velocity
($V_{\phi}$) is similar to the tangential velocity in the galactic longitude
direction ($V_{\ell}$). This allows us to estimate $V_{\phi}$ from Gaia early
data-release 3 (Gaia EDR3) proper motions for stars without radial velocity
measurements. This substantially increases the sample of stars in the outer
disc with estimated rotational velocities. Our stellar sample towards the
anticentre is dominated by a kinematical thin disc with a mean rotation of
$\sim -220$ km $\mathrm{s}^{-1}$. However, our analysis reveals the presence of
additional stellar substructures. The most intriguing is a well-populated extension
of the kinematical thin disc down to $\mathrm{[Fe/H]} \sim -2$. A sparser
fast-rotating population reaching the extremely metal-poor regime, down to
$\mathrm{[Fe/H]} \sim -3.5$, is also detected, but without the statistical
significance to unambiguously state whether this is the extremely metal-poor
extension of the thin disc or the high rotating tail of hotter structures (like
the thick disc or the halo). In addition, a more slowly rotating kinematical
thick disc component is also required to explain the observed $V_{\ell}$
distribution at $\mathrm{[Fe/H]} > -1.5$. Furthermore, we detect signatures of
a "heated disc", the so-called Splash, at metallicities higher than $\sim-1.5$.
Finally, at $\mathrm{[Fe/H]} < -1.5$ our anticentre sample is dominated by a
kinematical halo with a net prograde motion.
|
Van Zuylen et al. introduced the notion of a popular ranking in a voting
context, where each voter submits a strictly-ordered list of all candidates. A
popular ranking $\pi$ of the candidates is at least as good as any other
ranking $\sigma$ in the following sense: if we compare $\pi$ to $\sigma$, at
least half of all voters will always weakly prefer~$\pi$. Whether a voter
prefers one ranking to another is calculated based on the Kendall distance.
A more traditional definition of popularity -- as applied to popular
matchings, a well-established topic in computational social choice -- is
stricter, because it requires at least half of the voters \emph{who are not
indifferent between $\pi$ and $\sigma$} to prefer~$\pi$. In this paper, we
derive structural and algorithmic results in both settings, also improving upon
the results by van Zuylen et al. We also point out strong connections to the
famous open problem of finding a Kemeny consensus with 3 voters.
|
This contribution is a review of the deep and powerful connection between the
large scale properties of critical systems and their description in terms of a
field theory. Although largely applicable to many other models, the details of
this connection are illustrated in the class of two-dimensional Abelian
sandpile models. Bulk and boundary height variables, spanning tree related
observables, boundary conditions and dissipation are all discussed in this
context and found to have a proper match in the field theoretic description.
|
Graph convolutional neural networks (GCNNs) are nonlinear processing tools to
learn representations from network data. A key property of GCNNs is their
stability to graph perturbations. Current analysis considers deterministic
perturbations but fails to provide relevant insights when topological changes
are random. This paper investigates the stability of GCNNs to stochastic graph
perturbations induced by link losses. In particular, it proves that the expected
output difference between the GCNN over randomly perturbed graphs and the GCNN
over the nominal graph is upper bounded by a factor that is linear in the link
loss probability. We perform the stability analysis in the graph spectral
domain such that the result holds uniformly for any graph. This result also
shows the role of the nonlinearity and the architecture width and depth, and
allows identifying handles to improve the GCNN robustness. Numerical simulations
on source localization and robot swarm control corroborate our theoretical
findings.
|
Potential strategies for the development and large-scale application of
renewable energy sources aimed at reducing the usage of carbon-based fossil
fuels are assessed here, especially in the event of the abandonment of such
fuels. The aim is to aid the initiative to reduce the harmful effects of
carbon-based fossil fuels on the environment and ensure a reduction in
greenhouse gases and sustainability of natural resources. Small-scale renewable
energy applications for heating, cooling, and electricity generation in
households and commercial buildings are already underway around the world.
Hydrogen (H2) and ammonia (NH3), which are presently produced using fossil
fuels, already have significant applications in society and industry, and are
therefore good candidates for large-scale production through the use of
renewable energy sources. This will help to reduce the greenhouse gas emissions
appreciably around the world. While the first-generation biofuels production
using food crops may not be suitable for long-range fuel production, due to
competition with the food supply, the 2nd, 3rd and 4th generation biofuels have
the potential to produce large, worldwide supplies of fuels. Production of
advanced biofuels will not increase the emission of greenhouse gases, and the
ammonia produced through the use of renewable energy resources will serve as
fertilizer for biofuels production. Perspectives on renewable energy sources,
such as technology status, economics, overall environmental benefits,
obstacles to commercialization, and the relative competitiveness of various
renewable energy sources, are also discussed where applicable.
|
Interactions among multiple time series of positive random variables are
crucial in diverse financial applications, from spillover effects to volatility
interdependence. A popular model in this setting is the vector Multiplicative
Error Model (vMEM), which posits a linear iterative structure on the dynamics of
the conditional mean, perturbed by a multiplicative innovation term. A main
limitation of the vMEM, however, is its restrictive assumption on the
distribution of the random innovation term. A Bayesian semiparametric approach that models the
innovation vector as an infinite location-scale mixture of multidimensional
kernels with support on the positive orthant is used to address this major
shortcoming of vMEM. Computational complications arising from the constraints
to the positive orthant are avoided through the formulation of a slice sampler
on the parameter-extended unconstrained version of the model. The method is
applied to simulated and real data and a flexible specification is obtained
that outperforms the classical ones in terms of fitting and predictive power.
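For context, the baseline vMEM recursion being generalized reads, in one standard formulation (notation may differ from the paper),
$$x_t=\mu_t\odot\varepsilon_t,\qquad \mu_t=\omega+A\,x_{t-1}+B\,\mu_{t-1},$$
where $\odot$ is the elementwise product and $\varepsilon_t$ is an i.i.d. innovation vector with unit mean and support on the positive orthant; the Bayesian semiparametric approach replaces the parametric law of $\varepsilon_t$ with an infinite location-scale mixture.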
|
Kagome magnets are believed to have numerous exotic physical properties due
to the possible interplay between lattice geometry, electron correlation and
band topology. Here, we report the large anomalous Hall effect in the kagome
ferromagnet LiMn$_6$Sn$_6$, which has a Curie temperature of 382 K and an easy
magnetization plane coinciding with the kagome lattice plane. At low temperatures, unsaturated positive
magnetoresistance and opposite signs of ordinary Hall coefficient for
$\rho_{xz}$ and $\rho_{yx}$ indicate the coexistence of electrons and holes in
the system. A large intrinsic anomalous Hall conductivity of 380 $\Omega^{-1}$
cm$^{-1}$, or 0.44 $e^2/h$ per Mn layer, is observed in $\sigma_{xy}^A$. This
value is significantly larger than those in other $R$Mn$_6$Sn$_6$ ($R$ = rare
earth elements) kagome compounds. Band structure calculations show several band
crossings, including a spin-polarized Dirac point at the K point, close to the
Fermi energy. The calculated intrinsic Hall conductivity agrees well with the
experimental value, and shows a maximum peak near the Fermi energy. We
attribute the large anomalous Hall effect in LiMn$_6$Sn$_6$ to the band
crossings closely located near the Fermi energy.
|
Prior work has proved that translation memory (TM) can boost the performance
of Neural Machine Translation (NMT). In contrast to existing work that uses
bilingual corpus as TM and employs source-side similarity search for memory
retrieval, we propose a new framework that uses monolingual memory and performs
learnable memory retrieval in a cross-lingual manner. Our framework has unique
advantages. First, the cross-lingual memory retriever allows abundant
monolingual data to be TM. Second, the memory retriever and NMT model can be
jointly optimized for the ultimate translation goal. Experiments show that the
proposed method obtains substantial improvements. Remarkably, it even
outperforms strong TM-augmented NMT baselines using bilingual TM. Owing to the
ability to leverage monolingual data, our model also demonstrates effectiveness
in low-resource and domain adaptation scenarios.
|
The use of Reinforcement Learning (RL) agents in practical applications
requires the consideration of suboptimal outcomes, depending on the familiarity
of the agent with its environment. This is especially important in
safety-critical environments, where errors can lead to high costs or damage. In
distributional RL, the risk-sensitivity can be controlled via different
distortion measures of the estimated return distribution. However, these
distortion functions require an estimate of the risk level, which is difficult
to obtain and depends on the current state. In this work, we demonstrate the
suboptimality of a static risk level estimation and propose a method to
dynamically select risk levels at each environment step. Our method ARA
(Automatic Risk Adaptation) estimates the appropriate risk level in both known
and unknown environments using a Random Network Distillation error. We show
reduced failure rates by up to a factor of 7 and improved generalization
performance by up to 14% compared to both risk-aware and risk-agnostic agents
in several locomotion environments.
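A hedged sketch of the idea on toy data: distort a learned return distribution (here, per-action quantile estimates) with a CVaR-style risk level, and set that level each step from a normalized Random Network Distillation error. The mapping from RND error to risk level is an illustrative guess, not the paper's exact rule:

```python
import numpy as np

def cvar_value(quantiles, alpha):
    """Risk-sensitive value: mean of the lowest alpha-fraction of the
    estimated return distribution (a CVaR-style distortion over quantiles)."""
    q = np.sort(quantiles)
    k = max(1, int(np.ceil(alpha * len(q))))
    return q[:k].mean()

def adaptive_risk_level(rnd_error, err_min=0.0, err_max=1.0):
    """High novelty (large RND error, unfamiliar state) -> act cautiously
    (small alpha); familiar state -> near risk-neutral (alpha near 1).
    The linear mapping is an illustrative guess."""
    novelty = np.clip((rnd_error - err_min) / (err_max - err_min), 0.0, 1.0)
    return 1.0 - 0.9 * novelty          # alpha in [0.1, 1.0]

# Toy action selection over per-action quantile estimates of the return.
rng = np.random.default_rng(0)
quantiles = {"risky": rng.normal(1.0, 2.0, 32),    # higher mean, high spread
             "safe": rng.normal(0.7, 0.3, 32)}     # lower mean, low spread
alpha = adaptive_risk_level(rnd_error=0.8)         # unfamiliar state
best = max(quantiles, key=lambda a: cvar_value(quantiles[a], alpha))
print(f"alpha={alpha:.2f}, chosen action: {best}") # small alpha favors "safe"
```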
|
We extend the theory of Interacting Hopf algebras with an order primitive,
and give a sound and complete axiomatisation of the prop of polyhedral cones.
Next, we axiomatise an affine extension and prove soundness and completeness
for the prop of polyhedra.
|
Accurate phase diagrams of multicomponent plasmas are required for the
modeling of dense stellar plasmas, such as those found in the cores of white
dwarf stars and the crusts of neutron stars. Those phase diagrams have been
computed using a variety of standard techniques, which suffer from physical and
computational limitations. Here, we present an efficient and accurate method
that overcomes the drawbacks of previously used approaches. In particular,
finite-size effects are avoided as each phase is calculated separately; the
plasma electrons and volume changes are explicitly taken into account; and
arbitrary analytic fits to simulation data are avoided. Furthermore, no
simulations at uninteresting state conditions, i.e., away from the phase
coexistence curves, are required, which improves the efficiency of the
technique. The method consists of an adaptation of the so-called Gibbs-Duhem
integration approach to electron-ion plasmas, where the coexistence curve is
determined by direct numerical integration of its underlying Clapeyron
equation. The thermodynamic properties of the coexisting phases are evaluated
separately using Monte Carlo simulations in the isobaric semi-grand canonical
ensemble. We describe this Monte Carlo-based Clapeyron integration method,
including its basic principles, our extension to electron-ion plasmas, and our
numerical implementation. We illustrate its applicability and benefits with the
calculation of the melting curve of dense C/O plasmas under conditions relevant
for white dwarf cores and provide analytic fits to implement this new melting
curve in white dwarf models. While this work focuses on the liquid-solid phase
boundary of dense two-component plasmas, a wider range of physical systems and
phase boundaries are within the scope of the Clapeyron integration method,
which had until now only been applied to simple model systems of neutral
particles.
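A hedged sketch of the Clapeyron-integration skeleton: the coexistence pressure is advanced by numerically integrating $dP/dT=\Delta s/\Delta v$, where the per-particle entropy and volume differences between the coexisting phases would come from the isobaric semi-grand canonical Monte Carlo simulations (replaced by analytic stubs below):

```python
import numpy as np

def phase_properties(T, P, phase):
    """Stub for per-particle entropy s and volume v of a coexisting phase;
    in the actual method these come from Monte Carlo simulations in the
    isobaric semi-grand canonical ensemble. Toy functions, illustrative only."""
    if phase == "liquid":
        return 1.5 * np.log(T) + 0.1, 1.0 + 0.01 * T / P
    else:  # solid
        return 1.2 * np.log(T), 0.9 + 0.005 * T / P

def clapeyron_rhs(T, P):
    s_l, v_l = phase_properties(T, P, "liquid")
    s_s, v_s = phase_properties(T, P, "solid")
    return (s_l - s_s) / (v_l - v_s)        # dP/dT along coexistence

# Integrate the Clapeyron equation with classic RK4 from a known
# coexistence point (T0, P0) to trace the melting curve.
T, P, dT = 1.0, 1.0, 0.05
curve = [(T, P)]
for _ in range(40):
    k1 = clapeyron_rhs(T, P)
    k2 = clapeyron_rhs(T + dT / 2, P + dT * k1 / 2)
    k3 = clapeyron_rhs(T + dT / 2, P + dT * k2 / 2)
    k4 = clapeyron_rhs(T + dT, P + dT * k3)
    P += dT * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    T += dT
    curve.append((T, P))
print(curve[-1])   # endpoint of the traced coexistence curve
```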
|
We prove that if there is an elementary embedding from the universe to
itself, then there is a proper class of measurable successor cardinals.
|
In the present paper, we first give a detailed study of the pQCD corrections
to the leading-twist part of the BSR. Previous pQCD corrections to the
leading-twist part, derived under the conventional scale-setting approach up
to the ${\cal O}(\alpha_s^4)$ level, still show strong renormalization scale
dependence. The principle of maximum conformality (PMC) provides a systematic
way to eliminate conventional renormalization scale-setting ambiguity by
determining the accurate $\alpha_s$-running behavior of the process with the
help of the renormalization group equation. Our calculation confirms that the
PMC prediction satisfies standard renormalization group invariance, i.e., its
fixed-order prediction is scheme-and-scale independent. In the low $Q^2$
region, the effective momentum of the process is small, and to obtain a
reliable prediction, we adopt four low-energy $\alpha_s$ models for the analysis. Our
predictions show that even though the high-twist terms are generally power
suppressed in the high $Q^2$ region, they have sizable contributions in the
low and intermediate $Q^2$ domain. By using the more accurate scheme-and-scale
independent pQCD prediction, we present a novel fit of the non-perturbative
high-twist contributions by comparing with the JLab data.
|
We investigate a result on convergence of double sequences of numbers and how
it extends to measurable functions.
|
The problem of joint design of transmit waveforms and receive filters is
desirable in many application scenarios of multiple-input multiple-output
(MIMO) radar systems. In this paper, the joint design problem is investigated
under the signal-to-interference-plus-noise ratio (SINR) performance metric, in
which case the problem is formulated to maximize the SINR at the receiver side
subject to some practical transmit waveform constraints. A numerical algorithm
based on the manifold optimization method is proposed to solve the problem;
this method has been shown to be powerful and flexible in addressing nonconvex
constrained optimization problems in many engineering applications. The
proposed algorithm is able to efficiently solve the SINR maximization problem
with different waveform constraints under a unified framework. Numerical
experiments show that the proposed algorithm outperforms the existing
benchmarks in terms of computation efficiency and achieves comparable SINR
performance.
|
Here, we report successful single crystal growth of SnSb2Te4 using the
self-flux method. Unidirectional crystal growth is confirmed through the X-ray
diffraction (XRD) pattern taken on a mechanically cleaved crystal flake, while
the Rietveld-refined powder XRD (PXRD) pattern confirms the phase purity of
the grown crystal. Scanning Electron Microscopy (SEM) imaging and Energy
Dispersive X-ray analysis (EDAX) confirm the crystalline morphology and the
exact stoichiometry of the constituent elements. The vibrational modes
observed in Raman spectra also confirm the formation of the SnSb2Te4 phase. DC
resistivity measurements confirm the metallic character of the grown crystal.
Magneto-transport measurements up to 5 T show a nonsaturating, low
magneto-resistance percentage. A V-type cusp and Hikami-Larkin-Nagaoka (HLN)
fitting at low fields confirm the Weak Anti-localization (WAL) effect in
SnSb2Te4. Density Functional Theory (DFT) calculations show a topologically
non-trivial electronic band structure. This is the first report of an MR study
and WAL analysis of a SnSb2Te4 single crystal.
|
We consider controlling the false discovery rate for testing many time series
with an unknown cross-sectional correlation structure. Given a large number of
hypotheses, false and missing discoveries can plague an analysis. While many
procedures have been proposed to control false discovery, most of them either
assume independent hypotheses or lack statistical power. A problem of
particular interest is in financial asset pricing, where the goal is to
determine which ``factors" lead to excess returns out of a large number of
potential factors. Our contribution is two-fold. First, we show the consistency
of Fama and French's prominent method under multiple testing. Second, we
propose a novel method for false discovery control using double bootstrapping.
We achieve superior statistical power to existing methods and prove that the
false discovery rate is controlled. Simulations and a real data application
illustrate the efficacy of our method over existing methods.
|
In this paper, we study how to efficiently and reliably detect active devices
and estimate their channels in a multiple-input multiple-output (MIMO)
orthogonal frequency-division multiplexing (OFDM) based grant-free
non-orthogonal multiple access (NOMA) system to enable massive machine-type
communications (mMTC). First, by exploiting the correlation of the channel
frequency responses in narrow-band mMTC, we propose a block-wise linear channel
model. Specifically, the contiguous OFDM subcarriers in the narrow-band are
divided into several sub-blocks and a linear function with only two variables
(mean and slope) is used to approximate the frequency-selective channel in each
sub-block. This significantly reduces the number of variables to be determined
in channel estimation, and the sub-block number can be adjusted to reliably
compensate for the channel frequency-selectivity. Second, we formulate the joint
active device detection and channel estimation in the block-wise linear system
as a Bayesian inference problem. By exploiting the block-sparsity of the
channel matrix, we develop an efficient turbo message passing (Turbo-MP)
algorithm to resolve the Bayesian inference problem with near-linear
complexity. We further incorporate machine learning approaches into Turbo-MP to
learn unknown prior parameters. Numerical results demonstrate the superior
performance of the proposed algorithm over state-of-the-art algorithms.
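A hedged sketch of the block-wise linear channel parameterization described above: each sub-block of contiguous subcarriers is approximated by a mean and a slope, shrinking the unknowns from one coefficient per subcarrier to two per sub-block (all sizes below are illustrative):

```python
import numpy as np

def blockwise_linear_basis(n_subcarriers, n_blocks):
    """Basis B such that h ~ B @ theta, where theta stacks (mean, slope)
    for each sub-block of contiguous subcarriers."""
    B = np.zeros((n_subcarriers, 2 * n_blocks))
    edges = np.linspace(0, n_subcarriers, n_blocks + 1).astype(int)
    for b in range(n_blocks):
        lo, hi = edges[b], edges[b + 1]
        k = np.arange(lo, hi)
        B[lo:hi, 2 * b] = 1.0                            # mean of sub-block
        B[lo:hi, 2 * b + 1] = k - k.mean()               # centered slope term
    return B

# Toy check: fit a frequency-selective channel with 72 subcarriers, 6 blocks.
rng = np.random.default_rng(4)
n, blocks = 72, 6
taps = rng.standard_normal(4) + 1j * rng.standard_normal(4)
h = np.fft.fft(taps, n)                                  # true channel response
B = blockwise_linear_basis(n, blocks)
theta = np.linalg.lstsq(B, h, rcond=None)[0]             # 12 unknowns, not 72
print(np.linalg.norm(h - B @ theta) / np.linalg.norm(h)) # small residual
```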
|
The quest for nonmagnetic Weyl semimetals with high tunability of phase has
remained a demanding challenge. As the symmetry breaking control parameter, the
ferroelectric order can be steered to turn on/off the Weyl semimetals phase,
adjust the band structures around the Fermi level, and enlarge/shrink the
momentum separation of Weyl nodes which generate the Berry curvature as the
emergent magnetic field. Here, we report the realization of a ferroelectric
nonmagnetic Weyl semimetal based on an indium-doped Pb$_{1-x}$Sn$_x$Te alloy, where the
underlying inversion symmetry as well as mirror symmetry is broken with the
strength of ferroelectricity adjustable via tuning indium doping level and
Sn/Pb ratio. The transverse thermoelectric effect, i.e., the Nernst effect for
both out-of-plane and in-plane magnetic field geometries, is exploited as a Berry
curvature sensitive experimental probe to manifest the generation of Berry
curvature via the redistribution of Weyl nodes under magnetic fields. The
results demonstrate a clean non-magnetic Weyl semimetal coupled with highly
tunable ferroelectric order, providing an ideal platform for manipulating
Weyl fermions in a nonmagnetic system.
|
The processes of the coronal plasma heating and cooling were previously shown
to significantly affect the dynamics of slow magnetoacoustic (MA) waves,
causing amplification or attenuation, and also dispersion. However, the entropy
mode is also excited in such a thermodynamically active plasma and is affected
by the heating/cooling misbalance too. This mode is usually associated with the
phenomenon of coronal rain and the formation of prominences. Unlike in
adiabatic plasmas, the properties and evolution of slow MA and entropy waves
in continuously heated and cooling plasmas become mixed. Different regimes of the
misbalance lead to a variety of scenarios for the initial perturbation to
evolve. In order to describe properties and evolution of slow MA and entropy
waves in various regimes of the misbalance, we obtained an exact analytical
solution of the linear evolutionary equation. Using the characteristic
timescales and the obtained exact solution, we identified regimes with
qualitatively different behaviour of slow MA and entropy modes. For some of
those regimes, the spatio-temporal evolution of the initial Gaussian pulse is
shown. In particular, it is shown that slow MA modes may have a range of
non-propagating harmonics. In this regime, perturbations caused by slow MA and
entropy modes in a low-$\beta$ plasma would look identical in observations,
as non-propagating disturbances of the plasma density (and temperature) either
growing or decaying with time. We also showed that the partition of the initial
energy between slow MA and entropy modes depends on the properties of the
heating and cooling processes involved. The obtained exact analytical solution
could be further applied to the interpretation of observations and results of
numerical modelling of slow MA waves in the corona and the formation and
evolution of coronal rain.
|
We discuss how colour flows can be used to simplify the computation of matrix
elements, and in the context of parton shower Monte Carlos with accuracy beyond
leading-colour. We show that, by systematically employing them, the results for
tree-level matrix elements and their soft limits can be given in a closed form
that does not require any colour algebra. The colour flows that we define are a
natural generalization of those exploited by existing Monte Carlos; we
construct their representations in terms of different but conceptually
equivalent quantities, namely colour loops and dipole graphs, and examine how
these objects may help to extend the accuracy of Monte Carlos through the
inclusion of subleading-colour effects. We show how the results that we obtain
can be used, with trivial modifications, in the context of QCD+QED simulations,
since we are able to put the gluon and photon soft-radiation patterns on the
same footing. We also comment on some peculiar properties of gluon-only colour
flows, and their relationships with established results in the mathematics of
permutations.
|
State-of-the-art music recommender systems are based on collaborative
filtering, which builds upon learning similarities between users and songs from
the available listening data. These approaches inherently face the cold-start
problem, as they cannot recommend novel songs with no listening history.
Content-aware recommendation addresses this issue by incorporating content
information about the songs on top of collaborative filtering. However, methods
falling in this category rely on a shallow user/item interaction that
originates from a matrix factorization framework. In this work, we introduce
neural content-aware collaborative filtering, a unified framework which
alleviates these limits, and extends the recently introduced neural
collaborative filtering to its content-aware counterpart. We propose a
generative model which leverages deep learning for both extracting content
information from low-level acoustic features and for modeling the interaction
between users and songs embeddings. The deep content feature extractor can
either directly predict the item embedding, or serve as a regularization prior,
yielding two variants (strict and relaxed) of our model. Experimental results
show that the proposed method reaches state-of-the-art results for a cold-start
music recommendation task. We notably observe that exploiting deep neural
networks for learning refined user/item interactions outperforms approaches
using a simpler interaction model in a content-aware framework.
|
An automated treatment of iterated integrals based on letters induced by
real-valued quadratic forms and Kummer--Poincar\'e letters is presented. These
quantities emerge in analytic single and multi--scale Feynman diagram
calculations. To compactify representations, one wishes to apply general
properties of these quantities in computer-algebraic implementations. We
provide the reduction to basis representations, expansions, analytic
continuation and numerical evaluation of these quantities.
|
As shown in earlier work, skew-adjoint linear differential operators, mapping
efforts into flows, give rise to Dirac structures on a bounded spatial domain
by a proper definition of boundary variables. In the present paper this is
extended to pairs of linear differential operators defining a formally
skew-adjoint relation between flows and efforts. Furthermore it is shown how
the underlying repeated integration by parts operation can be streamlined by
the use of two-variable polynomial calculus. Dirac structures defined by
formally skew-adjoint operators and differential operator effort constraints
are treated within the same framework. Finally, it is sketched how the
approach can also be used for Lagrangian subspaces on bounded domains.
|
Recent observations made with Advanced LIGO and Advanced Virgo have initiated
the era of gravitational-wave astronomy. The number of events detected by these
"2nd Generation" (2G) ground-based observatories is partially limited by noise
arising from temperature-induced position fluctuations of the test mass mirror
surfaces used for probing spacetime dynamics. The design of next-generation
gravitational-wave observatories addresses this limitation by using
cryogenically cooled test masses; current approaches for continuously removing
heat (resulting from absorbed laser light) rely on heat extraction via
black-body radiation or conduction through suspension fibres. As a
complementing approach for extracting heat during observational runs, we
investigate cooling via helium gas impinging on the test mass in free molecular
flow. We establish a relation between cooling power and corresponding
displacement noise, based on analytical models, which we compare to numerical
simulations. Applying this theoretical framework with regard to the conceptual
design of the Einstein Telescope (ET), we find a cooling power of 10 mW at 18 K
for a gas pressure whose associated noise exceeds the ET design strain noise
goal by at most a factor of $\sim 3$ in the signal frequency band from 3 to 11
Hz. A cooling power of 100 mW at 18 K corresponds to a gas pressure whose
associated noise exceeds the ET design strain noise goal by at most a factor
of $\sim 11$ in the band from 1 to 28 Hz.
|
In 1955, Lehto showed that, for every measurable function $\psi$ on the unit
circle $\mathbb T,$ there is a function $f$ holomorphic in the unit disc $\mathbb
D,$ having $\psi$ as radial limit a.e. on $\mathbb T.$ We consider an analogous
boundary value problem, where the unit disc is replaced by a Stein domain on a
complex manifold and radial approach to a boundary point $p$ is replaced by
(asymptotically) total approach to $p.$
|
How difficult are interactive theorem provers to use? We respond by reviewing
the formalization of Hilbert's tenth problem in Isabelle/HOL carried out by an
undergraduate research group at Jacobs University Bremen. We argue that, as
demonstrated by our example, proof assistants make it feasible for beginners to
formalize mathematics. With the aim to make the field more accessible, we also
survey hurdles that arise when learning an interactive theorem prover. Broadly,
we advocate for an increased adoption of interactive theorem provers in
mathematical research and curricula.
|
The ALS-U light source will implement on-axis single-train swap-out injection
employing an accumulator between the booster and storage rings. The accumulator
ring design is a twelve period triple-bend achromat that will be installed
along the inner circumference of the storage-ring tunnel. A non-conventional
injection scheme will be utilized for top-off off-axis injection from the
booster into the accumulator ring meant to accommodate a large $\sim 300$~nm
emittance beam into a vacuum-chamber with a limiting horizontal aperture radius
as small as $8$ mm. The scheme incorporates three dipole kickers distributed
over three sectors, with two kickers perturbing the stored beam and the third
affecting both the stored and the injected beam trajectories. This paper
describes this ``3DK'' injection scheme and how it fits the accumulator ring's
particular requirements. We describe the design and optimization process, and
how we evaluated its fitness as a solution for booster-to-accumulator ring
injection.
|
Quantum channels, which break entanglement, incompatibility, or nonlocality,
are not useful for entanglement-based, one-sided device-independent, or
device-independent quantum information processing, respectively. Here, we show
that such breaking channels are related to certain temporal quantum
correlations, i.e., temporal separability, channel unsteerability, temporal
unsteerability, and macrorealism. More specifically, we first define the
steerability-breaking channel, which is conceptually similar to the
entanglement and nonlocality-breaking channels and prove that it is identical
to the incompatibility-breaking channel. Similar to the hierarchy relations of
the temporal and spatial quantum correlations, the hierarchy of non-breaking
channels is discussed. We then introduce the concept of the channels which
break temporal correlations, explain how they are related to the standard
breaking channels, and prove the following results: (1) A certain measure of
temporal nonseparability can be used to quantify a non-entanglement-breaking
channel in the sense that the measure is a memory monotone under the framework
of the resource theory of quantum memory. (2) A non-steerability-breaking
channel can be certified with channel steering because the
steerability-breaking channel is equivalent to the incompatibility-breaking
channel. (3) The temporal steerability and non-macrorealism can, respectively,
distinguish the steerability-breaking and the nonlocality-breaking unital
channel from their corresponding non-breaking channels. Finally, a
two-dimensional depolarizing channel is experimentally implemented as a
proof-of-principle example to compare the temporal quantum correlations with
non-breaking channels.
|
We obtain exact densities of contractible and non-contractible loops in the
O(1) model on a strip of the square lattice rolled into an infinite cylinder of
finite even circumference $L$. They are also equal to the densities of critical
percolation clusters on a forty-five-degree-rotated square lattice rolled into a
cylinder, which do not or do wrap around the cylinder respectively. The results
are presented as explicit rational functions of $L$ taking rational values for
any even $L$. Their asymptotic expansions in the large $L$ limit have
irrational coefficients reproducing the earlier results in the leading orders.
The solution is based on a mapping to the six-vertex model and the use of the
technique of Baxter's T-Q equation.
|
To date, close to fifty presumed black hole binary mergers have been observed
by the LIGO and Virgo detectors. The analyses have been done under the assumption
that these objects are black holes by limiting the spin prior to the Kerr
bound. However, the above assumption is not valid for superspinars, which have
the Kerr geometry but rotate beyond the Kerr bound. In this study, we
investigate whether and how the limited spin prior range causes a bias in
parameter estimation for superspinars if they are detected. To this end, we
estimate binary parameters of the simulated inspiral signals of the
gravitational waves of compact binaries by assuming that at least one component
of them is a superspinar. We have found that when the primary is a superspinar,
both mass and spin parameters are biased in parameter estimation due to the
limited spin prior range. In this case, the extended prior range is strongly
favored compared to the limited one. On the other hand, when the primary is a
black hole, we do not see much bias in parameter estimation due to the limited
spin prior range, even though the secondary is a superspinar. We also apply the
analysis to black hole binary merger events GW170608 and GW190814, which have a
long and loud inspiral signal. We do not see any preference of superspinars
from the model selection for both events. We conclude that the extension of the
spin prior range is necessary for accurate parameter estimation if highly
spinning primary objects are found, while it is difficult to identify
superspinars if they are only the secondary objects. Nevertheless, the bias in
parameter estimation of spin for the limited spin prior range can be a clue to
the existence of superspinars.
|
Image classification models deployed in the real world may receive inputs
outside the intended data distribution. For critical applications such as
clinical decision making, it is important that a model can detect such
out-of-distribution (OOD) inputs and express its uncertainty. In this work, we
assess the capability of various state-of-the-art approaches for
confidence-based OOD detection through a comparative study and in-depth
analysis. First, we leverage a computer vision benchmark to reproduce and
compare multiple OOD detection methods. We then evaluate their capabilities on
the challenging task of disease classification using chest X-rays. Our study
shows that high performance in a computer vision task does not directly
translate to accuracy in a medical imaging task. We analyse factors that affect
performance of the methods between the two tasks. Our results provide useful
insights for developing the next generation of OOD detection methods.
|
Referring to a recent experiment, we theoretically study the process of a
two-channel decay of the diatomic silver anion (Ag$_2^-$), namely the
spontaneous electron ejection giving Ag$_2$ + e$^-$ and the dissociation
leading to Ag$^-$ + Ag. The ground state potential energy curves of the
neutral and negatively charged silver dimers were calculated using proper
pseudo-potentials and atomic basis sets. We also estimated the non-adiabatic
electronic coupling between the ground state of Ag$_2^-$ and the ground state
of Ag$_2$ + e$^-$, which in turn allowed us to estimate the minimal and mean
values of the electron autodetachment lifetimes. The relative energies of the
rovibrational levels allow the description of the spontaneous electron emission
process, while the description of the rotational dissociation is treated with
the quantum dynamics method as well as time-independent methods. The results of
our calculations are verified by comparison with experimental data.
|
To manage the COVID-19 epidemic effectively, decision-makers in public health
need accurate forecasts of case numbers. A potential near real-time predictor
of future case numbers is human mobility; however, research on the predictive
power of mobility is lacking. To fill this gap, we introduce a novel model for
epidemic forecasting based on mobility data, called mobility marked Hawkes
model. The proposed model consists of three components: (1) A Hawkes process
captures the transmission dynamics of infectious diseases. (2) A mark modulates
the rate of infections, thus accounting for how the reproduction number R
varies across space and time. The mark is modeled using a regularized Poisson
regression based on mobility covariates. (3) A correction procedure
incorporates new cases seeded by people traveling between regions. Our model
was evaluated on the COVID-19 epidemic in Switzerland. Specifically, we used
mobility data from February through April 2020, amounting to approximately 1.5
billion trips. Trip counts were derived from large-scale telecommunication
data, i.e., cell phone pings from the Swisscom network, the largest
telecommunication provider in Switzerland. We compared our model against
various state-of-the-art baselines in terms of out-of-sample root mean squared
error. We found that our model outperformed the baselines by 15.52%. The
improvement was consistently achieved across different forecast horizons
between 5 and 21 days. In addition, we assessed the predictive power of
conventional point of interest data, confirming that telecommunication data is
superior. To the best of our knowledge, our work is the first to predict the
spread of COVID-19 from telecommunication data. Altogether, our work
contributes to previous research by developing a scalable early warning system
for decision-makers in public health tasked with controlling the spread of
infectious diseases.
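To make the three-component structure concrete, here is a minimal sketch of how a mobility-marked Hawkes intensity could be evaluated; the exponential kernel, the form of the mark, and all parameter values are illustrative assumptions, not the fitted model of the paper.

```python
import numpy as np

def intensity(t, event_times, marks, mu=0.1, beta=0.2):
    """Conditional intensity of a marked Hawkes process:
    lambda(t) = mu + sum_{t_i < t} m_i * beta * exp(-beta * (t - t_i)).
    The mark m_i (hypothetical here; in the paper derived from mobility
    covariates via regularized Poisson regression) rescales each event."""
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(marks[past] * beta * np.exp(-beta * dt))

times = np.array([1.0, 2.5, 4.0])    # past case times, in days
marks = np.array([1.2, 0.8, 1.5])    # hypothetical mobility-derived marks
print(intensity(5.0, times, marks))  # expected case rate at day 5
```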
|
Dropout has been demonstrated as a simple and effective module to not only
regularize the training process of deep neural networks, but also provide the
uncertainty estimation for prediction. However, the quality of uncertainty
estimation is highly dependent on the dropout probabilities. For simplicity,
most current models use the same dropout distribution across all data samples.
Sample-dependent dropout, despite its potential gains in the flexibility of
modeling uncertainty, is less explored, as it often encounters scalability
issues or involves non-trivial model changes.
In this paper, we propose contextual dropout with an efficient structural
design as a simple and scalable sample-dependent dropout module, which can be
applied to a wide range of models at the expense of only slightly increased
memory and computational cost. We learn the dropout probabilities with a
variational objective, compatible with both Bernoulli dropout and Gaussian
dropout. We apply the contextual dropout module to various models with
applications to image classification and visual question answering and
demonstrate the scalability of the method with large-scale datasets, such as
ImageNet and VQA 2.0. Our experimental results show that the proposed method
outperforms baseline methods in terms of both accuracy and quality of
uncertainty estimation.
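As a rough illustration of what "sample-dependent" means here, the toy PyTorch module below derives per-sample, per-feature dropout probabilities from the input itself. The parameterization and the variational training objective of the paper are not reproduced, and the hard Bernoulli sample below is not differentiable, so actual training would use a relaxation or a score-function gradient.

```python
import torch
import torch.nn as nn

class ToyContextualDropout(nn.Module):
    """Sample-dependent dropout: a small gate network maps each input
    to its own keep-probabilities, one per feature."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)  # per-feature logits from the sample

    def forward(self, x):
        p_keep = torch.sigmoid(self.gate(x))          # depends on x itself
        if self.training:
            mask = torch.bernoulli(p_keep)            # Bernoulli dropout mask
            return x * mask / p_keep.clamp_min(1e-6)  # inverted-dropout scaling
        return x                                      # expectation at test time

layer = ToyContextualDropout(16).train()
print(layer(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```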
|
The 2019 coronavirus disease (COVID-19) became a worldwide pandemic for which
there is currently no effective antiviral drug, only symptomatic
therapy. Flux balance analysis is an efficient method to analyze metabolic
networks. It allows optimizing for a metabolic function and thus e.g.,
predicting the growth rate of a specific cell or the production rate of a
metabolite of interest. Here, flux balance analysis was applied to human lung
cells infected with severe acute respiratory syndrome coronavirus 2
(SARS-CoV-2) to reposition metabolic drugs and drug combinations against the
replication of the SARS-CoV-2 virus within the host tissue. Making use of
expression data sets of infected lung tissue, genome-scale COVID-19-specific
metabolic models were reconstructed. Then host-specific essential genes and
gene-pairs were determined through in-silico knockouts that permit reducing the
viral biomass production without affecting the host biomass. Key pathways that
are associated with COVID-19 severity in lung tissue are related to oxidative
stress, as well as ferroptosis, sphingolipid metabolism, cysteine metabolism,
and fat digestion. By in-silico screening of FDA approved drugs on the putative
disease-specific essential genes and gene-pairs, 45 drugs and 99 drug
combinations were predicted as promising candidates for COVID-19 focused drug
repositioning (https://github.com/sysbiolux/DCcov). Among the 45 drug
candidates are six antiviral drugs and seven drugs that are already being
tested in clinical trials against COVID-19. Other drugs like gemcitabine,
rosuvastatin and acetylcysteine, and drug combinations like
azathioprine-pemetrexed might offer new chances for treating COVID-19.
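Flux balance analysis itself reduces to a linear program: maximize a biomass objective subject to steady-state stoichiometry S v = 0 and flux bounds. A self-contained toy instance (a made-up network, not one of the reconstructed COVID-19 models) could look like:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: 3 metabolites x 4 reactions; the last column is biomass,
# which consumes one unit of each metabolite.
S = np.array([[1, -1,  0, -1],
              [0,  1, -1, -1],
              [0,  0,  1, -1]], dtype=float)
bounds = [(0, 10)] * 4          # flux bounds for every reaction
c = np.zeros(4); c[-1] = -1     # linprog minimizes, so negate biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
print("optimal biomass flux:", -res.fun)  # 10/3 for this toy network
# An in-silico knockout sets a reaction's bounds to (0, 0) and re-solves;
# essential genes are those whose knockout drives the objective to zero.
```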
|
The ensemble covariance matrix of a wide sense stationary signal spatially
sampled by a full linear array is positive semi-definite and Toeplitz. However,
the direct augmented covariance matrix of an augmentable sparse array is
Toeplitz but not positive semi-definite, resulting in negative eigenvalues that
pose inherent challenges in its applications, including model order estimation
and source localization. The covariance matrix for augmentable sparse arrays
based only on the positive eigenvalues is robust, but it is unobtainable when
all noise eigenvalues of the direct augmented matrix are negative, which is a
possible case. To address this problem, we propose a robust covariance matrix
for augmentable sparse arrays that leverages both positive and negative noise
eigenvalues. The proposed covariance matrix estimate can be used in conjunction
with subspace based algorithms and adaptive beamformers to yield accurate
signal direction estimates.
|
Application of Machine Learning algorithms to the medical domain is an
emerging trend that helps to advance medical knowledge. At the same time, there
is a significant lack of explainable studies that promote informed,
transparent, and interpretable use of Machine Learning algorithms. In this
paper, we present an explainable multi-class classification of Covid-19 mental
health data, in which we aim to find the potential factors that influence
personal mental health during the Covid-19 pandemic. We found that
Random Forest (RF) and Gradient Boosting (GB) have scored the highest accuracy
of 68.08% and 68.19% respectively, with LIME prediction accuracy 65.5% for RF
and 61.8% for GB. We then compare a Post-hoc system (Local Interpretable
Model-Agnostic Explanations, or LIME) and an Ante-hoc system (Gini Importance)
in their ability to explain the obtained Machine Learning results. To the best
of the authors' knowledge, our study is the first explainable Machine Learning
study of mental health data collected during the Covid-19 pandemic.
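A minimal sketch of the Ante-hoc versus Post-hoc comparison on generic tabular data (the Covid-19 mental health dataset itself is not reproduced here; iris is used only as a stand-in):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(random_state=0).fit(X, y)

# Ante-hoc explanation: Gini importance is built into the trained model.
print("Gini importances:", rf.feature_importances_)

# Post-hoc explanation: LIME fits a local surrogate around one instance.
explainer = LimeTabularExplainer(X, mode="classification")
exp = explainer.explain_instance(X[0], rf.predict_proba, num_features=4)
print(exp.as_list())  # local feature contributions for this prediction
```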
|
We consider the problem of mapping a logical quantum circuit onto a given
hardware with limited two-qubit connectivity. We model this problem as an
integer linear program, using a network flow formulation with binary variables
that includes the initial allocation of qubits and their routing. We consider
several cost functions: an approximation of the fidelity of the circuit, its
total depth, and a measure of cross-talk, all of which can be incorporated in
the model. Numerical experiments on synthetic data and different hardware
topologies indicate that the error rate and depth can be optimized
simultaneously without significant loss. We test our algorithm on a large
number of quantum volume circuits, optimizing for error rate and depth; our
algorithm significantly reduces the number of CNOTs compared to Qiskit's
default transpiler SABRE, and produces circuits that, when executed on
hardware, exhibit higher fidelity.
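As a stripped-down illustration of the ILP viewpoint — the initial allocation only, with no routing or flow variables — one can assign logical to physical qubits with binary variables under a toy cost; the cost matrix below is hypothetical, and PuLP is used merely as a convenient solver interface:

```python
import pulp

n = 3                                  # toy size: 3 logical, 3 physical qubits
err = [[0.0, 1.0, 2.0],                # hypothetical cost of placing logical i
       [1.0, 0.0, 1.0],                # on physical j (e.g., from error rates)
       [2.0, 1.0, 0.0]]

prob = pulp.LpProblem("qubit_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(n), range(n)), cat="Binary")
prob += pulp.lpSum(err[i][j] * x[i][j] for i in range(n) for j in range(n))
for i in range(n):                     # each logical qubit gets one slot
    prob += pulp.lpSum(x[i][j] for j in range(n)) == 1
for j in range(n):                     # each physical qubit hosts one logical
    prob += pulp.lpSum(x[i][j] for i in range(n)) == 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(i, j) for i in range(n) for j in range(n) if x[i][j].value() > 0.5])
```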
|
Solar active region 12673 produced two successive X-class flares (X2.2 and
X9.3) approximately 3 hours apart in September 2017. The X9.3 flare was the
largest recorded in Solar Cycle 24. In this study we perform a
data-constrained magnetohydrodynamic simulation taking into account the
observed photospheric magnetic field to reveal the initiation and dynamics of
the X2.2 and X9.3 flares. According to our simulation, the X2.2 flare is first
triggered by magnetic reconnection at a local site where, at the photosphere, the
negative polarity intrudes into the opposite-polarity region. This magnetic
reconnection expels the innermost field lines upward beneath which the magnetic
flux rope is formed through continuous reconnection with external twisted field
lines. Continuous magnetic reconnection after the X2.2 flare enhances the
magnetic flux rope, which is lifted up and eventually erupts via the torus
instability. This gives rise to the X9.3 flare.
|
Interconnects are a major discriminator for superconducting digital
technology, enabling energy efficient data transfer and high-bandwidth
heterogeneous integration. We report a method to simulate propagation of
picosecond pulses in superconducting passive transmission lines (PTLs). A
frequency-domain propagator model obtained from the Ansys High Frequency
Structure Simulator (HFSS) field solver is incorporated in a Cadence Spectre
circuit model, so that the particular PTL geometry can be simulated in the
time-domain. The Mattis-Bardeen complex conductivity of the superconductor is
encoded in the HFSS field solver as a complex-conductivity insulator.
Experimental and simulation results show that 20 Ohm Nb microstrip PTLs with
1 um width can support propagation of a single-flux-quantum pulse up to 7 mm
and a double-flux-quantum pulse up to 28 mm.
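The propagator idea can be mimicked in a few lines: Fourier-transform the input pulse, multiply by a transfer function H(f) = exp(-(alpha(f) + i*2*pi*f/v)*L), and transform back. The attenuation law and phase velocity below are placeholders, not the Mattis-Bardeen model extracted from HFSS.

```python
import numpy as np

fs = 2e12                                      # sample rate: 2 THz
t = np.arange(4096) / fs
pulse = np.exp(-((t - 5e-10) / 2e-12) ** 2)    # picosecond Gaussian input

f = np.fft.rfftfreq(t.size, 1 / fs)
L, v = 7e-3, 1.0e8                             # 7 mm line, assumed velocity
alpha = 1e-13 * f                              # placeholder attenuation law
H = np.exp(-(alpha + 2j * np.pi * f / v) * L)  # frequency-domain propagator

out = np.fft.irfft(np.fft.rfft(pulse) * H, n=t.size)
print("peak in/out:", pulse.max(), out.max())  # attenuated, delayed pulse
```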
|
We present and analyze optical photometry and high resolution SALT spectra of
the symbiotic recurrent nova V3890 Sgr at quiescence. The orbital period,
P=747.6 days has been derived from both photometric and spectroscopic data. Our
double-line spectroscopic orbits indicate that the mass ratio is
q=M_g/M_WD=0.78+/-0.05, and that the component masses are M_WD=1.35+/-0.13
Msun, and M_g=1.05+/-0.11 Msun. The orbital inclination is approximately 67-69
degrees. The red giant is filling (or nearly filling) its Roche lobe, and the
distance set by its Roche lobe radius, d=9 kpc, is consistent with that
resulting from the giant pulsation period. The outburst magnitude of V3890 Sgr
is then very similar to those of RNe in the Large Magellanic Cloud. V3890 Sgr
shows remarkable photometric and spectroscopic activity between the nova
eruptions with timescales similar to those observed in the symbiotic recurrent
novae T CrB and RS Oph and Z And-type symbiotic systems. The active source has
a double-temperature structure which we have associated with the presence of an
accretion disc. The activity would then be caused by changes in the accretion
rate. We also provide evidence that V3890 Sgr contains a CO WD accreting at a
high, a few 1e-8 - 1e-7 Msun/yr, rate. The WD is growing in mass, and should
give rise to a Type Ia supernova within about 1,000,000 yrs - the expected
lifetime of the red giant.
|
Real-world graphs are massive in size and require a huge amount of space to
store. Graph compression allows us to compress a graph so that fewer bits per
link are needed to store it. Among the many techniques to compress a graph, a
typical approach is to find clique-like caveman or traditional communities in a
graph and encode those cliques to compress it. An alternative approach is to
consider graphs as a collection of
hubs connecting spokes and to exploit this to arrange the nodes such that the
resulting adjacency matrix of the graph can be compressed more efficiently. We
perform an empirical comparison of these two approaches and show that both
methods can yield good results under favorable conditions. We perform our
experiments on ten real-world graphs and define two cost functions to present
our findings.
|
The lifetimes of localized nonlinear modes in both the
$\beta$-Fermi-Pasta-Ulam-Tsingou ($\beta$-FPUT) chain and a cubic $\beta$-FPUT
lattice are studied as functions of perturbation amplitude, and by extension,
the relative strength of the nonlinear interactions compared to the linear
part. We first recover the well known result that localized nonlinear
excitations (LNEs) produced by a bond squeeze can be reduced to an approximate
two-frequency solution and then show that the nonlinear term in the potential
can lead to the production of secondary frequencies within the phonon band.
This can affect the stability and lifetime of the LNE by facilitating
interactions between the LNE and a low energy acoustic background which can be
regarded as "noise" in the system. In the one dimensional FPUT chain, the LNE
is stabilized by low energy acoustic emissions at early times; in some cases
allowing for lifetimes several orders of magnitude larger than the oscillation
period. The longest lived LNEs are found to satisfy the parameter dependence
$\mathcal{A}\sqrt{\beta}\approx1.1$ where $\beta$ is the relative nonlinear
strength and $\mathcal{A}$ is the displacement amplitude of the center
particles in the LNE. In the cubic FPUT lattice, the LNE lifetime $T$ decreases
rapidly with increasing amplitude $\mathcal{A}$ and is well described by the
double log relationship $\log_{10}\log_{10}(T)\approx
-(0.15\pm0.01)\mathcal{A}\sqrt{\beta}+(0.62\pm0.02)$.
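For concreteness, the quoted double-log fit can be inverted to give lifetimes directly (central fit values only, ignoring the stated uncertainties):

```python
import numpy as np

def lne_lifetime(A, beta, a=0.15, b=0.62):
    """Invert log10(log10(T)) = -a * A * sqrt(beta) + b for T."""
    return 10.0 ** (10.0 ** (-a * A * np.sqrt(beta) + b))

print(lne_lifetime(1.0, 1.0))  # lifetime at A*sqrt(beta) = 1
```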
|
Link prediction is a fundamental challenge in network science. Among various
methods, similarity-based algorithms are popular for their simplicity,
interpretability, high efficiency and good performance. In this paper, we show
that the most elementary local similarity index Common Neighbor (CN) can be
linearly decomposed by eigenvectors of the adjacency matrix of the target
network, with each eigenvector's contribution being proportional to the square
of the corresponding eigenvalue. Since in many real networks there is a huge
gap between the largest and the second-largest eigenvalues, the CN index is
dominated by the leading eigenvector, and much useful information contained in
the other eigenvectors may be overlooked. Accordingly, we propose a
parameter-free algorithm that makes the contributions of the leading and the
secondary eigenvectors equal. Extensive experiments on real networks
demonstrate that the prediction performance of the proposed algorithm is
remarkably better than that of well-performing local similarity indices in the
literature. A further proposed algorithm that can adjust the contribution of
the leading eigenvector shows superiority over state-of-the-art algorithms with
tunable parameters, owing to its competitive accuracy and lower computational
complexity.
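The decomposition behind this observation is just CN = A^2 = sum_i lambda_i^2 v_i v_i^T. The sketch below verifies this and illustrates one plausible reading of the reweighting (equalizing the first two contributions); it is not necessarily the authors' exact algorithm.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # toy undirected network

lam, V = np.linalg.eigh(A)
order = np.argsort(-np.abs(lam))           # leading eigenvector first
w = lam[order] ** 2                        # CN weight: lambda_i^2
cn = (V[:, order] * w) @ V[:, order].T     # equals A @ A (common neighbors)

w_adj = w.copy()
w_adj[1] = w_adj[0]                        # equalize the secondary weight
score = (V[:, order] * w_adj) @ V[:, order].T
print(np.allclose(cn, A @ A))              # True
```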
|
The great influence of Bitcoin has promoted the rapid development of
blockchain-based digital currencies, especially the altcoins, since 2013.
However, most altcoins share similar source codes, resulting in concerns about
code innovations. In this paper, an empirical study on existing altcoins is
carried out to offer a thorough understanding of various aspects associated
with altcoin innovations. Firstly, we construct the dataset of altcoins,
including source code repositories, GitHub fork relations, and market
capitalizations (cap). Then, we analyze the altcoin innovations from the
perspective of source code similarities. The results demonstrate that more than
85% of altcoin repositories present high code similarities. Next, a temporal
clustering algorithm is proposed to mine the inheritance relationship among
various altcoins. The family pedigrees of altcoins are constructed, in which
altcoins present evolutionary features similar to those in biology, such as a
power law in family size, variety in family evolution, etc. Finally, we
investigate the
correlation between code innovations and market capitalization. Although we
fail to predict the price of altcoins based on their code similarities, the
results show that altcoins with higher innovations reflect better market
prospects.
|
Reinforcement Learning (RL) is a semi-supervised learning paradigm in which an
agent learns by interacting with an environment. Deep learning in combination
with RL, called Deep Reinforcement Learning (deep RL), provides an efficient
method to learn how to interact with the environment. Deep RL has gained
tremendous success in gaming - such as AlphaGo - but its potential has rarely
been explored for challenging tasks like Speech Emotion Recognition (SER).
Using deep RL for SER can potentially improve the performance of an automated
call centre agent by dynamically learning emotion-aware responses to customer
queries. While the policy employed by the RL agent plays a major role in action
selection, there is no current RL policy tailored for SER. In addition, an
extended learning period is a general challenge for deep RL, which can impact
the speed of learning for SER. Therefore, in this paper, we introduce a novel
policy - the "Zeta policy" - which is tailored for SER, and apply pre-training
in deep RL to achieve a faster learning rate. Cross-dataset pre-training was
also studied to assess the feasibility of pre-training the RL agent with a
similar dataset in a scenario where no real environmental data is available.
The IEMOCAP and SAVEE datasets were used for the evaluation, with the task
being to recognize four emotions (happy, sad, angry and neutral) in the
utterances provided. Experimental results show that the proposed "Zeta policy"
performs better than existing policies. The results also support that
pre-training can reduce the training time by reducing the warm-up period, and
that the approach is robust to the cross-corpus scenario.
|
Decentralized cryptocurrency exchanges offer compelling security benefits
over centralized exchanges: users control their funds and avoid the risk of an
exchange hack or malicious operator. However, because user assets are fully
accessible by a secret key, decentralized exchanges pose significant internal
security risks for trading firms and automated trading systems, where a
compromised system can result in total loss of funds. Centralized exchanges
mitigate this risk through API key based security policies that allow
professional users to give individual traders or automated systems specific and
customizable access rights such as trading or withdrawal limits. Such policies,
however, are not compatible with decentralized exchanges, where all exchange
operations require a signature generated by the owner's secret key. This paper
introduces a protocol based upon multiparty computation that allows for the
creation of API keys and security policies that can be applied to any existing
decentralized exchange. Our protocol works with both ECDSA and EdDSA signature
schemes and prioritizes efficient computation and communication. We have
deployed this protocol on Nash exchange, as well as around several
Ethereum-based automated market maker smart contracts, where it secures the
trading accounts and wallets of thousands of users.
|
In-phase synchronization is a stable state of identical Kuramoto oscillators
coupled on a network with identical positive connections, regardless of network
topology. However, this fact does not mean that the networks always synchronize
in-phase because other attractors besides the stable state may exist. The
critical connectivity $\mu_{\mathrm{c}}$ is defined as the network connectivity
above which only the in-phase state is stable for all the networks. In other
words, below $\mu_{\mathrm{c}}$, one can find at least one network which has a
stable state besides the in-phase sync. The best known bounds on this value
so far are $0.6828\cdots\leq\mu_{\mathrm{c}}\leq0.75$. In this paper, focusing
on the twisted states of the circulant networks, we provide a method to
systematically analyze the linear stability of all possible twisted states on
all possible circulant networks. This method using integer programming enables
us to find the densest circulant network having a stable twisted state besides
the in-phase sync, which improves the lower bound on
$\mu_{\mathrm{c}}$ from $0.6828\cdots$ to $0.6838\cdots$. We confirm the
validity of the theory by numerical simulations of the networks not converging
to the in-phase state.
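Linear stability of a q-twisted state theta_j = 2*pi*q*j/n on a circulant network can be checked numerically in a few lines (a generic sketch; the paper's integer-programming machinery for searching over all circulant networks is not reproduced):

```python
import numpy as np

def twisted_state_stable(offsets, n, q, tol=1e-9):
    """offsets: circulant connection set; node j links to j +/- k (mod n)."""
    A = np.zeros((n, n))
    for j in range(n):
        for k in offsets:
            A[j, (j + k) % n] = A[(j + k) % n, j] = 1.0
    theta = 2 * np.pi * q * np.arange(n) / n
    # Jacobian of dtheta_j/dt = sum_k A_jk sin(theta_k - theta_j)
    J = A * np.cos(theta[None, :] - theta[:, None])
    J -= np.diag(J.sum(axis=1))
    ev = np.linalg.eigvalsh(J)          # symmetric, so real spectrum
    return np.all(ev <= tol)            # stable up to the rotational zero mode

print(twisted_state_stable({1, 2}, n=10, q=1))  # toy circulant network
```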
|
We study two-photon scattering in a mixed cavity optomechanical system, which
is composed of a single-mode cavity field coupled to a single-mode mechanical
oscillation via both the first-order and quadratic optomechanical interactions.
By solving the scattering problem within the Wigner-Weisskopf framework, we
obtain the analytical scattering state and find four physical processes
associated with the two-photon scattering in this system. We calculate the
two-photon scattering spectrum and find that two-photon frequency
anticorrelation can be induced in the scattering process. We also establish the
relationship between the parameters of the mixed cavity optomechanical system
and the characteristics of the two-photon scattering spectrum. This work not
only provides a scattering means to create correlated photon pairs, but also
presents a spectrometric method to characterize the optomechanical systems.
|
Survey scientists increasingly face the problem of high-dimensionality in
their research as digitization makes it much easier to construct
high-dimensional (or "big") data sets through tools such as online surveys and
mobile applications. Machine learning methods are able to handle such data, and
they have been successfully applied to solve \emph{predictive} problems.
However, in many situations, survey statisticians want to learn about
\emph{causal} relationships to draw conclusions and be able to transfer the
findings of one survey to another. Standard machine learning methods provide
biased estimates of such relationships. We introduce into survey statistics the
double machine learning approach, which gives approximately unbiased estimators
of causal parameters, and show how it can be used to analyze survey nonresponse
in a high-dimensional panel setting.
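A compact partialling-out version of double machine learning with cross-fitting, on synthetic data (a generic sketch: in the survey application, D could be an indicator related to nonresponse and X the high-dimensional covariates):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, theta = 2000, 20, 0.5
X = rng.normal(size=(n, p))
D = X[:, 0] + rng.normal(size=n)              # treatment depends on X
Y = theta * D + X[:, 1] + rng.normal(size=n)  # outcome with causal effect 0.5

res_y, res_d = np.zeros(n), np.zeros(n)
for train, test in KFold(5, shuffle=True, random_state=0).split(X):
    m_y = RandomForestRegressor(n_estimators=100).fit(X[train], Y[train])
    m_d = RandomForestRegressor(n_estimators=100).fit(X[train], D[train])
    res_y[test] = Y[test] - m_y.predict(X[test])  # cross-fitted residuals
    res_d[test] = D[test] - m_d.predict(X[test])

print("theta_hat:", res_d @ res_y / (res_d @ res_d))  # approx. 0.5
```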
|
Scanning tunneling microscope lithography can be used to create
nanoelectronic devices in which dopant atoms are precisely positioned in a Si
lattice within $\sim$1 nm of a target position. This exquisite precision is
promising for realizing various quantum technologies. However, a potentially
impactful form of disorder is due to incorporation kinetics, in which the
number of P atoms that incorporate into a single lithographic window is
manifestly uncertain. We present experimental results indicating that the
likelihood of incorporating into an ideally written three-dimer single-donor
window is $63 \pm 10\%$ for room-temperature dosing, and corroborate these
results with a model for the incorporation kinetics. Nevertheless, further
analysis of this model suggests conditions that might raise the incorporation
rate to near-deterministic levels. We simulate bias spectroscopy on a chain of
comparable dimensions to the array in our yield study, indicating that such an
experiment may help confirm the inferred incorporation rate.
|
We compute the phase diagram of the simplest holographic bottom-up model of
conformal interfaces. The model consists of a thin domain wall between
three-dimensional Anti-de Sitter (AdS) vacua, anchored on a boundary circle. We
distinguish five phases depending on the existence of a black hole, the
intersection of its horizon with the wall, and the fate of inertial observers.
We show that, like the Hawking-Page phase transition, the capture of the wall
by the horizon is also a first order transition and comment on its field-theory
interpretation. The static solutions of the domain-wall equations include
gravitational avatars of the Faraday cage, black holes with negative specific
heat, and an intriguing phenomenon of suspended vacuum bubbles corresponding to
an exotic interface/anti-interface fusion. Part of our analysis overlaps with
recent work by Simidzija and Van Raamsdonk but the interpretation is different.
|
We study the extent to which it is possible to approximate the optimal value
of a Unique Games instance in Fixed-Point Logic with Counting (FPC). We prove
two new FPC-inexpressibility results for Unique Games: the existence of a (1/2,
1/3 + $\delta$)-inapproximability gap, and inapproximability to within any
constant factor. Previous recent work has established similar
FPC-inapproximability results for a small handful of other problems. Our
construction builds upon some of these ideas, but contains a novel technique.
While most FPC-inexpressibility results are based on variants of the
CFI-construction, ours is significantly different.
|
We report some recent results on analytic pseudodifferential operators, also
known as Wick operators. An important tool in our study is the Bargmann
transform which provides a coupling between the classical (real) and analytic
pseudodifferential calculus. Since the Bargmann transform of Hermite functions
gives rise to formal power series in the complex domain, the results are
formulated in terms of the Bargmann images of Pilipovi\'c spaces.
|
We study the shape of the normalized stable L\'{e}vy tree $\mathcal{T}$ near
its root. We show that, when zooming in at the root at the proper speed with a
scaling depending on the index of stability, we get the unnormalized Kesten
tree. In particular the limit is described by a tree-valued Poisson point
process which does not depend on the initial normalization. We apply this to
study the asymptotic behavior of additive functionals of the form
\[\mathbf{Z}_{\alpha,\beta}=\int_{\mathcal{T}} \mu(\mathrm{d} x) \int_0^{H(x)}
\sigma_{r,x}^\alpha \mathfrak{h}_{r,x}^\beta\,\mathrm{d} r\] as
$\max(\alpha,\beta) \to \infty$, where $\mu$ is the mass measure on
$\mathcal{T}$, $H(x)$ is the height of $x$ and $\sigma_{r,x}$ (resp.
$\mathfrak{h}_{r,x}$) is the mass (resp. height) of the subtree of
$\mathcal{T}$ above level $r$ containing $x$. Such functionals arise as scaling
limits of additive functionals of the size and height on conditioned
Bienaym{\'e}-Galton-Watson trees.
|
Order-agnostic autoregressive distribution (density) estimation (OADE), i.e.,
autoregressive distribution estimation where the features can occur in an
arbitrary order, is a challenging problem in generative machine learning. Prior
work on OADE has encoded feature identity by assigning each feature to a
distinct fixed position in an input vector. As a result, architectures built
for these inputs must strategically mask either the input or model weights to
learn the various conditional distributions necessary for inferring the full
joint distribution of the dataset in an order-agnostic way. In this paper, we
propose an alternative approach for encoding feature identities, where each
feature's identity is included alongside its value in the input. This feature
identity encoding strategy allows neural architectures designed for sequential
data to be applied to the OADE task without modification. As a proof of
concept, we show that a Transformer trained on this input (which we refer to as
"the DEformer", i.e., the distribution estimating Transformer) can effectively
model binarized-MNIST, approaching the performance of fixed-order
autoregressive distribution estimating algorithms while still being entirely
order-agnostic. Additionally, we find that the DEformer surpasses the
performance of recent flow-based architectures when modeling a tabular dataset.
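The identity-plus-value encoding is simple to state in code: every feature becomes an (identity, value) pair, and the pairs can be fed to an off-the-shelf sequence model in any order (a sketch on a flattened binary image; the Transformer itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.integers(0, 2, size=784)  # a flattened binarized image

order = rng.permutation(784)           # any order works: order-agnostic
tokens = np.stack([order, pixels[order]], axis=1)  # rows = (identity, value)
print(tokens[:5])
# A sequence model reads tokens[:t] plus the next identity tokens[t, 0]
# and predicts the corresponding value, with no fixed positional masking.
```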
|
The long fascination antiferromagnetic materials have exerted on the
scientific community over about a century has been entirely renewed recently
with the discovery of several unexpected phenomena including various classes of
anomalous spin and charge Hall effects and unconventional magnonic transport,
but also homochiral magnetic entities such as skyrmions. With these
breakthroughs, antiferromagnets stand out as a rich playground for the
investigation of novel topological behaviors, and as promising candidate
materials for disruptive low-power microelectronic applications. Remarkably,
the newly discovered phenomena are all related to the topology of the magnetic,
electronic or magnonic ground state of the antiferromagnets. This review
exposes how non-trivial topology emerges at different levels in
antiferromagnets and explores the novel mechanisms that have been discovered
recently. We also discuss how novel classes of quantum magnets could enrich the
currently expanding field of antiferromagnetic spintronics and how spin
transport can in turn favor a better understanding of exotic quantum
excitations.
|
This study presents a physically consistent displacement-driven reformulation
of the concept of action-at-a-distance, which is at the foundation of nonlocal
elasticity. In contrast to existing approaches that adopt an integral
stress-strain constitutive relation, the displacement-driven approach is
predicated on an integral strain-displacement relation. The most remarkable
consequence of this reformulation is that the (total) strain energy is
guaranteed to be convex and positive-definite without imposing any constraint
on the symmetry of the kernels. This feature is critical to enable the
application of nonlocal formulations to general continua exhibiting asymmetric
interactions; ultimately a manifestation of material heterogeneity. Remarkably,
the proposed approach also enables a strong satisfaction of the locality
recovery condition and of the laws of thermodynamics, which are not foregone
conclusions in most classical nonlocal elasticity theories. Additionally, the
formulation is frame-invariant and the nonlocal operator remains physically
consistent at boundaries. The study is complemented by a detailed analysis of
the dynamic response of the nonlocal continuum and of its intrinsic dispersion
leading to the consideration that the choice of nonlocal kernels should depend
on the specific material. Examples of exponential or power-law kernels are
presented in order to demonstrate the applicability of the method to different
classes of nonlocal media. The ability to admit generalized kernels reinforces
the generalized nature of the displacement-driven approach over existing
integral methodologies, which typically lead to simplified differential models
based on exponential kernels. The theoretical formulation is also leveraged to
simulate the static response of nonlocal beams and plates illustrating the
intrinsic consistency of the approach, which is free from unwanted boundary
effects.
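Schematically, in 1D and in our own notation (u the displacement, E the modulus, A a nonlocal kernel; not necessarily the paper's symbols), the contrast is between the classical integral stress-strain law and the displacement-driven strain-displacement law:

\[
\text{stress-driven: } \sigma(x)=\int_\Omega A(x,x')\,E\,\varepsilon(x')\,\mathrm{d}x',
\qquad
\text{displacement-driven: } \varepsilon(x)=\int_\Omega A(x,x')\,\frac{\mathrm{d}u}{\mathrm{d}x'}(x')\,\mathrm{d}x'.
\]

Since the strain energy then involves a quadratic form in the kernel-filtered displacement gradient, its positive-definiteness no longer hinges on the symmetry of $A$.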
|
Recent studies on semantic frame induction show that relatively high
performance has been achieved by using clustering-based methods with
contextualized word embeddings. However, there are two potential drawbacks to
these methods: one is that they focus too much on the superficial information
of the frame-evoking verb and the other is that they tend to divide the
instances of the same verb into too many different frame clusters. To overcome
these drawbacks, we propose a semantic frame induction method using masked word
embeddings and two-step clustering. Through experiments on the English FrameNet
data, we demonstrate that using the masked word embeddings is effective for
avoiding too much reliance on the surface information of frame-evoking verbs
and that two-step clustering yields a more appropriate number of frame clusters
for the instances of the same verb.
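The "masked word embedding" of a frame-evoking verb can be obtained by replacing the verb with the mask token and reading out BERT's vector at that position (a minimal sketch using the transformers library with an assumed bert-base-uncased checkpoint; the two-step clustering itself is omitted):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sent = "She got her driver's license last week."
masked = sent.replace("got", tok.mask_token)  # mask the frame-evoking verb
inputs = tok(masked, return_tensors="pt")
pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
with torch.no_grad():
    emb = model(**inputs).last_hidden_state[0, pos]  # masked word embedding
print(emb.shape)  # torch.Size([768]); cluster such vectors across instances
```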
|
A recent strand of research in structural proof theory aims at exploring the
notion of analytic calculi (i.e. those calculi that support general and modular
proof-strategies for cut elimination), and at identifying classes of logics
that can be captured in terms of these calculi. In this context, Wansing
introduced the notion of proper display calculi as one possible design
framework for proof calculi in which the analyticity desiderata are realized in
a particularly transparent way. Recently, the theory of properly displayable
logics (i.e. those logics that can be equivalently presented with some proper
display calculus) has been developed in connection with generalized Sahlqvist
theory (aka unified correspondence). Specifically, properly displayable logics
have been syntactically characterized as those axiomatized by analytic
inductive axioms, which can be equivalently and algorithmically transformed
into analytic structural rules so that the resulting proper display calculi
enjoy a set of basic properties: soundness, completeness, conservativity, cut
elimination and subformula property. In this context, the proof that the given
calculus is complete w.r.t. the original logic is usually carried out
syntactically, i.e. by showing that a (cut free) derivation exists of each
given axiom of the logic in the basic system to which the analytic structural
rules algorithmically generated from the given axiom have been added. However,
so far this proof strategy for syntactic completeness has been implemented on a
case-by-case basis, and not in general. In this paper, we address this gap by
proving syntactic completeness for properly displayable logics in any normal
(distributive) lattice expansion signature. Specifically, we show that for
every analytic inductive axiom a cut free derivation can be effectively
generated which has a specific shape, referred to as pre-normal form.
|
We initiate the study of fine-grained completeness theorems for exact and
approximate optimization in the polynomial-time regime. Inspired by the first
completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova,
Williams, TALG 2019) as well as the classic class MaxSNP and
MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis,
JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain
a number of natural optimization problems in P, including Maximum Inner
Product, general forms of nearest neighbor search and optimization variants of
the $k$-XOR problem. Specifically, we define MaxSP as the class of problems
definable as $\max_{x_1,\dots,x_k} \#\{ (y_1,\dots,y_\ell) :
\phi(x_1,\dots,x_k, y_1,\dots,y_\ell) \}$, where $\phi$ is a quantifier-free
first-order property over a given relational structure (with MinSP defined
analogously). On $m$-sized structures, we can solve each such problem in time
$O(m^{k+\ell-1})$. Our results are:
- We determine (a sparse variant of) the Maximum/Minimum Inner Product
problem as complete under *deterministic* fine-grained reductions: A strongly
subquadratic algorithm for Maximum/Minimum Inner Product would beat the
baseline running time of $O(m^{k+\ell-1})$ for *all* problems in MaxSP/MinSP by
a polynomial factor.
- This completeness transfers to approximation: Maximum/Minimum Inner Product
is also complete in the sense that a strongly subquadratic $c$-approximation
would give a $(c+\varepsilon)$-approximation for all MaxSP/MinSP problems in
time $O(m^{k+\ell-1-\delta})$, where $\varepsilon > 0$ can be chosen
arbitrarily small. Combining our completeness with~(Chen, Williams, SODA 2019),
we obtain the perhaps surprising consequence that refuting the OV Hypothesis is
*equivalent* to giving a $O(1)$-approximation for all MinSP problems in
faster-than-$O(m^{k+\ell-1})$ time.
|
We study baryonic matter with isospin asymmetry, including fully dynamically
its interplay with pion condensation. To this end, we employ the holographic
Witten-Sakai-Sugimoto model and the so-called homogeneous ansatz for the gauge
fields in the bulk to describe baryonic matter. Within the confined geometry
and restricting ourselves to the chiral limit, we map out the phase structure
in the presence of baryon and isospin chemical potentials, showing that for
sufficiently large chemical potentials condensed pions and isospin-asymmetric
baryonic matter coexist. We also present first results of the same approach in
the deconfined geometry and demonstrate that this case, albeit technically more
involved, is better suited for comparisons with and predictions for real-world
QCD. Our study lays the ground for future improved holographic studies aiming
towards a realistic description of charge neutral, beta-equilibrated matter in
compact stars, and also for more refined comparisons with lattice studies at
nonzero isospin chemical potential.
|
This paper studies recurrence phenomena in iterative holomorphic dynamics of
certain multi-valued maps. In particular, we prove an analogue of the
Poincar\'e recurrence theorem for meromorphic correspondences with respect to
certain dynamically interesting measures associated with them. Meromorphic
correspondences present a significant measure-theoretic obstacle: the image of
a Borel set under a meromorphic correspondence need not be Borel. We manage
this issue using the Measurable Projection Theorem, which is an aspect of
descriptive set theory. We also prove a result concerning invariance properties
of the supports of the measures mentioned.
|
Polynomial chaos expansions (PCEs) have been used in many real-world
engineering applications to quantify how the uncertainty of an output is
propagated from inputs. PCEs for models with independent inputs have been
extensively explored in the literature. Recently, different approaches have
been proposed for models with dependent inputs to expand the use of PCEs to
more real-world applications. Typical approaches include building PCEs based on
the Gram-Schmidt algorithm or transforming the dependent inputs into
independent inputs. However, the two approaches have their limitations
regarding computational efficiency and additional assumptions about the input
distributions, respectively. In this paper, we propose a data-driven approach
to build sparse PCEs for models with dependent inputs. The proposed algorithm
recursively constructs orthonormal polynomials using a set of monomials based
on their correlations with the output. The proposed algorithm for building
sparse PCEs not only reduces the minimally required number of observations but
also improves numerical stability and computational efficiency. Four
numerical examples are implemented to validate the proposed algorithm.
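The core building block — orthonormalizing monomials with respect to the empirical distribution of dependent inputs — can be sketched with a QR factorization of the monomial design matrix (a generic sketch; the paper's correlation-guided recursive selection is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)  # dependent inputs

# Monomial design matrix up to total degree 2.
M = np.column_stack([np.ones(n), x1, x2, x1 * x2, x1**2, x2**2])
Q, R = np.linalg.qr(M / np.sqrt(n))       # Gram-Schmidt on the samples
P = np.sqrt(n) * Q                        # orthonormal basis, evaluated
print(np.round(P.T @ P / n, 2))           # identity: empirical orthonormality
```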
|
Gaia provided the largest-ever catalogue of white dwarf stars. We use this
catalogue, along with the third public data release of the Zwicky Transient
Facility (ZTF), to identify new eclipsing white dwarf binaries. Our method
exploits light curve statistics and the Box Least Squares algorithm to detect
periodic light curve variability. The search revealed 18 new binaries, of which
17 are eclipsing. We use the position in the Gaia H-R diagram to classify these
binaries and find that the majority of these white dwarfs have main sequence
companions. We identify one system as a candidate eclipsing white dwarf--brown
dwarf binary and a further two as extremely low mass (ELM) white dwarf
binaries. We also provide identification spectroscopy for 17 of our 18
binaries. Running our search method on mock light curves with real ZTF
sampling, we estimate our efficiency of detecting objects with light curves
similar to the ones of the newly discovered binaries. Many more binaries are to
be found in the ZTF footprint as the data releases grow, so our survey is
ongoing.
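Astropy ships a Box Least Squares implementation, so the detection step can be sketched directly on a toy light curve with an injected eclipse (ZTF-specific calibration and the light curve statistics are omitted):

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 800))  # irregular sampling, in days
y = 1 + 0.01 * rng.normal(size=t.size)
y[(t % 2.5) < 0.05] -= 0.3             # deep eclipse, period 2.5 d

bls = BoxLeastSquares(t, y)
res = bls.autopower(0.05)              # search with a 0.05 d duration
print("best period:", res.period[np.argmax(res.power)])  # close to 2.5
```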
|
Proton radiography is a widely-fielded diagnostic used to measure magnetic
structures in plasma. The deflection of protons with multi-MeV kinetic energy
by the magnetic fields is used to infer their path-integrated field strength.
Here, the use of tomographic methods is proposed for the first time to lift the
degeneracy inherent in these path-integrated measurements, allowing full
reconstruction of spatially resolved magnetic field structures in three
dimensions. Two techniques are proposed which improve the performance of
tomographic reconstruction algorithms in cases with severely limited numbers of
available probe beams, as is the case in laser-plasma interaction experiments
where the probes are created by short, high-power laser pulse irradiation of
secondary foil targets. The methods are equally applicable to optical probes
such as shadowgraphy and interferometry [M. Kasim et al. Phys. Rev. E 95,
023306 (2017)], thereby providing a disruptive new approach to three
dimensional imaging across the physical sciences and engineering disciplines.
|
This paper proposes a method to probabilistically quantify the moments (mean
and variance) of excavated material during excavation by aggregating the prior
moments of the grade blocks around the given bucket dig location. By modelling
the moments as random probability density functions (pdfs) at sampled
locations, a sums-of-Gaussians-based uncertainty estimation formulation is
presented that jointly estimates the location pdfs, as well as the prior values
for uncertainty coming from ore body knowledge (obk) sub-block models. The
moments calculated at each random location form a single Gaussian, and these
Gaussians are the components of a Gaussian mixture distribution. The overall
uncertainty of the excavated material at the given bucket location is
represented by a Gaussian Mixture Model (GMM), and therefore a moment matching
method is proposed to estimate the moments of the reduced GMM. The method was
tested in a region of a Pilbara iron ore deposit situated in the Brockman Iron
Formation of the Hamersley Province, Western Australia, and suggests a
framework to quantify the uncertainty in the excavated material that has not
been studied in the literature before.
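The moment-matching step collapses the mixture into a single Gaussian by matching its mean and variance, using the standard mixture identities (a generic sketch with made-up component values):

```python
import numpy as np

w = np.array([0.5, 0.3, 0.2])        # component weights (sum to 1)
mu = np.array([1.2, 0.8, 1.5])       # component means (e.g., grade)
var = np.array([0.04, 0.09, 0.02])   # component variances

m = np.sum(w * mu)                    # matched mean
v = np.sum(w * (var + mu**2)) - m**2  # matched variance
print("matched mean:", m, "matched variance:", v)
```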
|
We describe a physics program at the Relativistic Heavy Ion Collider (RHIC)
with tagged forward protons. The program started with the proton-proton elastic
scattering experiment (PP2PP), for which a set of Roman Pot stations was built.
The PP2PP experiment took data at RHIC as a dedicated experiment at the
beginning of RHIC operations. To expand the physics program to include
non-elastic channels with forward protons, like Central Exclusive Production
(CEP), Central Production (CP) and Single Diffraction Dissociation (SD), the
experiment with its equipment was merged with the STAR experiment at RHIC.
Consequently, the expanded program, which included both elastic and inelastic
channels, became part of the physics program and operations of the STAR
experiment. In this paper we shall describe the physics results obtained by the
PP2PP and STAR experiments to date.
|
In the present paper, we study the existence of best proximity pair in
ultrametric spaces. We show, under suitable assumptions, that the proximinal
pair $(A,B)$ has a best proximity pair. As a consequence we generalize a well
known best approximation result and we derive some fixed point theorems.
Moreover, we provide examples to illustrate the obtained results.
|
By using the quantum extremal island formula, we perform a simple calculation
of the generalized entanglement entropy of Hawking radiation from the two
dimensional Liouville black hole. No reasonable island was found when
extremizing the generalized entropy. We explain qualitatively the reason why
the Page curve cannot be reproduced in the present model. This suggests that
the islands may not necessarily save the information paradox for the Liouville
black holes.
|
We substantially extend our relaxation theory for perturbed many-body quantum
systems from [Phys. Rev. Lett. 124, 120602 (2020)] by establishing an
analytical prediction for the time-dependent observable expectation values
which depends on only two characteristic parameters of the perturbation
operator: its overall strength and its range or band width. Compared to the
previous theory, a significantly larger range of perturbation strengths is
covered. The results are obtained within a typicality framework by solving the
pertinent random matrix problem exactly for a certain class of banded
perturbations and by demonstrating the (approximative) universality of these
solutions, which allows us to adopt them to considerably more general classes
of perturbations. We also verify the prediction by comparison with several
numerical examples.
|
With an increase in low-cost machine learning APIs, advanced machine learning
models may be trained on private datasets and monetized by providing them as a
service. However, privacy researchers have demonstrated that these models may
leak information about records in the training dataset via membership inference
attacks. In this paper, we take a closer look at another inference attack
reported in the literature, called attribute inference, whereby an attacker tries
to infer missing attributes of a partially known record used in the training
dataset by accessing the machine learning model as an API. We show that even if
a classification model succumbs to membership inference attacks, it is unlikely
to be susceptible to attribute inference attacks. We demonstrate that this is
because membership inference attacks fail to distinguish a member from a nearby
non-member. We call the ability of an attacker to distinguish the two (similar)
vectors as strong membership inference. We show that membership inference
attacks cannot infer membership in this strong setting, and hence inferring
attributes is infeasible. However, under a relaxed notion of attribute
inference, called approximate attribute inference, we show that it is possible
to infer attributes close to the true attributes. We verify our results on
three publicly available datasets, five membership, and three attribute
inference attacks reported in the literature.
|
We have investigated the magneto-transport properties of beta-Bi4I4 bulk
crystal, which was recently theoretically proposed and experimentally
demonstrated to be a topological insulator. At low temperature T and magnetic
field B, a series of Shubnikov-de Haas (SdH) oscillations is observed in the
magnetoresistivity (MR). A detailed analysis reveals a light cyclotron mass
of 0.1 m_e, and the field-angle dependence of the MR reveals that the SdH
oscillations originate from a convex Fermi surface. In the extreme quantum
limit (EQL) region, there is a metal-insulator transition occurring soon after
the EQL. We perform a scaling analysis, and all the isotherms collapse onto a
universal scaling curve with a fitted critical exponent of 6.5. The enormous
value of the critical exponent implies that this insulating quantum phase
originates from strong
electron-electron interactions in high fields. However, at the far end of the EQL,
both the longitudinal and Hall resistivity increase exponentially with B, and
the temperature dependence of the MR reveals an energy gap induced by the high
magnetic field, signifying a magnetic freeze-out effect. Our findings indicate
that bulk beta-Bi4I4 is an excellent candidate for a 3D topological system for
exploring EQL physics and relevant exotic quantum phases.
|
In an active power distribution system, Volt-VAR optimization (VVO) methods
are employed to achieve network-level objectives such as minimization of
network power losses. The commonly used model-based centralized and distributed
VVO algorithms perform poorly in the absence of a communication system and with
model and measurement uncertainties. In this paper, we propose a model-free
local Volt-VAR control approach for network-level optimization that does not
require communication with other decision-making agents. The proposed algorithm
is based on an extremum-seeking approach that uses only local measurements to
minimize the network power losses. To prove that the proposed extremum-seeking
controller converges to the optimum solution, we also derive mathematical
conditions for which the loss minimization problem is convex with respect to
the control variables. Local controllers pose stability concerns during highly
variable scenarios. Thus, the proposed extremum-seeking controller is
integrated with an adaptive-droop control module to provide a stable local
control response. The proposed approach is validated using IEEE 4-bus and IEEE
123-bus systems and achieves the loss minimization objective while maintaining
the voltage within the pre-specified limits even during highly variable DER
generation scenarios.
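The extremum-seeking idea — inject a small probing perturbation into the local control, demodulate the measured objective, and integrate the resulting gradient estimate — fits in a few lines. This is a generic discrete-time sketch on a toy convex loss, not the full adaptive-droop controller of the paper:

```python
import numpy as np

loss = lambda u: (u - 0.7) ** 2        # stand-in for measured network losses
u_hat, a, omega, k, dt = 0.0, 0.05, 2.0, 0.5, 0.01

for n in range(200000):
    t = n * dt
    u = u_hat + a * np.sin(omega * t)  # probing perturbation
    grad = loss(u) * np.sin(omega * t) * (2 / a)  # demodulated gradient
    u_hat -= k * grad * dt             # slow descent on the estimate
print("converged set-point:", u_hat)   # close to 0.7, the loss minimizer
```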
|
Technology has changed both our way of life and the way in which we learn.
Students now attend lectures with laptops and mobile phones, and this situation
is accentuated in the case of students on Computer Science degrees, since they
require their computers in order to participate in both theoretical and
practical lessons. Problems, however, arise when the students' social networks
are opened on their computers and they receive notifications that interrupt
their work. We set up a workshop regarding time, thoughts and attention
management with the objective of teaching our students techniques that would
allow them to manage interruptions, concentrate better and, ultimately, make
better use of their time. Those who took part in the workshop were then
evaluated to discover its effects. The results obtained are quite optimistic
and are described in this paper with the objective of encouraging other
universities to undertake similar initiatives.
|
Multi-scale 3D characterization is widely used by materials scientists to
further their understanding of the relationships between microscopic structure
and macroscopic function. Scientific computed tomography (CT) instruments are
one of the most popular choices for 3D non-destructive characterization of
materials at length scales ranging from the angstrom-scale to the micron-scale.
These instruments typically have a source of radiation that interacts with the
sample to be studied and a detector assembly to capture the result of this
interaction. A collection of such high-resolution measurements are made by
re-orienting the sample which is mounted on a specially designed stage/holder
after which reconstruction algorithms are used to produce the final 3D volume
of interest. The end goals of scientific CT scans include determining the
morphology, chemical composition, or dynamic behavior of materials when
subjected to external stimuli. In this article, we will present an overview of
recent
advances in reconstruction algorithms that have enabled significant
improvements in the performance of scientific CT instruments - enabling faster,
more accurate and novel imaging capabilities. In the first part, we will focus
on model-based image reconstruction algorithms that formulate the inversion as
solving a high-dimensional optimization problem involving a data-fidelity term
and a regularization term. In the last part of the article, we will present an
overview of recent approaches using deep-learning based algorithms for
improving scientific CT instruments.
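At its core, model-based image reconstruction solves argmin_x ||y - Ax||^2 + lambda * R(x); a minimal gradient-descent instance with a toy linear forward operator and a smoothness (Tikhonov-type) regularizer:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 100))               # toy forward (projection) model
x_true = np.zeros(100); x_true[40:60] = 1.0
y = A @ x_true + 0.05 * rng.normal(size=60)  # noisy, underdetermined data

D = np.eye(100, k=1)[:-1] - np.eye(100)[:-1]  # finite-difference operator
lam, step, x = 1.0, 1e-4, np.zeros(100)
for _ in range(5000):                  # minimize ||y-Ax||^2 + lam*||Dx||^2
    grad = 2 * A.T @ (A @ x - y) + 2 * lam * D.T @ (D @ x)
    x -= step * grad
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```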
|
Random K-out graphs, denoted $\mathbb{H}(n;K)$, are generated by each of the
$n$ nodes drawing $K$ out-edges towards $K$ distinct nodes selected uniformly
at random, and then ignoring the orientation of the arcs. Recently, random
K-out graphs have been used in applications as diverse as random (pairwise) key
predistribution in ad-hoc networks, anonymous message routing in
crypto-currency networks, and differentially-private federated averaging. In
many applications, connectivity of the random K-out graph when some of its
nodes are dishonest, have failed, or have been captured is of practical
interest. We provide a comprehensive set of results on the connectivity and
giant component size of $\mathbb{H}(n;K_n,\gamma_n)$, i.e., random K-out graph
when $\gamma_n$ of its nodes, selected uniformly at random, are deleted. First,
we derive conditions for $K_n$ and $n$ that ensure, with high probability
(whp), the connectivity of the remaining graph when the number of deleted nodes
is $\gamma_n=\Omega(n)$ and $\gamma_n=o(n)$, respectively. Next, we derive
conditions for $\mathbb{H}(n;K_n,\gamma_n)$ to have a giant component, i.e., a
connected subgraph with $\Omega(n)$ nodes, whp. This is also done for different
scalings of $\gamma_n$ and upper bounds are provided for the number of nodes
outside the giant component. Simulation results are presented to validate the
usefulness of the results in the finite node regime.
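Both the model and the node-deletion experiment are easy to simulate (a generic sketch using networkx, with illustrative parameter values):

```python
import random
import networkx as nx

def k_out_graph(n, K, seed=0):
    rng = random.Random(seed)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for v in range(n):                         # each node draws K out-edges
        for u in rng.sample([w for w in range(n) if w != v], K):
            G.add_edge(v, u)                   # orientation is then ignored
    return G

n, K, gamma = 1000, 2, 100
G = k_out_graph(n, K)
G.remove_nodes_from(random.Random(1).sample(range(n), gamma))  # delete nodes
giant = max(nx.connected_components(G), key=len)
print("connected:", nx.is_connected(G), "| giant size:", len(giant))
```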
|