We show how phase-space simulations of Gaussian quantum states in a photonic
network permit verification of measurable correlations of Gaussian boson
sampling (GBS) quantum computers. Our results agree with experiments for up to
100th-order correlations, provided decoherence is included. We extend this to
more than 16,000 modes, and describe how to simulate genuine multipartite
entanglement.
|
A fundamental task in AI is to assess (in)dependence between mixed-type
variables (text, image, sound). We propose a Bayesian kernelised correlation
test of (in)dependence using a Dirichlet process model. The new measure of
(in)dependence allows us to answer some fundamental questions: Based on data,
are (mixed-type) variables independent? How likely is dependence/independence
to hold? How high is the probability that two mixed-type variables are more
than just weakly dependent? We establish theoretical properties of the
approach and present algorithms for its fast computation. We empirically
demonstrate the effectiveness of the proposed method by analysing its
performance and by comparing it with other frequentist and Bayesian approaches
on a range of datasets and tasks with mixed-type variables.
|
Superconductivity in a crystalline lattice without inversion is subject to
complex spin-orbit-coupling effects, which can lead to mixed-parity pairing and
an unusual magnetic response. In this study, the properties of a layered
superconductor with alternating Rashba spin-orbit coupling in the stacking of
layers, hence (globally) possessing a center of inversion, are analyzed in an
applied magnetic field, using a generalized Ginzburg-Landau model. The
superconducting order parameter consists of an even- and an odd-parity pairing
component which exchange their roles as dominant pairing channel upon
increasing the magnetic field. This leads to an unusual kink feature in the
upper critical field and a first-order phase transition within the mixed phase.
We investigate various signatures of this internal phase transition. The
physics we discuss here could explain the recently found $H$--$T$ phase diagram
of the heavy-fermion superconductor CeRh$_2$As$_2$.
|
Recent studies strive to incorporate various human rationales into neural
networks to improve model performance, but few pay attention to the quality of
the rationales. Most existing methods distribute their models' focus to
distantly-labeled rationale words entirely and equally, while ignoring the
potentially important non-rationale words and not distinguishing the importance
of different rationale words. In this paper, we propose two novel auxiliary
loss functions to make better use of distantly-labeled rationales, which
encourage models to maintain their focus on important words beyond labeled
rationales (PINs) and alleviate redundant training on non-helpful rationales
(NoIRs). Experiments on two representative classification tasks show that our
proposed methods can push a classification model to effectively learn crucial
clues from non-perfect rationales while maintaining the ability to spread its
focus to other unlabeled important words, thus significantly outperforming
existing methods.
|
A novel framework is proposed to extract near-threshold resonant states from
finite-volume energy levels of lattice QCD and is applied to elucidate
structures of the positive-parity $D_s$ states. The quark model, the
quark-pair-creation mechanism and the $D^{(*)}K$ interaction are incorporated into
the Hamiltonian effective field theory. The bare $1^+$ $c\bar s$ states are
almost pure states in the heavy-quark spin basis. The physical
$D^*_{s0}(2317)$ and $D^*_{s1}(2460)$ are mixtures of a bare $c\bar s$ core
and a $D^{(*)}K$ component, while the $D^*_{s1}(2536)$ and $D^*_{s2}(2573)$ are
dominated almost entirely by the bare $c\bar{s}$ core. Furthermore, our model
reproduces the clear level crossing of the $D^*_{s1}(2536)$ with the scattering
state in a finite volume.
|
As the worldwide population gets increasingly aged, in-home telemedicine and
mobile-health solutions represent promising services to promote active and
independent aging and to contribute to a paradigm shift towards patient-centric
healthcare. In this work, we present ACTA (Advanced Cognitive Training for
Aging), a prototype mobile-health solution to provide advanced cognitive
training for senior citizens with mild cognitive impairments. We disclose here
the conceptualization of ACTA as the integration of two promising
rehabilitation strategies: Nudge theory, from the cognitive domain, and
neurofeedback, from the neuroscience domain. Moreover, in ACTA we exploit
the most advanced machine learning techniques to deliver customized and fully
adaptive support to the elderly, while training in an ecological environment.
ACTA represents the next step beyond SENIOR, an earlier mobile-health project
for cognitive training based on Nudge theory, currently ongoing in Lombardy
Region. Beyond SENIOR, ACTA represents a highly-usable, accessible, low-cost,
new-generation mobile-health solution to promote independent aging and
effective motor-cognitive training support, while empowering the elderly in
their own aging.
|
Analysis of large observational data sets generated by a reactive system is a
common challenge in debugging system failures and determining their root cause.
One of the major problems is that these observational data suffer from
survivorship bias. Examples include analyzing traffic logs from networks, and
simulation logs from circuit design. In such applications, users want to detect
non-spurious correlations from observational data and obtain actionable
insights about them. In this paper, we introduce Log to Neuro-Symbolic
(Log2NS), a framework that combines probabilistic analysis from machine
learning (ML) techniques on observational data with certainties derived from
symbolic reasoning on an underlying formal model. We apply the proposed
framework to network traffic debugging by employing the following steps. To
detect patterns in network logs, we first generate global embedding vector
representations of entities such as IP addresses, ports, and applications.
Next, we represent large log flow entries as clusters that make it easier for
the user to visualize and detect interesting scenarios that will be further
analyzed. To generalize these patterns, Log2NS provides an ability to query
from static logs and correlation engines for positive instances, as well as
formal reasoning for negative and unseen instances. By combining the strengths
of deep learning and symbolic methods, Log2NS provides a very powerful
reasoning and debugging tool for log-based data. Empirical evaluations on a
real internal data set demonstrate the capabilities of Log2NS.
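
A rough sketch of the clustering step described above (not the authors' implementation) is given below: hypothetical flow fields are one-hot encoded and clustered with k-means so that recurring traffic patterns can be inspected; the field names and toy entries are assumptions.

```python
# Minimal sketch (not the authors' code): embed categorical log fields and
# cluster flow entries so recurring traffic patterns can be inspected.
# The field names and toy flow entries are assumptions.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

flows = [
    {"src_ip": "10.0.0.1", "dst_port": "443", "app": "https"},
    {"src_ip": "10.0.0.2", "dst_port": "443", "app": "https"},
    {"src_ip": "10.0.0.3", "dst_port": "53",  "app": "dns"},
    {"src_ip": "10.0.0.1", "dst_port": "53",  "app": "dns"},
]

# One-hot "embedding" of each entity value, concatenated per flow entry.
vocab = sorted({f"{k}={v}" for flow in flows for k, v in flow.items()})
index = {tok: i for i, tok in enumerate(vocab)}

def embed(flow):
    vec = np.zeros(len(vocab))
    for k, v in flow.items():
        vec[index[f"{k}={v}"]] = 1.0
    return vec

X = np.stack([embed(f) for f in flows])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(Counter(labels))  # cluster sizes; each cluster is a candidate traffic pattern
```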
|
Gaze estimation methods learn eye gaze from facial features. However, among the
rich information in the facial image, the truly gaze-relevant features only
correspond to subtle changes in the eye region, while other gaze-irrelevant
features like illumination, personal appearance and even facial expression may
affect the learning in an unexpected way. This is a major reason why existing
methods show significant performance degradation in cross-domain/dataset
evaluation. In this paper, we tackle the cross-domain problem in gaze
estimation. Different from common domain adaptation methods, we propose a domain
generalization method to improve the cross-domain performance without touching
target samples. The domain generalization is realized by gaze feature
purification. We eliminate gaze-irrelevant factors such as illumination and
identity to improve the cross-domain performance. We design a plug-and-play
self-adversarial framework for the gaze feature purification. The framework
enhances not only our baseline but also existing gaze estimation methods
directly and significantly. To the best of our knowledge, we are the first to
propose domain generalization methods in gaze estimation. Our method achieves
not only state-of-the-art performance among typical gaze estimation methods but
also competitive results among domain adaptation methods. The code is released at
https://github.com/yihuacheng/PureGaze.
|
The High Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory surveys the
very high energy sky in the 300 GeV to $>100$ TeV energy range. HAWC has
detected two blazars above $11\sigma$, Markarian 421 (Mrk 421) and Markarian
501 (Mrk 501). The observations comprise data taken in the period
between June 2015 and July 2018, resulting in $\sim 1038$ days of exposure.
In this work we report the time-averaged spectral analysis for both sources
above 0.5 TeV. Taking into account the flux attenuation due to the
extragalactic background light (EBL), the intrinsic spectrum of Mrk 421 is
described by a power law with an exponential energy cut-off with index
$\alpha=2.26\pm(0.12)_{stat}(_{-0.2}^{+0.17})_{sys}$ and energy cut-off
$E_c=5.1\pm(1.6)_{stat}(_{-2.5}^{+1.4})_{sys}$ TeV, while the intrinsic
spectrum of Mrk 501 is better described by a simple power law with index
$\alpha=2.61\pm(0.11)_{stat}(_{-0.07}^{+0.01})_{sys}$. The maximum energies at
which the Mrk 421 and Mrk 501 signals are detected are 9 and 12 TeV,
respectively. This makes these some of the highest energy detections to date
for spectra averaged over years-long timescales. Since the observation of gamma
radiation from blazars provides information about the physical processes that
take place in their relativistic jets, it is important to study the broad-band
spectral energy distribution (SED) of these objects. For this purpose,
contemporaneous data from the Large Area Telescope on board the {\em Fermi}
satellite and literature data, in the radio to X-ray range, were used to build
time-averaged SEDs that were modeled within a synchrotron self-Compton leptonic
scenario to derive the physical parameters that describe the nature of the
respective jets.
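
For reference, the quoted intrinsic spectral shapes can be written down directly; in the sketch below the normalization N0 and reference energy E0 are placeholders, not the published fit values.

```python
# Illustrative sketch of the quoted spectral shapes; N0 and E0 are placeholder
# values, not the published normalizations.
import numpy as np

E0 = 1.0  # reference energy in TeV (assumed)

def power_law_cutoff(E, N0, alpha, Ec):
    """Mrk 421-style intrinsic spectrum: dN/dE = N0 (E/E0)^-alpha exp(-E/Ec)."""
    return N0 * (E / E0) ** (-alpha) * np.exp(-E / Ec)

def power_law(E, N0, alpha):
    """Mrk 501-style intrinsic spectrum: dN/dE = N0 (E/E0)^-alpha."""
    return N0 * (E / E0) ** (-alpha)

E = np.logspace(np.log10(0.5), np.log10(12.0), 50)   # 0.5-12 TeV
flux_mrk421 = power_law_cutoff(E, N0=1.0, alpha=2.26, Ec=5.1)
flux_mrk501 = power_law(E, N0=1.0, alpha=2.61)
```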
|
We describe the structure of finite Boolean inverse monoids and apply our
results to the representation theory of finite inverse semigroups. We then
generalize to semisimple Boolean inverse semigroups.
|
The score of a vertex $x$ in an oriented graph is defined to be its
outdegree, \emph{i.e.}, the number of arcs with initial vertex $x$. The score
sequence of an oriented graph is the sequence of all scores arranged in
nondecreasing order. An oriented complete bipartite graph is called a
bitournament. The score sequence of a bitournament consists of two
nondecreasing sequences of nonnegative integers, one for each of the two
partite sets. Moon has characterized the score sequences of bitournaments. This
paper introduces the concept of trimming a sequence and gives a
characterization of score sequences of bitournaments utilizing this concept.
|
In the present work, the $k_T$-factorization formalism is applied to compute the
exclusive dilepton production by timelike Compton scattering (TCS) in $eA$,
$pA$ and $AA$ collisions. The nuclear effects are investigated considering
heavy and light ions. The production cross section is presented in terms of the
invariant mass and rapidity distributions of the lepton pair. The analysis is done for
electron-ion collisions at the Large Hadron-Electron Collider (LHeC), its
high-energy upgrade (HE-LHeC) and at the Future Circular Collider (FCC) in
lepton-hadron mode. Additionally, ultraperipheral heavy ion collisions at
future runs of the Large Hadron Collider (LHC) and at the FCC (hadron-hadron
mode) are also considered.
|
This paper surveys 60 English Machine Reading Comprehension datasets, with a
view to providing a convenient resource for other researchers interested in
this problem. We categorize the datasets according to their question and answer
form and compare them across various dimensions including size, vocabulary,
data source, method of creation, human performance level, and first question
word. Our analysis reveals that Wikipedia is by far the most common data source
and that there is a relative lack of why, when, and where questions across
datasets.
|
The security of mobile robotic networks (MRNs) has been an active research
topic in recent years. This paper demonstrates that the observable interaction
process of MRNs under formation control will present increasingly severe
threats. Specifically, we find that an external attack robot, which has only
partial observation of the MRN and does not know the system dynamics or have
access to it, can learn the interaction rules from observations and utilize
them to replace a target robot, destroying the cooperation performance of the
MRN. We call this novel attack sneak, which endows the attacker with the
intelligence of learning knowledge and is hard to tackle with traditional
defense techniques.
The key insight is to separately reveal the internal interaction structure
among the robots and the external interaction mechanism with the environment, from
the coupled state evolution influenced by the model-unknown rules and
unobservable part of the MRN. To address this issue, we first provide general
interaction process modeling and prove the learnability of the interaction
rules. Then, with the learned rules, we design an Evaluate-Cut-Restore (ECR)
attack strategy considering the partial interaction structure and geometric
pattern. We also establish the sufficient conditions for a successful sneak
with maximum control impacts over the MRN. Extensive simulations illustrate the
feasibility and effectiveness of the proposed attack.
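
A minimal sketch of the rule-learning ingredient (not the paper's ECR strategy): assuming the observed robots follow linear consensus-like updates x(t+1) = W x(t), an observer can estimate the interaction weights from state snapshots by least squares; the dynamics, dimensions, and noise level are assumptions.

```python
# Minimal sketch, assuming linear consensus-like dynamics x(t+1) = W x(t);
# the true weights, noise level, and observation windows are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Ground-truth (row-stochastic) interaction weights, unknown to the observer.
W_true = rng.random((n, n))
W_true /= W_true.sum(axis=1, keepdims=True)

# Collect several short observed trajectories from random initial states.
X_list, Y_list = [], []
for _ in range(20):
    x = rng.random(n)
    for _ in range(5):
        y = W_true @ x + 0.001 * rng.standard_normal(n)
        X_list.append(x)
        Y_list.append(y)
        x = y

# Least-squares estimate: rows of Y ~= rows of X times W^T.
A, B = np.array(X_list), np.array(Y_list)
W_hat = np.linalg.lstsq(A, B, rcond=None)[0].T
print("max abs error:", np.abs(W_hat - W_true).max())
```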
|
We present a fully Eulerian hybrid immersed-boundary/phase-field model to
simulate wetting and contact line motion over any arbitrary geometry. The solid
wall is described with a volume-penalisation ghost-cell immersed boundary
whereas the interface between the two fluids is described by a diffuse-interface
method. The contact line motion on the complex wall is prescribed via a slip
velocity in the momentum equation and a static/dynamic contact angle condition
for the order parameter of the Cahn-Hilliard model. This combination requires accurate
computations of the normal and tangential gradients of the scalar order
parameter and of the components of the velocity. However, the present algorithm
requires the computation of averaging weights and other geometrical variables
as a preprocessing step. Several validation tests are reported in the
manuscript, together with 2D simulations of a droplet spreading over a
sinusoidal wall with different contact angles and slip length and a spherical
droplet spreading over a sphere, showing that the proposed algorithm is capable
of dealing with the three-phase contact line motion over any complex wall. The
Eulerian feature of the algorithm facilitates the implementation and provides a
straightforward and potentially highly scalable parallelisation. The employed
parallelisation of the underlying Navier-Stokes solver can be efficiently used
for the multiphase part as well. The procedure proposed here can be directly
employed to impose any types of boundary conditions (Neumann, Dirichlet and
mixed) for any field variable evolving over a complex geometry, modelled with
an immersed-boundary approach (for instance, modelling deformable biological
membranes, red blood cells, solidification, evaporation and boiling, to name a
few).
|
Many systems nowadays require protection against security or safety threats.
A physical protection system (PPS) integrates people, procedures, and equipment
to protect assets or facilities. PPSs have targeted various systems, including
airports, rail transport, highways, hospitals, bridges, the electricity grid,
dams, power plants, seaports, oil refineries, and water systems. Hence, PPSs
are characterized by a broad set of features, of which some are common, while
others are variant and depend on the particular system to be developed.
The notion of PPS has been broadly addressed in the literature, and even
domain-specific PPS development methods have been proposed. However, the common
and variant features are fragmented across many studies. This situation
seriously impedes the identification of the required features and likewise the
guidance of the systems engineering process of PPSs. To enhance the
understanding and support the guidance of the development of PPSs, in this
paper, we provide a feature-driven survey of PPSs. The approach applies a
systematic domain analysis process based on the state-of-the-art of PPSs. It
presents a family feature model that defines the common and variant features
and herewith the configuration space of PPSs.
|
Multi-task learning is an important trend of machine learning in facing the
era of artificial intelligence and big data. Despite a large amount of
research on learning rate estimates of various single-task machine learning
algorithms, there is little parallel work for multi-task learning. We present
mathematical analysis on the learning rate estimate of multi-task learning
based on the theory of vector-valued reproducing kernel Hilbert spaces and
matrix-valued reproducing kernels. For the typical multi-task regularization
networks, an explicit learning rate dependent both on the number of sample data
and the number of tasks is obtained. It reveals that the generalization ability
of multi-task learning algorithms is indeed affected as the number of tasks
increases.
|
We present multi-scale and multi-wavelength data of the Galactic HII region
G25.4-0.14 (hereafter G25.4NW, distance ~5.7 kpc). The SHARC-II 350 micron
continuum map displays a hub-filament configuration containing five parsec
scale filaments and a central compact hub. Through the 5 GHz radio continuum
map, four ionized clumps (i.e., Ia-Id) are identified toward the central hub,
and are powered by massive OB-stars. The Herschel temperature map depicts the
warm dust emission (i.e., Td ~23-39 K) toward the hub. High resolution Atacama
Large Millimeter/submillimeter Array (ALMA) 1.3 mm continuum map (resolution
~0".82 X 0".58) reveals three cores (c1-c3; mass ~80-130 Msun) toward the
ionized clumps Ia, and another one (c4; mass ~70 Msun) toward the ionized clump
Ib. A compact near-infrared (NIR) emission feature (extent ~0.2 pc) is
investigated toward the ionized clump Ia excited by an O8V-type star, and
contains at least three embedded K-band stars. In the direction of the ionized
clump Ia, the ALMA map also shows an elongated feature (extent ~0.2 pc) hosting
the cores c1-c3. All these findings together illustrate the existence of a
small cluster of massive stars in the central hub. Considering the detection of
the hub-filament morphology and the spatial locations of the mm cores, a global
non-isotropic collapse (GNIC) scenario appears to be applicable in G25.4NW,
which includes the basic ingredients of the global hierarchical collapse and
clump-fed accretion models. Overall, the GNIC scenario explains the birth of
massive stars in G25.4NW.
|
In this work, we introduce a control variate approximation technique for low
error approximate Deep Neural Network (DNN) accelerators. The control variate
technique is used in Monte Carlo methods to achieve variance reduction. Our
approach significantly decreases the induced error due to approximate
multiplications in DNN inference, without requiring the time-exhaustive retraining
needed by the state-of-the-art. Leveraging our control variate method, we use
highly approximated multipliers to generate power-optimized DNN accelerators.
Our experimental evaluation on six DNNs, for Cifar-10 and Cifar-100 datasets,
demonstrates that, compared to the accurate design, our control variate
approximation achieves the same performance and a 24% power reduction for a mere
0.16% accuracy loss.
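
The control variate technique mentioned above is a standard Monte Carlo variance-reduction tool; the sketch below illustrates it on a toy integral rather than on the DNN accelerator itself, with an assumed integrand and control.

```python
# Toy Monte Carlo illustration of the control variate idea (variance
# reduction), not the DNN-accelerator design itself.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.random(n)

y = np.exp(u)          # estimate E[exp(U)] = e - 1 for U ~ Uniform(0, 1)
z = u                  # control variate with known mean E[U] = 0.5

c = -np.cov(y, z)[0, 1] / np.var(z)       # optimal coefficient
y_cv = y + c * (z - 0.5)                  # corrected samples, same mean, lower variance

print("plain estimate:", y.mean(), "variance:", y.var())
print("control-variate estimate:", y_cv.mean(), "variance:", y_cv.var())
```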
|
We study unbendable rational curves, i.e., nonsingular rational curves in a
complex manifold of dimension $n$ with normal bundles isomorphic to
$\mathcal{O}_{\mathbb{P}^1}(1)^{\oplus p} \oplus
\mathcal{O}_{\mathbb{P}^1}^{\oplus (n-1-p)}$ for some nonnegative integer $p$.
Well-known examples arise from algebraic geometry as general minimal rational
curves of uniruled projective manifolds. After describing the relations between
the differential geometric properties of the natural distributions on the
deformation spaces of unbendable rational curves and the projective geometric
properties of their varieties of minimal rational tangents, we concentrate on
the case of $p=1$ and $n \leq 5$, which is the simplest nontrivial situation.
In this case, the families of unbendable rational curves fall essentially into
two classes: Goursat type or Cartan type. Those of Goursat type arise from
ordinary differential equations and those of Cartan type have special features
related to contact geometry. We show that the family of lines on any
nonsingular cubic 4-fold is of Goursat type, whereas the family of lines on a
general quartic 5-fold is of Cartan type, in the proof of which the projective
geometry of varieties of minimal rational tangents plays a key role.
|
Deep Neural Networks (DNNs) are witnessing increased adoption in multiple
domains owing to their high accuracy in solving real-world problems. However,
this high accuracy has been achieved by building deeper networks, posing a
fundamental challenge to the low latency inference desired by user-facing
applications. Current low-latency solutions trade off accuracy or fail to
exploit the inherent temporal locality in prediction serving workloads.
We observe that caching hidden layer outputs of the DNN can introduce a form
of late-binding where inference requests only consume the amount of computation
needed. This enables a mechanism for achieving low latencies, coupled with an
ability to exploit temporal locality. However, traditional caching approaches
incur high memory overheads and lookup latencies, leading us to design learned
caches - caches that consist of simple ML models that are continuously updated.
We present the design of GATI, an end-to-end prediction serving system that
incorporates learned caches for low-latency DNN inference. Results show that
GATI can reduce inference latency by up to 7.69X on realistic workloads.
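
A hypothetical sketch of the learned-cache idea follows (not GATI's implementation): a cheap model trained on a stand-in for intermediate activations answers confident requests early, and the full model runs only otherwise; the dataset, "layer" proxy, confidence threshold, and models are assumptions.

```python
# Hypothetical sketch of a learned cache: a cheap model serves confident
# requests early; the expensive model runs only otherwise. The dataset, the
# feature subset standing in for "hidden layer outputs", and the threshold
# are assumptions, not GATI's actual design.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full_model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
# Stand-in for intermediate activations: a cheap view using only half the features.
cache_model = LogisticRegression(max_iter=2000).fit(X_tr[:, :32], y_tr)

THRESHOLD = 0.95
proba = cache_model.predict_proba(X_te[:, :32])
confident = proba.max(axis=1) >= THRESHOLD

preds = np.empty_like(y_te)
preds[confident] = cache_model.classes_[proba[confident].argmax(axis=1)]  # cache hit
preds[~confident] = full_model.predict(X_te[~confident])                  # fall through

print("cache hit rate:", confident.mean(), "accuracy:", (preds == y_te).mean())
```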
|
India has a maternal mortality ratio of 113 and child mortality ratio of 2830
per 100,000 live births. Lack of access to preventive care information is a
major contributing factor for these deaths, especially in low resource
households. We partner with ARMMAN, a non-profit based in India employing a
call-based information program to disseminate health-related information to
pregnant women and women with recent child deliveries. We analyze call records
of over 300,000 women registered in the program created by ARMMAN and try to
identify women who might not engage with these call programs that are proven to
result in positive health outcomes. We built machine learning based models to
predict the long term engagement pattern from call logs and beneficiaries'
demographic information, and discuss the applicability of this method in the
real world through a pilot validation. Through a pilot service quality
improvement study, we show that using our model's predictions to make
interventions boosts engagement metrics by 61.37%. We then formulate the
intervention planning problem as restless multi-armed bandits (RMABs), and
present preliminary results using this approach.
|
The evolution of deformation from plasticity to localization to damage is
investigated in ferritic-pearlitic steel through nanometer-resolution
microstructure-correlated SEM-DIC (µ-DIC) strain mapping, enabled through
highly accurate microstructure-to-strain alignment. We reveal the key
plasticity mechanisms in ferrite and pearlite as well as their evolution into
localization and damage and their relation to the microstructural arrangement.
Notably, two contrasting mechanisms were identified that control whether damage
initiation in pearlite occurs and, through connection of localization hotspots
in ferrite grains, potentially results in macroscale fracture: (i) cracking of
pearlite bridges with relatively clean lamellar structure by brittle fracture
of cementite lamellae due to build-up of strain concentrations in nearby
ferrite, versus (ii) large plasticity without damage in pearlite bridges with a
more "open", chaotic pearlite morphology, which enables plastic percolation
paths in the interlamellar ferrite channels. Based on these insights,
recommendations for damage resistant ferritic-pearlitic steels are proposed.
|
This is a survey of recent results on central and non-central limit theorems
for quadratic functionals of stationary processes. The underlying processes are
Gaussian, linear or L\'evy-driven linear processes with memory, and are defined
either in discrete or continuous time. We focus on limit theorems for Toeplitz
and tapered Toeplitz type quadratic functionals of stationary processes with
applications in parametric and nonparametric statistical estimation theory. We
discuss questions concerning Toeplitz matrices and operators, Fej\'er-type
singular integrals, and L\'evy-It\^o-type and Stratonovich-type multiple
stochastic integrals. These are the main tools for obtaining limit theorems.
|
Although deep convolutional neural networks (DCNNs) have achieved excellent
performance in human pose estimation, these networks often have a large number
of parameters and computations, leading to slow inference speed. For this
issue, an effective solution is knowledge distillation, which transfers
knowledge from a large pre-trained network (teacher) to a small network
(student). However, there are some defects in the existing approaches: (I) Only
a single teacher is adopted, neglecting the potential that a student can learn
from multiple teachers. (II) The human segmentation mask can be regarded as
additional prior information to restrict the location of keypoints, which is
never utilized. (III) A student with a small number of parameters cannot fully
imitate heatmaps provided by datasets and teachers. (IV) There exists noise in
heatmaps generated by teachers, which causes model degradation. To overcome
these defects, we propose an orderly dual-teacher knowledge distillation (ODKD)
framework, which consists of two teachers with different capabilities.
Specifically, the weaker one (primary teacher, PT) is used to teach keypoints
information, the stronger one (senior teacher, ST) is utilized to transfer
segmentation and keypoints information by adding the human segmentation mask.
Taking the dual teachers together, an orderly learning strategy is proposed to
promote knowledge absorbability. Moreover, we employ a binarization operation
which further improves the learning ability of the student and reduces noise in
heatmaps. Experimental results on COCO and OCHuman keypoints datasets show that
our proposed ODKD can improve the performance of different lightweight models
by a large margin, and HRNet-W16 equipped with ODKD achieves state-of-the-art
performance for lightweight human pose estimation.
|
RGB-D salient object detection (SOD) is usually formulated as a problem of
classification or regression over two modalities, i.e., RGB and depth. Hence,
effective RGBD feature modeling and multi-modal feature fusion both play a
vital role in RGB-D SOD. In this paper, we propose a depth-sensitive RGB
feature modeling scheme using the depth-wise geometric prior of salient
objects. In principle, the feature modeling scheme is carried out in a
depth-sensitive attention module, which leads to the RGB feature enhancement as
well as the background distraction reduction by capturing the depth geometry
prior. Moreover, to perform effective multi-modal feature fusion, we further
present an automatic architecture search approach for RGB-D SOD, which does
well in finding out a feasible architecture from our specially designed
multi-modal multi-scale search space. Extensive experiments on seven standard
benchmarks demonstrate the effectiveness of the proposed approach against the
state-of-the-art.
|
This paper attempts to study the optimal stopping time for semi-Markov
processes (SMPs) under the discount optimization criteria with unbounded cost
rates. In our work, we introduce an explicit construction of the equivalent
semi-Markov decision processes (SMDPs). The equivalence is embodied in the
value functions of SMPs and SMDPs, that is, every stopping time of SMPs can
induce a policy of SMDPs such that the value functions are equal, and vice
versa. The existence of the optimal stopping time of SMPs is proved by this
equivalence relation. Next, we give the optimality equation of the value
function and develop an effective iterative algorithm for computing it.
Moreover, we show that the optimal and $\epsilon$-optimal stopping times can be
characterized by the hitting times of special sets. Finally, to illustrate
the validity of our results, an example of a maintenance system is presented.
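
As a toy illustration of the discounted optimal-stopping idea (a finite-state value iteration, not the paper's semi-Markov construction), with assumed transition probabilities, rewards, and discount factor:

```python
# Toy value iteration for a discounted optimal stopping problem on a finite
# Markov chain: V(s) = max( g(s), c(s) + beta * sum_s' P(s, s') V(s') ).
# Transition matrix, rewards, and discount are assumed for illustration only.
import numpy as np

P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])      # transition probabilities
g = np.array([1.0, 2.0, 5.0])        # reward for stopping in each state
c = np.array([0.1, 0.0, -0.2])       # running reward for continuing
beta = 0.9                           # discount factor

V = np.zeros(3)
for _ in range(500):
    V_new = np.maximum(g, c + beta * P @ V)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

stop_region = g >= c + beta * P @ V   # states where stopping is optimal
print("value function:", V, "stop in states:", np.where(stop_region)[0])
```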
|
Autoregressive models are widely used for tasks such as image and audio
generation. The sampling process of these models, however, does not allow
interruptions and cannot adapt to real-time computational resources. This
challenge impedes the deployment of powerful autoregressive models, which
involve a slow sampling process that is sequential in nature and typically
scales linearly with respect to the data dimension. To address this difficulty,
we propose a new family of autoregressive models that enables anytime sampling.
Inspired by Principal Component Analysis, we learn a structured representation
space where dimensions are ordered based on their importance with respect to
reconstruction. Using an autoregressive model in this latent space, we trade
off sample quality for computational efficiency by truncating the generation
process before decoding into the original data space. Experimentally, we
demonstrate in several image and audio generation tasks that sample quality
degrades gracefully as we reduce the computational budget for sampling. The
approach suffers almost no loss in sample quality (measured by FID) using only
60\% to 80\% of all latent dimensions for image data. Code is available at
https://github.com/Newbeeer/Anytime-Auto-Regressive-Model .
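
The importance-ordered latent space can be pictured with plain PCA, which the abstract names as inspiration; the sketch below is not the learned autoregressive model, and the dataset and component counts are assumptions.

```python
# Sketch of the ordered-latent intuition using plain PCA (not the paper's
# learned autoregressive latent model): truncating to the first k components
# trades reconstruction quality for computation gracefully.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)
pca = PCA(n_components=64).fit(X)
Z = pca.transform(X)

for k in (8, 16, 32, 64):
    Z_trunc = Z.copy()
    Z_trunc[:, k:] = 0.0                       # "anytime" truncation of the code
    X_rec = pca.inverse_transform(Z_trunc)
    mse = np.mean((X - X_rec) ** 2)
    print(f"latent dims kept: {k:2d}  reconstruction MSE: {mse:.3f}")
```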
|
A novel spin orientation mechanism, dynamic electron spin polarization, has
recently been suggested in Phys. Rev. Lett. $\mathbf{125}$, 156801 (2020). It
takes place for unpolarized optical excitation in weak magnetic fields of the
order of a few millitesla. In this paper we demonstrate experimentally and
theoretically that the dynamic electron spin polarization degree changes sign
as a function of time, strength of the applied magnetic field and its
direction. The studies are performed on indirect band-gap (In,Al)As/AlAs
quantum dots and their results are explained in the framework of a theoretical
model developed for our experimental setting.
|
We investigate the computability of algebraic closure and definable closure
with respect to a collection of formulas. We show that for a computable
collection of formulas of quantifier rank at most $n$, in any given computable
structure, both algebraic and definable closure with respect to that collection
are $\Sigma^0_{n+2}$ sets. We further show that these bounds are tight.
|
We prove the existence of an eddy heat diffusion coefficient coming from an
idealized model of turbulent fluid. A difficulty lies in the presence of a
boundary, with also turbulent mixing and the eddy diffusion coefficient going
to zero at the boundary. Nevertheless, enhanced diffusion takes place.
|
A simplification strategy for Segmented Mirror Splitters (SMS) used as beam
combiners is presented. These devices are useful for compact beam division and
combination of linear and 2-D arrays. However, the standard design requires
unique thin-film coating sections for each input beam, and thus the potential for
scaling to high beam counts is limited by manufacturing complexity. Taking
advantage of the relative insensitivity of the beam combination process to
amplitude variations, numerical techniques are used to optimize
highly-simplified designs with only one, two or three unique coatings. It is
demonstrated that with correctly chosen coating reflectivities, the simplified
optics are capable of high combination efficiency for several tens of beams.
The performance of these optics as beamsplitters in multicore fiber amplifier
systems is analyzed, and inhomogeneous power distribution of the simplified
designs is noted as a potential source of combining loss in such systems. These
simplified designs may facilitate further scaling of filled-aperture coherently
combined systems.
|
In this paper, we provide causal evidence on abortions and risky health
behaviors as determinants of mental health development among young women. Using
administrative in- and outpatient records from Sweden, we apply a novel grouped
fixed-effects estimator proposed by Bonhomme and Manresa (2015) to allow for
time-varying unobserved heterogeneity. We show that the positive association
obtained from standard estimators shrinks to zero once we control for grouped
time-varying unobserved heterogeneity. We estimate the group-specific profiles
of unobserved heterogeneity, which reflect differences in unobserved risk to be
diagnosed with a mental health condition. We then analyze mental health
development and risky health behaviors other than unwanted pregnancies across
groups. Our results suggest that these are determined by the same type of
unobserved heterogeneity, which we attribute to the same unobserved process of
decision-making. We develop and estimate a theoretical model of risky choices
and mental health, in which mental health disparity across groups is generated
by different degrees of self-control problems. Our findings imply that mental
health concerns cannot be used to justify restrictive abortion policies.
Moreover, potential self-control problems should be targeted as early as
possible to combat future mental health consequences.
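
For readers unfamiliar with the estimator, a minimal sketch of a grouped fixed-effects iteration in the spirit of Bonhomme and Manresa (2015) is given below, simplified to a model with no covariates; the simulated data, group count, and iteration cap are assumptions.

```python
# Minimal sketch of a grouped fixed-effects iteration in the spirit of
# Bonhomme and Manresa (2015), simplified to y_it = alpha_{g(i), t} + eps_it.
# The simulated data, number of groups, and iteration cap are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T, G = 200, 10, 3

true_profiles = np.cumsum(rng.normal(size=(G, T)), axis=1)   # group time profiles
true_groups = rng.integers(0, G, size=N)
Y = true_profiles[true_groups] + 0.3 * rng.standard_normal((N, T))

groups = rng.integers(0, G, size=N)                 # random initial assignment
for _ in range(50):
    # Step 1: group-specific time effects = mean outcome within each group.
    profiles = np.stack([Y[groups == g].mean(axis=0) if np.any(groups == g)
                         else np.zeros(T) for g in range(G)])
    # Step 2: reassign each unit to the group minimizing its squared residuals.
    dists = ((Y[:, None, :] - profiles[None, :, :]) ** 2).sum(axis=2)
    new_groups = dists.argmin(axis=1)
    if np.array_equal(new_groups, groups):
        break
    groups = new_groups

print("estimated group sizes:", np.bincount(groups, minlength=G))
```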
|
Recent years have witnessed a renewed interest in Boolean functions for
explaining binary classifiers in the field of explainable AI (XAI). The
standard approach to Boolean functions is propositional logic. We study a family
of classifier models, axiomatize it and show completeness of our axiomatics.
Moreover, we prove that satisfiability checking for our modal language relative
to such a class of models is NP-complete. We leverage the language to formalize
counterfactual conditionals as well as a variety of notions of explanation
including abductive, contrastive and counterfactual explanations, and biases.
Finally, we present two extensions of our language: a dynamic extension by the
notion of assignment enabling classifier change and an epistemic extension in
which the classifier's uncertainty about the actual input can be represented.
|
We discuss a probe of the contribution of wind-related shocks to the radio
emission in otherwise radio-quiet quasars. Given 1) the non-linear correlation
between UV and X-ray luminosity in quasars, 2) that such correlation leads to
higher likelihood of radiation-line-driven winds in more luminous quasars, and
3) that luminous quasars are more abundant at high redshift, deep radio
observations of high-redshift quasars are needed to probe potential
contributions from accretion disk winds. We target a sample of 50 $z\simeq
1.65$ color-selected quasars that span the range of expected accretion disk
wind properties as traced by broad CIV emission. 3-GHz observations with the
Very Large Array to an rms of $\approx10\mu$Jy beam$^{-1}$ probe to star
formation rates of $\approx400\,M_{\rm Sun}\,{\rm yr}^{-1}$, leading to 22
detections. Supplementing these pointed observations are survey data of 388
sources from the LOFAR Two-metre Sky Survey Data Release 1 that reach
comparable depth (for a typical radio spectral index), where 123 sources are
detected. These combined observations reveal a radio detection fraction that is
a non-linear function of CIV emission-line properties and suggest that the
data may require multiple origins of radio emission in radio-quiet quasars. We
find evidence for radio emission from weak jets or coronae in radio-quiet
quasars with low Eddington ratios, with either (or both) star formation and
accretion disk winds playing an important role in optically luminous quasars
and correlated with increasing Eddington ratio. Additional pointed radio
observations are needed to fully establish the nature of radio emission in
radio-quiet quasars.
|
Like adiabatic time-dependent density-functional theory (TD-DFT), the
Bethe-Salpeter equation (BSE) formalism of many-body perturbation theory, in
its static approximation, is "blind" to double (and higher) excitations, which
are ubiquitous, for example, in conjugated molecules like polyenes. Here, we
apply the spin-flip \textit{ansatz} (which considers the lowest triplet state
as the reference configuration instead of the singlet ground state) to the BSE
formalism in order to access, in particular, double excitations. The present
scheme is based on a spin-unrestricted version of the $GW$ approximation
employed to compute the charged excitations and screened Coulomb potential
required for the BSE calculations. Dynamical corrections to the static BSE
optical excitations are taken into account via an unrestricted generalization
of our recently developed (renormalized) perturbative treatment. The
performance of the present spin-flip BSE formalism is illustrated by computing
excited-state energies of the beryllium atom, the hydrogen molecule at various
bond lengths, and cyclobutadiene in its rectangular and square-planar
geometries.
|
Question answering from semi-structured tables can be seen as a semantic
parsing task and is significant and practical for pushing the boundary of
natural language understanding. Existing research mainly focuses on
understanding contents from unstructured evidence, e.g., news, natural language
sentences, and documents. The task of verification from structured evidence,
such as tables, charts, and databases, is still less explored. This paper
describes sattiy team's system in SemEval-2021 task 9: Statement Verification
and Evidence Finding with Tables (SEM-TAB-FACT). This competition aims to
verify statements and to find evidence from tables for scientific articles and
to promote the proper interpretation of the surrounding article. In this paper,
we exploited ensemble models of pre-trained language models over tables, TaPas
and TaBERT, for Task A and adjusted the result based on some rules extracted for
Task B. Finally, on the leaderboard, we attain F1 scores of 0.8496 and
0.7732 in Task A for the 2-way and 3-way evaluation, respectively, and the F1
score of 0.4856 in Task B.
|
A generalized method of alternating resolvents was introduced by Boikanyo and
Moro{\c s}anu as a way to approximate common zeros of two maximal monotone
operators. In this paper we analyse the strong convergence of this algorithm
under two different sets of conditions. As a consequence we obtain effective
rates of metastability (in the sense of Terence Tao) and quasi-rates of
asymptotic regularity. Furthermore, we bypass the need for sequential weak
compactness in the original proofs. Our quantitative results are obtained using
proof-theoretical techniques in the context of the proof mining program.
|
At the latest since the advent of the Internet, disinformation and conspiracy
theories have become ubiquitous. Recent examples like QAnon and Pizzagate prove
that false information can lead to real violence. In this motivation statement
for the Workshop on Human Aspects of Misinformation at CHI 2021, I explain my
research agenda focused on 1. why people believe in disinformation, 2. how
people can be best supported in recognizing disinformation, and 3. what the
potentials and risks of different tools designed to fight disinformation are.
|
We investigate the asymptotic risk of a general class of overparameterized
likelihood models, including deep models. The recent empirical success of
large-scale models has motivated several theoretical studies to investigate a
scenario wherein both the number of samples, $n$, and parameters, $p$, diverge
to infinity and derive an asymptotic risk at the limit. However, these theorems
are only valid for linear-in-feature models, such as generalized linear
regression, kernel regression, and shallow neural networks. Hence, it is
difficult to investigate a wider class of nonlinear models, including deep
neural networks with three or more layers. In this study, we consider a
likelihood maximization problem without the model constraints and analyze the
upper bound of an asymptotic risk of an estimator with penalization.
Technically, we combine a property of the Fisher information matrix with an
extended Marchenko-Pastur law and associate the combination with empirical
process techniques. The derived bound is general, as it describes both the
double descent and the regularized risk curves, depending on the penalization.
Our results are valid without the linear-in-feature constraints on models and
allow us to derive the general spectral distributions of a Fisher information
matrix from the likelihood. We demonstrate that several explicit models, such
as parallel deep neural networks, ensemble learning, and residual networks, are
in agreement with our theory. This result indicates that even large and deep
models have a small asymptotic risk if they exhibit a specific structure, such
as divisibility. To verify this finding, we conduct a real-data experiment with
parallel deep neural networks. Our results expand the applicability of the
asymptotic risk analysis, and may also contribute to the understanding and
application of deep learning.
|
We study Lagrangian systems with a finite number of degrees of freedom that
are non-local in time. We obtain an extension of Noether's theorem and the Noether
identities to this kind of Lagrangian. A Hamiltonian formalism is then set up
for these systems. Local Lagrangians of order $n$ can be treated as a particular
case and the standard results for them are recovered. The method is then
applied to several other cases, namely two examples of non-local oscillators
and the p-adic particle.
|
Quantum algorithms for computing classical nonlinear maps are widely known
for toy problems but might not suit potential applications to realistic physics
simulations. Here, we propose how to compute a general differentiable
invertible nonlinear map on a quantum computer using only linear unitary
operations. The price of this universality is that the original map is
represented adequately only on a finite number of iterations. More iterations
produce spurious echoes, which are unavoidable in any finite unitary emulation
of generic non-conservative dynamics. Our work is intended as the first survey
of these issues and possible ways to overcome them in the future. We propose
how to monitor spurious echoes via auxiliary measurements, and we illustrate our
results with numerical simulations.
|
Assessing the exploitability of software vulnerabilities at the time of
disclosure is difficult and error-prone, as features extracted via technical
analysis by existing metrics are poor predictors for exploit development.
Moreover, exploitability assessments suffer from a class bias because "not
exploitable" labels could be inaccurate.
To overcome these challenges, we propose a new metric, called Expected
Exploitability (EE), which reflects, over time, the likelihood that functional
exploits will be developed. Key to our solution is a time-varying view of
exploitability, a departure from existing metrics, which allows us to learn EE
using data-driven techniques from artifacts published after disclosure, such as
technical write-ups, proof-of-concept exploits, and social media discussions.
Our analysis reveals that prior features proposed for related exploit
prediction tasks are not always beneficial for predicting functional exploits,
and we design novel feature sets to capitalize on previously under-utilized
artifacts.
This view also allows us to investigate the effect of the label biases on the
classifiers. We characterize the noise-generating process for exploit
prediction, showing that our problem is subject to class- and feature-dependent
label noise, considered the most challenging type. By leveraging
domain-specific observations, we then develop techniques to incorporate noise
robustness into learning EE.
On a dataset of 103,137 vulnerabilities, we show that EE increases precision
from 49\% to 86\% over existing metrics, including two state-of-the-art exploit
classifiers, with the performance of our metric also improving over time. EE
scores capture exploitation imminence, by distinguishing exploits which are
going to be developed in the near future.
|
We introduce the "inverse bandit" problem of estimating the rewards of a
multi-armed bandit instance from observing the learning process of a low-regret
demonstrator. Existing approaches to the related problem of inverse
reinforcement learning assume the execution of an optimal policy, and thereby
suffer from an identifiability issue. In contrast, our paradigm leverages the
demonstrator's behavior en route to optimality, and in particular, the
exploration phase, to obtain consistent reward estimates. We develop simple and
efficient reward estimation procedures for demonstrations within a class of
upper-confidence-based algorithms, showing that reward estimation gets
progressively easier as the regret of the algorithm increases. We match these
upper bounds with information-theoretic lower bounds that apply to any
demonstrator algorithm, thereby characterizing the optimal tradeoff between
exploration and reward estimation. Extensive empirical evaluations on both
synthetic data and simulated experimental design data from the natural sciences
corroborate our theoretical results.
|
Interevent times in temporal contact data from humans and animals typically
obey heavy-tailed distributions, and this property impacts contagion and other
dynamical processes on networks. We theoretically show that distributions of
interevent times heavier-tailed than exponential distributions are a
consequence of the most basic metapopulation model used in epidemiology and
ecology, in which individuals move from one patch to another according to the
simple random walk. Our results hold true irrespective of the network
structure and also for more realistic mobility rules such as high-order random
walks and the recurrent mobility patterns used for modeling human dynamics.
|
We present observations of a region of the Galactic plane taken during the
Early Science Program of the Australian Square Kilometre Array Pathfinder
(ASKAP). In this context, we observed the SCORPIO field at 912 MHz with an
incomplete array consisting of 15 commissioned antennas. The resulting map
covers a square region of ~40 deg^2, centred on (l, b)=(343.5{\deg},
0.75{\deg}), with a synthesized beam of 24"x21" and a background rms noise of
150-200 {\mu}Jy/beam, increasing to 500-600 {\mu}Jy/beam close to the Galactic
plane. A total of 3963 radio sources were detected and characterized in the
field using the CAESAR source finder. We obtained differential source counts in
agreement with previously published data after correction for source extraction
and characterization uncertainties, estimated from simulated data. The ASKAP
positional and flux density scale accuracy were also investigated through
comparison with previous surveys (MGPS, NVSS) and additional observations of
the SCORPIO field, carried out with ATCA at 2.1 GHz and 10" spatial resolution.
These allowed us to obtain a measurement of the spectral index for a subset of
the catalogued sources and an estimated fraction of (at least) 8% of resolved
sources in the reported catalogue. We cross-matched our catalogued sources with
different astronomical databases to search for possible counterparts, finding
~150 associations to known Galactic objects. Finally, we explored a
multiparametric approach for classifying previously unreported Galactic sources
based on their radio-infrared colors.
|
In this paper, a general class of mixtures of densities is proposed. The
proposed class contains some classical and weighted distributions as special
cases. Formulas for the cumulative distribution function, reliability
function, hazard rate function, $r$th raw moments, characteristic function,
stress-strength reliability and Tsallis entropy of a given order are derived.
|
Attribution methods provide an insight into the decision-making process of
machine learning models, especially deep neural networks, by assigning
contribution scores to each individual feature. However, the attribution
problem has not been well defined, lacking a unified guideline for the
contribution assignment process. Furthermore, existing attribution methods are
often built upon various empirical intuitions and heuristics. There is still no
general theoretical framework that not only can offer a good description of
the attribution problem, but also can be applied to unifying and revisiting
existing attribution methods. To bridge the gap, in this paper, we propose a
Taylor attribution framework, which models the attribution problem as how to
decide individual payoffs in a coalition. Then, we reformulate fourteen
mainstream attribution methods into the Taylor framework and analyze these
attribution methods in terms of rationale, fidelity, and limitation in the
framework. Moreover, we establish three principles for a good attribution in
the Taylor attribution framework, i.e., low approximation error, correct Taylor
contribution assignment, and unbiased baseline selection. Finally, we
empirically validate the Taylor reformulations and reveal a positive
correlation between the attribution performance and the number of principles
followed by the attribution method via benchmarking on real-world datasets.
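
As a concrete and deliberately simple instance of what such a framework can express, the sketch below computes a first-order Taylor attribution, i.e. gradient times (input minus baseline); the toy model, baseline, and finite-difference gradient are assumptions.

```python
# Sketch of a first-order Taylor attribution (gradient x (input - baseline)),
# one of the simplest schemes a Taylor framework can express. The model,
# baseline, and finite-difference gradient are illustrative assumptions.
import numpy as np

def model(x):
    # Toy differentiable "model": a smooth scalar function of three features.
    return np.tanh(2.0 * x[0] - x[1]) + 0.5 * x[2] ** 2

def numerical_gradient(f, x, eps=1e-6):
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

x = np.array([0.8, -0.3, 1.2])
baseline = np.zeros_like(x)

grad = numerical_gradient(model, baseline)
attributions = grad * (x - baseline)        # first-order Taylor payoff per feature
approx_error = model(x) - model(baseline) - attributions.sum()

print("attributions:", attributions)
print("first-order approximation error:", approx_error)
```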
|
We develop some basic concepts in the theory of higher categories internal to
an arbitrary $\infty$-topos. We define internal left and right fibrations and
prove a version of the Grothendieck construction and of Yoneda's lemma for
internal categories.
|
Much interesting complex biological behaviour arises from collective
properties. Important information about collective behaviour lies in the time
and space structure of fluctuations around average properties, and two-point
correlation functions are a fundamental tool to study these fluctuations. We
give a self-contained presentation of definitions and techniques for
computation of correlation functions aimed at providing students and
researchers outside the field of statistical physics a practical guide to
calculating correlation functions from experimental and simulation data. We
discuss some properties of correlations in critical systems, and the effect of
finite system size, which is particularly relevant for most biological
experimental systems. Finally, we apply these techniques to the case of the
dynamical transition in a simple neuronal model.
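
As a minimal worked example of the kind of computation such a guide covers, the sketch below estimates a connected two-point time correlation function from a sampled signal; the synthetic AR(1) signal and lag range are assumptions.

```python
# Minimal example of estimating a connected two-point time correlation
# function C(tau) = <(x(t) - <x>)(x(t + tau) - <x>)> from a sampled signal.
# The synthetic AR(1) signal is an assumption used only for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, phi = 100_000, 0.9
x = np.zeros(n)
for t in range(1, n):                      # AR(1): known correlation phi**tau
    x[t] = phi * x[t - 1] + rng.standard_normal()

dx = x - x.mean()
max_lag = 50
C = np.array([np.mean(dx[: n - tau] * dx[tau:]) for tau in range(max_lag + 1)])
C_normalized = C / C[0]                    # normalized so C(0) = 1

print(C_normalized[:5])                    # should decay roughly like phi**tau
```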
|
In this paper, we propose a time-dependent
Susceptible-Exposed-Infectious-Recovered-Died (SEIRD) reaction-diffusion system
for the COVID-19 pandemic and we deal with its derivation from a kinetic model.
The derivation is obtained by mathematical description delivered at the
micro-scale of individuals. Our approach is based on the micro-macro
decomposition which leads to an equivalent formulation of the kinetic model
which couples the microscopic equations with the macroscopic equations. We
develop a numerical asymptotic preservation scheme to solve the kinetic model.
The proposed approach is validated by various numerical tests, with particular
attention paid to the Moroccan situation during the current pandemic.
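
For orientation, a plain ODE sketch of the macroscopic SEIRD compartments is given below; it omits the spatial diffusion and kinetic micro-macro coupling of the paper, and all rate parameters are assumptions.

```python
# Plain ODE sketch of the macroscopic SEIRD compartments (no spatial diffusion
# and no kinetic micro-macro coupling as in the paper); all rates are assumed.
beta, sigma, gamma, mu = 0.4, 1 / 5.0, 1 / 10.0, 0.01   # assumed rates
N = 1.0
S, E, I, R, D = 0.99, 0.01, 0.0, 0.0, 0.0
dt, days = 0.1, 160

history = []
for _ in range(int(days / dt)):           # forward Euler integration
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - (gamma + mu) * I
    dR = gamma * I
    dD = mu * I
    S, E, I, R, D = (S + dt * dS, E + dt * dE, I + dt * dI,
                     R + dt * dR, D + dt * dD)
    history.append((S, E, I, R, D))

print("final fractions S, E, I, R, D:", tuple(round(v, 4) for v in history[-1]))
```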
|
Luminescent multifunctional nanomaterials are important because of their
potential impact on the development of key technologies such as smart
luminescent sensors and solid-state lightings. To be technologically viable,
the luminescent material needs to fulfil a number of requirements such as
facile and cost-effective fabrication, a high quantum yield, structural
robustness, and long-term material stability. To achieve these requirements, an
eco-friendly and scalable synthesis of a highly photoluminescent, multistimuli
responsive and electroluminescent silver-based metal-organic framework
(Ag-MOF), termed "OX-2", was developed. Its exceptional photophysical and
mechanically resilient properties, which can be reversibly switched by
temperature and pressure, make this material stand out over other competing
luminescent materials. The potential use of the OX-2 MOF as a good
electroluminescent material was tested by constructing a proof-of-concept
MOF-LED (light emitting diode) device, further contributing to the rare
examples of electroluminescent MOFs. The results reveal the huge potential for
exploiting the Ag MOF as a multitasking platform to engineer innovative
photonic technologies.
|
I find that several models for information sharing in social networks can be
interpreted as age-dependent multi-type branching processes, and build them
independently following Sewastjanow. This allows to characterize criticality in
(real and random) social networks. For random networks, I develop a
moment-closure method that handles the high-dimensionality of these models: By
modifying the timing of sharing with followers, all users can be represented by
a single representative, while leaving the total progeny unchanged. Thus I
compute the exact popularity distribution, revealing the viral character of
critical models expressed by fat tails of order minus three halves.
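
The minus-three-halves tail can be illustrated with a much simpler single-type critical Galton-Watson process (a simplification of the age-dependent multi-type models above); the offspring law, truncation, and sample size are assumptions.

```python
# Illustrative simulation: a single-type critical Galton-Watson process with
# Poisson(1) offspring, a simplification of the age-dependent multi-type models
# above. Its total-progeny distribution has the ~ n^(-3/2) fat tail.
import numpy as np

rng = np.random.default_rng(0)

def total_progeny(max_size=10_000):
    alive, total = 1, 1
    while alive > 0 and total < max_size:
        children = rng.poisson(1.0, size=alive).sum()
        total += children
        alive = children
    return total

sizes = np.array([total_progeny() for _ in range(20_000)])
for n in (10, 100, 1000):
    print(f"P(total progeny >= {n}):", np.mean(sizes >= n))
# Survival probabilities should fall roughly like n^(-1/2),
# i.e. a density tail of order n^(-3/2).
```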
|
We address the task of domain generalization, where the goal is to train a
predictive model such that it is able to generalize to a new, previously unseen
domain. We choose a hierarchical generative approach within the framework of
variational autoencoders and propose a domain-unsupervised algorithm that is
able to generalize to new domains without domain supervision. We show that our
method is able to learn representations that disentangle domain-specific
information from class-label specific information even in complex settings
where domain structure is not observed during training. Our interpretable
method outperforms previously proposed generative algorithms for domain
generalization as well as other non-generative state-of-the-art approaches in
several hierarchical domain settings including sequential overlapped near
continuous domain shift. It also achieves competitive performance on the
standard domain generalization benchmark dataset PACS compared to
state-of-the-art approaches which rely on observing domain-specific information
during training, as well as another domain-unsupervised method. Additionally,
we propose model selection based purely on the Evidence Lower Bound (ELBO) and
also propose weak domain supervision, where implicit domain information can be
added into the algorithm.
|
Generative adversarial networks (GANs) are a framework for generating fake data
based on given real data but are unstable in optimization. In order to stabilize
GANs, injected noise enlarges the overlap of the real and fake distributions at the
cost of significant variance. Data smoothing may reduce the dimensionality
of data but suppresses the capability of GANs to learn high-frequency
information. Based on these observations, we propose a data representation for
GANs, called noisy scale-space, that recursively applies the smoothing with
noise to data in order to preserve the data variance while replacing
high-frequency information by random data, leading to a coarse-to-fine training
of GANs. We also present a synthetic data-set using the Hadamard bases that
enables us to visualize the true distribution of data. We experiment with a
DCGAN with the noisy scale-space (NSS-GAN) using major data-sets, in which
NSS-GAN outperformed the state of the art in most cases independent of the image
content.
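
A rough sketch of constructing such a noisy scale-space for a single image is given below (not the paper's training code); the filter width, noise amplitude, and number of levels are assumptions.

```python
# Sketch of constructing a noisy scale-space for one image: recursively smooth
# and add noise, replacing high-frequency content with random data. The filter
# width, noise amplitude, and level count are assumed for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def noisy_scale_space(image, levels=4, sigma=1.0, noise_std=0.1):
    """Return a list [finest, ..., coarsest] of noisy smoothed versions."""
    pyramid = [image]
    current = image
    for _ in range(levels):
        current = gaussian_filter(current, sigma=sigma)
        current = current + noise_std * rng.standard_normal(current.shape)
        pyramid.append(current)
    return pyramid

image = rng.random((32, 32))
pyramid = noisy_scale_space(image)
print([f"{lvl.std():.3f}" for lvl in pyramid])  # inspect statistics across levels
```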
|
This paper discusses the design, implementation and field trials of WiMesh -
a resilient Wireless Mesh Network (WMN) based disaster communication system
purpose-built for underdeveloped and rural parts of the world. Mesh networking
is a mature area, and the focus of this paper is not on proposing novel models,
protocols or other mesh solutions. Instead, the paper focuses on the
identification of important design considerations and justifications for
several design trade-offs in the context of mesh networking for disaster
communication in developing countries with very limited resources. These
trade-offs are discussed in the context of key desirable traits including
security, low cost, low power, size, availability, customization, portability,
ease of installation and deployment, and coverage area among others. We discuss
at length the design, implementation, and field trial results of the WiMesh
system which enables users spread over large geographical regions, to
communicate with each other despite the lack of cellular coverage, power, and
other communication infrastructure by leveraging multi-hop mesh networking and
Wi-Fi equipped handheld devices. Lessons learned along with real-world results
are shared for WiMesh deployment in a remote rural mountainous village of
Pakistan, and the source code is shared with the research community.
|
Learning by interaction is the key to skill acquisition for most living
organisms, which is formally called Reinforcement Learning (RL). RL is
efficient in finding optimal policies for endowing complex systems with
sophisticated behavior. All paradigms of RL utilize a system model for finding
the optimal policy. Modeling dynamics can be done by formulating a mathematical
model or system identification. Dynamic models are usually exposed to aleatoric
and epistemic uncertainties that can cause the true dynamics to deviate from the
acquired model and cause the RL algorithm to exhibit erroneous behavior.
Accordingly, the RL process becomes sensitive to operating conditions and
changes in model parameters and loses its generality. To address these problems,
intensive system identification
for modeling purposes is needed for each system even if the model dynamics
structure is the same, as the slight deviation in the model parameters can
render the model useless in RL. The existence of an oracle that can adaptively
predict the rest of the trajectory regardless of the uncertainties can help
resolve the issue. The target of this work is to present a framework for
facilitating the system identification of different instances of the same
dynamics class by learning a probability distribution of the dynamics
conditioned on observed data using variational inference, and to show its
reliability in robustly solving different instances of control problems with
the same model in model-based RL with maximum sample efficiency.
|
Distributed quantum metrology can enhance the sensitivity for sensing
spatially distributed parameters beyond the classical limits. Here we
demonstrate distributed quantum phase estimation with discrete variables to
achieve Heisenberg limit phase measurements. Based on parallel entanglement in
modes and particles, we demonstrate distributed quantum sensing for both
individual phase shifts and an averaged phase shift, with an error reduction up
to 1.4 dB and 2.7 dB below the shot-noise limit. Furthermore, we demonstrate a
combined strategy with parallel mode entanglement and multiple passes of the
phase shifter in each mode. In particular, our experiment uses six entangled
photons with each photon passing the phase shifter up to six times, and
achieves a total number of photon passes N=21 at an error reduction up to 4.7
dB below the shot-noise limit. Our research provides a faithful verification of
the benefit of entanglement and coherence for distributed quantum sensing in
general quantum networks.
|
We consider a simple model of a stochastic heat engine, which consists of a
single Brownian particle moving in a one-dimensional periodically breathing
harmonic potential. The overdamped limit is assumed. Expressions for the second
moments (variances and covariances) of heat and work are obtained in the form of
integrals, whose integrands contain functions satisfying certain differential
equations. The results in the quasi-static limit are simple functions of
temperatures of hot and cold thermal baths. The coefficient of variation of the
work is suggested to give an approximate probability for the work to exceed a
certain threshold. In the course of the derivation, we also obtain the
expression for the cumulant-generating function.
|
We apply general moment identities for Poisson stochastic integrals with
random integrands to the computation of the moments of Markovian
growth-collapse processes. This extends existing formulas for the mean and
variance available in the literature to closed-form moment expressions of all
orders.
In comparison with other methods based on differential equations, our approach
yields polynomial expressions in the time parameter. We also treat the case of
the associated embedded chain.
|
Studies on stratospheric ozone have attracted much attention due to its
serious impacts on climate change and its important role as a tracer of
Earth's global circulation. Tropospheric ozone as a main atmospheric pollutant
damages human health as well as the growth of vegetation. Yet there is still a
lack of a theoretical framework to fully describe the variation of ozone. To
understand ozone's spatiotemporal variance, we introduce the eigen microstate
method to analyze the global ozone mass mixing ratio (OMMR) between 1979-01-01
and 2020-06-30 at 37 pressure layers. We find that eigen microstates at
different geopotential heights can capture different climate phenomena and
modes. Without deseasonalization, the first eigen microstates capture the
seasonal effect and reveal that the phase of the intra-annual cycle moves with
the geopotential heights. After deseasonalization, by contrast, the collective
patterns from the overall trend, ENSO, QBO, and tropopause pressure are
identified by the first few significant eigen microstates. The theoretical
framework proposed here can also be applied to other complex Earth systems.
|
Effective and causal observable functions for low-order lifting linearization
of nonlinear controlled systems are learned from data by using neural networks.
While Koopman operator theory allows us to represent a nonlinear system as a
linear system in an infinite-dimensional space of observables, exact
linearization is guaranteed only for autonomous systems with no input, and
finding effective observable functions for approximation with a low-order
linear system remains an open question. Dual-Faceted Linearization uses a set
of effective observables for low-order lifting linearization, but the method
requires knowledge of the physical structure of the nonlinear system. Here, a
data-driven method is presented for generating a set of nonlinear observable
functions that can accurately approximate a nonlinear control system to a
low-order linear control system. A caveat in using data of measured variables
as observables is that the measured variables may contain input to the system,
which incurs a causality contradiction when lifting the system, i.e. taking
derivatives of the observables. The current work presents a method for
eliminating such anti-causal components of the observables and lifting the
system using only causal observables. The method is applied to excavation
automation, a complex nonlinear dynamical system, to obtain a low-order lifted
linear model for control design.
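As a minimal sketch of the kind of lifted model such methods target (the
notation here is generic and not taken from the paper), the state $x$ and input
$u$ are mapped to observables $z = \phi(x)$ that evolve approximately linearly,
$$ z = \phi(x), \qquad \dot{z} \approx A z + B u, \qquad \hat{x} = C z, $$
where $\phi$ is the learned set of causal observable functions and $A$, $B$,
$C$ define the low-order linear control model.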
|
With the widespread use and adoption of mobile platforms like Android a new
software quality concern has emerged -- energy consumption. However, developing
energy-efficient software and applications requires knowledge and likewise
proper tooling to support mobile developers. To this aim, we present an
approach to examine the energy evolution of software revisions based on their
API interactions. The approach stems from the assumption that the utilization
of an API has direct implications on the energy being consumed during runtime.
Based on an empirical evaluation, we show initial results that API interactions
serve as a flexible, lightweight, and effective way to compare software
revisions regarding their energy evolution. Given our initial results, we
envision that, using our approach, mobile developers will in future be able to
gain insight into the energy implications of source-code changes over the
course of the software development life-cycle.
|
In this work we develop a novel characterization of marginal causal effect
and causal bias in the continuous treatment setting. We show they can be
expressed as an expectation with respect to a conditional probability
distribution, which can be estimated via standard statistical and probabilistic
methods. All terms in the expectations can be computed via automatic
differentiation, also for highly non-linear models. We further develop a new
complete criterion for identifiability of causal effects via covariate
adjustment, showing the bias equals zero if the criterion is met. We study the
effectiveness of our framework in three different scenarios: linear models
under confounding, overcontrol and endogenous selection bias; a non-linear
model where full identifiability cannot be achieved because of missing data; a
simulated medical study of statins and atherosclerotic cardiovascular disease.
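For orientation, the standard covariate-adjustment identity reads, for a valid
adjustment set $Z$,
$$ \mathbb{E}\bigl[Y \mid \mathrm{do}(X = x)\bigr] \;=\; \mathbb{E}_{Z}\Bigl[\,\mathbb{E}[Y \mid X = x, Z]\,\Bigr], $$
and the causal bias can be read as the gap between this quantity and the
unadjusted estimand; the paper's exact criterion and estimators are not
reproduced here.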
|
In weather disasters, first responders access dedicated communication
channels different from civilian commercial channels to facilitate rescues.
However, rescues in recent disasters have increasingly involved civilian and
volunteer forces, requiring civilian channels not to be overloaded with
traffic. We explore seven enhancements to the wording of Wireless Emergency
Alerts (WEAs) and their effectiveness in getting smartphone users to comply,
including reducing frivolous mobile data consumption during critical weather
disasters. We conducted a between-subjects survey (N=898), in which
participants were either assigned no alert (control) or an alert framed as
Basic Information, Altruism, Multimedia, Negative Feedback, Positive Feedback,
Reward, or Punishment. We find that Basic Information alerts resulted in the
largest reduction of multimedia and video services usage; we also find that
Punishment alerts have the lowest absolute compliance. This work has
implications for creating more effective WEAs and providing a better
understanding of how wording can affect emergency alert compliance.
|
Malaria is an infectious disease with an immense global health burden.
Plasmodium vivax is the most geographically widespread species of malaria.
Relapsing infections, caused by the activation of liver-stage parasites known
as hypnozoites, are a critical feature of the epidemiology of Plasmodium vivax.
Hypnozoites remain dormant in the liver for weeks or months after inoculation,
but cause relapsing infections upon activation. Here, we introduce a dynamic
probability model of the activation-clearance process governing both potential
relapses and the size of the hypnozoite reservoir. We begin by modelling
activation-clearance dynamics for a single hypnozoite using a continuous-time
Markov chain. We then extend our analysis to consider activation-clearance
dynamics for a single mosquito bite, which can simultaneously establish
multiple hypnozoites, under the assumption of independent hypnozoite behaviour.
We derive analytic expressions for the time to first relapse and the time to
hypnozoite clearance for mosquito bites establishing variable numbers of
hypnozoites, both of which are quantities of epidemiological significance. Our
results extend those in the literature, which were limited due to an assumption
of non-independence. Our within-host model can be embedded readily in
multi-scale models and epidemiological frameworks, with analytic solutions
increasing the tractability of statistical inference and analysis. Our work
therefore provides a foundation for further work on immune development and
epidemiological-scale analysis, both of which are important for achieving the
goal of malaria elimination.
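As an illustrative single-hypnozoite sketch (with hypothetical rate symbols,
not necessarily those of the paper), suppose a dormant hypnozoite activates at
rate $\alpha$ and is cleared at rate $\mu$; then the dormant sojourn time is
exponential with rate $\alpha + \mu$ and
$$ P(\text{activation before clearance}) = \frac{\alpha}{\alpha + \mu}, $$
which is the kind of competing-risks building block that the full
activation-clearance model generalizes.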
|
Academic neural models for coreference resolution (coref) are typically
trained on a single dataset, OntoNotes, and model improvements are benchmarked
on that same dataset. However, real-world applications of coref depend on the
annotation guidelines and the domain of the target dataset, which often differ
from those of OntoNotes. We aim to quantify transferability of coref models
based on the number of annotated documents available in the target dataset. We
examine eleven target datasets and find that continued training is consistently
effective and especially beneficial when there are few target documents. We
establish new benchmarks across several datasets, including state-of-the-art
results on PreCo.
|
In many randomized clinical trials of therapeutics for COVID-19, the primary
outcome is an ordinal categorical variable, and interest focuses on the odds
ratio (active agent vs. control) under the assumption of a proportional odds
model. Although at the final analysis the outcome will be determined for all
subjects, at an interim analysis, the status of some participants may not yet
be determined, e.g., because ascertainment of the outcome may not be possible
until some pre-specified follow-up time. Accordingly, the outcome from these
subjects can be viewed as censored. A valid interim analysis can be based on
data only from those subjects with full follow up; however, this approach is
inefficient, as it does not exploit additional information that may be
available on those for whom the outcome is not yet available at the time of the
interim analysis. Appealing to the theory of semiparametrics, we propose an
estimator for the odds ratio in a proportional odds model with censored,
time-lagged categorical outcome that incorporates additional baseline and
time-dependent covariate information and demonstrate that it can result in
considerable gains in efficiency relative to simpler approaches. A byproduct of
the approach is a covariate-adjusted estimator for the odds ratio based on the
full data that would be available at a final analysis.
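For orientation, in one common parameterization the proportional odds model for
an ordinal outcome $Y \in \{1,\dots,K\}$ and treatment indicator $A$ is
$$ \operatorname{logit} P(Y \le j \mid A) = \alpha_j + \beta A, \qquad j = 1,\dots,K-1, $$
so that $e^{\beta}$ (or $e^{-\beta}$, depending on the orientation of the
ordinal scale) is the common odds ratio being estimated.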
|
Scoring rules measure the deviation between a probabilistic forecast and
reality. Strictly proper scoring rules have the property that for any forecast,
the mathematical expectation of the score of a forecast p by the lights of p is
strictly better than the mathematical expectation of any other forecast q by
the lights of p. Probabilistic forecasts need not satisfy the axioms of the
probability calculus, but Predd et al. (2009) have shown that given a finite
sample space and any strictly proper additive and continuous scoring rule, the
score for any forecast that does not satisfy the axioms of probability is
strictly dominated by the score for some probabilistically consistent forecast.
Recently, this result has been extended to non-additive continuous scoring
rules. In this paper, a condition weaker than continuity is given that suffices
for the result, and the condition is proved to be optimal.
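Stated explicitly, with scores oriented so that lower is better, strict
propriety means that for every probability forecast $p$ and every forecast
$q \neq p$,
$$ \mathbb{E}_{X \sim p}\bigl[s(p, X)\bigr] \;<\; \mathbb{E}_{X \sim p}\bigl[s(q, X)\bigr]. $$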
|
We prove that, if $\mathcal{GP}$ is the class of all Gorenstein projective
modules over a ring $R$, then $\mathfrak{GP}=(\mathcal{GP},\mathcal{GP}^\perp)$
is a cotorsion pair. Moreover, $\mathfrak{GP}$ is complete when all projective
modules are $\lambda$-pure-injective for some infinite regular cardinal
$\lambda$ (in particular, if $R$ is right $\Sigma$-pure-injective).
We obtain these results, on the one hand, studying the class of totally
acyclic complexes over $R$. We prove that, when $R$ is $\Sigma$-pure-injective,
this class is deconstructible and forms a coreflective subcategory of the
homotopy category of the projective modules. On the other hand, we use results
about $\lambda$-pure-injective modules for infinite regular cardinals
$\lambda$.
Finally, under different set-theoretical hypotheses, we show that for an
arbitrary ring $R$, the following hold: (1) There exists an infinite regular
cardinal number $\lambda$ such that every projective module is
$\lambda$-pure-injective (and $\mathfrak{GP}$ is complete). (2) $R$ is right
pure-semisimple if and only if there exists a regular uncountable $\lambda$
such that $\mathrm{Mod}$-$R$ has enough $\lambda$-pure-injective objects.
|
Intensity mapping of the 21cm signal of neutral hydrogen will yield exciting
insights into the Epoch of Reionisation and the nature of the first galaxies.
However, the large amount of data that will be generated by the next generation
of radio telescopes, such as the Square Kilometre Array (SKA), as well as the
numerous observational obstacles to overcome, require analysis techniques tuned
to extract the reionisation history and morphology. In this context, we
introduce a one-point statistic, which we refer to as the local variance,
$\sigma_\mathrm{loc}$, that describes the distribution of the mean differential
21cm brightness temperatures measured in two-dimensional maps along the
frequency direction of a light-cone. The local variance takes advantage of what
is usually considered an observational bias, the sample variance. We find the
redshift-evolution of the local variance to not only probe the reionisation
history of the observed patches of the sky, but also trace the ionisation
morphology. This estimator provides a promising tool to constrain the midpoint
of reionisation as well as to gain insight into the ionising properties of
early galaxies.
|
Temperature fluctuations of a finite system follow the Landau bound $\delta
T^2 = T^2/C(T)$ where $C(T)$ is the heat capacity of the system. In turn, the
same bound sets a limit to the precision of temperature estimation when the
system itself is used as a thermometer. In this paper, we employ graph theory
and the concept of Fisher information to assess the role of topology on the
thermometric performance of a given system. We find that low connectivity is a
resource to build precise thermometers working at low temperatures, whereas
highly connected systems are suitable for higher temperatures. Upon modelling
the thermometer as a set of vertices for the quantum walk of an excitation, we
compare the precision achievable by position measurement to the optimal one,
which itself corresponds to energy measurement.
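For orientation (a standard relation, assuming a Gibbs state and units with
$k_B = 1$, not a result specific to this paper), the estimation-theoretic
reading of the Landau bound is
$$ \delta T^2 \;\ge\; \frac{1}{M\,F_Q(T)}, \qquad F_Q(T) \;=\; \frac{\mathrm{Var}(H)}{T^4} \;=\; \frac{C(T)}{T^2}, $$
so a single measurement ($M=1$) saturating the quantum Fisher information $F_Q$
recovers $\delta T^2 = T^2/C(T)$.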
|
Deep neural networks have been widely used for feature learning in facial
expression recognition systems. However, small datasets and large intra-class
variability can lead to overfitting. In this paper, we propose a method which
learns an optimized compact network topology for real-time facial expression
recognition utilizing localized facial landmark features. Our method employs a
spatio-temporal bilinear layer as backbone to capture the motion of facial
landmarks during the execution of a facial expression effectively. In addition,
it takes advantage of Monte Carlo Dropout to capture the model's uncertainty,
which is of great importance for analyzing and treating uncertain cases. The
performance of
our method is evaluated on three widely used datasets and it is comparable to
that of video-based state-of-the-art methods while it has much less complexity.
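A minimal sketch of the Monte Carlo Dropout idea used for uncertainty
(illustrative PyTorch-style code; the model, sample count, and summary
statistics here are assumptions, not the paper's implementation):

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    # Keep dropout layers stochastic at inference time by staying in train mode,
    # then summarize the spread of repeated predictions as an uncertainty proxy.
    model.train()
    with torch.no_grad():
        preds = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```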
|
Accurate evaluation of the treatment result on X-ray images is a significant
and challenging step in root canal therapy since the incorrect interpretation
of the therapy results will hamper timely follow-up which is crucial to the
patients' treatment outcome. Nowadays, the evaluation is performed in a manual
manner, which is time-consuming, subjective, and error-prone. In this paper, we
aim to automate this process by leveraging the advances in computer vision and
artificial intelligence, to provide an objective and accurate method for root
canal therapy result assessment. A novel anatomy-guided multi-branch
Transformer (AGMB-Transformer) network is proposed, which first extracts a set
of anatomy features and then uses them to guide a multi-branch Transformer
network for evaluation. Specifically, we design a polynomial curve fitting
segmentation strategy with the help of landmark detection to extract the
anatomy features. Moreover, a branch fusion module and a multi-branch structure
including our progressive Transformer and Group Multi-Head Self-Attention
(GMHSA) are designed to focus on both global and local features for an accurate
diagnosis. To facilitate the research, we have collected a large-scale root
canal therapy evaluation dataset with 245 root canal therapy X-ray images, and
the experiment results show that our AGMB-Transformer can improve the diagnosis
accuracy from 57.96% to 90.20% compared with the baseline network. The proposed
AGMB-Transformer can achieve a highly accurate evaluation of root canal
therapy. To the best of our knowledge, our work is the first to perform
automatic root canal therapy evaluation, and it has important clinical value in
reducing the workload of endodontists.
|
Inference of population structure from genetic data plays an important role
in population and medical genetics studies. The traditional EIGENSTRAT method
has been widely used for computing and selecting top principal components that
capture population structure information (Price et al., 2006). With the
advancement and decreasing cost of sequencing technology, whole-genome
sequencing data provide much richer information about the underlying population
structures. However, the EIGENSTRAT method was originally developed for
analyzing array-based genotype data and thus may not perform well on sequencing
data for two reasons. First, the number of genetic variants $p$ is much larger
than the sample size $n$ in sequencing data such that the sample-to-marker
ratio $n/p$ is nearly zero, violating the assumption of the Tracy-Widom test
used in the EIGENSTRAT method. Second, the EIGENSTRAT method might not be able
to handle the linkage disequilibrium (LD) well in sequencing data. To resolve
those two critical issues, we propose a new statistical method called ERStruct
to estimate the number of latent sub-populations based on sequencing data. We
propose to use the ratio of successive eigenvalues as a more robust testing
statistic, and then we approximate the null distribution of our proposed test
statistic using modern random matrix theory. Simulation studies show that our
proposed ERStruct method outperforms the traditional Tracy-Widom test on
sequencing data. We further use two public data sets from the HapMap 3 and the
1000 Genomes Projects to demonstrate the performance of our ERStruct method. We
also implement our ERStruct in a MATLAB toolbox which is now publicly available
on GitHub through https://github.com/bglvly/ERStruct.
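A minimal sketch of the successive-eigenvalue-ratio idea (illustrative NumPy
code; this is not the ERStruct toolbox implementation, and the random-matrix
null distribution used for testing is not reproduced here):

```python
import numpy as np

def eigenvalue_ratios(genotypes):
    # genotypes: n x p matrix (individuals x variants), with n << p for sequencing data.
    X = np.asarray(genotypes, dtype=float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # column-standardize
    # Eigenvalues of the n x n Gram-type matrix, sorted in decreasing order.
    eigvals = np.sort(np.linalg.eigvalsh(X @ X.T / X.shape[1]))[::-1]
    return eigvals[1:] / eigvals[:-1]  # ratios of successive eigenvalues
```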
|
It is well known that the classic Allen-Cahn equation satisfies the maximum
bound principle (MBP), that is, the absolute value of its solution is uniformly
bounded for all time by certain constant under suitable initial and boundary
conditions. In this paper, we consider numerical solutions of the modified
Allen-Cahn equation with a Lagrange multiplier of nonlocal and local effects,
which not only shares the same MBP as the original Allen-Cahn equation but also
conserves the mass exactly. We reformulate the model equation with a linear
stabilizing technique, then construct first- and second-order exponential time
differencing schemes for its time integration. We prove the unconditional MBP
preservation and mass conservation of the proposed schemes in the time discrete
sense and derive their error estimates under some regularity assumptions.
Various numerical experiments in two and three dimensions are also conducted to
verify the theoretical results.
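For orientation, a generic first-order exponential time differencing step for a
stabilized semilinear system $u_t = L u + N(u)$ with step size $\tau$ reads
$$ u^{n+1} = e^{\tau L} u^{n} + L^{-1}\bigl(e^{\tau L} - I\bigr) N(u^{n}); $$
the schemes above build on steps of this type while additionally guaranteeing
the maximum bound principle and exact mass conservation, which this generic
formula alone does not.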
|
We prove that the set of Segre-degenerate points of a real-analytic
subvariety $X$ in ${\mathbb{C}}^n$ is a closed semianalytic set. It is a
subvariety if $X$ is coherent. More precisely, the set of points where the germ
of the Segre variety is of dimension $k$ or greater is a closed semianalytic
set in general, and for a coherent $X$, it is a real-analytic subvariety of
$X$. For a hypersurface $X$ in ${\mathbb{C}}^n$, the set of Segre-degenerate
points, $X_{[n]}$, is a semianalytic set of dimension at most $2n-4$. If $X$ is
coherent, then $X_{[n]}$ is a complex subvariety of (complex) dimension $n-2$.
Example hypersurfaces are given showing that $X_{[n]}$ need not be a subvariety
and that it also need not be complex; $X_{[n]}$ can, for instance, be a real
line.
|
In the house credit process, banks and lenders rely on a fast and accurate
estimation of a real estate price to determine the maximum loan value. Real
estate appraisal is often based on relational data, capturing the hard facts of
the property. Yet, models benefit strongly from including image data, capturing
additional soft factors. The combination of the different data types requires a
multi-view learning method. This raises the question of which strengths and
weaknesses different multi-view learning strategies have. In our study, we test
multi-kernel learning, multi-view concatenation and multi-view neural networks
on real estate data and satellite images from Asheville, NC. Our results
suggest that multi-view learning increases the predictive performance up to 13%
in MAE. Multi-view neural networks perform best, but they result in
opaque black-box models. For users seeking interpretability, hybrid
multi-view neural networks or a boosting strategy are a suitable alternative.
|
The electronic density of states (DOS) highlights fundamental features of
materials that oftentimes dictate their properties, such as the band gap and
Van Hove singularities. In this short note, we discuss how sharp features of
the density of states can be obscured by smearing methods (such as the Gaussian
and Fermi smearing methods) when calculating the DOS. While the common approach
to reach a "converged" density of states of a material is to increase the
discrete k-point mesh density, we show that the DOS calculated by smearing
methods can appear to converge but not to the correct DOS. Employing the
tetrahedron method for Brillouin zone integration resolves key features of the
density of states far better than smearing methods.
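A minimal sketch of the Gaussian-smearing artifact discussed above
(illustrative NumPy code under assumed inputs; not a tetrahedron-method
implementation):

```python
import numpy as np

def gaussian_smeared_dos(eigenvalues, energies, sigma=0.05):
    # Each band eigenvalue on the k-mesh contributes a normalized Gaussian of
    # width sigma, so sharp DOS features are broadened by ~sigma regardless of
    # how dense the k-point mesh becomes.
    eps = np.asarray(eigenvalues, dtype=float).ravel()
    E = np.asarray(energies, dtype=float)[:, None]
    kernels = np.exp(-0.5 * ((E - eps[None, :]) / sigma) ** 2)
    return kernels.sum(axis=1) / (sigma * np.sqrt(2.0 * np.pi) * eps.size)
```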
|
In this study, the in-plane Bloch wave propagation and bandgaps in a finitely
stretched square lattice were investigated numerically and theoretically. To be
specific, the elastic band diagram was calculated for an infinite periodic
structure with a cruciform hyperelastic unit cell under uniaxial or biaxial
tension. In addition, an elastodynamic "tight binding" model was proposed to
investigate the formation and evolution of the band structure. The elastic
waves were found to propagate largely under "easy" modes in the pre-stretched
soft lattice, and the finite stretch not only tuned the symmetry of the band
structure but also "purified" the propagation modes. Moreover, the uniaxial
stretch exhibited opposite impacts on the two "easy" modes. The effect of the
biaxial stretch was equated with the superposition of uniaxial stretches in the
tessellation directions. The mentioned effects on the band structure could be
attributed to the competition between the effective shear moduli and lengths
for different beam components. Next, the finite stretch could tune the
directional bandgap of the soft lattice, and the broadest elastic wave bandgaps
could be anticipated in an equi-biaxial stretch. In this study, an avenue was
opened to design and implement elastic wave control devices with weight
efficiency and tunability. Furthermore, the differences between the physical
system and the corresponding simplified theoretical model (e.g., the
theoretically predicted flat bands) did not exist in the numerical
calculations.
|
In this paper, a sparse Kronecker-product (SKP) coding scheme is proposed for
unsourced multiple access. Specifically, the data of each active user is
encoded as the Kronecker product of two component codewords with one being
sparse and the other being forward-error-correction (FEC) coded. At the
receiver, an iterative decoding algorithm is developed, consisting of matrix
factorization for the decomposition of the Kronecker product and soft-in
soft-out decoding for the component sparse code and the FEC code. The cyclic
redundancy check (CRC) aided interference cancellation technique is further
incorporated for performance improvement. Numerical results show that the
proposed scheme outperforms the state-of-the-art counterparts, and approaches
the random coding bound within a gap of only 0.1 dB at the code length of 30000
when the number of active users is less than 75, and the error rate can be made
very small even if the number of active users is relatively large.
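A minimal sketch of the Kronecker-product encoding structure (illustrative
NumPy code; the component code designs and the iterative receiver are not
shown, and the mapping here is an assumption for illustration):

```python
import numpy as np

def skp_encode(sparse_codeword, fec_codeword):
    # Transmitted sequence = Kronecker product of a sparse component codeword
    # and an FEC-coded component codeword: each nonzero entry of the sparse
    # part carries a scaled copy of the FEC codeword.
    return np.kron(np.asarray(sparse_codeword, dtype=float),
                   np.asarray(fec_codeword, dtype=float))

# Example: length-4 sparse component with one nonzero entry, BPSK-mapped FEC bits.
x = skp_encode([0.0, 1.0, 0.0, 0.0], [1.0, -1.0, -1.0, 1.0])
```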
|
Deep neural networks for medical image reconstruction are traditionally
trained using high-quality ground-truth images as training targets. Recent work
on Noise2Noise (N2N) has shown the potential of using multiple noisy
measurements of the same object as an alternative to having a ground-truth.
However, existing N2N-based methods are not suitable for learning from the
measurements of an object undergoing nonrigid deformation. This paper addresses
this issue by proposing the deformation-compensated learning (DeCoLearn) method
for training deep reconstruction networks by compensating for object
deformations. A key component of DeCoLearn is a deep registration module, which
is jointly trained with the deep reconstruction network without any
ground-truth supervision. We validate DeCoLearn on both simulated and
experimentally collected magnetic resonance imaging (MRI) data and show that it
significantly improves imaging quality.
|
This paper develops a theory of polynomial maps from commutative semigroups
to arbitrary groups and proves that it has desirable formal properties when the
target group is locally nilpotent. We apply this theory to solve Waring's
Problem for Heisenberg groups in a sequel to this paper.
|
Conditional image synthesis aims to create an image according to some
multi-modal guidance in the forms of textual descriptions, reference images,
and image blocks to preserve, as well as their combinations. In this paper,
instead of investigating these control signals separately, we propose a new
two-stage architecture, M6-UFC, to unify any number of multi-modal controls. In
M6-UFC, both the diverse control signals and the synthesized image are
uniformly represented as a sequence of discrete tokens to be processed by
Transformer. Different from existing two-stage autoregressive approaches such
as DALL-E and VQGAN, M6-UFC adopts non-autoregressive generation (NAR) at the
second stage to enhance the holistic consistency of the synthesized image, to
support preserving specified image blocks, and to improve the synthesis speed.
Further, we design a progressive algorithm that iteratively improves the
non-autoregressively generated image, with the help of two estimators developed
for evaluating the compliance with the controls and evaluating the fidelity of
the synthesized image, respectively. Extensive experiments on a newly collected
large-scale clothing dataset M2C-Fashion and a facial dataset Multi-Modal
CelebA-HQ verify that M6-UFC can synthesize high-fidelity images that comply
with flexible multi-modal controls.
|
The scaling up of quantum hardware is the fundamental challenge ahead in
order to realize the disruptive potential of quantum technology in information
science. Among the plethora of hardware platforms, photonics stands out by
offering a modular approach, where the main challenge is to construct
sufficiently high-quality building blocks and develop methods to efficiently
interface them. Importantly, the subsequent scaling-up will make full use of
the mature integrated photonic technology provided by photonic foundry
infrastructure to produce small foot-print quantum processors of immense
complexity. A fully coherent and deterministic photon-emitter interface is a
key enabler of quantum photonics, and can today be realized with solid-state
quantum emitters with specifications reaching the quantitative benchmark
referred to as Quantum Advantage. This light-matter interaction primer realizes
a range of quantum photonic resources and functionalities, including on-demand
single-photon and multi-photon entanglement sources, and photon-photon
nonlinear quantum gates. We will present the current state-of-the-art in
single-photon quantum hardware and the main photonic building blocks required
in order to scale up. Furthermore, we will point out specific promising
applications of the hardware building blocks within quantum communication and
photonic quantum computing, laying out the road ahead for quantum photonics
applications that could offer a genuine quantum advantage.
|
This paper presents an NLP (Natural Language Processing) approach to
detecting spoilers in book reviews, using the University of California San
Diego (UCSD) Goodreads Spoiler dataset. We explored the use of LSTM, BERT, and
RoBERTa language models to perform spoiler detection at the sentence level.
This was contrasted with a UCSD paper which performed the same task, but using
handcrafted features in its data preparation. Despite eschewing the use of
handcrafted features, our results from the LSTM model were able to slightly
exceed the UCSD team's performance in spoiler detection.
|
An iterative numerical method to compute the conformal mapping in the context
of propagating water waves over uneven topographies is investigated. The map
flattens the fluid domain onto a canonical strip in which computations are
performed. The accuracy of the method is tested by using the MATLAB
Schwarz-Christoffel toolbox mapping as a benchmark. In addition, we give a
numerical alternative for computing the inverse of the conformal map.
|
We consider an investment process that includes a number of features, each of
which can be active or inactive. Our goal is to attribute or decompose an
achieved performance to each of these features, plus a baseline value. There
are many ways to do this, which lead to potentially different attributions in
any specific case. We argue that a specific attribution method due to Shapley
is the preferred method, and discuss methods that can be used to compute this
attribution exactly, or when that is not practical, approximately.
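A minimal sketch of exact Shapley attribution over on/off features
(illustrative Python; `performance` is an assumed user-supplied map from the
set of active features to achieved performance, and the exact enumeration is
only practical for small feature sets):

```python
from itertools import combinations
from math import factorial

def shapley_attribution(features, performance):
    # performance(frozenset()) is the baseline value; each feature's attribution
    # is its average marginal contribution over all orderings of the features.
    n = len(features)
    values = {}
    for f in features:
        others = [g for g in features if g != f]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (performance(s | {f}) - performance(s))
        values[f] = phi
    return values
```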
|
The evolution of circumstellar discs is highly influenced by their
surroundings, in particular by external photoevaporation due to nearby stars
and dynamical truncations. The impact of these processes on disc populations
depends on the dynamical evolution of the star-forming region. Here we
implement a simple model of molecular cloud collapse and star formation to
obtain primordial positions and velocities of young stars and follow their
evolution in time, including that of their circumstellar discs. Our disc model
takes into account viscous evolution, internal and external photoevaporation,
dust evolution, and dynamical truncations. The disc evolution is resolved
simultaneously with the star cluster dynamics and stellar evolution. Our
results show that an extended period of star formation allows for massive discs
formed later in the simulations to survive for several million years. This
could explain massive discs surviving in regions of high UV radiation.
|
This article proposes a framework for the study of periodic maps $T$ from a
(typically finite) set $X$ to itself when the set $X$ is equipped with one or
more real- or complex-valued functions. The main idea, inspired by the
time-evolution operator construction from ergodic theory, is the introduction
of a vector space that contains the given functions and is closed under
composition with $T$, along with a time-evolution operator on that vector
space. I show that the invariant functions and 0-mesic functions span
complementary subspaces associated respectively with the eigenvalue 1 and the
other eigenvalues. Alongside other examples, I give an explicit description of
the spectrum of the evolution operator when $X$ is the set of $k$-element
multisets with elements in $\{0,1,\dots,n-1\}$, $T$ increments each element of
a multiset by 1 mod $n$, and $g_i: X \rightarrow \mathbb{R}$ (with $1 \leq i
\leq k$) maps a multiset to its $i$th smallest element.
|
In the upcoming decades large facilities, such as the SKA, will provide
resolved observations of the kinematics of millions of galaxies. In order to
assist in the timely exploitation of these vast datasets we explore the use of
a self-supervised, physics aware neural network capable of Bayesian kinematic
modelling of galaxies. We demonstrate the network's ability to model the
kinematics of cold gas in galaxies with an emphasis on recovering physical
parameters and accompanying modelling errors. The model is able to recover
rotation curves, inclinations and disc scale lengths for both CO and HI data
which match well with those found in the literature. The model is also able to
provide modelling errors over learned parameters thanks to the application of
quasi-Bayesian Monte-Carlo dropout. This work shows the promising use of
machine learning, and in particular self-supervised neural networks, in the
context of kinematically modelling galaxies. This work represents the first
steps in applying such models to kinematic fitting, and we propose that
variants of our model are especially suitable for enabling emission-line
science from upcoming surveys with, e.g., the SKA, allowing fast exploitation of
these large datasets.
|
Context: Petri net slicing is a technique to reduce the size of a Petri net
so that it can ease the analysis or understanding of the original Petri net.
Objective: Presenting two new Petri net slicing algorithms to isolate those
places and transitions of a Petri net (the slice) that may contribute tokens to
one or more places given (the slicing criterion).
Method: The two algorithms proposed are formalized. The completeness of the
first algorithm and the minimality of the second algorithm are formally proven.
Both algorithms, together with three other state-of-the-art algorithms, have been
implemented and integrated into a single tool so that we have been able to
carry out a fair empirical evaluation.
Results: Besides the two new Petri net slicing algorithms, a public, free,
and open-source implementation of five algorithms is reported. The results of
an empirical evaluation of the new algorithms and the slices that they produce
are also presented.
Conclusions: The first algorithm collects all places and transitions that may
influence (in any computation) the slicing criterion, while the second
algorithm collects a minimum set of places and transitions that may influence
(in some computation) the slicing criterion. Therefore, the net computed by the
first algorithm can reproduce any computation that contributes tokens to any
place of interest. In contrast, the second algorithm loses this possibility but
it often produces a much more reduced subnet (which still can reproduce some
computations that contribute tokens to some places of interest). The first
algorithm is proven complete, and the second one is proven minimal.
|
The recently developed generalized Fourier-Galerkin method is complemented by
a numerical continuation with respect to the kinetic energy, which extends the
framework to the investigation of modal interactions resulting in folds of the
nonlinear modes. In order to enhance the practicability regarding the
investigation of complex large-scale systems, it is proposed to provide
analytical gradients and exploit sparsity of the nonlinear part of the
governing algebraic equations. A novel reduced order model (ROM) is developed
for those regimes where internal resonances are absent. The approach allows for
an accurate approximation of the multi-harmonic content of the resonant mode
and accounts for the contributions of the off-resonant modes in their
linearized forms. The ROM facilitates the efficient analysis of self-excited
limit cycle oscillations, frequency response functions and the direct tracing
of forced resonances. The ROM is equipped with a large parameter space
including parameters associated with linear damping and near-resonant harmonic
forcing terms. An important objective of this paper is to demonstrate the broad
applicability of the proposed overall methodology. This is achieved by selected
numerical examples including finite element models of structures with strongly
nonlinear, non-conservative contact constraints.
|
In this position paper, we explore the adoption of a Smart City with a
socio-technical perspective. A Smart city is a transformational technological
process leading to profound modifications of existing urban regimes and
infrastructure components. In this study, we consider a Smart City as a
socio-technical system where the interplay between technologies and users
ensures the sustainable development of smart city initiatives that improve the
quality of life and solve important socio-economic problems. The adoption of a
Smart City requires a participative approach in which users are involved during
the adoption process to jointly optimise both systems. Thus, we contribute to
socio-technical research showing how a participative approach based on press
relationships to facilitate information exchange between municipal actors and
citizens worked as a success factor for the smart city adoption. We also
discuss the limitations of this approach.
|
We investigate the role played by density inhomogeneities and dissipation on
the final outcome of collapse of a self-gravitating sphere. By imposing a
perturbative scheme on the thermodynamical variables and gravitational
potentials we track the evolution of the collapse process starting off with an
initially static perfect fluid sphere which is shear-free. The collapsing core
dissipates energy in the form of a radial heat flux with the exterior spacetime
being filled with a superposition of null energy and an anisotropic string
distribution. The ensuing dynamical process slowly evolves into a shear-like
regime with contributions from the heat flux and density fluctuations. We show
that the anisotropy due to the presence of the strings drives the stellar fluid
towards instability with this effect being enhanced by the density
inhomogeneity. An interesting and novel consequence of this collapse scenario
is the delay in the formation of the horizon.
|
Shapley Values, a solution to the credit assignment problem in cooperative
game theory, are a popular type of explanation in machine learning, having been
used to explain the importance of features, embeddings, and even neurons. In
NLP, however, leave-one-out and attention-based explanations still predominate.
Can we draw a connection between these different methods? We formally prove
that -- save for the degenerate case -- attention weights and leave-one-out
values cannot be Shapley Values. $\textit{Attention flow}$ is a post-processed
variant of attention weights obtained by running the max-flow algorithm on the
attention graph. Perhaps surprisingly, we prove that attention flows are indeed
Shapley Values, at least at the layerwise level. Given the many desirable
theoretical qualities of Shapley Values -- which have driven their adoption
among the ML community -- we argue that NLP practitioners should, when
possible, adopt attention flow explanations alongside more traditional ones.
|
Currently there has been increasing demand for real-time training on
resource-limited IoT devices such as smart sensors, which realizes standalone
online adaptation for streaming data without data transfers to remote servers.
OS-ELM (Online Sequential Extreme Learning Machine) has been one of the
promising neural-network-based online algorithms for on-chip learning because it
can perform online training at low computational cost and is easy to implement
as a digital circuit. Existing OS-ELM digital circuits employ a fixed-point data
format, and the bit-widths are often tuned manually; however, this may cause
overflow or underflow, which can lead to unexpected behavior of the circuit. For
on-chip learning systems, an overflow/underflow-free design has a great impact
since online training is continuously performed and the intervals of
intermediate variables will dynamically change as time goes by. In this paper,
we propose an overflow/underflow-free bit-width optimization method for
fixed-point digital circuits of OS-ELM. Experimental results show that our
method realizes overflow/underflow-free OS-ELM digital circuits with 1.0x -
1.5x more area cost compared to the baseline simulation method where overflow
or underflow can happen.
|
In 1914 Bohr proved that there is an $r_0 \in(0,1)$ such that if a power
series $\sum_{m=0}^\infty c_m z^m$ is convergent in the open unit disc and
$|\sum_{m=0}^\infty c_m z^m|<1$, then $\sum_{m=0}^\infty |c_m z^m|<1$ for
$|z|<r_0$. The largest such value of $r_0$ is called the Bohr radius. In this
article, we find the Bohr radius for some univalent harmonic mappings having
different dilatations and, in addition, compute the Bohr radius for functions
convex in one direction.
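For orientation, in the classical analytic setting (not the harmonic setting
treated above) the sharp constant is known:
$$ \sum_{m=0}^{\infty} |c_m|\, r^m \le 1 \quad \text{for all } r \le \tfrac{1}{3}, $$
whenever $f(z)=\sum_{m\ge 0} c_m z^m$ is analytic and bounded by $1$ on the unit
disc, and $1/3$ cannot be improved.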
|
Edge computing was introduced as a technical enabler for the demanding
requirements of new network technologies like 5G. It aims to overcome
challenges related to centralized cloud computing environments by distributing
computational resources to the edge of the network towards the customers. The
complexity of the emerging infrastructures increases significantly, together
with the ramifications of outages on critical use cases such as self-driving
cars or health care. Artificial Intelligence for IT Operations (AIOps) aims to
support human operators in managing complex infrastructures by using machine
learning methods. This paper describes the system design of an AIOps platform
which is applicable in heterogeneous, distributed environments. The overhead of
a high-frequency monitoring solution on edge devices is evaluated and
performance experiments regarding the applicability of three anomaly detection
algorithms on edge devices are conducted. The results show that it is feasible
to collect metrics with a high frequency and simultaneously run specific
anomaly detection algorithms directly on edge devices with a reasonable
overhead on the resource utilization.
|
Pushing is an essential non-prehensile manipulation skill used for tasks
ranging from pre-grasp manipulation to scene rearrangement and reasoning about
object relations in the scene; thus, pushing actions have been widely
studied in robotics. The effective use of pushing actions often requires an
understanding of the dynamics of the manipulated objects and adaptation to the
discrepancies between prediction and reality. For this reason, effect
prediction and parameter estimation with pushing actions have been heavily
investigated in the literature. However, current approaches are limited because
they either model systems with a fixed number of objects or use image-based
representations whose outputs are not very interpretable and quickly accumulate
errors. In this paper, we propose a graph neural network based framework for
effect prediction and parameter estimation of pushing actions by modeling
object relations based on contacts or articulations. Our framework is validated
both in real and simulated environments containing different shaped multi-part
objects connected via different types of joints and objects with different
masses. Our approach enables the robot to predict and adapt the effect of a
pushing action as it observes the scene. Further, we demonstrate 6D effect
prediction in the lever-up action in the context of robot-based hard-disk
disassembly.
|