The influence of forward speed on stochastic free-surface crossing, in a
Gaussian wave field, is investigated. The case of a material point moving with
a constant forward speed is considered; the wave field is assumed stationary in
time, and homogeneous in space. The focus is on up-crossing events, which are
defined as the material point crossing the free surface, into the water domain.
The effect of the Doppler shift (induced by the forward speed) on the
up-crossing frequency, and the related conditional joint distribution of wave
kinematic variables is analytically investigated. Some general trends are
illustrated through different examples, where three kinds of wave direction
distribution are considered: unidirectional, short-crested anisotropic, and
isotropic. The way the developed approach may be used in the context of
slamming on marine structures is briefly discussed.
|
In this paper, we consider the use of cross-layer network coding (CLNC),
caching, and device-to-device (D2D) communications to jointly optimize the
delivery of a set of popular contents to a set of user devices (UDs). In the
considered D2D network, a group of nearby UDs cooperate with each other and
use NC to combine their cached files so that the completion time required for
delivering all requested contents to all UDs is minimized. Unlike previous
works that consider only one transmitting UD at a time, our work allows
multiple UDs to transmit simultaneously provided that the interference among
the active links is small. Such a configuration brings a new trade-off among
scheduling the transmitting UDs, selecting the coding decisions, and setting
the transmission rates/powers. Therefore, we consider the completion time minimization problem
that involves scheduling multiple transmitting UDs, determining their
transmission rates/powers and file combinations. The problem is shown to be
intractable because it involves all future coding decisions. To tackle the
problem at each transmission slot, we first design a graph, called herein the
D2D Rate-Aware IDNC graph, whose vertices have weights that judiciously
balance the rates/powers of the transmitting UDs and the number of
their scheduled UDs. Then, we propose an innovative and efficient CLNC solution
that iteratively selects a set of transmitting UDs only if the interference
caused by the transmissions of the newly selected UDs does not significantly
impact the overall completion time. Simulation results show that the proposed
solution offers significant completion time reduction compared with the
existing algorithms.
|
Massive machine type communications (mMTC) is one of the cornerstone services
that have to be supported by 5G systems. 3GPP has already introduced LTE-M and
NB-IoT, often referred to as cellular IoT, in 3GPP Releases 13, 14, and 15 and
submitted these technologies as part of 3GPP IMT-2020 (i.e., 5G) technology
submission to ITU-R. Even though NB-IoT and LTE-M have been shown to satisfy 5G mMTC
requirements defined by ITU-R, it is expected that these cellular IoT solutions
will not address all aspects of IoT and ongoing digitalization, including the
support for direct communication between "things" with flexible deployments,
different business models, as well as support for even higher node densities
and enhanced coverage. In this paper, we introduce the DECT-2020 standard
recently published by ETSI for mMTC communications. We evaluate its performance
and compare it to the existing LPWAN solutions showing that it outperforms
those in terms of supported density of nodes while still keeping delay and loss
guarantees at the required level.
|
The paths leading to future networks are pointing towards a data-driven
paradigm to better cater to the explosive growth of mobile services as well as
the increasing heterogeneity of mobile devices, many of which generate and
consume large volumes and variety of data. These paths are also hampered by
significant challenges in terms of security, privacy, services provisioning,
and network management. Blockchain, which is a technology for building
distributed ledgers that provide an immutable log of transactions recorded in a
distributed network, has become prominent recently as the underlying technology
of cryptocurrencies and is revolutionizing data storage and processing in
computer network systems. For future data-driven networks (DDNs), blockchain is
considered as a promising solution to enable the secure storage, sharing, and
analytics of data, privacy protection for users, robust, trustworthy network
control, and decentralized routing and resource management. However, many
important challenges and open issues remain to be addressed before blockchain
can be deployed widely to enable future DDNs. In this article, we present a
survey on the existing research works on the application of blockchain
technologies in computer networks, and identify challenges and potential
solutions in the applications of blockchains in future DDNs. We identify
application scenarios in which future blockchain-empowered DDNs could improve
the efficiency and security, and generally the effectiveness of network
services.
|
Cross-language authorship attribution problems rely on either translation to
enable the use of single-language features, or language-independent feature
extraction methods. Until recently, the lack of datasets for this problem
hindered the development of the latter, and single-language solutions were
performed on machine-translated corpora. In this paper, we present a novel
language-independent feature for authorship analysis based on dependency graphs
and universal part of speech tags, called DT-grams (dependency tree grams),
which are constructed by selecting specific sub-parts of the dependency graph
of sentences. We evaluate DT-grams by performing cross-language authorship
attribution on untranslated datasets of bilingual authors, showing that, on
average, they improve the macro-averaged F1 score over previous methods by
0.081 across five different language pairs. Additionally, by providing
results for a diverse set of features for comparison, we provide a baseline on
the previously undocumented task of untranslated cross-language authorship
attribution.
|
As the 5G standards mature and awareness of the capabilities of the
technology increases, industry verticals are becoming more eager to test new
services and develop them to the level of maturity required for market
adoption. Network slicing, i.e. multiple virtual networks running on a common
infrastructure, is considered a key mechanism to serve the multitude of tenants
(e.g. vertical industries) targeted by forthcoming fifth generation (5G)
systems, in a flexible and cost-efficient manner. It is predicted that one of
the most popular models for customers will be the Network Slice as a Service
(NSaaS) model. This model allows a Network Service Customer to order and
configure a Network Slice and have it offered as a service. This work presents
Openslice, a service-based, open-source OSS for delivering NSaaS following
emerging standards from SDOs. We strongly believe that such open source
solutions make it easier for organizations to enable complex scenarios,
especially in the area of Non-Public Networks.
|
We have measured trigonometric parallaxes for four water masers associated
with distant massive young stars in the inner regions of the Galaxy using the
VLBA as part of the BeSSeL Survey. G026.50$+$0.28 is located at the near end
of the Galactic bar, perhaps at the origin of the Norma spiral arm.
G020.77$-$0.05 is in the Galactic Center region and is likely associated with a
far-side extension of the Scutum arm. G019.60$-$0.23 and G020.08$-$0.13 are
likely associated and lie well past the Galactic Center. These sources appear
to be in the Sagittarius spiral arm, but an association with the Perseus arm
cannot be ruled out.
|
The goal of this paper is to adapt speaker embeddings for solving the problem
of speaker diarisation. The quality of speaker embeddings is paramount to the
performance of speaker diarisation systems. Despite this, prior works in the
field have directly used embeddings designed only to be effective on the
speaker verification task. In this paper, we propose three techniques that can
be used to better adapt the speaker embeddings for diarisation: dimensionality
reduction, attention-based embedding aggregation, and non-speech clustering. A
wide range of experiments is performed on various challenging datasets. The
results demonstrate that all three techniques contribute positively to the
performance of the diarisation system achieving an average relative improvement
of 25.07% in terms of diarisation error rate over the baseline.
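Of the three techniques above, attention-based embedding aggregation can be illustrated with a minimal numpy sketch: per-frame speaker embeddings are pooled into a single utterance-level embedding using softmax attention weights. The dimensions and the attention parameter vector here are hypothetical, not those of the paper's system.

```python
import numpy as np

def attentive_aggregate(frames, w):
    """Pool per-frame embeddings into one utterance-level embedding
    using softmax attention scores (one scalar score per frame)."""
    scores = frames @ w                     # (T,) one score per frame
    scores = scores - scores.max()          # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ frames                   # (D,) attention-weighted mean

rng = np.random.default_rng(0)
frames = rng.normal(size=(50, 8))   # 50 frames, 8-dim embeddings (toy sizes)
w = rng.normal(size=8)              # hypothetical learned attention parameters
emb = attentive_aggregate(frames, w)
```

With a zero attention parameter vector the pooling reduces to a plain mean over frames, which makes the effect of the learned weights easy to check.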
|
Prevention and early diagnosis of breast cancer (BC) is an essential
prerequisite for the selection of proper treatment. The substantial pressure
due to the increase of demand for faster and more precise diagnostic results
drives for automatic solutions. In the past decade, deep learning techniques
have demonstrated their power over several domains, and Computer-Aided
Diagnosis (CAD) became one of them. However, when it comes to the analysis of
Whole Slide Images (WSI), most of the existing works compute predictions from
each level independently. This contrasts with the approach of expert
histopathologists, who need to see the global architecture of the tissue
structures important in BC classification.
We present a deep learning-based solution and framework for processing WSI
based on a novel approach that exploits the advantages of multiple image
levels. We weight the information extracted from several levels into the final
classification of the malignancy. Our results demonstrate the benefit of
global information, with an increase of accuracy from 72.2% to 84.8%.
|
In the present paper we address double parton scattering (DPS) in quasi-real
photon-proton interactions. By using electromagnetic and hadronic models of the
photon light cone wave functions, we compute the so-called effective
cross-section, $\sigma_{eff}^{\gamma p}$ which allows us to calculate the DPS
contribution to these processes under dedicated assumptions. In particular, for
the four-jet photoproduction in HERA kinematics we found a sizeable DPS
contribution. We show that if the photon virtuality $Q^2$ could be measured and
thus the dependence of $\sigma_{eff}^{\gamma p}$ on such a parameter exposed,
information on the transverse distance between partons active in the proton could
be extracted. To this aim, we set lower limits on the integrated luminosity
needed to observe such an effect which would allow the extraction of novel
information on the proton structure.
|
This paper studies the dissipative generalized surface quasi-geostrophic
equations in a supercritical regime where the order of the dissipation is small
relative to the order of the velocity, and the velocities are less regular than the
advected scalar by up to one order of derivative. We also consider a
non-degenerate modification of the endpoint case in which the velocity is less
smooth than the advected scalar by slightly more than one order. The existence
and uniqueness theory of these equations in the borderline Sobolev spaces is
addressed, as well as the instantaneous smoothing effect of their corresponding
solutions. In particular, it is shown that solutions emanating from initial
data belonging to these Sobolev classes immediately enter a Gevrey class. Such
results appear to be the first of their kind for a quasilinear parabolic equation
whose coefficients are of higher order than its linear term; they rely on an
approximation scheme which modifies the flux in a way that preserves the
underlying commutator structure lost by having to work in the critical space
setting, as well as delicate adaptations of well-known commutator estimates to
Gevrey classes.
|
A nonnoetherian spacetime is a Lorentzian manifold that contains a set of
causal curves with no distinct interior points. We show that on such a
spacetime, hidden within the free Dirac Lagrangian is the entire standard model
(with four massive neutral scalar bosons), with the correct spin, electric
charge, color charge, and relative mass orderings of each particle.
Furthermore, using two 'fusion rules', we are able to reproduce almost all of
the standard model trivalent vertices, as well as electroweak parity violation.
Finally, we find that on a nonnoetherian spacetime, C, P, and T each sit in a
different connected component of the full Lorentz group, and their product is
the identity.
|
We use theory and numerical computation to determine the shape of an
axisymmetric fluid membrane with a resistance to bending and constant area. The
membrane connects two rings in the classic geometry that produces a catenoidal
shape in a soap film. In our problem, we find infinitely many branches of
solutions for the shape and external force as functions of the separation of
the rings, analogous to the infinite family of eigenmodes for the Euler
buckling of a slender rod. Special attention is paid to the catenoid, which
emerges as the shape of maximal allowable separation when the area is less than
a critical area equal to the planar area enclosed by the two rings. A
perturbation theory argument directly relates the tension of catenoidal
membranes to the stability of catenoidal soap films in this regime. When the
membrane area is larger than the critical area, we find additional cylindrical
tether solutions to the shape equations at large ring separation, and that
arbitrarily large ring separations are possible. These results apply for the
case of vanishing Gaussian curvature modulus; when the Gaussian curvature
modulus is nonzero and the area is below the critical area, the force and the
membrane tension diverge as the ring separation approaches its maximum value.
We also examine the stability of our shapes and analytically show that
catenoidal membranes have markedly different stability properties than their
soap film counterparts.
|
Since the mapping relationship between intra-interventional X-ray and
pre-interventional Computed Tomography (CT) images is uncertain,
auxiliary positioning devices or body markers, such as medical implants, are
commonly used to determine this relationship. However, such approaches cannot
be widely used in clinical practice due to complex practical constraints. To
determine the mapping relationship, and to achieve an initial pose estimation
of the human body without auxiliary equipment or markers, the proposed method
applies image segmentation and deep feature matching to directly match the
X-ray and CT images. As a result, the well-trained network can directly predict the spatial
correspondence between arbitrary X-ray and CT. The experimental results show
that when combining our approach with the conventional approach, the achieved
accuracy and speed can meet the basic clinical intervention needs, and it
provides a new direction for intra-interventional registration.
|
Billions of photos are uploaded to the web daily through various types of
social networks. Some of these images receive millions of views and become
popular, whereas others remain completely unnoticed. This raises the problem of
predicting image popularity on social media. The popularity of an image can be
affected by several factors, such as visual content, aesthetic quality, user,
post metadata, and time. Thus, considering all these factors is essential for
accurately predicting image popularity. In addition, the efficiency of the
predictive model also plays a crucial role. In this study, motivated by
multimodal learning, which uses information from various modalities, and the
current success of convolutional neural networks (CNNs) in various fields, we
propose a deep learning model, called visual-social convolutional neural
network (VSCNN), which predicts the popularity of a posted image by
incorporating various types of visual and social features into a unified
network model. VSCNN first learns to extract high-level representations from
the input visual and social features by utilizing two individual CNNs. The
outputs of these two networks are then fused into a joint network to estimate
the popularity score in the output layer. We assess the performance of the
proposed method by conducting extensive experiments on a dataset of
approximately 432K images posted on Flickr. The simulation results demonstrate
that the proposed VSCNN model significantly outperforms state-of-the-art
models, with a relative improvement of greater than 2.33%, 7.59%, and 14.16% in
terms of Spearman's Rho, mean absolute error, and mean squared error,
respectively.
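The two-branch fusion described above can be sketched, at toy scale, as a forward pass in numpy: each modality passes through its own branch, the representations are concatenated, and a joint head produces the popularity score. The layer sizes and random weights here are hypothetical placeholders; VSCNN itself uses trained CNN branches.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical sizes: 512-d visual features, 16-d social features.
W_vis = rng.normal(scale=0.05, size=(512, 32))   # visual branch weights
W_soc = rng.normal(scale=0.05, size=(16, 32))    # social branch weights
w_out = rng.normal(scale=0.05, size=64)          # joint head weights

def predict_popularity(visual, social):
    h_vis = relu(visual @ W_vis)             # branch 1 representation
    h_soc = relu(social @ W_soc)             # branch 2 representation
    joint = np.concatenate([h_vis, h_soc])   # late fusion by concatenation
    return float(joint @ w_out)              # scalar popularity score

score = predict_popularity(rng.normal(size=512), rng.normal(size=16))
```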
|
We consider some energy integrals under slow growth and we prove that the
local minimizers are locally Lipschitz continuous. Many examples are given,
either with subquadratic $p,q-$growth and/or anisotropic growth.
|
Making use of the gauge/string duality, it is possible to study some aspects
of the string breaking phenomenon in the three quark system. Our results point
out that the string breaking distance is not universal and depends on quark
geometry. The estimates of the ratio of the string breaking distance in the
three quark system to that in the quark-antiquark system would range
approximately from $\frac{2}{3}$ to $1$. In addition, it is shown that there
are special geometries which allow more than one breaking distance.
|
We present a measurement of the Hubble constant $H_0$ from surface brightness
fluctuation (SBF) distances for 63 bright, mainly early-type galaxies out to
100 Mpc observed with the Wide Field Camera 3 Infrared Channel (WFC3/IR) on the
Hubble Space Telescope (HST). The sample is drawn from several independent HST
imaging programs using the F110W bandpass of WFC3/IR. The majority of galaxies
are in the 50 to 80 Mpc range and come from the MASSIVE galaxy survey. The
median statistical uncertainty on individual distance measurements is 4%. We
construct the Hubble diagram with these IR SBF distances and constrain $H_0$
using four different treatments of the galaxy velocities. For the SBF zero
point calibration, we use both the existing tie to Cepheid variables, updated
for consistency with the latest determination of the distance to the Large
Magellanic Cloud from detached eclipsing binaries, and a new tie to the tip of
the red giant branch (TRGB) calibrated from the maser distance to NGC4258.
These two SBF calibrations are consistent with each other and with theoretical
predictions from stellar population models. From a weighted average of the
Cepheid and TRGB calibrations, we derive $H_0=73.3{\,\pm\,}0.7{\,\pm\,}2.4$
km/s/Mpc, where the error bars reflect the statistical and systematic
uncertainties. This result accords well with recent measurements of $H_0$ from
Type~Ia supernovae, time delays in multiply lensed quasars, and water masers.
The systematic uncertainty could be reduced to below 2% by calibrating the SBF
method with precision TRGB distances for a statistical sample of massive
early-type galaxies out to the Virgo cluster measured with the James Webb Space
Telescope.
|
Despite significant advancements in the field of multi-agent navigation,
agents still lack the sophistication and intelligence that humans exhibit in
multi-agent settings. In this paper, we propose a framework for learning a
human-like general collision avoidance policy for agent-agent interactions in
fully decentralized, multi-agent environments. Our approach uses knowledge
distillation with reinforcement learning to shape the reward function based on
expert policies extracted from human trajectory demonstrations through behavior
cloning. We show that agents trained with our approach can take human-like
trajectories in collision avoidance and goal-directed steering tasks not
provided by the demonstrations, outperforming the experts as well as
learning-based agents trained without knowledge distillation.
|
In this note we show that the Linet-Tian family of solutions of the vacuum
Einstein equations with a cosmological constant is a restricted set of the
solutions of the Einstein field equations for a rotating perfect fluid
previously found by A. Krasi\'nski.
|
This paper provides an overview of the current state of the stripping model
for short gamma-ray bursts. After the historical joint detection of the
gravitational wave event GW170817 and the accompanying gamma-ray burst
GRB170817A, the relation between short gamma-ray bursts and neutron star
mergers has been reliably confirmed. We show that many properties of
GRB170817A, which turned out to be peculiar in comparison with other short
gamma-ray bursts, are naturally explained in the context of the stripping
model, specifically, the time (1.7 s) between the peak of the gravitational
wave signal and the detection of the gamma-ray burst, its total isotropic
energy, and the parameters of the red and blue components of the accompanying
kilonova.
|
Increasingly important photomechanical materials produce stress and
mechanical work when illuminated. We propose experimentally accessible
performance metrics for photostress and photowork, enabling comparison of
materials performance. We relate these metrics to material properties,
providing a framework for the design and optimization of photomechanical
materials.
|
Real-world data is usually segmented by attributes and distributed across
different parties. Federated learning empowers collaborative training without
exposing local data or models. As we demonstrate through designed attacks, even
with a small proportion of corrupted data, an adversary can accurately infer
the input attributes. We introduce an adversarial learning based procedure
which tunes a local model to release privacy-preserving intermediate
representations. To alleviate the accuracy decline, we propose a defense method
based on the forward-backward splitting algorithm, which respectively deals
with the accuracy loss and privacy loss in the forward and backward gradient
descent steps, achieving the two objectives simultaneously. Extensive
experiments on a variety of datasets have shown that our defense significantly
mitigates privacy leakage with negligible impact on the federated learning
task.
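The forward-backward idea of handling two objectives in separate steps can be illustrated on a classical toy instance: a smooth quadratic term (standing in for the task/accuracy loss) is handled by a forward gradient step, and a nonsmooth $\ell_1$ penalty (standing in for the privacy loss) by a backward proximal step. This is a generic ISTA-style sketch, not the paper's actual objective.

```python
import numpy as np

def forward_backward(A, b, lam=0.1, iters=300):
    """Forward-backward splitting on  0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 1/L, L = ||A||_2^2
    for _ in range(iters):
        # forward step: gradient descent on the smooth loss
        x = x - step * (A.T @ (A @ x - b))
        # backward step: proximal map of lam*||.||_1 (soft-thresholding)
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[0], x_true[1] = 1.0, -2.0
b = A @ x_true + 0.01 * rng.normal(size=30)  # noisy sparse regression target
x = forward_backward(A, b)
```

The two steps never need the gradient of the nonsmooth term, which is what lets the two losses be treated independently within each iteration.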
|
Humans tend to build structured environments, consisting mainly of planar
surfaces. Straight lines arise from the intersections of planar surfaces.
Lines have more degrees-of-freedom than points. Thus, line-based
Structure-from-Motion (SfM) provides more information about the environment. In
this paper, we present solutions for SfM using lines, namely, incremental SfM.
These approaches consist of designing state observers for a camera's dynamical
visual system looking at a 3D line. We start by presenting a model that uses
spherical coordinates for representing the line's moment vector. We show that
this parameterization has singularities, and therefore we introduce a more
suitable model that considers the line's moment and shortest viewing ray.
Concerning the observers, we present two different methodologies. The first
uses a memory-less state-of-the-art framework for dynamic visual systems. Since
the previous states of the robotic agent are accessible -- while performing the
3D mapping of the environment -- the second approach aims at exploiting the use
of memory to improve the estimation accuracy and convergence speed. The two
models and the two observers are evaluated in simulation and real data, where
mobile and manipulator robots are used.
|
We show that, by utilising temporal quantum correlations as expressed by
pseudo-density operators (PDOs), it is possible to recover formally the
standard quantum dynamical evolution as a sequence of teleportations in time.
We demonstrate that any completely positive evolution can be formally
reconstructed by teleportation with different temporally correlated states.
This provides a different interpretation of maximally correlated PDOs, as
resources to induce quantum time-evolution. Furthermore, we note that the
possibility of this protocol stems from the strict formal correspondence
between spatial and temporal entanglement in quantum theory. We proceed to
demonstrate experimentally this correspondence, by showing a multipartite
violation of generalised temporal and spatial Bell inequalities and verifying
agreement with theoretical predictions to a high degree of accuracy, in
high-quality photon qubits.
|
Shear bands originating from in situ tensile tests of
Al$_{88}$Y$_{7}$Fe$_{5}$ melt-spun ribbons conducted in a transmission electron
microscope are compared with ones which had formed ex situ during cold rolling.
During in situ straining, the observations of a spearhead-like shear front, a
meniscus-like foil thickness reduction and no apparent shear steps to
accommodate strain suggest shear band initiation by a rejuvenating shear front
followed by shearing along the already softened paths. This leads to necking
and subsequent failure under the reduced constraint of a 2D geometry in the
thin foil and thus explains the observed lack of ductility under tension. In
contrast, shear bands formed during cold rolling display distinct alternating
density changes and shear off-sets. An explanation for this difference may be
that in situ shear bands rip before such features could develop. Moreover, both
in and ex situ experiments suggest that initiation, propagation and arrest of
shear bands occur during different stages.
|
We consider the problem of sensor selection for designing observer and filter
for continuous linear time invariant systems such that the sensor precisions
are minimized, and the estimation errors are bounded by the prescribed
$\mathcal{H}_2/\mathcal{H}_{\infty}$ performance criteria. The proposed
integrated framework formulates the precision minimization as a convex
optimization problem subject to linear matrix inequalities, and it is solved
using an algorithm based on the alternating direction method of multipliers
(ADMM). We also present a greedy approach for sensor selection and demonstrate
the performance of the proposed algorithms using numerical simulations.
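A greedy sensor-selection step of the kind mentioned above can be sketched in numpy: sensors (rows of a candidate output matrix) are picked one at a time to maximize a log-det score of a finite-horizon observability Gramian. This is a minimal illustration under assumed discrete-time dynamics; the paper's LMI/ADMM precision-minimization machinery is not reproduced here.

```python
import numpy as np

def obs_gramian(A, C, horizon=50):
    """Finite-horizon observability Gramian  sum_k (C A^k)^T (C A^k)."""
    n = A.shape[0]
    W = np.zeros((n, n))
    M = np.eye(n)
    for _ in range(horizon):
        W += M.T @ C.T @ C @ M
        M = A @ M
    return W

def greedy_select(A, C_all, k):
    """Greedily pick k sensor rows maximizing log det of the Gramian."""
    chosen = []
    n = A.shape[0]
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(C_all.shape[0]):
            if i in chosen:
                continue
            C = C_all[chosen + [i]]
            # small regularizer keeps the log-det finite before full rank
            val = np.linalg.slogdet(1e-6 * np.eye(n) + obs_gramian(A, C))[1]
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen

# Toy stable system with a chain of couplings; one candidate sensor per state.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.1],
              [0.0, 0.0, 0.7]])
C_all = np.eye(3)
sel = greedy_select(A, C_all, 2)
```

Greedy selection of this sort is a common heuristic because evaluating every subset is combinatorial; the paper's convex formulation instead optimizes sensor precisions directly.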
|
We study the quantization of the corner symmetry algebra of 3d gravity, that
is the algebra of observables associated with 1d spatial boundaries. In the
continuum field theory, at the classical level, this symmetry algebra is given
by the central extension of the Poincar\'e loop algebra. At the quantum level,
we construct a discrete current algebra based on a quantum symmetry group given
by the Drinfeld double $\mathcal{D}\mathrm{SU}(2)$. Those discrete currents
depend on an integer $N$, a discreteness parameter, understood as the number of
quanta of geometry on the 1d boundary: low $N$ is the deep quantum regime,
while large $N$ should lead back to a continuum picture. We show that this
algebra satisfies two fundamental properties. First, it is compatible with the
quantum space-time picture given by the Ponzano-Regge state-sum model, which
provides discrete path integral amplitudes for 3d quantum gravity. The integer
$N$ then counts the flux lines attached to the boundary. Second, we analyse the
refinement, coarse-graining and fusion processes as $N$ changes, and we show
that the $N\rightarrow\infty$ limit is a classical limit where we recover the
Poincar\'e current algebra. Identifying such a discrete current algebra on
quantum boundaries is an important step towards understanding how conformal
field theories arise on spatial boundaries in quantized space-times such as in
loop quantum gravity.
|
The gate-all-around nanowire transistor, due to its extremely tight electrostatic
control and vertical integration capability, is a highly promising candidate
for sub-5 nm technology node. In particular, the junctionless nanowire
transistors are highly scalable with reduced variability due to avoidance of
steep source/drain junction formation by ion implantation. Here we demonstrate
a dual-gated junctionless nanowire \emph{p}-type field effect transistor using
tellurium nanowire as the channel. The dangling-bond-free surface due to the
unique helical crystal structure of the nanowire, coupled with an integration
of dangling-bond-free, high quality hBN gate dielectric, allows us to achieve a
phonon-limited field effect hole mobility of $570\,\mathrm{cm^{2}/V\cdot s}$ at
270 K, which is well above state-of-the-art strained Si hole mobility. By
lowering the temperature, the mobility increases to
$1390\,\mathrm{cm^{2}/V\cdot s}$ and becomes primarily limited by Coulomb
scattering. The combination of an electron affinity of $\sim$4 eV and a
small bandgap of tellurium provides zero Schottky barrier height for hole
injection at the metal-contact interface, which is remarkable for reduction of
contact resistance in a highly scaled transistor. Exploiting these properties,
coupled with the dual-gated operation, we achieve a high drive current of
$216\,\mathrm{\mu A/\mu m}$ while maintaining an on-off ratio in excess of
$2\times10^4$. The findings have intriguing prospects for alternate channel
material based next-generation electronics.
|
In [I.R. Khairulin et al., submitted to Phys. Rev. Lett.] we propose a method
for amplifying a train of sub-femtosecond pulses of circularly or elliptically
polarized extreme ultraviolet (XUV) radiation, constituted by high-order
harmonics of an infrared (IR) laser field, in a neon-like active medium of a
plasma-based X-ray laser, additionally irradiated with a replica of a
fundamental frequency laser field used to generate harmonics, and show the
possibility of maintaining or enhancing the ellipticity of high-harmonic
radiation during its amplification. In the present paper we describe this
process in detail both for a single harmonic component and a sub-femtosecond
pulse train formed by a set of harmonics. We derive the analytical theory and
describe both analytically and numerically the evolution of the high-harmonic
field during its propagation through the medium. We discuss also the
possibility of an experimental implementation of the suggested technique in an
active medium of an X-ray laser based on neon-like Ti$^{12+}$ ions irradiated by
an IR laser field with a wavelength of 3.9 microns.
|
Bottom-up approaches for image-based multi-person pose estimation consist of
two stages: (1) keypoint detection and (2) grouping of the detected keypoints
to form person instances. Current grouping approaches rely on embeddings
learned only from visual features, which completely ignore the spatial configuration of
human poses. In this work, we formulate the grouping task as a graph
partitioning problem, where we learn the affinity matrix with a Graph Neural
Network (GNN). More specifically, we design a Geometry-aware Association GNN
that utilizes spatial information of the keypoints and learns local affinity
from the global context. The learned geometry-based affinity is further fused
with appearance-based affinity to achieve robust keypoint association. Spectral
clustering is used to partition the graph for the formation of the pose
instances. Experimental results on two benchmark datasets show that our
proposed method outperforms existing appearance-only grouping frameworks, which
shows the effectiveness of utilizing spatial context for robust grouping.
Source code is available at: https://github.com/jiahaoLjh/PoseGrouping.
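The spectral clustering step can be sketched in numpy for the two-person case: the sign of the Fiedler vector of the normalized graph Laplacian splits the keypoint affinity graph into two instances. The affinity matrix below is a hand-built toy example, not one learned by the GNN.

```python
import numpy as np

def spectral_partition(W):
    """Split a keypoint affinity graph in two using the Fiedler vector
    of the normalized graph Laplacian (its sign gives the partition)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    fiedler = vecs[:, 1]                # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)    # two person instances

# Toy affinity: keypoints 0-2 belong to one person, 3-5 to another.
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
W += 0.01                    # weak cross-links keep the graph connected
np.fill_diagonal(W, 0.0)
labels = spectral_partition(W)
```

For more than two people, one would keep several of the smallest eigenvectors and run k-means on the rows, which is the standard multi-way spectral clustering recipe.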
|
Existing works for aspect-based sentiment analysis (ABSA) have adopted a
unified approach, which exploits the interactive relations among subtasks.
However, we observe that these methods tend to predict polarities based on the
literal meaning of aspect and opinion terms, and mainly consider the relations
among subtasks implicitly at the word level. In addition, identifying multiple
aspect-opinion pairs with their polarities is much more challenging. Therefore,
a comprehensive understanding of contextual information with respect to the
aspect and opinion is further required in ABSA. In this paper, we propose Deep
Contextualized Relation-Aware Network (DCRAN), which allows interactive
relations among subtasks with deep contextual information based on two modules
(i.e., Aspect and Opinion Propagation and Explicit Self-Supervised Strategies).
Especially, we design novel self-supervised strategies for ABSA, which have
strengths in dealing with multiple aspects. Experimental results show that
DCRAN significantly outperforms previous state-of-the-art methods by large
margins on three widely used benchmarks.
|
A common feature of electromagnetic emission from solar flares is the
presence of intensity pulsations that vary as a function of time. Known as
quasi-periodic pulsations (QPPs), these variations in flux appear to include
periodic components and characteristic time-scales. Here, we analyse a GOES
M3.7 class flare exhibiting pronounced QPPs across a broad band of wavelengths
using imaging and time-series analysis. We identify QPPs in the time series of
X-ray, low-frequency radio, and EUV wavelengths using wavelet analysis, and
localise the region of the flare site from which the QPPs originate via X-ray
and EUV imaging. It was found that the pulsations within the 171 Å, 1600 Å,
soft X-ray (SXR), and hard X-ray (HXR) light curves yielded similar periods of
$\sim$122 s, $\sim$131 s, $\sim$123 s, and $\sim$137 s, respectively, indicating
a common progenitor. The low frequency radio emission at 2.5 MHz contained a
longer period of $\sim$231 s. Imaging analysis indicates that the location of
the X-ray and EUV pulsations originates from a HXR footpoint linked to a system
of nearby open magnetic field lines. Our results suggest that intermittent
particle acceleration, likely due to 'bursty' magnetic reconnection, is
responsible for the QPPs. The precipitating electrons accelerated towards the
chromosphere produce the X-ray and EUV pulsations, while the escaping electrons
result in low frequency radio pulses in the form of type III radio bursts. The
modulation of the reconnection process, resulting in episodic particle
acceleration, explains the presence of these QPPs across the entire spatial
range of flaring emission.
|
Data is the engine of modern computer vision, which necessitates collecting
large-scale datasets. This is expensive, and guaranteeing the quality of the
labels is a major challenge. In this paper, we investigate efficient annotation
strategies for collecting multi-class classification labels for a large
collection of images. While methods that exploit learnt models for labeling
exist, a surprisingly prevalent approach is to query humans for a fixed number
of labels per datum and aggregate them, which is expensive. Building on prior
work on online joint probabilistic modeling of human annotations and
machine-generated beliefs, we propose modifications and best practices aimed at
minimizing human labeling effort. Specifically, we make use of advances in
self-supervised learning, view annotation as a semi-supervised learning
problem, identify and mitigate pitfalls and ablate several key design choices
to propose effective guidelines for labeling. Our analysis is done in a more
realistic simulation that involves querying human labelers, which uncovers
issues with evaluation using existing worker simulation methods. Simulated
experiments on a 125k-image subset of ImageNet100 show that it can be
annotated to 80% top-1 accuracy with 0.35 annotations per image on average, a
2.7x and 6.7x improvement over prior work and manual annotation, respectively.
Project page: https://fidler-lab.github.io/efficient-annotation-cookbook
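The online probabilistic aggregation of human votes can be illustrated with a minimal Bayesian vote-pooling sketch; the fixed per-worker accuracy below is an assumption for illustration, and the paper's model is richer, jointly modeling annotators and machine-generated beliefs:

```python
import numpy as np

def aggregate_votes(votes, n_classes, worker_acc=0.8):
    """Posterior over classes for one image given independent worker votes,
    each assumed correct with probability worker_acc (uniform error otherwise)."""
    log_post = np.full(n_classes, -np.log(n_classes))   # uniform prior
    for v in votes:
        lik = np.full(n_classes, (1 - worker_acc) / (n_classes - 1))
        lik[v] = worker_acc
        log_post += np.log(lik)                          # Bayes update in log space
    post = np.exp(log_post - log_post.max())             # stable normalization
    return post / post.sum()

# Two workers vote class 2, one votes class 0.
p = aggregate_votes([2, 2, 0], n_classes=5)
```

Stopping annotation once the posterior mass on the top class exceeds a threshold is what lets such schemes spend far fewer than a fixed number of labels per image.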
|
There is currently a gap between the natural language expression of scholarly
publications and their structured semantic content modeling to enable
intelligent content search. With the volume of research growing exponentially
every year, a search feature operating over semantically structured content is
compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. 'the NCG
task') tasks participants to develop automated systems that structure
contributions from NLP scholarly articles in the English language. Being the
first-of-its-kind in the SemEval series, the task released structured data from
NLP scholarly articles at three levels of information granularity, i.e. at
sentence-level, phrase-level, and phrases organized as triples toward Knowledge
Graph (KG) building. The sentence-level annotations comprised the few sentences
about the article's contribution. The phrase-level annotations were scientific
term and predicate phrases from the contribution sentences. Finally, the
triples constituted the research overview KG. For the Shared Task,
participating systems were then expected to automatically classify contribution
sentences, extract scientific terms and relations from the sentences, and
organize them as KG triples.
Overall, the task drew strong participation from seven teams and 27
participants. The best end-to-end task system classified contribution
sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While
the absolute performance on generating triples remains low, in the conclusion of
this article we highlight the difficulty of producing such data and, as a
consequence, of modeling it.
|
The paper utilizes H\"older graphical derivatives for characterizing H\"older
strong subregularity, isolated calmness and sharp minimum. As applications, we
characterize H\"older isolated calmness in linear semi-infinite optimization
and H\"older sharp minimizers of some penalty functions for constrained
optimization.
|
Given the prominence of current 3D sensors, fine-grained analysis of basic
point cloud data is worthy of further investigation. In particular, real
point cloud scenes intuitively capture complex real-world surroundings, but
owing to the raw nature of 3D data they are very challenging for machine
perception. In this work, we concentrate on the essential visual task, semantic
segmentation, for large-scale point cloud data collected in reality. On the one
hand, to reduce the ambiguity in nearby points, we augment their local context
by fully utilizing both geometric and semantic features in a bilateral
structure. On the other hand, we comprehensively interpret the distinctness of
the points from multiple resolutions and represent the feature map following an
adaptive fusion method at point-level for accurate semantic segmentation.
Further, we provide specific ablation studies and intuitive visualizations to
validate our key modules. By comparing with state-of-the-art networks on three
different benchmarks, we demonstrate the effectiveness of our network.
|
Vehicle safety systems have substantially decreased motor vehicle
crash-related injuries and fatalities, but injuries to the lumbar spine still
have been reported. Experimental and computational analyses of upright and,
particularly, reclined occupants in frontal crashes have shown that the lumbar
spine can be subjected to axial compression followed by combined
compression-flexion loading. Lumbar spine failure tolerance in combined
compression-flexion has not been widely explored in the literature. Therefore,
the goal of this study was to measure the failure tolerance of the lumbar spine
in combined compression and flexion. Forty 3-vertebra lumbar spine segments
were pre-loaded with axial compression and then subjected to dynamic flexion
bending until failure. Clinically relevant middle vertebra fractures were
observed in twenty-one of the specimens, including compression and burst
fractures. The remaining nineteen specimens experienced failure at the potting
grip interface. Since specimen characteristics and pre-test axial load varied
widely within the sample, failure forces (mean 3.4 kN, range 1.6-5.1 kN) and
moments (mean 73 Nm, range 0-181 Nm) also varied widely. Tobit univariate
regressions were performed to determine the relationship between censored
failure tolerance and specimen sex, segment type (upper/lower), age, and
cross-sectional area. Age, sex, and cross-sectional area significantly affected
failure force and moment individually (p<0.0024). These data can be used to
develop injury prediction tools for lumbar spine fractures and further research
in future safety systems.
|
As more and more robots are envisioned to cooperate with humans sharing the
same space, it is desired for robots to be able to predict others' trajectories
to navigate in a safe and self-explanatory way. We propose a Convolutional
Neural Network-based approach to learn, detect, and extract patterns in
sequential trajectory data, known here as Social Pattern Extraction Convolution
(Social-PEC). A set of experiments carried out on the human trajectory
prediction problem shows that our model performs comparably to the state of the
art and outperforms it in some cases. More importantly, the proposed approach
resolves the obscurity in the previous use of a pooling layer, presenting a way
to intuitively explain the decision-making process.
|
It is widely believed that natural image data exhibits low-dimensional
structure despite the high dimensionality of conventional pixel
representations. This idea underlies a common intuition for the remarkable
success of deep learning in computer vision. In this work, we apply dimension
estimation tools to popular datasets and investigate the role of
low-dimensional structure in deep learning. We find that common natural image
datasets indeed have very low intrinsic dimension relative to the high number
of pixels in the images. Additionally, we find that low dimensional datasets
are easier for neural networks to learn, and models solving these tasks
generalize better from training to test data. Along the way, we develop a
technique for validating our dimension estimation tools on synthetic data
generated by GANs, allowing us to actively manipulate the intrinsic dimension by
controlling the image generation process. Code for our experiments may be found
at https://github.com/ppope/dimensions.
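One standard dimension estimation tool of the kind applied here is the Levina-Bickel maximum-likelihood estimator based on nearest-neighbour distances; a minimal sketch follows (the paper does not commit to this exact variant, and the averaging convention used below is one common choice):

```python
import numpy as np

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel maximum-likelihood intrinsic dimension estimate,
    averaged over all points, using the k nearest neighbours."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D.sort(axis=1)                         # column 0 is the zero self-distance
    Tk = D[:, k][:, None]                  # distance to the k-th neighbour
    inv_dims = np.log(Tk / D[:, 1:k]).mean(axis=1)   # per-point 1/m estimates
    return float(1.0 / inv_dims.mean())

# Sanity check: a 1-D manifold (a line segment) embedded in R^3.
rng = np.random.default_rng(0)
t = rng.uniform(size=300)
line3d = np.stack([t, 2 * t, -t], axis=1)
est = mle_intrinsic_dim(line3d)            # should be close to 1
```

The estimate recovers the manifold dimension, not the ambient pixel dimension, which is exactly the quantity the abstract reports as "very low" for natural image datasets.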
|
We derive explicitly the structural properties of the $p$-adic special
orthogonal groups in dimension three, for all primes $p$, and, along the way,
the two-dimensional case. In particular, starting from the unique definite
quadratic form in three dimensions (up to linear equivalence and rescaling), we
show that every element of $SO(3)_p$ is a rotation around an axis. An important
part of the analysis is the classification of all definite forms in two
dimensions, yielding a description of the rotation subgroups around any fixed
axis, which all turn out to be abelian and parametrised naturally by the
projective line.
Furthermore, we find that for odd primes $p$, the entire group $SO(3)_p$
admits a representation in terms of Cardano angles of rotations around the
reference axes, in close analogy to the real orthogonal case. However, this
works only for certain orderings of the product of rotations around the
coordinate axes, depending on the prime; furthermore, there is no general Euler
angle decomposition. For $p=2$, no Euler or Cardano decomposition exists.
|
Direct-to-satellite (DtS) communication has gained importance recently to
support globally connected Internet of things (IoT) networks. However,
relatively long distances of densely deployed satellite networks around the
Earth cause a high path loss. In addition, since high-complexity operations
such as beamforming, tracking, and equalization have to be performed at least
partially in the IoT devices, both the hardware complexity and the need for
high-capacity batteries of IoT devices increase. The reconfigurable intelligent surfaces
(RISs) have the potential to increase the energy-efficiency and to perform
complex signal processing over the transmission environment instead of IoT
devices. However, RISs need knowledge of the cascaded channel in order to
change the phase of the incident signal. This study proposes graph attention
networks (GATs) for the challenging channel estimation problem and examines the
performance of DtS IoT networks for different RIS configurations under GAT
channel estimation.
|
Mechatronic systems are commonly used in industry, where fast and
accurate motion performance is always required to guarantee manufacturing
precision and efficiency. Nevertheless, the system model and parameters are
difficult to obtain accurately. Moreover, high-order modes, strong coupling
in multi-axis systems, and unmodeled friction bring uncertain dynamics to the
system. To overcome the above-mentioned issues and enhance the
motion performance, this paper introduces a novel intelligent and totally
model-free control method for mechatronic systems with unknown dynamics. In
detail, a 2-degree-of-freedom (DOF) architecture is designed, which organically
merges a generalized super-twisting algorithm with a unique iterative learning
law. The controller solely utilizes the input-output data collected in
iterations such that it works without any knowledge of the system parameters.
A rigorous proof of convergence is given, and a case study on a
flexure-joint dual-drive H-gantry stage is shown to validate the effectiveness
of the proposed method.
|
We construct a new class of efficient Monte Carlo methods based on
continuous-time piecewise deterministic Markov processes (PDMP) suitable for
inference in high dimensional sparse models, i.e. models for which there is
prior knowledge that many coordinates are likely to be exactly $0$. This is
achieved with the fairly simple idea of endowing existing PDMP samplers with
sticky coordinate axes, coordinate planes etc. Upon hitting those subspaces, an
event is triggered, during which the process sticks to the subspace, this way
spending some time in a sub-model. That introduces non-reversible jumps between
different (sub-)models. The approach can also be combined with local
implementations of PDMP samplers to target measures that additionally exhibit a
sparse dependency structure. We illustrate the new method for a number of
statistical models where both the sample size $N$ and the dimensionality $d$ of
the parameter space are large.
|
The direct imaging of rocky exoplanets is one of the major science goals for
upcoming large telescopes. The contrast requirement for imaging such planets is
challenging. However, the mid-IR (InfraRed) regime provides the optimum
contrast to directly detect the thermal signatures of exoplanets in our solar
neighbourhood. We aim to exploit novel fast chopping techniques newly developed
for astronomy with the aid of adaptive optics to look for thermal signatures of
exoplanets around bright stars in the solar neighbourhood. We use the upgraded
VISIR (Very Large Telescope Imager and Spectrometer for the mid-InfraRed)
instrument with high contrast imaging (HCI) capability optimized for
observations at 10~$\mu$m to look for exoplanets around five nearby ($d$ < 4
pc) stars. The instrument provides a signal-to-noise ratio (S/N) improved by a
factor of $\sim$4 in the N-band compared to standard VISIR for a given
integration time. In this work we achieve sub-mJy detection sensitivity, which is
sufficient to detect few-Jupiter-mass planets in nearby systems. Although no
detections are made, we achieve the most sensitive limits within $<2''$ for all
the observed targets compared to previous campaigns. For $\epsilon$ Indi A and
$\epsilon$ Eri we achieve detection limits very close to the giant planets
discovered by RV, with the limits on $\epsilon$ Indi A being the most sensitive
to date. Our non-detection therefore supports an older age for $\epsilon$ Indi
A. The results presented here show the promise for high contrast imaging and
exoplanet detections in the mid-IR regime.
|
Effectively recognising and applying emotions to interactions is a highly
desirable trait for social robots. Implicitly understanding how subjects
experience different kinds of actions and objects in the world is crucial for
natural HRI interactions, with the possibility to perform positive actions and
avoid negative actions. In this paper, we utilize the NICO robot's appearance
and capabilities to give it the ability to model a coherent affective
association between a perceived auditory stimulus and a temporally asynchronous
emotion expression. This is done by combining evaluations of emotional valence
from vision and language. NICO uses this information to make decisions about
when to extend conversations in order to accrue more affective information if
the representation of the association is not coherent. Our primary contribution
is providing a NICO robot with the ability to learn the affective associations
between a perceived auditory stimulus and an emotional expression. NICO is able
to do this for both individual subjects and specific stimuli, with the aid of
an emotion-driven dialogue system that rectifies emotional expression
incoherences. The robot is then able to use this information to determine a
subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
|
Core Damage Frequency (CDF) is a risk metric often employed by nuclear
regulatory bodies worldwide. Numerical values for this metric are required by
U.S. regulators, prior to reactor licensing, and reported values can trigger
regulatory inspections. CDF is reported as a constant, sometimes accompanied by
a confidence interval. It is well understood that CDF characterizes the arrival
rate of a stochastic point process modeling core damage events. However,
consequences of the assumptions imposed on this stochastic process as a
computational necessity are often overlooked. Herein, we revisit CDF in the
context of modern point process theory. We argue that the assumptions required
to yield a constant CDF are typically unrealistic. We further argue that
treating CDF as an informative approximation is suspect, because of the
inherent difficulties in quantifying its quality as an approximation.
|
Weakly-supervised semantic segmentation (WSSS) is introduced to narrow the
gap for semantic segmentation performance from pixel-level supervision to
image-level supervision. Most advanced approaches are based on class activation
maps (CAMs) to generate pseudo-labels to train the segmentation network. The
main limitation of WSSS is that the process of generating pseudo-labels from
CAMs that use an image classifier is mainly focused on the most discriminative
parts of the objects. To address this issue, we propose Puzzle-CAM, a process
that minimizes differences between the features from separate patches and the
whole image. Our method consists of a puzzle module and two regularization
terms to discover the most integrated region in an object. Puzzle-CAM can
activate the overall region of an object using image-level supervision without
requiring extra parameters. In experiments, Puzzle-CAM outperformed previous
state-of-the-art methods using the same labels for supervision on the PASCAL
VOC 2012 dataset. Code associated with our experiments is available at
https://github.com/OFRIN/PuzzleCAM.
|
Recent work has revealed two classes of Globular Clusters (GCs), dubbed
Type-I and Type-II. Type-II GCs are characterized by a blue and a red
red-giant branch composed of stars with different metallicities, often coupled with
distinct abundances in the slow-neutron capture elements (s-elements). Here we
continue the chemical tagging of Type-II GCs by adding the two least-massive
clusters of this class, NGC1261 and NGC6934. Based on both spectroscopy and
photometry, we find that red stars in NGC1261 are slightly enhanced in [Fe/H]
by ~0.1 dex and confirm that red stars of NGC 6934 are enhanced in iron by ~0.2
dex. Neither NGC1261 nor NGC6934 show internal variations in the s-elements,
which suggests a GC mass threshold for the occurrence of s-process enrichment.
We found a significant correlation between the additional Fe locked in the red
stars of Type-II GCs and the present-day mass of the cluster. Nevertheless,
most Type II GCs retained a small fraction of the Fe produced by SNe II, lower
than 2%; NGC6273, M54 and omega Centauri are remarkable exceptions. In the
appendix, we infer for the first time chemical abundances of Lanthanum, assumed
as representative of the s-elements, in M54, the GC located in the nucleus of
the Sagittarius dwarf galaxy. Red-sequence stars are marginally enhanced in
[La/Fe] by 0.10\pm0.06 dex, in contrast with the large [La/Fe] spread of most
Type II GCs. We suggest that different processes are responsible for the
enrichment in iron and s-elements in Type-II GCs.
|
A set of quantum states is said to be absolutely entangled, when at least one
state in the set remains entangled for any definition of subsystems, i.e. for
any choice of the global reference frame. In this work we investigate the
properties of absolutely entangled sets (AES) of pure quantum states. For the
case of a two-qubit system, we present a sufficient condition to detect an AES,
and use it to construct families of $N$ states such that $N-3$ (the maximal
possible number) remain entangled for any definition of subsystems. For a
general bipartition $d=d_1d_2$, we prove that sets of
$N>\left\lfloor{(d_{1}+1)(d_{2}+1)/2}\right \rfloor$ states are AES with Haar
measure 1. Then, we define AES for multipartitions. We derive a general lower
bound on the number of states in an AES for a given multipartition, and also
construct explicit examples. In particular, we exhibit an AES with respect to
any possible multi-partitioning of the total system.
|
We study the numerical approximation of stochastic evolution equations with a
monotone drift driven by an infinite-dimensional Wiener process. To discretize
the equation, we combine a drift-implicit two-step BDF method for the temporal
discretization with an abstract Galerkin method for the spatial discretization.
After proving well-posedness of the BDF2-Maruyama scheme, we establish a
convergence rate of the strong error for equations under suitable Lipschitz
conditions. We illustrate our theoretical results through various numerical
experiments and compare the performance of the BDF2-Maruyama scheme to the
backward Euler--Maruyama scheme.
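For a scalar linear test equation, the drift-implicit BDF2-Maruyama time stepping can be sketched as follows; the treatment of the noise increments and the backward Euler-Maruyama starting step follow one common formulation and are assumptions here, not details taken from the paper:

```python
import numpy as np

def bdf2_maruyama(f_lin, g, x0, T, N, rng=None):
    """Two-step BDF scheme for dX = f_lin(X) dt + g dW with linear drift
    f_lin(x) = -lam*x, so each implicit step solves in closed form.
    The first step is bootstrapped with drift-implicit (backward) Euler."""
    lam = -f_lin(1.0)                     # recover lam from the linear drift
    h = T / N
    rng = rng or np.random.default_rng(0)
    dW = rng.normal(0.0, np.sqrt(h), N)   # Wiener increments
    X = np.empty(N + 1)
    X[0] = x0
    X[1] = (X[0] + g * dW[0]) / (1 + h * lam)          # backward Euler start
    for n in range(1, N):
        # (3/2)X_{n+1} - 2 X_n + (1/2)X_{n-1} = h f(X_{n+1}) + g (dW_n - dW_{n-1}/2)
        rhs = 2 * X[n] - 0.5 * X[n - 1] + g * (dW[n] - 0.5 * dW[n - 1])
        X[n + 1] = rhs / (1.5 + h * lam)
    return X

# Noise-free sanity check: plain BDF2 on dX = -X dt reproduces exp(-T).
X = bdf2_maruyama(lambda x: -x, g=0.0, x0=1.0, T=1.0, N=200)
```

In the infinite-dimensional setting of the paper, the closed-form division is replaced by solving a (possibly nonlinear) elliptic problem at each step, with the Galerkin method supplying the spatial discretization.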
|
Many long-term goals, such as learning a language, require people to
regularly practice every day to achieve mastery. At the same time, people
regularly surf the web and read social news feeds in their spare time. We have
built a browser extension that teaches vocabulary to users in the context of
Facebook feeds and arbitrary websites, by showing users interactive quizzes
they can answer without leaving the website. On Facebook, the quizzes show up
as part of the news feed, while on other sites, the quizzes appear where
advertisements normally would. In our user study, we examined the effectiveness
of inserting microlearning tasks into social news feeds. We compared vocabulary
learning rates when we inserted interactive quizzes into feeds, versus
inserting links that lead them to a website where they could do the quizzes.
Our results suggest that users engage with and learn from our embedded quizzes,
and engagement increases when the quizzes can be done directly within their
feeds.
|
The $s,p-d$ exchange coupling between the spins of band carriers and of
transition metal (TM) dopants ranging from Ti to Cu in ZnO is studied within
the density functional theory. The $+U$ corrections are included to reproduce
the experimental ZnO band gap and the dopant levels. The $p-d$ coupling reveals
unexpectedly complex features. In particular, (i) the $p-d$ coupling constants
$N_0\beta$ vary about 10 times when going from V to Cu, (ii) not only the value
but also the sign of $N_0\beta$ depends on the charge state of the dopant,
(iii) the $p-d$ coupling with the heavy holes and the light holes is not the
same; in the case of Fe, Co and Ni, $N_0\beta$s for the two subbands can differ
twice, and for Cu the opposite sign of the coupling is found for light and
heavy holes. The main features of the $p-d$ coupling are determined by the
$p-d$ hybridization between the $d$(TM) and $p$(O) orbitals. In contrast, the
$s-d$ coupling constant $N_0\alpha$ is almost the same for all TM ions, and
does not depend on the charge state of the dopant. The TM-induced spin
polarization of the $p$(O) orbitals contributes to the $s-d$ coupling,
enhancing $N_0\alpha$.
|
We present results on electron transport in quasi-one dimensional (1D)
quantum wires in GaAs/AlGaAs heterostructures obtained using an asymmetric
confinement potential. The variation of the energy levels of the spatially
quantized states is followed from strong confinement through weak confinement
to the onset of two-dimensionality. An anticrossing of the initial ground and
first excited states is found as the asymmetry of the potential is varied
giving rise to two anticrossing events which occur on either side of symmetric
confinement. We present results analysing this behaviour and showing how it can
be affected by the inhomogeneity in the background potential. The use of an
enhanced source-drain voltage to alter the energy levels provides a significant
validation of the analysis, revealing the formation of double rows
of electrons which correlate with the anticrossing.
|
For any given positive integer $l$, we prove that every plane deformation of
a circle which preserves the $1/2$ and $1/(2l+1)$-rational caustics is trivial
i.e. the deformation consists only of similarities (rescalings plus
isometries).
|
The progress in neuromorphic computing is fueled by the development of novel
nonvolatile memories capable of storing analog information and implementing
neural computation efficiently. However, like most other analog circuits, these
devices and circuits are prone to imperfections, such as temperature
dependency, noise, tuning error, etc., often leading to considerable
performance degradation in neural network implementations. Indeed,
imperfections are major obstacles in the path of further progress and ultimate
commercialization of these technologies. Hence, a practically viable approach
should be developed to deal with these nonidealities and unleash the full
potential of nonvolatile memories in neuromorphic systems. Here, for the first
time, we report a comprehensive characterization of critical imperfections in
two analog-grade memories, namely passively-integrated memristors and
redesigned eFlash memories, which both feature long-term retention, high
endurance, analog storage, low-power operation, and compact nano-scale
footprint. Then, we propose a holistic approach that includes modifications in
the training, tuning algorithm, memory state optimization, and circuit design
to mitigate these imperfections. Our proposed methodology is corroborated on a
hybrid software/experimental framework using two benchmarks: a moderate-size
convolutional neural network and ResNet-18 trained on CIFAR-10 and ImageNet
datasets, respectively. Our proposed approaches allow 2.5x to 9x reductions in
the energy consumption of memory arrays during inference and a sub-percent
accuracy drop across the 25-100 °C temperature range. The defect tolerance is
improved by >100x, and a sub-percent accuracy drop is demonstrated in deep
neural networks built with 64x64 passive memristive crossbars featuring 25%
normalized switching threshold variations.
|
We study the behavior of R\'enyi entropies for pure states from standard
assumptions about chaos in the high-energy spectrum of the Hamiltonian of a
many-body quantum system. We compute the exact long-time averages of R\'enyi
entropies and show that the quantum noise around these values is exponentially
suppressed in the microcanonical entropy. For delocalized states over the
microcanonical band, the long-time average approximately reproduces the
equilibration proposal of H. Liu and S. Vardhan, with extra structure arising
at the order of non-planar permutations. We analyze the equilibrium
approximation for AdS/CFT systems describing black holes in equilibrium in a
box. We extend our analysis to the situation of an evaporating black hole, and
comment on the possible gravitational description of the new terms in our
approximation.
|
Beam breakup instability is a potential issue for all particle accelerators
and is often the limiting factor for the maximum beam current that can be
achieved. This is particularly relevant for Energy Recovery Linacs with
multiple passes where a relatively small amount of charge can result in a large
beam current. Recent studies have shown that the choice of filling pattern and
recirculation scheme for a multi-pass energy recovery linac can drastically
affect the interactions between the beam and RF system. In this paper we
further explore this topic to study how filling patterns affect the beam
breakup instability and how this can allow us to optimise the design in order
to minimise this effect. We present a theoretical model of the beam-RF
interaction as well as numerical modeling and show that the threshold current
can vary by factors of 2-4, and potentially even more depending on the machine
design parameters. Therefore a judicious choice of filling pattern can greatly
raise the threshold for the onset of BBU, expanding the utility of future ERLs.
|
This paper aims to devise a generalized maximum likelihood (ML) estimator to
robustly detect signals with unknown noise statistics in multiple-input
multiple-output (MIMO) systems. In practice, there is little or even no
statistical knowledge on the system noise, which in many cases is non-Gaussian,
impulsive and not analyzable. Existing detection methods have mainly focused on
specific noise models, which are not robust enough with unknown noise
statistics. To tackle this issue, we propose a novel ML detection framework to
effectively recover the desired signal. Our framework is a fully probabilistic
one that can efficiently approximate the unknown noise distribution through a
normalizing flow. Importantly, this framework is driven by an unsupervised
learning approach, where only the noise samples are required. To reduce the
computational complexity, we further present a low-complexity version of the
framework, by utilizing an initial estimation to reduce the search space.
Simulation results show that our framework outperforms other existing
algorithms in terms of bit error rate (BER) in non-analytical noise
environments, while it can reach the ML performance bound in analytical noise
environments. The code of this paper is available at
https://github.com/skypitcher/manfe.
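The generalized ML rule, scoring each candidate symbol vector by the learned log-density of the residual noise, can be sketched with a stand-in Gaussian density in place of the trained normalizing flow; the channel matrix, constellation, and noise values below are toy assumptions for illustration:

```python
import numpy as np
from itertools import product

def ml_detect(y, H, constellation, noise_logpdf):
    """Exhaustive generalized ML detection: pick the candidate symbol vector x
    maximizing the log-density of the residual noise y - Hx."""
    best, best_score = None, -np.inf
    for x in product(constellation, repeat=H.shape[1]):
        x = np.array(x, dtype=float)
        score = noise_logpdf(y - H @ x)
        if score > best_score:
            best, best_score = x, score
    return best

# Stand-in for the flow-learned density: i.i.d. standard Gaussian log-pdf
# (up to an additive constant, which does not affect the argmax).
gauss_logpdf = lambda r: float(-0.5 * np.sum(r**2))

H = np.array([[1.0, 0.2], [0.1, 1.0]])
x_true = np.array([1.0, -1.0])
y = H @ x_true + 0.05 * np.array([0.3, -0.2])   # small additive noise
x_hat = ml_detect(y, H, constellation=(-1.0, 1.0), noise_logpdf=gauss_logpdf)
```

Replacing `gauss_logpdf` with the log-likelihood of a flow trained on noise samples gives the unsupervised detector of the abstract; the low-complexity variant prunes the exhaustive search around an initial estimate.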
|
The applicability of process mining techniques hinges on the availability of
event logs capturing the execution of a business process. In some use cases,
particularly those involving customer-facing processes, these event logs may
contain private information. Data protection regulations restrict the use of
such event logs for analysis purposes. One way of circumventing these
restrictions is to anonymize the event log to the extent that no individual can
be singled out using the anonymized log. This paper addresses the problem of
anonymizing an event log in order to guarantee that, upon disclosure of the
anonymized log, the probability that an attacker may single out any individual
represented in the original log, does not increase by more than a threshold.
The paper proposes a differentially private disclosure mechanism, which
oversamples the cases in the log and adds noise to the timestamps to the extent
required to achieve the above privacy guarantee. The paper reports on an
empirical evaluation of the proposed approach using 14 real-life event logs in
terms of data utility loss and computational efficiency.
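The timestamp-noise component of such a mechanism can be sketched with the standard Laplace mechanism; the sensitivity value is a placeholder, and the paper's mechanism additionally oversamples cases, which is omitted here:

```python
import numpy as np

def anonymize_timestamps(timestamps, epsilon, sensitivity=1.0, rng=None):
    """Add Laplace noise of scale sensitivity/epsilon to event timestamps,
    the standard mechanism for epsilon-differential privacy on numeric values."""
    rng = rng or np.random.default_rng(0)
    scale = sensitivity / epsilon
    return timestamps + rng.laplace(0.0, scale, size=len(timestamps))

ts = np.array([0.0, 5.0, 12.0, 30.0])     # event times of one case (toy units)
noisy = anonymize_timestamps(ts, epsilon=0.5)
```

Smaller `epsilon` means stronger privacy but larger timestamp distortion, which is exactly the utility-loss trade-off the empirical evaluation measures.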
|
Recently, Rendle has warned that the use of sampling-based top-$k$ metrics
might not suffice. This throws a number of recent studies on deep
learning-based recommendation algorithms, and classic non-deep-learning
algorithms using such a metric, into jeopardy. In this work, we thoroughly
investigate the relationship between the sampling and global top-$K$ Hit-Ratio
(HR, or Recall), originally proposed by Koren[2] and extensively used by
others. By formulating the problem of aligning sampling top-$k$ ($SHR@k$) and
global top-$K$ ($HR@K$) Hit-Ratios through a mapping function $f$, so that
$SHR@k\approx HR@f(k)$, we demonstrate both theoretically and experimentally
that the sampling top-$k$ Hit-Ratio provides an accurate approximation of its
global (exact) counterpart, and can consistently predict the correct winners
(the same as indicated by their corresponding global Hit-Ratios).
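The alignment $SHR@k\approx HR@f(k)$ can be illustrated on synthetic ranks; the crude linear mapping $f(k)=k(N+1)/(m+1)$ below is only a first-order stand-in for the calibrated mapping function studied in the work:

```python
import random

# Toy alignment of sampled and global Hit-Ratios.  Each user's true item
# gets a synthetic uniform global rank r among N items; global HR@K is
# P(r <= K), while the sampled metric draws m negatives and tests the
# top-k among the m+1 candidates.

def sampled_hit(rank, N, m, k, rng):
    better = sum(1 for _ in range(m) if rng.randrange(1, N + 1) < rank)
    return better < k

rng = random.Random(0)
N, m, k = 1000, 99, 1
ranks = [rng.randrange(1, N + 1) for _ in range(20000)]

shr = sum(sampled_hit(r, N, m, k, rng) for r in ranks) / len(ranks)
K = round(k * (N + 1) / (m + 1))   # f(k) under the crude linear mapping
hr = sum(r <= K for r in ranks) / len(ranks)
```

For uniform ranks both quantities concentrate near $k/(m+1)$, so SHR@1 with 99 sampled negatives tracks HR@10 over the full catalog of 1000 items.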
|
We look for minimal conditions on a two-dimensional metric surface $X$ of
locally finite Hausdorff $2$-measure under which $X$ admits an (almost)
parametrization with good geometric and analytic properties. Only assuming that
$X$ is locally geodesic, we show that Jordan domains in $X$ of finite boundary
length admit a quasiconformal almost parametrization. If $X$ satisfies some
further conditions then such an almost parametrization can be upgraded to a
geometrically quasiconformal homeomorphism or a quasisymmetric homeomorphism.
In particular, we recover Rajala's recent quasiconformal uniformization theorem
in the special case that $X$ is locally geodesic as well as Bonk-Kleiner's
quasisymmetric uniformization theorem. On the way we establish the existence of
Sobolev discs spanning a given Jordan curve in $X$ under nearly minimal
assumptions on $X$ and prove the continuity of energy minimizers.
|
In Japan, teachers and students are randomly matched in the first year of
elementary school. Under this quasi-natural experimental setting, we examine
how learning in a female-teacher homeroom class in elementary school influences
pupils' smoking behavior after they become adults. We find that pupils are less
likely to smoke later in life if they belonged to a female-teacher homeroom
class in their first year of school.
|
Colloidal gels formed by strongly attractive particles at low particle volume
fractions are composed of space-spanning networks of uniformly sized clusters.
We study the thermal fluctuations of the clusters using differential dynamic
microscopy by decomposing them into two modes of dynamics, and link them to the
macroscopic viscoelasticity via rheometry. The first mode, dominant at early
times, represents the localized, elastic fluctuations of individual clusters.
The second mode, pronounced at late times, reflects the collective,
viscoelastic dynamics facilitated by the connectivity of the clusters. By
mixing two types of particles of distinct attraction strengths in different
proportions, we control the transition time at which the collective mode starts
to dominate, and hence tune the frequency dependence of the linear viscoelastic
moduli of the binary gels.
|
In the paper we continue to study Special Bohr-Sommerfeld geometry of compact
symplectic manifolds. Using natural deformation parameters we avoid the
difficulties that appear in the definition of the moduli space of Special
Bohr-Sommerfeld cycles for compact simply connected algebraic varieties. As a
byproduct we present certain remarks on the Weinstein structures and Eliashberg
conjectures.
|
Because all stars contribute to its gravitational potential, stellar clusters
amplify perturbations collectively. In the limit of small fluctuations, this is
described through linear response theory, via the so-called response matrix.
While the evaluation of this matrix is somewhat straightforward for unstable
modes (i.e. with a positive growth rate), it requires a careful analytic
continuation for damped modes (i.e. with a negative growth rate). We present a
generic method to perform such a calculation in spherically symmetric stellar
clusters. When applied to an isotropic isochrone cluster, we recover the
presence of a low-frequency weakly damped $\ell = 1$ mode. We finally use a set
of direct $N$-body simulations to test explicitly this prediction through the
statistics of the correlated random walk undergone by a cluster's density
centre.
|
Convergence of order $O(1/\sqrt{n})$ is obtained for the distance in total
variation between the Poisson distribution and the distribution of the number
of fixed size cycles in generalized random graphs with random vertex weights.
The weights are assumed to be independent identically distributed random
variables which have a power-law distribution. The proof is based on the
Chen--Stein approach and on the derived properties of the ratio of the sum of
squares of random variables and the sum of these variables. These properties
can be applied to other asymptotic problems related to generalized random
graphs.
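A Monte-Carlo illustration (not a proof; small illustrative parameters chosen for speed): triangle counts in a generalized random graph with i.i.d. power-law weights, compared in total variation with a Poisson law of the matched mean over a truncated support:

```python
import math
import random
from collections import Counter
from itertools import combinations

# Generalized random graph: edge {i, j} present with probability
# w_i w_j / (L + w_i w_j), where L is the total weight and the weights
# are i.i.d. power-law distributed.

def triangle_count(n, tau, rng):
    w = [(1.0 - rng.random()) ** (-1.0 / (tau - 1.0)) for _ in range(n)]
    L = sum(w)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < w[i] * w[j] / (L + w[i] * w[j]):
                adj[i][j] = adj[j][i] = True
    return sum(1 for i, j, k in combinations(range(n), 3)
               if adj[i][j] and adj[j][k] and adj[i][k])

rng = random.Random(1)
counts = [triangle_count(40, 4.5, rng) for _ in range(400)]
mean = sum(counts) / len(counts)
emp = Counter(counts)
tv = 0.5 * sum(abs(emp.get(x, 0) / len(counts)
                   - math.exp(-mean) * mean ** x / math.factorial(x))
               for x in range(30))
```

The abstract's $O(1/\sqrt{n})$ rate is an analytic statement via Chen--Stein; this toy only visualizes the closeness being quantified.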
|
Natural Language Inference (NLI) or Recognizing Textual Entailment (RTE) is
the task of predicting the entailment relation between a pair of sentences
(premise and hypothesis). This task has been described as a valuable testing
ground for the development of semantic representations, and is a key component
in natural language understanding evaluation benchmarks. Models that understand
entailment should encode both the premise and the hypothesis. However,
experiments by Poliak et al. revealed a strong preference of these models
towards patterns observed only in the hypothesis, based on a 10 dataset
comparison. Their results indicated the existence of statistical irregularities
present in the hypothesis that bias the model into performing competitively
with the state of the art. While recast datasets provide large scale generation
of NLI instances due to minimal human intervention, the papers that generate
them do not provide fine-grained analysis of the potential statistical patterns
that can bias NLI models. In this work, we analyze hypothesis-only models
trained on one of the recast datasets provided in Poliak et al. for word-level
patterns. Our results indicate the existence of potential lexical biases that
could contribute to inflating the model performance.
|
This paper considers the equilibrium positions of $n$ particles in one
dimension. Two forces act on the particles: a nonlocal repulsive
particle-interaction force and an external force which pushes them to an
impenetrable barrier. While the continuum limit as $n \to \infty$ is known for
a certain class of potentials, numerical simulations show that a discrete
boundary layer appears at the impenetrable barrier, i.e. the positions of
$o(n)$ particles do not fit to the particle density predicted by the continuum
limit. In this paper we establish a first-order $\Gamma$-convergence result
which guarantees that these $o(n)$ particles converge to a specific continuum
boundary-layer profile.
|
Priors allow us to robustify inference and to incorporate expert knowledge in
Bayesian hierarchical models. This is particularly important when there are
random effects that are hard to identify based on observed data. The challenge
lies in understanding and controlling the joint influence of the priors for the
variance parameters. makemyprior is an R package that guides the
formulation of joint prior distributions for variance parameters. A joint prior
distribution is constructed based on a hierarchical decomposition of the total
variance in the model along a tree, and takes the entire model structure into
account. Users input their prior beliefs or express ignorance at each level of
the tree. Prior beliefs can be general ideas about reasonable ranges of
variance values and need not be detailed expert knowledge. The constructed
priors lead to robust inference and guarantee proper posteriors. A graphical
user interface facilitates construction and assessment of different choices of
priors through visualization of the tree and joint prior. The package aims to
expand the toolbox of applied researchers and make priors an active component
in their Bayesian workflow.
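The hierarchical decomposition of total variance along a tree can be sketched in a few lines; the tree, the proportions, and the `split_variance` helper below are hypothetical illustrations, not the makemyprior R API:

```python
# Hypothetical sketch of the tree idea: the total variance is split along
# a tree by proportion parameters, so a prior on the total plus priors on
# the proportions induces a joint prior on every variance parameter.

def split_variance(total, tree, proportions):
    """tree: {node: [children]}; proportions: child -> fraction of its parent."""
    out = {"total": total}
    stack = [("total", total)]
    while stack:
        node, v = stack.pop()
        for child in tree.get(node, []):
            out[child] = v * proportions[child]
            stack.append((child, out[child]))
    return out

# two-level tree: total -> {group, residual}; group -> {school, class}
tree = {"total": ["group", "residual"], "group": ["school", "class"]}
props = {"group": 0.4, "residual": 0.6, "school": 0.25, "class": 0.75}
variances = split_variance(1.0, tree, props)
```

Drawing `props` from a prior (e.g. a Dirichlet over siblings) instead of fixing them yields a joint prior over all variance parameters that respects the total, which is the structure the package exploits.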
|
Consider a nonlinear wave equation for a massless scalar field with
self-interaction in the spatially flat Friedmann-Lema\^{i}tre-Robertson-Walker
spacetimes. For the case of accelerated expansion, we show that blow-up occurs
in finite time for the equation with arbitrary power nonlinearity, and we
derive upper bounds on the lifespan of blow-up solutions. Comparing with the case of
the Minkowski spacetime, we discuss how the scale factor affects the lifespan
of blow-up solutions of the equation.
|
In a geographically distributed population, assortative clustering plays an
important role in evolution by modifying local environments. To examine its
effects in a linear habitat, we consider a one-dimensional grid of cells, where
each cell is either empty or occupied by an organism whose replication strategy
is genetically passed on to offspring. The strategy determines whether to have
offspring in surrounding cells, as a function of the neighborhood
configuration. If more than one offspring compete for a cell, then they can all
be exterminated due to the cost of conflict, which depends on environmental
conditions. We find that the system is more densely populated in an unfavorable
environment than in a favorable one because only the latter has to pay the cost
of conflict. This observation agrees reasonably well with a mean-field analysis
which takes assortative clustering of strategies into consideration. Our
finding suggests a possibility of intrinsic nonlinearity between environmental
conditions and population density when an evolutionary process is involved.
|
Ultrahigh repetition rate lasers will become vital light sources for many
future technologies; however, their realization is challenging because the
cavity size must be minimized. Whispering-gallery-mode (WGM) microresonators
are attractive for this purpose since they allow the strong light-matter
interaction usually needed to enable mode-locking. However, the optimum
parameter ranges are entirely unknown since no experiments have yet been
conducted. Here, we numerically investigate pulsed operation in a toroidal WGM
microresonator with gain and saturable absorption (SA) to study the
experimental feasibility. We show that dispersion is the key parameter for
achieving passive mode-locking in this system. Moreover, the design guideline
provided in this work can apply to any small resonators with gain and SA and is
not limited to a specific cavity system.
|
Density functional theory (DFT) based modeling of electronic excited states
is of importance for investigation of the photophysical/photochemical
properties and spectroscopic characterization of large systems. The widely used
linear response time-dependent DFT (TDDFT) approach is however not effective at
modeling many types of excited states, including (but not limited to)
charge-transfer states, doubly excited states and core-level excitations. In
this perspective, we discuss state-specific orbital optimized (OO) DFT
approaches as an alternative to TDDFT for electronic excited states. We motivate
the use of OO-DFT methods and discuss reasons behind their relatively
restricted historical usage (vs TDDFT). We subsequently highlight modern
developments that address these factors and allow efficient and reliable OO-DFT
computations. Several successful applications of OO-DFT for challenging
electronic excitations are also presented, indicating their practical efficacy.
OO-DFT approaches are thus increasingly becoming a useful route for computing
excited states of large chemical systems. We conclude by discussing the
limitations and challenges still facing OO-DFT methods, as well as some
potential avenues for addressing them.
|
Executing scientific workflows with heterogeneous tasks on HPC platforms
poses several challenges which will be further exacerbated by the upcoming
exascale platforms. At that scale, bespoke solutions will not enable effective
and efficient workflow executions. In preparation, we need to look at ways to
manage engineering effort and capability duplication across software systems by
integrating independently developed, production-grade software solutions. In
this paper, we integrate RADICAL-Pilot (RP) and Parsl and develop an MPI
executor to enable the execution of workflows with heterogeneous (non)MPI
Python functions at scale. We characterize the strong and weak scaling of the
integrated RP-Parsl system when executing two use cases from polar science, and
of the function executor on both SDSC Comet and TACC Frontera. We gain
engineering insight about how to analyze and integrate workflow and runtime
systems, minimizing changes in their code bases and overall development effort.
Our experiments show that the overheads of the integrated system are invariant
of resource and workflow scale, and measure the impact of diverse MPI
overheads. Together, those results define a blueprint towards an ecosystem
populated by specialized, efficient, effective and independently-maintained
software systems to face the upcoming scaling challenges.
|
This paper proposes a novel end-to-end framework for detecting suspicious
pulmonary nodules in chest CT scans. The method's core idea is a new nodule
segmentation architecture with a model-based feature projection block on
three-dimensional convolutions. This block acts as a preliminary feature
extractor for a two-dimensional U-Net-like convolutional network. Using the
proposed approach along with an axial, coronal, and sagittal projection
analysis makes it possible to abandon the widely used false positives reduction
step. The proposed method achieves SOTA on LUNA2016 with 0.959 average
sensitivity, and 0.936 sensitivity if the false-positive level per scan is
0.25. The paper describes the proposed approach and presents the experimental
results on LUNA2016 as well as ablation studies.
|
Pruning the weights of randomly initialized neural networks plays an
important role in the context of lottery ticket hypothesis. Ramanujan et al.
(2020) empirically showed that only pruning the weights can achieve remarkable
performance instead of optimizing the weight values. However, to achieve the
same level of performance as the weight optimization, the pruning approach
requires more parameters in the networks before pruning and thus more memory
space. To overcome this parameter inefficiency, we introduce a novel framework
to prune randomly initialized neural networks with iteratively randomizing
weight values (IteRand). Theoretically, we prove an approximation theorem in
our framework, which indicates that the randomizing operations are provably
effective to reduce the required number of the parameters. We also empirically
demonstrate the parameter efficiency in multiple experiments on CIFAR-10 and
ImageNet.
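A structural sketch of one IteRand-style step, assuming a flat weight vector and a fixed pruning ratio (the `iterand_step` helper is a hypothetical illustration, not the paper's implementation): weights stay frozen, a score per weight decides the mask, and pruned weights are periodically re-randomized.

```python
import random

# Sketch: only the scores would be learned; every `period` steps the
# currently pruned weights are re-drawn so that poor initial draws can be
# replaced, which is what reduces the width overhead of pure pruning.

def iterand_step(weights, scores, step, q=0.5, period=10, rng=None):
    k = int(len(weights) * q)
    order = sorted(range(len(weights)), key=lambda i: -scores[i])
    mask = [False] * len(weights)
    for i in order[:k]:          # keep the top-q fraction by score
        mask[i] = True
    if step % period == 0:
        for i in range(len(weights)):
            if not mask[i]:
                weights[i] = rng.gauss(0.0, 1.0)  # re-randomize pruned weights
    return mask

rng = random.Random(0)
weights = [rng.gauss(0.0, 1.0) for _ in range(8)]
scores = [rng.random() for _ in range(8)]
mask = iterand_step(weights, scores, step=10, rng=rng)
```

In the full framework the scores would be trained (e.g. with a straight-through estimator) while the weight values are never optimized, matching the pruning-only setting of the abstract.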
|
In this paper, we use the Carnot-Carath\'eodory distance from sub-Riemannian
geometry to prove entropy decay estimates for all finite dimensional symmetric
quantum Markov semigroups. This estimate is independent of the environment size
and hence stable under tensorization. Our approach relies on the transference
principle, the existence of $t$-designs, and the sub-Riemannian diameter of
compact Lie groups and implies estimates for the spectral gap.
|
We study online Multi-Agent Path Finding (MAPF), where new agents are
constantly revealed over time and all agents must find collision-free paths to
their given goal locations. We generalize existing complexity results of
(offline) MAPF to online MAPF. We classify online MAPF algorithms into
different categories based on (1) controllability (the set of agents that they
can plan paths for at each time) and (2) rationality (the quality of paths they
plan) and study the relationships between them. We perform a competitive
analysis for each category of online MAPF algorithms with respect to
commonly-used objective functions. We show that a naive algorithm that routes
newly-revealed agents one at a time in sequence achieves a competitive ratio
that is asymptotically bounded from both below and above by the number of
agents with respect to flowtime and makespan. We then show a counter-intuitive
result that, if rerouting of previously-revealed agents is not allowed, any
rational online MAPF algorithms, including ones that plan optimal paths for all
newly-revealed agents, have the same asymptotic competitive ratio as the naive
algorithm, even on 2D 4-neighbor grids. We also derive constant lower bounds on
the competitive ratio of any rational online MAPF algorithms that allow
rerouting. The results thus provide theoretical insights into the effectiveness
of using MAPF algorithms in an online setting for the first time.
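The naive baseline analyzed above, routing newly revealed agents one at a time, can be sketched with a space-time BFS against a reservation table (vertex conflicts only; edge-swap conflicts and the competitive analysis itself are omitted from this toy):

```python
from collections import deque

# Route each newly revealed agent in sequence, never rerouting earlier
# agents.  `reserved` holds (row, col, time) cells claimed by prior paths.

def route(grid, start, goal, reserved, horizon=50):
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        (r, c), t, path = queue.popleft()
        if (r, c) == goal:
            return path
        if t >= horizon:
            continue
        for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):  # wait or move
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                    and (nr, nc, t + 1) not in reserved
                    and ((nr, nc), t + 1) not in seen):
                seen.add(((nr, nc), t + 1))
                queue.append(((nr, nc), t + 1, path + [(nr, nc)]))
    return None

grid = [[0, 0, 0], [0, 0, 0]]          # 2x3 4-neighbor grid, 0 = free
reserved, flowtime = set(), 0
for start, goal in [((0, 0), (0, 2)), ((1, 0), (1, 2))]:
    path = route(grid, start, goal, reserved)
    flowtime += len(path) - 1
    for t, (r, c) in enumerate(path):
        reserved.add((r, c, t))
    for t in range(len(path), 60):      # agents rest at their goals
        reserved.add((goal[0], goal[1], t))
```

Each later agent must plan around every reservation already made, which is exactly the mechanism behind the competitive-ratio lower bounds discussed above.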
|
In this paper, we use the first-principles calculations based on the density
functional theory to investigate structural, electronic and magnetic properties
of Fe$_{2}$YSn (Y = Mn, Ti and V). The generalized gradient approximation
(GGA) method is used for calculations. The Cu$_{2}$MnAl type structure is
energetically more stable than the Hg$_{2}$CuTi type structure. The negative
formation energy is evidence of the thermodynamic stability of these alloys.
The calculated total spin moments at the equilibrium lattice constant are
3$\mu_\text{B}$ and 0$\mu_\text{B}$ for Fe$_{2}$MnSn and Fe$_{2}$TiSn,
respectively, in agreement with the Slater-Pauling rule $M_t=Z_t-24$. The study
of electronic and magnetic properties shows that
Fe$_{2}$MnSn and Fe$_{2}$TiSn full-Heusler alloys are complete half-metallic
ferromagnetic materials.
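The Slater-Pauling check quoted above is simple arithmetic over valence electron counts (Fe: 8, Mn: 7, Ti: 4, V: 5, Sn: 4):

```python
# Slater-Pauling rule for full-Heusler alloys: M_t = Z_t - 24, with Z_t
# the total number of valence electrons per formula unit.

valence = {"Fe": 8, "Mn": 7, "Ti": 4, "V": 5, "Sn": 4}

def slater_pauling_moment(formula):
    """Total spin moment (in mu_B) predicted for a full-Heusler formula unit."""
    z_t = sum(valence[el] * n for el, n in formula)
    return z_t - 24

m_mn = slater_pauling_moment([("Fe", 2), ("Mn", 1), ("Sn", 1)])  # 3 mu_B
m_ti = slater_pauling_moment([("Fe", 2), ("Ti", 1), ("Sn", 1)])  # 0 mu_B
```

Fe$_2$MnSn has $Z_t = 16 + 7 + 4 = 27$, giving 3$\mu_\text{B}$, and Fe$_2$TiSn has $Z_t = 16 + 4 + 4 = 24$, giving 0$\mu_\text{B}$, matching the computed moments.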
|
The different thermo-elastic properties of glass fibers and polymer matrices
can generate residual thermal stresses in injection-molded fiber-reinforced
plastic (FRP) objects. During cooling from mold to room temperature, these
stresses can be relaxed by large deformations resulting from an instability of
the unwarped configuration (i.e., buckling). This article investigates the
thermal buckling of thin FRP disks via an analytical formulation based on the
F\"oppl-von K\'arm\'an theory. Expanding on our previous work, cylindrical orthotropy
with material parameters varying over the disk thickness is assumed in order to
account for thickness dependency of the glass fiber orientation distribution. A
disk parameter generalizing the thermal anisotropy ratio for homogeneous
orthotropic disks is introduced and its relation with the occurrence and
periodicity of buckling is discussed. This is done for a skin-core-skin model,
for which the core-to-total thickness ratio is defined. For fiber orientation
distributions typical of injection-molded disks, it is found that there exists
a value of the thickness ratio for which no buckling occurs. It is also
demonstrated that the periodicity of the first buckling mode is described by
the generalized thermal anisotropy ratio, thus extending the results obtained
for a homogeneous fiber orientation distribution. Improvements in the accuracy
of the predictions for experimental data available in the literature when using
the skin-core-skin model are shown. Finally, we study the relation between
buckling temperature and disk thickness and propose an expression for the
dependence of the normalized buckling temperature on the thermal anisotropy
ratio. Results of FEM simulations are used to validate the proposed expression,
proving its applicability and accuracy.
|
Neural networks serve as effective controllers in a variety of complex
settings due to their ability to represent expressive policies. The complex
nature of neural networks, however, makes their output difficult to verify and
predict, which limits their use in safety-critical applications. While
simulations provide insight into the performance of neural network controllers,
they are not enough to guarantee that the controller will perform safely in all
scenarios. To address this problem, recent work has focused on formal methods
to verify properties of neural network outputs. For neural network controllers,
we can use a dynamics model to determine the output properties that must hold
for the controller to operate safely. In this work, we develop a method to use
the results from neural network verification tools to provide probabilistic
safety guarantees on a neural network controller. We develop an adaptive
verification approach to efficiently generate an overapproximation of the
neural network policy. Next, we modify the traditional formulation of Markov
decision process (MDP) model checking to provide guarantees on the
overapproximated policy given a stochastic dynamics model. Finally, we
incorporate techniques in state abstraction to reduce overapproximation error
during the model checking process. We show that our method is able to generate
meaningful probabilistic safety guarantees for aircraft collision avoidance
neural networks that are loosely inspired by Airborne Collision Avoidance
System X (ACAS X), a family of collision avoidance systems that formulates the
problem as a partially observable Markov decision process (POMDP).
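One ingredient of such pipelines, overapproximating a network's outputs over an input box, can be sketched with interval bound propagation; the tiny 2-2-1 ReLU network below is illustrative only and unrelated to the ACAS X networks, and the adaptive refinement and MDP model checking steps are not shown:

```python
# Interval bound propagation: push an input box through affine layers and
# ReLUs so that the output interval is a sound overapproximation.

def interval_affine(lo, hi, W, b):
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    return [max(0.0, x) for x in lo], [max(0.0, x) for x in hi]

# tiny illustrative 2-2-1 network
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, 2.0]], [0.1]

lo, hi = interval_affine([0.0, 0.0], [1.0, 1.0], W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)
# [lo[0], hi[0]] is guaranteed to contain every output the network can
# produce on the input box [0, 1] x [0, 1].
```

Such output intervals are what a model checker can consume in place of the exact (intractable) network policy; tightening them adaptively reduces the overapproximation error mentioned above.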
|
We study four-point functions of scalars, conserved currents, and stress
tensors in a conformal field theory, generated by a local contact term in the
bulk dual description, in two different causal configurations. The first of
these is the standard Regge configuration in which the chaos bound applies. The
second is the `causally scattering configuration' in which the correlator
develops a bulk point singularity. We find an expression for the coefficient of
the bulk point singularity in terms of the bulk S matrix of the bulk dual
metric, gauge fields and scalars, and use it to determine the Regge scaling of
the correlator on the causally scattering sheet in terms of the Regge growth of
this S matrix. We then demonstrate that the Regge scaling on this sheet is
governed by the same power as in the standard Regge configuration, and so is
constrained by the chaos bound, which turns out to be violated unless the bulk
flat space S matrix grows no faster than $s^2$ in the Regge limit. It follows
that in the context of the AdS/CFT correspondence, the chaos bound applied to
the boundary field theory implies that the S matrices of the dual bulk scalars,
gauge fields, and gravitons obey the Classical Regge Growth (CRG) conjecture.
|
This paper deals with data-driven output synchronization for heterogeneous
leader-follower linear multi-agent systems. Given a multi-agent system that
consists of one autonomous leader and a number of heterogeneous followers with
external disturbances, we provide necessary and sufficient data-based
conditions for output synchronization. We also provide a design method for
obtaining such output synchronizing protocols directly from data. The results
are then extended to the special case that the followers are disturbance-free.
Finally, a simulation example is provided to illustrate our results.
|
In the machine translation field, in both academia and industry, there is a
growing interest in increasingly powerful systems, using corpora of several
hundred million to several billion examples. These systems represent the
state-of-the-art. Here we defend the idea of developing, in parallel, "frugal"
bilingual translation systems trained with relatively small corpora. Based on
the observation of a standard human professional translator, we estimate that
the corpora should be composed of at most a monolingual sub-corpus of 75
million examples for the source language, a second monolingual sub-corpus of 6
million examples for the target language, and an aligned bilingual sub-corpus
of 6 million bi-examples. A less desirable alternative would be an aligned
bilingual corpus of 47.5 million bi-examples.
|
We survey the area of strongly regular graphs satisfying the 4-vertex
condition and find several new families. We describe a switching operation on
collinearity graphs of polar spaces that produces cospectral graphs. The
obtained graphs satisfy the 4-vertex condition if the original graph belongs to
a symplectic polar space.
|
It is well known that two-sided markets are unfair in a number of ways. For
instance, female workers at Uber earn less than their male colleagues per mile
driven. Similar observations have been made for other minority subgroups in
other two-sided markets. Here, we suggest a novel market-clearing mechanism for
two-sided markets, which promotes equalisation of the pay per hour worked
across multiple subgroups, as well as within each subgroup. In the process, we
introduce a novel notion of subgroup fairness (which we call Inter-fairness),
which can be combined with other notions of fairness within each subgroup
(called Intra-fairness), and the utility for the customers (Customer-Care) in
the objective of the market-clearing problem. While the novel non-linear terms
in the objective complicate market clearing by making the problem non-convex,
we show that a certain non-convex augmented Lagrangian relaxation can be
approximated to any precision in time polynomial in the number of market
participants using semi-definite programming. This makes it possible to
implement the market-clearing mechanism efficiently. On the example of
driver-ride assignment in an Uber-like system, we demonstrate the efficacy and
scalability of the approach, and trade-offs between Inter- and Intra-fairness.
|
We report the discovery of a bright, compact ultraviolet source at a
projected separation of 1.1~kpc from the known active galactic nucleus (AGN) in
Mrk~766 based on Astrosat/UVIT observations. We perform radial profile analysis
and derive the UV flux almost free from the nearby contaminating sources. The
new source is about 2.5 and 5.6 times fainter than the AGN in the far and near
UV bands. The two sources appear as a pair of nuclei in Mrk~766. We investigate
the nature of the new source based on the UV flux ratio, X-ray and optical
emission. The new source is highly unlikely to be another accreting
supermassive black hole in Mrk~766 as it lacks X-ray emission. We find that the
UV/Optical flux of the new source measured at four different bands closely
follow the shape of the template spectrum of starburst galaxies. This strongly
suggests that the new source is a compact star-forming region.
|
Tate-Hochschild cohomology of an algebra is a generalization of ordinary
Hochschild cohomology, which is defined on positive and negative degrees and
has a ring structure. Our purpose of this paper is to study the eventual
periodicity of an algebra by using the Tate-Hochschild cohomology ring. First,
we deal with eventually periodic algebras and show that they are not
necessarily Gorenstein algebras. Second, we characterize the eventual
periodicity of a Gorenstein algebra as the existence of an invertible
homogeneous element of the Tate-Hochschild cohomology ring of the algebra,
which is our main result. Finally, we use tensor algebras to establish a way of
constructing eventually periodic Gorenstein algebras.
|
The chiral anomaly is a fundamental quantum mechanical phenomenon which is of
great importance to both particle physics and condensed matter physics alike.
In the context of QED it manifests as the breaking of chiral symmetry in the
presence of electromagnetic fields. It is also known that anomalous chiral
symmetry breaking can occur through interactions alone, as is the case for
interacting one dimensional systems. In this paper we investigate the interplay
between these two modes of anomalous chiral symmetry breaking in the context of
interacting Weyl semimetals. Using Fujikawa's path integral method we show that
the chiral charge continuity equation is modified by the presence of
interactions which can be viewed as including the effect of the electric and
magnetic fields generated by the interacting quantum matter. This can be
understood further using dimensional reduction and a Luttinger liquid
description of the lowest Landau level. These effects manifest themselves in
the non-linear response of the system. In particular we find an interaction
dependent density response due to a change in the magnetic field as well as a
contribution to the non-equilibrium and inhomogeneous anomalous Hall response
while preserving its equilibrium value.
|
We study a class of perturbative scalar quantum field theories where dynamics
is characterized by Lorentz-invariant or Lorentz-breaking non-local operators
of fractional order and the underlying spacetime has a varying spectral
dimension. These theories are either ghost free or power-counting
renormalizable but they cannot be both at the same time. However, some of them
are one-loop unitary and finite, and possibly unitary and finite at all orders.
|
In this note we prove a sharp lower bound on the necessary number of nestings
of nested absolute-value functions of generalized hinging hyperplanes (GHH) to
represent arbitrary CPWL functions. A previous upper bound states that $n+1$
nestings is sufficient for GHH to achieve universal representation power, but
the corresponding lower bound was unknown. We prove that $n$ nestings is
necessary for universal representation power, which provides an almost tight
lower bound. We also show that one-hidden-layer neural networks do not have
universal approximation power over the whole domain. The analysis is based on a
key lemma showing that any finite sum of periodic functions is either
non-integrable or the zero function, which might be of independent interest.
|
Purpose: To develop a knowledge-based voxel-wise dose prediction system using
a convolution neural network for high-dose-rate brachytherapy cervical cancer
treatments with a tandem-and-ovoid (T&O) applicator. Methods: A 3D U-NET was
utilized to output dose predictions using organ-at-risk (OAR), high-risk
clinical target volume (HRCTV), and possible source locations. A sample of
previous T&O treatments comprising 397 cases (273 training, 62 validation, 62
test) with HRCTV and OAR (bladder/rectum/sigmoid) contours was used. Structures and dose
were interpolated to 1x1x2.5 mm^3 dose planes with two input channels (source
positions, voxel identification) and one output channel for dose. We evaluated
dose difference (\Delta D)(xyz)=D_(actual)(x,y,z)-D_(predicted)(x,y,z) and dice
similarity coefficients in all cohorts across the clinically-relevant dose
range (20-130% of prescription), mean and standard deviation. We also examined
discrete DVH metrics used for T&O plan quality assessment: HRCTV D_90%(dose to
hottest 90% volume) and OAR D_2cc, with \Delta
D_x=D_(x,actual)-D_(x,predicted). Pearson correlation coefficient, standard
deviation, and mean quantified model performance on the clinical metrics.
Results: Voxel-wise dose difference accuracy for 20-130% dose range for
training(test) ranges for mean (\Delta D) and standard deviation for all voxels
was [-0.3%+/-2.0% to +1.0%+/-12.0%] ([-0.1%+/-4% to +4.0%+/-26%]). Voxel-wise
dice similarity coefficients for 20-130% dose ranged from [0.96, 0.91]([0.94,
0.87]). DVH metric prediction in the training (test) set were HRCTV(\Delta
D_90)=-0.19+/-0.55 Gy (-0.09+/-0.67 Gy), bladder(\Delta D_2cc)=-0.06+/-0.54 Gy
(-0.17+/-0.67 Gy), rectum(\Delta D)_2cc=-0.03+/-0.36 Gy (-0.04+/-0.46 Gy), and
sigmoid(\Delta D_2cc)=-0.01+/-0.34 Gy (0.00+/-0.44 Gy). Conclusion: 3D
knowledge-based dose predictions for T&O brachytherapy provide accurate
voxel-level and DVH metric estimates.
|
Using data samples collected with the BESIII detector operating at the BEPCII
storage ring at center-of-mass energies from 4.178 to 4.600 GeV, we study the
process $e^+e^-\rightarrow\pi^{0}X(3872)\gamma$ and search for
$Z_c(4020)^{0}\rightarrow X(3872)\gamma$. We find no significant signal and set
upper limits on
$\sigma(e^+e^-\rightarrow\pi^{0}X(3872)\gamma)\cdot\mathcal{B}(X(3872)\rightarrow\pi^{+}\pi^{-}J/\psi)$
and
$\sigma(e^+e^-\rightarrow\pi^{0}Z_c(4020)^{0})\cdot\mathcal{B}(Z_c(4020)^{0}\rightarrow
X(3872)\gamma)\cdot\mathcal{B}(X(3872)\rightarrow\pi^{+}\pi^{-}J/\psi)$ for
each energy point at $90\%$ confidence level; the limits are of the order of a
few tenths of a pb.
|
We study some sum-product problems over matrix rings. Firstly, for $A, B,
C\subseteq M_n(\mathbb{F}_q)$, we have $$ |A+BC|\gtrsim q^{n^2}, $$ whenever
$|A||B||C|\gtrsim q^{3n^2-\frac{n+1}{2}}$. Secondly, if a set $A$ in
$M_n(\mathbb{F}_q)$ satisfies $|A|\geq C(n)q^{n^2-1}$ for some sufficiently
large $C(n)$, then we have $$ \max\{|A+A|, |AA|\}\gtrsim
\min\left\{\frac{|A|^2}{q^{n^2-\frac{n+1}{4}}}, q^{n^2/3}|A|^{2/3}\right\}. $$
These improve the results due to The and Vinh (2020), and generalize the
results due to Mohammadi, Pham, and Wang (2021). We also give a new proof for a
recent result due to The and Vinh (2020). Our method is based on spectral graph
theory and linear algebra.
|
Supernova remnants (SNRs) are observable for about (6-15)x10^4 years before
they fade into the Galactic interstellar medium. With a Galactic supernova rate
of approximately two per century, we can expect of the order of 1200
SNRs in our Galaxy. However, only about 300 of them are known to date, with the
majority having been discovered in Galactic plane radio surveys. Given that
these SNRs represent the brightest tail of the distribution and are mostly
located close to the plane, they are not representative of the complete sample.
Here we report findings from the search for new SNRs in the eROSITA all-sky
survey data which led to the detection of one of the largest SNRs discovered at
wavelengths other than the radio: G249.5+24.5. This source is located at a
relatively high Galactic latitude, where SNRs are not usually expected to be
found. The remnant, 'Hoinga', has a diameter of about 4.4 degrees and shows a
circular morphology with diffuse X-ray emission filling almost the
entire remnant. Spectral analysis of the remnant emission reveals that an APEC
spectrum from collisionally ionised diffuse gas and a plane-parallel shock
plasma model with non-equilibrium ionisation are both able to provide an
adequate description of the data, suggesting a gas temperature of the order of
kT = 0.1 keV and an absorbing column density of N_H=3.6 x 10^20 cm^-2.
Subsequent searches for a radio counterpart of the Hoinga remnant identified
its radio emission in archival data from the Continuum HI Parkes All-Sky Survey
(CHIPASS) and the 408-MHz `Haslam' all-sky survey. The radio spectral index
alpha=-0.69 +- 0.08 obtained from these data firmly confirms the SNR nature
of Hoinga. From its size and X-ray and radio spectral properties we conclude
that Hoinga is a middle-aged Vela-like SNR located at a distance of about twice
that of the Vela SNR, i.e. at ~500 pc.
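The ~1200 figure quoted above is simply the Galactic supernova rate multiplied by the SNR visibility lifetime; a one-line check, using the lower end of the quoted lifetime range:

```python
# Expected number of currently observable Galactic SNRs:
# supernova rate (per year) times mean visible lifetime (years).
rate_per_year = 2 / 100       # ~two supernovae per century
lifetime_years = 6e4          # lower end of the (6-15) x 10^4 yr range
expected_snrs = rate_per_year * lifetime_years
# 0.02 * 60000 = 1200, matching the ~1200 SNRs quoted in the text.
```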
|
Cross-resolution image alignment is a key problem in multiscale gigapixel
photography, which requires estimating a homography matrix from images with a
large resolution gap. Existing deep homography methods concatenate the input
images or features, neglecting the explicit formulation of correspondences
between them, which degrades accuracy in cross-resolution settings.
In this paper, we consider the cross-resolution homography estimation as a
multimodal problem, and propose a local transformer network embedded within a
multiscale structure to explicitly learn correspondences between the multimodal
inputs, namely, input images with different resolutions. The proposed local
transformer adopts a local attention map specifically for each position in the
feature. By combining the local transformer with the multiscale structure, the
network is able to capture both long- and short-range correspondences
efficiently and accurately. Experiments on both the MS-COCO dataset and the real-captured
cross-resolution dataset show that the proposed network outperforms existing
state-of-the-art feature-based and deep-learning-based homography estimation
methods, and is able to accurately align images under a $10\times$ resolution
gap.
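The core idea of an attention map restricted to a local neighbourhood around each query position can be sketched as below. This is a minimal single-head NumPy illustration of local attention between two feature maps (e.g. features of the two resolution branches); the window size, shapes, and fusion scheme are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(q, k, v, window=3):
    """Single-head local attention on 2D feature maps.
    q, k, v: arrays of shape (H, W, C). Each query position attends
    only to a (window x window) neighbourhood in k/v, instead of the
    full H*W map as in global attention."""
    H, W, C = q.shape
    r = window // 2
    out = np.zeros_like(v)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            keys = k[i0:i1, j0:j1].reshape(-1, C)   # (n, C) neighbourhood
            vals = v[i0:i1, j0:j1].reshape(-1, C)
            scores = keys @ q[i, j] / np.sqrt(C)    # (n,) similarity scores
            out[i, j] = softmax(scores) @ vals      # weighted sum of values
    return out

# Hypothetical features from the two branches of a cross-resolution pair.
rng = np.random.default_rng(1)
feat_hr = rng.normal(size=(8, 8, 16))  # high-resolution branch
feat_lr = rng.normal(size=(8, 8, 16))  # upsampled low-resolution branch
fused = local_attention(feat_hr, feat_lr, feat_lr)
```

Restricting attention to a window keeps the cost linear in the number of positions, which is why a multiscale stack of such layers can still cover long-range correspondences efficiently.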
|
In this note, we are concerned with dark modes in a class of non-Markovian
open quantum systems. Based on a microscopic model, a time-convoluted linear
quantum stochastic differential equation and an output equation are derived to
describe the system dynamics. The definition of dark modes is given building on
the input-output structure of the system. Then, we present a necessary and
sufficient condition for the existence of dark modes. Also, the problem of dark
mode synthesis via Hamiltonian engineering is constructively solved and an
example is presented to illustrate our results.
|
The data available in Electronic Health Records (EHRs) provides the
opportunity to transform care, and the best way to provide better care for one
patient is through learning from the data available on all other patients.
Temporal modelling of a patient's medical history, which takes into account the
sequence of past events, can be used to predict future events such as a
diagnosis of a new disorder or complication of a previous or existing disorder.
While most prediction approaches mostly use the structured data in EHRs or a
subset of single-domain predictions and outcomes, we present MedGPT, a novel
transformer-based pipeline that uses Named Entity Recognition and Linking tools
(i.e. MedCAT) to structure and organize the free text portion of EHRs and
anticipate a range of future medical events (initially disorders). Since a
large portion of EHR data is in text form, such an approach benefits from a
granular and detailed view of a patient while introducing modest additional
noise. MedGPT effectively deals with the noise and the added granularity, and
achieves a precision of 0.344, 0.552 and 0.640 (vs LSTM 0.329, 0.538 and 0.633)
when predicting the top 1, 3 and 5 candidate future disorders on real world
hospital data from King's College Hospital, London, UK (~600k
patients). We also show that our model captures medical knowledge by testing it
on an experimental medical multiple choice question answering task, and by
examining the attentional focus of the model using gradient-based saliency
methods.
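The top-1/3/5 precision figures reported above measure how often the true next disorder appears among the model's k highest-ranked candidates. A minimal sketch of that metric; the disorder codes and ranked lists below are hypothetical toy data, not hospital data.

```python
def precision_at_k(predictions, targets, k):
    """Fraction of cases where the true next disorder appears among
    the model's top-k ranked candidates. `predictions` is a list of
    ranked candidate lists; `targets` holds the true next diagnosis."""
    hits = sum(t in ranked[:k] for ranked, t in zip(predictions, targets))
    return hits / len(targets)

# Toy example with hypothetical disorder codes.
preds = [["asthma", "copd", "gord"],
         ["t2dm", "htn", "ckd"],
         ["af", "hf", "ihd"]]
truth = ["asthma", "ckd", "ihd"]
p1 = precision_at_k(preds, truth, 1)   # only the first case hits at k=1
p3 = precision_at_k(preds, truth, 3)   # all three hit within the top 3
```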
|
We present the clustering analysis of photometric luminous red galaxies
(LRGs) at a redshift range of $0.1\leq z \leq 1.05$ using $615,317$ photometric
LRGs selected from the Hyper Suprime-Cam Subaru Strategic Program covering
$\sim124$ deg$^{2}$. Our sample covers a broad range of stellar masses and
photometric redshifts and enables a halo occupation distribution analysis to
study the redshift and stellar-mass dependence of dark halo properties of LRGs.
We find a tight correlation between the characteristic dark halo mass to host
central LRGs, $M_{\min}$, and the number density of LRGs, independently of
redshift, indicating that the formation of LRGs is associated with the global
environment. The $M_{\min}$ of LRGs depends only weakly on the stellar mass
$M_{\star}$ at $M_{\star} \lesssim 10^{10.75}h^{-2} M_{\odot}$ at $0.3<z<1.05$,
in contrast to the case for all photometrically selected galaxies for which
$M_{\min}$ shows significant dependence on $M_{\star}$ even at low $M_{\star}$.
The weak stellar mass dependence is indicative of the dark halo mass being the
key parameter for the formation of LRGs rather than the stellar mass. Our
result suggests that the halo mass of $\sim 10^{12.5 \pm 0.2}h^{-1} M_{\odot}$
is the critical mass for an efficient halo quenching due to the halo
environment. Comparing our result with a hydrodynamical simulation, we find
that low-mass LRGs at $z \sim 1$ will increase their stellar masses by an
order of magnitude from $z=1$ to $0$ through mergers and satellite accretion,
and that a large fraction of massive LRGs at $z<0.9$ consist of LRGs that have
recently migrated from massive green-valley galaxies or evolved from less
massive LRGs through mergers and satellite accretion.
|