The 331 model with right-handed neutrinos is re-assessed to investigate CP violation in the quark sector. After the spontaneous symmetry breaking, the masses and physical fields of the particle content are obtained. The fermion content of the 331 model is enlarged to include exotic quarks with known electric charge and with masses defined at the TeV scale. The existence of these exotic quarks induces extra CP violation via couplings with Standard Model quarks mediated by a charged gauge boson whose mass is fixed at the TeV scale. An extra discrete $\mathbb{Z}_{2}$ symmetry is introduced in the 331 model to obtain a stable scalar field that can be a dark matter candidate. The new scalar field interacts at tree level with the $Z^{\prime}$ gauge boson, which acts as a dark matter portal. The relic density associated with the scalar field is calculated to determine the mass that reproduces the observed dark matter abundance. The allowed region in the parameter space of dark matter mass versus $Z^{\prime}$ mass is obtained, including the bounds from the PANDAX2017, XENON1T (2t.y) and LUX experiments.
|
Recent years have seen spectacular progress in the development of innovative
acceleration methods that are not based on traditional RF accelerating
structures. These novel developments are at the interface of laser, plasma and
accelerator physics and may potentially lead to much more compact and
cost-effective accelerator facilities. While primarily focusing on the ability
to accelerate charged particles with much larger gradients than traditional RF
structures, these new techniques have yet to demonstrate performance comparable to that of RF structures in terms of both beam parameters and
reproducibility. To guide the developments beyond the necessary basic R&D and
concept validations, a common understanding and definition of required
performance and beam parameters for an operational user facility is now needed.
These innovative user facilities can include "table-top" light sources, medical
accelerators, industrial accelerators or even high-energy colliders. This paper will review the most promising developments in new acceleration methods and present the status of ongoing projects.
|
We analyze the features of strongly interacting matter in the presence of
nonzero isospin chemical potential $\mu_I$, within a nonlocal two-flavor
Polyakov-Nambu-Jona-Lasinio (PNJL) model. For a system at finite temperature
$T$, we describe the behavior of various thermodynamic quantities and study the
phase diagram in the $\mu_I - T$ plane. In particular, it is found that for
values of $\mu_I$ larger than the pion mass and temperatures lower than a
critical value of about 170 MeV the system lies in an isospin symmetry broken
phase signaled by the presence of a nonzero pion condensate. Our results for
the phase diagram are found to be in better agreement with those arising from
lattice QCD calculations, as compared to the predictions from other theoretical
approaches like the local PNJL model.
|
In dynamical systems governed by differential equations, a guarantee that
trajectories emanating from a given set of initial conditions do not enter
another given set can be obtained by constructing a barrier function that
satisfies certain inequalities on phase space. Often these inequalities amount
to nonnegativity of polynomials and can be enforced using sum-of-squares
conditions, in which case barrier functions can be constructed computationally
using convex optimization over polynomials. To study how well such computations
can characterize sets of initial conditions in a chaotic system, we use the
undamped double pendulum as an example and ask which stationary initial
positions do not lead to flipping of the pendulum within a chosen time window.
Computations give semialgebraic sets that are close inner approximations to the
fractal set of all such initial positions.
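For reference, here is a hedged sketch of the standard (time-independent) barrier-certificate conditions for a system $\dot{x} = f(x)$ with initial set $X_0$ and unsafe set $X_u$; the computations described above use a finite-time variant with an explicit time window, which is not reproduced here.

```latex
% Minimal sketch of barrier-certificate conditions; the notation (X_0, X_u, B)
% is generic and not taken from the paper's finite-time formulation.
\begin{aligned}
  B(x) &\le 0                   && \text{for all } x \in X_0,\\
  B(x) &> 0                     && \text{for all } x \in X_u,\\
  \nabla B(x)\cdot f(x) &\le 0  && \text{for all } x \text{ in the region of interest,}
\end{aligned}
```

with each inequality enforced via sum-of-squares relaxations when $f$ and $B$ are polynomial.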
|
Most available semantic parsing datasets, comprising pairs of natural utterances and logical forms, were collected solely for the purpose of training and evaluating natural language understanding systems. As a result, they do not contain any of the richness and variety of naturally occurring utterances, where humans ask about data they need or are curious about. In this work, we release SEDE, a dataset with 12,023 pairs of utterances and SQL queries collected from real usage on the Stack Exchange website. We show that these pairs contain a variety of real-world challenges which have rarely been reflected in other semantic parsing datasets, propose an evaluation metric based on comparison of partial query clauses that is more suitable for real-world queries, and conduct experiments with strong baselines, showing a large gap between performance on SEDE and on other common datasets.
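As an illustration only (this is not the paper's metric), clause-level comparison between a predicted and a gold SQL query might be sketched as below; the splitting heuristic and function names are assumptions, and real queries with subqueries would need a proper parser.

```python
import re

CLAUSE_KEYWORDS = ["SELECT", "FROM", "WHERE", "GROUP BY", "HAVING", "ORDER BY", "LIMIT"]

def split_clauses(sql):
    """Split a flat SQL string into {clause keyword: set of tokens}."""
    parts = re.split("(" + "|".join(CLAUSE_KEYWORDS) + ")", sql.upper())
    return {parts[i]: set(parts[i + 1].split())
            for i in range(1, len(parts) - 1, 2)}

def partial_clause_f1(predicted, gold):
    """Average token-level F1 over clauses present in either query."""
    p, g = split_clauses(predicted), split_clauses(gold)
    scores = []
    for key in set(p) | set(g):
        tp = len(p.get(key, set()) & g.get(key, set()))
        prec = tp / max(len(p.get(key, set())), 1)
        rec = tp / max(len(g.get(key, set())), 1)
        scores.append(0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec))
    return sum(scores) / len(scores) if scores else 0.0

print(partial_clause_f1("SELECT id FROM posts WHERE score > 10",
                        "SELECT id FROM posts WHERE score > 5"))
```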
|
General-purpose Markov Chain Monte Carlo sampling algorithms suffer from a
dramatic reduction in efficiency as the system being studied is driven towards
a critical point. Recently, a series of seminal studies suggested that
normalizing flows - a class of deep generative models - can form the basis of a
sampling strategy that does not suffer from this 'critical slowing down'. The
central idea is to use machine learning techniques to build (approximate)
trivializing maps, i.e. field transformations that map the theory of interest
into a 'simpler' theory in which the degrees of freedom decouple, and where the
statistical weight in the path integral is given by a distribution from which
sampling is easy. No separate process is required to generate training data for
such models, and convergence to the desired distribution is guaranteed through
a reweighting procedure such as a Metropolis test. In a proof-of-principle
demonstration on two-dimensional $\phi^4$ theory, Albergo et al.
(arXiv:1904.12072) modelled the trivializing map as a sequence of pointwise
affine transformations. We pick up this thread, with the aim of quantifying how
well we can expect this approach to scale as we increase the number of degrees
of freedom in the system. We make several modifications to the original design
that allow our models to learn more efficient representations of trivializing maps
using much smaller neural networks, which leads to a large reduction in the
computational cost required to train models of equivalent quality. After making
these changes, we find that sampling efficiency is almost entirely dictated by
how extensively a model has been trained, while being unresponsive to further
alterations that increase model flexibility. However, as we move towards the
continuum limit, the training costs scale extremely quickly; this issue urgently requires further work to fully understand and mitigate.
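As a hedged illustration of the exactness guarantee mentioned above, the sketch below replaces the trained flow with a Gaussian stand-in and applies an independence Metropolis accept/reject step on a one-dimensional toy $\phi^4$ action; the lattice size, couplings, and function names are assumptions for illustration, not the two-dimensional setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_log_weight(phi, m2=1.0, lam=1.0):
    """Negative phi^4 lattice action (1-D toy, periodic), up to a constant."""
    kinetic = 0.5 * np.sum((np.roll(phi, -1) - phi) ** 2)
    potential = np.sum(0.5 * m2 * phi**2 + lam * phi**4)
    return -(kinetic + potential)

def propose(n_sites):
    """Stand-in for a trained flow: sample a Gaussian 'trivial' theory and
    return the configuration together with its model log-density."""
    phi = rng.normal(size=n_sites)
    logq = -0.5 * np.sum(phi**2) - 0.5 * n_sites * np.log(2 * np.pi)
    return phi, logq

def metropolis_chain(n_steps=5000, n_sites=8):
    """Independence Metropolis step that corrects the approximate model
    towards the exact target distribution (the 'reweighting procedure')."""
    phi, logq = propose(n_sites)
    logw = target_log_weight(phi) - logq
    accepted = 0
    for _ in range(n_steps):
        phi_new, logq_new = propose(n_sites)
        logw_new = target_log_weight(phi_new) - logq_new
        if np.log(rng.uniform()) < logw_new - logw:
            phi, logw = phi_new, logw_new
            accepted += 1
    return accepted / n_steps

print("acceptance rate:", metropolis_chain())
```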
|
Nowadays, the bulk of Internet traffic uses the TCP protocol for reliable transmission. However, standard TCP performs poorly in High Speed Networks (HSN), and hence the core gigabit links are usually underutilized. This problem is rooted in the conservative nature of TCP, especially in its Additive Increase Multiplicative Decrease (AIMD) phase. In other words, since TCP cannot precisely determine the congestion status of the network, it follows a conservative strategy to keep the network from being overwhelmed. We believe that precise congestion estimation in the network can solve this problem by avoiding unnecessary conservatism. To this end, this paper proposes an algorithm which considers packet loss and delay information jointly and employs a probabilistic approach to accurately estimate the congestion status of the network. To examine the performance of the proposed scheme, extensive simulations have been performed in the NS-2 environment. Simulation results reveal that the proposed algorithm outperforms existing algorithms in terms of bottleneck utilization, stability, throughput and fairness.
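A toy sketch (not the proposed algorithm) of how delay and loss signals might be fused into a single congestion estimate that then modulates the AIMD response; all weights, thresholds and function names are hypothetical.

```python
def congestion_probability(rtt, base_rtt, max_rtt, loss_event,
                           w_delay=0.7, w_loss=0.3):
    """Toy fusion of a delay signal (queueing delay relative to its observed
    range) and a loss indicator into a congestion estimate in [0, 1]."""
    queueing = max(rtt - base_rtt, 0.0)
    delay_signal = queueing / max(max_rtt - base_rtt, 1e-9)
    return min(1.0, w_delay * delay_signal + w_loss * (1.0 if loss_event else 0.0))

def next_cwnd(cwnd, p_congestion, alpha=1.0, beta=0.5):
    """Scale the AIMD response by the estimated congestion probability."""
    if p_congestion > 0.8:                         # confident congestion: back off
        return max(cwnd * beta, 1.0)
    return cwnd + alpha * (1.0 - p_congestion)     # otherwise grow more aggressively

cwnd = 10.0
for rtt, loss in [(0.05, False), (0.08, False), (0.12, True)]:
    p = congestion_probability(rtt, base_rtt=0.05, max_rtt=0.15, loss_event=loss)
    cwnd = next_cwnd(cwnd, p)
    print(round(p, 2), round(cwnd, 2))
```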
|
Learning sentence embeddings often requires a large amount of labeled data.
However, for most tasks and domains, labeled data is seldom available and
creating it is expensive. In this work, we present a new state-of-the-art
unsupervised method based on pre-trained Transformers and Sequential Denoising
Auto-Encoder (TSDAE) which outperforms previous approaches by up to 6.4 points.
It can achieve up to 93.1% of the performance of in-domain supervised
approaches. Further, we show that TSDAE is a strong domain adaptation and
pre-training method for sentence embeddings, significantly outperforming other
approaches like Masked Language Model.
A crucial shortcoming of previous studies is the narrow evaluation: Most work
mainly evaluates on the single task of Semantic Textual Similarity (STS), which
does not require any domain knowledge. It is unclear if these proposed methods
generalize to other domains and tasks. We fill this gap and evaluate TSDAE and
other recent approaches on four different datasets from heterogeneous domains.
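As a small illustration of the denoising setup, the corruption step commonly used in TSDAE-style training is token deletion; a minimal sketch, with an assumed deletion ratio, could look like this (the reconstruction decoder and the pre-trained encoder are omitted).

```python
import random

def delete_noise(tokens, deletion_ratio=0.6, rng=random.Random(0)):
    """Corrupt a sentence by deleting a fraction of its tokens. The encoder
    embeds the noisy sentence and a decoder is trained to reconstruct the
    original; the deletion_ratio here is an assumption for illustration."""
    kept = [tok for tok in tokens if rng.random() > deletion_ratio]
    return kept if kept else [rng.choice(tokens)]  # never return an empty input

print(delete_noise("learning sentence embeddings without labels".split()))
```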
|
We have produced persistent currents of ultracold fermionic atoms trapped in
a toroidal geometry with lifetimes greater than 10 seconds in the
strongly-interacting limit. These currents remain stable well into the BCS
limit at sufficiently low temperature. We drive a circulating BCS superfluid
into the normal phase and back by changing the interaction strength and find
that the probability for quantized superflow to reappear is remarkably
insensitive to the time spent in the normal phase and the minimum interaction
strength. After ruling out the Kibble-Zurek mechanism for our experimental
conditions, we argue that the reappearance of superflow is due to long-lived
normal currents and the Hess-Fairbank effect.
|
We prove that torsion in the abelianizations of open normal subgroups in
finitely presented pro-$p$ groups can grow arbitrarily fast. By way of contrast, in $\mathbb{Z}_p$-analytic groups the torsion growth is at most polynomial.
|
Employing the stellar evolution code MESA (Modules for Experiments in Stellar Astrophysics), we calculate the yields of heavy elements returned to the interstellar medium (ISM) by massive stars via stellar winds and core-collapse supernova (CCSN) ejecta. In our models, the initial masses ($M_{\rm ini}$) of the massive stars range from 13 to 80 $M_\odot$, their initial rotational velocities (V) are 0, 300 and 500 km s$^{-1}$, and their metallicities are [Fe/H] = -3, -2, -1, and 0. The yields of heavy elements coming from stellar winds are mainly affected by stellar rotation, which changes the chemical abundances of the stellar surface via chemically homogeneous evolution and enhances the mass-loss rate. We estimate that stellar winds can produce heavy-element yields of about $10^{-2}\,M_\odot$ (for low-metallicity models) up to several $M_\odot$ (for low-metallicity, rapidly rotating models). The yields of heavy elements produced by CCSN ejecta also depend on the remnant mass of the massive star, which is mainly determined by the mass of the CO core. In our models, the yields of heavy elements produced by CCSN ejecta can reach several $M_\odot$. Compared with stellar winds, CCSN ejecta contribute more to the heavy elements in the ISM. We also compare the $^{56}$Ni yields calculated in this work with observational estimates. Our models only explain the $^{56}$Ni masses produced by faint SNe or normal SNe with progenitor masses lower than about 25 $M_\odot$, and greatly underestimate the $^{56}$Ni masses produced by stars with masses higher than about 30 $M_\odot$.
|
While remarkable advances have been made in Computed Tomography (CT), capturing CT images with non-standardized protocols causes low reproducibility of radiomic features, forming a barrier to large-scale CT image analysis. RadiomicGAN is developed to effectively mitigate the discrepancy caused by using non-standard reconstruction kernels. RadiomicGAN consists of hybrid neural blocks including both pre-trained and trainable layers adopted to learn radiomic feature distributions efficiently. A novel training approach, called Dynamic Window-based Training, has been developed to smoothly transform the pre-trained model to the medical imaging domain. Model performance evaluated using 1401 radiomic features shows that RadiomicGAN clearly outperforms the state-of-the-art image standardization models.
|
Cross domain recommender systems have been increasingly valuable for helping
consumers identify useful items in different applications. However, existing cross-domain models typically require a large number of overlapping users, which can be difficult to obtain in some applications. In addition, they do not consider the duality structure of cross-domain recommendation tasks and thus fail to take into account bidirectional latent relations between users and items, preventing optimal recommendation performance. To address these issues, in this
paper we propose a novel cross-domain recommendation model based on dual
learning that transfers information between two related domains in an iterative
manner until the learning process stabilizes. We develop a novel latent
orthogonal mapping to extract user preferences over multiple domains while
preserving relations between users across different latent spaces. Furthermore,
we combine the dual learning method with the metric learning approach, which
allows us to significantly reduce the required common user overlap across the
two domains and leads to even better cross-domain recommendation performance.
We test the proposed model on two large-scale industrial datasets and six
domain pairs, demonstrating that it consistently and significantly outperforms
all the state-of-the-art baselines. We also show that the proposed model works well with very few overlapping users, obtaining satisfactory recommendation performance comparable to that of state-of-the-art baselines that use many overlapping users.
|
Answer selection is the task of choosing the positive answers from a pool of candidate answers for a given question. In this paper, we propose a novel
strategy for answer selection, called hierarchical ranking. We introduce three
levels of ranking: point-level ranking, pair-level ranking, and list-level
ranking. They formulate their optimization objectives by employing supervisory
information from different perspectives to achieve the same goal of ranking
candidate answers. Therefore, the three levels of ranking are related and they
can promote each other. We take the well-performing compare-aggregate model as
the backbone and explore three schemes to implement the idea of applying the
hierarchical rankings jointly: the scheme under the Multi-Task Learning (MTL)
strategy, the Ranking Integration (RI) scheme, and the Progressive Ranking
Integration (PRI) scheme. Experimental results on two public datasets, WikiQA
and TREC-QA, demonstrate that the proposed hierarchical ranking is effective.
Our method achieves state-of-the-art (non-BERT) performance on both TREC-QA and
WikiQA.
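A minimal PyTorch sketch of the three ranking levels, assuming per-candidate scores and binary labels for a single question; the margin, the listwise loss form, and the equal weighting are illustrative choices, not the paper's exact objectives.

```python
import torch
import torch.nn.functional as F

# Hypothetical scores for one question: s[i] is the model score of candidate i,
# y[i] = 1 for positive answers, 0 for negatives.
s = torch.tensor([2.1, 0.3, -1.2, 0.8])
y = torch.tensor([1.0, 0.0, 0.0, 1.0])

# Point-level ranking: treat each candidate independently (binary cross-entropy).
point_loss = F.binary_cross_entropy_with_logits(s, y)

# Pair-level ranking: every positive should outscore every negative by a margin.
pos, neg = s[y == 1], s[y == 0]
pair_loss = F.relu(1.0 - (pos.unsqueeze(1) - neg.unsqueeze(0))).mean()

# List-level ranking: match the softmax over the whole candidate list to the
# normalized label distribution (ListNet-style cross-entropy).
list_loss = -(y / y.sum() * F.log_softmax(s, dim=0)).sum()

total = point_loss + pair_loss + list_loss  # e.g. a joint multi-task objective
print(point_loss.item(), pair_loss.item(), list_loss.item())
```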
|
Video Question Answering (Video QA) is a powerful testbed to develop new AI
capabilities. This task necessitates learning to reason about objects,
relations, and events across visual and linguistic domains in space-time.
High-level reasoning demands lifting from associative visual pattern
recognition to symbol-like manipulation over objects, their behavior and
interactions. Toward reaching this goal, we propose an object-oriented reasoning approach in which video is abstracted as a dynamic stream of interacting
objects. At each stage of the video event flow, these objects interact with
each other, and their interactions are reasoned about with respect to the query
and under the overall context of a video. This mechanism is materialized into a
family of general-purpose neural units and their multi-level architecture
called Hierarchical Object-oriented Spatio-Temporal Reasoning (HOSTR) networks.
This neural model maintains the objects' consistent lifelines in the form of a
hierarchically nested spatio-temporal graph. Within this graph, the dynamic
interactive object-oriented representations are built up along the video
sequence, hierarchically abstracted in a bottom-up manner, and converge toward
the key information for the correct answer. The method is evaluated on multiple
major Video QA datasets and establishes new state-of-the-art results on these tasks.
Analysis into the model's behavior indicates that object-oriented reasoning is
a reliable, interpretable and efficient approach to Video QA.
|
Finite frieze patterns with entries in
$\mathbb{Z}[\lambda_{p_1},\ldots,\lambda_{p_s}]$ where $\{p_1,\ldots,p_s\}
\subseteq \mathbb{Z}_{\geq 3}$ and $\lambda_p = 2 \cos(\pi/p)$ were shown to
have a connection to dissected polygons by Holm and Jorgensen. We extend their
work by studying the connection between infinite frieze patterns with such
entries and dissections of annuli and once-punctured discs. We give an
algorithm to determine whether a frieze pattern with entries in
$\mathbb{Z}[\lambda_{p_1},\ldots,\lambda_{p_s}]$, finite or infinite, comes
from a dissected surface. We introduce quotient dissections as a realization
for some frieze patterns unrealizable by an ordinary dissection. We also
introduce two combinatorial interpretations for entries of frieze patterns from
dissected surfaces. These interpretations are a generalization of matchings
introduced by Broline, Crowe, and Isaacs for finite frieze patterns over
$\mathbb{Z}$.
|
Position $n$ points uniformly at random in the unit square $S$, and consider
the Voronoi tessellation of $S$ corresponding to the set $\eta$ of points. Toss
a fair coin for each cell in the tessellation to determine whether to colour
the cell red or blue. Let $H_S$ denote the event that there exists a red
horizontal crossing of $S$ in the resulting colouring. In 1999, Benjamini,
Kalai and Schramm conjectured that knowing the tessellation, but not the
colouring, asymptotically gives no information as to whether the event $H_S$
will occur or not. More precisely, since $H_S$ occurs with probability $1/2$,
by symmetry, they conjectured that the conditional probabilities
$\mathbb{P}(H_S|\eta)$ converge in probability to 1/2, as $n\to\infty$. This
conjecture was settled in 2016 by Ahlberg, Griffiths, Morris and Tassion. In
this paper we derive a stronger bound on the rate at which
$\mathbb{P}(H_S|\eta)$ approaches its mean. As a consequence we strengthen the
convergence in probability to almost sure convergence.
|
Nonlocal games are extensions of Bell inequalities, aimed at demonstrating
quantum advantage. These games are well suited for noisy quantum computers
because they only require the preparation of a shallow circuit, followed by the
measurement of non-commuting observables. Here, we consider the minimal implementation of the nonlocal game proposed in Science 362, 308 (2018). We test this game by preparing a 6-qubit cluster state using cloud-based quantum computers from IBM, IonQ, and Honeywell. Our approach includes several levels of optimization, such as circuit identities and error mitigation, and allows us to cross the classical threshold and demonstrate quantum advantage on one of the quantum computers. We conclude by introducing a different inequality that allows us to observe quantum advantage on less accurate quantum computers, at the expense of probing a larger number of circuits.
|
We prove bounds for the number of solutions to
$$a_1 + \dots + a_k = a_1' + \dots + a_k'$$ over $N$-element sets of reals,
which are sufficiently convex or near-convex. A near-convex set will be the
image of a set with small additive doubling under a convex function with
sufficiently many strictly monotone derivatives. We show, roughly, that every
time the number of terms in the equation is doubled, an additional saving of
$1$ in the exponent of the trivial bound $N^{2k-1}$ is made, starting from the
trivial case $k=1$. In the context of near-convex sets we also provide explicit
dependencies on the additive doubling parameters.
Higher convexity is necessary for such bounds to hold, as evinced by sets of
perfect powers of consecutive integers. We exploit these stronger assumptions
using an idea of Garaev, rather than the ubiquitous Szemer\'edi-Trotter
theorem, which has not been adapted in earlier results to embrace higher
convexity.
As an application we prove small improvements for the best known bounds for
sumsets of convex sets under additional convexity assumptions.
|
Estimating the unknown causal dependencies among graph-connected time series
plays an important role in many applications, such as sensor network analysis,
signal processing over cyber-physical systems, and finance engineering.
Inference of such causal dependencies, often known as topology identification, is not well studied for non-linear, non-stationary systems, and most existing methods are batch-based and not capable of handling streaming sensor signals. In this paper, we propose an online kernel-based algorithm for
topology estimation of non-linear vector autoregressive time series by solving
a sparse online optimization framework using the composite objective mirror
descent method. Experiments conducted on real and synthetic data sets show that
the proposed algorithm outperforms the state-of-the-art methods for topology
estimation.
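A hedged sketch of a composite-objective mirror descent update for a sparse, streaming estimation problem; it uses a linear (non-kernelized) toy model with a Euclidean mirror map and hypothetical step size and regularization, rather than the paper's vector-autoregressive kernel formulation.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal map of tau * ||w||_1 (the 'composite' part of the objective)."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def comid_step(w, x, y, eta=0.05, lam=0.01):
    """One composite-objective mirror descent step for a sparse linear
    predictor y ~ w @ x: gradient step on the loss, then the l1 prox."""
    grad = (w @ x - y) * x
    return soft_threshold(w - eta * grad, eta * lam)

rng = np.random.default_rng(1)
true_w = np.array([0.8, 0.0, -0.5, 0.0])   # sparse "topology" to recover
w = np.zeros(4)
for _ in range(2000):                       # streaming samples
    x = rng.normal(size=4)
    y = true_w @ x + 0.05 * rng.normal()
    w = comid_step(w, x, y)
print(np.round(w, 2))
```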
|
A mixed graph is obtained by orienting some edges of a simple graph. The
positive inertia index of a mixed graph is defined as the number of positive
eigenvalues of its Hermitian adjacency matrix, including multiplicities. This
matrix was introduced by Liu and Li, and independently by Guo and Mohar, in the
study of graph energy. Recently, Yuan et al. characterized the mixed graphs
with exactly one positive eigenvalue. In this paper, we study the positive
inertia indices of mixed graphs and characterize the mixed graphs with cut
vertices having positive inertia index 2.
|
In this paper, we propose and analyse a system that can automatically detect,
localise and classify polyps from colonoscopy videos. The detection of frames
with polyps is formulated as a few-shot anomaly classification problem, where
the training set is highly imbalanced with the large majority of frames
consisting of normal images and a small minority comprising frames with polyps.
Colonoscopy videos may contain blurry images and frames displaying feces and
water jet sprays to clean the colon -- such frames can mistakenly be detected
as anomalies, so we have implemented a classifier to reject these two types of
frames before polyp detection takes place. Next, given a frame containing a
polyp, our method localises (with a bounding box around the polyp) and
classifies it into five different classes. Furthermore, we study a method to
improve the reliability and interpretability of the classification result using
uncertainty estimation and classification calibration. Classification
uncertainty and calibration not only help improve classification accuracy by rejecting low-confidence and high-uncertainty results, but can also be used by doctors to decide how to act on the classification of a polyp. All the proposed
detection, localisation and classification methods are tested using large data
sets and compared with relevant baseline approaches.
|
Lexical Semantics is concerned with how words encode mental representations of the world, i.e., concepts. We call this type of concepts classification concepts. In this paper, we focus on Visual Semantics, namely on how humans build concepts representing what they perceive visually. We call this second type of concepts substance concepts. As shown in the paper, these two types of concepts are different and, furthermore, the mapping between them is many-to-many. In this paper we provide a theory and an algorithm for how to build substance concepts which are in a one-to-one correspondence with classification concepts, thus paving the way to the seamless integration between natural language descriptions and visual perception. This work builds upon three main intuitions: (i) substance concepts are modeled as visual objects, namely sequences of similar frames, as perceived in multiple encounters; (ii) substance concepts are organized into a visual subsumption hierarchy based on the notions of Genus and Differentia; (iii) human feedback is exploited not to name objects, but, rather, to align the hierarchy of substance concepts with that of classification concepts. The learning algorithm is implemented for the base case of a hierarchy of depth two. The experiments, though preliminary, show that the algorithm manages to acquire the notions of Genus and Differentia with reasonable accuracy, despite seeing a small number of examples and receiving supervision on only a fraction of them.
|
In this article we argue that in quantum mechanics, and in opposition to
classical physics, it is impossible to say that an isolated quantum system
"owns" a physical property. Some properties of the system, its mass for
example, belong to it in a sense close to that of classical physics; but most
often a property must be attributed to the system within a context. We give
simple motivations for adopting this point of view, and show that it clarifies
many issues in quantum physics.
|
YouTube has revolutionized the way people discover and consume video.
Although YouTube facilitates easy access to hundreds of well-produced and
trustworthy videos, abhorrent, misinformative, and mistargeted content is also
common. The platform is plagued by various types of problematic content: 1)
disturbing videos targeting young children; 2) hateful and misogynistic
content; and 3) pseudoscientific misinformation. While YouTube's recommendation
algorithm plays a vital role in increasing user engagement and YouTube's
monetization, its role in unwittingly promoting problematic content is not
entirely understood. In this thesis, we shed some light on the degree of
problematic content on YouTube and the role of the recommendation algorithm in
the dissemination of such content. Following a data-driven quantitative
approach, we analyze thousands of videos on YouTube, to shed light on: 1) the
risks of YouTube media consumption by young children; 2) the role of the
recommendation algorithm in the dissemination of misogynistic content, by
focusing on the Involuntary Celibates (Incels) community; and 3) user exposure
to pseudoscientific content on various parts of the platform and how this
exposure changes based on the user's watch history. Our analysis reveals that
young children are likely to encounter disturbing content when they randomly
browse the platform. By analyzing the Incel community on YouTube, we find that
Incel activity is increasing over time and that platforms may play an active
role in steering users towards extreme content. Finally, when studying
pseudoscientific misinformation, we find that YouTube suggests more
pseudoscientific content regarding traditional pseudoscientific topics (e.g.,
flat earth) than for emerging ones (like COVID-19) and that these
recommendations are more common on the search results page than on a user's
homepage or the video recommendations section.
|
Neural Arithmetic Logic Modules have become a growing area of interest, though they remain a niche field. These units are small neural networks which aim to achieve systematic generalisation in learning arithmetic operations such as {+, -, *, /} while also being interpretable in their weights. This paper is the first to discuss the current state of progress in this field, explaining key works, starting with the Neural Arithmetic Logic Unit (NALU). Focusing on the shortcomings of the NALU, we provide an in-depth analysis to reason about the design choices of recent units. A cross-comparison between units is made on experimental setups and findings, where we highlight inconsistencies in a fundamental experiment that prevent direct comparison across papers. We finish by providing a novel discussion of existing applications of the NALU and research directions requiring further exploration.
|
We propose a novel planning technique for satisfying tasks specified in
temporal logic in partially revealed environments. We define high-level actions
derived from the environment and the given task itself, and estimate how each
action contributes to progress towards completing the task. As the map is
revealed, we estimate the cost and probability of success of each action from
images and an encoding of that action using a trained neural network. These
estimates guide search for the minimum-expected-cost plan within our model. Our
learned model is structured to generalize across environments and task
specifications without requiring retraining. We demonstrate an improvement in
total cost in both simulated and real-world experiments compared to a
heuristic-driven baseline.
|
We investigate the electronic properties of Bloch electrons on a square lattice with vacancies in a uniform magnetic field. We show that a single vacancy site introduced into the system creates a defect energy level in every one of the innumerable fractal energy gaps of the Hofstadter butterfly. The wavefunctions of different defect levels all have different localization lengths, depending on their fractal generation, and they can be described by a single universal function after an appropriate fractal scaling. We also show
that each defect state has its own characteristic orbital magnetic moment,
which is exactly correlated to the gradient of the energy level in the
Hofstadter diagram. Probing the spatial nature of the defect-localized states
provides a powerful way to elucidate the fractal nature of the Hofstadter
butterfly.
|
We define and study 'non-abelian' Poincar\'e series for the group
$G=\mathrm{SU} (2,1)$, i.e. Poincar\'e series attached to a Stone-Von Neumann
representation of the unipotent subgroup $N$ of $G$. Such Poincar\'e series
have in general exponential growth. In this study we use results on abelian and
non-abelian Fourier term modules obtained in arXiv:1912.01334. We compute the
inner product of truncations of these series and those associated to unitary
characters of $N$ with square integrable automorphic forms, in connection with
their Fourier expansions. As a consequence, we obtain general completeness
results that, in particular, generalize those valid for the classical
holomorphic (and antiholomorphic) Poincar\'e series for
$\mathrm{SL}(2,\mathbb{R})$.
|
In this paper, we consider static parameter estimation for a class of
continuous-time state-space models. Our goal is to obtain an unbiased estimate of the gradient of the log-likelihood (score function), that is, an estimate that remains unbiased even when the stochastic processes involved in the model must be discretized in time. To achieve this goal, we apply a doubly randomized scheme,
that involves a novel coupled conditional particle filter (CCPF) on the second
level of randomization. Our novel estimate helps facilitate the application of
gradient-based estimation algorithms, such as stochastic-gradient Langevin
descent. We illustrate our methodology in the context of stochastic gradient
descent (SGD) in several numerical examples and compare with the Rhee & Glynn
estimator.
|
The sensitivity of light and matter-wave interferometers to rotations is
based on the Sagnac effect and increases with the area enclosed by the
interferometer. In the case of light, the latter can be enlarged by forming
multiple fibre loops, whereas the equivalent for matter-wave interferometers
remains an experimental challenge. We present a concept for a multi-loop atom
interferometer with a scalable area formed by light pulses. Our method will
offer sensitivities as high as $2\cdot10^{-11}$ rad/s at 1 s in combination
with the respective long-term stability as required for Earth rotation
monitoring.
|
Modern classification models tend to struggle when the amount of annotated
data is scarce. To overcome this issue, several neural few-shot classification
models have emerged, yielding significant progress over time, both in Computer
Vision and Natural Language Processing. In the latter, such models used to rely
on fixed word embeddings before the advent of transformers. Additionally, some
models used in Computer Vision are yet to be tested in NLP applications. In
this paper, we compare all these models, first adapting those made in the field
of image processing to NLP, and second providing them access to transformers.
We then test these models equipped with the same transformer-based encoder on
the intent detection task, known for having a large number of classes. Our
results reveal that while methods perform almost equally on the ARSC dataset,
this is not the case for the Intent Detection task, where the most recent and
supposedly best competitors perform worse than older and simpler ones (while
all are given access to transformers). We also show that a simple baseline is
surprisingly strong. All the newly developed models, as well as the evaluation
framework, are made publicly available.
|
Recent ground-based deep observations of the Universe have discovered large
populations of massive quiescent galaxies at z~3-5. With the launch of the
James Webb Space Telescope (JWST), the on-board NIRSpec instrument will provide
continuous 0.6-5.3 $\mu$m spectroscopic coverage of these galaxies. Here we
show that NIRSpec/CLEAR spectroscopy is ideal to probe the completeness of
photometrically selected massive quiescent galaxies such as the ones presented
by Schreiber et al. (2018b). Using a subset of the Schreiber et al. (2018b)
sample with deep Keck/MOSFIRE spectroscopy presented by Esdaile et al. (2020),
we perform a suite of mock JWST/NIRSpec observations to determine optimal
observing strategies to efficiently recover the star-formation histories
(SFHs), element abundances, and kinematics of these massive quiescent galaxies.
We find that at z~3, medium resolution G235M/FL170LP NIRSpec observations could
recover element abundances at an accuracy of ~15%, which is comparable to local
globular clusters. Mimicking ZFOURGE COSMOS photometry, we perform mock
spectrophotometric fitting with Prospector to show that the overall shape of
the SFHs of our mock galaxies can be recovered well, albeit with a dependency
on the number of non-parametric SFH bins. We show that deep high-resolution
G235H/FL170LP integral field spectroscopy with a S/N~7 per spaxel is required
to constrain the rotational properties of our sample at >2$\sigma$ confidence.
Thus, through optimal grism/filter choices, JWST/NIRSpec slit and integral
field spectroscopy observations would provide tight constraints to galaxy
evolution in the early Universe.
|
Since the physics of the dark sector components of the Universe is not yet well understood, phenomenological studies of non-minimal interactions in the dark sector could pave the way to theoretical and experimental progress in this direction. Therefore, in this work, we intend to explore some
features and consequences of a phenomenological interaction in the dark sector.
We use the Planck 2018, BAO, JLA, KiDS and HST data to investigate two
extensions of the base $\Lambda$CDM model, viz., (i) we allow the interaction
among vacuum energy and dark matter, namely the I$\Lambda$CDM model, wherein
the interaction strength is proportional to the vacuum energy density and
expansion rate of the Universe, and (ii) the I$\Lambda$CDM scenario with free
effective neutrino mass and number, namely the $\nu$I$\Lambda$CDM model. We
also present comparative analyses of the interaction models with the companion
models, namely, $\Lambda$CDM, $\nu\Lambda$CDM, $w$CDM and $\nu w$CDM. In both
the interaction models, we find non-zero coupling in the dark sector up to 99\%
CL with energy transfer from dark matter to vacuum energy, and observe a
phantom-like behavior of the effective dark energy without actual ``phantom crossing''. The well-known tensions on the cosmological parameters $H_0$ and
$\sigma_8$, prevailing within the $\Lambda$CDM cosmology, are relaxed
significantly in these models wherein the $\nu$I$\Lambda$CDM model shows
consistency with the standard effective neutrino mass and number. Both the
interaction models find a better fit to the combined data compared to the
companion models under consideration.
|
Transformer has been widely adopted in Neural Machine Translation (NMT)
because of its large capacity and parallel training of sequence generation.
However, the deployment of Transformer is challenging because different
scenarios require models of different complexities and scales. Naively training
multiple Transformers is redundant in terms of both computation and memory. In
this paper, we propose novel Scalable Transformers, which naturally contain sub-Transformers of different scales with shared parameters. Each
sub-Transformer can be easily obtained by cropping the parameters of the
largest Transformer. A three-stage training scheme is proposed to tackle the
difficulty of training the Scalable Transformers, which introduces additional
supervision from word-level and sequence-level self-distillation. Extensive experiments were conducted on WMT En-De and En-Fr to validate the proposed Scalable Transformers.
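An illustrative sketch of the parameter-cropping idea on a single linear layer; the paper applies this to entire Transformer blocks, and the dimensions and helper name here are assumptions.

```python
import torch
import torch.nn as nn

def crop_linear(layer: nn.Linear, d_in: int, d_out: int) -> nn.Linear:
    """Build a smaller linear layer whose weights are a slice of a larger
    layer, so the sub-network shares parameters with the full network."""
    sub = nn.Linear(d_in, d_out)
    with torch.no_grad():
        sub.weight.copy_(layer.weight[:d_out, :d_in])
        sub.bias.copy_(layer.bias[:d_out])
    return sub

big = nn.Linear(512, 512)
small = crop_linear(big, 256, 256)     # sub-layer sharing the top-left block
print(small(torch.randn(2, 256)).shape)
```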
|
It is shown that the Ablowitz-Kaup-Newell-Segur (AKNS) integrable hierarchy
can be obtained as the dynamical equations of three-dimensional General
Relativity with a negative cosmological constant. This geometrization of the
AKNS system is possible through the construction of novel boundary conditions
for the gravitational field. These are invariant under an asymptotic symmetry
group characterized by an infinite set of AKNS commuting conserved charges.
Gravitational configurations are studied by means of $SL(2,\mathbb{R})$
conjugacy classes. Conical singularities and black hole solutions are included
in the boundary conditions.
|
We show that if $(X,d)$ is a metric space which admits a consistent convex
geodesic bicombing, then we can construct a conical bicombing on $CB(X)$, the
hyperspace of nonempty, closed, bounded, and convex subsets of $X$ (with the
Hausdorff metric). If $X$ is a normed space, this same method produces a
consistent convex bicombing on $CB(X)$. We follow this by examining a geodesic
bicombing on the nonempty compact subsets of $X$, assuming $X$ is a proper
metric space.
|
In order to address recent shortcomings in garbage classification, including a low level of intelligence, low accuracy and high equipment cost, this paper presents a series of methods for identification and judgment in intelligent garbage classification: a material identification based on thermal principles and non-destructive laser irradiation; another material identification based on optical diffraction and phase analysis; a profile identification that uses a scene thermal image after PCA and histogram correction; and another profile identification that uses computer vision with newly developed data sets and algorithms. Combining AHP and the Bayesian formula, the paper introduces a coupling algorithm that makes a comprehensive judgment of the garbage class based on the material and profile identifications. This paper also proposes a method for real-time space measurement of garbage cans, based on the characteristics of air as a fluid, and analyses the functions of air cleaning and particle disposal. Instead of relying solely on garbage image recognition, this paper provides a comprehensive method to judge the garbage class from material and profile identifications, which greatly enhances the accuracy and intelligence of garbage classification.
|
We study the entanglement dynamics generated by quantum quenches in the
quantum cellular automaton Rule $54$. We consider the evolution from a recently
introduced class of solvable initial states. States in this class relax
(locally) to a one-parameter family of Gibbs states and the thermalisation
dynamics of local observables can be characterised exactly by means of an
evolution in space. Here we show that the latter approach also gives access to
the entanglement dynamics and derive exact formulas describing the asymptotic
linear growth of all R\'enyi entropies in the thermodynamic limit and their
eventual saturation for finite subsystems. While in the case of von Neumann
entropy we recover exactly the predictions of the quasiparticle picture, we
find no physically meaningful quasiparticle description for other R\'enyi
entropies. Our results apply to both homogeneous and inhomogeneous quenches.
|
The Virtual Brain (TVB) is now available as an open-source cloud ecosystem on
EBRAINS, a shared digital research platform for brain science. It offers
services for constructing, simulating and analysing brain network models (BNMs)
including the TVB network simulator; magnetic resonance imaging (MRI)
processing pipelines to extract structural and functional connectomes;
multiscale co-simulation of spiking and large-scale networks; a domain specific
language for automatic high-performance code generation from user-specified
models; simulation-ready BNMs of patients and healthy volunteers; Bayesian
inference of epilepsy spread; data and code for mouse brain simulation; and
extensive educational material. TVB cloud services facilitate reproducible
online collaboration and discovery of data assets, models, and software
embedded in scalable and secure workflows, a precondition for research on large
cohort data sets, better generalizability and clinical translation.
|
We analyze the Bianchi I cosmology in the presence of a massless scalar field
and describe its dynamics via a semiclassical and quantum polymer approach. We
study the morphology of the Big Bounce by adopting three different sets of
configurational variables: the Ashtekar connections, a set of anisotropic
volume-like coordinates and the Universe volume plus two anisotropy coordinates
(the latter two sets of variables would coincide in the case of an isotropic
Universe). In the semiclassical analysis we demonstrate that the value of the
critical matter energy density depends on the Cauchy problem for the dynamics
when adopting the Ashtekar connections or the anisotropic volume-like
coordinates. On the contrary, when the Universe volume is considered as a
configurational coordinate, we are able to derive a polymer-modified Friedmann
equation for the Bianchi I model, from which the expression of the critical
energy density can be derived. This analysis shows that the Big Bounce has
universal features only when the Universe volume is defined on the polymer
lattice. Then, a cosmological constant is included in the Ashtekar connections'
formulation and some interesting results are mentioned making a comparison
between the synchronous dynamics and that one when the scalar field is taken as
a relational time. From a pure quantum point of view, we investigate the
Bianchi I dynamics in terms of the Ashtekar connections. We apply the ADM
reduction of the variational principle and then we quantize the system. We
study the resulting Schr\"{o}dinger dynamics, stressing that the behavior of
the wave packet peak over time singles out common features with the
semiclassical trajectories, confirming the non-universal character of the
emerging Big Bounce also on a quantum level.
|
We use the stochastic series expansion quantum Monte Carlo method, together
with the eigenstate-to-Hamiltonian mapping approach, to map the localized
ground states of the disordered two-dimensional Heisenberg model to excited states of a target Hamiltonian. The localized nature of the ground state is established by studying the spin stiffness, local entanglement entropy, and local magnetization. This construction allows us to define many-body localized states in an energy-resolved phase diagram, thereby providing concrete numerical
evidence for the existence of a many-body localized phase in two dimensions.
|
There is a large ongoing research effort towards obtaining a quantum
advantage in the solution of combinatorial optimization problems on near-term
quantum devices. A particularly promising platform for testing and developing quantum optimization algorithms is provided by arrays of trapped neutral atoms laser-coupled to highly excited Rydberg states. However, encoding combinatorial
optimization problems in atomic arrays is challenging due to the limited
inter-qubit connectivity given by their native finite-range interactions. Here
we propose and analyze a fast, high fidelity four-body Rydberg parity gate,
enabling a direct and straightforward implementation of the
Lechner-Hauke-Zoller (LHZ) scheme and its recent generalization, the parity
architecture, a scalable architecture for encoding arbitrarily connected
interaction graphs. Our gate relies on once-optimized adiabatic laser pulses and is fully programmable by adjusting two hold times during operation. We
numerically demonstrate an implementation of the quantum approximate
optimization algorithm (QAOA) for a small scale test problem. Our approach
allows for efficient execution of variational optimization steps with a
constant number of system manipulations, independent of the system size, thus
paving the way for experimental investigations of QAOA beyond the reach of
numerical simulations.
|
In contrast to generic objects, aerial targets are often non-axis-aligned, with arbitrary orientations and cluttered surroundings. Unlike mainstream approaches that regress bounding-box orientations, this paper proposes an effective adaptive points learning approach to aerial object
detection by taking advantage of the adaptive points representation, which is
able to capture the geometric information of the arbitrary-oriented instances.
To this end, three oriented conversion functions are presented to facilitate
the classification and localization with accurate orientation. Moreover, we
propose an effective quality assessment and sample assignment scheme for
adaptive points learning toward choosing the representative oriented reppoints
samples during training, which is able to capture the non-axis aligned features
from adjacent objects or background noise. A spatial constraint is introduced to penalize outlier points for robust adaptive learning. Experimental results on four challenging aerial datasets, including DOTA, HRSC2016, UCAS-AOD and DIOR-R, demonstrate the efficacy of our proposed approach. The source code is available at: https://github.com/LiWentomng/OrientedRepPoints.
|
Color symmetry implies that the colors of geometrical objects are assigned
according to their symmetry properties. It is defined by associating the
elements of the symmetry group with a color permutation. I use this concept for
generative art and apply symmetry-consistent color distortions to images of
paintings by Johannes Vermeer. The color permutations are realized as mappings
of the HSV color space onto itself.
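A minimal sketch of a colour mapping realized as a permutation of the HSV hue circle; the global hue shift and the file path are illustrative assumptions and do not reproduce the element-wise group association described above.

```python
import numpy as np
from PIL import Image

def rotate_hue(path, shift=85):
    """Apply a cyclic shift of the hue channel (a permutation of the 256 hue
    values in PIL's 8-bit HSV representation) and return an RGB image."""
    hsv = np.array(Image.open(path).convert("HSV"))
    hsv[..., 0] = (hsv[..., 0].astype(int) + shift) % 256   # permute hue values
    return Image.fromarray(hsv, mode="HSV").convert("RGB")

# Hypothetical usage (file name is a placeholder):
# rotate_hue("vermeer_painting.jpg").save("recoloured.png")
```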
|
Survival outcomes are common in comparative effectiveness studies and require
unique handling because they are usually incompletely observed due to
right-censoring. A ``once for all'' approach for causal inference with survival
outcomes constructs pseudo-observations and allows standard methods such as
propensity score weighting to proceed as if the outcomes are completely
observed. For a general class of model-free causal estimands with survival
outcomes on user-specified target populations, we develop corresponding
propensity score weighting estimators based on the pseudo-observations and
establish their asymptotic properties. In particular, utilizing the functional
delta-method and the von Mises expansion, we derive a new closed-form variance
of the weighting estimator that takes into account the uncertainty due to both
pseudo-observation calculation and propensity score estimation. This allows
valid and computationally efficient inference without resampling. We also prove
the optimal efficiency property of the overlap weights within the class of
balancing weights for survival outcomes. The proposed methods are applicable to
both binary and multiple treatments. Extensive simulations are conducted to
explore the operating characteristics of the proposed method versus other
commonly used alternatives. We apply the proposed method to compare the causal
effects of three popular treatment approaches for prostate cancer patients.
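To fix ideas, here is a sketch of the general form of such a weighting estimator for a binary treatment $Z_i \in \{0,1\}$, pseudo-observations $\hat{\theta}_i$, propensity score $e(X_i)$ and tilting function $h$; the notation is introduced here for illustration and is not taken from the paper.

```latex
\hat{\tau}_h \;=\;
\frac{\sum_{i} w_1(X_i)\, Z_i\, \hat{\theta}_i}{\sum_{i} w_1(X_i)\, Z_i}
\;-\;
\frac{\sum_{i} w_0(X_i)\, (1-Z_i)\, \hat{\theta}_i}{\sum_{i} w_0(X_i)\, (1-Z_i)},
\qquad
w_1(x) = \frac{h(x)}{e(x)}, \quad w_0(x) = \frac{h(x)}{1-e(x)},
```

where the choice $h(x) = e(x)\{1-e(x)\}$ corresponds to the overlap weights.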
|
During the planning phase of industrial robot workplaces, hazard analyses are
required so that potential hazards for human workers can be identified and
appropriate safety measures can be implemented. Existing hazard analysis
methods use human reasoning, checklists and/or abstract system models, which
limit the level of detail. We propose a new approach that frames hazard
analysis as a search problem in a dynamic simulation environment. Our goal is
to identify workplace hazards by searching for simulation sequences that result
in hazardous situations. We solve this search problem by placing virtual humans
into workplace simulation models. These virtual humans act in an adversarial
manner: They learn to provoke unsafe situations, and thereby uncover workplace
hazards. Although this approach cannot replace a thorough hazard analysis, it
can help uncover hazards that otherwise may have been overlooked, especially in
early development stages. Thus, it helps to prevent costly re-designs at later
development stages. For validation, we performed hazard analyses in six
different example scenarios that reflect typical industrial robot workplaces.
|
In the past, the axion-nucleon coupling has been calculated in the framework
of SU(2) heavy baryon chiral perturbation theory up to third order in the
chiral power counting. Here, we extend these earlier studies to the case of
heavy baryon chiral perturbation theory with SU(3) flavor symmetry and derive
the axion coupling to the full SU(3) baryon octet, showing that the axion also
significantly couples to hyperons. As studies on dense nuclear matter suggest
the possible existence of hyperons in stellar objects such as neutron stars,
our results should have phenomenological implications related to the so-called
axion window.
|
Hardware performance counters (HPCs) that measure low-level architectural and
microarchitectural events provide dynamic contextual information about the
state of the system. However, HPC measurements are error-prone due to non-determinism (e.g., undercounting due to event multiplexing, or OS
interrupt-handling behaviors). In this paper, we present BayesPerf, a system
for quantifying uncertainty in HPC measurements by using a domain-driven
Bayesian model that captures microarchitectural relationships between HPCs to
jointly infer their values as probability distributions. We provide the design
and implementation of an accelerator that allows for low-latency and low-power
inference of the BayesPerf model for x86 and ppc64 CPUs. BayesPerf reduces the
average error in HPC measurements from 40.1% to 7.6% when events are being
multiplexed. The value of BayesPerf in real-time decision-making is illustrated
with a simple example of scheduling of PCIe transfers.
|
COVID-19 has disrupted normal life and has enforced a substantial change in
the policies, priorities and activities of individuals, organisations and
governments. These changes are proving to be a catalyst for technology and
innovation. In this paper, we discuss the pandemic's potential impact on the
adoption of the Internet of Things (IoT) in various broad sectors namely
healthcare, smart homes, smart buildings, smart cities, transportation and
industrial IoT. Our perspective and forecast of this impact on IoT adoption is
based on a thorough research literature review, a careful examination of
reports from leading consulting firms and interactions with several industry
experts. For each of these sectors, we also provide the details of notable IoT
initiatives taken in the wake of COVID-19. We also highlight the challenges that
need to be addressed and important research directions that will facilitate
accelerated IoT adoption.
|
In this (partly expository) paper we give a short overview about the close
relationship between the sequence of Catalan numbers and Hankel determinants
from the point of view of orthogonal polynomials and show that an analogous
situation exists for more general sequences.
|
Ultra-hot Jupiters are defined as giant planets with equilibrium temperatures
larger than 2000 K. Most of them are found orbiting bright A-F type stars,
making them extremely suitable objects to study their atmospheres using
high-resolution spectroscopy. Recent studies show a variety of atoms and
molecules detected in the atmospheres of this type of planets. Here we present
our analysis of the newly discovered ultra-hot Jupiter TOI-1431b/MASCARA-5b,
using two transit observations with the HARPS-N spectrograph and one transit
observation with the EXPRES spectrograph. Analysis of the Rossiter-McLaughlin
effect shows that the planet is in a polar orbit, with a projected obliquity $
\lambda = -155^{+20}_{-10}$ degrees. Combining the nights and applying both
cross-correlation methods and transmission spectroscopy, we find no evidence of CaI, FeI, FeII, MgI, NaI, VI, TiO, VO or H$\alpha$ in the atmosphere of the
planet. Our most likely explanation for the lack of atmospheric features is the
large surface gravity of the planet.
|
We present an exact computation of effective Hamiltonians for an elementary
model obtained from the Yukawa theory by going to the limit of bare fermions
being infinitely heavy and bare bosons being at rest with respect to the
fermions that emit or absorb them. The coupling constant can be arbitrarily
large. The Hamiltonians are computed by solving the differential equation of
the renormalization group procedure for effective particles (RGPEP). Physical
fermions, defined in the model as eigenstates of the effective Hamiltonians,
are obtained in the form of an effective fermion dressed with a coherent state
of effective bosons. The model computation illustrates the method that can be
used in perturbative computations of effective Hamiltonians for realistic
theories. It shows the mechanism by which the perturbative expansion and
Tamm-Dancoff approximation increase in accuracy along the RGPEP evolution.
|
Blended organic thin films have been studied during the last decades due to
their applicability in organic solar cells. Although their optical and
electronic features have been examined intensively, there is still a lack of
detailed knowledge about their growth processes and resulting morphologies,
which play a key role for the efficiency of optoelectronic devices such as
organic solar cells. In this study, pure and blended thin films of copper
phthalocyanine (CuPc) and the Buckminster fullerene (C60) were grown by vacuum
deposition onto a native silicon oxide substrate at two different substrate
temperatures, 310 K and 400 K. The evolution of roughness was followed by
in-situ real-time X-ray reflectivity. Crystal orientation, island densities and
morphology were examined after the growth by X-ray diffraction experiments and
microscopy techniques. The formation of a smooth wetting layer followed by
rapid roughening was found in pure CuPc thin films, whereas C60 shows a fast
formation of distinct islands at a very early stage of growth. The growth of
needle-like CuPc crystals losing their alignment with the substrate was
identified in co-deposited thin films. Furthermore, the data demonstrates that
structural features become larger and more pronounced and that the island
density decreases by a factor of four when going from 310 K to 400 K. Finally,
the key parameters roughness and island density were well reproduced on a
smaller scale by kinetic Monte-Carlo simulations of a generic, binary lattice
model with simple nearest-neighbor interaction energies.
|
Improving the clock stability is of fundamental importance for the
development of quantum-enhanced metrology. One of the main limitations arises
from the randomly-fluctuating local oscillator (LO) frequency, which introduces
"phase slips" for long interrogation times and hence failure of the
frequency-feedback loop. Here we propose a strategy to improve the stability of
atomic clocks by interrogating two out-of-phase states sharing the same LO.
While standard Ramsey interrogation can only determine phases unambiguously in
the interval $[-\pi/2,\pi/2]$, the joint interrogation allows for an extension
to $[-\pi,\pi]$, resulting in a relaxed restriction of the Ramsey time and
improvement of absolute clock stability. Theoretical predictions are supported
by ab-initio numerical simulation for white and correlated LO noise. While our
basic protocol uses uncorrelated atoms, we have further extended it to include spin squeezing, further improving the scaling of clock stability with the number of atoms. Our protocol can be readily tested in current state-of-the-art
experiments.
|
Rational Krylov subspace projection methods are among the most successful
methods in model order reduction (MOR), mainly because certain derivatives of
the approximate and original transfer functions coincide; this is the
well-known moment-matching result. However, little is known about the behavior
at points far from the interpolation points. In this paper, we obtain an
explicit expression for the error that involves the shifts and the Ritz values.
The advantage of our result over the known moment-matching theory is, to some
extent, similar to that of the Lagrange-type remainder over the Peano-type
remainder in Taylor's theorem. Apart from the proof, we also provide three
interpretations of the error formula. One interpretation shows that, in the
Gauss-Christoffel quadrature sense, the error is the Gauss quadrature remainder
when the Gauss quadrature formula is applied to the resolvent function. Using
the error formula, we propose greedy algorithms for interpolatory $H_{\infty}$
norm MOR.
|
Let $s(n):= \sum_{d\mid n,~d<n} d$ denote the sum of the proper divisors of
$n$. It is natural to conjecture that for each integer $k\ge 2$, the
equivalence \[ \text{$n$ is $k$th powerfree} \Longleftrightarrow \text{$s(n)$
is $k$th powerfree} \] holds almost always (meaning, on a set of asymptotic
density $1$). We prove this for $k\ge 4$.
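To make the statement concrete, here is a small self-contained Python sketch (ours, not part of the paper) that computes $s(n)$ and empirically checks how often the equivalence holds on an initial range; the function names and the cutoff are illustrative choices.

def s(n):
    # Sum of proper divisors of n, via divisor pairs up to sqrt(n).
    if n <= 1:
        return 0
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def is_kth_powerfree(n, k):
    # n is k-th powerfree if no prime appears with exponent >= k.
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            if e >= k:
                return False
        p += 1
    return True

def agreement_fraction(k, N):
    # Fraction of 2 <= n <= N for which n and s(n) agree on k-th powerfreeness.
    hits = sum(is_kth_powerfree(n, k) == is_kth_powerfree(s(n), k)
               for n in range(2, N + 1))
    return hits / (N - 1)

print(agreement_fraction(4, 10_000))

One would expect the printed fraction to approach 1 as the range grows, in line with the density-one statement.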
|
In recent years, Graph Neural Networks have received enormous attention from
academia for their potential in modeling network traits such as macrostructure
and individual node attributes. However, prior mainstream works mainly focus on
homogeneous networks and lack the capacity to characterize heterogeneous
network properties. Besides, most previous literature cannot model influence at
the microscopic level, making it infeasible to model the joint relation between
heterogeneity and mutual interaction within multiple relation types. In this
paper, we propose an Influence Self-attention network to address these
difficulties. To model heterogeneity and mutual interaction, we redesign the
attention mechanism with an influence factor at the single-relation level,
which learns an importance coefficient from adjacent neighbors under the same
meta-path-based patterns. To incorporate heterogeneous meta-paths in a unified
dimension, we develop a self-attention-based framework for meta-path relation
fusion according to the learned meta-path coefficients. Our experimental
results demonstrate that the framework not only achieves better results than
current state-of-the-art baselines, but also shows promise in depicting
heterogeneous interactive relations in complicated network structures.
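As a rough, self-contained illustration of the two attention levels sketched above — neighbor-level influence attention within one meta-path, followed by self-attention fusion across meta-paths — the following Python/NumPy toy (our own sketch; all dimensions, weight matrices and function names are placeholder assumptions, not the authors' architecture) shows the data flow:

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relation_level_attention(h_node, h_neighbors, w):
    # h_node: (d,), h_neighbors: (k, d), w: (2d,) scoring vector.
    # Learns importance coefficients of neighbors under one meta-path.
    scores = np.array([w @ np.concatenate([h_node, h_j]) for h_j in h_neighbors])
    alpha = softmax(scores)            # influence coefficients
    return alpha @ h_neighbors         # aggregated embedding, shape (d,)

def metapath_fusion(z, Wq, Wk, Wv):
    # z: (m, d), one embedding per meta-path; Wq, Wk, Wv: (d, d).
    q, k, v = z @ Wq, z @ Wk, z @ Wv
    beta = softmax(q @ k.T / np.sqrt(z.shape[1]))  # meta-path coefficients
    return (beta @ v).mean(axis=0)                 # fused node representation

rng = np.random.default_rng(0)
d, k_nb, m = 8, 5, 3
h = rng.normal(size=d)
per_path = np.stack([
    relation_level_attention(h, rng.normal(size=(k_nb, d)), rng.normal(size=2 * d))
    for _ in range(m)])
print(metapath_fusion(per_path, *(rng.normal(size=(d, d)) for _ in range(3))).shape)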
|
Stencil computation is one of the most important kernels in various
scientific and engineering applications. A variety of work has focused on
vectorization techniques, aiming at exploiting the in-core data parallelism.
However, they either incur data alignment conflicts or hurt the data locality
when integrated with tiling. In this paper, a novel transpose layout is devised
to preserve the data locality for tiling in the data space and reduce the data
reorganization overhead for vectorization simultaneously. We then propose an
approach of temporal computation folding designed to further reduce the
redundancy of arithmetic calculations by exploiting the register reuse,
alleviating the increased register pressure, and deriving a generalization with a
linear regression model. Experimental results on the AVX-2 and AVX-512 CPUs
show that our approach obtains a competitive performance.
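For readers unfamiliar with the kernel in question, the snippet below shows a generic vectorized 1D three-point stencil in Python/NumPy. It only illustrates the baseline computation pattern that such optimizations target; it does not implement the transpose layout or the temporal folding proposed in the paper.

import numpy as np

def jacobi_1d(u, steps, a=0.25, b=0.5, c=0.25):
    # Repeatedly apply a 3-point stencil; array slicing vectorizes the inner loop.
    u = u.copy()
    for _ in range(steps):
        nxt = u.copy()
        nxt[1:-1] = a * u[:-2] + b * u[1:-1] + c * u[2:]
        u = nxt
    return u

x = np.linspace(0.0, 1.0, 1025)
u0 = np.sin(2 * np.pi * x)
print(jacobi_1d(u0, steps=100)[:5])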
|
In the last few years, deep learning classifiers have shown promising results
in image-based medical diagnosis. However, interpreting the outputs of these
models remains a challenge. In cancer diagnosis, interpretability can be
achieved by localizing the region of the input image responsible for the
output, i.e. the location of a lesion. Alternatively, segmentation or detection
models can be trained with pixel-wise annotations indicating the locations of
malignant lesions. Unfortunately, acquiring such labels is labor-intensive and
requires medical expertise. To overcome this difficulty, weakly-supervised
localization can be utilized. These methods allow neural network classifiers to
output saliency maps highlighting the regions of the input most relevant to the
classification task (e.g. malignant lesions in mammograms) using only
image-level labels (e.g. whether the patient has cancer or not) during
training. When applied to high-resolution images, existing methods produce
low-resolution saliency maps. This is problematic in applications in which
suspicious lesions are small in relation to the image size. In this work, we
introduce a novel neural network architecture to perform weakly-supervised
segmentation of high-resolution images. The proposed model selects regions of
interest via coarse-level localization, and then performs fine-grained
segmentation of those regions. We apply this model to breast cancer diagnosis
with screening mammography, and validate it on a large clinically-realistic
dataset. Measured by Dice similarity score, our approach outperforms existing
methods by a large margin in terms of localization performance of benign and
malignant lesions, with relative improvements of 39.6% and 20.0%,
respectively. Code and the weights of some of the models are available at
https://github.com/nyukat/GLAM
|
High-dimensional distributed semantic spaces have proven useful and effective
for aggregating and processing visual, auditory, and lexical information for
many tasks related to human-generated data. Human language makes use of a large
and varying number of features, lexical and constructional items as well as
contextual and discourse-specific data of various types, which all interact to
represent various aspects of communicative information. Some of these features
are mostly local and useful for the organisation of e.g. argument structure of
a predication; others are persistent over the course of a discourse and
necessary for achieving a reasonable level of understanding of the content.
This paper describes a model for high-dimensional representation for utterance
and text level data including features such as constructions or contextual
data, based on a mathematically principled and behaviourally plausible approach
to representing linguistic information. The implementation of the
representation is a straightforward extension of Random Indexing models
previously used for lexical linguistic items. The paper shows how the
implemented model is able to represent a broad range of linguistic features in
a common integral framework of fixed dimensionality, which is computationally
habitable, and which is suitable as a bridge between symbolic representations
such as dependency analysis and continuous representations used e.g. in
classifiers or further machine-learning approaches. This is achieved with
operations on vectors that constitute a powerful computational algebra,
accompanied with an associative memory for the vectors. The paper provides a
technical overview of the framework and a worked through implemented example of
how it can be applied to various types of linguistic features.
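For readers unfamiliar with Random Indexing, the following minimal Python sketch (our own illustration, not the paper's implementation; dimensionality, sparsity and window size are arbitrary choices) shows the core mechanism: each item receives a fixed sparse ternary index vector, and a context vector is accumulated by summing the index vectors of co-occurring items.

import numpy as np

def index_vector(dim=2000, nnz=20, rng=None):
    # Sparse ternary random index vector: mostly zeros, a few +1/-1 entries.
    rng = rng or np.random.default_rng()
    v = np.zeros(dim)
    idx = rng.choice(dim, size=nnz, replace=False)
    v[idx] = rng.choice([-1.0, 1.0], size=nnz)
    return v

def build_context_vectors(sentences, dim=2000, window=2, seed=0):
    rng = np.random.default_rng(seed)
    vocab = sorted({w for s in sentences for w in s})
    index = {w: index_vector(dim, rng=rng) for w in vocab}
    context = {w: np.zeros(dim) for w in vocab}
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    context[w] += index[s[j]]   # accumulate co-occurrence evidence
    return context

sents = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
ctx = build_context_vectors(sents)
cos = ctx["cat"] @ ctx["dog"] / (np.linalg.norm(ctx["cat"]) * np.linalg.norm(ctx["dog"]) + 1e-9)
print(round(float(cos), 3))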
|
The TRAPPIST-1 system is a priority target for terrestrial exoplanet
characterization. TRAPPIST-1e, residing in the habitable zone, will be observed
during the JWST GTO Program. Here, we assess the prospects of differentiating
between prebiotic and modern Earth scenarios for TRAPPIST-1e via transmission
spectroscopy. Using updated TRAPPIST-1 stellar models from the Mega-MUSCLES
survey, we compute self-consistent model atmospheres for a 1 bar prebiotic
Earth scenario and two modern Earth scenarios (1 and 0.5 bar eroded
atmosphere). Our modern and prebiotic high-resolution transmission spectra (0.4
- 20 $\mu$m at $R \sim$ 100,000) are made available online. We conduct a
Bayesian atmospheric retrieval analysis to ascertain the molecular
detectability, abundance measurements, and temperature constraints achievable
for both scenarios with JWST. We demonstrate that JWST can differentiate
between our prebiotic and modern Earth scenarios within 20 NIRSpec Prism
transits via CH$_4$ abundance measurements. However, JWST will struggle to
detect O$_3$ for our modern Earth scenario to $> 2\,\sigma$ confidence within
the nominal mission lifetime ($\sim$ 80 transits over 5 years). The agnostic
combination of N$_2$O and/or O$_3$ offers better prospects, with a predicted
detection significance of $2.7\,\sigma$ with 100 Prism transits. We show that
combining MIRI LRS transits with Prism data provides little improvement to
atmospheric constraints compared to observing additional Prism transits. Though
biosignatures will be challenging to detect for TRAPPIST-1e with JWST, the
abundances for several important molecules - CO$_2$, CH$_4$, and H$_2$O - can
be measured to a precision of $\lesssim$ 0.7 dex (a factor of 5) within a 20
Prism transit JWST program.
|
We present results of a multi-line study of the filamentary infrared dark
cloud G351.78-0.54 in the 1.3 and 0.8 mm wavelength bands. The lines of the
three isotopologues of carbon monoxide CO, N$_2$H$^+$, CH$_3$CCH and HNCO were
observed. The aim was to study the general structure of the filamentary cloud,
its fragmentation and physical parameters with the emphasis on properties of
dense clumps in this cloud. Several dense clumps are identified from the
N$_2$H$^+$ (3-2) data, their masses and virial parameters are determined using
the C$^{18}$O (2-1) line. Temperatures of some clumps are estimated from the
CH$_3$CCH and HNCO data. Almost all clumps appear to be gravitationally
unstable. The density estimates obtained from the C$^{18}$O (3-2)/(2-1) and
N$_2$H$^+$ (3-2)/(1-0) intensity ratios are in the range $n \sim (0.3-3)\times
10^5$ cm$^{-3}$. The HNCO emission is detected exclusively toward the first
clump which contains the luminous IR source IRAS 17233-3606, and indicates an
even higher density. It is observed in the outflow, too. The velocity shift of
the higher excitation HNCO lines may indicate a movement of the hot core
relative to the surrounding medium. In some clumps there is a velocity shift $\sim
1$ km s$^{-1}$ between N$_2$H$^+$ (3-2) and CO isotopologues. The large widths
of the N$_2$H$^+$ (3-2) line in the clumps indicate an increase of the velocity
dispersion in their dense interiors, which may be related to the star formation
process. The N$_2$H$^+$ abundance drops toward the luminous IR source.
|
Dimension four provides a peculiarly idiosyncratic setting for the interplay
between scalar curvature and differential topology. Here we will explain some
of the peculiarities of the four-dimensional realm via a careful discussion of
the Yamabe invariant (or sigma constant). In the process, we will also prove
some new results, and point out open problems that continue to represent key
challenges in the subject.
|
Rate splitting (RS) has emerged as a valuable technology for wireless
communications systems due to its capability to deal with uncertainties in the
channel state information at the transmitter (CSIT). RS with linear and
non-linear precoders, such as the Tomlinson-Harashima (THP) precoder, has been
explored in the downlink (DL) of multiuser multi-antenna systems. In this work,
we propose a multi-branch (MB) scheme for a RS-based multiple-antenna system,
which creates patterns to order the transmitted symbols and enhances the
overall sum rate performance compared to existing approaches. Closed-form
expressions are derived for the sum rate through statistical analysis.
Simulation results show that the proposed MB-THP for RS outperforms
conventional THP and MB-THP schemes.
|
Recently, the Siamese-based method has stood out from multitudinous tracking
methods owing to its state-of-the-art (SOTA) performance. Nevertheless, due to
various special challenges in UAV tracking, \textit{e.g.}, severe occlusion and
fast motion, most existing Siamese-based trackers hardly combine superior
performance with high efficiency. To address this concern, in this paper, a novel
attentional Siamese tracker (SiamAPN++) is proposed for real-time UAV tracking.
By virtue of the attention mechanism, we construct a special attentional
aggregation network (AAN), consisting of a self-AAN and a cross-AAN, to raise
the representation ability of the features. The former AAN aggregates and
models the self-semantic interdependencies of the single feature map via
spatial and channel dimensions. The latter aims to aggregate the
cross-interdependencies of two different semantic features including the
location information of anchors. In addition, an anchor proposal network based
on dual features is proposed to raise the robustness of tracking objects at
various scales. Experiments on two well-known authoritative benchmarks are
conducted, where SiamAPN++ outperforms its baseline SiamAPN and other SOTA
trackers. Besides, real-world tests onboard a typical embedded platform
demonstrate that SiamAPN++ achieves promising tracking results with real-time
speed.
|
In this paper, we introduce a sequential variational mode decomposition method
to separate non-stationary mixed signals successively. The method is inspired
by the variational approach and can precisely recover the original components
one by one from the raw mixture without prior knowledge of, or assumptions on,
the number of components. In this way, the number of modes can also be
determined during the separation procedure. This property brings great
convenience in real applications and distinguishes the method from the current
VMD method. Furthermore, we apply a principal elongation to the mixture signal
before the decomposition operation; with this approach, the end effect can be
reduced to a low level compared with the VMD method. To obtain higher accuracy,
a refinement process is introduced after the gross extraction. Combining these
techniques, the final decomposition results show a significant improvement over
the VMD and EMD methods.
|
A full Bayesian approach to the estimation of Vaccine Efficacy is presented,
which is an improvement over the currently used exact method conditional on the
total number of cases. As an example, we reconsider the statistical sections of
the BioNTech/Pfizer protocol, which in 2020 has led to the first approved
anti-Covid-19 vaccine.
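To make the setting concrete, here is a minimal Python sketch of a Bayesian vaccine-efficacy calculation conditional on the split of cases between arms (our illustration, not the protocol's exact model): under 1:1 randomization a case falls in the vaccine arm with probability $\pi=(1-\mathrm{VE})/(2-\mathrm{VE})$, so a Beta posterior on $\pi$ maps directly onto a posterior on VE. The uniform prior and the 8-of-170 case split are assumptions used for illustration (the split matches the widely reported primary analysis).

import numpy as np

# Beta(1, 1) prior on pi = P(a case is in the vaccine arm); with 1:1
# randomization, VE = 1 - pi/(1 - pi) = (1 - 2*pi)/(1 - pi).
cases_vaccine, cases_total = 8, 170          # illustrative case split
a = 1 + cases_vaccine
b = 1 + (cases_total - cases_vaccine)

rng = np.random.default_rng(1)
pi = rng.beta(a, b, size=200_000)            # posterior draws for pi
ve = (1 - 2 * pi) / (1 - pi)                 # transform to vaccine efficacy
print(f"median VE = {np.median(ve):.3f}, "
      f"95% interval = {np.quantile(ve, [0.025, 0.975]).round(3)}, "
      f"P(VE > 0.3) = {np.mean(ve > 0.3):.3f}")   # 0.3 as a common success threshold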
|
Generalized (non-Markovian) diffusion equations with different memory kernels
and subordination schemes based on random time change in the Brownian diffusion
process are popular mathematical tools for description of a variety of
non-Fickian diffusion processes in physics, biology and earth sciences. Some of
such processes (notably, the fluid limits of continuous time random walks)
allow for either kind of description, but other ones do not. In the present
work we discuss the conditions under which a generalized diffusion equation
does correspond to a subordination scheme, and the conditions under which a
subordination scheme does possess the corresponding generalized diffusion
equation. Moreover, we discuss examples of random processes for which only one,
or both kinds of description are applicable.
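For orientation, the two kinds of description being compared typically take the following standard forms (written in our notation as a reminder; the paper's precise kernels, conditions and examples are not reproduced here):
$$ \partial_t P(x,t)=\int_0^t K(t-t')\,\partial_x^2 P(x,t')\,dt' \qquad \text{(generalized diffusion equation with memory kernel $K$)}, $$
$$ P(x,t)=\int_0^\infty T(u,t)\,G(x,u)\,du,\qquad G(x,u)=\frac{1}{\sqrt{4\pi D u}}\,e^{-x^2/(4Du)} \qquad \text{(subordination of Brownian motion)}, $$
where $T(u,t)$ is the probability density of the operational time $u$ at physical time $t$.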
|
We consider a planar SIS-type Josephson junction between diffusive
superconductors (S) through an insulating tunnel interface (I). We construct a
fully self-consistent perturbation theory with respect to the interface
conductance. As a result, we find a correction to the first Josephson harmonic
and calculate the second Josephson harmonic. At arbitrary temperatures, we
correct previous results for the nonsinusoidal current-phase relation in
Josephson tunnel junctions, which were obtained with the help of a conjectured
form of the solution. Our perturbation theory also describes the difference between
the phases of the order parameter and of the anomalous Green functions.
|
We show that the uniform radius of spatial analyticity $\sigma(t)$ of
the solution at time $t$ for the fifth-order KdV-BBM equation cannot decay faster
than $1/t$ for large $t>0$, given initial data that is analytic with fixed
radius $\sigma_0$. This significantly improves a recent result by Carvajal and
Panthee, where they established an exponential decay of $\sigma(t)$ for large
$t$.
|
Knowledge about the locations of keypoints of an object in an image can
assist in fine-grained classification and identification tasks, particularly
for the case of objects that exhibit large variations in poses that greatly
influence their visual appearance, such as wild animals. However, supervised
training of a keypoint detection network requires annotating a large image
dataset for each animal species, which is a labor-intensive task. To reduce the
need for labeled data, we propose to simultaneously learn keypoint heatmaps and
pose invariant keypoint representations in a semi-supervised manner using a
small set of labeled images along with a larger set of unlabeled images.
Keypoint representations are learnt with a semantic keypoint consistency
constraint that forces the keypoint detection network to learn similar features
for the same keypoint across the dataset. Pose invariance is achieved by making
keypoint representations for the image and its augmented copies closer together
in feature space. Our semi-supervised approach significantly outperforms
previous methods on several benchmarks for human and animal body landmark
localization.
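A minimal PyTorch sketch of the two consistency terms described above (our own illustration; the loss names, prototype construction and weighting are placeholder assumptions, not the authors' code):

import torch
import torch.nn.functional as F

def pose_invariance_loss(feat_img, feat_aug):
    # Pull keypoint features of an image and its augmented copy together.
    return (1 - F.cosine_similarity(feat_img, feat_aug, dim=-1)).mean()

def semantic_consistency_loss(feats, keypoint_ids, prototypes):
    # Keep features of the same keypoint close to a shared per-keypoint prototype.
    return F.mse_loss(feats, prototypes[keypoint_ids])

# Toy shapes: 4 detections, 17 keypoint classes, 64-dim features.
feats_img = torch.randn(4, 64)
feats_aug = feats_img + 0.05 * torch.randn(4, 64)
kp_ids = torch.tensor([0, 3, 3, 16])
prototypes = torch.randn(17, 64)

loss = pose_invariance_loss(feats_img, feats_aug) \
     + 0.5 * semantic_consistency_loss(feats_img, kp_ids, prototypes)
print(float(loss))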
|
The paper is concerned with the time-periodic (T-periodic) problem of the
fractal Burgers equation with a T-periodic force on the real line. Based on the
Galerkin approximations and Fourier series (transform) methods, we first prove
the existence of T-periodic solution to a linearized version. Then, the
existence and uniqueness of T-periodic solution to the nonlinear equation are
established by the contraction mapping argument. Furthermore, we show that the
unique T-periodic solution is asymptotically stable. This analysis, which is
carried out in the energy space $H^{1}(0,T;H^{\frac{\alpha}{2}}(\mathbb{R}))\cap
L^{2}(0,T;\dot{H}^{\alpha})$ with $1<\alpha<\frac{3}{2}$, extends the results
for the T-periodic viscous Burgers equation in \cite{5} to the T-periodic
fractional case.
|
In this paper we introduce models of short wave-long wave interactions in the
relativistic setting. In this context the nonlinear Schr\"odinger equation is
no longer adequate for describing short waves and is replaced by a nonlinear
Dirac equation. Two specific examples are considered: the case where the long
waves are governed by a scalar conservation law; and the case where the long
waves are governed by the augmented Born-Infeld equations in electromagnetism.
|
Aging affects almost all aspects of an organism -- its morphology, its
physiology, its behavior. Isolating which biological mechanisms are regulating
these changes, however, has proven difficult, potentially due to our inability
to characterize the full repertoire of an animal's behavior across the
lifespan. Using data from fruit flies (D. melanogaster) we measure the full
repertoire of behaviors as a function of age. We observe a sexually dimorphic
pattern of changes in the behavioral repertoire during aging. Although the
stereotypy of the behaviors and the complexity of the repertoire overall
remains relatively unchanged, we find evidence that the observed alterations in
behavior can be explained by changing the fly's overall energy budget,
suggesting potential connections between metabolism, aging, and behavior.
|
The strange visual appearance of objects is one of the puzzling predictions
of Einstein's relativity. This is mainly due to the distinction between
measuring and seeing, where the former is described by the Lorentz
Transformation and the latter considers the time light rays (emitted by each
point on the object) take to reach the observer. We compute the apparent
position of a point given its velocity, initial position, and observation time.
The apparent speed of a point is calculated, and we find that it exceeds the
speed of light when the point approaches the observer, similar to apparent superluminal motion.
For parameterizable surfaces, we analyze properties (such as curvature and
torsion) of apparent shapes. The observation that a sphere retains its circular
silhouette when transformed to its apparent shape, independent of the initial
conditions, is proved mathematically. Plots describing the apparent speed and
length of objects are made, and the metric tensor for a distorted sphere is
calculated. A generalized equation for the Doppler effect and relativistic
aberration is derived to analyze regions of redshift and blueshift. Using the
Born-rigidity conditions, we compute the hyperbolic trajectories of each point
on an extended object given an initial velocity, position, and proper
acceleration for any reference point. The claim that a rigid body, accelerating
in Special Relativity, cannot exceed a given length in certain circumstances is
justified. We obtain many non-trivial results, which are proved algebraically
and using light cones, that are tested by taking the limit of acceleration
approaching 0 to retrieve results in the constant velocity scenario. In
conclusion, these visualizations may be used by teachers to explain SR
intuitively. Finally, we provide an overview of extending the same problem to
curved spacetime and explain the potential applications of this project.
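As a toy illustration of the measuring-versus-seeing distinction in the simplest one-dimensional case (our own sketch in units with c = 1; the paper's full 3D treatment is not reproduced): the apparent position at observation time t is the retarded position, found by solving t = t_e + |x(t_e)| for the emission time t_e.

# A point on the worldline x(t) = x0 - v*t (x0 > 0, 0 < v < 1) approaches an
# observer at the origin.  Light emitted at t_e arrives at t = t_e + x(t_e),
# so t_e = (t - x0) / (1 - v), and the apparent speed is v / (1 - v), which
# exceeds 1 (i.e. c) whenever v > 1/2.  Valid while the point is still seen on
# the observer's side, i.e. for observation times t < x0 / v.
def apparent_position(t, x0, v):
    t_e = (t - x0) / (1.0 - v)      # emission (retarded) time
    return x0 - v * t_e             # position where the point is "seen"

x0, v, dt = 10.0, 0.9, 1e-3
for t in (5.0, 10.0):
    x_app = apparent_position(t, x0, v)
    speed = abs(apparent_position(t + dt, x0, v) - x_app) / dt
    print(f"t={t}: apparent x={x_app:.3f}, apparent speed={speed:.2f} c")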
|
The problem of graph Reachability is to decide whether there is a path from
one vertex to another in a given graph. In this paper, we study the
Reachability problem on three distinct graph families - intersection graphs of
Jordan regions, unit contact disk graphs (penny graphs), and chordal graphs.
For each of these graph families, we present space-efficient algorithms for the
Reachability problem.
For intersection graphs of Jordan regions, we show how to obtain a "good"
vertex separator in a space-efficient manner and use it to solve the
Reachability in polynomial time and $O(m^{1/2}\log n)$ space, where $n$ is the
number of Jordan regions, and $m$ is the total number of crossings among the
regions. We use a similar approach for chordal graphs and obtain a
polynomial-time and $O(m^{1/2}\log n)$ space algorithm, where $n$ and $m$ are
the number of vertices and edges, respectively. However, we use a more involved
technique for unit contact disk graphs (penny graphs) and obtain a better
algorithm. We show that for every $\epsilon> 0$, there exists a polynomial-time
algorithm that can solve Reachability in an $n$ vertex directed penny graph,
using $O(n^{1/4+\epsilon})$ space. We note that the method used to solve penny
graphs does not extend naturally to the class of geometric intersection graphs
that include arbitrary size cliques.
|
Differentiable architecture search (DAS) has made great progress in searching
for high-performance architectures with reduced computational cost. However,
DAS-based methods mainly focus on searching for a repeatable cell structure,
which is then stacked sequentially in multiple stages to form the networks.
This configuration significantly reduces the search space, and ignores the
importance of connections between the cells. To overcome this limitation, in
this paper, we propose a Hierarchical Differentiable Architecture Search
(H-DAS) that performs architecture search both at the cell level and at the
stage level. Specifically, the cell-level search space is relaxed so that the
networks can learn stage-specific cell structures. For the stage-level search,
we systematically study the architectures of stages, including the number of
cells in each stage and the connections between the cells. Based on insightful
observations, we design several search rules and losses, and manage to search
for better stage-level architectures. Such a hierarchical search space greatly
improves the performance of the networks without introducing expensive search
cost. Extensive experiments on CIFAR10 and ImageNet demonstrate the
effectiveness of the proposed H-DAS. Moreover, the searched stage-level
architectures can be combined with the cell structures searched by existing DAS
methods to further boost the performance. Code is available at:
https://github.com/MalongTech/research-HDAS
|
In the race to achieve climate goals, many governments and organizations are
encouraging the local development of Renewable Energy Technology (RET). The
spatial innovation dynamics of the development of a technology partly depends
on the characteristics of the knowledge base on which this technology builds,
in particular the analyticity and cumulativeness of knowledge. Theoretically,
greater analyticity and lesser cumulativeness are positively associated with
more widespread development. In this study, we first empirically evaluate these
relations for general technology and then systematically determine the
knowledge base characteristics for a set of 14 different RETs. We find that,
while several RETs (photovoltaics, fuel cells, energy storage) have a highly
analytic knowledge base and develop in a more widespread manner, there are also
important RETs (wind turbines, solar thermal, geothermal and hydro energy) for
which the knowledge base is less analytic and which develop in a less
widespread manner. Likewise, the technological cumulativeness tends to be lower
for the former than for the latter group. This calls for regional and
country-level policies to be specific to different RETs, taking into account,
for a given RET, both the type of knowledge it builds on and the local presence
of this knowledge.
|
One of the fundamental concerns in the operation of modern power systems is
the assessment of their frequency stability in case of inertia-reduction
induced by the large share of power electronic interfaced resources. Within
this context, the paper proposes a framework that, by making use of linear
models of the frequency response of different types of power plants, including
also grid-forming and grid-following converters, is capable of inferring a
numerically tractable dynamical model to be used in frequency stability
assessment. Furthermore, the proposed framework makes use of models defined in
a way such that their parameters can be inferred from real-time measurements
feeding a classical least squares estimator. The paper validates the proposed
framework using a full-replica of the dynamical model of the IEEE 39 bus system
simulated in a real-time platform.
|
Adversarial attacks expose important blind spots of deep learning systems.
While word- and sentence-level attack scenarios mostly deal with finding
semantic paraphrases of the input that fool NLP models, character-level attacks
typically insert typos into the input stream. It is commonly thought that these
are easier to defend via spelling correction modules. In this work, we show
that both a standard spellchecker and the approach of Pruthi et al. (2019),
which trains to defend against insertions, deletions and swaps, perform poorly
on the character-level benchmark recently proposed in Eger and Benz (2020)
which includes more challenging attacks such as visual and phonetic
perturbations and missing word segmentations. In contrast, we show that an
untrained iterative approach which combines context-independent character-level
information with context-dependent information from BERT's masked language
modeling can perform on par with human crowd-workers from Amazon Mechanical
Turk (AMT) supervised via 3-shot learning.
|
We considered the multiphoton resonance in the periodically driven quantum
oscillator with Kerr nonlinearity in the presence of weak high-order
nonlinearities. Multiphoton resonance leads to the emergence of peaks and dips
in the dependence of the stationary occupations of the stable states on
detuning. We demonstrated that due to high-order nonlinearities, these peaks
and dips acquire additional fine structure and split into several closely
spaced ones. Quasiclassically, multiphoton resonance is treated as tunneling
between the regions of the oscillator phase portrait, and the fine structure of
the multiphoton resonance is a consequence of a special quasienergy dependence
of the tunneling rate between different regions of the classical phase
portrait. For different values of damping and high-order nonlinearity
coefficients, we identified the domain of quasienergies where tunneling
strongly influences the system kinetics. The corresponding tunneling term in
the Fokker-Planck equation in quasienergy space was derived directly from the
quantum master equation.
|
Active learning promises to alleviate the massive data needs of supervised
machine learning: it has successfully improved sample efficiency by an order of
magnitude on traditional tasks like topic classification and object
recognition. However, we uncover a striking contrast to this promise: across 5
models and 4 datasets on the task of visual question answering, a wide variety
of active learning approaches fail to outperform random selection. To
understand this discrepancy, we profile 8 active learning methods on a
per-example basis, and identify the problem as collective outliers -- groups of
examples that active learning methods prefer to acquire but models fail to
learn (e.g., questions that ask about text in images or require external
knowledge). Through systematic ablation experiments and qualitative
visualizations, we verify that collective outliers are a general phenomenon
responsible for degrading pool-based active learning. Notably, we show that
active learning sample efficiency increases significantly as the number of
collective outliers in the active learning pool decreases. We conclude with a
discussion and prescriptive recommendations for mitigating the effects of these
outliers in future work.
|
Let $C/\mathbb{Q}$ be a hyperelliptic curve with an affine model of the form
$y^2=x^p+a$. We explicitly determine the root number of the Jacobian of $C$,
with particular focus on the local root number at $p$ where $C$ has wild
ramification.
|
Cooperative multi-agent reinforcement learning (MARL) has achieved
significant results, most notably by leveraging the representation learning
abilities of deep neural networks. However, large centralized approaches
quickly become infeasible as the number of agents scales, and fully
decentralized approaches can miss important opportunities for information
sharing and coordination. Furthermore, not all agents are equal - in some
cases, individual agents may not even have the ability to send communication to
other agents or explicitly model other agents. This paper considers the case
where there is a single, powerful, central agent that can observe the entire
observation space, and there are multiple, low powered, local agents that can
only receive local observations and cannot communicate with each other. The job
of the central agent is to learn what message to send to different local
agents, based on the global observations, not by centrally solving the entire
problem and sending action commands, but by determining what additional
information an individual agent should receive so that it can make a better
decision. After explaining our MARL algorithm, HAMMER, and where it would be
most applicable, we implement it in the cooperative navigation and multi-agent
walker domains. Empirical results show that 1) learned communication does
indeed improve system performance, 2) results generalize to multiple numbers of
agents, and 3) results generalize to different reward structures.
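A schematic of the interface described above — a central agent that maps the global observation to per-agent messages, and local agents that act on their own observation concatenated with the received message — in plain Python with random placeholder policies (our own sketch, not the HAMMER implementation):

import numpy as np

rng = np.random.default_rng(0)

def central_policy(global_obs, n_agents, msg_dim=4):
    # Central agent maps the global observation to one message per local agent.
    # Stand-in for a learned network: a random linear map.
    W = rng.normal(size=(n_agents * msg_dim, global_obs.size))
    return (W @ global_obs).reshape(n_agents, msg_dim)

def local_policy(local_obs, message):
    # Each local agent acts on its own observation concatenated with its message.
    x = np.concatenate([local_obs, message])
    return np.tanh(rng.normal(size=(2, x.size)) @ x)   # placeholder 2-D action

n_agents, obs_dim = 3, 5
local_obs = rng.normal(size=(n_agents, obs_dim))
global_obs = local_obs.ravel()                         # central agent sees everything
messages = central_policy(global_obs, n_agents)
actions = [local_policy(local_obs[i], messages[i]) for i in range(n_agents)]
print(np.array(actions).shape)                         # (3, 2)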
|
I show that any locally Cartesian left localisation of a presentable
infinity-category admits a right proper model structure in which all morphisms
are cofibrations, and obtain a Koszul duality classification of its fibrations.
By a simple criterion in terms of generators for a localisation to be locally
Cartesian, this applies to any nullification functor. In particular, it
includes examples with non-trivial "homotopical content."
I further describe, and provide examples from, the set of fibrations in three
contexts: the higher categorical Thomason model structure of Mazel-Gee, where
fibrations are local systems; Morel-Voevodsky A1-localisation, where they are a
higher analogue of A1-covering spaces; and the Quillen plus construction, where
they are related to loop space modules trivialised over the universal acyclic
extension.
|
Quantum simulation has shown great potential in many fields due to its
powerful computational capabilities. However, the limited fidelity can lead to
a severe limitation on the number of gate operations, which requires us to find
optimized algorithms. Trotter decomposition and high order Trotter
decompositions are widely used in quantum simulations. We find that they can be
significantly improved by the force-gradient integrator used in lattice QCD. By using
two applications as examples, we show that the force-gradient decomposition can
reduce the number of gate operations up to about a third of those using high
order Trotter decompositions. Therefore, the force-gradient decomposition shows
great promise for future applications of quantum simulation.
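For reference, the first- and second-order product formulas that such schemes build on read (standard textbook forms in our notation; the specific force-gradient coefficients used in the paper are not reproduced here):
$$ e^{-i(A+B)t}=\lim_{n\to\infty}\left(e^{-iAt/n}\,e^{-iBt/n}\right)^{n},\qquad e^{-i(A+B)\delta}=e^{-iA\delta/2}\,e^{-iB\delta}\,e^{-iA\delta/2}+O(\delta^{3}), $$
while force-gradient integrators insert an additional exponential involving a nested commutator such as $[B,[A,B]]$ to cancel higher-order error terms at the cost of one extra factor per step.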
|
Powder X-ray Diffraction (PXRD) and Pair Distribution Function (PDF) analysis
are well-established techniques for investigation of atomic configurations in
crystalline materials, and the two are related by a Fourier transformation. In
PXRD experiments, structural information, such as crystallite size and strain,
is contained within the peak profile function of the diffraction peaks.
However, the effects of the PXRD peak profile function on the PDF are not fully
understood. Here, all the effects from a Voigt diffraction peak profile are
solved analytically and verified experimentally through high-quality X-ray
total scattering measurements on strained Ni powder. The Lorentzian
contribution to strain broadening is found to result in Voigt shaped PDF peaks.
Furthermore, it is demonstrated that an improper description of the Voigt shape
during model refinement leads to overestimation of the atomic displacement
parameter.
|
The analysis of plaque deposits in the coronary vasculature is an important
topic in current clinical research. From a technical perspective, mostly new
algorithms for individual subtasks - e.g., centerline extraction or vessel/plaque
segmentation - are proposed. However, to enable clinical research with the help
of these algorithms, a software solution, which enables manual correction,
comprehensive visual feedback and tissue analysis capabilities, is needed.
Therefore, we want to present such an integrated software solution. It is able
to perform robust automatic centerline extraction and inner and outer vessel
wall segmentation, while providing easy to use manual correction tools. Also,
it allows for annotation of lesions along the centerlines, which can be further
analyzed regarding their tissue composition. Furthermore, it enables research
in upcoming technologies and research directions: it supports dual-energy
CT scans with dedicated plaque analysis and the quantification of the fatty
tissue surrounding the vasculature, also in automated set-ups.
|
Under mild conditions on the noise level of the measurements, rotation
averaging satisfies strong duality, which enables global solutions to be
obtained via semidefinite programming (SDP) relaxation. However, generic
solvers for SDP are rather slow in practice, even on rotation averaging
instances of moderate size, thus developing specialised algorithms is vital. In
this paper, we present a fast algorithm that achieves global optimality called
rotation coordinate descent (RCD). Unlike block coordinate descent (BCD) which
solves SDP by updating the semidefinite matrix in a row-by-row fashion, RCD
directly maintains and updates all valid rotations throughout the iterations.
This obviates the need to store a large dense semidefinite matrix. We
mathematically prove the convergence of our algorithm and empirically show its
superior efficiency over state-of-the-art global methods on a variety of
problem configurations. Maintaining valid rotations also facilitates
incorporating local optimisation routines for further speed-ups. Moreover, our
algorithm is simple to implement; see supplementary material for a
demonstration program.
|
This paper proposes a deep learning-based method to identify the segments of
a clinical note corresponding to ICD-9 broad categories which are further
color-coded with respect to 17 ICD-9 categories. The proposed Medical Segment
Colorer (MSC) architecture is a pipeline framework that works in three stages:
(1) word categorization, (2) phrase allocation, and (3) document
classification. MSC uses gated recurrent unit neural networks (GRUs) to map
from an input document to word multi-labels to phrase allocations, and uses
statistical median to map phrase allocation to document multi-label. We compute
variable length segment coloring from overlapping phrase allocation
probabilities. These cross-level bidirectional contextual links identify
adaptive context and then produce segment coloring. We train and evaluate MSC
using the document labeled MIMIC-III clinical notes. Training is conducted
solely using document multi-labels without any information on phrases,
segments, or words. In addition to coloring a clinical note, MSC generates as
byproducts document multi-labeling and word tagging -- creation of ICD9
category keyword lists based on segment coloring. Comparing the byproduct
document multi-labels of MSC against methods whose purpose is to produce
justifiable document multi-labels, MSC achieves a micro-average F1-score of 64%
versus 52.4% for the CAML (CNN attention multi-label) method. For evaluation of MSC
segment coloring results, medical practitioners independently assigned the
colors to broad ICD9 categories given a sample of 40 colored notes and a sample
of 50 words related to each category based on the word tags. Binary scoring of
this evaluation has a median value of 83.3% and mean of 63.7%.
|
Short-term precipitation forecasting is essential for planning of human
activities in multiple scales, ranging from individuals' planning, urban
management to flood prevention. Yet the short-term atmospheric dynamics are
highly nonlinear that it cannot be easily captured with classical time series
models. On the other hand, deep learning models are good at learning nonlinear
interactions, but they are not designed to deal with the seasonality in time
series. In this study, we aim to develop a forecasting model that can both
handle the nonlinearities and detect the seasonality hidden within the daily
precipitation data. To this end, we propose a seasonally-integrated autoencoder
(SSAE) consisting of two long short-term memory (LSTM) autoencoders: one for
learning short-term dynamics, and the other for learning the seasonality in the
time series. Our experimental results show that not only does the SSAE
outperform various time series models regardless of the climate type, but it
also has low output variance compared to other deep learning models. The
results also show that the seasonal component of the SSAE helped improve the
correlation between the forecast and the actual values from 4% at horizon 1 to
37% at horizon 3.
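A minimal PyTorch sketch of the overall architecture as we read it from the abstract — two LSTM autoencoders, one fed with the recent window and one with a seasonal window, whose outputs are combined — follows; the layer sizes, the additive combination and the choice of seasonal input are placeholder assumptions, not the authors' configuration:

import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                                # x: (batch, time, features)
        _, (h, _) = self.encoder(x)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat latent over time
        out, _ = self.decoder(z)
        return self.head(out)

class SSAE(nn.Module):
    # Seasonally-integrated autoencoder: short-term branch plus seasonal branch.
    def __init__(self):
        super().__init__()
        self.short_term = LSTMAutoencoder()
        self.seasonal = LSTMAutoencoder()

    def forward(self, x_recent, x_seasonal):
        # Combine the two branches additively (placeholder combination rule).
        return self.short_term(x_recent) + self.seasonal(x_seasonal)

model = SSAE()
x_recent = torch.randn(8, 30, 1)      # recent daily precipitation window
x_seasonal = torch.randn(8, 30, 1)    # same calendar window from earlier years
print(model(x_recent, x_seasonal).shape)   # torch.Size([8, 30, 1])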
|
We give a new, simpler proof of the fractional Korn's inequality for subsets
of $\mathbb{R}^d$. We also show a framework for obtaining Korn's inequality
directly from the appropriate Hardy-type inequality.
|
The structure of the icy shells of ocean worlds is important for
understanding the stability of their underlying oceans as it controls the rate
at which heat can be transported outward and radiated to space. Future
spacecraft exploration of the ocean worlds (e.g., by NASA's Europa Clipper
mission) will allow for higher-resolution measurements of gravity and shape
than currently available.
In this paper, we study the sensitivity of gravity-topography admittance to
the structure of icy shells in preparation for future data analysis. An
analytical viscous relaxation model is used to predict admittance spectra given
different shell structures determined by the temperature-dependent viscosity of
a tidally heated, conductive shell. We apply these methods to the ocean worlds
of Europa and Enceladus. We find that admittance is sensitive to the mechanisms
of topography support at different wavelengths and estimate the required
gravity performance to resolve transitions between these mechanisms. We find
that the Airy isostatic model is unable to accurately describe admittance
universally across all wavelengths when the shell thickness is a significant
fraction of body's radius. Our models suggest that measurements of admittance
at low spherical harmonic degrees are more sensitive to thick shells with high
tidal dissipation, and may complement ice-penetrating radar measurements in
constraining shell thickness. Finally, we find that admittance may be used to
constrain the tidal dissipation within the icy shell, which would be
complementary to a more demanding measurement of the tidal phase lag.
|
Future direct imaging missions will primarily observe planets that have been
previously detected, mostly via the radial velocity (RV) technique, to
characterize planetary atmospheres. In the meantime, direct imaging may
discover new planets within existing planetary systems that have bright enough
reflected flux, yet with insufficient signals for other methods to detect.
Here, we investigate the parameter space within which planets are unlikely to
be detected by RV in the near future due to precision limitations, but could be
discovered through reflected light with future direct imaging missions. We use
the HD 134987 system as a working example, combine RV and direct imaging
detection limit curves in the same parameter space through various assumptions,
and insert a fictitious planet into the system while ensuring it lies between
the RV and imaging detection limits. Planet validity tested through dynamical
simulations and retrieval tests revealed that the planet could indeed be
detected by imaging while remaining hidden from RV surveys. Direct imaging
retrieval was carried out using starshade simulations for two mission concepts:
the Starshade Rendezvous Probe that could be coupled with the Nancy Grace Roman
Space Telescope, and the Habitable Exoplanet Observatory. This method is
applicable to any other systems and high contrast direct imaging instruments,
and could help inform future imaging observations and data analysis on the
discovery of new exoplanets.
|
Recent works highlight the importance of stellar X-rays on the evolution of
the circumstellar disks of young stellar objects, especially for disk
photoevaporation. A signature of this process may be seen in the so far
tentatively observed dependence of stellar accretion rates on X-ray
luminosities. According to models of X-ray driven photoevaporation, stars with
higher X-ray luminosities should show lower accretion rates, on average, in a
sample with similar masses and ages. To this aim, we have analyzed X-ray
properties of young stars in the Orion Nebula Cluster determined with Chandra
during the COUP observation as well as accretion data obtained from the
photometric catalog of the HST Treasury Program. With these data, we have
performed a statistical analysis of the relation between X-ray activity and
accretion rates using partial linear regression analysis. The initial
anticorrelation found with a sample of 332 young stars is considerably weaker
compared to previous studies. However, excluding flaring activity or limiting
the X-ray luminosity to the soft band (0.5 - 2.0 keV) leads to a stronger
anticorrelation, which is statistically more significant. Furthermore, we have
found a weak positive correlation between the higher component of the plasma
temperature gained in the X-ray spectral fitting and the accretion rates,
indicating that the hardness of the X-ray spectra may influence the accretion
process. There is evidence for a weak anticorrelation, as predicted by
theoretical models, suggesting that X-ray photoevaporation modulates the
accretion rate through the inner disk at late stages of disk evolution, leading
to a phase of photoevaporation-starved accretion.
|
As the number of novel data-driven approaches to material science continues
to grow, it is crucial to perform consistent quality, reliability and
applicability assessments of model performance. In this paper, we benchmark the
Materials Optimal Descriptor Network (MODNet) method and architecture against
the recently released MatBench v0.1, a curated test suite of materials
datasets. MODNet is shown to outperform current leaders on 6 of the 13 tasks,
whilst closely matching the current leaders on a further 2 tasks; MODNet
performs particularly well when the number of samples is below 10,000.
Attention is paid to two topics of concern when benchmarking models. First, we
encourage the reporting of a more diverse set of metrics as it leads to a more
comprehensive and holistic comparison of model performance. Second, an equally
important task is the uncertainty assessment of a model towards a target
domain. Significant variations in validation errors can be observed, depending
on the imbalance and bias in the training set (i.e., similarity between
training and application space). By using an ensemble MODNet model, confidence
intervals can be built and the uncertainty on individual predictions can be
quantified. Imbalance and bias issues are often overlooked, and yet are
important for successful real-world applications of machine learning in
materials science and condensed matter.
|
Social media has become popular and has percolated into almost all aspects of
our daily lives. While online posting proves very convenient for individual
users, it also fosters the fast spreading of various rumors. The rapid and wide
percolation of rumors can cause persistent adverse or detrimental impacts.
Therefore, researchers invest great effort in reducing the negative impacts of rumors.
Towards this end, the rumor classification system aims to detect, track, and
verify rumors in social media. Such systems typically include four components:
(i) a rumor detector, (ii) a rumor tracker, (iii) a stance classifier, and (iv)
a veracity classifier. In order to improve the state-of-the-art in rumor
detection, tracking, and verification, we propose VRoC, a tweet-level
variational autoencoder-based rumor classification system. VRoC consists of a
co-train engine that trains variational autoencoders (VAEs) and rumor
classification components. The co-train engine helps the VAEs to tune their
latent representations to be classifier-friendly. We also show that VRoC is
able to classify unseen rumors with high levels of accuracy. For the PHEME
dataset, VRoC consistently outperforms several state-of-the-art techniques, on
both observed and unobserved rumors, by up to 26.9%, in terms of macro-F1
scores.
|
Turing machines and register machines have been used for decades in
theoretical computer science as abstract models of computation. Also the
$\lambda$-calculus has played a central role in this domain as it allows to
focus on the notion of functional computation, based on the substitution
mechanism, while abstracting away from implementation details. The present
article starts from the observation that the equivalence between these
formalisms is based on the Church-Turing Thesis rather than an actual encoding
of $\lambda$-terms into Turing (or register) machines. The reason is that these
machines are not well-suited for modelling $\lambda$-calculus programs.
We study a class of abstract machines that we call \emph{addressing machine}
since they are only able to manipulate memory addresses of other machines. The
operations performed by these machines are very elementary: load an address in
a register, apply a machine to another one via their addresses, and call the
address of another machine. We endow addressing machines with an operational
semantics based on leftmost reduction and study their behaviour. The set of
addresses of these machines can be easily turned into a combinatory algebra. In
order to obtain a model of the full untyped $\lambda$-calculus, we need to
introduce a rule that bears similarities to the $\omega$-rule and the rule
$\zeta_\beta$ from combinatory logic.
|
A smooth rigidity inequality provides an explicit lower bound for the
$(d+1)$-st derivatives of a smooth function $f$, which holds, if $f$ exhibits
certain patterns, forbidden for polynomials of degree $d$. The main goal of the
present paper is twofold: first, we provide an overview of some recent results
and questions related to smooth rigidity, which recently were obtained in
Singularity Theory, in Approximation Theory, and in Whitney smooth extensions.
Second, we prove some new results, specifically, a new Remez-type inequality,
and on this base we obtain a new rigidity inequality. In both parts of the
paper we stress the topology of the level sets, as the input information. Here
are the main new results of the paper:
\smallskip
Let $B^n$ be the unit $n$-dimensional ball. For a given integer $d$ let
$Z\subset B^n$ be a smooth compact hypersurface with $N=(d-1)^n+1$ connected
components $Z_j$. Let $\mu_j$ be the $n$-volume of the interior of $Z_j$, and
put $\mu=\min \mu_j, \ j=1,\ldots, N$. Then for each polynomial $P$ of degree
$d$ on ${\mathbb R}^n$ we have $$ \frac{\max_{B^n}|P|}{\max_{Z}|P|}\le
(\frac{4n}{\mu})^d. $$ As a consequence, we provide an explicit lower bound for
the $(d+1)$-st derivatives of any smooth function $f$, which vanishes on $Z$,
while being of order $1$ on $B^n$ (smooth rigidity): $$ ||f^{(d+1)}||\ge
\frac{1}{(d+1)!}(\frac{4n}{\mu})^d. $$ We also provide an interpretation, in
terms of smooth rigidity, of one of the simplest versions of the results in
\cite{Ler.Ste}.
|