While Moore's law has driven exponential computing power expectations, its
nearing end calls for new avenues for improving the overall system performance.
One of these avenues is the exploration of new alternative brain-inspired
computing architectures that promise to achieve the flexibility and
computational efficiency of biological neural processing systems. Within this
context, neuromorphic intelligence represents a paradigm shift in computing
based on the implementation of spiking neural network architectures tightly
co-locating processing and memory. In this paper, we provide a comprehensive
overview of the field, highlighting the different levels of granularity present
in existing silicon implementations, comparing approaches that aim at
replicating natural intelligence (bottom-up) versus those that aim at solving
practical artificial intelligence applications (top-down), and assessing the
benefits of the different circuit design styles used to achieve these goals.
First, we present the analog, mixed-signal and digital circuit design styles,
identifying the boundary between processing and memory through time
multiplexing, in-memory computation and novel devices. Next, we highlight the
key tradeoffs for each of the bottom-up and top-down approaches, survey their
silicon implementations, and carry out detailed comparative analyses to extract
design guidelines. Finally, we identify both necessary synergies and missing
elements required to achieve a competitive advantage for neuromorphic edge
computing over conventional machine-learning accelerators, and outline the key
elements for a framework toward neuromorphic intelligence.
|
This survey discusses the classical Bernstein and Markov inequalities for the
derivatives of polynomials, as well as some of their extensions to general
sets.
|
Self-assembly of Janus (or `patchy') particles is dependent on the precise
interaction between neighbouring particles. Here, the orientations of two
amphiphilic Janus spheres within a dimer in an explicit fluid are studied with
high geometric resolution. Molecular dynamics simulations and first-principles
energy calculations are used with hard- and soft-sphere Lennard-Jones
potentials, and temperature and hydrophobicity are varied. The most probable
centre-centre-pole angles are in the range 40° to 55°, with
pole-to-pole alignment not observed due to orientational entropy. Angles near
90° are energetically unfavoured due to solvent exclusion, and we
unexpectedly found that the relative azimuthal angle between the spheres is
affected by solvent ordering.
|
The atmospheric depth of the air shower maximum $X_{\mathrm{max}}$ is an
observable commonly used for the determination of the nuclear mass composition
of ultra-high energy cosmic rays. Direct measurements of $X_{\mathrm{max}}$ are
performed using observations of the longitudinal shower development with
fluorescence telescopes. At the same time, several methods have been proposed
for an indirect estimation of $X_{\mathrm{max}}$ from the characteristics of
the shower particles registered with surface detector arrays. In this paper, we
present a deep neural network (DNN) for the estimation of $X_{\mathrm{max}}$.
The reconstruction relies on the signals induced by shower particles in the
ground-based water-Cherenkov detectors of the Pierre Auger Observatory. The
network architecture features recurrent long short-term memory layers to
process the temporal structure of signals and hexagonal convolutions to exploit
the symmetry of the surface detector array. We evaluate the performance of the
network using air showers simulated with three different hadronic interaction
models. Thereafter, we account for long-term detector effects and calibrate the
reconstructed $X_{\mathrm{max}}$ using fluorescence measurements. Finally, we
show that the event-by-event resolution in the reconstruction of the shower
maximum improves with increasing shower energy and reaches less than
$25~\mathrm{g/cm^{2}}$ at energies above $2\times 10^{19}~\mathrm{eV}$.
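As an illustration of the architecture described above, the following sketch pairs a per-station LSTM over the signal time traces with convolutions over the station grid. This is a hedged PyTorch sketch under stated assumptions: the layer sizes, the axial-coordinate embedding of the hexagonal array (approximating hexagonal convolutions with ordinary 2D convolutions), and all names are illustrative, not the published network.

```python
# Hypothetical sketch: LSTM encodes each station's time trace; convolutions
# over the station grid aggregate array-level information into an Xmax value.
import torch
import torch.nn as nn

class XmaxNet(nn.Module):
    def __init__(self, trace_len=120, grid=13, hidden=64):
        super().__init__()
        # Recurrent encoder for the temporal structure of each station signal
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        # Hexagonal convolutions approximated by 2D convolutions on an
        # axial-coordinate embedding of the hexagonal array (assumption).
        self.conv = nn.Sequential(
            nn.Conv2d(hidden, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16 * grid * grid, 1)

    def forward(self, traces):
        # traces: (batch, grid, grid, trace_len) water-Cherenkov signals
        b, g1, g2, t = traces.shape
        x = traces.reshape(b * g1 * g2, t, 1)
        _, (h, _) = self.lstm(x)                  # last hidden state per station
        x = h[-1].reshape(b, g1, g2, -1).permute(0, 3, 1, 2)
        x = self.conv(x)
        return self.head(x.flatten(1))            # predicted Xmax in g/cm^2
```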
|
We analyze the propagation dynamics of radially polarized symmetric Airy
beams (R-SABs) in a (2+1)-dimensional optical system with fractional
diffraction, modeled by the fractional Schr\"odinger equation (FSE)
characterized by the L\'evy index $\alpha$. The autofocusing effect featured by such
beams becomes stronger, while the focal length becomes shorter, with the
increase of $\alpha$. The effect of the intrinsic vorticity on the autofocusing
dynamics of the beams is considered too. Then, the ability of R-SABs to capture
nano-particles by means of radiation forces is explored, and multiple capture
positions emerging in the course of the propagation are identified. Finally, we
find that the propagation of the vortical R-SABs with an off-axis shift leads
to rupture of the ring-shaped pattern of the power-density distribution.
|
Abstractive neural summarization models have seen great improvements in
recent years, as shown by ROUGE scores of the generated summaries. But despite
these improved metrics, there is limited understanding of the strategies
different models employ, and how those strategies relate to their understanding of
language. To understand this better, we run several experiments to characterize
how one popular abstractive model, the pointer-generator model of See et al.
(2017), uses its explicit copy/generation switch to control its level of
abstraction (generation) vs extraction (copying). On an extractive-biased
dataset, the model utilizes syntactic boundaries to truncate sentences that are
otherwise often copied verbatim. When we modify the copy/generation switch and
force the model to generate, only simple paraphrasing abilities are revealed
alongside factual inaccuracies and hallucinations. On an abstractive-biased
dataset, the model copies infrequently but shows similarly limited abstractive
abilities. In line with previous research, these results suggest that
abstractive summarization models lack the semantic understanding necessary to
generate paraphrases that are both abstractive and faithful to the source
document.
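For reference, the copy/generation switch of See et al. (2017) is a learned scalar $p_{\mathrm{gen}} \in (0,1)$ that mixes the vocabulary distribution with the copy (attention) distribution:
\[
p_{\mathrm{gen}} = \sigma\!\left(w_{h^*}^{\top} h_t^{*} + w_{s}^{\top} s_t + w_{x}^{\top} x_t + b_{\mathrm{ptr}}\right),
\qquad
P(w) = p_{\mathrm{gen}}\, P_{\mathrm{vocab}}(w) + \left(1 - p_{\mathrm{gen}}\right) \sum_{i \,:\, w_i = w} a_i^{t},
\]
where $h_t^{*}$ is the attention context vector, $s_t$ the decoder state, $x_t$ the decoder input, and $a^{t}$ the attention distribution over source tokens. Forcing the model to generate, as in the experiments above, amounts to clamping $p_{\mathrm{gen}}$ to 1 at decoding time.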
|
In this article, a combinatorial characterization of the family of planes of
$\PG(3,q)$ which meet a hyperbolic quadric in an irreducible conic, using their
intersection properties with the points and lines of $\PG(3,q)$, is given.
|
The upgraded LHCb detector, due to start data-taking in 2022, will have to
process an average data rate of 4~TB/s in real time. Because LHCb's physics
objectives require that the full detector information for every LHC bunch
crossing is read out and made available for real-time processing, this
bandwidth challenge is equivalent to that of the ATLAS and CMS HL-LHC software
read-out, but deliverable five years earlier. Over the past six years, the LHCb
collaboration has undertaken a bottom-up rewrite of its software
infrastructure, pattern recognition, and selection algorithms to make them
better able to efficiently exploit modern highly parallel computing
architectures. We review the impact of this reoptimization on the energy
efficiency of the real-time processing software and hardware which will be used
for the upgrade of the LHCb detector. We also review the impact of the decision
to adopt a hybrid computing architecture consisting of GPUs and CPUs for the
real-time part of LHCb's future data processing. We discuss the implications of
these results on how LHCb's real-time power requirements may evolve in the
future, particularly in the context of a planned second upgrade of the
detector.
|
New surprises continue to be revealed about La$_2$CuO$_4$, the parent
compound of the original cuprate superconductor. Here we present neutron
scattering evidence that the structural symmetry is lower than commonly
assumed. The static distortion results in anisotropic Cu-O bonds within the
CuO$_2$ planes; such anisotropy is relevant to pinning charge stripes in
hole-doped samples. Associated with the extra structural modulation is a soft
phonon mode. If this phonon were to soften completely, the resulting change in
CuO$_6$ octahedral tilts would lead to weak ferromagnetism. Hence, we suggest
that this mode may be the "chiral" phonon inferred from recent studies of the
thermal Hall effect. We also note the absence of interaction between the
antiferromagnetic spin waves and low-energy optical phonons, in contrast to
what is observed in hole-doped samples.
|
Beyond conventional metasurface design, which demands substantial
computational resources and time, an inverse design approach using machine
learning algorithms promises an effective route to metasurface design. In this
paper, benefiting from Deep Neural Network (DNN), an inverse design procedure
of a metasurface in an ultra-wide working frequency band is presented where the
output unit cell structure can be directly computed by a specified design
target. To reach the highest working frequency, for training the DNN, we
consider 8 ring-shaped patterns to generate resonant notches at a wide range of
working frequencies from 4 to 45 GHz. We propose two network architectures. In
one architecture, we restricted the output of the DNN, so the network can only
generate the metasurface structure from the input of 8 ring-shaped patterns.
This approach drastically reduces the computational time, while keeping the
network's accuracy above 91\%. We show that our model based on DNN can
satisfactorily generate the output metasurface structure with an average
accuracy of over 90\% in both network architectures. The ability to determine
the metasurface structure directly, without time-consuming optimization
procedures, over an ultra-wide working frequency range and with high average
accuracy, provides a promising platform for engineering projects without the
need for complex electromagnetic theory.
|
We obtain new separation results for the two-party external information
complexity of boolean functions. The external information complexity of a
function $f(x,y)$ is the minimum amount of information a two-party protocol
computing $f$ must reveal to an outside observer about the input. We obtain the
following results:
1. We prove an exponential separation between external and internal
information complexity, which is the best possible; previously no separation
was known.
2. We prove a near-quadratic separation between amortized zero-error
communication complexity and external information complexity for total
functions, disproving a conjecture of \cite{Bravermansurvey}.
3. We prove a matching upper bound, showing that our separation result is tight.
|
We introduce TransformerFusion, a transformer-based 3D scene reconstruction
approach. From an input monocular RGB video, the video frames are processed by
a transformer network that fuses the observations into a volumetric feature
grid representing the scene; this feature grid is then decoded into an implicit
3D scene representation. Key to our approach is the transformer architecture
that enables the network to learn to attend to the most relevant image frames
for each 3D location in the scene, supervised only by the scene reconstruction
task. Features are fused in a coarse-to-fine fashion, storing fine-level
features only where needed, requiring lower memory storage and enabling fusion
at interactive rates. The feature grid is then decoded to a higher-resolution
scene reconstruction, using an MLP-based surface occupancy prediction from
interpolated coarse-to-fine 3D features. Our approach results in an accurate
surface reconstruction, outperforming state-of-the-art multi-view stereo depth
estimation methods, fully-convolutional 3D reconstruction approaches, and
approaches using LSTM- or GRU-based recurrent networks for video sequence
fusion.
|
We study the effects of additional cooling due to the emission of a dark
matter candidate particle, the dark photon, on the final phases of the
evolution of a $15\,M_\odot$ star and resulting modifications of the
pre-supernova neutrino signal. For a substantial portion of the dark photon
parameter space the extra cooling speeds up Si burning, which results in a
reduced number of neutrinos emitted during the last day before core collapse.
This reduction can be described by a systematic acceleration of the relevant
timescales and the results can be estimated semi-analytically in good agreement
with the numerical simulations. Outside the semi-analytic regime we find more
complicated effects. In a narrow parameter range, low-mass dark photons lead to
an increase of the number of emitted neutrinos because of additional shell
burning episodes that delay core collapse. Furthermore, relatively strong
couplings produce a thermonuclear runaway during O burning, which could result
in a complete disruption of the star but requires more detailed simulations to
determine the outcome. Our results show that pre-supernova neutrino signals are
a potential probe of the dark photon parameter space.
|
Unsupervised contrastive learning achieves great success in learning image
representations with CNNs. Unlike most recent methods that focus on improving the
accuracy of image classification, we present a novel contrastive learning
approach, named DetCo, which fully explores the contrasts between global image
and local image patches to learn discriminative representations for object
detection. DetCo has several appealing benefits. (1) It is carefully designed
by investigating the weaknesses of current self-supervised methods, which
discard important representations for object detection. (2) DetCo builds
hierarchical intermediate contrastive losses between global image and local
patches to improve object detection, while maintaining global representations
for image recognition. Theoretical analysis shows that the local patches
actually remove the contextual information of an image, improving the lower
bound of mutual information for better contrastive learning. (3) Extensive
experiments on PASCAL VOC, COCO and Cityscapes demonstrate that DetCo not only
outperforms state-of-the-art methods on object detection, but also on
segmentation, pose estimation, and 3D shape prediction, while it is still
competitive on image classification. For example, on PASCAL VOC, DetCo-100ep
achieves 57.4 mAP, which is on par with the result of MoCov2-800ep. Moreover,
DetCo consistently outperforms the supervised method by 1.6/1.2/1.0 AP on Mask
RCNN-C4/FPN/RetinaNet with 1x schedule. Code will be released at
\href{https://github.com/xieenze/DetCo}{\color{blue}{\tt
github.com/xieenze/DetCo}}.
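As a hedged sketch of the global-to-local contrast underlying this design (the hierarchical losses and exact formulation in DetCo are not reproduced here), a multi-positive InfoNCE term between a global image embedding and its local patch embeddings could look like the following; the encoders, temperature, and patch count are assumptions.

```python
# Sketch: each image's global view is pulled toward its own patches and
# pushed away from the patches of other images in the batch.
import torch
import torch.nn.functional as F

def global_local_infonce(g, p, tau=0.2):
    """g: (B, D) global embeddings; p: (B, N, D) local patch embeddings."""
    B, N, D = p.shape
    g = F.normalize(g, dim=-1)
    p = F.normalize(p, dim=-1).reshape(B * N, D)
    logits = g @ p.t() / tau                       # (B, B*N) similarities
    # positives: the N patches belonging to the same image
    labels = torch.arange(B).repeat_interleave(N)
    pos_mask = labels.unsqueeze(0) == torch.arange(B).unsqueeze(1)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -(log_prob[pos_mask].reshape(B, N).mean(dim=1)).mean()
```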
|
Graphdiyne nanomaterials are low density and highly porous carbon-based
two-dimensional (2D) materials, with outstanding application prospects for
electronic and energy storage/conversion systems. In two recent scientific
advances, large-area pyrenyl graphdiyne (Pyr-GDY) and pyrazinoquinoxaline
graphdiyne (PQ-GDY) nanosheets have been successfully fabricated. As the first
theoretical study, herein we conduct first-principles simulations to explore
the stability and electronic, optical and mechanical properties of Pyr-GDY,
N-Pyr-GDY, PQ-GDY and N-Pyr-GYN monolayers. We particularly examine the
intrinsic properties of PQ-graphyne (PQ-GYN) and Pyr-graphyne (Pyr-GYN)
monolayers. The results confirm the desirable dynamical and thermal stability
and high mechanical strength of these novel nanosheets, owing to their strong
covalent networks. We show that Pyr-based lattices exhibit high
stretchability. Analysis of the optical results also confirms the suitability
of the Pyr- and PQ-GDY/GYN nanosheets for absorbing light in the near-IR,
visible, and UV ranges. Notably, PQ-GDY is found to exhibit a distorted Dirac
cone and highly anisotropic Fermi velocities. First-principles results reveal
ultrahigh carrier mobilities along the considered nanoporous nanomembranes;
in particular, the PQ-GYN monolayer is predicted to outperform phosphorene and
MoS$_2$. These results introduce pyrenyl and pyrazinoquinoxaline
graphdiyne/graphyne as promising candidates for the design of novel
nanoelectronics and energy storage/conversion systems.
|
Soft materials such as rubber and hydrogels are commonly used in industry for
their excellent hyperelastic behaviour. There are various types of constitutive
models for soft materials, and phenomenological models are very popular for
finite element method (FEM) simulations. However, it is not easy to construct a
model that can precisely predict the complex behaviours of soft materials. In
this paper, we suggest that the strain energy functions should be expressed as
functions of ordered principal stretches, which have more flexible expressions
and are capable of matching various experimental curves. Moreover, the feasible
region is small, and simple experiments, such as uniaxial tension/compression
and hydrostatic tests, are on its boundaries. Therefore, strain energy
functions can be easily constructed by the interpolation of experimental
curves, which, unlike most existing phenomenological models, does not require
an initial guess of the form of the strain energy function. The proposed strain
energy functions are perfectly consistent with the available experimental
curves for interpolation. It is found that for incompressible materials, the
function via an interpolation from two experimental curves can already predict
other experimental curves reasonably well. To further improve the accuracy,
additional experiments can be used in the interpolation.
|
In this paper, constant false alarm rate (CFAR) detector-based approaches are
proposed for interference mitigation in frequency-modulated continuous-wave
(FMCW) radars. The proposed methods exploit the fact that, after the dechirping
and low-pass filtering operations, the targets' beat signals of FMCW radars are
composed of complex sinusoidal components, while interferences appear as short
chirps within a sweep. The spectra of interferences in the time-frequency
($t$-$f$) domain are detected by employing a 1-D CFAR detector along each
frequency bin and then the detected map is dilated as a mask for interference
suppression. They are applicable to the scenarios in the presence of multiple
interferences. Compared to the existing methods, the proposed methods reduce
the power loss of useful signals and are very computationally efficient. Their
interference mitigation performances are demonstrated through both numerical
simulations and experimental results.
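For illustration, a minimal 1-D cell-averaging CFAR pass over one frequency bin might look as follows; the training/guard window sizes and the false-alarm rate are assumptions, and the subsequent dilation of the detected map is omitted.

```python
# Sketch of a 1-D cell-averaging CFAR sweep along time for one frequency bin
# of the t-f spectrogram.
import numpy as np

def ca_cfar_1d(x, n_train=16, n_guard=2, pfa=1e-3):
    """x: magnitude-squared samples; returns a boolean detection mask."""
    n = len(x)
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)  # CA-CFAR threshold scale
    detected = np.zeros(n, dtype=bool)
    half = n_train // 2
    for i in range(half + n_guard, n - half - n_guard):
        lead = x[i - n_guard - half : i - n_guard]      # training cells before
        lag = x[i + n_guard + 1 : i + n_guard + 1 + half]  # and after the CUT
        noise = (lead.sum() + lag.sum()) / n_train
        detected[i] = x[i] > alpha * noise
    return detected
```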
|
Fully-heavy tetraquark states, i.e. $cc\bar{c}\bar{c}$, $bb\bar{b}\bar{b}$,
$bb\bar{c}\bar{c}$ ($cc\bar{b}\bar{b}$), $cb\bar{c}\bar{c}$,
$cb\bar{b}\bar{b}$, and $cb\bar{c}\bar{b}$, are systematically investigated by
means of a non-relativistic quark model based on lattice-QCD studies of the
two-body $Q\bar{Q}$ interaction, which exhibits a spin-independent Cornell
potential along with a spin-spin term. The four-body problem is solved using
the Gaussian expansion method; additionally, the so-called complex scaling
technique is employed so that bound, resonance, and scattering states can be
treated on the same footing. Moreover, a complete set of four-body
configurations, including meson-meson, diquark-antidiquark, and K-type
configurations, as well as their couplings, are considered for spin-parity
quantum numbers $J^{P(C)}=0^{+(+)}$, $1^{+(\pm)}$, and $2^{+(+)}$ in the
$S$-wave channel. Several narrow resonances, with two-meson strong decay widths
less than 30 MeV, are found in all of the tetraquark systems studied.
Particularly, the fully-charm resonances recently reported by the LHCb
Collaboration, at the energy range between 6.2 and 7.2 GeV in the di-$J/\psi$
invariant spectrum, can be well identified in our calculation. Focusing on the
fully-bottom tetraquark spectrum, resonances with masses between 18.9 and 19.6
GeV are found. For the remaining charm-bottom cases, the masses are obtained
within an energy region from 9.8 GeV to 16.4 GeV. All these predicted resonances
can be further examined in future experiments.
|
Relations among various musical concepts are investigated through a new
concept, the musical icosahedron: a regular icosahedron each of whose
vertices is assigned one of the 12 tones. First, we found that there exist four musical
icosahedra that characterize the topology of the chromatic scale and one of the
whole tone scales, and have the hexagon-icosahedron symmetry (an operation of
raising all the tones of a given scale by two semitones corresponds to a
symmetry transformation of the regular icosahedron): chromatic/whole tone
musical icosahedra. The major triads or the minor triads are set on the golden
triangles of these musical icosahedra. Also, various dualities between musical
concepts are shown by these musical icosahedra: the major triads/scales and the
minor triads/scales, the major/minor triads and the fundamental triads for the
hexatonic major/minor scales, the major/minor scales and the Gregorian modes.
Second, we proposed Pythagorean/whole tone musical icosahedra that characterize
the topology of the Pythagorean chain and one of the whole tone scales, and
have the hexagon-icosahedron symmetry. The Pythagorean chain (chromatic scale)
in the chromatic (Pythagorean)/whole tone musical icosahedron is constructed by
"middle" lines of the regular icosahedron. While some golden triangles
correspond to the major/minor triads in the chromatic/whole tone musical
icosahedra, in the Pythagorean/whole tone musical icosahedra, some golden
gnomons correspond to the minor/major triads. Third, we found four types of
musical icosahedra other than the chromatic/whole tone musical icosahedra and
the Pythagorean/whole tone musical icosahedra that have the hexagon-icosahedron
symmetry. All the major triads and minor triads are represented by the golden
triangles or the golden gnomons on each type. All of these musical icosahedra
lead to generalizations of major/minor triads and scales.
|
Silicon-based tracking detectors have been used in several important
applications, such as in cancer therapy using particle beams, and for the
discovery of new elementary particles at the Large Hadron Collider at CERN.
III-V semiconductor materials are an attractive alternative to silicon for this
application, as they have some superior physical properties. They could meet
the demands for fast timing detectors allowing time-of-flight measurements with
ps resolution while being radiation tolerant and cost-efficient. As a material
with a larger density, higher atomic number Z and much higher electron mobility
than silicon, GaAs exhibits faster signal collection and a larger signal per
{\mu}m of sensor thickness. In this work, we report on the fabrication of
n-in-n GaAs thin-film devices intended to serve as next-generation high-energy
particle tracking detectors. Molecular beam epitaxy (MBE) was used to grow
high-quality GaAs films with doping levels sufficiently low to achieve full
depletion for detectors with an active thickness of 10 {\mu}m. The signal
collection speed of the detector structures was assessed using the transient
current technique (TCT). To elucidate the structural properties of the
detector, Kelvin probe force microscopy (KPFM) was used, which confirmed the
formation of the junction in the detector and revealed residual doping in the
intrinsic layer. Our results suggest that GaAs thin films are suitable
candidates to achieve thin and radiation-tolerant tracking detectors.
|
We propose a new example of the AdS/CFT correspondence between the system of
multiple giant gravitons in AdS${}_5 \times {}$S${}^5$ and the operators with
$O(N_c)$ dimensions in ${\cal N}=4$ super Yang-Mills. We first extend the
mixing of huge operators on the Gauss graph basis in the $su(2)$ sector to all
loops of the 't Hooft coupling, by demanding the commutation of perturbative
Hamiltonians in an effective $U(p)$ theory, where $p$ corresponds to the number
of giant gravitons. The all-loop dispersion relation remains gapless at any
$\lambda$, which suggests that harmonic oscillators of the effective $U(p)$
theory should correspond to the classical motion of the D3-brane that is
continuously connected to non-maximal giant gravitons.
|
The present article is devoted to representations of rational numbers in
terms of sign-variable Cantor expansions. The main attention is given to one of
the discussions given by J. Galambos in [4].
|
Inspired by more detailed modeling of biological neurons, spiking neural
networks (SNNs) have been investigated both as more biologically plausible and
potentially more powerful models of neural computation, and with the aim of
matching the energy efficiency of biological neurons; the performance of such
networks, however, has remained lacking compared to classical artificial neural
networks (ANNs). Here, we demonstrate how a novel surrogate gradient combined
with recurrent networks of tunable and adaptive spiking neurons yields
state-of-the-art performance for SNNs on challenging benchmarks in the time domain, like
speech and gesture recognition. This also exceeds the performance of standard
classical recurrent neural networks (RNNs) and approaches that of the best
modern ANNs. As these SNNs exhibit sparse spiking, we show that they are
theoretically one to three orders of magnitude more computationally efficient
than RNNs with comparable performance. Together, this
positions SNNs as an attractive solution for AI hardware implementations.
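To make the surrogate-gradient idea concrete, here is a minimal PyTorch sketch in which the forward pass is a hard spike threshold while the backward pass substitutes a smooth fast-sigmoid derivative; the specific surrogate function and slope used in the paper are assumptions.

```python
# Sketch of a surrogate-gradient spike function: non-differentiable
# threshold forward, smooth surrogate derivative backward.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, threshold, slope):
        ctx.save_for_backward(v)
        ctx.threshold, ctx.slope = threshold, slope
        return (v >= threshold).float()            # binary spike output

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # d(spike)/dv approximated by the fast-sigmoid derivative
        x = ctx.slope * (v - ctx.threshold)
        surrogate = 1.0 / (1.0 + x.abs()) ** 2
        return grad_output * ctx.slope * surrogate, None, None

# usage inside a recurrent LIF-neuron update (threshold and slope assumed):
# spikes = SurrogateSpike.apply(v_mem, 1.0, 10.0)
```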
|
We will construct a confidence region of parameters for $N$ samples from
Cauchy distributed random variables. Although Cauchy distribution has two
parameters, a location parameter $\mu \in \mathbb{R}$ and a scale parameter
$\sigma > 0$, we will infer them at once by regarding them as a single complex
parameter $\gamma := \mu + i \sigma$. Therefore the region should be a domain
in the complex plane and we will give a simple and concrete formula to give the
region as a disc.
|
Aspect-based sentiment analysis (ABSA) performs a fine-grained analysis
that defines the aspects of a given document or sentence and the sentiments
conveyed regarding each aspect. This level of analysis is the most detailed
version that is capable of exploring the nuanced viewpoints of the reviews. The
bulk of ABSA research focuses on English, with very little work available in
Arabic. Most previous work in Arabic has relied on conventional machine-learning
methods that depend on a set of scarce resources and tools for analyzing and
processing Arabic content, such as lexicons; the scarcity of those resources
presents an additional challenge. In order to address these challenges,
Deep Learning (DL)-based methods are proposed using two models based on Gated
Recurrent Units (GRU) neural networks for ABSA. The first is a DL model that
takes advantage of word and character representations by combining
bidirectional GRU, Convolutional Neural Network (CNN), and Conditional Random
Field (CRF), making up the BGRU-CNN-CRF model, to extract the main opinion
target expressions (OTEs). The second is an interactive attention network based on
bidirectional GRU (IAN-BGRU) to identify sentiment polarity toward extracted
aspects. We evaluated our models using the benchmarked Arabic hotel reviews
dataset. The results indicate that the proposed methods outperform the
baseline research on both tasks, with a 39.7% improvement in F1-score for
opinion target extraction (T2) and a 7.58% improvement in accuracy for
aspect-based sentiment polarity classification (T3), achieving an F1 score of
70.67% for T2 and an accuracy of 83.98% for T3.
|
Contrastive Learning has emerged as a powerful representation learning method
and facilitates various downstream tasks especially when supervised data is
limited. How to construct efficient contrastive samples through data
augmentation is key to its success. Unlike vision tasks, the data augmentation
method for contrastive learning has not been investigated sufficiently in
language tasks. In this paper, we propose a novel approach to construct
contrastive samples for language tasks using text summarization. We use these
samples for supervised contrastive learning to gain better text representations
which greatly benefit text classification tasks with limited annotations. To
further improve the method, we mix up samples from different classes and add an
extra regularization, named Mixsum, in addition to the cross-entropy loss.
Experiments on real-world text classification datasets (Amazon-5, Yelp-5, AG
News, and IMDb) demonstrate the effectiveness of the proposed contrastive
learning framework with summarization-based data augmentation and Mixsum
regularization.
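As a hedged sketch of the mix-up step (the exact Mixsum formulation is not reproduced here), interpolating embeddings from different classes and adding the interpolated loss as a regularizer could look like the following; the embedding source (e.g., generated summaries) and the Beta parameter are assumptions.

```python
# Standard mixup applied to text embeddings, used as a regularizer
# alongside the usual cross-entropy loss.
import torch
import torch.nn.functional as F

def mixup_regularizer(emb, labels, classifier, alpha=0.4):
    """emb: (B, D) text embeddings; labels: (B,) classes;
    classifier: callable mapping (B, D) -> logits."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(emb.size(0))
    mixed = lam * emb + (1.0 - lam) * emb[perm]    # cross-class interpolation
    logits = classifier(mixed)
    return lam * F.cross_entropy(logits, labels) + \
           (1.0 - lam) * F.cross_entropy(logits, labels[perm])
```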
|
The field of explainable AI (XAI) has quickly become a thriving and prolific
community. However, a silent, recurrent and acknowledged issue in this area is
the lack of consensus regarding its terminology. In particular, each new
contribution seems to rely on its own (and often intuitive) version of terms
like "explanation" and "interpretation". Such disarray encumbers the
consolidation of advances in the field towards the fulfillment of scientific
and regulatory demands e.g., when comparing methods or establishing their
compliance with respect to biases and fairness constraints. We propose a
theoretical framework that not only provides concrete definitions for these
terms, but also outlines all the steps necessary to produce explanations and
interpretations. The framework also allows for existing contributions to be
re-contextualized such that their scope can be measured, thus making them
comparable to other methods. We show that this framework is compliant with
desiderata on explanations, on interpretability and on evaluation metrics. We
present a use-case showing how the framework can be used to compare LIME, SHAP
and MDNet, establishing their advantages and shortcomings. Finally, we discuss
relevant trends in XAI as well as recommendations for future work, all from the
standpoint of our framework.
|
Multi Agent Path Finding (MAPF) requires identification of conflict free
paths for agents which could be point-sized or with dimensions. In this paper,
we propose an approach for MAPF for spatially-extended agents. These find
application in real world problems like Convoy Movement Problem, Train
Scheduling etc. Our proposed approach, Decentralised Multi Agent Path Finding
(DeMAPF), handles MAPF as a sequence of path-planning and allocation problems,
which are solved by two sets of agents, Travellers and Routers respectively,
over multiple iterations. The approach being decentralised allows an agent to
solve the problem pertinent to itself, without being aware of other agents in
the same set. This allows the agents to be executed on independent machines,
thereby leading to scalability to handle large sized problems. We prove, by
comparison with other distributed approaches, that the approach leads to a
faster convergence to a conflict-free solution, which may be suboptimal, with a
lower memory requirement.
|
Coupled 3D-1D problems arise in many practical applications, in an attempt to
reduce the computational burden in simulations where cylindrical inclusions
with a small section are embedded in a much larger domain. Nonetheless the
resolution of such problems can be non-trivial, from both a mathematical and a
geometrical standpoint. Indeed, 3D-1D coupling requires operating in
non-standard function spaces, and simulation geometries can be complex due to
the presence of multiple intersecting domains. Recently, a PDE-constrained
optimization based formulation has been proposed for such problems, providing a
well-posed mathematical formulation and allowing for the use of non-conforming
meshes for the discrete problem. Here an unconstrained optimization formulation
of the problem is derived and an efficient gradient based solver is proposed
for such formulation. Some numerical tests on quite complex configurations are
discussed to show the viability of the method.
|
In this paper we examine necessary conditions for an inhomogeneity to be
non-scattering, or equivalently, by negation, sufficient conditions for it to
be scattering. These conditions are formulated in terms of the regularity of
the boundary of the inhomogeneity. We examine broad classes of incident waves
in both two and three dimensions. Our analysis is greatly influenced by the
analysis carried out by Williams [28] in order to establish that a domain,
which does not possess the Pompeiu Property, has a real analytic boundary. That
analysis, as well as ours, relies crucially on classical free boundary
regularity results due to Kinderlehrer and Nirenberg [18], and Caffarelli [6].
|
Stemming from de Finetti's work on finitely additive coherent probabilities,
the paradigm of coherence has been applied to many uncertainty calculi in order
to remove structural restrictions on the domain of the assessment. Three
possible approaches to coherence are available: coherence as a consistency
notion, coherence as a fair betting scheme, and coherence in terms of a penalty
criterion. Due to its intimate connection with (finitely additive) probability
theory, Dempster-Shafer theory allows notions of coherence in all the forms
recalled above, presenting evident similarities with probability theory. In
this chapter we present a systematic study of such coherence notions showing
their equivalence.
|
The simulation of chemical kinetics involving multiple scales constitutes a
modeling challenge (from ordinary differential equations to Markov chains) and a
computational challenge (multiple scales, large dynamical systems, time step
restrictions). In this paper we propose a new discrete stochastic simulation
algorithm: the postprocessed second kind stabilized orthogonal $\tau$-leap
Runge-Kutta method (PSK-$\tau$-ROCK). In the context of chemical kinetics this
method can be seen as a stabilization of Gillespie's explicit $\tau$-leap
combined with a postprocessor. The stabilized procedure allows one to simulate
problems with multiple scales (stiffness), while the postprocessing procedure
allows one to approximate the invariant measure (e.g., mean and variance) of ergodic
stochastic dynamical systems. We prove stability and accuracy of the
PSK-$\tau$-ROCK. Numerical experiments illustrate the high reliability and
efficiency of the scheme when compared to other $\tau$-leap methods.
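For orientation, the explicit $\tau$-leap step that PSK-$\tau$-ROCK stabilizes and postprocesses can be sketched as follows; the toy reaction system and rates are illustrative, not from the paper.

```python
# Gillespie's standard explicit tau-leap: each reaction channel fires a
# Poisson-distributed number of times over the leap interval.
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_step(x, tau, stoich, propensities):
    """x: species counts; stoich: (n_react, n_species) update vectors;
    propensities: callable x -> rates a_j(x)."""
    a = propensities(x)
    k = rng.poisson(a * tau)          # firings of each channel in (t, t+tau]
    return np.maximum(x + k @ stoich, 0)   # crude guard against negatives

# Toy system: A <-> B with rates k1*A and k2*B
stoich = np.array([[-1, 1], [1, -1]])
props = lambda x: np.array([2.0 * x[0], 1.0 * x[1]])
x = np.array([100, 0])
for _ in range(100):
    x = tau_leap_step(x, 0.01, stoich, props)
```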
|
Mesoporous bioactive glasses (MBGs) in the system SiO$_2$-CaO-P$_2$O$_5$-Ga$_2$O$_3$ have
been synthesized by the evaporation induced self-assembly method and subsequent
impregnation with Ga cations. Two different compositions have been prepared and
the local environment of Ga(III) has been characterized using $^{29}$Si,
$^{71}$Ga and $^{31}$P NMR analysis, demonstrating that Ga(III) is efficiently
incorporated both as a network former (GaO$_4$ units) and as a network modifier
(GaO$_6$ units). In vitro
bioactivity tests evidenced that Ga-containing MBGs retain their capability for
nucleation and growth of an apatite-like layer in contact with a simulated body
fluid with ion concentrations nearly equal to those of human blood plasma.
Finally, in vitro cell culture tests evidenced that Ga incorporation results in
a selective effect on osteoblasts and osteoclasts. Indeed, the presence of this
element enhances the early differentiation towards osteoblast phenotype while
disturbing osteoclastogenesis. Considering these results, Ga-doped MBGs might
be proposed as bone substitutes, especially in osteoporosis scenarios.
|
System optimum (SO) routing, wherein the total travel time of all users is
minimized, is a holy grail for transportation authorities. However, SO routing
may discriminate against users who incur much larger travel times than others
to achieve high system efficiency, i.e., low total travel times. To address the
inherent unfairness of SO routing, we study the ${\beta}$-fair SO problem whose
goal is to minimize the total travel time while guaranteeing a ${\beta\geq 1}$
level of unfairness, which specifies the maximum possible ratio between the
travel times of different users with shared origins and destinations.
To obtain feasible solutions to the ${\beta}$-fair SO problem while achieving
high system efficiency, we develop a new convex program, the Interpolated
Traffic Assignment Problem (I-TAP), which interpolates between a
fairness-promoting and an efficiency-promoting traffic-assignment objective. We
evaluate the efficacy of I-TAP through theoretical bounds on the total system
travel time and level of unfairness in terms of its interpolation parameter, as
well as present a numerical comparison between I-TAP and a state-of-the-art
algorithm on a range of transportation networks. The numerical results indicate
that our approach is faster by several orders of magnitude as compared to the
benchmark algorithm, while achieving higher system efficiency for all desirable
levels of unfairness. We further leverage the structure of I-TAP to develop two
pricing mechanisms to collectively enforce the I-TAP solution in the presence
of selfish homogeneous and heterogeneous users, respectively, that
independently choose routes to minimize their own travel costs. We mention that
this is the first study of pricing in the context of fair routing for general
road networks (as opposed to, e.g., parallel road networks).
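As a hedged sketch of the interpolation idea (the exact parameterization of I-TAP may differ from this form), one can combine the fairness-promoting Beckmann user-equilibrium potential with the efficiency-promoting system-optimum cost:
\[
\min_{x \in \mathcal{X}} \;\; (1-\lambda)\sum_{e}\int_{0}^{x_e} t_e(s)\,ds \;+\; \lambda\sum_{e} x_e\, t_e(x_e), \qquad \lambda \in [0,1],
\]
where $x_e$ is the flow and $t_e(\cdot)$ the latency function on edge $e$, $\mathcal{X}$ is the set of feasible flows, and the interpolation parameter $\lambda$ trades the equilibrium (equal-travel-time, hence fair) term against the total-travel-time (system-optimum) term.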
|
Food texture is a complex property; various sensory attributes such as
perceived crispiness and wetness have been identified as ways to quantify it.
Objective and automatic recognition of these attributes has applications in
multiple fields, including health sciences and food engineering. In this work
we use an in-ear microphone, commonly used for chewing detection, and propose
algorithms for recognizing three food-texture attributes, specifically
crispiness, wetness (moisture), and chewiness. We use binary SVMs, one for each
attribute, and propose two algorithms: one that recognizes each texture
attribute at the chew level and one at the chewing-bout level. We evaluate the
proposed algorithms using leave-one-subject-out cross-validation on a dataset
with 9 subjects. We also evaluate them using leave-one-food-type-out
cross-validation, in order to examine the generalization of our approach to
new, unknown food types. Our approach performs very well in recognizing
crispiness (0.95 weighted accuracy on new subjects and 0.93 on new food types)
and demonstrates promising results for objective and automatic recognition of
wetness and chewiness.
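A minimal sketch of the per-attribute evaluation, assuming chew-level feature vectors have already been extracted from the in-ear audio; the RBF kernel and the use of balanced accuracy as a stand-in for the paper's weighted accuracy are assumptions.

```python
# One binary SVM per texture attribute, scored with
# leave-one-subject-out cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import balanced_accuracy_score

def evaluate_attribute(X, y, subjects):
    """X: (n_chews, n_features); y: binary attribute labels (e.g., crispy
    or not); subjects: subject id per chew, used as the CV grouping."""
    scores = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = SVC(kernel="rbf", class_weight="balanced").fit(X[train], y[train])
        scores.append(balanced_accuracy_score(y[test], clf.predict(X[test])))
    return float(np.mean(scores))
```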
|
Streaks in the buffer layer of wall-bounded turbulence are tracked in time to
study their life-cycle. Spatially and temporally resolved direct numerical
simulation data is used to analyze the strong wall-parallel movements
conditioned to low-speed streamwise flow. The analysis of the streaks shows
that there is a clear distinction between wall-attached and detached streaks,
and that the former can be further categorized into streaks that are contained
in the buffer layer and the ones that reach the outer region. The results
reveal that streaks are born in the buffer layer, coalescing with each other to
create larger streaks that are still attached to the wall. Once the streak
becomes large enough, it starts to meander due to its large
streamwise-to-wall-normal aspect ratio and the consequent elongation in the
streamwise direction, which make it more difficult for the streak to remain
oriented strictly in the streamwise direction. While the continuous interaction
of the streaks allows the super-structure to span extremely long temporal and
length scales, individual streak components are relatively small and
short-lived. Tall-attached streaks eventually split into wall-attached and
wall-detached components. These wall-detached streaks have a strong wall-normal
velocity away from the wall, similar to ejections or bursts observed in the
literature. Conditionally averaging the flow fields around these split events
shows that the detached streak not only has a larger wall-normal velocity than
its wall-attached counterpart, but also a larger (less negative) streamwise
velocity, similar to the velocity field at the tip of a vortex cluster.
|
A graph $G$ is $H$-saturated if it contains no $H$ as a subgraph, but does
contain $H$ after the addition of any edge in the complement of $G$. The
saturation number, $sat (n, H)$, is the minimum number of edges of a graph in
the set of all $H$-saturated graphs with order $n$. In this paper, we determine
the saturation number $sat (n, P_6 + tP_2)$ for $n \geq 10t/3 + 10$ and
characterize the extremal graphs for $n >10t/3 + 20$.
|
Quantum theories of gravity predict interesting phenomenological features
such as a minimum measurable length and maximum momentum. We use the
Generalized Uncertainty Principle (GUP), which is an extension of the standard
Heisenberg Uncertainty Principle motivated by Quantum Gravity, to model the
above features. In particular, we use a GUP modelling a maximum momentum to
establish a correspondence between the GUP-modified dynamics of a massless
spin-2 field and quadratic (referred to as Stelle) gravity. In other words,
Stelle gravity can be regarded as the classical manifestation of a maximum
momentum and the related GUP. We explore the applications of Stelle gravity to
cosmology and specifically show that Stelle gravity applied to a homogeneous
and isotropic background leads to inflation with an exit. Using the above, we
obtain strong bounds on the GUP parameter from CMB observations. Unlike
previous works, which fixed only upper bounds for GUP parameters, we obtain
both \emph{lower and upper bounds} on the GUP parameter.
|
The acceptance of Internet of Things (IoT) applications and services has seen
an enormous rise of interest in IoT. Organizations have begun to create various
IoT based gadgets ranging from small personal devices such as a smart watch to
a whole network of smart grid, smart mining, smart manufacturing, and
autonomous driverless vehicles. The overwhelming number and ubiquitous
presence of these devices have attracted potential hackers for cyber-attacks and data theft.
Security is considered as one of the prominent challenges in IoT. The key scope
of this research work is to propose an innovative model using machine learning
algorithm to detect and mitigate botnet-based distributed denial of service
(DDoS) attack in IoT network. Our proposed model tackles the security issue
concerning the threats from bots. Different machine learning algorithms such as
K-Nearest Neighbour (KNN), the Naive Bayes model, and the Multi-layer
Perceptron Artificial Neural Network (MLP ANN) were used to develop models
trained on the BoT-IoT dataset. The best algorithm was selected by a reference
point based on accuracy percentage and area under the receiver operating
characteristics curve (ROC AUC) score. Feature engineering and Synthetic
minority oversampling technique (SMOTE) were combined with machine learning
algorithms (MLAs). The performance of the three algorithms was compared on
both the class-imbalanced and the class-balanced datasets.
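A sketch of the comparison pipeline under stated assumptions (feature engineering done beforehand; hyperparameters illustrative), with SMOTE applied only inside the training folds via an imbalanced-learn pipeline:

```python
# Compare KNN, Naive Bayes, and MLP with SMOTE oversampling, scored by
# accuracy and ROC AUC under cross-validation.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_validate

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NaiveBayes": GaussianNB(),
    "MLP-ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300),
}

def compare(X, y):
    results = {}
    for name, clf in models.items():
        # SMOTE inside the pipeline so it only touches training folds
        pipe = Pipeline([("smote", SMOTE()), ("clf", clf)])
        cv = cross_validate(pipe, X, y, cv=5, scoring=["accuracy", "roc_auc"])
        results[name] = (cv["test_accuracy"].mean(), cv["test_roc_auc"].mean())
    return results
```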
|
Research on overlapped and discontinuous named entity recognition (NER) has
received increasing attention. The majority of previous work focuses on either
overlapped or discontinuous entities. In this paper, we propose a novel
span-based model that can recognize both overlapped and discontinuous entities
jointly. The model includes two major steps. First, entity fragments are
recognized by traversing over all possible text spans, thus, overlapped
entities can be recognized. Second, we perform relation classification to judge
whether a given pair of entity fragments to be overlapping or succession. In
this way, we can recognize not only discontinuous entities, and meanwhile
doubly check the overlapped entities. As a whole, our model can be regarded as
a relation extraction paradigm essentially. Experimental results on multiple
benchmark datasets (i.e., CLEF, GENIA and ACE05) show that our model is highly
competitive for overlapped and discontinuous NER.
|
The idea of transfer in reinforcement learning (TRL) is intriguing: being
able to transfer knowledge from one problem to another without learning
everything from scratch. This promises faster learning and the ability to learn
more complex methods. To gain an insight into the field and to detect emerging
trends, we performed a database search. We note a surprisingly late adoption of
deep learning that starts in 2018. The introduction of deep learning has not
yet solved the greatest challenge of TRL: generalization. Transfer between
different domains works well when domains have strong similarities (e.g.
MountainCar to Cartpole), and most TRL publications focus on different tasks
within the same domain that have few differences. Most TRL applications we
encountered compare their improvements against self-defined baselines, and the
field is still missing unified benchmarks. We consider this to be a
disappointing situation. For the future, we note that: (1) A clear measure of
task similarity is needed. (2) Generalization needs to improve. Promising
approaches merge deep learning with planning via MCTS or introduce memory
through LSTMs. (3) The lack of benchmarking tools will be remedied to enable
meaningful comparison and measure progress. Already Alchemy and Meta-World are
emerging as interesting benchmark suites. We note that another development, the
increase in procedural content generation (PCG), can improve both benchmarking
and generalization in TRL.
|
Semantic parsing, as an important approach to question answering over
knowledge bases (KBQA), transforms a question into the complete query graph for
further generating the correct logical query. Existing semantic parsing
approaches mainly focus on relation matching, paying less attention to the
underlying internal structure of questions (e.g., the dependencies and
relations between all entities in a question) to select the query graph. In
this paper, we present a relational graph convolutional network (RGCN)-based
model gRGCN for semantic parsing in KBQA. gRGCN extracts the global semantics
of questions and their corresponding query graphs, including structure
semantics via RGCN and relational semantics (label representation of relations
between entities) via a hierarchical relation attention mechanism. Experiments
evaluated on benchmarks show that our model outperforms off-the-shelf models.
|
We prove Engstr\"{o}m's conjecture that the independence complex of graphs
with no induced cycle of length divisible by $3$ is either contractible or
homotopy equivalent to a sphere. Our result strengthens a result by Zhang and
Wu, verifying a conjecture of Kalai and Meshulam which states that the total
Betti number of the independence complex of such a graph is at most $1$. A
weaker conjecture was proved earlier by Chudnovsky, Scott, Seymour, and Spirkl,
who showed that in such a graph, the number of independent sets of even size
minus the number of independent sets of odd size has values $0$, $1$, or $-1$.
|
We predict that a photon condensate inside a dye-filled microcavity forms
long-lived spatial structures that resemble vortices when incoherently excited
by a focused pump orbiting around the cavity axis. The finely structured
density of the condensates has a discrete rotational symmetry that is
controlled by the orbital frequency of the pump spot, and the condensate is
phase-coherent over its full spatial extent despite the absence of any
effective photon-photon interactions.
|
Volumetric modulated arc therapy planning is a challenging problem in
high-dimensional, non-convex optimization. Traditionally, heuristics such as
fluence-map-optimization-informed segment initialization use locally optimal
solutions to begin the search of the full arc therapy plan space from a
reasonable starting point. These routines facilitate arc therapy optimization
such that clinically satisfactory radiation treatment plans can be created in
about 10 minutes. However, current optimization algorithms favor solutions near
their initialization point and are slower than necessary due to plan
overparameterization. In this work, arc therapy overparameterization is
addressed by reducing the effective dimension of treatment plans with
unsupervised deep learning. An optimization engine is then built based on
low-dimensional arc representations which facilitates faster planning times.
|
Time-delay interferometry (TDI) is a post-processing technique used to reduce
laser noise in heterodyne interferometric measurements with unequal armlengths,
a situation characteristic of space gravitational detectors such as Laser
Interferometer Space Antenna (LISA). This technique consists in properly
time-shifting and linearly combining the interferometric measurements in order
to reduce the laser noise by several orders of magnitude and to detect
gravitational waves. In this communication, we show that the Doppler shift due
to the time evolution of the armlengths leads to an unacceptably large residual
noise when using interferometric measurements expressed in units of frequency
and standard expressions of the TDI variables. We also present a technique to
mitigate this effect by including a scaling of the interferometric measurements
in addition to the usual time-shifting operation when constructing the TDI
variables. We demonstrate analytically and using numerical simulations that
this technique allows one to recover standard laser noise suppression which is
necessary to measure gravitational waves.
|
3D object detection is a core component of automated driving systems.
State-of-the-art methods fuse RGB imagery and LiDAR point cloud data
frame-by-frame for 3D bounding box regression. However, frame-by-frame 3D
object detection suffers from noise, field-of-view obstruction, and sparsity.
We propose a novel Temporal Fusion Module (TFM) to use information from
previous time-steps to mitigate these problems. First, a state-of-the-art
frustum network extracts point cloud features from raw RGB and LiDAR point
cloud data frame-by-frame. Then, our TFM module fuses these features with a
recurrent neural network. As a result, 3D object detection becomes robust
against single frame failures and transient occlusions. Experiments on the
KITTI object tracking dataset show the efficiency of the proposed TFM, where we
obtain ~6%, ~4%, and ~6% improvements on Car, Pedestrian, and Cyclist classes,
respectively, compared to frame-by-frame baselines. Furthermore, ablation
studies reinforce that the subject of improvement is temporal fusion and show
the effects of different placements of TFM in the object detection pipeline.
Our code is open-source and available at
https://github.com/emecercelik/Temp-Frustum-Net.git.
|
In this work we provide a framework that connects the co-rotating and
counter-rotating $f$-mode frequencies of rotating neutron stars with their stellar
structure. The accurate computation of these modes for realistic equations of
state has been presented recently and they are here used as input for a
Bayesian analysis of the inverse problem. This allows us to quantitatively
reconstruct basic neutron star parameters, such as the mass, radius, rotation
rate or universal scaling parameters. We find that future observations of both
$f$-mode frequencies, in combination with a Bayesian analysis, would provide a
promising direction to solve the inverse stellar problem. We provide two
complementary approaches, one that is equation of state dependent and one that
only uses universal scaling relations. We discuss advantages and disadvantages
of each approach, such as possible bias and robustness. The focus is on
astrophysically motivated scenarios in which informed prior information on the
neutron star mass or rotation rate can be provided, and we study how it impacts
the results.
|
The discovery of gravitational waves, high-energy neutrinos or the
very-high-energy counterpart of gamma-ray bursts has revolutionized the
high-energy and transient astrophysics community. The development of new
instruments and analysis techniques will allow the discovery and/or follow-up
of new transient sources. We describe the prospects for the Cherenkov Telescope
Array (CTA), the next-generation ground-based gamma-ray observatory, for
multi-messenger and transient astrophysics in the decade ahead. CTA will
explore the most extreme environments via very-high-energy observations of
compact objects, stellar collapse events, mergers and cosmic-ray accelerators.
|
Recommender systems rely heavily on increasing computation resources to
improve their business goals. By deploying computation-intensive models and
algorithms, these systems are able to infer user interests and display
certain ads or commodities from the candidate set to maximize their business
goals. However, such systems are facing two challenges in achieving their
goals. On the one hand, facing massive online requests, computation-intensive
models and algorithms are pushing their computation resources to the limit. On
the other hand, the response time of these systems is strictly limited to a
short period, e.g. 300 milliseconds in our real system, which is also being
exhausted by the increasingly complex models and algorithms.
In this paper, we propose the computation resource allocation solution (CRAS)
that maximizes the business goal with limited computation resources and
response time. We comprehensively illustrate the problem and formulate such a
problem as an optimization problem with multiple constraints, which could be
broken down into independent sub-problems. To solve the sub-problems, we
propose the revenue function to facilitate the theoretical analysis, and obtain
the optimal computation resource allocation strategy. To address the
applicability issues, we devise the feedback control system to help our
strategy constantly adapt to the changing online environment. The effectiveness
of our method is verified by extensive experiments based on the real dataset
from Taobao.com. We also deploy our method in the display advertising system of
Alibaba. The online results show that our computation resource allocation
solution achieves significant business goal improvement without any increment
of computation cost, which demonstrates the efficacy of our method in real
industrial practice.
|
We present a novel implementation of classification using the machine
learning / artificial intelligence method called boosted decision trees (BDT)
on field programmable gate arrays (FPGA). The firmware implementation of binary
classification requiring 100 training trees with a maximum depth of 4 using
four input variables gives a latency value of about 10 ns, independent of the
clock speed from 100 to 320 MHz in our setup. The low timing values are
achieved by restructuring the BDT layout and reconfiguring its parameters. The
FPGA resource utilization is also kept low at a range from 0.01% to 0.2% in our
setup. A software package called fwXmachina achieves this implementation. Our
intended user is an expert in custom electronics-based trigger systems in high
energy physics experiments, or anyone who needs decisions at the lowest latency
values for real-time event classification. Two problems from high energy
physics are considered, in the separation of electrons vs. photons and in the
selection of vector boson fusion-produced Higgs bosons vs. the rejection of the
multijet processes.
|
High-throughput computational imaging requires efficient processing
algorithms to retrieve multi-dimensional and multi-scale information. In
computational phase imaging, phase retrieval (PR) is required to reconstruct
both amplitude and phase in complex space from intensity-only measurements. The
existing PR algorithms face a tradeoff among low computational
complexity, robustness to measurement noise, and strong generalization across
different modalities. In this work, we report an efficient large-scale phase
retrieval technique termed LPR. It extends the plug-and-play
generalized-alternating-projection framework from real space to nonlinear
complex space. The alternating projection solver and enhancing neural network
are respectively derived to tackle the measurement formation and statistical
prior regularization. This framework compensates for the shortcomings of each
operator, so as to realize high-fidelity phase retrieval with low computational
complexity and strong generalization. We applied the technique for a series of
computational phase imaging modalities including coherent diffraction imaging,
coded diffraction pattern imaging, and Fourier ptychographic microscopy.
Extensive simulations and experiments validate that the technique outperforms
the existing PR algorithms with as much as 17dB enhancement on signal-to-noise
ratio, and more than an order of magnitude higher running efficiency.
Besides, we demonstrate for the first time ultra-large-scale phase retrieval at
the 8K level (7680$\times$4320 pixels) in minute-level time.
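Schematically, the alternating loop behind such plug-and-play phase retrieval can be sketched as below; the forward/inverse operators `A`/`A_inv` and the `denoise` prior are placeholders, and the update rule is a simplified Gerchberg-Saxton-style assumption rather than the LPR algorithm itself.

```python
# Alternate between a measurement-consistency projection (impose the
# measured modulus, keep the phase) and a statistical prior step.
import numpy as np

def pnp_phase_retrieval(y, A, A_inv, denoise, n_iter=100):
    """y: measured intensities; A/A_inv: forward and (approximate) inverse
    linear operators of the imaging modality; denoise: prior operator."""
    x = A_inv(np.sqrt(y))                      # crude initialization
    for _ in range(n_iter):
        z = A(x)
        z = np.sqrt(y) * np.exp(1j * np.angle(z))  # projection onto measurements
        x = A_inv(z)
        x = denoise(x)                         # plug-and-play regularization
    return x
```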
|
While quantum measurement theories are built around density matrices and
observables, the laws of thermodynamics are based on processes such as are used
in heat engines and refrigerators. The study of quantum thermodynamics fuses
these two distinct paradigms. In this article, we highlight the usage of
quantum process matrices as a unified language for describing thermodynamic
processes in the quantum regime. We experimentally demonstrate this in the
context of a quantum Maxwell's demon, where two major quantities are commonly
investigated: the average work extraction $\langle W \rangle$ and the efficacy
$\gamma$, which measures how efficiently the feedback operation uses the
obtained information. Using the tool of quantum process matrices, we develop
the optimal feedback protocols for these two quantities and experimentally
investigate them in a superconducting circuit QED setup.
|
Computing power, big data, and advancement of algorithms have led to a
renewed interest in artificial intelligence (AI), especially in deep learning
(DL). The success of DL largely lies in data representation, because different
representations can capture, to a degree, different explanatory factors of
variation behind the data. In the last few years, the most successful story in
DL has been supervised learning. However, applying supervised learning is
challenging because data labels are expensive to obtain, noisy, or only
partially available. Considering that we human beings learn largely in an
unsupervised way, self-supervised learning methods have garnered a lot of
attention recently. A dominant force in self-supervised learning is the
autoencoder, which has multiple uses (e.g., data representation, anomaly
detection, denoising). This
research explored the application of an autoencoder to learn effective data
representation of helicopter flight track data, and then to support helicopter
track identification. Our testing results are promising. For example, at
Phoenix Deer Valley (DVT) airport, where 70% of recorded flight tracks have
missing aircraft types, the autoencoder can help to identify twenty-two times
more helicopters than otherwise detectable using rule-based methods; for Grand
Canyon West Airport (1G4) airport, the autoencoder can identify thirteen times
more helicopters than a current rule-based approach. Our approach can also
identify mislabeled aircraft types in the flight track data and find true types
for records with pseudo aircraft type labels such as HELO. With improved
labelling, studies using these data sets can produce more reliable results.
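A minimal sketch of the representation-learning step, assuming flattened, normalized track segments (the dimensions, architecture, and synthetic data below are illustrative, not the paper's model):

```python
import torch
import torch.nn as nn

# Hedged sketch: an autoencoder compresses a flattened flight-track segment
# into a low-dimensional code; the bottleneck representation can then feed a
# downstream helicopter-vs-other identification step.

track_dim, code_dim = 120, 8          # hypothetical: 60 (x, y) points per segment

model = nn.Sequential(
    nn.Linear(track_dim, 64), nn.ReLU(),
    nn.Linear(64, code_dim),          # learned representation (bottleneck)
    nn.Linear(code_dim, 64), nn.ReLU(),
    nn.Linear(64, track_dim),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

tracks = torch.randn(256, track_dim)  # stand-in for normalized track data
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(tracks), tracks)   # reconstruction objective
    loss.backward()
    opt.step()
print(loss.item())
```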
|
The convergence rate in Wasserstein distance is estimated for the empirical
measures of symmetric semilinear SPDEs. Unlike in the finite-dimensional case,
where the convergence is of algebraic order in time, in the present situation
the convergence is of logarithmic order, with a power given by the eigenvalues
of the underlying linear operator.
|
Background: A global description of the ground-state properties of nuclei
across a wide mass range in a unified manner is desirable, not only for
understanding exotic nuclei but also for providing nuclear data for
applications. Purpose: We demonstrate that the KIDS functional describes the
ground states appropriately with respect to existing data, and make
predictions toward a possible application of the functional to all nuclei,
taking the Nd isotopes as examples. Method: The Kohn-Sham-Bogoliubov equation
is solved for the Nd isotopes with neutron numbers ranging from 60 to 160,
employing KIDS functionals constructed to satisfy the neutron-matter equation
of state (or neutron-star observations) together with selected nuclear data.
Results: Accounting for nuclear deformation improves the description of the
binding energies and radii. We find that the discrepancy from the experimental
data is more significant for neutron-rich and neutron-deficient isotopes, and
that it can be made isotope independent by changing the slope parameter of the
symmetry energy. Conclusions: The KIDS functional is applied to mid-shell
nuclei for the first time. The onset and evolution of deformation are
described well for the Nd isotopes. The KIDS functional is well suited to a
global fit aimed at a better description of nuclear properties across the
nuclear chart.
|
Weak supervision has shown promising results in many natural language
processing tasks, such as Named Entity Recognition (NER). Existing work mainly
focuses on learning deep NER models only with weak supervision, i.e., without
any human annotation, and shows that by merely using weakly labeled data, one
can achieve good performance, though still underperforms fully supervised NER
with manually/strongly labeled data. In this paper, we consider a more
practical scenario, where we have both a small amount of strongly labeled data
and a large amount of weakly labeled data. Unfortunately, we observe that
weakly labeled data does not necessarily improve, and can even deteriorate,
the model performance (due to the extensive noise in the weak labels) when we
train deep NER models over a simple or weighted combination of the strongly
labeled and
weakly labeled data. To address this issue, we propose a new multi-stage
computational framework -- NEEDLE with three essential ingredients: (1) weak
label completion, (2) noise-aware loss function, and (3) final fine-tuning over
the strongly labeled data. Through experiments on E-commerce query NER and
Biomedical NER, we demonstrate that NEEDLE can effectively suppress the noise
of the weak labels and outperforms existing methods. In particular, we achieve
new SOTA F1-scores on 3 Biomedical NER datasets: BC5CDR-chem 93.74,
BC5CDR-disease 90.69, NCBI-disease 92.28.
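While NEEDLE's published loss is more elaborate, one common way to realize noise-aware weighting, shown here purely as an illustration and not as the paper's exact formulation, is to scale the token-level cross-entropy on completed weak labels by the model's own confidence in each label:

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a noise-aware loss in the spirit of NEEDLE's stage 2:
# token-level cross-entropy on (completed) weak labels, down-weighted by the
# model's confidence that each weak label is correct. Shapes are illustrative.

logits = torch.randn(32, 9, requires_grad=True)   # 32 tokens, 9 NER tags
weak_labels = torch.randint(0, 9, (32,))          # completed weak labels

with torch.no_grad():
    probs = logits.softmax(dim=-1)
    confidence = probs[torch.arange(32), weak_labels]   # P(weak label | model)

per_token = F.cross_entropy(logits, weak_labels, reduction="none")
loss = (confidence * per_token).mean()            # noisy tokens contribute less
loss.backward()
print(loss.item())
```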
|
Using a combinatorial description of Stiefel-Whitney classes of closed flat
manifolds with diagonal holonomy representation, we show that no
Hantzsche-Wendt manifold of dimension greater than three admits a spin$^c$
structure.
|
Learning high-dimensional distributions is an important yet challenging
problem in machine learning with applications in various domains. In this
paper, we introduce new techniques to formulate the problem as solving
Fokker-Planck equation in a lower-dimensional latent space, aiming to mitigate
challenges in high-dimensional data space. Our proposed model consists of
latent-distribution morphing, a generator and a parameterized Fokker-Planck
kernel function. One fascinating property of our model is that it can be
trained with arbitrary steps of latent distribution morphing or even without
morphing, which makes it flexible and as efficient as Generative Adversarial
Networks (GANs). Furthermore, this property also makes our latent-distribution
morphing an efficient plug-and-play scheme; thus it can be used to improve
arbitrary GANs and, more interestingly, can effectively correct failure cases
of the GAN models. Extensive experiments illustrate the advantages of our
proposed method over existing models.
|
In this work, we provide an analytical proof of the robustness of topological
entanglement under a model of random local perturbations. We define a notion of
average topological subsystem purity and show that, in the context of quantum
double models, this quantity does detect topological order and is robust under
the action of a random quantum circuit of shallow depth.
|
We propose a minimal generalization of the celebrated Markov-Chain Monte
Carlo algorithm which allows for an arbitrary number of configurations to be
visited at every Monte Carlo step. This is advantageous when a parallel
computing machine is available, or when many biased configurations can be
evaluated at little additional computational cost. As an example of the former
case, we report a significant reduction of the thermalization time for the
paradigmatic Sherrington-Kirkpatrick spin-glass model. For the latter case, we
show that, by leveraging the exponential number of biased configurations
automatically computed by Diagrammatic Monte Carlo, we can speed up
computations in the Fermi-Hubbard model by two orders of magnitude.
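The abstract leaves the acceptance rule unspecified; as a closely related point of reference, the classic multiple-try Metropolis step (Liu, Liang and Wong) realizes the same idea of evaluating several candidate configurations per Monte Carlo step while preserving detailed balance:

```python
import numpy as np

# Hedged sketch: standard multiple-try Metropolis with symmetric Gaussian
# proposals on a 1D standard-normal target. The paper's scheme differs in
# detail but shares the idea of visiting many configurations per step.

rng = np.random.default_rng(2)
log_pi = lambda x: -0.5 * x**2          # target density (unnormalized)

def mtm_step(x, k=8, sigma=1.0):
    ys = x + sigma * rng.normal(size=k)             # k trial proposals
    wy = np.exp(log_pi(ys))
    y = rng.choice(ys, p=wy / wy.sum())             # pick one, prop. to pi
    xs = np.append(y + sigma * rng.normal(size=k - 1), x)  # reference set
    wx = np.exp(log_pi(xs))
    if rng.random() < min(1.0, wy.sum() / wx.sum()):        # MTM acceptance
        return y
    return x

x, samples = 0.0, []
for _ in range(5000):
    x = mtm_step(x)
    samples.append(x)
print(np.mean(samples), np.var(samples))   # should approach 0 and 1
```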
|
It is increasingly important to understand the spatial dynamics of epidemics.
While there are numerous mathematical models of epidemics, there is a scarcity
of physical systems with sufficiently well-controlled parameters to allow
quantitative model testing. It is also challenging to replicate the macro
non-equilibrium effects of complex models in microscopic systems. In this work,
we demonstrate experimentally a physics analog of epidemic spreading using
optically driven non-equilibrium phase transitions in strongly interacting
Rydberg atoms. Using multiple laser beams we can impose any desired spatial
structure. We observe spatially localized phase transitions and their interplay
in different parts of the sample. These phase transitions simulate the outbreak
of an infectious disease in multiple locations, as well as the dynamics towards
herd immunity and endemic state in different regimes. The reported results
indicate that Rydberg systems are versatile enough to model complex
spatial-temporal dynamics.
|
This article describes an algorithm that provides visual odometry estimates
from sequential pairs of RGBD images. The key contribution of this article on
RGBD odometry is that it provides both an odometry estimate and a covariance
for the odometry parameters in real-time via a representative covariance
matrix. Accurate, real-time parameter covariance is essential to effectively
fuse odometry measurements into most navigation systems. To date, this topic
has received little attention in the literature, which limits the impact
existing RGBD odometry approaches can have on localization in these systems.
Covariance
estimates are obtained via a statistical perturbation approach motivated by
real-world models of RGBD sensor measurement noise. We discuss the accuracy of
our RGBD odometry approach with respect to ground truth obtained from a motion
capture system and characterize the suitability of this approach for
estimating the true RGBD odometry parameter uncertainty.
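The general recipe behind such perturbation-based covariance estimates can be sketched on a toy estimator (our illustration with a linear least-squares stand-in, not the article's RGBD pipeline): perturb the measurements with draws from the sensor noise model, re-run the estimator, and take the sample covariance of the results.

```python
import numpy as np

# Hedged sketch of statistical-perturbation covariance estimation on a toy
# linear model: re-estimate parameters under resampled measurement noise and
# form the sample covariance of the estimates.

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 3))               # toy measurement model
theta_true = np.array([0.1, -0.2, 0.05])
z = A @ theta_true                          # noiseless measurements

def estimate(measurements):
    return np.linalg.lstsq(A, measurements, rcond=None)[0]

noise_std = 0.02                            # from the sensor noise model
estimates = np.array([estimate(z + noise_std * rng.normal(size=z.shape))
                      for _ in range(500)])
cov = np.cov(estimates, rowvar=False)       # representative covariance matrix
print(cov)
```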
|
We treat a setting in which two priority wireless service classes are offered
in a given area by a drone small cell (DSC). Specifically, we consider a
broadband (BB) user with high priority and reliability requirements who
coexists with random-access machine-type-communications (MTC) devices. The
drone serves both connectivity types with a combination of orthogonal slicing
of the wireless resources and dynamic horizontal opportunistic positioning
(D-HOP). We treat the D-HOP as a computational geometry function over
stochastic BB user locations which requires careful adjustment in the
deployment parameters to ensure MTC service at all times. Using an information
theoretic approach, we optimize DSC deployment properties and radio resource
allocation for the purpose of maximizing the average rate of BB users. While
respecting the strict dual-service requirements, we analyze how system
performance is affected by combinations of stochastic user positioning and
density, topology, and reliability constraints. The numerical results show
that this
approach outperforms static DSCs that fit the same coverage constraints, with
outstanding performance in the urban setting.
|
In this paper we first investigate the equatorial circular orbit structure of
Kerr black holes with scalar hair (KBHsSH) and highlight their most prominent
features, which are quite distinct from those of the exterior region of
ordinary bald Kerr black holes, i.e. peculiarities that arise from the
combined bound system of a black hole with an off-center, self-gravitating
distribution of scalar matter.
Some of these traits are incompatible with the thin-disk approach; we
therefore identify and map out the corresponding regions in the parameter
space. All
the solutions for which the stable circular orbital velocity (and angular
momentum) curve is continuous are used for building thin and optically thick
disks around them, from which we extract the radiant energy fluxes,
luminosities and efficiencies. We compare the results in batches with the same
spin parameter $j$ but different normalized charges, and the profiles are
richly diverse. Because of the existence of a conserved scalar charge, $Q$,
these solutions are non-unique in the $(M, J)$ parameter space. Furthermore,
$Q$ cannot be extracted asymptotically from the metric functions. Nevertheless,
by constraining the parameters through different observations, the luminosity
profile could in turn be used to constrain the Noether charge and characterize
the spacetime, should KBHsSH exist.
|
In this work, we propose a novel missile guidance algorithm that combines
deep learning based trajectory prediction with nonlinear model predictive
control. Although missile guidance and threat interception is a well-studied
problem, existing algorithms' performance degrades significantly when the
target is pulling high acceleration attack maneuvers while rapidly changing its
direction. We argue that since most threats execute similar attack maneuvers,
these nonlinear trajectory patterns can be processed with modern machine
learning methods to build high accuracy trajectory prediction algorithms. We
train a long short-term memory network (LSTM) based on a class of simulated
structured agile attack patterns, then combine this predictor with quadratic
programming based nonlinear model predictive control (NMPC). Our method, named
nonlinear model based predictive control with target acceleration predictions
(NMPC-TAP), significantly outperforms the compared approaches in terms of miss
distance, for the scenarios where the target/threat is executing agile
maneuvers.
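A minimal sketch of the predictor side (dimensions and architecture are illustrative, not the paper's trained network): an LSTM maps a window of past target states to a short horizon of predicted accelerations, which the NMPC loop can then consume as a disturbance forecast.

```python
import torch
import torch.nn as nn

# Hedged sketch: an LSTM trajectory predictor that outputs a horizon of
# future target accelerations from a window of past states.

class AccelPredictor(nn.Module):
    def __init__(self, state_dim=6, hidden=64, horizon=10, accel_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon * accel_dim)
        self.horizon, self.accel_dim = horizon, accel_dim

    def forward(self, past_states):            # (batch, window, state_dim)
        _, (h, _) = self.lstm(past_states)
        out = self.head(h[-1])                  # use the final hidden state
        return out.view(-1, self.horizon, self.accel_dim)

model = AccelPredictor()
past = torch.randn(4, 20, 6)                    # 4 tracks, 20 past samples each
pred_accel = model(past)                        # (4, 10, 3) future accelerations
print(pred_accel.shape)
```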
|
The sub-Saturn ($\sim$4--8$R_{\oplus}$) occurrence rate rises with orbital
period out to at least $\sim$300 days. In this work we adopt and test the
hypothesis that the decrease in their occurrence towards the star is a result
of atmospheric mass loss, which can transform sub-Saturns into sub-Neptunes
($\lesssim$4$R_{\oplus}$) more efficiently at shorter periods. We show that
under the mass loss hypothesis, the sub-Saturn occurrence rate can be leveraged
to infer their underlying core mass function, and by extension that of gas
giants. We determine that lognormal core mass functions peaked near
$\sim$10--20$M_{\oplus}$ are compatible with the sub-Saturn period
distribution, the distribution of observationally-inferred sub-Saturn cores,
and gas accretion theories. Our theory predicts that close-in sub-Saturns
should be $\sim$50\% less common and $\sim$30\% more massive around rapidly
rotating stars; this should be directly testable for stars younger than
$\lesssim$500 Myr. We also predict that the sub-Jovian desert becomes less
pronounced and opens up at smaller orbital periods around M stars compared to
solar-type stars ($\sim$0.7 days vs.~$\sim$3 days). We demonstrate that
exceptionally low-density sub-Saturns, "Super-Puffs", can survive intense
hydrodynamic escape to the present day if they are born with even larger
atmospheres than they currently harbor; in this picture, Kepler 223 d began
with an envelope $\sim$1.5$\times$ the mass of its core and is currently losing
its envelope at a rate $\sim$2$\times 10^{-3}M_{\oplus}~\mathrm{Myr}^{-1}$. If
the predictions from our theory are confirmed by observations, the core mass
function we predict can also serve to constrain core formation theories of
gas-rich planets.
|
Atomic Switch Networks (ASN) comprising silver iodide (AgI) junctions, a
material previously unexplored as functional memristive elements within
highly-interconnected nanowire networks, were employed as a neuromorphic
substrate for physical Reservoir Computing (RC). This new class of ASN-based
devices has been physically characterized and utilized to classify spoken digit
audio data, demonstrating the utility of substrate-based device architectures
where intrinsic material properties can be exploited to perform computation
in-materio. This work demonstrates high accuracy in the classification of
temporally analyzed Free-Spoken Digit Data (FSDD). These results expand upon
the class of viable memristive materials available for the production of
functional nanowire networks and bolster the utility of ASN-based devices as
unique hardware platforms for neuromorphic computing applications involving
memory, adaptation and learning.
|
This paper presents a coverage-guided grammar-based fuzzing technique for
automatically generating a corpus of concise test inputs for programs such as
compilers. We walk through a case study of a compiler designed for education
and the corresponding problem of generating meaningful test cases to provide to
students. The prior state-of-the-art solution is a combination of fuzzing and
test-case reduction techniques such as variants of delta-debugging. Our key
insight is that instead of attempting to minimize convoluted fuzzer-generated
test inputs, we can instead grow concise test inputs by construction using a
form of iterative deepening. We call this approach Bonsai Fuzzing. Experimental
results show that Bonsai Fuzzing can generate test corpora having inputs that
are 16--45% smaller in size on average as compared to a fuzz-then-reduce
approach, while achieving approximately the same code coverage and
fault-detection capability.
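The growing-by-construction idea can be sketched with a toy alphabet and coverage function (both hypothetical; Bonsai Fuzzing itself is grammar-aware and coverage-guided on real compilers): enumerate inputs in order of increasing size and keep only those that reach new coverage, so every kept input is the shortest witness of its behaviour.

```python
import itertools

# Hedged sketch of iterative deepening over input size: candidates are grown
# by construction, so the first input that triggers a coverage feature is
# already concise and never needs to be minimized afterwards.

TOKENS = ["x", "(", ")", "+"]           # toy alphabet standing in for a grammar

def coverage(program):                  # stand-in for compiler coverage
    feats = set()
    if "(" in program and ")" in program: feats.add("parens")
    if "+" in program: feats.add("add")
    if program.count("x") >= 2: feats.add("two-vars")
    return feats

corpus, seen = [], set()
for depth in range(1, 5):               # iterative deepening on input length
    for candidate in itertools.product(TOKENS, repeat=depth):
        program = "".join(candidate)
        new = coverage(program) - seen
        if new:                         # keep shortest witness of new coverage
            corpus.append(program)
            seen |= new

print(corpus)
```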
|
Betweenness centrality is one of the most important concepts in graph
analysis. It was recently extended to link streams, a graph generalization
where links arrive over time. However, its computation raises non-trivial
issues, due in particular to the fact that time is considered continuous. We
provide here the first algorithms to compute this generalized betweenness
centrality, as well as several companion algorithms of independent interest.
They run in polynomial time and space; we illustrate them on typical examples
and provide an implementation.
|
Deep learning approaches often require huge datasets to achieve good
generalization. This complicates their use in tasks like image-based medical
diagnosis, where the small training datasets are usually insufficient to learn
appropriate data representations. For such sensitive tasks it is also
important to provide confidence estimates for the predictions. Here, we
propose a way to learn
and use probabilistic labels to train accurate and calibrated deep networks
from relatively small datasets. We observe gains of up to 22% in the accuracy
of models trained with these labels, as compared with traditional approaches,
in three classification tasks: diagnosis of hip dysplasia, fatty liver, and
glaucoma. The outputs of models trained with probabilistic labels are
calibrated, allowing their predictions to be interpreted as proper
probabilities. We anticipate this approach will apply to other tasks where few
training instances are available and expert knowledge can be encoded as
probabilities.
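The core training recipe, shown as a hedged sketch rather than the paper's exact pipeline, is to use the expert-derived label probabilities as soft targets in the cross-entropy:

```python
import torch
import torch.nn.functional as F

# Hedged sketch: soft-label cross-entropy between expert-derived label
# probabilities and the model's predicted distribution. Values illustrative.

logits = torch.randn(8, 2, requires_grad=True)      # 8 images, 2 classes
soft_targets = torch.tensor([[0.9, 0.1], [0.7, 0.3], [0.5, 0.5], [0.2, 0.8],
                             [0.95, 0.05], [0.6, 0.4], [0.1, 0.9], [0.3, 0.7]])

log_probs = F.log_softmax(logits, dim=-1)
loss = -(soft_targets * log_probs).sum(dim=-1).mean()   # soft cross-entropy
loss.backward()
print(loss.item())   # models trained this way tend to be better calibrated
```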
|
Bayesian structure learning allows inferring Bayesian network structure from
data while reasoning about the epistemic uncertainty -- a key element towards
enabling active causal discovery and designing interventions in real world
systems. In this work, we propose a general, fully differentiable framework for
Bayesian structure learning (DiBS) that operates in the continuous space of a
latent probabilistic graph representation. Contrary to existing work, DiBS is
agnostic to the form of the local conditional distributions and allows for
joint posterior inference of both the graph structure and the conditional
distribution parameters. This makes our formulation directly applicable to
posterior inference of complex Bayesian network models, e.g., with nonlinear
dependencies encoded by neural networks. Using DiBS, we devise an efficient,
general purpose variational inference method for approximating distributions
over structural models. In evaluations on simulated and real-world data, our
method significantly outperforms related approaches to joint posterior
inference.
|
Object detection in natural scenes can be a challenging task. In many
real-life situations, the visible spectrum is not suitable for traditional
computer vision tasks. Moving outside the visible spectrum range, to the
thermal spectrum or near-infrared (NIR) images, is much more beneficial in
low-visibility conditions; NIR images are also very helpful for understanding
an object's material quality. In this work, we have taken images with both the
Thermal and NIR spectrum for the object detection task. As multi-spectral data
with both Thermal and NIR is not available for the detection task, we needed to
collect data ourselves. Data collection is a time-consuming process, and we
faced many obstacles that we had to overcome. We train the YOLO v3 network
from scratch to detect objects in multi-spectral images. In addition, to avoid
overfitting, we perform data augmentation and tune hyperparameters.
|
We construct the systems of bi-orthogonal polynomials on the unit circle
where the Toeplitz structure of the moment determinants is replaced by $
\det(w_{2j-k})_{0\leq j,k \leq N-1} $ and the corresponding Vandermonde modulus
squared is replaced by $ \prod_{1 \le j < k \le N}(\zeta^{2}_k -
\zeta^{2}_j)(\zeta^{-1}_k - \zeta^{-1}_j) $. This is the simplest case of a
general system of $pj-qk$ with $p,q$ co-prime integers. We derive analogues of
the structures well known in the Toeplitz case: third order recurrence
relations, determinantal and multiple-integral representations, their
reproducing kernel and Christoffel-Darboux sum, and associated
(Carath{\'e}odory) functions. We close by giving full explicit details for the
system defined by the simple weight $ w(\zeta)=e^{\zeta}$, which is a
specialisation of a weight arising from averages of moments of derivatives of
characteristic polynomials over $USp(2N)$, $SO(2N)$ and $O^-(2N)$.
|
Two-dimensional (2D) van der Waals (vdWs) materials have gathered a lot of
attention recently. However, the majority of these materials have Curie
temperatures that are well below room temperature, making it challenging to
incorporate them into device applications. In this work, we synthesized a
room-temperature vdW magnetic crystal Fe$_5$GeTe$_2$ with a Curie temperature
T$_c = 332$ K, and studied its magnetic properties by vibrating sample
magnetometry (VSM) and broadband ferromagnetic resonance (FMR) spectroscopy.
The experiments were performed with external magnetic fields applied along the
c-axis (H$\parallel$c) and the ab-plane (H$\parallel$ab), with temperatures
ranging from 300 K to 10 K. We have found a sizable Land\'e g-factor difference
between the H$\parallel$c and H$\parallel$ab cases. In both cases, the Land\'e
g-factor values deviated from g = 2. This indicates a contribution of orbital
angular momentum to the magnetic moment. The FMR measurements reveal that
Fe$_5$GeTe$_2$ has a damping constant comparable to that of Permalloy. With
decreasing temperature, the linewidth broadened. Together with the VSM data, our
measurements indicate that Fe$_5$GeTe$_2$ transitions from ferromagnetic to
ferrimagnetic at lower temperatures. Our experiments highlight key information
regarding the magnetic state and spin scattering processes in Fe$_5$GeTe$_2$,
which promotes the understanding of magnetism in Fe$_5$GeTe$_2$ and paves the
way toward Fe$_5$GeTe$_2$-based room-temperature spintronic devices.
|
We develop a hybrid model of galactic chemical evolution that combines a
multi-ring computation of chemical enrichment with a prescription for stellar
migration and the vertical distribution of stellar populations informed by a
cosmological hydrodynamic disc galaxy simulation. Our fiducial model adopts
empirically motivated forms of the star formation law and star formation
history, with a gradient in outflow mass loading tuned to reproduce the
observed metallicity gradient. With this approach, the model reproduces many of
the striking qualitative features of the Milky Way disc's abundance structure:
(i) the dependence of the [O/Fe]-[Fe/H] distribution on radius $R_\text{gal}$
and midplane distance $|z|$; (ii) the changing shapes of the [O/H] and [Fe/H]
distributions with $R_\text{gal}$ and $|z|$; (iii) a broad distribution of
[O/Fe] at sub-solar metallicity and changes in the [O/Fe] distribution with
$R_\text{gal}$, $|z|$, and [Fe/H]; (iv) a tight correlation between [O/Fe] and
stellar age for [O/Fe] $>$ 0.1; (v) a population of young and intermediate-age
$\alpha$-enhanced stars caused by migration-induced variability in the Type Ia
supernova rate; (vi) non-monotonic age-[O/H] and age-[Fe/H] relations, with
large scatter and a median age of $\sim$4 Gyr near solar metallicity.
Observationally motivated models with an enhanced star formation rate $\sim$2
Gyr ago improve agreement with the observed age-[Fe/H] and age-[O/H] relations,
but worsen agreement with the observed age-[O/Fe] relation. None of our models
predict an [O/Fe] distribution with the distinct bimodality seen in the
observations, suggesting that more dramatic evolutionary pathways are required.
All code and tables used for our models are publicly available through the
Versatile Integrator for Chemical Evolution (VICE;
https://pypi.org/project/vice).
|
We study the problem of zeroth-order (black-box) optimization of a Lipschitz
function $f$ defined on a compact subset $\mathcal X$ of $\mathbb R^d$, with
the additional constraint that algorithms must certify the accuracy of their
recommendations. We characterize the optimal number of evaluations of any
Lipschitz function $f$ to find and certify an approximate maximizer of $f$ at
accuracy $\varepsilon$. Under a weak assumption on $\mathcal X$, this optimal
sample complexity is shown to be nearly proportional to the integral
$\int_{\mathcal X} \mathrm{d}\boldsymbol x/( \max(f) - f(\boldsymbol x) +
\varepsilon )^d$. This result, which was only (and partially) known in
dimension $d=1$, solves an open problem dating back to 1991. In terms of
techniques, our upper bound relies on a slightly improved analysis of the DOO
algorithm that we adapt to the certified setting and then link to the above
integral. Our instance-dependent lower bound differs from traditional
worst-case lower bounds in the Lipschitz setting and relies on a local
worst-case analysis that could likely prove useful for other learning tasks.
|
Hyperbolic phonon polaritons (HPhPs) sustained in van der Waals (vdW)
materials exhibit extraordinary capabilities of confining long-wave
electromagnetic fields to the deep subwavelength scale. In stark contrast to
the uniaxial vdW hyperbolic materials such as hexagonal boron nitride (h-BN),
the recently emerging biaxial hyperbolic materials such as {\alpha}-MoO3 and
{\alpha}-V2O5 bring further degrees of freedom for controlling light in the
flatland, owing to their distinctive in-plane hyperbolic dispersion. However,
the control and focusing of such in-plane HPhPs have to date remained elusive.
Here, we propose a versatile technique for launching, controlling and focusing
in-plane HPhPs in {\alpha}-MoO3 with geometrically designed plasmonic
antennas. By utilizing a high-resolution near-field optical imaging technique,
we directly excited and mapped the HPhP wavefronts in real space. We find that
the subwavelength manipulation and focusing behavior is strongly dependent on
the curvature of the antenna extremity. This strategy operates effectively in a
broadband spectral region. These findings not only provide fundamental
insights into the manipulation of light by biaxial hyperbolic crystals at the
nanoscale, but also open up new opportunities for planar nanophotonic
applications.
|
Secret Unknown Ciphers (SUCs) have been proposed recently as digital
clone-resistant functions that overcome some of the downsides of Physical(ly)
Unclonable Functions (PUFs), mainly their inconsistency due to their analog
nature. In
this paper, we propose a new practical mechanism for creating internally random
ciphers in modern volatile and non-volatile SoC FPGAs, coined as SRAM-SUC. Each
created random cipher inside a SoC FPGA constitutes a robust digital PUF. This
work also presents a class of involutive SUCs, optimized for the targeted SoC
FPGA architecture, as sample realization of the concept; it deploys a generated
class of involutive 8-bit S-Boxes, that are selected randomly from a defined
large set through an internal process inside the SoC FPGA. Hardware and
software implementations show that the resulting SRAM-SUC has ultra-low latency
compared to well-known PUF-based authentication mechanisms. SRAM-SUC requires
only $2.88/0.72 \mu s$ to generate a response for a challenge at 50/200 MHz
respectively. This makes SRAM-SUC a promising and appealing solution for
Ultra-Reliable Low Latency Communication (URLLC).
|
It is known that any $m$-gonal form of rank $n \ge 5$ is almost regular.
In this article, we study the sufficiently large integers which are
represented by (almost regular) $m$-gonal forms of rank $n \ge 6$.
|
Two stochastic models are proposed to describe the evolution of the COVID-19
pandemic. In the first model the population is partitioned into four
compartments: susceptible $S$, infected $I$, removed $R$ and dead people $D$.
In order to have a cross validation, a deterministic version of such a model is
also devised which is represented by a system of ordinary differential
equations with delays. In the second stochastic model two further compartments
are added: the class $A$ of asymptomatic individuals and the class $L$ of
isolated infected people. Effects such as social distancing measures are easily
included and the consequences are analyzed. Numerical solutions are obtained
with Monte Carlo simulations. Quantitative predictions are provided which can
be useful for the evaluation of political measures, e.g. the obtained results
suggest that strategies based on herd immunity are too risky.
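For reference, the deterministic four-compartment counterpart can be integrated in a few lines (a hedged sketch with forward Euler; the delays and the stochastic dynamics of the paper are omitted, and all rates are illustrative rather than fitted values):

```python
# Hedged sketch: S-I-R-D compartments with forward Euler integration.
# beta: infection rate, gamma: recovery rate, mu: death rate (illustrative).

beta, gamma, mu = 0.25, 0.08, 0.004
N, dt, steps = 1_000_000.0, 0.1, 3000
S, I, R, D = N - 10.0, 10.0, 0.0, 0.0

for _ in range(steps):
    new_inf = beta * S * I / N
    dS, dI = -new_inf, new_inf - (gamma + mu) * I
    dR, dD = gamma * I, mu * I
    S, I, R, D = S + dt * dS, I + dt * dI, R + dt * dR, D + dt * dD

print(f"recovered: {R:.0f}, dead: {D:.0f}, still susceptible: {S:.0f}")
```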
|
We develop a simple and elegant method for lossless compression using latent
variable models, which we call 'bits back with asymmetric numeral systems'
(BB-ANS). The method involves interleaving encode and decode steps, and
achieves an optimal rate when compressing batches of data. We demonstrate it
firstly on the MNIST test set, showing that state-of-the-art lossless
compression is possible using a small variational autoencoder (VAE) model. We
then make use of a novel empirical insight, that fully convolutional generative
models, trained on small images, are able to generalize to images of arbitrary
size, and extend BB-ANS to hierarchical latent variable models, enabling
state-of-the-art lossless compression of full-size colour images from the
ImageNet dataset. We describe 'Craystack', a modular software framework which
we have developed for rapid prototyping of compression using deep generative
models.
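The interleaving of encode and decode steps can be summarized schematically as follows (`codec` is a hypothetical last-in-first-out entropy coder of the kind ANS provides and `model` a latent variable model; this is our illustration, not Craystack's actual API):

```python
# Schematic of one BB-ANS cycle with a hypothetical stack-like coder:
# push(symbol, dist) appends compressed bits; pop(dist) recovers a symbol
# and removes its bits. Net cost per datum is log p(x, z) - log q(z | x).

def bb_ans_encode(codec, x, model):
    z = codec.pop(model.q(z_given=x))   # "bits back": decode z from the stream
    codec.push(x, model.p_x(given=z))   # encode data under p(x | z)
    codec.push(z, model.p_z())          # encode latent under the prior

def bb_ans_decode(codec, model):
    z = codec.pop(model.p_z())          # invert the encode steps in reverse
    x = codec.pop(model.p_x(given=z))
    codec.push(z, model.q(z_given=x))   # return the borrowed bits
    return x
```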
|
We construct a Green function, which can identify the topological nature of
interacting systems. It is equivalent to the single-particle Green function of
effective non-interacting particles, the Bloch Hamiltonian of which is given by
the inverse of the full Green function of the original interacting particles at
zero frequency. The topological nature of the interacting insulators
originates from the coincidence of the poles and the zeros of the diagonal
elements of the constructed Green function. The crossing of the zeros in
momentum space is closely related to the topological nature of insulators. As
a demonstration, using the crossing of the zeros, we identify the topological
phases of
magnetic insulators, where both the ionic potential and the spin exchange
between conduction electrons and magnetic moments are present together with the
spin-orbital coupling. The topological phase identification is consistent with
the topological invariant of the magnetic insulators. We also find an
antiferromagnetic state with topological breaking of the spin symmetry, where
electrons with one spin orientation are in a topological insulating state,
while electrons with the opposite spin orientation are in a topologically
trivial one.
|
Radical progress in the field of deep learning (DL) has led to unprecedented
accuracy in diverse inference tasks. As such, deploying DL models across mobile
platforms is vital to enable the development and broad availability of the
next-generation intelligent apps. Nevertheless, the wide and optimised
deployment of DL models is currently hindered by the vast system heterogeneity
of mobile devices, the varying computational cost of different DL models and
the variability of performance needs across DL applications. This paper
proposes OODIn, a framework for the optimised deployment of DL apps across
heterogeneous mobile devices. OODIn comprises a novel DL-specific software
architecture together with an analytical framework for modelling DL
applications that: (1) counteract the variability in device resources and DL
models by means of a highly parametrised multi-layer design; and (2) perform a
principled optimisation of both model- and system-level parameters through a
multi-objective formulation, designed for DL inference apps, in order to adapt
the deployment to the user-specified performance requirements and device
capabilities. Quantitative evaluation shows that the proposed framework
consistently outperforms status-quo designs across heterogeneous devices and
delivers up to 4.3x and 3.5x performance gain over highly optimised platform-
and model-aware designs respectively, while effectively adapting execution to
dynamic changes in resource availability.
|
Theoretically, domain adaptation is a well-researched problem. Further, this
theory has been widely used in practice. In particular, we note the bound on
target error given by Ben-David et al. (2010) and the well-known
domain-aligning algorithm based on this work using Domain Adversarial Neural
Networks (DANN) presented by Ganin and Lempitsky (2015). Recently, multiple
variants of DANN have been proposed for the related problem of domain
generalization, but without much discussion of the original motivating bound.
In this paper, we investigate the validity of DANN in domain generalization
from this perspective. We investigate conditions under which application of
DANN makes sense and further consider DANN as a dynamic process during
training. Our investigation suggests that the application of DANN to domain
generalization may not be as straightforward as it seems. To address this, we
design an algorithmic extension to DANN in the domain generalization case. Our
experimentation validates both theory and algorithm.
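For concreteness, the mechanism at the heart of DANN is the gradient reversal layer of Ganin and Lempitsky (2015); a minimal sketch:

```python
import torch

# Hedged sketch of a gradient reversal layer: identity in the forward pass,
# sign-flipped (and scaled) gradient in the backward pass, so the feature
# extractor learns to confuse the domain classifier.

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None   # reversed, scaled gradient

features = torch.randn(16, 32, requires_grad=True)
out = GradReverse.apply(features, 1.0)   # would feed the domain classifier
out.sum().backward()                     # stand-in for a domain loss
print(features.grad[0, :4])              # gradients arrive sign-flipped
```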
|
Symbolic music understanding, which refers to the understanding of music from
the symbolic data (e.g., MIDI format, but not audio), covers many music
applications such as genre classification, emotion classification, and music
pieces matching. While good music representations are beneficial for these
applications, the lack of training data hinders representation learning.
Inspired by the success of pre-training models in natural language processing,
in this paper, we develop MusicBERT, a large-scale pre-trained model for music
understanding. To this end, we construct a large-scale symbolic music corpus
that contains more than 1 million songs. Since symbolic music contains more
structural (e.g., bar, position) and diverse (e.g., tempo, instrument, and
pitch) information than plain text, simply adopting the pre-training
techniques from NLP to symbolic music brings only marginal gains. Therefore,
we design several
mechanisms, including OctupleMIDI encoding and bar-level masking strategy, to
enhance pre-training with symbolic music data. Experiments demonstrate the
advantages of MusicBERT on four music understanding tasks, including melody
completion, accompaniment suggestion, genre classification, and style
classification. Ablation studies also verify the effectiveness of our designs
of OctupleMIDI encoding and bar-level masking strategy in MusicBERT.
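To illustrate the flavour of OctupleMIDI (the attribute set and values below reflect our reading of the encoding and should be treated as illustrative), each note becomes a single token carrying eight elements instead of several consecutive tokens, which substantially shortens sequences:

```python
from collections import namedtuple

# Hedged sketch of an OctupleMIDI-style encoding: one token per note, each
# carrying eight attributes. Attribute names and value ranges are illustrative.

OctupleToken = namedtuple(
    "OctupleToken",
    ["time_signature", "tempo", "bar", "position",
     "instrument", "pitch", "duration", "velocity"],
)

notes = [  # (bar, position-within-bar, instrument, pitch, duration, velocity)
    (0, 0, 0, 60, 8, 80),
    (0, 4, 0, 64, 4, 72),
    (1, 0, 32, 43, 16, 90),
]

tokens = [OctupleToken("4/4", 120, b, p, ins, pit, dur, vel)
          for b, p, ins, pit, dur, vel in notes]
print(len(tokens), tokens[0])   # exactly one token per note
```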
|
We present an ab initio derivation method for effective low-energy
Hamiltonians of material with strong spin-orbit interactions. The effective
Hamiltonian is described in terms of the Wannier function in the spinor form,
and effective interactions are derived with the constrained random phase
approximation (cRPA) method. Based on this formalism and the developed code, we
derive an effective Hamiltonian of a strong spin-orbit interaction material
Ca5Ir3O12. This system consists of three edge-shared IrO6 octahedral chains
arranged along the c axis, and the three Ir atoms in the ab plane compose a
triangular lattice. For such a complicated structure, we need to set up the
Wannier spinor function in a local coordinate system. We found that the
density-functional band structure near the Fermi level is formed by local dxy
and dyz orbitals. We then constructed the ab initio dxy/dyz model. The
estimated nearest neighbor transfer t is close to 0.2 eV, and the cRPA onsite U
and neighboring V electronic interactions are found to be 2.4-2.5 eV and 1 eV,
respectively. The resulting characteristic correlation strength defined by
(U-V)/t is above 7, and thus this material is classified as a strongly
correlated electron system. The onsite transfer integral involved in the
spin-orbit interaction is 0.2 eV, which is comparable to the onsite exchange
integrals near 0.2 eV, indicating that the spin-orbit-interaction physics would
compete with the Hund physics. Based on these calculated results, we discuss
possible rich ground-state low-energy electronic structures of spin, charge and
orbitals with competing Hund, spin-orbit and strong correlation physics.
|
We study the problem of {\em crowdsourced PAC learning} of Boolean-valued
functions through enriched queries, a problem that has attracted a surge of
recent research interests. In particular, we consider that the learner may
query the crowd to obtain a label of a given instance or a comparison tag of a
pair of instances. This is a challenging problem and only recently have
budget-efficient algorithms been established for the scenario where the
majority of the crowd are correct. In this work, we investigate the
significantly more challenging case in which the majority are incorrect, which
renders learning impossible in general. We show that under the {semi-verified
model} of Charikar~et~al.~(2017), where we have (limited) access to a trusted
oracle who always returns the correct annotation, it is possible to learn the
underlying function while the labeling cost is significantly mitigated by the
enriched and more easily obtained queries.
|
Ultrasound tomography (UST) scanners allow quantitative images of the human
breast's acoustic properties to be derived with potential applications in
screening, diagnosis and therapy planning. Time domain full waveform inversion
(TD-FWI) is a promising UST image formation technique that fits the parameter
fields of a wave physics model by gradient-based optimization. For high
resolution 3D UST, it poses three key challenges: Firstly, its central building
block, the computation of the gradient for a single US measurement, has a
restrictively large memory footprint. Secondly, this building block needs to be
computed for each of the $10^3-10^4$ measurements, resulting in a massive
parallel computation usually performed on large computational clusters for
days. Lastly, the structure of the underlying optimization problem may result
in slow progression of the solver and convergence to a local minimum. In this
work, we design and evaluate a comprehensive computational strategy to overcome
these challenges: Firstly, we exploit a gradient computation based on time
reversal that dramatically reduces the memory footprint at the expense of one
additional wave simulation per source. Secondly, we break the dependence on the
number of measurements by using source encoding (SE) to compute stochastic
gradient estimates. We also describe a more accurate, TD-specific SE technique
with finer variance control, and use a state-of-the-art stochastic L-BFGS
method. Lastly, we design an efficient TD multi-grid scheme together with
preconditioning to speed up the convergence while avoiding local minima. All
components are evaluated in extensive numerical proof-of-concept studies
simulating a bowl-shaped 3D UST breast scanner prototype. Finally, we
demonstrate that their combination allows us to obtain an accurate 442x442x222
voxel image with a resolution of 0.5mm using Matlab on a single GPU within 24h.
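The key property exploited by source encoding can be checked in a few lines with a linear toy forward model (a hedged sketch, not the paper's TD-specific scheme): combining all sources with random Rademacher weights yields, from the cost of a single simulation, an unbiased estimate of the gradient summed over all sources.

```python
import numpy as np

# Hedged sketch of source encoding for stochastic gradients on a linear toy
# model. With Rademacher weights w, E[w_s w_t] = delta_st, so the encoded
# gradient is an unbiased estimator of the full sum over sources.

rng = np.random.default_rng(4)
S, n, m = 64, 30, 50                      # sources, parameters, receivers
F = rng.normal(size=(S, m, n))            # per-source linear forward operators
x_true = rng.normal(size=n)
d = np.einsum("smn,n->sm", F, x_true)     # observed data, one row per source
x = np.zeros(n)                           # current model estimate

full_grad = sum(F[s].T @ (F[s] @ x - d[s]) for s in range(S))

draws = []
for _ in range(500):                      # each draw = one "supershot" simulation
    w = rng.choice([-1.0, 1.0], size=S)   # Rademacher encoding weights
    F_enc = np.tensordot(w, F, axes=1)    # encoded source: sum_s w_s F_s
    d_enc = w @ d
    draws.append(F_enc.T @ (F_enc @ x - d_enc))

se_mean = np.mean(draws, axis=0)
print(np.linalg.norm(se_mean - full_grad) / np.linalg.norm(full_grad))  # small
```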
|
We study some local spectral properties of contraction operators on $\ell_p$,
$1<p<\infty$ from a Baire category point of view, with respect to the
Strong$^*$ Operator Topology. In particular, we show that a typical contraction
on $\ell_p$ has Dunford's Property (C) but neither Bishop's Property $(\beta)$
nor the Decomposition Property $(\delta)$, and is completely indecomposable. We
also obtain some results regarding the asymptotic behavior of orbits of typical
contractions on $\ell_p$.
|
The morphological classification of galaxies is a relevant probe for galaxy
evolution and unveils its connection with cosmological structure formation. To
this scope, it is fundamental to recover galaxy morphologies over large areas
of the sky. In this paper, we present a morphological catalogue for galaxies in
the Stripe-82 area, observed with S-PLUS, down to a magnitude limit of
$r\le17$, using state-of-the-art Convolutional Neural Networks (CNNs) for
computer vision. This analysis will then be extended to the whole S-PLUS
survey data,
covering $\simeq 9300$ $deg^{2}$ of the celestial sphere in twelve optical
bands. We find that the network's performance increases with 5 broad bands
plus 3 additional narrow bands compared to our baseline with 3 bands; however,
it loses performance when using the full $12$-band image information.
Nevertheless, the best result is achieved with 3 bands when using network
weights pre-trained on the ImageNet dataset. These results highlight the
importance of prior knowledge encoded in network weights obtained by training
on large, unrelated datasets. Thus, we release a model pre-trained in several
bands that can be adapted to other surveys. We develop a catalogue of 3274
galaxies in Stripe-82 that are not present in Galaxy Zoo 1 (GZ1). We also add
classifications for 4686 galaxies considered ambiguous in the GZ1 dataset.
Finally,
we present the prospect of a novel way to take advantage of $12$-band
information for morphological classification using multiband morphometric
features. The morphological catalogues are publicly available.
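A minimal sketch of the transfer-learning setup suggested by these results (3-band images map naturally onto RGB channels; the head size and data below are illustrative, and `pretrained=True` is the older torchvision API):

```python
import torch
import torchvision

# Hedged sketch: start from ImageNet weights and replace the classification
# head for the galaxy-morphology classes; 3-band cutouts play the role of RGB.

model = torchvision.models.resnet18(pretrained=True)   # ImageNet weights
n_classes = 2                                          # e.g. elliptical vs spiral
model.fc = torch.nn.Linear(model.fc.in_features, n_classes)

images = torch.randn(4, 3, 224, 224)   # stand-in for 3-band galaxy cutouts
logits = model(images)
print(logits.shape)                    # (4, 2)
```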
|
In this paper, we consider the conditional regularity of weak solution to the
3D Navier--Stokes equations. More precisely, we prove that if one directional
derivative of velocity, say $\partial_3 u,$ satisfies $\partial_3 u \in
L^{p_0,1}(0,T; L^{q_0}(\mathbb{R}^3))$ with $\frac{2}{p_{0}}+\frac{3}{q_{0}}=2$
and $\frac{3}{2}<q_0< +\infty,$ then the weak solution is regular on $(0,T].$
The proof is based on the new local energy estimates introduced by Chae-Wolf
(arXiv:1911.02699) and Wang-Wu-Zhang (arXiv:2005.11906).
|
A bridgeless cubic graph $G$ is said to have a 2-bisection if there exists a
2-vertex-colouring of $G$ (not necessarily proper) such that: (i) the colour
classes have the same cardinality, and (ii) each monochromatic component is
either an isolated vertex or an edge. In 2016, Ban and Linial conjectured that
every bridgeless cubic graph, apart from the well-known Petersen graph, admits
a 2-bisection. In the same paper it was shown that every Class I bridgeless
cubic graph admits such a bisection. The Class II bridgeless cubic graphs which
are critical to many conjectures in graph theory are snarks, in particular,
those with excessive index at least 5, that is, whose edge-set cannot be
covered by four perfect matchings. Moreover, Esperet et al. state that a
possible counterexample to Ban--Linial's Conjecture must have circular flow
number at least 5. The same authors also state that although empirical evidence
shows that several graphs obtained from the Petersen graph admit a 2-bisection,
they can offer nothing in the direction of a general proof. Despite some
sporadic computational results, until now, no general result about snarks
having excessive index and circular flow number both at least 5 has been
proven. In this work we show that treelike snarks, which are an infinite family
of snarks heavily depending on the Petersen graph and with both their circular
flow number and excessive index at least 5, admit a 2-bisection.
|
We analyze the Nelson-Barr approach to the Strong CP Problem. We derive the
necessary conditions in order to simultaneously reproduce the CKM phase and the
quark masses. Then we quantify the irreducible contributions to the QCD
topological angle, namely the corrections arising from loops of the colored
fermion mediators that characterize these models. Corrections analytic in the
couplings first arise at 3-loop order and are safely below current bounds;
non-analytic effects are 2-loop order and decouple as the mediators exceed a
few TeV. We discuss collider, electroweak, and flavor bounds and argue that
most of the parameter space above the TeV scale is still allowed in models with
down-type mediators, whereas other scenarios are more severely constrained.
With two or more families of mediators the dominant experimental bound is due
to the neutron electric dipole moment.
|
Chiral Effective Field Theory ($\chi$EFT) has been extensively used to study
the $NN$ interaction during the last three decades. In Effective Field Theories
(EFTs) the renormalization is performed order by order including the necessary
counter terms. Due to the strong character of the $NN$ interaction a
non-perturbative resummation is needed. In this work we review some of the
methods proposed to completely remove cutoff dependencies. The methods covered
are renormalization with boundary conditions, renormalization with one
counterterm in momentum space (or, equivalently, subtractive renormalization),
and the exact $N/D$ method. The equivalence between the methods, up to one
renormalization condition, is checked by showing results for the $NN$ system.
The exact $N/D$ method allows one to go beyond the others, and using a toy
model we show how it can renormalize singular repulsive interactions.
|
The inelastic dark matter model is a popular class of models for light dark
matter (DM) below $O(1)$ GeV. If the mass splitting between the DM excited and
ground states is small enough, co-annihilation becomes the dominant channel
setting the thermal relic density, and the DM excited state can be long-lived
on collider scales. We study scalar and fermion inelastic dark matter models for $
{\cal O}(1) $ GeV DM at Belle II with $ U(1)_D $ dark gauge symmetry broken
into its $Z_2$ subgroup. We focus on dilepton displaced vertex signatures from
decays of the DM excited state. With the help of precise displaced vertex
detection capability at Belle II, we can explore the DM spin, mass, and the
mass splitting between the DM excited and ground states. In particular, we
show that scalar and fermion DM candidates can be discriminated, and that the
mass and mass splitting of the DM sector can be determined at the percent
level for some benchmark points. Furthermore, the parameter space allowed to
explain the muon $(g-2)_\mu$ excess is also studied; it can be covered by our
displaced-vertex analysis during the early stage of the Belle II experiment.
|
Hyperkalemia is a potentially life-threatening condition that can lead to
fatal arrhythmias. Early identification of high risk patients can inform
clinical care to mitigate the risk. While hyperkalemia is often a complication
of acute kidney injury (AKI), it also occurs in the absence of AKI. We
developed predictive models to identify intensive care unit (ICU) patients at
risk of developing hyperkalemia by using the Medical Information Mart for
Intensive Care (MIMIC) and the eICU Collaborative Research Database (eICU-CRD).
Our methodology focused on building multiple models, optimizing for
interpretability through model selection, and simulating various clinical
scenarios.
In order to determine if our models perform accurately on patients with and
without AKI, we evaluated the following clinical cases: (i) predicting
hyperkalemia after AKI within 14 days of ICU admission, (ii) predicting
hyperkalemia within 14 days of ICU admission regardless of AKI status, and
compared different lead times for (i) and (ii). Both clinical scenarios were
modeled using logistic regression (LR), random forest (RF), and XGBoost.
Using observations from the first day in the ICU, our models were able to
predict hyperkalemia with an AUC of (i) 0.79, 0.81, 0.81 and (ii) 0.81, 0.85,
0.85 for LR, RF, and XGBoost respectively. We found that 4 out of the top 5
features were consistent across the models. AKI stage was significant in the
models that included all patients with or without AKI, but not in the models
which only included patients with AKI. This suggests that while AKI is
important for hyperkalemia, the specific stage of AKI may not be as important.
Our findings require further investigation and confirmation.
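A minimal sketch of the modelling setup on synthetic stand-in data (not MIMIC/eICU; features and label are fabricated purely for illustration): first-day features predict hyperkalemia within 14 days, with models compared by AUC.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hedged sketch: logistic regression vs random forest on synthetic
# "first-day ICU" features with a binary hyperkalemia label.

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 10))            # e.g. labs, vitals, AKI stage
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, f"AUC = {auc:.2f}")
```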
|
The transmission through a magnetic layer of correlated electrons sandwiched
between non-interacting normal-metal leads is studied within model
calculations. We consider the linear regime in the framework of the
Meir-Wingreen formalism, according to which the transmission can be interpreted
as the overlap of the spectral function of the surface layer of the leads with
that of the central region. By analyzing these spectral functions, we show that
a change of the coupling parameter between the leads and the central region
significantly and non-trivially affects the conductance. The role of band
structure effects in the transmission is clarified. For a strong coupling
between the leads and the central layer, high-intensity localized states are
formed outside the overlapping bands, while for weaker coupling this
high-intensity spectral weight is formed within the leads' continuum band
around the Fermi energy. A local Coulomb interaction in the central region
modifies the high-intensity states, and hence the transmission. For the present
setup, the major effect of the local interaction consists in shifts of the band
structure, since any sharp features are weakened due to the macroscopic
extension of the configuration in the directions perpendicular to the transport
direction.
|
We use photometric and kinematic data from Gaia DR2 to explore the structure
of the star forming region associated with the molecular cloud of Perseus.
Apart from the two well known clusters, IC 348 and NGC 1333, we present five
new clustered groups of young stars, which contain between 30 and 300 members,
named Autochthe, Alcaeus, Heleus, Electryon and Mestor. We demonstrate these
are co-moving groups of young stars, based on how the candidate members are
distributed in position, proper motion, parallax and colour-magnitude space. By
comparing their colour-magnitude diagrams to isochrones we show that they have
ages between 1 and 5 Myr. Using 2MASS and WISE colours we find that the
fraction of stars with discs in each group ranges from 10 to 50 percent. The
youngest of the new groups is also associated with a reservoir of cold dust,
according to the Planck map at 353 GHz. We compare the ages and proper motions
of the five new groups to those of IC 348 and NGC 1333. Autochthe is clearly
linked with NGC 1333 and may have formed in the same star formation event. The
seven groups separate roughly into two sets which share proper motion, parallax
and age: Heleus, Electryon, Mestor as the older set, and NGC 1333, Autochthe as
the younger set. Alcaeus is kinematically related to the younger set, but at a
more advanced age, while the properties of IC 348 overlap with both sets. All
older groups in this star forming region are located at higher galactic
latitude.
|
In this paper, we derive a set of equations of motions for binaries on
eccentric orbits undergoing spin-induced precession that can efficiently be
integrated on the radiation-reaction timescale. We find a family of solutions
whose computational cost is reduced by a factor of $10$--$50$, down to $\sim
10$ ms per waveform evaluation, compared to waveforms obtained by directly
integrating the precession equations, while maintaining a mismatch of order
$10^{-4}$--$10^{-6}$ for waveforms lasting a million orbital cycles and a
thousand spin-induced precession cycles. We express the solution in terms of
parameters that make it regular in the equal-mass limit, thus bypassing a
problem of
previous similar solutions. We point to ways in which the solution presented in
this paper can be perturbed to take into account effects such as general
quadrupole momenta and post-Newtonian corrections to the precession equations.
This new waveform, with its improved efficiency and its accuracy, makes
possible Bayesian parameter estimation using the full spin and eccentricity
parameter volume for long-lasting inspiralling signals such as stellar-origin
black hole binaries observed by LISA.
|