In this paper we deal with the problem of finding possibly short
synchronizing words in automata with a weight assigned to each letter of the
alphabet $\Sigma$. First we discuss some complexity questions, and then we
present a new approximation algorithm in four variations.
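As a point of reference for the weighted setting, the sketch below implements a classical greedy heuristic (in the spirit of Eppstein's algorithm) adapted so that each letter carries a weight: at every step a cheapest word merging some pair of the currently reachable states is found by a Dijkstra search over state pairs. It is only an illustrative baseline under assumed data structures (`delta` as a nested dict, `weights` as a dict), not the approximation algorithm proposed in the paper.

```python
import heapq
from itertools import combinations

def greedy_weighted_sync_word(delta, weights):
    """Greedy heuristic for a short synchronizing word in a letter-weighted automaton.

    delta[state][letter] -> next state; weights[letter] -> positive letter weight.
    Illustrative baseline only, not the paper's algorithm.
    """
    current = set(delta)                      # states still to be merged
    word = []
    while len(current) > 1:
        # Dijkstra on unordered state pairs: cheapest word merging some pair in `current`.
        starts = [frozenset(p) for p in combinations(current, 2)]
        dist = {p: 0.0 for p in starts}
        prev = {p: None for p in starts}      # pair -> (predecessor pair, letter)
        heap = [(0.0, sorted(p), p) for p in starts]
        heapq.heapify(heap)
        goal, done = None, set()
        while heap:
            d, _, pair = heapq.heappop(heap)
            if pair in done:
                continue
            done.add(pair)
            if len(pair) == 1:                # the two states collapsed into one
                goal = pair
                break
            for a, w in weights.items():
                nxt = frozenset(delta[s][a] for s in pair)
                nd = d + w
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, (pair, a)
                    heapq.heappush(heap, (nd, sorted(nxt), nxt))
        if goal is None:
            raise ValueError("automaton is not synchronizing")
        fragment, node = [], goal
        while prev[node] is not None:
            node, a = prev[node]
            fragment.append(a)
        fragment.reverse()
        word += fragment
        for a in fragment:                    # apply the fragment to all remaining states
            current = {delta[s][a] for s in current}
    return word

# Small 4-state example with unit and non-unit letter weights (hypothetical data).
delta = {0: {'a': 1, 'b': 0}, 1: {'a': 2, 'b': 1}, 2: {'a': 3, 'b': 2}, 3: {'a': 0, 'b': 0}}
print(greedy_weighted_sync_word(delta, {'a': 1.0, 'b': 2.5}))
```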
|
In this paper, we consider the discrete Laguerre polynomials $P_{n, N}(z)$
orthogonal with respect to the weight function $w(x) = x^{\alpha} e^{-N cx}$
supported on the infinite nodes $L_N = \{ x_{k,N} = \frac{k^2}{N^2}, k \in
\mathbb{N} \}$. We focus on the "band-saturated region" situation when the
parameter $c > \frac{\pi^2}{4}$. As $n \to \infty$, uniform expansions for
$P_{n, n}(z)$ are achieved for $z$ in different regions in the complex plane.
Typically, the Airy-function expansions and Gamma-function expansions are
derived for $z$ near the endpoints of the band and the origin, respectively.
The asymptotics for the normalizing coefficient $h_{n, N}$, recurrence
coefficients $\mathscr{B}_{n, N}$ and $\mathscr{A}_{n, N}^2$, are also
obtained. Our method is based on the Deift-Zhou steepest descent method for
Riemann-Hilbert problems.
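For readers less familiar with discrete orthogonal polynomials, recall that (in the standard monic normalization, which may differ from the paper's conventions) the quantities above are tied together by the discrete orthogonality relation on the nodes $L_N$ and the three-term recurrence in which $\mathscr{B}_{n,N}$ and $\mathscr{A}_{n,N}^2$ appear:
\[
\sum_{k\in\mathbb{N}} P_{m,N}(x_{k,N})\,P_{n,N}(x_{k,N})\,w(x_{k,N}) = h_{n,N}\,\delta_{mn},
\qquad
P_{n+1,N}(z) = (z-\mathscr{B}_{n,N})\,P_{n,N}(z) - \mathscr{A}_{n,N}^2\,P_{n-1,N}(z).
\]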
|
Debugging is an essential process that accounts for a large share of the development
effort, being a relentless quest for the offending code through tracing, inspection
and iterative running sessions. Probably every developer has been in a
situation with a clear wish to rewind time just for a while, only to retry some
actions alternatively, instead of restarting the entire session. Well, the
genie that fulfills such a wish is known as a reverse debugger. The inherent
technical complexity of reverse debuggers makes them very hard to implement, while the
imposed execution overhead makes them less attractive for adoption. Only a
few are available, most of them off-line tools that work on recorded, previously run
sessions. We consider live reverse debuggers both challenging and promising,
since they can fit into existing forward debuggers, and we developed the first
live reverse debugger on top of LLDB, discussing in detail our implementation
approach.
|
Graph structures are powerful tools for modeling the relationships between
textual elements. Graph-of-Words (GoW) has been adopted in many Natural
Language tasks to encode the association between terms. However, GoW provides
few document-level relationships in cases when the connections between
documents are also essential. For identifying sub-events on social media such as
Twitter, features at both the word and document level can be useful, as they
provide different information about the event. We propose a hybrid Graph-of-Tweets
(GoT) model which combines the word- and document-level structures for modeling
Tweets. To compress the large amount of raw data, we propose a graph merging method
which utilizes FastText word embeddings to reduce the GoW. Furthermore, we
present a novel method to construct GoT with the reduced GoW and a Mutual
Information (MI) measure. Finally, we identify maximal cliques to extract
popular sub-events. Our model showed promising results on condensing
lexical-level information and capturing keywords of sub-events.
|
Environmental epidemiologists are increasingly interested in establishing
causality between exposures and health outcomes. A popular model for causal
inference is the Rubin Causal Model (RCM), which typically seeks to estimate
the average difference in study units' potential outcomes. An important
assumption under RCM is no interference; that is, the potential outcomes of one
unit are not affected by the exposure status of other units. The no-interference
assumption is violated if we expect spillover or diffusion of exposure effects
based on units' proximity to other units, in which case several other
causal estimands arise. Air pollution epidemiology typically violates this
assumption when we expect upwind events to affect downwind or nearby locations.
This paper adapts causal assumptions from social network research to address
interference and allow estimation of both direct and spillover causal effects.
We use propensity score-based methods to estimate these effects when
considering the effects of the Environmental Protection Agency's 2005
nonattainment designations for particulate matter with aerodynamic diameter
less than 2.5 micrometers (PM2.5) on lung cancer incidence using
county-level data obtained from the Surveillance, Epidemiology, and End Results
(SEER) Program. We compare these methods in a rigorous simulation study that
considers spatially autocorrelated variables, interference, and missing
confounders. We find that pruning and matching based on the propensity score
produces the highest coverage probability of the true causal effects and lower
mean squared error. When applied to the research question, we found protective
direct and spillover causal effects.
|
With the widespread use of Deep Neural Networks (DNNs), machine learning
algorithms have evolved in two diverse directions -- one with ever-increasing
connection density for better accuracy and the other with more compact sizing
for energy efficiency. The increase in connection density increases on-chip
data movement, which makes efficient on-chip communication a critical function
of the DNN accelerator. The contribution of this work is threefold. First, we
illustrate that the point-to-point (P2P)-based interconnect is incapable of
handling a high volume of on-chip data movement for DNNs. Second, we evaluate
P2P and network-on-chip (NoC) interconnect (with a regular topology such as a
mesh) for SRAM- and ReRAM-based in-memory computing (IMC) architectures for a
range of DNNs. This analysis shows the necessity for the optimal interconnect
choice for an IMC DNN accelerator. Finally, we perform an experimental
evaluation for different DNNs to empirically obtain the performance of the IMC
architecture with both NoC-tree and NoC-mesh. We conclude that, at the tile
level, NoC-tree is appropriate for compact DNNs employed at the edge, and
NoC-mesh is necessary to accelerate DNNs with high connection density.
Furthermore, we propose a technique to determine the optimal choice of
interconnect for any given DNN. In this technique, we use analytical models of
NoC to evaluate end-to-end communication latency of any given DNN. We
demonstrate that the interconnect optimization in the IMC architecture results
in up to 6$\times$ improvement in energy-delay-area product for VGG-19
inference compared to the state-of-the-art ReRAM-based IMC architectures.
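To make the interconnect comparison concrete, the toy model below estimates average zero-load hop counts for a 2D NoC-mesh and a balanced binary NoC-tree under uniform random tile-to-tile traffic, and converts them to cycles with an assumed per-hop router delay. This is a first-order sketch for illustration only (real NoC latency also includes serialization, contention and link pipelining), and the tile count and per-hop delay are placeholder values, not the analytical models used in the paper.

```python
import itertools

def mesh_avg_hops(k):
    """Average Manhattan distance between two distinct tiles on a k x k mesh."""
    tiles = list(itertools.product(range(k), range(k)))
    pairs = list(itertools.combinations(tiles, 2))
    return sum(abs(ax - bx) + abs(ay - by) for (ax, ay), (bx, by) in pairs) / len(pairs)

def tree_avg_hops(n_leaves):
    """Average leaf-to-leaf hop count in a balanced binary NoC-tree (n_leaves a power of two)."""
    def dist(i, j):
        d = 0
        while i != j:          # climb one level on both sides until the paths meet
            i //= 2
            j //= 2
            d += 2
        return d
    pairs = list(itertools.combinations(range(n_leaves), 2))
    return sum(dist(i, j) for i, j in pairs) / len(pairs)

N_TILES = 64                   # hypothetical IMC accelerator with 64 tiles
PER_HOP_CYCLES = 4             # assumed router + link delay per hop (placeholder value)
mesh = mesh_avg_hops(int(N_TILES ** 0.5))
tree = tree_avg_hops(N_TILES)
print(f"NoC-mesh: {mesh:.2f} hops on average, ~{mesh * PER_HOP_CYCLES:.1f} cycles zero-load")
print(f"NoC-tree: {tree:.2f} hops on average, ~{tree * PER_HOP_CYCLES:.1f} cycles zero-load")
```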
|
Currently, urban autonomous driving remains challenging because of the
complexity of the driving environment. Learning-based approaches, such as
reinforcement learning (RL) and imitation learning (IL), have indicated
superiority over rule-based approaches, showing great potential to make
decisions intelligently, but they still do not work well in urban driving
situations. To better tackle this problem, this paper proposes a novel
learning-based method that combines deep reinforcement learning with expert
demonstrations, focusing on longitudinal motion control in autonomous driving.
Our proposed method employs the soft actor-critic structure and modifies the
learning process of the policy network to incorporate both the goals of
maximizing reward and imitating the expert. Moreover, an adaptive prioritized
experience replay is designed to sample experience from both the agent's
self-exploration and expert demonstration, in order to improve the sample
efficiency. The proposed method is validated in a simulated urban roundabout
scenario and compared with various prevailing RL and IL baseline approaches.
The results show that the proposed method has a faster training speed, as
well as better performance in navigating safely and time-efficiently. The
ablation study reveals that the prioritized replay and expert demonstration
filter play important roles in our proposed method.
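The snippet below sketches one generic way to mix prioritized sampling from an agent buffer and an expert-demonstration buffer, with priorities proportional to the absolute TD error raised to a power alpha. It is a minimal illustration of the idea under an assumed fixed expert fraction; the paper's adaptive scheme and its expert demonstration filter are not reproduced here.

```python
import numpy as np

class MixedPrioritizedReplay:
    """Toy prioritized replay drawing from agent and expert buffers (illustrative only)."""

    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.buffers = {"agent": [], "expert": []}     # lists of (transition, priority)

    def add(self, transition, td_error, expert=False):
        key = "expert" if expert else "agent"
        self.buffers[key].append((transition, (abs(td_error) + 1e-6) ** self.alpha))

    def sample(self, batch_size, expert_fraction=0.25, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        n_expert = int(round(batch_size * expert_fraction))
        batch = []
        for key, n in (("expert", n_expert), ("agent", batch_size - n_expert)):
            data = self.buffers[key]
            if not data or n == 0:
                continue
            p = np.array([pri for _, pri in data])
            idx = rng.choice(len(data), size=n, p=p / p.sum())
            batch.extend(data[i][0] for i in idx)
        return batch

# Hypothetical usage with dummy transitions.
buf = MixedPrioritizedReplay()
for t in range(100):
    buf.add({"obs": t}, td_error=np.random.randn(), expert=(t % 5 == 0))
print(len(buf.sample(32)))
```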
|
Video quality assessment (VQA) is now a fast-growing subject, maturing in the
full reference (FR) case, yet challenging in the exploding no reference (NR)
case. We investigate variants of the popular VMAF video quality assessment
algorithm for the FR case, using both support vector regression and feedforward
neural networks. We extend it to the NR case, using some different features but
similar learning, to develop a partially unified framework for VQA. When fully
trained, FR algorithms such as VMAF perform very well on test datasets,
reaching 90%+ match in PCC and SRCC; but for predicting performance in the
wild, we train/test from scratch for each database. With an 80/20 train/test
split, we still achieve about 90% performance on average in both PCC and SRCC,
with up to 7-9% gains over VMAF, using an improved motion feature and better
regression. Moreover, we even get decent performance (about 75%) if we ignore
the reference, treating FR as NR, partly justifying our attempts at
unification. In the true NR case, we reduce complexity vs. leading recent
algorithms VIDEVAL, RAPIQUE, yet achieve performance within 3-5%. Moreover, we
develop a method to analyze the saliency of features, and conclude that for
both VIDEVAL and RAPIQUE, a small subset of their features are providing the
bulk of the performance. In short, we find encouraging improvements in
trainability in FR, while constraining training complexity against leading
methods in NR, elucidating the saliency of features for feature selection.
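As a minimal illustration of the "features plus shallow regression" recipe described above, the sketch below fits a support vector regressor to a feature matrix and scores it with PCC and SRCC; the feature values and quality labels here are synthetic stand-ins, not VMAF features or real MOS data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                              # stand-in for per-video quality features
y = X @ rng.normal(size=6) + 0.3 * rng.normal(size=500)    # stand-in for subjective scores (MOS)

# 80/20 split, matching the protocol described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
pred = model.predict(X_te)

print("PCC :", pearsonr(y_te, pred)[0])
print("SRCC:", spearmanr(y_te, pred)[0])
```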
|
Primordial black holes (PBHs), formed out of large overdensities in the early
Universe, are a viable dark matter (DM) candidate over a broad range of masses.
Ultra-light, asteroid-mass PBHs with masses around $10^{17}$ g are particularly
interesting as current observations allow them to constitute the entire DM
density. PBHs in this mass range emit $\sim$ MeV photons via Hawking radiation
which can be directly detected by gamma-ray telescopes such as the
upcoming AMEGO. In this work we forecast how well an instrument with the
sensitivity of AMEGO will be able to detect, or rule out, PBHs as a DM
candidate, by searching for their evaporating signature when marginalizing over
the Galactic and extra-Galactic gamma-ray backgrounds. We find that an
instrument with the sensitivity of AMEGO could exclude non-rotating PBHs as the
only DM component for masses up to $7 \times 10^{17}$ g at 95% confidence level
(C.L.) for a monochromatic mass distribution, improving upon current bounds by
nearly an order of magnitude. The forecasted constraints are more stringent for
PBHs that have rotation, or which follow extended mass distributions.
|
The diurnal cycle of CO$_2$ emissions from fossil fuel combustion and cement
production reflects seasonality, weather conditions, working days, and more
recently the impact of the COVID-19 pandemic. Here, for the first time we
provide a daily CO$_2$ emission dataset for the whole year of 2020 calculated
from inventory and near-real-time activity data (called Carbon Monitor project:
https://carbonmonitor.org). Preliminary estimates that did not cover the entire
year of 2020 suggested that the pandemic may have caused an annual decline of
more than 8% in global CO$_2$ emissions. Here we show from detailed estimates
of the full-year data that the global reduction was only 5.4% (-1,901
MtCO$_2$). This decrease is 5 times larger than the
annual emission drop at the peak of the 2008 Global Financial Crisis. However,
global CO$_2$ emissions gradually recovered towards 2019 levels from late April
with global partial re-opening. More importantly, global CO$_2$ emissions even
increased slightly, by +0.9%, in December 2020 compared with 2019, indicating a
rebound of global emissions. Later waves of COVID-19 infections in
late 2020 and corresponding lockdowns have caused further CO$_2$ emissions
reductions particularly in western countries, but to a much smaller extent than
the declines in the first wave. That even substantial world-wide lockdowns of
activity led to a one-time decline in global CO$_2$ emissions of only 5.4% in
one year highlights the significant challenges for climate change mitigation
that we face in the post-COVID era. These declines are significant, but will be
quickly overtaken by new emissions unless the COVID-19 crisis is used as
a break-point in our fossil-fuel trajectory, notably through policies that
make the COVID-19 recovery an opportunity to green national energy and
development plans.
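As a quick consistency check of the quoted figures (a back-of-the-envelope computation, not an additional result), the 5.4% relative decline together with the -1,901 MtCO$_2$ absolute decline imply a 2019 reference level of roughly
\[
E_{2019} \approx \frac{1901\ \mathrm{MtCO_2}}{0.054} \approx 3.5\times10^{4}\ \mathrm{MtCO_2} \approx 35\ \mathrm{GtCO_2}.
\]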
|
A laser multicharged ion source was used to perform interfacial treatment of
the 4H-SiC/SiO2 interface using B and Ba ions. A Q-switched Nd:YAG laser
(wavelength $\lambda$ = 1064 nm, pulse width $\tau$ = 7 ns, and fluence F = 135
J/cm$^2$) was used to ablate B and Ba targets to generate multicharged ions. The
ions were deflected by an electrostatic field to separate them from the
neutrals. The multicharged ions were used for nanometer layer growth and
shallow ion implantation in 4H-SiC. Several metal-oxide-semiconductor
capacitors (MOSCAP) were fabricated with a combination of B and Ba at the
SiC/SiO2 interface. High-low C-V measurements were used to characterize the
MOSCAPs. The B interfacial layer reduced the MOSCAP flatband voltage from 4.5
to 0.04 V, while the Ba layer had a negligible effect.
|
Nanoconfinement has been shown to drastically affect the physical properties
of water. Its effects on reactivity and dissociation, however, remain
controversial, despite their importance to understand aqueous chemistry at
interfaces, pores, aerosols or protein cavities, and the growing evidence
indicating the acceleration of chemical kinetics in confined environments. The
dissociation constant $K_w$ in nanospaces has been assessed from experiments
and simulations in a few specific cases, leading to dissimilar conclusions.
Here, using carefully designed ab-initio simulations and data-science tools, we
demonstrate, challenging current misconceptions, that the energetics of bulk
water dissociation remains unchanged to surprisingly small length-scales
including clusters of only a dozen water molecules. This is rooted in the fact
that most of the free-energy involved in water autoionization comes from
breaking the O-H covalent bond, which has a comparable barrier in the bulk
liquid or in droplets of nanometer size. The subsequent separation of the
hydroxide and hydronium species contributes a marginal fraction of the total
free-energy, for which reason it turns out that confinement exerts little
control on this process. The present work provides a definitive and fundamental
description of the mechanism and thermodynamics of water dissociation at
different scales with broader implications on water's self-ionization at the
air-liquid interface and on chemical reactivity under nanoconfinement.
|
Recent experiments have demonstrated strong light-matter coupling between
electromagnetic nanoresonators and pristine sheets of two-dimensional
semiconductors, and it has been speculated whether these systems can enter the
quantum regime operating at the few-polariton level. To address this question,
we present a first-principles microscopic quantum theory for the interaction
between excitons in an infinite sheet of two-dimensional material and a
localised electromagnetic resonator. We find that the light-matter interaction
breaks the symmetry of the otherwise translation-invariant system and thereby
effectively generates a localised exciton mode, which is coupled to an
environment of residual exciton modes. This dissipative coupling increases with
tighter lateral confinement, and our analysis reveals this to be a potential
challenge in realising nonlinear exciton-exciton interaction. Nonetheless, we
predict that polariton blockade due to nonlinear exciton-exciton interactions
is well within reach for nanoresonators coupled to transition-metal
dichalcogenides, provided that the lateral resonator mode confinement can be
sufficiently small that the nonlinearity overcomes the polariton dephasing
caused by phonon interactions.
|
The sample efficiency of Bayesian optimization (BO) is often boosted by
Gaussian Process (GP) surrogate models. However, on mixed-variable spaces,
surrogate models other than GPs are prevalent, mainly due to the lack of
kernels which can model complex dependencies across different types of
variables. In this paper, we propose the frequency modulated (FM) kernel, which
flexibly models dependencies among different types of variables, so that BO
can enjoy further improved sample efficiency. The FM kernel uses distances
on continuous variables to modulate the graph Fourier spectrum derived from
discrete variables. However, the frequency modulation does not always define a
kernel with the similarity measure behavior which returns higher values for
pairs of more similar points. Therefore, we specify and prove conditions for FM
kernels to be positive definite and to exhibit the similarity measure behavior.
In experiments, we demonstrate the improved sample efficiency of GP BO using FM
kernels (BO-FM). On synthetic problems and hyperparameter optimization problems,
BO-FM outperforms competitors consistently. Also, the importance of the
frequency modulation principle is empirically demonstrated on the same
problems. On joint optimization of neural architectures and SGD
hyperparameters, BO-FM outperforms competitors including Regularized
Evolution (RE) and BOHB. Remarkably, BO-FM performs better even than RE and BOHB
using three times as many evaluations.
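The toy code below illustrates the frequency-modulation idea in the simplest possible form: the graph Fourier spectrum of a small discrete space (eigenpairs of its graph Laplacian) is attenuated by a factor that grows with the squared distance between the continuous inputs. This is only a schematic variant for intuition; as the abstract notes, not every such modulation yields a positive-definite kernel, and the paper's FM kernel and its conditions are not reproduced here.

```python
import numpy as np

# Path graph on 5 vertices as the discrete part of the search space.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A                      # graph Laplacian
lam, phi = np.linalg.eigh(L)                   # graph Fourier spectrum (lam) and modes (phi)

def fm_style_kernel(v1, x1, v2, x2, beta=1.0):
    """Schematic frequency-modulated similarity between points (vertex v, continuous x).

    Each graph frequency lam_i is damped by exp(-lam_i * (beta + ||x1 - x2||^2)),
    so a larger continuous distance suppresses the high-frequency components.
    """
    modulation = np.exp(-lam * (beta + np.sum((np.asarray(x1) - np.asarray(x2)) ** 2)))
    return float(np.sum(modulation * phi[v1] * phi[v2]))

print(fm_style_kernel(0, [0.1, 0.2], 1, [0.1, 0.2]))   # nearby in both spaces
print(fm_style_kernel(0, [0.1, 0.2], 4, [2.0, -1.0]))  # far apart in both spaces
```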
|
The state-of-the-art on basic, single-antecedent anaphora has greatly
improved in recent years. Researchers have therefore started to pay more
attention to more complex cases of anaphora such as split-antecedent anaphora,
as in "Time-Warner is considering a legal challenge to Telecommunications Inc's
plan to buy half of Showtime Networks Inc - a move that could lead to all-out war
between the two powerful companies." Split-antecedent anaphora is rarer and more
complex to resolve than single-antecedent anaphora; as a result, it is not
annotated in many datasets designed to test coreference, and previous work on
resolving this type of anaphora was carried out in unrealistic conditions that
assume gold mentions and/or gold split-antecedent anaphors are available. These
systems also focus on split-antecedent anaphors only. In this work, we
introduce a system that resolves both single and split-antecedent anaphors, and
evaluate it in a more realistic setting that uses predicted mentions. We also
start addressing the question of how to evaluate single and split-antecedent
anaphors together using standard coreference evaluation metrics.
|
Starting with twisted bilayer graphene, graphene-based moir\'e materials have
recently been established as a new platform for studying strong electron
correlations. In this paper, we study twisted graphene monolayers on trilayer
graphene and demonstrate that this system can host flat bands when the twist
angle is close to the magic-angle of 1.16$^\circ$. When monolayer graphene is
twisted on ABA trilayer graphene, the flat bands are not isolated, but are
intersected by a Dirac cone with a large Fermi velocity. In contrast, graphene
twisted on ABC trilayer graphene (denoted AtABC) exhibits a gap between flat
and remote bands. Since ABC trilayer graphene and twisted bilayer graphene are
known to host broken-symmetry phases, we further investigate the ostensibly
similar magic angle AtABC system. We study the effect of electron-electron
interactions in AtABC using both Hartree theory and an atomic Hubbard theory to
calculate the magnetic phase diagram as a function of doping, twist angle, and
perpendicular electric field. Our analysis reveals a rich variety of magnetic
orderings, including ferromagnetism and ferrimagnetism, and demonstrates that a
perpendicular electric field makes AtABC more susceptible to magnetic ordering.
|
Accurately describing and detecting 2D and 3D keypoints is crucial to
establishing correspondences across images and point clouds. Despite a plethora
of learning-based 2D or 3D local feature descriptors and detectors having been
proposed, the derivation of a shared descriptor and joint keypoint detector
that directly matches pixels and points remains under-explored by the
community. This work takes the initiative to establish fine-grained
correspondences between 2D images and 3D point clouds. In order to directly
match pixels and points, a dual fully convolutional framework is presented that
maps 2D and 3D inputs into a shared latent representation space to
simultaneously describe and detect keypoints. Furthermore, an ultra-wide
reception mechanism, in combination with a novel loss function, is designed to
mitigate the intrinsic information variations between pixel and point local
regions. Extensive experimental results demonstrate that our framework shows
competitive performance in fine-grained matching between images and point
clouds and achieves state-of-the-art results for the task of indoor visual
localization. Our source code will be available at [no-name-for-blind-review].
|
Using the two-level approximation of the energy barrier, we perform extensive
kinetic Monte Carlo simulations to probe the relaxation characteristics in a
two-dimensional ($L^{}_x\times L^{}_y$) array of magnetic nanoparticles (MNPs) as a
function of dipolar interaction strength $h^{}_d$, aspect ratio
$A^{}_r=L^{}_y/L^{}_x$, and temperature $T$. In the case of weak dipolar
interaction ($h^{}_d\approx0$) and substantial temperature, the magnetic
relaxation follows the N\'eel Brown model as expected. Interestingly, the
dipolar interaction of enough strength is found to induce antiferromagnetic
coupling in the square arrangement of MNPs ($A^{}_r=1.0$), resulting in a
speeding up of magnetic relaxation with $h^{}_d$. There is also a rapid increase
in relaxation even with $A^{}_r<100$ above a particular dipolar interaction
strength $h^{\star}_d$, which gets enhanced with $A^{}_r$. Remarkably, there is
a slowing down of magnetic relaxation with $h^{}_d$ for highly anisotropic
systems such as a linear chain of MNPs. This is because the dipolar interaction
induces ferromagnetic coupling in such a case. The thermal fluctuations also
affect the relaxation properties drastically. In the weak dipolar
limit, magnetization relaxes rapidly with $T$ because of enhanced thermal
fluctuations. The effect of dipolar interaction and aspect ratio on the
magnetic relaxation is also clearly indicated in the variation of N\'eel
relaxation time $\tau^{}_N$. In the presence of strong dipolar interaction
($h^{}_d>0.3$) and $A^{}_r=1.0$, $\tau^{}_N$ decreases with $h^{}_d$ for a
given temperature. On the other hand, there is an increase in $\tau^{}_N$ with
$h^{}_d$ for huge $A^{}_r$ $(>100)$. We believe that the concepts presented in
this work are beneficial for the efficient use of self-assembled MNP arrays in
data storage and other related applications.
|
A permutation $\pi$ contains a pattern $\sigma$ if and only if there is a
subsequence in $\pi$ whose letters are in the same relative order as those
in $\sigma$. Partially ordered patterns (POPs) provide a convenient way to
denote patterns in which the relative order of some of the letters does not
matter. This paper elucidates connections between the avoidance sets of a few
POPs with other combinatorial objects, directly answering five open questions
posed by Gao and Kitaev \cite{gao-kitaev-2019}. We do this by thoroughly
analysing the avoidance sets and developing recursive algorithms that derive
these sets and their corresponding combinatorial objects in parallel, which
yields a natural bijection. We also analyse an avoidance set whose simple
permutations are enumerated by the Fibonacci numbers and derive an algorithm
to obtain them recursively.
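For concreteness, the classical containment test in the first sentence can be written as a brute-force check; the sketch below handles ordinary patterns only and does not implement POP avoidance or the paper's recursive algorithms.

```python
from itertools import combinations

def relative_order(seq):
    """Map a sequence of distinct numbers to its pattern, e.g. (4, 1, 7) -> (1, 0, 2)."""
    ranks = {v: r for r, v in enumerate(sorted(seq))}
    return tuple(ranks[v] for v in seq)

def contains(pi, sigma):
    """True iff permutation pi contains the pattern sigma (brute force)."""
    target = relative_order(sigma)
    return any(relative_order(sub) == target for sub in combinations(pi, len(sigma)))

print(contains((3, 1, 4, 2), (2, 1)))   # True: e.g. the subsequence (3, 1)
print(contains((1, 2, 3, 4), (2, 1)))   # False: no descent in an increasing permutation
```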
|
We study the implication of $J/\psi$ decay into invisible particles for light
sterile neutrino and sub-GeV dark matter (DM). The low-energy effective field
theories (EFTs) are used for the description of general neutrino interactions
and the Dirac fermion DM coupled to charm quark. For $J/\psi\to \gamma+{\rm
invisible}$, we perform the likelihood fits for the individual neutrino and DM
operators with distinct Lorentz structures and photon spectra. The limits on
the decay branching fractions are obtained for different neutrino or DM
scenarios and then converted to the lower bounds on the new energy scales. The
most stringent bounds on the energy scale in neutrino and DM EFTs are 12.8 GeV
and 11.6 GeV, respectively. The purely invisible decay $J/\psi\to {\rm
invisible}$ provides complementary constraints on the effective operators. The
relevant bound on the energy scale is above 100 GeV for the dipole operators.
We also evaluate the limit on the DM-nucleon scattering cross section converted
from $J/\psi$ data. The data of $J/\psi$ invisible decays are sensitive to the
light DM mass range where the DM direct detection experiments cannot probe yet.
The future Super Tau Charm Factory can push the limits down by two orders of
magnitude after a one-year run.
|
Superpixels are higher-order perceptual groups of pixels in an image, often
carrying much more information than raw pixels. There is an inherent relational
structure among the different superpixels of an image. This
relational information can convey some form of domain information about the
image, e.g. the relationship between superpixels representing the two eyes in a cat
image. Our interest in this paper is to construct computer vision models,
specifically those based on Deep Neural Networks (DNNs), that incorporate this
superpixel information. We propose a methodology to construct a hybrid model
that leverages (a) Convolutional Neural Network (CNN) to deal with spatial
information in an image, and (b) Graph Neural Network (GNN) to deal with
relational superpixel information in the image. The proposed deep model is
learned using a generic loss function that we call a `hybrid' loss. We
evaluate the predictive performance of our proposed hybrid vision model on four
popular image classification datasets: MNIST, FMNIST, CIFAR-10 and CIFAR-100.
Moreover, we evaluate our method on three real-world classification tasks:
COVID-19 X-Ray Detection, LFW Face Recognition, and SOCOFing Fingerprint
Identification. The results demonstrate that the relational superpixel
information provided via a GNN could improve the performance of standard
CNN-based vision systems.
|
We consider the Cauchy problem for the Hardy parabolic equation $\partial_t
u-\Delta u=|x|^{-\gamma}u^p$ with initial data $u_0$ singular at some point
$z$. Our main results show that, if $z\neq 0$, then the optimal strength of the
singularity of $u_0$ at $z$ for the solvability of the equation is the same as
that of the Fujita equation $\partial_t u-\Delta u=u^p$. Moreover, if $z=0$,
then the optimal singularity for the Hardy parabolic equation is weaker than
that of the Fujita equation. We also obtain analogous results for a fractional
case $\partial_t u+(-\Delta)^{\theta/2} u=|x|^{-\gamma}u^p$ with $0<\theta<2$.
|
We extend the notion of quasi-transitive orientations of graphs to
2-edge-coloured graphs. By relating quasi-transitive $2$-edge-colourings to an
equivalence relation on the edge set of a graph, we classify those graphs that
admit a quasi-transitive $2$-edge-colouring. As a contrast to
Ghouil\'{a}-Houri's classification of quasi-transitively orientable graphs as
comparability graphs, we find quasi-transitively $2$-edge-colourable graphs do
not admit a forbidden subgraph characterization. Restricting the problem to
comparability graphs, we show that the family of uniquely quasi-transitively
orientable comparability graphs is exactly the family of comparability graphs
that admit no quasi-transitive $2$-edge-colouring.
|
The properties of exotic stars are investigated. In particular, we study
objects made entirely of dark matter and we take into account intrinsic
anisotropies which have been ignored so far. We obtain exact analytical
solutions to the structure equations and we show that those solutions i) are
well behaved within General Relativity, and ii) are capable of describing
realistic astrophysical configurations.
|
Spatial confinement of matter in functional nanostructures has propelled
these systems to the forefront of nanoscience, both as a playground for exotic
physics and quantum phenomena and in multiple applications including
plasmonics, optoelectronics, and sensing. In parallel, the emergence of
monochromated electron energy loss spectroscopy (EELS) has enabled exploration
of local nanoplasmonic functionalities within single nanoparticles and the
collective response of nanoparticle assemblies, providing deep insight into the
associated mechanisms. However, modern synthesis processes for plasmonic
nanostructures are often limited in the types of accessible geometry and
materials, and even then, limited to spatial precisions on the order of tens of
nm, precluding the direct exploration of critical aspects of the
structure-property relationships. Here, we use the atomic-sized probe of the
scanning transmission electron microscope (STEM) to perform precise sculpting
and design of nanoparticle configurations. Furthermore, using low-loss EELS,
we provide dynamic analyses of the evolution of the plasmonic response during the
sculpting process. We show that within self-assembled systems of nanoparticles,
individual nanoparticles can be selectively removed, reshaped, or arbitrarily
patterned with nanometer-level resolution, effectively modifying the plasmonic
response in both space and energy domains. This process significantly increases
the scope for design possibilities and presents opportunities for arbitrary
structure development, which are ultimately key for nanophotonic design.
Nanosculpting introduces yet another capability to the electron microscope.
|
We show that a one-dimensional ordered fermionic lattice system with
power-law-decaying hopping, when connected to two baths at its two ends with
different chemical potentials at zero temperature, features two phases showing
sub-diffusive scaling of conductance with system size. These phases have no
analogues in the isolated system (i.e., in the absence of the baths), where the
transport is perfectly ballistic. In the open-system scenario, interestingly,
there occur two chemical-potential-driven sub-diffusive to ballistic phase
transitions at zero temperature. We discuss how these phase transitions, to our
knowledge, are different from all the known non-equilibrium quantum phase
transitions. We provide a clear understanding of the microscopic origin of
these phases and argue that the sub-diffusive phases are robust against the
presence of arbitrary number-conserving many-body interactions in the system.
These phases showing sub-diffusive scaling of conductance with system size in a
two-terminal set-up are therefore universal properties of all ordered
one-dimensional number-conserving fermionic systems with power-law-decaying
hopping at zero temperature.
|
Session types denote message protocols between concurrent processes, allowing
a type-safe expression of inter-process communication. Although previous work
demonstrates a well-defined notion of subtyping where processes have different
perceptions of the protocol, these formulations were limited to linear session
types where each channel of communication has a unique provider and client. In
this paper, we extend subtyping to shared session types where channels can now
have multiple clients instead of a single client. We demonstrate that this
generalization can statically capture protocol requirements that span multiple
phases of interactions of a client with a shared service provider, something
not possible in prior proposals. Moreover, the phases are manifest in the type
of the client.
|
The Zero Trust security model secures cloud-native applications by
encrypting all network communication and by authenticating and authorizing every
request. A service mesh can enable Zero Trust using a sidecar proxy without
changes to the application code. To the best of our knowledge, no previous work
has provided a performance analysis of Zero Trust in a multi-cloud environment.
This paper proposes a multi-cloud framework and a testing workflow to analyze
performance of the data plane under load and the impact on the control plane,
when Zero Trust is enabled. The results of preliminary tests show that Istio
has reduced latency variability in responding to sequential HTTP requests.
Results also reveal that the overall CPU and memory usage can increase based on
service mesh configuration and the cloud environment.
|
Modern Automatic Speech Recognition (ASR) systems can achieve high
performance in terms of recognition accuracy. However, a perfectly accurate
transcript can still be challenging to read due to disfluencies, filler words,
and other errata common in spoken communication. Many downstream tasks and
human readers rely on the output of the ASR system; therefore, errors
introduced by the speaker and ASR system alike will be propagated to the next
task in the pipeline. In this work, we propose an ASR post-processing model
that aims to transform the incorrect and noisy ASR output into a readable text
for humans and downstream tasks. We leverage the Metadata Extraction (MDE)
corpus to construct a task-specific dataset for our study. Since the dataset is
small, we propose a novel data augmentation method and use a two-stage training
strategy to fine-tune the RoBERTa pre-trained model. On the constructed test
set, our model outperforms a production two-step pipeline-based post-processing
method by a large margin of 13.26 on readability-aware WER (RA-WER) and 17.53
on BLEU metrics. Human evaluation also demonstrates that our method can
generate more human-readable transcripts than the baseline method.
|
Identifying an entanglement island requires exquisite control over the
entropy of quantum fields, which is available only in toy models. Here we
present a set of sufficient conditions that guarantee the existence of an
island and place an upper bound on the entropy computed by the island rule.
This is enough to derive the main features of the Page curve for an evaporating
black hole in any spacetime dimension. Our argument makes use of Wall's maximin
formulation and the Quantum Focusing Conjecture. As a corollary, we derive a
novel entropy bound.
|
The present work reports on the numerical investigation of NOx in three
turbulent piloted diffusion flames of different levels of extinction. The study
involves two-dimensional axisymmetric modeling of combustion in these flames
with fairly detailed chemistry, i.e. GRI 3.0 mechanism. The main focus of the
study is to analyze the effects of two different combustion modeling
approaches, namely the infinitely-fast-chemistry-based unsteady flamelet and
the finite-rate-chemistry-based EDC, in predicting NOx formation in three
piloted methane jet flames (Sandia D, E, and F). The EDC approach is able to
predict the passive scalar quantities but shows over-prediction in the reactive
scalar quantities and NO prediction, while the unsteady flamelet modeling is
found to be essential in predicting the accurate formation of slow kinetic
species like NOx. The inability of the flamelet and EDC approaches to capture
localized flame extinction is observed, which leads to an over-prediction of NOx
at farther downstream locations. Further, the dominance of NOx formation
pathways is investigated in all three flames.
|
We prove that the fundamental group of a finite graph of convergence groups
with parabolic edge groups is a convergence group. Using this result, under
some mild assumptions, we prove combination theorems for a graph of convergence
groups with dynamically quasi-convex edge groups (Theorems 1.3, 1.5). In the
proofs of these results, we generalize Dahmani's technique. Finally, we
prove that the fundamental group of a graph of relatively hyperbolic groups
with edge groups either parabolic or infinite cyclic is relatively hyperbolic
and construct the Bowditch boundary.
|
Energy efficiency and energy conservation are among the most crucial
constraints for meeting the 20 MW power envelope desired for exascale systems.
Towards this, most of the research in this area has been focused on the
utilization of user-controllable hardware switches such as per-core dynamic
voltage frequency scaling (DVFS) and software controlled clock modulation at
the application level. In this paper, we present a tuning plugin for the
Periscope Tuning Framework which integrates fine-grained autotuning at the
region level with DVFS and uncore frequency scaling (UFS). The tuning is based
on a feed-forward neural network which is formulated using Performance
Monitoring Counters (PMC) supported by x86 systems and trained using
standardized benchmarks. Experiments on five standardized hybrid benchmarks
show an energy improvement of 16.1% on average when the applications are tuned
according to our methodology as compared to 7.8% for static tuning.
|
We investigate the sensitivity of the FASER$\nu$ detector to new physics in
the form of non-standard neutrino interactions. FASER$\nu$, which has recently
been installed 480 m downstream of the ATLAS interaction point, will for the
first time study interactions of multi-TeV neutrinos from a controlled source.
Our formalism -- which is applicable to any current and future neutrino
experiment -- is based on the Standard Model Effective Theory~(SMEFT) and its
counterpart, Weak Effective Field Theory~(WEFT), below the electroweak scale.
Starting from the WEFT Lagrangian, we compute the coefficients that modify
neutrino production in meson decays and detection via deep-inelastic
scattering, and we express the new physics effects in terms of modified flavor
transition probabilities. For some coupling structures, we find that FASER$\nu$
will be able to constrain interactions that are two to three orders of
magnitude weaker than Standard Model weak interactions, implying that the
experiment will be indirectly probing new physics at the multi-TeV scale. In
some cases, FASER$\nu$ constraints will become comparable to existing limits -
some of them derived for the first time in this paper - already with
$150~$fb${}^{-1}$ of data.
|
The BKK theorem states that the mixed volume of the Newton polytopes of a
system of polynomial equations upper bounds the number of isolated torus
solutions of the system. Homotopy continuation solvers make use of this fact to
pick efficient start systems. For systems where the mixed volume bound is not
attained, such methods are still tracking more paths than necessary. We propose
a strategy of improvement by lifting a system to an equivalent system with a
strictly lower mixed volume at the expense of more variables. We illustrate
this idea by providing lifting constructions for arbitrary bivariate systems and
certain dense-enough systems.
|
In this paper, we introduce a novel model of artificial intelligence,
the functional neural network, for modeling human decision-making processes.
This neural network is composed of multiple artificial neurons racing in the
network. Each of these neurons has a similar structure programmed independently
by the users and composed of an intention wheel, a motor core and a sensory
core representing the user itself and racing at a specific velocity. The
mathematics of the neuron's formulation and the racing mechanism of multiple
nodes in the network will be discussed, and the group decision process with
fuzzy logic and the transformation of these conceptual methods into practical
methods of simulation and in operations will be developed. Eventually, we will
describe some possible future research directions in the fields of finance,
education and medicine including the opportunity to design an intelligent
learning agent with application in business operations supervision. We believe
that this functional neural network has a promising potential to transform the
way we can compute decision-making and lead to a new generation of neuromorphic
chips for seamless human-machine interactions.
|
An approach to reduce motion artifacts in Quantitative Susceptibility Mapping
using deep learning is proposed. We use an affine motion model with randomly
created motion profiles to simulate motion-corrupted QSM images. The simulated
QSM image is paired with its motion-free reference to train a neural network
using supervised learning. The trained network is tested on unseen simulated
motion-corrupted QSM images, in healthy volunteers and in Parkinson's disease
patients. The results show that motion artifacts, such as ringing and ghosting,
were successfully suppressed.
|
In this paper, we determine the harvested power region of a two-user
multiple-input single-output (MISO) wireless power transfer (WPT) system for a
non-linear model of the rectennas at the energy harvester (EH) nodes. To this
end, we characterize the distributions of the transmit symbol vector that
achieve individual points on the boundary of this region. Each distribution is
obtained as solution of an optimization problem where we maximize a weighted
sum of the average harvested powers at the EH nodes under a constraint on the
power budget of the transmitter. We prove that the optimal transmit strategy
employs two beamforming vectors and scalar unit norm transmit symbols with
arbitrary phase. To determine the beamforming vectors, we propose an iterative
algorithm based on a two-dimensional grid search, semi-definite relaxation, and
successive convex approximation. Our numerical results reveal that the proposed
design outperforms two baseline schemes based on a linear EH model and a single
beamforming vector, respectively. Finally, we observe that the harvested power
region is convex and the power harvested at one EH node can be traded for a
higher harvested power at the other node.
|
In graph theory and network analysis, node degree is defined as a simple but
powerful centrality measure of the local influence of a node in a complex
network. Preferential attachment based on node degree has been widely adopted
for modeling network growth. However, much evidence shows that real network
growth deviates from what a pure degree-based model suggests. It seems
that node degree is not a reliable measure for predicting the preference of
newcomers in attaching to the network, or at least, it does not tell the whole
story. In this paper, we argue that there is another dimension to network
growth, one that we call node "coreness". The new dimension gives insights on
the global influence of nodes, in comparison to the local view the degree
metric provides. We found that the probability of existing nodes attracting new
nodes generally follows an exponential dependence on node coreness, while at
the same time, follows a power-law dependence on node degree. That is to say,
high-coreness nodes are more powerful than high-degree nodes in attracting
newcomers. The new dimension further discloses some hidden phenomena which
happen in the process of network growth. The power of node degree in attracting
newcomers increases over time while the influence of coreness decreases, and
finally, they reach a state of equilibrium in the growth. All these theories
have been tested on real-world networks.
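A minimal way to probe both dimensions on a given network is sketched below: node degree and coreness (interpreted here as the k-core number) are computed with networkx, and an illustrative attachment score combining a power-law term in degree with an exponential term in coreness is evaluated. The functional form and the exponents are placeholders for illustration; the paper's fitted dependencies are not reproduced here.

```python
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(1000, 3, seed=1)   # stand-in network; use a real growth snapshot in practice

degree = dict(G.degree())
coreness = nx.core_number(G)                    # k-core index of every node

def attachment_score(node, a=1.0, b=0.5):
    """Illustrative preference score: power law in degree, exponential in coreness."""
    return degree[node] ** a * np.exp(b * coreness[node])

scores = np.array([attachment_score(v) for v in G])
probs = scores / scores.sum()                   # attachment probabilities for a newcomer
top = max(G, key=attachment_score)
print(f"most attractive node: {top}, degree={degree[top]}, coreness={coreness[top]}")
```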
|
We have performed density functional calculations in conjunction with the
linearized Migdal-Eliashberg equations and the functional derivative approach,
which takes into account the energy-range dependence of the density of states
at the Fermi level ($N(\varepsilon)$ variable on the scale of phonon energy),
to determine the evolution of the critical temperature ($T_{c}$) and the
isotope effect coefficient ($\alpha$) of H$_{3}$S as a function of pressure on
its $Im\bar{3}m$ crystal-structure range (from 162 to 250 GPa). Such an approach,
in comparison with $N(\varepsilon)$=$N(0)$=const., improves the agreement of
$T_{c}$ with available experiments on the whole range of studied pressure.
Considering for $\alpha$ two main contributions, a positive one coming from the
electron-phonon (el-ph) interaction and a negative one from the electron-electron
(el-el) interaction, we obtained a monotonic decrease as a function of
pressure, independent of the applied scheme ($N(\varepsilon)$ or $N(0)$). However,
when $N(\varepsilon)$ is taken into account, an important renormalization
occurs on both contributions, el-ph and el-el, improving the agreement with
experimental data, especially in the high-pressure regime. The observed
evolution of $T_{c}$ and $\alpha$ as a function of pressure thus indicates
the crucial role of the energy dependence of $N(\varepsilon)$ for the proper
analysis and description of the superconducting state in high-$T_{c}$ metal
hydrides such as H$_{3}$S, by considering the role of its van Hove singularities.
|
We propose a novel deep Gaussian process (DGP) inference method for computer
model emulation using stochastic imputation. By stochastically imputing the
latent layers, the approach transforms the DGP into the linked GP, a
state-of-the-art surrogate model formed by linking a system of feed-forward
coupled GPs. This transformation yields a simple yet efficient DGP training
procedure that only involves optimization of conventional stationary GPs. In
addition, the analytically tractable mean and variance of the linked GP allow
one to implement predictions from DGP emulators in a fast and accurate manner.
We demonstrate the method in a series of synthetic examples and real-world
applications, and show that it is a competitive candidate for efficient DGP
surrogate modeling in comparison to the variational inference and the
fully-Bayesian approach. A $\texttt{Python}$ package $\texttt{dgpsi}$
implementing the method is also produced and available at
https://github.com/mingdeyu/DGP.
|
We consider the two-dimensional ideal Fermi gas subject to a magnetic field
which is perpendicular to the Euclidean plane $\mathbb R^2$ and whose strength
$B(x)$ at $x\in\mathbb R^2$ converges to some $B_0>0$ as $\|x\|\to\infty$.
Furthermore, we allow for an electric potential $V_\varepsilon$ which vanishes
at infinity. They define the single-particle Landau Hamiltonian of our Fermi
gas (up to gauge fixing). Starting from the ground state of this Fermi gas with
chemical potential $\mu\ge B_0$ we study the asymptotic growth of its bipartite
entanglement entropy associated to $L\Lambda$ as $L\to\infty$ for some fixed
bounded region $\Lambda\subset\mathbb R^2$. We show that its leading order in
$L$ does not depend on the perturbations $B_\varepsilon := B_0 - B$ and
$V_\varepsilon$ if they satisfy some mild decay assumptions. Our result holds
for all $\alpha$-R\'enyi entropies with $\alpha>1/3$; for $\alpha\le 1/3$, we have
to assume in addition some differentiability of the perturbations
$B_\varepsilon$ and $V_\varepsilon$. The case of a constant magnetic field
($B_\varepsilon = 0$) with $V_\varepsilon = 0$ was treated recently for
general $\mu$ by Leschke, Sobolev and Spitzer. Our result thus proves the
stability of that area law under the same regularity assumptions on the
boundary $\partial \Lambda$.
|
We theoretically address minimal search strategies of active, self-propelled
particles towards hidden targets in three-dimensional space. The particles can
sense if a target is close, e.g., by detecting signaling molecules released by
a target, but they cannot deduce any directional cues. We focus on composite
search strategies, where particles switch between extensive outer search and
intensive inner search; inner search is started when the proximity of a target
is detected and ends again when a certain inner search time has elapsed. In the
simplest strategy, active particles move ballistically during outer search, and
transiently reduce their directional persistence during inner search. In a
second, adaptive strategy, particles exploit a dynamic scattering effect by
reducing directional persistence only outside a well-defined target zone. These
two search strategies require only minimal information processing capabilities
and a single binary or ternary internal state, respectively, yet increase the
rate of target encounter substantially. The optimal inner search time scales as
a power-law with exponent -2/3 with target density, reflecting a trade-off
between exploration and exploitation.
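The run-and-tumble caricature below illustrates the simplest strategy: a particle moves ballistically between random reorientations and, when it senses a target within a detection radius, temporarily shortens its persistence time for a fixed inner-search duration. All parameter values are arbitrary placeholders, and the sketch is not the model analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
BOX = 10.0                                   # periodic box, i.e. one target per BOX^3 volume

def random_direction():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def search_time(target, detect_r=1.5, capture_r=0.3, tau_out=20.0, tau_in=2.0,
                t_inner=30.0, speed=1.0, dt=0.05, t_max=4000.0):
    """Time for one composite searcher to capture a single hidden target."""
    pos, direction = np.zeros(3), random_direction()
    inner_until, t = -1.0, 0.0
    while t < t_max:
        delta = pos - target
        delta -= BOX * np.round(delta / BOX)              # minimum-image distance
        d = np.linalg.norm(delta)
        if d < capture_r:
            return t
        if d < detect_r:                                  # proximity cue: (re)start inner search
            inner_until = t + t_inner
        tau = tau_in if t < inner_until else tau_out      # reduced persistence during inner search
        if rng.random() < dt / tau:                       # tumble event
            direction = random_direction()
        pos = (pos + speed * direction * dt + BOX / 2) % BOX - BOX / 2
        t += dt
    return np.inf

times = [search_time(target=rng.uniform(-BOX / 2, BOX / 2, size=3)) for _ in range(10)]
print("median capture time:", np.median(times))
```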
|
Symmetric nonnegative matrix factorization (SNMF) has proven to be a
powerful method for data clustering. However, SNMF is mathematically formulated
as a non-convex optimization problem, making it sensitive to the initialization
of variables. Inspired by ensemble clustering that aims to seek a better
clustering result from a set of clustering results, we propose self-supervised
SNMF (S$^3$NMF), which is capable of boosting clustering performance
progressively by taking advantage of the sensitivity to initialization
characteristic of SNMF, without relying on any additional information.
Specifically, we first perform SNMF repeatedly with a random nonnegative matrix
for initialization each time, leading to multiple decomposed matrices. Then, we
rank the quality of the resulting matrices with adaptively learned weights,
from which a new similarity matrix that is expected to be more discriminative
is reconstructed for SNMF again. These two steps are iterated until the
stopping criterion is met or the maximum number of iterations is reached. We
mathematically formulate S$^3$NMF as a constrained optimization problem, and
provide an alternating optimization algorithm to solve it with theoretical
convergence guaranteed. Extensive experimental results on $10$ commonly used benchmark
datasets demonstrate the significant advantage of our S$^3$NMF over $12$
state-of-the-art methods in terms of $5$ quantitative metrics. The source code
is publicly available at https://github.com/jyh-learning/SSSNMF.
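The sketch below conveys the overall loop in a simplified form: SNMF (via a standard multiplicative update for $\|A - HH^\top\|_F^2$) is run several times from random initializations, the runs are combined into a co-association similarity, and SNMF is run once more on that matrix. The plain averaging used here replaces the paper's adaptively learned weights and is only meant to illustrate the self-supervision loop; see the repository linked above for the actual S$^3$NMF algorithm.

```python
import numpy as np

def snmf(A, k, n_iter=300, seed=None):
    """Symmetric NMF: minimize ||A - H H^T||_F^2 with a multiplicative update (beta = 1/2)."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k)) + 1e-3
    for _ in range(n_iter):
        H *= 0.5 + 0.5 * (A @ H) / (H @ (H.T @ H) + 1e-12)
    return H

def s3nmf_like(A, k, n_runs=10, n_rounds=3):
    """Simplified self-supervised loop: repeated SNMF -> co-association matrix -> SNMF again."""
    S = A.copy()
    for _ in range(n_rounds):
        coassoc = np.zeros_like(A)
        for r in range(n_runs):
            labels = snmf(S, k, seed=r).argmax(axis=1)
            coassoc += (labels[:, None] == labels[None, :]).astype(float)
        S = coassoc / n_runs                        # plain averaging (the paper learns weights)
    return snmf(S, k).argmax(axis=1)

# Two noisy blocks as a toy similarity matrix.
rng = np.random.default_rng(0)
A = np.kron(np.eye(2), np.ones((20, 20))) + 0.2 * rng.random((40, 40))
A = (A + A.T) / 2
print(s3nmf_like(A, k=2))
```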
|
We present a background model for dark matter searches using an array of
NaI(Tl) crystals in the COSINE-100 experiment that is located in the Yangyang
underground laboratory. The model includes background contributions from both
internal and external sources, including cosmogenic radionuclides and surface
$^{210}$Pb contamination. To improve the model in the low energy region, with
the threshold lowered to 1 keV, we used a depth profile of $^{210}$Pb
contamination in the surface of the NaI(Tl) crystals determined in a comparison
between measured and simulated spectra. We also considered the effect of the
energy scale errors propagated from the statistical uncertainties and the
nonlinear detector response at low energies. The 1.7 years of COSINE-100 data
taken between October 21, 2016 and July 18, 2018 were used for this analysis.
The Geant4 toolkit version 10.4.2 was utilized throughout the Monte Carlo
simulations for the possible internal and external origins. In particular, this
version reproduces a non-Gaussian peak around 50 keV originating from beta decays
of $^{210}$Pb, in good agreement with the measured background. This improved
model estimates that the activities of $^{210}$Pb and $^{3}$H are the dominant
sources of the backgrounds with an average level of 2.73$\pm$0.14
counts/day/keV/kg in the energy region of 1-6 keV, using COSINE-100 data with a
total exposure of 97.7 kg$\cdot$years.
|
In this paper we prove the existence of a complete cap of ${\rm PG}(4n+1, q)$
of size $2(q^{2n+1}-1)/(q-1)$, for each prime power $q>2$. It is obtained by
projecting two disjoint Veronese varieties of ${\rm PG}(2n^2+3n, q)$ from a
suitable $(2n^2-n-2)$-dimensional projective space. This shows that the trivial
lower bound for the size of the smallest complete cap of ${\rm PG}(4n+1, q)$ is
essentially sharp.
|
We study a classical model for the atom that considers the movement of $n$
charged particles of charge $-1$ (electrons) interacting with a fixed nucleus
of charge $\mu >0$. We show that two global branches of spatial relative
equilibria bifurcate from the $n$-polygonal relative equilibrium for each
critical value $\mu = s_{k}$ with $k\in [2,\dots,n/2]$. In these solutions,
the $n$ charges form $n/h$-groups of regular $h$-polygons in space, where $h$
is the greatest common divisor of $k$ and $n$. Furthermore, each spatial
relative equilibrium has a global branch of relative periodic solutions for
each normal frequency satisfying some nonresonant condition. We obtain
computer-assisted proofs of the existence of several spatial relative
equilibria on global branches away from the $n$-polygonal relative equilibrium.
Moreover, the nonresonant condition of the normal frequencies for some spatial
relative equilibria is verified rigorously using computer-assisted proofs.
|
For every given real value of the ratio $\mu:=A_X/G_X>1$ of the arithmetic
and geometric means of a positive random variable $X$ and every real $v>0$,
exact upper bounds on the right- and left-tail probabilities
$\mathsf{P}(X/G_X\ge v)$ and $\mathsf{P}(X/G_X\le v)$ are obtained, in terms of
$\mu$ and $v$. In particular, these bounds imply that $X/G_X\to1$ in
probability as $A_X/G_X\downarrow1$. Such a result may be viewed as a converse
to a reverse Jensen inequality for the strictly concave function $f=\ln$,
whereas the well-known Cantelli and Chebyshev inequalities may be viewed as
converses to a reverse Jensen inequality for the strictly concave quadratic
function $f(x) \equiv -x^2$. As applications of the mentioned new results,
improvements of the Markov, Bernstein--Chernoff, sub-Gaussian, and
Bennett--Hoeffding probability inequalities are given.
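For orientation, the two classical converses mentioned above read, in their standard form (with $\sigma^2=\mathrm{Var}\,X$ and $t>0$),
\[
\mathsf{P}(X-\mathsf{E}X\ge t)\le\frac{\sigma^2}{\sigma^2+t^2} \quad\text{(Cantelli)},
\qquad
\mathsf{P}(|X-\mathsf{E}X|\ge t)\le\frac{\sigma^2}{t^2} \quad\text{(Chebyshev)};
\]
the results above play the analogous role for the strictly concave function $\ln$, controlling the tails of $X/G_X$ through the single ratio $\mu=A_X/G_X$.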
|
Recently, regular decision processes have been proposed as a well-behaved form
of non-Markov decision process. Regular decision processes are characterised by
a transition function and a reward function that depend on the whole history,
though regularly (as in regular languages). In practice both the transition and
the reward functions can be seen as finite transducers. We study reinforcement
learning in regular decision processes. Our main contribution is to show that a
near-optimal policy can be PAC-learned in polynomial time in a set of
parameters that describe the underlying decision process. We argue that the
identified set of parameters is minimal and it reasonably captures the
difficulty of a regular decision process.
|
Recently, a new form of online shopping that combines live streaming with
E-commerce activity has become more and more popular. The streamers introduce
products and interact with their audiences, and hence greatly improve the
performance of selling products. Despite its successful applications in
industry, live stream E-commerce has not been well studied in the data
science community. To fill this gap, we investigate this brand-new scenario and
collect a real-world Live Stream E-Commerce (LSEC) dataset. Different from
conventional E-commerce activities, the streamers play a pivotal role in the
LSEC events. Hence, the key is to make full use of rich interaction information
among streamers, users, and products. We first conduct data analysis on the
tripartite interaction data and quantify the streamer's influence on users'
purchase behavior. Based on the analysis results, we model the tripartite
information as a heterogeneous graph, which can be decomposed to multiple
bipartite graphs in order to better capture the influence. We propose a novel
Live Stream E-Commerce Graph Neural Network framework (LSEC-GNN) to learn the
node representations of each bipartite graph, and further design a multi-task
learning approach to improve product recommendation. Extensive experiments on
two real-world datasets with different scales show that our method can
significantly outperform various baseline approaches.
|
Motivated by twisted graphene multilayers, we study interaction of a Chern
insulator with circularly polarized light. The interaction energy contains an
antisymmetric term that couples to the helicity of incident light. For a
two-band Chern insulator, this term is expressed as an integral involving the
Berry curvature of the system. Taking advantage of this interaction, we propose
an experimental protocol for switching topological memory based on orbital
magnetization by circularly polarized light and discuss its feasibility for the
Chern insulators found in twisted graphene multilayers.
|
We describe a modular optically-pumped magnetometer (OPM) system which
enables fast prototyping and testing of new measurement schemes. Quick
reconfiguration of self-contained laser and sensor modules allows easy
construction of various array layouts. The modularity of this system enables
scaling of shared light-source arrays, and development of methods for high
density array management for magnetic imaging and sensing in both medical and
industrial fields. We demonstrate the OPM system in a first-order axial
gradiometer configuration with a magnetic field gradient sensitivity of 10.4
$fT/cm/\sqrt{Hz}$. To illustrate the capabilities of this system, we measured
alpha-rhythms from the brain of a human participant, and assessed the
magnetometer sensitivity both with single sensor channels and in a differential
gradiometer configuration.
|
The debate on gravity theories to extend or modify General Relativity is very
active today because of the issues related to ultra-violet and infra-red
behavior of Einstein's theory. In the first case, we have to address the
Quantum Gravity problem. In the latter, dark matter and dark energy, governing
the large scale structure and the cosmological evolution, seem to escape from
any final fundamental theory and detection. The state of art is that, up to
now, no final theory, capable of explaining gravitational interaction at any
scale, has been formulated. In this perspective, many research efforts are
devoted to test theories of gravity by space-based experiments. Here we propose
straightforward tests by the GINGER experiment, which, being Earth based,
requires little modeling of external perturbation, allowing a thorough analysis
of the systematics, crucial for experiments where sensitivity breakthrough is
required. Specifically, we want to show that it is possible to constrain
parameters of gravity theories, like scalar-tensor or Horava-Lifshitz gravity,
by considering their post-Newtonian limits matched with experimental data. In
particular, we use the Lense-Thirring measurements provided by GINGER to find
out relations among the parameters of theories and finally compare the results
with those provided by LARES and Gravity Probe-B satellites.
|
We generalize and unify the $f(R,T)$ and $f(R,L_m)$ type gravity models by
assuming that the gravitational Lagrangian is given by an arbitrary function of
the Ricci scalar $R$, of the trace of the energy-momentum tensor $T$, and of
the matter Lagrangian $L_m$, so that $L_{grav}=f(R,L_m,T)$. We obtain the
gravitational field equations in the metric formalism, the equations of motion
for test particles, and the energy and momentum balance equations, which follow
from the covariant divergence of the energy-momentum tensor. Generally, the
motion is non-geodesic, and takes place in the presence of an extra force
orthogonal to the four-velocity. The Newtonian limit of the equations of motion
is also investigated, and the expression of the extra acceleration is obtained
for small velocities and weak gravitational fields. The generalized Poisson
equation is also obtained in the Newtonian limit, and the Dolgov-Kawasaki
instability is also investigated. The cosmological implications of the theory
are investigated for a homogeneous, isotropic and flat Universe for two
particular choices of the Lagrangian density $f(R,L_m,T)$ of the gravitational
field, with a multiplicative and additive algebraic structure in the matter
couplings, respectively, and for two choices of the matter Lagrangian, by using
both analytical and numerical methods.
|
We present a spectroscopic and imaging study of an abnormal active galactic
nucleus (AGN), 2MASX J00423991+3017515. This AGN is newly identified in the
hard X-rays by the Swift BAT All-Sky survey and found in an edge-on disk galaxy
interacting with a nearby companion. Here, we analyze the first optical spectra
obtained for this system (taken in 2011 and 2016), high-resolution imaging
taken with the Hubble Space Telescope and Chandra X-ray Observatory, and 1"
imaging with the Very Large Array. Two unique properties are revealed: the
peaks of the broad Balmer emission lines (associated with gas orbiting very
near the supermassive black hole) are blue shifted from the corresponding
narrow line emission and host galaxy absorption by 1540 km/s, and the AGN is
spatially displaced from the apparent center of its host galaxy by 3.8 kpc. We
explore several scenarios to explain these features, along with other
anomalies, and propose that 2MASX J00423991+3017515 may be an AGN with an
unusually strong wind residing in a uniquely configured major merger, or that
it is an AGN recoiling from either a gravitational "slingshot" in a three-body
interaction or from a kick due to the asymmetric emission of gravitational
waves following the coalescence of two progenitor supermassive black holes.
|
Recent advances in meta-learning have led to remarkable performance on
several few-shot learning benchmarks. However, such success often ignores the
similarity between training and testing tasks, resulting in a potentially
biased evaluation. We therefore propose a generative approach based on a variant of
Latent Dirichlet Allocation to analyse task similarity to optimise and better
understand the performance of meta-learning. We demonstrate that the proposed
method can provide an insightful evaluation for meta-learning algorithms on two
few-shot classification benchmarks that matches common intuition: the more
similar the tasks, the higher the performance. Based on this similarity measure, we propose a
task-selection strategy for meta-learning and show that it can produce more
accurate classification results than methods that randomly select training
tasks.
|
We revisit the dynamical properties of k-essence cosmological models
and show how the interesting phenomenological features of those models are
related to the existence of boundaries in the phase surface. We focus our
attention on the branching points where the energy density has an extremum and
the effective speed of sound diverges. We discuss the behaviour of solutions of
a general class of cosmological models exhibiting such curves and give two
possible interpretations; the most interesting possibility concerns the arrow of
time, which is reversed when crossing the branching curve. This study teaches
us something new about general FLRW cosmologies where the fluids driving the
cosmic evolution have equations of state that are multivalued functions of the
energy density and other thermodynamical quantities.
|
Results from higher order mean field calculations of light interacting with
atom arrays are presented for calculations of one- and two-time expectation
values. The atoms are approximated as two-level systems and are fixed in space.
Calculations were performed for mean field approximations that include the
expectation value of one operator (mean field), two operators (mean field-2),
and three operators (mean field-3). For the one-time expectation values, we
examined three different situations to understand the convergence with
increasing order of mean field and some limitations of higher order mean field
approximations. As a representation of a two-time expectation value, we
calculated the $g^{(2)}(\tau )$ for a line of atoms illuminated by a
perpendicular plane wave at several emission angles and two different
intensities. For many cases, the mean field-2 will be sufficiently accurate to
quantitatively predict the response of the atoms as measured by one-time
expectation values. However, the mean field-3 approximation will often be
needed for two-time expectation values.
|
Deep neural networks have been shown to be vulnerable to adversarial examples
deliberately constructed to misclassify victim models. As most adversarial
examples restrict their perturbations to an $L_{p}$-norm ball, existing defense
methods have focused on these types of perturbations, and less attention has
been paid to unrestricted adversarial examples, which can create more realistic
attacks able to deceive models without affecting human predictions. To address
this problem, the proposed adversarial attack generates an unrestricted
adversarial example with a limited number of parameters. The attack selects
three points on the input image and based on their locations transforms the
image into an adversarial example. By limiting the range of movement and
location of these three points and using a discriminatory network, the proposed
unrestricted adversarial example preserves the image appearance. Experimental
results show that the proposed adversarial examples obtain an average success
rate of 93.5% in terms of human evaluation on the MNIST and SVHN datasets. It
also reduces the model accuracy by an average of 73% on six datasets MNIST,
FMNIST, SVHN, CIFAR10, CIFAR100, and ImageNet. It should be noted that, in the
case of attacks, lower accuracy in the victim model denotes a more successful
attack. Adversarial training with the proposed attack also improves model
robustness against randomly transformed images.
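The abstract does not spell out the transformation, but one natural reading of "selects three points ... and based on their locations transforms the image" is an affine warp defined by three point correspondences. The sketch below shows only that geometric step (solving for the affine matrix from three perturbed points); the discriminator and the optimisation of the point locations are omitted:

```python
import numpy as np

def affine_from_three_points(src, dst):
    """Solve for the 2x3 affine matrix A such that A @ [x, y, 1]^T maps each src point to dst."""
    src = np.asarray(src, dtype=float)          # shape (3, 2)
    dst = np.asarray(dst, dtype=float)          # shape (3, 2)
    X = np.hstack([src, np.ones((3, 1))])       # (3, 3) homogeneous source points
    A_T = np.linalg.solve(X, dst)               # exact solve, since three point pairs
    return A_T.T                                # (2, 3)

src = [(5.0, 5.0), (20.0, 5.0), (5.0, 20.0)]
dst = [(6.0, 4.0), (21.0, 6.0), (4.0, 21.0)]    # slightly perturbed locations
A = affine_from_three_points(src, dst)
print(A)   # warping an image with A produces a geometric, "unrestricted" perturbation
```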
|
Specifying reward functions for robots that operate in environments without a
natural reward signal can be challenging, and incorrectly specified rewards can
incentivise degenerate or dangerous behavior. A promising alternative to
manually specifying reward functions is to enable robots to infer them from
human feedback, like demonstrations or corrections. To interpret this feedback,
robots treat as approximately optimal a choice the person makes from a choice
set, like the set of possible trajectories they could have demonstrated or
possible corrections they could have made. In this work, we introduce the idea
that the choice set itself might be difficult to specify, and analyze choice
set misspecification: what happens as the robot makes incorrect assumptions
about the set of choices from which the human selects their feedback. We
propose a classification of different kinds of choice set misspecification, and
show that these different classes lead to meaningful differences in the
inferred reward and resulting performance. While we would normally expect
misspecification to hurt, we find that certain kinds of misspecification are
neither helpful nor harmful (in expectation). However, in other situations,
misspecification can be extremely harmful, leading the robot to believe the
opposite of what it should believe. We hope our results will allow for better
prediction and response to the effects of misspecification in real-world reward
inference.
|
Designing provably efficient algorithms with general function approximation
is an important open problem in reinforcement learning. Recently, Wang et
al.~[2020c] established a value-based algorithm with general function
approximation that enjoys an
$\widetilde{O}(\mathrm{poly}(dH)\sqrt{K})$\footnote{Throughout the paper, we
use $\widetilde{O}(\cdot)$ to suppress logarithm factors. } regret bound, where
$d$ depends on the complexity of the function class, $H$ is the planning
horizon, and $K$ is the total number of episodes. However, their algorithm
requires $\Omega(K)$ computation time per round, rendering the algorithm
inefficient for practical use. In this paper, by applying online sub-sampling
techniques, we develop an algorithm that takes
$\widetilde{O}(\mathrm{poly}(dH))$ computation time per round on average, and
enjoys nearly the same regret bound. Furthermore, the algorithm achieves low
switching cost, i.e., it changes the policy only
$\widetilde{O}(\mathrm{poly}(dH))$ times during its execution, making it
appealing to be implemented in real-life scenarios. Moreover, by using an
upper-confidence based exploration-driven reward function, the algorithm
provably explores the environment in the reward-free setting. In particular,
after $\widetilde{O}(\mathrm{poly}(dH))/\epsilon^2$ rounds of exploration, the
algorithm outputs an $\epsilon$-optimal policy for any given reward function.
|
We report new branching fraction measurements for 199 UV and optical
transitions of Hf II. These transitions range in wavelength (wavenumber) from
2068-6584 A (48322-15183 cm-1) and originate in 17 odd-parity upper levels
ranging in energy from 38578-53227 cm-1. The branching fractions are combined
with radiative lifetimes reported in an earlier study to produce a set of
transition probabilities and log(gf) values with accuracy ranging from 5-25%.
Comparison is made to transition probabilities from the literature where such
data exist. We use these new transition probabilities to derive improved Hf
abundances in two metal-poor stars. HD 196944 is enhanced in s-process
elements, and we derive log epsilon (Hf) = -0.72 +/- 0.03 (sigma = 0.09) from
12 Hf II lines. HD 222925 is enhanced in r-process elements, and we derive log
epsilon (Hf) = 0.32 +/- 0.03 (sigma = 0.11) from 20 Hf II lines. These
measurements greatly expand the number of potentially useful Hf II lines for
analysis in UV and optical spectra.
|
Neural cryptography is the application of artificial neural networks in the
subject of cryptography. The functionality of this solution is based on a tree
parity machine. It uses artificial neural networks to perform secure key
exchange between network entities. This article proposes improvements to the
synchronization of two tree parity machines. The improvement is based on
training the artificial neural networks with input vectors whose values span a
wider range than binary ones. As a result, the duration of the synchronization
process is reduced. Therefore, tree parity machines achieve common weights in a
shorter time due to the reduction of necessary bit exchanges. This approach
improves the security of neural cryptography.
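For readers unfamiliar with the underlying protocol, the sketch below implements a standard tree parity machine synchronisation loop in which the input range is a tunable parameter M (M = 1 recovers binary inputs; M > 1 is one way to realise the "wider range of values" mentioned above). The exact update scheme used in the article is an assumption here:

```python
import numpy as np

class TreeParityMachine:
    """Minimal tree parity machine: K hidden units, N inputs each, integer weights in [-L, L]."""
    def __init__(self, K=3, N=100, L=3, rng=None):
        self.K, self.N, self.L = K, N, L
        self.rng = rng or np.random.default_rng()
        self.W = self.rng.integers(-L, L + 1, size=(K, N))

    def output(self, X):
        # sigma is the sign of each hidden unit's local field; tau is their product
        self.sigma = np.sign(np.sum(self.W * X, axis=1))
        self.sigma[self.sigma == 0] = -1
        return int(np.prod(self.sigma))

    def hebbian_update(self, X, tau):
        # update only the hidden units that agree with the exchanged output
        for k in range(self.K):
            if self.sigma[k] == tau:
                self.W[k] = np.clip(self.W[k] + tau * X[k], -self.L, self.L)

def synchronize(M=1, K=3, N=100, L=3, max_steps=100000, seed=0):
    """Synchronize two TPMs; M > 1 uses the wider input range suggested in the abstract
    (an assumption about the exact scheme). Returns the number of exchanged rounds."""
    rng = np.random.default_rng(seed)
    A, B = TreeParityMachine(K, N, L, rng), TreeParityMachine(K, N, L, rng)
    for step in range(1, max_steps + 1):
        X = rng.integers(-M, M + 1, size=(K, N))
        X[X == 0] = 1                          # avoid zero inputs
        tau_a, tau_b = A.output(X), B.output(X)
        if tau_a == tau_b:                     # only the outputs are exchanged
            A.hebbian_update(X, tau_a)
            B.hebbian_update(X, tau_b)
        if np.array_equal(A.W, B.W):
            return step
    return None

print("rounds to synchronize (binary inputs):", synchronize(M=1))
print("rounds to synchronize (wider inputs): ", synchronize(M=4))
```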
|
This paper provides a set of cycling problems in linear programming. These
problems should be useful for researchers to develop and test new simplex
algorithms. In fact, this set of problems is used to test a
recently proposed double pivot simplex algorithm for linear programming.
|
We consider a nonlinearly coupled electromechanical system, and develop a
quantitative theory for two-phonon cooling. In the presence of two-phonon
cooling, the mechanical Hilbert space is effectively reduced to its ground and
first excited states, thus forming a mechanical qubit. This allows for
performing quantum operations at the level of individual mechanical phonons and
preparing nonclassical mechanical states with negative Wigner functions. We
propose a scheme for performing arbitrary Bloch sphere rotations, and derive
the fidelity in the specific case of a $\pi$-pulse. We characterise detrimental
processes that reduce the coherence in the system, and demonstrate that our
scheme can be implemented in state-of-the-art electromechanical devices.
|
Model-free reinforcement learning (RL) for legged locomotion commonly relies
on a physics simulator that can accurately predict the behaviors of every
degree of freedom of the robot. In contrast, approximate reduced-order models
are often sufficient for many model-based control strategies. In this work we
explore how RL can be effectively used with a centroidal model to generate
robust control policies for quadrupedal locomotion. Advantages over RL with a
full-order model include a simple reward structure, reduced computational
costs, and robust sim-to-real transfer. We further show the potential of the
method by demonstrating stepping-stone locomotion, two-legged in-place balance,
balance beam locomotion, and sim-to-real transfer without further adaptations.
Additional Results: https://www.pair.toronto.edu/glide-quadruped/.
|
We find a sharp condition on the density-dependent coefficient of damping of
a one-dimensional repulsive Euler-Poisson system, which makes it possible to
suppress the formation of singularities in the solution of the Cauchy problem
with arbitrary smooth data. In the context of plasma physics, this means the
possibility of suppressing the breakdown of arbitrary oscillations of cold
plasma.
|
Understanding of sedimentation dynamics of particles in bounded fluids is of
crucial importance for a wide variety of processes. While there is a profound
knowledge base regarding the sedimentation of rigid solid particles, the
fundamental principles of sedimentation dynamics of elastic, nonheavy spheres
in bounded fluids are not well understood. Therefore, we performed
sedimentation of deformable, elastic solid spheres with particle Reynolds
numbers much smaller than 1 in a model experiment. The spheres started from
rest in a rectangular duct with a width of about 23 times the radius R of the
sphere. The particle dynamics of elastic spheres differed fundamentally from
that of rigid spheres. Elastic effects were found to take place on
comparatively large time scales, such that elastic spheres underwent four
phases of sedimentation. Phases I and II, including transient acceleration and
a short steady velocity plateau, are comparable with sedimentation of rigid
spheres. From a characteristic onset position of about 10R, deformability
effects kick in and a second acceleration appears (phase III). In the fourth
phase, the deformable spheres reach the terminal sedimentation velocity. The
softer the spheres are, the higher the terminal velocity is. In the present
setup, a terminal velocity up to 9 percent higher than the velocity for
comparable rigid spheres was reached. By means of the obtained data, insights
into the dynamics are given that could serve as basic approaches for modelling
the dynamics of elastic spheres in bounded fluids.
|
Uncertainty quantification plays an important role in applications that
involve simulating ensembles of trajectories of dynamical systems. Conrad et
al. (Stat. Comput., 2017) proposed randomisation of deterministic time
integration methods as a strategy for quantifying uncertainty due to time
discretisation. We consider this strategy for systems that are described by
deterministic, possibly non-autonomous operator differential equations defined
on a Banach space or a Gelfand triple. We prove pathwise and expected error
bounds on the random trajectories, given an assumption on the local truncation
error of the underlying deterministic time integration and an assumption that
the absolute moments of the random variables decay with the time step. Our
analysis shows that the error analysis for differential equations in
finite-dimensional Euclidean space carries over to infinite-dimensional
settings.
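A minimal sketch of the randomised-integrator idea in the finite-dimensional setting, assuming the Conrad et al. recipe of adding a centred Gaussian perturbation after each deterministic step, with a variance that shrinks with the step size (the moment assumption referred to above):

```python
import numpy as np

def randomised_euler_ensemble(f, x0, t0, t1, h, n_paths=50, sigma=1.0, p=1, seed=0):
    """Ensemble of randomised explicit Euler trajectories.

    After each deterministic Euler step, a centred Gaussian perturbation with
    standard deviation sigma * h**(p + 0.5) is added (p = method order), so the
    injected noise decays with the step size.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(round((t1 - t0) / h))
    x0 = np.atleast_1d(np.asarray(x0, dtype=float))
    paths = np.empty((n_paths, n_steps + 1, x0.size))
    for i in range(n_paths):
        x, t = x0.copy(), t0
        paths[i, 0] = x
        for k in range(n_steps):
            x = x + h * f(t, x) + sigma * h ** (p + 0.5) * rng.standard_normal(x.size)
            t += h
            paths[i, k + 1] = x
    return paths

# Example: damped oscillator; the ensemble spread quantifies the uncertainty
# attributable to time discretisation.
f = lambda t, x: np.array([x[1], -x[0] - 0.1 * x[1]])
ens = randomised_euler_ensemble(f, [1.0, 0.0], 0.0, 10.0, h=0.05)
print("mean and std of x(10):", ens[:, -1, 0].mean(), ens[:, -1, 0].std())
```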
|
We introduce a new paradigm for scaling simulations with projected
entangled-pair states (PEPS) for critical strongly-correlated systems, allowing
for reliable extrapolations of PEPS data with relatively small bond dimensions
$D$. The key ingredient consists of using the effective correlation length
$\chi$ for inducing a collapse of data points, $f(D,\chi)=f(\xi(D,\chi))$, for
arbitrary values of $D$ and the environment bond dimension $\chi$. As such we
circumvent the need for extrapolations in $\chi$ and can use many distinct data
points for a fixed value of $D$. Here, we need that the PEPS has been optimized
using a fixed-$\chi$ gradient method, which can be achieved using a novel
tensor-network algorithm for finding fixed points of 2-D transfer matrices, or
by using the formalism of backwards differentiation. We test our hypothesis on
the critical 3-D dimer model, the 3-D classical Ising model, and the 2-D
quantum Heisenberg model.
|
The Andreev spectrum of a quantum dot embedded in a hybrid
semiconductor-superconductor interferometer can be modulated by electrostatic
gating, magnetic flux through the interferometer, and Zeeman splitting from
in-plane magnetic field. We demonstrate parity transitions in the embedded
quantum dot system, and show that the Zeeman-driven transition is accompanied
by a 0-{\pi} transition in the superconducting phase across the dot. We further
demonstrate that flux through the interferometer modulates both dot parity and
0-{\pi} transitions.
|
Capacitated lot-sizing problems (CLSPs) are important and challenging
optimization problems in production planning. Amongst the many approaches
developed for CLSPs, constructive heuristics are known to be the most intuitive
and fastest method for finding good feasible solutions for the CLSPs, and
therefore are often used as a subroutine in building more sophisticated exact
and metaheuristic approaches. Classical constructive heuristics, such as the
period-by-period and lot elimination heuristics, were first introduced in the
1990s and have since been widely used for solving CLSPs. This
paper evaluates the performance of period-by-period and lot elimination
heuristics, and improves the heuristics using perturbation techniques and
self-adaptive methods. We have also proposed a procedure for automatically
adjusting the parameters of the proposed heuristics so that the values of the
parameters can be chosen based on features of individual instances.
Experimental results show that the proposed self-adaptive randomized
period-by-period constructive heuristics are efficient and can find better
solutions with less computational time than the tabu search and lot elimination
heuristics. When the proposed constructive heuristic is used in a basic tabu
search framework, high-quality solutions with 0.88% average optimality gap can
be obtained on benchmark instances of 12 periods and 12 items, and an
optimality gap within 1.2% is achieved for the instances with 24 periods and 24 items.
|
We consider a cell-free hybrid massive multiple-input multiple-output (MIMO)
system with $K$ users and $M$ access points (APs), each with $N_a$ antennas and
$N_r< N_a$ radio frequency (RF) chains. When $K\ll M{N_a}$, efficient uplink
channel estimation and data detection with reduced number of pilots can be
performed based on low-rank matrix completion. However, such a scheme requires
the central processing unit (CPU) to collect received signals from all APs,
which may enable the CPU to infer the private information of user locations. We
therefore develop and analyze privacy-preserving channel estimation schemes
under the framework of differential privacy (DP). As the key ingredient of the
channel estimator, two joint differentially private noisy matrix completion
algorithms based respectively on Frank-Wolfe iteration and singular value
decomposition are presented. We provide an analysis on the tradeoff between the
privacy and the channel estimation error. In particular, we show that the
estimation error can be mitigated while maintaining the same privacy level by
increasing the payload size with fixed pilot size; and the scaling laws of both
the privacy-induced and privacy-independent error components in terms of
payload size are characterized. Simulation results are provided to further
demonstrate the tradeoff between privacy and channel estimation performance.
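The following is an illustrative noisy Frank-Wolfe iteration for matrix completion, not the paper's joint differentially private algorithm: the placement and scale of the Gaussian noise and the privacy accounting are placeholders, and only the structure of the linear oracle (a top singular pair over the nuclear-norm ball) is the standard ingredient:

```python
import numpy as np

def noisy_frank_wolfe_completion(M_obs, mask, radius, n_iters=50, noise_scale=0.0, seed=0):
    """Illustrative noisy Frank-Wolfe for low-rank matrix completion.

    Minimizes 0.5 * || mask * (X - M_obs) ||_F^2 over the nuclear-norm ball of the
    given radius. The Gaussian noise added to the gradient before the linear oracle
    stands in for a privacy mechanism; the scale is not a calibrated DP guarantee.
    """
    rng = np.random.default_rng(seed)
    X = np.zeros_like(M_obs)
    for t in range(n_iters):
        grad = mask * (X - M_obs)
        grad_noisy = grad + noise_scale * rng.standard_normal(grad.shape)
        # linear oracle over the nuclear-norm ball: top singular pair of -grad
        U, s, Vt = np.linalg.svd(-grad_noisy, full_matrices=False)
        S = radius * np.outer(U[:, 0], Vt[0, :])
        gamma = 2.0 / (t + 2.0)                 # standard Frank-Wolfe step size
        X = (1.0 - gamma) * X + gamma * S
    return X

# Toy example: rank-1 channel-like matrix observed on a random pilot pattern
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(30), rng.standard_normal(20))
mask = (rng.random(M.shape) < 0.4).astype(float)
X_hat = noisy_frank_wolfe_completion(mask * M, mask,
                                     radius=np.linalg.norm(M, "nuc"), noise_scale=0.1)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```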
|
A multi-decade exploration into the theoretical foundations of artificial and
natural general intelligence, which has been expressed in a series of books and
papers and used to guide a series of practical and research-prototype software
systems, is reviewed at a moderate level of detail. The review covers
underlying philosophies (patternist philosophy of mind, foundational
phenomenological and logical ontology), formalizations of the concept of
intelligence, and a proposed high level architecture for AGI systems partly
driven by these formalizations and philosophies. The implementation of specific
cognitive processes such as logical reasoning, program learning, clustering and
attention allocation in the context and language of this high level
architecture is considered, as is the importance of a common (e.g. typed
metagraph based) knowledge representation for enabling "cognitive synergy"
between the various processes. The specifics of human-like cognitive
architecture are presented as manifestations of these general principles, and
key aspects of machine consciousness and machine ethics are also treated in
this context. Lessons for practical implementation of advanced AGI in
frameworks such as OpenCog Hyperon are briefly considered.
|
Survival analysis is a technique to predict the times of specific outcomes,
and is widely used in predicting the outcomes for intensive care unit (ICU)
trauma patients. Recently, deep learning models have drawn increasing attention
in healthcare. However, there is a lack of deep learning methods that can model
the relationship between measurements, clinical notes and mortality outcomes.
In this paper we introduce BERTSurv, a deep learning survival framework which
applies Bidirectional Encoder Representations from Transformers (BERT) as a
language representation model on unstructured clinical notes, for mortality
prediction and survival analysis. We also incorporate clinical measurements in
BERTSurv. With binary cross-entropy (BCE) loss, BERTSurv can predict mortality
as a binary outcome (mortality prediction). With partial log-likelihood (PLL)
loss, BERTSurv predicts the probability of mortality as a time-to-event outcome
(survival analysis). We apply BERTSurv on Medical Information Mart for
Intensive Care III (MIMIC III) trauma patient data. For mortality prediction,
BERTSurv obtained an area under the receiver operating characteristic curve
(AUC-ROC) of 0.86, which is an improvement of 3.6% over a baseline of
multilayer perceptron (MLP) without notes. For survival analysis, BERTSurv
achieved a concordance index (C-index) of 0.7. In addition, visualizations of
BERT's attention heads help to extract patterns in clinical notes and improve
model interpretability by showing how the model assigns weights to different
inputs.
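For concreteness, the partial log-likelihood loss mentioned above typically takes the form of the negative Cox partial log-likelihood; whether BERTSurv uses exactly this (Breslow-style, no tie correction) variant is an assumption of the sketch below:

```python
import numpy as np

def neg_partial_log_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow form, no tie correction).

    risk  : model outputs (higher = higher hazard), shape (n,)
    time  : observed time-to-event or censoring time, shape (n,)
    event : 1 if the event (e.g. mortality) occurred, 0 if censored, shape (n,)
    """
    risk, time, event = map(np.asarray, (risk, time, event))
    order = np.argsort(-time)                   # sort by descending time
    risk, event = risk[order], event[order]
    # cumulative log-sum-exp gives log of the risk-set sum {j : t_j >= t_i}
    log_cum = np.logaddexp.accumulate(risk)
    return -np.sum((risk - log_cum)[event == 1])

risk = np.array([0.2, 1.1, -0.3, 0.7])
time = np.array([5.0, 2.0, 8.0, 3.0])
event = np.array([1, 1, 0, 1])
print(neg_partial_log_likelihood(risk, time, event))
```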
|
We propose Partially Interpretable Estimators (PIE) which attribute a
prediction to individual features via an interpretable model, while a
(possibly) small part of the PIE prediction is attributed to the interaction of
features via a black-box model, with the goal to boost the predictive
performance while maintaining interpretability. As such, the interpretable
model captures the main contributions of features, and the black-box model
attempts to complement the interpretable piece by capturing the "nuances" of
feature interactions as a refinement. We design an iterative training algorithm
to jointly train the two types of models. Experimental results show that PIE is
highly competitive with black-box models while outperforming interpretable
baselines. In addition, the understandability of PIE is comparable to simple
linear models as validated via a human evaluation.
|
Most of the current supervised automatic music transcription (AMT) models
lack the ability to generalize. This means that they have trouble transcribing
real-world music recordings from diverse musical genres that are not present
in the labelled training data. In this paper, we propose a semi-supervised
framework, ReconVAT, which solves this issue by leveraging the huge amount of
available unlabelled music recordings. The proposed ReconVAT uses
reconstruction loss and virtual adversarial training. When combined with
existing U-net models for AMT, ReconVAT achieves competitive results on common
benchmark datasets such as MAPS and MusicNet. For example, in the few-shot
setting for the string part version of MusicNet, ReconVAT achieves F1-scores of
61.0% and 41.6% for the note-wise and note-with-offset-wise metrics
respectively, which translates into an improvement of 22.2% and 62.5% compared
to the supervised baseline model. Our proposed framework also demonstrates the
potential of continual learning on new data, which could be useful in
real-world applications where new data is constantly available.
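As a sketch of the virtual adversarial training ingredient named above (in the generic Miyato-style formulation, not the ReconVAT implementation), the VAT loss perturbs the input along an adversarial direction found by power iteration and penalises the divergence between the two predictions:

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=1.0, n_power=1):
    """Generic virtual adversarial training loss: KL divergence between predictions
    at x and at x + r_adv, where r_adv is found by power iteration. A sketch of the
    VAT ingredient only; the model and its output shape are placeholders."""
    with torch.no_grad():
        logp = F.log_softmax(model(x), dim=-1)

    d = torch.randn_like(x)
    for _ in range(n_power):
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_(True)
        logp_hat = F.log_softmax(model(x + d), dim=-1)
        adv_dist = F.kl_div(logp_hat, logp.exp(), reduction="batchmean")
        d = torch.autograd.grad(adv_dist, d)[0].detach()

    r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
    logp_hat = F.log_softmax(model(x + r_adv), dim=-1)
    return F.kl_div(logp_hat, logp.exp(), reduction="batchmean")

# usage sketch: total_loss = supervised_loss + reconstruction_loss + vat_loss(model, unlabelled_batch)
```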
|
How do we formalize the challenge of credit assignment in reinforcement
learning? Common intuition would draw attention to reward sparsity as a key
contributor to difficult credit assignment and traditional heuristics would
look to temporal recency for the solution, calling upon the classic eligibility
trace. We posit that it is not the sparsity of the reward itself that causes
difficulty in credit assignment, but rather the \emph{information sparsity}. We
propose to use information theory to define this notion, which we then use to
characterize when credit assignment is an obstacle to efficient learning. With
this perspective, we outline several information-theoretic mechanisms for
measuring credit under a fixed behavior policy, highlighting the potential of
information theory as a key tool towards provably-efficient credit assignment.
|
In traditional software programs, it is easy to trace program logic from
variables back to input, apply assertion statements to block erroneous
behavior, and compose programs together. Although deep learning programs have
demonstrated strong performance on novel applications, they sacrifice many of
the functionalities of traditional software programs. With this as motivation,
we take a modest first step towards improving deep learning programs by jointly
training a generative model to constrain neural network activations to "decode"
back to inputs. We call this design a Decodable Neural Network, or DecNN. Doing
so enables a form of compositionality in neural networks, where one can
recursively compose DecNN with itself to create an ensemble-like model with
uncertainty. In our experiments, we demonstrate applications of this
uncertainty to out-of-distribution detection, adversarial example detection,
and calibration -- while matching standard neural networks in accuracy. We
further explore this compositionality by combining DecNN with pretrained
models, where we show promising results that neural networks can be regularized
from using protected features.
|
The Onsager Lie algebra $O$ is often used to study integrable lattice models.
The universal enveloping algebra of $O$ admits a $q$-deformation $O_q$ called
the $q$-Onsager algebra. Recently, an algebra $\mathcal O_q$ was introduced
called the alternating central extension of $O_q$. In this paper we introduce a
Lie algebra $\mathcal O$ that is roughly described by the following two
analogies: (i) $\mathcal O$ is to $O$ as $\mathcal O_q$ is to $O_q$; (ii) $O_q$
is to $O$ as $\mathcal O_q$ is to $\mathcal O$. We call $\mathcal O$ the
alternating central extension of $O$. This paper contains a comprehensive
description of $\mathcal O$.
|
In this paper we give local and global parametric classifications of a class
of Einstein submanifolds of Euclidean space. The highlight is for submanifolds
of codimension two since in this case our assumptions are only of intrinsic
nature.
|
Sparse level-set formulations allow practitioners to find the minimum 1-norm
solution subject to likelihood constraints. Prior art requires this constraint
to be convex. In this letter, we develop an efficient approach for nonconvex
likelihoods, using Regula Falsi root-finding techniques to solve the level-set
formulation. Regula Falsi methods are simple, derivative-free, and efficient,
and the approach provably extends level-set methods to the broader class of
nonconvex inverse problems. Practical performance is illustrated using
l1-regularized Student's t inversion, which is a nonconvex approach used to
develop outlier-robust formulations.
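The root-finding ingredient is the classical regula falsi (false position) method; a generic sketch is given below, with a stand-in scalar function phi in place of the letter's level-set value function:

```python
def regula_falsi(phi, a, b, tol=1e-8, max_iter=100):
    """Classic regula falsi root finder on [a, b], assuming phi(a) and phi(b)
    have opposite signs. In a level-set method the root of the value function
    locates the constraint level; here we only sketch the derivative-free step."""
    fa, fb = phi(a), phi(b)
    if fa * fb > 0:
        raise ValueError("phi(a) and phi(b) must bracket a root")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # intersection of the secant with zero
        fc = phi(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Example: a smooth scalar function with a bracketed root (placeholder for the
# nonconvex level-set value function)
import math
print(regula_falsi(lambda t: math.tanh(t) - 0.5, 0.0, 5.0))
```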
|
In 2020, Cameron et al. introduced the restricted numerical range of a
digraph (directed graph) as a tool for characterizing digraphs and studying
their algebraic connectivity. In particular, digraphs with a restricted
numerical range of a single point, a horizontal line segment, and a vertical
line segment were characterized as $k$-imploding stars, directed joins of
bidirectional digraphs, and regular tournaments, respectively. In this article,
we extend these results by investigating digraphs whose restricted numerical
range is a convex polygon in the complex plane. We provide computational
methods for identifying these polygonal digraphs and show that these digraphs
can be broken into three disjoint classes: normal, restricted-normal, and
pseudo-normal digraphs, all of which are closed under the digraph complement.
We prove sufficient conditions for normal digraphs and show that the directed
join of two normal digraphs results in a restricted-normal digraph. Also, we
prove that directed joins are the only restricted-normal digraphs when the
order is square-free or twice a square-free number. Finally, we provide methods
to construct restricted-normal digraphs that are not directed joins for all
orders that are neither square-free nor twice a square-free number.
|
The contact process is a particular case of birth-and-death processes on
infinite particle configurations. We consider the contact models on locally
compact separable metric spaces. We prove the existence of a one-parameter set
of invariant measures in the critical regime under the condition imposed on the
associated Markov jump process. This condition, roughly speaking, requires the
separation of any pair of trajectories of this jump process. The general scheme
can be applied to the contact process on the lattice in heterogeneous and
random environments as well as to the contact process on graphs and on
manifolds.
|
Recent astronomical data have provided the primordial deuterium abundance
with percent precision. As a result, Big Bang nucleosynthesis may provide a
constraint on the universal baryon to photon ratio that is as precise as, but
independent from, analyses of the cosmic microwave background. However, such a
constraint requires that the nuclear reaction rates governing the production
and destruction of primordial deuterium are sufficiently well known. Here, a
new measurement of the $^2$H($p,\gamma$)$^3$He cross section is reported. This
nuclear reaction dominates the error on the predicted Big Bang deuterium
abundance. A proton beam of 400-1650 keV energy was incident on solid
titanium deuteride targets, and the emitted $\gamma$-rays were detected in two
high-purity germanium detectors at angles of 55$^\circ$ and 90$^\circ$,
respectively. The deuterium content of the targets has been obtained in situ by
the $^2$H($^3$He,$p$)$^4$He reaction and offline using the Elastic Recoil
Detection method. The astrophysical S-factor has been determined at center of
mass energies between 265 and 1094 keV, addressing the uppermost part of the
relevant energy range for Big Bang nucleosynthesis and complementary to ongoing
work at lower energies. The new data support a higher S-factor at Big Bang
temperatures than previously assumed, reducing the predicted deuterium
abundance.
|
We present a novel adaptive host-chip modular architecture for video
acquisition to optimize an overall objective task constrained under a given bit
rate. The chip is a high resolution imaging sensor such as gigapixel focal
plane array (FPA) with low computational power deployed on the field remotely,
while the host is a server with high computational power. The communication
channel data bandwidth between the chip and host is constrained to accommodate
transfer of all captured data from the chip. The host performs objective task
specific computations and also intelligently guides the chip to optimize
(compress) the data sent to host. This proposed system is modular and highly
versatile in terms of flexibility in re-orienting the objective task. In this
work, object tracking is the objective task. While our architecture supports
any form of compression/distortion, in this paper we use quadtree
(QT)-segmented video frames. We use Viterbi (Dynamic Programming) algorithm to
minimize the area normalized weighted rate-distortion allocation of resources.
The host receives only these degraded frames for analysis. An object detector
is used to detect objects, and a Kalman Filter based tracker is used to track
those objects. Evaluation of system performance is done in terms of Multiple
Object Tracking Accuracy (MOTA) metric. In the proposed architecture,
performance gains in MOTA are obtained by training the object detector twice
with different system-generated distortions in a novel two-step process.
Additionally, the object detector is assisted by the tracker, which upscores
the region proposals in the detector to further improve performance.
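For reference, the MOTA metric used for evaluation is the standard CLEAR-MOT definition; the numbers in the example below are made up for illustration:

```python
def mota(num_misses, num_false_positives, num_id_switches, num_gt_objects):
    """Multiple Object Tracking Accuracy (standard CLEAR-MOT definition):
    MOTA = 1 - (FN + FP + IDSW) / GT, with counts summed over all frames."""
    return 1.0 - (num_misses + num_false_positives + num_id_switches) / num_gt_objects

# e.g. 120 misses, 80 false positives, 10 identity switches over 2000 ground-truth boxes
print(mota(120, 80, 10, 2000))   # 0.895
```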
|
One of the most important lessons from the success of deep learning is that
learned representations tend to perform much better at any task compared to
representations we design by hand. Yet evolution of evolvability algorithms,
which aim to automatically learn good genetic representations, has received
relatively little attention, perhaps because of the large amount of
computational power they require. The recent method Evolvability ES allows
direct selection for evolvability with little computation. However, it can only
be used to solve problems where evolvability and task performance are aligned.
We propose Quality Evolvability ES, a method that simultaneously optimizes for
task performance and evolvability, without this restriction. Our proposed
approach Quality Evolvability has similar motivation to Quality Diversity
algorithms, but with some important differences. While Quality Diversity aims
to find an archive of diverse and well-performing, but potentially genetically
distant individuals, Quality Evolvability aims to find a single individual with
a diverse and well-performing distribution of offspring. By doing so Quality
Evolvability is forced to discover more evolvable representations. We
demonstrate on robotic locomotion control tasks that Quality Evolvability ES,
similarly to Quality Diversity methods, can learn faster than objective-based
methods and can handle deceptive problems.
|
Owing to a reduced solar background and low propagation losses in the
atmosphere, the 2- to 2.5-$\mu$m waveband is a promising candidate for daylight
quantum communication. This spectral region also offers low losses and low
dispersion in hollow-core fibers and in silicon waveguides. We demonstrate for
the first time near-maximally entangled photon pairs at 2.1 $\mu$m that could
support device independent quantum key distribution (DIQKD) assuming
sufficiently high channel efficiencies. The state corresponds to a positive
secure-key rate (0.254 bits/pair, with a quantum bit error rate of 3.8%) based
on measurements in a laboratory setting with minimal channel loss and
transmission distance. This is promising for the future implementation of DIQKD
at 2.1 $\mu$m.
|
Collaboration skills are important for future software engineers. In computer
science education, these skills are often practiced through group assignments,
where students develop software collaboratively. The approach that students
take in these assignments varies widely, but often involves a division of
labour. It is then debatable whether collaboration still takes place. The
discipline of computing education is especially interesting in this context,
because some of its specific features (such as the variation in entry skill
level and the use of source code repositories as collaboration platforms) are
likely to influence the approach taken within groupwork. The aim of this
research is to gain insight into the work division and allocation strategies
applied by computer science students during group assignments. To this end, we
interviewed twenty students of four universities. The thematic analysis shows
that students tend to divide up the workload to enable working independently,
with pair programming and code reviews being often employed. Motivated
primarily by grade and efficiency factors, students choose and allocate tasks
primarily based on their prior expertise and preferences. Based on our
findings, we argue that the setup of group assignments can limit student
motivation for practicing new software engineering skills, and that
interventions are needed towards encouraging experimentation and learning.
|
Event sourced systems are increasing in popularity because they are reliable,
flexible, and scalable. In this article, we point a microscope at a software
architecture pattern that is rapidly gaining popularity in industry, but has
not received as much attention from the scientific community. We do so through
constructivist grounded theory, which proves to be a suitable qualitative method for
extracting architectural knowledge from practitioners. Based on the discussion
of 19 event sourced systems we explore the rationale for and the context of the
event sourcing pattern. A description of the pattern itself and its relation to
other patterns as discussed with practitioners is given. The description itself
is grounded in the experience of 25 engineers, making it a reliable source for
both new practitioners and scientists. We identify five challenges that
practitioners experience: event system evolution, the steep learning curve,
lack of available technology, rebuilding projections, and data privacy. For the
first challenge of event system evolution, we uncover five tactics and
solutions that support practitioners in their design choices when developing
evolving event sourced systems: versioned events, weak schema, upcasting,
in-place transformation, and copy-and-transform.
|
Since its launch, the Alpha Magnetic Spectrometer-02 (AMS-02) has delivered
outstanding quality measurements of the spectra of cosmic-ray (CR) species,
$\bar{p}$, $e^{\pm}$, and nuclei (H--O, Ne, Mg, Si, Fe), which resulted in a
number of breakthroughs. The most recent AMS-02 result is the measurement of
the spectrum of CR fluorine up to $\sim$2 TV. Given its very low solar system
abundance, fluorine in CRs is thought to be mostly secondary, produced in
fragmentations of heavier species, predominantly Ne, Mg, and Si. Similar to the
best-measured secondary-to-primary boron to carbon nuclei ratio that is widely
used to study the origin and propagation of CR species, the precise fluorine
data would allow the origin of Si-group nuclei to be studied independently.
Meanwhile, the secondary origin of CR fluorine has never been tested in a wide
energy range due to the lack of accurate CR data. In this paper, we use the
first ever precise measurements of the fluorine spectrum by AMS-02 together
with ACE-CRIS and Voyager 1 data to actually test this paradigm. Our detailed
modeling shows an excess below 10 GV in the fluorine spectrum that may hint at
a primary fluorine component. We also provide an updated local interstellar
spectrum (LIS) of fluorine in the rigidity range from a few MV to $\sim$2 TV. Our
calculations employ the self-consistent GalProp-HelMod framework that has
proved to be a reliable tool in deriving the LIS of CR $\bar{p}$, $e^{-}$, and
nuclei $Z\le28$.
|
A big, diverse, and balanced training dataset is the key to the success of deep
neural network training. However, existing publicly available datasets used in
facial landmark localization are usually much smaller than those for other
computer vision tasks. A small dataset without diverse and balanced training
samples cannot support the training of a deep network effectively. To address
the above issues, this paper presents a novel Separable Batch Normalization
(SepBN) module with a Cross-protocol Network Training (CNT) strategy for robust
facial landmark localization. Different from the standard BN layer that uses
all the training data to calculate a single set of parameters, SepBN considers
that the samples of a training dataset may belong to different sub-domains.
Accordingly, the proposed SepBN module uses multiple sets of parameters, each
corresponding to a specific sub-domain. However, the selection of an
appropriate branch in the inference stage remains a challenging task because
the sub-domain of a test sample is unknown. To mitigate this difficulty, we
propose a novel attention mechanism that assigns different weights to each
branch for automatic selection in an effective manner. As a further innovation,
the proposed CNT strategy trains a network using multiple datasets having
different facial landmark annotation systems, boosting the performance and
enhancing the generalization capacity of the trained network. The experimental
results obtained on several well-known datasets demonstrate the effectiveness
of the proposed method.
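A rough sketch of the separable batch normalization idea described above, with several BN parameter sets mixed by attention weights predicted from the input; the branch structure and the attention head here are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class SepBNSketch(nn.Module):
    """Illustrative separable batch normalization: several BN branches (one per
    assumed sub-domain) whose outputs are mixed by attention weights predicted
    from the input. A sketch of the idea, not the paper's SepBN module."""
    def __init__(self, num_features, num_domains=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains))
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(num_features, num_domains), nn.Softmax(dim=1))

    def forward(self, x):
        w = self.attention(x)                                        # (B, D)
        outs = torch.stack([bn(x) for bn in self.branches], dim=1)   # (B, D, C, H, W)
        return (w[:, :, None, None, None] * outs).sum(dim=1)         # (B, C, H, W)

x = torch.randn(4, 16, 32, 32)
print(SepBNSketch(16)(x).shape)   # torch.Size([4, 16, 32, 32])
```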
|
This paper concerns the design of a Fourier based pseudospectral numerical
method for the model of European Option Pricing with transaction costs under
Exponential Utility derived by Davis, Panas and Zariphopoulou. Computing the
option price involves solving two stochastic optimal control problems. With a
Exponential Utility function, the dimension of the problem can be reduced, but
one has to deal with high absolute values in the objective function. In this
paper, we propose two changes of variables that reduce the impact of the
exponential growth. We propose a Fourier pseudospectral method to solve the
resulting nonlinear equation. Numerical analysis of the stability, consistency,
convergence, and localization error of the method is included.
Numerical experiments support the theoretical results. The effect of
incorporating transaction costs is also studied.
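As a minimal illustration of the Fourier pseudospectral machinery (applied here to a smooth periodic test function rather than the transformed pricing equation), differentiation reduces to multiplying the FFT coefficients by i k:

```python
import numpy as np

# Spectral differentiation of a smooth periodic function on [0, 2*pi):
# multiply the FFT by i*k and transform back. The accuracy is near machine
# precision, which is the property pseudospectral pricing schemes exploit.
N = 64
L = 2 * np.pi
x = np.arange(N) * L / N
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # angular wavenumbers

u = np.exp(np.sin(x))                           # smooth periodic test function
du_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
du_exact = np.cos(x) * np.exp(np.sin(x))
print("max error:", np.max(np.abs(du_spectral - du_exact)))   # ~1e-12
```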
|
Bipartite networks are a natural representation of the interactions between
entities from two different types. The organization (or topology) of such
networks gives insight into the systems they describe as a whole.
Here, we rely on motifs which provide a meso-scale description of the topology.
Moreover, we consider the bipartite expected degree distribution (B-EDD) model
which accounts for both the density of the network and possible imbalances
between the degrees of the nodes. Under the B-EDD model, we prove the
asymptotic normality of the count of any given motif, considering sparsity
conditions. We also provide closed-form expressions for the mean and the
variance of this count. This allows us to avoid computationally prohibitive
resampling procedures. Based on these results, we define a goodness-of-fit test
for the B-EDD model and propose a family of tests for network comparisons. We
assess the asymptotic normality of the test statistics and the power of the
proposed tests on synthetic experiments and illustrate their use on ecological
data sets.
|
In \cite{butman1976} the linear coding scheme is applied, $X_t
=g_t\Big(\Theta - {\bf E}\Big\{\Theta\Big|Y^{t-1}, V_0=v_0\Big\}\Big)$,
$t=2,\ldots,n$, $X_1=g_1\Theta$, with $\Theta: \Omega \to {\mathbb R}$, a
Gaussian random variable, to derive a lower bound on the feedback rate, for
additive Gaussian noise (AGN) channels, $Y_t=X_t+V_t, t=1, \ldots, n$, where
$V_t$ is a Gaussian autoregressive (AR) noise, and $\kappa \in [0,\infty)$ is
the total transmitter power. For the unit memory AR noise, with parameters $(c,
K_W)$, where $c\in [-1,1]$ is the pole and $K_W$ is the variance of the
Gaussian noise, the lower bound is $C^{L,B} =\frac{1}{2} \log \chi^2$, where
$\chi =\lim_{n\longrightarrow \infty} \chi_n$ is the positive root of
$\chi^2=1+\Big(1+ \frac{|c|}{\chi}\Big)^2 \frac{\kappa}{K_W}$, and the sequence
$\chi_n \triangleq \Big|\frac{g_n}{g_{n-1}}\Big|, n=2, 3, \ldots,$ satisfies a
certain recursion, and conjectured that $C^{L,B}$ is the feedback capacity.
In this correspondence, it is observed that the nontrivial lower bound
$C^{L,B}=\frac{1}{2} \log \chi^2$ such that $\chi >1$, necessarily implies the
scaling coefficients of the feedback code, $g_n$, $n=1,2, \ldots$, grow
unbounded, in the sense that, $\lim_{n\longrightarrow\infty}|g_n| =+\infty$.
The unbounded behaviour of $g_n$ follows from the ratio limit theorem of a
sequence of real numbers, and it is verified by simulations. It is then
concluded that such linear codes are not practical, and fragile with respect to
a mismatch between the statistics of the mathematical model of the channel and
the real statistics of the channel. In particular, if the error is perturbed by
$\epsilon_n>0$ no matter how small, then $X_n =g_n\Big(\Theta - {\bf
E}\Big\{\Theta\Big|Y^{n-1}, V_0=v_0\Big\}\Big)+g_n \epsilon_n$, and
$|g_n|\epsilon_n \longrightarrow \infty$, as $n \longrightarrow \infty$.
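A small numerical sketch (not part of the correspondence) that solves the fixed-point equation for chi quoted above and illustrates the geometric growth of the scaling coefficients implied by chi > 1; the values of c, kappa, and K_W are arbitrary:

```python
import math

# Solve chi^2 = 1 + (1 + |c|/chi)^2 * kappa/K_W for its positive root by
# fixed-point iteration, then illustrate that chi > 1 forces |g_n| ~ chi^n
# to grow without bound, which is the fragility argument above.
c, kappa, K_W = 0.5, 1.0, 1.0

chi = 1.0
for _ in range(200):
    chi = math.sqrt(1.0 + (1.0 + abs(c) / chi) ** 2 * kappa / K_W)
print("chi =", chi, " lower bound C^{L,B} =", 0.5 * math.log(chi ** 2))

# chi_n = |g_n / g_{n-1}| -> chi > 1, so the scaling coefficients diverge:
g_n = 1.0
for _ in range(30):
    g_n *= chi
print("growth of |g_n| over 30 steps:", g_n)   # roughly chi**30
```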
|
We study a model for frustrated tunneling ionization using ultrashort laser
pulses. The model is based on the strong field approximation and it employs the
saddle point approximation to predict quasiclassical trajectories that are
captured on Rydberg states. We present a classification of the saddle-point
solutions and explore their behavior as functions of angular momentum of the
final state, as well as the carrier--envelope phase (CEP) of the laser pulse.
We compare the final state population computed by the model to results obtained
by numerical propagation of the time-dependent Schr\"odinger equation (TDSE)
for the hydrogen atom. While we find qualitative agreement in the CEP
dependence of the populations in principal quantum numbers, $n$, the
populations to individual angular momentum channels, $\ell$, are found to be
inconsistent between model and TDSE. Thus, our results show that improvements
of the quasiclassical trajectories are in order for a quantitative model of
frustrated tunneling ionization.
|
Tomonaga-Luttinger liquids (TLLs) can be used to effectively describe
one-dimensional quantum many-body systems such as ultracold atoms, charges in
nanowires, superconducting circuits, and gapless spin chains. Their properties
are given by two parameters, the propagation velocity and the Luttinger
parameter. Here we study inhomogeneous TLLs where these are promoted to
functions of position and demonstrate that they profoundly affect the dynamics:
In general, besides curving the light cone, we show that propagation is no
longer ballistically localized to the light-cone trajectories, different from
standard homogeneous TLLs. Specifically, if the Luttinger parameter depends on
position, the dynamics features pronounced spreading into the light cone, which
cannot be understood via a simple superposition of waves as in the
Huygens-Fresnel principle. This is the case for ultracold atoms in a parabolic
trap, which serves as our main motivation, and we discuss possible experimental
observations in such systems.
|
Correlation plays a critical role in the tracking field, especially in
recent popular Siamese-based trackers. The correlation operation is a simple
fusion manner to consider the similarity between the template and the search
region. However, the correlation operation itself is a local linear matching
process, which tends to lose semantic information and fall into local optima
easily, and may be the bottleneck in designing high-accuracy tracking
algorithms. Is there any better feature fusion method than correlation? To
address this issue, inspired by Transformer, this work presents a novel
attention-based feature fusion network, which effectively combines the template
and search region features solely using attention. Specifically, the proposed
method includes an ego-context augment module based on self-attention and a
cross-feature augment module based on cross-attention. Finally, we present a
Transformer tracking (named TransT) method based on the Siamese-like feature
extraction backbone, the designed attention-based fusion mechanism, and the
classification and regression head. Experiments show that our TransT achieves
very promising results on six challenging datasets, especially on large-scale
LaSOT, TrackingNet, and GOT-10k benchmarks. Our tracker runs at approximately
50 fps on GPU. Code and models are available at
https://github.com/chenxin-dlut/TransT.
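A sketch of attention-based fusion in the spirit of the cross-feature augment module described above; dimensions and layer details are illustrative assumptions, and the released code at the linked repository is the authoritative implementation:

```python
import torch
import torch.nn as nn

class CrossFeatureAugment(nn.Module):
    """Sketch of cross-attention fusion: search-region features attend to
    template features via multi-head attention, followed by a residual
    connection and layer normalization."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads)
        self.norm = nn.LayerNorm(dim)

    def forward(self, search_feat, template_feat):
        # inputs: (sequence_length, batch, dim) token sequences of CNN features
        attended, _ = self.cross_attn(query=search_feat,
                                      key=template_feat,
                                      value=template_feat)
        return self.norm(search_feat + attended)

template = torch.randn(64, 2, 256)     # e.g. 8x8 template tokens
search = torch.randn(256, 2, 256)      # e.g. 16x16 search-region tokens
print(CrossFeatureAugment()(search, template).shape)   # torch.Size([256, 2, 256])
```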
|
The regular monitoring of flat-spectrum radio quasars (FSRQs) in
$\gamma$-rays by Fermi-LAT over the past 12 years has revealed six sources that
exhibited extreme $\gamma$-ray outbursts, with daily fluxes crossing $10^{-5}$
photons/cm$^{2}$/s. We obtained nearly-simultaneous multi-wavelength data of
these sources in radio to $\gamma$-ray waveband from OVRO, Steward Observatory,
SMARTS, Swift-UVOT, Swift-XRT, and Fermi-LAT. The time-averaged broadband
Spectral Energy Distributions (SEDs) of these sources in quiescent states were
studied to get an idea about the underlying baseline radiation processes. We
modeled the SEDs using one-zone leptonic synchrotron and inverse-Compton
emission scenario from broken power-law electron energy distribution inside a
spherical plasma blob, relativistically moving down a conical jet. The model
takes into account inverse-Compton scattering of externally and locally
originated seed photons in the jet. The big blue bumps visible in quiescent
state SEDs helped to estimate the accretion disk luminosities and central black
hole masses. We found a correlation between the magnetic field inside the
emission region and the ratio of emission region distance to disk luminosity,
which implies that the magnetic field decreases with an increase in emission
region distance and decrease in disk luminosity, suggesting a disk-jet
connection. The high-energy index of the electron distribution was also found
to be correlated with observed $\gamma$-ray luminosity as $\gamma$-rays are
produced by high-energy particles. In most cases, kinetic power carried by
electrons can account for jet radiation power as jets become radiatively
inefficient during quiescent states.
|
Examining a 20th-century Scandinavian legal-theoretical tradition, we can
extract an ontologically naturalistic, a logically empiricist, and a modern
idealistic rationale. We recast the mathematical syntactic figure present in
the `logical empiricism' in terms of contemporary mathematical logic. A new formal
framework for describing explicit purchase statutes (Sweden) is gradually
developed and subsequently proposed. This new framework is based on a
many-sorted first-order logic (MFOL) approach, where the semantics are grounded
in concrete `physical' objects and situations with a legal relevance.
Specifically, we present a concrete formal syntactic translation of one of the
central statutes of Swedish legislation for the purchase of immovable property.
Additionally, we discuss the potential implications that a subsequent
development of such formalisations would have for constructing artificial
agents (e.g., software) that can be used as `co-creative' legal assistance for
solving highly complex legal issues concerning the transfer of property, among
others.
|