Software bots are used to streamline tasks in Open Source Software (OSS)
projects' pull requests, saving development cost, time, and effort. However,
their presence can be disruptive to the community. We identified several
challenges caused by bots in pull request interactions by interviewing 21
practitioners, including project maintainers, contributors, and bot developers.
In particular, our findings indicate noise as a recurrent and central problem.
Noise affects both human communication and development workflow by overwhelming
and distracting developers. Our main contribution is a theory of how human
developers perceive annoying bot behaviors as noise on social coding platforms.
This contribution may help practitioners understand the effects of adopting a
bot, and researchers and tool designers may leverage our results to better
support human-bot interaction on social coding platforms.
|
In this paper we consider the Lagrangian Averaged Navier-Stokes Equations,
also known as the LANS-$\alpha$ Navier-Stokes model, on the two-dimensional torus.
We assume that the noise is a cylindrical Wiener process whose coefficient is
multiplied by $\sqrt{\alpha}$. We then study, through the lens of the large
and moderate deviations principles, the behaviour of the trajectories of the
solutions of the stochastic system as $\alpha$ goes to 0. Instead of giving two
separate proofs of the two deviations principles, we present a unifying approach
to the proof of the LDP and MDP and express the rate function in terms of the
unique solution of the Navier-Stokes equations. Our proof is based on the weak
convergence approach to large deviations principle. As a by-product of our
analysis we also prove that the solutions of the stochastic LANS-$\alpha$ model
converge in probability to the solutions of the deterministic Navier-Stokes
equations.
|
Reinforcement learning (RL) has been shown to be suitable for developing agents
that play complex games at human-level performance. However, it is not well
understood how to use RL effectively for cybersecurity tasks. To develop
such understanding, it is necessary to develop RL agents using simulation and
emulation systems allowing researchers to model a broad class of realistic
threats and network conditions. Demonstrating that a specific RL algorithm can
be effective for defending a network under certain conditions may not
necessarily give insight about the performance of the algorithm when the
threats, network conditions, and security goals change. This paper introduces a
novel approach for network environment design and a software framework to
address the fundamental problem that network defense cannot be defined as a
single game with a simple set of fixed rules. We show how our approach is
necessary to facilitate the development of RL network defenders that are robust
against attacks aimed at the agent's learning. Our framework enables the
development and simulation of adversaries with sophisticated behavior that
includes poisoning and evasion attacks on RL network defenders.
|
The influence of high-enthalpy effects on hypersonic turbulent boundary
layers is investigated by means of direct numerical simulations (DNS). A
quasi-adiabatic flat-plate air flow at free-stream Mach number equal to 10 is
simulated up to fully-developed turbulent conditions using a five-species,
chemically-reacting model. A companion DNS based on a frozen-chemistry
assumption is also carried out, in order to isolate the effect of finite-rate
chemical reactions and assess their influence on turbulent quantities. In order
to reduce uncertainties associated with turbulence generation at the inlet of
the computational domain, both simulations are initiated in the laminar flow
region and the flow is allowed to evolve up to the fully turbulent regime. Modal
forcing by means of localized suction and blowing is used to trigger
laminar-to-turbulent transition. The high temperatures reached in the near-wall
region, including the viscous and buffer sublayers, activate significant
dissociation of both oxygen and nitrogen. This in turn modifies the
thermodynamic and transport properties of the reacting mixture, affecting the
first-order statistics of thermodynamic quantities. Due to the endothermic
nature of the chemical reactions in the forward direction, temperature and
density fluctuations in the reacting layer are smaller than in the
frozen-chemistry flow. However, the first- and second-order statistics of the
velocity field are found to be little affected by the chemical reactions under
a scaling that accounts for the modified fluid properties. We also observed
that the Strong Reynolds Analogy (SRA) remains well respected despite the
severe hypersonic conditions and that the computed skin friction coefficient
distributions match well the results of the Renard-Deck decomposition extended
to compressible flows.
|
The International Virtual Observatory Alliance (IVOA) has developed and
built, in the last two decades, an ecosystem of distributed resources,
interoperable and based upon open, shared technological standards. In doing so,
the IVOA anticipated and put into practice, for the astrophysical domain,
the ideas of FAIR-ness of data and service resources and of Open-ness in
sharing scientific results, leveraging the underlying open standards
required to achieve them. In Europe, efforts to support and develop the
ecosystem proposed by the IVOA specifications have been provided by a continuous
series of EU-funded projects, up to the current H2020 ESCAPE ESFRI cluster. In the
meantime, in the last years, Europe has realised the importance of promoting
the Open Science approach for the research communities and started the European
Open Science Cloud (EOSC) project to create a distributed environment for
research data, services and communities. In this framework, the European VO
community had to move from the interoperability scenario of the
astrophysics domain to a broader perspective that includes a
cross-domain FAIR approach. Within the ESCAPE project the CEVO Work Package
(Connecting ESFRI to EOSC through the VO) has one task to deal with this
integration challenge: a challenge where an existing, mature, distributed
e-infrastructure has to be matched to a forming, more general architecture.
CEVO started its work in the first months of 2019 and has already worked on
the integration of the VO Registry into the EOSC e-infrastructure. This
contribution reports on the first year and a half of integration activities,
that involve applications, services and resources being aware of the VO
scenario and compatible with the EOSC architecture.
|
Since the introduction of the CDC 6600 in 1964 and its `scoreboarding'
technique, processors have not (necessarily) executed instructions in program
order. Programmers of high-level code may sequence independent instructions in
arbitrary order, and it is a matter of significant programming abstraction and
computational efficiency that the processor can be relied upon to make sensible
parallelizations/reorderings of low-level instructions to take advantage of,
e.g., multiple ALUs. At the architectural level such reordering is typically
implemented via a per-processor pipeline, into which instructions are fetched
in order, but possibly committed out of order depending on local
considerations, provided any reordering preserves sequential semantics from
that processor's perspective. However multicore architectures, where several
pipelines run in parallel, can expose these processor-level reorderings as
unexpected, or `weak', behaviours. Such weak behaviours are hard to reason
about, and (via speculative execution) underlie at least one class of
widespread security vulnerability.
In this paper we introduce a novel program operator, \emph{parallelized
sequential composition}, which can be instantiated with a function that
controls the reordering of atomic instructions. It generalises both sequential
and parallel composition, and when appropriately instantiated exhibits many of
the weak behaviours of well-known hardware weak memory models. Our framework
admits the application of established compositional techniques (e.g.,
Owicki-Gries) for reasoning about weak behaviours, and is convenient for
abstractly expressing properties from the literature. The semantics and theory
are encoded and verified in a theorem prover, and we give an implementation of
the pipeline semantics which we use to empirically show conformance against
established models of ARM and RISC-V.
|
Disorder in Weyl semimetals and superconductors is surprisingly subtle,
attracting attention and competing theories in recent years. In this brief
review, we discuss the current theoretical understanding of the effects of
short-ranged, quenched disorder on the low-energy properties of
three-dimensional, topological Weyl semimetals and superconductors. We focus on
the role of non-perturbative rare region effects on destabilizing the semimetal
phase and rounding the expected semimetal-to-diffusive-metal transition into a
crossover. Furthermore, the consequences of disorder on the resulting nature
of excitations, transport, and topology are reviewed. New results on a
bipartite random hopping model are presented that confirm previous results in a
$p+ip$ Weyl superconductor, demonstrating that particle-hole symmetry is
insufficient to help stabilize the Weyl semimetal phase in the presence of
disorder. The nature of the avoided transition in a model for a single Weyl
cone in the continuum is discussed. We close with a discussion of open
questions and future directions.
|
This paper is devoted to the theoretical and numerical investigation of an
augmented Lagrangian method for the solution of optimization problems with
geometric constraints. Specifically, we study situations where parts of the
constraints are nonconvex and possibly complicated, but allow for a fast
computation of projections onto this nonconvex set. Typical problem classes
which satisfy this requirement are optimization problems with disjunctive
constraints (like complementarity or cardinality constraints) as well as
optimization problems over sets of matrices which have to satisfy additional
rank constraints. The key idea behind our method is to keep these complicated
constraints explicit and to penalize only the remaining constraints by an
augmented Lagrangian function. The resulting subproblems are
then solved with the aid of a problem-tailored nonmonotone projected gradient
method. The corresponding convergence theory allows for an inexact solution of
these subproblems. Nevertheless, the overall algorithm computes so-called
Mordukhovich-stationary points of the original problem under a mild asymptotic
regularity condition, which is generally weaker than most of the respective
available problem-tailored constraint qualifications. Extensive numerical
experiments addressing complementarity- and cardinality-constrained
optimization problems as well as a semidefinite reformulation of MAXCUT
problems illustrate the power of our approach.
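As an illustration of the projection step the method relies on, here is a minimal sketch (not the authors' implementation) for the cardinality-constrained case, where projecting onto $\{x : \|x\|_0 \le s\}$ amounts to keeping the $s$ largest-magnitude entries; all names are illustrative:

```python
def project_cardinality(x, s):
    """Projection onto {x : ||x||_0 <= s}: keep the s largest-magnitude entries."""
    idx = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:s]
    keep = set(idx)
    return [xi if i in keep else 0.0 for i, xi in enumerate(x)]

def projected_gradient_step(x, grad, alpha, s):
    """One projected-gradient step for an augmented Lagrangian subproblem:
    take a gradient step on the penalized objective, then project back
    onto the complicated (here: cardinality) constraint set."""
    y = [xi - alpha * gi for xi, gi in zip(x, grad)]
    return project_cardinality(y, s)
```

The projection is cheap even though the cardinality set is nonconvex, which is exactly the structural assumption the method exploits.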
|
Wet-chemical syntheses for quasi two-dimensional (2D) transition metal
dichalcogenides (TMDs) have emerged as promising methods for straightforward
solution-processing of these materials. However, photoluminescence properties
of colloidal TMDs are virtually unexplored due to the typically non-emitting
synthesis products. In this work, we demonstrate room temperature
micro-photoluminescence of delicate ultrathin colloidal WS2 nanosheets
synthesized from WCl6 and elemental sulfur in oleic acid and oleylamine at 320
{\deg}C for the first time. Both mono- and multilayer photoluminescence are
observed, revealing comparable characteristics to exfoliated TMD monolayers and
underpinning the high quality of colloidal WS2 nanosheets. In addition, a
promising long-term air-stability of colloidal WS2 nanosheets is found and the
control of photodegradation of the structures under laser excitation is
identified as a challenge for further advancing nanosheet monolayers. Our
results render colloidal TMDs as easily synthesized and highly promising 2D
semiconductors with optical properties fully competitive with conventionally
fabricated ultrathin TMDs.
|
Visual localization is one of the most important components for robotics and
autonomous driving. Recently, promising results have been shown with CNN-based
methods, which regress the 6-DoF absolute pose directly in an end-to-end
formulation. Additional information, such as geometric or semantic constraints,
is generally introduced to improve performance. In particular, the latter can
aggregate high-level semantic information into the localization task, but it
usually requires extensive manual annotation. To this end, we propose a novel
auxiliary learning strategy for camera localization by introducing
scene-specific high-level semantics from self-supervised representation
learning task. Image colorization is chosen as a powerful proxy task: a
complementary task that outputs a pixel-wise color version of a grayscale
photograph without extra annotations. In our work, feature representations from
colorization network are embedded into localization network by design to
produce discriminative features for pose regression. Meanwhile, an attention
mechanism is introduced to further improve localization performance. Extensive
experiments show that our model significantly improves localization accuracy
over the state of the art on both indoor and outdoor datasets.
|
The Marked Binary Branching Tree (MBBT) is the family tree of a rate one
binary branching process, on which points have been generated according to a
rate one Poisson point process, with i.i.d. uniformly distributed activation
times assigned to the points. In frozen percolation on the MBBT, initially, all
points are closed, but as time progresses points can become either frozen or
open. Points become open at their activation times provided they have not
become frozen before. An open point connects the parts of the tree below and
above it, and one says that a point percolates if the tree above it is infinite. We
consider a version of frozen percolation on the MBBT in which at times of the
form $\theta^n$, all points that percolate are frozen. The limiting model for
$\theta \to 1$, in which points freeze as soon as they percolate, has been
studied before by R\'ath, Swart, and Terpai. We extend their results by showing
that there exists a $0<\theta^\ast<1$ such that the model is endogenous for
$\theta \leq\theta^\ast$ but not for $\theta>\theta^\ast$. This means that for
$\theta \leq \theta^\ast$, frozen percolation is a.s. determined by the MBBT
but for $\theta>\theta^\ast$ one needs additional randomness to describe it.
|
Due to increasing railway use, the capacity at railway yards and maintenance
locations is becoming a limiting factor. Therefore, the scheduling of rolling stock
maintenance and the choice regarding optimal locations to perform maintenance
is increasingly complicated. This research introduces a Maintenance Scheduling
and Location Choice Problem (MSLCP). It simultaneously determines maintenance
locations and maintenance schedules of rolling stock, while it also considers
the available capacity of maintenance locations, measured in the number of
available teams. To solve the MSLCP, an optimization framework based on
Logic-Based Benders' Decomposition (LBBD) is proposed by combining two models,
the Maintenance Location Choice Problem (MLCP) and the Activity Planning
Problem (APP), to assess the capacity of a MLCP solution. Within the LBBD, four
cut generation procedures are introduced to improve the computational
performance: a naive procedure, two heuristic procedures and the so-called
min-cut procedure that aims to exploit the specific characteristics of the
problem at hand. The framework is demonstrated on realistic scenarios from
the Dutch railways. It is shown that the best choice for cut generation
procedure depends on the objective: when aiming to find a good but not
necessarily optimal solution, the min-cut procedure performs best, whereas when
aiming for the optimal solution, one of the heuristic procedures is the
preferred option. The techniques used in this research are new to the
field and offer interesting opportunities for future research.
|
The angular momentum of an electron is characterized well by pseudospin with
$J=3/2$ in the presence of strong spin-orbit interactions. We study
theoretically the Josephson effect of superconductors in which such two $J=3/2$
electrons form a Cooper pair. Within even-parity symmetry class,
pseudospin-quintet pairing states with $J=2$ can exist as well as
pseudospin-singlet state with $J=0$. We focus especially on the Josephson
selection rule among these even-parity superconductors. We find that the
selection rule between quintet states is stricter than that between spin-triplet
states formed by two $S=1/2$ electrons. The effects of a pseudospin-active
interface on the selection rule are discussed as well as those of odd-frequency
Cooper pairs generated by pseudospin dependent band structures.
|
The $q$-Onsager algebra $O_q$ is defined by two generators and two relations,
called the $q$-Dolan/Grady relations. We investigate the alternating central
extension $\mathcal O_q$ of $O_q$. The algebra $\mathcal O_q$ was introduced by
Baseilhac and Koizumi, who called it the current algebra of $O_q$. Recently
Baseilhac and Shigechi gave a presentation of $\mathcal O_q$ by generators and
relations. The presentation is attractive, but the multitude of generators and
relations makes the presentation unwieldy. In this paper we obtain a
presentation of $\mathcal O_q$ that involves a subset of the original set of
generators and a very manageable set of relations. We call this presentation
the compact presentation of $\mathcal O_q$. This presentation resembles the
compact presentation of the alternating central extension for the positive part
of $U_q(\widehat{\mathfrak{sl}}_2)$.
|
For a convex body $K$ in $\mathbb{R}^n$, we introduce and study the extremal
general affine surface areas, defined by \[ {\rm
IS}_{\varphi}(K):=\sup_{K^\prime\subset K}{\rm as}_{\varphi}(K^\prime),\quad {\rm
os}_{\psi}(K):=\inf_{K^\prime\supset K}{\rm as}_{\psi}(K^\prime), \] where ${\rm
as}_{\varphi}(K^\prime)$ and ${\rm as}_{\psi}(K^\prime)$ are the $L_\varphi$ and $L_\psi$
affine surface areas of $K^\prime$, respectively. We prove that there exist extremal
convex bodies that achieve the supremum and infimum, and that the functionals
${\rm IS}_{\varphi}$ and ${\rm os}_{\psi}$ are continuous. In our main results,
we prove Blaschke-Santal\'o type inequalities and inverse Santal\'o type
inequalities for the extremal general affine surface areas. This article may be
regarded as an Orlicz extension of the recent work of Giladi, Huang, Sch\"utt
and Werner (2020), who introduced and studied the extremal $L_p$ affine surface
areas.
|
Current studies in extractive question answering (EQA) have modeled the
single-span extraction setting, in which a single answer span is the label to
predict for a given question-passage pair. This setting is natural for general
domain EQA as the majority of the questions in the general domain can be
answered with a single span. Following general domain EQA models, current
biomedical EQA (BioEQA) models utilize single-span extraction setting with
post-processing steps. In this paper, we investigate the differences in
question distributions across the general and biomedical domains and discover
that biomedical questions are more likely to require list-type answers (multiple
answers) than factoid-type answers (single answer). In real-world use cases,
this emphasizes the need for Biomedical EQA models able to handle multiple
question types. Based on this preliminary study, we propose a multi-span
extraction setting, namely a sequence tagging approach for BioEQA, which directly
tackles questions with a variable number of phrases as their answer. Our
approach can learn to decide the number of answers for a question from training
data. Our experimental results on the BioASQ 7b and 8b list-type questions show
that our approach outperforms the best-performing existing models without
requiring post-processing steps.
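A sequence tagging formulation of this kind typically decodes a variable number of answer spans from per-token BIO labels; the following sketch (an assumption about the decoding step, not the paper's code) shows how multiple answers fall out of the tags naturally:

```python
def decode_spans(tokens, tags):
    """Extract answer spans from BIO tags: "B" starts a span, "I" continues
    the current one, "O" (or a stray "I") closes it."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(current)
            current = [tok]
        elif tag == "I" and current:
            current.append(tok)
        else:
            if current:
                spans.append(current)
            current = []
    if current:  # flush a span that runs to the end of the passage
        spans.append(current)
    return [" ".join(s) for s in spans]
```

Because the number of "B" tags is unconstrained, the model can emit one span for factoid-type questions and several for list-type questions without any post-processing.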
|
Quasi-linear diffusion (QLD), driven by the cyclotron instability, is
proposed as a mechanism for the possible generation of synchrotron emission in
the nearby zone of SgrA$^*$. For physically reasonable parameters, the QLD, by
causing non-zero pitch-angle scattering, allows electrons with relativistic
factors of the order of $10^8$ to emit synchrotron radiation in the hard
$X$-ray spectral band, $\sim120$ keV.
|
The spatio-temporal structure of the bulk wakefields excited by a
relativistic electron bunch in the plasma of semiconductors and semimetals is
studied. It is shown that these wakefields consist of the field of longitudinal
plasmons and of Cherenkov electromagnetic radiation, which is a set of
electromagnetic eigenwaves of the semiconductor or semimetal waveguide. A branch of
the surface plasmons appears in a waveguide with an axial vacuum channel. The
process of wake excitation of the surface plasmons by the relativistic electron
bunch is also investigated. The intensity of the excited wake surface wave is
determined.
|
An aerial vehicle powered by flapping feathered wings was designed, developed
and fabricated. Different from legacy flapping-wing aerial vehicles with
membrane wings, the new design uses authentic bird feathers to fabricate wings.
In field tests, a radio-controlled electric-powered aerial vehicle with
flapping feathered wings successfully took off, flew for up to 63.88 s, and landed
safely. It was found that flapping feathered wings can generate sufficient
thrust and lift to make a man-made aerial vehicle accomplish takeoff,
sustained flight and a safe landing.
|
Chatbots are intelligent software built to be used as a replacement for human
interaction. Existing studies typically do not provide enough support for
low-resource languages like Bangla. Due to the increasing popularity of social
media, we can also see the rise of interactions in Bangla transliteration
(mostly in English) among the native Bangla speakers. In this paper, we propose
a novel approach to build a Bangla chatbot aimed to be used as a business
assistant which can communicate in low-resource languages like Bangla and
Bangla transliteration in English, consistently with high confidence. Since
annotated data was not available for this purpose, we had to work on the whole
machine learning life cycle (data preparation, machine learning modeling, and
model deployment) using Rasa Open Source Framework, fastText embeddings,
Polyglot embeddings, Flask, and other systems as building blocks. While working
with the skewed annotated dataset, we try out different components and
pipelines to evaluate which works best and provide possible reasoning behind
the observed results. Finally, we present a pipeline for intent classification
and entity extraction which achieves reasonable performance (accuracy: 83.02%,
precision: 80.82%, recall: 83.02%, F1-score: 80%).
|
As a new generation of Public Bicycle-sharing Systems (PBS), the dockless PBS
(DL-PBS) is an important application of cyber-physical systems and intelligent
transportation. How to use AI to provide efficient bicycle dispatching
solutions based on dynamic bicycle rental demand is an essential issue for
DL-PBS. In this paper, we propose a dynamic bicycle dispatching algorithm based
on multi-objective reinforcement learning (MORL-BD) to provide the optimal
bicycle dispatching solution for DL-PBS. We model the DL-PBS system from the
perspective of CPS and use deep learning to predict the layout of bicycle
parking spots and the dynamic demand of bicycle dispatching. We define the
multi-route bicycle dispatching problem as a multi-objective optimization
problem by considering the optimization objectives of dispatching costs,
dispatch truck's initial load, workload balance among the trucks, and the
dynamic balance of bicycle supply and demand. On this basis, the collaborative
multi-route bicycle dispatching problem among multiple dispatch trucks is
modeled as a multi-agent MORL model. All dispatch paths between parking spots
are defined as state spaces, and the reciprocal of dispatching costs is defined
as a reward. Each dispatch truck is equipped with an agent to learn the optimal
dispatch path in the dynamic DL-PBS network. We create an elite list to store
the Pareto optimal solutions of bicycle dispatch paths found in each action,
and finally obtain the Pareto frontier. Experimental results on actual DL-PBS
systems show that compared with existing methods, MORL-BD can find a higher
quality Pareto frontier with less execution time.
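The elite-list bookkeeping described above can be sketched as follows, assuming all objectives are minimized; function names are illustrative, not the paper's API:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_elite(elite, candidate):
    """Insert a candidate solution's objective vector into the elite list,
    keeping only mutually non-dominated solutions (the Pareto front)."""
    if any(dominates(e, candidate) for e in elite):
        return elite  # candidate is dominated; elite list unchanged
    return [e for e in elite if not dominates(candidate, e)] + [candidate]
```

Feeding every objective vector encountered during learning through `update_elite` leaves exactly the Pareto frontier in the list at the end.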
|
There are many possible definitions of derivatives. Here we present several
and introduce one that we call the generalized derivative, which contains some
of the others as particular cases. What interests us is to show that there are
infinitely many possible definitions of fractional derivatives, all of which
are correct as differential operators, each of which must be properly defined
in its own algebra.
We introduce a generalized version of the fractional derivative that extends
the existing ones in the literature. This generalization is associated with a
differential operator and a differential ring, and applications are given that
show the advantages of the generalization.
We also review the different definitions of fractional derivatives proposed
by Michele Caputo in \cite{GJI:GJI529}, Khalil, Al Horani, Yousef, Sababheh in
\cite{khalil2014new}, Anderson and Ulness in \cite{anderson2015newly}, Guebbai
and Ghiat in \cite{guebbai2016new}, Udita N. Katugampola in
\cite{katugampola2014new}, Camrud in \cite{camrud2016conformable} and it is
shown how the generalized version contains the previous ones as a particular
case.
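For concreteness, two of the reviewed definitions can be recalled here: the Caputo derivative for $0<\alpha<1$ and the conformable derivative of Khalil et al.:

```latex
{}^{C}\!D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
  \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau ,
\qquad
T_{\alpha}(f)(t) \;=\; \lim_{\varepsilon \to 0}
  \frac{f\!\left(t+\varepsilon\, t^{\,1-\alpha}\right)-f(t)}{\varepsilon}.
```

Both reduce to the ordinary derivative as $\alpha \to 1$, which is the consistency requirement any generalized definition must preserve.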
|
A determinantal facet ideal (DFI) is an ideal $J_\Delta$ generated by maximal
minors of a generic matrix parametrized by an associated simplicial complex
$\Delta$. In this paper, we construct an explicit linear strand for the initial
ideal with respect to any diagonal term order $<$ of an arbitrary DFI. In
particular, we show that if $\Delta$ has no \emph{1-nonfaces}, then the Betti
numbers of the linear strand of $J_\Delta$ and its initial ideal coincide. We
apply this result to prove a conjecture of Ene, Herzog, and Hibi on Betti
numbers of closed binomial edge ideals in the case that the associated graph
has at most $2$ maximal cliques. More generally, we show that the linear strand
of the initial ideal (with respect to $<$) of \emph{any} DFI is supported on a
polyhedral cell complex obtained as an induced subcomplex of the \emph{complex
of boxes}, introduced by Nagel and Reiner.
|
An image-based deep learning framework is developed in this paper to predict
damage and failure in microstructure-dependent composite materials. The work is
motivated by the complexity and computational cost of high-fidelity simulations
of such materials. The proposed deep learning framework predicts the
post-failure full-field stress distribution and crack pattern in
two-dimensional representations of the composites based on the geometry of
microstructures. The material of interest is selected to be a high-performance
unidirectional carbon fiber-reinforced polymer composite. The deep learning
framework contains two stacked fully-convolutional networks, namely, Generator
1 and Generator 2, trained sequentially. First, Generator 1 learns to translate
the microstructural geometry to the full-field post-failure stress
distribution. Then, Generator 2 learns to translate the output of Generator 1
to the failure pattern. A physics-informed loss function is also designed and
incorporated to further improve the performance of the proposed framework and
facilitate the validation process. In order to provide a sufficiently large
data set for training and validating the deep learning framework, 4500
microstructural representations are synthetically generated and simulated in an
efficient finite element framework. It is shown that the proposed deep learning
approach can effectively predict the composites' post-failure full-field stress
distribution and failure pattern, two of the most complex phenomena to simulate
in computational solid mechanics.
|
We propose a new model to assess the mastery level of a given skill
efficiently. The model, called Bayesian Adaptive Mastery Assessment (BAMA),
uses information on the accuracy and the response time of the answers given and
infers the mastery at every step of the assessment. BAMA balances the length of
the assessment and the certainty of the mastery inference by employing a
Bayesian decision-theoretic framework adapted to each student. All these
properties contribute to a novel approach in assessment models for intelligent
learning systems. The purpose of this research is to explore the properties of
BAMA and evaluate its performance concerning the number of questions
administered and the accuracy of the final mastery estimates across different
students. We simulate student performances and establish that the model
converges with low variance and high efficiency leading to shorter assessment
duration for all students. Considering the experimental results, we expect our
approach to avoid the issue of over-practicing and under-practicing and
facilitate the development of Learning Analytics tools to support the tutors in
the evaluation of learning effects and instructional decision making.
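A simplified sketch of such a Bayesian stopping rule, using only answer accuracy on a discretized grid posterior (BAMA additionally uses response times and a decision-theoretic criterion; the names, grid, and thresholds here are illustrative):

```python
GRID = [i / 100 for i in range(1, 100)]  # discretized mastery probability theta

def update_posterior(posterior, correct):
    """Bayes update of the grid posterior over theta after one answer."""
    lik = [(t if correct else 1 - t) for t in GRID]
    unnorm = [p * l for p, l in zip(posterior, lik)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def assess(answers, mastery_theta=0.8, certainty=0.95):
    """Administer answers until the posterior is certain either way,
    balancing assessment length against inference certainty."""
    posterior = [1 / len(GRID)] * len(GRID)  # uniform prior
    for n, correct in enumerate(answers, start=1):
        posterior = update_posterior(posterior, correct)
        p_mastery = sum(p for p, t in zip(posterior, GRID) if t >= mastery_theta)
        if p_mastery >= certainty:
            return "mastered", n
        if p_mastery <= 1 - certainty:
            return "not mastered", n
    return "undecided", len(answers)
```

Stopping as soon as either certainty threshold is crossed is what keeps the assessment short for students who are clearly above or below mastery.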
|
To cope with the growing demand for transportation on the railway system,
accurate, robust, and high-frequency positioning is required to enable a safe
and efficient utilization of the existing railway infrastructure. As a basis
for a localization system we propose a complete on-board mapping pipeline able
to map robust meaningful landmarks, such as poles from power lines, in the
vicinity of the vehicle. Such poles are good candidates for reliable and long
term landmarks even through difficult weather conditions or seasonal changes.
To address the challenges of motion blur and illumination changes in railway
scenarios we employ a Dynamic Vision Sensor, a novel event-based camera. Using
a sideways oriented on-board camera, poles appear as vertical lines. To map
such lines in a real-time event stream, we introduce Hough2Map, a novel
consecutive iterative event-based Hough transform framework capable of
detecting, tracking, and triangulating close-by structures. We demonstrate the
mapping reliability and accuracy of Hough2Map on real-world data in typical
usage scenarios and evaluate using surveyed infrastructure ground truth maps.
Hough2Map achieves a detection reliability of up to 92% and a mapping
root-mean-square accuracy of 1.1518 m.
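For a sideways-facing camera in which poles appear as vertical lines, the Hough accumulator degenerates to voting per image column; the following toy sketch (not the Hough2Map implementation, which also tracks and triangulates lines over time) illustrates the detection step:

```python
from collections import Counter

def detect_vertical_lines(events, width, min_votes):
    """Accumulate event votes per x-column; a vertical line (e.g. a pole)
    appears as a peak in the 1-D accumulator."""
    acc = Counter()
    for x, y, t in events:  # each event: pixel coordinates and timestamp
        if 0 <= x < width:
            acc[x] += 1
    return sorted(x for x, votes in acc.items() if votes >= min_votes)
```

In the full pipeline the accumulator would be decayed or windowed over the event stream so that a pole sweeping across the image is tracked as a moving peak rather than a static one.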
|
Across a large range of scales, accreting sources show remarkably similar
patterns of variability, most notably the log-normality of the luminosity
distribution and the linear root-mean square (rms)-flux relationship. These
results are often explained using the theory of propagating fluctuations in
which fluctuations in the viscosity create perturbations in the accretion rate
at all radii, propagate inwards and combine multiplicatively. While this idea
has been extensively studied analytically in a linear regime, there has been
relatively little numerical work investigating the non-linear behaviour. In
this paper, we present a suite of stochastically driven 1-d $\alpha$-disc
simulations, exploring the behaviour of these discs. We find that the eponymous
propagating fluctuations are present in all simulations across a wide range of
model parameters, in contradiction to previous work. Of the model parameters,
we find by far the most important to be the timescale on which the viscosity
fluctuations occur. Physically, this timescale will depend on the underlying
physical mechanism, thought to be the magnetorotational instability (MRI). We
find a close relationship between this fluctuation timescale and the break
frequency in the power spectral density (PSD) of the luminosity, a fact which
could allow observational probes of the behaviour of the MRI dynamo. We report
a fitting formula for the break frequency as a function of the fluctuation
timescale, the disc thickness and the mass of the central object.
|
We present a regime where an ultra-intense laser pulse interacting with a
foil target results in high $\gamma$-photon conversion efficiency, obtained via
three-dimensional quantum-electrodynamics particle-in-cell simulations. A
single-cycle laser pulse is used under the tight-focusing condition for
obtaining the $\mathrm{\lambda}^3$ regime. The simulations employ a radially
polarized laser as it results in higher $\gamma$-photon conversion efficiency
compared to both azimuthal and linear polarizations. A significant fraction of
the laser energy is transferred to positrons, while a part of the
electromagnetic wave escapes the target as attosecond single-cycle pulses.
|
Nonconvex optimal-control problems governed by evolution problems in
infinite-dimensional spaces (e.g., parabolic boundary-value problems) need a
continuous (and possibly also smooth) extension on some (preferably convex)
compactification, called relaxation, to guarantee existence of their solutions
and to facilitate analysis by relatively conventional tools. When the control
is valued in some subsets of Lebesgue spaces, the usual extensions are either
too coarse (allowing in fact only very restricted nonlinearities) or too fine
(being nonmetrizable). To overcome these drawbacks, a compromising convex
compactification is here devised, combining classical techniques for Young
measures with Choquet theory. This is applied to parabolic optimal control
problems as far as existence and optimality conditions are concerned.
|
This paper establishes the local-in-time well-posedness of solutions to an
approximating system constructed by mildly regularizing the dynamical sea ice
model of {\it W.D. Hibler, Journal of Physical Oceanography, 1979}. Our choice
of regularization has been carefully designed, prompted by physical
considerations, to retain the original coupled hyperbolic-parabolic character
of Hibler's model. Various regularized versions of this model have been used
widely for the numerical simulation of the circulation and thickness of the
Arctic ice cover. However, due to the singularity in the ice rheology, the
notion of solutions to the original model is unclear. Instead, an approximating
system, which reflects current numerical practice, is proposed. The
well-posedness theory of such a system provides first-step groundwork for both
numerical and future analytical studies.
|
The Maximum Entropy Spectral Analysis (MESA) method, developed by Burg,
provides a powerful tool to perform spectral estimation of a time-series. The
method relies on Jaynes' maximum entropy principle and provides the means of
inferring the spectrum of a stochastic process in terms of the coefficients of
some autoregressive process AR($p$) of order $p$. A closed form recursive
solution provides an estimate of the autoregressive coefficients as well as of
the order $p$ of the process. We provide a ready-to-use implementation of the
algorithm in the form of a python package \texttt{memspectrum}. We characterize
our implementation by performing a power spectral density analysis on synthetic
data (with known power spectral density) and we compare different criteria for
stopping the recursion. Furthermore, we compare the performance of our code
with the ubiquitous Welch algorithm, using synthetic data generated from the
released spectrum by the LIGO-Virgo collaboration. We find that, when compared
to Welch's method, Burg's method provides a power spectral density (PSD)
estimation with a systematically lower variance and bias. This is particularly
evident for small numbers of data points, making Burg's method especially
suitable in this regime.
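Burg's recursion itself is compact enough to sketch in a few lines of NumPy. The following is a minimal plain-NumPy illustration of the algorithm underlying \texttt{memspectrum}, not the package's actual implementation; it returns the AR($p$) coefficients (with $a_0 = 1$) and the prediction-error power $E$, from which the PSD follows as $S(f) = E\,\Delta t / |1 + \sum_{j=1}^{p} a_j e^{-2\pi i f j \Delta t}|^2$.

```python
import numpy as np

def burg(x, p):
    """Burg recursion: AR(p) coefficients (a[0] = 1) and prediction-error power E."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    E = x.dot(x) / len(x)                      # zero-order error power
    f, b = x[1:].copy(), x[:-1].copy()         # forward / backward prediction errors
    for _ in range(p):
        # reflection coefficient minimizing forward + backward error power
        k = -2.0 * f.dot(b) / (f.dot(f) + b.dot(b))
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                    # Levinson-type coefficient update
        E *= 1.0 - k * k
        f, b = (f + k * b)[1:], (b + k * f)[:-1]
    return a, E
```

For an AR(1) process $x_n = 0.5\,x_{n-1} + e_n$, the recursion recovers $a_1 \approx -0.5$.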
|
There is a class of binary post-AGB stars with a remarkable near-infrared
excess that are surrounded by Keplerian or quasi-Keplerian disks and extended
outflows composed of gas escaping from the disk. The Keplerian dynamics had
been well identified in four cases, namely the Red Rectangle, AC Her, IW Car,
and IRAS 08544-4431. In these objects, the mass of the outflow represents ~ 10
% of the nebular mass, the disk being the dominant component of the nebula. We
present interferometric NOEMA maps of 12CO and 13CO J=2-1 in 89 Her and 12CO
J=2-1 in AC Her, IRAS 19125+0343, and R Sct. Several properties of the nebula
are obtained from the data and model fitting, including the structure, density,
and temperature distributions, as well as the dynamics. We also discuss the
uncertainties in the derived values. The presence of an expanding component in
AC Her is doubtful, but thanks to new maps and models, we estimate an upper
limit to the mass of this outflow of < 3 10^-5 Mo, that is, the mass of the
outflow is < 5 % of the total nebular mass. For 89 Her, we find a total nebular
mass of 1.4 10^-2 Mo, of which ~ 50 % comes from an hourglass-shaped extended
outflow. In the case of IRAS 19125+0343, the nebular mass is 1.1 10^-2 Mo,
where the outflow contributes ~ 70 % of the total mass. The nebular mass of R
Sct is 3.2 10^-2 Mo, of which ~ 75 % corresponds to a very extended outflow
that surrounds the disk. Our results for IRAS 19125+0343 and R Sct lead us to
introduce a new subclass of binary post-AGB stars, for which the outflow is the
dominant component of the nebula. Moreover, the outflow mass fraction found in
AC Her is smaller than those found in other disk-dominated binary post-AGB
stars. 89 Her would represent an intermediate case between both subclasses.
|
In the (fully) dynamic set cover problem, we have a collection of $m$ sets
from a universe of size $n$ that undergo element insertions and deletions; the
goal is to maintain an approximate set cover of the universe after each update.
We give an $O(f^2)$ update time algorithm for this problem that achieves an
$f$-approximation, where $f$ is the maximum number of sets that an element
belongs to; under the unique games conjecture, this approximation is best
possible for any fixed $f$. This is the first algorithm for dynamic set cover
with approximation ratio that {exactly} matches $f$ (as opposed to {almost} $f$
in prior work), as well as the first one with runtime \emph{independent of
$n,m$} (for any approximation factor of $o(f^3)$).
Prior to our work, the state-of-the-art algorithms for this problem were
$O(f^2)$ update time algorithms of Gupta et al. [STOC'17] and Bhattacharya et
al. [IPCO'17] with $O(f^3)$ approximation, and the recent algorithm of
Bhattacharya et al. [FOCS'19] with $O(f \cdot \log{n}/\epsilon^2)$ update time
and $(1+\epsilon) \cdot f$ approximation, improving the $O(f^2 \cdot
\log{n}/\epsilon^5)$ bound of Abboud et al. [STOC'19].
The key technical ingredient of our work is an algorithm for maintaining a
{maximal} matching in a dynamic hypergraph of rank $r$, where each hyperedge
has at most $r$ vertices, which undergoes hyperedge insertions and deletions in
$O(r^2)$ amortized update time; our algorithm is randomized, and the bound on
the update time holds in expectation and with high probability. This result
generalizes the maximal matching algorithm of Solomon [FOCS'16] with constant
update time in ordinary graphs to hypergraphs, and is of independent interest; the
previous state-of-the-art algorithms for set cover do not translate to
(integral) matchings for hypergraphs, let alone a maximal one. Our quantitative
result for the set cover problem is [...]
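The matching-to-cover connection the authors exploit can be illustrated in the static setting: take a maximal "matching" of elements (no two matched elements sharing a set) and cover with every set containing a matched element. Each matched element contributes at most $f$ sets, while any cover must spend at least one set per matched element, giving the factor $f$. The sketch below is this classical static reduction only; the dynamic, $O(f^2)$-update version is the paper's contribution and is not reproduced here.

```python
def f_approx_set_cover(sets, universe):
    """f-approximate set cover via a maximal matching of elements."""
    # index: which sets contain each element
    elem_to_sets = {e: set() for e in universe}
    for name, s in sets.items():
        for e in s:
            if e in elem_to_sets:
                elem_to_sets[e].add(name)
    chosen = set()
    for e in universe:
        if elem_to_sets[e] & chosen:
            continue                  # e already covered by a chosen set
        # e joins the matching: it shares no set with previously matched elements
        chosen |= elem_to_sets[e]     # take every set containing e
    return chosen
```

In hypergraph terms, elements are hyperedges of rank $f$ over the sets as vertices, and the loop greedily maintains a maximal matching of those hyperedges.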
|
Domain Adaptation (DA) techniques are important for overcoming the domain
shift between the source domain used for training and the target domain where
testing takes place. However, current DA methods assume that the entire target
domain is available during adaptation, which may not hold in practice. This
paper considers a more realistic scenario, where target data become available
in smaller batches and adaptation on the entire target domain is not feasible.
In our work, we introduce a new, data-constrained DA paradigm where unlabeled
target samples are received in batches and adaptation is performed continually.
We propose a novel source-free method for continual unsupervised domain
adaptation that utilizes a buffer for selective replay of previously seen
samples. In our continual DA framework, we selectively mix samples from
incoming batches with data stored in a buffer using buffer management
strategies and use the combination to incrementally update our model. We
compare the classification performance of our continual DA approach against
state-of-the-art DA methods that use the entire target domain. Our results on
three popular DA datasets demonstrate that our method outperforms many existing
state-of-the-art DA methods with access to the entire target domain during
adaptation.
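One standard buffer-management strategy compatible with this setting is reservoir sampling, which keeps a uniform random subsample of the whole target stream seen so far. The abstract does not specify which strategies the method actually uses, so the sketch below is illustrative only; `mix_batch` mimics combining an incoming batch with replayed samples.

```python
import random

class ReplayBuffer:
    """Reservoir-sampling buffer: a uniform sample of all items seen so far."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            # keep item with probability capacity / seen
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = item

    def mix_batch(self, batch, k):
        """Combine an incoming batch with k replayed samples from the buffer."""
        return list(batch) + self.rng.sample(self.data, min(k, len(self.data)))
```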
|
We report photometric estimates of effective temperature, $T_{\rm eff}$,
metallicity, [Fe/H], carbonicity, [C/Fe], and absolute carbon abundances,
$A{\rm (C)}$, for over 700,000 stars from the Southern Photometric Local
Universe Survey (S-PLUS) Data Release 2, covering a substantial fraction of the
equatorial Sloan Digital Sky Survey Stripe 82. We present an analysis for two
stellar populations: 1) halo main-sequence turnoff stars and 2) K-dwarf stars
of mass $0.58 < M/M_{\odot} < 0.75$ in the Solar Neighborhood. Application of
the Stellar Photometric Index Network Explorer (SPHINX) to the mixed-bandwidth
(narrow- plus wide-band) filter photometry from S-PLUS produces robust
estimates of the metallicities and carbon abundances in stellar atmospheres
over a wide range of temperature, $4250 < T_{\rm eff} \textrm{(K)} < 7000$. The
use of multiple narrow-band S-PLUS filters enables SPHINX to achieve
substantially lower levels of "catastrophic failures" (large offsets in
metallicity estimates relative to spectroscopic determinations) than previous
efforts using a single metallicity-sensitive narrow-band filter. We constrain
the exponential slope of the Milky Way's K-dwarf halo metallicity distribution
function (MDF), $\lambda_{10, \textrm{[Fe/H]}} = 0.85 \pm 0.21$, over the
metallicity range $-2.5 < \textrm{[Fe/H]} < -1.0$; the MDF of our local-volume
K-dwarf sample is well-represented by a gamma distribution with parameters
$\alpha=2.8$ and $\beta=4.2$. S-PLUS photometry obtains absolute carbon
abundances with a precision of $\sim 0.35$ dex for stars with $T_{\rm eff} <
6500$ K. We identify 364 candidate carbon-enhanced metal-poor stars, obtain
assignments of these stars into the Yoon-Beers morphological groups in the
$A$(C)-[Fe/H] space, and derive the CEMP frequencies.
|
We present results for higher-order corrections to exclusive
$\mathrm{J}/\psi$ production. This includes the first relativistic correction
of order $v^2$ in quark velocity, and next-to-leading order corrections in
$\alpha_s$ for longitudinally polarized production. The relativistic
corrections are found to be important for a good description of the HERA data,
especially at small values of the photon virtuality. The next-to-leading order
results for longitudinal production are evaluated numerically. We also
demonstrate how the vector meson production provides complementary information
to the structure functions for extracting the initial condition for the
small-$x$ evolution of the dipole-proton scattering amplitude.
|
We study the ensemble average of the thermal expectation value of an energy
momentum tensor in the presence of a random external metric. In a holographic
setup this quantity can be read off from the near-boundary behavior of the metric
in a stochastic theory of gravity. By numerically solving the associated
Einstein equations and mapping the result to the dual boundary theory, we find
that the non-relativistic energy power spectrum exhibits a power-law behavior,
as expected from Kolmogorov theory.
|
This study demonstrates that web-search traffic information, in particular,
Google Trends data, is a credible novel source of high-quality and
easy-to-access data for analyzing technology-based new ventures (TBNVs) growth
trajectories. Utilizing the diverse sample of 241 US-based TBNVs, we
comparatively analyze the relationship between companies' evolution curves
represented by search activity on the one hand and by valuations achieved
through rounds of venture investments on the other. The results suggest that
TBNV's growth dynamics are positively and strongly correlated with its web
search traffic across the sample. This correlation is more robust when a
company is a) more successful (in terms of valuation achieved) - especially if
it is a "unicorn"; b) consumer-oriented (i.e., b2c); and c) develops products
in the form of a digital platform. Further analysis based on fuzzy-set
Qualitative Comparative Analysis (fsQCA) shows that for the most successful
companies ("unicorns") and consumer-oriented digital platforms (i.e., b2c
digital platform companies) the proposed approach may be highly reliable, while
for other high-growth TBNVs it is useful for analyzing their growth dynamics,
albeit to a more limited degree. The proposed methodological approach opens a
wide range of possibilities for analyzing, researching and predicting the
growth of recently formed growth-oriented companies, in practice and academia.
|
We prove that $\mathbb{F}_p$ sketch, a well-celebrated streaming algorithm
for frequency moments estimation, is differentially private as is when $p\in(0,
1]$. $\mathbb{F}_p$ sketch uses only polylogarithmic space, exponentially
better than existing DP baselines, and is worse than the optimal non-private
baseline by only a logarithmic factor. The evaluation shows that $\mathbb{F}_p$
sketch can achieve reasonable accuracy with strong privacy guarantees.
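For intuition, the $p = 1$ member of this family is the classical Cauchy (1-stable) sketch of Indyk: project the frequency vector onto random Cauchy directions and take the median of absolute values. The sketch below illustrates only the estimator; the paper's point, that the sketch is differentially private as is, concerns the released sketch values and is not reproduced here.

```python
import numpy as np

def f1_sketch(x, k=1001, seed=0):
    """Cauchy (1-stable) sketch: median(|Cx|) estimates the L1 norm of x,
    since the median of |Cauchy| is 1."""
    rng = np.random.default_rng(seed)
    C = rng.standard_cauchy((k, len(x)))   # k random 1-stable projections
    return np.median(np.abs(C @ x))
```

In a streaming setting one maintains only the $k$ running inner products, updating them per increment; the full matrix is shown here only for clarity.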
|
We study the formation of magnon-polaron excitations and the consequences of
different time scales between the magnon and lattice dynamics. The spin-spin
interactions along the 1D lattice are ruled by a Heisenberg Hamiltonian in the
anisotropic form XXZ, in which each spin exhibits a vibrational degree of
freedom around its equilibrium position. By considering a magnetoelastic
coupling as a linear function of the relative displacement between
nearest-neighbor spins, results provide an original framework for achieving a
hybridized state of magnon-polaron. Such a state is characterized by high
cooperation between the underlying excitations, where the traveling or
stationary formation of magnon-polaron depends on the effective magnetoelastic
coupling. A systematic investigation reveals the critical magnon-lattice
interaction strength ($\chi_c$) necessary for the emergence of the stationary
magnon-polaron quasi-particle. Different characteristic time scales of the
magnon and vibrational dynamics unveil the threshold between the two regimes,
as well as a limiting value of the critical magnetoelastic interaction, above
which the magnon velocity no longer affects the critical magnetoelastic
coupling capable of inducing the stationary regime.
|
We prove the existence of Bayesian Nash Equilibrium (BNE) of general-sum
Bayesian games with continuous types and finite actions under the conditions
that the utility functions and the prior type distributions are continuous
with respect to the players' types. Moreover, there exists a sequence of discretized
Bayesian games whose BNE strategies converge weakly to a BNE strategy of the
infinite Bayesian game. Our proof establishes a connection between the
equilibria of the infinite Bayesian game and those of finite approximations,
which leads to an algorithm to construct $\varepsilon$-BNE of infinite Bayesian
games by discretizing players' type spaces.
|
Applications in materials and biological imaging are limited by the ability
to collect high-resolution data over large areas in practical amounts of time.
One solution to this problem is to collect low-resolution data and interpolate
to produce a high-resolution image. However, most existing super-resolution
algorithms are designed for natural images, often require aligned pairing of
high and low-resolution training data, and may not directly incorporate a model
of the imaging sensor.
In this paper, we present a Multi-resolution Data Fusion (MDF) algorithm for
accurate interpolation of low-resolution data at multiple resolutions up to 8x.
Our approach uses small quantities of unpaired high-resolution data to train a
neural-network denoiser that serves as a prior model and then uses the Multi-Agent Consensus
Equilibrium (MACE) problem formulation to balance this denoiser with a forward
model agent that promotes fidelity to measured data.
A key theoretical novelty is the analysis of mismatched back-projectors,
which modify typical forward model updates for computational efficiency or
improved image quality. We use MACE to prove that using a mismatched
back-projector is equivalent to using a standard back-projector and an
appropriately modified prior model.
We present electron microscopy results at 4x and 8x interpolation factors
that exhibit reduced artifacts relative to existing methods while maintaining
fidelity to acquired data and accurately resolving sub-pixel-scale features.
|
We study episodic reinforcement learning under unknown adversarial
corruptions in both the rewards and the transition probabilities of the
underlying system. We propose new algorithms which, compared to the existing
results in (Lykouris et al., 2020), achieve strictly better regret bounds in
terms of total corruptions for the tabular setting. To be specific, firstly,
our regret bounds depend on more precise numerical values of total rewards
corruptions and transition corruptions, instead of only on the total number of
corrupted episodes. Secondly, our regret bounds are the first of their kind in
the reinforcement learning setting to have the number of corruptions show up
additively with respect to $\min\{\sqrt{T}, \text{PolicyGapComplexity}\}$
rather than multiplicatively. Our results follow from a general algorithmic
framework that combines corruption-robust policy elimination meta-algorithms,
and plug-in reward-free exploration sub-algorithms. Replacing the
meta-algorithm or sub-algorithm may extend the framework to address other
corrupted settings with potentially more structure.
|
In this work we study the asymmetric heat flow, i.e., thermal rectification,
of a one-dimensional, mass-graded system consisting of a coupled harmonic
oscillator lattice (ballistic spacer) and two diffusive leads attached to the
boundaries of the former with both nearest-neighbor and next-nearest-neighbor
(NNN) interactions. The latter enhance the rectification properties of the
system and, especially, its independence of system size. The system presents a
maximum rectification efficiency for a very precise value of the parameter that
controls the coupling strength of the NNN interactions, which depends on the
temperature range wherein the device operates. The origin of this maximum value
is the asymmetric local heat flow response corresponding to the NNN
contribution at both sides of the lighter mass-loaded diffusive lead as
quantified by the spectral properties. Upon variation of the system's
parameters the performance of the device is always enhanced in the presence of
NNN interactions.
|
Women are underrepresented in Computer Science disciplines at all levels,
from undergraduate and graduate studies to participation and leadership in
academia and industry. Increasing female representation in the field is a grand
challenge for academics, policymakers, and society. Although the problem has
been addressed for many years, progress has been difficult to measure
compared across countries and institutions, and has been invariably slow,
despite all the momentum and impulse for change taking place across several
countries. Therefore, it is important to reflect on knowledge, experiences,
successes, and challenges of existing policies, initiatives and interventions.
The main goal of this paper is to provide an overview of several initiatives,
studies, projects, and their outcomes. It contributes to building a body of
knowledge about gender aspects in several areas: research, education, projects,
networks and resources. This paper is mainly based on discussions in working
groups and the material collected for and during a series of talks on the topic
held by the first author, and on feedback received from the community. This paper
provides the academic community, policymakers, industry and other stakeholders
with numerous examples of best practices, as well as studies and
recommendations on how to address key challenges about attracting, retaining,
encouraging, and inspiring women to pursue a career in Computer Science. Future
work should address the issue in a systematic and research based way.
|
Two dimensional (2D) ferromagnetic materials have attracted much attention in
the fields of condensed matter physics and materials science, but their
synthesis is still a challenge given their limitations on structural stability
and susceptibility to oxidization. MAX phases nanolaminated ternary carbides or
nitrides possess a unique crystal structure in which single-atom-thick A
sublayers are interleaved by two dimensional MX slabs, providing nanostructured
templates for designing 2D ferromagnetic materials if the non-magnetic A
sublayers can be substituted by magnetic elements. Here, we report
three new ternary magnetic MAX phases (Ta2FeC, Ti2FeN and Nb2FeC) with A
sublayers of single-atom-thick 2D iron through an isomorphous replacement
reaction of MAX precursors (Ta2AlC, Ti2AlN and Nb2AlC) with a Lewis acid salt
(FeCl2). All these MAX phases exhibit ferromagnetic (FM) behavior. The Curie
temperatures (Tc) of the Ta2FeC and Nb2FeC MAX phases are 281 K and 291 K,
respectively, i.e. close to room temperature. The saturation magnetization of
these ternary magnetic MAX phases is almost two orders of magnitude higher than
that of the V2(Sn,Fe)C MAX phase, whose A site is partially substituted by Fe.
Theoretical calculations on magnetic orderings of spin moments of Fe atoms in
these nanolaminated magnetic MAX phases reveal that the magnetism can be mainly
ascribed to the intralayer exchange interaction of the 2D Fe atomic layers. Owing
to the richness in composition of MAX phases, there is a large compositional
space for constructing functional single-atom-thick 2D layers in materials
using these nanolaminated templates.
|
Oxygen vacancies have been identified to play an important role in
accelerating grain growth in polycrystalline perovskite-oxide ceramics. In
order to advance the fundamental understanding of growth mechanisms at the
atomic scale, classical atomistic simulations were carried out to investigate
the atomistic structures and oxygen vacancy formation energies at grain
boundaries in the prototypical perovskite-oxide material SrTiO$_3$. In this
work, we focus on two symmetric tilt grain boundaries, namely
$\Sigma$5(310)[001] and $\Sigma$5(210)[001]. A one-dimensional continuum model
is adapted to determine the electrostatic potential induced by charged lattice
planes in atomistic structure models containing grain boundaries and point
defects. By means of this model, electrostatic artifacts, which are inherent to
supercell models with periodic or open boundary conditions, can be taken into
account and corrected properly. We report calculated formation energies of
oxygen vacancies on all the oxygen sites across boundaries between two
misoriented grains, and we analyze and discuss the formation-energy values with
respect to local charge densities at the vacant sites.
|
Domain adaptation has been widely explored by transferring the knowledge from
a label-rich source domain to a related but unlabeled target domain. Most
existing domain adaptation algorithms attend to adapting feature
representations across two domains with the guidance of a shared
source-supervised classifier. However, such a classifier limits the
generalization ability towards unlabeled target recognition. To remedy this, we
propose a Transferable Semantic Augmentation (TSA) approach to enhance the
classifier adaptation ability through implicitly generating source features
towards target semantics. Specifically, TSA is inspired by the fact that deep
feature transformation towards a certain direction can be represented as
meaningful semantic altering in the original input space. Thus, source features
can be augmented to effectively equip with target semantics to train a more
transferable classifier. To achieve this, for each class, we first use the
inter-domain feature mean difference and target intra-class feature covariance
to construct a multivariate normal distribution. Then we augment source
features with random directions sampled from the distribution class-wise.
Interestingly, such source augmentation is implicitly implemented through an
expected transferable cross-entropy loss over the augmented source
distribution, where an upper bound of the expected loss is derived and
minimized, introducing negligible computational overhead. As a light-weight and
general technique, TSA can be easily plugged into various domain adaptation
methods, bringing remarkable improvements. Comprehensive experiments on
cross-domain benchmarks validate the efficacy of TSA.
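The class-wise augmentation distribution can be made concrete with a small sketch. Here the sampling is explicit, whereas TSA implements it implicitly through the expected-loss upper bound; `feats_s` and `feats_t` are hypothetical per-class feature matrices, and `lam` plays the role of the augmentation strength.

```python
import numpy as np

def augment_source(feats_s, feats_t, lam=0.5, seed=0):
    """Augment one class's source features toward target semantics
    (explicit sampling version of the TSA distribution)."""
    rng = np.random.default_rng(seed)
    # inter-domain mean difference and target intra-class covariance
    delta = feats_t.mean(axis=0) - feats_s.mean(axis=0)
    cov = np.cov(feats_t, rowvar=False)
    # random semantic directions from N(lam * delta, lam * cov)
    eps = rng.multivariate_normal(lam * delta, lam * cov, size=len(feats_s))
    return feats_s + eps
```

On average the augmented features shift by `lam * delta` toward the target class mean, while the covariance term injects target-like intra-class variation.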
|
The statistical characterization of the distribution of visible matter in the
universe is a central problem in modern cosmology. In this respect, a crucial
question still lacking a definitive answer concerns how large are the greatest
structures in the universe. This point is closely related to whether or not
such a distribution can be approximated as being homogeneous on large enough
scales. Here we assess this problem by considering the size distribution of
superclusters of galaxies and by leveraging the properties of the
Zipf-Mandelbrot law, providing a novel approach which complements standard
analysis based on the correlation functions. We find that galaxy superclusters
are well described by a pure Zipf's law with no deviations and this implies
that all the catalogs currently available are not sufficiently large to spot a
truncation in the power-law behavior. This finding provides evidence that
structures larger than the greatest superclusters already observed are expected
to be found when deeper redshift surveys are completed. As a consequence, the
scale beyond which the galaxy distribution crosses over toward homogeneity, if
any, should increase accordingly.
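A rank-size fit of the kind used here can be sketched in a few lines: sort supercluster sizes in decreasing order and regress log-size on log-rank; a pure Zipf law corresponds to a slope of $-1$ (exponent $\approx 1$). This is a simplified illustration; the actual analysis uses the more general Zipf-Mandelbrot form.

```python
import numpy as np

def zipf_slope(sizes):
    """Rank-size exponent: fit log(size) = log(C) - slope * log(rank)."""
    s = np.sort(np.asarray(sizes, dtype=float))[::-1]   # decreasing order
    r = np.arange(1, len(s) + 1)
    coef = np.polyfit(np.log(r), np.log(s), 1)
    return -coef[0]                                     # pure Zipf -> 1
```

A truncation of the power law would show up as a systematic downward bend of the largest-rank points away from this straight-line fit.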
|
We introduce Ivy, a templated Deep Learning (DL) framework which abstracts
existing DL frameworks. Ivy unifies the core functions of these frameworks to
exhibit consistent call signatures, syntax and input-output behaviour. New
high-level framework-agnostic functions and classes, which are usable alongside
framework-specific code, can then be implemented as compositions of the unified
low-level Ivy functions. Ivy currently supports TensorFlow, PyTorch, MXNet, Jax
and NumPy. We also release four pure-Ivy libraries for mechanics, 3D vision,
robotics, and differentiable environments. Through our evaluations, we show
that Ivy can significantly reduce lines of code with a runtime overhead of less
than 1% in most cases. We welcome developers to join the Ivy community by
writing their own functions, layers and libraries in Ivy, maximizing their
audience and helping to accelerate DL research through inter-framework
codebases. More information can be found at https://ivy-dl.org.
|
Datasets are gaining relevance in surgical robotics, since they can be used to
recognise and automate tasks. Common datasets also make it possible to compare
different algorithms and methods. The objective of this
work is to provide a complete dataset of three common training surgical tasks
that surgeons perform to improve their skills. For this purpose, 12 subjects
teleoperated the da Vinci Research Kit to perform these tasks. The obtained
dataset includes all the kinematics and dynamics information provided by the da
Vinci robot (both master and slave side) together with the associated video
from the camera. All the information has been carefully timestamped and
provided in a readable csv format. A MATLAB interface integrated with ROS for
using and replicating the data is also provided.
|
Dirac-Frenkel variational method with Davydov D2 trial wavefunction is
extended by introducing a thermalization algorithm and applied to simulate
dynamics of a general open quantum system. The algorithm allows control of
temperature variations of a harmonic finite size bath, when in contact with the
quantum system. Thermalization of the bath vibrational modes is realised via
stochastic scatterings, implemented as a discrete-time Bernoulli process with
Poisson statistics. It controls bath temperature by steering vibrational modes'
evolution towards their canonical thermal equilibrium. Numerical analysis of
the exciton relaxation dynamics in a small molecular cluster reveals that
thermalization additionally provides a significant calculation speed-up due to
the reduced number of vibrational modes needed to reach convergence.
|
In this paper, we consider the non-symmetric positive semidefinite Procrustes
(NSPSDP) problem: Given two matrices $X,Y \in \mathbb{R}^{n,m}$, find the
matrix $A \in \mathbb{R}^{n,n}$ that minimizes the Frobenius norm of $AX-Y$ and
which is such that $A+A^T$ is positive semidefinite. We generalize the
semi-analytical approach for the symmetric positive semidefinite Procrustes
problem, where $A$ is required to be positive semidefinite, that was proposed
by Gillis and Sharma (A semi-analytical approach for the positive semidefinite
Procrustes problem, Linear Algebra Appl. 540, 112-137, 2018). As for the
symmetric case, we first show that the NSPSDP problem can be reduced to a
smaller NSPSDP problem that always has a unique solution and where the matrix
$X$ is diagonal and has full rank. Then, an efficient semi-analytical algorithm
to solve the NSPSDP problem is proposed, solving the smaller and well-posed
problem with a fast gradient method which guarantees a linear rate of
convergence. This algorithm is also applicable to solve the complex NSPSDP
problem, where $X,Y \in \mathbb{C}^{n,m}$, as we show that the complex NSPSDP
problem can be written as an overparametrized real NSPSDP problem. The
efficiency of the proposed algorithm is illustrated on several numerical
examples.
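The constraint structure admits a simple first-order sketch: since $A + A^T \succeq 0$ restricts only the symmetric part of $A$, projection amounts to eigenvalue clipping of $S = (A + A^T)/2$ while the skew-symmetric part passes through unchanged. The code below is a plain projected-gradient illustration of this projection, not the paper's semi-analytical fast-gradient algorithm.

```python
import numpy as np

def nspsdp_pg(X, Y, iters=2000):
    """Projected gradient for min ||AX - Y||_F^2  s.t.  A + A^T PSD."""
    n = X.shape[0]
    A = np.zeros((n, n))
    L = 2.0 * np.linalg.norm(X @ X.T, 2)        # gradient Lipschitz constant
    for _ in range(iters):
        A -= (2.0 * (A @ X - Y) @ X.T) / L      # gradient step
        # project: clip eigenvalues of the symmetric part, keep the skew part
        S, K = (A + A.T) / 2.0, (A - A.T) / 2.0
        w, V = np.linalg.eigh(S)
        A = (V * np.maximum(w, 0.0)) @ V.T + K
    return A
```

With step size $1/L$ the objective decreases monotonically from the feasible starting point $A = 0$, so the returned iterate is feasible and no worse than the trivial solution.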
|
Random Forests (RF) are at the cutting edge of supervised machine learning in
terms of prediction performance, especially in genomics. Iterative Random
Forests (iRF) use a tree ensemble from iteratively modified RF to obtain
predictive and stable non-linear high-order Boolean interactions of features.
They have shown great promise for high-order biological interaction discovery
that is central to advancing functional genomics and precision medicine.
However, theoretical studies into how tree-based methods discover high-order
feature interactions are missing. In this paper, to enable such theoretical
studies, we first introduce a novel discontinuous nonlinear regression model,
called Locally Spiky Sparse (LSS) model, which is inspired by the thresholding
behavior in many biological processes. Specifically, the LSS model assumes that the
regression function is a linear combination of piece-wise constant Boolean
interaction terms. We define a quantity called depth-weighted prevalence (DWP)
for a set of signed features S and a given RF tree ensemble. We prove that,
with high probability under the LSS model, DWP of S attains a universal upper
bound that does not involve any model coefficients, if and only if S
corresponds to a union of Boolean interactions in the LSS model. As a
consequence, we show that RF yields consistent interaction discovery under the
LSS model. Simulation results show that DWP can recover the interactions under
the LSS model even when some assumptions such as the uniformity assumption are
violated.
|
In this article, we study the forward dynamical behavior of nonautonomous
lattice systems. We first construct a family of sets
$\{\mathcal{A}_\varepsilon(\sigma)\}_{\sigma\in \Sigma}$ in an arbitrarily small neighborhood of a global attractor of the skew-product flow generated by a
general nonautonomous lattice system, which is forward invariant and uniformly
forward attracts any bounded subset of the phase space. Moreover, under some
suitable conditions, we further construct a family of sets
$\{\mathcal{B}_\varepsilon(\sigma)\}_{\sigma\in \Sigma}$ such that it uniformly
forward exponentially attracts bounded subsets of the phase space. As an
application, we study the discrete Gray-Scott model in detail and illustrate
how to apply our abstract results to concrete lattice systems.
|
We study dynamics of multi-soliton solutions of anti-self-dual Yang-Mills
equations for G=GL(2,C) in four-dimensional spaces. The one-soliton solution
can be interpreted as a codimension-one soliton in four-dimensional spaces
because the principal peak of action density localizes on a three-dimensional
hyperplane. We call it the soliton wall. We prove that in the asymptotic
region, the n-soliton solution possesses n isolated localized lumps of action
density, and interpret it as n intersecting soliton walls. More precisely, each
action density lump is essentially the same as a soliton wall because it
preserves its shape and "velocity" except for a position shift of principal
peak in the scattering process. The position shift results from the nonlinear
interactions of the multi-solitons and is called the phase shift. We calculate
the phase shift factors explicitly and find that the action densities can be
real-valued in three kinds of signature. Finally, we show that the gauge group
can be G=U(2) in the Ultrahyperbolic space (the split signature (+, +, -, -)).
This implies that the intersecting soliton walls could be realized in all regions in N=2 string theories. It is remarkable that quasideterminants
dramatically simplify the calculations and proofs.
|
During the initial years of its inception, the Internet was widely used for
transferring data packets between users and respective data sources by using IP
addresses. With the advancements in technology, the Internet has been used to
share data within several small and resource-constrained devices connected in
billions to create the framework for the so-called Internet of Things (IoT).
These systems are known for generating a large quantum of data within these devices. On the flip side, these devices are known to
impose huge overheads on the IoT network. Therefore, it was essential to
develop solutions concerning different network-related problems as a part of
IoT networking. In this paper, we review the challenges that emerge in routing, congestion, energy conservation, scalability, heterogeneity, reliability, security, and quality of service (QoS). Addressing them can be leveraged to use the available network optimally. As part of this research work, a detailed survey of the network optimization process within IoT is conducted, as presented in related research. Owing to the advances in wireless networking, relevant Internet-of-Things (IoT) devices are equipped with several elements, including multiple network access interfaces. The adoption of multipath TCP (MPTCP) technology can improve the total throughput of data transmission. On the other hand, leveraging traditional MPTCP path management algorithms leads to other problems in data transport, including buffer blockage, which can severely reduce transmission performance across the entire IoT network. To this end, we develop a novel multipath algorithm that efficiently manages data transport in an intelligently scheduled and seamless manner using multiple wireless/wireline paths.
|
Decision-making under uncertainty is hugely important for any decisions
sensitive to perturbations in observed data. One method of incorporating
uncertainty into making optimal decisions is through robust optimization, which
minimizes the worst-case scenario over some uncertainty set. We connect
conformal prediction regions to robust optimization, providing finite sample
valid and conservative ellipsoidal uncertainty sets, aptly named conformal
uncertainty sets. In pursuit of this connection we explicitly define
Mahalanobis distance as a potential conformity score in full conformal
prediction. We also compare the coverage and optimization performance of
conformal uncertainty sets, specifically generated with Mahalanobis distance,
to traditional ellipsoidal uncertainty sets on a collection of simulated robust
optimization examples.
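To make the construction concrete, here is a minimal sketch of an ellipsoidal uncertainty set built from Mahalanobis-distance conformity scores. It uses a split-conformal simplification with an assumed 1 - alpha quantile rule for illustration; the paper itself works with full conformal prediction.

```python
import numpy as np

def conformal_ellipsoid(X_cal, alpha=0.1):
    """Split-conformal ellipsoidal uncertainty set from Mahalanobis-distance
    conformity scores. Returns (center mu, precision matrix P, squared radius
    r2), defining the set {x : (x - mu)^T P (x - mu) <= r2}."""
    n = len(X_cal)
    mu = X_cal.mean(axis=0)
    P = np.linalg.inv(np.cov(X_cal, rowvar=False))
    d = X_cal - mu
    scores = np.einsum("ij,jk,ik->i", d, P, d)  # squared Mahalanobis distances
    # finite-sample-valid conformal quantile of the calibration scores
    k = int(np.ceil((n + 1) * (1 - alpha)))
    r2 = np.sort(scores)[min(k, n) - 1]
    return mu, P, r2
```

By construction, at least a 1 - alpha fraction of calibration points falls inside the returned ellipsoid, which can then serve as the uncertainty set of a robust optimization problem.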
|
Autoencoding models have been extensively studied in recent years. They
provide an efficient framework for sample generation, as well as for analysing
feature learning. Furthermore, they are efficient in performing interpolations
between data-points in semantically meaningful ways. In this paper, we build
further on a previously introduced method for generating canonical, dimension
independent, stochastic interpolations. Here, the distribution of interpolation
paths is represented as the distribution of a bridge process constructed from
an artificial random data generating process in the latent space, having the
prior distribution as its invariant distribution. As a result the stochastic
interpolation paths tend to reside in regions of the latent space where the
prior has high mass. This is a desirable feature since, generally, such areas
produce semantically meaningful samples. In this paper, we extend the bridge
process method by introducing a discriminator network that accurately
identifies areas of high latent representation density. The discriminator
network is incorporated as a change of measure of the underlying bridge process
and sampling of interpolation paths is implemented using sequential Monte
Carlo. The resulting sampling procedure allows for greater variability in
interpolation paths and stronger drift towards areas of high data density.
|
Strange stars (SSs) are compact objects made of deconfined quarks. It is hard
to distinguish SSs from neutron stars as a thin crust composed of normal
hadronic matter may exist and obscure the whole surface of the SS. Here we
suggest that the intriguing repeating fast radio bursts (FRBs) are produced by
the intermittent fractional collapses of the crust of an SS induced by
refilling of accretion materials from its low-mass companion. The
periodic/sporadic/clustered temporal behaviors of FRBs could be well understood
in our scenario. Especially, the periodicity is attributed to the modulation of
accretion rate through the disk instabilities. To account for a $\sim 16$-day
periodicity of the repeating FRB source 180916.J0158+65, a Shakura-Sunyaev disk
with a viscosity parameter of $\alpha \simeq 0.004$ and an accretion rate of
$\simeq 3 \times 10^{16}$~g~s$^{-1}$ in the low state is invoked. Our scenario,
if favored by future observations, will serve as indirect evidence for the
strange quark matter hypothesis.
|
This paper is concerned with the sharp interface limit for the Allen-Cahn
equation with a nonlinear Robin boundary condition in a bounded smooth domain
$\Omega\subset\mathbb{R}^2$. We assume that a diffuse interface already has
developed and that it is in contact with the boundary $\partial\Omega$. The
boundary condition is designed in such a way that the limit problem is given by
the mean curvature flow with constant $\alpha$-contact angle. For $\alpha$
close to $90^\circ$ we prove a local-in-time convergence result for
well-prepared initial data for times when a smooth solution to the limit
problem exists. Based on the latter we construct a suitable curvilinear
coordinate system and carry out a rigorous asymptotic expansion for the
Allen-Cahn equation with the nonlinear Robin boundary condition. Moreover, we
show a spectral estimate for the corresponding linearized Allen-Cahn operator
and with its aid we derive strong norm estimates for the difference of the
exact and approximate solutions using a Gronwall-type argument.
|
These notes concern aspects of various graphs whose vertex set is a group $G$
and whose edges reflect group structure in some way (so that they are invariant
under the action of the automorphism group of $G$). The graphs I will discuss
are the power graph, enhanced power graph, deep commuting graph, commuting
graph, and non-generating graph, though I give a briefer discussion of the
nilpotence and solvability graphs, and make some remarks on more general
graphs. Aspects to be discussed include induced subgraphs, forbidden subgraphs,
connectedness, and automorphism groups. We can also ask about the graphs formed
by the edges in one graph but not in an earlier graph in the hierarchy. I have
included some results on intersection graphs of subgroups of various types,
which are often in a dual relation to one of the other graphs considered.
Another actor is the Gruenberg--Kegel graph, or prime graph, of a group: this
very small graph influences various graphs defined on the group. I say little
about Cayley graphs, since (except in special cases) these are not invariant
under the automorphism group of $G$.
The graphs all have the property that they contain \emph{twins}, pairs of
vertices with the same neighbours (save possibly one another). Being equal or
twins is an equivalence relation, and the automorphism group of the graph has a
normal subgroup inducing the symmetric group on each twin class. For some
purposes, we can merge twin vertices and get a smaller graph. Continuing until
no further twins occur, the result is independent of the reduction, and is the
$1$-vertex graph if and only if the original graph is a \emph{cograph}. So I
devote a section to cographs and twin reduction, and another to consequences
for automorphism groups.
There are briefer discussions of related matters.
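As a small concrete example of one graph in this hierarchy, the following sketch computes the power graph of the cyclic group $\mathbb{Z}_n$ written additively (the helper names are illustrative; the computation is not taken from the notes themselves):

```python
from itertools import combinations

def power_graph_edges(n):
    """Edge set of the undirected power graph of the cyclic group Z_n
    (written additively): x ~ y iff one lies in the cyclic subgroup
    generated by the other."""
    def powers(x):
        seen, cur = set(), 0
        while True:
            cur = (cur + x) % n
            if cur in seen:
                return seen       # the cyclic subgroup generated by x
            seen.add(cur)

    pw = {x: powers(x) for x in range(n)}
    return {frozenset((x, y)) for x, y in combinations(range(n), 2)
            if y in pw[x] or x in pw[y]}
```

For prime $n$ every non-identity element generates the whole group, so the power graph is complete, while for composite $n$ some pairs (e.g. 2 and 3 in $\mathbb{Z}_6$) are non-adjacent.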
|
A common approach to the automatic detection of mispronunciation in language
learning is to recognize the phonemes produced by a student and compare them to
the expected pronunciation of a native speaker. This approach makes two
simplifying assumptions: a) phonemes can be recognized from speech with high
accuracy, b) there is a single correct way for a sentence to be pronounced.
These assumptions do not always hold, which can result in a significant amount
of false mispronunciation alarms. We propose a novel approach to overcome this
problem based on two principles: a) taking into account uncertainty in the
automatic phoneme recognition step, b) accounting for the fact that there may
be multiple valid pronunciations. We evaluate the model on non-native (L2)
English speech of German, Italian and Polish speakers, where it is shown to
increase the precision of detecting mispronunciations by up to 18% (relative)
compared to the common approach.
|
We present a new multi-stream 3D mesh reconstruction network (MSMR-Net) for
hand pose estimation from a single RGB image. Our model consists of an image
encoder followed by a mesh-convolution decoder composed of connected graph
convolution layers. In contrast to previous models that form a single mesh
decoding path, our decoder network incorporates multiple cross-resolution
trajectories that are executed in parallel. Thus, global and local information is shared to form rich decoding representations at minor additional parameter cost compared to the single-trajectory network. We demonstrate the
effectiveness of our method in hand-hand and hand-object interaction scenarios
at various levels of interaction. To evaluate the former scenario, we propose a
method to generate RGB images of closely interacting hands. Moreover, we
suggest a metric to quantify the degree of interaction and show that close hand
interactions are particularly challenging. Experimental results show that the
MSMR-Net outperforms existing algorithms on the hand-object FreiHAND dataset as
well as on our own hand-hand dataset.
|
Topological string theory near the conifold point of a Calabi-Yau threefold
gives rise to factorially divergent power series which encode the all-genus
enumerative information. These series lead to infinite towers of singularities
in their Borel plane (also known as "peacock patterns"), and we conjecture that
the corresponding Stokes constants are integer invariants of the Calabi-Yau
threefold. We calculate these Stokes constants in some toric examples,
confirming our conjecture and providing in some cases explicit generating
functions for the new integer invariants, in the form of q-series. Our
calculations in the toric case rely on the TS/ST correspondence, which promotes
the asymptotic series near the conifold point to spectral traces of operators,
and makes it easier to identify the Stokes data. The resulting mathematical
structure turns out to be very similar to the one of complex Chern-Simons
theory. In particular, spectral traces correspond to state integral invariants
and factorize in holomorphic/anti-holomorphic blocks.
|
A fundamental aspect of racing is overtaking other race cars. Whereas
previous research on autonomous racing has mainly focused on lap-time optimization, here we propose a method to plan overtaking maneuvers in
autonomous racing. A Gaussian process is used to learn the behavior of the
leading vehicle. Based on the outputs of the Gaussian process, a stochastic
Model Predictive Control algorithm plans optimistic trajectories, such that the
controlled autonomous race car is able to overtake the leading vehicle. The
proposed method is tested in a simple simulation scenario.
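A minimal sketch of the Gaussian-process ingredient, predicting the leading vehicle's position from past observations with an RBF kernel (the kernel choice and hyperparameters are illustrative assumptions, not those of the paper):

```python
import numpy as np

def gp_predict(t_train, y_train, t_test, ell=1.0, sf=1.0, sn=0.1):
    """GP regression with an RBF kernel: posterior mean and variance of the
    leading vehicle's position at query times t_test."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    K = k(t_train, t_train) + sn**2 * np.eye(len(t_train))  # noisy kernel matrix
    Ks = k(t_test, t_train)                                  # cross-covariances
    mean = Ks @ np.linalg.solve(K, y_train)                  # posterior mean
    var = sf**2 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var                                         # posterior variance
```

The growing posterior variance away from the observations is exactly what a stochastic MPC planner can exploit to keep a safety margin around the predicted trajectory.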
|
In this position paper, we argue for the need to investigate if and how
gender stereotypes manifest in search and recommender systems. As a starting
point, we particularly focus on how these systems may propagate and reinforce
gender stereotypes through their results in learning environments, a context
where teachers and children in their formative stage regularly interact with
these systems. We provide motivating examples supporting our concerns and
outline an agenda to support future research addressing the phenomena.
|
The Tibet ASgamma experiment just reported their measurement of sub-PeV
diffuse gamma ray emission from the Galactic disk, with the highest energy up
to 957 TeV. These gamma-rays are most likely of hadronic origin, produced by cosmic-ray interaction with interstellar gas in the Galaxy. This measurement provides
direct evidence to the hypothesis that the Galactic cosmic rays can be
accelerated beyond PeV energies. In this work, we try to explain the sub-PeV
diffuse gamma-ray spectrum within a cosmic-ray diffusive propagation model. We
find there is a tension between the sub-PeV diffuse gamma rays and the local
cosmic ray spectrum. To describe the sub-PeV diffuse gamma-ray flux, it
generally requires larger local cosmic-ray flux than measurement in the knee
region. We further calculate the PeV neutrino flux from the cosmic ray
propagation model. Even if all of these sub-PeV diffuse gamma rays originate from the propagation, the Galactic neutrinos only account for less than ~15% of the observed flux, most of which is still from extragalactic sources.
|
Recently, the end-to-end training approach for neural beamformer-supported
multi-channel ASR has shown its effectiveness in multi-channel speech
recognition. However, the integration of multiple modules makes it more
difficult to perform end-to-end training, particularly given that the
multi-channel speech corpus recorded in real environments with a sizeable data
scale is relatively limited. This paper explores the usage of single-channel
data to improve the multi-channel end-to-end speech recognition system.
Specifically, we design three schemes to exploit the single-channel data,
namely pre-training, data scheduling, and data simulation. Extensive
experiments on CHiME4 and AISHELL-4 datasets demonstrate that all three methods
improve the multi-channel end-to-end training stability and speech recognition
performance, while the data scheduling approach keeps a much simpler pipeline
(vs. pre-training) and less computation cost (vs. data simulation). Moreover,
we give a thorough analysis of our systems, including how the performance is
affected by the choice of front-end, the data augmentation, training strategy,
and single-channel data size.
|
We present a web-based software tool, the Virtual Quantum Optics Laboratory
(VQOL), that may be used for designing and executing realistic simulations of
quantum optics experiments. A graphical user interface allows one to rapidly
build and configure a variety of different optical experiments, while the
runtime environment provides unique capabilities for visualization and
analysis. All standard linear optical components are available as well as
sources of thermal, coherent, and entangled Gaussian states. A unique aspect of
VQOL is the introduction of non-Gaussian measurements using detectors modeled
as deterministic devices that "click" when the amplitude of the light falls
above a given threshold. We describe the underlying theoretical models and
provide several illustrative examples. We find that VQOL provides a faithful
representation of many experimental quantum optics phenomena and may serve as
both a useful instructional tool for students as well as a valuable research
tool for practitioners.
|
Low-frequency time-dependent noise is one of the main obstacles on the road
towards a fully scalable quantum computer. The majority of solid-state qubit
platforms, from superconducting circuits to spins in semiconductors, are
greatly affected by $1/f$ noise. Among the different control techniques used to
counteract noise effects on the system, dynamical decoupling sequences are one
of the most effective. However, most dynamical decoupling sequences require
unbounded and instantaneous pulses, which are unphysical and can only implement
identity operations. Among methods that do restrict to bounded control fields,
there remains a need for protocols that implement arbitrary gates with
lab-ready control fields. In this work, we introduce a protocol to design
bounded and continuous control fields that implement arbitrary single-axis
rotations while shielding the system from low-frequency time-dependent noise
perpendicular to the control axis. We show the versatility of our method by
presenting a set of non-negative-only control pulses that are immediately
applicable to quantum systems with constrained control, such as singlet-triplet
spin qubits. Finally, we demonstrate the robustness of our control pulses
against classical $1/f$ noise and noise modeled with a random quantum bath,
showing that our pulses can even outperform ideal dynamical decoupling
sequences.
|
In probabilistic nonadaptive group testing (PGT), we aim to characterize the
number of pooled tests necessary to identify a random $k$-sparse vector of
defectives with high probability. Recent work has shown that $n$ tests are
necessary when $k =\omega(n/\log n)$. It is also known that $O(k \log n)$ tests
are necessary and sufficient in other regimes. This leaves open the important
sparsity regime where the probability of a defective item is $\sim 1/\log n$
(or $k = \Theta(n/\log n)$) where the number of tests required is linear in
$n$. In this work we aim to exactly characterize the number of tests in this
sparsity regime. In particular, we seek to determine the number of defectives
$\lambda(\alpha)n / \log n$ that can be identified if the number of tests is
$\alpha n$. In the process, we give upper and lower bounds on the exact point at which individual testing becomes suboptimal and a carefully constructed pooled test design becomes beneficial.
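For intuition, a standard nonadaptive pooled design with COMP decoding can be simulated as follows. This is a textbook baseline, not the carefully constructed design of the paper, and the Bernoulli test density used below is an assumed parameter.

```python
import numpy as np

def comp_decode(tests, outcomes):
    """COMP decoding for nonadaptive group testing: any item appearing in a
    negative pool is declared non-defective; all remaining items are declared
    defective. COMP never misses a true defective (no false negatives)."""
    n = tests.shape[1]
    candidate = np.ones(n, dtype=bool)
    for pool, positive in zip(tests.astype(bool), outcomes):
        if not positive:          # negative pool: everyone in it is clean
            candidate &= ~pool
    return candidate
```

Running this with a random Bernoulli test matrix and OR-type outcomes shows the basic trade-off between the number of tests and the number of false positives.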
|
Intermediate-mass black holes (IMBHs) by definition have masses of $M_{\rm
IMBH} \sim 10^{2-5}~M_\odot$, a range with few observational constraints.
Finding IMBHs in globular star clusters (GCs) would validate a formation
channel for massive black-hole seeds in the early universe. Here, we simulate a
60-hour observation with the next-generation Very Large Array (ngVLA) of 728 GC
candidates in the Virgo Cluster galaxy NGC\,4472. Interpreting the radio
detection thresholds as signatures of accretion onto IMBHs, we benchmark IMBH
mass thresholds in three scenarios and find the following: (1) Radio analogs of
ESO\,243-49 HLX-1, a strong IMBH candidate with $M_{\rm IMBH}^{\rm HLX} \sim
10^{4-5}~M_\odot$ in a star cluster, are easy to access in all 728 GC
candidates. (2) For the 30 GC candidates with extant X-ray detections, the
empirical fundamental-plane relation involving black hole mass plus X-ray and
radio luminosities suggests access to $M_{\rm IMBH}^{\rm FP} \sim
10^{1.7-3.6}~M_\odot$, with an uncertainty of 0.44 dex. (3) A fiducial Bondi
accretion model was applied to all 728 GC candidates and to radio stacks of GC
candidates. This model suggests access to IMBH masses, with a statistical
uncertainty of 0.39 dex, of $M_{\rm IMBH}^{\rm B} \sim 10^{4.9-5.1}~M_\odot$
for individual GC candidates and $M_{\rm IMBH}^{\rm B,stack} \sim
10^{4.5}~M_\odot$ for radio stacks of about 100-200 GC candidates. The fiducial
Bondi model offers initial guidance, but is subject to additional systematic
uncertainties and should be superseded by hydrodynamical simulations of gas
flows in GCs.
|
Research in sociology and linguistics shows that people use language not only
to express their own identity but to understand the identity of others. Recent
work established a connection between expression of identity and emoji usage on
social media, through use of emoji skin tone modifiers. Motivated by that
finding, this work asks if, as with language, readers are sensitive to such
acts of self-expression and use them to understand the identity of authors. In
behavioral experiments (n=488), where text and emoji content of social media
posts were carefully controlled before being presented to participants, we find
in the affirmative -- emoji are a salient signal of author identity. That
signal is distinct from, and complementary to, the one encoded in language.
Participant groups (based on self-identified ethnicity) showed no differences
in how they perceive this signal, except in the case of the default yellow
emoji. While both groups associate this with a White identity, the effect was
stronger in White participants. Our finding that emoji can index social
variables will have experimental applications for researchers but also
implications for designers: supposedly ``neutral'' defaults may be more
representative of some users than others.
|
The current state of the art for analytical and computational modelling of
deformation in nonlinear electroelastic and magnetoelastic membranes is
reviewed. A general framework and a list of methods to model large deformation
and associated instabilities (wrinkling, limit point, global bifurcation) due
to coupled electromechanical or magnetomechanical loading is presented.
|
Cyber-physical systems are becoming the core of most modern systems, encompassing control, data sharing, and real-time monitoring. While centralized control techniques have been implemented in the past, recent innovations in distributed control schemes make them attractive for various reasons. One of them is the use of state-of-the-art communication protocols that make the system more robust toward extreme conditions and ensure observability. Thus, as an application of cyber-physical systems, distributed control architectures are prone to various cyber-vulnerabilities, which makes cybersecurity research critical in this application domain. This paper reviews recent research on distributed control architectures, their cyber-vulnerabilities, and reported mitigation schemes. Finally, some research needs are addressed.
|
We present a spatially resolved analysis of ionized gas at the nuclear region
of the nearby galaxy NGC 1068. While NGC 1068 has been known to have gas
outflows driven by its active galactic nucleus (AGN), more complex kinematical
signatures were recently reported, which were inconsistent with a rotation or
simple biconical outflows. To account for the nature of gas kinematics, we
performed a spatially resolved kinematical study, finding a morphologically
symmetric pair of approaching and receding gas blobs in the northeast region.
The midpoint of the two blobs is located at a distance of 180 pc from the
nucleus in the projected plane. The ionized gas at the midpoint shows zero
velocity and high velocity dispersion, which are characteristics of an
outflow-launching position, as the two sides of a bicone, i.e., approaching and
receding outflows are superposed on the line of sight, leading to no velocity
shift but high velocity dispersion. We investigate the potential scenario of an
additional AGN based on a multiwavelength data set. While there are other
possibilities, i.e., X-ray binary or supernova shock, the results from optical
spectropolarimetry analysis are consistent with the presence of an additional
AGN, which likely originates from a minor merger.
|
We show that the category of pastures has arbitrary limits and colimits of
diagrams indexed by a small category.
|
The Explorer-Director game, first introduced by Nedev and Muthukrishnan, can
be described as a game where two players -- Explorer and Director -- determine
the movement of a token on the vertices of a graph. At each time step, the
Explorer specifies a distance that the token must move, hoping to maximize the number of vertices ultimately visited, and the Director adversarially chooses where to move the token in an effort to minimize this number. Given a graph and a
starting vertex, the number of vertices that are visited under optimal play is
denoted by $f_d(G,v)$.
In this paper, we first reduce the study of $f_d (G,v)$ to the determination
of the minimal sets of vertices that are \textit{closed} in a certain
combinatorial sense, thus providing a structural understanding of each player's
optimal strategies. As an application, we address the problem on lattices and
trees. In the case of trees, we also provide a complete solution even in the
more restrictive setting where the strategy used by the Explorer is not allowed
to depend on their opponent's responses. In addition to this paper, a
supplementary companion note will be posted to arXiv providing additional
results about the game in a variety of specific graph families.
|
Binary black holes may form near a supermassive black hole. The background
black hole (BH) will affect the gravitational wave (GW) generated by the binary
black hole. It is well known that the Penrose process may provide extra energy
due to the ergosphere. In the present paper we investigate the energy
amplification of the gravitational wave by a Kerr black hole background. In
particular and different from the earlier studies, we compare the energies of
the waves in the cases with and without a nearby Kerr BH. We find that only
when the binary black hole is moving relative to the Kerr background can the GW
energy be amplified. Otherwise, the energy will be suppressed by the background
Kerr black hole. This finding is consistent with the inequality found by Wald
for Penrose process. Taking into account realistic astrophysical scenarios, we
find that the Kerr black hole background can amplify the GW energy by at most 5
times.
|
We study local biholomorphisms with finite orbits in some neighborhood of the
origin since they are intimately related to holomorphic foliations with closed
leaves. We describe the structure of the set of periodic points in dimension 2.
As a consequence we show that given a local biholomorphism $F$, in dimension 2
with finite orbits, there exists an analytic curve passing through the origin
and contained in the fixed point set of some non-trivial iterate of $F$. As an
application we obtain that at least one eigenvalue of the linear part of $F$ at
the origin is a root of unity. Moreover, we show that such a result is sharp by
exhibiting examples of local biholomorphisms, with finite orbits, such that
exactly one of the eigenvalues is a root of unity. These examples are subtle
since we show they cannot be embedded in one-parameter groups.
|
We present a joint model for entity-level relation extraction from documents.
In contrast to other approaches - which focus on local intra-sentence mention
pairs and thus require annotations on mention level - our model operates on
entity level. To do so, a multi-task approach is followed that builds upon
coreference resolution and gathers relevant signals via multi-instance learning
with multi-level representations combining global entity and local mention
information. We achieve state-of-the-art relation extraction results on the
DocRED dataset and report the first entity-level end-to-end relation extraction
results for future reference. Finally, our experimental results suggest that a
joint approach is on par with task-specific learning, though more efficient due
to shared parameters and training steps.
|
Descriptive and empirical sciences, such as History, are the sciences that
collect, observe and describe phenomena in order to explain them and draw
interpretative conclusions about influences, driving forces and impacts under
given circumstances. Spreadsheet software and relational database management
systems are still the dominant tools for quantitative analysis and overall data
management in these sciences, allowing researchers to directly analyse
the gathered data and perform scholarly interpretation. However, this current
practice has a set of limitations, including the high dependency of the
collected data on the initial research hypothesis, which renders it usually useless for other research, the lack of representation of the details from which the registered
relations are inferred, and the difficulty to revisit the original data sources
for verification, corrections or improvements. To cope with these problems, in
this paper we present FAST CAT, a collaborative system for assistive data entry
and curation in Digital Humanities and similar forms of empirical research. We
describe the related challenges, the overall methodology we follow for
supporting semantic interoperability, and discuss the use of FAST CAT in the
context of a European (ERC) project of Maritime History, called SeaLiT, which
examines economic, social and demographic impacts of the introduction of
steamboats in the Mediterranean area between the 1850s and the 1920s.
|
The $L$-space conjecture asserts the equivalence, for prime 3-manifolds, of
three properties: not being an L-space, having a left-orderable fundamental
group, and admitting a co-oriented taut foliation. We investigate these
properties for toroidal $3$-manifolds using various notions of slope detection.
This leads to a proof that toroidal $3$-manifolds with small order first
homology have left-orderable fundamental groups and, under certain fibring
conditions, admit co-oriented taut foliations. It also allows us to show that
cyclic branched covers of prime satellite knots are not $L$-spaces, have
left-orderable fundamental groups and, when they have fibred companion knots,
admit co-oriented taut foliations. A partial extension to prime toroidal links
leads to a proof that prime quasi-alternating links are either hyperbolic or
$(2, m)$-torus links. Our main technical result gives sufficient conditions for
certain slopes on the boundaries of rational homology solid tori to be detected
by left-orders, foliations, and Heegaard Floer homology.
|
We examine possible environmental sources of the enhanced star formation and
active galactic nucleus (AGN) activity in the $z = 3.09$ SSA22 protocluster
using Hubble WFC3 F160W ($\sim1.6\ \rm \mu m$) observations of the SSA22 field,
including new observations centered on eight X-ray selected protocluster AGN.
To investigate the role of mergers in the observed AGN and star formation
enhancement, we apply both quantitative (S\'ersic-fit and Gini-$M_{20}$) and
visual morphological classifications to F160W images of protocluster Lyman
break galaxies (LBGs) in the fields of the X-ray AGN and $z \sim 3$ field LBGs
in SSA22 and GOODS-N. We find no statistically significant differences between
the morphologies and merger fractions of protocluster and field LBGs, though we
are limited by small number statistics in the protocluster. We also fit the
UV-to-near-IR spectral energy distributions (SED) of F160W-detected
protocluster and field LBGs to characterize their stellar masses and star
formation histories (SFH). We find that the mean protocluster LBG is a factor
of $\sim2$ more massive and more attenuated than the mean $z \sim 3$ field LBG.
We take our results to suggest that ongoing mergers are not more
common among protocluster LBGs than field LBGs, though protocluster LBGs appear
to be more massive. We speculate that the larger mass of the protocluster LBGs
contributes to the enhancement of SMBH mass and accretion rate in the
protocluster, which in turn drives the observed protocluster AGN enhancement.
|
We report on a theoretical study on the rise of radiation-induced
magnetoresistance oscillations in two-dimensional systems of massive Dirac
fermions. We study the bilayer system of monolayer graphene and hexagonal boron
nitride (h-BN/graphene) and the trilayer system of hexagonal boron nitride
encapsulated graphene (h-BN/graphene/h-BN). We extend the radiation-driven
electron orbit model that was previously devised to study the same oscillations
in two-dimensional systems of Schr\"odinger electrons (GaAs/AlGaAs
heterostructures) to the case of massive Dirac fermions. In the simulations we
obtain clear oscillations for radiation frequencies in the terahertz and
far-infrared bands, in contrast with two-dimensional Schr\"odinger electrons,
which are mainly sensitive to microwave frequencies. We also investigate the
power and temperature dependence. For the former we obtain results similar to
those for Schr\"odinger electrons and predict the rise of zero-resistance
states. For the latter we obtain a qualitatively similar but quantitatively
different dependence with increasing temperature. While in
GaAs the oscillations are wiped out in a few degrees, interestingly enough, for
massive Dirac fermions, we obtain observable oscillations for temperatures
above $100$ K and even at room temperature for the higher frequencies used in
the simulations.
|
We propose a metrological strategy reaching Heisenberg scaling precision in
the estimation of functions of any number $l$ of arbitrary parameters encoded
in a generic $M$-channel linear network. This scheme is experimentally feasible
since it only employs a single-mode squeezed vacuum and homodyne detection on a
single output channel. Two auxiliary linear networks are required, and their
role is twofold: to refocus the signal into a single channel after the
interaction with the interferometer, and to fix the function of the parameters
to be estimated according to the linear network analysed. Although the
refocusing requires some knowledge of the parameters, we show that the required
precision of the prior measurement is at the shot-noise level, and thus
achievable with a classical measurement. We conclude by discussing two
paradigmatic schemes in which the choice of the auxiliary stages allows one to
change the function of the unknown parameters to be estimated.
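For context, "Heisenberg scaling" refers to the standard quantum-metrology scalings of the estimation error with the mean photon number $N$ of the probe; these are textbook limits, not formulas specific to this work:

```latex
\Delta\theta_{\rm SN} \sim \frac{1}{\sqrt{N}} \quad \text{(shot-noise limit)},
\qquad
\Delta\theta_{\rm H} \sim \frac{1}{N} \quad \text{(Heisenberg scaling)}
```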
|
The family of edge-sharing tri-coordinated iridates and ruthenates has
emerged in recent years as a major platform for Kitaev spin liquid physics,
where spins fractionalize into emergent magnetic fluxes and Majorana fermions
with Dirac-like dispersions. While such exotic states are usually pre-empted by
long-range magnetic order at low temperatures, signatures of Majorana fermions
with long coherence times have been predicted to manifest at intermediate and
higher energy scales, similar to the observation of spinons in quasi-1D spin
chains. Here we present a Resonant Inelastic X-ray Scattering study of the
magnetic excitations of the hyperhoneycomb iridate $\beta$-Li$_2$IrO$_3$ under
a magnetic field with a record-high-resolution spectrometer. At low
temperatures, dispersing spin waves can be resolved around the predicted
intertwined incommensurate spiral and field-induced zigzag orders, whose
excitation energy reaches a maximum of 16 meV. A 2 T magnetic field softens the
dispersion around ${\bf Q}=0$. The behavior of the spin waves under a magnetic
field is consistent with our semiclassical calculations for the ground state
and the dynamical spin structure factor, which further predict that the ensuing
intertwined uniform states remain robust up to very high fields (100 T). Most
saliently, the low-energy magnon-like mode is superimposed on a broad continuum
of excitations, centered around 35 meV and extending up to 100 meV. This
high-energy continuum survives up to at least 300 K -- well above the ordering
temperature of 38 K -- and gives evidence for pairs of long-lived Majorana
fermions of the proximate Kitaev spin liquid.
|
Receiver sensitivity is a particularly important metric in optical
communication links operating at low signal-to-noise ratios (SNRs), for example
in deep-space communication, since it directly limits the maximum achievable
reach and data rate. Pulse-position modulation (PPM) with direct-detection
photon-counting receivers is the most power-efficient solution known; however,
the sensitivity gain comes at the expense of reduced spectral efficiency. We
show that quadrature phase-shift keying (QPSK) modulation with a
phase-sensitive ultralow noise pre-amplified coherent receiver outperforms
other well-known power-efficient multi-dimensional coherent modulation formats,
while simultaneously having higher spectral efficiency. It also results in
better sensitivity than PPM for orders up to 64 with ideal direct detection
using photon-counting receivers. This is because the bit-error-rate
characteristics favor the QPSK format when forward error correction with a
large overhead is considered.
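As a minimal illustration of the steep BER characteristic mentioned here, the textbook AWGN relation for Gray-coded coherent QPSK (equal to BPSK per bit) is BER = Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0)). This is the generic formula, not the paper's phase-sensitively amplified receiver model:

```python
# Hedged sketch: textbook per-bit error rate of ideal Gray-coded QPSK on an
# AWGN channel. The steepness of this curve around the FEC threshold is what
# favors QPSK when a large-overhead code is used.
from math import erfc, sqrt

def qpsk_ber(ebn0_linear):
    """Bit-error rate of Gray-coded QPSK over AWGN (Eb/N0 in linear units)."""
    return 0.5 * erfc(sqrt(ebn0_linear))

# Example: Eb/N0 = 4 (about 6 dB)
print(qpsk_ber(4.0))  # ≈ 2.34e-3
```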
|
The ability to model the evolution of compact binaries from the inspiral to
coalescence is central to gravitational wave astronomy. Current waveform
catalogues are built from vacuum binary black hole models, by evolving
Einstein's equations numerically and complementing them with knowledge from
slow-motion
expansions. Much less is known about the coalescence process in the presence of
matter, or in theories other than General Relativity. Here, we explore the
Close Limit Approximation as a powerful tool to understand the coalescence
process in general setups. In particular, we study the head-on collision of two
equal-mass, compact but horizonless objects. Our results show the appearance of
"echoes" and indicate that a significant fraction of the merger energy goes
into these late-time repetitions. We also apply the Close Limit Approximation
to investigate the effect of colliding black holes on surrounding scalar
fields. Notably, our results indicate that observables obtained through
perturbation theory may be extended to a significant segment of the merger
phase, where in principle only a numerical approach is appropriate.
|
We study entanglement-assisted quantum error-correcting codes (EAQECCs)
arising from classical one-point algebraic geometry codes from the Hermitian
curve with respect to the Hermitian inner product. Their only unknown parameter
is $c$, the number of required maximally entangled quantum states, since the
Hermitian dual of an AG code is not known in general. In this article, we
present an
efficient algorithmic approach for computing $c$ for this family of EAQECCs. As
a result, this algorithm allows us to provide EAQECCs with excellent parameters
over any field size.
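For orientation, a standard result in the EAQECC literature (Wilde and Brun) expresses the entanglement parameter as c = rank(H H†) over F_{q^2}, where H is a parity-check matrix of the classical code and † denotes conjugate (q-th power) transpose. The sketch below illustrates that generic formula for q = 2, i.e. GF(4), with elements encoded as integers 0..3 (bit i = coefficient of x^i, arithmetic modulo x^2 + x + 1); it is not the paper's algorithm for Hermitian AG codes:

```python
# Hedged sketch: c = rank(H * conj(H)^T) over GF(4) for a matrix H over GF(4).
def gf4_mul(a, b):
    """Carry-less multiply, then reduce modulo x^2 + x + 1."""
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    if p & 4:            # reduce x^2 -> x + 1
        p ^= 0b111
    return p

def gf4_conj(a):
    """Frobenius conjugate a -> a^2 (swaps x and x + 1)."""
    return gf4_mul(a, a)

def gf4_rank(rows):
    """Rank over GF(4) by Gaussian elimination (field addition is XOR)."""
    inv = {1: 1, 2: 3, 3: 2}
    rows = [list(r) for r in rows]
    rank, col, ncols = 0, 0, len(rows[0])
    while rank < len(rows) and col < ncols:
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        s = inv[rows[rank][col]]
        rows[rank] = [gf4_mul(s, v) for v in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [v ^ gf4_mul(f, w)
                           for v, w in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

def entanglement_c(H):
    """c = rank of the Hermitian Gram matrix H * conj(H)^T."""
    gram = [[0] * len(H) for _ in H]
    for i, hi in enumerate(H):
        for j, hj in enumerate(H):
            s = 0
            for a, b in zip(hi, hj):
                s ^= gf4_mul(a, gf4_conj(b))
            gram[i][j] = s
    return gf4_rank(gram)

# A Hermitian self-orthogonal row needs no pre-shared entanglement:
print(entanglement_c([[1, 1, 1, 1]]))                # -> 0
# A generic matrix generally does:
print(entanglement_c([[1, 0, 1, 2], [0, 1, 2, 3]]))  # -> 2
```

The dual-containing case (c = 0) recovers the usual stabilizer construction; any positive c is the number of ebits the EAQECC must pre-share.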
|
Coherence and entanglement are fundamental concepts in resource theory. The
coherence (entanglement) of assistance is the coherence (entanglement) that can
be extracted assisted by another party with local measurement and classical
communication. We introduce and study the general coherence of assistance.
First, in terms of real symmetric concave functions on the probability simplex,
the coherence of assistance and the entanglement of assistance are shown to be
in one-to-one correspondence. We then introduce two classes of quantum states:
the assisted maximally coherent states and the assisted maximally entangled
states. They can be transformed into maximally coherent or entangled pure
states with the help of another party using local measurement and classical
communication. We give necessary conditions for states to be assisted maximally
coherent or assisted maximally entangled. Based on these, a unified framework
between coherence and entanglement, including coherence (entanglement)
measures, coherence (entanglement) of assistance, and coherence (entanglement)
resources, is proposed. Then we show that the coherence of assistance as well
as the entanglement
of assistance are strictly larger than the coherence of convex roof and
entanglement of convex roof for all full rank density matrices. So all full
rank quantum states are distillable in the assisted coherence distillation.
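For reference, the coherence of assistance discussed above is conventionally defined as the maximal average coherence over pure-state decompositions of $\rho$ (standard definition; the notation is ours, not necessarily the paper's):

```latex
C_a(\rho) \;=\; \max_{\{p_i,\,|\psi_i\rangle\}} \sum_i p_i\, C(|\psi_i\rangle),
\qquad \rho = \sum_i p_i\, |\psi_i\rangle\langle\psi_i|
```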
|
The benzene...ethene and parallel-displaced (PD) benzene...benzene dimers are
the most fundamental systems involving $\pi$-$\pi$ stacking interactions.
Several
high-level ab initio investigations calculated the binding energies of these
dimers at the CCSD(T)/CBS level of theory using various approaches such as
reduced virtual orbital spaces and/or MP2-based basis set corrections. Here we
obtain CCSDT(Q) binding energies using a Weizmann-3-type approach. In
particular, we extrapolate the SCF, CCSD, and (T) components using large
heavy-atom augmented Gaussian basis sets (namely, SCF/jul-cc-pV{5,6}Z,
CCSD/jul-cc-pV{Q,5}Z, and (T)/jul-cc-pV{T,Q}Z). We consider post-CCSD(T)
contributions up to CCSDT(Q), inner-shell, scalar-relativistic, and
Born-Oppenheimer corrections. Overall, our best relativistic, all-electron
CCSDT(Q) binding energies are $\Delta E_{e,\rm all,rel}$ = 1.234
(benzene...ethene) and 2.550 (benzene...benzene PD), $\Delta H_0$ = 0.949
(benzene...ethene) and 2.310 (benzene...benzene PD), and $\Delta H_{298}$ =
0.130 (benzene...ethene) and 1.461 (benzene...benzene PD) kcal/mol. Important
conclusions are reached regarding the basis set convergence of the SCF, CCSD,
(T), and post-CCSD(T) components. Explicitly correlated calculations are used
as a sanity check on the conventional binding energies. Overall, post-CCSD(T)
contributions are destabilizing by 0.028 (benzene...ethene) and 0.058
(benzene...benzene) kcal/mol; thus they cannot be neglected if 0.1 kcal/mol
accuracy is sought.
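Correlation components such as CCSD and (T) are commonly extrapolated to the complete-basis-set (CBS) limit with a two-point inverse-cubic formula; the abstract does not state which formula is used in this Weizmann-3-type scheme, so the sketch below is illustrative only, with hypothetical energy values:

```python
# Hedged sketch: two-point L^-3 CBS extrapolation,
#   E_CBS = (L^3 E_L - (L-1)^3 E_{L-1}) / (L^3 - (L-1)^3),
# where L is the larger cardinal number (e.g. (T)/jul-cc-pV{T,Q}Z has L = 4).
def cbs_extrapolate(e_small, e_large, l_large):
    """Estimate the CBS limit from energies at cardinals l_large-1, l_large."""
    l3, m3 = l_large ** 3, (l_large - 1) ** 3
    return (l3 * e_large - m3 * e_small) / (l3 - m3)

# Hypothetical (T) components in kcal/mol for a pV{T,Q}Z pair:
e_tz, e_qz = -0.90, -0.95
print(round(cbs_extrapolate(e_tz, e_qz, 4), 4))  # -> -0.9865
```

Note that the extrapolated value overshoots the largest-basis result, as expected for a monotonically converging correlation component; SCF energies are typically extrapolated with a different (often exponential) form.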
|
We obtain the superfluid weight and Berezinskii-Kosterlitz-Thouless (BKT)
transition temperature for highly unconventional superconducting states with
the coexistence of chiral d-wave superconductivity, charge density waves and
pair density waves in strained graphene. Our results show that the
strain-induced flat bands can enhance the superconducting transition
temperature by approximately $50\%$ compared with that of the original doped
graphene, which suggests that flat-band superconductivity is a potential route
to superconductivity with higher critical temperatures. In particular, we
obtain the superfluid weight for the pure superconducting pair-density-wave
states, from which the deduced superconducting transition temperature is shown
to be much lower than the gap-opening temperature of the pair density wave;
this is helpful for understanding the pseudogap state in high-$T_c$ cuprate
superconductors. Finally, we show that the BKT transition temperature versus
doping for strained graphene exhibits a dome-like shape and depends linearly on
the spin-spin interaction strength.
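The BKT transition temperature is conventionally extracted from the superfluid weight $D_s$ via the universal Nelson-Kosterlitz criterion (a standard relation in this context, not a result of this work):

```latex
k_B T_{\rm BKT} \;=\; \frac{\pi}{8}\, D_s\!\left(T_{\rm BKT}\right)
```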
|
We consider an electron in a localized potential subjected to a weak external,
time-dependent field. In the linear response regime, the response
function can be computed using Kubo's formula. In this paper, we consider the
numerical approximation of the response function by means of a truncation to a
finite region of space. This is necessarily a singular approximation because of
the discreteness of the spectrum of the truncated Hamiltonian, and in practice
a regularization (smoothing) has to be used. Our results provide error
estimates for the response function past the ionization threshold with respect
to both the smoothing parameter and the size of the computational domain.
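Schematically, the regularization referred to above replaces the Kubo response by an $\eta$-smoothed spectral sum over eigenstates of the (truncated) Hamiltonian; this is the standard linear-response form, written in our notation rather than the paper's:

```latex
\chi_\eta(\omega) = \sum_{n} |\langle n|V|0\rangle|^2
 \left( \frac{1}{\omega - (E_n - E_0) + i\eta}
      - \frac{1}{\omega + (E_n - E_0) + i\eta} \right),
\qquad
\chi(\omega) = \lim_{\eta \to 0^+} \chi_\eta(\omega)
```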
|
We investigate the formal semantics of a simple imperative language that has
both classical and quantum constructs. More specifically, we provide an
operational semantics, a denotational semantics and two Hoare-style proof
systems: an abstract one and a concrete one. The two proof systems are
satisfaction-based, as inspired by the program logics of Barthe et al. for
probabilistic programs. The abstract proof system turns out to be sound and
relatively complete, while the concrete one is only sound.
|
We propose a Standard Model extension by a $U(1)_{R}$ gauge symmetry where
only right-handed chiral fermions can carry a non-trivial charge. Here we show
that the simplest anomaly-free solution to accommodate the proton charge radius
discrepancy involves right-handed muons $\mu_R$ and first-generation quarks,
$u_R$ and $d_R$. Consistency with the latest muon $(g-2)$ measurements is
achieved through an extra light scalar, which itself must lie in the
tens-of-MeV mass
range to be viable.
|
The spectral gaps of the Neumann and Dirichlet Laplacians are each known to
have a sharp positive lower bound among convex domains of a given diameter.
Between these cases, for each positive value of the Robin parameter an
analogous sharp lower bound on the spectral gap is conjectured. In this paper
we show that the extension of this conjecture to negative Robin parameters
fails completely, by proving that the spectral gap of an explicit family of
domains can be exponentially small.
|
We explore some new off-shell and on-shell conserved quantities for a scalar
field in Minkowski space, using an integrability condition. The off-shell
conserved tensors are related to the kinematics of the field, while a linear
combination of the off-shell and on-shell conserved tensors yields the
energy-momentum tensor for the scalar field. In a curved background, using the
Ricci and Bianchi identities, Brans-Dicke-type field equations emerge, without
requiring the principle of equivalence. Further, starting from the curvature
scalar and using these identities, the field equations for modified gravity
(Einstein-Hilbert action in the presence of higher-order terms) follow.
|
We report the results of a Monte Carlo global QCD analysis of unpolarized
parton distribution functions (PDFs), including for the first time constraints
from ratios of $^3$He to $^3$H structure functions recently obtained by the
MARATHON experiment at Jefferson Lab. Our simultaneous analysis of nucleon PDFs
and nuclear effects in $A=2$ and $A=3$ nuclei reveals the first indication of
an isovector nuclear EMC effect in light nuclei. We find that while the
MARATHON data yield relatively weak constraints on the $F_2^n/F_2^p$ neutron to
proton structure function ratio and on the $d/u$ PDF ratio, they suggest an
enhanced nuclear effect on the $d$-quark PDF in the bound proton, questioning
the assumptions commonly made in nuclear PDF analyses.
|