The single electron double quantum dot architecture is a versatile qubit
candidate, profiting from the advantages of both the spin qubit and the charge
qubit, while also offering ways to mitigate their drawbacks. By carefully
controlling the electrical parameters of the device, it can be made more
spin-like or more charge-like: the former yields long coherence times, while
the latter allows electrically driven spin rotations or coherent interaction
with a microwave photon. In this work, we
demonstrate that applying the GRAPE algorithm to design the control pulses
needed to alter the operating regime of this device while preserving the
logical state encoded within can yield higher fidelity transfers than can be
achieved using standard linear methods.
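As a rough illustration of the optimization at the heart of GRAPE (gradient ascent pulse engineering), the sketch below tunes the amplitudes of a piecewise-constant control pulse to maximize a state-transfer fidelity. The two-level Hamiltonian, pulse discretization, and finite-difference gradients are illustrative stand-ins, not the paper's double-dot model; GRAPE proper uses analytic gradients of the time-ordered propagator.

```python
# Toy GRAPE-style pulse optimization on a two-level system (illustrative only).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, Hc = 0.5 * sz, 0.5 * sx                 # drift/control Hamiltonians (made up)
n_steps, dt = 20, 0.1                       # piecewise-constant discretization
psi0 = np.array([1, 0], dtype=complex)      # initial logical state
psi_target = np.array([0, 1], dtype=complex)

def fidelity(u):
    """Propagate psi0 through the pulse u and score overlap with the target."""
    psi = psi0
    for uk in u:
        psi = expm(-1j * dt * (H0 + uk * Hc)) @ psi
    return abs(np.vdot(psi_target, psi)) ** 2

u = 0.5 * np.random.default_rng(0).standard_normal(n_steps)
eps, lr = 1e-6, 2.0
for _ in range(200):                        # gradient ascent on pulse amplitudes
    grad = np.array([(fidelity(u + eps * np.eye(n_steps)[k]) - fidelity(u)) / eps
                     for k in range(n_steps)])
    u += lr * grad

print(f"transfer fidelity after optimization: {fidelity(u):.3f}")
```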
|
A nuclear excitation following the capture of an electron in an empty orbital
has recently been observed for the first time. So far, the cross section of
the process has been evaluated largely under the assumption that the ion is in
its electronic ground state prior to the capture. We show that lifting this
restriction opens new capture channels, boosting the electron-capture
resonance strength by several orders of magnitude. The present study also
suggests the possibility of externally selecting the capture channels by means
of vortex electron beams.
|
Video summarization technologies aim to create a concise and complete
synopsis by selecting the most informative parts of the video content. Several
approaches have been developed over the last couple of decades and the current
state of the art is represented by methods that rely on modern deep neural
network architectures. This work focuses on the recent advances in the area and
provides a comprehensive survey of the existing deep-learning-based methods for
generic video summarization. After presenting the motivation behind the
development of technologies for video summarization, we formulate the video
summarization task and discuss the main characteristics of a typical
deep-learning-based analysis pipeline. Then, we suggest a taxonomy of the
existing algorithms and provide a systematic review of the relevant literature
that shows the evolution of the deep-learning-based video summarization
technologies and leads to suggestions for future developments. We then report
on protocols for the objective evaluation of video summarization algorithms and
we compare the performance of several deep-learning-based approaches. Based on
the outcomes of these comparisons, as well as some documented considerations
about the amount of annotated data and the suitability of evaluation protocols,
we indicate potential future research directions.
|
Quantum machine learning is one of the most promising applications of quantum
computing in the Noisy Intermediate-Scale Quantum (NISQ) era. Here we propose
a quantum convolutional neural network (QCNN) inspired by convolutional neural
networks (CNN), which greatly reduces the computing complexity compared with
its classical counterparts, with $O((\log_{2}M)^6)$ basic gates and $O(m^2+e)$
variational parameters, where $M$ is the input data size, $m$ is the filter
mask size, and $e$ is the number of parameters in a Hamiltonian. Our model is
robust to certain noise for image recognition tasks, and the parameters are
independent of the input size, making it friendly to near-term quantum
devices. We demonstrate QCNN with two explicit examples. First, QCNN is
applied to image processing, and numerical simulations of three types of
spatial filtering (image smoothing, sharpening, and edge detection) are
performed. Second, we demonstrate QCNN in image recognition, namely the
recognition of handwritten numbers. Compared with previous work, this machine
learning model can provide implementable quantum circuits that accurately
correspond to a specific classical convolutional kernel. It provides an
efficient avenue to transform a CNN into a QCNN directly and opens up the
prospect of exploiting quantum power to process information in the era of big
data.
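To make the quoted scalings concrete, the snippet below evaluates the $O((\log_{2}M)^6)$ gate count and $O(m^2+e)$ parameter count for a few example sizes; the constant prefactors are not given in the abstract, so these are order-of-magnitude illustrations only.

```python
import math

def qcnn_resources(M, m, e):
    """Scaling estimates from the abstract: ~(log2 M)^6 basic gates and
    m^2 + e variational parameters (unknown constants omitted)."""
    return math.log2(M) ** 6, m ** 2 + e

for M in (2 ** 10, 2 ** 20):                     # example input data sizes
    gates, params = qcnn_resources(M, m=3, e=4)  # 3x3 filter mask, toy e
    print(f"M = {M:>7}: ~{gates:.3g} gates (up to a constant), {params} parameters")
```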
|
We give a criterion for the existence of pseudo-horizontal surfaces in small
Seifert fibered manifolds. We calculate the genera of such surfaces and
detect their $\mathbb{Z}_2$-homology classes. Using such pseudo-horizontal
surfaces, we can determine the $\mathbb{Z}_2$-Thurston norm for every
$\mathbb{Z}_2$-homology class in small Seifert manifolds. We find several
families of examples in which, within the same $\mathbb{Z}_2$-homology class,
the genus of a pseudo-horizontal surface is less than the genus of the
pseudo-vertical surface; hence the pseudo-vertical surface is not
$\mathbb{Z}_2$-taut.
|
A plasma equilibrium that is uniform in space and oscillatory in time,
sustained by a time-dependent current density, is studied analytically and
numerically with particle-in-cell simulations. The dispersion relation is
derived
from the Vlasov equation for oscillating equilibrium distribution functions,
and used to demonstrate that the plasma has an infinite number of unstable
kinetic modes. This instability represents a new kinetic mechanism for the
decay of the initial mode of infinite wavelength (or equivalently null
wavenumber), for which no classical wave breaking or Landau damping exists. The
relativistic generalization of the instability is discussed. In this regime,
the growth rate of the fastest growing unstable modes scales with
$\gamma_T^{-1/2}$, where $\gamma_T$ is the largest Lorentz factor of the plasma
distribution. This result suggests that, in flows with large Lorentz factors,
this instability is not as severely suppressed as purely streaming
instabilities.
The relevance of this instability in inductive electric field oscillations
driven in pulsar magnetospheres is discussed.
|
End-to-end models with auto-regressive decoders have shown impressive results
for automatic speech recognition (ASR). These models formulate the
sequence-level probability as a product of the conditional probabilities of all
individual tokens given their histories. However, the performance of locally
normalised models can be sub-optimal because of factors such as exposure bias.
Consequently, the model distribution differs from the underlying data
distribution. In this paper, the residual energy-based model (R-EBM) is
proposed to complement the auto-regressive ASR model to close the gap between
the two distributions. Meanwhile, R-EBMs can also be regarded as
utterance-level confidence estimators, which may benefit many downstream tasks.
Experiments on a 100-hour LibriSpeech dataset show that R-EBMs can reduce the
word error rates (WERs) by 8.2%/6.7% while improving areas under
precision-recall curves of confidence scores by 12.6%/28.4% on
test-clean/test-other sets. Furthermore, on a state-of-the-art model trained
with self-supervised learning (wav2vec 2.0), R-EBMs still significantly
improve both WER and confidence estimation performance.
|
The DESERT Underwater framework (http://desert-underwater.dei.unipd.it/),
originally designed for simulating and testing underwater acoustic networks in
sea trials, has recently been extended to support real payload data
transmission through underwater multimodal networks. Specifically, the new
version of the framework is now able to transmit data in real time through the
EvoLogics S2C low-rate and high-rate acoustic modems, the SmartPORT low-cost
acoustic underwater modem prototype (AHOI) for IoT applications, as well as
Ethernet, surface WiFi, and the BlueComm optical modem. The system can also be
tested in the lab by employing a simulated channel, and the EvoLogics S2C DMAC
Emulator (DMACE).
|
The classical Poincar{\'e} conjecture that every homotopy 3-sphere is
diffeomorphic to the 3-sphere was proved by G. Perelman by solving Thurston's
program on geometrization of 3-manifolds. A new confirmation of this
conjecture is given by combining R. H. Bing's result on this conjecture with
the Smooth Unknotting Conjecture for an $S^2$-knot and the Smooth 4D
Poincar{\'e} Conjecture.
|
We describe a geometric and symmetry-based formulation of the equivalence
principle in non-relativistic physics. It applies at both the classical and
quantum levels and states that the Newtonian potential can be eliminated in
favor of a curved and time-dependent spatial metric. It is this requirement
that forces the gravitational mass to be equal to the inertial mass. We
identify the symmetry responsible for the equivalence principle as the remnant
of time-reparameterization symmetry of the relativistic theory. We also clarify
the transformation properties of the Schr{\"o}dinger wave function under arbitrary
changes of frame.
|
We study the energy-momentum relations of the Nambu-Goldstone modes in
quantum antiferromagnetic Heisenberg models on a hypercubic lattice. This work
is a sequel to the previous series about the models. We prove that the
Nambu-Goldstone modes show a linear dispersion relation for small momenta,
i.e., the low energies of the Nambu-Goldstone excitations are proportional to
the momentum of the spin wave above the infinite-volume ground states with
symmetry breaking. Our method relies on the upper bounds for the
susceptibilities which are derived from the reflection positivity of the
quantum Heisenberg antiferromagnets. The method is also applicable to other
systems when the corresponding upper bounds for the susceptibilities are
justified.
|
In this paper, we present first-order accurate numerical methods for the
solution of the heat equation with uncertain temperature-dependent thermal
conductivity.
Each algorithm yields a shared coefficient matrix for the ensemble set
improving computational efficiency. Both mixed and Robin-type boundary
conditions are treated. In contrast with alternative, related methodologies,
stability and convergence are unconditional. In particular, we prove
unconditional energy stability and optimal-order error estimates. A battery of
numerical tests is presented to illustrate both the theory and application of
these algorithms.
|
Sentiment analysis (SA) is an important research area in cognitive
computation; thus, in-depth studies of patterns of sentiment analysis are
necessary. At present, rich-resource data-based SA has been well developed,
while the more challenging and practical multi-source unsupervised SA (i.e. a
target domain SA by transferring from multiple source domains) is seldom
studied. The challenges behind this problem mainly lie in the lack of
supervision information, the semantic gaps among domains (i.e., domain
shifts), and the loss of knowledge. However, existing methods either lack the
capacity to distinguish the semantic gaps among domains or lose private
knowledge. To alleviate these problems, we propose a two-stage domain
adaptation framework. In the first stage, a multi-task methodology-based
shared-private architecture is employed to explicitly model the domain common
features and the domain-specific features for the labeled source domains. In
the second stage, two elaborate mechanisms are embedded in the shared private
architecture to transfer knowledge from multiple source domains. The first
mechanism is a selective domain adaptation (SDA) method, which transfers
knowledge from the closest source domain. And the second mechanism is a
target-oriented ensemble (TOE) method, in which knowledge is transferred
through a well-designed ensemble method. Extensive experimental evaluations
verify that the proposed framework outperforms unsupervised state-of-the-art
competitors. What can be concluded from the experiments is that transferring
from source domains with very different distributions may degrade
the target-domain performance, and it is crucial to choose the proper source
domains to transfer from.
|
Consider a bipartite quantum system consisting of two subsystems A and B. The
reduced density matrix of A is obtained by taking the partial trace with
respect to B. In this Letter we show that the Wigner distribution of this
reduced density matrix is obtained by integrating the total Wigner distribution
with respect to the phase space variables corresponding to the subsystem B. Our
proof makes use of the Weyl--Wigner--Moyal phase space formalism. Our result is
applied to general Gaussian mixed states, of which it gives a particularly simple
and precise description. We also briefly discuss purification from the Wigner
point of view.
|
Softening material models are known to trigger spurious localizations. This
may be shown theoretically by the existence of solutions with zero dissipation
when localization occurs, and numerically by spurious mesh dependency and
localization in a single layer of elements. We introduce in this paper a new
way to avoid spurious localization. The idea is to enforce a Lipschitz
regularity on the internal variables responsible for the material softening.
The regularity constraint introduces the needed length scale in the material
formulation. Moreover, we prove bounds on the domain affected by this
constraint. A first one-dimensional finite element implementation is proposed
for softening elasticity and softening plasticity.
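As a minimal sketch of what enforcing Lipschitz regularity does (a generic projection, not the authors' finite element implementation), the snippet below computes the smallest $L$-Lipschitz field lying above a given one-dimensional damage-like field with two sweeps; a single-node spike, the typical signature of spurious localization, gets spread over a band whose width is set by the slope bound, i.e., the length scale enters the formulation.

```python
import numpy as np

def lipschitz_upper(d, dx, L):
    """Smallest L-Lipschitz field above d on a uniform grid:
    u(x) = max_y ( d(y) - L * |x - y| ), via forward/backward sweeps."""
    u = d.copy()
    for i in range(1, u.size):                 # forward sweep
        u[i] = max(u[i], u[i - 1] - L * dx)
    for i in range(u.size - 2, -1, -1):        # backward sweep
        u[i] = max(u[i], u[i + 1] - L * dx)
    return u

d = np.zeros(101)
d[50] = 1.0                                    # spike localized in one node
u = lipschitz_upper(d, dx=0.01, L=10.0)
print((u > 0).sum())                           # spike spread over ~20 nodes
```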
|
Autonomous missions of small unmanned aerial vehicles (UAVs) are prone to
collisions owing to environmental disturbances and localization errors.
Consequently, a UAV that can endure collisions and perform recovery control in
critical aerial missions is desirable to prevent loss of the vehicle and/or
payload. We address this problem by proposing a novel foldable quadrotor system
which can sustain collisions and recover safely. The quadrotor is designed with
integrated mechanical compliance using a torsional spring such that the impact
time is increased and the net impact force on the main body is decreased. The
post-collision dynamics is analysed and a recovery controller is proposed which
stabilizes the system to a hovering location without additional collisions.
Flight test results on the proposed and a conventional quadrotor demonstrate
that for the former, integrated spring-damper characteristics reduce the
rebound velocity and lead to simple recovery control algorithms in the event of
unintended collisions as compared to a rigid quadrotor of the same dimension.
|
When an epidemic spreads through a population, it is often impractical or
impossible to monitor all of the subjects involved continuously. As an
alternative, algorithmic solutions can be used to infer the state of the whole
population from a limited number of measurements. We analyze the capability of deep
neural networks to solve this challenging task. Our proposed architecture is
based on Graph Convolutional Neural Networks. As such it can reason on the
effect of the underlying social network structure, which is recognized as the
main component in the spreading of an epidemic. We test the proposed
architecture on two scenarios modeled on the COVID-19 pandemic: a generic
homogeneous population, and a toy model of the Boston metropolitan area.
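For readers unfamiliar with the building block, a single graph-convolution layer in the style of Kipf and Welling fits in a few lines of numpy; this is a generic sketch with a made-up contact graph, not the architecture of the paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],                   # toy 4-person contact network
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = rng.standard_normal((4, 2))               # per-node features (e.g. symptoms)
W = rng.standard_normal((2, 3))               # learnable weights
print(gcn_layer(A, H, W).shape)               # (4, 3)
```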
|
We introduce a generalized concept of quantum teleportation in the framework
of quantum measurement and reversing operation. Our framework makes it possible
to find an optimal protocol for quantum teleportation enabling a faithful
transfer of unknown quantum states with maximum success probability up to the
fundamental limit of the no-cloning theorem. Moreover, an optimized protocol in
this generalized approach allows us to overcome noise in a quantum channel beyond
the reach of existing teleportation protocols without requiring extra qubit
resources. Our proposed framework is applicable to multipartite quantum
communications and primitive functionalities in scalable quantum architectures.
|
Optical cavities find diverse uses in lasers, frequency combs, optomechanics,
and optical signal processors. Complete reconfigurability of the cavities
enables development of generic field programmable cavities for achieving the
desired performance in all these applications. We propose and demonstrate a
simple and generic interferometer in a cavity structure that enables periodic
modification of the internal cavity loss and the cavity resonance to
reconfigure the Q-factor, transmission characteristics, and group delay of the
hybrid cavity, with simple engineering of the interferometer. We also
demonstrate methods to decouple the tuning of the loss from the
resonance-shift, for resonance-locked reconfigurability. Such devices can be
implemented in any guided-wave platform (on-chip or fiber-optic) with potential
applications in programmable photonics and reconfigurable optomechanics.
|
The Marathi language is one of the prominent languages used in India. It is
predominantly spoken by the people of Maharashtra. Over the past decade, the
usage of language on online platforms has tremendously increased. However,
research on Natural Language Processing (NLP) approaches for Marathi text has
not received much attention. Marathi is a morphologically rich language and
uses a variant of the Devanagari script in the written form. This work aims to
provide a comprehensive overview of available resources and models for Marathi
text classification. We evaluate CNN, LSTM, ULMFiT, and BERT based models on
two publicly available Marathi text classification datasets and present a
comparative analysis. The pre-trained Marathi fastText word embeddings by
Facebook and IndicNLP are used in conjunction with word-based models. We show
that basic single-layer models based on CNN and LSTM coupled with fastText
embeddings perform on par with the BERT-based models on the available datasets.
We hope our paper aids focused research and experiments in the area of Marathi
NLP.
|
For fixed $m$ and $R\subseteq \{0,1,\ldots,m-1\}$, take $A$ to be the set of
positive integers congruent modulo $m$ to one of the elements of $R$, and let
$p_A(n)$ be the number of ways to write $n$ as a sum of elements of $A$.
Nathanson proved that $\log p_A(n) \leq (1+o(1)) \pi \sqrt{2n|R|/3m}$ using a
variant of a remarkably simple method devised by Erd\H{o}s in order to bound
the partition function. In this short note we describe a simpler and shorter
proof of Nathanson's bound.
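The bound is easy to probe numerically: $p_A(n)$ can be computed exactly with the standard dynamic program for partitions into restricted parts and compared against $\pi \sqrt{2n|R|/3m}$. A small self-contained check (the parameter choices are arbitrary):

```python
import math

def p_A(n, m, R):
    """Number of partitions of n into parts congruent mod m to an element of R."""
    parts = [a for a in range(1, n + 1) if a % m in R]
    dp = [1] + [0] * n
    for part in parts:
        for s in range(part, n + 1):
            dp[s] += dp[s - part]
    return dp[n]

m, R, n = 3, {1, 2}, 200
bound = math.pi * math.sqrt(2 * n * len(R) / (3 * m))
print(f"log p_A(n) = {math.log(p_A(n, m, R)):.2f} <= {bound:.2f}")
```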
|
Conversational interfaces to Business Intelligence (BI) applications enable
data analysis using a natural language dialog in small incremental steps. To
truly unleash the power of conversational BI to democratize access to data, a
system needs to provide effective and continuous support for data analysis. In
this paper, we propose BI-REC, a conversational recommendation system for BI
applications to help users accomplish their data analysis tasks.
We define the space of data analysis in terms of BI patterns, augmented with
rich semantic information extracted from the OLAP cube definition, and use
graph embeddings learned using GraphSAGE to create a compact representation of
the analysis state. We propose a two-step approach to explore the search space
for useful BI pattern recommendations. In the first step, we train a
multi-class classifier using prior query logs to predict the next high-level
actions in terms of a BI operation (e.g., {\em Drill-Down} or {\em Roll-up})
and a measure that the user is interested in. In the second step, the
high-level actions are further refined into actual BI pattern recommendations
using collaborative filtering. This two-step approach not only allows us to
divide and conquer the huge search space, but also requires less training
data.
Our experimental evaluation shows that BI-REC achieves an accuracy of 83% for
BI pattern recommendations and up to 2X speedup in latency of prediction
compared to a state-of-the-art baseline. Our user study further shows that
BI-REC provides recommendations with a precision@3 of 91.90% across several
different analysis tasks.
|
Sea fishing is a highly mobile activity, favoured by the vastness of the
oceans, the absence of physical boundaries and the abstraction of legislative
boundaries. Understanding and anticipating this mobility is a major challenge
for fisheries management issues, both at the national and international levels.
''FisherMob'' is a free GAMA tool designed to study the effect of economic and
biological factors on the dynamics of connected fisheries. It incorporates the
most important processes involved in fisheries dynamics: fish abundance
variability, the price of the fishing effort, and the ex-vessel fish market
price, which depends on the ratio between supply and demand. The tool takes as
input a scheme of a coastal area with delimited fishing sites, fish biological
parameters, and fisheries parameters. It runs with a user-friendly graphical
interface and generates output files that can be post-processed easily using
graphing and statistical software.
|
It is generally accepted that the pulsar magnetic field converts most of its
rotational energy losses into radiation. In this paper, we propose an
alternative emission mechanism, in which neither the pulsar rotational energy
nor its magnetic field is involved. The emission mechanism proposed here is
based on the hypothesis that the pulsar matter is stable only when moving with
respect to the ambient medium at a velocity exceeding some threshold value. A
decrease in velocity below this threshold value leads to the decay of matter
with the emission of electromagnetic radiation. It is shown that decay
regions on the pulsar surface, in which the velocities of pulsar particles
drop to arbitrarily small values, are formed under a simple driving condition.
It is also
shown that for the majority of pulsars having measured transverse velocities,
such a condition is quite possible. Thus, the pulsar radiation carries away not
the pulsar rotational energy, but its mass, while the magnitude of the
rotational energy does not play any role. At the end of the paper, we consider
the reason for the possible short-period precession of the pulsar.
|
Digital tools have long been used for supporting children's creativity.
Digital games that allow children to create artifacts and express themselves in
a playful environment serve as efficient Creativity Support Tools (or CSTs).
Creativity is also scaffolded by social interactions with others in their
environment. In our work, we explore the use of game-based interactions with a
social agent to scaffold children's creative expression as game players. We
designed three collaborative games and play-tested them with 146 children aged
5-10, who played with the social robot Jibo; the games afford three different
kinds of creativity: verbal creativity, figural creativity, and divergent
thinking during creative problem solving. In this paper, we reflect on game
mechanic
practices that we incorporated to design for stimulating creativity in
children. These strategies may be valuable to game designers and HCI
researchers designing games and social agents for supporting children's
creativity.
|
Pre-training models such as BERT have achieved great success in many natural
language processing tasks. However, how to obtain better sentence
representations from these pre-trained models is still worth exploring.
Previous work has shown that the anisotropy problem is a critical bottleneck
for BERT-based sentence representation, hindering the model from fully
utilizing the underlying semantic features. Therefore, some attempts to boost
the isotropy of the sentence distribution, such as flow-based models, have
been applied to sentence representations and achieved some improvement. In
this paper, we
find that the whitening operation in traditional machine learning can similarly
enhance the isotropy of sentence representations and achieve competitive
results. Furthermore, the whitening technique is also capable of reducing the
dimensionality of the sentence representation. Our experimental results show
that it can not only achieve promising performance but also significantly
reduce the storage cost and accelerate the model retrieval speed.
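The whitening operation referred to here is the standard one from classical machine learning. A minimal numpy version, following the usual SVD-of-covariance construction with optional truncation for dimensionality reduction (a sketch of the idea, not necessarily the paper's exact recipe):

```python
import numpy as np

def whiten(embeddings, k=None):
    """Mean-center, map the sample covariance to identity via SVD, and
    optionally keep only the first k output dimensions."""
    mu = embeddings.mean(axis=0, keepdims=True)
    U, S, _ = np.linalg.svd(np.cov((embeddings - mu).T))
    W = U @ np.diag(1.0 / np.sqrt(S))
    if k is not None:
        W = W[:, :k]                    # dimensionality reduction
    return (embeddings - mu) @ W

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64)) @ rng.standard_normal((64, 64))
Xw = whiten(X, k=16)
print(Xw.shape, np.allclose(np.cov(Xw.T), np.eye(16), atol=1e-6))
```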
|
Quantum uncertainty is a well-known property of quantum mechanics that states
the impossibility of predicting measurement outcomes of multiple incompatible
observables simultaneously. In contrast, the uncertainty in the classical
domain comes from the lack of information about the exact state of the system.
One may naturally ask whether quantum uncertainty is indeed a fully intrinsic
property of the quantum theory, or whether, as in the classical domain, lack
of knowledge about specific parts of the physical system might be the source
of this uncertainty. This question has been addressed in [New J. Phys. 19,
023038 (2017)], where the authors argue that, in the entropic formulation of
the uncertainty principle that can be illustrated using so-called guessing
games, such lack of information indeed contributes significantly to the
arising quantum uncertainty. Here we investigate this issue
experimentally by implementing the corresponding two-dimensional and
three-dimensional guessing games. Our results confirm that within the
guessing-game framework, the quantum uncertainty to a large extent relies on
the fact that quantum information determining the key properties of the game is
stored in the degrees of freedom that remain inaccessible to the guessing
party.
|
Tabular data underpins numerous high-impact applications of machine learning
from fraud detection to genomics and healthcare. Classical approaches to
solving tabular problems, such as gradient boosting and random forests, are
widely used by practitioners. However, recent deep learning methods have
achieved a degree of performance competitive with popular techniques. We devise
a hybrid deep learning approach to solving tabular data problems. Our method,
SAINT, performs attention over both rows and columns, and it includes an
enhanced embedding method. We also study a new contrastive self-supervised
pre-training method for use when labels are scarce. SAINT consistently improves
performance over previous deep learning methods, and it even outperforms
gradient boosting methods, including XGBoost, CatBoost, and LightGBM, on
average over a variety of benchmark tasks.
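A minimal PyTorch sketch of the core idea of attending over both the columns (features) and the rows (samples in a batch) of a tabular input is given below. It illustrates the mechanism only; SAINT's actual block also has embeddings, feed-forward layers, and residual connections, none of which are reproduced here.

```python
import torch
import torch.nn as nn

class RowColumnAttention(nn.Module):
    """Attention across features, then across samples of the batch."""
    def __init__(self, n_features, dim, heads=4):
        super().__init__()
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.row_attn = nn.MultiheadAttention(n_features * dim, heads,
                                              batch_first=True)

    def forward(self, x):                   # x: (batch, n_features, dim)
        b, f, d = x.shape
        x, _ = self.col_attn(x, x, x)       # column (feature) attention
        r = x.reshape(1, b, f * d)          # treat the batch as one sequence
        r, _ = self.row_attn(r, r, r)       # row (intersample) attention
        return r.reshape(b, f, d)

x = torch.randn(32, 10, 16)                 # 32 rows, 10 features, dim 16
print(RowColumnAttention(10, 16)(x).shape)  # torch.Size([32, 10, 16])
```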
|
In collaborative software development, program merging is the mechanism to
integrate changes from multiple programmers. Merge algorithms in modern version
control systems report a conflict when changes interfere textually. Merge
conflicts require manual intervention and frequently stall modern continuous
integration pipelines. Prior work found that, although costly, a large majority
of resolutions involve re-arranging text without writing any new code. Inspired
by this observation we propose the first data-driven approach to resolve merge
conflicts with a machine learning model. We realize our approach in a tool
DeepMerge that uses a novel combination of (i) an edit-aware embedding of merge
inputs and (ii) a variation of pointer networks, to construct resolutions from
input segments. We also propose an algorithm to localize manual resolutions in
a resolved file and employ it to curate a ground-truth dataset comprising 8,719
non-trivial resolutions in JavaScript programs. Our evaluation shows that, on a
held out test set, DeepMerge can predict correct resolutions for 37% of
non-trivial merges, compared to only 4% by a state-of-the-art semistructured
merge technique. Furthermore, on the subset of merges with up to 3 lines
(comprising 24% of the total dataset), DeepMerge can predict correct
resolutions with 78% accuracy.
|
We propose a framework using contrastive learning as a pre-training task to
perform image classification in the presence of noisy labels. Recent
strategies such as pseudo-labeling, sample selection with Gaussian mixture
models, and weighted supervised contrastive learning are combined into a
fine-tuning phase following the pre-training. This paper provides an extensive
empirical
study showing that a preliminary contrastive learning step brings a significant
gain in performance when using different loss functions: non-robust, robust,
and early-learning regularized. Our experiments performed on standard
benchmarks and real-world datasets demonstrate that: i) the contrastive
pre-training increases the robustness of any loss function to noisy labels and
ii) the additional fine-tuning phase can further improve accuracy but at the
cost of additional complexity.
|
The literature in modern machine learning has only negative results for
learning to communicate between competitive agents using standard RL. We
introduce a modified sender-receiver game to study the spectrum of
partially-competitive scenarios and show communication can indeed emerge in a
competitive setting. We empirically demonstrate three key takeaways for future
research. First, we show that communication is proportional to cooperation, and
it can occur for partially competitive scenarios using standard learning
algorithms. Second, we highlight the difference between communication and
manipulation and extend previous metrics of communication to the competitive
case. Third, we investigate the negotiation game where previous work failed to
learn communication between independent agents (Cao et al., 2018). We show
that, in this setting, both agents must benefit from communication for it to
emerge; and, with a slight modification to the game, we demonstrate successful
communication between competitive agents. We hope this work overturns
misconceptions and inspires more research in competitive emergent
communication.
|
Delta-orthogonal multiple access (D-OMA) has been recently investigated as a
potential technique to enhance the spectral efficiency in the sixth-generation
(6G) networks. D-OMA enables partial overlapping of the adjacent sub-channels
that are assigned to different clusters of users served by non-orthogonal
multiple access (NOMA), at the expense of additional interference. In this
paper, we analyze the performance of D-OMA in the uplink and develop a
multi-objective optimization framework to maximize the uplink energy efficiency
(EE) in a multi-access point (AP) network enabled by D-OMA. Specifically, we
optimize the sub-channel and transmit power allocations of the users as well as
the overlapping percentage of the spectrum between the adjacent sub-channels.
The formulated problem is a mixed binary non-linear programming problem.
Therefore, to address this challenge, we first transform the problem into a
single-objective problem using the Tchebycheff method. Then, we apply the monotonic
optimization (MO) to explore the hidden monotonicity of the objective function
and constraints, and reformulate the problem into a standard MO in canonical
form. The reformulated problem is then solved by applying the outer polyblock
approximation method. Our numerical results show that D-OMA outperforms the
conventional non-orthogonal multiple access (NOMA) and orthogonal frequency
division multiple access (OFDMA) when the adjacent sub-channel overlap and
scheduling are optimized jointly.
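For reference, the Tchebycheff scalarization replaces a vector of objectives $f_i$ by the single objective $\max_i w_i\,|f_i(x) - z_i^*|$, where $z^*$ is an ideal (utopia) point and $w$ a weight vector. A toy version with made-up numbers:

```python
def tchebycheff(fs, ideal, weights):
    """Scalarize objective values fs against the ideal point z*."""
    return max(w * abs(f - z) for f, z, w in zip(fs, ideal, weights))

# Hypothetical values for two objectives with equal weights:
print(tchebycheff(fs=[0.8, 1.4], ideal=[1.0, 2.0], weights=[0.5, 0.5]))  # 0.3
```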
|
With the costs of renewable energy technologies declining, new forms of urban
energy systems are emerging that can be established in a cost-effective way.
The SolarEV City concept has been proposed, which uses rooftop photovoltaics
(PV) to the maximum extent, combined with electric vehicles (EVs) with
bi-directional charging for energy storage. Urban environments consist of
various areas, such
as residential and commercial districts, with different energy consumption
patterns, building structures, and car parks. The cost effectiveness and
decarbonization potentials of PV + EV and PV (+ battery) systems vary across
these different urban environments and change over time as cost structures
gradually shift. To evaluate these characteristics, we performed
techno-economic analyses of PV, battery, and EV technologies for a residential
area in Shinchi, Fukushima and the central commercial district of Kyoto, Japan
between 2020 and 2040. We found that PV + EV and PV-only systems in 2020 are
already cost-competitive relative to existing energy systems (grid electricity
and gasoline car). In particular, the PV + EV system rapidly increases its
economic advantage over time, particularly in the residential district which
has larger PV capacity and EV battery storage relative to the size of energy
demand. Electricity exchanges between neighbors (e.g., peer-to-peer or
microgrid) further enhanced the economic value (net present value) and
decarbonization potential of PV + EV systems up to 23 percent and 7 percent in
2030, respectively. These outcomes have important strategic implications for
urban decarbonization over the coming decades.
|
Gravitational lensing has long been used to measure or constrain cosmology
models. Although the lensing effect of gravitational waves has not been
observed by LIGO/Virgo, it is expected that a few to a few hundred lensed
events could be detected by the future Japanese space-borne interferometers
DECIGO and B-DECIGO if they run for 4 years. Given the
predicted lensed gravitational wave events, one can estimate the constraints on
the cosmological parameters via the lensing statistics and the time delay
methods. With the lensing statistics method, the knowledge of the lens
redshifts, even with the moderate uncertainty, will set the tight bound on the
energy density parameter $\Omega_M$ for matter, that is,
$0.288\lesssim\Omega_M\lesssim0.314$ at best. The constraint on the Hubble
constant $H_0$ can be determined using the time delay method. It is found
that at $5\sigma$, $|\delta H_0|/H_0$ ranges from $3\%$ to $11\%$ for DECIGO,
while B-DECIGO gives less constrained results, $8\%-15\%$. In this work, the
uncertainties on the luminosity distance and the time delay distance are set to
be $10\%$ and $20\%$, respectively. The improvement on measuring these
distances will tighten the bounds.
|
PyArmadillo is a linear algebra library for the Python language, with the aim
of closely mirroring the programming interface of the widely used Armadillo C++
library, which in turn is deliberately similar to Matlab. PyArmadillo hence
facilitates algorithm prototyping with Matlab-like syntax directly in Python,
and relatively straightforward conversion of PyArmadillo-based Python code into
performant Armadillo-based C++ code. The converted code can be used for
purposes such as speeding up Python-based programs in conjunction with
pybind11, or the integration of algorithms originally prototyped in Python into
larger C++ codebases. PyArmadillo provides objects for matrices and cubes, as
well as over 200 associated functions for manipulating data stored in the
objects. Integer, floating point and complex numbers are supported. Various
matrix factorisations are provided through integration with LAPACK, or one of
its high performance drop-in replacements such as Intel MKL or OpenBLAS.
PyArmadillo is open-source software, distributed under the Apache 2.0 license;
it can be obtained at https://pyarma.sourceforge.io or via the Python Package
Index in precompiled form.
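A short usage sketch of the Matlab-like interface (based on the pyarma module name used on the project page; exact function and constant names should be checked against the PyArmadillo documentation):

```python
import pyarma as pa

A = pa.mat(4, 4, pa.fill.randu)   # 4x4 matrix of uniform random values
b = pa.mat(4, 1, pa.fill.randu)
x = pa.solve(A, b)                # solve A x = b through LAPACK
x.print("x:")
```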
|
In this paper, we propose an efficient, high order accurate and
asymptotic-preserving (AP) semi-Lagrangian (SL) method for the BGK model with
constant or spatially dependent Knudsen number. The spatial discretization is
performed by a mass conservative nodal discontinuous Galerkin (NDG) method,
while the temporal discretization of the stiff relaxation term is realized by
stiffly accurate diagonally implicit Runge-Kutta (DIRK) methods along
characteristics. Extra order conditions are enforced for asymptotic accuracy
(AA) property of DIRK methods when they are coupled with a semi-Lagrangian
algorithm in solving the BGK model. A local maximum principle preserving (LMPP)
limiter is added to control numerical oscillations in the transport step.
Thanks to the SL and implicit nature of the time discretization, the time
stepping constraint is relaxed, and the allowed time step is much larger than
that of an Eulerian framework with explicit treatment of the source term.
Extensive numerical tests
are presented to verify the high order AA, efficiency and shock capturing
properties of the proposed schemes.
|
Scene text removal (STR) contains two processes: text localization and
background reconstruction. Through integrating both processes into a single
network, previous methods provide an implicit erasure guidance by modifying all
pixels in the entire image. However, there exist two problems: 1) the
implicit erasure guidance causes excessive erasure in non-text areas; 2) the
one-stage erasure lacks exhaustive removal of text regions. In this paper,
we propose a ProgrEssively Region-based scene Text eraser (PERT), introducing
an explicit erasure guidance and performing balanced multi-stage erasure for
accurate and exhaustive text removal. Firstly, we introduce a new region-based
modification strategy (RegionMS) to explicitly guide the erasure process.
Different from previous implicitly guided methods, RegionMS performs targeted
and regional erasure on text regions only, and adaptively perceives
stroke-level information to improve the integrity of non-text areas with only
bounding-box-level annotations. Secondly, PERT performs balanced multi-stage
erasure with
several progressive erasing stages. Each erasing stage takes an equal step
toward the text-erased image to ensure the exhaustive erasure of text regions.
Compared with previous methods, PERT outperforms them by a large margin without
the need of adversarial loss, obtaining SOTA results with high speed (71 FPS)
and at least 25% lower parameter complexity. Code is available at
https://github.com/wangyuxin87/PERT.
|
At most 1-2% of the global virome has been sampled to date. Here, we develop
a novel method that combines Linear Filtering (LF) and Singular Value
Decomposition (SVD) to infer host-virus associations. Using this method, we
recovered highly plausible undiscovered interactions with a strong signal of
viral coevolutionary history, and revealed a global hotspot of unusually unique
but unsampled (or unrealized) host-virus interactions in the Amazon rainforest.
We finally show that graph embedding of the imputed network can be used to
improve predictions of human infection from viral genome features, showing that
the global structure of the mammal-virus network provides additional insights
into human disease emergence.
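The paper's exact LF+SVD pipeline is not reproduced here, but the flavour of SVD-based link imputation can be sketched generically: reconstruct the partially observed host-virus matrix at low rank and flag unobserved pairs with high reconstructed scores. All data below are synthetic.

```python
import numpy as np

def lowrank_impute(Z, rank):
    """Truncated-SVD reconstruction of a binary interaction matrix."""
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
hosts, viruses = rng.random((50, 4)), rng.random((4, 80))
truth = (hosts @ viruses > 1.0).astype(float)       # planted low-rank structure
observed = truth * (rng.random(truth.shape) < 0.7)  # ~30% of links unsampled
scores = lowrank_impute(observed, rank=4)
candidates = np.argwhere((observed == 0) & (scores > 0.5))
print(f"{len(candidates)} candidate undiscovered interactions")
```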
|
Model predictive control (MPC) schemes have a proven track record for
delivering aggressive and robust performance in many challenging control tasks,
coping with nonlinear system dynamics, constraints, and observational noise.
Despite their success, these methods often rely on simple control
distributions, which can limit their performance in highly uncertain and
complex environments. MPC frameworks must be able to accommodate changing
distributions over system parameters, based on the most recent measurements. In
this paper, we devise an implicit variational inference algorithm able to
estimate distributions over model parameters and control inputs on-the-fly. The
method incorporates Stein variational gradient descent to approximate the
target distributions as a collection of particles, and performs updates based
on a Bayesian formulation. This enables the approximation of complex
multi-modal posterior distributions, typically occurring in challenging and
realistic robot navigation tasks. We demonstrate our approach on both simulated
and real-world experiments requiring real-time execution in the face of
dynamically changing environments.
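The Stein variational update that transports the particle set has a compact closed form; below is a generic numpy implementation of one SVGD step on a toy Gaussian target (fixed kernel bandwidth, none of the MPC machinery), just to show the kernel-smoothed gradient term plus the repulsive term.

```python
import numpy as np

def svgd_step(x, grad_logp, bandwidth=1.0, lr=0.1):
    """One Stein variational gradient descent update for particles x (n, d)."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]                      # pairwise diffs
    K = np.exp(-(diff ** 2).sum(-1) / (2 * bandwidth ** 2))   # RBF kernel
    grad_K = -(K[..., None] * diff) / bandwidth ** 2          # grad_{x_j} k(x_j, x_i)
    phi = (K @ grad_logp(x) + grad_K.sum(axis=0)) / n
    return x + lr * phi

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 2)) * 3 + 5                     # start far off
for _ in range(500):                                          # target: N(0, I)
    x = svgd_step(x, lambda z: -z)
print(x.mean(axis=0), x.std(axis=0))                          # ~(0,0) and ~(1,1)
```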
|
Automatic speech recognition (ASR) systems for young children are needed due
to the importance of age-appropriate educational technology. Because of the
lack of publicly available young child speech data, feature extraction
strategies such as feature normalization and data augmentation must be
considered to successfully train child ASR systems. This study proposes a novel
technique for child ASR using both feature normalization and data augmentation
methods based on the relationship between formants and fundamental frequency
($f_o$). Both the $f_o$ feature normalization and data augmentation techniques
are implemented as a frequency shift in the Mel domain. These techniques are
evaluated on a child read speech ASR task. Child ASR systems are trained by
adapting a BLSTM-based acoustic model trained on adult speech. Using both $f_o$
normalization and data augmentation results in a relative word error rate (WER)
improvement of 19.3% over the baseline when tested on the OGI Kids' Speech
Corpus, and the resulting child ASR system achieves the best WER currently
reported on this corpus.
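One plausible realization of "a frequency shift in the Mel domain" (hypothetical: the paper's exact shift rule, reference $f_o$, and filterbank settings may differ) is to slide log-Mel filterbank features by the number of Mel bins separating the speaker's $f_o$ from a reference $f_o$:

```python
import numpy as np

def mel(f):
    """Hz -> Mel, using the common 2595*log10(1 + f/700) formula."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def shift_fbank(feats, f0_src, f0_ref, mel_hi=mel(8000.0), n_bins=80):
    """Shift a (frames, n_bins) log-Mel feature matrix by whole Mel bins."""
    bin_width = mel_hi / n_bins
    k = int(round((mel(f0_ref) - mel(f0_src)) / bin_width))
    shifted = np.roll(feats, k, axis=1)
    if k > 0:
        shifted[:, :k] = feats[:, [0]]    # pad edges instead of wrapping
    elif k < 0:
        shifted[:, k:] = feats[:, [-1]]
    return shifted

feats = np.random.default_rng(0).standard_normal((100, 80))
print(shift_fbank(feats, f0_src=250.0, f0_ref=120.0).shape)  # (100, 80)
```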
|
We present an implementation of the Atiyah-Bott residue formula for
$\overline{M}_{0,m}(\mathbb{P}^{n},d)$. We use this implementation to compute a
large number of Gromov-Witten invariants of genus $0$, including characteristic
numbers of rational curves on general complete intersections. We also compute
some characteristic numbers of rational contact curves. Our computations
confirm known predictions made by Mirror Symmetry. The code we developed for
these computations is publicly available and can be used for other types of
computations.
|
A theorem of Glasner from 1979 shows that if $A \subset \mathbb{T} =
\mathbb{R}/\mathbb{Z}$ is infinite then for each $\epsilon > 0$ there exists an
integer $n$ such that $nA$ is $\epsilon$-dense and Berend-Peres later showed
that in fact one can take $n$ to be of the form $f(m)$ for any non-constant
$f(x) \in \mathbb{Z}[x]$. Alon and Peres provided a general framework for this
problem that has been used by Kelly-L\^{e} and Dong to show that the same
property holds for various linear actions on $\mathbb{T}^d$. We complement the
result of Kelly-L\^{e} on the $\epsilon$-dense images of integer polynomial
matrices in some subtorus of $\mathbb{T}^d$ by classifying those integer
polynomial matrices that have the Glasner property in the full torus
$\mathbb{T}^d$. We also extend a recent result of Dong by showing that if
$\Gamma \leq \operatorname{SL}_d(\mathbb{Z})$ is generated by finitely many
unipotents and acts irreducibly on $\mathbb{R}^d$ then the action $\Gamma
\curvearrowright \mathbb{T}^d$ has a uniform Glasner property.
|
A hypergraph $H=(V(H), E(H))$ is a Berge copy of a graph $F$, if $V(F)\subset
V(H)$ and there is a bijection $f:E(F)\rightarrow E(H)$ such that for any $e\in
E(F)$ we have $e\subset f(e)$. A hypergraph is Berge-$F$-free if it does not
contain any Berge copies of $F$. We address the saturation problem concerning
Berge-$F$-free hypergraphs, i.e., what is the minimum number $sat_r(n,F)$ of
hyperedges in an $r$-uniform Berge-$F$-free hypergraph $H$ with the property
that adding any new hyperedge to $H$ creates a Berge copy of $F$. We prove that
$sat_r(n,F)$ grows linearly in $n$ if $F$ is either complete multipartite or it
possesses the following property: if $d_1\le d_2\le \dots \le d_{|V(F)|}$ is
the degree sequence of $F$, then $F$ contains two adjacent vertices $u,v$ with
$d_F(u)=d_1$, $d_F(v)=d_2$. In particular, the Berge-saturation number of
regular graphs grows linearly in $n$.
|
Following an approach originally suggested by Balland in the context of the
SABR model, we derive an ODE that is satisfied by normalized volatility smiles
for short maturities under a rough volatility extension of the SABR model
that also extends the rough Bergomi model. We solve this ODE numerically and
further
present a very accurate approximation to the numerical solution that we dub the
rough SABR formula.
|
Surgical data science is revolutionizing minimally invasive surgery by
enabling context-aware applications. However, many challenges exist around
surgical data (and health data, more generally) needed to develop context-aware
models. This work - presented as part of the Endoscopic Vision (EndoVis)
challenge at the Medical Image Computing and Computer Assisted Intervention
(MICCAI) 2020 conference - seeks to explore the potential for visual domain
adaptation in surgery to overcome data privacy concerns. In particular, we
propose to use video from virtual reality (VR) simulations of surgical
exercises in robotic-assisted surgery to develop algorithms to recognize tasks
in a clinical-like setting. We present the performance of the different
approaches to solve visual domain adaptation developed by challenge
participants. Our analysis shows that the presented models were unable to
learn meaningful motion-based features from VR data alone, but did
significantly better when a small amount of clinical-like data was also made
available. Based
on these results, we discuss promising methods and further work to address the
problem of visual domain adaptation in surgical data science. We also release
the challenge dataset publicly at https://www.synapse.org/surgvisdom2020.
|
Pristine graphene interacts relatively weakly with Al, a species of
importance for novel generations of metal-ion batteries. We employ DFT
calculations to explore the possibility of enhancing the Al interaction with
graphene. We investigate undoped and N-doped graphene nanoribbons, addressing
the impact on the material's reactivity towards Al of edge sites, which are
always present to some extent in real samples, and of N-containing defects.
The results are compared to those for pristine graphene. We show that the
introduction of edges does not, by itself, significantly affect the energetics
of Al adsorption. On the other hand, N-doping of graphene nanoribbons is found
to affect the adsorption energy of Al to an extent that strongly depends on
the type of N-containing defect. While graphitic and pyrrolic N induce minimal
changes, the introduction of an edge NO group and doping with in-plane
pyridinic N result in Al adsorption nearly twice as strong as on pristine
graphene. The obtained results
could guide the further design of advanced materials for Al-ion rechargeable
batteries.
|
Golan and Sapir \cite{MR3978542} proved that Thompson's groups $F$, $T$ and
$V$ have linear divergence. In the current paper, we focus on the divergence
properties of several generalisations of Thompson's groups. We first consider
the Brown-Thompson groups $F_n$, $T_n$ and $V_n$ and find that these groups
also have linear divergence functions. We then consider the braided Thompson
groups $BF$, $\widehat{BF}$ and $\widehat{BV}$; together with the result in
\cite{Kodama:2020to}, we conclude that these groups also have linear
divergence.
|
Euler's three-body problem is the problem of solving for the motion of a
particle moving in a Newtonian potential generated by two point sources fixed
in space. This system is integrable in the Liouville sense. We consider the
Euler problem with the inverse-square potential, which can be seen as a natural
generalization of the three-body problem to higher-dimensional Newtonian
theory. We identify a family of stable stationary orbits in the generalized
Euler problem. These orbits guarantee the existence of stable bound orbits.
Applying the Poincar\'e map method to these orbits, we show that stable bound
chaotic orbits appear. As a result, we conclude that the generalized Euler
problem is nonintegrable.
|
Non-Hermitian skin effect, namely that the eigenvalues and eigenstates of a
non-Hermitian tight-binding Hamiltonian have significant differences under open
or periodic boundary conditions, is a remarkable phenomenon of non-Hermitian
systems. Inspired by the presence of the non-Hermitian skin effect, we study
the evolution of wave-packets in non-Hermitian systems, which can be determined
using the single-particle Green's function. Surprisingly, we find that in the
thermodynamic limit, the Green's function does not depend on boundary
conditions, despite the presence of skin effect. We proffer a general proof for
this statement in arbitrary dimension with finite hopping range, with an
explicit illustration in the non-Hermitian Su-Schrieffer-Heeger model. We also
explore its applications in non-interacting open quantum systems described by
the master equation, where we demonstrate that the evolution of the density
matrix is independent of the boundary condition.
|
Binary neutron star mergers are thought to be one of the dominant sites of
production for rapid neutron capture elements, including platinum and gold.
Since the discovery of the binary neutron star merger GW170817, and its
associated kilonova AT2017gfo, numerous works have attempted to determine the
composition of its outflowing material, but they have been hampered by the lack
of complete atomic data. Here, we demonstrate how inclusion of new atomic data
in synthetic spectra calculations can provide insights and constraints on the
production of the heaviest elements. We employ theoretical atomic data
(obtained using GRASP$^0$) for neutral, singly- and doubly-ionised platinum and
gold, to generate photospheric and simple nebular phase model spectra for
kilonova-like ejecta properties. We make predictions for the locations of
strong transitions, which could feasibly appear in the spectra of kilonovae
that are rich in these species. We identify low-lying electric quadrupole and
magnetic dipole transitions that may give rise to forbidden lines when the
ejecta becomes optically thin. The strongest lines lie beyond $8000\,\r{A}$,
motivating high quality near-infrared spectroscopic follow-up of kilonova
candidates. We compare our model spectra to the observed spectra of AT2017gfo,
and conclude that no platinum or gold signatures are prominent in the ejecta.
From our nebular phase modelling, we place tentative upper limits on the
platinum and gold mass of $\lesssim$ a few $10^{-3}\,\rm{M}_{\odot}$, and
$\lesssim 10^{-2}\,\rm{M}_{\odot}$, respectively. This work demonstrates how
new atomic data of heavy elements can be included in radiative transfer
calculations, and motivates future searches for elemental signatures.
|
The fate of relativistic pair beams produced in the intergalactic medium by
very high energy emission from blazars remains controversial in the literature.
The possible role of the resonant beam-plasma instability has been studied both
analytically and numerically but no consensus has been reached. In this paper,
we thoroughly analyze the development of this type of instability. This
analysis takes into account that a highly relativistic beam loses energy only
due to interactions with the plasma waves propagating within the opening angle
of the beam (we call them parallel waves), whereas excitation of oblique waves
results merely in an angular spreading of the beam, which reduces the
instability growth rate. For parallel waves, the growth rate is a few times
larger than for oblique ones, so they grow faster than oblique waves and drain
energy from the beam before it expands. However, the specific property of
extragalactic beams is that they are extraordinarily narrow; the opening angle
is only $\Delta\theta\sim 10^{-6}-10^{-5}$. In this case, the width of the
resonance for parallel waves, $\propto\Delta\theta^2$, is too small for them to
grow in realistic conditions. We perform both analytical estimates and
numerical simulations in the quasilinear regime. These show that for
extragalactic beams, the growth of the waves is incapable of taking a
significant portion of the beam's energy. This type of instability could at
best lead to an expansion of the beam by some factor but the beam's energy
remains nearly intact.
|
Ultraviolet (UV) and X-ray photons from active galactic nuclei (AGNs) can
ionize hydrogen in the intergalactic medium (IGM). We solve the radiative
transfer around AGNs at high redshift to evaluate the 21-cm line emission from
neutral hydrogen in the IGM and obtain the radial profile of the brightness
temperature in the epoch of reionization. The ionization profile extends over
10 Mpc in comoving distance, which can be observed on the order of 10 arcmin.
From an estimate of the radio galaxy number counts with high-sensitivity
observations through the Square Kilometre Array (SKA), we investigate the
capability of constraining the parameters of the AGN luminosity function with
a Fisher analysis for three models of evolution through cosmic time. We find
that the errors on each parameter are restricted to a few percent when AGNs
are sufficiently bright at high redshifts. We also investigate the possibility
of further parameter constraints with future observations beyond the era of
SKA.
|
We determine the masses, the singlet and octet decay constants as well as the
anomalous matrix elements of the $\eta$ and $\eta^\prime$ mesons in $N_f=2+1$
QCD\@. The results are obtained using twenty-one CLS ensembles of
non-perturbatively improved Wilson fermions that span four lattice spacings
ranging from $a\approx 0.086\,$fm down to $a\approx 0.050\,$fm. The pion masses
vary from $M_{\pi}=420\,$MeV to $126\,$MeV and the spatial lattice extents
$L_s$ are such that $L_sM_\pi\gtrsim 4$, avoiding significant finite volume
effects. The quark mass dependence of the data is tightly constrained by
employing two trajectories in the quark mass plane, enabling a thorough
investigation of U($3$) large-$N_c$ chiral perturbation theory (ChPT). The
continuum limit extrapolated data turn out to be reasonably well described by
the next-to-leading order ChPT parametrization and the respective low energy
constants are determined. The data are shown to be consistent with the singlet
axial Ward identity and, for the first time, also the matrix elements with the
topological charge density are computed. We also derive the corresponding
next-to-leading order large-$N_{c}$ ChPT formulae. We find $F^8 =
115.0(2.8)~\text{MeV}$, $\theta_{8} = -25.8(2.3)^{\circ}$, $\theta_0 =
-8.1(1.8)^{\circ}$ and, in the $\overline{\mathrm{MS}}$ scheme for $N_f=3$,
$F^{0}(\mu = 2\,\mathrm{GeV}) = 100.1(3.0)~\text{MeV}$, where the decay
constants read $F^8_\eta=F^8\cos \theta_8$, $F^8_{\eta^\prime}=F^8\sin
\theta_8$, $F^0_\eta=-F^0\sin \theta_0$ and $F^0_{\eta^\prime}=F^0\cos
\theta_0$. For the gluonic matrix elements, we obtain $a_{\eta}(\mu =
2\,\mathrm{GeV}) = 0.0170(10)\,\mathrm{GeV}^{3}$ and $a_{\eta^{\prime}}(\mu =
2\,\mathrm{GeV}) = 0.0381(84)\,\mathrm{GeV}^{3}$, where statistical and all
systematic errors are added in quadrature.
|
This paper considers the length of resolution proofs when using
Krishnamurthy's classic symmetry rules. We show that inconsistent linear
equation systems of bounded width over a fixed finite field $\mathbb{F}_p$ with
$p$ a prime have, in their standard encoding as CNFs, polynomial-length
resolution proofs when using the local symmetry rule (SRC-II).
As a consequence it follows that the multipede instances for the graph
isomorphism problem encoded as CNF formula have polynomial length resolution
proofs. This contrasts exponential lower bounds for
individualization-refinement algorithms on these graphs.
For the Cai-F\"urer-Immerman graphs, for which Tor\'an showed exponential
lower bounds for resolution proofs (SAT 2013), we also show that already the
global symmetry rule (SRC-I) suffices to allow for polynomial length proofs.
|
We report multi-frequency observations of large radio galaxies 3C 35 and 3C
284. The low-frequency observations were made with the Giant Metrewave Radio
Telescope starting from $\sim$150 MHz, and the high-frequency observations
were made with the Very Large Array. We have studied the radio morphology of
these two sources at different frequencies. We present spectral ageing maps
using two of the most widely used models, the Kardashev-Pacholczyk and
Jaffe-Perola models. The more realistic and complex Tribble model is also
used. We also calculate the jet power and the speed of the radio lobes of
these galaxies. We check whether any episodic jet activity is present in these
galaxies and find no sign of such activity.
|
Humans learn compositional and causal abstraction, i.e., knowledge, in
response to the structure of naturalistic tasks. When presented with a
problem-solving task involving some objects, toddlers would first interact with
these objects to reckon what they are and what can be done with them.
Leveraging these concepts, they could understand the internal structure of this
task, without seeing all of the problem instances. Remarkably, they further
build cognitively executable strategies to \emph{rapidly} solve novel problems.
To empower a learning agent with similar capability, we argue there shall be
three levels of generalization in how an agent represents its knowledge:
perceptual, conceptual, and algorithmic. In this paper, we devise the very
first systematic benchmark that offers joint evaluation covering all three
levels. This benchmark is centered around a novel task domain, HALMA, for
visual concept development and rapid problem-solving. Uniquely, HALMA has a
minimum yet complete concept space, upon which we introduce a novel paradigm to
rigorously diagnose and dissect learning agents' capability in understanding
and generalizing complex and structural concepts. We conduct extensive
experiments on reinforcement learning agents with various inductive biases and
carefully report their proficiency and weaknesses.
|
Automatic extraction of product attribute values is an important enabling
technology in e-Commerce platforms. This task is usually modeled using sequence
labeling architectures, with several extensions to handle multi-attribute
extraction. One line of previous work constructs attribute-specific models,
through separate decoders or entirely separate models. However, this approach
constrains knowledge sharing across different attributes. Other contributions
use a single multi-attribute model, with different techniques to embed
attribute information. But sharing the entire network parameters across all
attributes can limit the model's capacity to capture attribute-specific
characteristics. In this paper we present AdaTag, which uses adaptive decoding
to handle extraction. We parameterize the decoder with pretrained attribute
embeddings, through a hypernetwork and a Mixture-of-Experts (MoE) module. This
allows for separate, but semantically correlated, decoders to be generated on
the fly for different attributes. This approach facilitates knowledge sharing,
while maintaining the specificity of each attribute. Our experiments on a
real-world e-Commerce dataset show marked improvements over previous methods.
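As a schematic illustration of this adaptive decoding idea, the following
PyTorch sketch (our illustration, not the authors' implementation) generates
the weights of a per-attribute linear tagging decoder from a pretrained
attribute embedding via a small Mixture-of-Experts hypernetwork; all
dimensions and names are placeholders.

```python
# A minimal sketch of hypernetwork-parameterized decoding in the spirit of
# AdaTag (not the authors' code); dimensions and names are illustrative.
import torch
import torch.nn as nn

class MoEHyperDecoder(nn.Module):
    def __init__(self, attr_dim=300, hidden_dim=768, num_tags=3, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(attr_dim, num_experts)   # expert mixing weights
        out_dim = hidden_dim * num_tags + num_tags     # decoder weights + bias
        self.experts = nn.ModuleList(
            [nn.Linear(attr_dim, out_dim) for _ in range(num_experts)])
        self.hidden_dim, self.num_tags = hidden_dim, num_tags

    def forward(self, token_states, attr_emb):
        # token_states: (batch, seq, hidden); attr_emb: (attr_dim,)
        mix = torch.softmax(self.gate(attr_emb), dim=-1)
        params = sum(w * e(attr_emb) for w, e in zip(mix, self.experts))
        W = params[: self.hidden_dim * self.num_tags].view(self.num_tags, self.hidden_dim)
        b = params[self.hidden_dim * self.num_tags:]
        return token_states @ W.t() + b                # per-token BIO tag logits

decoder = MoEHyperDecoder()
logits = decoder(torch.randn(2, 16, 768), torch.randn(300))  # decoder built on the fly
```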
|
This note is devoted to establishing some new arithmetic properties of the
generalized Genocchi numbers $G_{n , a}$ ($n \in \mathbb{N}$, $a \geq 2$). The
resulting properties for the usual Genocchi numbers $G_n = G_{n , 2}$ are then
derived. We show for example that for any even positive integer $n$, the
Genocchi number $G_n$ is a multiple of the odd part of $n$.
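The stated divisibility can be spot-checked numerically via the classical
identity $G_n = 2(1-2^n)B_n$ relating Genocchi and Bernoulli numbers; the
sketch below, using sympy, is our illustration rather than part of the note.

```python
# Spot-check: for even n > 0, the Genocchi number G_n is a multiple of the
# odd part of n. Uses the classical identity G_n = 2(1 - 2^n) B_n.
from sympy import Integer, bernoulli

def genocchi(n: int) -> Integer:
    return Integer(2) * (1 - Integer(2) ** n) * bernoulli(n)

def odd_part(n: int) -> int:
    while n % 2 == 0:
        n //= 2
    return n

for n in range(2, 40, 2):
    g = genocchi(n)
    assert g % odd_part(n) == 0, (n, g)
    print(f"n={n:2d}  G_n={g}  odd part of n={odd_part(n)}")
```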
|
Self-attention has become increasingly popular in a variety of sequence
modeling tasks from natural language processing to recommendation, due to its
effectiveness. However, self-attention suffers from quadratic computational and
memory complexities, prohibiting its applications on long sequences. Existing
approaches that address this issue mainly rely on a sparse attention context,
either using a local window, or a permuted bucket obtained by
locality-sensitive hashing (LSH) or sorting, while crucial information may be
lost. Inspired by the idea of vector quantization that uses cluster centroids
to approximate items, we propose LISA (LInear-time Self Attention), which
enjoys both the effectiveness of vanilla self-attention and the efficiency of
sparse attention. LISA scales linearly with the sequence length, while enabling
full contextual attention via computing differentiable histograms of codeword
distributions. Meanwhile, unlike some efficient attention methods, our method
poses no restriction on causal masking or sequence length. We evaluate our
method on four real-world datasets for sequential recommendation. The results
show that LISA outperforms the state-of-the-art efficient attention methods in
both performance and speed; and it is up to 57x faster and 78x more memory
efficient than vanilla self-attention.
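The following toy numpy sketch (our illustration, not the authors' code)
conveys the codeword-histogram idea: keys are assigned to a small codebook,
queries attend only to the codewords, and the attention weights are scaled by
codeword occupancy counts, giving O(NK) cost; LISA itself uses differentiable
soft histograms so the codebook is trainable.

```python
# Toy hard-assignment version of codeword-histogram attention: O(N*K) instead
# of O(N^2); codewords stand in for the quantized keys/values.
import numpy as np

def histogram_attention(Q, keys, codebook):
    # Q: (N, d) queries; keys: (N, d); codebook: (K, d)
    assign = np.argmax(keys @ codebook.T, axis=1)        # nearest codeword per key
    hist = np.bincount(assign, minlength=len(codebook))  # codeword occupancy counts
    scores = Q @ codebook.T                              # (N, K) query-codeword logits
    weights = np.exp(scores - scores.max(axis=1, keepdims=True)) * hist
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ codebook                            # (N, d) attention output

rng = np.random.default_rng(0)
out = histogram_attention(rng.normal(size=(1000, 64)),
                          rng.normal(size=(1000, 64)),
                          rng.normal(size=(32, 64)))
```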
|
We employ kernel-based approaches that use samples from a probability
distribution to approximate a Kolmogorov operator on a manifold. The
self-tuning variable-bandwidth kernel method [Berry & Harlim, Appl. Comput.
Harmon. Anal., 40(1):68--96, 2016] computes a large, sparse matrix that
approximates the differential operator. Here, we use the eigendecomposition of
the discretization to (i) invert the operator, solving a differential equation,
and (ii) represent gradient vector fields on the manifold. These methods only
require samples from the underlying distribution and, therefore, can be applied
in high dimensions or on geometrically complex manifolds when spatial
discretizations are not available. We also employ an efficient $k$-$d$ tree
algorithm to compute the sparse kernel matrix, which would otherwise be a
computational bottleneck.
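A simplified sketch of this pipeline is given below (our illustration, with a
fixed bandwidth for brevity; the cited estimator uses self-tuning variable
bandwidths and a different normalization): a k-d tree builds the sparse
Gaussian kernel matrix and a sparse eigensolver returns the leading eigenpairs.

```python
# Simplified sketch: sparse kernel matrix from a k-d tree neighbor query,
# row-normalized, then the leading eigenpairs of its symmetrized version.
import numpy as np
from scipy.sparse import diags, eye
from scipy.sparse.linalg import eigsh
from scipy.spatial import cKDTree

def sparse_kernel_eigs(X, eps=0.1, radius=1.0, n_eig=20):
    tree = cKDTree(X)                                      # fast neighbor search
    D = tree.sparse_distance_matrix(tree, radius).tocoo()  # only nearby pairs kept
    K = D.copy()
    K.data = np.exp(-D.data**2 / eps)                      # Gaussian kernel on stored pairs
    K = K.tocsr() + eye(len(X), format="csr")              # self-pairs (distance 0) not stored
    d = np.asarray(K.sum(axis=1)).ravel()
    L = diags(1.0 / d) @ K                                 # row-normalized Markov matrix
    return eigsh((L + L.T) / 2, k=n_eig, which="LA")       # symmetrized for eigsh

X = np.random.default_rng(1).normal(size=(2000, 3))
evals, evecs = sparse_kernel_eigs(X)
```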
|
In image editing, the most common task is pasting objects from one image onto
another and then adjusting the appearance of the foreground object so that it
is consistent with the new background. This task is called image compositing.
Image compositing is a challenging problem that requires professional editing
skills and a considerable amount of time. Not only are these professionals
expensive to hire, but the tools (like Adobe Photoshop) used for such tasks are
also expensive to purchase, making image compositing difficult for people
without this skillset. In this work, we aim to address this problem by making
composite images look realistic. To achieve this, we use Generative Adversarial
Networks (GANs). By training the
network with a diverse range of filters applied to the images and special loss
functions, the model is able to decode the color histogram of the foreground
and background part of the image and also learns to blend the foreground object
with the background. The hue and saturation values of the image play an
important role as discussed in this paper. To the best of our knowledge, this
is the first work that uses GANs for the task of image compositing. Currently,
there is no benchmark dataset available for image compositing. So we created
the dataset and will also make the dataset publicly available for benchmarking.
Experimental results on this dataset show that our method outperforms all
current state-of-the-art methods.
|
Observing the Rossiter-McLaughlin effect during a planetary transit allows
the determination of the angle $\lambda$ between the sky projections of the
star's spin axis and the planet's orbital axis. Such observations have revealed
a large population of well-aligned systems and a smaller population of
misaligned systems, with values of $\lambda$ ranging up to 180$^\circ$. For a
subset of 57 systems, we can now go beyond the sky projection and determine the
3-d obliquity $\psi$ by combining the Rossiter-McLaughlin data with constraints
on the line-of-sight inclination of the spin axis. Here we show that the
misaligned systems do not span the full range of obliquities; they show a
preference for nearly-perpendicular orbits ($\psi=80-125^\circ$) that seems
unlikely to be a statistical fluke. If confirmed by further observations, this
pile-up of polar orbits is a clue about the unknown processes of obliquity
excitation and evolution.
|
We investigate quantum entanglement in an analogue black hole realized in the
flow of a Bose-Einstein condensate. The system is described by a three-mode
Gaussian state and we construct the corresponding covariance matrix at zero and
finite temperature. We study associated bipartite and tripartite entanglement
measures and discuss their experimental observation. We identify a simple
optical setup equivalent to the analogue Bose-Einstein black hole which
suggests a new way of determining the Hawking temperature and grey-body factor
of the system.
|
Diam-mean equicontinuity is a dynamical property that has been of use in the
study of non-periodic order. Using a kind of "local" skew product between a
shift and an odometer-like cellular automaton (CA), we show there exists an
almost diam-mean equicontinuous CA that is not almost equicontinuous, and hence
not almost locally periodic.
|
Hydrodynamic self-similar solutions, as obtained by Chi [J. Math. Phys. 24,
2532 (1983)], have been generalized by introducing new variables in place of the
old space and time variables. A systematic procedure for obtaining a complete
set of solutions has been suggested. The Newtonian analogs of all homogeneous
isotropic Friedmann dust universes with spatial curvature $k = 0, \pm 1$ have
been given.
|
In this paper we characterize magnitude-dependent systematics in the proper
motions of the Gaia EDR3 catalog and provide a prescription for their removal.
The reference frame of bright stars (G<13) in EDR3 is known to rotate with
respect to extragalactic objects, but this rotation has proven difficult to
characterize and correct. We employ a sample of binary stars and a sample of
open cluster members to characterize this proper motion bias as a
magnitude-dependent spin of the reference frame. We show that the bias varies
with G magnitude, reaching up to 80 {\mu}as/yr for sources in the range G = 11
- 13, several times the formal EDR3 proper motion uncertainties. We also show
evidence for an additional dependence on the color of the source, with a
magnitude up to 10 {\mu}as/yr. However, a color correction proportional to the
effective wavenumber is unsatisfactory for very red or very blue stars and we
do not recommend its use. We provide a recipe for a magnitude-dependent
correction to align the proper motion of the Gaia EDR3 sources brighter than
G=13 with the International Celestial Reference Frame.
|
We study canonical and affine versions of the quantized covariant Euclidean
free real scalar field-theory on four dimensional lattices through the Monte
Carlo method. We calculate the two-point function at small values of the bare
coupling constant and near the continuum limit at finite volume. Our
investigation shows that affine quantization is able to give meaningful results
for the two-point function, for which no exact analytic result is available and
for which numerical methods are therefore necessary.
|
This letter describes a method for estimating regions of attraction and
bounds on permissible perturbation amplitudes in nonlinear fluids systems. The
proposed approach exploits quadratic constraints between the inputs and outputs
of the nonlinearity on elliptical sets. This approach reduces conservatism and
improves estimates for regions of attraction and bounds on permissible
perturbation amplitudes over related methods that employ quadratic constraints
on spherical sets. We present and investigate two algorithms for performing the
analysis: an iterative method that refines the analysis by solving a sequence
of semi-definite programs, and another based on solving a generalized
eigenvalue problem with lower computational complexity, but at the cost of some
precision in the final solution. The proposed algorithms are demonstrated on
low-order mechanistic models of transitional flows. We further compare accuracy
and computational complexity with analysis based on sum-of-squares optimization
and direct-adjoint looping methods.
|
Since the 1980s, technology business incubators (TBIs), which focus on
accelerating businesses through resource sharing, knowledge agglomeration, and
technology innovation, have become a booming industry. As such, research on
TBIs has gained international attention, most notably in the United States,
Europe, Japan, and China. The present study proposes an entrepreneurial
ecosystem framework with four key components, i.e., people, technology,
capital, and infrastructure, to investigate which factors have an impact on the
performance of TBIs. We also empirically examine this framework based on
unique, three-year panel survey data from 857 national TBIs across China. We
implemented factor analysis and panel regression models on dozens of variables
from 857 national TBIs between 2015 and 2017 in all major cities in China and
found that a number of factors associated with the people, technology, capital,
and infrastructure components have statistically significant impacts on the
performance of TBIs in either the national or the regional models.
|
Quantifying long-term statistical properties of satellite trajectories
typically entails time-consuming trajectory propagation. We present a fast,
ergodic\cite{Arnold} method of analytically estimating these for
$J_2-$perturbed elliptical orbits, broadly agreeing with trajectory
propagation-derived results. We extend the approach in Graven and Lo (2019) to
estimate: (1) Satellite-ground station coverage with limited satellite field of
view and ground station elevation angle with numerically optimized formulae,
and (2) long-term averages of general functions of satellite position. This
method is fast enough to facilitate real-time, interactive tools for satellite
constellation and network design, with an approximate $1000\times$ GPU speedup.
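The flavor of such averaging can be illustrated in the unperturbed two-body
limit (our toy example, not the paper's $J_2$ formulae): the long-term time
average of a function of satellite position reduces to a single quadrature in
true anomaly, using $dt = (r^2/h)\,d\nu$, with no propagation.

```python
# Toy two-body illustration: time-average of f(r) over one elliptical orbit
# via quadrature in true anomaly, weighting by dt/d(nu) = r^2/h.
import numpy as np

def orbit_time_average(f, a=7000.0, e=0.1, mu=398600.4418, n_pts=2000):
    # a [km], mu [km^3/s^2] (Earth GM); values are illustrative
    nu = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    p = a * (1.0 - e**2)                  # semi-latus rectum
    r = p / (1.0 + e * np.cos(nu))        # radius vs true anomaly
    h = np.sqrt(mu * p)                   # specific angular momentum
    w = r**2 / h                          # time spent per unit true anomaly
    return np.sum(f(r) * w) / np.sum(w)   # time-weighted mean of f

avg = orbit_time_average(lambda r: 1.0 / r**2)  # e.g. a coverage/flux proxy
```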
|
The orientation of the disk of material accreting onto supermassive black
holes that power quasars is one of the most important quantities needed to
understand quasars -- both individually and in the ensemble average. We present
a hypothesis for determining comparatively edge-on orientation in a subset of
quasars (both radio loud and radio quiet). If confirmed, this orientation
indicator could be applicable to individual quasars without reference to radio
or X-ray data and could identify some 10-20% of quasars as being more edge-on
than average, based only on moderate resolution and signal-to-noise
spectroscopy covering the CIV 1549A emission feature. We present a test of said
hypothesis using X-ray observations and identify additional data that are
needed to confirm this hypothesis and calibrate the metric.
|
Determinantal consensus clustering is a promising and attractive alternative
to partitioning about medoids and k-means for ensemble clustering. Based on a
determinantal point process or DPP sampling, it ensures that subsets of similar
points are less likely to be selected as centroids. It favors more diverse
subsets of points. The sampling algorithm of the determinantal point process
requires the eigendecomposition of a Gram matrix. This becomes computationally
intensive when the data size is very large. This is particularly an issue in
consensus clustering, where a given clustering algorithm is run several times
in order to produce a final consolidated clustering. We propose two efficient
alternatives to carry out determinantal consensus clustering on large datasets.
They consist of DPP sampling based on sparse and small kernel matrices whose
eigenvalue distributions are close to that of the original Gram matrix.
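For context, the sketch below shows the standard eigendecomposition-based DPP
sampler (Hough et al.; Kulesza & Taskar) for an L-ensemble Gram matrix; the
$O(n^3)$ eigendecomposition in the first line is precisely the step whose cost
motivates the proposed alternatives.

```python
# Standard exact DPP sampling from an L-ensemble Gram matrix L.
import numpy as np

def sample_dpp(L, rng=np.random.default_rng()):
    lam, V = np.linalg.eigh(L)                        # the expensive step for large n
    keep = rng.random(len(lam)) < lam / (1.0 + lam)   # choose an elementary DPP
    V = V[:, keep]
    sample = []
    while V.shape[1] > 0:
        probs = (V**2).sum(axis=1) / V.shape[1]       # P(pick item i); sums to 1
        i = rng.choice(len(probs), p=probs)
        sample.append(i)
        # project V onto the subspace with i-th coordinate zero, re-orthonormalize
        j = np.argmax(np.abs(V[i]))                   # a column with V[i, j] != 0
        V = V - np.outer(V[:, j] / V[i, j], V[i])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(sample)

pts = np.random.default_rng(0).normal(size=(200, 5))
Lmat = np.exp(-0.5 * np.linalg.norm(pts[:, None] - pts[None], axis=-1) ** 2)
diverse_subset = sample_dpp(Lmat)                     # favors spread-out points
```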
|
This manuscript is aimed at addressing several long-standing limitations of
dynamic mode decompositions in the application of Koopman analysis. Principal
among these limitations are the convergence of associated Dynamic Mode
Decomposition algorithms and the existence of Koopman modes. To address these
limitations, two major modifications are made: Koopman operators are replaced
in the analysis by Liouville operators (known as Koopman
generators in special cases), and these operators are shown to be compact for
certain pairs of Hilbert spaces selected separately as the domain and range of
the operator. While eigenfunctions are discarded in the general analysis, a
viable reconstruction algorithm is still demonstrated, and the sacrifice of
eigenfunctions realizes the theoretical goals of DMD analysis that have yet to
be achieved in other contexts. However, in the case where the domain is
embedded in the range, an eigenfunction approach is still achievable, where a
more typical DMD routine is established, but that leverages a finite rank
representation that converges in norm. The manuscript concludes with the
description of two Dynamic Mode Decomposition algorithms that converge when a
dense collection of occupation kernels, arising from the data, is leveraged in
the analysis.
|
In this paper, a numerical study of the melting behavior of phase change
material (PCM) with gradient porous media has been carried out at the pore
scale. In order to solve the governing equations, a pore-scale lattice
Boltzmann method with double distribution functions is used, in which a
volumetric LB scheme is employed to handle the boundary. Monte Carlo random
sampling is adopted to generate microstructures of two-dimensional gradient
metal foam, which are then used to simulate the solid-liquid phase transition in
the cavity. The effect of several factors, such as gradient porosity structure,
gradient direction, Rayleigh number and particle diameters on the liquid
fraction of PCM are systematically investigated. It is observed that the
presence of gradient media significantly affects the melting rate and shortens
the full melting time compared with that for constant porosity by enhancing
natural convection. The melting times for positive and negative gradients
change with the Rayleigh number, and there is a critical Rayleigh number.
Specifically, when the Rayleigh number is below the critical value, the
positive gradient is more advantageous, and when it exceeds the critical value,
the negative gradient is more advantageous. Moreover, smaller particle
diameters would lead to lower permeability and larger internal surfaces for
heat transfer.
|
In this paper, we use $K$-means clustering to analyze various relationships
between malware samples. We consider a dataset comprising~20 malware families
with~1000 samples per family. These families can be categorized into seven
different types of malware. We perform clustering based on pairs of families
and use the results to determine relationships between families. We perform a
similar cluster analysis based on malware type. Our results indicate that
$K$-means clustering can be a powerful tool for data exploration of malware
family relationships.
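An illustrative sketch of the pairwise-family experiment (features and data
are placeholders, not the paper's dataset): cluster samples from two families
with $K$-means and score how well the clusters recover the family split; low
agreement suggests the two families are close in feature space.

```python
# Pairwise-family K-means probe with placeholder features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def family_separability(X_a, X_b, seed=0):
    X = np.vstack([X_a, X_b])
    y = np.array([0] * len(X_a) + [1] * len(X_b))
    pred = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X)
    return adjusted_rand_score(y, pred)   # ~1: well separated, ~0: intermixed

rng = np.random.default_rng(0)
fam_a = rng.normal(0.0, 1.0, size=(1000, 32))   # stand-ins for per-sample features
fam_b = rng.normal(2.0, 1.0, size=(1000, 32))
print(family_separability(fam_a, fam_b))
```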
|
We study protostellar envelope and outflow evolution using Hubble Space
Telescope NICMOS or WFC3 images of 304 protostars in the Orion Molecular
clouds. These near-IR images resolve structures in the envelopes delineated by
the scattered light of the central protostars with 80 AU resolution and they
complement the 1.2-870 micron spectral energy distributions obtained with the
Herschel Orion Protostar Survey program (HOPS). Based on their 1.60 micron
morphologies, we classify the protostars into five categories: non-detections,
point sources without nebulosity, bipolar cavity sources, unipolar cavity
sources, and irregulars. We find point sources without associated nebulosity
are the most numerous, and show through monochromatic Monte Carlo radiative
transfer modeling that this morphology occurs when protostars are observed at
low inclinations or have low envelope densities. We also find that the
morphology is correlated with the SED-determined evolutionary class with Class
0 protostars more likely to be non-detections, Class I protostars to show
cavities and flat-spectrum protostars to be point sources. Using an edge
detection algorithm to trace the projected edges of the cavities, we fit
power-laws to the resulting cavity shapes, thereby measuring the cavity
half-opening angles and power-law exponents. We find no evidence for the growth
of outflow cavities as protostars evolve through the Class I protostar phase,
in contradiction with previous studies of smaller samples. We conclude that the
decline of mass infall with time cannot be explained by the progressive
clearing of envelopes by growing outflow cavities. Furthermore, the low star
formation efficiency inferred for molecular cores cannot be explained by
envelope clearing alone.
|
We consider genuine type IIB string theory (supersymmetric) brane
intersections that preserve $(1+1)$D Lorentz symmetry. We provide the full
supergravity solutions in their analytic form and discuss their physical
properties. The Ansatz for the spacetime dependence of the different brane warp
factors goes beyond the harmonic superposition principle. By studying the
associated near-horizon geometry, we construct interesting classes of AdS$_3$
vacua in type IIB and highlight their relation to the existing classifications
in the literature. Finally, we discuss their holographic properties.
|
Given any locally finitely piecewise affine homeomorphism $f$ of $\Omega
\subset \rn$ onto $\Delta \subset \rn$ in $W^{1,p}$, $1\leq p < \infty$, and any
$\epsilon >0$, we construct a smooth injective map $\tilde{f}$ such that
$\|f-\tilde{f}\|_{W^{1,p}(\Omega,\rn)} < \epsilon$.
|
For many relevant statistics of multivariate time series, no valid frequency
domain bootstrap procedures exist. This is mainly due to the fact that the
distribution of such statistics depends on the fourth-order moment structure of
the underlying process in nearly every scenario, except for some special cases
like Gaussian time series. In contrast to the univariate case, even additional
structural assumptions such as linearity of the multivariate process or a
standardization of the statistic of interest do not solve the problem. This
paper focuses on integrated periodogram statistics as well as functions thereof
and presents a new frequency domain bootstrap procedure for multivariate time
series, the multivariate frequency domain hybrid bootstrap (MFHB), to fill this
gap. Asymptotic validity of the MFHB procedure is established for general
classes of periodogram-based statistics and for stationary multivariate
processes satisfying rather weak dependence conditions. A simulation study is
carried out which compares the finite sample performance of the MFHB with that
of the moving block bootstrap.
|
Recent advancements in data-to-text generation largely take on the form of
neural end-to-end systems. Efforts have been dedicated to improving text
generation systems by changing the order of training samples in a process known
as curriculum learning. Past research on sequence-to-sequence learning showed
that curriculum learning helps to improve both the performance and convergence
speed. In this work, we apply the same idea to training samples consisting of
structured data and text pairs, where at each update the curriculum framework
selects training samples based on the model's competence.
Specifically, we experiment with various difficulty metrics and put forward a
soft edit distance metric for ranking training samples. Our benchmarks show
faster convergence, with training time reduced by 38.7% and performance boosted
by 4.84 BLEU.
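A hedged stand-in for such a difficulty metric (the paper's exact soft edit
distance may differ): length-normalized Levenshtein distance between the
linearized input and the reference text, combined with competence-based
filtering at each update.

```python
# Edit-distance-based difficulty scoring and competence-based filtering.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def difficulty(source: str, target: str) -> float:
    return levenshtein(source, target) / max(len(source), len(target))

def curriculum_batch(pairs, competence: float):
    # keep only samples no harder than the model's current competence in [0, 1]
    scored = sorted(pairs, key=lambda p: difficulty(*p))
    cutoff = max(1, int(competence * len(scored)))
    return scored[:cutoff]

# toy data-to-text pairs (E2E-style), for illustration
data = [("name[Blue Spice] food[Chinese]", "Blue Spice serves Chinese food."),
        ("name[Aromi] area[city centre]", "Aromi is in the city centre.")]
print(curriculum_batch(data, competence=0.5))
```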
|
Manufacturing of medical devices is strictly controlled by authorities, and
manufacturers must conform to the regulatory requirements of the region in
which a medical device is being marketed for use. In general, these
requirements make no distinction between the physical device, embedded software
running inside a physical device, or software that constitutes the device in
itself. As a result, standalone software with intended medical use is
considered to be a medical device. Consequently, its development must meet the
same requirements as the physical medical device manufacturing. This practice
creates a unique challenge for organizations developing medical software. In
this paper, we pinpoint a number of regulatory requirement mismatches between
physical medical devices and standalone medical device software. The view is
based on experiences from industry, from the development of all-software
medical devices as well as from defining the manufacturing process so that it
meets the regulatory requirements.
|
We report a massive quiescent galaxy at $z_{\rm
spec}=3.0922^{+0.008}_{-0.004}$ spectroscopically confirmed in a protocluster
in the SSA22 field by detecting the Balmer and Ca {\footnotesize II} absorption
features with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE)
on the Keck I telescope. This is the most distant quiescent galaxy confirmed in a
protocluster to date. We fit the optical to mid-infrared photometry and
spectrum simultaneously with spectral energy distribution (SED) models of
parametric and nonparametric star formation histories (SFH). Both models fit
the observed SED well and confirm that this object is a massive quiescent
galaxy with the stellar mass of $\log(\rm M_{\star}/M_{\odot}) =
11.26^{+0.03}_{-0.04}$ and $11.54^{+0.03}_{-0.00}$, and star formation rate of
$\rm SFR/M_{\odot}~yr^{-1} <0.3$ and $=0.01^{+0.03}_{-0.01}$ for parametric and
nonparametric models, respectively. The SFH from the former modeling is
described as an instantaneous starburst, while that of the latter is
longer-lived; both models agree on a sudden quenching of star formation
$\sim0.6$ Gyr ago. This massive quiescent galaxy is confirmed in
an extremely dense group of galaxies predicted as a progenitor of a brightest
cluster galaxy formed via multiple mergers in cosmological numerical
simulations. We also find three plausible [O III]$\lambda$5007 emitters at
$3.0791\leq z_{\rm spec}\leq3.0833$ serendipitously detected around the target.
Two of them, lying directly between the target and its nearest massive galaxy,
are possible evidence of an interaction. These findings suggest strong future
size and stellar mass evolution of this massive quiescent galaxy via mergers.
|
The paper contains a review of recent progress on the deformational
properties of smooth maps from compact surfaces $M$ to a one-dimensional
manifold $P$. It covers the description of the homotopy types of stabilizers
and orbits of a large class of smooth functions on surfaces obtained by the
author, E. Kudryavtseva, B. Feshchenko, I. Kuznietsova, Yu. Soroka, and A.
Kravchenko. We also present a new direct proof of the fact that for generic
Morse maps the connected components of their orbits are homotopy equivalent to
finite products of circles.
|
Complex systems, such as Artificial Intelligence (AI) systems, are composed
of many interrelated components. In order to represent these systems,
demonstrating the relations between components is essential. Perhaps because of
this, diagrams, as "icons of relation", are a prevalent medium for signifying
complex systems. Diagrams used to communicate AI system architectures are
currently extremely varied. The diversity in diagrammatic conceptual modelling
choices provides an opportunity to gain insight into the aspects which are
being prioritised for communication. In this philosophical exploration of AI
systems diagrams, we integrate theories of conceptual models, communication
theory, and semiotics. We discuss consequences of standardised diagrammatic
languages for AI systems, concluding that while we expect engineers
implementing systems to benefit from standards, researchers would have a larger
benefit from guidelines.
|
Optical spectrometers have propelled scientific and technological
advancements in a wide range of fields. While sophisticated systems with
excellent performance metrics are serving well in controlled laboratory
environments, many applications require systems that are portable, economical,
and robust to optical misalignment. Here, we propose and demonstrate a
spectrometer that uses a planar one-dimensional photonic crystal cavity as a
dispersive element and a reconstructive computational algorithm to extract
spectral information from spatial patterns. The simple fabrication and planar
architecture of the photonic crystal cavity render our spectrometry platform
economical and robust to optical misalignment. The reconstructive algorithm
allows miniaturization and portability. The intensity transmitted by the
photonic crystal cavity has a wavelength-dependent spatial profile. We generate
the spatial transmittance function of the system using the finite-difference
time-domain method and also estimate the dispersion relation. The transmittance
function serves as a transfer function in our reconstructive algorithm. We show
accurate estimation of various kinds of input spectra. We also show that the
spectral resolution of the system depends on the cavity linewidth that can be
improved by increasing the number of periodic layers in distributed Bragg
mirrors. Finally, we experimentally estimate the center wavelength and
linewidth of the spectrum of an unknown light emitting diode. The estimated
values are in good agreement with the values measured using a commercial
spectrometer.
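The reconstructive step can be sketched generically (the authors' solver and
regularization may differ): with the simulated transmittance matrix as a
transfer function, the input spectrum is recovered from the measured spatial
pattern by non-negative least squares.

```python
# Generic reconstruction: s = T @ x, recover x >= 0 from measured pattern s.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_pix, n_wl = 400, 120
T = rng.random((n_pix, n_wl))          # stand-in for the FDTD-derived transfer matrix
x_true = np.exp(-0.5 * ((np.arange(n_wl) - 60) / 6.0) ** 2)   # narrow test spectrum
s = T @ x_true + 1e-3 * rng.normal(size=n_pix)                # measured pattern + noise
x_hat, _ = nnls(T, s)                  # reconstructed spectrum
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```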
|
In this paper we study the density in the real line of oscillating sequences
of the form
$$ (g(k)\cdot F(k\alpha))_{k \in \mathbb{N}} ,$$ where $g$ is a positive
increasing function and $F$ a real continuous 1-periodic function. This extends
work by Berend, Boshernitzan and Kolesnik who established differential
properties on the function $F$ ensuring that the oscillating sequence is dense
modulo $1$.
More precisely, when $F$ has finitely many roots in $[0,1)$, we provide
necessary and also sufficient conditions for the oscillating sequence under
consideration to be dense in $\mathbb{R}$. All the results are stated in terms
of the Diophantine properties of $\alpha$, with the help of the theory of
continued fractions.
|
The noncentrosymmetric transition metal monopnictides NbP, TaP, NbAs and TaAs
are a family of Weyl semimetals in which pairs of protected linear crossings of
spin-resolved bands occur. These so-called Weyl nodes are characterized by
integer topological charges of opposite sign associated with singular points of
Berry curvature in momentum space. In such a system anomalous magnetoelectric
responses are predicted, which should only occur if the crossing points are
close to the Fermi level and enclosed by Fermi surface pockets penetrated by an
integer flux of Berry curvature, dubbed Weyl pockets. TaAs was shown to possess
Weyl pockets whereas TaP and NbP have trivial pockets enclosing zero net flux
of Berry curvature. Here, via measurements of the magnetic torque, resistivity
and magnetisation, we present a comprehensive quantum oscillation study of
NbAs, the last member of this family where the precise shape and nature of the
Fermi surface pockets is still unknown. We detect six distinct frequency
branches, two of which have not been observed before. A comparison to density
functional theory calculations suggests that the two largest pockets are
topologically trivial, whereas the low frequencies might stem from tiny Weyl
pockets. The enclosed Weyl nodes are within a few meV of the Fermi energy.
|
Deep neural networks (DNNs) have achieved great success in various machine
learning tasks. However, most existing powerful DNN models are computationally
expensive and memory demanding, hindering their deployment in devices with low
memory and computational resources or in applications with strict latency
requirements. Thus, several resource-adaptable or flexible approaches were
recently proposed that simultaneously train a big model and several
resource-specific sub-models. Inplace knowledge distillation (IPKD) became a
popular method to train those models and consists of distilling the knowledge
from a larger model (teacher) to all other sub-models (students). In this work,
a novel generic training method called IPKD with teacher assistant (IPKD-TA) is
introduced, where sub-models themselves become teacher assistants teaching
smaller sub-models. We evaluated the proposed IPKD-TA training method using two
state-of-the-art flexible models (MSDNet and Slimmable MobileNet-V1) with two
popular image classification benchmarks (CIFAR-10 and CIFAR-100). Our results
demonstrate that the IPKD-TA is on par with the existing state of the art and
improves upon it in most cases.
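The chained objective can be sketched as follows (loss weights and temperature
are illustrative, not the paper's exact recipe): each sub-model distills from
the next-larger sub-model, its teacher assistant, rather than every student
learning from the full model alone.

```python
# Schematic chained distillation loss: sub-model k teaches sub-model k+1.
import torch
import torch.nn.functional as F

def ipkd_ta_loss(logits_by_width, targets, T=3.0):
    # logits_by_width: list of logits from largest sub-model to smallest
    loss = F.cross_entropy(logits_by_width[0], targets)   # full model: hard labels
    for teacher, student in zip(logits_by_width[:-1], logits_by_width[1:]):
        soft_t = F.softmax(teacher.detach() / T, dim=-1)
        log_s = F.log_softmax(student / T, dim=-1)
        loss = loss + (T * T) * F.kl_div(log_s, soft_t, reduction="batchmean")
    return loss

widths = [torch.randn(8, 100, requires_grad=True) for _ in range(4)]  # 4 sub-models
loss = ipkd_ta_loss(widths, torch.randint(0, 100, (8,)))
loss.backward()
```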
|
We investigate the charge and heat electronic noise in a generic two-terminal
mesoscopic conductor in the absence of the corresponding charge and heat
currents. Despite these currents being zero, shot noise is generated in the
system. We show that, irrespective of the conductor's details and the specific
nonequilibrium conditions, the charge shot noise never exceeds its thermal
counterpart, thus establishing a general bound. Such a bound does not exist in
the case of heat noise, which reveals a fundamental difference between charge
and heat transport under zero-current conditions.
|
In optical communication systems, orthogonal frequency division multiplexing
(OFDM) is widely used to combat inter-symbol interference (ISI) caused by
multipath propagation. Optical systems which use intensity modulation and
direct detection (IM/DD) can only transmit real valued symbols, but the inverse
discrete Fourier transform (IDFT) or its computationally efficient form
inverse-fast Fourier transform (IFFT) required for the OFDM waveform
construction produces complex values. Hermitian symmetry is often used to
obtain real valued symbols. For this purpose, some trigonometric
transformations such as the discrete cosine transform (DCT) are also used;
however, these transformations can eliminate the ISI only under certain
conditions.
this paper, we propose a completely different method for the construction of
OFDM waveform with IFFT to obtain real valued symbols by combining the real and
imaginary parts (CRIP) of IFFT output electrically (E-CRIP) or optically
(O-CRIP). Analytical analysis and simulations are presented to show that,
compared to the Hermitian symmetric system, the proposed method slightly
increases the spectral efficiency, eliminates ISI, significantly reduces the
amount of computation needed, and does not affect the error performance. In
addition, the O-CRIP method is less affected by clipping noise that may occur
due to the imperfections of the transmitter front-ends.
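For contrast with the proposed combining, the sketch below shows the
conventional Hermitian-symmetry construction that forces a real-valued IFFT
output, together with the plain-IFFT starting point of a CRIP-style scheme;
the actual combining of the two real parts is specific to the paper and only
indicated schematically.

```python
# Hermitian-symmetry real-valued OFDM vs. a CRIP-style plain IFFT (numpy).
import numpy as np

rng = np.random.default_rng(0)
N = 16                                              # IFFT size

# Hermitian symmetry: X[k] = conj(X[N-k]); DC and Nyquist bins left empty
data = (rng.integers(0, 2, N // 2 - 1) * 2 - 1) \
     + 1j * (rng.integers(0, 2, N // 2 - 1) * 2 - 1)
X = np.zeros(N, dtype=complex)
X[1 : N // 2] = data
X[N // 2 + 1 :] = np.conj(data[::-1])
x_hs = np.fft.ifft(X)
assert np.allclose(x_hs.imag, 0)                    # real, but half the bins are redundant

# CRIP-style starting point: every bin carries data, no symmetry imposed
qam = (rng.integers(0, 2, N) * 2 - 1) + 1j * (rng.integers(0, 2, N) * 2 - 1)
x = np.fft.ifft(qam)
tx_real, tx_imag = x.real, x.imag                   # combined electrically (E-CRIP) or optically (O-CRIP)
```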
|
Techniques for training artificial neural networks (ANNs) and convolutional
neural networks (CNNs) using simulated dynamical electron diffraction patterns
are described. The premise is based on the following facts. First, given a
suitable crystal structure model and scattering potential, electron diffraction
patterns can be simulated accurately using dynamical diffraction theory.
Secondly, using simulated diffraction patterns as input, ANNs can be trained
for the determination of crystal structural properties, such as crystal
orientation and local strain. Further, by applying the trained ANNs to
four-dimensional diffraction datasets (4D-DD) collected using the scanning
electron nanodiffraction (SEND) or 4D scanning transmission electron microscopy
(4D-STEM) techniques, the crystal structural properties can be mapped at high
spatial resolution. Here, we demonstrate the ANN-enabled possibilities for the
analysis of crystal orientation and strain at high precision and benchmark the
performance of ANNs and CNNs by comparing with previous methods. A factor of
thirty improvement in angular resolution at 0.009 degrees (0.16 mrad) for
orientation mapping, sensitivity at 0.04% or less for strain mapping, and
improvements in computational performance are demonstrated.
|
The quark mass function is computed both by solving the quark propagator
Dyson-Schwinger equation and from lattice simulations implementing overlap and
Domain-Wall fermion actions for valence and sea quarks, respectively. The
results are confronted and seen to produce a very congruent picture, showing a
remarkable agreement for the explored range of current-quark masses. The
effective running interaction is based on a process-independent charge rooted
in a particular truncation of the Dyson-Schwinger equations in the gauge
sector, thus establishing a link from there to the quark sector and inspiring a
correlation between the emergence of gluon and hadron masses.
|
A stochastic approach is implemented to address the problem of a marine
structure exposed to water wave impacts. The focus is on (i) the average
frequency of wave impacts, and (ii) the related probability distribution of
impact kinematic variables. The wave field is assumed to be Gaussian. The
seakeeping motions of the considered body are taken into account in the
analysis. The coupling of the stochastic model with a water entry model is
demonstrated through the case study of a foil exposed to wave impacts.
|
A classical model for sources and sinks in a two-dimensional perfect
incompressible fluid occupying a bounded domain dates back to Yudovich in 1966.
In this model, on the one hand, the normal component of the fluid velocity is
prescribed on the boundary and is nonzero on an open subset of the boundary,
corresponding either to sources (where the flow is incoming) or to sinks (where
the flow is outgoing). On the other hand, the vorticity of the fluid that is
entering the domain from the sources is prescribed.
In this paper we investigate the existence of weak solutions to this system
by relying on \textit{a priori} bounds of the vorticity, which satisfies a
transport equation associated with the fluid velocity vector field. Our results
cover the case where the vorticity has $L^p$ integrability in space, with $p$
in $[1,+\infty]$, and prove the existence of solutions obtained by
compactness methods from viscous approximations. More precisely we prove the
existence of solutions which satisfy the vorticity equation in the
distributional sense in the case where $p >\frac43$, in the renormalized sense
in the case where $p >1$, and in a symmetrized sense in the case where $p =1$.
|
We report the detection of a massive neutral gas outflow in the z=2.09
gravitationally lensed Dusty Star-Forming Galaxy HATLASJ085358.9+015537
(G09v1.40), seen in absorption with the OH+(1_1-1_0) transition using spatially
resolved (0.5"x0.4") Atacama Large Millimeter/submillimeter Array (ALMA)
observations. The blueshifted OH+ line is observed simultaneously with the
CO(9-8) emission line and underlying dust continuum. These data are
complemented by high angular resolution (0.17"x0.13") ALMA observations of
CH+(1-0) and underlying dust continuum, and Keck 2.2 micron imaging tracing the
stellar emission. The neutral outflow, dust, dense molecular gas and stars all
show spatial offsets from each other. The total atomic gas mass of the observed
outflow is 6.7x10^9 M_sun, >25% as massive as the gas mass of the galaxy. We
find that a conical outflow geometry best describes the OH+ kinematics and
morphology and derive deprojected outflow properties as functions of possible
inclination (0.38 deg-64 deg). The neutral gas mass outflow rate is between
83-25400 M_sun/yr, exceeding the star formation rate (788+/-300 M_sun/yr) if
the inclination is >3.6 deg (mass-loading factor = 0.3-4.7). Kinetic energy and
momentum fluxes span 4.4-290x10^9 L_sun and 0.1-3.7x10^37 dyne, respectively
(energy-loading factor = 0.013-16), indicating that the feedback mechanisms
required to drive the outflow depend on the inclination assumed. We derive a
gas depletion time between 1 and 29 Myr, but find that the neutral outflow is
likely to remain bound to the galaxy, unless the inclination is small, and may
be re-accreted if additional feedback processes do not occur.
|
The grain boundary (GB) microchemistry and precipitation behaviour in
high-strength Al-Zn-Mg-Cu alloys have an important influence on their mechanical
and electrochemical properties. Simulation of the GB segregation,
precipitation, and solute distribution in these alloys requires an accurate
description of the thermodynamics and kinetics of this multi-component system.
CALPHAD databases have been successfully developed for equilibrium
thermodynamic calculations in complex multi-component systems, and in recent
years have been combined with diffusion simulations. In this work, we have
directly incorporated a CALPHAD database into a phase-field framework, to
simulate, with high fidelity, the complex kinetics of the non-equilibrium GB
microstructures that develop in these important commercial alloys during heat
treatment. In particular, the influence of GB solute segregation, GB diffusion,
precipitate number density, and far-field matrix composition, on the growth of
a population of GB precipitates, was systematically investigated in a model
Al-Zn-Mg-Cu alloy of near AA7050 composition. The simulation results were
compared with scanning transmission electron microscopy and atom probe
tomography characterisation of alloys of similar composition, with good
agreement.
|
State governments in the U.S. have been facing difficult decisions involving
tradeoffs between economic and health-related outcomes during the COVID-19
pandemic. Despite evidence of the effectiveness of government-mandated
restrictions mitigating the spread of contagion, these orders are stigmatized
due to undesirable economic consequences. This tradeoff resulted in state
governments employing mandates in widely different ways. We compare the
different policies states implemented during periods of restriction (lockdown)
and reopening with indicators of COVID-19 spread and consumer card spending at
each state during the first wave of the pandemic in the U.S. between March and
August 2020. We find that while some states enacted reopening decisions when
the incidence rate of COVID-19 was minimal or sustained in its relative
decline, other states relaxed socioeconomic restrictions near their highest
incidence and prevalence rates experienced so far. Nevertheless, all states
experienced similar trends in consumer card spending recovery, which was
strongly correlated with reopening policies following the lockdowns and
relatively independent from COVID-19 incidence rates at the time. Our findings
suggest that consumer card spending patterns can be attributed to government
mandates rather than COVID-19 incidence in the states. We estimate that the
recovery in states that reopened in late April exceeded that in states that did
not reopen in the same period by 15% for consumer card spending and 18% for
spending by high-income households. This result highlights the important role
of state policies in minimizing health impacts while promoting economic
recovery, and helps in planning effective interventions in subsequent waves and
immunization efforts.
|
In the framework of the Einstein-Dirac-axion-aether theory we consider the
quartet of self-interacting cosmic fields, which includes the dynamic aether,
presented by the unit timelike vector field, the axionic dark mater, described
by the pseudoscalar field, the spinor field associated with fermion particles,
and the gravity field. The key, associated with the mechanism of
self-interaction, is installed into the modified periodic potential of the
pseudoscalar (axion) field constructed on the base of a guiding function, which
depends on one invariant, one pseudo-invariant and two cross-invariants
containing the spinor and vector fields. The total system of the field
equations related to the isotropic homogeneous cosmological model is solved; we
have found the exact solutions for the guiding function for three cases:
nonzero, vanishing and critical values of the cosmological constant. Based on
these solutions, we obtained the expressions for the effective mass of spinor
particles, interacting with the axionic dark matter and dynamic aether. This
effective mass is shown to bear imprints of the cosmological epoch and of the
state of the cosmic dark fluid in that epoch.
|
Continuous, automated surveillance systems that incorporate machine learning
models are becoming increasingly more common in healthcare environments. These
models can capture temporally dependent changes across multiple patient
variables and can enhance a clinician's situational awareness by providing an
early warning alarm of an impending adverse event such as sepsis. However, most
commonly used methods, e.g., XGBoost, fail to provide an interpretable
mechanism for understanding why a model produced a sepsis alarm at a given
time. The black-box nature of many models is a severe limitation as it prevents
clinicians from independently corroborating those physiologic features that
have contributed to the sepsis alarm. To overcome this limitation, we propose a
generalized linear model (GLM) approach to fit a Granger causal graph based on
the physiology of several major sepsis-associated derangements (SADs). We adopt
a recently developed stochastic monotone variational inequality-based estimator
coupled with forward feature selection to learn the graph structure from
both continuous and discrete-valued as well as regularly and irregularly
sampled time series. Most importantly, we develop a non-asymptotic upper bound
on the estimation error for any monotone link function in the GLM. We conduct
real-data experiments and demonstrate that our proposed method can achieve
comparable performance to popular and powerful prediction methods such as
XGBoost while simultaneously maintaining a high level of interpretability.
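A simplified sketch of GLM-based Granger graph fitting (off-the-shelf
$\ell_1$-regularized regression stands in here for the paper's monotone
variational inequality estimator and forward feature selection): one model per
target series on lagged values of all series, with surviving lag coefficients
defining directed edges.

```python
# Lasso-based Granger graph: edge i -> j if any lag of series i survives in
# the model for series j (includes self-loops from autocorrelation).
import numpy as np
from sklearn.linear_model import Lasso

def granger_graph(X, lag=3, alpha=0.05):
    # X: (T, p) multivariate time series; returns boolean (p, p) adjacency
    T, p = X.shape
    Z = np.hstack([X[lag - k - 1 : T - k - 1] for k in range(lag)])  # lagged design
    A = np.zeros((p, p))
    for j in range(p):
        coef = Lasso(alpha=alpha).fit(Z, X[lag:, j]).coef_.reshape(lag, p)
        A[:, j] = np.abs(coef).max(axis=0)     # strongest lag effect of each source
    return A > 0

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 4))
for t in range(3, 500):
    x[t, 1] += 0.8 * x[t - 1, 0]               # ground-truth edge 0 -> 1
print(granger_graph(x))
```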
|
Combinatorial optimization is a well-established area in operations research
and computer science. Until recently, its methods have focused on solving
problem instances in isolation, ignoring the fact that they often stem from
related data distributions in practice. However, recent years have seen a surge
of interest in using machine learning, especially graph neural networks (GNNs),
as a key building block for combinatorial tasks, either directly as solvers or
by enhancing exact solvers. The inductive bias of GNNs effectively encodes
combinatorial and relational input due to their invariance to permutations and
awareness of input sparsity. This paper presents a conceptual review of recent
key advancements in this emerging field, aiming at researchers in both
optimization and machine learning.
|