In this paper, blow-up solutions of autonomous ordinary differential
equations (ODEs) which are unstable under perturbations of initial conditions
are studied. Combining dynamical systems machinery (e.g. phase space
compactifications, time-scale desingularizations of vector fields) with tools
from computer-assisted proofs (e.g. rigorous integrators, parameterization
method for invariant manifolds), these unstable blow-up solutions are obtained
as trajectories on stable manifolds of hyperbolic (saddle) equilibria at
infinity. In this process, important features are obtained: smooth dependence
of blow-up times on initial points near blow-up, level set distribution of
blow-up times, singular behavior of blow-up times on unstable blow-up
solutions, organization of the phase space via separatrices (stable manifolds).
In particular, we show that unstable blow-up solutions themselves, together
with the globally defined solutions connected by those blow-up solutions, can
separate initial conditions into two regions where solution trajectories either
remain globally bounded or blow up, no matter how large the initial points are.
|
An optimal finite-time process drives a given initial distribution to a given
final one in a given time at the lowest cost as quantified by total entropy
production. We prove that for systems with discrete states this optimal process
involves non-conservative driving, i.e., a genuine driving affinity, in
contrast to the case of systems with continuous states. In a multicyclic
network, the optimal driving affinity is bounded by the number of states within
each cycle. If the driving affects forward and backward rates
non-symmetrically, the bound additionally depends on a structural parameter
characterizing this asymmetry.
|
This paper describes the submissions by team HWR to the Dravidian Language
Identification (DLI) shared task organized at the VarDial 2021 workshop. The DLI
training set includes 16,674 YouTube comments written in Roman script
containing code-mixed text with English and one of the three South Dravidian
languages: Kannada, Malayalam, and Tamil. We submitted results generated using
two models: a Naive Bayes classifier with adaptive language models, which has
been shown to obtain competitive performance in many language and dialect
identification tasks, and a transformer-based model which is widely regarded as
the state-of-the-art in a number of NLP tasks. Our first submission was sent in
the closed submission track using only the training set provided by the shared
task organisers, whereas the second submission is considered to be open as it
used a pretrained model trained with external data. Our team attained a shared
second place in the shared task with the submission based on Naive Bayes.
Our results reinforce the idea that deep learning methods are not as
competitive in language identification related tasks as they are in many other
text classification tasks.
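As a rough illustration of the first approach, a character n-gram Naive Bayes classifier can be put together in a few lines; the sketch below is a generic baseline rather than the team's adaptive-language-model system, and the toy training sentences and labels are invented for illustration.

```python
# Minimal character n-gram Naive Bayes language identifier (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy romanized code-mixed examples; the real training set has 16,674 comments.
train_texts = ["naanu nale bartini", "njan nale varum", "naan naalai varuven"]
train_labels = ["kannada", "malayalam", "tamil"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 4)),  # char n-grams
    MultinomialNB(),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["nale bartini"]))  # e.g. ['kannada']
```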
|
We empirically determined the speed of light by measuring the variation in
longitudinal mode frequencies, or the beat frequencies, of an
adjustable-length, open-cavity helium-neon laser as a function of its cavity
length. The TEM$_{00}$ mode lasing output of the laser was analyzed using a
fast frequency photodiode detector and a radio frequency spectrum analyzer. A
Fabry-Perot interferometer was used to monitor the intensity of the
longitudinal modes, and we found that the phenomena of frequency pushing and
pulling had little effect on the beat frequency measurements. By plotting the
reciprocal of the beat frequency as a function of the change in cavity length
and applying linear weighted least squares regression, the speed of light was
found to be $(2.997 \pm 0.003) \times 10^{8}$ m\,s$^{-1}$. This value is
$0.3\,\sigma$ away from the defined value of the speed of light and is accurate
to 1 part in 3200.
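For reference, adjacent longitudinal modes of a cavity of length $L$ beat at $\Delta\nu = c/(2L)$, so the reciprocal beat frequency is linear in cavity length with slope $2/c$. The sketch below illustrates the weighted least-squares extraction of $c$ under that relation; the data points and uncertainties are invented placeholders, not the measured values.

```python
# Weighted least-squares fit of 1/beat-frequency vs. cavity-length change.
# Since beat = c / (2L), the slope of 1/beat against L is 2/c (sketch only).
import numpy as np

dL = np.array([0.00, 0.05, 0.10, 0.15, 0.20])                    # metres
beat = np.array([4.283e8, 3.747e8, 3.331e8, 2.998e8, 2.725e8])   # Hz (made up)
sigma = 1e-12 * np.ones_like(dL)                                 # unc. on 1/beat (s)

y = 1.0 / beat
(b, a), cov = np.polyfit(dL, y, 1, w=1.0 / sigma, cov=True)      # slope b = 2/c
c = 2.0 / b
c_err = (2.0 / b**2) * np.sqrt(cov[0, 0])
print(f"c = ({c:.4e} +/- {c_err:.1e}) m/s")
```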
|
In this paper, we consider the estimation of a low Tucker rank tensor from a
number of noisy linear measurements. The general problem covers many specific
examples arising from applications, including tensor regression, tensor
completion, and tensor PCA/SVD. We propose a Riemannian Gauss-Newton (RGN)
method with fast implementations for low Tucker rank tensor estimation.
Different from the generic (super)linear convergence guarantee of RGN in the
literature, we prove the first quadratic convergence guarantee of RGN for
low-rank tensor estimation under some mild conditions. We also provide a
deterministic estimation error lower bound that matches the upper bound and
demonstrates the statistical optimality of RGN. The merit of RGN is illustrated
through two machine learning applications: tensor regression and tensor SVD.
Finally, we provide simulation results to corroborate our theoretical
findings.
|
The outbreak of a novel coronavirus causing severe acute respiratory syndrome
in December 2019 has escalated into a worldwide pandemic. In this work, we
propose a compartmental model to describe the dynamics of transmission of
infection and use it to obtain the optimal vaccination control. The model
accounts for the various stages of vaccination, and the optimisation is
focused on minimising infections to protect the population and relieve the
healthcare system. As a case study we selected the Republic of Ireland. We use
data provided by Ireland's COVID-19 Data-Hub and simulate the evolution of the
pandemic with and without vaccination in place for two different scenarios,
one representative of a national lockdown situation and the other indicating
looser restrictions in place. One of the main findings of our work is that the
optimal approach would involve a vaccination programme where the older
population is vaccinated in larger numbers earlier while simultaneously part of
the younger population also gets vaccinated to lower the risk of transmission
between groups.
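As a rough sketch of how a vaccination control enters such a compartmental model, consider the toy SIRV system below; the paper's model has more vaccination stages and an optimised control, whereas here the rates, the constant vaccination rate u(t), and the initial conditions are all illustrative assumptions.

```python
# Toy SIRV model with a vaccination control u(t) (illustrative parameters).
from scipy.integrate import solve_ivp

beta, gamma = 0.25, 0.1          # transmission and recovery rates (assumed)

def u(t):                        # vaccination rate; a constant stand-in for
    return 0.005                 # the optimised control of the paper

def rhs(t, y):
    S, I, R, V = y
    N = S + I + R + V
    dS = -beta * S * I / N - u(t) * S
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    dV = u(t) * S
    return [dS, dI, dR, dV]

sol = solve_ivp(rhs, (0, 300), [4.9e6, 1e3, 0.0, 0.0])  # ~Ireland's population
print("peak infections:", sol.y[1].max())
```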
|
A deep neural network (DNN) generally takes thousands of iterations to optimize
via gradient descent and thus converges slowly. In addition, softmax, as a
decision layer, may ignore the distribution information of the data during
classification. To tackle these problems, we propose a novel manifold neural
network based on non-gradient optimization, i.e., on closed-form solutions.
Since the activation function is generally invertible, we reconstruct the
network via forward ridge regression and low-rank backward approximation, which
achieves rapid convergence. Moreover, by unifying a flexible Stiefel manifold
with an adaptive support vector machine, we devise a novel decision layer that
efficiently fits the manifold structure of the data and the label information.
Consequently, a joint non-gradient optimization method is designed to generate
the network with closed-form results. Finally, extensive experiments validate
the superior performance of the model.
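A minimal sketch of the closed-form idea is given below: when the activation is invertible, a layer's weights can be fit by ridge regression against the inverse-activated target instead of by gradient descent. The leaky-ReLU choice and all names here are our assumptions, not the paper's exact construction.

```python
# Fitting one layer in closed form via ridge regression (sketch, assumed setup).
import numpy as np

def leaky_relu(Z, a=0.1):
    return np.where(Z > 0, Z, a * Z)

def inv_leaky_relu(H, a=0.1):                  # leaky ReLU is invertible
    return np.where(H > 0, H, H / a)

def fit_layer_ridge(X, H_target, lam=1e-2):
    """W = argmin ||X W - f^{-1}(H_target)||^2 + lam ||W||^2, no gradients."""
    T = inv_leaky_relu(H_target)               # invert the activation
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 16))
H_target = leaky_relu(X @ rng.normal(size=(16, 8)))
W = fit_layer_ridge(X, H_target)
print(np.abs(leaky_relu(X @ W) - H_target).max())   # near-zero reconstruction
```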
|
In this work we introduce the graph-theoretic notion of mendability: for each
locally checkable graph problem we can define its mending radius, which
captures the idea of how far one needs to modify a partial solution in order to
"patch a hole."
We explore how mendability is connected to the existence of efficient
algorithms, especially in distributed, parallel, and fault-tolerant settings.
It is easy to see that $O(1)$-mendable problems are also solvable in $O(\log^*
n)$ rounds in the LOCAL model of distributed computing. One of the surprises is
that in paths and cycles, a converse also holds in the following sense: if a
problem $\Pi$ can be solved in $O(\log^* n)$ rounds, there is always a restriction
$\Pi' \subseteq \Pi$ that is still efficiently solvable but that is also
$O(1)$-mendable.
We also explore the structure of the landscape of mendability. For example,
we show that in trees, the mending radius of any locally checkable problem is
$O(1)$, $\Theta(\log n)$, or $\Theta(n)$, while in general graphs the structure
is much more diverse.
|
We discuss various aspects of the positive kernel method of quantization of
one-parameter groups $\tau_t \in \mbox{Aut}(P,\vartheta)$ of automorphisms of a
$G$-principal bundle $P(G,\pi,M)$ with a fixed connection form $\vartheta$ on
its total space $P$. We show that the generator $\hat{F}$ of the unitary flow
$U_t = e^{it \hat{F}}$ being the quantization of $\tau_t $ is realized by a
generalized Kirillov-Kostant-Souriau operator whose domain consists of sections
of some vector bundle over $M$, which are defined by a suitable positive kernel.
This method of quantization applied to the case when $G=GL(N,\mathbb{C})$ and
$M$ is a non-compact Riemann surface leads to the quantization of an arbitrary
holomorphic flow $\tau_t^{hol} \in \mbox{Aut}(P,\vartheta)$. For the above
case, we present the integral decompositions of the positive kernels on
$P\times P$ invariant with respect to the flows $\tau_t^{hol}$ in terms of
spectral measure of $\hat{F}$. These decompositions generalize the ones given
by Bochner's theorem for positive kernels on $\mathbb{C} \times \mathbb{C}$
invariant with respect to one-parameter groups of translations of the complex
plane.
|
Robots have to face challenging perceptual settings, including changes in
viewpoint, lighting, and background. Current simulated reinforcement learning
(RL) benchmarks such as DM Control provide visual input without such
complexity, which limits the transfer of well-performing methods to the real
world. In this paper, we extend DM Control with three kinds of visual
distractions (variations in background, color, and camera pose) to produce a
new challenging benchmark for vision-based control, and we analyze
state-of-the-art RL algorithms in these settings. Our experiments show that current RL
methods for vision-based control perform poorly under distractions, and that
their performance decreases with increasing distraction complexity, showing
that new methods are needed to cope with the visual complexities of the real
world. We also find that combinations of multiple distraction types are more
difficult than would be expected from their individual effects alone.
|
In this paper, we propose two novel approaches for hypergraph comparison. The
first approach transforms the hypergraph into a graph representation for use
with standard graph dissimilarity measures. The second approach exploits the
mathematics of tensors to intrinsically capture multi-way relations. For each
approach, we present measures that assess hypergraph dissimilarity at a
specific scale or provide a more holistic multi-scale comparison. We test these
measures on synthetic hypergraphs and apply them to biological datasets.
|
The notion of topological equivalence plays an essential role in the study of
dynamical systems of flows. However, it is inherently difficult to generalize
this concept to systems without well-posedness in the sense of Hadamard. In
this study, we formulate a notion of "topological equivalence" between such
systems based on the axiomatic theory of topological dynamics proposed by
Yorke, and discuss its relation with the usual definition. During this process,
we generalize Yorke's theory to the action of topological groups.
|
In this paper, we study an infeasible interior-point method for linear
optimization with full-Newton step. The introduced method uses an algebraic
equivalent transformation on the centering equation of the system which defines
the central path. We prove that the method finds an $\varepsilon$-optimal
solution of the underlying problem in polynomial time.
|
We describe our work on information extraction in medical documents written
in German, especially detecting negations using an architecture based on the
UIMA pipeline. Building on our previous work on software modules covering
medical concepts like diagnoses, examinations, etc., we employ a version of the
NegEx regular expression algorithm with a large set of triggers as a baseline. We
show how a significantly smaller trigger set is sufficient to achieve similar
results, in order to reduce adaptation times to new text types. We elaborate on
the question whether dependency parsing (based on the Stanford CoreNLP model)
is a good alternative and describe the potentials and shortcomings of both
approaches.
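A minimal sketch of a NegEx-style trigger matcher is shown below; the tiny German trigger list and the token-window heuristic are illustrative stand-ins for the system's much larger trigger set.

```python
# NegEx-style negation detection with a few German triggers (sketch only).
import re

NEG_TRIGGERS = [r"kein[e]?[rsn]?", r"ohne", r"nicht", r"ausgeschlossen"]
TRIGGER_RE = re.compile(r"\b(" + "|".join(NEG_TRIGGERS) + r")\b", re.IGNORECASE)

def is_negated(sentence: str, concept: str, window: int = 5) -> bool:
    """A concept counts as negated if a trigger occurs within a token window."""
    tokens = sentence.lower().split()
    if concept.lower() not in tokens:
        return False
    idx = tokens.index(concept.lower())
    context = " ".join(tokens[max(0, idx - window):idx + window + 1])
    return bool(TRIGGER_RE.search(context))

print(is_negated("Kein Hinweis auf Pneumonie .", "Pneumonie"))   # True
```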
|
In a recent publication, a procedure was developed to derive completely gauge
invariant models from general Lagrangian densities with derivatives of order
$N$ and tensor potentials of rank $M$. This procedure was then
used to show that unique models follow for each order, namely classical
electrodynamics for $N = M = 1$ and linearized Gauss-Bonnet gravity for $N = M
= 2$. In this article, the nature of the connection between these two well
explored physical models is further investigated by means of an additional
common property; a complete dual formulation. First we give a review of
Gauss-Bonnet gravity and the dual formulation of classical electrodynamics. The
dual formulation of linearized Gauss-Bonnet gravity is then developed. It is
shown that the dual formulation of linearized Gauss-Bonnet gravity is analogous
to the homogeneous half of Maxwell's theory; both have equations of motion
corresponding to the (second) Bianchi identity, built from the dual form of
their respective field strength tensors. In order to have a dually symmetric
counterpart analogous to the non-homogeneous half of Maxwell's theory, the first
invariant derived from the procedure in $N = M = 2$ can be introduced. The
complete gauge invariance of a model with respect to Noether's first theorem,
and not just the equation of motion, is a necessary condition for this dual
formulation. We show that this result can be generalized to the higher spin
gauge theories, where the spin-$n$ curvature tensors for all $N = M = n$ are
the field strength tensors for each $n$. These completely gauge invariant
models correspond to the Maxwell-like higher spin gauge theories whose
equations of motion have been well explored in the literature.
|
"For how many days during the past 30 days was your mental health not good?"
The responses to this question measure self-reported mental health and can be
linked to important covariates in the National Health and Nutrition Examination
Survey (NHANES). However, these count variables present major distributional
challenges: the data are overdispersed, zero-inflated, bounded by 30, and
heaped in five- and seven-day increments. To meet these challenges, we design a
semiparametric estimation and inference framework for count data regression.
The data-generating process is defined by simultaneously transforming and
rounding (STAR) a latent Gaussian regression model. The transformation is
estimated nonparametrically and the rounding operator ensures the correct
support for the discrete and bounded data. Maximum likelihood estimators are
computed using an EM algorithm that is compatible with any continuous data
model estimable by least squares. STAR regression includes asymptotic
hypothesis testing and confidence intervals, variable selection via information
criteria, and customized diagnostics. Simulation studies validate the utility
of this framework. STAR is deployed to study the factors associated with
self-reported mental health and demonstrates substantial improvements in
goodness-of-fit compared to existing count data regression models.
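A minimal sketch of the STAR data-generating process is shown below: a latent Gaussian regression is transformed and then rounded onto the support {0, ..., 30}. The log1p transformation stands in for the nonparametrically estimated one, and all coefficients are invented.

```python
# Simulating STAR-type data: transform + round a latent Gaussian regression.
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.normal(size=(n, p))
beta = np.array([0.8, -0.5, 0.3])              # invented coefficients

z = X @ beta + rng.normal(size=n)              # latent Gaussian regression
g_inv = np.expm1                               # inverse of g(y) = log1p(y)
y = np.clip(np.floor(g_inv(z)), 0, 30)         # rounding gives correct support
print(np.bincount(y.astype(int), minlength=31)[:8])   # zero-inflated counts
```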
|
For a hyperplane arrangement in a real vector space, the coefficients of its
Poincar\'{e} polynomial have many interpretations. An interesting one is
provided by the Varchenko-Gel'fand ring, which is the ring of functions from
the chambers of the arrangement to the integers with pointwise addition and
multiplication. Varchenko and Gel'fand gave a simple presentation for this
ring, along with a filtration and associated graded ring whose Hilbert series
is the Poincar\'{e} polynomial. We generalize these results to cones defined by
intersections of halfspaces of some of the hyperplanes and prove a novel result
for the Varchenko-Gel'fand ring of an arrangement: when the arrangement is
supersolvable the associated graded ring of the arrangement is Koszul.
|
Compared with twisted bilayer graphene, twisted double bilayer graphene
(TDBG) provides another important platform to realize the moir\'e flat bands.
In this paper, we first calculate the valley Chern number phase diagram of TDBG
in the parameter space spanned by the twist angle and the interlayer electric
potential. To include the effects of interactions, we then phenomenologically
introduce the spin-splitting and valley-splitting. We find that when the valley
splitting is larger than the bandwidth of the first conduction band so that a
gap is opened and the spin splitting is relatively weak, the orbital Chern
insulator emerges at half-filling, associated with a large orbital
magnetization (OM). Further calculations suggest that there is no sign reversal
of the OM when the Fermi energy goes from the bottom to the top of the
half-filling gap, as the OM remains negative in both AB-AB stacking and AB-BA
stacking. The implications of our results for the ongoing experiments are also
discussed.
|
Let $\mathcal{H}_{d,g,r}$ be the Hilbert scheme parametrizing smooth
irreducible and non-degenerate curves of degree $d$ and genus $g$ in
$\mathbb{P}^r.$ We denote by $\mathcal{H}^\mathcal{L}_{d,g,r}$ the union of
those components of $\mathcal{H}_{d,g,r}$ whose general element is linearly
normal. In this article we show that $\mathcal{H}^\mathcal{L}_{d,g,r}$ ($d\ge
g+r-3$) is non-empty in a certain optimal range of triples $(d,g,r)$ and is
empty outside the range. This settles the existence (or non-emptiness if one
prefers) of the Hilbert scheme $\mathcal{H}^\mathcal{L}_{d,g,r}$ of linearly
normal curves of degree $d$ and genus $g$ in $\mathbb{P}^r$ for $g+r-3\le d\le
g+r$, $r\ge 3$. We also determine all the triples $(d,g,r)$ with $g+r-3\le d\le
g+r$ for which $\mathcal{H}^\mathcal{L}_{d,g,r}$ is reducible (or irreducible).
|
Continuous-time quantum walks (CTQWs) provide a valuable model for quantum
transport, universal quantum computation and quantum spatial search, among
others. Recently, the empowering role of new degrees of freedom in the
Hamiltonian generator of CTQWs, which are the complex phases along the loops of
the underlying graph, was acknowledged for its interest in optimizing or
suppressing transport on specific topologies. We argue that the
quantum-classical distance, a figure of merit introduced to capture the
difference in dynamics between a CTQW and its classical, stochastic
counterpart, guides the optimization of the Hamiltonian parameters. It does so
both for achieving better quantum transport on cycle graphs and for spatial
search at the quantum speed limit, without an oracle, on complete graphs; the
latter also implies fast uniform mixing. We compare the variations of this quantity with
the 1-norm of coherence and the Inverse Participation Ratio, showing that the
quantum-classical distance is linked to both, but in a topology-dependent
relation, which is key to spot the most interesting quantum evolution in each
case.
|
Expanding visual categorization into a novel domain without the need for extra
annotation has been a long-term interest for multimedia intelligence.
Previously, this challenge has been approached by unsupervised domain
adaptation (UDA). Given labeled data from a source domain and unlabeled data
from a target domain, UDA seeks a deep representation that is both
discriminative and domain-invariant. While UDA focuses on the target domain, we
argue that the performance on both source and target domains matters, since in
practice it is unknown which domain a test example comes from. In this paper we
extend UDA by proposing a new task called unsupervised domain expansion (UDE),
which aims to adapt a deep model for the target domain with its unlabeled data,
meanwhile maintaining the model's performance on the source domain. We propose
Knowledge Distillation Domain Expansion (KDDE) as a general method for the UDE
task. Its domain-adaptation module can be instantiated with any existing model.
We develop a knowledge distillation based learning mechanism, enabling KDDE to
optimize a single objective wherein the source and target domains are equally
treated. Extensive experiments on two major benchmarks, i.e., Office-Home and
DomainNet, show that KDDE compares favorably against four competitive
baselines, i.e., DDC, DANN, DAAN, and CDAN, for both UDA and UDE tasks. Our
study also reveals that the current UDA models improve their performance on the
target domain at the cost of noticeable performance loss on the source domain.
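A minimal sketch of the distillation objective behind this idea is given below: a single student mimics the source model on source images and the domain-adapted model on target images, so both domains enter one objective symmetrically. The equal weighting and the temperature are our assumptions, not the paper's reported settings.

```python
# Symmetric knowledge-distillation loss over source and target batches (sketch).
import torch
import torch.nn.functional as F

def kdde_loss(s_src, t_src, s_tgt, t_tgt, T=2.0):
    """Distill the source teacher on source logits and the adapted teacher
    on target logits, weighting both domains equally (our formulation)."""
    def kd(s, t):
        return F.kl_div(F.log_softmax(s / T, dim=1),
                        F.softmax(t / T, dim=1),
                        reduction="batchmean") * T * T
    return 0.5 * kd(s_src, t_src) + 0.5 * kd(s_tgt, t_tgt)

s_src, t_src = torch.randn(8, 65), torch.randn(8, 65)   # Office-Home: 65 classes
s_tgt, t_tgt = torch.randn(8, 65), torch.randn(8, 65)
print(kdde_loss(s_src, t_src, s_tgt, t_tgt))
```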
|
Autonomous drone swarms are a burgeoning technology with significant
applications in the field of mapping, inspection, transportation and
monitoring. To complete a task, each drone has to accomplish a sub-goal within
the context of the overall task at hand and navigate through the environment by
avoiding collision with obstacles and with other agents in the environment. In
this work, we choose the task of optimal coverage of an environment with drone
swarms, where global knowledge of the goal states and their positions is
available but that of the obstacles is not. The drones have to choose the Points of
Interest (PoI) present in the environment to visit, along with the order to be
visited to ensure fast coverage. We model this task in a simulation and use an
agent-oriented approach to solve the problem. We evaluate different policy
networks trained with reinforcement learning algorithms based on their
effectiveness, i.e. the time taken to map the area, and efficiency, i.e. their
computational requirements. We couple the task assignment with path planning in
a unique way for performing collision avoidance during navigation, and compare
a grid-based global planning algorithm, i.e. Wavefront, with a gradient-based
local planning algorithm, i.e. Potential Field. We also evaluate the Potential
Field planning algorithm with different cost functions, propose a method to
adaptively modify the velocity of the drone when using the Huber loss function
to perform collision avoidance and observe its effect on the trajectory of the
drones. We demonstrate our experiments in 2D and 3D simulations.
|
We present a new suite of mock galaxy catalogs mimicking the low-redshift
Universe, based on an updated halo occupation distribution (HOD) model and a
scaling relation between optical properties and the neutral hydrogen (HI)
content of galaxies. Our algorithm is constrained by observations of the
luminosity function and luminosity- and colour-dependent clustering of SDSS
galaxies, as well as the HI mass function and HI-dependent clustering of
massive HI-selected galaxies in the ALFALFA survey. Mock central and satellite
galaxies with realistic values of $r$-band luminosity, $g-r$ and $u-r$ colour,
stellar mass and HI mass are populated in an $N$-body simulation, inheriting a
number of properties of the density and tidal environment of their host halos.
The host halo of each central galaxy is also `baryonified' with realistic
spatial distributions of stars as well as hot and cold gas, along with the
corresponding rotation curve. Our default HOD assumes that galaxy properties
are a function of group halo mass alone, and can optionally include effects
such as galactic conformity and colour-dependent galaxy assembly bias. The
mocks predict the relation between the stellar mass and HI mass of massive HI
galaxies, as well as the 2-point cross-correlation function of spatially
co-located optical and HI-selected samples. They enable novel null tests for
galaxy assembly bias, provide predictions for the HI velocity width function,
and clarify the origin and universality of the radial acceleration relation in
the $\Lambda$CDM framework.
|
Many quantum mechanical experiments can be viewed as multi-round interactive
protocols between known quantum circuits and an unknown quantum process. Fully
quantum "coherent" access to the unknown process is known to provide an
advantage in many discrimination tasks compared to when only incoherent access
is permitted, but it is unclear if this advantage persists when the process is
noisy. Here, we show that a quantum advantage can be maintained when
distinguishing between two noisy single qubit rotation channels. Numerical and
analytical calculations reveal a distinct transition between optimal
performance by fully coherent and fully incoherent protocols as a function of
noise strength. Moreover, the size of the region of coherent quantum advantage
shrinks inverse polynomially in the number of channel uses, and in an
intermediate regime an improved strategy is a hybrid of fully-coherent and
fully-incoherent subroutines. The fully coherent protocol is based on quantum
signal processing, suggesting a generalizable algorithmic framework for the
study of quantum advantage in the presence of realistic noise.
|
The Humphreys-Davidson (HD) limit empirically defines a region of high
luminosities (log L > 5.5) and low effective temperatures (T < 20kK) on the
Hertzsprung-Russell Diagram in which hardly any supergiant stars are observed.
Attempts to explain this limit through instabilities arising in near- or
super-Eddington winds have been largely unsuccessful. Using modern stellar
evolution models, we aim to re-examine the HD limit, investigating the impact of
enhanced mixing on massive stars. We construct grids of stellar evolution
models appropriate for the Small and Large Magellanic Clouds (SMC, LMC), as
well as for the Galaxy, spanning various initial rotation rates and convective
overshooting parameters. Significantly enhanced mixing apparently steers
stellar evolution tracks away from the region of the HD limit. To quantify the
excess of over-luminous stars in stellar evolution simulations we generate
synthetic populations of massive stars, and make detailed comparisons with
catalogues of cool (T < 12.5kK) and luminous (log L > 4.7) stars in the SMC and
LMC. We find that adjustments to the mixing parameters can lead to agreement
between the observed and simulated red supergiant populations, but for hotter
supergiants the simulations always over-predict the number of very luminous
(log L > 5.4) stars compared to observations. The excess of luminous
supergiants decreases for enhanced mixing, possibly hinting at an important
role mixing has in explaining the HD limit. Still, the HD limit remains
unexplained for hotter supergiants.
|
We theoretically investigate the influence of a longitudinal laser
polarization component from beam focusing on spin dynamics in Kapitza-Dirac
scattering by solving the relativistic Dirac equation with time-dependent
perturbation theory. The transverse spatial dependence of the longitudinal beam
polarization component is accounted for by approximating a Gaussian beam with
plane-wave components. We find that corrections from a longitudinal laser beam
polarization component approximately scale with the second power of the
diffraction angle $\epsilon$, from which we conclude that a related influence
from beam focusing can be made negligibly small for sufficiently low beam foci.
|
CAPTCHAs are a defense mechanism to prevent malicious bot programs from
abusing websites on the Internet. hCaptcha is a relatively new but emerging
image CAPTCHA service. This paper presents an automated system that can break
hCaptcha challenges with a high success rate. We evaluate our system against
270 hCaptcha challenges from live websites and demonstrate that it can solve
them with 95.93% accuracy while taking only 18.76 seconds on average to crack a
challenge. We run our attack from a Docker instance with only 2 GB of memory (RAM),
3 CPUs, and no GPU devices, demonstrating that it requires minimal resources to
launch a successful large-scale attack against the hCaptcha system.
|
In this paper, we make a comprehensive study of the properties of a gapped
Dirac semimetal model, which was originally proposed in the magnetoinfrared
spectroscopy measurement of ZrTe$_5$, and includes both the linear and
parabolic dispersions in all three directions. We find that, depending on the
band inversion parameters, $\zeta'$ and $\zeta_z'$, the model can support three
different phases: the single Dirac point (DP) phase, the double-DP phase, and
the Dirac ring phase. The three different phases can be distinguished by their
low-energy features in the density of states (DOS) and optical conductivity. At
high energy, both the DOS and optical conductivity exhibit power-law like
behaviors, with the asymptotic exponents depending heavily on the signs of
$\zeta'$ and $\zeta_z'$. Moreover, the rule-of-thumb formula between the DOS
and optical conductivity is satisfied only when $(\zeta',\zeta_z')>0$. The
implications of our results for experiments are discussed.
|
We determine the limiting distribution and the explicit tail behavior for the
maximal displacement of a branching symmetric stable process with spatially
inhomogeneous branching structure. Here the branching rate is a Kato class
measure with compact support and can be singular with respect to the Lebesgue
measure.
|
We consider the problem of assigning items to platforms in the presence of
group fairness constraints. In the input, each item belongs to certain
categories, called classes in this paper. Each platform specifies the group
fairness constraints through an upper bound on the number of items it can serve
from each class. Additionally, each platform also has an upper bound on the
total number of items it can serve. The goal is to assign items to platforms so
as to maximize the number of items assigned while satisfying the upper bounds
of each class. In some cases there is a revenue associated with matching an
item to a platform, in which case the goal is to maximize the revenue generated.
This problem models several important real-world problems like ad auctions,
scheduling, resource allocation, and school choice. We also show an interesting
connection to computing a generalized maximum independent set on hypergraphs
and ranking items under group fairness constraints.
We show that if the classes are arbitrary, then the problem is NP-hard and
strongly inapproximable. We consider the problem in both online and
offline settings under natural restrictions on the classes. Under these
restrictions, the problem continues to remain NP-hard but admits approximation
algorithms with small approximation factors. We also implement some of the
algorithms. Our experiments show that the algorithms work well in practice both
in terms of efficiency and the number of items that get assigned to some
platform.
|
Classical algorithms for predicting the equilibrium geometry of strongly
correlated molecules require expensive wave function methods that become
impractical already for few-atom systems. In this work, we introduce a
variational quantum algorithm for finding the most stable structure of a
molecule by explicitly considering the parametric dependence of the electronic
Hamiltonian on the nuclear coordinates. The equilibrium geometry of the
molecule is obtained by minimizing a more general cost function that depends on
both the quantum circuit and the Hamiltonian parameters, which are
simultaneously optimized at each step. The algorithm is applied to find the
equilibrium geometries of the $\mathrm{H}_2$, $\mathrm{H}_3^+$,
$\mathrm{BeH}_2$ and $\mathrm{H}_2\mathrm{O}$ molecules. The quantum circuits
used to prepare the electronic ground state for each molecule were designed
using an adaptive algorithm where excitation gates in the form of Givens
rotations are selected according to the norm of their gradient. All quantum
simulations are performed using the PennyLane library for quantum
differentiable programming. The optimized geometrical parameters for the
simulated molecules show an excellent agreement with their counterparts
computed using classical quantum chemistry methods.
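A minimal PennyLane sketch in the spirit of this algorithm is shown below for H$_2$: the energy is minimized jointly over a Givens-rotation angle and the nuclear coordinates, with the nuclear gradient taken by finite differences of the Hamiltonian. The step sizes, initial geometry, and single-Givens-rotation ansatz are illustrative choices, not the paper's exact settings.

```python
# Joint circuit/geometry optimization for H2 in PennyLane (sketch).
import pennylane as qml
from pennylane import numpy as np

symbols = ["H", "H"]
x = np.array([0.0, 0.0, -0.6, 0.0, 0.0, 0.6])      # Bohr, rough initial guess

def hamiltonian(x):
    return qml.qchem.molecular_hamiltonian(symbols, x)[0]

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def energy(theta, obs):
    qml.BasisState(np.array([1, 1, 0, 0]), wires=[0, 1, 2, 3])  # Hartree-Fock
    qml.DoubleExcitation(theta, wires=[0, 1, 2, 3])             # Givens rotation
    return qml.expval(obs)

def grad_x(theta, x, delta=0.01):                  # nuclear gradient by
    g = np.zeros_like(x)                           # central finite differences
    for i in range(len(x)):
        s = np.zeros_like(x)
        s[i] = 0.5 * delta
        g[i] = (energy(theta, hamiltonian(x + s))
                - energy(theta, hamiltonian(x - s))) / delta
    return g

theta = np.array(0.0, requires_grad=True)
for _ in range(20):                                # simultaneous updates
    g_theta = qml.grad(energy, argnum=0)(theta, hamiltonian(x))
    theta = np.array(theta - 0.4 * g_theta, requires_grad=True)
    x = x - 0.5 * grad_x(theta, x)
print("E =", energy(theta, hamiltonian(x)), "geometry:", x)
```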
|
In recent years, Deep Learning (DL) has been successfully applied to detect
and classify Radio Frequency (RF) Signals. A DL approach is especially useful
since it identifies the presence of a signal without needing full protocol
information, and can also detect and/or classify non-communication waveforms,
such as radar signals. In this work, we focus on the different pre-processing
steps that can be used on the input training data, and test the results on a
fixed DL architecture. While previous works have mostly focused exclusively on
either time-domain or frequency-domain approaches, we propose a hybrid image
that takes advantage of both time- and frequency-domain information and tackles
the classification as a computer vision problem. Our initial results point out
limitations of classical pre-processing approaches while also showing that it is
possible to build a classifier that can leverage the strengths of multiple
signal representations.
|
Analyzing and interpreting the exact behavior of new delay-based congestion
control protocols with complex non-linear control loops is exceptionally
difficult in highly variable networks such as cellular networks. This paper
proposes a Model-Driven Interpretability (MDI) congestion control framework,
which derives a model version of a delay-based protocol by simplifying a
congestion control protocol's response into a guided random walk over a
two-dimensional Markov model. We demonstrate the case for the MDI framework by
using MDI to analyze and interpret the behavior of two delay-based protocols
over cellular channels: Verus and Copa. Our results show a successful
approximation of throughput and delay characteristics of the protocols' model
versions across variable network conditions. The learned model of a protocol
provides key insights into an algorithm's convergence properties.
|
Multidimensional or multiplex bioanalysis represents a crucial approach to
improve diagnostic precision, increase assay throughput and advance fundamental
discoveries in the analytical industry, life science, and nanomedicine. Along this
line, bio-interfacing magnetic particles have been playing an important role.
Fully exploiting the properties of magnetic particles is the key to tailoring
recent technology development for better translational outcomes. In this
mini-review, typical magnetophysical dimensions of magnetic particles are
introduced. Recent progress of implementing these dimensions with advanced
sensor and actuator technologies in multiplex bioanalysis is discussed.
Outlooks on potential biomedical applications and challenges are provided.
|
We explore constraints on various new physics resonances from four top-quark
production based on current experimental data. Both light and heavy resonances
are studied in this work. A comparison between the full-width effect and the
narrow-width approximation is also made.
|
The properties of Pt-based materials can be intriguing due to the importance
of spin-orbit coupling for Pt. Herein, we report four new phases with formulas
M3Pt23Ge11 (M = Ca, Sr, Ba and Eu), which adopt the same structure type as
Ce3Pt23Si11. Magnetic susceptibility measurements indicate that none of the
phases is superconducting above 1.8 K, while for Eu3Pt23Ge11 ferromagnetic
ordering is observed at ~ 3 K. The low Curie temperature for that material
compared to that of Eu3Pt23Si11 may be due to its larger Eu-Eu distance. One
potential factor that destabilizes the structure of other rare-earth based
M3Pt23Ge11 is demonstrated through COHP calculations.
|
Linguistic knowledge is of great benefit to scene text recognition. However,
how to effectively model linguistic rules in end-to-end deep networks remains a
research challenge. In this paper, we argue that the limited capacity of
language models comes from: 1) implicit language modeling; 2) unidirectional
feature representation; and 3) a language model with noisy input.
Correspondingly, we propose an autonomous, bidirectional and iterative ABINet
for scene text recognition. Firstly, the autonomous design blocks gradient
flow between the vision and language models to enforce explicit language
modeling. Secondly, a novel bidirectional cloze network (BCN) is proposed as
the language model, based on bidirectional feature representation. Thirdly, we
propose an iterative correction scheme for the language model which can
effectively alleviate the impact of noisy input. Additionally, based on the
ensemble of iterative predictions, we propose a self-training method which can
learn from unlabeled images effectively. Extensive experiments indicate that
ABINet has superiority on low-quality images and achieves state-of-the-art
results on several mainstream benchmarks. Besides, the ABINet trained with
ensemble self-training shows promising improvement in realizing human-level
recognition. Code is available at https://github.com/FangShancheng/ABINet.
|
Existing approaches in video captioning concentrate on exploring global frame
features in the uncompressed videos, while the freely available and critical
saliency information already encoded in the compressed videos is generally
neglected. We propose a video captioning method which operates directly on the
stored compressed videos. To learn a discriminative visual representation for
video captioning, we design a residuals-assisted encoder (RAE), which spots
regions of interest in I-frames with the assistance of the residual frames.
First, we obtain the spatial attention weights by extracting features of the
residuals as the saliency value of each location in the I-frame, and design a
spatial attention module to refine the attention weights. We further propose a
temporal gate module to determine how much the attended features contribute to
the caption generation, which enables the model to resist the disturbance of
some noisy signals in the compressed videos. Finally, Long Short-Term Memory is
utilized to decode the visual representations into descriptions. We evaluate
our method on two benchmark datasets and demonstrate the effectiveness of our
approach.
|
The short-term economic consequences of the critical measures employed to
curb the transmission of Covid-19 are all too familiar, but the consequences of
isolation and loneliness resulting from those measures on the mental well-being
of the population and their ensuing long-term economic effects are largely
unknown. Here we offer a stochastic agent-based model to investigate social
restriction measures in a community where the feelings of loneliness of the
agents dwindle when they are socializing and grow when they are alone. In
addition, the intensity of those feelings, which are measured by a real
variable that we term degree of loneliness, determines whether the agent will
seek social contact or not. We find that a decrease in the number, quality or
duration of social contacts leads the community to enter a regime of burnout in
which the degree of loneliness diverges, although the number of lonely agents
at a given moment amounts to only a fraction of the total population. This
regime of mental breakdown is separated from the healthy regime, where the
degree of loneliness is finite, by a continuous phase transition. We show that
the community dynamics is described extremely well by a simple mean-field
theory so our conclusions can be easily verified for different scenarios and
parameter settings. The appearance of the burnout regime illustrates neatly the
side effects of social distancing, which give to many of us the choice between
physical infection and mental breakdown.
|
The stability of slippery surfaces coated with a thin lubricating fluid depends
on the surface energy of the underlying solid surface. High-energy solid
surfaces coated with thin lubricating oil lead to the dewetting of the oil
films upon depositing aqueous drops on them. The total surface energy, which is
due to the long-range and short-range interactions, also predicts the
instability of thin lubricating films under the given condition. In this
article, we present an experimental study of the dewetting of thin lubricating
oil films sandwiched between a hydrophilic solid surface and aqueous drops. Fluorescence imaging of
lubricant film and wetting behavior of aqueous drops are used for the analysis.
We find that the dewetting dynamics and the final pattern depend strongly on
the thickness of the lubricating oil film.
|
In this paper we analyze spectra in the phenomenological supersymmetric
Standard Model that simultaneously result in the right dark-matter relic
density $\Omega_{\rm DM} h^2$, offer an explanation for the $(g-2)_{\mu}$
discrepancy $\Delta a_{\mu}$ and are minimally fine-tuned. We discuss the LHC
phenomenology resulting from these spectra and the sensitivity of dark-matter
direct detection experiments to these spectra. We find that the latter type of
experiment, with sensitivity to the spin-dependent dark-matter-nucleon
scattering cross section $\sigma_{\rm SD,p}$, will probe all of the solutions
we found.
|
We report on three pre-registered studies testing whether people in the
position of describing a decision problem to decision-makers exploit this
opportunity for their benefit, by choosing descriptions that may be potentially
beneficial for themselves. In Study 1, recipients of an extreme dictator game
(where dictators can either take the whole pie for themselves or give it
entirely to the receiver) are asked to choose the instructions used to
introduce the game to dictators, among six different instructions that are
known from previous research to affect dictators' decisions. The results
demonstrate that some dictator game recipients tend to choose instructions that
make them more likely to receive a higher payoff. Study 2 shows that people who
choose descriptions that make them more likely to receive a higher payoff
indeed believe that they will receive a higher payoff. Study 3 shows that
receivers are more likely than dictators to choose these descriptions. In sum,
our work suggests that some people choose descriptions that are beneficial to
themselves; we also found some evidence that deliberative thinking and young
age are associated with this tendency.
|
Emotion dynamics is a framework for measuring how an individual's emotions
change over time. It is a powerful tool for understanding how we behave and
interact with the world. In this paper, we introduce a framework to track
emotion dynamics through one's utterances. Specifically, we introduce a number
of utterance emotion dynamics (UED) metrics inspired by work in Psychology. We
use this approach to trace emotional arcs of movie characters. We analyze
thousands of such character arcs to test hypotheses that inform our broader
understanding of stories. Notably, we show that there is a tendency for
characters to use increasingly more negative words and become increasingly
emotionally discordant with each other until about 90 percent of the narrative
length. UED also has applications in behavior studies, social sciences, and
public health.
|
The error-correction code based proof-of-work (ECCPoW) algorithm is based on
a low-density parity-check (LDPC) code. ECCPoW can impair ASIC mining through
its ability to vary the parameters of the LDPC code over time. Previous
research on the ECCPoW algorithm has presented its theory and an implementation
on Bitcoin, but does not discuss how stable the block generation time is. A
finite mean block generation time (BGT) and a non-heavy-tailed BGT distribution
are the focus of this study. In the ECCPoW algorithm, BGT may show
a long-tailed distribution due to time-varying cryptographic puzzles. Thus, it
is of interest to see if the BGT distribution is not heavy-tailed and if it
shows a finite mean. If the distribution is heavy-tailed, then confirmation of
a transaction cannot be guaranteed. We present implementation, simulation, and
validation of ECCPoW Ethereum. In implementation, we explain how the ECCPoW
algorithm is integrated into Ethereum 1.0 as a new consensus algorithm. In the
simulation, we perform a multinode simulation to show that the ECCPoW Ethereum
works well with automatic difficulty change. In the validation, we present the
statistical results of the two-sample Anderson-Darling test to show that the
distribution of BGT satisfies the necessary condition of the exponential
distribution. Our implementation is downloadable at
https://github.com/cryptoecc/ETH-ECC.
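For illustration, the validation step can be sketched with SciPy's two-sample Anderson-Darling test; the BGT samples below are synthetic exponential stand-ins, not the measured ECCPoW Ethereum data.

```python
# Two-sample Anderson-Darling test of BGT against an exponential reference.
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(7)
bgt = rng.exponential(scale=13.0, size=2000)             # stand-in for measured BGT (s)
reference = rng.exponential(scale=bgt.mean(), size=2000)

stat, crit, sig = anderson_ksamp([bgt, reference])
print(f"A-D statistic = {stat:.3f}, significance level = {sig:.3f}")
# Failing to reject is consistent with a light-tailed, finite-mean BGT.
```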
|
In relativistic Quantum Field Theory (QFT) ideal measurements of certain
observables are physically impossible without violating causality. This prompts
two questions: i) can a given observable be ideally measured in QFT, and ii) if
not, in what sense can it be measured? Here we formulate a necessary and
sufficient condition that any measurement, and more generally any state update
(quantum operation), must satisfy to respect causality. Our focus is scalar
QFT, although our results should be applicable to observables in fermionic QFT.
We argue that for unitary `kicks' and operations involving 1-parameter families
of Kraus operators, e.g. Gaussian measurements, the only causal observables are
smeared fields and the identity - the basic observables in QFT. We provide
examples with more complicated operators such as products of smeared fields,
and show that the associated state updates are acausal, and hence impossible.
Despite this, one can still recover expectation values of such operators, and
we show how to do this using only causal measurements of smeared fields.
|
To date, mainstream target speech separation (TSS) approaches are formulated
to estimate the complex ratio mask (cRM) of the target speech in the time-frequency
domain under supervised deep learning framework. However, the existing deep
models for estimating cRM are designed in the way that the real and imaginary
parts of the cRM are separately modeled using real-valued training data pairs.
The research motivation of this study is to design a deep model that fully
exploits the temporal-spectral-spatial information of multi-channel signals for
estimating the cRM directly and efficiently in the complex domain. As a result, a novel
TSS network is designed consisting of two modules, a complex neural spatial
filter (cNSF) and an MVDR. Essentially, cNSF is a cRM estimation model and an
MVDR module is cascaded to the cNSF module to reduce the nonlinear speech
distortions introduced by the neural network. Specifically, to fit the cRM target,
all input features of cNSF are reformulated into complex-valued representations
following the supervised learning paradigm. Then, to achieve good hierarchical
feature abstraction, a complex deep neural network (cDNN) is carefully
designed with a U-Net structure. Experiments conducted on simulated multi-channel
speech data demonstrate the proposed cNSF outperforms the baseline NSF by 12.1%
in scale-invariant signal-to-distortion ratio and 33.1% in word error rate.
|
Training Graph Convolutional Networks (GCNs) is expensive as it needs to
aggregate data recursively from neighboring nodes. To reduce the computation
overhead, previous works have proposed various neighbor sampling methods that
estimate the aggregation result based on a small number of sampled neighbors.
Although these methods have successfully accelerated the training, they mainly
focus on the single-machine setting. As real-world graphs are large, training
GCNs in distributed systems is desirable. However, we found that the existing
neighbor sampling methods do not work well in a distributed setting.
Specifically, a naive implementation may incur a huge amount of communication
of feature vectors among different machines. To address this problem, we
propose a communication-efficient neighbor sampling method in this work. Our
main idea is to assign higher sampling probabilities to the local nodes so that
remote nodes are accessed less frequently. We present an algorithm that
determines the local sampling probabilities and ensures that our skewed neighbor
sampling does not significantly affect the convergence of the training. Our experiments
with node classification benchmarks show that our method significantly reduces
the communication overhead for distributed GCN training with little accuracy
loss.
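A minimal sketch of the idea is given below: local neighbors receive a boosted sampling probability, and the sampled features are reweighted so that the mean-aggregation estimate stays unbiased. The boost factor, the partition, and this particular importance-weight correction are our assumptions, not the paper's exact probability assignment.

```python
# Communication-aware neighbor sampling with unbiased reweighting (sketch).
import numpy as np

def sample_neighbors(n_neighbors, is_local, k, boost=4.0, rng=None):
    """Sample k neighbor indices, preferring local ones; return indices and
    importance weights that keep the mean aggregation unbiased."""
    rng = rng or np.random.default_rng()
    p = np.where(is_local, boost, 1.0)       # boost locally stored neighbors
    p = p / p.sum()
    idx = rng.choice(n_neighbors, size=k, replace=True, p=p)
    w = 1.0 / (n_neighbors * p[idx] * k)     # E[sum_j w_j f_j] = mean_i f_i
    return idx, w

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))                 # features of 10 neighbors
is_local = np.array([True] * 3 + [False] * 7)    # 3 live on this machine
idx, w = sample_neighbors(10, is_local, k=4, rng=rng)
est = (w[:, None] * feats[idx]).sum(axis=0)      # approx feats.mean(axis=0)
```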
|
Extracting relevant properties of empirical signals generated by nonlinear,
stochastic, and high-dimensional systems is a challenge of complex systems
research. Open questions are how to differentiate chaotic signals from
stochastic ones, and how to quantify nonlinear and/or high-order temporal
correlations. Here we propose a new technique to reliably address both
problems. Our approach follows two steps: first, we train an artificial neural
network (ANN) with flicker (colored) noise to predict the value of the
parameter, $\alpha$, that determines the strength of the correlation of the
noise. To predict $\alpha$ the ANN input features are a set of probabilities
that are extracted from the time series by using symbolic ordinal analysis.
Then, we input to the trained ANN the probabilities extracted from the time
series of interest, and analyze the ANN output. We find that the $\alpha$ value
returned by the ANN is informative of the temporal correlations present in the
time series. To distinguish between stochastic and chaotic signals, we exploit
the fact that the difference between the permutation entropy (PE) of a given
time series and the PE of flicker noise with the same $\alpha$ parameter is
small when the time series is stochastic, but it is large when the time series
is chaotic. We validate our technique by analysing synthetic and empirical time
series whose nature is well established. We also demonstrate the robustness of
our approach with respect to the length of the time series and to the level of
noise. We expect that our algorithm, which is freely available, will be very
useful to the community.
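A minimal sketch of the ordinal-analysis features is given below: the probabilities of ordinal patterns of embedding dimension D, and the normalised permutation entropy computed from them. The embedding dimension and the white-noise example are illustrative.

```python
# Ordinal pattern probabilities and permutation entropy (sketch).
import itertools
import math
import numpy as np

def ordinal_probs(x, D=4):
    """Probabilities of the D! ordinal patterns occurring in the series x."""
    counts = {p: 0 for p in itertools.permutations(range(D))}
    for i in range(len(x) - D + 1):
        counts[tuple(np.argsort(x[i:i + D]))] += 1
    c = np.array(list(counts.values()), dtype=float)
    return c / c.sum()

def permutation_entropy(x, D=4):
    p = ordinal_probs(x, D)
    p = p[p > 0]
    return -(p * np.log(p)).sum() / math.log(math.factorial(D))

rng = np.random.default_rng(3)
print(permutation_entropy(rng.normal(size=5000)))   # white noise: PE close to 1
```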
|
Federated learning enables multiple participants to collaboratively train a
model without aggregating the training data. Although the training data are
kept within each participant and the local gradients can be securely
synthesized, recent studies have shown that such privacy protection is
insufficient. The global model parameters that have to be shared for
optimization are susceptible to leak information about training data. In this
work, we propose Confined Gradient Descent (CGD) that enhances privacy of
federated learning by eliminating the sharing of global model parameters. CGD
exploits the fact that a gradient descent optimization can start with a set of
discrete points and converges to another set at the neighborhood of the global
minimum of the objective function. It lets the participants independently train
on their local data, and securely share the sum of local gradients to benefit
each other. We formally demonstrate CGD's privacy enhancement over traditional
FL. We prove that less information is exposed in CGD compared to that of
traditional FL. CGD also guarantees desired model accuracy. We theoretically
establish a convergence rate for CGD, and prove that the loss gap between the
proprietary model learned by each participant and a model trained on the
aggregated training data is bounded. Extensive experimental results on two real-world
datasets demonstrate the performance of CGD is comparable with the centralized
learning, with marginal differences on validation loss (mostly within 0.05) and
accuracy (mostly within 1%).
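A minimal sketch of the confined update is given below: each participant keeps its own model copy starting from a private point, and only the sum of local gradients is exchanged (computed securely in the real protocol). The quadratic local losses are illustrative placeholders for the participants' objectives.

```python
# Confined Gradient Descent: private starting points, shared gradient sum (sketch).
import numpy as np

def local_grad(w, A, b):
    return A.T @ (A @ w - b) / len(b)       # gradient of 0.5*||Aw - b||^2 / n

rng = np.random.default_rng(5)
parties = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w = [rng.normal(size=3) for _ in range(3)]  # one confined model per party
lr = 0.1
for _ in range(200):
    g_sum = sum(local_grad(w[i], *parties[i]) for i in range(3))  # secure sum
    w = [wi - lr * g_sum for wi in w]       # all copies move along the sum
print([np.round(wi, 3) for wi in w])        # copies end near the joint optimum
```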
|
Based on recent perturbative and non-perturbative lattice calculations with
almost physical quark flavors and the thermal contributions from photons,
leptons, electroweak particles, and scalar Higgs bosons, various thermodynamic
quantities, at vanishing net-baryon densities, such as pressure, energy
density, bulk viscosity, relaxation time, and temperature have been calculated
up to the TeV-scale, i.e. covering hadron, QGP and electroweak (EW) phases in
the early Universe. This remarkable progress motivated the present study to
determine the possible influence of the bulk viscosity in the early Universe
and to understand how this would vary from epoch to epoch. We have taken into
consideration first- (Eckart) and second-order (Israel-Stewart) theories for
the relativistic cosmic fluid and integrated viscous equations of state in
Friedmann equations. Nonlinear nonhomogeneous differential equations are
obtained as analytical solutions. For Israel-Stewart, the differential
equations are too sophisticated to be solved here; they are outlined as
road-maps for future studies. For Eckart theory, the only possible solution is
the functional dependence $H(a(t))$, where $H(t)$ is the Hubble parameter and
$a(t)$ is the scale factor, but neither of them so far could be directly
expressed in terms of either proper or cosmic time $t$. For Eckart-type viscous background,
especially at finite cosmological constant, non-singular $H(t)$ and $a(t)$ are
obtained, where $H(t)$ diverges for QCD/EW and asymptotic EoS. For non-viscous
background, the dependence of $H(a(t))$ is monotonic. The same conclusion can
be drawn for an ideal EoS. We also conclude that the rate of decreasing
$H(a(t))$ with increasing $a(t)$ varies from epoch to epoch, at vanishing and
finite cosmological constant. These results obviously help in improving our
understanding of the nucleosynthesis and the cosmological large-scale
structure.
|
We check the dynamical and observational features of four typologies of
logotropic dark energy models, leading to a \emph{thermodynamic cosmic speed
up} fueled by a single fluid that unifies dark energy and dark matter. We first
present two principal Anton-Schmidt fluids where the Gr\"uneisen parameter
$\gamma_{\rm G}$ is free to vary and then fixed to the special value
$\gamma_{\rm G}=\tfrac{5}{6}$. We also investigate the pure logotropic model,
corresponding to $\gamma_{\rm G}=-\frac{1}{6}$. Finally, we propose a new
logotropic paradigm that works as a generalized logotropic fluid, in which we
split the role of dark matter and baryons. We demonstrate that the logotropic
paradigms may present drawbacks in perturbations, showing a negative adiabatic
sound speed which makes perturbations unstable. The Anton-Schmidt model with
$\gamma_{\rm G}=\frac{5}{6}$ is ruled out, while the generalized logotropic
fluid seems to be the most suitable one, albeit weakly disfavored with respect
to the $\Lambda$CDM model. We combine low- and higher-redshift domains through
experimental fits based on Monte Carlo Markov Chain procedures, taking into
account supernovae Ia catalogue, Hubble measurements and $\sigma_8$ data
points. We consider two model selection criteria to infer the statistical
significance of the four models. We conclude there is a statistical advantage
in handling the Anton-Schmidt fluid with the Gr\"uneisen parameter free to vary
and/or fixed to $\gamma_{\rm G}=-\frac{1}{6}$. The generalized logotropic fluid
yields suitable results, statistically favored over the other models, as long as
the sound speed is positive, becoming unstable in perturbations elsewhere. We
emphasize that the $\Lambda$CDM paradigm works statistically better than any
kind of logotropic and generalized logotropic model, while the
Chevallier-Polarski-Linder parametrization is statistically comparable with
logotropic scenarios.
|
Using \emph{TESS}, we are conducting a systematic study of outbursting AM~CVn
systems to place some limits on the current outburst models. We present the
\emph{TESS} light curve (LC) for 9 AM~CVns showing both superoutbursts (SOs)
and normal outbursts (NOs). The continuous coverage of the outbursts with
\emph{TESS} allows us to place stringent limits on the duration and structures
of the SOs and the NOs. We present evidence that in at least some of the
systems enhanced mass transfer (EMT) has to be taken into account to explain
the observed LC of the SOs and the rebrightening phase after the SOs. For others,
the colour evolution from simultaneous observations in $g$ and $r$ with ZTF
differs from previously reported colour evolution of longer period AM~CVns
where EMT is responsible for the SO. We also find that, due to the lack of
sufficiently high-cadence coverage, the outburst durations of many systems might
have been overestimated in previous ground-based surveys. We report the SO duration for 6
AM~CVns. We also found that precursors are a common feature of SOs in AM~CVns
and are seen in the LC of 5 of the 6 reported SOs. Finally, the 10-minute and
2-minute cadence LCs from \emph{TESS} also allowed us to find two new candidate
orbital periods of AM~CVns, both of which are in reasonably good agreement with
the predictions for their periods based on their past outburst histories.
|
Shapley values provide model agnostic feature attributions for model outcome
at a particular instance by simulating feature absence under a global
population distribution. The use of a global population can lead to potentially
misleading results when local model behaviour is of interest. Hence we consider
the formulation of neighbourhood reference distributions that improve the local
interpretability of Shapley values. By doing so, we find that the
Nadaraya-Watson estimator, a well-studied kernel regressor, can be expressed as
a self-normalised importance sampling estimator. Empirically, we observe that
Neighbourhood Shapley values identify meaningful sparse feature relevance
attributions that provide insight into local model behaviour, complementing
conventional Shapley analysis. They also increase on-manifold explainability
and robustness to the construction of adversarial classifiers.
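As an illustration of the idea (a minimal sketch, not the paper's implementation; the Gaussian kernel and all names are assumptions for the example), Shapley values can be computed against a kernel-weighted, self-normalised neighbourhood reference distribution:

```python
# Minimal sketch: exact Shapley values with a neighbourhood reference
# distribution, feasible only for a small number of features.
from itertools import combinations
from math import factorial

import numpy as np


def gaussian_weights(X_bg, x, bandwidth=1.0):
    """Self-normalised kernel weights centring the reference distribution on x."""
    d2 = np.sum((X_bg - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w / w.sum()


def shapley_values(model, x, X_bg, bandwidth=1.0):
    n = len(x)
    w = gaussian_weights(X_bg, x, bandwidth)

    def value(S):
        # Features in S take the instance's values; the rest are drawn
        # from the kernel-weighted background samples.
        Z = np.array(X_bg, copy=True)
        if S:
            Z[:, list(S)] = x[list(S)]
        return np.dot(w, model(Z))

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                coef = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += coef * (value(S + (i,)) - value(S))
    return phi


rng = np.random.default_rng(0)
X_bg = rng.normal(size=(200, 3))
model = lambda Z: Z[:, 0] ** 2 + Z[:, 1]   # toy model
print(shapley_values(model, np.array([1.0, 0.5, -0.3]), X_bg, bandwidth=0.5))
```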
|
The interaction between cracks, as well as their propagation, in
single-crystal aluminum is investigated at the atomic scale using the molecular
dynamics method and the modified embedded atom method. The results demonstrate
that crack propagation in aluminum is quite a complex process, which is
accompanied by micro-crack growth, merging, stress shielding, dislocation
emission, and phase transformation of the crystal structure. The main
deformation mechanisms at the front of the fatigue crack are holes, slip bands,
and cross-slip bands. During crack propagation, there are interactions between
cracks. Such interactions inhibit the phase transition at the crack tip, and
also affect the direction and speed of crack propagation. The intensity of the
interaction between two cracks decreases with the increase of the distance
between them and increases with increasing crack size. Moreover, while this
interaction has no effect on the elastic modulus of the material, it affects
its strength limit.
|
Management of invasive species and pathogens requires information about the
traffic of potential vectors. Such information is often taken from vector
traffic models fitted to survey data. Here, user-specific data collected via
mobile apps offer new opportunities to obtain more accurate estimates and to
analyze how vectors' individual preferences affect propagule flows. However,
data voluntarily reported via apps may lack some trip records, adding a
significant layer of uncertainty. We show how the benefits of app-based data
can be exploited despite this drawback.
Based on data collected via an angler app, we built a stochastic model for
angler traffic in the Canadian province of Alberta. There, anglers facilitate the
spread of whirling disease, a parasite-induced fish disease. The model is
temporally and spatially explicit and accounts for anglers' individual
preferences and repeat visitation behaviour, helping to address the problem of missing trip
records.
We obtained estimates of angler traffic between all subbasins in Alberta. The
model's accuracy exceeds that of direct empirical estimates even when fewer
data were used to fit the model. The results indicate that anglers' local
preferences and their tendency to revisit previous destinations reduce the
number of long inter-waterbody trips potentially dispersing whirling disease.
According to our model, anglers revisit their previous destination in 64% of
their trips, making these trips irrelevant for the spread of whirling disease.
Furthermore, 54% of fishing trips end in individual-specific, spatially
contained areas with a mean radius of 54.7 km. Finally, although the fraction of
trips that anglers report was unknown, we were able to estimate the total
yearly number of fishing trips in Alberta, matching an independent empirical
estimate.
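To make the reported revisit behaviour concrete, the following toy simulation (not the fitted traffic model; the preference distribution and site count are invented for the example) shows how a 64% revisit probability suppresses inter-site trips:

```python
# Toy mixture model: with probability p_revisit an angler returns to the
# previous destination, otherwise a site is drawn from individual preferences.
import numpy as np

rng = np.random.default_rng(42)
p_revisit = 0.64                      # revisit probability reported above
n_sites, n_trips = 50, 1000
preferences = rng.dirichlet(np.full(n_sites, 0.2))

trips, prev = [], rng.choice(n_sites, p=preferences)
for _ in range(n_trips):
    if rng.random() < p_revisit:
        dest = prev                   # repeated trip: no new dispersal risk
    else:
        dest = rng.choice(n_sites, p=preferences)
    trips.append(dest)
    prev = dest

moves = np.mean(np.array(trips[1:]) != np.array(trips[:-1]))
print(f"fraction of trips moving between sites: {moves:.2f}")
```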
|
We consider dark matter which has non-zero electromagnetic form factors, like
electric/magnetic dipole moments and anapole moment for fermionic dark matter
and Rayleigh form factor for scalar dark matter. We consider dark matter mass
$m_\chi > \mathcal{O}({\rm MeV})$ and put constraints on its mass and
electromagnetic couplings from CMB and LSS observations. Fermionic dark matter
with non-zero electromagnetic form factors can annihilate to $e^+ e^-$ and
scalar dark matter can annihilate to $2\gamma$ at the time of recombination and
distort the CMB. We analyze dark matter with multipole moments with Planck and
BAO observations. We find upper bounds on anapole moment $g_{A}<7.163\times
10^{3} \text{GeV}^{-2}$, electric dipole moment ${\cal D}<7.978\times 10^{-9}
\text{e-cm}$, magnetic dipole moment ${\mu}<2.959\times 10^{-7} \mu_B$, and the
bound on the Rayleigh form factor of dark matter is $g_4/\Lambda_4^2<1.085\times
10^{-2}\,\text{GeV}^{-2}$, all at $95\%$ C.L.
|
Observations of plasma waves by the Fields Suite and of electrons by the
Solar Wind Electrons Alphas and Protons Investigation (SWEAP) on Parker Solar
Probe provide strong evidence for pitch angle scattering of strahl-energy
electrons by narrowband whistler-mode waves at radial distances less than ~0.3
AU. We present two example intervals of a few hours that include 8 waveform
captures with whistler-mode waves and 26 representative electron distributions
that are examined in detail. Two were narrow; 17 were clearly broadened, and 8
were very broad. The two with narrow strahl occurred when there were either no
whistlers or very intermittent low amplitude waves. Six of the eight broadest
distributions were associated with intense, long duration waves. Approximately
half of the observed electron distributions have features consistent with an
energy dependent scattering mechanism, as would be expected from interactions
with narrowband waves. A comparison of the wave power in the whistler-mode
frequency band to pitch angle width and a measure of anisotropy provides
additional evidence for the electron scattering by whistler-mode waves. The
pitch angle broadening occurs over an energy range comparable to that
obtained for the n=1 (co-streaming) resonance for the observed wave and plasma
parameters. The additional observation that the heat flux is lower in the
interval with multiple switchbacks may provide clues to the nature of
switchbacks. These results provide strong evidence that the heat flux is
reduced by narrowband whistler-mode wave scattering of strahl-energy
electrons.
|
In recent years, space missions such as Kepler and TESS have discovered many
close-in planets with significant atmospheres consisting of hydrogen and
helium: mini-Neptunes. This indicates that these planets formed early in
gas-rich disks while avoiding the runaway gas accretion that would otherwise
have turned them into hot Jupiters. A solution is to invoke a long
Kelvin-Helmholtz contraction (or cooling) timescale, but it has also been
suggested that thermodynamical cooling can be prevented by hydrodynamical
planet atmosphere-disk recycling. We investigate the efficacy of the recycling
hypothesis in preventing the collapse of the atmosphere, check for the
existence of a steady state configuration, and determine the final atmospheric
mass to core mass ratio. We use three-dimensional radiation-hydrodynamic
simulations to model the formation of planetary proto-atmospheres. Equations
are solved in a local frame centered on the planet. Ignoring small oscillations
that average to zero over time, the simulations converge to a steady state
where the velocity field of the gas becomes constant in time. In a steady
state, the energy loss by radiative cooling is fully compensated by the
recycling of the low entropy gas in the planetary atmosphere with high entropy
gas from the circumstellar disk. For close-in planets, recycling naturally
halts the cooling of planetary proto-atmospheres, preventing them from
contracting toward the runaway regime and collapsing into gas giants.
|
Platforms that support online commentary, from social networks to news sites,
are increasingly leveraging machine learning to assist their moderation
efforts. But this process does not typically provide feedback to the author
that would help them contribute according to the community guidelines. This is
prohibitively time-consuming for human moderators to do, and computational
approaches are still nascent. This work focuses on models that can help suggest
rephrasings of toxic comments in a more civil manner. Inspired by recent
progress in unpaired sequence-to-sequence tasks, a self-supervised learning
model is introduced, called CAE-T5. CAE-T5 employs a pre-trained text-to-text
transformer, which is fine-tuned with a denoising and cyclic auto-encoder loss.
Experimenting with the largest toxicity detection dataset to date (Civil
Comments), our model generates sentences that are more fluent and better at
preserving the initial content than earlier text style transfer systems, as
assessed by several scoring systems and human evaluation.
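As a rough sketch of the denoising half of such an objective (the corruption scheme, model size, and hyperparameters are assumptions, and the cyclic loss is omitted), a pre-trained text-to-text transformer can be fine-tuned to reconstruct a comment from a corrupted copy:

```python
# Denoising auto-encoder step with a pre-trained T5 (HuggingFace transformers).
import random

import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)


def corrupt(text, p_drop=0.15):
    """Illustrative noising: randomly drop words."""
    kept = [w for w in text.split() if random.random() > p_drop]
    return " ".join(kept) if kept else text


batch = ["this comment is going to be rephrased by the model"]
inputs = tokenizer([corrupt(s) for s in batch], return_tensors="pt", padding=True)
labels = tokenizer(batch, return_tensors="pt", padding=True).input_ids

loss = model(**inputs, labels=labels).loss   # reconstruction loss
loss.backward()
optimizer.step()
print(float(loss))
```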
|
Normal type Ia supernovae (SNe) are thought to arise from the thermonuclear
explosion of massive ($>0.8$ M$_\odot$) carbon-oxygen white dwarfs (WDs),
although the exact mechanism is debated. In some models helium accretion onto a
carbon-oxygen (CO) WD from a companion was suggested to dynamically trigger a
detonation of the accreted helium shell. The helium detonation then produces a
shock that after converging on itself close to the core of the CO-WD, triggers
a secondary carbon detonation and gives rise to an energetic explosion.
However, most studies of such scenarios have been done in one or two
dimensions, and/or did not consider self-consistent models for the accretion
and the He-donor. Here we make use of detailed 3D simulations to study the
interaction of a He-rich hybrid $0.69\,\mathrm{M_\odot}$ HeCO WD with a more
massive $0.8\,\mathrm{M_\odot}$ CO~WD. We find that accretion from the hybrid
WD onto the CO~WD gives rise to a helium detonation. However, the helium
detonation does not trigger a carbon detonation in the CO~WD. Instead, the
helium detonation burns through the accretion stream to also burn the helium
shell of the donor hybrid HeCO-WD. The detonation of its massive helium shell
then compresses its CO core, and triggers its detonation and full destruction.
The explosion gives rise to a faint, likely highly reddened transient,
potentially observable by the Vera Rubin survey, and the high-velocity ($\sim
1000\,\mathrm{km\,s^{-1}}$) ejection of the heated surviving CO~WD companion.
Depending on uncertainties in stellar evolution, we estimate the rate of such
transients to be up to $\sim10\%$ of the rate of type Ia SNe.
|
Molecular dynamics simulations are performed to provide a detailed
understanding of the functional degradation of shape memory alloys at small
scale. The origin of the experimentally reported accumulation of plastic
deformation and the anomalous sudden increase of the residual strain under
cyclic mechanical loading are explained by detailed insights into the relevant
atomic scale processes. Our work reveals that the mechanical response of
shape-memory-alloy pillars under cyclic compression is significantly influenced
by the presence of an amorphous-like surface region as experimentally induced
by focused ion beam milling. The main factor responsible for the observed
degradation of superelasticity under cyclic loading is the accumulated plastic
deformation and the resultant retained martensite originating from a synergetic
contribution of the amorphous and crystalline shape-memory-alloy regions. We
show that the reported sudden diminishment of the stress plateaus and
hysteresis under cyclic loading is caused by the increased stability of the
martensite phase due to the presence of the amorphous phase. Based on the
identified mechanism responsible for the degradation, we validate reported
methods of recovering the superelasticity and propose a new method to prohibit
the synergetic contribution of the amorphous and crystalline regions, so as
to achieve sustainable operation of shape memory alloys at small scale.
|
This is a technical report, containing all the lemma and proposition proofs
in paper "Topological Constraints on Identifying Additive Link Metrics via
End-to-end Paths Measurements" by Liang Ma, Ting He, Kin K. Leung, Don Towsley,
and Ananthram Swami, published in Annual Conference of The International
Technology Alliance (ACITA), 2012.
|
In video data, busy motion details from moving regions are conveyed within a
specific frequency bandwidth in the frequency domain. Meanwhile, the rest of
the frequencies of video data are encoded with quiet information with
substantial redundancy, which causes low processing efficiency in existing
video models that take raw RGB frames as input. In this paper, we consider
allocating more intensive computation to the processing of the important busy
information and less computation to that of the quiet information. We design a
trainable Motion Band-Pass Module (MBPM) for separating busy information from
quiet information in raw video data. By embedding the MBPM into a two-pathway
CNN architecture, we define a Busy-Quiet Net (BQN). The efficiency of BQN is
determined by avoiding redundancy in the feature space processed by the two
pathways: one operating on low-resolution Quiet features while the other
processes Busy features. The proposed BQN outperforms many recent video
processing models on Something-Something V1, Kinetics400, UCF101 and HMDB51
datasets.
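As a rough sketch of the band-pass idea (not the exact MBPM; the difference-of-smoothings design and kernel sizes are assumptions for the example), a trainable temporal band-pass can be written as the difference of two learnable depthwise temporal convolutions:

```python
# Trainable temporal band-pass on (N, C, T, H, W) video tensors.
import torch
import torch.nn as nn


class MotionBandPass(nn.Module):
    def __init__(self, channels, kt=3):
        super().__init__()
        # Depthwise temporal low-pass filters with different supports.
        self.lp_narrow = nn.Conv3d(channels, channels, (kt, 1, 1),
                                   padding=(kt // 2, 0, 0), groups=channels)
        self.lp_wide = nn.Conv3d(channels, channels, (2 * kt + 1, 1, 1),
                                 padding=(kt, 0, 0), groups=channels)

    def forward(self, x):
        busy = self.lp_narrow(x) - self.lp_wide(x)   # band-pass response
        quiet = x - busy                             # what the band-pass removes
        return busy, quiet


clip = torch.randn(2, 3, 16, 32, 32)   # batch of short RGB clips
busy, quiet = MotionBandPass(3)(clip)
print(busy.shape, quiet.shape)
```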
|
A key tool astronomers have for investigating the nature of extragalactic
transients is their position within their host galaxies. Galactocentric offsets,
enclosed fluxes and the fraction of light statistic are widely used at
different wavelengths to help infer the nature of transient progenitors.
Motivated by the proposed link between magnetars and fast radio bursts (FRBs),
we create a face-on image of the Milky Way using best estimates of its size,
structure and colour. We place Galactic magnetars, pulsars, low mass and high
mass X-ray binaries on this image, using the available distance information.
Galactocentric offsets, enclosed fluxes and fraction of light distributions for
these systems are compared to extragalactic transient samples. We find that
FRBs follow the distributions for Galactic neutron stars closest, with 24 (75
per cent) of the Anderson-Darling tests we perform having a p-value greater
than 0.05. This suggests that FRBs are located on their hosts in a manner
consistent with how Galactic neutron stars are distributed over the Milky
Way's light, although we cannot determine which specific neutron star population is the best match. The
Galactic distributions are consistent with other extragalactic transients much
less often across the range of comparisons made, with type Ia SNe in second
place, with only 33 per cent of tests exceeding 0.05. Overall, our results
provide further support for FRB models invoking isolated young neutron stars,
or binaries containing a neutron star.
|
We introduce the GAMBIT Universal Model Machine (GUM), a tool for
automatically generating code for the global fitting software framework GAMBIT,
based on Lagrangian-level inputs. GUM accepts models written symbolically in
FeynRules and SARAH formats, and can use either tool along with MadGraph and
CalcHEP to generate GAMBIT model, collider, dark matter, decay and spectrum
code, as well as GAMBIT interfaces to corresponding versions of SPheno,
micrOMEGAs, Pythia and Vevacious (C++). In this paper we describe the features,
methods, usage, pathways, assumptions and current limitations of GUM. We also
give a fully worked example, consisting of the addition of a Majorana fermion
simplified dark matter model with a scalar mediator to GAMBIT via GUM, and
carry out a corresponding fit.
|
We study the relationship between the categorical entropy of the twist and
cotwist functors along a spherical functor. In particular, we prove that the
categorical entropy of the twist functor coincides with that of the cotwist
functor if the essential image of the right adjoint functor of the spherical
functor contains a split-generator. We also show that our results generalize the
computations of the categorical entropy of spherical twists and
$\mathbb{P}$-twists by Ouchi and Fan. As an application, we apply our results
to the Gromov--Yomdin type conjecture by Kikuta--Takahashi.
|
A Cloud Native Application (CNApp), as a distributed system, is a collection of
independent components (microservices) interacting via communication
protocols. This gives rise to an abstract architecture of a CNApp as a
dynamically reconfigurable acyclic directed multigraph whose vertices are
microservices and whose edges are the protocols. Generic mechanisms for such
reconfigurations evidently correspond to higher-level functions (functionals).
This also implies an internal abstract architecture of a microservice as a
collection of event-triggered serverless functions (including functions
implementing the protocols) that are dynamically composed into event-dependent
data-flow graphs. Again, generic mechanisms for such compositions correspond to
calculus of functionals and relations.
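A minimal illustration of this compositional view (all names invented for the example): serverless-style handlers are composed by a higher-order function into event-dependent data-flow pipelines.

```python
# Higher-order composition of event-triggered handlers into data-flow graphs.
from typing import Callable, Dict

Handler = Callable[[dict], dict]


def pipeline(*stages: Handler) -> Handler:
    """Functional composition of handlers: the generic reconfiguration mechanism."""
    def run(event: dict) -> dict:
        for stage in stages:
            event = stage(event)
        return event
    return run


def validate(e): return {**e, "valid": bool(e.get("payload"))}
def enrich(e):   return {**e, "enriched": True}
def persist(e):  return {**e, "stored": True}

# Event-dependent dispatch: different events select different graphs.
routes: Dict[str, Handler] = {
    "order.created": pipeline(validate, enrich, persist),
    "order.queried": pipeline(validate),
}
print(routes["order.created"]({"payload": {"id": 7}}))
```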
|
3D ultrasound (US) has become prevalent due to its rich spatial and
diagnostic information not contained in 2D US. Moreover, 3D US can contain
multiple standard planes (SPs) in one shot. Thus, automatically localizing SPs
in 3D US has the potential to improve user-independence and
scanning-efficiency. However, manual SP localization in 3D US is challenging
because of the low image quality, huge search space and large anatomical
variability. In this work, we propose a novel multi-agent reinforcement
learning (MARL) framework to simultaneously localize multiple SPs in 3D US. Our
contribution is four-fold. First, our proposed method is general and it can
accurately localize multiple SPs in different challenging US datasets. Second,
we equip the MARL system with a recurrent neural network (RNN) based
collaborative module, which can strengthen the communication among agents and
learn the spatial relationship among planes effectively. Third, we explore
adopting neural architecture search (NAS) to automatically design the network
architecture of both the agents and the collaborative module. Last, we believe
we are the first to realize automatic SP localization in pelvic US volumes, and
note that our approach can handle both normal and abnormal uterus cases.
Extensively validated on two challenging datasets of the uterus and fetal
brain, our proposed method achieves average localization accuracies of 7.03
degrees/1.59 mm and 9.75 degrees/1.19 mm. Experimental results show that our
lightweight MARL model has higher accuracy than state-of-the-art methods.
|
This paper focuses on a question raised by Holm and J{\o}rgensen, who asked
if the induced cotorsion pairs $(\Phi({\sf X}),\Phi({\sf X})^{\perp})$ and
$(^{\perp}\Psi({\sf Y}),\Psi({\sf Y}))$ in $\mathrm{Rep}(Q,{\sf{A}})$ -- the
category of all $\sf{A}$-valued representations of a quiver $Q$ -- are complete
whenever $(\sf X,\sf Y)$ is a complete cotorsion pair in an abelian category
$\sf{A}$ satisfying some mild conditions. Recently, Odaba\c{s}{\i} gave an
affirmative answer if the quiver $Q$ is rooted and the cotorsion pair $(\sf
X,\sf Y)$ is further hereditary. In this paper, we improve Odaba\c{s}{\i}'s
work by removing the hereditary assumption on the cotorsion pair. As an
application, we show under certain mild conditions that if a subcategory $\sf
L$, which is not necessarily closed under direct summands, of $\sf A$ is
special precovering (resp., preenveloping), then $\Phi(\sf L)$ (resp.,
$\Psi(\sf L)$) is special precovering (resp., preenveloping) in
$\mathrm{Rep}(Q,{\sf{A}})$.
|
We present the analytic evaluation of the two-loop corrections to the
amplitude for the scattering of four fermions in Quantum Electrodynamics, $f^-
+ f^+ + F^- + F^+ \to 0$, with $f$ and $F$ representing a massless and a
massive lepton, respectively. Dimensional regularization is employed to
evaluate the loop integrals. Ultraviolet divergences are removed by
renormalizing the coupling constant in the ${\overline{\text{MS}}}$-scheme, and
the lepton mass as well as the external fields in the on-shell scheme. The
analytic result for the renormalized amplitude is expressed as a Laurent series
around $d=4$ space-time dimensions, and contains Generalized Polylogarithms
of weight up to four. The structure of the residual infrared divergences of
the virtual amplitude is in agreement with the prediction of the Soft Collinear
Effective Theory. Our analytic results are an essential ingredient for the
computation of the scattering cross section for massive fermion-pair production
in massless fermion-pair annihilation, i.e. $f^- f^+ \to F^- F^+$, and crossing
related processes such as the elastic scattering $f F \to f F$, with up to
Next-to-Next-to-Leading-Order accuracy.
|
The topological structure of vacuum is the cornerstone of non-Abelian gauge
theories describing strong and electroweak interactions within the standard
model of particle physics. However, transitions between different topological
sectors of the vacuum (believed to be at the origin of the baryon asymmetry of
the Universe) have never been observed directly. An experimental observation of
such transitions in Quantum Chromodynamics (QCD) has become possible in
heavy-ion collisions, where the chiral magnetic effect converts the chiral
asymmetry (generated by topological transitions in hot QCD matter) into an
electric current, under the presence of the magnetic field produced by the
colliding ions. The Relativistic Heavy Ion Collider program on heavy-ion
collisions, such as those of the Zr-Zr and Ru-Ru isobars, thus has the potential to
uncover the topological structure of vacuum in a laboratory experiment. This
discovery would have far-reaching implications for the understanding of QCD,
the origin of the baryon asymmetry in the present-day Universe, and for other
areas, including condensed matter physics.
|
We present the first observation of instability in weakly magnetized,
pressure-dominated plasma Couette flow firmly in the Hall regime. Strong Hall
currents couple to a low frequency electromagnetic mode that is driven by
high-$\beta$ ($>1$) pressure profiles. Spectroscopic measurements show heating
(factor of 3) of the cold, unmagnetized ions via a resonant Landau damping
process. A linear theory of this instability is derived that predicts positive
growth rates at finite $\beta$ and shows the stabilizing effect of very large
$\beta$, in line with observations.
|
Using recent machine learning results that present an information-theoretic
perspective on underfitting and overfitting, we prove that deciding whether an
encodable learning algorithm will always underfit a dataset, even if given
unlimited training time, is undecidable. We discuss the importance of this
result and potential topics for further research, including
information-theoretic and probabilistic strategies for bounding learning
algorithm fit.
|
In this independent report fAshIon after fashion, we examine the development
of fAshIon (artificial intelligence (AI) in fashion) and explore its
potential to become a major disruptor of the fashion industry in the near
future. To do this, we investigate AI technologies used in the fashion industry
through several lenses. We summarise fAshIon studies conducted over the past
decade and categorise them into seven groups: Overview, Evaluation, Basic Tech,
Selling, Styling, Design, and Buying. The datasets mentioned in fAshIon
research have been consolidated on one GitHub page for ease of use. We analyse
the authors' backgrounds and the geographic regions treated in these studies to
determine the landscape of fAshIon research. The results of our analysis are
presented with an aim to provide researchers with a holistic view of research
in fAshIon. As part of our primary research, we also review a wide range of
cases of applied fAshIon in the fashion industry and analyse their impact on
the industry, markets and individuals. We also identify the challenges
presented by fAshIon and suggest that these may form the basis for future
research. Finally, we show that many opportunities exist for the use of AI in
fashion, which can transform the fashion industry through embedded AI
technologies and boost profits.
|
Let $G$ be a semisimple simply connected complex algebraic group. Let $U$ be
the unipotent radical of a Borel subgroup in $G$. We describe the coordinate
rings of $U$ (resp., $G/U$, $G$) in terms of two (resp., four, eight)
birational charts introduced in [L94, L19] in connection with the study of
total positivity.
|
A novel observable measuring the $C\!P$ asymmetry in multi-body decays of
heavy mesons, which is called the forward-backward asymmetry induced $C\!P$
asymmetry (FBI-$C\!P$A), $A_{CP}^{FB}$, is introduced. This observable has the
dual advantages that 1) it can isolate the $C\!P$ asymmetry associated with the
interference of the $S$- and $P$-wave amplitude from that associated with the
$S$- or $P$-wave amplitude alone; 2) it can effectively almost double the
statistics compared to the conventionally defined regional $C\!P$ asymmetry.
We also suggest performing measurements of FBI-$C\!P$A in some three-body
decay channels of charm and beauty mesons.
|
Chv\'{a}tal conjectured in 1973 the existence of some constant $t$ such that
all $t$-tough graphs with at least three vertices are hamiltonian. While the
conjecture has been proven for some special classes of graphs, it remains open
in general. We say that a graph is $(K_2 \cup 3K_1)$-free if it contains no
induced subgraph isomorphic to $K_2 \cup 3K_1$, where $K_2 \cup 3K_1$ is the
disjoint union of an edge and three isolated vertices. In this paper, we show
that every 3-tough $(K_2 \cup 3K_1)$-free graph with at least three vertices is
hamiltonian.
|
Decision trees provide a rich family of highly non-linear but efficient
models, due to which they continue to be the go-to family of predictive models
for practitioners across domains. But learning trees is challenging due to their
discrete decision boundaries. The state-of-the-art (SOTA) techniques resort to
(a) learning soft trees thereby losing logarithmic inference time; or (b) using
methods tailored to specific supervised learning settings, requiring access to
labeled examples and loss function. In this work, by leveraging techniques like
overparameterization and straight-through estimators, we propose a novel method
that enables accurate end-to-end gradient based tree training and can be
deployed in a variety of settings like offline supervised learning and online
learning with bandit feedback. Using extensive validation on standard
benchmarks, we demonstrate that our method provides the best of both worlds, i.e.,
it is competitive to, and in some cases more accurate than methods designed
specifically for the supervised settings; and in bandit settings, where most
existing tree learning techniques are not applicable, our models are still
accurate and significantly outperform the applicable SOTA methods.
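A minimal sketch of the straight-through ingredient (not the paper's full method; the single-split toy task is invented): the forward pass routes with a hard threshold while the backward pass uses a sigmoid surrogate's gradient, keeping the split parameters trainable end to end.

```python
# Straight-through estimator for a hard decision split (PyTorch).
import torch


class STERouting(torch.autograd.Function):
    @staticmethod
    def forward(ctx, logits):
        ctx.save_for_backward(logits)
        return (logits > 0).float()          # hard left/right decision

    @staticmethod
    def backward(ctx, grad_out):
        (logits,) = ctx.saved_tensors
        s = torch.sigmoid(logits)
        return grad_out * s * (1 - s)        # gradient of the soft surrogate


w = torch.randn(5, requires_grad=True)       # one split node's parameters
b = torch.zeros(1, requires_grad=True)
x = torch.randn(64, 5)
y = (x[:, 0] > 0.2).float()                  # toy labels

for _ in range(200):
    loss = torch.nn.functional.mse_loss(STERouting.apply(x @ w + b), y)
    loss.backward()
    with torch.no_grad():
        for p in (w, b):
            p -= 0.1 * p.grad
            p.grad = None
print(float(loss))
```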
|
The Bender-Knuth involutions on semistandard Young tableaux are known to
coincide with the tableau switching on horizontal border strips of two adjacent
letters, together with the swapping of those letters. Motivated by this
coincidence and using the shifted tableau switching due to Choi, Nam and Oh
(2019), we consider a shifted version of the Bender-Knuth involutions and
define a shifted version of the Berenstein-Kirillov group (1995). Similarly to
the classical case, the shifted version of the Berenstein-Kirillov group also
acts on the straight-shaped shifted tableau crystals introduced by Gillespie,
Levinson and Purbhoo (2020), via partial Sch\"utzenberger involutions, thus
coinciding with the action of the cactus group on the same crystal, due to the
author. Following the works of Halacheva (2016, 2020), and Chmutov, Glick and
Pylyavskyy (2020), on the relation between the actions of the
Berenstein-Kirillov group and the cactus group on a crystal of straight-shaped
Young tableaux, we also show that the shifted Berenstein-Kirillov group is
isomorphic to a quotient of the cactus group. Not all the known relations that
hold in the classical Berenstein-Kirillov group need to be satisfied by the
shifted Bender-Knuth involutions, but the ones implying the relations of the
cactus group are verified. Hence, we have an alternative presentation for the
cactus group in terms of the shifted Bender-Knuth involutions. We also use the
shifted growth diagrams due to Thomas and Yong (2016) to provide an alternative
proof concerning the mentioned cactus group action.
|
Process-induced defects are the leading cause of discrepancies between
as-designed and as-manufactured additive manufacturing (AM) product behavior.
Especially for metal lattices, the variations in the printed geometry cannot be
neglected. Therefore, the evaluation of the influence of microstructural
variability on their mechanical behavior is crucial for the quality assessment
of the produced structures. Commonly, the as-manufactured geometry can be
obtained by computed tomography (CT). However, to incorporate all
process-induced defects into the numerical analysis is often computationally
demanding. Thus, this task is commonly limited to a predefined set of
considered variations, such as strut size or strut diameter. In this work, a
CT-based binary random field is proposed to generate statistically equivalent
geometries of periodic metal lattices. The proposed random field model in
combination with the Finite Cell Method (FCM), an immersed boundary method,
makes it possible to efficiently evaluate the influence of the underlying microstructure
on the variability of the mechanical behavior of AM products. Numerical
analysis of two lattices manufactured at different scales shows an excellent
agreement with experimental data. Furthermore, it provides a unique insight
into the effects of the process on the occurring geometrical variations and
final mechanical behavior.
|
Graphene quantum dots (GQDs) consist of a single layer up to dozens of graphene
layers and are smaller than 30 nm. GQDs are relatively new molecules that have
aroused great interest in research because of their exceptional and tunable
optical, electrical, chemical, and structural properties. In this work, we
report electrostatic potential energy maps, or molecular electrostatic
potential surfaces, that illustrate the charge distributions of GQDs three-dimensionally.
Knowledge of the charge distributions can be used to determine how GQDs
interact with one another. To analyze the distribution of molecular charges
accurately, a large number of electrostatic potential energy values must be
calculated. The best way to transmit these data is to visualize them as in the
electrostatic potential map. A ZINDO semi-empirical quantum chemistry method
then imposes the calculated data onto an electron density model of the GQDs
derived from the Schr\"odinger equation. To make the electrostatic potential
energy data of GQDs easy to interpret, a color spectrum, with red as the lowest
electrostatic potential energy value and blue as the highest, is employed to
convey the varying intensities of the electrostatic potential energy values.
The results of the four GQD models suggest that the energy of the ionization
potential lies in the range of -7.20 eV to -5.31 eV and the electron affinity
ranges from -2.65 to -0.24 eV.
|
This paper presents a novel architecture for model predictive control (MPC)
based indoor climate control of multi-zone buildings to provide energy
efficiency. Unlike prior works, we do not assume the availability of a
high-resolution multi-zone building model, which is challenging to obtain.
Instead, the architecture uses a low-resolution model of the building which is
divided into a small number of "meta-zones" that can be easily identified using
existing data-driven modeling techniques. The proposed architecture is
hierarchical. At the higher level, an MPC controller uses the low-resolution
model to make decisions for the air handling unit (AHU) and the meta-zones.
Since the meta-zones are fictitious, a lower level controller converts the
high-level MPC decisions into commands for the individual zones by solving a
projection problem that strikes a trade-off between two potentially conflicting
goals: the AHU-level decisions made by the MPC are respected while the climate
of the individual zones is maintained within the comfort bounds. The
performance of the proposed controller is assessed via simulations in a
high-fidelity simulation testbed and compared to that of a rule-based
controller that is used in practice. Simulations in multiple weather conditions
show the effectiveness of the proposed controller in terms of energy savings,
climate control, and computational tractability.
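The projection step can be illustrated with a small quadratic program; the one-step zone model, weights, and numbers below are assumptions for the example rather than the paper's formulation.

```python
# Projection of a high-level airflow decision onto individual zones,
# trading AHU-consistency against (softened) comfort bounds.
import cvxpy as cp
import numpy as np

n_zones = 4
m_meta = 2.0                              # total airflow assigned by the MPC
T = np.array([22.8, 24.9, 21.2, 23.5])    # current zone temperatures (deg C)
T_lo, T_hi = 21.0, 24.0                   # comfort band
k = 0.6                                   # assumed cooling effect per unit airflow

m = cp.Variable(n_zones, nonneg=True)     # per-zone airflow commands
slack = cp.Variable(n_zones, nonneg=True) # comfort-violation slack
T_next = T - k * m                        # simplistic one-step zone model

objective = cp.Minimize(cp.square(cp.sum(m) - m_meta) + 100 * cp.sum(slack))
constraints = [T_next <= T_hi + slack, T_next >= T_lo - slack]
cp.Problem(objective, constraints).solve()
print(np.round(m.value, 2))
```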
|
Let $K$ be an imaginary quadratic field, with associated quadratic character
$\alpha$. We construct an analytic $p$-adic $L$-function interpolating the
twisted adjoint $L$-values $L(1, \mathrm{ad}(f) \otimes \alpha)$ as $f$ varies
in a Hida family; these special values are non-critical in the sense of
Deligne. Our approach is based on Greenberg--Stevens' idea of $\Lambda$-adic
modular symbols, which considers cohomology with values in a space of $p$-adic
measures.
|
Safe robot navigation in complex and crowded situations is studied in this
work. When facing complex environments with both static and dynamic obstacles,
unicycle nonholonomic robots in existing works are prone to two extreme
behaviors: one is to fall into dead ends formed by obstacles, and the other is
to fail to complete the navigation task in time due to excessive collision
avoidance. As a result, we propose the R-SARL framework, which is based on a
deep reinforcement learning algorithm and where we augment the reward function
to avoid collisions. In particular, we estimate unsafe interactions between the
robot and obstacles in a look-ahead distance and penalize accordingly, so that
the robot can avoid collisions in advance and reach its destination
safely. Furthermore, we penalize frequent excessive detours to reduce the
timeout and thus improve the efficiency of navigation. We test our method in
various challenging and complex crowd navigation tasks. The results show that
our method improves navigation performance and outperforms state-of-the-art
methods.
|
Derivations of Born's rule are of interest because they tell us what is
connected to what in quantum mechanics if we ever need to change or modify it.
|
Terrestrial planets with large water inventories are likely ubiquitous and
will be among the first Earth-sized planets to be characterized with upcoming
telescopes. It has previously been argued that waterworlds, particularly those
possessing more than 1% H$_2$O, experience limited melt production and
outgassing due to the immense pressure overburden of their overlying oceans,
unless subject to high internal heating. But an additional, underappreciated
obstacle to outgassing on waterworlds is the high solubility of volatiles in
high-pressure melts. Here, we investigate this phenomenon and show that
volatile solubilities in melts probably prevent almost all magmatic outgassing
from waterworlds. Specifically, for Earth-like gravity and oceanic crust
composition, oceans or water ice exceeding 10-100 km in depth (0.1-1 GPa)
preclude the exsolution of volatiles from partial melt of silicates. This
solubility limit compounds the pressure overburden effect as large surface
oceans limit both melt production and degassing from any partial melt that is
produced. We apply these calculations to Trappist-1 planets to show that, given
current mass and radius constraints and implied surface water inventories,
Trappist-1f and -1g are unlikely to experience volcanic degassing. While other
mechanisms for interior-surface volatile exchange are not completely excluded,
the suppression of magmatic outgassing simplifies the range of possible
atmospheric evolution trajectories and has implications for interpretation of
ostensible biosignature gases, which we illustrate with a coupled model of
planetary interior-climate-atmosphere evolution.
|
For $\beta<\frac13$, we consider $C^\beta(\mathbb{T}^3\times [0,T])$ weak
solutions of the incompressible Euler equations that do not conserve the
kinetic energy. We prove that for such solutions the closed and non-empty set
of singular times $\mathcal{B}$ satisfies $\dim_{\mathcal{H}}(\mathcal{B})\geq
\frac{2\beta}{1-\beta}$. This lower bound on the Hausdorff dimension of the
singular set in time is intrinsically linked to the H\"older regularity of the
kinetic energy and we conjecture it to be sharp. As a first step in this
direction, for every $\beta<\beta'<\frac{1}{3}$ we are able to construct, via a
convex integration scheme, non-conservative $C^\beta(\mathbb{T}^3\times [0,T])$
weak solutions of the incompressible Euler system such that
$\dim_{\mathcal{H}}(\mathcal{B})\leq
\frac{1}{2}+\frac{1}{2}\frac{2\beta'}{1-\beta'}$. The structure of the wild
solutions that we build moreover allows us to deduce non-uniqueness of
$C^\beta(\mathbb{T}^3\times [0,T])$ weak solutions of the Cauchy problem for
Euler from every smooth initial datum.
|
Road detection or traversability analysis has been a key technique for a
mobile robot to traverse complex off-road scenes. The problem has been mainly
formulated in early works as a binary classification one, e.g., associating
pixels with road or non-road labels. However, understanding scenes with
fine-grained labels is needed for off-road robots, as scenes are very diverse,
and the varying mechanical performance of off-road robots may lead to different
definitions of safe regions to traverse. How to define and annotate
fine-grained labels to achieve meaningful scene understanding for a robot to
traverse off-road is still an open question. This research proposes a
contrastive learning based method. With a set of human-annotated anchor
patches, a feature representation is learned to discriminate regions with
different traversability, a method of fine-grained semantic segmentation and
mapping is subsequently developed for off-road scene understanding. Experiments
are conducted on a dataset of three driving segments that represent very
diverse off-road scenes. An anchor accuracy of 89.8% is achieved by evaluating
the matching with human-annotated image patches in cross-scene validation.
Examined by associated 3D LiDAR data, the fine-grained segments of visual
images are demonstrated to have different levels of toughness and terrain
elevation, which represents their semantic meaningfulness. The resultant maps
contain both fine-grained labels and confidence values, providing rich
information to support a robot traversing complex off-road scenes.
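As a sketch of the contrastive ingredient only (names and dimensions are illustrative), an InfoNCE-style loss over patch embeddings, with anchor patches supplying positives, can be written as:

```python
# InfoNCE-style contrastive loss on patch embeddings (PyTorch).
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, negatives, tau=0.1):
    """anchor: (D,), positive: (D,), negatives: (N, D)."""
    a = F.normalize(anchor, dim=0)
    pos = torch.exp(a @ F.normalize(positive, dim=0) / tau)
    neg = torch.exp(F.normalize(negatives, dim=1) @ a / tau).sum()
    return -torch.log(pos / (pos + neg))


emb = torch.randn(10, 128)               # toy patch embeddings
print(float(info_nce(emb[0], emb[1], emb[2:])))
```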
|
This paper presents new machine learning approaches to approximate the
solution of optimal stopping problems. The key idea of these methods is to use
neural networks, where the hidden layers are generated randomly and only the
last layer is trained, in order to approximate the continuation value. Our
approaches are applicable for high dimensional problems where the existing
approaches become increasingly impractical. In addition, since our approaches
can be optimized using a simple linear regression, they are very easy to
implement, and theoretical guarantees can be provided. In Markovian examples,
our randomized reinforcement learning approach, and in non-Markovian examples,
our randomized recurrent neural network approach, outperform the state-of-the-art
and other relevant machine learning approaches.
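A minimal sketch of the randomized-network idea for a Bermudan put under Black-Scholes dynamics (all parameters are illustrative): the hidden layer is drawn once at random and only the last layer is fit by least squares in a Longstaff-Schwartz-style backward induction.

```python
# Randomized neural network for optimal stopping: random hidden layer,
# last layer trained by linear regression on the continuation value.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 50_000, 50
S0, K, r, sigma, dt = 100.0, 100.0, 0.05, 0.2, 1.0 / 50

dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dW, axis=1))
payoff = lambda s: np.maximum(K - s, 0.0)

n_hidden = 50                                  # random, untrained hidden layer
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
features = lambda s: np.tanh((s[:, None] / S0) @ W + b)

value = payoff(S[:, -1])
for t in range(n_steps - 2, -1, -1):
    value = value * np.exp(-r * dt)
    itm = payoff(S[:, t]) > 0                  # regress on in-the-money paths
    X = features(S[itm, t])
    beta = np.linalg.lstsq(X, value[itm], rcond=None)[0]   # train last layer only
    exercise = payoff(S[itm, t]) > X @ beta
    value[itm] = np.where(exercise, payoff(S[itm, t]), value[itm])

print("estimated price:", np.exp(-r * dt) * value.mean())
```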
|
Simple closed curves in the plane can be mapped to nontrivial knots under the
action of origami foldings that allow the paper to self-intersect. We show that
all tame knot types may be produced in this manner, motivating the development of a
new knot invariant, the fold number, defined as the minimum number of creases
required to obtain an equivalent knot. We study this invariant, presenting a
bound on the fold number by the diagram stick number as well as a class of
torus knots with constant fold number. We also pursue a characterization of
those foldings which admit nontrivial knots, giving a proof that no "physically
realizable", or proper, foldings can admit nontrivial knots. A number of
questions are posed for further study.
|
Terahertz-based nano-networks are emerging as a groundbreaking technology
able to play a decisive role in future medical applications owing to their
ability to precisely quantify clinical indicators, such as the viral load in a
patient, or to predict septic shock or heart attacks before they occur. Due to the
extremely limited size of the devices composing these nano-networks, the use of
the Terahertz (THz) band has emerged as the enabling technology for their
communication. However, the characteristics of the THz band, which strictly
reduce the communication range inside the human body, together with the energy
limitations of nano-nodes, make the in-body deployment of nano-nodes a
challenging task. To overcome these problems, we propose a novel in-body
flow-guided nano-network architecture consisting of three different devices: i)
nano-node, ii) nano-router, and iii) bio-sensor. As the performance of this
type of nano-network has not been previously explored, a theoretical framework
capturing all its particularities is derived to properly model its behavior and
evaluate its feasibility in real medical applications. Employing this
analytical model, a thorough sensitivity study of its key parameters is
accomplished. Finally, we analyze the terahertz flow-guided nano-network design
to satisfy the requirements of several medical applications of interest.
|
In this paper, an energy-consistent finite difference scheme for the
compressible hydrodynamic and magnetohydrodynamic (MHD) equations is
introduced. For the compressible magnetohydrodynamics, an energy-consistent
finite difference formulation is derived using the product rule for the spatial
difference. The conservation properties of the internal, kinetic, and magnetic
energy equations can be satisfied at the discrete level without explicitly
solving the total energy equation. The shock waves and discontinuities in the
numerical solution are stabilized by nonlinear filtering schemes. An
energy-consistent discretization of the filtering schemes is also derived by
introducing the viscous and resistive heating rates. The resulting
energy-consistent formulation can be implemented with the various kinds of
central difference, nonlinear filtering, and time integration schemes. The
second- and fifth-order schemes are implemented based on the proposed
formulation. The conservation properties and the robustness of the present
schemes are demonstrated via one- and two-dimensional numerical tests. The
proposed schemes successfully handle the most stringent problems under extremely
high Mach number and low plasma beta conditions.
|
The high-temperature superconducting cuprates are governed by intertwined
spin, charge, and superconducting orders. While various state-of-the-art
numerical methods have demonstrated that these phases also manifest themselves
in doped Hubbard models, they differ on which is the actual ground state.
Finite cluster methods typically indicate that stripe order dominates while
embedded quantum cluster methods, which access the thermodynamic limit by
treating long-range correlations with a dynamical mean field, conclude that
superconductivity does. Here, we report the observation of fluctuating spin and
charge stripes in the doped single-band Hubbard model using a quantum Monte
Carlo dynamical cluster approximation (DCA) method. By resolving both the
fluctuating spin and charge orders using DCA, we demonstrate for the first time
that they survive in the doped Hubbard model in the thermodynamic limit. This
discovery also provides a new opportunity to study the influence of fluctuating
stripe correlations on the model's pairing correlations within a unified
numerical framework.
|
Mispronunciation detection and diagnosis (MDD) is designed to identify
pronunciation errors and provide instructive feedback to guide non-native
language learners, which is a core component in computer-assisted pronunciation
training (CAPT) systems. However, MDD often suffers from the data-sparsity
problem because collecting non-native data and the associated annotations
is time-consuming and labor-intensive. To address this issue, we explore a
fully end-to-end (E2E) neural model for MDD, which processes learners' speech
directly based on raw waveforms. Compared to conventional hand-crafted acoustic
features, raw waveforms retain more acoustic phenomena and potentially can help
neural networks discover better and more customized representations. To this
end, our MDD model adopts a so-called SincNet module to take a raw waveform as
input and convert it into a suitable vector representation sequence. SincNet
employs the cardinal sine (sinc) function to implement learnable bandpass
filters, drawing inspiration from the convolutional neural network (CNN). In
comparison to a standard CNN, SincNet has fewer parameters and is more amenable to human
interpretation. Extensive experiments are conducted on the L2-ARCTIC dataset,
which is a publicly-available non-native English speech corpus compiled for
research on CAPT. We find that the sinc filters of SincNet can be adapted
quickly for non-native language learners of different nationalities.
Furthermore, our model can achieve comparable mispronunciation detection
performance in relation to state-of-the-art E2E MDD models that take standard
handcrafted acoustic features as input. Besides that, our model also provides
considerable improvements on phone error rate (PER) and diagnosis accuracy.
|
To achieve higher coding efficiency, Versatile Video Coding (VVC) includes
several novel components, but at the expense of increasing decoder
computational complexity. These technologies at a low bit rate often create
contouring and ringing effects on the reconstructed frames and introduce
various blocking artifacts at block boundaries. To suppress those visual
artifacts, the VVC framework supports four post-processing filter operations.
The interoperation of these filters introduces extra signaling bits and
eventually becomes an overhead for higher-resolution video processing. In this
paper, a novel deep learning-based model is proposed for the sample adaptive
offset (SAO) nonlinear filtering operation, substantiating the merits of
intra-inter frame quality enhancement. We introduce a variable filter size
multi-scale CNN (MSCNN) to improve the denoising operation and incorporate
strided deconvolution for further computational improvement. We demonstrate that our
deconvolution model can effectively be trained by leveraging the high-frequency
edge features learned in a parallel fashion using feature fusion and residual
learning. The simulation results demonstrate that the proposed method
outperforms the baseline VVC method in BD-BR and BD-PSNR measurements and
achieves an average bit-rate saving of 3.762% on the standard video test sequences.
|
Autism spectrum disorder is a developmental disorder characterized by
significant social, communication, and behavioral challenges. Individuals
diagnosed with autism, intellectual, and developmental disabilities (AUIDD)
typically require long-term care and targeted treatment and teaching. Effective
treatment of AUIDD relies on efficient and careful behavioral observations done
by trained applied behavioral analysts (ABAs). However, this process
overburdens ABAs by requiring the clinicians to collect and analyze data,
identify the problem behaviors, conduct pattern analysis to categorize and
predict categorical outcomes, hypothesize responsiveness to treatments, and
detect the effects of treatment plans. Successful integration of digital
technologies into clinical decision-making pipelines and the advancements in
automated decision-making using Artificial Intelligence (AI) algorithms
highlight the importance of augmenting teaching and treatments using novel
algorithms and high-fidelity sensors. In this article, we present an
AI-Augmented Learning and Applied Behavior Analytics (AI-ABA) platform to
provide personalized treatment and learning plans to AUIDD individuals. By
defining systematic experiments along with automated data collection and
analysis, AI-ABA can promote self-regulative behavior using reinforcement-based
augmented or virtual reality and other mobile platforms. Thus, AI-ABA could
assist clinicians in focusing on precise, data-driven decisions and increase
the quality of individualized interventions for individuals with AUIDD.
|
A graph $G=(V,E)$ with geodesic distance $d(\cdot,\cdot)$ is said to be
resolved by a non-empty subset $R$ of its vertices when, for all vertices $u$
and $v$, if $d(u,r)=d(v,r)$ for each $r\in R$, then $u=v$. The metric dimension
of $G$ is the cardinality of its smallest resolving set. In this manuscript, we
present and investigate the notions of resolvability and metric dimension when
the geodesic distance is truncated with a certain threshold $k$; namely, we
measure distances in $G$ using the metric $d_k(u,v):=\min\{d(u,v),k+1\}$. We
denote the metric dimension of $G$ with respect to $d_k$ as $\beta_k(G)$. We
study the behavior of this quantity with respect to $k$ as well as the diameter
of $G$. We also characterize the truncated metric dimension of paths and cycles
as well as graphs with extreme metric dimension, including graphs of order $n$
such that $\beta_k(G)=n-2$ and $\beta_k(G)=n-1$. We conclude with a study of
various problems related to the truncated metric dimension of trees.
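For small graphs, $\beta_k(G)$ can be computed by brute force directly from the definition; the following sketch assumes networkx and a connected graph.

```python
# Brute-force truncated metric dimension beta_k(G).
from itertools import combinations

import networkx as nx


def beta_k(G, k):
    d = dict(nx.all_pairs_shortest_path_length(G))
    dk = lambda u, v: min(d[u][v], k + 1)      # truncated geodesic metric
    nodes = list(G.nodes)
    for size in range(1, len(nodes) + 1):
        for R in combinations(nodes, size):
            sigs = {tuple(dk(u, r) for r in R) for u in nodes}
            if len(sigs) == len(nodes):        # R resolves every vertex
                return size


G = nx.path_graph(7)
for k in range(1, 4):
    print(k, beta_k(G, k))
```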
|
We revisit the squeezed-limit non-Gaussianity in the single-field
non-attractor inflation models from the viewpoint of the cosmological soft
theorem. In the single-field attractor models, inflaton's trajectories with
different initial conditions effectively converge into a single trajectory in
the phase space, and hence there is only one \emph{clock} degree of freedom
(DoF) in the scalar part. Its long-wavelength perturbations can be absorbed
into the local coordinate renormalization and lead to the so-called
\emph{consistency relation} between $n$- and $(n+1)$-point functions. On the
other hand, if the inflaton dynamics deviates from the attractor behavior, its
long-wavelength perturbations cannot necessarily be absorbed and the
consistency relation is expected not to hold any longer. In this work, we
derive a formula for the squeezed bispectrum including the explicit correction
to the consistency relation, as a proof of its violation in the non-attractor
cases. First, one must recall that non-attractor inflation needs to be followed
by attractor inflation in a realistic case. Then, even if a specific
non-attractor phase is effectively governed by a single DoF of phase space
(represented by the exact ultra-slow-roll limit) and followed by a single-DoF
attractor phase, its transition phase necessarily involves two DoF in dynamics
and hence its long-wavelength perturbations cannot be absorbed into the local
coordinate renormalization. Thus, it can affect local physics, even taking
account of the so-called \emph{local observer effect}, as shown by the fact
that the bispectrum in the squeezed limit can go beyond the consistency
relation. More concretely, the observed squeezed bispectrum does not vanish in
general for long-wavelength perturbations exiting the horizon during a
non-attractor phase.
|
Our environment, whether at work, in public spaces, or at home, is becoming
more connected and increasingly responsive. Meal preparation, even when it
involves simply heating ready-made food, can be perceived as a complex process
for people with disabilities. This research aimed to prototype, using a
co-design approach, a Community Supported Appliance (CSA) by developing a Domain
Specific Language (DSL), precisely created for a semi-automated cooking
process. The DSL was shaped and expressed in the idiom of the users and allowed
the CSA to support independence for users while performing daily cooking
activities.
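Purely as a hypothetical illustration (the actual DSL was co-designed in the users' own idiom and is not reproduced here), a recipe in such a language, embedded in Python, might read:

```python
# Invented mini-DSL for a semi-automated cooking step sequence.
class Recipe:
    def __init__(self, name):
        self.name, self.steps = name, []

    def step(self, verb, **params):
        self.steps.append((verb, params))
        return self                      # fluent chaining

    def run(self, appliance):
        for verb, params in self.steps:
            appliance(verb, params)      # the CSA executes each step


def console_appliance(verb, params):
    print(f"CSA -> {verb}: {params}")


(Recipe("ready-made soup")
    .step("place", item="soup container", location="heating tray")
    .step("heat", power="medium", minutes=4)
    .step("wait", until="temperature >= 65C")
    .step("notify", message="Your soup is ready")
    .run(console_appliance))
```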
|
Nowadays, more and more clinical trials choose combinational agents as the
intervention to achieve better therapeutic responses. However, dose-finding for
combinational agents is much more complicated than for a single agent, as the
full order of combination dose toxicity is unknown. Therefore, regular phase I
designs are not able to identify the maximum tolerated dose (MTD) of
combinational agents. Motivated by such needs, plenty of novel phase I clinical
trial designs for combinational agents have been proposed. With so many
available designs, research that compares their performances, explores the
impacts of design parameters, and provides recommendations is very limited. Therefore, we conducted a
simulation study to evaluate multiple phase I designs that were proposed to
identify a single MTD for combinational agents under various scenarios. We also explored
influences of different design parameters. In the end, we summarized the pros
and cons of each design and provided a general guideline for design selection.
|