Observations of the synchrotron and inverse Compton emissions from
ultrarelativistic electrons in astrophysical sources can reveal a great deal
about the energy-momentum relations of those electrons. They can thus be used
to place bounds on the possibility of Lorentz violation in the electron sector.
Recent $\gamma$-ray telescope data allow the Lorentz-violating electron
$c^{\nu\mu}$ parameters to be constrained extremely well, so that all bounds
are at the level of $7\times 10^{-16}$ or better.
|
We define a renormalized volume for a region in an asymptotically hyperbolic
Einstein manifold that is bounded by a Graham-Witten minimal surface and the
conformal infinity. We prove a Gauss-Bonnet theorem for the renormalized
volume, and compute its derivative under variations of the minimal
hypersurface.
|
We present a symmetry-based scheme to create 0D second-order topological
modes in continuous 2D systems. We show that a metamaterial with a
\textit{p6m}-symmetric pattern exhibits two Dirac cones, which can be gapped in
two distinct ways by deforming the pattern. Combining the deformations in a
single system then emulates the 2D Jackiw-Rossi model of a topological vortex,
where 0D in-gap bound modes are guaranteed to exist. We exemplify our approach
with simple hexagonal, Kagome and honeycomb lattices. We furthermore formulate
a quantitative method to extract the topological properties from finite-element
simulations, which facilitates further optimization of the bound mode
characteristics. Our scheme enables the realization of second-order topology in
a wide range of experimental systems.
|
A superdiagonal composition is one in which the $i$-th part or summand is of
size greater than or equal to $i$. In this paper, we study the number of
palindromic superdiagonal compositions and colored superdiagonal compositions.
In particular, we give generating functions and explicit combinatorial formulas
involving binomial coefficients and Stirling numbers of the first kind.
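As a small illustration (ours, not drawn from the paper): the superdiagonal compositions of $n=6$, i.e., those whose $i$-th part is at least $i$, are
$$6, \qquad 1+5, \qquad 2+4, \qquad 3+3, \qquad 4+2, \qquad 1+2+3,$$
of which only $6$ and $3+3$ are palindromic.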
|
Relativistic AGN jets exhibit multi-timescale variability and a broadband
non-thermal spectrum extending from radio to gamma-rays. These highly
magnetized jets are prone to several magnetohydrodynamic (MHD) instabilities
during their propagation, which could trigger jet radiation and particle
acceleration. This work aims to study the implications of the relativistic
kink mode instability for the observed long-term variability in the context of
the twisting inhomogeneous jet model. To this end, we investigate the physical
configurations favorable for the formation of the kink mode instability by
performing high-resolution 3D relativistic MHD simulations of a portion of a
highly magnetized jet. In particular, we simulate a cylindrical plasma column
with Lorentz factor $\geq 5$ and study the effects of magnetization values and
axial wave-numbers with decreasing pitch on the onset and growth of the kink
instability. We confirm the impact of the axial wave-number on the dynamics of
the plasma column, including the growth of the instability, and further
investigate the connection between the dynamics of the plasma column and its
time-varying emission features. From our analysis, we find a correlated trend
between the growth rate of the kink mode instability and the flux variability
obtained from the simulated light curve.
|
Virtually anything can be and is ranked: people and animals, universities and
countries, words and genes. Rankings reduce the components of highly complex
systems into ordered lists, aiming to capture the fitness or ability of each
element to perform relevant functions, and are being used from socioeconomic
policy to knowledge extraction. A century of research has found regularities in
ranking lists across nature and society when data is aggregated over time. Far
less is known, however, about ranking dynamics, when the elements change their
rank in time. To bridge this gap, here we explore the dynamics of 30 ranking
lists in natural, social, economic, and infrastructural systems, comprising
millions of elements, whose temporal scales span from minutes to centuries. We
find that the flux governing the arrival of new elements into a ranking list
reveals systems with identifiable patterns of stability: in high-flux systems
only the top of the list is stable, while in low-flux systems the top and
bottom are equally stable. We show that two basic mechanisms - displacement and
replacement of elements - are sufficient to understand and quantify ranking
dynamics. The model uncovers two regimes in the dynamics of ranking lists: a
fast regime dominated by long-range rank changes, and a slow regime driven by
diffusion. Our results indicate that the balance between robustness and
adaptability characterizing the dynamics of complex systems might be governed
by random processes irrespective of the details of each system.
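As a toy illustration of the displacement/replacement picture (our sketch; the parameters and update rules are illustrative assumptions, not the paper's calibrated model):

```python
import random

def simulate_ranking(n=100, steps=10_000, p_replace=0.1, seed=0):
    """Toy ranking dynamics: with prob. p_replace a new element enters at a
    random rank (replacement/flux); otherwise a random element is displaced
    to a new random rank. Returns how often each rank changed occupant."""
    rng = random.Random(seed)
    ranking = list(range(n))          # element ids, index = rank
    next_id = n
    turnover = [0] * n                # occupant changes per rank
    for _ in range(steps):
        old = ranking[:]
        if rng.random() < p_replace:  # replacement: newcomer enters, bottom drops out
            ranking.insert(rng.randrange(n), next_id); next_id += 1
            ranking.pop()
        else:                         # displacement: move one element to a new rank
            e = ranking.pop(rng.randrange(n))
            ranking.insert(rng.randrange(n), e)
        turnover = [t + (a != b) for t, (a, b) in zip(turnover, zip(old, ranking))]
    return turnover                   # in high-flux runs, only the top ranks stay stable

t = simulate_ranking()
print(t[:5], t[-5:])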
|
Recent years have seen considerable research activities devoted to video
enhancement that simultaneously increases temporal frame rate and spatial
resolution. However, the existing methods either fail to explore the intrinsic
relationship between temporal and spatial information or lack flexibility in
the choice of final temporal/spatial resolution. In this work, we propose an
unconstrained space-time video super-resolution network, which can effectively
exploit space-time correlation to boost performance. Moreover, it has complete
freedom in adjusting the temporal frame rate and spatial resolution through the
use of the optical flow technique and a generalized pixelshuffle operation. Our
extensive experiments demonstrate that the proposed method not only outperforms
the state-of-the-art, but also requires far fewer parameters and less running
time.
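For reference, the standard integer-factor pixel-shuffle that the paper's generalized operation extends looks as follows (a minimal PyTorch sketch; the arbitrary-scale generalization is the paper's contribution and is not reproduced here):

```python
import torch
import torch.nn.functional as F

# Rearrange channels into space: (B, C*r*r, H, W) -> (B, C, H*r, W*r).
x = torch.randn(1, 3 * 2 * 2, 16, 16)   # r = 2 upscaling factor
y = F.pixel_shuffle(x, upscale_factor=2)
print(y.shape)                           # torch.Size([1, 3, 32, 32])
```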
|
In Zero-shot learning (ZSL), we classify unseen categories using textual
descriptions about their expected appearance when observed (class embeddings)
and a disjoint pool of seen classes, for which annotated visual data are
accessible. We tackle ZSL by casting a "vanilla" convolutional neural network
(e.g. AlexNet, ResNet-101, DenseNet-201 or DarkNet-53) into a zero-shot
learner. We do so by crafting the softmax classifier: we freeze its weights
using fixed seen classification rules, either semantic (seen class embeddings)
or visual (seen class prototypes). Then, we learn a data-driven and
ZSL-tailored feature representation on seen classes only to match these fixed
classification rules. Given that the latter seamlessly generalize towards
unseen classes, while requiring no actual unseen data to be computed, we can
perform ZSL inference by augmenting the pool of classification rules at test
time while keeping the very same representation we learnt: no re-training
or fine-tuning on unseen data is performed. The combination of semantic and
visual crafting (by simply averaging softmax scores) improves upon prior
state-of-the-art methods on benchmark datasets for standard, inductive ZSL.
After rebalancing predictions to better handle the joint inference over seen
and unseen classes, we outperform prior generalized, inductive ZSL methods as
well. Also, we gain interpretability at no additional cost, by using neural
attention methods (e.g., grad-CAM) as they are. Code will be made publicly
available.
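A minimal sketch of the crafting idea (ours; the class names, cosine-similarity choice, and module layout are illustrative assumptions, not the paper's exact recipe): the softmax weights are frozen to class embeddings, only the feature extractor is trained, and new rules can be appended at test time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CraftedZSL(nn.Module):
    def __init__(self, backbone, class_embeddings):
        super().__init__()
        self.backbone = backbone                      # trainable feature extractor
        W = F.normalize(class_embeddings, dim=1)
        self.register_buffer("rules", W)              # frozen classification rules

    def forward(self, x):
        f = F.normalize(self.backbone(x), dim=1)
        return f @ self.rules.t()                     # logits vs. current rule pool

# ZSL inference = augmenting the rule pool, with no re-training:
# model.rules = F.normalize(torch.cat([seen_emb, unseen_emb]), dim=1)
```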
|
Learning from implicit feedback is one of the most common cases in the
application of recommender systems. Generally speaking, interacted examples are
considered as positive while negative examples are sampled from uninteracted
ones. However, noisy examples are prevalent in real-world implicit feedback. A
noisy positive example could be interacted but it actually leads to negative
user preference. A noisy negative example which is uninteracted because of
unawareness of the user could also denote potential positive user preference.
Conventional training methods overlook these noisy examples, leading to
sub-optimal recommendations. In this work, we propose a novel framework to
learn robust recommenders from implicit feedback. Through an empirical study,
we find that different models make relatively similar predictions on clean
examples which denote the real user preference, while the predictions on noisy
examples vary much more across different models. Motivated by this observation,
we propose denoising with cross-model agreement (DeCA), which aims to minimize
the KL-divergence between the real user preference distributions parameterized
by two recommendation models while maximizing the likelihood of data
observation. We employ the proposed DeCA on four state-of-the-art
recommendation models and conduct experiments on four datasets. Experimental
results demonstrate that DeCA significantly improves recommendation performance
compared with normal training and other denoising methods. Code will be
open-sourced.
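A simplified rendering of the agreement idea (our sketch; DeCA's actual objective models the latent real-preference distributions and the data-observation likelihood more carefully than this):

```python
import torch
import torch.nn.functional as F

def bernoulli_kl(p, q, eps=1e-6):
    # KL divergence between element-wise Bernoulli(p) and Bernoulli(q).
    p, q = p.clamp(eps, 1 - eps), q.clamp(eps, 1 - eps)
    return (p * (p / q).log() + (1 - p) * ((1 - p) / (1 - q)).log()).mean()

def deca_style_loss(logits_a, logits_b, labels, beta=0.5):
    # Fit the observed implicit feedback (labels in {0., 1.}) with both models
    # while penalizing cross-model disagreement; noisy examples disagree most.
    p_a, p_b = torch.sigmoid(logits_a), torch.sigmoid(logits_b)
    fit = F.binary_cross_entropy(p_a, labels) + F.binary_cross_entropy(p_b, labels)
    return fit + beta * bernoulli_kl(p_a, p_b.detach())
```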
|
Meta-learning, or learning to learn, is a technique that can help to overcome
resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to
new tasks. We apply model-agnostic meta-learning (MAML) to the task of
cross-lingual dependency parsing. We train our model on a diverse set of
languages to learn a parameter initialization that can adapt quickly to new
languages. We find that meta-learning with pre-training can significantly
improve upon the performance of language transfer and standard supervised
learning baselines for a variety of unseen, typologically diverse, and
low-resource languages, in a few-shot learning setup.
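A compact first-order MAML sketch of the adapt-then-evaluate loop (ours; the paper builds on MAML for dependency parsing, and details such as the optimizers, episode construction, and second-order terms differ):

```python
import copy
import torch

def fomaml_step(model, loss_fn, support, query, meta_opt,
                inner_lr=1e-3, inner_steps=3):
    """One meta-update: adapt a clone on a language's support set, then push
    the clone's query-set gradient back onto the shared initialization."""
    learner = copy.deepcopy(model)
    opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                    # fast adaptation
        opt.zero_grad()
        loss_fn(learner, support).backward()
        opt.step()
    meta_opt.zero_grad()
    loss_fn(learner, query).backward()              # evaluate adapted params
    for p, q in zip(model.parameters(), learner.parameters()):
        p.grad = q.grad.clone() if q.grad is not None else None
    meta_opt.step()                                 # update the initialization
```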
|
This manuscript presents an algorithm for obtaining an approximation of
nonlinear, high-order, control-affine dynamical systems that leverages the
controlled trajectories as the central unit of information. As the fundamental
basis elements leveraged in the approximation, higher-order control occupation
kernels represent iterated integration after multiplication by a given
controller in a vector-valued reproducing kernel Hilbert space. In a
regularized regression setting, the unique optimizer for a particular
optimization problem is expressed as a linear combination of these occupation
kernels, which converts an infinite-dimensional optimization problem into a
finite-dimensional one through the representer theorem. Interestingly, the
vector-valued structure of the Hilbert space allows for simultaneous
approximation of the drift and control effectiveness components of the
control-affine system. Several experiments are performed to demonstrate the
effectiveness of the approach.
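For orientation, the classical scalar-valued representer theorem that the vector-valued construction generalizes reads (a standard statement, not specific to this manuscript): the regularized problem
$$\min_{f \in \mathcal{H}} \; \sum_{i=1}^{N} \ell\big(f(x_i), y_i\big) + \lambda \, \|f\|_{\mathcal{H}}^2$$
over an RKHS $\mathcal{H}$ with kernel $K$ admits a minimizer of the form $f^{*} = \sum_{i=1}^{N} \alpha_i \, K(\cdot, x_i)$, reducing the search to the finitely many coefficients $\alpha_i$.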
|
In 1930, Wilhelm Magnus introduced the so-called Freiheitssatz: let $F$ be a
free group with basis $\mathcal{X}$ and let $r$ be a cyclically reduced element
of $F$ that contains a basis element $x \in \mathcal{X}$; then every
non-trivial element of the normal closure of $r$ in $F$ contains the basis
element $x$. Equivalently, the subgroup freely generated by $\mathcal{X}
\backslash \{x\}$ embeds canonically into the quotient group $F / \langle \!
\langle r \rangle \! \rangle_{F}$. In this article, we introduce a
Freiheitssatz for amalgamated products $G=A \ast_{U} B$ of free groups $A$ and
$B$, where $U$ is a maximal cyclic subgroup in $A$ and $B$: if an element $r$
of $G$ is neither conjugate to an element of $A$ nor to an element of $B$, then
the factors $A$, $B$ embed canonically into $G / \langle \! \langle r \rangle
\! \rangle_{G}$.
|
In this article, we prove a $p$-adic analogue of the local invariant cycle
theorem for $H^2$ in mixed characteristics. As a result, for a smooth
projective variety $X$ over a $p$-adic local field $K$ with a proper flat
regular model $\mathcal{X}$ over $O_K$, we show that the natural map
$Br(\mathcal{X})\rightarrow Br(X_{\bar{K}})^{G_K}$ has a finite kernel and a
finite cokernel. We also prove that the natural map
$Hom(Br(X)/Br(K)+Br(\mathcal{X}), \mathbb{Q}/\mathbb{Z}) \rightarrow Alb_X(K)$
has a finite kernel and a finite cokernel, generalizing Lichtenbaum's duality
between Brauer groups and Jacobians for curves to arbitrary dimensions.
|
Quasiparticles and analog models are ubiquitous in the study of physical
systems. Little has been written about quasiparticles on manifolds with
anticommuting co-ordinates, yet they are capable of emulating a surprising
range of physical phenomena. This paper introduces a classical model of free
fields on a manifold with anticommuting co-ordinates, identifies the region of
superspace which the model inhabits, and shows that the model emulates the
behaviour of a five-species interacting quantum field theory on
$\mathbb{R}^{1,3}$. The Lagrangian of this model arises entirely from the
anticommutation property of the manifold co-ordinates.
|
We study the problem of controlling oscillations in closed loop by combining
positive and negative feedback in a mixed configuration. We develop a complete
design procedure to set the relative strength of the two feedback loops to
achieve steady oscillations. The proposed design takes advantage of dominance
theory and adopts classical harmonic balance and fast/slow analysis to regulate
the frequency of oscillations. The design is illustrated on a simple two-mass
system, a setting that reveals the potential of the approach for locomotion,
mimicking approaches based on central pattern generators.
|
On-device training for personalized learning is a challenging research
problem. Being able to quickly adapt deep prediction models at the edge is
necessary to better suit personal user needs. However, adaptation on the edge
poses some questions on both the efficiency and sustainability of the learning
process and on the ability to work under shifting data distributions. Indeed,
naively fine-tuning a prediction model only on the newly available data results
in catastrophic forgetting, a sudden erasure of previously acquired knowledge.
In this paper, we detail the implementation and deployment of a hybrid
continual learning strategy (AR1*) on a native Android application for
real-time on-device personalization without forgetting. Our benchmark, based on
an extension of the CORe50 dataset, shows the efficiency and effectiveness of
our solution.
|
Deep learning based methods hold state-of-the-art results in image denoising,
but remain difficult to interpret due to their construction from poorly
understood building blocks such as batch-normalization, residual learning, and
feature domain processing. Unrolled optimization networks propose an
interpretable alternative to constructing deep neural networks by deriving
their architecture from classical iterative optimization methods, without use
of tricks from the standard deep learning toolbox. So far, such methods have
demonstrated performance close to that of state-of-the-art models while using
their interpretable construction to achieve a comparably low learned parameter
count. In this work, we propose an unrolled convolutional dictionary learning
network (CDLNet) and demonstrate its competitive denoising performance in both
low and high parameter count regimes. Specifically, we show that the proposed
model outperforms the state-of-the-art denoising models when scaled to similar
parameter count. In addition, we leverage the model's interpretable
construction to propose an augmentation of the network's thresholds that
enables state-of-the-art blind denoising performance and near-perfect
generalization to noise levels unseen during training.
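The flavor of such unrolling can be seen in a bare-bones sketch of a few ISTA iterations for sparse coding (ours; CDLNet unrolls convolutional dictionary learning with learned per-layer dictionaries and thresholds, which this dense toy version only hints at):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(y, D, lam=0.1, n_layers=20):
    """n_layers unrolled ISTA iterations for y ~ D z with sparse z; in an
    unrolled network, D and the thresholds would be learned per layer."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2           # 1 / Lipschitz constant
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):
        z = soft_threshold(z - step * D.T @ (D @ z - y), step * lam)
    return z
```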
|
Differentially private (DP) stochastic convex optimization (SCO) is a
fundamental problem, where the goal is to approximately minimize the population
risk with respect to a convex loss function, given a dataset of i.i.d. samples
from a distribution, while satisfying differential privacy with respect to the
dataset. Most of the existing works in the literature of private convex
optimization focus on the Euclidean (i.e., $\ell_2$) setting, where the loss is
assumed to be Lipschitz (and possibly smooth) w.r.t. the $\ell_2$ norm over a
constraint set with bounded $\ell_2$ diameter. Algorithms based on noisy
stochastic gradient descent (SGD) are known to attain the optimal excess risk
in this setting.
In this work, we conduct a systematic study of DP-SCO for $\ell_p$-setups.
For $p=1$, under a standard smoothness assumption, we give a new algorithm with
nearly optimal excess risk. This result also extends to general polyhedral
norms and feasible sets. For $p\in(1, 2)$, we give two new algorithms, whose
central building block is a novel privacy mechanism, which generalizes the
Gaussian mechanism. Moreover, we establish a lower bound on the excess risk for
this range of $p$, showing a necessary dependence on $\sqrt{d}$, where $d$ is
the dimension of the space. Our lower bound implies a sudden transition of the
excess risk at $p=1$, where the dependence on $d$ changes from logarithmic to
polynomial, resolving an open question in prior work [TTZ15]. For $p\in (2,
\infty)$, noisy SGD attains optimal excess risk in the low-dimensional regime;
in particular, this proves the optimality of noisy SGD for $p=\infty$. Our work
draws upon concepts from the geometry of normed spaces, such as the notions of
regularity, uniform convexity, and uniform smoothness.
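For the Euclidean baseline mentioned above, one noisy SGD step looks schematically as follows (our sketch of the standard Gaussian-mechanism step; the paper's new mechanism for $p\in(1,2)$ generalizes the Gaussian noise and is not shown):

```python
import numpy as np

def noisy_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, sigma=1.0,
                   rng=np.random.default_rng(0)):
    # Clip each per-example gradient to l2-norm <= clip, average, then add
    # Gaussian noise calibrated to the sensitivity clip / batch_size.
    g = np.stack([gi * min(1.0, clip / (np.linalg.norm(gi) + 1e-12))
                  for gi in per_example_grads])
    noise = rng.normal(0.0, sigma * clip / len(g), size=w.shape)
    return w - lr * (g.mean(axis=0) + noise)
```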
|
Biological muscles have always attracted robotics researchers due to their
efficient capabilities in compliance, force generation, and mechanical work.
Many groups are working on the development of artificial muscles; however,
state-of-the-art methods still fall short in performance when compared with
their biological counterparts. Muscles with high force output are mostly
rigid, whereas traditional soft actuators take up much space and are limited
in strength and displacement. In this work, we aim to find a reasonable trade-off
between these features by mimicking the striated structure of skeletal muscles.
For that, we designed an artificial pneumatic myofibril composed of multiple
contraction units that combine stretchable and inextensible materials. Varying
the geometric parameters and the number of units in series provides flexible
adjustment of the desired muscle operation. We derived a mathematical model
that predicts the relationship between the input pneumatic pressure and the
generated output force. A detailed experimental study is conducted to validate
the performance of the proposed bio-inspired muscle.
|
Finding information about tourist places to visit is a challenging problem
that people face while visiting different countries. This problem is
accentuated when people are coming from different countries, speak different
languages, and are from all segments of society. In this context, visitors and
pilgrims face significant difficulties in finding the appropriate doaas when
visiting holy places. In this paper, we propose a mobile application that helps the user
find the appropriate doaas for a given holy place in an easy and intuitive
manner. Three different options are developed to achieve this goal: 1) manual
search, 2) GPS location to identify the holy places and therefore their
corresponding doaas, and 3) deep learning (DL) based method to determine the
holy place by analyzing an image taken by the visitor. Experiments show good
performance of the proposed mobile application in providing the appropriate
doaas for visited holy places.
|
So far, topological band theory has been discussed mainly for systems described
by eigenvalue problems. Here, we develop a topological band theory described by a
generalized eigenvalue problem (GEVP). Our analysis elucidates that
non-Hermitian topological band structures may emerge for systems described by a
GEVP with Hermitian matrices. The above result is verified by analyzing a
two-dimensional toy model where symmetry-protected exceptional rings (SPERs)
emerge although the matrices involved are Hermitian. Remarkably, these SPERs
are protected by emergent symmetry, which is unique to the systems described by
the GEVP. Furthermore, these SPERs elucidate the origin of the characteristic
dispersion of hyperbolic metamaterials which is observed in experiments.
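Schematically (our summary of the mechanism), the GEVP reads
$$H \, \psi_n = E_n \, S \, \psi_n, \qquad H = H^{\dagger}, \quad S = S^{\dagger},$$
which is equivalent to the ordinary eigenvalue problem for $S^{-1} H$. When $S$ is positive definite this operator can be symmetrized and the bands are real, but when $S$ is indefinite $S^{-1} H$ is generically non-Hermitian, so complex bands and exceptional structures can emerge even though $H$ and $S$ are both Hermitian.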
|
We provide a framework for proving convergence to the directed landscape, the
central object in the Kardar-Parisi-Zhang universality class. For last passage
models, we show that compact convergence to the Airy line ensemble implies
convergence to the Airy sheet. In i.i.d. environments, we show that Airy sheet
convergence implies convergence of distances and geodesics to their
counterparts in the directed landscape. Our results imply convergence of
classical last passage models and interacting particle systems. Our framework
is built on the notion of a directed metric, a generalization of metrics which
behaves better under limits. As a consequence of our results, we present a
solution to an old problem: the scaled longest increasing subsequence in a
uniform permutation converges to the directed geodesic.
|
It is known that an ideal triangulation of a compact $3$-manifold with
nonempty boundary is minimal if and only if it contains the minimum number of
edges among all ideal triangulations of the manifold. Therefore, any ideal
one-edge triangulation (i.e., an ideal singular triangulation with exactly one
edge) is minimal. Vesnin, Turaev, and the first author showed that an ideal
two-edge triangulation is minimal if no $3$-$2$ Pachner move can be applied. In
this paper we show that any of the so-called poor ideal three-edge
triangulations is minimal. We exploit this property to construct minimal ideal
triangulations for an infinite family of hyperbolic $3$-manifolds with totally
geodesic boundary.
|
Virtual reality (VR) head-mounted displays (HMD) appear to be effective
research tools, which may address the problem of ecological validity in
neuropsychological testing. However, their widespread implementation is
hindered by VR induced symptoms and effects (VRISE) and the lack of skills in
VR software development. This study offers guidelines for the development of VR
software in cognitive neuroscience and neuropsychology, by describing and
discussing the stages of the development of Virtual Reality Everyday Assessment
Lab (VR-EAL), the first neuropsychological battery in immersive VR. Techniques
for evaluating cognitive functions within a realistic storyline are discussed.
The utility of various assets in Unity, software development kits, and other
software are described so that cognitive scientists can overcome challenges
pertinent to VRISE and the quality of the VR software. In addition, this pilot
study attempts to evaluate VR-EAL in accordance with the necessary criteria for
VR software for research purposes. The VR neuroscience questionnaire (VRNQ;
Kourtesis et al., 2019b) was implemented to appraise the quality of the three
versions of VR-EAL in terms of user experience, game mechanics, in-game
assistance, and VRISE. Twenty-five participants aged between 20 and 45 years
with 12-16 years of full-time education evaluated various versions of VR-EAL.
The final version of VR-EAL achieved high scores in every sub-score of the VRNQ
and exceeded its parsimonious cut-offs. It also appeared to have better in-game
assistance and game mechanics, while its improved graphics substantially
increased the quality of the user experience and almost eradicated VRISE. The
results substantially support the feasibility of the development of effective
VR research and clinical software without the presence of VRISE during a
60-minute VR session.
|
We consider collisions between stars moving near the speed of light around
supermassive black holes (SMBHs), with mass
$M_{\bullet}\gtrsim10^8\,M_{\odot}$, without being tidally disrupted. The
overall rates for collisions taking place in the inner $\sim1$ pc of galaxies
with $M_{\bullet}=10^8,10^9,10^{10}\,M_{\odot}$ are $\Gamma\sim5,0.07,0.02$
yr$^{-1}$, respectively. We further calculate the differential collision rate
as a function of total energy released, energy released per unit mass lost, and
galactocentric radius. The most common collisions will release energies on the
order of $\sim10^{49}-10^{51}$ erg, with the energy distribution peaking at
higher energies in galaxies with more massive SMBHs. Depending on the host
galaxy mass and the depletion timescale, the overall rate of collisions in a
galaxy ranges from a small percentage to several times larger than that of
core-collapse supernovae (CCSNe) for the same host galaxy. In addition, we show
example light curves for collisions with varying parameters, and find that the
peak luminosity could reach or even exceed that of superluminous supernovae
(SLSNe), although with much shorter durations. Weaker events
could initially be mistaken for low-luminosity supernovae. In addition, we note
that these events will likely create streams of debris that will accrete onto
the SMBH and create accretion flares that may resemble tidal disruption events
(TDEs).
|
Custom currencies (ERC-20) on Ethereum are wildly popular, but they are
second class to the primary currency Ether. Custom currencies are more complex
and more expensive to handle than the primary currency as their accounting is
not natively performed by the underlying ledger, but instead in user-defined
contract code. Furthermore, and quite importantly, transaction fees can only be
paid in Ether.
In this paper, we focus on being able to pay transaction fees in custom
currencies. We achieve this by way of a mechanism permitting short term
liabilities to pay transaction fees in conjunction with offers of custom
currencies to compensate for those liabilities. This enables block producers to
accept custom currencies in exchange for settling liabilities of transactions
that they process.
We present formal ledger rules to handle liabilities together with the
concept of babel fees to pay transaction fees in custom currencies. We also
discuss how clients can determine what fees they have to pay, and we present a
solution to the knapsack problem variant that block producers have to solve in
the presence of babel fees to optimise their profits.
|
Control systems of interest are often invariant under Lie groups of
transformations. Given such a control system, assumed to not be static feedback
linearizable, a verifiable geometric condition is described and proven to
guarantee its dynamic feedback linearizability. Additionally, a systematic
procedure for obtaining all the system trajectories is shown to follow from
this condition. Besides smoothness and the existence of symmetry, no further
assumption is made on the local form of a control system, which is therefore
permitted to be fully nonlinear and time-varying. Likewise, no constraints are
imposed on the local form of the dynamic compensator. Particular attention is
given to those systems requiring non-trivial dynamic extensions; that is,
beyond augmentation by chains of integrators. Nevertheless, the results are
illustrated by an example of each type. Firstly, a control system that can be
dynamically linearized by a chain of integrators, and secondly, one which does
not possess any linearizing chains of integrators and for which a dynamic
feedback linearization is nevertheless derived. These systems are discussed in
some detail. The constructions have been automated in the Maple package
DifferentialGeometry.
|
Learning embedding spaces of suitable geometry is critical for representation
learning. In order for learned representations to be effective and efficient,
it is ideal that the geometric inductive bias aligns well with the underlying
structure of the data. In this paper, we propose Switch Spaces, a data-driven
approach for learning representations in product space. Specifically, product
spaces (or manifolds) are spaces of mixed curvature, i.e., a combination of
multiple Euclidean and non-Euclidean (hyperbolic, spherical) manifolds. To this
end, we introduce sparse gating mechanisms that learn to choose, combine, and
switch spaces, so that the spaces used can switch depending on the input data,
with specialization. Additionally, the proposed method is also efficient and has a
constant computational complexity regardless of the model size. Experiments on
knowledge graph completion and item recommendations show that the proposed
switch space achieves new state-of-the-art performances, outperforming pure
product spaces and recently proposed task-specific models.
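A minimal top-$k$ sparse gate of the kind described (our sketch; the class name, the scoring network, and $k$ are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TopKSpaceGate(nn.Module):
    """Pick k of n candidate spaces per input and mix their scores; unselected
    spaces get zero weight, so compute stays constant as n grows."""
    def __init__(self, in_dim, n_spaces, k=2):
        super().__init__()
        self.scorer = nn.Linear(in_dim, n_spaces)
        self.k = k

    def forward(self, x, space_scores):
        # space_scores: (batch, n_spaces), one score per component space
        logits = self.scorer(x)
        topv, topi = logits.topk(self.k, dim=-1)
        gates = torch.zeros_like(logits)
        gates.scatter_(-1, topi, topv.softmax(dim=-1))
        return (gates * space_scores).sum(dim=-1)
```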
|
Stellar parameters of 25 planet-hosting stars and abundances of Li, C, O, Na,
Mg, Al, S, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Ni, Zn, Y, Zr, Ba, Ce, Pr, Nd, Sm and
Eu, were studied based on homogeneous high resolution spectra and uniform
techniques. The iron abundance [Fe/H] and key elements (Li, C, O, Mg, Si)
indicative of the planet formation, as well as the dependencies of [El/Fe] on
$T_{\mathrm{cond}}$, were analyzed. The iron abundances determined in our
sample stars with detected massive planets range within
$-0.3<\mathrm{[Fe/H]}<0.4$. The behaviour of [C/Fe], [O/Fe], [Mg/Fe] and
[Si/Fe] relative to [Fe/H] is consistent with the Galactic Chemical Evolution
trends. The mean values of C/O and [C/O] are
$\langle \mathrm{C/O} \rangle = 0.48 \pm 0.07$ and
$\langle \mathrm{[C/O]} \rangle = -0.07 \pm 0.07$, which are slightly lower
than solar ones. The Mg/Si ratios range from 0.83 to 0.95 for four stars in
our sample and from 1.0 to 1.86 for the remaining 21 stars. Various slopes of
[El/Fe] vs. $T_{\mathrm{cond}}$ were found. The dependencies of the planetary
mass on metallicity, the lithium abundance, the C/O and Mg/Si ratios, and also
on the [El/Fe]-$T_{\mathrm{cond}}$ slopes were considered.
|
A digital quantum simulation of the Agassi model from nuclear physics with a
trapped-ion quantum platform is proposed and analyzed. The proposal is worked
out for the case with four different sites, to be implemented in a four-ion
system. Numerical simulations and analytical estimations are presented to
illustrate the feasibility of this proposal with current technology. The
proposed approach is fully scalable to a larger number of sites. The use of a
quantum correlation function as a probe to explore the quantum phases by
quantum simulating the time dynamics, with no need to compute the ground
state, is also studied. Evidence that the amplitude of the quantum Rabi
oscillations in this quantum simulation is correlated with the different
quantum phases of the system is given. This approach establishes an avenue for
the digital quantum simulation of useful models in nuclear physics with
trapped-ion systems.
|
Coordination and cooperation between humans and autonomous agents in
cooperative games raise interesting questions about human decision making and
behaviour changes. Here we report our findings from a group formation game in a
small-world network of different mixes of human and agent players, aiming to
achieve connected clusters of the same colour by swapping places with
neighbouring players using non-overlapping information. In the experiments the
human players are incentivized with rewards to prioritize their own cluster,
while the model of the agents' decision making is derived from our previous
experiment on a purely cooperative game between human players. The experiments
were performed by grouping the players in three different setups to investigate
the overall effect of having cooperative autonomous agents within teams. We
observe that human subjects adjust their behaviour when playing with
autonomous agents by being less risk averse, while keeping the overall
performance efficient by splitting their behaviour into selfish and cooperative
actions during the rounds of the game. Moreover, results from the two hybrid
human-agent setups suggest that the group composition affects the evolution of
clusters. Our findings indicate that in purely or less cooperative settings,
providing more control to humans could help in maximizing
the overall performance of hybrid systems.
|
In this paper, we construct high order energy dissipative and conservative
local discontinuous Galerkin methods for the Fornberg-Whitham type equations.
We give the proofs for the dissipation and conservation for related
conservative quantities. The corresponding error estimates are proved for the
proposed schemes. The capability of our schemes for different types of
solutions is shown via several numerical experiments. The dissipative schemes
have good behavior for shock solutions, while for long-time approximations,
the conservative schemes can reduce the shape error and the decay of amplitude
significantly.
|
Electromagnetic observations have provided strong evidence for the existence
of massive black holes in the center of galaxies, but their origin is still
poorly known. Different scenarios for the formation and evolution of massive
black holes lead to different predictions for their properties and merger
rates. LISA observations of coalescing massive black hole binaries could be
used to reverse engineer the problem and shed light on these mechanisms. In
this paper, we introduce a pipeline based on hierarchical Bayesian inference to
infer the mixing fraction between different theoretical models by comparing
them to LISA observations of massive black hole mergers. By testing this
pipeline against simulated LISA data, we show that it allows us to accurately
infer the properties of the massive black hole population as long as our
theoretical models provide a reliable description of the Universe. We also show
that measurement errors, including both instrumental noise and weak lensing
errors, have little impact on the inference.
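The core of such an inference can be sketched with a one-parameter grid posterior (ours; the actual pipeline handles selection effects, instrumental noise, and lensing errors, which this toy omits):

```python
import numpy as np

def mixing_fraction_posterior(events, pdf_a, pdf_b, grid=np.linspace(0, 1, 201)):
    """Posterior over the fraction f of mergers drawn from model A vs. model B,
    given perfectly measured source parameters `events` and the two models'
    predicted population densities pdf_a, pdf_b (callables)."""
    la, lb = pdf_a(events), pdf_b(events)
    log_post = np.array([np.log(f * la + (1 - f) * lb + 1e-300).sum() for f in grid])
    post = np.exp(log_post - log_post.max())        # flat prior on f
    return grid, post / np.trapz(post, grid)
```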
|
Active solids consume energy to allow for actuation, shape change, and wave
propagation not possible in equilibrium. Whereas active interfaces have been
realized across many experimental systems, control of three-dimensional (3D)
bulk materials remains a challenge. Here, we develop continuum theory and
microscopic simulations that describe a 3D soft solid whose boundary
experiences active surface stresses. The competition between active boundary
and elastic bulk yields a broad range of previously unexplored phenomena, which
are demonstrations of so-called active elastocapillarity. In contrast to thin
shells and vesicles, we discover that bulk 3D elasticity controls snap-through
transitions between different anisotropic shapes. These transitions meet at a
critical point, allowing a universal classification via Landau theory. The
active surface modifies elastic wave propagation to allow zero, or even
negative, group velocities. These phenomena offer robust principles for
programming shape change and functionality into active solids, from robotic
metamaterials down to shape-shifting nanoparticles.
|
The validity of the Riemann Hypothesis (RH) on the location of the
non-trivial zeros of the Riemann $\zeta$-function is directly related to the
growth of the Mertens function $M(x) \,=\,\sum_{k=1}^x \mu(k)$, where $\mu(k)$
is the M\"{o}bius coefficient of the integer $k$: the RH is indeed true if the
Mertens function goes asymptotically as $M(x) \sim x^{1/2 + \epsilon}$, where
$\epsilon$ is an arbitrary strictly positive quantity. This behavior can be
established through a new probabilistic approach based on the global
properties of the Mertens function. To this aim we derive a series of probabilistic
results concerning the prime number distribution along the series of
square-free numbers which shows that the Mertens function is subject to a
normal distribution. We also show that the validity of the RH also implies the
validity of the Generalized Riemann Hypothesis for the Dirichlet $L$-functions.
Next we study the local properties of the Mertens function, i.e. its variation
induced by each M\"{o}bius coefficient restricted to the square-free numbers.
We perform a massive statistical analysis on these coefficients, applying to
them a series of randomness tests of increasing precision and complexity, for a
total number of eighteen different tests. The successful outputs of all these
tests (each of them with a level of confidence of $99\%$ that all the
sub-sequences analyzed are indeed random) can be seen as impressive
"experimental" confirmations of the brownian nature of the restricted
M\"{o}bius coefficients and the probabilistic normal law distribution of the
Mertens function analytically established earlier. In view of the theoretical
probabilistic argument and the large battery of statistical tests, we can
conclude that while a violation of the RH is strictly speaking not impossible,
it is however extremely improbable.
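The objects involved are easy to compute directly; a minimal sketch (ours) of the Möbius function via a linear sieve, and of the growth of $M(x)$ against $x^{1/2}$:

```python
import numpy as np

def mobius_sieve(n):
    # Linear sieve for the Moebius function mu(1..n).
    mu = np.zeros(n + 1, dtype=np.int64); mu[1] = 1
    primes, is_comp = [], np.zeros(n + 1, dtype=bool)
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i); mu[i] = -1
        for p in primes:
            if i * p > n: break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0; break
            mu[i * p] = -mu[i]
    return mu

N = 10**6
M = np.cumsum(mobius_sieve(N)[1:])       # M[x-1] = sum_{k<=x} mu(k)
x = np.arange(1, N + 1)
print(np.max(np.abs(M) / np.sqrt(x)))    # stays of order one on this range
```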
|
Tracking humans in crowded video sequences is an important constituent of
visual scene understanding. Increasing crowd density challenges visibility of
humans, limiting the scalability of existing pedestrian trackers to higher
crowd densities. For that reason, we propose to revitalize head tracking with
Crowd of Heads Dataset (CroHD), consisting of 9 sequences of 11,463 frames with
over 2,276,838 heads and 5,230 tracks annotated in diverse scenes. For
evaluation, we propose a new metric, IDEucl, to measure an algorithm's
efficacy in preserving a unique identity for the longest stretch in image
coordinate space, thus building a correspondence between pedestrian crowd
motion and the performance of a tracking algorithm. Moreover, we also propose a
new head detector, HeadHunter, which is designed for small head detection in
crowded scenes. We extend HeadHunter with a Particle Filter and a color
histogram based re-identification module for head tracking. To establish this
as a strong baseline, we compare our tracker with existing state-of-the-art
pedestrian trackers on CroHD and demonstrate superiority, especially in
identity preserving tracking metrics. With a light-weight head detector and a
tracker which is efficient at identity preservation, we believe our
contributions will prove useful in advancing pedestrian tracking in dense
crowds.
|
Physical isolation, so called air-gapping, is an effective method for
protecting security-critical computers and networks. While it might be possible
to introduce malicious code through the supply chain, insider attacks, or
social engineering, communicating with the outside world is prevented.
Different approaches to breach this essential line of defense have been
developed based on electromagnetic, acoustic, and optical communication
channels. However, all of these approaches are limited in either data rate or
distance, and frequently offer only exfiltration of data. We present a novel
approach to infiltrate data to and exfiltrate data from air-gapped systems
without any additional hardware on-site. By aiming lasers at already built-in
LEDs and recording their response, we are the first to enable a long-distance
(25 m), bidirectional, and fast (18.2 kbps in & 100 kbps out) covert communication
channel. The approach can be used against any office device that operates LEDs
at the CPU's GPIO interface.
|
Neural architecture search (NAS) is a hot topic in the field of automated
machine learning and outperforms humans in designing neural architectures on
quite a few machine learning tasks. Motivated by the natural representation
form of neural networks by the Cartesian genetic programming (CGP), we propose
an evolutionary approach to NAS based on CGP, called CGPNAS, to solve sentence
classification tasks. To evolve the architectures under the framework of CGP,
the operations such as convolution are identified as the types of function
nodes of CGP, and the evolutionary operations are designed based on
Evolutionary Strategy. The experimental results show that the searched
architectures are comparable with the performance of human-designed
architectures. We verify the ability of domain transfer of our evolved
architectures. The transfer experimental results show that the accuracy
deterioration is lower than 2-5%. Finally, the ablation study identifies the
Attention function as the single key function node and shows that Attention
with linear transformations alone can keep the accuracy similar to that of the
fully evolved architectures, which is worthy of investigation in the future.
|
We formulate a theory of shape valid for objects of arbitrary dimension whose
contours are path connected. We apply this theory to the design and modeling of
viable trajectories of complex dynamical systems. Infinite families of
qualitatively similar shapes are constructed giving as input a finite ordered
set of characteristic points (landmarks) and the value of a continuous
parameter $\kappa \in (0,\infty)$. We prove that all shapes belonging to the
same family are located within the convex hull of the landmarks. The theory is
constructive in the sense that it provides a systematic means to build a
mathematical model for any shape taken from the physical world. We illustrate
this with a variety of examples: (chaotic) time series, plane curves, space
filling curves, knots and strange attractors.
|
A longstanding issue in the study of quantum chromodynamics (QCD) is its
behavior at nonzero baryon density, which has implications for many areas of
physics. The path integral has a complex integrand when the quark chemical
potential is nonzero and therefore has a sign problem, but it also has a
generalized $\mathcal PT$ symmetry. We review some new approaches to $\mathcal
PT$-symmetric field theories, including both analytical techniques and methods
for lattice simulation. We show that $\mathcal PT$-symmetric field theories
with more than one field generally have a much richer phase structure than
their Hermitian counterparts, including stable phases with patterning behavior.
The case of a $\mathcal PT$-symmetric extension of a $\phi^4$ model is
explained in detail. The relevance of these results to finite density QCD is
explained, and we show that a simple model of finite density QCD exhibits a
patterned phase in its critical region.
|
In the mean field regime, neural networks are appropriately scaled so that as
the width tends to infinity, the learning dynamics tends to a nonlinear and
nontrivial dynamical limit, known as the mean field limit. This lends a way to
study large-width neural networks via analyzing the mean field limit. Recent
works have successfully applied such analysis to two-layer networks and
provided global convergence guarantees. The extension to multilayer ones
however has been a highly challenging puzzle, and little is known about the
optimization efficiency in the mean field regime when there are more than two
layers.
In this work, we prove a global convergence result for unregularized
feedforward three-layer networks in the mean field regime. We first develop a
rigorous framework to establish the mean field limit of three-layer networks
under stochastic gradient descent training. To that end, we propose the idea of
a \textit{neuronal embedding}, which comprises a fixed probability space
that encapsulates neural networks of arbitrary sizes. The identified mean field
limit is then used to prove a global convergence guarantee under suitable
regularity and convergence mode assumptions, which -- unlike previous works on
two-layer networks -- does not rely critically on convexity. Underlying the
result is a universal approximation property, natural of neural networks, which
importantly is shown to hold at \textit{any} finite training time (not
necessarily at convergence) via an algebraic topology argument.
|
The topic of my research is "Learning and Upgrading in Global Value Chains:
An Analysis of India's Manufacturing Sector". To analyse India's learning and
upgrading through position, functions, specialisation & value addition of
manufacturing GVCs, it is required to quantify the extent, drivers, and impacts
of India's Manufacturing links in GVCs. I have transformed this overall broad
objective into three fundamental questions: (1) What is the extent of India's
Manufacturing Links in GVCs? (2) What are the determinants of India's
Manufacturing Links in GVCs? (3) What are the impacts of India's Manufacturing
Links in GVCs? These three questions correspond to the three chapters of my
PhD thesis.
|
We calculate single-logarithmic corrections to the small-$x$ flavor-singlet
helicity evolution equations derived recently in the double-logarithmic
approximation. The new single-logarithmic part of the evolution kernel sums up
powers of $\alpha_s \, \ln (1/x)$, which are an important correction to the
dominant powers of $\alpha_s \, \ln^2 (1/x)$ summed up by the
double-logarithmic kernel at small values of Bjorken $x$ and with $\alpha_s$
the strong coupling constant. The single-logarithmic terms arise separately
from either the longitudinal or transverse momentum integrals. Consequently,
the evolution equations we derive simultaneously include the small-$x$
evolution kernel and the leading-order polarized DGLAP splitting functions. We
further enhance the equations by calculating the running coupling corrections
to the kernel.
|
In this note we prove that almost cap sets $A \subset \mathbb{F}_q^n$, i.e.,
subsets of $\mathbb{F}_q^n$ that do not contain too many arithmetic
progressions of length three, satisfy $|A| < c_q^n$ for some $c_q < q$. As
a corollary we prove a multivariable analogue of the Ellenberg-Gijswijt theorem.
|
Given a graph $G$, a dominating set of $G$ is a set $S$ of vertices such that
each vertex not in $S$ has a neighbor in $S$. The domination number of $G$,
denoted $\gamma(G)$, is the minimum size of a dominating set of $G$. The
independent domination number of $G$, denoted $i(G)$, is the minimum size of a
dominating set of $G$ that is also independent. Note that every graph has an
independent dominating set, as a maximal independent set is equivalent to an
independent dominating set.
Let $G$ be a connected $k$-regular graph that is not $K_{k, k}$ where $k\geq
4$. Generalizing a result by Lam, Shiu, and Sun, we prove that $i(G)\le
\frac{k-1}{2k-1}|V(G)|$, which is tight for $k = 4$. This answers a question by
Goddard et al. in the affirmative. We also show that $\frac{i(G)}{\gamma(G)}
\le \frac{k^3-3k^2+2}{2k^2-6k+2}$, strengthening upon a result of Knor,
\v{S}krekovski, and Tepeh. In addition, we prove that a graph $G'$ with maximum
degree at most $4$ satisfies $i(G') \le \frac{5}{9}|V(G')|$, which is also
tight.
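The observation that a maximal independent set is automatically dominating is constructive; a greedy sketch (ours) on the 5-cycle:

```python
def greedy_independent_dominating_set(adj):
    # Greedily build a maximal independent set; maximality forces every
    # unchosen vertex to have a chosen neighbour, i.e. the set dominates.
    chosen, blocked = set(), set()
    for v in sorted(adj):
        if v not in blocked:
            chosen.add(v)
            blocked |= {v} | adj[v]
    return chosen

c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(greedy_independent_dominating_set(c5))   # {0, 2}: independent and dominating
```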
|
Sunquakes are helioseismic power enhancements initiated by solar flares, but
not all flares generate sunquakes. It is curious why some flares cause
sunquakes while others do not. Here we propose a hypothesis to explain the
disproportionate occurrence of sunquakes: during a flare's impulsive phase,
when the flare's impulse acts upon the photosphere, delivered by shock waves,
energetic particles from the higher atmosphere, or by a downward Lorentz
force, a sunquake tends to occur if the background oscillation at the flare
footpoint happens to be moving downward, in the same direction as the impulse
from above. To verify this hypothesis, we select 60 strong flares in Solar Cycle 24,
and examine the background oscillatory velocity at the sunquake sources during
the flares' impulsive phases. Since the Doppler velocity observations at
sunquake sources are usually corrupted during the flares, we reconstruct the
oscillatory velocity at the flare sites using the helioseismic holography
method with an observation-based Green's function. A total of 24 flares are found to
be sunquake active, giving a total of 41 sunquakes. It is also found that in
3-5 mHz frequency band, 25 out of 31 sunquakes show net downward oscillatory
velocities during the flares' impulsive phases, and in 5-7 mHz frequency band,
33 out of 38 sunquakes show net downward velocities. These results support the
hypothesis that a sunquake more likely occurs when a flare impacts a
photospheric area with a downward background oscillation.
|
Multivariate max-stable processes are important for both theoretical
investigations and various statistical applications, motivated by the fact
that these are limiting processes, for instance, of stationary multivariate
regularly varying time series [1]. In this contribution we explore the
relation between homogeneous functionals and multivariate max-stable processes
and discuss the connections between multivariate max-stable processes and
zonoid/max-zonoid equivalence. We illustrate our results by considering
Brown-Resnick and Smith processes.
|
Radiation therapy treatment planning is a complex process, as the target dose
prescription and normal tissue sparing are conflicting objectives. Automated
and accurate dose prediction for radiation therapy planning is in high demand.
In this study, we propose a novel learning-based ensemble approach, named
LE-NAS, which integrates neural architecture search (NAS) with knowledge
distillation for 3D radiotherapy dose prediction. Specifically, the prediction
network first exhaustively searches each block from enormous architecture
space. Then, multiple architectures are selected with promising performance and
diversity. To reduce the inference time, we adopt the teacher-student paradigm
by treating the combination of diverse outputs from multiple searched networks
as supervisions to guide the student network training. In addition, we apply
adversarial learning to optimize the student network to recover the knowledge
in teacher networks. To the best of our knowledge, we are the first to
investigate the combination of NAS and knowledge distillation. The proposed
method has been evaluated on the public OpenKBP dataset, and experimental
results demonstrate the effectiveness of our method and its superior
performance to the state-of-the-art method.
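The teacher-student supervision can be sketched as follows (ours; the adversarial component and the NAS search itself are omitted, and the mixing weight is an assumption):

```python
import torch
import torch.nn.functional as F

def ensemble_distill_loss(student_dose, teacher_doses, target_dose, alpha=0.5):
    """Student regresses toward the mean of the diverse searched teachers'
    3D dose predictions, blended with the ground-truth dose map."""
    teacher_mean = torch.stack(teacher_doses).mean(dim=0).detach()
    return (alpha * F.mse_loss(student_dose, teacher_mean)
            + (1 - alpha) * F.mse_loss(student_dose, target_dose))
```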
|
We analyze the dynamics of a single spiral galaxy from a general relativistic
viewpoint. We employ the known family of stationary axially-symmetric solutions
to Einstein gravity coupled with dust in order to model the halo external to
the bulge. In particular, we generalize the known results of Balasin and
Grumiller, relaxing the condition of co-rotation, thus including non
co-rotating dust. This further highlights the discrepancy between Newtonian
theory of gravity and general relativity at low velocities and energy
densities. We investigate the role of dragging in simulating dark matter
effects. In particular, we show that non co-rotance further reduce the amount
of energy density required to explain the rotation curves for spiral galaxies.
|
A biopsy is the only diagnostic procedure for accurate histological
confirmation of breast cancer. When sonographic placement is not feasible, a
Magnetic Resonance Imaging(MRI)-guided biopsy is often preferred. The lack of
real-time imaging information and the deformations of the breast make it
challenging to bring the needle precisely towards the tumour detected in
pre-interventional Magnetic Resonance (MR) images. The current manual
MRI-guided biopsy workflow is inaccurate and would benefit from a technique
that allows real-time tracking and localisation of the tumour lesion during
needle insertion. This paper proposes a robotic setup and software architecture
to assist the radiologist in targeting MR-detected suspicious tumours. The
approach benefits from image fusion of preoperative images with intraoperative
optical tracking of markers attached to the patient's skin. A hand-mounted
biopsy device has been constructed with an actuated needle base to drive the
tip toward the desired direction. The steering commands may be provided both by
user input and by computer guidance. The workflow is validated through phantom
experiments. On average, the suspicious breast lesion is targeted with a radius
down to 2.3 mm. The results suggest that robotic systems taking into account
breast deformations have the potential to tackle this clinical challenge.
|
This paper develops \emph{iterative Covariance Regulation} (iCR), a novel
method for active exploration and mapping for a mobile robot equipped with
on-board sensors. The problem is posed as optimal control over the $SE(3)$ pose
kinematics of the robot to minimize the differential entropy of the map
conditioned on the potential sensor observations. We introduce a differentiable
field of view formulation, and derive iCR via the gradient descent method to
iteratively update an open-loop control sequence in continuous space so that
the covariance of the map estimate is minimized. We demonstrate autonomous
exploration and uncertainty reduction in simulated occupancy grid environments.
|
Blockchain was mainly introduced for secure transactions in connection with
the mining of cryptocurrency Bitcoin. This article discusses the fundamental
concepts of blockchain technology and its components, such as block header,
transaction, smart contracts, etc. Blockchain uses distributed databases,
so this article also explains the advantages of a distributed blockchain over
a centrally located database. Depending on the application, Blockchain is
broadly categorized into two types: Permissionless and Permissioned. This
article elaborates on these two categories as well. Further, it covers the consensus
mechanism and its working along with an overview of the Ethereum platform.
Blockchain technology has proved to be a remarkable technique for providing
security to IoT devices. An illustration of how Blockchain can be useful for
IoT devices is given, and a few applications are presented to explain the
working of Blockchain with IoT.
|
In this paper, we propose a novel learning-based polygonal point set tracking
method. Compared to existing video object segmentation~(VOS) methods that
propagate pixel-wise object mask information, we propagate a polygonal point
set over frames.
Specifically, the set is defined as a subset of points in the target contour,
and our goal is to track corresponding points on the target contour. Those
outputs enable us to apply various visual effects such as motion tracking, part
deformation, and texture mapping. To this end, we propose a new method to track
the corresponding points between frames by the global-local alignment with
delicately designed losses and regularization terms. We also introduce a novel
learning strategy using synthetic and VOS datasets that makes it possible to
tackle the problem without developing the point correspondence dataset. Since
the existing datasets are not suitable to validate our method, we build a new
polygonal point set tracking dataset and demonstrate the superior performance
of our method over the baselines and existing contour-based VOS methods. In
addition, we present visual-effects applications of our method on part
distortion and text mapping.
|
We investigate the predictive performance of two novel CNN-DNN machine
learning ensemble models in predicting county-level corn yields across the US
Corn Belt (12 states). The developed data set is a combination of management,
environment, and historical corn yields from 1980-2019. Two scenarios for
ensemble creation are considered: homogeneous and heterogeneous ensembles. In
homogeneous ensembles, the base CNN-DNN models are all the same, but they are
generated with a bagging procedure to ensure they exhibit a certain level of
diversity. Heterogeneous ensembles are created from different base CNN-DNN
models which share the same architecture but have different levels of depth.
Three types of ensemble creation methods were used to create several ensembles
for either of the scenarios: Basic Ensemble Method (BEM), Generalized Ensemble
Method (GEM), and stacked generalized ensembles. Results indicated that both
designed ensemble types (heterogeneous and homogeneous) outperform the
ensembles created from five individual ML models (linear regression, LASSO,
random forest, XGBoost, and LightGBM). Furthermore, by introducing improvements
over the heterogeneous ensembles, the homogeneous ensembles provide the most
accurate yield predictions across US Corn Belt states. This model could make
2019 yield predictions with a root mean square error of 866 kg/ha, equivalent
to 8.5% relative root mean square error, and could successfully explain about 77% of the
spatio-temporal variation in the corn grain yields. The significant predictive
power of this model can be leveraged for designing a reliable tool for corn
yield prediction which will, in turn, assist agronomic decision-makers.
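For reference, the classical GEM combination named above solves a small constrained least-squares problem on validation errors (a standard sketch; the paper's exact variant may differ):

```python
import numpy as np

def gem_weights(val_preds, y_val):
    """val_preds: (m, n) predictions of m base models on n validation samples.
    Returns weights minimizing w' C w subject to sum(w) = 1, where C is the
    covariance of model errors (Generalized Ensemble Method)."""
    E = val_preds - y_val                 # (m, n) error matrix
    C = E @ E.T / y_val.size
    Cinv = np.linalg.pinv(C)
    return Cinv.sum(axis=1) / Cinv.sum()  # ensemble prediction: w @ test_preds
```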
|
Annotating the right set of data amongst all available data points is a key
challenge in many machine learning applications. Batch active learning is a
popular approach to address this, in which batches of unlabeled data points are
selected for annotation, while an underlying learning algorithm gets
subsequently updated. Increasingly larger batches are particularly appealing in
settings where data can be annotated in parallel, and model training is
computationally expensive. A key challenge here is scale - typical active
learning methods rely on diversity techniques, which select a diverse set of
data points to annotate from an unlabeled pool. In this work, we introduce
Active Data Shapley (ADS) -- a filtering layer for batch active learning that
significantly increases the efficiency of active learning by pre-selecting,
using a linear time computation, the highest-value points from an unlabeled
dataset. Using the notion of the Shapley value of data, our method estimates
the value of unlabeled data points with regards to the prediction task at hand.
We show that ADS is particularly effective when the pool of unlabeled data
exhibits real-world caveats: noise, heterogeneity, and domain shift. We run
experiments demonstrating that when ADS is used to pre-select the
highest-ranking portion of an unlabeled dataset, the efficiency of
state-of-the-art batch active learning methods increases by an average factor
of 6x, while preserving performance effectiveness.
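The linear-time estimator itself is not spelled out in this abstract; for
intuition only, the sketch below shows the classic (and much slower) Monte
Carlo permutation estimator of the data Shapley value that such methods
approximate. The utility function, e.g. validation accuracy of a model trained
on a subset, is an assumed input.

import random

def monte_carlo_data_shapley(points, utility, n_perms=100):
    # Permutation-sampling estimate of each point's Shapley value, where
    # utility(subset) -> score of a model trained on that subset.
    values = {p: 0.0 for p in points}
    for _ in range(n_perms):
        perm = random.sample(points, len(points))
        prev, subset = utility(frozenset()), []
        for p in perm:
            subset.append(p)
            cur = utility(frozenset(subset))
            values[p] += (cur - prev) / n_perms  # marginal contribution
            prev = cur
    return values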
|
In this work we address the problem of solving ill-posed inverse problems in
imaging where the prior is a variational autoencoder (VAE). Specifically we
consider the decoupled case where the prior is trained once and can be reused
for many different log-concave degradation models without retraining. Whereas
previous MAP-based approaches to this problem lead to highly non-convex
optimization algorithms, our approach computes the joint (space-latent) MAP
that naturally leads to alternate optimization algorithms and to the use of a
stochastic encoder to accelerate computations. The resulting technique (JPMAP)
performs Joint Posterior Maximization using an Autoencoding Prior. We show
theoretical and experimental evidence that the proposed objective function is
quite close to bi-convex. Indeed it satisfies a weak bi-convexity property
which is sufficient to guarantee that our optimization scheme converges to a
stationary point. We also highlight the importance of correctly training the
VAE using a denoising criterion, in order to ensure that the encoder
generalizes well to out-of-distribution images, without affecting the quality
of the generative model. This simple modification is key to providing
robustness to the whole procedure. Finally we show how our joint MAP
methodology relates to more common MAP approaches, and we propose a
continuation scheme that makes use of our JPMAP algorithm to provide more
robust MAP estimates. Experimental results also show the higher quality of the
solutions obtained by our JPMAP approach with respect to other non-convex MAP
approaches which more often get stuck in spurious local optima.
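A minimal sketch of the alternating scheme under an assumed Gaussian
degradation $y = Ax + n$: the x-step is an exact quadratic solve, the z-step
runs a few gradient steps on the latent objective, and the pretrained encoder
only supplies a warm start. The hyperparameters, the crude back-projection
A^T y, and the decoder/encoder interfaces are illustrative, not the paper's
exact formulation.

import torch

def jpmap(y, A, decoder, encoder, sigma=0.05, gamma=0.1, iters=50):
    # Joint (space-latent) MAP sketch:
    # min_{x,z} ||Ax-y||^2/(2 sigma^2) + ||x-D(z)||^2/(2 gamma^2) + ||z||^2/2
    n = A.shape[1]
    z = encoder(A.T @ y)  # warm start from a crude back-projection (assumption)
    M = A.T @ A / sigma**2 + torch.eye(n) / gamma**2
    for _ in range(iters):
        # x-step: the objective is quadratic in x, solved exactly
        x = torch.linalg.solve(M, A.T @ y / sigma**2 + decoder(z).detach() / gamma**2)
        # z-step: a few gradient steps on the latent part of the objective
        z = z.detach().requires_grad_(True)
        opt = torch.optim.Adam([z], lr=1e-2)
        for _ in range(10):
            loss = ((x - decoder(z)) ** 2).sum() / (2 * gamma**2) + (z ** 2).sum() / 2
            opt.zero_grad(); loss.backward(); opt.step()
    return x, z.detach()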
|
In this paper, we introduce and test our algorithm to create a road network
representation for city-scale active transportation simulation models. The
algorithm relies on open and universal data to ensure applicability for
different cities around the world. In addition to the major roads, their
geometries and the road attributes typically used in transport modelling (e.g.,
speed limit, number of lanes, permitted travel modes), the algorithm also
captures minor roads usually favoured by pedestrians and cyclists, along with
road attributes such as bicycle-specific infrastructure, traffic signals, and
road gradient. Furthermore, it simplifies the network's complex geometries and
merges parallel roads if applicable to make it suitable for large-scale
simulations. To examine the utility and performance of the algorithm, we used
it to create a network representation for Greater Melbourne, Australia and
compared the output with a network created using an existing transport
simulation toolkit along with another network from an existing city-scale
transport model from the Victorian government. Through simulation experiments
with these networks, we illustrated that our algorithm achieves a very good
balance between simulation accuracy and run-time. For routed walking and
cycling trips, it matches the accuracy of common network conversion tools in
terms of shortest-path travel distances, while being more than two times faster
when used for simulating different sample sizes. Therefore, our algorithm
offers a flexible solution for building
accurate and efficient road networks for city-scale active transport models for
different cities around the world.
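The pipeline itself is not reproduced here; as a minimal illustration of the
kind of open, universal data it builds on, the sketch below pulls a simplified
walkable network from OpenStreetMap with the osmnx library and runs a
shortest-path distance query. The place name and printed attributes are
illustrative.

import osmnx as ox
import networkx as nx

# Build a simplified walkable network from OpenStreetMap (open, universal data).
G = ox.graph_from_place("Carlton, Melbourne, Australia", network_type="walk")

# Each edge carries attributes usable by a transport model (length, road class, ...).
u, v, data = next(iter(G.edges(data=True)))
print(data.get("highway"), data.get("length"))

# Shortest-path travel distance between two nodes, the quantity compared above.
orig, dest = list(G.nodes)[0], list(G.nodes)[-1]
print(nx.shortest_path_length(G, orig, dest, weight="length"))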
|
We show that the space of anti-symplectic involutions of a monotone
$S^2\times S^2$ whose fixed points set is a Lagrangian sphere is connected.
This follows from a stronger result, namely that any two anti-symplectic
involutions in that space are Hamiltonian isotopic.
|
High purity iron is obtained from vanadium-titanium magnetite (VTM) by a
one-step coal-based direct reduction-smelting process with coal as the
reductant and sodium carbonate (Na2CO3) as an additive. Industrial experiments
show that treating molten iron with a large amount of Na2CO3 is effective in
removing titanium from molten iron. However, few studies have examined the
thermodynamic relationship between titanium and the other components of molten
iron under the condition of a large amount of Na2CO3 additive. In this study,
using the thermodynamic database software FactSage 8.0, the
effects of melting temperature, sodium content and oxygen content on the
removal of titanium from molten iron are studied. The results of thermodynamic
calculation show that the removal of titanium from molten iron needs to be
under the condition of oxidation, and the temperature should be below the
critical temperature of titanium removal (the highest temperature at which
titanium can be removed). Relatively low temperature and high oxygen content
contribute to the removal of titanium from molten iron. The high oxygen content
is conducive to the simultaneous removal of titanium and phosphorus from molten
iron. In addition, from a thermodynamic point of view, excessive sodium
addition inhibits the removal of titanium from molten iron.
|
We present the first formal verification of approximation algorithms for
NP-complete optimization problems: vertex cover, independent set, set cover,
center selection, load balancing, and bin packing. We uncover incompletenesses
in existing proofs and improve the approximation ratio in one case. All proofs
are uniformly invariant-based.
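For a flavor of the invariant-based style, below is the textbook
maximal-matching 2-approximation for vertex cover, one of the problems listed
above; the invariant noted in the comments is the kind of property the formal
proofs maintain. This is an illustration in Python, not the verified proof text.

def vertex_cover_2approx(edges):
    # Greedy maximal matching: both endpoints of each matched edge join the cover.
    # Invariant: every edge seen so far has an endpoint in `cover`, and the
    # matched edges are disjoint, so |cover| = 2 * |matching| <= 2 * OPT.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))  # {1, 2, 3, 4}; OPT is {2, 3}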
|
We generalize the classical shadow tomography scheme to a broad class of
finite-depth or finite-time local unitary ensembles, known as locally scrambled
quantum dynamics, where the unitary ensemble is invariant under local basis
transformations. In this case, the reconstruction map for the classical shadow
tomography depends only on the average entanglement feature of classical
snapshots. We provide an unbiased estimator of the quantum state as a linear
combination of reduced classical snapshots in all subsystems, where the
combination coefficients are solely determined by the entanglement feature. We
also bound the number of experimental measurements required for the tomography
scheme, so-called sample complexity, by formulating the operator shadow norm in
the entanglement feature formalism. We numerically demonstrate our approach for
finite-depth local unitary circuits and finite-time local-Hamiltonian generated
evolutions. The shallow-circuit measurement can achieve a lower tomography
complexity compared to the existing method based on Pauli or Clifford
measurements. Our approach is also applicable to approximately locally
scrambled unitary ensembles with a controllable bias that vanishes quickly.
Surprisingly, we find that a single instance of time-dependent local
Hamiltonian evolution is sufficient to perform an approximate tomography, as we
numerically demonstrate using a paradigmatic spin chain Hamiltonian modeled
after trapped ion or Rydberg atom quantum simulators. Our approach significantly
broadens the application of classical shadow tomography on near-term quantum
devices.
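For orientation, in the familiar special case of random single-qubit
(Pauli-basis) measurements the reconstruction map is fixed and factorized,

$$\hat\rho \;=\; \bigotimes_{i=1}^{n}\Big(3\,U_i^\dagger\,|b_i\rangle\langle b_i|\,U_i-\mathbb{I}\Big),\qquad \mathbb{E}\big[\hat\rho\big]=\rho,$$

whereas for locally scrambled ensembles the inverse map is no longer fixed but
is determined by the ensemble's average entanglement feature, as described
above.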
|
We consider the inverse scattering on the quantum graph associated with the
hexagonal lattice. Assuming that the potentials on the edges are compactly
supported and symmetric, we show that the S-matrix for all energies in any
given open set in the continuous spectrum determines the potentials.
|
We introduce a framework that abstracts Reinforcement Learning (RL) as a
sequence modeling problem. This allows us to draw upon the simplicity and
scalability of the Transformer architecture, and associated advances in
language modeling such as GPT-x and BERT. In particular, we present Decision
Transformer, an architecture that casts the problem of RL as conditional
sequence modeling. Unlike prior approaches to RL that fit value functions or
compute policy gradients, Decision Transformer simply outputs the optimal
actions by leveraging a causally masked Transformer. By conditioning an
autoregressive model on the desired return (reward), past states, and actions,
our Decision Transformer model can generate future actions that achieve the
desired return. Despite its simplicity, Decision Transformer matches or exceeds
the performance of state-of-the-art model-free offline RL baselines on Atari,
OpenAI Gym, and Key-to-Door tasks.
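A minimal sketch of the interface this describes (illustrative dimensions, not
the authors' implementation): returns-to-go, states, and actions are embedded
as an interleaved token sequence, passed through a causally masked Transformer,
and actions are read off the state-token positions.

import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_layer=3, n_head=4, max_len=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.embed_t = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, 4 * d_model, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layer)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B,T,1), states: (B,T,state_dim), actions: (B,T,act_dim), timesteps: (B,T)
        t_emb = self.embed_t(timesteps)
        tokens = torch.stack([self.embed_rtg(rtg) + t_emb,
                              self.embed_state(states) + t_emb,
                              self.embed_action(actions) + t_emb],
                             dim=2).reshape(states.size(0), -1, t_emb.size(-1))
        L = tokens.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        h = self.transformer(tokens, mask=causal)
        return self.predict_action(h[:, 1::3])  # act from state-token positions

Conditioning on a high desired return at evaluation time then amounts to
feeding the target return-to-go as the first token of each step.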
|
Recently, moir\'{e} superlattices have attracted considerable attention
because they are found to exhibit intriguing electronic phenomena of tunable
Mott insulators and unconventional superconductivity. These phenomena are
highly related to the physical mechanism of the interlayer coupling. However,
up to now, no theory can completely interpret the experimental results on the
interlayer conductance of moir\'{e} superlattices. To solve this problem, the
superposition of periods and the corresponding coherence, which are the
essential characteristics of moir\'{e} superlattices, should be considered more
fully. It is therefore natural to introduce optical methods to study moir\'{e}
superlattices. Here, we develop a theory for moir\'{e} superlattices which is
founded on traditional optical scattering theory. The theory can interpret both
the continuously decreasing background and the peak of the interlayer
conductance observed in the experiments by a unified mechanism. We show that
the decreasing background of the interlayer conductance arises from the
increasing strength of the interface potential, while the peak stems from the
scattering resonance of the interface potential. The present work is crucial
for understanding the interlayer coupling of moir\'{e} superlattices, and
provides a solid theoretical foundation for their applications.
|
While deep learning has enabled great advances in many areas of music,
labeled music datasets remain especially hard, expensive, and time-consuming to
create. In this work, we introduce SimCLR to the music domain and contribute a
large chain of audio data augmentations to form a simple framework for
self-supervised, contrastive learning of musical representations: CLMR. This
approach works on raw time-domain music data and requires no labels to learn
useful representations. We evaluate CLMR in the downstream task of music
classification on the MagnaTagATune and Million Song datasets and present an
ablation study to test which of our music-related innovations over SimCLR are
most effective. A linear classifier trained on the proposed representations
achieves a higher average precision than supervised models on the MagnaTagATune
dataset, and performs comparably on the Million Song dataset. Moreover, we show
that CLMR's representations are transferable using out-of-domain datasets,
indicating that our method has strong generalisability in music classification.
Lastly, we show that the proposed method allows data-efficient learning on
smaller labeled datasets: we achieve an average precision of 33.1% despite
using only 259 labeled songs in the MagnaTagATune dataset (1% of the full
dataset) during linear evaluation. To foster reproducibility and future
research on self-supervised learning in music, we publicly release the
pre-trained models and the source code of all experiments of this paper.
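The contrastive objective inherited from SimCLR is the NT-Xent loss; a minimal
sketch is below, where z1 and z2 are projection-head outputs for two
differently augmented views of the same raw-audio clips (the paper's specific
augmentation chain is not reproduced here).

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # NT-Xent over a batch of positive pairs (z1[i], z2[i]); all other
    # 2B - 2 embeddings in the batch serve as negatives.
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2B, d)
    sim = z @ z.t() / temperature                # cosine similarities
    sim.fill_diagonal_(float("-inf"))            # exclude self-similarity
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])
    return F.cross_entropy(sim, targets)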
|
Current observations present unprecedented opportunities to probe the true
nature of black holes, which must harbor new physics beyond General Relativity
to provide singularity-free descriptions. To test paradigms for this new
physics, it is necessary to bridge the gap all the way from theoretical
developments of new-physics models to phenomenological developments such as
simulated images of black holes embedded in astrophysical disk environments. In
this paper, we construct several steps along this bridge. We construct a novel
family of regular black-hole spacetimes based on a locality principle which
ties new physics to local curvature scales. We then characterize these
spacetimes in terms of a complete set of curvature invariants and analyze the
ergosphere as well as the outer event horizon and the distinct Killing horizon. Our
comprehensive study of the shadow shape at various spins and inclinations
reveals characteristic image features linked to the locality principle. We also
explore the photon rings as an additional probe of the new-physics effects. A
simple analytical disk model enables us to generate simulated images of the
regular spinning black hole and test whether the characteristic image-features
are visible in the intensity map.
|
It is well known that the K\"ahler-Ricci flow on a K\"ahler manifold $X$
admits a long-time solution if and only if $X$ is a minimal model, i.e., the
canonical line bundle $K_X$ is nef. The abundance conjecture in algebraic
geometry predicts that $K_X$ must be semi-ample when $X$ is a projective
minimal model. We prove that if $K_X$ is semi-ample, then the diameter is
uniformly bounded for long-time solutions of the normalized K\"ahler-Ricci
flow. Our diameter estimate combined with the scalar curvature estimate in [34]
for long-time solutions of the K\"ahler-Ricci flow are natural extensions of
Perelman's diameter and scalar curvature estimates for short-time solutions on
Fano manifolds. We further prove that along the normalized K\"ahler-Ricci flow,
the Ricci curvature is uniformly bounded away from singular fibres of $X$ over
its unique algebraic canonical model $X_{can}$ if the Kodaira dimension of $X$
is one. As an application, the normalized K\"ahler-Ricci flow on a minimal
threefold $X$ always converges sequentially in Gromov-Hausdorff topology to a
compact metric space homeomorphic to its canonical model $X_{can}$, with
uniformly bounded Ricci curvature away from the critical set of the
pluricanonical map from $X$ to $X_{can}$.
|
A general overview of the existing difference ring theory for symbolic
summation is given. Special emphasis is put on the user interface: the
translation and back translation of the corresponding representations within
the term algebra and the formal difference ring setting. In particular,
canonical (unique) representations and their refinements in the introduced term
algebra are explored by utilizing the available difference ring theory. Based
on that, precise input-output specifications of the available tools of the
summation package Sigma are provided.
|
It is believed that the $\pm J$ Ising spin-glass does not order at finite
temperatures in dimension $d=2$. However, using a graphical representation and
a contour argument, we prove rigorously the existence of a finite-temperature
phase transition in $d\geq 2$ with $T_c \geq 0.4$. In the graphical
representation, the low-temperature phase allows for the coexistence of
multiple infinite clusters each with a rigidly aligned spin-overlap state.
These clusters correlate negatively with each other, and are entropically
stable without breaking any global symmetry. They can emerge in most graph
structures and disorder measures.
|
We survey a number of different methods for computing $L(\chi,1-k)$ for a
Dirichlet character $\chi$, with particular emphasis on quadratic characters.
The main conclusion is that when $k$ is not too large (for instance $k\le100$)
the best method comes from the use of Eisenstein series of half-integral
weight, while when $k$ is large the best method is the use of the complete
functional equation, unless the conductor of $\chi$ is really large, in which
case the previous method again prevails.
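All of these methods evaluate the same classical quantity: for $k\ge 1$ one
has, in terms of the generalized Bernoulli numbers attached to $\chi$,

$$L(\chi,1-k) \;=\; -\frac{B_{k,\chi}}{k},$$

so the algorithmic question surveyed here is how to compute $B_{k,\chi}$
efficiently in each regime of $k$ and of the conductor.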
|
Hydrogen bonding liquids, typically water and alcohols, are known to form
labile structures (networks, chains, etc.), hence the lifetime of such
structures is an important microscopic parameter, which can be calculated in
computer simulations. Since these cluster entities are mostly statistical in
nature, one would expect that, in the short time regime, their lifetime
distribution would be a broad Gaussian-like function of time, with a single
maximum representing their mean lifetime, and weakly dependent on criteria such
as the bonding distance and angle, much similarly to non-hydrogen bonding
simple liquids, while the long time part is known to have some power law
dependence. Unexpectedly, all the hydrogen bonding liquids studied herein,
namely water and alcohols, display a highly hierarchic set of three specific
lifetimes in the sub-picosecond range 0-0.5 ps. The dominant lifetime depends
very strongly on the bonding distance criterion and is related to hydrogen
bonded pairs. This mode is absent in non-H-bonding simple liquids. The
secondary and tertiary mean lifetimes are related to clusters, and are nearly
independent of the bonding criterion. Of these two lifetimes, only the first
can be related to that of simple liquids, which poses the question of the
nature of the third lifetime. The study of alcohols reveals that this third
lifetime is related to the topology of H-bonded clusters, and that its
distribution may also be affected by the surrounding alkyl tail "bath". This
study reveals that hydrogen bonding liquids have a universal hierarchy of
hydrogen bonding lifetimes with a timescale regularity across very different
types, and which depends on the topology of the cluster structures.
|
Scattering by an isolated defect embedded in a dielectric medium of two
dimensional periodicity is of interest in many sub-fields of electrodynamics.
Present approaches to compute this scattering rely either on the Born
approximation and its quasi-analytic extensions, or on \emph{ab-initio}
computation that requires large domain sizes to reduce the effects of boundary
conditions. The Born approximation and its extensions are limited in scope,
while the ab-initio approach suffers from its high numerical cost. In this
paper, I introduce a hybrid scheme in which an effective local electric
susceptibility tensor of a defect is estimated by solving an inverse problem
efficiently. The estimated tensor is embedded into an S-matrix formula based on
the reciprocity theorem. With this embedding, the computation of the S-matrix
of the defect requires field solutions only in the unit cell of the background.
In practice, this scheme reduces the computational cost by almost two orders of
magnitude, while sacrificing little in accuracy. The scheme demonstrates that
statistical estimation can capture sufficient information from cheap
calculations to compute quantities in the far field. I outline the fundamental
theory and algorithms to carry out the computations in high dielectric contrast
materials, including metals. I demonstrate the capabilities of this approach
with examples from optical inspection of nano-electronic circuitry where the
Born approximation fails and the existing methods for its extension are also
inapplicable.
|
A nonlinear Markov chain is a discrete time stochastic process whose
transitions depend on both the current state and the current distribution of
the process. The nonlinear Markov chain over an infinite state space can be
identified by a continuous mapping (the so-called nonlinear Markov operator)
defined on a set of all probability distributions (which is a simplex). In the
present paper, we consider a continuous analogue of the mentioned mapping
acting on $L^1$-spaces. The main aim of the current paper is to investigate
projective surjectivity of quadratic stochastic operators (QSO) acting on the
set of all probability measures. To prove the main result, we study the
surjectivity of infinite dimensional nonlinear Markov operators and apply them
to the projective surjectivity of a QSO. Furthermore, the obtained result is
applied to prove the existence of positive solutions of some Hammerstein
integral equations.
|
Coherent gate errors are a concern in many proposed quantum computing
architectures. These errors can be effectively handled through composite pulse
sequences for single-qubit gates; however, such techniques are less feasible
for entangling operations. In this work, we benchmark our coherent errors by
comparing the actual performance of composite single-qubit gates to the
predicted performance based on characterization of individual single-qubit
rotations. We then propose a compilation technique, which we refer to as hidden
inverses, that creates circuits robust to these coherent errors. We present
experimental data showing that these circuits suppress both overrotation and
phase misalignment errors in our trapped ion system.
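A toy single-qubit illustration of the compilation idea, under two assumptions:
every physical Rx pulse suffers the same fractional overrotation, and Rz
rotations are error-free (as for virtual Z rotations in trapped ion systems).
Compiling the second Hadamard as the reversed sequence of inverted pulses, its
hidden inverse, cancels the coherent error exactly.

import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

eps = 0.02  # 2% coherent overrotation on every physical Rx pulse

# Hadamard up to global phase: Rz(pi/2) Rx(pi/2) Rz(pi/2); the hidden inverse
# reverses the sequence with negated angles, so each erroneous Rx pulse meets
# its exact inverse.
H  = rz(np.pi / 2) @ rx((1 + eps) * np.pi / 2) @ rz(np.pi / 2)
Hi = rz(-np.pi / 2) @ rx(-(1 + eps) * np.pi / 2) @ rz(-np.pi / 2)

def infidelity(U):  # deviation from identity, insensitive to global phase
    return 1 - abs(np.trace(U)) ** 2 / U.shape[0] ** 2

print("H then H        :", infidelity(H @ H))    # overrotation errors add
print("H then hidden H :", infidelity(Hi @ H))   # errors cancel exactly (~0)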
|
We reformulate Euclidean general relativity without cosmological constant as
an action governing the complex structure of twistor space. Extending Penrose's
non-linear graviton construction, we find a correspondence between twistor
spaces with partially integrable almost complex structures and four-dimensional
space-times with off-shell metrics. Using this, we prove that our twistor
action reduces to Plebanski's action for general relativity via the Penrose
transform. This should lead to new insights into the geometry of graviton
scattering as well as to the derivation of computational tools like
gravitational MHV rules.
|
Real-time PCR, or Real-time Quantitative PCR (qPCR) is an effective approach
to quantify nucleic acid samples. Given the complicated reaction system along
with thermal cycles, there has been long-term confusion on accurately
calculating the initial nucleic acid amounts from the fluorescence signals.
Although many improved algorithms have been proposed, the classical threshold
method is still the primary choice in routine applications. In this study,
we will first illustrate the origin of the linear relationship between the
threshold value and logarithm of the initial nucleic acid amount by
reconstructing the PCR reaction process with stochastic simulations. We then
develop a new method for the absolute quantification of nucleic acid samples
with qPCR. By monitoring the fluorescence signal changes in every stage of the
thermal cycle, we are able to calculate a representation of the step-wise
efficiency change. This is the first work to calculate the PCR efficiency
change directly from the fluorescence signal, without fitting or sophisticated
analysis. Our results reveal that the efficiency change during the PCR process
is complicated and cannot be modeled by a simple monotone function. Based on
the calculated efficiency, we illustrate a new absolute qPCR
analysis method for accurately determining nucleic acid amount. The efficiency
problem is completely avoided in this new method.
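The stage-resolved estimator itself is not reproduced here; as a minimal sketch
of the underlying idea, a per-cycle amplification efficiency can be read
directly off baseline-corrected fluorescence ratios, assuming the corrected
signal is proportional to amplicon amount.

import numpy as np

def cycle_efficiency(F, baseline):
    # Per-cycle efficiency E_n from fluorescence F_n, via F_n = F_{n-1} * (1 + E_n),
    # assuming baseline-corrected fluorescence tracks the amplicon amount.
    S = np.asarray(F, dtype=float) - baseline
    return S[1:] / S[:-1] - 1.0

# Exponential phase: E_n close to 1 (doubling); plateau: E_n approaches 0. The
# transition between the two is what turns out not to be a simple monotone curve.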
|
The prior independent framework for algorithm design considers how well an
algorithm that does not know the distribution of its inputs approximates the
expected performance of the optimal algorithm for this distribution. This paper
gives a method that is agnostic to problem setting for proving lower bounds on
the prior independent approximation factor of any algorithm. The method
constructs a correlated distribution over inputs that can be generated both as
a distribution over i.i.d. good-for-algorithms distributions and as a
distribution over i.i.d. bad-for-algorithms distributions. Prior independent
algorithms are upper-bounded by the optimal algorithm for the latter
distribution even when the true distribution is the former. Thus, the ratio of
the expected performances of the Bayesian optimal algorithms for these two
decompositions is a lower bound on the prior independent approximation ratio.
The techniques of the paper connect prior independent algorithm design, Yao's
Minimax Principle, and information design. We apply this framework to give new
lower bounds on several canonical prior independent mechanism design problems.
|
In the t-U-V Hubbard model on the square lattice, we find a self-consistent
analytic solution for the ground state with coexisting d-wave symmetric bond
ordered pair density wave (PDW) and spin (SDW) or charge (CDW) density waves,
as observed in some high-temperature superconductors. In particular, the
solution gives the same periodicity for the CDW and PDW, and a pseudogap in the
Fermi excitation spectrum.
|
The paper deals with cubic 1-variable polynomials whose Julia sets are
connected. Fixing a bounded type rotation number, we obtain a slice of such
polynomials with the origin being a fixed Siegel point of the specified
rotation number. Such slices as parameter spaces were studied by S. Zakeri, so
we call them Zakeri slices. We give a model of the central part of a slice (the
subset of the slice that can be approximated by hyperbolic polynomials with
Jordan curve Julia sets), and a continuous projection from the central part to
the model. The projection is defined dynamically and agrees with the
dynamical-analytic parameterization of the Principal Hyperbolic Domain by
Petersen and Tan Lei.
|
The partial Latin square extension problem is to fill as many empty cells as
possible of a partially filled Latin square. This problem is a useful model
for a wide range of relevant applications in diverse domains. This paper
presents the first massively parallel hybrid search algorithm for this
computationally challenging problem based on a transformation of the problem to
partial graph coloring. The algorithm features the following original elements.
Based on a very large population (with more than $10^4$ individuals) and modern
graphical processing units, the algorithm performs many local searches in
parallel to ensure an intensified exploitation of the search space. It employs
a dedicated crossover with a specific parent matching strategy to create a
large number of diversified and information-preserving offspring at each
generation. Extensive experiments on 1800 benchmark instances show a high
competitiveness of the algorithm compared with the current best performing
methods. Competitive results are also reported on the related Latin square
completion problem. Analyses are performed to shed light on the understanding
of the main algorithmic components. The code of the algorithm will be made
publicly available.
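The transformation named above can be sketched as follows (illustrative code,
not the massively parallel GPU implementation): cells become vertices, cells
sharing a row or column become adjacent, pre-filled cells fix their colors, and
extending the partial Latin square amounts to properly coloring as many
remaining vertices as possible with n colors.

import itertools
import networkx as nx

def latin_conflict_graph(grid):
    # grid[i][j] is a symbol in 1..n, or 0 for an empty cell.
    n = len(grid)
    G = nx.Graph()
    cells = list(itertools.product(range(n), repeat=2))
    G.add_nodes_from(cells)
    for (i, j), (k, l) in itertools.combinations(cells, 2):
        if i == k or j == l:  # same row or same column: symbols must differ
            G.add_edge((i, j), (k, l))
    fixed = {(i, j): grid[i][j] for i, j in cells if grid[i][j]}
    return G, fixed  # color the remaining vertices with 1..n, respecting `fixed`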
|
In this study, we formulate a mathematical model incorporating age specific
transmission dynamics of COVID-19 to evaluate the role of vaccination and
treatment strategies in reducing the size of COVID-19 burden. Initially, we
establish the positivity and boundedness of the solutions of the model and
calculate the basic reproduction number. We then formulate an optimal control
problem with vaccination and treatment as control variables. Optimal
vaccination and treatment policies are analysed for different values of the
weight constant associated with the cost of vaccination and different
transmissibility levels. Findings from these analyses suggested that the
combined strategy (vaccination and treatment) worked best in minimizing the
infection and disease-induced mortality. To reduce COVID-19 infections and
COVID-19 induced deaths the most, the optimal control strategy should be
prioritized for the population older than 40 years. Little difference was found
between the individual and combined strategies in the case of a mild epidemic
($R_0 \in (0, 2)$). For higher values of $R_0$ ($R_0 \in (2, 10)$), the
combined strategy was found to be best in terms of minimizing the overall
infection. The infection curves obtained by varying the efficacies of the
vaccines were also analysed, and it was found that higher vaccine efficacy
resulted in fewer infections and COVID-induced deaths.
|
The chiral hinge modes are the key feature of a second order topological
insulator in three dimensions. Here we propose a quadrupole index in
combination with a slab Chern number in the bulk to characterize the flowing
pattern of chiral hinge modes along the hinges at the intersection of the
surfaces of a sample. We further utilize the topological field theory to
demonstrate the correspondence of the chiral hinge modes to the quadrupole
index and the slab Chern number, and present a picture of
three-dimensional quantum anomalous Hall effect as a consequence of chiral
hinge modes. The two bulk topological invariants can be measured in electric
transport and magneto-optical experiments. In this way we establish the
bulk-hinge correspondence in a three-dimensional second order topological
insulator.
|
In the near future, the Deep Underground Neutrino Experiment and the European
Spallation Source aim to reach unprecedented sensitivity in the search for
neutron-antineutron ($n\text{-}\bar{n}$) oscillations, whose observation would
directly imply $|\Delta B| = 2$ violation and hence might hint towards a close
link to the mechanism behind the observed baryon asymmetry of the Universe. In
this work, we explore the consequences of such a discovery for baryogenesis
first within a model-independent effective field theory approach. We then
refine our analysis by including a source of CP violation and different
hierarchies between the scales of new physics using a simplified model. We
analyse the implication for baryogenesis in different scenarios and confront
our results with complementary experimental constraints from dinucleon decay,
LHC, and meson oscillations. We find that for a small mass hierarchy between
the new degrees of freedom, an observable rate for $n\text{-}\bar{n}$
oscillation would imply that the washout processes are too strong to generate
any sizeable baryon asymmetry, even if the CP violation is maximal. On the
other hand, for a large hierarchy between the new degrees of freedom, our
analysis shows that successful baryogenesis can occur over a large part of the
parameter space, opening the window to be probed by current and future
colliders and upcoming $n\text{-}\bar{n}$ oscillation searches.
|
We study a precise and computationally tractable notion of operator
complexity in holographic quantum theories, including the ensemble dual of
Jackiw-Teitelboim gravity and two-dimensional holographic conformal field
theories. This is a refined, "microcanonical" version of K-complexity that
applies to theories with infinite or continuous spectra (including quantum
field theories), and in the holographic theories we study exhibits exponential
growth for a scrambling time, followed by linear growth until saturation at a
time exponential in the entropy, a behavior that is
characteristic of chaos. We show that the linear growth regime implies a
universal random matrix description of the operator dynamics after scrambling.
Our main tool for establishing this connection is a "complexity renormalization
group" framework we develop that allows us to study the effective operator
dynamics for different timescales by "integrating out" large K-complexities. In
the dual gravity setting, we comment on the empirical match between our version
of K-complexity and the maximal volume proposal, and speculate on a connection
between the universal random matrix theory dynamics of operator growth after
scrambling and the spatial translation symmetry of smooth black hole interiors.
|
In reliability analysis, methods used to estimate failure probability are
often limited by the costs associated with model evaluations. Many of these
methods, such as multifidelity importance sampling (MFIS), rely upon a
computationally efficient, surrogate model like a Gaussian process (GP) to
quickly generate predictions. The quality of the GP fit, particularly in the
vicinity of the failure region(s), is instrumental in supplying accurately
predicted failures for such strategies. We introduce an entropy-based GP
adaptive design that, when paired with MFIS, provides more accurate failure
probability estimates with higher confidence. We show that our greedy data
acquisition strategy better identifies multiple failure regions compared to
existing contour-finding schemes. We then extend the method to batch selection,
without sacrificing accuracy. Illustrative examples are provided on benchmark
data as well as an application to an impact damage simulator for National
Aeronautics and Space Administration (NASA) spacesuits.
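A minimal sketch of an entropy-based acquisition of this kind (the paper's
exact criterion may differ): given the GP's predictive means and standard
deviations at candidate inputs and a failure threshold, pick the candidate
whose predicted failure indicator is most uncertain.

import numpy as np
from scipy.stats import norm

def entropy_acquisition(mu, sd, threshold):
    # Entropy of the GP-predicted failure indicator P(g(x) > threshold).
    p = norm.sf((threshold - mu) / np.maximum(sd, 1e-12))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    H = -p * np.log(p) - (1 - p) * np.log(1 - p)
    return int(np.argmax(H))  # most informative candidate near the failure contour

# next_x = candidates[entropy_acquisition(mu, sd, threshold)]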
|
Recent literature has underscored the importance of dataset documentation
work for machine learning, and part of this work involves addressing
"documentation debt" for datasets that have been used widely but documented
sparsely. This paper aims to help address documentation debt for BookCorpus, a
popular text dataset for training large language models. Notably, researchers
have used BookCorpus to train OpenAI's GPT-N models and Google's BERT models,
even though little to no documentation exists about the dataset's motivation,
composition, collection process, etc. We offer a preliminary datasheet that
provides key context and information about BookCorpus, highlighting several
notable deficiencies. In particular, we find evidence that (1) BookCorpus
likely violates copyright restrictions for many books, (2) BookCorpus contains
thousands of duplicated books, and (3) BookCorpus exhibits significant skews in
genre representation. We also find hints of other potential deficiencies that
call for future research, including problematic content, potential skews in
religious representation, and lopsided author contributions. While more work
remains, this initial effort to provide a datasheet for BookCorpus adds to
growing literature that urges more careful and systematic documentation for
machine learning datasets.
|
Session-based recommendation (SBR) learns users' preferences by capturing the
short-term and sequential patterns from the evolution of user behaviors. Among
the studies in the SBR field, graph-based approaches are relatively powerful,
generally extracting item information by message aggregation in Euclidean
space. However, such methods cannot effectively extract the hierarchical
information contained among consecutive items in a session, which is critical
for representing users' preferences. In this paper, we present a
hyperbolic contrastive graph recommender (HCGR), a principled session-based
recommendation framework involving Lorentz hyperbolic space to adequately
capture the coherence and hierarchical representations of the items. Within
this framework, we design a novel adaptive hyperbolic attention computation to
aggregate the graph message of each user's preference in a session-based
behavior sequence. In addition, contrastive learning is leveraged to optimize
the item representation by considering the geodesic distance between positive
and negative samples in hyperbolic space. Extensive experiments on four
real-world datasets demonstrate that HCGR consistently outperforms
state-of-the-art baselines by 0.43$\%$-28.84$\%$ in terms of $HitRate$, $NDCG$
and $MRR$.
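For concreteness, the geodesic distance entering such hyperbolic comparisons in
the Lorentz (hyperboloid) model can be sketched as follows (curvature -1
assumed; not the authors' exact code):

import torch

def lorentz_inner(x, y):
    # <x, y>_L = -x_0 y_0 + sum_i x_i y_i on the hyperboloid model
    return -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(-1)

def lorentz_distance(x, y, eps=1e-7):
    # geodesic distance d(x, y) = arccosh(-<x, y>_L)
    return torch.acosh(torch.clamp(-lorentz_inner(x, y), min=1 + eps))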
|
In this note we show a simple formula for the coefficients of the polynomial
associated with the sums of powers of the terms of an arbitrary arithmetic
progression. This formula consists of a double sum involving only ordinary
binomial coefficients and binomial powers. Arguably, this is the simplest
formula that can be found for the said coefficients. Furthermore, as a
by-product, we give an explicit formula for the Bernoulli polynomials involving
the Stirling numbers of the first and second kind.
|
As Stefan Kopp and Nicole Kramer say in their recent paper [Frontiers in
Psychology 12 (2021) 597], despite some very impressive demonstrations over the
last decade or so, we still do not know how to make a computer have a
half-decent conversation with a human. They argue that the capabilities
required to do this include incremental joint co-construction and mentalizing.
While agreeing wholeheartedly with their statement of the problem, this paper
argues for a different approach to the solution, based on the "new" AI of
situated action.
|
Cell proliferation, apoptosis, and myosin-dependent contraction can generate
elastic stress and strain in living tissues, which may be dissipated by
internal rearrangement through cell topological transition and cytoskeletal
reorganization. Moreover, cells and tissues can change their sizes in response
to mechanical cues. The present work demonstrates the role of tissue
compressibility and internal rearranging activities in regulating tissue size
and mechanics in the context of differential growth induced by a field of
growth-promoting chemical factors. We develop a mathematical model based on
finite elasticity and growth theory and the reference map techniques to
describe the coupled tissue growth and mechanics in the Eulerian frame. We
incorporate the tissue rearrangement by introducing a rearranging rate to the
reference map evolution, leading to elastic-energy dissipation when tissue
growth and deformation are in radial symmetry. By linearizing the model, we
show that the stress follows the Maxwell-type viscoelastic relaxation. The
rearrangement rate, which we call tissue fluidity, sets the stress relaxation
time, and the ratio between the shear modulus and the fluidity sets the tissue
viscosity. By nonlinear simulation of growing tissue spheroids and discs with
graded growth rates along the radius, we find that the tissue compressibility
and fluidity influence their equilibrium size. By comparing the nonlinear
simulations with the linear analytical solutions, we show the size change as a
nonlinear effect due to the advection of the tissue density flow, which only
occurs when both tissue compressibility and fluidity are small. We apply the
model to study tumor spheroid growth and epithelial disc growth when a
reaction-diffusion process determines the growth-promoting factor field.
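Schematically, the linearized relation quoted above is of Maxwell form: with
fluidity $k_f$ and shear modulus $\mu$ (notation here is illustrative),

$$\sigma + \frac{1}{k_f}\,\dot\sigma \;=\; \frac{\mu}{k_f}\,\dot\varepsilon,$$

so the stress relaxation time is $\tau = 1/k_f$ and the effective viscosity is
$\eta = \mu/k_f$.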
|
It has been recently claimed that primordial magnetic fields could relieve
the cosmological Hubble tension. We consider the impact of such fields on the
formation of the first cosmological objects, mini-halos forming stars, for
present-day field strengths in the range of $2\times 10^{-12}$ - $2\times
$10^{-10}$ G. These values correspond to initial ratios of Alfv\'en velocity
to the speed of sound of $v_a/c_s\approx 0.03 - 3$. We find that when
$v_a/c_s\ll 1$, the effects are modest. However, when $v_a\sim c_s$, the
starting time of the gravitational collapse is delayed and the duration
extended by as much as $\Delta z = 2.5$ in redshift. When $v_a > c_s$, the
collapse is completely
suppressed and the mini-halos continue to grow and are unlikely to collapse
until reaching the atomic cooling limit. Employing current observational limits
on primordial magnetic fields we conclude that inflationary produced primordial
magnetic fields could have a significant impact on first star formation,
whereas post-inflationary produced fields do not.
|
This is the second part of a paper describing a new concept of separation of
variables applied to the classical Clebsch integrable case. The quadratures
obtained in Part I (also uploaded to arXiv.org) lead to a new type of Abel
map, which contains Abelian integrals on two different algebraic curves.
Here we interpret it as a map from the product of the two curves to the Prym
variety of one of them, and show that the map is well defined, although not a
bijection. We analyse its properties and formulate a new extension of the
Riemann vanishing theorem, which allows one to invert the map in terms of
theta-functions of higher order.
Lastly, we describe how to express the original variables of the Clebsch
system in terms of the preimages of the map. This enables one to obtain a
theta-function solution whose structure is different from that found long ago
by F. K\"otter.
|
To improve the security and robustness of autonomous driving models, this
paper presents SMET, a scenario-based metamorphic testing tool for autonomous
driving models. The metamorphic relations are divided into three dimensions
(time, space, and event), and their effectiveness is demonstrated through case
studies in two types of autonomous driving models with different outputs.
Experimental results show that this tool can effectively detect potential
defects of the autonomous driving model, and that complex scenes are more
effective than simple ones.
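The tool's concrete relations are not listed in this abstract; a generic
harness for one such relation might look like the sketch below, where perturb
(a hypothetical time, space, or event transformation, e.g. inserting a distant
static object) is assumed to leave the model's output unchanged up to a
tolerance.

def check_metamorphic_relation(model, scene, perturb, tol=1e-2):
    # Metamorphic test: the follow-up scene should yield (nearly) the same
    # output, here assumed scalar (e.g. a steering angle).
    base = model(scene)
    follow_up = model(perturb(scene))
    return abs(base - follow_up) <= tol  # a violation flags a potential defect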
|
B-mode ultrasound imaging is a popular medical imaging technique. Like other
image processing tasks, deep learning has been used for analysis of B-mode
ultrasound images in the last few years. However, training deep learning models
requires large labeled datasets, which is often unavailable for ultrasound
images. The lack of large labeled data is a bottleneck for the use of deep
learning in ultrasound image analysis. To overcome this challenge, in this work
we exploit Auxiliary Classifier Generative Adversarial Network (ACGAN) that
combines the benefits of data augmentation and transfer learning in the same
framework. We conduct experiments on a dataset of breast ultrasound images
that show the effectiveness of the proposed approach.
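A minimal sketch of the ACGAN discriminator structure behind this combination
(the dimensions and the benign/malignant class head are illustrative
assumptions): it scores real-versus-generated and predicts the class label from
shared features, which is what lets synthetic labeled images augment training.

import torch.nn as nn

class ACGANDiscriminatorSketch(nn.Module):
    def __init__(self, in_dim=64 * 64, n_classes=2):
        super().__init__()
        self.trunk = nn.Sequential(nn.Flatten(),
                                   nn.Linear(in_dim, 256), nn.LeakyReLU(0.2))
        self.source = nn.Linear(256, 1)          # real vs. generated
        self.klass = nn.Linear(256, n_classes)   # auxiliary class head

    def forward(self, x):
        h = self.trunk(x)
        return self.source(h), self.klass(h)     # trained with BCE + CE losses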
|
In the last three decades, several constructions of quantum error-correcting
codes were presented in the literature. Among these codes, there are the
asymmetric ones, i.e., quantum codes whose $Z$-distance $d_z$ is different from
its $X$-distance $d_x$. The topological quantum codes form an important class
of quantum codes, where the toric code, introduced by Kitaev, was the first
family of this type. After Kitaev's toric code, several authors focused
attention on investigating its structure and the constructions of new families
of topological quantum codes over Euclidean and hyperbolic surfaces. As a
consequence of establishing the existence and the construction of asymmetric
topological quantum codes in Theorem \ref{main}, the main result of this paper,
we introduce the class of hyperbolic asymmetric codes. Hence, families of
Euclidean and hyperbolic asymmetric topological quantum codes are presented. An
analysis regarding the asymptotic behavior of their distances $d_x$ and $d_z$
and encoding rates $k/n$ versus the compact orientable surface's genus is
provided due to the significant difference between the asymmetric distances
$d_x$ and $d_z$ when compared with the corresponding parameters of topological
codes generated by other tessellations. This inherent unequal error-protection
is associated with the nontrivial homological cycle of the $\{p,q\}$
tessellation and its dual, which may be appropriately explored depending on the
application, where $p\neq q$ and $(p-2)(q-2)\ge 4$. Three families of codes
derived from the $\{7,3\}$, $\{5,4\}$, and $\{10,5\}$ tessellations are
highlighted.
|
Since the 1970s, most airlines have incorporated computerized support for
managing disruptions during flight schedule execution. However, existing
platforms for airline disruption management (ADM) employ monolithic system
design methods that rely on the creation of specific rules and requirements
through explicit optimization routines, before a system that meets the
specifications is designed. Thus, current platforms for ADM are unable to
readily accommodate additional system complexities resulting from the
introduction of new capabilities, such as the introduction of unmanned aerial
systems (UAS), operations and infrastructure, to the system. To this end, we
use historical data on airline scheduling and operations recovery to develop a
system of artificial neural networks (ANNs), which describe a predictive
transfer function model (PTFM) for promptly estimating the recovery impact of
disruption resolutions at separate phases of flight schedule execution during
ADM. Furthermore, we provide a modular approach for assessing and executing the
PTFM by employing a parallel ensemble method to develop generative routines
that amalgamate the system of ANNs. Our modular approach ensures that current
industry standards for tardiness in flight schedule execution during ADM are
satisfied, while accurately estimating appropriate time-based performance
metrics for the separate phases of flight schedule execution.
|
Multi-step effects between bound, resonant, and non-resonant states have been
investigated by the continuum-discretized coupled-channels method (CDCC). In
the CDCC, a resonant state is treated as multiple states fragmented in a
resonance energy region, although it is described as a single state in usual
coupled-channel calculations. For such fragmented resonant states, one-step
and multi-step contributions to the cross sections should be carefully
discussed because the cross sections obtained by the one-step calculation
depend on the number of those states, which corresponds to the size of the
model space. To clarify the role of the multi-step effects, we propose the
one-step calculation without model-space dependence for the fragmented resonant
states. Furthermore, we also discuss the multi-step effects between the ground,
$2^{+}_{1}$ resonant, and non-resonant states in $^6$He for proton inelastic
scattering.
|
Coherent control of interfering one- and two-photon processes has for decades
been the subject of research to achieve the redirection of photocurrent. The
present study develops two-pathway coherent control of ground state helium atom
above-threshold photoionization for energies up to the $N=2$ threshold, based
on a multichannel quantum defect and R-matrix calculation. Three parameters are
controlled in our treatment: the optical interference phase $\Delta\Phi$, the
reduced electric field strength
$\chi=\mathcal{E}_{\omega}^2/{\mathcal{E}_{2\omega}}$, and the final state
energy $\epsilon$. A small energy change near a resonance is shown to flip the
emission direction of photoelectrons with high efficiency, through an example
where $90\%$ of photoelectrons whose energy is near the $2p^2\ ^1S^e$ resonance
flip their emission direction. However, the large fraction of photoelectrons
ionized at the intermediate state energy, which are not influenced by the
optical control, make this control scheme challenging to realize
experimentally.
|
This paper describes the most efficient way to manage operations on ranges of
elements within an ordered set. The goal is to improve on existing solutions by
optimizing the average-case time complexity and getting rid of heavy
multiplicative constants in the worst case, without sacrificing space
complexity. These range operations have high impact in practical applications;
they are supported by introducing a new data structure called the Wise
Red-Black Tree, an augmented version of the Red-Black Tree.
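The abstract does not detail the augmentation; as a sketch of the general
technique, a standard example stores each node's subtree size, so rank and
range-count queries run in O(height). Rebalancing, which the Red-Black Tree
supplies, is omitted here.

class Node:
    __slots__ = ("key", "left", "right", "size")
    def __init__(self, key):
        self.key, self.left, self.right, self.size = key, None, None, 1

def size(n):
    return n.size if n else 0

def insert(n, key):  # plain BST insert; a red-black tree adds rebalancing
    if n is None:
        return Node(key)
    if key < n.key:
        n.left = insert(n.left, key)
    else:
        n.right = insert(n.right, key)
    n.size = 1 + size(n.left) + size(n.right)
    return n

def count_less(n, key):  # O(height): number of keys < key
    if n is None:
        return 0
    if key <= n.key:
        return count_less(n.left, key)
    return 1 + size(n.left) + count_less(n.right, key)

def count_range(root, lo, hi):  # keys in [lo, hi): one descent per endpoint
    return count_less(root, hi) - count_less(root, lo)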
|
For any integer $m\ge 2$ and a set $V\subset \{1,\dots,m\}$, let $(m,V)$
denote the union of congruence classes of the elements in $V$ modulo $m$. We
study the Hankel determinants for the number of Dyck paths with peaks avoiding
the heights in the set $(m,V)$. For any set $V$ of even elements with an even
modulus $m$, we give an explicit description of the sequence of Hankel
determinants in terms of subsequences of arithmetic progressions of integers.
There are numerous instances of varied $(m,V)$ with periodic sequences of
Hankel determinants. We present a sufficient condition on the set $(m,V)$ such
that the sequence of Hankel determinants is periodic, covering both even and
odd moduli $m$.
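To see such determinant sequences concretely, the sketch below computes the
Hankel determinants of a counting sequence; with no peak-height restriction,
Dyck-path counts are the Catalan numbers, whose Hankel determinants are
famously all 1.

import numpy as np

def hankel_dets(a, kmax):
    # Determinants of the k x k Hankel matrices H_k = (a[i+j]) for k = 1..kmax.
    return [round(np.linalg.det([[a[i + j] for j in range(k)] for i in range(k)]))
            for k in range(1, kmax + 1)]

catalan = [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786]
print(hankel_dets(catalan, 6))  # -> [1, 1, 1, 1, 1, 1]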
|