Finite-time coherent sets (FTCSs) are distinguished regions of phase space
that resist mixing with the surrounding space for some finite period of time;
physical manifestations include eddies and vortices in the ocean and
atmosphere, respectively. The boundaries of finite-time coherent sets are
examples of Lagrangian coherent structures (LCSs). The selection of the time
duration over which FTCS and LCS computations are made in practice is crucial
to their success. If this time is longer than the lifetime of coherence of
individual objects then existing methods will fail to detect the shorter-lived
coherence. It is of clear practical interest to determine the full lifetime of
coherent objects, but in complicated practical situations, for example a field
of ocean eddies with varying lifetimes, this is impossible with existing
approaches. Moreover, determining the timing of emergence and destruction of
coherent sets is of significant scientific interest. In this work we introduce
new constructions to address these issues. The key components are an inflated
dynamic Laplace operator and the concept of semi-material FTCSs. We make strong
mathematical connections between the inflated dynamic Laplacian and the
standard dynamic Laplacian [Froyland 2015], showing that the latter arises as a
limit of the former. The spectrum and eigenfunctions of the inflated dynamic
Laplacian directly provide information on the number, lifetimes, and evolution
of coherent~sets.
|
The ambipolar electrostatic potential rising along the magnetic field line
from the grounded wall to the centre of the linear gas dynamic trap governs the
achievable suppression of axial heat and particle losses. In this paper, we
describe a visible-range optical diagnostic that uses the Doppler shift of
plasma emission lines to measure this accelerating potential drop. A
room-temperature hydrogen jet, puffed directly onto the line of sight, serves
as the charge-exchange target for plasma ions moving in the expanding flux from
the mirror towards the wall. The velocity distribution functions of both bulk
plasma protons and $He^{2+}$ ions can be studied spectroscopically; the latter
population is produced via a neutral He tracer puff into the central cell
plasma. In this way, the potential in the centre and in the mirror area can be
measured simultaneously, along with the ion temperature. A reasonable accuracy
of $4-8\%$ was achieved in observations at a frame rate of $\approx 1~kHz$.
Active acquisitions on the gas jet also provide a spatial resolution better
than 5~mm in the midplane radial coordinate, owing to the strong compression of
the object size when projected to the centre along the magnetic flux surface.
The charge-exchange radiation diagnostic operates with three emission lines:
H-$\alpha$ 656.3~nm, He-I 667.8~nm and He-I 587.6~nm. Recorded spectra are
shown in the paper and examples of physical dependences are presented. The
described experimental technique can be scaled up to a multi-point diagnostic
for next-generation linear traps and other magnetic confinement systems.
|
Among the top approaches of recent years, link prediction using knowledge
graph embedding (KGE) models has gained significant attention for knowledge
graph completion. Various embedding models have been proposed so far, among
which some recent KGE models obtain state-of-the-art performance on link
prediction tasks by using high-dimensional embeddings (e.g. 1000), which
inflates the costs of training and evaluation given the large scale of
KGs. In this paper, we propose a simple but effective performance boosting
strategy for KGE models by using multiple low dimensions in different
repetition rounds of the same model. For example, instead of training a model
one time with a large embedding size of 1200, we repeat the training of the
model 6 times in parallel with an embedding size of 200 and then combine the 6
separate models for testing, while the overall number of adjustable parameters
stays the same (6×200=1200) and the total memory footprint is unchanged. We show
that our approach enables different models to better cope with their
expressiveness issues on modeling various graph patterns such as symmetric,
1-n, n-1 and n-n. In order to justify our findings, we conduct experiments on
various KGE models. Experimental results on standard benchmark datasets, namely
FB15K, FB15K-237 and WN18RR, show that multiple low-dimensional models of the
same kind outperform the corresponding single high-dimensional models on link
prediction in a certain range and offer training-efficiency advantages through
parallel training, while the overall number of adjustable parameters stays the
same.
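To make the strategy concrete, the sketch below (our illustration, not the authors' code) trains several independent low-dimensional KGE models and averages their triple scores at test time; DistMult stands in for an arbitrary KGE scoring model, and the training loop is omitted:

```python
import torch
import torch.nn as nn

class DistMult(nn.Module):
    """Minimal DistMult scorer; a stand-in for any KGE model (assumption)."""
    def __init__(self, n_ent, n_rel, dim):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)

    def score(self, h, r, t):
        # Score of triples (h, r, t): <e_h, w_r, e_t>
        return (self.ent(h) * self.rel(r) * self.ent(t)).sum(dim=-1)

def ensemble_score(models, h, r, t):
    """Average the scores of k independently trained low-dimensional models.
    Score averaging is one simple way to 'combine the separate models for
    testing'; the paper may combine them differently."""
    return torch.stack([m.score(h, r, t) for m in models]).mean(dim=0)

# Six dim-200 models hold as many adjustable parameters as one dim-1200 model
# and can be trained in parallel (training loop omitted here).
models = [DistMult(n_ent=15000, n_rel=237, dim=200) for _ in range(6)]
h, r, t = torch.tensor([0]), torch.tensor([1]), torch.tensor([2])
print(ensemble_score(models, h, r, t))
```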
|
One of the properties of interest in the analysis of networks is \emph{global
communicability}, i.e., how easy or difficult it is, generally, to reach nodes
from other nodes by following edges. Different global communicability measures
provide quantitative assessments of this property, emphasizing different
aspects of the problem.
This paper investigates the sensitivity of global measures of communicability
to local changes. In particular, for directed, weighted networks, we study how
different global measures of communicability change when the weight of a single
edge is changed; or, in the unweighted case, when an edge is added or removed.
The measures we study include the \emph{total network communicability}, based
on the matrix exponential of the adjacency matrix, and the \emph{Perron network
communicability}, defined in terms of the Perron root of the adjacency matrix
and the associated left and right eigenvectors.
Determining which local changes lead to the largest changes in global
communicability has many potential applications, including assessing the
resilience of a system to failure or attack, guidance for incremental system
improvements, and studying the sensitivity of global communicability measures
to errors in the network connection data.
|
Although micro-lensing of macro-lensed quasars and supernovae provides unique
opportunities for several kinds of investigations, it can add unwanted and
sometimes substantial noise. While micro-lensing flux anomalies may be safely
ignored for some observations, they severely limit others. "Worst-case"
estimates can inform the decision whether or not to undertake an extensive
examination of micro-lensing scenarios. Here, we report "worst-case"
micro-lensing uncertainties for point sources lensed by singular isothermal
potentials, parameterized by a convergence equal to the shear and by the
stellar fraction. The results can be straightforwardly applied to
non-isothermal potentials utilizing the mass sheet degeneracy. We use
micro-lensing maps to compute fluctuations in image micro-magnifications and
estimate the stellar fraction at which the fluctuations are greatest for a
given convergence. We find that the worst-case fluctuations happen at a stellar
fraction $\kappa_\star=\frac{1}{|\mu_{macro}|}$. For macro-minima, fluctuations
in both magnification and demagnification appear to be bounded ($1.5>\Delta
m>-1.3$, where $\Delta m$ is magnitude relative to the average
macro-magnification). Magnifications for macro-saddles are bounded as well
($\Delta m > -1.7$). In contrast, demagnifications for macro-saddles appear to
have unbounded fluctuations as $1/\mu_{macro}\rightarrow0$ and
$\kappa_\star\rightarrow0$.
|
A high-order quasi-conservative discontinuous Galerkin (DG) method is
proposed for the numerical simulation of compressible multi-component flows. A
distinct feature of the method is a predictor-corrector strategy to define the
grid velocity. A Lagrangian mesh is first computed based on the flow velocity
and then used as an initial mesh in a moving mesh method (the moving mesh
partial differential equation, or MMPDE, method) to improve its quality. The
fluid dynamic equations are discretized in the direct arbitrary
Lagrangian-Eulerian framework using DG elements and the non-oscillatory kinetic
flux while the species equation is discretized using a quasi-conservative DG
scheme to avoid numerical oscillations near material interfaces. A selection of
one- and two-dimensional examples is presented to verify the convergence order
and the constant-pressure-velocity preservation property of the method. They
also demonstrate that the incorporation of the Lagrangian meshing with the
MMPDE moving mesh method works well to concentrate mesh points in regions of
shocks and material interfaces.
|
We provide evidence for the existence of a new strongly coupled
four-dimensional $\mathcal{N}=2$ superconformal field theory arising as a
non-trivial IR fixed point on the Coulomb branch of the mass-deformed
superconformal Lagrangian theory with gauge group $G_2$ and four fundamental
hypermultiplets. Notably, our analysis proceeds by using various geometric
constraints to bootstrap the data of the theory, and makes no explicit
reference to the Seiberg-Witten curve. We conjecture a corresponding VOA and
check that the vacuum character satisfies a linear modular differential
equation of fourth order. We also propose an identification with existing class
$\mathcal{S}$ constructions.
|
Social touch is essential for our social interactions, communication, and
well-being. It has been shown to reduce anxiety and loneliness, and it is a key
channel for transmitting emotions for which words are not sufficient, such as
love, sympathy, and reassurance. However, direct physical contact is not always
possible due to being remotely located, interacting in a virtual environment,
or as a result of a health issue. Mediated social touch enables physical
interactions, despite the distance, by transmitting the haptic cues that
constitute social touch through devices. As this technology is fairly new, the
users' needs and their expectations on a device design and its features are
unclear, as well as who would use this technology, and in which conditions. To
better understand these aspects of the mediated interaction, we conducted an
online survey on 258 respondents located in the USA. Results give insights on
the type of interactions and device features that the US population would like
to use.
|
We establish a second anti-blocker theorem for non-commutative convex
corners, show that the anti-blocking operation is continuous on bounded sets of
convex corners, and define optimisation parameters for a given convex corner
that generalise well-known graph theoretic quantities. We define the entropy of
a state with respect to a convex corner, characterise its maximum value in
terms of a generalised fractional chromatic number and establish entropy
splitting results that demonstrate the entropic complementarity between a
convex corner and its anti-blocker. We identify two extremal tensor products of
convex corners and examine the behaviour of the introduced parameters with
respect to tensoring. Specialising to non-commutative graphs, we obtain quantum
versions of the fractional chromatic number and the clique covering number, as
well as a notion of non-commutative graph entropy of a state, which we show to
be continuous with respect to the state and the graph. We define the
Witsenhausen rate of a non-commutative graph and compute the values of our
parameters in some specific cases.
|
We focus on BPS solutions of the gauged O(3) Sigma model, due to Schroers,
and use these ideas to study the geometry of the moduli space. The model has an
asymmetry parameter $\tau$ breaking the symmetry between vortices and
antivortices in the field equations. It is shown that the moduli space is
incomplete both on the Euclidean plane and on a compact surface. On the
Euclidean plane, the $L^2$ metric on the moduli space is approximated for
well-separated cores, and results
consistent with similar approximations for the Ginzburg-Landau functional are
found. The scattering angle of approaching vortex-antivortex pairs of different
effective mass is computed numerically and is shown to be different from the
well known scattering of approaching Ginzburg-Landau vortices. The volume of
the moduli space for general $\tau$ is computed for the case of the round
sphere and flat tori. The model on a compact surface is deformed introducing a
neutral field and a Chern-Simons term. A lower bound for the Chern-Simons
constant $\kappa$ such that the extended model admits a solution is shown to
exist, and if the total number of vortices and antivortices are different, the
existence of an upper bound is also shown. Existence of multiple solutions to
the governing elliptic problem is established on a compact surface as well as
the existence of two limiting behaviours as $\kappa \to 0$. A localization
formula for the deformation is found for both Ginzburg-Landau and the O(3)
Sigma model vortices and it is shown that it can be extended to the coalescense
set. This rules out the possibility that this is Kim-Lee's term in the case of
Ginzburg-Landau vortices, moreover, the deformation term is compared on the
plane with the Ricci form of the surface and it is shown they are different,
hence also discarding that this is the term proposed by Collie-Tong to model
vortex dynamics with Chern-Simons interaction.
|
We show that the maximal number of singular points of a normal quartic
surface $X \subset \mathbb{P}^3_K$ defined over an algebraically closed field
$K$ of characteristic $2$ is at most $20$, and that if equality is attained,
then the minimal resolution of $X$ is a supersingular
K3 surface and the singular points are $20$ nodes.
We produce examples with 14 nodes. In a sequel to this paper (in two parts,
the second in collaboration with Matthias Sch\"utt) we show that the optimal
bound is indeed 14, and that if equality is attained, then the minimal
resolution of $X$ is a supersingular
K3 surface and the singular points are $14$ nodes.
We also obtain some smaller upper bounds under several geometric assumptions
holding at one of the singular points $P$ (structure of tangent cone,
separability/inseparability of the projection with centre $P$).
|
All-electronic interrogation of biofluid flow velocity by sensors
incorporated in ultra-low-power or self-sustained systems offers the promise of
enabling multifarious emerging research and applications. Electrical sensors
based on nanomaterials offer high spatiotemporal resolution, exceptional
sensitivity to external flow stimuli, and easy integration and fabrication
using scalable techniques. But existing nano-based electrical flow-sensing
technologies remain lacking in precision and stability and are typically only
applicable to simple aqueous solutions or liquid/gas dual-phase mixtures,
making them unsuitable for monitoring low-flow (~micrometer/second) yet
important characteristics of continuous biofluids (e.g., hemorheological
behaviors in microcirculation). Here we show that monolayer-graphene single
microelectrodes harvesting charge from continuous aqueous flow provide an ideal
flow sensing strategy: our devices deliver over six months of stability and
sub-micrometer/second resolution in real-time quantification of whole-blood
flows with multiscale amplitude-temporal characteristics in a microfluidic
chip. The flow transduction is enabled by low-noise charge transfer at
graphene/water interface in response to flow-sensitive rearrangement of the
interfacial electrical double layer. Our results demonstrate the feasibility of
using a graphene-based self-powered strategy for monitoring biofluid flow
velocity with key performance metrics orders of magnitude higher than other
electrical approaches.
|
Randomization-based Machine Learning methods for prediction are currently a
hot topic in Artificial Intelligence, due to their excellent performance in
many prediction problems, with a bounded computation time. The application of
randomization-based approaches to renewable energy prediction problems has been
massive in the last few years, including many different types of
randomization-based approaches, their hybridization with other techniques and
also the description of new versions of classical randomization-based
algorithms, including deep and ensemble approaches. In this paper we review the
most important characteristics of randomization-based machine learning
approaches and their application to renewable energy prediction problems. We
describe the most important methods and algorithms of this family of modeling
methods, and perform a critical literature review, examining prediction
problems related to solar, wind, marine/ocean and hydro-power renewable
sources. We support our critical analysis with an extensive experimental study,
comprising real-world problems related to solar, wind and hydro-power energy,
where randomization-based algorithms are found to achieve superior results at a
significantly lower computational cost than other modeling counterparts. We end
our survey with a prospect of the most important challenges and research
directions that remain open in this field, along with an outlook motivating
further research efforts in this exciting area.
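As one concrete member of this family, the sketch below implements an extreme learning machine, a classical randomization-based model frequently applied to renewable energy prediction: the hidden layer is random and fixed, and only the linear readout is fitted (a generic illustration of the principle, not code from any surveyed paper):

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    """Extreme learning machine: random hidden layer + least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only this layer is fitted
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: a noisy sinusoid standing in for, e.g., a wind-power time series.
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(1).normal(size=200)
W, b, beta = elm_fit(X, y)
y_hat = elm_predict(X, W, b, beta)
```

The bounded computation time mentioned above comes precisely from replacing iterative training of the hidden layer with a single linear solve.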
|
Internal interfaces in a domain may exist as material defects or appear due to
the propagation of cracks. Discretization of such geometries and
solution of the contact problem on the internal interfaces can be
computationally challenging. We employ an unfitted Finite Element (FE)
framework for the discretization of the domains and develop a tailored,
globally convergent, and efficient multigrid method for solving contact
problems on the internal interfaces. In the unfitted FE methods, structured
background meshes are used and only the underlying finite element space has to
be modified to incorporate the discontinuities. The non-penetration conditions
on the embedded interfaces of the domains are discretized using the method of
Lagrange multipliers. We reformulate the arising variational inequality problem
as a quadratic minimization problem with linear inequality constraints. Our
multigrid method can solve such problems by employing a tailored multilevel
hierarchy of the FE spaces and a novel approach for tackling the discretized
non-penetration conditions. We employ pseudo-$L^2$ projection-based transfer
operators to construct a hierarchy of nested FE spaces from the hierarchy of
non-nested meshes. The essential component of our multigrid method is a
technique that decouples the linear constraints using an orthogonal
transformation of the basis. The decoupled constraints are handled by a
modified variant of the projected Gauss-Seidel method, which we employ as a
smoother in the multigrid method. These components of the multigrid method
allow us to enforce linear constraints locally and ensure the global
convergence of our method. We demonstrate the robustness, efficiency, and
level-independent convergence of the proposed method for Signorini's
problem and two-body contact problems.
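To illustrate the smoother component in isolation, here is a minimal projected Gauss-Seidel iteration for a quadratic minimization with already-decoupled bound constraints (a simplified sketch: the full method additionally involves the orthogonal change of basis and the multilevel hierarchy described above):

```python
import numpy as np

def projected_gauss_seidel(A, b, lb, x0, n_sweeps=50):
    """Approximately minimize 0.5 x^T A x - b^T x subject to x >= lb (A SPD)
    by solving each one-dimensional subproblem exactly and projecting the
    update onto the feasible set."""
    x = x0.copy()
    for _ in range(n_sweeps):
        for i in range(len(b)):
            r = b[i] - A[i, :] @ x + A[i, i] * x[i]  # residual without the x_i term
            x[i] = max(lb[i], r / A[i, i])           # 1D solve, then project
    return x

# Toy usage: a small SPD system with a zero lower bound.
A = np.array([[4.0, -1.0], [-1.0, 3.0]])
b = np.array([1.0, -2.0])
print(projected_gauss_seidel(A, b, lb=np.zeros(2), x0=np.zeros(2)))
```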
|
Given a simple undirected graph $G$, an orientation of $G$ assigns every
edge of $G$ a direction. Borradaile et al. gave a polynomial-time greedy
algorithm, SC-Path-Reversal, which finds a strongly connected orientation that
minimizes the maximum indegree, and conjectured that SC-Path-Reversal is
optimal for the "minimizing the lexicographic order" objective as well. In this
note, we give a positive answer to the conjecture; that is, we show that
SC-Path-Reversal finds a strongly connected orientation that minimizes the
lexicographic order of indegrees.
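For intuition, the following sketch implements the basic indegree-reducing path-reversal step underlying this family of algorithms (our paraphrase of the generic idea; the actual SC-Path-Reversal algorithm additionally maintains strong connectivity, which this sketch does not enforce): find a directed path from a vertex of small indegree to a vertex of maximum indegree and reverse it, which lowers the maximum by one.

```python
from collections import deque

def reverse_reducing_path(adj, indeg):
    """One greedy step. adj: dict v -> set of out-neighbours; indeg: dict.
    BFS backwards from a maximum-indegree vertex u; if some vertex w with
    indeg(w) <= indeg(u) - 2 can reach u, reverse the w -> ... -> u path:
    indeg(u) drops by 1, indeg(w) rises by 1, intermediate vertices unchanged."""
    u = max(indeg, key=indeg.get)
    parent = {u: None}          # parent[x] = successor of x on the path to u
    queue = deque([u])
    while queue:
        v = queue.popleft()
        if v != u and indeg[v] <= indeg[u] - 2:
            x = v               # reverse the path v -> ... -> u
            while parent[x] is not None:
                y = parent[x]   # edge x -> y lies on the path
                adj[x].remove(y); adj[y].add(x)
                indeg[y] -= 1; indeg[x] += 1
                x = y
            return True
        for w in list(adj):     # predecessors of v
            if v in adj[w] and w not in parent:
                parent[w] = v
                queue.append(w)
    return False

# Toy usage: repeat until no reducing path remains.
adj = {0: {1}, 1: {2}, 2: {0}, 3: {0}}
indeg = {v: 0 for v in adj}
for v in adj:
    for w in adj[v]:
        indeg[w] += 1
while reverse_reducing_path(adj, indeg):
    pass
print(indeg)   # maximum indegree reduced from 2 to 1
```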
|
Intermediate-mass planets, from super-Earth to Neptune-sized bodies, are the
most common type of planet in the galaxy. The prevailing theory of planet
formation, core accretion, predicts significantly fewer intermediate-mass giant
planets than observed. The competing mechanism for planet formation, disk
instability, can produce massive gas giant planets on wide-orbits, such as
HR8799, by direct fragmentation of the protoplanetary disk. Previously,
fragmentation in magnetized protoplanetary disks has only been considered when
the magneto-rotational instability is the driving mechanism for magnetic field
growth. Yet, this instability is naturally superseded by the spiral-driven
dynamo when more realistic, non-ideal MHD conditions are considered. Here we
report on MHD simulations of disk fragmentation in the presence of a
spiral-driven dynamo. Fragmentation leads to the formation of long-lived bound
protoplanets with masses that are at least one order of magnitude smaller than
in conventional disk instability models. These light clumps survive shear and
do not grow further due to the shielding effect of the magnetic field, whereby
magnetic pressure stifles the local inflow of matter. The outcome is a
population of gas-rich planets with intermediate masses, while gas giants are
found to
be rarer, in qualitative agreement with the observed mass distribution of
exoplanets.
|
Since the education sector is associated with highly dynamic business
environments which are controlled and maintained by information systems, recent
technological advancements and the increasing pace of adoption of artificial
intelligence (AI) technologies create a need to identify and analyze the
issues regarding their implementation in the education sector. However, a study
of the contemporary literature revealed that relatively little research has been
undertaken in this area. To fill this void, we have identified the benefits and
challenges of implementing artificial intelligence in the education sector,
preceded by a short discussion on the concepts of AI and its evolution over
time. Moreover, we have also reviewed modern AI technologies for learners and
educators, currently available on the software market, evaluating their
usefulness. Last but not least, we have developed a strategy implementation
model, described by a five-stage, generic process, along with the corresponding
configuration guide. To verify and validate their design, we separately
developed three implementation strategies for three different higher education
organizations. We believe that the obtained results will contribute to a better
understanding of the specificities of AI systems, services and tools, and will
pave the way for their smooth implementation.
|
In compositional zero-shot learning, the goal is to recognize unseen
compositions (e.g. old dog) of observed visual primitives, namely states (e.g.
old, cute) and objects (e.g. car, dog), in the training set. This is challenging
because the same state can for example alter the visual appearance of a dog
drastically differently from a car. As a solution, we propose a novel graph
formulation called Compositional Graph Embedding (CGE) that learns image
features, compositional classifiers, and latent representations of visual
primitives in an end-to-end manner. The key to our approach is exploiting the
dependency between states, objects, and their compositions within a graph
structure to enforce the relevant knowledge transfer from seen to unseen
compositions. By learning a joint compatibility that encodes semantics between
concepts, our model allows for generalization to unseen compositions without
relying on an external knowledge base like WordNet. We show that in the
challenging generalized compositional zero-shot setting our CGE significantly
outperforms the state of the art on MIT-States and UT-Zappos. We also propose a
new benchmark for this task based on the recent GQA dataset. Code is available
at: https://github.com/ExplainableML/czsl
|
The biochemical reaction networks that regulate living systems are all
stochastic to varying degrees. The resulting randomness affects biological
outcomes at multiple scales, from the functional states of single proteins in a
cell to the evolutionary trajectory of whole populations. Controlling how the
distribution of these outcomes changes over time -- via external interventions
like time-varying concentrations of chemical species -- is a complex challenge.
In this work, we show how counterdiabatic (CD) driving, first developed to
control quantum systems, provides a versatile tool for steering biological
processes. We develop a practical graph-theoretic framework for CD driving in
discrete-state continuous-time Markov networks. We illustrate the formalism
with examples from gene regulation and chaperone-assisted protein folding,
demonstrating the possibility that nature can exploit CD driving to accelerate
response to sudden environmental changes. We generalize the method to continuum
Fokker-Planck models, and apply it to study AFM single-molecule pulling
experiments in regimes where the typical assumption of adiabaticity breaks
down, as well as an evolutionary model with competing genetic variants subject
to time-varying selective pressures. The AFM analysis shows how CD driving can
eliminate non-equilibrium artifacts due to large force ramps in such
experiments, allowing accurate estimation of biomolecular properties.
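In the discrete-state setting the control condition takes a compact form (schematic, following the general logic of CD driving rather than the paper's exact notation): for a master equation with rate matrix $W(t)$ and a desired probability trajectory $\rho(t)$, for instance the instantaneous stationary distribution of $W(t)$, one seeks counterdiabatic rates $\widetilde{W}(t)$ such that

$$\frac{dp}{dt} = W(t)\,p(t) \qquad\longrightarrow\qquad \frac{d\rho}{dt} = \widetilde{W}(t)\,\rho(t),$$

with $\widetilde{W}(t)$ constrained to remain a valid rate matrix on the same graph (non-negative off-diagonal entries, columns summing to zero), so that $\rho(t)$ is followed exactly rather than only in the adiabatic limit.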
|
Quantum properties, such as entanglement and coherence, are indispensable
resources in various quantum information processing tasks. However, an
efficient and scalable way of detecting these useful features is still lacking,
especially for high-dimensional and multipartite quantum systems. In this work,
we exploit the convexity of samples without the desired quantum features and
design an unsupervised machine learning method to detect the presence of such
features as anomalies. Particularly, in the context of entanglement detection,
we propose a complex-valued neural network composed of pseudo-siamese network
and generative adversarial net, and then train it with only separable states to
construct non-linear witnesses for entanglement. It is shown via numerical
examples, ranging from two-qubit to ten-qubit systems, that our network is able
to achieve high detection accuracy, above 97.5% on average. Moreover, it
is capable of revealing rich structures of entanglement, such as partial
entanglement among subsystems. Our results are readily applicable to the
detection of other quantum resources such as Bell nonlocality and steerability,
and thus our work could provide a powerful tool to extract quantum features
hidden in multipartite quantum data.
|
Effective traffic optimization strategies can improve the performance of
transportation networks significantly. Most existing works develop traffic
optimization strategies depending on the local traffic states of congested road
segments, neglecting congestion propagation. This paper proposes a
novel distributed traffic optimization method for urban freeways considering
the potential congested road segments, which are called
potential-homogeneous-area. The proposed approach is based on the intuition
that the evolution of congestion may affect the neighbor segments due to the
mobility of traffic flow. We identify potential-homogeneous-area by applying
our proposed temporal-spatial lambda-connectedness method using historical
traffic data. Further, global dynamic capacity constraint of this area is
integrated with cell transmission model (CTM) in the traffic optimization
problem. To reduce computational complexity and improve scalability, we propose
a fully distributed algorithm to solve the problem, which is based on the
partial augmented Lagrangian and dual-consensus alternating direction method of
multipliers (ADMM). By this means, distributed coordination of ramp metering
and variable speed limit control is achieved. We prove that the proposed
algorithm converges to the optimal solution so long as the traffic optimization
objective is convex. The performance of the proposed method is evaluated by
macroscopic simulation using real data of Shanghai, China.
|
We give a numerical condition for right-handedness of a dynamically convex
Reeb flow on $S^3$. Our condition is stated in terms of an asymptotic ratio
between the amount of rotation of the linearised flow and the linking number of
trajectories with a periodic orbit that spans a disk-like global surface of
section. As an application, we find an explicit constant $\delta_* < 0.7225$
such that if a Riemannian metric on the $2$-sphere is $\delta$-pinched with
$\delta > \delta_*$, then its geodesic flow lifts to a right-handed flow on
$S^3$. In particular, all finite collections of periodic orbits of such a
geodesic flow bind open books whose pages are global surfaces of section.
|
Conventional Supervised Learning approaches focus on the mapping from input
features to output labels. After training, the learnt models alone are applied
to testing features to predict testing labels in isolation, wasting the
training data and ignoring their associations. To take full advantage of the vast
number of training data and their associations, we propose a novel learning
paradigm called Memory-Associated Differential (MAD) Learning. We first
introduce an additional component called Memory to memorize all the training
data. Then we learn the differences of labels as well as the associations of
features in the combination of a differential equation and some sampling
methods. Finally, in the evaluation phase, we predict unknown labels by
inferring from the memorized facts plus the learnt differences and
associations in a geometrically meaningful manner. We gently build this theory
in unary situations and apply it on Image Recognition, then extend it into Link
Prediction as a binary situation, in which our method outperforms strong
state-of-the-art baselines on ogbl-ddi dataset.
|
Existing sequential recommendation methods rely on large amounts of training
data and usually suffer from the data sparsity problem. To tackle this, the
pre-training mechanism has been widely adopted, which attempts to leverage
large-scale data to perform self-supervised learning and transfer the
pre-trained parameters to downstream tasks. However, previous pre-trained
models for recommendation focus on leveraging universal sequence patterns from
user behaviour sequences and item information, while ignoring personalized
interests captured in heterogeneous user information, which has been shown to
be effective in contributing to personalized recommendation. In this paper,
we propose a method to enhance pre-trained models with heterogeneous user
information, called User-aware Pre-training for Recommendation (UPRec).
Specifically, UPRec leverages user attributes and structured social graphs
to construct self-supervised objectives in the pre-training stage and proposes
two user-aware pre-training tasks. Comprehensive experimental results on
several real-world large-scale recommendation datasets demonstrate that UPRec
can effectively integrate user information into pre-trained models and thus
provide more appropriate recommendations for users.
|
This paper is devoted to the analysis of the distribution of the total
magnetic quantum number $M$ in a relativistic subshell with $N$ equivalent
electrons of momentum $j$. This distribution is analyzed through its cumulants
and through their generating function, for which an analytical expression is
provided. This function also allows us to get the values of the cumulants at
any order. Such values are useful to obtain the moments at various orders.
Since the cumulants of the distinct subshells are additive this study directly
applies to any relativistic configuration. Recursion relations on the
generating function are given. It is shown that the generating function of the
magnetic quantum number distribution may be expressed as an $n$-th derivative
of a polynomial. This leads to recurrence relations for this distribution which are
very efficient even in the case of large $j$ or $N$. The magnetic quantum
number distribution is numerically studied using the Gram-Charlier and
Edgeworth expansions. The inclusion of high-order terms may improve the
accuracy of the Gram-Charlier representation, for instance when a small and a
large angular momentum coexist in the same configuration. However, the series
does not converge when high orders are included, and keeping only the first two
terms often provides a fair approximation of the magnetic quantum number
distribution. The Edgeworth series offers an interesting alternative, though
this expansion is also divergent and of asymptotic nature.
|
In this paper we present a continuation method which transforms spatially
distributed ODE systems into continuous PDEs. We show that this continuation can
be performed both for linear and nonlinear systems, including multidimensional,
space- and time-varying systems. When applied to a large-scale network, the
continuation provides a PDE describing evolution of continuous state
approximation that respects the spatial structure of the original ODE. Our
method is illustrated by multiple examples including transport equations,
Kuramoto equations and heat diffusion equations. As a main example, we perform
the continuation of a Newtonian system of interacting particles and obtain the
Euler equations for compressible fluids, thereby providing an original
alternative solution to Hilbert's 6th problem. Finally, we leverage our
derivation of the Euler equations to control multiagent systems, designing a
nonlinear control algorithm for robot formation based on its continuous
approximation.
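A one-dimensional transport chain gives a minimal picture of the construction (our illustrative example, in the spirit of the transport equations mentioned above): for states $x_i(t) \approx X(ih, t)$ on a line with spacing $h$,

$$\dot{x}_i(t) = \frac{x_{i+1}(t) - x_i(t)}{h} \quad\xrightarrow{\;h \to 0\;}\quad \frac{\partial X}{\partial t}(s,t) = \frac{\partial X}{\partial s}(s,t),$$

so the spatially distributed ODE system is continued into a transport PDE whose solution approximates the evolution of the network state while respecting its spatial structure.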
|
Due to the recent growth of discoveries of strong gravitational lensing (SGL)
systems, one can statistically study both lens properties and cosmological
parameters from 161 galactic-scale SGL systems. We analyze the meVSL model with
the velocity dispersion of lenses by adopting a power-law mass model depending
on redshift and surface mass density. The analysis shows that meVSL models with
various dark energy models, including $\Lambda$CDM, $\omega$CDM, and CPL, yield
negative values of the meVSL parameter $b$ when we impose the Planck prior on
$\Omega_{m0}$. These indicate a faster speed of light and a stronger
gravitational force in the past. However, if we adopt the WMAP prior on
$\Omega_{m0}$, then we obtain null results for $b$ within the 1-$\sigma$ CL
for the different dark energy models.
|
Most online message threads are inherently cluttered, and any new user, or an
existing user visiting after a hiatus, will have a difficult time understanding
what is being discussed. Similarly, cluttered responses in a message thread
make analyzing the messages a difficult problem. The need for disentangling the
clutter is much higher when the platform where the discussion takes place does
not provide functions to retrieve reply relations of the messages. This
introduces an interesting problem, which \cite{wang2011learning} phrases as a
structural learning problem. We create vector embeddings for posts in a thread
so that they capture both linguistic and positional features in relation to the
context in which a given message appears. Using these post embeddings, we
compute a similarity-based connectivity matrix, which is then converted into a
graph. After employing a pruning mechanism, the resultant graph can be used to
discover the reply relations for the posts in the thread. The process of
discovering or disentangling chat is kept unsupervised. We present our
experimental results on a data set obtained from Telegram with limited
metadata.
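The pipeline from embeddings to reply relations can be sketched as follows (our illustration; the embedding model, the similarity threshold, and the pruning rule stand in for the ones described above):

```python
import numpy as np

def disentangle(embeddings, sim_threshold=0.8):
    """Cosine-similarity connectivity matrix -> pruned graph -> reply relations.
    embeddings: (n_posts, dim) array, rows in chronological order.
    Returns a dict mapping each post to its inferred parent post (or None)."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    S = X @ X.T                                  # similarity-based connectivity
    parents = {0: None}                          # the first post starts a thread
    for i in range(1, len(X)):
        j = int(np.argmax(S[i, :i]))             # most similar earlier post
        # Pruning: keep the edge only if it clears the threshold.
        parents[i] = j if S[i, j] >= sim_threshold else None
    return parents

# Toy usage with random vectors standing in for learned post embeddings.
emb = np.random.default_rng(0).normal(size=(6, 32))
print(disentangle(emb, sim_threshold=0.2))
```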
|
The relationship between the magnetic interaction and photoinduced dynamics
in antiferromagnetic perovskites is investigated in this study. In
La${}_{1/3}$Sr${}_{2/3}$FeO${}_{3}$ thin films, commensurate spin ordering is
accompanied by charge disproportionation, whereas SrFeO${}_{3}$ thin films show
incommensurate helical antiferromagnetic spin ordering due to increased
ferromagnetic coupling compared to La${}_{1/3}$Sr${}_{2/3}$FeO${}_{3}$. To
understand the photoinduced spin dynamics in these materials, we investigate
the spin ordering through time-resolved resonant soft X-ray scattering. In
La${}_{1/3}$Sr${}_{2/3}$FeO${}_{3}$, ultrafast quenching of the magnetic
ordering within 130 fs through a nonthermal process is observed, triggered by
charge transfer between the Fe atoms. We compare this to the photoinduced
dynamics of the helical magnetic ordering of SrFeO${}_{3}$. We find that the
change in the magnetic coupling through optically induced charge transfer can
offer an even more efficient channel for spin-order manipulation.
|
In this paper we study the equations of the elimination ideal associated with
$n+1$ generic multihomogeneous polynomials defined over a product of projective
spaces of dimension $n$. We first prove a duality property and then make this
duality explicit by introducing multigraded Sylvester forms. These results
provide a partial generalization of similar properties that are known in the
setting of homogeneous polynomial systems defined over a single projective
space. As an important consequence, we derive a new family of elimination
matrices that can be used for solving zero-dimensional multiprojective
polynomial systems by means of linear algebra methods.
|
We report microscopic, cathodoluminescence, chemical and O isotopic
measurements of FeO-poor isolated olivine grains (IOG) in the carbonaceous
chondrites Allende (CV3), Northwest Africa 5958 (C2-ung), Northwest Africa
11086 (CM2-an), and Allan Hills 77307 (CO3.0). The general petrographic, chemical
and isotopic similarity with bona fide type I chondrules confirms that the IOG
derived from them. The concentric CL zoning, reflecting a decrease in
refractory elements toward the margins, and frequent rimming by enstatite are
taken as evidence of interaction of the IOG with the gas as stand-alone
objects. This indicates that they were splashed out of chondrules when these
were still partially molten. CaO-rich refractory forsterites, which are
restricted to $\Delta^{17}O < -4\permil$ likely escaped equilibration at lower
temperatures because of their large size and possibly quicker quenching. The
IOG thus bear witness to frequent collisions in the chondrule-forming regions.
|
The nova rate in the Milky Way remains largely uncertain, despite its vital
importance in constraining models of Galactic chemical evolution as well as
understanding progenitor channels for Type Ia supernovae. The rate has been
previously estimated in the range of $\approx10-300$ yr$^{-1}$, either based on
extrapolations from a handful of very bright optical novae or the nova rates in
nearby galaxies; both methods are subject to debatable assumptions. The total
discovery rate of optical novae remains much smaller ($\approx5-10$ yr$^{-1}$)
than these estimates, even with the advent of all-sky optical time domain
surveys. Here, we present a systematic sample of 12 spectroscopically confirmed
Galactic novae detected in the first 17 months of Palomar Gattini-IR (PGIR), a
wide-field near-infrared time domain survey. Operating in $J$-band
($\approx1.2$ $\mu$m) that is relatively immune to dust extinction, the
extinction distribution of the PGIR sample is highly skewed to large extinction
values ($> 50$% of events obscured by $A_V\gtrsim5$ mag). Using recent
estimates for the distribution of mass and dust in the Galaxy, we show that the
observed extinction distribution of the PGIR sample is commensurate with that
expected from dust models. The PGIR extinction distribution is inconsistent
with that reported in previous optical searches (null hypothesis probability $<
0.01$%), suggesting that a large population of highly obscured novae have been
systematically missed in previous optical searches. We perform the first
quantitative simulation of a $3\pi$ time domain survey to estimate the Galactic
nova rate using PGIR, and derive a rate of $\approx 46.0^{+12.5}_{-12.4}$
yr$^{-1}$. Our results suggest that all-sky near-infrared time-domain surveys
are well poised to uncover the Galactic nova population.
|
Reconfigurable intelligent surface (RIS) is considered as a revolutionary
technology for future wireless communication networks. In this letter, we
consider the acquisition of the cascaded channels, which is a challenging task
due to the massive number of passive RIS elements. To reduce the pilot
overhead, we adopt the element-grouping strategy, where each element in one
group shares the same reflection coefficient and is assumed to have the same
channel condition. We analyze the channel interference caused by the
element-grouping strategy and further design two deep learning based networks.
The first one aims to refine the partial channels by eliminating the
interference, while the second one tries to extrapolate the full channels from
the refined partial channels. We cascade the two networks and jointly train
them. Simulation results show that the proposed scheme provides significant
gain compared to the conventional element-grouping method without interference
elimination.
|
The progenitors of present-day galaxy clusters give important clues about the
evolution of the large scale structure, cosmic mass assembly, and galaxy
evolution. Simulations are a major tool for these studies since they are used
to interpret observations. In this work, we introduce a set of
"protocluster-lightcones", dubbed PCcones. They are mock galaxy catalogs
generated from the Millennium Simulation with the L-GALAXIES semi-analytic
model. These lightcones were constructed by placing a desired structure at the
redshift of interest at the centre of the cone. This approach allows us to adopt a
set of observational constraints, such as magnitude limits and uncertainties in
magnitudes and photometric redshifts (photo-zs), to produce realistic
simulations of photometric surveys. We show that photo-zs obtained with PCcones
are more accurate than those obtained directly with the Millennium Simulation,
mostly due to the difference in how apparent magnitudes are computed. We apply
PCcones in the determination of the expected accuracy of protocluster detection
using photo-zs in the $z=1-3$ range in the wide-layer of HSC-SSP and the
10-year LSST forecast. With our technique, we expect to recover only $\sim38\%$
and $\sim 43\%$ of all massive galaxy cluster progenitors with more than 70\%
of purity for HSC-SSP and LSST, respectively. Indeed, the combination of
observational constraints and photo-z uncertainties critically affects the
detection of structures for both emulations, indicating the need for
spectroscopic redshifts to improve detection. We also compare our mocks of the
Deep CFHTLS at
$z<1.5$ with observed cluster catalogs, as an extra validation of the
lightcones and methods.
|
We reanalyse the solar eclipse linked to the Biblical passage about the
military leader Joshua who ordered the sun to halt in the midst of the day
(Joshua 10:12). Although there is agreement that the basic story is rooted in a
real event, the date is subject to different opinions. We review the historical
emergence of the text and confirm that the total eclipse of the sun of 30
September 1131 BCE is the most likely candidate. The Besselian Elements for
this eclipse are re-computed. The error for the deceleration parameter of
Earth's rotation, $\Delta T$, is improved by a factor of 2.
|
Aspect-based Sentiment Analysis (ABSA) aims to identify the aspect terms,
their corresponding sentiment polarities, and the opinion terms. There exist
seven subtasks in ABSA. Most studies focus only on subsets of these subtasks,
which leads to various complicated ABSA models that are hard to unify within a
single framework. In this paper, we redefine every subtask
target as a sequence mixed by pointer indexes and sentiment class indexes,
which converts all ABSA subtasks into a unified generative formulation. Based
on the unified formulation, we exploit the pre-training sequence-to-sequence
model BART to solve all ABSA subtasks in an end-to-end framework. Extensive
experiments on four ABSA datasets for seven subtasks demonstrate that our
framework achieves substantial performance gains and provides a truly unified
end-to-end solution for the whole set of ABSA subtasks, which could benefit
multiple tasks.
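As a toy illustration of the unified target formulation (our own example; the exact index conventions are assumptions, not necessarily the paper's scheme), every subtask output becomes one flat sequence of token pointer indexes with sentiment class indexes placed after the pointer range:

```python
# Sentence tokens, referred to by 0-indexed pointers.
tokens = ["the", "battery", "life", "is", "good", "but", "screen", "flickers"]

# Sentiment class indexes live past the pointer range (assumed convention).
CLASS_OFFSET = len(tokens)
POS, NEG, NEU = CLASS_OFFSET + 0, CLASS_OFFSET + 1, CLASS_OFFSET + 2

def encode_triplet(aspect_span, opinion_span, sentiment):
    """(aspect start, aspect end, opinion start, opinion end, class index)."""
    return [*aspect_span, *opinion_span, sentiment]

# "battery life" (tokens 1-2) is positive via "good" (token 4);
# "screen" (token 6) is negative via "flickers" (token 7).
target = encode_triplet((1, 2), (4, 4), POS) + encode_triplet((6, 6), (7, 7), NEG)
print(target)   # [1, 2, 4, 4, 8, 6, 6, 7, 7, 9]
```

A sequence-to-sequence model such as BART can then be trained to emit such index sequences directly, so that every ABSA subtask shares one generative interface.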
|
Recently, a geometric approach to operator mixing in massless QCD-like
theories -- that involves canonical forms based on the Poincaré-Dulac theorem
for the linear system that defines the renormalized mixing matrix in the
coordinate representation $Z(x,\mu)$ -- has been advocated in arXiv:2103.15527.
As a consequence, a classification of operator mixing in four cases --
depending on the canonical forms of $- \frac{\gamma(g)}{\beta(g)}$, with
$\gamma(g)=\gamma_0 g^2+\cdots$ the matrix of the anomalous dimensions and
$\beta(g)=-\beta_0 g^3 + \cdots$ the beta function -- has been proposed: (I)
nonresonant $\frac{\gamma_0}{\beta_0}$ diagonalizable, (II) resonant
$\frac{\gamma_0}{\beta_0}$ diagonalizable, (III) nonresonant
$\frac{\gamma_0}{\beta_0}$ nondiagonalizable, (IV) resonant
$\frac{\gamma_0}{\beta_0}$ nondiagonalizable. In particular, in
arXiv:2103.15527 a detailed analysis of the case (I) -- where operator mixing
reduces to all orders of perturbation theory to the multiplicatively
renormalizable case -- has been provided. In the present paper, following the
aforementioned approach, we work out in the remaining three cases the canonical
forms for $- \frac{\gamma(g)}{\beta(g)}$ to all orders of perturbation theory,
the corresponding UV asymptotics of $Z(x,\mu)$, and the physics interpretation.
We also work out in detail physical realizations of the cases (I) and (II).
|
Ova-angular rotations of a prime number are characterized, constructed using
the Dirichlet theorem. The geometric properties arising from this theory are
analyzed and some applications are presented, including Goldbach's conjecture,
the existence of infinite primes of the form $\rho = k^2+1$ and the convergence
of the sum of the inverses of the Mersenne primes. Although the mathematics
used is elementary, the usefulness of this theory based on geometric properties
is apparent. The paper ends by introducing the ova-angular square matrix.
|
Data from multifactor HCI experiments often violates the normality assumption
of parametric tests (i.e., nonconforming data). The Aligned Rank Transform
(ART) is a popular nonparametric analysis technique that can find main and
interaction effects in nonconforming data, but leads to incorrect results when
used to conduct contrast tests. We created a new algorithm called ART-C for
conducting contrasts within the ART paradigm and validated it on 72,000 data
sets. Our results indicate that ART-C does not inflate Type I error rates,
unlike contrasts based on ART, and that ART-C has more statistical power than a
t-test, Mann-Whitney U test, Wilcoxon signed-rank test, and ART. We also
extended a tool called ARTool with our ART-C algorithm for both Windows and R.
Our validation had some limitations (e.g., only six distribution types, no
mixed factorial designs, no random slopes), and data drawn from Cauchy
distributions should not be analyzed with ART-C.
|
We investigate the formation and growth of massive black hole (BH) seeds in
dusty star-forming galaxies, relying on and extending the framework proposed by
Boco et al. 2020. Specifically, the latter envisages the migration of stellar
compact remnants (neutron stars and stellar-mass black holes) via gaseous
dynamical friction towards the galaxy nuclear region, and their subsequent
merging to grow a massive central BH seed. In this paper we add two relevant
ingredients: (i) we include primordial BHs, that could constitute a fraction
$f_{\rm pBH}$ of the dark matter, as an additional component participating in
the seed growth; (ii) we predict the stochastic gravitational wave background
originated during the seed growth, both from stellar compact remnant and from
primordial BH mergers. We find that the latter events contribute most to the
initial growth of the central seed during a timescale of $10^6-10^7\,\rm yr$,
before stellar compact remnant mergers and gas accretion take over. In
addition, if the fraction of primordial BHs $f_{\rm pBH}$ is large enough,
gravitational waves emitted by their mergers in the nuclear galactic regions
could be detected by future interferometers like the Einstein Telescope, DECIGO
and LISA. As for the associated stochastic gravitational wave background, we
predict that it extends over the wide frequency band $10^{-6}\lesssim f [{\rm
Hz}]\lesssim 10$, which is very different from the typical range originated by
mergers of isolated binary compact objects. On the one hand, the detection of
such a background could be a smoking gun to test the proposed seed growth
mechanism; on the other hand, it constitutes a relevant contaminant from
astrophysical sources to be characterized and subtracted, in the challenging
search for a primordial background of cosmological origin.
|
The viewing size of a signer correlates with legibility, i.e., the ease with
which a viewer can recognize individual signs. The WCAG 2.0 guidelines (G54)
mention in the notes that there should be a mechanism to adjust the size to
ensure the signer is discernible, but they do not state minimum discernibility
guidelines. The fluent range (the range over which sign viewers can follow the
signers at maximum speed) extends from about 7{\deg} to 20{\deg}, which is far
greater than 2{\deg} for print. Assuming a standard viewing distance of 16
inches from a 5-inch smartphone display, the corresponding sizes are from 2 to
5 inches, i.e., from 1/3rd to full-screen. This is consistent with vision
science findings about human visual processing properties, and how they play a
dominant role in constraining the distribution of signer sizes.
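The conversion behind those numbers is elementary trigonometry (a worked check, our illustration): an object subtending visual angle $\theta$ at viewing distance $d$ has physical size

$$s = 2d\tan(\theta/2), \qquad s_{7^\circ} = 2 \times 16\,\text{in} \times \tan(3.5^\circ) \approx 2.0\,\text{in}, \qquad s_{20^\circ} = 2 \times 16\,\text{in} \times \tan(10^\circ) \approx 5.6\,\text{in},$$

consistent with the roughly 2-to-5-inch range quoted above (the small difference at the upper end is due to rounding).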
|
Time-of-Flight Magnetic Resonance Angiographs (TOF-MRAs) enable visualization
and analysis of cerebral arteries. This analysis may indicate normal variation
of the configuration of the cerebrovascular system or vessel abnormalities,
such as aneurysms. A model would be useful to represent normal cerebrovascular
structure and variability in a healthy population and to differentiate it from
abnormalities. Current anomaly detection using autoencoding convolutional
neural networks usually uses a voxelwise mean error for optimization. We propose
optimizing a variational-autoencoder (VAE) with structural similarity loss
(SSIM) for TOF-MRA reconstruction. A patch-trained 2D fully-convolutional VAE
was optimized for TOF-MRA reconstruction by comparing vessel segmentations of
original and reconstructed MRAs. The method was trained and tested on two
datasets: the IXI dataset, and a subset from the ADAM challenge. Both trained
networks were tested on a dataset including subjects with aneurysms. We
compared VAE optimization with L2-loss and SSIM-loss. Performance was evaluated
between original and reconstructed MRAs using mean square error, mean-SSIM,
peak-signal-to-noise-ratio and dice similarity index (DSI) of segmented
vessels. The L2-optimized VAE outperforms the SSIM-optimized one, with improved
reconstruction metrics and DSIs for both datasets. Optimization using SSIM
performed best for
visual image quality, but with discrepancy in quantitative reconstruction and
vascular segmentation. The larger, more diverse IXI dataset had overall better
performance. Reconstruction metrics, including SSIM, were lower for MRAs
including aneurysms. A SSIM-optimized VAE improved the visual perceptive image
quality of TOF-MRA reconstructions. An L2-optimized VAE performed best for
TOF-MRA reconstruction, where the vascular segmentation is important. SSIM is a
potential metric for anomaly detection of MRAs.
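The two objectives compared above differ only in the reconstruction term of the VAE loss. A minimal PyTorch sketch (our illustration: a simplified uniform-window SSIM rather than the customary Gaussian-window implementation, and a generic VAE loss rather than the paper's exact network):

```python
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale SSIM with a uniform window, for images scaled to [0, 1]."""
    p = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=p)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=p)
    var_x = F.avg_pool2d(x * x, window, stride=1, padding=p) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, stride=1, padding=p) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, stride=1, padding=p) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def vae_loss(x, x_hat, mu, logvar, use_ssim=True, beta=1.0):
    """Reconstruction term (SSIM- or L2-based) plus the usual KL term."""
    recon = 1.0 - ssim(x_hat, x) if use_ssim else F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
    return recon + beta * kl

# Toy usage on random "patches" (N, C, H, W).
x, x_hat = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
mu, logvar = torch.zeros(2, 16), torch.zeros(2, 16)
print(vae_loss(x, x_hat, mu, logvar, use_ssim=True))
```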
|
We investigate weak$^*$ derived sets, that is the sets of weak$^*$ limits of
bounded nets, of convex subsets of duals of non-reflexive Banach spaces and
their possible iterations. We prove that a dual space of any non-reflexive
Banach space contains convex subsets of any finite order and a convex subset of
order $\omega + 1$.
|
The scaling relations between the black hole (BH) mass and soft lag
properties for both active galactic nuclei (AGNs) and BH X-ray binaries
(BHXRBs) suggest the same underlying physical mechanism at work in accreting BH
systems spanning a broad range of mass. However, the low-mass end of AGNs has
never been explored in detail. In this work, we extend the existing scaling
relations to lower-mass AGNs, which serve as anchors between the normal-mass
AGNs and BHXRBs. For this purpose, we construct a sample of low-mass AGNs
($M_{\rm BH}<3\times 10^{6} M_{\rm \odot}$) from the XMM-Newton archive and
measure frequency-resolved time delays between the soft (0.3-1 keV) and hard
(1-4 keV) X-ray emissions. We report that the soft band lags behind the hard
band emission at high frequencies $\sim[1.3-2.6]\times 10^{-3}$ Hz, which is
interpreted as a sign of reverberation from the inner accretion disc in
response to the direct coronal emission. At low frequencies ($\sim[3-8]\times
10^{-4}$ Hz), the hard band lags behind the soft band variations, which we
explain in the context of the inward propagation of luminosity fluctuations
through the corona. Assuming a lamppost geometry for the corona, we find that
the X-ray source of the sample extends at an average height and radius of $\sim
10r_{\rm g}$ and $\sim 6r_{\rm g}$, respectively. Our results confirm that the
scaling relations between the BH mass and soft lag amplitude/frequency derived
for higher-mass AGNs can safely extrapolate to lower-mass AGNs, and the
accretion process is indeed independent of the BH mass.
|
The presence of interface recombination in a complex multilayered thin-film
solar structure causes a disparity between the internal open-circuit voltage
(VOC,in), measured by photoluminescence, and the external open-circuit voltage
(VOC,ex), i.e. an additional VOC deficit. Achieving higher VOC,ex values
requires a comprehensive understanding of the connection between the VOC
deficit and interface recombination. Here, a deep near-surface defect model at
the absorber/buffer interface is developed for copper indium di-selenide solar
cells grown under Cu-excess conditions to explain the disparity between VOC,in
and VOC,ex. The
model is based on experimental analysis of admittance spectroscopy and
deep-level transient spectroscopy, which show the signature of deep acceptor
defect. Further, temperature-dependent current-voltage measurements confirm the
presence of near surface defects as the cause of interface recombination. The
numerical simulations show a strong decrease in the local VOC,in near the
absorber/buffer interface, leading to a VOC deficit in the device. This loss
mechanism leads to interface recombination without a reduced interface bandgap
or Fermi level pinning. Further, these findings demonstrate that the VOC,in
measurements alone can be inconclusive and might conceal the information on
interface recombination pathways, establishing the need for complementary
techniques like temperature dependent current voltage measurements to identify
the cause of interface recombination in the devices.
|
We study random walks on the isometry group of a Gromov hyperbolic space or
Teichm\"uller space. We prove that the translation lengths of random isometries
satisfy a central limit theorem if and only if the random walk has finite
second moment. While doing this, we recover the central limit theorem of
Benoist and Quint for the displacement of a reference point and establish its
converse. Also discussed are the corresponding laws of the iterated logarithm.
Finally, we prove sublinear geodesic tracking by random walks with finite
$(1/2)$-th moment and logarithmic tracking by random walks with finite
exponential moment.
|
A multi-agent optimization problem motivated by the management of energy
systems is discussed. The associated cost function is separable and convex,
although not necessarily strongly convex, and there exist edge-based coupling
equality constraints. In this regard, we propose a distributed algorithm based
on solving the dual of the augmented problem. Furthermore, we consider that the
communication network might be time-varying and the algorithm might be carried
out asynchronously. The time-varying nature and the asynchronicity are modeled
as random processes. Then, we show the convergence and the convergence rate of
the proposed algorithm under the aforementioned conditions.
|
Deploying sophisticated deep learning models on embedded devices with the
purpose of solving real-world problems is a struggle using today's technology.
Privacy and data limitations, network connection issues, and the need for fast
model adaptation are some of the challenges that render today's approaches
unfit for many applications on the edge and make real-time on-device training a
necessity. Google is currently working on tackling these challenges by
embedding an experimental transfer learning API into TensorFlow Lite, its
machine learning library. In this paper, we show that although transfer
catastrophic forgetting when faced with more realistic scenarios. We present
this issue by testing a simple transfer learning model on the CORe50 benchmark
as well as by demonstrating its limitations directly on an Android application
we developed. In addition, we expand the TensorFlow Lite library to include
continual learning capabilities, by integrating a simple replay approach into
the head of the current transfer learning model. We test our continual learning
model on the CORe50 benchmark to show that it tackles catastrophic forgetting,
and we demonstrate its ability to continually learn, even under non-ideal
conditions, using the application we developed. Finally, we open-source the
code of our Android application to enable developers to integrate continual
learning into their own smartphone applications, as well as to facilitate
further
development of continual learning functionality into the TensorFlow Lite
environment.
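
A minimal sketch of the replay idea described above: keep a reservoir-sampled
buffer of past examples and mix it into every training step of the model head.
This is not the TensorFlow Lite API; `head.fit_on_batch` is a hypothetical
stand-in for the actual head-training call, and the sizes are illustrative.

```python
import random

class ReplayBuffer:
    """Uniform reservoir sample over all examples seen so far."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example     # replace a random old example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def train_step(head, new_batch, buffer, replay_k=16):
    # Mixing fresh and replayed examples is what counters forgetting.
    batch = list(new_batch) + buffer.sample(replay_k)
    head.fit_on_batch(batch)               # hypothetical training call
    for example in new_batch:
        buffer.add(example)
```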
|
This chapter presents an overview on actuator attacks that exploit zero
dynamics, and countermeasures against them. First, zero-dynamics attack is
re-introduced based on a canonical representation called normal form. Then it
is shown that the target dynamic system is at elevated risk if the associated
zero dynamics is unstable. From there on, several questions are raised in
series to ensure when the target system is immune to the attack of this kind.
The first question is: Is the target system secure from zero-dynamics attack if
it does not have any unstable zeros? An answer provided for this question is:
No, the target system may still be at risk due to another attack surface
emerging in the process of implementation. This is followed by a series of
further questions, and in the course of providing answers, variants of the
classic
zero-dynamics attack are presented, from which the vulnerability of the target
system is explored in depth. At the end, countermeasures are proposed to render
the attack ineffective. Because it is known that the zero-dynamics in
continuous-time systems cannot be modified by feedback, the main idea of the
countermeasure is to relocate any unstable zero to a stable region in the stage
of digital implementation through modified digital samplers and holders.
Adversaries can still attack actuators, but due to the re-located zeros, they
are of little use in damaging the target system.
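
The implementation-level attack surface mentioned above can be illustrated
with a textbook fact: a continuous-time plant with no finite zeros acquires an
unstable sampling zero under zero-order-hold discretization. A SciPy sketch,
where the triple integrator and the sampling period are standard illustrations
rather than the chapter's plant:

```python
import numpy as np
from scipy import signal

T = 0.1  # sampling period
# G(s) = 1/s^3: relative degree 3, no finite zeros in continuous time.
numd, dend, _ = signal.cont2discrete(([1.0], [1.0, 0.0, 0.0, 0.0]),
                                     T, method="zoh")
zeros = np.roots(np.ravel(numd))
print(zeros)  # approx [-3.732, -0.268]; |z| > 1 is an unstable sampling zero
```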
|
Guitar fretboards are designed based on the equation of the ideal string.
That is, it neglects several factors such as nonlinearities and the bending
stiffness of the strings. Due to this fact, the intonation of guitars along the
whole neck is not perfect, and guitars are in tune only in an \emph{average}
sense. There are commercially available fretboards that differ from the
traditional design.\footnote{One example is the patented design \cite{patent}
by the company True Temperament AB, where each fretboard is made using CNC
processes.} As a final application of this work we would like to redesign the
fretboard layout considering the effects of bending stiffness. The main goal of
this project is to analyze the differences between the solutions for
vibrations of the ideal string and a stiff string. These differences should
lead to changes in the fret distribution for a guitar, and, hopefully improve
the overall intonation of the instrument. We will start analyzing the ideal
string equation and after a good understanding of this analytical solution we
will proceed with the more complex stiff-string equation. Topics like
separation of variables, Fourier transforms, and perturbation analysis might
prove useful during the course of this project.
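
As a preview of the intended calculation, the sketch below compares ideal
12-TET fret positions with positions corrected by the common first-order
stiff-string model, in which the fundamental scales as $(1/L)\sqrt{1+B(L)}$
and the inharmonicity $B$ grows as $1/L^2$ when the speaking length is
shortened. The scale length and $B_0$ are assumed values, not measurements.

```python
import numpy as np
from scipy.optimize import brentq

L0 = 0.648   # scale length in metres (typical guitar, illustrative)
B0 = 4e-4    # assumed inharmonicity of the open string

def f_rel(L):
    """Fundamental of the fretted stiff string relative to the open string."""
    B = B0 * (L0 / L) ** 2
    return (L0 / L) * np.sqrt((1.0 + B) / (1.0 + B0))

for k in range(1, 13):
    target = 2.0 ** (k / 12.0)        # 12-TET frequency ratio for fret k
    L_ideal = L0 / target             # ideal-string fret position
    L_stiff = brentq(lambda L: f_rel(L) - target, 0.3 * L0, L0)
    print(f"fret {k:2d}: ideal {1000 * (L0 - L_ideal):7.2f} mm, "
          f"stiff {1000 * (L0 - L_stiff):7.2f} mm from nut")
```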
|
This investigation presents evidence of the relation between the dynamics of
intense events in small-scale turbulence and the energy cascade. We use the
generalised (H\"older) means to track the temporal evolution of intense events
of the enstrophy and the dissipation in direct numerical simulations of
isotropic turbulence. We show that these events are modulated by large-scale
fluctuations, and that their evolution is consistent with a local
multiplicative cascade, as hypothesised by a broad class of intermittency
models of turbulence.
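
For reference, the generalised (Hölder) mean of order $p$ of a non-negative
field $u$ is $M_p(u)=\langle u^p\rangle^{1/p}$; large $p$ weights the most
intense regions, which is what makes it a tracker of extreme events. A toy
sketch, with a lognormal sample standing in for the enstrophy field:

```python
import numpy as np

def holder_mean(u, p):
    """Generalised mean M_p(u) = (mean(u**p))**(1/p) for u >= 0."""
    u = np.asarray(u, dtype=float)
    return np.mean(u ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
enstrophy = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)  # toy field
for p in (1, 2, 4, 8):
    print(p, holder_mean(enstrophy, p))  # grows with p: dominated by peaks
```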
|
A new implicit-explicit local differential transform method (IELDTM) is
derived here for time integration of the nonlinear advection-diffusion
processes represented by the (2+1)-dimensional Burgers equation. The IELDTM is
adaptively constructed as stability preserved and high order time integrator
for spatially discretized Burgers equation. For spatial discretization of the
model equation, the Chebyshev spectral collocation method (ChCM) is utilized. A
robust stability analysis and global error analysis of the IELDTM are presented
with respect to the direction parameter $\theta$. With the help of the global
error analysis, adaptivity equations are derived to minimize the computational
costs of the algorithms. The produced method is shown to eliminate the accuracy
disadvantage of the classical $\theta$-method and the stability disadvantages of
the existing DTM-based methods. Two examples of the Burgers equation in one and
two dimensions have been solved via the ChCM-IELDTM hybridization, and the
produced results are compared with the literature. The present time integrator
has been proven to produce more accurate numerical results than the MATLAB
solvers, ode45 and ode15s.
|
Concatenation and equilibrium swelling of Olympic gels, which are composed of
entangled cyclic polymers, are studied by Monte Carlo simulations. The average
number of concatenated molecules per cyclic polymer, $f_n$, is found to depend
on the degree of polymerization, $N$, and the polymer volume fraction at
network preparation, $\phi_0$, as $f_n \sim \phi_0^{\nu/(3\nu-1)}N$ with
scaling exponent $\nu = 0.588$. In contrast to chemically cross-linked polymer
networks, we observe that Olympic gels made of longer cyclic chains exhibit a
smaller equilibrium swelling degree, $Q \sim N^{-0.28}\phi_0^{-0.72}$, at the
same polymer volume fraction $\phi_0$. This observation is explained by a
disinterspersion process of overlapping non-concatenated rings upon swelling,
which is tested directly by analyzing the change in overlap of the molecules
upon swelling.
|
Temporal Neural Networks (TNNs) are spiking neural networks that use time as
a resource to represent and process information, similar to the mammalian
neocortex. In contrast to compute-intensive deep neural networks that employ
separate training and inference phases, TNNs are capable of extremely efficient
online incremental/continual learning and are excellent candidates for building
edge-native sensory processing units. This work proposes a microarchitecture
framework for implementing TNNs using standard CMOS. Gate-level implementations
of three key building blocks are presented: 1) multi-synapse neurons, 2)
multi-neuron columns, and 3) unsupervised and supervised online learning
algorithms based on Spike Timing Dependent Plasticity (STDP). The proposed
microarchitecture is embodied in a set of characteristic scaling equations for
assessing the gate count, area, delay and power for any TNN design.
Post-synthesis results (in 45nm CMOS) for the proposed designs are presented,
and their online incremental learning capability is demonstrated.
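
A minimal software sketch of an STDP-style update of the kind the gate-level
learning blocks implement: synapses that see a presynaptic spike shortly
before the postsynaptic one are strengthened, and the reverse timing weakens
them. The exponential window and constants are illustrative; the actual TNN
rules operate on discrete spike-time comparisons.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=10.0,
                w_min=0.0, w_max=1.0):
    """Potentiate if pre fires before post, depress otherwise."""
    dt = t_post - t_pre
    if dt >= 0:                          # causal pairing: strengthen
        w += a_plus * np.exp(-dt / tau)
    else:                                # anti-causal pairing: weaken
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, w_min, w_max))

w = 0.5
w = stdp_update(w, t_pre=3.0, t_post=5.0)  # pre precedes post -> w rises
print(w)
```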
|
We propose new methods for in-domain and cross-domain Named Entity
Recognition (NER) on historical data for Dutch and French. For the cross-domain
case, we address domain shift by integrating unsupervised in-domain data via
contextualized string embeddings, and we address OCR errors by injecting
synthetic OCR errors into the source domain, a data-centric form of domain
adaptation. We propose a general approach to imitate OCR errors in arbitrary
input data. Our
cross-domain as well as our in-domain results outperform several strong
baselines and establish state-of-the-art results. We publish preprocessed
versions of the French and Dutch Europeana NER corpora.
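
A sketch of the general OCR-error imitation idea, assuming a hand-made
character confusion table and illustrative corruption rates; the paper's
corruption model is derived from real OCR behaviour rather than hard-coded.

```python
import random

# Visually plausible glyph confusions (illustrative, not from the paper).
CONFUSIONS = {"e": ["c", "o"], "l": ["1", "i"], "m": ["rn"],
              "n": ["h"], "s": ["5"], "o": ["0"]}

def inject_ocr_errors(text, p_sub=0.05, p_del=0.02, seed=None):
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        if r < p_del:
            continue                      # simulate a dropped glyph
        if r < p_del + p_sub and ch.lower() in CONFUSIONS:
            out.append(rng.choice(CONFUSIONS[ch.lower()]))
        else:
            out.append(ch)
    return "".join(out)

print(inject_ocr_errors("manuscripts from the Europeana collection", seed=1))
```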
|
COVID-19 has impacted nations differently based on their policy
implementations. Effective policy requires taking into account public
information and adapting to new knowledge. Epidemiological models built to
understand COVID-19 seldom provide the policymaker with the capability for
adaptive pandemic control (APC). The core challenges to be overcome
include (a) the inability to handle a high degree of non-homogeneity in
different
contributing features across the pandemic timeline, (b) lack of an approach
that enables adaptive incorporation of public health expert knowledge, and (c)
transparent models that enable understanding of the decision-making process in
suggesting policy. In this work, we take the early steps to address these
challenges using Knowledge Infused Policy Gradient (KIPG) methods. Prior work
on knowledge infusion does not handle soft and hard imposition of varying forms
of knowledge in disease information and guidelines to necessarily comply with.
Furthermore, the models do not attend to non-homogeneity in feature counts,
manifesting as partial observability in informing the policy. Additionally,
interpretable structures are extracted post-learning instead of learning an
interpretable model required for APC. To this end, we introduce a mathematical
framework for KIPG methods that can (a) induce relevant feature counts over
multi-relational features of the world, (b) handle latent non-homogeneous
counts as hidden variables that are linear combinations of kernelized
aggregates over the features, and (c) infuse knowledge as functional
constraints in a principled manner. The study establishes a theory for imposing
hard and soft constraints and simulates it through experiments. In comparison
with knowledge-intensive baselines, we show quick sample efficient adaptation
to new knowledge and interpretability in the learned policy, especially in a
pandemic context.
|
Bayesian optimization is a popular method for optimizing expensive black-box
functions. The objective functions of hard real-world problems are often
characterized by a fluctuating landscape with many local optima. Bayesian
optimization risks over-exploiting such traps, leaving insufficient
query budget for exploring the global landscape. We introduce Coordinate
Backoff Bayesian Optimization (CobBO) to alleviate these challenges. CobBO
captures a smooth approximation of the global landscape by interpolating the
values of queried points projected to randomly selected promising subspaces.
A smaller query budget thus suffices for the Gaussian process
regressions applied over the lower-dimensional subspaces. This approach can be
viewed as a variant of coordinate ascent, tailored for Bayesian optimization,
using a stopping rule for backing off from a certain subspace and switching to
another coordinate subset. Extensive evaluations show that CobBO finds
solutions comparable to or better than other state-of-the-art methods for
dimensions ranging from tens to hundreds, while reducing the trial complexity.
|
Automatic software development has been a research hot spot in the field of
software engineering (SE) in the past decade. In particular, deep learning (DL)
has been applied and achieved a lot of progress in various SE tasks. Among all
applications, automatic code generation by machines as a general concept,
including code completion and code synthesis, is a common expectation in the
field of SE; it may greatly reduce the development burden of software
developers and improve the efficiency and quality of the software development
process to a certain extent. Code completion is an important part of modern
integrated development environments (IDEs). Code completion technology
effectively helps programmers complete class names, method names, and
keywords, etc., which improves the efficiency of program development and
reduces spelling errors in the coding process. Such tools use static analysis
on the code and provide candidates for completion arranged in alphabetical
order. Code synthesis is implemented from two aspects, one based on
input-output samples and the other based on functionality description. In this
study, we introduce existing techniques of these two aspects and the
corresponding DL techniques, and present some possible future research
directions.
|
We initiate the study of incentive-compatible forecasting competitions in
which multiple forecasters make predictions about one or more events and
compete for a single prize. We have two objectives: (1) to incentivize
forecasters to report truthfully and (2) to award the prize to the most
accurate forecaster. Proper scoring rules incentivize truthful reporting if all
forecasters are paid according to their scores. However, incentives become
distorted if only the best-scoring forecaster wins a prize, since forecasters
can often increase their probability of having the highest score by reporting
more extreme beliefs. In this paper, we introduce two novel forecasting
competition mechanisms. Our first mechanism is incentive compatible and
guaranteed to select the most accurate forecaster with probability higher than
any other forecaster. Moreover, we show that in the standard single-event,
two-forecaster setting and under mild technical conditions, no other
incentive-compatible mechanism selects the most accurate forecaster with higher
probability. Our second mechanism is incentive compatible when forecasters'
beliefs are such that information about one event does not lead to belief
updates on other events, and it selects the best forecaster with probability
approaching 1 as the number of events grows. Our notion of incentive
compatibility is more general than previous definitions of dominant strategy
incentive compatibility in that it allows for reports to be correlated with the
event outcomes. Moreover, our mechanisms are easy to implement and can be
generalized to the related problems of outputting a ranking over forecasters
and hiring a forecaster with high accuracy on future events.
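
The incentive distortion that motivates these mechanisms shows up in a toy
simulation: under a winner-take-all prize judged by the (proper) Brier score,
a forecaster who exaggerates a shared belief beats a truthful one more often.
All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.7                        # both forecasters believe (correctly) 0.7
report_honest, report_extreme = 0.7, 0.95
outcomes = rng.random(100_000) < p_true

def brier(q, y):
    return -(q - y) ** 2            # a proper scoring rule

wins_extreme = np.mean(brier(report_extreme, outcomes)
                       > brier(report_honest, outcomes))
print(wins_extreme)  # ~0.7: the exaggerator wins the prize more often
```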
|
In this paper we consider nonautonomous optimal control problems of infinite
horizon type, whose control actions are given by $L^1$-functions. We verify
that the value function is locally Lipschitz. The equivalence between dynamic
programming inequalities and Hamilton-Jacobi-Bellman (HJB) inequalities for
proximal sub (super) gradients is proven. Using this result we show that the
value function is a Dini solution of the HJB equation. We obtain a verification
result for the class of Dini sub-solutions of the HJB equation and also prove a
minimax property of the value function with respect to the sets of Dini
semi-solutions of the HJB equation. We introduce the concept of viscosity
solutions of the HJB equation in infinite horizon and prove the equivalence
between this and the concept of Dini solutions. In the appendix we provide an
existence theorem.
|
The evolution towards Industry 4.0 is driving the need for innovative
solutions in the area of network management, considering the complex, dynamic
and heterogeneous nature of ICT supply chains. To this end, Intent-Based
Networking (IBN), which has already proven to change how network management is
carried out today, can be implemented as a solution to facilitate the
management of
large ICT supply chains. In this paper, we first present a comparison of the
main architectural components of typical IBN systems and, then, we study the
key engineering requirements when integrating IBN with ICT supply chain network
systems while considering AI methods. We also propose a general architecture
design that enables intent translation of ICT supply chain specifications into
lower level policies, to finally show an example of how the access control is
performed in a modeled ICT supply chain system.
|
The last milestone achievement for the roundoff-error-free solution of
general mixed integer programs over the rational numbers was a hybrid-precision
branch-and-bound algorithm published by Cook, Koch, Steffy, and Wolter in 2013.
We describe a substantial revision and extension of this framework that
integrates symbolic presolving, features an exact repair step for solutions
from primal heuristics, employs a faster rational LP solver based on LP
iterative refinement, and is able to produce independently verifiable
certificates of optimality.
We study the significantly improved performance and give insights into the
computational behavior of the new algorithmic components.
On the MIPLIB 2017 benchmark set, we observe an average speedup of 6.6x over
the original framework and 2.8 times as many instances solved within a time
limit of two hours.
|
HIP 41378 f is a temperate $9.2\pm0.1 R_{\oplus}$ planet with a period of
542.08 days and an extremely low density of $0.09\pm0.02$ g cm$^{-3}$. It
transits the bright star HIP 41378 (V=8.93), making it an exciting target for
atmospheric characterization including transmission spectroscopy. HIP 41378 was
monitored photometrically between 2019 November 19 and November 28. We
detected a transit of HIP 41378 f with NGTS, just the third transit ever
detected for this planet, which confirms the orbital period. This is also the
first ground-based detection of a transit of HIP 41378 f. Additional
ground-based photometry was obtained and used to constrain the time of the
transit. The transit was measured to occur 1.50 hours earlier than predicted.
We use an analytic transit timing variation (TTV) model to show the observed
TTV can be explained by interactions between HIP 41378 e and HIP 41378 f. Using
our TTV model, we predict the epochs of future transits of HIP 41378 f, with
derived transit centres of T$_{C,4} = 2459355.087^{+0.031}_{-0.022}$ (May 2021)
and T$_{C,5} = 2459897.078^{+0.114}_{-0.060}$ (Nov 2022).
|
The canonical approach to video-and-language learning (e.g., video question
answering) dictates a neural model to learn from offline-extracted dense video
features from vision models and text features from language models. These
feature extractors are trained independently and usually on tasks different
from the target domains, rendering these fixed features sub-optimal for
downstream tasks. Moreover, due to the high computational overhead of dense
video features, it is often difficult (or infeasible) to plug feature
extractors directly into existing approaches for easy finetuning. To provide a
remedy to this dilemma, we propose a generic framework ClipBERT that enables
affordable end-to-end learning for video-and-language tasks, by employing
sparse sampling, where only a single or a few sparsely sampled short clips from
a video are used at each training step. Experiments on text-to-video retrieval
and video question answering on six datasets demonstrate that ClipBERT
outperforms (or is on par with) existing methods that exploit full-length
videos, suggesting that end-to-end learning with just a few sparsely sampled
clips is often more accurate than using densely extracted offline features from
full-length videos, proving the proverbial less-is-more principle. Videos in
the datasets are from considerably different domains and lengths, ranging from
3-second generic domain GIF videos to 180-second YouTube human activity videos,
showing the generalization ability of our approach. Comprehensive ablation
studies and thorough analyses are provided to dissect what factors lead to this
success. Our code is publicly available at https://github.com/jayleicn/ClipBERT
|
Space-time visualizations of macroscopic or microscopic traffic variables are
a qualitative tool used by traffic engineers to understand and analyze
different aspects of road traffic dynamics. We present a deep learning method
to learn the macroscopic traffic speed dynamics from these space-time
visualizations, and demonstrate its application in the framework of traffic
state estimation. Compared to existing estimation approaches, our approach
allows a finer estimation resolution, eliminates the dependence on the initial
conditions, and is agnostic to external factors such as traffic demand, road
inhomogeneities and driving behaviors. Our model respects causality in traffic
dynamics, which improves the robustness of estimation. We present the
high-resolution traffic speed fields estimated for several freeway sections
using the data obtained from the Next Generation Simulation Program (NGSIM) and
German Highway (HighD) datasets. We further demonstrate the quality and utility
of the estimation by inferring vehicle trajectories from the estimated speed
fields, and discuss the benefits of deep neural network models in approximating
the traffic dynamics.
|
We consider the problem of collectively detecting multiple events,
particularly in cross-sentence settings. The key to dealing with the problem is
to encode semantic information and model event inter-dependency at a
document-level. In this paper, we reformulate it as a Seq2Seq task and propose
a Multi-Layer Bidirectional Network (MLBiNet) to capture the document-level
association of events and semantic information simultaneously. Specifically, a
bidirectional decoder is first devised to model event inter-dependency within
a sentence when decoding the event tag vector sequence. Secondly, an
information aggregation module is employed to aggregate sentence-level semantic
and event tag information. Finally, we stack multiple bidirectional decoders
and feed cross-sentence information, forming a multi-layer bidirectional
tagging architecture to iteratively propagate information across sentences. We
show that our approach provides significant improvement in performance compared
to the current state-of-the-art results.
|
Detecting transparent objects in natural scenes is challenging due to the low
contrast in texture, brightness and colors. Recent deep-learning-based works
reveal that it is effective to leverage boundaries for transparent object
detection (TOD). However, these methods usually encounter a boundary-related
imbalance problem, leading to limited generalization capability. Specifically,
a kind of boundary in the background, which shares the same characteristics
with boundaries of transparent objects but is far less frequent, usually hurts
performance. To overcome the boundary-related imbalance problem, we propose
a novel content-dependent data augmentation method termed FakeMix. Considering
collecting these trouble-maker boundaries in the background is hard without
corresponding annotations, we elaborately generate them by appending the
boundaries of transparent objects from other samples into the current image
during training, which adjusts the data space and improves the generalization
of the models. Further, we present AdaptiveASPP, an enhanced version of ASPP,
that can capture multi-scale and cross-modality features dynamically. Extensive
experiments demonstrate that our methods clearly outperform the
state-of-the-art methods. We also show that our approach transfers well
to related tasks in which the model meets similar difficulties, such as mirror
detection, glass detection, and camouflaged object detection. Code will be made
publicly available.
|
I review the meaning of General Relativity (GR), viewed as a dynamical field,
rather than as geometry, as effected by the 1958-61 anti-geometrical work of
ADM. This very brief, non-technical summary is intended for historians.
|
We establish an asymptotic formula for the number of lattice points in the
sets \[ \mathbf S_{h_1, h_2, h_3}(\lambda): =\{x\in\mathbb Z_+^3:\lfloor
h_1(x_1)\rfloor+\lfloor h_2(x_2)\rfloor+\lfloor h_3(x_3)\rfloor=\lambda\} \quad
\text{with}\quad \lambda\in\mathbb Z_+; \] where functions $h_1, h_2, h_3$ are
constant multiples of regularly varying functions of the form
$h(x):=x^c\ell_h(x)$, where the exponent $c>1$ (but close to $1$) and a
function $\ell_h(x)$ is taken from a certain wide class of slowly varying
functions. Taking $h_1(x)=h_2(x)=h_3(x)=x^c$ we will also derive an asymptotic
formula for the number of lattice points in the sets \[ \mathbf
S_{c}^3(\lambda) := \{x \in \mathbb Z^3 : \lfloor |x_1|^c \rfloor + \lfloor
|x_2|^c \rfloor + \lfloor |x_3|^c \rfloor= \lambda \} \quad \text{with}\quad
\lambda\in\mathbb Z_+; \] which can be thought of as a perturbation of the
classical Waring problem in three variables.
We will use the latter asymptotic formula to study, as the main results of this
paper, norm and pointwise convergence of the ergodic averages \[
\frac{1}{\#\mathbf S_{c}^3(\lambda)}\sum_{n\in \mathbf
S_{c}^3(\lambda)}f(T_1^{n_1}T_2^{n_2}T_3^{n_3}x) \quad \text{as}\quad
\lambda\to\infty; \] where $T_1, T_2, T_3:X\to X$ are commuting invertible and
measure-preserving transformations of a $\sigma$-finite measure space $(X,
\nu)$ for any function $f\in L^p(X)$ with $p>\frac{11-4c}{11-7c}$. Finally, we
will study the equidistribution problem corresponding to the spheres $\mathbf
S_{c}^3(\lambda)$.
|
We present a general series representation formula for the local solution of
Bernoulli equation with Caputo fractional derivatives. We then focus on a
generalization of the fractional logistic equation and we present some related
numerical simulations.
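
For illustration of the model only, the fractional logistic equation
$D^\alpha u = u(1-u)$ (Caputo) can also be integrated with the standard
explicit L1 finite-difference scheme, an alternative to the paper's series
representation; the order, step size, and initial condition below are
illustrative.

```python
import numpy as np
from math import gamma

alpha, h, n_steps = 0.8, 0.01, 1000
u = np.empty(n_steps + 1)
u[0] = 0.1                                    # initial condition
# L1 weights c_j = (j+1)^(1-alpha) - j^(1-alpha), j = 0, 1, ...
c = (np.arange(1, n_steps + 1) ** (1 - alpha)
     - np.arange(0, n_steps) ** (1 - alpha))
for n in range(1, n_steps + 1):
    # History term sum_{j=1}^{n-1} c_j * (u_{n-j} - u_{n-j-1}).
    memory = np.dot(c[1:n], u[n-1:0:-1] - u[n-2::-1]) if n > 1 else 0.0
    u[n] = (u[n-1] - memory
            + gamma(2 - alpha) * h**alpha * u[n-1] * (1 - u[n-1]))
print(u[-1])  # approaches the carrying capacity 1
```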
|
In high energy physics (HEP), jets are collections of correlated particles
produced ubiquitously in particle collisions such as those at the CERN Large
Hadron Collider (LHC). Machine learning (ML)-based generative models, such as
generative adversarial networks (GANs), have the potential to significantly
accelerate LHC jet simulations. However, despite jets having a natural
representation as a set of particles in momentum-space, a.k.a. a particle
cloud, there exist no generative models applied to such a dataset. In this
work, we introduce a new particle cloud dataset (JetNet), and apply to it
existing point cloud GANs. Results are evaluated using (1) 1-Wasserstein
distances between high- and low-level feature distributions, (2) a newly
developed Fr\'{e}chet ParticleNet Distance, and (3) the coverage and (4)
minimum matching distance metrics. Existing GANs are found to be inadequate for
physics applications, hence we develop a new message passing GAN (MPGAN), which
outperforms existing point cloud GANs on virtually every metric and shows
promise for use in HEP. We propose JetNet as a novel point-cloud-style dataset
for the ML community to experiment with, and set MPGAN as a benchmark to
improve upon for future generative models. Additionally, to facilitate research
and improve accessibility and reproducibility in this area, we release the
open-source JetNet Python package with interfaces for particle cloud datasets,
implementations for evaluation and loss metrics, and more tools for ML in HEP
development.
|
One of the big challenges of current electronics is the design and
implementation of hardware neural networks that perform fast and
energy-efficient machine learning. Spintronics is a promising catalyst for this
field with the capabilities of nanosecond operation and compatibility with
existing microelectronics. For large-scale, viable neuromorphic systems,
however, variability of device properties is a serious concern. In this paper,
we show an autonomously operating circuit that performs hardware-aware machine
learning utilizing probabilistic neurons built with stochastic magnetic tunnel
junctions. We show that in-situ learning of weights and biases in a Boltzmann
machine can counter device-to-device variations and learn the probability
distribution of meaningful operations such as a full adder. This scalable
autonomously operating learning circuit using spintronics-based neurons could
be especially of interest for standalone artificial-intelligence devices
capable of fast and efficient learning at the edge.
|
We revisit the problem of the estimation of the differential entropy $H(f)$
of a random vector $X$ in $R^d$ with density $f$, assuming that $H(f)$ exists
and is finite. In this note, we study the consistency of the popular nearest
neighbor estimate $H_n$ of Kozachenko and Leonenko. Without any smoothness
condition we show that the estimate is consistent ($E\{|H_n - H(f)|\} \to 0$ as
$n \to \infty$) if and only if $\mathbb{E} \{ \log ( \| X \| + 1 )\} < \infty$.
Furthermore, if $X$ has compact support, then $H_n \to H(f)$ almost surely.
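
For reference, one common form of the Kozachenko-Leonenko estimate is
$H_n = \frac{d}{n}\sum_i \log\rho_i + \log V_d + \log(n-1) + \gamma$, where
$\rho_i$ is the distance from $X_i$ to its nearest neighbor, $V_d$ is the
volume of the $d$-dimensional unit ball, and $\gamma$ is the Euler-Mascheroni
constant. A sketch:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln

def kl_entropy(x):
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    # k=2 because the closest point to each sample is the sample itself.
    rho = cKDTree(x).query(x, k=2)[0][:, 1]
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # unit-ball volume
    return d * np.mean(np.log(rho)) + log_vd + np.log(n - 1) + np.euler_gamma

rng = np.random.default_rng(0)
x = rng.standard_normal((10_000, 2))   # true H = log(2*pi*e) ~ 2.8379
print(kl_entropy(x))
```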
|
Identifying and understanding quality phrases from context is a fundamental
task in text mining. The most challenging part of this task arguably lies in
uncommon, emerging, and domain-specific phrases. The infrequent nature of these
phrases significantly hurts the performance of phrase mining methods that rely
on sufficient phrase occurrences in the input corpus. Context-aware tagging
models, though not restricted by frequency, heavily rely on domain experts for
either massive sentence-level gold labels or handcrafted gazetteers. In this
work, we propose UCPhrase, a novel unsupervised context-aware quality phrase
tagger. Specifically, we induce high-quality phrase spans as silver labels from
consistently co-occurring word sequences within each document. Compared with
typical context-agnostic distant supervision based on existing knowledge bases
(KBs), our silver labels root deeply in the input domain and context, thus
having unique advantages in preserving contextual completeness and capturing
emerging, out-of-KB phrases. Training a conventional neural tagger based on
silver labels usually faces the risk of overfitting phrase surface names.
Alternatively, we observe that the contextualized attention maps generated from
a transformer-based neural language model effectively reveal the connections
between words in a surface-agnostic way. Therefore, we pair such attention maps
with the silver labels to train a lightweight span prediction model, which can
be applied to new input to recognize (unseen) quality phrases regardless of
their surface names or frequency. Thorough experiments on various tasks and
datasets, including corpus-level phrase ranking, document-level keyphrase
extraction, and sentence-level phrase tagging, demonstrate the superiority of
our design over state-of-the-art pre-trained, unsupervised, and distantly
supervised methods.
|
We show that the nature of the topological fluctuations in $SU(3)$ gauge
theory changes drastically at the finite-temperature phase transition. Starting
from temperatures right above the phase transition, topological fluctuations
come in well-separated lumps of unit charge that form a non-interacting ideal
gas. Our analysis is based on a novel method to count not only the net
topological charge, but also separately the number of positively and negatively
charged lumps in lattice configurations using the spectrum of the overlap Dirac
operator. This enables us to determine the joint distribution of the number of
positively and negatively charged topological objects, and we find this
distribution to be consistent with that of an ideal gas of unit charged
topological objects.
|
Measuring the acoustic characteristics of a space is often done by capturing
its impulse response (IR), a representation of how a full-range stimulus sound
excites it. This work generates an IR from a single image, which can then be
applied to other signals using convolution, simulating the reverberant
characteristics of the space shown in the image. Recording these IRs is both
time-intensive and expensive, and often infeasible for inaccessible locations.
We use an end-to-end neural network architecture to generate plausible audio
impulse responses from single images of acoustic environments. We evaluate our
method both by comparisons to ground truth data and by human expert evaluation.
We demonstrate our approach by generating plausible impulse responses from
diverse settings and formats including well known places, musical halls, rooms
in paintings, images from animations and computer games, synthetic environments
generated from text, panoramic images, and video conference backgrounds.
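
A sketch of the "apply the IR to other signals using convolution" step, with a
synthetic exponentially decaying noise burst standing in for a generated
impulse response:

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 16_000                                       # sample rate in Hz
rng = np.random.default_rng(0)
t = np.arange(sr) / sr
ir = rng.standard_normal(sr) * np.exp(-3.0 * t)   # toy 1-second IR
dry = np.sin(2 * np.pi * 440 * t[: sr // 2])      # 0.5 s dry tone
wet = fftconvolve(dry, ir)                        # reverberant signal
wet /= np.max(np.abs(wet))                        # normalize to [-1, 1]
print(wet.shape)  # length = len(dry) + len(ir) - 1
```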
|
Lithium-sulfur (Li-S) batteries have become one of the most attractive
alternatives over conventional Li-ion batteries due to their high theoretical
specific energy density (2500 Wh/kg for Li-S vs. $\sim$250 Wh/kg for Li-ion).
Accurate state estimation in Li-S batteries is urgently needed for safe and
efficient operation. To the best of the authors' knowledge, electrochemical
model-based observers have not been reported for Li-S batteries, primarily due
to the complex dynamics that make state observer design a challenging problem.
In this work, we demonstrate a state estimation scheme based on a
zero-dimensional electrochemical model for Li-S batteries. The nonlinear
differential-algebraic equation (DAE) model is incorporated into an extended
Kalman filter. This observer design estimates both differential and algebraic
states that represent the dynamic behavior inside the cell, from voltage and
current measurements only. The effectiveness of the proposed estimation
algorithm is illustrated by numerical simulation results. Our study unlocks how
an electrochemical model can be utilized for practical state estimation of Li-S
batteries.
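
For orientation, a generic discrete-time extended Kalman filter step of the
kind such an observer builds on; the paper's design additionally handles the
algebraic states of the DAE model, and `f`, `h`, the Jacobians, and the noise
covariances below are placeholders to be supplied by the battery model.

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a standard extended Kalman filter."""
    # Predict through the nonlinear state map.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with the measurement (e.g. terminal voltage).
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```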
|
We investigate the influence of general forms of disorder on the robustness
of superconductivity in multiband materials. Specifically, we consider a
general two-band system where the bands arise from an orbital degree of freedom
of the electrons. Within the Born approximation, we show that the interplay of
the spin-orbital structure of the normal-state Hamiltonian, disorder
scattering, and superconducting pairing potentials can lead to significant
deviations from the expected robustness of the superconductivity. This can be
conveniently formulated in terms of the so-called "superconducting fitness". In
particular, we verify a key role for unconventional $s$-wave states, permitted
by the spin-orbital structure and which may pair electrons that are not
time-reversed partners. To exemplify the role of Fermi surface topology and
spin-orbital texture, we apply our formalism to the candidate topological
superconductor Cu$_x$Bi$_2$Se$_3$, for which only a single band crosses the
Fermi energy, as well as models of the iron pnictides, which possess multiple
Fermi pockets.
|
Classical models for multivariate or spatial extremes are mainly based upon
the asymptotically justified max-stable or generalized Pareto processes. These
models are suitable when asymptotic dependence is present, i.e., the joint tail
decays at the same rate as the marginal tail. However, recent environmental
data applications suggest that asymptotic independence is equally important
and, unfortunately, existing spatial models in this setting that are both
flexible and can be fitted efficiently are scarce. Here, we propose a new
spatial copula model based on the generalized hyperbolic distribution, which is
a specific normal mean-variance mixture and is very popular in financial
modeling. The tail properties of this distribution have been studied in the
literature, but with contradictory results. It turns out that the proofs from
the literature contain mistakes. We here give a corrected theoretical
description of its tail dependence structure and then exploit the model to
analyze a simulated dataset from the inverted Brown-Resnick process, hindcast
significant wave height data in the North Sea, and wind gust data in the state
of Oklahoma, USA. We demonstrate that our proposed model is flexible enough to
capture the dependence structure not only in the tail but also in the bulk.
|
Speech enhancement is an essential task of improving speech quality in noisy
scenarios. Several state-of-the-art approaches have introduced visual
information for speech enhancement, since the visual aspect of speech is
essentially unaffected by the acoustic environment. This paper proposes a novel
framework that involves visual information for speech enhancement by
incorporating a Generative Adversarial Network (GAN). In particular, the
proposed visual speech enhancement GAN consists of two networks trained in an
adversarial manner: i) a generator that adopts a multi-layer feature fusion
convolution network to enhance input noisy speech, and ii) a discriminator that
attempts to minimize the discrepancy between the distributions of the clean
speech signal and the enhanced speech signal. Experimental results demonstrate
superior performance of the proposed model against several state-of-the-art
methods.
|
Recent evidence based on APOGEE data for stars within a few kpc of the
Galactic centre suggests that dissolved globular clusters (GCs) contribute
significantly to the stellar mass budget of the inner halo. In this paper we
enquire into the origins of tracers of GC dissolution, N-rich stars, that are
located in the inner 4 kpc of the Milky Way. From an analysis of the chemical
compositions of these stars we establish that about 30% of the N-rich stars
previously identified in the inner Galaxy may have an accreted origin. This
result is confirmed by an analysis of the kinematic properties of our sample.
The specific frequency of N-rich stars is quite large in the accreted
population, exceeding that of its in situ counterparts by nearly an order of
magnitude, in disagreement with predictions from numerical simulations. We hope
that our numbers provide a useful test of models of GC formation and
destruction.
|
Hypergraphs offer an explicit formalism to describe multibody interactions in
complex systems. To connect dynamics and function in systems with these
higher-order interactions, network scientists have generalised random-walk
models to hypergraphs and studied the multibody effects on flow-based
centrality measures. But mapping the large-scale structure of those flows
requires effective community detection methods. We derive unipartite,
bipartite, and multilayer network representations of hypergraph flows and
explore how they and the underlying random-walk model change the number, size,
depth, and overlap of identified multilevel communities. These results help
researchers choose the appropriate modelling approach when mapping flows on
hypergraphs.
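
For concreteness, the standard hypergraph random walk underlying such flow
models steps from a node to one of its incident hyperedges and then to a
member node of that hyperedge. A sketch on a toy incidence matrix:

```python
import numpy as np

# Rows are nodes, columns are hyperedges (toy example).
H = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],
              [0, 0, 1],
              [0, 1, 1]])
Dv = H.sum(axis=1)                            # node degrees
De = H.sum(axis=0)                            # hyperedge sizes
P = (H / Dv[:, None]) @ (H.T / De[:, None])   # node-to-node transitions
assert np.allclose(P.sum(axis=1), 1.0)

# Visit rates: the leading left eigenvector of P.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
print(pi / pi.sum())
```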
|
A number of solar filaments/prominences demonstrate failed eruptions, when a
filament at first suddenly starts to ascend and then decelerates and stops at
some greater height in the corona. The mechanism of the termination of
eruptions is not clear yet. One of the confining forces able to stop the
eruption is the gravity force. Using a simple model of a partial
current-carrying torus loop anchored to the photosphere and photospheric
magnetic field measurements as the boundary condition for the potential
magnetic field extrapolation into the corona, we estimated masses of 15
eruptive filaments. The values of the filament mass show a rather wide
distribution in the range of $4\times10^{15}$ -- $270\times10^{16}$ g. The
masses of most filaments, lying in the middle of the range, are in accordance
with estimates made earlier on the basis of spectroscopic and white-light
observations.
|
Heavy-ion therapy, particularly using scanned (active) beam delivery,
provides a precise and highly conformal dose distribution, with maximum dose
deposition for each pencil beam at its endpoint (Bragg peak), and low entrance
and exit dose. To take full advantage of this precision, robust range
verification methods are required; these methods ensure that the Bragg peak is
positioned correctly in the patient and the dose is delivered as prescribed.
Relative range verification allows intra-fraction monitoring of Bragg peak
spacing to ensure full coverage with each fraction, as well as inter-fraction
monitoring to ensure all fractions are delivered consistently. To validate the
proposed filtered Interaction Vertex Imaging method for relative range
verification, a ${}^{16}$O beam was used to deliver 12 Bragg peak positions in
a 40 mm poly-(methyl methacrylate) phantom. Secondary particles produced in the
phantom were monitored using position-sensitive silicon detectors. Events
recorded on these detectors, along with a measurement of the treatment beam
axis, were used to reconstruct the sites of origin of these secondary particles
in the phantom. The distal edge of the depth distribution of these
reconstructed points was determined with logistic fits, and the translation in
depth required to minimize the $\chi^2$ statistic between these fits was used
to compute the range shift between any two Bragg peak positions. In all cases,
the range shift was determined with sub-millimeter precision, to a standard
deviation of the mean of 220(10) $\mu$m. This result validates filtered
Interaction Vertex Imaging as a reliable relative range verification method,
which should be capable of monitoring each energy step in each fraction of a
scanned heavy-ion treatment plan.
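
A sketch of the distal-edge extraction step: fit a logistic falloff to the
depth distribution of reconstructed vertices and read the range shift off the
fitted midpoints. The synthetic profiles below stand in for real
reconstructed-vertex histograms.

```python
import numpy as np
from scipy.optimize import curve_fit

def distal_edge(z, amplitude, z50, width):
    """Logistic falloff; z50 marks the distal edge position."""
    return amplitude / (1.0 + np.exp((z - z50) / width))

z = np.linspace(0, 40, 200)                   # depth in mm
rng = np.random.default_rng(0)
for true_z50 in (25.0, 27.5):
    counts = distal_edge(z, 1000.0, true_z50, 0.8)
    counts += rng.normal(0, 10, z.size)       # measurement noise
    popt, _ = curve_fit(distal_edge, z, counts, p0=(900.0, 20.0, 1.0))
    print(f"fitted edge: {popt[1]:.2f} mm (true {true_z50} mm)")
```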
|
Type IIn supernovae (SNe IIn) are a relatively infrequently observed subclass
of SNe whose photometric and spectroscopic properties are varied. A common
thread among SNe IIn are the complex multiple-component hydrogen Balmer lines.
Owing to the heterogeneity of SNe IIn, online databases contain some outdated,
erroneous, or even contradictory classifications. SN IIn classification is
further complicated by SN impostors and contamination from underlying HII
regions. We have compiled a catalogue of systematically classified nearby
(redshift z < 0.02) SNe IIn using the Open Supernova Catalogue (OSC). We
present spectral classifications for 115 objects previously classified as SNe
IIn. Our classification is based upon results obtained by fitting multiple
Gaussians to the H-alpha profiles. We compare classifications reported by the
OSC and Transient Name Server (TNS) along with the best matched templates from
SNID. We find that 28 objects have been misclassified as SNe IIn. TNS and OSC
can be unreliable; they disagree on the classifications of 51 of the objects
and contain a number of erroneous classifications. Furthermore, OSC and TNS
hold misclassifications for 34 and 12, respectively, of the transients we
classify as SNe IIn. In total, we classify 87 SNe IIn. We highlight the
importance of ensuring that online databases remain up to date when new or even
contemporaneous data become available. Our work shows the great range of
spectral properties and features that SNe IIn exhibit, which may be linked to
multiple progenitor channels and environmental diversity. We set out a
classification scheme for SNe IIn based on the H-alpha profile which is not
greatly affected by the inhomogeneity of SNe IIn.
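
A sketch of the profile-fitting machinery described above: a two-component
(narrow plus broad) Gaussian fit to a synthetic H-alpha line; real SN IIn
spectra and the full multi-component models used for the catalogue are more
involved.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(wl, a1, s1, a2, s2, mu=6563.0):
    """Narrow plus broad Gaussian components centred on H-alpha."""
    g = lambda a, s: a * np.exp(-0.5 * ((wl - mu) / s) ** 2)
    return g(a1, s1) + g(a2, s2)

wl = np.linspace(6400, 6700, 600)             # wavelength in Angstrom
rng = np.random.default_rng(1)
flux = two_gauss(wl, 5.0, 5.0, 1.5, 40.0) + rng.normal(0, 0.1, wl.size)
popt, _ = curve_fit(two_gauss, wl, flux, p0=(4.0, 3.0, 1.0, 30.0))
print("narrow FWHM %.1f A, broad FWHM %.1f A"
      % (2.3548 * popt[1], 2.3548 * popt[3]))  # FWHM = 2*sqrt(2 ln 2)*sigma
```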
|
Isobaric $^{96}_{44}$Ru+$^{96}_{44}$Ru and $^{96}_{40}$Zr+$^{96}_{40}$Zr
collisions at $\sqrt{s_{_{NN}}}=200$ GeV have been conducted at the
Relativistic Heavy Ion Collider to circumvent the large flow-induced background
in searching for the chiral magnetic effect (CME), predicted by the topological
feature of quantum chromodynamics (QCD). Considering that the background in
isobar collisions is approximately twice that in Au+Au collisions (due to the
smaller multiplicity) and the CME signal is approximately half (due to the
weaker magnetic field), we caution that the CME may not be detectable with the
collected isobar data statistics, within $\sim$2$\sigma$ significance, if the
axial charge per entropy density ($n_5/s$) and the QCD vacuum transition
probability are system independent. This expectation is generally verified by
the Anomalous-Viscous Fluid Dynamics (AVFD) model. While our estimate provides
an approximate "experimental" baseline, theoretical uncertainties on the CME
remain large.
|
Complex objects usually have multiple labels and can be represented by
multiple modal representations; e.g., complex articles contain text and
image information as well as multiple annotations. Previous methods assume that
the homogeneous multi-modal data are consistent, while in real applications,
the raw data are disordered; e.g., an article comprises a variable number
of inconsistent text and image instances. Therefore, Multi-modal Multi-instance
Multi-label (M3) learning provides a framework for handling such tasks and has
exhibited excellent performance. However, M3 learning faces two main
challenges: 1) how to effectively utilize label correlation; 2) how to take
advantage of multi-modal learning to process unlabeled instances. To solve
these problems, we first propose a novel Multi-modal Multi-instance Multi-label
Deep Network (M3DN), which considers M3 learning in an end-to-end multi-modal
deep network and utilizes consistency principle among different modal bag-level
predictions. Based on the M3DN, we learn the latent ground label metric with
the optimal transport. Moreover, we introduce the extrinsic unlabeled
multi-modal multi-instance data, and propose the M3DNS, which considers the
instance-level auto-encoder for single modality and modified bag-level optimal
transport to strengthen the consistency among modalities. Thereby M3DNS can
better predict label and exploit label correlation simultaneously. Experiments
on benchmark datasets and real world WKG Game-Hub dataset validate the
effectiveness of the proposed methods.
|
Bunch splitting is an RF manipulation method for changing the bunch structure,
bunch number and bunch intensity in high-intensity synchrotrons that serve
as injectors for particle colliders. An efficient way to realize bunch
splitting is to use a combination of different harmonic RF systems, such as
the two-fold bunch splitting of a bunch with a combination of fundamental
harmonic and doubled harmonic RF systems. The two-fold bunch splitting and
three-fold bunch splitting methods have been experimentally verified and
successfully applied to the LHC/PS. In this paper, a generalized multi-fold
bunch splitting method is given. The five-fold bunch splitting method using
specially designed multi-harmonic RF systems was studied and tentatively
applied to the medium-stage synchrotron (MSS), the third accelerator of the
injector chain of the Super Proton-Proton Collider (SPPC), to mitigate the
pileup effects and collective instabilities of a single bunch in the SPPC. The
results show that the five-fold bunch splitting is feasible and both the bunch
population distribution and longitudinal emittance growth after the splitting
are acceptable, e.g., a few percent in the population deviation and less than
10% in the total emittance growth.
|
We construct a random unitary Gaussian circuit for continuous-variable (CV)
systems subject to Gaussian measurements. We show that when the measurement
rate is nonzero, the steady state entanglement entropy saturates to an area-law
scaling. This is different from a many-body qubit system, where a generic
entanglement transition is widely expected. Due to the unbounded local Hilbert
space, the time scale to destroy entanglement is always much shorter than the
one to build it, while a balance could be achieved for a finite local Hilbert
space. By the same reasoning, the absence of transition should also hold for
other non-unitary Gaussian CV dynamics.
|
Here we report a record thermoelectric power factor of up to 160 $\mu$W
m$^{-1}$ K$^{-2}$ for the conjugated polymer poly(3-hexylthiophene) (P3HT).
This result is
achieved through the combination of high-temperature rubbing of thin films
together with the use of a large molybdenum dithiolene p-dopant with a high
electron affinity. Comparison of the UV-vis-NIR spectra of the chemically doped
samples to electrochemically oxidized material reveals an oxidation level of
10%, i.e. one polaron for every 10 repeat units. The high power factor arises
due to an increase in the charge-carrier mobility and hence electrical
conductivity along the rubbing direction. We conclude that P3HT, with its
facile synthesis and outstanding processability, should not be ruled out as a
potential thermoelectric material.
|
The Python package ComCH is a lightweight specialized computer algebra system
that provides models for well-known objects, the surjection and Barratt-Eccles
operads, parameterizing the product structure of algebras that are commutative
in a derived sense. The primary examples of such algebras treated by ComCH are
the cochain complexes of spaces, for which it provides effective constructions
of Steenrod cohomology operations at all primes.
|
By probing the population of binary black hole (BBH) mergers detected by
LIGO-Virgo, we can infer properties about the underlying black hole formation
channels. A mechanism known as pair-instability (PI) supernova is expected to
prevent the formation of black holes from stellar collapse with mass greater
than $\sim 40-65\,M_\odot$ and less than $\sim 120\,M_\odot$. Any BBH merger
detected by LIGO-Virgo with a component black hole in this gap, known as the PI
mass gap, likely originated from an alternative formation channel. Here, we
firmly establish GW190521 as an outlier to the stellar-mass BBH population if
the PI mass gap begins at or below $65\, M_{\odot}$. In addition, for a PI
lower boundary of $40-50\, M_{\odot}$, we find it unlikely that the remaining
distribution of detected BBH events, excluding GW190521, is consistent with the
stellar-mass population.
|
In this paper, we propose Zero Aware Configurable Data Encoding by Skipping
Transfer (ZAC-DEST), a data encoding scheme to reduce the energy consumption of
DRAM channels, specifically targeted towards approximate computing and error
resilient applications. ZAC-DEST exploits the similarity between recent data
transfers across channels and information about the error resilience behavior
of applications to reduce on-die termination and switching energy by reducing
the number of 1's transmitted over the channels. ZAC-DEST also provides a
number of knobs for trading off the application's accuracy for energy savings,
and vice versa, and can be applied to both training and inference.
We apply ZAC-DEST to five machine learning applications. On average, across
all applications and configurations, we observed a reduction of $40$% in
termination energy and $37$% in switching energy as compared to the
state-of-the-art data encoding technique BD-Coder, with an average output
quality loss of $10$%. We show that if both training and testing are done
assuming the presence of ZAC-DEST, the output quality of the applications can
be improved up to 9 times as compared to when ZAC-DEST is only applied during
testing, leading to
energy savings during training and inference with increased output quality.
|
We study two well-known $SU(N)$ chiral gauge theories with fermions in the
symmetric, anti-symmetric and fundamental representations. We give a detailed
description of the global symmetry, including various discrete quotients.
Recent work argues that these theories exhibit a subtle mod 2 anomaly, ruling
out certain phases in which the theories confine without breaking their global
symmetry, leaving a gapless composite fermion in the infra-red. We point out
that no such anomaly exists. We further exhibit an explicit path to the gapless
fermion phase, showing that there is no kinematic obstruction to realising
these phases.
|
The nontrivial topology of spin systems such as skyrmions in real space can
promote complex electronic states. Here, we provide a general viewpoint at the
emergence of topological electronic states in spin systems based on the methods
of noncommutative K-theory. By realizing that the structure of the observable
algebra of spin textures is determined by the algebraic properties of the
noncommutative hypertorus, we arrive at a unified understanding of topological
electronic states which we predict to arise in various noncollinear setups. The
power of our approach lies in an ability to categorize emergent topological
states algebraically without referring to smooth real- or reciprocal-space
quantities. This opens a way towards an educated design of topological phases
in aperiodic, disordered, or non-smooth textures of spins and charges
containing topological defects.
|
Pairwise alignment of DNA sequencing data is a ubiquitous task in
bioinformatics and typically represents a heavy computational burden. A
standard approach to speed up this task is to compute "sketches" of the DNA
reads (typically via hashing-based techniques) that allow the efficient
computation of pairwise alignment scores. We propose a rate-distortion
framework to study the problem of computing sketches that achieve the optimal
tradeoff between sketch size and alignment estimation distortion. We consider
the simple setting of i.i.d. error-free sources of length $n$ and introduce a
new sketching algorithm called "locational hashing." While standard approaches
in the literature based on min-hashes require $B = (1/D) \cdot O\left( \log n
\right)$ bits to achieve a distortion $D$, our proposed approach only requires
$B = \log^2(1/D) \cdot O(1)$ bits. This can lead to significant computational
savings in pairwise alignment estimation.
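
For comparison, a sketch of the min-hash baseline mentioned above: each
sequence is summarized by the smallest hash value of its k-mers under several
salted hash functions, and the fraction of matching minima estimates k-mer
(Jaccard) similarity. The k-mer length and sketch size are illustrative.

```python
import hashlib

def minhash_sketch(seq, k=8, num_hashes=64):
    kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    sketch = []
    for salt in range(num_hashes):
        sketch.append(min(
            int.from_bytes(hashlib.blake2b(f"{salt}:{km}".encode(),
                                           digest_size=8).digest(), "big")
            for km in kmers))
    return sketch

def estimate_jaccard(s1, s2):
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

a = "ACGTACGTTGCAACGTACGGACTT" * 4
b = a[:60] + "GGGG" + a[64:]          # a locally perturbed copy
print(estimate_jaccard(minhash_sketch(a), minhash_sketch(b)))
```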
|
In this article we propose a novel method to estimate the frequency
distribution of linguistic variables while controlling for statistical
non-independence due to shared ancestry. Unlike previous approaches, our
technique uses all available data, from language families large and small as
well as from isolates, while controlling for different degrees of relatedness
on a continuous scale estimated from the data. Our approach involves three
steps: First, distributions of phylogenies are inferred from lexical data.
Second, these phylogenies are used as part of a statistical model to
statistically estimate transition rates between parameter states. Finally, the
long-term equilibrium of the resulting Markov process is computed. As a case
study, we investigate a series of potential word-order correlations across the
languages of the world.
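
A sketch of the final step, assuming the transition rates have already been
estimated: the long-term equilibrium is the stationary distribution $\pi$ of
the rate matrix $Q$, i.e. the solution of $\pi Q = 0$ with $\sum_i \pi_i = 1$.
The two-state rates and labels below are illustrative.

```python
import numpy as np
from scipy.linalg import null_space

Q = np.array([[-0.3,  0.3],    # e.g. state SOV -> SVO at rate 0.3
              [ 0.1, -0.1]])   # state SVO -> SOV at rate 0.1
pi = null_space(Q.T)[:, 0]     # left null vector: pi @ Q = 0
pi = pi / pi.sum()             # normalize (also fixes the sign)
print(pi)                      # [0.25, 0.75]: equilibrium state frequencies
```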
|
In a previous work by the author it was shown that every finite dimensional
algebraic structure over an algebraically closed field of characteristic zero K
gives rise to a character $K[X]_{aug}\to K$, where $K[X]_{aug}$ is a commutative
Hopf algebra that encodes scalar invariants of structures. This enabled us to
think of some characters $K[X]_{aug}\to K$ as algebraic structures with closed
orbit. In this paper we study structures in general symmetric monoidal
categories, and not only in $Vec_K$. We show that every character $\chi :
K[X]_{aug}\to K$ arises from such a structure, by constructing a category
$C_{\chi}$ that is analogous to the universal construction from TQFT. We then
give necessary and sufficient conditions for a given character to arise from a
structure in an abelian category with finite dimensional hom-spaces. We call
such characters good characters. We show that if $\chi$ is good then $C_{\chi}$
is abelian and semisimple, and that the set of good characters forms a
K-algebra. This gives us a way to interpolate algebraic structures, and also
symmetric monoidal categories, in a way that generalizes Deligne's categories
$Rep(S_t)$, $Rep(GL_t(K))$, $Rep(O_t)$, and also some of the symmetric monoidal
categories introduced by Knop. We also explain how one can recover the recent
construction of 2-dimensional TQFTs of Khovanov, Ostrik, and Kononov by the
methods presented here. We give new examples, of interpolations of the
categories $Rep(Aut_{O}(M))$ where $O$ is a discrete valuation ring with a
finite residue field, and M is a finite module over it. We also generalize the
construction of wreath products with $S_t$, which was introduced by Knop.
|
We explore the connections between Green's functions for certain differential
equations, covariance functions for Gaussian processes, and the smoothing
splines problem. Conventionally, the smoothing spline problem is considered in
a setting of reproducing kernel Hilbert spaces, but here we present a more
direct approach. With this approach, some choices that are implicit in the
reproducing kernel Hilbert space setting stand out, one example being the
choice of boundary conditions and more elaborate shape restrictions.
The paper first explores the Laplace operator and the Poisson equation and
studies the corresponding Green's functions under various boundary conditions
and constraints. Explicit functional forms are derived in a range of examples.
These examples include several novel forms of the Green's function that, to the
author's knowledge, have not previously been presented. Next we present a
smoothing spline problem where we penalize the integrated squared derivative of
the function to be estimated. We then show how the solution can be explicitly
computed using the Green's function for the Laplace operator. In the last part
of the paper, we explore the connection between Gaussian processes and
differential equations, and show how the Laplace operator is related to
Brownian processes and how processes that arise due to boundary conditions and
shape constraints can be viewed as conditional Gaussian processes. The
presented connection between Green's functions for the Laplace operator and
covariance functions for Brownian processes allows us to introduce several
novel Brownian processes with specific behaviors. Finally, we consider the
connection between Gaussian process priors and smoothing splines.
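
One instance of the stated connection can be checked numerically: the
Brownian-motion covariance $K(s,t)=\min(s,t)$ is the Green's function of
$-d^2/dx^2$ on $(0,1]$ with $u(0)=0$ (Dirichlet) and $u'(1)=0$ (Neumann). A
sketch using a standard finite-difference discretization:

```python
import numpy as np

n = 200
h = 1.0 / n
x = np.arange(1, n + 1) * h
# Discrete -d^2/dx^2 with Dirichlet at x = 0, one-sided Neumann at x = 1.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
A[-1, -1] = 1.0 / h**2
G = np.linalg.inv(A) / h           # discrete Green's function
K = np.minimum.outer(x, x)         # Brownian covariance min(s, t)
print(np.max(np.abs(G - K)))       # numerically zero: the kernels agree
```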
|
In this paper, we study Toeplitz algebras generated by certain class of
Toeplitz operators on the $p$-Fock space and the $p$-Bergman space with
$1<p<\infty$. Let BUC($\mathbb C^n$) and BUC($\mathbb B_n$) denote the
collections of bounded uniformly continuous functions on $\mathbb C^n$ and
$\mathbb B_n$ (the unit ball in $\mathbb C^n$), respectively. On the $p$-Fock
space, we show that the Toeplitz algebra which has a translation invariant
closed subalgebra of BUC($\mathbb C^n$) as its set of symbols is linearly
generated by Toeplitz operators with the same space of symbols. This answers a
question recently posed by Fulsche \cite{Robert}. On the $p$-Bergman space, we
study Toeplitz algebras with symbols in some translation invariant closed
subalgebras of BUC($\mathbb B_n)$. In particular, we obtain that the Toeplitz
algebra generated by all Toeplitz operators with symbols in BUC($\mathbb B_n$)
is equal to the closed linear space generated by Toeplitz operators with such
symbols. This generalizes the corresponding result for the case of $p=2$
obtained by Xia \cite{Xia2015}.
|