COVID-19 prevention, which combines soft approaches and best practices for
public health safety, is the only solution recommended by the health science
and management communities in the pandemic era. To evaluate the validity of
such claims in a conflict- and COVID-19-affected country like Afghanistan, we
conducted a large-scale digital social experiment using conversational AI and
social platforms from an info-epidemiology and infoveillance perspective. This
served as a means to uncover an underlying truth, provide large-scale
facilitation support, extend the soft impact of discussion to multiple sites,
collect, diverge, converge, and evaluate a large volume of opinions and
concerns from health experts, patients, and local people, deliberate on the
data collected, and explore collective prevention approaches to COVID-19.
Finally, this paper shows that deciding on a prevention measure that maximizes
the probability of finding the ground truth is intrinsically difficult without
the support of an AI-enabled discussion system.
|
We continue the investigation of systems of hereditarily rigid relations
started in Couceiro, Haddad, Pouzet and Sch\"olzel [1]. We observe that on a
set $V$ with $m$ elements, there is a hereditarily rigid set $\mathcal R$ made
of $n$ tournaments if and only if $m(m-1)\leq 2^n$. We ask if the same
inequality holds when the tournaments are replaced by linear orders. This
problem has an equivalent formulation in terms of separation of linear orders.
Let $h_{\rm Lin}(m)$ be the least cardinal $n$ such that there is a family
$\mathcal R$ of $n$ linear orders on an $m$-element set $V$ such that any two
distinct ordered pairs of distinct elements of $V$ are separated by some member
of $\mathcal R$; then $\lceil \log_2 (m(m-1))\rceil\leq h_{\rm Lin}(m)$, with
equality if $m\leq 7$. We ask whether the equality holds for every $m$. We
prove that $h_{\rm Lin}(m+1)\leq h_{\rm Lin}(m)+1$. If $V$ is infinite, we show
that $h_{\rm Lin}(m)= \aleph_0$ for $m\leq 2^{\aleph_0}$. More generally, we
prove that the two equalities $h_{\rm Lin}(m)= \log_2 (m)= d({\rm Lin}(V))$
hold, where $\log_2 (m)$ is the least cardinal $\mu$ such that $m\leq 2^\mu$,
and $d({\rm Lin}(V))$ is the topological density of the set ${\rm Lin}(V)$ of
linear orders on $V$ (viewed as a subset of the power set $\mathcal{P}(V\times
V)$ equipped with the product topology). These equalities follow from the {\it
Generalized Continuum Hypothesis}, but we do not know whether they hold without
any set theoretical hypothesis.
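As an aside, the finite lower bound above can be checked directly for very small $m$ by brute force. The sketch below is not from the paper; it assumes the usual reading of the separation condition, namely that a linear order separates two distinct ordered pairs when it contains exactly one of them as a relation, which is what makes the characteristic vectors over the family distinct and yields $m(m-1)\leq 2^n$.

```python
# Brute-force check of h_Lin(m) for very small m (illustration only, not from
# the paper). A permutation is read as a linear order; it "contains" the
# ordered pair (a, b) when a precedes b. A family separates two distinct
# ordered pairs when some member contains exactly one of them.
from itertools import combinations, permutations

def h_lin(m):
    V = range(m)
    orders = [set(combinations(perm, 2)) for perm in permutations(V)]
    pairs = [(a, b) for a in V for b in V if a != b]
    n = 1
    while True:
        for family in combinations(orders, n):
            if all(any((p in L) != (q in L) for L in family)
                   for p, q in combinations(pairs, 2)):
                return n
        n += 1

for m in (2, 3, 4):
    print(m, h_lin(m))   # expected to equal ceil(log2(m*(m-1))) for these m
```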
|
The pantograph differential equation and its solution, the deformed
exponential function, are remarkable objects that appear in areas as diverse as
combinatorics, number theory, statistical mechanics, and electrical
engineering. In this article we describe a surprising new application of these
objects in graph theory, by showing that the set of all cliques is not forcing
for quasirandomness. This provides a natural example of an infinite family of
graphs that is not forcing, and answers a question posed by P. Horn.
|
Classical, i.e. non-quantum, communications include configurations with
multiple-input multiple-output (MIMO) channels. Some associated signal
processing tasks consider these channels in a symmetric way, i.e. by assigning
the same role to all channel inputs, and similarly to all channel outputs.
These tasks especially include channel identification/estimation and channel
equalization, tightly connected with source separation. Their most challenging
version is the blind one, i.e. when the receivers have (almost) no prior
knowledge about the emitted signals. Other signal processing tasks consider
classical communication channels in an asymmetric way. This especially includes
the situation when data are sent by Emitter 1 to Receiver 1 through a main
channel, and an "intruder" (including Receiver 2) interferes with that channel
so as to extract information, thus performing so-called eavesdropping, while
Receiver 1 may aim at detecting that intrusion. Some of the above processing
tasks have been extended to quantum channels, including those that have several
quantum bits (qubits) at their input and output. For such quantum channels,
beyond previously reported work for symmetric scenarios, we here address
asymmetric (blind and non-blind) ones, with emphasis on intrusion detection and
additional comments about eavesdropping. To develop fundamental concepts, we
first consider channels with exchange coupling as a toy model. We especially
use the general quantum information processing framework that we recently
developed, to derive new attractive intrusion detection methods based on a
single preparation of each state. Finally, we discuss how the proposed methods
might be extended, beyond the specific class of channels analyzed here.
|
Sensor and control data of modern mechatronic systems are often available as
heterogeneous time series with different sampling rates and value ranges.
Suitable classification and regression methods from the field of supervised
machine learning already exist for predictive tasks, for example in the context
of condition monitoring, but their performance scales strongly with the amount
of labeled training data, whose provision is often associated with high effort
in the form of person-hours or additional sensors. In this paper, we present a
method for unsupervised feature extraction using autoencoder networks that
specifically addresses the heterogeneous nature of the database and reduces the
amount of labeled training data required compared to existing methods. Three
public datasets of mechatronic systems from different application domains are
used to validate the results.
|
This paper proposes a receding horizon active learning and control problem
for dynamical systems in which Gaussian Processes (GPs) are utilized to model
the system dynamics. The active learning objective in the optimization problem
is given by the exact conditional differential entropy of GP predictions at
multiple steps ahead, which is equivalent to the log determinant of the GP
posterior covariance matrix. The resulting non-convex and complex optimization
problem is solved by the Sequential Convex Programming algorithm that exploits
the first-order approximations of non-convex functions. Simulation results on
an autonomous racing car example verify that the proposed method can
significantly improve data quality for model learning, while the solving time
remains promising for real-time applications.
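To make the objective concrete, here is a minimal numerical sketch (not the authors' code; the RBF kernel, noise level, and variable names are assumptions for illustration): the joint differential entropy of GP predictions at a set of query points reduces to $\tfrac12\log\det(2\pi e\,\Sigma)$, where $\Sigma$ is the GP posterior covariance at those points.

```python
# Illustrative log-determinant objective for GP-based active learning:
# H = 0.5 * log det(2*pi*e * Sigma), Sigma = posterior covariance at the
# query (e.g. multi-step-ahead) points.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_entropy(X_train, X_query, noise_var=1e-2):
    """Joint differential entropy of GP predictions at X_query."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_query)
    Kss = rbf_kernel(X_query, X_query)
    Sigma = Kss - Ks.T @ np.linalg.solve(K, Ks)          # posterior covariance
    _, logdet = np.linalg.slogdet(Sigma + 1e-9 * np.eye(len(X_query)))
    n = len(X_query)
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(20, 2))
X_query = rng.uniform(-1, 1, size=(5, 2))   # e.g. predicted states over the horizon
print(gp_entropy(X_train, X_query))
```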
|
The method of determining the center of mass of a triangular piece is well
known: the piece is hung in turn from each of its vertices, and the vertical
line through that vertex is drawn. The intersection of the three verticals is
taken as the center of mass, which can be verified by balancing the piece on a
point. But not everybody knows that the method was developed by Archimedes
about 2,300 years ago, who related these geometrical elements to the medians
of the triangle. He demonstrated his result theoretically and, inspired by his
demonstration, we develop another proof, one that leads to the ideas and
methods of differential and integral calculus.
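For completeness, the calculus route alluded to above can be sketched as follows (a standard derivation, not taken from the paper): placing a triangle of base $b$ and height $h$ with its base on the $x$-axis, the horizontal slice at height $y$ has width $b(1-y/h)$, so the height of the center of mass is
$$\bar{y} \;=\; \frac{\int_0^h y\, b\left(1-\tfrac{y}{h}\right) dy}{\int_0^h b\left(1-\tfrac{y}{h}\right) dy} \;=\; \frac{b h^2/6}{b h/2} \;=\; \frac{h}{3},$$
which is exactly where the medians meet, since the centroid divides each median in the ratio $2{:}1$ from the vertex.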
|
This paper proposes integrating semantics-oriented similarity representation
into RankingMatch, a recently proposed semi-supervised learning method. Our
method, dubbed ReRankMatch, aims to deal with the case in which labeled and
unlabeled data come from non-overlapping categories. ReRankMatch encourages the
model to produce similar image representations for samples that likely belong
to the same class. We evaluate our method on various datasets such as
CIFAR-10, CIFAR-100, SVHN, STL-10, and Tiny ImageNet. We obtain promising
results (4.21% error rate on CIFAR-10 with 4000 labels, 22.32% error rate on
CIFAR-100 with 10000 labels, and 2.19% error rate on SVHN with 1000 labels)
when the amount of labeled data is sufficient to learn semantics-oriented
similarity representation. The code is made publicly available at
https://github.com/tqtrunghnvn/ReRankMatch.
|
This paper studies the channel capacity of intensity-modulation
direct-detection (IM/DD) visible light communication (VLC) systems under both
optical and electrical power constraints. Specifically, it derives the
asymptotic capacities in the high and low signal-to-noise ratio (SNR) regimes
under peak, first-moment, and second-moment constraints. The results show that
first- and second-moment constraints are never simultaneously active in the
asymptotic low-SNR regime, and only in a few cases in the asymptotic high-SNR
regime. Moreover, the second-moment constraint is more stringent in the
asymptotic low-SNR regime than in the high-SNR regime.
|
Molecular motor gliding motility assays based on myosin/actin or
kinesin/microtubules are of interest for nanotechnology applications ranging
from cargo-trafficking in lab-on-a-chip devices to novel biocomputation
strategies. Prototype systems are typically monitored by expensive and bulky
fluorescence microscopy systems and the development of integrated, direct
electric detection of single filaments would strongly benefit applications and
scale-up. We present estimates for the viability of such a detector by
calculating the electrostatic potential change generated at a carbon nanotube
transistor by a motile actin filament or microtubule under realistic gliding
assay conditions. We combine this with detection limits based on previous
state-of-the-art experiments using carbon nanotube transistors to detect
catalysis by a bound lysozyme molecule and melting of a bound short-strand DNA
molecule. Our results show that detection should be possible for both actin and
microtubules using existing low ionic strength buffers given good device
design, e.g., by raising the transistor slightly above the guiding channel
floor. We perform studies as a function of buffer ionic strength, height of the
transistor above the guiding channel floor, presence/absence of the casein
surface passivation layer for microtubule assays and the linear charge density
of the actin filaments/microtubules. We show that detection of microtubules is
a more likely prospect given their smaller height of travel above the surface,
higher negative charge density and the casein passivation, and may possibly be
achieved with the nanoscale transistor sitting directly on the guiding channel
floor.
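The role played by buffer ionic strength can be illustrated with a generic Debye-screening estimate (a textbook formula, not the authors' detailed electrostatic model): the potential change sensed by the transistor decays roughly as $e^{-z/\lambda_D}$ with the filament-to-sensor distance $z$, and the Debye length $\lambda_D$ grows as the ionic strength decreases.

```python
# Generic Debye-length estimate for a 1:1 electrolyte at temperature T.
# Low ionic strength buffers and a small filament-to-sensor distance are both
# favourable, since the induced potential is screened over ~lambda_D.
import numpy as np

def debye_length_nm(ionic_strength_molar, T=298.0, eps_r=78.5):
    e = 1.602176634e-19          # elementary charge [C]
    kB = 1.380649e-23            # Boltzmann constant [J/K]
    eps0 = 8.8541878128e-12      # vacuum permittivity [F/m]
    NA = 6.02214076e23           # Avogadro constant [1/mol]
    I = ionic_strength_molar * 1e3                 # mol/m^3
    lam = np.sqrt(eps_r * eps0 * kB * T / (2 * NA * e**2 * I))
    return lam * 1e9

for I in (0.15, 0.01, 0.001):   # physiological vs. low ionic strength buffers
    print(f"I = {I:7.3f} M  ->  Debye length ~ {debye_length_nm(I):.1f} nm")
```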
|
Entities, as the essential elements in relation extraction tasks, exhibit
certain structure. In this work, we formulate such structure as distinctive
dependencies between mention pairs. We then propose SSAN, which incorporates
these structural dependencies within the standard self-attention mechanism and
throughout the overall encoding stage. Specifically, we design two alternative
transformation modules inside each self-attention building block to produce
attentive biases so as to adaptively regularize its attention flow. Our
experiments demonstrate the usefulness of the proposed entity structure and the
effectiveness of SSAN. It significantly outperforms competitive baselines,
achieving new state-of-the-art results on three popular document-level relation
extraction datasets. We further provide ablation and visualization to show how
the entity structure guides the model for better relation extraction. Our code
is publicly available.
|
We introduce \emph{$k$-positive representations}, a large class of
$\{1,\ldots,k\}$--Anosov surface group representations into $\mathrm{PGL}(E)$
that share many features with Hitchin representations, and we study their
degenerations: unless they are Hitchin, they can be deformed to non-discrete
representations, but any limit is at least $(k-3)$-positive and irreducible
limits are $(k-1)$-positive. A major ingredient, of independent interest, is a
general limit theorem for positively ratioed representations.
|
The recent availability of commercial-off-the-shelf (COTS) legged robot
platforms has opened up new opportunities for deploying legged systems in
different scenarios. While the main advantage of legged robots is their ability
to traverse unstructured terrain, there are still large gaps between what robot
platforms can achieve and their animal counterparts. Therefore, when deploying
as part of a heterogeneous robot team of different platforms, it is beneficial
to understand the different scenarios where a legged platform would perform
better than a wheeled, tracked or aerial platform. Two COTS quadruped robots,
Ghost Robotics' Vision 60 and Boston Dynamics' Spot, were deployed into a
heterogeneous team. A description of some of the challenges faced while
integrating the platforms, as well as some experiments in traversing different
terrains are provided to give insight into the real-world deployment of legged
robots.
|
We give a simple proof of a classical theorem by A. M\'at\'e, P. Nevai, and
V. Totik on asymptotic behavior of orthogonal polynomials on the unit circle. It
is based on a new real-variable approach involving an entropy estimate for the
orthogonality measure. Our second result is an extension of a theorem by
G. Freud on averaged convergence of Fourier series. We also discuss some related
open problems in the theory of orthogonal polynomials on the unit circle.
|
Location-based games (LBG) impose virtual spaces on top of physical
locations. Studies have explored LBG from various perspectives. However, a
comprehensive study of who these players are, their traits, their
gratifications, and the links between them is conspicuously absent from the
literature. In this paper, we aim to address this lacuna through a series of
surveys with 2390 active LBG players utilizing Tondello's Player Traits Model
and Scale of Game playing Preferences, and Hamari's scale of LBG
gratifications. Our findings (1) illustrate an association between player
satisfaction and social aspects of the studied games, (2) explicate how the
core-loops of the studied games impact the expressed gratifications and the
affine traits of players, and (3) indicate a strong distinction between
hardcore and casual players based on both traits and gratifications. Overall,
our findings shed light on LBG players, their traits, and the gratifications
they derive from playing LBGs.
|
As the interaction between the black holes and highly energetic infalling
charged matter receives quantum corrections, the basic laws of black hole
mechanics have to be carefully rederived. Using the covariant phase space
formalism, we generalize the first law of black hole mechanics, both
"equilibrium state" and "physical process" versions, in the presence of
nonlinear electrodynamics fields, defined by Lagrangians depending on both
quadratic electromagnetic invariants, $F_{ab}F^{ab}$ and $F_{ab}\,{\star
F}^{ab}$. Derivation of this law demands a specific treatment of the Lagrangian
parameters, similar to embedding of the cosmological constant into
thermodynamic context. Furthermore, we discuss the validity of energy
conditions, several complementing proofs of the zeroth law of black hole
electrodynamics and some aspects of the recently generalized Smarr formula, its
(non-)linearity and relation to the first law.
|
We consider zero-error communication over a two-transmitter deterministic
adversarial multiple access channel (MAC) governed by an adversary who has
access to the transmissions of both senders (hence called omniscient) and aims
to maliciously corrupt the communication. None of the encoders, the jammer, or
the decoder is allowed to randomize using private or public randomness. This
enforces a combinatorial nature of the problem. Our model covers a large family
of channels studied in the literature, including all deterministic discrete
memoryless noisy or noiseless MACs. In this work, given an arbitrary
two-transmitter deterministic omniscient adversarial MAC, we characterize when
the capacity region
1) has nonempty interior (in particular, is two-dimensional);
2) consists of two line segments (in particular, has empty interior);
3) consists of one line segment (in particular, is one-dimensional);
4) or only contains $ (0,0) $ (in particular, is zero-dimensional).
This extends a recent result by Wang, Budkuley, Bogdanov and Jaggi (2019)
from the point-to-point setting to the multiple access setting. Indeed, our
converse arguments build upon their generalized Plotkin bound and involve
delicate case analysis. One of the technical challenges is to take care of both
"joint confusability" and "marginal confusability". In particular, the
treatment of marginal confusability does not follow from the point-to-point
results by Wang et al. Our achievability results follow from random coding with
expurgation.
|
Contemporary works on abstractive text summarization have focused primarily
on high-resource languages like English, mostly due to the limited availability
of datasets for low/mid-resource ones. In this work, we present XL-Sum, a
comprehensive and diverse dataset comprising 1 million professionally annotated
article-summary pairs from BBC, extracted using a set of carefully designed
heuristics. The dataset covers 44 languages ranging from low to high-resource,
for many of which no public dataset is currently available. XL-Sum is highly
abstractive, concise, and of high quality, as indicated by human and intrinsic
evaluation. We fine-tune mT5, a state-of-the-art pretrained multilingual model,
with XL-Sum and experiment on multilingual and low-resource summarization
tasks. XL-Sum yields competitive results compared to those obtained using
similar monolingual datasets: multilingual training achieves ROUGE-2 scores
higher than 11 on the 10 languages we benchmark on, with some of them exceeding
15. Additionally, training on low-resource languages
individually also provides competitive performance. To the best of our
knowledge, XL-Sum is the largest abstractive summarization dataset in terms of
the number of samples collected from a single source and the number of
languages covered. We are releasing our dataset and models to encourage future
research on multilingual abstractive summarization. The resources can be found
at \url{https://github.com/csebuetnlp/xl-sum}.
|
In this work, we aim to achieve efficient end-to-end learning of driving
policies in dynamic multi-agent environments. Predicting and anticipating
future events at the object level are critical for making informed driving
decisions. We propose an Instance-Aware Predictive Control (IPC) approach,
which forecasts interactions between agents as well as future scene structures.
We adopt a novel multi-instance event prediction module to estimate the
possible interaction among agents in the ego-centric view, conditioned on the
selected action sequence of the ego-vehicle. To decide the action at each step,
we seek the action sequence that can lead to safe future states based on the
prediction module outputs by repeatedly sampling likely action sequences. We
design a sequential action sampling strategy to better leverage predicted
states on both scene-level and instance-level. Our method establishes a new
state of the art in the challenging CARLA multi-agent driving simulation
environments without expert demonstration, giving better explainability and
sample efficiency.
|
Quantifying the confidence (or conversely the uncertainty) of a prediction is
a highly desirable trait of an automatic system, as it improves the robustness
and usefulness in downstream tasks. In this paper we investigate confidence
estimation for end-to-end automatic speech recognition (ASR). Previous work has
addressed confidence measures for lattice-based ASR, while current machine
learning research mostly focuses on confidence measures for unstructured deep
learning. However, as the ASR systems are increasingly being built upon deep
end-to-end methods, there is little work that tries to develop confidence
measures in this context. We fill this gap by providing an extensive benchmark
of popular confidence methods on four well-known speech datasets. There are two
challenges we overcome in adapting existing methods: working on structured data
(sequences) and obtaining confidences at a coarser level than the predictions
(words instead of tokens). Our results suggest that a strong baseline can be
obtained by scaling the logits by a learnt temperature, followed by estimating
the confidence as the negative entropy of the predictive distribution and,
finally, sum pooling to aggregate at word level.
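A minimal sketch of that baseline is given below (shapes, the temperature value, and the token-to-word grouping are illustrative assumptions, not the paper's code): scale the logits by a learnt temperature, take the negative entropy of the resulting distribution per token, and sum-pool the token scores within each word.

```python
# Baseline confidence: temperature-scaled logits -> per-token negative entropy
# -> sum pooling of token scores to word level.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def word_confidences(logits, word_ids, temperature=1.5):
    """logits: (num_tokens, vocab); word_ids: word index for each decoded token."""
    probs = softmax(logits / temperature)
    neg_entropy = np.sum(probs * np.log(probs + 1e-12), axis=-1)  # higher = more confident
    conf = np.zeros(word_ids.max() + 1)
    np.add.at(conf, word_ids, neg_entropy)                        # sum pooling per word
    return conf

rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 100))           # 6 decoded tokens, vocab of 100
word_ids = np.array([0, 0, 1, 2, 2, 2])      # tokens grouped into 3 words
print(word_confidences(logits, word_ids))
```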
|
Image guidance in minimally invasive interventions is usually provided using
live 2D X-ray imaging. To enhance the information available during the
intervention, the preoperative volume can be overlaid over the 2D images using
2D/3D image registration. Recently, deep learning-based 2D/3D registration
methods have shown promising results by improving computational efficiency and
robustness. However, there is still a gap in terms of registration accuracy
compared to traditional optimization-based methods. We aim to address this gap
by incorporating traditional methods in deep neural networks using known
operator learning. As an initial step in this direction, we propose to learn
the update step of an iterative 2D/3D registration framework based on the
Point-to-Plane Correspondence model. We embed the Point-to-Plane Correspondence
model as a known operator in our deep neural network and learn the update step
for the iterative registration. We show an improvement of 1.8 times in terms of
registration accuracy for the update step prediction compared to learning
without the known operator.
|
In this paper, we focus on the nonlinear least squares problem
$\min_{\mathbf{x} \in \mathbb{H}^d}\| |A\mathbf{x}|-\mathbf{b}\|$, where
$A\in \mathbb{H}^{m\times d}$, $\mathbf{b} \in \mathbb{R}^m$ with $\mathbb{H}
\in \{\mathbb{R},\mathbb{C}\}$, and consider the uniqueness and stability of
solutions. Such problems arise, for instance, in phase retrieval and absolute
value rectification neural networks. For the case where
$\mathbf{b}=|A\mathbf{x}_0|$ for some $\mathbf{x}_0\in \mathbb{H}^d$, many
results have been developed to characterize the uniqueness and stability of
solutions. However, for the case where $\mathbf{b} \neq |A\mathbf{x}_0|$ for
any $\mathbf{x}_0\in \mathbb{H}^d$, there is, to the best of our knowledge, no
existing result. In this paper, we first focus on the uniqueness of solutions
and show that for any matrix $A\in \mathbb{H}^{m \times d}$ there always
exists a vector $\mathbf{b} \in \mathbb{R}^m$ such that the solution is not
unique. However, in the real case, such ``bad'' vectors $\mathbf{b}$ are negligible,
namely, if $\mathbf{b} \in \mathbb{R}_{+}^m$ does not lie in some measure zero
set, then the solution is unique. We also present some conditions under which
the solution is unique. For the stability of solutions, we prove that the
solution is never uniformly stable. But if we restrict the vectors $\mathbf{b}$
to any convex set then it is stable.
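For the real case $\mathbb{H}=\mathbb{R}$, the problem can be handed to a generic nonlinear least-squares solver; the sketch below (an illustration on assumed random data, not the authors' method) also shows the unavoidable global sign ambiguity $|A\mathbf{x}|=|A(-\mathbf{x})|$.

```python
# Illustrative real-case solver for min_x || |A x| - b ||. The returned
# minimizer depends on the initialization; x and -x always give the same
# residual, so we compare against the generating vector up to sign.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
m, d = 50, 5
A = rng.normal(size=(m, d))
x0_true = rng.normal(size=d)
b = np.abs(A @ x0_true) + 0.01 * rng.normal(size=m)   # b need not equal |A x| exactly

res = least_squares(lambda x: np.abs(A @ x) - b, x0=rng.normal(size=d))
print("residual norm:", np.linalg.norm(res.fun))
print("distance to x0 (up to sign):",
      min(np.linalg.norm(res.x - x0_true), np.linalg.norm(res.x + x0_true)))
```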
|
We study the free energy and limit shape problem for the five-vertex model
with periodic "genus zero" weights. We derive the exact phase diagram, free
energy and surface tension for this model. We show that its surface tension has
trivial potential and use this to give explicit parameterizations of limit
shapes.
|
The goal of this paper is to establish Beilinson-Bernstein type localization
theorems for quantizations of some conical symplectic resolutions. We prove the
full localization theorems for finite and affine type A Nakajima quiver
varieties. The proof is based on two partial results that hold in more general
situations. First, we establish an exactness result for the global section
functor if there is a tilting generator that has a rank 1 summand. Second, we examine
when the global section functor restricts to an equivalence between categories
$\mathcal{O}$.
|
While the wiretap secret key capacity remains unknown for general source
models even in the two-user case, we obtain a single-letter characterization
for a large class of multi-user source models with a linear wiretapper who can
observe any linear combinations of the source. We introduce the idea of
irreducible sources to show the existence of an optimal communication scheme that
achieves perfect omniscience with minimum leakage of information to the
wiretapper. This implies a duality between the problems of wiretap secret key
agreement and secure omniscience, and such duality potentially holds for more
general sources.
|
Functional connectomes derived from functional magnetic resonance imaging
have long been used to understand the functional organization of the brain.
Nevertheless, a connectome is intrinsically linked to the atlas used to create
it. In other words, a connectome generated from one atlas is different in scale
and resolution compared to a connectome generated from another atlas. Being
able to map connectomes and derived results between different atlases without
additional pre-processing is a crucial step in improving interpretation and
generalization between studies that use different atlases. Here, we use optimal
transport, a powerful mathematical technique, to find an optimum mapping
between two atlases. This mapping is then used to transform time series from
one atlas to another in order to reconstruct a connectome. We validate our
approach by comparing transformed connectomes against their "gold-standard"
counterparts (i.e., connectomes generated directly from an atlas) and
demonstrate the utility of transformed connectomes by applying these
connectomes to predictive models based on a different atlas. We show that these
transformed connectomes are significantly similar to their "gold-standard"
counterparts and maintain individual differences in brain-behavior
associations, demonstrating both the validity of our approach and its utility
in downstream analyses. Overall, our approach is a promising avenue to increase
the generalization of connectome-based results across different atlases.
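The following is a self-contained sketch of the general idea (not the paper's exact pipeline): an entropic optimal transport plan between the regions of two atlases, computed here with a basic Sinkhorn iteration and, as an illustrative assumption, a cost matrix built from region-centroid distances, is used to map time series from one atlas to the other before recomputing the connectome.

```python
# Entropic optimal transport between two atlases, then pushing time series
# from atlas A to atlas B through the (column-normalized) plan.
import numpy as np

def sinkhorn(C, a, b, reg=0.05, n_iter=500):
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]          # transport plan, rows = atlas A regions

rng = np.random.default_rng(0)
nA, nB, T = 100, 268, 200                       # regions per atlas, time points
centroids_A = rng.uniform(size=(nA, 3))         # placeholder region centroids
centroids_B = rng.uniform(size=(nB, 3))
C = np.linalg.norm(centroids_A[:, None] - centroids_B[None, :], axis=-1)

P = sinkhorn(C, np.full(nA, 1 / nA), np.full(nB, 1 / nB))
ts_A = rng.normal(size=(T, nA))                 # time series parcellated with atlas A
ts_B = ts_A @ (P / P.sum(axis=0, keepdims=True))  # mapped to atlas B
conn_B = np.corrcoef(ts_B, rowvar=False)        # reconstructed connectome in atlas B
print(conn_B.shape)
```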
|
A $d$-dimensional framework is a pair $(G,p)$, where $G=(V,E)$ is a graph and
$p$ is a map from $V$ to ${\mathbb{R}}^d$. The length of an edge $uv\in E$ in
$(G,p)$ is the distance between $p(u)$ and $p(v)$. The framework is said to be
globally rigid in ${\mathbb{R}}^d$ if the graph $G$ and its edge lengths
uniquely determine $(G,p)$, up to congruence. A graph $G$ is called globally
rigid in ${\mathbb{R}}^d$ if every $d$-dimensional generic framework $(G,p)$ is
globally rigid.
In this paper, we consider the problem of reconstructing a graph from the set
of edge lengths arising from a generic framework. Roughly speaking, a graph $G$
is strongly reconstructible in ${\mathbb{C}}^d$ if it is uniquely determined by
the set of (unlabeled) edge lengths of any generic framework $(G,p)$ in
$d$-space, along with the number of its vertices. It is known that if $G$ is
globally rigid in ${\mathbb{R}}^d$ on at least $d+2$ vertices, then it is
strongly reconstructible in ${\mathbb{C}}^d$. We strengthen this result and
show that under the same conditions, $G$ is in fact fully reconstructible in
${\mathbb{C}}^d$, which means that the set of edge lengths alone is sufficient
to uniquely reconstruct $G$, without any constraint on the number of vertices.
We also prove that if $G$ is globally rigid in ${\mathbb{R}}^d$ on at least
$d+2$ vertices, then the $d$-dimensional generic rigidity matroid of $G$ is
connected. This result generalizes Hendrickson's necessary condition for global
rigidity and gives rise to a new combinatorial necessary condition.
Finally, we provide new families of fully reconstructible graphs and use them
to answer some questions regarding unlabeled reconstructibility posed in recent
papers.
|
Characterizing metastable neural dynamics in finite-size spiking networks
remains a daunting challenge. We propose to address this challenge in the
recently introduced replica-mean-field (RMF) limit. In this limit, networks are
made of infinitely many replicas of the finite network of interest, but with
randomized interactions across replicas. Such randomization renders certain
excitatory networks fully tractable at the cost of neglecting activity
correlations, but with explicit dependence on the finite size of the neural
constituents. However, metastable dynamics typically unfold in networks with
mixed inhibition and excitation. Here, we extend the RMF computational
framework to point-process-based neural network models with exponential
stochastic intensities, allowing for mixed excitation and inhibition. Within
this setting, we show that metastable finite-size networks admit multistable
RMF limits, which are fully characterized by stationary firing rates.
Technically, these stationary rates are determined as solutions to a set of
delayed differential equations under certain regularity conditions that any
physical solutions shall satisfy. We solve this original problem by combining
the resolvent formalism and singular-perturbation theory. Importantly, we find
that these rates specify probabilistic pseudo-equilibria which accurately
capture the neural variability observed in the original finite-size network. We
also discuss the emergence of metastability as a stochastic bifurcation, which
can also be interpreted as a static phase transition in the RMF limits. In
turn, we expect to leverage the static picture of RMF limits to infer purely
dynamical features of metastable finite-size networks, such as the transition
rates between pseudo-equilibria.
|
In this note we introduce and study the almost commuting varieties for the
symplectic Lie algebras.
|
Integrated semiconductor mode-locked lasers have shown promise in many
applications and are readily fabricated using generic InP photonic integration
platforms. However, the passive waveguides offered in such platforms have
relatively high linear and nonlinear losses that limit the performance of these
lasers. By extending such lasers with, for example, an external cavity, the
performance can be increased considerably. In this paper, we demonstrate for
the first time that a high-performance mode-locked laser can be achieved with a
butt-coupling integration technique using chip-scale silicon nitride
waveguides. A platform-independent SiN/SU8 coupler design is used to couple
between the silicon nitride external cavity and the III/V active chip.
Mode-locked lasers at 2.18 GHz and 15.5 GHz repetition rates are demonstrated
with Lorentzian RF linewidths several orders of magnitude smaller than what has
been demonstrated on monolithic InP platforms. The RF linewidth was 31 Hz for
the 2.18 GHz laser.
|
We demonstrate the reliable generation of 1-mJ, 31-fs pulses with an average
power of 1 kW by post-compression of 200-fs pulses from a coherently combined
Yb:fiber laser system in an argon-filled Herriott-type multi-pass cell with an
overall compression efficiency of 96%. We also analyze the output beam,
revealing essentially no spatio-spectral couplings or beam quality loss.
|
We consider the approach of replacing trees by multi-indices as an index set
of the abstract model space $\mathsf{T}$ introduced by Otto, Sauer, Smith and
Weber to tackle quasi-linear singular SPDEs. We show that this approach is
consistent with the postulates of regularity structures when it comes to the
structure group $\mathsf{G}$. In particular, $\mathsf{G}\subset{\rm
Aut}(\mathsf{T})$ arises from a Hopf algebra $\mathsf{T}^+$ and a comodule
$\Delta\colon\mathsf{T}\rightarrow \mathsf{T}^+\otimes\mathsf{T}$. In fact,
this approach, where the dual $\mathsf{T}^*$ of the abstract model space
$\mathsf{T}$ naturally embeds into a formal power series algebra, allows us to
interpret $\mathsf{G}^*\subset{\rm Aut}(\mathsf{T}^*)$ as a Lie group arising
from a Lie algebra $\mathsf{L} \subset{\rm End}(\mathsf{T}^*)$ consisting of
derivations on this power series algebra. These derivations in turn are the
infinitesimal generators of two actions on the space of pairs (nonlinearities,
functions of space-time mod constants). These actions are shift of space-time
and tilt by space-time polynomials. The Hopf algebra $\mathsf{T}^+$ arises from
a coordinate representation of the universal enveloping algebra ${\rm
U}(\mathsf{L})$ of the Lie algebra $\mathsf{L}$. The coordinates are determined
by an underlying pre-Lie algebra structure of the derived algebra of
$\mathsf{L}$. Strong finiteness properties, which are enforced by gradedness
and the restrictive definition of $\mathsf{T}$, allow for this purely algebraic
construction of $\mathsf{G}$. We also argue that there exist pre-Lie algebra
and Hopf algebra morphisms between our structure and the tree-based one in the
cases of branched rough paths (Grossman-Larson, Connes-Kreimer) and of the
generalized parabolic Anderson model.
|
Previous approaches to learned cardinality estimation have focused on
improving average estimation error, but not all estimates matter equally. Since
learned models inevitably make mistakes, the goal should be to improve the
estimates that make the biggest difference to an optimizer. We introduce a new
loss function, Flow-Loss, that explicitly optimizes for better query plans by
approximating the optimizer's cost model and dynamic programming search
algorithm with analytical functions. At the heart of Flow-Loss is a reduction
of query optimization to a flow routing problem on a certain plan graph in
which paths correspond to different query plans. To evaluate our approach, we
introduce the Cardinality Estimation Benchmark, which contains the ground truth
cardinalities for sub-plans of over 16K queries from 21 templates with up to 15
joins. We show that across different architectures and databases, a model
trained with Flow-Loss improves the cost of plans (using the PostgreSQL cost
model) and query runtimes despite having worse estimation accuracy than a model
trained with Q-Error. When the test set queries closely match the training
queries, both models improve performance significantly over PostgreSQL and are
close to the optimal performance (using true cardinalities). However, the
Q-Error trained model degrades significantly when evaluated on queries that are
slightly different (e.g., similar but not identical query templates), while the
Flow-Loss trained model generalizes better to such situations. For example, the
Flow-Loss model achieves up to 1.5x better runtimes on unseen templates
compared to the Q-Error model, despite leveraging the same model architecture
and training data.
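For reference, the Q-Error baseline mentioned above is the standard per-estimate relative error, $\max(\hat c/c,\; c/\hat c)$; a one-function sketch follows. Flow-Loss differs by weighting errors according to their effect on the plan chosen by the optimizer, which this sketch does not attempt to reproduce.

```python
# Q-Error: symmetric relative error of a cardinality estimate (standard
# definition; clamping at eps avoids division by zero for empty sub-plans).
import numpy as np

def q_error(est, true, eps=1.0):
    est, true = np.maximum(est, eps), np.maximum(true, eps)
    return np.maximum(est / true, true / est)

print(q_error(np.array([10.0, 5000.0]), np.array([1000.0, 4800.0])))  # [100.0, ~1.04]
```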
|
Data-driven, automatic design space exploration of neural accelerator
architecture is desirable for specialization and productivity. Previous
frameworks focus on sizing the numerical architectural hyper-parameters while
neglecting the search over PE connectivities and compiler mappings. To tackle this
challenge, we propose Neural Accelerator Architecture Search (NAAS) which
holistically searches the neural network architecture, accelerator
architecture, and compiler mapping in one optimization loop. NAAS composes
highly matched architectures together with efficient mapping. As a data-driven
approach, NAAS improves on the human design Eyeriss with a 4.4x EDP reduction
and a 2.7% accuracy improvement on ImageNet under the same computation
resources, and offers a 1.4x to 3.5x EDP reduction compared to sizing only the
architectural hyper-parameters.
|
It has previously been shown that response transformations can be very
effective in improving dimension reduction outcomes for a continuous response.
The choice of transformation used can make a big difference in the
visualization of the response versus the dimension reduced regressors. In this
article, we provide an automated approach for choosing parameters of
transformation functions to seek optimal results. A criterion based on an
influence measure between dimension reduction spaces is utilized for choosing
the optimal parameter value of the transformation. Since influence measures can
be time-consuming for large data sets, two efficient criteria are also
provided. Given that a different transformation may be suitable for each
direction required to form the subspace, we also employ an iterative approach
to choosing optimal parameter values. Several simulation studies and a real
data example highlight the effectiveness of the proposed methods.
|
Let $X$ be a sufficiently large positive integer. We prove that one may
choose a subset $S$ of primes with cardinality $O(\log X)$, such that a
positive proportion of integers less than $X$ can be represented by $x^2 + p
y^2$ for at least one of $p \in S$.
|
The difference in COVID-19 death rates across political regimes has caught a
lot of attention. The "efficient autocracy" view suggests that autocracies may
be more efficient at putting in place policies that contain the spread of
COVID-19. On the other hand, the "biasing autocracy" view underlines that
autocracies may be under-reporting their COVID-19 data. We use fixed-effect
panel regression methods to discriminate between the two sides of the debate.
Our results show that a third view may in fact be prevailing: once
pre-determined characteristics of countries are accounted for, COVID-19 death
rates equalize across political regimes. The difference in death rates across
political regimes therefore seems to be primarily due to omitted variable bias.
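A toy sketch of the kind of specification involved is shown below (column names, controls, and data are placeholders, not the authors' dataset): regressing death rates on a regime indicator with time fixed effects and pre-determined country characteristics as controls.

```python
# Toy panel regression sketch: time fixed effects plus pre-determined country
# characteristics, with the regime indicator as the variable of interest.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for i, country in enumerate(["A", "B", "C", "D", "E", "F"]):
    for week in range(4):
        rows.append({
            "country": country,
            "week": week,
            "autocracy": i % 2,                        # placeholder regime indicator
            "median_age": 30 + 3 * i,                  # pre-determined characteristics
            "pop_density": [100, 30, 200, 80, 400, 60][i],
            "death_rate": 0.05 * week + 0.02 * i + 0.01 * rng.normal(),
        })
df = pd.DataFrame(rows)

model = smf.ols("death_rate ~ autocracy + median_age + pop_density + C(week)",
                data=df).fit()
print(model.params["autocracy"])
```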
|
In this study, we examine the superconducting instability of a
quasi-one-dimensional lattice in the Hubbard model based on the random-phase
approximation (RPA) and the fluctuation exchange (FLEX) approximation. We find
that a spin-singlet pair density wave (PDW-singlet) with a center-of-mass
momentum of $2k_F$ can be stabilized when the one-dimensionality becomes
prominent toward the perfect nesting of the Fermi surface. The obtained pair is
a mixture of even-frequency and odd-frequency singlet ones. The dominant
even-frequency component does not have nodal lines on the Fermi surface. This
PDW-singlet state becomes even more favorable than in the RPA when the
self-energy correction is introduced in the FLEX approximation.
|
To improve the performance of deep learning, mixup has been proposed to force
the neural networks favoring simple linear behaviors in-between training
samples. Performing mixup for transfer learning with pre-trained models,
however, is not that simple: a high-capacity pre-trained model with a large
fully-connected (FC) layer could easily overfit to the target dataset even with
samples-to-labels mixed up. In this work, we propose SMILE - Self-Distilled
Mixup for EffIcient Transfer LEarning. With mixed images as inputs, SMILE
regularizes the outputs of CNN feature extractors to learn from the mixed
feature vectors of inputs (sample-to-feature mixup), in addition to the mixed
labels. Specifically, SMILE incorporates a mean teacher, inherited from the
pre-trained model, to provide the feature vectors of input samples in a
self-distilling fashion, and mixes up the feature vectors accordingly via a
novel triplet regularizer. The triplet regularizer balances the mixup effects in
both feature and label spaces while bounding the linearity in-between samples
for pre-training tasks. Extensive experiments have been done to verify the
performance improvement made by SMILE, in comparisons with a wide spectrum of
transfer learning algorithms, including fine-tuning, L2-SP, DELTA, and RIFLE,
even with mixup strategies combined. Ablation studies show that the vanilla
sample-to-label mixup strategies could marginally increase the linearity
in-between training samples but lack generalizability, while SMILE
significantly improves the mixup effects in both label and feature spaces on
both the training and testing datasets. The empirical observations back up our
design intuition and purposes.
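The core idea can be sketched as follows (a conceptual sketch with a toy backbone, placeholder loss weight, and random data; not the released implementation): the student sees the mixed image, while a frozen mean-teacher copy of the pre-trained extractor provides the feature vectors of the two original samples, whose mixture serves as the sample-to-feature regression target.

```python
# Conceptual sample-to-feature mixup with a mean teacher.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(                      # stand-in for a pre-trained CNN extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
classifier = nn.Linear(16, 10)
teacher = copy.deepcopy(backbone)              # mean teacher, inherited from the pre-trained model
for p in teacher.parameters():
    p.requires_grad_(False)

x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
y1, y2 = torch.randint(0, 10, (8,)), torch.randint(0, 10, (8,))
lam = 0.7                                      # mixing coefficient (Beta-distributed in practice)

x_mix = lam * x1 + (1 - lam) * x2              # sample-to-sample mixup of the inputs
f_mix = backbone(x_mix)                        # student features of the mixed image
logits = classifier(f_mix)

# mixed-label loss plus sample-to-feature regularizer against teacher features
label_loss = lam * F.cross_entropy(logits, y1) + (1 - lam) * F.cross_entropy(logits, y2)
with torch.no_grad():
    f_target = lam * teacher(x1) + (1 - lam) * teacher(x2)
feat_loss = F.mse_loss(f_mix, f_target)

loss = label_loss + 0.1 * feat_loss            # 0.1 is an arbitrary placeholder weight
loss.backward()
```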
|
Invariant risk minimization (IRM) has recently emerged as a promising
alternative for domain generalization. Nevertheless, the loss function is
difficult to optimize for nonlinear classifiers and the original optimization
objective could fail when pseudo-invariant features and geometric skews exist.
Inspired by IRM, in this paper we propose a novel formulation for domain
generalization, dubbed invariant information bottleneck (IIB). IIB aims at
minimizing invariant risks for nonlinear classifiers and simultaneously
mitigating the impact of pseudo-invariant features and geometric skews.
Specifically, we first present a novel formulation for invariant causal
prediction via mutual information. Then we adopt the variational formulation of
the mutual information to develop a tractable loss function for nonlinear
classifiers. To overcome the failure modes of IRM, we propose to minimize the
mutual information between the inputs and the corresponding representations.
IIB significantly outperforms IRM on synthetic datasets, where the
pseudo-invariant features and geometric skews occur, showing the effectiveness
of the proposed formulation in overcoming the failure modes of IRM. Furthermore,
experiments on DomainBed show that IIB outperforms $13$ baselines by $0.9\%$ on
average across $7$ real datasets.
|
This paper presents a vehicle speed planning system called the energy-optimal
deceleration planning system (EDPS), which aims to maximize energy recuperation
of regenerative braking of connected and autonomous electrified vehicles. A
recuperation energy-optimal speed profile is computed based on the impending
deceleration requirements for turning or stopping at an intersection. This is
computed to maximize the regenerative braking energy while satisfying the
physical limits of an electrified powertrain. In automated driving, the
powertrain of an electrified vehicle can be directly controlled by the vehicle
control unit such that it follows the computed optimal speed profile. To obtain
smooth optimal deceleration speed profiles, optimal deceleration commands are
determined by a parameterized polynomial-based deceleration model that is
obtained by regression analyses with real vehicle driving test data. The
parameters are dependent on preview information such as residual time and
distance as well as target speed. The key design parameter is deceleration
time, which determines the deceleration speed profile to satisfy the residual
time and distance constraints as well as the target speed requirement. The
bounds of deceleration commands corresponding to the physical limits of the
powertrain are deduced from realistic deceleration test driving. The state
constraints are dynamically updated by considering the anticipated road load
and the deceleration preference. For validation and comparisons of the EDPS
with different preview distances, driving simulation tests with a virtual road
environment and vehicle-to-infrastructure connectivity are presented. It is
shown that the longer the preview distance in the EDPS, the greater the
energy recuperation. In comparison with driver-in-the-loop simulation tests,
EDPS-based autonomous driving shows improvements in energy recuperation and
reduction in trip time.
|
We present augmented Lagrangian Schur complement preconditioners and robust
multigrid methods for incompressible Stokes problems with extreme viscosity
variations. Such Stokes systems arise, for instance, upon linearization of
nonlinear viscous flow problems, and they can have severely inhomogeneous and
anisotropic coefficients. Using an augmented Lagrangian formulation for the
incompressibility constraint makes the Schur complement easier to approximate,
but results in a nearly singular (1,1)-block in the Stokes system. We present
eigenvalue estimates for the quality of the Schur complement approximation. To
cope with the near-singularity of the (1,1)-block, we extend a multigrid scheme
with a discretization-dependent smoother and transfer operators from
triangular/tetrahedral to the quadrilateral/hexahedral finite element
discretizations $[\mathbb{Q}_k]^d\times \mathbb{P}_{k-1}^{\text{disc}}$, $k\geq
2$, $d=2,3$. Using numerical examples with scalar and with anisotropic
fourth-order tensor viscosity arising from linearization of a viscoplastic
constitutive relation, we confirm the robustness of the multigrid scheme and
the overall efficiency of the solver. We present scalability results using up
to 28,672 parallel tasks for problems with up to 1.6 billion unknowns and a
viscosity contrast up to ten orders of magnitude.
|
We investigate, through a data-driven contact tracing model, the transmission
of COVID-19 inside buses during distinct phases of the pandemic in a large
Brazilian city. From this microscopic approach, we recover the networks of
close contacts within consecutive time windows. A longitudinal comparison is
then performed by upscaling the traced contacts with the transmission computed
from a mean-field compartmental model for the entire city. Our results show
that the effective reproduction numbers inside the buses, $Re^{bus}$, and in
the city, $Re^{city}$, followed a compatible behavior during the first wave of
the local outbreak. Moreover, by distinguishing the close contacts of
healthcare workers in the buses, we discovered that their transmission,
$Re^{health}$, during the same period, was systematically higher than
$Re^{bus}$. This result reinforces the need for special public transportation
policies for highly exposed groups of people.
|
Recent advances in machine learning have become increasingly popular in the
applications of phase transitions and critical phenomena. By machine learning
approaches, we try to identify the physical characteristics in the
two-dimensional percolation model. To achieve this, we first adopt Monte Carlo
simulation to generate the dataset, and then we employ several approaches
to analyze the dataset. Four kinds of convolutional neural networks (CNNs), one
variational autoencoder (VAE), one convolutional VAE (cVAE), one principal
component analysis (PCA), and one $k$-means are used for identifying order
parameter, the permeability, and the critical transition point. The former
three kinds of CNNs can simulate the two order parameters and the permeability
with high accuracy, and good extrapolating performance. The former two kinds of
CNNs have high anti-noise ability. To validate the robustness of the former
three kinds of CNNs, we also use the VAE and the cVAE to generate new
percolating configurations to add perturbations into the raw configurations. We
find that there is no difference by using the raw or the perturbed
configurations to identify the physical characteristics, under the prerequisite
of corresponding labels. In the case of lacking labels, we use unsupervised
learning to detect the physical characteristics. PCA, a classical unsupervised
learning method, performs well in identifying the permeability but fails to
deduce the order parameter. Hence, we apply the fourth kind of CNN with
different preset thresholds, and identify a new order parameter and the
critical transition point. Our findings indicate that the effectiveness of
machine learning still needs to be evaluated in the applications of phase
transitions and critical phenomena.
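For context, labeled data of this kind can be produced with a few lines of Monte Carlo (a generic sketch, not the authors' pipeline): sample site-percolation configurations and label each one by whether an occupied cluster spans the lattice, i.e., by the permeability.

```python
# Site-percolation configurations labeled by the presence of a spanning cluster.
import numpy as np
from scipy import ndimage

def spanning(config):
    labels, _ = ndimage.label(config)            # 4-connected clusters of occupied sites
    top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
    return bool(top & bottom)                    # some cluster touches both edges

rng = np.random.default_rng(0)
L = 64
for p in (0.5, 0.5927, 0.65):                    # below, near, above the site threshold
    configs = rng.random((200, L, L)) < p
    frac = np.mean([spanning(c) for c in configs])
    print(f"p = {p:.4f}: spanning fraction ~ {frac:.2f}")
```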
|
Uniform large transition-edge sensor (TES) arrays are fundamental for the
next generation of X-ray space observatories. These arrays are required to
achieve an energy resolution $\Delta E$ < 3 eV full-width-half-maximum (FWHM)
in the soft X-ray energy range. We are currently developing X-ray
microcalorimeter arrays for use in future laboratory and space-based X-ray
astrophysics experiments and ground-based spectrometers. In this contribution
we report on the development and the characterization of a uniform 32$\times$32
pixel array with 140$\times$30 $\mu$m$^2$ Ti/Au TESs with Au X-ray absorber. We
report upon extensive measurements on 60 pixels in order to show the uniformity
of our large TES array. The averaged critical temperature is $T_\mathrm{c}$ =
89.5$\pm$0.5 mK and the variation across the array ($\sim$1 cm) is less than
1.5 mK. We found a large region of detector bias points, between 20\% and 40\%
of the normal-state resistance, where the energy resolution is consistently
below 3 eV. In particular, the results show a summed X-ray spectral resolution
$\Delta E_\mathrm{FWHM}$ = 2.50$\pm$0.04 eV at a photon energy of 5.9 keV,
measured in a single-pixel mode using a frequency domain multiplexing (FDM)
readout system developed at SRON/VTT at bias frequencies ranging from 1 to 5
MHz. Moreover we compare the logarithmic resistance sensitivity with respect to
temperature and current ($\alpha$ and $\beta$ respectively) and their
correlation with the detector's noise parameter $M$, showing a homogeneous
behaviour for all the measured pixels in the array.
|
We present a new algebraic method for solving the inverse problem of quantum
scattering theory based on the Marchenko theory. We apply a set of triangular
waves for the expansion of the Marchenko equation kernel in a separable form. The
separable form allows a reduction of the Marchenko equation to a system of
linear equations. For the zero orbital angular momentum, a linear expression of
the kernel expansion coefficients is obtained in terms of the Fourier series
coefficients of a function depending on the momentum $q$ and determined by the
scattering data in the finite range of $q$. It is shown that a Fourier series
on a finite momentum range ($0<q<\pi/h$) of a $q(1-S)$ function ($S$ is the
scattering matrix) defines the potential function of the corresponding radial
Schr\"odinger equation with $h$-step accuracy. A numerical algorithm is
developed for the reconstruction of the optical potential from scattering data.
The developed procedure is applied to analyze the $^{1}S_{0}NN$ data up to 3
GeV. It is shown that these data are described by an energy-independent
optical partial-wave potential.
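For orientation, the $\ell=0$ Marchenko equations on which the separable triangular-wave expansion is built read, in one common convention (with bound-state normalization constants $M_j$ and momenta $\kappa_j$),
\begin{align*}
  &K(r,r') + F(r,r') + \int_r^{\infty} K(r,s)\,F(s,r')\,ds = 0, \qquad r'\ge r,\\
  &F(r,r') = \frac{1}{2\pi}\int_{-\infty}^{\infty}\bigl[1-S(q)\bigr]\,e^{iq(r+r')}\,dq
            + \sum_j M_j^{2}\, e^{-\kappa_j (r+r')},\\
  &V(r) = -2\,\frac{d}{dr}K(r,r),
\end{align*}
so that the potential is recovered from the diagonal of the kernel $K$ once the input $F$ is assembled from the scattering data.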
|
We generalize the Jacobi no-core shell model (J-NCSM) to study
double-strangeness hypernuclei. All particle conversions in the strangeness
$S=-1,-2$ sectors are explicitly taken into account. In two-body space, such
transitions may lead to the coupling between states of identical particles and
of non-identical ones. Therefore, a careful consideration is required when
determining the combinatorial factors that connect the many-body potential
matrix elements and the free-space two-body potentials. Using second
quantization, we systematically derive the combinatorial factors in question
for $S=0,-1,-2$ sectors. As a first application, we use the J-NCSM to
investigate $\Lambda \Lambda$ s-shell hypernuclei based on hyperon-hyperon (YY)
potentials derived within chiral effective field theory at leading order (LO)
and up to next-to-leading order (NLO). We find that the LO potential overbinds
${}^{\,6}_{\Lambda\Lambda}\mathrm{He}$ while the prediction of the NLO
interaction is close to experiment. Both interactions also yield a bound state
for ${}^{\,5}_{\Lambda\Lambda}\mathrm{He}$. The ${}^{\,4}_{\Lambda\Lambda}\mathrm{H}$
system is predicted to be unbound.
|
In online advertising, recommender systems try to propose items from a list
of products to potential customers according to their interests. Such systems
have been increasingly deployed in E-commerce due to the rapid growth of
information technology and availability of large datasets. The ever-increasing
progress in the field of artificial intelligence has provided powerful tools
for dealing with such real-life problems. Deep reinforcement learning (RL) that
deploys deep neural networks as universal function approximators can be viewed
as a valid approach for design and implementation of recommender systems. This
paper provides a comparative study between value-based and policy-based deep RL
algorithms for designing recommender systems for online advertising. The
RecoGym environment is adopted for training these RL-based recommender systems,
where long short-term memory (LSTM) is deployed to build the value and policy
networks in these two approaches, respectively. LSTM is used to take account of
the key role that order plays in the sequence of item observations by users.
The designed recommender systems aim at maximising the click-through rate (CTR)
for the recommended items. Finally, guidelines are provided for choosing proper
RL algorithms for different scenarios that the recommender system is expected
to handle.
|
During the era of the High Luminosity LHC (HL-LHC) the devices in its
experiments will be subjected to increased radiation levels with high fluxes of
neutrons and charged hadrons, especially in the inner detectors. A systematic
program of radiation tests with neutrons and charged hadrons is being carried
out by the CMS and ATLAS Collaborations in view of the upgrade of the
experiments, in order to cope with the higher luminosity at HL-LHC and the
associated increase in the pile-up events and radiation fluxes. In this work,
results from a complementary radiation study with $^{60}$Co-$\gamma$ photons
are presented. The doses are equivalent to those that the outer layers of the
silicon tracker systems of the two big LHC experiments will be subjected to.
The devices in this study are float-zone oxygenated p-type MOS capacitors. The
results of CV measurements on these devices are presented as a function of the
total absorbed radiation dose following a specific annealing protocol. The
measurements are compared with the results of a TCAD simulation.
|
Fast Fourier Transform based phase screen simulations give accurate results
only when the screen size ($G$) is much larger than the outer scale parameter
($L_0$). Otherwise, they fall short in correctly predicting both the low and
high frequency behaviours of turbulence induced phase distortions. Sub-harmonic
compensation is a commonly used technique that aids in low-frequency correction
but does not solve the problem for all values of screen size to outer scale
parameter ratios $(G/L_0$). A subharmonics based approach will lead to unequal
sampling or weights calculation for subharmonics addition at the low-frequency
range and patch normalization factor. We have modified the subharmonics based
approach by introducing a Gaussian phase autocorrelation matrix that
compensates for these shortfalls. We show that the maximum relative error in
structure function with respect to theoretical value is as small as 0.5-3% for
a ($G/L_0$) ratio of 1/1000, even for screen sizes up to 100 m in diameter.
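For reference, the baseline FFT method being corrected reads roughly as in the sketch below (a generic von Karman screen generator; the normalization convention is an assumption, and the paper's Gaussian phase autocorrelation modification is not reproduced here). Its low-frequency deficit is exactly what subharmonic compensation targets.

```python
# Baseline FFT phase-screen generator with a von Karman spectrum.
# Normalization conventions differ between references; this is one common choice.
import numpy as np

def fft_phase_screen(N=512, dx=0.05, r0=0.2, L0=100.0, seed=0):
    """N x N screen, pixel size dx [m], Fried parameter r0 [m], outer scale L0 [m]."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    f2 = FX**2 + FY**2
    psd = 0.023 * r0 ** (-5.0 / 3.0) * (f2 + 1.0 / L0**2) ** (-11.0 / 6.0)  # phase PSD
    psd[0, 0] = 0.0                              # remove the piston term
    df = 1.0 / (N * dx)
    cn = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) * np.sqrt(psd) * df
    return np.fft.ifft2(cn).real * N**2

phi = fft_phase_screen()
print(phi.std())   # rms phase [rad]; low-frequency power is underestimated when G << L0
```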
|
We summarise the results of a study performed within the GENIE global
analysis framework, revisiting the GENIE bare-nucleon cross-section tuning and,
in particular, the tuning of a) the inclusive cross-section, b) the
cross-section of low-multiplicity inelastic channels (single-pion and
double-pion production), and c) the relative contributions of resonance and
non-resonance processes to these final states. The same analysis was performed
with several different comprehensive cross-section model sets available in
GENIE Generator v3. In this work we performed a careful investigation of the
observed tensions between exclusive and inclusive data, and installed analysis
improvements to handle systematics in historic data. All tuned model
configurations discussed in this paper are available through public releases of
the GENIE Generator. With this paper we aim to support the consumers of these
physics tunes by providing comprehensive summaries of our alternate model
constructions, of the relevant datasets and their systematics, and of our
tuning procedure and results.
|
We evaluate the usefulness of holographic stabilizer codes for practical
purposes by studying their allowed sets of fault-tolerantly implementable
gates. We treat them as subsystem codes and show that the set of transversally
implementable logical operations is contained in the Clifford group for
sufficiently localized logical subsystems. As well as proving this concretely
for several specific codes, we argue that this restriction naturally arises in
any stabilizer subsystem code that comes close to capturing certain properties
of holography. We extend these results to approximate encodings,
locality-preserving gates, certain codes whose logical algebras have
non-trivial centers, and discuss cases where restrictions can be made to other
levels of the Clifford hierarchy. A few auxiliary results may also be of
interest, including a general definition of entanglement wedge map for any
subsystem code, and a thorough classification of different correctability
properties for regions in a subsystem code.
|
The growing interest in axion-like particles (ALPs) stems from the fact that
they provide successful theoretical explanations of physics phenomena, from the
anomaly of the CP-symmetry conservation in strong interactions to the
observation of an unexpectedly large TeV photon flux from astrophysical
sources, at distances where the strong absorption by the intergalactic medium
should make the signal very dim. In this latter condition, which is the focus
of this review, a possible explanation is that TeV photons convert to ALPs in
the presence of strong and/or extended magnetic fields, such as those in the
core of galaxy clusters or around compact objects, or even those in the
intergalactic space. This mixing affects the observed ${\gamma}$-ray spectrum
of distant sources, either by signal recovery or the production of
irregularities in the spectrum, called "wiggles", according to the specific
microscopic realization of the ALP and the ambient magnetic field at the
source, and in the Milky Way, where ALPs may be converted back to ${\gamma}$
rays. ALPs are also proposed as candidate particles for the Dark Matter.
Imaging Atmospheric Cherenkov telescopes (IACTs) have the potential to detect
the imprint of ALPs in the TeV spectrum from several classes of sources. In
this contribution, we present the ALP case and review the past decade of
searches for ALPs with this class of instruments.
|
We consider the nonlocal double phase equation \begin{align*} \mathrm{P.V.}
&\int_{\mathbb{R}^n}|u(x)-u(y)|^{p-2}(u(x)-u(y))K_{sp}(x,y)\,dy\\
&+\mathrm{P.V.} \int_{\mathbb{R}^n}
a(x,y)|u(x)-u(y)|^{q-2}(u(x)-u(y))K_{tq}(x,y)\,dy=0, \end{align*} where
$1<p\leq q$ and the modulating coefficient $a(\cdot,\cdot)\geq0$. Under some
suitable hypotheses, we first use the De Giorgi-Nash-Moser methods to derive
the local H\"{o}lder continuity for bounded weak solutions, and then establish
the relationship between weak solutions and viscosity solutions to such
equations.
|
The goal of this paper is to provide a general purpose result for the
coupling of exploration processes of random graphs, both undirected and
directed, with their local weak limits when this limit is a marked
Galton-Watson process. This class includes in particular the configuration
model and the family of inhomogeneous random graphs with rank-1 kernel.
Vertices in the graph are allowed to have attributes on a general separable
metric space and can potentially influence the construction of the graph
itself. The coupling holds for any fixed depth of a breadth-first exploration
process.
|
Using the novel notion of parablender, P. Berger proved that the existence of
finitely many attractors is not Kolmogorov typical in parametric families of
diffeomorphisms. Here, motivated by the concept of Newhouse domains we define
Berger domains for families of diffeomorphisms. As an application, we show that
the coexistence of infinitely many attracting invariant smooth circles is
Kolmogorov typical in certain non-sectionally dissipative Berger domains of
parametric families in dimension three or greater.
|
The combination of Winograd's algorithm and systolic array architecture has
demonstrated the capability of improving DSP efficiency in accelerating
convolutional neural networks (CNNs) on FPGA platforms. However, handling
arbitrary convolution kernel sizes in FPGA-based Winograd processing elements
and supporting efficient data access remain underexplored. In this work, we are
the first to propose an optimized Winograd processing element (WinoPE), which
can naturally support multiple convolution kernel sizes with the same amount of
computing resources and maintains high runtime DSP efficiency. Using the
proposed WinoPE, we construct a highly efficient systolic array accelerator,
termed WinoCNN. We also propose a dedicated memory subsystem to optimize the
data access. Based on the accelerator architecture, we build accurate resource
and performance models to explore optimal accelerator configurations under
different resource constraints. We implement our proposed accelerator on
multiple FPGAs, which outperforms the state-of-the-art designs in terms of both
throughput and DSP efficiency. Our implementation achieves DSP efficiency up to
1.33 GOPS/DSP and throughput up to 3.1 TOPS with the Xilinx ZCU102 FPGA. These
are 29.1\% and 20.0\% better than the best solutions reported previously,
respectively.
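For readers less familiar with the underlying arithmetic, the following numpy sketch shows the textbook 1D Winograd minimal-filtering transform F(2,3) that such processing elements build on; this is a generic illustration (correlation-style convolution, as used in CNNs), not the WinoPE design itself.

```python
import numpy as np

# Standard Winograd F(2,3) transform matrices (Lavin & Gray convention).
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
AT = np.array([[1, 1, 1, 0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of a length-3 filter applied to a 4-sample tile,
    using 4 multiplications instead of 6: Y = A^T [(G g) * (B^T d)]."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])      # input tile of 4 samples
g = np.array([0.5, 1.0, -1.0])          # length-3 filter
direct = np.array([d[0:3] @ g, d[1:4] @ g])
assert np.allclose(winograd_f23(d, g), direct)
print(winograd_f23(d, g))
```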
|
We present semantic correctness proofs of automatic differentiation (AD). We
consider a forward-mode AD method on a higher order language with algebraic
data types, and we characterise it as the unique structure preserving macro
given a choice of derivatives for basic operations. We describe a rich
semantics for differentiable programming, based on diffeological spaces. We
show that it interprets our language, and we phrase what it means for the AD
method to be correct with respect to this semantics. We show that our
characterisation of AD gives rise to an elegant semantic proof of its
correctness based on a gluing construction on diffeological spaces. We explain
how this is, in essence, a logical relations argument. Throughout, we show how
the analysis extends to AD methods for computing higher order derivatives using
a Taylor approximation.
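As a concrete, if simplistic, reference point for the "structure-preserving macro" view of forward-mode AD, here is a standard dual-number implementation in Python; this is our own illustration of the generic technique, not the higher-order language or the diffeological-space semantics studied in the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    """A value paired with its derivative; forward mode propagates both."""
    val: float
    der: float = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def sin(x: Dual) -> Dual:
    # the chosen derivative for the basic operation sin
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def derivative(f, x0):
    """Seed the input with derivative 1 and read off the output derivative."""
    return f(Dual(x0, 1.0)).der

# d/dx [x * sin(x) + 2x] at x = 1  ->  sin(1) + cos(1) + 2
print(derivative(lambda x: x * sin(x) + 2 * x, 1.0))
```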
|
Dynamically probing systems of ultrastrongly coupled light and matter by
advanced coherent control has been recently proposed as a unique tool for
detecting peculiar quantum features of this regime. Coherence allows in
principle on-demand conversion of virtual photons dressing the entangled
eigenstates of the system to real ones, with unitary efficiency and remarkable
robustness. Here we study this effect in the presence of decoherence, showing
that it is possible to probe such peculiar features even in far-from-ideal
regimes.
|
For regular continued fractions, if a real number $x$ and its rational
approximation $p/q$ satisfy $|x-p/q|<1/q^2$, then, after deleting the last
partial quotient of $p/q$, the sequence of the remaining
partial quotients is a prefix of that of $x$. In this paper, we show that the
situation is completely different if we consider the Hurwitz continued fraction
expansions of a complex number and its rational approximations. More
specifically, we consider the set $E(\psi)$ of complex numbers which are well
approximated with the given bound $\psi$ and have quite different Hurwitz
continued fraction expansions from that of their rational approximations. The
Hausdorff and packing dimensions of this set are determined. It turns out that
its packing dimension is always full for any given approximation bound $\psi$
and its Hausdorff dimension is equal to that of the $\psi$-approximable set
$W(\psi)$ of complex numbers. As a consequence, we also obtain an analogue of
the classical Jarn\'ik Theorem in the real case.
|
High performance but unverified controllers, e.g., artificial
intelligence-based (a.k.a. AI-based) controllers, are widely employed in
cyber-physical systems (CPSs) to accomplish complex control missions. However,
guaranteeing the safety and reliability of CPSs with this kind of controllers
is currently very challenging, which is of vital importance in many real-life
safety-critical applications. To cope with this difficulty, we propose in this
work a Safe-visor architecture for sandboxing unverified controllers in CPSs
operating in noisy environments (a.k.a. stochastic CPSs). The proposed
architecture contains a history-based supervisor, which checks inputs from the
unverified controller and makes a compromise between functionality and safety
of the system, and a safety advisor that provides fallback when the unverified
controller endangers the safety of the system. Both the history-based
supervisor and the safety advisor are designed based on an approximate
probabilistic relation between the original system and its finite abstraction.
By employing this architecture, we provide formal probabilistic guarantees on
preserving the safety specifications expressed by accepting languages of
deterministic finite automata (DFA). Meanwhile, the unverified controllers can
still be employed in the control loop even though they are not reliable. We
demonstrate the effectiveness of our proposed results by applying them to two
(physical) case studies.
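Schematically, and only schematically, the sandboxing logic amounts to a per-step wrapper of the following form, where `supervisor_accepts` and `safety_advisor` stand in for the components that the paper synthesizes from the finite abstraction and the approximate probabilistic relation; this sketch conveys the control flow only, not the formal construction.

```python
def safe_visor_step(state, unverified_input, supervisor_accepts, safety_advisor):
    """Accept the unverified controller's input only if the history-based
    supervisor deems it compatible with the safety specification;
    otherwise fall back to the verified safety advisor."""
    if supervisor_accepts(state, unverified_input):
        return unverified_input
    return safety_advisor(state)
```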
|
Owing to the sub-standards being developed by the IEEE Time-Sensitive Networking
(TSN) Task Group, the traditional IEEE 802.1 Ethernet is enhanced to support
real-time dependable communications for future time- and safety-critical
applications. Several sub-standards have been recently proposed that introduce
various traffic shapers (e.g., Time-Aware Shaper (TAS), Asynchronous Traffic
Shaper (ATS), Credit-Based Shaper (CBS), Strict Priority (SP)) for flow control
mechanisms of queuing and scheduling, targeting different application
requirements. These shapers can be used in isolation or in combination and
there is limited work that analyzes, evaluates and compares their performance,
which makes it challenging for end-users to choose the right combination for
their applications. This paper aims at (i) quantitatively comparing various
traffic shapers and their combinations, (ii) summarizing, classifying and
extending the architectures of individual and combined traffic shapers and
their Network calculus (NC)-based performance analysis methods and (iii)
filling the gap in the timing analysis research on handling two novel hybrid
architectures of combined traffic shapers, i.e., TAS+ATS+SP and TAS+ATS+CBS. A
large number of experiments, using both synthetic and realistic test cases, are
carried out for quantitative performance comparisons of various individual and
combined traffic shapers, from the perspective of upper bounds of delay,
backlog and jitter. To the best of our knowledge, we are the first to
quantitatively compare the performance of the main traffic shapers in TSN. The
paper aims at supporting the researchers and practitioners in the selection of
suitable TSN sub-protocols for their use cases.
|
Grid-synchronization stability (GSS) is an emerging stability issue of
grid-tied voltage source converters (VSCs), which can be provoked by severe
grid voltage sags. Although a qualitative understanding of the mechanism behind
the loss of synchronization has been acquired in recent studies, an analytical
method for quantitative assessment of GSS of grid-tied VSCs is still missing.
To bridge this gap, a dedicated Lyapunov function is analytically proposed, and
its corresponding stability criterion for GSS analysis of grid-tied VSCs is
rigorously constructed. Both theoretical analysis and simulation results
demonstrate that the proposed method can provide a credible GSS evaluation
compared to the previous EAC/EF-based method. Therefore, it can be applied for
fast GSS evaluation of the grid-tied VSCs as is exemplified in this paper, as
well as an analytical tool for some GSS-related issues, e.g., GSS-oriented
parameter design and stabilization control.
|
The aim of this thesis project is to investigate the bit commitment protocol
in the framework of operational probabilistic theories. In particular, a careful
study is carried out on the feasibility of bit commitment in the non-local boxes
theory. New aspects of the theory are also presented.
|
Does consciousness collapse the quantum wave function? This idea was taken
seriously by John von Neumann and Eugene Wigner but is now widely dismissed. We
develop the idea by combining a mathematical theory of consciousness
(integrated information theory) with an account of quantum collapse dynamics
(continuous spontaneous localization). Simple versions of the theory are
falsified by the quantum Zeno effect, but more complex versions remain
compatible with empirical evidence. In principle, versions of the theory can be
tested by experiments with quantum computers. The upshot is not that
consciousness-collapse interpretations are clearly correct, but that there is a
research program here worth exploring.
|
In the budget-feasible allocation problem, a set of items with varied sizes
and values are to be allocated to a group of agents. Each agent has a budget
constraint on the total size of items she can receive. The goal is to compute a
feasible allocation that is envy-free (EF), in which the agents do not envy
each other for the items they receive, nor do they envy a charity, who is
endowed with all the unallocated items. Since EF allocations barely exist even
without budget constraints, we are interested in the relaxed notion of
envy-freeness up to one item (EF1). The computation of both exact and
approximate EF1 allocations remains largely open, despite a recent effort by Wu
et al. (IJCAI 2021) in showing that any budget-feasible allocation that
maximizes the Nash Social Welfare (NSW) is 1/4-approximate EF1. In this paper,
we move one step forward by showing that for agents with identical additive
valuations, a 1/2-approximate EF1 allocation can be computed in polynomial
time. For the uniform-budget and two-agent cases, we propose efficient
algorithms for computing an exact EF1 allocation. We also consider the large
budget setting, i.e., when the item sizes are infinitesimal compared with the
agents' budgets, and show that both the NSW maximizing allocation and the
allocation our polynomial-time algorithm computes have an approximation close
to 1 regarding EF1.
|
We present the first study of baryon-baryon interactions in the continuum
limit of lattice QCD, finding unexpectedly large lattice artifacts.
Specifically, we determine the binding energy of the $H$ dibaryon at a single
quark-mass point. The calculation is performed at six values of the lattice
spacing $a$, using O($a$)-improved Wilson fermions at the SU(3)-symmetric point
with $m_\pi=m_K\approx 420$ MeV. Energy levels are extracted by applying a
variational method to correlation matrices of bilocal two-baryon interpolating
operators computed using the distillation technique. Our analysis employs
L\"uscher's finite-volume quantization condition to determine the scattering
phase shifts from the spectrum and vice versa, both above and below the
two-baryon threshold. We perform global fits to the lattice spectra using
parametrizations of the phase shift, supplemented by terms describing
discretization effects, then extrapolate the lattice spacing to zero. The phase
shift and the binding energy determined from it are found to be strongly
affected by lattice artifacts. Our estimate of the binding energy in the
continuum limit of three-flavor QCD is $B_H^{\text{SU(3)}_{\rm
f}}=4.56\pm1.13_{\rm stat}\pm0.63_{\rm syst}$ MeV.
|
Building on ideas from probabilistic programming, we introduce the concept of
an expectation programming framework (EPF) that automates the calculation of
expectations. Analogous to a probabilistic program, an expectation program is
comprised of a mix of probabilistic constructs and deterministic calculations
that define a conditional distribution over its variables. However, the focus
of the inference engine in an EPF is to directly estimate the resulting
expectation of the program return values, rather than approximate the
conditional distribution itself. This distinction allows us to achieve
substantial performance improvements over the standard probabilistic
programming pipeline by tailoring the inference to the precise expectation we
care about. We realize a particular instantiation of our EPF concept by
extending the probabilistic programming language Turing to allow so-called
target-aware inference to be run automatically, and show that this leads to
significant empirical gains compared to conventional posterior-based inference.
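To convey the flavor of target-aware estimation with a generic example (a plain self-normalized importance sampler in Python; the paper's realization extends Turing and is considerably more sophisticated), note that the estimator below could be sharpened by letting the proposal depend on the return-value function f itself rather than on the posterior alone.

```python
import numpy as np

def snis_expectation(f, log_joint, sample_proposal, logpdf_proposal, n=10_000, seed=0):
    """Self-normalized importance-sampling estimate of E_{p(x|y)}[f(x)],
    where log_joint(x) = log p(x, y) for the observed y.
    A target-aware engine would choose the proposal with f in mind,
    e.g. placing mass where |f(x)| p(x, y) is large."""
    rng = np.random.default_rng(seed)
    xs = sample_proposal(rng, n)
    logw = log_joint(xs) - logpdf_proposal(xs)
    w = np.exp(logw - logw.max())           # stabilized, unnormalized weights
    return float(np.sum(w * f(xs)) / np.sum(w))
```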
|
We have built SinSpell, a comprehensive spelling checker for the Sinhala
language which is spoken by over 16 million people, mainly in Sri Lanka.
However, until recently, Sinhala had no spelling checker with acceptable
coverage. SinSpell is still the only open-source Sinhala spelling checker.
SinSpell identifies possible spelling errors and suggests corrections. It also
contains a module which auto-corrects evident errors. To maintain accuracy,
SinSpell was designed as a rule-based system based on Hunspell. A set of words
was compiled from several sources and verified. These were divided into
morphological classes, and the valid roots, suffixes and prefixes for each
class were identified, together with lists of irregular words and exceptions.
The errors in a corpus of Sinhala documents were analysed and commonly
misspelled words and types of common errors were identified. We found that the
most common errors were in vowel length and similar sounding letters. Errors
due to incorrect typing and encoding were also found. This analysis was used to
develop the suggestion generator and auto-corrector.
|
Most successful computer vision models transform low-level features, such as
Gabor filter responses, into richer representations of intermediate or
mid-level complexity for downstream visual tasks. These mid-level
representations have not been explored for event cameras, although it is
especially relevant to the visually sparse and often disjoint spatial
information in the event stream. By making use of locally consistent
intermediate representations, termed superevents, numerous visual tasks
ranging from semantic segmentation and visual tracking to depth estimation stand to
benefit. In essence, superevents are perceptually consistent local units that
delineate parts of an object in a scene. Inspired by recent deep learning
architectures, we present a novel method that employs lifetime augmentation for
obtaining an event stream representation that is fed to a fully convolutional
network to extract superevents. Our qualitative and quantitative experimental
results on several sequences of a benchmark dataset highlight the significant
potential for event-based downstream applications.
|
In this paper, we introduce a general family of $q$-hypergeometric
polynomials and investigate several $q$-series identities such as an extended
generating function and a Srivastava-Agarwal type bilinear generating function
for this family of $q$-hypergeometric polynomials. We give a transformational
identity involving generating functions for the generalized $q$-hypergeometric
polynomials which we have introduced here. We also point out relevant
connections of the various $q$-results, which we investigate here, with those
in several related earlier works on this subject. We conclude this paper by
remarking that it will be a rather trivial and inconsequential exercise to give
the so-called $(p,q)$-variations of the $q$-results, which we have investigated
here, because the additional parameter $p$ is obviously redundant.
|
Erd\H{o}s and P\'{o}sa proved in 1965 that there is a duality between the
maximum size of a packing of cycles and the minimum size of a vertex set
hitting all cycles. Such a duality does not hold if we restrict to odd cycles.
However, in 1999, Reed proved an analogue for odd cycles by relaxing packing to
half-integral packing. We prove a far-reaching generalisation of the theorem of
Reed; if the edges of a graph are labelled by finitely many abelian groups,
then there is a duality between the maximum size of a half-integral packing of
cycles whose values avoid a fixed finite set for each abelian group and the
minimum size of a vertex set hitting all such cycles.
A multitude of natural properties of cycles can be encoded in this setting,
for example cycles of length at least $\ell$, cycles of length $p$ modulo $q$,
cycles intersecting a prescribed set of vertices at least $t$ times, and cycles
contained in given $\mathbb{Z}_2$-homology classes in a graph embedded on a
fixed surface. Our main result allows us to prove a duality theorem for cycles
satisfying a fixed set of finitely many such properties.
|
On a bounded strictly pseudoconvex domain in $\mathbb{C}^n$, $n >1$, the
smoothness of the Cheng-Yau solution to Fefferman's complex Monge-Amp\`ere
equation up to the boundary is obstructed by a local curvature invariant of the
boundary, the CR obstruction density $\mathcal{O}$. While local examples of
obstruction flat CR manifolds are plentiful, the only known compact examples
are the spherical CR manifolds. We consider the obstruction flatness problem
for small deformations of the standard CR 3-sphere. That rigidity holds for the
CR sphere was previously known (in all dimensions) for the case of embeddable
CR structures, where it also holds at the infinitesimal level. In the
3-dimensional case, however, a CR structure need not be embeddable. While in
the nonembeddable case we may no longer interpret the obstruction density
$\mathcal{O}$ in terms of the boundary regularity of Fefferman's equation (or
the logarithmic singularity of the Bergman kernel) the equation
$\mathcal{O}\equiv 0$ is still of great interest, e.g., since it corresponds to
the Bach flat equation of conformal gravity for the Fefferman space of the CR
structure (a conformal Lorentzian 4-manifold). Unlike in the embeddable case,
it turns out that in the nonembeddable case there is an infinite dimensional
space of solutions to the linearized obstruction flatness equation on the
standard CR 3-sphere and this space defines a natural complement to the tangent
space of the embeddable deformations. In spite of this, we show that the CR
3-sphere does not admit nontrivial obstruction flat deformations, embeddable or
nonembeddable.
|
The off-shell anomalous chromomagnetic dipole moment of the standard model
quarks ($u$, $d$, $s$, $c$ and $b$), at the $Z$ gauge boson mass scale, is
computed by using the $\overline{\textrm{MS}}$ scheme. The numerical results
disagree with all the previous predictions reported in the literature and show
a discrepancy of up to two orders of magnitude in certain situations.
|
The asymptotic expansions of the Wright functions of the second kind,
introduced by Mainardi [see Appendix F of his book {\it Fractional Calculus and
Waves in Linear Viscoelasticity}, (2010)], $$ F_\sigma(x)=\sum_{n=0}^\infty
\frac{(-x)^n}{n!\,\Gamma(-n\sigma)}~,\quad M_\sigma(x)=\sum_{n=0}^\infty
\frac{(-x)^n}{n!\,\Gamma(-n\sigma+1-\sigma)}\quad(0<\sigma<1)$$ for $x\to\pm\infty$
are presented. The situation corresponding to the limit $\sigma\to1^-$ is
considered, where $M_\sigma(x)$ approaches the Dirac delta function
$\delta(x-1)$. Numerical results are given to demonstrate the accuracy of the
expansions derived in the paper, together with graphical illustrations that
reveal the transition to a Dirac delta function as $\sigma\to 1^-$.
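For numerical orientation (an illustrative sketch, not the authors' code), the defining series can be evaluated directly with SciPy for moderate $|x|$; it is precisely for large arguments, where such truncated sums degrade, that the asymptotic expansions of the paper are needed.

```python
import numpy as np
from scipy.special import factorial, rgamma

def mainardi_M(x, sigma, nmax=150):
    """Truncated Taylor series for M_sigma(x); rgamma is the entire reciprocal
    gamma function, so poles of Gamma at non-positive arguments give zero terms."""
    n = np.arange(nmax)
    return np.sum((-x) ** n / factorial(n) * rgamma(-n * sigma + 1.0 - sigma))

def wright_F(x, sigma, nmax=150):
    """Truncated Taylor series for F_sigma(x)."""
    n = np.arange(nmax)
    return np.sum((-x) ** n / factorial(n) * rgamma(-n * sigma))

# Check against the known special case M_{1/2}(x) = exp(-x^2/4)/sqrt(pi).
print(mainardi_M(1.0, 0.5), np.exp(-0.25) / np.sqrt(np.pi))
```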
|
Topics concerning metric dimension related invariants in graphs are nowadays
intensively studied. This compendium of combinatorial and computational results
on this topic is an attempt to survey those contributions that are of the
highest interest for the research community dealing with several variants of
metric dimension in graphs.
|
We describe in dialogue form a possible way of discovering and investigating
10-adic numbers starting from the naive question about a `largest natural
number'. Among the topics we pursue are possibilities of extensions to
transfinite 10-adic numbers, 10-adic representations of rational numbers, zero
divisors, square roots and 10-adic roots of higher degree of natural numbers,
and applications of 10-adic number representation in computer arithmetic. The
participants of the dialogue are idealized embodiments of different
philosophical attitudes towards mathematics. The article aims at illustrating
how these attitudes interact, in both jarring and stimulating ways, and how
they impact mathematical development.
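As a small, self-contained illustration of the zero-divisor and computer-arithmetic themes (our own example, not a quotation from the dialogue), one can compute a 10-adic idempotent digit by digit and watch it produce zero divisors.

```python
def ten_adic_idempotent(k):
    """Last k digits of the 10-adic idempotent e with e = 1 mod 2^k and
    e = 0 mod 5^k; repeated squaring of 5 modulo 10^k converges to it,
    and its tail ends in ...890625."""
    mod = 10 ** k
    e = 5
    for _ in range(k):            # more iterations than needed; the value stabilizes
        e = (e * e) % mod
    return e

k = 12
e = ten_adic_idempotent(k)
f = (1 - e) % 10 ** k             # complementary idempotent, tail ...109376
assert (e * e - e) % 10 ** k == 0 and (f * f - f) % 10 ** k == 0
assert (e * f) % 10 ** k == 0     # both nonzero, yet their product is 0: zero divisors
print(e, f)
```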
|
Conversion of temporal to spatial correlations in the cortex is one of the
most intriguing functions in the brain. The learning at synapses triggering the
correlation conversion can take place in a wide integration window, whose
influence on the correlation conversion remains elusive. Here, we propose a
generalized associative memory model with arbitrary Hebbian length. The model
can be analytically solved, and predicts that a small Hebbian length can
already significantly enhance the correlation conversion, i.e., the
stimulus-induced attractor can be highly correlated with a significant number
of patterns in the stored sequence, thereby facilitating state transitions in
the neural representation space. Moreover, an anti-Hebbian component is able to
reshape the energy landscape of memories, akin to the function of sleep. Our
work thus establishes the fundamental connection between associative memory,
Hebbian length, and correlation conversion in the brain.
|
The increasing usage of machine learning (ML) coupled with the software
architectural challenges of the modern era has resulted in two broad research
areas: i) software architecture for ML-based systems, which focuses on
developing architectural techniques for better developing ML-based software
systems, and ii) ML for software architectures, which focuses on developing ML
techniques to better architect traditional software systems. In this work, we
focus on the former side of the spectrum with a goal to highlight the different
architecting practices that exist in the current scenario for architecting
ML-based software systems. We identify four key areas of software architecture
that need the attention of both the ML and software practitioners to better
define a standard set of practices for architecting ML-based software systems.
We base these areas in light of our experience in architecting an ML-based
software system for solving queuing challenges in one of the largest museums in
Italy.
|
In a first-of-its-kind study, this paper proposes the formulation of
constructing prediction intervals (PIs) in a time series as a bi-objective
optimization problem and solves it with the help of Nondominated Sorting
Genetic Algorithm (NSGA-II). We also propose modeling the chaos present in the
time series as a preprocessing stage in order to capture the deterministic
uncertainty present in the time series. Even though the proposed models are general in
purpose, they are used here for quantifying the uncertainty in macroeconomic
time series forecasting. Ideal PIs should be as narrow as possible while
capturing most of the data points. Based on these two objectives, we formulated
a bi-objective optimization problem to generate PIs in 2-stages, wherein
reconstructing the phase space using Chaos theory (stage-1) is followed by
generating optimal point prediction using NSGA-II and these point predictions
are in turn used to obtain PIs (stage-2). We also proposed a 3-stage hybrid,
wherein the 3rd stage invokes NSGA-II too in order to solve the problem of
constructing PIs from the point prediction obtained in 2nd stage. The proposed
models when applied to the macroeconomic time series, yielded better results in
terms of both prediction interval coverage probability (PICP) and prediction
interval average width (PIAW) compared to the state-of-the-art Lower Upper
Bound Estimation Method (LUBE) with Gradient Descent (GD). The 3-stage model
yielded better PICP compared to the 2-stage model but showed similar
performance in PIAW, with the added computational cost of running NSGA-II a second time.
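For reference, the two competing objectives have simple closed forms; the sketch below follows the standard definitions of PICP and PIAW (our generic implementation, not the authors' code).

```python
import numpy as np

def picp(y_true, lower, upper):
    """Prediction Interval Coverage Probability: fraction of targets inside the PI."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    return float(np.mean((y_true >= lower) & (y_true <= upper)))

def piaw(lower, upper):
    """Prediction Interval Average Width: mean width of the PIs."""
    return float(np.mean(np.asarray(upper) - np.asarray(lower)))

# Ideal PIs maximize picp(...) while minimizing piaw(...), the two NSGA-II objectives.
```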
|
Cobaltates have rich spin-states and diverse properties. Using spin-state
pictures and first-principles calculations, here we study the electronic
structure and magnetism of the mixed-valent double perovskite YBaCo2O6. We find
that YBaCo2O6 is in the formal intermediate-spin (IS) Co3+/low-spin (LS) Co4+
ground state. The hopping of the e_g electron from IS-Co3+ to LS-Co4+ via double
exchange gives rise to a ferromagnetic half-metallicity, which well accounts
for the recent experiments. The reduction of both magnetization and Curie
temperature by oxygen vacancies is discussed, aided with Monte Carlo
simulations. We also explore several other possible spin-states and their
interesting electronic/magnetic properties. Moreover, we predict that a volume
expansion more than 3% would tune YBaCo2O6 into the high-spin (HS) Co3+/LS Co4+
ferromagnetic state and simultaneously drive a metal-insulator transition.
Therefore, spin-states are a useful parameter for tuning the material
properties of cobaltates.
|
This paper presents an equilibrium model of dynamic trading, learning, and
pricing by strategic investors with trading targets and price impact. Since
trading targets are private, rebalancers and liquidity providers filter the
child order flow over time to estimate the latent underlying parent trading
demand imbalance and its expected impact on subsequent price pressure dynamics.
We prove existence of the equilibrium and solve for equilibrium trading
strategies and prices in terms of the solution to a system of coupled ODEs. We
show that trading strategies are combinations of trading towards investor
targets, liquidity provision for other investors' demands, and front-running
based on learning about latent underlying trading demand imbalances and future
price pressure.
|
Let $R$ be a B\'ezout domain, and let $A,B,C\in R^{n\times n}$ with
$ABA=ACA$. If $AB$ and $CA$ are group invertible, we prove that $AB$ is similar
to $CA$. Moreover, $(AB)^{\#}$ is similar to $(CA)^{\#}$. This
generalizes the main result of Cao and Li (Group inverses for matrices over a
B\'ezout domain, {\it Electronic J. Linear Algebra}, {\bf 18}(2009), 600--612).
|
We investigate periods, quasi-periods, logarithms, and quasi-logarithms of
Anderson $t$-modules, as well as their hyperderivatives. We develop a
comprehensive account of how these values can be obtained through rigid
analytic trivializations of abelian and $\mathbf{A}$-finite $t$-modules. To do
this we build on the exponentiation theorem of Anderson and investigate
quasi-periodic extensions of $t$-modules through Anderson generating functions.
By applying these results to prolongation $t$-modules of Maurischat, we
integrate hyperderivatives of these values together with previous work of
Brownawell and Denis in this framework.
|
Calibration is a vital aspect of the performance of risk prediction models,
but research in the context of ordinal outcomes is scarce. This study compared
calibration measures for risk models predicting a discrete ordinal outcome, and
investigated the impact of the proportional odds assumption on calibration and
overfitting. We studied the multinomial, cumulative, adjacent category,
continuation ratio, and stereotype logit/logistic models. To assess
calibration, we investigated calibration intercepts and slopes, calibration
plots, and the estimated calibration index. Using large sample simulations, we
studied the performance of models for risk estimation under various conditions,
assuming that the true model has either a multinomial logistic form or a
cumulative logit proportional odds form. Small sample simulations were used to
compare the tendency for overfitting between models. As a case study, we
developed models to diagnose the degree of coronary artery disease (five
categories) in symptomatic patients. When the true model was multinomial
logistic, proportional odds models often yielded poor risk estimates, with
calibration slopes deviating considerably from unity even on large model
development datasets. The stereotype logistic model improved the calibration
slope, but still provided biased risk estimates for individual patients. When
the true model had a cumulative logit proportional odds form, multinomial
logistic regression provided biased risk estimates, although these biases were
modest. Non-proportional odds models require more parameters to be estimated
from the data, and hence suffer more from overfitting. Despite larger sample
size requirements, we generally recommend multinomial logistic regression for
risk prediction modeling of discrete ordinal outcomes.
|
The discovery of the Meissner effect was a turning point in the history of
superconductivity. It demonstrated that superconductivity is an equilibrium
state of matter, thus allowing to use thermodynamics for its study. This
provided a justification for the two-fluid model of Gorter and Casimir, a
seminal thermodynamic theory founded on a postulate of zero entropy of the
superconducting (S) component of conduction electrons. It also demonstrated
that, apart from zero resistivity, the S phase is also characterized by zero
magnetic induction, used as a basic postulate in the theory of the Londons,
which underlies the understanding of the electromagnetic properties of superconductors.
Here the experimental and theoretical aspects of the Meissner effect are
reviewed. The reader will see that, in spite of its almost nine decades of age, the
London theory still contains questions, the answers to which can lead to a
revision of the standard picture of the Meissner state (MS) and other
superconducting states. An attempt is made to take a fresh look at
electrodynamics of the MS and try resolve the issues associated with this most
important state of all superconductors. It is shown that the concept of Cooper
pairing along with the Bohr-Sommerfeld quantization condition allows one to
construct a semi-classical theoretical model consistently addressing properties
of the MS and beyond, including non-equilibrium properties of superconductors
caused by the total current. As follows from the model, the three "big zeros"
of superconductivity (zero resistance, induction and entropy) have equal weight
and grow from a single root: quantization of the angular momentum of paired
electrons. The model predicts some yet unknown effects. If confirmed, they can
help in studies of microscopic properties of all superconductors. Preliminary
experimental results suggesting the need to revise the standard picture of the
MS are presented.
|
The geometry of the Ellis-Bronnikov wormhole is implemented in the Rastall
and k-essence theories of gravity with a self-interacting scalar field. The
form of the scalar field potential is determined in both cases. A stability
analysis with respect to spherically symmetric time-dependent perturbations is
carried out, and it shows that in k-essence theory the wormhole is unstable,
like the original version of this geometry supported by a massless phantom
scalar field in general relativity. In Rastall's theory, it turns out that a
perturbative approach reveals the same inconsistency that was found previously
for black hole solutions: time-dependent perturbations of the static
configuration prove to be excluded by the equations of motion, and the wormhole
is, in this sense, stable under spherical perturbations.
|
Quadratic Unconstrained Binary Optimization (QUBO) is a broad class of
optimization problems with many practical applications. To solve its hard
instances in an exact way, known classical algorithms require exponential time
and several approximate methods have been devised to reduce such cost. With the
growing maturity of quantum computing, quantum algorithms have been proposed to
speed up the solution by using either quantum annealers or universal quantum
computers. Here we apply the divide-and-conquer approach to reduce the original
problem to a collection of smaller problems whose solutions can be assembled to
form a single Polynomial Binary Unconstrained Optimization instance with fewer
variables. This technique can be applied to any QUBO instance and leads to
either an all-classical or a hybrid quantum-classical approach. When quantum
heuristics like the Quantum Approximate Optimization Algorithm (QAOA) are used,
our proposal leads to a double advantage: a substantial reduction of quantum
resources, specifically an average of ~42% fewer qubits to solve MaxCut on
random 3-regular graphs, together with an improvement in the quality of the
approximate solutions reached.
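To make the MaxCut benchmark concrete, here is the textbook reduction of MaxCut to QUBO used in such studies (a generic sketch, not the paper's divide-and-conquer pipeline): for an assignment $x\in\{0,1\}^n$, the cut size equals $-x^TQx$ with $Q_{ii}=-\deg(i)$ and $Q_{ij}=Q_{ji}=1$ for every edge, so maximizing the cut means minimizing $x^TQx$.

```python
import itertools
import numpy as np

def maxcut_qubo(n, edges):
    """QUBO matrix Q such that the cut of x in {0,1}^n equals -x^T Q x."""
    Q = np.zeros((n, n))
    for i, j in edges:
        Q[i, i] -= 1.0            # each edge contributes -x_i - x_j ...
        Q[j, j] -= 1.0
        Q[i, j] += 1.0            # ... and +2 x_i x_j, split symmetrically
        Q[j, i] += 1.0
    return Q

def brute_force_min(Q):
    """Exhaustive minimization of x^T Q x; exponential, for tiny instances only."""
    n = Q.shape[0]
    best_val, best_x = np.inf, None
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits, dtype=float)
        val = x @ Q @ x
        if val < best_val:
            best_val, best_x = val, bits
    return best_val, best_x

# A 5-cycle has maximum cut 4.
val, x = brute_force_min(maxcut_qubo(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
print(-val, x)   # 4.0 and one optimal bipartition
```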
|
Numerous contemporary investigations in condensed matter physics are devoted
to high temperature (high-$T_c$ ) cuprate superconductors. Despite its unique
effulgence among research subjects, the enigma of the high-$T_c$ mechanism
still persists. One way to advance its understanding is to discover and study
new analogous systems. Here we begin a novel exploration of the natural mineral
murunskite, K$_2$FeCu$_3$S$_4$, as an interpolation compound between cuprates
and ferropnictides, the only known high-$T_c$ superconductors at ambient
pressure. Because in-depth studies can be carried out only on single crystals,
we have mastered the synthesis and growth of high quality specimens. Similar to
the cuprate parent compounds, these show semiconducting behavior in resistivity
and optical transmittance, and an antiferromagnetic ordering at 100 K.
Spectroscopy (XPS) and calculations (DFT) concur that the sulfur 3$p$ orbitals
are partially open, making them accessible for charge manipulation, which is a
prerequisite for superconductivity in analogous layered structures. DFT
indicates that the valence band is more cuprate-like, while the conduction band
is more pnictide-like. With appropriate doping strategies, this parent compound
promises exciting future developments.
|
A teleparallel geometry is an n-dimensional manifold equipped with a frame
basis and an independent spin connection. For such a geometry, the curvature
tensor vanishes and the torsion tensor is non-zero. A straightforward approach
to characterizing teleparallel geometries is to compute scalar polynomial
invariants constructed from the torsion tensor and its covariant derivatives.
An open question has been whether the set of all scalar polynomial torsion
invariants, $\mathcal{I}_T$, uniquely characterizes a given teleparallel
geometry. In this paper we show that the answer is no and construct the most
general class of teleparallel geometries in four dimensions which cannot be
characterized by $\mathcal{I}_T$. As a corollary we determine all teleparallel
geometries which have vanishing scalar polynomial torsion invariants.
|
In a brief article published in 1931 and expanded in 1935, the Indian
astrophysicist Subrahmanyan Chandrasekhar shared an important astronomical
discovery where he introduced what is now known as Chandrasekhar limit. This
limit establishes the maximum mass that a white dwarf can reach, which is the
stellar remnant that is generated when a low mass star has used up its nuclear
fuel. The present work has a double purpose. The first is to present a
heuristic derivation of the Chandrasekhar limit. The second is to clarify the
genesis of the discovery of Chandrasekhar, as well as the conceptual aspects of
the subject. The exposition only uses high-school algebra, as well as some
general notions of classical physics and quantum theory.
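For orientation, one common back-of-the-envelope argument of this kind, sketched here as our own illustration rather than a summary of the article's derivation, balances the relativistic Fermi energy per electron, $E_F \sim \hbar c\, n^{1/3} \sim \hbar c\, N^{1/3}/R$, against the gravitational energy per nucleon, $E_G \sim G N m_p^2/R$. Since both scale as $1/R$, the radius cancels, leaving a maximum particle number $N_{\max} \sim (\hbar c/G m_p^2)^{3/2} \approx 2\times 10^{57}$ and a limiting mass $M \sim N_{\max} m_p$ of the order of one solar mass, consistent with the precise value of about $1.4\,M_\odot$.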
|
In the big data and AI era, context is widely exploited as extra information
which makes it easier to learn a more complex pattern in machine learning
systems. However, most of the existing related studies seldom take context into
account. The difficulty lies in the unknown generalization ability of both
context and its modeling techniques across different scenarios. To fill the
above gaps, we conduct a large-scale analytical and empirical study on the
spatiotemporal crowd flow prediction (STCFP) problem, which is a widely-studied and
hot research topic. We mainly make three efforts: (i) we develop a new taxonomy
of both context features and context modeling techniques based on extensive
investigations in prevailing STCFP research; (ii) we conduct extensive
experiments on seven datasets with hundreds of millions of records to
quantitatively evaluate the generalization ability of both distinct context
features and context modeling techniques; (iii) we summarize some guidelines
for researchers to conveniently utilize context in diverse applications.
|
Hindsight rationality is an approach to playing general-sum games that
prescribes no-regret learning dynamics for individual agents with respect to a
set of deviations, and further describes jointly rational behavior among
multiple agents with mediated equilibria. To develop hindsight rational
learning in sequential decision-making settings, we formalize behavioral
deviations as a general class of deviations that respect the structure of
extensive-form games. Integrating the idea of time selection into
counterfactual regret minimization (CFR), we introduce the extensive-form
regret minimization (EFR) algorithm that achieves hindsight rationality for any
given set of behavioral deviations with computation that scales closely with
the complexity of the set. We identify behavioral deviation subsets, the
partial sequence deviation types, that subsume previously studied types and
lead to efficient EFR instances in games with moderate lengths. In addition, we
present a thorough empirical analysis of EFR instantiated with different
deviation types in benchmark games, where we find that stronger types typically
induce better performance.
|
We study vacuum birefringence and x-ray photon scattering in the head-on
collision of x-ray free electron and high-intensity laser pulses. Resorting to
analytical approximations for the numbers of attainable signal photons, we
analyze the behavior of the phenomenon under the variation of various
experimental key-parameters and provide new analytical scalings. Our optimized
approximations allow for quantitatively accurate results on the one-percent
level. We in particular demonstrate that an appropriate choice of the x-ray
focus and pulse duration can significantly improve the signal for given laser
parameters, using the experimental parameters to be available at the Helmholtz
International Beamline for Extreme Fields at the European XFEL as example. Our
results are essential for the identification of the optimal choice of
parameters in a discovery experiment of vacuum birefringence at the
high-intensity frontier.
|
In this paper, the average achievable rate and error probability of a
reconfigurable intelligent surface (RIS) aided system are investigated for the
finite blocklength (FBL) regime. The performance loss due to the presence of
phase errors arising from limited quantization levels as well as hardware
impairments at the RIS elements is also discussed. First, the composite channel
containing the direct path plus the product of reflected channels through the
RIS is characterized. Then, the distribution of the received signal-to-noise
ratio (SNR) is matched to a Gamma random variable whose parameters depend on
the total number of RIS elements, phase errors and the channels' path loss.
Next, by considering the FBL regime, the achievable rate expression and error
probability are identified and the corresponding average rate and average error
probability are elaborated based on the proposed SNR distribution. Furthermore,
the impact of the presence of phase error due to either limited quantization
levels or hardware impairments on the average rate and error probability is
discussed. The numerical results show that Monte Carlo simulations agree with
the matched Gamma distribution of the received SNR for a sufficiently large
number of RIS elements. In addition, the system reliability, indicated by the
tightness of the SNR distribution, increases when RIS is leveraged, particularly when only the
reflected channel exists. This highlights the advantages of RIS-aided
communications for ultra-reliable and low-latency systems. The difference
between Shannon capacity and achievable rate in FBL regime is also discussed.
Additionally, the required number of RIS elements to achieve a desired error
probability in the FBL regime will be significantly reduced when the phase
shifts are performed without error.
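The Gamma matching step itself is a standard moment fit; in the absence of the closed-form parameters derived in the paper, an empirical version reads as follows (our generic sketch).

```python
import numpy as np

def match_gamma(snr_samples):
    """Moment-match SNR samples to a Gamma(shape k, scale theta):
    k = mean^2 / variance, theta = variance / mean."""
    snr = np.asarray(snr_samples, dtype=float)
    mean, var = snr.mean(), snr.var()
    return mean**2 / var, var / mean

# With SciPy, the resulting fit can be compared against the empirical CDF:
#   from scipy.stats import gamma
#   k, theta = match_gamma(samples); gamma(a=k, scale=theta).cdf(grid)
```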
|
In this paper we prove that any local automorphism on the solvable Leibniz
algebras with null-filiform and naturally graded non-Lie filiform nilradicals,
whose dimension of the complementary space is maximal, is an automorphism.
Furthermore, the same problem concerning 2-local automorphisms of such algebras
is investigated, and we obtain analogous results for 2-local
automorphisms.
|
Geometric phases are robust against certain types of local noises, and thus
provide a promising way towards high-fidelity quantum gates. However, compared
with dynamical gates, previous implementations of nonadiabatic geometric
quantum gates usually require a longer evolution time, due to the longer
evolution path needed. Here, we propose a scheme to realize nonadiabatic geometric
quantum gates with short paths based on simple pulse control techniques,
instead of deliberated pulse control in previous investigations, which can thus
further suppress the influence from the environment induced noises.
Specifically, we illustrate the idea on a superconducting quantum circuit,
which is one of the most promising platforms for realizing practical quantum
computer. As the current scheme shortens the geometric evolution path, we can
obtain ultra-high gate fidelity, especially for the two-qubit gate case, as
verified by our numerical simulation. Therefore, our protocol suggests a
promising way towards high-fidelity and robust quantum computation on a
solid-state quantum system.
|
Heterogeneous Ultra-Dense Network (HUDN) is one of the vital networking
architectures due to its ability to enable higher connectivity density and
ultra-high data rates. Rational user association and power control schedule in
HUDN can reduce wireless interference. This paper proposes a novel idea for
resolving the joint user association and power control problem: the optimal
user association and Base Station transmit power can be represented by channel
information. Then, we solve this problem by formulating an optimal
representation function. We model the HUDNs as a heterogeneous graph and train
a Graph Neural Network (GNN) to approach this representation function by using
semi-supervised learning, in which the loss function is composed of the
unsupervised part that helps the GNN approach the optimal representation
function and the supervised part that utilizes the previous experience to
reduce useless exploration. We separate the learning process into two parts,
the generalization-representation learning (GRL) part and the
specialization-representation learning (SRL) part, which train the GNN to
learn representations for the generalized scenario and for the quasi-static user
distribution scenario, respectively. Simulation results demonstrate that the proposed
GRL-based solution has higher computational efficiency than the traditional
optimization algorithm, and the performance of SRL outperforms the GRL.
|
Ferroelectric tunnel junctions (FTJ) based on hafnium zirconium oxide
(Hf1-xZrxO2; HZO) are a promising candidate for future applications, such as
low-power memories and neuromorphic computing. The tunneling electroresistance
(TER) is tunable through the polarization state of the HZO film. To circumvent
the challenge of fabricating thin ferroelectric HZO layers in the tunneling
range of 1-3 nm, ferroelectric/dielectric double layers sandwiched between
two symmetric metal electrodes are used. Due to the decoupling of the
ferroelectric polarization storage layer and a dielectric tunneling layer with
a higher bandgap, a significant TER ratio between the two polarization states
is obtained. By exploiting previously reported switching behaviour and the
gradual tunability of the resistance, FTJs can be used as potential candidates
for the emulation of synapses for neuromorphic computing in spiking neural
networks. The implementation of two major components of a synapse is shown:
long term depression/potentiation by varying the amplitude/width/number of
voltage pulses applied to the artificial FTJ synapse, and
spike-timing-dependent-plasticity curves by applying time-delayed voltages at
each electrode. These experimental findings show the potential of spiking
neural networks and neuromorphic computing that can be implemented with
hafnia-based FTJs.
|
Deep learning is expected to offer new opportunities and a new paradigm for
the field of architecture. One such opportunity is teaching neural networks to
visually understand architectural elements from the built environment. However,
the availability of large training datasets is one of the biggest limitations
of neural networks. Also, the vast majority of training data for visual
recognition tasks is annotated by humans. In order to resolve this bottleneck,
we present a concept of a hybrid system using both building information
modeling (BIM) and hyperrealistic (photorealistic) rendering to synthesize
datasets for training a neural network for building object recognition in
photos. For generating our training dataset BIMrAI, we used an existing BIM
model and a corresponding photo-realistically rendered model of the same
building. We created methods for using renderings to train a deep learning
model, trained a generative adversarial network (GAN) model using these
methods, and tested the output model on real-world photos. For the specific
case study presented in this paper, our results show that a neural network
trained with synthetic data, i.e., photorealistic renderings and BIM-based
semantic labels, can be used to identify building objects from photos without
using photos in the training data. Future work can enhance the presented
methods using available BIM models and renderings for more generalized mapping
and description of photographed built environments.
|