The Dzyaloshinskii-Moriya interaction (DMI) is an antisymmetric type of
magnetic interaction that favours orthogonal orientation of spins and competes
with Heisenberg exchange. Introduced to explain weak ferromagnetism in
antiferromagnets lacking an inversion center between magnetic atoms, this
anisotropic interaction can be used to analyze other non-trivial magnetic
structures of technological importance, including spin spirals and skyrmions.
Although the corresponding DMI contribution to the magnetic energy of the
system has the very compact form of a vector product of spins, determining DMI
from first-principles electronic structure is a challenging methodological and
technical problem whose solution opens a door into the fascinating microscopic
world of complex magnetic materials. In this paper we review a few such
methods, developed by us, for calculating DMI and their applications to
studying the properties of real materials.
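For reference, the compact DMI energy term mentioned above reads, for spins $\vec{S}_i$ and $\vec{S}_j$,

$$E_{\mathrm{DMI}} = \sum_{i<j} \vec{D}_{ij} \cdot \left( \vec{S}_i \times \vec{S}_j \right),$$

where $\vec{D}_{ij}$ is the Dzyaloshinskii vector for the pair $(i,j)$; it is this vector that the first-principles methods reviewed here aim to compute.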
|
Proportional-integral-derivative (PID) control is the most widely used control
method in industrial control, robotics, and other fields. However, traditional
PID control is not competent when the system cannot be accurately modeled and
the operating environment varies in real time. To tackle these problems, we
propose a self-adaptive, model-free SAC-PID control approach based on
reinforcement learning for the automatic control of mobile robots. A new
hierarchical structure is developed, which includes an upper controller based
on soft actor-critic (SAC), one of the most competitive continuous control
algorithms, and a lower controller based on an incremental PID controller. The
soft actor-critic module receives the dynamic information of the mobile robot
as input and simultaneously outputs the optimal parameters of the incremental
PID controllers to compensate in real time for the error between the path and
the mobile robot. In addition, a combination of the 24-neighborhood method and
polynomial fitting is developed to improve the adaptability of the SAC-PID
control method to complex environments. The effectiveness of the SAC-PID
control method is verified on several paths of varying difficulty, both in
Gazebo simulation and on a real mecanum mobile robot. Furthermore, compared
with fuzzy PID control, the SAC-PID method has the merits of strong
robustness, generalization, and real-time performance.
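To make the division of labour concrete, the following is a minimal sketch of an incremental PID controller whose gains are supplied at every step by an external policy (here, the SAC actor); the class and variable names are illustrative, not the authors' implementation.

    # Incremental PID: the control output is updated by a delta computed from
    # the last three errors, so the gains can be retuned on the fly by an
    # upper-level policy such as SAC.
    class IncrementalPID:
        def __init__(self):
            self.u = 0.0     # current control output
            self.e1 = 0.0    # error at step k-1
            self.e2 = 0.0    # error at step k-2

        def step(self, e, kp, ki, kd):
            # du = kp*(e - e1) + ki*e + kd*(e - 2*e1 + e2)
            du = kp * (e - self.e1) + ki * e + kd * (e - 2.0 * self.e1 + self.e2)
            self.u += du
            self.e2, self.e1 = self.e1, e
            return self.u

At each control cycle, the SAC policy would map the robot's state to (kp, ki, kd), and step() would convert the current path-tracking error into a command.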
|
People's opinions evolve over time as they interact with their friends,
family, colleagues, and others. In the study of opinion dynamics on networks,
one often encodes interactions between people in the form of dyadic
relationships, but many social interactions in real life are polyadic (i.e.,
they involve three or more people). In this paper, we extend an asynchronous
bounded-confidence model (BCM) on graphs, in which nodes are connected pairwise
by edges, to an asynchronous BCM on hypergraphs, in which arbitrarily many
nodes can be connected by a single hyperedge. We show that our hypergraph BCM
converges to consensus under a wide range of initial conditions for the
opinions of the nodes, including for non-uniform and asymmetric initial opinion
distributions. We also show that, under suitable conditions, echo chambers can
form on hypergraphs with community structure. We demonstrate that the opinions
of individuals can sometimes jump from one opinion cluster to another in a
single time step, a phenomenon (which we call ``opinion jumping'') that is not
possible in standard dyadic BCMs. Additionally, we observe that there is a
phase transition in the convergence time on a complete hypergraph when the
variance $\sigma^2$ of the initial opinion distribution equals the confidence
bound $c$. We prove that the convergence time grows at least exponentially fast
with the number of nodes when $\sigma^2 > c$ and the initial opinions are
normally distributed. Therefore, to determine the convergence properties of our
hypergraph BCM when the variance and the number of hyperedges are both large,
it is necessary to use analytical methods instead of relying only on Monte
Carlo simulations.
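As a point of reference, one asynchronous hyperedge update of this kind can be sketched in a few lines; the rule below (each member of a randomly chosen hyperedge adopts the hyperedge mean when it lies within the confidence bound $c$) is an illustrative variant, not necessarily the paper's exact update.

    import random

    def bcm_hypergraph_step(opinions, hyperedges, c):
        # Pick one hyperedge uniformly at random (asynchronous update).
        edge = random.choice(hyperedges)
        mean = sum(opinions[i] for i in edge) / len(edge)
        # Nodes whose opinion is within the confidence bound move to the mean;
        # with many nodes per hyperedge this can produce large "opinion jumps".
        for i in edge:
            if abs(mean - opinions[i]) < c:
                opinions[i] = mean
        return opinions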
|
Limited expert time is a key bottleneck in medical imaging. Due to advances
in image classification, AI can now serve as decision-support for medical
experts, with the potential for great gains in radiologist productivity and, by
extension, public health. However, these gains are contingent on building and
maintaining experts' trust in the AI agents. Explainable AI may build such
trust by helping medical experts to understand the AI decision processes behind
diagnostic judgements. Here we introduce and evaluate explanations based on
Bayesian Teaching, a formal account of explanation rooted in the cognitive
science of human learning. We find that medical experts exposed to explanations
generated by Bayesian Teaching successfully predict the AI's diagnostic
decisions and are more likely to certify the AI for cases when the AI is
correct than when it is wrong, indicating appropriate trust. These results show
that Explainable AI can be used to support human-AI collaboration in medical
imaging.
|
The goal of this paper is to review and critically assess different methods
to monitor key process variables for ethanol production from lignocellulosic
biomass. Because cellulose-based biofuels cannot yet compete with
non-cellulosic biofuels, process control and optimization are of importance to
lower the production costs. This study reviews different monitoring schemes to
indicate the added value of real-time monitoring for process control.
Furthermore, a comparison is made of different monitoring techniques to measure
the off-gas, the concentrations of dissolved components in the inlet to the
process, the concentrations of dissolved components in the reactor, and the
biomass concentration. Finally, soft sensor techniques and available models are
discussed, to give an overview of modeling techniques that analyze data, with
the aim of coupling the soft sensor predictions to the control and optimization
of cellulose to ethanol fermentation. The paper ends with a discussion of
future needs and developments.
|
The value semigroup of a $k$-semiroot $C_k$ of a plane branch $C$ allows us to
recover part of the value semigroup $\Gamma =\langle v_0,\ldots ,v_g\rangle$ of
$C$; that is, it is related to topological invariants of $C$. In this paper we
consider the set of values of differentials $\Lambda_k$ of $C_k$, which is an
analytic invariant, and we show how it determines part of the set of values of
differentials $\Lambda$ of $C$. As a consequence, in a fixed topological class,
we relate the Tjurina number $\tau$ of $C$ to the Tjurina number of $C_k$. In
particular, we show that $\tau\leq \mu-\frac{3n_g-2}{4}\mu_{g-1}$, where
$n_g=\gcd(v_0,\ldots ,v_{g-1})$, and $\mu$ and $\mu_{g-1}$ denote the Milnor
numbers of $C$ and $C_{g-1}$, respectively. If $n_g=2$, we have that
$\tau=\mu-\mu_{g-1}$ for any curve in the topological class determined by
$\Gamma$, which generalizes a result obtained by Luengo and Pfister.
|
A point of a metric space is called a geodesic star with $m$ arms if it is
the endpoint of $m$ disjoint geodesics. For every $m\in\{1,2,3,4\}$, we prove
that the set of all geodesic stars with $m$ arms in the Brownian sphere has
dimension $5-m$. This complements recent results of Miller and Qian, who proved
that this dimension is smaller than or equal to $5-m$.
|
In this PhD thesis, we explore and apply methods inspired by the free energy
principle to two important areas in machine learning and neuroscience. The free
energy principle is a general mathematical theory of the necessary
information-theoretic behaviours of systems that maintain a separation from
their environment. A core postulate of the theory is that complex systems can
be seen as performing variational Bayesian inference and minimizing an
information-theoretic quantity called the variational free energy. The thesis
is structured into three independent sections. Firstly, we focus on predictive
coding, a neurobiologically plausible process theory derived from the free
energy principle which argues that the primary function of the brain is to
minimize prediction errors; we show how predictive coding can be scaled up and
extended to be more biologically plausible, and elucidate its close links
with other methods such as Kalman filtering. Secondly, we study active
inference, a neurobiologically grounded account of action through variational
message passing, and investigate how these methods can be scaled up to match
the performance of deep reinforcement learning methods. We additionally provide
a detailed mathematical understanding of the nature and origin of the
information-theoretic objectives that underlie exploratory behaviour. Finally,
we investigate biologically plausible methods of credit assignment in the
brain. We first demonstrate a close link between predictive coding and the
backpropagation of error algorithm. We go on to propose novel and simpler
algorithms which allow for backprop to be implemented in purely local,
biologically plausible computations.
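To fix ideas, a single-layer predictive-coding update can be written as two local gradient steps driven by the prediction error; this is a minimal sketch of the general scheme discussed in the thesis, with illustrative shapes and learning rate.

    import numpy as np

    def predictive_coding_step(x, mu, W, lr=0.1):
        # Prediction error between the data x and the top-down prediction W @ mu.
        e = x - W @ mu
        # Inference: refine the latent estimate mu using only the local error.
        mu = mu + lr * (W.T @ e)
        # Learning: Hebbian-like local weight update from the same error.
        W = W + lr * np.outer(e, mu)
        return mu, W, e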
|
This paper considers the problem of retrieving an object from many tightly
packed objects using a combination of robotic pushing and grasping actions.
Object retrieval in dense clutter is an important skill for robots to operate
in households and everyday environments effectively. The proposed solution,
Visual Foresight Tree (VFT), intelligently rearranges the clutter surrounding a
target object so that it can be grasped easily. Rearrangement with nested
nonprehensile actions is challenging as it requires predicting complex object
interactions in a combinatorially large configuration space of multiple
objects. We first show that a deep neural network can be trained to accurately
predict the poses of the packed objects when the robot pushes one of them. The
predictive network provides visual foresight and is used in a tree search as a
state transition function in the space of scene images. The tree search returns
a sequence of consecutive push actions yielding the best arrangement of the
clutter for grasping the target object. Experiments in simulation and using a
real robot and objects show that the proposed approach outperforms model-free
techniques as well as model-based myopic methods both in terms of success rates
and the number of executed actions, on several challenging tasks. A video
introducing VFT, with robot experiments, is accessible at
https://youtu.be/7cL-hmgvyec. The full source code is available at
https://github.com/arc-l/vft.
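The core search loop can be summarized as follows; this is a schematic, depth-limited version under assumed helper functions (predict_push for the learned transition model, grasp_score for the grasp-quality estimate), not the authors' exact algorithm.

    def visual_foresight_search(image, predict_push, grasp_score,
                                candidate_pushes, depth=2):
        # Option 1: stop pushing and grasp in the current scene.
        best_score, best_plan = grasp_score(image), []
        if depth == 0:
            return best_score, best_plan
        # Option 2: imagine each candidate push with the predictive network
        # and recurse on the predicted scene image.
        for push in candidate_pushes:
            imagined = predict_push(image, push)
            score, plan = visual_foresight_search(imagined, predict_push,
                                                  grasp_score,
                                                  candidate_pushes, depth - 1)
            if score > best_score:
                best_score, best_plan = score, [push] + plan
        return best_score, best_plan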
|
We derive a method to enhance the evaluation of a text-based Emotion Aware
Recommender that we have developed; previously, we lacked a suitable way to
assess its top-N recommendations. In this study, we introduce an emotion-aware
Pseudo Association Method to interconnect disjoint users across different
datasets so that data files can be combined to form a more extensive data
file. Users with the same user ID found in separate data files of the same
dataset are often the same user; however, users with the same user ID may not
be the same user across different datasets. Our emotion-aware Pseudo
Association Method therefore associates users with different user IDs across
different datasets through the most similar users' emotion vectors (UVECs). We
found that the method improved the evaluation process by allowing the top-N
recommendations to be assessed objectively.
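A minimal sketch of the association step, assuming each user is summarized by an emotion vector (UVEC) and users are matched by cosine similarity; function and variable names are illustrative.

    import numpy as np

    def associate_users(uvecs_a, uvecs_b):
        # uvecs_a, uvecs_b: dicts mapping user id -> 1-D emotion vector.
        ids_b = list(uvecs_b)
        mat_b = np.stack([uvecs_b[j] for j in ids_b])
        mat_b /= np.linalg.norm(mat_b, axis=1, keepdims=True)
        pairs = {}
        for uid, v in uvecs_a.items():
            sims = mat_b @ (v / np.linalg.norm(v))    # cosine similarities
            pairs[uid] = ids_b[int(np.argmax(sims))]  # most similar user in B
        return pairs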
|
We solve the two-dimensional hydrodynamic equations of hot accretion flow in
the presence of the thermal conduction. The flow is assumed to be in
steady-state and axisymmetric, and self-similar approximation is adopted in the
radial direction. In this hydrodynamic study, we consider the viscous stress
tensor to mimic the effects of the magnetorotational instability in driving
angular momentum transport. We impose physical boundary conditions at both the
rotation axis and the equatorial plane and obtain solutions in the full
$r-\theta$ space. We find that thermal conduction is an indispensable term for
investigating the inflow-wind structure of hot accretion flows with very low
mass accretion rates. One of the most interesting results is that the disc is
convectively stable in the hot accretion mode in the presence of thermal
conduction. Furthermore, the properties of the wind and its driving mechanisms
are studied. Our analytical results are consistent with previous numerical
simulations of hot accretion flow.
|
We propose network benchmarking: a procedure to efficiently benchmark the
quality of a quantum network link connecting quantum processors in a quantum
network. This procedure is based on the standard randomized benchmarking
protocol and provides an estimate for the fidelity of a quantum network link.
We provide a statistical analysis of the protocol as well as a simulated
implementation inspired by NV-center systems using Netsquid, a special purpose
simulator for noisy quantum networks.
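In the spirit of standard randomized benchmarking, the link fidelity is typically extracted by fitting an exponential decay $A p^m + B$ to survival probabilities versus sequence length $m$; the sketch below shows this generic analysis step (not the full network protocol), with the dimension $d = 4$ assumed for a two-qubit link.

    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, p, B):
        # Standard randomized-benchmarking decay model.
        return A * p**m + B

    def estimate_average_fidelity(lengths, survival, d=4):
        (A, p, B), _ = curve_fit(rb_decay, np.asarray(lengths),
                                 np.asarray(survival), p0=(0.5, 0.95, 0.5))
        # Convert the decay parameter to an average fidelity.
        return 1.0 - (1.0 - p) * (d - 1) / d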
|
In virtual-state spectroscopy, information about the energy-level structure
of an arbitrary sample is retrieved by Fourier transforming sets of measured
two-photon absorption probabilities of entangled photon pairs where the degree
of entanglement and the delay time between the photons have been varied. This
works well for simple systems but quickly becomes rather difficult when many
intermediate states are involved. We propose and discuss an extension of
entangled two-photon absorption spectroscopy that solves this problem by means
of repeated measurements at different pump wavelengths. Specifically, we
demonstrate that our extension works well for a variety of realistic
experimental setups.
|
Owing to the hybridization of cerium's localised 4$f$ electrons with a
conduction band composed of $d$ electrons, cerium-based intermetallics exhibit
various kinds of magnetic interactions. In crystals, these can result in
exotic types of magnetic ordering. In this study, we report a detailed
single-crystal
neutron diffraction study on CePdAl$_3$ and CePtAl$_3$. We have synthesized a
large crystal of CePdAl$_3$, which crystallizes in a non-centrosymmetric,
orthorhombic structure with space group $Cmc2_1$, a new, distorted variant of
the tetragonal BaNiSn$_3$ structure observed in other Ce$T$Al$_3$ compounds,
such as CePtAl$_3$. Low-temperature diffraction measurements showed that
CePdAl$_3$ orders in a collinear antiferromagnetic structure below
$T_N = 5.3(1)$ K, with magnetic moments pointing along the $a$-axis and an
ordered magnetic moment $\mu = 1.64(3)\,\mu_B$/Ce$^{3+}$. Tetragonal CePtAl$_3$
shows a modulated, cycloidal type of ordering with
$\vec{k}=(\frac{2}{3}\,0\,0)$ and a transition temperature $T_N = 3.2$ K.
Symmetry analysis allows two types of ordering, which show modulation of both
the amplitude and the direction of the magnetic moments. These results allow
us to conclude that, in the Ce$T$Al$_3$ system, the orthorhombic distortion
($T$ = Pd, Ag) releases some underlying magnetic frustration that results in
the modulated types of magnetic ordering seen in the tetragonal compounds
($T$ = Cu, Au, Pt).
|
Link prediction and node classification are two important downstream tasks of
network representation learning. Existing methods have achieved acceptable
results but they perform these two tasks separately, which requires a lot of
duplication of work and ignores the correlations between tasks. Besides,
conventional models suffer from the identical treatment of information of
multiple views, thus they fail to learn robust representation for downstream
tasks. To this end, we tackle link prediction and node classification problems
simultaneously via multi-task multi-view learning in this paper. We first
explain the feasibility and advantages of multi-task multi-view learning for
these two tasks. Then we propose a novel model named MT-MVGCN to perform link
prediction and node classification simultaneously. More specifically, we
design a multi-view graph convolutional network to extract abundant
information from the multiple views of a network, which is shared by different
tasks. We further apply two attention mechanisms, a view attention mechanism
and a task attention mechanism, which allow views and tasks to adjust the view
fusion process. Moreover, view reconstruction can be introduced as an auxiliary
task to boost the performance of the proposed model. Experiments on real-world
network datasets demonstrate that our model is efficient yet effective, and
outperforms advanced baselines in these two tasks.
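The view-attention idea can be illustrated compactly: per-view node embeddings are combined with softmax weights scored by a learnable attention vector. The sketch below is schematic (NumPy instead of a GCN layer, illustrative names), not the exact MT-MVGCN fusion layer.

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def fuse_views(view_embs, attn_vec):
        # view_embs: list of (n_nodes, dim) embedding matrices, one per view.
        # attn_vec: (dim,) learnable attention vector (here fixed for brevity).
        scores = softmax(np.array([float((h @ attn_vec).mean())
                                   for h in view_embs]))
        # Weighted sum of views yields the shared, task-agnostic embedding.
        return sum(a * h for a, h in zip(scores, view_embs))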
|
It is well-known that typability, type inhabitation and type inference are
undecidable in the Girard-Reynolds polymorphic system F. It has recently been
proven that type inhabitation remains undecidable even in the predicative
fragment of system F in which all universal instantiations have an atomic
witness (system Fat). In this paper we analyze typability and type inference in
Curry style variants of system Fat and show that typability is decidable and
that there is an algorithm for type inference which is capable of dealing with
non-redundancy constraints.
|
In this paper, we propose two simple yet efficient computational algorithms
to obtain approximate optimal designs for multi-dimensional linear regression
on a large variety of design spaces. We focus on two commonly used optimality
criteria, the $D$- and $A$-optimality criteria. For $D$-optimality, we provide
an alternative proof of the monotonic convergence of the $D$-optimal criterion
and propose an efficient computational algorithm to obtain the approximate
$D$-optimal design. We further show that the proposed algorithm converges to
the $D$-optimal design, and then prove that the approximate $D$-optimal design
converges to the continuous $D$-optimal design under certain conditions. For
$A$-optimality, we provide an efficient algorithm to obtain an approximate
$A$-optimal design and conjecture the monotonicity of the proposed algorithm.
Numerical comparisons suggest that the proposed algorithms perform well and
they are comparable or superior to some existing algorithms.
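For orientation, a classical multiplicative weight-update scheme for approximate $D$-optimal design is sketched below; it exhibits exactly the kind of monotonic convergence discussed above, although it is not necessarily the algorithm proposed in the paper.

    import numpy as np

    def d_optimal_weights(X, iters=500, tol=1e-9):
        # X: (N, p) matrix whose rows are candidate design points.
        N, p = X.shape
        w = np.full(N, 1.0 / N)
        for _ in range(iters):
            M = X.T @ (w[:, None] * X)                  # information matrix
            d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)
            w_new = w * d / p                           # multiplicative update
            if np.max(np.abs(w_new - w)) < tol:
                return w_new
            w = w_new
        return w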
|
Using multiple monitors is commonly thought to improve productivity, but this
is hard to check experimentally. We use a survey, taken by 101 practitioners
of whom 80% have coded professionally for at least 2 years, to assess subjective
perspectives based on experience. To improve validity, we compare situations in
which developers naturally use different setups -- the difference between
working at home or at the office, and how things changed when developers were
forced to work from home due to the Covid-19 pandemic. The results indicate
that using multiple monitors is indeed perceived as beneficial and desirable.
19% of the respondents reported adding a monitor to their home setup in
response to the Covid-19 situation. At the same time, the single most
influential factor cited as affecting productivity was not the physical setup
but interactions with co-workers -- both reduced productivity due to lack of
connections available at work, and improved productivity due to reduced
interruptions from co-workers. A central implication of our work is that
empirical research on software development should be conducted in settings
similar to those actually used by practitioners, and in particular using
workstations configured with multiple monitors.
|
This research proposes a new integrated framework for identifying safe
landing locations and planning in-flight divert maneuvers. The state-of-the-art
algorithms for landing zone selection utilize local terrain features such as
slopes and roughness to judge the safety and priority of the landing point.
However, when there are additional chances of observation and diverting in the
future, these algorithms are not able to evaluate the safety of the decision
itself to target the selected landing point considering the overall descent
trajectory. In response to this challenge, we propose a reinforcement learning
framework that optimizes a landing site selection strategy concurrently with a
guidance and control strategy to the target landing site. The trained agent
can evaluate and select landing sites with explicit consideration of the
terrain features, the quality of future observations, and the control needed
to achieve a safe and efficient landing trajectory at the system level. The
proposed framework achieved a 94.8% success rate in highly challenging landing
sites, where over 80% of the area around the initial target landing point is
hazardous, by effectively updating the target landing site and the feedback
control gain during descent.
|
We use rank correlations as distance functions to establish the
interconnectivity between stock returns, building weighted signed networks for
the stocks of seven European countries, the US and Japan. We establish the
theoretical relationship between the level of balance in a network and stock
predictability, studying its evolution from 2005 to the third quarter of 2020.
We find a clear balance-unbalance transition for six of the nine countries,
following the August 2011 Black Monday in the US, when the Economic Policy
Uncertainty index for this country reached its highest monthly level before the
COVID-19 crisis. This sudden loss of balance is mainly caused by a
reorganization of the market networks triggered by a group of low
capitalization stocks belonging to the non-financial sector. After the
transition, the stocks of companies in these groups all become negatively
correlated with one another and with most of the rest of the stocks in the
market. The implied change in the network topology is directly related to a
decrease in stock predictability, a finding with important novel implications
for asset allocation and portfolio hedging strategies.
|
In this article we fully describe the domain of the infinitesimal generator
of the optimal state semigroup which arises in the theory of the
linear-quadratic problem for a specific class of boundary control systems. This
represents an improvement over earlier work of the authors, joint with
Lasiecka, where a set inclusion was established, but not an equality. The novel
part of the proof of this result develops through appropriate asymptotic
estimates that take advantage of the regularity analysis carried out in the
study of the optimization problem, while the powers of positive operators and
interpolation are still key tools. We also attest to the validity of an assumed
relation between two significant parameters in the case of distinct systems of
coupled hyperbolic-parabolic partial differential equations which are pertinent
to the underlying framework.
|
Physics-Informed Neural Networks promise to revolutionize science and
engineering practice, by introducing domain-aware deep machine learning models
into scientific computation. Several software suites have emerged to make the
implementation and usage of these architectures available to the research and
industry communities. Here we introduce TensorDiffEq, built on
Tensorflow 2.x, which presents an intuitive Keras-like interface for problem
domain definition, model definition, and solution of forward and inverse
problems using physics-aware deep learning methods. TensorDiffEq takes full
advantage of Tensorflow 2.x infrastructure for deployment on multiple GPUs,
allowing the implementation of large high-dimensional and complex models.
Simultaneously, TensorDiffEq supports the Keras API for custom neural network
architecture definitions. In the case of smaller or simpler models, the package
allows for rapid deployment on smaller-scale CPU platforms with negligible
changes to the implementation scripts. We demonstrate the basic usage and
capabilities of TensorDiffEq in solving forward, inverse, and data assimilation
problems of varying sizes and levels of complexity. The source code is
available at https://github.com/tensordiffeq.
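To convey the flavour of physics-aware training without reproducing TensorDiffEq's own interface, the following generic Tensorflow 2.x snippet computes the residual of a 1-D heat equation $u_t = \nu u_{xx}$ for a neural surrogate $u(x,t)$; packages like TensorDiffEq wrap this kind of construction behind a higher-level API.

    import tensorflow as tf

    def heat_residual(model, x, t, nu=0.01):
        # x, t: (N, 1) float32 tensors of collocation points.
        with tf.GradientTape(persistent=True) as outer:
            outer.watch([x, t])
            with tf.GradientTape() as inner:
                inner.watch(x)
                u = model(tf.concat([x, t], axis=1))
            u_x = inner.gradient(u, x)
        u_t = outer.gradient(u, t)
        u_xx = outer.gradient(u_x, x)
        del outer
        # The physics loss is the mean square of this residual.
        return u_t - nu * u_xx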
|
We introduce a formal language for specifying dynamic updates for Software
Defined Networks. Our language builds upon Network Kleene Algebra with Tests
(NetKAT) and adds constructs for synchronisations and multi-packet behaviour to
capture the interaction between the control- and data-plane in dynamic updates.
We provide a sound and ground-complete axiomatisation of our language. We
exploit the equational theory to provide an efficient reasoning method about
safety properties for dynamic networks. We implement our equational theory in
DyNetiKAT -- a tool prototype, based on the Maude Rewriting Logic and the
NetKAT tool, and apply it to a case study. We show that we can analyse the case
study for networks with hundreds of switches using our initial tool prototype.
|
The ability of photonic crystal waveguides (PCWs) to confine and slow down
light makes them an ideal component to enhance the performance of various
photonic devices, such as optical modulators or sensors. However, the
integration of PCWs in photonic applications poses design challenges, most
notably, engineering the PCW mode dispersion and creating efficient coupling
devices. Here, we solve these challenges with photonic inverse design, and
experimentally demonstrate a slow-light PCW optical phased array (OPA) with a
wide steering range. Even and odd mode PCWs are engineered for a group index of
25, over a bandwidth of 20 nm and 12 nm, respectively. Additionally, for both PCW
designs, we create strip waveguide couplers and free-space vertical couplers.
Finally, also relying on inverse design, the radiative losses of the PCW are
engineered, allowing us to construct OPAs with a 20° steering range in a
20 nm bandwidth.
|
Weighted model counting (WMC) is a popular framework to perform probabilistic
inference with discrete random variables. Recently, WMC has been extended to
weighted model integration (WMI) in order to additionally handle continuous
variables. At their core, WMI problems consist of computing integrals and sums
over weighted logical formulas. From a theoretical standpoint, WMI has been
formulated by patching the sum over weighted formulas, which is already present
in WMC, with Riemann integration. A more principled approach to integration,
which is rooted in measure theory, is Lebesgue integration. Lebesgue
integration allows one to treat discrete and continuous variables on equal
footing in a principled fashion. We propose a theoretically sound measure
theoretic formulation of weighted model integration, which naturally reduces to
weighted model counting in the absence of continuous variables. Instead of
regarding weighted model integration as an extension of weighted model
counting, WMC emerges as a special case of WMI in our formulation.
|
Inspired by the analysis of variance (ANOVA) decomposition of functions we
propose a Gaussian-Uniform mixture model on the high-dimensional torus which
relies on the assumption that the function we wish to approximate can be well
explained by limited variable interactions. We consider three approaches,
namely wrapped Gaussians, diagonal wrapped Gaussians and products of von Mises
distributions. The sparsity of the mixture model is ensured by the fact that
its summands are products of Gaussian-like density functions acting on low
dimensional spaces and uniform probability densities defined on the remaining
directions. To learn such a sparse mixture model from given samples, we propose
an objective function consisting of the negative log-likelihood function of the
mixture model and a regularizer that penalizes the number of its summands. For
minimizing this functional we combine the Expectation Maximization algorithm
with a proximal step that takes the regularizer into account. To decide which
summands of the mixture model are important, we apply a Kolmogorov-Smirnov
test. Numerical examples demonstrate the performance of our approach.
|
In this work, three heuristic derivations of the Hawking temperature are
presented. The main characteristic of these derivations is their extreme
simplicity, which makes them easily accessible to a wide and diverse audience.
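For context, the result that all three derivations recover is the Hawking temperature of a Schwarzschild black hole of mass $M$,

$$T_H = \frac{\hbar c^3}{8\pi G M k_B},$$

whose inverse proportionality to $M$ is the key qualitative feature each heuristic argument reproduces.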
|
Lagrangian cobordism induces a preorder on the set of Legendrian links in any
contact 3-manifold. We show that any finite collection of null-homologous
Legendrian links in a tight contact 3-manifold with a common rotation number
has an upper bound with respect to the preorder. In particular, we construct an
exact Lagrangian cobordism from each element of the collection to a common
Legendrian link. This construction allows us to define a notion of minimal
Lagrangian genus between any two null-homologous Legendrian links with a common
rotation number.
|
Let $G$ be a simple graph with maximum degree $\Delta(G)$. A subgraph $H$ of
$G$ is overfull if $|E(H)|>\Delta(G)\lfloor |V(H)|/2 \rfloor$. Chetwynd and
Hilton in 1985 conjectured that a graph $G$ with $\Delta(G)>|V(G)|/3$ has
chromatic index $\Delta(G)$ if and only if $G$ contains no overfull subgraph.
The 1-factorization conjecture is a special case of this overfull conjecture,
which states that for even $n$, every regular $n$-vertex graph with degree at
least about $n/2$ has a 1-factorization and was confirmed for large graphs in
2014. Supporting the overfull conjecture as well as generalizing the
1-factorization conjecture in an asymptotic way, in this paper, we show that
for any given $0<\varepsilon <1$, there exists a positive integer $n_0$ such
that the following statement holds: if $G$ is a graph on $2n\ge n_0$ vertices
with minimum degree at least $(1+\varepsilon)n$, then $G$ has chromatic index
$\Delta(G)$ if and only if $G$ contains no overfull subgraph.
|
Grammar compression is, next to Lempel-Ziv (LZ77) and run-length
Burrows-Wheeler transform (RLBWT), one of the most flexible approaches to
representing and processing highly compressible strings. The main idea is to
represent a text as a context-free grammar whose language is precisely the
input string. This is called a straight-line grammar (SLG). An AVL grammar,
proposed by Rytter [Theor. Comput. Sci., 2003], is a type of SLG that
additionally satisfies the AVL-property: the heights of parse-trees for
children of every nonterminal differ by at most one. In contrast to other SLG
constructions, AVL grammars can be constructed from the LZ77 parsing in
compressed time: $\mathcal{O}(z \log n)$ where $z$ is the size of the LZ77
parsing and $n$ is the length of the input text. Despite these advantages, AVL
grammars are thought to be too large to be practical.
We present a new technique for rapidly constructing a small AVL grammar from
an LZ77 or LZ77-like parse. Our algorithm produces grammars that are always at
least five times smaller than those produced by the original algorithm, and
never more than double the size of grammars produced by the practical Re-Pair
compressor [Larsson and Moffat, Proc. IEEE, 2000]. Our algorithm also achieves
low peak RAM usage. By combining this algorithm with recent advances in
approximating the LZ77 parsing, we show that our method has the potential to
construct a run-length BWT from an LZ77 parse in about one third of the time
and peak RAM required by other approaches. Overall, we show that AVL grammars
are surprisingly practical, opening the door to much faster construction of key
compressed data structures.
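To illustrate the underlying object, a straight-line grammar gives each nonterminal exactly one production, so its language is a single string; the toy expansion below (illustrative only, since real implementations never materialize the full text) makes this concrete.

    def expand(symbol, rules):
        # rules: dict mapping each nonterminal to its unique right-hand side;
        # any symbol absent from rules is a terminal.
        if symbol not in rules:
            return symbol
        return ''.join(expand(s, rules) for s in rules[symbol])

    # S -> AB, A -> ab, B -> Ac encodes exactly the string "ababc".
    print(expand('S', {'S': ['A', 'B'], 'A': ['a', 'b'], 'B': ['A', 'c']}))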
|
There is a well-known correspondence between coherent theories (and their
interpretations) and coherent categories (resp. functors), hence the
(2,1)-category $\mathbf{Coh_{\sim}}$ (of small coherent categories, coherent
functors and all natural isomorphisms) is of logical interest. We prove that
this category admits all small 2-limits and 2-colimits (in the ($\infty
$,1)-categorical sense), and prove a 2-categorical small object argument to
provide weak factorisation systems for coherent functors.
|
Since early 2020 the COVID-19 pandemic has had a considerable impact on many
aspects of daily life. A range of different measures have been implemented
worldwide to reduce the rate of new infections and to manage the pressure on
national health services. A primary strategy has been to reduce gatherings and
the potential for transmission through the prioritisation of remote working and
education. Enhanced hand hygiene and the use of facial masks have decreased the
spread of pathogens when gatherings are unavoidable. These particular measures
present challenges for reliable biometric recognition, e.g. for facial-, voice-
and hand-based biometrics. At the same time, new challenges create new
opportunities and research directions, e.g. renewed interest in non-constrained
iris or periocular recognition, touch-less fingerprint- and vein-based
authentication and the use of biometric characteristics for disease detection.
This article presents an overview of the research carried out to address those
challenges and emerging opportunities.
|
We derive a large-scale hydrodynamic equation, including diffusive and
dissipative effects, for systems with generic static position-dependent driving
forces coupling to local conserved quantities. We show that this equation
predicts entropy increase and thermal states as the only stationary states. The
equation applies to any hydrodynamic system with any number of local,
PT-symmetric conserved quantities, in arbitrary dimension. It is fully
expressed in terms of elements of an extended Onsager matrix. In integrable
systems, this matrix admits an expansion in the density of excitations. We
evaluate exactly its 2-particle-hole contribution, which dominates at low
density, in terms of the scattering phase and dispersion of the quasiparticles,
giving a lower bound for the extended Onsager matrix and entropy production. We
conclude with a molecular dynamics simulation, demonstrating thermalisation
over diffusive time scales in the Toda interacting particle model with an
inhomogeneous energy field.
|
The electric charge renormalization constant, as defined in the Thomson
limit, is expressed in terms of self-energies of the photon-Z-boson system in
an arbitrary R_\xi-gauge to all perturbative orders. The derivation as carried
out in the Standard Model holds in all spontaneously broken gauge theories with
the SU(2)_w \times U(1)_Y gauge group in the electroweak sector and is based on
the application of charge universality to a fake fermion with infinitesimal
weak hypercharge and vanishing weak isospin, which effectively decouples from
all other particles. Charge universality, for instance, follows from the known
universal form of the charge renormalization constant as derived within the
background-field formalism. Finally, we have generalized the described
procedure to gauge theories with gauge group U(1)_Y \times G with any Lie group
G, only assuming that electromagnetic gauge symmetry is unbroken and mixes with
U(1)_Y transformations in a non-trivial way.
|
Given an action of a groupoid by isomorphisms on a Fell bundle (over another
groupoid), we form a semidirect-product Fell bundle, and prove that its
$C^{*}$-algebra is isomorphic to a crossed product.
|
We present an effective field theory describing the relevant interactions of
the Standard Model with an electrically neutral particle that can account for
the dark matter in the Universe. The possible mediators of these interactions
are assumed to be heavy. The dark matter candidates that we consider have spin
0, 1/2 or 1, belong to an electroweak multiplet with arbitrary isospin and
hypercharge and their stability at cosmological scales is guaranteed by
imposing a $\mathbb{Z}_2$ symmetry. We present the most general framework for
describing the interaction of the dark matter with standard particles, and
construct a general non-redundant basis of the gauge-invariant operators up to
dimension six. The basis includes multiplets with non-vanishing hypercharge,
which can also be viable DM candidates. We give two examples illustrating the
phenomenological use of such a general effective framework. First, we consider
the case of a scalar singlet, provide convenient semi-analytical expressions
for the relevant dark matter observables, use present experimental data to set
constraints on the Wilson coefficients of the operators, and show how the
interplay of different operators can open new allowed windows in the parameter
space of the model. Then we study the case of a lepton isodoublet, which
involves co-annihilation processes, and we discuss the impact of the operators
on the particle mass splitting and direct detection cross sections. These
examples highlight the importance of the contribution of the various
non-renormalizable operators, which can even dominate over the gauge
interactions in certain cases.
|
In this paper, we investigate the roles that social robots can take in
physical exercise with human partners. In related work, robots or virtual
intelligent agents take the role of a coach or instructor whereas in other
approaches they are used as motivational aids. These are two "paradigms", so to
speak, within the small but growing area of robots for social exercise. We
designed an online questionnaire to test whether the preferred role in which
people want to see robots would be the companion or the coach. The
questionnaire asks people to imagine working out with a robot and draws on
three established instruments: (1) the CART-Q, which is used for judging
coach-athlete relationships, (2) the mind perception questionnaire, and (3)
the System Usability Scale (SUS). We present the methodology, some preliminary
results as well as our intended future work on personal robots for coaching.
|
Aims: To study the heating of solar chromospheric magnetic and nonmagnetic
regions by acoustic and magnetoacoustic waves, the deposited acoustic-energy
flux derived from observations of strong chromospheric lines is compared with
the total integrated radiative losses. Methods: A set of 23 quiet-Sun and
weak-plage regions were observed in the Mg II k and h lines with the Interface
Region Imaging Spectrograph (IRIS). The deposited acoustic-energy flux was
derived from Doppler velocities observed at two different geometrical heights
corresponding to the middle and upper chromosphere. A set of scaled nonlocal
thermodynamic equilibrium 1D hydrostatic semi-empirical models (obtained by
fitting synthetic to observed line profiles) was applied to compute the
radiative losses. The characteristics of observed waves were studied by means
of a wavelet analysis. Results: Observed waves propagate upward at supersonic
speed. In the quiet chromosphere, the deposited acoustic flux is sufficient to
balance the radiative losses and maintain the semi-empirical temperatures in
the layers under study. In the active-region chromosphere, the comparison shows
that the contribution of acoustic-energy flux to the radiative losses is only
10-30%. Conclusions: Acoustic and magnetoacoustic waves play an important
role in the chromospheric heating, depositing a main part of their energy in
the chromosphere. Acoustic waves compensate for a substantial fraction of the
chromospheric radiative losses in quiet regions. In active regions, their
contribution is too small to balance the radiative losses and the chromosphere
has to be heated by other mechanisms.
|
In this work we explore the effects that a possible primordial magnetic field
can have on the inflaton effective potential, taking as the underlying model a
warm inflation scenario, based on global supersymmetry with a
new-inflation-type potential. The decay scheme for the inflaton field is a
two-step process of radiation production, where the inflaton couples to heavy
intermediate superfields, which in turn interact with light particles. In this
context, we consider that both sectors, heavy and light, are charged and work
in the strong magnetic field approximation for the light fields. We find an
analytical expression for the one-loop effective potential, for an arbitrary
magnetic field strength, and show that the trend of the magnetic contribution
is to make the potential flatter, preserving the conditions for a successful
inflationary process.
|
Embedding-based methods for reasoning in knowledge hypergraphs learn a
representation for each entity and relation. Current methods do not capture the
procedural rules underlying the relations in the graph. We propose a simple
embedding-based model called ReAlE that performs link prediction in knowledge
hypergraphs (generalized knowledge graphs) and can represent high-level
abstractions in terms of relational algebra operations. We show theoretically
that ReAlE is fully expressive and provide proofs and empirical evidence that
it can represent a large subset of the primitive relational algebra operations,
namely renaming, projection, set union, selection, and set difference. We also
verify experimentally that ReAlE outperforms state-of-the-art models in
knowledge hypergraph completion, and in representing each of these primitive
relational algebra operations. For the latter experiment, we generate a
synthetic knowledge hypergraph, for which we design an algorithm based on the
Erdős-Rényi model for generating random graphs.
|
We study the effects of electronic correlations on fragile topology using
dynamical mean-field theory. Fragile topological insulators (FTIs) offer
obstruction to the formation of exponentially localized Wannier functions, but
they can be trivialized by adding certain trivial degrees of freedom. For the
same reason, FTIs do not host symmetry-protected flow of edge states between
bulk bands in cylindrical boundary conditions but are expected to have a
spectral flow between the fragile bands and other bands under certain twisted
boundary conditions. We here analyze commonly observed effects of strong
correlations, such as the Mott-insulator transition and magnetism, on a known
model hosting fragile topology. We show that in the nonmagnetic case, fragile
topology, along with the twisted boundary states, is stable with interactions
below a critical interaction strength. Above this interaction strength, a
transition to the Mott insulating phase occurs, and the twisted boundary states
disappear. Furthermore, by applying a homogeneous magnetic field, the fragile
topology is destroyed. However, we show that a magnetic field can induce a
topological phase transition which converts a fragile topological insulator to
a Chern insulator. Finally, we study ferromagnetic solutions of the fragile
topological model.
|
We formulate non-relativistic string theory on Newton-Cartan spacetime using
the vielbein approach proposed in the Galilean gauge theory. The geometric
implications are discussed at length. The outcome is the clarification of some
points of non-relativistic diffeomorphism that have, to some extent, remained
mystifying in the literature.
|
Modelling the end point of binary black hole mergers is a cornerstone of
modern gravitational-wave astronomy. Extracting multiple quasinormal mode
frequencies from the ringdown signal allows the remnant black hole to be
studied in unprecedented detail. Previous studies on numerical relativity
simulations of aligned-spin binaries have found that it is possible to start
the ringdown analysis much earlier than previously thought if overtones (and
possibly mirror modes) are included. This increases the signal-to-noise ratio
in the ringdown making identification of subdominant modes easier. In this
paper we study, for the first time, black hole binaries with misaligned spins
and find a much greater variation in the performance of ringdown fits than in
the aligned-spin case. The inclusion of mirror modes and higher harmonics,
along with overtones, improves the reliability of ringdown fits with an early
start time; however, there remain cases with poor performing fits. While using
overtones in conjunction with an early ringdown start time is an enticing
possibility, it is necessary to proceed with caution. We also consider for the
first time the use of numerical relativity surrogate models in this type of
quasinormal mode study and address important questions of accuracy in the
underlying numerical waveforms used for the fit.
|
In this paper, we generalize the compact subcell weighted essentially
non-oscillatory (CSWENO) limiting strategy for the Runge-Kutta discontinuous
Galerkin method, developed recently by us in 2021 for structured meshes, to
unstructured triangular meshes. The main idea of the limiting strategy is to
divide the immediate neighbors of a given cell into the required stencil and
to use a WENO reconstruction for limiting. This strategy can be applied to any
type of WENO reconstruction. We have used the WENO reconstruction proposed by
Zhu and Shu in 2019 and provide accuracy tests and results for the
two-dimensional Burgers' equation and the two-dimensional Euler equations to
illustrate the performance of this limiting strategy.
|
Atmospheric heavy elements have been observed in more than a quarter of white
dwarfs (WDs) at different cooling ages, indicating ongoing accretion of
asteroidal material, whilst only a few per cent of the WDs possess a dust disk,
and all these WDs are accreting metals. Here, assuming that a rubble-pile
asteroid is scattered inside a WD's Roche lobe by a planet, we study its tidal
disruption and the long-term evolution of the resulting fragments. We find that
after a few pericentric passages, the asteroid is shredded into its constituent
particles, forming a flat, thin ring. On a timescale of Myr, tens of per cent
of the particles are scattered onto the WD, and are therefore directly accreted
without first passing through a circularised close-in disk. Fragment mutual
collisions are most effective for coplanar fragments, and are thus only
important in $10^3-10^4$ yr before the orbital coplanarity is broken by the
planet. We show that for a rubble-pile asteroid whose component particles have
a size frequency distribution following that of the near-Earth objects, the
asteroid must be at least roughly 10 km in radius for enough fragments to be
generated and for $\ge10\%$ of its mass to be lost to mutual collisions.
At relative velocities of tens of km/s, such collisions grind down the tidal
fragments into smaller and smaller dust grains. The WD radiation forces may
shrink those grains' orbits, forming a dust disk. Tidal disruption of a
monolithic asteroid creates large km-size fragments, and only parent bodies
$\ge100$ km are able to generate enough fragments for mutual collisions to be
significant. Hence, those large asteroids experience a disk phase before being
accreted.
|
In the fusion community, the use of high performance computing (HPC) has been
mostly dominated by heavy-duty plasma simulations, such as those based on
particle-in-cell and gyrokinetic codes. However, there has been a growing
interest in applying machine learning for knowledge discovery on top of large
amounts of experimental data collected from fusion devices. In particular, deep
learning models are especially hungry for accelerated hardware, such as
graphics processing units (GPUs), and it is becoming more common to find those
models competing for the same resources that are used by simulation codes,
which can be either CPU- or GPU-bound. In this paper, we give examples of deep
learning models -- such as convolutional neural networks, recurrent neural
networks, and variational autoencoders -- that can be used for a variety of
tasks, including image processing, disruption prediction, and anomaly detection
on diagnostics data. In this context, we discuss how deep learning can go from
using a single GPU on a single node to using multiple GPUs across multiple
nodes in a large-scale HPC infrastructure.
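As one common pattern for the single-GPU-to-multi-node scaling mentioned above, a model can be wrapped for data-parallel training; the sketch below uses PyTorch's DistributedDataParallel and assumes a torchrun launch (which sets LOCAL_RANK), purely as an illustration rather than the specific setup used in fusion codes.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def wrap_for_distributed(model):
        # One process per GPU; NCCL handles intra- and inter-node collectives.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
        # Gradients are all-reduced across processes during backward().
        return DDP(model.cuda(local_rank), device_ids=[local_rank])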
|
In this paper, we study the generalization performance of min $\ell_2$-norm
overfitting solutions for the neural tangent kernel (NTK) model of a two-layer
neural network with ReLU activation that has no bias term. We show that,
depending on the ground-truth function, the test error of overfitted NTK models
exhibits characteristics that are different from the "double-descent" of other
overparameterized linear models with simple Fourier or Gaussian features.
Specifically, for a class of learnable functions, we provide a new upper bound
of the generalization error that approaches a small limiting value, even when
the number of neurons $p$ approaches infinity. This limiting value further
decreases with the number of training samples $n$. For functions outside of
this class, we provide a lower bound on the generalization error that does not
diminish to zero even when $n$ and $p$ are both large.
|
We investigate the kinematic properties of a large (N=998) sample of COSMOS
spectroscopic galaxy members distributed among 79 groups. We identify the
Brightest Group Galaxies (BGGs) and cross-match our data with the VLA-COSMOS
Deep survey at 1.4 GHz, classifying our parent sample into radio/non-radio BGGs
and radio/non-radio satellites. The radio luminosity distribution spans from
$L_R\sim2\times10^{21}$ W Hz$^{-1}$ to $L_R\sim3\times$10$^{25}$ W Hz$^{-1}$. A
phase-space analysis, performed by comparing the velocity ratio (line-of-sight
velocity divided by the group velocity dispersion) with the galaxy-group centre
offset, reveals that BGGs (radio and non-radio) are mostly ($\sim$80\%) ancient
infallers. Furthermore, the strongest ($L_R>10^{23}$ W Hz$^{-1}$) radio
galaxies are always found within 0.2$R_{\rm vir}$ from the group centre.
Comparing our samples with HORIZON-AGN, we find that the velocities and offsets
of simulated galaxies are more similar to radio BGGs than to non-radio BGGs,
albeit statistical tests still highlight significant differences between
simulated and real objects. We find that radio BGGs are more likely to be
hosted in high-mass groups. Finally, we observe correlations between the powers
of BGG radio galaxies and the X-ray temperatures, $T_{\rm x}$, and X-ray
luminosities, $L_{\rm x}$, of the host groups. This supports the existence of a
link between the intragroup medium and the central radio source. The occurrence
of powerful radio galaxies at group centres can be explained by Chaotic Cold
Accretion, as the AGN can feed from both the galactic and intragroup
condensation, leading to the observed positive $L_{\rm R}-T_{\rm x}$
correlation.
|
Spectral-based subspace clustering methods have proved successful in many
challenging applications such as gene sequencing, image recognition, and motion
segmentation. In this work, we first propose a novel spectral-based subspace
clustering algorithm that seeks to represent each point as a sparse convex
combination of a few nearby points. We then extend the algorithm to constrained
clustering and active learning settings. Our motivation for developing such a
framework stems from the fact that typically either a small amount of labelled
data is available in advance; or it is possible to label some points at a cost.
The latter scenario is typically encountered in the process of validating a
cluster assignment. Extensive experiments on simulated and real data sets show
that the proposed approach is effective and competitive with state-of-the-art
methods.
|
We present the first determination of transverse momentum dependent (TMD)
photon densities with the Parton Branching method. The photon distribution is
generated perturbatively without an intrinsic photon component. The input
parameters for quarks and gluons are determined from fits to precision
measurements of deep inelastic scattering cross sections at HERA. The TMD
densities are used to predict the mass and transverse momentum spectra of very
high mass lepton pairs from both Drell-Yan production and Photon-Initiated
lepton processes at the LHC.
|
The increase of generation capacity in the area of responsibility of the
distribution system operator (DSO) requires strengthening of coordination
between transmission system operator (TSO) and DSO in order to prevent
conflicting or counteracting use of flexibility options. For this purpose,
methods for the standardized description and identification of the aggregated
flexibility potential of distribution grids (DGs) are developed. Approaches for
identifying the feasible operation region (FOR) of DGs can be categorized into
two main classes: Data-driven/stochastic approaches and optimization based
approaches. While the latter have the advantage of working in real-world
scenarios where no full grid models exist, when relying on naive sampling
strategies, they suffer from poor coverage of the edges of the FOR. To underpin
the need for improved sampling strategies for data-driven approaches, in this
paper we point out and analyse the shortcomings of naive sampling strategies
with a focus on the problem of the leptokurtic distribution of the resulting
interconnection power flows (IPFs). We refer to this problem as the convolution
problem, as it can be traced back to the fact that the probability density
function (PDF) of the sum of two or more independent random variables is the
convolution of their respective PDFs. To demonstrate the convolution problem,
we construct a series of synthetic 0.4 kV feeders, which are characterized by
an increasing number of nodes and apply a sampling strategy to them that draws
set-values for the controllable distributed energy resources (DERs) from
independent uniform distributions. By calculating the power flow for each
sample in each feeder, we end up with a collapsing IPF point cloud clearly
indicating the convolution problem.
|
Experiments with pretrained models such as BERT are often based on a single
checkpoint. While the conclusions drawn apply to the artifact (i.e., the
particular instance of the model), it is not always clear whether they hold for
the more general procedure (which includes the model architecture, training
data, initialization scheme, and loss function). Recent work has shown that
re-running pretraining can lead to substantially different conclusions about
performance, suggesting that alternative evaluations are needed to make
principled statements about procedures. To address this question, we introduce
MultiBERTs: a set of 25 BERT-base checkpoints, trained with similar
hyper-parameters as the original BERT model but differing in random
initialization and data shuffling. The aim is to enable researchers to draw
robust and statistically justified conclusions about pretraining procedures.
The full release includes 25 fully trained checkpoints, as well as statistical
guidelines and a code library implementing our recommended hypothesis testing
methods. Finally, for five of these models we release a set of 28 intermediate
checkpoints in order to support research on learning dynamics.
|
Functional Magnetic Resonance Imaging (fMRI) maps cerebral activation in
response to stimuli but this activation is often difficult to detect,
especially in low-signal contexts and single-subject studies. Accurate
activation detection can be guided by the fact that very few voxels are, in
reality, truly activated and that activated voxels are spatially localized, but
it is challenging to incorporate both these facts. We provide a computationally
feasible and methodologically sound model-based approach, implemented in the R
package MixfMRI, that bounds the a priori expected proportion of activated
voxels while also incorporating spatial context. Results on simulation
experiments for different levels of activation detection difficulty are
uniformly encouraging. The value of the methodology in low-signal and
single-subject fMRI studies is illustrated on a sports imagination experiment.
Concurrently, we also extend the potential use of fMRI as a clinical tool to,
for example, detect awareness and improve treatment in individual patients in
persistent vegetative state, such as traumatic brain injury survivors.
|
A generalised notion of Kac-Moody algebra is defined using smooth maps from a
compact real manifold $\mathcal{M}$ to a finite-dimensional Lie group, by means
of complete orthonormal bases for a Hermitian inner product on the manifold and
a Fourier expansion. The Peter--Weyl theorem for the case of manifolds related
to compact Lie groups and coset spaces is discussed, and appropriate Hilbert
bases for the space $L^{2}(\mathcal{M})$ of square-integrable functions are
constructed. It is shown that such bases are characterised by the
representation theory of the compact Lie group, from which a complete set of
labelling operators is obtained. The existence of central extensions of
generalised Kac-Moody algebras is analysed using a duality property of
Hermitian operators on the manifold, and the corresponding root systems are
constructed. Several applications of physically relevant compact groups and
coset spaces are discussed.
|
We present two fast algorithms which apply inclusion-exclusion principle to
sum over the bosonic diagrams in bare diagrammatic quantum Monte Carlo (dQMC)
and inchworm Monte Carlo method, respectively. In the case of inchworm Monte
Carlo, the proposed fast algorithm gives an extension to the work
["Inclusion-exclusion principle for many-body diagrammatics", Phys. Rev. B,
98:115152, 2018] from fermionic to bosonic systems. We prove that the proposed
fast algorithms reduce the computational complexity from double factorial to
exponential. Numerical experiments are carried out to verify the theoretical
results and to compare the efficiency of the methods.
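To make the double-factorial baseline concrete: at order $2n$, the bare bosonic diagram sum runs over all pairings of the $2n$ vertices, of which there are $(2n-1)!!$. The sketch below (our illustration, not the paper's fast algorithm) naively enumerates such a pairing sum, i.e., a hafnian-type sum of products of pair propagators; this is exactly the double-factorial cost that the inclusion-exclusion algorithms reduce to exponential:

```python
import numpy as np

def pairing_sum(A):
    """Sum over all perfect matchings (pairings) of the index set of A.

    A[i, j] plays the role of a pair propagator. The number of pairings of
    2n indices is (2n-1)!!, so this naive recursion has double-factorial
    cost -- the baseline the fast algorithms improve upon.
    """
    def rec(rem):
        if not rem:
            return 1.0
        i, rest = rem[0], rem[1:]
        total = 0.0
        for k, j in enumerate(rest):
            total += A[i, j] * rec(rest[:k] + rest[k + 1:])
        return total
    return rec(list(range(A.shape[0])))

A = np.random.default_rng(0).random((6, 6))
A = (A + A.T) / 2          # symmetric "propagator" matrix
print(pairing_sum(A))      # sums over (6-1)!! = 15 pairings
```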
|
Regret-based algorithms are highly efficient at finding approximate Nash
equilibria in sequential games such as poker games. However, most regret-based
algorithms, including counterfactual regret minimization (CFR) and its
variants, rely on iterate averaging to achieve convergence. Inspired by recent
advances on last-iterate convergence of optimistic algorithms in zero-sum
normal-form games, we study this phenomenon in sequential games, and provide a
comprehensive study of last-iterate convergence for zero-sum extensive-form
games with perfect recall (EFGs), using various optimistic regret-minimization
algorithms over treeplexes. This includes algorithms using the vanilla entropy
or squared Euclidean norm regularizers, as well as their dilated versions which
admit more efficient implementation. In contrast to CFR, we show that all of
these algorithms enjoy last-iterate convergence, with some of them even
converging exponentially fast. We also provide experiments to further support
our theoretical results.
|
Pre-trained language models such as ClinicalBERT have achieved impressive
results on tasks such as medical Natural Language Inference. At first glance,
this may suggest that these models are able to perform medical reasoning tasks,
such as mapping symptoms to diseases. However, we find that standard benchmarks
such as MedNLI contain relatively few examples that require such forms of
reasoning. To better understand the medical reasoning capabilities of existing
language models, in this paper we introduce DisKnE, a new benchmark for Disease
Knowledge Evaluation. To construct this benchmark, we annotated each positive
MedNLI example with the types of medical reasoning that are needed. We then
created negative examples by corrupting these positive examples in an
adversarial way. Furthermore, we define training-test splits per disease,
ensuring that no knowledge about test diseases can be learned from the training
data, and we canonicalize the formulation of the hypotheses to avoid the
presence of artefacts. This leads to a number of binary classification
problems, one for each type of reasoning and each disease. When analysing
pre-trained models for the clinical/biomedical domain on the proposed
benchmark, we find that their performance drops considerably.
|
Pre-trained language models achieve outstanding performance in NLP tasks.
Various knowledge distillation methods have been proposed to reduce the heavy
computation and storage requirements of pre-trained language models. However,
from our observations, student models acquired by knowledge distillation suffer
from adversarial attacks, which limits their usage in security-sensitive
scenarios. To overcome these security problems, we propose RoSearch, a
comprehensive framework that searches for student models with better
adversarial robustness when performing knowledge distillation. A directed
acyclic-graph-based search space is built, and an evolutionary search strategy
is utilized to guide the search. Each searched architecture is
trained by knowledge distillation on a pre-trained language model and then
evaluated under a robustness-, accuracy- and efficiency-aware metric as
environmental fitness. Experimental results show that RoSearch can improve
robustness of student models from 7%~18% up to 45.8%~47.8% on different
datasets with comparable weight compression ratio to existing distillation
methods (4.6$\times$~6.5$\times$ improvement from teacher model BERT_BASE) and
low accuracy drop. In addition, we summarize the relationship between student
architecture and robustness through statistics of searched models.
|
We investigate the near horizon geometry of the simplest representative of
the class of axisymmetric space-times: the Kerr Vaidya metrics. Kerr Vaidya
metrics can be derived from the Vaidya metric by the complex coordinate
transformation suggested by Newman and Janis. We show that the energy-momentum
tensor belongs to type III in the Segre-Hawking-Ellis classification but has a
special form with all Lorentz-invariant eigenvalues equal to zero. We find
a location of the apparent horizon for quasi-stationary Kerr Vaidya black
holes. The energy-momentum tensor of the Kerr Vaidya geometries violates the
null energy condition. We show that energy density, pressure, and flux for an
infalling observer are diverging in the outgoing Kerr Vaidya metric. This
firewall leads to the violation of a specific quantum energy inequality.
|
We consider a massless higher spin field theory within the BRST approach and
construct a general off-shell cubic vertex corresponding to irreducible higher
spin fields of helicities $s_1, s_2, s_3$. Unlike the previous works on cubic
vertices, which do not take into account the trace constraints, we use the
complete BRST operator, including the trace constraints that describe an
irreducible representation with definite integer helicity. As a result, we
generalize the cubic vertex found in [arXiv:1205.3131 [hep-th]] and calculate
the new contributions to the vertex, which contain additional terms with a
smaller number of space-time derivatives of the fields, as well as terms
without derivatives.
|
Fluid, heat and species transport and oxygen reduction in the cathode of a
PEM fuel cell are simulated using a multiple-relaxation-time lattice Boltzmann method.
Heat generation due to oxygen reduction and its effects on transport and
reaction are considered. Simulations for various cell operating voltages,
temperatures and flow rates with various values of porous media properties,
namely, permeability, porosity, and effective porous media diffusion
coefficient, are performed to study transport and operating characteristics in
the electrode. It is seen that the maximum achievable output power density is
limited by the mass transport rate. A small increase in current density is
obtained by increasing the operating temperature. However, this results in an
increase in the rate of heat generation. Permeability and porosity of the gas
diffusion layer do not show a significant impact on the performance in the
range of values presently simulated. Higher permeability, however, resulted in
enhanced thermal gradients in the porous layer. A significant increase in
the maximum obtainable current density is observed with an increase in flow rate.
The higher convection associated with high flow rate facilitates better
transport of species and heat at the catalyst layer resulting in larger current
density with a lesser chance of hotspot formation. An increased species diffusion
coefficient also raised the power output substantially. In
addition, the fuel utilization is also improved at high diffusion rates in the
porous media. The study analyses the impact of various operating and material
parameters on the performance of a PEM fuel cell, with special attention to
enhancing the maximum attainable power density.
|
In $d$-dimensional CFTs with a large number of degrees of freedom, an important
set of operators consists of the stress tensor and its products, multi stress
tensors. Thermalization of such operators, the equality between their
expectation values in heavy states and at finite temperature, is equivalent to
a universal behavior of their OPE coefficients with a pair of identical heavy
operators. We verify this behavior in a number of examples which include
holographic and free CFTs and provide a bootstrap argument for the general
case. In a free CFT we check the thermalization of multi stress tensor
operators directly and also confirm the equality between the contributions of
multi stress tensors to heavy-heavy-light-light correlators and to the
corresponding thermal light-light two-point functions by disentangling the
contributions of other light operators. Unlike multi stress tensors, these
light operators violate the Eigenstate Thermalization Hypothesis and do not
thermalize.
|
Following ideas of Gromov we prove scalar and mean curvature comparison
results for Riemannian bands with lower scalar curvature bounds in dimension
$n\leq7$. The model spaces we use are warped products over scalar-flat
manifolds with $\log$-concave warping functions.
|
Video watermarking embeds a message into a cover video in an imperceptible
manner, which can be retrieved even if the video undergoes certain
modifications or distortions. Traditional watermarking methods are often
manually designed for particular types of distortions and thus cannot
simultaneously handle a broad spectrum of distortions. To this end, we propose
a robust deep learning-based solution for video watermarking that is end-to-end
trainable. Our model consists of a novel multiscale design where the watermarks
are distributed across multiple spatial-temporal scales. It gains robustness
against various distortions through a differentiable distortion layer, whereas
non-differentiable distortions, such as popular video compression standards,
are modeled by a differentiable proxy. Extensive evaluations on a wide variety
of distortions show that our method outperforms traditional video watermarking
methods as well as deep image watermarking models by a large margin. We further
demonstrate the practicality of our method on a realistic video-editing
application.
|
Since the beginning of the COVID-19 pandemic, many dashboards have emerged as
useful tools to monitor the evolution of the pandemic, inform the public, and
assist governments in decision making. Our goal is to develop a globally
applicable method, integrated in a twice daily updated dashboard that provides
an estimate of the trend in the evolution of the number of cases and deaths
from reported data of more than 200 countries and territories, as well as a
seven-day forecast. One of the significant difficulties in managing a quickly
propagating epidemic is that the details of the dynamics needed to forecast its
evolution are obscured by delays in the identification of cases and deaths
and by irregular reporting. Our forecasting methodology relies substantially on
estimating the underlying trend in the observed time series using robust
seasonal trend decomposition techniques. This allows us to obtain forecasts
with simple, yet effective extrapolation methods in linear or log scale. We
present the results of an assessment of our forecasting methodology and discuss
its application to the production of global and regional risk maps.
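A minimal sketch of this pipeline, assuming daily counts with weekly seasonality (the period, fitting window, and horizon below are illustrative choices of ours, not the dashboard's actual settings):

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

def forecast_counts(daily_counts, horizon=7, fit_window=14):
    """Robust weekly STL on the log scale, then linear trend extrapolation."""
    y = np.log1p(np.asarray(daily_counts, dtype=float))
    # robust=True downweights outliers caused by irregular reporting.
    trend = np.asarray(STL(y, period=7, robust=True).fit().trend)
    t = np.arange(len(trend))
    # Fit a line to the recent log-trend and extrapolate the horizon.
    slope, intercept = np.polyfit(t[-fit_window:], trend[-fit_window:], 1)
    future = np.arange(len(trend), len(trend) + horizon)
    return np.expm1(slope * future + intercept)

# Synthetic series: exponential growth with a weekly reporting pattern.
rng = np.random.default_rng(1)
t = np.arange(120)
counts = np.exp(0.03 * t + 0.3 * np.sin(2 * np.pi * t / 7))
counts *= rng.lognormal(0.0, 0.1, t.size)
print(forecast_counts(counts).round(1))
```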
|
We study the impact of non-local modifications of General Relativity on
stellar structure. In particular, assuming an analytic distortion function, we
make use of remnant stars to put qualitative constraints on a parameter not
directly restricted by solar-system tests. Using current data sets available
for white dwarfs and strange quark star candidates, we find that the most
stringent bounds come from the objects displaying the highest core densities,
such as strange quark stars and neutron stars. Specifically, the constraints
obtained from this class of stars are three to four orders of magnitude tighter
than those obtained using white dwarfs.
|
Let p be prime. We describe explicitly the resolution of singularities of
several families of wild Z/pZ-quotient singularities in dimension two,
including families that generalize the quotient singularities of type E_6, E_7,
and E_8 from p=2 to arbitrary characteristics. We prove that for odd primes,
any power of p can appear as the determinant of the intersection matrix of a
wild Z/pZ-quotient singularity. We also provide evidence towards the conjecture
that in this situation one may choose the wild action to be ramified precisely
at the origin.
|
We perform a zero-$\beta$ magnetohydrodynamic simulation for the C7.7 class
flare initiated at 01:18 UT on 2011 June 21 using the Message Passing Interface
Adaptive Mesh Refinement Versatile Advection Code (MPI-AMRVAC). The initial
condition for the simulation involves a flux rope which we realize through the
regularized Biot-Savart laws, whose parameters are constrained by observations
from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory
(SDO) and the Extreme Ultraviolet Imager (EUVI) on the twin Solar Terrestrial
Relations Observatory (STEREO). This data-constrained initial state is then
relaxed to a force-free state by the magneto-frictional module in MPI-AMRVAC.
The further time-evolving simulation results reproduce the eruption
characteristics obtained by SDO/AIA 94 Å, 304 Å, and STEREO/EUVI 304 Å
observations fairly well. The simulated flux rope possesses an eruption
direction, height range, and velocities similar to the observations. In particular, the
two phases of slow evolution and fast eruption are reproduced by varying the
density distribution in light of the filament material draining process. Our
data-constrained simulations also show other advantages, such as a large field
of view (about 0.76 solar radii). We study the twist of the magnetic flux rope
and the decay index of the overlying field, and find that in this event, both
the magnetic strapping force and the magnetic tension force are sufficiently
weak compared with the magnetic hoop force, thus allowing the successful
eruption of the flux rope. We also find that anomalous resistivity is necessary
for maintaining the correct morphology of the erupting flux rope.
|
Our data challenge Chen et al.'s interpretation of smFRET results, i.e. the
repetitive unfolding of G4 with one-base translocations. We believe that the
observed oscillatory curve represents the alternate binding of DHX36 to the 3'
and 5'G-tetrad of G4s, rather than the repetitive unfolding between the
canonical and the transformed non-canonical G4s. Notably, our results
reported here also call into question previously published smFRET data on
helicase-mediated G4 unfolding. Therefore, discriminating the smFRET signal of
repetitive binding from that of unfolding is very important to avoid
misinterpreting smFRET results.
|
To improve the convergence property of approximate message-passing (AMP),
convolutional AMP (CAMP) has been proposed. CAMP replaces the Onsager
correction in AMP with a convolution of messages in all preceding iterations
while it uses the same low-complexity matched filter (MF) as AMP. This paper
derives state evolution (SE) equations to design the Bayes-optimal denoiser in
CAMP. Numerical results imply that CAMP with the Bayes-optimal denoiser--called
Bayes-optimal CAMP--can achieve the Bayes-optimal performance for
right-orthogonally invariant sensing matrices with low-to-moderate condition
numbers.
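For context, a textbook sketch of standard AMP for $y = Ax + w$ (our illustration, using a soft-thresholding denoiser rather than the paper's Bayes-optimal one): the iteration combines a matched filter, a denoiser, and an Onsager-corrected residual; CAMP keeps the first two and replaces the Onsager term with a convolution of messages over all preceding iterations.

```python
import numpy as np

def soft(u, tau):
    """Soft-thresholding denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def amp(A, y, n_iter=30, tau=0.1):
    """Standard AMP with soft thresholding and the Onsager correction.

    CAMP would keep the same matched filter A.T @ z but replace the
    Onsager term below with a convolution over all past residuals.
    """
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                            # matched-filter estimate
        x_new = soft(r, tau)
        onsager = z * np.mean(np.abs(x_new) > 0) * (n / m)
        z = y - A @ x_new + onsager                # corrected residual
        x = x_new
    return x

rng = np.random.default_rng(0)
m, n = 100, 200
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[:10] = rng.normal(size=10)    # sparse signal
y = A @ x0 + 0.01 * rng.normal(size=m)
print(np.linalg.norm(amp(A, y) - x0))
```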
|
We compute the $\text{SO}(n+1)$-equivariant mod $2$ Borel cohomology of the
free iterated loop space $Z^{S^n}$ when $n$ is odd and $Z$ is a product of mod
$2$ Eilenberg Mac Lane spaces. When $n=1$, this recovers Ottosen and
B\"okstedt's computation for the free loop space. The highlight of our
computation is a construction of cohomology classes using an
$\text{O}(n)$-equivariant evaluation map and a pushforward map. We then
reinterpret our computation as giving a presentation of the zeroth derived
functor of the Borel cohomology of $Z^{S^n}$ for arbitrary $Z$. We also include
an appendix where we give formulas for computing the zeroth derived functor of
the cohomology of mapping spaces, and study the dependence of such derived
functors on the Steenrod operations.
|
Anomaly detection techniques are growing in importance at the Large Hadron
Collider (LHC), motivated by the increasing need to search for new physics in a
model-agnostic way. In this work, we provide a detailed comparative study
between a well-studied unsupervised method called the autoencoder (AE) and a
weakly-supervised approach based on the Classification Without Labels (CWoLa)
technique. We examine the ability of the two methods to identify a new physics
signal at different cross sections in a fully hadronic resonance search. By
construction, the AE classification performance is independent of the amount of
injected signal. In contrast, the CWoLa performance improves with increasing
signal abundance. When integrating these approaches with a complete background
estimate, we find that the two methods have complementary sensitivity. In
particular, CWoLa is effective at finding diverse and moderately rare signals
while the AE can provide sensitivity to very rare signals, but only with
certain topologies. We therefore demonstrate that both techniques are
complementary and can be used together for anomaly detection at the LHC.
|
Prescription Drug Monitoring Programs (PDMPs) seek to reduce
opioid misuse by restricting the sale of opioids in a state. We examine
discontinuities along state borders, where one side may have a PDMP and the
other side may not. We find that electronic PDMP implementation, whereby
doctors and pharmacists can observe a patient's opioid purchase history,
reduces a state's opioid sales but increases opioid sales in neighboring
counties on the other side of the state border. We also find systematic
differences in opioid sales and mortality between border counties and interior
counties. These differences decrease when neighboring states both have ePDMPs,
which is consistent with the hypothesis that individuals cross state lines to
purchase opioids. Our work highlights the importance of understanding the
opioid market as connected across counties or states, as we show that states
are affected by the opioid policies of their neighbors.
|
This article deals with the uniqueness in identifying multiple parameters
simultaneously in the one-dimensional time-fractional diffusion-wave equation
of fractional time-derivative order $\in (0,2)$ with the zero Robin boundary
condition. Using the Laplace transform and a transformation formula, we prove
the uniqueness in determining the order of the fractional derivative, a
spatially varying potential, initial values and Robin coefficients
simultaneously by boundary measurement data, provided that all the eigenmodes
of an initial value do not vanish. Furthermore, for another formulation of the
inverse problem with an input source term in place of an initial value, using
the uniqueness in the case of a non-zero initial value and a Duhamel principle, we
prove the simultaneous uniqueness in determining multiple parameters for a
time-fractional diffusion-wave equation.
|
We develop a new nonparametric method to reconstruct the Equation of State
(EoS) of neutron stars with multimessenger data. As a universal function
approximator, a Feed-Forward Neural Network (FFNN) with one hidden layer and
a sigmoidal activation function can approximate any continuous function.
Thus we are able to implement a nonparametric FFNN representation of the
EoSs. This new representation is validated by its capabilities of fitting the
theoretical EoSs and recovering the injected parameters. Then we adopt this
nonparametric method to analyze the real data, including mass-tidal
deformability measurement from the Binary Neutron Star (BNS) merger
Gravitational Wave (GW) event GW170817 and mass-radius measurement of PSR
J0030+0451 by {\it NICER}. We take the publicly available samples to construct
the likelihood and use nested sampling to obtain the posteriors of the FFNN
parameters according to Bayes' theorem, which in turn can be
translated to the posteriors of EoS parameters. Combining all these data, for a
canonical 1.4 $M_\odot$ neutron star, we get the radius
$R_{1.4}=11.83^{+1.25}_{-1.08}$ km and the tidal deformability $\Lambda_{1.4} =
323^{+334}_{-165}$ (90\% confidence interval). Furthermore, we find that in the
high density region ($\geq 3\rho_{\rm sat}$), the 90\% lower limits of the
$c_{\rm s}^2/c^2$ ($c_{\rm s}$ is the sound speed and $c$ is the velocity of
light in vacuum) are above $1/3$, which means that the so-called conformal
limit (i.e., $c_{\rm s}^2/c^2<1/3$) is not always valid in neutron stars.
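For concreteness, a one-hidden-layer sigmoid FFNN of the kind described can be written in a few lines; the mapping (density in units of $\rho_{\rm sat}$ to pressure) and the layer width here are our illustrative assumptions, and in the actual analysis the parameter vector would receive a posterior via nested sampling:

```python
import numpy as np

def ffnn_eos(theta, rho, n_hidden=8):
    """One-hidden-layer FFNN with sigmoid activation: density -> pressure.

    theta packs (w1, b1, w2, b2); with one input and one output this is
    n_hidden + n_hidden + n_hidden + 1 parameters.
    """
    w1 = theta[:n_hidden]
    b1 = theta[n_hidden:2 * n_hidden]
    w2 = theta[2 * n_hidden:3 * n_hidden]
    b2 = theta[-1]
    hidden = 1.0 / (1.0 + np.exp(-(np.outer(rho, w1) + b1)))  # sigmoid layer
    return hidden @ w2 + b2

rng = np.random.default_rng(0)
theta = rng.normal(size=3 * 8 + 1)
rho = np.linspace(0.5, 6.0, 50)     # density grid (illustrative units)
print(ffnn_eos(theta, rho).shape)   # one pressure value per density: (50,)
```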
|
This paper considers a massive MIMO system under the double scattering
channels. We derive a closed-form expression of the uplink ergodic spectral
efficiency (SE) by exploiting the maximum-ratio combining technique with
imperfect channel state information. We then formulate and solve a total uplink
data power optimization problem that aims at simultaneously satisfying the
required SEs from all the users with limited power resources. We further
propose algorithms to cope with the congestion issue appearing when at least
one user is served by lower SE than requested. Numerical results illustrate the
effectiveness of our proposed power optimization. More importantly, our
proposed congestion-handling algorithms can guarantee the required SEs to many
users under congestion, even when the SE requirement is high.
|
The processing of low-frequency interaural time differences is found to be
problematic among hearing-impaired people. The current generation of
beamformers does not consider this deficiency. In an attempt to tackle this
issue, we propose to replace the inaudible interaural time differences in the
low-frequency region with the interaural level differences. In addition, a
beamformer is introduced and analyzed, which enhances the low-frequency
interaural level differences of the sound sources using a near-field
transformation. The proposed beamforming problem is relaxed to a convex problem
using semi-definite relaxation. The instrumental analysis suggests that the
low-frequency interaural level differences are enhanced without hindering the
provided intelligibility. A psychoacoustic localization test is conducted using a
listening experiment, which suggests that replacing time differences
with level differences improves the localization performance of normal-hearing
listeners for an anechoic scene but not for a reverberant scene.
|
We propose a method for verifying that a given feasible point for a
polynomial optimization problem is globally optimal. The approach relies on the
Lasserre hierarchy and the result of Lasserre regarding the importance of the
convexity of the feasible set as opposed to that of the individual constraints.
By focusing solely on certifying global optimality and relaxing the Lasserre
hierarchy using necessary conditions for positive semidefiniteness based on
matrix determinants, the proposed method is implementable as a computationally
tractable linear program. We demonstrate this method via application to several
instances of polynomial optimization, including the optimal power flow problem
used to operate electric power systems.
|
Adversarial learning can learn fairer and less biased models of language than
standard methods. However, current adversarial techniques only partially
mitigate model bias, and their training procedures are often
unstable. In this paper, we propose a novel approach to adversarial learning
based on the use of multiple diverse discriminators, whereby discriminators are
encouraged to learn orthogonal hidden representations from one another.
Experimental results show that our method substantially improves over standard
adversarial removal methods, in terms of both bias reduction and training
stability.
|
In recent years, machine learning and AI have been introduced in many
industrial fields. In fields such as finance, medicine, and autonomous driving,
where the inference results of a model may have serious consequences, high
interpretability as well as prediction accuracy is required. In this study, we
propose CGA2M+, which is based on the Generalized Additive 2 Model (GA2M) and
differs from it in two major ways. The first is the introduction of
monotonicity. Imposing monotonicity on some functions based on an analyst's
knowledge is expected to improve not only interpretability but also
generalization performance. The second is the introduction of a higher-order
term: given that GA2M considers only second-order interactions, we aim to
balance interpretability and prediction accuracy by introducing a higher-order
term that can capture higher-order interactions. In this way, we can improve
prediction performance without compromising interpretability. Numerical
experiments showed that the proposed model has
high predictive performance and interpretability. Furthermore, we confirmed
that generalization performance is improved by introducing monotonicity.
|
We study automated intrusion prevention using reinforcement learning. In a
novel approach, we formulate the problem of intrusion prevention as an optimal
stopping problem. This formulation gives us insight into the structure of the
optimal policies, which turn out to be threshold based. Since the computation
of the optimal defender policy using dynamic programming is not feasible for
practical cases, we approximate the optimal policy through reinforcement
learning in a simulation environment. To define the dynamics of the simulation,
we emulate the target infrastructure and collect measurements. Our evaluations
show that the learned policies are close to optimal and that they indeed can be
expressed using thresholds.
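A toy sketch of that threshold structure (the belief dynamics and all numbers below are illustrative, not the paper's emulated infrastructure): the defender tracks a posterior that an intrusion has started and stops, i.e., takes the defensive action, once the belief crosses a threshold.

```python
import numpy as np

def run_episode(threshold=0.8, p_attack=0.05, seed=0):
    """Toy optimal-stopping defender with a threshold policy.

    A Bayes update tracks the belief that an intrusion is under way from
    noisy alarm observations; the policy is 'stop once belief >= threshold'.
    """
    rng = np.random.default_rng(seed)
    belief, attacked = 0.01, False
    for t in range(200):
        attacked = attacked or (rng.random() < p_attack)   # attack may start
        alarm = rng.random() < (0.7 if attacked else 0.1)  # noisy observation
        like_a = 0.7 if alarm else 0.3    # P(observation | attack)
        like_n = 0.1 if alarm else 0.9    # P(observation | no attack)
        belief = like_a * belief / (like_a * belief + like_n * (1 - belief))
        if belief >= threshold:
            return t, attacked            # stopping time and ground truth
    return None, attacked

print(run_episode())
```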
|
We describe the models we built for hospital admissions and occupancy of
COVID-19 patients in the Netherlands. These models were used to make short-term
decisions about transfers of patients between regions and for long-term policy
making. We motivate and describe the model we used for predicting admissions
and how we use this to make predictions on occupancy.
|
With very few exceptions, recent research in fair division has mostly focused
on deterministic allocations. Deviating from this trend, we study the fairness
notion of interim envy-freeness (iEF) for lotteries over allocations, which
serves as a sweet spot between the too stringent notion of ex-post
envy-freeness and the very weak notion of ex-ante envy-freeness. iEF is a
natural generalization of envy-freeness to random allocations in the sense that
a deterministic envy-free allocation is iEF (when viewed as a degenerate
lottery). It is also certainly meaningful as it allows for a richer solution
space, which includes solutions that are provably better than envy-freeness
according to several criteria. Our analysis relates iEF to other fairness
notions as well, and reveals tradeoffs between iEF and efficiency. Even though
several of our results apply to general fair division problems, we are
particularly interested in instances with equal numbers of agents and items
where allocations are perfect matchings of the items to the agents.
In this setting, envy-freeness can be decided trivially and, when it can be
achieved, it implies full efficiency. Although computing iEF allocations in
matching allocation instances is considerably more challenging, we show how to
compute them in polynomial time, while also maximizing several efficiency
objectives. Our algorithms use the ellipsoid method for linear programming and
efficient solutions to a novel variant of the bipartite matching problem as a
separation oracle. We also study the extension of interim envy-freeness notion
when payments to or from the agents are allowed. We present a series of results
on two optimization problems, including a generalization of the classical rent
division problem to random allocations using interim envy-freeness as the
solution concept.
|
The Geroch/Stephani transformation is a solution-generating transformation,
and may generate spiky solutions. The spikes in solutions generated so far are
either early-time permanent spikes or transient spikes. We want to generate a
solution with a late-time permanent spike. We achieve this by applying the
Stephani transformation with the rotational Killing vector field of the locally
rotationally symmetric Jacobs solution. The late-time permanent spike occurs
along the cylindrical axis. The generated solution also features a rich variety
of transient structures. We introduce a new technique to analyse these
structures. Our findings lead us to discover a transient behaviour, which we
call the overshoot transition.
|
Many biological systems can be described by finite Markov models. A general
method for simplifying master equations is presented that is based on merging
adjacent states. The approach preserves the steady-state probability
distribution and all steady-state fluxes except the one between the merged
states. Different levels of coarse graining of the underlying microscopic
dynamics can be obtained by iteration, with the result being independent of the
order in which states are merged. A criterion for the optimal level of coarse
graining or resolution of the process is proposed via a tradeoff between the
simplicity of the coarse-grained model and the information loss relative to the
original model. As a case study, the method is applied to the cycle kinetics of
the molecular motor kinesin.
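One natural reading of this merging rule, sketched below (our illustration, not the paper's code): incoming rates to the merged state add, while outgoing rates are averaged with steady-state weights, which preserves stationarity and all external steady-state fluxes.

```python
import numpy as np

def merge_states(W, pi, a, b):
    """Merge states a and b of a master equation with rate matrix W.

    W[k, j] is the transition rate j -> k and pi the steady state.
    Incoming rates to the merged state add; outgoing rates are
    pi-weighted averages, so pi (with pi_a + pi_b lumped) stays
    stationary and all external steady-state fluxes are preserved.
    """
    keep = [i for i in range(W.shape[0]) if i not in (a, b)]
    n = len(keep)
    Wm = np.zeros((n + 1, n + 1))
    pim = np.append(pi[keep], pi[a] + pi[b])
    Wm[:n, :n] = W[np.ix_(keep, keep)]
    Wm[:n, n] = (pi[a] * W[keep, a] + pi[b] * W[keep, b]) / (pi[a] + pi[b])
    Wm[n, :n] = W[a, keep] + W[b, keep]
    return Wm, pim

# Quick check on a random 4-state chain: the lumped pi stays stationary.
rng = np.random.default_rng(2)
W = rng.random((4, 4))
np.fill_diagonal(W, 0.0)
G = W - np.diag(W.sum(axis=0))         # generator: dp/dt = G @ p
pi = np.abs(np.linalg.svd(G)[2][-1])   # kernel of G
pi /= pi.sum()
Wm, pim = merge_states(W, pi, 0, 1)
Gm = Wm - np.diag(Wm.sum(axis=0))
print(np.allclose(Gm @ pim, 0.0, atol=1e-8))
```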
|
We reconcile for the first time the strict mathematical formalism of
multivariate cumulants with the usage of cumulants in anisotropic flow analyses
in high-energy nuclear collisions. This reconciliation yields the next
generation of observables to be used in flow analyses. We review all
fundamental properties of multivariate cumulants and use them as a foundation
to establish two simple necessary conditions to determine whether some
multivariate observable is a multivariate cumulant in the basis in which it is
expressed. We argue that the properties of cumulants are preserved only for the
stochastic observables on which the cumulant expansion has been performed
directly, and if there are no underlying symmetries due to which some terms in
the cumulant expansion are identically zero. We illustrate one possibility of
how new multivariate cumulants of azimuthal angles can be defined which do
satisfy all fundamental properties of multivariate cumulants, by defining them
event-by-event and by keeping all non-isotropic terms in the cumulant
expansion. We introduce new cumulants of flow amplitudes named Asymmetric
Cumulants, which generalize recently introduced Symmetric Cumulants for the
case when flow amplitudes are raised to different powers. Finally, we present
the new concept of Cumulants of Symmetry Plane Correlations and provide the
first realisation for the lowest orders. All the presented results are
supported by Monte Carlo studies using state-of-the-art models.
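For orientation, and with normalizations that may differ from those adopted in the paper: the Symmetric Cumulant is the second-order joint cumulant of $v_m^2$ and $v_n^2$,
\[ \mathrm{SC}(m,n) = \langle v_m^2 v_n^2 \rangle - \langle v_m^2 \rangle \langle v_n^2 \rangle , \]
while an Asymmetric Cumulant raises the amplitudes to different powers, e.g. the third-order joint cumulant of $(v_m^2, v_m^2, v_n^2)$,
\[ \kappa(v_m^2, v_m^2, v_n^2) = \langle v_m^4 v_n^2 \rangle - \langle v_m^4 \rangle \langle v_n^2 \rangle - 2 \langle v_m^2 v_n^2 \rangle \langle v_m^2 \rangle + 2 \langle v_m^2 \rangle^2 \langle v_n^2 \rangle . \]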
|
A widely recognized difficulty in federated learning arises from the
statistical heterogeneity among clients: local datasets often come from
different but not entirely unrelated distributions, and personalization is,
therefore, necessary to achieve optimal results from each individual's
perspective. In this paper, we show how the excess risks of personalized
federated learning with a smooth, strongly convex loss depend on data
heterogeneity from a minimax point of view. Our analysis reveals a surprising
theorem of the alternative for personalized federated learning: there exists a
threshold such that (a) if a certain measure of data heterogeneity is below
this threshold, the FedAvg algorithm [McMahan et al., 2017] is minimax optimal;
(b) when the measure of heterogeneity is above this threshold, then doing pure
local training (i.e., clients solve empirical risk minimization problems on
their local datasets without any communication) is minimax optimal. As an
implication, our results show that the presumably difficult
(infinite-dimensional) problem of adapting to client-wise heterogeneity can be
reduced to a simple binary decision problem of choosing between the two
baseline algorithms. Our analysis relies on a new notion of algorithmic
stability that takes into account the nature of federated learning.
|
A century ago, Srinivasa Ramanujan -- the great self-taught Indian genius of
mathematics -- died, shortly after returning from Cambridge, UK, where he had
collaborated with Godfrey Hardy. Ramanujan contributed numerous outstanding
results to different branches of mathematics, like analysis and number theory,
with a focus on special functions and series. Here we refer to apparently weird
values which he assigned to two simple divergent series, $\sum_{n \geq 1} n$
and $\sum_{n \geq 1} n^{3}$. These values are sensible, however, as analytic
continuations, which correspond to Riemann's $\zeta$-function. Moreover, they
have applications in physics: we discuss the vacuum energy of the photon field,
from which one can derive the Casimir force, which has been experimentally
measured. We further discuss its interpretation, which remains controversial.
This is a simple way to illustrate the concept of renormalization, which is
vital in quantum field theory.
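Concretely, the assignments are the analytic continuation of $\zeta(s)=\sum_{n \geq 1} n^{-s}$ to negative arguments,
\[ \sum_{n \geq 1} n \;\longrightarrow\; \zeta(-1) = -\frac{1}{12}, \qquad \sum_{n \geq 1} n^{3} \;\longrightarrow\; \zeta(-3) = \frac{1}{120}, \]
the latter being the value that enters the textbook derivation of the Casimir energy between parallel plates.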
|
Nowadays, formal methods are used in various areas for the verification of
programs or for code generation from models in order to increase the quality of
software and to reduce costs. However, there are still fields in which formal
methods have not been widely adopted, despite the large set of possible
benefits offered. This is the case for the area of programmable logic
controllers (PLC). This article aims to evaluate the potential of formal
methods in the context of PLC development. For this purpose, the general
concepts of formal methods are first introduced and then transferred to the PLC
area, resulting in an engineering-oriented description of the technology that
is based on common concepts from PLC development. Based on this description,
PLC professionals with varying degrees of experience were interviewed for their
perspective on the topic and to identify possible use cases within the PLC
domain. The survey results indicate the technology's high potential in the PLC
area, either as a tool to directly support the developer or as a key element
within a model-based systems engineering toolchain. The evaluation of the
survey results is performed with the aid of a demo application that
communicates with the Totally Integrated Automation Portal from Siemens and
generates programs via Fastsynth, a model-based open source code generator.
Benchmarks based on an industry-related PLC project show satisfactory synthesis
times and a successful integration into the workflow of a PLC developer.
|
Inspired by the recent discovery of the $X(6900)$ meson at the {\tt LHCb}
experiment, we investigate the inclusive production rate of the $C$-odd
fully-charmed tetraquarks associated with light hadrons at the $B$ factory
within the nonrelativistic QCD (NRQCD) factorization framework. The
short-distance coefficient is computed at lowest order in velocity and
$\alpha_s$. Employing the diquark-antidiquark model to roughly estimate the
long-distance NRQCD matrix elements, we predict the rate for inclusive
production of the $1^{+-}$ $T_{4c}$ state and discuss the observation prospects
at the {\tt Belle 2} experiment.
|
Modeling distributions on Riemannian manifolds is a crucial component in
understanding non-Euclidean data that arises, e.g., in physics and geology. The
budding approaches in this space are limited by representational and
computational tradeoffs. We propose and study a class of flows that uses convex
potentials from Riemannian optimal transport. These are universal and can model
distributions on any compact Riemannian manifold without requiring domain
knowledge of the manifold to be integrated into the architecture. We
demonstrate that these flows can model standard distributions on spheres and
tori, as well as synthetic and geological data. Our source code is freely available
online at http://github.com/facebookresearch/rcpm
|
Pretrained language models (PLMs) such as BERT adopt a training paradigm
which first pretrains the model on general data and then finetunes it on
task-specific data, and have recently achieved great success. However, PLMs are
notorious for their enormous number of parameters and are hard to deploy in
real-life applications. Knowledge distillation has become prevalent as a way to
address this problem by transferring knowledge from a large teacher to a much
smaller student over a set of data. We argue that the selection of three key
components, namely the teacher, the training data, and the learning objective,
is crucial to the effectiveness of distillation. We, therefore, propose a
four-stage progressive distillation framework, ERNIE-Tiny, to compress PLMs,
which varies the three components gradually from the general level to the
task-specific level. Specifically, the first stage, General Distillation,
performs distillation with guidance from the pretrained teacher, general data,
and a latent distillation loss. Then, General-Enhanced Distillation changes the
teacher model from the pretrained teacher to a finetuned teacher. After that,
Task-Adaptive Distillation shifts the training data from general data to
task-specific data. In the end, Task-Specific Distillation adds two additional
losses, namely the Soft-Label and Hard-Label losses, in the last stage.
Empirical results demonstrate the effectiveness of our framework and the
generalization gain brought by ERNIE-Tiny. In particular, experiments show that
a 4-layer ERNIE-Tiny maintains over 98.0% of the performance of its 12-layer
teacher BERT base on the GLUE benchmark, surpassing the state-of-the-art (SOTA)
by 1.0% GLUE score with the same amount of parameters. Moreover, ERNIE-Tiny
achieves a new compression SOTA on five Chinese NLP tasks, outperforming BERT
base by 0.4% accuracy with 7.5x fewer parameters and 9.4x faster inference
speed.
|
Plant species identification in the wild is a difficult problem in part due
to the high variability of the input data, but also because of complications
induced by the long-tail effects of the datasets distribution. Inspired by the
most recent fine-grained visual classification approaches which are based on
attention to mitigate the effects of data variability, we explore the idea of
using object detection as a form of attention. We introduce a bottom-up
approach based on detecting plant organs and fusing the predictions of a
variable number of organ-based species classifiers. We also curate a new
dataset with a long-tail distribution for evaluating plant organ detection and
organ-based species identification, which is publicly available.
|
Using an effective field theory approach for higher-spin fields, we derive
the interactions of colour singlet and electrically neutral particles with a
spin higher than unity, concentrating on the spin-3/2, spin-2, spin-5/2 and
spin-3 cases. We compute the decay rates and production cross sections in the
main channels for spin-3/2 and spin-2 states at both electron-positron and
hadron colliders, and identify the most promising novel experimental signatures
for discovering such particles at the LHC. The discussion is qualitatively
extended to the spin-5/2 and spin-3 cases. Higher-spin particles exhibit a rich
phenomenology and have signatures that often resemble the ones of
supersymmetric and extra-dimensional theories. To enable further studies of
higher-spin particles at colliders and beyond, we collect the relevant Feynman
rules and other technical details.
|
Length Rate Quotient (LRQ) is the first algorithm of interleaved shaping -- a
novel concept proposed to provide per-flow shaping for a flow aggregate without
per-flow queuing. This concept has been adopted by Time-Sensitive Networking
(TSN) and Deterministic Networking (DetNet). An appealing property of
interleaved shaping is that, when an interleaved shaper is appended to a FIFO
system, it does not increase the worst-case delay of the system. Based on this
"shaping-for-free" property, an approach has been introduced to deliver bounded
end-to-end latency. Specifically, at each output link of a node, class-based
aggregate scheduling is used together with one interleaved shaper per-input
link and per-class, and the interleaved shaper re-shapes every flow to its
initial traffic constraint. In this paper, we investigate other properties of
interleaved LRQ shapers, particularly as stand-alone elements. In addition,
under the per-flow setting, we also investigate per-flow LRQ-based flow aggregation
and derive its properties. The analysis focuses directly on the timing of
operations, such as shaping and scheduling, in the network. This timing-based
method can be found in the Guaranteed Rate (GR) server model and more generally
the max-plus branch of network calculus. With the derived properties, we not
only show that an improved end-to-end latency bound can be obtained for the
current approach, but also demonstrate with two examples that new approaches
may be devised. End-to-end delay bounds for the three approaches are derived
and compared. As a highlight, the two new approaches do not require different
node architectures in allocating (shaping / scheduling) queues, which implies
that they can be readily adapted for use in TSN and DetNet. This, together with
the derived properties of LRQ, sheds new insight on providing the TSN / DetNet
qualities of service.
|
For the Poisson Hail problem, we characterize up to the boundary the
collection of critical moments which guarantee stability in full generality.
In particular, we treat the case of infinite speed of propagation.
|
Human activity recognition in videos has been widely studied and has recently
gained significant advances with deep learning approaches; however, it remains
a challenging task. In this paper, we propose a novel framework that
simultaneously considers both implicit and explicit representations of human
interactions by fusing three sources of information: the local image region
where the interaction actively occurs, the primitive motion given by the
posture of each subject's body parts, and the co-occurrence of overall
appearance change. Human
interactions change, depending on how the body parts of each human interact
with the other. The proposed method captures the subtle difference between
different interactions using interacting body part attention. Semantically
important body parts that interact with other objects are given more weight
during feature representation. The combined feature of interacting body part
attention-based individual representation and the co-occurrence descriptor of
the full-body appearance change is fed into long short-term memory to model the
temporal dynamics over time in a single framework. We validate the
effectiveness of the proposed method on four widely used public datasets,
outperforming competing state-of-the-art methods.
|
The modulation of the electronic structure by an external magnetic field,
which could further control the electronic transport behaviour of a system, is
highly desired. Herein, an unconventional anomalous Hall effect (UAHE) was
observed during the magnetization process in the magnetic Weyl semimetal EuB6,
resulting in an unconventional anomalous Hall conductivity as high as ~1000
$\Omega^{-1}$ cm$^{-1}$ and a Hall angle up to ~10%. Remarkably, the system
shows only the UAHE: the anomalous Hall signal comes entirely from the UAHE,
with the UAHE accounting for 100% of the AHE and 87.5% of the total Hall
response. Theoretical calculations revealed that a largely enhanced Berry
curvature was induced by the dynamic folding of the topological bands due to
the spin-canting effect under external magnetic fields, which further produced
the prominent UAHE even in a low-field magnetization process. These findings
elucidate the connection between the non-collinear magnetism and the
topological electronic state as well as reveal a novel manner to manipulate the
transport behaviour of topological electrons.
|
We present a detailed theoretical analysis for the spectral properties of
Andreev bound states in multiterminal Josephson junctions by employing a
symmetry-constrained scattering matrix approach. We find that in the synthetic
five-dimensional space of superconducting phases, crossings of Andreev bands
may support the non-Abelian $SU(2)$ monopoles with a topological charge
characterized by the second class Chern number. We propose that these
topological defects can be detected via nonlinear response measurement of the
current autocorrelations. In addition, multiterminal Josephson junction devices
can be tested as a hardware platform for realizing holonomic quantum
computation.
|
Most evolutionary robotics studies focus on evolving some targeted behavior
without taking the energy usage into account. This limits the practical value
of such systems because energy efficiency is an important property for
real-world autonomous robots. In this paper, we mitigate this problem by
extending our simulator with a battery model and taking energy consumption into
account during fitness evaluations. Using this system we investigate how energy
awareness affects the evolution of robots. Since our system evolves
morphologies as well as controllers, the main research question is twofold: (i)
what is the impact on the morphologies of the evolved robots, and (ii) what is
the impact on the behavior of the evolved robots if energy consumption is
included in the fitness evaluation? The results show that including the energy
consumption in the fitness in a multi-objective fashion (by NSGA-II) reduces
the average size of robot bodies while at the same time reducing their speed.
However, robots generated without size reduction can achieve speeds comparable
to robots from the baseline set.
|