This work developed a meta-learning approach that adapts the control policy
on the fly to different changing conditions for robust locomotion. The proposed
method constantly updates the interaction model, samples feasible sequences of
actions, estimates the resulting state-action trajectories, and then applies
the optimal actions to maximize the reward. To achieve online model adaptation,
our proposed method learns a separate latent vector for each training
condition, which is selected online given the newly collected data. Our work
designs an appropriate state space and reward functions, and optimizes feasible
actions in a model predictive control (MPC) fashion, sampling them directly in
the joint space subject to constraints, hence requiring no prior design of
specific walking gaits. We
further demonstrate the robot's capability of detecting unexpected changes
during interaction and adapting control policies quickly. The extensive
validation on the SpotMicro robot in a physics simulation shows adaptive and
robust locomotion skills under varying ground friction, external pushes, and
different robot models including hardware faults and changes.
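As a rough illustration of the control loop described above, the following is a minimal Python sketch of sampling-based MPC with online latent selection. The callables model, reward_fn, and sample_actions, and all other names, are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_latent(model, latents, transitions):
    """Pick the latent vector whose predictions best fit newly collected
    transitions (an assumed selection criterion)."""
    errors = [np.mean([np.linalg.norm(model(s, a, z) - s_next)
                       for s, a, s_next in transitions]) for z in latents]
    return latents[int(np.argmin(errors))]

def mpc_step(model, z, state, reward_fn, sample_actions, horizon=10, n_samples=256):
    """Sample feasible action sequences, roll out the latent-conditioned model,
    and return the first action of the highest-reward rollout."""
    best_action, best_return = None, -np.inf
    for _ in range(n_samples):
        actions = sample_actions(horizon)   # joint-space samples respecting constraints
        s, total = state, 0.0
        for a in actions:
            s = model(s, a, z)              # predicted next state
            total += reward_fn(s, a)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action
```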
|
We use the tridiagonal representation approach to solve the radial
Schr\"odinger equation for an inverse power-law potential of a combined quartic
and sextic degrees and for all angular momenta. The amplitude of the quartic
singularity is larger than that of the sextic but the signs are negative and
positive, respectively. It turns out that the system has a finite number of
bound states, whose number is determined by the ratio of the larger to the
smaller singularity amplitude. The solution is written as a finite series of
square-integrable functions expressed in terms of the Bessel polynomial.
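For concreteness, a radial potential with the stated singularity structure (dominant negative quartic term, weaker positive sextic term) can be written as follows; the amplitude symbols $A$ and $B$ are our own illustrative notation, not necessarily the paper's:
\begin{equation}
V(r) = -\frac{A}{r^{4}} + \frac{B}{r^{6}}, \qquad A > B > 0 .
\end{equation}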
|
The interest in the use of the HF band in telecommunication has increased
significantly in the last decade, mainly due to the development of new
standards for military telecommunications in HF, as well as the expansion of
digital broadcasting in the HF band. More specifically, these new standards
allow the implementation of links of hundreds or thousands of kilometers at a
low cost, which suggests that widespread adoption may follow. In Brazil, this type
of communication can be used in remote regions or regions of difficult access,
such as the Amazon rain-forest region. In addition to the evolution of
technologies concerning the physical layer of the HF telecommunication systems,
there has been a great development of techniques that use machine learning
algorithms for audio and image coding. It is believed that all these advances
will enable the use of the HF band for communication services in places without
telecommunication infrastructure. This work presents recent applications of HF
radio for digital links in Brazil, describing the challenges involved in
developing telecommunication systems in the HF band.
|
In this paper we explore the topological properties of self-replicating,
3-dimensional manifolds, which are modeled by idempotents in the
(2+1)-cobordism category. We give a classification theorem for all such
idempotents. Additionally, we characterize biologically interesting ways in
which self-replicating 3-manifolds can embed in $\mathbb{R}^3$.
|
Previous post-processing bias mitigation algorithms for group and individual
fairness do not work on regression models or on datasets with multi-class
numerical labels. We propose priority-based post-processing bias mitigation for
both group and individual fairness, built on the notion that similar
individuals should receive similar outcomes irrespective of socio-economic
factors, and that the greater the unfairness, the greater the injustice. We
establish this proposition through a case study on tariff allotment in a smart
grid. Our novel framework uses a user segmentation algorithm to better capture
the consumption strategy. This process ensures priority-based fair pricing for
the groups and individuals facing the greatest injustice. It upholds the notion
of fair tariff allotment for the entire population under consideration without
modifying the built-in process for tariff calculation. We also validate our method and
show superior performance to previous work on a real-world dataset in criminal
sentencing.
|
The rise of the Internet has made it a major source of information.
Unfortunately, not all information online is true, and thus a number of
fact-checking initiatives have been launched, both manual and automatic, to
deal with the problem. Here, we present our contribution in this regard:
\emph{WhatTheWikiFact}, a system for automatic claim verification using
Wikipedia. The system can predict the veracity of an input claim, and it
further shows the evidence it has retrieved as part of the verification
process. It shows confidence scores and a list of relevant Wikipedia articles,
together with detailed information about each article, including the phrase
used to retrieve it, the most relevant sentences extracted from it and their
stance with respect to the input claim, as well as the associated
probabilities. The system supports several languages: Bulgarian, English, and
Russian.
|
In this technical report, we present our 1st place solution for the ICDAR
2021 competition on mathematical formula detection (MFD). The MFD task poses
three key challenges: a large scale span, large variation in the ratio between
height and width, and a rich set of characters and mathematical expressions.
Considering these challenges, we used Generalized Focal Loss (GFL), an
anchor-free method, instead of an anchor-based method, and show that Adaptive
Training Sample Selection (ATSS) and a proper Feature Pyramid Network (FPN) can
effectively address the important issue of scale variation. Meanwhile, we also
found that some tricks, e.g., Deformable Convolution Network (DCN), SyncBN, and
Weighted Boxes Fusion (WBF), were effective for the MFD task. Our proposed
method ranked 1st among the 15 final teams.
|
We determine the tensor rank of all semifields of order 16 over
$\mathbb{F}_2$ and of all semifields of order 81 over $\mathbb{F}_3$. Our
results imply that some semifields of order 81 have lower multiplicative
complexity than the finite field $\mathbb{F}_{81}$ over $\mathbb{F}_3$. We
prove new results on the correspondence between linear codes and tensor rank,
including a generalisation of a theorem of Brockett and Dobkin to arbitrary
tensors, which makes the problem computationally feasible.
|
The nonlocal response functions to quantum fluctuations are used to find
asymptotic expressions for the Casimir free energy and entropy at arbitrarily
low temperature in the configuration of two parallel metallic plates. It is
shown that by introducing an alternative nonlocal response to the
off-the-mass-shell fluctuations the Lifshitz theory is brought into agreement
with the requirements of thermodynamics. According to our results, the Casimir
entropy calculated using the nonlocal response functions, which take into
account dissipation of conduction electrons, remains positive and monotonically
goes to zero with vanishing temperature, i.e., satisfies the Nernst heat
theorem. This holds both for plates with perfect crystal lattices and for
lattices with structural defects. The obtained results are discussed in the
context of the Casimir puzzle.
|
We consider a game for a continuum of non-identical players evolving on a
finite state space. Their heterogeneous interactions are represented by a
graphon, which can be viewed as the limit of a dense random graph. The players'
transition rates between the states depend on their own control and the
interaction strengths with the other players. We develop a rigorous
mathematical framework for this game and analyze Nash equilibria. We provide a
sufficient condition for a Nash equilibrium and prove existence of solutions to
a continuum of fully coupled forward-backward ordinary differential equations
characterizing equilibria. Moreover, we propose a numerical approach based on
machine learning tools and show experimental results on different applications
to compartmental models in epidemiology.
|
We introduce and prove the $n$-dimensional Pizza Theorem: Let $\mathcal{H}$
be a hyperplane arrangement in $\mathbb{R}^{n}$. If $K$ is a measurable set of
finite volume, the \emph{pizza quantity} of $K$ is the alternating sum of the
volumes of the regions obtained by intersecting $K$ with the arrangement
$\mathcal{H}$. We prove that if $\mathcal{H}$ is a Coxeter arrangement
different from $A_{1}^{n}$ such that the group of isometries $W$ generated by
the reflections in the hyperplanes of $\mathcal{H}$ contains the map
$-\mathrm{id}$, and if $K$ is a translate of a convex body that is stable under
$W$ and contains the origin, then the pizza quantity of $K$ is equal to zero.
Our main tool is an induction formula for the pizza quantity involving a
subarrangement of the restricted arrangement on hyperplanes of $\mathcal{H}$
that we call the \emph{even restricted arrangement}. More generally, we prove that
for a class of arrangements that we call \emph{even} (this includes the Coxeter
arrangements above) and for a \emph{sufficiently symmetric} set $K$, the pizza
quantity of $K+a$ is polynomial in $a$ for $a$ small enough, for example if $K$
is convex and $0\in K+a$. We get stronger results in the case of balls and, more
generally, convex bodies bounded by quadratic hypersurfaces. For example, we
prove that the pizza quantity of the ball centered at $a$ having radius
$R\geq\|a\|$ vanishes for a Coxeter arrangement $\mathcal{H}$ with
$|\mathcal{H}|-n$ an even positive integer. We also prove the Pizza Theorem for
the surface volume: When $\mathcal{H}$ is a Coxeter arrangement and
$|\mathcal{H}| - n$ is a nonnegative even integer, for an $n$-dimensional ball
the alternating sum of the $(n-1)$-dimensional surface volumes of the regions
is equal to zero.
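For orientation, with our own sign convention (fixing a base region $T_{0}$ and writing $d(T_{0},T)$ for the number of hyperplanes of $\mathcal{H}$ separating a region $T$ from $T_{0}$), the pizza quantity can be rendered as
\begin{equation}
P_{\mathcal{H}}(K) = \sum_{T} (-1)^{d(T_{0},T)} \operatorname{vol}(K \cap T),
\end{equation}
where the sum runs over the regions of $\mathcal{H}$; the theorems above assert that $P_{\mathcal{H}}(K)$ vanishes under the stated hypotheses.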
|
The R package optimall offers a collection of functions that efficiently
streamline the design process of sampling in surveys ranging from simple to
complex. The package's main functions allow users to interactively define and
adjust strata cut points based on values or quantiles of auxiliary covariates,
adaptively calculate the optimum number of samples to allocate to each stratum
using Neyman or Wright allocation, and select specific IDs to sample based on a
stratified sampling design. Using real-life epidemiological study examples, we
demonstrate how optimall facilitates an efficient workflow for the design and
implementation of surveys in R. Although tailored towards multi-wave sampling
under two- or three-phase designs, the R package optimall may be useful for any
sampling survey.
|
The orbital observatory Spectrum-Roentgen-Gamma (SRG), equipped with the
grazing-incidence X-ray telescopes Mikhail Pavlinsky ART-XC and eROSITA, was
launched by Roscosmos to the Lagrange L2 point of the Sun-Earth system on July
13, 2019. The launch was carried out from the Baikonur Cosmodrome by a Proton-M
rocket with a DM-03 upper stage. The German telescope eROSITA was installed on
SRG under an agreement between Roscosmos and DLR, the German Aerospace
Center. In December 2019, SRG started to perform its main scientific task:
scanning the celestial sphere to obtain X-ray maps of the entire sky in several
energy ranges (from 0.2 to 8 keV with eROSITA, and from 4 to 30 keV with
ART-XC). By mid-June 2021, the third six-month all-sky survey had been
completed. Over a period of four years, it is planned to obtain eight
independent maps of the entire sky in each of the energy ranges. The sum of
these maps will provide high sensitivity and reveal more than three million
quasars and over one hundred thousand massive galaxy clusters and galaxy
groups. The availability of eight sky maps will enable monitoring of long-term
variability (every six months) of a huge number of extragalactic and Galactic
X-ray sources, including hundreds of thousands of stars with hot coronae. The
rotation of the satellite around the axis directed toward the Sun with a period
of four hours enables tracking the faster variability of bright X-ray sources
during one day every half year. The chosen strategy of scanning the sky leads
to the formation of deep survey zones near both ecliptic poles. The paper
presents sky maps obtained by the telescopes on board SRG during the first
survey of the entire sky and a number of results of deep observations performed
during the flight to the L2 point within the framework of the performance
verification program. (Abridged)
|
In this paper, we consider the convex, finite-sum minimization problem with
explicit convex constraints over strongly connected directed graphs. The
constraint is an intersection of several convex sets each being known to only
one node. To solve this problem, we propose a novel decentralized projected
gradient scheme based on local averaging and prove its convergence using only
the smoothness of the local functions.
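A minimal sketch of one plausible iteration of such a scheme (our illustration, not the authors' exact update): each node averages its neighbors' iterates with mixing weights, takes a local gradient step, and projects onto its own constraint set.

```python
import numpy as np

def decentralized_projected_gradient(W, grads, projs, d, steps=1000, alpha=0.01):
    """W: n x n mixing matrix of the directed graph (assumed row-stochastic).
    grads[i]: gradient oracle of node i's local function.
    projs[i]: Euclidean projection onto node i's local convex set."""
    n = W.shape[0]
    X = np.zeros((n, d))                     # one iterate per node
    for _ in range(steps):
        X_avg = W @ X                        # local averaging over in-neighbors
        X = np.stack([projs[i](X_avg[i] - alpha * grads[i](X_avg[i]))
                      for i in range(n)])
    # return the average iterate as a representative solution
    # (a simplification; consensus is only approximate at finite steps)
    return X.mean(axis=0)
```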
|
Immiscible two-phase flow in porous media with mixed wet conditions was
examined using a capillary fiber bundle model, which is analytically solvable,
and a dynamic pore network model. The mixed wettability was implemented in the
models by allowing each tube or link to have a different wetting angle chosen
randomly from a given distribution. Both models showed that mixed wettability
can have significant influence on the rheology in terms of the dependence of
the global volumetric flow rate on the global pressure drop. In the capillary
fiber bundle model, for small pressure drops when only a small fraction of the
tubes were open, it was found that the volumetric flow rate depended on the
excess pressure drop as a power law with an exponent equal to 3/2 or 2
depending on the minimum pressure drop necessary for flow. When all the tubes
were open due to a high pressure drop, the volumetric flow rate depended
linearly on the pressure drop, independent of the wettability. In the
transition region in between where most of the tubes opened, the volumetric
flow depended more sensitively on the wetting angle distribution function and
was in general not a simple power law. The dynamic pore network model results
also showed a linear dependence of the flow rate on the pressure drop when the
pressure drop was large. However, outside this limit the dynamic pore network
model demonstrated a more complicated behaviour that depended on the mixed
wettability condition and the saturation. In particular, the exponent relating
volumetric flow rate to the excess pressure drop could take on values anywhere
between 1.0 and 1.8. The values of the exponent were highest for saturations
approaching 0.5; the exponent also generally increased when the difference in
wettability of the two fluids was larger and when this difference was present
over a larger fraction of the porous network.
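The power-law regimes described above can be summarized, in our own notation with $P_{c}$ the minimum pressure drop necessary for flow, as
\begin{equation}
Q \propto \left(|\Delta P| - P_{c}\right)^{\beta},
\end{equation}
with $\beta = 3/2$ or $2$ in the capillary fiber bundle model at small pressure drops, $\beta = 1$ when all tubes are open, and $\beta$ between 1.0 and 1.8 in the dynamic pore network model.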
|
We consider a data-driven robust hypothesis test where the optimal test will
minimize the worst-case performance over distributions that are close to
the empirical distributions with respect to the Wasserstein distance. This
leads to a new non-parametric hypothesis testing framework based on
distributionally robust optimization, which is more robust when there are
limited samples for one or both hypotheses. Such a scenario often arises from
applications such as health care, online change-point detection, and anomaly
detection. We study the computational and statistical properties of the
proposed test by presenting a tractable convex reformulation of the original
infinite-dimensional variational problem, exploiting properties of the
Wasserstein distance, and by characterizing the selection of radii for the
uncertainty sets. We also
demonstrate the good performance of our method on synthetic and real data.
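In schematic form (our notation; the paper's symbols may differ), the test solves
\begin{equation}
\min_{\phi}\; \max_{P_{1} \in \mathcal{B}_{\epsilon_{1}}(\hat{P}_{1}),\; P_{2} \in \mathcal{B}_{\epsilon_{2}}(\hat{P}_{2})} \Phi(\phi; P_{1}, P_{2}),
\end{equation}
where $\hat{P}_{1}, \hat{P}_{2}$ are the empirical distributions of the two samples, $\mathcal{B}_{\epsilon}(\hat{P})$ is the set of distributions within Wasserstein distance $\epsilon$ of $\hat{P}$, $\phi$ is the detector, and $\Phi$ is a risk such as the sum of type-I and type-II errors.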
|
This paper presents a coding scheme for an insertion deletion substitution
channel. We extend a previous scheme for the deletion channel where polar codes
are modified by adding "guard bands" between segments. In the new scheme, each
guard band consists of a middle segment of '1' symbols and left and right
segments of '0' symbols. Our coding scheme allows for a regular hidden-Markov
input distribution, and achieves the information rate between the input and
corresponding output of such a distribution. Thus, we prove that our scheme can
be used to efficiently achieve the capacity of the channel. The probability of
error of our scheme decays exponentially in the cube-root of the block length.
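To illustrate the guard-band layout, here is a toy rendering of segment joining; the segment and guard lengths in the actual construction are chosen as functions of the block length, and the helper below is purely hypothetical.

```python
def insert_guard_bands(segments, zeros=4, ones=4):
    """Join codeword segments with guard bands of the form 0^zeros 1^ones 0^zeros."""
    guard = [0] * zeros + [1] * ones + [0] * zeros
    out = []
    for k, seg in enumerate(segments):
        if k:
            out += guard
        out += seg
    return out

print(insert_guard_bands([[1, 0, 1, 1], [0, 0, 1, 0]], zeros=2, ones=3))
# -> [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0]
```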
|
In this letter, two unmanned-aerial-vehicle (UAV) optimal position selection
schemes are proposed. Based on the proposed schemes, the optimal UAV
transmission positions for secure precise wireless transmission (SPWT) are
given, where the maximum secrecy rate (SR) can be achieved without artificial
noise (AN). Conventional SPWT schemes do not take the transmission location
into account, although it strongly affects the SR. The proposed schemes find
the optimal transmission positions by placing the eavesdropper at a null point.
Thus, the received confidential message energy at the eavesdropper is zero, and
the maximum SR is achieved. Simulation results show that the proposed schemes
improve the SR performance significantly.
|
Given two triangles whose angles are all acute, we find a homeomorphism with
the smallest Lipschitz constant between them and we give a formula for the
Lipschitz constant of this map. We show that on the set of acute triangles of
fixed area, the function which assigns to each pair of triangles the logarithm
of the smallest Lipschitz constant of Lipschitz maps between them is a
symmetric metric. We show that this metric is Finsler, we give a necessary and sufficient
condition for a path in this metric space to be geodesic and we determine the
isometry group of this metric space. This study is motivated by Thurston's
asymmetric metric on the Teichm{\"u}ller space of a hyperbolic surface, and the
results in this paper constitute an analysis of a basic Euclidean analogue of
Thurston's hyperbolic theory. Many interesting questions in the Euclidean
setting deserve further attention.
|
Data subset selection from a large number of training instances has been a
successful approach toward efficient and cost-effective machine learning.
However, models trained on a smaller subset may show poor generalization
ability. In this paper, our goal is to design an algorithm for selecting a
subset of the training data, so that the model can be trained quickly, without
significantly sacrificing accuracy. More specifically, we focus on data
subset selection for L2 regularized regression problems and provide a novel
problem formulation which seeks to minimize the training loss with respect to
both the trainable parameters and the subset of training data, subject to error
bounds on the validation set. We tackle this problem using several technical
innovations. First, we represent this problem with simplified constraints using
the dual of the original training problem and show that the objective of this
new representation is a monotone and alpha-submodular function, for a wide
variety of modeling choices. Such properties lead us to develop SELCON, an
efficient majorization-minimization algorithm for data subset selection, that
admits an approximation guarantee even when the training provides an imperfect
estimate of the trained model. Finally, our experiments on several datasets
show that SELCON trades off accuracy and efficiency more effectively than the
current state-of-the-art.
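In our own schematic notation (not the paper's exact formulation), the joint selection problem can be written as
\begin{equation}
\min_{S \subseteq \mathcal{D},\, |S| \le k}\; \min_{w}\; \sum_{i \in S} \ell_{i}(w) + \lambda \lVert w \rVert_{2}^{2} \quad \text{subject to} \quad \frac{1}{|\mathcal{V}|} \sum_{j \in \mathcal{V}} \ell_{j}(w) \le \delta,
\end{equation}
where $\ell_{i}$ is the squared loss on instance $i$, $\mathcal{V}$ is the validation set, $k$ the subset budget, and $\delta$ the allowed validation error; $S$, $k$, and $\delta$ are our labels for quantities the abstract describes only in words.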
|
Platooning of connected and autonomous vehicles (CAVs) is an emerging
technology with a strong potential for throughput improvement and fuel
reduction. Adequate macroscopic models are critical for system-level efficiency
and reliability of platooning. In this paper, we consider a hybrid queuing
model for a mixed-autonomy highway section and develop an easy-to-use training
algorithm. The model predicts CAV and non-CAV counts according to the traffic
demand as well as key parameters of the highway section. The training algorithm
learns the highway parameters from observed data in real time. We test the
model and the algorithm in Simulation of Urban Mobility (SUMO) and show that
the prediction error is around 15% in a stationary setting and around 25% in a
non-stationary setting. We also show that the trained model leads to a platoon
headway regulation policy very close to the simulated optimum. The proposed
model and algorithm can directly support model-predictive decision-making for
platooning in mixed autonomy.
|
This paper proposes the 'Post Triangular Rewiring' method, which minimizes the
sacrifice in planning time while overcoming the optimality limitations of
sampling-based algorithms such as the Rapidly-exploring Random Tree (RRT)
algorithm. Through the triangle inequality, the proposed 'Post Triangular
Rewiring' method creates a path closer to the optimum than the RRT algorithm
alone. Experiments were conducted to verify the performance of the proposed
method: when it is applied to the RRT algorithm, the gain in optimality
outweighs the additional planning time.
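A minimal sketch of the triangle-inequality idea behind such rewiring (our illustration; the paper's actual procedure may differ): for consecutive waypoints $a, b, c$, the direct segment $a$-$c$ is never longer than the detour through $b$, so $b$ can be dropped whenever the shortcut is collision-free. The names path and collision_free are our assumptions.

```python
def post_triangular_rewiring(path, collision_free):
    """Shorten an RRT path by removing intermediate waypoints whenever the
    straight shortcut between their neighbours is collision-free.
    path: list of waypoints; collision_free(p, q): True if segment p-q is free."""
    shortened = [path[0]]
    i = 0
    while i < len(path) - 1:
        # greedily jump to the farthest waypoint reachable in a straight line
        j = len(path) - 1
        while j > i + 1 and not collision_free(path[i], path[j]):
            j -= 1
        shortened.append(path[j])
        i = j
    return shortened
```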
|
Optical Coherence Tomography (OCT) is a widely used non-invasive biomedical
imaging modality that can rapidly provide volumetric images of samples. Here,
we present a deep learning-based image reconstruction framework that can
generate swept-source OCT (SS-OCT) images using undersampled spectral data,
without any spatial aliasing artifacts. This neural network-based image
reconstruction does not require any hardware changes to the optical set-up and
can be easily integrated with existing swept-source or spectral domain OCT
systems to reduce the amount of raw spectral data to be acquired. To show the
efficacy of this framework, we trained and blindly tested a deep neural network
using mouse embryo samples imaged by an SS-OCT system. Using 2-fold
undersampled spectral data (i.e., 640 spectral points per A-line), the trained
neural network can blindly reconstruct 512 A-lines in ~6.73 ms using a desktop
computer, removing spatial aliasing artifacts due to spectral undersampling,
while also presenting a very good match to the images of the same samples,
reconstructed using the full spectral OCT data (i.e., 1280 spectral points per
A-line). We also successfully demonstrate that this framework can be further
extended to process 3x undersampled spectral data per A-line, with some
performance degradation in the reconstructed image quality compared to 2x
spectral undersampling. This deep learning-enabled image reconstruction
approach can be broadly used in various forms of spectral domain OCT systems,
helping to increase their imaging speed without sacrificing image resolution
and signal-to-noise ratio.
|
We prove that the volume measure of the Brownian sphere is equal to a
constant multiple of the Hausdorff measure associated with the gauge function
$h(r)=r^4\log\log(1/r)$. This shows in particular that the volume measure of
the Brownian sphere is determined by its metric structure. As a key ingredient
of our proofs, we derive precise estimates on moments of the volume of balls in
the Brownian sphere.
|
Despite the enormous progress achieved during the past decade, nanoelectronic
devices based on two-dimensional (2D) semiconductors still suffer from a
limited electrical stability. This limited stability has been shown to result
from the interaction of charge carriers originating from the 2D semiconductors
with defects in the surrounding insulating materials. The resulting dynamically
trapped charges are particularly relevant in field effect transistors (FETs)
and can lead to a large hysteresis, which endangers stable circuit operation.
Based on the notion that charge trapping is highly sensitive to the energetic
alignment of the channel Fermi-level with the defect band in the insulator, we
propose to optimize device stability by deliberately tuning the channel
Fermi-level. Our approach aims to minimize the amount of electrically active
border traps without modifying the total number of traps in the insulator. We
demonstrate the applicability of this idea by using two differently doped
graphene layers in otherwise identical FETs with Al$_2$O$_3$ as a gate oxide
mounted on a flexible substrate. Our results clearly show that by increasing
the distance of the Fermi-level to the defect band, the hysteresis is
significantly reduced. Furthermore, since long-term reliability is also very
sensitive to trapped charges, a corresponding improvement in reliability is
both expected theoretically and demonstrated experimentally. Our study paves
the way for the construction of more stable and reliable 2D FETs in which the
channel material is carefully chosen and tuned to maximize the energetic
distance between charge carriers in the channel and the defect bands in the
insulator employed.
|
In this paper, we study Riemannian maps whose base manifolds admit a Ricci
soliton and give a non-trivial example of such Riemannian maps. First, we find
the Riemannian curvature tensor of the base manifold of a Riemannian map $F$.
Further, we obtain the Ricci tensors and calculate the scalar curvature of the
base manifold. Moreover, we obtain necessary conditions for
$\mathrm{range}\,F_\ast$ to be a Ricci soliton, an almost Ricci soliton, and
Einstein. We also obtain necessary conditions for
$(\mathrm{range}\,F_\ast)^\bot$ to be a Ricci soliton and Einstein, and we
calculate the scalar curvatures of $\mathrm{range}\,F_\ast$ and
$(\mathrm{range}\,F_\ast)^\bot$ by using the Ricci soliton. We then study the
harmonicity and biharmonicity of such Riemannian maps and obtain a necessary
and sufficient condition for a Riemannian map between Riemannian manifolds
whose base manifold admits a Ricci soliton to be harmonic, as well as necessary
and sufficient conditions for a Riemannian map from a Riemannian manifold to a
space form admitting a Ricci soliton to be harmonic and biharmonic. Finally,
some applications are presented for further studies on base manifolds of
Riemannian maps.
|
The leptophilic weakly interacting massive particle (WIMP) is realized in a
minimal renormalizable model scenario where scalar mediators with lepton number
establish the WIMP interaction with the standard model (SM) leptons. We perform
a comprehensive analysis for such a WIMP scenario for two distinct cases with
an SU(2) doublet or singlet mediator considering all the relevant theoretical,
cosmological and experimental constraints at present. We show that the
mono-photon search at near-future lepton collider experiments (ILC, FCC-ee,
CEPC, etc.) can play a significant role in probing the as-yet-unexplored parameter
range allowed by the WIMP relic density constraint. This will complement the
search prospect at the near-future hadron collider experiment (HL-LHC).
Furthermore, we discuss the combined model scenario including both the doublet
and singlet mediators. The combined model is capable of explaining the
long-standing muon $(g-2)$ anomaly, which is an additional advantage. We
demonstrate that the region allowed by the muon $(g-2)$ explanation, updated
very recently at the Fermi National Accelerator Laboratory, can also be probed
at future colliders, thus providing a simultaneous test of the model scenario.
|
Keyword spotting is an important research field because it plays a key role
in device wake-up and user interaction on smart devices. However, it is
challenging to minimize errors while operating efficiently in devices with
limited resources such as mobile phones. We present a broadcasted residual
learning method to achieve high accuracy with small model size and
computational load. Our method configures most of the residual functions as 1D
temporal convolutions while still allowing 2D convolutions, by using a
broadcasted residual connection that expands the temporal output to the
frequency-temporal dimension. This residual mapping enables the network to
effectively represent useful audio features with much less computation than
conventional convolutional neural networks. We also propose a novel network
architecture, Broadcasting-residual network (BC-ResNet), based on broadcasted
residual learning and describe how to scale up the model according to the
target device's resources. BC-ResNets achieve state-of-the-art 98.0% and 98.7%
top-1 accuracy on Google speech command datasets v1 and v2, respectively, and
consistently outperform previous approaches, using fewer computations and
parameters.
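A minimal NumPy sketch of the broadcasting idea (our simplification; BC-ResNet's actual blocks also include 2D frequency-depthwise convolutions, normalization, and activations):

```python
import numpy as np

def broadcasted_residual_block(x, temporal_conv):
    """x: features of shape (channels, frequency, time).
    temporal_conv: a 1D operation applied along the time axis only."""
    y = x.mean(axis=1, keepdims=True)   # collapse frequency: (channels, 1, time)
    y = temporal_conv(y)                # cheap 1D temporal processing
    return x + y                        # broadcast back over the frequency axis

# toy usage: simple temporal smoothing as the 1D residual function
x = np.random.randn(8, 40, 100)        # 8 channels, 40 mel bins, 100 frames
smooth = lambda y: (y + np.roll(y, 1, axis=-1)) / 2
out = broadcasted_residual_block(x, smooth)
assert out.shape == x.shape
```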
|
In this paper, we study stochastic optimization of areas under
precision-recall curves (AUPRC), which is widely used for combating imbalanced
classification tasks. Although a few methods have been proposed for maximizing
AUPRC, stochastic optimization of AUPRC with convergence guarantee remains an
undeveloped territory. A recent work [42] has proposed a promising approach
towards AUPRC based on maximizing a surrogate loss for the average precision,
and proved an $O(1/\epsilon^5)$ complexity for finding an $\epsilon$-stationary
solution of the non-convex objective. In this paper, we further improve the
stochastic optimization of AUPRC by (i) developing novel stochastic momentum
methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an
$\epsilon$-stationary solution; and (ii) designing a novel family of stochastic
adaptive methods with the same iteration complexity of $O(1/\epsilon^4)$, which
enjoy faster convergence in practice. To this end, we propose two innovative
techniques that are critical for improving the convergence: (i) the biased
estimators for tracking individual ranking scores are updated in a randomized
coordinate-wise manner; and (ii) a momentum update is used on top of the
stochastic gradient estimator for tracking the gradient of the objective.
Extensive experiments on various data sets demonstrate the effectiveness of the
proposed algorithms. Of independent interest, the proposed stochastic momentum
and adaptive algorithms are also applicable to a class of two-level stochastic
dependent compositional optimization problems.
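For orientation, a common surrogate for average precision in this line of work (our rendering of the general approach in [42], not necessarily the exact objective) is
\begin{equation}
\widehat{\mathrm{AP}}(w) = \frac{1}{n_{+}} \sum_{i:\, y_{i}=1} \frac{\sum_{j} \mathbb{I}[y_{j}=1]\, \ell(w; x_{j}, x_{i})}{\sum_{j} \ell(w; x_{j}, x_{i})},
\end{equation}
where $\ell$ is a smooth surrogate of the ranking indicator $\mathbb{I}[s_{j} \ge s_{i}]$; the per-positive ratios are the individual ranking scores whose biased estimators are tracked in a randomized coordinate-wise manner.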
|
A high energy muon collider can provide new and complementary discovery
potential to the LHC or future hadron colliders. Leptoquarks are a motivated
class of exotic new physics models, with distinct production channels at hadron
and lepton machines. We study a vector leptoquark model at a muon collider with
$\sqrt{s} = 3, 14$ TeV within a set of both UV and phenomenologically motivated
flavor scenarios. We compute which production mechanism has the greatest reach
for various values of the leptoquark mass and the coupling between leptoquark
and Standard Model fermions. We find that we can probe leptoquark masses up to
an order of magnitude beyond $\sqrt{s}$ with perturbative couplings.
Additionally, we can also probe regions of parameter space unavailable to
flavor experiments. In particular, all of the parameter space of interest to
explain recent low-energy anomalies in B meson decays would be covered even by
a $\sqrt{s} = 3$ TeV collider.
|
There is evidence of misinformation in the online discourses and discussions
about the COVID-19 vaccines. Using a sample of 1.6 million geotagged English
tweets and the data from the CDC COVID Data Tracker, we conduct a quantitative
study to understand the influence of both misinformation and fact-based news on
Twitter on the COVID-19 vaccine uptake in the U.S. from April 19 when U.S.
adults were vaccine eligible, to May 7, 2021, after controlling for state-level
factors such as demographics, education, and pandemic severity. We identify
the tweets related to either misinformation or fact-based news by analyzing the
URLs. By analyzing the content of the most frequent tweets of these two groups,
we find that their structures are similar, making it difficult for Twitter
users to distinguish one from another by reading the text alone. The users who
spread both fake news and fact-based news tend to show a negative attitude
towards the vaccines. We further conduct the Fama-MacBeth regression with the
Newey-West adjustment to examine the effect of fake-news-related and
fact-related tweets on the vaccination rate, and find marginally negative
correlations.
|
We prove the equidistribution of several multistatistics over some classes of
permutations avoiding a $3$-length pattern. We deduce the equidistribution, on
the one hand of the inv and foze'' statistics, and on the other hand that of
the maj and makl statistics, over these classes of pattern-avoiding
permutations. Here inv and maj are the celebrated Mahonian statistics, foze''
is one of the statistics
defined in terms of generalized patterns in the 2000 pioneering paper of Babson
and Steingr\'imsson, and makl is one of the statistics defined by Clarke,
Steingr\'imsson and Zeng in 1997. These results solve several conjectures posed
by Amini in 2018.
|
In this paper we construct a family of holomorphic functions $\beta_\lambda
(s)$ which are solutions to the asymptotic tetration equation. Each
$\beta_\lambda$ satisfies the functional relationship ${\displaystyle
\beta_\lambda(s+1) = \frac{e^{\beta_\lambda(s)}}{e^{-\lambda s} + 1}}$; which
asymptotically converges as $\log \beta_\lambda(s+1) = \beta_\lambda (s) +
\mathcal{O}(e^{-\lambda s})$ as $\Re(\lambda s) \to \infty$. This family of
asymptotic solutions is used to construct a holomorphic function
$\text{tet}_\beta(s) : \mathbb{C}\setminus(-\infty,-2] \to \mathbb{C}$ such that
$\text{tet}_\beta(s+1) = e^{\text{tet}_\beta(s)}$ and $\text{tet}_\beta :
(-2,\infty) \to \mathbb{R}$ bijectively.
|
Pulsars are rapidly spinning highly magnetised neutron stars. Their spin
period is observed to decrease with time. An early analytical model for this
process was the vacuum retarded dipole (VRD) by Deutsch in his 1955 paper "The
Electromagnetic Field of an Idealized Star in Rigid Rotation in Vacuo" (D55).
This model assumes an idealised star and it finds that the energy is radiated
away by the electromagnetic fields. This model has been superseded by more
realistic numerical simulations that account for the non-vacuum like
surroundings of the neutron star. However, the VRD still provides a reasonable
approximation and is a useful limiting case that can provide some qualitative
understanding. We provide a detailed derivation of the spin-down and related
field equations of the VRD. We also correct a typo found in the general field
equations in D55.
|
Labelled data often comes at a high cost as it may require recruiting human
labelers or running costly experiments. At the same time, in many practical
scenarios, one already has access to a partially labelled, potentially biased
dataset that can help with the learning task at hand. Motivated by such
settings, we formally initiate a study of \emph{semi-supervised active learning}
through the frame of linear regression. In this setting, the learner has access
to a dataset $X \in \mathbb{R}^{(n_1+n_2) \times d}$ which is composed of $n_1$
unlabelled examples that an algorithm can actively query, and $n_2$ examples
labelled a priori. Concretely, denoting the true labels by $Y \in
\mathbb{R}^{n_1 + n_2}$, the learner's objective is to find $\widehat{\beta}
\in \mathbb{R}^d$ such that, \begin{equation}
\| X \widehat{\beta} - Y \|_2^2 \le (1 + \epsilon) \min_{\beta \in
\mathbb{R}^d} \| X \beta - Y \|_2^2 \end{equation} while making as few
additional label queries as possible. In order to bound the label queries, we
introduce an instance dependent parameter called the reduced rank, denoted by
$R_X$, and propose an efficient algorithm with query complexity
$O(R_X/\epsilon)$. This result directly implies improved upper bounds for two
important special cases: (i) active ridge regression, and (ii) active kernel
ridge regression, where the reduced-rank equates to the statistical dimension,
$sd_\lambda$ and effective dimension, $d_\lambda$ of the problem respectively,
where $\lambda \ge 0$ denotes the regularization parameter. For active ridge
regression, we also prove a matching lower bound of $\Omega(sd_\lambda / \epsilon)$
on the query complexity of any algorithm. This subsumes prior work that only
considered the unregularized case, i.e., $\lambda = 0$.
|
Context: Using student subjects in empirical studies has been discussed
extensively from a methodological perspective in Software Engineering (SE), but
there is a lack of similar discussion surrounding ethical aspects of doing so.
As students are in a subordinate relationship to their instructors, such a
discussion is needed. Objective: We aim to increase the understanding of
practices and perceptions SE researchers have of ethical issues with student
participation in empirical studies. Method: We conducted a systematic mapping
study of 372 empirical SE studies involving students, following up with a
survey answered by 100 SE researchers regarding their current practices and
opinions regarding student participation. Results: The mapping study shows that
the majority of studies do not report conditions regarding recruitment,
voluntariness, compensation, and ethics approval. In contrast, the majority of
survey participants supports reporting these conditions. The survey further
reveals that less than half of the participants require ethics approval.
Additionally, the majority of participants recruit their own students on a
voluntary basis, and use informed consent with withdrawal options. There is
disagreement among the participants on whether course instructors should be
involved in research studies and whether they should know who participates in a study.
Conclusions: It is a positive sign that mandatory participation is rare, and
that informed consent and withdrawal options are standard. However, we see
immediate need for action, as study conditions are under-reported, and as
opinions on ethical practices differ widely. In particular, there is little
regard in SE on the power relationship between instructors and students.
|
The difficulty of obtaining paired data remains a major bottleneck for
learning image restoration and enhancement models for real-world applications.
Current strategies aim to synthesize realistic training data by modeling noise
and degradations that appear in real-world settings. We propose DeFlow, a
method for learning stochastic image degradations from unpaired data. Our
approach is based on a novel unpaired learning formulation for conditional
normalizing flows. We model the degradation process in the latent space of a
shared flow encoder-decoder network. This allows us to learn the conditional
distribution of a noisy image given the clean input by solely minimizing the
negative log-likelihood of the marginal distributions. We validate our DeFlow
formulation on the task of joint image restoration and super-resolution. The
models trained with the synthetic data generated by DeFlow outperform previous
learnable approaches on three recent datasets. Code and trained models are
available at: https://github.com/volflow/DeFlow
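For reference, conditional normalizing flows are typically trained with the change-of-variables form of the negative log-likelihood (a generic rendering, not DeFlow's exact unpaired objective):
\begin{equation}
-\log p_{\theta}(y \mid x) = -\log p_{z}\bigl(f_{\theta}(y; x)\bigr) - \log \left\lvert \det \frac{\partial f_{\theta}(y; x)}{\partial y} \right\rvert,
\end{equation}
where $f_{\theta}$ is the invertible flow and $p_{z}$ a simple base density; DeFlow's unpaired formulation applies such likelihoods to the marginal distributions of clean and noisy images in a shared latent space.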
|
XRT 201423 is an X-ray transient with a nearly flat plateau lasting 4.1 ks
followed by a steep decay. This feature indicates that it might come from a
magnetar formed through a binary neutron star merger, similar to CDF-S XT2, as
predicted for one type of electromagnetic counterpart of binary neutron star
mergers. We test the compliance of the data with this model and use the
observed duration and flux of the X-ray signal as well as upper limits of
optical emission to pose constraints on the parameters of the underlying
putative magnetar. Both the free-zone and trapped-zone geometric configurations
are considered. We find that the data are generally consistent with such a
model. The surface dipolar magnetic field and the ellipticity of the magnetar
should satisfy $B_p < 7\times 10^{14}{\rm G}$ ($B_p < 4.9 \times 10^{14}{\rm
G}$) and $\epsilon < 1.5 \times 10^{-3}$ ($\epsilon < 1.1 \times 10^{-3}$)
under free zone (trapped zone) configurations, respectively. An upper limit on
the distance (e.g. $z < 0.55$ with $\eta_x = 10^{-4}$ or $z < 3.5$ with $\eta_x
= 10^{-2}$) can be derived from the X-ray data which depends on the X-ray
dissipation efficiency $\eta_x$ of the spin-down luminosity. The non-detection
of an optical counterpart places a conservative lower limit on the distance of
the source, i.e. $z > 0.045$ regardless of the geometric configuration.
|
We study the environmental dependence of ultralight scalar dark matter (DM)
with linear couplings to the standard model particles. The solution to the
DM field turns out to be a sum of the cosmic harmonic oscillation term and the
local exponential fluctuation term. The amplitude of the first term depends on
the local DM density and the mass of the DM field. The second term is induced
by the local distribution of matter, such as the Earth. Then, we compute the
phase shift induced by the DM field in atom interferometers (AIs) by solving
the trajectories of the atoms. In particular, we calculate the AI signal for
the violation of the weak equivalence principle (WEP) caused by the DM field.
Depending on the values of the DM coupling parameters, contributions to the WEP
violation from the first and second terms of the DM field can be either
comparable, or one can be larger than the other. Finally, we place constraints
on the DM coupling parameters using results from terrestrial atomic WEP tests.
|
With the rise and ever-increasing potential of deep learning techniques in
recent years, publicly available medical datasets have become a key factor in
enabling reproducible development of diagnostic algorithms in the medical domain.
Medical data contains sensitive patient-related information and is therefore
usually anonymized before publication by removing patient identifiers, e.g.,
patient names. To the best of our knowledge, we are the first to show that a
well-trained deep learning system is able to recover the patient identity from
chest X-ray data. We demonstrate this using the publicly available large-scale
ChestX-ray14 dataset, a collection of 112,120 frontal-view chest X-ray images
from 30,805 unique patients. Our verification system is able to identify
whether two frontal chest X-ray images are from the same person with an AUC of
0.9940 and a classification accuracy of 95.55%. We further highlight that the
proposed system is able to reveal the same person even ten and more years after
the initial scan. When pursuing a retrieval approach, we observe an mAP@R of
0.9748 and a precision@1 of 0.9963. Furthermore, we achieve an AUC of up to
0.9870 and a precision@1 of up to 0.9444 when evaluating our trained networks
on CheXpert and the COVID-19 Image Data Collection. Based on this high
identification rate, a potential attacker may leak patient-related information
and additionally cross-reference images to obtain more information. Thus, there
is a great risk of sensitive content falling into unauthorized hands or being
disseminated against the will of the concerned patients. Especially during the
COVID-19 pandemic, numerous chest X-ray datasets have been published to advance
research. Therefore, such data may be vulnerable to potential attacks by deep
learning-based re-identification algorithms.
|
In normal times, it is assumed that financial institutions operating in
non-overlapping sectors have complementary and distinct outcomes, typically
reflected in mostly uncorrelated outcomes and asset returns. Such is the
reasoning behind common "free lunches" to be had in investing, like
diversifying assets across equity and bond sectors. Unfortunately, the
recurrence of crises like the Great Financial Crisis of 2007-2008 demonstrates
that such convenient assumptions often break down, with dramatic consequences
for all financial actors. In hindsight, the emergence of systemic risk (as
exemplified by failure in one part of a system spreading to ostensibly
unrelated parts of the system) has been explained by narratives such as
deregulation and leverage. But can we diagnose and quantify the ongoing
emergence of systemic risk in financial systems? In this study, we focus on two
previously documented measures of systemic risk that require only easily
available time series data (e.g., monthly asset returns): cross-correlation and
principal component analysis. We apply these tests to daily and monthly returns
on hedge fund indexes and broad-based market indexes, and discuss their
results. We hope that a frank discussion of these simple, non-parametric
measures can help inform legislators, lawmakers, and financial actors of
potential crises looming on the horizon.
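As a minimal illustration of the two measures (our own sketch; the study's exact estimators, windows, and data differ), one can track the mean pairwise correlation and the fraction of variance absorbed by the leading principal components:

```python
import numpy as np

def systemic_risk_measures(returns, n_components=1):
    """returns: T x N array of asset returns (rows: periods, columns: assets).
    Returns the mean pairwise correlation and the 'absorption ratio' of the
    top principal components; rising values of either suggest tighter coupling."""
    corr = np.corrcoef(returns, rowvar=False)
    n = corr.shape[0]
    mean_corr = (corr.sum() - n) / (n * (n - 1))          # off-diagonal average
    eigvals = np.linalg.eigvalsh(np.cov(returns, rowvar=False))[::-1]
    absorption = eigvals[:n_components].sum() / eigvals.sum()
    return mean_corr, absorption

# toy usage with simulated monthly returns for 10 assets over 5 years
rng = np.random.default_rng(0)
r = rng.normal(0, 0.04, size=(60, 10))
print(systemic_risk_measures(r, n_components=2))
```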
|
We present a detailed description of the experiment realising for the first
time a protective measurement, a novel measurement protocol which combines weak
interactions with a ``protection mechanism'' preserving the measured state
coherence during the whole measurement process. Furthermore, protective
measurement allows finding the expectation value of an observable, i.e. an
inherently statistical quantity, by measuring a single particle, without the
need for any statistics. This peculiar property, in sharp contrast with the
framework of traditional (projective) quantum measurement, might constitute a
groundbreaking advance for several quantum technology related fields.
|
Temporal-Difference (TD) learning is a general and very useful tool for
estimating the value function of a given policy, which in turn is required to
find good policies. Generally speaking, TD learning updates states whenever
they are visited. When the agent lands in a state, its value can be used to
compute the TD-error, which is then propagated to other states. However, it may
be interesting, when computing updates, to take into account other information
than whether a state is visited or not. For example, some states might be more
important than others (such as states which are frequently seen in a successful
trajectory). Or, some states might have unreliable value estimates (for
example, due to partial observability or lack of data), making their values
less desirable as targets. We propose an approach to re-weighting states used
in TD updates, both when they are the input and when they provide the target
for the update. We prove that our approach converges with linear function
approximation and illustrate its desirable empirical behaviour compared to
other TD-style methods.
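A minimal sketch of the general idea for tabular TD(0) (our illustration with hypothetical weighting functions w_in and w_target and a hypothetical env interface; the paper's scheme and convergence analysis concern linear function approximation):

```python
def reweighted_td0(env, policy, V, alpha, gamma, w_in, w_target, episodes=100):
    """Tabular TD(0) where each state's contribution is re-weighted:
    w_in(s) scales updates to state s; w_target(s) scales its use as a target.
    Assumed interface: env.reset() -> state; env.step(a) -> (s_next, r, done)."""
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            s_next, r, done = env.step(policy(s))
            bootstrap = 0.0 if done else gamma * w_target(s_next) * V[s_next]
            V[s] += alpha * w_in(s) * (r + bootstrap - V[s])
            s = s_next
    return V
```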
|
We extend the classical tracking-by-detection paradigm to the
tracking-any-object (TAO) task. Solid detection results are first extracted from the TAO
dataset. Some state-of-the-art techniques like \textbf{BA}lanced-\textbf{G}roup
\textbf{S}oftmax (\textbf{BAGS}\cite{li2020overcoming}) and
DetectoRS\cite{qiao2020detectors} are integrated during detection. We then
learn appearance features to represent arbitrary objects by training
feature-learning networks. We ensemble several models to improve detection and
feature representation. Simple linking strategies with most similar appearance
features and tracklet-level post association module are finally applied to
generate final tracking results. Our method is submitted as \textbf{AOA} on the
challenge website. Code is available at
https://github.com/feiaxyt/Winner_ECCV20_TAO.
|
In this paper we study the set of prime ideals in vector lattices and how the
properties of the prime ideals structure the vector lattice in question. The
different properties that will be considered are firstly, that all or none of
the prime ideals are order dense, secondly, that there are only finitely many
prime ideals, thirdly, that every prime ideal is principal, and lastly, that
every ascending chain of prime ideals is stationary (a property that we refer
to as prime Noetherian). We also completely characterize the prime ideals in
vector lattices of piecewise polynomials, which turns out to be an interesting
class of vector lattices for studying principal prime ideals and ascending
chains of prime ideals.
|
For a connected smooth proper rigid space $X$ over a perfectoid field
extension of $\mathbb{Q}_p$, we show that the Picard functor of
$X^\diamondsuit$ defined on perfectoid test objects is the diamondification of
the rigid analytic Picard functor. In particular, it is represented by a rigid
group variety if and only if the rigid analytic Picard functor is.
As an application, we determine which line bundles are trivialized by
pro-finite-\'etale covers, and prove unconditionally that the associated
"topological torsion Picard functor" is represented by a divisible analytic
group. We use this to generalize and geometrize a construction of
Deninger--Werner in the $p$-adic Simpson correspondence: There is an
isomorphism of rigid analytic group varieties between the moduli space of
continuous characters of $\pi_1(X,x)$ and that of pro-finite-\'etale Higgs line
bundles on $X$.
This article is part II of a series about line bundles on rigid spaces as
diamonds.
|
We construct a large family of normal $\kappa$-complete
$\mathbb{R}_\kappa$-embeddable $\kappa^+$-Aronszajn trees which have no club
isomorphic subtrees using an instance of the proxy principle of Brodsky-Rinot.
|
Robotic cooking has recently become a very promising field. To execute a
recipe, a robot has to recognize different objects and their states. In
contrast to object recognition, state identification has not been explored as
much, yet it is very important because different recipes might require
different states of an object. Moreover, robotic grasping depends on the state.
Pretrained models usually perform very well in this type of test; our challenge
was to handle the problem without using any pretrained model. In this paper, we
propose a CNN and train it from scratch. The model is trained and tested on the
dataset from the cooking state recognition challenge. We also evaluate the
performance of our network from various perspectives. Our model achieves
65.8% accuracy on the unseen test dataset.
|
Robust and distributionally robust optimization are modeling paradigms for
decision-making under uncertainty where the uncertain parameters are only known
to reside in an uncertainty set or are governed by any probability distribution
from within an ambiguity set, respectively, and a decision is sought that
minimizes a cost function under the most adverse outcome of the uncertainty. In
this paper, we develop a rigorous and general theory of robust and
distributionally robust nonlinear optimization using the language of convex
analysis. Our framework is based on a generalized
`primal-worst-equals-dual-best' principle that establishes strong duality
between a semi-infinite primal worst and a non-convex dual best formulation,
both of which admit finite convex reformulations. This principle offers an
alternative formulation for robust optimization problems that may be
computationally advantageous, and it obviates the need to mobilize the
machinery of abstract semi-infinite duality theory to prove strong duality in
distributionally robust optimization. We illustrate the modeling power of our
approach through convex reformulations for distributionally robust optimization
problems whose ambiguity sets are defined through general optimal transport
distances, which generalize earlier results for Wasserstein ambiguity sets.
|
This work derives explicit series reversions for the solution of Calder\'on's
problem. The governing elliptic partial differential equation is
$\nabla\cdot(A\nabla u)=0$ in a bounded Lipschitz domain and with a
matrix-valued coefficient. The corresponding forward map sends $A$ to a
projected version of a local Neumann-to-Dirichlet operator, allowing for the
use of partial boundary data and finitely many measurements. It is first shown
that the forward map is analytic, and subsequently reversions of its Taylor
series up to specified orders lead to a family of numerical methods for solving
the inverse problem with increasing accuracy. The convergence of these methods
is shown under conditions that ensure the invertibility of the Fr\'echet
derivative of the forward map. The introduced numerical methods are of the same
computational complexity as solving the linearised inverse problem. The
analogous results are also presented for the smoothened complete electrode
model.
|
Quantum technology is approaching a level of maturity, recently demonstrated
in space-borne experiments and in-field measurements, which would allow for
adoption by non-specialist users. Parallel advancements made in
microprocessor-based electronics and database software can be combined to
create robust, versatile and modular experimental monitoring systems. Here, we
describe a monitoring network used across a number of cold atom laboratories
with a shared laser system. The ability to diagnose malfunction, unexpected or
unintended behaviour and passively collect data for key experimental
parameters, such as vacuum chamber pressure, laser beam power, or resistances
of important conductors, significantly reduces debugging time. This allows for
efficient control over a number of experiments and remote control when access
is limited.
|
The Cherenkov Telescope Array (CTA) is an initiative that is currently
building the largest ground-based gamma-ray observatory to date. A Science
Alert Generation (SAG) system, part of the Array Control and Data Acquisition
(ACADA) system of the CTA Observatory, analyses online the telescope data -
arriving at an event rate of tens of kHz - to detect transient gamma-ray
events. The SAG system also performs an online data quality analysis to assess
the instruments' health during the data acquisition: this analysis is crucial
to confirm good detections. A software library called rta-dq-lib, with both
Python and C++ implementations, has been proposed for performing the online
data quality analysis of CTA data. The Python version is dedicated to the rapid prototyping of data
quality use cases. The C++ version is optimized for maximum performance. The
library allows the user to define, through XML configuration files, the format
of the input data and, for each data field, which quality checks must be
performed and which types of aggregations and transformations must be applied.
It internally translates the XML configuration into a directed acyclic
computational graph that encodes the dependencies of the computational tasks to
be performed. This model allows the library to easily take advantage of
parallelization at the thread level, and the overall flexibility allows us to
develop generic data quality analysis pipelines that could also be reused in
other applications.
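For intuition, the following toy sketch (our own, not the rta-dq-lib API) shows how declared dependencies between quality checks can be resolved into an executable order, mirroring the role of the library's internal computational graph:

```python
from graphlib import TopologicalSorter

# toy declaration: each data-quality task lists the tasks it depends on
tasks = {
    "parse_packet":   [],
    "check_range":    ["parse_packet"],
    "aggregate_mean": ["check_range"],
    "histogram":      ["check_range"],
    "publish_report": ["aggregate_mean", "histogram"],
}

# a topological order respects every dependency; independent tasks
# (e.g. aggregate_mean and histogram) could run on separate threads
order = list(TopologicalSorter(tasks).static_order())
print(order)
```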
|
The transformer based model (e.g., FusingTF) has been employed recently for
Electrocardiogram (ECG) signal classification. However, the high-dimensional
embedding obtained via 1-D convolution and positional encoding can lead to the
loss of the signal's own temporal information and to a large number of training
parameters. In this paper, we propose a new method for ECG classification,
called low-dimensional denoising embedding transformer (LDTF), which contains
two components, i.e., low-dimensional denoising embedding (LDE) and transformer
learning. In the LDE component, a low-dimensional representation of the signal
is obtained in the time-frequency domain while preserving its own temporal
information. And with the low dimensional embedding, the transformer learning
is then used to obtain a deeper and narrower structure with fewer training
parameters than FusingTF. Experiments conducted on the MIT-BIH
dataset demonstrate the effectiveness and the superior performance of our
proposed method, as compared with state-of-the-art methods.
|
The climate system is a complex, chaotic system with many degrees of freedom
and variability on a vast range of temporal and spatial scales. Attaining a
deeper level of understanding of its dynamical processes is a scientific
challenge of great urgency, especially given the ongoing climate change and the
evolving climate crisis. In statistical physics, complex, many-particle systems
are studied successfully using the mathematical framework of Large Deviation
Theory (LDT). A great potential exists for applying LDT to problems relevant
for geophysical fluid dynamics and climate science. In particular, LDT allows
for understanding the fundamental properties of persistent deviations of
climatic fields from the long-term averages and for associating them to
low-frequency, large scale patterns of climatic variability. Additionally, LDT
can be used in conjunction with so-called rare events algorithms to explore
rarely visited regions of the phase space and thus to study special dynamical
configurations of the climate. These applications are of key importance to
improve our understanding of high-impact weather and climate events.
Furthermore, LDT provides powerful tools for evaluating the probability of
noise-induced transitions between competing metastable states of the climate
system or of its components. This is in turn essential for improving our
understanding of the global stability properties of the climate system and of
its predictability of the second kind in the sense of Lorenz. The goal of this
review is manifold. First, we want to provide an introduction to the derivation
of large deviation laws in the context of stochastic processes. We then relate
such results to the existing literature showing the current status of
applications of LDT in climate science and geophysical fluid dynamics. Finally,
we propose some possible lines of future investigations.
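As a pointer to the kind of result reviewed here, the canonical large
deviation law for a time-averaged observable of a stochastic process takes
the schematic form (the standard G\"artner-Ellis setup, quoted for orientation
rather than taken from this review):

$$A_T=\frac{1}{T}\int_0^T a(X_s)\,\mathrm{d}s,\qquad \mathbb{P}(A_T\approx a)\asymp e^{-T\,I(a)},$$

where the rate function $I$ is the Legendre-Fenchel transform of the scaled
cumulant generating function, $I(a)=\sup_k\,[k a-\lambda(k)]$ with
$\lambda(k)=\lim_{T\to\infty}T^{-1}\ln\mathbb{E}\,[e^{kTA_T}]$. Persistent
deviations of climatic fields from their long-term averages are then
exponentially unlikely, with the likelihood controlled by $I$.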
|
We propose leveraging our proficiency for detecting Higgs resonances by using
the Higgs as a tagging object for new heavy physics. In particular, we argue
that searches for exotic Higgs production from decays of color-singlet fields
with electroweak charges could beat current searches at the LHC which look for
their decays to vectors. As an example, we study the production and decay of
vector-like leptons which admit Yukawa couplings with SM leptons. We find that
bounds from Run 2 searches are consistent with anywhere from hundreds to many
thousands of Higgses having been produced in their decays over the same period,
depending on the representation. Dedicated searches for these signatures may
thus be able to significantly improve our reach at the electroweak energy
frontier.
|
In this paper, we study certifying the robustness of ReLU neural networks
against adversarial input perturbations. To diminish the relaxation error
suffered by the popular linear programming (LP) and semidefinite programming
(SDP) certification methods, we propose partitioning the input uncertainty set
and solving the relaxations on each part separately. We show that this approach
reduces relaxation error, and that the error is eliminated entirely upon
performing an LP relaxation with an intelligently designed partition. To scale
this approach to large networks, we consider coarser partitions that take the
same form as this motivating partition. We prove that computing such a
partition that directly minimizes the LP relaxation error is NP-hard. By
instead minimizing the worst-case LP relaxation error, we develop a
computationally tractable scheme with a closed-form optimal two-part partition.
We extend the analysis to the SDP, where the feasible set geometry is exploited
to design a two-part partition that minimizes the worst-case SDP relaxation
error. Experiments on IRIS classifiers demonstrate significant reduction in
relaxation error, offering certificates that are otherwise void without
partitioning. By independently increasing the input size and the number of
layers, we empirically illustrate under which regimes the partitioned LP and
SDP are best applied.
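The mechanism is easy to see in a toy setting. The sketch below substitutes
simple interval bound propagation for the paper's LP/SDP relaxations (a
deliberate simplification; the network weights and input box are made up) and
shows that certifying each part of a split input set and taking the worst
case can only tighten the certified bound:

```python
# Illustration of why input partitioning tightens relaxation-based
# certificates. Interval bound propagation stands in for the LP/SDP
# relaxations; the two-layer ReLU network below is invented.
import numpy as np

W1 = np.array([[1.0, 1.0], [1.0, -1.0]]); b1 = np.zeros(2)
W2 = np.array([[-1.0, -1.0]]);            b2 = np.zeros(1)

def ibp_lower(lo, hi):
    """Certified lower bound on the scalar output over the box [lo, hi]."""
    Wp, Wm = np.maximum(W1, 0), np.minimum(W1, 0)
    l1, u1 = Wp @ lo + Wm @ hi + b1, Wp @ hi + Wm @ lo + b1
    l1, u1 = np.maximum(l1, 0), np.maximum(u1, 0)        # ReLU bounds
    Wp, Wm = np.maximum(W2, 0), np.minimum(W2, 0)
    return (Wp @ l1 + Wm @ u1 + b2)[0]

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
whole = ibp_lower(lo, hi)
# Split the box along the second coordinate; the certificate over the union
# is the worst case over the parts, which can only be tighter (larger).
parts = min(ibp_lower(np.array([-1.0, -1.0]), np.array([1.0, 0.0])),
            ibp_lower(np.array([-1.0, 0.0]), np.array([1.0, 1.0])))
print(whole, parts)  # -4.0 -3.0: partitioning is strictly tighter
                     # (the true minimum over the box is -2.0)
```

The same monotonicity underlies the LP and SDP relaxations studied here; the
hard part, addressed above, is choosing the partition well at tractable cost.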
|
Triangle counting is a building block for a wide range of graph applications.
Traditional wisdom suggests that i) hashing is not suitable for triangle
counting, ii) edge-centric triangle counting beats vertex-centric design, and
iii) communication-free and workload balanced graph partitioning is a grand
challenge for triangle counting. On the contrary, we advocate that i) hashing
can help the key operations for scalable triangle counting on Graphics
Processing Units (GPUs), i.e., list intersection and graph partitioning,
ii) vertex-centric design reduces both hash table construction cost and memory
consumption, which is limited on GPUs. In addition, iii) we exploit
collaborative graph and workload partitioning based on hashing in 2D to scale
vertex-centric triangle counting over 1,000 GPUs with sustained scalability. In
this work, we present TRUST, which performs triangle counting with the hash
operation and vertex-centric mechanism at the core. To the best of our
knowledge, TRUST is the first work that achieves over one trillion Traversed
Edges Per Second (TEPS) rate for triangle counting.
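As a paper-and-pencil illustration of the vertex-centric, hash-based counting
idea, here is a toy CPU sketch in Python (TRUST itself is a multi-GPU system
and its actual kernels differ):

```python
# Minimal CPU sketch of hash-assisted, vertex-centric triangle counting.
# Each vertex owns a hash set of its higher-ranked neighbors, and triangles
# are counted by hash-set intersection over the oriented edges.
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

# Orient each edge from lower to higher degree (ties broken by id) so that
# every triangle is counted exactly once.
def rank(v): return (len(adj[v]), v)
out = {v: {w for w in adj[v] if rank(w) > rank(v)} for v in adj}

triangles = sum(len(out[u] & out[v])       # hash-set intersection
                for u in out for v in out[u])
print(triangles)  # 2 triangles: (0,1,2) and (1,2,3)
```

The hash-set intersection here stands in for the GPU hash-table probes
described above.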
|
The rapid development of reliable Quantum Processing Units (QPU) opens up
novel computational opportunities for machine learning. Here, we introduce a
procedure for measuring the similarity between graph-structured data, based on
the time-evolution of a quantum system. By encoding the topology of the input
graph in the Hamiltonian of the system, the evolution produces measurement
samples that retain key features of the data. We study analytically the
procedure and illustrate its versatility in providing links to standard
classical approaches. We then show numerically that this scheme performs well
compared to standard graph kernels on typical benchmark datasets. Finally, we
study the possibility of a concrete implementation on a realistic neutral-atom
quantum processor.
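A toy classical simulation conveys the general idea (our schematic, not the
paper's exact protocol; the real scheme samples measurement outcomes on
hardware such as the neutral-atom processor mentioned above):

```python
# Toy sketch: encode each graph's topology in a Hamiltonian, time-evolve a
# fixed initial state, and compare the resulting measurement distributions.
import numpy as np
import networkx as nx
from scipy.linalg import expm

def measurement_distribution(G, t=1.0):
    H = nx.to_numpy_array(G)                  # adjacency matrix as Hamiltonian
    n = H.shape[0]
    psi0 = np.ones(n) / np.sqrt(n)            # uniform initial superposition
    psi = expm(-1j * t * H) @ psi0            # unitary time evolution
    return np.abs(psi) ** 2                   # Born-rule probabilities

def similarity(G1, G2, t=1.0):
    p, q = measurement_distribution(G1, t), measurement_distribution(G2, t)
    # Crude, node-order-insensitive kernel between the two distributions.
    return float(np.exp(-np.sum((np.sort(p) - np.sort(q)) ** 2)))

G1, G2 = nx.cycle_graph(6), nx.path_graph(6)
print(similarity(G1, G1), similarity(G1, G2))  # self-similarity is 1.0
```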
|
We established a Spatio-Temporal Neural Network, namely STNN, to forecast the
spread of the coronavirus COVID-19 outbreak worldwide in 2020. The basic
structure of STNN is similar to that of the Recurrent Neural Network (RNN),
but it incorporates not only temporal data but also spatial features. Two
improved STNN architectures, namely the STNN with Augmented Spatial States
(STNN-A) and the STNN with Input Gate (STNN-I), are proposed, which ensure more
predictability and flexibility. STNN and its variants can be trained using
Stochastic Gradient Descent (SGD) algorithm and its improved variants (e.g.,
Adam, AdaGrad and RMSProp). Our STNN models are compared with several classical
epidemic prediction models, including the fully-connected neural network
(BPNN), the recurrent neural network (RNN), classical curve-fitting models,
and the SEIR dynamical system model. Numerical simulations
demonstrate that STNN models outperform many others by providing more accurate
fitting and prediction, and by handling both spatial and temporal data.
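A schematic of such a spatio-temporal recurrent cell, written in PyTorch, may
clarify the structure (this is our guess from the description above;
dimensions and the spatial coupling are made up):

```python
# Schematic recurrent cell in the spirit of STNN: the hidden update mixes a
# region's own case counts with its neighbors' spatial states.
import torch
import torch.nn as nn

class STNNCell(nn.Module):
    def __init__(self, d_hidden=16):
        super().__init__()
        self.inp = nn.Linear(1, d_hidden)          # temporal input (new cases)
        self.rec = nn.Linear(d_hidden, d_hidden)   # recurrent term
        self.spa = nn.Linear(d_hidden, d_hidden)   # spatial term (neighbors)

    def forward(self, x_t, h, A):
        # x_t: (regions, 1) counts; h: (regions, d); A: normalized adjacency
        spatial = A @ h                             # aggregate neighbor states
        return torch.tanh(self.inp(x_t) + self.rec(h) + self.spa(spatial))

n = 5
A = torch.ones(n, n) / n                            # toy spatial coupling
cell, h = STNNCell(), torch.zeros(n, 16)
for x_t in torch.rand(10, n, 1):                    # 10 days of counts
    h = cell(x_t, h, A)
print(h.shape)  # torch.Size([5, 16])
```

Any of the optimizers listed above (SGD, Adam, AdaGrad, RMSProp) can train
such a cell end-to-end on observed case counts.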
|
Domain-specific neural network accelerators have seen growing interest in
recent years due to their improved energy efficiency and inference performance
compared to CPUs and GPUs. In this paper, we propose a novel cross-layer
optimized neural network accelerator called CrossLight that leverages silicon
photonics. CrossLight includes device-level engineering for resilience to
process variations and thermal crosstalk, circuit-level tuning enhancements for
inference latency reduction, and architecture-level optimization to enable
higher resolution, better energy-efficiency, and improved throughput. On
average, CrossLight offers 9.5x lower energy-per-bit and 15.9x higher
performance-per-watt at 16-bit resolution than state-of-the-art photonic deep
learning accelerators.
|
The rise of digitization of cultural documents offers large-scale contents,
opening the road for development of AI systems in order to preserve, search,
and deliver cultural heritage. Organizing such cultural content also means
classifying it, a task that is very familiar to modern computer science.
Contextual information is often the key to structuring such real-world data,
and we propose to use it in the form of a knowledge graph. Such a knowledge
graph, combined with content analysis, enhances the notion of proximity
between artworks and thus improves performance in classification tasks. In this
paper, we propose a novel use of a knowledge graph, which is constructed on
annotated data and pseudo-labeled data. With label propagation, we boost
artwork classification by training a model using a graph convolutional network,
relying on the relationships between entities of the knowledge graph. Following
a transductive learning framework, our experiments show that relying on a
knowledge graph modeling the relations between labeled and unlabeled data
allows us to achieve state-of-the-art results on multiple classification tasks on
a dataset of paintings, and on a dataset of Buddha statues. Additionally, we
show state-of-the-art results for the difficult case of dealing with unbalanced
data, with the limitation of disregarding classes with extremely low degrees in
the knowledge graph.
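The propagation step at the heart of such a graph convolutional network is
compact enough to sketch (the standard Kipf-Welling rule; the toy adjacency
and features below are invented, not the paper's artwork graph):

```python
# One graph-convolution layer: class evidence propagates along the edges of
# the knowledge graph via the normalized adjacency matrix.
import numpy as np

A = np.array([[0, 1, 1, 0],      # adjacency between 4 artworks/entities
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.random.rand(4, 8)          # node features from content analysis
W = np.random.rand(8, 3)          # trainable weights -> 3 classes

A_hat = A + np.eye(4)                             # add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(1)))   # degree normalization
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)  # ReLU(A_hat X W)
print(H.shape)  # (4, 3): each node's scores mix its neighbors' evidence
```

Stacking such layers lets label information from annotated and pseudo-labeled
nodes propagate along knowledge-graph edges to unlabeled artworks.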
|
DeepOnets have recently been proposed as a framework for learning nonlinear
operators mapping between infinite dimensional Banach spaces. We analyze
DeepOnets and prove estimates on the resulting approximation and generalization
errors. In particular, we extend the universal approximation property of
DeepOnets to include measurable mappings in non-compact spaces. By a
decomposition of the error into encoding, approximation and reconstruction
errors, we prove both lower and upper bounds on the total error, relating it to
the spectral decay properties of the covariance operators, associated with the
underlying measures. We derive almost optimal error bounds with very general
affine reconstructors and with random sensor locations as well as bounds on the
generalization error, using covering number arguments. We illustrate our
general framework with four prototypical examples of nonlinear operators,
namely those arising in a nonlinear forced ODE, an elliptic PDE with variable
coefficients and nonlinear parabolic and hyperbolic PDEs. In all these
examples, we prove that DeepOnets break the curse of dimensionality, thus
demonstrating the efficient approximation of infinite-dimensional operators
with this machine learning framework.
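For orientation, the DeepOnet architecture analyzed here can be written
schematically (following the standard formulation of Lu et al.; the notation
below is ours):

$$\mathcal{G}_\theta(u)(y)=\sum_{k=1}^{p}\beta_k\bigl(u(x_1),\dots,u(x_m)\bigr)\,\tau_k(y),$$

where the branch net $\beta$ acts on the input function $u$ sampled at $m$
sensor locations $x_j$ and the trunk net $\tau$ acts on the query point $y$.
The error decomposition mentioned above then bounds the total error by the sum
of the encoding error (sampling $u$ at finitely many sensors), the
approximation error of the neural networks, and the reconstruction error of
the expansion in the $\tau_k$.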
|
Chaotic quantum systems with Lyapunov exponent $\lambda_\mathrm{L}$ obey an
upper bound $\lambda_\mathrm{L}\leq 2\pi k_\mathrm{B}T/\hbar$ at temperature
$T$, implying a divergence of the bound in the classical limit $\hbar\to 0$.
Following this trend, does a quantum system necessarily become `more chaotic'
when quantum fluctuations are reduced? We explore this question by computing
$\lambda_\mathrm{L}(\hbar,T)$ in the quantum spherical $p$-spin glass model,
where $\hbar$ can be continuously varied. We find that quantum fluctuations, in
general, make the paramagnetic phase less chaotic and the spin glass phase more
chaotic. We show that the approach to the classical limit could be non-trivial,
with non-monotonic dependence of $\lambda_\mathrm{L}$ on $\hbar$ close to the
dynamical glass transition temperature $T_d$. Our results in the classical
limit ($\hbar\to 0$) naturally describe chaos in the supercooled liquid regime
of structural glasses. We find a crossover from strong to weak chaos substantially
above $T_d$, concomitant with the onset of two-step glassy relaxation. We
further show that $\lambda_\mathrm{L}\sim T^\alpha$, with the exponent $\alpha$
varying between 2 and 1 from quantum to classical limit, at low temperatures in
the spin glass phase. Our results reveal intricate interplay between quantum
fluctuations, glassy dynamics and chaos.
|
An exotic rotationally invariant harmonic oscillator (ERIHO) is constructed
by applying a non-unitary isotropic conformal bridge transformation (CBT) to a
free planar particle. It is described by the isotropic harmonic oscillator
Hamiltonian supplemented by a Zeeman type term with a real coupling constant
$g$. The model reveals the Euclidean ($|g|<1$) and Minkowskian ($|g|>1$) phases
separated by the phases $g=+1$ and $g=-1$ of the Landau problem in the
symmetric gauge with opposite orientation of the magnetic field. A hidden
symmetry emerges in the system at rational values of $g$. Its generators,
together with the Hamiltonian and angular momentum, produce non-linearly
deformed $\mathfrak{u}(2)$ and $\mathfrak{gl}(2,{\mathbb R})$ algebras in the
cases of $0<|g|<1$ and $\infty>|g|>1$, which transmute one into another under
the inversion $g\rightarrow -1/g$. Similarly, the true, $\mathfrak{u}(2)$, and
extended conformal, $\mathfrak{gl}(2,{\mathbb R})$, symmetries of the isotropic
Euclidean oscillator ($g=0$) interchange their roles in the isotropic
Minkowskian oscillator ($|g|=\infty$), while two copies of the
$\mathfrak{gl}(2,{\mathbb R})$ algebra of analogous symmetries mutually
transmute in Landau phases. We show that the ERIHO system is transformed by a
peculiar unitary transformation into the anisotropic harmonic oscillator
generated, in turn, by anisotropic CBT. The relationship between the ERIHO and
the subcritical phases of the harmonically extended Landau problem, as well as
with a plane isotropic harmonic oscillator in a uniformly rotating reference
frame, is established.
|
We consider a sequence of variables having multinomial distribution with the
number of trials corresponding to these variables being large and possibly
different. The multinomial probabilities of the categories are assumed to vary
randomly depending on batches. The proposed framework is interesting from the
perspective of various applications in practice such as predicting the winner
of an election, forecasting the market share of different brands etc. In this
work, we first derive sufficient conditions for asymptotic normality of the
estimates of the multinomial cell probabilities, and corresponding suitable
transformations. Then, we consider a Bayesian setting to implement our model.
We consider hierarchical priors using multivariate normal and inverse Wishart
distributions, and establish the posterior consistency. Based on this result
and following appropriate Gibbs sampling algorithms, we can draw inferences
about aggregate data. The methodology is illustrated in detail with two
real-life applications, in the contexts of political elections and sales
forecasting. Additional insights into effectiveness are also derived through a
simulation study.
|
It is widely accepted that both backscattering and dissipation cannot occur
in topological systems because of the topological protection. Here we show that
the thermal dissipation can occur in the quantum Hall (QH) regime in graphene
in the presence of dissipation sources, although the Hall plateaus and the zero
longitudinal resistance still survive. Dissipation appears along the downstream
chiral flow direction of the constriction in the Hall plateau regime, but it
occurs mainly in the bulk in the Hall plateau transition regime. In addition,
dissipation processes are accompanied with the evolution of the energy
distribution from non-equilibrium to equilibrium. This indicates that topology
neither prohibits the appearance of dissipation nor prohibits entropy
increasing, which opens a new topic on the dissipation in topological systems.
|
Attractor-based end-to-end diarization is achieving comparable accuracy to
the carefully tuned conventional clustering-based methods on challenging
datasets. However, the main drawback is that it cannot deal with the case where
the number of speakers is larger than the one observed during training. This is
because its speaker counting relies on supervised learning. In this work, we
introduce an unsupervised clustering process embedded in the attractor-based
end-to-end diarization. We first split a sequence of frame-wise embeddings into
short subsequences and then perform attractor-based diarization for each
subsequence. Given subsequence-wise diarization results, inter-subsequence
speaker correspondence is obtained by unsupervised clustering of the vectors
computed from the attractors from all the subsequences. This makes it possible
to produce diarization results of a large number of speakers for the whole
recording even if the number of output speakers for each subsequence is
limited. Experimental results showed that our method could produce accurate
diarization results for an unseen number of speakers. Our method achieved
diarization error rates of 11.84%, 28.33%, and 19.49% on the CALLHOME, DIHARD
II, and DIHARD III datasets, respectively, in each case better than
conventional end-to-end diarization methods.
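The inter-subsequence matching step can be sketched in a few lines (synthetic
vectors stand in for real attractors; the specific clustering algorithm is our
choice for illustration):

```python
# Toy sketch of inter-subsequence speaker matching: attractor vectors from
# all subsequences are clustered, and each (subsequence, local speaker)
# label is replaced by its global cluster id.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
spk = rng.normal(size=(3, 16))                 # 3 "true" speaker directions
# (subsequence, local speaker) -> attractor = true speaker + noise
attractors = {(0, 0): spk[0], (0, 1): spk[1],
              (1, 0): spk[1], (1, 1): spk[2],
              (2, 0): spk[0]}
keys = list(attractors)
X = np.stack([attractors[k] + 0.05 * rng.normal(size=16) for k in keys])

labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
global_id = dict(zip(keys, labels))
print(global_id)  # e.g. (0,0) and (2,0) share a global id: same speaker
```

Because the global speaker count comes from unsupervised clustering rather
than from supervised counting, it is no longer capped by the number of
speakers seen during training.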
|
We study the effects of the flux configurations on the emergent Majorana
fermions in the $S=1/2$ Kitaev model on a honeycomb lattice, where quantum
spins are fractionalized into itinerant Majorana fermions and localized fluxes.
A quantum spin liquid appears as the ground state of the Kitaev model in the
flux-free sector, which has intensively been investigated so far. In this flux
sector, the Majorana fermion system has linear dispersions and shows power law
behavior in the Majorana correlations. On the other hand, periodically-arranged
flux configurations yield low-energy excitations in the Majorana fermion
system, which are distinctly different from those in the flux-free state. We
find that one of the periodically arranged flux states results in the gapped
Majorana dispersion and the exponential decay in the Majorana correlations. The
Kitaev system with another flux configuration exhibits a semi-Dirac like
dispersion, leading to the power law decay with a smaller power than that in
the flux-free sector along symmetry axes. We also examine the effect of the
randomness in the flux configurations and clarify that the Majorana density of
states is filled by increasing the flux density, and power-law decay in the
Majorana correlations remains. The present results could be important to
control the motion of Majorana fermions, which carries the spin excitations, in
the Kitaev candidate materials.
|
Bouncing models are alternatives to inflationary cosmology that replace the
initial Big-Bang singularity by a `bouncing' phase. A deeper understanding of
the initial conditions of the universe, in these scenarios, requires knowledge
of quantum aspects of bouncing models. In this work, we propose two classes of
bouncing models that can be studied with great analytical ease and hence,
provide test-bed for investigating more profound problems in quantum cosmology
of bouncing universes. Our model's two key ingredients enable us to do
straightforward analytical calculations: (i) a convenient parametrization of
the minisuperspace of FLRW spacetimes and (ii) two distinct choices of the
effective perfect fluids that source the background geometry of the bouncing
universe. We study the quantum cosmology of these models using both the
Wheeler-DeWitt equation and the path integral approach. In particular, we
find a bouncing-model analogue of the no-boundary wavefunction and present a
Lorentzian path integral representation for the same. We also discuss the
introduction of real scalar perturbations.
|
In highway scenarios, an alert human driver will typically anticipate early
cut-in/cut-out maneuvers of surrounding vehicles, mainly using visual cues.
Autonomous vehicles must anticipate these situations at an early stage too, to
increase their safety and efficiency. In this work, lane-change recognition and
prediction tasks are posed as video action recognition problems. Up to four
different two-stream-based approaches, which have been successfully applied to
address human action recognition, are adapted here by stacking visual cues from
forward-looking video cameras to recognize and anticipate lane-changes of
target vehicles. We study the influence of context and observation horizons on
performance, and different prediction horizons are analyzed. The different
models are trained and evaluated using the PREVENTION dataset. The obtained
results clearly demonstrate the potential of these methodologies to serve as
robust predictors of future lane-changes of surrounding vehicles, achieving an
accuracy higher than 90% at time horizons of 1-2 seconds.
|
The advent of large pre-trained language models has made it possible to make
high-quality predictions on how to add or change a sentence in a document.
However, the high branching factor inherent to text generation impedes the
ability of even the strongest language models to offer useful editing
suggestions at a more global or document level. We introduce a new task,
document sketching, which involves generating entire draft documents for the
writer to review and revise. These drafts are built from sets of documents that
overlap in form - sharing large segments of potentially reusable text - while
diverging in content. To support this task, we introduce a Wikipedia-based
dataset of analogous documents and investigate the application of weakly
supervised methods, including use of a transformer-based mixture of experts,
together with reinforcement learning. We report experiments using automated and
human evaluation methods and discuss relative merits of these models.
|
Quasi-periodic changes of the paleointensity and geomagnetic polarity in the
intervals of 170 Ma to the present time and of 550 Ma to the present time were
studied, respectively. It is revealed that the spectrum of the basic variations
in the paleointensity and of the duration of the polar intervals is discrete
and includes quasi-periodic oscillations with characteristic times of 15 Ma, 8
Ma, 5 Ma, and 3 Ma. The characteristic time of these quasi-periodic changes of
the geomagnetic field at the beginning and at the end of the Phanerozoic
differed by no more than 10%. The spectral density of quasi-periodic variations
of the geomagnetic field changed cyclically over geological time. The relation
between the behaviors of the amplitude of paleointensity variations, the
duration of the polar intervals, and their spectral density was shown.
Quasi-periodic variations of the paleointensity (geomagnetic activity) had a
relatively high spectral density in the interval of (150 - 40) Ma (in the
Cretaceous - Early Paleogene). In this interval, both the amplitude of
paleointensity variations and the duration of polar intervals increased. In the
intervals of (170 - 150) Ma and of 30 Ma to the present, quasi-periodic
variations in the paleointensity were practically undetectable against the
background of noise variations. At the same time, the amplitude of the
paleointensity variations and duration of polar intervals decreased. An
alternation of time intervals in which the paleointensity variations acquired
either a quasi-periodic or noise character took place during the geomagnetic
history.
|
Object tracking has achieved significant progress over the past few years.
However, state-of-the-art trackers become increasingly heavy and expensive,
which limits their deployments in resource-constrained applications. In this
work, we present LightTrack, which uses neural architecture search (NAS) to
design more lightweight and efficient object trackers. Comprehensive
experiments show that our LightTrack is effective. It can find trackers that
achieve superior performance compared to handcrafted SOTA trackers, such as
SiamRPN++ and Ocean, while using much fewer model Flops and parameters.
Moreover, when deployed on resource-constrained mobile chipsets, the discovered
trackers run much faster. For example, on Snapdragon 845 Adreno GPU, LightTrack
runs $12\times$ faster than Ocean, while using $13\times$ fewer parameters and
$38\times$ fewer Flops. Such improvements might narrow the gap between academic
models and industrial deployments in object tracking task. LightTrack is
released at https://github.com/researchmm/LightTrack.
|
Solar radio type II bursts serve as early indicators of incoming
geo-effective space weather events such as coronal mass ejections (CMEs). In
order to investigate the origin of high-frequency type II bursts (HF type II
bursts), we have identified 51 of them (among 180 type II bursts from SWPC
reports) that are observed by ground-based Compound Astronomical Low-cost
Low-frequency Instrument for Spectroscopy and Transportable Observatory
(CALLISTO) spectrometers and whose upper-frequency cutoff (of either
fundamental or harmonic emission) lies in between 150 MHz-450 MHz during
2010-2019. We found that 60% of HF type II bursts, whose upper-frequency cutoff
$\geq$ 300 MHz originate from the western longitudes. Further, our study finds
a good correlation $\sim $ 0.73 between the average shock speed derived from
the radio dynamic spectra and the corresponding speed from CME data. Also, we
found that analyzed HF type II bursts are associated with wide and fast CMEs
located near the solar disk. In addition, we have analyzed the spatio-temporal
characteristics of two of these high-frequency type II bursts and compared the
characteristics derived from radio observations with those derived from
multi-spacecraft CME observations from SOHO/LASCO and STEREO coronagraphs.
|
We report on the design and whole characterization of low-noise and
affordable-cost Yb-doped double-clad fiber amplifiers operating at room
temperature in the near-infrared spectral region at a pulse repetition rate of
160 MHz. Two different experimental configurations are discussed. In the first
one, broadband seed radiation with a transform-limited pulse duration of 71
fs, an optical spectrum 20 nm wide at around 1040 nm, and 20 mW average
power is adopted. In the second configuration, the seed radiation is
constituted by stretched pulses with a time duration as long as 170 ps, with a
5-nm narrow pulse spectrum centered at 1029 nm and 2 mW average input power. In
both cases we obtained transform limited pulse trains with an amplified output
power exceeding 2 W. Furthermore, relative intensity noise measurements show
that no significant noise degradation occurs during the amplification process.
|
The mapper construction is a powerful tool from topological data analysis
that is designed for the analysis and visualization of multivariate data. In
this paper, we investigate a method for stitching a pair of univariate mappers
together into a bivariate mapper, and study topological notions of information
gains, referred to as topological gains, during such a process. We further
provide implementations that visualize such topological gains for mapper
graphs.
|
Artificial bacteria flagella (ABFs) are magnetic helical micro-swimmers that
can be remotely controlled via a uniform, rotating magnetic field. Previous
studies have used the heterogeneous response of microswimmers to external
magnetic fields for achieving independent control. Here we introduce analytical
and reinforcement learning control strategies for path planning to a target by
multiple swimmers using a uniform magnetic field. The comparison of the two
algorithms shows the superiority of reinforcement learning in achieving minimal
travel time to a target. The results demonstrate, for the first time, the
effective independent navigation of realistic micro-swimmers with a uniform
magnetic field in a viscous flow field.
|
When a chaotic, ergodic Hamiltonian system with $N$ degrees of freedom is
subject to sufficiently rapid periodic driving, its energy evolves diffusively.
We derive a Fokker-Planck equation that governs the evolution of the system's
probability distribution in energy space, and we provide explicit expressions
for the energy drift and diffusion rates. Our analysis suggests that the system
generically relaxes to a long-lived "prethermal" state characterized by minimal
energy absorption, eventually followed by more rapid heating. When $N\gg 1$,
the system ultimately absorbs energy indefinitely from the drive, or at least
until an infinite temperature state is reached.
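Schematically, and in our notation rather than the paper's, such a
Fokker-Planck equation for the probability density $\rho(E,t)$ over energy
takes the form

$$\frac{\partial\rho(E,t)}{\partial t}=-\frac{\partial}{\partial E}\bigl[g_1(E)\,\rho\bigr]+\frac{1}{2}\frac{\partial^2}{\partial E^2}\bigl[g_2(E)\,\rho\bigr],$$

with $g_1$ the energy drift rate and $g_2$ the energy diffusion rate; for
ergodic driven systems the two rates are typically tied together through the
density of states, so that specifying the diffusion rate also fixes the drift.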
|
Understanding the drift motion and dynamical locking of crystalline clusters
on patterned substrates is important for the diffusion and manipulation of
nano- and micro-scale objects on surfaces. In a previous work, we studied the
orientational and directional locking of colloidal two-dimensional clusters
with triangular structure driven across a triangular substrate lattice. Here we
show with experiments and simulations that such locking features arise for
clusters with arbitrary lattice structure sliding across arbitrary regular
substrates. Similar to triangular-triangular contacts, orientational and
directional locking are strongly correlated via the real- and reciprocal-space
moir\'e patterns of the contacting surfaces. Due to the different symmetries of
the surfaces in contact, however, the relation between the locking orientation
and the locking direction becomes more complicated compared to interfaces
composed of identical lattice symmetries. We provide a generalized formalism
which describes the relation between the locking orientation and locking
direction with arbitrary lattice symmetries.
|
A unification of Klein--Gordon, Dirac, Maxwell, Rarita--Schwinger and
Einstein equations exact solutions (for the massless fields cases) is
presented. The method is based on writing all of the relevant dynamical fields
in terms of products and derivatives of pre--potential functions, which satisfy
d'Alembert equation. The coupled equations satisfied by the pre--potentials are
non-linear. Remarkably, there are particular solutions of (gradient) orthogonal
pre--potentials that satisfy the usual wave equation which may be used to
construct {\it{exact non--trivial solutions to Klein--Gordon, Dirac, Maxwell,
Rarita--Schwinger and (linearized and full) Einstein equations}}, thus giving
rise to a unification of the solutions of all massless field equations for any
spin. Some solutions written in terms of orthogonal pre--potentials are
presented. Relations of this method to previously developed ones, as well as to
other subjects in physics are pointed out.
|
Soil carbon accounting and prediction play a key role in building decision
support systems for land managers selling carbon credits, in the spirit of the
Paris and Kyoto protocol agreements. Land managers typically rely on
computationally complex models fit using sparse datasets to make these
accountings and predictions. The model complexity and sparsity of the data can
lead to over-fitting, producing inaccurate results when the models are applied
to new data or used for prediction. Modellers address over-fitting by simplifying their models,
neglecting some soil organic carbon (SOC) components. In this study, we
introduce two novel SOC models and a new RothC-like model and investigate how
the SOC components and complexity of the SOC models affect the SOC prediction
in the presence of small and sparse time series data. We develop model
selection methods that can identify the soil carbon model with the best
predictive performance, in light of the available data. Through this analysis
we reveal that commonly used complex soil carbon models can over-fit in the
presence of sparse time series data, and our simpler models can produce more
accurate predictions.
|
The observation of a radioactively powered kilonova AT~2017gfo associated
with the gravitational wave-event GW170817 from binary neutron star merger
proves that these events are ideal sites for the production of heavy
$r$-process elements. The gamma-ray photons produced by the radioactive decay
of heavy elements are unique probes for the detailed nuclide compositions.
Based on detailed $r$-process nucleosynthesis calculations and considering
radiative transport calculations for the gamma-rays in different shells, we
study the gamma-ray emission in a merger ejecta on a timescale of a few days.
It is found that the evolution of the total gamma-ray energy generation rate
roughly follows $\dot{E}\propto t^{-1.3}$. For the dynamical ejecta with a
low electron fraction ($Y_{\rm e}\lesssim0.20$), the dominant contributors of
gamma-ray energy are the nuclides around the second $r$-process peak
($A\sim130$), and the decay chain of $^{132}$Te ($t_{1/2}=3.21$~days)
$\rightarrow$ $^{132}$I ($t_{1/2}=0.10$~days) $\rightarrow$ $^{132}$Xe produces
gamma-ray lines at $228$ keV, $668$ keV, and $773$ keV. For the case of a wind
ejecta with $Y_{\rm e}\gtrsim0.30$, the dominant contributors of gamma-ray
energy are the nuclides around the first $r$-process peak ($A\sim80$), and the
decay chain of $^{72}$Zn ($t_{1/2}=1.93$~days) $\rightarrow$ $^{72}$Ga
($t_{1/2}=0.59$~days) $\rightarrow$ $^{72}$Ge produces gamma-ray lines at $145$
keV, $834$ keV, $2202$ keV, and $2508$ keV. The peak fluxes of these lines are
$10^{-9}\sim 10^{-7}$~ph~cm$^{-2}$ s$^{-1}$, which are marginally detectable
with the next-generation MeV gamma-ray detector \emph{ETCC} if the source is at
a distance of $40$~Mpc.
|
This paper addresses reinforcement learning based, direct signal tracking
control with an objective of developing mathematically suitable and practically
useful design approaches. Specifically, we aim to provide reliable and easy to
implement designs in order to reach reproducible neural network-based
solutions. Our proposed new design takes advantage of two control design
frameworks: a reinforcement learning based, data-driven approach to provide the
needed adaptation and (sub)optimality, and a backstepping based approach to
provide closed-loop system stability framework. We develop this work based on
an established direct heuristic dynamic programming (dHDP) learning paradigm to
perform online learning and adaptation and a backstepping design for a class of
important nonlinear dynamics described as Euler-Lagrange systems. We provide a
theoretical guarantee for the stability of the overall dynamic system, weight
convergence of the approximating nonlinear neural networks, and the Bellman
(sub)optimality of the resulting control policy. We use simulations to
demonstrate significantly improved design performance of the proposed approach
over the original dHDP.
|
In this paper, we formalize precisely the sense in which the application of a
cellular automaton to partial configurations is a natural extension of its
local transition function, through the categorical notion of Kan extension. In
fact, the two possible ways to do such an extension and the ingredients
involved in their definition are related through Kan extensions in many ways.
These relations provide additional links between computer science and category
theory, and also give a new point of view on the famous Curtis-Hedlund theorem
of cellular automata from the extended topological point of view provided by
category theory. No prior knowledge of category theory is assumed.
|
Emotion recognition from speech is a challenging task. Recent advances in
deep learning have established the bi-directional recurrent neural network
(Bi-RNN) with an attention mechanism as a standard method for speech emotion
recognition, extracting and attending to multi-modal features - audio and
text - and then fusing them for downstream emotion classification tasks. In
this paper, we propose a simple yet efficient neural network architecture to
exploit both acoustic and lexical information from speech. The proposed
framework uses multi-scale convolutional layers (MSCNN) to obtain both audio
and text hidden representations. Then, a statistical pooling unit (SPU) is
used to further extract the features in each modality. Besides, an attention
module can be built on top of the MSCNN-SPU (audio) and MSCNN (text) to
further improve the performance. Extensive experiments show that the proposed
model outperforms previous state-of-the-art methods on the IEMOCAP dataset
with four emotion categories (i.e., angry, happy, sad and neutral) in both
weighted accuracy (WA) and unweighted accuracy (UA), with improvements of
5.0% and 5.2% respectively under the ASR setting.
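A minimal PyTorch sketch of the MSCNN-plus-SPU front end as we read it
(hyperparameters invented; the attention module and the text branch are
omitted):

```python
# Multi-scale convolutions followed by a statistical pooling unit (SPU):
# per-channel mean and standard deviation over time are concatenated.
import torch
import torch.nn as nn

class MSCNN_SPU(nn.Module):
    def __init__(self, in_dim=40, n_filters=32, scales=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, n_filters, k, padding=k // 2) for k in scales)

    def forward(self, x):                     # x: (batch, in_dim, frames)
        h = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        # Statistical pooling: summarize the whole utterance in one vector.
        return torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)

feats = MSCNN_SPU()(torch.randn(2, 40, 100))  # 2 utterances, 100 frames
print(feats.shape)  # torch.Size([2, 192]) = 3 scales * 32 filters * (mean+std)
```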
|
The asymptotic equivalence of canonical and microcanonical ensembles is a
central concept in statistical physics, with important consequences for both
theoretical research and practical applications. However, this property breaks
down under certain circumstances. The most studied violation of ensemble
equivalence requires phase transitions, in which case it has a `restricted'
(i.e. confined to a certain region in parameter space) but `strong' (i.e.
characterized by a difference between the entropies of the two ensembles that
is of the same order as the entropies themselves) form. However, recent
research on networks has shown that the presence of an extensive number of
local constraints can lead to ensemble nonequivalence even in the absence of
phase transitions. This occurs in a `weak' (i.e. leading to a subleading
entropy difference) but remarkably `unrestricted' (i.e. valid in the entire
parameter space) form. Here we look for more general manifestations of ensemble
nonequivalence in arbitrary ensembles of matrices with given margins. These
models have widespread applications in the study of spatially heterogeneous
and/or temporally nonstationary systems, with consequences for the analysis of
multivariate financial and neural time-series, multi-platform social activity,
gene expression profiles and other Big Data. We confirm that ensemble
nonequivalence appears in `unrestricted' form throughout the entire parameter
space due to the extensivity of local constraints. Surprisingly, at the same
time it can also exhibit the `strong' form. This novel, simultaneously `strong
and unrestricted' form of nonequivalence is very robust and imposes a
principled choice of the ensemble. We calculate the proper mathematical
quantities to be used in real-world applications.
|
Privacy-preserving genomic data sharing is prominent to increase the pace of
genomic research, and hence to pave the way towards personalized genomic
medicine. In this paper, we introduce ($\epsilon , T$)-dependent local
differential privacy (LDP) for privacy-preserving sharing of correlated data
and propose a genomic data sharing mechanism under this privacy definition. We
first show that the original definition of LDP is not suitable for genomic data
sharing, and then we propose a new mechanism to share genomic data. The
proposed mechanism considers the correlations in data during data sharing,
eliminates statistically unlikely data values beforehand, and adjusts the
probability distributions for each shared data point accordingly. By doing so,
we show that we can prevent an attacker from inferring the correct values of the
shared data points by utilizing the correlations in the data. By adjusting the
probability distributions of the shared states of each data point, we also
improve the utility of shared data for the data collector. Furthermore, we
develop a greedy algorithm that strategically identifies the processing order
of the shared data points with the aim of maximizing the utility of the shared
data. Considering the interdependent privacy risks while sharing genomic data,
we also analyze the information gain of an attacker about genomes of a donor's
family members by observing perturbed data of the genome donor and we propose a
mechanism to select the privacy budget (i.e., $\epsilon$ parameter of LDP) of
the donor by also considering privacy preferences of her family members. Our
evaluation results on a real-life genomic dataset show the superiority of the
proposed mechanism compared to the randomized response mechanism (a widely used
technique to achieve LDP).
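A toy sketch conveys the flavor of this mechanism (ours, for illustration
only; the real mechanism's correlation model, pruning rule, and probability
adjustment are more involved):

```python
# Randomized response over SNP values {0, 1, 2}, except that states made
# statistically implausible by the previously shared neighbor are pruned
# before the perturbation probabilities are formed.
import numpy as np

rng = np.random.default_rng(1)
states = [0, 1, 2]                       # minor-allele counts at a SNP

def share(true_value, eps, plausible):
    # Randomized response restricted to the plausible states.
    k = len(plausible)
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    probs = [(p_keep if s == true_value else (1 - p_keep) / (k - 1))
             for s in plausible]
    return rng.choice(plausible, p=probs)

prev = share(1, eps=1.0, plausible=states)
# Made-up correlation rule: a neighboring SNP rarely jumps by 2, so prune
# states far from the previously shared value, then clamp the true value.
plausible_next = [s for s in states if abs(s - prev) <= 1]
true_next = min(plausible_next, key=lambda s: abs(s - 2))
print(prev, share(true_next, eps=1.0, plausible=plausible_next))
```

Pruning implausible states shrinks the randomized-response support, which
simultaneously blocks correlation-based inference and concentrates
probability mass near the true value, improving utility.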
|
In this note we study the contact geometry of symplectic divisors. We show
the contact structure induced on the boundary of a divisor neighborhood is
invariant under toric and interior blow-ups and blow-downs. We also construct
an open book decomposition on the boundary of a concave divisor neighborhood
and apply it to the study of universally tight contact structures of contact
torus bundles.
|
We study the quantum Riemannian geometry of quantum projective spaces of any
dimension. In particular we compute the Riemann and Ricci tensors, using
previously introduced quantum metrics and quantum Levi-Civita connections. We
show that the Riemann tensor is a bimodule map and derive various consequences
of this fact. We prove that the Ricci tensor is proportional to the quantum
metric, giving a quantum analogue of the Einstein condition, and compute the
corresponding scalar curvature. Along the way we also prove several results for
various objects related to those mentioned here.
|
The hole probability, i.e., the probability that a region is void of
particles, is a benchmark of correlations in many body systems. We compute
analytically this probability $P(R)$ for a spherical region of radius $R$ in
the case of $N$ noninteracting fermions in their ground state in a
$d$-dimensional trapping potential. Using a connection to the Laguerre-Wishart
ensembles of random matrices, we show that, for large $N$ and in the bulk of
the Fermi gas, $P(R)$ is described by a universal scaling function of $k_F R$,
for which we obtain an exact formula ($k_F$ being the local Fermi wave-vector).
It exhibits a super exponential tail $P(R)\propto e^{- \kappa_d (k_F R)^{d+1}}$
where $\kappa_d$ is a universal amplitude, in good agreement with existing
numerical simulations. When $R$ is of the order of the radius of the Fermi gas,
the hole probability is described by a large deviation form which is not
universal and which we compute exactly for the harmonic potential. Similar
results also hold in momentum space.
|
We consider a high-Q microresonator with $\chi^{(2)}$ nonlinearity under
conditions when the coupling rates between the sidebands around the pump and
second harmonic exceed the damping rates, implying the strong coupling regime
(SC). Using the dressed-resonator approach we demonstrate that this regime
leads to the dominance of the Hermitian part of the operator driving the
side-band dynamics over its non-Hermitian part responsible for the parametric
gain. This has allowed us to introduce and apply the cross-area concept of the
polariton quasi-particles and define their effective masses in the context of
$\chi^{(2)}$ ring-microresonators. We further use polaritons to predict the
modified spectral response of the resonator to a weak probe field, and to
reveal splitting of the bare-resonator resonances, avoided crossings, and Rabi
dynamics. The polariton basis also allows deriving a discrete sequence of the
parametric thresholds for the generation of sidebands of different orders.
|
We use a Wigner distribution-like function based on the strong field
approximation theory to obtain the time-energy distributions and the ionization
time distributions of electrons ionized by an XUV pulse alone and in the
presence of an infrared (IR) pulse. In the case of a single XUV pulse, although
the overall shape of the ionization time distribution resembles the
XUV-envelope, its detail shows dependence on the emission direction of the
electron and the carrier-envelope phase of the pulse, which mainly results from
the low-energy interference structure. It is further found that the electron
from the counter-rotating term plays an important role in the interference. In
the case of the two-color pulse, both the time-energy distributions and the
ionization time distributions change with varying IR field. Our analysis
demonstrates that the IR field not only modifies the final electron kinetic
energy but also changes the electron's emission time, which results from the
change of the electric field induced by the IR pulse. Moreover, the ionization
time distributions of the photoelectrons emitted from atoms with higher
ionization energy are also given, which show less impact of the IR field on the
electron dynamics.
|
Breathers are localized structures that undergo a periodic oscillation in
their duration and amplitude. Optical microresonators, benefiting from their
high quality factor, provide an ideal test bench for studying the breathing
phenomena. In the monochromatically pumped microresonator system, intrinsic
breathing instabilities are widely observed in the form of temporal dissipative
Kerr solitons which only exist in the effectively red detuned regime. Here, we
propose a novel bichromatic pumping scheme to create compulsive breathing
microcombs by placing two pump lasers at the effectively blue- and red-detuned
sides of a single resonance, respectively. We experimentally discover the
artificial cnoidal wave breathers and molecular crystal-like breathers in a
chip-based silicon nitride microresonator, and theoretically describe their
intriguing temporal dynamics based on the bichromatic pumping Lugiato-Lefever
equation. In particular, the corresponding breathing microcombs exhibit diverse
comb line spacing ranging from 2 to 17 times the free spectral range of the
resonator. Our discovery not only provides a simple and robust method to
produce microcombs with reconfigurable comb line spacing, but also reveals a
new type of breathing waves in driven dissipative nonlinear systems.
|
We derive general covariant coupled equations of QCD describing the
tetraquark in terms of a mix of four-quark states $2q2\bar q$, and two-quark
states $q\bar q$. The coupling of $2q2\bar q$ to $q\bar q$ states is achieved
by a simple contraction of a four-quark $q\bar q$-irreducible Green function
down to a two-quark $q\bar q$ Bethe-Salpeter kernel. The resulting tetraquark
equations are expressed in an exact field theoretic form, and are in agreement
with those obtained previously by consideration of disconnected interactions;
however, despite being more general, they have been derived here in a much
simpler and more transparent way.
|
We show how spectral submanifold theory can be used to construct
reduced-order models for harmonically excited mechanical systems with internal
resonances. Efficient calculations of periodic and quasi-periodic responses
with the reduced-order models are discussed in this paper and its companion,
Part II, respectively. The dimension of a reduced-order model is determined by
the number of modes involved in the internal resonance, independently of the
dimension of the full system. The periodic responses of the full system are
obtained as equilibria of the reduced-order model on spectral submanifolds. The
forced response curve of periodic orbits then becomes a manifold of equilibria,
which can be easily extracted using parameter continuation. To demonstrate the
effectiveness and efficiency of the reduction, we compute the forced response
curves of several high-dimensional nonlinear mechanical systems, including the
finite-element models of a von K\'arm\'an beam and a plate.
|
We report for the first time the occurrence of superconductivity in the
quaternary silicide carbide YRe2SiC with Tc = 5.9 K. The emergence of
superconductivity was confirmed by means of magnetic susceptibility, electrical
resistivity, and heat capacity measurements. The presence of a well developed
heat capacity feature at Tc confirms that superconductivity is a bulk
phenomenon, while a second feature in the heat capacity near 0.5 Tc combined
with the unusual temperature dependence of the upper critical field Hc2(T)
indicate the presence of a multiband superconducting state. Additionally, the
linear dependence of the lower critical field Hc1 on temperature resembles the
behavior found in compounds with unconventional pairing symmetry. Band
structure calculations reveal that YRe2SiC could harbor a non-trivial topological
state and that the low-energy states occupy multiple disconnected sheets at the
Fermi surface, with different degrees of hybridization, nesting, and screening
effects, therefore making unconventional multiband superconductivity plausible.
|
We consider the dynamics of local entropy and nearest neighbor mutual
information of a 1-D lattice of qubits via the repeated application of nearest
neighbor CNOT quantum gates. This is a quantum version of a cellular automaton.
We analyze the entropy dynamics for different initial product states, for both
open and periodic boundary conditions, and we also consider the
infinite-lattice thermodynamic limit. The dynamics gives rise to fractal
behavior, where we see the appearance of the Sierpinski triangle both for
states in the computational basis and for operator dynamics in the Heisenberg
picture. In the thermodynamic limit, we see equilibration with a time
dependence controlled by $\exp(-\alpha t^{h-1})$ where $h$ is the fractal
dimension of the Sierpinski triangle, and $\alpha$ depends on the details of
the initial state. We also see log-periodic reductions in the one qubit entropy
where the approach to equilibrium is only power law. For open boundary
conditions we see time periodic oscillations near the boundary, associated to
subalgebras of operators localized near the boundary that are mapped to
themselves by the dynamics.
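Because CNOT acts linearly on computational-basis states, the circuit can be
simulated with XORs, making the fractal easy to see. The sketch below fixes
one concrete gate arrangement (alternating brickwork layers of
left-controlled CNOTs), which is our choice for illustration and not
necessarily the authors' exact protocol:

```python
# Classical simulation of nearest-neighbor CNOT layers on a basis state:
# a linear cellular automaton over GF(2) whose printout traces
# Sierpinski-like nested triangles.
import numpy as np

n, steps = 63, 31
state = np.zeros(n, dtype=np.uint8)
state[n // 2] = 1                          # single excitation in the middle

rows = [state.copy()]
for t in range(steps):
    offset = t % 2                         # brickwork: even pairs, then odd
    for c in range(offset, n - 1, 2):
        state[c + 1] ^= state[c]           # CNOT(control=c, target=c+1)
    rows.append(state.copy())

for r in rows:
    print("".join(".#"[b] for b in r))     # '#' cells trace the fractal
```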
|
In an article by Garc\'ia-Pintos et al. [Phys. Rev. Lett. 125, 040601 (2020)] the
connection between the charging power of a quantum battery and the fluctuations
of a "free energy operator" whose expectation value characterizes the maximum
extractable work of the battery is studied. The result of the closed-system
analysis shows that for a general charging process the battery will have a
nonzero charging power if and only if the state of the battery is not an
eigenstate of the free energy operator. In this Comment, we point out a few
mistakes in the analysis and obtain the correct bound on the charging power.
Consequently, the result for closed-system dynamics is in general not correct.
|
In real-world multi-agent systems, agents with different capabilities may
join or leave without altering the team's overarching goals. Coordinating teams
with such dynamic composition is challenging: the optimal team strategy varies
with the composition. We propose COPA, a coach-player framework to tackle this
problem. We assume the coach has a global view of the environment and
coordinates the players, who only have partial views, by distributing
individual strategies. Specifically, we 1) adopt the attention mechanism for
both the coach and the players; 2) propose a variational objective to
regularize learning; and 3) design an adaptive communication method to let the
coach decide when to communicate with the players. We validate our methods on a
resource collection task, a rescue game, and the StarCraft micromanagement
tasks. We demonstrate zero-shot generalization to new team compositions. Our
method achieves comparable or better performance than the setting where all
players have a full view of the environment. Moreover, we see that the
performance remains high even when the coach communicates as little as 13% of
the time using the adaptive communication strategy.
|
Let $H\subset G$ be semisimple Lie groups, $\Gamma\subset G$ a lattice and
$K$ a compact subgroup of $G$. For $n \in \mathbb N$, let $\mathcal O_n$ be the
projection to $\Gamma \backslash G/K$ of a finite union of closed $H$-orbits in
$\Gamma \backslash G$. In this very general context of homogeneous dynamics, we
prove an equidistribution theorem for intersections of $\mathcal O_n$ with an
analytic subvariety $S$ of $G/K$ of complementary dimension: if $\mathcal O_n$
is equidistributed in $\Gamma \backslash G/K$, then the signed intersection
measure of $S \cap \mathcal O_n$ normalized by the volume of $\mathcal O_n$
converges to the restriction to $S$ of some $G$-invariant closed form on $G/K$.
We give general tools to determine this closed form and compute it in some
examples.
As our main application, we prove that, if $\mathbb V$ is a polarized
variation of Hodge structure of weight $2$ and Hodge numbers $(q,p,q)$ over a
base $S$ of dimension $rq$, then the (non-exceptional) locus where the Picard
rank is at least $r$ is equidistributed in $S$ with respect to the volume form
$c_q^r$, where $c_q$ is the $q^{\textrm{th}}$ Chern form of the Hodge bundle.
This generalizes a previous work of the first author which treated the case
$q=r=1$. We also prove an equidistribution theorem for certain families of CM
points in Shimura varieties, and another one for Hecke translates of a divisor
in $\mathcal A_g$.
|