Contact tracing has been extensively studied from different perspectives in
recent years. However, there is no clear indication of why this intervention
has proven effective in some epidemics (SARS) and mostly ineffective in some
others (COVID-19). Here, we perform an exhaustive evaluation of random testing
and contact tracing on novel superspreading random networks to try to identify
which epidemics are more containable with such measures. We also explore the
suitability of positive rates as a proxy for the actual infection status of
the population. Moreover, we propose novel idealized strategies to explore the
potential limits of both testing and tracing. Our study counsels caution, both
in assuming epidemic containment and in inferring the actual epidemic progress,
under current testing or tracing strategies. However, it also offers hope for
the future: novel testing strategies have the potential to be highly effective.
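To make the kind of evaluation described above concrete, the following is a
minimal, purely illustrative sketch (not the authors' model) of random testing
plus contact tracing on a heavy-tailed ("superspreading") random network; all
parameter values are assumptions chosen for readability.

    # Illustrative sketch: random testing + contact tracing on a heavy-tailed
    # contact network. Not the authors' model; parameters are placeholders.
    import random
    import networkx as nx

    def simulate(n=2000, beta=0.05, steps=100, tests_per_day=50, trace_prob=0.5):
        g = nx.barabasi_albert_graph(n, 3)       # heavy-tailed degree distribution
        state = {v: "S" for v in g}              # S=susceptible, I=infected, R=removed/isolated
        for seed in random.sample(list(g), 5):
            state[seed] = "I"
        for _ in range(steps):
            # transmission along edges
            newly = [v for u in g for v in g[u]
                     if state[u] == "I" and state[v] == "S" and random.random() < beta]
            for v in newly:
                state[v] = "I"
            # random testing: positives are isolated, their contacts traced
            for v in random.sample(list(g), tests_per_day):
                if state[v] == "I":
                    state[v] = "R"
                    for w in g[v]:
                        if state[w] == "I" and random.random() < trace_prob:
                            state[w] = "R"
        return sum(1 for v in g if state[v] != "S")   # final epidemic size

    print(simulate())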
|
A graph $G$ has the Perfect-Matching-Hamiltonian property (PMH-property) if
for each one of its perfect matchings, there is another perfect matching of $G$
such that the union of the two perfect matchings yields a Hamiltonian cycle of
$G$. The study of graphs that have the PMH-property, initiated in the 1970s by
Las Vergnas and H\"{a}ggkvist, combines three well-studied properties of
graphs, namely matchings, Hamiltonicity and edge-colourings. In this work, we
study these concepts for cubic graphs in an attempt to characterise those cubic
graphs for which every perfect matching corresponds to one of the colours of a
proper 3-edge-colouring of the graph. We discuss that this is equivalent to
saying that such graphs are even-2-factorable (E2F), that is, all 2-factors of
the graph contain only even cycles. The case for bipartite cubic graphs is
trivial, since if $G$ is bipartite then it is E2F. Thus, we restrict our
attention to non-bipartite cubic graphs. A sufficient, but not necessary,
condition for a cubic graph to be E2F is that it has the PMH-property. The aim
of this work is to introduce an infinite family of E2F non-bipartite cubic
graphs on two parameters, which we coin papillon graphs, and determine the
values of the respective parameters for which these graphs have the
PMH-property or are just E2F. We also show that no two papillon graphs with
different parameters are isomorphic.
|
To study the evolution of binary star clusters we have imaged 7 systems in
the Small Magellanic Cloud with the SOAR 4-m telescope using B and V filters.
The sample contains pairs with well-separated components (d < 30 pc) as well as
systems that have apparently merged, as evidenced by their unusual structures.
By fitting isochrones to their CMDs we have determined reddening, age and
metallicity, and by fitting King models to their radial stellar density
profiles we have estimated core radii. Disturbances of the density profiles are
interpreted as evidence of interaction. Circumstances such as the distances
between components and their age differences are discussed in terms of the
timescales involved, in order to assess the physical connection of each system.
In two cases the age difference is above 50 Myr, which suggests chance
alignment, capture or sequential star formation.
|
We establish a Morita theorem to construct triangle equivalences between the
singularity categories of (commutative and non-commutative) Gorenstein rings
and the cluster categories of finite dimensional algebras over fields, and more
strongly, quasi-equivalences between their canonical dg enhancements. More
precisely, we prove that such an equivalence exists as soon as we find a
quasi-equivalence between the graded dg singularity category of a Gorenstein
ring and the derived category of a finite dimensional algebra which can be done
by finding a single tilting object.
Our result is based on two key theorems on dg enhancements of cluster
categories and of singularity categories, which are of independent interest.
First we give a Morita-type theorem which realizes certain $\mathbb{Z}$-graded
dg categories as dg orbit categories. Secondly, we show that the canonical dg
enhancements of the singularity categories of symmetric orders have the
bimodule Calabi-Yau property, which lifts the classical Auslander-Reiten
duality on singularity categories.
We apply our results to such classes of rings as Gorenstein rings of
dimension at most $1$, quotient singularities, and Geigle-Lenzing complete
intersections, including finite or infinite Grassmannian cluster categories, to
realize their singularity categories as cluster categories of finite
dimensional algebras.
|
We present a new "grey-box" approach to anomaly detection in smart
manufacturing. The approach is designed for tools run by control systems which
execute recipe steps to produce semiconductor wafers. Multiple streaming
sensors capture trace data to guide the control systems and for quality
control. These control systems are typically PI controllers which can be
modeled as an ordinary differential equation (ODE) coupled with a control
equation, capturing the physics of the process. The ODE "white-box" models
capture physical causal relationships that can be used in simulations to
determine how the process will react to changes in control parameters, but they
have limited utility for anomaly detection. Many "black-box" approaches exist
for anomaly detection in manufacturing, but they typically do not exploit the
underlying process control. The proposed "grey-box" approach uses the
process-control ODE model to derive a parametric function of sensor data.
Bayesian regression is used to fit the parameters of these functions to form
characteristic "shape signatures". The probabilistic model provides a natural
anomaly score for each wafer, which captures poor control and strange shape
signatures. The anomaly score can be deconstructed into its constituent parts
in order to identify which parameters are contributing to anomalies. We
demonstrate how the anomaly score can be used to monitor complex multi-step
manufacturing processes to detect anomalies and changes and show how the shape
signatures can provide insight into the underlying sources of process variation
that are not readily apparent in the sensor data.
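As a toy illustration of the grey-box idea, the sketch below fits an
ODE-inspired step-response shape to each wafer's trace and scores wafers by how
unusual their fitted parameters are relative to the fleet. Ordinary
least-squares fitting stands in here for the paper's Bayesian regression, and
the first-order shape and Mahalanobis score are assumptions for illustration.

    # Illustrative sketch, not the paper's exact model.
    import numpy as np
    from scipy.optimize import curve_fit

    def shape(t, gain, tau):                 # ODE-inspired parametric trace shape
        return gain * (1.0 - np.exp(-t / tau))

    def fit_signature(t, y):
        (gain, tau), _ = curve_fit(shape, t, y, p0=[y[-1], 1.0])
        return np.array([gain, tau])          # "shape signature" of one wafer

    def anomaly_scores(signatures):
        mu = signatures.mean(axis=0)
        cov = np.cov(signatures.T) + 1e-6 * np.eye(signatures.shape[1])
        inv = np.linalg.inv(cov)
        diffs = signatures - mu
        # Mahalanobis-type score; per-parameter terms point to the culprit
        return np.einsum("ij,jk,ik->i", diffs, inv, diffs)

    t = np.linspace(0, 10, 200)
    sigs = np.vstack([fit_signature(t, shape(t, 5 + 0.1*np.random.randn(),
                                              2 + 0.05*np.random.randn())
                                    + 0.05*np.random.randn(t.size))
                      for _ in range(50)])
    print(anomaly_scores(sigs)[:5])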
|
We study a class of Schr\"odinger operators on $\mathbb{Z}^2$ with a random
potential decaying as $|x|^{-\alpha}$, $0<\alpha\leq\frac12$, in the limit of
small disorder strength $\lambda$. For the critical exponent $\alpha=\frac12$,
we prove that the localization length of eigenfunctions is bounded below by
$2^{\lambda^{-\frac14+\eta}}$, while for $0<\alpha<\frac12$, the lower bound is
$\lambda^{-\frac{2-\eta}{1-2\alpha}}$, for any $\eta>0$. These estimates
"interpolate" between the lower bound $\lambda^{-2+\eta}$ due to recent work of
Schlag-Shubin-Wolff for $\alpha=0$, and pure a.c. spectrum for $\alpha>\frac12$
demonstrated in recent work of Bourgain.
|
We consider a model of a two-dimensional interface of the SOS type, with
finite-range, even, strictly convex, twice continuously differentiable
interactions. We prove that, under an arbitrarily weak potential favouring
zero height, the surface has finite mean square heights. We consider the cases
of both square well and $\delta$ potentials. These results extend previous
results for the case of nearest-neighbour Gaussian interactions in \cite{DMRR}
and \cite{BB}. We also obtain estimates on the tail of the height distribution
implying, for example, existence of exponential moments. In the case of the
$\delta$ potential, we prove a spectral gap estimate for linear functionals. We
finally prove exponential decay of the two-point function (1) for strong
$\delta$-pinning and the above interactions, and (2) for arbitrarily weak
$\delta$-pinning, but with finite-range Gaussian interactions.
|
The integration of Large Language Models (LLMs) in information retrieval has
prompted a critical reevaluation of fairness in text-ranking models. LLMs,
such as GPT models and Llama2, have shown effectiveness in natural language
understanding tasks, and prior work (e.g., RankGPT) has demonstrated that LLMs
outperform traditional models in ranking tasks. However, their fairness remains
largely unexplored. This paper presents an empirical study evaluating these
LLMs using the TREC Fair Ranking dataset, focusing on the representation of
binary protected attributes such as gender and geographic location, which are
historically underrepresented in search outcomes. Our analysis delves into how
these LLMs handle queries and documents related to these attributes, aiming to
uncover biases in their ranking algorithms. We assess fairness from both user
and content perspectives, contributing an empirical benchmark for evaluating
LLMs as fair rankers.
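A minimal sketch of the kind of content-side fairness measurement such a
benchmark involves; the exposure-share metric and logarithmic position discount
below are assumptions for illustration, not necessarily the paper's metric.

    # Exposure share of a protected group in one ranked list (assumed metric).
    import math

    def exposure_share(ranking, protected):
        """ranking: doc ids in rank order; protected: set of protected doc ids."""
        weights = [1.0 / math.log2(rank + 2) for rank in range(len(ranking))]
        prot = sum(w for w, d in zip(weights, ranking) if d in protected)
        return prot / sum(weights)

    print(exposure_share(["d3", "d1", "d7", "d2"], protected={"d1", "d2"}))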
|
Objective: To evaluate differences in major outcomes between Bundled Payments
for Care Improvement (BPCI) participating providers and non-participating
providers for both Major Joint Replacement of the Lower Extremity (MJRLE) and
Acute Myocardial Infarction (AMI) episodes. Methods: A
difference-in-differences approach estimated the differential change in
outcomes for Medicare beneficiaries who had an MJRLE or AMI at a BPCI
participating hospital between the baseline (January 2011 through September
2013) and intervention (October 2013 through December 2016) periods and
beneficiaries with the same episode (MJRLE or AMI) at a matched comparison
hospital. Main Outcomes and Measures: Medicare payments, length of stay (LOS),
and readmissions during the episode, which includes the anchor hospitalization
and the 90-day post-discharge period. Results: Mean total Medicare payments for
an MJRLE episode and the 90-day post-discharge period declined $444 more (p < 0.0001)
for Medicare beneficiaries with episodes initiated in a BPCI-participating
provider than for the beneficiaries in a comparison provider. This reduction
was mainly due to reduced institutional post-acute care (PAC) payments. Slight
reductions in carrier payments and LOS were estimated. Readmission rates were
not statistically different between the BPCI and the comparison populations.
These findings suggest that PAC use can be reduced without adverse effects on
recovery from MJRLE. The lack of statistically significant differences in
effects for AMI could be explained by a smaller sample size or more
heterogeneous recovery paths in AMI. Conclusions: Our findings suggest that, as
currently designed, bundled payments can be effective in reducing payments for
MJRLE episodes of care, but not necessarily for AMI. Most savings came from the
declines in PAC. These findings are consistent with the results reported in the
BPCI model evaluation for CMS.
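For readers unfamiliar with the estimator, the following is a hedged sketch of
a difference-in-differences regression on synthetic episode payments (not the
CMS data); the $444 effect size is planted only to mirror the headline result,
and the interaction coefficient is the difference-in-differences estimate.

    # Difference-in-differences on synthetic data (illustrative only).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 4000
    df = pd.DataFrame({
        "bpci": rng.integers(0, 2, n),        # BPCI vs comparison hospital
        "post": rng.integers(0, 2, n),        # baseline vs intervention period
    })
    df["payment"] = (25000 - 444 * df["bpci"] * df["post"]
                     + rng.normal(0, 3000, n))   # assumed effect for illustration
    model = smf.ols("payment ~ bpci + post + bpci:post", data=df).fit()
    print(model.params["bpci:post"])             # difference-in-differences estimate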
|
Classical hypergeometric functions are well-known to play an important role
in arithmetic algebraic geometry. These functions arise as solutions to
ordinary differential equations of Picard-Fuchs type, and special cases of such
solutions are periods of varieties of Calabi-Yau type. Gauss' $_2F_1$ includes
the celebrated case of elliptic curves through the theory of elliptic
functions. In the 1980s, Greene defined finite field hypergeometric functions that can be used
to enumerate the number of finite field points on such varieties. We extend
some of these results to count finite field ``matrix points." For example, for
every $n\geq 1,$ we consider the matrix elliptic curves $$ B^2 = A(A-I_n)(A-a
I_n), $$ where $(A,B)$ are commuting $n\times n$ matrices over a finite field
$\mathbb{F}_q$ and $a\neq 0,1$ is fixed. Our formulas are assembled from
Greene's hypergeometric functions and $q$-multinomial coefficients. We use
these formulas to prove Sato-Tate distributions for the error terms for matrix
point counts for these curves and some families of $K3$ surfaces.
|
How can one discriminate different inequivalent classes of multiparticle
entanglement experimentally? We present an approach for the discrimination of
an experimentally prepared state from the equivalence class of another state.
We consider two possible measures for the discrimination strength of an
observable. The first measure is based on the difference of expectation values,
the second on the relative entropy of the probability distributions of the
measurement outcomes. The interpretation of these measures and their usefulness
for experiments with limited resources are discussed. In the case of graph
states, the stabilizer formalism is employed to compute these quantities and to
find sets of observables that result in the most decisive discrimination.
|
This paper presents novel controllers that yield finite-time stability for
linear systems. We first present a sufficient condition for the origin of a
scalar system to be finite-time stable. Then we present novel finite-time
controllers based on vector fields and barrier functions to demonstrate the
utility of this geometric condition. We also consider the general class of
linear controllable systems, and present a continuous feedback control law to
stabilize the system in finite time. Finally, we present simulation results for
each of these cases, showing the efficacy of the designed control laws.
|
Deep Reinforcement Learning combined with Fictitious Play shows impressive
results on many benchmark games, most of which are, however, single-stage. In
contrast, real-world decision-making problems may consist of multiple stages,
where the observation spaces and the action spaces can be completely different
across stages. We study the two-stage strategy card game Legends of Code and
Magic and propose an end-to-end policy to address the difficulties that arise
in multi-stage games. We also propose an optimistic smooth fictitious play
algorithm to find the Nash equilibrium of the two-player game. Our approach won
both championships of the COG 2022 competition. Extensive studies verify the
effectiveness and advancement of our approach.
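To illustrate the equilibrium-finding loop referred to above, here is a toy
sketch of smooth fictitious play (without the paper's optimistic modification)
on rock-paper-scissors; the real method operates on learned card-game policies
rather than a payoff matrix, so everything below is an assumption for exposition.

    # Smooth fictitious play on a zero-sum matrix game (toy example).
    import numpy as np

    def softmax(x, temp=0.1):
        z = np.exp((x - x.max()) / temp)
        return z / z.sum()

    payoff = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # rock-paper-scissors
    avg_p, avg_q = np.ones(3) / 3, np.ones(3) / 3
    for t in range(1, 5001):
        p = softmax(payoff @ avg_q)        # smooth best response of the row player
        q = softmax(-payoff.T @ avg_p)     # smooth best response of the column player
        avg_p += (p - avg_p) / t           # empirical average strategies
        avg_q += (q - avg_q) / t
    print(avg_p, avg_q)                    # both approach the uniform Nash equilibrium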
|
This work presents a Model Predictive Control (MPC) for the artificial
pancreas, which is able to autonomously manage basal insulin injections in type
1 diabetic patients. Specifically, the goal of the MPC is to maintain the
patients' blood glucose level inside the safe range of 70-180 mg/dL by acting
on the insulin amount while respecting all the imposed constraints and taking
into account the Insulin On Board (IOB), to avoid excessive insulin infusion.
MPC uses a model to predict the system behaviour. In this work, since the
complexity of diabetes makes the identification of a general physiological
model difficult, a data-driven learning method is employed instead. The
Componentwise H\"{o}lder Kinky Inference (CHoKI) method is adopted to obtain a
customized controller for each patient. For the data collection phase, and
also to test the proposed controller, the virtual patients of the FDA-accepted
UVA/Padova simulator are exploited. The proposed MPC is also tested on a
modified version of the simulator that also accounts for the variability of
insulin sensitivity. The final results are satisfactory: the proposed
controller reduces the time in hypoglycemia (which is more dangerous) compared
with the outcome obtained with the standard constant basal insulin therapy
provided by the simulator, while also satisfying the time-in-range requirements
and avoiding long-term hyperglycemia events.
|
Character polynomials are used to study the restriction of a polynomial
representation of a general linear group to its subgroup of permutation
matrices. A simple formula is obtained for computing inner products of class
functions given by character polynomials. Character polynomials for symmetric
and alternating tensors are computed using generating functions with Eulerian
factorizations. These are used to compute character polynomials for Weyl
modules, which exhibit a duality. By taking inner products of character
polynomials for Weyl modules and character polynomials for Specht modules,
stable restriction coefficients are easily computed. Generating functions of
dimensions of symmetric group invariants in Weyl modules are obtained.
Partitions with two rows, two columns, and hook partitions whose Weyl modules
have non-zero vectors invariant under the symmetric group are characterized. A
reformulation of the restriction problem in terms of a restriction functor from
the category of strict polynomial functors to the category of finitely
generated FI-modules is obtained.
|
In the Coulomb blockade regime of a ballistic quantum dot, the distribution
of conductance peak spacings is well known to be incorrectly predicted by a
single-particle picture; instead, matrix element fluctuations of the residual
electronic interaction need to be taken into account. In the normalized
random-wave model, valid in the semiclassical limit where the number of
electrons in the dot becomes large, we obtain analytic expressions for the
fluctuations of two-body and one-body matrix elements. However, these
fluctuations may be too small to explain low-temperature experimental data. We
have examined matrix element fluctuations in realistic chaotic geometries, and
shown that at energies of experimental interest these fluctuations generically
exceed by a factor of about 3-4 the predictions of the random wave model. Even
larger fluctuations occur in geometries with a mixed chaotic-regular phase
space. These results may allow for much better agreement between the
Hartree-Fock picture and experiment. Among other findings, we show that the
distribution of interaction matrix elements is strongly non-Gaussian in the
parameter range of experimental interest, even in the random wave model. We
also find that the enhanced fluctuations in realistic geometries cannot be
computed using a leading-order semiclassical approach, but may be understood in
terms of short-time dynamics.
|
A $\chi^2$ analysis of several SUSY GUTs recently discussed in the literature
is presented. We obtain global fits to electroweak data, which include gauge
couplings, gauge boson masses, BR($b\to s\gamma$) and masses of fermions of all
three generations and their mixing angles. Thus we are able to test gauge
unification, radiative electroweak symmetry breaking, the SUSY sector (in the
context of supergravity-induced SUSY breaking) and the Yukawa sector in each
particular model self-consistently. One of the models studied provides a very
good fit with $\chi^2 \sim 1$ for 3 degrees of freedom, in a large region of
the allowed SUSY parameter space. The Yukawa sector works so well in this case
that the analysis ends up testing the MSSM constrained by unification. Adopting
this point of view, in the second part of this talk we focus on the details of
the fit for BR($b\to s\gamma$) and discuss the correlations among $\delta
m_b^{SUSY}$, $\alpha_s(M_Z)$ and a GUT threshold to $\alpha_s(M_G)$. We
conclude that an attractive SO(10)-derived regime of the MSSM remains a viable
option.
|
Object grasping is critical for many applications and is also a challenging
computer vision problem. However, for cluttered scenes, current research
suffers from insufficient training data and a lack of evaluation benchmarks.
In this work, we contribute a large-scale grasp pose detection dataset with a
unified evaluation system. Our dataset contains 87,040 RGBD images with over
370 million grasp poses. Meanwhile, our evaluation system directly reports
whether a grasp is successful or not by analytic computation, and is able to
evaluate any kind of grasp pose without exhaustive labeling of ground-truth
poses. We conduct extensive experiments to show that our dataset and
evaluation system align well with real-world experiments. Our dataset, source
code and models will be made publicly available.
|
The escalating battles between attackers and defenders in cybersecurity make
it imperative to test and evaluate defense capabilities from the attackers'
perspective. However, constructing full-life-cycle cyberattacks and performing
red team emulations requires significant time and domain knowledge from
security experts. Existing cyberattack simulation frameworks face challenges
such as limited technical coverage, inability to conduct full-life-cycle
attacks, and the need for manual infrastructure building. These limitations
hinder the quality and diversity of the constructed attacks. In this paper, we
leverage the capabilities of Large Language Models (LLMs) in summarizing
knowledge from existing attack intelligence and generating executable machine
code based on human knowledge, and propose AURORA, an automatic end-to-end
cyberattack construction and emulation framework. AURORA can autonomously build
multi-stage cyberattack plans based on Cyber Threat Intelligence (CTI) reports,
construct the emulation infrastructures, and execute the attack procedures. We
also developed an attack procedure knowledge graph to integrate knowledge about
attack techniques throughout the full life cycle of advanced cyberattacks from
various sources. We constructed and evaluated more than 20 full-life-cycle
cyberattacks based on existing CTI reports. Compared to previous attack
simulation frameworks, AURORA can construct multi-step attacks and the
infrastructures in several minutes without human intervention. Furthermore,
AURORA incorporates a wider range (40% more) of attack techniques into the
constructed attacks in a more efficient way than the professional red teams. To
benefit further research, we open-sourced the dataset containing the execution
files and infrastructures of 20 emulated cyberattacks.
|
We present a new approach to response around arbitrary out-of-equilibrium
states in the form of a fluctuation-response inequality (FRI). We study the
response of an observable to a perturbation of the underlying stochastic
dynamics. We find that the magnitude of the response is bounded from above by the
fluctuations of the observable in the unperturbed system and the
Kullback-Leibler divergence between the probability densities describing the
perturbed and unperturbed system. This establishes a connection between linear
response and concepts of information theory. We show that in many physical
situations, the relative entropy may be expressed in terms of physical
observables. As a direct consequence of this FRI, we show that for steady state
particle transport, the differential mobility is bounded by the diffusivity.
For a virtual perturbation proportional to the local mean velocity, we recover
the thermodynamic uncertainty relation (TUR) for steady state transport
processes. Finally, we use the FRI to derive a generalization of the
uncertainty relation to arbitrary dynamics, which involves higher-order
cumulants of the observable. We provide an explicit example, in which the TUR
is violated but its generalization is satisfied with equality.
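A hedged sketch of the general shape of such a bound, written out only to make
the statement above concrete; the precise constants, conditions, and the
higher-cumulant generalization are those of the paper and are not reproduced
here:
$$ \bigl|\langle O\rangle_{\tilde P}-\langle O\rangle_{P}\bigr| \;\lesssim\; \sigma_{O}\,\sqrt{2\,D_{\mathrm{KL}}\bigl(\tilde P\,\Vert\,P\bigr)}, $$
where $\langle O\rangle_{P}$ and $\sigma_{O}$ are the mean and standard
deviation of the observable in the unperturbed system, $\langle O\rangle_{\tilde P}$
its mean in the perturbed system, and $D_{\mathrm{KL}}$ the Kullback-Leibler
divergence between the perturbed and unperturbed path densities.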
|
Recent research has shown that each apnea episode results in a significant
rise in beat-to-beat blood pressure, followed by a drop to pre-episode levels
when the patient resumes normal breathing. While the physiological implications
of these repetitive and significant oscillations are still unknown, it is of
interest to quantify them. Since the current array of instruments deployed for
polysomnography studies does not include beat-to-beat measurement of blood
pressure, but does include oximetry, it is of clinical interest to estimate
the magnitude of BP oscillations from the photoplethysmography (PPG) signal
that is readily available from sleep lab oximeters. We have investigated a new
method for continuous estimation of systolic (SBP), diastolic (DBP), and mean
(MBP) blood pressure waveforms from PPG. Peaks and troughs of PPG waveform are
used as input to a 5th order autoregressive moving average model to construct
estimates of SBP, DBP, and MBP waveforms. Since breath hold maneuvers are shown
to simulate apnea episodes faithfully, we evaluated the performance of the
proposed method in 7 subjects (4 F; 32+-4 yrs., BMI 24.57+-3.87 kg/m2) in the
supine position performing 5 breath-hold maneuvers with 90 s of normal breathing between
them. The modeling error ranges were (all units are in mmHg) -0.88+-4.87 to
-2.19+-5.73 (SBP); 0.29+-2.39 to -0.97+-3.83 (DBP); and -0.42+-2.64 to
-1.17+-3.82 (MBP). The cross validation error ranges were 0.28+-6.45 to
-1.74+-6.55 (SBP); 0.09+-3.37 to -0.97+-3.67 (DBP); and 0.33+-4.34 to
-0.87+-4.42 (MBP). The level of estimation error, as measured by the root
mean square of the model residuals, was less than 7 mmHg.
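The sketch below illustrates the modeling idea on synthetic data: regress a
blood-pressure waveform on beat-by-beat PPG peak/trough amplitudes with a
5th-order ARMA error model. The feature construction and model configuration
here are assumptions, not the authors' exact identification procedure.

    # ARMA(5,5) with exogenous PPG features on synthetic data (illustrative).
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    n = 600
    ppg_peaks = rng.normal(1.0, 0.1, n)        # beat-by-beat PPG peak amplitudes
    ppg_troughs = rng.normal(0.2, 0.05, n)     # beat-by-beat PPG trough amplitudes
    exog = np.column_stack([ppg_peaks, ppg_troughs])
    sbp = 120 + 20 * (ppg_peaks - 1.0) - 10 * (ppg_troughs - 0.2) + rng.normal(0, 2, n)

    result = ARIMA(sbp, exog=exog, order=(5, 0, 5)).fit()
    sbp_hat = result.fittedvalues
    print(np.sqrt(np.mean((sbp - sbp_hat) ** 2)))   # RMS estimation error (mmHg)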
|
Pure and homogeneous biological macromolecules (i.e. proteins, nucleic acids,
protein-protein or protein-nucleic acid complexes, and functional assemblies
such as ribosomes and viruses) are the key for consistent and reliable
biochemical and biophysical measurements, as well as for reproducible
crystallizations, best crystal diffraction properties, and exploitable electron
microscopy images. Highlights: Pure and homogeneous macromolecules are the key
to the best experimental results; they ensure the consistency and the
reliability of biochemical and biophysical data; and they yield more
reproducible crystallography and electron microscopy results as well.
|
Continuous O$(d,d)$ global symmetries emerge in Kaluza-Klein reductions of
$D$-dimensional string supergravities to $D-d$ dimensions. We show that the
non-geometric elements of this group effectively act in the $D$-dimensional
parent theory as a hidden bosonic symmetry that fixes its couplings: the
$\beta$-symmetry. We give the explicit $\beta$-transformations to first order
in $\alpha'$ and verify the invariance of the action as well as the closure of
the transformation rules.
|
We study the anomalous Hall effect (AHE) in tilted Weyl metals with Gaussian
disorder due to the crossed $X$ and $\Psi$ diagrams in this work. The
importance of such diagrams for the AHE has been demonstrated recently in the
two-dimensional (2D) massive Dirac model and in Rashba ferromagnets, where
their inclusion dramatically changes the total AHE. In this work, we show that
the contributions from the $X$ and $\Psi$ diagrams to the AHE in tilted Weyl
metals are of the same order as that of the non-crossing diagram we studied in
a previous work, but with the opposite sign. The total contribution of the $X$
and $\Psi$ diagrams cancels the majority of the contribution from the
non-crossing diagram in tilted Weyl metals, similar to the 2D massive Dirac
model. We also discuss the difference between the contributions from the
crossed diagrams in the 2D massive Dirac model and in the tilted Weyl metals.
Finally, we discuss the experimental relevance of observing the AHE due to the
$X$ and $\Psi$ diagrams in type-I Weyl metals such as Co3Sn2S2.
|
A recent letter by Cai et al. [2107.14548], within a phenomenological dark
matter framework with a massive graviton in the external state, indicated a
divergence with increasing centre-of-momentum energy arising from the
longitudinal polarizations of the graviton. In this letter we point out that in
processes such as graviton-photon production from matter annihilation,
$f\bar{f} \to G\gamma$, no such anomalous divergences occur at tree level. This
then applies to other tree-level amplitudes related by crossing symmetry, such
as $\gamma f \to Gf$, $Gf \to {\gamma}f$, $f \to fG{\gamma}$ and so on.
We show this by explicitly computing the relevant tree-level diagrams, where we
find that delicate cancellations ensure that all anomalously growing terms are
well-regulated. Effectively at tree-level this is consistent with the operation
of a Ward identity associated with the external photon for such amplitudes. The
same tree-level results apply if the photon is replaced by a gluon. These
results are important for cosmological models of dark matter within the
framework of extra dimensions.
|
Quantum theory has the property of "local tomography": the state of any
composite system can be reconstructed from the statistics of measurements on
the individual components. In this respect the holism of quantum theory is
limited. We consider in this paper a class of theories more holistic than
quantum theory in that they are constrained only by "bilocal tomography": the
state of any composite system is determined by the statistics of measurements
on pairs of components. Under a few auxiliary assumptions, we derive certain
general features of such theories. In particular, we show how the number of
state parameters can depend on the number of perfectly distinguishable states.
We also show that real-vector-space quantum theory, while not locally
tomographic, is bilocally tomographic.
|
With the discovery of a particle that seems rather consistent with the
minimal Standard Model Higgs boson, attention turns to questions of
naturalness, fine-tuning, and what they imply for physics beyond the Standard
Model and its discovery prospects at run II of the LHC. In this article we
revisit the issue of naturalness, discussing some implicit assumptions that
underlie some of the most common statements, which tend to assign physical
significance to certain regularization procedures. Vague arguments concerning
fine-tuning can lead to conclusions that are too strong and perhaps not as
generic as one would hope. Instead, we explore a more pragmatic definition of
the hierarchy problem that does not rely on peeking beyond the murky boundaries
of quantum field theory: we investigate the fine-tuning of the electroweak
scale associated with thresholds from heavy particles, which is both calculable
and dependent on the nature of the would-be ultraviolet completion of the
Standard Model. We discuss different manifestations of new high-energy scales
that are favored by experimental hints for new physics with an eye toward
making use of fine-tuning in order to determine natural regions of the new
physics parameter spaces.
|
We have no certain knowledge of the early history of dark matter (DM). In
this paper we propose a scenario where DM is produced post-recombination but
prior to the cosmic dawn. It helps to relax the bounds on DM interactions, in
particular with baryons, from the CMB. It may be of interest in some
circumstances, for example, to understand the recent cosmic dawn 21-cm signal
anomaly. We argue that the cosmic gas cooling mechanism via minicharged
DM-baryon scattering may be viable even if the minicharged component takes up
the total DM budget. We also investigate the possibility of a gluon-philic
mediator with a mass of a few tens of keV, finding that the most reliable
exclusion comes from neutron scattering.
|
Cryptography plays a pivotal role in safeguarding sensitive information and
facilitating secure communication. Classical cryptography relies on
mathematical computations, whereas quantum cryptography operates on the
principles of quantum mechanics, offering a new frontier in secure
communication. Quantum cryptographic systems introduce novel dimensions to
security, capable of detecting and thwarting eavesdropping attempts. By
contrasting quantum cryptography with its classical counterpart, it becomes
evident how quantum mechanics revolutionizes the landscape of secure
communication.
|
The three-flavor chiral expansion for octet baryons has well-known problems
with convergence. We show that this three-flavor chiral expansion can be
reorganized into a two-flavor expansion thereby eliminating large kaon and eta
loop contributions. Issues of the underlying formulation are addressed by
considering the effect of strangeness changing thresholds on hyperon masses.
While the spin-3/2 hyperon resonances are considerably more sensitive to these
thresholds compared to the spin-1/2 hyperons, we demonstrate that in both cases
the essential physics can be captured in the two-flavor effective theory by
terms that are analytic in the pion mass squared, but non-analytic in the
strange quark mass. Using the two-flavor theory of hyperons, baryon masses and
axial charges are investigated. Loop contributions in the two-flavor theory
appear to be perturbatively under control. A natural application for our
development is to study the pion mass dependence of lattice QCD data on hyperon
properties.
|
The task of unsupervised semantic segmentation aims to cluster pixels into
semantically meaningful groups. Specifically, pixels assigned to the same
cluster should share high-level semantic properties like their object or part
category. This paper presents MaskDistill: a novel framework for unsupervised
semantic segmentation based on three key ideas. First, we advocate a
data-driven strategy to generate object masks that serve as a pixel grouping
prior for semantic segmentation. This approach omits handcrafted priors, which
are often designed for specific scene compositions and limit the applicability
of competing frameworks. Second, MaskDistill clusters the object masks to
obtain pseudo-ground-truth for training an initial object segmentation model.
Third, we leverage this model to filter out low-quality object masks. This
strategy mitigates the noise in our pixel grouping prior and results in a clean
collection of masks which we use to train a final segmentation model. By
combining these components, we can considerably outperform previous works for
unsupervised semantic segmentation on PASCAL (+11% mIoU) and COCO (+4% mask
AP50). Interestingly, as opposed to existing approaches, our framework does not
latch onto low-level image cues and is not limited to object-centric datasets.
The code and models will be made available.
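A toy sketch of the clustering-and-filtering idea described above (this is not
the released MaskDistill code): mask embeddings are clustered into
pseudo-classes and masks far from their cluster centre are dropped before
training the final model; the distance-based quality proxy is an assumption.

    # Cluster candidate object masks into pseudo-classes, then filter them.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    mask_embeddings = rng.normal(size=(500, 64))   # one feature vector per candidate mask

    kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(mask_embeddings)
    pseudo_labels = kmeans.labels_                 # pseudo-ground-truth class per mask

    # keep the masks closest to their centroid (assumed proxy for mask quality)
    dists = np.linalg.norm(mask_embeddings - kmeans.cluster_centers_[pseudo_labels], axis=1)
    keep = dists < np.percentile(dists, 70)
    print(keep.sum(), "masks kept for training the final segmentation model")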
|
Learning from Demonstration (LfD) constitutes one of the most robust
methodologies for constructing efficient cognitive robotic systems. Despite the
large body of research works already reported, current key technological
challenges include those of multi-agent learning and long-term autonomy.
In this direction, a novel cognitive architecture for multi-agent LfD
robotic learning is introduced, aiming to enable the reliable deployment of
open, scalable and expandable robotic systems in large-scale and complex
environments. In particular, the designed architecture capitalizes on the
recent advances in the Artificial Intelligence (AI) field, by establishing a
Federated Learning (FL)-based framework for incarnating a multi-human
multi-robot collaborative learning environment. The fundamental
conceptualization relies on employing multiple AI-empowered cognitive processes
(implementing various robotic tasks) that operate at the edge nodes of a
network of robotic platforms, while global AI models (underpinning the
aforementioned robotic tasks) are collectively created and shared among the
network, by elegantly combining information from a large number of human-robot
interaction instances. Regarding pivotal novelties, the designed cognitive
architecture a) introduces a new FL-based formalism that extends the
conventional LfD learning paradigm to support large-scale multi-agent
operational settings, b) elaborates previous FL-based self-learning robotic
schemes so as to incorporate the human in the learning loop and c) consolidates
the fundamental principles of FL with additional sophisticated AI-enabled
learning methodologies for modelling the multi-level inter-dependencies among
the robotic tasks. The applicability of the proposed framework is explained
using an example of a real-world industrial case study for agile
production-based Critical Raw Materials (CRM) recovery.
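As a minimal sketch of the federated-learning backbone such an architecture
builds on, the snippet below shows FedAvg-style aggregation of per-robot model
parameters; the actual LfD models, tasks and communication scheme are far
richer, and the weighting by demonstration count is an assumption.

    # FedAvg-style aggregation across edge robots (illustrative only).
    import numpy as np

    def federated_average(client_weights, client_sizes):
        """Weighted average of per-robot model parameters (one array per client)."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # three edge robots, each with locally trained parameters from its own demonstrations
    clients = [np.random.randn(10) for _ in range(3)]
    sizes = [120, 80, 200]                      # number of human demonstrations per robot
    global_model = federated_average(clients, sizes)
    print(global_model.shape)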
|
In the last few years evidence has been accumulating that there are a
multiplicity of energy scales which characterize superconductivity in the
underdoped cuprates. In contrast to the situation in BCS superconductors, the
phase coherence temperature Tc is different from the energy gap onset
temperature T*. In addition, thermodynamic and tunneling spectroscopies have
led to the inference that the order parameter $\Delta_{sc}$ is to be
distinguished from the excitation gap $\Delta$; in this way, pseudogap effects
persist below Tc. It has been argued by many in the community that the presence
of these distinct energy scales demonstrates that the pseudogap is unrelated to
superconductivity. In this paper we show that this inference is incorrect. We
demonstrate that the difference between the order parameter and excitation gap
and the contrasting dependences of T* and Tc on hole concentration $x$ and
magnetic field $H$ follow from a natural generalization of BCS theory. This
simple generalized form is based on a BCS-like ground state, but with a
self-consistently determined chemical potential in the presence of arbitrary
attractive coupling $g$. We have applied this mean field theory with some
success to tunneling, transport, thermodynamics and magnetic field effects. We
contrast the present approach with the phase fluctuation scenario and discuss
key features which might distinguish our precursor superconductivity picture
from that involving a competing order parameter.
|
In this article, goal-oriented a posteriori error estimation for the
biharmonic plate bending problem is considered. The error for approximation of
goal functional is represented by an estimator which combines dual-weighted
residual method and equilibrated moment tensor. An abstract unified framework
for the goal-oriented a posteriori error estimation is derived. In particular,
$C^0$ interior penalty and discontinuous Galerkin finite element methods are
employed for practical realization. The abstract estimation is based on
equilibrated moment tensor and potential reconstruction that provides a
guaranteed upper bound for the goal error. Numerical experiments are performed
to illustrate the effectivity of the estimators.
|
Supermassive black holes with masses of millions to billions of solar masses
are commonly found in the centers of galaxies. Astronomers seek to image jet
formation using radio interferometry, but still suffer from insufficient
angular resolution. An alternative method to resolve small structures is to
measure the time variability of their emission. Here, we report on gamma-ray
observations of the radio galaxy IC 310 obtained with the MAGIC telescopes
revealing variability with doubling time scales faster than 4.8 min. Causality
constrains the size of the emission region to be smaller than 20\% of the
gravitational radius of its central black hole. We suggest that the emission is
associated with pulsar-like particle acceleration by the electric field across
a magnetospheric gap at the base of the radio jet.
|
We extract information on the fluxes of Be and CNO neutrinos directly from
solar neutrino experiments, with minimal assumptions about solar models. Next
we compare these results with solar models, both standard and non-standard
ones. Finally we discuss the expectations for Borexino, both in the case of
standard and of non-standard neutrinos.
|
Let $\mathcal{C}$ be a conjugacy class of involutions in a group $G$. We
study the graph $\Gamma(\mathcal{C})$ whose vertices are elements of
$\mathcal{C}$ with $g,h\in\mathcal{C}$ connected by an edge if and only if
$gh\in\mathcal{C}$. For $t\in \mathcal{C}$, we define the component group of
$t$ to be the subgroup of $G$ generated by all vertices in
$\Gamma(\mathcal{C})$ that lie in the connected component of the graph that
contains $t$.
We classify the component groups of all involutions in simple groups of Lie
type over a field of characteristic $2$. We use this classification to
partially classify the transitive binary actions of the simple groups of Lie
type over a field of characteristic $2$ for which a point stabilizer has even
order. The classification is complete unless the simple group in question is a
symplectic or unitary group.
|
We report new measurements of atmospheric muons at mountain altitude. The
measurement was carried out with the BESS detector at the top of Mt. Norikura,
Japan, at an altitude of 2,770 m above sea level. By comparing our results
with predictions from several interaction models, we investigate which model
is most appropriate. These studies should improve the accuracy of atmospheric
neutrino calculations.
|
We construct the exchange gate from small elementary gates on the space of
qudits; it consists of three controlled shift gates and three "reverse" gates.
This is a natural extension of the qubit case.
We also consider a similar construction on the Fock space, but in this case
the situation is somewhat different. Nevertheless, we can construct the
exchange gate by making use of a generalized coherent operator based on the
Lie algebra su(2), which is a well-known method in Quantum Optics. We moreover
make a brief comment on the "imperfect clone".
|
We study the Landau levels associated with electrons moving in a magnetic
field in the presence of a continuous distribution of disclinations, a magnetic
screw dislocation and a dispiration. We focus on the influence of these
topological defects on the spectrum of the electron (or hole) in a magnetic
field in the framework of the geometric theory of defects in solids of Katanaev
and Volovich. The presence of the defects breaks the degeneracy of the Landau
levels in different ways depending on the defect. Exact expressions for
energies and eigenfunctions are found for all cases.
|
Let $(X,T^{1,0}X)$ be a $(2n+1+d)$-dimensional compact CR manifold with
codimension $d+1$, $d\geq1$, and let $G$ be a $d$-dimensional compact Lie group
with CR action on $X$ and $T$ be a globally defined vector field on $X$ such
that $\mathbb C TX=T^{1,0}X\oplus T^{0,1}X\oplus\mathbb C T\oplus\mathbb
C\underline{\mathfrak{g}}$, where $\underline{\mathfrak{g}}$ is the space of
vector fields on $X$ induced by the Lie algebra of $G$. In this work, we show
that if $X$ is strongly pseudoconvex in the direction of $T$ and $n\geq 2$,
then there exists a $G$-equivariant CR embedding of $X$ into $\mathbb C^N$, for
some $N\in\mathbb N$. We also establish a CR orbifold version of Boutet de
Monvel's embedding theorem.
|
Exotic beams of short-lived radioisotopes are produced in nuclear reactions
such as thermal neutron induced fission, target or projectile fragmentation and
fusion reactions. For a given radioactive ion beam (RIB), different production
modes are in competition. For each of them the cross section, the intensity of
the projectile beam and the target thickness define an upper production rate.
The final yield relies on the optimisation of the ion-source, which should be
fast and highly efficient in view of the limited production cross section, and
on obtaining a minimum diffusion time out of the target matrix or fragment
catcher to reduce decay losses. Finally, either chemical or isobaric
selectivity is needed to confine unwanted elements close to the production site.
These considerations are discussed for pulsed or dc-driven RIB facilities and
the solutions to some of the technical challenges will be illustrated by
examples of currently produced near-drip-line elements.
|
We prove a new quantitative version of the Alexandrov theorem which states
that if the mean curvature of a regular set in R^{n+1} is close to a constant
in L^{n}-sense, then the set is close to a union of disjoint balls with respect
to the Hausdorff distance. This result is more general than previous
quantifications of the Alexandrov theorem, and using it we are able to show that
in R^2 and R^3 a weak solution of the volume-preserving mean curvature flow
starting from a set of finite perimeter asymptotically converges to a
disjoint union of equally sized balls, up to possible translations. Here by weak
solution we mean a flat flow, obtained via the minimizing movements scheme.
|
Hand gesture recognition has long been a hot topic in human computer
interaction. Traditional camera-based hand gesture recognition systems cannot
work properly in dark environments. In this paper, a Doppler-radar-based
hand gesture recognition system using convolutional neural networks is
proposed. A cost-effective Doppler radar sensor with dual receiving channels at
5.8 GHz is used to acquire a large database of four standard gestures. The
received hand gesture signals are then processed with time-frequency analysis.
Convolutional neural networks are used to classify different gestures.
Experimental results verify the effectiveness of the system with an accuracy of
98%. Besides, related factors such as recognition distance and gesture scale
are investigated.
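A hedged sketch of the processing chain described above: a micro-Doppler
spectrogram is computed from the dual-channel radar signal and fed to a small
CNN classifier over four gesture classes. The sampling rate, STFT settings and
network layout are assumptions, not the paper's exact configuration.

    # Spectrogram + small CNN classifier (illustrative, synthetic data).
    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.signal import spectrogram

    fs = 2000.0                                  # assumed baseband sampling rate
    iq = np.random.randn(2, 4000)                # placeholder dual-channel radar data
    _, _, sxx = spectrogram(iq[0] + 1j * iq[1], fs=fs, nperseg=128, noverlap=64)
    x = torch.tensor(np.log1p(np.abs(sxx)), dtype=torch.float32)[None, None]  # (1,1,F,T)

    class GestureCNN(nn.Module):
        def __init__(self, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)))
            self.head = nn.Linear(16 * 4 * 4, n_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    print(GestureCNN()(x).shape)                 # logits for the four gesture classes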
|
Feedforward Neural Network (FNN)-based language models estimate the
probability of the next word based on the history of the last N words, whereas
Recurrent Neural Networks (RNN) perform the same task based only on the last
word and some context information that cycles in the network. This paper
presents a novel approach, which bridges the gap between these two categories
of networks. In particular, we propose an architecture which takes advantage of
the explicit, sequential enumeration of the word history in the FNN structure while
enhancing each word representation at the projection layer through recurrent
context information that evolves in the network. The context integration is
performed using an additional word-dependent weight matrix that is also learned
during the training. Extensive experiments conducted on the Penn Treebank (PTB)
and the Large Text Compression Benchmark (LTCB) corpus showed a significant
reduction of the perplexity when compared to state-of-the-art feedforward as
well as recurrent neural network architectures.
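The sketch below conveys the gist of the architecture described above: a
feed-forward N-gram language model whose projection-layer word vectors are
mixed with a recurrently evolving context vector. The specific mixing and
update operators here (a linear merge and a GRU cell) are simplifying
assumptions and do not reproduce the paper's word-dependent weight matrix.

    # Simplified FNN language model with a recurrently enhanced projection layer.
    import torch
    import torch.nn as nn

    class RecurrentlyEnhancedFNNLM(nn.Module):
        def __init__(self, vocab, dim=128, n_hist=4):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.context_mix = nn.Linear(2 * dim, dim)   # merges word vector and context
            self.context_update = nn.GRUCell(dim, dim)   # recurrent context evolution
            self.hidden = nn.Linear(n_hist * dim, dim)
            self.out = nn.Linear(dim, vocab)

        def forward(self, history, context):
            # history: (batch, n_hist) word ids; context: (batch, dim) recurrent state
            enhanced = []
            for t in range(history.size(1)):
                w = self.embed(history[:, t])
                enhanced.append(torch.tanh(self.context_mix(torch.cat([w, context], dim=1))))
                context = self.context_update(w, context)
            h = torch.relu(self.hidden(torch.cat(enhanced, dim=1)))
            return self.out(h), context                  # next-word logits, new context

    model = RecurrentlyEnhancedFNNLM(vocab=10000)
    logits, ctx = model(torch.randint(0, 10000, (2, 4)), torch.zeros(2, 128))
    print(logits.shape)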
|
Rationales, snippets of extracted text that explain an inference, have
emerged as a popular framework for interpretable natural language processing
(NLP). Rationale models typically consist of two cooperating modules: a
selector and a classifier, trained with the goal of maximizing the mutual
information (MMI) between the "selected" text and the document label. Despite
their promise, MMI-based methods often pick up on spurious text patterns and result
in models with nonsensical behaviors. In this work, we investigate whether
counterfactual data augmentation (CDA), without human assistance, can improve
the performance of the selector by lowering the mutual information between
spurious signals and the document label. Our counterfactuals are produced in an
unsupervised fashion using class-dependent generative models. From an
information theoretic lens, we derive properties of the unaugmented dataset for
which our CDA approach would succeed. The effectiveness of CDA is empirically
evaluated by comparing against several baselines including an improved
MMI-based rationale schema on two multi-aspect datasets. Our results show that
CDA produces rationales that better capture the signal of interest.
|
The helicity density matrix elements rho[00] of rho(770)+- and omega(782)
mesons produced in Z decays have been measured using the OPAL detector at LEP.
Over the measured meson energy range, the values are compatible with 1/3,
corresponding to a statistical mix of helicity -1, 0 and +1 states. For the
highest accessible scaled energy range 0.3 < x_E < 0.6, the measured rho[00]
values of the rho(770)+- and the omega are 0.373 +- 0.052 and 0.142 +- 0.114,
respectively. These results are compared to measurements of other vector
mesons.
|
We study the phenomenon of cluster synchrony that occurs in ensembles of
coupled phase oscillators when higher-order modes dominate the coupling between
oscillators. For the first time, we develop a complete analytic description of
the dynamics in the limit of a large number of oscillators and use it to
quantify the degree of cluster synchrony, cluster asymmetry, and switching. We
use a variation of the recent dimensionality-reduction technique of Ott and
Antonsen [Chaos {\bf 18}, 037113 (2008)] and find an analytic description of
the degree of cluster synchrony valid on a globally attracting manifold. Shaped
by this manifold, there is an infinite family of steady-state distributions of
oscillators, resulting in a high degree of multi-stability in the cluster
asymmetry. We also show how through external forcing the degree of asymmetry
can be controlled, and suggest that systems displaying cluster synchrony can be
used to encode and store data.
|
This paper continues the study of the poset of eigenspaces of elements of a
unitary reflection group (for a fixed eigenvalue), which was commenced in [6]
and [5]. The emphasis in this paper is on the representation theory of unitary
reflection groups. The main tool is the theory of poset extensions due to Segev
and Webb ([16]). The new results place the well-known representations of
unitary reflection groups on the top homology of the lattice of intersections
of hyperplanes into a natural family, parameterised by eigenvalue.
|
We consider a repulsion actuator located in an $n$-sided convex environment
full of point particles. When the actuator is activated, all the particles move
away from the actuator. We study the problem of gathering all the particles to
a point. We give an $O(n^2)$ time algorithm to compute all the actuator
locations that gather the particles to one point with one activation, and an
$O(n)$ time algorithm to find a single such actuator location if one exists. We
then provide an $O(n)$ time algorithm to place the optimal number of actuators
whose sequential activation results in the gathering of the particles when such
a placement exists.
|
We investigate the generation and evolution of two-mode continuous-variable
(CV) entanglement in a system of a microwave-driven V-type atom in a quantum
beat laser. By taking into account the effects of spontaneously generated
quantum interference between the two atomic decay channels, we show that CV
entanglement with a large mean number of photons can be generated in our
scheme, and that the properties of the field entanglement can be adjusted by
properly modulating the frequency detunings of the fields. More interestingly,
we find that the entanglement can be significantly enhanced by the
spontaneously generated interference.
|
Light experiences dielectric matter as an effective gravitational field and
matter experiences light as a form of gravity as well. Light and matter waves
see each other as dual space-time metrics, thus establishing a unique model in
field theory. Actio et reactio are governed by Abraham's energy-momentum tensor
and equations of state for quantum dielectrics.
|
Results are presented from a programme of detailed longslit spectroscopic
observations of the extended emission-line region (EELR) associated with the
powerful radio galaxy PKS 2356-61. The observations have been used to construct
spectroscopic datacubes, which yield detailed information on the spatial
variations of emission-line ratios across the EELR, together with its kinematic
structure. We present an extensive comparison between the data and results
obtained from the MAPPINGS II shock ionization code, and show that the physical
properties of the line-emitting gas, including its ionization, excitation,
dynamics and overall energy budget, are entirely consistent with a scenario
involving auto-ionizing shocks as the dominant ionization mechanism. This has
the advantage of accounting for the observed EELR properties by means of a
single physical process, thereby requiring fewer free parameters than the
alternative scheme involving photoionization by radiation from the active
nucleus. Finally, possible mechanisms of shock formation are considered in the
context of the dynamics and origin of the gas, specifically scenarios involving
infall or accretion of gas during an interaction between the host radio galaxy
and a companion galaxy.
|
In this paper, we propose a novel image interpolation algorithm, which is
formulated by combining a local autoregressive (AR) model and a nonlocal
adaptive 3-D sparse model as regularized constraints under the regularization
framework. Estimating the high-resolution image via the local AR
regularization differs from conventional AR models, which compute the weighted
interpolation coefficients without considering the rough structural similarity
between the low-resolution (LR) and high-resolution (HR) images. The nonlocal
adaptive 3-D sparse model is then formulated to regularize the interpolated HR
image, providing a way to correct the pixels affected by the numerical
instability of the AR model. In addition, a new Split-Bregman based iterative
algorithm is developed to solve the above optimization problem iteratively.
Experimental results demonstrate that the proposed algorithm achieves
significant performance improvements over traditional algorithms in terms of
both objective quality and visual perception.
|
We investigate the dynamical formation and evaporation of a spherically
symmetric charged black hole. We study the self-consistent one loop order
semiclassical back-reaction problem. To this end the mass-evaporation is
modeled by an expectation value of the stress-energy tensor of a neutral
massless scalar field, while the charge is not radiated away. We observe the
formation of an initially non-extremal black hole which tends toward the
extremal black hole $M=Q$, emitting Hawking radiation. If, in addition,
discharge due to the instability of the vacuum to pair creation in strong
electric fields occurs, then the black hole discharges and evaporates
simultaneously and decays regularly down to the scale where the semiclassical
approximation breaks down. We
calculate the rates of the mass and the charge loss and estimate the life-time
of the decaying black holes.
|
We present accurate simulations of the dynamical bar-mode instability in full
General Relativity focussing on two aspects which have not been investigated in
detail in the past. Namely, on the persistence of the bar deformation once the
instability has reached its saturation and on the precise determination of the
threshold for the onset of the instability in terms of the parameter
$\beta={T}/{|W|}$. We find that generic nonlinear mode-coupling effects appear
during the development of the instability and these can severely limit the
persistence of the bar deformation and eventually suppress the instability. In
addition, we observe the dynamics of the instability to be strongly influenced
by the value of $\beta$ and by its separation from the critical value $\beta_c$
marking the onset of the instability. We discuss the impact these results have
on the detection of gravitational waves from this process and provide evidence
that the classical perturbative analysis of the bar-mode instability for
Newtonian and incompressible Maclaurin spheroids remains qualitatively valid
and accurate also in full General Relativity.
|
We study thermal transport induced by soliton dynamics in a long Josephson
tunnel junction operating in the flux-flow regime. A thermal bias across the
junction is established by imposing the superconducting electrodes to reside at
different temperatures, when solitons flow along the junction. Here, we
consider the effect of both a bias current and an external magnetic field on
the thermal evolution of the device. In the flux-flow regime, a chain of
magnetically-excited solitons rapidly moves along the junction driven by the
bias current. We explore the range of bias current triggering the flux-flow
regime at fixed values of magnetic field, and the stationary temperature
distribution in this operation mode. We evidence a steady multi-peaked
temperature profile which reflects on the average soliton distribution along
the junction. Finally, we also analyse how the friction affecting the soliton
dynamics influences the thermal evolution of the system.
|
Neural Architecture Search has achieved state-of-the-art performance in a
variety of tasks, out-performing human-designed networks. However, many
human-defined assumptions about the problems being solved or the models
generated are still needed: final model architectures, the number of layers to
be sampled, forced operations, and small search spaces, which ultimately yield
models with higher performance at the cost of inducing bias into the system.
In this paper, we propose HMCNAS, which is
composed of two novel components: i) a method that leverages information about
human-designed models to autonomously generate a complex search space, and ii)
an Evolutionary Algorithm with Bayesian Optimization that is capable of
generating competitive CNNs from scratch, without relying on human-defined
parameters or small search spaces. The experimental results show that the
proposed approach results in competitive architectures obtained in a very short
time. HMCNAS provides a step towards generalizing NAS, by providing a way to
create competitive models, without requiring any human knowledge about the
specific task.
|
FinFETs are predicted to advance semiconductor scaling for sub-20nm devices.
In order to support their introduction into research and universities it is
crucial to develop an open source predictive process design kit. This paper
discusses in detail the design process for such a kit for 15nm FinFET devices,
called the FreePDK15. The kit consists of a layer stack with thirteen metal
layers based on the hierarchical scaling used in ASIC architecture,
Middle-of-Line local interconnect layers and a set of Front-End-of-Line
layers. The physical and geometrical properties of these layers are defined,
and these properties determine the density and parasitics of the design. The
design rules are laid down considering additional guidelines for process
variability and the challenges involved in FinFET fabrication, and a unique
set of design rules are developed for critical dimensions. Layout extraction,
including modified rules for determining the geometrical characteristics of
FinFET layouts, is implemented and discussed to obtain successful Layout
Versus Schematic checks for a set of layouts. Moreover, additional parasitic
components of a standard FinFET device are analyzed and the parasitic
extraction of sample layouts is performed. These extraction results are then
compared and assessed against the validation models.
|
Kinematic measurements of two simultaneous coordinates from postural sway
during quiet standing were performed employing multiple ultrasonic transducers.
The use of accurate acoustic devices was required for the detection of the
small random noise displacements. The trajectory in the anteroposterior -
mediolateral plane of human chest was measured and compared with the trajectory
in anteroposterior direction from the upper and lower body. The latter was
statistically analyzed and appeared to be strongly anti-correlated. The
anti-correlations represent strong evidence for the dominance of hip strategy
during an unperturbed one minute stance. That the hip strategy, normally
observed for large amplitude motions, also appears in the small amplitude of a
quiet stance, indicates the utility of such noise measurements for exploring
the biomechanics of human balance.
|
In this article, the authors find evidence that media coverage across 13
online newspapers enhanced the electoral results of the right-wing party Vox
in Spain during the general elections of November 2019. We consider the
mentions of political parties and leaders in these media during the electoral
campaign from 1 to 10 November 2019, and only the visibility (prominence)
dimension is needed to establish the evidence.
|
Given the symplectic polar space of type $W(5,2)$, let us call a set of five
Fano planes sharing pairwise a single point a Fano pentad. Once 63 points of
$W(5,2)$ are appropriately labeled by 63 non-trivial three-qubit observables,
any such Fano pentad gives rise to a quantum contextual set known as Mermin
pentagram. Here, it is shown that a Fano pentad also hosts another, closely
related contextual set, which features 25 observables and 30 three-element
contexts. Out of 25 observables, ten are such that each of them is on six
contexts, while each of the remaining 15 observables belongs to two contexts
only. Making use of the recent classification of Mermin pentagrams (Saniga et
al., Symmetry 12 (2020) 534), it was found that 12,096 such contextual sets
comprise 47 distinct types, falling into eight families according to the number
($3, 5, 7, \ldots, 17$) of negative contexts.
|
The golden binomials, introduced in the golden quantum calculus, have an
expansion determined by Fibonomial coefficients and a set of simple zeros
given by powers of the Golden ratio. We show that these golden binomials are
equivalent to Carlitz characteristic polynomials of certain matrices of
binomial coefficients. It is shown that trace invariants for powers of these
matrices are determined by Fibonacci divisors, quantum calculus of which was
developed very recently.
|
Code optimization and high level synthesis can be posed as constraint
satisfaction and optimization problems, such as graph coloring used in register
allocation. Graph coloring is also used to model more traditional CSPs relevant
to AI, such as planning, time-tabling and scheduling. Provably optimal
solutions may be desirable for commercial and defense applications.
Additionally, for applications such as register allocation and code
optimization, naturally-occurring instances of graph coloring are often small
and can be solved optimally. A recent wave of improvements in algorithms for
Boolean satisfiability (SAT) and 0-1 Integer Linear Programming (ILP) suggests
generic problem-reduction methods, rather than problem-specific heuristics,
because (1) heuristics may be upset by new constraints, (2) heuristics tend to
ignore structure, and (3) many relevant problems are provably inapproximable.
Problem reductions often lead to highly symmetric SAT instances, and
symmetries are known to slow down SAT solvers. In this work, we compare several
avenues for symmetry breaking, in particular when certain kinds of symmetry are
present in all generated instances. Our focus on reducing CSPs to SAT allows us
to leverage recent dramatic improvement in SAT solvers and automatically
benefit from future progress. We can use a variety of black-box SAT solvers
without modifying their source code because our symmetry-breaking techniques
are static, i.e., we detect symmetries and add symmetry breaking predicates
(SBPs) during pre-processing.
An important result of our work is that among the types of
instance-independent SBPs we studied and their combinations, the simplest and
least complete constructions are the most effective. Our experiments also
clearly indicate that instance-independent symmetries should mostly be
processed together with instance-specific symmetries rather than at the
specification level, contrary to what has been suggested in the literature.
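To make the problem-reduction idea concrete, here is a minimal Python sketch of the standard direct CNF encoding of graph k-coloring (one Boolean variable per vertex-colour pair); this is an assumed textbook encoding for illustration, not necessarily the exact reduction used in the work above, and a static symmetry-breaking predicate would simply be appended as extra clauses before calling any black-box SAT solver.

# Minimal sketch (assumed textbook encoding, not necessarily the paper's exact
# reduction): direct CNF encoding of graph k-coloring for a black-box SAT solver.
def color_to_cnf(num_vertices, edges, k):
    """Return CNF clauses (lists of signed ints, DIMACS-style) for k-coloring."""
    var = lambda v, c: v * k + c + 1        # variable "vertex v has color c"
    clauses = []
    for v in range(num_vertices):
        # each vertex gets at least one color
        clauses.append([var(v, c) for c in range(k)])
        # ... and at most one color
        for c1 in range(k):
            for c2 in range(c1 + 1, k):
                clauses.append([-var(v, c1), -var(v, c2)])
    for (u, v) in edges:
        # adjacent vertices must not share a color
        for c in range(k):
            clauses.append([-var(u, c), -var(v, c)])
    return clauses

# Example: a triangle needs 3 colors; with k=2 the formula is unsatisfiable.
cnf = color_to_cnf(3, [(0, 1), (1, 2), (0, 2)], k=2)

A very simple instance-independent SBP in this spirit is a unit clause fixing the color of vertex 0, e.g. clauses.append([var(0, 0)]).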
|
The Belle II experiment at the SuperKEKB electron-positron collider aims to
collect an unprecedented data set of $50~{\rm ab}^{-1}$ to study $CP$-violation
in the $B$-meson system and to search for Physics beyond the Standard Model.
SuperKEKB is already the world's highest-luminosity collider. In order to
collect the planned data set within approximately one decade, the target is to
reach a peak luminosity of $\rm 6 \times 10^{35}~cm^{-2}s^{-1}$ by further
increasing the beam currents and reducing the beam size at the interaction
point by squeezing the betatron function down to $\beta^{*}_{\rm y}=\rm
0.3~mm$. To ensure detector longevity and maintain good reconstruction
performance, beam backgrounds must remain well controlled. We report on current
background rates in Belle II and compare these against simulation. We find that
a number of recent refinements have significantly improved the background
simulation accuracy. Finally, we estimate the safety margins going forward. We
predict that backgrounds should remain high but acceptable until a luminosity
of at least $\rm 2.8 \times 10^{35}~cm^{-2}s^{-1}$ is reached for
$\beta^{*}_{\rm y}=\rm 0.6~mm$. At this point, the most vulnerable Belle II
detectors, the Time-of-Propagation (TOP) particle identification system and the
Central Drift Chamber (CDC), have predicted background hit rates from
single-beam and luminosity backgrounds that add up to approximately half of the
maximum acceptable rates.
|
The relativistic continuity equations for the extensive thermodynamic
quantities are derived based on the divergence theorem in Minkowski space
outlined by St\"uckelberg. This covariant approach leads to a relativistic
formulation of the first and second laws of thermodynamics. The internal energy
density and the pressure of a relativistic perfect fluid carry inertia, which
leads to a relativistic coupling between heat and work. The relativistic
continuity equation for the relativistic inertia is derived. The relativistic
corrections in the Euler equation and in the continuity equations for the
energy and momentum are identified. This relativistic theoretical framework
allows a rigorous derivation of the relativistic transformation laws for the
temperature, the pressure and the chemical potential based on the relativistic
transformation laws for the energy density, the entropy density, the mass
density and the number density.
|
The goal of this paper is to clarify when a closed convex cone is invariant
for a stochastic partial differential equation (SPDE) driven by a Wiener
process and a Poisson random measure, and to provide conditions on the
parameters of the SPDE, which are necessary and sufficient.
|
The reduced density matrices (RDMs) are calculated in the thermodynamic limit
for the Chern-Simons non-relativistic particle system and Maxwell-Boltzmann
(MB) statistics. It is established that they are zero outside of a diagonal and
well-behaved after a renormalization, depending on an arbitrary real number, if
the condition of neutrality holds.
|
We develop cointegration for multivariate continuous-time stochastic
processes, both in finite and infinite dimension. Our definition and analysis
are based on factor processes and operators mapping to the space of prices and
cointegration. The focus is on commodity markets, where both spot and forward
prices are analysed in the context of cointegration. We provide many examples
which include the most used continuous-time pricing models, including forward
curve models in the Heath-Jarrow-Morton paradigm in Hilbert space.
|
The Possibilistic Fuzzy Local Information C-Means (PFLICM) method is
presented as a technique to segment side-look synthetic aperture sonar (SAS)
imagery into distinct regions of the sea-floor. In this work, we investigate
and present the results of an automated feature selection approach for SAS
image segmentation. The chosen features and resulting segmentation from the
image will be assessed based on a select quantitative clustering validity
criterion and the subset of the features that reach a desired threshold will be
used for the segmentation process.
|
Reconstructive transformations in layered silicates need a high temperature
in order to be observed. However, very recently, some systems have been found
where the transformation can be studied at temperatures 600 C below the lowest
experimental results previously reported, including sol-gel methods. We explore
the possible relation with the existence of intrinsic localized modes, known as
discrete breathers. We construct a model for nonlinear vibrations within the
cation layer, obtain its parameters and calculate the breathers numerically,
obtaining their energies. Their statistics show that although there are far
fewer breathers than phonons, there are many more above the activation energy,
making them a good candidate to explain the reconstructive transformations at
low temperature.
|
We have compared the performance of five non-commercial triple stores,
Virtuoso-open source, Jena SDB, Jena TDB, SWIFT-OWLIM and 4Store. We examined
three performance aspects: the query execution time, scalability and run-to-run
reproducibility. The queries we chose addressed different ontological or
biological topics, and we obtained evidence that individual store performance
was quite query specific. We identified three groups of queries displaying
similar behavior across the different stores: 1) relatively short response
time, 2) moderate response time and 3) relatively long response time. OWLIM
proved to be a winner in the first group, 4Store in the second and Virtuoso in
the third. Our benchmarking showed Virtuoso to be a very balanced performer -
its response time was better than average for all the 24 queries; it showed a
very good scalability and a reasonable run-to-run reproducibility.
|
The dual Komar mass generalizes the concept of the NUT parameter and is akin
to the magnetic charge in electrodynamics. In asymptotically flat spacetimes it
coincides with the dual supertranslation charge. The dual mass vanishes
identically on Riemannian manifolds in General Relativity unless conical
singularities corresponding to Misner strings are introduced. In this paper we
propose an alternative way to source the dual mass locally. We show that this
can be done by enlarging the phase space of the theory to allow for a violation
of the algebraic Bianchi identity using local fields. A minimal extension of
Einstein's gravity that meets this requirement is known as the Einstein-Cartan
theory. Our main result is that on Riemann-Cartan manifolds the dual Komar mass
does not vanish and is given by a volume integral over a local 1-form
gravitational-magnetic current that is a function of the torsion.
|
We consider the Fast Fourier Transform (FFT) based numerical method for thin
film magnetization problems [Vestg{\aa}rden and Johansen, SuST, 25 (2012)
104001], compare it with the finite element methods, and evaluate its accuracy.
Proposed modifications of this method implementation ensure stable convergence
of iterations and enhance its efficiency. A new method, also based on the FFT,
is developed for 3D bulk magnetization problems. This method is based on a
magnetic field formulation, different from the popular h-formulation of eddy
current problems typically employed with the edge finite elements. The method
is simple, easy to implement, and can be used with a general current-voltage
relation; its efficiency is illustrated by numerical simulations.
|
We present an analysis of COMPTEL observations made between November 1991 and
May 1994 of 2CG 135+01, a bright gamma-ray source located near the Galactic
plane. At energies above 1 MeV, an excess consistent with the position of 2CG
135+01 is detected in the sum of the observations, at flux levels which are a
factor of 10-100 below those published in the past. The detection significance
of this excess, when the possible presence of underlying Galactic diffuse
emission is neglected, is 6.6 sigma for 3 degrees of freedom. The differential
photon spectrum in the 1-30 MeV energy range can be described by a power law
with a spectral index of $1.95^{+0.2}_{-0.3}$. Due to the uncertainties
involved in modelling the Galactic-disk diffuse emission underneath the source,
the absolute flux levels must be considered uncertain by a factor of two. They
are consistent with the extrapolation of the time-averaged spectrum of 2CG
135+01 measured with EGRET, thereby strengthening the identification. No
significant temporal correlation between the gamma-ray emission and the
monitored radio emission of the possible counterpart radio source GT 0236+610
(showing a 26.5 day modulation) is found.
|
The first transiting extrasolar planet, orbiting HD209458, was a Doppler
wobble planet before its transits were discovered with a 10 cm CCD camera.
Wide-angle CCD cameras, by monitoring in parallel the light curves of tens of
thousands of stars, should find hot Jupiter transits much faster than the
Doppler wobble method. The discovery rate could easily rise by a factor 10. The
sky holds perhaps 1000 hot Jupiters transiting stars brighter than V=13. These
are bright enough for follow-up radial velocity studies to measure planet
masses to go along with the radii from the transit light curves. I derive
scaling laws for the discovery potential of ground-based transit searches, and
use these to assess over two dozen planetary transit surveys currently
underway. The main challenge lies in calibrating small systematic errors that
limit the accuracy of CCD photometry at milli-magnitude levels. Promising
transit candidates have been reported by several groups, and many more are sure
to follow.
|
We show that many aspects of ultracold three-body collisions can be
controlled by choosing the mass ratio between the collision partners. In the
ultracold regime, the scattering length dependence of the three-body rates can
be substantially modified from the equal mass results. We demonstrate that the
only non-trivial mass dependence is due solely to Efimov physics. We have
determined the mass dependence of the three-body collision rates for all
heteronuclear systems relevant for two-component atomic gases with resonant
s-wave interspecies interactions, which includes only three-body systems with
two identical bosons or two identical fermions.
|
We construct a large class of non-Markovian master equations that describe
the dynamics of open quantum systems featuring strong memory effects, which
relies on a quantum generalization of the concept of classical semi-Markov
processes. General conditions for the complete positivity of the corresponding
quantum dynamical maps are formulated. The resulting non-Markovian quantum
processes allow the treatment of a variety of physical systems, as is
illustrated by means of various examples and applications, including quantum
optical systems and models of quantum transport.
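For orientation, memory-kernel master equations of the kind alluded to above are often written schematically as (a generic form with placeholder kernel and map, not necessarily the exact equations of this work)
$$ \frac{d}{dt}\rho(t)=\int_0^t d\tau\, k(\tau)\,\big[\mathcal{E}\rho(t-\tau)-\rho(t-\tau)\big], $$
where $k(\tau)$ plays the role of a waiting-time density of an underlying semi-Markov process and $\mathcal{E}$ is a completely positive trace-preserving map; the complete-positivity conditions mentioned above constrain the admissible pairs $(k,\mathcal{E})$.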
|
A novel approximation method in studying the perihelion precession and
planetary orbits in general relativity is to use geodesic deviation equations
of first and higher orders, proposed by Kerner et al. Using the higher-order geodesic
deviation approach, we generalize the calculation of orbital precession and the
elliptical trajectory of neutral test particles to Kerr$-$Newman space-times.
One advantage of this method is that, for small eccentricities, one obtains
the trajectories of planets, for arbitrary values of the quantity
${G M}/{R c^2}$, without resorting to Newtonian or post-Newtonian approximations.
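For reference, the leading-order general-relativistic perihelion shift per orbit, which such deviation-based schemes reproduce and refine, is the textbook expression
$$ \Delta\varphi \simeq \frac{6\pi G M}{c^{2} a\,(1-e^{2})}, $$
with $a$ the semi-major axis and $e$ the eccentricity of the orbit; the higher-order approach above avoids expanding in ${GM}/{Rc^2}$ altogether.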
|
We present a novel method to investigate the effects of varying channel
parameters on geometrically shaped constellations for communication systems
employing the blind phase search algorithm. We show that introduced asymmetries
significantly improve performance if adapted to changing channel parameters.
|
The force due to electromagnetic induction on a test charge is calculated in
different reference frames. The Faraday-Lenz Law and different formulae for the
fields of a uniformly moving charge are used. The classical Heaviside formula
for the electric field of a moving charge predicts that, for the particular
spatial configuration considered, the inductive force vanishes in the frame in
which the magnet is in motion and the test charge at rest. In contrast,
consistent results, in different frames, are given by the recently derived
formulae of relativistic classical electrodynamics.
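For reference, the Heaviside expression referred to above for the electric field of a charge $q$ in uniform motion is the standard textbook formula
$$ \mathbf{E}=\frac{q}{4\pi\varepsilon_{0}}\,\frac{(1-\beta^{2})\,\hat{\mathbf{r}}}{r^{2}\,\big(1-\beta^{2}\sin^{2}\theta\big)^{3/2}}, $$
where $\beta=v/c$, $\mathbf{r}$ points from the present position of the charge to the field point, and $\theta$ is the angle between $\mathbf{r}$ and the velocity.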
|
Enceladus is a primary target for astrobiology due to the $\rm H_2O$ plume
ejecta measured by the Cassini spacecraft and the inferred subsurface ocean
sustained by tidal heating. Sourcing the plumes via a direct connection from
the ocean to the surface requires a fracture through the entire ice shell
($\sim$10 km). Here we explore an alternative mechanism in which shear heating
within shallower tiger stripe fractures produces partial melting in the ice
shell and interstitial convection allows fluid to be ejected as geysers. We use
an idealized two-dimensional multiphase reactive transport model to simulate
the thermomechanics of a mushy region generated by an upper bound estimate for
the localized shear heating rate in a salty ice shell. From our simulations, we
predict the temperature, porosity, salt content, melting rate, and liquid
volume of an intrashell mushy zone surrounding a fracture. We find that the
rate of internal melting can match the observed $\rm H_2O$ eruption rate and
that there is sufficient brine volume within the mushy zone to sustain the
geysers for $\sim350$ kyr without additional melting. The composition of the
liquid brine is, however, distinct from that of the ocean, due to partial
melting. This shear heating mechanism for geyser formation applies to Enceladus
and other icy moons and has implications for our understanding of the
geophysical processes and astrobiological potential of icy satellites.
|
In presence of long range dispersal, epidemics spread in spatially
disconnected regions known as clusters. Here, we characterize exactly their
statistical properties in a solvable model, in both the supercritical
(outbreak) and critical regimes. We identify two diverging length scales,
corresponding to the bulk and the outskirt of the epidemic. We reveal a
nontrivial critical exponent that governs the cluster number, the distribution
of their sizes and of the distances between them. We also discuss applications
to depinning avalanches with long range elasticity.
|
After a rapid introduction to the models of Comptonization, we present
simulations that underline the expected capability of Simbol-X to
constrain the presence of this process in objects like AGNs or XRBs.
|
Thomas and Williams conjectured that rowmotion acting on the rational
$(a,b)$-Tamari lattice has order $a+b-1$. We construct an equivariant bijection
that proves this conjecture when $b\equiv 1\pmod a$; in fact, we determine the
entire orbit structure of rowmotion in this case, showing that it exhibits the
cyclic sieving phenomenon. We additionally show that the down-degree statistic
is homomesic for this action. In a different vein, we consider the action of
rowmotion on Barnard and Reading's biCambrian lattices. Settling a different
conjecture of Thomas and Williams, we prove that if $c$ is a bipartite Coxeter
element of a coincidental-type Coxeter group $W$, then the orbit structure of
rowmotion on the $c$-biCambrian lattice is the same as the orbit structure of
rowmotion on the lattice of order ideals of the doubled root poset of type $W$.
|
Under an applied traction, highly concentrated suspensions of solid particles
in fluids can turn from a state in which they flow to a state in which they
counteract the traction as an elastic solid: a shear-jammed state. Remarkably,
the suspension can turn back to the flowing state simply by inverting the
traction. A tensorial model is presented and tested in paradigmatic cases. We
show that, to reproduce the phenomenology of shear jamming in generic
geometries, it is necessary to link this effect to the elastic response
supported by the suspension microstructure rather than to a divergence of the
viscosity.
|
We show the existence and uniqueness of a continuous solution to a
path-dependent volatility model introduced by Guyon and Lekeufack (2023) to
model the price of an equity index and its spot volatility. The considered
model for the trend and activity features can be written as a Stochastic
Volterra Equation (SVE) with non-convolutional and non-bounded kernels as well
as non-Lipschitz coefficients. We first prove the existence and uniqueness of a
solution to the SVE under integrability and regularity assumptions on the two
kernels and under a condition on the second kernel weighting the past squared
returns which ensures that the activity feature is bounded from below by a
positive constant. Then, assuming in addition that the kernel weighting the
past returns is of exponential type and that an inequality relating the
logarithmic derivatives of the two kernels with respect to their second
variables is satisfied, we show the positivity of the volatility process which
is obtained as a non-linear function of the SVE's solution. We show numerically
that choosing an exponential kernel for the kernel weighting the past
returns has little impact on the quality of the model calibration compared to
other choices, and that the inequality involving the logarithmic derivatives is
satisfied by the calibrated kernels. These results extend those of Nutz and Valdevenito
(2023).
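As a schematic illustration of the model class discussed above (notation introduced here for convenience, not quoted from the paper), the volatility is driven by a trend feature and an activity feature built from kernel-weighted past returns, e.g.
$$ \sigma_t=\beta_0+\beta_1 R_{1,t}+\beta_2\sqrt{R_{2,t}},\qquad R_{1,t}=\int_0^t K_1(t,s)\,\frac{dS_s}{S_s},\qquad R_{2,t}=\int_0^t K_2(t,s)\Big(\frac{dS_s}{S_s}\Big)^2, $$
so that the pair $(R_{1},R_{2})$ solves a Stochastic Volterra Equation; the assumptions above on $K_1$ and $K_2$ keep the activity feature $R_{2}$ bounded away from zero and yield a positive volatility.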
|
The growing number of noncooperative flying objects has prompted interest in
sample-return and space debris removal missions. Current solutions are both
costly and largely dependent on specific object identification and capture
methods. In this paper, a low-cost modular approach for control of a swarm
flight of small satellites in rendezvous and capture missions is proposed by
solving the optimal output regulation problem. By integrating the theories of
tracking control, adaptive optimal control, and output regulation, the optimal
control policy is designed as a feedback-feedforward controller to guarantee
the asymptotic tracking of a class of reference input generated by the leader.
The estimated state vector of the space object of interest and communication
among the satellites are assumed to be available. The controller rejects the
nonvanishing disturbances injected into the follower satellite while
maintaining the closed-loop stability of the overall leader-follower system.
The simulation results, obtained under the Basilisk-ROS2 framework environment
for high-fidelity space applications with accurate spacecraft dynamics, are
compared with those from a classical linear quadratic regulator controller, and
the results reveal the efficiency and practicality of the proposed method.
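As a hedged sketch of the feedback-feedforward structure invoked above (a generic linear output-regulation form, not the paper's specific design): with an exosystem $\dot v=Sv$ generated by the leader, plant $\dot x=Ax+Bu+Ev$ and tracking error $e=Cx+Fv$, the regulator takes the form
$$ u=-Kx+(\Gamma+K\Pi)\,v,\qquad \Pi S=A\Pi+B\Gamma+E,\qquad 0=C\Pi+F, $$
where $K$ is a stabilizing gain and $(\Pi,\Gamma)$ solve the regulator equations; the adaptive-optimal-control ingredient mentioned above replaces the model-based computation of $K$ with a data-driven one.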
|
We study the behaviour of an initially spherical bunch of accelerated
particles emitted along trajectories parallel to the symmetry axis of a
rotating black hole. We find that, under suitable conditions, curvature and
inertial strains compete to model the shape of axial outflows of matter
contributing to generate jet-like structures. This is of course a purely
kinematical effect which does not account by itself for physical processes
underlying the formation of jets. In our analysis a crucial role is played by
the property that the electric and magnetic parts of the Weyl tensor are
invariant under Lorentz boosts along the axis of symmetry in Kerr spacetime.
|
The source-coding problem with side information at the decoder is studied
subject to a constraint that the encoder---to whom the side information is
unavailable---be able to compute the decoder's reconstruction sequence to
within some distortion. For discrete memoryless sources and finite
single-letter distortion measures, an expression is given for the minimal
description rate as a function of the joint law of the source and side
information and of the allowed distortions at the encoder and at the decoder.
The minimal description rate is also computed for a memoryless Gaussian source
with squared-error distortion measures. A solution is also provided to a more
general problem where there are more than two distortion constraints and each
distortion function may be a function of three arguments: the source symbol,
the encoder's reconstruction symbol, and the decoder's reconstruction symbol.
|
Astor is a program repair library which has different modes. In this paper,
we present the Cardumen mode of Astor, a repair approach based on mined templates
that has an ultra-large search space. We evaluate the capacity of Cardumen to
discover test-suite adequate patches (aka plausible patches) over the 356 real
bugs from Defects4J. Cardumen finds 8935 patches over 77 bugs of Defects4J.
This is the largest number of automatically synthesized patches ever reported,
all patches being available in an open-science repository. Moreover, Cardumen
identifies 8 unique patches, that are patches for Defects4J bugs that were
never repaired in the whole history of program repair.
|
Very high energy physics needs a coherent description of the four fundamental
forces. Non-commutative geometry is a promising mathematical framework which
has already allowed a unification of general relativity and the standard model,
at the classical level, thanks to the spectral action principle. Quantum field
theories on non-commutative spaces are a first step towards the quantization
of such a model. These theories cannot be obtained simply by writing the usual
field theories on non-commutative spaces: such attempts exhibit a new type of
divergence, called ultraviolet/infrared mixing, which prevents
renormalisability. H. Grosse and R. Wulkenhaar showed, with an example, that a
modification of the propagator may restore renormalisability. This thesis aims
at studying the generalization of such a method. We studied two different
models which allowed us to clarify certain aspects of non-commutative field
theory. In x space, the major technical difficulty is due to the oscillations
in the interaction part. We generalized the results of T. Filk in order to
exploit such oscillations as fully as possible. We were then able to
distinguish between two kinds of mixing, renormalizable or not. We also bring
the notion of orientability to
light: the orientable non-commutative Gross-Neveu model is renormalizable
without any modification of its propagator. The adaptation of multi-scale
analysis to the matrix basis emphasized the importance of dual graphs and
represents a first step towards a formulation of field theory independent of
the underlying space.
|
A Theorem is proved which reduces the problem of completeness of orbits of
Killing vector fields in maximal globally hyperbolic, say vacuum, space--times
to some properties of the orbits near the Cauchy surface. In particular it is
shown that all Killing orbits are complete in maximal developments of
asymptotically flat Cauchy data, or of Cauchy data prescribed on a compact
manifold. This result gives a significant strengthening of the uniqueness
theorems for black holes.
|
A generative model with a disentangled representation allows for independent
control over different aspects of the output. Learning disentangled
representations has been a recent topic of great interest, but it remains
poorly understood. We show that even for GANs that do not possess disentangled
representations, one can find curved trajectories in latent space over which
local disentanglement occurs. These trajectories are found by iteratively
following the leading right-singular vectors of the Jacobian of the generator
with respect to its input. Based on this insight, we describe an efficient
regularizer that aligns these vectors with the coordinate axes, and show that
it can be used to induce disentangled representations in GANs, in a completely
unsupervised manner.
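The trajectory-following procedure described above can be sketched in a few lines of Python; the snippet below is a hedged illustration using a finite-difference Jacobian and a toy stand-in generator (the function names, the toy map, and the step-size schedule are assumptions, not the authors' implementation).

import numpy as np

# Hedged sketch: follow the leading right-singular vector of the generator's
# Jacobian through latent space (finite-difference Jacobian for illustration;
# the toy "generator" below is a stand-in, not a trained GAN).
def generator(z):
    # placeholder nonlinear map R^d -> R^(2d) standing in for a trained generator
    W = np.arange(1, z.size * 2 * z.size + 1).reshape(2 * z.size, z.size) / 10.0
    return np.tanh(W @ z)

def jacobian_fd(f, z, eps=1e-5):
    """Finite-difference Jacobian of f at z."""
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - f0) / eps
    return J

def follow_leading_direction(f, z0, steps=20, step_size=0.1):
    """Iteratively step along the leading right-singular vector of the Jacobian."""
    z = z0.copy()
    path = [z.copy()]
    prev_v = None
    for _ in range(steps):
        _, _, Vt = np.linalg.svd(jacobian_fd(f, z), full_matrices=False)
        v = Vt[0]                      # leading right-singular vector
        if prev_v is not None and np.dot(v, prev_v) < 0:
            v = -v                     # keep a consistent orientation along the path
        z = z + step_size * v
        prev_v = v
        path.append(z.copy())
    return np.array(path)

trajectory = follow_leading_direction(generator, np.zeros(4))

In the paper's setting, the regularizer mentioned above would, schematically, encourage these leading right-singular vectors to align with the latent coordinate axes during training.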
|
A mathematical model is presented for the dynamics of time relative to space.
The model design is analogous to a chemical kinetic reaction based on
transition state theory which posits the existence of reactants, activated
complex, and products. Here, time future is considered to be analogous to
reactants, time now to transition state (activated complex) and time past to
products. Thus, future, now, and past events are considered to be distinct from
one another in the progression of time which flows from future to now to past.
The model also incorporates a cyclical reaction (in a quasi-equilibrium state)
between time future and time now as well as an irreversible reaction (that is
unidirectional and not in equilibrium) from time now to time past. The results
of the model show that describing time in terms of changes in space can explain
the asymmetric nature of time.
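Read literally, the scheme above mirrors a textbook kinetic network; a hedged rate-equation sketch (the symbols $F$, $N$, $P$ for future, now, past and the rate constants are introduced here only for illustration) is
$$ F\;\underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}}\;N\;\xrightarrow{\;k_{2}\;}\;P,\qquad \dot F=-k_{1}F+k_{-1}N,\quad \dot N=k_{1}F-k_{-1}N-k_{2}N,\quad \dot P=k_{2}N, $$
with the reversible step in quasi-equilibrium and the final step irreversible, which is the asymmetry the model identifies with the directed flow from future to now to past.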
|
Today's quantum processors composed of fifty or more qubits have allowed us
to enter a computational era where the output results are not easily
simulatable on the world's biggest supercomputers. What we have not seen yet,
however, is whether or not such quantum complexity can ever be useful for any
practical application. A fundamental question behind this lies in the
non-trivial relation between the complexity and its computational power. If we
find a clue for how and what quantum complexity could boost the computational
power, we might be able to directly utilize the quantum complexity to design
quantum computation even with the presence of noise and errors. In this work we
introduce a new reservoir computational model for pattern recognition showing a
quantum advantage utilizing scale-free networks. This new scheme allows us to
utilize the complexity inherent in scale-free networks, meaning we require
neither programming nor optimization of the quantum layer, even for other
computational tasks. The simplicity of our approach illustrates the
computational power of quantum complexity and provides new applications
for such processors.
|
Point defect migration is considered as a mechanism for aging in
ferroelectrics. Numerical results are given for the coupled problems of point
defect migration and electrostatic energy relaxation in a 2D domain
configuration. The peak values of the clamping pressure at domain walls are in
the range of $10^6$ Pa, which corresponds to macroscopically observed coercive
stresses in perovskite ferroelectrics. The effect is compared to mechanisms
involving orientational reordering of defect dipoles in the bulk of domains.
Domain clamping is significantly stronger in the drift mechanism than in the
orientational picture for the same material parameters.
|
We discuss various space-time metrics which are compatible with Einstein's
equations and a previously suggested cosmology with a finite total mass. In
this alternative cosmology the matter density was postulated to be a spatial
delta function at the time of the big bang thereafter diffusing outward with
constant total mass. This proposal explores a departure from standard
assumptions that the big bang occurred everywhere at once or was just one of an
infinite number of previous and later transitions.
|
Nonstationary phenomena, such as satiation effects in recommendations, have
mostly been modeled using bandits with finitely many arms. However, the richer
action space provided by linear bandits is often preferred in practice. In this
work, we introduce a novel nonstationary linear bandit model, where current
rewards are influenced by the learner's past actions in a fixed-size window.
Our model, which recovers stationary linear bandits as a special case,
leverages two parameters: the window size $m \ge 0$, and an exponent $\gamma$
that captures the rotting ($\gamma < 0$) or rising ($\gamma > 0$) nature of the
phenomenon. When both $m$ and $\gamma$ are known, we propose and analyze a
variant of OFUL which minimizes regret against cycling policies. By choosing
the cycle length so as to trade-off approximation and estimation errors, we
then prove a bound of order
$\sqrt{d}\,(m+1)^{\frac{1}{2}+\max\{\gamma,0\}}\,T^{3/4}$ (ignoring log
factors) on the regret against the optimal sequence of actions, where $T$ is
the horizon and $d$ is the dimension of the linear action space. Through a
bandit model selection approach, our results are extended to the case where $m$
and $\gamma$ are unknown. Finally, we complement our theoretical results with
experiments against natural baselines.
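To make the window/exponent mechanism concrete, here is a small Python simulation of one plausible instantiation of such a reward model; the particular way the last $m$ actions enter and are raised to the power $\gamma$ is a hypothetical choice for illustration only, not claimed to be the paper's exact model.

import numpy as np

# Hypothetical illustration of a windowed rotting/rising linear bandit:
# the reward for action a_t is scaled by a factor depending on how much the
# last m actions already "used up" (gamma < 0) or "built up" (gamma > 0)
# the direction of a_t. This parameterization is an assumption for illustration.
rng = np.random.default_rng(0)
d, m, gamma, T = 3, 5, -0.5, 50
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)

history, rewards = [], []
for t in range(T):
    a = rng.normal(size=d)
    a /= np.linalg.norm(a)                            # a random unit-norm action
    recent = history[-m:] if m > 0 else []
    overlap = sum(abs(np.dot(a, b)) for b in recent)  # exposure within the window
    scale = (1.0 + overlap) ** gamma                  # rotting if gamma<0, rising if gamma>0
    rewards.append(scale * float(a @ theta) + 0.01 * rng.normal())
    history.append(a)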
|
The main object of this course given in Hammamet (December 2014) is the
so-called Galton-Watson (GW) process. We introduce in the first chapter of this
course the framework of discrete random trees. We then use this framework to
construct GW trees that describe the genealogy of a GW process. It is very easy
to recover the GW process from the GW tree as it is just the number of
individuals at each generation. We then give alternative proofs of classical
results on GW processes using the tree formalism. We focus in particular on the
extinction probability (which was the first question of F. Galton) and on the
description of the processes conditioned on extinction or non-extinction. In a
second chapter, we focus on local limits of conditioned GW trees. In the
critical and sub-critical cases, the population becomes a.s. extinct and the
associated genealogical tree is finite. However, it has a small but positive
probability of being large (this notion must be made precise). The question
that arises is to describe the law of the tree conditioned on being large, and
to say what exceptional event has occurred so that the tree is not typical. A
first answer to this question is due to H. Kesten, who conditioned a GW tree to
reach height n and looked at the limit in distribution when n tends to infinity.
There are, however, other ways of conditioning a tree to be large: conditioning
on having many vertices, or many leaves... We present here very recent general
results concerning this kind of problem, due to the authors of this course and
completed by results of X. He.
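As a concrete companion to the extinction question mentioned above, recall the classical fact that the extinction probability of a GW process is the smallest fixed point of the offspring probability generating function; the minimal Python sketch below (the offspring distribution is an arbitrary example) computes it by fixed-point iteration.

# Extinction probability q of a Galton-Watson process: smallest fixed point of
# the offspring probability generating function f, obtained by iterating q <- f(q)
# starting from 0.
def extinction_probability(offspring_pmf, tol=1e-12, max_iter=10_000):
    """offspring_pmf[k] = P(an individual has k children)."""
    f = lambda s: sum(p * s**k for k, p in enumerate(offspring_pmf))
    q = 0.0
    for _ in range(max_iter):
        q_next = f(q)
        if abs(q_next - q) < tol:
            break
        q = q_next
    return q

# Example: P(0 children)=0.25, P(1)=0.25, P(2)=0.5 gives mean 1.25 > 1,
# so the process is supercritical and q < 1.
print(extinction_probability([0.25, 0.25, 0.5]))  # -> 0.5 (up to numerical tolerance)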
|