The coil-globule transition has been studied for A-B copolymer chains both by
means of lattice Monte Carlo simulations using the bond fluctuation algorithm and
by a numerical self-consistent field method. Copolymer chains of fixed length
with A and B monomeric units with regular, random and specially designed
(protein-like) primary sequences have been investigated. The dependence of the
transition temperature on the AB sequence has been analyzed. A protein-like
copolymer is more stable than a copolymer with a statistically random sequence.
The transition is sharper for random copolymers. It is found that there
exists a temperature below which the chain appears to be in the lowest energy
state (ground state). Both for random and protein-like sequences and for
regular copolymers with a relatively long repeating block, a molten globule
regime is found between the ground state temperature and the transition
temperature. For regular block copolymers the transition temperature increases
with block size. Qualitatively, the results from both methods are in agreement.
Differences between the methods result from approximations in the SCF theory
and equilibration problems in MC simulations. The two methods are thus
complementary.
|
We consider a three Higgs doublet model with an $S_3$ symmetry in which,
besides the SM-like doublet, there are two fermiophobic doublets. Due to the
new charged scalars there is an enhancement in the two-photon decay, while the
other channels have the same decay widths as the SM neutral Higgs. The
fermiophobic scalars are mass degenerate unless soft terms breaking the $S_3$
symmetry are added.
|
This paper is concerned with the spatial propagation of nonlocal dispersal
equations with bistable or multistable nonlinearity in exterior domains. We
obtain the existence and uniqueness of an entire solution which behaves like a
planar wave front as time goes to negative infinity. In particular, some
disturbances appear on the profile of the entire solution as it approaches the
interior domain, but these disturbances disappear as the entire solution moves
far away from the interior domain. Furthermore, we prove that the solution
gradually recovers its planar wave profile and continues to propagate in the
same direction as time goes to positive infinity for compact convex interior
domains. Our work generalizes the local (Laplace) diffusion results obtained
by Berestycki et al. (2009) to the nonlocal dispersal setting, using newly
established Liouville results and the Lipschitz continuity of entire solutions
due to Li et al. (2010).
|
We reformulate the Cont-Bouchaud model of financial markets in terms of
classical "super-spins" where the spin value is a measure of the number of
individual traders represented by a portfolio manager of an investment agency.
We then extend this simplified model by switching on interactions among the
super-spins to model the tendency of agencies to be influenced by the opinions
of other managers. We also introduce a fictitious temperature (to model other
random influences), and time-dependent local fields to model slowly changing
optimistic or pessimistic bias of traders. We point out close similarities
between the price variations in our model with $N$ super-spins and total
displacements in an $N$-step Levy flight. We demonstrate the phenomena of
natural and artificially created bubbles and subsequent crashes as well as the
occurrence of "fat tails" in the distributions of stock price variations.
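To make the dynamics concrete, here is a minimal Python sketch of one possible reading of the super-spin model: each manager's spin is weighted by the number of traders it represents, managers feel a mean-field coupling to the average opinion plus a slowly varying bias, and a fictitious temperature randomizes decisions. All parameter values and the heat-bath update rule are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 100                          # number of super-spins (one per portfolio manager)
w = rng.integers(1, 50, size=N)  # spin magnitude = traders represented (hypothetical sizes)
a = 0.1                          # probability that a manager trades in a given step
J = 0.2                          # coupling: tendency to follow the average opinion
T = 1.0                          # fictitious temperature for other random influences

steps = 5000
s = np.zeros(N)                  # current opinions in {-1, 0, +1}
returns = np.empty(steps)

for t in range(steps):
    bias = 0.05 * np.sin(2 * np.pi * t / 1000)  # slowly changing optimistic/pessimistic bias
    field = J * s.mean() + bias                 # influence of other managers plus bias
    active = rng.random(N) < a                  # only a fraction of managers act each step
    p_buy = 1.0 / (1.0 + np.exp(-2.0 * field / T))   # heat-bath probability of buying
    s = np.where(active, np.where(rng.random(N) < p_buy, 1.0, -1.0), 0.0)
    returns[t] = np.dot(w, s) / w.sum()         # price variation ~ weighted excess demand

# heavy ("fat") tails show up as kurtosis well above the Gaussian value of 3
kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2
print(f"kurtosis of returns: {kurt:.1f}")
```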
|
The large amount of chemical and kinematic information available in large
spectroscopic surveys has inspired the search for chemically peculiar stars in
the field. Though these metal-poor field stars ([Fe/H]$<-1$) are commonly
enriched in nitrogen, their detailed spatial, kinematic, and chemical
distributions suggest that various groups may exist, and thus their origin is
still a mystery. To study these stars statistically, we increase the sample
size by identifying new CN-strong stars with LAMOST DR3 for the first time. We
use CN-CH bands around 4000 \AA~to find CN-strong stars, and further separate
them into CH-normal stars (44) and CH-strong (or CH) stars (35). The chemical
abundances from our data-driven software and APOGEE DR 14 suggest that most
CH-normal stars are N-rich, which cannot be explained by internal mixing
processes alone. The kinematics of our CH-normal stars indicate that a substantial fraction
of these stars are retrograding, pointing to an extragalactic origin. The
chemistry and kinematics of CH-normal stars imply that they may be GC-dissolved
stars, or accreted halo stars, or both.
|
We propose to study equivalence relations between phenomena in high-energy
physics and the existence of standard cryptographic primitives, and show the
first example where such an equivalence holds. A small number of prior works
showed that high-energy phenomena can be explained by cryptographic hardness.
Examples include using the existence of one-way functions to explain the
hardness of decoding black-hole Hawking radiation (Harlow and Hayden 2013,
Aaronson 2016), and using pseudorandom quantum states to explain the hardness
of computing the AdS/CFT dictionary (Bouland, Fefferman and Vazirani, 2020).
In this work we show, for the former example of black-hole radiation
decoding, that it also implies the existence of secure quantum cryptography. In
fact, we show an existential equivalence between the hardness of black-hole
radiation decoding and a variety of cryptographic primitives, including
bit-commitment schemes and oblivious transfer protocols (using quantum
communication). This can be viewed (with proper disclaimers, as we discuss) as
providing a physical justification for the existence of secure cryptography. We
conjecture that such connections may be found in other high-energy physics
phenomena.
|
We propose a novel geometric approach for learning bilingual mappings given
monolingual embeddings and a bilingual dictionary. Our approach decouples
learning the transformation from the source language to the target language
into (a) learning rotations for language-specific embeddings to align them to a
common space, and (b) learning a similarity metric in the common space to model
similarities between the embeddings. We model the bilingual mapping problem as
an optimization problem on smooth Riemannian manifolds. We show that our
approach outperforms previous approaches on the bilingual lexicon induction and
cross-lingual word similarity tasks. We also generalize our framework to
represent multiple languages in a common latent space. In particular, the
latent space representations for several languages are learned jointly, given
bilingual dictionaries for multiple language pairs. We illustrate the
effectiveness of joint learning for multiple languages in zero-shot word
translation setting. Our implementation is available at
https://github.com/anoopkunchukuttan/geomm .
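As a rough illustration of the decoupling, the sketch below uses numpy with a closed-form orthogonal Procrustes solution for the rotation step and a placeholder positive-definite matrix for the similarity metric. The actual method optimizes both jointly on Riemannian manifolds, so this is only a simplified stand-in on toy data.

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Closed-form orthogonal W minimizing ||X W - Y||_F for dictionary pairs (X, Y)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
d, n = 50, 1000
X = rng.normal(size=(n, d))      # source-language embeddings for dictionary entries (toy)
Y = rng.normal(size=(n, d))      # target-language embeddings for the same entries (toy)

# (a) rotate the source embeddings into the common (here: target) space
W = procrustes_rotation(X, Y)

# (b) score pairs with a similarity metric in the common space; B would be
#     learned, here it is just a random symmetric positive-definite matrix
A = rng.normal(size=(d, d))
B = A @ A.T + d * np.eye(d)

def similarity(x, y):
    return (x @ W) @ B @ y       # Mahalanobis-like inner product in the common space

print(similarity(X[0], Y[0]))
```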
|
Given a graph G and an integer k, two players take turns coloring the
vertices of G one by one using k colors so that neighboring vertices get
different colors. The first player wins iff at the end of the game all the
vertices of G are colored. The game chromatic number \chi_g(G) is the minimum k
for which the first player has a winning strategy. In this paper we analyze the
asymptotic behavior of this parameter for a random graph G_{n,p}. We show that
with high probability the game chromatic number of G_{n,p} is at least twice
its chromatic number but, up to a multiplicative constant, has the same order
of magnitude. We also study the game chromatic number of random bipartite
graphs.
|
The holographic principle relates (classical) gravitational waves in the bulk
to quantum fluctuations and the Weyl anomaly of a conformal field theory on the
boundary (the brane). One can thus argue that linear perturbations in the bulk
of static black holes located on the brane should be related to the Hawking
flux, and that (brane-world) black holes are therefore unstable. We try to gain some
information on such instability from established knowledge of the Hawking
radiation on the brane. In this context, the well-known trace anomaly is used
as a measure of both the validity of the holographic picture and the
instability for several proposed static brane metrics. In light of the above
analysis, we finally consider a time-dependent metric as the (approximate)
representation of the late stage of evaporating black holes which is
characterized by decreasing Hawking temperature, in qualitative agreement with
what is required by energy conservation.
|
We present V, V-I color-magnitude diagrams (CMDs) for three old star clusters
in the Large Magellanic Cloud (LMC): NGC 1466, NGC 2257 and Hodge 11. Our data
extend about 3 magnitudes below the main-sequence turnoff, allowing us to
determine accurate relative ages and the blue straggler frequencies. Based on a
differential comparison of the CMDs, any age difference between the three LMC
clusters is less than 1.5 Gyr. Comparing their CMDs to those of M 92 and M 3,
the LMC clusters, unless their published metallicities are significantly in
error, are the same age as the old Galactic globulars. The similar ages to
Galactic globulars are shown to be consistent with hierarchical clustering
models of galaxy formation. The blue straggler frequencies are also similar to
those of Galactic globular clusters. We derive a true distance modulus to the
LMC of (m-M)=18.46 +/- 0.09 (assuming (m-M)=14.61 for M 92) using these three
LMC clusters.
|
We show that confinement of bulk electrons can be observed at low-dimensional
surface structures and can serve as a long-range sensor for the magnetism and
electronic properties of single impurities or as a quantum information transfer
channel with large coherence lengths. Our ab initio calculations reveal
oscillations of electron density in magnetic chains on metallic surfaces and
help to unambiguously identify the electrons involved as bulk electrons. We
furthermore discuss the possibility of utilizing bulk state confinement to
transfer quantum information, encoded in an atom's species or spin, across
distances of several nanometers with high efficiency.
|
Systems with artificial intelligence components, so-called AI-based systems,
have gained considerable attention recently. However, many organizations have
issues with achieving production readiness with such systems. As a means to
improve certain software quality attributes and to address frequently occurring
problems, design patterns represent proven solution blueprints. While new
patterns for AI-based systems are emerging, existing patterns have also been
adapted to this new context.
The goal of this study is to provide an overview of design patterns for
AI-based systems, both new and adapted ones. We want to collect and categorize
patterns, and make them accessible for researchers and practitioners. To this
end, we first performed a multivocal literature review (MLR) to collect design
patterns used with AI-based systems. We then integrated the created pattern
collection into a web-based pattern repository to make the patterns browsable
and easy to find.
As a result, we selected 51 resources (35 white and 16 gray ones), from which
we extracted 70 unique patterns used for AI-based systems. Among these are 34
new patterns and 36 traditional ones that have been adapted to this context.
Popular pattern categories include "architecture" (25 patterns), "deployment"
(16), "implementation" (9), or "security & safety" (9). While some patterns
with four or more mentions already seem established, the majority of patterns
have only been mentioned once or twice (51 patterns). Our results in this
emerging field can be used by researchers as a foundation for follow-up studies
and by practitioners to discover relevant patterns for informing the design of
AI-based systems.
|
We apply the theory of Weyl structures for parabolic geometries developed by
A. Cap and J. Slovak to compute, for a quaternionic contact (qc) structure,
the Weyl connection associated to a choice of scale, i.e. to a choice of
Carnot-Carath\'eodory metric in the conformal class. The result of this
computation has applications to the study of the conformal Fefferman space of a
qc manifold. In addition to this application, we are also able to easily
compute a tensorial formula for the qc analog of the Weyl curvature tensor in
conformal geometry and the Chern-Moser tensor in CR geometry. This tensor
agrees with the formula derived via independent methods by S. Ivanov and D.
Vassilev. However, as a result of our derivation of this tensor, its
fundamental properties -- conformal covariance, and that its vanishing is a
sharp obstruction to local flatness of the qc structure -- follow as easy
corollaries from the general parabolic theory.
|
In Part 1 of this work, we have derived a dynamical system describing the
approach to a finite-time singularity of the Navier-Stokes equations. We now
supplement this system with an equation describing the process of vortex
reconnection at the apex of a pyramid, neglecting core deformation during the
reconnection process. On this basis, we compute the maximum vorticity
$\omega_{max}$ as a function of vortex Reynolds number $R_\Gamma$ in the range
$2000\le R_\Gamma \le 3400$, and deduce a compatible behaviour
$\omega_{max}\sim \omega_{0}\exp{\left[1 + 220
\left(\log\left[R_{\Gamma}/2000\right]\right)^{2}\right]}$ as
$R_\Gamma\rightarrow \infty$. This may be described as a physical (although not
strictly mathematical) singularity, for all $R_\Gamma \gtrsim 4000$.
|
We classify ``arithmetic convection equations'' on modular curves, and
describe their space of solutions. Certain of these solutions involve the
Fourier expansions of the Eisenstein modular forms of weight 4 and 6, while
others involve the Serre-Tate expansions of the same modular forms; in this
sense, our arithmetic convection equations can be seen as ``unifying'' the two
types of expansions. The theory can be generalized to one of
``arithmetic heat equations'' on modular curves, but we prove that modular
curves do not carry ``arithmetic wave equations.'' Finally, we prove an
instability result for families of arithmetic heat equations converging to an
arithmetic convection equation.
|
We estimate the number of templates, computational power, and storage
required for a one-step matched filtering search for gravitational waves from
inspiraling compact binaries. These estimates should serve as benchmarks for
the evaluation of more sophisticated strategies such as hierarchical searches.
We use waveform templates based on the second post-Newtonian approximation for
binaries composed of nonspinning compact bodies in circular orbits. We present
estimates for six noise curves: LIGO (three configurations), VIRGO, GEO600, and
TAMA. To search for binaries with components more massive than 0.2M_o while
losing no more than 10% of events due to coarseness of template spacing,
initial LIGO will require about 1*10^11 flops (floating point operations per
second) for data analysis to keep up with data acquisition. This is several
times higher than estimated in previous work by Owen, in part because of the
improved family of templates and in part because we use more realistic (higher)
sampling rates. Enhanced LIGO, GEO600, and TAMA will require computational
power similar to initial LIGO. Advanced LIGO will require 8*10^11 flops, and
VIRGO will require 5*10^12 flops. If the templates are stored rather than
generated as needed, storage requirements range from 1.5*10^11 real numbers for
TAMA to 6*10^14 for VIRGO. We also sketch and discuss an algorithm for placing
the templates in the parameter space.
|
In this paper, we study existence and uniqueness of strong as well as weak
solutions for general time fractional Poisson equations. We show that there is
an integral representation of the solutions of time fractional Poisson
equations with zero initial values in terms of the semigroup for the infinitesimal
spatial generator ${\cal L}$ and the corresponding subordinator associated with
the time fractional derivative. This integral representation has an integral
kernel $q(t, x, y)$, which we call the fundamental solution for the time
fractional Poisson equation, if the semigroup for ${\cal L}$ has an integral
kernel. We further show that $q(t, x, y)$ can be expressed as a time fractional
derivative of the fundamental solution for the homogeneous time fractional
equation under the assumption that the associated subordinator admits a
conjugate subordinator. Moreover, when the Laplace exponent of the associated
subordinator satisfies the weak scaling property and its distribution is
self-decomposable, we establish two-sided estimates for the fundamental
solution $q(t,x, y)$ through explicit estimates of transition density functions
of subordinators.
|
We compute the effect of an orbiting gas disc in promoting the coalescence of
a central supermassive black hole binary. Unlike earlier studies, we consider a
finite mass of gas with explicit time dependence: we do not assume that the gas
necessarily adopts a steady state or a spatially constant accretion rate, i.e.
that the merging black hole was somehow inserted into a pre--existing accretion
disc. We consider the tidal torque of the binary on the disc, and the binary's
gravitational radiation. We study the effects of star formation in the gas disc
in a simple energy feedback framework. The disc spectrum differs in detail from
that found before. In particular, tidal torques from the secondary black hole
heat the edges of the gap, creating bright rims around the secondary. These
rims do not in practice have uniform brightness either in azimuth or time, but
can on average account for as much as 50 per cent of the integrated light from
the disc. This may lead to detectable high--photon--energy variability on the
relatively long orbital timescale of the secondary black hole, and thus offer a
prospective signature of a coalescing black hole binary. We also find that the
disc can drive the binary to merger on a reasonable timescale only if its mass
is at least comparable with that of the secondary black hole, and if the
initial binary separation is relatively small, i.e. $a_0 \lesssim 0.05$ pc.
Star formation complicates the merger further by removing mass from the disc.
In the feedback model we consider, this sets an effective limit to the disc
mass. As a result, binary merging is unlikely unless the black hole mass ratio
is $\lesssim 0.001$. Gas discs thus appear not to be an effective solution to the
`last parsec' problem for a significant class of mergers.
|
3D shape reconstruction is a primary component of augmented/virtual reality.
Despite being highly advanced, existing solutions based on RGB, RGB-D and Lidar
sensors are power and data intensive, which introduces challenges for
deployment in edge devices. We approach 3D reconstruction with an event camera,
a sensor with significantly lower power, latency and data expense while
enabling high dynamic range. While previous event-based 3D reconstruction
methods are primarily based on stereo vision, we cast the problem as multi-view
shape from silhouette using a monocular event camera. The output from a moving
event camera is a sparse point set of space-time gradients, largely sketching
scene/object edges and contours. We first introduce an event-to-silhouette
(E2S) neural network module to transform a stack of event frames to the
corresponding silhouettes, with additional neural branches for camera pose
regression. Second, we introduce E3D, which employs a 3D differentiable
renderer (PyTorch3D) to enforce cross-view 3D mesh consistency and fine-tune
the E2S and pose network. Lastly, we introduce a 3D-to-events simulation
pipeline and apply it to publicly available object datasets and generate
synthetic event/silhouette training pairs for supervised learning.
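The cross-view consistency step can be pictured with the following PyTorch3D sketch: a soft silhouette is rendered from the current mesh estimate and compared against the E2S output. The camera pose, blur settings, and the dummy target silhouette are illustrative placeholders following the public PyTorch3D tutorials, not the paper's configuration.

```python
import math
import torch
from pytorch3d.utils import ico_sphere
from pytorch3d.renderer import (
    FoVPerspectiveCameras, RasterizationSettings, MeshRasterizer,
    MeshRenderer, SoftSilhouetteShader, BlendParams, look_at_view_transform,
)

device = "cuda" if torch.cuda.is_available() else "cpu"
base_mesh = ico_sphere(level=2, device=device)      # stand-in for the current mesh estimate
deform = torch.zeros(base_mesh.verts_packed().shape, device=device, requires_grad=True)
mesh = base_mesh.offset_verts(deform)               # mesh parameters to be fine-tuned

# differentiable soft-silhouette renderer
blend = BlendParams(sigma=1e-4, gamma=1e-4)
raster = RasterizationSettings(
    image_size=128,
    blur_radius=math.log(1.0 / 1e-4 - 1.0) * blend.sigma,
    faces_per_pixel=50,
)
R, T = look_at_view_transform(dist=2.7, elev=10.0, azim=60.0, device=device)
cameras = FoVPerspectiveCameras(R=R, T=T, device=device)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster),
    shader=SoftSilhouetteShader(blend_params=blend),
)

target_sil = torch.ones(1, 128, 128, device=device)  # would come from the E2S network
pred_sil = renderer(mesh)[..., 3]                    # alpha channel = rendered silhouette
loss = ((pred_sil - target_sil) ** 2).mean()         # per-view silhouette consistency
loss.backward()                                      # gradients flow back to the mesh
```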
|
Significant advances in biotechnology have allowed for simultaneous
measurement of molecular data points across multiple genomic and transcriptomic
levels from a single tumor/cancer sample. This has motivated systematic
approaches to integrate multi-dimensional structured datasets since cancer
development and progression is driven by numerous co-ordinated molecular
alterations and the interactions between them. We propose a novel two-step
Bayesian approach that combines a variable selection framework with integrative
structure learning between multiple sources of data. The structure learning in
the first step is accomplished through novel joint graphical models for
heterogeneous (mixed scale) data allowing for flexible incorporation of prior
knowledge. This structure learning subsequently informs the variable selection
in the second step to identify groups of molecular features within and across
platforms associated with outcomes of cancer progression. The variable
selection strategy adjusts for collinearity and multiplicity, and also has
theoretical justifications. We evaluate our methods through simulations and
apply them to motivating genomic (DNA copy number and methylation) and
transcriptomic (mRNA expression) data for assessing important markers
associated with Glioblastoma progression.
|
We consider real-time holography on Anti-de Sitter (AdS) and more
generally on Lifshitz spacetimes for spinorial fields. The equation of motion
for fermions on general Lifshitz space is derived. Analytically solvable cases
are identified. On AdS space we derived time-ordered, time-reversed, advanced
and retarded propagators with the correct $i\epsilon$-insertions. Using the
Keldysh-Schwinger contour we also calculated a propagator on thermal AdS. For
massless fermions on the Lifshitz spacetime with z=2 we calculated the
Euclidean 2-point function and explored the structure of divergences of the
on-shell action for general values of z and mass m. The covariant counterterm
action is derived.
|
The effect of the energy deposition inside the human body by radioactive
substances is discussed. For the first time, we stress the importance of the
recoiling nucleus in such reactions, particularly concerning the damage caused
to the DNA structure.
|
Signal averaging is the process that consists in computing a mean shape from
a set of noisy signals. In the presence of geometric variability in time in the
data, the usual Euclidean mean of the raw data yields a mean pattern that does
not reflect the typical shape of the observed signals. In this setting, it is
necessary to use alignment techniques for a precise synchronization of the
signals, and then to average the aligned data to obtain a consistent mean
shape. In this paper, we study the numerical performances of Fr\'echet means of
curves which are extensions of the usual Euclidean mean to spaces endowed with
non-Euclidean metrics. This yields a new algorithm for signal averaging without
a reference template. We apply this approach to the estimation of a mean heart
cycle from ECG records.
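For intuition, the following Python sketch alternates between aligning signals to the current template and re-averaging, using a simple integer time-shift alignment. The paper's Fréchet means operate in spaces with genuinely non-Euclidean metrics (allowing for more general time warping), so this shift-only version is just a simplified stand-in on toy data.

```python
import numpy as np

def align_shift(x, ref):
    """Align x to ref by the integer time shift maximizing cross-correlation."""
    corr = np.correlate(x - x.mean(), ref - ref.mean(), mode="full")
    shift = corr.argmax() - (len(ref) - 1)
    return np.roll(x, -shift)

def frechet_style_mean(signals, n_iter=5):
    """Alternate between aligning all signals to the template and re-averaging.
    A simplified stand-in for a Fréchet mean under a shift-only metric."""
    template = signals.mean(axis=0)          # start from the naive Euclidean mean
    for _ in range(n_iter):
        aligned = np.stack([align_shift(x, template) for x in signals])
        template = aligned.mean(axis=0)
    return template

# toy example: noisy, randomly shifted copies of one pulse shape
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 400)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
signals = np.stack([
    np.roll(pulse, rng.integers(-40, 40)) + 0.1 * rng.normal(size=t.size)
    for _ in range(30)
])
mean_shape = frechet_style_mean(signals)
```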
|
Recently we have found that the integral of the squared Coulomb wave function
$|\psi_r(r)|^2$, describing a system composed of a charged pion and a central
charged fragment of $Z_{eff}$ protons, times the pion source function
$\rho(r)$ (of size $\beta$), $\int dr\, |\psi_r(r)|^2 \rho(r)$, shows a
quasiscaling behavior: it is approximately invariant under the transformation
$(\beta,Z_{eff}) \to (\lambda\beta,\lambda Z_{eff})$; $\lambda >0$. We called
such behavior $\beta$-$Z_{eff}$ quasiscaling. We examine this quasiscaling
behavior in detail. In particular we provide a semi-analytical examination of
this behavior and confirm it for the exponential pionic source functions in
addition to the Gaussian ones and for the production of K mesons as well. When
combined with HBT results, a result for the yield ratio allows us to
estimate the size of the central charged fragment (CCF) to be $125\le
Z_{eff}\le 150$ for Pb+Pb collisions at energy 158 GeV/nucleon. From our
estimation, the baryon number density $0.024 \le n_{B}\le0.036$ [1/fm^3] is
obtained.
|
Real-world data is complex and often consists of objects that can be
decomposed into multiple entities (e.g. images into pixels, graphs into
interconnected nodes). Randomized smoothing is a powerful framework for making
models provably robust against small changes to their inputs - by guaranteeing
robustness of the majority vote when randomly adding noise before
classification. Yet, certifying robustness on such complex data via randomized
smoothing is challenging when adversaries do not arbitrarily perturb entire
objects (e.g. images) but only a subset of their entities (e.g. pixels). As a
solution, we introduce hierarchical randomized smoothing: We partially smooth
objects by adding random noise only on a randomly selected subset of their
entities. By adding noise in a more targeted manner than existing methods we
obtain stronger robustness guarantees while maintaining high accuracy. We
initialize hierarchical smoothing using different noising distributions,
yielding novel robustness certificates for discrete and continuous domains. We
experimentally demonstrate the importance of hierarchical smoothing in image
and node classification, where it yields superior robustness-accuracy
trade-offs. Overall, hierarchical smoothing is an important contribution
towards models that are both certifiably robust to perturbations and accurate.
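A bare-bones sketch of the smoothing step itself (without the certification machinery): noise is added only on a randomly chosen subset of entities before each call to the base classifier, and the smoothed prediction is the majority vote. The subset fraction, noise level, and toy classifier below are illustrative assumptions.

```python
import numpy as np

def hierarchical_smooth_predict(x, base_classifier, n_samples=1000,
                                subset_frac=0.3, sigma=0.25, rng=None):
    """Majority vote of base_classifier over partially-noised copies of x.

    x: (n_entities, d) array, an object decomposed into entities (pixels, nodes, ...).
    Gaussian noise is added only on a random subset of entities each round,
    which is the core idea of hierarchical randomized smoothing.
    """
    rng = rng or np.random.default_rng()
    votes = {}
    for _ in range(n_samples):
        noisy = x.copy()
        idx = rng.random(x.shape[0]) < subset_frac       # random entity subset
        noisy[idx] += sigma * rng.normal(size=noisy[idx].shape)
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# toy usage: 64 entities with 3 features and a threshold "classifier"
x = np.random.default_rng(0).normal(size=(64, 3))
clf = lambda z: int(z.mean() > 0)
print(hierarchical_smooth_predict(x, clf, rng=np.random.default_rng(1)))
```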
|
We study the classical planar two-center problem of a particle $m$ subjected
to harmonic-like interactions with two fixed centers. For convenient values of
the dimensionless parameter of this problem, we use averaging theory to show
analytically the existence of periodic orbits bifurcating from two of
the three equilibrium points of the Hamiltonian system modeling this problem.
Moreover, it is shown that the system is generically non-integrable in the
sense of Liouville-Arnold. The analytical results are complemented by
numerical computations of the Poincar\'e sections, as well as by some explicit
periodic orbits.
|
This chapter outlines some of the highlights of efforts undertaken by our
group to describe the role of contextuality in the conceptualization of
conscious experience using generalized formalisms from quantum mechanics.
|
Some have criticised Generative AI Systems for replicating the familiar
pathologies of already widely-deployed AI systems. Other critics highlight how
they foreshadow vastly more powerful future systems, which might threaten
humanity's survival. The first group says there is nothing new here; the other
looks through the present to a perhaps distant horizon. In this paper, I
instead pay attention to what makes these particular systems distinctive: both
their remarkable scientific achievement, and the most likely and consequential
ways in which they will change society over the next five to ten years. In
particular, I explore the potential societal impacts and normative questions
raised by the looming prospect of 'Generative Agents', in which multimodal
large language models (LLMs) form the executive centre of complex, tool-using
AI systems that can take unsupervised sequences of actions towards some goal.
|
SPT-CLJ2040-4451 -- spectroscopically confirmed at z = 1.478 -- is the
highest redshift galaxy cluster yet discovered via the Sunyaev-Zel'dovich
effect. SPT-CLJ2040-4451 was a candidate galaxy cluster identified in the first
720 deg^2 of the South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey, and
confirmed in follow-up imaging and spectroscopy. From multi-object spectroscopy
with Magellan-I/Baade+IMACS we measure spectroscopic redshifts for 15 cluster
member galaxies, all of which have strong [O II] 3727 emission.
SPT-CLJ2040-4451 has an SZ-measured mass of M_500,SZ = 3.2 +/- 0.8 X 10^14
M_Sun/h_70, corresponding to M_200,SZ = 5.8 +/- 1.4 X 10^14 M_Sun/h_70. The
velocity dispersion measured entirely from blue star forming members is sigma_v
= 1500 +/- 520 km/s. The prevalence of star forming cluster members (galaxies
with > 1.5 M_Sun/yr) implies that this massive, high-redshift cluster is
experiencing a phase of active star formation, and supports recent results
showing a marked increase in star formation occurring in galaxy clusters at z
>1.4. We also compute the probability of finding a cluster as rare as this in
the SPT-SZ survey to be >99%, indicating that its discovery is not in tension
with the concordance Lambda-CDM cosmological model.
|
Differentially private multiple testing procedures can protect the
information of individuals used in hypothesis tests while guaranteeing a small
fraction of false discoveries. In this paper, we propose a differentially
private adaptive FDR control method that can control the classic FDR metric
exactly at a user-specified level $\alpha$ with a privacy guarantee, which is a
non-trivial improvement compared to the differentially private
Benjamini-Hochberg method proposed in Dwork et al. (2021). Our analysis is
based on two key insights: 1) a novel p-value transformation that preserves
both privacy and the mirror conservative property, and 2) a mirror peeling
algorithm that allows the construction of the filtration and application of the
optimal stopping technique. Numerical studies demonstrate that the proposed
DP-AdaPT outperforms existing differentially private FDR control methods.
Compared to the non-private AdaPT, it incurs a small accuracy loss but
significantly reduces the computation cost.
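For readers less familiar with FDR control, the non-private baseline that both methods build on is the Benjamini-Hochberg step-up rule, sketched below in plain Python (no privacy, no adaptivity); the paper's contribution is doing this kind of control under differential privacy.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Classic (non-private) BH step-up procedure: reject the k smallest
    p-values, where k is the largest index with p_(k) <= alpha * k / m."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# toy usage: 900 null p-values plus 100 strong signals
rng = np.random.default_rng(0)
p = np.concatenate([rng.random(900), rng.random(100) * 1e-3])
print(benjamini_hochberg(p, alpha=0.1).sum(), "rejections")
```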
|
General relativity has previously been extended to incorporate degenerate
metrics using Ashtekar's hamiltonian formulation of the theory. In this letter,
we show that a natural alternative choice for the form of the hamiltonian
constraints leads to a theory which agrees with GR for non-degenerate metrics,
but differs in the degenerate sector from Ashtekar's original degenerate
extension. The Poisson bracket algebra of the alternative constraints closes in
the non-degenerate sector, with structure functions that involve the {\it
inverse} of the spatial triad. Thus, the algebra does {\it not} close in the
degenerate sector. We find that it must be supplemented by an infinite number
of secondary constraints, which are shown to be first class (although their
explicit form is not worked out in detail). All of the constraints taken
together are implied by, but do not imply, Ashtekar's original form of
constraints. Thus, the alternative constraints give rise to a different
degenerate extension of GR. In the corresponding quantum theory, the single
loop and intersecting loop holonomy states found in the connection
representation satisfy {\it all} of the constraints. These states are therefore
exact (formal) solutions to this alternative degenerate extension of quantum
gravity, even though they are {\it not} solutions to the usual vector
constraint.
|
A strong electron-phonon interaction in a metal increases the electron
density of states in the vicinity of the Fermi energy dramatically. This
phenomenon is called electron-phonon mass enhancement. In this paper we
investigate whether the mass enhancement can be manipulated in multi-layers of
two metals with strong and weak electron-phonon interaction. A
rich behavior is observed for different thickness ranges of the layers. For
thin layers one observes a rather homogeneous averaged enhancement. However,
for an intermediate thickness range the mass enhancement is highly anisotropic,
i.e. direction dependent, as well as position dependent. For large layer
thicknesses one obtains the bulk behavior for each metal.
|
We calculate energy release associated with a first order phase transition at
the center of a rotating neutron star. The results are based on precise
numerical 2-D calculations, in which both polytropic equations of state (EOS)
and realistic EOS of the normal phase are used. The presented results are
obtained for a broad range of metastability of the initial configuration and
size of the new superdense phase core in the final configuration. For small
radii of the superdense phase core analytical expressions for the energy
release are obtained. For a fixed "overpressure" dP (the relative excess of
central pressure of collapsing metastable star over the pressure of equilibrium
first-order phase transition) the energy release remarkably does not depend on
the stellar angular momentum and coincides with that for nonrotating stars with
the same dP. The energy release is proportional to dP^2.5 for small dPs, when
sufficiently precise brute-force 2-D numerical calculations are out of the
question. For higher dPs, results of 1-D calculations of the energy release for
non-rotating stars are shown to reproduce, with very high precision, the exact
2-D results for rotating stars.
|
The NA48/2 and NA62-$R_K$ experiments at CERN collected large samples of
charged kaon decays in 2003--2004 and 2007, respectively. These samples,
collected with different trigger conditions, allow to search for both short and
long-living heavy neutrinos produced in $K^{\pm}\to\mu^{\pm}N_4$ decays. The
results of these complementary searches are presented in this letter. In the
absence of observed signal, the limits obtained on
$\mathcal{B}(K^{\pm}\to\pi^{\mp}\mu^{\pm}\mu^{\pm})$, $\mathcal{B}(K^{\pm}\to
\mu^{\pm} N_4)\mathcal{B}(N_4\to \pi\mu)$, $\mathcal{B}(K^{+}\to\mu^{+}N_4)$
and on the mixing matrix element $|U_{\mu4}|^2$ are reported.
|
In this thesis a comprehensive verification framework is proposed to contend
with some important issues in composability verification and a verification
process is suggested to verify composability of different kinds of systems
models, such as reactive, real-time and probabilistic systems. With an
assumption that all these systems are concurrent in nature in which different
composed components interact with each other simultaneously, the requirements
for the extensive techniques for the structural and behavioral analysis becomes
increasingly challenging. The proposed verification framework provides methods,
techniques and tool support for verifying composability at its different
levels. These levels are defined as foundations of consistent model
composability. Each level is discussed in detail and an approach is presented
to verify composability at that level. In particular we focus on the
Dynamic-Semantic Composability level due to its significance in the overall
composability correctness and also due to the level of difficulty it poses in
the process. In order to verify composability at this level we investigate the
application of three different approaches namely (i) Petri Nets based Algebraic
Analysis (ii) Colored Petri Nets (CPN) based State-space Analysis and (iii)
Communicating Sequential Processes based Model Checking. All three approaches
attack the problem of verifying dynamic-semantic composability in different
ways; however, they all share the same aim, i.e., to confirm the correctness of a
composed model with respect to its requirement specifications.
|
In this paper we study spectral properties associated with a Schrodinger
operator whose potential is an exponentially decaying function. As
applications, we prove local energy decay for solutions to the perturbed wave
equation and the absence of resonances for the NLS.
|
For $0<p<\infty$, $\Psi:[0,\infty)\to(0,\infty)$ and a finite positive Borel
measure $\mu$ on the unit disc $\mathbb{D}$, the Lebesgue--Zygmund space
$L^p_{\mu,\Psi}$ consists of all measurable functions $f$ such that $\lVert f
\rVert_{L_{\mu, \Psi}^{p}}^p =\int_{\mathbb{D}}|f|^p\Psi(|f|)\,d\mu< \infty$.
For an integrable radial function $\omega$ on $\mathbb{D}$, the corresponding
weighted Bergman-Zygmund space $A_{\omega, \Psi}^{p}$ is the set of all
analytic functions in $L_{\mu, \Psi}^{p}$ with $d\mu=\omega\,dA$.
The purpose of the paper is to characterize bounded (and compact) embeddings
$A_{\omega,\Psi}^{p}\subset L_{\mu, \Phi}^{q}$, when $0<p\le q<\infty$, the
functions $\Psi$ and $\Phi$ are essentially monotonic, and $\Psi,\Phi,\omega$
satisfy certain doubling properties. The tools developed on the way to the main
results are applied to characterize bounded and compact integral operators
acting from $A^p_{\omega,\Psi}$ to $A^q_{\nu,\Phi}$, provided $\nu$ admits the
same doubling property as $\omega$.
|
Reflexive polytopes form one of the distinguished classes of lattice
polytopes. Reflexive polytopes which possess the integer decomposition
property are of particular interest. In the present paper, by virtue of the
algebraic technique of Gr\"obner bases, a new class of reflexive polytopes
which possess the integer decomposition property and which arise from perfect
graphs will be presented. Furthermore, the Ehrhart $\delta$-polynomials of
these polytopes will be studied.
|
The implementation of electron- and hole-doping, in conjunction with applied
pressure, is analyzed as a mechanism to induce or enhance the superconducting
state on fcc YH$_3$ and ScH$_3$. In particular, the evolution of their
structural, electronic, and lattice dynamical properties, as well as the
electron-phonon coupling and superconducting critical temperature ($T_c$) is
presented and discussed, as a function of the electron- and hole-doping content
as well as applied pressure. The study was performed within the density
functional perturbation theory, taking into account the effects of zero-point
energy through the quasi-harmonic approximation, while the doping was
implemented by means of the construction of the Sc$_{1-x}$M$_{x}$H$_{3}$
(M=Ca,Ti) and Y$_{1-x}$M$_{x}$H$_{3}$ (M=Sr,Zr) solid solutions modeled with
the virtual crystal approximation (VCA). We found that the ScH$_3$ and YH$_3$
hydrides show a significant improvement of their electron-phonon coupling
properties under hole-doping (M=Ca,Sr) and at pressure values close to
dynamical instabilities. By contrast, under electron-doping (M=Ti,Zr) the
systems do not improve such properties for any value of applied pressure considered.
Then, as a result, $T_c$ rapidly increases as a function of $x$ on the
hole-doping region, reaching its maximum value of $92.7(67.9)$~K and
$84.5(60.2)$~K at $x=0.3$ for Sc$_{1-x}$Ca$_{x}$H$_{3}$ at $10.8$~GPa and
Y$_{1-x}$Sr$_{x}$H$_{3}$ at $5.8$~GPa respectively, with $\mu^{*}=0(0.15)$,
while for both, electron- and hole-doping, $T_c$ decreases as a function of the
applied pressure, mainly due to phonon hardening. By the thorough analysis of
the electron-phonon properties as a function of doping and pressure, we can
conclude that the tuning of the lattice dynamics is a promising path for
improving the superconductivity in both systems.
|
One-loop radiative Majorana neutrino masses through the exchange of scalars
have been considered for many years. We show for the first time how such a
one-loop mass is also possible through the exchange of vector gauge bosons. It
is based on a simple variation of a recently proposed $SU(2)_N$ extension of
the standard model, where a vector boson is a candidate for the dark matter of
the Universe.
|
In the era of wide-field surveys like the Zwicky Transient Facility and the
Rubin Observatory's Legacy Survey of Space and Time, sparse photometric
measurements constitute an increasing percentage of asteroid observations,
particularly for asteroids newly discovered in these large surveys. Follow-up
observations to supplement these sparse data may be prohibitively expensive in
many cases, so to overcome these sampling limitations, we introduce a flexible
model based on Gaussian Processes to enable Bayesian parameter inference of
asteroid time series data. This model is designed to be flexible and
extensible, and can model multiple asteroid properties such as the rotation
period, light curve amplitude, changing pulse profile, and magnitude changes
due to the phase angle evolution at the same time. Here, we focus on the
inference of rotation periods. Based on both simulated light curves and real
observations from the Zwicky Transient Facility, we show that the new model
reliably infers rotational periods from sparsely sampled light curves, and
generally provides well-constrained posterior probability densities for the
model parameters. We propose this framework as an intermediate method between
fast, but very limited period detection algorithms and much more comprehensive,
but computationally expensive shape modeling based on ray-tracing codes.
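As a minimal illustration of the idea (not the paper's model, which is fully Bayesian and includes further components for amplitude, pulse-profile, and phase-angle effects), a periodic Gaussian Process kernel can already recover a rotation period from irregularly sampled photometry, here via scikit-learn's marginal-likelihood optimization on toy data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

rng = np.random.default_rng(0)

# toy sparse light curve: true rotation period 5.3 h, irregular sampling
true_period = 5.3
t = np.sort(rng.uniform(0, 72, 60))                      # 60 epochs over 3 days [h]
mag = 0.2 * np.sin(2 * np.pi * t / true_period) + 0.02 * rng.normal(size=t.size)

# periodic kernel plus white noise; the optimized 'periodicity'
# hyperparameter plays the role of the rotation period
kernel = ExpSineSquared(length_scale=1.0, periodicity=5.0,
                        periodicity_bounds=(1.0, 24.0)) \
         + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10)
gp.fit(t[:, None], mag)

print(gp.kernel_)   # inspect the fitted periodicity (should be near 5.3 h)
```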
|
One of the major goals of the Large Hadron Collider (LHC) is to probe the
electroweak symmetry breaking mechanism and the generation of the masses of
the elementary particles. We review the physics of the Higgs sector in the
Standard Model. The main channels for Higgs production at the LHC are
reviewed. The prospects for discovering the Higgs particles at the LHC and for
studying their fundamental properties are summarized.
|
Non-reciprocal electronic transport in a material occurs if both time
reversal and inversion symmetries are broken. The superconducting diode effect
(SDE) is an exotic manifestation of this type of behavior, in which the
critical currents for positive and negative current directions are mismatched, as recently observed
in some non-centrosymmetric superconductors with a magnetic field. Here, we
demonstrate a SDE in non-magnetic Nb/Ru/Sr$_2$RuO$_4$ Josephson junctions
without applying an external magnetic field. The cooling history dependence of
the SDE suggests that time-reversal symmetry is intrinsically broken by the
superconducting phase of Sr$_2$RuO$_4$. Applied magnetic fields modify the SDE
dynamically by randomly changing the sign of the non-reciprocity. We propose a
model for such a topological junction with a conventional superconductor
surrounded by a chiral superconductor with broken time reversal symmetry.
|
The size distributions of planned and forced outages, and of the ensuing
restoration times, in power systems have been studied for almost two decades
and have drawn great interest, as they display heavy tails. This phenomenon
has been addressed by various threshold models that self-tune to their
critical points, but, as many papers have pointed out, the explanations remain
intuitive and more empirical data is needed to support the hypotheses. In this paper, the
authors analyze outage data collected from various public sources to calculate
the outage energy and outage duration exponents of possible power-law fits.
Temporal thresholds are applied to identify crossovers from initial short-time
behavior to power-law tails. We revisit and add to the possible explanations of
the uniformness of these exponents. By performing power spectral analyses on
the outage event time series and the outage duration time series, it is found
that, on the one hand, while being overwhelmed by white noise, outage events
show traits of self-organized criticality (SOC), which may be modeled by a
crossover from random percolation to directed percolation branching process
with dissipation, coupled to a conserved density. On the other hand, in
responses to outages, the heavy tails in outage duration distributions could be
a consequence of the highly optimized tolerance (HOT) mechanism, based on the
optimized allocation of maintenance resources.
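For reference, exponent fits of this kind can be done with the standard maximum-likelihood estimator for a continuous power-law tail. The sketch below omits the temporal-threshold/crossover selection used in the paper and simply fits everything above a given xmin, on synthetic data:

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Continuous power-law exponent via maximum likelihood:
    alpha_hat = 1 + n / sum(log(x_i / xmin)) over the tail x_i >= xmin."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= xmin]
    n = tail.size
    alpha = 1.0 + n / np.log(tail / xmin).sum()
    stderr = (alpha - 1.0) / np.sqrt(n)        # standard error of the estimate
    return alpha, stderr

# toy usage: synthetic "outage energies" with a known tail exponent of 2.5
rng = np.random.default_rng(0)
samples = (1.0 - rng.random(20000)) ** (-1.0 / 1.5)   # Pareto tail, alpha = 2.5
print(powerlaw_mle(samples, xmin=5.0))
```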
|
We conjecture that the class of frame matroids can be characterised by a
sentence in the monadic second-order logic of matroids, and we prove that there
is such a characterisation for the class of bicircular matroids. The proof does
not depend on an excluded-minor characterisation.
|
Research on machine learning for channel estimation, especially neural
network solutions for wireless communications, is attracting significant
current interest. This is because conventional methods cannot meet the present
demands of high-speed communication. In this paper, we deploy a general
residual convolutional neural network to achieve channel estimation for the
orthogonal frequency-division multiplexing (OFDM) signals in a downlink
scenario. Our method also deploys a simple interpolation layer to replace the
transposed convolutional layer used in other networks to reduce the computation
cost. The proposed method is more easily adapted to different pilot patterns
and packet sizes. Compared with other deep learning methods for channel
estimation, our results for 3GPP channel models suggest improved mean squared
error performance for our approach.
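A minimal PyTorch sketch of the described architectural idea: residual convolutional blocks over the pilot grid, followed by a plain interpolation layer (nn.Upsample) in place of a transposed convolution. Layer counts, channel widths, and grid sizes are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain residual conv block operating on the 2D grid (subcarriers x symbols)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class ChannelEstimator(nn.Module):
    """Residual CNN; bilinear interpolation (nn.Upsample) replaces transposed
    convolutions to upsample from the pilot grid to the full resource grid."""
    def __init__(self, blocks=4, ch=32, grid=(72, 14)):
        super().__init__()
        self.head = nn.Conv2d(2, ch, 3, padding=1)   # 2 channels: Re/Im of LS estimate
        self.body = nn.Sequential(*[ResidualBlock(ch) for _ in range(blocks)])
        self.up = nn.Upsample(size=grid, mode="bilinear", align_corners=False)
        self.tail = nn.Conv2d(ch, 2, 3, padding=1)
    def forward(self, x):
        return self.tail(self.up(self.body(self.head(x))))

# toy usage: LS estimates on a sparse 12x4 pilot grid -> full 72x14 channel grid
pilot_ls = torch.randn(1, 2, 12, 4)
print(ChannelEstimator()(pilot_ls).shape)    # torch.Size([1, 2, 72, 14])
```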
|
We show that with a purely blue-detuned cooling mechanism we can densely load
single neutral atoms into large arrays of shallow optical tweezers. With this
ability, more efficient assembly of larger ordered arrays will be possible -
hence expanding the number of particles available for bottom-up quantum
simulation and computation with atoms. Using Lambda-enhanced grey molasses on
the D1 line of 87Rb, we achieve loading into a single 0.63 mK trap with 89%
probability, and we further extend this loading to 100 atoms at 80%
probability. The loading behavior agrees with a model of consecutive
light-assisted collisions in repulsive molecular states. With simple
rearrangement that only moves rows and columns of a 2D array, we demonstrate
one example of the power of enhanced loading in large arrays.
|
The A64FX CPU is arguably the most powerful Arm-based processor design to
date. Although it is a traditional cache-based multicore processor, its peak
performance and memory bandwidth rival accelerator devices. A good
understanding of its performance features is of paramount importance for
developers who wish to leverage its full potential. We present an architectural
analysis of the A64FX used in the Fujitsu FX1000 supercomputer at a level of
detail that allows for the construction of Execution-Cache-Memory (ECM)
performance models for steady-state loops. In the process we identify
architectural peculiarities that point to viable generic optimization
strategies. After validating the model using simple streaming loops we apply
the insight gained to sparse matrix-vector multiplication (SpMV) and the domain
wall (DW) kernel from quantum chromodynamics (QCD). For SpMV we show why the
CRS matrix storage format is not a good practical choice on this architecture
and how the SELL-C-sigma format can achieve bandwidth saturation. For the DW
kernel we provide a cache-reuse analysis and show how an appropriate choice of
data layout for complex arrays can realize memory-bandwidth saturation in this
case as well. A comparison with state-of-the-art high-end Intel Cascade Lake AP
and Nvidia V100 systems puts the capabilities of the A64FX into perspective. We
also explore the potential for power optimizations using the tuning knobs
provided by the Fugaku system, achieving energy savings of about 31% for SpMV
and 18% for DW.
|
In the framework of Polynomial Eigenvalue Problems, most of the matrix
polynomials arising in applications are structured polynomials (namely
(skew-)symmetric, (skew-)Hermitian, (anti-)palindromic, or alternating). The
standard way to solve Polynomial Eigenvalue Problems is by means of
linearizations. The most frequently used linearizations belong to general
constructions, valid for all matrix polynomials of a fixed degree, known as
{\em companion linearizations}. It is well known, however, that it is not possible
to construct companion linearizations that preserve any of the previous
structures for matrix polynomials of even degree. This motivates the search for
more general companion forms, in particular {\em companion $\ell$-ifications}.
In this paper, we present, for the first time, a family of (generalized)
companion $\ell$-ifications that preserve any of these structures, for matrix
polynomials of degree $k=(2d+1)\ell$. We also show how to construct sparse
$\ell$-ifications within this family. Finally, we prove that there are no
structured companion quadratifications for quartic matrix polynomials.
|
We suggest that stellar oscillations are responsible for the strange radio
behaviors of Anomalous X-ray pulsars and Soft Gamma-ray repeaters (AXP/SGRs),
within the framework of both the solid quark star model and the magnetar
model. In the solid quark star model, the extra voltage provided by
oscillations activates the star from below the death line to above it. In the
magnetar model, oscillations enlarge the radio beam, thereby increasing the
possibility of detecting it. The radio emission later decays and vanishes as
the oscillations damp.
|
Developments in Brain Computer Interfaces (BCIs) are empowering those with
severe physical afflictions through their use in assistive systems. A common
method of achieving this is via Motor Imagery (MI), which maps brain signals
to codes for certain commands. Electroencephalography (EEG) is preferred for
recording brain signal data on account of being non-invasive. Despite their
potential utility, MI-BCI systems are still confined to research labs. A major
cause for this is the lack of robustness of such systems. As hypothesized by two
teams during Cybathlon 2016, a particular source of the system's vulnerability
is the sharp change in the subject's state of emotional arousal. This work aims
towards making MI-BCI systems resilient to such emotional perturbations. To do
so, subjects are exposed to high and low arousal-inducing virtual reality (VR)
environments before recording EEG data. The advent of COVID-19 compelled us to
modify our methodology. Instead of training machine learning algorithms to
classify emotional arousal, we opt for classifying subjects that serve as proxy
for each state. Additionally, MI models are trained for each subject instead of
each arousal state. As training subjects to use MI-BCI can be an arduous and
time-consuming process, reducing this variability and increasing robustness can
considerably accelerate the acceptance and adoption of assistive technologies
powered by BCI.
|
We present a selective review of statistical modeling of dynamic networks. We
focus on models with latent variables, specifically, the latent space models
and the latent class models (or stochastic blockmodels), which investigate both
the observed features and the unobserved structure of networks. We begin with
an overview of the static models, and then we introduce the dynamic extensions.
For each dynamic model, we also discuss its applications that have been studied
in the literature, with the data sources listed in the Appendix. Based on the
review, we summarize a list of open problems and challenges in dynamic network
modeling with latent variables.
|
A limit on the mass of the tau neutrino is derived from 4.5 million tau pairs
produced in an integrated luminosity of 5.0 fb^{-1} of electron-positron
annihilation to tau pairs at center of mass energies near 10.6 GeV. The
measurement technique involves a two-dimensional extended likelihood analysis,
including the dependence of the end-point population on the neutrino mass, and
allows for the first time an explicit background contribution. We use the
decays of the tau to five charged pions and a neutrino as well as the decay to
three charged pions, two neutral pions and a neutrino to obtain an upper limit
of 30 MeV/c^2 at 95% C.L.
|
We relate R-equivalence on tori with Voevodsky's theory of homotopy invariant
Nisnevich sheaves with transfers and effective motivic complexes.
|
We show that a Majorana fermion description of the two channel Kondo model
can emerge quite naturally as a representation of the algebra associated with
the spin currents in the two channels. Using this representation we derive an
exact equivalent Hamiltonian for the two channel model expressed entirely in
terms of Majorana fermions. The part of the Hamiltonian that is coupled to the
impurity spin corresponds to the vector part of the $\sigma$-$\tau$ model
(compactified two channel model). Consequently all the thermodynamic properties
associated with the impurity spin can be calculated from the $\sigma$-$\tau$
model alone. The equivalent model can be used to confirm the interpretation of
the many-body excitation spectrum of the low energy fixed point of the
two-channel model as due to free Majorana fermions with appropriate boundary
conditions.
|
We present numerical simulations of the hydrodynamical interactions that
produce circumstellar shells. These simulations include several scenarios, such
as wind-wind interaction and wind-ISM collisions. In our calculations we have
taken into account the presence of dust in the stellar wind. Our results show
that, while small dust grains tend to be strongly coupled to the gas, large
dust grains are only weakly coupled. As a result, the distribution of the large
dust grains is not representative of the gas distribution. Combining these
results with observations may give us a new way of validating hydrodynamical
models of the circumstellar medium.
|
Based on 1-minute price changes recorded since year 2012, the fluctuation
properties of the rapidly-emerging Bitcoin (BTC) market are assessed over
chosen sub-periods, in terms of return distributions, volatility
autocorrelation, Hurst exponents and multiscaling effects. The findings are
compared to the stylized facts of mature world markets. While early trading was
affected by system-specific irregularities, it is found that over the months
preceding Apr 2018 all these statistical indicators approach the features
hallmarking maturity. This can be taken as an indication that the Bitcoin
market, and possibly other cryptocurrencies, carry concrete potential of
imminently becoming a regular market, alternative to the foreign exchange
(Forex). Since high-frequency price data are available since the beginning of
trading, the Bitcoin offers a unique window into the statistical
characteristics of a market maturation trajectory.
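One of the listed indicators, the Hurst exponent, can be estimated with a simple rescaled-range (R/S) routine like the Python sketch below. The paper's multiscaling analysis is more involved, so treat this only as orientation; window sizes and data here are toy choices.

```python
import numpy as np

def hurst_rs(returns, window_sizes=None):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a return series.
    H ~ 0.5 for an uncorrelated walk; H != 0.5 signals (anti)persistence."""
    x = np.asarray(returns, dtype=float)
    if window_sizes is None:
        window_sizes = np.unique(
            np.logspace(1, np.log10(len(x) // 4), 12).astype(int))
    rs = []
    for w in window_sizes:
        chunks = x[: len(x) // w * w].reshape(-1, w)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)        # range of cumulative deviations
        s = chunks.std(axis=1)                       # per-window standard deviation
        valid = s > 0
        rs.append((r[valid] / s[valid]).mean())
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope                                     # log-log slope ~ Hurst exponent

# toy usage: Gaussian returns should give H close to 0.5
rng = np.random.default_rng(0)
print(hurst_rs(rng.normal(size=2 ** 14)))
```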
|
As the next step toward an improved large scale Galactic magnetic field
model, we present a simple comparison of polarised synchrotron and thermal dust
emission on the Galactic plane. We find that the field configuration in our
previous model that reproduces the polarised synchrotron is not compatible with
the WMAP 94 GHz polarised emission data. In particular, the high degree of dust
polarisation in the outer Galaxy (90deg < l < 270deg) implies that the fields
in the dust-emitting regions are more ordered than the average of
synchrotron-emitting regions. This new dust information allows us to constrain
the spatial mixing of the coherent and random magnetic field components in the
outer Galaxy. The inner Galaxy differs in polarisation degree and apparently
requires a more complicated scenario than our current model. In the scenario
that each interstellar component (including fields and now dust) follows a
spiral arm modulation, as observed in external galaxies, the changing degree of
ordering of the fields in dust-emitting regions may imply that the dust arms
and the field component arms are shifted as a varying function of
Galacto-centric radius. We discuss the implications for how the spiral arm
compression affects the various components of the magnetised interstellar
medium but conclude that improved data such as that expected from the Planck
satellite will be required for a thorough analysis.
|
In order to extend the blow-up criterion of solutions to the Euler equations,
Kozono and Taniuchi have proved a logarithmic Sobolev inequality by means of
the isotropic (elliptic) $BMO$ norm. In this paper, we show a parabolic version of
the Kozono-Taniuchi inequality by means of the anisotropic (parabolic) $BMO$ norm.
More precisely we give an upper bound for the $L^{\infty}$ norm of a function
in terms of its parabolic $BMO$ norm, up to a logarithmic correction involving
its norm in some Sobolev space. As an application, we also explain how to apply
this inequality in order to establish a long-time existence result for a class
of nonlinear parabolic problems.
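For orientation, the classical (elliptic) Kozono-Taniuchi inequality has the form, for $sp>n$,

$$\|f\|_{L^{\infty}} \le C\left(1 + \|f\|_{BMO}\left(1+\log^{+}\|f\|_{W^{s,p}}\right)\right),$$

and the parabolic version established here keeps this shape, with the isotropic $BMO$ norm replaced by its anisotropic (parabolic) analogue.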
|
We develop a Galois theory for systems of linear difference equations with
periodic parameters, for which we also introduce linear difference algebraic
groups. We then apply this to constructively test if solutions of linear
q-difference equations, with complex q, not a root of unity, satisfy any
polynomial q'-difference equations with q' being a root of unity.
|
Shock waves are common in astrophysical environments. On many occasions, they
are collisionless, which means they occur in settings where the mean free path
is much larger than the dimensions of the system. For this very reason,
magnetohydrodynamics (MHD) is not equipped to deal with such shocks, if only
because it assumes binary collisions and hence temperature isotropy, whereas
such isotropy is not guaranteed in the absence of collisions. Here we solve a model
capable of dealing with perpendicular shocks with anisotropic upstream
pressure. The system of MHD conservation equations is closed assuming the
temperature normal to the flow is conserved at the crossing of the shock front.
In the strong shock sonic limit, the behavior of a perpendicular shock with
isotropic upstream is retrieved, regardless of the upstream anisotropy.
Generally speaking, a rich variety of behaviors is found, inaccessible to MHD,
depending on the upstream parameters. The present work can be viewed as the
companion paper of MNRAS 520, 6083-6090 (2023), where the case of a parallel
shock was treated. Differences and similarities with the present case are
discussed.
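For orientation, here is a minimal numerical sketch of the textbook limit the
abstract refers to: the standard isotropic perpendicular MHD jump conditions,
solved for the density compression ratio. This is our toy construction, not
the paper's anisotropic closure, which instead conserves the temperature
normal to the flow across the front.

```python
# Minimal sketch of the *isotropic* perpendicular MHD shock (our toy, for
# orientation only; the paper replaces the isotropic closure by conservation
# of the temperature normal to the flow).
from scipy.optimize import brentq

def compression_ratio(mach_s, beta, gamma=5.0 / 3.0):
    """Density compression r = rho2/rho1 of a perpendicular MHD shock.

    mach_s: upstream sonic Mach number; beta: thermal/magnetic pressure ratio.
    Units: rho1 = 1 and upstream sound speed c_s1 = 1.
    """
    p1 = 1.0 / gamma          # so that c_s1^2 = gamma * p1 / rho1 = 1
    pB1 = p1 / beta           # upstream magnetic pressure B1^2 / (2 mu0)
    u1 = mach_s               # upstream normal flow speed

    def residual(r):
        u2, pB2 = u1 / r, r**2 * pB1         # mass flux and flux freezing
        # Momentum conservation fixes the downstream thermal pressure p2.
        p2 = p1 + pB1 + u1**2 - (pB2 + u2**2 * r)
        # Energy flux u*(rho*u^2/2 + gamma*p/(gamma-1) + 2*pB) is conserved.
        e1 = u1 * (0.5 * u1**2 + gamma * p1 / (gamma - 1.0) + 2.0 * pB1)
        e2 = u2 * (0.5 * r * u2**2 + gamma * p2 / (gamma - 1.0) + 2.0 * pB2)
        return e1 - e2

    # r = 1 is the trivial no-shock root; bracket away from it, below the
    # hydrodynamic strong-shock limit (gamma+1)/(gamma-1).
    return brentq(residual, 1.001, (gamma + 1.0) / (gamma - 1.0) - 1e-6)

print(compression_ratio(mach_s=10.0, beta=1.0))  # ~3.7, below the limit of 4
```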
|
In this paper, the Cauchy problem for linear and nonlinear convolution wave
equations is studied. The equations involve convolution terms with general
kernel functions whose Fourier transforms are operator functions defined in a
Banach space E, subject to certain growth conditions. Assuming sufficient
smoothness of the initial data and the operator functions, the local and
global existence, uniqueness, and regularity properties of solutions are
established in terms of fractional powers of a given sectorial operator
function. Furthermore, conditions for finite-time blow-up are provided. By
choosing the space E and the operators appropriately, the regularity
properties of a wide class of nonlocal wave equations arising in physics are
obtained.
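As a concrete orientation (our illustrative choice, not a formula quoted from
the abstract), a representative scalar member of this class is the convolution
wave equation
$$u_{tt}=\Delta\big(k*(u+g(u))\big),$$
where $k$ is the kernel whose Fourier transform $\hat{k}(\xi)$ plays the role
of the operator function (scalar-valued here; the paper's setting allows it to
take values in operators on the Banach space $E$), and $g$ is the nonlinearity.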
|
Context: How to adopt, scale and tailor agile methods depends on several
factors such as the size of the organization, business goals, operating model,
and needs. The Scaled Agile Framework (SAFe) was developed to support
organizations to scale agile practices across the enterprise. Problem: Early
adopters of SAFe tend to be large multi-national enterprises who report that
the adoption of SAFe has led to significant productivity and quality gains.
However, little is known about whether these benefits translate to small to
medium sized enterprises (SMEs). Method: As part of a longitudinal study of an
SME transitioning to SAFe we ask, to what extent are SAFe practices adopted at
the team level? We targeted all team members and administered a mixed-methods
survey in February 2017 and in July 2017 to identify and evaluate the adoption
rate of SAFe practices. Results: Initially, in Quarter 1, teams throughout the
organization were struggling with PI/Release health and Technical health, as
most teams were transitioning from plan-driven development to SAFe.
But, during the transition period in Quarter 3, we observed discernible
improvements in different areas of SAFe practice adoption. Conclusion: The
observed improvement might be due to teams merely becoming more familiar with
the practices over-time. However, management had also made some structural
changes to the teams that may account for the change.
|
Accurate and fast prediction of materials properties is central to the
digital transformation of materials design. However, the vast design space and
diverse operating conditions pose significant challenges for accurately
modeling arbitrary material candidates and forecasting their properties. We
present MatterSim, a deep learning model actively learned from large-scale
first-principles computations, for efficient atomistic simulations at
first-principles level and accurate prediction of broad material properties
across the periodic table, spanning temperatures from 0 to 5000 K and pressures
up to 1000 GPa. Out-of-the-box, the model serves as a machine learning force
field, and shows remarkable capabilities not only in predicting ground-state
material structures and energetics, but also in simulating their behavior under
realistic temperatures and pressures, achieving an up to ten-fold enhancement
in precision compared to the prior best-in-class. This enables MatterSim to
compute materials' lattice dynamics, mechanical and thermodynamic properties,
and beyond, to an accuracy comparable with first-principles methods.
Specifically, MatterSim predicts Gibbs free energies for a wide range of
inorganic solids with near-first-principles accuracy and achieves a 15 meV/atom
resolution for temperatures up to 1000 K compared with experiments. This opens
an opportunity to predict experimental phase diagrams of materials at minimal
computational cost. Moreover, MatterSim also serves as a platform for
continuous learning and customization by integrating domain-specific data. The
model can be fine-tuned for atomistic simulations at a desired level of theory
or for direct structure-to-property predictions, achieving high data efficiency
with a reduction in data requirements by up to 97%.
|
We discuss a modification of Gopakumar's prescription for constructing string
theory duals of free gauge theories. We use this prescription to construct an
unconventional worldsheet dual of the Gaussian matrix model.
|
The work we present takes place within the framework of the SMOS mission of
the ESA, which will put a 1.4 GHz radiometer into space. The goal of the
research we propose is to improve the understanding of the effects of soil
structure and litter. The effects of the litter and of soil heterogeneities
are probably important but remain largely unexplored. We have therefore
developed an experimental approach in the laboratory and in situ. It makes it
possible to take measurements in various configurations (frequency, time,
polarization, incidence, bistatic geometry, Brewster effect...) and for
various surface conditions (homogeneous or heterogeneous ground, more or less
wet, presence of litter...). Laboratory measurements with a waveguide enabled
us to characterize the various components of the geological structure (soil,
rocks) and to check the Dobson model that is usually used.
|
Relation-aware graph structure embedding is promising for predicting
multi-relational drug-drug interactions (DDIs). Typically, most existing
methods begin by constructing a multi-relational DDI graph and then learning
relation-aware graph structure embeddings (RaGSEs) of drugs from the DDI graph.
Nevertheless, most existing approaches are usually limited in learning RaGSEs
of new drugs, leading to serious over-fitting when the test DDIs involve such
drugs. To alleviate this issue, we propose a novel DDI prediction method based
on relation-aware graph structure embedding with co-contrastive learning,
RaGSECo. The proposed RaGSECo constructs two heterogeneous drug graphs: a
multi-relational DDI graph and a multi-attribute drug-drug similarity (DDS)
graph. The two graphs are used respectively for learning and propagating the
RaGSEs of drugs, aiming to ensure all drugs, including new ones, can possess
effective RaGSEs. Additionally, we present a novel co-contrastive learning
module to learn drug-pair (DP) representations. This mechanism learns DP
representations from two distinct views (interaction and similarity views) and
encourages these views to supervise each other collaboratively to obtain more
discriminative DP representations. We evaluate the effectiveness of our RaGSECo
on three different tasks using two real datasets. The experimental results
demonstrate that RaGSECo outperforms existing state-of-the-art prediction
methods.
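A minimal PyTorch-style sketch of the co-contrastive idea as we read it: DP
embeddings from the interaction view and the similarity view supervise each
other through a symmetric InfoNCE loss. The tensor names and the temperature
value are our assumptions, not the authors' code.

```python
# Minimal sketch (our assumptions, not the authors' code) of a symmetric
# co-contrastive loss between two views of drug-pair (DP) embeddings.
import torch
import torch.nn.functional as F

def co_contrastive_loss(z_inter, z_sim, temperature=0.5):
    """z_inter, z_sim: (n_pairs, dim) DP embeddings from the interaction view
    and the similarity view; row i of both views describes the same DP."""
    z1 = F.normalize(z_inter, dim=1)
    z2 = F.normalize(z_sim, dim=1)
    logits = z1 @ z2.t() / temperature       # (n, n) cross-view similarities
    targets = torch.arange(z1.size(0))       # positives sit on the diagonal
    # Each view supervises the other: InfoNCE in both directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

loss = co_contrastive_loss(torch.randn(128, 64), torch.randn(128, 64))
```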
|
We give an algorithm to compute the $\omega$-primality of finitely generated
atomic monoids. Asymptotic $\omega$-primality is also studied, and a formula to
obtain it in finitely generated quasi-Archimedean monoids is proven. The
formulation is applied to numerical semigroups, obtaining an expression of this
invariant in terms of its system of generators.
|
The thermodynamic stability of structural isomers of $\mathrm{C}_{24}$,
$\mathrm{C}_{26}$, $\mathrm{C}_{28}$ and $\mathrm{C}_{32}$, including
fullerenes, is studied using density functional and quantum Monte Carlo
methods. The energetic ordering of the different isomers depends sensitively on
the treatment of electron correlation. Fixed-node diffusion quantum Monte Carlo
calculations predict that a $\mathrm{C}_{24}$ isomer is the smallest stable
graphitic fragment and that the smallest stable fullerenes are the
$\mathrm{C}_{26}$ and $\mathrm{C}_{28}$ clusters with $\mathrm{C}_{2v}$ and
$\mathrm{T}_{d}$ symmetry, respectively. These results support proposals that a
$\mathrm{C}_{28}$ solid could be synthesized by cluster deposition.
|
With a Jupiter-like exoplanet and a debris disk with both asteroid and Kuiper
belt analogs, $\epsilon$ Eridani has a fascinating resemblance to our
expectations for a young Solar System. We present a deep HST/STIS coronagraphic
dataset using eight orbit visits and the PSF calibrator $\delta$ Eridani. While
we were unable to detect the debris disk, we place a stringent upper limit on
the scattered-light surface brightness of $\sim 4 \, \mu Jy/arcsec^{2}$. We
combine
this scattered light detection limit with a reanalysis of archival near and
mid-infrared observations and a dynamical model of the full planetary system to
refine our model of the $\epsilon$ Eridani debris disk components. Radiative
transfer modeling suggests an asteroid belt analog inside of 3 au, an
intermediate disk component in the 6 - 37 au region and a Kuiper belt analog
co-located with the narrow belt observed in the millimeter (69 au). Modeling
also suggests a large minimum grain size requiring either very porous grains or
a suppression of small grain production, and a radially stratified particle
size distribution. The inner disk regions require a steep power law slope
($s^{-3.8}$ where $s$ is the grain size) weighted towards smaller grains and
the outer disk prefers a shallower slope ($s^{-3.4}$) with a minimum particle
size of $> 2 \, \mu m$. These conclusions will be enhanced by upcoming
coronagraphic observations of the system with the James Webb Space Telescope,
which will pinpoint the radial location of the dust belts and further diagnose
the dust particle properties.
|
An $\mathcal{F}$-essential subgroup is called a pearl if it is either
elementary abelian of order $p^2$ or non-abelian of order $p^3$. In this paper
we start the investigation of fusion systems containing pearls: we determine a
bound for the order of $p$-groups containing pearls and we classify the
saturated fusion systems on $p$-groups containing pearls and having sectional
rank at most $4$.
|
Bilevel optimization reveals the inner structure of otherwise opaque
optimization problems, such as hyperparameter tuning, neural architecture
search, and meta-learning. A common goal in bilevel optimization is to minimize
a hyper-objective that implicitly depends on the solution set of the
lower-level function. Although this hyper-objective approach is widely used,
its theoretical properties have not been thoroughly investigated in cases where
the lower-level functions lack strong convexity. In this work, we first provide
hardness results to show that the goal of finding stationary points of the
hyper-objective for nonconvex-convex bilevel optimization can be intractable
for zero-respecting algorithms. Then we study a class of tractable
nonconvex-nonconvex bilevel problems when the lower-level function satisfies
the Polyak-{\L}ojasiewicz (PL) condition. We show a simple first-order
algorithm can achieve better complexity bounds of
$\tilde{\mathcal{O}}(\epsilon^{-2})$, $\tilde{\mathcal{O}}(\epsilon^{-4})$ and
$\tilde{\mathcal{O}}(\epsilon^{-6})$ in the deterministic, partially
stochastic, and fully stochastic setting respectively.
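A toy sketch of one fully first-order penalty scheme in this spirit (our
construction, not necessarily the algorithm analyzed in the paper): an
auxiliary variable tracks the lower-level minimizer so that lower-level
suboptimality can be penalized with gradients only. The quadratic test
functions are ours.

```python
# Toy sketch (ours; not necessarily the paper's algorithm) of a fully
# first-order penalty scheme for bilevel problems with a PL lower level:
#     min_x f(x, y*(x))   s.t.   y*(x) in argmin_y g(x, y).
# We descend L(x, y, z) = f(x, y) + lam * (g(x, y) - g(x, z)), where z tracks
# the lower-level minimizer, so g(x, y) - g(x, z) penalizes suboptimality of y.

def grad(fn, v, eps=1e-6):
    # One-dimensional finite-difference gradient; enough for a toy example.
    return (fn(v + eps) - fn(v - eps)) / (2 * eps)

f = lambda x, y: (x - 1.0) ** 2 + (y - 1.0) ** 2  # upper level (ours)
g = lambda x, y: (y - x) ** 2                     # lower level, PL in y; y*(x) = x

x, y, z, lam, lr = 3.0, 0.0, 0.0, 10.0, 0.02
for _ in range(2000):
    z -= lr * grad(lambda t: g(x, t), z)                              # track argmin_y g
    y -= lr * grad(lambda t: f(x, t) + lam * g(x, t), y)              # penalized y step
    x -= lr * grad(lambda t: f(t, y) + lam * (g(t, y) - g(t, z)), x)  # outer step
print(x, y)  # both approach 1.0, the bilevel solution of this toy instance
```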
|
We develop a general theory of buoyancy instabilities in the electron-ion
plasma with the electron heat flux, based not on the MHD equations but on a
multicomponent plasma approach in which the momentum equation is solved for
each species. We investigate the geometry in which the background magnetic
field is perpendicular to the gravity and stratification. General expressions
for the perturbed velocities are given without any simplifications. Collisions
between electrons and ions are taken into account in the momentum equations in
a general form, permitting us to consider both weakly and strongly collisional
objects. However, the electron heat flux is assumed to be directed along the
magnetic field, which implies a weakly collisional case. Using simplifications
justified for an investigation of buoyancy instabilities with the electron
thermal flux, we derive simple dispersion relations both for collisionless and
collisional cases for arbitrary directions of the wave vector. The
collisionless dispersion relation considerably differs from that obtained in
the MHD framework and is similar to the Schwarzschild criterion. This
difference is connected with simplified assumptions used in the MHD analysis of
buoyancy instabilities and with the role of the longitudinal electric field
perturbation which is not captured by the ideal MHD equations. The results
obtained can be applied to clusters of galaxies and other astrophysical
objects.
|
Consider a nonlocal conservation law where the flux function depends on the
convolution of the solution with a given kernel. In the singular local limit
obtained by letting the convolution kernel converge to the Dirac delta, one
formally recovers a conservation law. However, recent counter-examples show
that in general the solutions of the nonlocal equations do not converge to a
solution of the conservation law. In this work we focus on nonlocal
conservation laws modeling vehicular traffic: in this case, the convolution
kernel is anisotropic. We show that, under fairly general assumptions on the
(anisotropic) convolution kernel, the nonlocal-to-local limit can be rigorously
justified provided the initial datum satisfies a one-sided Lipschitz condition
and is bounded away from $0$. We also exhibit a counter-example showing that,
if the initial datum attains the value $0$, then there are severe obstructions
to a convergence proof.
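In symbols (our schematic rendering of the setting described above, with
$\varepsilon$ the kernel scale), the nonlocal equation and its formal local
limit read
$$\partial_t\rho_\varepsilon+\partial_x\big(\rho_\varepsilon\,v(\rho_\varepsilon*\eta_\varepsilon)\big)=0,\qquad \eta_\varepsilon\rightharpoonup\delta_0\ \Longrightarrow\ \partial_t\rho+\partial_x\big(\rho\,v(\rho)\big)=0\ \text{(formally)},$$
where, in the traffic interpretation, $v$ is a nonincreasing speed law and the
anisotropic kernel $\eta_\varepsilon$ is supported downstream, so that drivers
react only to the traffic ahead of them.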
|
We give an algebraic/geometric characterization of the classical
pseudodifferential operators on a smooth manifold in terms of the tangent
groupoid and its natural $\mathbb{R}^\times_+$-action. Specifically, we show
that a properly supported semiregular distribution on $M\times M$ is the
Schwartz kernel of a classical pseudodifferential operator if and only if it
extends to a smooth family of distributions on the range fibres of the tangent
groupoid which is homogeneous for the $\mathbb{R}^\times_+$-action modulo
smooth functions. Moreover, we show that the basic properties of
pseudodifferential operators can be proven directly from this characterization.
Finally, we show that with the appropriate generalization of the tangent
bundle, the same definition applies without change to define pseudodifferential
calculi on arbitrary filtered manifolds, in particular the Heisenberg calculus.
|
The rapid neutron capture process (r-process) is thought to be responsible
for the creation of more than half of all elements beyond iron. The scientific
challenges to understanding the origin of the heavy elements beyond iron lie in
both the uncertainties associated with astrophysical conditions that are needed
to allow an r-process to occur and a vast lack of knowledge about the
properties of nuclei far from stability. There is great global competition to
access and measure the most exotic nuclei that existing facilities can reach,
while simultaneously building new, more powerful accelerators to make even more
exotic nuclei. This work is an attempt to determine the most crucial nuclear
masses to measure using an r-process simulation code and several mass models
(FRDM, Duflo-Zuker, and HFB-21). The most important nuclear masses to measure
are determined by the changes in the resulting r-process abundances. Nuclei
around the closed shells near N=50, 82, and 126 have the largest impact on
r-process abundances irrespective of the mass models used.
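The sensitivity-study loop, as we understand it, can be summarized in a short
schematic; the function names and the impact metric below are our
placeholders, not the authors' code. Each nuclear mass is varied in turn, the
r-process network is rerun, and nuclei are ranked by the resulting change in
final abundances.

```python
# Schematic sensitivity loop (our placeholders, not the authors' code):
# rank nuclei by how much a mass variation changes final r-process abundances.

def run_r_process(masses):
    """Hypothetical stand-in for an r-process network calculation.
    Takes {(Z, N): mass_MeV} and returns {A: final_abundance}."""
    raise NotImplementedError  # replace with a real network code

def rank_mass_impacts(baseline_masses, delta_mev=0.5):
    y0 = run_r_process(baseline_masses)
    impacts = {}
    for nucleus in baseline_masses:
        perturbed = dict(baseline_masses)
        perturbed[nucleus] += delta_mev       # vary one mass at a time
        y = run_r_process(perturbed)
        # Global impact metric: summed absolute change in final abundances.
        impacts[nucleus] = sum(abs(y[a] - y0[a]) for a in y0)
    return sorted(impacts.items(), key=lambda kv: kv[1], reverse=True)
```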
|
We show that on a hyperbolic knot $K$ in $S^3$, the distance between any two
finite surgery slopes is at most two and consequently there are at most three
nontrivial finite surgeries. Moreover in case that $K$ admits three nontrivial
finite surgeries, $K$ must be the pretzel knot $P(-2,3,7)$. In case that $K$
admits two noncyclic finite surgeries or two finite surgeries at distance two,
the two surgery slopes must be one of ten or seventeen specific pairs
respectively. For $D$-type finite surgeries, we improve a finiteness theorem
due to Doig by giving an explicit bound on the possible resulting prism
manifolds, and also prove that $4m$ and $4m+4$ are characterizing slopes for
the torus knot $T(2m+1,2)$ for each $m\geq 1$.
|
We analyze the deviations from Maxwell-Boltzmann statistics found in recent
experiments studying velocity distributions in two-dimensional granular gases
driven into a non-equilibrium stationary state by a strong vertical vibration.
We show that the ``stochastic thermostat'' model of heated inelastic hard
spheres, in its simplest version and contrary to what has been hitherto
stated, is incompatible with the experimental data, although it predicts a
reminiscent high-velocity stretched-exponential behavior with exponent 3/2.
The experimental observations lead us to refine a recently proposed random
restitution coefficient model. Very good agreement is then found with
experimental velocity
distributions within this framework, which appears self-consistent and further
provides relevant probes to investigate the universality of the velocity
statistics.
|
The quark-delocalization, color-screening model, extended by inclusion of a
one-pion-exchange (OPE) tail, is applied to the study of the deuteron and the
d* dibaryon. The results show that the properties of the deuteron (an extended
object) are well reproduced, greatly improving the agreement with experimental
data as compared to our previous study (without OPE). At the same time, the
mass and decay width of the d* (a compact object) are, as expected, not altered
significantly.
|
We define Landau quasi-particles within the Gutzwiller variational theory,
and derive their dispersion relation for general multi-band Hubbard models in
the limit of large spatial dimensions D. Thereby we reproduce our previous
calculations which were based on a phenomenological effective single-particle
Hamiltonian. For the one-band Hubbard model we calculate the first-order
corrections in 1/D and find that the corrections to the quasi-particle
dispersions are small in three dimensions. They may be largely absorbed in a
rescaling of the total band width, unless the system is close to half band
filling. Therefore, the Gutzwiller theory in the limit of large dimensions
provides quasi-particle bands which are suitable for a comparison with real,
three-dimensional Fermi liquids.
|
Within a holographic model, we calculate the time evolution of 2-point and
1-point correlation functions (of selected operators) within a charged strongly
coupled system of many particles. That system is thermalizing from an
anisotropic initial charged state far from equilibrium towards equilibrium
while subjected to a constant external magnetic field. One main result is that
thermalization times for 2-point functions are significantly (approximately
three times) larger than those of 1-point functions. Magnetic field and charge
amplify this difference, generally increasing thermalization times. However,
there is also a competition of scales between charge density, magnetic field,
and initial anisotropy, which leads to an array of qualitative changes on the
2- and 1-point functions. There appears to be a strong effect of the medium on
2-point functions at early times, but approximately none at later times. At
strong magnetic fields, an apparently universal thermalization time emerges, at
which all 2-point functions appear to thermalize regardless of any other scale
in the system. Hence, this time scale is referred to as the saturation time
scale.
As extremality is approached in the purely charged case, 2- and 1-point
functions appear to equilibrate at infinitely late time. We also compute
2-point functions of charged operators. Our results can be taken to model
thermalization in heavy ion collisions, or thermalization in selected condensed
matter systems.
|
Consider a stationary, linear Hilbert space valued process. We establish
Berry-Esseen type results with optimal convergence rates under sharp dependence
conditions on the underlying coefficient sequence of the linear operators. The
case of non-linear Bernoulli-shift sequences is also considered. If the
sequence is $m$-dependent, the optimal rate $(n/m)^{1/2}$ is reached. If the
sequence is weakly geometrically dependent, the rate $(n/\log n)^{1/2}$ is
obtained.
|
We consider a kinetic model of two species of particles interacting with a
reservoir at fixed temperature, described by two coupled Vlasov-Fokker-Planck
equations. We prove that in the diffusive limit the evolution is described by a
macroscopic equation in the form of a gradient flow of the macroscopic free
energy functional. Moreover, we study the sharp interface limit and find by
formal Hilbert expansions that the interface motion is given in terms of a
quasi stationary problem for the chemical potentials. The velocity of the
interface is the sum of two contributions: the velocity of the Mullins-Sekerka
motion for the difference of the chemical potentials and the velocity of a
Hele-Shaw motion for a linear combination of the two potentials. These
equations are identical to the ones derived by Otto and E to model the motion
of a sharp interface in a polymer blend.
|
In a passage retrieval system, the initial passage retrieval results may be
unsatisfactory; they can be refined by a reranking scheme. Existing solutions
to passage reranking focus on enriching the interaction between query and each
passage separately, neglecting the context among the top-ranked passages in the
initial retrieval list. To tackle this problem, we propose a Hybrid and
Collaborative Passage Reranking (HybRank) method, which leverages the
substantial similarity measurements of upstream retrievers for passage
collaboration and incorporates the lexical and semantic properties of sparse
and dense retrievers for reranking. Besides, built on off-the-shelf retriever
features, HybRank is a plug-in reranker capable of enhancing arbitrary passage
lists including previously reranked ones. Extensive experiments demonstrate the
stable improvements of performance over prevalent retrieval and reranking
methods, and verify the effectiveness of the core components of HybRank.
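A toy rendering of the collaborative idea as we read it (the array names and
the plain cosine scorer are ours; the actual HybRank learns its scorer): each
top-k passage is described by its similarities to the whole candidate list
under both a sparse and a dense retriever, and the query is scored in that
same feature space.

```python
# Toy sketch (ours; the actual method learns its scorer) of describing each
# top-k passage by its sparse+dense similarities to the whole candidate list.
import numpy as np

def hybrid_rerank(q_sparse, q_dense, p_sparse, p_dense):
    """q_*: (k,) query-to-passage similarities; p_*: (k, k) passage-to-passage
    similarities, both taken from the sparse and dense upstream retrievers."""
    feat_q = np.concatenate([q_sparse, q_dense])
    feat_p = np.concatenate([p_sparse, p_dense], axis=1)
    feat_q = feat_q / np.linalg.norm(feat_q)
    feat_p = feat_p / np.linalg.norm(feat_p, axis=1, keepdims=True)
    scores = feat_p @ feat_q            # cosine in the collaborative space
    return np.argsort(-scores)          # reranked passage ordering

k = 5
order = hybrid_rerank(np.random.rand(k), np.random.rand(k),
                      np.random.rand(k, k), np.random.rand(k, k))
```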
|
Grant-free access, in which each Internet of Things (IoT) device delivers its
packets through a randomly selected resource without spending time on
handshaking procedures, is a promising solution for supporting the massive
connectivity required for IoT systems. In this paper, we explore grant-free
access with multi-packet reception capabilities, with an emphasis on
ultra-low-end IoT applications with small data sizes, sporadic activity, and
energy usage constraints. We propose a power allocation scheme that integrates
the IoT device's traffic and energy budget by using a stochastic geometry
framework and mean-field game theory to model and analyze mutual interference
among active IoT devices. We also derive a Markov chain model to capture and
track the IoT device's queue length and derive the successful transmission
probability at steady state. Simulation results illustrate the optimal power
allocation strategy and show the effectiveness of the proposed approach.
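A minimal numerical sketch of the queue-length step (the 3-state transition
matrix below is a toy example of ours, not taken from the paper): the
steady-state distribution solves pi P = pi, from which a steady-state success
probability can be read off.

```python
# Minimal sketch: steady-state distribution of a queue-length Markov chain.
# The 3-state transition matrix P is a toy example (ours, not the paper's);
# P[i, j] is the probability of moving from queue length i to queue length j.
import numpy as np

P = np.array([[0.7, 0.3, 0.0],   # arrival/successful-delivery probabilities
              [0.4, 0.4, 0.2],   # would set these entries in the real model
              [0.0, 0.5, 0.5]])

# Solve pi P = pi together with sum(pi) = 1 as one linear system.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # steady-state queue-length distribution
```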
|
Associated with a locally compact group $\cal G$ and a $\cal G$-space $\cal
X$ there is a Banach subspace $LUC({\cal X},{\cal G})$ of $C_b({\cal X})$,
which has been introduced and studied by Lau and Chu in \cite{chulau}. In this
paper, we study some properties of the first dual space of $LUC({\cal X},{\cal
G})$. In particular, we introduce a left action of $LUC({\cal G})^*$ on
$LUC({\cal X},{\cal G})^*$ to make it a Banach left module and then we
investigate the Banach subalgebra ${{\frak{Z}}({\cal X},{\cal G})}$ of
$LUC({\cal G})^*$, as the topological centre related to this module action,
which contains $M({\cal G})$ as a closed subalgebra. Also, we show that the
faithfulness of this module action is related to the properties of the action
of $\cal G$ on $\cal X$ and we extend the main results of Lau~\cite{lau} from
locally compact groups to ${\cal G}$-spaces. Sufficient and/or necessary
conditions for the equality ${{\frak{Z}}({\cal X},{\cal G})}=M({\cal G})$ or
$LUC({\cal G})^*$ are given. Finally, we apply our results to some special
cases of $\cal G$ and $\cal X$ for obtaining various examples whose topological
centres ${{\frak{Z}}({\cal X},{\cal G})}$ are $M({\cal G})$, $LUC({\cal G})^*$
or neither of them.
|
In human-human conversations, Context Tracking deals with identifying
important entities and keeping track of their properties and relationships.
This is a challenging problem that encompasses several subtasks such as slot
tagging, coreference resolution, resolving plural mentions and entity linking.
We approach this problem as an end-to-end modeling task where the
conversational context is represented by an entity repository containing the
entity references mentioned so far, their properties and the relationships
between them. The repository is updated turn-by-turn, thus making training and
inference computationally efficient even for long conversations. This paper
lays the groundwork for an investigation of this framework in two ways. First,
we release Contrack, a large-scale human-human conversation corpus for context
tracking with people and location annotations. It contains over 7000
conversations with an average of 11.8 turns, 5.8 entities and 15.2 references
per conversation. Second, we open-source a neural network architecture for
context tracking. Finally we compare this network to state-of-the-art
approaches for the subtasks it subsumes and report results on the involved
tradeoffs.
|
Deep learning has been successful in BCI decoding. However, it is very
data-hungry and requires pooling data from multiple sources. EEG data from
various sources decrease the decoding performance due to negative transfer.
Recently, transfer learning for EEG decoding has been suggested as a remedy and
has become the subject of recent BCI competitions (e.g. BEETL), but there are
two complications in combining data from many subjects. First, privacy is not
protected as highly personal brain data needs to be shared (and copied across
increasingly tight information governance boundaries). Moreover, BCI data are
collected from different sources and are often based on different BCI tasks,
which has been thought to limit their reusability. Here, we demonstrate a
federated deep transfer learning technique, the Multi-dataset Federated
Separate-Common-Separate Network (MF-SCSN), based on our previous work on SCSN,
which integrates privacy-preserving properties into deep transfer learning to
utilise data sets with different tasks. This framework trains a BCI decoder
using different source data sets obtained from different imagery tasks (e.g.
some data sets with hands and feet, vs others with single hands and tongue,
etc). Therefore, by introducing privacy-preserving transfer learning
techniques, we unlock the reusability and scalability of existing BCI data
sets. We evaluated our federated transfer learning method on the NeurIPS 2021
BEETL competition BCI task. The proposed architecture outperformed the baseline
decoder by 3%. Moreover, compared with the baseline and other transfer learning
algorithms, our method protects the privacy of the brain data from different
data centres.
|
I review both well established and more recent findings on the properties of
bars, and their host galaxies, stemming from photometric and spectroscopic
observations, and discuss how these findings can be understood in terms of a
global picture of the formation and evolution of bars, keeping a connection
with theoretical developments. In particular, I show the results of a detailed
structural analysis of ~ 300 barred galaxies in the Sloan Digital Sky Survey,
providing physical quantities, such as bar length, ellipticity and boxyness,
and bar-to-total luminosity ratio, that can either be used as a solid basis on
which realistic models can be built, or be compared against more fundamental
theoretical results. I also show correlations that indicate that bars grow
longer, thinner and stronger with dynamical age, and that the growth of bars
and bulges is connected. Finally, I briefly discuss open questions and possible
directions for future research.
|
In the middle of the 1980s, David Poole introduced a semantical,
model-theoretic notion of specificity to the artificial-intelligence community.
Since then it has found further applications in non-monotonic reasoning, in
particular in defeasible reasoning. Poole tried to approximate the intuitive
human concept of specificity, which seems to be essential for reasoning in
everyday life with its partial and inconsistent information. His notion,
however, turns out to be intricate and problematic, which --- as we show ---
can be overcome to some extent by a closer approximation of the intuitive human
concept of specificity. Besides the intuitive advantages of our novel
specificity ordering over Poole's specificity relation in the classical
examples of the literature, we also report some hard mathematical facts:
Contrary to what was claimed before, we show that Poole's relation is not
transitive. The present means to decide our novel specificity relation,
however, show only a slight improvement over the known ones for Poole's
relation, and further work is needed in this aspect.
|
The three-dimensional equations for the compressible flow of liquid crystals
are considered. An initial-boundary value problem is studied in a bounded
domain with large data. The existence and large-time behavior of a global weak
solution are established through a three-level approximation, energy estimates,
and weak convergence for the adiabatic exponent $\gamma>\frac32$.
|
We report on the high-pressure synthesis of novel nano- and microcrystalline
high-quality diamonds with luminescent Ge-related centers. Observation of the
four-line fine structure in luminescence at 2 eV (602 nm) at temperatures below
80 K manifests a high quality of diamonds. We demonstrate germanium and carbon
isotope shifts in the fine structure of luminescence at 602 nm and its
vibrational sideband which allows us to unambiguously associate the center with
the germanium impurity entering into the diamond lattice. We show that there
are two ground-state energy levels with the separation of 0.7 meV and two
excited-state levels separated by 4.6 meV in the electronic structure of the
center and suggest a split-vacancy structure of this center.
|
Trip recommendation is a significant and engaging location-based service that
can help new tourists make more customized travel plans. It often attempts to
suggest a sequence of points of interest (POIs) for a user with a personalized
travel demand. Conventional methods either leverage heuristic
algorithms (e.g., dynamic programming) or statistical analysis (e.g., Markov
models) to search or rank a POI sequence. These procedures may fail to capture
the diversity of human needs and transitional regularities. They even provide
recommendations that deviate from tourists' real travel intention when the trip
data is sparse. Although recent deep recurrent models (e.g., RNNs) are capable
of alleviating these concerns, existing solutions hardly recognize the
practical reality, such as the diversity of tourist demands, uncertainties in
the trip generation, and the complex visiting preference. Inspired by the
advance in deep learning, we introduce a novel self-supervised representation
learning framework for trip recommendation -- SelfTrip, aiming at tackling the
aforementioned challenges. Specifically, we propose a two-step contrastive
learning mechanism concerning the POI representation, as well as trip
representation. Furthermore, we present four trip augmentation methods to
capture the visiting uncertainties in trip planning. We evaluate our SelfTrip
on four real-world datasets, and extensive results demonstrate the promising
gain compared with several cutting-edge benchmarks, e.g., up to 4% and 12% on
F1 and pair-F1, respectively.
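A minimal sketch of what one trip augmentation might look like (our
illustration; the paper proposes four specific methods that need not coincide
with this one): a trip is a POI sequence, and a contrastive view is produced
by randomly masking or substituting POIs.

```python
# Minimal sketch (our illustration; the paper's four augmentations need not
# coincide with this one): build a contrastive "view" of a trip by randomly
# masking or substituting POIs in the sequence.
import random

MASK = -1  # id of a special mask token (assumption)

def augment_trip(trip, poi_vocab, p_mask=0.2, p_sub=0.1):
    """trip: list of POI ids; poi_vocab: list of all POI ids."""
    view = []
    for poi in trip:
        r = random.random()
        if r < p_mask:
            view.append(MASK)                      # hide the visit's identity
        elif r < p_mask + p_sub:
            view.append(random.choice(poi_vocab))  # model visiting uncertainty
        else:
            view.append(poi)
    return view

trip, vocab = [3, 17, 42, 8], list(range(100))
view1, view2 = augment_trip(trip, vocab), augment_trip(trip, vocab)
```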
|
In real-world applications, speaker recognition models often face various
domain-mismatch challenges, leading to a significant drop in performance.
Although numerous domain adaptation techniques have been developed to address
this issue, almost all existing methods focus on a simple configuration where
the model is trained in one domain and deployed in another. However, real-world
environments are often complex and may contain multiple domains, making the
methods designed for one-to-one adaptation suboptimal. In our paper, we propose
a self-supervised learning method to tackle this multi-domain adaptation
problem. Building upon the basic self-supervised adaptation algorithm, we
designed three strategies to make it suitable for multi-domain adaptation: an
in-domain negative sampling strategy, a MoCo-like memory bank scheme, and a
CORAL-like distribution alignment. We conducted experiments using VoxCeleb2 as
the source domain dataset and CN-Celeb1 as the target multi-domain dataset. Our
results demonstrate that our method clearly outperforms the basic
self-supervised adaptation method, which simply treats the data of CN-Celeb1 as
a single domain. Importantly, the improvement is consistent in nearly all
in-domain tests and cross-domain tests, demonstrating the effectiveness of our
proposed method.
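A compact sketch of the CORAL-like alignment term as we read it (the tensor
names and batch sizes are ours): the covariances of source- and target-domain
embedding batches are matched in Frobenius norm.

```python
# Compact sketch (tensor names ours) of a CORAL-style distribution alignment:
# match second-order statistics of source and target embedding batches.
import torch

def coral_loss(src, tgt):
    """src: (n_s, d) source embeddings; tgt: (n_t, d) target embeddings."""
    def cov(x):
        xm = x - x.mean(dim=0, keepdim=True)
        return xm.t() @ xm / (x.size(0) - 1)
    d = src.size(1)
    return ((cov(src) - cov(tgt)) ** 2).sum() / (4 * d * d)

loss = coral_loss(torch.randn(64, 192), torch.randn(64, 192))
```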
|
We characterize the simplicity of Pimsner algebras for non-proper
C*-correspondences. With the aid of this criterion, we give a systematic
strategy to produce outer actions of unitary tensor categories on Kirchberg
algebras. In particular, every countable unitary tensor category admits an
outer action on the Cuntz algebra $\mathcal{O}_2$. We also discuss the
realizability of modules over fusion rings as K-groups of Kirchberg algebras
acted on by unitary tensor categories. Several new examples are provided, among
which actions on Cuntz algebras of 3-cocycle twists of cyclic groups are
constructed for all possible 3-cohomological classes by answering a question
asked by Izumi.
|
We study the blow-down map in cohomology in the context of real projective
blowups of Lie algebroids. Using the blow-down map in cohomology we compute the
Lie algebroid cohomology of the blowup of transversals of arbitrary
codimension, generalising the Mazzeo-Melrose theorem on b-cohomology. To prove
the result we develop a Gysin sequence for Lie algebroids. As another example
we use the developed tools to compute the Lie algebroid cohomology of the
action Lie algebroid $\mathfrak{so}(3)\ltimes \mathbb{R}^3$, a result known in
Poisson geometry literature. Moreover, we use similar techniques to compute the
de Rham cohomology of real projective blowups.
|
The space and run-time requirements of broad-coverage grammars appear, for
many applications, unreasonably large in relation to the relative simplicity
of the task at hand. On the other hand, handcrafted development of
application-dependent grammars is in danger of duplicating work which is then
difficult to re-use in other contexts of application. To overcome this problem,
we present in this paper a procedure for the automatic extraction of
application-tuned consistent subgrammars from proven large-scale generation
grammars. The procedure has been implemented for large-scale systemic grammars
and builds on the formal equivalence between systemic grammars and typed
unification based grammars. Its evaluation for the generation of encyclopedia
entries is described, and directions of future development, applicability, and
extensions are discussed.
|
In classical mechanics, the 'geometry of motion' refers to a technique for
visualizing the motion of freely spinning bodies. In this paper, such an
approach to studying the rotational motion of axisymmetric variable mass
systems is developed. An analytic solution for the second Euler angle
characterising
nutation naturally falls out of this method, without explicitly solving the
nonlinear differential equations of motion. This is used to examine the coning
motion of a free axisymmetric cylinder subject to three idealized models of
mass loss and new insight into their rotational stability is presented. It is
seen that the angular speeds for some configurations of these cylinders grow
without bounds. In spite of this phenomenon, all configurations explored here
are seen to exhibit nutational stability, a desirable property in solid rocket
motors.
|
We present a model-independent calculation of hadron matrix elements for all
dimension-six operators associated with baryon number violating processes using
lattice QCD. The calculation is performed with the Wilson quark action in the
quenched approximation at $\beta=6/g^2=6.0$ on a $28^2\times 48\times 80$
lattice. Our results cover all the matrix elements required to estimate the
partial lifetimes of (proton,neutron)$\to$($\pi,K,\eta$) +(${\bar
\nu},e^+,\mu^+$) decay modes. We point out the necessity of disentangling two
form factors that contribute to the matrix element; previous calculations did
not make the separation, which led to an underestimate of the physical matrix
elements. With a correct separation, we find that the matrix elements have
values 3-5 times larger than the smallest estimates employed in
phenomenological analyses of the nucleon decays, which could give strong
constraints on several GUT models. We also find that the values of the matrix
elements are comparable with the tree-level predictions of the chiral
Lagrangian.
|
We show that a reduct of the Zariski structure of an algebraic curve which is
not locally modular interprets a field, answering a question of Zilber's.
|