Background: To prevent future outbreaks of COVID-19, Australia is pursuing a
mass-vaccination approach in which a targeted group of the population
comprising healthcare workers, aged-care residents and other individuals at
increased risk of exposure will receive a highly effective priority vaccine.
The rest of the population will instead have access to a less effective
vaccine.
Methods: We apply a large-scale agent-based model of COVID-19 in Australia to
investigate the possible implications of this hybrid approach to
mass-vaccination. The model is calibrated to recent epidemiological and
demographic data available in Australia, and accounts for several components of
vaccine efficacy.
Findings: Within a feasible range of vaccine efficacy values, our model
supports the assertion that complete herd immunity due to vaccination is not
likely in the Australian context. For realistic scenarios in which herd
immunity is not achieved, we simulate the effects of mass-vaccination on
epidemic growth rate, and investigate the requirements of lockdown measures
applied to curb subsequent outbreaks. In our simulations, Australia's
vaccination strategy can feasibly reduce required lockdown intensity and
initial epidemic growth rate by 43% and 52%, respectively. The severity of
epidemics, as measured by the peak number of daily new cases, decreases by up
to two orders of magnitude under plausible mass-vaccination and lockdown
strategies.
Interpretation: The study presents a strong argument for a large-scale
vaccination campaign in Australia, which would substantially reduce both the
intensity of future outbreaks and the stringency of non-pharmaceutical
interventions required for their suppression.
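The qualitative effect described above can be caricatured with a toy agent-based SIR simulation (a minimal sketch, not the calibrated Australian model; all rates, coverage, and efficacy values below are illustrative):

```python
import random

def simulate(n_agents=2000, days=60, beta=0.05, contacts=10,
             recovery=0.2, vax_coverage=0.0, vax_efficacy=0.0, seed=1):
    """Toy SIR agent model; vaccination scales a contact's infection
    probability by (1 - efficacy). States: 0=S, 1=I, 2=R."""
    rng = random.Random(seed)
    state = [0] * n_agents
    vaxed = [rng.random() < vax_coverage for _ in range(n_agents)]
    for i in range(10):                      # seed infections
        state[i] = 1
    peak = 0
    for _ in range(days):
        new_cases = 0
        infectious = [i for i, s in enumerate(state) if s == 1]
        for i in infectious:
            for _ in range(contacts):        # random mixing
                j = rng.randrange(n_agents)
                if state[j] == 0:
                    p = beta * ((1 - vax_efficacy) if vaxed[j] else 1.0)
                    if rng.random() < p:
                        state[j] = 1
                        new_cases += 1
            if rng.random() < recovery:
                state[i] = 2
        peak = max(peak, new_cases)
    return peak

peak_novax = simulate()
peak_vax = simulate(vax_coverage=0.9, vax_efficacy=0.9)
```

With these illustrative rates the vaccinated scenario flattens the outbreak, echoing the reduction in peak daily cases reported in the abstract.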
|
The study of network structural controllability focuses on the minimum number
of driver nodes needed to control a whole network. Despite intensive study of
this topic, most existing work considers static networks only. It is well-known,
however, that real networks are growing, with new nodes and links added to the
system. Here, we analyze controllability of evolving networks and propose a
general rule for the change of driver nodes. We further apply the rule to solve
the problem of network augmentation subject to the controllability constraint.
The findings fill a gap in our understanding of network controllability and
shed light on controllability of real systems.
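For background, in structural controllability the minimum number of driver nodes equals N minus the size of a maximum matching over the directed edges (the classic Liu–Slotine–Barabási result); a minimal sketch, with the matching computed by augmenting paths:

```python
def min_driver_nodes(n, edges):
    """Minimum driver nodes = n - maximum matching of the bipartite
    graph (out-copies -> in-copies) built from the directed edges."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match = {}            # in-copy v -> out-copy u

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    matching = sum(augment(u, set()) for u in range(n))
    return max(n - matching, 1)   # a fully matched network still needs one driver

# Directed path 0 -> 1 -> 2: fully matched, one driver suffices.
print(min_driver_nodes(3, [(0, 1), (1, 2)]))          # -> 1
# Star 0 -> {1, 2, 3}: only one edge can be matched, three drivers.
print(min_driver_nodes(4, [(0, 1), (0, 2), (0, 3)]))  # -> 3
```

Tracking how this count changes as nodes and links are added is exactly the evolving-network setting the abstract addresses.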
|
This article presents the results of investigations using topic modeling of
the Voynich Manuscript (Beinecke MS408). Topic modeling is a set of
computational methods which are used to identify clusters of subjects within
text. We use latent Dirichlet allocation, latent semantic analysis, and
nonnegative matrix factorization to cluster Voynich pages into 'topics'. We
then compare the topics derived from the computational models to clusters
derived from the Voynich illustrations and from paleographic analysis. We find
that the computationally derived clusters correspond closely to a conjunction of scribe
and subject matter (as per the illustrations), providing further evidence that
the Voynich Manuscript contains meaningful text.
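As a rough illustration of the clustering step (not the authors' pipeline), nonnegative matrix factorization can assign pages to topics via multiplicative updates; the toy page-term matrix below is invented:

```python
import numpy as np

def nmf_topics(X, n_topics, n_iter=200, seed=0):
    """Factorize a nonnegative page-term matrix X ~= W @ H with
    multiplicative updates (Lee & Seung); rows of W give topic
    loadings per page, rows of H give term weights per topic."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, n_topics)) + 1e-3
    H = rng.random((n_topics, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Two obvious 'topics': pages drawing on disjoint vocabularies.
X = np.array([[5, 4, 0, 0],
              [4, 6, 0, 0],
              [0, 0, 3, 5],
              [0, 0, 4, 4]], float)
W, H = nmf_topics(X, 2)
labels = W.argmax(axis=1)        # cluster pages by dominant topic
```

On real data, the page clusters obtained this way are what get compared against illustration- and scribe-based groupings.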
|
Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent
scalability in Natural Language Processing. In Computer Vision, however, almost
all performant networks are "dense", that is, every input is processed by every
parameter. We present a Vision MoE (V-MoE), a sparse version of the Vision
Transformer, that is scalable and competitive with the largest dense networks.
When applied to image recognition, V-MoE matches the performance of
state-of-the-art networks, while requiring as little as half of the compute at
inference time. Further, we propose an extension to the routing algorithm that
can prioritize subsets of each input across the entire batch, leading to
adaptive per-image compute. This allows V-MoE to trade off performance and
compute smoothly at test time. Finally, we demonstrate the potential of V-MoE
to scale vision models, and train a 15B parameter model that attains 90.35% on
ImageNet.
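The routing extension can be caricatured as follows (an illustrative sketch, not the published algorithm): tokens are processed in order of routing confidence, so that when an expert's buffer fills, the least confident tokens are the ones dropped:

```python
import numpy as np

def prioritized_routing(gates, capacity):
    """Assign each token to its top-scoring expert, visiting tokens in
    order of decreasing routing confidence; tokens whose expert is
    already at capacity are skipped (computed for free as zeros)."""
    n_tokens, n_experts = gates.shape
    top = gates.argmax(axis=1)
    priority = np.argsort(-gates.max(axis=1))   # most confident first
    load = np.zeros(n_experts, int)
    assign = np.full(n_tokens, -1)              # -1 = token skipped
    for t in priority:
        e = top[t]
        if load[e] < capacity:
            assign[t] = e
            load[e] += 1
    return assign

rng = np.random.default_rng(0)
gates = rng.random((8, 2))                      # 8 tokens, 2 experts
assign = prioritized_routing(gates, capacity=3)
```

Shrinking `capacity` is what yields the adaptive per-image compute described above: less informative tokens are dropped first.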
|
A recent proposal for a superdeterministic account of quantum mechanics,
named Invariant-set theory, appears to bring ideas from several diverse fields
like chaos theory, number theory and dynamical systems to quantum foundations.
However, a clear-cut hidden-variable model has not been developed, which makes
it difficult to assess the proposal from a quantum foundational perspective. In
this article, we first build a hidden-variable model based on the proposal, and
then critically analyse several aspects of the proposal using the model. We
show that several arguments related to counterfactual measurements,
nonlocality, non-commutativity of quantum observables, measurement
independence, and so on, that appear to work in the proposal fail when considered in our model.
We further show that our model is not only superdeterministic but also
nonlocal, with an ontic quantum state. We argue that the bit string defined in
the model is a hidden variable and that it contains redundant information.
Lastly, we apply the analysis developed in a previous work (Proc. R. Soc. A,
476(2243):20200214, 2020) to illustrate the issue of superdeterministic
conspiracy in the model. Our results lend further support to the view that
superdeterminism is unlikely to solve the puzzle posed by the Bell
correlations.
|
We consider statistical models arising from the common set of solutions to a
sparse polynomial system with general coefficients. The maximum likelihood
degree counts the number of critical points of the likelihood function
restricted to the model.
We prove the maximum likelihood degree of a sparse polynomial system is
determined by its Newton polytopes and equals the mixed volume of a related
Lagrange system of equations. As a corollary, we find that the algebraic degree
of several optimization problems is equal to a similar mixed volume.
|
Single crystals of SrMn2P2 and CaMn2P2 were grown using Sn flux and
characterized by single-crystal x-ray diffraction, electrical resistivity rho,
heat capacity Cp, and magnetic susceptibility chi = M/H measurements versus
temperature T and magnetization M versus applied magnetic field H isotherm
measurements. The x-ray diffraction results show that both compounds adopt the
trigonal CaAl2Si2-type structure. The rho(T) measurements demonstrate
insulating ground states for both compounds. The chi(T) and Cp(T) data reveal a
weak first-order antiferromagnetic (AFM) transition at the Neel temperature TN
= 53(1) K for SrMn2P2 and a strong first-order AFM transition at TN = 69.8(3) K
for CaMn2P2. Both compounds show an isotropic and nearly T-independent chi(T <
TN). $^{31}$P NMR measurements confirm the strong first-order transition in
CaMn2P2 but show critical slowing down near TN for SrMn2P2 thus evidencing
second-order character. The NMR measurements also indicate that the AFM
structure of CaMn2P2 is commensurate with the lattice whereas that of SrMn2P2
is incommensurate. These first-order AFM transitions are unique among the class
of trigonal (Ca, Sr, Ba)Mn2(P, As, Sb, Bi)2 compounds, which otherwise exhibit
second-order AFM transitions. This result presents a challenge to understand
the systematics of magnetic ordering in this class of materials in which
magnetically-frustrated antiferromagnetism is quasi-two-dimensional.
|
We used a convolutional neural network to infer stellar rotation periods from
a set of synthetic light curves simulated with realistic spot evolution
patterns. We convolved these simulated light curves with real TESS light curves
containing minimal intrinsic astrophysical variability to allow the network to
learn TESS systematics and estimate rotation periods despite them. In addition
to periods, we predict uncertainties via heteroskedastic regression to estimate
the credibility of the period predictions. In the most credible half of the
test data, we recover 10%-accurate periods for 46% of the targets, and
20%-accurate periods for 69% of the targets. Using our trained network, we
successfully recover periods of real stars with literature rotation
measurements, even past the 13.7-day limit generally encountered by TESS
rotation searches using conventional period-finding techniques. Our method also
demonstrates resistance to half-period aliases. We present the neural network
and simulated training data, and introduce the software butterpy used to
synthesize the light curves using realistic star spot evolution.
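Heteroskedastic regression of this kind typically minimizes a Gaussian negative log-likelihood in which the network predicts a per-sample variance alongside the period; a minimal sketch with invented numbers:

```python
import numpy as np

def hetero_nll(y, mu, log_var):
    """Gaussian negative log-likelihood with a per-sample predicted
    variance; a large predicted variance down-weights the squared error
    but pays a log-variance penalty, so the network is rewarded for
    being uncertain exactly where it misses."""
    var = np.exp(log_var)
    return float(np.mean(0.5 * (log_var + (y - mu) ** 2 / var)))

y  = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.9, 5.0])                 # last prediction badly misses
low_var = np.full(3, -2.0)                     # confident everywhere
hedged  = np.array([-2.0, -2.0, np.log(4.0)])  # uncertain on the miss
loss_confident = hetero_nll(y, mu, low_var)
loss_hedged    = hetero_nll(y, mu, hedged)
```

The predicted variance is then usable as the credibility score by which the abstract ranks the "most credible half" of the test data.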
|
We pursue a novel strategy towards a first detection of continuous
gravitational waves from rapidly-rotating deformed neutron stars. Computational
power is focused on a narrow region of signal parameter space selected by a
strategically-chosen benchmark. We search data from the 2nd observing run of
the LIGO Observatory with an optimised analysis run on graphics processing
units. While no continuous waves are detected, the search achieves a
sensitivity to gravitational wave strain of $h_0 = 1.01{\times}10^{-25}$ at 90%
confidence, 24% to 69% better than past searches of the same parameter space.
Constraints on neutron star deformity are within theoretical maxima, thus a
detection by this search was not inconceivable.
|
A high imbalance exists between technical debt and non-technical debt source
code comments. Such imbalance affects Self-Admitted Technical Debt (SATD)
detection performance, and existing literature lacks empirical evidence on the
choice of balancing technique. In this work, we evaluate the impact of multiple
balancing techniques, including Data level, Classifier level, and Hybrid, for
SATD detection in Within-Project and Cross-Project setups. Our results show that
the Data-level balancing technique SMOTE or the Classifier-level ensemble
approaches Random Forest and XGBoost are reasonable choices, depending on whether
the goal is to maximize Precision, Recall, F1, or AUC-ROC. We compared our
best-performing model with the previous SATD detection benchmark
(a cost-sensitive Convolutional Neural Network). Interestingly, the
top-performing XGBoost with SMOTE sampling improved the Within-Project F1 score
by 10% but fell short in the Cross-Project setup by 9%. This supports the
higher generalization capability of deep learning in Cross-Project SATD
detection, whereas classical machine learning algorithms can deliver better
performance within individual projects. We also evaluate and quantify the impact of
duplicate source code comments in SATD detection performance. Finally, we
employ SHAP and discuss the interpreted SATD features. We have included the
replication package and shared a web-based SATD prediction tool with the
balancing techniques in this study.
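For reference, the SMOTE balancing technique synthesizes minority-class samples by interpolating between nearby minority points; a bare-bones sketch (the toy feature vectors are invented, and a real pipeline would use the imbalanced-learn library):

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: synthesize minority samples by linear
    interpolation between a random minority point and one of its k
    nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]       # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                  # interpolation fraction
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Balance a toy SATD comment-feature set: 4 minority vs 20 majority.
X_min = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
X_syn = smote(X_min, n_new=16)
```

Because the synthetic points are convex combinations of real minority samples, they stay inside the minority region of feature space.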
|
Quantum dots are arguably one of the best platforms for optically accessible
spin-based qubits. The paramount demand of extended qubit storage time can be
met by using a quantum-dot-confined dark exciton: a long-lived electron-hole
pair with parallel spins. Despite its name, the dark exciton reveals weak
luminescence that can be directly measured. The origins of this optical
activity remain largely unexplored. In this work, using the atomistic
tight-binding method combined with configuration-interaction approach, we
demonstrate that atomic-scale randomness strongly affects oscillator strength
of dark excitons confined in self-assembled cylindrical InGaAs quantum dots
with no need for faceting or shape-elongation. We show that this process is
mediated by two mechanisms: the mixing of dark and bright configurations by the
exchange interaction and, equally importantly, the appearance of non-vanishing
optical transition matrix elements that otherwise correspond to nominally forbidden
transitions in a non-alloyed case. The alloy randomness has essential impact on
both bright and dark exciton states, including their energy, emission
intensity, and polarization angle. We conclude that, due to the atomic-scale
alloy randomness, finding dots with desired dark exciton properties may require
exploration of a large ensemble, similarly to how dots with low bright exciton
splitting are selected for entanglement generation.
|
Superpixels are generated by automatically clustering the pixels of an image
into hundreds of compact partitions, and are widely used to delineate object
contours owing to their excellent contour adherence. Although some works use the
Convolutional Neural Network (CNN) to generate high-quality superpixels, we
challenge the design principles of these networks, specifically for their
dependence on manual labels and excess computation resources, which limits
their flexibility compared with the traditional unsupervised segmentation
methods. We aim to redefine CNN-based superpixel segmentation as a
lifelong clustering task and propose an unsupervised CNN-based method called
LNS-Net. The LNS-Net can learn superpixels in a non-iterative and lifelong
manner without any manual labels. Specifically, a lightweight feature embedder
is proposed for LNS-Net to efficiently generate the cluster-friendly features.
With those features, seed nodes can be automatically assigned to cluster pixels
in a non-iterative way. Additionally, our LNS-Net can adapt to sequential
lifelong learning by rescaling the weight gradients based on both channel and
spatial context to avoid overfitting. Experiments show that the proposed
LNS-Net achieves significantly better performance on three benchmarks with
nearly ten times lower complexity compared with other state-of-the-art methods.
|
In the May-Leonard model of three cyclically competing species, we analyze
the statistics of rare events in which all three species go extinct due to
strong but rare fluctuations. These fluctuations are from the tails of the
probability distribution of species concentrations. They render a coexistence
of three populations unstable even if the coexistence is stable in the
deterministic limit. We determine the mean time to extinction (MTE) by using a
WKB-ansatz in the master equation that represents the stochastic description of
this model. This way, the calculation is reduced to a problem of classical
mechanics and amounts to solving a Hamilton-Jacobi equation with zero-energy
Hamiltonian. We solve the corresponding Hamilton's equations of motion in
six-dimensional phase space numerically by using the Iterative Action
Minimization Method. This allows us to project onto the optimal path to extinction,
starting from a parameter choice where the three-species coexistence-fixed
point undergoes a Hopf bifurcation and becomes stable. Specifically for our
system of three species, extinction events can be triggered along various paths
to extinction, differing in their intermediate steps. We compare our analytical
predictions with results from Gillespie simulations for two-species
extinctions, complemented by an analytical calculation of the MTE in which the
remaining third species goes extinct. From Gillespie simulations we also
analyze how the distributions of times to extinction change upon varying the
bifurcation parameter. Even within the same model and the same dynamical
regime, the MTE depends on the distance from the bifurcation point in a way
that contains the system size dependence in the exponent. It is challenging and
worthwhile to quantify how rare the rare events of extinction are.
|
Extracting critical behavior in the wake of quantum quenches has recently
been at the forefront of theoretical and experimental investigations in
condensed matter physics and quantum synthetic matter, with particular emphasis
on experimental feasibility. Here, we investigate the potential of single-site
observables in probing equilibrium phase transitions and dynamical criticality
in short-range transverse-field Ising chains. For integrable and
near-integrable models, our exact and mean-field-theory analyses reveal a truly
out-of-equilibrium universal scaling exponent in the vicinity of the transition
that is independent of the initial state and the location of the probe site so
long as the latter is sufficiently close to the edge of the chain. A signature
of a dynamical crossover survives when integrability is strongly broken. Our work
provides a robust scheme for the experimental detection of quantum critical
points and dynamical scaling laws in short-range interacting models using
modern ultracold-atom setups.
|
A common concern in observational studies is how to properly evaluate the
causal effect, which usually refers to the average treatment effect or the
average treatment effect on the treated. In this paper, we propose a data
preprocessing method, the Kernel-distance-based covariate balancing, for
observational studies with binary treatments. This proposed method yields a set
of unit weights for the treatment and control groups, respectively, such that
the reweighted covariate distributions can satisfy a set of pre-specified
balance conditions. This preprocessing methodology can effectively reduce
confounding bias of subsequent estimation of causal effects. We demonstrate the
implementation and performance of Kernel-distance-based covariate balancing
with Monte Carlo simulation experiments and a real data analysis.
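The weighting idea can be illustrated with a much simpler stand-in for the kernel-distance method: least-norm adjustments to uniform control-group weights that exactly balance the first moments of the covariates (the data below are synthetic, and real balancing methods additionally enforce non-negative weights and higher-order balance):

```python
import numpy as np

def balance_weights(X_ctrl, target_mean):
    """Least-norm adjustment of uniform control weights so that the
    weighted covariate means equal `target_mean` (the treated means)
    and the weights sum to one, via a small linear solve."""
    n, d = X_ctrl.shape
    B = np.vstack([X_ctrl.T, np.ones((1, n))])   # moment + sum-to-one rows
    b = np.append(target_mean, 1.0)
    w0 = np.full(n, 1.0 / n)
    lam = np.linalg.lstsq(B @ B.T, b - B @ w0, rcond=None)[0]
    return w0 + B.T @ lam

rng = np.random.default_rng(0)
X_ctrl = rng.normal(size=(200, 2))               # control covariates
target = np.array([0.3, -0.2])                   # treated-group means
w = balance_weights(X_ctrl, target)
```

After reweighting, the control covariate means match the treated means exactly, which is the kind of pre-specified balance condition the abstract describes.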
|
Apart from the role the clustering coefficient plays in the definition of the
small-world phenomena, it also has great relevance for practical problems
involving networked dynamical systems. To study the impact of the clustering
coefficient on dynamical processes taking place on networks, some authors have
focused on the construction of graphs with tunable clustering coefficients.
These constructions are usually realized through a stochastic process, either
by growing a network through the preferential attachment procedure, or by
applying a random rewiring process. In contrast, we consider here several
families of static graphs whose clustering coefficients can be determined
explicitly. The basis for these families is formed by the $k$-regular graphs on
$N$ nodes, that belong to the family of so-called circulant graphs denoted by
$C_{N,k}$. We show that the expression for the clustering coefficient of
$C_{N,k}$ reported in the literature only holds for sufficiently large $N$. Next,
we consider three generalizations of the circulant graphs, either by adding
some pendant links to $C_{N,k}$, or by connecting, in two different ways, an
additional node to some nodes of $C_{N,k}$. For all three generalizations, we
derive explicit expressions for the clustering coefficient. Finally, we
construct a family of pairs of generalized circulant graphs, with the same
number of nodes and links, but with different clustering coefficients.
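The finite-$N$ deviation mentioned above is easy to check numerically under one common convention (assumed here, and possibly differing from the paper's $C_{N,k}$): each ring node links to its $k$ nearest neighbours on each side, degree $2k$, with the textbook large-$N$ limit $3(k-1)/(2(2k-1))$:

```python
def clustering_ring(N, k):
    """Clustering coefficient of the circulant graph in which each node
    links to its k nearest neighbours on each side (degree 2k); by
    vertex-transitivity it suffices to examine node 0."""
    def adjacent(i, j):
        d = abs(i - j) % N
        return 0 < min(d, N - d) <= k
    nbrs = [j for j in range(1, N) if adjacent(0, j)]
    links = sum(adjacent(u, v) for a, u in enumerate(nbrs)
                for v in nbrs[a + 1:])
    pairs = len(nbrs) * (len(nbrs) - 1) // 2
    return links / pairs

# Large-N limit: 3(k-1) / (2(2k-1)) = 0.6 for k = 3.
print(clustering_ring(1000, 3))   # -> 0.6
print(clustering_ring(8, 3))      # -> 0.8, deviating from the limit
```

The small-$N$ value exceeds the limiting formula because, on a short ring, neighbours that sit on opposite sides of a node can still be within distance $k$ of each other.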
|
In this paper we study lumpy black holes with AdS${}_p \times S^q$
asymptotics, where the isometry group coming from the sphere factor is broken
down to SO($q$). Depending on the values of $p$ and $q$, these are solutions to
a certain Supergravity theory with a particular gauge field. We have considered
the values $(p,q) = (5,5)$ and $(p,q) = (4,7)$, corresponding to type IIB
supergravity in ten dimensions and eleven-dimensional supergravity
respectively. These theories presumably contain an infinite spectrum of
families of lumpy black holes, labeled by a harmonic number $\ell$, whose
endpoints in solution space merge with another type of black hole with a
different horizon topology. We have numerically constructed the first four
families of lumpy solutions, corresponding to $\ell = 1, 2^+, 2^-$ and $3$. We
show that the geometry of the horizon near the merger is well-described by a
cone over a triple product of spheres, thus extending Kol's local model to the
present asymptotics. Interestingly, the presence of non-trivial fluxes in the
internal sphere implies that the cone is no longer Ricci flat. This conical
manifold accounts for the geometry and the behavior of the physical quantities
of the solutions sufficiently close to the critical point. Additionally, we
show that the vacuum expectation values of the dual scalar operators approach
their critical values with a power law whose exponents are dictated by the
local cone geometry in the bulk.
|
We propose an efficient framework of reversible data hiding to preserve
compatibility between normal printing and printing with a special color ink by
using a single common image. The special color layer is converted to a binary
image by digital halftoning and losslessly compressed using JBIG2. Then, the
compressed information of the binarized special color layer is reversibly
embedded into the general color layer without significant distortion. Our
experimental results demonstrate the effectiveness of the proposed method in
terms of the marked image quality.
|
Two-dimensional (2D) palladium ditelluride (PdTe2) and platinum ditelluride
(PtTe2) are two Dirac semimetals which demonstrate fascinating quantum
properties such as superconductivity, magnetism and topological order,
illustrating promising applications in future nanoelectronics and
optoelectronics. However, the synthesis of their monolayers is dramatically
hindered by strong interlayer coupling and orbital hybridization. In this
study, an efficient synthesis method for monolayer PdTe2 and PtTe2 is
demonstrated. Taking advantage of the surface reaction, epitaxial growth of
large-area, high-quality monolayers of PdTe2 and patterned PtTe2 is achieved
by direct tellurization of Pd(111) and Pt(111). A well-ordered PtTe2 pattern
with a Kagome lattice formed by Te vacancy arrays is successfully grown.
Moreover, multilayer PtTe2 can also be obtained, and potential excitation of
Dirac plasmons is observed. The simple and reliable growth procedure of
monolayer PdTe2 and patterned PtTe2 gives unprecedented opportunities for
investigating new quantum phenomena and facilitating practical applications in
optoelectronics.
|
We present Sandwich Batch Normalization (SaBN), a frustratingly easy
improvement of Batch Normalization (BN) with only a few lines of code changes.
SaBN is motivated by addressing the inherent feature distribution heterogeneity
that can be identified in many tasks, which can arise from data
heterogeneity (multiple input domains) or model heterogeneity (dynamic
architectures, model conditioning, etc.). Our SaBN factorizes the BN affine
layer into one shared sandwich affine layer, cascaded by several parallel
independent affine layers. Concrete analysis reveals that, during optimization,
SaBN promotes balanced gradient norms while still preserving diverse gradient
directions -- a property that many application tasks seem to favor. We
demonstrate the prevailing effectiveness of SaBN as a drop-in replacement in
four tasks: conditional image generation, neural architecture search (NAS),
adversarial training, and arbitrary style transfer. Leveraging SaBN immediately
achieves better Inception Score and FID on CIFAR-10 and ImageNet conditional
image generation with three state-of-the-art GANs; boosts the performance of a
state-of-the-art weight-sharing NAS algorithm significantly on NAS-Bench-201;
substantially improves the robust and standard accuracies for adversarial
defense; and produces superior arbitrary stylized results. We also provide
visualizations and analysis to help understand why SaBN works. Codes are
available at: https://github.com/VITA-Group/Sandwich-Batch-Normalization.
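The factorization can be sketched in a few lines (an illustrative NumPy version, not the released implementation): normalization, one shared affine transform, then a branch-specific affine chosen per input domain:

```python
import numpy as np

def sandwich_bn(x, gamma_sh, beta_sh, gammas, betas, branch, eps=1e-5):
    """Sandwich BN sketch: batch-normalize, apply one shared affine
    layer, then a branch-specific affine selected by `branch`
    (e.g. the input domain or the sampled architecture)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    shared = gamma_sh * x_hat + beta_sh
    return gammas[branch] * shared + betas[branch]

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))                  # batch of 16, 8 features
gamma_sh, beta_sh = np.ones(8), np.zeros(8)   # shared affine (identity here)
gammas = [np.full(8, 0.5), np.full(8, 2.0)]   # two domain-specific affines
betas  = [np.zeros(8), np.ones(8)]
y0 = sandwich_bn(x, gamma_sh, beta_sh, gammas, betas, branch=0)
y1 = sandwich_bn(x, gamma_sh, beta_sh, gammas, betas, branch=1)
```

The shared layer absorbs what the branches have in common while the parallel affines absorb the heterogeneity, which is the intuition behind the balanced-gradient-norm property described above.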
|
Flat band moir\'e superlattices have recently emerged as unique platforms for
investigating the interplay between strong electronic correlations, nontrivial
band topology, and multiple isospin 'flavor' symmetries. Twisted
monolayer-bilayer graphene (tMBG) is an especially rich system owing to its low
crystal symmetry and the tunability of its bandwidth and topology with an
external electric field. Here, we find that orbital magnetism is abundant
within the correlated phase diagram of tMBG, giving rise to the anomalous Hall
effect (AHE) in correlated metallic states near most odd integer fillings of
the flat conduction band, as well as correlated Chern insulator states
stabilized in an external magnetic field. The behavior of the states at zero
field appears to be inconsistent with simple spin and valley polarization for
the specific range of twist angles we investigate, and instead may plausibly
result from an intervalley coherent (IVC) state with an order parameter that
breaks time reversal symmetry. The application of a magnetic field further
tunes the competition between correlated states, in some cases driving
first-order topological phase transitions. Our results underscore the rich
interplay between closely competing correlated ground states in tMBG, with
possible implications for probing exotic IVC ordering.
|
We consider a magnetic skyrmion crystal formed at the surface of a
topological insulator. Incorporating the exchange interaction between the
helical Dirac surface states and the periodic N\'eel or Bloch skyrmion texture,
we obtain the resulting electronic band structures. We discuss the properties
of the reconstructed skyrmion bands, namely the impact of symmetries on the
energies and Berry curvature. We find substantive qualitative differences
between the N\'eel and Bloch cases, with the latter generically permitting a
low-energy tight-binding representation whose parameters are tightly
constrained by symmetries. We explicitly construct the associated Wannier
orbitals, which resemble the ring-like chiral bound states of helical Dirac
fermions coupled to a single skyrmion in a ferromagnetic background. We
construct a two-band tight-binding model with complex nearest-neighbor hoppings
which captures the salient topological features of the low-energy bands. Our
results are relevant to magnetic topological insulators (TIs), as well as to
TI-magnetic thin film heterostructures, in which skyrmion crystals may be
stabilized.
|
This paper underlines a subtle property of batch-normalization (BN):
Successive batch normalizations with random linear transformations make hidden
representations increasingly orthogonal across layers of a deep neural network.
We establish a non-asymptotic characterization of the interplay between depth,
width, and the orthogonality of deep representations. More precisely, under a
mild assumption, we prove that the deviation of the representations from
orthogonality rapidly decays with depth up to a term inversely proportional to
the network width. This result has two main implications: 1) Theoretically, as
the depth grows, the distribution of the representation -- after the linear
layers -- contracts to a Wasserstein-2 ball around an isotropic Gaussian
distribution. Furthermore, the radius of this Wasserstein ball shrinks with the
width of the network. 2) In practice, the orthogonality of the representations
directly influences the performance of stochastic gradient descent (SGD). When
representations are initially aligned, we observe that SGD wastes many iterations
orthogonalizing the representations before classification. Nevertheless, we
experimentally show that starting optimization from orthogonal representations
is sufficient to accelerate SGD, with no need for BN.
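The decay of alignment with depth is easy to reproduce in a toy simulation (illustrative parameters; batch norm here is plain per-feature standardization over the batch):

```python
import numpy as np

def mean_offdiag_cosine(X):
    """Average |cosine similarity| between distinct rows of X:
    near 1 for aligned representations, near 0 for orthogonal ones."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    G = Xn @ Xn.T
    n = len(G)
    return (np.abs(G).sum() - n) / (n * (n - 1))

rng = np.random.default_rng(0)
n, d, depth = 4, 256, 30
X = np.ones((n, d)) + 0.01 * rng.normal(size=(n, d))  # nearly aligned start
before = mean_offdiag_cosine(X)
for _ in range(depth):
    X = X @ (rng.normal(size=(d, d)) / np.sqrt(d))    # random linear layer
    X = (X - X.mean(0)) / (X.std(0) + 1e-8)           # batch normalization
after = mean_offdiag_cosine(X)
```

Starting from almost identical rows, repeated random-linear-plus-BN layers drive the pairwise cosines down, consistent with the contraction toward isotropy stated above.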
|
Planet formation via core accretion requires the production of km-sized
planetesimals from cosmic dust. This process must overcome barriers to simple
collisional growth, for which the Streaming Instability (SI) is often invoked.
Dust evolution is still required to create particles large enough to undergo
vigorous instability. The SI has been studied primarily with single size dust,
and the role of the full evolved dust distribution is largely unexplored. We
survey the Polydisperse Streaming Instability (PSI) with physical parameters
corresponding to plausible conditions in protoplanetary discs. We consider a
full range of particle stopping times, generalized dust size distributions, and
the effect of turbulence. We find that, while the PSI in many cases grows more
slowly with an interstellar power-law dust distribution than with a single size,
reasonable collisional dust evolution, producing an enhancement of the largest
dust sizes, produces instability behaviour similar to the monodisperse case.
When turbulent diffusion is included, the trend is similar. We conclude that if fast
linear growth of PSI is required for planet formation, then dust evolution
producing a distribution with peak stopping times on the order of 0.1 orbits
and an enhancement of the largest dust significantly above the single power-law
distribution produced by a fragmentation cascade is sufficient, along with
local enhancement of the dust-to-gas volume mass density ratio to order unity.
|
In today's networked society, many real-world problems can be formalized as
predicting links in networks, such as Facebook friendship suggestions,
e-commerce recommendations, and the prediction of scientific collaborations in
citation networks. Increasingly often, the link prediction problem is tackled by
means of network embedding methods, owing to their state-of-the-art
performance. However, these methods lack transparency when compared to simpler
baselines, and as a result their robustness against adversarial attacks is a
possible point of concern: could one or a few small adversarial modifications
to the network have a large impact on the link prediction performance when
using a network embedding model? Prior research has already investigated
adversarial robustness for network embedding models, focused on classification
at the node and graph level. Robustness with respect to the link prediction
downstream task, on the other hand, has been explored much less.
This paper contributes to filling this gap, by studying adversarial
robustness of Conditional Network Embedding (CNE), a state-of-the-art
probabilistic network embedding model, for link prediction. More specifically,
given CNE and a network, we measure the sensitivity of the link predictions of
the model to small adversarial perturbations of the network, namely changes of
the link status of a node pair. Thus, our approach allows one to identify the
links and non-links in the network that are most vulnerable to such
perturbations, for further investigation by an analyst. We analyze the
characteristics of the most and least sensitive perturbations, and empirically
confirm that our approach not only succeeds in identifying the most vulnerable
links and non-links, but also that it does so in a time-efficient manner thanks
to an effective approximation.
|
We calculate Galactic Chemical Evolution (GCE) of Mo and Ru by taking into
account the contribution from $\nu p$-process nucleosynthesis. We estimate
yields of $p$-nuclei such as $^{92,94}\mathrm{Mo}$ and $^{96,98}\mathrm{Ru}$
through the $\nu p$-process in various supernova (SN) progenitors based upon
recent models. In particular, the $\nu p$-process in energetic hypernovae
produces a large amount of $p$-nuclei compared to the yield in ordinary
core-collapse SNe. Because of this, the abundances of $^{92,94}\mathrm{Mo}$ and
$^{96,98}\mathrm{Ru}$ in the Galaxy are significantly enhanced at [Fe/H]=0 by
the $\nu p$-process. We find that the $\nu p$-process in hypernovae is the main
contributor to the elemental abundance of $^{92}$Mo at low metallicity
[Fe/H$]<-2$. Our theoretical prediction of the elemental abundances in
metal-poor stars becomes more consistent with observational data when the $\nu
p$-process in hypernovae is taken into account.
|
Photo-Induced Enhanced Raman Spectroscopy (PIERS) is a new surface enhanced
Raman spectroscopy (SERS) modality with an order-of-magnitude Raman signal
enhancement of adsorbed analytes over that of typical SERS substrates. Despite
the impressive PIERS enhancement factors and the explosion of recent demonstrations
of its utility, the detailed enhancement mechanism remains undetermined. Using
a range of optical and X-ray spectroscopies, supported by density functional
theory calculations, we elucidate the chemical and atomic-scale mechanism
behind the PIERS enhancement. Stable PIERS substrates with enhancement factors
of 10^6 were fabricated using self-organized hexagonal arrays of TiO2 nanotubes
that were defect-engineered via annealing in inert atmospheres, and silver
nanoparticles were deposited by magnetron sputtering and subsequent thermal
dewetting. We identified the key source of the enhancement of PIERS vs. SERS in
these structures as an increase in the Raman polarizability of the adsorbed
probe molecule upon photo-induced charge transfer. A balance between
crystallinity, which enhances charge transfer due to higher electron mobility
in anatase-rutile heterostructures but decreases visible light absorption, and
oxygen vacancy defects, which increase visible light absorption and
photo-induced electron transfers, was critical to achieve high PIERS
enhancements.
|
The optical phase shifter that constantly rotates the local oscillator phase
is a necessity in continuous-variable quantum key distribution systems with
heterodyne detection. In previous experimental implementations, the optical
phase shifter is generally regarded as an ideal passive optical device that
perfectly rotates the phase of the electromagnetic wave by $90^\circ$. However,
the optical phase shifter in practice introduces imperfections, mainly the
measurement angular error, which inevitably deteriorates the security of the
practical systems. Here, we will give a concrete interpretation of measurement
angular error in practical systems and the corresponding entanglement-based
description. Subsequently, from the parameter estimation, we deduce the
overestimated excess noise and the underestimated transmittance, which lead to
a reduction in the final secret key rate. Simultaneously, we propose an
estimation method of the measurement angular error. Next, the practical
security analysis is provided in detail, and the effect of the measurement
angular error and its corresponding compensation scheme are demonstrated. We
conclude that measurement angular error severely degrades the security, but the
proposed calibration and compensation method can significantly help improve the
performance of the practical CV-QKD systems.
|
Deep energy renovation of building stock came more into focus in the European
Union due to energy efficiency related directives. Many buildings that must
undergo deep energy renovation are old and may lack design/renovation
documentation, or possible degradation of materials might have occurred in
building elements over time. Thermal transmittance (i.e. U-value) is one of the
most important parameters for determining the transmission heat losses through
building envelope elements. It depends on the thickness and thermal properties
of all the materials that form a building element. In-situ U-value can be
determined by ISO 9869-1 standard (Heat Flux Method - HFM). Still, measurement
duration is one of the reasons why HFM is not widely used in field testing
before the renovation design process commences. This paper analyzes the
possibility of reducing the measurement time by conducting parallel
measurements with one heat-flux sensor. This parallelization could be achieved
by applying a specific class of the Artificial Neural Network (ANN) on HFM
results to predict unknown heat flux based on collected interior and exterior
air temperatures. Once a satisfying prediction is achieved, the HFM sensor can
be relocated to another measuring location. The paper compares four ANN cases
applied to HFM results for a measurement held on one multi-layer wall: a
multilayer perceptron with three neurons in one hidden layer, a long
short-term memory (LSTM) network with 100 units, a gated recurrent unit (GRU)
network with 100 units, and a combination of 50 LSTM and 50 GRU units. The
analysis gave promising results in terms of predicting the heat flux rate
based on the two
input temperatures. Additional analysis on another wall showed possible
limitations of the method that serves as a direction for further research on
this topic.
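The simplest of the four ANN cases, a small multilayer perceptron mapping the two air temperatures to heat flux, can be sketched as follows. This is a toy illustration on synthetic data with an assumed U-value of 1.5 W/(m2 K), not the paper's HFM measurements or exact training setup:

```python
import numpy as np

# Synthetic stand-in data: interior/exterior air temperatures and the heat
# flux of a wall with an assumed U-value of 1.5 W/(m2 K).
rng = np.random.default_rng(1)
Ti = rng.uniform(18.0, 24.0, (200, 1))   # interior air temperature [degC]
Te = rng.uniform(-5.0, 10.0, (200, 1))   # exterior air temperature [degC]
q = 1.5 * (Ti - Te)                      # heat flux [W/m2]

X = np.hstack([Ti, Te])
X = (X - X.mean(0)) / X.std(0)           # standardise inputs
y = (q - q.mean()) / q.std()             # standardise target

# One hidden layer with three neurons (cf. the MLP case above),
# trained by full-batch gradient descent on the MSE loss.
W1 = 0.5 * rng.standard_normal((2, 3)); b1 = np.zeros(3)
W2 = 0.5 * rng.standard_normal((3, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)             # hidden activations
    err = H @ W2 + b2 - y                # prediction error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)     # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

Once such a network fits the measured flux well, it can stand in for the relocated sensor, which is the parallelization idea described above.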
|
HR 8799 hosts four directly imaged giant planets, but none has a mass
measured from first principles. We present the first dynamical mass measurement
in this planetary system, finding that the innermost planet HR~8799~e has a
mass of $9.6^{+1.9}_{-1.8} \, M_{\rm Jup}$. This mass results from combining
the well-characterized orbits of all four planets with a new astrometric
acceleration detection (5$\sigma$) from the Gaia EDR3 version of the
Hipparcos-Gaia Catalog of Accelerations. We find with 95\% confidence that
HR~8799~e is below $13\, M_{\rm Jup}$, the deuterium-fusing mass limit. We
derive a hot-start cooling age of $42^{+24}_{-16}$\,Myr for HR~8799~e that
agrees well with its hypothesized membership in the Columba association but is
also consistent with an alternative suggested membership in the
$\beta$~Pictoris moving group. We exclude the presence of any additional
$\gtrsim$5-$M_{\rm Jup}$ planets interior to HR~8799~e with semi-major axes
between $\approx$3-16\,au. We provide proper motion anomalies and a matrix
equation to solve for the mass of any of the planets of HR~8799 using only mass
ratios between the planets.
|
In this paper, we explore methods for computing wall-normal derivatives used
for calculating wall skin friction and heat transfer over a solid wall in
unstructured simplex-element (triangular/tetrahedral) grids generated by
anisotropic grid adaptation. Simplex-element grids are considered as efficient
and suitable for automatic grid generation and adaptation, but present a
challenge to accurately predict wall-normal derivatives. For example,
wall-normal derivatives computed by a simple finite-difference approximation,
as typically done in practical fluid-dynamics simulation codes, are often
contaminated with numerical noise. To address this issue, we propose an
improved method based on a common step length for the finite-difference
approximation; the step length is otherwise random due to grid irregularity,
and a common one is expected to smooth the wall-normal derivative distribution
over a boundary.
Also, we consider using least-squares gradients to compute the wall-normal
derivatives and discuss their possible improvements. Numerical results show
that the improved methods greatly reduce the noise in the wall-normal
derivatives for irregular simplex-element grids.
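The effect of a common step length can be illustrated with a toy sketch (not the paper's exact scheme): on an irregular grid the off-wall node distance varies face to face, while evaluating the solution at a fixed step h (in a real solver, by interpolating the discrete solution along the wall normal) removes the grid-induced scatter.

```python
import numpy as np

def wall_normal_derivatives(distances, u, h):
    """One-sided finite-difference wall-normal derivatives at many wall faces.
    'naive' uses each face's own (irregular) off-wall distance; 'common' uses
    a single shared step length h for every face."""
    naive = np.array([(u(d) - u(0.0)) / d for d in distances])   # random step
    common = np.array([(u(h) - u(0.0)) / h] * len(distances))    # common step
    return naive, common

# Quadratic test field u(y) = y^2: the naive estimate equals d (scattered
# face to face), while the common-step estimate equals h for every face.
d = np.array([0.4e-3, 0.8e-3, 1.5e-3, 2.3e-3, 3.1e-3])
naive, common = wall_normal_derivatives(d, lambda y: y * y, h=1.0e-3)
```

The zero scatter of the common-step estimates in this toy case is exactly the smoothing effect sought over a boundary.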
|
In this study we extract the deep features and investigate the compression of
the Mg II k spectral line profiles observed in quiet Sun regions by NASA's IRIS
satellite. The data set of line profiles used for the analysis was obtained on
April 20th, 2020, at the center of the solar disc, and contains almost 300,000
individual Mg II k line profiles after data cleaning. The data are separated
into train and test subsets. The train subset was used to train the autoencoder
of the varying embedding layer size. The early stopping criterion was
implemented on the test subset to prevent the model from overfitting. Our
results indicate that it is possible to compress the spectral line profiles
more than 27 times (which corresponds to the reduction of the data
dimensionality from 110 to 4) while having a 4 DN average reconstruction error,
which is comparable to the variations in the line continuum. The mean squared
error and the reconstruction error of even statistical moments sharply decrease
when the dimensionality of the embedding layer increases from 1 to 4 and almost
stop decreasing for higher numbers. The observed occasional improvements in
training for values higher than 4 indicate that a better compact embedding may
potentially be obtained if other training strategies and longer training times
are used. The features learned for the critical four-dimensional case can be
interpreted. In particular, three of these four features mainly control the
line width, line asymmetry, and line dip formation respectively. The presented
results are the first attempt to obtain a compact embedding for spectroscopic
line profiles and confirm the value of this approach, in particular for feature
extraction, data compression, and denoising.
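As background for the compression idea, a linear bottleneck autoencoder is equivalent to PCA, so a truncated SVD gives a minimal stand-in sketch of encoding length-110 profiles into a 4-dimensional embedding and decoding back. The actual model above is a nonlinear trained autoencoder; the data here are synthetic:

```python
import numpy as np

def compress_profiles(X, k=4):
    """Linear-autoencoder (PCA) stand-in: encode rows of X (length-110
    profiles) into k numbers, then decode back to full length."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    codes = (X - mu) @ Vt[:k].T      # encoder: 110 -> k
    recon = codes @ Vt[:k] + mu      # decoder: k -> 110
    return codes, recon

# Synthetic "profiles": 300 spectra of length 110 spanned by 4 basis shapes,
# so a 4-dimensional embedding reconstructs them essentially exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 4)) @ rng.standard_normal((4, 110))
codes, recon = compress_profiles(X, k=4)
```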
|
The J-PARC muon $g - 2$/EDM experiment aims to measure the muon magnetic
moment anomaly $a_{\mu} = (g -2) / 2 ~ $ and the muon electric dipole moment
(EDM) $d_{\mu}$. The target sensitivity for $a_{\mu}$ is a statistical
uncertainty of $450 \times 10^{-9}$, and for $d_{\mu}$ it is $1.5 \times
10^{-21}$ $e \cdot \rm{cm}$. The readout electronics require a DC to DC
converter to provide 1.5 V and 3.3 V electric voltage within a specific
environment. To detect positrons from muon decay, the detector
components are placed in the muon storage area where a 3 T magnetic field is
applied in a vacuum. The extra magnetic field from the converter should be less
than 30 $\mu$T to reach the aforementioned sensitivity on $a_{\mu}$. In
addition, the electric field produced by the converter has to be limited to
about 1 V/m to also reach the target sensitivity on $d_{\mu}$. For this purpose, we
develop a step-down converter. We discuss the development process and the
performance of our DC to DC converter for the J-PARC muon $g - 2$/EDM
experiment.
|
Few-shot Named Entity Recognition (NER) exploits only a handful of
annotations to identify and classify named entity mentions. Prototypical
networks show superior performance on few-shot NER. However, existing
prototypical methods fail to differentiate the rich semantics of other-class
words, which aggravates overfitting in few-shot scenarios. To address the issue,
we propose a novel model, Mining Undefined Classes from Other-class (MUCO),
that can automatically induce different undefined classes from the other class
to improve few-shot NER. With these extra-labeled undefined classes, our method
improves the discriminative ability of the NER classifier and enhances the
understanding of predefined classes with stand-by semantic knowledge.
Experimental results demonstrate that our model outperforms five
state-of-the-art models in both 1-shot and 5-shot settings on four NER
benchmarks. The source code is released at
https://github.com/shuaiwa16/OtherClassNER.git.
|
We present a new trainable system for physically plausible markerless 3D
human motion capture, which achieves state-of-the-art results in a broad range
of challenging scenarios. Unlike most neural methods for human motion capture,
our approach, which we dub physionical, is aware of physical and environmental
constraints. It combines in a fully differentiable way several key innovations,
i.e., 1. a proportional-derivative controller, with gains predicted by a neural
network, that reduces delays even in the presence of fast motions, 2. an
explicit rigid body dynamics model and 3. a novel optimisation layer that
prevents physically implausible foot-floor penetration as a hard constraint.
The inputs to our system are 2D joint keypoints, which are canonicalised in a
novel way so as to reduce the dependency on intrinsic camera parameters -- both
at train and test time. This enables more accurate global translation
estimation without generalisability loss. Our model can be finetuned only with
2D annotations when the 3D annotations are not available. It produces smooth
and physically principled 3D motions at an interactive frame rate in a wide
variety of challenging scenes, including newly recorded ones. Its advantages
are especially noticeable on in-the-wild sequences that significantly differ
from common 3D pose estimation benchmarks such as Human 3.6M and MPI-INF-3DHP.
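The proportional-derivative control idea can be sketched minimally: drive a 1-D state toward a target pose. In the system above a neural network predicts the gains per joint; here kp and kd are fixed illustrative constants.

```python
def pd_step(x, v, target, kp=50.0, kd=10.0, dt=0.01):
    """One step of a proportional-derivative controller: accelerate toward
    the target in proportion to the error, damped by the velocity."""
    a = kp * (target - x) - kd * v   # PD acceleration command
    v = v + a * dt                   # semi-implicit Euler integration
    x = x + v * dt
    return x, v

x, v = 0.0, 0.0
for _ in range(2000):                # simulate 20 s
    x, v = pd_step(x, v, target=1.0)
```

With these gains the closed loop is well damped, so the state settles at the target without residual oscillation.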
Qualitative results are available at
http://gvv.mpi-inf.mpg.de/projects/PhysAware/
|
It is known that if $f\colon {\mathbb R}^2 \to {\mathbb R}$ is a polynomial
in each variable, then $f$ is a polynomial. We present generalizations of this
fact, when ${\mathbb R}^2$ is replaced by $G\times H$, where $G$ and $H$ are
topological Abelian groups. We show, e.g., that the conclusion holds (with
generalized polynomials in place of polynomials) if $G$ is a connected Baire
space and $H$ has a dense subgroup of finite rank or, for continuous functions,
if $G$ and $H$ are connected Baire spaces. The condition of continuity can be
omitted if $G$ and $H$ are locally compact or complete metric spaces. We
present several examples showing that the results are not far from being
optimal.
|
We introduce a new sequential methodology to calibrate the fixed parameters
and track the stochastic dynamical variables of a state-space system. The
proposed method is based on the nested hybrid filtering (NHF) framework of [1],
which combines two layers of filters, one inside the other, to compute the joint
posterior probability distribution of the static parameters and the state
variables. In particular, we explore the use of deterministic sampling
techniques for Gaussian approximation in the first layer of the algorithm,
instead of the Monte Carlo methods employed in the original procedure. The
resulting scheme reduces the computational cost and so makes the algorithms
potentially better-suited for high-dimensional state and parameter spaces. We
describe a specific instance of the new method and then study the performance
and efficiency of the resulting algorithm for a stochastic Lorenz 63 model
with uncertain parameters.
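The kind of deterministic sampling meant here can be sketched with an unscented transform: propagate a Gaussian through a nonlinear map using 2n+1 sigma points instead of Monte Carlo samples. The function and parameter names are illustrative, not the paper's exact scheme:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Deterministic (sigma-point) Gaussian approximation of the mean and
    covariance of f(x) for x ~ N(mean, cov), using 2n+1 points."""
    mean = np.asarray(mean, float)
    cov = np.atleast_2d(np.asarray(cov, float))
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)    # scaled matrix square root
    sigma = np.vstack([mean, mean + L.T, mean - L.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    fx = np.array([f(s) for s in sigma])
    m = w @ fx                                   # transformed mean
    d = fx - m
    P = (w[:, None] * d).T @ d                   # transformed covariance
    return m, P

# For a linear map the transform is exact, a handy sanity check.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
m, P = unscented_transform([1.0, 2.0], [[1.0, 0.2], [0.2, 0.5]],
                           lambda x: A @ x)
```

Replacing Monte Carlo sampling with a fixed, small sigma-point set is what reduces the computational cost of the first filtering layer.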
|
We present an Extended Kalman Filter framework for system identification and
control of a stochastic high-dimensional epidemic model. The scale and severity
of the COVID-19 emergency have highlighted the need for accurate forecasts of
the state of the pandemic at a high resolution. Mechanistic compartmental
models are widely used to produce such forecasts and assist in the design of
control and relief policies. Unfortunately, the scale and stochastic nature of
many of these models often make the estimation of their parameters difficult.
With the goal of calibrating a high dimensional COVID-19 model using low-level
mobility data, we introduce a method for tractable maximum likelihood
estimation that combines tools from Bayesian inference with scalable
optimization techniques from machine learning. The proposed approach uses
reverse-mode automatic differentiation to directly compute the gradient of the
likelihood of COVID-19 incidence and death data. The likelihood of the
observations is estimated recursively using an Extended Kalman Filter and can
be easily optimized using gradient-based methods to compute maximum likelihood
estimators. Our compartmental model is trained using GPS mobility data that
measures the mobility patterns of millions of mobile phones across the United
States. We show that, after calibrating against incidence and deaths data from
the city of Philadelphia, our model is able to produce an accurate 30-day
forecast of the evolution of the pandemic.
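The recursive likelihood evaluation can be sketched on a toy scalar model (for a linear toy the extended Kalman filter reduces to the standard one; the model, names and parameters here are illustrative, not the paper's compartmental model):

```python
import numpy as np

def ekf_loglik(y, x0, P0, growth, q, r):
    """Kalman filter that recursively accumulates the log-likelihood of
    observations under x_{t+1} = growth * x_t + w_t, y_t = x_t + v_t."""
    x, P, ll = x0, P0, 0.0
    for obs in y:
        x, P = growth * x, growth * P * growth + q        # predict
        v, S = obs - x, P + r                             # innovation
        ll += -0.5 * (np.log(2 * np.pi * S) + v * v / S)  # likelihood term
        K = P / S                                         # Kalman gain
        x, P = x + K * v, (1 - K) * P                     # correct
    return ll

# Simulate data with growth = 1.1; the recursive likelihood prefers the true
# parameter over a wrong one, which is what gradient-based fitting exploits.
rng = np.random.default_rng(0)
x, data = 1.0, []
for _ in range(50):
    x = 1.1 * x + 0.05 * rng.normal()
    data.append(x + 0.1 * rng.normal())
ll_true = ekf_loglik(data, 1.0, 1.0, growth=1.1, q=0.05**2, r=0.1**2)
ll_wrong = ekf_loglik(data, 1.0, 1.0, growth=0.9, q=0.05**2, r=0.1**2)
```

In the full method this scalar recursion becomes a high-dimensional EKF, and the accumulated log-likelihood is differentiated with respect to the model parameters.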
|
We propose a minimal model based on Lepton number symmetry (and violation),
to address a common origin of baryon asymmetry, dark matter (DM) and neutrino
mass generation, with a one-to-one correspondence. The {\em unnatural}
largeness and smallness of the parameters required to satisfy the
experimental limits are attributed to lepton number violation. The allowed
parameter space of the model is illustrated via a numerical scan.
|
In 2019, the Centers for Medicare and Medicaid Services (CMS) launched an
Artificial Intelligence (AI) Health Outcomes Challenge seeking solutions to
predict risk in value-based care for incorporation into CMS Innovation Center
payment and service delivery models. Recently, modern language models have
played key roles in a number of health-related tasks. This paper presents, to
the best of our knowledge, the first application of these models to patient
readmission prediction. To facilitate this, we create a dataset of 1.2 million
medical history samples derived from the Limited Dataset (LDS) issued by CMS.
Moreover, we propose a comprehensive modeling solution centered on a deep
learning framework for this data. To demonstrate the framework, we train an
attention-based Transformer to learn Medicare semantics in support of
performing downstream prediction tasks thereby achieving 0.91 AUC and 0.91
recall on readmission classification. We also introduce a novel data
pre-processing pipeline and discuss pertinent deployment considerations
surrounding model explainability and bias.
|
Reconstructing the shape and appearance of real-world objects using measured
2D images has been a long-standing problem in computer vision. In this paper,
we introduce a new analysis-by-synthesis technique capable of producing
high-quality reconstructions through robust coarse-to-fine optimization and
physics-based differentiable rendering.
Unlike most previous methods that handle geometry and reflectance largely
separately, our method unifies the optimization of both by leveraging image
gradients with respect to both object reflectance and geometry. To obtain
physically accurate gradient estimates, we develop a new GPU-based Monte Carlo
differentiable renderer leveraging recent advances in differentiable rendering
theory to offer unbiased gradients while enjoying better performance than
existing tools like PyTorch3D and redner. To further improve robustness, we
utilize several shape and material priors as well as a coarse-to-fine
optimization strategy to reconstruct geometry. We demonstrate that our
technique can produce reconstructions with higher quality than previous methods
such as COLMAP and Kinect Fusion.
|
This paper derives the CS decomposition for orthogonal tensors (T-CSD) and
the generalized singular value decomposition for two tensors (T-GSVD) via the
T-product. The structures of the two decompositions are analyzed in detail and
are consistent with those for the matrix cases. Corresponding algorithms are
then proposed for each decomposition. Finally, the T-GSVD can be used to give
an explicit expression for the solution of tensor Tikhonov regularization. Numerical
examples demonstrate the effectiveness of T-GSVD in solving image restoration
problems.
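For context, in the matrix case the GSVD makes the Tikhonov solution explicit through filter factors; the tensor T-GSVD generalizes an identity of the following standard form (stated here for a matrix pair $(A,L)$ in the usual textbook notation, not the paper's tensor objects):

```latex
\min_x \;\|Ax-b\|_2^2+\lambda^2\|Lx\|_2^2
\quad\Longrightarrow\quad
x_\lambda=\sum_{i}\frac{\gamma_i^2}{\gamma_i^2+\lambda^2}\,
          \frac{u_i^{\mathsf T}b}{\sigma_i}\,z_i ,
\qquad \gamma_i=\frac{\sigma_i}{\mu_i},
```

where $(\sigma_i,\mu_i)$ are the generalized singular value pairs of $(A,L)$, $u_i$ the left vectors of $A$, and $z_i$ the columns of the shared right factor; the filter factors $\gamma_i^2/(\gamma_i^2+\lambda^2)$ damp components with small generalized singular values.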
|
While there have been many results on lower bounds for Max Cut in unweighted
graphs, the only lower bound for non-integer weights is that by Poljak and
Turzik (1986). In this paper, we launch an extensive study of lower bounds for
Max Cut in weighted graphs. We introduce a new approach for obtaining lower
bounds for Weighted Max Cut. Using it, the Probabilistic Method, Vizing's chromatic
index theorem, and other tools, we obtain several lower bounds for arbitrary
weighted graphs, weighted graphs of bounded girth and triangle-free weighted
graphs. We pose conjectures and open questions.
|
This paper addresses the problem of creating abstract transformers
automatically. The method we present provides the basis for creating a tool to
automate the construction of program analyzers in a fashion similar to the way
yacc automates the construction of parsers. Our method treats the problem as a
program-synthesis problem. The user provides specifications of (i) the concrete
semantics of a given operation O, (ii) the abstract domain A to be used by the
analyzer, and (iii) the semantics of a domain-specific language L in which the
abstract transformer is to be expressed. As output, our method creates an
abstract transformer for O for abstract domain A, expressed in DSL L. We
implemented our method, and used it to create a set of replacement abstract
transformers for those used in an existing analyzer, and obtained essentially
identical performance. However, when we compared the existing transformers with
the generated transformers, we discovered that two of the existing transformers
were unsound, which demonstrates the risk of using manually created
transformers.
|
Autonomous vehicles (AVs) need to interact with other traffic participants
who can be either cooperative or aggressive, attentive or inattentive. Such
different characteristics can lead to quite different interactive behaviors.
Hence, to achieve safe and efficient autonomous driving, AVs need to be aware
of such uncertainties when they plan their own behaviors. In this paper, we
formulate such a behavior planning problem as a partially observable Markov
Decision Process (POMDP) where the cooperativeness of other traffic
participants is treated as an unobservable state. Under different
cooperativeness levels, we learn the human behavior models from real traffic
data via the principle of maximum likelihood. Based on that, the POMDP problem
is solved by Monte-Carlo Tree Search. We verify the proposed algorithm in both
simulations and real traffic data on a lane change scenario, and the results
show that the proposed algorithm can successfully finish the lane changes
without collisions.
|
The perfect soliton crystal (PSC) was recently discovered as an extraordinary
Kerr soliton state with regularly distributed soliton pulses and enhanced comb
line power spaced by multiples of the cavity free spectral ranges (FSRs). The
modulation of continuous-wave excitation in optical microresonators and the
tunable repetition rate characteristic will significantly enhance and extend
the application potential of soliton microcombs for self-referenced comb
sources, terahertz wave generation, and arbitrary waveform generation. However,
the reported PSC spectrum is generally narrow. Here, we demonstrate the
deterministic access to versatile perfect soliton crystals in AlN
microresonators (FSR ~374 GHz), featuring a broad spectral range up to 0.96 of
an octave-span (1170-2300 nm) and terahertz repetition rates (up to ~1.87 THz).
The measured 60-fs pulses and low-noise characteristics confirm the high
coherence of the PSCs.
|
Superconducting and topological states are two quantum phenomena attracting
much interest. Their coexistence may lead to topological superconductivity
sought after for Majorana-based quantum computing. However, there is no causal
relationship between the two, since superconductivity is a many-body effect due
to electron-electron interaction while topology is a single-particle
manifestation of electron band structure. Here, we demonstrate a novel form of
Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) pairing, induced by topological Weyl
nodal lines in Ising Bardeen-Cooper-Schrieffer (IBCS) superconductors. Based on
first-principles calculations and analyses, we predict that the nonmagnetic
metals of MA$_2$Z$_4$ family, including ${\alpha}_1$-TaSi$_2$P$_4$,
${\alpha}_1$-TaSi$_2$N$_4$, ${\alpha}_1$-NbSi$_2$P$_4$,
${\alpha}_2$-TaGe$_2$P$_4$, and ${\alpha}_2$-NbGe$_2$P$_4$ monolayers, are all
superconductors. While the intrinsic IBCS pairing arises in these
non-centrosymmetric systems, the extrinsic FFLO pairing is revealed to be
evoked by the Weyl nodal lines under magnetic field, facilitating the formation
of Cooper pairs with nonzero momentum in their vicinity. Moreover, we show that
the IBCS pairing alone will enhance the in-plane critical field $B_c$ to ~10-50
times the Pauli paramagnetic limit $B_p$, and additional FFLO pairing can
further triple the $B_c/B_p$ ratio. It therefore affords an effective approach
to enhance the robustness of superconductivity. Moreover, the topology-induced
superconductivity naturally suggests the possible existence of a topological
superconducting state.
|
Motion artifacts are a common occurrence in the Magnetic Resonance Imaging
(MRI) exam. Motion during acquisition has a profound impact on workflow
efficiency, often requiring a repeat of sequences. Furthermore, motion
artifacts may escape notice by technologists, only to be revealed at the time
of reading by the radiologists, affecting their diagnostic quality. Designing a
computer-aided tool for automatic motion detection and elimination can improve
diagnosis; however, it requires a deep understanding of motion
characteristics. Motion artifacts in MRI have a complex nature and are
directly related to the k-space sampling scheme. In this study we investigate
the effect of three conventional k-space samplers, namely Cartesian, Uniform
Spiral and Radial on motion induced image distortion. In this regard, various
synthetic motions with different trajectories of displacement and rotation are
applied to T1 and T2-weighted MRI images, and a convolutional neural network is
trained to show the difficulty of motion classification. The results show that
spiral k-space sampling is less affected by motion artifacts in image
space than radial k-space sampling, and radial k-space sampled
images are more robust than Cartesian ones. Cartesian samplers, on the other
hand, are the best in terms of deep learning motion detection because they can
better reflect motion.
|
This is a commentary on, and critique of, Latif Salum's paper titled
"Tractability of One-in-three $\mathrm{3SAT}$: $\mathrm{P} = \mathrm{NP}$."
Salum purports to give a polynomial-time algorithm that solves the
$\mathrm{NP}$-complete problem $\mathrm{X3SAT}$, thereby claiming $\mathrm{P} =
\mathrm{NP}$. The algorithm, in short, fixes the polarity of a variable,
carries out simplifications over the resulting formula to decide whether to
keep the value assigned or flip the polarity, and repeats with the remaining
variables. One thing this algorithm does not do is backtrack. We give an
illustrative counterexample showing why the lack of backtracking makes this
algorithm flawed.
|
Neural text generation suffers from text degeneration issues such as
repetition. Traditional stochastic sampling methods only focus on truncating
the unreliable "tail" of the distribution, and do not address the "head" part,
which we show might contain tedious or even repetitive candidates with high
probability that lead to repetition loops. They also do not consider the issue
that human text does not always favor high-probability words. Inspired by
these, in this work we propose a heuristic sampling method that uses the
interquartile range of the predicted distribution to determine the "head" part,
then permutes and rescales the "head" with inverse probability. This decreases
the probability of tedious and possibly repetitive high-probability
candidates, and increases the probability of rational but more surprising
low-probability candidates. The proposed algorithm
provides a reasonable permutation on the predicted distribution which enhances
diversity without compromising the rationality of the distribution. We use a
pre-trained language model to compare our algorithm with traditional methods.
Results show that our algorithm can effectively increase the diversity of
generated samples while achieving close resemblance to human text.
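The head-rescaling step can be sketched as follows. This is a toy illustration; the exact head rule and rescaling are assumptions for the sketch, not the paper's specification:

```python
import numpy as np

def iqr_head_rescale(probs):
    """Mark the high-probability "head" of a predicted distribution with the
    interquartile-range outlier rule, then redistribute the head's total mass
    in proportion to inverse probability, reversing ranks inside the head."""
    p = np.asarray(probs, dtype=float)
    q1, q3 = np.percentile(p, [25, 75])
    head = p > q3 + 1.5 * (q3 - q1)          # IQR rule flags the "head"
    out = p.copy()
    if head.any():
        inv = 1.0 / p[head]
        out[head] = inv / inv.sum() * p[head].sum()  # keep total head mass
    return out / out.sum()

# Peaked toy distribution: the two head tokens (0.4 and 0.3) swap weight,
# so the most probable token is no longer the overwhelming favourite.
p = np.array([0.4, 0.3, 0.1, 0.05, 0.04, 0.03, 0.03, 0.02, 0.02, 0.01])
q = iqr_head_rescale(p)
```

Because only the head's internal ranking changes and its total mass is preserved, the tail probabilities and the overall normalization are untouched.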
|
On a smooth, closed Riemannian manifold, we study the question of
proportionality of components, also called synchronization, of vector-valued
solutions to nonlinear elliptic Schr\"odinger systems with constant
coefficients. In particular, we obtain bifurcation results showing the
existence of branches of non-synchronized solutions emanating from the constant
solutions.
|
It is proved that for any mapping of a unit segment to a unit square, there
is a pair of points of the segment for which the square of the Euclidean
distance between their images exceeds the distance between them along the
segment by a factor of at least $3\frac58$. Under the additional condition
that the images of the endpoints of the segment lie on opposite sides of the
square, the estimate increases to $4+\varepsilon$.
|
The so-called aromatic infrared bands (AIBs) are attributed to emission of
polycyclic aromatic hydrocarbons (PAHs). The observed variations toward different
regions in space are believed to be caused by contributions of different
classes of PAH molecules, i.e. with respect to their size, structure, and
charge state. Laboratory spectra of members of these classes are needed to
compare them to observations and to benchmark quantum-chemically computed
spectra of these species. In this paper we present the experimental infrared
spectra of three different PAH dications, naphthalene$^{2+}$,
anthracene$^{2+}$, and phenanthrene$^{2+}$, in the vibrational fingerprint
region 500-1700~cm$^{-1}$. The dications were produced by electron impact
ionization of the vapors with 70 eV electrons, and they remained stable against
dissociation and Coulomb explosion. The vibrational spectra were obtained by IR
predissociation of the PAH$^{2+}$ complexed with neon in a 22-pole cryogenic
ion trap setup coupled to a free-electron infrared laser at the Free-Electron
Lasers for Infrared eXperiments (FELIX) Laboratory. We performed anharmonic
density-functional theory calculations for both singly and doubly charged
states of the three molecules. The experimental band positions showed excellent
agreement with the calculated band positions of the singlet electronic ground
state for all three doubly charged species, indicating its higher stability
over the triplet state. The presence of several strong combination bands and
additional weaker features in the recorded spectra, especially in the
10-15~$\mu$m region of the mid-IR spectrum, required anharmonic calculations to
understand their effects on the total integrated intensity for the different
charge states. These measurements, in tandem with theoretical calculations,
will help in the identification of this specific class of doubly-charged PAHs
as carriers of AIBs.
|
Distributed software development is more difficult than co-located software
development. One of the main reasons is that communication is more difficult in
distributed settings. Defined processes and artifacts help, but cannot cover
all information needs. Not communicating important project information,
decisions and rationales can result in duplicate or extra work, delays or even
project failure. Planning and managing a distributed project from an
information flow perspective helps to facilitate available communication
channels right from the start, beyond the documents and artifacts that are
defined for a given development process. In this paper we propose FLOW Mapping,
a systematic approach for planning and managing information flows in
distributed projects. We demonstrate the feasibility of our approach with a
case study in a distributed agile classroom project. FLOW Mapping is
sufficient to plan communication and to measure conformance to the
communication strategy. We also discuss cost and impact of our approach.
|
A new ultra-hard rhombohedral carbon rh-C4 (or hexagonal h-C12) is reported
as derived from 3R graphite through crystal chemistry construction and ground
state energy within the density functional theory. An extended hexagonal
three-dimensional network of h-C12 is formed of C4 tetrahedra, as in h-C4
lonsdaleite (hexagonal diamond). The electronic band structure of rh-C4 is
characteristic of an insulator with Egap = 4 eV, similar to diamond. From the
set of elastic constants, a bulk modulus larger than that of lonsdaleite and a
Vickers hardness (HV) exceeding that of both forms of diamond were derived.
|
Multifunctional therapeutic 3D scaffolds have been prepared. These
biomaterials are able to destroy S. aureus biofilms and to allow
bone regeneration at the same time. The present study is focused on the design
of pH sensitive 3D hierarchical meso-macroporous scaffolds based on MGHA
nanocomposite formed by a mesostructured glassy network with embedded
hydroxyapatite nanoparticles, whose mesopores have been loaded with
levofloxacin as antibacterial agent. These 3D platforms exhibit controlled and
pH-dependent levofloxacin release, sustained over time at physiological pH
(7.4) and notably increased at infection pH (6.7 and 5.5), which is due to the
different interaction rate between diverse levofloxacin species and the silica
matrix. These 3D systems are able to inhibit S. aureus growth and to
destroy the bacterial biofilm without cytotoxic effects on human osteoblasts
and allowing an adequate colonization and differentiation of preosteoblastic
cells on their surface. These findings suggest promising applications of these
hierarchical MGHA nanocomposite scaffolds for the treatment and prevention of
bone infection.
|
We investigate the role of higher-form symmetries and two kinds of 't Hooft
anomalies in non-equilibrium systems. To aid our investigation, we extend the
coset construction to account for $p$-form symmetries at zero and finite
temperature. One kind of anomaly arises when a $p$-form symmetry is
spontaneously broken: in a $d+1$-dimensional spacetime there often exists an
emergent $d-p-1$-form symmetry with mixed 't Hooft anomaly. That is, the
$p$-form and $d-p-1$-form symmetries cannot be gauged simultaneously. At the
level of the coset construction, this mixed anomaly prevents the Goldstones for
the $p$- and $d-p-1$-form symmetries from appearing in the same Maurer-Cartan
form. As a result, whenever such a mixed anomaly exists, we find the emergence
of dual theories -- one involving the $p$-form Goldstone and the other
involving the $d-p-1$-form Goldstone -- that are related to each other by a
kind of Legendre transform. Such an anomaly can exist at zero and finite
temperature. The other kind of 't Hooft anomaly can only arise in
non-equilibrium systems; we therefore term it the non-equilibrium 't Hooft
anomaly. In this case, an exact symmetry of the non-equilibrium effective
action fails to have a non-trivial, conserved Noether current. This anomalous
behavior arises when a global symmetry cannot be gauged in the non-equilibrium
effective action and can arise in both open and closed systems. We construct
actions for a number of systems including chemically reacting fluids,
Yang-Mills theory, Chern-Simons theory, magnetohydrodynamic systems, and dual
superfluid and solid theories. Finally, we find that the interplay of these two
kinds of anomalies has a surprising result: in non-equilibrium systems, whether
or not a symmetry appears spontaneously broken can depend on the time-scale
over which the system is observed.
|
We present a brief review of Friedmann's impact on cosmology from a
historical and a physical perspective.
|
Multi-agent behavior modeling aims to understand the interactions that occur
between agents. We present a multi-agent dataset from behavioral neuroscience,
the Caltech Mouse Social Interactions (CalMS21) Dataset. Our dataset consists
of trajectory data of social interactions, recorded from videos of freely
behaving mice in a standard resident-intruder assay. To help accelerate
behavioral studies, the CalMS21 dataset provides benchmarks to evaluate the
performance of automated behavior classification methods in three settings: (1)
for training on large behavioral datasets all annotated by a single annotator,
(2) for style transfer to learn inter-annotator differences in behavior
definitions, and (3) for learning of new behaviors of interest given limited
training data. The dataset consists of 6 million frames of unlabeled tracked
poses of interacting mice, as well as over 1 million frames with tracked poses
and corresponding frame-level behavior annotations. The challenge posed by our
dataset is to classify behaviors accurately using both labeled and unlabeled
tracking data, and to generalize to new settings.
|
Synchronization, cooperation, and chaos are ubiquitous phenomena in nature.
In a population composed of many distinct groups of individuals playing the
prisoner's dilemma game, there exists a migration dilemma: No cooperator would
migrate to a group playing the prisoner's dilemma game lest it should be
exploited by a defector; but unless the migration takes place, there is no
chance of the entire population's cooperator-fraction to increase. Employing a
randomly rewired coupled map lattice of chaotic replicator maps, modelling
replication-selection evolutionary game dynamics, we demonstrate that the
cooperators -- evolving in synchrony -- overcome the migration dilemma to
proliferate across the population when altruism is mildly incentivized by
making a few of the demes play the leader game.
|
We present large-eddy simulations (LESs) of riverine flow in a study reach in
the Sacramento River, California. The riverbed bathymetry was surveyed at high
resolution using a multibeam echosounder to construct the computational model
of the study area, while the topographies were defined using aerial
photographs taken by an Unmanned Aircraft System (UAS). In a series of field
campaigns, we measured the flow field of the river using an acoustic Doppler
current profiler (ADCP) and estimated it using large-scale particle image
velocimetry applied to videos taken during the UAS operation. We used the
measured data of the river flow field to evaluate the accuracy of the
LES-computed hydrodynamics.
The propagation of uncertainties in the LES results due to the variations in
the effective roughness height of the riverbed and the inflow discharge of the
river was studied using uncertainty quantification (UQ) analyses. The
polynomial chaos expansion (PCE) method was used to develop a surrogate model,
which was then sampled extensively with the Monte Carlo sampling (MCS) method
to generate confidence intervals for the LES-computed velocity field.
Also, Sobol indices derived from the PCE coefficients were calculated to help
understand the relative influence of different input parameters on the global
uncertainty of the results. The UQ analysis showed that uncertainties of LES
results in the shallow near bank regions of the river were mainly related to
the roughness, while the variation of inflow discharge leads to uncertainty in
the LES results throughout the river, indiscriminately.
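The PCE-plus-MCS workflow described above can be sketched on a toy problem. The two-input model, basis degree, and sample counts below are illustrative assumptions, not the paper's setup: a Legendre PCE surrogate is fit by regression, first-order Sobol indices are read off the coefficients, and the surrogate is Monte Carlo sampled for a confidence interval.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

def model(a, b):
    # hypothetical stand-in for an expensive simulation, inputs uniform on [-1, 1]
    return a + 0.5 * b**2

def P(n, x):
    # evaluate the degree-n Legendre polynomial via a unit coefficient vector
    c = np.zeros(n + 1)
    c[n] = 1.0
    return L.legval(x, c)

# multi-indices of a total-degree-2 tensor Legendre basis in two variables
idx = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]

# fit PCE coefficients by least-squares regression on random model evaluations
a = rng.uniform(-1, 1, 200)
b = rng.uniform(-1, 1, 200)
A = np.column_stack([P(i, a) * P(j, b) for i, j in idx])
coef, *_ = np.linalg.lstsq(A, model(a, b), rcond=None)

# variance decomposition: E[P_n^2] = 1/(2n+1) under U(-1, 1)
contrib = {k: c**2 / ((2 * i + 1) * (2 * j + 1))
           for k, ((i, j), c) in enumerate(zip(idx, coef)) if (i, j) != (0, 0)}
var = sum(contrib.values())
S_a = sum(v for k, v in contrib.items() if idx[k][1] == 0) / var  # first-order Sobol of a
S_b = sum(v for k, v in contrib.items() if idx[k][0] == 0) / var  # first-order Sobol of b

# cheap Monte Carlo on the surrogate for a 95% interval of the output
am = rng.uniform(-1, 1, 100_000)
bm = rng.uniform(-1, 1, 100_000)
ys = np.column_stack([P(i, am) * P(j, bm) for i, j in idx]) @ coef
lo, hi = np.percentile(ys, [2.5, 97.5])
```

Because the toy model lies exactly in the basis, the regression recovers the coefficients exactly and the Sobol indices are analytic (15/16 for the linear input, 1/16 for the quadratic one).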
|
Directly manipulating the atomic structure to achieve a specific property is
a long-standing pursuit in materials science. However, hindered by the disordered,
non-prototypical glass structure and the complex interplay between structure
and property, such inverse design is dauntingly hard for glasses. Here,
combining two cutting-edge techniques, graph neural networks and swap Monte
Carlo, we develop a data-driven, property-oriented inverse design route that
improves the plastic resistance of Cu-Zr metallic glasses in a
controllable way. Swap Monte Carlo, as "sampler", effectively explores the
glass landscape, and graph neural networks, with high regression accuracy in
predicting the plastic resistance, serve as "decider" to guide the search in
configuration space. Via an unconventional strengthening mechanism, a
geometrically ultra-stable yet energetically meta-stable state is unraveled,
contrary to the common belief that the higher the energy, the lower the plastic
resistance. This demonstrates a vast configuration space that can be easily
overlooked by conventional atomistic simulations. The data-driven techniques,
structural search methods and optimization algorithms consolidate to form a
toolbox, paving a new way to the design of glassy materials.
|
In this paper a novel non-negative finite volume discretization scheme is
proposed for certain first order nonlinear partial differential equations
describing conservation laws arising in traffic flow modelling. The spatially
discretized model is shown to preserve several fundamentally important
analytical properties of the conservation law (e.g., conservativeness,
capacity) giving rise to a set of (second order) polynomial ODEs. Furthermore,
it is shown that the discretized traffic flow model is formally kinetic and
that it can be interpreted in a compartmental context. As a consequence,
traffic networks can be represented as reaction graphs. It is shown that the
model can be equipped with on- and off-ramps in a physically meaningful way,
still preserving the advantageous properties of the discretization. Numerical
case studies include empirical convergence tests, and the stability analysis
presented in the paper paves the way to scalable observer and controller
design.
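The conservativeness and non-negativity properties discussed above can be illustrated with a generic first-order finite-volume (Godunov) discretization of the LWR traffic conservation law with a Greenshields flux; this is a sketch under those standard assumptions, not the authors' particular scheme.

```python
import numpy as np

def godunov_lwr(rho, dt, dx, steps):
    """Godunov finite-volume update for the LWR model with the concave
    Greenshields flux f(r) = r(1 - r) (maximum 1/4 at r = 1/2).
    Zero-flux boundaries model a closed road, so total density is conserved."""
    f = lambda r: r * (1.0 - r)
    for _ in range(steps):
        rl, rr = rho[:-1], rho[1:]
        # demand of the upstream cell and supply of the downstream cell
        demand = np.where(rl < 0.5, f(rl), 0.25)
        supply = np.where(rr > 0.5, f(rr), 0.25)
        F = np.minimum(demand, supply)          # Godunov flux at interior faces
        Fall = np.concatenate(([0.0], F, [0.0]))  # closed boundaries
        rho = rho - dt / dx * (Fall[1:] - Fall[:-1])
    return rho
```

Under the CFL condition dt/dx <= 1 the scheme is monotone, so densities stay inside the invariant region [0, 1] and the total mass is conserved exactly (up to round-off), mirroring the conservativeness and capacity properties preserved by the discretization in the text.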
|
We continue our study of initial-value problems for fully nonlinear systems
exhibiting strong or weak defects of hyperbolicity. We prove that, regardless
of the initial Sobolev regularity, the initial-value problem has no local $H^s$
solutions, for $s > s_0 + d/2,$ if the principal symbol has a strong, or even
weak, defect of hyperbolicity, and the purely imaginary eigenvalues of the
principal symbol are semi-simple and have constant multiplicity. The index $s_0
> 0$ depends on the severity of the defect of hyperbolicity. These results
recover and extend previous work from G. M\'etivier [{\it Remarks on the well
posedness of the nonlinear Cauchy problem}, 2005], N. Lerner, Y. Morimoto, C.-J.
Xu [{\it Instability of the Cauchy-Kovalevskaya solution for a class of
non-linear systems}, 2010] and N. Lerner, T. Nguyen, B. Texier [{\it The onset
of instability in first-order systems}, 2018].
|
In this paper, we present a new objective prediction model for synthetic
speech naturalness. It can be used to evaluate Text-To-Speech or Voice
Conversion systems and works language independently. The model is trained
end-to-end and based on a CNN-LSTM network that was previously shown to give
good results for speech quality estimation. We trained and tested the model on
16 different datasets, including ones from the Blizzard Challenge and the Voice
Conversion Challenge. Further, we show that the reliability of deep
learning-based naturalness prediction can be improved by transfer learning from
speech quality prediction models that are trained on objective POLQA scores.
The proposed model is made publicly available and can, for example, be used to
evaluate different TTS system configurations.
|
The (poset) cube Ramsey number $R(Q_n,Q_n)$ is defined as the least~$m$ such
that any 2-coloring of the $m$-dimensional cube $Q_m$ admits a monochromatic
copy of $Q_n$. The trivial lower bound $R(Q_n,Q_n)\ge 2n$ was improved by Cox
and Stolee, who showed $R(Q_n,Q_n)\ge 2n+1$ for $3\le n\le 8$ and $n\ge 13$
using a probabilistic existence proof. In this paper, we provide an explicit
construction that establishes $R(Q_n,Q_n)\ge 2n+1$ for all $n\ge 3$.
|
In this article, we present a visual introduction to Gaussian Belief
Propagation (GBP), an approximate probabilistic inference algorithm that
operates by passing messages between the nodes of arbitrarily structured factor
graphs. A special case of loopy belief propagation, GBP updates rely only on
local information and will converge independently of the message schedule. Our
key argument is that, given recent trends in computing hardware, GBP has the
right computational properties to act as a scalable distributed probabilistic
inference framework for future machine learning systems.
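GBP's local message passing can be sketched on a minimal example: a scalar two-variable tree (a prior on x0, a measurement on x1, and a smoothness factor), all in canonical/information form, with hypothetical factor values chosen for illustration. On a tree the resulting beliefs match the exact marginals.

```python
import numpy as np

def marg(eta, Lam, keep):
    """Marginalise a 2-variable canonical-form Gaussian (eta, Lambda)
    onto variable index `keep` via a scalar Schur complement."""
    drop = 1 - keep
    Lam_k = Lam[keep][keep] - Lam[keep][drop] * Lam[drop][keep] / Lam[drop][drop]
    eta_k = eta[keep] - Lam[keep][drop] / Lam[drop][drop] * eta[drop]
    return eta_k, Lam_k

# unary factors in information form: prior N(0, 1) on x0, measurement N(2, 1) on x1
unary = [(0.0, 1.0), (2.0, 1.0)]                 # (eta, Lambda) per variable
# pairwise smoothness factor w * (x0 - x1)^2 with weight w = 4
pair_eta = [0.0, 0.0]
pair_Lam = [[4.0, -4.0], [-4.0, 4.0]]

# message pairwise-factor -> x1: absorb x0's incoming unary message, marginalise x0 out
eta_in = [pair_eta[0] + unary[0][0], pair_eta[1]]
Lam_in = [[pair_Lam[0][0] + unary[0][1], pair_Lam[0][1]],
          [pair_Lam[1][0], pair_Lam[1][1]]]
m_eta, m_Lam = marg(eta_in, Lam_in, keep=1)

# belief of x1 = product of incoming messages (a sum in information form)
mean_x1 = (unary[1][0] + m_eta) / (unary[1][1] + m_Lam)

# exact marginal mean from the dense joint precision, for comparison
J = np.array([[5.0, -4.0], [-4.0, 5.0]])
h = np.array([0.0, 2.0])
mean_exact = np.linalg.solve(J, h)
```

The belief mean for x1 computed from purely local messages coincides with the mean obtained by solving the full joint system, which is the tree-exactness property that loopy GBP generalizes.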
|
Materials property predictions have improved from advances in
machine-learning algorithms, delivering materials discoveries and novel
insights through data-driven models of structure-property relationships. Nearly
all available models rely on featurization of materials composition, however,
whether the exclusive use of structural knowledge in such models has the
capacity to make comparable predictions remains unknown. Here we employ a deep
neural network (DNN) model, deepKNet, to learn structure-property relationships
in crystalline materials without explicit chemical compositions, focusing on
classification of crystal systems, mechanical elasticity, electrical behavior,
and phase stability. The deepKNet model utilizes a three-dimensional (3D)
momentum space representation of structure from elastic X-ray scattering theory
and simultaneously exhibits rotation and permutation invariance. We find that
the spatial symmetry of the 3D point cloud, which reflects crystalline symmetry
operations, is more important than the point intensities contained within,
which correspond to various planar electron densities, for making a successful
metal-insulator classification. In contrast, the intensities are more important
for predicting bulk moduli. Phase stability, however, relies more upon chemical
composition information, where our structure-based model exhibits limited
predictability. We find learning the materials structural genome in the form of
a chemistry-agnostic DNN demonstrates that some crystal structures inherently
host high propensities for optimal materials properties, which enables the
decoupling of structure and composition for future co-design of
multifunctionality.
|
As a consequence of the classification of finite simple groups, the
classification of permutation groups of prime degree is complete, apart from
the question of when the natural degree $(q^n-1)/(q-1)$ of ${\rm PSL}_n(q)$ is
prime. We present heuristic arguments and computational evidence based on the
Bateman-Horn Conjecture to support a conjecture that for each prime $n\ge 3$
there are infinitely many primes of this form, even if one restricts to prime
values of $q$. Similar arguments and results apply to the parameters of the
simple groups ${\rm PSL}_n(q)$, ${\rm PSU}_n(q)$ and ${\rm PSp}_{2n}(q)$ which
arise in the work of Dixon and Zalesskii on linear groups of prime degree.
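The natural degrees in question are easy to enumerate. The sketch below (my own illustration, not the paper's computation) searches for prime values of $(q^3-1)/(q-1) = q^2+q+1$ restricted to prime $q$, the $n=3$ case of the conjecture.

```python
def is_prime(m):
    """Deterministic trial-division primality test (fine for small m)."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

# prime q < 100 for which the natural degree (q^3 - 1)/(q - 1) = q^2 + q + 1
# of PSL_3(q) is itself prime
n = 3
hits = [q for q in range(2, 100)
        if is_prime(q) and is_prime((q**n - 1) // (q - 1))]
```

For example, q = 2, 3, 5, 17 give the primes 7, 13, 31, 307, while q = 7 gives 57 = 3 * 19, which is composite.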
|
It is shown that $c=-29/16$ is the unique rational number of smallest
denominator, and the unique rational number of smallest numerator, for which
the map $f_c(x) = x^2+c$ has a rational periodic point of period $3$. Several
arithmetic conditions on the set of all such rational numbers $c$ and the
rational orbits of $f_c(x)$ are proved. A graph on the numerators of the
rational $3$-periodic points of maps $f_c$ is considered which reflects
connections between solutions of norm equations from the cubic field of
discriminant $-23$.
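The existence claim for $c = -29/16$ can be checked with exact rational arithmetic; the code below (an illustration, not taken from the paper) verifies that $-1/4 \to -7/4 \to 5/4 \to -1/4$ is a rational 3-cycle of $f_c(x) = x^2 + c$.

```python
from fractions import Fraction

c = Fraction(-29, 16)
f = lambda x: x * x + c  # the quadratic map f_c(x) = x^2 + c

# iterate from a rational seed and collect three points of the orbit
x0 = Fraction(-1, 4)
orbit = [x0, f(x0), f(f(x0))]
```

Since f(5/4) = 25/16 - 29/16 = -1/4, the orbit closes after exactly three distinct points, so it is a genuine period-3 cycle rather than a fixed point.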
|
The emergence of deep learning has been accompanied by privacy concerns
surrounding users' data and service providers' models. We focus on private
inference (PI), where the goal is to perform inference on a user's data sample
using a service provider's model. Existing PI methods for deep networks enable
cryptographically secure inference with little drop in functionality; however,
they incur severe latency costs, primarily caused by non-linear network
operations (such as ReLUs). This paper presents Sphynx, a ReLU-efficient
network design method based on micro-search strategies for convolutional cell
design. Sphynx achieves Pareto dominance over all existing private inference
methods on CIFAR-100. We also design large-scale networks that support
cryptographically private inference on Tiny-ImageNet and ImageNet.
|
Electronic health record (EHR) coding is the task of assigning ICD codes to
each EHR. Most previous studies either only focus on the frequent ICD codes or
treat rare and frequent ICD codes in the same way. These methods perform well
on frequent ICD codes but due to the extremely unbalanced distribution of ICD
codes, the performance on rare ones is far from satisfactory. We seek to
improve the performance for both frequent and rare ICD codes by using a
contrastive graph-based EHR coding framework, CoGraph, which re-casts EHR
coding as a few-shot learning task. First, we construct a heterogeneous EHR
word-entity (HEWE) graph for each EHR, where the words and entities extracted
from an EHR serve as nodes and the relations between them serve as edges. Then,
CoGraph learns similarities and dissimilarities between HEWE graphs from
different ICD codes so that information can be transferred among them. In a
few-shot learning scenario, the model only has access to frequent ICD codes
during training, which might force it to encode features that are useful for
frequent ICD codes only. To mitigate this risk, CoGraph devises two graph
contrastive learning schemes, GSCL and GECL, that exploit the HEWE graph
structures so as to encode transferable features. GSCL utilizes the
intra-correlation of different sub-graphs sampled from HEWE graphs while GECL
exploits the inter-correlation among HEWE graphs at different clinical stages.
Experiments on the MIMIC-III benchmark dataset show that CoGraph significantly
outperforms state-of-the-art methods on EHR coding, not only on frequent ICD
codes, but also on rare codes, in terms of several evaluation indicators. On
frequent ICD codes, GSCL and GECL improve the classification accuracy and F1 by
1.31% and 0.61%, respectively, and on rare ICD codes CoGraph yields larger
improvements of 2.12% and 2.95%.
|
Semiconductor nanowire networks are essential elements for a variety of
gate-tunable quantum applications. Their relevance, however, depends critically
on the material quality. In this work we study selective area growth (SAG) of
highly lattice-mismatched InAs/In$_x$Ga$_{1-x}$As nanowires on insulating
GaAs(001) substrates and address two key challenges: crystalline quality and
compositional uniformity. We introduce optimization steps and show how misfit
dislocations are guided away from the InAs active region and how Ga-In
intermixing is kinetically limited with growth temperature. The optimization
process leads to a more than twofold increase in electron mobility and marks
an advance toward realizing high-quality gatable quantum wire networks.
|
This article is mostly based on a talk I gave at the March 2021 meeting
(virtual) of the American Physical Society on the occasion of receiving the
Dannie Heineman prize for Mathematical Physics from the American Institute of
Physics and the American Physical Society. I am greatly indebted to many
colleagues for the results leading to this award. To name them all would take
up all the space allotted to this article (I have had more than 200
collaborators so far). I will therefore mention just a few: Michael Aizenman,
Bernard Derrida, Shelly Goldstein, Elliott Lieb, Oliver Penrose, Errico
Presutti, Gene Speer and Herbert Spohn. I am grateful to all of my
collaborators, listed and unlisted. I would also like to acknowledge here
long-time support from the AFOSR and the NSF.
|
We investigate the possibility of gravitationally generated particle
production via the mechanism of nonminimal torsion--matter coupling. An
intriguing feature of this theory is that the divergence of the matter
energy--momentum tensor does not vanish identically. We explore the physical
and cosmological implications of the nonconservation of the energy--momentum
tensor by using the formalism of irreversible thermodynamics of open systems in
the presence of matter creation/annihilation. The particle creation rates,
pressure, and the expression of the comoving entropy are obtained in a
covariant formulation and discussed in detail. Applied together with the
gravitational field equations, the thermodynamics of open systems lead to a
generalization of the standard $\Lambda$CDM cosmological paradigm, in which the
particle creation rates and pressures are effectively considered as components
of the cosmological fluid energy--momentum tensor. We consider specific models,
and we show that cosmology with a torsion--matter coupling can almost perfectly
reproduce the $\Lambda$CDM scenario, while it additionally gives rise to
particle creation rates, creation pressures, and entropy generation through
gravitational matter production in both low and high redshift limits.
|
We present the identification of the COCONUTS-2 system, composed of the M3
dwarf L 34-26 and the T9 dwarf WISEPA J075108.79$-$763449.6. Given their common
proper motions and parallaxes, these two field objects constitute a physically
bound pair with a projected separation of 594$"$ (6471 au). The primary star
COCONUTS-2A has strong stellar activity (H$\alpha$, X-ray, and UV emission) and
is rapidly rotating ($P_{\rm rot} = 2.83$ days), from which we estimate an age
of 150-800 Myr. Comparing equatorial rotational velocity derived from the TESS
light curve to spectroscopic $v\sin{i}$, we find COCONUTS-2A has a nearly
edge-on inclination. The wide exoplanet COCONUTS-2b has an effective
temperature of $T_{\rm eff}=434 \pm 9$ K, a surface gravity of $\log{g} =
4.11^{+0.11}_{-0.18}$ dex, and a mass of $M=6.3^{+1.5}_{-1.9}$ $M_{\rm Jup}$
based on hot-start evolutionary models, leading to a $0.016^{+0.004}_{-0.005}$
mass ratio for the COCONUTS-2 system. COCONUTS-2b is the second coldest (after
WD 0806$-$661B) and the second widest (after TYC 9486-927-1 b) exoplanet imaged
to date. Comparison of COCONUTS-2b's infrared photometry with ultracool model
atmospheres suggests the presence of both condensate clouds and non-equilibrium
chemistry in its photosphere. Similar to 51 Eri b, COCONUTS-2b has a
sufficiently low luminosity ($\log{(L_{\rm bol}/L_{\odot})} = -6.384 \pm 0.028$
dex) to be consistent with the cold-start process that may form gas-giant
(exo)planets, though its large separation means such formation would not have
occurred in situ. Finally, at a distance of 10.9 pc, COCONUTS-2b is the nearest
imaged exoplanet to Earth known to date.
|
Magnetic insulators are important materials for a range of next generation
memory and spintronic applications. Structural constraints in this class of
devices generally require a clean heterointerface that allows effective
magnetic coupling between the insulating layer and the conducting layer.
However, there are relatively few examples of magnetic insulators which can be
synthesized with surface qualities that would allow these smooth interfaces and
precisely tuned interfacial magnetic exchange coupling which might be
applicable at room temperature. In this work, we demonstrate an example of how
the configurational complexity in the magnetic insulator layer can be used to
realize these properties. The entropy-assisted synthesis is used to create
single crystal (Mg0.2Ni0.2Fe0.2Co0.2Cu0.2)Fe2O4 films on substrates spanning a
range of strain states. These films show smooth surfaces, high resistivity, and
strong magnetic responses at room temperature. Local and global magnetic
measurements further demonstrate how strain can be used to manipulate magnetic
texture and anisotropy. These findings provide insight into how precise
magnetic responses can be designed using compositionally complex materials that
may find application in next generation magnetic devices.
|
We characterize the essential spectrum of the plasmonic problem for polyhedra
in $\mathbb{R}^3$. The description is particularly simple for convex polyhedra
and permittivities $\epsilon < - 1$. The plasmonic problem is interpreted as a
spectral problem through a boundary integral operator, the direct value of the
double layer potential, also known as the Neumann--Poincar\'e operator. We
therefore study the spectral structure of the double layer potential for
polyhedral cones and polyhedra.
|
Lattice simulations of the QCD correlation functions in the Landau gauge have
established two remarkable facts. First, the coupling constant in the gauge
sector remains finite and moderate at all scales, suggesting that some kind of
perturbative description should be valid down to infrared momenta. Second, the
gluon propagator reaches a finite nonzero value at vanishing momentum,
corresponding to a gluon screening mass. We review recent studies which aim at
describing the long-distance properties of Landau gauge QCD by means of the
perturbative Curci-Ferrari model. The latter is the simplest deformation of the
Faddeev-Popov Lagrangian in the Landau gauge that includes a gluon screening
mass at tree-level. There is, by now, strong evidence that this approach
successfully describes many aspects of the infrared QCD dynamics. In
particular, several correlation functions were computed at one- and two-loop
orders and compared with {\it ab-initio} lattice simulations. The typical error
is of the order of ten percent for a one-loop calculation and drops to a few
percent at two loops. We review such calculations in the quenched
approximation as well as in the presence of dynamical quarks. In the latter
case, the spontaneous breaking of chiral symmetry requires going beyond a
coupling expansion but can still be described in a controlled approximation
scheme in terms of small parameters. We also review applications of the
approach to nonzero temperature and chemical potential.
|
The two-band model works well for the Hall effect in topological insulators. It
turns out to be non-Hermitian when the system is coupled to an environment, and
its topology, characterized by Chern numbers, has received extensive study in
the past decades. However, how a non-Hermitian system responds to an electric
field, and how that response is connected to the Chern number defined via the
non-Hermitian Hamiltonian, remain barely explored. In this paper,
focusing on a k-dependent decay rate, we address this issue by studying the
response of such a non-Hermitian Chern insulator to an external electric field.
To this aim, we first derive an effective non-Hermitian Hamiltonian to describe
the system and give a specific form of k-dependent decay rate. Then we
calculate the response of the non-Hermitian system to a constant electric
field. We observe that the environment turns the Hall conductance into a
weighted integral of the curvature of the ground band, so the conductance is
no longer quantized in general. The environment also induces a delay in the
response of the system to the electric field. A discussion of the validity of
the non-Hermitian model compared with the master equation description is also
presented.
|
An appearance or disappearance of QPOs associated with the variation of X-ray
flux can be used to decipher the accretion ejection mechanism of black hole
X-ray sources. We searched and studied such rapid transitions in H1743-322
using RXTE archival data and found eight such events, where QPO vanishes
suddenly along with the variation of X-ray flux. The appearance/disappearance
of QPOs were associated to the four events exhibiting type-B QPOs at $\sim$ 4.5
Hz, one with type-A QPO at $\nu$ $\sim$ 3.5 Hz, and the remaining three were
connected to type-C QPOs at $\sim$ 9.5 Hz. Spectral studies of the data
unveiled that an inner disk radius remained at the same location around 2-9
r$_g$ , depending on the used model but power-law indices were varying,
indicating that either corona or jet is responsible for the events. The
probable ejection radii of corona were estimated to be around 4.2-15.4 r$_g$
based on the plasma ejection model. Our X-ray and quasi-simultaneous radio
correlation studies suggest that the type-B QPOs are probably related to the
precession of a weak jet, though a small and weak corona is present at its
base, and the type-C QPOs are associated with the base of a relatively strong
jet which acts like a corona.
|
Solutions to the $\mu$ problem in supersymmetry based on the Kim-Nilles
mechanism naturally feature a Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) axion
with decay constant of order the geometric mean of the Planck and TeV scales,
consistent with astrophysical limits. We investigate minimal models of this
type with two gauge-singlet fields that break a Peccei-Quinn symmetry, and
extensions with extra vectorlike quark and lepton supermultiplets consistent
with gauge coupling unification. We show that there are many anomaly-free
discrete symmetries, depending on the vectorlike matter content, that protect
the Peccei-Quinn symmetry to sufficiently high order to solve the strong CP
problem. We study the axion couplings in this class of models. Models of this
type that are automatically free of the domain wall problem require at least
one pair of strongly interacting vectorlike multiplets with mass at the
intermediate scale, and predict axion couplings that are greatly enhanced
compared to the minimal supersymmetric DFSZ models, putting them within reach
of proposed axion searches.
|
In this paper, we focus on the $(\sigma,\tau)$-derivation theory of Lie
conformal superalgebras. Firstly, we study the fundamental properties of
conformal $(\sigma,\tau)$-derivations. Secondly, we mainly study the interiors
of conformal $G$-derivations. Finally, we discuss the relationships between the
conformal $(\sigma,\tau)$-derivations and some generalized conformal
derivations of Lie conformal superalgebras.
|
The thermodynamic activities of all components in Ga-In-Tl system have been
predicted at 1073, 1173 and 1273 K, using Molecular interaction volume model
(MIVM). The infinite dilute activity coefficients for Ga-In and Ga-Tl binary
subsystems, which were needed for the determination of thermodynamic activities
of all components in Ga-In-Tl, have been predicted by using a method that is
based on the complex formation model for liquid alloys. The computed thermodynamic
activities of Ga in Ga-In-Tl were observed to satisfactorily agree with the
available experimental data when the newly computed coefficients were applied
in MIVM. The satisfactory prediction of the activity of Ga led to the
prediction of the activities of the remaining two components (In and Tl).
Iso-activities of all components (Ga, In and Tl) were plotted, and they reveal
the dependence of the nature of chemical short-range order in the Ga-In-Tl
system on composition.
|
The Ni-based superalloy Alloy 718 is used in aircraft engines as
high-pressure turbine discs and must endure challenging demands on
high-temperature yield strength, creep-, and oxidation-resistance. Nanoscale
$\gamma^{\prime}$- and $\gamma^{\prime \prime}$-precipitates commonly found in
duplet and triplet co-precipitate morphologies provide high-temperature
strength under these harsh operating conditions. Direct ageing of Alloy 718 is
an attractive alternative manufacturing route known to increase the yield
strength at 650 $^{\circ}$C by at least +10 $\%$, by both retaining high
dislocation densities and changing the nanoscale co-precipitate morphology.
However, the detailed nucleation and growth mechanisms of the duplet and
triplet co-precipitate morphologies of $\gamma^{\prime}$ and $\gamma^{\prime
\prime}$ during the direct ageing process remain unknown. We provide a
correlative high-resolution microscopy approach using transmission electron
microscopy, high-angle annular dark-field imaging, and atom probe microscopy to
reveal the early stages of precipitation during direct ageing of Alloy 718.
Quantitative stereological analyses of the $\gamma^{\prime}$- and
$\gamma^{\prime \prime}$-precipitate dispersions as well as their chemical
compositions have allowed us to propose a qualitative model of the
microstructural evolution. It is shown that fine $\gamma^{\prime}$- and
$\gamma^{\prime \prime}$-precipitates nucleate homogeneously and grow
coherently. However, $\gamma^{\prime \prime}$-precipitates also nucleate
heterogeneously on dislocations and experience accelerated growth due to Nb
pipe diffusion. Moreover, the co-precipitation reactions are largely influenced
by solute availability and the potential for enrichment of Nb and rejection of
Al+Ti.
|
We study systems of String Equations where block variables need to be
assigned strings so that their concatenation gives a specified target string.
We investigate this problem under a multivariate complexity framework,
searching for tractable special cases such as systems of equations with few
block variables or few equations. Our main results include a polynomial-time
algorithm for size-2 equations, and hardness for size-3 equations, as well as
hardness for systems of two equations, even with tight constraints on the block
variables. We also study a variant where few deletions are allowed in the
target string, and give XP algorithms in this setting when the number of block
variables is constant.
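The size-2 setting can be made concrete with a small solver. The sketch below is an illustrative brute-force search over split points, not the polynomial-time algorithm of the paper: each equation XY = t admits |t| + 1 splits, and an assignment must be consistent across the whole system.

```python
from itertools import product

def solve_size2(equations):
    """Brute-force solver for a system of size-2 string equations.
    Each equation is a triple (X, Y, t): the concatenation of the strings
    assigned to block variables X and Y must equal the target t.
    Returns a consistent assignment dict, or None if the system is unsat."""
    # all (prefix, suffix) splits of each target string
    splits = [[(t[:i], t[i:]) for i in range(len(t) + 1)]
              for (_, _, t) in equations]
    for choice in product(*splits):
        assign = {}
        ok = True
        for (x, y, _), (px, sx) in zip(equations, choice):
            for var, val in ((x, px), (y, sx)):
                # setdefault either records the value or exposes a clash
                if assign.setdefault(var, val) != val:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            return assign
    return None
```

For example, the system {XY = "abab", YX = "abab"} is satisfiable (e.g., X = "", Y = "abab"), while the single equation XX = "aba" is not, since the target has odd length.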
|
The inverse renormalization group is studied based on image super-resolution
using deep convolutional neural networks. We consider the
improved correlation configuration instead of spin configuration for the spin
models, such as the two-dimensional Ising and three-state Potts models. We
propose a block-cluster transformation as an alternative to the block-spin
transformation in dealing with the improved estimators. In the framework of the
dual Monte Carlo algorithm, the block-cluster transformation is regarded as a
transformation in the graph degrees of freedom, whereas the block-spin
transformation is that in the spin degrees of freedom. We demonstrate that the
renormalized improved correlation configuration successfully reproduces the
original configuration at all the temperatures by the super-resolution scheme.
Using the rule of enlargement, we repeatedly apply the inverse renormalization
procedure to generate larger correlation configurations. To connect to
thermodynamics, an approximate temperature rescaling is discussed. The enlarged
systems generated using super-resolution satisfy finite-size scaling.
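As context for the proposed block-cluster transformation, here is a minimal sketch of the conventional block-spin transformation it replaces, assuming majority rule on $2\times 2$ blocks of Ising spins; the block size and the tie-breaking convention are illustrative assumptions, not taken from the abstract:

```python
import numpy as np

def block_spin(config, b=2):
    # Majority-rule block-spin transformation: each b x b block of spins is
    # coarse-grained to the sign of its total magnetisation (ties broken
    # toward +1, a common convention choice).
    L = config.shape[0]
    assert L % b == 0
    blocks = config.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    return np.where(blocks >= 0, 1, -1)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(8, 8))
coarse = block_spin(spins, b=2)  # renormalized 4 x 4 configuration
```

The block-cluster transformation of the abstract acts analogously but on the graph degrees of freedom of the dual Monte Carlo representation rather than on spins.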
|
Although much progress has been made on the physics of magic angle twisted
bilayer graphene at integer fillings, little attention has been given to
fractional fillings. Here we show that the three-peak structure of Wannier
orbitals, dictated by the symmetry and topology of flat bands, facilitates the
emergence of a novel state at commensurate fractional filling of $\nu = n \pm
1/3$. We dub this state a "fractional correlated insulator". Specifically for
the filling of $\pm 1/3$ electrons per moir\'{e} unit cell, we show that
short-range interactions alone imply an approximate extensive entropy due to
the "breathing" degree of freedom of an irregular honeycomb lattice that
emerges through defect lines. The leading further-range interaction lifts this
degeneracy and selects a novel ferromagnetic nematic state that breaks AB/BA
sublattice symmetry. The proposed fractional correlated insulating state might
underlie the suppression of superconductivity at $\nu = 2-1/3$ filling observed
in arXiv:2004.04148. Further investigation of the proposed fractional
correlated insulating state would open doors to new regimes of correlation
effects in MATBG.
|
Self-supervised learning has shown remarkable performance in utilizing
unlabeled data for various video tasks. In this paper, we focus on applying the
power of self-supervised methods to improve semi-supervised action proposal
generation. Particularly, we design an effective Self-supervised
Semi-supervised Temporal Action Proposal (SSTAP) framework. The SSTAP contains
two crucial branches, i.e., temporal-aware semi-supervised branch and
relation-aware self-supervised branch. The semi-supervised branch improves the
proposal model by introducing two temporal perturbations, i.e., temporal
feature shift and temporal feature flip, in the mean teacher framework. The
self-supervised branch defines two pretext tasks, including masked feature
reconstruction and clip-order prediction, to learn the relation of temporal
clues. By this means, SSTAP can better explore unlabeled videos, and improve
the discriminative abilities of learned action features. We extensively
evaluate the proposed SSTAP on THUMOS14 and ActivityNet v1.3 datasets. The
experimental results demonstrate that SSTAP significantly outperforms
state-of-the-art semi-supervised methods and even matches fully-supervised
methods. Code is available at https://github.com/wangxiang1230/SSTAP.
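The two temporal perturbations in the semi-supervised branch can be sketched as simple operations on snippet-level feature sequences; the (T, C) layout, the cyclic shift, and the NumPy implementation below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def temporal_feature_shift(feats, shift=1):
    # Cyclically shift a (T, C) feature sequence along the temporal axis.
    return np.roll(feats, shift, axis=0)

def temporal_feature_flip(feats):
    # Reverse a (T, C) feature sequence in time.
    return feats[::-1].copy()

feats = np.arange(12, dtype=float).reshape(4, 3)  # T=4 snippets, C=3 channels
shifted = temporal_feature_shift(feats)
flipped = temporal_feature_flip(feats)
```

In the mean-teacher setup, the student sees a perturbed sequence while the teacher sees the original, and their proposal predictions are encouraged to agree.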
|
We consider the Keller-Segel system of consumption type coupled with an
incompressible fluid equation. The system describes the dynamics of oxygen and
bacteria densities evolving within a fluid. We establish local well-posedness
of the system in Sobolev spaces for partially inviscid and fully inviscid
cases. In the latter, additional assumptions on the initial data are required
when either the oxygen or bacteria density touches zero. Even though the oxygen
density satisfies a maximum principle due to consumption, we prove finite time
blow-up of its $C^{2}$--norm with certain initial data.
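For context, a commonly studied form of a consumption-type Keller-Segel system coupled with an incompressible fluid, with bacteria density $n$, oxygen density $c$, fluid velocity $u$ and pressure $p$, reads
\[
\begin{aligned}
\partial_t n + u \cdot \nabla n &= D_n \Delta n - \nabla \cdot \big(n \chi(c) \nabla c\big),\\
\partial_t c + u \cdot \nabla c &= D_c \Delta c - n c,\\
\partial_t u + u \cdot \nabla u + \nabla p &= \nu \Delta u - n \nabla \phi, \qquad \nabla \cdot u = 0,
\end{aligned}
\]
where $\phi$ is a given potential; the partially and fully inviscid cases correspond to some or all of the coefficients $D_n$, $D_c$, $\nu$ vanishing. The precise coupling terms vary between works, so this display is illustrative rather than the exact system of the abstract.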
|
Magnetic impurities in $s$-wave superconductors lead to spin-polarized
Yu-Shiba-Rusinov (YSR) in-gap states. Chains of magnetic impurities offer one
of the most viable routes for the realization of Majorana bound states which
hold a promise for topological quantum computing. However, this ambitious goal
looks distant since no quantum coherent degrees of freedom have yet been
identified in these systems. To fill this gap, we propose an effective two-level
system, a YSR qubit, stemming from two nearby impurities. Using a
time-dependent wave-function approach, we derive an effective Hamiltonian
describing the YSR qubit evolution as a function of distance between the
impurity spins, their relative orientations, and their dynamics. We show that
the YSR qubit can be controlled and read out using the state-of-the-art
experimental techniques for manipulation of the spins. Finally, we address the
effect of spin noise on the coherence properties of the YSR qubit, and
demonstrate robust behaviour for a wide range of experimentally relevant
parameters.
Looking forward, the YSR qubit could facilitate the implementation of a
universal set of quantum gates in hybrid systems where they are coupled to
topological Majorana qubits.
|
We consider the problem of finding, through adaptive sampling, which of $n$
options (arms) has the largest mean. Our objective is to determine a rule which
identifies the best arm with a fixed minimum confidence using as few
observations as possible, i.e. this is a fixed-confidence (FC) best arm
identification (BAI) in multi-armed bandits. We study such problems under the
Bayesian setting with both Bernoulli and Gaussian arms. We propose to use the
classical "vector at a time" (VT) rule, which samples each remaining arm once
in each round. We show how VT can be implemented and analyzed in our Bayesian
setting and improved by early elimination. Our analysis shows that these
algorithms guarantee an optimal strategy under the prior. We also propose and
analyze a variant of the classical "play the winner" (PW) algorithm. Numerical
results show that these rules compare favorably with state-of-the-art
algorithms.
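A minimal sketch of the "vector at a time" rule with early elimination for Bernoulli arms; the Hoeffding-style elimination margin below is an illustrative frequentist stand-in for the Bayesian stopping analysis of the abstract:

```python
import numpy as np

def vector_at_a_time(means, delta=0.05, rng=None):
    # VT rule: sample every remaining Bernoulli arm once per round, then
    # eliminate arms whose empirical mean trails the current leader by more
    # than a confidence margin. Arms are only ever removed, so every active
    # arm has exactly t samples after t rounds.
    rng = rng or np.random.default_rng(0)
    n = len(means)
    active = list(range(n))
    sums = np.zeros(n)
    t = 0
    while len(active) > 1:
        t += 1
        for a in active:
            sums[a] += rng.random() < means[a]
        emp = sums[np.array(active)] / t
        radius = np.sqrt(np.log(4 * n * t * t / delta) / (2 * t))
        best = emp.max()
        active = [a for a, m in zip(active, emp) if m + 2 * radius >= best]
    return active[0]

best_arm = vector_at_a_time([0.2, 0.8, 0.3])
```

With well-separated means and a fixed seed, the rule returns the index of the largest mean after a few hundred rounds.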
|
The dust production in debris discs by grinding collisions of planetesimals
requires their orbits to be stirred. However, stirring levels remain largely
unconstrained, and consequently the stirring mechanisms as well. This work
shows how the sharpness of the outer edge of discs can be used to constrain the
stirring levels. Namely, the sharper the edge, the lower the eccentricity
dispersion must be. For a Rayleigh distribution of eccentricities ($e$), I find
that the disc surface density near the outer edge can be parametrised as
$\tanh[(r_{\max}-r)/l_{\rm out}]$, where $r_{\max}$ approximates the maximum
semi-major axis and $l_{\rm out}$ defines the edge smoothness. If the
semi-major axis distribution has sharp edges $e_\mathrm{rms}$ is roughly $1.2
l_{\rm out}/r_{\max}$, or $e_\mathrm{rms}=0.77 l_{\rm out}/r_{\max}$ if
semi-major axes have diffused due to self-stirring. This model is fitted to
ALMA data of five wide discs: HD 107146, HD 92945, HD 206893, AU Mic and HR
8799. The results show that HD 107146, HD 92945 and AU Mic have the sharpest
outer edges, corresponding to $e_\mathrm{rms}$ values of $0.121\pm0.05$,
$0.15^{+0.07}_{-0.05}$ and $0.10\pm0.02$ if their discs are self-stirred,
suggesting the presence of Pluto-sized objects embedded in the disc. Although
these stirring values are larger than typically assumed, the radial stirring of
HD 92945 is in good agreement with its vertical stirring constrained by the
disc height. HD 206893 and HR~8799, on the other hand, have smooth outer edges
that are indicative of scattered discs since both systems have massive inner
companions.
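The edge parametrisation and the two stirring relations from the abstract translate directly into code; the function names and the example numbers are illustrative:

```python
import numpy as np

def edge_profile(r, r_max, l_out):
    # Surface density near the outer edge: Sigma(r) ~ tanh[(r_max - r)/l_out],
    # where r_max approximates the maximum semi-major axis and l_out sets the
    # edge smoothness.
    return np.tanh((r_max - r) / l_out)

def e_rms(l_out, r_max, self_stirred=False):
    # RMS eccentricity implied by the edge smoothness: ~1.2 l_out/r_max for a
    # sharp-edged semi-major axis distribution, ~0.77 l_out/r_max if
    # semi-major axes have diffused due to self-stirring.
    return (0.77 if self_stirred else 1.2) * l_out / r_max

# Example: a disc edge at 100 au with smoothness l_out = 10 au (made-up values).
r = np.linspace(50.0, 100.0, 6)
sigma = edge_profile(r, r_max=100.0, l_out=10.0)
```

The profile falls monotonically to zero at `r_max`, and a smoother edge (larger `l_out`) maps to a larger inferred eccentricity dispersion.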
|
Entangled quantum states play an important role in quantum information
science and in fundamental investigations of quantum mechanics. The
implementation and characterization of techniques that allow easy preparation
of entangled states are important steps in such fields. Here we generate
entangled quantum states encoded in the transversal paths of photons, obtained
by pumping a non-linear crystal with multiple transversal Gaussian beams. This
approach allows us to generate entangled states of two qubits and two qutrits
encoded in the Gaussian transversal paths of twin photons. We present a
theoretical analysis of this source, considering the influence of the pump
angular spectrum on the generated states, and further characterize them by
their purity and degree of entanglement. Our experimental results reveal that
the generated states present both high purity and high entanglement, and the
theoretical analysis elucidates how the pump beam profiles can be used to
manipulate such photonic states.
|
The logic of Bunched Implications (BI) combines both additive and
multiplicative connectives, which include two primitive intuitionistic
implications. As a consequence, contexts in the sequent presentation are not
lists, nor multisets, but rather tree-like structures called bunches. This
additional complexity notwithstanding, the logic has a well-behaved metatheory
admitting all the familiar forms of semantics and proof systems. However, the
presentation of an effective proof-search procedure has been elusive since the
logic's debut. We show that one can reduce the proof-search space for any given
sequent to a primitive recursive set, the argument generalizing Gentzen's
decidability argument for classical propositional logic and combining key
features of Dyckhoff's contraction-elimination argument for intuitionistic
logic. An effective proof-search procedure, and hence decidability of
provability, follows as a corollary.
|
Cell-free translational strategies are needed to accelerate the repair of
mineralised tissues, particularly large bone defects, using minimally invasive
approaches. Regenerative bone scaffolds should ideally mimic aspects of the
tissue's ECM over multiple length scales and enable surgical handling and
fixation during implantation in vivo. Leveraging the knowledge gained with
bioactive self-assembling peptides (SAPs) and SAP-enriched electrospun fibres,
we present a cell-free approach for promoting mineralisation via apatite
deposition and crystal growth, in vitro, of SAP-enriched nonwoven scaffolds.
The nonwoven scaffold was made by electrospinning poly(epsilon-caprolactone)
(PCL) in the presence of either peptide P11-4 (Ac-QQRFEWEFEQQ-Am) or P11-8
(Ac-QQRFOWOFEQQ-Am), in light of the polymer's fibre forming capability and its
hydrolytic degradability as well as the well-known apatite nucleating
capability of SAPs. The 11-residue family of peptides (P11-X) has the ability
to self-assemble into beta-sheet ordered structures at the nano-scale and to
generate hydrogels at the macroscopic scale, some of which are capable of
promoting biomineralisation due to their apatite-nucleating capability. Both
variants of SAP-enriched nonwoven used in this study were proven to be
biocompatible with murine fibroblasts and supported nucleation and growth of
apatite minerals in simulated body fluid (SBF) in vitro. The fibrous nonwoven
provided a structurally robust scaffold, with the capability to control SAP
release behaviour. Up to 75% of P11-4 and 45% of P11-8 were retained in the
fibres after 7-day incubation in aqueous solution at pH 7.4. The encapsulation
of SAP in a nonwoven system with apatite-forming as well as localised and
long-term SAP delivery capabilities is appealing as a potential means of
achieving cost-effective bone repair therapy for critical size defects.
|
Min-max saddle point games have recently been intensely studied, due to their
wide range of applications, including training Generative Adversarial Networks
(GANs). However, most of the recent efforts for solving them are limited to
special regimes such as convex-concave games. Further, it is customarily
assumed that the underlying optimization problem is solved either by a single
machine or by multiple machines connected in a centralized fashion,
wherein each one communicates with a central node. The latter approach becomes
challenging, when the underlying communications network has low bandwidth. In
addition, privacy considerations may dictate that certain nodes can communicate
with a subset of other nodes. Hence, it is of interest to develop methods that
solve min-max games in a decentralized manner. To that end, we develop a
decentralized adaptive momentum (ADAM)-type algorithm for solving min-max
optimization problems under the condition that the objective function satisfies
a Minty Variational Inequality condition, which generalizes the convex-concave
case. The proposed method overcomes shortcomings of recent
non-adaptive gradient-based decentralized algorithms for min-max optimization
problems that do not perform well in practice and require careful tuning. In
this paper, we obtain non-asymptotic rates of convergence of the proposed
algorithm (coined DADAM$^3$) for finding a (stochastic) first-order Nash
equilibrium point and subsequently evaluate its performance on training GANs.
The extensive empirical evaluation shows that DADAM$^3$ outperforms recently
developed methods, including decentralized optimistic stochastic gradient for
solving such min-max problems.
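A toy sketch of a decentralized adaptive-momentum descent-ascent step, assuming a simple quadratic saddle objective and a ring mixing matrix; this illustrates only the gossip-plus-ADAM structure and is not the DADAM$^3$ algorithm or its Minty-VI analysis:

```python
import numpy as np

def decentralized_adam_minmax(coeffs, W, steps=500, lr=0.01,
                              b1=0.9, b2=0.999, eps=1e-8):
    # Node i holds the local saddle objective
    #   f_i(x, y) = (x**2 - y**2) / 2 + coeffs[i] * x * y,
    # whose unique equilibrium is (0, 0). Each step gossips iterates through
    # the mixing matrix W, then applies a local adaptive-momentum update
    # (descent in x, ascent in y).
    n = len(coeffs)
    x, y = np.ones(n), np.ones(n)
    mx, vx = np.zeros(n), np.zeros(n)
    my, vy = np.zeros(n), np.zeros(n)
    for _ in range(steps):
        x, y = W @ x, W @ y                       # gossip averaging
        gx, gy = x + coeffs * y, coeffs * x - y   # local gradients
        mx, vx = b1 * mx + (1 - b1) * gx, b2 * vx + (1 - b2) * gx ** 2
        my, vy = b1 * my + (1 - b1) * gy, b2 * vy + (1 - b2) * gy ** 2
        x = x - lr * mx / (np.sqrt(vx) + eps)     # descent in x
        y = y + lr * my / (np.sqrt(vy) + eps)     # ascent in y
    return x, y

# Ring of 4 nodes: each node averages itself with its two neighbours.
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0],
              [0, .25, .5, .25], [.25, 0, .25, .5]])
x, y = decentralized_adam_minmax(np.array([1.0, -0.5, 0.5, -1.0]), W)
```

On this strongly convex-concave toy problem, the iterates contract toward the equilibrium at the origin while consensus is maintained through the doubly stochastic mixing matrix.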
|
Artificial Intelligence (AI), along with the recent progress in biomedical
language understanding, is gradually changing medical practice. With the
development of biomedical language understanding benchmarks, AI applications
are widely used in the medical field. However, most benchmarks are limited to
English, which makes it challenging to replicate many of the successes in
English for other languages. To facilitate research in this direction, we
collect real-world biomedical data and present the first Chinese Biomedical
Language Understanding Evaluation (CBLUE) benchmark: a collection of natural
language understanding tasks including named entity recognition, information
extraction, clinical diagnosis normalization, single-sentence/sentence-pair
classification, and an associated online platform for model evaluation,
comparison, and analysis. To establish evaluation on these tasks, we report
empirical results with 11 current pre-trained Chinese models, and the
experimental results show that state-of-the-art neural models still perform far
worse than the human ceiling. Our benchmark is released at
\url{https://tianchi.aliyun.com/dataset/dataDetail?dataId=95414&lang=en-us}.
|
For $m, d \in {\mathbb N}$, a jittered sampling point set $P$ having $N =
m^d$ points in $[0,1)^d$ is constructed by partitioning the unit cube $[0,1)^d$
into $m^d$ axis-aligned cubes of equal size and then placing one point
independently and uniformly at random in each cube. We show that there are
constants $c \ge 0$ and $C$ such that for all $d$ and all $m \ge d$ the
expected non-normalized star discrepancy of a jittered sampling point set
satisfies \[c \,dm^{\frac{d-1}{2}} \sqrt{1 + \log(\tfrac md)} \le {\mathbb E}
D^*(P) \le C\, dm^{\frac{d-1}{2}} \sqrt{1 + \log(\tfrac md)}.\]
This discrepancy is thus smaller by a factor of
$\Theta\big(\sqrt{\frac{1+\log(m/d)}{m/d}}\,\big)$ than the one of a uniformly
distributed random point set of $m^d$ points. This result improves both the
upper and the lower bound for the discrepancy of jittered sampling given by
Pausinger and Steinerberger (Journal of Complexity (2016)). It also removes the
asymptotic requirement that $m$ is sufficiently large compared to $d$.
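The construction itself is short enough to sketch directly; the function name and RNG seeding below are illustrative choices:

```python
import numpy as np

def jittered_sampling(m, d, rng=None):
    # Jittered sampling: partition [0,1)^d into m^d axis-aligned cubes of
    # side 1/m and place one point independently and uniformly at random in
    # each cube, giving N = m^d points.
    rng = rng or np.random.default_rng(0)
    axes = np.meshgrid(*([np.arange(m)] * d), indexing="ij")
    corners = np.stack([a.ravel() for a in axes], axis=1) / m  # cube corners
    return corners + rng.random(corners.shape) / m

P = jittered_sampling(m=3, d=2)  # 9 points, one per cell of a 3 x 3 grid
```

By construction every subcube contains exactly one point, which is the stratification that makes the star discrepancy smaller than for a purely uniform random point set of the same size.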
|
Magnetic field evolution in neutron-star crusts is driven by the Hall effect
and Ohmic dissipation, for as long as the crust is sufficiently strong to
absorb Maxwell stresses exerted by the field and thus make the momentum
equation redundant. For the strongest neutron-star fields, however, stresses
build to the point of crustal failure, at which point the standard evolution
equations are no longer valid. Here, we study the evolution of the magnetic
field of the crust up to and beyond crustal failure, whence the crust begins to
flow plastically. We perform global axisymmetric evolutions, exploring
different types of failure affecting a limited region of the crust. We find
that a plastic flow does not simply suppress the Hall effect even in the regime
of a low plastic viscosity, but it rather leads to non-trivial evolution -- in
some cases even overreacting and enhancing the impact of the Hall effect. Its
impact is more pronounced in the toroidal field, with the differences in the
poloidal field being less substantial. We argue that both the nature of
magnetar bursts and their spindown evolution will be affected by plastic flow,
so that observations of these phenomena may help to constrain the way the crust
fails.
|