We show that the symmetric radial decreasing rearrangement can increase the
fractional Gagliardo semi-norm in domains.
|
We consider two degenerate heat equations with a nonlocal space term,
studying, in particular, their null controllability property. To this aim, we
first consider the associated nonhomogeneous degenerate heat equations: we
study their well posedness, the Carleman estimates for the associated adjoint
problems and, finally, the null controllability. Then, as a consequence, using
Kakutani's fixed point theorem, we deduce the null controllability property
for the initial nonlocal problems.
|
Decades of deindustrialization have led to economic decline and population
loss throughout the U.S. Midwest, with the highest national poverty rates found
in Detroit, Cleveland, and Buffalo. This poverty is often confined to core
cities themselves, however, as many of their surrounding suburbs continue to
prosper. Poverty can therefore be highly concentrated at the MSA level, but
more evenly distributed within the borders of the city proper. One result of
this disparity is that if suburbanites consider poverty to be confined to the
central city, they might be less willing to devote resources to alleviate it.
But due to recent increases in suburban poverty, particularly since the 2008
recession, such urban-suburban gaps might be shrinking. Using Census
tract-level data, this study quantifies poverty concentrations for four "Rust
Belt" MSAs, comparing core-city and suburban concentrations in 2000, 2010, and
2015. There is evidence of a large gap between core cities and outlying areas,
which is closing in the three highest-poverty cities, but not in Milwaukee. A
set of four comparison cities shows a smaller, more stable city-suburban divide
in the U.S. "Sunbelt," while Chicago resembles a "Rust Belt" metro.
|
Recent observations by the {\it Juno} spacecraft have revealed that the tidal
Love number $k_2$ of Jupiter is $4\%$ lower than the hydrostatic value. We
present a simple calculation of the dynamical Love number of Jupiter that
explains the observed "anomaly". The Love number is usually dominated by the
response of the (rotation-modified) f-modes of the planet. Our method also
allows for efficient computation of high-order dynamical Love numbers. While
the inertial-mode contributions to the Love numbers are negligible, a
sufficiently strong stratification in a large region of the planet's interior
would induce significant g-mode responses and influence the measured Love
numbers.
|
Conformal predictors are an important class of algorithms that allow
predictions to be made with a user-defined confidence level. They are able to
do this by outputting prediction sets, rather than simple point predictions.
The conformal predictor is valid in the sense that the accuracy of its
predictions is guaranteed to meet the confidence level, assuming only
exchangeability of the data. Since accuracy is guaranteed, the performance of a
conformal predictor is measured through the efficiency of the prediction sets.
Typically, a conformal predictor is built on an underlying machine learning
algorithm and hence its predictive power is inherited from this algorithm.
However, since the underlying machine learning algorithm is not trained with
the objective of maximizing predictive efficiency, the resulting conformal
predictor may be sub-optimal and insufficiently aligned with this objective.
Hence, in this study we consider an approach to train the conformal
predictor directly with maximum predictive efficiency as the optimization
objective, and we focus specifically on the inductive conformal predictor for
classification. To do this, the conformal predictor is approximated by a
differentiable objective function and gradient descent is used to optimize it. The
resulting parameter estimates are then passed to a proper inductive conformal
predictor to give valid prediction sets. We test the method on several real
world data sets and find that the method is promising and in most cases gives
improved predictive efficiency against a baseline conformal predictor.
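
As a point of reference, the sketch below implements a standard (unoptimized) inductive conformal predictor for classification; scikit-learn, the iris data, and the inverse-probability nonconformity score are illustrative assumptions, and the paper's efficiency-driven training of the underlying model is not reproduced.

```python
# Minimal inductive conformal predictor (ICP) for classification.
# Baseline sketch only: the underlying model is NOT trained for predictive
# efficiency as in the study; scikit-learn and the standard inverse-probability
# nonconformity score are assumptions made for illustration.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Nonconformity score: 1 - predicted probability of the true label.
cal_probs = model.predict_proba(X_cal)
cal_scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

def prediction_set(x, epsilon=0.1):
    """Return all labels whose conformal p-value exceeds the significance level."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    labels = []
    for label, p_label in enumerate(probs):
        score = 1.0 - p_label
        p_value = (np.sum(cal_scores >= score) + 1) / (len(cal_scores) + 1)
        if p_value > epsilon:
            labels.append(label)
    return labels

sets = [prediction_set(x) for x in X_test]
coverage = np.mean([label in s for label, s in zip(y_test, sets)])
avg_size = np.mean([len(s) for s in sets])      # efficiency: smaller is better
print(f"coverage={coverage:.2f}, average set size={avg_size:.2f}")
```

The average prediction-set size printed at the end is the efficiency measure that the proposed method optimizes directly.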
|
A recent experiment [K. H. Kim, et al., Science 370, 978 (2020)] showed that
it may be possible to detect a liquid-liquid phase transition (LLPT) in
supercooled water by subjecting high density amorphous ice (HDA) to ultrafast
heating, after which the sample reportedly undergoes spontaneous decompression
from a high density liquid (HDL) to a low density liquid (LDL) via a
first-order phase transition. Here we conduct computer simulations of the ST2
water model, in which a LLPT is known to occur. We subject various HDA samples
of this model to a heating and decompression protocol that follows a
thermodynamic pathway similar to that of the recent experiments. Our results
show that a signature of the underlying equilibrium LLPT can be observed in a
strongly out-of-equilibrium process that follows this pathway despite the very
high heating and decompression rates employed here. Our results are also
consistent with the phase diagram of glassy ST2 water reported in previous
studies.
|
In recent years, the notion of Quantum Materials has emerged as a powerful
unifying concept across diverse fields of science and engineering, from
condensed-matter and cold atom physics to materials science and quantum
computing. Beyond traditional quantum materials such as unconventional
superconductors, heavy fermions, and multiferroics, the field has significantly
expanded to encompass topological quantum matter, two-dimensional materials and
their van der Waals heterostructures, Moire materials, Floquet time crystals,
as well as materials and devices for quantum computation with Majorana
fermions. In this Roadmap collection we aim to capture a snapshot of the most
recent developments in the field, and to identify outstanding challenges and
emerging opportunities. The format of the Roadmap, whereby experts in each
discipline share their viewpoint and articulate their vision for quantum
materials, reflects the dynamic and multifaceted nature of this research area,
and is meant to encourage exchanges and discussions across traditional
disciplinary boundaries. It is our hope that this collective vision will
contribute to sparking new fascinating questions and activities at the
intersection of materials science, condensed matter physics, device
engineering, and quantum information, and to shaping a clearer landscape of
quantum materials science as a new frontier of interdisciplinary scientific
inquiry.
|
The paper contains several theoretical results related to the weighted
nonlinear least-squares problem for low-rank signal estimation, which can be
considered as a Hankel structured low-rank approximation problem. A
parameterization of the subspace of low-rank time series connected with
generalized linear recurrence relations (GLRRs) is described and its features
are investigated. It is shown how the obtained results help to describe the
tangent plane, prove optimization problem features and construct stable
algorithms for solving low-rank approximation problems. For the latter, a
stable algorithm for constructing the projection onto a subspace of time series
that satisfy a given GLRR is proposed and justified. This algorithm is used for
a new implementation of the known Gauss-Newton method using the variable
projection approach. The comparison by stability and computational cost is
performed theoretically and with the help of an example.
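
For orientation, the following sketch shows the classical Cadzow-style alternating-projection baseline for Hankel structured low-rank approximation; it is not the variable-projection Gauss-Newton algorithm of the paper, and the test signal is a hypothetical example.

```python
# Cadzow-style alternating projections for Hankel structured low-rank
# approximation -- a classical baseline, not the variable-projection
# Gauss-Newton algorithm developed in the paper.
import numpy as np

def hankel(series, L):
    """L x (N-L+1) Hankel trajectory matrix of a 1-D series."""
    N = len(series)
    return np.column_stack([series[i:i + L] for i in range(N - L + 1)])

def unhankel(H):
    """Project a matrix back to Hankel structure by anti-diagonal averaging."""
    L, K = H.shape
    series = np.zeros(L + K - 1)
    counts = np.zeros(L + K - 1)
    for i in range(L):
        for j in range(K):
            series[i + j] += H[i, j]
            counts[i + j] += 1
    return series / counts

def cadzow(series, rank, L, n_iter=50):
    x = np.asarray(series, dtype=float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(hankel(x, L), full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r projection
        x = unhankel(low_rank)                            # Hankel projection
    return x

# Noisy sum of a damped and an undamped sinusoid -> rank-4 signal subspace.
t = np.arange(100)
clean = np.exp(-0.01 * t) * np.sin(0.2 * t) + 0.5 * np.sin(0.5 * t)
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(t.size)
estimate = cadzow(noisy, rank=4, L=40)
print("rmse:", np.sqrt(np.mean((estimate - clean) ** 2)))
```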
|
We develop a new method for analyzing moduli problems related to the stack of
pure coherent sheaves on a polarized family of projective schemes. It is an
infinite-dimensional analogue of geometric invariant theory. We apply this to
two familiar moduli problems: the stack of $\Lambda$-modules and the stack of
pairs. In both examples, we construct a $\Theta$-stratification of the stack,
defined in terms of a polynomial numerical invariant, and we construct good
moduli spaces for the open substacks of semistable points. One of the essential
ingredients is the construction of higher dimensional analogues of the affine
Grassmannian for the moduli problems considered.
|
Subset selection is a valuable tool for interpretable learning, scientific
discovery, and data compression. However, classical subset selection is often
eschewed due to selection instability, computational bottlenecks, and lack of
post-selection inference. We address these challenges from a Bayesian
perspective. Given any Bayesian predictive model $\mathcal{M}$, we elicit
predictively-competitive subsets using linear decision analysis. The approach
is customizable for (local) prediction or classification and provides
interpretable summaries of $\mathcal{M}$. A key quantity is the acceptable
family of subsets, which leverages the predictive distribution from
$\mathcal{M}$ to identify subsets that offer nearly-optimal prediction. The
acceptable family spawns new (co-) variable importance metrics based on whether
variables (co-) appear in all, some, or no acceptable subsets. Crucially, the
linear coefficients for any subset inherit regularization and predictive
uncertainty quantification via $\mathcal{M}$. The proposed approach exhibits
excellent prediction, interval estimation, and variable selection for simulated
data, including $p=400 > n$. These tools are applied to a large education
dataset with highly correlated covariates, where the acceptable family is
especially useful. Our analysis provides unique insights into the combination
of environmental, socioeconomic, and demographic factors that predict
educational outcomes, and features highly competitive prediction with
remarkable stability.
|
As infamous invaders to the North American ecosystem, the Asian giant hornet
(Vespa mandarinia) is devastating not only to native bee colonies, but also to
local apiculture. One of the most effective ways to combat this harmful species
is to locate and destroy their nests. By mobilizing the public to actively
report possible sightings of the Asian giant hornet, the government could
promptly send inspectors to confirm and possibly destroy the nests. However,
such confirmation requires lab expertise, and manually checking the reports one
by one is extremely demanding of human resources. Further, given the public's
limited knowledge of the Asian giant hornet and the randomness of report
submission, only a few of the numerous reports proved positive, i.e. indicated
existing nests. How to classify or prioritize the reports efficiently and
automatically, so as to determine the dispatch of personnel, is of great
significance to the control of the Asian giant hornet. In this paper, we
propose a method to predict the priority of sighting reports based on machine
learning. We model the problem of optimal prioritization of sighting reports as
a problem of classification and prediction. We extract a variety of rich
features from each report: location, time, image(s), and textual description.
Based on these features, we propose a classification model built on logistic
regression to predict the credibility of a given report.
Furthermore, our model quantifies the impact between reports to get the
priority ranking of the reports. Extensive experiments on the public dataset
from the WSDA (the Washington State Department of Agriculture) have proved the
effectiveness of our method.
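
A minimal sketch of the credibility-classification step is given below, assuming hypothetical, already-extracted numerical features and synthetic labels; the paper's image/text feature extraction and report-interaction ranking are not reproduced.

```python
# Minimal sketch of the credibility-classification step with logistic
# regression. Feature names and the synthetic data are placeholders; the
# paper's image/text feature extraction and report-interaction ranking are
# not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(45.0, 49.5, n),      # latitude of sighting (hypothetical)
    rng.uniform(-125.0, -117.0, n),  # longitude of sighting (hypothetical)
    rng.uniform(0.0, 1.0, n),        # image-based hornet score (placeholder)
    rng.uniform(0.0, 1.0, n),        # text-description score (placeholder)
])
# Synthetic labels: positive reports concentrate where both scores are high.
y = (0.6 * X[:, 2] + 0.4 * X[:, 3] + 0.1 * rng.standard_normal(n) > 0.75).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)

# Predicted probabilities serve as a credibility score used to rank reports.
credibility = clf.predict_proba(X_te)[:, 1]
priority_order = np.argsort(-credibility)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```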
|
Despite the recent success of reconciling spike-based coding with the error
backpropagation algorithm, spiking neural networks are still mostly applied to
tasks stemming from sensory processing, operating on traditional data
structures like visual or auditory data. A rich data representation that finds
wide application in industry and research is the so-called knowledge graph - a
graph-based structure where entities are depicted as nodes and relations
between them as edges. Complex systems like molecules, social networks and
industrial factory systems can be described using the common language of
knowledge graphs, allowing the usage of graph embedding algorithms to make
context-aware predictions in these information-packed environments. We propose
a spike-based algorithm where nodes in a graph are represented by single spike
times of neuron populations and relations as spike time differences between
populations. Learning such spike-based embeddings only requires knowledge about
spike times and spike time differences, compatible with recently proposed
frameworks for training spiking neural networks. The presented model is easily
mapped to current neuromorphic hardware systems and thereby moves inference on
knowledge graphs into a domain where these architectures thrive, unlocking a
promising industrial application area for this technology.
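
The toy sketch below illustrates the embedding idea only: nodes carry vectors of spike times, relations carry spike-time differences, and a TransE-style margin loss is minimized by plain gradient descent; the spiking-network training and neuromorphic mapping of the paper are not modeled, and the tiny graph is hypothetical.

```python
# Toy sketch of spike-time knowledge-graph embeddings: nodes are vectors of
# spike times, relations are vectors of spike-time differences, trained with
# a TransE-style margin loss. The actual spiking-network dynamics and
# neuromorphic mapping of the paper are not modeled here.
import numpy as np

rng = np.random.default_rng(0)
triples = [(0, 0, 1), (1, 0, 2), (0, 1, 3), (3, 1, 2)]   # (head, relation, tail)
n_nodes, n_rels, dim = 4, 2, 8

t = rng.uniform(0.0, 1.0, (n_nodes, dim))    # spike times per node population
d = rng.normal(0.0, 0.1, (n_rels, dim))      # spike-time differences per relation

def score(h, r, tail):
    # A triple is plausible if t_head + delta_rel matches t_tail.
    return np.sum((t[h] + d[r] - t[tail]) ** 2)

lr, margin = 0.05, 1.0
for epoch in range(200):
    for h, r, tl in triples:
        neg = rng.integers(n_nodes)          # corrupt the tail for a negative
        if score(h, r, tl) + margin > score(h, r, neg):
            grad_pos = 2 * (t[h] + d[r] - t[tl])
            grad_neg = 2 * (t[h] + d[r] - t[neg])
            t[h] -= lr * (grad_pos - grad_neg)
            d[r] -= lr * (grad_pos - grad_neg)
            t[tl] += lr * grad_pos
            t[neg] -= lr * grad_neg

print("positive triple score:", score(*triples[0]))
print("random triple score:  ", score(0, 0, 3))
```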
|
Mutations sometimes increase contagiousness for evolving pathogens. During an
epidemic, scientists use viral genome data to infer a shared evolutionary
history and connect this history to geographic spread. We propose a model that
directly relates a pathogen's evolution to its spatial contagion dynamics --
effectively combining the two epidemiological paradigms of phylogenetic
inference and self-exciting process modeling -- and apply this
\emph{phylogenetic Hawkes process} to a Bayesian analysis of 23,422 viral cases
from the 2014-2016 Ebola outbreak in West Africa. The proposed model is able to
detect individual viruses with significantly elevated rates of spatiotemporal
propagation for a subset of 1,610 samples that provide genome data. Finally, to
facilitate model application in big data settings, we develop massively
parallel implementations for the gradient and Hessian of the log-likelihood and
apply our high performance computing framework within an adaptively
preconditioned Hamiltonian Monte Carlo routine.
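
To make the self-exciting ingredient concrete, the sketch below evaluates the log-likelihood of a purely temporal Hawkes process with an exponential kernel; the paper's spatial kernels, phylogenetic rate modifiers, and Hamiltonian Monte Carlo machinery are not included, and the event times are hypothetical.

```python
# Log-likelihood of a purely temporal Hawkes process with exponential
# triggering kernel -- a minimal illustration of the self-exciting component.
# The paper's model adds spatial kernels, per-virus rate modifiers, and
# Hamiltonian Monte Carlo inference, none of which appear here.
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    """times: sorted event times in [0, T]; intensity
    lambda(t) = mu + alpha * sum_{t_i < t} beta * exp(-beta * (t - t_i))."""
    times = np.asarray(times)
    loglik = 0.0
    A = 0.0   # recursive term: sum over earlier events of exp(-beta * dt)
    for k in range(len(times)):
        if k > 0:
            A = np.exp(-beta * (times[k] - times[k - 1])) * (1.0 + A)
        loglik += np.log(mu + alpha * beta * A)
    # Compensator: integral of the intensity over [0, T].
    loglik -= mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - times)))
    return loglik

events = [0.5, 1.1, 1.3, 2.7, 2.9, 3.0, 5.4]
print(hawkes_loglik(events, T=6.0, mu=0.5, alpha=0.6, beta=2.0))
```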
|
Understanding the interaction of massive black hole binaries with their
gaseous environment is crucial since at sub-parsec scales the binary is too
wide for gravitational wave emission to take over and to drive the two black
holes to merge. We here investigate the interaction between a massive black
hole binary and a self-gravitating circumbinary disc using 3D smoothed particle
hydrodynamics simulations. We find that, when the disc self-gravity regulates
the angular momentum transport, the binary semi-major axis decreases regardless
of the choice of disc masses and temperatures, within the range we explored. In
particular, we find that the disc initial temperature (hence the disc aspect
ratio) has little effect on the evolution of the binary since discs with the
same mass self-regulate towards the same temperature. Initially warmer discs
cause the binary to shrink on a slightly shorter timescale until the disc has
reached the self-regulated equilibrium temperature. More massive discs drive
the binary semi-major axis to decrease at a faster pace compared to less
massive discs and result in faster binary eccentricity growth even after the
initial-condition-dependent transient evolution. Finally we investigate the
effect that the initial cavity size has on the binary-disc interaction and we
find that, in the self-gravitating regime, an initially smaller cavity leads to
a much faster binary shrinking, as expected. Our results are especially
important for very massive black hole binaries such as those in the PTA band,
for which gas self gravity cannot be neglected.
|
When wave scattering systems are subject to certain symmetries, resonant
states may decouple from the far-field continuum; they remain localized to the
structure and cannot be excited by incident waves from the far field. In this
work, we use layer-potential techniques to prove the existence of such states,
known as bound states in the continuum, in systems of subwavelength resonators.
When the symmetry is slightly broken, this resonant state can be excited from
the far field. Remarkably, this may create asymmetric (Fano-type) scattering
behaviour where the transmission is fundamentally different for frequencies on
either side of the resonant frequency. Using asymptotic analysis, we compute
the scattering matrix of the system explicitly, thereby characterizing this
Fano-type transmission anomaly.
|
We study the problem of regularization of inverse problems adopting a purely
data driven approach, by using the similarity to the method of regularization
by projection. We provide an application of a projection algorithm, utilized
and applied in frames theory, as a data driven reconstruction procedure in
inverse problems, generalizing the algorithm proposed by the authors in Inverse
Problems 36 (2020), n. 12, 125009, based on an orthonormalization procedure for
the training pairs. We show some numerical experiments, comparing the different
methods.
|
We enumerate the number of staircase diagrams over classically finite
$E$-type Dynkin diagrams, extending the work of Richmond and Slofstra
(Staircase Diagrams and Enumeration of smooth Schubert varieties) and
completing the enumeration of staircase diagrams over finite type Dynkin
diagrams. The staircase diagrams are in bijection with the smooth and rationally
smooth Schubert varieties of $E$-type, thereby giving an enumeration of these
varieties.
|
Machine learning, artificial intelligence, and deep learning have advanced
significantly over the past decade. Nonetheless, humans possess unique
abilities such as creativity, intuition, context and abstraction, analytic
problem solving, and detecting unusual events. To successfully tackle pressing
scientific and societal challenges, we need the complementary capabilities of
both humans and machines. The Federal Government could accelerate its
priorities on multiple fronts through judicious integration of citizen science
and crowdsourcing with artificial intelligence (AI), Internet of Things (IoT),
and cloud strategies.
|
The affinoid envelope, $\widehat{U(\mathcal{L})}$ of a free, finitely
generated $\mathbb{Z}_p$-Lie algebra $\mathcal{L}$ has proven to be useful
within the representation theory of compact $p$-adic Lie groups. Our aim is to
further understand the algebraic structure of $\widehat{U(\mathcal{L})}$, and
to this end, we will define a Dixmier module over $\widehat{U(\mathcal{L})}$,
and prove that this object is generally irreducible in the case where $\mathcal{L}$
is nilpotent. Ultimately, we will prove that all primitive ideals in the
affinoid envelope can be described in terms of the annihilators of Dixmier
modules, and using this, we aim towards proving that these algebras satisfy a
version of the classical Dixmier-Moeglin equivalence.
|
We theoretically investigate electron-hole recollisions in high-harmonic
generation (HHG) in band-gap solids irradiated by linearly and elliptically
polarized drivers. We find that in many cases the emitted harmonics do not
originate in electron-hole pairs created at the minimum band gap, where the
tunneling probability is maximized, but rather in pairs created across an
extended region of the Brillouin zone (BZ). In these situations, the analogy to
gas-phase HHG in terms of the short- and long-trajectory categorizations is
inadequate. Our analysis methodology comprises three complementary levels of
theory: the numerical solutions to the semiconductor Bloch equations, an
extended semiclassical recollision model, and a quantum wave packet approach.
We apply this methodology to two general material types with representative
band structures: a bulk system and a hexagonal monolayer system. In the bulk,
the interband harmonics generated using elliptically-polarized drivers are
found to originate not from tunneling at the minimum band gap $\Gamma$, but
from regions away from it. In the monolayer system driven by linearly-polarized
pulses, tunneling regions near different symmetry points in the BZ lead to
distinct harmonic energies and emission profiles. We show that the imperfect
recollisions, where an electron-hole pair recollide while being spatially
separated, are important in both bulk and monolayer materials. The excellent
agreement between our three levels of theory highlights and characterizes the
complexity behind the HHG emission dynamics in solids, and expands on the
notion of interband HHG as always originating in trajectories tunnelled at the
minimum band gap. Our work furthers the fundamental understanding of HHG in
periodic systems and will benefit the future design of experiments.
|
Within the transport model evaluation project (TMEP) of simulations for
heavy-ion collisions, the mean-field response is examined here. Specifically,
zero-sound propagation is considered for neutron-proton symmetric matter
enclosed in a periodic box, at zero temperature and around normal density. The
results of several transport codes belonging to two families (BUU-like and
QMD-like) are compared with each other and with exact calculations. For BUU-like
codes, employing the test particle method, the results depend on the
combination of the number of test particles and the spread of the profile
functions that weight integration over space. These parameters can be properly
adapted to give a good reproduction of the analytical zero-sound features.
QMD-like codes, using molecular dynamics methods, are characterized by large
damping effects, attributable to the fluctuations inherent in their phase-space
representation. Moreover, for a given nuclear effective interaction, they
generally lead to slower density oscillations, as compared to BUU-like codes.
The latter problem is mitigated in the more recent lattice formulation of some
of the QMD codes. The significance of these results for the description of real
heavy-ion collisions is discussed.
|
In this paper, we present a physics-constrained deep neural network (PCDNN)
method for parameter estimation in the zero-dimensional (0D) model of the
vanadium redox flow battery (VRFB). In this approach, we use deep neural
networks (DNNs) to approximate the model parameters as functions of the
operating conditions. This method allows the integration of the VRFB
computational models as the physical constraints in the parameter learning
process, leading to enhanced accuracy of parameter estimation and cell voltage
prediction. Using an experimental dataset, we demonstrate that the PCDNN method
can estimate model parameters for a range of operating conditions and improve
the 0D model prediction of voltage compared to the 0D model prediction with
constant operation-condition-independent parameters estimated with traditional
inverse methods. We also demonstrate that the PCDNN approach has an improved
generalization ability for estimating parameter values for operating conditions
not used in the DNN training.
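
A minimal sketch of the physics-constrained training loop is shown below, assuming PyTorch and a hypothetical toy stand-in for the 0D cell-voltage model: the network outputs parameters as functions of operating conditions, and the loss is evaluated on the physics-model voltage rather than on the parameters themselves.

```python
# Minimal PyTorch sketch of physics-constrained parameter learning: a DNN maps
# operating conditions to model parameters, and the loss is computed on the
# output of a differentiable physics model rather than on the parameters
# themselves. The "toy_cell_voltage" function is a hypothetical stand-in,
# NOT the paper's 0D VRFB model.
import torch
import torch.nn as nn

def toy_cell_voltage(params, conditions):
    # Hypothetical surrogate: open-circuit term plus a current-dependent loss.
    e0, r_int = params[:, 0:1], params[:, 1:2]
    current = conditions[:, 0:1]
    return e0 - r_int * current

param_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2), nn.Softplus())
optimizer = torch.optim.Adam(param_net.parameters(), lr=1e-2)

# Synthetic "experimental" data generated from known parameters for the demo.
conditions = torch.rand(256, 2)               # e.g. current, flow rate (scaled)
true_params = torch.stack(
    [1.4 + 0.1 * conditions[:, 1], 0.2 + 0.05 * conditions[:, 0]], dim=1)
measured_v = toy_cell_voltage(true_params, conditions)

for step in range(500):
    optimizer.zero_grad()
    params = param_net(conditions)            # parameters as functions of conditions
    v_pred = toy_cell_voltage(params, conditions)
    loss = torch.mean((v_pred - measured_v) ** 2)   # physics-constrained data misfit
    loss.backward()
    optimizer.step()

print("final voltage MSE:", float(loss))
```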
|
Billiards in ellipses have a confocal ellipse or hyperbola as caustic. The
goal of this paper is to prove that for each billiard of one type there exists
an isometric counterpart of the other type. Isometry means here that the
lengths of corresponding sides are equal. The transition between these two
isometric billiards can be carried out continuously via isometric focal billiards
in a fixed ellipsoid. The extended sides of these particular billiards in an
ellipsoid are focal axes, i.e., generators of confocal hyperboloids. This
transition makes it possible to transfer properties of planar billiards to focal
billiards, in particular billiard motions and canonical parametrizations. A
periodic planar billiard and its associated Poncelet grid give rise to periodic
focal billiards and spatial Poncelet grids. If the sides of a focal billiard
are materialized as thin rods with spherical joints at the vertices and other
crossing points between different sides, then we obtain Henrici's hyperboloid,
which is flexible between the two planar limits.
|
A search for laser light from Proxima Centauri was performed, including 107
high-resolution, optical spectra obtained between 2004 and 2019. Among them, 57
spectra contain multiple, confined spectral combs, each consisting of 10
closely-spaced frequencies of light. The spectral combs, as entities, are
themselves equally spaced with a frequency separation of 5800 GHz, rendering
them unambiguously technological in origin. However, the combs do not originate
at Proxima Centauri. Otherwise, the 107 spectra of Proxima Centauri show no
evidence of technological signals, including 29 observations between March and
July 2019 when the candidate technological radio signal, BLC1, was captured by
Breakthrough Listen. This search would have revealed lasers pointed toward
Earth having a power of 20 to 120 kilowatts and located within the 1.3au field
of view centered on Proxima Centauri, assuming a benchmark laser launcher
having a 10-meter aperture.
|
Agent-based modelling is a powerful tool when simulating human systems, yet
when human behaviour cannot be described by simple rules or maximising one's
own profit, we quickly reach the limits of this methodology. Machine learning
has the potential to bridge this gap by providing a link between what people
observe and how they act in order to reach their goal. In this paper we use a
framework for agent-based modelling that utilizes human values like fairness,
conformity and altruism. Using this framework we simulate a public goods game
and compare to experimental results. We can report good agreement between
simulation and experiment and furthermore find that the presented framework
outperforms strict reinforcement learning. Both the framework and the utility
function are generic enough that they can be used for arbitrary systems, which
makes this method a promising candidate for a foundation of a universal
agent-based model.
|
Drug combination therapy has become an increasingly promising method in the
treatment of cancer. However, the number of possible drug combinations is so
huge that it is hard to screen synergistic drug combinations through wet-lab
experiments. Therefore, computational screening has become an important way to
prioritize drug combinations. Graph neural networks have recently shown
remarkable performance in the prediction of compound-protein interactions, but
they have not been applied to the screening of drug combinations. In this paper,
we propose DeepDDS, a deep learning model based on graph neural networks and an attention
mechanism to identify drug combinations that can effectively inhibit the
viability of specific cancer cells. The feature embeddings of drug molecule
structure and gene expression profiles were taken as input to multi-layer
feedforward neural network to identify the synergistic drug combinations. We
compared DeepDDS with classical machine learning methods and other deep
learning-based methods on a benchmark data set, and the leave-one-out
experimental results showed that DeepDDS achieved better performance than
competitive methods. Also, on an independent test set released by the well-known
pharmaceutical enterprise AstraZeneca, DeepDDS was superior to competitive
methods by more than 16\% in predictive precision. Furthermore, we explored the
interpretability of the graph attention network, and found that the correlation
matrix of atomic features reveals important chemical substructures of drugs.
We believe that DeepDDS is an effective tool for prioritizing synergistic drug
combinations for further wet-lab experimental validation.
|
Bell inequalities can provide useful witnesses for device-independent
applications with quantum (or post-quantum) eavesdroppers. This feature holds
only for single entangled systems. Our goal is to explore a device-independent
model for quantum networks. We first propose a Bell inequality to verify the
genuinely multipartite nonlocality of connected quantum networks, including
cyclic networks and universal quantum computational resources for the
measurement-based computation model. This is further used to construct a new
monogamy relation in a fully device-independent model with multisource quantum
resources. It is finally applied to multiparty quantum key distribution, blind
quantum computation, and quantum secret sharing. The present model can inspire
various large-scale applications on quantum networks in a device-independent
manner.
|
In this paper the authors study quotients of the product of elliptic curves
by a rigid diagonal action of a finite group $G$. It is shown that only for $G
= \operatorname{He}(3), \mathbb Z_3^2$, and only in dimension $\geq 4$, can such an
action be free. A complete classification of the singular quotients in
dimension 3 and the smooth quotients in dimension $4$ is given. For the other
finite groups a strong structure theorem for rigid quotients is proven.
|
Log-based cyber threat hunting has emerged as an important solution to
counter sophisticated cyber attacks. However, existing approaches require
non-trivial efforts of manual query construction and have overlooked the rich
external knowledge about threat behaviors provided by open-source Cyber Threat
Intelligence (OSCTI). To bridge the gap, we build ThreatRaptor, a system that
facilitates cyber threat hunting in computer systems using OSCTI. Built upon
mature system auditing frameworks, ThreatRaptor provides (1) an unsupervised,
light-weight, and accurate NLP pipeline that extracts structured threat
behaviors from unstructured OSCTI text, (2) a concise and expressive
domain-specific query language, TBQL, to hunt for malicious system activities,
(3) a query synthesis mechanism that automatically synthesizes a TBQL query
from the extracted threat behaviors, and (4) an efficient query execution
engine to search the big system audit logging data.
|
We develop a theory for the susceptible-infected-susceptible (SIS) epidemic
model on networks that incorporate both network structure and dynamic
correlations. This theory can account for the multistage onset of the epidemic
phase in scale-free networks. This phenomenon is characterized by multiple
peaks in the susceptibility as a function of the infection rate. It can be
explained by the fact that, even below the global epidemic threshold, a hub can
sustain the epidemic for an extended period. Moreover, our approach improves
theoretical calculations of prevalence close to the threshold in heterogeneous
networks and can also predict the average risk of infection for neighbors of
nodes with different degrees and states on uncorrelated static networks.
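
For illustration, the sketch below runs a plain discrete-time SIS simulation on a scale-free network and records the stationary prevalence for several infection rates; it is a stochastic baseline, not the dynamic-correlation theory of the paper, and the network and parameters are hypothetical.

```python
# Discrete-time SIS simulation on a scale-free network. This is a plain
# stochastic simulation for measuring prevalence versus infection rate, not
# the dynamic-correlation theory developed in the paper.
import numpy as np
import networkx as nx

def sis_prevalence(G, lam, mu=1.0, steps=1000, burn_in=500, seed=0):
    rng = np.random.default_rng(seed)
    n = G.number_of_nodes()
    adj = [list(G.neighbors(v)) for v in range(n)]
    infected = rng.random(n) < 0.5            # start from a large infected seed
    prevalences = []
    for t in range(steps):
        new_state = infected.copy()
        for v in range(n):
            if infected[v]:
                if rng.random() < mu:         # recovery
                    new_state[v] = False
            else:
                k_inf = sum(infected[u] for u in adj[v])
                if rng.random() < 1.0 - (1.0 - lam) ** k_inf:   # infection
                    new_state[v] = True
        infected = new_state
        if t >= burn_in:
            prevalences.append(infected.mean())
    return np.mean(prevalences)

G = nx.barabasi_albert_graph(1000, m=2, seed=1)
for lam in (0.02, 0.05, 0.1, 0.2):
    print(lam, sis_prevalence(G, lam))
```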
|
This paper describes a search for galaxy centers with clear indications of
unusual stellar populations with an initial mass function flatter than Salpeter
at high stellar masses. Out of a sample of 668 face-on galaxies with stellar
masses in the range 10^10- 10^11 M_sol, I identify 15 galaxies with young to
intermediate age central stellar populations with unusual stellar population
gradients in the inner regions of the galaxy. In these galaxies, the 4000
Angstrom break is either flat or rising towards the center of the galaxy,
indicating that the central regions host evolved stars, but the H$\alpha$
equivalent width also rises steeply in the central regions. The ionization
parameter [OIII]/[OII] is typically low in these galactic centers, indicating
that ionizing sources are stellar rather than AGN. Wolf Rayet features
characteristic of hot young stars are often found in the spectra and these also
get progressively stronger at smaller galactocentric radii. These outliers are
compared to a control sample of galaxies of similar mass with young inner
stellar populations, but where the gradients in H$\alpha$ equivalent width and
4000 Angstrom break follow each other more closely. The outliers exhibit
central Wolf Rayet red bump excesses much more frequently, they have higher
central stellar and ionized gas metallicities, and they are also more
frequently detected at 20 cm radio wavelengths. I highlight one outlier where
the ionized gas is clearly being strongly perturbed and blown out either by
massive stars after they explode as supernovae, or by energy injection from
matter falling onto a black hole.
|
We consider the problem of operator-valued kernel learning and investigate
the possibility of going beyond the well-known separable kernels. Borrowing
tools and concepts from the field of quantum computing, such as partial trace
and entanglement, we propose a new view on operator-valued kernels and define a
general family of kernels that encompasses previously known operator-valued
kernels, including separable and transformable kernels. Within this framework,
we introduce another novel class of operator-valued kernels called entangled
kernels that are not separable. We propose an efficient two-step algorithm for
this framework, where the entangled kernel is learned based on a novel
extension of kernel alignment to operator-valued kernels. We illustrate our
algorithm with an application to supervised dimensionality reduction, and
demonstrate its effectiveness with both artificial and real data for
multi-output regression.
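
As background for the separable kernels that this work goes beyond, the sketch below performs multi-output kernel ridge regression with a separable operator-valued kernel $K(x,x') = k(x,x')\,B$; the entangled kernels and the alignment-based learning step are not implemented, and the data and output-similarity matrix $B$ are hypothetical.

```python
# Multi-output kernel ridge regression with a *separable* operator-valued
# kernel K(x, x') = k(x, x') * B -- the baseline class that the paper's
# entangled kernels generalize. Kernel alignment learning is not shown.
import numpy as np

def rbf(X1, X2, gamma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n, d, p = 80, 3, 2                       # samples, input dim, output dim
X = rng.standard_normal((n, d))
Y = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 0]) + 0.3 * X[:, 1]])
Y += 0.05 * rng.standard_normal(Y.shape)

Kx = rbf(X, X)                           # scalar kernel on inputs
B = np.array([[1.0, 0.5], [0.5, 1.0]])   # output-similarity matrix (assumed)
lam = 1e-2

# Block Gram matrix: the (i, j) block is k(x_i, x_j) * B, i.e. kron(Kx, B).
G = np.kron(Kx, B)
C = np.linalg.solve(G + lam * np.eye(n * p), Y.reshape(n * p)).reshape(n, p)

def predict(X_new):
    k_new = rbf(X_new, X)                # (m, n)
    return (k_new @ C) @ B.T             # f(x) = B * sum_i k(x, x_i) c_i

print("train RMSE:", np.sqrt(np.mean((predict(X) - Y) ** 2)))
```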
|
Turbulence has the potential for creating gas density enhancements that
initiate cloud and star formation (SF), and it can be generated locally by SF.
To study the connection between turbulence and SF, we looked for relationships
between SF traced by FUV images, and gas turbulence traced by kinetic energy
density (KED) and velocity dispersion ($v_{disp}$) in the LITTLE THINGS sample
of nearby dIrr galaxies. We performed 2D cross-correlations between FUV and KED
images, measured cross-correlations in annuli to produce correlation
coefficients as a function of radius, and determined the cumulative
distribution function of the cross correlation value. We also plotted on a
pixel-by-pixel basis the locally excess KED, $v_{disp}$, and HI mass surface
density, $\Sigma_{\rm HI}$, as determined from the respective values with the
radial profiles subtracted, versus the excess SF rate density $\Sigma_{\rm
SFR}$, for all regions with positive excess $\Sigma_{\rm SFR}$. We found that
$\Sigma_{\rm SFR}$ and KED are poorly correlated. The excess KED associated
with SF implies a $\sim0.5$% efficiency for supernova energy to pump local HI
turbulence on the scale of resolution here, which is a factor of $\sim2$ too
small for all of the turbulence on a galactic scale. The excess $v_{disp}$ in
SF regions is also small, only $\sim0.37$ km s$^{-1}$. The local excess in
$\Sigma_{\rm HI}$ corresponding to an excess in $\Sigma_{\rm SFR}$ is
consistent with an HI consumption time of $\sim1.6$ Gyr in the inner parts of
the galaxies. The similarity between this timescale and the consumption time
for CO implies that CO-dark molecular gas has comparable mass to HI in the
inner disks.
|
Through CaH2 chemical reduction of a parent R$^{3+}$Ni$^{3+}$O$_3$ perovskite form,
superconductivity was recently achieved in Sr-doped NdNiO2 on a SrTiO3 substrate.
Using density functional theory (DFT) calculations, we find that stoichiometric
NdNiO2 is significantly unstable with respect to decomposition into 1/2[Nd2O3 +
NiO + Ni] with exothermic decomposition energy of +176 meV/atom, a considerably
higher instability than that for common ternary oxides. This poses the question
of whether the stoichiometric NdNiO2 nickelate compound, used extensively to model
the electronic band structure of the Ni-based oxide analog to cuprates and found
to be metallic, is the right model for this purpose. To examine this, we study via DFT
the role of the common H impurity expected to be present in the process of
chemical reduction needed to obtain NdNiO2. We find that H can be incorporated
exothermically, i.e., spontaneously in NdNiO2, even from H2 gas. In the
concentrated limit, such impurities can result in the formation of a hydride
compound NdNiO2H, which has significantly reduced instability relative to
hydrogen-free NdNiO2. Interestingly, the hydrogenated form has a similar
lattice constant as the pure form (leading to comparable XRD patterns), but
unlike the metallic character of NdNiO2, the hydrogenated form is predicted to
be a wide-gap insulator, thus requiring doping to create a metallic or
superconducting state, just like cuprates, but unlike unhydrogenated
nickelates. While it is possible that hydrogen would be eventually desorbed,
the calculation suggests that pristine NdNiO2 is hydrogen-stabilized. One must
exercise caution with theories predicting new physics in pristine
stoichiometric NdNiO2 as it might be an unrealizable compound. Experimental
examination of the composition of real NdNiO2 superconductors and the effect of
hydrogen on the superconductivity is called for.
|
The dynamics of water molecules plays a vital role in understanding water. We
combined computer simulation and deep learning to study the dynamics of H-bonds
between water molecules. Based on ab initio molecular dynamics simulations and
a newly defined directed Hydrogen (H-) bond population operator, we studied a
typical dynamic process in bulk water: interchange, in which the H-bond donor
reverses roles with the acceptor. By designing a recurrent neural network-based
model, we have successfully classified the interchange and breakage processes
in water. We have found that the ratio between them is approximately 1:4, and
it hardly depends on temperatures from 280 to 360 K. This work implies that
deep learning has great potential to help distinguish complex dynamic
processes containing H-bonds in other systems.
|
Understanding and simulating how a quantum system interacts and exchanges
information or energy with its surroundings is a ubiquitous problem, one which
must be carefully addressed in order to establish a coherent framework to
describe the dynamics and thermodynamics of quantum systems. Significant effort
has been invested in developing various methods for tackling this issue and in
this Perspective we focus on one such technique, namely collision models, which
have emerged as a remarkably flexible approach. We discuss their application to
understanding non-Markovian dynamics and to studying the thermodynamics of
quantum systems, two areas in which collision models have proven to be
particularly insightful. Their simple structure endows them with extremely
broad applicability which has spurred their recent experimental demonstrations.
By focusing on these areas, our aim is to provide a succinct entry point to
this remarkable framework.
|
Nucleons (protons and neutrons) are the building blocks of atomic nuclei, and
are responsible for more than 99\% of the visible matter in the universe.
Despite decades of effort in studying its internal structure, there are still
a number of puzzles surrounding the proton, such as its spin and charge radius.
Accurate knowledge about the proton charge radius is not only essential for
understanding how quantum chromodynamics (QCD) works in the non-perturbative
region, but also important for bound state quantum electrodynamics (QED)
calculations of atomic energy levels. It also has an impact on the Rydberg
constant, one of the most precisely measured fundamental constants in nature.
This article reviews the latest situation concerning the proton charge radius
in light of the new experimental results from both atomic hydrogen spectroscopy
and electron scattering measurements, with particular focus on the latter. We
also present the related theoretical developments and backgrounds concerning
the determination of the proton charge radius using different experimental
techniques. We discuss upcoming experiments, and briefly mention the deuteron
charge radius puzzle at the end.
|
Topological properties of the jacobian curve ${\mathcal
J}_{\mathcal{F},\mathcal{G}}$ of two foliations $\mathcal{F}$ and $\mathcal{G}$
are described in terms of invariants associated to the foliations. The main
result gives a decomposition of the jacobian curve ${\mathcal
J}_{\mathcal{F},\mathcal{G}}$ which depends on how similar the foliations
$\mathcal{F}$ and $\mathcal{G}$ are. The similarity between foliations is codified
in terms of the Camacho-Sad indices of the foliations via the notion of a
collinear point or divisor. Our approach allows us to recover the results
concerning the factorization of the jacobian curve of two plane curves and of
the polar curve of a curve or a foliation.
|
Relativistic jets and disc-winds are typically observed in BH-XRBs and AGNs.
However, many physical details of jet launching and the driving of disc winds
from the underlying accretion disc are still not fully understood. In this
study, we further investigate the role of the magnetic field strength and
structure in launching jets and disc winds. In particular, we explore the
connection between jet, wind, and the accretion disc around the central black
hole. We perform axisymmetric GRMHD simulations of the accretion-ejection
system using adaptive mesh refinement. Essentially, our simulations are
initiated with a thin accretion disc in equilibrium. An extensive parametric
study by choosing different combinations of magnetic field strength and initial
magnetic field inclination is also performed. Our study finds relativistic jets
driven by the Blandford \& Znajek (BZ) mechanism and the disc-wind driven by
the Blandford \& Payne (BP) mechanism. We also find that plasmoids are formed
due to the reconnection events, and these plasmoids advect with disc-winds. As
a result, the tension force due to the poloidal magnetic field is enhanced in
the inner part of the accretion disc, resulting in disc truncation and
oscillation. These oscillations result in flaring activities in the jet mass
flow rates. We find that simulation runs with lower values of the plasma-$\beta$
and lower inclination angle parameters are more prone to the formation of
plasmoids and subsequent inner disc oscillations. Our models provide a possible
template to understand spectral state transition phenomena in BH-XRBs.
|
Space-borne optical frequency references based on spectroscopy of atomic
vapors may serve as an integral part of compact optical atomic clocks, which
can advance global navigation systems, or can be utilized for earth observation
missions as part of laser systems for cold atom gradiometers. Nanosatellites
offer low launch-costs, multiple deployment opportunities and short payload
development cycles, enabling rapid maturation of optical frequency references
and underlying key technologies in space. Towards an in-orbit demonstration on
such a platform, we have developed a CubeSat-compatible prototype of an optical
frequency reference based on the D2-transition in rubidium. A frequency
instability of 1.7e-12 at 1 s averaging time is achieved. The optical module
occupies a volume of 35 cm^3, weighs 73 g and consumes 780 mW of power.
|
At the heart of all automated driving systems is the ability to sense the
surroundings, e.g., through semantic segmentation of LiDAR sequences, which
experienced a remarkable progress due to the release of large datasets such as
SemanticKITTI and nuScenes-LidarSeg. While most previous works focus on sparse
segmentation of the LiDAR input, dense output masks provide self-driving cars
with almost complete environment information. In this paper, we introduce MASS
- a Multi-Attentional Semantic Segmentation model specifically built for dense
top-view understanding of the driving scenes. Our framework operates on pillar-
and occupancy features and comprises three attention-based building blocks: (1)
a keypoint-driven graph attention, (2) an LSTM-based attention computed from a
vector embedding of the spatial input, and (3) a pillar-based attention,
resulting in a dense 360-degree segmentation mask. With extensive experiments
on both, SemanticKITTI and nuScenes-LidarSeg, we quantitatively demonstrate the
effectiveness of our model, outperforming the state of the art by 19.0% on
SemanticKITTI and reaching 32.7% in mIoU on nuScenes-LidarSeg, where MASS is
the first work addressing the dense segmentation task. Furthermore, our
multi-attention model is shown to be very effective for 3D object detection
validated on the KITTI-3D dataset, showcasing its high generalizability to
other tasks related to 3D vision.
|
Prediction tasks about students have practical significance for both students
and colleges. Making multiple predictions about students is an important part of
a smart campus. For instance, predicting whether a student will fail to
graduate can alert the student affairs office to take predictive measures to
help the student improve his/her academic performance. With the development of
information technology in colleges, we can collect digital footprints which
encode heterogeneous behaviors continuously. In this paper, we focus on
modeling heterogeneous behaviors and making multiple predictions together,
since some prediction tasks are related and learning the model for a specific
task may have the data sparsity problem. To this end, we propose a variant of
LSTM and a soft-attention mechanism. The proposed LSTM is able to learn the
student profile-aware representation from heterogeneous behavior sequences. The
proposed soft-attention mechanism can dynamically learn different importance
degrees of different days for every student. In this way, heterogeneous
behaviors can be well modeled. In order to model interactions among multiple
prediction tasks, we propose a co-attention mechanism based unit. With the help
of the stacked units, we can explicitly control the knowledge transfer among
multiple tasks. We design three motivating behavior prediction tasks based on a
real-world dataset collected from a college. Qualitative and quantitative
experiments on the three prediction tasks have demonstrated the effectiveness
of our model.
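
The sketch below shows only the generic backbone assumed here: a PyTorch LSTM over daily behaviour vectors with learned soft attention over days, followed by a classifier; the profile-aware LSTM variant and the co-attention units for multi-task knowledge transfer are the paper's contributions and are not reproduced.

```python
# Compact PyTorch sketch: an LSTM over daily behaviour vectors with learned
# soft attention over days, followed by a classifier. The paper's
# profile-aware LSTM variant and co-attention-based multi-task units are not
# reproduced; this only illustrates the sequence-plus-attention backbone.
import torch
import torch.nn as nn

class BehaviorAttentionNet(nn.Module):
    def __init__(self, feat_dim, hidden_dim=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)       # one attention score per day
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):                          # x: (batch, days, feat_dim)
        h, _ = self.lstm(x)                        # (batch, days, hidden_dim)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (batch, days)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)          # (batch, hidden)
        return self.head(context)

# Toy usage with random data standing in for heterogeneous daily behaviours.
model = BehaviorAttentionNet(feat_dim=16)
x = torch.randn(32, 60, 16)                        # 32 students, 60 days
labels = torch.randint(0, 2, (32,))
loss = nn.CrossEntropyLoss()(model(x), labels)
loss.backward()
print("loss:", float(loss))
```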
|
We tackle the problem of predicting saliency maps for videos of dynamic
scenes. We note that the accuracy of the maps reconstructed from the gaze data
of a fixed number of observers varies with the frame, as it depends on the
content of the scene. This issue is particularly pressing when a limited number
of observers are available. In such cases, directly minimizing the discrepancy
between the predicted and measured saliency maps, as traditional deep-learning
methods do, results in overfitting to the noisy data. We propose a noise-aware
training (NAT) paradigm that quantifies and accounts for the uncertainty
arising from frame-specific gaze data inaccuracy. We show that NAT is
especially advantageous when limited training data is available, with
experiments across different models, loss functions, and datasets. We also
introduce a video game-based saliency dataset, with rich temporal semantics,
and multiple gaze attractors per frame. The dataset and source code are
available at https://github.com/NVlabs/NAT-saliency.
|
In this work, we present a comparative analysis of the trajectories estimated
from various Simultaneous Localization and Mapping (SLAM) systems in a
simulation environment for vineyards. The vineyard environment is challenging for
SLAM methods due to visual appearance changes over time, uneven terrain, and
repeated visual patterns. For this reason, we created a simulation environment
specifically for vineyards to help study SLAM systems in such a challenging
environment. We evaluated the following SLAM systems: LIO-SAM, StaticMapping,
ORB-SLAM2, and RTAB-MAP in four different scenarios. The mobile robot used in
this study is equipped with 2D and 3D lidars, an IMU, and an RGB-D camera (Kinect v2).
The results show good and encouraging performance of RTAB-MAP in such an
environment.
|
We present a study of the wrinkling modes, localized in the plane of single-
and few-layer graphene sheets embedded in or placed on a compliant
compressively strained matrix. We provide the analytical model based on
nonlinear elasticity of the graphene sheet, which shows that the compressive
surface stress results in spatial localization of the extended sinusoidal
wrinkling mode with soliton-like envelope with localization length, decreasing
with the overcritical external strain. The parameters of the extended
sinusoidal wrinkling modes are found from the conditions of anomalous softening
of flexural surface acoustic wave propagating along the graphene sheet in or on
the matrix. For relatively small overcritical external strain, the continuous
transition occurs from the sinusoidal wrinkling modes with soliton-like
envelope to the strongly localized modes with approximately one-period
sinusoidal profiles and amplitude- and external-strain-independent spatial
widths. Two types of graphene wrinkling modes with different symmetry are
described, when the in-plane antisymmetric or symmetric modes are presumably
realized in the graphene sheet embedded in or placed on a compliant strained
matrix. Strongly localized wrinkling modes can be realized without delamination
of the graphene sheet from the compliant matrix and are not equivalent to the
ripplocations in layered solids. Molecular-dynamics modeling confirms the
appearance of sinusoidal wrinkling modes in single- and few-layer graphene
sheets embedded in polyethylene matrix at T = 300K.
|
A new method is proposed for human motion prediction by learning temporal and
spatial dependencies in an end-to-end deep neural network. The joint
connectivity is explicitly modeled using a novel autoregressive structured
prediction representation based on flow-based generative models. We learn a
latent space of complex body poses in consecutive frames which is conditioned
on the high-dimensional structure input sequence. To construct each latent
variable, the general and local smoothness of the joint positions are
considered in a generative process using conditional normalizing flows. As a
result, all frame-level and joint-level continuities in the sequence are
preserved in the model. This enables us to parameterize the inter-frame and
intra-frame relationships and joint connectivity for robust long-term
predictions as well as short-term prediction. Our experiments on two
challenging benchmark datasets of Human3.6M and AMASS demonstrate that our
proposed method is able to effectively model the sequence information for
motion prediction and outperform other techniques in 42 of the 48 total
experiment scenarios to set a new state-of-the-art.
|
We consider bootstrap percolation and diffusion in sparse random graphs with
fixed degrees, constructed by the configuration model. Every node has two states:
it is either active or inactive. We assume that to each node is assigned a
nonnegative (integer) threshold. The diffusion process is initiated by a subset
of nodes with threshold zero which consists of initially activated nodes,
whereas every other node is inactive. Subsequently, in each round, if an
inactive node with threshold $\theta$ has at least $\theta$ of its neighbours
activated, then it also becomes active and remains so forever. This is repeated
until no more nodes become activated. The main result of this paper provides a
central limit theorem for the final size of activated nodes. Namely, under
suitable assumptions on the degree and threshold distributions, we show that
the final size of activated nodes has asymptotically Gaussian fluctuations.
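
The sketch below simulates this activation dynamics on a configuration-model graph and records the final number of active nodes over a few runs; it is a Monte Carlo illustration of the quantity whose fluctuations the central limit theorem describes, with a hypothetical degree and threshold distribution.

```python
# Bootstrap percolation on a configuration-model random graph: nodes with
# threshold zero start active; an inactive node activates once at least
# "threshold" of its neighbours are active. Monte Carlo estimate of the final
# size only -- the paper's result is a central limit theorem for it.
import numpy as np
import networkx as nx

def final_active_size(degree_seq, thresholds, seed=0):
    G = nx.configuration_model(list(degree_seq), seed=seed)
    G = nx.Graph(G)                               # collapse parallel edges
    G.remove_edges_from(list(nx.selfloop_edges(G)))
    active = {v for v in G if thresholds[v] == 0}
    frontier = list(active)
    counts = {v: 0 for v in G}                    # active neighbours seen so far
    while frontier:
        u = frontier.pop()
        for w in G.neighbors(u):
            if w in active:
                continue
            counts[w] += 1
            if counts[w] >= thresholds[w]:
                active.add(w)
                frontier.append(w)
    return len(active)

n = 5000
rng = np.random.default_rng(1)
degrees = rng.integers(2, 6, n)
if degrees.sum() % 2:                             # degree sum must be even
    degrees[0] += 1
thresholds = rng.choice([0, 2], size=n, p=[0.05, 0.95])
sizes = [final_active_size(degrees, thresholds, seed=s) for s in range(20)]
print("mean final size:", np.mean(sizes), "std:", np.std(sizes))
```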
|
A novel class of methods for combining $p$-values to perform aggregate
hypothesis tests has emerged that exploit the properties of heavy-tailed Stable
distributions. These methods offer important practical advantages including
robustness to dependence and better-than-Bonferroni scalability, and they
reveal theoretical connections between Bayesian and classical hypothesis tests.
The harmonic mean $p$-value (HMP) procedure is based on the convergence of
summed inverse $p$-values to the Landau distribution, while the Cauchy
combination test (CCT) is based on the self-similarity of summed
Cauchy-transformed $p$-values. The CCT has the advantage that it is analytic
and exact. The HMP has the advantage that it emulates a model-averaged Bayes
factor, is insensitive to $p$-values near 1, and offers multilevel testing via
a closed testing procedure. Here I investigate whether other Stable combination
tests can combine these benefits, and identify a new method, the L\'evy
combination test (LCT). The LCT exploits the self-similarity of sums of L\'evy
random variables transformed from $p$-values. Under arbitrary dependence, the
LCT possesses better robustness than the CCT and HMP, with two-fold worst-case
inflation at small significance thresholds. It controls the strong-sense
familywise error rate through a multilevel test uniformly more powerful than
Bonferroni. Simulations show that the LCT behaves like Simes' test in some
respects, with power intermediate between the HMP and Bonferroni. The LCT
represents an interesting and attractive addition to combined testing methods
based on heavy-tailed distributions.
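
For concreteness, the sketch below computes three such combination statistics with SciPy: the Cauchy combination test, the harmonic mean p-value in its simplest uncorrected form, and a Lévy-based combination that illustrates the self-similarity of sums of Lévy variables; the exact calibration of the LCT in the paper may differ from this minimal version.

```python
# Heavy-tailed p-value combination statistics: the Cauchy combination test
# (CCT), the harmonic mean p-value (HMP) in its simple uncorrected form, and
# a Levy-based combination illustrating the self-similarity idea. The precise
# calibration of the LCT in the paper may differ from this sketch.
import numpy as np
from scipy import stats

def cauchy_combination(pvals, weights=None):
    p = np.asarray(pvals, dtype=float)
    w = np.full(p.size, 1.0 / p.size) if weights is None else np.asarray(weights)
    t = np.sum(w * np.tan((0.5 - p) * np.pi))   # Cauchy-transformed p-values
    return stats.cauchy.sf(t)                   # exact under independence

def harmonic_mean_p(pvals):
    # Unweighted HMP; the small-p calibration via the Landau distribution is
    # omitted in this sketch.
    p = np.asarray(pvals, dtype=float)
    return p.size / np.sum(1.0 / p)

def levy_combination(pvals):
    # Map each p-value to a Levy(0, 1) quantile; sums of n iid Levy variables
    # are again Levy distributed with scale n**2 (stability index 1/2).
    p = np.asarray(pvals, dtype=float)
    s = np.sum(stats.levy.isf(p))
    return stats.levy.sf(s, scale=p.size ** 2)

pvals = [0.01, 0.20, 0.43, 0.55, 0.87]
print("CCT:", cauchy_combination(pvals))
print("HMP:", harmonic_mean_p(pvals))
print("LCT (sketch):", levy_combination(pvals))
```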
|
We shall present with examples how analysis of astronomy data can be used for
an educational purpose to train students in methods of data analysis,
statistics, programming skills and research problems. Special reference will be
made to our IAU-OAD project `Astronomy from Archival Data' where we are in the
process of building a repository of instructional videos and reading material
for undergraduate and postgraduate students. Virtual Observatory tools will
also be discussed and applied. As this is an ongoing project, by the time of
the conference we will have the projects and work done by students included in
our presentation. The material produced can be freely used by the community.
|
Maxwell's boundary conditions (MBCs) have long been known to be insufficient to
determine the optical responses of a spatially dispersive medium. Supplementing
MBCs with additional boundary conditions (ABCs) has become a normal yet
controversial practice. Here the problem of ABCs is solved by analyzing some
subtle aspects of a physical surface. A generic theory is presented for
handling the interaction of light with the surfaces of an arbitrary medium and
applied to study the traditional problem of exciton polaritons. We show that
ABCs can always be adjusted to fit the theory but they can by no means be
construed as intrinsic surface characteristics, which are instead captured by a
\textit{surface response function} (SRF). Unlike any ABCs, a SRF describes
essentially non-local boundary effects. Methods for experimentally extracting
the spatial profile of this function are proposed.
|
These Monte Carlo studies describe the impact of higher order effects in both
QCD and EW $t\bar{t}W$ production. Both next-to-leading inclusive and multileg
setups are studied for $t\bar{t}W$ QCD production.
|
In this paper we show how to rederive the Bogomolny equations of the generalized
Maxwell-Chern-Simons-Higgs model presented in Ref. \cite{Bazeia:2012ux} by
using the BPS Lagrangian method. We also show that the other results therein
(identification, potential terms, Gauss's law constraint) can be obtained
rigorously with a particular form of the BPS Lagrangian density. In this method,
we find that the potential terms take the most general form that admits BPS
vortex solutions. Gauss's law constraint turns out to be one of the
Euler-Lagrange equations of the BPS Lagrangian density. We also find other BPS
vortex solutions by taking the other identification between the neutral scalar
field and the electric scalar potential field, $N=\pm A_0$, which differs by a
relative sign from the identification in Ref. \cite{Bazeia:2012ux}, $N=\mp A_0$.
We find that these BPS vortex solutions have negative electric charge and are
related to the corresponding BPS vortex solutions in Ref. \cite{Bazeia:2012ux}
by transforming the neutral scalar field $N\to-N$. Other possible choices of the
BPS Lagrangian density might give different Bogomolny equations and potential
terms, which will be discussed in another work.
|
We introduce the magnon circular photogalvanic effect enabled by stimulated
Raman scattering. This provides an all-optical pathway to the generation of
directed magnon currents with circularly polarized light in honeycomb
antiferromagnetic insulators. The effect is the leading order contribution to
magnon photocurrent generation via optical fields. Control of the magnon
current by the polarization and angle of incidence of the laser is
demonstrated. Experimental detection by sizeable inverse spin Hall voltages in
platinum contacts is proposed.
|
Let $F: T^{n} \times I \to T^{n}$ be a homotopy on the $n$-dimensional torus. The
main purpose of this paper is to present a formula for the one-parameter
Nielsen number $N(F)$ of $F$ in terms of the induced homomorphism. If $L(F)$ is the
one-parameter Lefschetz class of $F$ then $L(F)$ is given by $L(F) = \pm
N(F)\alpha,$ for some $\alpha \in H_{1}(\pi_{1}(T^{n}),\mathbb{Z}).$
|
Neutrino non-standard interactions (NSI) can be constrained using coherent
elastic neutrino-nucleus scattering. We discuss here two aspects in this
respect, namely the effects of (i) charged current NSI in neutrino production
and (ii) CP-violating phases associated with neutral current NSI in neutrino
detection. Effects of CP-phases require the simultaneous presence of two
different flavor-changing neutral current NSI parameters. Applying these two
scenarios to the COHERENT measurement, we derive limits on charged current NSI
and find that more data is required to compete with the existing limits.
Regarding CP-phases, we show how the limits on the NSI parameters depend
dramatically on the values of the phases. Incidentally, the same parameters
influencing coherent scattering also show up in neutrino oscillation
experiments. We find that COHERENT provides complementary constraints on the
set of NSI parameters that can explain the discrepancy in the best-fit value of
the standard CP-phase obtained by T2K and NO$\nu$A, while the significance with
which the LMA-Dark solution is ruled out can be weakened by the presence of
additional NSI parameters introduced here.
|
Scientific digital libraries play a critical role in the development and
dissemination of scientific literature. Despite dedicated search engines,
retrieving relevant publications from the ever-growing body of scientific
literature remains challenging and time-consuming. Indexing scientific articles
is indeed a difficult matter, and current models solely rely on a small portion
of the articles (title and abstract) and on author-assigned keyphrases when
available. This results in frustratingly limited access to scientific
knowledge. The goal of the DELICES project is to address this pitfall by
exploiting semantic relations between scientific articles to both improve and
enrich indexing. To this end, we will rely on the latest advances in semantic
representations to both increase the relevance of keyphrases extracted from the
documents, and extend indexing to new terms borrowed from semantically similar
documents.
|
We give a brief account of the history of neutrino, and how that most aloof
of all particles has shaped our search for a theory of fundamental interactions
ever since it was theoretically proposed. We introduce the necessary concepts
and phenomena in a non-technical language aimed at a physicist with some basic
knowledge of quantum mechanics. In showing that neutrino mass could be the door
to new physics beyond the Standard Model, we emphasize the need to frame the
issue in the context of a complete theory, with testable predictions accessible
to present and near future experiments. We argue in favor of the Minimal
Left-Right Symmetric theory as the strongest candidate for such a theory,
connecting neutrino mass with parity breakdown in nature. This is the theory
that originally led to neutrino mass and the seesaw mechanism behind its
smallness, but, even more importantly, the theory that sheds light on a
fundamental question that touches us all: the symmetry between left and right.
|
Contextuality and entanglement are valuable resources for quantum computing
and quantum information. Bell inequalities are used to certify entanglement;
thus, it is important to understand why and how they are violated. Quantum
mechanics and behavioral sciences teach us that random variables measuring the
same content (the answer to the same Yes or No question) may vary, if measured
jointly with other random variables. Alice's and Bob's raw data confirm Einsteinian
non-signaling, but setting-dependent experimental protocols are used to create
samples of coupled pairs of distant outcomes and to estimate correlations.
Marginal expectations, estimated using these final samples, depend on distant
settings. Therefore, a system of random variables measured in Bell tests is
inconsistently connected and it should be analyzed using a
Contextuality-by-Default approach, which is done for the first time in this
paper. The violation of Bell inequalities and inconsistent connectedness may be
explained using a contextual locally causal probabilistic model in which
setting-dependent variables describing measuring instruments are correctly
incorporated. We prove that this model does not restrict experimenters' freedom
of choice, which is a prerequisite of science. Contextuality seems to be the
rule and not an exception; thus, it should be carefully tested.
|
The agent-based Yard-Sale model of wealth inequality is generalized to
incorporate exponential economic growth and its distribution. The distribution
of economic growth is nonuniform and is determined by the wealth of each agent
and a parameter $\lambda$. Our numerical results indicate that the model has a
critical point at $\lambda=1$ between a phase for $\lambda < 1$ with economic
mobility and exponentially growing wealth of all agents and a non-stationary
phase for $\lambda \geq 1$ with wealth condensation and no mobility. We define
the energy of the system and show that the system can be considered to be in
thermodynamic equilibrium for $\lambda < 1$. Our estimates of various critical
exponents are consistent with a mean-field theory (see following paper). The
exponents do not obey the usual scaling laws unless a combination of parameters
that we refer to as the Ginzburg parameter is held fixed as the transition is
approached. The model illustrates that both poorer and richer agents benefit
from economic growth if its distribution does not favor the richer agents too
strongly. This work and the accompanying theory paper contribute to
understanding whether the methods of equilibrium statistical mechanics can be
applied to economic systems.
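A minimal sketch of the agent-based dynamics described above, assuming the usual yard-sale exchange rule (a fraction of the poorer agent's wealth is wagered on a fair coin flip) and an illustrative growth-allocation rule in which agent $i$ receives a share of the growth proportional to $w_i^\lambda$; the function name and parameter values are hypothetical and not taken from the paper.

```python
import numpy as np

def yard_sale_with_growth(n_agents=1000, steps=20_000, f=0.1,
                          mu=1e-4, lam=0.8, rng=None):
    """Toy Yard-Sale economy with exponentially growing total wealth.

    f   : fraction of the poorer agent's wealth wagered per exchange
    mu  : growth added per step, as a fraction of total wealth
    lam : exponent of the (assumed) growth-allocation rule, in which
          agent i receives a share of the growth proportional to w_i**lam
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.ones(n_agents)                      # equal initial wealth
    for _ in range(steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        stake = f * min(w[i], w[j])            # yard-sale wager
        if rng.random() < 0.5:                 # fair coin picks the winner
            w[i], w[j] = w[i] + stake, w[j] - stake
        else:
            w[i], w[j] = w[i] - stake, w[j] + stake
        growth = mu * w.sum()                  # exponential growth of total wealth
        share = w ** lam
        w += growth * share / share.sum()      # non-uniform growth distribution
    return w

wealth = yard_sale_with_growth()
print("coefficient of variation:", wealth.std() / wealth.mean())
```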
|
We consider the (discrete) parabolic Anderson model $\partial u(t,x)/\partial
t=\Delta u(t,x) +\xi_t(x) u(t,x)$, $t\geq 0$, $x\in \mathbb{Z}^d$, where the
$\xi$-field is $\mathbb{R}$-valued and plays the role of a dynamic random
environment, and $\Delta$ is the discrete Laplacian. We focus on the case in
which $\xi$ is given by a properly rescaled symmetric simple exclusion process
under which it converges to an Ornstein--Uhlenbeck process. Scaling the
Laplacian diffusively and restricting ourselves to a torus, we show that in
dimension $d=3$ upon considering a suitably renormalised version of the above
equation, the sequence of solutions converges in law.
As a by-product of our main result we obtain precise asymptotics for the
survival probability of a simple random walk that is killed at a scale
dependent rate when meeting an exclusion particle. Our proof relies on the
discrete theory of regularity structures of \cite{ErhardHairerRegularity} and
on novel sharp estimates of joint cumulants of arbitrarily large order for the
exclusion process. We think that the latter is of independent interest and may
find applications elsewhere.
|
We present an arithmetic circuit performing constant modular addition having
$\mathcal{O}(n)$ depth of Toffoli gates and using a total of $n+3$ qubits. This
is an improvement by a factor of two compared to the width of the
state-of-the-art Toffoli-based constant modular adder. The advantage of our
adder, compared to the ones operating in the Fourier-basis, is that it does not
require small angle rotations and their Clifford+T decomposition. Our circuit
uses a recursive adder combined with the modular addition scheme proposed by
Vedral et al. The circuit is implemented and verified exhaustively with
QUANTIFY, an open-source framework. We also report on the Clifford+T cost of
the circuit.
|
The recently proposed end-to-end transformer detectors, such as DETR and
Deformable DETR, have a cascade structure of stacking 6 decoder layers to
update object queries iteratively, without which their performance degrades
seriously. In this paper, we find that the random initialization of
object containers, which include object queries and reference points, is mainly
responsible for the requirement of multiple iterations. Based on our findings,
we propose Efficient DETR, a simple and efficient pipeline for end-to-end
object detection. By taking advantage of both dense detection and sparse set
detection, Efficient DETR leverages a dense prior to initialize the object
containers and bridges the gap between the 1-decoder structure and the 6-decoder
structure. Experiments conducted on MS COCO show that our method, with only 3
encoder layers and 1 decoder layer, achieves competitive performance with
state-of-the-art object detection methods. Efficient DETR is also robust in
crowded scenes. It outperforms modern detectors on CrowdHuman dataset by a
large margin.
|
We introduce a class of $n$-dimensional (possibly inhomogeneous) spin-like
lattice systems presenting modulated phases with possibly different textures.
Such systems can be parameterized according to the number of ground states, and
can be described by a phase-transition energy which we compute by means of
variational techniques. Degeneracies due to frustration are also discussed.
|
We translate a closed text that is known in advance into a severely low
resource language by leveraging massive source parallelism. In other words,
given a text in 124 source languages, we translate it into a severely low
resource language using only ~1,000 lines of low resource data without any
external help. Firstly, we propose a systematic method to rank and choose
source languages that are close to the low resource language. We call the
linguistic definition of language family Family of Origin (FAMO), and we call
the empirical definition of higher-ranked languages using our metrics Family of
Choice (FAMC). Secondly, we build an Iteratively Pretrained Multilingual
Order-preserving Lexiconized Transformer (IPML) to train on ~1,000 lines
(~3.5%) of low resource data. To translate named entities correctly, we build a
massive lexicon table for 2,939 Bible named entities in 124 source languages,
including many that occur only once, and covering more than 66 severely low
resource languages. Moreover, we also develop a novel method of combining translations
from different source languages into one. Using English as a hypothetical low
resource language, we get a +23.9 BLEU increase over a multilingual baseline,
and a +10.3 BLEU increase over our asymmetric baseline in the Bible dataset. We
get a 42.8 BLEU score for Portuguese-English translation on the medical EMEA
dataset. We also have good results for a real severely low resource Mayan
language, Eastern Pokomchi.
|
Recent advances in deep offline reinforcement learning (RL) have made it
possible to train strong robotic agents from offline datasets. However,
depending on the quality of the trained agents and the application being
considered, it is often desirable to fine-tune such agents via further online
interactions. In this paper, we observe that state-action distribution shift
may lead to severe bootstrap error during fine-tuning, which destroys the good
initial policy obtained via offline RL. To address this issue, we first propose
a balanced replay scheme that prioritizes samples encountered online while also
encouraging the use of near-on-policy samples from the offline dataset.
Furthermore, we leverage multiple Q-functions trained pessimistically offline,
thereby preventing overoptimism concerning unfamiliar actions at novel states
during the initial training phase. We show that the proposed method improves
the sample efficiency and final performance of the fine-tuned robotic agents on
various locomotion and manipulation tasks. Our code is available at:
https://github.com/shlee94/Off2OnRL.
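A minimal sketch of the balanced-replay idea described above: online transitions are prioritized, and offline transitions are drawn in proportion to a caller-supplied near-on-policy score. The class, its interface, and the scoring mechanism are illustrative assumptions; the paper estimates density ratios with a separate network and also trains an ensemble of pessimistic Q-functions, which is not shown here.

```python
import random

class BalancedReplay:
    """Illustrative balanced replay buffer: mixes online and offline
    transitions, sampling offline data in proportion to a user-supplied
    near-on-policy score (a stand-in for the paper's density-ratio estimate)."""

    def __init__(self, online_fraction=0.75):
        self.online, self.offline, self.scores = [], [], []
        self.online_fraction = online_fraction

    def add_online(self, transition):
        self.online.append(transition)

    def add_offline(self, transition, on_policy_score):
        self.offline.append(transition)
        self.scores.append(max(on_policy_score, 1e-6))  # keep weights positive

    def sample(self, batch_size):
        # Prioritize samples encountered online...
        n_online = min(int(batch_size * self.online_fraction), len(self.online))
        batch = random.sample(self.online, n_online) if n_online else []
        # ...and fill the rest with near-on-policy offline samples.
        n_offline = batch_size - len(batch)
        if n_offline and self.offline:
            batch += random.choices(self.offline, weights=self.scores, k=n_offline)
        return batch
```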
|
In this paper, we present a new approach based on dynamic factor models
(DFMs) to perform nowcasts for the percentage annual variation of the Mexican
Global Economic Activity Indicator (IGAE in Spanish). The procedure consists of
the following steps: i) build a timely and correlated database by using
economic and financial time series and real-time variables such as social
mobility and significant topics extracted by Google Trends; ii) estimate the
common factors using the two-step methodology of Doz et al. (2011); iii) use
the common factors in univariate time-series models for test data; and iv)
according to the best results obtained in the previous step, combine the best
nowcasts that are statistically equivalent (Diebold-Mariano test) to generate
the current nowcasts. We obtain timely and accurate nowcasts for the IGAE,
including those for the current phase of drastic drops in the economy related
to COVID-19 sanitary measures. Additionally, the approach allows us to
disentangle the key variables in the DFM by estimating the confidence interval
for both the factor loadings and the factor estimates. This approach can be
used in official statistics to obtain preliminary estimates for IGAE up to 50
days before the official results.
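A simplified sketch of steps i)-iii) above, assuming a monthly panel of standardized indicators: principal components stand in for the two-step factor estimator of Doz et al. (2011), and a plain bridge regression stands in for the univariate time-series models; all names and settings are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def nowcast_target(X: pd.DataFrame, y: pd.Series, n_factors: int = 3) -> pd.Series:
    """Two-step sketch: extract common factors from a standardized panel of
    timely indicators, then bridge them to the partially observed target
    (e.g., annual IGAE growth) with a regression fitted on published months."""
    Z = (X - X.mean()) / X.std()                       # standardize the panel
    factors = PCA(n_components=n_factors).fit_transform(Z.fillna(0.0))
    known = y.notna().to_numpy()                       # months with a published target
    bridge = LinearRegression().fit(factors[known], y[known])
    return pd.Series(bridge.predict(factors), index=X.index, name="nowcast")
```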
|
Motivated by estimation of quantum noise models, we study the problem of
learning a Pauli channel, or more generally the Pauli error rates of an
arbitrary channel. By employing a novel reduction to the "Population Recovery"
problem, we give an extremely simple algorithm that learns the Pauli error
rates of an $n$-qubit channel to precision $\epsilon$ in $\ell_\infty$ using
just $O(1/\epsilon^2) \log(n/\epsilon)$ applications of the channel. This is
optimal up to the logarithmic factors. Our algorithm uses only unentangled
state preparation and measurements, and the post-measurement classical runtime
is just an $O(1/\epsilon)$ factor larger than the measurement data size. It is
also impervious to a limited model of measurement noise where heralded
measurement failures occur independently with probability $\le 1/4$.
We then consider the case where the noise channel is close to the identity,
meaning that the no-error outcome occurs with probability $1-\eta$. In the
regime of small $\eta$ we extend our algorithm to achieve multiplicative
precision $1 \pm \epsilon$ (i.e., additive precision $\epsilon \eta$) using
just $O\bigl(\frac{1}{\epsilon^2 \eta}\bigr) \log(n/\epsilon)$ applications of
the channel.
|
We find that the Casimir pressure in peptide films deposited on metallic
substrates is always repulsive which makes these films less stable. It is shown
that by adding a graphene sheet on top of peptide film one can change the sign
of the Casimir pressure by making it attractive. For this purpose, the
formalism of the Lifshitz theory is extended to the case when the film and
substrate materials are described by the frequency-dependent dielectric
permittivities, whereas the response of graphene to the electromagnetic field
is governed by the polarization tensor in (2+1)-dimensional space-time found in
the framework of the Dirac model. Both pristine graphene sheets and gapped,
doped graphene sheets possessing a nonzero energy gap and chemical potential
are considered. According to our results, in all cases the presence of a
graphene sheet makes the Casimir pressure in a peptide film deposited on a metallic
substrate attractive starting from some minimum film thickness. The value of
this minimum thickness becomes smaller with increasing chemical potential and
larger with increasing energy gap and the fraction of water in peptide film.
The physical explanation for these results is provided, and their possible
applications in organic electronics are discussed.
|
Urban Air Mobility (UAM) has the potential to revolutionize transportation.
It will exploit the third dimension to help smooth ground traffic in densely
populated areas. To be successful, it will require an integrated approach able
to balance efficiency and safety while harnessing common resources and
information. In this work we focus on future urban air-taxi services, and
present the first methods and algorithms to efficiently operate air-taxis at
scale. Our approach is twofold. First, we use a passenger-centric perspective
which introduces traveling classes, and information sharing between transport
modes to differentiate quality of services. This helps smooth multimodal
journeys and increase passenger satisfaction. Second, we provide a flight
routing and recharging solution which minimizes direct operational costs while
preserving long term battery life through reduced energy-intense recharging.
Our methods, which surpass the performance of a general state-of-the-art
commercial solver, are also used to gain meaningful insights on the design
space of the air-taxi problem, including solutions to hidden fairness issues.
|
In this paper, we consider user selection and downlink precoding for an
over-loaded single-cell massive multiple-input multiple-output (MIMO) system in
frequency division duplexing (FDD) mode, where the base station is equipped
with a dual-polarized uniform planar array (DP-UPA) and serves a large number
of single-antenna users. Due to the absence of uplink-downlink channel
reciprocity and the high-dimensionality of channel matrices, it is extremely
challenging to design downlink precoders using closed-loop channel probing and
feedback with limited spectrum resource. To address these issues, a novel
methodology -- active channel sparsification (ACS) -- has been proposed
recently in the literature for uniform linear array (ULA) to design sparsifying
precoders, which boosts spectral efficiency for multi-user downlink
transmission with substantially reduced channel feedback overhead. Pushing
forward this line of research, we aim to facilitate the potential deployment of
ACS in practical FDD massive MIMO systems, by extending it from ULA to DP-UPA
with explicit user selection and making the current ACS implementation
simplified. To this end, by leveraging Toeplitz structure of channel covariance
matrices, we extend the original ACS using scale-weight bipartite graph
representation to the matrix-weight counterpart. Building upon this, we propose
a multi-dimensional ACS (MD-ACS) method, which is a generalization of original
ACS formulation and is more suitable for DP-UPA antenna configurations. The
nonlinear integer program formulation of MD-ACS can be classified as a
generalized multi-assignment problem (GMAP), for which we propose a simple yet
efficient greedy algorithm. Simulation results demonstrate the
performance improvement of the proposed MD-ACS with greedy algorithm over the
state-of-the-art methods based on the QuaDRiGa channel models.
|
The QCD$\times$QED factorization is studied for two-body non-leptonic and
semi-leptonic $B$ decays with heavy-light final states. These non-leptonic
decays, like $\bar{B}^0_{(s)}\to D^+_{(s)} \pi^-$ and $\bar{B}_d^0 \to D^+
K^-$, are among the theoretically cleanest non-leptonic decays as penguin loops
do not contribute and colour-suppressed tree amplitudes are suppressed in the
heavy-quark limit or even completely absent. Advancing the theoretical
calculations of such decays therefore also requires a careful analysis of QED
effects. Including QED effects does not alter the general structure of
factorization which is analogous for both semi-leptonic and non-leptonic
decays. For the latter, we express our result as a correction of the tree
amplitude coefficient $a_1$. At the amplitude level, we find QED effects at the
sub-percent level, which is of the same order as the QCD uncertainty. We
discuss the phenomenological implications of adding QED effects in light of
discrepancies observed between theory and experimental data, for ratios of
non-leptonic over semi-leptonic decay rates. At the level of the rate,
ultrasoft photon effects can produce a correction up to a few percent,
requiring a careful treatment of such effects in the experimental analyses.
|
Accelerated multi-coil magnetic resonance imaging reconstruction has seen a
substantial recent improvement combining compressed sensing with deep learning.
However, most of these methods rely on estimates of the coil sensitivity
profiles, or on calibration data for estimating model parameters. Prior work
has shown that these methods degrade in performance when the quality of these
estimators is poor or when the scan parameters differ from the training
conditions. Here we introduce Deep J-Sense as a deep learning approach that
builds on unrolled alternating minimization and increases robustness: our
algorithm refines both the magnetization (image) kernel and the coil
sensitivity maps. Experimental results on a subset of the knee fastMRI dataset
show that this increases reconstruction performance and provides a significant
degree of robustness to varying acceleration factors and calibration region
sizes.
|
Learning from examples with noisy labels has attracted increasing attention
recently. However, this paper shows that the commonly used CIFAR-based datasets
and the accuracy evaluation metric used in the literature are both
inappropriate in this context. An alternative valid evaluation metric and new
datasets are proposed in this paper to promote proper research and evaluation
in this area. Then, friends and foes are identified from existing methods as
technical components that are either beneficial or detrimental to deep learning
from noisy labeled examples, respectively, and this paper improves and combines
technical components from the friends category, including self-supervised
learning, a new warmup strategy, instance filtering, and label correction. The
resulting F&F method significantly outperforms existing methods on the proposed
nCIFAR datasets and the real-world Clothing1M dataset.
|
Signatures of superconductivity at elevated temperatures above $T_c$ in high
temperature superconductors have been observed near 1/8 hole doping for
photoexcitation with infrared or optical light polarized either in the
CuO$_2$-plane or along the $c$-axis. While the use of in-plane polarization has
been effective for incident energies aligned to specific phonons, $c$-axis
laser excitation in a broad range between 5 $\mu$m and 400 nm was found to
affect the superconducting dynamics in striped La$_{1.885}$Ba$_{0.115}$CuO$_4$,
with a maximum enhancement in the $1/\omega$ dependence to the conductivity
observed at 800 nm. This broad energy range, and specifically 800 nm, is not
resonant with any phonon modes, yet induced electronic excitations appear to be
connected to superconductivity at energy scales well above the typical gap
energies in the cuprates. A critical question is what can be responsible for
such an effect at 800 nm? Using time-dependent exact diagonalization, we
demonstrate that the holes in the CuO$_2$ plane can be photoexcited into the
charge reservoir layers at resonant wavelengths within a multi-band Hubbard
model. This orbitally selective photoinduced charge transfer effectively
changes the in-plane doping level, which can lead to an enhancement of $T_c$
near the 1/8 anomaly.
|
A dominating set of a graph $G$ is a set of vertices such that every vertex of
the graph is either in the set or adjacent to a vertex in the set. The
domination number of $G$ is the order of a minimum dominating set of $G$. The
$(t,r)$ broadcast domination is a
generalization of domination in which a set of broadcasting vertices emits
signals of strength $t$ that decrease by 1 as they traverse each edge, and we
require that every vertex in the graph receives a cumulative signal of at least
$r$ from its set of broadcasting neighbors. In this paper, we extend the study
of $(t,r)$ broadcast domination to directed graphs. Our main result explores
the interval of values obtained by considering the directed $(t,r)$ broadcast
domination numbers of all orientations of a graph $G$. In particular, we prove
that in the cases $r=1$ and $(t,r) = (2,2)$, for every integer value in this
interval, there exists an orientation $\vec{G}$ of $G$ which has directed
$(t,r)$ broadcast domination number equal to that value. We also investigate
directed $(t,r)$ broadcast domination on the finite grid graph, the star graph,
the infinite grid graph, and the infinite triangular lattice graph. We conclude
with some directions for future study.
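A small utility, following the definition stated above, for checking whether a candidate set of broadcasting vertices $(t,r)$-dominates a given orientation; it is a verification helper built on networkx shortest paths, not the constructive arguments of the paper.

```python
import networkx as nx

def tr_broadcast_dominates(G: nx.DiGraph, broadcasters, t: int, r: int) -> bool:
    """Check whether `broadcasters` (t,r)-broadcast dominate the digraph G.

    A broadcaster v emits a signal of strength t that decreases by 1 along
    each directed edge; vertex u receives max(t - d(v, u), 0) from v and must
    accumulate a cumulative signal of at least r.
    """
    received = {u: 0 for u in G.nodes}
    for v in broadcasters:
        for u, dist in nx.single_source_shortest_path_length(G, v).items():
            received[u] += max(t - dist, 0)
    return all(signal >= r for signal in received.values())

# Example: one orientation of the path on 4 vertices
G = nx.DiGraph([(0, 1), (1, 2), (3, 2)])
print(tr_broadcast_dominates(G, broadcasters={0, 3}, t=2, r=1))  # True
```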
|
Content feed, a type of product that recommends a sequence of items for users
to browse and engage with, has gained tremendous popularity among social media
platforms. In this paper, we propose to study the diversity problem in such a
scenario from an item sequence perspective using time series analysis
techniques. We derive a method called sliding spectrum decomposition (SSD) that
captures users' perception of diversity in browsing a long item sequence. We
also share our experiences in designing and implementing a suitable item
embedding method for accurate similarity measurement under long tail effect.
Combined together, they are now fully implemented and deployed in Xiaohongshu
App's production recommender system that serves the main Explore Feed product
for tens of millions of users every day. We demonstrate the effectiveness and
efficiency of the method through theoretical analysis, offline experiments and
online A/B tests.
|
Test automation is common in software development; often one tests repeatedly
to identify regressions. If the number of test cases is large, one may select a
subset and only use the most important test cases. The regression test
selection (RTS) could be automated and enhanced with Artificial Intelligence
(AI-RTS). This however could introduce ethical challenges. While such
challenges in AI are in general well studied, there is a gap with respect to
ethical AI-RTS. By exploring the literature and learning from our experiences
of developing an industry AI-RTS tool, we contribute to the literature by
identifying three challenges (assigning responsibility, bias in decision-making
and lack of participation) and three approaches (explicability, supervision and
diversity). Additionally, we provide a checklist for ethical AI-RTS to help
guide the decision-making of the stakeholders involved in the process.
|
In this note, we show that the convolution of a discrete symmetric
log-concave distribution and a discrete symmetric bimodal distribution can have
any strictly positive number of modes. A similar result is proved for smooth
distributions.
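A small numerical illustration of the statement above, assuming a Binomial$(20,1/2)$ pmf as the symmetric log-concave factor and a hand-built symmetric bimodal pmf; the particular distributions and the mode-counting rule (strict local maxima) are choices made only for this example.

```python
import numpy as np
from scipy.stats import binom

def count_modes(p, tol=1e-12):
    """Count strict local maxima of a pmf given on consecutive integers."""
    p = np.asarray(p)
    return sum(
        (i == 0 or p[i] > p[i - 1] + tol) and (i == len(p) - 1 or p[i] > p[i + 1] + tol)
        for i in range(len(p))
    )

# Symmetric log-concave pmf: Binomial(20, 1/2)
log_concave = binom.pmf(np.arange(21), 20, 0.5)

# Symmetric bimodal pmf on {0, ..., 10}: two peaks plus a small central mass
bimodal = np.zeros(11)
bimodal[[1, 9]] = 0.45
bimodal[5] = 0.10

convolved = np.convolve(log_concave, bimodal)
print("modes of the convolution:", count_modes(convolved))
```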
|
Blood glucose (BG) management is crucial for type 1 diabetes patients, making
reliable artificial pancreas and insulin infusion systems a necessity. In
recent years, deep learning techniques have been utilized for a
more accurate BG level prediction system. However, continuous glucose
monitoring (CGM) readings are susceptible to sensor errors. As a result,
inaccurate CGM readings would affect BG prediction and make it unreliable, even
if the most optimal machine learning model is used. In this work, we propose a
novel approach to predicting blood glucose level with a stacked Long short-term
memory (LSTM) based deep recurrent neural network (RNN) model considering
sensor fault. We use the Kalman smoothing technique for the correction of the
inaccurate CGM readings due to sensor error. For the OhioT1DM dataset,
containing eight weeks' data from six different patients, we achieve an average
RMSE of 6.45 and 17.24 mg/dl for 30 minutes and 60 minutes of prediction
horizon (PH), respectively. To the best of our knowledge, this is the leading
average prediction accuracy for the OhioT1DM dataset. Different physiological
information, e.g., Kalman smoothed CGM data, carbohydrates from the meal, bolus
insulin, and cumulative step counts in a fixed time interval, are crafted to
represent meaningful features used as input to the model. The goal of our
approach is to lower the difference between the predicted CGM values and the
fingerstick blood glucose readings - the ground truth. Our results indicate
that the proposed approach is feasible for more reliable BG forecasting that
might improve the performance of the artificial pancreas and insulin infusion
system for T1D management.
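A minimal sketch of the stacked-LSTM regressor described above; the window length, feature layout, layer sizes, and training settings are illustrative assumptions, and the Kalman smoothing of CGM readings and the full feature engineering of the paper are not reproduced here.

```python
import numpy as np
import tensorflow as tf

def build_stacked_lstm(window: int = 12, n_features: int = 4) -> tf.keras.Model:
    """Stacked LSTM regressor mapping a window of past samples (e.g., smoothed
    CGM, carbohydrates, bolus insulin, step counts) to the BG value at the
    prediction horizon. Layer sizes are illustrative, not the paper's."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_features)),
        tf.keras.layers.LSTM(64, return_sequences=True),   # first stacked layer
        tf.keras.layers.LSTM(32),                           # second stacked layer
        tf.keras.layers.Dense(1),                           # BG at horizon (mg/dl)
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

# Shapes only: each row of X is a feature window, y the BG value PH minutes ahead.
X = np.random.rand(256, 12, 4).astype("float32")
y = np.random.rand(256, 1).astype("float32")
build_stacked_lstm().fit(X, y, epochs=1, batch_size=32, verbose=0)
```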
|
The key challenge in multiple-object tracking task is temporal modeling of
the object under track. Existing tracking-by-detection methods adopt simple
heuristics, such as spatial or appearance similarity. Such methods, in spite of
their commonality, are overly simple and lack the ability to learn temporal
variations from data in an end-to-end manner. In this paper, we present MOTR, a
fully end-to-end multiple-object tracking framework. It learns to model the
long-range temporal variation of the objects. It performs temporal association
implicitly and avoids previous explicit heuristics. Built upon DETR, MOTR
introduces the concept of "track query". Each track query models the entire
track of an object. It is transferred and updated frame-by-frame to perform
iterative predictions in a seamless manner. Tracklet-aware label assignment is
proposed for one-to-one assignment between track queries and object tracks.
Temporal aggregation network together with collective average loss is further
proposed to enhance the long-range temporal relation. Experimental results show
that MOTR achieves competitive performance and can serve as a strong
Transformer-based baseline for future research. Code is available at
\url{https://github.com/megvii-model/MOTR}.
|
The stiffness of the Hodgkin-Huxley (HH) equations during an action potential
(spike) limits the use of large time steps. We observe that the neurons can be
evolved independently between spikes, $i.e.,$ different neurons can be evolved
with different methods and different time steps. This observation motivates us
to design fast algorithms to raise efficiency. We present an adaptive method,
an exponential time differencing (ETD) method and a library-based method to
deal with the stiff period. All the methods can use time steps one order of
magnitude larger than the regular Runge-Kutta methods to raise efficiency while
achieving precise statistical properties of the original HH neurons like the
largest Lyapunov exponent and mean firing rate. We point out that the ETD and
library methods can stably achieve speedups of up to 8 and 10 times,
respectively.
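The paper's ETD and library methods apply to the full HH system; as a minimal illustration of why exponential integrators tolerate large steps during the stiff period, the sketch below applies a first-order exponential (exponential-Euler) update to a single HH gating variable with the membrane potential held fixed over the step. The rate functions are the standard HH forms; the voltage sequence is only a caricature of a spike.

```python
import numpy as np

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)

def exp_euler_gate(m, V, dt, alpha, beta):
    """One exponential-Euler (first-order ETD) step for a gating variable
    dm/dt = alpha(V)(1 - m) - beta(V) m, with V held fixed over the step.
    The linear ODE is solved exactly, so the update remains stable for time
    steps much larger than those required by explicit Runge-Kutta methods."""
    a, b = alpha(V), beta(V)
    tau = 1.0 / (a + b)          # relaxation time of the gate
    m_inf = a * tau              # steady state at this voltage
    return m_inf + (m - m_inf) * np.exp(-dt / tau)

m = 0.05
for V in (-65.0, -30.0, 20.0, -65.0):     # crude caricature of a spike
    m = exp_euler_gate(m, V, dt=0.1, alpha=alpha_m, beta=beta_m)
    print(f"V = {V:6.1f} mV  ->  m = {m:.4f}")
```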
|
Singer voice classification is a meaningful task in the digital era. With a
huge number of songs today, identifying a singer is very helpful for music
information retrieval, music properties indexing, and so on. In this paper, we
propose a new method to identify the singer's name based on analysis of
Vietnamese popular music. We employ the use of vocal segment detection and
singing voice separation as the pre-processing steps. The purpose of these
steps is to extract the singer's voice from the mixture sound. In order to
build a singer classifier, we propose a neural network architecture working
with Mel Frequency Cepstral Coefficients (MFCCs) as input features extracted
from said vocals. To verify the accuracy of our methods, we evaluate on a dataset of 300
Vietnamese songs from 18 famous singers. We achieve an accuracy of 92.84% with
5-fold stratified cross-validation, the best result compared to other methods
on the same data set.
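A minimal sketch of the feature and classifier stages described above: summary MFCC statistics are extracted from an already-separated vocal track and fed to a small dense network. The MFCC settings, feature summary, and architecture are illustrative assumptions; vocal segment detection and singing voice separation are not shown.

```python
import numpy as np
import librosa
import tensorflow as tf

def mfcc_features(vocal_path: str, n_mfcc: int = 20) -> np.ndarray:
    """Mean and std of MFCCs over time for one (separated) vocal track."""
    y, sr = librosa.load(vocal_path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def build_singer_classifier(n_features: int, n_singers: int = 18) -> tf.keras.Model:
    """Small dense classifier over MFCC summary features (illustrative sizes)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_singers, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```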
|
Payment channel networks, such as Bitcoin's Lightning Network, promise to
improve the scalability of blockchain systems by processing the majority of
transactions off-chain. Due to the design, the positioning of nodes in the
network topology is a highly influential factor regarding the experienced
performance, costs, and fee revenue of network participants. As a consequence,
today's Lightning Network is built around a small number of highly-connected
hubs. Recent literature shows the centralizing tendencies to be
incentive-compatible and at the same time detrimental to security and privacy.
The choice of attachment strategies therefore becomes a crucial factor for the
future of such systems. In this paper, we provide an empirical study on the
(local and global) impact of various attachment strategies for payment channel
networks. To this end, we introduce candidate strategies from the field of
graph theory and analyze them with respect to their computational complexity as
well as their repercussions for end users and service providers. Moreover, we
evaluate their long-term impact on the network topology.
|
Spherically, plane, or hyperbolically symmetric spacetimes with an additional
hypersurface orthogonal Killing vector are often called ``static'' spacetimes
even if they contain regions where the Killing vector is non-timelike. It seems
to be widely believed that an energy-momentum tensor for a matter field
compatible with these spacetimes in general relativity is of the Hawking-Ellis
type I everywhere. We show in arbitrary $n(\ge 3)$ dimensions that, contrary to
popular belief, a matter field on a Killing horizon is not necessarily of type
I but can be of type II. Such a type-II matter field on a Killing horizon is
realized in the Gibbons-Maeda-Garfinkle-Horowitz-Strominger black hole in the
Einstein-Maxwell-dilaton system and may be interpreted as a mixture of a
particular anisotropic fluid and a null dust fluid.
|
Principal component analysis (PCA) defines a reduced space described by PC
axes for a given multidimensional-data sequence to capture the variations of
the data. In practice, we need multiple data sequences that accurately obey
individual probability distributions, and for a fair comparison of the sequences
we need PC axes that are common to the multiple sequences yet properly capture
these multiple distributions. For these requirements, we present individual
ergodic samplings for these sequences and provide special reweighting for
recovering the target distributions.
|
In many multiagent environments, a designer has some, but limited control
over the game being played. In this paper, we formalize this by considering
incompletely specified games, in which some entries of the payoff matrices can
be chosen from a specified set. We show that it is NP-hard for the designer to
make these choices optimally, even in zero-sum games. In fact, it is already
intractable to decide whether a given action is (potentially or necessarily)
played in equilibrium. We also consider incompletely specified symmetric games
in which all completions are required to be symmetric. Here, hardness holds
even in weak tournament games (symmetric zero-sum games whose entries are all
-1, 0, or 1) and in tournament games (symmetric zero-sum games whose
non-diagonal entries are all -1 or 1). The latter result settles the complexity
of the possible and necessary winner problems for a social-choice-theoretic
solution concept known as the bipartisan set. We finally give a mixed-integer
linear programming formulation for weak tournament games and evaluate it
experimentally.
|
Introducing the notion of extended Schr\"odinger spaces, we define the
criticality and subcriticality of Schr\"odinger forms in the same manner as the
recurrence and transience of Dirichlet forms, and give a sufficient condition
for the subcriticality of Schr\"odinger forms in terms of the bottom of the
spectrum. We define a subclass of Hardy potentials and prove that Schr\"odinger
forms with potentials in this subclass are always critical, which leads us to
an optimal Hardy-type inequality. We show that this definition of criticality
and subcriticality is equivalent to the existence of an excessive function with
respect to the Schr\"odinger semigroup such that the Dirichlet form generated
through the $h$-transform is recurrent or transient, respectively. As an
application, we can show the recurrence or transience of a family of Dirichlet
forms by showing the criticality or subcriticality of Schr\"odinger forms, and
the other way around through the $h$-transform. We give such an example with
fractional Schr\"odinger operators with Hardy potentials.
|
A splitting BIBD is a type of combinatorial design that can be used to
construct splitting authentication codes with good properties. In this paper we
show that a design-theoretic approach is useful in the analysis of more general
splitting authentication codes. Motivated by the study of algebraic
manipulation detection (AMD) codes, we define the concept of a group-generated
splitting authentication code. We show that all group-generated authentication
codes have perfect secrecy, which allows us to demonstrate that algebraic
manipulation detection codes can be considered to be a special case of an
authentication code with perfect secrecy.
We also investigate splitting BIBDs that can be "equitably ordered". These
splitting BIBDs yield authentication codes with splitting that also have
perfect secrecy. We show that, while group-generated BIBDs are inherently
equitably ordered, the concept is applicable to more general splitting BIBDs.
For various pairs $(k,c)$, we determine necessary and sufficient (or almost
sufficient) conditions for the existence of $(v, k \times c,1)$-splitting BIBDs
that can be equitably ordered. The pairs for which we can solve this problem
are $(k,c) = (3,2), (4,2), (3,3)$ and $(3,4)$, as well as all cases with $k =
2$.
|
Introduction: Mobile apps, through artificial vision, are capable of
recognizing plant species in real time. However, the existing species
recognition apps do not take into consideration the wide variety of endemic and
native (Chilean) species, which leads to wrong species predictions. This study
introduces the development of a Chilean species dataset and an optimized
classification model implemented in a mobile app. Method: the dataset was
built by putting together pictures of several species captured in the field and
by selecting some pictures available from other datasets online.
Convolutional neural networks were used to develop the image
prediction models. The networks were trained by performing a sensitivity
analysis, validating with k-fold cross validation and performing tests with
different hyper-parameters, optimizers, convolutional layers, and learning
rates in order to identify and choose the best models and then put them
together in one classification model. Results: The final dataset comprised 46
species, including native, endemic, and exotic species from Chile, with 6120
training pictures and 655 testing pictures. The best models
were implemented on a mobile app, obtaining a 95% correct prediction rate with
respect to the test set. Conclusion: The app developed in this study is
capable of classifying species with a high level of accuracy, depending on the
state of the art of artificial vision, and it can also show relevant
information related to the classified species.
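A minimal sketch of the training and validation setup described above, assuming a transfer-learning backbone (MobileNetV2) and 5-fold cross-validation; the backbone choice, image size, and training settings are illustrative and not the configurations compared in the study.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

N_SPECIES = 46
IMG_SHAPE = (224, 224, 3)

def build_species_model() -> tf.keras.Model:
    """Transfer-learning classifier; the backbone choice is illustrative."""
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SHAPE, include_top=False, weights="imagenet", pooling="avg")
    backbone.trainable = False
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(N_SPECIES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def cross_validate(images: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """k-fold cross-validation over the training pictures."""
    scores = []
    for train_idx, val_idx in StratifiedKFold(n_splits=k, shuffle=True).split(images, labels):
        model = build_species_model()
        model.fit(images[train_idx], labels[train_idx], epochs=5, verbose=0)
        _, acc = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
        scores.append(acc)
    return float(np.mean(scores))
```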
|
We study quantum effects of the vacuum light-matter interaction in materials
embedded in optical cavities. We focus on the electronic response of a
two-dimensional semiconductor placed inside a planar cavity. By using a
diagrammatic expansion of the electron-photon interaction, we describe
signatures of light-matter hybridization characterized by large asymmetric
shifts of the spectral weight at resonant frequencies. We follow the evolution
of the light-dressing from the cavity to the free-space limit. In the cavity
limit, light-matter hybridization results in a modification of the optical gap
with sizeable spectral weight appearing below the bare gap edge. In the limit
of large cavities, we find a residual redistribution of spectral weight which
becomes independent of the distance between the two mirrors. We show that the
photon dressing of the electronic response can be fully explained by using a
classical description of light. The classical description is found to hold up
to a strong coupling regime of the light-matter interaction highlighted by the
large modification of the photon spectra with respect to the empty cavity. We
show that, despite the strong coupling, quantum corrections are negligibly
small and weakly dependent on the cavity confinement. As a consequence, in
contrast to the optical gap, the single-particle electronic band gap is not
appreciably modified by the strong coupling. Our results show that quantum
corrections are dominated by off-resonant photon modes at high energy. As such,
cavity confinement can hardly be seen as a knob to control the quantum effects
of the light-matter interaction in vacuum.
|
Recent discoveries of charge order and electronic nematic order in the
iron-based superconductors and cuprates have pointed towards the possibility of
nematic and charge fluctuations playing a role in the enhancement of
superconductivity. The Ba1-xSrxNi2As2 system, closely related in structure to
the BaFe2As2 system, has recently been shown to exhibit both types of ordering
without the presence of any magnetic order. We report single crystal X-ray
diffraction, resistance transport measurements, and magnetization of \BaSrLate,
providing evidence that the previously reported incommensurate charge order
with wavevector $(0,0.28,0)_{tet}$ in the tetragonal state of BaNi2As2 vanishes by
this concentration of Sr substitution together with nematic order. Our
measurements suggest that the nematic and incommensurate charge orders are
closely tied in the tetragonal state, and show that the $(0,0.33,0)_{tri}$
charge ordering in the triclinic phase of BaNi2As2 evolves to become
$(0,0.5,0)_{tri}$ charge ordering at $x$=0.65 before vanishing at $x$=0.71.
|
We study the one-level density for families of L-functions associated with
cubic Dirichlet characters defined over the Eisenstein field. We show that the
family of $L$-functions associated with the cubic residue symbols $\chi_n$ with
$n$ square-free and congruent to 1 modulo 9 satisfies the Katz-Sarnak
conjecture for all test functions whose Fourier transforms are supported in
$(-13/11, 13/11)$, under GRH. This is the first result extending the support
outside the \emph{trivial range} $(-1, 1)$ for a family of cubic L-functions.
This implies that a positive density of the L-functions associated with these
characters do not vanish at the central point $s=1/2$. A key ingredient in our
proof is a bound on an average of generalized cubic Gauss sums at prime
arguments, whose proof is based on the work of Heath-Brown and Patterson.
|
To inhibit the spread of rumorous information and its severe consequences,
traditional fact checking aims at retrieving relevant evidence to verify the
veracity of a given claim. Fact checking methods typically use knowledge graphs
(KGs) as external repositories and develop reasoning mechanisms to retrieve
evidence for verifying the triple claim. However, existing methods only focus
on verifying a single claim. As real-world rumorous information is more complex
and a textual statement is often composed of multiple clauses (i.e. represented
as multiple claims instead of a single one), multi-claim fact checking is not
only necessary but more important for practical applications. Although previous
methods for verifying a single triple can be applied repeatedly to verify
multiple triples one by one, they ignore the contextual information implied in
a multi-claim statement and cannot learn the rich semantic information in
the statement as a whole. In this paper, we propose an end-to-end knowledge
enhanced learning and verification method for multi-claim fact checking. Our
method consists of two modules, KG-based learning enhancement and multi-claim
semantic composition. To fully utilize the contextual information, the KG-based
learning enhancement module learns the dynamic context-specific representations
via selectively aggregating relevant attributes of entities. To capture the
compositional semantics of multiple triples, the multi-claim semantic
composition module constructs the graph structure to model claim-level
interactions, and integrates global and salient local semantics with multi-head
attention. Experimental results on a real-world dataset and two benchmark
datasets show the effectiveness of our method for multi-claim fact checking
over KG.
|
Ferroelectric tunneling junctions (FTJ) are considered to be the
intrinsically most energy efficient memristors. In this work, specific
electrical features of ferroelectric hafnium-zirconium oxide based FTJ devices
are investigated. Moreover, the impact on the design of FTJ-based circuits for
edge computing applications is discussed by means of two example circuits.
|
The number of non-isomorphic cubic fields L sharing a common discriminant
d(L) = d is called the multiplicity m = m(d) of d. For an assigned value of d,
these fields are collected in a multiplet M(d) = (L(1) ,..., L(m)). In this
paper, the information in all existing tables of totally real cubic number
fields L with positive discriminants d(L) < 10000000 is extended by computing
the differential principal factorization types tau(L) in (alpha1, alpha2,
alpha3, beta1, beta2, gamma, delta1, delta2, epsilon) of the members L of each
multiplet M(d) of non-cyclic fields, a new kind of arithmetical invariants
which provide succinct information about ambiguous principal ideals and
capitulation in the normal closures N of non-Galois cubic fields L. The
classification is arranged with respect to increasing 3-class rank of the
quadratic subfields K of the S3-fields N, and to ascending number of prime
divisors of the conductor f of N/K. The Scholz conjecture concerning the
distinguished index of subfield units (U(N) : U(0)) = 1 for ramified extensions
N/K with conductor f > 1 is refined and verified.
|
One of the major challenges for low-rank multi-fidelity (MF) approaches is
the assumption that low-fidelity (LF) and high-fidelity (HF) models admit
"similar" low-rank kernel representations. Low-rank MF methods have
traditionally attempted to exploit low-rank representations of linear kernels,
which are kernel functions of the form $K(u,v) = v^T u$ for vectors $u$ and
$v$. However, such linear kernels may not be able to capture low-rank behavior,
and they may admit LF and HF kernels that are not similar. Such a situation
renders a naive approach to low-rank MF procedures ineffective. In this paper,
we propose a novel approach for the selection of a near-optimal kernel function
for use in low-rank MF methods. The proposed framework is a two-step strategy
wherein: (1) hyperparameters of a library of kernel functions are optimized,
and (2) a particular combination of the optimized kernels is selected, through
either a convex mixture (Additive Kernels) or through a data-driven
optimization (Adaptive Kernels). The two resulting methods for this generalized
framework both utilize only the available inexpensive low-fidelity data and
thus no evaluation of high-fidelity simulation model is needed until a kernel
is chosen. These proposed approaches are tested on five non-trivial problems
including multi-fidelity surrogate modeling for one- and two-species molecular
systems, gravitational many-body problem, associating polymer networks,
plasmonic nano-particle arrays, and an incompressible flow in channels with
stenosis. The results for these numerical experiments demonstrate the numerical
stability and efficiency of both proposed kernel function selection procedures, as
well as high accuracy of their resultant predictive models for estimation of
quantities of interest. Comparisons against standard linear kernel procedures
also demonstrate increased accuracy of the optimized kernel approaches.
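A minimal sketch of the Additive Kernels idea described above, using only low-fidelity data: each kernel in a small library has its hyperparameters optimized first, and a convex mixture of the optimized kernels is then selected by log-marginal likelihood. The two-kernel library, the Gaussian-process surrogate, and the grid over mixture weights are illustrative assumptions standing in for the paper's low-rank multi-fidelity machinery.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, ConstantKernel

def additive_kernel_from_lf(x_lf: np.ndarray, y_lf: np.ndarray):
    """Step 1: optimize each library kernel's hyperparameters on the cheap
    low-fidelity data. Step 2: choose a convex mixture of the optimized
    kernels by log-marginal likelihood on the same data. No high-fidelity
    evaluations are used."""
    library = [RBF(length_scale=1.0), Matern(length_scale=1.0, nu=2.5)]
    optimized = [
        GaussianProcessRegressor(kernel=k, normalize_y=True).fit(x_lf, y_lf).kernel_
        for k in library
    ]
    best_score, best_kernel = -np.inf, None
    for w in np.linspace(0.1, 0.9, 9):            # convex weights on the 1-simplex
        mixture = (ConstantKernel(w) * optimized[0]
                   + ConstantKernel(1.0 - w) * optimized[1])
        gp = GaussianProcessRegressor(kernel=mixture, optimizer=None,
                                      normalize_y=True).fit(x_lf, y_lf)
        if gp.log_marginal_likelihood_value_ > best_score:
            best_score = gp.log_marginal_likelihood_value_
            best_kernel = mixture
    return best_kernel

x_lf = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
y_lf = np.sin(6.0 * x_lf).ravel() + 0.05 * np.random.randn(40)
print(additive_kernel_from_lf(x_lf, y_lf))
```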
|
The investigation of the energy frontier in physics requires novel concepts
for future colliders. The idea of a muon collider is very appealing since it
would allow the study of particle collisions at up to tens of TeV energy, while
offering a cleaner experimental environment with respect to hadronic colliders.
One key element in the muon collider design is the low-emittance muon
production. Recently, the Low EMittance Muon Accelerator (LEMMA) collaboration
has explored the muon pair production close to its kinematic threshold by
annihilating 45 GeV positrons with electrons in a low Z material target. In
this configuration, muons are emerging from the target with a naturally
low emittance. In this paper we describe the performance of a system to study
this production mechanism, which consists of several segmented absorbers with
alternating active layers composed of fast Cherenkov detectors together with a
muon identification technique based on this detector. Passive layers were made
of tungsten. We collected data corresponding to muon and electron beams
produced at the H2 line in the North Area of the European Organization for
Nuclear Research (CERN) in September 2018.
|
We explore equilibrium solutions of spherically symmetric boson stars in the
Palatini formulation of $f(\mathcal{R})$ gravity. We account for the
modifications introduced in the gravitational sector by using a recently
established correspondence between modified gravity with scalar matter and
general relativity with modified scalar matter. We focus on the quadratic
theory $f(\mathcal{R})=R+\xi R^2$ and compare its solutions with those found in
general relativity, exploring both positive and negative values of the coupling
parameter $\xi$. As matter source, a complex, massive scalar field with and
without self-interaction terms is considered. Our results show that the
existence curves of boson stars in Palatini $f(\mathcal{R})$ gravity are fairly
similar to those found in general relativity. Major differences are observed
for negative values of the coupling parameter which results in a repulsive
gravitational component for high enough scalar field density distributions.
Adding self-interactions makes the degeneracy between $f(\mathcal{R})$ and
general relativity even more pronounced, leaving very little room for
observational discrimination between the two theories.
|
With the widespread use of toxic language online, platforms are increasingly
using automated systems that leverage advances in natural language processing
to automatically flag and remove toxic comments. However, most automated
systems -- when detecting and moderating toxic language -- do not provide
feedback to their users, let alone provide an avenue of recourse for these
users to make actionable changes. We present our work, RECAST, an interactive,
open-sourced web tool for visualizing these models' toxic predictions, while
providing alternative suggestions for flagged toxic language. Our work also
provides users with a new path of recourse when using these automated
moderation tools. RECAST highlights text responsible for classifying toxicity,
and allows users to interactively substitute potentially toxic phrases with
neutral alternatives. We examined the effect of RECAST via two large-scale user
evaluations, and found that RECAST was highly effective at helping users reduce
toxicity as detected through the model. Users also gained a stronger
understanding of the underlying toxicity criterion used by black-box models,
enabling transparency and recourse. In addition, we found that when users focus
on optimizing language for these models instead of their own judgement (which
is the implied incentive and goal of deploying automated models), these models
cease to be effective classifiers of toxicity compared to human annotations.
This opens a discussion for how toxicity detection models work and should work,
and their effect on the future of online discourse.
|
The production of $\Lambda$ baryons and ${\rm K}^{0}_{\rm S}$ mesons (${\rm
V}^{0}$ particles) was measured in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5$
TeV and pp collisions at $\sqrt{s} = 7$ TeV with ALICE at the LHC. The
production of these strange particles is studied separately for particles
associated with hard scatterings and the underlying event to shed light on the
baryon-to-meson ratio enhancement observed at intermediate transverse momentum
($p_{\rm T}$) in high multiplicity pp and p-Pb collisions. Hard scatterings are
selected on an event-by-event basis with jets reconstructed with the
anti-$k_{\rm T}$ algorithm using charged particles. The production of strange
particles associated with jets with $p_{\rm T,\;jet}^{\rm ch}>10$ GeV/$c$ is
reported as a function of $p_{\rm T}$ in both systems; its dependence on
$p_{\rm T}$ for jets with $p_{\rm T,\;jet}^{\rm ch}>20$ GeV/$c$ and on the
angular distance from the jet axis, $R({\rm V}^{0},\;{\rm jet})$, for jets with
$p_{\rm T,\;jet}^{\rm ch} > 10$ GeV/$c$ is reported in p-Pb collisions. The results
are compared with the strange particle production in the underlying event. The
$\Lambda/{\rm K}^{0}_{\rm S}$ ratio associated with jets in p-Pb collisions for
$R({\rm V}^{0},\;{\rm jet})<0.4$ is consistent with the ratio measured in pp
collisions and with the expectation of jets fragmenting in vacuum given by the
PYTHIA event generator.
|