The Ly$\alpha$ emission line is one of the most promising probes of cosmic
reionisation but isolating the signature of a change in the ionisation state of
the IGM is challenging because of intrinsic evolution and internal radiation
transfer effects. We present the first study of the evolution of Ly$\alpha$
emitters (LAE) during the epoch of reionisation based on a full
radiation-hydrodynamics cosmological simulation that is able to capture both
the large-scale process of reionisation and the small-scale properties of
galaxies. We predict the Ly$\alpha$ emission of galaxies in the $10^3$ cMpc$^3$
SPHINX simulation at $6\leq z\leq9$ by computing the full Ly$\alpha$ radiation
transfer from ISM to IGM scales. SPHINX is able to reproduce many observational
constraints such as the UV/Ly$\alpha$ luminosity functions and stellar mass
functions at $z\geq6$ for the dynamical range probed by our simulation
($M_{\rm 1500}\gtrsim-18$, $L_{\rm Ly\alpha}\lesssim10^{42}$ erg/s,
$M_{\star}\lesssim10^9$ M$_{\odot}$). As intrinsic Ly$\alpha$ emission and
internal Ly$\alpha$ escape fractions barely evolve from $z=6$ to 9, the
observed suppression of Ly$\alpha$ luminosities with increasing redshift is
fully attributed to IGM absorption. For most observable galaxies ($M_{\rm
1500}\lesssim-16$), the Ly$\alpha$ line profiles are slightly shifted to the
red by internal radiative transfer effects, which mitigates the effect of
IGM absorption. Overall, the enhanced Ly$\alpha$ suppression during
reionisation traces the IGM neutral fraction $x_{\rm HI}$ well but the
predicted amplitude of this reduction is a strong function of the Ly$\alpha$
peak shift, which is set at ISM/CGM scales. We find that a large number of LAEs
could be detectable in very deep surveys during reionisation when $x_{\rm HI}$
is still $\approx 50\%$.
|
We present our 500 pc distance-limited study of stellar flares using the Dark
Energy Camera as part of the Deeper, Wider, Faster Program. The data were
collected via continuous 20-second cadence g-band imaging, and we identify
19,914 sources with precise distances from Gaia DR2 within twelve ~3
square-degree fields over a range of Galactic latitudes. An average of ~74
minutes is spent on each field per visit. All light curves were assessed with
a novel unsupervised machine learning technique designed for anomaly
detection. We identify 96 flare events occurring across 80 stars, the majority
of which are M dwarfs. Integrated flare energies range from $\sim
10^{31}-10^{37}$ erg, with flare energy increasing with distance from the
Galactic plane, indicative of stellar ageing leading to less frequent yet more
energetic flare events. In agreement with previous studies, we observe an
increase in flaring fraction from M0 to M6 spectral types. Furthermore, we
find a decrease in the flaring fraction of stars as vertical distance from the
Galactic plane increases, with a steep decline present around ~100 pc. We find
that ~70% of identified flares occur on short timescales of ~8 minutes.
Finally, we present the associated flare rates, finding a volumetric rate of
$(2.9 \pm 0.3) \times 10^{-6}$ flares pc$^{-3}$ hr$^{-1}$.
|
We combine NLO predictions with full top-quark mass dependence with
approximate NNLO predictions for Higgs-boson pair production in gluon fusion,
including the possibility to vary coupling parameters within a non-linear
Effective Field Theory framework containing five anomalous couplings for this
process. We study the impact of the anomalous couplings on various observables,
and present Higgs-pair invariant-mass distributions at seven benchmark points
characterising different $m_{hh}$ shape types. We also provide numerical
coefficients for the approximate NNLO cross section as a function of the
anomalous couplings at $\sqrt{s}=14$ TeV.
|
Detection and classification of ships based on their silhouette profiles in
natural imagery is an important task in computer vision. The problem arises in
a variety of applications, including security, traffic control, and military
uses, each of which requires specific processing. In this paper, we present a
new method based on the "bag of words" (BoW) model, in which the words are
features obtained from pre-trained deep convolutional network models. Three
VGG models, which provide high accuracy in object recognition, are utilized.
The image regions selected as initial proposals are derived from a greedy
algorithm applied to the key points generated by the Scale Invariant Feature
Transform (SIFT) method. Using deep features in the BoW framework yields a
clear improvement in the recognition and classification of ships. We obtain an
accuracy of 91.8% in ship classification, an improvement of about 5% over
previous methods.
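As a rough illustration of the pipeline described above, the sketch below
builds bag-of-words histograms from deep features: SIFT key points suggest
candidate regions (here cropped naively around key points rather than with the
paper's greedy grouping, which is an assumption of this sketch), a single
pre-trained VGG16 extracts one descriptor per region, and K-means quantizes
the descriptors into visual words. All parameter values and helper names are
illustrative, not taken from the paper.

```python
import cv2
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

# One pre-trained VGG16 as the deep feature extractor (the paper uses three
# VGG variants; one is enough to illustrate the pipeline).
vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False, pooling="avg")
preprocess = tf.keras.applications.vgg16.preprocess_input
sift = cv2.SIFT_create()

def region_proposals(gray, size=64, max_regions=50):
    """Fixed-size windows around the strongest SIFT key points (a naive
    stand-in for the paper's greedy key-point grouping)."""
    kps = sorted(sift.detect(gray, None), key=lambda k: -k.response)[:max_regions]
    h, w = gray.shape
    boxes = []
    for kp in kps:
        x0 = int(np.clip(kp.pt[0] - size // 2, 0, max(w - size, 0)))
        y0 = int(np.clip(kp.pt[1] - size // 2, 0, max(h - size, 0)))
        boxes.append((x0, y0))
    return boxes, size

def deep_descriptors(image_bgr):
    """One 512-d VGG descriptor per proposed region."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes, size = region_proposals(gray)
    crops = [cv2.resize(image_bgr[y0:y0 + size, x0:x0 + size], (224, 224))
             for x0, y0 in boxes]
    rgb = np.array(crops, dtype=np.float32)[..., ::-1]  # BGR -> RGB for Keras
    return vgg.predict(preprocess(rgb), verbose=0)

def bow_histogram(descriptors, kmeans):
    """Quantize region descriptors into visual words; return a normalized histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Usage sketch: fit the vocabulary on training descriptors, then feed the BoW
# histograms to any standard classifier (e.g. a linear SVM).
# all_desc = np.vstack([deep_descriptors(cv2.imread(p)) for p in train_paths])
# kmeans = KMeans(n_clusters=100, n_init=10).fit(all_desc)
# features = [bow_histogram(deep_descriptors(cv2.imread(p)), kmeans) for p in train_paths]
```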
|
Recent data on the nuclear modification of W and Z boson production measured
by the ATLAS collaboration in PbPb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
show an enhancement in peripheral collisions, seemingly contradicting
predictions of the Glauber model. The data were previously explained by arguing
that the nucleon-nucleon cross section may be shadowed in nucleus-nucleus
collisions, and hence suppressed compared to the proton-proton cross section at
the same collision energy. This interpretation has quite significant
consequences for the understanding of heavy-ion data, in particular in the
context of the Glauber model. Instead, we provide an alternative explanation of
the data by assuming a mild bias in the centrality determination of the
measurement, of the size of the related systematic uncertainty. Using this
assumption, we show that the data are in agreement with
theoretical calculations using nuclear parton distribution functions. Finally,
we speculate that the centrality dependence of the W$^-$/W$^{+}$ ratio may
point to the relevance of a larger skin thickness of the Pb nucleus, which, if
present, would result in a few percent larger PbPb cross section than currently
accounted for in the Glauber model and may hence be the root of the centrality
bias.
|
A complete understanding of solar radio bursts requires developing numerical
techniques which can connect large-scale activities with kinetic plasma
processes. As a starting point, this study presents a numerical scheme
combining three different techniques: (1) extrapolation of magnetic field
overlying a specific active region in order to derive the background field, (2)
guiding-center simulation of dynamics of millions of particles within a
selected loop to reveal the integral velocity distribution function (VDF)
around certain sections of the loop, and (3) particle-in-cell (PIC) simulation
of kinetic instabilities driven by energetic electrons, initialized with the
obtained distributions. Scattering effects at various levels (weak, moderate,
and strong) due to wave/turbulence-particle interaction are considered using
prescribed time scales of scattering. It was found that the obtained VDFs
contain strip-like and loss-cone features with positive gradient, and both
features are capable of driving electron cyclotron maser emission (ECME), which
is a viable radiation mechanism for some solar radio bursts, in particular,
solar radio spikes. The strip-like feature is important in driving the harmonic
X mode, while the loss-cone feature can be important in driving the fundamental
X mode. In the weak-scattering case, the rate of energy conversion from
energetic electrons to X2 can reach up to $\sim 2.9 \times 10^{-3}\,E_{k0}$,
where $E_{k0}$ is the initial kinetic energy of the energetic electrons. The
study demonstrates a novel way of exciting the X2 mode in the corona during
solar flares, and provides new insight into how escaping radiation can be
generated within a coronal loop during solar flares.
|
We show that the QRAT simulation algorithm of $\forall$Exp+Res from [B. Kiesl
and M. Seidl, 2019] cannot be lifted to IR-calc.
|
William Cranch Bond, director of the Harvard College Observatory in mid-19th
century, carried out detailed sunspot observations during the period 1847-1849.
We highlight that Bond was the observer with the highest daily number of
sunspot groups observed in Solar Cycle 9, recording 18 groups on 26 December
1848 according to the current sunspot group database. However, we have
detected significant mistakes in these counts due to the use of sunspot
position tables instead of solar drawings. Therefore, we have revisited the
sunspot observations made by Bond, establishing a new group count. Our new
counts of the sunspot groups from Bond's drawings indicate that solar activity
was previously overestimated. Moreover, after this recounting, Bond would not
be the astronomer who recorded the highest daily group number for Solar Cycle
9; that record would belong to Schmidt, with 16 groups on 14 February 1849. We
have also indicated the new
highest annual group numbers recorded by any observer for the period 1847-1849
in order to correct those values applied in the "brightest star" method, which
is used as a rough indicator of the solar activity level. Furthermore, a
comparison between Bond's sunspot records and the sunspot observations made by
Schwabe and Wolf is shown. We conclude that the statistics of Wolf and Bond are
similar regarding the group count. Additionally, Schwabe was able to observe
smaller groups than Bond.
|
Implementing embedded neural network processing at the edge requires
efficient hardware acceleration that couples high computational performance
with low power consumption. Driven by the rapid evolution of network
architectures and their algorithmic features, accelerator designs are
constantly updated and improved. To evaluate and compare hardware design
choices, designers can refer to a myriad of accelerator implementations in the
literature. Surveys provide an overview of these works but are often limited to
system-level and benchmark-specific performance metrics, making it difficult to
quantitatively compare the individual effect of each utilized optimization
technique. This complicates the evaluation of optimizations for new accelerator
designs, slowing down research progress. This work provides a survey of
neural network accelerator optimization approaches that have been used in
recent works and reports their individual effects on edge processing
performance. It presents the list of optimizations and their quantitative
effects as a construction kit, allowing designers to assess each building
block separately. Reported optimizations range from up to 10'000x memory
savings to 33x energy reductions, providing chip designers with an overview of
design choices for implementing efficient low-power neural network
accelerators.
|
The speed-accuracy Pareto curve of object detection systems has advanced
through a combination of better model architectures and improved training and
inference methods. In this paper, we methodically evaluate a variety of these
techniques to understand where most of the improvements in modern detection
systems come from. We benchmark these improvements on the vanilla ResNet-FPN
backbone with RetinaNet and RCNN detectors. The vanilla detectors are improved
by 7.7% in accuracy while being 30% faster. We further provide simple scaling
strategies to generate a family of models that form two Pareto curves, named
RetinaNet-RS and Cascade RCNN-RS. These simple rescaled detectors explore the
speed-accuracy trade-off between the one-stage RetinaNet detectors and
two-stage RCNN detectors. Our largest Cascade RCNN-RS models achieve 52.9% AP
with a ResNet152-FPN backbone and 53.6% AP with a SpineNet143L backbone.
Finally, we show that the ResNet architecture, with three minor architectural
changes, outperforms EfficientNet as the backbone for object detection and
instance segmentation systems.
|
We are concerned with the linear stability of the Couette flow for the
non-isentropic compressible Navier-Stokes equations with vanished shear
viscosity in a domain $\mathbb{T}\times \mathbb{R}$. For general initial data
in Sobolev spaces, we obtain a Lyapunov-type instability of the density, the
temperature, and the compressible part of the velocity field, and we also
obtain inviscid damping for the incompressible part of the velocity field.
Moreover, if the initial density, the initial temperature, and the
incompressible part of the initial velocity field satisfy a certain relation,
we can prove the enhanced dissipation phenomenon for the velocity field.
|
In 2021, Gutman introduced the Sombor index, a new degree-based topological
molecular descriptor. The Sombor index of a graph $G$
is defined as $SO(G) =\sum_{uv\in E(G)}\sqrt{d^2_G(u)+d^2_G(v)}$, where
$d_G(v)$ is the degree of the vertex $v$ in $G$. Let $\mathscr{T}_{n,m}$ and
$\mathscr{U}_{n,m}$ be the set of trees and unicyclic graphs on $n$ vertices
with fixed matching number $m$, respectively. In this paper, the tree and the
unicyclic graph with the maximum Sombor index are determined among
$\mathscr{T}_{n,m}$ and $\mathscr{U}_{n,m}$, respectively.
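Since the definition above is fully explicit, it translates directly into
code. The following minimal Python sketch (using networkx, which is not
mentioned in the abstract) computes $SO(G)$ and checks it on small graphs
whose value is easy to verify by hand.

```python
import networkx as nx
from math import sqrt

def sombor_index(G: nx.Graph) -> float:
    """SO(G) = sum over edges uv of sqrt(deg(u)^2 + deg(v)^2)."""
    return sum(sqrt(G.degree(u) ** 2 + G.degree(v) ** 2) for u, v in G.edges())

# Sanity checks: the path P3 has two edges, each joining a degree-1 vertex to
# the degree-2 centre, so SO(P3) = 2*sqrt(1 + 4) = 2*sqrt(5).
print(sombor_index(nx.path_graph(3)))   # ~4.4721
print(sombor_index(nx.cycle_graph(5)))  # 5*sqrt(8)  ~ 14.1421
print(sombor_index(nx.star_graph(4)))   # 4*sqrt(17) ~ 16.4924
```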
|
Purpose: Diffusion MRI (dMRI) suffers from eddy currents induced by strong
diffusion gradients, which introduce artefacts that can impair subsequent
diffusion metric analysis. Existing retrospective correction techniques for
diffusion-gradient-induced eddy currents do not account for eddy current
decay, an approximation that is generally adequate for traditional Pulsed
Gradient Spin Echo (PGSE) diffusion encoding. However, these techniques do not
necessarily apply to advanced forms of dMRI that require substantial gradient
slewing, such as Oscillating Gradient Spin Echo (OGSE).
Methods: An in-house algorithm (TVEDDY), which for the first time
retrospectively models eddy current decay, was tested on PGSE and OGSE brain
images acquired at 7T. Correction performance was compared to conventional
correction methods by evaluating the mean-squared error (MSE) between
diffusion-weighted images acquired with opposite polarity diffusion gradients.
As a ground truth comparison, images were corrected using field dynamics up to
third order in space measured using a field monitoring system.
Results: Time-varying eddy currents were observed for OGSE, which introduced
blurring that was not reduced using the traditional approach but was diminished
considerably with TVEDDY and model-based reconstruction. No MSE difference was
observed between the conventional approach and TVEDDY for PGSE, but for OGSE
TVEDDY resulted in significantly lower MSE than the conventional approach. The
field-monitoring-informed model-based reconstruction had the lowest MSE for
both PGSE and OGSE.
Conclusion: This work establishes that it is possible to estimate
time-varying eddy currents from the diffusion data itself, which provides
substantial image quality improvements for gradient-intensive dMRI acquisitions
like OGSE.
|
We consider two popular Graph Representation Learning (GRL) methods: message
passing for node classification and network embedding for link prediction. For
each, we pick a popular model that we (i) linearize and (ii) switch to a
Frobenius norm error minimization training objective. These simplifications
allow the training to be cast as finding the optimal parameters in closed
form. We program in TensorFlow a functional form of Truncated Singular Value
Decomposition (SVD), such that we can decompose a dense matrix $\mathbf{M}$
without explicitly computing $\mathbf{M}$. We achieve competitive performance
on popular GRL tasks while providing orders of magnitude speedup. We
open-source our code at http://github.com/samihaija/tf-fsvd
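To illustrate the idea of decomposing a matrix without materializing it, here
is a hedged sketch using SciPy's LinearOperator together with a truncated SVD
routine; it is not the authors' TensorFlow implementation (which lives at the
linked repository), and the matrix M = A Aᵀ A is an arbitrary stand-in for a
dense product of sparse factors.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import LinearOperator, svds

# Suppose M = A @ A.T @ A is dense and too large to materialize, but
# matrix-vector products with M can be computed from the sparse factor A alone.
A = sparse_random(5000, 300, density=0.01, format="csr", random_state=0)

def matvec(v):
    return A @ (A.T @ (A @ v))       # M v without forming M

def rmatvec(v):
    return A.T @ (A @ (A.T @ v))     # M^T v without forming M

M_op = LinearOperator(shape=A.shape, matvec=matvec, rmatvec=rmatvec,
                      dtype=np.float64)

# Rank-32 truncated SVD of M without ever constructing M explicitly.
U, s, Vt = svds(M_op, k=32)
print(U.shape, s.shape, Vt.shape)    # (5000, 32) (32,) (32, 300)
```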
|
We describe and contrast two distinct problem areas for statistical
causality: studying the likely effects of an intervention ("effects of
causes"), and studying whether there is a causal link between the observed
exposure and outcome in an individual case ("causes of effects"). For each of
these, we introduce and compare various formal frameworks that have been
proposed for that purpose, including the decision-theoretic approach,
structural equations, structural and stochastic causal models, and potential
outcomes. It is argued that counterfactual concepts are unnecessary for
studying effects of causes, but are needed for analysing causes of effects.
They are however subject to a degree of arbitrariness, which can be reduced,
though not in general eliminated, by taking account of additional structure in
the problem.
|
The breaking of chiral symmetry in holographic light-front QCD is encoded in
its longitudinal dynamics with its chiral limit protected by the superconformal
algebraic structure which governs its transverse dynamics. The scale in the
longitudinal light-front Hamiltonian determines the confinement strength in
this direction: It is also responsible for most of the light meson ground state
mass, consistent with the Gell-Mann-Oakes-Renner constraint. Longitudinal
confinement and the breaking of chiral symmetry are found to be different
manifestations of the same underlying dynamics, as in the 't Hooft large-$N_C$
QCD$(1+1)$ model.
|
We study the benefits of complex-valued weights for neural networks. We prove
that shallow complex neural networks with quadratic activations have no
spurious local minima. In contrast, shallow real neural networks with quadratic
activations have infinitely many spurious local minima under the same
conditions. In addition, we provide specific examples to demonstrate that
complex-valued weights turn poor local minima into saddle points. The
activation function CReLU is also discussed to illustrate the superiority of
analytic activations in complex-valued neural networks.
|
In the development of governmental policy for artificial intelligence (AI)
that is informed by ethics, one avenue currently pursued is that of drawing on
AI Ethics Principles. However, these AI Ethics Principles often fail to be
actioned in governmental policy. This paper proposes a novel framework for the
development of Actionable Principles for AI. The approach acknowledges the
relevance of AI Ethics Principles and homes in on methodological elements to
increase their practical implementability in policy processes. As a case study,
elements are extracted from the development process of the Ethics Guidelines
for Trustworthy AI of the European Commission's High-Level Expert Group on AI.
Subsequently, these elements are expanded on and evaluated in light of their
ability to contribute to a prototype framework for the development of
Actionable Principles for AI. The paper proposes the following three
propositions for the formation of such a prototype framework: (1) preliminary
landscape assessments; (2) multi-stakeholder participation and cross-sectoral
feedback; and, (3) mechanisms to support implementation and
operationalizability.
|
We describe the asymptotic behavior of positive solutions $u_\epsilon$ of the
equation $-\Delta u + au = 3\,u^{5-\epsilon}$ in $\Omega\subset\mathbb{R}^3$
with a homogeneous Dirichlet boundary condition. The function $a$ is assumed to
be critical in the sense of Hebey and Vaugon and the functions $u_\epsilon$ are
assumed to be an optimizing sequence for the Sobolev inequality. Under a
natural nondegeneracy assumption we derive the exact rate of the blow-up and
the location of the concentration point, thereby proving a conjecture of
Br\'ezis and Peletier (1989). Similar results are also obtained for solutions
of the equation $-\Delta u + (a+\epsilon V) u = 3\,u^5$ in $\Omega$.
|
The multiple-input multiple-output (MIMO) detection problem, a fundamental
problem in modern digital communications, is to detect a vector of transmitted
symbols from the noisy outputs of a fading MIMO channel. The maximum likelihood
detector can be formulated as a complex least-squares problem with discrete
variables, which is NP-hard in general. Various semidefinite relaxation (SDR)
methods have been proposed in the literature to solve the problem due to their
polynomial-time worst-case complexity and good detection error rate
performance. In this paper, we consider two popular classes of SDR-based
detectors and study the conditions under which the SDRs are tight and the
relationship between different SDR models. For the enhanced complex and real
SDRs proposed recently by Lu et al., we refine their analysis and derive the
necessary and sufficient condition for the complex SDR to be tight, as well as
a necessary condition for the real SDR to be tight. In contrast, we also show
that another SDR proposed by Mobasher et al. is not tight with high probability
under mild conditions. Moreover, we establish a general theorem that shows the
equivalence between two subsets of positive semidefinite matrices in different
dimensions by exploiting a special "separable" structure in the constraints.
Our theorem recovers two existing equivalence results of SDRs defined in
different settings and has the potential to find other applications due to its
generality.
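For readers unfamiliar with the SDR approach, the sketch below sets up the
standard rank-relaxed semidefinite program for a toy real-valued BPSK instance
using CVXPY. It illustrates the generic SDR detector only; it is not the
enhanced complex or real SDRs of Lu et al. analysed in the paper, and all
problem sizes are illustrative.

```python
import numpy as np
import cvxpy as cp

# Toy BPSK instance of ML detection: min ||y - H x||^2 over x in {-1, +1}^n.
rng = np.random.default_rng(1)
n, m = 4, 8
H = rng.standard_normal((m, n))
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.1 * rng.standard_normal(m)

# Lift z = [x; 1]: ||y - Hx||^2 = Tr(L Z) with Z = z z^T, diag(Z) = 1.
q = -(H.T @ y)
L = np.block([[H.T @ H, q[:, None]],
              [q[None, :], np.array([[y @ y]])]])

Z = cp.Variable((n + 1, n + 1), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(L @ Z)),
                  [Z >> 0, cp.diag(Z) == 1])   # drop the rank-one constraint
prob.solve()

# Simple rounding: read off signs from the last column of Z.
x_hat = np.sign(Z.value[:n, -1] * Z.value[-1, -1])
print("recovered:", x_hat, "true:", x_true)
```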
|
The joint detection of the gravitational wave GW170817, of the short
$\gamma$-ray burst GRB170817A and of the kilonova AT2017gfo, generated by the
binary neutron star merger observed on August 17, 2017, is a milestone in
multimessenger astronomy and provides new constraints on the neutron star
equation of state. We perform Bayesian inference and model selection on
AT2017gfo using semi-analytical, multi-component models that also account for
non-spherical ejecta. Observational data favor anisotropic geometries over
spherically symmetric profiles, with a log-Bayes factor of ${\sim}10^{4}$, and
favor multi-component models over single-component ones. The best-fitting
model is an anisotropic three-component model composed of dynamical ejecta
plus neutrino and viscous winds. Using the dynamical ejecta parameters inferred from
the best-fitting model and numerical-relativity relations connecting the ejecta
properties to the binary properties, we constrain the binary mass ratio to
$q<1.54$ and the reduced tidal parameter to $120<\tilde\Lambda<1110$. Finally,
we combine the predictions from AT2017gfo with those from GW170817,
constraining the radius of a neutron star of $1.4~{\rm M}_\odot$ to
$12.2\pm0.5~{\rm km}$ ($1\sigma$ level). This prediction could be further
strengthened by improving kilonova models with numerical-relativity
information.
|
In this paper, we discuss possible color palettes and the prediction and
analysis of the originality of the colors that artists used in Renaissance oil
paintings. The goal of this framework is to use color symbology and image
enhancement tools to predict the historical color palettes of Renaissance oil
artworks. This work is only the start of an effort to explore the
possibilities of predicting the color palettes of Renaissance oil artworks. We
believe the framework may be very useful in predicting the color palettes of
Renaissance oil artworks and other artworks. The 105 images have been taken
from paintings by three well-known artists, Rafael, Leonardo Da Vinci, and
Rembrandt, that are available in Olga's Gallery. Images are processed in the
frequency domain to enhance image quality, and ratios of primary colors are
calculated and analyzed using new color-ratio measurements.
|
The control of domain walls is central to nearly all magnetic technologies,
particularly for information storage and spintronics. Creative attempts to
increase storage density need to overcome volatility due to thermal
fluctuations of nanoscopic domains and heating limitations. Topological
defects, such as solitons, skyrmions, and merons, may be much less susceptible
to fluctuations, owing to topological constraints, while also being
controllable with low current densities. Here, we present the first evidence
for soliton/soliton and soliton/antisoliton domain walls in the hexagonal
chiral magnet Mn$_{1/3}$NbS$_2$ that respond asymmetrically to magnetic fields and
exhibit pair-annihilation. This is important because it suggests the
possibility of controlling the occurrence of soliton pairs and the use of small
fields or small currents to control nanoscopic magnetic domains. Specifically,
our data suggest that either soliton/soliton or soliton/antisoliton pairs can
be stabilized by tuning the balance between intrinsic exchange interactions and
long-range magnetostatics in restricted geometries.
|
How many neurons are needed to approximate a target probability distribution
using a neural network with a given input distribution and approximation error?
This paper examines this question for the case when the input distribution is
uniform, and the target distribution belongs to the class of histogram
distributions. We obtain a new upper bound on the number of required neurons,
which is strictly better than previously existing upper bounds. The key
ingredient in this improvement is an efficient construction of the neural nets
representing piecewise linear functions. We also obtain a lower bound on the
minimum number of neurons needed to approximate the histogram distributions.
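The key construction mentioned above relies on the fact that the inverse CDF
of a histogram distribution is piecewise linear, and piecewise linear
functions are exactly what ReLU networks represent. The following numpy sketch
(with hypothetical bin probabilities) pushes uniform samples through such a
piecewise-linear map and recovers the target histogram; it illustrates the map
itself, not the neuron-count bounds of the paper.

```python
import numpy as np

# Target: a histogram distribution on [0, 1) with 4 equal-width bins and these
# (hypothetical) bin probabilities.
probs = np.array([0.1, 0.4, 0.3, 0.2])
bins = len(probs)
cdf_knots = np.concatenate([[0.0], np.cumsum(probs)])  # CDF values at bin edges

def inverse_cdf(u):
    """Piecewise-linear inverse CDF: the kind of map a ReLU network can
    represent, pushing Uniform(0,1) forward onto the histogram density."""
    u = np.asarray(u)
    idx = np.clip(np.searchsorted(cdf_knots, u, side="right") - 1, 0, bins - 1)
    left, right = cdf_knots[idx], cdf_knots[idx + 1]
    return (idx + (u - left) / (right - left)) / bins

samples = inverse_cdf(np.random.default_rng(0).random(100_000))
hist, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
print(hist / len(samples))  # approximately [0.1, 0.4, 0.3, 0.2]
```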
|
The development of recommender systems that optimize multi-turn interaction
with users, and model the interactions of different agents (e.g., users,
content providers, vendors) in the recommender ecosystem have drawn increasing
attention in recent years. Developing and training models and algorithms for
such recommenders can be especially difficult using static datasets, which
often fail to offer the types of counterfactual predictions needed to evaluate
policies over extended horizons. To address this, we develop RecSim NG, a
probabilistic platform for the simulation of multi-agent recommender systems.
RecSim NG is a scalable, modular, differentiable simulator implemented in
Edward2 and TensorFlow. It offers: a powerful, general probabilistic
programming language for agent-behavior specification; tools for probabilistic
inference and latent-variable model learning, backed by automatic
differentiation and tracing; and a TensorFlow-based runtime for running
simulations on accelerated hardware. We describe RecSim NG and illustrate how
it can be used to create transparent, configurable, end-to-end models of a
recommender ecosystem, complemented by a small set of simple use cases that
demonstrate how RecSim NG can help both researchers and practitioners easily
develop and train novel algorithms for recommender systems.
|
Experimentally synthesized perovskite-type YRh$_{3}$B with a $Pm\bar{3}m$
type structure was proposed as a novel topological material (TM) via
first-principles calculations and the low-energy $k\cdot p$ effective
Hamiltonian, which has a quadratic contact triple point (QCTP) at point
$\Gamma$ and six pairs of open nodal lines (NLs) of the hybrid type. Clear
surface states observed in the surface spectrum confirmed the topological
states. When spin-orbit coupling was considered, the QCTP at $\Gamma$
transferred to the quadratic-type Dirac nodal point (NP). Under 1$\%$
tetragonal strained lattice constants, YRh$_{3}$B hosted richer topological
states, including a quadratic-type two-fold degenerate NP, six pairs of open
NLs of the hybrid type, and two closed NLs of type I and hybrid type. Moreover,
it was proved that the NLs of YRh$_{3}$B at its strained lattice constants
contain all types of band-crossing points (BCPs) (i.e., type I, type II, and
critical type). Such rich types of NP and NL states in one compound make it
potentially applicable for multifunctional electronic devices as well as an
appropriate platform to study entanglement among topological states.
|
A couple of dozen Earth-like planets orbiting M dwarfs have been discovered
so far. Some of them have attracted interest because of their potential
long-term habitability; such a possibility is currently vigorously debated in
the literature. I show that post-Keplerian (pK) orbit precessions may impact
the habitability of a fictitious telluric planet orbiting an oblate late-type M
dwarf of spectral class M9V with $M_\star=0.08\,M_\odot$ at
$a=0.02\,\mathrm{au}$, corresponding to an orbital period $P_\mathrm{b}\simeq
4\,\mathrm{d}$, inducing long-term variations of the planetary obliquity
$\varepsilon$ which, under certain circumstances, may not be negligible from
the point of view of life's sustainability. I review the
analytical orbit-averaged equations of the pK precessions, both classical and
general relativistic, of the unit vectors
$\boldsymbol{\hat{S}},\,\boldsymbol{\hat{h}}$ of both the planet's spin and
orbital angular momenta $\boldsymbol S,\,\boldsymbol{L}$ entering
$\varepsilon$, and numerically integrate them by producing time series of the
pK changes $\Delta\varepsilon(t)$ of the obliquity. For rapidly rotating M
dwarfs with rotational periods of the order of $P_\star \simeq
0.1-1\,\mathrm{d}$, the planet's obliquity $\varepsilon$ can undergo classical
pK large variations $\Delta\varepsilon(t)$ up to tens of degrees over
timescales $\Delta t \simeq 20-200\,\mathrm{kyr}$, depending on the mutual
orientations of the star's spin ${\boldsymbol J}_\star$, of $\boldsymbol S$,
and of $\boldsymbol L$. Instead, $\Delta\varepsilon(t)$ remains $\lesssim
1-1.5^\circ$ for planet b of Teegarden's Star. In certain
circumstances, the M dwarf's oblateness $J_2^\star$ should be considered as one
of the key dynamical features to be taken into account in compiling budgets of
the long-term habitability of rocky planets around fast spinning late M dwarfs.
(Abridged)
|
Multipartite entangled states are significant resources for both quantum
information processing and quantum metrology. In particular, non-Gaussian
entangled states are predicted to achieve a higher sensitivity of precision
measurements than Gaussian states. On the basis of metrological sensitivity,
the conventional linear Ramsey squeezing parameter (RSP) efficiently
characterises the Gaussian entangled atomic states but fails for much wider
classes of highly sensitive non-Gaussian states. These complex non-Gaussian
entangled states can be classified by the nonlinear squeezing parameter (NLSP),
as a generalisation of the RSP with respect to nonlinear observables, and
identified via the Fisher information. However, the NLSP has never been
measured experimentally. Using a 19-qubit programmable superconducting
processor, here we report the characterisation of multiparticle entangled
states generated during its nonlinear dynamics. First, selecting 10 qubits, we
measure the RSP and the NLSP by single-shot readouts of collective spin
operators in several different directions. Then, by extracting the Fisher
information of the time-evolved state of all 19 qubits, we observe a large
metrological gain of 9.89$^{+0.28}_{-0.29}$ dB over the standard quantum limit,
indicating a high level of multiparticle entanglement for quantum-enhanced
phase sensitivity. Benefiting from high-fidelity full controls and addressable
single-shot readouts, the superconducting processor with interconnected qubits
provides an ideal platform for engineering and benchmarking non-Gaussian
entangled states that are useful for quantum-enhanced metrology.
|
Convolutional Neural Networks (CNNs) are mainly used to treat the image-rich
problems characteristic of Deep Learning. In this work, we propose a hybrid
image classification model to take advantage of quantum and classical
computing. The method builds on the potential that convolutional networks have
shown in artificial intelligence by replacing classical filters with
variational quantum filters. This work also compares the approach with other
classification methods and examines the system's execution on different
servers. The algorithm's quantum feasibility is modelled and tested on Amazon
Braket Notebook instances and implemented following PennyLane's philosophy and
framework.
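As a rough sketch of what "replacing classical filters with variational
quantum filters" can look like, the PennyLane snippet below slides a small
parameterized circuit over 2x2 image patches, in the spirit of quanvolutional
layers. The circuit layout, parameter values, and patch size are assumptions
made for illustration and are not taken from the paper.

```python
import pennylane as qml
import numpy as np

n_qubits = 4  # one qubit per pixel of a 2x2 patch
dev = qml.device("default.qubit", wires=n_qubits)
rng = np.random.default_rng(0)
weights = rng.uniform(0, 2 * np.pi, size=n_qubits)  # hypothetical variational parameters

@qml.qnode(dev)
def quantum_filter(patch, weights):
    # Encode the 2x2 image patch into single-qubit rotations.
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)
    # A shallow variational layer standing in for a learned "quantum filter".
    for i in range(n_qubits):
        qml.RZ(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

image = rng.random((4, 4))  # toy grayscale image with values in [0, 1]
# Slide the quantum filter over non-overlapping 2x2 patches (stride 2).
feature_map = np.zeros((2, 2, n_qubits))
for r in range(0, 4, 2):
    for c in range(0, 4, 2):
        patch = image[r:r + 2, c:c + 2].flatten()
        feature_map[r // 2, c // 2] = np.array(quantum_filter(patch, weights), dtype=float)
print(feature_map.shape)  # (2, 2, 4): one output channel per qubit
```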
|
We derive a distribution function for the position of a tagged active
particle in a slowly varying in space external potential, in a system of
interacting active particles. The tagged particle distribution has the form of
the Boltzmann distribution but with an effective temperature that replaces the
temperature of the heat bath. We show that the effective temperature that
enters the tagged particle distribution is the same as the effective
temperature defined through the Einstein relation, i.e. it is equal to the
ratio of the self-diffusion and tagged particle mobility coefficients. This
shows that this effective temperature, which is defined through a
fluctuation-dissipation ratio, is relevant beyond the linear response regime.
We verify our theoretical findings through computer simulations. Our theory
fails when an additional large length scale appears in our active system. This
length scale is associated with long-wavelength density fluctuations that
emerge upon approaching motility-induced phase separation.
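In compact form, the two statements above can be written as follows (notation
assumed here: $U$ is the slowly varying external potential, $D_s$ the
self-diffusion coefficient, and $\mu_s$ the tagged-particle mobility):

```latex
P(\mathbf{r}) \;\propto\; \exp\!\left[-\,\frac{U(\mathbf{r})}{k_B T_{\rm eff}}\right],
\qquad
k_B T_{\rm eff} \;=\; \frac{D_s}{\mu_s}\,.
```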
|
We study all the possible spin asymmetries that can arise in back-to-back
electron-jet production, $ep\rightarrow e+\text{jet}+X$, as well as the
associated jet fragmentation process, $ep\rightarrow e+ \text{jet} (h)+X$, in
electron-proton collisions. We derive the factorization formalism for these
spin asymmetries and perform the corresponding phenomenology for the kinematics
relevant to the future electron ion collider. In the case of unpolarized
electron-proton scattering, we also give predictions for azimuthal asymmetries
for the HERA experiment. This demonstrates that electron-jet production is an
outstanding process for probing unpolarized and polarized transverse momentum
dependent parton distribution functions and fragmentation functions.
|
In this paper, combinatorial quantitative group testing (QGT) with noisy
measurements is studied. The goal of QGT is to detect defective items from a
data set of size $n$ with counting measurements, each of which counts the
number of defects in a selected pool of items. While most of the literature
considers either probabilistic QGT with random noise or combinatorial QGT with
noiseless measurements, our focus is on combinatorial QGT with noisy measurements
that might be adversarially perturbed by additive bounded noises. Since perfect
detection is impossible, a partial detection criterion is adopted. With the
adversarial noise being bounded by $d_n = \Theta(n^\delta)$ and the detection
criterion being to ensure no more than $k_n = \Theta(n^\kappa)$ errors can be
made, our goal is to characterize the fundamental limit on the number of
measurements, termed \emph{pooling complexity}, as well as to provide explicit
construction of measurement plans with optimal pooling complexity and efficient
decoding algorithms. We first show that the fundamental limit is
$\frac{1}{1-2\delta}\frac{n}{\log n}$ to within a constant factor not depending
on $(n,\kappa,\delta)$ for the non-adaptive setting when $0<2\delta\leq \kappa
<1$, sharpening the previous result by Chen and Wang [1]. We also provide an
explicit construction of a non-adaptive deterministic measurement plan with
$\frac{1}{1-2\delta}\frac{n}{\log_{2} n}$ pooling complexity up to a constant
factor, matching the fundamental limit, with decoding complexity being
$o(n^{1+\rho})$ for all $\rho > 0$, nearly linear in $n$, the size of the data
set.
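To make the scaling of the stated pooling complexity concrete, the small
Python snippet below evaluates $\frac{1}{1-2\delta}\frac{n}{\log_2 n}$
(ignoring the unspecified constant factor) for a few example values of $n$ at
$\delta = 0.2$; the numbers are purely illustrative.

```python
import math

def pooling_complexity_bound(n: int, delta: float) -> float:
    """Leading-order pooling complexity n / ((1 - 2*delta) * log2(n)),
    up to the constant factor left unspecified in the abstract."""
    assert 0 < 2 * delta < 1
    return n / ((1 - 2 * delta) * math.log2(n))

for n in (10**4, 10**6, 10**8):
    print(n, round(pooling_complexity_bound(n, delta=0.2)))
```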
|
In SPECT, list-mode (LM) format allows storing data at higher precision
compared to binned data. There is significant interest in investigating whether
this higher precision translates to improved performance on clinical tasks.
Towards this goal, in this study, we quantitatively investigated whether
processing data in LM format, and in particular, the energy attribute of the
detected photon, provides improved performance on the task of absolute
quantification of region-of-interest (ROI) uptake in comparison to processing
the data in binned format. We conducted this evaluation study using a DaTscan
brain SPECT acquisition protocol, conducted in the context of imaging patients
with Parkinson's disease. This study was conducted with a synthetic phantom. A
signal-known exactly/background-known-statistically (SKE/BKS) setup was
considered. An ordered-subset expectation-maximization algorithm was used to
reconstruct images from data acquired in LM format, including the
scatter-window data, and including the energy attribute of each LM event. Using
a realistic 2-D SPECT system simulation, quantification tasks were performed on
the reconstructed images. In all the conducted evaluation studies, the results
demonstrated improved quantification performance when the LM data were used
compared to binning the attributes. Overall, we observed that LM data, including the
energy attribute, yielded improved performance on absolute quantification tasks
compared to binned data.
|
In the last few years, the interest in knowledge bases has grown
exponentially in both the research community and the industry due to their
essential role in AI applications. Entity alignment is an important task for
enriching knowledge bases. This paper provides a comprehensive tutorial-type
survey on representative entity alignment techniques that use the new approach
of representation learning. We present a framework for capturing the key
characteristics of these techniques, propose two datasets to address the
limitation of existing benchmark datasets, and conduct extensive experiments
using the proposed datasets. The framework gives a clear picture of how the
techniques work. The experiments yield important results about the empirical
performance of the techniques and how various factors affect the performance.
One important observation not stressed by previous work is that techniques
making good use of attribute triples and relation predicates as features stand
out as winners.
|
Micrometer sized alkane-in-water emulsion drops, stabilized by appropriate
long-chain surfactants, spontaneously break symmetry upon cooling and transform
consecutively into series of regular shapes (Denkov et al., Nature 2015, 528,
392). Two mechanisms were proposed to explain this phenomenon of drop
"self-shaping". One of these mechanisms assumes that thin layers of plastic
rotator phase form at the drop surface around the freezing temperature of the
oil. This mechanism has been supported by several indirect experimental
findings but direct structural characterization has not been reported so far.
We combine small- and wide-angle X-ray scattering (SAXS/WAXS) with optical
microscopy and DSC measurements of self-shaping drops in emulsions. In the
emulsions exhibiting drop self-shaping, the scattering spectra reveal the
formation of intermediate, metastable rotator phases in the alkane drops before
their crystallization. In addition, shells of rotator phase were observed to
form in hexadecane drops, stabilized by C16EO10 surfactant. This rotator phase
melts at ca. 16.6 {\deg}C which is significantly lower than the melting
temperature of crystalline hexadecane, 18 {\deg}C. The scattering results are
in very good agreement with the complementary optical observations and DSC
measurements.
|
We present a method that generalizes the periodic orbit dividing surface
construction for Hamiltonian systems with three or more degrees of freedom. We
construct a torus using as a basis a periodic orbit and we extend this to a
$2n-2$ dimensional object in the $2n-1$ dimensional energy surface. We present
our methods using benchmark examples for two and three degree of freedom
Hamiltonian systems to illustrate the corresponding algorithm for this
construction. Towards this end we use the normal form quadratic Hamiltonian
system with two and three degrees of freedom. We found that the periodic orbit
dividing surface can provide us the same dynamical information as the dividing
surface constructed using normally hyperbolic invariant manifolds. This is
significant because, in general, computations of normally hyperbolic invariant
manifolds are very difficult in Hamiltonian systems with three or more degrees
of freedom. However, our method avoids this computation and the only
information that we need is the location of one periodic orbit.
|
We perform the maximal twist of eleven-dimensional supergravity. This twist
is partially topological and exists on manifolds of $G_2 \times SU(2)$
holonomy. Our derivation starts with an explicit description of the
Batalin-Vilkovisky complex associated to the three-form multiplet in the pure
spinor superfield formalism. We then determine the $L_\infty$ module structure
of the supersymmetry algebra on the component fields. We twist the theory by
modifying the differential of the Batalin-Vilkovisky complex to incorporate the
action of a scalar supercharge. We find that the resulting free twisted theory
is given by the tensor product of the de Rham and Dolbeault complexes of the
respective $G_2$ and $SU(2)$ holonomy manifolds as conjectured by Costello.
|
A numerical method is developed to solve linear semi-infinite programming
problems (LSIP) in which the iterates produced by the algorithm are feasible for
the original problem. This is achieved by constructing a sequence of standard
linear programming problems with respect to the successive discretization of
the index set such that the approximate regions are included in the original
feasible region. The convergence of the approximate solutions to the solution
of the original problem is proved and the associated optimal objective function
values of the approximate problems are monotonically decreasing and converge to
the optimal value of LSIP. An adaptive refinement procedure is designed to
discretize the index set and update the constraints for the approximate
problem. Numerical experiments demonstrate the performance of the proposed
algorithm.
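The basic discretization idea (though not the paper's feasibility-preserving
inner approximation and adaptive refinement) can be sketched in a few lines
with SciPy's LP solver on a toy LSIP chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LSIP (not from the paper): minimize x1 + x2
# subject to x1 + t*x2 >= t^2 for all t in [0, 1].
def solve_discretized(num_points: int):
    t = np.linspace(0.0, 1.0, num_points)           # discretized index set
    A_ub = np.column_stack([-np.ones_like(t), -t])  # -(x1 + t*x2) <= -t^2
    b_ub = -t**2
    res = linprog(c=[1.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None)])
    return res.x, res.fun

for m in (3, 11, 101):
    x, val = solve_discretized(m)
    print(f"{m:4d} grid points -> x = {np.round(x, 4)}, objective = {val:.4f}")
```

For this particular toy problem the constraint function is convex in t, so
even a coarse grid already yields the optimal value 1; the paper's adaptive
refinement targets the general case where the discretization must be updated.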
|
Searches for the lepton number violating $K^{+} \rightarrow \pi^{-} \mu^{+}
e^{+}$ decay and the lepton flavour violating $K^{+} \rightarrow \pi^{+}
\mu^{-} e^{+}$ and $\pi^{0} \rightarrow \mu^{-} e^{+}$ decays are reported
using data collected by the NA62 experiment at CERN in $2017$-$2018$. No
evidence for these decays is found and upper limits of the branching ratios are
obtained at 90% confidence level:
$\mathcal{B}(K^{+}\rightarrow\pi^{-}\mu^{+}e^{+})<4.2\times 10^{-11}$,
$\mathcal{B}(K^{+}\rightarrow\pi^{+}\mu^{-}e^{+})<6.6\times10^{-11}$ and
$\mathcal{B}(\pi^{0}\rightarrow\mu^{-}e^{+})<3.2\times 10^{-10}$. These results
improve by one order of magnitude over previous results for these decay modes.
|
The phenomenon of population interference, where a treatment assigned to one
experimental unit affects another experimental unit's outcome, has received
considerable attention in standard randomized experiments. The complications
produced by population interference in this setting are now readily recognized,
and partial remedies are well known. Much less understood is the impact of
population interference in panel experiments where treatment is sequentially
randomized in the population, and the outcomes are observed at each time step.
This paper proposes a general framework for studying population interference in
panel experiments and presents new finite population estimation and inference
results. Our findings suggest that, under mild assumptions, the addition of a
temporal dimension to an experiment alleviates some of the challenges of
population interference for certain estimands. In contrast, we show that the
presence of carryover effects -- that is, when past treatments may affect
future outcomes -- exacerbates the problem. Revisiting the special case of
standard experiments with population interference, we prove a central limit
theorem under weaker conditions than previous results in the literature and
highlight the trade-off between flexibility in the design and the interference
structure.
|
One of the difficulties related to the COVID-19 pandemic is the shift from
face-to-face to distance teaching. Both schools and universities suddenly had
to organize on-line lectures. To perform laboratory practice even in this
period, easily accessible materials, smartphone physics apps, on-line tools,
and devices can be used. In this paper, a method to measure the gravitational
acceleration by studying a freely falling body with an Arduino board is presented.
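Independent of the Arduino hardware, the data analysis step reduces to fitting
$h = \tfrac{1}{2} g t^2$ to the measured drop heights and fall times. Below is
a minimal Python sketch with invented example measurements (not data from the
paper):

```python
import numpy as np

# Hypothetical drop heights (m) and fall times (s), e.g. as timed by an Arduino
# registering release and impact of a small object; the values are invented.
heights = np.array([0.40, 0.60, 0.80, 1.00, 1.20])
times = np.array([0.286, 0.350, 0.404, 0.452, 0.495])

# For free fall from rest, h = (1/2) g t^2, so a linear fit of h against t^2
# gives g/2 as the slope.
slope, intercept = np.polyfit(times**2, heights, deg=1)
g = 2.0 * slope
print(f"estimated g = {g:.2f} m/s^2")  # close to 9.81 m/s^2 for these values
```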
|
The presence of multiple talkers in the surrounding environment poses a
difficult challenge for real-time speech communication systems considering the
constraints on network size and complexity. In this paper, we present
Personalized PercepNet, a real-time speech enhancement model that separates a
target speaker from a noisy multi-talker mixture while keeping the complexity
of the recently proposed PercepNet. To enable speaker-dependent
speech enhancement, we first show how we can train a perceptually motivated
speaker embedder network to produce a representative embedding vector for the
given speaker. Personalized PercepNet uses the target speaker embedding as
additional information to pick out and enhance only the target speaker while
suppressing all other competing sounds. Our experiments show that the proposed
model significantly outperforms PercepNet and other baselines, both in terms of
objective speech enhancement metrics and human opinion scores.
|
Composite quantum compounds (CQCs) are a classic example of quantum materials
which host more than one apparently distinct quantum phenomenon. Magnetism,
topological superconductivity, and Rashba physics are a few such quantum
phenomena which are ubiquitously observed in several functional materials and
can co-exist in CQCs. In this letter, we use {\it ab-initio} calculations to
predict the co-existence of two incompatible phenomena, namely a topologically
non-trivial Weyl semimetal phase and spin gapless semiconducting (SGS)
behavior, in a single crystalline system. SGSs belong to a special class of
spintronics materials which exhibit a unique band structure involving a
semiconducting state for one spin channel and a gapless state for the other. We
report such a SGS behavior in conjunction with the topologically non-trivial
multi-Weyl Fermions in MnPO$_4$. Interestingly, these Weyl nodes are located
very close to the Fermi level with the minimal trivial band density. A drumhead
like surface state originating from a nodal loop around Y-point in the
Brillouin zone is observed. A large value of the simulated anomalous Hall
conductivity (1265 $\Omega^{-1} cm^{-1}$) indirectly reflects the topological
non-trivial behavior of this compound. Such co-existent quantum phenomena are
not common in condensed matter systems and hence it opens up a fertile ground
to explore and achieve newer functional materials.
|
We present the structural and magnetic properties of KNaCuP$_2$O$_7$
investigated via x-ray diffraction, magnetization, specific heat, and $^{31}$P
NMR and $^{23}$Na NMR measurements and complementary electronic structure
calculations. The temperature dependent magnetic susceptibility and $^{31}$P
NMR shift could be modeled very well by the uniform spin-$1/2$ Heisenberg
antiferromagnetic chain model with nearest-neighbour interaction $J/k_{\rm
B}\simeq 58.7$ K. The corresponding mapping using first principles electronic
structure calculations leads to $J^{\rm DFT}/k_{\rm B} \simeq 59$ K with
negligibly small inter-chain couplings ($J^{\prime}/k_{\rm B}$, $J^{\prime
\prime}/k_{\rm B} < 0.1$ K), further confirming that the system is indeed a
one-dimensional uniform spin-$1/2$ Heisenberg antiferromagnet. The
temperature-dependent unit cell volume could be described well using the Debye
approximation with a Debye temperature of $\Theta_{\rm D} \simeq 294$ K,
consistent with the heat capacity data. The diverging trend of the NMR
spin-lattice relaxation rates ($^{31}1/T_1$ and $^{23}1/T_1$) imply the onset
of a magnetic long-range-ordering at very low temperatures supporting the
anticipated $T_{\rm N} \simeq 0.38$ K from the inter-chain couplings. Moreover,
the NMR spin-lattice relaxation rates show the dominant contributions from
uniform ($q=0$) and staggered ($q = \pm \pi/a$) spin fluctuations in the high
and low temperature regimes, respectively mimicking one-dimensionality of the
spin-lattice. We have also demonstrated that $^{31}1/T_1$ in high temperatures
varies linearly with $1/\sqrt{H}$ reflecting the effect of spin diffusion on
the dynamic susceptibility. Further, the inter-chain frustration also
substantially impedes the magnetic ordering, rendering the spin-lattice a
perfect one-dimensional uniform spin-$1/2$ Heisenberg antiferromagnet over a
wide temperature range.
|
Internet of Things (IoT) devices are becoming ubiquitous in our lives, with
applications spanning from the consumer domain to commercial and industrial
systems. The steep growth and vast adoption of IoT devices reinforce the
importance of sound and robust cybersecurity practices during the device
development life-cycles. IoT-related vulnerabilities, if successfully exploited
can affect, not only the device itself, but also the application field in which
the IoT device operates. Evidently, identifying and addressing every single
vulnerability is an arduous, if not impossible, task. Attack taxonomies can
assist in classifying attacks and their corresponding vulnerabilities. Security
countermeasures and best practices can then be leveraged to mitigate threats
and vulnerabilities before they emerge into catastrophic attacks and ensure
overall secure IoT operation. Therefore, in this paper, we provide an attack
taxonomy which takes into consideration the different layers of IoT stack,
i.e., device, infrastructure, communication, and service, and each layer's
designated characteristics which can be exploited by adversaries. Furthermore,
using nine real-world cybersecurity incidents, that had targeted IoT devices
deployed in the consumer, commercial, and industrial sectors, we describe the
IoT-related vulnerabilities, exploitation procedures, attacks, impacts, and
potential mitigation mechanisms and protection strategies. These (and many
other) incidents highlight the underlying security concerns of IoT systems and
demonstrate the potential attack impacts of such connected ecosystems, while
the proposed taxonomy provides a systematic procedure to categorize attacks
based on the affected layer and corresponding impact.
|
Many deep learning based methods are designed to remove non-uniform
(spatially variant) motion blur caused by object motion and camera shake
without knowing the blur kernel. Some methods directly output the latent sharp
image in one stage, while others utilize a multi-stage strategy (e.g.,
multi-scale, multi-patch, or multi-temporal) to gradually restore the sharp
image. However, these methods have two main issues: 1) the computational cost
of multi-stage processing is high; 2) the same convolution kernel is applied
in different regions, which is not an ideal choice for non-uniform
blur. Hence, non-uniform motion deblurring is still a challenging and open
problem. In this paper, we propose a new architecture which consists of
multiple Atrous Spatial Pyramid Deformable Convolution (ASPDC) modules to
deblur an image end-to-end with more flexibility. Multiple ASPDC modules
implicitly learn the pixel-specific motion with different dilation rates in the
same layer to handle movements of different magnitude. To improve the training,
we also propose a reblurring network to map the deblurred output back to the
blurred input, which constrains the solution space. Our experimental results
show that the proposed method outperforms state-of-the-art methods on the
benchmark datasets.
|
Constant function market makers (CFMMs) such as Uniswap, Balancer, Curve, and
mStable, among many others, make up some of the largest decentralized exchanges
on Ethereum and other blockchains. Because all transactions are public in
current implementations, a natural next question is if there exist similar
decentralized exchanges which are privacy-preserving; i.e., if a transaction's
quantities are hidden from the public view, then an adversary cannot correctly
reconstruct the traded quantities from other public information. In this note,
we show that privacy is impossible with the usual implementations of CFMMs
under most reasonable models of an adversary and provide some mitigating
strategies.
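A minimal sketch of the underlying observation for a constant-product pool:
even if a trade's quantities are hidden, the publicly visible reserves before
and after the trade reveal them exactly. The pool sizes and trade below are
invented for illustration.

```python
# Constant-product CFMM (x * y = k), ignoring fees for simplicity.
def swap(reserve_x: float, reserve_y: float, dx: float):
    """Trade dx of token X into the pool; return new reserves and the Y paid out."""
    k = reserve_x * reserve_y
    new_x = reserve_x + dx
    new_y = k / new_x
    return new_x, new_y, reserve_y - new_y

# A "hidden" trade by some user.
x0, y0 = 1_000.0, 1_000.0
x1, y1, dy_out = swap(x0, y0, dx=37.5)

# An adversary who only sees the public pool states (x0, y0) and (x1, y1)
# recovers the traded quantities exactly.
recovered_dx = x1 - x0
recovered_dy = y0 - y1
print(recovered_dx, recovered_dy, abs(recovered_dy - dy_out) < 1e-9)
```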
|
Motivated by recent results of Corwin and Knizel on stationary measures for
the open KPZ equation on the spatial interval [0, 1], we study a pair of Markov
processes with Laplace transforms that have dual representations, with the
arguments of the Laplace transforms and the time parameters of the processes
swapped. Combined with the results of Corwin and Knizel, our formula identifies
the law of the stationary solutions for the open KPZ in terms of a Markov
process which is a Doob's h transform of the Brownian motion killed at an
exponential rate.
|
The entropy principle shows that, for a self-gravitating perfect fluid, the
Einstein field equations can be derived from the extrema of the total entropy,
and the thermodynamical stability criterion is equivalent to the dynamical
stability criterion. In this paper, we recast the dynamical criterion for the
charged self-gravitating perfect fluid in Einstein-Maxwell theory, and further
give the criterion for a star with the barotropic condition. To obtain the
thermodynamical stability criterion, we first derive the general formula for
the second variation of the total entropy in the charged perfect fluid case,
and then obtain the thermodynamical criterion for radial perturbations. We
show that these two stability criteria are the same, which suggests an
inherent connection between gravity and thermodynamics even when the electric
field is taken into account.
|
In this paper, we first consider two scalar nonlocal diffusion problems with
a free boundary and a fixed boundary. We obtain the global existence,
uniqueness and long-time behaviour of the solutions of these two problems. The
spreading-vanishing dichotomy and sharp criteria for spreading and vanishing
are established. We also prove that accelerated spreading can happen if and
only if a threshold condition is violated by the kernel function. Then we
discuss a classical Lotka-Volterra predator-prey model with nonlocal
diffusions and a free boundary, which can be seen as the nonlocal diffusion
counterpart of the model in the work of Wang (2014 J. Differential Equations
\textbf{256}, 3365-3394).
|
We provide a setting and a general approach to fair online learning with
stochastic sensitive and non-sensitive contexts. The setting is a repeated game
between the Player and Nature, where at each stage both pick actions based on
the contexts. Inspired by the notion of unawareness, we assume that the Player
can only access the non-sensitive context before making a decision, while we
discuss both cases of Nature accessing the sensitive contexts and Nature
unaware of the sensitive contexts. Adapting Blackwell's approachability theory
to handle the case of an unknown contexts' distribution, we provide a general
necessary and sufficient condition for learning objectives to be compatible
with some fairness constraints. This condition is instantiated on (group-wise)
no-regret and (group-wise) calibration objectives, and on demographic parity as
an additional constraint. When the objective is not compatible with the
constraint, the provided framework makes it possible to characterise the
optimal trade-off between the two.
|
In this paper, we propose a new deep neural network classifier that
simultaneously maximizes the inter-class separation and minimizes the
intra-class variation by using the polyhedral conic classification function.
The proposed method has one loss term that allows the margin maximization to
maximize the inter-class separation and another loss term that controls the
compactness of the class acceptance regions. Our proposed method has a nice
geometric interpretation using polyhedral conic function geometry. We tested
the proposed method on various visual classification problems including
closed/open set recognition and anomaly detection. The experimental results
show that the proposed method typically outperforms other state-of-the-art
methods and is a better choice than the other tested methods, especially for
open set recognition problems.
|
We embed natural inflation in an explicit string theory model and derive
observables in cosmology. We achieve this by compactifying the type IIB string
on a Calabi-Yau orientifold, stabilizing moduli via the Large Volume Scenario,
and configuring axions using D7-brane stacks. In order to obtain a large
effective decay constant, we employ the Kim-Nilles-Peloso alignment mechanism,
with the required multiple axions arising naturally from anisotropic bulk
geometries. The bulk volumes, and hence the axion decay constants, are
stabilized by generalized one-loop corrections and subject to various
conditions: the K\"ahler cone condition on the string geometry; the convex hull
condition of the weak gravity conjecture; and the constraint from the power
spectrum of scalar perturbations. We find that all constraints can be satisfied
in a geometry with relatively small volume and thus heavy bulk axion mass. We
also covariantize the convex hull condition for the axion-dilaton-instanton
system and verify the normalization of the extremal bound.
|
The authors of the article have reviewed the scientific literature on the
development of the Russian-Chinese cooperation in the field of combining
economic and logistics projects of the Eurasian Economic Union and the Silk
Road Economic Belt. The opinions of both Russian and Chinese experts on these
projects are presented, which broadens the perspective on the New Silk Road
concept in both countries.
|
[This paper was initially published at the PHME conference in 2016 and selected
for further publication in the International Journal of Prognostics and Health
Management.]
This paper describes an Autoregressive Partially-hidden Markov model (ARPHMM)
for fault detection and prognostics of equipment based on sensor data. It is
a particular dynamic Bayesian network that represents the dynamics of
a system by means of a Hidden Markov Model (HMM) and an autoregressive (AR)
process. The Markov chain assumes that the system switches back and forth
between internal states, while the AR process ensures temporal coherence of the
sensor measurements. A sound learning procedure for the standard ARHMM based on
maximum likelihood allows all parameters to be estimated iteratively and
simultaneously. This paper suggests a modification of the learning procedure
for the case where one has prior knowledge about the structure, which then
becomes partially hidden. The integration of the prior is based on the Theory of
Weighted Distributions, which is compatible with the Expectation-Maximization
algorithm in the sense that the convergence properties are still satisfied. We
show how to apply this model to estimate the remaining useful life based on
health indicators. The autoregressive parameters can be used for prediction,
while the latent structure can be used to obtain information about the
degradation level. The usefulness of the proposed method for prognostics and
health assessment is demonstrated on the CMAPSS datasets.
|
We study the limit behaviour of upper and lower bounds on expected time
averages in imprecise Markov chains; a generalised type of Markov chain where
the local dynamics, traditionally characterised by transition probabilities,
are now represented by sets of `plausible' transition probabilities. Our first
main result is a necessary and sufficient condition under which these upper and
lower bounds, called upper and lower expected time averages, will converge as
time progresses towards infinity to limit values that do not depend on the
process' initial state. Our condition is considerably weaker than that needed
for ergodic behaviour, a similar notion which demands that marginal upper and
lower expectations of functions at a single time instant converge to so-called
limit (or steady-state) upper and lower expectations. For this reason, we refer
to our notion as `weak ergodicity'. Our second main result shows that, as far
as this weakly ergodic behaviour is concerned, one should not worry about which
type of independence assumption to adopt: epistemic irrelevance, complete
independence or repetition independence. The characterisation of weak
ergodicity as well as the limit values of upper and lower expected time
averages do not depend on such a choice. Notably, this type of robustness is
not exhibited by the notion of ergodicity and the related inferences of limit
upper and lower expectations. Finally, though limit upper and lower
expectations are often used to provide approximate information about the limit
behaviour of time averages, we show that such an approximation is sub-optimal
and that it can be significantly improved by directly using upper and lower
expected time averages.
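For readers who want to experiment, upper expected time averages over a finite horizon can be computed by a backward recursion with the upper transition operator; the sketch below represents each state's set of plausible transition probabilities by a finite list of candidate rows. This is an illustrative finite-horizon computation, not the authors' characterisation of the limit behaviour.
```python
import numpy as np

def upper_time_average(f, credal_rows, horizon):
    """Upper expected time average of f over `horizon` steps by backward recursion.

    f           : array [n_states], the function whose time average we bound
    credal_rows : list (one entry per state) of arrays [n_candidates, n_states],
                  each row a plausible transition distribution out of that state
    Returns the upper bound for each initial state.
    """
    n = len(f)
    h = np.zeros(n)                       # h_T = 0
    for _ in range(horizon):              # step backward in time
        h_new = np.empty(n)
        for x in range(n):
            # upper transition operator: pick the most optimistic plausible row
            h_new[x] = f[x] + np.max(credal_rows[x] @ h)
        h = h_new
    return h / horizon

# Hypothetical two-state example with interval-like transition uncertainty.
f = np.array([1.0, 0.0])
credal_rows = [np.array([[0.8, 0.2], [0.6, 0.4]]),
               np.array([[0.3, 0.7], [0.5, 0.5]])]
print(upper_time_average(f, credal_rows, horizon=1000))
```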
|
We consider a generalization of the recursive utility model by adding a new
component that represents utility of investment gains and losses. We also study
the utility process in this generalized model with constant elasticity of
intertemporal substitution and relative risk aversion degree, and with infinite
time horizon. In a specific, finite-state Markovian setting, we prove that the
utility process uniquely exists when the agent derives nonnegative gain-loss
utility, and that it can be non-existent or non-unique otherwise. Moreover, we
prove that the utility process, when it uniquely exists, can be computed by
starting from any initial guess and applying the recursive equation that
defines the utility process repeatedly. We then consider a portfolio selection
problem with gain-loss utility and solve it by proving that the corresponding
dynamic programming equation has a unique solution. Finally, we extend certain
previous results to the case in which the state space is infinite.
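The computational statement above is a fixed-point iteration: start from any guess on the finite state space and apply the defining recursion until convergence. A minimal sketch follows, assuming the modeller supplies the recursion map; the linear example at the end is purely hypothetical and is not the recursive-utility aggregation with gain-loss utility studied in the paper.
```python
import numpy as np

def solve_utility(recursion_map, n_states, tol=1e-10, max_iter=10_000):
    """Iterate U <- recursion_map(U) on a finite state space until convergence.

    recursion_map : callable mapping an array of utilities (one per state)
                    to the updated utilities given by the defining equation.
    """
    utility = np.zeros(n_states)  # any initial guess works per the result above
    for _ in range(max_iter):
        updated = recursion_map(utility)
        if np.max(np.abs(updated - utility)) < tol:
            return updated
        utility = updated
    raise RuntimeError("fixed-point iteration did not converge")

# Hypothetical example: a linear recursion U = r + beta * P @ U on 3 states.
P = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.3, 0.7]])
r = np.array([1.0, 0.5, 2.0])
U = solve_utility(lambda u: r + 0.95 * P @ u, n_states=3)
```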
|
A stochastic sewing lemma which is applicable for processes taking values in
Banach spaces is introduced. Applications to additive functionals of fractional
Brownian motion of distributional type are discussed.
|
Compositional Zero-Shot learning (CZSL) aims to recognize unseen compositions
of state and object visual primitives seen during training. A problem with
standard CZSL is the assumption of knowing which unseen compositions will be
available at test time. In this work, we overcome this assumption operating on
the open world setting, where no limit is imposed on the compositional space at
test time, and the search space contains a large number of unseen compositions.
To address this problem, we propose a new approach, Compositional Cosine Graph
Embeddings (Co-CGE), based on two principles. First, Co-CGE models the
dependency between states, objects and their compositions through a graph
convolutional neural network. The graph propagates information from seen to
unseen concepts, improving their representations. Second, since not all unseen
compositions are equally feasible, and less feasible ones may damage the
learned representations, Co-CGE estimates a feasibility score for each unseen
composition, using the scores as margins in a cosine similarity-based loss and
as weights in the adjacency matrix of the graphs. Experiments show that our
approach achieves state-of-the-art performance in standard CZSL while
outperforming previous methods in the open-world scenario.
|
In this paper, we present a technique for balancing the predictive relevance of
models for supervised modelling of ligand biochemical activities against
biological targets. We train uncalibrated models using a conventional
supervised machine learning technique, namely Support Vector Machines (SVMs).
Unfortunately, SVMs have a serious drawback: they are sensitive to imbalanced
datasets, outliers and high multicollinearity among training samples, which can
cause one group to be preferred over another. Thus, an additional calibration
may be required to balance the predictive relevance of the models. As a
technique for this balancing, we propose Platt scaling. The results are
demonstrated on single-target models trained on datasets exported from the
ExCAPE database. Unlike traditionally used machine learning techniques, we
focus on decreasing uncertainty by employing deterministic solvers.
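Platt scaling fits a sigmoid mapping from uncalibrated SVM decision values to probabilities on held-out folds. A minimal scikit-learn sketch is given below; the synthetic data and hyperparameters are placeholders rather than the ExCAPE single-target setup.
```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Placeholder data standing in for an ExCAPE-derived single-target dataset.
X, y = make_classification(n_samples=2000, n_features=50, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Uncalibrated SVM; its decision_function scores are not probabilities.
svm = LinearSVC(C=1.0, max_iter=10_000)

# Platt scaling: fit a sigmoid on held-out folds to map scores to probabilities.
calibrated = CalibratedClassifierCV(svm, method="sigmoid", cv=5)
calibrated.fit(X_train, y_train)
proba = calibrated.predict_proba(X_test)[:, 1]
```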
|
We use symplectic self-dual additive codes over $\mathbb{F}_4$ obtained from
metacirculant graphs to construct, for the first time, $[[\ell, 0, d ]]$ qubit
codes with parameters $(\ell,d) \in \{(78, 20), (90, 21), (91, 22),
(93,21),(96,21)\}$. Secondary constructions applied to these qubit codes yield
many further qubit codes that outperform the previous best-known codes.
|
The bidomain equations have been widely used to mathematically model the
electrical activity of the cardiac tissue. In this work, we present a potential
theory-based Cartesian grid method which is referred as the kernel-free
boundary integral (KFBI) method which works well on complex domains to
efficiently simulate the linear diffusion part of the bidomain equation. After
a proper temporal discretization, the KFBI method is applied to solve the
resulting homogeneous Neumann boundary value problems with a second-order
accuracy. According to the potential theory, the boundary integral equations
reformulated from the boundary value problems can be solved iteratively with
the simple Richardson iteration or the Krylov subspace iteration method. During
the iteration, the boundary and volume integrals are evaluated by limiting the
structured grid-based discrete solutions of the equivalent interface problems
at quasi-uniform interface nodes without the need to know the analytical
expression of Green's functions. In particular, the discrete linear system of
the equivalent interface problem obtained from the standard finite difference
schemes or the finite element schemes can be efficiently solved by fast
elliptic solvers such as the fast Fourier transform based solvers or those
based on geometric multigrid iterations after an appropriate modification at
the irregular grid nodes. Numerical results for solving the FitzHugh-Nagumo
bidomain equations in both two- and three-dimensional spaces are presented to
demonstrate the numerical performance of the KFBI method, including its
second-order accuracy, as well as the propagation of the voltage and of scroll
waves simulated on a realistic human left ventricle model.
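For concreteness, the simple Richardson iteration mentioned above solves a linear system by repeated residual correction. The sketch below uses an explicit matrix as a stand-in; in the KFBI method the action of the boundary integral operator is instead evaluated through grid-based solves of equivalent interface problems.
```python
import numpy as np

def richardson(apply_A, b, omega=0.5, tol=1e-10, max_iter=5000):
    """Simple Richardson iteration x_{k+1} = x_k + omega * (b - A x_k).

    apply_A : callable returning A @ x; in KFBI this matrix-vector product is
              realised by solving an equivalent interface problem on a grid.
    """
    x = np.zeros_like(b)
    for _ in range(max_iter):
        residual = b - apply_A(x)
        if np.linalg.norm(residual) < tol * np.linalg.norm(b):
            break
        x = x + omega * residual
    return x

# Hypothetical well-conditioned operator standing in for the integral operator.
rng = np.random.default_rng(0)
A = np.eye(100) + 0.01 * rng.standard_normal((100, 100))
b = np.ones(100)
x = richardson(lambda v: A @ v, b)
```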
|
Beginning programmers struggle with the complex grammar of modern programming
languages like Java, and make a lot of syntax errors. The diagnostic syntax error
messages from compilers and IDEs are sometimes useful, but often the messages
are cryptic and puzzling. Students could be helped, and instructors' time
saved, by automated repair suggestions when dealing with syntax errors. Large
samples of student errors and fixes are now available, offering the possibility
of data-driven machine-learning approaches to help students fix syntax errors.
Current machine-learning approaches do a reasonable job fixing syntax errors in
shorter programs, but don't work as well for even moderately longer programs.
We introduce SYNFIX, a machine-learning based tool that substantially improves
on the state-of-the-art, by learning to use compiler diagnostics, employing a
very large neural model that leverages unsupervised pre-training, and relying
on multi-label classification rather than autoregressive synthesis to generate
the (repaired) output. We describe SYNFIX's architecture in detail, and provide
a detailed evaluation. We have built SYNFIX into a free, open-source version of
Visual Studio Code; we make all our source code and models freely available.
|
Multiple-input multiple-output (MIMO) is an enabling technology to meet the
growing demand for faster and more reliable communications in wireless networks
with a large number of terminals, but it can also be applied for position
estimation of a terminal exploiting multipath propagation from multiple
antennas. In this paper, we investigate new convolutional neural network (CNN)
structures for exploiting MIMO-based channel state information (CSI) to improve
indoor positioning. We evaluate and compare the performance of three variants
of the proposed CNN structure to five NN structures proposed in the scientific
literature using the same sets of training-evaluation data. The results
demonstrate that the proposed residual convolutional NN structure improves the
accuracy of position estimation and keeps the total number of weights lower
than the published NN structures. The proposed CNN structure yields 2 cm to
10 cm better positioning accuracy than the known NN structures used as references.
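As a rough illustration of a residual convolutional structure operating on CSI tensors (layer counts, widths and the input shape are assumptions; the paper's exact architecture is not reproduced here), a PyTorch sketch:
```python
import torch
import torch.nn as nn

class ResidualCSIBlock(nn.Module):
    """Residual conv block for CSI 'images' (antennas x subcarriers); shapes illustrative."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # skip connection makes the block residual

class CSIPositioningNet(nn.Module):
    """Maps a CSI tensor to a 2D position estimate; a sketch, not the paper's network."""
    def __init__(self, in_channels=2, width=32, n_blocks=3):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, width, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualCSIBlock(width) for _ in range(n_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 2))

    def forward(self, csi):
        return self.head(self.blocks(self.stem(csi)))

# Example: 8 CSI samples with real/imag channels, 64 antennas x 100 subcarriers.
positions = CSIPositioningNet()(torch.randn(8, 2, 64, 100))
```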
|
In this paper, one uses a damped potential to present a description of the
running coupling constant of QCD in the confinement phase. Based on a
phenomenological perspective for the Debye screening length, one compares the
running coupling obtained here with both the Brodsky-de T\'eramond-Deur and the
Richardson approaches. The results seem to indicate that the model introduced
here corroborates the Richardson approach. Moreover, the Debye screening mass in the
confinement phase depends on a small parameter, which tends to vanish in the
non-confinement phase of QCD.
|
Measurements of single Higgs production and its decays are in good agreement
with the Standard Model. There is still room for large modifications in double
Higgs production at LHC, though these effects may be correlated with large
corrections to other observables, in particular single Higgs production. In
this work we address the issue of enhancing double Higgs production in the
presence of scalar leptoquarks while satisfying all experimental constraints.
We show at leading order that, when more than one species of leptoquark is
included, large cubic interactions with the Higgs can lead to a sizable
enhancement of the di-Higgs production cross section at the LHC, while at the
same time keeping other
Higgs observables and precision measurements under control. For masses above
800 GeV these corrections are in general below 30%, whereas in a viable
scenario in which one of the leptoquarks can be light, specifically in the mass
range $400-600$ GeV, we show that it is possible to roughly double the SM cross
section for di-Higgs production, implying that possible first hints of it may
be probed at the high luminosity LHC at $\mathcal{L}\sim 2$ ab$^{-1}$.
|
We investigate a quantum non-relativistic system describing the interaction
of two particles with spin 1/2 and spin 0, respectively. Assuming that the
Hamiltonian is rotationally invariant and parity conserving we identify all
such systems which allow additional (pseudo)tensor integrals of motion that are
second order matrix polynomials in the momenta. Previously we found all the
(pseudo)scalar and (axial)vector integrals of motion. No non-obvious tensor
integrals exist. However, nontrivial pseudo-tensor integrals do exist. Together
with our earlier results we give a complete list of such superintegrable
Hamiltonian systems allowing second-order integrals of motion.
|
Graph coloring is often used in parallelizing scientific computations that
run in distributed and multi-GPU environments; it identifies sets of
independent data that can be updated in parallel. Many algorithms exist for
graph coloring on a single GPU or in distributed memory, but to the best of our
knowledge, hybrid MPI+GPU algorithms have been unexplored until this work. We
present several MPI+GPU coloring approaches based on the distributed coloring
algorithms of Gebremedhin et al. and the shared-memory algorithms of Deveci et
al. The on-node parallel coloring uses implementations in KokkosKernels,
which provide parallelization for both multicore CPUs and GPUs. We further
extend our approaches to compute distance-2 and partial distance-2 colorings,
giving the first known distributed, multi-GPU algorithm for these problems. In
addition, we propose a novel heuristic to reduce communication for recoloring
in distributed graph coloring. Our experiments show that our approaches operate
efficiently on inputs too large to fit on a single GPU and scale up to graphs
with 76.7 billion edges running on 128 GPUs.
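For context, a graph coloring assigns colors so that adjacent vertices differ, and each color class is an independent set that can be updated in parallel. The sequential greedy baseline is sketched below purely as a reference point; it is not the distributed MPI+GPU algorithm of the paper.
```python
def greedy_coloring(adjacency):
    """Assign each vertex the smallest color not used by an already-colored neighbor.

    adjacency : dict mapping vertex -> iterable of neighboring vertices
    Returns a dict vertex -> color (non-negative integers).
    """
    colors = {}
    for v in adjacency:
        used = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Vertices sharing a color form an independent set and can be updated in parallel.
example = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(greedy_coloring(example))  # {0: 0, 1: 1, 2: 2, 3: 0}
```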
|
It is often assumed that atoms are hard spheres in the estimation of local
lattice distortion (LLD) in high-entropy alloys (HEAs). However, our study
demonstrates that the hard sphere model misses the key effect, charge transfer
among atoms with different electronegativities, in the understanding of the
stabilization of severely-distorted HEAs. Through the characterization and
simulations of the local structure of the HfNbTiZr HEA, we found that the
charge transfer effect competes with LLD to significantly reduce the average
atomic-size mismatch. Our finding may form the basis for the design of severely
distorted, but stable HEAs.
|
Byzantine fault-tolerant systems have been researched for more than four
decades, and although shown possible early, the solutions were impractical for
a long time. With PBFT, the first practical solution was proposed in 1999; it
spawned new research that has culminated in novel applications using it today.
Although the safety and liveness properties of PBFT-type protocols have been
rigorously analyzed, when it comes to practical performance only empirical
results (often in artificial settings) are known, and imperfections in the
communication channels are not specifically considered. In this work we present
the first performance model for PBFT that specifically considers the impact of
unreliable channels and the use of different transport protocols over them. We
also performed extensive simulations to verify the model and to gain more
insight into the impact of deployment parameters on the overall transaction
time. We show that the use of UDP can lead to significant speedups for PBFT
protocols compared to TCP, when tuned accordingly, even over lossy channels.
Finally, we compared the simulation to a real implementation and measured the
benefits of a developed improvement directly. We found that the design of the
network layer has been overlooked in the past but offers additional room for
improvement when it comes to practical performance. In this work we focus on
the optimistic case with no node failures, as this is hopefully the most
relevant situation.
|
The polarization of Cosmic Microwave Background (CMB) photons is rotated as
they pass through (ultralight-) axion string loops. Studying this birefringence
can reveal valuable information about the axion-photon coupling and the
structure of the string network. We develop an approximate analytic formalism
and identify a kernel function that can be used to calculate the two-point
correlation function for CMB birefringence induced by an arbitrary axion string
network. Using this formalism, we evaluate the birefringence signal for some
simple loop distributions (including scaling and network collapse). We find
that the angular correlation function has a characteristic angular scale set by
$\theta_\mathrm{min}$, which corresponds to the angular extent of the loops at
the time of recombination. This results in a peak in the birefringence power
spectrum around $\ell_p \sim 1/\theta_\mathrm{min}$. An additional scale,
controlled by the axion's mass, is introduced if the network collapses before
today.
|
High-power and narrow-linewidth laser light is a vital tool for atomic
physics, being used for example in laser cooling and trapping and precision
spectroscopy. Here we produce Watt-level laser radiation at 457.49 nm and
460.86 nm of respective relevance for the cooling transitions of cadmium and
strontium atoms. This is achieved via the frequency doubling of a kHz-linewidth
vertical-external-cavity surface-emitting laser (VECSEL), which is based on a
novel gain chip design enabling lasing at > 2 W in the 915-928 nm region.
Following an additional doubling stage, spectroscopy of the $^1S_0\to{}^1P_1$
cadmium transition at 228.89 nm is performed on an atomic beam, with all the
transitions from all eight natural isotopes observed in a single continuous
sweep of more than 4 GHz in the deep ultraviolet. The absolute value of the
transition frequency of Cd-114 and the isotope shifts relative to this
transition are determined, with values for some of these shifts provided for
the first time.
|
Holographic acoustical tweezers (HAT) based on Archimedes-Fermat spiraling
InterDigitated Transducers (S-IDTs) are a versatile tool for the selective
manipulation of microparticles [Baudoin et. al., Sci. Adv., 5: eaav1967 (2019)]
and cells [Baudoin et. al., Nat. Commu., 11, 4244 (2020)] in a standard
microfluidic environment. These binary active holograms produce a focused
helical wave with the ability to trap particles at the vortex core. Yet all
the studies conducted with S-IDTs have so far been restricted to 2D
manipulation only. Here we show (i) that a 3D radiation trap for microparticles
and cells can be obtained with spiraling tweezers of sufficiently large
aperture and (ii) that the particles can be displaced axially by simply tuning
the driving frequency, without any motion of the transducer. This work opens
perspectives for the 3D manipulation of cells and microparticles with
single-beam acoustical tweezers.
|
To assess generalization, machine learning scientists typically either (i)
bound the generalization gap and then (after training) plug in the empirical
risk to obtain a bound on the true risk; or (ii) validate empirically on
holdout data. However, (i) typically yields vacuous guarantees for
overparameterized models. Furthermore, (ii) shrinks the training set and its
guarantee erodes with each re-use of the holdout set. In this paper, we
introduce a method that leverages unlabeled data to produce generalization
bounds. After augmenting our (labeled) training set with randomly labeled fresh
examples, we train in the standard fashion. Whenever classifiers achieve low
error on clean data and high error on noisy data, our bound provides a tight
upper bound on the true risk. We prove that our bound is valid for 0-1
empirical risk minimization and with linear classifiers trained by gradient
descent. Our approach is especially useful in conjunction with deep learning
due to the early-learning phenomenon whereby networks fit true labels before
noisy ones, although it requires one intuitive assumption. Empirically, on canonical
computer vision and NLP tasks, our bound provides non-vacuous generalization
guarantees that track actual performance closely. This work provides
practitioners with an option for certifying the generalization of deep nets
even when unseen labeled data is unavailable and provides theoretical insights
into the relationship between random label noise and generalization.
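A minimal sketch of the recipe described above, using placeholder data and a logistic-regression stand-in for a deep network: pool the clean training set with fresh examples carrying uniformly random labels, train as usual, and record the error on each portion. The regime of interest is low error on the clean portion together with high (near-chance) error on the randomly labeled portion; the actual bound computed from these quantities is derived in the paper and not reproduced here.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def augment_with_random_labels(X_clean, y_clean, X_fresh, n_classes, seed=0):
    """Pool clean labeled data with fresh examples carrying uniformly random labels."""
    rng = np.random.default_rng(seed)
    y_random = rng.integers(0, n_classes, size=len(X_fresh))
    X = np.vstack([X_clean, X_fresh])
    y = np.concatenate([y_clean, y_random])
    is_random = np.concatenate([np.zeros(len(y_clean), dtype=bool),
                                np.ones(len(y_random), dtype=bool)])
    return X, y, is_random

# Placeholder data; in practice X_fresh comes from an unlabeled pool.
rng = np.random.default_rng(1)
X_clean = rng.normal(size=(500, 20))
y_clean = (X_clean[:, 0] > 0).astype(int)          # a learnable clean signal
X_fresh = rng.normal(size=(200, 20))

X, y, is_random = augment_with_random_labels(X_clean, y_clean, X_fresh, n_classes=2)
clf = LogisticRegression(max_iter=1000).fit(X, y)

clean_error = 1 - clf.score(X[~is_random], y[~is_random])  # error on the clean portion
noisy_error = 1 - clf.score(X[is_random], y[is_random])    # error on the random-label portion
print(clean_error, noisy_error)
```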
|
The increasing digitization and interconnection of legacy Industrial Control
Systems (ICSs) open new vulnerability surfaces, exposing such systems to
malicious attackers. Furthermore, since ICSs are often employed in critical
infrastructures (e.g., nuclear plants) and manufacturing companies (e.g.,
chemical industries), attacks can lead to devastating physical damages. In
dealing with this security requirement, the research community focuses on
developing new security mechanisms such as Intrusion Detection Systems (IDSs),
facilitated by leveraging modern machine learning techniques. However, these
algorithms require a testing platform and a considerable amount of data to be
trained and tested accurately. To satisfy this prerequisite, Academia,
Industry, and Government are increasingly proposing testbeds (i.e., scaled-down
versions of ICSs, or simulations) to test the performance of IDSs.
Furthermore, to enable researchers to cross-validate security systems (e.g.,
security-by-design concepts or anomaly detectors), several datasets have been
collected from testbeds and shared with the community. In this paper, we
provide a deep and comprehensive overview of ICSs, presenting the architecture
design, the employed devices, and the security protocols implemented. We then
collect, compare, and describe testbeds and datasets in the literature,
highlighting key challenges and design guidelines to keep in mind in the design
phases. Furthermore, we enrich our work by reporting the best-performing IDS
algorithms tested on each dataset, creating a baseline for the state of the art
in this field. Finally, driven by knowledge accumulated during this survey's
development, we report advice and good practices on the development, the
choice, and the utilization of testbeds, datasets, and IDSs.
|
Strong gravitational lensing of gravitational wave sources offers a novel
probe of both the lens galaxy and the binary source population. In particular,
the strong lensing event rate and the time delay distribution of
multiply-imaged gravitational-wave binary coalescence events can be used to
constrain the mass distribution of the lenses as well as the intrinsic
properties of the source population. We calculate the strong lensing event rate
for a range of second (2G) and third generation (3G) detectors, including
Advanced LIGO/Virgo, A+, Einstein Telescope (ET), and Cosmic Explorer (CE). For
3G detectors, we find that {$\sim0.1\%$} of observed events are expected to be
strongly lensed. We predict detections of {$\sim 1$} lensing pair per year with
A+, and {$\sim 50$} pairs {per year} with ET/CE. These rates are highly
sensitive to the characteristic galaxy velocity dispersion, $\sigma_*$,
implying that observations of the rates will be a sensitive probe of lens
properties. We explore using the time delay distribution between
multiply-imaged gravitational-wave sources to constrain properties of the
lenses. We find that 3G detectors would constrain $\sigma_*$ to {$\sim21\%$
after 5 years}. Finally, we show that the presence or absence of strong lensing
{within the detected population} provides useful insights into the source
redshift and mass distribution out to redshifts beyond the peak of the star
formation rate, which can be used to constrain formation channels and their
relation to the star formation rate and delay time distributions for these
systems.
|
Two new classes of skew codes over a finite field $\F$ are proposed, called
skew convolutional codes and skew trellis codes. These two classes are defined
by, respectively, left or right sub-modules over the skew fields of fractions
of skew polynomials over $\F$. The skew convolutional codes can be represented
as periodic time-varying ordinary convolutional codes. The skew trellis codes
are in general nonlinear over $\F$. Every code from both classes has a code
trellis and can be decoded by Viterbi or BCJR algorithms.
|
A key challenge for abstractive summarization is ensuring factual consistency
of the generated summary with respect to the original document. For example,
state-of-the-art models trained on existing datasets exhibit entity
hallucination, generating names of entities that are not present in the source
document. We propose a set of new metrics to quantify the entity-level factual
consistency of generated summaries and we show that the entity hallucination
problem can be alleviated by simply filtering the training data. In addition,
we add a summary-worthy entity classification task to the training process, as
well as a joint entity and summary generation approach, both of which yield
further improvements in entity-level metrics.
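As a rough illustration of an entity-level consistency metric (the paper's metric definitions may differ in detail), one can compute the fraction of named entities in a generated summary that also appear verbatim in the source document, e.g. with spaCy:
```python
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def entity_precision(source: str, summary: str) -> float:
    """Fraction of summary entities that also occur (as surface strings) in the source.

    A low value signals entity hallucination; returns 1.0 if the summary has no entities.
    """
    source_text = source.lower()
    summary_ents = {ent.text.lower() for ent in nlp(summary).ents}
    if not summary_ents:
        return 1.0
    supported = sum(ent in source_text for ent in summary_ents)
    return supported / len(summary_ents)

source = "The meeting between Angela Merkel and Emmanuel Macron took place in Berlin."
summary = "Merkel met Macron in Paris."
print(entity_precision(source, summary))  # 'Paris' is unsupported, lowering the score
```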
|
This paper considers distributed estimation of linear systems when the state
observations are corrupted with Gaussian noise of unbounded support and under
possible random adversarial attacks. We consider sensors equipped with single
time-scale estimators and local chi-square ($\chi^2$) detectors to
simultaneously observe the states, share information, fuse the
noise/attack-corrupted data locally, and detect possible anomalies in their own
observations. While this scheme is applicable to a wide variety of systems
associated with full-rank (invertible) matrices, we discuss it within the
context of distributed inference in social networks. The proposed technique
outperforms existing results in the sense that: (i) we consider Gaussian noise
with no simplifying upper-bound assumption on the support; (ii) all existing
$\chi^2$-based techniques are centralized while our proposed technique is
distributed, where the sensors \textit{locally} detect attacks, with no central
coordinator, using specific probabilistic thresholds; and (iii) no
local-observability assumption at a sensor is made, which makes our method
feasible for large-scale social networks. Moreover, we consider a Linear Matrix
Inequalities (LMI) approach to design block-diagonal gain (estimator) matrices
under appropriate constraints for isolating the attacks.
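A minimal sketch of a local chi-square detector of the kind described: each sensor normalizes its innovation by the innovation covariance and compares the statistic against a chi-square quantile at a chosen false-alarm probability. The test statistic and threshold choice here are generic assumptions, not the exact design of the paper.
```python
import numpy as np
from scipy.stats import chi2

def chi_square_detector(innovation, innovation_cov, false_alarm_prob=0.01):
    """Flag an anomaly if the normalized innovation exceeds the chi-square threshold.

    innovation     : residual y - C x_hat at one sensor (shape [m])
    innovation_cov : its covariance matrix (shape [m, m])
    """
    m = innovation.shape[0]
    statistic = innovation @ np.linalg.solve(innovation_cov, innovation)
    threshold = chi2.ppf(1 - false_alarm_prob, df=m)
    return statistic > threshold, statistic, threshold

# Hypothetical scalar-measurement example.
flag, stat, thr = chi_square_detector(np.array([3.0]), np.array([[1.0]]))
print(flag, stat, thr)  # alarm: 9.0 exceeds the chi-square threshold of ~6.63
```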
|
A double-phase argon Time Projection Chamber (TPC), with an active mass of
185 g, has been designed and constructed for the Recoil Directionality (ReD)
experiment. The aim of the ReD project is to investigate the directional
sensitivity of argon-based TPCs via columnar recombination to nuclear recoils
in the energy range of interest (20-200 keV$_{nr}$) for direct dark matter
searches. The key novel feature of the ReD TPC is a readout system based on
cryogenic Silicon Photomultipliers, which are employed and operated
continuously for the first time in an argon TPC. Over the course of six months,
the ReD TPC was commissioned and characterised under various operating
conditions using $\gamma$-ray and neutron sources, demonstrating remarkable
stability of the optical sensors and reproducibility of the results. The
scintillation gain and ionisation amplification of the TPC were measured to be
$g_1 = (0.194 \pm 0.013)$ PE/photon and $g_2 = (20.0 \pm 0.9)$ PE/electron,
respectively. The ratio of the ionisation to scintillation signals (S2/S1),
instrumental for the positive identification of a candidate directional signal
induced by WIMPs, has been investigated for both nuclear and electron recoils.
At a drift field of 183 V/cm, an S2/S1 dispersion of 12% was measured for
nuclear recoils of approximately 60-90 keV$_{nr}$, as compared to 18% for
electron recoils depositing 60 keV of energy. The detector performance reported
here meets the requirements needed to achieve the principal scientific goals of
the ReD experiment in the search for a directional effect due to columnar
recombination. A phenomenological parameterisation of the recombination
probability in LAr is presented and employed for modeling the dependence of
scintillation quenching and charge yield on the drift field for electron
recoils between 50-500 keV and fields up to 1000 V/cm.
|
Although distributed machine learning has opened up numerous frontiers of
research, the separation of large models across different devices, nodes, and
sites can invite significant communication overhead, making reliable training
difficult.
The focus on gradients as the primary shared statistic during training has
led to a number of intuitive algorithms for distributed deep learning; however,
gradient-based algorithms for training large deep neural networks (DNNs) are
communication-heavy, often requiring additional modifications via sparsity
constraints, compression, quantization, and other similar approaches, to lower
bandwidth.
We introduce a surprisingly simple statistic for training distributed DNNs
that is more communication-friendly than the gradient. The error
backpropagation process can be modified to share these smaller intermediate
values instead of the gradient, reducing communication overhead with no impact
on accuracy. The process provides the flexibility of averaging gradients during
backpropagation, enabling novel flexible training schemas while leaving room
for further bandwidth reduction via existing gradient compression methods.
Finally, consideration of the matrices used to compute the gradient inspires a
new approach to compression via structured power iterations, which can not only
reduce bandwidth but also enable introspection into distributed training
dynamics, without significant performance loss.
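The abstract does not name the statistic; as a hedged illustration of why intermediate backpropagation quantities can be cheaper to communicate than gradients, note that for a fully connected layer the weight gradient is a product of the backpropagated errors and the layer inputs, so sharing those two factors costs far fewer values than the full gradient for large layers:
```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_out = 32, 4096, 4096

activations = rng.standard_normal((batch, n_in))   # layer inputs (forward pass)
deltas = rng.standard_normal((batch, n_out))       # backpropagated errors at the output

# Full weight gradient for W (shape [n_in, n_out]): summed outer products over the batch.
grad_W = activations.T @ deltas

# Communicating the two factors instead of grad_W is much smaller for large layers:
values_for_gradient = n_in * n_out                  # ~16.8M values
values_for_factors = batch * (n_in + n_out)         # ~0.26M values
print(values_for_factors / values_for_gradient)     # ~0.016, i.e. ~64x less traffic

# A worker receiving the factors can reconstruct (or average) the gradient locally.
assert np.allclose(grad_W, activations.T @ deltas)
```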
|
Field observations form the basis of many scientific studies, especially in
ecological and social sciences. Despite efforts to conduct such surveys in a
standardized way, observations can be prone to systematic measurement errors.
The removal of systematic variability introduced by the observation process, if
possible, can greatly increase the value of this data. Existing non-parametric
techniques for correcting such errors assume linear additive noise models. This
leads to biased estimates when applied to generalized linear models (GLM). We
present an approach based on residual functions to address this limitation. We
then demonstrate its effectiveness on synthetic data and show it reduces
systematic detection variability in moth surveys.
|
We present an approach based on density-functional theory for the calculation
of fundamental gaps of both finite and periodic two-dimensional (2D) electronic
systems. The computational cost of our approach is comparable to that of total
energy calculations performed via standard semi-local forms. We achieve this by
replacing the 2D local density approximation with a more sophisticated -- yet
computationally simple -- orbital-dependent modeling of the exchange potential
within the procedure by Guandalini et al. [Phys. Rev. B 99, 125140 (2019)]. We
showcase promising results for semiconductor 2D quantum dots and artificial
graphene systems, where the band structure can be tuned through, e.g., Kekul\'e
distortion.
|
Many weakly supervised classification methods employ a noise transition
matrix to capture the class-conditional label corruption. To estimate the
transition matrix from noisy data, existing methods often need to estimate the
noisy class-posterior, which could be unreliable due to the overconfidence of
neural networks. In this work, we propose a theoretically grounded method that
can estimate the noise transition matrix and learn a classifier simultaneously,
without relying on the error-prone noisy class-posterior estimation.
Concretely, inspired by the characteristics of the stochastic label corruption
process, we propose total variation regularization, which encourages the
predicted probabilities to be more distinguishable from each other. Under mild
assumptions, the proposed method yields a consistent estimator of the
transition matrix. We show the effectiveness of the proposed method through
experiments on benchmark and real-world datasets.
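As a hedged sketch of the idea (the paper's exact regularizer, weighting and estimator are not reproduced here), one can subtract a pairwise total-variation term from the training loss so that predicted class-probability vectors in a mini-batch are pushed to be more distinguishable:
```python
import torch
import torch.nn.functional as F

def tv_regularized_loss(logits, noisy_labels, tv_weight=0.1):
    """Cross-entropy on noisy labels minus a pairwise total-variation term.

    Subtracting the mean pairwise TV distance rewards predicted distributions
    that are far apart from each other, i.e. more distinguishable.
    """
    probs = logits.softmax(dim=1)                        # [batch, n_classes]
    ce = F.cross_entropy(logits, noisy_labels)
    # Pairwise total variation: 0.5 * L1 distance between probability vectors.
    pairwise_tv = 0.5 * torch.cdist(probs, probs, p=1)   # [batch, batch]
    batch = probs.shape[0]
    mean_tv = pairwise_tv.sum() / (batch * (batch - 1))  # exclude the zero diagonal
    return ce - tv_weight * mean_tv

# Example usage with random logits and labels.
logits = torch.randn(16, 10, requires_grad=True)
labels = torch.randint(0, 10, (16,))
loss = tv_regularized_loss(logits, labels)
loss.backward()
```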
|
I suggest a novel solution of the inflation and reheating problems of the
very early universe. My starting point is to directly solve the evolution
equation system of the slow-roll parameters rather than to build an inflaton
potential. My model can completely calculate the time evolution of the
inflation and reheating processes provided a few boundary values. The numerical
results of the model not only clearly show the slow-roll characteristic of the
inflation and the unconventional mechanism of the inflaton mass generation, but
also perfectly reproduce all of the measured data of the inflation. In
addition, the model establishes relationships among inflation, reheating and
particle physics; in particular, it predicts that the reheating duration is
$\approx1.1$ times the inflaton lifetime, that $r_{0.002}$ is one to two orders
of magnitude smaller than its current upper bound, and so on. Finally, it may
well be possible to test the model in the near future.
|
A popular way to create detailed yet easily controllable 3D shapes is via
procedural modeling, i.e. generating geometry using programs. Such programs
consist of a series of instructions along with their associated parameter
values. To fully realize the benefits of this representation, a shape program
should be compact and only expose degrees of freedom that allow for meaningful
manipulation of output geometry. One way to achieve this goal is to design
higher-level macro operators that, when executed, expand into a series of
commands from the base shape modeling language. However, manually authoring
such macros, much like shape programs themselves, is difficult and largely
restricted to domain experts. In this paper, we present ShapeMOD, an algorithm
for automatically discovering macros that are useful across large datasets of
3D shape programs. ShapeMOD operates on shape programs expressed in an
imperative, statement-based language. It is designed to discover macros that
make programs more compact by minimizing the number of function calls and free
parameters required to represent an input shape collection. We run ShapeMOD on
multiple collections of programs expressed in a domain-specific language for 3D
shape structures. We show that it automatically discovers a concise set of
macros that abstract out common structural and parametric patterns that
generalize over large shape collections. We also demonstrate that the macros
found by ShapeMOD improve performance on downstream tasks including shape
generative modeling and inferring programs from point clouds. Finally, we
conduct a user study that indicates that ShapeMOD's discovered macros make
interactive shape editing more efficient.
|
In some scientific fields, it is common to have certain variables of interest
that are of particular importance and for which there are many studies
indicating a relationship with a different explanatory variable. In such cases,
particularly those where no relationships are known among explanatory
variables, it is worth asking under what conditions it is possible for all such
claimed effects to exist simultaneously. This paper addresses this question by
reviewing some theorems from multivariate analysis which show that, unless the
explanatory variables also have sizable effects on each other, it is impossible
to have many such large effects. We also discuss implications for the
replication crisis in social science.
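A simple instance of the kind of constraint involved (an illustration under the assumption of mutually uncorrelated predictors, not necessarily the exact theorem invoked in the paper): if $X_1,\dots,X_k$ are uncorrelated with each other, then
\[
\sum_{i=1}^{k}\operatorname{corr}(X_i,Y)^2 = R^2 \le 1,
\]
so at most four such predictors can each have $|\operatorname{corr}(X_i,Y)|\ge 0.5$; many simultaneously large effects therefore force the explanatory variables to have sizable effects on each other.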
|
Convolutions are the core operation of deep learning applications based on
Convolutional Neural Networks (CNNs). Current GPU architectures are highly
efficient for training and deploying deep CNNs, and hence, these are largely
used in production for this purpose. State-of-the-art implementations, however,
are inefficient for some commonly used network configurations.
In this paper we propose a GPU-based implementation of the convolution
operation for CNN inference that favors coalesced accesses, without requiring
prior data transformations. Our experiments demonstrate that our proposal
yields notable performance improvements in a range of common CNN forward
propagation convolution configurations, with speedups of up to 2.29x with
respect to the best implementation of convolution in cuDNN, hence covering a
relevant region in currently existing approaches.
|
In this paper we prove two results pertaining to the (unramified and global)
geometric Langlands program. The first result is an analogue of the Ramanujan
conjecture: any cuspidal D-module on Bun_G is tempered. We actually prove a
more general statement: any D-module that is *-extended from a quasi-compact
open substack of Bun_G is tempered. Then the assertion about cuspidal objects
is an immediate consequence of a theorem of Drinfeld-Gaitsgory. Building up on
this, we prove our second main result, the automorphic gluing theorem for the
group SL_2: it states that any D-module on Bun_{SL_2} is determined by its
tempered part and its constant term. This theorem (vaguely speaking, an
analogue of Langlands' classification for the group SL_2(R)) corresponds under
geometric Langlands to the spectral gluing theorem of Arinkin-Gaitsgory and the
author.
|
We find all solutions to the parametrized family of norm-form equations
$x^3-(t^3-1)y^3+3(t^3-1)xy+(t^3-1)^2 = \pm 1$ studied by Amoroso, Masser and
Zannier. Our proof relies upon an appeal to lower bounds for linear forms in
logarithms and various elementary arguments.
|
In this paper, we have designed and employed a suspended-wall silo to remove
the Janssen effect in order to directly and systematically explore the local
pressure dependence of Granular Orifice Flow (GOF). We find that once the
Janssen effect is removed, the flow rate Q changes linearly with the external
pressure. The slope $\alpha$ of this linear change decays exponentially with
the ratio of the silo size to the size of the orifice, $\Phi/D$, which suggests
the existence of a characteristic ratio $\lambda$ ($\sim$2.4). When
$\Phi/D > \lambda$, $\alpha$ gradually decays to zero and the effect of
external pressure on the GOF becomes negligible, so that the Beverloo law is
recovered. Our results show that the Janssen effect is not a determining factor
of the constant rate of GOF, although it may contribute to shielding the top
load. The key parameter in GOF is $\Phi/D$. At small $\Phi/D$, the flow rate of
GOF can be directly adjusted by the external pressure via our suspended-wall
setup, which may be useful for the transportation of granules in microgravity
environments where the gravity-driven Beverloo law is disabled.
|
Recent experiments in quantum simulators have provided evidence for the
Many-Body Localized (MBL) phase in 1D and 2D bosonic quantum matter. The
theoretical study of such bosonic MBL, however, is a daunting task due to the
unbounded nature of its Hilbert space. In this work, we introduce a method to
compute the long-time real-time evolution of 1D and 2D bosonic systems in an
MBL phase at strong disorder and weak interactions. We focus on local dynamical
indicators that are able to distinguish an MBL phase from an Anderson localized
one. In particular, we consider the temporal fluctuations of local observables,
the spatiotemporal behavior of two-time correlators and Out-Of-Time-Correlators
(OTOCs). We show that these few-body observables can be computed with a
computational effort that depends only polynomially on system size but is
independent of the target time, by extending a recently proposed numerical
method [Phys. Rev. B 99, 241114 (2019)] to mixed states and bosons. Our method
also allows us to complement our numerical study with analytical considerations
of the time-dependent behavior of the studied quantities.
|
Benford's Law (BL) or the Significant Digit Law defines the probability
distribution of the first digit of numerical values in a data sample. This Law
is observed in many naturally occurring datasets. It can be seen as a measure
of naturalness of a given distribution and finds its application in areas like
anomaly and fraud detection. In this work, we address the following question:
Is the distribution of the Neural Network parameters related to the network's
generalization capability? To that end, we first define a metric, MLH (Model
Enthalpy), that measures the closeness of a set of numbers to Benford's Law and
we show empirically that it is a strong predictor of Validation Accuracy.
Second, we use MLH as an alternative to Validation Accuracy for Early Stopping,
removing the need for a Validation set. We provide experimental evidence that
even if the optimal size of the validation set is known beforehand, the peak
test accuracy attained is lower than when not using a validation set at all.
Finally, we investigate the connection of BL to the Free Energy Principle and
the First Law of Thermodynamics, showing that MLH can be viewed as a component
of the internal energy of the learning system, with optimization analogous to
minimizing the total energy to attain equilibrium.
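As a hedged sketch of the kind of closeness measure involved (the exact MLH definition is given in the paper, not here), one can compare the first-digit distribution of a model's parameters with Benford's distribution, for instance via a KL divergence:
```python
import numpy as np

BENFORD = np.log10(1 + 1 / np.arange(1, 10))   # P(first digit = d), d = 1..9

def first_digit_distribution(values, eps=1e-12):
    """Empirical distribution of the leading digit of |values| (zeros ignored)."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > eps]
    exponents = np.floor(np.log10(v))
    digits = np.clip((v / 10.0 ** exponents).astype(int), 1, 9)  # leading digit
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

def benford_divergence(values):
    """KL divergence between the empirical first-digit distribution and Benford's Law."""
    p = first_digit_distribution(values) + 1e-12
    return float(np.sum(p * np.log(p / BENFORD)))

# Example: flattened network weights (random normals here as a stand-in).
weights = np.random.default_rng(0).normal(scale=0.05, size=100_000)
print(benford_divergence(weights))   # closer to 0 means closer to Benford's Law
```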
|
Predicting the densest random disc packing fraction is an unsolved paradigm
problem relevant to a number of disciplines and technologies. One difficulty is
that it is ill-defined without setting a criterion for the disorder. Another is
that the density depends on the packing protocol and the multitude of possible
protocol parameters has so far hindered a general solution. A new approach is
proposed here. After formulating a well-posed form of the general
protocol-independent problem for planar packings of discs, a systematic
criterion is proposed to avoid crystalline hexagonal order as well as further
topological order. The highest possible random packing fraction is then derived
exactly: $\phi_{RCP}=0.852525...$. The solution is based on the cell order
distribution that is shown to: (i) yield directly the packing fraction; (ii)
parameterise all possible packing protocols; (iii) make it possible to define
and limit all topological disorder. The method is further useful for predicting
the highest packing fraction in specific protocols, which is illustrated for a
family of simply-sheared packings that generate maximum-entropy cell order
distributions.
|
Quantum state tomography (QST) is a crucial ingredient for almost all aspects
of experimental quantum information processing. As an analog of the "imaging"
technique in the quantum setting, QST is inherently a data science problem,
where machine learning techniques, notably neural networks, have been
applied extensively. In this work, we build an integrated all-optical setup for
neural network QST, based on an all-optical neural network (AONN). Our AONN is
equipped with built-in nonlinear activation function, which is based on
electromagnetically induced transparency. Experimental results demonstrate the
validity and efficiency of the all-optical setup, indicating that the AONN can
mitigate the state-preparation-and-measurement error and predict the phase
parameter in the quantum state accurately. Given that optical setups are highly
desired for future quantum networks, our all-optical setup of integrated
AONN-QST may shed light on replenishing the all-optical quantum network with
the last brick.
|
Recent advances in training deep learning models have demonstrated the
potential to provide accurate chest X-ray interpretation and increase access to
radiology expertise. However, poor generalization due to data distribution
shifts in clinical settings is a key barrier to implementation. In this study,
we measured the diagnostic performance for 8 different chest X-ray models when
applied to (1) smartphone photos of chest X-rays and (2) external datasets
without any finetuning. All models were developed by different groups and
submitted to the CheXpert challenge, and re-applied to test datasets without
further tuning. We found that (1) on photos of chest X-rays, all 8 models
experienced a statistically significant drop in task performance, but only 3
performed significantly worse than radiologists on average, and (2) on the
external set, none of the models performed statistically significantly worse
than radiologists, and five models performed statistically significantly better
than radiologists. Our results demonstrate that some chest X-ray models, under
clinically relevant distribution shifts, were comparable to radiologists while
other models were not. Future work should investigate aspects of model training
procedures and dataset collection that influence generalization in the presence
of data distribution shifts.
|
In this work we characterise the properties of the object SDSS
J020536.84-081424.7, an extended nebular region with projected extension of $14
\times 14$ kpc$^{2}$ in the line of sight of the ETG Mrk 1172, using
unprecedented spectroscopic data from MUSE. We perform a spatially resolved
stellar population synthesis and estimate the stellar mass for both Mrk 1172
($1 \times 10^{11} M_{\odot}$) and our object of study ($3 \times 10^{9}
M_{\odot}$). While the stellar content of Mrk 1172 is dominated by an old
($\sim 10$ Gyr) stellar population, the extended nebular emission has its light
dominated by young to intermediate age populations (from $\sim 100$ Myr to
$\sim 1$ Gyr) and presents strong emission lines such as: H${\beta}$, [O III]
${\lambda}{\lambda}$4959,5007, H${\alpha}$, [N II]
${\lambda}{\lambda}$6549,6585 and [S II] ${\lambda}{\lambda}$6717,6732. Using
these emission lines we find that it is metal-poor (with $Z \sim$ 1/3
$Z_{\odot}$, comparable to the LMC) and is actively forming stars ($0.70$
M$_{\odot}$ yr$^{-1}$), especially in a few bright clumpy knots that are
readily visible in H${\alpha}$. The object has an ionised gas mass $\geq 3.8
\times 10^{5}$ M$_{\odot}$. Moreover, the motion of the gas is well described
by a gas in circular orbit in the plane of a disk and is being affected by
interaction with Mrk 1172. We conclude that SDSS J020536.84-081424.7 is most
likely a dwarf irregular galaxy (dIGal).
|
We construct new examples of exceptional Hahn and Jacobi polynomials.
Exceptional polynomials are orthogonal polynomials with respect to a measure
which are also eigenfunctions of a second order difference or differential
operator. The most apparent difference between classical or classical discrete
orthogonal polynomials and their exceptional counterparts is that the
exceptional families have gaps in their degrees, in the sense that not all
degrees are present in the sequence of polynomials. The new examples have the
novelty that they depend on an arbitrary number of continuous parameters.
|
Weak instruments present a major setback to empirical work. This paper
introduces an estimator that admits weak, uncorrelated, or mean-independent
instruments that are non-independent of endogenous covariates. Relative to
conventional instrumental variable methods, the proposed estimator weakens the
relevance condition considerably without imposing a stronger exclusion
restriction. Identification mainly rests on (1) a weak conditional median
exclusion restriction imposed on pairwise differences in disturbances and (2)
non-independence between covariates and instruments. Under mild conditions, the
estimator is consistent and asymptotically normal. Monte Carlo experiments
showcase an excellent performance of the estimator, and two empirical examples
illustrate its practical utility.
|
We propose a novel thermal production mechanism for dark matter based on the
idea that dark matter particles $\chi$ can transform (`infect') heat bath
particles $\psi$: $\chi \psi \rightarrow \chi \chi$. For a small initial
abundance of $\chi$ this induces an exponential growth in the dark matter
number density, closely resembling the epidemic curves of a spreading pathogen
after an initial outbreak. To quantify this relation we present a sharp duality
between the Boltzmann equation for the dark matter number density and
epidemiological models for the spread of infectious diseases. Finally we
demonstrate that the exponential growth naturally stops before $\chi$
thermalizes with the heat bath, corresponding to a triumphant `flattening of
the curve' that matches the observed dark matter abundance.
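As a hedged illustration of the stated duality (the full Boltzmann equation in the paper includes Hubble dilution and thermal averaging, omitted here), the transformation process $\chi \psi \rightarrow \chi \chi$ gives, schematically,
\[
\frac{dn_\chi}{dt} \simeq \langle\sigma v\rangle\, n_\psi\, n_\chi ,
\]
which has the same structure as the SI epidemic equation $dI/dt=\beta S I$: the dark matter density grows exponentially while the bath density $n_\psi$ is effectively unchanged, and the growth saturates once the transformation rate becomes inefficient, flattening the curve.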
|