The question of whether to use one classifier or a combination of classifiers
is a central topic in Machine Learning. We propose here a method for finding an
optimal linear combination of classifiers derived from a bias-variance
framework for the classification task.
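As a rough illustration of the general idea (not the paper's bias-variance derivation), combination weights for pre-trained classifiers can be fitted on held-out predictions; the models, synthetic dataset, and non-negativity constraint below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Fit non-negative combination weights on a validation split by least squares
# against the labels, then normalize to a convex combination.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

models = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
cols = []
for m in models:
    m.fit(X_tr, y_tr)
    cols.append(m.predict_proba(X_val)[:, 1])   # P(class 1) from each model
P = np.column_stack(cols)                        # shape: (n_val, n_models)

w, _ = nnls(P, y_val.astype(float))              # non-negative least squares
w /= w.sum()                                     # convex combination weights
combined = P @ w
print("weights:", w, "val accuracy:", ((combined > 0.5) == y_val).mean())
```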
|
By fitting stellar populations to SDSS-IV MaNGA survey observations of ~7000
suitably-weighted individual galaxies, we reconstruct the star-formation
history of the Universe, which we find to be in reasonable agreement with
previous studies. Dividing the galaxies by their present-day stellar mass, we
demonstrate the downsizing phenomenon, whereby the more massive galaxies hosted
the most star-formation at earlier times. Further dividing the galaxy sample by
colour and morphology, we find that a galaxy's present-day colour tells us more
about its historical contribution to the cosmic star formation history than its
current morphology. We show that downsizing effects are greatest among galaxies
currently in the blue cloud, but that the level of downsizing in galaxies of
different morphologies depends quite sensitively on the morphological
classification used, due largely to the difficulty in classifying the smaller
low-mass galaxies from their ground-based images. Nevertheless, we find
agreement that among galaxies with stellar masses
$M_{\star}>6\times10^{9}\,M_{\odot}$, downsizing is most significant in
spirals. However, there are complicating factors. For example, for more massive
galaxies, we find that colour and morphology are predictors of the past star
formation over a longer timescale than in less massive systems. Presumably this
effect is reflecting the longer period of evolution required to alter these
larger galaxies' physical properties, but shows that conclusions based on any
single property don't tell the full story.
|
Model predictive control (MPC) schemes are commonly designed with fixed,
i.e., time-invariant, horizon length and cost functions. If no stabilizing
terminal ingredients are used, stability can be guaranteed via a sufficiently
long horizon. A suboptimality index can be derived that gives bounds on the
performance of the MPC law over an infinite horizon (IH). While for
time-invariant schemes such an index can be computed offline, less attention has
been paid to time-varying strategies with adapting cost functions, which can be
found, e.g., in learning-based optimal control. This work addresses the
performance bounds of nonlinear MPC with stabilizing horizon and time-varying
terminal cost. A scheme is proposed that uses the decay of the optimal
finite-horizon cost, combined with a history stack, to predict the bounds on the
IH performance. Based on online information on the decay rate, the performance
bound estimate is improved while the terminal cost is adapted using methods
from adaptive dynamic programming. The adaptation of the terminal cost leads to
performance improvement over a time-invariant scheme with the same horizon
length. The approach is demonstrated in a case study.
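A minimal sketch of the flavour of such a scheme, assuming a toy scalar system, a quadratic terminal cost, and a TD-style weight update standing in for the paper's ADP machinery; everything below is a placeholder, not the proposed algorithm.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x, u: x + 0.1 * (x - x**3 + u)     # toy nonlinear dynamics
l = lambda x, u: x**2 + 0.1 * u**2            # stage cost
N = 8                                         # prediction horizon

def horizon_cost(u_seq, x0, p):
    x, J = x0, 0.0
    for u in u_seq:
        J += l(x, u)
        x = f(x, u)
    return J + p * x**2                       # time-varying terminal cost

x, p = 2.0, 1.0
for t in range(40):
    res = minimize(horizon_cost, np.zeros(N), args=(x, p))
    u0 = res.x[0]                             # apply only the first input
    xn = f(x, u0)
    # one-step Bellman (temporal-difference) update of the terminal weight
    td = l(x, u0) + p * xn**2 - p * x**2
    p = max(p + 0.1 * td / max(x**2, 1e-6), 0.0)
    x = xn
print(f"final state {x:.4f}, adapted terminal weight {p:.3f}")
```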
|
Microlensing is a powerful tool for discovering cold exoplanets, and the
Roman Space Telescope microlensing survey will discover over 1000 such planets.
Rapid, automated classification of Roman's microlensing events can be used to
prioritize follow-up observations of the most interesting events. Machine
learning is now often used for classification problems in astronomy, but the
success of such algorithms can rely on the definition of appropriate features
that capture essential elements of the observations that can map to parameters
of interest. In this paper, we introduce tools that we have developed to
capture features in simulated Roman light curves of different types of
microlensing events, and evaluate their effectiveness in classifying
microlensing light curves. These features are quantified as parameters that can
be used to decide the likelihood that a given light curve is due to a specific
type of microlensing event. This method leaves us with a list of parameters
that describe features like the smoothness of the peak, symmetry, the number of
peaks, and width and height of small deviations from the main peak. This will
allow us to quickly analyze a set of microlensing light curves and later use
the resulting parameters as input to machine learning algorithms to classify
the events.
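A sketch of what such feature extraction could look like on a single simulated light curve, here a Paczynski-like curve with an added planetary bump; the thresholds and feature definitions are illustrative assumptions, not the paper's tools.

```python
import numpy as np
from scipy.signal import find_peaks

def light_curve_features(t, A):
    peaks, props = find_peaks(A, prominence=0.05, width=1)
    order = np.argsort(props["prominences"])[::-1]
    main = peaks[order[0]]
    # symmetry: compare the curve mirrored about the main peak
    k = min(main, len(A) - main - 1)
    left, right = A[main - k:main], A[main + 1:main + 1 + k][::-1]
    feats = {
        "n_peaks": len(peaks),
        "symmetry": 1.0 - np.mean(np.abs(left - right)) / np.ptp(A),
        "smoothness": np.mean(np.abs(np.diff(A, 2))) / np.ptp(A),
    }
    if len(peaks) > 1:   # width and height of the largest secondary deviation
        feats["sec_height"] = props["prominences"][order[1]]
        feats["sec_width"] = props["widths"][order[1]] * (t[1] - t[0])
    return feats

# Paczynski single-lens magnification plus a small Gaussian planetary bump
t = np.linspace(-20, 20, 2001)
u = np.hypot(0.3, t / 10.0)
A = (u**2 + 2) / (u * np.sqrt(u**2 + 4))
A += 0.3 * np.exp(-0.5 * ((t - 5.0) / 0.3) ** 2)
print(light_curve_features(t, A))
```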
|
An effective email search engine can facilitate users' search tasks and
improve their communication efficiency. Users could have varied preferences on
various ranking signals of an email, such as relevance and recency based on
their tasks at hand and even their jobs. Thus a uniform matching pattern is not
optimal for all users. Instead, an effective email ranker should conduct
personalized ranking by taking users' characteristics into account. Existing
studies have explored user characteristics from various angles to make email
search results personalized. However, little attention has been given to users'
search history for characterizing users. Although users' historical behaviors
have been shown to be beneficial as context in Web search, their effect in
email search has not been studied and remains unknown. Given these
observations, we propose to leverage user search history as query context to
characterize users and build a context-aware ranking model for email search. In
contrast to previous context-dependent ranking techniques that are based on raw
texts, we use ranking features in the search history. This frees us from
potential privacy leakage while giving better generalization power to unseen
users. Accordingly, we propose a context-dependent neural ranking model (CNRM)
that encodes the ranking features in users' search history as query context and
show that it can significantly outperform the baseline neural model without
using the context. We also investigate the benefit of the query context vectors
obtained from CNRM on the state-of-the-art learning-to-rank model LambdaMart by
clustering the vectors and incorporating the cluster information. Experimental
results show that significantly better results can be achieved on LambdaMart as
well, indicating that the query clusters can characterize different users and
effectively turn the ranking model personalized.
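A minimal sketch of the core idea, attention-pooling of ranking-feature vectors from the search history into a query-context vector, in PyTorch; the layer sizes and scoring head are assumptions, not the exact CNRM architecture.

```python
import torch
import torch.nn as nn

class ContextAwareRanker(nn.Module):
    """Pool a user's history of ranking-feature vectors into a query context,
    then score candidate emails conditioned on that context."""
    def __init__(self, n_feat, d=32):
        super().__init__()
        self.enc = nn.Linear(n_feat, d)
        self.attn = nn.Linear(d, 1)
        self.score = nn.Sequential(nn.Linear(n_feat + d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, history, candidates):
        # history: (B, H, n_feat) past-result features; candidates: (B, C, n_feat)
        h = torch.tanh(self.enc(history))
        w = torch.softmax(self.attn(h), dim=1)       # attention over history items
        ctx = (w * h).sum(dim=1, keepdim=True)       # (B, 1, d) query context
        ctx = ctx.expand(-1, candidates.size(1), -1)
        return self.score(torch.cat([candidates, ctx], dim=-1)).squeeze(-1)

model = ContextAwareRanker(n_feat=10)
scores = model(torch.randn(4, 20, 10), torch.randn(4, 5, 10))
print(scores.shape)   # torch.Size([4, 5]) -- one score per candidate email
```

Training with a standard pairwise or listwise ranking loss would follow as usual.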
|
Can a regulated, legal market for wildlife products protect species
threatened by poaching? It is one of the most controversial ideas in
biodiversity conservation. Perhaps the most convincing reason for legalizing
wildlife trade is that trade revenue could fund the protection and conservation
of poached species. In this paper, we examine the possible poacher-population
dynamic consequences of legal trade funding conservation. The model consists of
a manager who scavenges carcasses for wildlife products, sells the
products, and directs a portion of the revenue towards funding anti-poaching
law enforcement. Through a global analysis of the model, we derive the critical
proportion of product the manager must scavenge, and the critical proportion of
trade revenue the manager must allocate towards increased enforcement, in order
for legal trade to lead to abundant long-term wildlife populations. We
illustrate how the model could inform management with parameter values derived
from the African elephant literature, under a hypothetical scenario where a
manager scavenges elephant carcasses to sell ivory. We find that there is a
large region of parameter space where populations go extinct under legal trade,
unless a significant portion of trade revenue is directed towards protecting
populations from poaching. The model is general and therefore can be used as a
starting point for exploring the consequences of funding many conservation
programs using wildlife trade revenue.
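One hypothetical reading of such a model as a single-species ODE; all functional forms and parameter values below are placeholders, not those derived from the elephant literature.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Wildlife N grows logistically, poachers remove h*N; the manager scavenges a
# fraction s of carcasses, sells the product, and spends a fraction e of the
# revenue on enforcement that suppresses the poaching rate.
r, K = 0.06, 4e5        # growth rate, carrying capacity
h0 = 0.08               # baseline poaching rate
price, cost = 1.0, 5e3  # revenue per carcass (scaled), cost per enforcement unit

def rhs(t, y, s, e):
    N = y[0]
    revenue = s * h0 * N * price            # product scavenged and sold
    enforcement = e * revenue / cost
    h = h0 / (1.0 + enforcement)            # enforcement suppresses poaching
    return [r * N * (1 - N / K) - h * N]

for s, e in [(0.2, 0.1), (0.8, 0.6)]:       # below vs above the critical proportions
    sol = solve_ivp(rhs, (0, 400), [2e5], args=(s, e))
    print(f"s={s}, e={e}: N(400) = {sol.y[0, -1]:.0f}")
```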
|
Brain-inspired computing and neuromorphic hardware are promising approaches
that offer great potential to overcome limitations faced by current computing
paradigms based on the traditional von Neumann architecture. In this regard,
interest in developing memristor crossbar arrays has increased due to their
ability to natively perform in-memory computing and fundamental synaptic
operations required for neural network implementation. For optimal efficiency,
crossbar-based circuits need to be compatible with fabrication processes and
materials of industrial CMOS technologies. Herein, we report a complete
CMOS-compatible fabrication process of TiO2-based passive memristor crossbars
with 700 nm wide electrodes. We show successful bottom electrode fabrication by
a damascene process, resulting in an optimised topography and a surface
roughness as low as 1.1 nm. DC sweeps and voltage pulse programming yield
statistical results related to synaptic-like multilevel switching. Both
cycle-to-cycle and device-to-device variability are investigated. Analogue
programming of the conductance using sequences of 200 ns voltage pulses suggests
that the fabricated memories have a multilevel capacity of at least 3 bits due
to the cycle-to-cycle reproducibility.
|
We study the effects of bond and site disorder in the classical
$J_{1}$-$J_{2}$ Heisenberg model on a square lattice in the order-by-disorder
frustrated regime $2J_{2}>\left|J_{1}\right|$. Combining symmetry arguments,
numerical energy minimization and large scale Monte Carlo simulations, we
establish that the finite temperature Ising-like transition of the clean system
is destroyed in the presence of any finite concentration of impurities. We
explain this finding via a random-field mechanism which generically emerges in
systems where disorder locally breaks the same real-space symmetry that is
spontaneously and globally broken by the associated order parameter. We also
determine that the phase replacing the clean one is a paramagnet polarized in
the nematic glass order with non-trivial magnetic response. This is because
disorder also induces non-collinear spin-vortex-crystal order and produces a
conjugated transverse dipolar random field. As a result of these many competing
effects, the associated magnetic susceptibilities are non-monotonic functions
of the temperature. As a further application of our methods, we show the
generation of random axes in other frustrated magnets with broken SU(2)
symmetry. We also discuss the generality of our findings and their relevance to
experiments.
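For orientation, a minimal single-site Metropolis sketch for the classical $J_1$-$J_2$ model with diluted $J_2$ bonds; the parameters are illustrative only, and the paper's large-scale simulations are far more careful.

```python
import numpy as np

rng = np.random.default_rng(0)
L, J1, J2, p, T = 16, 1.0, 1.5, 0.05, 0.5    # 2*J2 > |J1|: frustrated regime

S = rng.normal(size=(L, L, 3))
S /= np.linalg.norm(S, axis=-1, keepdims=True)       # classical unit spins
mask = (rng.random((L, L, 2)) > p).astype(float)     # diluted diagonal (J2) bonds

def local_field(i, j):
    """sum_k J_jk S_k over neighbours of (i, j); H = sum over bonds of J S.S"""
    ip, im, jp, jm = (i + 1) % L, (i - 1) % L, (j + 1) % L, (j - 1) % L
    f = J1 * (S[ip, j] + S[im, j] + S[i, jp] + S[i, jm])
    f += J2 * (mask[i, j, 0] * S[ip, jp] + mask[im, jm, 0] * S[im, jm]
               + mask[i, j, 1] * S[ip, jm] + mask[im, jp, 1] * S[im, jp])
    return f

for sweep in range(200):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)
        dE = float(np.dot(new - S[i, j], local_field(i, j)))
        if dE < 0 or rng.random() < np.exp(-dE / T):
            S[i, j] = new

E = 0.5 * sum(np.dot(S[i, j], local_field(i, j)) for i in range(L) for j in range(L))
print("energy per site:", E / L**2)
```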
|
Motivated by a recent first principles prediction of an anisotropic cubic
Dirac semi-metal in a real material Tl(TeMo)$_3$, we study the behavior of
electrons tunneling through a potential barrier in such systems. To clearly
investigate effects from different contributions to the Hamiltonian we study
the model in various limits. First, we consider the limit of a very thin
material, where the linearly dispersive $z$-direction is frozen out at zero
momentum and the dispersion in the $x$-$y$ plane is rotationally symmetric. In
this limit we find Klein tunneling reminiscent of that observed in single-layer graphene
and linearly dispersive Dirac semi-metals. Second, an increase in thickness of
the material leads to the possibility of a non-zero momentum eigenvalue $k_z$
that acts as an effective mass term in the Hamiltonian. We find that this leads
to a suppression of Klein tunneling. Third, the inclusion of an anisotropy
parameter $\lambda\neq 1$ leads to a breaking of rotational invariance.
Furthermore, we observe that for different values of the incident angle $\theta$
and anisotropy parameter $\lambda$ the Hamiltonian supports different numbers
of modes propagating to infinity. We display this effect in the form of a diagram
that is similar to a phase diagram of a distant detector. Fourth, we consider
coexistence of both anisotropy and non-zero $k_z$ but do not find any effect
that is unique to the interplay between non-zero momentum $k_z$ and anisotropy
parameter $\lambda$. Last, we study the case of a barrier placed in
the linearly dispersive direction and find Klein tunneling $T-1\propto
\theta^6+\mathcal{O}(\theta^8)$ that is enhanced when compared to the Klein
tunneling in linear Dirac semi-metals or graphene where $T-1\propto
\theta^2+\mathcal{O}(\theta^4)$.
|
Let $G$ be an irreducible imprimitive subgroup of
$\operatorname{GL}_n(\mathbb{F})$, where $\mathbb{F}$ is a field. Any system of
imprimitivity for $G$ can be refined to a nonrefinable system of imprimitivity,
and we consider the question of when such a refinement is unique. Examples show
that $G$ can have many nonrefinable systems of imprimitivity, and even the
number of components is not uniquely determined. We consider the case where $G$
is the wreath product of an irreducible primitive $H \leq
\operatorname{GL}_d(\mathbb{F})$ and transitive $K \leq S_k$, where $n = dk$.
We show that $G$ has a unique nonrefinable system of imprimitivity, except in
the following special case: $d = 1$, $n = k$ is even, $|H| = 2$, and $K$ is a
transitive subgroup of $C_2 \wr S_{n/2}$. As a simple application, we prove
results about inclusions between wreath product subgroups.
|
We report a configuration strategy for improving the thermoelectric (TE)
performance of two-dimensional (2D) transition metal dichalcogenide (TMDC) WS2
based on the experimentally prepared WS2/WSe2 lateral superlattice (LS)
crystal. On the basis of density functional theory combined with the Boltzmann
transport equation, we show that the TE figure of merit zT of monolayer WS2 is
remarkably enhanced when forming into a WS2/WSe2 LS crystal. This is primarily
ascribed to the almost halved lattice thermal conductivity due to the enhanced
anharmonic processes. Electronic transport properties parallel (xx) and
perpendicular (yy) to the superlattice period are highly symmetric for both p-
and n-doped LS owing to the nearly isotropic lifetime of charge carriers. The
spin-orbit effect causes a significant splitting of the conduction band and
leads to three-fold degenerate sub-bands and a high density of states (DOS),
which offers the opportunity to obtain a high n-type Seebeck coefficient (S).
Interestingly,
the separated degenerate sub-bands and upper conduction band in monolayer WS2
form a remarkable stairlike DOS, yielding a higher S. The hole carriers, with
much higher mobility than the electrons, yield a high p-type power factor and
the potential for good p-type TE materials, with an optimal zT exceeding 1 at
400 K in the WS2/WSe2 LS.
|
The flux ratios of high-ionization lines are commonly assumed to indicate the
metallicity of the broad emission line region in luminous quasars. When
accounting for the variation in their kinematic profiles, we show that the
NV/CIV, (SiIV+OIV])/CIV and NV/Lya line ratios do not vary as a function of the
quasar continuum luminosity, black hole mass, or accretion rate. Using
photoionization models from CLOUDY, we further show that the observed changes
in these line ratios can be explained by emission from gas with solar
abundances, if the physical conditions of the emitting gas are allowed to vary
over a broad range of densities and ionizing fluxes. The diversity of broad
line emission in quasar spectra can be explained by a model with emission from
two kinematically distinct regions, where the line ratios suggest that these
regions have either very different metallicity or density. Both simplicity and
current galaxy evolution models suggest that near-solar abundances, with parts
of the spectrum forming in high-density clouds, are more likely. Within this
paradigm, objects with stronger outflow signatures show stronger emission from
gas which is denser and located closer to the ionizing source, at radii
consistent with simulations of line-driven disc-winds. Studies using broad-line
ratios to infer chemical enrichment histories should consider changes in
density and ionizing flux before estimating metallicities.
|
Given the coordinates of the terminals $ \{(x_j,y_j)\}_{j=1}^n $ of the full
Euclidean Steiner tree, its length equals $$ \left| \sum_{j=1}^n z_j U_j
\right| \, , $$ where $ \{z_j:=x_j+ \mathbf i y_j\}_{j=1}^n $ and $
\{U_j\}_{j=1}^n $ are suitably chosen $ 6 $th roots of unity. We also extend
this result to the cost of the optimal Weber networks which are topologically
equivalent to some full Steiner trees.
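The identity can be checked numerically on a unit equilateral triangle, whose full Steiner tree (Steiner point at the centroid) has length $\sqrt{3}$; which choice of roots is the "suitable" one in general is exactly what the result specifies, so the sketch below simply enumerates all of them.

```python
import itertools
import numpy as np

z = [0, 1, 0.5 + 0.5j * np.sqrt(3)]                      # triangle vertices
roots = [np.exp(1j * np.pi * k / 3) for k in range(6)]   # 6th roots of unity

target = np.sqrt(3)                                      # known Steiner length
attained = any(
    np.isclose(abs(sum(zj * Uj for zj, Uj in zip(z, U))), target)
    for U in itertools.product(roots, repeat=3)
)
print(attained)   # True: a suitable choice of roots reproduces the length
```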
|
The Inverse First Ionization Potential (FIP) Effect, the depletion in coronal
abundance of elements like Fe, Mg, and Si that are ionized in the solar
chromosphere relative to those that are neutral, has been identified in several
solar flares. We give a more detailed discussion of the mechanism of
fractionation by the ponderomotive force associated with magnetohydrodynamic
waves, paying special attention to the conditions in which Inverse FIP
fractionation arises in order to better understand its relation to the usual
FIP Effect, i.e. the enhancement of coronal abundance of Fe, Mg, Si, etc. The
FIP Effect is generated by parallel propagating Alfv\'en waves, with either
photospheric, or more likely coronal, origins. The Inverse FIP Effect arises as
upward propagating fast mode waves with an origin in the photosphere or below,
refract back downwards in the chromosphere where the Alfv\'en speed is
increasing with altitude. We give a more physically motivated picture of the
FIP fractionation, based on the wave refraction around inhomogeneities in the
solar atmosphere, and inspired by previous discussions of analogous phenomena
in the optical trapping of particles by laser beams. We apply these insights to
modeling the fractionation and find good agreement with the observations of
Katsuda et al. (2020; arXiv:2001.10643) and Dennis et al. (2015;
arXiv:1503.01602).
|
We work out axioms for the duals $G\subset U_N^+$ of the finite quantum
permutation groups, $F\subset S_N^+$ with $|F|<\infty$, and we discuss how the
basic theory of such quantum permutation groups partly simplifies in the dual
setting. We discuss as well some potential extensions to the infinite case, in
connection with the well-known question of axiomatizing the discrete quantum
group actions on the infinite graphs.
|
The standard electrocardiogram (ECG) is a point-wise evaluation of the body
potential at certain given locations. These locations are subject to
uncertainty and may vary from patient to patient or even for a single patient.
In this work, we estimate the uncertainty in the ECG induced by uncertain
electrode positions when the ECG is derived from the forward bidomain model. In
order to avoid the high computational cost associated to the solution of the
bidomain model in the entire torso, we propose a low-rank approach to solve the
uncertainty quantification (UQ) problem. More precisely, we exploit the
sparsity of the ECG and the lead field theory to translate it into a set of
deterministic, time-independent problems, whose solution is eventually used to
evaluate expectation and covariance of the ECG. We assess the approach with
numerical experiments in a simple geometry.
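As a brute-force reference for the quantity being approximated, a plain Monte Carlo estimate of the ECG mean and covariance under perturbed electrode positions; the geometry and "lead field" below are toy stand-ins, not the bidomain model, which is what the low-rank approach is designed to avoid solving repeatedly.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
source = np.sin(2 * np.pi * 3 * t) * np.exp(-5 * (t - 0.3) ** 2)  # toy activation

def potential(electrode_xy, t_source):
    # toy lead field: potential decays with distance from a source at the origin
    d = np.linalg.norm(electrode_xy)
    return t_source / (1.0 + d**2)

nominal = np.array([3.0, 1.0])                        # nominal electrode position
samples = nominal + 0.5 * rng.normal(size=(500, 2))   # uncertain placement
ecgs = np.array([potential(pos, source) for pos in samples])

mean_ecg = ecgs.mean(axis=0)
cov_ecg = np.cov(ecgs, rowvar=False)                  # (200, 200) time covariance
print(mean_ecg.shape, cov_ecg.shape, float(cov_ecg.diagonal().max()))
```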
|
Quantum materials with non-trivial band topology and bulk superconductivity
are considered superior materials to realize topological superconductivity. In
this regard, we report detailed Density Functional Theory (DFT) calculations
and Z2 invariants for the NbC superconductor, exhibiting its band structure to
be topologically non-trivial. Bulk superconductivity at 8.9K is confirmed
through DC magnetization measurements under Field Cooled (FC) and Zero Field
Cooled (ZFC) protocols. This superconductivity is found to be of type-II nature,
as revealed by isothermal M-H measurements, from which the
Ginzburg-Landau parameter is calculated. A large intermediate state is evident from the phase
diagram, showing NbC to be a strong type-II superconductor. Comparing with
earlier reports on superconducting NbC, a non-monotonic relationship of
critical temperature with lattice parameters is seen. In conclusion, NbC is a
type-II superconductor at around 10 K with topologically non-trivial surface states.
|
Graph transaction processing raises many unique challenges such as random
data access due to the irregularity of graph structures, low throughput and
high abort rate due to the relatively large read/write sets in graph
transactions. To address these challenges, we present G-Tran -- an RDMA-enabled
distributed in-memory graph database with serializable and snapshot isolation
support. First, we propose a graph-native data store to achieve good data
locality and fast data access for transactional updates and queries. Second,
G-Tran adopts a fully decentralized architecture that leverages RDMA to process
distributed transactions with the MPP model, which can achieve high performance
by utilizing all computing resources. In addition, we propose a new MV-OCC
implementation with two optimizations to address the issue of large read/write
sets in graph transactions. Extensive experiments show that G-Tran achieves
competitive performance compared with other popular graph databases on
benchmark workloads.
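To illustrate the MV-OCC idea in isolation (not G-Tran's RDMA-based, decentralized implementation), a toy multi-version store with an optimistic validation phase at commit:

```python
import threading

class MVStore:
    def __init__(self):
        self.versions = {}      # key -> list of (commit_ts, value)
        self.ts = 0
        self.lock = threading.Lock()

    def begin(self):
        with self.lock:
            return {"begin": self.ts, "reads": {}, "writes": {}}

    def read(self, txn, key):
        # snapshot read: newest version no younger than the begin timestamp
        for ts, val in reversed(self.versions.get(key, [])):
            if ts <= txn["begin"]:
                txn["reads"][key] = ts
                return val
        return None

    def write(self, txn, key, value):
        txn["writes"][key] = value          # buffered until commit

    def commit(self, txn):
        with self.lock:
            for key, seen_ts in txn["reads"].items():   # validation phase
                vs = self.versions.get(key, [])
                if vs and vs[-1][0] > seen_ts:
                    return False                        # conflict: abort
            self.ts += 1
            for key, value in txn["writes"].items():    # install new versions
                self.versions.setdefault(key, []).append((self.ts, value))
            return True

store = MVStore()
t1 = store.begin()
store.write(t1, "v1", {"follows": ["v2"]})
print("committed:", store.commit(t1))
```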
|
We present new observations of the odd $z=0.96$ weak-line quasar PG1407+265,
and report the discovery of CXOU J140927.9+261813, a $z=0.68$ X-ray cluster.
Archival X-ray photometry spanning nearly four decades reveals that PG1407+265
is variable at the 1 dex level on a timescale of years. V-band variability is
present with an amplitude less than 0.1 mag. The emission-line properties of
PG1407+265 also reveal clear evidence for a powerful inflow or outflow due to
near- or super-Eddington accretion, having a mechanical luminosity of order
$10^{48}$ erg s$^{-1}$. Our follow-up {\sl Chandra} exposure centered on this
object reveals a foreground $z=0.68$ cluster roughly $1' \times 1.5'$ in extent, offset
to the east of PG1407+265, roughly coincident with the $z=0.68$ radio galaxy
FIRST J140927.8+261818. This non-cool-core cluster contributes about 10\% of
the X-ray flux of PG1407+265, has a mass of $(0.6- 5.5)\times10^{14} M_\odot$,
and an X-ray gas temperature of ($2.2-4.3$) keV. Because the projected position
of the quasar lies at about twice that of the cluster's inferred Einstein
radius, lensing by the cluster is unlikely to explain the quasar's unusual
properties. We also discuss the evidence for a second cluster centered on and
at the redshift of the quasar.
|
Determining habitable zones in binary star systems can be a challenging task
due to the combination of perturbed planetary orbits and varying stellar
irradiation conditions. The concept of "dynamically informed habitable zones"
allows us, nevertheless, to make predictions on where to look for habitable
worlds in such complex environments. Dynamically informed habitable zones have
been used in the past to investigate the habitability of circumstellar planets
in binary systems and Earth-like analogs in systems with giant planets. Here,
we extend the concept to potentially habitable worlds on circumbinary orbits.
We show that habitable zone borders can be found analytically even when another
giant planet is present in the system. By applying this methodology to
Kepler-16, Kepler-34, Kepler-35, Kepler-38, Kepler-64, Kepler-413, Kepler-453,
Kepler-1647 and Kepler-1661 we demonstrate that the presence of the known giant
planets in the majority of those systems does not preclude the existence of
potentially habitable worlds. Among the investigated systems Kepler-35,
Kepler-38 and Kepler-64 currently seem to offer the most benign environment. In
contrast, Kepler-16 and Kepler-1647 are unlikely to host habitable worlds.
|
Ferroelectric materials are spontaneous symmetry breaking systems
characterized by ordered electric polarizations. Similar to its ferromagnetic
counterpart, a ferroelectric domain wall can be regarded as a soft interface
separating two different ferroelectric domains. Here we show that two bound
state excitations of electric polarization (polar wave), or the vibration and
breathing modes, can be hosted and propagate within the ferroelectric domain
wall. Specifically, the vibration polar wave has zero frequency gap and is thus
confined deep inside the ferroelectric domain wall, where it can propagate even
in the presence of local pinning. The ferroelectric domain wall waveguide
demonstrated here offers a new paradigm for developing ferroelectric
information processing units.
|
We present a tight RMR complexity lower bound for the recoverable mutual
exclusion (RME) problem, defined by Golab and Ramaraju \cite{GR2019a}. In
particular, we show that any $n$-process RME algorithm using only atomic read,
write, fetch-and-store, fetch-and-increment, and compare-and-swap operations,
has an RMR complexity of $\Omega(\log n/\log\log n)$ in the CC and DSM models.
This lower bound covers all realistic synchronization primitives that have been
used in RME algorithms and matches the best upper bounds of algorithms
employing swap objects (e.g., [5,6,10]).
Algorithms with better RMR complexity than that have only been obtained by
either (i) assuming that all failures are system-wide [7], (ii) employing
fetch-and-add objects of size $(\log n)^{\omega(1)}$ [12], or (iii) using
artificially defined synchronization primitives that are not available in
actual systems [6,9].
|
By using a sharp isoperimetric inequality and an anisotropic symmetrization
argument, we establish Morrey-Sobolev and Hardy-Sobolev inequalities on
$n$-dimensional Finsler manifolds having nonnegative $n$-Ricci curvature; in
some cases we also discuss the sharpness of these functional inequalities. As
applications, by using variational arguments, we guarantee the
existence/multiplicity of solutions for certain eigenvalue problems and
elliptic PDEs involving the Finsler-Laplace operator. Our results are also new
in the Riemannian setting.
|
The topological Hall effect is used extensively to study chiral spin textures
in various materials. However, the factors controlling its magnitude in
technologically-relevant thin films remain uncertain. Using variable
temperature magnetotransport and real-space magnetic imaging in a series of
Ir/Fe/Co/Pt heterostructures, here we report that the chiral spin fluctuations
at the phase boundary between isolated skyrmions and a disordered skyrmion
lattice result in a power-law enhancement of the topological Hall resistivity
by up to three orders of magnitude. Our work reveals the dominant role of
skyrmion stability and configuration in determining the magnitude of the
topological Hall effect.
|
We exhibit a Finsler metric on the 2-sphere whose systolic (Holmes-Thompson)
ratio is $\frac{4{\pi}}{3}$. This is bigger than the conjectured maximal
Riemannian systolic ratio of $2\sqrt{3}$ achieved by the Calabi-Croke metric.
The construction of the Finsler metric is heavily inspired by a paper of
Cossarini-Sabourau.
|
Let $d \ge 1$. We study a subspace of the space of automorphic forms of
$\mathrm{GL}_d$ over a global field $F$ of positive characteristic (or, a function
field of a curve over a finite field). We fix a place $\infty$ of $F$, and we
consider the subspace $\mathcal{A}_{\mathrm{St}}$ consisting of automorphic
forms such that the local component at $\infty$ of the associated automorphic
representation is the Steinberg representation (to be made precise in the
text).
We have two results.
One theorem (Theorem 16) describes the constituents of
$\mathcal{A}_{\mathrm{St}}$ as automorphic representation and gives a
multiplicity one type statement.
For the other theorem (Theorem 12), we construct, using the geometry of the
Bruhat-Tits building, an analogue of modular symbols in
$\mathcal{A}_{\mathrm{St}}$ integrally (that is, in the space of
$\mathbb{Z}$-valued automorphic forms). We show that the quotient is finite and
give a bound on the exponent of this quotient.
|
Coronavirus disease 2019 (COVID-19) has caused global disruption and a
significant loss of life. Existing treatments that can be repurposed as
prophylactic and therapeutic agents could reduce the pandemic's devastation.
Emerging evidence of potential applications in other therapeutic contexts has
led to the investigation of dietary supplements and nutraceuticals for
COVID-19. Such products include vitamin C, vitamin D, omega 3 polyunsaturated
fatty acids, probiotics, and zinc, all of which are currently under clinical
investigation. In this review, we critically appraise the evidence surrounding
dietary supplements and nutraceuticals for the prophylaxis and treatment of
COVID-19. Overall, further study is required before evidence-based
recommendations can be formulated, but nutritional status plays a significant
role in patient outcomes, and these products could help alleviate deficiencies.
For example, evidence indicates that vitamin D deficiency may be associated
with greater incidence of infection and severity of COVID-19, suggesting that
vitamin D supplementation may hold prophylactic or therapeutic value. A growing
number of scientific organizations are now considering recommending vitamin D
supplementation to those at high risk of COVID-19. Because research in vitamin
D and other nutraceuticals and supplements is preliminary, here we evaluate the
extent to which these nutraceutical and dietary supplements hold potential in
the COVID-19 crisis.
|
We show that the baryon asymmetry of the universe can be explained in models
where the Higgs couples to the Chern-Simons term of the hypercharge group and
is away from the late-time minimum of its potential during inflation. The Higgs
then relaxes toward this minimum once inflation ends which leads to the
production of (hyper)magnetic helicity. We discuss the conditions under which
this helicity can be approximately conserved during its joint evolution with
the thermal plasma. At the electroweak phase transition the helicity is then
converted into a baryon asymmetry by virtue of the chiral anomaly in the
standard model. We propose a simple model which realizes this mechanism and
show that the observed baryon asymmetry of the universe can be reproduced.
|
We present a novel expression for an integrated correlation function of four
superconformal primaries in $SU(N)$ $\mathcal{N}=4$ SYM. This integrated
correlator, which is based on supersymmetric localisation, has been the subject
of several recent developments. The correlator is re-expressed as a sum over a
two dimensional lattice that is valid for all $N$ and all values of the complex
Yang-Mills coupling $\tau$. In this form it is manifestly invariant under
$SL(2,\mathbb{Z})$ Montonen-Olive duality. Furthermore, it satisfies a
remarkable Laplace-difference equation that relates the $SU(N)$ to the
$SU(N+1)$ and $SU(N-1)$ correlators. For any fixed value of $N$ the correlator
is an infinite series of non-holomorphic Eisenstein series,
$E(s;\tau,\bar\tau)$ with $s\in \mathbb{Z}$, and rational coefficients. The
perturbative expansion of the integrated correlator is asymptotic and the
$n$-loop coefficient is a rational multiple of $\zeta(2n+1)$. The $n=1$ and
$n=2$ terms agree precisely with results determined directly by integrating the
expressions in one- and two-loop perturbative SYM. Likewise, the charge-$k$
instanton contributions have an asymptotic, but Borel summable, series of
perturbative corrections. The large-$N$ expansion of the correlator with fixed
$\tau$ is a series in powers of $N^{1/2-\ell}$ ($\ell\in \mathbb{Z}$) with
coefficients that are rational sums of $E_s$ with $s\in \mathbb{Z}+1/2$. This
gives an all orders derivation of the form of the recently conjectured
expansion. We further consider 't Hooft large-$N$ Yang-Mills theory. The
coefficient of each order can be expanded as a convergent series in $\lambda$.
For large $\lambda$ this becomes an asymptotic series with coefficients that
are again rational multiples of odd zeta values. The large-$\lambda$ series is
not Borel summable, and its resurgent non-perturbative completion is
$O(\exp(-2\sqrt{\lambda}))$.
|
In some conditions, bacteria self-organise into biofilms, supracellular
structures made of a self-produced embedding matrix, mainly composed of
polysaccharides, DNA, proteins and lipids. It is known that bacteria change
their colony/matrix ratio in the presence of external stimuli such as
hydrodynamic stress. However, little is still known about the molecular
mechanisms driving this self-adaptation. In this work, we monitor structural
features of Pseudomonas fluorescens biofilms grown with and without
hydrodynamic stress. Our measurements show that the hydrodynamic stress
concomitantly increases the cell population density and the matrix production.
At short growth timescales, the matrix mediates a weak cell-cell attractive
interaction due to the depletion forces originated by the polymer constituents.
Using a population dynamics model, we conclude that hydrodynamic stress causes
a faster diffusion of nutrients and a higher incorporation of planktonic
bacteria to the already formed microcolonies. This results in the formation of
more mechanically stable biofilms due to an increase of the number of
crosslinks, as shown by computer simulations. The mechanical stability also
relies on a change in the chemical composition of the matrix, which becomes
enriched in carbohydrates, known to display adhering properties. Overall, we
demonstrate that bacteria are capable of self-adapting to hostile hydrodynamic
stress by tailoring the biofilm chemical composition, thus affecting both the
mesoscale structure of the matrix and its viscoelastic properties that
ultimately regulate the bacteria-polymer interactions.
|
Table Structure Recognition is an essential part of end-to-end tabular data
extraction in document images. The recent success of deep learning model
architectures in computer vision has yet to be reflected in table
structure recognition, largely because extensive datasets for this domain are
still unavailable, while labeling new data is expensive and time-consuming.
Traditionally, in computer vision, these challenges are addressed by standard
augmentation techniques that are based on image transformations like color
jittering and random cropping. As demonstrated by our experiments, these
techniques are not effective for the task of table structure recognition. In
this paper, we propose TabAug, a re-imagined Data Augmentation technique that
produces structural changes in table images through replication and deletion of
rows and columns. It also consists of a data-driven probabilistic model that
allows control over the augmentation process. To demonstrate the efficacy of
our approach, we perform experimentation on ICDAR 2013 dataset where our
approach shows consistent improvements in all aspects of the evaluation
metrics, with cell-level correct detections improving from 92.16% to 96.11%
over the baseline.
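A sketch of the structural-augmentation idea on a table represented as a grid of cell texts; the fixed probabilities below stand in for the paper's data-driven probabilistic model.

```python
import random

def tab_aug(grid, p=0.3, rng=random):
    grid = [row[:] for row in grid]                  # work on a copy
    if len(grid) > 2 and rng.random() < p:           # delete a body row
        grid.pop(rng.randrange(1, len(grid)))        # (keep the header row)
    if len(grid) > 1 and rng.random() < p:           # replicate a body row
        i = rng.randrange(1, len(grid))
        grid.insert(i, grid[i][:])
    if len(grid[0]) > 1 and rng.random() < p:        # delete a column
        j = rng.randrange(len(grid[0]))
        grid = [row[:j] + row[j + 1:] for row in grid]
    if rng.random() < p:                             # replicate a column
        j = rng.randrange(len(grid[0]))
        grid = [row[:j + 1] + [row[j]] + row[j + 1:] for row in grid]
    return grid

random.seed(3)
table = [["name", "qty"], ["bolt", "4"], ["nut", "9"]]
for row in tab_aug(table):
    print(row)
```

In the image domain, the same row/column operations would be applied to the pixel strips and ground-truth cell boundaries jointly.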
|
Recommender systems have achieved great success in modeling user's
preferences on items and predicting the next item the user would consume.
Recently, there have been many efforts to utilize time information of users'
interactions with items to capture inherent temporal patterns of user behaviors
and offer timely recommendations at a given time. Existing studies regard the
time information as a single type of feature and focus on how to associate it
with user preferences on items. However, we argue they are insufficient for
fully learning the time information because the temporal patterns of user
preference are usually heterogeneous. A user's preference for a particular item
may 1) increase periodically or 2) evolve over time under the influence of
significant recent events, and each of these two kinds of temporal pattern
appears with some unique characteristics. In this paper, we first define the
unique characteristics of the two kinds of temporal pattern of user preference
that should be considered in time-aware recommender systems. Then we propose a
novel recommender system for timely recommendations, called TimelyRec, which
jointly learns the heterogeneous temporal patterns of user preference
considering all of the defined characteristics. In TimelyRec, a cascade of two
encoders captures the temporal patterns of user preference using a proposed
attention module for each encoder. Moreover, we introduce an evaluation
scenario that evaluates the performance on predicting an interesting item and
when to recommend the item simultaneously in top-K recommendation (i.e.,
item-timing recommendation). Our extensive experiments on a scenario for item
recommendation and the proposed scenario for item-timing recommendation on
real-world datasets demonstrate the superiority of TimelyRec and the proposed
attention modules.
|
It was shown recently that the f-diagonal tensor in the T-SVD factorization
must satisfy some special properties. Such f-diagonal tensors are called
s-diagonal tensors. In this paper, we show that such a discussion can be
extended to any real invertible linear transformation. We show that two
Eckart-Young like theorems hold for a third order real tensor, under any doubly
real-preserving unitary transformation. The normalized Discrete Fourier
Transformation (DFT) matrix, an arbitrary orthogonal matrix, the product of the
normalized DFT matrix and an arbitrary orthogonal matrix are examples of doubly
real-preserving unitary transformations. We use tubal matrices as a tool for
our study. We feel that the tubal matrix language makes this approach more
natural.
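A sketch of the DFT-based instance of the construction: transform along the tubes, take a matrix SVD slice by slice, and transform back; the frontal slices of the resulting core are diagonal, i.e. f-diagonal. Other doubly real-preserving transforms would replace the FFT pair below.

```python
import numpy as np

def t_svd(A):
    n1, n2, n3 = A.shape
    r = min(n1, n2)
    Ahat = np.fft.fft(A, axis=2)                 # DFT along the tube dimension
    U = np.zeros((n1, n1, n3), dtype=complex)
    S = np.zeros((n1, n2, n3), dtype=complex)
    V = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3 // 2 + 1):
        u, s, vh = np.linalg.svd(Ahat[:, :, k])
        U[:, :, k], V[:, :, k] = u, vh.conj().T
        S[:r, :r, k] = np.diag(s)
    for k in range(n3 // 2 + 1, n3):   # conjugate symmetry keeps the result real
        U[:, :, k] = U[:, :, n3 - k].conj()
        S[:, :, k] = S[:, :, n3 - k].conj()
        V[:, :, k] = V[:, :, n3 - k].conj()
    ifft = lambda T: np.real(np.fft.ifft(T, axis=2))
    return ifft(U), ifft(S), ifft(V)

A = np.random.default_rng(0).normal(size=(4, 3, 5))
U, S, V = t_svd(A)
print(np.round(S[:, :, 0], 3))   # each frontal slice of S is diagonal
```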
|
Observational astronomers survey the sky in great detail to gain a better
understanding of many types of astronomical phenomena. In particular, the
formation and evolution of galaxies, including our own, is a wide field of
research. Three dimensional (spatial 3D) scientific visualisation is typically
limited to simulated galaxies, due to the inherently two dimensional spatial
resolution of Earth-based observations. However, with appropriate means of
reconstruction, such visualisation can also be used to bring out the inherent
3D structure that exists in 2D observations of known galaxies, providing new
views of these galaxies and visually illustrating the spatial relationships
within galaxy groups that are not obvious in 2D. We present a novel approach to
reconstruct and visualise 3D representations of nearby galaxies based on
observational data using the scientific visualisation software Splotch. We
apply our approach to a case study of the nearby barred spiral galaxy known as
M83, presenting a new perspective of the M83 local group and highlighting the
similarities between our reconstructed views of M83 and other known galaxies of
similar inclinations.
|
In the univariate setting, using the kernel spectral representation is an
appealing approach for generating stationary covariance functions. However,
performing the same task for multiple-output Gaussian processes is
substantially more challenging. We demonstrate that current approaches to
modelling cross-covariances with a spectral mixture kernel possess a critical
blind spot. For a given pair of processes, the cross-covariance is not
reproducible across the full range of permitted correlations, aside from the
special case where their spectral densities are of identical shape. We present
a solution to this issue by replacing the conventional Gaussian components of a
spectral mixture with block components of finite bandwidth (i.e. rectangular
step functions). The proposed family of kernels represents the first
multi-output generalisation of the spectral mixture kernel that can approximate
any stationary multi-output kernel to arbitrary precision.
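A rectangular block component has a closed-form covariance: a symmetric pair of rectangles of bandwidth $\Delta$ centred at $\pm\mu$ integrates to a cosine-modulated sinc. A minimal single-output sketch (the cross-covariance case uses the overlap of two block pairs, which is again rectangular and hence also closed form):

```python
import numpy as np

# k(tau) = var * cos(2*pi*mu*tau) * sinc(delta*tau), where
# np.sinc(x) = sin(pi*x)/(pi*x) is the inverse Fourier transform of the
# normalized rectangle pair of width delta centred at +-mu.
def block_kernel(tau, mu, delta, var=1.0):
    return var * np.cos(2 * np.pi * mu * tau) * np.sinc(delta * tau)

tau = np.linspace(0.0, 5.0, 6)
print(block_kernel(tau, mu=1.0, delta=0.5))   # k(0) = var, then decaying oscillation
```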
|
Post-hoc explanation methods are an important class of approaches that help
understand the rationale underlying a trained model's decision. But how useful
are they to an end-user in accomplishing a given task? In this vision
paper, we argue the need for a benchmark to facilitate evaluations of the
utility of post-hoc explanation methods. As a first step to this end, we
enumerate desirable properties that such a benchmark should possess for the
task of debugging text classifiers. Additionally, we highlight that such a
benchmark facilitates not only assessing the effectiveness of explanations but
also their efficiency.
|
Underwater image restoration is of significant importance in unveiling the
underwater world. Numerous techniques and algorithms have been developed in the
past decades. However, due to the fundamental difficulties that imaging/sensing,
lighting, and refractive geometric distortions pose for capturing
clear underwater images, no comprehensive evaluation of underwater image
restoration has been conducted. To address this gap, we have constructed a
large-scale real underwater image dataset, dubbed `HICRD' (Heron Island Coral
Reef Dataset), for the purpose of benchmarking existing methods and supporting
the development of new deep-learning based methods. We employ an accurate water
parameter (the diffuse attenuation coefficient) in generating reference images.
There are 2000 reference restored images and 6003 original underwater images in
the unpaired training set. Further, we present a novel method for underwater
image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial
networks to maximize the mutual information between raw and restored images.
Extensive experiments with comparisons to recent approaches further demonstrate
the superiority of our proposed method. Our code and dataset are publicly
available at GitHub.
|
A classical observation of Deligne shows that, for any prime $p \geq 5$, the
divisor polynomial of the Eisenstein series $E_{p-1}(z)$ mod $p$ is closely
related to the supersingular polynomial at $p$, $$S_p(x) :=
\prod_{E/\overline{\mathbb{F}}_p \text{ supersingular}}(x-j(E)) \in
\mathbb{F}_p[x].$$ Deuring, Hasse, and Kaneko and Zagier found other families
of modular forms which also give the supersingular polynomial at $p$. In a new
approach, we prove an analogue of Deligne's result for the Hecke trace forms
$T_k(z)$ defined by the Hecke action on the space of cusp forms $S_k$. We use
the Eichler-Selberg trace formula to identify congruences between trace forms
of different weights mod $p$, and then relate their divisor polynomials to
$S_p(x)$ using Deligne's observation.
|
One of the most important barriers toward a widespread use of mobile robots
in unstructured and human populated work environments is the ability to plan a
safe path. In this paper, we propose to delegate this activity to a human
operator that walks in front of the robot marking with her/his footsteps the
path to be followed. The implementation of this approach requires a high degree
of robustness in locating the specific person to be followed (the leader). We
propose a three phase approach to fulfil this goal: 1. identification and
tracking of the person in the image space, 2. sensor fusion between camera data
and laser sensors, 3. point interpolation with continuous curvature curves. The
approach is described in the paper and extensively validated with experimental
results.
|
The aim of this note is to provoke discussion concerning arithmetic
properties of the function $p_{d}(n)$ counting partitions of a positive integer
$n$ into $d$-th powers, where $d\geq 2$. Besides results concerning the
asymptotic behavior of $p_{d}(n)$, little is known. In the first part of the
paper, we prove certain congruences involving functions counting various types
of partitions into $d$-th powers. The second part of the paper has experimental
nature and contains questions and conjectures concerning arithmetic behavior of
the sequence $(p_{d}(n))_{n\in\mathbb{N}}$. They are based on our computations
of $p_{d}(n)$ for $n\leq 10^{5}$ in the case $d=2$, and $n\leq 10^{6}$ for
$d=3, 4, 5$.
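For reproducing such computations, $p_d(n)$ satisfies the standard coin-counting recurrence over the parts $1^d, 2^d, \dots$; a minimal dynamic-programming sketch:

```python
def partitions_into_powers(n_max, d):
    # p[n] accumulates partitions of n into d-th powers, one part size at a time
    p = [0] * (n_max + 1)
    p[0] = 1
    k = 1
    while k**d <= n_max:
        part = k**d
        for n in range(part, n_max + 1):
            p[n] += p[n - part]
        k += 1
    return p

p2 = partitions_into_powers(100, 2)
print(p2[4])    # 2: 4 = 2^2 = 1^2 + 1^2 + 1^2 + 1^2
print(p2[100])  # partitions of 100 into squares
```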
|
We report the synthesis, crystal structure, and magnetic properties of two
new quantum antiferromagnets A3ReO5Cl2 (A = Sr and Ba). The crystal structure
is isostructural with the mineral pinalite Pb3WO5Cl2, in which the Re6+ ion is
square-pyramidally coordinated by five oxide ions, and forms an anisotropic
triangular lattice (ATL) made of S = 1/2 spins. The magnetic interactions J and
J' in the ATL are estimated from magnetic susceptibilities to be 19.5 (44.9)
and 9.2 (19.3) K, respectively, with J'/J = 0.47 (0.43) for A = Ba (Sr). For
each compound, heat capacity at low temperatures shows a large T-linear
component with no signature of long-range magnetic order above 2 K, which
suggests a gapless spin liquid state of one-dimensional character of the J
chains in spite of the significantly large J' couplings. This is a consequence
of one-dimensionalization by geometrical frustration in the ATL magnet; a
similar phenomenon has been observed in two compounds with slightly smaller
J'/J values: Cs2CuCl4 (J'/J = 0.3) and the related compound Ca3ReO5Cl2 (0.32).
Our findings demonstrate that 5d mixed-anion compounds provide a unique
opportunity to explore novel quantum magnetism.
|
As robots move from the laboratory into the real world, motion planning will
need to account for model uncertainty and risk. For robot motions involving
intermittent contact, planning for uncertainty in contact is especially
important, as failure to successfully make and maintain contact can be
catastrophic. Here, we model uncertainty in terrain geometry and friction
characteristics, and combine a risk-sensitive objective with chance constraints
to provide a trade-off between robustness to uncertainty and constraint
satisfaction with an arbitrarily high feasibility guarantee. We evaluate our
approach in two simple examples: a push-block system for benchmarking and a
single-legged hopper. We demonstrate that chance constraints alone produce
trajectories similar to those produced using strict complementarity
constraints; however, when equipped with a robust objective, we show the chance
constraints can mediate a trade-off between robustness to uncertainty and
strict constraint satisfaction. Thus, our study may represent an important step
towards reasoning about contact uncertainty in motion planning.
|
Hovey introduced $A$-cordial labelings as a generalization of cordial and
harmonious labelings \cite{Hovey}. If $A$ is an Abelian group, then a labeling
$f \colon V (G) \rightarrow A$ of the vertices of some graph $G$ induces an
edge labeling on $G$; the edge $uv$ receives the label $f (u) + f (v)$. A graph
$G$ is $A$-cordial if there is a vertex-labeling such that (1) the vertex label
classes differ in size by at most one and (2) the induced edge label classes
differ in size by at most one.
Patrias and Pechenik studied the larger class of finite abelian groups $A$
such that all path graphs are $A$-cordial. They posed a conjecture that all but
finitely many path graphs are $A$-cordial for any Abelian group $A$. In this
paper we settle this conjecture. Moreover, we show that all cycle graphs are
$A$-cordial for any Abelian group $A$ of odd order.
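The definition is easy to check exhaustively for small paths; a brute-force sketch for the cyclic groups $\mathbb{Z}_k$ (illustrative only, the proofs above are of course not computational):

```python
import itertools

def is_balanced(labels, k):
    # class sizes over Z_k differ by at most one
    counts = [labels.count(a) for a in range(k)]
    return max(counts) - min(counts) <= 1

def path_is_cordial(n, k):
    # search vertex labelings of P_n; edge uv gets label f(u)+f(v) mod k
    for f in itertools.product(range(k), repeat=n):
        edges = [(f[i] + f[i + 1]) % k for i in range(n - 1)]
        if is_balanced(list(f), k) and is_balanced(edges, k):
            return True
    return False

for n in range(2, 8):
    print(n, [path_is_cordial(n, k) for k in (2, 3, 4)])
```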
|
Efficient and accurate object detection in video and image analysis is one of
the major beneficiaries of the advancement in computer vision systems with the
help of deep learning. With the aid of deep learning, more powerful tools
have evolved, which are capable of learning high-level and deeper features and thus can
overcome the existing problems in traditional architectures of object detection
algorithms. The work in this thesis aims to achieve high accuracy in object
detection with good real-time performance.
In the area of computer vision, a lot of research is going into the area of
detection and processing of visual information, by improving the existing
algorithms. The binarized neural network has shown high performance in various
vision tasks such as image classification, object detection, and semantic
segmentation. The Modified National Institute of Standards and Technology
database (MNIST), Canadian Institute for Advanced Research (CIFAR), and Street
View House Numbers (SVHN) datasets are used, implemented using a
pre-trained convolutional neural network (CNN) that is 22 layers deep.
Supervised learning is used in the work, classifying each dataset with a
properly structured model. For still images, GoogLeNet is used to improve
accuracy; its final layer is replaced through transfer learning to improve
the accuracy further. At the same time, the accuracy on moving images can be
maintained by the same transfer learning techniques. Hardware is the main
backbone for any model to obtain fast results on large datasets. Here, the
Nvidia Jetson Nano, which provides a graphics processing unit (GPU), is used
to handle the large number of computations involved in object detection.
Results show that the accuracy of objects detected by the transfer learning
method is higher than that of the existing methods.
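The transfer-learning step described above amounts to swapping the network's final layer; a minimal torchvision sketch (the class count and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained GoogLeNet (22 layers deep), freeze the backbone, and
# replace the final fully-connected layer with a new head for the target data.
num_classes = 10                                   # e.g. MNIST / CIFAR-10 / SVHN
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
model.eval()                                       # use train() when fine-tuning
x = torch.randn(2, 3, 224, 224)                    # GoogLeNet expects 3x224x224
print(model(x).shape)                              # torch.Size([2, 10])
```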
|
As weak lensing surveys are becoming deeper and cover larger areas,
information will be available on small angular scales down to the arcmin level.
To extract this extra information, accurate modelling of baryonic effects is
necessary. In this work, we adopt a baryonic correction model, which includes
gas both bound inside and ejected from dark matter (DM) haloes, a central
galaxy, and changes in the DM profile induced by baryons. We use this model to
incorporate baryons into a large suite of DM-only $N$-body simulations,
covering a grid of 75 cosmologies in the $\Omega_\mathrm{m}-\sigma_8$ parameter
space. We investigate how baryons affect Gaussian and non-Gaussian weak lensing
statistics and the cosmological parameter inferences from these statistics. Our
results show that marginalizing over baryonic parameters degrades the
constraints in $\Omega_\mathrm{m}-\sigma_8$ space by a factor of $2-4$ compared
to those with baryonic parameters fixed. We investigate the contribution of
each baryonic component to this degradation, and find that the distance to
which gas is ejected (from AGN feedback) has the largest impact due to its
degeneracy with cosmological parameters. External constraints on this
parameter, either from other datasets or from a better theoretical
understanding of AGN feedback, can significantly mitigate the impact of baryons
in an HSC-like survey.
|
In this letter we study how fast the energy density of a quantum gas can
increase in time, when the inter-atomic interaction characterized by the
$s$-wave scattering length $a_\text{s}$ is increased from zero with arbitrary
time dependence. We show that, at short time, the energy density can at most
increase as $\sqrt{t}$, which can be achieved when the time dependence of
$a_\text{s}$ is also proportional to $\sqrt{t}$, and especially, a universal
maximum energy growth rate can be reached when $a_\text{s}$ varies as
$2\sqrt{\hbar t/(\pi m)}$. If $a_\text{s}$ varies faster or slower than
$\sqrt{t}$, the process approaches the quench limit or the adiabatic limit,
respectively, and both result in a slower energy growth rate. These
results are obtained by analyzing the short time dynamics of the short-range
behavior of the many-body wave function characterized by the contact, and are
also confirmed by numerically solving an example of interacting bosons with
time-dependent Bogoliubov theory. These results can also be verified
experimentally in ultracold atomic gases.
|
In this note we continue our study of unidirectional solutions to
hydrodynamic Euler alignment systems with strongly singular communication
kernels $\phi(x):=|x|^{-(n+\alpha)}$ for $\alpha\in(0,2)$. Here, we consider
the critical case $\alpha=1$ and establish a couple of global existence results
of smooth solutions, together with a full description of their long time
dynamics. The first one is obtained via Schauder-type estimates under a null
initial entropy condition and the other is a small data result. In fact, using
Duhamel's approach we get that any solution is almost Lipschitz-continuous in
space. We extend the notion of weak solution for $\alpha\in[1,2)$ and prove the
existence of global Leray-Hopf solutions. Furthermore, we give an anisotropic
Onsager-type criterion for the validity of the natural energy law for weak
solutions of the system. Finally, we provide a series of quantitative estimates
that show how far the density of the limiting flock is from a uniform
distribution depending solely on the size of the initial entropy.
|
A method for active learning of hyperspectral images (HSI) is proposed, which
combines deep learning with diffusion processes on graphs. A deep variational
autoencoder extracts smoothed, denoised features from a high-dimensional HSI,
which are then used to make labeling queries based on graph diffusion
processes. The proposed method combines the robust representations of deep
learning with the mathematical tractability of diffusion geometry, and leads to
strong performance on real HSI.
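A sketch of the query-selection step only, assuming the latent features have already been extracted (a random matrix stands in for the variational autoencoder output), with label diffusion over a kNN graph:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 8))           # stand-in for VAE latents
labels = -np.ones(300, dtype=int)              # -1 = unlabeled
labels[:3] = [0, 1, 1]                         # a few seed annotations

W = kneighbors_graph(features, n_neighbors=10, mode="connectivity").toarray()
W = np.maximum(W, W.T)                         # symmetrize the kNN graph
P = W / W.sum(axis=1, keepdims=True)           # row-stochastic diffusion operator

F = np.zeros((300, 2))
F[labels >= 0] = np.eye(2)[labels[labels >= 0]]
for _ in range(50):                            # diffuse labels, clamping seeds
    F = P @ F
    F[labels >= 0] = np.eye(2)[labels[labels >= 0]]

margin = np.abs(F[:, 0] - F[:, 1])             # small margin = ambiguous point
margin[labels >= 0] = np.inf
print("query index:", int(np.argmin(margin)))  # next labeling query
```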
|
Dynamic pricing schemes were introduced as an alternative to posted-price
mechanisms. In contrast to static models, the dynamic setting allows to update
the prices between buyer-arrivals based on the remaining sets of items and
buyers, and so it is capable of maximizing social welfare without the need for
a central coordinator. In this paper, we study the existence of optimal dynamic
pricing schemes in combinatorial markets. In particular, we concentrate on
multi-demand valuations, a natural extension of unit-demand valuations. The
proposed approach is based on computing an optimal dual solution of the maximum
social welfare problem with distinguished structural properties.
Our contribution is twofold. By relying on an optimal dual solution, we show
the existence of optimal dynamic prices in unit-demand markets and in
multi-demand markets up to three buyers, thus giving new interpretations of
results of Cohen-Addad et al. and Berger et al., respectively. Furthermore, we
provide an optimal dynamic pricing scheme for bi-demand valuations with an
arbitrary number of buyers. In all cases, our proofs also provide efficient
algorithms for determining the optimal dynamic prices.
|
We propose an effective deep learning scheme for high-order nonlinear
soliton equations and compare activation functions for such equations. The
neural network approximates the solution of the equation under the constraints
of the differential operator, the initial condition, and the boundary
condition. We apply this method to high-order nonlinear soliton equations, and
verify its efficiency by solving the fourth-order Boussinesq equation and the
fifth-order Korteweg-de Vries equation. The results show that the deep learning
method can solve high-order nonlinear soliton equations and reveal the
interaction between solitons.
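A minimal physics-informed residual for one common form of the fourth-order ("good") Boussinesq equation, $u_{tt}=u_{xx}+(u^2)_{xx}-u_{xxxx}$ (sign conventions vary), evaluated by automatic differentiation; network size and collocation sampling are illustrative.

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def grad(out, var):
    return torch.autograd.grad(out, var, torch.ones_like(out), create_graph=True)[0]

def pde_residual(x, t):
    u = net(torch.cat([x, t], dim=1))
    u_tt = grad(grad(u, t), t)
    u_xx = grad(grad(u, x), x)
    u_xxxx = grad(grad(u_xx, x), x)
    uu_xx = grad(grad(u * u, x), x)        # (u^2)_xx
    return u_tt - u_xx - uu_xx + u_xxxx

x = torch.rand(256, 1, requires_grad=True)  # random collocation points
t = torch.rand(256, 1, requires_grad=True)
loss = pde_residual(x, t).pow(2).mean()     # add initial/boundary terms in practice
loss.backward()
print(float(loss))
```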
|
Circumplanetary discs can be linearly unstable to the growth of disc tilt in
the tidal potential of the star-planet system. We use three-dimensional
hydrodynamical simulations to characterize the disc conditions needed for
instability, together with its long term evolution. Tilt growth occurs for disc
aspect ratios, evaluated near the disc outer edge, of $H/r\gtrsim 0.05$, with a
weak dependence on viscosity in the wave-like regime of warp propagation. Lower
mass giant planets are more likely to have circumplanetary discs that satisfy
the conditions for instability. We show that the tilt instability can excite
the inclination to above the threshold where the circumplanetary disc becomes
unstable to Kozai--Lidov (KL) oscillations. Dissipation in the Kozai--Lidov
unstable regime caps further tilt growth, but the disc experiences large
oscillations in both inclination and eccentricity. Planetary accretion occurs
in episodic accretion events. We discuss implications of the joint tilt--KL
instability for the detectability of circumplanetary discs, for the obliquity
evolution of forming giant planets, and for the formation of satellite systems.
|
Using fully-resolved simulations, we investigate the torque experienced by a
finite-length circular cylinder rotating steadily about an axis perpendicular to its
symmetry axis. The aspect ratio $\chi$, i.e. the ratio of the length of the
cylinder to its diameter, is varied from 1 to 15. In the creeping-flow regime,
we employ the slender-body theory to derive the expression of the torque up to
order 4 with respect to the small parameter $1/\ln(2\chi)$. Numerical results
agree well with the corresponding predictions for $\chi\gtrsim3$. We introduce
an \textit{ad hoc} modification in the theoretical prediction to fit the
numerical results obtained with shorter cylinders, and a second modification to
account for the increase of the torque resulting from finite inertial effects.
In strongly inertial regimes, a prominent wake pattern made of two pairs of
counter-rotating vortices takes place. Nevertheless the flow remains stationary
and exhibits two distinct symmetries, one of which implies that the
contributions to the torque arising from the two cylinder ends are identical.
We build separate empirical formulas for the contributions of pressure and
viscous stress to the torque provided by the lateral surface and the cylinder
ends. We show that, in each contribution, the dominant scaling law may be
inferred from simple physical arguments. This approach eventually results in an
empirical formula for the rotation-induced torque valid throughout the range of
inertial regimes and aspect ratios considered in the simulations.
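For orientation, an expansion of this kind has the generic form below; the abstract does not quote the coefficients, so the $a_k$ are placeholders rather than the paper's values.

```latex
% Generic order-4 slender-body expansion; the a_k are placeholders.
T \;=\; T_{\mathrm{ref}} \sum_{k=1}^{4} a_k\, \varepsilon^{k},
\qquad \varepsilon \equiv \frac{1}{\ln(2\chi)} .
```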
|
Intent classification is an important task in natural language understanding
systems. Existing approaches have achieved perfect scores on the benchmark
datasets. However, they are not suitable for deployment on low-resource devices
like mobiles, tablets, etc. due to their massive model size. Therefore, in this
paper, we present a novel light-weight architecture for intent classification
that can run efficiently on a device. We use character features to enrich the
word representation. Our experiments demonstrate that our proposed model outperforms
existing approaches and achieves state-of-the-art results on benchmark
datasets. We also report that our model has a tiny memory footprint of ~5 MB and
low inference time of ~2 milliseconds, which proves its efficiency in a
resource-constrained environment.
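A minimal sketch of enriching word embeddings with character features via a small character CNN follows; all sizes and module choices are illustrative stand-ins, not the paper's architecture.

```python
# Character-CNN features are max-pooled per word and concatenated to the
# word embedding; dimensions here are illustrative.
import torch
import torch.nn as nn

class CharEnrichedEmbedding(nn.Module):
    def __init__(self, vocab, char_vocab, word_dim=64, char_dim=16, n_filters=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, word_dim)
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=3, padding=1)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, max_chars)
        w = self.word_emb(word_ids)
        b, s, c = char_ids.shape
        ch = self.char_emb(char_ids.view(b * s, c)).transpose(1, 2)
        ch = torch.relu(self.conv(ch)).max(dim=2).values.view(b, s, -1)
        return torch.cat([w, ch], dim=-1)        # enriched word representation

emb = CharEnrichedEmbedding(vocab=10000, char_vocab=100)
out = emb(torch.randint(0, 10000, (2, 5)), torch.randint(0, 100, (2, 5, 8)))
print(out.shape)  # torch.Size([2, 5, 96])
```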
|
In certain pulsar timing experiments, where observations are scheduled
approximately periodically (e.g. daily), timing models with significantly
different frequencies (including but not limited to glitch models with
different frequency increments) return near-equivalent timing residuals. The
average scheduling aperiodicity divided by the phase error due to
time-of-arrival uncertainties is a useful indicator of when the degeneracy is
important. Synthetic data are used to explore the effect of this degeneracy
systematically. It is found that phase-coherent tempo2 or temponest-based
approaches are sometimes biased toward reporting small glitch sizes regardless
of the true glitch size. Local estimates of the spin frequency alleviate this
bias. A hidden Markov model is free from bias towards small glitches and
announces explicitly the existence of multiple glitch solutions but sometimes
fails to recover the correct glitch size. Two glitches in the UTMOST public
data release are re-assessed, one in PSR J1709$-$4429 at MJD 58178 and the
other in PSR J1452$-$6036 at MJD 58600. The estimated fractional frequency jump
in PSR J1709$-$4429 is revised upward from $\Delta f/f = (54.6\pm 1.0) \times
10^{-9}$ to $\Delta f/f = (2432.2 \pm 0.1) \times 10^{-9}$ with the aid of
additional data from the Parkes radio telescope. We find that the available
UTMOST data for PSR J1452$-$6036 are consistent with $\Delta f/f = 270 \times
10^{-9} + N/(fT)$ with $N = 0,1,2$, where $T \approx 1\,\text{sidereal day}$ is
the observation scheduling period. Data from the Parkes radio telescope can be
included, and the $N = 0$ case is selected unambiguously with a combined
dataset.
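The alias structure quoted above can be made explicit numerically; the spin frequency below is a placeholder, not the catalogued value for PSR J1452$-$6036.

```python
# Glitch solutions separated by N/(f*T) fit near-daily-scheduled phase
# data equally well; f is an illustrative spin frequency.
SIDEREAL_DAY = 86164.0905        # seconds
f = 6.45                         # Hz, placeholder spin frequency
T = SIDEREAL_DAY

for N in range(3):
    df_over_f = 270e-9 + N / (f * T)
    print(f"N={N}: df/f = {df_over_f:.3e}")
```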
|
Contact tracing has been extensively studied from different perspectives in
recent years. However, there is no clear indication of why this intervention
has proven effective in some epidemics (SARS) and mostly ineffective in some
others (COVID-19). Here, we perform an exhaustive evaluation of random testing
and contact tracing on novel superspreading random networks to try to identify
which epidemics are more containable with such measures. We also explore the
suitability of positive rates as a proxy of the actual infection statuses of
the population. Moreover, we propose novel ideal strategies to explore the
potential limits of both testing and tracing strategies. Our study counsels
caution, both in assuming epidemic containment and in inferring the actual
epidemic progress with current testing or tracing strategies. However, it also
brings a ray of light for the future, pointing to the potential of novel
testing strategies to achieve high effectiveness.
|
Although ground robotic autonomy has gained widespread usage in structured
and controlled environments, autonomy in unknown and off-road terrain remains a
difficult problem. Extreme, off-road, and unstructured environments such as
undeveloped wilderness, caves, and rubble pose unique and challenging problems
for autonomous navigation. To tackle these problems, we propose an approach for
assessing traversability and planning a safe, feasible, and fast trajectory in
real-time. Our approach, which we name STEP (Stochastic Traversability
Evaluation and Planning), relies on: 1) rapid uncertainty-aware mapping and
traversability evaluation, 2) tail risk assessment using the Conditional
Value-at-Risk (CVaR), and 3) efficient risk and constraint-aware kinodynamic
motion planning using sequential quadratic programming-based (SQP) model
predictive control (MPC). We analyze our method in simulation and validate its
efficacy on wheeled and legged robotic platforms exploring extreme terrains
including an abandoned subway and an underground lava tube.
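For reference, the CVaR used in step 2 has a simple sample estimator: the mean of the worst $(1-\alpha)$ fraction of sampled costs. The snippet below is a generic illustration, not the planner's implementation.

```python
# Sample CVaR estimator: average of costs at or above the alpha-quantile.
import numpy as np

def cvar(costs, alpha=0.9):
    var = np.quantile(costs, alpha)    # Value-at-Risk threshold
    return costs[costs >= var].mean()  # mean of the worst tail

rng = np.random.default_rng(0)
costs = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
print(cvar(costs, alpha=0.9))
```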
|
We present and characterize the classes of Grothendieck toposes having enough
supercompact objects or enough compact objects. In the process, we examine the
subcategories of supercompact objects and compact objects within such toposes
and classes of geometric morphisms which interact well with these objects. We
also present canonical classes of sites generating such toposes.
|
For a hereditary graph class $\mathcal{H}$, the $\mathcal{H}$-elimination
distance of a graph $G$ is the minimum number of rounds needed to reduce $G$ to
a member of $\mathcal{H}$ by removing one vertex from each connected component
in each round. The $\mathcal{H}$-treewidth of a graph $G$ is the minimum, taken
over all vertex sets $X$ for which each connected component of $G - X$ belongs
to $\mathcal{H}$, of the treewidth of the graph obtained from $G$ by replacing
the neighborhood of each component of $G-X$ by a clique and then removing $V(G)
\setminus X$. These parameterizations recently attracted interest because they
are simultaneously smaller than the graph-complexity measures treedepth and
treewidth, respectively, and the vertex-deletion distance to $\mathcal{H}$. For
the class $\mathcal{H}$ of bipartite graphs, we present non-uniform
fixed-parameter tractable algorithms for testing whether the
$\mathcal{H}$-elimination distance or $\mathcal{H}$-treewidth of a graph is at
most $k$. Along the way, we also provide such algorithms for all graph classes
$\mathcal{H}$ defined by a finite set of forbidden induced subgraphs.
|
We adapt the direct approach to the semiclassical Bergman kernel asymptotics,
developed recently by A. Deleporte, J. Sj\"ostrand, and the first-named author
for real analytic exponential weights, to the smooth case. Similar to that
work, our approach avoids the use of the Kuranishi trick and it allows us to
construct the amplitude of the asymptotic Bergman projection by means of an
asymptotic inversion of an explicit Fourier integral operator.
|
Deep learning has become the most powerful machine learning tool in the last
decade. However, the question of how to train deep neural networks efficiently
has not been thoroughly solved. The widely used minibatch stochastic gradient descent (SGD)
still needs to be accelerated. As a promising tool to better understand the
learning dynamics of minibatch SGD, the information bottleneck (IB) theory
claims that the optimization process consists of an initial fitting phase and
the following compression phase. Based on this principle, we further study
typicality sampling, an efficient data selection method, and propose a new
explanation of how it helps accelerate the training process of the deep
networks. We show that the fitting phase depicted in the IB theory will be
boosted with a high signal-to-noise ratio of the gradient approximation if
typicality sampling is appropriately adopted. Furthermore, this finding also
implies that the prior information of the training set is critical to the
optimization process and the better use of the most important data can help the
information flow through the bottleneck faster. Both theoretical analysis and
experimental results on synthetic and real-world datasets demonstrate our
conclusions.
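One way to realize typicality sampling, sketched under our own simplifying assumptions (scoring typicality by an inverse k-nearest-neighbour radius and sampling minibatches proportionally):

```python
# Bias minibatch selection toward high-density ("typical") samples.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def typicality_scores(X, k=10):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)                       # column 0 is the point itself
    return 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)  # inverse kNN radius

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
p = typicality_scores(X)
batch_idx = rng.choice(len(X), size=64, replace=False, p=p / p.sum())
```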
|
We propose a fully asynchronous networked aggregative game (Asy-NAG) where
each player minimizes a cost function that depends on its local action and the
aggregate of all players' actions. In sharp contrast to the existing NAGs, each
player in our Asy-NAG can compute an estimate of the aggregate action at any
wall-clock time by only using (possibly stale) information from nearby players
of a directed network. Such an asynchronous update does not require any
coordination among players. Moreover, we design a novel distributed algorithm
with an aggressive mechanism for each player to adaptively adjust the
optimization stepsize per update. Particularly, the slow players in terms of
updating their estimates smartly increase their stepsizes to catch up with the
fast ones. Then, we develop an augmented system approach to address the
asynchronicity and the information delays between players, and rigorously show
the convergence to a Nash equilibrium of the Asy-NAG via a perturbed coordinate
algorithm which is also of independent interest. Finally, we evaluate the
performance of the distributed algorithm through numerical simulations.
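A toy version of the catch-up rule reads as follows; the linear scaling is our placeholder for the paper's adaptive mechanism.

```python
# A player lagging behind its neighbours in update count enlarges its
# stepsize; purely illustrative of the catch-up idea.
def adaptive_stepsize(base, my_updates, max_neighbour_updates):
    lag = max(max_neighbour_updates - my_updates, 0)
    return base * (1 + lag)

print(adaptive_stepsize(0.01, my_updates=3, max_neighbour_updates=10))
```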
|
An implicit and conservative numerical scheme is proposed for the isotropic
quantum Fokker-Planck equation describing the evolution of degenerate electrons
subject to elastic collisions with other electrons and ions. The electron-ion
and electron-electron collision operators are discretized using a discontinuous
Galerkin method, and the electron energy distribution is updated by an implicit
time integration method. The numerical scheme is designed to satisfy all
conservation laws exactly. Numerical tests and comparisons with other modeling
approaches are shown to demonstrate the accuracy and conservation properties of
the proposed method.
|
Traditional laws of friction assume that the friction coefficient of two
specific solids takes a constant value. However, molecular simulations have
revealed that the friction coefficient of a nanosized asperity depends strongly
on contact size and asperity radius. Since contacting surfaces are always
rough, consisting of asperities that vary dramatically in geometric size, a
theoretical model is developed in this work to predict the friction behavior of
fractal rough surfaces. The results of atomic-scale simulations of
sphere-on-flat friction are summarized into a uniform expression. Then, the size-dependent feature of
friction at nanoscale is incorporated into the analysis of fractal rough
surfaces. The obtained results display the dependence of friction coefficient
on roughness, material properties and load. It is revealed that the friction
coefficient decreases with increasing contact area or external load. This model
gives a theoretical guideline for the prediction of friction coefficient and
the design of friction pairs.
|
Building a benchmark dataset for hate speech detection presents various
challenges. Firstly, because hate speech is relatively rare, random sampling of
tweets to annotate is very inefficient in finding hate speech. To address this,
prior datasets often include only tweets matching known "hate words". However,
restricting data to a pre-defined vocabulary may exclude portions of the
real-world phenomenon we seek to model. A second challenge is that definitions
of hate speech tend to be highly varying and subjective. Annotators having
diverse prior notions of hate speech may not only disagree with one another but
also struggle to conform to specified labeling guidelines. Our key insight is
that the rarity and subjectivity of hate speech are akin to those of relevance
in information retrieval (IR). This connection suggests that well-established
methodologies for creating IR test collections can be usefully applied to
create better benchmark datasets for hate speech. To intelligently and
efficiently select which tweets to annotate, we apply standard IR techniques of
{\em pooling} and {\em active learning}. To improve both consistency and value
of annotations, we apply {\em task decomposition} and {\em annotator rationale}
techniques. We share a new benchmark dataset for hate speech detection on
Twitter that provides broader coverage of hate than prior datasets. We also
show a dramatic drop in accuracy of existing detection models when tested on
these broader forms of hate. Annotator rationales we collect not only justify
labeling decisions but also enable future work opportunities for
dual-supervision and/or explanation generation in modeling. Further details of
our approach can be found in the supplementary materials.
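The two IR-style selection steps can be sketched as follows, under our own simplified reading: pooling unions the top-ranked tweets from several detection models, and active learning then prioritizes tweets the current classifier is least certain about.

```python
# Pooling + uncertainty-based active learning for annotation selection.
import numpy as np

def pool(rankings, depth=100):
    pooled = set()
    for ranked_ids in rankings:      # one ranked list of tweet ids per model
        pooled.update(ranked_ids[:depth])
    return pooled

def uncertainty_sample(ids, scores, budget=50):
    ids = np.asarray(list(ids))
    margin = np.abs(np.asarray([scores[i] for i in ids]) - 0.5)
    return ids[np.argsort(margin)[:budget]]   # probabilities nearest 0.5 first
```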
|
The laser-driven generation of relativistic electron beams in plasma and
their acceleration to high energies with GV/m-gradients has been successfully
demonstrated. Now, it is time to focus on the application of laser-plasma
accelerated (LPA) beams. The "Accelerator Technology HElmholtz iNfrAstructure"
(ATHENA) of the Helmholtz Association fosters innovative particle accelerators
and high-power laser technology. As part of the ATHENAe pillar, several
different applications driven by LPAs are to be developed, such as a compact
FEL, medical imaging and the first realization of LPA-beam injection into a
storage ring. The latter endeavour is conducted in close collaboration between
Deutsches Elektronen-Synchrotron (DESY), Karlsruhe Institute of Technology
(KIT) and Helmholtz Institute Jena (HIJ). In the cSTART project at KIT, a
compact storage ring optimized for short bunches and suitable to accept
LPA-based electron bunches is in preparation. In this conference contribution
we will introduce the 50 MeV LPA-based injector and give an overview about the
project goals. The key parameters of the plasma injector will be presented.
Finally, the current status of the project will be summarized.
|
Perturbations are ubiquitous in metabolism. A central tool to understand and
control their influence on metabolic networks is sensitivity analysis, which
investigates how the network responds to external perturbations. We follow here
a structural approach: the analysis is based on the network stoichiometry only
and it does not require any quantitative knowledge of the reaction rates. We
consider perturbations of reaction rates and metabolite concentrations, at
equilibrium, and we investigate the responses in the network. For general
metabolic systems, this paper focuses on the sign of the responses, i.e.
whether a response is positive, negative or whether its sign depends on the
parameters of the system. In particular, we identify and describe the
subnetworks that are the main players in the sign description. These
subnetworks are associated to certain kernel vectors of the stoichiometric
matrix and are thus independent from the chosen kinetics.
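Numerically, the kernel vectors in question are null-space vectors of the stoichiometric matrix; a toy example with two metabolites in a linear chain of three reactions:

```python
# Kernel (flux modes v with N v = 0) of a toy stoichiometric matrix.
import numpy as np
from scipy.linalg import null_space

N = np.array([[1, -1,  0],
              [0,  1, -1]])   # A -> B -> C chain, toy network
K = null_space(N)             # basis of {v : N v = 0}
print(K / K[0])               # the uniform steady flux v1 = v2 = v3
```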
|
Learning high-level navigation behaviors has important implications: it
enables robots to build compact visual memory for repeating demonstrations and
to build sparse topological maps for planning in novel environments. Existing
approaches only learn discrete, short-horizon behaviors. These standalone
behaviors usually assume a discrete action space with simple robot dynamics,
thus they cannot capture the intricacy and complexity of real-world
trajectories. To this end, we propose Composable Behavior Embedding (CBE), a
continuous behavior representation for long-horizon visual navigation. CBE is
learned in an end-to-end fashion; it effectively captures path geometry and is
robust to unseen obstacles. We show that CBE can be used to perform
memory-efficient path following and topological mapping, using more than an
order of magnitude less memory than behavior-less approaches.
|
Though models with radiative neutrino mass generation are phenomenologically
attractive, the complicated relationship between the flavour structure of the
additional Yukawa matrices and the neutrino mass matrix is sometimes a barrier
to exploring these models. We introduce a simple prescription to analyze this
relation in a class of models with an asymmetric Yukawa structure.
We then apply the treatment to the Zee-Babu model as a concrete example of the
class and discuss the phenomenological consequences of the model. The combined
studies among the neutrino physics, the lepton flavour violation, and the
search for the new particles at the collider experiments provide the anatomy of
the Zee-Babu model.
|
Let $G$ be a finite simple graph with Laplacian polynomial
$\psi(G,\lambda)=\sum_{k=0}^n(-1)^{n-k}c_k\lambda^k$. In an earlier paper, the
coefficients $c_{n-4}$ and $c_{n-5}$ for trees were computed with respect to
some degree-based graph invariants. The aim of this paper is to continue this
work by giving an exact formula for the coefficient $c_{n-6}$. As a consequence of
this work, the Laplacian coefficients $c_{n-k}$ of a forest $F$, $1\leq k \leq
6$, are computed in terms of the number of closed walks in $F$ and its line
graph.
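The coefficient convention above can be checked directly on small examples; the snippet below computes all $c_k$ for a path tree and is only a brute-force sanity check, not the paper's formula.

```python
# Brute-force Laplacian coefficients c_k with psi(G, lambda) =
# sum_k (-1)^(n-k) c_k lambda^k, for a small tree.
import networkx as nx
import sympy as sp

G = nx.path_graph(7)                              # a tree on n = 7 vertices
L = sp.Matrix(nx.laplacian_matrix(G).todense())
lam = sp.symbols('lambda')
psi = sp.expand(L.charpoly(lam).as_expr())
n = G.number_of_nodes()
c = [abs(psi.coeff(lam, k)) for k in range(n + 1)]
print(c)                       # c[n-6] is the coefficient studied in the paper
```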
|
A system of linear equations $L$ over $\mathbb{F}_q$ is common if the number
of monochromatic solutions to $L$ in any two-colouring of $\mathbb{F}_q^n$ is
asymptotically at least the expected number of monochromatic solutions in a
random two-colouring of $\mathbb{F}_q^n$. Motivated by existing results for
specific systems (such as Schur triples and arithmetic progressions), as well
as extensive research on common and Sidorenko graphs, the systematic study of
common systems of linear equations was recently initiated by Saad and Wolf.
Building upon earlier work of Cameron, Cilleruelo and Serra, as well as Saad
and Wolf, common linear equations have recently been fully characterised by
Fox, Pham and Zhao, who asked about common \emph{systems} of equations. In this
paper we move towards a classification of common systems of two or more linear
equations. In particular we prove that any system containing an arithmetic
progression of length four is uncommon, confirming a conjecture of Saad and
Wolf. This follows from a more general result which allows us to deduce the
uncommonness of a general system from certain properties of one- or
two-equation subsystems.
|
Let $(\xi_1, \eta_1)$, $(\xi_2, \eta_2),\ldots$ be independent identically
distributed $\mathbb{R}^2$-valued random vectors. We prove a strong law of
large numbers, a functional central limit theorem and a law of the iterated
logarithm for convergent perpetuities $\sum_{k\geq
0}b^{\xi_1+\ldots+\xi_k}\eta_{k+1}$ as $b\to 1-$. Under the standard actuarial
interpretation, these results correspond to the situation when the actuarial
market is close to the customer-friendly scenario of no risk.
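A quick Monte Carlo illustration of such a perpetuity (the $(\xi,\eta)$ law below is an arbitrary example, chosen only so the sum converges):

```python
# Sample the perpetuity sum_{k>=0} b^(xi_1+...+xi_k) * eta_{k+1} for b
# close to 1-; positive xi makes the weights decay.
import numpy as np

def perpetuity(b, n_terms=10_000, rng=None):
    rng = rng or np.random.default_rng()
    xi = rng.exponential(1.0, n_terms)
    eta = rng.normal(0.0, 1.0, n_terms)
    S = np.concatenate([[0.0], np.cumsum(xi)[:-1]])  # xi_1+...+xi_k for k >= 0
    return np.sum(b ** S * eta)

samples = [perpetuity(0.99) for _ in range(200)]
print(np.mean(samples), np.std(samples))
```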
|
We prove that supports of a wide class of temperate distributions with
uniformly discrete support and spectrum on Euclidean spaces are finite unions
of translations of full-rank lattices. This result is a generalization of the
corresponding theorem for Fourier quasicrystals, and its proof uses the
technique of almost periodic distributions.
|
Long-range correlation plays an important role in analyses of pionic
Bose-Einstein correlations (BECs). In many cases, such correlations are
phenomenologically introduced. In this investigation, we propose an analytic
form. By making use of the form, we analyze the OPAL BEC and the L3 BEC at
the $Z^0$-pole and the CMS BEC at 0.9 and 7 TeV using our formulas and the
$\tau$-model. The parameters estimated by both approaches are found to be
consistent. Utilizing the Fourier transform in four-dimensional Euclidean
space, a number of pion-pair density distributions are also studied.
|
In this paper, we study the convergence analysis for a robust stochastic
structure-preserving Lagrangian numerical scheme in computing effective
diffusivity of time-dependent chaotic flows, which are modeled by stochastic
differential equations (SDEs). Our numerical scheme is based on a splitting
method to solve the corresponding SDEs in which the deterministic subproblem is
discretized using structure-preserving schemes while the random subproblem is
discretized using the Euler-Maruyama scheme. We obtain a sharp and
uniform-in-time convergence analysis for the proposed numerical scheme that
allows us to accurately compute long-time solutions of the SDEs. As such, we
can compute the effective diffusivity for time-dependent flows. Finally, we
present numerical results to demonstrate the accuracy and efficiency of the
proposed method in computing effective diffusivity for the time-dependent
Arnold-Beltrami-Childress (ABC) flow and Kolmogorov flow in three-dimensional
space.
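The splitting idea reads, in schematic form: advance the deterministic flow, then add the Euler-Maruyama noise increment. The sketch below uses a midpoint rule as a stand-in for the structure-preserving substep, together with the standard ABC velocity field.

```python
# Lie splitting for dX = u(X) dt + sigma dW with the ABC flow.
import numpy as np

A, B, C = 1.0, np.sqrt(2 / 3), np.sqrt(1 / 3)

def abc_velocity(x):
    return np.array([A * np.sin(x[2]) + C * np.cos(x[1]),
                     B * np.sin(x[0]) + A * np.cos(x[2]),
                     C * np.sin(x[1]) + B * np.cos(x[0])])

def step(x, dt, sigma, rng):
    x_mid = x + 0.5 * dt * abc_velocity(x)       # deterministic substep
    x = x + dt * abc_velocity(x_mid)             # (midpoint as placeholder)
    return x + sigma * np.sqrt(dt) * rng.normal(size=3)  # Euler-Maruyama substep

rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(1000):
    x = step(x, dt=0.01, sigma=0.5, rng=rng)
print(x)
```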
|
Recently, there has been rapid and significant progress on image dehazing.
Many deep learning based methods have shown their superb performance in
handling homogeneous dehazing problems. However, we observe that even if a
carefully designed convolutional neural network (CNN) can perform well on
large-scale dehazing benchmarks, the network usually fails on the
non-homogeneous dehazing datasets introduced by the NTIRE challenges. The
reasons are mainly twofold. Firstly, due to its non-homogeneous nature,
non-uniformly distributed haze is harder to remove than homogeneous haze.
Secondly, the research challenge only provides limited data (there are
only 25 training pairs in the NH-Haze 2021 dataset). Thus, learning the mapping
from the domain of hazy images to that of clear ones based on very limited data
is extremely hard. To this end, we propose a simple but effective approach for
non-homogeneous dehazing via ensemble learning. To be specific, we introduce a
two-branch neural network to separately deal with the aforementioned problems
and then map their distinct features by a learnable fusion tail. We show
extensive experimental results to illustrate the effectiveness of our proposed
method.
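Schematically, the two-branch design with a learnable fusion tail looks like the sketch below; every module choice is an illustrative stand-in for the paper's branches.

```python
# Two branches process the hazy input separately; a learnable fusion
# tail maps their concatenated features to the output image.
import torch
import torch.nn as nn

class TwoBranchDehaze(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.branch_b = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.fusion = nn.Conv2d(2 * ch, 3, 3, padding=1)   # learnable tail

    def forward(self, hazy):
        feats = torch.cat([self.branch_a(hazy), self.branch_b(hazy)], dim=1)
        return self.fusion(feats)

out = TwoBranchDehaze()(torch.rand(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```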
|
We study permutations over the set of $\ell$-grams, that are feasible in the
sense that there is a sequence whose $\ell$-gram frequency has the same ranking
as the permutation. Codes, which are sets of feasible permutations, protect
information stored in DNA molecules using the rank-modulation scheme, and read
using the shotgun sequencing technique. We construct systematic codes with an
efficient encoding algorithm, and show that they are optimal in size. The
length of the DNA sequences that correspond to the codewords is shown to be
polynomial in the code parameters. Non-systematic codes with larger size are
also constructed.
|
A promising channel for producing binary black hole mergers is the
Lidov-Kozai orbital resonance in hierarchical triple systems. While this
mechanism has been studied in isolation, the distribution of such mergers in
time and across star-forming environments is not well characterized. In this
work, we explore Lidov-Kozai-induced black hole mergers in open clusters,
combining semi-analytic and Monte Carlo methods to calculate merger rates and
delay times for eight different population models. We predict a merger rate
density of $\sim$1--10\,Gpc$^{-3}$\,yr$^{-1}$ for the Lidov-Kozai channel in
the local universe, and all models yield delay-time distributions in which a
significant fraction of binary black hole mergers (e.g., $\sim$20\%--50\% in
our baseline model) occur during the open cluster phase. Our findings suggest
that a substantial fraction of mergers from hierarchical triples occur within
star-forming regions in spiral galaxies.
|
Automated diagnosis using deep neural networks in chest radiography can help
radiologists detect life-threatening diseases. However, existing methods only
provide predictions without accurate explanations, undermining the
trustworthiness of the diagnostic methods. Here, we present XProtoNet, a
globally and locally interpretable diagnosis framework for chest radiography.
XProtoNet learns representative patterns of each disease from X-ray images,
which are prototypes, and makes a diagnosis on a given X-ray image based on the
patterns. It predicts the area where a sign of the disease is likely to appear
and compares the features in the predicted area with the prototypes. It can
provide a global explanation, the prototype, and a local explanation, how the
prototype contributes to the prediction of a single image. Despite the
constraint for interpretability, XProtoNet achieves state-of-the-art
classification performance on the public NIH chest X-ray dataset.
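The prototype-comparison step can be sketched as follows; the pooling and similarity choices are our illustration, not necessarily XProtoNet's exact operations.

```python
# Pool features over the predicted disease-relevant area, then score
# similarity to each learned prototype.
import torch
import torch.nn.functional as F

def prototype_scores(feature_map, area_mask, prototypes):
    # feature_map: (C, H, W); area_mask: (H, W); prototypes: (P, C)
    w = area_mask / (area_mask.sum() + 1e-8)
    pooled = (feature_map * w).sum(dim=(1, 2))    # area-weighted pooling -> (C,)
    return F.cosine_similarity(prototypes, pooled.unsqueeze(0))  # (P,)

scores = prototype_scores(torch.rand(64, 14, 14),
                          torch.rand(14, 14),
                          torch.rand(10, 64))
print(scores.shape)  # torch.Size([10])
```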
|
The paper deals with dynamics of expanding Lorenz maps, which appear in a
natural way as Poincar\'e maps in geometric models of the well-known Lorenz
attractor. We study connections between periodic points, completely invariant
sets and renormalizations. We show that in general, renormalization cannot be
fully characterized by a completely invariant set, however there are various
situations when such characterization is possible. This way we provide a better
insight into the structure of renormalizations in Lorenz maps, correcting some
gaps existing in the literature and completing to some extent the description
of possible dynamics in this important field of study.
|
We propose a novel approach to few-shot action recognition, finding
temporally-corresponding frame tuples between the query and videos in the
support set. Distinct from previous few-shot works, we construct class
prototypes using the CrossTransformer attention mechanism to observe relevant
sub-sequences of all support videos, rather than using class averages or single
best matches. Video representations are formed from ordered tuples of varying
numbers of frames, which allows sub-sequences of actions at different speeds
and temporal offsets to be compared.
Our proposed Temporal-Relational CrossTransformers (TRX) achieve
state-of-the-art results on few-shot splits of Kinetics, Something-Something V2
(SSv2), HMDB51 and UCF101. Importantly, our method outperforms prior work on
SSv2 by a wide margin (12%) due to its ability to model temporal relations.
A detailed ablation showcases the importance of matching to multiple support
set videos and learning higher-order relational CrossTransformers.
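The ordered frame tuples underpinning TRX are simple to enumerate; the snippet below only illustrates the tuple construction, not the attention mechanism.

```python
# Ordered frame-index tuples (t1 < t2 < ...) of a given cardinality let
# sub-sequences at different speeds and offsets be compared.
from itertools import combinations

def frame_tuples(num_frames, cardinality):
    return list(combinations(range(num_frames), cardinality))

print(frame_tuples(8, 2)[:5])   # [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]
print(len(frame_tuples(8, 3)))  # 56 ordered triples
```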
|
We present results on global very long baseline interferometry (VLBI)
observations at 327 MHz of eighteen compact steep-spectrum (CSS) and GHz-peaked
spectrum (GPS) radio sources from the 3C and the Peacock & Wall catalogues.
About 80 per cent of the sources have a 'double/triple' structure. The radio
emission at 327 MHz is dominated by steep-spectrum extended structures, while
compact regions become predominant at higher frequencies. As a consequence, we
could unambiguously detect the core region only in three sources, likely due to
self-absorption affecting its emission at this low frequency. Despite their low
surface brightness, lobes store the majority of the source energy budget, whose
correct estimate is a key ingredient in tackling the radio source evolution.
Low-frequency VLBI observations able to disentangle the lobe emission from that
of other regions are therefore the best way to infer the energetics of these
objects. Dynamical ages estimated from energy budget arguments provide values
between 2x10^3 and 5x10^4 yr, in agreement with the radiative ages estimated
from the fit of the integrated synchrotron spectrum, further supporting the
youth of these objects. A discrepancy between radiative and dynamical ages is
observed in a few sources where the integrated spectrum is dominated by
hotspots. In this case the radiative age likely represents the time spent by
the particles in these regions, rather than the source age.
|
The design and application of an instrumented particle (IP) for the Lagrangian
characterization of turbulent free-surface flows is presented in this study.
This instrumented particle constitutes a local measurement device capable of
measuring both its instantaneous 3D translational acceleration and angular
velocity components, as well as recording them on an onboard removable memory
card. A lithium-ion polymer battery provides the instrumented particle with up
to 8 hours of autonomous operation. Entirely composed of commercial
off-the-shelf electronic components, it features accelerometer and gyroscope sensors
with a resolution of 16 bits for each individual axis, and maximum data
acquisition rates of 1 and 8 kHz, respectively, as well as several user
programmable dynamic ranges. Its 3D-printed ABS body takes the form of a 36 mm
diameter hollow sphere, and has a total mass of (19.6 $\pm$ 0.5) g. Controlled
experiments, carried out to calibrate and validate its performance, showed good
agreement when compared to reference techniques. In order to assess the
practicality of the instrumented particle, we apply it to the statistical
characterization of floater dynamics in experiments of surface wave turbulence.
In this feasibility study, we focused our attention on the distribution of
acceleration and angular velocity fluctuations as a function of the forcing
intensity. The IP's motion is also simultaneously registered by a 3D particle
tracking velocimetry (PTV) system, for the purposes of comparison. Beyond the
results particular to this case study, this work constitutes a proof of both
the feasibility and the potential of the IP as a tool for the experimental
characterization of particle dynamics in such flows.
|
A realistic communication system model is critical in power system studies
emphasizing the cyber and physical intercoupling. In this paper, we provide
characteristics that could be used in modeling the underlying cyber network for
power grid models. A real utility communication network and a simplified
inter-substation connectivity model are studied, and their statistics could be
used to fulfill the requirements for different modeling resolutions.
|
We present a novel large-context end-to-end automatic speech recognition
(E2E-ASR) model and its effective training method based on knowledge
distillation. Common E2E-ASR models have mainly focused on utterance-level
processing in which each utterance is independently transcribed. On the other
hand, large-context E2E-ASR models, which take into account long-range
sequential contexts beyond utterance boundaries, can effectively handle
sequences of utterances such as discourses and conversations. However, the transformer
architecture, which has recently achieved state-of-the-art ASR performance
among utterance-level ASR systems, has not yet been introduced into the
large-context ASR systems. We can expect that the transformer architecture can
be leveraged for effectively capturing not only input speech contexts but also
long-range sequential contexts beyond utterance boundaries. Therefore, this
paper proposes a hierarchical transformer-based large-context E2E-ASR model
that combines the transformer architecture with hierarchical encoder-decoder
based large-context modeling. In addition, in order to enable the proposed
model to use long-range sequential contexts, we also propose a large-context
knowledge distillation that distills the knowledge from a pre-trained
large-context language model in the training phase. We evaluate the
effectiveness of the proposed model and proposed training method on Japanese
discourse ASR tasks.
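A generic form of such a distillation objective is sketched below; the temperature, weighting, and exact matching targets are our assumptions, not the paper's recipe.

```python
# Student matches the teacher's softened output distribution in addition
# to the usual cross-entropy on the reference transcription.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction='batchmean') * T * T
    return alpha * kd + (1 - alpha) * ce

loss = distillation_loss(torch.randn(8, 100), torch.randn(8, 100),
                         torch.randint(0, 100, (8,)))
```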
|
Error-bounded lossy compression is becoming an indispensable technique for
the success of today's scientific projects with vast volumes of data produced
during the simulations or instrument data acquisitions. Not only can it
significantly reduce data size, but it also can control the compression errors
based on user-specified error bounds. Autoencoder (AE) models have been widely
used in image compression, but few AE-based compression approaches support
error-bounding features, which are highly required by scientific applications.
To address this issue, we explore using convolutional autoencoders to improve
error-bounded lossy compression for scientific data, with the following three
key contributions. (1) We provide an in-depth investigation of the
characteristics of various autoencoder models and develop an error-bounded
autoencoder-based framework in terms of the SZ model. (2) We optimize the
compression quality for main stages in our designed AE-based error-bounded
compression framework, fine-tuning the block sizes and latent sizes and also
optimizing the compression efficiency of latent vectors. (3) We evaluate our
proposed solution using five real-world scientific datasets and compare it
with six other related works. Experiments show that our solution exhibits a
very competitive compression quality among all the compressors in our
tests. In absolute terms, it can obtain a much better compression quality (100%
~ 800% improvement in compression ratio with the same data distortion) compared
with SZ2.1 and ZFP in cases with a high compression ratio.
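To make the error-bounding idea concrete, one standard construction (in the spirit of SZ-style predictors, not necessarily the paper's exact pipeline) quantizes the residual between the data and the autoencoder reconstruction with bins of width $2\epsilon$:

```python
# Residual quantization guarantees |decompressed - data| <= eps.
import numpy as np

def compress(data, ae_reconstruction, eps):
    residual = data - ae_reconstruction
    return np.round(residual / (2 * eps)).astype(np.int32)  # entropy-code these

def decompress(codes, ae_reconstruction, eps):
    return ae_reconstruction + codes * (2 * eps)

x = np.random.rand(1000)
recon = x + np.random.normal(0, 0.05, size=x.shape)  # stand-in AE output
codes = compress(x, recon, eps=0.01)
err = np.abs(decompress(codes, recon, eps=0.01) - x)
assert err.max() <= 0.01 + 1e-12
```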
|
Recent millimeter and infrared observations have shown that gap and ring-like
structures are common in both dust thermal emission and scattered-light of
protoplanetary disks. We investigate the impact of the so-called Thermal Wave
Instability (TWI) on the millimeter and infrared scattered-light images of
disks. We perform 1+1D simulations of the TWI and confirm that the TWI operates
when the disk is optically thick enough for stellar light, i.e.,
for a small-grain-to-gas mass ratio of $\gtrsim0.0001$. The mid-plane temperature
varies as the waves propagate and hence gap and ring structures can be seen in
both millimeter and infrared emission. The millimeter substructures can be
observed even if the disk is fully optically thick since it is induced by the
temperature variation, while density-induced substructures would disappear in
the optically thick regime. The fractional separation between TWI-induced ring
and gap is $\Delta r/r \sim$ 0.2-0.4 at $\sim$ 10-50 au, which is comparable to
those found by ALMA. Due to the temperature variation, snow lines of volatile
species move radially and multiple snow lines are observed even for a single
species. The wave propagation velocity is as fast as $\sim$ 0.6 ${\rm
au~yr^{-1}}$, which can be potentially detected with a multi-epoch observation
with a time separation of a few years.
|
Click-through rate (CTR) prediction is a critical problem in web search,
recommendation systems and online advertisement displaying. Learning good
feature interactions is essential to reflect user's preferences to items. Many
CTR prediction models based on deep learning have been proposed, but
researchers usually only pay attention to whether state-of-the-art performance
is achieved, and ignore whether the entire framework is reasonable. In this
work, we use the discrete choice model in economics to redefine the CTR
prediction problem, and propose a general neural network framework built on
self-attention mechanism. It is found that most existing CTR prediction models
align with our proposed general framework. We also examine the expressive power
and model complexity of our proposed framework, along with potential extensions
to some existing models. Finally, we demonstrate and verify our insights
through some experimental results on public datasets.
|
Numerical simulation is an essential tool for many applications involving
subsurface flow and transport, yet often suffers from computational challenges
due to the multi-physics nature, highly non-linear governing equations,
inherent parameter uncertainties, and the need for high spatial resolutions to
capture multi-scale heterogeneity. We developed CCSNet, a general-purpose
deep-learning modeling suite that can act as an alternative to conventional
numerical simulators for carbon capture and storage (CCS) problems where CO$_2$
is injected into saline aquifers in 2D radial systems. CCSNet consists of a
sequence of deep learning models producing all the outputs that a numerical
simulator typically provides, including saturation distributions, pressure
buildup, dry-out, fluid densities, mass balance, solubility trapping, and sweep
efficiency. The results are 10$^3$ to 10$^4$ times faster than conventional
numerical simulators. As an application of CCSNet illustrating the value of its
high computational efficiency, we developed rigorous estimation techniques for
the sweep efficiency and solubility trapping.
|
We present systematic and efficient solutions for both observability
enhancement and root-cause diagnosis for post-silicon validation of
Systems-on-Chip (SoCs) with diverse usage scenarios. We model specifications of interacting
flows in typical applications for message selection. Our method for message
selection optimizes flow specification coverage and trace buffer utilization.
We define the diagnosis problem as identifying buggy traces as outliers and
bug-free traces as inliers/normal behaviors, for which we use unsupervised
learning algorithms for outlier detection. Instead of direct application of
machine learning algorithms over trace data using the signals as raw features,
we use feature engineering to transform raw features into more sophisticated
features using domain specific operations. The engineered features are highly
relevant to the diagnosis task and are generic to be applied across any
hardware designs. We present debugging and root cause analysis of subtle
post-silicon bugs in industry-scale OpenSPARC T2 SoC. We achieve a trace buffer
utilization of 98.96\% with a flow specification coverage of 94.3\% (average).
Our diagnosis method was able to diagnose up to 66.7\% more bugs and took up to
847$\times$ less diagnosis time compared to manual debugging, with a
diagnosis precision of 0.769.
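As a minimal stand-in for the diagnosis step, engineered trace features can be fed to any off-the-shelf unsupervised outlier detector; the data below are synthetic.

```python
# Flag buggy traces as outliers among mostly bug-free trace features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 12))   # bug-free trace features
buggy = rng.normal(4, 1, size=(5, 12))      # rare anomalous traces
X = np.vstack([normal, buggy])

labels = IsolationForest(random_state=0).fit_predict(X)
print(np.where(labels == -1)[0])            # indices flagged as outliers
```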
|
We present an open-source Python package, Orbits from Radial Velocity,
Absolute, and/or Relative Astrometry (orvara), to fit Keplerian orbits to any
combination of radial velocity, relative astrometry, and absolute astrometry
data from the Hipparcos-Gaia Catalog of Accelerations. By combining these three
data types, one can measure precise masses and sometimes orbital parameters
even when the observations cover a small fraction of an orbit. orvara achieves
its computational performance with an eccentric anomaly solver five to ten
times faster than commonly used approaches, low-level memory management to
avoid Python overheads, and by analytically marginalizing out parallax,
barycenter proper motion, and the instrument-specific radial velocity zero
points. Through its integration with the Hipparcos and Gaia intermediate
astrometry package htof, orvara can properly account for the epoch astrometry
measurements of Hipparcos and the measurement times and scan angles of
individual Gaia epochs. We configure orvara with modifiable .ini configuration
files tailored to any specific stellar or planetary system. We demonstrate
orvara with a case study application to a recently discovered white dwarf/main
sequence (WD/MS) system, HD 159062. By adding absolute astrometry to literature
RV and relative astrometry data, our comprehensive MCMC analysis improves the
precision of HD 159062B's mass by more than an order of magnitude to
$0.6083^{+0.0083}_{-0.0073}\,M_\odot$. We also derive a low eccentricity and
large semimajor axis, establishing HD 159062AB as a system that did not
experience Roche lobe overflow.
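For reference, a plain Newton iteration for Kepler's equation $M = E - e\sin E$ is shown below; orvara's actual solver is faster and more elaborate than this sketch.

```python
# Newton's method for the eccentric anomaly E given mean anomaly M.
import numpy as np

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    E = M + e * np.sin(M)                    # reasonable starting guess
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return E

E = eccentric_anomaly(np.linspace(0, 2 * np.pi, 5), e=0.3)
print(E - 0.3 * np.sin(E))                   # recovers the mean anomalies
```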
|
It has recently been established that cluster-like states -- states that are
in the same symmetry-protected topological phase as the cluster state --
provide a family of resource states that can be utilized for Measurement-Based
Quantum Computation. In this work, we ask whether it is possible to prepare
cluster-like states in finite time without breaking the symmetry protecting the
resource state. Such a symmetry-preserving protocol would benefit from
topological protection to errors in the preparation. We answer this question in
the positive by providing a Hamiltonian in one higher dimension whose
finite-time evolution is a unitary that acts trivially in the bulk, but pumps
the desired cluster state to the boundary. Examples are given for both the 1D
cluster state protected by a global symmetry, and various 2D cluster states
protected by subsystem symmetries. We show that even if unwanted symmetric
perturbations are present in the driving Hamiltonian, projective measurements
in the bulk, along with post-selection, are sufficient to recover a cluster-like
state. For a resource state of size $N$, failure to prepare the state is
negligible if the size of the perturbations is much smaller than $N^{-1/2}$.
|
Detecting anomalies has been a fundamental approach in detecting potentially
fraudulent activities. Tasked with detecting illegal timber trade, which
threatens ecosystems and economies and is associated with other illegal
activities, we formulate our problem as one of anomaly detection. Among other
challenges, annotations that could assist in building automated systems to
detect fraudulent transactions are unavailable for our large-scale trade data
with heterogeneous features (categorical and continuous). Modelling the
task as unsupervised anomaly detection, we propose a novel model Contrastive
Learning based Heterogeneous Anomaly Detector to address shortcomings of prior
models. Our model uses an asymmetric autoencoder that can effectively handle
large arity categorical variables, but avoids assumptions about structure of
data in low-dimensional latent space and is robust to changes to
hyper-parameters. The likelihood of data is approximated through an estimator
network, which is jointly trained with the autoencoder using negative sampling.
Further, the details and intuition for an effective negative sample generation
approach for heterogeneous data are outlined. We provide a qualitative study to
showcase the effectiveness of our model in detecting anomalies in timber trade.
|
We often use perturbations to regularize neural models. For neural
encoder-decoders, previous studies applied the scheduled sampling (Bengio et
al., 2015) and adversarial perturbations (Sato et al., 2019) as perturbations
but these methods require considerable computational time. Thus, this study
addresses the question of whether these approaches are efficient enough for
training time. We compare several perturbations in sequence-to-sequence
problems with respect to computational time. Experimental results show that the
simple techniques such as word dropout (Gal and Ghahramani, 2016) and random
replacement of input tokens achieve comparable (or better) scores to the
recently proposed perturbations, even though these simple methods are faster.
Our code is publicly available at
https://github.com/takase/rethink_perturbations.
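The two cheap perturbations are easy to state for batches of token ids; the snippet below is a schematic reading (word dropout is implemented here as UNK replacement, one common variant).

```python
# Word dropout and random token replacement on integer token batches.
import torch

def word_dropout(token_ids, p=0.1, unk_id=0):
    mask = torch.rand(token_ids.shape) < p
    return token_ids.masked_fill(mask, unk_id)

def random_replacement(token_ids, p=0.1, vocab_size=32000):
    mask = torch.rand(token_ids.shape) < p
    noise = torch.randint_like(token_ids, vocab_size)
    return torch.where(mask, noise, token_ids)

x = torch.randint(1, 32000, (2, 8))
print(word_dropout(x), random_replacement(x), sep="\n")
```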
|
A neutron decays into a proton, an electron, and an anti-neutrino through the
beta-decay process. The decay lifetime ($\sim$880 s) is an important parameter
in the weak interaction. For example, the neutron lifetime is a parameter used
to determine the |$V_{\rm ud}$| parameter of the CKM quark mixing matrix. The
lifetime is also one of the input parameters for Big Bang Nucleosynthesis,
which predicts light-element synthesis in the early universe. However,
experimental measurements of the neutron lifetime today differ significantly
(by 8.4 s, or 4.0$\sigma$) depending on the method. One is the bottle method,
which measures the neutrons surviving in a storage bottle. The other is the
beam method, which measures the neutron beam flux and the neutron decay rate in
a detector. It has been discussed whether the discrepancy comes from an
unconsidered systematic error or an undetected decay mode, such as a dark
decay. A new type of beam experiment is being performed at BL05 of the MLF at
J-PARC. This experiment measures the neutron flux and decay rate simultaneously
with a time projection chamber using a pulsed neutron beam. We present the
worldwide status of neutron lifetime measurements and the latest results from
J-PARC.
|
For two-dimensional percolation on a domain with the topology of a disc, we
introduce a nested-path operator (NP) and thus a continuous family of one-point
functions $W_k \equiv \langle \mathcal{R} \cdot k^\ell \rangle $, where $\ell$
is the number of independent nested closed paths surrounding the center, $k$ is
a path fugacity, and $\mathcal{R}$ projects on configurations having a cluster
connecting the center to the boundary. At criticality, we observe a power-law
scaling $W_k \sim L^{X_{\rm NP}}$, with $L$ the linear system size, and we
determine the exponent $X_{\rm NP}$ as a function of $k$. On the basis of our
numerical results, we conjecture an analytical formula, $X_{\rm NP}(k) =
\frac{3}{4}\phi^2 - \frac{5}{48}\,\phi^2/(\phi^2 - \frac{2}{3})$, where $k = 2
\cos(\pi \phi)$, which reproduces the exact results for $k=0,1$ and agrees with
the high-precision estimate of $X_{\rm NP}$ for other $k$ values. In addition,
we observe that $W_2(L)=1$ for site percolation on the triangular lattice with
any size $L$, and we prove this identity for all self-matching lattices.
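The conjectured formula is easy to evaluate at the exactly known points; at $k=1$ it yields $5/48$, the familiar one-arm exponent, and at $k=0$ it yields $1/4$.

```python
# Evaluate X_NP(k) = (3/4) phi^2 - (5/48) phi^2 / (phi^2 - 2/3),
# with phi solving k = 2 cos(pi * phi).
import numpy as np

def X_NP(k):
    phi = np.arccos(k / 2) / np.pi
    return 0.75 * phi**2 - (5 / 48) * phi**2 / (phi**2 - 2 / 3)

print(X_NP(0.0))   # 0.25, i.e. 1/4 at k = 0
print(X_NP(1.0))   # ~0.10417, i.e. 5/48 at k = 1
```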
|
A key functionality of emerging connected autonomous systems such as smart
transportation systems, smart cities, and the industrial Internet-of-Things, is
the ability to process and learn from data collected at different physical
locations. This is increasingly attracting attention under the terms of
distributed learning and federated learning. However, in this setup data
transfer takes place over communication resources that are shared among many
users and tasks or subject to capacity constraints. This paper examines
algorithms for efficiently allocating resources to linear regression tasks by
exploiting the informativeness of the data. The algorithms developed enable
adaptive scheduling of learning tasks with reliable performance guarantees.
|
Blazars emit a highly-variable non-thermal spectrum. It is usually assumed
that the same non-thermal electrons are responsible for the IR-optical-UV
emission (via synchrotron) and the gamma-ray emission (via inverse Compton).
Hence, the light curves in the two bands should be correlated. Orphan gamma-ray
flares (i.e., lacking a luminous low-frequency counterpart) challenge our
theoretical understanding of blazars. By means of large-scale two-dimensional
radiative particle-in-cell simulations, we show that orphan gamma-ray flares
may be a self-consistent by-product of particle energization in turbulent
magnetically-dominated pair plasmas. The energized particles produce the
gamma-ray flare by inverse Compton scattering an external radiation field,
while the synchrotron luminosity is heavily suppressed since the particles are
accelerated nearly along the direction of the local magnetic field. The ratio
of inverse Compton to synchrotron luminosity is sensitive to the initial
strength of turbulent fluctuations (a larger degree of turbulent fluctuations
weakens the anisotropy of the energized particles, thus increasing the
synchrotron luminosity). Our results show that the anisotropy of the
non-thermal particle population is key to modeling the blazar emission.
|
We present a high fidelity snapshot spectroscopic radio imaging study of a
weak type I solar noise storm which took place during an otherwise
exceptionally quiet time. Using high fidelity images from the Murchison
Widefield Array, we track the observed morphology of the burst source for 70
minutes and identify multiple instances where its integrated flux density and
area are strongly anti-correlated with each other. The type I radio emission is
believed to arise due to electron beams energized during magnetic reconnection
activity. The observed anti-correlation is interpreted as evidence for the presence
of MHD sausage wave modes in the magnetic loops and strands along which these
electron beams are propagating. Our observations suggest that the sites of
these small scale reconnections are distributed along the magnetic flux tube.
We hypothesise that small-scale reconnections produce electron beams which
quickly get collisionally damped. Hence, the plasma emission produced by them
spans only a narrow bandwidth, and features seen even a few MHz apart must
arise from independent electron beams.
|
This study investigated how social interaction among robotic agents changes
dynamically depending on the individual belief of action intention. In a set of
simulation studies, we examine dyadic imitative interactions of robots using a
variational recurrent neural network model. The model is based on the free
energy principle such that a pair of interacting robots find themselves in a
loop, attempting to predict and infer each other's actions using active
inference. We examined how regulating the complexity term to minimize free
energy determines the dynamic characteristics of networks and interactions.
When one robot trained with tighter regulation and another trained with looser
regulation interact, the latter tends to lead the interaction by exerting
stronger action intention, while the former tends to follow by adapting to its
observations. The study confirms that the dyadic imitative interaction becomes
successful by achieving a high synchronization rate when a leader and a
follower are determined by developing action intentions with strong belief and
weak belief, respectively.
|
We reveal the microscopic origin of electric polarization $\vec{P}$ induced
by noncollinear magnetic order. We show that in Mott insulators, such $\vec{P}$
is given by all possible combinations of position operators $\hat{\vec{r}}_{ij}
= (\vec{r}_{ij}^{\, 0},\vec{\boldsymbol{r}}_{ij}^{\phantom{0}})$ and transfer
integrals $\hat{t}_{ij} = (t_{ij}^{0},\boldsymbol{t}_{ij}^{\phantom{0}})$ in
the bonds, where $\vec{r}_{ij}^{\, 0}$ and $t_{ij}^{0}$ are spin-independent
contributions in the basis of Kramers doublet states, while
$\vec{\boldsymbol{r}}_{ij}^{\phantom{0}}$ and
$\boldsymbol{t}_{ij}^{\phantom{0}}$ stem solely from the spin-orbit
interaction. Among them, the combination $t_{ij}^{0}
\vec{\boldsymbol{r}}_{ij}^{\phantom{0}}$, which couples to the spin current,
remains finite in the centrosymmetric bonds, thus yielding finite $\vec{P}$ in
the case of noncollinear arrangement of spins. The form of the magnetoelectric
coupling, which is controlled by $\vec{\boldsymbol{r}}_{ij}^{\phantom{0}}$,
appears to be rich and is not limited to the phenomenological law $\vec{P} \sim
\boldsymbol{\epsilon}_{ij} \times [\boldsymbol{e}_{i} \times
\boldsymbol{e}_{j}]$ with $\boldsymbol{\epsilon}_{ij}$ being the bond vector
connecting the spins $\boldsymbol{e}_{i}$ and $\boldsymbol{e}_{j}$. Using
density-functional theory, we illustrate how the proposed mechanism works in the
spiral magnets CuCl$_2$, CuBr$_2$, CuO, and $\alpha$-Li$_2$IrO$_3$, providing a
consistent explanation of the available experimental data.
|