We prove Conjecture 4.16 of the paper [EL21] of Elagin and Lunts; namely,
that a smooth projective curve of genus at least 1 over a field has diagonal
dimension 2.
|
We characterize the performance of a system based on a magnetoresistor array.
This instrument is developed to map the magnetic field, and to track a dipolar
magnetic source in the presence of a static homogeneous field. The position and
orientation of the magnetic source with respect to the sensor frame are
retrieved, together with the orientation of the frame with respect to the
environmental field. A nonlinear best-fit procedure is used, and its precision,
time performance, and reliability are analyzed. This analysis is performed in
view of the practical application for which the system is designed, namely an
eye-tracking diagnostic and rehabilitation tool for medical purposes, which
requires high speed ($\ge 100$~Sa/s) and sub-millimetric spatial resolution. A
thorough investigation of the results makes it possible to list several
observations, suggestions, and hints that will be useful in the design of
similar setups.
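
As a rough illustration of the nonlinear best-fit procedure described above, here is a minimal sketch of a dipole-plus-homogeneous-field least-squares fit; the sensor grid, noise level, and parameter values are illustrative assumptions, not those of the actual instrument.

```python
# A minimal sketch of such a nonlinear best fit, assuming an illustrative
# 4x4 triaxial sensor grid; geometry, noise, and parameters are NOT those
# of the actual instrument.
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

def model_field(p, sensors):
    """Dipole + homogeneous field. p = [x, y, z, mx, my, mz, B0x, B0y, B0z]."""
    r0, m, b0 = p[:3], p[3:6], p[6:9]
    r = sensors - r0
    d = np.linalg.norm(r, axis=1, keepdims=True)
    rhat = r / d
    b_dip = MU0_4PI * (3.0 * rhat * (rhat @ m)[:, None] - m) / d**3
    return (b_dip + b0).ravel()

# Hypothetical 4x4 planar grid with 2 cm pitch at z = 0
gx, gy = np.meshgrid(np.arange(4) * 0.02, np.arange(4) * 0.02)
sensors = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(16)])

true_p = np.array([0.03, 0.02, 0.05, 0.1, 0.0, 0.05, 2e-5, 0.0, 4e-5])
data = model_field(true_p, sensors) + 1e-9 * np.random.randn(48)

fit = least_squares(lambda p: model_field(p, sensors) - data,
                    x0=np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.1, 0, 0, 0]))
print("recovered source position:", fit.x[:3])
```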
|
The introduction of pretrained language models has reduced many complex
task-specific NLP models to simple lightweight layers. An exception to this
trend is coreference resolution, where a sophisticated task-specific model is
appended to a pretrained transformer encoder. While highly effective, the model
has a very large memory footprint -- primarily due to dynamically-constructed
span and span-pair representations -- which hinders the processing of complete
documents and the ability to train on multiple instances in a single batch. We
introduce a lightweight end-to-end coreference model that removes the
dependency on span representations, handcrafted features, and heuristics. Our
model performs competitively with the current standard model, while being
simpler and more efficient.
|
Scheduling a sports tournament is a complex optimization problem that
requires satisfying a large number of hard constraints. Despite the
availability of several such constraints in the literature, a gap remains,
since most new sports events pose their own unique sets of requirements and
demand novel constraints. For strictly time-bound events in particular,
ensuring fairness between the different teams in terms of their rest
days, traveling, and the number of successive games they play becomes a
difficult task and demands attention. In this work, we present a
similar situation with a recently played sports event, where a suboptimal
schedule favored some of the sides more than the others. We introduce various
competitive parameters to draw a fairness comparison between the sides and
propose a weighting criterion to point out the sides that enjoyed this schedule
more than the others. Furthermore, we use root mean squared error between an
ideal schedule and the actual ones for each side to determine unfairness in the
distribution of rest days across their entire schedules. The latter is crucial,
since successively playing a large number of games may lead to player
burnout, which must be prevented.
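
To make the rest-day measure concrete, here is a minimal sketch of the RMSE computation between an ideal, evenly spaced schedule and an actual one; the fixtures are hypothetical, not those of the analyzed event.

```python
# A minimal sketch of the unfairness measure described above: RMSE between
# an ideal (evenly spaced) schedule and a side's actual schedule of match
# days. The fixtures below are hypothetical.
import numpy as np

def rest_day_rmse(actual_days, n_games, season_length):
    """RMSE between actual match days and an ideal evenly spaced schedule."""
    ideal = np.linspace(1, season_length, n_games)
    return float(np.sqrt(np.mean((np.asarray(actual_days) - ideal) ** 2)))

# Two hypothetical sides playing 5 games in a 15-day window:
balanced = [1, 4, 8, 11, 15]        # evenly spread fixtures
congested = [1, 2, 3, 4, 15]        # four games back-to-back
print(rest_day_rmse(balanced, 5, 15))   # close to 0 -> fair distribution
print(rest_day_rmse(congested, 5, 15))  # large -> unfair distribution
```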
|
Stimulated by the exciting progress in experiments, we carry out a combined
analysis of the masses, and strong and radiative decay properties of the $B$
and $B_s$-meson states up to the second orbital excitations. Based on our good
descriptions of the mass and decay properties for the low-lying
well-established states $B_1(5721)$, $B_2^*(5747)$, $B_{s1}(5830)$ and
$B_{s2}^*(5840)$, we give a quark model classification for the high mass
resonances observed in recent years. It is found that (i) the $B_{J}(5840)$
resonance may be explained as the low-mass mixed state $B(|SD\rangle_L)$
arising from $2^3S_1$-$1^3D_1$ mixing, as the pure $B(2^3S_1)$ state, or as
$B(2^1S_0)$; (ii) the $B_J(5970)$ resonance may be assigned as the $1^3D_3$
state in the $B$-meson family, although an interpretation as a pure $2^3S_1$
state cannot be excluded; (iii)
The narrow structure around 6064 MeV observed in the $B^+K^-$ mass spectrum at
LHCb may be mainly caused by the $B_{sJ}(6109)$ resonance decaying into
$B^{*+}K^-$, and favors the assignment of the high-mass $1D$-wave mixed state
$B_s(1D'_2)$ with $J^P=2^-$, although the $1^3D_3$ assignment cannot be
excluded; (iv) the relatively broader $B_{sJ}(6114)$ structure observed at LHCb
may be explained with the mixed state $B_s(|SD\rangle_H)$ via $2^3S_1$-$1^3D_1$
mixing, or as a pure $1^3D_1$ state. Most of the missing $1P$-, $1D$-, and
$2S$-wave $B$- and $B_s$-meson states have relatively narrow widths; they are
most likely to be observed in their dominant decay channels with larger data
samples at LHCb.
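
For reference, the $|SD\rangle_{L,H}$ notation above refers to mixing of the $2^3S_1$ and $1^3D_1$ states; one common convention (sign and angle conventions vary between works) is:

```latex
% Schematic 2^3S_1 - 1^3D_1 mixing convention (illustrative only):
\begin{aligned}
|SD\rangle_L &= \cos\theta\,|2^3S_1\rangle - \sin\theta\,|1^3D_1\rangle,\\
|SD\rangle_H &= \sin\theta\,|2^3S_1\rangle + \cos\theta\,|1^3D_1\rangle.
\end{aligned}
```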
|
We study the magnetization process of the $S=1$ Heisenberg model on a two-leg
ladder with farther neighbor spin-exchange interaction. We consider the
interaction that couples up to the next-nearest neighbor rungs and find an
exactly solvable regime where the ground states become product states. The
next-nearest neighbor interaction tends to stabilize magnetization plateaus at
multiples of 1/6. In most of the exactly solvable regime, a single
magnetization curve shows two series of plateaus with different periodicities.
|
In conversational analyses, humans manually weave multimodal information into
the transcripts, which is significantly time-consuming. We introduce a system
that automatically expands the verbatim transcripts of video-recorded
conversations using multimodal data streams. This system uses a set of
preprocessing rules to weave multimodal annotations into the verbatim
transcripts and promote interpretability. Our feature engineering contributions
are two-fold: firstly, we identify the range of multimodal features relevant to
detect rapport-building; secondly, we expand the range of multimodal
annotations and show that the expansion leads to statistically significant
improvements in detecting rapport-building.
|
Bayesian neural networks that incorporate data augmentation implicitly use a
``randomly perturbed log-likelihood [which] does not have a clean
interpretation as a valid likelihood function'' (Izmailov et al. 2021). Here,
we provide several approaches to developing principled Bayesian neural networks
incorporating data augmentation. We introduce a ``finite orbit'' setting which
allows likelihoods to be computed exactly, and give tight multi-sample bounds
in the more usual ``full orbit'' setting. These models cast light on the origin
of the cold posterior effect. In particular, we find that the cold posterior
effect persists even in these principled models incorporating data
augmentation. This suggests that the cold posterior effect cannot be dismissed
as an artifact of data augmentation using incorrect likelihoods.
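
As one concrete reading of the "finite orbit" idea (an illustrative construction; the paper's exact likelihood may differ), the per-example likelihood can be averaged exactly over a small augmentation group:

```python
# A minimal sketch (PyTorch) of one way a "finite orbit" likelihood can be
# computed exactly: the class probability is averaged over a finite
# augmentation group, here the four 90-degree rotations. Illustrative
# construction, not necessarily the paper's exact model.
import math
import torch
import torch.nn.functional as F

def finite_orbit_log_likelihood(model, x, y):
    """x: (B, C, H, W) images, y: (B,) labels.
    log p(y|x) with p(y|x) = (1/4) * sum_k p(y | rot_k(x))."""
    per_rot = []
    for k in range(4):
        logits = model(torch.rot90(x, k, dims=(2, 3)))
        logp = F.log_softmax(logits, dim=1)
        per_rot.append(logp[torch.arange(len(y)), y])
    return torch.logsumexp(torch.stack(per_rot), dim=0) - math.log(4.0)
```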
|
We use VANDELS spectroscopic data overlapping with the $\simeq$7 Ms Chandra
Deep Field South survey to extend studies of high-mass X-ray binary systems
(XRBs) in 301 normal star-forming galaxies in the redshift range $3 < z < 5.5$.
Our analysis evaluates correlations between X-ray luminosities ($L_X$), star
formation rates (SFR) and stellar metallicities ($Z_\star$) to higher redshifts
and over a wider range in galaxy properties than hitherto. Using a stacking
analysis performed in bins of both redshift and SFR for sources with robust
spectroscopic redshifts without AGN signatures, we find convincing evolutionary
trends in the ratio $L_X$/SFR to the highest redshifts probed, with a stronger
trend for galaxies with lower SFRs. Combining our data with published samples
at lower redshift, the evolution of $L_X$/SFR to $z\simeq5$ proceeds as $(1 +
z)^{1.03 \pm 0.02}$. Using stellar metallicities derived from photospheric
absorption features in our spectroscopic data, we confirm indications at lower
redshifts that $L_X$/SFR is enhanced for metal-poor galaxies. We use
semi-analytic models to show that metallicity dependence of $L_X$/SFR alone may
not be sufficient to fully explain the observed redshift evolution of X-ray
emission from high-mass XRBs, particularly for galaxies with SFR $<30$
$M_\odot$ yr$^{-1}$. We speculate that the discrepancy may arise due to reduced
overall stellar ages in the early Universe leading to higher $L_X$/SFR for the
same metallicity. We use our data to define the redshift-dependent contribution
of XRBs to the integrated X-ray luminosity density and, in comparison with
models, find that the contribution of high-mass XRBs to the cosmic X-ray
background at $z>6$ may be $\gtrsim 0.25$ dex higher than previously estimated.
|
The renormalized contribution of fermions to the curvature masses of vector
and axial-vector mesons is derived with two different methods at leading order
in the loop expansion applied to the (2+1)-flavor constituent quark-meson
model. The corresponding contribution to the curvature masses of the scalar and
pseudoscalar mesons, already known in the literature, is rederived in a
transparent way. The temperature dependence of the curvature mass of various
(axial-)vector modes obtained by decomposing the curvature mass tensor is
investigated along with the (axial-)vector--(pseudo)scalar mixing. All
fermionic corrections are expressed as simple integrals that involve at finite
temperature only the Fermi-Dirac distribution function modified by the
Polyakov-loop degrees of freedom. The renormalization of the (axial-)vector
curvature mass allows us to lift a redundancy in the original Lagrangian of the
globally symmetric extended linear sigma model, in which terms already
generated by the covariant derivative were reincluded with different coupling
constants.
|
This paper introduces on-the-way choice of retail outlet as a form of
convenience shopping. It presents a model of on-the-way choice of retail outlet
and applies the model in the context of fuel retailing to explore its
implications for segmentation and spatial competition. The model is a latent
class random utility choice model. An application to gas station choices
observed in a medium-sized Asian city shows that the model fits substantially
better than existing models. The empirical results indicate consumers may adopt
one of two decision strategies. When adopting an immediacy-oriented strategy
they behave in accordance with the traditional gravity-based retail models and
tend to choose the most spatially convenient outlet. When following a
destination-oriented strategy they focus more on maintaining their overall trip
efficiency and so will tend to visit outlets located closer to their main
destination and are more susceptible to retail agglomeration effects. The paper
demonstrates how the model can be used to inform segmentation and local
competition analyses that account for variations in these strategies as well as
variations in consumer type, origin and time of travel. Simulations of a
duopoly setting further demonstrate the implications.
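
A minimal sketch of the latent class structure described above, with choice probabilities mixed over the two decision strategies; the coefficients and distances are illustrative, not estimates from the paper.

```python
# A minimal sketch of a two-class latent class random utility choice model:
# choice probabilities mix an immediacy-oriented and a destination-oriented
# strategy. All coefficients below are illustrative.
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def choice_probs(detour, remaining_trip, class_shares):
    u1 = -2.0 * detour           # class 1 (immediacy): spatial convenience
    u2 = -1.5 * remaining_trip   # class 2 (destination): trip efficiency
    return class_shares[0] * softmax(u1) + class_shares[1] * softmax(u2)

# Three hypothetical gas stations along a commute (distances in km):
p = choice_probs(np.array([0.1, 0.5, 1.0]),   # detour from route
                 np.array([5.0, 2.0, 0.3]),   # remaining trip after stop
                 class_shares=(0.6, 0.4))
print(p, p.sum())  # mixture of the two strategies, sums to 1
```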
|
For a commutative, unital and integral quantale V, we generalize to V-groups
the results developed by Gran and Michel for preordered groups. We first of all
show that, in the category V-Grp of V-groups, there exists a torsion theory
whose torsion and torsion-free subcategories are given by those of indiscrete
and separated V-groups, respectively. It turns out that this torsion theory
induces a monotone-light factorization system that we characterize, and it is
then possible to describe the coverings in V-Grp. We next classify these
coverings as internal actions of a Galois groupoid. Finally, we observe that
the subcategory of separated V-groups is also a torsion-free subcategory for a
pretorsion theory whose torsion subcategory is the one of symmetric V-groups.
As recently proved by Clementino and Montoli, this latter category is actually
not only coreflective, as it is the case for any torsion subcategory, but also
reflective.
|
The paper focuses on the a posteriori tuning of a generative model in order
to favor the generation of good instances in the sense of some external
differentiable criterion. The proposed approach, called Boltzmann Tuning of
Generative Models (BTGM), applies to a wide range of applications. It covers
conditional generative modelling as a particular case, and offers an affordable
alternative to rejection sampling. The contribution of the paper is twofold.
Firstly, the objective is formalized and tackled as a well-posed optimization
problem; a practical methodology is proposed to choose among the candidate
criteria representing the same goal, the one best suited to efficiently learn a
tuned generative model. Secondly, the merits of the approach are demonstrated
on a real-world application, in the context of robust design for energy
policies, showing the ability of BTGM to sample the extreme regions of the
considered criteria.
|
We solve the decidability problem for Boolean Set Theory with unordered
cartesian product.
|
A key component of the baryon cycle in galaxies is the depletion of metals
from the gas to the dust phase in the neutral ISM. The METAL (Metal Evolution,
Transport and Abundance in the Large Magellanic Cloud) program on the Hubble
Space Telescope acquired UV spectra toward 32 sightlines in the half-solar
metallicity LMC, from which we derive interstellar depletions (gas-phase
fractions) of Mg, Si, Fe, Ni, S, Zn, Cr, and Cu. The depletions of different
elements are tightly correlated, indicating a common origin. Hydrogen column
density is the main driver for depletion variations. Correlations are weaker
with volume density, probed by CI fine structure lines, and distance to the LMC
center. The latter correlation results from an East-West variation of the
gas-phase metallicity. Gas in the East, compressed side of the LMC,
encompassing 30 Doradus and the Southeast HI over-density, is enriched by up to
+0.3 dex, while gas on the West side is metal-deficient by up to -0.5 dex.
Within the parameter space probed by METAL, no correlation with molecular
fraction or radiation field intensity is found. We confirm the factor 3-4
increase in dust-to-metal and dust-to-gas ratios between the diffuse
($\log N(\mathrm{H})\sim 20$ cm$^{-2}$) and molecular
($\log N(\mathrm{H})\sim 22$ cm$^{-2}$) ISM observed from far-infrared, 21 cm, and CO
observations. The variations of dust-to-metal and dust-to-gas ratios with
column density have important implications for the sub-grid physics of chemical
evolution, gas and dust mass estimates throughout cosmic times, and for the
chemical enrichment of the Universe measured via spectroscopy of damped
Lyman-alpha systems.
|
Concave Utility Reinforcement Learning (CURL) extends RL from linear to
concave utilities in the occupancy measure induced by the agent's policy. This
encompasses not only RL but also imitation learning and exploration, among
others. Yet, this more general paradigm invalidates the classical Bellman
equations, and calls for new algorithms. Mean-field Games (MFGs) are a
continuous approximation of many-agent RL. They consider the limit case of a
continuous distribution of identical agents, anonymous with symmetric
interests, and reduce the problem to the study of a single representative agent
in interaction with the full population. Our core contribution consists in
showing that CURL is a subclass of MFGs. We believe this connection is
important for bridging the two communities. It also allows us to shed light on
aspects of both fields: we show the equivalence between concavity in CURL and monotonicity in
the associated MFG, between optimality conditions in CURL and Nash equilibrium
in MFG, or that Fictitious Play (FP) for this class of MFGs is simply
Frank-Wolfe, bringing the first convergence rate for discrete-time FP for MFGs.
We also experimentally demonstrate that, using algorithms recently introduced
for solving MFGs, we can address the CURL problem more efficiently.
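
For intuition, the FP-as-Frank-Wolfe correspondence can be sketched as follows (a schematic rendering in our own notation, with $F$ the concave utility over occupancy measures $\mu$):

```latex
% Schematic Frank-Wolfe step on occupancy measures; FP's uniform averaging
% of best responses is recovered with step size \alpha_k = 1/(k+1).
\mu_{k+1} = (1-\alpha_k)\,\mu_k + \alpha_k\,\mu_k^{\mathrm{BR}},
\qquad
\mu_k^{\mathrm{BR}} \in \arg\max_{\mu \in \mathcal{M}}
\ \langle \nabla F(\mu_k),\, \mu \rangle .
```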
|
In the near future, the redshift drift observations in optical and radio
bands will provide precise measurements on $H(z)$ covering the redshift ranges
of $2<z<5$ and $0<z<0.3$. In addition, gravitational wave (GW) standard siren
observations could measure the dipole anisotropy of the luminosity
distance, which will also provide $H(z)$ measurements in the redshift range
of $0<z<3$. In this work, we propose a multi-messenger and multi-wavelength
observational strategy to measure $H(z)$ based on the three next-generation
projects, E-ELT, SKA, and DECIGO, and we wish to see whether the future $H(z)$
measurements could provide tight constraints on dark-energy parameters. The
dark energy models we consider include $\Lambda$CDM, $w$CDM, CPL, HDE, and
I$\Lambda$CDM models. It is found that E-ELT, SKA1, and DECIGO are highly
complementary in constraining dark energy models. Although any one of these
three data sets can only give rather weak constraints on each model we
consider, the combination of them could significantly break the parameter
degeneracies and give much tighter constraints on almost all the cosmological
parameters. Moreover, we find that the combination of E-ELT, SKA1, DECIGO, and
CMB could further improve the constraints on dark energy parameters, e.g.,
$\sigma(w_0)=0.024$ and $\sigma(w_a)=0.17$ in the CPL model, which means that
these three promising probes will play a key role in helping reveal the nature
of dark energy.
|
Hardware specialization is becoming a key enabler of energy-efficient
performance. Future systems will be increasingly heterogeneous, integrating
multiple specialized and programmable accelerators, each with different memory
demands. Traditionally, communication between accelerators has been
inefficient, typically orchestrated through explicit DMA transfers between
different address spaces. More recently, industry has proposed unified coherent
memory which enables implicit data movement and more data reuse, but often
these interfaces limit the coherence flexibility available to heterogeneous
systems. This paper demonstrates the benefits of fine-grained coherence
specialization for heterogeneous systems. We propose an architecture that
enables low-complexity independent specialization of each individual coherence
request in heterogeneous workloads by building upon a simple and flexible
baseline coherence interface, Spandex. We then describe how to optimize
individual memory requests to improve cache reuse and performance-critical
memory latency in emerging heterogeneous workloads. Collectively, our
techniques enable significant gains, reducing execution time by up to 61% or
network traffic by up to 99% while adding minimal complexity to the Spandex
protocol.
|
Automatic image and digit recognition is a computationally challenging task
for image processing and pattern recognition, requiring an adequate
appreciation of the syntactic and semantic importance of the image for the
identification of handwritten digits. Image and pattern recognition has been
identified as one of the driving forces of research because of its wide range
of applications, such as safety frameworks, clinical frameworks,
entertainment, and so on. In this study, we implemented a hybrid neural
network model that is capable of recognizing digits from the MNIST dataset and
achieved remarkable results. The proposed neural network model can extract
features from an image and recognize those features layer by layer. Moreover,
it is important to understand how the proposed model works in each layer, how
it generates output, and so on. In addition, the study examines auto-encoder
and variational auto-encoder systems on the MNIST dataset. This study explores
the issues discussed above, provides explanations for them, and shows how they
can be overcome.
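
As a minimal illustration of the kind of layer-by-layer CNN feature extractor such a study builds on (the actual hybrid architecture is not specified here, so the layer sizes below are illustrative only):

```python
# A minimal sketch (PyTorch) of a small CNN classifier for MNIST; layer
# sizes are illustrative placeholders, not the paper's hybrid model.
import torch
import torch.nn as nn

class SmallMNISTNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(        # layer-by-layer feature extraction
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):                     # x: (B, 1, 28, 28)
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = SmallMNISTNet()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```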
|
In the current work we discuss the notion of gateways as a means for
interoperability across different blockchain systems. We discuss two key
principles for the design of gateway nodes and scalable gateway protocols,
namely (i) the opaque ledgers principle as the analogue of the autonomous
systems principle in IP datagram routing, and (ii) the externalization of value
principle as the analogue of the end-to-end principle in the Internet
architecture. We illustrate the need for a standard gateway protocol by
describing a unidirectional asset movement protocol between two peer gateways,
under the strict condition of both blockchains being private/permissioned with
their ledgers inaccessible to external entities. Several aspects of gateways
and the gateway protocol are discussed, including gateway identities, gateway
certificates and certificate hierarchies, passive locking transactions by
gateways, and the potential use of delegated hash-locks to expand the
functionality of gateways.
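
As a minimal illustration of the hash-lock primitive mentioned above (the delegated variant extends this idea; the timeout handling below is an illustrative simplification, not the paper's protocol):

```python
# A minimal sketch of a hash time lock: an asset is locked under
# H = hash(s) and released only on presentation of the preimage s
# before a deadline. Illustrative only.
import hashlib, os, time

def new_lock(secret: bytes, timeout_s: float):
    return {"digest": hashlib.sha256(secret).hexdigest(),
            "expires": time.time() + timeout_s}

def try_unlock(lock, preimage: bytes) -> bool:
    fresh = time.time() < lock["expires"]
    return fresh and hashlib.sha256(preimage).hexdigest() == lock["digest"]

s = os.urandom(32)                 # secret held by one of the peer gateways
lock = new_lock(s, timeout_s=3600)
print(try_unlock(lock, s))         # True: correct preimage within timeout
print(try_unlock(lock, b"wrong"))  # False
```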
|
The Wilson action for Euclidean lattice gauge theory defines a
positive-definite transfer matrix that corresponds to a unitary lattice gauge
theory time-evolution operator if analytically continued to real time. Hoshina,
Fujii, and Kikukawa (HFK) recently pointed out that applying the Wilson action
discretization to continuum real-time gauge theory does not lead to this, or
any other, unitary theory and proposed an alternate real-time lattice gauge
theory action that does result in a unitary real-time transfer matrix. The
character expansion defining the HFK action is divergent, and in this work we
apply a path integral contour deformation to obtain a convergent representation
for U(1) HFK path integrals suitable for numerical Monte Carlo calculations. We
also introduce a class of real-time lattice gauge theory actions based on
analytic continuation of the Euclidean heat-kernel action. Similar divergent
sums are involved in defining these actions, but for one action in this class
this divergence takes a particularly simple form, allowing construction of a
path integral contour deformation that provides absolutely convergent
representations for U(1) and SU(N) real-time lattice gauge theory path
integrals. We perform proof-of-principle Monte Carlo calculations of real-time
U(1) and SU(3) lattice gauge theory and verify that exact results for unitary
time evolution of static quark-antiquark pairs in (1 + 1)D are reproduced.
|
Recently, considerable attention has been devoted to improving optical sensing
devices based on photonic resonators in the presence of graphene. In this
paper, using the transfer matrix approach and TE polarization for the incident
electromagnetic waves, we numerically evaluate the transmission spectra of
one-dimensional photonic resonators and the reflection spectra of surface
plasmon resonances with strained graphene. We show that a relatively small
strain field in graphene can modulate linearly polarized resonant modes within
the photonic bandgap of the defective crystal. Moreover, we study the strain
effects on the surface plasmon resonances created by the evanescent wave
technique at the interface between monolayer graphene and a prism.
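
As a minimal sketch of the transfer matrix approach for a 1D defective photonic crystal (normal incidence, lossless layers; indices and thicknesses are illustrative, and the graphene/strain contributions are omitted):

```python
# Transfer matrix sketch for a 1D layered resonator at normal incidence.
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one homogeneous layer."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(ns, ds, lam, n_in=1.0, n_out=1.0):
    M = np.eye(2, dtype=complex)
    for n, d in zip(ns, ds):
        M = M @ layer_matrix(n, d, lam)
    t = 2 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                    + M[1, 0] + n_out * M[1, 1])
    return (n_out / n_in) * abs(t) ** 2

# Quarter-wave Bragg mirrors around a half-wave defect layer at 600 nm:
lam0, nH, nL, nD = 600e-9, 2.3, 1.45, 2.0
ns = [nH, nL] * 4 + [nD] + [nL, nH] * 4
ds = [lam0 / (4 * n) for n in ns]
ds[8] = lam0 / (2 * nD)                       # half-wave defect layer
print(transmittance(ns, ds, lam0))            # resonant defect mode, T ~ 1
```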
|
We reveal that phononic thermal transport in graphene is not immune to grain
boundaries (GBs) aligned along the direction of the temperature gradient.
Non-equilibrium molecular dynamics simulations uncover a large reduction in the
phononic thermal conductivity ($\kappa_p$) along linear ultra-narrow GBs
comprising periodically-repeating pentagon-heptagon dislocations. Green's
function calculations and spectral energy density analysis indicate that
$\kappa_p$ is the complex manifestation of the periodic strain field, which
behaves as a reflective diffraction grating with both diffuse and specular
phonon reflections, and represents a source of anharmonic phonon-phonon
scattering. Our findings provide new insights into the integrity of the
phononic thermal transport in GB graphene.
|
This work studies the inverse boundary problem for the two photon absorption
radiative transport equation. We show that the absorption coefficients and
scattering coefficients can be uniquely determined from the \emph{albedo}
operator. If scattering is absent, we do not require smallness of the incoming
source and the reconstructions of the absorption coefficients are explicit.
|
This paper introduces a large-scale multimodal and multilingual dataset that
aims to facilitate research on grounding words to images in their contextual
usage in language. The dataset consists of images selected to unambiguously
illustrate concepts expressed in sentences from movie subtitles. The dataset is
a valuable resource as (i) the images are aligned to text fragments rather than
whole sentences; (ii) multiple images are possible for a text fragment and a
sentence; (iii) the sentences are free-form and real-world like; (iv) the
parallel texts are multilingual. We set up a fill-in-the-blank game for humans
to evaluate the quality of the automatic image selection process of our
dataset. We show the utility of the dataset on two automatic tasks: (i)
fill-in-the-blank; (ii) lexical translation. Results of the human evaluation
and automatic models demonstrate that images can be a useful complement to the
textual context. The dataset will benefit research on visual grounding of words
especially in the context of free-form sentences, and can be obtained from
https://doi.org/10.5281/zenodo.5034604 under a Creative Commons licence.
|
Existing dialog state tracking (DST) models are trained with dialog data in a
random order, neglecting rich structural information in a dataset. In this
paper, we propose to use curriculum learning (CL) to better leverage both the
curriculum structure and schema structure for task-oriented dialogs.
Specifically, we propose a model-agnostic framework called Schema-aware
Curriculum Learning for Dialog State Tracking (SaCLog), which consists of a
preview module that pre-trains a DST model with schema information, a
curriculum module that optimizes the model with CL, and a review module that
augments mispredicted data to reinforce the CL training. We show that our
proposed approach improves DST performance over both a transformer-based and
RNN-based DST model (TripPy and TRADE) and achieves new state-of-the-art
results on WOZ2.0 and MultiWOZ2.1.
|
Modelling of stellar radiative intensities in various spectral pass-bands
plays an important role in stellar physics. At the same time, directly
calculating the high-resolution spectrum and then integrating it over a
given spectral pass-band is computationally demanding due to the vast number
atomic and molecular lines. This is particularly so when employing
three-dimensional (3D) models of stellar atmospheres. To accelerate the
calculations, one can employ approximate methods, e.g., the use of Opacity
Distribution Functions (ODFs). Generally, ODFs provide a good approximation of
traditional spectral synthesis i.e., computation of intensities through filters
with strictly rectangular transmission function. However, their performance
strongly deteriorates when the filter transmission noticeably changes within
its pass-band, which is the case for almost all filters routinely used in
stellar physics. In this context, the aims of this paper are a) to generalize
the ODFs method for calculating intensities through filters with arbitrary
transmission functions; b) to study the performance of the standard and
generalized ODFs methods for calculating intensities emergent from 3D models of
stellar atmosphere. For this purpose, we use the newly-developed MPS-ATLAS
radiative transfer code to compute intensities emergent from 3D cubes simulated
with the radiative magnetohydrodynamics code MURaM. The calculations are
performed in the 1.5D regime, i.e., along many parallel rays passing through
the simulated cube. We demonstrate that the generalized ODFs method allows
accurate and fast syntheses of spectral intensities and their centre-to-limb variations.
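
A toy sketch of the effect the generalization accounts for: weighting the integrand by the actual filter profile rather than assuming a rectangle (the spectrum and filter below are synthetic stand-ins, not MPS-ATLAS output):

```python
# Rectangular vs. transmission-weighted band integration of a toy spectrum.
import numpy as np

lam = np.linspace(400.0, 500.0, 2001)                    # nm
spectrum = 1.0 - 0.8 * np.exp(-((lam - 450) / 5) ** 2)   # one broad "line"
filt = np.exp(-((lam - 450) / 20) ** 2)                  # Gaussian filter

rect = (np.abs(lam - 450) < 30).astype(float)            # rectangular proxy
i_rect = np.trapz(spectrum * rect, lam) / np.trapz(rect, lam)
i_gen = np.trapz(spectrum * filt, lam) / np.trapz(filt, lam)
print(i_rect, i_gen)  # the rectangular assumption biases the mean intensity
```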
|
In this paper, we first consider null geodesics of a class of charged,
spherical and asymptotically flat hairy black holes in an
Einstein-Maxwell-scalar theory with a non-minimal coupling for the scalar and
electromagnetic fields. Remarkably, we show that there are two unstable
circular orbits for a photon in a certain parameter regime, corresponding to
two unstable photon spheres of different sizes outside the event horizon. To
illustrate the optical appearance of photon spheres, we then consider a simple
spherical model of optically thin accretion on the hairy black hole, and obtain
the accretion image seen by a distant observer. In the single photon sphere
case, only one bright ring appears in the image, and is identified as the edge
of the black hole shadow. Whereas in the case with two photon spheres, there
can be two concentric bright rings of different radii in the image, and the
smaller one serves as the boundary of the shadow, whose radius goes to zero at
the critical charge.
|
Blockchain-based cryptocurrencies, facilitating the convenience of payment by
providing a decentralized online solution, have not been widely adopted so far
due to slow confirmation of transactions. Offline delegation offers an
efficient way to exchange coins. However, in such an approach, the coins that
have been delegated confront the risk of being spent twice since the
delegator's behaviour cannot be restricted easily on account of the absence of
effective supervision. Even if a third party can be regarded as a judge between
the delegator and delegatee to secure transactions, she still faces the threat
of being compromised or providing misleading assure. Moreover, the approach
equipped with a third party contradicts the real intention of decentralized
cryptocurrency systems. In this paper, we propose \textit{DelegaCoin}, an
offline delegatable cryptocurrency system to mitigate such an issue. We exploit
trusted execution environments (TEEs) as decentralized "virtual agents" to
prevent malicious delegation. In DelegaCoin, an owner can delegate his coins
through offline-transactions without interacting with the blockchain network. A
formal model and analysis, prototype implementation, and further evaluation
demonstrate that our scheme is provably secure and practically feasible.
|
Coronavirus disease 2019 (COVID-19) is one of the most infectious diseases
and one of the greatest challenges of the global health crisis. The virus has
been transmitted globally and is spreading rapidly with high incidence. While
the virus remains pandemic, governments scramble to seek antiviral treatments
and vaccines to combat the disease. This study was conducted to investigate the
influence of air pressure, air temperature, and relative humidity on the number
of confirmed cases in COVID-19. Based on the result, the calculation of
reproduced correlation through path decompositions and subsequent comparison to
the empirical correlation indicated that the path model fits the empirical
data. The identified factor significantly influenced the number of confirmed
cases of COVID-19. Therefore, the number of daily confirmed cases of COVID-19
may decrease as relative humidity increases; relative humidity will
increase as air temperature decreases; and air temperature will decrease as
air pressure decreases. Thus, it is
recommended that policy-making bodies consider the result of this study when
implementing programs for COVID-19 and increase public awareness on the effects
of weather condition, as it is one of the factors to control the number of
COVID-19 cases.
|
We initiate the study of the rational SFT capacities of Siegel using tools in
toric algebraic geometry. In particular, we derive new (often sharp) bounds for
the RSFT capacities of a strongly convex toric domain in dimension $4$. These
bounds admit descriptions in terms of both lattice optimization and (toric)
algebraic geometry. Applications include (a) an extremely simple lattice
formula for many RSFT capacities of a large class of convex toric domains,
(b) new computations of the Gromov width of a class of product symplectic
manifolds and (c) an asymptotics law for the RSFT capacities of all strongly
convex toric domains.
|
Using polarization-resolved Raman spectroscopy, we investigate layer number,
temperature, and magnetic field dependence of Raman spectra in one- to
four-layer $\mathrm{CrI_{3}}$. Layer-number-dependent Raman spectra show that
in the paramagnetic phase a doubly degenerate $E_{g}$ mode of monolayer
$\mathrm{CrI_{3}}$ splits into one $A_{g}$ and one $B_{g}$ mode in N-layer (N >
1) $\mathrm{CrI_{3}}$ due to the monoclinic stacking. Their energy separation
increases in thicker samples until an eventual saturation.
Temperature-dependent measurements further show that the split modes tend to
merge upon cooling but remain separated until 10 K, indicating a failed attempt
of the monoclinic-to-rhombohedral structural phase transition that is present
in the bulk crystal. Magnetic-field-dependent measurements reveal an additional
monoclinic distortion across the magnetic-field-induced layered
antiferromagnetism-to-ferromagnetism phase transition. We propose a structural
change that consists of both a lateral sliding toward the rhombohedral stacking
and a decrease in the interlayer distance to explain our experimental
observations.
|
Extended Berkeley Packet Filter (BPF) has emerged as a powerful method to
extend packet-processing functionality in the Linux operating system. BPF
allows users to write programs in high-level languages (like C or Rust) and
execute them at specific hooks in the kernel, such as the network device driver.
ensure safe execution of a user-developed BPF program in kernel context, Linux
uses an in-kernel static checker. The checker allows a program to execute only
if it can prove that the program is crash-free, always accesses memory within
safe bounds, and avoids leaking kernel data.
BPF programming is not easy. One, even modest-sized BPF programs are deemed
too large to analyze and rejected by the kernel checker. Two, the kernel
checker may incorrectly determine that a BPF program exhibits unsafe behaviors.
Three, even small performance optimizations to BPF code (e.g., 5% gains) must
be meticulously hand-crafted by expert developers. Traditional optimizing
compilers for BPF are often inadequate since the kernel checker's safety
constraints are incompatible with rule-based optimizations.
We present K2, a program-synthesis-based compiler that automatically
optimizes BPF bytecode with formal correctness and safety guarantees. K2
produces code with 6--26% reduced size, 1.36%--55.03% lower average
packet-processing latency, and 0--4.75% higher throughput (packets per second
per core) relative to the best clang-compiled program, across benchmarks drawn
from Cilium, Facebook, and the Linux kernel. K2 incorporates several
domain-specific techniques to make synthesis practical by accelerating
equivalence-checking of BPF programs by 6 orders of magnitude.
|
We present a novel watermarking scheme to verify the ownership of DNN models.
Existing solutions embed watermarks into the model parameters, which have been
proven to be removable and detectable by an adversary, invalidating the
protection. In contrast, we propose to implant watermarks into the model
architectures. We design new algorithms based on Neural Architecture Search
(NAS) to generate watermarked architectures, which are unique enough to
represent the ownership, while maintaining high model usability. We further
leverage cache side channels to extract and verify watermarks from the
black-box models at inference. Theoretical analysis and extensive evaluations
show our scheme has negligible impact on the model performance, and exhibits
strong robustness against various model transformations.
|
We estimate a general mixture of Markov jump processes. The key novel feature
of the proposed mixture is that the transition intensity matrices of the Markov
processes comprising the mixture are entirely unconstrained. The Markov
processes are mixed with distributions that depend on the initial state of the
mixture process. The new mixture is estimated from its continuously observed
realizations using the EM algorithm, which provides the maximum likelihood (ML)
estimates of the mixture's parameters. To obtain the standard errors of the
estimates of the mixture's parameters, we employ Louis' (1982) general formula
for the observed Fisher information matrix. We also derive the asymptotic
properties of the ML estimators. A simulation study verifies the estimates'
accuracy and confirms the consistency and asymptotic normality of the
estimators. The developed methods are applied to a medical dataset, for which
the likelihood ratio test rejects the constrained mixture in favor of the
proposed unconstrained one. This application exemplifies the usefulness of a
new unconstrained mixture for identification and characterization of
homogeneous subpopulations in a heterogeneous population.
|
We discuss the prospects of diffractive dijet photoproduction at the EIC to
distinguish different fits of diffractive proton PDFs and different schemes of
factorization breaking, and to determine diffractive nuclear PDFs and pion
PDFs from leading neutron production.
|
This note introduces a simple metric for benchmarking shock-capturing
schemes. This metric is especially focused on the shock-capturing overshoots,
which may undermine the robustness of numerical simulations, as well as the
reliability of numerical results. The idea is to numerically solve the model
linear advection equation with an initial condition of a square wave
characterized by different wavenumbers. After one step of temporal evolution,
the exact numerical overshoot error can be easily determined and shown as a
function of the CFL number and the reduced wavenumber. With the overshoot error
quantified by the present metric, a number of representative shock-capturing
schemes are analyzed accordingly. Several findings are newly discovered: the
amplitude of the overshoots varies non-monotonically with the CFL number, and
it depends significantly on the reduced wavenumber of the square waves
(discontinuities); neither has been reported before.
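
To make the metric concrete, here is a minimal sketch using Lax-Wendroff (a simple, deliberately oscillatory scheme chosen for illustration, not one of the shock-capturing schemes analyzed in the note):

```python
# Advect a square wave one step and measure the exact overshoot above the
# initial maximum, as a function of the CFL number.
import numpy as np

def overshoot_after_one_step(cfl, n_cells=200, width=50):
    u = np.zeros(n_cells)
    u[:width] = 1.0                              # square wave, max = 1
    up = np.roll(u, -1)                          # u_{j+1}
    um = np.roll(u, 1)                           # u_{j-1}
    # One Lax-Wendroff step for u_t + a u_x = 0 on a periodic grid:
    u_new = u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2 * u + um)
    return max(u_new.max() - 1.0, 0.0)           # overshoot error

for cfl in (0.2, 0.5, 0.8, 1.0):
    # Overshoot = 0.5*cfl*(1-cfl) here: non-monotonic in CFL, zero at CFL=1.
    print(cfl, overshoot_after_one_step(cfl))
```

Varying the `width` parameter plays the role of the reduced wavenumber of the square wave.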
|
The automatic feature extraction capability of deep neural networks (DNNs)
endows them with the potential for analysing complicated electroencephalogram
(EEG) data captured in brain functionality research. This work investigates the
potential coherent correspondence between the region-of-interest (ROI) for DNN
to explore, and ROI for conventional neurophysiological oriented methods to
work with, exemplified in the case of working memory study. The attention
mechanism induced by global average pooling (GAP) is applied to a public EEG
dataset of working memory, to unveil these coherent ROIs via a classification
problem. The results show the alignment of ROIs from different research
disciplines. This work asserts the confidence and promise of utilizing DNNs for
EEG data analysis, despite the lack of interpretability of network operations.
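
A minimal sketch of how GAP induces a spatial attention map (class activation mapping): the classifier weights are projected back onto the pre-GAP feature maps. The shapes below are illustrative stand-ins for EEG data arranged as channel-by-time "images".

```python
# GAP-based class activation mapping sketch (PyTorch).
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)   # feature extractor
fc = nn.Linear(8, 2)                               # classifier after GAP

x = torch.randn(1, 1, 32, 128)                     # (batch, 1, electrodes, time)
fmap = conv(x)                                     # (1, 8, 32, 128)
logits = fc(fmap.mean(dim=(2, 3)))                 # global average pooling

cls = logits.argmax(dim=1)
w = fc.weight[cls]                                 # (1, 8) class weights
cam = torch.einsum("bc,bchw->bhw", w, fmap)        # (1, 32, 128) ROI map
print(cam.shape)
```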
|
Capacitance measurements as a function of voltage, frequency and temperature
are useful tools to identify fundamental parameters that affect solar cell
operation. Techniques such as capacitance-voltage (CV), Mott-Schottky analysis
and thermal admittance spectroscopy (TAS) measurements are therefore frequently
employed to obtain relevant parameters of the perovskite absorber layer in
perovskite solar cells. However, state-of-the-art perovskite solar cells employ
thin electron and hole transport layers that improve contact selectivity. These
selective contacts are often quite resistive in nature, which implies that
their capacitances will contribute to the total capacitance and thereby affect
the extraction of the capacitance of the perovskite layer. Based on this
premise, we develop a simple multilayer model that considers the perovskite
solar cell as a series connection of the geometric capacitance of each layer in
parallel with their voltage-dependent resistances. Analysis of this model
yields fundamental limits to the resolution of spatial doping profiles and
minimum values of doping/trap densities, built-in voltages and activation
energies. We observe that most of the experimental
capacitance-voltage-frequency-temperature data, calculated doping/trap
densities and activation energies reported in the literature are within these
derived cut-off values, indicating that the capacitance response of the
perovskite solar cell is indeed strongly affected by the capacitance of its
selective contacts.
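
A minimal sketch of the multilayer model described above, with each layer a geometric capacitance in parallel with a resistance and the layers connected in series; the parameter values are illustrative, not fitted to any device.

```python
# Apparent capacitance of a series stack of parallel R-C layers.
import numpy as np

def apparent_capacitance(freq, layers):
    """layers: list of (C_i in F, R_i in Ohm); returns C_app(freq)."""
    w = 2 * np.pi * freq
    z = sum(1.0 / (1.0 / r + 1j * w * c) for c, r in layers)  # series sum
    return np.imag(1.0 / z) / w          # C from the total admittance

layers = [(2e-7, 1e2),    # perovskite absorber (C, R): low resistance
          (5e-7, 1e5),    # electron transport layer: resistive contact
          (8e-7, 1e5)]    # hole transport layer: resistive contact

for f in (1e2, 1e4, 1e6):
    print(f, apparent_capacitance(f, layers))
# At low frequency the perovskite layer is shunted by its low resistance,
# so the measured capacitance reflects the contacts, not the absorber.
```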
|
The photoionization of the CO molecule from the C$-1s$ orbital does not obey
the Franck-Condon approximation, as a consequence of the nuclear recoil that
accompanies the direct emission and intra-molecular scattering of the
photoelectron. We use an analytical model to investigate the temporal signature
of the entangled nuclear and electronic motion in this process. We show that
the photoelectron emission delay can be decomposed into its localization and
resonant-confinement components. Finally, photoionization by a broadband
soft-x-ray pulse results in a coherent vibrational ionic state with a tunable
delay with respect to the classical sudden-photoemission limit.
|
Reconfigurable intelligent surfaces (RIS) are a key enabler of various new
applications in 6G smart radio environments. By utilizing an RIS prototype
system, this paper aims to enhance self-interference (SI) cancellation for
in-band full-duplex(FD) communication systems. In FD communication, SI of a
node severely limits the performance of the node by shadowing the received
signal from a distant node with its own transmit signal. To this end, we
propose to assist SI cancellation by exploiting a RIS to form a suitable
cancellation signal, thus canceling the leaked SI in the analog domain.
Building upon a 64-element RIS prototype, we present results of RIS-assisted SI
cancellation from a real testbed. Given a suitable amount of initial analog
isolation, we are able to cancel the leaked signal by as much as -85 dB. The
presented case study shows promising performance to build an FD communication
system on this foundation.
|
Graph Neural Networks (GNNs) have recently shown to be powerful tools for
representing and analyzing graph data. GNNs are playing an increasingly
critical role in software engineering, including program analysis, type
inference, and code representation. In this paper, we introduce GraphGallery, a
platform for fast benchmarking and easy development of GNNs based software.
GraphGallery is an easy-to-use platform that allows developers to automatically
deploy GNNs even with less domain-specific knowledge. It offers a set of
implementations of common GNN models based on mainstream deep learning
frameworks. In addition, existing GNNs toolboxes such as PyG and DGL can be
easily incorporated into the platform. Experiments demonstrate the reliability
of the implementations and the platform's advantage in fast coding. The official source code of
GraphGallery is available at https://github.com/EdisonLeeeee/GraphGallery and a
demo video can be found at https://youtu.be/mv7Zs1YeaYo.
|
The Granular Integration Through Transients (GITT) formalism gives a
theoretical description of the rheology of moderately dense granular flows and
suspensions. In this work, we extend the GITT equations beyond the case of
simple shear flows studied before. Applying this to the particular example of
extensional flows, we show that the predicted behavior is somewhat different
from that of the more frequently studied simple shear case, as illustrated by
the possibility of non monotonous evolution of the effective friction
coefficient $\mu$ with the inertial number $\mathcal{I}$. By the reduction of
the GITT equations to simple toy-models, we provide a generalization of the
$\mu(\mathcal{I})$-law true for any type of flow deformation. Our analysis also
includes a study of the Trouton ratio, which is shown to behave quite similarly
to that of dense colloidal suspensions.
|
We present observations of 86 meteor radio afterglows (MRAs) using the new
broadband imager at the Long Wavelength Array Sevilleta (LWA-SV) station. The
MRAs were detected using the all-sky images with a bandwidth up to 20 MHz. We
fit the spectra with both a power law and a log-normal function. When fit with
a power law, the spectra varied from flat to steep and the derived spectral
index distribution from the fit peaked at -1.73. When fit with a log-normal
function, the spectra exhibit turnovers at frequencies between 30 and 40 MHz,
and the log-normal appears to be a better functional fit to the spectra. We compared the spectral
parameters from the two fitting methods with the physical properties of MRAs.
We observe a weak correlation between the log-normal turnover frequency and the
altitude of MRAs. The spectral indices from the power law fit do not show any
strong correlations with the physical properties of MRAs. However, the full
width at half maximum (FWHM) duration of MRAs is correlated with the local
time, incidence angle, luminosity, and optically derived kinetic energy of the
parent meteoroid. Also, the average luminosity of MRAs seems to be correlated
with the kinetic energy of the parent meteoroid and the altitude at which they occur.
|
We continue the study of Harish-Chandra bimodules in the setting of the
Deligne categories $\mathrm{Rep}(G_t)$, that was started in the previous work
of the first author (arXiv:2002.01555). In this work we construct a family of
Harish-Chandra bimodules that generalize simple finite dimensional bimodules in
the classical case. It turns out that they have finite $K$-type, which is a
non-vacuous condition for the Harish-Chandra bimodules in $\mathrm{Rep}(G_t)$.
The full classification of (simple) finite $K$-type bimodules is yet unknown.
This construction also yields some examples of central characters $\chi$ of
the universal enveloping algebra $U(\mathfrak{g}_t)$ for which the quotient
$U_\chi$ is not simple, and, thereby, it allows us to partially solve a
question posed by Pavel Etingof in one of his works.
|
Progressive Quenching (PQ) is a stochastic process during which one fixes one
after another the degrees of freedom of a globally coupled Ising spin system
while letting it thermalize through a heat bath. It has previously been shown
that during PQ, the mean equilibrium spin value follows a martingale process
and that this process can characterize the memory of the system. In the present
study, we find that the aforementioned martingale implies a local invariance of
the path-weight for the total quenched magnetization, the Markovian process
whose increment is the lastly fixed spin. Consequently, PQ lets the probability
distribution for the total quenched magnetization evolve while keeping the
Boltzmann-like factor, or a canonical structure under constraint, which
consists of a path-independent potential and a path-counting entropy. Moreover,
when the PQ starts from full equilibrium, the probability distribution at each
stage of PQ is found to be the limit distribution of what we call Recycled
Quenching (RQ), the process in which a randomly chosen quenched spin is
unquenched after a single step of PQ. The local invariance is a new consequence
of the martingale property, not an application of known theorems for
martingale processes.
|
Existing unsupervised visual odometry (VO) methods either match pairwise
images or integrate the temporal information using recurrent neural networks
over a long sequence of images. They are either inaccurate, time-consuming to
train, or prone to error accumulation. In this paper, we propose a method consisting
of two camera pose estimators that deal with the information from pairwise
images and a short sequence of images respectively. For image sequences, a
Transformer-like structure is adopted to build a geometry model over a local
temporal window, referred to as Transformer-based Auxiliary Pose Estimator
(TAPE). Meanwhile, a Flow-to-Flow Pose Estimator (F2FPE) is proposed to exploit
the relationship between pairwise images. The two estimators are constrained
through a simple yet effective consistency loss in training. Empirical
evaluation has shown that the proposed method outperforms the state-of-the-art
unsupervised learning-based methods by a large margin and performs comparably
to supervised and traditional ones on the KITTI and Malaga datasets.
|
Emerging ultra-low-power tiny-scale computing devices in Cyber-Physical
Systems run on harvested energy, are intermittently powered, have limited
computational capability, and perform sensing and actuation functions under
the control of dedicated firmware
operating without the supervisory control of an operating system. Wirelessly
updating or patching the firmware of such devices is inevitable. We consider
the challenging problem of simultaneous and secure firmware updates or patching
for a typical class of such devices -- Computational Radio Frequency
Identification (CRFID) devices. We propose Wisecr, the first secure and
simultaneous wireless code dissemination mechanism for multiple devices that
prevents malicious code injection attacks and intellectual property (IP) theft,
whilst enabling remote attestation of code installation. Importantly, Wisecr is
engineered to comply with existing ISO compliant communication protocol
standards employed by CRFID devices and systems. We comprehensively evaluate
Wisecr's overhead, demonstrate its implementation over standards-compliant
protocols, analyze its security and implement an end-to-end realization with
popular CRFID devices -- the open-source code is released on GitHub.
|
Twin support vector machine (TWSVM) and twin support vector regression (TSVR)
are newly emerging efficient machine learning techniques which offer promising
solutions for classification and regression challenges respectively. TWSVM is
based upon the idea to identify two nonparallel hyperplanes which classify the
data points to their respective classes. It requires solving two small-sized
quadratic programming problems (QPPs) in lieu of the single large QPP solved
in the support vector machine (SVM), while TSVR is formulated along the lines
of TWSVM and requires solving two SVM-type problems. Although there has been good
research progress on these techniques; there is limited literature on the
comparison of different variants of TSVR. Thus, this review presents a rigorous
analysis of recent research in TWSVM and TSVR simultaneously mentioning their
limitations and advantages. To begin with, we introduce the basic theory of
the support vector machine and TWSVM, and then focus on the various
improvements and applications of TWSVM; afterwards, we introduce TSVR and its various enhancements.
Finally, we suggest future research and development prospects.
|
In this paper, we consider hybrid metric-Palatini gravity, an approach to
modified gravity in which a supplementary term containing a Palatini-type
correction of the form $f({\cal R},T)$ is added to the usual Einstein-Hilbert
action. Here, ${\cal R}$ is the Palatini curvature scalar, which is constructed
from an independent connection, and $T$ is the trace of the energy-momentum tensor. This
theory describes a non-minimal coupling between matter and geometry. The
modified Einstein field equations in this hybrid metric-Palatini approach are
obtained. Then, it is investigated whether this modified theory of gravity and
its field equations allow G\"{o}del-type solutions, which essentially lead to
violation of causality. Considering physically well-motivated matter sources,
causal and non-causal solutions are explored.
|
An unresolved problem in Deep Learning is the ability of neural networks to
cope with domain shifts during test-time, imposed by commonly fixing network
parameters after training. Our proposed method, Meta Test-Time Training (MT3),
however, breaks this paradigm and enables adaptation at test time. We combine
meta-learning, self-supervision and test-time training to learn to adapt to
unseen test distributions. By minimizing the self-supervised loss, we learn
task-specific model parameters for different tasks. A meta-model is optimized
such that its adaptation to the different task-specific models leads to higher
performance on those tasks. During test time, a single unlabeled image is
sufficient to adapt the meta-model parameters. This is achieved by minimizing
only the self-supervised loss component resulting in a better prediction for
that image. Our approach significantly improves the state-of-the-art results on
the CIFAR-10-Corrupted image classification benchmark. Our implementation is
available on GitHub.
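
A minimal sketch of the test-time step, assuming a model with separate `backbone`/`classifier` modules and a rotation-prediction head as the self-supervised task; these are illustrative stand-ins, not MT3's exact architecture or losses.

```python
# Single-image test-time adaptation via a self-supervised loss (PyTorch).
import copy
import torch
import torch.nn.functional as F

def adapt_and_predict(meta_model, ssl_head, x, steps=1, lr=1e-3):
    """x: (1, C, H, W), a single unlabeled test image."""
    model = copy.deepcopy(meta_model)              # keep the meta-model intact
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        k = torch.randint(0, 4, (1,)).item()       # random 90-degree rotation
        feats = model.backbone(torch.rot90(x, k, dims=(2, 3)))
        loss = F.cross_entropy(ssl_head(feats), torch.tensor([k]))
        opt.zero_grad()
        loss.backward()
        opt.step()                                 # self-supervised update only
    with torch.no_grad():
        return model.classifier(model.backbone(x))  # adapted prediction
```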
|
We propose a novel image-based localization system using graph neural
networks (GNN). The pretrained ResNet50 convolutional neural network (CNN)
architecture is used to extract the important features for each image.
The extracted features are then input to a GNN to find the pose of each
image, either by using the image features as nodes in a graph, formulating the
pose estimation problem as node pose regression, or by modelling the image
features themselves as a graph, in which case the problem becomes graph pose
regression. We do an
extensive comparison between the proposed two approaches and the state of the
art single image localization methods and show that using GNN leads to enhanced
performance for both indoor and outdoor environments.
|
This article studies Paley's theory for lacunary Fourier series on
(nonabelian) discrete groups.
The results unify and generalize the work of Rudin for abelian discrete
groups and the work of Lust-Piquard and Pisier for operator valued functions,
and provide new examples of Paley sequences and $\Lambda(p)$ sets on free
groups.
|
In this research article, we report periodic cellular automata rules for
different gait state prediction and classification of gait data using extreme
learning machines (ELM). This research is the first attempt to use cellular
automata to understand the complexity of bipedal walking. Due to nonlinearity,
varying configurations throughout the gait cycle, and the passive joint
located at the unilateral foot-ground contact, the dynamic descriptions and
control laws of bipedal walking vary from phase to phase, making it difficult
to predict bipedal walking states. We have designed cellular automata rules
that predict the next gait state of bipedal steps based on the previous two
neighbour states, focusing on normal walking. The state prediction will help
to correctly design the bipedal walk. Normal walking has a total of 8 states,
and the next state depends on the previous two; we consider the current and
previous states to predict the next state. Accordingly, we formulated 16 rules
using cellular automata, 8 rules for each leg. The priority order is
maintained using the fact that if the right leg is in the swing phase, the
left leg must be in the stance phase. To validate the model, we classified the
gait data using ELM [1] and achieved 60% accuracy. We explored the resulting
trajectories and compared them with other gait trajectories. Finally, we
present an error analysis for different joints.
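
A minimal sketch of the rule-table idea: the next gait state of a leg is looked up from its previous and current states. The states and transitions below are an illustrative four-state cycle, not the paper's 16 rules over 8 states.

```python
# Rule table: (previous_state, current_state) -> next_state
RULES = {
    ("swing", "heel-strike"): "stance",
    ("heel-strike", "stance"): "toe-off",
    ("stance", "toe-off"): "swing",
    ("toe-off", "swing"): "heel-strike",
}

def next_state(prev, curr):
    return RULES[(prev, curr)]

# Walk one leg through a full cycle; the other leg would be kept in
# antiphase (if the right leg is in swing, the left must be in stance).
prev, curr = "toe-off", "swing"
for _ in range(4):
    prev, curr = curr, next_state(prev, curr)
    print(curr)
```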
|
We study the problem of estimating the total number of searches (volume) of
queries in a specific domain, which were submitted to a search engine in a
given time period. Our statistical model assumes that the distribution of
searches follows a Zipf's law, and that the observed sample volumes are biased
according to three possible scenarios. These assumptions are consistent with
empirical data, with keyword research practices, and with approximate
algorithms used to take counts of query frequencies. A few estimators of the
parameters of the distribution are devised and tested, based on the nature of
the empirical/simulated data. For continuous data, we recommend using
nonlinear least square regression (NLS) on the top-volume queries, where the
bound on the volume is obtained from the well-known Clauset, Shalizi and Newman
(CSN) estimation of power-law parameters. For binned data, we propose using a
Chi-square minimization approach restricted to the top-volume queries, where
the bound is obtained by the binned version of the CSN method. Estimations are
then derived for the total number of queries and for the total volume of the
population, including statistical error bounds. We apply the methods on the
domain of recipes and cooking queries searched in Italian in 2017. The observed
volumes of sample queries are collected from Google Trends (continuous data)
and SearchVolume (binned data). The estimated total number of queries and total
volume are computed for the two cases, and the results are compared and
discussed.
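
A minimal sketch of the NLS step described above: fitting a Zipf law volume(rank) = C * rank^(-s) to the top-volume queries by nonlinear least squares. The sample volumes are synthetic, not Google Trends data, and the top-rank cutoff is a placeholder for the CSN-derived bound.

```python
# Fit a Zipf law to the top-volume queries by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def zipf(rank, C, s):
    return C * rank ** (-s)

rng = np.random.default_rng(0)
ranks = np.arange(1, 201)
volumes = zipf(ranks, 1e6, 1.1) * rng.lognormal(0, 0.1, ranks.size)

top = 50                      # restrict to top-volume queries (cf. CSN bound)
(C, s), _ = curve_fit(zipf, ranks[:top], volumes[:top], p0=(1e5, 1.0))
print(f"C = {C:.3g}, s = {s:.3f}")   # recovered Zipf parameters
```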
|
We discuss the associated $c\bar{c}$ and $l^+l^-$ pairs production in
ultraperipheral heavy-ion collisions at high energies. Such a channel provides
a novel probe for double-parton scattering (DPS) at small $x$ enabling one to
probe the photon density inside the nucleus. We have derived an analog of the
standard central $pp$ pocket formula and studied the kinematical dependence of
the effective cross section. Taking into account both elastic and non-elastic
contributions, we have shown predictions for the DPS $c\bar c l^+l^-$
production cross section differential in charm quark rapidity and dilepton
invariant mass and rapidity for the LHC and a future collider.
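For reference, the standard central $pp$ pocket formula, of which the derived expression is an analog, reads

$$\sigma^{\rm DPS}_{(a,b)}=\frac{m}{2}\,\frac{\sigma^{\rm SPS}_{a}\,\sigma^{\rm SPS}_{b}}{\sigma_{\rm eff}},$$

where $m=1$ ($m=2$) for identical (distinct) hard subprocesses and $\sigma_{\rm eff}$ encodes the transverse correlations between the two scatterings; the kinematical dependence studied above enters through the ultraperipheral analog of $\sigma_{\rm eff}$.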
|
It is well-known that fractal signals appear in many fields of science. LAN
and WWW traces, wireless traffic, VBR resources, etc. are among the ones with
this behavior in computer network traffic flows. An important question in
these applications is how long a measured trace should be to obtain reliable
estimates of the Hurst index (H). This paper addresses this question by first
providing a thorough study of estimators for short series, based on the
behavior of their bias, standard deviation (s), Root-Mean-Square Error (RMSE),
and convergence when applied to Gaussian H-Self-Similar signals with
Stationary Increments (H-sssi signals). Results show that Whittle-type
estimators behave best when estimating H for short signals. Based on these
results, empirically derived minimum trace lengths for the estimators are
proposed. Finally, to test the results, the estimators are applied to real
traces.
Immediate applications can be found in the real-time estimation of H, which is
useful in the agent-based control of Quality of Service (QoS) parameters in
high-speed computer network traffic flows.
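As a concrete illustration of the estimation principle (not of the Whittle-type estimators the paper recommends), a simple aggregated-variance estimator of H can be sketched as follows:

```python
import numpy as np

def hurst_aggvar(x, scales=(2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst index H.

    For the increments of an H-sssi signal, the variance of the series
    aggregated in blocks of size m scales as m**(2H - 2); H is read off
    the slope of a log-log regression.
    """
    x = np.asarray(x)
    log_m, log_var = [], []
    for m in scales:
        n = len(x) // m
        blocks = x[:n * m].reshape(n, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1 + slope / 2
```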
|
A graph $G=(V,E)$ is word-representable if and only if there exists a word
$w$ over the alphabet $V$ such that letters $x$ and $y$, $x\neq y$, alternate
in $w$ if and only if $xy\in E$. A split graph is a graph in which the vertices
can be partitioned into a clique and an independent set. There is a long line
of research on word-representable graphs in the literature, and recently,
word-representability of split graphs has attracted interest.
In this paper, we first give a characterization of word-representable split
graphs in terms of permutations of columns of the adjacency matrices. Then, we
focus on the study of word-representability of split graphs obtained by
iterations of a morphism, the notion coming from combinatorics on words. We
prove a number of general theorems and provide a complete classification in the
case of morphisms defined by $2\times 2$ matrices.
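The defining alternation property is straightforward to check directly; a minimal sketch:

```python
from itertools import combinations

def alternate(w, x, y):
    """True iff the letters x and y alternate in the word w."""
    sub = [c for c in w if c in (x, y)]
    return all(a != b for a, b in zip(sub, sub[1:]))

def represents(w, vertices, edges):
    """Check the defining property: x and y alternate in w exactly when
    xy is an edge of the graph."""
    return all(alternate(w, x, y) == ({x, y} in edges)
               for x, y in combinations(vertices, 2))

# The single edge ab is represented by "abab", where a and b alternate.
assert represents("abab", "ab", [{"a", "b"}])
```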
|
Colon and lung cancers are among the most perilous and dangerous ailments that
individuals endure worldwide, and they have become a general medical problem.
To lessen the risk of death, a legitimate and early diagnosis is particularly
required. In any case, it is a truly troublesome task that depends on the
experience of histopathologists; an under-trained histologist may even
endanger the life of a patient. Of late, deep learning has gained momentum,
and it is being valued in the analysis of medical imaging. This paper utilizes
and adapts current pre-trained CNN-based models to identify lung and colon
cancer from histopathological images with better augmentation techniques.
Eight distinct pre-trained CNN models, namely VGG16, NASNetMobile,
InceptionV3, InceptionResNetV2, ResNet50, Xception, MobileNet, and
DenseNet169, are trained on the LC25000 dataset. Model performance is assessed
on precision, recall, F1-score, accuracy, and AUROC score. The results show
that all eight models accomplished noteworthy results, ranging from 96% to
100% accuracy. Subsequently, GradCAM and SmoothGrad are also used to visualize
the attention maps of the pre-trained CNN models when classifying malignant
and benign images.
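A minimal sketch of the transfer-learning setup described above, using one of the eight listed backbones (PyTorch/torchvision; LC25000's 5 classes cover benign and malignant lung and colon tissue):

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # LC25000: lung benign/ACA/SCC, colon benign/ACA

# Load an ImageNet-pretrained backbone and swap in a new classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False          # optionally freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
# The model is now ready for fine-tuning on augmented LC25000 batches.
```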
|
The focus of this paper is to address the knowledge acquisition bottleneck
for Named Entity Recognition (NER) of mutations, by analysing different
approaches to build manually-annotated data. We address first the impact of
using a single annotator vs two annotators, in order to measure whether
multiple annotators are required. Once we evaluate the performance loss when
using a single annotator, we apply different methods to sample the training
data for second annotation, aiming at improving the quality of the dataset
without requiring a full pass. We use held-out double-annotated data to build
two scenarios with different types of rankings: similarity-based and
confidence-based. We evaluate both approaches on: (i) their ability to identify training
instances that are erroneous (cases where single-annotator labels differ from
double-annotation after discussion), and (ii) on Mutation NER performance for
state-of-the-art classifiers after integrating the fixes at different
thresholds.
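A minimal sketch of the confidence-based ranking scenario (the `predict_proba` interface is an assumption made for illustration):

```python
def select_for_second_pass(instances, model, budget):
    """Rank training instances by classifier confidence and return the
    `budget` least-confident ones for a second annotation pass; the
    similarity-based ranking is analogous with a different scoring
    function. model.predict_proba is assumed to return per-class
    probabilities for one instance."""
    scored = sorted(instances, key=lambda x: max(model.predict_proba(x)))
    return scored[:budget]
```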
|
Integrated circuit (IC) piracy and overproduction are serious issues that
threaten the security and integrity of a system. Logic locking is a type of
hardware obfuscation technique where additional key gates are inserted into the
circuit. Only the correct key can unlock the functionality of that circuit
otherwise the system produces the wrong output. In an effort to hinder these
threats on ICs, we have developed a probability-based logic locking technique
to protect the design of a circuit. Our proposed technique called "ProbLock"
can be applied to combinational and sequential circuits through a critical
selection process. We used a filtering process to select the best location of
key gates based on various constraints. Each step in the filtering process
generates a subset of nodes for each constraint. We also analyzed the
correlation between each constraint and adjusted the strength of the
constraints before inserting key gates. We have tested our algorithm on 40
benchmarks from the ISCAS '85 and ISCAS '89 suites.
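To illustrate the flavor of the filtering process, here is a minimal sketch of one plausible constraint, gate-level signal probabilities under an input-independence assumption; ProbLock's actual constraint set and correlation-based weighting are richer:

```python
def signal_prob(gate, p_in):
    """Output 1-probability of a gate, assuming independent inputs with
    1-probabilities p_in."""
    if gate == "NOT":
        return 1.0 - p_in[0]
    if gate == "AND":
        p = 1.0
        for q in p_in:
            p *= q
        return p
    if gate == "OR":
        p = 1.0
        for q in p_in:
            p *= 1.0 - q
        return 1.0 - p
    raise ValueError(f"unknown gate {gate}")

def key_gate_candidates(nodes, k):
    """Keep the k nodes whose output probability is closest to 0.5,
    i.e. the nodes hardest to guess, as key-gate locations."""
    return sorted(nodes,
                  key=lambda n: abs(signal_prob(n["gate"], n["p_in"]) - 0.5))[:k]
```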
|
As part of a chemo-dynamical survey of five nearby globular clusters with
2dF/AAOmega on the AAT, we have obtained kinematic information for the globular
cluster NGC3201. Our new observations confirm the presence of a significant
velocity gradient across the cluster which can almost entirely be explained by
the high proper motion of the cluster. After subtracting the contribution of
this perspective rotation, we found a remaining rotation signal with an
amplitude of $\sim1\ km/s$ around a different axis to what we expect from the
tidal tails and the potential escapers, suggesting that this rotation is
internal and can be a remnant of its formation process. In the outer parts, we
found a rotational signal that likely results from potential escapers. The
proper motion dispersion at large radii reported by Bianchini et al. has
previously been attributed to dark matter. Here we show that the LOS dispersion
between 0.5 and 1 Jacobi radii is lower, yet still above the predictions from an N-body
model of NGC3201 that we ran for this study. Based on the simulation, we find
that potential escapers cannot fully explain the observed velocity dispersion.
We also estimate the effect on the velocity dispersion of different amounts of
stellar-mass black holes and unbound stars from the tidal tails with varying
escape rates and find that these effects cannot explain the difference between
the LOS dispersion and the N-body model. Given the recent discovery of tidal
tail stars at large distances from the cluster, a dark matter halo is an
unlikely explanation. We show that the effect of binary stars, which is not
included in the N-body model, is important and can explain part of the
difference in dispersion. We speculate that the remaining difference must be
the result of effects not included in the N-body model, such as initial cluster
rotation, velocity anisotropy and Galactic substructure.
|
We study the possibilities of building a non-autoregressive speech-to-text
translation model using connectionist temporal classification (CTC), and use
CTC-based automatic speech recognition as an auxiliary task to improve the
performance. CTC's success on translation is counter-intuitive due to its
monotonicity assumption, so we analyze its reordering capability. Kendall's tau
distance is introduced as the quantitative metric, and gradient-based
visualization provides an intuitive way to take a closer look into the model.
Our analysis shows that transformer encoders have the ability to change the
word order and points out future research directions worth exploring further
for non-autoregressive speech translation.
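Kendall's tau distance between a source order and its target order can be computed directly; a minimal sketch:

```python
from itertools import combinations

def kendall_tau_distance(src_order, tgt_order):
    """Kendall's tau distance between two orderings of the same items:
    the number of item pairs whose relative order differs (0 means a
    monotone translation, larger values mean more reordering)."""
    pos = {item: i for i, item in enumerate(tgt_order)}
    return sum(pos[x] > pos[y] for x, y in combinations(src_order, 2))

# A verb-final reordering of four source words:
print(kendall_tau_distance(["I", "have", "seen", "him"],
                           ["I", "him", "seen", "have"]))  # -> 3
```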
|
The Cholesky factorization of the moment matrix is applied to discrete
orthogonal polynomials on the homogeneous lattice. In particular, semiclassical
discrete orthogonal polynomials, which are built in terms of a discrete Pearson
equation, are studied. The Laguerre-Freud structure matrix, a semi-infinite
matrix that models the shifts by $\pm 1$ in the independent variable of the
set of orthogonal polynomials, is introduced. In the semiclassical case it is proven
that this Laguerre-Freud matrix is banded. From the well known fact that
moments of the semiclassical weights are logarithmic derivatives of generalized
hypergeometric functions, it is shown how the contiguous relations for these
hypergeometric functions translate as symmetries for the corresponding moment
matrix. It is found that the 3D Nijhoff-Capel discrete Toda lattice describes
the corresponding contiguous shifts for the squared norms of the orthogonal
polynomials. The continuous Toda for these semiclassical discrete orthogonal
polynomials is discussed and the compatibility equations are derived. It is
also shown that the Kadomtsev-Petviashvili equation is connected to an
adequately deformed semiclassical discrete weight, but in this case the
deformation does not satisfy a Pearson equation.
|
This paper is the first of a pair that aims to classify a large number of the
type $II$ quantum subgroups of the categories
$\mathcal{C}(\mathfrak{sl}_{r+1},k)$. In this work we classify the braided
auto-equivalences of the categories of local modules for all known type $I$
quantum subgroups of $\mathcal{C}(\mathfrak{sl}_{r+1},k)$. We find that the
symmetries are all non-exceptional except for four cases (up to level-rank
duality). These exceptional cases are the orbifolds $\mathcal{C}(
\mathfrak{sl}_{2},16)_{\operatorname{Rep}(\mathbb{Z}_2)}$, $\mathcal{C}(
\mathfrak{sl}_{3},9)_{\operatorname{Rep}(\mathbb{Z}_3)}$, $\mathcal{C}(
\mathfrak{sl}_{4},8)_{\operatorname{Rep}(\mathbb{Z}_4)}$, and $\mathcal{C}(
\mathfrak{sl}_{5},5)_{\operatorname{Rep}(\mathbb{Z}_5)}$.
We develop several technical tools in this work. We give a skein theoretic
description of the orbifold quantum subgroups of
$\mathcal{C}(\mathfrak{sl}_{r+1},k)$. Our methods here are general, and the
techniques developed will generalise to give skein theory for any orbifold of a
braided tensor category. We also give a formulation of orthogonal level-rank
duality in the type $D$-$D$ case, which is used to construct one of the
exceptionals. Finally we uncover an unexpected connection between quadratic
categories and exceptional braided auto-equivalences of the orbifolds. We use
this connection to construct two of the four exceptionals.
In the sequel to this paper we will use the classified braided
auto-equivalences to construct the corresponding type $II$ quantum subgroups of
the categories $\mathcal{C}(\mathfrak{sl}_{r+1},k)$. When paired with Gannon's
type $I$ classification for $r\leq 6$, this will complete the type $II$
classification for these same ranks.
This paper includes an appendix by Terry Gannon, which provides useful
results on the dimensions of objects in the categories
$\mathcal{C}(\mathfrak{sl}_{r+1},k)$.
|
How cooperation emerges in human societies is both an evolutionary enigma,
and a practical problem with tangible implications for societal health.
Population structure has long been recognized as a catalyst for cooperation
because local interactions enable reciprocity. Analysis of this phenomenon
typically assumes bi-directional social interactions, even though real-world
interactions are often uni-directional. Uni-directional interactions -- where
one individual has the opportunity to contribute altruistically to another, but
not conversely -- arise in real-world populations as the result of
organizational hierarchies, social stratification, popularity effects, and
endogenous mechanisms of network growth. Here we expand the theory of
cooperation in structured populations to account for both uni- and
bi-directional social interactions. Even though directed interactions remove
the opportunity for reciprocity, we find that cooperation can nonetheless be
favored in directed social networks and that cooperation is provably maximized
for networks with an intermediate proportion of directed interactions, as
observed in many empirical settings. We also identify two simple structural
motifs that allow efficient modification of interaction directionality to
promote cooperation by orders of magnitude. We discuss how our results relate
to the concepts of generalized and indirect reciprocity.
|
The purpose of this paper is to discuss the categorical structure for a
method of defining quantum invariants of knots, links and three-manifolds.
These invariants can be defined in terms of right integrals on certain Hopf
algebras. We call such an invariant of 3-manifolds a Hennings invariant. The
work reported in this paper has its background in previous work of the authors.
The present paper gives an abstract description of these structures and shows
how the Hopf algebraic image of a knot lies in the center of the corresponding
Hopf algebra. The paper also shows how all the axiomatic properties of a
quasi-triangular Hopf algebra are involved in the topology via a functor from
the Tangle Category to the Diagrammatic Category of a Hopf Algebra. The
invariants described in this paper generalize to invariants of rotational
virtual knots. The contents of this paper are an update of the original 1998
version published in JKTR.
|
We study a stochastic program where the probability distribution of the
uncertain problem parameters is unknown and only indirectly observed via
finitely many correlated samples generated by an unknown Markov chain with $d$
states. We propose a data-driven distributionally robust optimization model to
estimate the problem's objective function and optimal solution. By leveraging
results from large deviations theory, we derive statistical guarantees on the
quality of these estimators. The underlying worst-case expectation problem is
nonconvex and involves $\mathcal O(d^2)$ decision variables. Thus, it cannot be
solved efficiently for large $d$. By exploiting the structure of this problem,
we devise a customized Frank-Wolfe algorithm with convex direction-finding
subproblems of size $\mathcal O(d)$. We prove that this algorithm finds a
stationary point efficiently under mild conditions. The efficiency of the
method is predicated on a dimensionality reduction enabled by a dual
reformulation. Numerical experiments indicate that our approach has better
computational and statistical properties than the state-of-the-art methods.
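For orientation, a generic Frank-Wolfe iteration looks as follows; the paper's customized version differs in that the dual reformulation shrinks the direction-finding subproblem from $\mathcal O(d^2)$ to $\mathcal O(d)$ variables:

```python
def frank_wolfe(grad, linear_oracle, x0, steps=100):
    """Generic Frank-Wolfe sketch.

    grad(x)          -- gradient of the objective at x
    linear_oracle(g) -- argmin of <g, s> over the feasible convex set
    """
    x = x0
    for t in range(steps):
        s = linear_oracle(grad(x))        # convex direction-finding step
        gamma = 2.0 / (t + 2)             # standard open-loop step size
        x = (1 - gamma) * x + gamma * s   # stay inside the convex set
    return x
```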
|
Poset games are a class of combinatorial games that remain unsolved. Soltys
and Wilson proved that computing winning strategies is in \textbf{PSPACE},
and aside from special cases such as Nim and N-Free games, \textbf{P}-time
algorithms for finding ideal play are unknown. This paper presents methods to
calculate the nimber of poset games, allowing for the classification of
winning and losing positions. The results establish an equivalence of ideal
strategies on posets that are seemingly unrelated.
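A minimal sketch of nimber computation for poset games by the standard mex recursion (exponential in general, consistent with the absence of known polynomial-time algorithms):

```python
from functools import lru_cache

def mex(s):
    """Minimum excludant: smallest non-negative integer not in s."""
    n = 0
    while n in s:
        n += 1
    return n

def grundy(position, leq):
    """Nimber of a poset game position (a set of elements). A move at x
    removes every element y with x <= y; a position is a first-player
    win exactly when its nimber is nonzero."""
    @lru_cache(maxsize=None)
    def g(pos):
        options = set()
        for x in pos:
            options.add(g(frozenset(y for y in pos if not leq(x, y))))
        return mex(options)
    return g(frozenset(position))

# Two incomparable elements behave like two Nim heaps of size 1:
print(grundy({"a", "b"}, lambda x, y: x == y))  # -> 0 (second-player win)
```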
|
An accurate and precise measurement of the spins of individual merging black
holes is required to understand their origin. While previous studies have
indicated that most of the spin information comes from the inspiral part of the
signal, the informative spin measurement of the heavy binary black hole system
GW190521 suggests that the merger and ringdown can contribute significantly to
the spin constraints for such massive systems. We perform a systematic study
into the measurability of the spin parameters of individual heavy binary black
hole mergers using a numerical relativity surrogate waveform model including
the effects of both spin-induced precession and higher-order modes. We find
that the spin measurements are driven by the merger and ringdown parts of the
signal for GW190521-like systems, but the uncertainty in the measurement
increases with the total mass of the system. We are able to place meaningful
constraints on the spin parameters even for systems observed at moderate
signal-to-noise ratios, but the measurability depends on the exact
six-dimensional spin configuration of the system. Finally, we find that the
azimuthal angle between the in-plane projections of the component spin vectors
at a given reference frequency cannot be well-measured for most of our
simulated configurations even for signals observed with high signal-to-noise
ratios.
|
Taking pictures through glass windows almost always produces undesired
reflections that degrade the quality of the photo. The ill-posed nature of the
reflection removal problem has attracted the attention of many researchers for
decades. The main challenge of this problem is the lack of real training
data and the necessity of generating realistic synthetic data. In this paper,
we propose a single image reflection removal method based on context
understanding modules and adversarial training to efficiently restore the
transmission layer without reflection. We also propose a complex data
generation model in order to create a large training set with various types of
reflections. Our proposed reflection removal method outperforms
state-of-the-art methods in terms of PSNR and SSIM on the SIR benchmark
dataset.
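For illustration, the simplest common blending model for synthetic reflection data is sketched below; the paper's generation model is substantially more complex and covers several reflection types:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_reflection(transmission, reflection, alpha=0.35, sigma=2.0):
    """Basic blending model for synthetic training pairs:
    I = T + alpha * blur(R). Inputs are HxWx3 float arrays in [0, 1];
    the blur is applied spatially only."""
    blurred = gaussian_filter(reflection, sigma=(sigma, sigma, 0))
    return np.clip(transmission + alpha * blurred, 0.0, 1.0)
```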
|
A common view on the brain learning processes proposes that the three classic
learning paradigms -- unsupervised, reinforcement, and supervised -- take place
in respectively the cortex, the basal-ganglia, and the cerebellum. However,
dopamine outbursts, usually assumed to encode reward, are not limited to the
basal ganglia but also reach prefrontal, motor, and higher sensory cortices. We
propose that in the cortex the same reward-based trial-and-error processes
might support not only the acquisition of motor representations but also of
sensory representations. In particular, reward signals might guide
trial-and-error processes that mix with associative learning processes to
support the acquisition of representations better serving downstream action
selection. We tested the soundness of this hypothesis with a computational
model that integrates unsupervised learning (Contrastive Divergence) and
reinforcement learning (REINFORCE). The model was tested with a task requiring
different responses to different visual images grouped in categories involving
either colour, shape, or size. Results show that a balanced mix of unsupervised
and reinforcement learning processes leads to the best performance. Indeed,
excessive unsupervised learning tends to under-represent task-relevant features
while excessive reinforcement learning tends to learn slowly at first and then
to get stuck in local minima. These results stimulate future empirical studies on
category learning directed to investigate similar effects in the extrastriate
visual cortices. Moreover, they prompt further computational investigations
directed to study the possible advantages of integrating unsupervised and
reinforcement learning processes.
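A minimal sketch of the reinforcement component (plain REINFORCE, without the unsupervised Contrastive Divergence term or the mixing weight the model balances it against):

```python
import torch

def reinforce_loss(logits, actions, rewards):
    """REINFORCE objective for the reward-based trial-and-error process:
    raise the log-probability of the chosen actions in proportion to
    the reward received (no baseline, for simplicity)."""
    log_prob = torch.distributions.Categorical(logits=logits).log_prob(actions)
    return -(rewards * log_prob).mean()
```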
|
We provide the first inner bounds for sending private classical information
over a quantum multiple access channel. We do so by using three powerful
information theoretic techniques: rate splitting, quantum simultaneous decoding
for multiple access channels, and a novel smoothed distributed covering lemma
for classical-quantum channels. Our inner bounds are given in the one-shot
setting and, accordingly, the three techniques used are all very recent ones
specifically designed to work in this setting. The last technique is new to
this work and is our main technical advancement. For the asymptotic iid
setting, our one-shot inner bounds lead to the natural quantum analogue of the
best classical inner bounds for this problem.
|
This paper defines a security injection region (SIR) to guarantee reliable
operation of water distribution systems (WDS) under extreme conditions. The
model of WDSs is highly nonlinear and nonconvex. Understanding the accurate
SIRs of WDSs involves the analysis of nonlinear constraints, which is
computationally expensive. To reduce the computational burden, this paper first
investigates the convexity of the SIR of WDSs under certain conditions. Then,
an algorithm based on a monotone inner polytope sequence is proposed to
effectively and accurately determine these SIRs. The proposed algorithm
estimates a sequence of inner polytopes that converge to the whole convex
region. Each polytope adds a new area to the SIR. The algorithm is validated on
two different WDSs, and the computational study shows that this method is
applicable and fast for both systems.
|
We present a new empirical model to predict solar energetic particle (SEP)
event-integrated and peak intensity spectra between 10 and 130 MeV at 1 AU,
based on multi-point spacecraft measurements from the Solar TErrestrial
RElations Observatory (STEREO), the Geostationary Operational Environmental
Satellites (GOES) and the Payload for Antimatter Matter Exploration and
Light-nuclei Astrophysics (PAMELA) satellite experiment. The analyzed data
sample includes 32 SEP events occurring between 2010 and 2014, with a
statistically significant proton signal at energies in excess of a few tens of
MeV, unambiguously recorded at three spacecraft locations. The spatial
distributions of SEP intensities are reconstructed by assuming an
energy-dependent 2D Gaussian functional form, and accounting for the
correlation between the intensity and the speed of the parent coronal mass
ejection (CME), and the magnetic field line connection angle. The CME
measurements used are from the Space Weather Database Of Notifications,
Knowledge, Information (DONKI). The model performance, including its
extrapolations to lower/higher energies, is tested by comparing with the
spectra of 20 SEP events not used to derive the model parameters. Despite the
simplicity of the model, the observed and predicted event-integrated and peak
intensities at Earth and at the STEREO spacecraft for these events show
remarkable agreement, both in the spectral shapes and their absolute values.
|
We study some of the main properties (masses and open-flavor strong decay
widths) of $4P$ and $5P$ charmonia. While there are two candidates for the
$\chi_{\rm c0}(4P,5P)$ states, the $X(4500)$ and $X(4700)$, the properties of
the other members of the $\chi_{\rm c}(4P,5P)$ multiplets are still completely
unknown. With this in mind, we start to explore the charmonium interpretation
for these mesons. Our second goal is to investigate if the apparent mismatch
between the Quark Model (QM) predictions for $\chi_{\rm c0}(4P,5P)$ states and
the properties of the $X(4500)$ and $X(4700)$ mesons can be overcome by
introducing threshold corrections in the QM formalism. According to our
coupled-channel model results for the threshold mass shifts, the $\chi_{\rm
c0}(5P) \rightarrow X(4700)$ assignment is unacceptable, while the $\chi_{\rm
c0}(4P) \rightarrow X(4500)$ or $X(4700)$ assignments cannot be completely
ruled out.
|
Single-top production is an important process at the LHC for testing the
Standard Model (SM) and searching for new physics beyond the SM. Although the
complete next-to-next-to-leading order (NNLO) QCD correction to the single-top
production is crucial, this calculation is still challenging at present. In
order to efficiently reduce the NNLO single-top amplitude, we improve the
auxiliary mass flow (AMF) method by introducing the $\epsilon$ truncation. For
demonstration we choose one typical planar double-box diagram for the $tW$
production. It is shown that one coefficient of the form factors on its
amplitude can be systematically reduced into the linear combination of 198
scalar integrals.
|
In this work, we address the task of referring image segmentation (RIS),
which aims at predicting a segmentation mask for the object described by a
natural language expression. Most existing methods focus on establishing
unidirectional or bidirectional relationships between visual and linguistic
features to associate two modalities together, while the multi-scale context is
ignored or insufficiently modeled. Multi-scale context is crucial to localize
and segment those objects that have large scale variations during the
multi-modal fusion process. To solve this problem, we propose a simple yet
effective Cascaded Multi-modal Fusion (CMF) module, which stacks multiple
atrous convolutional layers in parallel and further introduces a cascaded
branch to fuse visual and linguistic features. The cascaded branch can
progressively integrate multi-scale contextual information and facilitate the
alignment of two modalities during the multi-modal fusion process. Experimental
results on four benchmark datasets demonstrate that our method outperforms most
state-of-the-art methods. Code is available at
https://github.com/jianhua2022/CMF-Refseg.
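A minimal PyTorch sketch of the parallel atrous part of such a module (the cascaded fusion branch of the actual CMF module is omitted here):

```python
import torch
import torch.nn as nn

class ParallelAtrous(nn.Module):
    """Several atrous (dilated) convolutions applied in parallel to a
    fused visual-linguistic feature map, capturing multi-scale context."""
    def __init__(self, channels, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates)

    def forward(self, x):
        # padding == dilation keeps the spatial size for 3x3 kernels
        return torch.cat([b(x) for b in self.branches], dim=1)
```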
|
This paper investigates moving networks of Unmanned Aerial Vehicles (UAVs),
such as drones, as one of the innovative opportunities brought by 5G. With
a main purpose to extend connectivity and guarantee data rates, the drones
require hovering locations due to limitations such as flight time and coverage
surface. We provide analytic bounds on the requirements in terms of
connectivity extension for vehicular networks served by fixed Enhanced Mobile
BroadBand (eMBB) infrastructure, where both vehicular networks and
infrastructures are modeled using stochastic and fractal geometry as a model
for the urban environment. We prove that, assuming $n$ mobile nodes
(distributed according to a hyperfractal distribution of dimension $d_F$) and
an average of $\rho$ Next Generation NodeBs (gNBs) distributed like a
hyperfractal of dimension $d_r$, if $\rho=n^\theta$ with $\theta>d_r/4$, then,
letting $n$ tend to infinity (to reflect megalopolis cities), the average fraction of
mobile nodes not covered by a gNB tends to zero like
$O\left(n^{-\frac{(d_F-2)}{d_r}(2\theta-\frac{d_r}{2})}\right)$. Interestingly,
we then prove that the average number of drones, needed to connect each mobile
node not covered by gNBs is comparable to the number of isolated mobile nodes.
We complete the characterisation by proving that when $\theta<d_r/4$ the
proportion of covered mobile nodes tends to zero. We provide insights on the
intelligent placement of the "garage of drones", the home location of these
nomadic infrastructure nodes, such as to minimize what we call the
"flight-to-coverage time". We provide a fast procedure to select the relays
that will be garages (and store drones) in order to minimize the number of
garages and minimize the delay. Finally we confirm our analytical results using
simulations carried out in Matlab.
|
In many machine learning problems, large-scale datasets have become the
de-facto standard to train state-of-the-art deep networks at the price of heavy
computation load. In this paper, we focus on condensing large training sets
into significantly smaller synthetic sets which can be used to train deep
neural networks from scratch with a minimal drop in performance. Inspired by
recent training set synthesis methods, we propose Differentiable Siamese
Augmentation that enables effective use of data augmentation to synthesize more
informative synthetic images and thus achieves better performance when training
networks with augmentations. Experiments on multiple image classification
benchmarks demonstrate that the proposed method obtains substantial gains over
the state-of-the-art, with 7% improvements on the CIFAR10 and CIFAR100
datasets. We show that with less than 1% of the data our method achieves
99.6%, 94.9%, 88.5%, and 71.5% relative performance on MNIST, FashionMNIST,
SVHN, and CIFAR10, respectively. We
also explore the use of our method in continual learning and neural
architecture search, and show promising results.
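The core idea, applying one sampled augmentation identically and differentiably to both the real and the synthetic batch, can be sketched as follows (a random shift stands in for the full augmentation family):

```python
import torch

def siamese_augment(real_batch, synthetic_batch):
    """Sample ONE augmentation and apply it identically to the real and
    the synthetic batch, using only differentiable ops so gradients can
    flow back into the synthetic images (batches are NCHW tensors)."""
    dx, dy = torch.randint(-4, 5, (2,))
    aug = lambda x: torch.roll(x, shifts=(int(dx), int(dy)), dims=(2, 3))
    return aug(real_batch), aug(synthetic_batch)
```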
|
Common grounding is the process of creating and maintaining mutual
understandings, which is a critical aspect of sophisticated human
communication. While various task settings have been proposed in existing
literature, they mostly focus on creating common ground under static context
and ignore the aspect of maintaining it over time under dynamic context. In
this work, we propose a novel task setting to study the ability of both
creating and maintaining common ground in dynamic environments. Based on our
minimal task formulation, we collected a large-scale dataset of 5,617 dialogues
to enable fine-grained evaluation and analysis of various dialogue systems.
Through our dataset analyses, we highlight novel challenges introduced in our
setting, such as the usage of complex spatio-temporal expressions to create and
maintain common ground. Finally, we conduct extensive experiments to assess the
capabilities of our baseline dialogue system and discuss future prospects of
our research.
|
Numerical solutions to the Eikonal equation are computed using variants of
the fast marching method, the fast sweeping method, and the fast iterative
method. In this paper, we provide a unified view of these algorithms that
highlights their similarities and suggests a wider class of Eikonal solvers. We
then use this framework to justify applying concurrent priority scheduling
techniques to Eikonal solvers. We demonstrate that doing so results in good
parallel performance for a problem from seismology. We explain why existing
Eikonal solvers may produce different results despite using the same
differencing scheme and demonstrate techniques to address these discrepancies.
These techniques allow us to obtain deterministic output from our asynchronous
fine-grained parallel Eikonal solver.
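The unified view rests on the fact that these solver families repeatedly apply the same local Godunov upwind update and differ mainly in the order in which grid points are processed; a minimal 2D sketch of that update:

```python
import math

def eikonal_update(a, b, f, h):
    """Godunov upwind update: solve |grad u| = f at one point of a grid
    with spacing h, given the smallest neighbouring values a
    (x-direction) and b (y-direction)."""
    if abs(a - b) >= f * h:          # the wave arrives from one side only
        return min(a, b) + f * h
    # quadratic case: both neighbours contribute
    return 0.5 * (a + b + math.sqrt(2 * f * f * h * h - (a - b) ** 2))
```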
|
The $\beta$-decay of neutron-rich $^{129}$In into $^{129}$Sn was studied
using the GRIFFIN spectrometer at the ISAC facility at TRIUMF. The study
observed the half-lives of the ground state and each of the $\beta$-decaying
isomers. The level scheme of $^{129}$Sn has been expanded with thirty-one new
$\gamma$-ray transitions and nine new excited levels, leading to a
re-evaluation of the $\beta$-branching ratios and level spin assignments. The
observation of the $\beta$-decay of the (29/2$^{+}$) 1911-keV isomeric state in
$^{129}$In is reported for the first time, with a branching ratio of
2.0(5)$\%$.
|
Given a set $B$ of operators between subspaces of $L_p$ spaces, we
characterize the operators between subspaces of $L_p$ spaces that remain
bounded on the $X$-valued $L_p$ space for every Banach space on which elements
of the original class $B$ are bounded.
This is a form of the bipolar theorem for a duality between the class of
Banach spaces and the class of operators between subspaces of $L_p$ spaces,
essentially introduced by Pisier. The methods we introduce also allow us to
recover the other direction -- characterizing the bipolar of a set of Banach
spaces -- which had been obtained by Hernandez in 1983.
|
A Banach space $X$ has \textit{property $(K)$}, whenever every weak* null
sequence in the dual space admits a convex block subsequence
$(f_{n})_{n=1}^\infty$ so that $\langle f_{n},x_{n}\rangle\to 0$ as $n\to
\infty$ for every weakly null sequence $(x_{n})_{n=1}^\infty$ in $X$; $X$ has
\textit{property $(\mu^{s})$} if every weak$^{*}$ null sequence in $X^{*}$
admits a subsequence so that all of its subsequences are Ces\`{a}ro convergent
to $0$ with respect to the Mackey topology. Both property $(\mu^{s})$ and
reflexivity (or even the Grothendieck property) imply property $(K)$. In the
present paper we propose natural ways for quantifying the aforementioned
properties in the spirit of recent results concerning other familiar properties
of Banach spaces.
|
Video interpolation aims to generate a non-existent intermediate frame given
the past and future frames. Many state-of-the-art methods achieve promising
results by estimating the optical flow between the known frames and then
generating the backward flows between the middle frame and the known frames.
However, these methods usually suffer from the inaccuracy of estimated optical
flows and require additional models or information to compensate for flow
estimation errors. Following the recent development in using deformable
convolution (DConv) for video interpolation, we propose a light but effective
model, called Pyramid Deformable Warping Network (PDWN). PDWN uses a pyramid
structure to generate DConv offsets of the unknown middle frame with respect to
the known frames through coarse-to-fine successive refinements. Cost volumes
between warped features are calculated at every pyramid level to help the
offset inference. At the finest scale, the two warped frames are adaptively
blended to generate the middle frame. Lastly, a context enhancement network
further enhances the contextual detail of the final output. Ablation studies
demonstrate the effectiveness of the coarse-to-fine offset refinement, cost
volumes, and DConv. Our method achieves better or on-par accuracy compared to
state-of-the-art models on multiple datasets while the number of model
parameters and the inference time are substantially less than previous models.
Moreover, we present an extension of the proposed framework to use four input
frames, which can achieve significant improvement over using only two input
frames, with only a slight increase in the model size and inference time.
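A minimal sketch of the deformable-convolution warping at one pyramid level (torchvision's `DeformConv2d`; offset prediction and the coarse-to-fine refinement across levels are omitted):

```python
import torch
from torchvision.ops import DeformConv2d

channels, k = 64, 3
dconv = DeformConv2d(channels, channels, kernel_size=k, padding=1)

feat = torch.randn(1, channels, 32, 32)      # features of a known frame
offsets = torch.zeros(1, 2 * k * k, 32, 32)  # one (x, y) pair per kernel tap
warped = dconv(feat, offsets)                # -> (1, 64, 32, 32)
```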
|
The Planck or the quantum gravity scale, being $16$ orders of magnitude
greater than the electroweak scale, is often considered inaccessible by current
experimental techniques. However, it was shown recently by one of the current
authors that quantum gravity effects via the Generalized Uncertainty Principle
affects the time required for free wavepackets to double their size, and this
difference in time is at or near current experimental accuracies [1, 2]. In
this work, we make an important improvement over the earlier study, by taking
into account the leading order relativistic correction, which naturally appears
in the systems under consideration, due to the significant mean velocity of the
travelling wavepackets. Our analysis shows that although the relativistic
correction adds nontrivial modifications to the results of [1, 2], the earlier
claims remain intact and are in fact strengthened. We explore the potential for
these results being tested in the laboratory.
|
We calculate exact analytic expressions for the average cluster numbers
$\langle k \rangle_{\Lambda_s}$ on infinite-length strips $\Lambda_s$, with
various widths, of several different lattices, as functions of the bond
occupation probability, $p$. It is proved that these expressions are rational
functions of $p$. As special cases of our results, we obtain exact values of
$\langle k \rangle_{\Lambda_s}$ and derivatives of $\langle k
\rangle_{\Lambda_s}$ with respect to $p$, evaluated at the critical percolation
probabilities $p_{c,\Lambda}$ for the corresponding infinite two-dimensional
lattices $\Lambda$. We compare these exact results with an analytic finite-size
correction formula and find excellent agreement. We also analyze how unphysical
poles in $\langle k \rangle_{\Lambda_s}$ determine the radii of convergence of
series expansions for small $p$ and for $p$ near unity. Our calculations are
performed for infinite-length strips of the square, triangular, and honeycomb
lattices with several types of transverse boundary conditions.
|
This article discusses a high-dimensional visualization technique called the
tour, which can be used to view data in more than three dimensions. We review
the theory and history behind the technique, as well as modern software
developments and applications of the tour that are being found across the
sciences and machine learning.
|
We design a multiferroic metal that combines seemingly incompatible
ferromagnetism, ferroelectricity, and metallicity by hole doping a
two-dimensional (2D) ferroelectric with high density of states near the Fermi
level. The strong magnetoelectric effect is demonstrated in hole-doped and
arsenic-doped monolayer {\alpha}-In2Se3 using first-principles calculations.
Taking advantage of the oppositely charged surfaces created by an out-of-plane
polarization, the 2D magnetization and metallicity can be electrically switched
on and off in an asymmetrically doped monolayer. The substitutional arsenic
defect pair exhibits an intriguing electric field-tunable charge
disproportionation process accompanied with an on-off switch of local magnetic
moments. The charge ordering process can be controlled by tuning the relative
strength of on-site Coulomb repulsion and defect dipole-polarization coupling
via strain engineering. Our design principle relying on no transition metal
broadens the materials design space for 2D multiferroic metals.
|
We study the synthesis of a policy in a Markov decision process (MDP)
following which an agent reaches a target state in the MDP while minimizing its
total discounted cost. The problem combines a reachability criterion with a
discounted cost criterion and naturally expresses the completion of a task with
probabilistic guarantees and optimal transient performance. We first establish
that an optimal policy for the considered formulation may not exist but that
there always exists a near-optimal stationary policy. We additionally provide a
necessary and sufficient condition for the existence of an optimal policy. We
then restrict our attention to stationary deterministic policies and show that
the decision problem associated with the synthesis of an optimal stationary
deterministic policy is NP-complete. Finally, we provide an exact algorithm
based on mixed-integer linear programming and propose an efficient
approximation algorithm based on linear programming for the synthesis of an
optimal stationary deterministic policy.
|
Pseudo-code written in natural language is helpful for novice developers'
program comprehension. However, writing such pseudo-code is time-consuming and
laborious. Motivated by the research advancements of sequence-to-sequence
learning and code semantic learning, we propose a novel deep pseudo-code
generation method DeepPseudo via code feature extraction and Transformer. In
particular, DeepPseudo utilizes a Transformer encoder to perform encoding for
source code and then uses a code feature extractor to learn the knowledge of
local features. Finally, it uses a pseudo-code generator to perform decoding,
which can generate the corresponding pseudo-code. We choose two corpora (i.e.,
Django and SPoC) from real-world large-scale projects as our empirical
subjects. We first compare DeepPseudo with seven state-of-the-art baselines
from pseudo-code generation and neural machine translation domains in terms of
four performance measures. Results show the competitiveness of DeepPseudo.
Moreover, we also analyze the rationality of the component settings in
DeepPseudo.
|
A novel approach to efficiently treat pure-state equality constraints in
optimal control problems (OCPs) using a Riccati recursion algorithm is
proposed. The proposed method transforms a pure-state equality constraint into
a mixed state-control constraint such that the constraint is expressed by
variables at a certain previous time stage. It is shown that if the solution
satisfies the second-order sufficient conditions of the OCP with the
transformed mixed state-control constraints, it is a local minimum of the OCP
with the original pure-state constraints. A Riccati recursion algorithm is
derived to solve the OCP using the transformed constraints with linear time
complexity in the grid number of the horizon, in contrast to a previous
approach that scales cubically with respect to the total dimension of the
pure-state equality constraints. Numerical experiments on the whole-body
optimal control of quadrupedal gaits that involve pure-state equality
constraints owing to contact switches demonstrate the effectiveness of the
proposed method over existing approaches.
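For context, the unconstrained backward Riccati recursion that underlies the linear-in-horizon complexity can be sketched as follows; the paper's algorithm extends this kernel to the transformed mixed state-control constraints:

```python
import numpy as np

def riccati_recursion(A, B, Q, R, P_N, N):
    """Backward Riccati recursion for an LQR-type subproblem: one sweep
    over the horizon of length N, so the cost is linear in N."""
    P = P_N                                   # terminal cost-to-go matrix
    gains = []
    for _ in range(N):
        H = R + B.T @ P @ B
        K = np.linalg.solve(H, B.T @ P @ A)   # feedback gain
        P = Q + A.T @ P @ (A - B @ K)         # cost-to-go update
        gains.append(K)
    return gains[::-1]                        # gains ordered k = 0..N-1
```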
|
Using large-scale fully-kinetic two-dimensional particle-in-cell simulations,
we investigate the effects of shock rippling on electron acceleration at
low-Mach-number shocks propagating in high-$\beta$ plasmas, in application to
merger shocks in galaxy clusters. We find that the electron acceleration rate
increases considerably when the rippling modes appear. The main acceleration
mechanism is stochastic shock-drift acceleration, in which electrons are
confined at the shock by pitch-angle scattering off turbulence and gain energy
from the motional electric field. The presence of multi-scale magnetic
turbulence at the shock transition and the region immediately behind the main
shock overshoot is essential for electron energization. Wide-energy non-thermal
electron distributions are formed both upstream and downstream of the shock.
The maximum energy of the electrons is sufficient for their injection into
diffusive shock acceleration. We show for the first time that the downstream
electron spectrum has a power-law form with index $p\approx 2.5$, in agreement
with observations.
|
The Capsule Network is widely believed to be more robust than Convolutional
Networks. However, there are no comprehensive comparisons between these two
networks, and it is also unknown which components in the CapsNet affect its
robustness. In this paper, we first carefully examine the special designs in
CapsNet that differ from that of a ConvNet commonly used for image
classification. The examination reveals five major new/different components in
CapsNet: a transformation process, a dynamic routing layer, a squashing
function, a marginal loss other than cross-entropy loss, and an additional
class-conditional reconstruction loss for regularization. Along with these
major differences, we conduct comprehensive ablation studies on three kinds of
robustness, including affine transformation, overlapping digits, and semantic
representation. The study reveals that some designs thought critical to
CapsNet, namely the dynamic routing layer and the transformation process, can
actually harm its robustness, while others are beneficial.
Based on these findings, we propose enhanced ConvNets simply by introducing the
essential components behind the CapsNet's success. The proposed simple ConvNets
can achieve better robustness than the CapsNet.
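For reference, the squashing function examined in the ablations (a sketch in PyTorch):

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """CapsNet squashing: scale a capsule vector s to length
    ||s||^2 / (1 + ||s||^2), preserving its orientation."""
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)
```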
|
We study the 2018 Martian Global Dust Storm (GDS 2018) over the Southern Polar
Region using images obtained by the Visual Monitoring Camera (VMC) on board
Mars Express during June and July 2018. Dust penetrated into the polar cap
region but never covered the cap completely, and its spatial distribution was
nonhomogeneous and rapidly changing. However, we detected long but narrow
aerosol curved arcs with a length of 2,000-3,000 km traversing part of the cap
and crossing the terminator into the night side. Tracking discrete dust clouds
allowed measurements of their motions that were towards the terminator with
velocities up to 100 m/s. The images of the dust projected into the Martian
limb show maximum altitudes of around 70 km but with large spatial and temporal
variations. We discuss these results in the context of the predictions of a
numerical model for the dust storm scenario.
|
Recent observations have shown that circumbinary discs can be misaligned with
respect to the binary orbital plane. The lack of spherical symmetry, together
with the non-planar geometry of these systems, causes differential precession
which might induce the propagation of warps. While gas dynamics in such
environments is well understood, little is known about dusty discs. In this
work, we analytically study the problem of dust traps formation in misaligned
circumbinary discs. We find that pile-ups may be induced not by pressure
maxima, as the usual dust traps, but by a difference in precession rates
between the gas and dust. Indeed, this difference makes the radial drift
inefficient in two locations, leading to the formation of two dust rings whose
position depends on the system parameters. This phenomenon is likely to occur
to marginally coupled dust particles $(\text{St}\gtrsim1)$ as both the effect
of gravitational and drag force are considerable. We then perform a suite of
three-dimensional SPH numerical simulations to compare the results with our
theoretical predictions. We explore the parameter space, varying stellar mass
ratio, disc thickness, radial extension, and we find a general agreement with
the analytical expectations. Such dust pile-up prevents radial drift, fosters
dust growth, and may thus promote planet formation in circumbinary discs.
|
Quantum computing holds great promise, and this work proposes to use new
quantum data networks (QDNs) to connect multiple small quantum computers to
form a cluster. Such a QDN differs from existing QKD networks in that the
former must deliver data qubits reliably within itself. Two types of QDNs are
studied, one using teleportation and the other using tell-and-go (TAG) to
exchange quantum data. Two corresponding quantum transport protocols (QTPs),
named Tele-QTP and TAG-QTP, are proposed to address many unique design
challenges involved in reliable delivery of data qubits, and constraints
imposed by quantum physics laws such as the no-cloning theorem, and limited
availability of quantum memory.
The proposed Tele-QTP and TAG-QTP are the first transport layer protocols for
QDNs, complementing other works on the network protocol stack. Tele-QTP and
TAG-QTP have novel mechanisms to support congestion-free and reliable delivery
of streams of data qubits by managing the limited quantum memory at end hosts
as well as intermediate nodes. Both analysis and extensive simulations show
that the proposed QTPs can achieve a high throughput and fairness. This study
also offers new insights into potential tradeoffs involved in using the two
methods, teleportation and TAG, in two types of QDNs.
|
Many complex processes can be viewed as dynamical systems of interacting
agents. In many cases, only the state sequences of individual agents are
observed, while the interacting relations and the dynamical rules are unknown.
The neural relational inference (NRI) model adopts graph neural networks that
pass messages over a latent graph to jointly learn the relations and the
dynamics based on the observed data. However, NRI infers the relations
independently and suffers from error accumulation in multi-step prediction
during the dynamics learning procedure. Besides, relation reconstruction without prior
knowledge becomes more difficult in more complex systems. This paper introduces
efficient message passing mechanisms to the graph neural networks with
structural prior knowledge to address these problems. A relation interaction
mechanism is proposed to capture the coexistence of all relations, and a
spatio-temporal message passing mechanism is proposed to use historical
information to alleviate error accumulation. Additionally, the structural prior
knowledge, symmetry as a special case, is introduced for better relation
prediction in more complex systems. The experimental results on simulated
physics systems show that the proposed method outperforms existing
state-of-the-art methods.
|
We present new 3 mm observations of the ionized gas toward the nuclear
starburst in the nearby (D ~ 3.5 Mpc) galaxy NGC 253. With ALMA, we detect
emission from the H40-alpha and He40-alpha lines in the central 200 pc of this
galaxy on spatial scales of ~4 pc. The recombination line emission primarily
originates from a population of approximately a dozen embedded super star
clusters in the early stages of formation. We find that emission from these
clusters is characterized by electron temperatures ranging from 7000-10000 K
and measure an average singly-ionized helium abundance <Y+> = 0.25 +/- 0.06,
both of which are consistent with values measured for HII regions in the center
of the Milky Way. We also report the discovery of unusually broad-linewidth
recombination line emission originating from seven of the embedded clusters. We
suggest that these clusters contribute to the launching of the large-scale hot
wind observed to emanate from the central starburst. Finally, we use the
measured recombination line fluxes to improve the characterization of overall
embedded cluster properties, including the distribution of cluster masses and
the fractional contribution of the clustered star formation to the total
starburst, which we estimate is at least 50%.
|