Using first-principles calculations, we show that CsBX$_3$ halides with B=Sn
or Pb undergo octahedral rotation distortions, while for B=Ge and Si, they
undergo a ferro-electric rhombohedral distortion accompanied by a rhombohedral
stretching of the lattice. We show that these are mutually exclusive at their
equilibrium volume although different distortions may occur as a function of
lattice expansion. The choice between the two distortion modes is in part
governed by the Goldschmidt tolerance factor. However, another factor
explaining the difference between Sn and Pb compared with Ge and Si is the
stronger lone-pair character of Ge and Si when forced to be divalent as is the
case in these structures. The lone-pair chemistry is related to the
off-centering. While the Si-based compounds have not yet been synthesized, the
Ge compounds have been established experimentally. As a final test of the
importance of the tolerance factor, we consider RbGeX$_3$, which has a smaller tolerance factor than the corresponding CsGeX$_3$ because Rb is smaller than Cs. We find that it can lower its energy through either rotations or rhombohedral off-centering distortions, but the latter lower the energy slightly more efficiently.
|
Open Source Software (OSS) plays an important role in the digital economy.
Yet although software production is amenable to remote collaboration and its
outputs are easily shared across distances, software development seems to
cluster geographically in places such as Silicon Valley, London, or Berlin. And
while recent work indicates that OSS activity creates positive externalities
which accrue locally through knowledge spillovers and information effects,
up-to-date data on the geographic distribution of active open source developers
is limited. This presents a significant blind spot for policymakers, who tend to
promote OSS at the national level as a cost-saving tool for public sector
institutions. We address this gap by geolocating more than half a million
active contributors to GitHub in early 2021 at various spatial scales. Compared
to results from 2010, we find a significant increase in the share of developers
based in Asia, Latin America and Eastern Europe, suggesting a more even spread
of OSS developers globally. Within countries, however, we find significant
concentration in regions, exceeding the concentration of workers in high-tech
fields. Social and economic development indicators predict at most half of
regional variation in OSS activity in the EU, suggesting that clusters of OSS
have idiosyncratic roots. We argue that policymakers seeking to foster OSS
should focus locally rather than nationally, using the tools of cluster policy
to support networks of OSS developers.
|
Using the group $G(1)$ of invertible elements and the maximal ideals
$\mathfrak{m}_x$ of the commutative algebra $C(X)$ of real-valued functions on
a compact regular space $X$, we define a Borel action of the algebra on the
measure space $(X,\mu)$ with $\mu$ a Radon measure. The zero sets $Z(X)$ of the algebra $C(X)$ are used to study the ergodicity of the $G(1)$-action via its
action on the maximal ideals $\mathfrak{m}_x$ which defines an action groupoid
$\mathcal{G} = \mathfrak{m}_x \ltimes G(1)$ trivialized on $X$. The resulting
measure groupoid $(\mathcal{G},\mathcal{C})$ is used to define a proper action
on the generalized space $\mathcal{M}(X)$. The existence of a slice at each point of $\mathcal{M}(X)$ presents it as a cohomogeneity-one $\mathcal{G}$-space. The
dynamical system of the algebra $C(X)$ is defined by the action of the measure
groupoid $(\mathcal{G},\mathcal{C}) \times \mathcal{M}(X) \to \mathcal{M}(X)$.
|
End-to-end speech recognition systems usually require huge amounts of labeled data, while annotating speech data is complicated and expensive. Active learning addresses this problem by selecting the most valuable samples for annotation. In this paper, we propose to use a predicted loss that
estimates the uncertainty of the sample. The CTC (Connectionist Temporal
Classification) and attention loss are informative for speech recognition since
they are computed based on all decoding paths and alignments. We define an
end-to-end active learning pipeline, training an ASR/LP (Automatic Speech
Recognition/Loss Prediction) joint model. The proposed approach was validated
on an English and a Chinese speech recognition task. The experiments show that
our approach achieves competitive results, outperforming random selection,
least confidence, and estimated-loss methods.
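
For illustration, a minimal Python sketch of the selection step such a pipeline could use: a loss-prediction (LP) head scores unlabeled utterances and the highest-scoring ones are sent for annotation. The scores here are random stand-ins, not outputs of the authors' ASR/LP model.

import numpy as np

def select_for_annotation(predicted_losses, budget):
    # Return the indices of the `budget` samples with the largest predicted
    # loss, i.e., the samples estimated to be most informative to label.
    predicted_losses = np.asarray(predicted_losses)
    return np.argsort(-predicted_losses)[:budget]

# Hypothetical usage: scores would come from an LP head trained jointly with
# the ASR model to regress the CTC + attention loss.
scores = np.random.rand(1000)                 # stand-in for LP-head outputs
to_label = select_for_annotation(scores, budget=100)
print(to_label[:10])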
|
Transformers with remarkable global representation capacities achieve
competitive results for visual tasks, but fail to consider high-level local
pattern information in input images. In this paper, we present a generic
Dual-stream Network (DS-Net) to fully explore the representation capacity of
local and global pattern features for image classification. Our DS-Net can
simultaneously calculate fine-grained and integrated features and efficiently
fuse them. Specifically, we propose an Intra-scale Propagation module to
process two different resolutions in each block and an Inter-Scale Alignment
module to perform information interaction across features at dual scales.
Besides, we also design a Dual-stream FPN (DS-FPN) to further enhance
contextual information for downstream dense predictions. Without bells and
whistles, the proposed DS-Net outperforms DeiT-Small by 2.4% in terms of top-1
accuracy on ImageNet-1k and achieves state-of-the-art performance compared with other Vision Transformers and ResNets. For object detection and instance segmentation, DS-Net-Small respectively outperforms ResNet-50 by 6.4% and 5.5% in terms of mAP on MSCOCO 2017, and surpasses the previous state-of-the-art scheme, demonstrating its potential to serve as a general backbone for vision tasks. The code will be released soon.
|
Magnetic reconnection can convert magnetic energy into kinetic energy of
non-thermal electron beams. We characterize the electron velocity distribution functions (EVDFs) generated by 3D kinetic magnetic reconnection in numerical simulations with the ACRONYM particle-in-cell (PIC) code, and their consequences for plasma instabilities, which differ from those of 2D kinetic magnetic reconnection since in 3D unstable waves can propagate in all directions. We found that: (1)
In both the diffusion region and the separatrices of reconnection, EVDFs with positive
velocity-space gradients in the direction parallel to the local magnetic field
are formed. These gradients can cause counter-streaming and bump-on-tail
instabilities. (2) In regions with weak magnetic field strength, namely,
regions near the current sheet midplane, EVDFs with positive velocity-space gradients are generated in the direction perpendicular to the local magnetic field. In particular, crescent-shaped EVDFs in the velocity space perpendicular
to local magnetic field are mainly formed in the diffusion region of
reconnection. These perpendicular gradients in the EVDFs can cause electron
cyclotron maser instabilities. (3) As the guide-field strength increases, fewer regions in the current sheets feature perpendicular velocity-space gradients in
the EVDFs. The formation of EVDFs with positive gradients in the parallel
(magnetic field-aligned) direction is mainly due to magnetized and adiabatic
electrons, while EVDFs with positive gradients in the direction perpendicular
to the local magnetic field are attributed to unmagnetized, nonadiabatic
electrons in the diffusion and outflow region near the reconnection midplane.
|
We propose a method for learning linear models whose predictive performance
is robust to causal interventions on unobserved variables, when noisy proxies
of those variables are available. Our approach takes the form of a
regularization term that trades off between in-distribution performance and
robustness to interventions. Under the assumption of a linear structural causal
model, we show that a single proxy can be used to create estimators that are
prediction optimal under interventions of bounded strength. This strength
depends on the magnitude of the measurement noise in the proxy, which is, in
general, not identifiable. In the case of two proxy variables, we propose a
modified estimator that is prediction optimal under interventions up to a known
strength. We further show how to extend these estimators to scenarios where
additional information about the "test time" intervention is available during
training. We evaluate our theoretical findings in synthetic experiments and
using real data of hourly pollution levels across several cities in China.
|
Sparse principal component analysis (PCA) is a popular tool for dimension reduction of high-dimensional data. Despite its massive popularity, there is
still a lack of theoretically justifiable Bayesian sparse PCA that is
computationally scalable. A major challenge is choosing a suitable prior for
the loadings matrix, as principal components are mutually orthogonal. We
propose a spike and slab prior that meets this orthogonality constraint and
show that the posterior enjoys both theoretical and computational advantages.
Two computational algorithms, the PX-CAVI and the PX-EM algorithms, are
developed. Both algorithms use parameter expansion to deal with the
orthogonality constraint and to accelerate their convergence speeds. We found that the PX-CAVI algorithm has superior empirical performance compared to the PX-EM algorithm and two other penalty methods for sparse PCA. The PX-CAVI algorithm
is then applied to study a lung cancer gene expression dataset. $\mathsf{R}$
package $\mathsf{VBsparsePCA}$ with an implementation of the algorithm is
available on The Comprehensive R Archive Network.
|
The physics goal of the strong interaction program of the NA61/SHINE
experiment at the CERN Super Proton Synchrotron (SPS) is to study the phase
diagram of hadronic matter by a scan of particle production in collisions of
nuclei with various sizes at a set of energies covering the SPS energy range.
This paper presents differential inclusive spectra of transverse momentum,
transverse mass and rapidity of $\pi^{-}$ mesons produced in central ${}^{40}$Ar+${}^{45}$Sc collisions at beam momenta of 13$A$, 19$A$, 30$A$,
40$A$, 75$A$ and 150$A$ GeV/$c$. Energy and system size dependence of
parameters of these distributions -- mean transverse mass, the inverse slope
parameter of transverse mass spectra, width of the rapidity distribution and
mean multiplicity -- are presented and discussed. Furthermore, the dependence
of the ratio of the mean number of produced pions to the mean number of wounded
nucleons on the collision energy was derived. The results are compared to
predictions of several models.
|
The effective theory framework based on symmetry has recently gained widespread interest in the field of cosmology. In this paper, we apply the same idea to the genesis of the primordial magnetic field and its evolution throughout cosmological history. Given that time-diffeomorphism symmetry is broken by the cosmological background, we consider the most general Lagrangian of
electromagnetic and metric fluctuation up to second order, which naturally
breaks conformal symmetry in the electromagnetic (EM) sector. We also include
parity violation in the electromagnetic sector, motivated by its potential observational significance. In such a set-up, we explore the evolution of EM, scalar, and tensor perturbations considering different observational constraints. In our analysis we emphasize the role played by the intermediate reheating phase, which has received limited attention in previous
studies. Assuming the vanishing electrical conductivity during the entire
period of reheating, the well-known Faraday electromagnetic induction has been
shown to play a crucial role in enhancing the strength of the present-day
magnetic field. We show how such physical effects, combined with the PLANCK and large-scale magnetic field observations, make a large class of models viable and severely restrict the reheating equation of state parameter to a very narrow range of $0.01 < \omega_\mathrm{eff} < 0.27$, which is nearly independent of the reheating scenarios we have considered.
|
In the real world, medical datasets often exhibit a long-tailed data distribution (i.e., a few classes occupy most of the data, while most classes have very few samples), which results in a challenging imbalance learning scenario. For example, there are an estimated more than 40 different kinds of retinal diseases with variable morbidity, of which more than 30 are very rare in global patient cohorts, resulting in a typical long-tailed learning problem for deep learning-based screening models. In this
study, we propose class subset learning by dividing the long-tailed data into multiple class subsets according to prior knowledge, such as regions and phenotype information. This forces the model to focus on learning subset-specific knowledge. More specifically, some related classes reside in fixed retinal regions, and some common pathological features are observed in both majority and minority conditions. With teacher models learned on these subsets, we then distill the multiple teacher models into a unified model with a weighted knowledge distillation loss. The proposed
framework proved to be effective for the long-tailed retinal disease recognition task. The experimental results on two different datasets
demonstrate that our method is flexible and can be easily plugged into many
other state-of-the-art techniques with significant improvements.
|
Local feature attribution methods are increasingly used to explain complex
machine learning models. However, current methods are limited because they are
extremely expensive to compute or are not capable of explaining a distributed
series of models where each model is owned by a separate institution. The
latter is particularly important because it often arises in finance where
explanations are mandated. Here, we present DeepSHAP, a tractable method to
propagate local feature attributions through complex series of models based on
a connection to the Shapley value. We evaluate DeepSHAP across biological,
health, and financial datasets to show that it provides equally salient
explanations an order of magnitude faster than existing model-agnostic
attribution techniques and demonstrate its use in an important distributed
series of models setting.
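
For illustration, a toy sketch of propagating attributions through a two-stage series of models, assuming both stages are linear so the chain rule of contributions is exact; DeepSHAP itself handles nonlinear models, and the matrices below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (e.g., owned by institution A): z = A x + a
A = rng.normal(size=(3, 5)); a = rng.normal(size=3)
# Stage 2 (e.g., owned by institution B): y = b . z + c
b = rng.normal(size=3); c = 0.5

x = rng.normal(size=5)     # input to explain
x0 = np.zeros(5)           # baseline input

# Attributions of stage 2 with respect to its own inputs z (exact for linear models):
z, z0 = A @ x + a, A @ x0 + a
phi_z = b * (z - z0)

# Propagate to the original features by distributing each phi_z[j] over the
# features in proportion to their contribution to z_j - z0_j (assumes z_j != z0_j):
contrib = A * (x - x0)                   # contribution of feature i to z_j
phi_x = (contrib / (z - z0)[:, None]).T @ phi_z

# For linear stages this recovers the end-to-end attribution exactly:
assert np.allclose(phi_x, (A.T @ b) * (x - x0))
print(phi_x)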
|
A definition of a convolution of tensor fields on group manifolds is given,
which is then generalised to generic homogeneous spaces. This is applied to the
product of gauge fields in the context of `gravity $=$ gauge $\times$ gauge'.
In particular, it is shown that the linear Becchi-Rouet-Stora-Tyutin (BRST)
gauge transformations of two Yang-Mills gauge fields generate the linear BRST
diffeomorphism transformations of the graviton. This facilitates the definition
of the `gauge $\times$ gauge' convolution product on, for example, the static
Einstein universe, and more generally for ultrastatic spacetimes with compact
spatial slices.
|
Network dismantling aims to break a network into disconnected fragments by removing an optimal set of nodes and has been widely adopted in many real-world applications such as epidemic control and rumor containment. However, conventional methods often disassemble the system from the perspective of classic networks, which have only pairwise interactions, and ignore the more ubiquitous and natural group-wise interactions modeled by hypernetworks. Moreover, since a simple network cannot describe the collective behavior of multiple objects, it is necessary to solve such problems through hypernetwork dismantling. In this work, we design a higher-order collective influence
measure to identify key node sets in a hypernetwork. It comprehensively considers both the environment in which the target node is located and the node's own characteristics to determine its importance, so as to dismantle the hypernetwork by removing the selected nodes. Finally, we used the method to carry out a
series of real-world hypernetwork dismantling tasks. Experimental results on
five real-world hypernetworks demonstrate the effectiveness of our proposed
measure.
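
For illustration, a minimal greedy-dismantling sketch on a toy hypernetwork; the influence score used here (summed incident hyperedge sizes) is only a hypothetical stand-in for the higher-order collective influence measure described above.

def score(node, hyperedges):
    # Hypothetical higher-order score: sum of (|e| - 1) over incident hyperedges.
    return sum(len(e) - 1 for e in hyperedges if node in e)

def dismantle(nodes, hyperedges, k):
    # Greedily remove k nodes; a hyperedge is kept only while it has >= 2 nodes.
    nodes, removed = set(nodes), []
    edges = [set(e) for e in hyperedges]
    for _ in range(k):
        target = max(nodes, key=lambda v: score(v, edges))
        nodes.discard(target)
        removed.append(target)
        edges = [e - {target} for e in edges]
        edges = [e for e in edges if len(e) >= 2]
    return removed

H = [{1, 2, 3}, {2, 3, 4, 5}, {5, 6}, {1, 6, 7}]
print(dismantle({1, 2, 3, 4, 5, 6, 7}, H, k=2))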
|
We investigate the sensitivity of the projected TeV muon collider to the
gauged $L^{}_{\mu}$-$L^{}_{\tau}$ model. Two processes are considered:
$Z'$-mediated two-body scatterings $\mu^+ \mu^- \to \ell^+ \ell^-$ with $\ell =
\mu$ or $\tau$, and scattering with initial state photon emission, $\mu^+ \mu^-
\to \gamma Z',~Z' \to \ell \overline{\ell}$, where $\ell$ can be $\mu$, $\tau$
or $\nu_{\mu/\tau}$. We quantitatively study the sensitivities of these two
processes by taking into account possible signals and relevant backgrounds in a
muon collider experiment with a center-of-mass energy $\sqrt{s} = 3~{\rm TeV}$
and a luminosity $L=1~{\rm ab^{-1}}$. For two-body scattering one can exclude
$Z'$ masses $M^{}_{Z'} \lesssim 100~{\rm TeV}$ with $\mathcal{O}(1)$ gauge
couplings. When $M^{}_{Z'} \lesssim 1~{\rm TeV} <\sqrt{s}$, one can exclude $g'
\gtrsim 2\times 10^{-2}$. The process with photon emission is more powerful
than the two-body scattering if $M^{}_{Z'} < \sqrt{s}$. For instance, a
sensitivity of $g' \simeq 4 \times 10^{-3}$ can be achieved at $M^{}_{Z'} =
1~{\rm TeV}$. The parameter spaces favored by the $(g-2)^{}_{\mu}$ and $B$
anomalies with $M^{}_{Z'} > 100~{\rm GeV}$ are entirely covered by a muon
collider.
|
A monopolist seller of multiple goods screens a buyer whose type is initially
unknown to both but drawn from a commonly known distribution. The buyer
privately learns about his type via a signal. We derive the seller's optimal
mechanism in two different information environments. We begin by deriving the
buyer-optimal outcome. Here, an information designer first selects a signal,
and then the seller chooses an optimal mechanism in response; the designer's
objective is to maximize consumer surplus. Then, we derive the optimal
informationally robust mechanism. In this case, the seller first chooses the
mechanism, and then nature picks the signal that minimizes the seller's
profits. We derive the relation between both problems and show that the optimal
mechanism in both cases takes the form of pure bundling.
|
The task of age transformation illustrates the change of an individual's
appearance over time. Accurately modeling this complex transformation over an
input facial image is extremely challenging as it requires making convincing,
possibly large changes to facial features and head shape, while still
preserving the input identity. In this work, we present an image-to-image
translation method that learns to directly encode real facial images into the
latent space of a pre-trained unconditional GAN (e.g., StyleGAN) subject to a
given aging shift. We employ a pre-trained age regression network to explicitly
guide the encoder in generating the latent codes corresponding to the desired
age. In this formulation, our method approaches the continuous aging process as
a regression task between the input age and desired target age, providing
fine-grained control over the generated image. Moreover, unlike approaches that
operate solely in the latent space using a prior on the path controlling age,
our method learns a more disentangled, non-linear path. Finally, we demonstrate
that the end-to-end nature of our approach, coupled with the rich semantic
latent space of StyleGAN, allows for further editing of the generated images.
Qualitative and quantitative evaluations show the advantages of our method
compared to state-of-the-art approaches.
|
Camera pose regression methods apply a single forward pass to the query image
to estimate the camera pose. As such, they offer a fast and light-weight
alternative to traditional localization schemes based on image retrieval. Pose
regression approaches simultaneously learn two regression tasks, aiming to
jointly estimate the camera position and orientation using a single embedding
vector computed by a convolutional backbone. We propose an attention-based
approach for pose regression, where the convolutional activation maps are used
as sequential inputs. Transformers are applied to encode the sequential
activation maps as latent vectors, used for camera pose regression. This allows
us to pay attention to spatially-varying deep features. Using two Transformer
heads, we separately focus on the features for camera position and orientation,
based on how informative they are per task. Our proposed approach is shown to
compare favorably to contemporary pose regression schemes and achieves
state-of-the-art accuracy across multiple outdoor and indoor benchmarks. In
particular, to the best of our knowledge, our approach is the only method to
attain sub-meter average accuracy across outdoor scenes. We make our code publicly available.
|
A pervasive design issue of AI systems is their explainability--how to
provide appropriate information to help users understand the AI. The technical
field of explainable AI (XAI) has produced a rich toolbox of techniques.
Designers are now tasked with the challenges of how to select the most suitable
XAI techniques and translate them into UX solutions. Informed by our previous
work studying design challenges around XAI UX, this work proposes a design
process to tackle these challenges. We review our and related prior work to
identify requirements that the process should fulfill, and accordingly, propose
a Question-Driven Design Process that grounds the user needs, choices of XAI
techniques, design, and evaluation of XAI UX all in the user questions. We
provide a mapping guide between prototypical user questions and exemplars of
XAI techniques to reframe the technical space of XAI, also serving as boundary
objects to support collaboration between designers and AI engineers. We
demonstrate it with a use case of designing XAI for healthcare adverse events
prediction, and discuss lessons learned for tackling design challenges of AI
systems.
|
Assuming time-scale separation (TSS), a simple and unified theory of thermodynamics and stochastic thermodynamics is constructed for small classical systems strongly interacting with their environments in a controllable fashion. The total
Hamiltonian is decomposed into a bath part and a system part, the latter being
the Hamiltonian of mean force. Both the conditional equilibrium of bath and the
reduced equilibrium of the system are described by canonical ensemble theories
with respect to their own Hamiltonians. The bath free energy is independent of
the system variables and the control parameter. Furthermore, the weak coupling
theory of stochastic thermodynamics becomes applicable almost verbatim, even if
the interaction and correlation between the system and its environment are
strong and varied externally. Finally, this TSS-based approach also leads to
some new insights about the origin of the second law of thermodynamics.
|
End-to-end models are favored in automatic speech recognition (ASR) because
of their simplified system structure and superior performance. Among these
models, Transformer and Conformer have achieved state-of-the-art recognition
accuracy in which self-attention plays a vital role in capturing important
global information. However, the time and memory complexity of self-attention
increases quadratically with the length of the sentence. In this paper, a prob-sparse self-attention mechanism is introduced into the Conformer to sparsify the computation of self-attention, in order to accelerate inference speed and
reduce space consumption. Specifically, we adopt a Kullback-Leibler divergence
based sparsity measurement for each query to decide whether we compute the
attention function on this query. By using the prob-sparse attention mechanism,
we achieve an impressive 8% to 45% inference speed-up and a 15% to 45% memory usage reduction in the self-attention module of the Conformer Transducer while
maintaining the same level of error rate.
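
For illustration, a numpy sketch of an Informer-style prob-sparse selection, where each query's sparsity score is the max-minus-mean of its scaled dot products with the keys and only the top-u queries receive the full attention computation; the exact measurement and fallback used in the paper may differ.

import numpy as np

def prob_sparse_attention(Q, K, V, u):
    # Q, K, V: (n, d). Only the u queries with the largest sparsity score attend
    # normally; the remaining "lazy" queries fall back to the mean of V.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                         # (n, n)
    sparsity = scores.max(axis=1) - scores.mean(axis=1)   # max-mean proxy for the KL measure
    top = np.argsort(-sparsity)[:u]

    out = np.tile(V.mean(axis=0), (Q.shape[0], 1))
    attn = np.exp(scores[top] - scores[top].max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    out[top] = attn @ V                                   # full attention for active queries
    return out

rng = np.random.default_rng(0)
n, d = 16, 8
print(prob_sparse_attention(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                            rng.normal(size=(n, d)), u=4).shape)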
|
In this paper, we investigate the nonhomogeneous boundary value problem for
the steady Navier-Stokes equations in a helically symmetric spatial domain.
When the data are assumed to be helically invariant and to satisfy the compatibility condition, we prove that this problem has at least one helically invariant solution.
|
We use aggregate information from individual-to-firm and firm-to-firm Garanti BBVA Bank transactions to mimic domestic private demand. In particular, we replicate the quarterly national accounts aggregate consumption and investment (gross fixed capital formation) and its largest components (Machinery and Equipment, and Construction) in real time for the case of Turkey. To validate the usefulness of the information derived from these indicators, we test the ability of both indicators to nowcast Turkish GDP using
different nowcasting models. The results are successful and confirm the
usefulness of Consumption and Investment Banking transactions for nowcasting
purposes. The value of the big data information is greatest at the beginning of the nowcasting process, when traditional hard data are scarce. This makes the information especially relevant for countries with longer statistical release lags, such as emerging markets.
|
Satterthwaite and Toepke (1970 Phys. Rev. Lett. 25 741) predicted
high-temperature superconductivity in hydrogen-rich metallic alloys, based on
an idea that these compounds should exhibit high Debye frequency of the proton
lattice, which boosts the superconducting transition temperature, Tc. The idea
has got full confirmation more than four decades later when Drozdov et al (2015
Nature 525 73) experimentally discovered near-room-temperature
superconductivity in highly-compressed sulphur superhydride, H3S. To date, more
than a dozen of high-temperature hydrogen-rich superconducting phases in Ba-H,
Pr-H, P-H, Pt-H, Ce-H, Th-H, S-H, Y-H, La-H, and (La,Y)-H systems have been
synthesized and, recently, Hong et al (2021 arXiv:2101.02846) reported on the
discovery of C2/m-SnH12 phase with superconducting transition temperature of Tc
~ 70 K. Here we analyse the magnetoresistance data, R(T,B), of C2/m-SnH12 phase
and report that this superhydride exhibits the ground state superconducting gap
of $\Delta$(0) = 9.2 meV, the ratio of 2$\Delta$(0)/k$_B$Tc = 3.3, and 0.010 <
Tc/Tf < 0.014 (where Tf is the Fermi temperature) and, thus, C2/m-SnH12 falls into the unconventional superconductor band in the Uemura plot.
|
Intelligent systems are transforming the world, as well as our healthcare
system. We propose a deep learning-based cough sound classification model that
can distinguish between children with healthy versus pathological coughs such
as asthma, upper respiratory tract infection (URTI), and lower respiratory
tract infection (LRTI). To train a deep neural network model, we collected a new dataset of cough sounds labelled with clinicians' diagnoses. The chosen model is a bidirectional long short-term memory (BiLSTM) network based on Mel Frequency Cepstral Coefficient (MFCC) features. When trained to classify two classes of coughs -- healthy or pathological (in general or belonging to a specific respiratory pathology) -- the resulting model reaches an accuracy exceeding 84\% when classifying coughs against the label provided by the physicians' diagnosis. To classify a subject's respiratory pathology condition, the results of multiple cough epochs per subject were combined. The
resulting prediction accuracy exceeds 91\% for all three respiratory
pathologies. However, when the model is trained to discriminate among the four classes of coughs, the overall accuracy drops: one class of pathological coughs is often misclassified as another. Nevertheless, if one considers a healthy cough classified as healthy and a pathological cough classified as having some kind of pathology, then the overall accuracy of the four-class model is above 84\%. A longitudinal study of the MFCC feature space comparing pathological and recovered coughs collected from the same subjects revealed that pathological coughs, irrespective of the underlying condition, occupy the same feature space, making them harder to differentiate using MFCC features alone.
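
For illustration, a minimal PyTorch sketch of the described pipeline (MFCC features fed to a BiLSTM classifier); the layer sizes, number of MFCC coefficients, and class count are assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class CoughBiLSTM(nn.Module):
    def __init__(self, n_mfcc=13, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, mfcc):                    # mfcc: (batch, frames, n_mfcc)
        _, (h, _) = self.lstm(mfcc)
        h = torch.cat([h[-2], h[-1]], dim=-1)   # last forward + backward hidden states
        return self.head(h)                     # logits: (batch, n_classes)

# Hypothetical usage: MFCC matrices could be computed with, e.g., librosa.feature.mfcc
# and transposed to (frames, n_mfcc) before batching.
model = CoughBiLSTM()
dummy = torch.randn(4, 200, 13)                 # 4 clips, 200 frames, 13 MFCCs
print(model(dummy).shape)                       # torch.Size([4, 2])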
|
The Tick library allows researchers in market microstructure to simulate and learn Hawkes processes in high-frequency data, with optimized parametric and
non-parametric learners. But one challenge is to take into account the correct
causality of order book events considering latency: the only way one order book
event can influence another is if the time difference between them (by the
central order book timestamps) is greater than the minimum amount of time for
an event to be (i) published in the order book, (ii) reach the trader
responsible for the second event, (iii) influence the decision (processing time
at the trader), and (iv) for the second event to reach the order book and be processed.
For this we can use exponential kernels shifted to the right by the latency
amount. We derive the expression for the log-likelihood to be minimized for the
1-D and the multidimensional cases, and test this method with simulated data
and real data. On real data we find that, although not all decays are the same,
the latency itself will determine most of the decays. We also show how the
decays are related to the latency. Code is available on GitHub at
https://github.com/MarcosCarreira/Hawkes-With-Latency.
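
For illustration, a minimal 1-D sketch of the latency-shifted exponential kernel and the corresponding log-likelihood on [0, T]; the symbols mu (baseline), alpha (branching weight), beta (decay) and delta (latency) follow common Hawkes conventions and are assumptions rather than the exact parametrization derived in the paper.

import numpy as np

def log_likelihood(events, mu, alpha, beta, delta, T):
    # Log-likelihood of a 1-D Hawkes process with intensity
    # lambda(t) = mu + sum_{t_i < t - delta} alpha * beta * exp(-beta * (t - t_i - delta)),
    # i.e., an exponential kernel shifted to the right by the latency delta.
    events = np.asarray(events, dtype=float)
    ll = 0.0
    for i, t in enumerate(events):
        lags = t - events[:i] - delta                  # only positive lags contribute
        lam = mu + alpha * beta * np.sum(np.exp(-beta * lags[lags > 0]))
        ll += np.log(lam)
    # compensator: integral of the intensity over [0, T]
    tails = T - events - delta
    ll -= mu * T + alpha * np.sum(1.0 - np.exp(-beta * tails[tails > 0]))
    return ll

events = [0.1, 0.4, 0.45, 1.2, 1.9]
print(log_likelihood(events, mu=0.5, alpha=0.6, beta=3.0, delta=0.05, T=2.0))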
|
We study the problem of diffeomorphometric geodesic landmark matching where
the objective is to find a diffeomorphism that via its group action maps
between two sets of landmarks. It is well-known that the motion of the
landmarks, and thereby the diffeomorphism, can be encoded by an initial
momentum leading to a formulation where the landmark matching problem can be
solved as an optimisation problem over such momenta. The novelty of our work
lies in the application of a derivative-free Bayesian inverse method for
learning the optimal momentum encoding the diffeomorphic mapping between the
template and the target. The method we apply is the ensemble Kalman filter, an
extension of the Kalman filter to nonlinear observation operators. We describe
an efficient implementation of the algorithm and show several numerical results
for various target shapes.
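
For illustration, a compact sketch of a basic ensemble Kalman update for a generic nonlinear forward operator G; the toy operator and noise level below are illustrative, not the landmark-matching forward map of the paper.

import numpy as np

def enkf_step(ensemble, G, y, gamma):
    # One ensemble Kalman update for parameter estimation.
    # ensemble: (J, p) particles; G: R^p -> R^d forward map;
    # y: observed data (d,); gamma: observation noise covariance (d, d).
    J = ensemble.shape[0]
    Gm = np.array([G(m) for m in ensemble])            # (J, d)
    m_mean, G_mean = ensemble.mean(0), Gm.mean(0)
    C_mG = (ensemble - m_mean).T @ (Gm - G_mean) / J   # cross-covariance (p, d)
    C_GG = (Gm - G_mean).T @ (Gm - G_mean) / J         # output covariance (d, d)
    K = C_mG @ np.linalg.inv(C_GG + gamma)             # Kalman gain (p, d)
    noise = np.random.multivariate_normal(np.zeros(len(y)), gamma, size=J)
    return ensemble + (y + noise - Gm) @ K.T

# Toy example: recover m from y = G(m) with a nonlinear G.
G = lambda m: np.array([m[0] ** 2 + m[1], np.sin(m[1])])
y = G(np.array([1.0, 0.5]))
gamma = 1e-2 * np.eye(2)
ens = np.random.default_rng(0).normal(size=(50, 2))
for _ in range(20):
    ens = enkf_step(ens, G, y, gamma)
print(ens.mean(0))     # ensemble mean should be consistent with y = G(m)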
|
We propose a novel framework for model-order reduction of hyperbolic
differential equations. The approach combines a relaxation formulation of the
hyperbolic equations with a discretization using shifted base functions.
Model-order reduction techniques are then applied to the resulting system of
coupled ordinary differential equations. On computational examples, including in particular the case of shock waves, we show the validity of the approach and the
performance of the reduced system.
|
Quantum computing is poised to dramatically change the computational
landscape, worldwide. Quantum computers can solve complex problems that are, at
least in some cases, beyond the ability of even advanced future classical-style
computers. In addition to being able to solve these classical
computer-unsolvable problems, quantum computers have demonstrated a capability
to solve some problems (such as prime factoring) much more efficiently than
classical computing. This will create problems for encryption techniques, which
depend on the difficulty of factoring for their security. Security, scientific,
and other applications will require access to quantum computing resources to exploit their unique capabilities, speed, and economic (aggregate computing time
cost) benefits. Many scientific applications, as well as numerous other ones,
use grid computing to provide benefits such as scalability and resource access.
As these applications may benefit from quantum capabilities - and some future
applications may require quantum capabilities - identifying how to integrate
quantum computing systems into grid computing environments is critical. This
paper discusses the benefits of grid-connected quantum computers and what is
required to achieve this.
|
Moral outrage has become synonymous with social media in recent years.
However, the preponderance of academic analysis on social media websites has
focused on hate speech and misinformation. This paper focuses on analyzing
moral judgements rendered on social media by capturing the moral judgements
that are passed in the subreddit /r/AmITheAsshole on Reddit. Using the labels
associated with each judgement we train a classifier that can take a comment
and determine whether it judges the user who made the original post to have
positive or negative moral valence. Then, we use this classifier to investigate
an assortment of website traits surrounding moral judgements in ten other
subreddits, including where negative moral users like to post and their posting
patterns. Our findings also indicate that posts that are judged in a positive
manner will score higher.
|
A pair-density-wave (PDW) is a novel superconducting state with an
oscillating order parameter. A microscopic mechanism that can give rise to it
has been long sought but has not yet been established by any controlled
calculation. Here we report a density-matrix renormalization group (DMRG) study
of an effective $t$-$J$-$V$ model, which is equivalent to the Holstein-Hubbard
model in a strong-coupling limit, on long two-, four- and six-leg triangular
cylinders. While a state with long-range PDW order is precluded in one
dimension, we find strong quasi-long-range PDW order with a divergent PDW
susceptibility as well as spontaneous breaking of time-reversal and inversion
symmetries. Despite the strong interactions, the underlying Fermi surfaces and
electron pockets around the $K$ and $K^\prime$ points in the Brillouin zone can
be identified. We conclude that the state is valley-polarized and that the PDW
arises from intra-pocket pairing with an incommensurate center of mass
momentum. In the two-leg case, the exponential decay of spin correlations and
the measured central charge $c\approx 1$ are consistent with an unusual
realization of a Luther-Emery liquid.
|
One-dimensional (1D) materials have attracted significant research interest
due to their unique quantum confinement effects and edge-related properties.
Atomically thin 1D nanoribbons are particularly interesting because they provide a valuable platform at the physical limits of both thickness and width. Here, we develop a catalyst-free growth method and achieve the growth of Bi2O2Se nanostructures with tunable dimensionality. Significantly, Bi2O2Se nanoribbons
with thickness down to 0.65 nm, corresponding to monolayer, are successfully
grown for the first time. Electrical and optoelectronic measurements show that
Bi2O2Se nanoribbons possess decent performance in terms of mobility, on/off
ratio, and photoresponsivity, suggesting their promise for devices. This work
not only reports a new method for the growth of atomically thin nanoribbons but
also provides a platform to study the properties and applications of such nanoribbon materials at the thickness limit.
|
In many dynamic systems, decisions on system operation are updated over time,
and the decision maker requires an online learning approach to optimize its
strategy in response to the changing environment. When the loss and constraint
functions are convex, this belongs to the general family of online convex
optimization (OCO). In existing OCO works, the environment is assumed to vary
in a time-slotted fashion, while the decisions are updated at each time slot.
However, many wireless communication systems permit only periodic decision
updates, i.e., each decision is fixed over multiple time slots, while the
environment changes between the decision epochs. The standard OCO model is
inadequate for these systems. Therefore, in this work, we consider periodic
decision updates for OCO. We aim to minimize the accumulation of time-varying
convex loss functions, subject to both short-term and long-term constraints.
Information about the loss functions within the current update period may be
incomplete and is revealed to the decision maker only after the decision is
made. We propose an efficient algorithm, termed Periodic Queueing and Gradient
Aggregation (PQGA), which employs novel periodic queues together with possibly
multi-step aggregated gradient descent to update the decisions over time. We
derive upper bounds on the dynamic regret, static regret, and constraint
violation of PQGA. As an example application, we study the performance of PQGA
in a large-scale multi-antenna system shared by multiple wireless service
providers. Simulation results show that PQGA converges fast and substantially
outperforms the known best alternative.
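
For illustration, a heavily simplified sketch of periodic decision updates with a virtual queue for a long-term constraint; it captures the general queue-plus-aggregated-gradient structure but not PQGA's exact aggregation, step sizes, or regret guarantees, and all parameter values are assumptions.

import numpy as np

def periodic_oco(loss_grads, cons, cons_grad, dim, period, radius=2.0, V=10.0, eta=0.05):
    # Decisions are updated only once per `period` time slots.
    # loss_grads[t](x): gradient of the slot-t loss, revealed after acting;
    # cons(x) <= 0 is a long-term constraint handled through a virtual queue Q.
    x, Q, xs = np.zeros(dim), 0.0, []
    for t in range(len(loss_grads)):
        xs.append(x.copy())
        if (t + 1) % period == 0:
            # aggregate the gradients observed during the last update period
            g = sum(loss_grads[s](x) for s in range(t + 1 - period, t + 1))
            x = x - eta * (V * g + Q * cons_grad(x))    # drift-plus-penalty step
            x = x * min(1.0, radius / (np.linalg.norm(x) + 1e-12))  # project onto a ball
            Q = max(0.0, Q + cons(x))                   # virtual queue update
    return xs

rng = np.random.default_rng(0)
targets = rng.normal(size=(40, 3))
loss_grads = [lambda x, a=a: 2 * (x - a) for a in targets]   # time-varying quadratic losses
cons = lambda x: np.sum(x) - 0.5                             # long-term constraint sum(x) <= 0.5
cons_grad = lambda x: np.ones_like(x)
print(periodic_oco(loss_grads, cons, cons_grad, dim=3, period=4)[-1])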
|
In this paper, we propose an anchor-free single-stage LiDAR-based 3D object
detector -- RangeDet. The most notable difference with previous works is that
our method is purely based on the range view representation. Compared with the
commonly used voxelized or Bird's Eye View (BEV) representations, the range view representation is more compact and free of quantization error. Although there are works adopting it for semantic segmentation, its performance in object detection lags far behind that of voxelized or BEV counterparts. We first
analyze the existing range-view-based methods and find two issues overlooked by
previous works: 1) the scale variation between nearby and far away objects; 2)
the inconsistency between the 2D range image coordinates used in feature
extraction and the 3D Cartesian coordinates used in output. Then we
deliberately design three components to address these issues in our RangeDet.
We test our RangeDet in the large-scale Waymo Open Dataset (WOD). Our best
model achieves 72.9/75.9/65.8 3D AP on vehicle/pedestrian/cyclist. These
results outperform other range-view-based methods by a large margin (~20 3D AP
in vehicle detection), and are overall comparable with the state-of-the-art
multi-view-based methods. The code will be made public.
|
Parameter Estimation (PE) and State Estimation (SE) are among the most widespread tasks in system engineering. They need to be performed automatically, fast, and frequently, as measurements arrive. Deep Learning (DL) holds the promise of tackling this challenge; however, as far as PE and SE in power systems are concerned, (a) DL has not won the trust of system operators because of the lack of interpretations based on the physics of electricity, and (b) DL has remained elusive in operational regimes where data are scarce. To address this, we
present a hybrid scheme which embeds physics modeling of power systems into Graphical Neural Networks (GNN), thereby empowering system operators with reliable and explainable real-time predictions which can then be used to
control the critical infrastructure. To enable progress towards trustworthy DL
for PE and SE, we build a physics-informed method, named Power-GNN, which
reconstructs physical, thus interpretable, parameters within Effective Power
Flow (EPF) models, such as admittances of effective power lines, and NN
parameters, representing implicitly unobserved elements of the system. In our
experiments, we test the Power-GNN on different realistic power networks,
including ones with thousands of loads and hundreds of generators. We show that Power-GNN outperforms a vanilla NN scheme that is unaware of the EPF physics.
|
A constraint satisfaction problem (CSP), $\textsf{Max-CSP}(\mathcal{F})$, is
specified by a finite set of constraints $\mathcal{F} \subseteq \{[q]^k \to
\{0,1\}\}$ for positive integers $q$ and $k$. An instance of the problem on $n$
variables is given by $m$ applications of constraints from $\mathcal{F}$ to
subsequences of the $n$ variables, and the goal is to find an assignment to the
variables that satisfies the maximum number of constraints. In the
$(\gamma,\beta)$-approximation version of the problem for parameters $0 \leq
\beta < \gamma \leq 1$, the goal is to distinguish instances where at least
$\gamma$ fraction of the constraints can be satisfied from instances where at
most $\beta$ fraction of the constraints can be satisfied. In this work we
consider the approximability of this problem in the context of sketching
algorithms and give a dichotomy result. Specifically, for every family
$\mathcal{F}$ and every $\beta < \gamma$, we show that either a linear
sketching algorithm solves the problem in polylogarithmic space, or the problem
is not solvable by any sketching algorithm in $o(\sqrt{n})$ space.
|
Traditionally, origami has been categorized into two groups according to its kinematic design: rigid and non-rigid origami. However, such categorization can be superficial, and rigid origami can obtain new mechanical
properties by intentionally relaxing the rigid-folding kinematics. Based on
numerical simulations using the bar-hinge approach and experiments, this study
examines the multi-stability of a stacked Miura-origami cellular structure with
different levels of facet compliance. The simulation and experiment results
show that a unit cell in such a cellular solid exhibits only two stable states if it follows the rigid origami kinematics; however, two more stable states are
reachable if the origami facets become sufficiently compliant. Moreover, the
switch between two particular stable states shows an asymmetric energy barrier,
meaning that the unit cell follows fundamentally different deformation paths
when it extends from one state to another compared to the opposite compression
switch. As a result, the reaction force required for extending this unit cell
between these two states can be higher than the compression switch. Such
asymmetric multi-stability can be fine-tuned by tailoring the underlying
origami design, and it can be extended into cellular solids with carefully
placed voids. By showing the benefits of exploiting facet compliance, this
study could foster multi-functional structures and material systems that
traditional rigid origami cannot create.
|
We prove $L^p$ bounds for the maximal operators associated to an
Ahlfors-regular variant of fractal percolation. Our bounds improve upon those
obtained by I. {\L}aba and M. Pramanik and in some cases are sharp up to the
endpoint. A consequence of our main result is that there exist Ahlfors-regular
Salem Cantor sets of any dimension $>1/2$ such that the associated maximal
operator is bounded on $L^2(\mathbb{R})$. We follow the overall scheme of
{\L}aba-Pramanik for the analytic part of the argument, while the probabilistic
part is instead inspired by our earlier work on intersection properties of
random measures.
|
Bayesian Optimization is a popular tool for tuning algorithms in automatic
machine learning (AutoML) systems. Current state-of-the-art methods leverage
Random Forests or Gaussian processes to build a surrogate model that predicts
algorithm performance given a certain set of hyperparameter settings. In this
paper, we propose a new surrogate model based on gradient boosting, where we
use quantile regression to provide optimistic estimates of the performance of
an unobserved hyperparameter setting, and combine this with a distance metric
between unobserved and observed hyperparameter settings to help regulate
exploration. We demonstrate empirically that the new method is able to
outperform some state-of-the-art techniques across a reasonably sized set of classification problems.
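
For illustration, a minimal scikit-learn sketch of the surrogate idea: an optimistic upper-quantile gradient-boosting prediction plus a distance-to-observed bonus as the acquisition score; the hyperparameters and the exact combination rule are assumptions, not the paper's.

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0]) + 0.1 * rng.normal(size=len(x))   # black-box performance

X_obs = rng.uniform(0, 2, size=(20, 1))   # evaluated hyperparameter settings
y_obs = f(X_obs)

# Optimistic surrogate: 90th-percentile quantile regression with gradient boosting.
gbm = GradientBoostingRegressor(loss="quantile", alpha=0.9, n_estimators=200)
gbm.fit(X_obs, y_obs)

X_cand = rng.uniform(0, 2, size=(500, 1))            # candidate settings
optimism = gbm.predict(X_cand)
exploration = cdist(X_cand, X_obs).min(axis=1)       # distance to nearest observation

acq = optimism + 0.5 * exploration                   # the weight 0.5 is an assumption
print("next setting to evaluate:", X_cand[np.argmax(acq)])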
|
Consider a connected graph $G$ and let $T$ be a spanning tree of $G$. Every
edge $e \in G-T$ induces a cycle in $T \cup \{e\}$. The intersection of two
distinct such cycles is the set of edges of $T$ that belong to both cycles. We
consider the problem of finding a spanning tree that has the least number of
such non-empty intersections.
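
For illustration, a small sketch that, for a given spanning tree, counts how many pairs of fundamental cycles have a non-empty intersection (share at least one tree edge); it assumes networkx, which the problem statement does not mention.

from itertools import combinations
import networkx as nx

def nonempty_intersections(G, T):
    # For each non-tree edge (u, v), the fundamental cycle consists of (u, v)
    # plus the unique u-v path in T; only tree edges can be shared by two cycles.
    tree_edge_sets = []
    for u, v in G.edges():
        if not T.has_edge(u, v):
            path = nx.shortest_path(T, u, v)
            tree_edge_sets.append({frozenset(e) for e in zip(path, path[1:])})
    return sum(1 for A, B in combinations(tree_edge_sets, 2) if A & B)

G = nx.cycle_graph(6)
G.add_edge(0, 3)
T = nx.minimum_spanning_tree(G)
print(nonempty_intersections(G, T))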
|
We consider the task of grasping a target object based on a natural language
command query. Previous work primarily focused on localizing the object given
the query, which requires a separate grasp detection module to grasp it. The
cascaded application of two pipelines incurs errors in overlapping multi-object
cases due to ambiguity in the individual outputs. This work proposes a model named Command Grasping Network (CGNet) to directly output command-satisficing grasps from RGB image and textual command inputs. A dataset with ground-truth (image, command, grasps) tuples is generated based on the VMRD dataset to train
the proposed network. Experimental results on the generated test set show that
CGNet outperforms a cascaded object-retrieval and grasp detection baseline by a
large margin. Three physical experiments demonstrate the functionality and
performance of CGNet.
|
It has been shown that the parallel Lattice Linear Predicate (LLP) algorithm
solves many combinatorial optimization problems such as the shortest path
problem, the stable marriage problem and the market clearing price problem. In
this paper, we give the parallel LLP algorithm for many dynamic programming
problems. In particular, we show that the LLP algorithm solves the longest
subsequence problem, the optimal binary search tree problem, and the knapsack
problem. Furthermore, the algorithm can be used to solve the constrained
versions of these problems so long as the constraints are lattice linear. The
parallel LLP algorithm requires only read-write atomicity and no higher-level
atomic instructions.
|
We report the first investigation of the performance of EOM-CC4 -- an
approximate equation-of-motion coupled-cluster model which includes iterative
quadruple excitations -- for vertical excitation energies in molecular systems.
By considering a set of 28 excited states in 10 small molecules for which we
have computed CCSDTQP and FCI reference energies, we show that, in the case of
excited states with a dominant contribution from the single excitations, CC4
yields excitation energies with sub-kJ~mol$^{-1}$ accuracy (i.e., error below
$0.01$ eV), in very close agreement with its more expensive CCSDTQ parent.
Therefore, if one aims at high accuracy, CC4 stands as a highly competitive
approximate method to model molecular excited states, with a significant
improvement over both CC3 and CCSDT. Our results also evidence that, although
the same qualitative conclusions hold, one cannot reach the same level of
accuracy for transitions with a dominant contribution from the double
excitations.
|
We present a comprehensive analytic model of a relativistic jet propagation
in expanding media. This model is the first to cover the entire jet evolution
from early to late times, as well as a range of configurations that are
relevant to binary neutron star mergers. These include low and high luminosity
jets, unmagnetized and mildly magnetized jets, time-dependent luminosity jets,
and Newtonian and relativistic head velocities. We also extend the existing
solution of jets in a static medium to power-law density media with index
$\alpha<5$. Our model, which is tested and calibrated by a suite of 3D RMHD
simulations, provides simple analytic formulae for the jet head propagation and
breakout times, as well as a simple breakout criterion which depends only on
the jet to ejecta energy ratio and jet opening angle. Assuming a delay time $
t_d $ between the onset of a homologous ejecta expansion and jet launching, the
system evolution has two main regimes: strong and weak jets. The regime depends
on the ratio between the jet head velocity in the ejecta frame and the local
ejecta velocity, denoted as $ \eta $. Strong jets start their propagation in
the ejecta on a timescale shorter than $t_d$ with $\eta \gg 1$, and within
several ejecta dynamical times $\eta$ drops below unity. Weak jets are unable
to penetrate the ejecta at first (start with $\eta \ll 1$), and breach the
ejecta only after the ejecta expands over a timescale longer than $ t_d $, thus
their evolution is independent of $ t_d $. After enough time, both strong and
weak jets approach an asymptotic phase where $\eta$ is constant. Applying our
model to short GRBs, we find that there is most likely a large diversity of
ejecta mass, where mass $ \lesssim 10^{-3}~{\rm M}_{\odot} $ (at least along
the poles) is common.
|
As the gradient descent method in deep learning raises a series of issues, this paper proposes a novel gradient-free deep learning structure. By adding a new module to the traditional Self-Organizing Map and introducing residuals into the map, a Deep Valued Self-Organizing Map network is constructed. A convergence analysis of this deep Valued Self-Organizing Map network is also given, which yields an inequality relating the designed parameters to the dimension of the inputs and the prediction loss.
|
It has been previously shown that a particular nonperturbative
constituent-quark model of hadrons describes experimental measurements of
electromagnetic form factors of light charged mesons through a small number of
common phenomenological parameters, matching at the same time the
Quantum-Chromodynamics (QCD) asymptotics for the pi-meson form factor at large
momentum transfer. Here we start with the determination of the K0
electromagnetic form factor in this approach. Precise measurement of the K0
charge radius makes it possible to constrain model parameters with high
accuracy. Then, with all parameters fixed, we revisit the K+ form factor and
find that it matches experimental measurements in the infrared, lattice results
at moderate momentum transfer and the perturbative QCD asymptotics in the
ultraviolet. In this way we obtain a narrow constraint on the K+ charge radius,
<r_K+^2> = 0.403 +0.007 -0.006 fm^2, and extend the successful
infrared-ultraviolet connection from pi to K mesons.
|
The purpose of this paper is to present an inexact version of the scaled
gradient projection method on a convex set, which is inexact in two senses.
First, an inexact projection on the feasible set is computed, allowing for an
appropriate relative error tolerance. Second, an inexact non-monotone line
search scheme is employed to compute a step size which defines the next
iteration. It is shown that the proposed method has similar asymptotic
convergence properties and iteration-complexity bounds as the usual scaled
gradient projection method employing monotone line searches.
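
For illustration, a compact sketch of a scaled gradient projection step with a non-monotone (max of the last M objective values) line search; for simplicity the projection here is exact (onto a box) and the scaling matrix is the identity, whereas the paper allows inexact projections with a relative error tolerance.

import numpy as np

def nonmonotone_spg(f, grad, x0, lo, hi, M=5, alpha0=1.0, sigma=1e-4, max_iter=100):
    # Gradient projection onto the box [lo, hi] with a non-monotone
    # Armijo-type line search (identity scaling for simplicity).
    x = np.clip(x0, lo, hi)
    history = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        d = np.clip(x - alpha0 * g, lo, hi) - x     # projected gradient direction
        if np.linalg.norm(d) < 1e-10:
            break
        f_ref = max(history[-M:])                   # non-monotone reference value
        lam = 1.0
        while f(x + lam * d) > f_ref + sigma * lam * g @ d:
            lam *= 0.5
        x = x + lam * d
        history.append(f(x))
    return x

f = lambda x: 0.5 * np.sum((x - np.array([2.0, -3.0])) ** 2)
grad = lambda x: x - np.array([2.0, -3.0])
print(nonmonotone_spg(f, grad, np.zeros(2), lo=-1.0, hi=1.0))   # -> approx. [1, -1]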
|
We calculate the mass difference between the $\Upsilon$ and $\eta_b$ and the
$\Upsilon$ leptonic width from lattice QCD using the Highly Improved Staggered
Quark formalism for the $b$ quark and including $u$, $d$, $s$ and $c$ quarks in
the sea. We have results for lattices with lattice spacing as low as 0.03 fm
and multiple heavy quark masses, enabling us to map out the heavy quark mass
dependence and determine values at the $b$ quark mass. Our results are:
$M_{\Upsilon} -M_{\eta_b} = 57.5(2.3)(1.0) \,\mathrm{MeV}$ (where the second
uncertainty comes from neglect of quark-line disconnected correlation
functions) and decay constants, $f_{\eta_b}=724(12)$ MeV and $f_{\Upsilon}
=677.2(9.7)$ MeV, giving $\Gamma(\Upsilon \rightarrow e^+e^-) = 1.292(37)(3)
\,\mathrm{keV}$. The hyperfine splitting and leptonic width are both in good
agreement with experiment, and provide the most accurate lattice QCD results to
date for these quantities by some margin. At the same time results for the time
moments of the vector-vector correlation function can be compared to values for
the $b$ quark contribution to $\sigma(e^+e^- \rightarrow \mathrm{hadrons})$
determined from experiment. Moments 4--10 provide a 2\% test of QCD and yield a
$b$ quark contribution to the anomalous magnetic moment of the muon of
0.300(15)$\times 10^{-10}$. Our results, covering a range of heavy quark
masses, may also be useful to constrain QCD-like composite theories for beyond
the Standard Model physics.
|
The purpose of this paper is to present a theoretical basis for the study of $\omega$-Hamiltonian vector fields in a more general approach than the
classical one. We introduce the concepts of $\omega$-symplectic group and
$\omega$-semisymplectic group, and describe some of their properties. We show
that the Lie algebra of such groups is a useful tool in the recognition of an
$\omega$-Hamiltonian vector field defined on a symplectic vector space
$(V,\omega)$ with respect to coordinates that are not necessarily symplectic.
|
Ineffective fundraising lowers the resources charities can use to provide
goods. We combine a field experiment and a causal machine-learning approach to
increase a charity's fundraising effectiveness. The approach optimally targets
a fundraising instrument to individuals whose expected donations exceed
solicitation costs. Our results demonstrate that machine-learning-based optimal
targeting allows the charity to substantially increase donations net of
fundraising costs relative to uniform benchmarks in which either everybody or
no one receives the gift. To that end, it (a) should direct its fundraising
efforts to a subset of past donors and (b) never address individuals who were
previously asked but never donated. Further, we show that the benefits of
machine-learning-based optimal targeting even materialize when the charity only
exploits publicly available geospatial information or applies the estimated
optimal targeting rule to later fundraising campaigns conducted in similar
samples. We conclude that charities not engaging in optimal targeting waste
significant resources.
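
For illustration, a schematic sketch of the targeting logic: estimate each individual's expected donation with and without the fundraising gift from simulated (hypothetical) past-campaign data and send the gift only when the estimated incremental donation exceeds the solicitation cost. The simple T-learner below is an illustrative stand-in for the causal machine-learning procedure used in the paper.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                 # covariates (e.g., past giving behaviour)
gift = rng.integers(0, 2, size=n)           # 1 if the fundraising gift was sent
donation = 5 + 2 * X[:, 0] + gift * (1 + X[:, 1]) + rng.normal(size=n)

# T-learner: separate outcome models for treated and control individuals.
m1 = RandomForestRegressor(random_state=0).fit(X[gift == 1], donation[gift == 1])
m0 = RandomForestRegressor(random_state=0).fit(X[gift == 0], donation[gift == 0])

cost = 1.5                                  # assumed cost of sending the gift
uplift = m1.predict(X) - m0.predict(X)      # expected incremental donation
target = uplift > cost                      # optimal-targeting rule
print(f"share of individuals targeted: {target.mean():.2f}")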
|
Spin$-$orbit alignment (SOA; i.e., the vector alignment between the halo spin
and the orbital angular momentum of neighboring halos) provides an important
clue to how galactic angular momenta develop. For this study, we extract
virial-radius-wise contact halo pairs with mass ratios between 1/10 and 10 from
a set of cosmological $N$-body simulations. In the spin--orbit angle
distribution, we find a significant SOA in that 52.7%$\pm$0.2% of neighbors are
on the prograde orbit. The SOA of our sample is mainly driven by low-mass
target halos ($<10^{11.5}h^{-1}M_{\odot}$) with close merging neighbors,
corroborating the notion that the tidal interaction is one of the physical
origins of SOA. We also examine the correlation of SOA with the adjacent
filament and find that halos closer to the filament show stronger SOA. Most
interestingly, we discover for the first time that halos with the spin parallel
to the filament experience most frequently the prograde-polar interaction
(i.e., fairly perpendicular but still prograde interaction; spin--orbit angle
$\sim$ 70$^{\circ}$). This instantly invokes the spin-flip event and the
prograde-polar interaction will soon flip the spin of the halo to align it with
the neighbor's orbital angular momentum. We propose that the SOA originates
from the local cosmic flow along the anisotropic large-scale structure,
especially that along the filament, and grows further by interactions with
neighbors.
|
The outbreak of the coronavirus disease 2019 (COVID-19) has now spread
throughout the globe infecting over 150 million people and causing the death of
over 3.2 million people. Thus, there is an urgent need to study the dynamics of
epidemiological models to gain a better understanding of how such diseases
spread. While epidemiological models can be computationally expensive, recent
advances in machine learning techniques have given rise to neural networks with
the ability to learn and predict complex dynamics at reduced computational
costs. Here we introduce two digital twins of a SEIRS model applied to an
idealised town. The SEIRS model has been modified to take account of spatial
variation and, where possible, the model parameters are based on official virus
spreading data from the UK. We compare predictions from a data-corrected
Bidirectional Long Short-Term Memory network and a predictive Generative
Adversarial Network. The predictions given by these two frameworks are accurate
when compared to the original SEIRS model data.
Additionally, these frameworks are data-agnostic and could be applied to
towns, idealised or real, in the UK or in other countries. Also, more
compartments could be included in the SEIRS model, in order to study more
realistic epidemiological behaviour.
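
For reference, a minimal non-spatial SEIRS integration with scipy; the parameter values are purely illustrative and the spatially varying, UK-calibrated model described above is more involved.

from scipy.integrate import solve_ivp

def seirs(t, y, beta, sigma, gamma, omega):
    # Classic SEIRS compartments: S -> E -> I -> R -> S (waning immunity).
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N + omega * R
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I - omega * R
    return [dS, dE, dI, dR]

y0 = [9990, 0, 10, 0]                   # an idealised town of 10,000 people
params = (0.4, 1 / 5, 1 / 10, 1 / 90)   # illustrative beta, sigma, gamma, omega
sol = solve_ivp(seirs, (0, 365), y0, args=params, dense_output=True)
print(sol.y[:, -1])                      # compartment sizes after one year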
|
This work presents the selection principle $S_1^*(\tau_x,CD)$ that
characterizes $q$-points. We also discuss the induced topological game
$G_1^*(\tau_x,CD)$ and its relations with $W$-points and
$\widetilde{W}$-points, as well as with the game $G_1(\Omega_x,\Omega_x)$.
|
A single layer neural network for the solution of linear equations is
presented. The proposed circuit is based on the standard Hopfield model albeit
with the added flexibility that the interconnection weight matrix need not be
symmetric. This results in an asymmetric Hopfield neural network capable of
solving linear equations. PSPICE simulation results are given which verify the
theoretical predictions. Experimental results for circuits set up to solve
small problems further confirm the operation of the proposed circuit.
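As a software analogue of the circuit idea (a discrete-time sketch, not the authors' PSPICE or hardware implementation), one can solve $Ax=b$ by descending the energy $E(x)=\tfrac{1}{2}\lVert Ax-b\rVert^2$, which works whether or not $A$ is symmetric:

```python
import numpy as np

def hopfield_style_solve(A, b, lr=None, steps=5000):
    """Solve A x = b by gradient descent on E(x) = 0.5 * ||A x - b||^2.

    Discrete-time analogue of the continuous dynamics dx/dt = -A^T (A x - b);
    A need not be symmetric.
    """
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros(A.shape[1])
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # any step below 2/lambda_max(A^T A) converges
    for _ in range(steps):
        x -= lr * A.T @ (A @ x - b)
    return x

A = np.array([[3.0, 1.0], [2.0, 4.0]])         # deliberately non-symmetric
b = np.array([9.0, 16.0])
print(hopfield_style_solve(A, b))              # approx. [2., 3.]
```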
|
We introduce a simple and effective method for learning VAEs with
controllable inductive biases by using an intermediary set of latent variables.
This allows us to overcome the limitations of the standard Gaussian prior
assumption. In particular, it allows us to impose desired properties like
sparsity or clustering on learned representations, and incorporate prior
information into the learned model. Our approach, which we refer to as the
Intermediary Latent Space VAE (InteL-VAE), is based around controlling the
stochasticity of the encoding process with the intermediary latent variables,
before deterministically mapping them forward to our target latent
representation, from which reconstruction is performed. This allows us to
maintain all the advantages of the traditional VAE framework, while
incorporating desired prior information, inductive biases, and even topological
information through the latent mapping. We show that this, in turn, allows
InteL-VAEs to learn both better generative models and representations.
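The overall dataflow described above can be sketched as follows; the layer sizes, the choice of a ReLU mapping (one simple way to push the target latent towards sparsity), and all names are illustrative assumptions rather than the InteL-VAE reference implementation.

```python
import torch
import torch.nn as nn

class IntermediaryLatentVAE(nn.Module):
    """Sketch: stochastic encoder over an intermediary latent z_tilde,
    then a deterministic map g to the target latent z used for decoding."""

    def __init__(self, x_dim=784, h_dim=256, zt_dim=16, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, zt_dim)
        self.logvar = nn.Linear(h_dim, zt_dim)
        # Deterministic forward map from the intermediary latent to the target latent.
        self.g = nn.Sequential(nn.Linear(zt_dim, z_dim), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z_tilde = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        z = self.g(z_tilde)                                            # deterministic mapping
        return self.dec(z), mu, logvar

recon, mu, logvar = IntermediaryLatentVAE()(torch.rand(4, 784))
```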
|
We shall describe the various activities done by us in Covid Times including
outreach and educational workshops in Physics and Astronomy. We shall discuss
the caveats in virtual teaching of Astronomy and the lessons learnt in the
process.
|
We investigate and analyze principles of typical motion planning algorithms.
These include traditional planning algorithms, supervised learning, optimal
value reinforcement learning, and policy gradient reinforcement learning.
Traditional planning algorithms we investigated include graph search
algorithms, sampling-based algorithms, and interpolating curve algorithms.
Supervised learning algorithms include MSVM, LSTM, MCTS and CNN. Optimal value
reinforcement learning algorithms include Q learning, DQN, double DQN, dueling
DQN. Policy gradient algorithms include policy gradient method, actor-critic
algorithm, A3C, A2C, DPG, DDPG, TRPO and PPO. New general criteria are also
introduced to evaluate performance and application of motion planning
algorithms by analytical comparisons. Convergence speed and stability of
optimal value and policy gradient algorithms are specially analyzed. Future
directions are presented analytically according to principles and analytical
comparisons of motion planning algorithms. This paper provides researchers with
a clear and comprehensive understanding about advantages, disadvantages,
relationships, and future of motion planning algorithms in robotics, and paves
ways for better motion planning algorithms.
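As a single concrete reminder of the optimal-value family surveyed above (a toy sketch, not tied to any particular planner discussed in the paper), the tabular Q-learning update is:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((4, 2))                               # 4 states, 2 actions
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0, 1])                                     # 0.1
```

DQN and its variants replace the table with a neural network and stabilise the max with a target network, while the policy-gradient family optimises the policy directly.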
|
Missing data is a common problem in clinical data collection, which causes
difficulty in the statistical analysis of such data. To overcome problems
caused by incomplete data, we propose a new imputation method called projective
resampling imputation mean estimation (PRIME), which can also address ``the
curse of dimensionality" problem in imputation with less information loss. We
use various sample sizes, missing-data rates, covariate correlations, and noise
levels in simulation studies, and all results show that PRIME outperforms
other methods such as iterative least-squares estimation (ILSE), maximum
likelihood (ML), and complete-case analysis (CC). Moreover, we conduct a study
of influential factors in cardiac surgery-associated acute kidney injury
(CSA-AKI), which shows that our method performs better than the other models.
Finally, we prove that PRIME is consistent under some regularity conditions.
|
We study a duality for the $n$-point functions in VEV formalism that we call
the ordinary vs fully simple duality. It provides an ultimate generalisation
and a proper context for the duality between maps and fully simple maps
observed by Borot and Garcia-Failde. Our approach allows us to transfer the
algebraicity properties between the systems of $n$-point functions related by
this duality, and gives direct tools for the analysis of singularities. As an
application, we give a proof of a recent conjecture of Borot and Garcia-Failde
on topological recursion for fully simple maps.
|
Logging is a development practice that plays an important role in the
operations and monitoring of complex systems. Developers place log statements
in the source code and use log data to understand how the system behaves in
production. Unfortunately, anticipating where to log during development is
challenging. Previous studies show the feasibility of leveraging machine
learning to recommend log placement despite the data imbalance since logging is
a fraction of the overall code base. However, it remains unknown how those
techniques apply to an industry setting, and little is known about the effect
of imbalanced data and sampling techniques.
In this paper, we study the log placement problem in the code base of Adyen,
a large-scale payment company. We analyze 34,526 Java files and 309,527 methods
that sum up to more than 2M SLOC. We systematically measure the effectiveness of five
models based on code metrics, explore the effect of sampling techniques,
understand which features models consider to be relevant for the prediction,
and evaluate whether we can exploit 388,086 methods from 29 Apache projects to
learn where to log in an industry setting.
Our best performing model achieves a balanced accuracy of 79%, a precision of
81%, and a recall of 60%. While sampling techniques improve recall, they
penalize precision at a prohibitive cost. Experiments with open-source data
yield under-performing models over Adyen's test set; nevertheless, they are
useful due to their low rate of false positives. Our supporting scripts and
tools are available to the community.
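To make the evaluation setup concrete, the sketch below trains a code-metrics classifier on an imbalanced, synthetic stand-in dataset and reports the same three metrics; the features, labels, and model choice are assumptions for illustration and have no connection to Adyen's code base or the study's actual models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))   # stand-in code metrics (e.g. SLOC, complexity, fan-in)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)  # ~10% "logged"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(balanced_accuracy_score(y_te, pred), precision_score(y_te, pred), recall_score(y_te, pred))
```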
|
Faraday rotation provides a valuable tracer of magnetic fields in the
interstellar medium; catalogs of Faraday rotation measures provide key
observations for studies of the Galactic magnetic field. We present a new
catalog of rotation measures derived from the Canadian Galactic Plane Survey,
covering a large region of the Galactic plane spanning 52 deg < l < 192 deg, -3
deg < b < 5 deg, along with northern and southern latitude extensions around l
~ 105 deg. We have derived rotation measures for 2234 sources (4 of which are
known pulsars), 75% of which have no previous measurements, over an area of
approximately 1300 square degrees. These new rotation measures increase the
measurement density for this region of the Galactic plane by a factor of two.
|
Multi-Agent Reinforcement Learning (MARL) algorithms have shown impressive
performance in simulation in recent years, but deploying MARL in real-world
applications may raise safety problems. MARL with centralized shields was proposed and verified
in safety games recently. However, centralized shielding approaches can be
infeasible in several real-world multi-agent applications that involve
non-cooperative agents or communication delay. Thus, we propose to combine MARL
with decentralized Control Barrier Function (CBF) shields based on available
local information. We establish a safe MARL framework with decentralized
multiple CBFs and extend Multi-Agent Deep Deterministic Policy Gradient
(MADDPG) to Multi-Agent Deep Deterministic Policy Gradient with decentralized
multiple Control Barrier Functions (MADDPG-CBF). Based on a collision-avoidance
problem that includes not only cooperative agents but also obstacles, we demonstrate
the construction of multiple CBFs with safety guarantees in theory. Experiments
are conducted, and the results verify that the proposed safe MARL framework
can guarantee the safety of the agents in MARL.
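To illustrate how a decentralized CBF shield can act on local information only, here is a minimal single-integrator, single-obstacle safety filter; the dynamics, gains, and closed-form projection (valid for a single affine constraint) are simplifying assumptions, not the MADDPG-CBF implementation.

```python
import numpy as np

def cbf_filter(u_nom, x, x_obs, d=1.0, alpha=1.0):
    """Minimally alter u_nom so that h(x) = ||x - x_obs||^2 - d^2 satisfies
    dh/dt = 2 (x - x_obs) . u >= -alpha * h(x)  (single-integrator dynamics)."""
    a = 2.0 * (x - x_obs)                                # gradient of h
    b = -alpha * (np.dot(x - x_obs, x - x_obs) - d ** 2)
    if a @ u_nom >= b:                                   # nominal action already safe
        return u_nom
    return u_nom + (b - a @ u_nom) / (a @ a) * a         # project onto the safe half-space

x = np.array([1.5, 0.0])                                 # agent near an obstacle at the origin
print(cbf_filter(np.array([-1.0, 0.0]), x, x_obs=np.zeros(2)))   # braked action, ~[-0.417, 0.]
```

With several agents or obstacles, each agent would stack one such constraint per hazard and solve a small quadratic program instead of using the closed form.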
|
Carbon is one of the most essential elements to support a sustained human
presence in space, and more immediately, several large-scale methalox-based
transport systems will begin operating in the near future. This raises the
question of whether indigenous carbon on the Moon is abundant and concentrated
to the extent where it could be used as a viable resource including as
propellant. Here, I assess potential sources of lunar carbon based on previous
work focused on polar water ice. A simplified model is used to estimate the
temperature-dependent Carbon Content of Ices at the lunar poles, and this is
combined with remote sensing data to estimate the total amount of carbon and
generate a Carbon Favorability Index that highlights promising deposits for
future ground-based prospecting. Hotspots in the index maps are identified, and
nearby staging areas are analyzed using quantitative models of trafficability
and solar irradiance. Overall, the Moon is extremely poor in carbon sources
compared to more abundant and readily accessible options at Mars. However, a
handful of polar regions may contain appreciable amounts of subsurface
carbon-bearing ices that could serve as a rich source in the near term, but
would be easily exhausted on longer timescales. Four of those regions were
found to have safe nearby staging areas with equatorial-like illumination at a
modest height above the surface. Any one of these sites could yield enough C, H
and O to produce propellant for hundreds of refuelings of a large spacecraft.
Other potential lunar carbon sources including bulk regolith and pyroclastic
glasses are less viable due to their low carbon concentrations.
|
Traffic simulators act as an essential component in the operation and
planning of transportation systems. Conventional traffic simulators usually
employ a calibrated physical car-following model to describe vehicles'
behaviors and their interactions with traffic environment. However, there is no
universal physical model that can accurately predict the pattern of vehicles'
behaviors in different situations. A fixed physical model tends to be less
effective in a complicated environment given the non-stationary nature of
traffic dynamics. In this paper, we formulate traffic simulation as an inverse
reinforcement learning problem, and propose a parameter sharing adversarial
inverse reinforcement learning model for dynamics-robust simulation learning.
Our proposed model is able to imitate a vehicle's trajectories in the real
world while simultaneously recovering the reward function that reveals the
vehicle's true objective which is invariant to different dynamics. Extensive
experiments on synthetic and real-world datasets show the superior performance
of our approach compared to state-of-the-art methods and its robustness to
variant dynamics of traffic.
|
Direction of arrival (DOA) estimation in array signal processing is an
important research area. The accuracy of DOA estimation greatly
determines the performance of multi-input multi-output (MIMO) antenna systems.
The multiple signal classification (MUSIC) algorithm, which is the most
canonical and widely used subspace-based method, has a moderate estimation
performance of DOA. However, in hybrid massive MIMO systems, the received
signals at the antennas are not sent to the receiver directly, and the spatial
covariance matrix, which is essential to the MUSIC algorithm, is thus unavailable.
Therefore, the spatial covariance matrix reconstruction is required for the
application of MUSIC in hybrid massive MIMO systems. In this article, we
present a quantum algorithm for MUSIC-based DOA estimation in hybrid massive
MIMO systems. Compared with the best-known classical algorithm, our quantum
algorithm can achieve an exponential speedup on some parameters and a
polynomial speedup on others under some mild conditions. In our scheme, we
first present the quantum subroutine for the beam sweeping based spatial
covariance matrix reconstruction, where we implement a quantum singular vector
transition process to avoid extending the steering vectors matrix into the
Hermitian form. Second, a variational quantum density matrix eigensolver
(VQDME) is proposed for obtaining signal and noise subspaces, where we design a
novel objective function in the form of the trace of density matrices product.
Finally, a quantum labeling operation is proposed for the direction of arrival
estimation of the signal.
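For context, the classical (non-quantum) MUSIC pseudospectrum that the covariance matrix feeds into can be sketched as below for a uniform linear array; this is the textbook algorithm only, not the quantum subroutines proposed in the paper, and the array geometry and spacing are illustrative assumptions.

```python
import numpy as np

def music_spectrum(R, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Classical MUSIC pseudospectrum from a spatial covariance matrix R
    for a uniform linear array with element spacing d (in wavelengths)."""
    n = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)                 # eigenvalues returned in ascending order
    En = eigvecs[:, : n - n_sources]               # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(n) * np.sin(theta))   # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.array(spectrum)              # peaks indicate estimated DOAs
```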
|
Time-frequency concentration operators restrict the integral
analysis-synthesis formula for the short-time Fourier transform to a given
compact domain. We estimate how much the corresponding eigenvalue counting
function deviates from the Lebesgue measure of the time-frequency domain. For
window functions in the Gelfand-Shilov class, the bounds approximately match
known asymptotics. We also consider window functions that decay only
polynomially in time and frequency.
|
The correlation between the event mean-transverse momentum
$[p_{\mathrm{T}}]$, and the anisotropic flow magnitude $v_n$,
$\rho(v^{2}_{n},[p_{T}])$, has been argued to be sensitive to the initial
conditions in heavy-ion collisions. We use simulated events generated with the
AMPT and EPOS models for Au+Au at $\sqrt{\textit{s}_{NN}}$ = 200 GeV, to
investigate the model dependence and the response and sensitivity of the
$\rho(v^{2}_{2},[p_{T}])$ correlator to collision-system size and shape, and
the viscosity of the matter produced in the collisions. We find good
qualitative agreement between the correlators for the string melting version of
the AMPT model and the EPOS model. The model investigations for
shape-engineered events as well as events with different viscosity ($\eta/s$),
indicate that $\rho(v^{2}_{2},[p_{T}])$ is sensitive to the initial-state
geometry of the collision system but is insensitive to sizable changes in
$\eta/s$ for the medium produced in the collisions. These findings suggest that
precise differential measurements of $\rho(v^{2}_{2},[p_{T}])$ as a function of
system size, shape, and beam-energy could provide more stringent constraints to
discern between initial-state models and hence, more reliable extractions of
$\eta/s$.
|
Crowd counting is critical for numerous video surveillance scenarios. One of
the main issues in this task is how to handle the dramatic scale variations of
pedestrians caused by the perspective effect. To address this issue, this paper
proposes a novel convolution neural network-based crowd counting method, termed
Perspective-guided Fractional-Dilation Network (PFDNet). By modeling the
continuous scale variations, the proposed PFDNet is able to select the proper
fractional dilation kernels for adapting to different spatial locations. It
significantly improves on the flexibility of state-of-the-art methods that only
consider discrete representative scales. In addition, by avoiding the
multi-scale or multi-column architectures used in other methods, it is
computationally more efficient. In practice, the proposed PFDNet is constructed
by stacking multiple Perspective-guided Fractional-Dilation Convolutions (PFC)
on a VGG16-BN backbone. By introducing a novel generalized dilation convolution
operation, the PFC can handle fractional dilation ratios in the spatial domain
under the guidance of perspective annotations, achieving continuous scales
modeling of pedestrians. To deal with the problem of unavailable perspective
information in some cases, we further introduce an effective perspective
estimation branch to the proposed PFDNet, which can be trained in either
supervised or weakly-supervised setting once the branch has been pre-trained.
Extensive experiments show that the proposed PFDNet outperforms
state-of-the-art methods on ShanghaiTech A, ShanghaiTech B, WorldExpo'10,
UCF-QNRF, UCF_CC_50 and TRANCOS datasets, achieving MAEs of 53.8, 6.5, 6.8, 84.3,
205.8, and 3.06 respectively.
|
Rikudo is a number-placement puzzle, where the player is asked to complete a
Hamiltonian path on a hexagonal grid, given some clues (numbers already placed
and edges of the path). We prove that the game is complete for NP, even if the
puzzle has no hole. When all odd numbers are placed it is in P, whereas it is
still NP-hard when all numbers of the form $3k+1$ are placed.
|
We have demonstrated the advantage of combining multi-wavelength
observations, from the ultraviolet (UV) to near-infrared, to study Kron 3, a
massive star cluster in the Small Magellanic Cloud. We have estimated the
radius of the cluster Kron 3 to be 2.'0 and for the first time, we report the
identification of NUV-bright red clump (RC) stars and the extension of the RC in
colour and magnitude in the NUV vs (NUV-optical) colour-magnitude diagram
(CMD). We found that the extension of the RC is an intrinsic property of the
cluster and it is not due to contamination of field stars or differential
reddening across the field. We studied the spectral energy distribution of the
RC stars and estimated a small range in temperature ~5000 - 5500 K, luminosity
~60 - 90 L$_\odot$, and radius ~8.0 - 11.0 R$_\odot$, supporting their RC nature. The range of UV
magnitudes amongst the RC stars (~23.3 to 24.8 mag) is likely caused by the
combined effect of variable mass loss, variation in initial helium abundance
(Y_ini=0.23 to 0.28), and a small variation in age (6.5-7.5 Gyr) and
metallicity ([Fe/H]=-1.5 to -1.3). Spectroscopic follow-up observations of RC
stars in Kron 3 are necessary to confirm the cause of the extended RC.
|
Image restoration is a typical ill-posed problem, and it contains various
tasks. In the medical imaging field, a degraded image hampers diagnosis
and even subsequent image processing. Both traditional iterative and up-to-date
deep networks have attracted much attention and obtained a significant
improvement in reconstructing satisfying images. This study combines their
advantages into one unified mathematical model and proposes a general image
restoration strategy to deal with such problems. This strategy consists of two
modules. First, a novel generative adversarial net (GAN) with WGAN-GP training
is built to recover image structures and subtle details. Then, a deep iteration
module promotes image quality with a combination of pre-trained deep networks
and compressed sensing algorithms by ADMM optimization. The (D)eep (I)teration
module suppresses image artifacts and further recovers subtle image details,
(A)ssisted by a (M)ulti-level (O)bey-pixel feature extraction (N)etwork
(D)iscriminator that recovers general structures. Therefore, the proposed strategy
is named DIAMOND.
|
This work (Part (I)) together with its companion (Part (II) [45]) develops a
new framework for stochastic functional Kolmogorov equations, which are
nonlinear stochastic differential equations depending on the current as well as
the past states. Because of the complexity of the results, it seems to be
instructive to divide our contributions into two parts. In contrast to the
existing literature, our effort is to advance the knowledge by allowing delay
and past dependence, yielding essential utility to a wide range of
applications. A long-standing question of fundamental importance pertaining to
biology and ecology is: What are the minimal necessary and sufficient
conditions for long-term persistence and extinction (or for long-term
coexistence of interacting species) of a population? Regardless of the
particular applications encountered, persistence and extinction are properties
shared by Kolmogorov systems. While there are many excellent treatises on
stochastic-differential-equation-based Kolmogorov equations, the work on
stochastic Kolmogorov equations with past dependence is still scarce. Our aim
here is to answer the aforementioned basic question. This work, Part (I), is
devoted to characterization of persistence, whereas its companion, Part (II)
[45], is devoted to extinction. The main techniques used in this paper include
the newly developed functional Itô formula and asymptotic coupling and
Harris-like theory for infinite dimensional systems specialized to functional
equations. General theorems for stochastic functional Kolmogorov equations are
developed first. Then a number of applications are examined to obtain new
results substantially covering, improving, and extending the existing
literature. Furthermore, these conditions reduce to that of Kolmogorov systems
when there is no past dependence.
|
Most softwarized telco services are conveniently framed as Service Function
Chains (SFCs). However, being structured as a combination of interconnected
nodes, service chains may suffer from the single point of failure problem,
meaning that an individual node malfunctioning could compromise the whole chain
operation. To guarantee "highly available" (HA) levels, service providers are
required to introduce redundancy strategies to achieve specific availability
demands, where cost constraints have to be taken into account as well. Along
these lines we propose HASFC (standing for High Availability SFC), a framework
designed to support, through a dedicated REST interface, the MANO
infrastructure in deploying SFCs with an optimal availability-cost trade-off.
Our framework is equipped with: i) an availability model builder aimed at
constructing probabilistic models of the SFC nodes in terms of failure and repair
actions; ii) a chaining and selection module to compose the possible redundant
SFCs, and extract the best candidates thereof. Beyond providing architectural
details, we demonstrate the functionalities of HASFC through a use case which
considers the IP Multimedia Subsystem, an SFC-like structure adopted to manage
multimedia contents within 4G and 5G networks.
|
For many computer systems, the proper organization of memory is among the most
critical design issues. Using a tiered cache memory (along with branch
prediction) is an effective means of increasing modern multi-core processors'
performance. Designing high-performance processors is a complex task and
requires preliminary verification and analysis at the model level, usually
performed with analytical and simulation modeling. The refinement of extreme programming is
an unfortunate challenge. Few experts disagree with the synthesis of access
points. This article demonstrates that Internet QoS and 16-bit architectures
are always incompatible, but it's the same situation for write-back caches. The
solution to this problem can be implemented by analyzing simulation models of
different complexity in combination with the analytical evaluation of
individual algorithms. This work is devoted to designing a multi-parameter
simulation model of a multi-core processor for evaluating the performance of cache
memory algorithms and the optimality of the structure. Optimization of the
structures and algorithms of the cache memory makes it possible to accelerate
the interaction between the processor and the memory and to improve the
performance of the entire system.
|
This article investigates the heat kernel of the two-dimensional uniform
spanning tree. We improve previous work by demonstrating the occurrence of
log-logarithmic fluctuations around the leading order polynomial behaviour for
the on-diagonal part of the quenched heat kernel. In addition we give two-sided
estimates for the averaged heat kernel, and we show that the exponents that
appear in the off-diagonal parts of the quenched and averaged versions of the
heat kernel differ. Finally, we derive various scaling limits for the heat
kernel, the implications of which include enabling us to sharpen the known
asymptotics regarding the on-diagonal part of the averaged heat kernel and the
expected distance travelled by the associated simple random walk.
|
We extend the weak-strong uniqueness principle to general models of
compressible viscous fluids near/on the vacuum. In particular, the physically
relevant case of positive density with polynomial decay at infinity is
considered.
|
How can a collection of motile cells, each generating contractile nematic
stresses in isolation, become an extensile nematic at the tissue-level?
Understanding this seemingly contradictory experimental observation, which
occurs irrespective of whether the tissue is in the liquid or solid states, is
not only crucial to our understanding of diverse biological processes, but is
also of fundamental interest to soft matter and many-body physics. Here, we
resolve this cellular to tissue level disconnect in the small fluctuation
regime by using analytical theories based on hydrodynamic descriptions of
confluent tissues, in both liquid and solid states. Specifically, we show that
a collection of microscopic constituents with no inherently nematic extensile
forces can exhibit active extensile nematic behavior when subject to polar
fluctuating forces. We further support our findings by performing cell level
simulations of minimal models of confluent tissues.
|
Internet of Things (IoT) is transforming human lives by paving the way for
the management of physical devices on the edge. These interconnected IoT
objects share data for remote accessibility and can be vulnerable to open
attacks and illegal access. Intrusion detection methods are commonly used for
the detection of such kinds of attacks but with these methods, the
performance/accuracy is not optimal. This work introduces a novel intrusion
detection approach based on an ensemble-based voting classifier that combines
multiple traditional classifiers as base learners and takes a vote over their
predictions in order to get the final prediction.
To test the effectiveness of the proposed approach, experiments are performed
on a set of seven different IoT devices and tested for binary attack
classification and multi-class attack classification. The results illustrate
prominent accuracies of 96% and 97% on Global Positioning System (GPS) sensors
and weather sensors, compared with 85% and 87% for other machine learning
algorithms, respectively. Furthermore, comparison with other traditional machine
learning methods validates the superiority of the proposed algorithm.
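The ensemble idea described above corresponds to a standard hard-voting classifier; a minimal sketch on stand-in data (the base learners, dataset, and parameters are illustrative assumptions, not the IoT traffic used in the experiments) is:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=3000, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="hard",                                  # majority vote over base-learner predictions
).fit(X_tr, y_tr)

print(accuracy_score(y_te, ensemble.predict(X_te)))
```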
|
We combine observations from ALMA, ATCA, MUSE, and Herschel to study
gas-to-dust ratios in 15 Fornax cluster galaxies detected in the FIR/sub-mm by
Herschel and observed by ALMA as part of the ALMA Fornax Cluster Survey
(AlFoCS). The sample spans a stellar mass range of 8.3 $\leq$ log (M$_*$ /
M$_\odot$) $\leq$ 11.16, and a variety of morphological types. We use gas-phase
metallicities derived from MUSE observations (from the Fornax3D survey) to
study these ratios as a function of metallicity, and to study dust-to-metal
ratios, in a sub-sample of nine galaxies. We find that gas-to-dust ratios in
Fornax galaxies are systematically lower than those in field galaxies at fixed
stellar mass/metallicity. This implies that a relatively large fraction of the
metals in these Fornax systems is locked up in dust, which is possibly due to
altered chemical evolution as a result of the dense environment. The low ratios
are not only driven by HI deficiencies, but H$_2$-to-dust ratios are also
significantly decreased. This is different in the Virgo cluster, where low
gas-to-dust ratios inside the virial radius are driven by low HI-to-dust
ratios, while H$_2$-to-dust ratios are increased. Resolved observations of
NGC1436 show a radial increase in H$_2$-to-dust ratio, and show that low ratios
are present throughout the disc. We propose various explanations for the low
H$_2$-to-dust ratios in the Fornax cluster, including the more efficient
stripping of H$_2$ compared to dust, more efficient enrichment of dust in the
star formation process, and altered ISM physics in the cluster environment.
|
It has been established that solutions to the inviscid Proudman-Johnson
equation subject to a homogeneous three-point boundary condition can develop
singularities in finite time. In this paper, we consider the possibility of
singularity formation in solutions of the generalized, inviscid
Proudman-Johnson equation with damping subject to the same homogeneous
three-point boundary condition. In particular, we derive conditions the initial
data must satisfy in order for solutions to blow up in finite time with either
bounded or unbounded smooth damping term.
|
Finding a good query plan is key to the optimization of query runtime. This
holds in particular for cost-based federation engines, which make use of
cardinality estimations to achieve this goal. A number of studies compare
SPARQL federation engines across different performance metrics, including query
runtime, result set completeness and correctness, number of sources selected
and number of requests sent. Albeit informative, these metrics are generic and
unable to quantify and evaluate the accuracy of the cardinality estimators of
cost-based federation engines. To thoroughly evaluate cost-based federation
engines, the effect of estimated cardinality errors on the overall query
runtime performance must be measured. In this paper, we address this challenge
by presenting novel evaluation metrics targeted at a fine-grained benchmarking
of cost-based federated SPARQL query engines. We evaluate five cost-based
federated SPARQL query engines using existing as well as novel evaluation
metrics by using LargeRDFBench queries. Our results provide a detailed analysis
of the experimental outcomes that reveal novel insights, useful for the
development of future cost-based federated SPARQL query processing engines.
|
Radiomics is an active area of research in medical image analysis; however, the
low reproducibility of radiomics has limited its applicability to clinical
practice. This issue is especially prominent when radiomic features are
calculated from noisy images, such as low dose computed tomography (CT) scans.
In this article, we investigate the possibility of improving the
reproducibility of radiomic features calculated on noisy CTs by using
generative models for denoising. One traditional denoising method - non-local
means - and two generative models - encoder-decoder networks (EDN) and
conditional generative adversarial networks (CGANs) - were selected as the test
models. We added noise to the sinograms of full dose CTs to mimic low dose CTs
with two different levels of noise: low-noise CT and high-noise CT. Models were
trained on high-noise CTs and used to denoise low-noise CTs without
re-training. We also test the performance of our models on real data, using a
dataset of same-day repeat low dose CTs to assess the reproducibility of
radiomic features in denoised images. The EDN and the CGAN improved the
concordance correlation coefficients (CCC) of radiomic features for low-noise
images from 0.87 to 0.92 and for high-noise images from 0.68 to 0.92
respectively. Moreover, the EDN and the CGAN improved the test-retest
reliability of radiomic features (mean CCC increased from 0.89 to 0.94) based
on real low dose CTs. The results show that denoising using EDN and CGANs can
improve the reproducibility of radiomic features calculated on noisy CTs.
Moreover, images with different noise levels can be denoised to improve the
reproducibility using these models without re-training, as long as the noise
intensity is equal or lower than that in high-noise CTs. To the authors'
knowledge, this is the first effort to improve the reproducibility of radiomic
features calculated on low dose CT scans.
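The agreement measure used throughout, the concordance correlation coefficient, is presumably Lin's CCC; a minimal reference implementation on toy data (not the study's radiomic features) is:

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

test = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # a feature measured on the first scan
retest = np.array([1.1, 2.1, 2.9, 4.2, 4.8])     # the same feature on the repeat scan
print(round(concordance_cc(test, retest), 3))    # close to 1 indicates good reproducibility
```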
|
From any location outside the event horizon of a black hole there are an
infinite number of trajectories for light to an observer. Each of these paths
differ in the number of orbits revolved around the black hole and in their
proximity to the last photon orbit. With simple numerical and a perturbed
analytical solution to the null-geodesic equation of the Schwarzschild black
hole we will reaffirm how each additional orbit is a factor $e^{2 \pi}$ closer
to the black hole's optical edge. Consequently, the surface of the black hole
and any background light will be mirrored infinitely in exponentially thinner
slices around the last photon orbit. Furthermore, the introduced formalism
proves how the entire trajectories of light in the strong-field limit are
prescribed by a diverging and a converging exponential. Lastly, the existence
of the exponential family is generalized to the equatorial plane of the Kerr
black hole, with the exponentials' dependence on spin derived, thereby proving
that the distance between subsequent images increases for retrograde images and
decreases for prograde images. In the limit of an extremely
rotating Kerr black hole no logarithmic divergence exists for prograde
trajectories.
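For orientation, the standard strong-deflection-limit relations consistent with the statement above (quoted here in units $G=c=1$, not as a substitute for the paper's derivation) are: near the critical impact parameter $b_c = 3\sqrt{3}\,M$ of the Schwarzschild photon sphere the bending angle behaves as $\alpha(b) \simeq -\ln\left(b/b_c - 1\right) + \mathrm{const}$, so demanding one additional winding, $\alpha \to \alpha + 2\pi$, gives $b_{n+1} - b_c \simeq e^{-2\pi}\,(b_n - b_c)$; that is, each successive image lies a factor $e^{2\pi}$ closer to the photon ring.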
|
We provide a unified, comprehensive treatment of all operators that
contribute to the anti-ferromagnetic, ferromagnetic, and charge-density-wave
structure factors and order parameters of the hexagonal Hubbard Model. We use
the Hybrid Monte Carlo algorithm to perform a systematic, carefully controlled
analysis of the temporal Trotter error and of the thermodynamic limit. We
expect our findings to improve the consistency of Monte Carlo determinations of
critical exponents. We perform a data collapse analysis and determine the
critical exponent $\beta=0.898(37)$ for the semimetal-Mott insulator transition
in the hexagonal Hubbard Model. Our methods are applicable to a wide range of
lattice theories of strongly correlated electrons.
|
MixUp is a computer vision data augmentation technique that uses convex
interpolations of input data and their labels to enhance model generalization
during training. However, the application of MixUp to the natural language
understanding (NLU) domain has been limited, due to the difficulty of
interpolating text directly in the input space. In this study, we propose MixUp
methods at the Input, Manifold, and sentence embedding levels for the
transformer architecture, and apply them to finetune the BERT model for a
diverse set of NLU tasks. We find that MixUp can improve model performance, as
well as reduce test loss and model calibration error by up to 50%.
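The interpolation itself is the standard MixUp rule; a minimal sketch follows (the embedding dimension and Beta parameter are illustrative, and the paper applies the rule at the input, Manifold, or sentence-embedding level rather than to raw text):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4):
    """Convex combination of two examples and their (one-hot) labels."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, x2 = np.random.randn(768), np.random.randn(768)   # e.g. two sentence embeddings
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # one-hot labels
x_mix, y_mix = mixup(x1, y1, x2, y2)
```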
|
We enrich the setting of strongly stable ideals (SSI): We introduce shift
modules, a module category encompassing SSI's. The recently introduced duality
on SSI's is given an effective conceptual and computational setting. We study
strongly stable ideals in infinite dimensional polynomial rings, where the
duality is most natural. Finally a new type of resolution for SSI's is
introduced. This is the projective resolution in the category of shift modules.
|
The states of two electrons in tunnel-coupled semiconductor quantum dots can
be effectively described in terms of a two-spin Hamiltonian with an isotropic
Heisenberg interaction. A similar description needs to be generalized in the
case of holes due to their multiband character and spin-orbit coupling, which
mixes orbital and spin degrees of freedom, and splits $J=3/2$ and $J = 1/2$
multiplets. Here we investigate two-hole states in prototypical coupled Si and
Ge quantum dots via different theoretical approaches. Multiband
$\boldsymbol{k}\cdot\boldsymbol{p}$ and Configuration-Interaction calculations
are combined with entanglement measures in order to thoroughly characterize the
two-hole states in terms of band mixing and justify the introduction of an
effective spin representation, which we analytically derive from a generalized
Hubbard model. We find that, in the weak interdot regime, the ground state and
first excited multiplet of the two-hole system display -- unlike their
electronic counterparts -- a high degree of $J$-mixing, even in the limit of
purely heavy-hole states. The light-hole component additionally induces
$M$-mixing and a weak coupling between spinors characterized by different
permutational symmetries.
|
CT image quality is heavily reliant on radiation dose, which causes a
trade-off between radiation dose and image quality that affects the subsequent
image-based diagnostic performance. However, high radiation can be harmful to
both patients and operators. Several (deep learning-based) approaches have been
attempted to denoise low dose images. However, those approaches require access
to large training sets, specifically the full dose CT images for reference,
which can often be difficult to obtain. Self-supervised learning is an emerging
alternative for lowering the reference data requirement facilitating
unsupervised learning. Currently available self-supervised CT denoising works
either depend on a foreign domain or rely on pretext tasks that are not very task-relevant.
To tackle the aforementioned challenges, we propose a novel self-supervised
learning approach, namely Self-Supervised Window-Leveling for Image DeNoising
(SSWL-IDN), leveraging an innovative, task-relevant, simple, yet effective
surrogate -- prediction of the window-leveled equivalent. SSWL-IDN leverages
residual learning and a hybrid loss combining perceptual loss and MSE, all
incorporated in a VAE framework. Our extensive (in- and cross-domain)
experimentation demonstrates the effectiveness of SSWL-IDN in aggressive
denoising of CT (abdomen and chest) images acquired at 5\% dose level only.
|
The wild McKay correspondence, a variant of the McKay correspondence in
positive characteristics, shows that stringy motives of quotient varieties
equal some motivic integrals on the moduli space of the Galois covers of a
formal disk. In this paper, we determine when the integrals converge for the
case of cyclic groups of prime power order. As an application, we give a
criterion for the quotient variety being canonical or log canonical.
|
This paper presents a Matlab toolbox to perform basic image processing and
visualization tasks, particularly designed for medical image processing. The
functionalities available are similar to basic functions found in other
non-Matlab widely used libraries such as the Insight Toolkit (ITK). The toolbox
is entirely written in native Matlab code, but is fast and flexible.
Main use cases for the toolbox are illustrated here, including image
input/output, pre-processing, filtering, image registration and visualisation.
Both the code and sample data are made publicly available and open source.
|
Sound event detection (SED) is a hot topic in consumer and smart city
applications. Existing approaches based on Deep Neural Networks are very
effective, but highly demanding in terms of memory, power, and throughput when
targeting ultra-low power always-on devices.
Latency, availability, cost, and privacy requirements are pushing recent IoT
systems to process the data on the node, close to the sensor, with a very
limited energy supply, and tight constraints on the memory size and processing
capabilities that preclude running state-of-the-art DNNs.
In this paper, we explore the combination of extreme quantization to a
small-footprint binary neural network (BNN) with the highly energy-efficient,
RISC-V-based (8+1)-core GAP8 microcontroller. Starting from an existing CNN for
SED whose footprint (815 kB) exceeds the 512 kB of memory available on our
platform, we retrain the network using binary filters and activations to match
these memory constraints. (Fully) binary neural networks come with a natural
drop in accuracy of 12-18% on the challenging ImageNet object recognition
challenge compared to their equivalent full-precision baselines. This BNN
reaches a 77.9% accuracy, just 7% lower than the full-precision version, with
58 kB (7.2 times less) for the weights and 262 kB (2.4 times less) memory in
total. With our BNN implementation, we reach a peak throughput of 4.6 GMAC/s
and 1.5 GMAC/s over the full network, including preprocessing with Mel bins,
which corresponds to an efficiency of 67.1 GMAC/s/W and 31.3 GMAC/s/W,
respectively. Compared to the performance of an ARM Cortex-M4 implementation,
our system has a 10.3 times faster execution time and a 51.1 times higher
energy-efficiency.
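The core trick behind training with binary filters and activations is sign binarization with a straight-through estimator in the backward pass; a minimal sketch of that generic technique (not the GAP8-specific training recipe or kernels used above) is:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: sign(x) in {-1, 0, +1}; backward: hard-tanh straight-through estimator."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()   # pass gradients only where |x| <= 1

w = torch.randn(16, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)                                      # 1 where |w| <= 1, else 0
```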
|
We prove a quantitative $h$-principle statement for subcritical isotropic
embeddings. As an application, we construct a symplectic homeomorphism that
takes a symplectic disc into an isotropic one in dimension at least $6$.
|
We construct a family of functions suitable for establishing lower bounds on
the oracle complexity of first-order minimization of smooth strongly-convex
functions. Based on this construction, we derive new lower bounds on the
complexity of strongly-convex minimization under various inaccuracy criteria.
The new bounds match the known upper bounds up to a constant factor, and when
the inaccuracy of a solution is measured by its distance to the solution set,
the new lower bound exactly matches the upper bound obtained by the recent
Information-Theoretic Exact Method by the same authors, thereby establishing
the exact oracle complexity for this class of problems.
|
A large class of two dimensional quantum gravity theories of
Jackiw-Teitelboim form have a description in terms of random matrix models.
Such models, treated fully non-perturbatively, can give an explicit and
tractable description of the underlying ``microstate'' degrees of freedom. They
play a prominent role in regimes where the smooth geometrical picture of the
physics is inadequate. This is shown using a natural tool for extracting the
detailed microstate physics, a Fredholm determinant ${\rm
det}(\mathbf{1}{-}\mathbf{ K})$. Its associated kernel $K(E,E^\prime)$ can be
defined explicitly for a wide variety of JT gravity theories. To illustrate the
methods, the statistics of the first several energy levels of a
non-perturbative definition of JT gravity are constructed explicitly using
numerical methods, and the full quenched free energy $F_Q(T)$ of the system is
computed for the first time. These results are also of relevance to quantum
properties of black holes in higher dimensions.
|
Supermassive black hole binaries (SMBHBs) should form frequently in galactic
nuclei as a result of galaxy mergers. At sub-parsec separations, binaries
become strong sources of low-frequency gravitational waves (GWs), targeted by
Pulsar Timing Arrays (PTAs). We used recent upper limits on continuous GWs from
the North American Nanohertz Observatory for Gravitational Waves (NANOGrav)
11yr dataset to place constraints on putative SMBHBs in nearby massive
galaxies. We compiled a comprehensive catalog of ~44,000 galaxies in the local
universe (up to redshift ~0.05) and populated them with hypothetical binaries,
assuming that the total mass of the binary is equal to the SMBH mass derived
from global scaling relations. Assuming circular equal-mass binaries emitting
at NANOGrav's most sensitive frequency of 8nHz, we found that 216 galaxies are
within NANOGrav's sensitivity volume. We ranked the potential SMBHBs based on
GW detectability by calculating the total signal-to-noise ratio (S/N) such
binaries would induce within the NANOGrav array. We placed constraints on the
chirp mass and mass ratio of the 216 hypothetical binaries. For 19 galaxies,
only very unequal-mass binaries are allowed, with the mass of the secondary
less than 10 percent that of the primary, roughly comparable to constraints on
a SMBHB in the Milky Way. Additionally, we were able to exclude binaries
delivered by major mergers (mass ratio of at least 1/4) for several of these
galaxies. We also derived the first limit on the density of binaries delivered
by major mergers purely based on GW data.
|
We characterize the monodromies of projective structures with fuchsian-type
singularities. Namely, any representation from the fundamental group of a
Riemann surface of finite type into $PSL_2(\mathbb{C})$ can be realized as the
holonomy of a branched projective structure with fuchsian-type singularities over
the cusps. We make a geometric/topological study of all local conical
projective structures whose Schwarzian derivative admits a simple pole at the
cusp. Finally, we explore isomonodromic deformations of such projective
structures and the problem of minimizing angles.
|
The cytoskeleton is a model active matter system that controls diverse
cellular processes from division to motility. While both active actomyosin
dynamics and actin-microtubule interactions are key to the cytoskeleton's
versatility and adaptability, an understanding of their interplay is lacking.
Here, we couple microscale experiments with mechanistic modeling to elucidate
how connectivity, rigidity, and force-generation affect emergent material
properties in in vitro composites of actin, tubulin, and myosin. We use
time-resolved differential dynamic microscopy and spatial image autocorrelation
to show that ballistic contraction occurs in composites with sufficient
flexibility and motor density, but that a critical fraction of microtubules is
necessary to sustain controlled dynamics. Our active double-network models
reveal that percolated actomyosin networks are essential for contraction, but
that networks with comparable actin and microtubule densities can uniquely
resist mechanical stresses while simultaneously supporting substantial
restructuring. Our findings provide a much-needed blueprint for designing
cytoskeleton-inspired materials that couple tunability with resilience and
adaptability.
|
This paper describes the submission of the NiuTrans end-to-end speech
translation system for the IWSLT 2021 offline task, which translates from the
English audio to German text directly without intermediate transcription. We
use the Transformer-based model architecture and enhance it by Conformer,
relative position encoding, and stacked acoustic and textual encoding. To
augment the training data, the English transcriptions are translated into
German. Finally, we employ ensemble decoding to integrate the predictions
from several models trained with the different datasets. Combining these
techniques, we achieve 33.84 BLEU points on the MuST-C En-De test set, which
shows the enormous potential of the end-to-end model.
|
Video-based person re-identification aims to match pedestrians from video
sequences across non-overlapping camera views. The key factor for video person
re-identification is to effectively exploit both spatial and temporal clues
from video sequences. In this work, we propose a novel Spatial-Temporal
Correlation and Topology Learning framework (CTL) to pursue discriminative and
robust representation by modeling cross-scale spatial-temporal correlation.
Specifically, CTL utilizes a CNN backbone and a key-points estimator to extract
semantic local features from human body at multiple granularities as graph
nodes. It explores a context-reinforced topology to construct multi-scale
graphs by considering both global contextual information and physical
connections of human body. Moreover, a 3D graph convolution and a cross-scale
graph convolution are designed, which facilitate direct cross-spacetime and
cross-scale information propagation for capturing hierarchical spatial-temporal
dependencies and structural information. By jointly performing the two
convolutions, CTL effectively mines comprehensive clues that are complementary
with appearance information to enhance representational capacity. Extensive
experiments on two video benchmarks have demonstrated the effectiveness of the
proposed method and the state-of-the-art performance.
|
For fixed graphs $F$ and $H$, the generalized Tur\'an problem asks for the
maximum number $ex(n,H,F)$ of copies of $H$ that an $n$-vertex $F$-free graph
can have. In this paper, we focus on cases with $F$ being $B_{r,s}$, the graph
consisting of two cliques of size $s$ sharing $r$ common vertices. We determine
$ex(n,K_t,B_{r,0})$, $ex(n,K_{a,b},B_{3,1})$ for any values of $a,b,r,t$ if $n$
is large enough and $ex(n,K_{r+t},B_{r,s})$ if $2s+t+1<r$ and $n$ is large
enough.
|