We consider the problem of binomiality of the steady state ideals of
biochemical reaction networks. We are interested in finding polynomial
conditions on the parameters such that the steady state ideal of a chemical
reaction network is binomial under every specialisation of the parameters
satisfying those conditions. We approach the binomiality problem
using Comprehensive Gr\"obner systems. Considering rate constants as
parameters, we compute comprehensive Gr\"obner systems for various reactions.
In particular, we carry out automated computations on n-site phosphorylation
networks and on models from the BioModels repository, using the grobcov
library of the computer algebra system Singular.
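A minimal sketch of the object in question, under stated assumptions: sympy's groebner (unlike the grobcov library used in the paper) computes a single Gröbner basis with the rate constants carried as symbolic coefficients rather than a comprehensive Gröbner system, but it suffices to illustrate a binomiality check on a toy steady state ideal. The network A + B <-> C is a hypothetical example, not one of the paper's models.

```python
# Sketch only: one parametric Groebner basis via sympy, not the
# comprehensive Groebner system computed with grobcov/Singular in the paper.
from sympy import symbols, groebner

k1, k2 = symbols("k1 k2", positive=True)  # rate constants as parameters
xA, xB, xC = symbols("xA xB xC")          # species concentrations

# Toy network A + B <-> C: steady state dxC/dt = k1*xA*xB - k2*xC = 0.
G = groebner([k1 * xA * xB - k2 * xC], xA, xB, xC, order="lex")

# The ideal is binomial if every basis element has at most two terms.
print(all(len(g.as_ordered_terms()) <= 2 for g in G.exprs))
```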
|
An important component of unsupervised learning by instance-based
discrimination is a memory bank for storing a feature representation for each
training sample in the dataset. In this paper, we introduce three improvements
to the vanilla memory bank-based formulation that bring massive accuracy gains:
(a) Large mini-batch: we include multiple augmentations of each sample in the
same batch and show that this leads to better models and enhanced memory bank
updates. (b) Consistency: we encourage the logits obtained from different
augmentations of the same sample to be close without trying to enforce
discrimination with respect to negative samples as proposed by previous
approaches. (c) Hard negative mining: since instance discrimination is not
meaningful for samples that are too visually similar, we devise a novel nearest
neighbour approach for improving the memory bank that gradually merges
extremely similar data samples that were previously forced to be apart by the
instance level classification loss. Overall, our approach greatly improves the
vanilla memory-bank based instance discrimination and outperforms all existing
methods for both seen and unseen testing categories with cosine similarity.
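A minimal sketch of the first two ingredients as we read them; the function names, the momentum value, and the use of a mean-squared consistency term are illustrative assumptions, not the paper's exact formulation (the hard-negative merging of (c) is omitted).

```python
import torch
import torch.nn.functional as F

def multi_aug_batch(images, augment, n_aug=4):
    # (a) large mini-batch: stack several augmentations of every sample
    return torch.cat([augment(images) for _ in range(n_aug)], dim=0)

def consistency_loss(logits_per_aug):
    # (b) logits of different augmentations of one sample should agree;
    # no discrimination against negative samples is enforced here
    mean = logits_per_aug.mean(dim=0, keepdim=True).detach()
    return F.mse_loss(logits_per_aug, mean.expand_as(logits_per_aug))

@torch.no_grad()
def update_memory_bank(bank, idx, feats, momentum=0.5):
    # momentum update of each sample's memory slot, then re-normalisation
    bank[idx] = F.normalize(momentum * bank[idx] + (1 - momentum) * feats, dim=-1)
```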
|
Conformal quantum mechanics has been proposed to be the CFT$_1$ dual to
AdS$_2$. The $N$-point correlation functions that satisfy the conformal
constraints have been constructed from a non-conformal vacuum and the insertion
of a non-primary operator. The main goal of this paper is to find an
interpretation of this oddness. For this purpose, we study possible
gravitational dual models
and propose a two-dimensional dilaton gravity with a massless fermion for the
description of conformal quantum mechanics. We find a universal correspondence
between states in the conformal quantum mechanics model and two-dimensional
spacetimes. Moreover, the solutions of the Dirac equation can be interpreted as
zero modes of a Floquet-Dirac system. Within this system, the oddness of the
non-conformal vacuum and non-primary operator is elucidated. As a possible
application, we interpret the gauge symmetries of the Floquet-Dirac system as
the corresponding infinite symmetries of the Schr\"odinger equation which are
conjectured to be related to higher spin symmetries.
|
We investigate the necessary conditions for the two spacetimes, which are
solutions to the Einstein field equations with an anisotropic matter source, to
be related to each other by means of a conformal transformation. As a result,
we find that if one of these spacetimes is a generalization of the
Robertson-Walker solution with vanishing acceleration and vorticity, then the
other one has to be in this class as well, i.e. the conformal factor will be a
constant function on the hypersurfaces orthogonal to the fluid flow lines. The
evolution equation for this function appears naturally as a direct consequence
of the conformal transformation of the Ricci tensor.
|
A caveat to many applications of the current Deep Learning approach is the
need for large-scale data. One improvement suggested by Kolmogorov Complexity
results is to apply the minimum description length principle with
computationally universal models. We study the potential gains in sample
efficiency that this approach can bring in principle. We use polynomial-time
Turing machines to represent computationally universal models and Boolean
circuits to represent Artificial Neural Networks (ANNs) acting on
finite-precision digits.
Our analysis reveals direct links between our question and Computational
Complexity results. We provide lower and upper bounds on the potential gains in
sample efficiency obtained by applying MDL with Turing machines instead of
ANNs. Our bounds depend on the bit-size of the input of the Boolean function to
be learned. Furthermore, we highlight close relationships between classical
open problems in Circuit Complexity and the tightness of these bounds.
|
Scrum, the most popular agile method and project management framework, is
widely reported to be used, adapted, misused, and abused in practice. However,
not much is known about how Scrum actually works in practice, and critically,
where, when, how and why it diverges from Scrum by the book. Through a Grounded
Theory study involving semi-structured interviews of 45 participants from 30
companies and observations of five teams, we present our findings on how Scrum
works in practice as compared to how it is presented in its formative books. We
identify significant variations in these practices such as work breakdown,
estimation, prioritization, assignment, the associated roles and artefacts, and
discuss the underlying rationales driving the variations. Critically, we claim
that not all variations are process misuse/abuse and propose a nuanced
classification approach to understanding variations as standard, necessary,
contextual, and clear deviations for successful Scrum use and adaptation.
|
Axion-like particles (ALPs) are ubiquitous in models of new physics
explaining some of the most pressing puzzles of the Standard Model. However,
until relatively recently, little attention has been paid to their interplay
with flavour. In this work, we study in detail the phenomenology of ALPs that
exclusively interact with up-type quarks at tree level, which arise in some
well-motivated ultraviolet completions such as QCD-like dark sectors or
Froggatt-Nielsen type models of flavour. Our study is performed in the
low-energy effective theory to highlight the key features of these scenarios in
a model-independent way. We derive all the existing constraints on these models
and demonstrate how upcoming experiments at fixed-target facilities and the LHC
can probe a vast region of the parameter space, which is currently not excluded
by cosmological and astrophysical bounds. We also emphasize how a future
measurement of the currently unavailable meson decay $D \to \pi +
\rm{invisible}$ could complement these upcoming searches and help to probe a
large unexplored region of their parameter space.
|
We construct analytical and regular solutions in four-dimensional General
Relativity which represent multi-black hole systems immersed in external
gravitational field configurations. The external field background is composed
of an infinite multipolar expansion, which allows one to regularise the conical
singularities of an array of collinear static black holes. A stationary
rotating generalisation is achieved by adding independent angular momenta and
NUT parameters to each source of the binary configuration. Moreover, a charged
extension of the binary black hole system at equilibrium is generated. Finally,
we show that the binary Majumdar-Papapetrou solution is consistently recovered
in the vanishing external field limit. All of these solutions reach an
equilibrium state due to the external gravitational field only, avoiding in
this way the presence of any string or strut defect.
|
The problem of covariance of physical quantities has not been solved
fundamentally in the theory of relativity, which has caused a lot of confusion
in the community; a typical example is the Gordon metric tensor, which was
developed almost a century ago, and has been widely used to describe the
equivalent gravitational effect of moving media on light propagation,
predicting the novel physics of optical black holes. In this paper, it is shown
that under Lorentz transformation, a covariant tensor satisfies three rules:
(1) the tensor keeps invariant in mathematical form in all inertial frames; (2)
all elements of the tensor have the same physical definitions in all frames;
(3) the tensor expression in one inertial frame does not include any physical
quantities defined in other frames. The three rules constitute a criterion for
testing the covariance of physical laws, required by Einstein's principle of
relativity. Gordon metric does not satisfy Rule (3), and its covariance cannot
be identified before a general refractive index is defined. Finally, it is also
shown that in relativistic quantum mechanics, the Lorentz covariance of the
Dirac wave equation is not compatible with Einstein's mass-energy equivalence.
|
In this work, we present a coupled 3D-1D model of solid tumor growth within a
dynamically changing vascular network to facilitate realistic simulations of
angiogenesis. Additionally, the model includes erosion of the extracellular
matrix, interstitial flow, and coupled flow in blood vessels and tissue. We
employ continuum mixture theory with stochastic Cahn--Hilliard type phase-field
models of tumor growth. The interstitial flow is governed by a mesoscale
version of Darcy's law. The flow in the blood vessels is modelled as
Poiseuille flow, and Starling's law is applied to model the mass transfer in
and out of blood vessels. The evolution of the network of blood vessels is
orchestrated by the concentration of the tumor angiogenesis factors (TAFs);
blood vessels grow towards increasing TAF concentrations. This process is not
deterministic: blood vessels grow randomly and, through the coupling of
nutrients in tissue and vessels, this randomness makes the growth of tumors
stochastic. We demonstrate the performance of the model by applying it to a
variety of scenarios. Numerical experiments illustrate the flexibility of the
model and its ability to generate satellite tumors. Simulations of the effects
of angiogenesis on tumor growth are presented as well as sample-independent
features of cancer.
|
The behavior of hot carriers in metal-halide perovskites (MHPs) presents a
valuable foundation for understanding the details of carrier-phonon coupling in
these materials as well as for the prospective development of highly efficient
hot carrier and carrier multiplication solar cells. Whilst the carrier
population dynamics during cooling have been intensely studied, the evolution
of the hot carrier properties, namely the hot carrier mobility, remains largely
unexplored. To address this, we introduce a novel ultrafast visible-pump,
infrared-push, terahertz-probe spectroscopy (PPP-THz) to monitor the real-time
conductivity
dynamics of cooling carriers in methylammonium lead iodide. We find a decrease
in mobility upon optically depositing energy into the carriers, which is
typical of band-transport. Surprisingly, the conductivity recovery dynamics are
incommensurate with the intraband relaxation measured by an analogous
experiment with an infrared probe (PPP-IR), and exhibit a negligible
dependence on the density of hot carriers. These results and the kinetic
modelling reveal the importance of highly-localized lattice heating on the
mobility of the hot electronic states. This collective polaron-lattice
phenomenon may contribute to the unusual photophysics observed in MHPs and
should be accounted for in devices that utilize hot carriers.
|
Balanced weighing matrices with parameters $$
\left(1+18\cdot\frac{9^{m+1}-1}{8},9^{m+1},4\cdot 9^m\right), $$ are
constructed for each nonzero integer $m$. This is the first infinite class not
belonging to those with classical parameters. It is shown that any balanced
weighing matrix is equivalent to a five-class association scheme.
|
Evaluating image generation models such as generative adversarial networks
(GANs) is a challenging problem. A common approach is to compare the
distributions of the set of ground truth images and the set of generated test
images. The Fr\'echet Inception Distance is one of the most widely used metrics
for the evaluation of GANs; it assumes that the features from a trained
Inception model for a set of images follow a normal distribution. In this
paper, we argue that this is an over-simplified assumption, which may lead to
unreliable evaluation results, and more accurate density estimation can be
achieved using a truncated generalized normal distribution. Based on this, we
propose a novel metric for accurate evaluation of GANs, named TREND (TRuncated
gEneralized Normal Density estimation of inception embeddings). We demonstrate
that our approach significantly reduces errors of density estimation, which
consequently eliminates the risk of faulty evaluation results. Furthermore, we
show that the proposed metric significantly improves robustness of evaluation
results against variation of the number of image samples.
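A sketch of the core statistical idea under stated assumptions: fit a generalized normal instead of assuming Gaussianity for each embedding dimension. scipy's gennorm is the untruncated family, so the truncation step of TREND is omitted, and the synthetic data below merely stands in for Inception embeddings.

```python
import numpy as np
from scipy.stats import gennorm, norm

rng = np.random.default_rng(0)
feat = rng.standard_normal(10_000) ** 3   # heavy-tailed stand-in for one embedding dim

beta, loc, scale = gennorm.fit(feat)      # beta = 2 would recover the Gaussian case
print("gennorm log-likelihood:", gennorm.logpdf(feat, beta, loc, scale).sum())
print("normal  log-likelihood:", norm.logpdf(feat, *norm.fit(feat)).sum())
```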
|
Unconstrained handwriting recognition is an essential task in document
analysis. It is usually carried out in two steps. First, the document is
segmented into text lines. Second, an Optical Character Recognition model is
applied to these line images. We propose the Simple Predict & Align Network: an
end-to-end recurrence-free Fully Convolutional Network performing OCR at
paragraph level without any prior segmentation stage. The framework is as
simple as the one used for the recognition of isolated lines and we achieve
competitive results on three popular datasets: RIMES, IAM and READ 2016. The
proposed model does not require any dataset adaptation, it can be trained from
scratch, without segmentation labels, and it does not require line breaks in
the transcription labels. Our code and trained model weights are available at
https://github.com/FactoDeepLearning/SPAN.
|
Determining the maximum size of a $t$-intersecting code in $[m]^n$ was a
longstanding open problem of Frankl and F\"uredi, solved independently by
Ahlswede and Khachatrian and by Frankl and Tokushige. We extend their result to
the setting of forbidden intersections, by showing that for any $m>2$ and $n$
large compared with $t$ (but not necessarily with $m$), the same bound holds for
codes with the weaker property of being $(t-1)$-avoiding, i.e.\ having no two
vectors that agree on exactly $t-1$ coordinates. Our proof proceeds via a junta
approximation result of independent interest, which we prove via a development
of our recent theory of global hypercontractivity: we show that any
$(t-1)$-avoiding code is approximately contained in a $t$-intersecting junta (a
code where membership is determined by a constant number of coordinates). In
particular, when $t=1$ this gives an alternative proof of a recent result of
Eberhard, Kahn, Narayanan and Spirkl that symmetric intersecting codes in
$[m]^n$ have size $o(m^n)$.
|
This paper is directed to the financial community and focuses on the
financial risks associated with climate change. It, specifically, addresses the
estimate of climate risk embedded within a bank loan portfolio. During the 21st
century, man-made carbon dioxide emissions in the atmosphere will raise global
temperatures, resulting in severe and unpredictable physical damage across the
globe. Another uncertainty associated with climate, known as the energy
transition risk, comes from the unpredictable pace of political and legal
actions to limit its impact. The Climate Extended Risk Model (CERM) adapts
well-known credit risk models. It proposes a method to calculate incremental
credit losses on a loan portfolio that are rooted in physical and transition
risks. The document provides a detailed description of the model's hypotheses
and steps.
This work was initiated by the association Green RWA (Risk Weighted Assets). It
was written in collaboration with Jean-Baptiste Gaudemet, Anne Gruz, and
Olivier Vinciguerra ([email protected]), who contributed their financial and
risk expertise, taking care of its application to a pilot-portfolio. It extends
the model proposed in a first white paper published by Green RWA
(https://www.greenrwa.org/).
|
Recent studies have suggested that low-energy cosmic rays (CRs) may be
accelerated inside molecular clouds by the shocks associated with star
formation. We use a Monte Carlo transport code to model the propagation of CRs
accelerated by protostellar accretion shocks through protostellar cores. We
calculate the CR attenuation and energy losses and compute the resulting flux
and ionization rate as a function of both radial distance from the protostar
and angular position. We show that protostellar cores have non-uniform CR
fluxes that produce a broad range of CR ionization rates, with the maximum
value being up to two orders of magnitude higher than the radial average at a
given distance. In particular, the CR flux is focused in the direction of the
outflow cavity, creating a 'flashlight' effect and allowing CRs to leak out of
the core. The radially averaged ionization rates are less than the measured
value for the Milky Way of $\zeta \approx 10^{-16} \rm s^{-1}$; however, within
$r \approx 0.03$ pc from the protostar, the maximum ionization rates exceed
this value. We show that variation in the protostellar parameters, particularly
in the accretion rate, may produce ionization rates that are a couple of orders
of magnitude higher or lower than our fiducial values. Finally, we use a
statistical method to model unresolved sub-grid magnetic turbulence in the
core. We show that turbulence modifies the CR spectrum and increases the
uniformity of the CR distribution but does not significantly affect the
resulting ionization rates.
|
We analyze the collision of three identical spin-polarized fermions at zero
collision energy, assuming arbitrary finite-range potentials, and define the
corresponding three-body scattering hypervolume $D_F$. The scattering
hypervolume $D$ was first defined for identical bosons in 2008 by one of us. It
is the three-body analog of the two-body scattering length. We solve the
three-body Schr\"{o}dinger equation asymptotically when the three fermions are
far apart or one pair and the third fermion are far apart, deriving two
asymptotic expansions of the wave function. Unlike the case of bosons for which
$D$ has the dimension of length to the fourth power, here the $D_F$ we define
has the dimension of length to the eighth power. We then analyze the
interaction energy of three such fermions with momenta $\hbar\mathbf{k}_1$,
$\hbar\mathbf{k}_2$ and $\hbar\mathbf{k}_3$ in a large periodic cubic box. The
energy shift due to $D_F$ is proportional to $D_F/\Omega^2$, where $\Omega$ is
the volume of the box. We also calculate the shifts of energy and pressure of
spin-polarized Fermi gases due to a nonzero $D_F$ and the three-body
recombination rate of spin-polarized ultracold atomic Fermi gases at finite
temperatures.
|
We present a dual-pathway approach for recognizing fine-grained interactions
from videos. We build on the success of prior dual-stream approaches, but make
a distinction between the static and dynamic representations of objects and
their interactions explicit by introducing separate motion and object detection
pathways. Then, using our new Motion-Guided Attention Fusion module, we fuse
the bottom-up features in the motion pathway with features captured from object
detections to learn the temporal aspects of an action. We show that our
approach can generalize across appearance effectively and recognize actions
where an actor interacts with previously unseen objects. We validate our
approach using the compositional action recognition task from the
Something-Something-v2 dataset where we outperform existing state-of-the-art
methods. We also show that our method can generalize well to real-world tasks
by showing state-of-the-art performance on recognizing humans assembling
various IKEA furniture on the IKEA-ASM dataset.
|
We introduce a new type of inversion-free feedforward hysteresis control with
the Preisach operator. The feedforward control has a high-gain integral loop
structure with the Preisach operator in negative feedback. This yields a
dynamic quantity that corresponds to the inverse hysteresis output, since the
loop error tends towards zero for a sufficiently high feedback gain. By
analyzing the loop sensitivity function with hysteresis,
which acts as a non-constant phase lag, we show the achievable bandwidth and
accuracy of the proposed control. A remarkable fact is that the control
bandwidth is theoretically infinite, provided the integral feedback loop with
the Preisach operator can be implemented with a smooth hysteresis output.
Numerical control
examples with the Preisach hysteresis model in differential form are shown and
discussed in detail.
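A minimal simulation sketch of the loop structure described above, with a single play (backlash) operator standing in for the full Preisach operator; the gain, half-width, and step size are illustrative assumptions.

```python
import numpy as np

def play(u, w_prev, half_width=0.1):
    # elementary play/backlash hysteresis (a stand-in for the Preisach operator)
    return min(max(w_prev, u - half_width), u + half_width)

def inversion_free_feedforward(ref, K=200.0, dt=1e-3):
    u, w, out = 0.0, 0.0, []
    for r in ref:
        w = play(u, w)            # hysteresis output in the negative feedback path
        u += dt * K * (r - w)     # high-gain integration of the loop error
        out.append(u)             # u approximates the inverse-hysteresis input
    return np.array(out)

u = inversion_free_feedforward(np.sin(np.linspace(0.0, 2.0 * np.pi, 2000)))
```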
|
One of the most suitable methods for modeling fully dynamic earthquake cycle
simulations is the spectral boundary integral element method (sBIEM), which
takes advantage of the fast Fourier transform (FFT) to make a complex numerical
dynamic rupture tractable. However, this method has the serious drawback of
requiring a flat fault geometry due to the FFT approach. Here we present an
analytical formulation that extends the sBIEM to a mildly non-planar fault. We
start from a regularized boundary element method and apply a small-slope
approximation of the fault geometry. Making this assumption, it is possible to
show that the main effect of non-planar fault geometry is to change the normal
traction along the fault, which is controlled by the local curvature along the
fault. We then convert this space--time boundary integral equation of the
normal traction into a spectral-time formulation and incorporate this change in
normal traction into the existing sBIEM methodology. This approach allows us to
model fully dynamic seismic cycle simulations on non-planar faults in a
particularly efficient way. We then test this method against a regular boundary
integral element method for both rough-fault and seamount fault geometries, and
demonstrate that this sBIEM maintains the scaling between the fault geometry
and slip distribution.
|
Field-effect transistors made of wide-bandgap semiconductors can operate at
high voltages, temperatures and frequencies with low energy losses, and have
been of increasing importance in power and high-frequency electronics. However,
the poor performance of p-channel transistors compared with that of n-channel
transistors has constrained the production of energy-efficient complementary
circuits with integrated n- and p-channel transistors. The p-type surface
conductivity of hydrogen-terminated diamond offers great potential for solving
this problem, but surface transfer doping, which is commonly believed to be
essential for generating the conductivity, limits the performance of
transistors made of hydrogen-terminated diamond because it requires the
presence of ionized surface acceptors, which cause hole scattering. Here, we
report on fabrication of a p-channel wide-bandgap heterojunction field-effect
transistor consisting of a hydrogen-terminated diamond channel and hexagonal
boron nitride ($h$-BN) gate insulator, without relying on surface transfer
doping. Despite its reduced density of surface acceptors, the transistor has
the lowest sheet resistance ($1.4$ k$\Omega$) and largest on-current ($1600$
mA mm$^{-1}$) among p-channel wide-bandgap transistors, owing to the
highest hole mobility (room-temperature Hall mobility: $680$
cm$^2$V$^{-1}$s$^{-1}$). Importantly, the transistor also shows normally-off
behavior, with a high on/off ratio exceeding $10^8$. These characteristics are
suited for low-loss switching and can be explained on the basis of standard
transport and transistor models. This new approach to making diamond
transistors paves the way to future wide-bandgap semiconductor electronics.
|
A plasmon is a collective excitation of electrons due to the Coulomb
interaction. Both plasmons and single-particle excitations (SPEs) are
eigenstates of bulk metallic systems and they are orthogonal to each other.
However, in non-translationally symmetric systems such as nanostructures,
plasmons and SPEs coherently interact. It has been well discussed that the
plasmons and SPEs can each couple with the transverse (T) electric field in
such systems, and also that they are coupled with each other via the
longitudinal (L) field. However, there has been a missing link in the previous
studies: the coherent coupling between the plasmons and SPEs mediated by the T
field. Herein, we develop a theoretical framework to describe the
self-consistent relationship between plasmons and SPEs through both the L and T
fields. The excitations are described in terms of the charge and current
densities in a constitutive equation with a nonlocal susceptibility, where the
densities include the L and T components. The electromagnetic fields
originating from the densities are described in terms of the Green's function
in the Maxwell equations. The T field is generated from both densities, whereas
the L component is attributed to the charge density only. We introduce a
four-vector representation incorporating the vector and scalar potentials in
the Coulomb gauge, in which the T and L fields are separated explicitly. The
eigenvalues of the matrix for the self-consistent equations appear as the poles
of the system excitations. The developed formulation makes it possible to
explore unknown mechanisms for enhancement of the coherent coupling between
plasmons
and the hot carriers generated by radiative fields.
|
Lin and Wang defined a model of random walks on knot diagrams and interpreted
the Alexander polynomials and the colored Jones polynomials as Ihara zeta
functions, i.e. zeta functions defined by counting cycles on the knot diagram.
Using this explanation, they gave a more conceptual proof for the Melvin-Morton
conjecture. In this paper, we give an analogous zeta function expression for
the twisted Alexander invariants.
|
Recent achievements in depth prediction from a single RGB image have powered
the new research area of combining convolutional neural networks (CNNs) with
classical simultaneous localization and mapping (SLAM) algorithms. The depth
prediction from a CNN provides a reasonable initial point in the optimization
process in the traditional SLAM algorithms, while the SLAM algorithms further
improve the CNN prediction online. However, most of the current CNN-SLAM
approaches have only taken advantage of the depth prediction but not yet other
products from a CNN. In this work, we explore the use of the outlier mask, a
by-product from unsupervised learning of depth from video, as a prior in a
classical probability model for depth estimate fusion to improve the
outlier-resistant tracking performance of a SLAM front-end. On the other hand,
some of the previous CNN-SLAM work builds on feature-based sparse SLAM methods,
wasting the per-pixel dense prediction from a CNN. In contrast to these sparse
methods, we devise a dense CNN-assisted SLAM front-end that is implementable
with TensorFlow and evaluate it on both indoor and outdoor datasets.
|
The analytical expansion of monotone (contractive) Riemannian metrics (also
called quantum Fisher informations) in terms of moments of the dynamical
structure factor (DSF) relative to an original intensive observable, proposed
in J. Math. Phys. 57, 071903 (2016), is reconsidered and extended. The new
approach through the DSF, which fully characterizes the set of monotone
Riemannian metrics on the space of Gibbs thermal states, is utilized to extend
the spectral representation obtained for the Bogoliubov-Kubo-Mori metric (the
generalized isothermal susceptibility) to the entire class of monotone
Riemannian metrics. The obtained spectral representation is the main point of
our consideration. The latter allows us to present the one-to-one
correspondence between monotone Riemannian metrics and operator monotone
functions (which is the content of the Petz theorem in quantum information
theory) in terms of linear response theory. We show that monotone
Riemannian metrics can be determined from the analysis of the infinite chain of
equations of motion of the retarded Green's functions. Inequalities between the
different metrics have been obtained as well. This demonstrates that the
analysis of information-theoretic problems can benefit from concepts of
statistical mechanics, and that the two directions can cross-fertilize and
extend each other. We illustrate the presented approach by calculating the
entire class of monotone (contractive) Riemannian metrics for some simple but
instructive systems employed in various physical problems.
|
Mobile apps are increasingly relying on high-throughput and low-latency
content delivery, while the available bandwidth on wireless access links is
inherently time-varying. The handoffs between base stations and access modes
due to user mobility present additional challenges to deliver a high level of
user Quality-of-Experience (QoE). The ability to predict the available
bandwidth and the upcoming handoffs will give applications valuable leeway to
make proactive adjustments to avoid significant QoE degradation. In this paper,
we explore the possibility and accuracy of real-time mobile bandwidth and
handoff predictions in 4G/LTE and 5G networks. Towards this goal, we collect
long consecutive traces with rich bandwidth, channel, and context information
from public transportation systems. We develop Recurrent Neural Network models
to mine the temporal patterns of bandwidth evolution in fixed-route mobility
scenarios. Our models consistently outperform the conventional univariate and
multivariate bandwidth prediction models. For coexisting 4G \& 5G networks, we
propose the new problem of handoff prediction between 4G and 5G, which is
important for low-latency applications such as self-driving in realistic 5G
scenarios. We develop classification- and regression-based prediction models,
which achieve more than 80\% accuracy in predicting 4G and 5G handoffs in a
recent 5G dataset.
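A sketch of the kind of recurrent predictor described above: an LSTM mapping a sliding window of past bandwidth (and, optionally, channel/context features) to the next bandwidth sample. The layer sizes and window length are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BandwidthLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, window):           # window: (batch, time, n_features)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])     # next-step bandwidth estimate

model = BandwidthLSTM()
pred = model(torch.randn(8, 20, 1))      # 20 past samples per trace
```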
|
We verify the leading order term in the asymptotic expansion conjecture of
the relative Reshetikhin-Turaev invariants proposed in \cite{WY4} for all pairs
$(M,L)$ satisfying the properties that $M\setminus L$ is homeomorphic to some
fundamental shadow link complement and the 3-manifold $M$ is obtained by doing
rational Dehn filling on some boundary components of the fundamental shadow
link complement, under the assumptions that the denominators of the surgery
coefficients are odd and the cone angles are sufficiently small. In particular,
the asymptotics of the invariants captures the complex volume and the twisted
Reidemeister torsion of the manifold $M\setminus L$ associated with the
hyperbolic cone structure determined by the sequence of colorings of the framed
link $L$.
|
A Leavitt labelled path algebra over a commutative unital ring is associated
with a labelled space, generalizing Leavitt path algebras associated with
graphs and ultragraphs as well as torsion-free commutative algebras generated
by idempotents. We show that Leavitt labelled path algebras can be realized as
partial skew group rings, Steinberg algebras, and Cuntz-Pimsner algebras. Via
these realizations we obtain generalized uniqueness theorems, a description of
diagonal-preserving isomorphisms, and we characterize simplicity of Leavitt
labelled path algebras. In addition, we prove that a large class of partial
skew group rings can be realized as Leavitt labelled path algebras.
|
We establish the first tight lower bound of $\Omega(\log\log\kappa)$ on the
query complexity of sampling from the class of strongly log-concave and
log-smooth distributions with condition number $\kappa$ in one dimension.
Whereas existing guarantees for MCMC-based algorithms scale polynomially in
$\kappa$, we introduce a novel algorithm based on rejection sampling that
closes this doubly exponential gap.
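A generic one-dimensional rejection sampler, the primitive the algorithm builds on; the paper's envelope construction that achieves the query complexity above is not reproduced here, and the proposal and bound log_M are assumptions the caller must supply.

```python
import numpy as np

def rejection_sample(logpdf, propose, proposal_logpdf, log_M, n, seed=0):
    # accept x ~ proposal with probability pdf(x) / (M * proposal_pdf(x))
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = propose(rng)
        if np.log(rng.random()) < logpdf(x) - proposal_logpdf(x) - log_M:
            out.append(x)
    return np.array(out)
```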
|
In this paper, we consider gradient-type methods for convex positively
homogeneous optimization problems with relative accuracy. An analogue of the
accelerated universal gradient-type method for positively homogeneous
optimization problems with relative accuracy is investigated. The second
approach is related to subgradient methods with the B. T. Polyak stepsize. A
result on the linear convergence rate for some methods of this type with
adaptive step adjustment is obtained for a class of non-smooth problems. A
generalization to a special class of non-convex non-smooth problems is also
considered.
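A sketch of the classical Polyak-stepsize subgradient iteration referenced above, assuming the optimal value f_star is known; the adaptive step adjustment analysed in the paper is not reproduced. The l1 objective is an illustrative positively homogeneous non-smooth example.

```python
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, iters=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = subgrad(x)
        x = x - (f(x) - f_star) / (np.dot(g, g) + 1e-12) * g  # Polyak step
    return x

# f(x) = ||x||_1 is positively homogeneous and non-smooth; np.sign is a subgradient
x = polyak_subgradient(lambda v: np.abs(v).sum(), np.sign, np.ones(5), f_star=0.0)
```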
|
We develop a deep convolutional neural network (CNN) to deal with the blurry
artifacts caused by the defocus of the camera using dual-pixel images.
Specifically, we develop a double attention network which consists of
attentional encoders and triple local and global local modules to effectively
extract and select the useful information from each image of the dual-pixel
pair and synthesize the final output image. We
demonstrate the effectiveness of the proposed deblurring algorithm in terms of
both qualitative and quantitative aspects by evaluating on the test set in the
NTIRE 2021 Defocus Deblurring using Dual-pixel Images Challenge. The code and
trained models are available at https://github.com/tuvovan/ATTSF.
|
As hardware architectures evolve in the push towards exascale, developing
Computational Science and Engineering (CSE) applications depends on
performance-portable approaches for sustainable software development. This
paper describes one aspect of performance portability with respect to
developing a portable library of kernels that serve the needs of several CSE
applications and software frameworks. We describe Kokkos Kernels, a library of
kernels for sparse linear algebra, dense linear algebra and graph kernels. We
describe the design principles of such a library and demonstrate portable
performance of the library using some selected kernels. Specifically, we
demonstrate the performance of four sparse kernels, three dense batched
kernels, two graph kernels and one team level algorithm.
|
Recent breakthroughs of Neural Architecture Search (NAS) extend the field's
research scope towards a broader range of vision tasks and more diversified
search spaces. While existing NAS methods mostly design architectures on a
single task, algorithms that look beyond single-task search are surging to
pursue a more efficient and universal solution across various tasks. Many of
them leverage transfer learning and seek to preserve, reuse, and refine network
design knowledge to achieve higher efficiency in future tasks. However, the
enormous computational cost and experiment complexity of cross-task NAS impose
barriers to valuable research in this direction. Existing NAS
benchmarks all focus on one type of vision task, i.e., classification. In this
work, we propose TransNAS-Bench-101, a benchmark dataset containing network
performance across seven tasks, covering classification, regression,
pixel-level prediction, and self-supervised tasks. This diversity provides
opportunities to transfer NAS methods among tasks and allows for more complex
transfer schemes to evolve. We explore two fundamentally different types of
search space: cell-level search space and macro-level search space. With 7,352
backbones evaluated on seven tasks, 51,464 trained models with detailed
training information are provided. With TransNAS-Bench-101, we hope to
encourage the advent of exceptional NAS algorithms that raise cross-task search
efficiency and generalizability to the next level. Our dataset file will be
available at Mindspore, VEGA.
|
We consider analytic functions from a reproducing kernel Hilbert space. Given
that such a function is of order $\epsilon$ on a set of discrete data points,
relative to its global size, we ask how large it can be at a fixed point
outside of the data set. We obtain optimal bounds on this error of analytic
continuation and describe its asymptotic behavior in $\epsilon$. We also
describe the maximizer function attaining the optimal error in terms of the
resolvent of a positive semidefinite, self-adjoint and finite rank operator.
|
In offline reinforcement learning, a policy needs to be learned from a single
pre-collected dataset. Typically, policies are thus regularized during training
to behave similarly to the data generating policy, by adding a penalty based on
a divergence between the action distributions of the generating and the
trained policy. We
propose a new algorithm, which constrains the policy directly in its weight
space instead, and demonstrate its effectiveness in experiments.
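A minimal sketch of the stated idea: penalize distance in weight space to a fixed reference (e.g. behavior-cloned) policy instead of an action-distribution divergence. The quadratic form and coefficient are illustrative assumptions, not the paper's algorithm.

```python
import torch

def weight_space_penalty(policy, reference, coeff=1e-2):
    # sum of squared parameter differences; the reference is held fixed
    return coeff * sum(
        ((p - q.detach()) ** 2).sum()
        for p, q in zip(policy.parameters(), reference.parameters())
    )

# usage: loss = td_loss + weight_space_penalty(policy, reference)
```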
|
We study the entanglement between soft and hard particles produced in generic
scattering processes in QED. The reduced density matrix for the hard particles,
obtained via tracing over the entire spectrum of soft photons, is shown to have
a large eigenvalue, which governs the behavior of the Renyi entropies and of
the non-analytic part of the entanglement entropy at low orders in perturbation
theory. The leading perturbative entanglement entropy is logarithmically IR
divergent. The coefficient of the IR divergence exhibits certain universality
properties, irrespective of the dressing of the asymptotic charged particles
and the detailed properties of the initial state. In a certain kinematical
limit, the coefficient is proportional to the cusp anomalous dimension in QED.
For Fock basis computations associated with two-electron scattering, we derive
an exact expression for the large eigenvalue of the density matrix in terms of
hard scattering amplitudes, which is valid at any finite order in perturbation
theory. As a result, the IR logarithmic divergences appearing in the
expressions for the Renyi and entanglement entropies persist at any finite
order of the perturbative expansion. To all orders, however, the IR logarithmic
divergences exponentiate, rendering the large eigenvalue of the density matrix
IR finite. The all-orders Renyi entropies (per unit time, per particle flux),
which are shown to be proportional to the total inclusive cross-section in the
initial state, are also free of IR divergences. The entanglement entropy, on
the other hand, retains non-analytic, logarithmic behavior with respect to the
size of the box (which provides the IR cutoff) even to all orders in
perturbation theory.
|
We introduce a tamed exponential time integrator which exploits linear terms
in both the drift and diffusion for Stochastic Differential Equations (SDEs)
with a one-sided globally Lipschitz drift term. Strong convergence of the
proposed scheme is proved by exploiting the boundedness of the geometric
Brownian motion (GBM), and we establish order-1 convergence for linear
diffusion terms.
In our implementation we illustrate the efficiency of the proposed scheme
compared to existing fixed step methods and utilize it in an adaptive time
stepping scheme. Furthermore we extend the method to nonlinear diffusion terms
and show it remains competitive. The efficiency of these GBM-based approaches
is illustrated by considering some well-known SDE models.
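A sketch of a tamed exponential step of the type described: the linear part aX dt + bX dW is propagated exactly through the GBM factor exp((a - b^2/2)dt + b dW), while the one-sided Lipschitz nonlinearity f enters through a tamed increment. How the taming and the GBM factor are combined here is an assumption; the paper's scheme may differ in detail.

```python
import numpy as np

def tamed_exp_step(x, f, a, b, dt, rng):
    # dX = (a*X + f(X)) dt + b*X dW, with f one-sided globally Lipschitz
    dW = rng.normal(0.0, np.sqrt(dt))
    gbm = np.exp((a - 0.5 * b**2) * dt + b * dW)    # exact linear (GBM) flow
    drift = f(x) * dt / (1.0 + dt * abs(f(x)))      # taming prevents blow-up
    return gbm * (x + drift)

rng = np.random.default_rng(0)
x, dt = 1.0, 1e-3
for _ in range(1000):                                # e.g. f(x) = x - x**3
    x = tamed_exp_step(x, lambda y: y - y**3, a=1.0, b=0.5, dt=dt, rng=rng)
```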
|
Language models like BERT and SpanBERT pretrained on open-domain data have
obtained impressive gains on various NLP tasks. In this paper, we probe the
effectiveness of domain-adaptive pretraining objectives on downstream tasks. In
particular, three objectives, including a novel objective focusing on modeling
predicate-argument relations, are evaluated on two challenging dialogue
understanding tasks. Experimental results demonstrate that domain-adaptive
pretraining with proper objectives can significantly improve the performance of
a strong baseline on these tasks, achieving new state-of-the-art performance.
|
Studies evaluating bikeability usually compute spatial indicators shaping
cycling conditions and conflate them in a quantitative index. Much research
involves site visits or conventional geospatial approaches, and few studies
have leveraged street view imagery (SVI) for conducting virtual audits. These
have assessed a limited range of aspects, and not all have been automated using
computer vision (CV). Furthermore, few studies have thoroughly gauged the
usability of these technologies. We investigate, with
experiments at a fine spatial scale and across multiple geographies (Singapore
and Tokyo), whether we can use SVI and CV to assess bikeability
comprehensively. Extending related work, we develop an exhaustive index of
bikeability composed of 34 indicators. The results suggest that SVI and CV are
adequate to evaluate bikeability in cities comprehensively. As they
outperformed non-SVI counterparts by a wide margin, SVI indicators are also
found to be superior in assessing urban bikeability, and potentially can be
used independently, replacing traditional techniques. However, the paper
exposes some limitations, suggesting that the best way forward is combining
both SVI and non-SVI approaches. The new bikeability index presents a
contribution in transportation and urban analytics, and it is scalable to
assess cycling appeal widely.
|
Using hydrodynamical simulations, we study how well the underlying
gravitational potential of a galaxy cluster can be modelled dynamically with
different types of tracers. In order to segregate different systematics and the
effects of varying estimator performances, we first focus on applying a generic
minimal assumption method (oPDF) to model the simulated haloes using the full
6-D phase-space information. We show that the halo mass and concentration can be
recovered in an ensemble unbiased way, with a stochastic bias that varies from
halo to halo, mostly reflecting deviations from steady state in the tracer
distribution. The typical systematic uncertainty is $\sim 0.17$ dex in the
virial mass and likewise $\sim 0.17$ dex in the concentration when dark matter
particles are used as tracers. The dynamical state of satellite galaxies is
close to that of dark matter particles, while intracluster stars are further
from a steady state, resulting in a $\sim$ 0.26 dex systematic uncertainty in
mass.
Compared with galactic haloes hosting Milky-Way-like galaxies, cluster haloes
show a larger stochastic bias in the recovered mass profiles. We also test the
accuracy of using intracluster gas as a dynamical tracer modelled through a
generalised hydrostatic equilibrium equation, and find a comparable systematic
uncertainty in the estimated mass to that using dark matter. Lastly, we
demonstrate that our conclusions are largely applicable to other steady-state
dynamical models including the spherical Jeans equation, by quantitatively
segregating their statistical efficiencies and robustness to systematics. We
also estimate the limiting number of tracers that leads to the
systematics-dominated regime in each case.
|
In this paper, a novel intelligent reflecting surface (IRS)-assisted wireless
powered communication network (WPCN) architecture is proposed for
power-constrained Internet-of-Things (IoT) smart devices, where IRS is
exploited to improve the performance of WPCN under imperfect channel state
information (CSI). We formulate a hybrid access point (HAP) transmit energy
minimization problem by jointly optimizing time allocation, HAP energy
beamforming, receiving beamforming, user transmit power allocation, IRS energy
reflection coefficient and information reflection coefficient under the
imperfect CSI and non-linear energy harvesting model. On account of the high
coupling of optimization variables, the formulated problem is a non-convex
optimization problem that is difficult to solve directly. To address the
above-mentioned challenging problem, the alternating optimization (AO)
technique is applied to decouple the optimization variables and solve the
problem.
Specifically, through AO, time allocation, HAP energy beamforming, receiving
beamforming, user transmit power allocation, IRS energy reflection coefficient
and information reflection coefficient are divided into three sub-problems to
be solved alternately. The difference-of-convex (DC) programming is used to
solve the non-convex rank-one constraint in solving IRS energy reflection
coefficient and information reflection coefficient. Numerical simulations
verify the superiority of the proposed optimization algorithm in decreasing HAP
transmit energy compared with other benchmark schemes.
|
Ranking the participants of a tournament has applications in voting, paired
comparisons analysis, sports and other domains. In this paper we introduce
bipartite tournaments, which model situations in which two different kinds of
entity compete indirectly via matches against players of the opposite kind;
examples include education (students/exam questions) and solo sports
(golfers/courses). In particular, we look to find rankings via chain graphs,
which correspond to bipartite tournaments in which the sets of adversaries
defeated by the players on one side are nested with respect to set inclusion.
Tournaments of this form have a natural and appealing ranking associated with
them. We apply chain editing -- finding the minimum number of edge changes
required to form a chain graph -- as a new mechanism for tournament ranking.
The properties of these rankings are investigated in a probabilistic setting,
where they arise as maximum likelihood estimators, and through the axiomatic
method of social choice theory. Despite some nice properties, two problems
remain: an important anonymity axiom is violated, and chain editing is NP-hard.
We address both issues by relaxing the minimisation constraint in chain
editing, and characterise the resulting ranking methods via a greedy
approximation algorithm.
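A minimal sketch of the chain-graph view described above: record the set of opposite-side adversaries each player defeated; the tournament is a chain graph exactly when these sets are nested, and the nesting order gives the natural ranking. The relaxed chain editing and greedy approximation of the paper are not reproduced; the toy data is illustrative.

```python
def chain_ranking(defeats):
    """defeats: dict mapping each player to the set of adversaries it beat."""
    order = sorted(defeats, key=lambda p: len(defeats[p]), reverse=True)
    is_chain = all(
        defeats[order[i + 1]] <= defeats[order[i]]   # nested w.r.t. set inclusion
        for i in range(len(order) - 1)
    )
    return order, is_chain

rank, ok = chain_ranking({"s1": {"q1", "q2"}, "s2": {"q1"}, "s3": set()})
```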
|
Studies on the interplay between the charge order and the $d$-wave
superconductivity in the copper-oxide high $T_{\rm c}$ superconductors are
reviewed with a special emphasis on the exploration based on the unconventional
concept of the electron fractionalization and its consequences supported by
solutions of high-accuracy quantum many-body solvers. Severe competitions
between the superconducting states and the charge inhomogeneity including the
charge/spin striped states revealed by the quantum many-body solvers are first
addressed for the Hubbard models and then for the {\it ab initio} Hamiltonians
of the cuprates derived without adjustable parameters to represent the
low-energy physics of the cuprates. The charge inhomogeneity and
superconductivity are born out of the same mother, namely, the carrier
attraction arising from the strong Coulomb repulsion near the Mott insulator
(Mottness) and the accompanying electron fractionalization. The same mother
makes the severe competition of the two brothers inevitable. The electron
fractionalization has remarkable consequences for the mechanism of the
superconductivity. Recent explorations motivated by the concept of the
fractionalization and their consequences on experimental observations in
energy-momentum resolved spectroscopic measurements including the angle
resolved photoemission spectroscopy (ARPES) and the resonant inelastic X-ray
spectroscopy (RIXS) are overviewed, with future vision for the integrated
spectroscopy to challenge the long-standing difficulties in the cuprates as
well as in other strongly correlated matter in general.
|
In recent years, speech processing algorithms have seen tremendous progress
primarily due to the deep learning renaissance. This is especially true for
speech separation where the time-domain audio separation network (TasNet) has
led to significant improvements. However, for the related task of
single-speaker speech enhancement, which is of obvious importance, it is not
yet known whether the TasNet architecture is equally successful. In this paper,
we show that TasNet also improves the state of the art for speech enhancement,
and that the largest gains are achieved for modulated noise sources such as
speech.
Furthermore, we show that TasNet learns an efficient inner-domain
representation, where target and noise signal components are highly separable.
This is especially true for noise in terms of interfering speech signals, which
might explain why TasNet performs so well on the separation task. Additionally,
we show that TasNet performs poorly for large frame hops and conjecture that
aliasing might be the main cause of this performance drop. Finally, we show
that TasNet consistently outperforms a state-of-the-art single-speaker speech
enhancement system.
|
We show that any a priori possible entropy value is realized by an ergodic
IRS, in free groups and in SL2(Z). This is in stark contrast to what may happen
in SLn(Z) for n>2, where only the trivial entropy values can be realized by
ergodic IRSs.
|
Intense laser-plasma interactions are an essential tool for the laboratory
study of ion acceleration at a collisionless shock. With two-dimensional
particle-in-cell calculations of a multicomponent plasma we observe two
electrostatic collisionless shocks at two distinct longitudinal positions when
driven with a linearly-polarized laser at normalized laser vector potential a0
that exceeds 10. Moreover, these shocks, associated with protons and carbon
ions, show a power-law dependence on a0 and accelerate ions to different
velocities in an expanding upstream with higher flux than in a single-component
hydrogen or carbon plasma. This results from an electrostatic ion two-stream
instability caused by differences in the charge-to-mass ratio of different
ions. Particle acceleration in collisionless shocks in multicomponent plasma
are ubiquitous in space and astrophysics, and these calculations identify the
possibility for studying these complex processes in the laboratory.
|
Computer science has grown rapidly since its inception in the 1950s and the
pioneers in the field are celebrated annually by the A.M. Turing Award. In this
paper, we attempt to shed light on the path to influential computer scientists
by examining the characteristics of the 72 Turing Award laureates. To achieve
this goal, we build a comprehensive dataset of the Turing Award laureates and
analyze their characteristics, including their personal information, family
background, academic background, and industry experience. The FP-Growth
algorithm is used for frequent feature mining. Logistic regression plots, pie
charts, word clouds, and maps are generated for each feature of interest to
uncover insights regarding personal factors that drive influential work in the
field of computer science. In particular, we show that the Turing Award
laureates are most commonly white, male, married, United States citizens who
received a PhD degree. Our results also show that the age at which the
laureate won the award increases over the years; most of the Turing Award
laureates did not major in computer science; birth order is strongly related to
the winners' success; and the number of citations is not as important as one
would expect.
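A sketch of the frequent-feature-mining step using the FP-Growth implementation from mlxtend; the paper does not specify its tooling, and the toy records below are illustrative, not the laureate dataset.

```python
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth

records = [
    ["male", "married", "PhD", "US_citizen"],
    ["male", "married", "PhD"],
    ["male", "PhD", "US_citizen"],
]
# one-hot encode the itemsets, then mine patterns above a support threshold
onehot = pd.DataFrame(
    [{item: True for item in r} for r in records]
).fillna(False).astype(bool)
print(fpgrowth(onehot, min_support=0.6, use_colnames=True))
```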
|
Online misinformation is a prevalent societal issue, with adversaries relying
on tools ranging from cheap fakes to sophisticated deep fakes. We are motivated
by the threat scenario where an image is used out of context to support a
certain narrative. While some prior datasets for detecting image-text
inconsistency generate samples via text manipulation, we propose a dataset
where both image and text are unmanipulated but mismatched. We introduce
several strategies for automatically retrieving convincing images for a given
caption, capturing cases with inconsistent entities or semantic context. Our
large-scale automatically generated NewsCLIPpings Dataset: (1) demonstrates
that machine-driven image repurposing is now a realistic threat, and (2)
provides samples that represent challenging instances of mismatch between text
and image in news that are able to mislead humans. We benchmark several
state-of-the-art multimodal models on our dataset and analyze their performance
across different pretraining domains and visual backbones.
|
We review the equation of state of QCD matter at finite densities. We discuss
the construction of the equation of state with net baryon number, electric
charge, and strangeness using the results of lattice QCD simulations and hadron
resonance gas models. Its application to the hydrodynamic analyses of
relativistic nuclear collisions suggests that the interplay of multiple
conserved charges is important in the quantitative understanding of the dense
nuclear matter created at lower beam energies. Several different models of the
QCD equation of state are discussed for comparison.
|
The existence of two novel hybrid two-dimensional (2D) monolayers, 2D B3C2P3
and 2D B2C4P2, has been predicted based on the density functional theory
calculations. It has been shown that these materials possess structural and
thermodynamic stability. 2D B3C2P3 is a moderate band gap semiconductor, while
2D B2C4P2 is a zero band gap semiconductor. It has also been shown that 2D
B3C2P3 has a highly tunable band gap under the effect of strain and substrate
engineering. Moreover, 2D B3C2P3 produces low barriers for dissociation of
water and hydrogen molecules on its surface, and shows fast recovery after
desorption of the molecules. The novel materials can be fabricated by carbon
doping of boron phosphide, and directly by arc discharge, laser ablation, and
vaporization. Applications of 2D B3C2P3 in renewable energy and straintronic
nanodevices have been proposed.
|
The generalization performance of a machine learning algorithm such as a
neural network depends in a non-trivial way on the structure of the data
distribution. To analyze the influence of data structure on test loss dynamics,
we study an exactly solvable model of stochastic gradient descent (SGD) on
mean square loss which predicts test loss when training on features with
arbitrary covariance structure. We solve the theory exactly for both Gaussian
features and arbitrary features and we show that the simpler Gaussian model
accurately predicts test loss of nonlinear random-feature models and deep
neural networks trained with SGD on real datasets such as MNIST and CIFAR-10.
We show that the optimal batch size at a fixed compute budget is typically
small and depends on the feature correlation structure, demonstrating the
computational benefits of SGD with small batch sizes. Lastly, we extend our
theory to the more usual setting of stochastic gradient descent on a fixed
subsampled training set, showing that both training and test error can be
accurately predicted in our framework on real data.
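A sketch of the setting analysed above: SGD on mean-square loss with Gaussian features drawn from a chosen covariance spectrum, tracking the test loss. The spectrum, learning rate, and sizes are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lr, batch = 100, 0.05, 4
spectrum = 1.0 / np.arange(1, d + 1)            # feature covariance eigenvalues
w_star, w = rng.normal(size=d), np.zeros(d)

def sample(n):
    X = rng.normal(size=(n, d)) * np.sqrt(spectrum)  # anisotropic Gaussian features
    return X, X @ w_star

X_test, y_test = sample(2000)
for _ in range(2000):
    X, y = sample(batch)                         # fresh mini-batch each step
    w -= lr * X.T @ (X @ w - y) / batch
print("test loss:", np.mean((X_test @ w - y_test) ** 2))
```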
|
As high-performance organic semiconductors, {\pi}-conjugated polymers have
attracted much attention due to their appealing advantages, including low cost,
solution processability, mechanical flexibility, and tunable optoelectronic
properties. During the past several decades, great advances have been made in
polymer-based OFETs with p-type, n-type, or even ambipolar characteristics.
Through chemical modification and alignment optimization, many conjugated
polymers have exhibited superior mobilities, some even larger than 10 cm2 V-1
s-1 in OFETs, which makes them very promising for the
applications in organic electronic devices. This review describes the recent
progress of high-performance polymers used in OFETs from the aspects of
molecular design and assembly strategy. Furthermore, the current challenges and
outlook in the design and development of conjugated polymers are also
mentioned.
|
We consider convex, black-box objective functions with additive or
multiplicative noise with a high-dimensional parameter space and a data space
of lower dimension, where gradients of the map exist, but may be inaccessible.
We investigate Derivative-Free Optimization (DFO) in this setting and propose a
novel method, Active STARS (ASTARS), based on STARS (Chen and Wild, 2015) and
dimension reduction in parameter space via Active Subspace (AS) methods
(Constantine, 2015). STARS hyperparameters are inversely proportional to the
known dimension of parameter space, resulting in heavy smoothing and small step
sizes for large dimensions. When possible, ASTARS leverages a lower-dimensional
AS, defining a set of directions in parameter space causing the majority of the
variance in function values. ASTARS iterates are updated with steps only taken
in the AS, reducing the value of the objective function more efficiently than
STARS, which updates iterates in the full parameter space. Computational costs
may be reduced further by learning ASTARS hyperparameters and the AS, reducing
the total evaluations of the objective function and eliminating the requirement
that the user specify hyperparameters, which may be unknown in our setting. We
call this method Fully Automated ASTARS (FAASTARS). We show that STARS and
ASTARS will both converge -- with a certain complexity -- even with inexact,
estimated hyperparameters. We also find that FAASTARS converges with the use of
estimated AS's and hyperparameters. We explore the effectiveness of ASTARS and
FAASTARS in numerical examples which compare ASTARS and FAASTARS to STARS.
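The following sketch, which assumes the active subspace `W` is already known (FAASTARS would learn it), contrasts a STARS-like Gaussian-smoothing step in the full space with an ASTARS-style step restricted to the active subspace; the objective, noise level, and step sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
D, d_as = 100, 2                                    # ambient and active dimensions

# Toy objective varying mostly along a known active subspace W (orthonormal columns).
W, _ = np.linalg.qr(rng.normal(size=(D, d_as)))
f = lambda x: np.sum((W.T @ x) ** 2) + 1e-3 * rng.normal()  # additive noise

def smoothing_step(x, mu, h, subspace=None):
    """One STARS-like Gaussian-smoothing step; directions live in `subspace` if given."""
    if subspace is None:
        u = rng.normal(size=x.shape)                # full-space direction (STARS)
    else:
        u = subspace @ rng.normal(size=subspace.shape[1])  # AS direction (ASTARS)
    g = (f(x + mu * u) - f(x)) / mu * u             # forward-difference gradient estimate
    return x - h * g

x = rng.normal(size=D)
for _ in range(200):
    x = smoothing_step(x, mu=1e-2, h=0.05, subspace=W)
print("f(x) after ASTARS-style steps:", f(x))
```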
|
While several non-pharmacological measures have been implemented for a few
months in an effort to slow the coronavirus disease (COVID-19) pandemic in the
United States, the disease remains a danger in a number of counties as
restrictions are lifted to revive the economy. Making a trade-off between
economic recovery and infection control is a major challenge confronting many
hard-hit counties. Understanding the transmission process and quantifying the
costs of local policies are essential to the task of tackling this challenge.
Here, we investigate the dynamic contact patterns of the populations from
anonymized, geo-localized mobility data and census and demographic data to
create data-driven, agent-based contact networks. We then simulate the epidemic
spread with a time-varying contagion model in ten large metropolitan counties
in the United States and evaluate a combination of mobility reduction, mask
use, and reopening policies. We find that our model captures the
spatial-temporal and heterogeneous case trajectory within various counties
based on dynamic population behaviors. Our results show that a decision-making
tool that considers both economic cost and infection outcomes of policies can
be informative in making decisions of local containment strategies for optimal
balancing of economic slowdown and virus spread.
|
We propose a comprehensive field-based semianalytical method for designing
fabrication-ready multifunctional periodic metasurfaces (MSs). Harnessing
recent work on multielement metagratings based on capacitively-loaded strips,
we have extended our previous meta-atom design formulation to generate
realistic substrate-supported printed-circuit-board layouts for anomalous
refraction MSs. Subsequently, we apply a greedy algorithm for iteratively
optimizing individual scatterers across the entire macroperiod to achieve
multiple design goals for corresponding multiple incidence angles with a single
MS structure. As verified with commercial solvers, the proposed semianalytical
scheme, properly accounting for near-field coupling between the various
scatterers, can reliably produce highly efficient multifunctional MSs on
demand, without requiring time-consuming full-wave optimization.
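As a hedged illustration of the iterative optimization described above, the sketch below implements a generic greedy sweep over scatterers; the `objective` callable is a stand-in for the semianalytical field evaluation, and all names and values are illustrative rather than taken from the paper.

```python
def greedy_sweep(layout, candidates, objective, sweeps=3):
    """Greedy per-scatterer optimisation: for each scatterer in the macroperiod,
    try every candidate geometry and keep the one that best serves the combined
    design goals, holding the other scatterers fixed."""
    for _ in range(sweeps):
        for i in range(len(layout)):
            layout[i] = max(
                candidates,
                key=lambda c: objective(layout[:i] + [c] + layout[i + 1:]))
    return layout

# Toy usage: scatterers are scalars and the goal is matching a target profile.
target = [0.2, 0.9, 0.4, 0.7]
obj = lambda lay: -sum((a - b) ** 2 for a, b in zip(lay, target))
print(greedy_sweep([0.0] * 4, [i / 10 for i in range(11)], obj))
```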
|
We consider the quasi-one-dimensional (quasi-1D) model of a sonic black hole
in a dilute Bose-Einstein condensate. It is shown that an accurate treatment of
the dimensional reduction to quasi-1D leads to a finite condensate quantum
depletion even for axially infinite systems, and to an intrinsic nonthermality of
the black hole radiation spectrum. By calculating the depletion, we derive a
{\em first-order} many-body signature of the sonic horizon, represented by a
distinct peak in the depletion power spectrum. This peak constitutes a readily
experimentally accessible tool to detect quantum sonic horizons even when a
negligible Hawking radiation flux is created by the black hole.
|
The study of strong-lensing systems conventionally involves constructing a
mass distribution that can reproduce the observed multiply-imaging properties.
Such mass reconstructions are generically non-unique. Here, we present an
alternative strategy: instead of modelling the mass distribution, we search
cosmological galaxy-formation simulations for plausible matches. In this paper
we test the idea on seven well-studied lenses from the SLACS survey. For each
of these, we first pre-select a few hundred galaxies from the EAGLE
simulations, using the expected Einstein radius as an initial criterion. Then,
for each of these pre-selected galaxies, we fit for the source light
distribution, while using MCMC for the placement and orientation of the lensing
galaxy, so as to reproduce the multiple images and arcs. The results indicate
that the strategy is feasible, and even yields relative posterior probabilities
of two different galaxy-formation scenarios, though these are not statistically
significant yet. Extensions to other observables, such as kinematics and
colours of the stellar population in the lensing galaxy, are straightforward in
principle, though we have not attempted it yet. Scaling to arbitrarily large
numbers of lenses also appears feasible. This will be especially relevant for
upcoming wide-field surveys, through which the number of galaxy lenses will
rise possibly a hundredfold, which will overwhelm conventional modelling
methods.
|
In supervised learning for medical image analysis, sample selection
methodologies are fundamental to attain optimum system performance promptly and
with minimal expert interactions (e.g. label querying in an active learning
setup). In this paper we propose a novel sample selection methodology based on
deep features leveraging information contained in interpretability saliency
maps. In the absence of ground truth labels for informative samples, we use a
novel self-supervised learning-based approach for training a classifier that
learns to identify the most informative sample in a given batch of images. We
demonstrate the benefits of the proposed approach, termed
Interpretability-Driven Sample Selection (IDEAL), in an active learning setup
aimed at lung disease classification and histopathology image segmentation. We
analyze three different approaches to determine sample informativeness from
interpretability saliency maps: (i) an observational model stemming from
findings on previous uncertainty-based sample selection approaches, (ii) a
radiomics-based model, and (iii) a novel data-driven self-supervised approach.
We compare IDEAL to other baselines using the publicly available NIH chest
X-ray dataset for lung disease classification, and a public histopathology
segmentation dataset (GLaS), demonstrating the potential of using
interpretability information for sample selection in active learning systems.
Results show our proposed self-supervised approach outperforms the other
approaches in selecting informative samples, leading to state-of-the-art performance with
fewer samples.
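A minimal sketch of per-batch selection follows; the saliency-entropy score used here is a hypothetical stand-in for the paper's learned self-supervised scorer, included only to show the selection mechanics.

```python
import numpy as np

def saliency_entropy(saliency):
    """Informativeness proxy: entropy of a normalised saliency map (a hypothetical
    substitute for the learned scorer described in the paper)."""
    p = saliency.ravel()
    p = p / (p.sum() + 1e-12)
    return -(p * np.log(p + 1e-12)).sum()

def select_most_informative(batch_saliencies, k=1):
    """Return indices of the k highest-scoring samples in the batch."""
    scores = [saliency_entropy(s) for s in batch_saliencies]
    return np.argsort(scores)[-k:]

rng = np.random.default_rng(2)
batch = [np.abs(rng.normal(size=(32, 32))) for _ in range(16)]  # toy saliency maps
print("query indices:", select_most_informative(batch, k=2))
```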
|
In order to remain competitive, Internet companies collect and analyse user
data for the purpose of improving user experiences. Frequency estimation is a
widely used statistical tool which could potentially conflict with the relevant
privacy regulations. Privacy preserving analytic methods based on differential
privacy have been proposed, which either require a large user base or a trusted
server; hence, they may give big companies an unfair advantage while handicapping
smaller organizations in their growth opportunities. To address this issue, this
paper proposes a fair privacy-preserving sampling-based frequency estimation
method and provides a relation between its privacy guarantee, output accuracy,
and number of participants. We designed decentralized privacy-preserving
aggregation mechanisms using multi-party computation technique and established
that, for a limited number of participants and a fixed privacy level, our
mechanisms perform better than those that are based on traditional perturbation
methods; hence, provide smaller companies a fair growth opportunity. We further
propose an architectural model to support weighted aggregation in order to
achieve a higher-accuracy estimate and to cater for users with different privacy
requirements. Compared to unweighted aggregation, our method provides a
more accurate estimate. Extensive experiments are conducted to show the
effectiveness of the proposed methods.
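For contrast with the sampling-based scheme above, here is a minimal sketch of the traditional perturbation baseline it is compared against: randomized response with an unbiased debiasing step. The paper's multi-party-computation aggregation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def randomized_response(bits, eps):
    """Each user reports truthfully with p = e^eps / (1 + e^eps), else flips the bit."""
    p = np.exp(eps) / (1 + np.exp(eps))
    keep = rng.random(bits.shape) < p
    return np.where(keep, bits, 1 - bits), p

def debiased_frequency(reports, p):
    """Unbiased frequency estimate: E[report] = (2p-1) f + (1-p), solved for f."""
    return (reports.mean() - (1 - p)) / (2 * p - 1)

true_bits = (rng.random(10_000) < 0.3).astype(int)   # 30% of users hold the item
reports, p = randomized_response(true_bits, eps=1.0)
print("estimated frequency:", debiased_frequency(reports, p))
```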
|
We study directions along which the norms of vectors are preserved under a
linear map. In particular, we find families of matrices for which these
directions are determined by integer vectors. We consider the two-dimensional
case in detail, and also discuss the extension to the three-dimensional case.
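A small numerical illustration of the two-dimensional case: for an integer matrix $A$, a direction $v$ satisfies $|Av| = |v|$ exactly when $v^T(A^TA - I)v = 0$, so the directions can be read off the eigendecomposition of the symmetric form. The matrix below is chosen only for illustration.

```python
import numpy as np

A = np.array([[2, 1],
              [1, 1]])                       # an integer matrix used for illustration

M = A.T @ A - np.eye(2)                      # |Av| = |v|  iff  v^T M v = 0
evals, evecs = np.linalg.eigh(M)
if evals[0] < 0 < evals[1]:                  # mixed signature: real directions exist
    t = np.sqrt(-evals[0] / evals[1])
    for s in (+1, -1):
        v = evecs[:, 0] + s * t * evecs[:, 1]
        v /= np.linalg.norm(v)
        print("direction:", v, " |Av|/|v| =", np.linalg.norm(A @ v))
```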
|
Currently, soil-structure interaction in energy piles is not yet thoroughly
understood. One of the important underlying features is the effect of tip and
head restraints on displacement, strain and stress in energy piles. This study
investigates the thermo-mechanical response of energy piles subjected to
different end restraints using recently found analytical solutions, thus
providing a fundamental, rational, mechanics-based understanding. End
restraints are found to have a substantial effect on thermo-mechanical response
of energy piles, especially on thermal axial displacement and axial stress in
the pile. Head restraint imposed by interaction of an energy pile with the
superstructure led to a decrease in the magnitude of head displacement and
an increase in axial stress, while decreasing the axial strain. The impact of head
restraint was more pronounced in end bearing than in fully floating energy
piles.
|
We model the 21cm power spectrum across the Cosmic Dawn and the Epoch of
Reionization (EoR) in fuzzy dark matter (FDM) cosmologies. The suppression of
small mass halos in FDM models leads to a delay in the onset redshift of these
epochs relative to cold dark matter (CDM) scenarios. This strongly impacts the
21cm power spectrum and its redshift evolution. The 21cm power spectrum at a
given stage of the EoR/Cosmic Dawn process is also modified: in general, the
amplitude of 21cm fluctuations is boosted by the enhanced bias factor of galaxy
hosting halos in FDM. We forecast the prospects for discriminating between CDM
and FDM with upcoming power spectrum measurements from HERA, accounting for
degeneracies between astrophysical parameters and dark matter properties. If
FDM constitutes the entirety of the dark matter and the FDM particle mass is
$10^{-21}$ eV, HERA can determine the mass to within 20 percent at $2\sigma$
confidence.
|
We evaluate in closed form several series involving products of Cauchy
numbers with other special numbers (harmonic, skew-harmonic, hyperharmonic, and
central binomial). Similar results are obtained with series involving Stirling
numbers of the first kind. We focus on several particular cases which give new
closed forms for Euler sums of hyperharmonic numbers and products of
hyperharmonic and harmonic numbers.
|
Anisotropic outgassing from comets exerts a torque sufficient to rapidly
change the angular momentum of the nucleus, potentially leading to rotational
instability. Here, we use empirical measures of spin changes in a sample of
comets to characterize the torques and to compare them with expectations from a
simple model. Both the data and the model show that the characteristic spin-up
timescale, $\tau_s$, is a strong function of nucleus radius, $r_n$.
Empirically, we find that the timescale for comets (most with perihelion 1 to 2
AU and eccentricity $\sim$0.5) varies as $\tau_s \sim 100 r_n^{2}$, where $r_n$
is expressed in kilometers and $\tau_s$ is in years. The fraction of the
nucleus surface that is active varies as $f_A \sim 0.1 r_n^{-2}$. We find that
the median value of the dimensionless moment arm of the torque is $k_T$ = 0.007
(i.e. $\sim$0.7\% of the escaping momentum torques the nucleus), with weak
($<$3$\sigma$) evidence for a size dependence $k_T \sim 10^{-3} r_n^2$.
Sub-kilometer nuclei have spin-up timescales comparable to their orbital
periods, confirming that outgassing torques are quickly capable of driving
small nuclei towards rotational disruption. Torque-induced rotational
instability likely accounts for the paucity of sub-kilometer short-period
cometary nuclei, and for the pre-perihelion destruction of sungrazing comets.
Torques from sustained outgassing on small active asteroids can rival YORP
torques, even for very small ($\lesssim$1 g s$^{-1}$) mass loss rates. Finally,
we highlight the important role played by observational biases in the measured
distributions of $\tau_s$, $f_A$ and $k_T$.
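A quick numerical reading of the empirical scalings quoted above (radii and outputs are illustrative evaluations of the stated relations, not new measurements):

```python
# Evaluate tau_s ~ 100 r_n^2 (years) and f_A ~ 0.1 r_n^-2 for sample radii in km.
for r_n in (0.5, 1.0, 5.0):
    tau_s = 100 * r_n**2                    # spin-up timescale in years
    f_A = 0.1 * r_n**-2                     # active surface fraction
    print(f"r_n = {r_n:4.1f} km: tau_s ~ {tau_s:7.1f} yr, f_A ~ {f_A:.3f}")
```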
|
We study the stochastic bilinear minimax optimization problem, presenting an
analysis of the same-sample Stochastic ExtraGradient (SEG) method with constant
step size, and presenting variations of the method that yield favorable
convergence. In sharp contrast with the basic SEG method, whose last iterate
only contracts to a fixed neighborhood of the Nash equilibrium, SEG augmented
with iteration averaging provably converges to the Nash equilibrium under the
same standard settings, and such a rate is further improved by incorporating a
scheduled restarting procedure. In the interpolation setting where noise
vanishes at the Nash equilibrium, we achieve an optimal convergence rate up to
tight constants. We present numerical experiments that validate our theoretical
findings and demonstrate the effectiveness of the SEG method when equipped with
iteration averaging and restarting.
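A minimal sketch of same-sample SEG with iteration averaging on a toy stochastic bilinear problem follows; the dimensions, step size, and noise level are illustrative, and the scheduled restarting procedure is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
d, eta, T, sigma = 5, 0.1, 2000, 0.1
A = rng.normal(size=(d, d)) / np.sqrt(d)    # bilinear objective x^T A y (Nash point at 0)
x, y = rng.normal(size=d), rng.normal(size=d)
x_avg, y_avg = np.zeros(d), np.zeros(d)

def grads(x, y, noise):
    """Stochastic gradients of the bilinear objective."""
    return A @ y + noise, A.T @ x + noise

for t in range(1, T + 1):
    xi = sigma * rng.normal(size=d)         # same sample reused in both half-steps
    gx, gy = grads(x, y, xi)
    x_half, y_half = x - eta * gx, y + eta * gy   # extrapolation step
    gx, gy = grads(x_half, y_half, xi)
    x, y = x - eta * gx, y + eta * gy             # update step
    x_avg += (x - x_avg) / t                      # iteration averaging
    y_avg += (y - y_avg) / t
print("last iterate norm:    ", np.linalg.norm(np.r_[x, y]))
print("averaged iterate norm:", np.linalg.norm(np.r_[x_avg, y_avg]))
```

The averaged iterate should land noticeably closer to the equilibrium than the last iterate, mirroring the contrast drawn above.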
|
The polarization properties of the elastic electron scattering on H-like ions
are investigated within the framework of the relativistic QED theory. The
polarization properties are determined by a combination of relativistic effects
and spin exchange between the incident and bound electrons. The scattering of a
polarized electron on an initially unpolarized ion is fully described by five
parameters. We study these parameters for non-resonant scattering, as well as
in the vicinity of LL resonances, where scattering occurs through the formation
and subsequent decay of intermediate autoionizing states. The study was carried
out for ions from $\text{B}^{4+}$ to $\text{Xe}^{53+}$. Special attention was
paid to the study of asymmetry in electron scattering.
|
Dynamical quantum phase transitions (DQPTs), which refer to the criticality
in time of a quantum many-body system, have attracted much theoretical and
experimental research interest recently. Although DQPTs are defined and
signalled by the non-analyticities in the Loschmidt rate, their interrelation
with various correlation measures such as the equilibrium order parameters of
the system remains unclear. In this work, by considering the quench dynamics in
an interacting topological model, we find that the equilibrium order parameters
of the model in general exhibit signatures around the DQPT, in the short time
regime. The first extrema of the equilibrium order parameters are connected to
the first Loschmidt rate peak. By studying the unequal-time two-point
correlation, we also find that the correlation between the nearest neighbors
decays while that with neighbors further away builds up as time grows in the
non-interacting case, and upon the addition of repulsive intra-cell
interactions. On the other hand, the inter-cell interaction tends to suppress
the two-site correlations. These findings could provide insights into the
characteristics of the system around DQPTs, and pave the way to a better
understanding of the dynamics in non-equilibrium quantum many-body systems.
|
Ensemble learning methods are designed to benefit from multiple learning
algorithms for better predictive performance. The tradeoff of this improved
performance is slower speed and larger size of ensemble learning systems
compared to single learning systems. In this paper, we present a novel approach
to deal with this problem in Random Forest (RF) as one of the most powerful
ensemble methods. The method is based on crossbreeding the best tree
branches to improve the speed and memory footprint of RF while maintaining
its performance on classification measures. The proposed approach has been
tested on a group of synthetic and real datasets and compared to the standard
RF approach. Several evaluations have been conducted to determine the effects
of the Crossbred RF (CRF) on the accuracy and the number of trees in a forest.
The results show better performance of CRF compared to RF.
|
The structure of finite self-assembling systems depends sensitively on the
number of constituent building blocks. Recently, it was demonstrated that hard
sphere-like colloidal particles show a magic number effect when confined in
spherical emulsion droplets. Geometric construction rules permit a few dozen
magic numbers that correspond to a discrete series of completely filled
concentric icosahedral shells. Here, we investigate the free energy landscape
of these colloidal clusters as a function of the number of their constituent
building blocks for system sizes up to several thousand particles. We find that
minima in the free energy landscape, arising from the presence of filled,
concentric shells, are significantly broadened. In contrast to their atomic
analogues, colloidal clusters in spherical confinement can flexibly accommodate
excess colloids by ordering icosahedrally in the cluster center while changing
the structure near the cluster surface. In-between these magic number regions,
the building blocks cannot arrange into filled shells. Instead, we observe that
defects accumulate in a single wedge and therefore only affect a few
tetrahedral grains of the cluster. We predict the existence of this wedge by
simulation and confirm its presence in experiment using electron tomography.
The introduction of the wedge minimizes the free energy penalty by confining
defects to small regions within the cluster. In addition, the remaining ordered
tetrahedral grains can relax internal strain by breaking icosahedral symmetry.
Our findings demonstrate how multiple defect mechanisms collude to form the
complex free energy landscape of hard sphere-like colloidal clusters.
|
We prove sufficient conditions for the convergence of a certain iterative
process of order 2 for solving nonlinear functional equations, which does not
require inverting the derivative. We translate and detail our results for a
system of nonlinear equations, and apply them to a numerical example that
illustrates our theorems.
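As a hedged illustration of the genre, the sketch below implements Ulm's classical second-order iteration, which also avoids inverting the derivative by updating an approximate inverse; it is offered as an example of this class of methods, not as the specific process analysed here.

```python
import numpy as np

def F(x):   # example system: intersections of a circle and a parabola
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])

def J(x):   # its Jacobian (used only in matrix products, never inverted)
    return np.array([[2 * x[0], 2 * x[1]], [-2 * x[0], 1.0]])

x = np.array([1.0, 1.0])
B = np.eye(2) * 0.1            # rough initial approximation to the inverse Jacobian
for _ in range(20):
    x = x - B @ F(x)
    B = 2 * B - B @ J(x) @ B   # Ulm update: drives B toward J(x)^{-1} quadratically
print("solution:", x, " residual:", np.linalg.norm(F(x)))
```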
|
A percolation transition (PT), the formation of a macroscopic-scale large
cluster, is ordinarily continuous. However, when the growth of
large clusters is globally suppressed, the type of PT changes to a
discontinuous transition for random networks. A question arises as to whether
the type of PT also changes for scale-free (SF) networks, because the
existence of hubs incites the formation of a giant cluster. Here, we apply a
global suppression rule to the static model for SF networks, and investigate
properties of the PT. We find that even for SF networks with the degree
exponent $2 < \lambda <3$, a hybrid PT occurs at a finite transition point
$t_c$, which we can control by the suppression strength. The order parameter
jumps at $t_c^-$ and exhibits a critical behavior at $t_c^+$.
|
Clemm and Trebat-Leder (2014) proved that the number of quadratic number
fields with absolute discriminant bounded by $x$ over which there exist
elliptic curves with good reduction everywhere and rational $j$-invariant is
$\gg x\log^{-1/2}(x)$. In this paper, we assume the $abc$-conjecture to show
the sharp asymptotic $\sim cx\log^{-1/2}(x)$ for this number, obtaining
formulae for $c$ in both the real and imaginary cases. Our method has three
ingredients:
(1) We make progress towards a conjecture of Granville: Given a fixed
elliptic curve $E/\mathbb{Q}$ with short Weierstrass equation $y^2 = f(x)$ for
reducible $f \in \mathbb{Z}[x]$, we show that the number of integers $d$, $|d|
\leq D$, for which the quadratic twist $dy^2 = f(x)$ has an integral
non-$2$-torsion point is at most $D^{2/3+o(1)}$, assuming the $abc$-conjecture.
(2) We apply the Selberg--Delange method to obtain a Tauberian theorem which
allows us to count integers satisfying certain congruences while also being
divisible only by certain primes.
(3) We show that for a polynomially sparse subset of the natural numbers, the
number of pairs of elements with least common multiple at most $x$ is
$O(x^{1-\epsilon})$ for some $\epsilon > 0$. We also exhibit a matching lower
bound.
If instead of the $abc$-conjecture we assume a particular tail bound, we can
prove all the aforementioned results and that the coefficient $c$ above is
greater in the real quadratic case than in the imaginary quadratic case, in
agreement with an experimentally observed bias.
|
A key challenge towards the goal of multi-part assembly tasks is finding
robust sensorimotor control methods in the presence of uncertainty. In contrast
to previous works that rely on a priori knowledge on whether two parts match,
we aim to learn this through physical interaction. We propose a hierarchical
approach that enables a robot to autonomously assemble parts while being
uncertain about part types and positions. In particular, our probabilistic
approach learns a set of differentiable filters that leverage the tactile
sensorimotor trace from failed assembly attempts to update its belief about
part position and type. This enables a robot to overcome assembly failure. We
demonstrate the effectiveness of our approach on a set of object fitting tasks.
The experimental results indicate that our proposed approach achieves higher
precision in object position and type estimation, and accomplishes object
fitting tasks faster than baselines.
|
The pairing of two electrons on a Fermi surface due to an infinitesimal
attraction between them always results in a superconducting instability at zero
temperature ($T=0$). The equivalent question of pairing instability on a
Luttinger surface (LS) -- a contour of zeros of the propagator -- instead leads
to a quantum critical point (QCP) that separates a non-Fermi liquid (NFL) and
superconductor. A surprising and little understood aspect of pair fluctuations
at this QCP is that their thermodynamics maps to that of the Sachdev-Ye-Kitaev
(SYK) model in the strong coupling limit. Here, we offer a simple justification
for this mapping by demonstrating that (i) LS models share the
reparametrization symmetry of the $q\rightarrow \infty$ SYK model with $q$-body
interactions close to the LS, and (ii) the enforcement of
gauge invariance results in a $\frac{1}{\sqrt{\tau}}$ ($\tau\sim T^{-1}$)
behavior of the fluctuation propagator near the QCP, as is a feature of the
fundamental SYK fermion.
|
We independently determine the zero-point offset of the Gaia early Data
Release-3 (EDR3) parallaxes based on $\sim 110,000$ W Ursae Majoris (EW)-type
eclipsing binary systems. EWs cover almost the entire sky and are characterized
by a relatively complete coverage in magnitude and color. They are an excellent
proxy for Galactic main-sequence stars. We derive a $W1$-band Period-Luminosity
relation with a distance accuracy of $7.4\%$, which we use to anchor the Gaia
parallax zero-point. The final, global parallax offsets are $-28.6\pm0.6$
$\mu$as and $-25.4\pm4.0$ $\mu$as (before correction) and $4.2\pm0.5$ $\mu$as
and $4.6\pm3.7$ $\mu$as (after correction) for the five- and six-parameter
solutions, respectively. The total systematic uncertainty is $1.8$ $\mu$as. The
spatial distribution of the parallax offsets shows that the bias in the
corrected Gaia EDR3 parallaxes is less than 10 $\mu$as across $40\%$ of the
sky. Only $15\%$ of the sky is characterized by a parallax offset greater than
30 $\mu$as. Thus, we have provided independent evidence that the parallax
zero-point correction provided by the Gaia team significantly reduces the
prevailing bias. Combined with literature data, we find that the overall Gaia
EDR3 parallax offsets for Galactic stars are $[-20, -30]$ $\mu$as and 4-10
$\mu$as, respectively, before and after correction. For specific regions, an
additional deviation of about 10 $\mu$as is found.
|
The lattice Boltzmann method (LBM) has recently emerged as an efficient
alternative to classical Navier-Stokes solvers. This is particularly true for
hemodynamics in complex geometries. However, in its most basic formulation,
i.e., with the so-called single relaxation time (SRT) collision operator, it
has been observed to have a limited stability domain in the Courant/Fourier
space, strongly constraining the minimum time-step and grid size. The
development of improved collision models such as the multiple relaxation time
(MRT) operator in central moments space has tremendously widened the stability
domain, while allowing to overcome a number of other well-documented artifacts,
therefore opening the door for simulations over a wider range of grid and
time-step sizes. The present work focuses on implementing and validating a
specific collision operator, the central Hermite moments multiple relaxation
time model with the full expansion of the equilibrium distribution function, to
simulate blood flows in intracranial aneurysms. The study further proceeds with
a validation of the numerical model through different test-cases and against
experimental measurements obtained via stereoscopic particle image velocimetry
(PIV) and phase-contrast magnetic resonance imaging (PC-MRI). For a
patient-specific aneurysm both PIV and PC-MRI agree fairly well with the
simulation. Finally, low-resolution simulations were shown to be able to
capture blood flow information with sufficient accuracy, as demonstrated
through both qualitative and quantitative analysis of the flow field, while
leading to strongly reduced computation times. For instance, in the case of the
patient-specific configuration, increasing the grid size by a factor of two led
to a reduction of computation time by a factor of 14, with very good similarity
indices still ranging from 0.83 to 0.88.
|
We consider correlations, $p_{n,x}$, arising from measuring a maximally
entangled state using $n$ measurements with two outcomes each, constructed from
$n$ projections that add up to $xI$. We show that the correlations $p_{n,x}$
robustly self-test the underlying states and measurements. To achieve this, we
lift the group-theoretic Gowers-Hatami based approach for proving robust
self-tests to a more natural algebraic framework. A key step is to obtain an
analogue of the Gowers-Hatami theorem that allows an "approximate"
representation of the relevant algebra to be perturbed to an exact one.
For $n=4$, the correlations $p_{n,x}$ self-test the maximally entangled state
of every odd dimension as well as 2-outcome projective measurements of
arbitrarily high rank. The only other family of constant-sized self-tests for
strategies of unbounded dimension is due to Fu (QIP 2020) who presents such
self-tests for an infinite family of maximally entangled states with even local
dimension. Therefore, we are the first to exhibit a constant-sized self-test
for measurements of unbounded dimension as well as all maximally entangled
states with odd local dimension.
|
Strong lensing of gravitational waves (GWs) is attracting growing attention
of the community. The event rates of lensed GWs by galaxies were predicted in
numerous papers, which used some approximations to evaluate the GW strains
detectable by a single detector. The joint detection of GW signals by a network
of instruments will increase the ability to detect fainter and more distant GW
signals, which could increase the detection rate of the lensed GWs, especially
for the 3rd generation detectors, e.g., Einstein Telescope (ET) and Cosmic
Explorer (CE). Moreover, realistic GW templates will improve the accuracy of
the prediction. In this work, we consider the detection of galaxy-scale lensed
GW events under the 2nd, 2.5th, and 3rd generation detectors with the network
scenarios and adopt the realistic templates to simulate GW signals. Our
forecast is based on the Monte Carlo technique which enables us to take Earth's
rotation into consideration. We find that the overall detection rate is
improved, especially for the 3rd generation detector scenarios. More precisely,
it increases by ~37% when adopting realistic templates and, under the network
detection strategy, increases by a further ~58% relative to the realistic-template
case; we estimate that the 3rd generation GW detectors will detect
hundreds of lensed events per year. The effect of the Earth's rotation is
weakened in the detector network strategy.
|
Multi-head attention has each of the attention heads collect salient
information from different parts of an input sequence, making it a powerful
mechanism for sequence modeling. Multilingual and multi-domain learning are
common scenarios for sequence modeling, where the key challenge is to maximize
positive transfer and mitigate negative transfer across languages and domains.
In this paper, we find that non-selective attention sharing is sub-optimal for
achieving good generalization across all languages and domains. We further
propose attention sharing strategies to facilitate parameter sharing and
specialization in multilingual and multi-domain sequence modeling. Our approach
automatically learns shared and specialized attention heads for different
languages and domains to mitigate their interference. Evaluated in various
tasks including speech recognition, text-to-text and speech-to-text
translation, the proposed attention sharing strategies consistently bring gains
to sequence models built upon multi-head attention. For speech-to-text
translation, our approach yields an average of $+2.0$ BLEU over $13$ language
directions in multilingual setting and $+2.0$ BLEU over $3$ domains in
multi-domain setting.
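A toy sketch of the head-sharing idea follows; the binary gates are fixed by hand here purely for illustration, whereas the approach above learns which heads are shared and which are specialized per language or domain.

```python
import numpy as np

def gated_multihead_attention(Q, K, V, head_gates):
    """Multi-head attention where a per-input binary gate selects active heads:
    shared heads are gated on for all languages/domains, specialized heads for one."""
    H, T, d = Q.shape
    out = np.zeros((H, T, d))
    for h in range(H):
        if not head_gates[h]:
            continue                               # head switched off for this input
        scores = Q[h] @ K[h].T / np.sqrt(d)
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)              # softmax attention weights
        out[h] = w @ V[h]
    return out

rng = np.random.default_rng(6)
H, T, d = 4, 5, 8
Q, K, V = (rng.normal(size=(H, T, d)) for _ in range(3))
gates_lang_A = np.array([1, 1, 1, 0])              # heads 0-2 shared, head 3 off
print(gated_multihead_attention(Q, K, V, gates_lang_A).shape)
```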
|
We give a presentation in terms of generators and relations of the cohomology
in degree zero of the Campos-Willwacher graph complexes associated to compact
orientable surfaces of genus $g$. The results carry a natural Lie algebra
structure, and for $g=1$ we recover Enriquez' elliptic
Grothendieck-Teichm\"uller Lie algebra. In analogy to Willwacher's theorem
relating Kontsevich's graph complex to Drinfeld's Grothendieck-Teichm\"uller
Lie algebra, we call the results higher genus Grothendieck-Teichm\"uller Lie
algebras. Moreover, we find that the graph cohomology vanishes in negative
degrees.
|
Social recommendation aims to fuse social links with user-item interactions
to alleviate the cold-start problem for rating prediction. Recent developments
of Graph Neural Networks (GNNs) motivate endeavors to design GNN-based social
recommendation frameworks to aggregate both social and user-item interaction
information simultaneously. However, most existing methods neglect the social
inconsistency problem, which intuitively suggests that social links are not
necessarily consistent with the rating prediction process. Social inconsistency
can be observed at both the context level and the relation level. Therefore, we
intend to empower the GNN model with the ability to tackle the social
inconsistency problem. We propose to sample consistent neighbors by relating
sampling probability with consistency scores between neighbors. Besides, we
employ the relation attention mechanism to assign consistent relations with
high importance factors for aggregation. Experiments on two real-world datasets
verify the model effectiveness.
|
The steering dynamics of re-configurable intelligent surfaces (RIS) have
pushed them to the forefront of technologies that can be exploited to resolve
skip zones in wireless communication systems. They can enable a programmable
wireless environment, turning it into a partially deterministic space that
plays an active role in determining how wireless signals propagate. However,
RIS-based communication systems' practical implementation may face challenges
such as noise generated by the RIS structure. Besides, the transmitted signal
may face a double-fading effect over the two portions of the channel. This
article tackles this double-fading problem in near-terrestrial free-space
optical (nT-FSO) communication systems using a RIS module based upon
liquid-crystal (LC) on silicon (LCoS). A doped LC layer can directly amplify a
light when placed in an external field. Leveraging this capacity of a doped
LC, we mitigate the double-attenuation faced by the transmitted signal. We
first revisit the nT-FSO power loss scenario, then discuss the direct-light
amplification, and consider the system performance. Results show that at an
incidence angle of 51 degrees, the proposed LCoS design has
minimal RIS depth, implying less LC material. The performance results show
that the number of bits per unit bandwidth is upper-bounded and grows with the
ratio of the sub-links distances. Finally, we present and discuss open issues
to enable new research opportunities towards the use of RIS and amplifying-RIS
in nT-FSO systems.
|
In this paper, we propose a novel iterative encoding algorithm for DNA
storage to satisfy both the GC balance and run-length constraints using a
greedy algorithm. DNA strands with run-length more than three and the GC
balance ratio far from 50\% are known to be prone to errors. The proposed
encoding algorithm stores data at high information density with high
flexibility, supporting run-lengths of at most $m$ and GC balance within $0.5\pm\alpha$ for
arbitrary $m$ and $\alpha$. More importantly, we propose a novel mapping method
to reduce the average bit error compared to the randomly generated mapping
method, using a greedy algorithm. The proposed algorithm is implemented through
iterative encoding, consisting of three main steps: randomization, M-ary
mapping, and verification. The proposed algorithm has an information density of
1.8523 bits/nt in the case of $m=3$ and $\alpha=0.05$. Also, the proposed
algorithm is robust to error propagation, since the average bit error caused by
a single nt error is 2.3455 bits, a $20.5\%$ reduction compared to the
randomized mapping.
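A toy randomize-and-verify loop illustrating the two constraints follows; it omits the M-ary mapping step of the encoder described above, and the mapping used here is an arbitrary illustration.

```python
import random

def satisfies(strand, m=3, alpha=0.05):
    """Check the run-length (<= m) and GC-balance (0.5 +/- alpha) constraints."""
    run, max_run = 1, 1
    for a, b in zip(strand, strand[1:]):
        run = run + 1 if a == b else 1
        max_run = max(max_run, run)
    gc = (strand.count('G') + strand.count('C')) / len(strand)
    return max_run <= m and abs(gc - 0.5) <= alpha

def encode(bits, seed=0, m=3, alpha=0.05):
    """Draw fresh randomisation sequences until the mapped strand passes verification."""
    rng = random.Random(seed)
    while True:
        key = [rng.randint(0, 3) for _ in bits]
        strand = ''.join('ACGT'[(b * 2 + k) % 4] for b, k in zip(bits, key))
        if satisfies(strand, m, alpha):
            return strand

bits_rng = random.Random(1)
print(encode([bits_rng.randint(0, 1) for _ in range(100)]))
```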
|
Decentralized vehicle-to-everything (V2X) networks (i.e., Mode-4 C-V2X and
Mode 2a NR-V2X), rely on periodic Basic Safety Messages (BSMs) to disseminate
time-sensitive information (e.g., vehicle position) and have the potential to
improve on-road safety. For BSM scheduling, decentralized V2X networks utilize
sensing-based semi-persistent scheduling (SPS), where vehicles sense radio
resources and select suitable resources for BSM transmissions at prespecified
periodic intervals termed as Resource Reservation Interval (RRI). In this
paper, we show that such BSM scheduling (with a fixed RRI) suffers from
severe under- and over-utilization of radio resources under varying vehicle
traffic scenarios, which severely compromises timely dissemination of BSMs
and in turn leads to increased collision risks. To address this, we extend
SPS to accommodate an adaptive RRI, termed as SPS++. Specifically, SPS++ allows
each vehicle -- (i) to dynamically adjust RRI based on the channel resource
availability (by accounting for various vehicle traffic scenarios), and then,
(ii) select suitable transmission opportunities for timely BSM transmissions at
the chosen RRI. Our experiments based on Mode-4 C-V2X standard implemented
using the ns-3 simulator show that SPS++ outperforms SPS by at least $50\%$ in
terms of improved on-road safety performance, in all considered simulation
scenarios.
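The sketch below conveys the adaptive-RRI idea in pseudocode-like Python; the thresholds, RRI values, and sensing representation are hypothetical illustrations, not the SPS++ parameters.

```python
def select_rri(resource_occupancy, rri_options=(20, 50, 100)):
    """Hypothetical SPS++-style adaptation: under light sensed load use a short
    RRI (frequent BSMs); as occupancy grows, back off to longer RRIs so that
    resources are not over-utilised. Thresholds are illustrative only."""
    if resource_occupancy < 0.3:
        return rri_options[0]      # sparse traffic: update position often
    if resource_occupancy < 0.7:
        return rri_options[1]
    return rri_options[2]          # dense traffic: avoid resource collisions

def select_resource(occupancy_map, rri):
    """Pick the least-occupied periodic resource for the chosen RRI (a stand-in
    for the sensing-based candidate-resource selection of SPS)."""
    candidates = [r for r in occupancy_map if r % rri == 0]
    return min(candidates, key=lambda r: occupancy_map[r])

occupancy = {t: ((t // 10 * 7) % 10) / 10 for t in range(0, 200, 10)}  # toy sensing data
rri = select_rri(sum(occupancy.values()) / len(occupancy))
print("chosen RRI:", rri, "ms; first resource:", select_resource(occupancy, rri))
```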
|
Current gate-based quantum computers have the potential to provide a
computational advantage if algorithms use quantum hardware efficiently. To make
combinatorial optimization more efficient, we introduce the Filtering
Variational Quantum Eigensolver (F-VQE) which utilizes filtering operators to
achieve faster and more reliable convergence to the optimal solution.
Additionally we explore the use of causal cones to reduce the number of qubits
required on a quantum computer. Using random weighted MaxCut problems, we
numerically analyze our methods and show that they perform better than the
original VQE algorithm and the Quantum Approximate Optimization Algorithm
(QAOA). We also demonstrate the experimental feasibility of our algorithms on a
Honeywell trapped-ion quantum processor.
|
We study whether receiving advice from either a human or algorithmic advisor,
accompanied by five types of Local and Global explanation labelings, has an
effect on the readiness to adopt, willingness to pay, and trust in a financial
AI consultant. We compare the differences over time and in various key
situations using a unique experimental framework where participants play a
web-based game with real monetary consequences. We observed that accuracy-based
explanations of the model in initial phases lead to higher adoption rates.
When the performance of the model is immaculate, there is less importance
associated with the kind of explanation for adoption. Using more elaborate
feature-based or accuracy-based explanations helps substantially in reducing
the adoption drop upon model failure. Furthermore, using an autopilot increases
adoption significantly. Participants assigned to the AI-labeled advice with
explanations were willing to pay more for the advice than the AI-labeled advice
with a No-explanation alternative. These results add to the literature on the
importance of XAI for algorithmic adoption and trust.
|
The network-based model of social contagion has revolved around information
on local interactions; its central focus has been on network topological
properties shaping the local interactions and, ultimately, social contagion
outcomes. We extend this approach by introducing information on the global
state, or global information, into the network-based model and analyzing how it
alters social contagion dynamics in six different classes of networks: a
two-dimensional square lattice, small-world networks, Erd\H{o}s-R\'{e}nyi
networks, regular random networks, Holme-Kim networks, and Barab\'{a}si-Albert
networks. We find that there is an optimal amount of global information that
minimizes the time to reach global cascades in highly clustered networks. We
also find that global information prolongs the time to hit the tipping point
but substantially compresses the time to reach global cascades after then, so
that the overall time to reach global cascades can even be shortened under
certain conditions. Finally, we show that random links substitute for global
information in regulating the social contagion dynamics.
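One simple way to inject global information into a threshold contagion model is sketched below; the adoption rule, weight `w`, and seeding choice are illustrative assumptions, not necessarily the exact update studied above.

```python
import numpy as np
import networkx as nx

def cascade_time(G, w, phi=0.2, steps=300):
    """Threshold contagion where node v adopts once
    local_fraction(v) + w * global_fraction >= phi;
    w weights the global information (w = 0 recovers a purely local model)."""
    state = np.zeros(G.number_of_nodes(), dtype=bool)
    state[[0, *G.neighbors(0)]] = True         # seed one clustered neighbourhood
    for t in range(steps):
        global_frac = state.mean()
        new = state.copy()
        for v in G.nodes:
            nbrs = list(G.neighbors(v))
            local = np.mean(state[nbrs]) if nbrs else 0.0
            if local + w * global_frac >= phi:
                new[v] = True
        if new.all():
            return t + 1                       # time to a global cascade
        state = new
    return None

G = nx.watts_strogatz_graph(500, 6, 0.05, seed=5)   # highly clustered small world
for w in (0.0, 0.2, 0.5):
    print(f"w = {w}: cascade time = {cascade_time(G, w)}")
```

With larger `w`, the late stage of the cascade compresses sharply once global prevalence builds, echoing the behaviour described above.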
|
This paper provides a numerical framework for computing the achievable rate
region of memoryless multiple access channel (MAC) with a continuous alphabet
from data. In particular, we use recent results on variational lower bounds on
mutual information and KL-divergence to compute the boundaries of the rate
region of MAC using a set of functions parameterized by neural networks. Our
method relies on a variational lower bound on KL-divergence and an upper bound
on KL-divergence based on the f-divergence inequalities. Unlike previous work,
which computes an estimate of mutual information that is neither a lower nor
an upper bound, our method estimates a lower bound on mutual information. Our
numerical results show that the proposed method provides tighter estimates
compared to the MINE-based estimator at large SNRs while being computationally
more efficient. Finally, we apply the proposed method to the optical intensity
MAC and obtain a new achievable rate boundary tighter than prior works.
|
Measuring the electrophoretic mobility of molecules is a powerful
experimental approach for investigating biomolecular processes. A frequent
challenge in the context of single-particle measurements is throughput,
limiting the obtainable statistics. Here, we present a molecular force sensor
and charge detector based on parallelised imaging and tracking of tethered
double-stranded DNA functionalised with charged nanoparticles interacting with
an externally applied electric field. Tracking the position of the tethered
particle with simultaneous nanometre precision and microsecond temporal
resolution allows us to detect and quantify electrophoretic forces down to the
sub-piconewton scale. Furthermore, we demonstrate that this approach is capable
of detecting changes to the particle charge state, as induced by the addition
of charged biomolecules or changes to pH. Our approach provides an alternative
route to studying structural and charge dynamics at the single-molecule level.
|
One of the main challenges in real-world reinforcement learning is to learn
successfully from limited training samples. We show that in certain settings,
the available data can be dramatically increased through a form of multi-task
learning, by exploiting an invariance property in the tasks. We provide a
theoretical performance bound for the gain in sample efficiency under this
setting. This motivates a new approach to multi-task learning, which involves
the design of an appropriate neural network architecture and a prioritized
task-sampling strategy. We demonstrate empirically the effectiveness of the
proposed approach on two real-world sequential resource allocation tasks where
this invariance property occurs: financial portfolio optimization and meta
federated learning.
|
Graphs are a common model for complex relational data such as social networks
and protein interactions, and such data can evolve over time (e.g., new
friendships) and be noisy (e.g., unmeasured interactions). Link prediction aims
to predict future edges or infer missing edges in the graph, and has diverse
applications in recommender systems, experimental design, and complex systems.
Even though link prediction algorithms strongly depend on the set of edges in
the graph, existing approaches typically do not modify the graph topology to
improve performance. Here, we demonstrate how simply adding a set of edges,
which we call a \emph{proposal set}, to the graph as a pre-processing step can
improve the performance of several link prediction algorithms. The underlying
idea is that if the edges in the proposal set generally align with the
structure of the graph, link prediction algorithms are further guided towards
predicting the right edges; in other words, adding a proposal set of edges is a
signal-boosting pre-processing step. We show how to use existing link
prediction algorithms to generate effective proposal sets and evaluate this
approach on various synthetic and empirical datasets. We find that proposal
sets meaningfully improve the accuracy of link prediction algorithms based on
both neighborhood heuristics and graph neural networks. Code is available at
\url{https://github.com/CUAI/Edge-Proposal-Sets}.
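The following sketch illustrates the pre-processing step using standard networkx heuristics as stand-ins for the link predictors; the choice of Adamic-Adar as the proposal generator and Jaccard as the final predictor is an assumption for demonstration only.

```python
import networkx as nx

def proposal_set(G, k):
    """Generate a proposal set: the top-k non-edges ranked by a simple heuristic
    (Adamic-Adar here; any link predictor could serve as the generator)."""
    scores = nx.adamic_adar_index(G, nx.non_edges(G))
    return [(u, v) for u, v, _ in sorted(scores, key=lambda t: -t[2])[:k]]

def boosted_scores(G, k):
    """Signal boosting: add the proposal set to the graph, then run the final predictor."""
    H = G.copy()
    H.add_edges_from(proposal_set(G, k))
    return sorted(nx.jaccard_coefficient(H, nx.non_edges(G)),
                  key=lambda t: -t[2])

G = nx.karate_club_graph()                     # small empirical test graph
print(boosted_scores(G, k=10)[:5])             # top predicted edges
```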
|
The TianQin space gravitational-wave (GW) observatory will contain three
geocentric, circularly orbiting spacecraft with an orbital radius of $10^5$
km, to detect GWs in the milli-hertz frequency band. Each spacecraft pair
will establish a $1.7\times10^5$ km-long laser interferometer immersed in the solar
wind and the magnetospheric plasmas to measure the phase deviations induced by
the GW. GW detection requires a high-precision measurement of the laser phase.
The cumulative effects of the long distance and the periodic oscillations of
the plasma density may induce an additional phase noise. This paper aims to
model the plasma induced phase deviation of the inter-spacecraft laser signals,
using a realistic orbit simulator and the Space Weather Modeling Framework
(SWMF) model. Preliminary results show that the plasma density oscillation can
induce phase deviations close to $2\times10^{-6}$ rad/Hz$^{1/2}$, or 0.3 pm/Hz$^{1/2}$, in the
milli-hertz frequency band, which is within the error budget assigned to the
displacement noise of the interferometry. The amplitude spectral densities of the
phases along the three arms become more separated when the orbital plane is
parallel to the Sun-Earth line or during a magnetic storm. Finally, the
dependence of the phase deviations on the orbital radius is examined.
|
We elaborate on the role of higher-derivative curvature invariants as a
quantum selection mechanism of regular spacetimes in the framework of the
Lorentzian path integral approach to quantum gravity. We show that for a large
class of prominently regular black hole metrics there are higher-derivative
curvature invariants which are singular. If such terms are included in the
action, according to the finite action principle applied to a higher-derivative
gravity model, not only singular spacetimes but also some of the regular ones
do not seem to contribute to the path integral.
|
This note provides a variational description of the mechanical effects of
flexural stiffening of a 2D plate glued to an elastic-brittle or an
elastic-plastic reinforcement. The reinforcement is assumed to be linear
elastic outside possible free plastic yield lines or free cracks. Explicit Euler
equations and a compliance identity are shown for the reinforcement of a 1D
beam.
|
The generation of mean flows is a long-standing issue in rotating fluids.
Motivated by planetary objects, we consider here a rapidly rotating
fluid-filled spheroid, which is subject to weak perturbations of either the
boundary (e.g. tides) or the rotation vector (e.g. in direction by precession,
or in magnitude by longitudinal librations). Using boundary-layer theory, we
determine the mean zonal flows generated by nonlinear interactions within the
viscous Ekman layer. These flows are of interest because they survive in the
relevant planetary regime of both vanishing forcings and viscous effects. We
extend the theory to take into account (i) the combination of spatial and
temporal perturbations, providing new mechanically driven zonal flows (e.g.
driven by latitudinal librations), and (ii) the spheroidal geometry relevant
for planetary bodies. Wherever possible, our analytical predictions are
validated with direct numerical simulations. The theoretical solutions are in
good quantitative agreement with the simulations, with expected discrepancies
(zonal jets) in the presence of inertial waves generated at the critical
latitudes (as for precession). Moreover, we find that the mean zonal flows can
be strongly affected in spheroids. Guided by planetary applications, we also
revisit the scaling laws for the geostrophic shear layers at the critical
latitudes, and the influence of a solid inner core.
|
Assigning meaning to parts of image data is the goal of semantic image
segmentation. Machine learning methods, specifically supervised learning, are
commonly used in a variety of tasks formulated as semantic segmentation. One of
the major challenges in the supervised learning approaches is expressing and
collecting the rich knowledge that experts have with respect to the meaning
present in the image data. Towards this, typically a fixed set of labels is
specified and experts are tasked with annotating the pixels, patches or
segments in the images with the given labels. In general, however, the set of
classes does not fully capture the rich semantic information present in the
images. For example, in medical imaging such as histology images, the different
parts of cells could be grouped and sub-grouped based on the expertise of the
pathologist.
To achieve such a precise semantic representation of the concepts in the
image, we need access to the full depth of knowledge of the annotator. In this
work, we develop a novel approach to collect segmentation annotations from
experts based on psychometric testing. Our method consists of the psychometric
testing procedure, active query selection, query enhancement, and a deep metric
learning model to achieve a patch-level image embedding that allows for
semantic segmentation of images. We show the merits of our method with
evaluations on synthetically generated, aerial, and histology images.
|
We report an infrared spectroscopy study of the axion topological insulator
candidate EuIn$_2$As$_2$ for which the Eu moments exhibit an A-type
antiferromagnetic (AFM) order below $T_N \simeq 18 \mathrm{K}$. The low energy
response is composed of a weak Drude peak at the origin, a pronounced
infrared-active phonon mode at 185 cm$^{-1}$ and a free carrier plasma edge
around 600 cm$^{-1}$. The interband transitions start above 800 cm$^{-1}$ and
give rise to a series of weak absorption bands at 5\,000 and 12\,000 cm$^{-1}$
and strong ones at 20\,000, 27\,500 and 32\,000 cm$^{-1}$. The AFM transition
gives rise to pronounced anomalies of the charge response in terms of a
cusp-like maximum of the free carrier scattering rate around $T_N$ and large
magnetic splittings of the interband transitions at 5\,000 and 12\,000
cm$^{-1}$. The phonon mode at 185 cm$^{-1}$ also has an anomalous temperature
dependence around $T_N$ which suggests that it couples to the fluctuations of
the Eu spins. The combined data provide evidence for a strong interaction
amongst the charge, spin and lattice degrees of freedom.
|
Backtracking of RNA polymerase (RNAP) is an important pausing mechanism
during DNA transcription that is part of the error correction process that
enhances transcription fidelity. We model the backtracking mechanism of RNA
polymerase, which usually happens when the polymerase tries to incorporate a
mismatched nucleotide triphosphate. Previous models have made simplifying
assumptions such as neglecting the trailing polymerase behind the backtracking
polymerase or assuming that the trailing polymerase is stationary. We derive
exact analytic solutions of a stochastic model that includes locally
interacting RNAPs by explicitly showing how a trailing RNAP influences the
probability that an error is corrected or incorporated by the leading
backtracking RNAP. We also provide two related methods for computing the mean
times to error correction or incorporation given an initial local RNAP
configuration.
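As a hedged illustration of mean-time computations of this kind, the sketch below solves a first-step analysis for a generic birth-death backtracking chain with absorption at both ends; the model here does not encode the trailing RNAP's influence, which is the paper's distinctive ingredient.

```python
import numpy as np

# Mean absorption times for a birth-death chain on states 0..N, absorbing at both
# ends (0: error incorporated, N: backtrack resolved/corrected). First-step analysis:
# tau_i = 1 + p*tau_{i+1} + (1-p)*tau_{i-1}, with tau_0 = tau_N = 0.
N, p = 10, 0.45                      # p: probability of backtracking one step deeper
A = np.zeros((N - 1, N - 1))
b = -np.ones(N - 1)
for i in range(1, N):                # interior states 1..N-1
    if i > 1:
        A[i - 1, i - 2] = 1 - p      # step toward incorporation
    A[i - 1, i - 1] = -1
    if i < N - 1:
        A[i - 1, i] = p              # step deeper into the backtrack
tau = np.linalg.solve(A, b)
print("mean absorption time from depth 1:", tau[0])
```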
|
Let $G$ be any group and $k\geq 1$ be an integer number. The ordered
configuration set of $k$ points in $G$ is given by the subset
$F(G,k)=\{(g_1,\ldots,g_k)\in G\times \cdots\times G: g_i\neq g_j \text{ for }
i\neq j\}\subset G^k$. In this work, we will study the configuration set
$F(G,k)$ in algebraic terms as a subset of the product $G^k=G\times
\cdots\times G$. As we will see, we develop practical tools for dealing with
the configuration set of $k$ points in $G$, which, to our knowledge, cannot be
found in the literature.
|