Einstein-Podolsky-Rosen (EPR) steering, regarded as a category of quantum nonlocal correlations, possesses an asymmetry not shared by entanglement or Bell nonlocality. For multipartite EPR steering, monogamy, which prevents two observers from steering a third one simultaneously, emerges as an essential property. However, shareability configurations beyond monogamy can arise in reduced subsystems when the number of measurement settings is increased, and experimental verification of these configurations is still absent. Here, in an optical experiment, we provide a proof-of-principle demonstration of shareability of EPR steering without the constraint of monogamy in a three-qubit system, in which Alice can be steered by Bob and Charlie simultaneously. Moreover, based on the reduced bipartite EPR steering detection, we verify genuine three-qubit entanglement. This work provides a basis for an improved understanding of multipartite EPR steering and has potential applications in many quantum information protocols, such as multipartite entanglement detection and quantum cryptography.
|
HD 163296 is a Herbig Ae star that underwent a dramatic $\sim$0.8 magnitude
drop in brightness in the V photometric band in 2001 and a brightening in the
near-IR in 2002. Because the star possesses Herbig-Haro objects travelling in
outflowing bipolar jets, it was suggested that the drop in brightness was due
to a clump of dust entrained in a disk wind, blocking the line of sight toward
the star. In order to quantify this hypothesis, we investigated the brightness
drop at visible wavelengths and the brightening at near-IR wavelengths of HD
163296 using the Monte Carlo Radiative Transfer Code, HOCHUNK3D. We created
three models to understand the events. Model 1 describes the quiescent state of
the system. Model 2 describes the change in structure that led to the drop in
brightness in 2001. Model 3 describes the structure needed to produce the
observed 2002 brightening of the near-IR wavelengths. Models 2 and 3 utilize a
combination of a disk wind and central bipolar flow. By introducing a filled
bipolar cavity in Models 2 and 3, we were able to successfully simulate a
jet-like structure for the star with a disk wind and created the drop and
subsequent increase in brightness of the system. On the other hand, when the
bipolar cavity is not filled, Model 1 replicates the quiescent state of the
system.
|
Finding nearest neighbors in high-dimensional spaces is a fundamental
operation in many diverse application domains. Locality Sensitive Hashing (LSH)
is one of the most popular techniques for approximate nearest neighbor search in high-dimensional spaces. The main benefits of LSH are its
sub-linear query performance and theoretical guarantees on the query accuracy.
In this survey paper, we provide a review of state-of-the-art LSH and
Distributed LSH techniques. Most importantly, unlike any other prior survey, we
present how Locality Sensitive Hashing is utilized in different application
domains.
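To make the core idea concrete, below is a minimal sketch of one classical LSH family, random-hyperplane hashing for cosine similarity, in Python; the table and bit counts are illustrative choices, not parameters taken from any surveyed technique.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_hash_tables(data, n_tables=8, n_bits=12):
    """Index points into n_tables tables keyed by n_bits random-hyperplane signs."""
    planes = [rng.normal(size=(n_bits, data.shape[1])) for _ in range(n_tables)]
    tables = []
    for P in planes:
        table = {}
        for idx, row in enumerate(data @ P.T > 0):   # boolean signature per point
            table.setdefault(row.tobytes(), []).append(idx)
        tables.append(table)
    return planes, tables

def query(q, data, planes, tables):
    """Gather candidates from matching buckets, then rank the candidates exactly."""
    candidates = set()
    for P, table in zip(planes, tables):
        candidates.update(table.get((q @ P.T > 0).tobytes(), []))
    if not candidates:
        return None
    cand = np.fromiter(candidates, dtype=int)
    return cand[np.argmin(np.linalg.norm(data[cand] - q, axis=1))]

data = rng.normal(size=(10000, 64))
q = data[42] + 0.01 * rng.normal(size=64)            # near-duplicate of point 42
planes, tables = build_hash_tables(data)
print(query(q, data, planes, tables))                # likely 42
```

Only the buckets sharing a signature with the query are inspected, which is the source of the sub-linear query behavior; accuracy is tuned through the number of tables and bits.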
|
We introduce a design of electrically isolated floating bilayer GaAs quantum
wells (QW) in which application of a large gating voltage controllably and
highly reproducibly induces charges that remain trapped in the bilayer after
removal of the gating voltage. At smaller gate voltages, the bilayer is fully
electrically isolated from external electrodes by thick insulating barriers.
This design permits full control of the total and differential densities of two
coupled 2D electron systems. The floating bilayer design provides a unique
approach for studying systems inaccessible by simple transport measurements. It
also provides the ability to measure the charge transfer between the layers,
even when the in-plane resistivities of the 2D systems diverge. We measure the
capacitance and inter-layer tunneling spectra of the QW bilayer with
independent control of the top and bottom layer electron densities. Our
measurements display strongly enhanced inter-layer tunneling current at the
total filling factor of 1, a signature of exciton condensation of a strongly
interlayer-correlated bilayer system. With fully tunable densities of
individual layers, the floating bilayer QW system provides a versatile platform
to access previously unavailable information on the quantum phases in electron
bilayer systems.
|
There is an extensive literature on the asymptotic order of Sudler's
trigonometric product $P_N (\alpha) = \prod_{n=1}^N |2 \sin (\pi n \alpha)|$
for fixed or for "typical" values of $\alpha$. In the present paper we
establish a structural result, which for a given $\alpha$ characterizes those
$N$ for which $P_N(\alpha)$ attains particularly large values. This
characterization relies on the coefficients of $N$ in its Ostrowski expansion
with respect to $\alpha$, and allows us to obtain very precise estimates for
$\max_{1 \le N \leq M} P_N(\alpha)$ and for $\sum_{N=1}^M P_N(\alpha)^c$ in
terms of $M$, for any $c>0$. Furthermore, our arguments give a natural
explanation of the fact that the value of the hyperbolic volume of the
complement of the figure-eight knot appears generically in results on the
asymptotic order of the Sudler product and of the Kashaev invariant.
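The quantities involved are straightforward to explore numerically; the short Python sketch below (purely illustrative) evaluates $P_N(\alpha)$ for all $N \le M$ and reports $\max_{1 \le N \leq M} P_N(\alpha)$ together with its maximizer for the golden mean.

```python
import numpy as np

def sudler_products(alpha, M):
    """Return P_1, ..., P_M where P_N(alpha) = prod_{n<=N} |2 sin(pi n alpha)|."""
    n = np.arange(1, M + 1)
    return np.cumprod(np.abs(2.0 * np.sin(np.pi * n * alpha)))

alpha = (np.sqrt(5.0) - 1.0) / 2.0       # golden mean, a badly approximable alpha
P = sudler_products(alpha, 100000)
print(P.max(), P.argmax() + 1)           # max_{1<=N<=M} P_N(alpha) and its argmax
```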
|
We present a detailed analysis of a cool-core galaxy cluster Abell 3017, at a
redshift of z=0.219, which has been identified to be merging with its companion
cluster Abell 3016. This study makes use of X-ray (Chandra), UV (GALEX), optical (ESO/VLT), mid-infrared (WISE), and radio (uGMRT) observations of this
cluster. Using various image processing techniques, such as unsharp masking,
2-d fits using Beta models, contour binning and the use of surface brightness
profiles, we show the existence of a pair of X-ray cavities, at a projected
distance of $\sim$20'' (70 kpc) and $\sim$16'' (57 kpc), respectively from the
core of Abell~3017. We also detect an excess of X-ray emission located 25'' ($\sim$88 kpc) south of the centre of Abell 3017, which is likely due to bulk motions in the ICM caused either by gas sloshing or by ram-pressure stripping during a merger. We find that the radio lobes are responsible for the observed X-ray cavities detected in this system. The low values of the mid-IR WISE colours [W1-W2] and [W2-W3] imply that the central BCG of Abell~3017 is a star-forming galaxy. The current star formation rates of the central BCG, estimated from the ${\rm H\alpha}$ and GALEX FUV luminosities, are $\sim 5.06\pm 0.78$ Msun yr$^{-1}$ and $\sim 9.20\pm 0.81$ Msun yr$^{-1}$, respectively. We detect, for the first time, a radio phoenix $\sim$150 kpc away from the radio core, with a spectral index $\alpha \leq -1.8$. We also report the detection of ${\rm Pa}_\alpha$ emission in this cluster using ESO VLT SINFONI imaging data.
|
An asymptotic formula is given for the number of y-smooth numbers up to x in
a Beatty sequence corresponding to an irrational number of finite type.
|
The extreme fragility of deep neural networks when presented with tiny
perturbations in their inputs was independently discovered by several research
groups in 2013, but in spite of enormous effort these adversarial examples
remained a baffling phenomenon with no clear explanation. In this paper we
introduce a new conceptual framework (which we call the Dimpled Manifold Model)
which provides a simple explanation for why adversarial examples exist, why
their perturbations have such tiny norms, why these perturbations look like
random noise, and why a network which was adversarially trained with
incorrectly labeled images can still correctly classify test images. In the
last part of the paper we describe the results of numerous experiments which
strongly support this new model, and in particular our assertion that
adversarial perturbations are roughly perpendicular to the low dimensional
manifold which contains all the training examples.
|
The numerical solution of dynamical systems with memory requires the
efficient evaluation of Volterra integral operators in an evolutionary manner.
After appropriate discretisation, the basic problem can be represented as a
matrix-vector product with a lower triangular but densely populated matrix. For
typical applications, like fractional diffusion or large scale dynamical
systems with delay, the memory cost for storing the matrix approximations and
complete history of the data then would become prohibitive for an accurate
numerical approximation. For Volterra-integral operators of convolution type,
the \emph{fast and oblivious convolution quadrature} method of Sch\"adle,
Lopez-Fernandez, and Lubich allows one to compute the discretized evaluation with $N$ time steps in $O(N \log N)$ complexity, requiring only $O(\log N)$
active memory to store a compressed version of the complete history of the
data. We will show that this algorithm can be interpreted as an
$\mathcal{H}$-matrix approximation of the underlying integral operator and,
consequently, a further improvement can be achieved, in principle, by resorting
to $\mathcal{H}^2$-matrix compression techniques. We formulate a variant of the
$\mathcal{H}^2$-matrix vector product for discretized Volterra integral
operators that can be performed in an evolutionary and oblivious manner and
requires only $O(N)$ operations and $O(\log N)$ active memory. In addition to
the acceleration, more general asymptotically smooth kernels can be treated and
the algorithm does not require a priori knowledge of the number of time steps.
The efficiency of the proposed method is demonstrated by application to some
typical test problems.
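For orientation, the baseline these methods improve upon is the naive evolutionary evaluation: after discretisation, applying the Volterra operator step by step is a lower triangular matrix-vector product that costs $O(N^2)$ operations and keeps the entire $O(N)$ history in active memory. A minimal Python sketch with an assumed toy convolution kernel:

```python
import numpy as np

N, tau = 1000, 1.0e-2
kernel = lambda t: t ** (-0.5) / np.sqrt(np.pi)   # toy weakly singular kernel

u = np.random.default_rng(1).normal(size=N)       # full data history, kept in memory
v = np.zeros(N)
for n in range(1, N):                             # evolutionary evaluation
    past = np.arange(n)
    v[n] = tau * np.sum(kernel((n - past) * tau) * u[past])   # O(n) work per step
# total: O(N^2) operations and O(N) active memory -- precisely the costs that the
# oblivious convolution quadrature and H^2-matrix variants reduce to O(N log N)
# (resp. O(N)) operations with only O(log N) active memory
```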
|
Abridged for arXiv: In this work, we apply a powerful new technique in order
to observationally derive accurate assembly histories through a self-consistent
combined stellar dynamical and population galaxy model. We present this
approach for three edge-on lenticular galaxies from the Fornax3D project -- FCC
153, FCC 170, and FCC 177 -- in order to infer their mass assembly histories
individually and in the context of the Fornax cluster. The method was tested on
mock data from simulations to quantify its reliability. We find that the
galaxies studied here have all been able to form dynamically-cold (intrinsic
vertical velocity dispersion $\sigma_z \lesssim 50\ {\rm km}\ {\rm s}^{-1}$)
stellar disks after cluster infall. Moreover, the pre-existing (old) high
angular momentum components have retained their angular momentum (orbital
circularity $\lambda_z > 0.8$) through to the present day. Comparing the
derived assembly histories with a comparable galaxy in a low-density
environment -- NGC 3115 -- we find evidence for cluster-driven suppression of
stellar accretion and merging. We measure the intrinsic stellar age--velocity-dispersion relation and find that the shape of the relation is
consistent with galaxies in the literature across redshift. There is tentative
evidence for enhancement in the luminosity-weighted intrinsic vertical velocity
dispersion due to the cluster environment. But importantly, there is an
indication that metallicity may be a key driver of this relation. We finally
speculate that the cluster environment is responsible for the S0 morphology of
these galaxies via the gradual external perturbations, or `harassment',
generated within the cluster.
|
Knowledge distillation (KD) has been actively studied for image
classification tasks in deep learning, aiming to improve the performance of a
student model based on the knowledge from a teacher model. However, there have
been very few efforts to apply KD to image regression with a scalar response, and there is no KD method applicable to both tasks. Moreover,
existing KD methods often require a practitioner to carefully choose or adjust
the teacher and student architectures, making these methods less scalable in
practice. Furthermore, although KD is usually conducted in scenarios with
limited labeled data, very few techniques are developed to alleviate such data
insufficiency. To solve the above problems in an all-in-one manner, we propose
in this paper a unified KD framework based on conditional generative
adversarial networks (cGANs), termed cGAN-KD. Fundamentally different from
existing KD methods, cGAN-KD distills and transfers knowledge from a teacher
model to a student model via cGAN-generated samples. This unique mechanism
makes cGAN-KD suitable for both classification and regression tasks, compatible
with other KD methods, and insensitive to the teacher and student
architectures. Also, benefiting from the recent advances in cGAN methodology
and our specially designed subsampling and filtering procedures, cGAN-KD also
performs well when labeled data are scarce. An error bound of a student model
trained in the cGAN-KD framework is derived in this work, which theoretically
explains why cGAN-KD takes effect and guides the implementation of cGAN-KD in
practice. Extensive experiments on CIFAR-10 and Tiny-ImageNet show that we can
incorporate state-of-the-art KD methods into the cGAN-KD framework to reach a
new state of the art. Also, experiments on RC-49 and UTKFace demonstrate the
effectiveness of cGAN-KD in image regression tasks, where existing KD methods
are inapplicable.
|
For a complex Lie group G and a prime number p, Totaro conjectured that the dimension of the singular cohomology with Z/p-coefficients of the classifying space of G is bounded above by that of the de Rham cohomology of the
classifying stack of (the split form of) G in characteristic p. This conjecture
was recently proven by Kubrak--Prikhodko. In this note, we give a shorter
proof.
|
In this paper, we propose some variational formulations with the use of high
order impedance boundary condition (HOIBC) to solve the scattering problem. We
study the existence and uniqueness of the solution, and then discretize these formulations. We give validations of the HOIBC obtained with a
MoM code that show the improvement in accuracy over the standard impedance
boundary condition (SIBC) computations.
|
Bayesian experimental design (BED) aims at designing an experiment to maximize the information gathered from the collected data. The optimal design
is usually achieved by maximizing the mutual information (MI) between the data
and the model parameters. When the analytical expression of the MI is
unavailable, e.g., having implicit models with intractable data distributions,
a neural network-based lower bound of the MI was recently proposed and a
gradient ascent method was used to maximize the lower bound. However, the
approach in Kleinegesse et al., 2020 requires a pathwise sampling gradient to compute the gradient of the MI lower bound with respect to the design variables, and such pathwise gradients are usually inaccessible for implicit models. In this work, we propose a hybrid gradient approach that leverages recent advances in variational MI estimators and evolution strategies
(ES) combined with black-box stochastic gradient ascent (SGA) to maximize the
MI lower bound. This allows the design process to be achieved through a unified
scalable procedure for implicit models without sampling path gradients. Several
experiments demonstrate that our approach significantly improves the
scalability of BED for implicit models in high-dimensional design space.
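To sketch the black-box ingredient, an antithetic evolution-strategies estimator recovers a design gradient from objective evaluations alone, with no pathwise sampling required. The Python below uses a toy objective standing in for the MI lower bound; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def es_gradient(f, d, sigma=0.1, n_pairs=32):
    """Antithetic evolution-strategies estimate of grad f at design d."""
    g = np.zeros_like(d)
    for _ in range(n_pairs):
        eps = rng.normal(size=d.shape)
        g += (f(d + sigma * eps) - f(d - sigma * eps)) / (2.0 * sigma) * eps
    return g / n_pairs

# noisy black-box stand-in for the MI lower bound as a function of the design
f = lambda d: -np.sum((d - 1.0) ** 2) + 0.01 * rng.normal()

d = np.zeros(5)
for _ in range(200):            # black-box stochastic gradient ascent on the design
    d += 0.05 * es_gradient(f, d)
print(d)                        # approaches the optimum at the all-ones design
```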
|
Medical Visual Question Answering (VQA) is a challenging multi-modal task that has attracted wide attention from the computer vision and natural language processing research communities. Since most current medical VQA models focus on visual content while ignoring the importance of text, this paper proposes a multi-view attention-based model (MuVAM) for medical visual question answering which integrates the high-level semantics of medical images on the basis of the text description. Firstly, different methods are utilized to extract the features of the image and the question for the two modalities of vision and text. Secondly, this paper proposes a multi-view attention mechanism that includes Image-to-Question (I2Q) attention and Word-to-Text (W2T) attention. Multi-view attention can correlate the question with the image and words in order to better analyze the question and get an accurate answer. Thirdly, a composite loss is
presented to predict the answer accurately after multi-modal feature fusion and
improve the similarity between visual and textual cross-modal features. It
consists of classification loss and image-question complementary (IQC) loss.
Finally, to address data errors and missing labels in the VQA-RAD dataset, we collaborate with medical experts to correct and complete this dataset and then construct an enhanced dataset, VQA-RADPh. Experiments on these two datasets show that MuVAM surpasses state-of-the-art methods.
|
We explore hierarchical black hole (BH) mergers in nuclear star clusters
(NSCs), globular clusters (GCs) and young star clusters (YSCs), accounting for
both original and dynamically assembled binary BHs (BBHs). We find that the
median mass of both first- and nth-generation dynamical mergers is larger in
GCs and YSCs with respect to NSCs, because the lighter BHs are ejected by
supernova kicks from the lower-mass clusters. Also, first- and nth-generation
BH masses are strongly affected by the metallicity of the progenitor stars: the
median mass of the primary BH of an nth-generation merger is $\sim{}24-38$
M$_\odot$ ($\sim{}9-15$ M$_\odot$) in metal-poor (metal-rich) NSCs. The maximum
BH mass mainly depends on the escape velocity: BHs with mass up to several
thousand M$_\odot$ form in NSCs, while YSCs and GCs host BHs with mass up to
several hundred M$_\odot$. Furthermore, we calculate the fraction of mergers
with at least one component in the pair-instability mass gap ($f_{\rm PI}$) and
in the intermediate-mass BH regime ($f_{\rm IMBH}$). In the fiducial model for
dynamical BBHs with metallicity $Z=0.002$, we find $f_{\rm PI}\approx{}0.05$,
$0.02$ and $0.007$ ($f_{\rm IMBH}\approx{}0.01$, $0.002$ and $0.001$) in NSCs,
GCs and YSCs, respectively. Both $f_{\rm PI}$ and $f_{\rm IMBH}$ drop by at
least one order of magnitude at solar metallicity. Finally, we investigate the
formation of GW190521 by assuming that it is either a nearly equal-mass BBH or
an intermediate-mass ratio inspiral.
|
We consider an interacting particle system with two species under strong
competition dynamics between the two species. Then, through the hydrodynamic
limit procedure for the microscopic model, we derive a one-phase Stefan-type free boundary problem with non-linear diffusion by letting the competition rate diverge. The non-linearity of the diffusion comes from a zero-range dynamics for one species, while for technical reasons we let the other species diffuse weakly according to Kawasaki dynamics, which macroscopically corresponds to the vanishing viscosity method.
|
Visual place recognition is a challenging task in computer vision and a key
component of camera-based localization and navigation systems. Recently,
Convolutional Neural Networks (CNNs) have achieved high performance and good generalization capabilities. They are usually trained using pairs or triplets
of images labeled as either similar or dissimilar, in a binary fashion. In
practice, the similarity between two images is not binary, but rather
continuous. Furthermore, training these CNNs is computationally complex and
involves costly pair and triplet mining strategies.
We propose a Generalized Contrastive loss (GCL) function that relies on image
similarity as a continuous measure, and use it to train a siamese CNN.
Furthermore, we propose three techniques for automatic annotation of image
pairs with labels indicating their degree of similarity, and deploy them to
re-annotate the MSLS, TB-Places, and 7Scenes datasets.
We demonstrate that siamese CNNs trained using the GCL function and the
improved annotations consistently outperform their binary counterparts. Our
models trained on MSLS outperform the state-of-the-art methods, including
NetVLAD, and generalize well on the Pittsburgh, TokyoTM and Tokyo 24/7
datasets. Furthermore, training a siamese network using the GCL function does
not require complex pair mining. We release the source code at
https://github.com/marialeyvallina/generalized_contrastive_loss.
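One natural reading of such a loss replaces the binary label of the classical contrastive loss with a graded similarity $\psi \in [0, 1]$; the PyTorch sketch below illustrates that reading and is not necessarily the authors' exact formulation (the released code above is authoritative).

```python
import torch

def generalized_contrastive_loss(e1, e2, psi, margin=1.0):
    """Contrastive loss with a continuous similarity label psi in [0, 1]."""
    d = torch.norm(e1 - e2, dim=1)                             # embedding distance
    attract = psi * d.pow(2)                                   # graded pull together
    repel = (1 - psi) * torch.clamp(margin - d, min=0).pow(2)  # graded push apart
    return 0.5 * (attract + repel).mean()

# toy usage: a batch of 8 embedding pairs with graded similarity labels
e1 = torch.randn(8, 128, requires_grad=True)   # output of one siamese branch
e2 = torch.randn(8, 128)                       # output of the other branch
psi = torch.rand(8)                            # continuous ground-truth similarity
loss = generalized_contrastive_loss(e1, e2, psi)
loss.backward()                                # gradients flow into the siamese CNN
```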
|
We prove formulas for the rational Chow motives of moduli spaces of
semistable vector bundles and Higgs bundles of rank 3 and coprime degree on a
smooth projective curve. Our approach involves identifying criteria to lift
identities in (a completion of) the Grothendieck group of effective Chow
motives to isomorphisms in the category of Chow motives. For the Higgs moduli
space, we use motivic Bialynicki-Birula decompositions associated to a scaling
action with variation of stability and wall-crossing for moduli spaces of rank
2 pairs, which occur in the fixed locus of this action.
|
The NLP community has seen substantial recent interest in grounding to
facilitate interaction between language technologies and the world. However, as
a community, we use the term broadly to reference any linking of text to data
or non-textual modality. In contrast, Cognitive Science more formally defines
"grounding" as the process of establishing what mutual information is required
for successful communication between two interlocutors -- a definition which
might implicitly capture the NLP usage but differs in intent and scope. We
investigate the gap between these definitions and seek answers to the following
questions: (1) What aspects of grounding are missing from NLP tasks? Here we
present the dimensions of coordination, purviews and constraints. (2) How is
the term "grounding" used in the current research? We study the trends in
datasets, domains, and tasks introduced in recent NLP conferences. And finally,
(3) How to advance our current definition to bridge the gap with Cognitive
Science? We present ways to both create new tasks or repurpose existing ones to
make advancements towards achieving a more complete sense of grounding.
|
We develop a Bayesian spatio-temporal model to study pre-industrial grain
market integration during the Finnish famine of the 1860s. Our model takes into
account several problematic features often present when analysing multiple
spatially interdependent time series. For example, compared with the error
correction methodology commonly applied in econometrics, our approach allows
simultaneous modeling of multiple interdependent time series avoiding
cumbersome statistical testing needed to predetermine the market leader as a
point of reference. Furthermore, introducing a flexible spatio-temporal
structure enables analysing detailed regional and temporal dynamics of the
market mechanisms. Applying the proposed method, we detected spatially
asymmetric "price ripples" that spread out from the shock origin. We
corroborated the existing literature on the speedier adjustment to emerging
price differentials during the famine, but we observed this principally in
urban markets. This hastened return to long-run equilibrium means faster and
longer travel of price shocks, implying prolonged out-of-equilibrium dynamics,
proliferated influence of market shocks, and, importantly, a wider spread of
famine conditions.
|
We present the analysis of the diffuse, low column density HI environment of
18 MHONGOOSE galaxies. We obtained deep observations with the Robert C. Byrd
Green Bank Telescope, and reached down to a 3sigma column density detection
limit of NHI=6.3x10^{17} cm^{-2} over a 20 km/s linewidth. We analyze the
environment around these galaxies, with a focus on HI gas that reaches column
densities below NHI=10^{19} cm^{-2}. We calculate the total amount of HI gas in
and around the galaxies revealing that nearly all of these galaxies contained
excess HI outside of their disks. We quantify the amount of diffuse gas in the
maps of each galaxy, defined by HI gas with column densities below 10^{19}
cm^{-2}, and find a large spread in percentages of diffuse gas. However, by
binning the percentage of diffuse HI into quarters, we find that the bin with
the largest number of galaxies is the lowest quartile (0-25\% diffuse HI). We
identified several galaxies which may be undergoing gas accretion onto the
galaxy disk using multiple methods of analysis, including azimuthally averaging
column densities beyond the disk, and identifying structure within our
integrated intensity (Moment 0) maps. We measured HI mass outside the disks of
most of our galaxies, with rising cumulative flux even at large radii. We also
find a strong correlation between the fraction of diffuse gas in a galaxy and
its baryonic mass, and test this correlation using both Spearman and Pearson
correlation coefficients. We see evidence of a dark matter halo mass threshold
of M_{halo}~10^{11.1} \msun{} in which galaxies with high fractions of diffuse
HI all reside below. It is in this regime in which cold-mode accretion should
dominate. Finally, we suggest a rotation velocity of v_{rot}~80 km\s as an
upper threshold to find diffuse gas-dominated galaxies.
|
Certain applications require the use of signals that combine both the
capability to operate with low signal-to-noise ratios and the ability to
support multiple users without interference. In the case where many users have
very different signal-to-noise ratios, it is necessary to consider coding
schemes that can be used in a multi-user environment but with different noise
immunity levels. Traditional detection systems based on the correlation
function and coding sequences have significant limitations in satisfying both objectives, since the cross-correlation between coded signals corresponding to different users is tied to the use of coded sequences of the same length.
The research topic of binary sequences that have null cross-correlation and
different length has not been studied in depth, but it has potential
applications in multi-user environments. In this work an algorithm to generate
binary sequences completely uncorrelated with certain sets of complementary
sequences is presented. The proposed algorithm is based on nested Barker
sequences, and it is compared with a previous proposal based on an iterative
algorithm. This approach allows to generate more diversity of sequences of
different length than the iterative approach, which it makes useful for
applications based on binary sequences detection and expand the horizon of many
applications.
|
In many control problems that include vision, optimal controls can be
inferred from the location of the objects in the scene. This information can be
represented using feature points, i.e., a list of spatial locations in the learned feature maps of an input image. Previous works show that feature points
learned using unsupervised pre-training or human supervision can provide good
features for control tasks. In this paper, we show that it is possible to learn
efficient feature point representations end-to-end, without the need for
unsupervised pre-training, decoders, or additional losses. Our proposed
architecture consists of a differentiable feature point extractor that feeds
the coordinates of the estimated feature points directly to a soft actor-critic
agent. The proposed algorithm yields performance competitive with the state-of-the-art on DeepMind Control Suite tasks.
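A standard way to obtain such differentiable feature-point coordinates is a spatial soft-argmax over each feature map; the PyTorch sketch below illustrates this building block (an illustration in that spirit, not the authors' exact extractor).

```python
import torch

def spatial_soft_argmax(fmaps, temperature=1.0):
    """Differentiable feature points: (B, C, H, W) maps -> (B, C, 2) coordinates."""
    B, C, H, W = fmaps.shape
    probs = torch.softmax(fmaps.view(B, C, H * W) / temperature, dim=-1).view(B, C, H, W)
    xs = torch.linspace(-1, 1, W, device=fmaps.device)
    ys = torch.linspace(-1, 1, H, device=fmaps.device)
    x = (probs.sum(dim=2) * xs).sum(dim=-1)   # marginalize rows, expected x in [-1, 1]
    y = (probs.sum(dim=3) * ys).sum(dim=-1)   # marginalize cols, expected y in [-1, 1]
    return torch.stack([x, y], dim=-1)

pts = spatial_soft_argmax(torch.randn(4, 32, 21, 21))
print(pts.shape)   # torch.Size([4, 32, 2]): coordinates fed to the actor-critic agent
```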
|
In this thesis, we focus on the proposal of distributed workflow systems
dedicated to the automation of administrative business processes. We propose an
approach to build such systems by relying on the concepts of multiagent
systems, Peer to Peer (P2P) architecture, Service-Oriented Architecture (SOA)
and structured documents (artifacts) cooperative edition. Indeed, we develop
mathematical tools that allow any workflow system designer to express each administrative process in the form of an attributed grammar whose symbols
represent tasks to be executed, productions specify a scheduling of these
tasks, and instances (the derivation trees that conform to it) represent the
different execution scenarios leading to business goal states. The obtained
grammatical model is then introduced into a proposed P2P system which is in
charge of carrying out the completely decentralised execution of the underlying
process's instances. This system orchestrates the execution of a process instance as a choreography during which various software agents, driven by human agents (actors), coordinate themselves through artifacts that they
collectively edit. The exchanged artifacts represent the system's memory: they
provide information on already executed tasks, on those ready to be executed
and on their executors. The software agents are autonomous and identical: they
execute the same unique protocol each time they receive an artifact. This
protocol allows them to identify the tasks they must immediately execute, to
execute them, to update the artifact and to disseminate it if necessary, for
the continuation of the execution. Moreover, actors potentially have only a
partial perception of processes in which they are involved. In practice, this
means that certain tasks can be carried out confidentially.
|
Functor lifting along a fibration is used for several different purposes in
computer science. In the theory of coalgebras, it is used to define coinductive
predicates, such as simulation preorder and bisimilarity. Codensity lifting is
a scheme to obtain a functor lifting along a fibration. It generalizes a few
previous lifting schemes including the Kantorovich lifting. In this paper, we
investigate a property of functor liftings called fiberedness. Motivated by a known result for the Kantorovich lifting, we identify a sufficient condition for a codensity
lifting to be fibered. We see that this condition applies to many examples that
have been studied. As an application, we derive some results on
bisimilarity-like notions.
|
Particle beam eigen-emittances comprise the lowest set of rms-emittances that can be imposed on a beam through symplectic optical elements. For cases of practical relevance, this paper introduces an approximation providing a very simple and powerful relation between transverse eigen-emittance variation and the beam phase integral. This relation enormously facilitates the modeling of eigen-emittance tailoring scenarios. It reveals that the difference of eigen-emittances is given by the beam phase integral, or vorticity, rather than by angular momentum. Within the approximation, any beam is equivalent to two objects rotating at angular velocities of the same strength and opposite sign.
description through circular beam modes has been done already in [A. Burov, S.
Nagaitsev, and Y. Derbenev, Circular modes, beam adapters, and their
applications in beam optics, Phys. Rev. E 66, 016503 (2002)]. The new relation
presented here is a complementary and vivid approach to provide a physical
picture of the nature of eigen-emittances for cases of practical interest.
|
In this paper, we propose an image compression algorithm called Microshift.
We employ an algorithm-hardware co-design methodology, yielding a hardware-friendly compression approach with low power consumption. In our method, the image is first micro-shifted, and the sub-quantized values are then further compressed. Two methods, FAST and an MRF model, are proposed to recover the bit-depth by exploiting the spatial correlation of natural images.
Both methods can decompress images progressively. Our compression algorithm
compresses images to 1.25 bits per pixel on average with PSNR of 33.16 dB,
outperforming other on-chip compression algorithms. Then, we propose a hardware
architecture and implement the algorithm on an FPGA and ASIC. The results on
the VLSI design further validate the low hardware complexity and high power
efficiency, showing our method is promising, particularly for low-power
wireless vision sensor networks.
|
Automated cyber threat detection in computer networks is a major challenge in
cybersecurity. The cyber domain has inherent challenges that make traditional
machine learning techniques problematic, specifically the need to learn
continually evolving attacks through global collaboration while maintaining
data privacy, and the varying resources available to network owners. We present
a scheme to mitigate these difficulties through an architectural approach using
community model sharing with a streaming analytic pipeline. Our streaming
approach trains models incrementally as each log record is processed, thereby
adjusting to concept drift resulting from changing attacks. Further, we
designed a community sharing approach which federates learning through merging
models without the need to share sensitive cyber-log data. Finally, by
standardizing data and Machine Learning processes in a modular way, we provide
network security operators the ability to manage cyber threat events and model
sensitivity through community member and analytic method weighting in ways that
are best suited for their available resources and data.
|
We initiate the homotopical study of racks and quandles, two algebraic
structures that govern knot theory and related braided structures in algebra
and geometry. We prove analogs of Milnor's theorem on free groups for these
theories and their pointed variants, identifying the homotopy types of the free
racks and free quandles on spaces of generators. These results allow us to
complete the stable classification of racks and quandles by identifying the
ring spectra that model their stable homotopy theories. As an application, we
show that the stable homotopy of a knot quandle is, in general, more
complicated than what any Wirtinger presentation coming from a diagram
predicts.
|
The existence, uniqueness and stability of periodic traveling waves for the fractional Benjamin-Bona-Mahony equation are considered. In our approach, we
give sufficient conditions to prove a uniqueness result for the single-lobe
solution obtained by a constrained minimization problem. The spectral stability
is then shown by determining that the associated linearized operator around the
wave restricted to the orthogonal of the tangent space related to the momentum
and mass at the periodic wave has no negative eigenvalues. We propose Petviashvili's method to numerically investigate the spectral stability of the periodic waves for the fractional Benjamin-Bona-Mahony equation. Some
remarks concerning the orbital stability of periodic traveling waves are also
presented.
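For readers unfamiliar with it, the generic Petviashvili iteration solves $\mathcal{L}u = N(u)$ by repeated application of $\mathcal{L}^{-1}$ with a stabilizing factor. The Python sketch below applies the generic scheme to the toy periodic problem $cu - u'' = u^2$ with a spectral inverse of $\mathcal{L}$; this is an illustration of the iteration only, not the fractional Benjamin-Bona-Mahony equation itself.

```python
import numpy as np

M, L, c = 256, 2 * np.pi, 2.0
k = np.fft.fftfreq(M, d=L / (2 * np.pi * M))      # integer wavenumbers for period 2*pi
Lsym = c + k**2                                   # Fourier symbol of L = c - d^2/dx^2

x = np.linspace(0.0, L, M, endpoint=False)
u = 1.0 + np.cos(x)                               # nonconstant initial guess

for _ in range(200):
    uh, Nh = np.fft.fft(u), np.fft.fft(u**2)      # quadratic nonlinearity, degree p = 2
    S = np.vdot(Lsym * uh, uh).real / np.vdot(Nh, uh).real   # stabilizing factor
    u = np.real(np.fft.ifft(S**2 * Nh / Lsym))    # u <- S^{p/(p-1)} L^{-1} N(u)

residual = np.abs(np.real(np.fft.ifft(Lsym * np.fft.fft(u))) - u**2).max()
print(residual)                                   # small residual: a periodic wave
```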
|
Object detection is an important computer vision task with plenty of
real-world applications; therefore, how to enhance its robustness against
adversarial attacks has emerged as a crucial issue. However, most of the
previous defense methods focused on the classification task and offered little analysis in the context of the object detection task. In this work, to address the issue, we present a novel class-aware robust adversarial training paradigm for the object detection task. For a given image, the proposed approach generates a universal adversarial perturbation to simultaneously attack all the objects present in the image by jointly maximizing the respective loss for each object. Meanwhile, instead of normalizing the total loss with the number of objects, the proposed approach decomposes the total loss into class-wise losses and normalizes each class loss using the number of objects for the class. Adversarial training based on this class-weighted loss not only balances the influence of each class but also effectively and evenly improves the adversarial robustness of trained models for all the object
classes as compared with the previous defense methods. Furthermore, with the
recent development of fast adversarial training, we provide a fast version of
the proposed algorithm which can be trained faster than the traditional
adversarial training while keeping comparable performance. With extensive
experiments on the challenging PASCAL-VOC and MS-COCO datasets, the evaluation
results demonstrate that the proposed defense methods can effectively enhance
the robustness of the object detection models.
|
Some generalizations of the relation between high-energy astrophysical
neutrino and cosmic ray fluxes are obtained, taking into account present
results on the cosmic ray spectrum and composition as well as a more realistic
modeling of the Galactic and extragalactic cosmic ray components down to PeV
energies. It is found that the level of neutrino fluxes measured by IceCube can
be consistent with sources that are thin to escaping protons. This could also
make it easier for heavier nuclei to be emitted from the sources without
suffering excessive disintegration processes.
|
We investigate the problem of synthesizing T-depth-optimal quantum circuits
over the universal fault-tolerant Clifford+T gate set, where the implementation
of the non-Clifford T-gate is the most expensive.
We use a nested meet-in-the-middle (MITM) technique to develop algorithms for
synthesizing provably \emph{depth-optimal} and \emph{T-depth-optimal} circuits
for exactly implementable unitaries. These algorithms improve space complexity.
Specifically, for synthesizing T-depth-optimal circuits we define a special
subset of T-depth-1 unitaries, which can generate the T-depth-optimal
decomposition (up to a Clifford). This plays a crucial role in having better
time complexity as well. We get an algorithm with space and time complexity
$O\left(\left(n\cdot 2^{5.6n}\right)^{\lceil d/c\rceil}\right)$ and
$O\left(\left(n\cdot 2^{5.6n}\right)^{(c-1)\lceil d/c\rceil}\right)$
respectively, where $d$ is the minimum T-depth and $c\geq 2$ is a constant.
This is much better than the complexity of the algorithm by Amy~et~al.(2013),
the previous best with a complexity much more than
$O\left(\left(2^{kn^2}\right)^{\lceil d/2\rceil}\right)$, where $k$ is a
constant. For example, our new methods took 2 seconds for a task that would
have taken more than 4 days using the methods in Amy~et~al.(2013).
We design an even more efficient algorithm for synthesizing T-depth-optimal
circuits. The claimed efficiency and optimality depend on some conjectures, which have been inspired by the work of Mosca and Mukhopadhyay (2020). To the best of our knowledge, the conjectures are not related to previous work. Our algorithm has space and time complexity $\mathrm{poly}(n,2^{5.6n},d)$ (or $\mathrm{poly}(n^{\log n},2^{5.6n},d)$ under some weaker assumptions).
|
Radiative-transfer (RT) is a fundamental part of modelling exoplanet
atmospheres with general circulation models (GCMs). An accurate RT scheme is
required for estimates of the atmospheric energy transport and for gaining
physical insight from model spectra. We implement three RT schemes for Exo-FMS:
semi-grey, non-grey `picket fence', and real gas with correlated-k. We
benchmark the Exo-FMS GCM using these RT schemes to hot Jupiter simulation
results from the literature. We perform a HD 209458b-like simulation with the
three schemes and compare their results. These simulations are then
post-processed to compare their observable differences. The semi-grey scheme
results show qualitative agreement with previous studies in line with
variations seen between GCM models. The real gas model reproduces well the
temperature and dynamical structures from other studies. After post-processing, our non-grey picket fence scheme compares very favourably with the real gas
model, producing similar transmission spectra, emission spectra and phase curve
behaviours. Exo-FMS is able to reliably reproduce the essential features of
contemporary GCM models in the hot gas giant regime. Our results suggest the
picket fence approach offers a simple way to improve upon RT realism beyond
semi-grey schemes.
|
We propose a straightforward vocabulary adaptation scheme to extend the
language capacity of multilingual machine translation models, paving the way
towards efficient continual learning for multilingual machine translation. Our
approach is suitable for large-scale datasets, applies to distant languages
with unseen scripts, incurs only minor degradation on the translation
performance for the original language pairs and provides competitive
performance even in the case where we only possess monolingual data for the new
languages.
|
Three-dimensional topological insulators (TIs) attract much attention due to their topologically protected Dirac surface states. Doping TIs, or placing them in proximity with normal superconductors, can promote the realization of topological superconductivity (SC) and Majorana fermions, with potential applications in quantum computation. Here, an emergent superconductivity was
observed in local mesoscopic point-contacts on the topological insulator Bi2Se3
by applying a voltage pulse through the contacts, evidenced by the Andreev
reflection peak in the point-contact spectra and a visible resistance drop in
the four-probe electrical resistance measurements. More intriguingly, the
superconductivity can be erased with thermal cycles by warming up to high
temperatures (300 K) and induced again by the voltage pulse at the base
temperature (1.9 K), suggesting potential for the design of new types of quantum devices. Nematic behaviour is also observed in the superconducting state, similar to the case of the topological superconductor candidate CuxBi2Se3.
|
To provide high data rate aerial links for 5G and beyond wireless networks,
the integration of free-space optical (FSO) communications and aerial platforms
has been recently suggested as a practical solution. To fully reap the benefit
of aerial-based FSO systems, in this paper, an analytical channel model for a
long-range ground-to-air FSO link under the assumption of plane wave optical
beam profile at the receiver is derived. Particularly, the model includes the
combined effects of transmitter divergence angle, random wobbling of the
receiver, jitter due to beam wander, attenuation loss, and atmospheric
turbulence. Furthermore, a closed-form expression for the outage probability of
the considered link is derived which makes it possible to evaluate the
performance of such systems. Numerical results are then provided to corroborate
the accuracy of the proposed analytical expressions and to prove the
superiority of the proposed channel model over the previous models in
long-range aerial FSO links.
|
The origin of high-Tc superconductivity remains an enigma even though
tremendous research effort and progress have been made on cuprate and iron
pnictide superconductors. Aiming to mimic the cuprate-like electronic
configuration of transition metal, superconductivity has been recently found in
nickelates. This discovery marks a new era in the search for, and understanding of, high-Tc superconductivity. However, unlike the cuprate and iron
pnictide, in which the superconductivity was initially found in a compound
containing La, the superconductivity in the nickelate has only been observed in
Nd- and Pr-based compounds. This raises a central question of whether the f
electron of the rare-earth element is critical for superconductivity in the
nickelates. Here, we report the observation of superconductivity in
infinite-layer Ca-doped LaNiO2 (La1-xCaxNiO2) thin films and construct their
phase diagram. Unlike the metal-insulator transition in Nd- and Pr-based
nickelates, the undoped and underdoped La1-xCaxNiO2 thin films are entirely
insulating from 300 K down to 2 K. A superconducting dome is observed for 0.15<x<0.3, with weakly insulating behavior in the overdoped regime. Moreover,
the sign of the Hall coefficient RH changes at low temperature for samples with
a higher doping level. However, distinct from the Nd- and Pr-based nickelates,
the RH-sign-change temperature remains around 35 K as the doping increases,
suggesting a different multiband structure in the La1-xCaxNiO2. These results
also emphasize the significant role of lattice correlation on the multiband
structures of the infinite-layer nickelates.
|
The main goal of the present paper is to evaluate the perturbed locations and investigate the linear stability of the triangular points. We study the problem in the framework of the elliptic restricted three-body problem. The problem is generalized in the sense that the two primaries are considered to be triaxial bodies. It is found that the locations of these points are affected by the triaxiality coefficients of the primaries and the eccentricity of the orbits. It is also observed that the stability regions depend on the perturbations involved. In addition, we study the periodic orbits in the vicinity of the triangular points.
|
The research direction of identifying acoustic bio-markers of respiratory
diseases has received renewed interest following the onset of the COVID-19 pandemic. In this paper, we design an approach to COVID-19 diagnosis using crowd-sourced multi-modal data. The data resource, consisting of acoustic signals such as cough, breathing, and speech, along with symptom data, was recorded using a web application over a period of ten months. We
investigate the use of statistical descriptors of simple time-frequency
features for acoustic signals and binary features for the presence of symptoms.
Unlike previous works, we primarily focus on the application of simple linear
classifiers like logistic regression and support vector machines for acoustic
data while decision tree models are employed on the symptoms data. We show that
a multi-modal integration of acoustics and symptoms classifiers achieves an
area-under-curve (AUC) of 92.40, a significant improvement over any individual
modality. Several ablation experiments are also provided which highlight the
acoustic and symptom dimensions that are important for the task of COVID-19
diagnostics.
|
Tensors are widely used to represent multiway arrays of data. The recovery of
missing entries in a tensor has been extensively studied, generally under the
assumption that entries are missing completely at random (MCAR). However, in
most practical settings, observations are missing not at random (MNAR): the
probability that a given entry is observed (also called the propensity) may
depend on other entries in the tensor or even on the value of the missing
entry. In this paper, we study the problem of completing a partially observed
tensor with MNAR observations, without prior information about the
propensities. To complete the tensor, we assume that both the original tensor
and the tensor of propensities have low multilinear rank. Our algorithm first
estimates the propensities using a convex relaxation and then predicts missing
values using a higher-order SVD approach, reweighting the observed tensor by
the inverse propensities. We provide finite-sample error bounds on the
resulting complete tensor. Numerical experiments demonstrate the effectiveness
of our approach.
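A minimal sketch of the prediction step, assuming the propensities are already in hand (the paper estimates them first via a convex relaxation): reweight observed entries by the inverse propensities, then project onto low multilinear rank with a higher-order SVD. All dimensions and propensity values below are illustrative.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    """Project T onto its leading multilinear-rank-(ranks) subspaces (HOSVD)."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    out = T
    for m, U in enumerate(factors):   # apply the mode-m projector U U^T to each mode
        out = np.moveaxis(np.tensordot(U @ U.T, np.moveaxis(out, m, 0), axes=1), 0, m)
    return out

rng = np.random.default_rng(0)
shape, ranks = (20, 20, 20), (3, 3, 3)
X = np.einsum('ir,jr,kr->ijk', *(rng.normal(size=(20, 3)) for _ in range(3)))
P = np.clip(0.2 + 0.6 * rng.random(shape), 0.05, 1.0)      # toy known propensities
mask = rng.random(shape) < P                               # observation pattern
X_hat = hosvd_truncate(np.where(mask, X / P, 0.0), ranks)  # inverse-propensity reweighting
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))       # relative recovery error
```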
|
Electroweak radiative corrections to the cross section of the process $e^+
e^- \to Z H$ are considered. The complete one-loop electroweak radiative
corrections are evaluated with the help of the SANC system. Higher-order
contributions of the initial state radiation are computed in the QED structure
function formalism. Numerical results are produced by a new version of the
ReneSANCe event generator and MCSANCee integrator for the conditions of future
electron-positron colliders. The resulting theoretical uncertainty in the
description of this process is estimated.
|
This paper addresses the persistent monitoring problem defined on a network
where a set of nodes (targets) needs to be monitored by a team of dynamic
energy-aware agents. The objective is to control the agents' motion to jointly
optimize the overall agent energy consumption and a measure of overall node
state uncertainty, evaluated over a finite period of interest. To achieve these
objectives, we extend an established event-driven Receding Horizon Control
(RHC) solution by adding an optimal controller to account for agent motion
dynamics and associated energy consumption. The resulting RHC solution is
computationally efficient, distributed and on-line. Finally, numerical results
are provided highlighting improvements compared to an existing RHC solution
that uses energy-agnostic first-order agents.
|
Financial trading has been widely analyzed for decades with market
participants and academics always looking for advanced methods to improve
trading performance. Deep reinforcement learning (DRL), a recently
reinvigorated method with significant success in multiple domains, still has to
show its benefit in the financial markets. We use a deep Q-network (DQN) to
design long-short trading strategies for futures contracts. The state space
consists of volatility-normalized daily returns, with buying or selling being
the reinforcement learning action and the total reward defined as the
cumulative profits from our actions. Our trading strategy is trained and tested
both on real and simulated price series and we compare the results with an
index benchmark. We analyze how training based on a combination of artificial
data and actual price series can be successfully deployed in real markets. The
trained reinforcement learning agent is applied to trading the E-mini S&P 500
continuous futures contract. Our results in this study are preliminary and need
further improvement.
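To illustrate the setup, here is a short Python sketch of the state and reward construction described above; the window length and transaction-cost parameter are illustrative assumptions, not values from the study.

```python
import numpy as np

def make_states(prices, window=60, vol_span=60):
    """Stack volatility-normalized daily returns into fixed-length state vectors."""
    rets = np.diff(prices) / prices[:-1]
    vol = np.array([rets[max(0, t - vol_span):t + 1].std() + 1e-8
                    for t in range(len(rets))])
    norm = rets / vol
    return np.stack([norm[t - window:t] for t in range(window, len(norm))])

def reward(position, ret, prev_position, cost=1e-4):
    """Profit from holding position (+1 long / -1 short) minus transaction cost."""
    return position * ret - cost * abs(position - prev_position)

prices = np.cumprod(1 + 0.01 * np.random.default_rng(0).normal(size=500))
states = make_states(prices)
print(states.shape)   # (n_steps, window): the input vectors fed to the DQN
```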
|
In this paper, we study the effects of rainbow gravity on relativistic
Bose-Einstein condensation and thermodynamics parameters. We initially
discussed some formal aspects of the model to only then compute the corrections
to the Bose-Einstein condensation. The calculations were carried out by
computing the generating functional, from which we extract the thermodynamics
parameters. The corrected critical temperature $T_c$ that sets the
Bose-Einstein Condensation was also computed for the three mostly adopted cases
for the rainbow functions. We have also obtained a phenomenological upper bound
for a combination of the quantities involved in the model, besides showing the
possibility of occurrence of the Bose-Einstein condensation in two spatial
dimensions under appropriate conditions on those functions. Finally, we have discussed how much harder it is for particles at an arbitrary temperature $T<T_c$ to enter the condensed state when compared with the usual scenario.
|
Compound chondrules, i.e. chondrules fused together, make a powerful probe of
the density and compositional diversity in chondrule-forming environments, but
their abundance among the dominating porphyritic textures may have been
drastically underestimated. I report herein microscopic observations and
LA-ICP-MS analyses of lobate chondrules in the CO3 chondrites Miller Range
07193 and 07342. Lobes in a given chondrule show correlated volatile and
moderately volatile element abundances but refractory element concentrations
are essentially independent. This indicates that they formed by the collision
of preexisting droplets whose refractory elements behaved as a closed system, while their more volatile elements were buffered by the same gaseous medium.
The presence of lobes would otherwise be difficult to explain, as surface
tension should have rapidly imposed a spherical shape at the temperature peak.
In fact, since most chondrules across chondrite groups are nonspherical, a
majority are probably compounds variously relaxed toward sphericity. The lack
of correlation of refractory elements between conjoined compound chondrule
components is inconsistent with derivation of chondrules from the disruption of
homogenized melt bodies as in impact scenarios and evokes rather the melting of
independent mm-size nebular aggregates. Yet a "nebular" setting for chondrule
formation would need to involve not only increased solid concentration, e.g. by
settling to the midplane, but also a boost in relative velocities between
droplets during chondrule-forming events to account for observed compound
chondrule frequencies.
|
Learning and analyzing rap lyrics is a significant basis for many web
applications, such as music recommendation, automatic music categorization, and
music information retrieval, due to the abundance of digital music on the World Wide Web. Although numerous studies have explored the topic, knowledge in
this field is far from satisfactory, because critical issues, such as prosodic
information and its effective representation, as well as appropriate
integration of various features, are usually ignored. In this paper, we propose
a hierarchical attention variational autoencoder framework (HAVAE), which
simultaneously considers semantic and prosodic features for rap lyrics representation learning. Specifically, the prosodic features are encoded from phonetic transcriptions with a novel and effective strategy (i.e., rhyme2vec). Moreover, a feature aggregation strategy is
proposed to appropriately integrate various features and generate
prosodic-enhanced representation. A comprehensive empirical evaluation
demonstrates that the proposed framework outperforms the state-of-the-art
approaches under various metrics in different rap lyrics learning tasks.
|
In addition to spectacular signatures such as black hole superradiance and
the rotation of CMB polarization, the plenitude of axions appearing in the
string axiverse may have potentially dangerous implications. An example is the
cosmological overproduction of relic axions and moduli by the misalignment
mechanism, more pronounced in regions where the signals mentioned above may be
observable, that is for large axion decay constant. In this work, we study the
minimal requirements to soften this problem and show that the fundamental
requirement is a long period of low-scale inflation. However, in this case, if
the inflationary Hubble scale is lower than around $O(100)$ eV, no relic DM
axion is produced in the early Universe. Cosmological production of some axions
may be activated, via the misalignment mechanism, if their potential minimum
changes between inflation and today. As a particular example, we study in
detail how the maximal-misalignment mechanism dilutes the effect of dangerous
axions and allows the production of axion DM in a controlled way. In this case,
the potential of the axion that realises the mechanism shifts by $\Delta\theta=\pi$ between the inflationary epoch and today, and the axion starts to oscillate from the top of its potential. We also show that axions with masses $m_a\sim O(1-100)\, H_0$ realising the maximal-misalignment mechanism generically behave as dark energy with a decay constant that can take values well below the Planck scale, avoiding problems associated with super-Planckian scales. Finally, we briefly study the basic phenomenological
implications of the mechanism and comment on the compatibility of this type of
maximally-misaligned quintessence with the swampland criteria.
|
In this work, we present the first linear time deterministic algorithm
computing the 4-edge-connected components of an undirected graph. First, we
show an algorithm listing all 3-edge-cuts in a given 3-edge-connected graph,
and then we use the output of this algorithm in order to determine the
4-edge-connected components of the graph.
|
(abridged) Within the Orion A molecular cloud, the integral-shaped filament
(ISF) is a prominent, degree-long structure of dense gas and dust, with clear
signs of recent and on-going high-mass star formation. We used the ArTeMiS
bolometer camera at APEX to map a 0.6x0.2 deg^2 region covering OMC-1, OMC-2,
OMC-3 at 350 and 450 micron. We combined these data with Herschel-SPIRE maps to
recover extended emission. The combined Herschel-ArTeMiS maps provide details
on the distribution of dense, cold material, with a high spatial dynamic range,
from our 8'' resolution (0.016 pc) up to the size of the map ~10-15 deg. By
combining Herschel and ArTeMiS data at 160, 250, 350 and 450 micron, we
constructed high-resolution temperature and H2 column density maps. We
extracted radial profiles from the column density map in several,
representative portions of the ISF, that we fitted with Gaussian and Plummer
models to derive their intrinsic widths. We also compared the distribution of
material traced by ArTeMiS with that seen in the higher density tracer
N2H+(1-0) recently observed with the ALMA interferometer. All the radial
profiles that we extracted show clear deviation from a Gaussian, with evidence
for an inner plateau, previously not seen using Herschel-only data. We measure
intrinsic half-power widths in the range 0.06 to 0.11 pc. This is significantly
larger than the Gaussian widths measured for fibers seen in N2H+, which
probably traces only the dense innermost regions of the large-scale filament.
These half-power widths are within a factor of two of the value of 0.1 pc found
for a large sample of nearby filaments in various low-mass star-forming
regions, which tends to indicate that the physical conditions governing the
fragmentation of prestellar cores within transcritical or supercritical
filaments are the same over a large range of masses per unit length.
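The profile-fitting step can be sketched as follows; the Gaussian and Plummer-like functional forms follow the standard filament literature, and the mock data and initial guesses are assumptions, not the paper's exact parametrization.

    # Fitting Gaussian and Plummer-like models to a radial column
    # density profile (mock data; units and scales are illustrative).
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(r, N0, sigma):
        return N0 * np.exp(-r**2 / (2.0 * sigma**2))

    def plummer(r, N0, r_flat, p):
        return N0 / (1.0 + (r / r_flat)**2) ** ((p - 1.0) / 2.0)

    r = np.linspace(0.0, 0.3, 31)                    # offset in pc
    prof = plummer(r, 2e22, 0.04, 2.0)
    prof = prof * np.random.default_rng(0).normal(1.0, 0.05, r.size)

    pg, _ = curve_fit(gaussian, r, prof, p0=[2e22, 0.05])
    pp, _ = curve_fit(plummer, r, prof, p0=[2e22, 0.05, 2.0])
    print("Gaussian FWHM [pc]:", 2.355 * pg[1])
    print("Plummer r_flat, p :", pp[1], pp[2])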
|
In this work, we study the following problem, that we refer to as Low Rank
column-wise Compressive Sensing (LRcCS): how to recover an $n \times q$
rank-$r$ matrix $X^* = [x^*_1, x^*_2, \ldots, x^*_q]$ from $m$ independent linear
projections of each of its $q$ columns, i.e., from $y_k := A_k x^*_k , k \in
[q]$, when $y_k$ is an $m$-length vector. The matrices $A_k$ are known and
mutually independent for different $k$. The regime of interest is low-rank,
i.e., $r \ll \min(n,q)$, and undersampled measurements, i.e., $m < n$. Even
though many LR recovery problems have been extensively studied in the last
decade, this particular problem has received little attention so far in terms
of methods with provable guarantees. We introduce a novel gradient descent (GD)
based solution called altGDmin. We show that, if all entries of all $A_k$s are
i.i.d. Gaussian, and if the right singular vectors of $X^*$ satisfy the
incoherence assumption, then $\epsilon$-accurate recovery of $X^*$ is possible
with $mq > C (n+q) r^2 \log(1/\epsilon)$ total samples and $O( mq nr \log
(1/\epsilon))$ time. Compared to existing work, to the best of our knowledge, this is
the fastest solution and, for $\epsilon < 1/\sqrt{r}$, it also has the best
sample complexity. Moreover, we show that a simple extension of our approach
also solves LR Phase Retrieval (LRPR), which is the magnitude-only
generalization of LRcCS. It involves recovering $X^*$ from the magnitudes of
entries of $y_k$. We show that altGDmin-LRPR has matching sample complexity and
better time complexity when compared with the (best) existing solution for
LRPR.
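A schematic of an altGDmin-style iteration consistent with the description above (factor $X = UB$, minimize over the columns of $B$, take a gradient step in $U$); the random initialization and crude step size here are simplifications, and the paper's method additionally uses a careful spectral initialization.

    # Schematic altGDmin-style iteration (consistent with the abstract,
    # not the authors' code): update B by column-wise least squares,
    # then take one projected gradient step in U.
    import numpy as np

    def altgdmin(ys, As, r, n_iter=150):
        n = As[0].shape[1]
        rng = np.random.default_rng(0)
        U, _ = np.linalg.qr(rng.normal(size=(n, r)))  # paper: spectral init
        eta = None
        for _ in range(n_iter):
            # Minimization: b_k = argmin_b ||y_k - A_k U b||^2
            B = [np.linalg.lstsq(A @ U, y, rcond=None)[0]
                 for A, y in zip(As, ys)]
            # Gradient step on f(U) = sum_k ||y_k - A_k U b_k||^2
            grad = sum(np.outer(A.T @ (A @ U @ b - y), b)
                       for A, y, b in zip(As, ys, B))
            if eta is None:
                eta = 0.5 / np.linalg.norm(grad, 2)  # crude step (assumption)
            U, _ = np.linalg.qr(U - eta * grad)      # re-orthonormalize
        return U, np.column_stack(B)

    rng = np.random.default_rng(1)
    n, q, r, m = 30, 40, 2, 15
    Xstar = rng.normal(size=(n, r)) @ rng.normal(size=(r, q))
    As = [rng.normal(size=(m, n)) for _ in range(q)]
    ys = [A @ Xstar[:, k] for k, A in enumerate(As)]
    U, B = altgdmin(ys, As, r)
    print(np.linalg.norm(U @ B - Xstar) / np.linalg.norm(Xstar))

The column-wise least-squares step is what keeps the per-iteration cost at $O(mqnr)$, consistent with the overall time complexity quoted above.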
|
In this paper we propose a traffic surveillance camera calibration method
based on detection of pairs of vanishing points associated with vehicles in the
traffic surveillance footage. To detect the vanishing points, we propose a CNN
that outputs heatmaps in which the positions of vanishing points are
represented using the diamond space parametrization, which enables us to detect
vanishing points from the whole infinite projective space. From the detected
pairs of vanishing points for multiple vehicles in a scene we establish the
scene geometry by estimating the focal length of the camera and the orientation
of the road plane. We show that our method achieves competitive results on the
BrnoCarPark dataset while having fewer requirements than the current state of
the art approach.
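The focal-length step can be illustrated with the standard single-view relation for a pair of orthogonal vanishing points, assuming a principal point at the image centre and square pixels (the paper's full pipeline additionally recovers the road-plane orientation).

    # Focal length from a pair of orthogonal vanishing points:
    # (vp1 - pp) . (vp2 - pp) + f^2 = 0, principal point pp assumed
    # at the image centre.
    import numpy as np

    def focal_from_vps(vp1, vp2, principal_point):
        u = np.asarray(vp1, float) - principal_point
        v = np.asarray(vp2, float) - principal_point
        d = -(u @ v)
        if d <= 0:
            raise ValueError("VP pair inconsistent with this camera model")
        return np.sqrt(d)

    pp = np.array([960.0, 540.0])                # 1920x1080 image centre
    print(focal_from_vps([1500.0, 700.0], [300.0, 420.0], pp))  # ~613 px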
|
We study the flow of elongated grains (wooden pegs of length $L$=20 mm with
circular cross section of diameter $d_c$=6 and 8 mm) from a silo with a
rotating bottom and a circular orifice of diameter $D$. In the small orifice
range ($D/d<5$) clogs are mostly broken by the rotating base, and the flow is
intermittent with avalanches and temporary clogs. Here
$d\equiv(\frac{3}{2}d_c^2L)^{1/3}$ is the effective grain diameter. Unlike for
spherical grains, for rods the flow rate $W$ clearly deviates from the power
law dependence $W\propto (D-kd)^{2.5}$ at lower orifice sizes in the
intermittent regime, where $W$ is measured in between temporary clogs only.
Instead, below about $D/d<3$ an exponential dependence $W\propto e^{\kappa D}$
is detected. Here $k$ and $\kappa$ are constants of order unity. Even more
importantly, rotating the silo base leads to a strong -- more than 50% --
decrease of the flow rate, which otherwise does not depend significantly on the
value of $\omega$ in the continuous flow regime. In the intermittent regime,
$W(\omega)$ appears to follow a non-monotonic trend, although with considerable
noise. A simple picture, in terms of the switching from funnel flow to mass
flow and the alignment of the pegs due to rotation, is proposed to explain the
observed difference between spherical and elongated grains. We also observe
shear-induced orientational ordering of the pegs at the bottom, such that their
long axes are on average oriented at a small angle $\langle\theta\rangle
\approx 15^\circ$ to the motion of the bottom.
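For concreteness, the effective diameter and the two flow-rate laws quoted above can be evaluated directly; the constants and the choice of measuring $D$ in units of $d$ inside the exponential are illustrative assumptions.

    # Effective grain diameter and the two flow-rate regimes
    # (prefactors arbitrary; kappa*D taken in units of d: assumption).
    import numpy as np

    L, d_c = 20.0, 6.0                        # peg length, diameter [mm]
    d = (1.5 * d_c**2 * L) ** (1.0 / 3.0)     # effective diameter
    print("effective d [mm]:", round(d, 2))   # -> 10.26

    k, kappa = 1.0, 1.0
    for D_over_d in (2.0, 2.5, 3.0, 4.0):
        D = D_over_d * d
        w_pow = (D - k * d) ** 2.5            # Beverloo-type power law
        w_exp = np.exp(kappa * D / d)         # small-orifice exponential
        print(D_over_d, round(w_pow, 1), round(w_exp, 1))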
|
Numerical computation of the Karhunen--Lo\`eve expansion is computationally
challenging in terms of both memory requirements and computing time. We compare
two state-of-the-art methods that claim to efficiently solve for the K--L
expansion: (1) the matrix-free isogeometric Galerkin method using interpolation
based quadrature proposed by the authors in [1] and (2) our new matrix-free
implementation of the isogeometric collocation method proposed in [2]. Two
three-dimensional benchmark problems indicate that the Galerkin method performs
significantly better for smooth covariance kernels, while the collocation
method performs slightly better for rough covariance kernels.
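For orientation, the eigenproblem that both methods accelerate can be written down naively in one dimension; this dense Nystrom-style discretization with a Gaussian covariance kernel is only a toy stand-in for the matrix-free isogeometric solvers compared in the text.

    # Naive dense K-L eigenproblem in 1-D for a Gaussian covariance:
    # int C(x,y) phi(y) dy = lambda phi(x), discretized with weights w.
    import numpy as np

    n = 400
    x, w = np.linspace(0.0, 1.0, n), np.full(n, 1.0 / n)
    C = np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.1**2))
    A = np.sqrt(w)[:, None] * C * np.sqrt(w)[None, :]   # symmetrized
    lam = np.linalg.eigh(A)[0][::-1]                    # descending
    m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1
    print("modes for 99% of the variance:", m)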
|
Learning concepts that are consistent with human perception is important for
Deep Neural Networks to win end-user trust. Post-hoc interpretation methods
lack transparency in the feature representations learned by the models. This
work proposes a guided learning approach with an additional concept layer in a
CNN-based architecture to learn the associations between visual features and
word phrases. We design an objective function that optimizes both prediction
accuracy and semantics of the learned feature representations. Experimental
results demonstrate that the proposed model can learn concepts that are
consistent with human perception and their corresponding contributions to the
model decision without compromising accuracy. Further, these learned concepts
are transferable to new classes of objects that have similar concepts.
|
Federated edge learning (FEEL) is a widely adopted framework for training an
artificial intelligence (AI) model distributively at edge devices to leverage
their data while preserving their data privacy. The execution of a power-hungry
learning task at energy-constrained devices is a key challenge confronting the
implementation of FEEL. To tackle the challenge, we propose the solution of
powering devices using wireless power transfer (WPT). To derive guidelines on
deploying the resultant wirelessly powered FEEL (WP-FEEL) system, this work
aims at the derivation of the tradeoff between the model convergence and the
settings of power sources in two scenarios: 1) the transmission power and
density of power-beacons (dedicated charging stations) if they are deployed, or
otherwise 2) the transmission power of a server (access-point). The development
of the proposed analytical framework relates the accuracy of distributed
stochastic gradient estimation to the WPT settings, the randomness in both
communication and WPT links, and devices' computation capacities. Furthermore,
the local-computation at devices (i.e., mini-batch size and processor clock
frequency) is optimized to efficiently use the harvested energy for gradient
estimation. The resultant learning-WPT tradeoffs reveal the simple scaling laws
of the model-convergence rate with respect to the transferred energy as well as
the devices' computational energy efficiencies. The results provide useful
guidelines on WPT provisioning to guarantee learning performance.
They are corroborated by experimental results using a real dataset.
|
The dynamic response of power grids to small disturbances influences their
overall stability. This paper examines the effect of network topology on the
linearized time-invariant dynamics of electric power systems. The proposed
framework utilizes ${\cal H}_2$-norm based stability metrics to study the
optimal placement of lines on existing networks as well as the topology design
of new networks. The design task is first posed as an NP-hard mixed-integer
nonlinear program (MINLP) that is exactly reformulated as a mixed-integer
linear program (MILP) using McCormick linearization. To improve computation
time, graph-theoretic properties are exploited to derive valid inequalities
(cuts) and tighten bounds on the continuous optimization variables. Moreover, a
cutting plane generation procedure is put forth that is able to interject the
MILP solver and augment additional constraints to the problem on-the-fly. The
efficacy of our approach in designing optimal grid topologies is demonstrated
through numerical tests on the IEEE 39-bus network.
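The exact MILP reformulation referred to above rests on McCormick envelopes for the bilinear terms; for $w = xy$ with $x \in [x_L, x_U]$ and $y \in [y_L, y_U]$, the standard inequalities are

    \begin{align*}
    w &\ge x_L y + x y_L - x_L y_L, &  w &\ge x_U y + x y_U - x_U y_U,\\
    w &\le x_U y + x y_L - x_U y_L, &  w &\le x_L y + x y_U - x_L y_U,
    \end{align*}

and the linearization is exact (not merely a relaxation) when one factor is a binary line-placement variable, which is what makes the MINLP-to-MILP reformulation lossless.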
|
We study the connection between the matter density and its tracers from the PDF
perspective. One aspect of this connection is the conditional expectation value
$\langle \delta_{\mathrm{tracer}}|\delta_m\rangle$ when averaging both tracer
and matter density over some scale. We present a new way to incorporate a
Lagrangian bias expansion of this expectation value into standard frameworks
for modelling the PDF of density fluctuations and counts-in-cells statistics.
Using N-body simulations and mock galaxy catalogs we confirm the accuracy of
this expansion and compare it to the more commonly used Eulerian
parametrization. For halos hosting typical luminous red galaxies, the
Lagrangian model provides a significantly better description of $\langle
\delta_{\mathrm{tracer}}|\delta_m\rangle$ at second order in perturbations. A
second aspect of the matter-tracer connection is shot noise, i.e. the scatter of
tracer density around $\langle \delta_{\mathrm{tracer}}|\delta_m\rangle$. It is
well known that this noise can be significantly non-Poissonian and we validate
the performance of a more general, two-parameter shot-noise model for different
tracers and simulations. Both parts of our analysis are meant to pave the way
for forthcoming applications to survey data.
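The conditional expectation itself is straightforward to measure from gridded fields by binning, which is the quantity the Lagrangian expansion is fitted to; the mock fields and bias values below are placeholders.

    # Measuring <delta_tracer | delta_m> by binning gridded fields.
    import numpy as np

    rng = np.random.default_rng(0)
    delta_m = rng.normal(0.0, 0.5, 100_000)
    delta_t = (1.4 * delta_m + 0.6 * (delta_m**2 - 0.25)
               + rng.normal(0.0, 0.2, delta_m.size))

    bins = np.linspace(-1.5, 1.5, 31)
    idx = np.digitize(delta_m, bins)
    cond_mean = np.array([delta_t[idx == i].mean()
                          for i in range(1, bins.size)])
    centers = 0.5 * (bins[1:] + bins[:-1])
    # cond_mean vs centers approximates <delta_tracer|delta_m>; the
    # residual scatter around it is the (non-Poissonian) shot noise.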
|
Face masks have long been used in many areas of everyday life to protect
against the inhalation of hazardous fumes and particles. They also offer an
effective solution in healthcare for bi-directional protection against
air-borne diseases. Wearing and positioning the mask correctly is essential for
its function. Convolutional neural networks (CNNs) offer an excellent solution
for face recognition and classification of correct mask wearing and
positioning. In the context of the ongoing COVID-19 pandemic, such algorithms
can be used at entrances to corporate buildings, airports, shopping areas, and
other indoor locations, to mitigate the spread of the virus. These application
scenarios impose major challenges to the underlying compute platform. The
inference hardware must be cheap, small and energy efficient, while providing
sufficient memory and compute power to execute accurate CNNs at a reasonably
low latency. To maintain data privacy of the public, all processing must remain
on the edge-device, without any communication with cloud servers. To address
these challenges, we present a low-power binary neural network classifier for
correct facial-mask wear and positioning. The classification task is
implemented on an embedded FPGA, performing high-throughput binary operations.
Classification can take place at up to ~6400 frames-per-second, easily enabling
multi-camera, speed-gate settings or statistics collection in crowd settings.
When deployed on a single entrance or gate, the idle power consumption is
reduced to 1.6W, improving the battery-life of the device. We achieve an
accuracy of up to 98% for four wearing positions of the MaskedFace-Net dataset.
To maintain equivalent classification accuracy for all face structures,
skin-tones, hair types, and mask types, the algorithms are tested for their
ability to generalize the relevant features over all subjects using the
Grad-CAM approach.
|
We study and model the properties of galaxy clusters in the normal-branch
Dvali-Gabadadze-Porrati (nDGP) model of gravity, which is representative of a
wide class of theories which exhibit the Vainshtein screening mechanism. Using
the first cosmological simulations which incorporate both full baryonic physics
and nDGP, we find that, despite being efficiently screened within clusters, the
fifth force can raise the temperature of the intra-cluster gas, affecting the
scaling relations between the cluster mass and three observable mass proxies:
the gas temperature, the Compton $Y$-parameter of the Sunyaev-Zel'dovich effect
and the X-ray analogue of the $Y$-parameter. Therefore, unless properly
accounted for, this could lead to biased measurements of the cluster mass in
tests that make use of cluster observations, such as cluster number counts, to
probe gravity. Using a suite of dark-matter-only simulations, which span a wide
range of box sizes and resolutions, and which feature very different strengths
of the fifth force, we also calibrate general fitting formulae which can
reproduce the nDGP halo concentration at percent accuracy for $0\leq z\leq1$,
and halo mass function with $\lesssim3\%$ accuracy at $0\leq z\leq1$
(increasing to $\lesssim5\%$ for $1\leq z\leq 2$), over a halo mass range
spanning four orders of magnitude. Our model for the concentration can be used
for converting between halo mass overdensities and predicting statistics such
as the nonlinear matter power spectrum. The results of this work will form part
of a framework for unbiased constraints on gravity using the data from ongoing
and upcoming cluster surveys.
|
Sentiment analysis on software engineering (SE) texts has been widely used in
SE research, such as evaluating app reviews or analyzing developers'
sentiments in commit messages. To better support the use of automated sentiment
analysis for SE tasks, researchers built an SE-domain-specific sentiment
dictionary to further improve the accuracy of the results. Unfortunately,
recent work reported that current mainstream tools for sentiment analysis still
cannot provide reliable results when analyzing the sentiments in SE texts. We
suggest that this is because sentiments are expressed very differently in SE
texts than in social media posts or movie reviews. In this paper, we propose
to improve sentiment analysis in SE
texts by using sentence structures, a different perspective from building a
domain dictionary. Specifically, we use sentence structures to first identify
whether the author is expressing her sentiment in a given clause of an SE text,
and to further adjust the calculation of sentiments which are confirmed in the
clause. An empirical evaluation based on four different datasets shows that our
approach can outperform two dictionary-based baseline approaches, and is more
generalizable compared to a learning-based baseline approach.
|
This paper is dedicated to the spectral optimization problem $$
\mathrm{min}\left\{\lambda_1^s(\Omega)+\cdots+\lambda_m^s(\Omega) + \Lambda
\mathcal{L}_n(\Omega)\colon \Omega\subset D \mbox{ s-quasi-open}\right\} $$
where $\Lambda>0, D\subset \mathbb{R}^n$ is a bounded open set and
$\lambda_i^s(\Omega)$ is the $i$-th eigenvalue of the fractional Laplacian on
$\Omega$ with Dirichlet boundary condition on $\mathbb{R}^n\setminus \Omega$.
We first prove that the first $m$ eigenfunctions on an optimal set are locally
H\"{o}lder continuous in the class $C^{0,s}$ and, as a consequence, that the
optimal sets are open sets. Then, via a blow-up analysis based on a Weiss-type
monotonicity formula, we prove that the topological boundary of a minimizer
$\Omega$ is composed of a relatively open regular part and a closed singular
part of Hausdorff dimension at most $n-n^*$, for some $n^*\geq 3$. Finally we
use a viscosity approach to prove $C^{1,\alpha}$-regularity of the regular part
of the boundary.
|
Deriving quantum error correction and quantum control from the Schr\"odinger
equation for a unified qubit-environment Hamiltonian will give insights into
how microscopic degrees of freedom affect the capability to control and correct
quantum information beyond that of phenomenological theory. Here, we
investigate the asymptotic reduced state of two qubits coupled to each other
solely via a common heat bath of linear harmonic oscillators and search for
evidence of fault-tolerant excited qubit states. We vary the Hamiltonian
parameters, including the qubit-qubit and qubit-bath detuning, the bath
spectral density, and whether or not we use the Markov approximation in the
calculation of our dynamics. In proximity to special values of these
parameters, we identify these states as asymptotic reduced states that are
arbitrarily pure, excited, unique, and have high singlet fidelity. We emphasize
the central role of the Lamb shift as an agent responsible for fault-tolerant
excitations. To learn how these parameters relate to performance, we discuss
numerical studies on fidelity and error recovery time.
|
In this work, we begin the study of a new class of dynamical systems
determined by interval maps generated by the symbolic action of erasing
substitution rules. We do this by discussing in some detail the geometric,
analytical, dynamical and arithmetic properties of a particular example, which
has the virtue of being arguably the simplest while at the same time
producing interesting properties and new challenging problems.
|
During the performance verification phase of the SRG/eROSITA telescope, the
eROSITA Final Equatorial-Depth Survey (eFEDS) has been carried out. It covers a
140 deg$^2$ field located at 126$^\circ <$ R.A. $< 146^\circ$ and -3$^\circ <$
Dec. $< +6^\circ$ with a nominal exposure over the field of 2.2 ks. 542
candidate clusters were detected in this field, down to a flux limit $F_X \sim
10^{-14}$ erg s$^{-1}$ cm$^{-2}$ in the 0.5-2 keV band. In order to understand
radio-mode feedback in galaxy clusters, we study the radio emission of
the brightest cluster galaxies (BCGs) of eFEDS clusters, and we relate it to the X-ray
properties of the host cluster. Using LOFAR we identify 227 radio galaxies
hosted in the BCGs of the 542 galaxy clusters and groups detected in eFEDS. We
treat non-detections as radio upper limits. We analyse the properties of radio
galaxies, such as redshift and luminosity distribution, offset from the cluster
centre, largest linear size and radio power. We study their relation to the
intracluster medium of the host cluster. We perform statistical tests to deal
with upper limits on the radio luminosities. BCGs with radio-loud AGN are more
likely to lie close to the cluster centre than radio-quiet BCGs. There is a
clear relation between the cluster's X-ray luminosity and the radio power of
the BCG. Statistical tests indicate that this correlation is not produced by
selection effects in the radio band. We see no apparent link between largest
linear size of the radio galaxy and central density of the host cluster.
Converting the radio luminosity to kinetic luminosity, we find that radiative
losses of the intracluster medium are in an overall balance with the heating
provided by the central AGN. Finally, we tentatively classify our objects into
disturbed and relaxed, and we show that the link between the AGN and the ICM
apparently holds regardless of the dynamical state of the cluster.
|
Many current neural networks for medical imaging generalise poorly to data
unseen during training. Such behaviour can be caused by networks overfitting
easy-to-learn, or statistically dominant, features while disregarding other
potentially informative features. For example, indistinguishable differences in
the sharpness of the images from two different scanners can degrade the
performance of the network significantly. All neural networks intended for
clinical practice need to be robust to variation in data caused by differences
in imaging equipment, sample preparation and patient populations.
To address these challenges, we evaluate the utility of spectral decoupling
as an implicit bias mitigation method. Spectral decoupling encourages the
neural network to learn more features by simply regularising the networks'
unnormalised prediction scores with an L2 penalty, thus having no added
computational costs.
We show that spectral decoupling allows training neural networks on datasets
with strong spurious correlations and increases networks' robustness for data
distribution shifts. To validate our findings, we train networks with and
without spectral decoupling to detect prostate cancer tissue slides and
COVID-19 in chest radiographs. Networks trained with spectral decoupling
achieve up to 9.5 percentage points higher performance on external datasets.
Our results show that spectral decoupling helps with generalisation issues
associated with neural networks, and can be used to complement or replace
computationally expensive explicit bias mitigation methods, such as stain
normalization in histological images. We recommend using spectral decoupling as
an implicit bias mitigation method in any neural network intended for clinical
use.
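Spectral decoupling as described above amounts to a single extra penalty term on the unnormalised outputs; a minimal PyTorch sketch, with the penalty weight as a tunable hyperparameter, could look as follows.

    # Spectral decoupling: L2 penalty on the logits added to the loss.
    import torch
    import torch.nn.functional as F

    def sd_loss(logits, targets, sd_lambda=1e-3):
        ce = F.cross_entropy(logits, targets)
        return ce + 0.5 * sd_lambda * (logits ** 2).mean()

    logits = torch.randn(8, 2, requires_grad=True)   # mock network output
    targets = torch.randint(0, 2, (8,))
    sd_loss(logits, targets).backward()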
|
Data-driven reduced order models (ROMs) are combined with the
Lippmann-Schwinger integral equation to produce a direct nonlinear inversion
method. The ROM is viewed as a Galerkin projection and is sparse due to Lanczos
orthogonalization. Embedding into the continuous problem, a data-driven
internal solution is produced. This internal solution is then used in the
Lippmann-Schwinger equation, thus making further iterative updates unnecessary.
We show numerical experiments for spectral domain data for which our
inversion is far superior to the Born inversion and performs as well as when
the true internal solution is known.
|
This work deals with mixing and dissipation enhancement for solutions of the
advection-diffusion equation driven by an Ornstein-Uhlenbeck velocity field. We
are able to prove a quantitative mixing result, uniform in the diffusion
parameter, and enhancement of dissipation over a finite time horizon.
|
In the past, several works have investigated ways for combining quantitative
and qualitative methods in research assessment exercises. In this work, we aim
at introducing a methodology to explore whether citation-based metrics,
calculated only considering open bibliographic and citation data, can yield
insights on how human peer-review of research assessment exercises is
conducted. To understand if and what metrics provide relevant information, we
propose to use a series of machine learning models to replicate the decisions
of the committees of the research assessment exercises.
|
Downlink beamforming is an essential technology for wireless cellular
networks; however, the design of beamforming vectors that maximize the weighted
sum rate (WSR) is an NP-hard problem and iterative algorithms are typically
applied to solve it. The weighted minimum mean square error (WMMSE) algorithm
is the most widely used one; it iteratively maximizes the WSR and converges
to a local optimum. Motivated by the recent developments in meta-learning
techniques to solve non-convex optimization problems, we propose a
meta-learning based iterative algorithm for WSR maximization in a MISO downlink
channel. A long short-term memory (LSTM) network-based meta-learning model is
built to learn a dynamic optimization strategy to update the variables
iteratively. The learned strategy aims to optimize each variable in a less
greedy manner compared to WMMSE, which updates variables by computing their
first-order stationary points at each iteration step. The proposed algorithm
outperforms WMMSE significantly in the high signal-to-noise ratio (SNR) regime
and shows comparable performance when the SNR is low.
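The learning-to-optimize idea can be sketched as an LSTM that maps per-coordinate gradients to updates; the architecture below is a generic L2O stand-in (in practice the LSTM itself is trained by backpropagating through the unrolled optimization, omitted here), not the paper's exact model.

    # Generic L2O sketch: an LSTM proposes coordinate-wise updates.
    import torch
    import torch.nn as nn

    class LSTMOptimizer(nn.Module):
        def __init__(self, hidden=20):
            super().__init__()
            self.cell = nn.LSTMCell(1, hidden)
            self.head = nn.Linear(hidden, 1)

        def forward(self, grad, state):
            h, c = self.cell(grad, state)
            return self.head(h), (h, c)

    opt_net = LSTMOptimizer()
    x = torch.randn(16, 1, requires_grad=True)   # variables to optimize
    state = (torch.zeros(16, 20), torch.zeros(16, 20))
    for _ in range(10):
        loss = ((x - 3.0) ** 2).sum()            # toy stand-in objective
        grad, = torch.autograd.grad(loss, x)
        update, state = opt_net(grad, state)
        x = (x + update).detach().requires_grad_(True)
        state = tuple(s.detach() for s in state)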
|
The representations of a $k$-graph $C^*$-algebra $C^*(\Lambda)$ which arise
from $\Lambda$-semibranching function systems are closely linked to the
dynamics of the $k$-graph $\Lambda$. In this paper, we undertake a systematic
analysis of the question of irreducibility for these representations. We
provide a variety of necessary and sufficient conditions for irreducibility, as
well as a number of examples indicating the optimality of our results. We also
explore the relationship between irreducible $\Lambda$-semibranching
representations and purely atomic representations of $C^*(\Lambda)$. Throughout
the paper, we work in the setting of row-finite source-free $k$-graphs; this
paper constitutes the first analysis of $\Lambda$-semibranching representations
at this level of generality.
|
Parton distributions can be defined in terms of the entropy of entanglement
between the spatial region probed by deep inelastic scattering (DIS) and the
rest of the proton. For very small $x$, the proton becomes a maximally
entangled state. This approach leads to a simple relation $S = \ln N $ between
the average number $N$ of color-singlet dipoles in the proton wave function and
the entropy of the produced hadronic state $S$. At small $x$, the multiplicity
of dipoles is given by the gluon structure function, $N = x G(x,Q^2)$.
Recently, the H1 Collaboration analyzed the entropy of the produced hadronic
state in DIS, and studied its relation to the gluon structure function; poor
agreement with the predicted relation was found. In this letter we argue that a
more accurate account of the number of color-singlet dipoles in the kinematics
of H1 experiment (where hadrons are detected in the current fragmentation
region) is given not by $xG(x,Q^2)$ but by the sea quark structure function
$x\Sigma(x,Q^2)$. Sea quarks originate from the splitting of gluons, so at
small $x$, $x\Sigma(x,Q^2) \sim xG(x,Q^2)$, but in the current fragmentation
region this proportionality is distorted by the contribution of the
quark-antiquark pair produced by the virtual photon splitting. In addition, the
multiplicity of color-singlet dipoles in the current fragmentation region is
quite small, and one needs to include $\sim 1/N$ corrections to the $S = \ln N$
asymptotic formula. Taking both of these modifications into account, we find
that the data from the H1 Collaboration in fact agree well with the prediction
based on entanglement.
|
Semantic parsing is challenging due to the structure gap and the semantic gap
between utterances and logical forms. In this paper, we propose an unsupervised
semantic parsing method - Synchronous Semantic Decoding (SSD), which can
simultaneously resolve the semantic gap and the structure gap by jointly
leveraging paraphrasing and grammar constrained decoding. Specifically, we
reformulate semantic parsing as a constrained paraphrasing problem: given an
utterance, our model synchronously generates its canonical utterance and
meaning representation. During synchronous decoding, the utterance paraphrasing
is constrained by the structure of the logical form, so the canonical
utterance is generated in a controlled way; the semantic decoding is guided by
the semantics of the canonical utterance, so the logical form can be
generated without supervision. Experimental results show that SSD is a promising
approach and can achieve competitive unsupervised semantic parsing performance
on multiple datasets.
|
The integration of small-scale Unmanned Aerial Vehicles (UAVs) into
Intelligent Transportation Systems (ITSs) will empower novel smart-city
applications and services. After the unforeseen outbreak of the COVID-19
pandemic, the public demand for delivery services has multiplied. Mobile
robotic systems inherently offer the potential for minimizing the amount of
direct human-to-human interactions with the parcel delivery process. The
proposed system-of-systems consists of various complex aspects such as
assigning and distributing delivery jobs, establishing and maintaining reliable
communication links between the vehicles, as well as path planning and mobility
control. In this paper, we apply a system-level perspective for identifying key
challenges and promising solution approaches for modeling, analysis, and
optimization of UAV-aided parcel delivery. We present a system-of-systems model
for UAV-assisted parcel delivery to cope with higher capacity requirements
induced by the COVID-19 pandemic. To demonstrate the benefits of hybrid vehicular
delivery, we present a case study focusing on the prioritization of
time-critical deliveries such as medical goods. The results further confirm
that the capacity of traditional delivery fleets can be upgraded with drone
usage. Furthermore, we observe that the delay incurred by prioritizing
time-critical deliveries can be compensated with drone deployment. Finally,
centralized and decentralized communication approaches for data transmission
inside hybrid delivery fleets are compared.
|
Stepped wedge cluster randomized trials (SW-CRTs) with binary outcomes are
increasingly used in prevention and implementation studies. Marginal models
represent a flexible tool for analyzing SW-CRTs with population-averaged
interpretations, but the joint estimation of the mean and intraclass
correlation coefficients (ICCs) can be computationally intensive due to large
cluster-period sizes. Motivated by the need for marginal inference in SW-CRTs,
we propose a simple and efficient estimating equations approach to analyze
cluster-period means. We show that the quasi-score for the marginal mean
defined from individual-level observations can be reformulated as the
quasi-score for the same marginal mean defined from the cluster-period means.
An additional mapping of the individual-level ICCs into correlations for the
cluster-period means further provides a rigorous justification for the
cluster-period approach. The proposed approach addresses a long-recognized
computational burden associated with estimating equations defined based on
individual-level observations, and enables fast point and interval estimation
of the intervention effect and correlations. We further propose matrix-adjusted
estimating equations to improve the finite-sample inference for ICCs. By
providing a valid approach to estimate ICCs within the class of generalized
linear models for correlated binary outcomes, this article operationalizes key
recommendations from the CONSORT extension to SW-CRTs, including the reporting
of ICCs.
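A rough sketch of the cluster-period idea using off-the-shelf tools: collapse individual binary outcomes to cluster-period means and fit a marginal model with an exchangeable working structure. The Gaussian working family on means is a crude stand-in for the paper's exact induced-correlation derivation, and the data below are simulated.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated SW-CRT: 12 clusters, 5 periods, 50 individuals each.
    rng = np.random.default_rng(0)
    rows = []
    for c in range(12):
        step = 1 + c % 4                 # period at which cluster switches
        for t in range(5):
            treated = int(t >= step)
            y = rng.binomial(1, 0.3 + 0.1 * treated, size=50)
            rows += [(c, t, treated, yi) for yi in y]
    df = pd.DataFrame(rows, columns=["cluster", "period", "treated", "y"])

    # Collapse to cluster-period means and fit a marginal model.
    cp = (df.groupby(["cluster", "period", "treated"], as_index=False)
            .agg(ybar=("y", "mean"), n=("y", "size")))
    res = smf.gee("ybar ~ C(period) + treated", groups="cluster", data=cp,
                  cov_struct=sm.cov_struct.Exchangeable(),
                  family=sm.families.Gaussian()).fit()
    print(res.params["treated"])         # population-averaged effect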
|
We present the design and experimental demonstration of an open-endcap radio
frequency trap to confine ion crystals in the radial-two dimensional (2D)
structural phase. The central axis of the trap is kept free of obstructions to
allow for site-resolved imaging of ions in the 2D crystal plane, and the
confining potentials are provided by four segmented blade electrodes. We
discuss the design challenges, fabrication techniques, and voltage requirements
for implementing this open-endcap trap. Finally, we validate its operation by
confining up to 29 ions in a 2D triangular lattice, oriented such that both
in-plane principal axes of the 2D crystal lie in the radial direction.
|
Meta-analysis is a powerful tool for drug safety assessment by synthesizing
treatment-related toxicological findings from independent clinical trials.
However, published clinical studies may or may not report all adverse events
(AEs) if the observed number of AEs was below a pre-specified
study-dependent cutoff. Subsequently, with censored information ignored, the
estimated incidence rate of AEs could be significantly biased. To address this
non-ignorable missing data problem in meta-analysis, we propose a Bayesian
multilevel regression model to accommodate the censored rare event data. The
performance of the proposed Bayesian model of censored data compared to other
existing methods is demonstrated through simulation studies under various
censoring scenarios. Finally, the proposed approach is illustrated using data
from a recent meta-analysis of 125 clinical trials involving PD-1/PD-L1
inhibitors with respect to their toxicity profiles.
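The core of the censoring correction can be shown in a simplified one-rate frequentist form: a non-reporting study contributes $P(X \le \text{cutoff} - 1)$ to the likelihood rather than being treated as having zero events (the paper embeds this idea in a Bayesian multilevel model; all numbers below are mock).

    # Simplified censored likelihood for a common AE incidence rate.
    import numpy as np
    from scipy import stats, optimize

    counts = [12, 7, None, 20, None]      # None = censored study
    n_pat = [200, 150, 100, 300, 80]
    cutoff = 5                            # study-dependent in general

    def neg_loglik(logit_p):
        p = 1.0 / (1.0 + np.exp(-logit_p))
        ll = 0.0
        for x, n in zip(counts, n_pat):
            if x is None:
                ll += stats.binom.logcdf(cutoff - 1, n, p)
            else:
                ll += stats.binom.logpmf(x, n, p)
        return -ll

    res = optimize.minimize_scalar(neg_loglik)
    print("MLE incidence:", 1.0 / (1.0 + np.exp(-res.x)))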
|
Ground Penetrating Radar (GPR) is an effective non-destructive evaluation
(NDE) device for inspecting and surveying subsurface objects (i.e., rebars,
utility pipes) in complex environments. However, the current practice for GPR
data collection requires a human inspector to move a GPR cart along pre-marked
grid lines and record the GPR data in both X and Y directions for
post-processing by 3D GPR imaging software. It is time-consuming and tedious
work to survey a large area. Furthermore, identifying the subsurface targets
depends on the knowledge of an experienced engineer, who has to make manual and
subjective interpretation that limits the GPR applications, especially in
large-scale scenarios. In addition, current GPR imaging technology is neither
intuitive nor easy for non-expert users to understand and visualize. To
address the above challenges, this paper presents a novel robotic
system to collect GPR data, interpret GPR data, localize the underground
utilities, reconstruct and visualize the underground objects' dense point cloud
model in a user-friendly manner. This system is composed of three modules: 1) a
vision-aided Omni-directional robotic data collection platform, which enables
the GPR antenna to scan the target area freely with an arbitrary trajectory
while using a visual-inertial-based positioning module tags the GPR
measurements with positioning information; 2) a deep neural network (DNN)
migration module to interpret the raw GPR B-scan image into a cross-section of
object model; 3) a DNN-based 3D reconstruction method, i.e., GPRNet, to
generate an underground utility model represented as a fine 3D point cloud.
Comparative studies on synthetic and field GPR raw data with various
incompleteness and noise are performed.
|
This short paper describes an ongoing research project that requires the
automated self-play learning and evaluation of a large number of board games in
digital form. We describe the approach we are taking to determine relevant
features, for biasing MCTS playouts for arbitrary games played on arbitrary
geometries. Benefits of our approach include efficient implementation, the
potential to transfer learnt knowledge to new contexts, and the potential to
explain strategic knowledge embedded in features in human-comprehensible terms.
|
Interactive robots navigating photo-realistic environments face challenges
underlying vision-and-language navigation (VLN), but in addition, they need to
be trained to handle the dynamic nature of dialogue. However, research in
Cooperative Vision-and-Dialog Navigation (CVDN), where a navigator interacts
with a guide in natural language in order to reach a goal, treats the dialogue
history as a VLN-style static instruction. In this paper, we present VISITRON,
a navigator better suited to the interactive regime inherent to CVDN by being
trained to: i) identify and associate object-level concepts and semantics
between the environment and dialogue history, ii) identify when to interact vs.
navigate via imitation learning of a binary classification head. We perform
extensive ablations with VISITRON to gain empirical insights and improve
performance on CVDN. VISITRON is competitive with models on the static CVDN
leaderboard. We also propose a generalized interactive regime to fine-tune and
evaluate VISITRON and future such models with pre-trained guides for
adaptability.
|
We investigate the structure of the minimal displacement set in $8$-located
complexes with the SD'-property. We show that such a set embeds isometrically
into the complex. Since $8$-location and simple connectivity imply Gromov
hyperbolicity, the minimal displacement set in such a complex is systolic. Using
these results, we construct a low-dimensional classifying space for the family
of virtually cyclic subgroups of a group acting properly on an $8$-located
complex with the SD'-property.
|
The great performance of machine learning algorithms and deep neural networks
in several perception and control tasks is pushing the industry to adopt such
technologies in safety-critical applications, such as autonomous robots and
self-driving vehicles. At present, however, several issues need to be solved to
make deep learning methods more trustworthy, predictable, safe, and secure
against adversarial attacks. Although several methods have been proposed to
improve the trustworthiness of deep neural networks, most of them are tailored
for specific classes of adversarial examples, hence failing to detect other
corner cases or unsafe inputs that heavily deviate from the training samples.
This paper presents a lightweight monitoring architecture based on coverage
paradigms to enhance the model robustness against different unsafe inputs. In
particular, four coverage analysis methods are proposed and tested in the
architecture for evaluating multiple detection logics. Experimental results
show that the proposed approach is effective in detecting both powerful
adversarial examples and out-of-distribution inputs, introducing limited
extra-execution time and memory requirements.
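One simple detection logic of this kind records per-neuron activation ranges on training data and flags inputs whose hidden activations fall outside them; this range-coverage variant is an illustration, not necessarily one of the four methods evaluated in the paper.

    # Range-coverage monitor: record per-neuron activation intervals on
    # training data; flag inputs with many out-of-range activations.
    import numpy as np

    def fit_monitor(train_acts):
        return train_acts.min(axis=0), train_acts.max(axis=0)

    def out_of_range_fraction(act, lo, hi):
        return np.logical_or(act < lo, act > hi).mean()

    rng = np.random.default_rng(0)
    train_acts = rng.normal(0.0, 1.0, (1000, 64))     # mock hidden layer
    lo, hi = fit_monitor(train_acts)
    print(out_of_range_fraction(rng.normal(0, 1, 64), lo, hi))  # in-dist
    print(out_of_range_fraction(rng.normal(4, 1, 64), lo, hi))  # shifted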
|
Out-of-time-order correlators (OTOCs) have become established as a tool to
characterise quantum information dynamics and thermalisation in interacting
quantum many-body systems. It was recently argued that the expected exponential
growth of the OTOC is connected to the existence of correlations beyond those
encoded in the standard Eigenstate Thermalisation Hypothesis (ETH). We show
explicitly, by an extensive numerical analysis of the statistics of operator
matrix elements in conjunction with a detailed study of OTOC dynamics, that the
OTOC is indeed a precise tool to explore the fine details of the ETH. In
particular, while short-time dynamics is dominated by correlations, the
long-time saturation behaviour gives clear indications of an operator-dependent
energy scale $\omega_{\textrm{GOE}}$ associated to the emergence of an
effective Gaussian random matrix theory. We provide an estimation of the
finite-size scaling of $\omega_{\textrm{GOE}}$ for the general class of
observables composed of sums of local operators in the infinite-temperature
regime, finding linear behaviour for the models considered.
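For context, the infinite-temperature OTOC used in such analyses can be computed by exact diagonalization for a small chain; the mixed-field Ising Hamiltonian and the operator choices below are generic illustrations, not the paper's specific models.

    # OTOC C(t) = Tr([W(t),V]^dag [W(t),V]) / 2^L by exact diagonalization.
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def op(site, o, L):
        m = np.array([[1]], dtype=complex)
        for j in range(L):
            m = np.kron(m, o if j == site else np.eye(2))
        return m

    L = 6
    H = sum(op(j, sz, L) @ op(j + 1, sz, L) for j in range(L - 1))
    H = H + sum(0.9 * op(j, sx, L) + 0.5 * op(j, sz, L) for j in range(L))
    E, U = np.linalg.eigh(H)

    W, V = op(0, sz, L), op(L - 1, sz, L)
    for t in np.linspace(0.0, 5.0, 6):
        Ut = U @ np.diag(np.exp(-1j * E * t)) @ U.conj().T
        Wt = Ut.conj().T @ W @ Ut
        comm = Wt @ V - V @ Wt
        print(t, np.trace(comm.conj().T @ comm).real / 2 ** L)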
|
The ALICE collaboration at the large hadron collider (LHC) recently reported
high-statistics $p_t$ spectrum data from 5 TeV and 13 TeV $p$-$p$ collisions.
Particle data for each energy were partitioned into event classes based on the
total yields within two disjoint pseudorapidity $\eta$ intervals denoted by
acronyms V0M and SPD. For each energy the spectra resulting from the two
selection methods were then compared to a minimum-bias INEL $> 0$ average over
the entire event population. The nominal goal was determination of the role of
jets in high-multiplicity $p$-$p$ collisions and especially the jet
contribution to the low-$p_t$ parts of spectra. A related motivation was
response to recent claims of "collective" behavior and other nominal indicators
of quark-gluon plasma (QGP) formation in small collision systems. In the
present study a two-component (soft + hard) model (TCM) of hadron production in
$p$-$p$ collisions is applied to the ALICE spectrum data. As in previous TCM
studies of a variety of A-B collision systems the jet and nonjet contributions
to $p$-$p$ spectra are accurately separated over the entire $p_t$ acceptance.
Distinction is maintained among spectrum normalizations, jet contributions to
spectra and systematic biases resulting from V0M and SPD event selection. The
statistical significance of data-model differences is established. The effect
of {\em spherocity} (azimuthal asymmetry measure nominally sensitive to jet
production) on ensemble-mean $p_t$ vs event multiplicity $n_{ch}$ is
investigated and found to have little relation to jet production. The general
results of the TCM analysis are as expected from a conventional QCD description
of jet production in $p$-$p$ collisions.
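The spherocity observable mentioned above is commonly defined as $S_0 = (\pi^2/4)\,\min_{\hat{n}} \left(\sum_i |\vec{p}_{T,i} \times \hat{n}| / \sum_i p_{T,i}\right)^2$; a direct implementation with the minimum scanned over azimuthal directions (binning conventions vary between analyses):

    # Transverse spherocity of a mock event.
    import numpy as np

    def spherocity(px, py, n_dirs=360):
        pt = np.hypot(px, py)
        phi = np.linspace(0.0, np.pi, n_dirs, endpoint=False)
        nx, ny = np.cos(phi), np.sin(phi)
        cross = np.abs(np.outer(px, ny) - np.outer(py, nx)).sum(axis=0)
        return (np.pi ** 2 / 4.0) * (cross.min() / pt.sum()) ** 2

    rng = np.random.default_rng(0)
    phi = rng.uniform(0.0, 2.0 * np.pi, 30)      # isotropic mock event
    pt = rng.exponential(0.5, 30)
    print(spherocity(pt * np.cos(phi), pt * np.sin(phi)))  # close to 1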
|
We introduce the notion of Gelfand pairs and zonal spherical functions for
Iwahori-Hecke algebras.
|
Despite the immense societal importance of ethically designing artificial
intelligence (AI), little research on the public perceptions of ethical AI
principles exists. This becomes even more striking when considering that
ethical AI development has the aim to be human-centric and of benefit for the
whole society. In this study, we investigate how ethical principles
(explainability, fairness, security, accountability, accuracy, privacy, machine
autonomy) are weighted in comparison to each other. This is especially
important, since simultaneously considering ethical principles is not only
costly, but sometimes even impossible, as developers must make specific
trade-off decisions. In this paper, we give first answers on the relative
importance of ethical principles given a specific use case - the use of AI in
tax fraud detection. The results of a large conjoint survey (n=1099) suggest
that, by and large, German respondents found the ethical principles equally
important. However, subsequent cluster analysis shows that different preference
models for ethically designed systems exist among the German population. These
clusters substantially differ not only in the preferred attributes, but also in
the importance level of the attributes themselves. We further describe how
these groups are constituted in terms of sociodemographics as well as opinions
on AI. Societal implications as well as design challenges are discussed.
|
The Eastin-Knill theorem states that no quantum error correcting code can
have a universal set of transversal gates. For CSS codes that can implement
Clifford gates transversally it suffices to provide one additional non-Clifford
gate, such as the T-gate, to achieve universality. Common methods to implement
fault-tolerant T-gates like magic state distillation generate a significant
hardware overhead that will likely prevent their practical usage in the
near-term future. Recently, methods have been developed to mitigate the effect
of noise in shallow quantum circuits that are not protected by error
correction. Error mitigation methods require no additional hardware resources
but suffer from a bad asymptotic scaling and apply only to a restricted class
of quantum algorithms. In this work, we combine both approaches and show how to
implement encoded Clifford+T circuits where Clifford gates are protected from
noise by error correction while errors introduced by noisy encoded T-gates are
mitigated using the quasi-probability method. As a result, Clifford+T circuits
with a number of T-gates inversely proportional to the physical noise rate can
be implemented on small error-corrected devices without magic state
distillation. We argue that such circuits can be out of reach for
state-of-the-art classical simulation algorithms.
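The quasi-probability method referred to above can be illustrated on a single noisy T-gate under depolarizing noise: the inverse noise channel is written as a signed mixture of Pauli operations, sampled with probabilities $|q_i|/\gamma$ and reweighted by $\gamma\,\mathrm{sign}(q_i)$. The noise model and parameters below are arbitrary toy choices.

    # Toy quasi-probability mitigation of one noisy T-gate.
    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.diag([1.0, -1.0]).astype(complex)
    T = np.diag([1.0, np.exp(1j * np.pi / 4)])

    p = 0.05
    lam = 1.0 - 4.0 * p / 3.0
    noisy = lambda rho: lam * rho + (1.0 - lam) * I2 / 2.0

    # Inverse channel as a signed Pauli mixture: sum_i q_i P_i . P_i
    q = np.array([(3 + lam) / (4 * lam)] + [-(1 - lam) / (4 * lam)] * 3)
    paulis = [I2, X, Y, Z]
    gamma = np.abs(q).sum()                  # sampling overhead

    rho = 0.5 * np.ones((2, 2), dtype=complex)          # |+><+|
    ideal = np.trace(X @ T @ rho @ T.conj().T).real     # ~0.707

    rng = np.random.default_rng(0)
    est, n = 0.0, 20000
    for _ in range(n):
        i = rng.choice(4, p=np.abs(q) / gamma)
        sigma = paulis[i] @ noisy(T @ rho @ T.conj().T) @ paulis[i].conj().T
        est += gamma * np.sign(q[i]) * np.trace(X @ sigma).real
    print(ideal, est / n)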
|
In this paper we introduce the long-range dependent completely correlated
mixed fractional Brownian motion (ccmfBm). This is a process that is driven by
a mixture of Brownian motion (Bm) and a long-range dependent completely
correlated fractional Brownian motion (fBm, ccfBm) that is constructed from the
Brownian motion via the Molchan--Golosov representation. Thus, there is a
single Bm driving the mixed process. In the short time-scales the ccmfBm
behaves like the Bm (it has Brownian H\"older index and quadratic variation).
However, in the long time-scales it behaves like the fBm (it has long-range
dependence governed by the fBm's Hurst index). We provide a transfer principle
for the ccmfBm and use it to construct the Cameron--Martin--Girsanov--Hitsuda
theorem and prediction formulas. Finally, we illustrate the ccmfBm by
simulations.
|
We consider a time-delayed HIV/AIDS epidemic model with education
dissemination and study the asymptotic dynamics of solutions as well as the
asymptotic behavior of the endemic equilibrium with respect to the amount of
information disseminated about the disease. Under appropriate assumptions on
the infection rates, we show that if the basic reproduction number is less than
or equal to one, then the disease will be eradicated in the long run and any
solution to the Cauchy problem converges to the unique disease-free equilibrium
of the model. On the other hand, when the basic reproduction number is greater
than one, we prove that the disease will be permanent but its impact on the
population can be significantly minimized as the amount of education
dissemination increases. In particular, under appropriate hypothesis on the
model parameters, we establish that the size of the component of the infected
population of the endemic equilibrium decreases linearly as a function of the
amount of information disseminated. We also fit our model to a set of data on
HIV/AIDS in order to estimate the infection, effective response, and
information rates of the disease. We then use these estimates to present
numerical simulations to illustrate our theoretical findings.
|
The proposed space-borne laser interferometric gravitational wave (GW)
observatory TianQin adopts a geocentric orbit for its nearly equilateral
triangular constellation formed by three identical drag-free satellites. The
geocentric distance of each satellite is $\approx 1.0 \times 10^{5}
~\mathrm{km}$, which makes the armlengths of the interferometer be $\approx
1.73 \times 10^{5} ~\mathrm{km}$. It aims to detect GWs in the $0.1
~\mathrm{mHz} - 1 ~\mathrm{Hz}$ band. For space-borne detectors, the armlengths are
unequal and change continuously, with the result that the laser frequency noise
is nearly $7-8$ orders of magnitude higher than the secondary noises (such as
acceleration noise, optical path noise, etc.). The time delay interferometry
(TDI) that synthesizes virtual interferometers from time-delayed one-way
frequency measurements has been proposed to suppress the laser frequency noise
to the level that is comparable or below the secondary noises. In this work, we
evaluate the performance of various data combinations for both first- and
second-generation TDI based on the five-year numerically optimized orbits of
the TianQin's satellites which exhibit the actual rotating and flexing of the
constellation. We find that the time differences of symmetric interference
paths of the data combinations are $\sim 10^{-8}$ s for the first-generation
TDI and $\sim 10^{-12}$ s for the second-generation TDI, respectively. While
the second-generation TDI is guaranteed to be valid for TianQin, the
first-generation TDI may also prove adequate for GW signal detection, provided
the laser frequency noise is further stabilized in the GW frequencies of
interest.
|
Medical imaging plays a pivotal role in diagnosis and treatment in clinical
practice. Inspired by the significant progress in automatic image captioning,
various deep learning (DL)-based architectures have been proposed for
generating radiology reports for medical images. However, model uncertainty
(i.e., model reliability/confidence on report generation) is still an
under-explored problem. In this paper, we propose a novel method to explicitly
quantify both the visual uncertainty and the textual uncertainty for the task
of radiology report generation. Such multi-modal uncertainties can sufficiently
capture the model confidence scores at both the report-level and the
sentence-level, and thus they are further leveraged to weight the losses for
achieving more comprehensive model optimization. Our experimental results have
demonstrated that our proposed method for model uncertainty characterization
and estimation can provide more reliable confidence scores for radiology report
generation, and our proposed uncertainty-weighted losses can achieve more
comprehensive model optimization and result in state-of-the-art performance on
a public radiology report dataset.
|
In a recent work, we provided a standardized and exact analytical formalism
for computing in the semiclassical regime the radiation force experienced by a
two-level atom interacting with any number of plane waves with arbitrary
intensities, frequencies, phases, and propagation directions [J. Opt. Soc. Am.
B \textbf{35}, 127-132 (2018)]. Here, we extend this treatment to the
multilevel atom case, where degeneracy of the atomic levels is considered and
polarization of light comes into play. A matrix formalism is developed for this
purpose.
|
With the rising demand of smart mobility, ride-hailing service is getting
popular in the urban regions. These services maintain a system for serving the
incoming trip requests by dispatching available vehicles to the pickup points.
As the process should be socially and economically profitable, the task of
vehicle dispatching is highly challenging, especially due to the time-varying
travel demands and traffic conditions. Due to the uneven distribution of travel
demands, many idle vehicles could be generated during the operation in
different subareas. Most existing works on vehicle dispatching systems
designed static relocation centers to relocate idle vehicles. However, as
traffic conditions and demand distribution dynamically change over time, the
static solution cannot fit the evolving situations. In this paper, we propose
a dynamic future demand aware vehicle dispatching system. It can dynamically
search the relocation centers considering both travel demand and traffic
conditions. We evaluate the system on real-world dataset, and compare with the
existing state-of-the-art methods in our experiments in terms of several
standard evaluation metrics and operation time. Through our experiments, we
demonstrate that the proposed system significantly improves the serving ratio
with only a very small increase in operation cost.
|
Nowadays, we are witnessing an increasing demand in both industry and
academia for exploiting Deep Learning (DL) to solve complex real-world
problems. A DL program encodes the network structure of a desirable DL model
and the process by which the model learns from the training dataset. Like any
software, a DL program can be faulty, which implies substantial challenges of
software quality assurance, especially in safety-critical domains. It is
therefore crucial to equip DL development teams with efficient fault detection
techniques and tools. In this paper, we propose NeuraLint, a model-based fault
detection approach for DL programs, using meta-modelling and graph
transformations. First, we design a meta-model for DL programs that includes
their base skeleton and fundamental properties. Then, we construct a
graph-based verification process that covers 23 rules defined on top of the
meta-model and implemented as graph transformations to detect faults and design
inefficiencies in the generated models (i.e., instances of the meta-model).
First, the proposed approach is evaluated by finding faults and design
inefficiencies in 28 synthesized examples built from common problems reported
in the literature. Then NeuraLint successfully finds 64 faults and design
inefficiencies in 34 real-world DL programs extracted from Stack Overflow posts
and GitHub repositories. The results show that NeuraLint effectively detects
faults and design issues in both synthesized and real-world examples with a
recall of 70.5% and a precision of 100%. Although the proposed meta-model is
designed for feedforward neural networks, it can be extended to support other
neural network architectures such as recurrent neural networks. Researchers can
also expand our set of verification rules to cover more types of issues in DL
programs.
|
According to the "Hilbert Space Fundamentalism" Thesis, all features of a
physical system, including the $3$D-space, a preferred basis, and factorization
into subsystems, uniquely emerge from the state vector and the Hamiltonian
alone. I give a simplified account of the proof from arXiv:2102.08620 showing
that such emerging structures cannot be both unique and physically relevant.
|
The ${}^{12}\mathrm{C} + {}^{12}\mathrm{C}$ fusion reaction plays a vital
role in the explosive phenomena of the universe. The resonances in the Gamow
window rule its reaction rate and products. Hence, the determination of the
resonance parameters by nuclear models is indispensable as the direct
measurement is not feasible. Here, for the first time, we report the resonances
in the ${}^{12}\mathrm{C} + {}^{12}\mathrm{C}$ fusion reaction described by a
full-microscopic nuclear model. The model plausibly reproduces the measured
low-energy astrophysical $S$-factors and predicts the resonances in the Gamow
window. Contrary to the hindrance model, we conclude that there is no
low-energy suppression of the $S$-factor.
|
When we look at the world around us, we see complex physical systems and
emergent phenomena. Emergence occurs when a system is observed to have
properties that its parts do not have on their own. These properties or
behaviors emerge only when the parts interact in a wider whole. Examples of
emergence can vary from the synchronization of pendulum clocks hanging on the
same wall to the phenomenon of life as an emergent property of chemistry. One
of the most complex systems that exist in nature is the human brain. It
contains on average 100 to 200 billion neurons and about 100 trillion synapses
connecting them. From this vast neuronal dynamics, the ability to learn and
store memory emerges as well as the ability to have complex cognitive skills,
conscious experience and a sense of self. In this work, we investigated how
complex systems like the human brain and chaotic systems create emergent
properties. In order to do so, we used network theory (paper 1) and chaos and
synchronization theory (papers 2 and 3).
|
Background: The critical view of safety (CVS) is poorly adopted in surgical
practices although it is ubiquitously recommended to prevent major bile duct
injuries during laparoscopic cholecystectomy (LC). This study aims to determine
whether performing a short intraoperative time out can improve CVS
implementation. Methods: Surgeons performing LCs at an academic centre were
invited to perform a 5-second long time out to verify CVS before dividing the
cystic duct (5-second rule). The primary endpoint was to compare the rate of
CVS achievement between LCs performed in the year before and the year after the
5-second rule. The CVS achievement rate was computed using the mediated
video-based assessment of two independent reviewers. Clinical outcomes, LC
workflows, and postoperative reports were also compared. Results: Three hundred
and forty-three (171 before and 172 after the 5-second rule) of the 381 LCs
performed over the 2-year study were analysed. After the implementation of the
5-second rule, the rate of CVS achievement increased significantly (15.9 vs
44.1 %; P<0.001) as well as the rate of bailout procedures (8.2 vs 15.7 %;
P=0.045), the median time to clip the cystic duct or artery (17:26
[interquartile range: 16:46] vs 23:12 [17:16] minutes; P=0.007), and the rate
of postoperative CVS reporting (1.3 vs 28.8 %; P<0.001). Morbidity was
comparable (1.75 vs 2.33 % before and after the 5-second rule respectively;
P=0.685). Conclusion: Performing a short intraoperative time out improves CVS
implementation during LC. Systematic intraoperative cognitive aids should be
studied to sustain the uptake of guidelines.
|
We prove the support recovery for a general class of linear and nonlinear
evolutionary partial differential equation (PDE) identification from a single
noisy trajectory using $\ell_1$ regularized Pseudo-Least Squares
model~($\ell_1$-PsLS). In any associative $\mathbb{R}$-algebra generated by
finitely many differentiation operators that contain the unknown PDE operator,
applying $\ell_1$-PsLS to a given data set yields a family of candidate models
with coefficients $\mathbf{c}(\lambda)$ parameterized by the regularization
weight $\lambda\geq 0$. The trace of $\{\mathbf{c}(\lambda)\}_{\lambda\geq 0}$
suffers from high variance due to data noises and finite difference
approximation errors. We provide a set of sufficient conditions which guarantee
that, from a single trajectory data denoised by a Local-Polynomial filter, the
support of $\mathbf{c}(\lambda)$ asymptotically converges to the true
signed-support associated with the underlying PDE for sufficiently many data
and a certain range of $\lambda$. We also show various numerical experiments to
validate our theory.
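The pipeline can be mimicked on mock data with off-the-shelf tools: build a dictionary of candidate terms from finite differences of a simulated trajectory and trace the lasso support path over the regularization weight. Here sklearn's lasso_path stands in for the paper's $\ell_1$-PsLS formulation, and the Local-Polynomial denoising step is omitted.

    # Mock support-path experiment for u_t = 0.5 u_xx.
    import numpy as np
    from sklearn.linear_model import lasso_path

    nx_, nt, dt = 128, 200, 1e-3
    dx = 2 * np.pi / nx_
    x = np.arange(nx_) * dx
    u = np.exp(-((x - np.pi) ** 2))
    snap = []
    for _ in range(nt):
        uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u = u + dt * 0.5 * uxx
        snap.append(u.copy())
    U = np.array(snap)
    U = U + 1e-4 * np.random.default_rng(0).normal(size=U.shape)

    Ut = (U[2:] - U[:-2]) / (2 * dt)
    Um = U[1:-1]
    Ux = (np.roll(Um, -1, 1) - np.roll(Um, 1, 1)) / (2 * dx)
    Uxx = (np.roll(Um, -1, 1) - 2 * Um + np.roll(Um, 1, 1)) / dx**2
    feats = np.stack([Um, Ux, Uxx, Um * Ux], -1).reshape(-1, 4)
    alphas, coefs, _ = lasso_path(feats, Ut.reshape(-1), n_alphas=30)
    print(coefs[:, -1])   # support should single out the u_xx term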
|