One-shot voice conversion (VC), which performs conversion across arbitrary
speakers with only a single target-speaker utterance for reference, can be
effectively achieved by speech representation disentanglement. Existing work
generally ignores the correlation between different speech representations
during training, which causes leakage of content information into the speaker
representation and thus degrades VC performance. To alleviate this issue, we
employ vector quantization (VQ) for content encoding and introduce mutual
information (MI) as the correlation metric during training, to achieve proper
disentanglement of content, speaker and pitch representations, by reducing
their inter-dependencies in an unsupervised manner. Experimental results
reflect the superiority of the proposed method in learning effective
disentangled speech representations for retaining source linguistic content and
intonation variations, while capturing target speaker characteristics. In doing
so, the proposed approach achieves higher speech naturalness and speaker
similarity than current state-of-the-art one-shot VC systems. Our code,
pre-trained models and demo are available at
https://github.com/Wendison/VQMIVC.
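For readers unfamiliar with MI-based disentanglement, the sketch below (PyTorch) shows a CLUB-style variational upper bound on mutual information that could serve as the correlation penalty between, e.g., speaker and content embeddings; the class layout and dimensions are our illustration, not the authors' code.

```python
# Minimal sketch of a CLUB-style MI upper bound (Cheng et al., 2020)
# usable as a disentanglement penalty; names/dims are illustrative.
import torch
import torch.nn as nn

class CLUB(nn.Module):
    """Upper-bounds I(x; y) with a variational Gaussian net q(y|x)."""
    def __init__(self, x_dim, y_dim, hidden=256):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, y_dim))
        self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, y_dim))

    def loglik(self, x, y):
        # maximized w.r.t. q's parameters on paired samples
        mu, logvar = self.mu(x), self.logvar(x)
        return (-(y - mu) ** 2 / logvar.exp() - logvar).sum(1).mean()

    def mi_upper_bound(self, x, y):
        # paired minus shuffled log-likelihood (variance terms cancel)
        mu, logvar = self.mu(x), self.logvar(x)
        pos = (-(y - mu) ** 2 / logvar.exp()).sum(1).mean()
        perm = torch.randperm(len(y))
        neg = (-(y[perm] - mu) ** 2 / logvar.exp()).sum(1).mean()
        return pos - neg
```

In a training loop, the VC losses would be augmented with a weighted `mi_upper_bound` term while `loglik` is maximized separately to keep q(y|x) fitted.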
|
One major impediment in rapidly deploying object detection models for
industrial applications is the lack of large annotated datasets. We previously
presented the Stacked Carton Dataset (SCD), which contains carton images from
three scenarios: a comprehensive pharmaceutical logistics company (CPLC), an
e-commerce logistics company (ECLC), and a fruit market (FM). However, due to
domain shift, a model trained on one of these scenarios in SCD generalizes
poorly when applied to the remaining scenarios. To solve this
problem, a novel image synthesis method is proposed to replace the foreground
texture of the source datasets with the texture of the target datasets. Our
method can keep the context relationship of foreground objects and backgrounds
unchanged and greatly augment the target datasets. We first propose a surface
segmentation algorithm that decouples the texture of each instance.
Secondly, a contour reconstruction algorithm is proposed to keep the occlusion
and truncation relationship of the instance unchanged. Finally, the Gaussian
fusion algorithm is used to replace the foreground texture from the source
datasets with the texture from the target datasets. The proposed image
synthesis method boosts AP by 4.3%-6.5% with RetinaNet and 3.4%-6.8% with
Faster R-CNN on the target domain. Code is available at
https://github.com/hustgetlijun/RCAN.
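As a rough illustration of the final fusion step, the sketch below blends a target-domain texture into a source image under an instance mask whose border is softened by a Gaussian filter; the function name and blending details are assumptions, since the abstract does not specify them.

```python
# Illustrative Gaussian-fusion texture replacement: blend a target-domain
# texture into a source image under an instance mask with softened edges.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_fuse(src_img, tgt_texture, mask, sigma=3.0):
    """src_img, tgt_texture: HxWx3 float arrays; mask: HxW in {0, 1}."""
    soft = gaussian_filter(mask.astype(float), sigma)[..., None]  # soft alpha
    return soft * tgt_texture + (1.0 - soft) * src_img
```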
|
The Transient High Energy Sources and Early Universe Surveyor (THESEUS) is an
ESA M5 candidate mission currently in Phase A, with launch in $\sim$2032. The
aim of the mission is to complete a Gamma Ray Burst survey and monitor transient X-ray
events. The University of Leicester is the PI institute for the Soft X-ray
Instrument (SXI), and is responsible for both the optic and detector
development. The SXI consists of two wide field, lobster eye X-ray modules.
Each module consists of 64 Micro Pore Optics (MPO) in an 8 by 8 array and 8
CMOS detectors in each focal plane. The geometry of the MPOs comprises a square
packed array of microscopic pores with a square cross-section, arranged over a
spherical surface with a radius of curvature twice the focal length of the
optic. Working in the photon energy range 0.3-5 keV, the optimum $L/d$ ratio
(length of pore $L$ and pore width $d$) is upwards of 50 and is constant across
the whole optic aperture for the SXI. The performance goal for the SXI modules
is an angular resolution of 4.5 arcmin and a localisation accuracy of
$\sim$1 arcmin, employing an $L/d$ of 60. During the Phase A study, we are investigating
methods to improve the current performance and consistency of the MPOs, in
cooperation with the manufacturer Photonis France SAS. We present the optics
design of the THESEUS SXI modules, the programme of work designed to improve
the MPOs' performance, and results from the study.
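For concreteness, a back-of-envelope sketch of the stated geometry (radius of curvature equal to twice the focal length, pore length fixed by the $L/d$ ratio); the numerical values are assumed for illustration only.

```python
# Back-of-envelope lobster-eye MPO geometry; all values are assumed.
R_mm = 600.0          # assumed radius of curvature of the MPO sphere
d_um = 20.0           # assumed pore width
L_over_d = 60.0       # L/d ratio quoted for the SXI
f_mm = R_mm / 2.0     # focal length: R = 2f for the slumped geometry
L_um = L_over_d * d_um
print(f"focal length = {f_mm} mm, pore length = {L_um / 1000:.2f} mm")
```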
|
Recent advances in the literature have demonstrated that standard supervised
learning algorithms are ill-suited for problems with endogenous explanatory
variables. To correct for the endogeneity bias, many variants of nonparametric
instrumental variable regression methods have been developed. In this paper, we
propose an alternative algorithm called boostIV that builds on the traditional
gradient boosting algorithm and corrects for the endogeneity bias. The
algorithm is very intuitive and resembles an iterative version of the standard
2SLS estimator. Moreover, our approach is data driven, meaning that the
researcher does not have to take a stance on either the form of the target
function approximation or the choice of instruments. We demonstrate that our
estimator is consistent under mild conditions. We carry out extensive Monte
Carlo simulations to demonstrate the finite sample performance of our algorithm
compared to other recently developed methods. We show that boostIV is at worst
on par with the existing methods and on average significantly outperforms them.
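A minimal caricature of the idea, assuming a linear first stage: each boosting round projects the current residual onto the instrument space and fits a base learner to that projection. This is our sketch, not the authors' implementation.

```python
# Schematic "iterative 2SLS" boosting: project residuals onto the
# instrument space, then fit a shallow tree to the projected target.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_iv(y, X, Z, n_rounds=100, lr=0.1):
    P = Z @ np.linalg.pinv(Z.T @ Z) @ Z.T        # projection onto instruments
    f = np.zeros_like(y)
    learners = []
    for _ in range(n_rounds):
        target = P @ (y - f)                     # instrument-projected residual
        stump = DecisionTreeRegressor(max_depth=2).fit(X, target)
        f += lr * stump.predict(X)
        learners.append(stump)
    return learners
```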
|
We report the interfacing of the Exciting-Plus ("EP") FLAPW DFT code with the
SIRIUS multi-functional DFT library. Use of the SIRIUS library enhances EP with
additional task parallelism in ground state DFT calculations. Without
significant change in the EP source code, the additional eigensystem solver
method from the SIRIUS library can be exploited for performance gains in
diagonalizing the Kohn-Sham Hamiltonian. We benchmark the interfaced code
against the original EP using small bulk systems, and then demonstrate
performance on much larger molecular magnet systems that are well beyond the
capability of the original EP code.
|
Chimera states have attracted significant attention as symmetry-broken states
exhibiting the unexpected coexistence of coherence and incoherence. Despite the
valuable insights gained from analyzing specific systems, an understanding of
the general physical mechanism underlying the emergence of chimeras is still
lacking. Here, we show that many stable chimeras arise because coherence in
part of the system is sustained by incoherence in the rest of the system. This
mechanism may be regarded as a deterministic analog of noise-induced
synchronization and is shown to underlie the emergence of strong chimeras.
These are chimera states whose coherent domain is formed by identically
synchronized oscillators. Recognizing this mechanism offers a new meaning to
the interpretation that chimeras are a natural link between coherence and
incoherence.
|
We present a probabilistic 3D generative model, named Generative Cellular
Automata, which is able to produce diverse and high-quality shapes. We
formulate the shape generation process as sampling from the transition kernel
of a Markov chain, where the sampling chain eventually evolves to the full
shape of the learned distribution. The transition kernel employs the local
update rules of cellular automata, effectively reducing the search space in a
high-resolution 3D grid space by exploiting the connectivity and sparsity of 3D
shapes. Our progressive generation focuses only on the sparse set of occupied
voxels and their neighborhood, thus enabling the utilization of an expressive
sparse convolutional network. We propose an effective training scheme to obtain
the local homogeneous rule of generative cellular automata with sequences that
are slightly different from the sampling chain but converge to the full shapes
in the training data. Extensive experiments on probabilistic shape completion
and shape generation demonstrate that our method achieves competitive
performance against recent methods.
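Schematically, the sampling chain might look as follows; `model.predict_occupancy` is a placeholder for the learned local update rule, and the neighbourhood handling is simplified for illustration.

```python
# Schematic cellular-automaton sampling loop: only occupied voxels and
# their neighbours are queried, and occupancies are resampled from a
# learned local kernel. `model` and the resolution are placeholders.
import numpy as np

OFFSETS = np.array([(dx, dy, dz) for dx in (-1, 0, 1)
                    for dy in (-1, 0, 1) for dz in (-1, 0, 1)])

def sample_shape(model, init_coords, grid=64, steps=30, rng=np.random):
    occ = np.zeros((grid,) * 3, dtype=bool)
    occ[tuple(init_coords.T)] = True
    for _ in range(steps):
        cand = np.argwhere(occ)                        # sparse occupied set
        neigh = (cand[:, None, :] + OFFSETS).reshape(-1, 3) % grid
        probs = model.predict_occupancy(neigh)         # local update rule
        occ[tuple(neigh.T)] = rng.random(len(neigh)) < probs
    return occ
```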
|
Theoretical models of a spin-polarized voltage probe (SPVP) tunnel-coupled to
the helical edge states (HES) of a quantum spin Hall system (QSHS) are studied.
Our first model of the SPVP comprises $N_{P}$ spin-polarized modes (subprobes),
each of which is locally tunnel-coupled to the HES, while the SPVP, as a whole,
is subjected to a self-consistency condition ensuring zero average current on
the probe. We carry out a numerical analysis which shows that the optimal
situation for reading off spin-resolved voltage from the HES depends on the
interplay of the probe-edge tunnel-coupling and the number of modes in the
probe ($N_P$). We further investigate the stability of our findings by
introducing Gaussian fluctuations in {\it{(i)}} the tunnel-coupling between the
subprobes and the HES about a chosen average value and {\it{(ii)}}
spin-polarization of the subprobes about a chosen direction of the net
polarization of SPVP. We also perform a numerical analysis corresponding to the
situation where four such SPVPs are implemented in a self-consistent fashion
across a ferromagnetic barrier on the HES and demonstrate that this model
facilitates the measurements of spin-resolved four-probe voltage drops across
the ferromagnetic barrier. As a second model, we employ the edge state of a
quantum anomalous Hall state (QAHS) as the SPVP which is tunnel-coupled over an
extended region with the HES. A two-dimensional lattice simulation for the
quantum transport of the proposed device setup comprising a junction of QSHS
and QAHS is considered and a feasibility study of using the edge of the QAHS as
an efficient spin-polarized voltage probe is carried out in the presence of an
optimal disorder strength.
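In linear response, the zero-average-current condition fixes the probe voltage in closed form; the toy sketch below mirrors the Gaussian-fluctuation stability test with purely illustrative numbers.

```python
# Self-consistency of a multi-mode voltage probe, schematically: with
# subprobe currents I_i = G_i (V_i - V_p), zero net probe current gives
# V_p as a conductance-weighted average. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_P = 8
G = np.abs(rng.normal(1.0, 0.2, N_P))   # tunnel conductances (fluctuating)
V_edge = rng.normal(0.5, 0.05, N_P)     # local voltages seen by subprobes
V_p = np.sum(G * V_edge) / np.sum(G)    # zero-average-current condition
print(f"probe reads V_p = {V_p:.4f}")
```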
|
Planck data provide precise constraints on cosmological parameters when
assuming the base $\Lambda$CDM model, including a $0.17\%$ measurement of the
age of the Universe, $t_0=13.797 \pm 0.023\,{\rm Gyr}$. However, the
persistence of the "Hubble tension" calls the base $\Lambda$CDM model's
completeness into question and has spurred interest in models such as Early
Dark Energy (EDE) that modify the assumed expansion history of the Universe. We
investigate the effect of EDE on the redshift-time relation $z \leftrightarrow
t$ and find that it differs from the base $\Lambda$CDM model by at least
${\approx} 4\%$ at all $t$ and $z$. As long as EDE remains observationally
viable, any inferred $t \leftarrow z$ or $z \leftarrow t$ quoted to a higher
level of precision does not reflect the current status of our understanding of
cosmology. This uncertainty has important astrophysical implications: the
reionization epoch ($10>z>6$) corresponds to disjoint lookback time periods
in the base $\Lambda$CDM and EDE models, and the EDE value of $t_0=13.25 \pm
0.17~{\rm Gyr}$ is in tension with published ages of some stars, star clusters,
and ultra-faint dwarf galaxies. However, most published stellar ages do not
include an uncertainty in accuracy (due to, e.g., uncertain distances and
stellar physics) that is estimated to be $\sim7-10\%$, potentially reconciling
stellar ages with $t_{0,\rm EDE}$. We discuss how the big data era for stars is
providing extremely precise ages ($<1\%$) and how improved distances and
treatment of stellar physics such as convection could result in ages accurate
to $4-5\%$, comparable to the current accuracy of $t \leftrightarrow z$. Such
precise and accurate stellar ages can provide detailed insight into the
high-redshift Universe independent of a cosmological model.
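The redshift-time relation at issue is the standard Friedmann integral; the sketch below evaluates it for representative Planck-like flat $\Lambda$CDM parameters (values assumed for illustration).

```python
# Age-redshift relation in flat LambdaCDM:
#   t(z) = \int_z^\infty dz' / [(1 + z') H(z')]
import numpy as np
from scipy.integrate import quad

H0 = 67.4                   # km/s/Mpc (representative Planck-like value)
Om, OL = 0.315, 0.685
H0_inv_Gyr = 977.8 / H0     # 1/H0 in Gyr (977.8 Gyr km/s/Mpc)

def E(z):                   # dimensionless H(z)/H0
    return np.sqrt(Om * (1 + z) ** 3 + OL)

def age(z):
    val, _ = quad(lambda zp: 1.0 / ((1 + zp) * E(zp)), z, np.inf)
    return val * H0_inv_Gyr

print(f"t0 = {age(0):.2f} Gyr, t(z=6) = {age(6):.2f} Gyr, "
      f"t(z=10) = {age(10):.2f} Gyr")
```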
|
We are concerned with random ordinary differential equations (RODEs). Our
main question of interest is how uncertainties in system parameters propagate
through the possibly highly nonlinear dynamical system and affect the system's
bifurcation behavior. We develop a methodology to determine the
probability of the occurrence of different types of bifurcations (sub- vs
super-critical) along a given bifurcation curve based on the probability
distribution of the input parameters. In a first step, we reduce the system's
behavior to the dynamics on its center manifold. We thereby still capture the
major qualitative behavior of the RODEs. In a second step, we analyze the
reduced RODEs and quantify the probability of the occurrence of different types
of bifurcations based on the (nonlinear) functional appearance of uncertain
parameters. To realize this major step, we present three approaches: an
analytical one, where the probability can be calculated explicitly based on
Mellin transformation and inversion, a semi-analytical one consisting of a
combination of the analytical approach with a moment-based numerical estimation
procedure, and a particular sampling-based approach using unscented
transformation. We complement our new methodology with various numerical
examples.
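For reference, the Mellin transform and the product property that underlies the analytical approach (for independent positive random variables):

```latex
\[
  \mathcal{M}[f](s) = \int_0^\infty x^{s-1} f(x)\,\mathrm{d}x,
  \qquad
  \mathcal{M}[f_{XY}](s) = \mathcal{M}[f_X](s)\,\mathcal{M}[f_Y](s),
\]
% so the density of a product of independent uncertain parameters can be
% recovered by Mellin inversion.
```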
|
The "Subset Sum problem" is a very well-known NP-complete problem. In this
work, a top-k variation of the "Subset Sum problem" is considered. This problem
has wide application in recommendation systems, where instead of k best objects
the k best subsets of objects with the lowest (or highest) overall scores are
required. Given an input set R of n real numbers and a positive integer k, our
target is to generate the k best subsets of R such that the sum of their
elements is minimized. Our solution methodology is based on constructing a
metadata structure G for a given n. Each node of G stores a bit vector of size
n from which a subset of R can be retrieved. Here it is shown that the
construction of the whole graph G is not needed. To answer a query, only
implicit traversal of the required portion of G on demand is sufficient, which
eliminates the preprocessing step, thereby reducing the overall time
and space requirements. A modified algorithm is then proposed to generate each
subset incrementally, where it is shown that it is possible to do away with the
explicit storage of the bit vector. This not only improves the space
requirement but also improves the asymptotic time complexity. Finally, a
variation of our algorithm that reports only the top-k subset sums has been
compared with an existing algorithm, which shows that our algorithm performs
better in terms of both time and space requirements by a constant factor.
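A classic heap-based scheme conveys the incremental flavour of top-k subset-sum generation; the sketch below assumes nonnegative inputs and is our illustration of the idea, not the paper's metadata-graph construction.

```python
# k smallest subset sums of nonnegative numbers: each heap state encodes
# the index of the last chosen element; its children either append the
# next element or swap the last element for the next one.
import heapq

def k_smallest_subset_sums(a, k):
    a = sorted(a)
    out = [0.0]                          # the empty subset
    if not a or k == 1:
        return out[:k]
    heap = [(a[0], 0)]                   # (sum, index of last element)
    while heap and len(out) < k:
        s, i = heapq.heappop(heap)
        out.append(s)
        if i + 1 < len(a):
            heapq.heappush(heap, (s + a[i + 1], i + 1))         # extend
            heapq.heappush(heap, (s - a[i] + a[i + 1], i + 1))  # swap
    return out

print(k_smallest_subset_sums([3.0, 1.0, 4.0, 1.5], 6))
# -> [0.0, 1.0, 1.5, 2.5, 3.0, 4.0]
```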
|
The attention mechanism enables Graph Neural Networks (GNNs) to learn
attention weights between the target node and its one-hop neighbors, further
improving performance. However, most existing GNNs are oriented
to homogeneous graphs, and each layer can only aggregate the information of
one-hop neighbors. Stacking multi-layer networks introduces considerable noise
and easily leads to over-smoothing. We propose a Multi-hop Heterogeneous
Neighborhood information Fusion graph representation learning method (MHNF).
Specifically, we first propose a hybrid metapath autonomous extraction model to
efficiently extract multi-hop hybrid neighbors. Then, we propose a hop-level
heterogeneous Information aggregation model, which selectively aggregates
different-hop neighborhood information within the same hybrid metapath.
Finally, a hierarchical semantic attention fusion model (HSAF) is proposed,
which can efficiently integrate different-hop and different-path neighborhood
information, respectively. This approach solves the problem of aggregating
multi-hop neighborhood information and can learn hybrid metapaths for the
target task, reducing the limitation of manually specifying metapaths. In addition,
HSAF can extract the internal node information of the metapaths and better
integrate the semantic information of different levels. Experimental results on
real datasets show that MHNF is superior to state-of-the-art methods in node
classification and clustering tasks (10.94% - 69.09% and 11.58% - 394.93%
relative improvement on average, respectively).
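A minimal sketch of the hierarchical semantic-attention idea (PyTorch), in the spirit of HSAF: per-metapath (or per-hop) summaries are scored by a shared learnable query and fused by softmax weights. Dimensions and names are illustrative, not the paper's code.

```python
# HAN-style semantic attention: score each metapath summary with a shared
# query vector, then fuse node embeddings with the softmax weights.
import torch
import torch.nn as nn

class SemanticAttention(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.proj = nn.Linear(dim, hidden)
        self.q = nn.Parameter(torch.randn(hidden))

    def forward(self, h):                            # h: [paths, nodes, dim]
        score = torch.tanh(self.proj(h)) @ self.q    # [paths, nodes]
        alpha = torch.softmax(score.mean(dim=1), dim=0)  # weight per path
        return (alpha[:, None, None] * h).sum(dim=0)     # fused [nodes, dim]
```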
|
We present a calculation of the up, down, strange and charm quark masses
performed within the lattice QCD framework. We use the twisted mass fermion
action and carry out simulations that include in the sea two light
mass-degenerate quarks, as well as the strange and charm quarks. In the
analysis we use gauge ensembles simulated at three values of the lattice
spacing and with light quarks that correspond to pion masses in the range from
350 MeV to the physical value, while the strange and charm quark masses are
tuned approximately to their physical values. We use several quantities to set
the scale in order to check for finite lattice spacing effects and in the
continuum limit we get compatible results. The quark mass renormalization is
carried out non-perturbatively using the RI'-MOM method converted into the
$\overline{\rm MS}$ scheme. For the determination of the quark masses we use
physical observables from both the meson and the baryon sectors, obtaining
$m_{ud} = 3.636(66)(^{+60}_{-57})$~MeV and $m_s =
98.7(2.4)(^{+4.0}_{-3.2})$~MeV in the $\overline{\rm MS}(2\,{\rm GeV})$ scheme
and $m_c = 1036(17)(^{+15}_{-8})$~MeV in the $\overline{\rm MS}(3\,{\rm GeV})$
scheme, where the first errors are statistical and the second ones are
combinations of systematic errors. For the quark mass ratios we get $m_s /
m_{ud} = 27.17(32)(^{+56}_{-38})$ and $m_c / m_s = 11.48(12)(^{+25}_{-19})$.
|
Coupled flow-induced flapping dynamics of flexible plates are governed by
three non-dimensional numbers: Reynolds number, mass-ratio, and non-dimensional
flexural rigidity. The traditional definition of these parameters is limited to
isotropic single-layered flexible plates. There is a need to define these
parameters for a more generic plate made of multiple isotropic layers placed on
top of each other. In this work, we derive the non-dimensional parameters for a
flexible plate of $n$-isotropic layers and validate the non-dimensional
parameters with the aid of numerical simulations.
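For orientation, the conventional single-layer definitions that the paper generalizes (symbols as commonly used in the flapping-plate literature; the multilayer case replaces the structural quantities with laminate-effective ones):

```latex
% Plate of thickness h, density \rho_s, flexural rigidity B, in a fluid of
% density \rho_f and viscosity \mu, with free stream U and plate length L:
\[
  Re = \frac{\rho_f U L}{\mu}, \qquad
  m^* = \frac{\rho_s h}{\rho_f L}, \qquad
  K_B = \frac{B}{\rho_f U^2 L^3}.
\]
% For an n-layer plate, \rho_s h \to \sum_i \rho_i h_i and B is replaced by
% an effective rigidity of the layup.
```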
|
To help agents reason about scenes in terms of their building blocks, we wish
to extract the compositional structure of any given scene (in particular, the
configuration and characteristics of objects comprising the scene). This
problem is especially difficult when scene structure needs to be inferred while
also estimating the agent's location/viewpoint, as the two variables jointly
give rise to the agent's observations. We present an unsupervised variational
approach to this problem. Leveraging the shared structure that exists across
different scenes, our model learns to infer two sets of latent representations
from RGB video input alone: a set of "object" latents, corresponding to the
time-invariant, object-level contents of the scene, as well as a set of "frame"
latents, corresponding to global time-varying elements such as viewpoint. This
factorization of latents allows our model, SIMONe, to represent object
attributes in an allocentric manner which does not depend on viewpoint.
Moreover, it allows us to disentangle object dynamics and summarize their
trajectories as time-abstracted, view-invariant, per-object properties. We
demonstrate these capabilities, as well as the model's performance in terms of
view synthesis and instance segmentation, across three procedurally generated
video datasets.
|
In recent years, the use of sophisticated statistical models that influence
decisions in domains of high societal relevance is on the rise. Although these
models can often bring substantial improvements in the accuracy and efficiency
of organizations, many governments, institutions, and companies are reluctant
to adopt them, as their output is often difficult to explain in
human-interpretable ways. Hence, these models are often regarded as
black-boxes, in the sense that their internal mechanisms can be opaque to human
audit. In real-world applications, particularly in domains where decisions can
have a sensitive impact--e.g., criminal justice, estimating credit scores,
insurance risk, health risks, etc.--model interpretability is desired.
Recently, the academic literature has proposed a substantial amount of methods
for providing interpretable explanations to machine learning models. This
survey reviews the most relevant and novel methods that form the
state-of-the-art for addressing the particular problem of explaining individual
instances in machine learning. It seeks to provide a succinct review that can
guide data science and machine learning practitioners in the search for
methods appropriate to their problem domain.
|
The Kolkata Paise Restaurant Problem is a challenging game, in which $n$
agents must decide where to have lunch during their lunch break. The game is
very interesting because there are exactly $n$ restaurants and each restaurant
can accommodate only one agent. If two or more agents happen to choose the same
restaurant, only one gets served and the others have to return to work
hungry. In this paper we tackle this problem from an entirely new angle. We
abolish certain implicit assumptions, which allows us to propose a novel
strategy that results in greater utilization for the restaurants. We emphasize
the spatially distributed nature of our approach, which, for the first time,
perceives the locations of the restaurants as uniformly distributed in the
entire city area. This critical change in perspective has profound
ramifications in the topological layout of the restaurants, which now makes it
completely realistic to assume that every agent has a second chance. In case
of failure, every agent may now visit more than one restaurant, within the
predefined time constraints.
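A toy Monte Carlo makes the benefit of a second chance concrete: agents pick uniformly at random, and in the variant, losers re-pick among restaurants left free after round one. This is our illustration, not the paper's strategy.

```python
# Utilization in the one-shot KPR game vs. a "second chance" variant.
import numpy as np

def utilization(n=1000, trials=200, second_chance=False,
                rng=np.random.default_rng(1)):
    served = 0
    for _ in range(trials):
        picks = rng.integers(0, n, n)
        taken = np.unique(picks)
        if second_chance:
            losers = n - len(taken)                 # agents not served
            free = np.setdiff1d(np.arange(n), taken)
            if losers:
                picks2 = rng.choice(free, size=losers)  # random re-pick
                taken = np.union1d(taken, np.unique(picks2))
        served += len(taken)
    return served / (n * trials)

print(utilization(), utilization(second_chance=True))  # ~0.63 vs. higher
```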
|
A graph G is said to be orderenergetic if its energy equals its order, and
hypoenergetic if its energy is less than its order. Two
non-isomorphic graphs of the same order are said to be equienergetic if their
energies are equal. In this paper, we construct some new families of
orderenergetic graphs, hypoenergetic graphs, equienergetic graphs,
equiorderenergetic graphs and equihypoenergetic graphs.
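The underlying definitions, for reference:

```latex
% With \lambda_1, \dots, \lambda_n the adjacency eigenvalues of a graph G
% on n vertices:
\[
  E(G) = \sum_{i=1}^{n} |\lambda_i|, \qquad
  \text{orderenergetic: } E(G) = n, \qquad
  \text{hypoenergetic: } E(G) < n.
\]
```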
|
In order to obtain $\lambda$-models with a rich $\infty$-groupoid structure,
which we call "homotopy $\lambda$-models", a general technique is described for
solving domain equations on any cartesian closed $\infty$-category (c.c.i.)
with enough points. Finally, the technique is applied in a particular c.c.i.,
where some examples of homotopy $\lambda$-models are given.
|
The central engines of Active Galactic Nuclei (AGNs) are powered by accreting
supermassive black holes, and while AGNs are known to play an important role in
galaxy evolution, the key physical processes occur on scales that are too small
to be resolved spatially (aside from a few exceptional cases). Reverberation
mapping is a powerful technique that overcomes this limitation by using echoes
of light to determine the geometry and kinematics of the central regions.
Variable ionizing radiation from close to the black hole drives correlated
variability in surrounding gas/dust, but with a time delay due to the light
travel time between the regions, allowing reverberation mapping to effectively
replace spatial resolution with time resolution. Reverberation mapping is used
to measure black hole masses and to probe the innermost X-ray emitting region,
the UV/optical accretion disk, the broad emission line region and the dusty
torus. In this article we provide an overview of the technique and its varied
applications.
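The core quantitative relations are the light-travel-time radius and the virial mass estimate:

```latex
% \tau is the measured lag, \Delta v the line velocity width, and f an
% order-unity geometric factor:
\[
  R = c\,\tau, \qquad M_{\rm BH} = f\,\frac{c\,\tau\,(\Delta v)^2}{G}.
\]
```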
|
In this paper, we study the Feldman-Katok metric. We give entropy formulas by
replacing the Bowen metric with the Feldman-Katok metric. Some related topics
are also discussed.
|
We propose two systematic constructions of deletion-correcting codes for
protecting quantum information. The first one works with qudits of any
dimension, but only one deletion is corrected and the constructed codes are
asymptotically bad. The second one corrects multiple deletions and can
construct asymptotically good codes. The second construction also allows
conversion of stabilizer-based quantum codes to deletion-correcting codes, and
supports entanglement assistance.
|
Resistive random access memories are promising for non-volatile memory and
brain-inspired computing applications. High variability and low yield of these
devices are key drawbacks hindering reliable training of physical neural
networks. In this study, we show that doping an oxide electrolyte, Al2O3, with
electronegative metals makes resistive switching significantly more
reproducible, surpassing the reproducibility requirements for obtaining
reliable hardware neuromorphic circuits. The underlying mechanism is the ease
of creating oxygen vacancies in the vicinity of electronegative dopants, due to
the capture of the associated electrons by dopant mid-gap states, and the
weakening of Al-O bonds. These oxygen vacancies and vacancy clusters also bind
significantly to the dopant, thereby serving as preferential sites and building
blocks in the formation of conducting paths. We validate this theory
experimentally by implanting multiple dopants over a range of
electronegativities, and find superior repeatability and yield with highly
electronegative metals, Au, Pt and Pd. These devices also exhibit a gradual SET
transition, enabling multibit switching that is desirable for analog computing.
|
Characterizing the privacy degradation over compositions, i.e., privacy
accounting, is a fundamental topic in differential privacy (DP) with many
applications to differentially private machine learning and federated learning.
We propose a unification of recent advances (Renyi DP, privacy profiles, $f$-DP
and the PLD formalism) via the \emph{characteristic function} ($\phi$-function)
of a certain \emph{dominating} privacy loss random variable. We show that our
approach allows \emph{natural} adaptive composition like Renyi DP, provides
\emph{exactly tight} privacy accounting like PLD, and can be (often
\emph{losslessly}) converted to privacy profile and $f$-DP, thus providing
$(\epsilon,\delta)$-DP guarantees and interpretable tradeoff functions.
Algorithmically, we propose an \emph{analytical Fourier accountant} that
represents the \emph{complex} logarithm of $\phi$-functions symbolically and
uses Gaussian quadrature for numerical computation. On several popular DP
mechanisms and their subsampled counterparts, we demonstrate the flexibility
and tightness of our approach in theory and experiments.
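The property that makes the $\phi$-function convenient is that independent composition multiplies characteristic functions:

```latex
% With L_j the dominating privacy loss of mechanism j and
% \phi_j(t) = \mathbb{E}\, e^{\mathrm{i} t L_j}:
\[
  \phi_{\mathrm{comp}}(t) = \prod_{j=1}^{k} \phi_j(t),
\]
% after which (\epsilon, \delta)-guarantees are recovered by numerical
% inversion (the Gaussian-quadrature step mentioned above).
```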
|
The aim of this work is to determine abundances of neutron-capture elements
for thin- and thick-disc F, G, and K stars in several sky fields near the north
ecliptic pole and to compare the results with the Galactic chemical evolution
models, to explore elemental gradients according to stellar ages, mean
galactocentric distances, and maximum heights above the Galactic plane. The
observational data were obtained with the 1.65m telescope at the Moletai
Astronomical Observatory and a fibre-fed high-resolution spectrograph.
Elemental abundances were determined using a differential spectrum synthesis
with the MARCS stellar model atmospheres and accounting for the
hyperfine-structure effects. We determined abundances of Sr, Y, Zr, Ba, La, Ce,
Pr, Nd, Sm, and Eu for 424 thin- and 82 thick-disc stars. The sample of
thick-disc stars shows a clearly visible decrease in [Eu/Mg] with increasing
[Fe/H] compared to the thin-disc stars, bringing more evidence of a different
chemical evolution in these two Galactic components. Abundance correlation with
age slopes for the investigated thin-disc stars are slightly negative for the
majority of s-process dominated elements, while r-process dominated elements
have positive correlations. Our sample of thin-disc stars, with ages spanning
from 0.1 to 9 Gyr, gives the relation $\mathrm{[Y/Mg]} = 0.022(\pm 0.015) -
0.027(\pm 0.003) \cdot \mathrm{age\,[Gyr]}$. For the thick-disc stars, when we also took data from other
studies into account, we found that [Y/Mg] cannot serve as an age indicator.
The radial [El/Fe] gradients in the thin disc are negligible for the s-process
dominated elements and become positive for the r-process dominated elements.
The vertical gradients are negative for the light s-process dominated elements
and become positive for the r-process dominated elements. In the thick disc,
the radial [El/Fe] slopes are negligible, and the vertical slopes are
predominantly negative.
|
The independent cascade (IC) model is a widely used influence propagation
model for social networks. In this paper, we incorporate concepts and
techniques from causal inference to study the identifiability of parameters
from observational data in an extended IC model with unobserved confounding
factors, which models more realistic propagation scenarios but has rarely been
studied in influence propagation modeling before. We provide the conditions for the
identifiability or unidentifiability of parameters for several special
structures including the Markovian IC model, semi-Markovian IC model, and IC
model with a global unobserved variable. Parameter identifiability is important
for other tasks such as influence maximization under the diffusion networks
with unobserved confounding factors.
|
We investigate the reasons for the performance degradation incurred with
batch-independent normalization. We find that the prototypical techniques of
layer normalization and instance normalization both induce the appearance of
failure modes in the neural network's pre-activations: (i) layer normalization
induces a collapse towards channel-wise constant functions; (ii) instance
normalization induces a lack of variability in instance statistics, symptomatic
of an alteration of the expressivity. To alleviate failure mode (i) without
aggravating failure mode (ii), we introduce the technique "Proxy Normalization"
that normalizes post-activations using a proxy distribution. When combined with
layer normalization or group normalization, this batch-independent
normalization emulates batch normalization's behavior and consistently matches
or exceeds its performance.
|
We propose a family of lossy integer compressions for Stochastic Gradient
Descent (SGD) that do not communicate a single float. This is achieved by
multiplying floating-point vectors with a number known to every device and then
rounding to an integer number. Our theory shows that the iteration complexity
of SGD does not change up to constant factors when the vectors are scaled
properly. Moreover, this holds for both convex and non-convex functions, with
and without overparameterization. In contrast to other compression-based
algorithms, ours preserves the convergence rate of SGD even on non-smooth
problems. Finally, we show that when the data is significantly heterogeneous,
it may become increasingly hard to keep the integers bounded, and we propose an
alternative algorithm, IntDIANA, to solve this type of problem.
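The communication primitive, schematically (our sketch; the paper's algorithms add variance control and convergence analysis on top):

```python
# Scale by a number s known to every device, round to integers,
# communicate integers only, and unscale on receipt.
import numpy as np

def compress(v, s):
    return np.rint(v * s).astype(np.int64)   # integers on the wire

def decompress(q, s):
    return q / s

v = np.array([0.3141, -2.718, 1.0])
q = compress(v, s=2 ** 10)
print(q, decompress(q, 2 ** 10))             # rounding error <= 1/(2s)
```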
|
Intelligent reflecting surface (IRS) has emerged as a competitive solution to
address blockage issues in millimeter wave (mmWave) and Terahertz (THz)
communications due to its capability of reshaping wireless transmission
environments. Nevertheless, obtaining the channel state information of
IRS-assisted systems is quite challenging because of the passive
characteristics of the IRS. In this paper, we consider the problem of beam
training/alignment for IRS-assisted downlink mmWave/THz systems, where a
multi-antenna base station (BS) with a hybrid structure serves a single-antenna
user aided by IRS. By exploiting the inherent sparse structure of the
BS-IRS-user cascade channel, the beam training problem is formulated as a joint
sparse sensing and phaseless estimation problem, which involves devising a
sparse sensing matrix and developing an efficient estimation algorithm to
identify the best beam alignment from compressive phaseless measurements.
Theoretical analysis reveals that the proposed method can identify the best
alignment with only a modest amount of training overhead. Simulation results
show that, for both line-of-sight (LOS) and non-line-of-sight (NLOS)
scenarios, the proposed method obtains a significant performance improvement
over existing state-of-the-art
methods. Notably, it can achieve performance close to that of the exhaustive
beam search scheme, while reducing the training overhead by 95%.
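Schematically, the measurement model behind this formulation can be written as follows (our notation, for illustration):

```latex
% h is the sparse cascade-channel vector, a_m the m-th designed sensing
% vector; the receiver observes only magnitudes,
\[
  z_m = \left| \mathbf{a}_m^{\mathsf{H}} \mathbf{h} \right|^2 + n_m,
  \qquad m = 1, \dots, M,
\]
% and beam alignment amounts to recovering the support of h from the
% compressive phaseless measurements z.
```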
|
A single-hop beeping network is a distributed communication model in which
all stations can communicate with one another by transmitting only one-bit
messages, called beeps. This paper focuses on resolving two fundamental
problems in distributed computing: the naming and counting problems. We are
particularly interested in optimizing the energy complexity and the running
time of algorithms to resolve these problems. Our contribution is to design
randomized algorithms with an optimal running time of O(n log n) and an energy
complexity of O(log n) for both the naming and counting problems on single-hop
beeping networks of n stations.
|
We initiate the study of dark matter models based on a gapped continuum. Dark
matter consists of a mixture of states with a continuous mass distribution,
which evolves as the universe expands. We present an effective field theory
describing the gapped continuum, outline the structure of the Hilbert space and
show how to deal with the thermodynamics of such a system. This formalism
enables us to study the cosmological evolution and phenomenology of gapped
continuum DM in detail. As a concrete example, we consider a weakly-interacting
continuum (WIC) model, a gapped continuum counterpart of the familiar WIMP. The
DM interacts with the SM via a Z-portal. The model successfully reproduces the
observed relic density, while direct detection constraints are avoided due to
the effect of continuum kinematics. The model has striking observational
consequences, including continuous decays of DM states throughout cosmological
history, as well as cascade decays of DM states produced at colliders. We also
describe how the WIC theory can arise from a local, unitary scalar QFT
propagating on a five-dimensional warped background with a soft wall.
|
In this paper we discuss the computation of Casimir energy on a quantum
computer. The Casimir energy is an ideal quantity to calculate on a quantum
computer, as near-term hybrid classical-quantum algorithms exist to calculate
the ground state energy, and the Casimir energy carries physical implications
in a variety of settings. Depending on boundary conditions and
whether the field is bosonic or fermionic we illustrate how the Casimir energy
calculation can be set up on a quantum computer and calculated using the
Variational Quantum Eigensolver algorithm with IBM Qiskit. We compare the
results based on a lattice regularization with a finite number of qubits with
the continuum calculation for free boson fields, free fermion fields and chiral
fermion fields. We use a regularization method introduced by Bergman and Thorn
to compute the Casimir energy of a chiral fermion. We show how the accuracy of
the calculation varies with the number of qubits. We show how the number of
Pauli terms which are used to represent the Hamiltonian on a quantum computer
scales with the number of qubits. We discuss the application of the Casimir
calculations on quantum computers to cosmology, nanomaterials, string models,
Kaluza Klein models and dark energy.
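As a purely classical cross-check of the free-boson case, the lattice zero-point sum can be compared with the continuum Casimir coefficient; the sketch below uses the standard lattice dispersion and is independent of any quantum-computing backend (our illustration).

```python
# Zero-point energy of a massless scalar on an N-site chain with fixed
# ends: E(N) = sum_n sin(n*pi / (2(N+1))). Fitting E = a*N + b + c/N
# recovers the continuum Casimir coefficient c = -pi/24.
import numpy as np

def zero_point(N):
    n = np.arange(1, N + 1)
    return np.sum(np.sin(n * np.pi / (2 * (N + 1))))

Ns = np.array([32, 64, 128, 256, 512])
E = np.array([zero_point(N) for N in Ns])
A = np.stack([Ns, np.ones_like(Ns), 1.0 / Ns], axis=1)
a, b, c = np.linalg.lstsq(A, E, rcond=None)[0]
print(f"fitted 1/N coefficient: {c:.4f}  (continuum: {-np.pi/24:.4f})")
```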
|
We study the average number $\mathcal{A}(G)$ of colors in the non-equivalent
colorings of a graph $G$. We show some general properties of this graph
invariant and determine its value for some classes of graphs. We then
conjecture several lower bounds on $\mathcal{A}(G)$ and prove that these
conjectures are true for specific classes of graphs such as triangulated graphs
and graphs with maximum degree at most 2.
|
Strong evidence suggests that transformative correlated electron behavior may
exist only in unrealized clean-limit 2D materials such as 1T-TaS2.
Unfortunately, experiment and theory suggest that extrinsic disorder in free
standing 2D layers impedes correlation-driven quantum behavior. Here we
demonstrate a new route to realizing fragile 2D quantum states through
epitaxial polytype engineering of van der Waals materials. The isolation of
truly 2D charge density waves (CDWs) between metallic layers stabilizes
commensurate long-range order and lifts the coupling between neighboring CDW
layers to restore mirror symmetries via interlayer CDW twinning. The
twinned-commensurate charge density wave (tC-CDW) reported herein has a single
metal-insulator phase transition at ~350 K as measured structurally and
electronically. Fast in-situ transmission electron microscopy and scanned
nanobeam diffraction map the formation of tC-CDWs. This work introduces
epitaxial polytype engineering of van der Waals materials to access latent 2D
ground states distinct from conventional 2D fabrication.
|
We find a minimal set of generators for the coordinate ring of Calogero-Moser
space $\mathcal{C}_3$ and the algebraic relations among them explicitly. We
give a new presentation for the algebra of $3\times3$ invariant matrices
involving the defining relations of $\mathbb{C}[\mathcal{C}_3]$. We find an
explicit description of the commuting variety of $3\times3$ matrices and its
orbits under the action of the affine Cremona group.
|
Ranking has always been one of the top concerns in information retrieval
research. For decades, the lexical matching signal has dominated the ad-hoc
retrieval process, but solely using this signal in retrieval may cause the
vocabulary mismatch problem. In recent years, with the development of
representation learning techniques, many researchers turn to Dense Retrieval
(DR) models for better ranking performance. Although several existing DR models
have already obtained promising results, their performance improvement heavily
relies on the sampling of training examples. Many effective sampling strategies
are not efficient enough for practical usage, and for most of them there is
still a lack of theoretical analysis of how and why performance improvements happen. To
shed light on these research questions, we theoretically investigate different
training strategies for DR models and try to explain why hard negative sampling
performs better than random sampling. Through the analysis, we also find that
there are many potential risks in static hard negative sampling, which is
employed by many existing training methods. Therefore, we propose two training
strategies named a Stable Training Algorithm for dense Retrieval (STAR) and a
query-side training Algorithm for Directly Optimizing Ranking pErformance
(ADORE), respectively. STAR improves the stability of DR training process by
introducing random negatives. ADORE replaces the widely-adopted static hard
negative sampling method with a dynamic one to directly optimize the ranking
performance. Experimental results on two publicly available retrieval benchmark
datasets show that either strategy gains significant improvements over existing
competitive baselines and a combination of them leads to the best performance.
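Schematically, the contrast between the sampling regimes can be sketched as follows (PyTorch); excluding in-batch positives and other practical details are omitted, and all names are illustrative rather than the papers' code.

```python
# Pairwise ranking loss over query/doc embeddings; "static hard" would fix
# negatives once from a stale index, "dynamic hard" re-mines them from the
# current model at every step.
import torch
import torch.nn.functional as F

def pairwise_loss(q, d_pos, d_neg):             # hinge on inner products
    s_pos = (q * d_pos).sum(-1)
    s_neg = (q * d_neg).sum(-1)
    return F.relu(1.0 - s_pos + s_neg).mean()

def mine_hard_negatives(q, doc_bank, k=1):      # dynamic: top-scoring docs
    scores = q @ doc_bank.T                     # [batch, num_docs]
    return doc_bank[scores.topk(k, dim=-1).indices[:, 0]]
```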
|
The ability to reliably prepare non-classical states will play a major role
in the realization of quantum technology. NOON states, belonging to the class
of Schroedinger cat states, have emerged as a leading candidate for several
applications. Starting from a model of dipolar bosons confined to a closed
circuit of four sites, we show how to generate NOON states. This is achieved by
designing protocols to transform initial Fock states to NOON states through use
of time evolution, application of an external field, and local projective
measurements. By variation of the external field strength, we demonstrate how
the system can be controlled to encode a phase into a NOON state. We also
discuss the physical feasibility, via an optical lattice setup. Our proposal
illuminates the benefits of quantum integrable systems in the design of
atomtronic protocols.
|
Elastic similarity measures are a class of similarity measures specifically
designed to work with time series data. When scoring the similarity between two
time series, they allow points that do not correspond in timestamps to be
aligned. This can compensate for misalignments in the time axis of time series
data, and for similar processes that proceed at variable and differing paces.
Elastic similarity measures are widely used in machine learning tasks such as
classification, clustering and outlier detection when using time series data.
There is a multitude of research on various univariate elastic similarity
measures. However, except for multivariate versions of the well known Dynamic
Time Warping (DTW) there is a lack of work to generalise other similarity
measures for multivariate cases. This paper adapts two existing strategies used
in multivariate DTW, namely, Independent and Dependent DTW, to several commonly
used elastic similarity measures.
Using 23 datasets from the University of East Anglia (UEA) multivariate
archive, for nearest neighbour classification, we demonstrate that each measure
outperforms all others on at least one dataset and that there are datasets for
which either the dependent versions of all measures are more accurate than
their independent counterparts or vice versa. This latter finding suggests that
these differences arise from a fundamental property of the data. We also show
that an ensemble of such nearest neighbour classifiers is highly competitive
with other state-of-the-art multivariate time series classifiers.
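The two strategies are easy to state in code: "dependent" runs one DTW over a pointwise multivariate cost, while "independent" sums per-dimension univariate DTWs. A minimal numpy sketch:

```python
import numpy as np

def dtw(cost):
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1],
                                               D[i - 1, j - 1])
    return D[n, m]

def dtw_dependent(x, y):          # x: [T1, dims], y: [T2, dims]
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return dtw(cost)

def dtw_independent(x, y):        # one univariate DTW per dimension
    return sum(dtw((x[:, k][:, None] - y[:, k][None, :]) ** 2)
               for k in range(x.shape[1]))
```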
|
We report the enhanced superconducting properties of double-chain based
superconductor Pr$_{2}$Ba$_{4}$Cu$_{7}$O$_{15-\delta}$ synthesized by the
citrate pyrolysis technique.
The reduction heat treatment in vacuum results in the appearance of a
superconducting state with $T_\mathrm{c}$=22-24 K, accompanied by higher
residual resistivity ratios. The superconducting volume fractions are estimated
from the ZFC data to be 50-55\%, indicating bulk superconductivity. We
evaluate from the magneto-transport data the temperature dependence of the
superconducting critical field, to establish the superconducting phase diagram.
The upper critical magnetic field is estimated to be about 35 T at low
temperatures from the resistive transition data using the
Werthamer-Helfand-Hohenberg formula. The Hall coefficient $R_{H}$ of the
48-h-reduced superconducting sample is determined to be -0.5$\times10^{-3}$
cm$^{3}$/C at 30 K, suggesting a higher electron concentration. These findings
are closely related to the homogeneous distribution of superconducting grains
and the improved weak links between them in the present synthesis process.
|
Automatic speech recognition (ASR) models are typically designed to operate
on a single input data type, e.g. a single or multi-channel audio streamed from
a device. This design decision assumes the primary input data source does not
change and if an additional (auxiliary) data source is occasionally available,
it cannot be used. An ASR model that operates on both primary and auxiliary
data can achieve better accuracy compared to a primary-only solution; and a
model that can serve both primary-only (PO) and primary-plus-auxiliary (PPA)
modes is highly desirable. In this work, we propose a unified ASR model that
can serve both modes. We demonstrate its efficacy in a realistic scenario where
a set of devices typically stream a single primary audio channel, and two
additional auxiliary channels only when upload bandwidth allows it. The
architecture enables a unique methodology that uses both types of input audio
during training time. Our proposed approach achieves up to 12.5% relative
word-error-rate reduction (WERR) compared to a PO baseline, and up to 16.0%
relative WERR in low-SNR conditions. The unique training methodology achieves
up to 2.5% relative WERR compared to a PPA baseline.
|
The magnetic dipole moments of the $Z_{c}(4020)^+$, $Z_{c}(4200)^+$,
$Z_{cs}(4000)^{+}$ and $Z_{cs}(4220)^{+}$ states are extracted in the framework
of the light-cone QCD sum rules. In the calculations, we use the hadronic
molecular form of interpolating currents, and photon distribution amplitudes to
get the magnetic dipole moment of $Z_{c}(4020)^+$, $Z_{c}(4200)^+$,
$Z_{cs}(4000)^{+}$ and $Z_{cs}(4220)^{+}$ tetraquark states. The magnetic
dipole moments are obtained as $\mu_{Z_{c}} = 0.66^{+0.27}_{-0.25}$,
$\mu_{Z^{1}_{c}}=1.03^{+0.32}_{-0.29}$, $\mu_{Z_{cs}}=0.73^{+0.28}_{-0.26}$,
$\mu_{Z^1_{cs}}=0.77^{+0.27}_{-0.25}$ for the $Z_{c}(4020)^+$, $Z_{c}(4200)^+$,
$Z_{cs}(4000)^{+}$ and $Z_{cs}(4220)^{+}$ states, respectively. We observe that
the results obtained for the $Z_{c}(4020)^+$, $Z_{c}(4200)^+$,
$Z_{cs}(4000)^{+}$ and $Z_{cs}(4220)^{+}$ states are large enough to be
measured experimentally. As a byproduct, we predict the magnetic dipole
moments of the neutral $Z_{cs}(4000)$ and $Z_{cs}(4220)$ states. The results
presented here can serve as helpful input for experimental as well as
theoretical studies of the properties of hidden-charm tetraquark states with
and without strangeness.
|
Randomized Controlled Trials (RCTs) are often considered the gold standard
for concluding on the causal effect of a given intervention on an outcome, but
they may lack external validity when the population eligible for the RCT is
substantially different from the target population. Having at hand a sample of
the target population of interest allows one to generalize the causal effect.
Identifying this target population treatment effect requires covariates in both
sets to capture all treatment effect modifiers that are shifted between the two
sets. However, such covariates are often not available in both sets. Standard
estimators then use either weighting (IPSW), outcome modeling (G-formula), or
combine the two in doubly robust approaches (AIPSW). In this paper, after
completing existing proofs on the complete case consistency of those three
estimators, we compute the expected bias induced by a missing covariate,
assuming a Gaussian distribution and a semi-parametric linear model. This
enables sensitivity analysis for each missing covariate pattern, giving the
sign of the expected bias. We also show that there is no gain in imputing a
partially-unobserved covariate. Finally we study the replacement of a missing
covariate by a proxy. We illustrate all these results on simulations, as well
as semi-synthetic benchmarks using data from the Tennessee Student/Teacher
Achievement Ratio (STAR), and with a real-world example from critical care
medicine.
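A toy version of the IPSW estimator analyzed above, with the sampling-score model fit by logistic regression on trial membership (our sketch, not the paper's code):

```python
# IPSW, schematically: fit P(S=1|X) on stacked RCT + target samples,
# reweight RCT units by the odds of target membership, and take a
# weighted difference in means.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipsw(X_rct, A, Y, X_target):
    X = np.vstack([X_rct, X_target])
    S = np.r_[np.ones(len(X_rct)), np.zeros(len(X_target))]
    p = LogisticRegression().fit(X, S).predict_proba(X_rct)[:, 1]
    w = (1 - p) / p                               # odds of target membership
    w1, w0 = w * A, w * (1 - A)
    return np.sum(w1 * Y) / np.sum(w1) - np.sum(w0 * Y) / np.sum(w0)
```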
|
The present study shows how any De Morgan algebra may be enriched by a
'perfection operator' that allows one to express the Boolean properties of
negation-consistency and negation-determinedness. The corresponding variety of
'perfect paradefinite algebras' (PP-algebras) is shown to be term-equivalent to
the variety of involutive Stone algebras, introduced by R. Cignoli and M.
Sagastume, and more recently studied from a logical perspective by M. Figallo
and L. Cant\'u. Such equivalence then plays an important role in the
investigation of the 1-assertional logic and also the order-preserving logic
associated to the PP-algebras. The latter logic, which we call PP$\leq$,
happens to be characterised by a single 6-valued matrix and consists very
naturally in a Logic of Formal Inconsistency and Formal Undeterminedness. The
logic PP$\leq$ is here axiomatised, by means of an analytic finite
Hilbert-style calculus, and a related axiomatization procedure is presented
that covers the logics of other classes of De Morgan algebras as well as
super-Belnap logics enriched by a perfection connective.
|
This work uses genetic programming to explore the space of continuous
optimisers, with the goal of discovering novel ways of doing optimisation. In
order to keep the search space broad, the optimisers are evolved from scratch
using Push, a Turing-complete, general-purpose, language. The resulting
optimisers are found to be diverse, and explore their optimisation landscapes
using a variety of interesting, and sometimes unusual, strategies.
Significantly, when applied to problems that were not seen during training,
many of the evolved optimisers generalise well, and often outperform existing
optimisers. This supports the idea that novel and effective forms of
optimisation can be discovered in an automated manner. This paper also shows
that pools of evolved optimisers can be hybridised to further increase their
generality, leading to optimisers that perform robustly over a broad variety of
problem types and sizes.
|
This article is concerned with the global exact controllability for ideal
incompressible magnetohydrodynamics in a rectangular domain where the controls
are situated in both vertical walls. First, global exact controllability via
boundary controls is established for a related Els\"asser type system by
applying the return method, introduced in [Coron J.M., Math. Control Signals
Systems, 5(3) (1992) 295--312]. Similar results are then inferred for the
original magnetohydrodynamics system with the help of a special pressure-like
corrector in the induction equation. Overall, the main difficulties stem from
the nonlinear coupling between the fluid velocity and the magnetic field in
combination with the aim of exactly controlling the system. In order to
overcome some of the obstacles, we introduce ad-hoc constructions, such as
suitable initial data extensions outside of the physical part of the domain and
a certain weighted space.
|
We consider a stochastic game between three types of players: an inside
trader, noise traders and a market maker. In a similar fashion to Kyle's model,
we assume that the insider first chooses the size of her market-order and then
the market maker determines the price by observing the total order-flow
resulting from the insider and the noise traders transactions. In addition to
the classical framework, a revenue term is added to the market maker's
performance function, which is proportional to the order flow and to the size
of the bid-ask spread. We derive the maximizer for the insider's revenue
function and prove sufficient conditions for an equilibrium in the game. Then,
we use neural network methods to verify that this equilibrium holds. We show
that the equilibrium state in this model experiences interesting phase
transitions as the weight of the revenue term in the market maker's
performance function changes. Specifically, the asset price in equilibrium
passes through three different phases: a linear pricing rule without a spread, a
pricing rule that includes a linear mid-price and a bid-ask spread, and a
metastable state with a zero mid-price and a large spread.
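Written schematically (our reading of the three regimes, for illustration), the pricing rules take the forms:

```latex
% y is the total order flow (insider order x plus noise trades u),
% p_0 the mid-price level, \lambda the Kyle-type slope, s the spread:
\[
  P(y) = p_0 + \lambda y, \qquad
  P(y) = p_0 + \lambda y + \tfrac{s}{2}\,\mathrm{sign}(y), \qquad
  P(y) = \tfrac{s}{2}\,\mathrm{sign}(y).
\]
```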
|
Based on density functional theory (DFT), we investigate the electronic
properties of bulk and single-layer ZrTe$_4$Se. The band structure of bulk
ZrTe$_4$Se can produce a semimetal-to-topological insulator (TI) phase
transition under uniaxial strain. The maximum global band gap is 0.189 eV at
the 7\% tensile strain. Meanwhile, the Z$_2$ invariants (0; 110) demonstrate
conclusively that it is a weak topological insulator (WTI). The two Dirac cones for
the (001) surface further confirm the nontrivial topological nature. The
single-layer ZrTe$_4$Se is a quantum spin Hall (QSH) insulator with a band gap
of 86.4 meV and Z$_2$=1; the nontrivial metallic edge states further confirm the
nontrivial topological nature. The maximum global band gap is 0.211 eV at the
tensile strain 8\%. When the compressive strain is more than 1\%, the band
structure of single-layer ZrTe$_4$Se produces a TI-to-semimetal transition.
These theoretical analyses may provide a method for finding large-band-gap
TIs and a platform for topological nanoelectronic device applications.
|
Attosecond nonlinear Fourier transform (NFT) pump probe spectroscopy is an
experimental technique which allows investigation of the electronic excitation,
ionization, and unimolecular dissociation processes. The NFT spectroscopy
utilizes ultrafast multiphoton ionization in the extreme ultraviolet spectral
range and detects the dissociation products of the unstable ionized species. In
this paper, a quantum mechanical description of NFT spectra is suggested, which
is based on the second order perturbation theory in molecule-light interaction
and the high level ab initio calculations of CO2 and CO2+ in the Franck-Condon
zone. The calculations capture the characteristic features of the available
experimental NFT spectra of CO2. Approximate analytic expressions are derived
and used to assign the calculated spectra in terms of participating electronic
states and harmonic photon frequencies. The developed approach provides a
convenient framework within which the origin and the significance of near
harmonic and non-harmonic NFT spectral lines can be analyzed. The framework is
scalable, and the spectra of di- and triatomic species as well as the
dependences on the control parameters can be predicted semi-quantitatively.
|
This work continues the study of the thermal Hamiltonian, initially proposed
by J. M. Luttinger in 1964 as a model for the conduction of thermal currents in
solids. The previous work [DL] contains a complete study of the "free" model in
one spatial dimension along with a preliminary scattering result for
convolution-type perturbations. This work complements the results obtained in
[DL] by providing a detailed analysis of the perturbation theory for the
one-dimensional thermal Hamiltonian. In more detail, the following results are
established: the regularity and decay properties for elements in the domain of
the unperturbed thermal Hamiltonian; the determination of a class of
self-adjoint and relatively compact perturbations of the thermal Hamiltonian;
the proof of the existence and completeness of wave operators for a subclass of
such potentials.
|
We theoretically and observationally investigate different choices of initial
conditions for the primordial mode function that are imposed during an epoch
preceding inflation. By deriving predictions for the observables resulting from
several alternate quantum vacuum prescriptions, we show that some choices of
vacua are, in principle, observationally distinguishable from others. Comparing these
predictions to the Planck 2018 observations via a Bayesian analysis shows no
significant evidence to favour any of the quantum vacuum prescriptions over the
others. In addition we consider frozen initial conditions, representing a
white-noise initial state at the big-bang singularity. Under certain
assumptions the cosmological concordance model and frozen initial conditions
are found to produce identical predictions for the cosmic microwave background
anisotropies. Frozen initial conditions may thus provide an alternative
theoretic paradigm to explain observations that were previously understood in
terms of the inflation of a quantum vacuum.
|
While self-supervised representation learning (SSL) has received widespread
attention from the community, recent research argues that its performance
suffers a steep drop when the model size decreases. Current methods mainly
rely on contrastive learning to train the network, and in this work we
propose a simple yet effective Distilled Contrastive Learning (DisCo) to ease
the issue by a large margin. Specifically, we find the final embedding obtained
by the mainstream SSL methods contains the most fruitful information, and
propose to distill the final embedding to maximally transmit a teacher's
knowledge to a lightweight model by constraining the last embedding of the
student to be consistent with that of the teacher. In addition, in the
experiment, we find that there exists a phenomenon termed Distilling BottleNeck
and propose enlarging the embedding dimension to alleviate this problem. Our
method does not introduce any extra parameter to lightweight models during
deployment. Experimental results demonstrate that our method achieves the
state-of-the-art on all lightweight models. Particularly, when
ResNet-101/ResNet-50 is used as teacher to teach EfficientNet-B0, the linear
result of EfficientNet-B0 on ImageNet is very close to ResNet-101/ResNet-50,
but the number of parameters of EfficientNet-B0 is only 9.4\%/16.3\% of
ResNet-101/ResNet-50. Code is available at
https://github.com/Yuting-Gao/DisCo-pytorch.
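The central constraint is easy to sketch (PyTorch): the student's final embedding is pushed to be consistent with the frozen teacher's, here via a negative-cosine loss (the exact loss form is our assumption, for illustration):

```python
# Embedding-consistency distillation term used alongside the usual
# contrastive objective; the teacher provides targets only.
import torch.nn.functional as F

def embedding_consistency(student_emb, teacher_emb):
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb.detach(), dim=-1)   # no teacher gradients
    return -(s * t).sum(-1).mean()
```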
|
Irreducible symplectic varieties are higher-dimensional analogues of K3
surfaces. In this paper, we prove the finiteness of twists of irreducible
symplectic varieties via a fixed finite field extension of characteristic $0$.
The main ingredient of the proof is the cone conjecture for irreducible
symplectic varieties, which was proved by Markman and Amerik--Verbitsky. As
byproducts, we also discuss the cone conjecture over non-closed fields by
Bright--Logan--van Luijk's method. We also give an application to the
finiteness of derived equivalent twists. Moreover, we discuss the case of K3
surfaces or Enriques surfaces over fields of positive characteristic.
|
We report the discovery of a new effect, namely, the effect of magnetically
induced transparency. The effect is observed in a magnetically active helically
structured periodical medium. Changing the external magnetic field and
absorption, one can tune the frequency and the linewidth of the transparency
band.
|
Absolute Concentration Robustness (ACR) was introduced by Shinar and Feinberg
as a way to define robustness of equilibrium species concentration in a mass
action dynamical system. Their aim was to devise a mathematical condition that
will ensure robustness in the function of the biological system being modeled.
The robustness of function rests on what we refer to as empirical
robustness--the concentration of a species remains unvarying, when measured in
the long run, across arbitrary initial conditions. While there is a positive
correlation between ACR and empirical robustness, ACR is neither necessary nor
sufficient for empirical robustness, a fact that can be noticed even in simple
biochemical systems. To develop a stronger connection with empirical
robustness, we define dynamic ACR, a property related to dynamics, rather than
only to equilibrium behavior, and one that guarantees convergence to a robust
value. We distinguish between wide basin and narrow basin versions of dynamic
ACR, related to the size of the set of initial values that do not result in
convergence to the robust value. We give numerous examples which help
distinguish the various flavors of ACR as well as clearly illustrate and
circumscribe the conditions that appear in the definitions. We discuss general
dynamical systems with ACR properties as well as parametrized families of
dynamical systems related to reaction networks. We discuss connections between
ACR and complex balance, two notions central to the theory of reaction
networks. We give precise conditions for presence and absence of dynamic ACR in
complex balanced systems, which in turn yields a large body of reaction
networks with dynamic ACR.
|
In this work, we study the secure index coding problem where there are
security constraints on both legitimate receivers and eavesdroppers. We develop
two performance bounds (i.e., converse results) on the symmetric secure
capacity. The first one is an extended version of the basic acyclic chain bound
(Liu and Sadeghi, 2019) that takes security constraints into account. The
second converse result is a novel information-theoretic lower bound on the
symmetric secure capacity, which is interesting as all the existing converse
results in the literature for secure index coding give upper bounds on the
capacity.
|
In this paper we consider the influence of relativistic rotation on the
confinement/deconfinement transition in gluodynamics within lattice simulation.
We perform the simulation in the reference frame that rotates with the system
under investigation, where rotation reduces to an external gravitational field.
To study the confinement/deconfinement transition, the Polyakov loop and its
susceptibility are calculated for various lattice parameters and for values of
the angular velocity characteristic of heavy-ion collision experiments.
Different types of boundary conditions (open, periodic, Dirichlet) are imposed
in the directions orthogonal to the rotation axis. Our data for the
critical temperature are well described by a simple quadratic function
$T_c(\Omega)/T_c(0) = 1 + C_2 \Omega^2$ with $C_2>0$ for all boundary
conditions and all lattice parameters used in the simulations. From this we
conclude that the critical temperature of the confinement/deconfinement
transition in gluodynamics increases with increasing angular velocity. This
conclusion does not depend on the boundary conditions used in our study, and we
believe this to be a universal property of gluodynamics.
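To make the quoted fit concrete, the snippet below fits $T_c(\Omega)/T_c(0) = 1 + C_2 \Omega^2$ to made-up data points; the numbers are purely illustrative, not simulation results.

```python
# Fit Tc(Omega)/Tc(0) = 1 + C2 * Omega^2 to hypothetical data (illustration only).
import numpy as np
from scipy.optimize import curve_fit

omega = np.array([0.00, 0.02, 0.04, 0.06, 0.08])          # angular velocity (toy units)
tc_ratio = np.array([1.000, 1.001, 1.004, 1.009, 1.016])  # made-up ratios

def model(om, c2):
    return 1.0 + c2 * om**2

(c2_fit,), cov = curve_fit(model, omega, tc_ratio)
print(f"C2 = {c2_fit:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")  # C2 > 0: Tc grows with rotation
```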
|
Non-destructive evaluation (NDE) through inspection and monitoring is an
integral part of asset integrity management. The relationship between the
condition of interest and the quantity measured by NDE is described with
probabilistic models such as PoD or ROC curves. These models are used to assess
the quality of the information provided by NDE systems, which is affected by
factors such as the experience of the inspector, environmental conditions, ease
of access, or imprecision in the measuring device. In this paper, we show how
the different probabilistic models of NDE are connected within a unifying
framework. Using this framework, we derive insights into how these models
should be learned, calibrated, and applied. We investigate how the choice of
the model can affect the maintenance decisions taken on the basis of NDE
results. In addition, we analyze the impact of experimental design on the
performance of a given NDE system in a decision-making context.
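As a hedged illustration, one common parametric PoD model (not necessarily the one used here) is a log-probit curve in flaw size; the parameter values below are assumptions.

```python
# Common log-probit PoD model: PoD(a) = Phi((ln a - mu) / sigma), a = flaw size.
import numpy as np
from scipy.stats import norm

def pod(a, mu=1.0, sigma=0.4):
    return norm.cdf((np.log(a) - mu) / sigma)

print(pod(np.array([1.0, 3.0, 10.0])))  # detection probability rises with flaw size
```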
|
In a real-world setting, biological agents do not have infinite resources to
learn new things. It is thus useful to recycle previously acquired knowledge in
a way that allows for faster, less resource-intensive acquisition of multiple
new skills. Neural networks in the brain are likely not entirely re-trained
with new tasks, but how they leverage existing computations to learn new tasks
is not well understood. In this work, we study this question in artificial
neural networks trained on commonly used neuroscience paradigms. Building on
recent work from the multi-task learning literature, we propose two
ingredients: (1) network modularity, and (2) learning task primitives.
Together, these ingredients form inductive biases we call structural and
functional, respectively. Using a corpus of nine different tasks, we show that
a modular network endowed with task primitives allows for learning multiple
tasks well while keeping parameter counts, and updates, low. We also show that
the skills acquired with our approach are more robust to a broad range of
perturbations compared to those acquired with other multi-task learning
strategies. This work offers a new perspective on achieving efficient
multi-task learning in the brain, and makes predictions for novel neuroscience
experiments in which targeted perturbations are employed to explore solution
spaces.
|
We prove that quantum information propagates with a finite velocity in any
model of interacting bosons whose (possibly time-dependent) Hamiltonian
contains spatially local single-boson hopping terms along with arbitrary local
density-dependent interactions. More precisely, with density matrix $\rho
\propto \exp[-\mu N]$ (with $N$ the total boson number), ensemble averaged
correlators of the form $\langle [A_0,B_r(t)]\rangle $, along with
out-of-time-ordered correlators, must vanish as the distance $r$ between two
local operators grows, unless $t \ge r/v$ for some finite speed $v$. In
one-dimensional models, we give a useful extension of this result that demonstrates
the smallness of all matrix elements of the commutator $[A_0,B_r(t)]$ between
finite density states if $t/r$ is sufficiently small. Our bounds are relevant
for physically realistic initial conditions in experimentally realized models
of interacting bosons. In particular, we prove that $v$ can scale no faster
than linear in number density in the Bose-Hubbard model: this scaling matches
previous results in the high density limit. The quantum walk formalism
underlying our proof provides an alternative method for bounding quantum
dynamics in models with unbounded operators and infinite-dimensional Hilbert
spaces, where Lieb-Robinson bounds have been notoriously challenging to prove.
|
The Sihl river, located near the city of Zurich in Switzerland, is under
continuous and tight surveillance as it flows directly under the city's main
railway station. To issue early warnings and conduct accurate risk
quantification, a dense network of monitoring stations is necessary inside the
river basin. However, as of 2021 only three automatic stations are operated in
this region, naturally raising the question: how to extend this network for
optimal monitoring of extreme rainfall events?
So far, existing methodologies for station network design have mostly focused
on maximizing interpolation accuracy or minimizing the uncertainty of some
model's parameter estimates. In this work, we propose new principles inspired
by extreme value theory for optimal monitoring of extreme events. For
stationary processes, we study the theoretical properties of the induced
sampling design that yields non-trivial point patterns resulting from a
compromise between a boundary effect and the maximization of inter-location
distances. For general applications, we propose a theoretically justified
functional peak-over-threshold model and provide an algorithm for sequential
station selection. We then issue recommendations for possible extensions of the
Sihl river monitoring network, by efficiently leveraging both station and radar
measurements available in this region.
|
Spherical matrix arrays arguably represent an advantageous tomographic
detection geometry for non-invasive deep tissue mapping of vascular networks
and oxygenation with volumetric optoacoustic tomography (VOT). Hybridization of
VOT with ultrasound (US) imaging remains difficult with this configuration due
to the relatively large inter-element pitch of spherical arrays. We suggest a
new approach for combining VOT and contrast-enhanced US imaging that employs
injection of clinically approved microbubbles. Power Doppler (PD) and US
localization imaging were enabled with a sparse US acquisition sequence and
model-based inversion based on infimal convolution of total variation (ICTV)
regularization. Experiments in tissue-mimicking phantoms and in vivo in mice
demonstrate the powerful capabilities of the new dual-mode imaging system for
blood velocity mapping and anatomical imaging with enhanced resolution and
contrast.
|
We characterise the selection cuts and clustering properties of a
magnitude-limited sample of bright galaxies that is part of the Bright Galaxy
Survey (BGS) of the Dark Energy Spectroscopic Instrument (DESI) using the ninth
data release of the Legacy Imaging Surveys (DR9). We describe changes in the
DR9 selection compared to the DR8 one as explored in Ruiz-Macias et al. (2021).
We also compare the DR9 selection in three distinct regions: BASS/MzLS in the
north Galactic Cap (NGC), DECaLS in the NGC, and DECaLS in the south Galactic
Cap (SGC). We investigate the systematics associated with the selection and
assess its completeness by matching the BGS targets with the Galaxy and Mass
Assembly (GAMA) survey. We measure the angular clustering for the overall
bright sample (r $\leq$ 19.5) and as a function of apparent magnitude and colour.
This enables us to determine the clustering strength and slope by fitting a
power-law model that can be used to generate accurate mock catalogues for this
tracer. We use a counts-in-cells technique to explore higher-order statistics
and cross-correlations with external spectroscopic data sets in order to check
the evolution of the clustering with redshift and the redshift distribution of
the BGS targets using clustering-redshifts. While this work validates the
properties of the BGS bright targets, the final target selection pipeline and
clustering properties of the entire DESI BGS will be fully characterised and
validated with the spectroscopic data of Survey Validation.
|
In this work, we aim to improve the expressive capacity of waveform-based
discriminative music networks by modeling both sequential (temporal) and
hierarchical information in an efficient end-to-end architecture. We present
MuSLCAT, or Multi-scale and Multi-level Convolutional Attention Transformer, a
novel architecture for learning robust representations of complex music tags
directly from raw waveform recordings. We also introduce a lightweight variant
of MuSLCAT called MuSLCAN, short for Multi-scale and Multi-level Convolutional
Attention Network. Both MuSLCAT and MuSLCAN model features from multiple scales
and levels by integrating a frontend-backend architecture. The frontend targets
different frequency ranges while modeling long-range dependencies and
multi-level interactions by using two convolutional attention networks with
attention-augmented convolution (AAC) blocks. The backend dynamically
recalibrates multi-scale and multi-level features extracted from the frontend
by incorporating self-attention. The difference between MuSLCAT and MuSLCAN
lies in their backend components: MuSLCAT's backend is a modified version of
BERT, while MuSLCAN's is a simple AAC block. We validate the proposed MuSLCAT and
MuSLCAN architectures by comparing them to state-of-the-art networks on four
benchmark datasets for music tagging and genre recognition. Our experiments
show that MuSLCAT and MuSLCAN consistently yield competitive results when
compared to state-of-the-art waveform-based models yet require considerably
fewer parameters.
|
In contrast to public exchanges, e.g., the New York Stock Exchange (NYSE),
trading in Over-The-Counter (OTC) markets is facilitated by broker-dealers.
Dealers play an important role in stabilizing prices and providing liquidity in
OTC markets. We apply machine learning methods to model and predict the trading
behavior of OTC dealers for US corporate bonds. We create sequences of daily
historical transaction reports for each dealer over a vocabulary of US
corporate bonds. Using this history of dealer activity, we predict the future
trading decisions of the dealer. We consider a range of neural network-based
prediction models. We propose an extension, the Pointwise-Product ReZero (PPRZ)
Transformer model, and demonstrate the improved performance of our model. We
show that individual history provides the best predictive model for the most
active dealers. For less active dealers, a collective model provides improved
performance. Further, clustering dealers based on their similarity can improve
performance. Finally, prediction accuracy varies based on the activity level of
both the bond and the dealer.
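For background, a plain ReZero residual block, on which the PPRZ extension presumably builds, looks roughly as follows; the pointwise-product part is the authors' contribution and is not reproduced here, and all sizes are assumptions.

```python
# Hedged sketch of a ReZero Transformer block: y = x + alpha * F(x), alpha init 0.
import torch
import torch.nn as nn

class ReZeroBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                                nn.Linear(4 * dim, dim))
        self.alpha = nn.Parameter(torch.zeros(1))  # residual weight starts at zero

    def forward(self, x):            # x: (batch, seq_len, dim)
        a, _ = self.attn(x, x, x)
        x = x + self.alpha * a
        return x + self.alpha * self.ff(x)
```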
|
Cosmology is well suited to study the effects of long range interactions due
to the large densities in the early Universe. In this article, we explore how
the energy density and equation of state of a fermion system diverge from the
commonly assumed ideal gas form under the presence of scalar long range
interactions with a range much smaller than cosmological scales. In this
scenario, "small"-scale physics can impact our largest-scale observations. As a
benchmark, we apply the formalism to self-interacting neutrinos, performing an
analysis to present and future cosmological data. Our results show that the
current cosmological neutrino mass bound is fully avoided in the presence of a
long range interaction, opening the possibility for a laboratory neutrino mass
detection in the near future. We also demonstrate an interesting
complementarity between neutrino laboratory experiments and the future EUCLID
survey.
|
This paper considers a new problem of adapting a pre-trained model of human
mesh reconstruction to out-of-domain streaming videos. Most previous methods,
based on the parametric SMPL model \cite{loper2015smpl}, underperform in
new domains with unexpected, domain-specific attributes, such as camera
parameters, lengths of bones, backgrounds, and occlusions. Our general idea is
to dynamically fine-tune the source model on test video streams with additional
temporal constraints, such that it can mitigate the domain gaps without
over-fitting the 2D information of individual test frames. A subsequent
challenge is how to avoid conflicts between the 2D and temporal constraints. We
propose to tackle this problem using a new training algorithm named Bilevel
Online Adaptation (BOA), which divides the overall multi-objective
optimization into two steps, weight probe and weight update, within each
training iteration. We demonstrate that BOA leads to state-of-the-art results on two
human mesh reconstruction benchmarks.
|
This paper introduces a conditional generative adversarial network to
redesign a street-level image of urban scenes by generating 1) an urban
intervention policy, 2) an attention map that localises where intervention is
needed, 3) a high-resolution street-level image (1024 X 1024 or 1536 X 1536)
after implementing the intervention. We also introduce a new dataset that
comprises aligned street-level images of before and after urban interventions
from real-life scenarios that make this research possible. The introduced
method has been trained on different ranges of urban interventions applied to
realistic images. The trained model shows strong performance in re-modelling
cities, outperforming existing image-to-image translation methods from other
domains, while running on a single GPU. This research opens the door
for machine intelligence to play a role in re-thinking and re-designing the
different attributes of cities based on adversarial learning, going beyond the
mainstream of facial landmarks manipulation or image synthesis from semantic
segmentation.
|
Laser cooling of solids keeps attracting attention owing to a broad range of
applications that extends from cm-sized all-optical cryocoolers for airborne
and space-based applications to the cooling of nanoparticles for biological
and mesoscopic physics. Laser cooling of nanoparticles is a
challenging task. We propose to use Mie resonances to enhance anti-Stokes
fluorescence laser cooling in rare-earth (RE) doped nanoparticles made of
low-phonon glasses or crystals. As an example, we consider an Yb3+:YAG
nanosphere pumped at the long wavelength tail of the Yb3+ absorption spectrum
at 1030 nm. We show that if the radius of the nanosphere is adjusted to the
pump wavelength in such a manner that the pump excites some of its Mie resonant
modes, the cooling power density generated in the sample is considerably
enhanced and the temperature of the sample is consequently decreased
substantially (~63%). This concept can be extended to nanoparticles of different shapes
and made from different low-phonon RE doped materials suitable for laser
cooling by anti-Stokes fluorescence.
|
Estimating 3D human poses from video is a challenging problem. The lack of 3D
human pose annotations is a major obstacle for supervised training and for
generalization to unseen datasets. In this work, we address this problem by
proposing a weakly-supervised training scheme that does not require 3D
annotations or calibrated cameras. The proposed method relies on temporal
information and triangulation. Using 2D poses from multiple views as the input,
we first estimate the relative camera orientations and then generate 3D poses
via triangulation. The triangulation is only applied to the views with high 2D
human joint confidence. The generated 3D poses are then used to train a
recurrent lifting network (RLN) that estimates 3D poses from 2D poses. We
further apply a multi-view re-projection loss to the estimated 3D poses and
enforce the 3D poses estimated from multiple views to be consistent.
Therefore, our method relaxes practical constraints: only multi-view videos
are required for training, which makes it convenient for in-the-wild settings. At
inference, RLN merely requires single-view videos. The proposed method
outperforms previous works on two challenging datasets, Human3.6M and
MPI-INF-3DHP. Codes and pretrained models will be publicly available.
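A minimal two-view linear triangulation (DLT) of the kind this pipeline relies on is sketched below; the toy camera matrices are assumed values, not the paper's setup.

```python
# Linear (DLT) triangulation of a single joint from two views.
import numpy as np

def triangulate(P1, P2, u1, u2):
    A = np.stack([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
                  u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy cameras: identity pose and a unit baseline along x (assumed values).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
Xw = np.array([0.3, -0.2, 4.0, 1.0])          # ground-truth homogeneous 3D point
u1 = P1 @ Xw; u1 = u1[:2] / u1[2]
u2 = P2 @ Xw; u2 = u2[:2] / u2[2]
print(triangulate(P1, P2, u1, u2))            # recovers ~[0.3, -0.2, 4.0]
```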
|
Two dimensional (2D) transition metal dichalcogenide (TMDC) materials, such
as MoS2, WS2, MoSe2, and WSe2, have received extensive attention in the past
decade due to their extraordinary physical properties. These unique properties
make them ideal materials for various electronic, photonic and
optoelectronic devices. However, their performance is limited by the relatively
weak light-matter interactions due to their atomically thin form factor.
Resonant nanophotonic structures provide a viable way to address this issue and
enhance light-matter interactions in 2D TMDCs. Here, we provide an overview of
this research area, showcasing relevant applications, including exotic light
emission, absorption and scattering features. We start by overviewing the
concept of excitons in 1L-TMDC and the fundamental theory of cavity-enhanced
emission, followed by a discussion on the recent progress of enhanced light
emission, strong coupling and valleytronics. The atomically thin nature of
1L-TMDC enables a broad range of ways to tune its electric and optical
properties. Thus, we continue by reviewing advances in TMDC-based tunable
photonic devices. Next, we survey the recent progress in enhanced light
absorption over narrow and broad bandwidths using 1L or few-layer TMDCs, and
their applications for photovoltaics and photodetectors. We also review recent
efforts of engineering light scattering, e.g., inducing Fano resonances,
wavefront engineering in 1L or few-layer TMDCs by either integrating resonant
structures, such as plasmonic/Mie resonant metasurfaces, or directly patterning
monolayer/few-layer TMDCs. We then overview the intriguing physical properties
of different types of van der Waals heterostructures, and their applications in
optoelectronic and photonic devices. Finally, we draw our opinion on potential
opportunities and challenges in this rapidly developing field of research.
|
Event perception tasks such as recognizing and localizing actions in
streaming videos are essential for tackling visual understanding tasks.
Progress has primarily been driven by the use of large-scale, annotated
training data in a supervised manner. In this work, we tackle the problem of
learning \textit{actor-centered} representations through the notion of
continual hierarchical predictive learning to localize actions in streaming
videos without any training annotations. Inspired by cognitive theories of
event perception, we propose a novel, self-supervised framework driven by the
notion of hierarchical predictive learning to construct actor-centered features
by attention-based contextualization. Extensive experiments on three benchmark
datasets show that the approach can learn robust representations for localizing
actions using only one epoch of training, i.e., we train the model continually
in streaming fashion - one frame at a time, with a single pass through training
videos. We show that the proposed approach outperforms unsupervised and weakly
supervised baselines while offering competitive performance to fully supervised
approaches. Finally, we show that the proposed model can generalize to
out-of-domain data without significant loss in performance and without any
fine-tuning, for both the recognition and localization tasks.
|
One of the main reasons for the success of Evolutionary Algorithms (EAs) is
their general-purposeness, i.e., the fact that they can be applied
straightforwardly to a broad range of optimization problems, without any
specific prior knowledge. On the other hand, it has been shown that
incorporating a priori knowledge, such as expert knowledge or empirical
findings, can significantly improve the performance of an EA. However,
integrating knowledge in EAs poses numerous challenges. It is often the case
that the features of the search space are unknown, hence any knowledge
associated with the search space properties can be hardly used. In addition, a
priori knowledge is typically problem-specific and hard to generalize. In this
paper, we propose a framework, called Knowledge Integrated Evolutionary
Algorithm (KIEA), which facilitates the integration of existing knowledge into
EAs. Notably, the KIEA framework is EA-agnostic (i.e., it works with any
evolutionary algorithm), problem-independent (i.e., it is not dedicated to a
specific type of problems), expandable (i.e., its knowledge base can grow over
time). Furthermore, the framework integrates knowledge while the EA is running,
thus optimizing the use of the needed computational power. In the preliminary
experiments shown here, we observe that the KIEA framework produces, in the
worst case, an 80% improvement in convergence time w.r.t. the corresponding
"knowledge-free" EA counterpart.
|
Object handover is a common human collaboration behavior that attracts
attention from researchers in Robotics and Cognitive Science. Though visual
perception plays an important role in the object handover task, the whole
handover process has seldom been specifically explored. In this work, we propose a
novel rich-annotated dataset, H2O, for visual analysis of human-human object
handovers. The H2O, which contains 18K video clips involving 15 people who hand
over 30 objects to each other, is a multi-purpose benchmark. It can support
several vision-based tasks, from which, we specifically provide a baseline
method, RGPNet, for a less-explored task named Receiver Grasp Prediction.
Extensive experiments show that the RGPNet can produce plausible grasps based
on the giver's hand-object states in the pre-handover phase. Besides, we also
report the hand and object pose errors with existing baselines and show that
the dataset can serve as the video demonstrations for robot imitation learning
on the handover task. Dataset, model and code will be made public.
|
Prior works have found it beneficial to combine provably noise-robust loss
functions, e.g., mean absolute error (MAE), with standard categorical loss
functions, e.g., cross entropy (CE), to improve their learnability. Here, we
propose to use the Jensen-Shannon divergence as a noise-robust loss function and
show that it interestingly interpolates between CE and MAE with a controllable
mixing parameter. Furthermore, we make the crucial observation that CE exhibits
lower consistency around noisy data points. Based on this observation, we adopt
a generalized version of the Jensen-Shannon divergence for multiple
distributions to encourage consistency around data points. Using this loss
function, we show state-of-the-art results on both synthetic (CIFAR), and
real-world (e.g., WebVision) noise with varying noise rates.
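A minimal sketch of such a Jensen-Shannon loss between the one-hot label distribution and the prediction is given below; the exact scaling and the paper's multi-distribution generalization are not reproduced, and the form shown is an assumption for illustration.

```python
# Hedged sketch: JS divergence between one-hot labels and predictions,
# with mixing weight pi controlling the CE-vs-MAE-like behavior.
import torch
import torch.nn.functional as F

def js_loss(logits, targets, pi=0.5, eps=1e-8):
    p = F.softmax(logits, dim=-1)
    y = F.one_hot(targets, num_classes=logits.size(-1)).float()
    m = pi * y + (1.0 - pi) * p                    # mixture distribution
    kl_ym = (y * (torch.log(y + eps) - torch.log(m + eps))).sum(-1)
    kl_pm = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(-1)
    return (pi * kl_ym + (1.0 - pi) * kl_pm).mean()
```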
|
In this paper, we investigate the outage performance of an intelligent
reflecting surface (IRS)-assisted non-orthogonal multiple access (NOMA) uplink,
in which a group of the surface reflecting elements are configured to boost the
signal of one of the user equipments (UEs), while the remaining elements are
used to boost the other UE. By approximating the received powers as Gamma
random variables, tractable expressions for the outage probability under NOMA
interference cancellation are obtained. We evaluate the outage over different
splits of the elements and varying pathloss differences between the two UEs.
The analysis shows that for small pathloss differences, the split should be
chosen such that most of the IRS elements are configured to boost the stronger
UE, while for large pathloss differences, it is more beneficial to boost the
weaker UE. Finally, we investigate a robust selection of the elements' split
under the criterion of minimizing the maximum outage between the two UEs.
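The Gamma approximation mentioned above amounts to moment matching; the sketch below matches the first two moments of a toy received-power sample and compares the resulting outage estimate (the channel stand-in and all parameters are assumptions).

```python
# Moment-match a received-power sample to Gamma(k, theta) and estimate outage.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
# Toy stand-in for IRS-combined received power (not the paper's channel model).
power = (rng.normal(size=(100_000, 32)).sum(axis=1) + 40.0) ** 2

k = power.mean() ** 2 / power.var()     # Gamma shape
theta = power.var() / power.mean()      # Gamma scale
thr = np.quantile(power, 0.05)          # example outage threshold

print("empirical outage :", (power < thr).mean())
print("Gamma approx.    :", gamma.cdf(thr, a=k, scale=theta))
```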
|
This paper addresses issues surrounding the concept of fractional quantum
mechanics, related to lights propagation in inhomogeneous nonlinear media,
specifically restricted to a so called gravitational optics. Besides
Schr\"odinger Newton equation, we have also concerned with linear and nonlinear
Airy beam accelerations in flat and curved spaces and fractal photonics,
related to nonlinear Schr\"odinger equation, where impact of the fractional
Laplacian is discussed. Another important feature of the gravitational optics'
implementation is its geometry with the paraxial approximation, when quantum
mechanics, in particular, fractional quantum mechanics, is an effective
description of optical effects. In this case, fractional-time differentiation
reflexes this geometry effect as well.
|
In the first part of the paper, we study the Cauchy problem for the
advection-diffusion equation $\partial_t v + \text{div }(v\boldsymbol{b} ) =
\Delta v$ associated with a merely integrable, divergence-free vector field
$\boldsymbol{b}$ defined on the torus. We first introduce two notions of
solutions (distributional and parabolic), recalling the corresponding available
results of existence and uniqueness. Then, we establish a regularity criterion,
which in turn provides uniqueness for distributional solutions. This is
motivated by the recent results in [31] where the authors showed non-uniqueness
of distributional solutions to the advection-diffusion equation despite the
parabolic one is unique. In the second part of the paper, we precisely describe
the vanishing viscosity scheme for the transport/continuity equation drifted by
$\boldsymbol{b}$, i.e. $\partial_t u + \text{div }(u\boldsymbol{b} ) = 0$.
Under Sobolev assumptions on $\boldsymbol{b} $, we give two independent proofs
of the convergence of such scheme to the Lagrangian solution of the transport
equation. The first proof slightly generalizes the original one of [21]. The
other one is quantitative and yields rates of convergence. This offers a
completely general selection criterion for the transport equation (even beyond
the distributional regime) which compensates the wild non-uniqueness phenomenon
for solutions with low integrability arising from convex integration schemes,
as shown in recent works [10, 31, 32, 33], and rules out the possibility of
anomalous dissipation.
|
Quantile regression presents a complete picture of the effects on the
location, scale, and shape of the dependent variable at all points, not just
the mean. We focus on two challenges for citation count analysis by quantile
regression: discontinuity and substantial mass points at lower counts. A
Bayesian hurdle quantile regression model for count data with a substantial
mass point at zero was proposed by King and Song (2019). It uses quantile
regression for modeling the nonzero data and logistic regression for modeling
the probability of zeros versus nonzeros. We show that substantial mass points
for low citation counts will nearly certainly also affect parameter estimation
in the quantile regression part of the model, similar to a mass point at zero.
We update the King and Song model by shifting the hurdle point past the main
mass points. This model delivers more accurate quantile regression for
moderately to highly cited articles, especially at quantiles corresponding to
values just beyond the mass points, and enables estimates of the extent to
which factors influence the chances that an article will receive few citations. To
illustrate the potential of this method, it is applied to simulated citation
counts and data from Scopus.
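A toy sketch of the shifted-hurdle idea, with a logistic model for low-cited articles and quantile regression above the hurdle, might look as follows; the hurdle value, simulated data, and model details are assumptions, not King and Song's or the updated model's exact specification.

```python
# Toy shifted-hurdle sketch: logistic regression for counts <= h,
# quantile regression for counts > h, with h moved past the mass points.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(2000, 1)))
cites = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1]))   # simulated citation counts

h = 3                                               # assumed hurdle location
low = (cites <= h).astype(float)
logit = sm.Logit(low, X).fit(disp=0)                # chance of being low cited
qr = sm.QuantReg(np.log(cites[cites > h]), X[cites > h]).fit(q=0.5)
print(logit.params, qr.params)
```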
|
Financial markets are a source of non-stationary multidimensional time
series, which have been drawing attention for decades. Each financial
instrument has its own properties that change over time, making its analysis a
complex task.
Improvement of understanding and development of methods for financial time
series analysis is essential for successful operation on financial markets. In
this study we propose a volume-based data pre-processing method for making
financial time series more suitable for machine learning pipelines. We use a
statistical approach for assessing the performance of the method. Namely, we
formally state the hypotheses, set up associated classification tasks, compute
effect sizes with confidence intervals, and run statistical tests to validate
the hypotheses. We additionally assess the trading performance of the proposed
method on historical data and compare it to a previously published approach.
Our analysis shows that the proposed volume-based method allows successful
classification of the financial time series patterns, and also leads to better
classification performance than a price action-based method, excelling
specifically on more liquid financial instruments. Finally, we propose an
approach for obtaining feature interactions directly from tree-based models on
example of CatBoost estimator, as well as formally assess the relatedness of
the proposed approach and SHAP feature interactions with a positive outcome.
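As one concrete example of volume-based pre-processing (an illustration of the general idea, not necessarily the paper's method), resampling a trade stream into fixed-volume bars can be done as follows.

```python
# Resample a (price, volume) trade stream into fixed-volume OHLCV bars.
import pandas as pd

def volume_bars(trades: pd.DataFrame, bucket: float) -> pd.DataFrame:
    """trades: DataFrame with columns ['price', 'volume']."""
    bar_id = (trades['volume'].cumsum() // bucket).astype(int)
    return trades.groupby(bar_id).agg(
        open=('price', 'first'), high=('price', 'max'),
        low=('price', 'min'), close=('price', 'last'),
        volume=('volume', 'sum'))

trades = pd.DataFrame({'price': [10.0, 10.1, 10.05, 10.2],
                       'volume': [300, 500, 400, 800]})
print(volume_bars(trades, bucket=1000))
```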
|
Autonomous systems like aircraft and assistive robots often operate in
scenarios where guaranteeing safety is critical. Methods like Hamilton-Jacobi
reachability can provide guaranteed safe sets and controllers for such systems.
However, often these same scenarios have unknown or uncertain environments,
system dynamics, or predictions of other agents. As the system is operating, it
may learn new knowledge about these uncertainties and should therefore update
its safety analysis accordingly. However, work to learn and update safety
analysis is limited to small systems of about two dimensions due to the
computational complexity of the analysis. In this paper we synthesize several
techniques to speed up computation: decomposition, warm-starting, and adaptive
grids. Using this new framework we can update safe sets by one or more orders
of magnitude faster than prior work, making this technique practical for many
realistic systems. We demonstrate our results on simulated 2D and 10D
near-hover quadcopters operating in a windy environment.
|
Structural behaviour of PbMn$_{7}$O$_{12}$ has been studied by high
resolution synchrotron X-ray powder diffraction. This material belongs to a
family of quadruple perovskite manganites that exhibit an incommensurate
structural modulation associated with an orbital density wave. It has been
found that the structural modulation in PbMn$_{7}$O$_{12}$ onsets at 294 K with
the incommensurate propagation vector $\mathbf{k}_s=(0,0,\sim2.08)$. At 110 K
another structural transition takes place where the propagation vector suddenly
drops down to a \emph{quasi}-commensurate value $\mathbf{k}_s=(0,0,2.0060(6))$.
The \emph{quasi}-commensurate phase is stable in the temperature range of 40 K -
110 K, and below 40 K the propagation vector jumps back to the incommensurate
value $\mathbf{k}_s=(0,0,\sim2.06)$. Both low temperature structural
transitions are strongly first order with large thermal hysteresis. The orbital
density wave in the \emph{quasi}-commensurate phase has been found to be
substantially suppressed in comparison with the incommensurate phases, which
naturally explains unusual magnetic behaviour recently reported for this
perovskite. Analysis of the refined structural parameters revealed that
the presence of the \emph{quasi}-commensurate phase is likely to be associated
with a competition between the Pb$^{2+}$ lone electron pair and Mn$^{3+}$
Jahn-Teller instabilities.
|
We describe the new version (v3.06h) of the code HFODD that solves the
universal nonrelativistic nuclear DFT Hartree-Fock or Hartree-Fock-Bogolyubov
problem by using the Cartesian deformed harmonic-oscillator basis. In the new
version, we implemented the following new features: (i) zero-range three- and
four-body central terms, (ii) zero-range three-body gradient terms, (iii)
zero-range tensor terms, (iv) zero-range isospin-breaking terms, (v)
finite-range higher-order regularized terms, (vi) finite-range separable terms,
(vii) zero-range two-body pairing terms, (viii) multi-quasiparticle blocking,
(ix) Pfaffian overlaps, (x) particle-number and parity symmetry restoration,
(xi) axialization, (xii) Wigner functions, (xiii) choice of the
harmonic-oscillator basis, (xiv) fixed Omega partitions, (xv) consistency
formula between energy and fields, and we corrected several errors of the
previous versions.
|
This note derives parametrizations for surfaces of revolution that satisfy an
affine-linear relation between their respective curvature radii. Alongside,
parametrizations for the uniform normal offsets of those surfaces are obtained.
Those parametrizations are found explicitly for countably infinitely many of
them, and of those, it is shown which are algebraic. Lastly, for those surfaces
which have a constant ratio of principal curvatures, parametrizations with a
constant angle between the parameter curves are found.
|
It is well-known that the univariate Multiquadric quasi-interpolation
operator is constructed based on the piecewise linear interpolation by |x|. In
this paper, we first introduce a new transcendental RBF based on the hyperbolic
tangent function as a smooth approximant to f(r)=r with higher accuracy and
better convergence properties than the multiquadric. Then Wu-Schaback's
quasi-interpolation formula is rewritten using the proposed RBF. It preserves
convexity and monotonicity. We prove that the proposed scheme converges at a
rate of O(h^2) and has a higher degree of smoothness. Some numerical
experiments are given in order to demonstrate the efficiency and accuracy of
the method.
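The construction can be illustrated with a generic smooth approximant to |x| (here x*tanh(x/c), an assumption, not necessarily the paper's kernel): the quasi-interpolant is built from second differences of the kernel, which reproduce hat-like basis functions; boundary terms are omitted for brevity.

```python
# Quasi-interpolation from second differences of a smooth |x| approximant.
import numpy as np

def phi(x, c=0.05):
    return x * np.tanh(x / c)   # smooth stand-in for |x| (assumed kernel)

def quasi_interp(xj, fj, x):
    h = xj[1] - xj[0]
    out = np.zeros_like(x)
    for j in range(1, len(xj) - 1):   # interior points; boundary terms omitted
        psi = (phi(x - xj[j-1]) - 2*phi(x - xj[j]) + phi(x - xj[j+1])) / (2*h)
        out += fj[j] * psi            # psi approximates a hat function at xj[j]
    return out

xj = np.linspace(0.0, 1.0, 41)
x = np.linspace(0.1, 0.9, 200)
err = np.abs(quasi_interp(xj, np.sin(2*np.pi*xj), x) - np.sin(2*np.pi*x))
print(err.max())                      # shrinks as the grid is refined
```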
|
Satisfiability of Boolean formulae (SAT) has been a topic of research in
logic and computer science for a long time. In this paper we are interested in
understanding the structure of satisfiable and unsatisfiable sentences. In
previous work we initiated a new approach to SAT by formulating a mapping from
propositional logic sentences to graphs, allowing us to find structural
obstructions to 2SAT (clauses with exactly 2 literals) in terms of graphs. Here
we generalize these ideas to multi-hypergraphs in which the edges can have more
than 2 vertices and can have multiplicity. This is needed for understanding the
structure of SAT for sentences made of clauses with 3 or more literals (3SAT),
which is a building block of NP-completeness theory. We introduce a decision
problem that we call GraphSAT, as a first step towards a structural view of
SAT. Each propositional logic sentence can be mapped to a multi-hypergraph by
associating each variable with a vertex (ignoring the negations) and each
clause with a hyperedge. Such a graph then becomes a representative of a
collection of possible sentences and we can then formulate the notion of
satisfiability of such a graph. With this coarse representation of classes of
sentences one can then investigate structural obstructions to SAT. To make the
problem tractable, we prove a local graph rewriting theorem which allows us to
simplify the neighborhood of a vertex without knowing the rest of the graph. We
use this to deduce several reduction rules, allowing us to modify a graph
without changing its satisfiability status which can then be used in a program
to simplify graphs. We study a subclass of 3SAT by examining sentences living
on triangulations of surfaces and show that for any compact surface there
exists a triangulation that can support unsatisfiable sentences, giving
specific examples of such triangulations for various surfaces.
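The sentence-to-multi-hypergraph map can be made concrete in a few lines; the representation choices below are illustrative assumptions: variables become vertices with negations ignored, and clauses become hyperedges counted with multiplicity.

```python
# Map a CNF sentence to a multi-hypergraph: vertices = variables,
# hyperedges = clauses (with multiplicity), negations ignored.
from collections import Counter

def to_multihypergraph(clauses):
    """clauses: iterable of tuples of signed ints, e.g. (1, -2, 3)."""
    edges = Counter(tuple(sorted({abs(l) for l in clause})) for clause in clauses)
    vertices = {abs(l) for clause in clauses for l in clause}
    return vertices, edges

# (x1 v ~x2 v x3) & (~x1 v x2 v ~x3): the hyperedge {1,2,3} appears twice.
print(to_multihypergraph([(1, -2, 3), (-1, 2, -3)]))
```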
|
Models whose ground states can be written as an exact matrix product state
(MPS) provide valuable insights into phases of matter. While MPS-solvable
models are typically studied as isolated points in a phase diagram, they can
belong to a connected network of MPS-solvable models, which we call the MPS
skeleton. As a case study where we can completely unearth this skeleton, we
focus on the one-dimensional BDI class -- non-interacting spinless fermions
with time-reversal symmetry. This class, labelled by a topological winding
number, contains the Kitaev chain and is Jordan-Wigner-dual to various
symmetry-breaking and symmetry-protected topological (SPT) spin chains. We show
that one can read off from the Hamiltonian whether its ground state is an MPS:
defining a polynomial whose coefficients are the Hamiltonian parameters,
MPS-solvability corresponds to this polynomial being a perfect square. We
provide an explicit construction of the ground state MPS, its bond dimension
growing exponentially with the range of the Hamiltonian. This complete
characterization of the MPS skeleton in parameter space has three significant
consequences: (i) any two topologically distinct phases in this class admit a
path of MPS-solvable models between them, including the phase transition which
obeys an area law for its entanglement entropy; (ii) we illustrate that the
subset of MPS-solvable models is dense in this class by constructing a sequence
of MPS-solvable models which converge to the Kitaev chain (equivalently, the
quantum Ising chain in a transverse field); (iii) a subset of these MPS states
can be particularly efficiently processed on a noisy intermediate-scale quantum
computer.
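The perfect-square criterion is easy to test symbolically; the sketch below checks whether a polynomial with rational coefficients is a perfect square by requiring every irreducible factor to occur with even multiplicity and the leading constant to be a rational square (a generic check, not the paper's code).

```python
# Check whether a rational-coefficient polynomial is a perfect square.
import sympy as sp

x = sp.symbols('x')

def is_perfect_square(poly_expr):
    const, factors = sp.factor_list(sp.expand(poly_expr), x)
    return const > 0 and sp.sqrt(const).is_rational and \
        all(mult % 2 == 0 for _, mult in factors)

print(is_perfect_square((x**2 - 2*x + 1) * (x**2 + 1)**2))  # True
print(is_perfect_square(x**3 + 1))                           # False
```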
|
In 1979, I. Cior\u{a}nescu and L. Zsid\'o proved a minimum modulus theorem
for entire functions dominated by the restriction to the positive half axis of
a canonical product of genus zero, having all roots on the positive imaginary
axis and satisfying a certain condition.
  Here we prove that the above result is optimal: if a canonical product
$\omega$ of genus zero, having all roots on the positive imaginary axis, does
not satisfy the condition in the 1979 paper, then there always exists an
entire function dominated by the restriction of $\omega$ to the positive half
axis which does not satisfy the desired minimum modulus conclusion. This has
relevant implications concerning the surjectivity of ultradifferential
operators with constant coefficients.
|
With the massive worldwide damage caused by Coronavirus Disease 2019
(SARS-CoV-2, COVID-19), many related research topics have been proposed in the
past two years. Chest Computed Tomography (CT) scans are the most valuable
materials to diagnose COVID-19 symptoms. However, most schemes for COVID-19
classification of chest CT scans are based on the single-slice level, implying
that the most critical CT slice must be selected manually from the original CT
scan volume. To tackle this issue, we propose both 2-D and 3-D models to
predict COVID-19 from CT scans. In our 2-D model, we introduce the Deep
Wilcoxon signed-rank test (DWCC) to determine the importance of each slice of
a CT scan. Furthermore, a
Convolutional CT scan-Aware Transformer (CCAT) is proposed to discover the
context of the slices fully. The frame-level feature is extracted from each CT
slice with an arbitrary backbone network, and the features are then fed to our
within-slice Transformer (WST) to discover context information in the pixel
dimension. The proposed Between-Slice Transformer (BST) is used to aggregate
the extracted spatial-context features of every CT slice. A simple classifier
is then used to judge whether the spatio-temporal features indicate COVID-19
or non-COVID-19. Extensive experiments demonstrated that the proposed CCAT and
DWCC significantly outperform the state-of-the-art methods.
|
Recovery of power flow to critical infrastructures, after grid failure, is a
crucial need arising in scenarios that are increasingly becoming more frequent.
This article proposes a power transition and recovery strategy based on
mode-dependent droop-controlled inverters. The control strategy of the
inverters achieves the following objectives: 1) regulate the output active and
reactive power of the droop-based inverters to a desired value while operating
in on-grid mode; 2) seamlessly transition and recover power flow injections
into the critical loads in the network by inverters operating in off-grid mode
after the main grid fails; 3) require minimal information about grid/network
status and conditions for the mode transition of the droop control. A framework for assessing
the stability of the system and to guide the choice of parameters for
controllers is developed using control-oriented modeling. A comprehensive
controller hardware-in-the-loop-based real-time simulation study on a
test-system based on the realistic electrical network of M-Health Fairview,
University of Minnesota Medical Center, corroborates the efficacy of the
proposed controller strategy.
|
Entity linking (EL) for the rapidly growing short text (e.g. search queries
and news titles) is critical to industrial applications. Most existing
approaches relying on adequate context for long text EL are not effective for
the concise and sparse short text. In this paper, we propose a novel framework
called Multi-turn Multiple-choice Machine reading comprehension (M3) to solve
the short text EL from a new perspective: a query is generated for each
ambiguous mention exploiting its surrounding context, and an option selection
module is employed to identify the golden entity from candidates using the
query. In this way, the M3 framework enables rich interaction between the
limited context and the candidate entities during encoding, and implicitly
considers the dissimilarities within the candidate set in the selection stage. In
addition, we design a two-stage verifier incorporated into M3 to address the
common unlinkable-mention problem in short text. To further consider the
topical coherence and interdependence among referred entities, M3 processes
mentions sequentially in a multi-turn fashion by retrospecting historical
cues. Evaluation shows that our M3 framework achieves the
state-of-the-art performance on five Chinese and English datasets for the
real-world short text EL.
|
We give a review of the calculations of the masses of tetraquarks with two
and four heavy quarks in the framework of the relativistic quark model based on
the quasipotential approach and QCD. The diquark-antidiquark picture of heavy
tetraquarks is used. The quasipotentials of the quark-quark and
diquark-antidiquark interactions are constructed similarly to the previous
consideration of mesons and baryons. Diquarks are considered in the colour
triplet state. It is assumed that the diquark and antidiquark interact in the
tetraquark as a whole and the internal structure of the diquarks is taken into
account by the calculated form factor of the diquark-gluon interaction. All
parameters of the model are kept fixed from our previous calculations of meson
and baryon properties. A detailed comparison of the obtained predictions for
heavy tetraquark masses with available experimental data is given. Many
candidates for tetraquarks are found. It is argued that the structures in the
di-$J/\psi$ mass spectrum observed recently by the LHCb Collaboration can be
interpreted as $cc\bar c\bar c$ tetraquarks.
|
In the last two decades, optical vortices carried by twisted light wavefronts
have attracted a great deal of interest, providing not only new physical
insights into light-matter interactions, but also a transformative platform for
boosting optical information capacity. Meanwhile, advances in nanoscience and
nanotechnology lead to the emerging field of nanophotonics, offering an
unprecedented level of light manipulation via nanostructured materials and
devices. Many exciting ideas and concepts come up when optical vortices meet
nanophotonic devices. Here, we provide a mini review on recent achievements
made in nanophotonics for the generation and detection of optical vortices and
some of their applications.
|
Decision makers involved in the management of civil assets and systems
usually take actions under constraints imposed by societal regulations. Some of
these constraints are related to epistemic quantities, such as the probability of
failure events and the corresponding risks. Sensors and inspectors can provide
useful information supporting the control process (e.g. the maintenance process
of an asset), and decisions about collecting this information should rely on an
analysis of its cost and value. When societal regulations encode an economic
perspective that is not aligned with that of the decision makers, the Value of
Information (VoI) can be negative (i.e., information sometimes hurts), and
almost irrelevant information can even have a significant value (either
positive or negative), for agents acting under these epistemic constraints. We
refer to these phenomena as Information Avoidance (IA) and Information
OverValuation (IOV). In this paper, we illustrate how to assess VoI in
sequential decision making under epistemic constraints (such as those imposed
by societal regulations), by modeling a Partially Observable Markov Decision
Process (POMDP) and evaluating non-optimal policies via Finite State
Controllers (FSCs). We focus on the value of collecting information at the
current time and on that of collecting sequential information; we illustrate
how these values are related and discuss how IA and IOV can occur in these settings.
|
In this work, we show the generative capability of an image classifier
network by synthesizing high-resolution, photo-realistic, and diverse images at
scale. The overall methodology, called Synthesize-It-Classifier (STIC), does
not require an explicit generator network to estimate the density of the data
distribution and sample images from that, but instead uses the classifier's
knowledge of the boundary to perform gradient ascent w.r.t. class logits and
then synthesizes images using Gram Matrix Metropolis Adjusted Langevin
Algorithm (GRMALA) by drawing on a blank canvas. During training, the
classifier iteratively uses these synthesized images as fake samples and
re-estimates the class boundary in a recurrent fashion to improve both the
classification accuracy and quality of synthetic images. STIC shows that
mixing the hard fake samples (i.e., those synthesized by one-hot class
conditioning) with the soft fake samples (synthesized as a convex combination
of classes, i.e., a mixup of classes) improves class interpolation.
We demonstrate an Attentive-STIC network that shows an iterative drawing of
synthesized images on the ImageNet dataset that has thousands of classes. In
addition, we introduce the synthesis using a class conditional score classifier
(Score-STIC) instead of a normal image classifier and show improved results on
several real-world datasets, i.e. ImageNet, LSUN, and CIFAR 10.
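The gradient-ascent step w.r.t. class logits can be sketched as follows; the Gram-matrix MALA sampler and recurrent re-training are omitted, and the stand-in classifier and hyperparameters are assumptions.

```python
# Synthesize an image by gradient ascent on a class logit from a blank canvas.
import torch
import torchvision

def synthesize(classifier, target_class, steps=100, lr=0.1):
    img = torch.zeros(1, 3, 224, 224, requires_grad=True)   # blank canvas
    for _ in range(steps):
        logit = classifier(img)[0, target_class]
        grad, = torch.autograd.grad(logit, img)
        with torch.no_grad():
            img += lr * grad                                # ascend the logit
            img.clamp_(0.0, 1.0)
    return img.detach()

model = torchvision.models.resnet18().eval()   # untrained stand-in classifier
sample = synthesize(model, target_class=1)
```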
|
We prove new $L^p$-$L^q$-estimates for solutions to elliptic differential
operators with constant coefficients in $\mathbb{R}^3$. We use the estimates
for the decay of the Fourier transform of particular surfaces in $\mathbb{R}^3$
with vanishing Gaussian curvature due to Erd\H{o}s--Salmhofer to derive new
Fourier restriction--extension estimates. These allow for constructing
distributional solutions in $L^q(\mathbb{R}^3)$ for $L^p$-data via limiting
absorption by well-known means.
|
For the Langevin model of the dynamics of a Brownian particle with
perturbations orthogonal to its current velocity, in a regime when the
particle velocity modulus becomes constant, we derive an equation for the
characteristic function $\psi (t,\lambda )=M\left[\exp (\lambda
,x(t))/V={\rm v}(0)\right]$ of the position $x(t)$ of the Brownian particle.
The obtained results confirm the conclusion that the model of the dynamics of
a Brownian particle, constructed on the basis of an unconventional physical
interpretation of the Langevin equations, i.e. stochastic equations with
orthogonal influences,
leads to the interpretation of an ensemble of Brownian particles as a system
with wave properties. These results are consistent with the previously obtained
conclusions that, with a certain agreement of the coefficients in the original
stochastic equation, for small random influences and friction, the Langevin
equations lead to a description of the probability density of the position of a
particle based on wave equations. For large random influences and friction, the
probability density is a solution to the diffusion equation, with a diffusion
coefficient that is lower than in the classical diffusion model.
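A toy simulation of the underlying dynamics (constant speed, white-noise perturbations of the heading, i.e. orthogonal to the current velocity) can be written in a few lines; all parameter values are arbitrary illustrations.

```python
# 2-D particle with fixed speed and noise-driven heading (orthogonal kicks).
import numpy as np

rng = np.random.default_rng(0)
v, sigma, dt, n = 1.0, 0.5, 1e-3, 100_000
theta = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))  # heading angle
x = np.cumsum(v * np.cos(theta) * dt)
y = np.cumsum(v * np.sin(theta) * dt)
print(x[-1], y[-1])   # endpoint of one trajectory; ensembles give the density
```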
|
The advent of deep learning has brought an impressive advance to monocular
depth estimation, e.g., supervised monocular depth estimation has been
thoroughly investigated. However, large RGB-to-depth datasets may not always
be available, since collecting accurate depth ground truth for RGB images is a
time-consuming and expensive task. Although the
network can be trained on an alternative dataset to overcome the dataset scale
problem, the trained model is hard to generalize to the target domain due to
the domain discrepancy. Adversarial domain alignment has demonstrated its
efficacy to mitigate the domain shift on simple image classification tasks in
previous works. However, traditional approaches hardly handle the conditional
alignment as they solely consider the feature map of the network. In this
paper, we propose an adversarial training model that leverages semantic
information to narrow the domain gap. Based on the experiments conducted on the
datasets for the monocular depth estimation task including KITTI and
Cityscapes, the proposed compact model achieves state-of-the-art performance
comparable to complex latest models and shows favorable results on boundaries
and objects at far distances.
|
Reliable and accurate localization and mapping are key components of most
autonomous systems. Besides geometric information about the mapped environment,
the semantics plays an important role to enable intelligent navigation
behaviors. In most realistic environments, this task is particularly
complicated due to dynamics caused by moving objects, which can corrupt the
mapping step or derail localization. In this paper, we propose an extension of
a recently published surfel-based mapping approach exploiting three-dimensional
laser range scans by integrating semantic information to facilitate the mapping
process. The semantic information is efficiently extracted by a fully
convolutional neural network and rendered on a spherical projection of the
laser range data. This computed semantic segmentation results in point-wise
labels for the whole scan, allowing us to build a semantically-enriched map
with labeled surfels. This semantic map enables us to reliably filter moving
objects, but also improve the projective scan matching via semantic
constraints. Our experimental evaluation on challenging highway sequences from
the KITTI dataset, with very few static structures and a large amount of moving cars,
shows the advantage of our semantic SLAM approach in comparison to a purely
geometric, state-of-the-art approach.
|
Robotic fabric manipulation has applications in home robotics, textiles,
senior care and surgery. Existing fabric manipulation techniques, however, are
designed for specific tasks, making it difficult to generalize across different
but related tasks. We build upon the Visual Foresight framework to learn fabric
dynamics that can be efficiently reused to accomplish different sequential
fabric manipulation tasks with a single goal-conditioned policy. We extend our
earlier work on VisuoSpatial Foresight (VSF), which learns visual dynamics on
domain randomized RGB images and depth maps simultaneously and completely in
simulation. In this earlier work, we evaluated VSF on multi-step fabric
smoothing and folding tasks against 5 baseline methods in simulation and on the
da Vinci Research Kit (dVRK) surgical robot without any demonstrations at train
or test time. A key finding was that depth sensing significantly improves
performance: RGBD data yields an 80% improvement in fabric folding success rate
in simulation over pure RGB data. In this work, we vary 4 components of VSF,
including data generation, visual dynamics model, cost function, and
optimization procedure. Results suggest that training visual dynamics models
using longer, corner-based actions can improve the efficiency of fabric folding
by 76% and enable a physical sequential fabric folding task that VSF could not
previously perform with 90% reliability. Code, data, videos, and supplementary
material are available at https://sites.google.com/view/fabric-vsf/.
|
It is essential to help drivers develop an appropriate understanding of level
2 automated driving systems in order to maintain driving safety. A human machine interface
(HMI) was proposed to present real time results of image recognition by the
automated driving systems to drivers. It was expected that drivers could better
understand the capabilities of the systems by observing the proposed HMI.
Driving simulator experiments with 18 participants were performed to evaluate
the effectiveness of the proposed system. Experimental results indicated that
the proposed HMI could effectively inform drivers of potential risks
continuously and help drivers better understand the level 2 automated driving
systems.
|