We present the GeneScore, a concept of feature reduction for Machine Learning
analysis of biomedical data. Using expert knowledge, the GeneScore integrates
different molecular data types into a single score. We show that the GeneScore
is superior to a binary matrix in the classification of cancer entities from
SNV, Indel, CNV, gene fusion and gene expression data. The GeneScore is a
straightforward way to facilitate state-of-the-art analysis, while making use
of the available scientific knowledge on the nature of molecular data features
of the available scientific knowledge on the nature of the molecular data
features used.
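As a minimal illustration of the scoring idea, the sketch below collapses the heterogeneous molecular events observed on a single gene into one numeric feature instead of a row of binary flags. The event types and expert weights are hypothetical placeholders, not the weights defined in the paper.

```python
# Hypothetical expert weights per molecular event type (illustrative only,
# not the paper's scoring scheme).
EXPERT_WEIGHTS = {
    "snv_damaging": 1.0,
    "indel_frameshift": 1.0,
    "cnv_amplification": 0.5,
    "gene_fusion": 1.0,
    "expression_zscore": 0.25,
}

def gene_score(events):
    """Collapse the molecular events observed on one gene into a single score,
    replacing a binary presence/absence matrix with a weighted aggregate."""
    return sum(EXPERT_WEIGHTS[name] * value for name, value in events.items())

# A gene with a damaging SNV, a two-copy amplification and mild over-expression:
print(gene_score({"snv_damaging": 1, "cnv_amplification": 2,
                  "expression_zscore": 1.3}))
```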
|
In their everyday life, the speech recognition performance of human listeners
is influenced by diverse factors, such as the acoustic environment, the talker
and listener positions, possibly impaired hearing, and optional hearing
devices. Prediction models come closer to considering all required factors
simultaneously to predict the individual speech recognition performance in
complex acoustic environments. While such predictions may still not be
sufficiently accurate for serious applications, they can already be performed
and demand an accessible representation. In this contribution, an interactive
representation of speech recognition performance is proposed, which focuses on
the listener's head orientation and the spatial dimensions of an acoustic scene.
An exemplary modeling toolchain, including an acoustic rendering model, a
hearing device model, and a listener model, was used to generate a data set for
demonstration purposes. Using the spatial speech recognition maps to explore
this data set demonstrated the suitability of the approach to observe possibly
relevant behavior. The proposed representation provides a suitable target to
compare and validate different modeling approaches in ecologically relevant
contexts. Eventually, it may serve as a tool to use validated prediction models
in the design of spaces and devices which take speech communication into
account.
|
Let $K$ be an imaginary quadratic field with class number 1. In this paper we
obtain the functional equation of the $p$-adic $L$-function of: (1) a small
slope $p$-stabilisation of a Bianchi modular form, and (2) a critical slope
$p$-stabilisation of a base-change Bianchi modular form that is
$\Sigma$-smooth. To treat case (2) we use $p$-adic families of Bianchi modular
forms.
|
For every prime number $p\geq 3$ and every integer $m\geq 1$, we prove the
existence of a continuous Galois representation $\rho: G_\mathbb{Q} \rightarrow
\mathrm{GL}_m(\mathbb{Z}_p)$ which has open image and is unramified outside
$\{p,\infty\}$ (resp. outside $\{2,p,\infty\}$) when $p\equiv 3$ mod $4$ (resp.
$p \equiv 1$ mod $4$).
|
Low-dimensional node embeddings play a key role in analyzing graph datasets.
However, little work studies exactly what information is encoded by popular
embedding methods, and how this information correlates with performance in
downstream machine learning tasks. We tackle this question by studying whether
embeddings can be inverted to (approximately) recover the graph used to
generate them. Focusing on a variant of the popular DeepWalk method (Perozzi et
al., 2014; Qiu et al., 2018), we present algorithms for accurate embedding
inversion - i.e., from the low-dimensional embedding of a graph G, we can find
a graph H with a very similar embedding. We perform numerous experiments on
real-world networks, observing that significant information about G, such as
specific edges and bulk properties like triangle density, is often lost in H.
However, community structure is often preserved or even enhanced. Our findings
are a step towards a more rigorous understanding of exactly what information
embeddings encode about the input graph, and why this information is useful for
learning tasks.
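For context, here is a minimal sketch of the closed-form DeepWalk-style embedding (the matrix-factorization view of Qiu et al., 2018) that the inversion algorithms operate on; the window length, negative-sampling parameter and dense linear algebra are simplifying assumptions suitable only for small graphs.

```python
import numpy as np

def deepwalk_embedding(A, dim=16, T=10, b=1.0):
    """DeepWalk-style embedding via SVD of the truncated log-PMI matrix,
    following the factorization view of Qiu et al. (2018). A is a dense
    symmetric adjacency matrix; embedding inversion then seeks a graph H
    whose embedding is close to the one returned here."""
    d = A.sum(axis=1)
    vol = d.sum()
    P = A / d[:, None]                      # random-walk transitions D^-1 A
    S, Pr = np.zeros_like(A, dtype=float), np.eye(len(A))
    for _ in range(T):                      # sum of powers: window of length T
        Pr = Pr @ P
        S += Pr
    M = np.log(np.maximum(vol / (b * T) * (S / d[None, :]), 1.0))
    U, s, _ = np.linalg.svd(M)
    return U[:, :dim] * np.sqrt(s[:dim])    # rank-dim factor as node embedding
```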
|
Here, we propose an original approach for human activity recognition (HAR)
with commercial IEEE 802.11ac (WiFi) devices, which generalizes across
different persons, days and environments. To achieve this, we devise a
technique to extract, clean and process the received phases from the channel
frequency response (CFR) of the WiFi channel, obtaining an estimate of the
Doppler shift at the receiver of the communication link. The Doppler shift
reveals the presence of moving scatterers in the environment, while not being
affected by (environment specific) static objects. The proposed HAR framework
is trained on data collected as a person performs four different activities and
is tested on unseen setups, to assess its performance as the person, the day
and/or the environment change with respect to those considered at training
time. In the worst-case scenario, the proposed HAR technique reaches an average
accuracy higher than 95%, validating the effectiveness of the extracted Doppler
information, used in conjunction with a learning algorithm based on a neural
network, in recognizing human activities in a subject and environment
independent fashion.
|
Good approximate eigenstates of a Hamiltonian operator which possesses a point
as well as a continuous spectrum have been obtained using the Lanczos
algorithm. Iterating with the bare Hamiltonian operator yields spurious
solutions which can easily be identified. The rms radius of the ground state
eigenvector, for example, is calculated using the bare operator.
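A minimal sketch of the Lanczos iteration with the bare operator, as used above; the dense-matrix stand-in for the Hamiltonian and the ghost-filtering remark are illustrative assumptions.

```python
import numpy as np

def lanczos(H, v0, m=50):
    """Plain Lanczos iteration with the bare operator H (a dense matrix here).
    Eigenvalues of the tridiagonal matrix approximate eigenvalues of H.
    For operators with a continuous spectrum, spurious ("ghost") solutions
    can appear and must be identified, e.g. via observables such as the
    rms radius, as described in the text."""
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    alphas, betas, beta = [], [], 0.0
    for _ in range(m):
        w = H @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha); betas.append(beta)
        if beta < 1e-12:          # invariant subspace found; stop early
            break
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)
```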
|
Frequency estimation is a fundamental problem in many areas. The well-known
A&M and its variant estimators have established an estimation framework by
iteratively interpolating the discrete Fourier transform (DFT) coefficients. In
general, those estimators require two DFT interpolations per iteration, have
uneven initial estimation performance across frequencies, and perform poorly for
small sample sizes due to the low-order approximations involved. Exploiting
the iterative estimation framework of A&M, we introduce, for the first time, the
Pad\'e approximation to frequency estimation, unveil some features of the
updating function used for refining the estimate in each iteration, and
develop a simple closed-form solution for the residual estimation error.
Extensive simulation results are provided, validating the superiority of the
new estimator over state-of-the-art estimators across wide ranges of key
parameters.
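For concreteness, here is a sketch of the A&M iteration that this work builds on: a coarse DFT peak search followed by refinement using two interpolated DFT coefficients at +-0.5 bin per iteration. The update rule follows Aboutanios & Mulgrew; the Pad\'e-based refinement proposed above is not reproduced here.

```python
import numpy as np

def am_estimator(x, iterations=3):
    """A&M iterative frequency estimator for a (noisy) complex exponential.
    Each iteration uses exactly two interpolated DFT coefficients, as noted
    in the text. Returns the frequency estimate in cycles/sample."""
    N = len(x)
    n = np.arange(N)
    k = np.argmax(np.abs(np.fft.fft(x)))   # coarse estimate: DFT peak bin
    delta = 0.0
    for _ in range(iterations):
        Xp = np.sum(x * np.exp(-2j * np.pi * n * (k + delta + 0.5) / N))
        Xm = np.sum(x * np.exp(-2j * np.pi * n * (k + delta - 0.5) / N))
        delta += 0.5 * np.real((Xp + Xm) / (Xp - Xm))  # A&M update rule
    return (k + delta) / N
```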
|
The COVID-19 pandemic is having a devastating effect on the health of the
global population. There are several efforts to prevent the spread of the
virus. Among those efforts, cleaning and disinfecting public areas have become
important tasks. In order to contribute in this direction, this paper proposes
a coverage path planning algorithm for a spraying drone, a micro aerial vehicle
with a mounted sprayer/sprinkler system, to disinfect areas. In contrast
with planners in the state-of-the-art, this proposal presents i) a new
sprayer/sprinkler model that fits a more realistic coverage volume to the drop
dispersion and ii) a planning algorithm that efficiently restricts the flight
to the region of interest avoiding potential collisions in bounded scenes. The
drone with the algorithm has been tested in several simulation scenes, showing
that the algorithm is effective and covers more areas with respect to other
approaches in the literature. Note that the proposal is not limited to
disinfection applications, but can also be applied to others, such as painting
or precision agriculture.
|
The combination of machine learning with control offers many opportunities,
in particular for robust control. However, due to strong safety and reliability
requirements in many real-world applications, providing rigorous statistical
and control-theoretic guarantees is of utmost importance, yet difficult to
achieve for learning-based control schemes. We present a general framework for
learning-enhanced robust control that allows for systematic integration of
prior engineering knowledge, is fully compatible with modern robust control and
still comes with rigorous and practically meaningful guarantees. Building on
the established Linear Fractional Representation and Integral Quadratic
Constraints framework, we integrate Gaussian Process Regression as a learning
component and state-of-the-art robust controller synthesis. In a concrete
robust control example, our approach is demonstrated to yield improved
performance with more data, while guarantees are maintained throughout.
|
Thermal conduction in polymer nanocomposites depends on several parameters
including the thermal conductivity and geometrical features of the
nanoparticles, the particle loading, their degree of dispersion and the
formation of a percolating network. Molecular junctions between free-standing
conductive nanoparticles were previously proposed to enhance the efficiency of
thermal contact. This work reports for the first time an investigation of
molecular junctions within a graphene polymer nanocomposite. Molecular dynamics
simulations were conducted to investigate the thermal transport efficiency of
molecular junctions in tight contact with the polymer and to quantify their
contribution when graphene and the molecular junctions are surrounded by
polydimethylsiloxane (PDMS). A strong dependence of the thermal conductance was
found in the PDMS/graphene model, with the best performance obtained with short
and conformationally rigid molecular junctions.
|
We consider adversarial training of deep neural networks through the lens of
Bayesian learning, and present a principled framework for adversarial training
of Bayesian Neural Networks (BNNs) with certifiable guarantees. We rely on
techniques from constraint relaxation of non-convex optimisation problems and
modify the standard cross-entropy error model to enforce posterior robustness
to worst-case perturbations in $\epsilon$-balls around input points. We
illustrate how the resulting framework can be combined with methods commonly
employed for approximate inference of BNNs. In an empirical investigation, we
demonstrate that the presented approach enables training of certifiably robust
models on MNIST, FashionMNIST and CIFAR-10 and can also be beneficial for
uncertainty calibration. Our method is the first to directly train certifiable
BNNs, thus facilitating their deployment in safety-critical applications.
|
In this paper, we consider distributed Nash equilibrium seeking in monotone
and hypomonotone games. We first assume that each player has knowledge of the
opponents' decisions and propose a passivity-based modification of the standard
gradient-play dynamics, that we call "Heavy Anchor". We prove that Heavy Anchor
allows a relaxation of strict monotonicity of the pseudo-gradient, needed for
gradient-play dynamics, and can ensure exact asymptotic convergence in merely
monotone regimes. We extend these results to the setting where each player has
only partial information of the opponents' decisions. Each player maintains a
local decision variable and an auxiliary state estimate and communicates with
their neighbours to learn the opponents' actions. We modify Heavy Anchor via a
distributed Laplacian feedback and show how we can exploit
equilibrium-independent passivity properties to achieve convergence to a Nash
equilibrium in hypomonotone regimes.
|
In this paper we find curves minimizing the elastic energy among curves whose
length is fixed and whose ends are pinned. Applying the shooting method, we can
identify all critical points explicitly and determine which curve is the global
minimizer. As a result we show that the critical points consist of wavelike
elasticae and the minimizers do not have any loops or interior inflection
points.
|
This paper initiates a discussion of mechanism design when the participating
agents exhibit preferences that deviate from expected utility theory (EUT). In
particular, we consider mechanism design for systems where the agents are
modeled as having cumulative prospect theory (CPT) preferences, which is a
generalization of EUT preferences. We point out some of the key modifications
needed in the theory of mechanism design that arise from agents having CPT
preferences and some of the shortcomings of the classical mechanism design
framework. In particular, we show that the revelation principle, which has
traditionally played a fundamental role in mechanism design, does not continue
to hold under CPT. We develop an appropriate framework that we call mediated
mechanism design which allows us to recover the revelation principle for CPT
agents. We conclude with some interesting directions for future work.
|
We study the Becker-D\"oring bubblelator, a variant of the Becker-D\"oring
coagulation-fragmentation system that models the growth of clusters by gain or
loss of monomers. Motivated by models of gas evolution oscillators from
physical chemistry, we incorporate injection of monomers and depletion of large
clusters. For a wide range of physical rates, the Becker-D\"oring system itself
exhibits a dynamic phase transition as mass density increases past a critical
value. We connect the Becker-D\"oring bubblelator to a transport equation
coupled with an integrodifferential equation for excess monomer density by
formal asymptotics in the near-critical regime. For suitable
injection/depletion rates, we argue that time-periodic solutions appear via a
Hopf bifurcation. Numerics confirm that the generation and removal of large
clusters can become desynchronized, leading to temporal oscillations associated
with bursts of large-cluster nucleation.
|
Graph convolutional neural networks (GCNs) generalize traditional convolutional
neural networks (CNNs) from low-dimensional regular graphs (e.g., image) to
high dimensional irregular graphs (e.g., text documents on word embeddings).
Due to inevitable faulty data collection instruments, deceptive data
manipulation, or other system errors, the data might be error-contaminated.
Even a small amount of error such as noise can compromise the ability of GCNs
and render them inadmissible to a large extent. The key challenge is how to
effectively and efficiently employ GCNs in the presence of erroneous data. In
this paper, we propose a novel Robust Graph Convolutional Neural Network for
possibly erroneous single-view or multi-view data, where data may come from
multiple sources. By incorporating extra layers via autoencoders into
traditional graph convolutional networks, we characterize and handle typical
error models explicitly. Experimental results on various real-world datasets
demonstrate the superiority of the proposed model over the baseline methods and
its robustness against different types of error.
|
Top-K SpMV is a key component of similarity-search on sparse embeddings. This
sparse workload does not perform well on general-purpose NUMA systems that
employ traditional caching strategies. Instead, modern FPGA accelerator cards
have a few tricks up their sleeve. We introduce a Top-K SpMV FPGA design that
leverages reduced precision and a novel packet-wise CSR matrix compression,
enabling custom data layouts and delivering bandwidth efficiency often
unreachable even in architectures with higher peak bandwidth. With HBM-based
boards, we are 100x faster than a multi-threaded CPU implementation and 2x
faster than a GPU with 20% higher bandwidth, with 14.2x higher
power-efficiency.
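As a hedged baseline for reference, the CPU formulation of the kernel being accelerated is shown below; the matrix shape, density and k are placeholders, and the FPGA design's reduced precision and packet-wise CSR layout are of course not reflected.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def topk_spmv(A, q, k=10):
    """Reference Top-K SpMV: multiply a sparse embedding matrix by a query
    vector and return the indices and scores of the k largest entries."""
    scores = A @ q                                 # sparse matrix-vector product
    top = np.argpartition(scores, -k)[-k:]         # unsorted top-k candidates
    order = np.argsort(scores[top])[::-1]          # sort candidates by score
    return top[order], scores[top][order]

A = sparse_random(10000, 512, density=0.01, format="csr", random_state=0)
q = np.random.default_rng(0).standard_normal(512)
idx, val = topk_spmv(A, q, k=5)
```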
|
To extract the Cabibbo-Kobayashi-Maskawa (CKM) matrix element $|V_{ub}|$, we
have re-analyzed all the available inputs (data and theory) on the $B\to\pi
l\nu$ decays including the newly available inputs on the form-factors from
light cone sum rule (LCSR) approach. We have reproduced and compared the
results with the procedure taken up by the Heavy Flavor Averaging Group
(HFLAV), while commenting on the effect of outliers on the fits. After removing
the outliers and creating a comparable group of data-sets, we mention a few
scenarios in the extraction of $|V_{ub}|$. In all those scenarios, the
extracted values of $|V_{ub}|$ are higher than that obtained by HFLAV. Our best
results for $|V_{ub}|^{exc.}$ are $(3.88 \pm 0.13)\times 10^{-3}$ and $(3.87
\pm 0.13)\times 10^{-3}$ in frequentist and Bayesian approaches, respectively,
which are consistent with that extracted from inclusive decays $|V_{ub}|^{inc}$
within $1~\sigma$ confidence interval.
|
Lithium metal has been an attractive candidate as a next generation anode
material. Despite its popularity, stability issues of lithium in the liquid
electrolyte and the formation of lithium whiskers have kept it from practical
use. Three-dimensional (3D) current collectors have been proposed as an
effective method to mitigate whisker growth. Although extensive research
efforts have been made, the effects of three key parameters of the 3D current
collectors, namely the surface area, the tortuosity factor, and the surface
chemistry, on the performance of lithium metal batteries remain elusive.
Herein, we quantitatively studied the role of these three parameters by
synthesizing four types of porous copper networks with different sizes of
well-structured micro-channels. X-ray microscale computed tomography (micro-CT)
allowed us to assess the surface area, the pore size and the tortuosity factor
of the porous copper materials. A metallic Zn coating was also applied to study
the influence of surface chemistry on the performance of the 3D current
collectors. The effects of these parameters on the performance were studied in
detail through Scanning Electron Microscopy (SEM) and Titration Gas
Chromatography (TGC). Stochastic simulations further allowed us to interpret
the role of the tortuosity factor in lithiation. By understanding these
effects, the optimal range of the key parameters is found for the porous copper
anodes and their performance is predicted. Using these parameters to inform the
design of porous copper anodes for Li deposition, Coulombic efficiencies (CE)
of up to 99.56% are achieved, thus paving the way for the design of effective
3D current collector systems.
|
This position paper summarizes a recently developed research program focused
on inference in the context of data centric science and engineering
applications, and forecasts its trajectory forward over the next decade. Often
one endeavours in this context to learn complex systems in order to make more
informed predictions and high stakes decisions under uncertainty. Some key
challenges which must be met in this context are robustness, generalizability,
and interpretability. The Bayesian framework addresses these three challenges
elegantly, while bringing with it a fourth, undesirable feature: it is
typically far more expensive than its deterministic counterparts. In the 21st
century, and increasingly over the past decade, a growing number of methods
have emerged which allow one to leverage cheap low-fidelity models in order to
precondition algorithms for performing inference with more expensive models and
make Bayesian inference tractable in the context of high-dimensional and
expensive models. Notable examples are multilevel Monte Carlo (MLMC),
multi-index Monte Carlo (MIMC), and their randomized counterparts (rMLMC),
which are able to provably achieve a dimension-independent (including
$\infty$-dimensional) canonical complexity rate of $1/\mathrm{MSE}$ with respect
to the mean squared error (MSE). Some parallelizability is typically lost in an
inference context, but recently this has been largely recovered via novel
double randomization approaches. Such an approach delivers i.i.d. samples of
quantities of interest which are unbiased with respect to the infinite
resolution target distribution. Over the coming decade, this family of
algorithms has the potential to transform data centric science and engineering,
as well as classical machine learning applications such as deep learning, by
scaling up and scaling out fully Bayesian inference.
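To make the telescoping idea behind MLMC concrete, here is a minimal sketch; the SDE, payoff and per-level sample counts are toy assumptions, and the level coupling (shared randomness between consecutive levels) that gives MLMC its variance reduction is elided, leaving only an unbiased telescoping sum.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimator(level, n):
    """Hypothetical level-l quantity of interest: Euler-Maruyama samples of a
    geometric-Brownian-motion-like SDE with 2**level time steps."""
    h = 1.0 / 2**level
    X = np.ones(n)
    for _ in range(2**level):
        X += 0.5 * X * h + 0.2 * X * np.sqrt(h) * rng.standard_normal(n)
    return X

def mlmc(L, n_per_level):
    """Multilevel Monte Carlo telescoping sum:
    E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}]."""
    total = np.mean(estimator(0, n_per_level[0]))
    for l in range(1, L + 1):
        total += np.mean(estimator(l, n_per_level[l])
                         - estimator(l - 1, n_per_level[l]))
    return total

print(mlmc(4, [4000, 2000, 1000, 500, 250]))
```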
|
We present a study on magnetotransport in films of the topological Dirac
semimetal Cd$_{3}$As$_{2}$ doped with Sb grown by molecular beam epitaxy. In
our weak antilocalization analysis, we find a significant enhancement of the
spin-orbit scattering rate, indicating that Sb doping leads to a strong
increase of the pristine band-inversion energy. We discuss possible origins of
this large enhancement by comparing Sb-doped Cd$_{3}$As$_{2}$ with other
compound semiconductors. Sb-doped Cd$_{3}$As$_{2}$ will be a suitable system
for further investigations and functionalization of topological Dirac
semimetals.
|
Ultra-reliable low latency communications (URLLC) arose to serve industrial
IoT (IIoT) use cases within 5G. Currently, it has inherent limitations in
supporting future services. Based on state-of-the-art research and practical
deployment experience, in this article, we introduce and advocate for three
variants: broadband, scalable and extreme URLLC. We discuss use cases and key
performance indicators and identify technology enablers for the new service
modes. We bring practical considerations from the IIoT testbed and provide an
outlook toward some new research directions.
|
The design of algorithms that leverage machine learning alongside
combinatorial optimization techniques is a young but thriving area of
operations research. Although trends are emerging, the literature has not yet
converged on the proper way of combining these two techniques or on the
predictor architectures that should be used. We focus on operations research
problems for which no efficient algorithms are known, but that are variants of
classic problems for which efficient algorithms exist. Elaborating on recent
contributions that suggest using a machine learning predictor to approximate
the variant by the classic problem, we introduce the notion of structured
approximation of an operations research problem by another. We provide a
generic learning algorithm to fit these approximations. This algorithm requires
only instances of the variant in the training set, unlike previous learning
algorithms that also require the solution of these instances. Using tools from
statistical learning theory, we prove a result showing the convergence speed of
the estimator, and deduce an approximation ratio guarantee on the performance
of the algorithm obtained for the variant. Numerical experiments on a single
machine scheduling and a stochastic vehicle scheduling problem from the
literature show that our learning algorithm is competitive with algorithms that
have access to optimal solutions, leading to state-of-the-art algorithms for
the variant considered.
|
Spatial reasoning on multi-view line drawings by state-of-the-art supervised
deep networks was recently shown to have puzzlingly low performance on the
SPARE3D dataset. To study the reason behind this low performance and to further
our understanding of these tasks, we design controlled experiments on both input
data and network designs. Guided by the insights from these experimental
results, we propose a simple contrastive learning approach along with other
network modifications to improve the baseline performance. Our approach uses a
self-supervised binary classification network to compare the line drawing
differences between various views of any two similar 3D objects. It enables
deep networks to effectively learn detail-sensitive yet view-invariant line
drawing representations of 3D objects. Experiments show that our method could
significantly increase the baseline performance in SPARE3D, while some popular
self-supervised learning methods cannot.
|
We have entered an era of a pandemic that has shaken the world with major
impact to medical systems, economics and agriculture. Prominent computational
and mathematical models have been unreliable due to the complexity of the
spread of infections. Moreover, lack of data collection and reporting makes any
such modelling attempts unreliable. Hence we need to re-look at the situation
with the latest data sources and most comprehensive forecasting models. Deep
learning models such as recurrent neural networks are well suited for modelling
temporal sequences. In this paper, we use prominent recurrent neural networks,
in particular \textit{long short-term memory} (LSTM) networks, bidirectional
LSTMs, and encoder-decoder LSTM models, for multi-step (short-term) forecasting
of the spread of COVID-19 infections among selected states in India. We select
states with COVID-19 hotspots in terms of the rate of infections, compare them
with states where infections have been contained or reached their peak, and
provide a two-month-ahead forecast showing that cases will slowly decline. Our
results show that long-term forecasts are promising, which motivates the
application of the method in other countries or areas. We note that although we
made some progress in forecasting, the challenges in modelling remain due to
data and difficulty in capturing factors such as population density, travel
logistics, and social aspects such as culture and lifestyle.
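A minimal encoder-decoder LSTM forecaster in the spirit of the models compared above; the layer sizes, window and horizon are assumptions, and the synthetic arrays only stand in for a normalized daily case-count series.

```python
import numpy as np
import tensorflow as tf

window, horizon = 14, 7  # assumed: 14 past days in, 7 days forecast out
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(64),                         # encoder
    tf.keras.layers.RepeatVector(horizon),            # bridge to decoder steps
    tf.keras.layers.LSTM(64, return_sequences=True),  # decoder
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
])
model.compile(optimizer="adam", loss="mse")

# Toy usage with synthetic data shaped like a univariate case-count series:
x = np.random.rand(256, window, 1).astype("float32")
y = np.random.rand(256, horizon, 1).astype("float32")
model.fit(x, y, epochs=2, verbose=0)
```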
|
We measure the small-scale clustering of the Data Release 16 extended Baryon
Oscillation Spectroscopic Survey Luminous Red Galaxy sample, corrected for
fibre-collisions using Pairwise Inverse Probability weights, which give
unbiased clustering measurements on all scales. We fit to the monopole and
quadrupole moments and to the projected correlation function over the
separation range $7-60\,h^{-1}$Mpc with a model based on the Aemulus
cosmological emulator to measure the growth rate of cosmic structure,
parameterized by $f\sigma_8$. We obtain a measurement of
$f\sigma_8(z=0.737)=0.408\pm0.038$, which is $1.4\sigma$ lower than the value
expected from 2018 Planck data for a flat $\Lambda$CDM model, and is more
consistent with recent weak-lensing measurements. The level of precision
achieved is 1.7 times better than more standard measurements made using only
the large-scale modes of the same sample. We also fit to the data using the
full range of scales $0.1-60\,h^{-1}$Mpc modelled by the Aemulus cosmological
emulator and find a $4.5\sigma$ tension in the amplitude of the halo velocity
field with the Planck+$\Lambda$CDM model, driven by a mismatch on the
non-linear scales. This may not be cosmological in origin, and could be due to
a breakdown in the Halo Occupation Distribution model used in the emulator.
Finally, we perform a robust analysis of possible sources of systematics,
including the effects of redshift uncertainty and incompleteness due to target
selection that were not included in previous analyses fitting to clustering
measurements on small scales.
|
We study the physical properties of four-dimensional, string-theoretical,
horizonless "fuzzball" geometries by imaging their shadows. Their
microstructure traps light rays straying near the would-be horizon on
long-lived, highly redshifted chaotic orbits. In fuzzballs sufficiently near
the scaling limit this creates a shadow much like that of a black hole, while
avoiding the paradoxes associated with an event horizon. Observations of the
shadow size and residual glow can potentially discriminate between fuzzballs
away from the scaling limit and alternative models of black compact objects.
|
We introduce a simplified model of physiological coughing or sneezing, in the
form of a thin liquid layer subject to a rapid (30 m/s) air stream. The setup
is simulated using the Volume-Of-Fluid method with octree mesh adaptation, the
latter allowing grid sizes small enough to capture the Kolmogorov length scale.
The results confirm the trend to an intermediate distribution between a
Log-Normal and a Pareto distribution $P(d) \propto d^{-3.3}$ for the
distribution of droplet sizes in agreement with a previous re-analysis of
experimental results by one of the authors. The mechanism of atomisation does
not differ qualitatively from the multiphase mixing layer experiments and
simulations. No mechanism for a bimodal distribution, also sometimes observed,
is evidenced in these simulations.
|
We show that the ring of modular forms with characters for the even
unimodular lattice of signature (2,18) is obtained from the invariant ring of
$\mathrm{Sym}(\mathrm{Sym}^8(V) \oplus \mathrm{Sym}^{12}(V))$ with respect to
the action of $\mathrm{SL}(V)$ by adding a Borcherds product of weight 132 with
one relation of weight 264, where $V$ is a 2-dimensional $\mathbb{C}$-vector
space. The proof is based on the study of the moduli space of elliptic K3
surfaces with a section.
|
We present an overview of phase field modeling of active matter systems as a
tool for capturing various aspects of complex and active interfaces. We first
describe how interfaces between different phases are characterized in phase
field models and provide simple fundamental governing equations that describe
their evolution. For a simple model, we then show how physical properties of
the interface, such as surface tension and interface thickness, can be
recovered from these equations. We then explain how the phase field formulation
can be coupled to various active matter realizations and discuss three
particular examples of continuum biphasic active matter: active
nematic-isotropic interfaces, active matter in viscoelastic environments, and
active shells in a fluid background. Finally, we describe how multiple phase
fields can be used to model active cellular monolayers and present a general
framework that can be applied to the study of tissue behaviour and collective
migration.
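A minimal sketch of the kind of governing equation discussed above: explicit time stepping of the Allen-Cahn equation for a single phase field, where eps sets the interface thickness. The parameter values and the droplet initial condition are illustrative choices, not ones taken from the text.

```python
import numpy as np

# Explicit Euler stepping of d(phi)/dt = gamma * (lap(phi) - f'(phi)/eps^2),
# with the double-well potential f(phi) = (phi^2 - 1)^2 / 4.
N, eps, gamma, dt = 128, 0.02, 1.0, 1e-5   # dt below the h^2/4 stability limit
x = np.linspace(0, 1, N)
X, Y = np.meshgrid(x, x)
# Initial condition: a circular droplet with a tanh interface profile.
phi = np.tanh((0.25 - np.hypot(X - 0.5, Y - 0.5)) / (np.sqrt(2) * eps))

def laplacian(f, h):
    """5-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / h**2

for _ in range(1000):
    dW = phi**3 - phi                       # derivative of the double well
    phi += dt * gamma * (laplacian(phi, 1.0 / N) - dW / eps**2)
```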
|
The chiral magnetic effect with a fluctuating chiral imbalance is more
realistic in the evolution of quark-gluon plasma, which reflects the random
gluonic topological transition. Incorporating this dynamics, we calculate the
chiral magnetic current in response to space-time dependent axial gauge
potential and magnetic field in AdS/CFT correspondence. In contrast to
conventional treatment of constant axial chemical potential, the response
function here is the AVV three-point function of the $\mathcal{N}=4$ super
Yang-Mills at strong coupling. Through an iterative solution of the nonlinear
equations of motion in Schwarzschild-AdS$_5$ background, we are able to express
the AVV function in terms of two Heun functions and prove its UV/IR finiteness,
as expected for $\mathcal{N}=4$ super Yang-Mills theory. We found that the
dependence of the chiral magnetic current on a non-constant chiral imbalance is
non-local, different from hydrodynamic approximation, and demonstrates the
subtlety of the infrared limit discovered in field theoretic approach. We
expect our results to enrich the understanding of the phenomenology of the chiral
magnetic effect in the context of relativistic heavy ion collisions.
|
We re-examine the celebrated Doob--McKean identity that identifies a
conditioned one-dimensional Brownian motion as the radial part of a
3-dimensional Brownian motion or, equivalently, a Bessel-3 process, albeit now
in the analogous setting of isotropic $\alpha$-stable processes. We find a
natural analogue that matches the Brownian setting, with the role of the
Brownian motion replaced by that of the isotropic $\alpha$-stable process,
providing one interprets the components of the original identity in the right
way.
|
In this paper, we consider Bayesian point estimation and predictive density
estimation in the binomial case. After presenting preliminary results on these
problems, we compare the risk functions of the Bayes estimators based on the
truncated and untruncated beta priors and obtain dominance conditions when the
probability parameter is less than or equal to a known constant. The case where
there are both a lower bound restriction and an upper bound restriction is also
treated. Then our problems are shown to be related to similar problems in the
Poisson case. Finally, numerical studies are presented.
|
In this paper, we obtain a characterization of GVZ-groups in terms of
commutators and monolithic quotients. This characterization is based on
counting formulas due to Gallagher.
|
Network traffic is growing at an unprecedented pace globally. On modern network
infrastructure, classic network intrusion detection methods are inefficient at
classifying a vast inflow of network traffic. This paper presents a modern
approach towards building a network intrusion detection system (NIDS) using
various deep learning methods. To further improve our proposed scheme and make
it effective in real-world settings, we use deep transfer learning techniques
where we transfer the knowledge learned by our model in a source domain with
plentiful computational and data resources to a target domain with sparse
availability of both the resources. Our proposed method achieved 98.30%
classification accuracy score in the source domain and an improved 98.43%
classification accuracy score in the target domain with a boost in the
classification speed using the UNSW-15 dataset. This study demonstrates that deep
transfer learning techniques make it possible to construct large deep learning
models to perform network classification, which can be deployed in the real
world target domains where they can maintain their classification performance
and improve their classification speed despite the limited accessibility of
resources.
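A hedged sketch of the transfer recipe described above: pre-train on the resource-rich source domain, then copy the weights and fine-tune only the classification head on the target domain. The layer sizes, feature count and class count are placeholders, not the paper's architecture.

```python
import tensorflow as tf

def make_model(n_features, n_classes):
    # Hypothetical NIDS classifier; sizes are illustrative assumptions.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(256, activation="relu", name="feat1"),
        tf.keras.layers.Dense(128, activation="relu", name="feat2"),
        tf.keras.layers.Dense(n_classes, activation="softmax", name="head"),
    ])

source_model = make_model(n_features=42, n_classes=10)
source_model.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
# source_model.fit(X_source, y_source, ...)            # plentiful source data

target_model = make_model(n_features=42, n_classes=10)
target_model.set_weights(source_model.get_weights())   # transfer the knowledge
for layer in target_model.layers[:-1]:
    layer.trainable = False                             # freeze feature layers
target_model.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
# target_model.fit(X_target, y_target, ...)            # sparse target data
```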
|
Efficient error-controlled lossy compressors are becoming critical to the
success of today's large-scale scientific applications because of the
ever-increasing volume of data produced by the applications. In the past
decade, many lossless and lossy compressors have been developed with distinct
design principles for different scientific datasets in largely diverse
scientific domains. In order to support researchers and users assessing and
comparing compressors in a fair and convenient way, we establish a standard
compression assessment benchmark -- Scientific Data Reduction Benchmark
(SDRBench). SDRBench contains a vast variety of real-world scientific datasets
across different domains, summarizes several critical compression quality
evaluation metrics, and integrates many state-of-the-art lossy and lossless
compressors. We demonstrate evaluation results using SDRBench and summarize six
valuable takeaways that are helpful to the in-depth understanding of lossy
compressors.
|
The logistic linear mixed model (LLMM) is one of the most widely used
statistical models. Generally, Markov chain Monte Carlo algorithms are used to
explore the posterior densities associated with the Bayesian LLMMs. Polson,
Scott and Windle's (2013) Polya-Gamma data augmentation (DA) technique can be
used to construct full Gibbs (FG) samplers for the LLMMs. Here, we develop
efficient block Gibbs (BG) samplers for Bayesian LLMMs using the Polya-Gamma DA
method. We compare the FG and BG samplers in the context of a real data
example, as the correlation between the fixed effects and the random effects
changes as well as when the dimensions of the design matrices vary. These
numerical examples demonstrate superior performance of the BG samplers over the
FG samplers. We also derive conditions guaranteeing geometric ergodicity of the
BG Markov chain when the popular improper uniform prior is assigned on the
regression coefficients, and proper or improper priors are placed on the
variance parameters of the random effects. This theoretical result has
important practical implications as it justifies the use of asymptotically
valid Monte Carlo standard errors for Markov chain based estimates of the
posterior quantities.
|
We consider bivariate polynomials over the skew field of quaternions, where
the indeterminates commute with all coefficients and with each other. We
analyze existence of univariate factorizations, that is, factorizations with
univariate linear factors. A necessary condition for existence of univariate
factorizations is factorization of the norm polynomial into a product of
univariate polynomials. This condition is, however, not sufficient. Our central
result states that univariate factorizations exist after multiplication with a
suitable univariate real polynomial as long as the necessary factorization
condition is fulfilled. We present an algorithm for computing this real
polynomial and a corresponding univariate factorization. If a univariate
factorization of the original polynomial exists, a suitable input of the
algorithm produces a constant multiplication factor, thus giving an a
posteriori condition for existence of univariate factorizations. Some
factorizations obtained in this way are of interest in mechanism science. We
present an example of a curious closed-loop mechanism with eight revolute
joints.
|
We deal with the construction of linear connections associated with second
order ordinary differential equations with and without first order constraints.
We use a novel method allowing glueing of submodule covariant derivatives to
produce new, closed form expressions for the Massa-Pagani connection and our
extension of it to the constrained case.
|
Speaker segmentation consists in partitioning a conversation between one or
more speakers into speaker turns. While it is usually addressed as the late
combination of three sub-tasks (voice activity detection, speaker change
detection, and overlapped speech detection), we propose to train an end-to-end
segmentation model that performs it directly. Inspired by the original
end-to-end neural speaker
diarization approach (EEND), the task is modeled as a multi-label
classification problem using permutation-invariant training. The main
difference is that our model operates on short audio chunks (5 seconds) but at
a much higher temporal resolution (every 16ms). Experiments on multiple speaker
diarization datasets conclude that our model can be used with great success on
both voice activity detection and overlapped speech detection. Our proposed
model can also be used as a post-processing step, to detect and correctly
assign overlapped speech regions. Relative diarization error rate improvement
over the best considered baseline (VBx) reaches 17% on AMI, 13% on DIHARD 3,
and 13% on VoxConverse.
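A minimal sketch of the permutation-invariant training objective mentioned above, for frame-level multi-label speaker activities; the tensor layout and the plain BCE criterion are assumptions.

```python
import itertools
import torch
import torch.nn.functional as F

def pit_bce_loss(pred, target):
    """Permutation-invariant BCE for multi-label speaker activity.
    pred: sigmoid outputs in [0, 1], shape (batch, frames, speakers);
    target: 0/1 reference activities of the same shape. The loss is the
    minimum over all assignments of predicted to reference speakers."""
    n_spk = pred.shape[-1]
    losses = []
    for perm in itertools.permutations(range(n_spk)):
        bce = F.binary_cross_entropy(pred[..., list(perm)], target,
                                     reduction="none")
        losses.append(bce.mean(dim=(1, 2)))     # per-utterance loss
    return torch.stack(losses, dim=0).min(dim=0).values.mean()
```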
|
Interpreting the environmental, behavioural and psychological data from
in-home sensory observations and measurements can provide valuable insights
into the health and well-being of individuals. The presence of neuropsychiatric
and psychological symptoms in people with dementia has a significant impact on
their well-being and disease prognosis. Agitation in people with dementia can
be due to many reasons, such as pain or discomfort, medical reasons such as the
side effects of a medicine, communication problems, and the environment. This
paper discusses a model for analysing the risk of agitation in people with
dementia and how in-home monitoring data can support it. We propose a semi-supervised
model which combines a self-supervised learning model and a Bayesian ensemble
classification. We train and test the proposed model on a dataset from a
clinical study. The dataset was collected from sensors deployed in 96 homes of
patients with dementia. The proposed model outperforms the state-of-the-art
models in recall and f1-score values by 20%. The model also indicates better
generalisability compared to the baseline models.
|
With the rise of the "big data" phenomenon in recent years, data is coming in
many different complex forms. One example of this is multi-way data that come
in the form of higher-order tensors such as coloured images and movie clips.
Although there has been a recent rise in models for looking at the simple case
of three-way data in the form of matrices, there is a relative paucity of
higher-order tensor variate methods. The most common tensor distribution in the
literature is the tensor variate normal distribution; however, its use can be
problematic if the data exhibit skewness or outliers. Herein, we develop four
skewed tensor variate distributions which to our knowledge are the first skewed
tensor distributions to be proposed in the literature, and are able to
parameterize both skewness and tail weight. Properties and parameter estimation
are discussed, and real and simulated data are used for illustration.
|
We consider the additional entropy production (EP) incurred by a fixed
quantum or classical process on some initial state $\rho$, above the minimum EP
incurred by the same process on any initial state. We show that this additional
EP, which we term the "mismatch cost of $\rho$", has a universal
information-theoretic form: it is given by the contraction of the relative
entropy between $\rho$ and the least-dissipative initial state $\varphi$ over
time. We derive versions of this result for integrated EP incurred over the
course of a process, for trajectory-level fluctuating EP, and for instantaneous
EP rate. We also show that mismatch cost for fluctuating EP obeys an integral
fluctuation theorem. Our results demonstrate a fundamental relationship between
"thermodynamic irreversibility" (generation of EP) and "logical
irreversibility" (inability to know the initial state corresponding to a given
final state). We use this relationship to derive quantitative bounds on the
thermodynamics of quantum error correction and to propose a
thermodynamically-operationalized measure of the logical irreversibility of a
quantum channel. Our results hold for both finite and infinite dimensional
systems, and generalize beyond EP to many other thermodynamic costs, including
nonadiabatic EP, free energy loss, and entropy gain.
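In symbols, writing $\mathcal{E}$ for the fixed process and $\varphi$ for the least-dissipative initial state (notation ours), the integrated version of the result reads
$$ \sigma(\rho) \;=\; \sigma(\varphi) + \big[\, D(\rho\,\|\,\varphi) - D(\mathcal{E}(\rho)\,\|\,\mathcal{E}(\varphi)) \,\big], $$
where $D(\cdot\,\|\,\cdot)$ is the relative entropy and the bracketed term is the mismatch cost of $\rho$; it is nonnegative by the data-processing inequality, so $\sigma(\rho)\geq\sigma(\varphi)$.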
|
We consider two-dimensional Schroedinger equations with honeycomb potentials
and slow time-periodic forcing of the form: $$i\psi_t (t,x) =
H^\varepsilon(t)\psi=\left(H^0+2i\varepsilon A (\varepsilon t) \cdot \nabla
\right)\psi,\quad H^0=-\Delta +V (x) .$$ The unforced Hamiltonian, $H^0$, is
known to generically have Dirac (conical) points in its band spectrum. The
evolution under $H^\varepsilon(t)$ of {\it band limited Dirac wave-packets}
(spectrally localized near the Dirac point) is well-approximated on large time
scales ($t\lesssim \varepsilon^{-2+}$) by an effective time-periodic Dirac
equation with a gap in its quasi-energy spectrum. This quasi-energy gap is
typical of many reduced models of time-periodic (Floquet) materials and plays a
role in conclusions drawn about the full system: conduction vs. insulation,
topological vs. non-topological bands. Much is unknown about the nature of the
quasi-energy spectrum of the original time-periodic Schroedinger equation, and it
is believed that no such quasi-energy gap occurs. In this paper, we explain how
to transfer quasi-energy gap information about the effective Dirac dynamics to
conclusions about the full Schroedinger dynamics. We introduce the notion of an
{\it effective quasi-energy gap}, and establish its existence in the
Schroedinger model. In the current setting, an effective quasi-energy gap is an
interval of quasi-energies which does not support modes with large spectral
projection onto band-limited Dirac wave-packets. The notion of effective
quasi-energy gap is a physically relevant relaxation of the strict notion of
quasi-energy spectral gap; if a system is tuned to drive or measure at momenta
and energies near the Dirac point of $H^0$, then the resulting modes in the
effective quasi-energy gap will only be weakly excited and detected.
|
We prove that, for a Poisson vertex algebra V, the canonical injective
homomorphism of the variational cohomology of V to its classical cohomology is
an isomorphism, provided that V, viewed as a differential algebra, is an
algebra of differential polynomials in finitely many differential variables.
This theorem is one of the key ingredients in the computation of vertex algebra
cohomology. For its proof, we introduce the sesquilinear Hochschild and
Harrison cohomology complexes and prove a vanishing theorem for the symmetric
sesquilinear Harrison cohomology of the algebra of differential polynomials in
finitely many differential variables.
|
We address the problem of analysing the complexity of concurrent programs
written in Pi-calculus. We are interested in parallel complexity, or span,
understood as the execution time in a model with maximal parallelism. A type
system for parallel complexity has been recently proposed by Baillot and
Ghyselen but it is too imprecise for non-linear channels and cannot analyse
some concurrent processes. Aiming for a more precise analysis, we design a type
system which builds on the concepts of sized types and usages. The new variant
of usages we define accounts for the various ways a channel is employed and
relies on time annotations to track under which conditions processes can
synchronize. We prove that a type derivation for a process provides an upper
bound on its parallel complexity.
|
Deep neural networks are vulnerable to small input perturbations known as
adversarial attacks. Inspired by the fact that these adversaries are
constructed by iteratively minimizing the confidence of a network for the true
class label, we propose the anti-adversary layer, aimed at countering this
effect. In particular, our layer generates an input perturbation in the
opposite direction of the adversarial one and feeds the classifier a perturbed
version of the input. Our approach is training-free and theoretically
supported. We verify the effectiveness of our approach by combining our layer
with both nominally and robustly trained models and conduct large-scale
experiments from black-box to adaptive attacks on CIFAR10, CIFAR100, and
ImageNet. Our layer significantly enhances model robustness while coming at no
cost on clean accuracy.
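A minimal sketch of the layer's mechanism as described above: a few signed-gradient steps that decrease the loss of the currently predicted class, i.e. the opposite direction of an adversarial attack. The step count and step size are assumptions.

```python
import torch

def anti_adversary(model, x, k=2, alpha=0.15):
    """Anti-adversary layer sketch: perturb the input so as to *increase*
    the model's confidence in its current prediction, then classify the
    perturbed input. Training-free: only forward/backward passes of a
    fixed model are used."""
    with torch.no_grad():
        y_pred = model(x).argmax(dim=1)        # current (pseudo-label) prediction
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(k):
        loss = torch.nn.functional.cross_entropy(model(x + delta), y_pred)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend the loss: anti-adversarial
        delta.grad.zero_()
    return model(x + delta)                     # classify the shifted input
```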
|
Coded caching is an emerging technique to reduce the data transmission load
during peak-traffic times. In such a scheme, each file in the data center
or library is usually divided into a number of packets to pursue a low
broadcasting rate based on the designed placements at each user's cache.
However, the implementation complexity of this scheme increases as the number
of packets increases. It is crucial to design a scheme with a small
subpacketization level, while maintaining a relatively low transmission rate.
It is known that the design of caches in users (i.e., the placement phase) and
broadcasting (i.e., the delivery phase) can be unified in one matrix, namely
the placement delivery array (PDA). This paper proposes a novel PDA
construction by selecting proper orthogonal arrays (POAs), which generalizes
some known constructions but with a more flexible memory size. Based on the
proposed PDA construction, an effective transformation is further proposed to
enable a coded caching scheme to have a smaller subpacketization level.
Moreover, two new coded caching schemes with the coded placement are
considered. It is shown that the proposed schemes yield a lower
subpacketization level and transmission rate over some existing schemes.
|
The Newcomb-Benford law, also known as the first-digit law, gives the
probability distribution associated with the first digit of a dataset, so that,
for example, the first significant digit has a probability of $30.1$ % of being
$1$ and $4.58$ % of being $9$. This law can be extended to the second and next
significant digits. This article presents an introduction to the discovery of
the law and its derivation from the scale-invariance property, as well as some
applications and examples. Additionally, a simple model of a
Markov process inspired by scale invariance is proposed. Within this model, it
is proved that the probability distribution irreversibly converges to the
Newcomb-Benford law, in analogy to the irreversible evolution toward
equilibrium of physical systems in thermodynamics and statistical mechanics.
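The first-digit probabilities quoted above follow from $P(d) = \log_{10}(1 + 1/d)$; the snippet below reproduces them and illustrates the scale-invariance property on synthetic broad-range data (the data model is our own illustration, not the article's Markov process).

```python
import numpy as np

# Newcomb-Benford first-digit law: P(d) = log10(1 + 1/d), giving 30.1% for
# d = 1 and 4.58% for d = 9, as quoted in the text.
P = {d: np.log10(1 + 1 / d) for d in range(1, 10)}
print({d: round(100 * p, 2) for d, p in P.items()})

def first_digits(x):
    """First significant digit of each (positive) entry of x."""
    return (x / 10 ** np.floor(np.log10(x))).astype(int)

# Scale invariance: rescaling broad-range data leaves the histogram unchanged.
x = np.exp(np.random.default_rng(0).uniform(0, 20, 100000))
for scale in (1.0, 3.7):
    h = np.bincount(first_digits(scale * x), minlength=10)[1:] / x.size
    print(np.round(h, 3))   # both rows approximate the Benford probabilities
```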
|
Nature-inspired algorithms are commonly used for solving various optimization
problems. In the past few decades, researchers have proposed a large number of
nature-inspired algorithms, some of which have proved to be very efficient
compared to other classical optimization methods. A young researcher attempting
to solve a problem using nature-inspired algorithms is bogged down by the
plethora of proposals that exist today. Not every algorithm is suited for all
kinds of problems; some score over others. In this paper, an attempt has been
made to summarize various leading research proposals that shall pave the way
for any new entrant to easily understand
the journey so far. Here, we classify the nature-inspired algorithms as natural
evolution based, swarm intelligence based, biological based, science based and
others. In this survey, widely acknowledged nature-inspired algorithms, namely
ACO, ABC, EAM, FA, FPA, GA, GSA, JAYA, PSO, SFLA, TLBO and WCA, have been
studied. The purpose of this review is to present an exhaustive analysis of
various nature-inspired algorithms based on their source of inspiration, basic
operators, control parameters, features, variants and areas of application where
these algorithms have been successfully applied. It shall also assist in
identifying and short listing the methodologies that are best suited for the
problem.
|
Let $\mathbb{F}_q$ be a finite field of order $q$. In this paper, we study
the distribution of rectangles in a given set in $\mathbb{F}_q^2$. More
precisely, for any $0<\delta\le 1$, we prove that there exists an integer
$q_0=q_0(\delta)$ with the following property: if $q\ge q_0$ and $A$ is a
multiplicative subgroup of $\mathbb{F}^*_q$ with $|A|\ge q^{2/3}$, then any set
$S\subset \mathbb{F}_q^2$ with $|S|\ge \delta q^2$ contains at least $\gg
\frac{|S|^4|A|^2}{q^5}$ rectangles with side-lengths in $A$. We also consider
the case of rectangles with one fixed side-length and the other in a
multiplicative subgroup $A$.
|
Usually, managers or technical leaders in software projects assign issues
manually. This task becomes more complex the more detailed the issue
description is. This complexity can also make the process more error-prone
(misassignments) and more time-consuming. In the literature, many studies aim to
address this problem by using machine learning strategies. Although there is no
specific solution that works for all companies, experience reports are useful
to guide the choices in industrial auto-assignment projects. This paper
presents an industrial initiative conducted in a global electronics company
that aims to minimize the time spent and the errors that can arise in the issue
assignment process. As main contributions, we present a literature review, an
industrial report comparing different algorithms, and lessons learned during
the project.
|
We study astrometric residuals from a simultaneous fit of Hyper Suprime-Cam
images. We aim to characterize these residuals and study the extent to which
they are dominated by atmospheric contributions for bright sources. We use
Gaussian process interpolation, with a correlation function (kernel), measured
from the data, to smooth and correct the observed astrometric residual field.
We find that Gaussian process interpolation with a von K\'arm\'an kernel allows
us to reduce the covariances of astrometric residuals for nearby sources by
about one order of magnitude, from 30 mas$^2$ to 3 mas$^2$ at angular scales of
~1 arcmin, and to halve the r.m.s. residuals. These reductions using Gaussian
process interpolation are similar to recent results published with the Dark
Energy Survey dataset. We are then able to detect the small static astrometric
residuals due to Hyper Suprime-Cam sensor effects. We discuss how the
Gaussian process interpolation of astrometric residuals impacts galaxy shape
measurements, in particular in the context of cosmic shear analyses at the
Rubin Observatory Legacy Survey of Space and Time.
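For reference, here is a sketch of a von Kármán correlation function of the kind used for the Gaussian process interpolation above, normalized so that $C(0)$ equals the variance; the functional form is one common convention, and the amplitude and outer scale are placeholders, not the values measured from the data.

```python
import numpy as np
from scipy.special import gamma, kv

def von_karman_kernel(r, sigma2=30.0, L=10.0):
    """Von Karman correlation function C(r) ~ sigma2*(r/L)^(5/6)*K_{5/6}(r/L),
    normalized so that C(0) = sigma2. Here r and the outer scale L are in
    arcmin and sigma2 in mas^2 (placeholder values)."""
    r = np.atleast_1d(r).astype(float)
    u = np.where(r > 0, r / L, 1.0)         # dummy value avoids 0 in kv
    norm = 2 ** (1 / 6) / gamma(5 / 6)      # from lim x^v K_v(x) = 2^(v-1)Gamma(v)
    c = sigma2 * norm * u ** (5 / 6) * kv(5 / 6, u)
    return np.where(r > 0, c, sigma2)       # exact variance at zero separation

print(von_karman_kernel([0.0, 1.0, 10.0, 60.0]))  # decay with separation
```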
|
The system of two nonlinear coupled oscillators is studied. As a particular
case, this system of equations reduces to the Duffing oscillator, which has many
applications for describing physical processes. It is well known that the
inverse scattering transform is one of the most powerful methods for solving
the Cauchy problems of partial differential equations. To solve the Cauchy
problem for nonlinear differential equations we can use the Lax pair
corresponding to this equation. The Lax pair for an ordinary differential
equation or for a system of ordinary differential equations allows us to find
first integrals, which in turn allow us to settle the question of integrability
for differential equations. In this report we present the Lax pair for the system
of coupled oscillators. Using the Lax pair we get two first integrals for the
system of equations. The considered system of equations can be also reduced to
the fourth-order ordinary differential equation and the Lax pair can be used
for the ordinary differential equation of fourth order. Some special cases of
the system of equations are considered.
|
This paper continues the program initiated in the works by the authors [60],
[61] and [62] and by the authors with Li [51] and [52] to establish higher
order Poincar\'e-Sobolev, Hardy-Sobolev-Maz'ya, Adams and Hardy-Adams
inequalities on real hyperbolic spaces using the method of Helgason-Fourier
analysis on the hyperbolic spaces. The aim of this paper is to establish such
inequalities on the Siegel domains and complex hyperbolic spaces. Firstly, we
prove a factorization theorem for the operators on the complex hyperbolic space
which is closely related to Geller's operator, as well as the CR invariant
differential operators on the Heisenberg group and CR sphere. Secondly, by
using, among other things, the Kunze-Stein phenomenon on a closed linear group
$SU(1,n)$ and Helgason-Fourier analysis techniques on the complex hyperbolic
spaces, we establish the Poincar\'e-Sobolev, Hardy-Sobolev-Maz'ya inequality on
the Siegel domain $\mathcal{U}^{n}$ and the unit ball
$\mathbb{B}_{\mathbb{C}}^{n}$. Finally, we establish the sharp Hardy-Adams
inequalities and sharp Adams type inequalities on Sobolev spaces of any
positive fractional order on the complex hyperbolic spaces. The factorization
theorem we proved is of its independent interest in the Heisenberg group and CR
sphere and CR invariant differential operators therein.
|
Low Earth orbit (LEO) satellite constellations rely on inter-satellite links
(ISLs) to provide global connectivity. However, one significant challenge is to
establish and maintain inter-plane ISLs, which support communication between
different orbital planes. This is due to the fast movement of the
infrastructure and to the limited computation and communication capabilities on
the satellites. In this paper, we make use of antenna arrays with either Butler
matrix beam switching networks or digital beam steering to establish the
inter-plane ISLs in a LEO satellite constellation. Furthermore, we present a
greedy matching algorithm to establish inter-plane ISLs with the objective of
maximizing the sum of rates. This is achieved by sequentially selecting the
pairs, switching or pointing the beams and, finally, setting the data rates.
Our results show that, by selecting an update period of 30 seconds for the
matching, reliable communication can be achieved throughout the constellation,
where the impact of interference in the rates is less than 0.7 % when compared
to orthogonal links, even for relatively small antenna arrays. Furthermore,
doubling the number of antenna elements increases the rates by around one order
of magnitude.
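A sketch of the greedy matching step described above, abstracted away from beam switching and rate setting: pairs across two orbital planes are chosen sequentially by highest achievable rate, each satellite used at most once. The rate matrix is an input assumption.

```python
import numpy as np

def greedy_isl_matching(rates):
    """Greedy inter-plane ISL matching: rates[i, j] is the achievable rate
    between satellite i in one plane and satellite j in another. Pairs are
    selected sequentially by highest rate, approximating the max-sum-rate
    matching; re-run at every update period (e.g., 30 s) as the geometry
    changes."""
    rates = rates.astype(float).copy()
    pairs = []
    while np.isfinite(rates).any() and rates.max() > 0:
        i, j = np.unravel_index(np.argmax(rates), rates.shape)
        pairs.append((i, j, rates[i, j]))
        rates[i, :] = -np.inf     # satellite i is now matched
        rates[:, j] = -np.inf     # satellite j is now matched
    return pairs

print(greedy_isl_matching(np.random.default_rng(0).random((6, 6))))
```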
|
Given a random real quadratic field from $\{ \mathbb{Q}(\sqrt{p}\,) ~|~ p
\text{ primes} \}$, the conjectural probability $\mathbb{P}(h=q)$ that it has
class number $q$ is given for all positive odd integers $q$. Some related
conjectures of the Cohen-Lenstra heuristic are given here as corollaries. These
results suggest that the set of real quadratic number fields may have some
natural hierarchical structures.
|
Dispersionless bands -- \emph{flatbands} -- provide an excellent testbed for
novel physical phases due to the fine-tuned character of flatband tight-binding
Hamiltonians. The accompanying macroscopic degeneracy makes any perturbation
relevant, no matter how small. For short-range hoppings flatbands support
compact localized states, which allowed the development of systematic flatband
generators in $d=1$ dimension in Phys. Rev. B {\bf 95} 115135 (2017) and Phys.
Rev. B {\bf 99} 125129 (2019). Here we extend this generator approach to $d=2$
dimensions. The \emph{shape} of a compact localized state turns into an
important additional flatband classifier. This allows us to obtain analytical
solutions for classes of $d=2$ flatband networks and to re-classify and
re-obtain known ones, such as the checkerboard, kagome, Lieb and Tasaki
lattices. Our generator can be straightforwardly generalized to three lattice
dimensions as well.
|
In this article we introduce the notion of a Ribaucour partial tube and use
it to derive several applications. These are based on a characterization of
Ribaucour partial tubes as the immersions of a product of two manifolds into a
space form such that the distributions given by the tangent spaces of the
factors are orthogonal to each other with respect to the induced metric, are
invariant under all shape operators, and one of them is spherical. Our first
application is a classification of all hypersurfaces with dimension at least
three of a space form that carry a spherical foliation of codimension one,
extending previous results by Dajczer, Rovenski and the second author for the
totally geodesic case. We proceed to prove a general decomposition theorem for
immersions of product manifolds, which extends several related results. Other
main applications concern the class of hypersurfaces of $\mathbb{R}^{n+1}$ that
are of Enneper type, that is, hypersurfaces that carry a family of lines of
curvature, corresponding to a simple principal curvature, whose orthogonal
$(n-1)$-dimensional distribution is integrable and whose leaves are contained
in hyperspheres or affine hyperplanes of $\mathbb{R}^{n+1}$. We show how
Ribaucour partial tubes in the sphere can be used to describe all
$n$-dimensional hypersurfaces of Enneper type for which the leaves of the
$(n-1)$-dimensional distribution are contained in affine hyperplanes of
$\mathbb{R}^{n+1}$, and then show how a general hypersurface of Enneper type
can be constructed in terms of a hypersurface in the latter class. We give an
explicit description of some special hypersurfaces of Enneper type, among which
are natural generalizations of the so-called Joachimsthal surfaces.
|
This paper proposes a method to relax the conditional independence assumption
of connectionist temporal classification (CTC)-based automatic speech
recognition (ASR) models. We train a CTC-based ASR model with auxiliary CTC
losses in intermediate layers in addition to the original CTC loss in the last
layer. During both training and inference, each prediction generated in an
intermediate layer is added to the input of the next layer, conditioning the
prediction of the last layer on those intermediate predictions. Our method is
easy to implement and retains the merits of CTC-based ASR: a simple model
architecture and fast decoding speed. We conduct experiments on three different
ASR corpora. Our proposed method improves a standard CTC model significantly
(e.g., more than 20 % relative word error rate reduction on the WSJ corpus)
with little computational overhead. Moreover, for the TEDLIUM2 corpus and the
AISHELL-1 corpus, it achieves a comparable performance to a strong
autoregressive model with beam search, but the decoding speed is at least 30
times faster.
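A minimal PyTorch sketch of this conditioning mechanism is given below: intermediate CTC posteriors are projected back to the encoder dimension and added to the next layer's input. The module names, dimensions and choice of conditioned layers are illustrative assumptions, not the authors' exact architecture.

    import torch
    import torch.nn as nn

    class SelfConditionedCTCEncoder(nn.Module):
        def __init__(self, layers, d_model, vocab_size, conditioned_ids=(3, 6, 9)):
            super().__init__()
            self.layers = nn.ModuleList(layers)          # e.g. Transformer blocks
            self.ctc_head = nn.Linear(d_model, vocab_size)
            self.back_proj = nn.Linear(vocab_size, d_model)
            self.conditioned_ids = set(conditioned_ids)  # layers with auxiliary CTC

        def forward(self, x):
            inter_logits = []
            for i, layer in enumerate(self.layers):
                x = layer(x)
                if i in self.conditioned_ids:
                    logits = self.ctc_head(x)            # auxiliary CTC loss attaches here
                    inter_logits.append(logits)
                    # Condition the following layers on the intermediate prediction.
                    x = x + self.back_proj(logits.softmax(dim=-1))
            return self.ctc_head(x), inter_logits        # final + intermediate CTC outputs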
|
Physical-layer key generation (PKG) can generate symmetric keys between two
communication ends based on the reciprocal uplink and downlink channels. By
smartly reconfiguring the radio signal propagation, intelligent reflecting
surface (IRS) is able to improve the secret key rate of PKG. However, existing
works on IRS-assisted PKG concentrate on single-antenna wireless
networks. This paper therefore investigates the problem of PKG in an IRS-assisted
multiple-input single-output (MISO) system, aiming to maximize the secret
key rate by optimally designing the IRS passive beamforming. First, we analyze
the correlation between channel state information (CSI) of eavesdropper and
legitimate ends and derive the expression of the upper bound of secret key rate
under passive eavesdropping attack. Then, an optimal algorithm for designing
IRS reflecting coefficients based on Semi-Definite Relaxation (SDR) and Taylor
expansion is proposed to maximize the secret key rate. Numerical results show
that our optimal IRS-assisted PKG scheme achieves a much higher secret key
rate than two benchmark schemes.
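The SDR step can be prototyped with cvxpy: lift the unit-modulus reflecting coefficients v into V = v v^T, drop the rank constraint, and solve the resulting semidefinite program. The sketch below uses a real-valued placeholder objective matrix R for simplicity; the exact secret-key-rate objective and the Taylor-expansion step of the paper are omitted.

    import cvxpy as cp
    import numpy as np

    n = 8                                    # number of IRS elements (illustrative)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    R = A @ A.T                              # placeholder PSD objective matrix

    V = cp.Variable((n, n), symmetric=True)  # relaxation of v v^T with |v_i| = 1
    constraints = [V >> 0, cp.diag(V) == 1]  # PSD plus unit-modulus diagonal
    prob = cp.Problem(cp.Maximize(cp.trace(R @ V)), constraints)
    prob.solve()

    # Recover a feasible reflecting vector by Gaussian randomization.
    w, U = np.linalg.eigh(V.value)
    v = np.sign(U @ (np.sqrt(np.maximum(w, 0)) * rng.standard_normal(n)))
    print(prob.value, v)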
|
We investigate the $\lambda$-Hilbert transform, the $\lambda$-Poisson integral and the
conjugate $\lambda$-Poisson integral on the atomic Hardy space in the Dunkl
setting and establish a new version of Paley type inequality which extends the
results in \cite{F} and \cite{ZhongKai Li 3}.
|
Arc-locally semicomplete and arc-locally in-semicomplete digraphs were
introduced by Bang-Jensen as a common generalization of both semicomplete and
semicomplete bipartite digraphs in 1993. Later, Bang-Jensen (2004),
Galeana-Sanchez and Goldfeder (2009) and Wang and Wang (2009) provided a
characterization of strong arc-locally semicomplete digraphs. In 2009, Wang and
Wang characterized strong arc-locally in-semicomplete digraphs. In 2012,
Galeana-Sanchez and Goldfeder provided a characterization of all arc-locally
semicomplete digraphs which generalizes some results by Bang-Jensen. In this
paper, we characterize the structure of arbitrary connected arc-locally (out)
in-semicomplete digraphs and arbitrary connected arc-locally semicomplete
digraphs.
|
We study Markov population processes on large graphs, with the local state
transition rates of a single vertex being a linear function of its neighborhood.
A simple way to approximate such processes is by a system of ODEs called the
homogeneous mean-field approximation (HMFA). Our main result is showing that
HMFA is guaranteed to be the large graph limit of the stochastic dynamics on a
finite time horizon if and only if the graph sequence is quasi-random. An explicit
error bound is given, of order $\frac{1}{\sqrt{N}}$ plus the largest
discrepancy of the graph. For Erd\H{o}s-R\'{e}nyi and random regular graphs we
show an error bound of order the inverse square root of the average degree. In
general, a diverging average degree is shown to be a necessary condition for the
HMFA to be accurate. Under special conditions, some of these results also apply
to more detailed types of approximation such as the inhomogeneous mean-field
approximation (IHMFA). We pay special attention to epidemic applications such
as the SIS process.
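For the SIS example, the HMFA reduces the stochastic dynamics to a single ODE for the infected fraction; the sketch below integrates it with SciPy. The rate parameters and mean degree are arbitrary illustrative values, not ones from the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    beta, gamma, k_mean = 0.05, 0.8, 25.0   # infection rate, recovery rate, mean degree

    def hmfa_sis(t, y):
        x = y[0]                            # fraction of infected vertices
        return [beta * k_mean * x * (1.0 - x) - gamma * x]

    sol = solve_ivp(hmfa_sis, (0.0, 30.0), [0.01], max_step=0.1)
    print(sol.y[0, -1])                     # approaches 1 - gamma/(beta*k_mean) = 0.36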
|
Downscaling aims to link the behaviour of the atmosphere at fine scales to
properties measurable at coarser scales, and has the potential to provide high
resolution information at a lower computational and storage cost than numerical
simulation alone. This is especially appealing for targeting convective scales,
which are at the edge of what is possible to simulate operationally. Since
convective scale weather has a high degree of independence from larger scales,
a generative approach is essential. We here propose a statistical method for
downscaling moist variables to convective scales using conditional Gaussian
random fields, with an application to wet bulb potential temperature (WBPT)
data over the UK. Our model uses adaptive covariance estimation to capture
the variable spatial properties at convective scales. We further propose a
method for validation, which has historically been a challenge for
generative models.
|
Quantum spins of mesoscopic size are a well-studied playground for
engineering non-classical states. If the spin represents the collective state
of an ensemble of qubits, its non-classical behavior is linked to entanglement
between the qubits. In this work, we report on an experimental study of
entanglement in dysprosium's electronic spin. Its ground state, of angular
momentum $J=8$, can formally be viewed as a set of $2J$ qubits symmetric upon
exchange. To access entanglement properties, we partition the spin by optically
coupling it to an excited state $J'=J-1$, which removes a pair of qubits in a
state defined by the light polarization. Starting with the well-known W and
squeezed states, we extract the concurrence of qubit pairs, which quantifies
their non-classical character. We also directly demonstrate entanglement
between the 14- and 2-qubit subsystems via an increase in entropy upon
partition. In a complementary set of experiments, we probe decoherence of a
state prepared in the excited level $J'=J+1$ and interpret spontaneous emission
as a loss of a qubit pair in a random state. This allows us to contrast the
robustness of pairwise entanglement of the W state with the fragility of the
coherence involved in a Schr\"odinger cat state. Our findings open up the
possibility to engineer novel types of entangled atomic ensembles, in which
entanglement occurs within each atom's electronic spin as well as between
different atoms.
|
High quality (HQ) video services occupy large portions of the total bandwidth
and are among the main causes of congestion at network bottlenecks. Since video
is resilient to data loss, throwing away less important video packets can ease
network congestion with minimal damage to video quality and free up bandwidth
for other data flows. Frame type is one of the features that can be used to
determine the importance of video packets, but this information is stored in
the packet payload. Due to limited processing power of devices in high
throughput/speed networks, data encryption and user credibility issues, it is
costly for the network to find the frame type of each packet. Therefore, a fast
and reliable standalone method to recognize video packet types at network level
is desired. This paper proposes a method to model the structure of live video
streams in a network node which results in determining the frame type of each
packet. It enables the network nodes to mark and if need be to discard less
important video packets ahead of congestion, and therefore preserve video
quality and free up bandwidth for more important packet types. The method does
not need to read the IP layer payload and uses only the packet header data for
decisions. Experimental results indicate that, while dropping packets based on
the predicted type degrades video quality by 0.5-3 dB relative to using the true
type, it yields a 7-20 dB improvement over dropping packets randomly.
|
Improving wind turbine efficiency is essential for reducing the costs of
energy production. The highly nonlinear dynamics of the wind turbines and their
uncertain operating conditions have posed many challenges for their control
methods. In this work, a robust control strategy based on sliding mode and
adaptive fuzzy disturbance observer is proposed for speed tracking in a
variable speed wind turbine. First, the nonlinear mathematical model that
describes the dynamics of the variable speed wind turbine is derived. This
nonlinear model is then used to derive the control methodology and to find
stability and robustness conditions. The control approach is designed to track
the optimal wind speed that causes maximum energy extraction. The stability
condition was verified using Lyapunov stability theory. A simulation study
was conducted to verify the method, and a comparative analysis was used to
measure its effectiveness. The results showed a high tracking ability and
robustness of the developed methodology. Moreover, higher power extraction was
observed when compared to a classical control method.
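A minimal flavour of the sliding-mode part of such a controller, for a first-order rotor-speed model with a bounded unknown disturbance, is sketched below. The plant, gains and reference trajectory are toy assumptions, and the adaptive fuzzy disturbance observer of the paper is not reproduced.

    import numpy as np

    J, k_sw, dt = 10.0, 6.0, 1e-3            # rotor inertia, switching gain, step (toy values)
    omega, history = 0.0, []
    for n in range(int(30.0 / dt)):
        t = n * dt
        omega_ref = 2.0 + 0.5 * np.sin(0.5 * t)        # speed tracking an optimal tip-speed ratio
        omega_ref_dot = 0.25 * np.cos(0.5 * t)
        d = 0.8 * np.sin(3.0 * t)                      # bounded unmodelled torque
        s = omega - omega_ref                          # sliding surface for 1st-order plant
        # Equivalent control plus a smoothed switching term (tanh limits chattering).
        u = J * (omega_ref_dot - 5.0 * s) - k_sw * np.tanh(s / 0.05)
        omega += dt * (u + d) / J                      # plant: J * domega/dt = u + d
        history.append((t, omega, omega_ref))
    print(max(abs(w - r) for _, w, r in history[-1000:]))  # small steady tracking error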
|
Many modern systems for speaker diarization, such as the recently-developed
VBx approach, rely on clustering of DNN speaker embeddings followed by
resegmentation. Two problems with this approach are that the DNN is not
directly optimized for this task, and the parameters need significant retuning
for different applications. We have recently presented progress in this
direction with a Leave-One-Out Gaussian PLDA (LGP) clustering algorithm and an
approach to training the DNN such that embeddings directly optimize performance
of this scoring method. This paper presents a new two-pass version of this
system, where the second pass uses finer time resolution to significantly
improve overall performance. For the Callhome corpus, we achieve the first
published error rate below 4% without any task-dependent parameter tuning. We
also show significant progress towards a robust single solution for multiple
diarization tasks.
|
We use 3D fully kinetic particle-in-cell simulations to study the occurrence
of magnetic reconnection in a simulation of decaying turbulence created by
anisotropic counter-propagating low-frequency Alfv\'en waves consistent with
critical-balance theory. We observe the formation of small-scale
current-density structures such as current filaments and current sheets as well
as the formation of magnetic flux ropes as part of the turbulent cascade. The
large magnetic structures present in the simulation domain retain the initial
anisotropy while the small-scale structures produced by the turbulent cascade
are less anisotropic. To quantify the occurrence of reconnection in our
simulation domain, we develop a new set of indicators based on intensity
thresholds to identify reconnection events in which both ions and electrons are
heated and accelerated in 3D particle-in-cell simulations. According to the
application of these indicators, we identify the occurrence of reconnection
events in the simulation domain and analyse one of these events in detail. The
event is related to the reconnection of two flux ropes, and the associated ion
and electron exhausts exhibit a complex three-dimensional structure. We study
the profiles of plasma and magnetic-field fluctuations recorded along
artificial-spacecraft trajectories passing near and through the reconnection
region. Our results suggest the presence of particle heating and acceleration
related to small-scale reconnection events within magnetic flux ropes produced
by the anisotropic Alfv\'enic turbulent cascade in the solar wind. These events
are related to current structures of order a few ion inertial lengths in size.
|
We propose the spatial-temporal aggregated predictor (STAP) modeling
framework to address measurement and estimation issues that arise when
assessing the relationship between built environment features (BEF) and health
outcomes. Many BEFs can be mapped as point locations and thus traditional
exposure metrics are based on the number of features within a pre-specified
spatial unit. The size of the spatial unit--or spatial scale--that is most
appropriate for a particular health outcome is unknown and its choice
inextricably impacts the estimated health effect. A related issue is the lack
of knowledge of the temporal scale--or the length of exposure time that is
necessary for the BEF to render its full effect on the health outcome. The
proposed STAP model enables investigators to estimate both the spatial and
temporal scales for a given BEF in a data-driven fashion, thereby providing a
flexible solution for measuring the relationship between outcomes and spatial
proximity to point-referenced exposures. Simulation studies verify the validity
of our method for estimating the scales as well as the association between
availability of BEFs and health outcomes. We apply this method to estimate the
spatial-temporal association between supermarkets and BMI using data from the
Multi-Ethnic Atherosclerosis Study, demonstrating the method's applicability in
cohort studies.
|
In a rectangular domain, a boundary-value problem is considered for a
mixed-type equation with a regularized Caputo-like counterpart of hyper-Bessel
differential operator and the bi-ordinal Hilfer's fractional derivative. Using
the method of separation of variables and the Laplace transform, the unique
solvability of the considered problem is established. Moreover, we find the
explicit solution of initial problems for a differential equation with the
bi-ordinal Hilfer's derivative and regularized Caputo-like counterpart of the
hyper-Bessel differential operator with the non-zero starting point.
|
Prediction of human actions in social interactions has important applications
in the design of social robots or artificial avatars. In this paper, we model
human interaction generation as a discrete multi-sequence generation problem
and present SocialInteractionGAN, a novel adversarial architecture for
conditional interaction generation. Our model builds on a recurrent
encoder-decoder generator network and a dual-stream discriminator. This
architecture allows the discriminator to jointly assess the realism of
interactions and that of individual action sequences. Within each stream a
recurrent network operating on short subsequences endows the output signal with
local assessments, better guiding the forthcoming generation. Crucially,
contextual information on interacting participants is shared among agents and
reinjected in both the generation and the discriminator evaluation processes.
We show that the proposed SocialInteractionGAN succeeds in producing highly
realistic action sequences of interacting people, comparing favorably to a
variety of recurrent and convolutional discriminator baselines. Evaluations
are conducted using modified Inception Score and Fr{\'e}chet Inception Distance
metrics that we specifically design for discrete sequential generated data.
The distribution of generated sequences is shown to approach closely that of
real data. In particular our model properly learns the dynamics of interaction
sequences, while exploiting the full range of actions.
|
We consider, in general terms, the possible parameter space of thermal dark
matter candidates. We assume that the dark matter particle is fundamental and
was in thermal equilibrium in a hidden sector with a temperature $T'$, which
may differ from that of the Standard Model temperature, $T$. The candidates lie
in a region in the $T'/T$ vs. $m_{\rm dm}$ plane, which is bounded by both
model-independent theoretical considerations and observational constraints. The
former consists of limits from dark matter candidates that decoupled when
relativistic (the relativistic floor) and from those that decoupled when
non-relativistic with the largest annihilation cross section allowed by
unitarity (the unitarity wall), while the latter concerns big bang
nucleosynthesis ($N_{\rm eff}$ ceiling) and free streaming. We present three
simplified dark matter scenarios, demonstrating concretely how each fits into
the domain.
|
We design a multi-purpose environment for autonomous UAVs offering different
communication services in a variety of application contexts (e.g., wireless
mobile connectivity services, edge computing, data gathering). We develop the
environment, based on the OpenAI Gym framework, in order to simulate different
characteristics of real operational environments, and we adopt Reinforcement
Learning to generate policies that maximize some desired performance. The
quality of the resulting policies is compared with a simple baseline to
evaluate the system and to derive guidelines for adopting this technique in different
use cases. The main contribution of this paper is a flexible and extensible
OpenAI Gym environment, which allows one to generate, evaluate, and compare
policies for autonomous multi-drone systems in multi-service applications. This
environment allows for comparative evaluation and benchmarking of different
approaches in a variety of application contexts.
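A skeleton of such an environment in the classic OpenAI Gym API might look as follows. The spaces, dynamics and reward are placeholder assumptions for illustration, not the environment released with the paper.

    import gym
    import numpy as np
    from gym import spaces

    class MultiDroneEnv(gym.Env):
        """Toy multi-UAV service environment (illustrative, not the paper's)."""

        def __init__(self, n_drones=3, area=100.0):
            super().__init__()
            self.n_drones, self.area = n_drones, area
            # One 2D velocity command per drone.
            self.action_space = spaces.Box(-1.0, 1.0, shape=(n_drones, 2), dtype=np.float32)
            # Observation: drone positions in the service area.
            self.observation_space = spaces.Box(0.0, area, shape=(n_drones, 2), dtype=np.float32)

        def reset(self):
            self.pos = np.random.uniform(0, self.area, (self.n_drones, 2)).astype(np.float32)
            return self.pos

        def step(self, action):
            self.pos = np.clip(self.pos + action, 0.0, self.area)
            # Placeholder reward: encourage drones to spread over the area.
            reward = float(np.mean(np.linalg.norm(self.pos - self.pos.mean(0), axis=1)))
            done = False
            return self.pos, reward, done, {}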
|
The discovery of superconductivity in the infinite-layer nickelates has
opened new perspectives in the context of quantum materials. We analyze, via
first-principles calculations, the electronic properties of La$_2$NiO$_3$F --
the first single-layer T'-type nickelate -- and compare these properties with
those of related nickelates and isostructural cuprates. We find that
La$_2$NiO$_3$F is essentially a single-band system with a Fermi surface
dominated by the Ni-3$d_{x^2-y^2}$ states with an exceptional 2D character. In
addition, the hopping ratio is similar to that of the highest $T_c$ cuprates
and there is a remarkable $e_g$ splitting together with a charge transfer
energy of 3.6~eV. According to these descriptors, along with a comparison to
Nd$_2$CuO$_4$, we thus indicate single-layer T'-type nickelates of this class
as very promising analogs of cuprate-like physics while keeping distinct
Ni$^{1+}$ features.
|
We conducted an investigation to find when a mistake was introduced in a
widely accessed Internet document, namely the RFC index. To our great surprise,
we discovered that it may go unnoticed for a very long period, namely more
than twenty-six years. This raises some questions about what it means to have
open access, and about the meaning of Linus's law that "given enough eyeballs,
all bugs are shallow".
|
In this paper, we reformulate the Bakry-\'Emery curvature on a weighted graph
in terms of the smallest eigenvalue of a rank one perturbation of the so-called
curvature matrix using the Schur complement. This new viewpoint allows us to show
various curvature function properties in a very conceptual way. We show that
the curvature, as a function of the dimension parameter, is analytic, strictly
monotone increasing and strictly concave until a certain threshold after which
the function is constant. Furthermore, we derive the curvature of the Cartesian
product using the crucial observation that the curvature matrix of the product
is the direct sum of each component. Our approach to the curvature functions of
graphs can be employed to establish analogous results for the curvature
functions of weighted Riemannian manifolds. Moreover, as an application, we
confirm a conjecture (in the general weighted case) that the
curvature does not decrease under certain graph modifications.
|
For the first time, based both on experimental facts and on our theoretical
considerations, we show that Fermi systems with flat bands should be tuned with
the superconducting state. Experimental measurements on magic-angle twisted
bilayer graphene of the Fermi velocity $V_F$ as a function of the temperature
$T_c$ of the superconducting phase transition have revealed $V_F\propto T_c\propto
1/N_s(0)$, where $N_s(0)$ is the density of states at the Fermi level. We show
that the high-$T_c$ compounds $\rm Bi_2Sr_2CaCu_2O_{8+x}$ exhibit the same
behavior. Such an observation is a challenge to theories of high-$T_c$
superconductivity, since $V_F$ is negatively correlated with $T_c$, for
$T_c\propto 1/V_F\propto N_s(0)$. We show that the theoretical idea of forming
flat bands in strongly correlated Fermi systems can explain this behavior and
other experimental data collected on both $\rm Bi_2Sr_2CaCu_2O_{8+x}$ and
twisted bilayer graphene. Our findings place stringent constraints on theories
describing the nature of high-$T_c$ superconductivity and the deformation of
the flat band by the superconducting phase transition.
|
This paper presents a novel, non-standard set of vector instruction types for
exploring custom SIMD instructions in a softcore. The new types allow
simultaneous access to a relatively high number of operands, reducing the
instruction count where applicable. Additionally, a high-performance
open-source RISC-V (RV32 IM) softcore is introduced, optimised for exploring
custom SIMD instructions and streaming performance. By providing instruction
templates for instruction development in HDL/Verilog, efficient FPGA-based
instructions can be developed with few low-level lines of code. In order to
improve custom SIMD instruction performance, the softcore's cache hierarchy is
optimised for bandwidth, such as with very wide blocks for the last-level
cache. The approach is demonstrated on example memory-intensive applications on
an FPGA. Although the exploration is based on the softcore, the goal is to
provide a means to experiment with advanced SIMD instructions which could be
loaded in future CPUs that feature reconfigurable regions as custom
instructions. Finally, we provide some insights on the challenges and
effectiveness of such future micro-architectures.
|
The law of centripetal force governing the motion of celestial bodies in
eccentric conic sections was established and thoroughly investigated by
Sir Isaac Newton in his Principia Mathematica. Yet its profound implications for
the understanding of such motions are still evolving. In a paper to the Royal
Academy of Science, Sir William Hamilton demonstrated that this law underlies
the circular character of hodographs for Kepler orbits, a fact which was the
object of later research and exploration by Richard Feynman and many other
authors [1]. In effect, a minute examination of the geometry of elliptic
trajectories reveals interesting geometric properties and relations which,
combined with the law of conservation of angular momentum, lead eventually,
and without any recourse to differential equations, to the equation of the
trajectory and to the derivation of the equation of its corresponding hodograph.
In this respect, and for the sake of founding the approach on a solid basis, I
devised two mathematical theorems: one concerning the existence of geometric
means, and the other related to establishing the parametric equation of an
off-center circle. Together with other simple arguments, these ultimately give
rise to the inverse square law of force that governs the motion of bodies in
elliptic trajectories, as well as to the equation of their inherent circular
hodographs.
|
3D point-clouds and 2D images are different visual representations of the
physical world. While human vision can understand both representations,
computer vision models designed for 2D image and 3D point-cloud understanding
are quite different. Our paper investigates the potential for transferability
between these two representations by empirically examining whether this
approach works, what factors affect the transfer performance, and how to make
it work even better. We discovered that we can indeed use the same neural net
model architectures to understand both images and point-clouds. Moreover, we
can transfer pretrained weights from image models to point-cloud models with
minimal effort. Specifically, based on a 2D ConvNet pretrained on an image
dataset, we can transfer the image model to a point-cloud model by
\textit{inflating} 2D convolutional filters to 3D then finetuning its input,
output, and optionally normalization layers. The transferred model can achieve
competitive performance on 3D point-cloud classification, indoor and driving
scene segmentation, even beating a wide range of point-cloud models that adopt
task-specific architectures and use a variety of tricks.
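The inflation step has a compact PyTorch expression: each 2D kernel is replicated along the new depth axis and rescaled so activations keep their magnitude. The helper below is a sketch of that standard I3D-style trick under assumed conventions, not the authors' released code.

    import torch.nn as nn

    def inflate_conv(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
        """Build a Conv3d whose weights replicate a pretrained Conv2d along depth."""
        conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                           kernel_size=(depth, *conv2d.kernel_size),
                           stride=(1, *conv2d.stride),
                           padding=(depth // 2, *conv2d.padding),
                           bias=conv2d.bias is not None)
        w2d = conv2d.weight.data                      # shape (out, in, kH, kW)
        # Repeat along the new depth dimension and rescale to preserve activations.
        conv3d.weight.data.copy_(w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
        if conv2d.bias is not None:
            conv3d.bias.data.copy_(conv2d.bias.data)
        return conv3d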
|
We present a study of the environment of 27 z=3-4.5 bright quasars from the
MUSE Analysis of Gas around Galaxies (MAGG) survey. With medium-depth MUSE
observations (4 hours on target per field), we characterise the effects of
quasars on their surroundings by studying simultaneously the properties of
extended gas nebulae and Lyalpha emitters (LAEs) in the quasar host haloes. We
detect extended (up to ~ 100 kpc) Lyalpha emission around all MAGG quasars,
finding a very weak redshift evolution between z=3 and z=6. By stacking the
MUSE datacubes, we confidently detect extended emission of CIV and only
marginally detect extended HeII up to ~40 kpc, implying that the gas is metal
enriched. Moreover, our observations show a significant overdensity of LAEs
within 300 km/s from the quasar systemic redshifts estimated from the nebular
emission. The luminosity functions and equivalent width distributions of these
LAEs show similar shapes with respect to LAEs away from quasars suggesting that
the Lyalpha emission of the majority of these sources is not significantly
boosted by the quasar radiation or other processes related to the quasar
environment. Within this framework, the observed LAE overdensities and our
kinematic measurements imply that bright quasars at z=3-4.5 are hosted by
haloes in the mass range ~ 10^{12.0}-10^{12.5} Msun.
|
Although deep neural networks are successful for many tasks in the speech
domain, their high computational and memory costs make it difficult to directly
deploy high-performance neural network systems on low-resource embedded devices.
There are several mechanisms to reduce the size of neural networks, e.g.,
parameter pruning, parameter quantization, etc.
This paper focuses on how to apply binary neural networks to the task of
speaker verification. The proposed binarization of training parameters can
largely maintain the performance while significantly reducing storage space
requirements and computational costs. Experimental results show that, after
binarizing the Convolutional Neural Network, the ResNet34-based network
achieves an EER of around 5% on the Voxceleb1 test set and even
outperforms the traditional real-valued network on the text-dependent dataset
Xiaole, while achieving a 32x memory saving.
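Binarizing the weights typically means a sign function in the forward pass and a straight-through estimator in the backward pass. A minimal PyTorch version of that standard trick (not necessarily the paper's exact scheme) is shown below.

    import torch

    class BinarizeSTE(torch.autograd.Function):
        """Sign binarization with a straight-through gradient estimator."""

        @staticmethod
        def forward(ctx, w):
            ctx.save_for_backward(w)
            return torch.sign(w)            # weights become {-1, +1}

        @staticmethod
        def backward(ctx, grad_out):
            (w,) = ctx.saved_tensors
            # Pass gradients through only where |w| <= 1 (clipped STE).
            return grad_out * (w.abs() <= 1.0).to(grad_out.dtype)

    w = torch.randn(4, requires_grad=True)
    loss = BinarizeSTE.apply(w).sum()
    loss.backward()
    print(w.grad)                           # nonzero only where |w| <= 1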
|
In this paper, we present a model-free learning-based control scheme for the
soft snake robot to improve its contact-aware locomotion performance in a
cluttered environment. The control scheme includes two cooperative controllers:
A bio-inspired controller (C1) that controls both the steering and velocity of
the soft snake robot, and an event-triggered regulator (R2) that controls the
steering of the snake in anticipation of obstacle contacts and during contact.
The inputs from the two controllers are composed as the input to a Matsuoka CPG
network to generate smooth and rhythmic actuation inputs to the soft snake. To
enable stable and efficient learning with two controllers, we develop a
game-theoretic process, fictitious play, to train C1 and R2 with a shared
potential-field-based reward function for goal tracking tasks. The proposed
approach is tested and evaluated in the simulator and shows significant
improvement of locomotion performance in the obstacle-based environment
compared to two baseline controllers.
|
The primary objective of this paper is the study of different instances of
the elliptic Stark conjectures of Darmon, Lauder and Rotger, in a situation
where the elliptic curve attached to the modular form $f$ has split
multiplicative reduction at $p$ and the arithmetic phenomena are especially
rich. For that purpose, we resort to the principle of improved $p$-adic
$L$-functions and study their $\mathcal L$-invariants. We further interpret
these results in terms of derived cohomology classes coming from the setting of
diagonal cycles, showing that the same $\mathcal L$-invariant which arises in
the theory of $p$-adic $L$-functions also governs the arithmetic of Euler
systems. Thus, we can reduce, in the split multiplicative situation, the
conjecture of Darmon, Lauder and Rotger to a more familiar statement about
higher order derivatives of a triple product $p$-adic $L$-function at a point
lying inside the region of classical interpolation, in the realm of the more
well-known exceptional zero conjectures.
|
The allocation of venture capital is one of the primary factors determining
who takes products to market, which startups succeed or fail, and as such who
gets to participate in the shaping of our collective economy. While gender
diversity contributes to startup success, most funding is allocated to
male-only entrepreneurial teams. In the wake of COVID-19, 2020 is seeing a
notable decline in funding to female and mixed-gender teams, giving rise to an
urgent need to study and correct the longstanding gender bias in startup
funding allocation. We conduct an in-depth data analysis of over 48,000
companies on Crunchbase, comparing funding allocation based on the gender
composition of founding teams. Detailed findings across diverse industries and
geographies are presented. Further, we construct machine learning models to
predict whether startups will reach an equity round, revealing the surprising
finding that the CEO's gender is the primary determining factor for attaining
funding. Policy implications for this pressing issue are discussed.
|
Macroscopic realism (MR) is the notion that a time-evolving system possesses
definite properties, irrespective of past or future measurements. Quantum
mechanical theories can, however, produce violations of MR. Most research to
date has focused on a single set of conditions for MR, the Leggett-Garg
inequalities (LGIs), and on a single data set, the "standard data set", which
consists of single-time averages and second-order correlators of a dichotomic
variable Q for three times. However, if such conditions are all satisfied, then
where is the quantum behaviour? In this paper, we provide an answer to this
question by considering expanded data sets obtained from finer-grained
measurements and MR conditions on those sets. We consider three different
situations in which there are violations of MR that go undetected by the
standard LGIs. First, we explore higher-order LGIs on a data set involving
third- and fourth-order correlators, using a spin-1/2 and spin-1 system.
Second, we explore the pentagon inequalities (PIs) and a data set consisting of
all possible averages and second-order correlators for measurements of Q at
five times. Third, we explore the LGIs for a trichotomic variable and
measurements made with a trichotomic operator to, again, identify violations
for a spin-1 system beyond those seen with a single dichotomic variable. We
also explore the regimes in which combinations of two- and three-time LGIs can
be satisfied and violated in a spin-1 system, extending recent work. We discuss
the possible experimental implementation of all the above results.
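For orientation, the simplest three-time LGI K = C12 + C23 - C13 <= 1 is already violated by a freely precessing spin-1/2, where Cij = cos(omega (tj - ti)). The check below reproduces the textbook maximum K = 3/2; it is only a baseline for the finer-grained conditions studied here.

    import numpy as np

    theta = np.linspace(0.0, np.pi, 2000)          # omega * tau for equally spaced times
    K = 2.0 * np.cos(theta) - np.cos(2.0 * theta)  # C12 + C23 - C13
    i = K.argmax()
    print(theta[i], K[i])                          # ~pi/3 and 1.5 > 1: MR violated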
|
The carrier transport and the motion of a vortex system in the mixed state of
the electron-doped high-temperature superconductor Nd2-xCexCuO4 were
investigated. To study the anisotropy of the galvanomagnetic effects of the highly
layered NdCeCuO system, we have synthesized Nd2-xCexCuO4/SrTiO3 epitaxial films
with non-standard orientations of the c-axis and the conductive CuO2 layers
relative to the substrate. The variation of the angle of inclination of the
magnetic field B relative to the current J reveals that the behavior of both
the in-plane r_xx(B) and the out-of-plane r_xy(B) resistivities in the mixed state
is mainly determined by the component of B perpendicular to J, which indicates
the crucial role of the Lorentz force F_L~[JxB] in driving the motion of
Josephson vortices across the CuO2 layers.
|
We consider the problem of finding an inductive construction, based on vertex
splitting, of triangulated spheres with a fixed number of additional edges
(braces). We show that for any positive integer $b$ there is such an inductive
construction of triangulations with $b$ braces, having finitely many base
graphs. In particular we establish a bound for the maximum size of a base graph
with $b$ braces that is linear in $b$. In the case that $b=1$ or $2$ we
determine the list of base graphs explicitly. Using these results we show that
doubly braced triangulations are (generically) minimally rigid in two distinct
geometric contexts arising from a hypercylinder in $\mathbb{R}^4$ and a class
of mixed norms on $\mathbb{R}^3$.
|
The narrow escape problem is a first-passage problem concerned with randomly
moving particles in a physical domain that are trapped by absorbing surface traps
(windows), where the measure of the traps is small compared to the domain size.
The expected time required for a particle to escape is defined as the mean
first passage time (MFPT), which satisfies the Poisson partial differential
equation subject to a mixed Dirichlet-Neumann boundary condition. The primary
objective of this work is to directly simulate multiple particles undergoing
Brownian motion in a three-dimensional sphere with boundary traps, to compute
MFPT values by averaging Brownian escape times, and to compare the results
with asymptotic results obtained by solving the Poisson PDE problem. A
comprehensive study of the simulation results shows that the difference between
the Brownian and asymptotic results for the escape times mostly does not exceed
$1\%$. This comparison in some sense validates the narrow escape PDE problem
itself as an approximation (averaging) of the multiple physical Brownian motion
runs. This work also predicts how many single-particle simulations are required
to match the predicted asymptotic averaged MFPT values. The next objective of
this work is to study the dynamics of Brownian particles near the boundary by
estimating the average percentage of time spent by a Brownian particle near the
domain boundary for both anisotropic and isotropic diffusion. It is shown that
the Brownian particles spend more time in the boundary layer than predicted by
the relative volume of the boundary layer, with the effect being more pronounced
in a narrow layer near the spherical wall. It is also shown that taking
anisotropic diffusion into account yields longer times spent by a particle near
the boundary, and smaller escape times, than those predicted by the isotropic
diffusion model.
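A bare-bones version of such a Brownian escape simulation, for a single absorbing polar-cap trap on the unit sphere, might look as follows. The time step, reflection rule and trap geometry are simplifying assumptions for illustration.

    import numpy as np

    def escape_time(D=1.0, R=1.0, eps=0.1, dt=1e-5, rng=None):
        """Time for one Brownian particle to hit a polar cap of half-angle eps."""
        if rng is None:
            rng = np.random.default_rng()
        x, t = np.zeros(3), 0.0
        while True:
            x += np.sqrt(2.0 * D * dt) * rng.standard_normal(3)
            r = np.linalg.norm(x)
            if r >= R:
                if np.arccos(np.clip(x[2] / r, -1, 1)) < eps:
                    return t                # absorbed by the trap
                x *= (2.0 * R - r) / r      # simple radial reflection off the wall
            t += dt

    times = [escape_time(rng=np.random.default_rng(i)) for i in range(50)]
    print(np.mean(times))                   # crude MFPT estimate to compare with asymptotics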
|
This paper considers the narrow escape problem of a Brownian particle within
a three-dimensional Riemannian manifold under the influence of the force field.
We compute an asymptotic expansion of mean sojourn time for Brownian particles.
As an auxiliary result, we obtain the singular structure for the restricted
Neumann Green's function which may be of independent interest.
|
I propose the use of two magnetic Wollaston prisms to correct the linear
Larmor phase aberration of MIEZE, introduced by the transverse size of the
sample. With this approach, the resolution function of MIEZE can be optimized
for any scattering angle of interest. The optimum magnetic fields required for
the magnetic Wollaston prisms depend only on the scattering angle and the
frequency of the RF flippers and they are independent of the neutron wavelength
and beam divergence, which makes it suitable for both pulsed and constant
wavelength neutron sources.
|
We consider $n$ independent $p$-dimensional Gaussian vectors with covariance
matrix having Toeplitz structure. We test that these vectors have independent
components against a stationary distribution with sparse Toeplitz covariance
matrix, and also select the support of non-zero entries. We assume that the
non-zero values can occur in the recent past (time-lag less than $p/2$). We
build test procedures that combine sum and scan-type procedures, but are
computationally fast, and show their non-asymptotic behaviour under both one-sided
(only positive correlations) and two-sided alternatives. We also
exhibit a selector of significant lags and bound the Hamming-loss risk of the
estimated support. These results can be extended to the case of nearly Toeplitz
covariance structure and to sub-Gaussian vectors. Numerical results illustrate
the excellent behaviour of both the test procedures and the support selectors:
the larger the dimension $p$, the faster the rates.
|
A graph $G$ is called interval colorable if it has a proper edge coloring
with colors $1,2,3,\dots$ such that the colors of the edges incident to every
vertex of $G$ form an interval of integers. Not all graphs are interval
colorable; in fact, rather few families have been proved to admit interval
colorings. In this paper we introduce and investigate a new notion, the
interval coloring thickness of a graph $G$, denoted
${\theta_{\mathrm{int}}}(G)$, which is the minimum number of interval colorable
edge-disjoint subgraphs of $G$ whose union is $G$.
Our investigation is motivated by scheduling problems with compactness
requirements, in particular, problems whose solution may consist of several
schedules, but where each schedule must not contain any waiting periods or idle
times for any of the involved parties. We first prove that every connected properly
$3$-edge colorable graph with maximum degree $3$ is interval colorable, and
using this result, we deduce an upper bound on ${\theta_{\mathrm{int}}}(G)$ for
general graphs $G$. We demonstrate that this upper bound can be improved in the
case when $G$ is bipartite, planar or complete multipartite and consider some
applications in timetabling.
|
CO$_2$ dissociation stimulated by vibrational excitation in non-equilibrium
discharges has drawn much attention. Ns-discharges are known for their
highly non-equilibrium conditions. It is therefore of interest to investigate
CO$_2$ excitation in such discharges. In this paper, we demonstrate the
ability to monitor the time evolution of CO$_2$ ro-vibrational excitation
with a well-selected wavelength window around 2289.0 cm$^{-1}$ and a single CW
quantum cascade laser (QCL) with both high accuracy and temporal resolution.
The rotational and vibrational temperatures for both the symmetric and the
asymmetric modes of CO$_2$ in the afterglow of CO$_2$ + He ns-discharge were
measured with a temporal resolution of 1.5 $\mu$s. The non-thermal feature and
the preferential excitation of the asymmetric stretch mode of CO$_2$ were
experimentally observed, with a peak temperature of $T_{v3, max}$ = 966 $\pm$
1.5 K, $T_{v12, max}$ = 438.4 $\pm$ 1.2 K and $T_{rot}$ = 334.6 $\pm$ 0.6 K
reached at 3 $\mu$s after the nanosecond pulse. In the following relaxation
process, an exponential decay with a time constant of 69 $\mu$s was observed
for the asymmetric stretch (001) state, consistent with the dominant
deexcitation mechanism due to VT transfer with He and deexcitation on the wall.
Furthermore, a synchronous oscillation of the gas temperature and the total
pressure was also observed and can be explained by two-line thermometry and an
adiabatic process. The period of the oscillation and its dependence on the gas
components is consistent with a standing acoustic wave excited by the
ns-discharge.
|
Let $G=(V,E)$ be a graph and $P\subseteq V$ a set of points. Two points are
mutually visible if there is a shortest path between them containing no further
point of $P$. $P$ is a mutual-visibility set if its points are pairwise mutually
visible. The mutual-visibility number of $G$ is the size of any largest
mutual-visibility set. In this paper we begin the study of this new
invariant and of mutual-visibility sets in undirected graphs. We introduce the
mutual-visibility problem, which asks for a mutual-visibility set of
size larger than a given number. We show that this problem is NP-complete,
whereas, to check whether a given set of points is a mutual-visibility set is
solvable in polynomial time. Then we study mutual-visibility sets and
mutual-visibility numbers on special classes of graphs, such as block graphs,
trees, grids, tori, complete bipartite graphs, cographs. We also provide some
relations of the mutual-visibility number of a graph with other invariants.
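The polynomial-time membership check mentioned above can be phrased directly: a pair u, v of P is mutually visible iff deleting the other points of P leaves the graph distance between u and v unchanged. A straightforward networkx sketch of this check (one possible implementation, not the paper's) is:

    import networkx as nx

    def is_mutual_visibility_set(G, P):
        """Check whether P is a mutual-visibility set of G in polynomial time."""
        P = list(P)
        for i, u in enumerate(P):
            for v in P[i + 1:]:
                d = nx.shortest_path_length(G, u, v)
                # Remove all other points of P; a shortest u-v path avoiding them
                # exists iff the distance in the reduced graph is still d.
                H = G.subgraph(set(G) - (set(P) - {u, v}))
                if not nx.has_path(H, u, v) or nx.shortest_path_length(H, u, v) != d:
                    return False
        return True

    G = nx.grid_2d_graph(4, 4)
    print(is_mutual_visibility_set(G, [(0, 0), (0, 3), (3, 0), (3, 3)]))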
|
Binary metallic phosphide Nb2P5 belongs to a technologically important class
of materials. Quite surprisingly, a large number of physical properties of
Nb2P5, including elastic properties and their anisotropy, acoustic, electronic
(DOS, charge density distribution, electron density difference),
thermo-physical, bonding characteristics, and optical properties have not been
investigated at all. In the present work we have explored all these properties
in detail for the first time, employing a density functional theory based
first-principles method. Nb2P5 is found to be a mechanically stable,
elastically anisotropic compound with weakly brittle character. The bonding
among the atoms is dominated by covalent and ionic contributions with a small
signature of metallic character. The compound possesses a high level of
machinability. Nb2P5 is a moderately hard compound. The band structure
calculations reveal metallic conduction with a large electronic density of
states at the Fermi level. Calculated values of different thermal properties
indicate that Nb2P5 has the potential to be used as a thermal barrier coating
material. The energy dependent optical parameters show close agreement with the
underlying electronic band structure. The optical absorption and reflectivity
spectra and the static index of refraction of Nb2P5 show that the compound
holds promise for use in the optoelectronic device sector. Unlike the notable
anisotropy in elastic and mechanical properties, the optical parameters are
found to be almost isotropic.
|
Bayesian nonparametric hierarchical priors are highly effective in providing
flexible models for latent data structures exhibiting sharing of information
between and across groups. Most prominent is the Hierarchical Dirichlet Process
(HDP), and its subsequent variants, which model latent clustering between and
across groups. The HDP may be viewed as a more flexible extension of Latent
Dirichlet Allocation models (LDA), and has been applied to, for example, topic
modelling, natural language processing, and datasets arising in health-care. We
focus on analogous latent feature allocation models, where the data structures
correspond to multisets or unbounded sparse matrices. The fundamental
development in this regard is the Hierarchical Indian Buffet process (HIBP),
which utilizes a hierarchy of Beta processes over J groups, where each group
generates binary random matrices, reflecting within group sharing of features,
according to beta-Bernoulli IBP priors. To encompass HIBP versions of
non-Bernoulli extensions of the IBP, we introduce hierarchical versions of
the general spike-and-slab IBP. We provide explicit novel descriptions of the
marginal, posterior and predictive distributions of the HIBP and its
generalizations which allow for exact sampling and simpler practical
implementation. We highlight common structural properties of these processes
and establish relationships to existing IBP type and related models arising in
the literature. Examples of potential applications may involve topic models,
Poisson factorization models, random count matrix priors and neural network
models.
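As background for the beta-Bernoulli building block, the classic Indian Buffet Process prior on binary feature matrices can be simulated in a few lines. This is the standard textbook sampler, not the hierarchical spike-and-slab construction introduced here.

    import numpy as np

    def sample_ibp(n_rows, alpha, seed=0):
        """Sample a binary feature matrix Z from an IBP(alpha) prior."""
        rng = np.random.default_rng(seed)
        counts, rows = [], []                            # counts[k] = #rows owning feature k
        for i in range(1, n_rows + 1):
            row = [rng.random() < m / i for m in counts]  # reuse popular features
            counts = [m + int(z) for m, z in zip(counts, row)]
            n_new = rng.poisson(alpha / i)                # Poisson(alpha/i) brand-new features
            counts += [1] * n_new
            rows.append(row + [True] * n_new)
        Z = np.zeros((n_rows, len(counts)), dtype=int)
        for i, row in enumerate(rows):
            Z[i, : len(row)] = row
        return Z

    print(sample_ibp(5, alpha=2.0))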
|