Modular transformations of string theory are shown to play a crucial role in
the discussion of discrete flavor symmetries in the Standard Model. They
include CP transformations and provide a unification of CP with traditional
flavor symmetries within the framework of the "eclectic flavor" scheme. The
unified flavor group is non-universal in moduli space and exhibits the
phenomenon of "Local Flavor Unification", where different sectors of the theory
(like quarks and leptons) can be subject to different flavor structures.
|
The cost of using a blockchain infrastructure as well as the time required to
search and retrieve information from it must be considered when designing a
decentralized application. In this work, we examine a comprehensive set of data
management approaches for Ethereum applications and assess the associated cost
in gas as well as the retrieval performance. More precisely, we analyze the
storage and retrieval of various-sized data, utilizing smart contract storage.
In addition, we study hybrid approaches by using IPFS and Swarm as storage
platforms along with Ethereum as a timestamping proof mechanism. Such schemes
are especially effective when large chunks of data have to be managed.
Moreover, we present methods for low-cost data handling in Ethereum, namely the
event-logs, the transaction payload, and the perhaps surprising exploitation of
unused function arguments. Finally, we evaluate these methods through a
comprehensive set of experiments.
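The hybrid off-chain-storage-plus-timestamping pattern described above can be sketched in a few lines: the bulky payload stays on IPFS or Swarm, and only a fixed-size digest is recorded on-chain as proof. A minimal illustration (SHA-256 is used here for simplicity, whereas Ethereum tooling typically uses Keccak-256, and the on-chain write itself is omitted):

```python
import hashlib

def content_digest(data: bytes) -> str:
    """Digest of an off-chain payload; this hex string is what a
    decentralized app would record on-chain as a timestamping proof."""
    return hashlib.sha256(data).hexdigest()

def verify_payload(data: bytes, onchain_digest: str) -> bool:
    """Re-hash the payload retrieved from IPFS/Swarm and compare it to
    the digest stored in the contract (or in an event log)."""
    return content_digest(data) == onchain_digest

payload = b"large document stored off-chain"
proof = content_digest(payload)   # 32-byte digest, cheap to store on-chain
assert verify_payload(payload, proof)
assert not verify_payload(b"tampered", proof)
```

The gas cost of storing the 32-byte digest is constant regardless of payload size, which is why such schemes pay off for large chunks of data.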
|
Air pollutants, such as particulate matter, negatively impact human health.
Most existing pollution monitoring techniques use stationary sensors, which are
typically sparsely deployed. However, real-world pollution distributions vary
rapidly with position and the visual effects of air pollution can be used to
estimate concentration, potentially at high spatial resolution. Accurate
pollution monitoring requires either densely deployed conventional point
sensors, at-a-distance vision-based pollution monitoring, or a combination of
both.
The main contribution of this paper is, to the best of our knowledge, the
first publicly available, high temporal and spatial resolution air
quality dataset containing simultaneous point sensor measurements and
corresponding images. The dataset enables, for the first time, high spatial
resolution evaluation of image-based air pollution estimation algorithms. It
contains PM2.5, PM10, temperature, and humidity data. We evaluate several
state-of-the-art vision-based PM concentration estimation algorithms on our
dataset and quantify the increase in accuracy resulting from higher point
sensor density and the use of images. We believe this dataset can enable
advances by other research teams working on air quality estimation.
Our dataset is available at
https://github.com/implicitDeclaration/HVAQ-dataset/tree/master.
|
In this paper, important concepts from finite group theory are translated to
localities, in particular to linking localities. Here localities, introduced
by Chermak, are group-like structures associated to fusion systems.
Linking localities (by Chermak also called proper localities) are special kinds
of localities which correspond to linking systems. Thus they contain the
algebraic information that is needed to study $p$-completed classifying spaces
of fusion systems as generalizations of $p$-completed classifying spaces of
finite groups.
Because of the group-like nature of localities, there is a natural notion of
partial normal subgroups. Given a locality $\mathcal{L}$ and a partial normal
subgroup $\mathcal{N}$ of $\mathcal{L}$, we show that there is a largest
partial normal subgroup $\mathcal{N}^\perp$ of $\mathcal{L}$ which, in a
certain sense, commutes elementwise with $\mathcal{N}$ and thus morally plays
the role of a "centralizer" of $\mathcal{N}$ in $\mathcal{L}$. This leads to a
nice notion of the generalized Fitting subgroup $F^*(\mathcal{L})$ of a linking
locality $\mathcal{L}$. Building on these results we define and study special
kinds of linking localities called regular localities. It turns out that there
is a theory of components of regular localities akin to the theory of
components of finite groups. The main concepts we introduce and work with in
the present paper (in particular $\mathcal{N}^\perp$ in the special case of
linking localities, $F^*(\mathcal{L})$, regular localities and components of
regular localities) were already introduced and studied in a preprint by
Chermak. However, we give a different and self-contained approach to the
subject where we reprove Chermak's theorems and also show several new results.
|
Double-descent curves in neural networks describe the phenomenon that the
generalisation error initially descends with increasing parameters, then grows
after reaching an optimal number of parameters which is less than the number of
data points, but then descends again in the overparameterised regime. Here we
use a neural network Gaussian process (NNGP) which maps exactly to a fully
connected network (FCN) in the infinite width limit, combined with techniques
from random matrix theory, to calculate this generalisation behaviour, with a
particular focus on the overparameterised regime. An advantage of our NNGP
approach is that the analytical calculations are easier to interpret. We argue
that neural network generalisation performance improves in the
overparameterised regime precisely because that is where the networks converge
to their equivalent Gaussian process.
|
Quantum computing has been attracting tremendous efforts in recent years. One
prominent application is to perform quantum simulations of electron
correlations in large molecules and solid-state materials, where orbital
degrees of freedom are crucial to quantitatively model electronic properties.
Electron orbitals, unlike quantum spins, obey crystal symmetries, making
atomic orbitals in optical lattices a natural candidate to emulate electron
orbitals. Here, we construct atom-orbital qubits by manipulating $s$- and
$d$-orbitals of an atomic Bose-Einstein condensate in an optical lattice.
Noise-resilient quantum gate operations are achieved by performing holonomic
quantum control, which admits geometrical protection. We find it is critical to
eliminate the orbital leakage error in the system. The gate robustness is
tested by varying the intensity of the laser forming the lattice. Our work
opens up wide opportunities for atom-orbital based quantum information
processing, of vital importance to programmable quantum simulations of
multi-orbital physics in molecules and quantum materials.
|
The x-vector architecture has recently achieved state-of-the-art results on
the speaker verification task. This architecture incorporates a central layer,
referred to as temporal pooling, which stacks statistical parameters of the
acoustic frame distribution. This work proposes to highlight the significant
effect of the temporal pooling content on the training dynamics and task
performance. An evaluation of different pooling layers, incorporating
different statistical measures of the frame distribution, is conducted. Notably, 3rd and
4th moment-based statistics (skewness and kurtosis) are also tested to complete
the usual mean and standard-deviation parameters. Our experiments show the
influence of the pooling layer content on speaker verification performance, as
well as on several classification tasks (speaker-, channel- or text-related),
and allow us to better reveal the presence of information external to the
speaker identity depending on the layer content.
|
We give some Korovkin-type theorems on convergence and estimates of rates of
approximations of nets of functions, satisfying suitable axioms, whose
particular cases are filter/ideal convergence, almost convergence and
triangular A-statistical convergence, where A is a non-negative summability
method. Furthermore, we give some applications to Mellin-type convolution and
bivariate Kantorovich-type discrete operators.
|
We construct an agent-based SEIR model to simulate COVID-19 spread at a
16000-student mostly non-residential urban university during the Fall 2021
Semester. We find that mRNA vaccine coverage above 80% makes it possible to
safely reopen to in-person instruction. If vaccine coverage is 100%, then our
model indicates that facemask use is not necessary. Our simulations with
vaccine coverage below 70% exhibit a right-skew for total infections over the
semester, which suggests that high levels of infection are not exceedingly
rare, with campus social connections being the main transmission route. Less effective
vaccines or incidence of new variants may require additional intervention such
as screening testing to reopen safely.
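The compartment flows underlying an SEIR model of this kind can be sketched deterministically. The sketch below is a minimal forward-Euler illustration; the rates `beta`, `sigma`, `gamma` are illustrative assumptions, not the paper's calibrated agent-based parameters:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, n, dt=1.0):
    """One forward-Euler step of the SEIR compartment flows."""
    new_e = beta * s * i / n * dt   # susceptible -> exposed
    new_i = sigma * e * dt          # exposed -> infectious
    new_r = gamma * i * dt          # infectious -> recovered
    return s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r

# A 16000-person population seeded with one infectious case.
s, e, i, r = 15999.0, 0.0, 1.0, 0.0
for _ in range(120):                # roughly one semester, in days
    s, e, i, r = seir_step(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1, n=16000)
```

An agent-based model replaces these aggregate flows with per-individual contact events, which is what produces the right-skewed outcome distributions the abstract describes.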
|
A big challenge in current biology is to understand the exact
self-organization mechanism underlying complex multi-physics coupling
developmental processes. With first-principles multiscale computations, from
subcellular gene expression to cell population dynamics, we show that cell
cycles can self-organize into periodic stripes in the development of E. coli
populations from one single cell, relying on the moving graded nutrient
concentration profile, which provides directing positional information for
cells to keep their cycle phases in place. As a result, the statistical cell
cycle distribution within the population is observed to collapse to a
universal function and shows scale invariance. Depending on the
radial distribution mode of genetic oscillations in cell populations, a
transition between gene patterns is achieved. When an inhibitor-inhibitor gene
network is subsequently activated by a gene-oscillatory network, cell
populations with zebra stripes can be established, with the positioning
precision of cell-fate-specific domains influenced by the speed of the cells'
free motion. Such information may provide important implications for understanding
relevant dynamic processes of multicellular systems, such as biological
development.
|
The Perdew-Zunger self-interaction correction (PZ-SIC) improves the
performance of density functional approximations (DFAs) for the properties
that involve significant self-interaction error (SIE), as in stretched bond
situations, but overcorrects for equilibrium properties where SIE is
insignificant. This overcorrection is often reduced by LSIC, local scaling of
the PZ-SIC to the local spin density approximation (LSDA). Here we propose a
new scaling factor to use in an LSIC-like approach that satisfies an
additional important constraint: the correct coefficient of atomic number $Z$
in the asymptotic expansion of the exchange-correlation (xc) energy for atoms.
LSIC and LSIC+ are scaled by functions of the iso-orbital indicator
$z_\sigma$, which distinguishes one-electron regions from many-electron
regions. LSIC+ applied to LSDA works better for many equilibrium properties
than LSDA-LSIC and the Perdew, Burke, and Ernzerhof (PBE) generalized gradient
approximation (GGA), and almost as well as the strongly constrained and
appropriately normed (SCAN) meta-GGA. LSDA-LSIC and LSDA-LSIC+, however, both fail to predict interaction
energies involving weaker bonds, in sharp contrast to their earlier successes.
It is found that more than one set of localized SIC orbitals can yield a nearly
degenerate energetic description of the same multiple covalent bond, suggesting
that a consistent chemical interpretation of the localized orbitals requires a
new way to choose their Fermi orbital descriptors. Applying a locally
scaled-down SIC to functionals beyond LSDA requires a gauge transformation of
the functional's energy density. The resulting SCAN-sdSIC, evaluated on
SCAN-SIC total and localized orbital densities, leads to an acceptable
description of many equilibrium properties including the dissociation energies
of weak bonds.
|
A sizable $\cos 4\phi$ azimuthal asymmetry in exclusive di-pion production
near the $\rho^0$ resonance peak in ultraperipheral heavy ion collisions has
recently been reported by the STAR collaboration. We show that both the elliptic gluon
Wigner distribution and final state soft photon radiation can give rise to this
azimuthal asymmetry. The fact that the QED effect alone severely underestimates
the observed asymmetry might signal the existence of the nontrivial correlation
in quantum phase distribution of gluons.
|
We study random compact subsets of R^3 which can be described as "random
Menger sponges". We use those random sets to construct a pair of compact sets A
and B in R^3 which are of the same positive measure, such that A can be covered
by finitely many translates of B, B can be covered by finitely many translates
of A, and yet A and B are not equidecomposable. Furthermore, we construct the
first example of a compact subset of R^3 of positive measure which is not a
domain of expansion. This answers a question of Adrian Ioana.
|
Workflow decision making is critical to performing many practical workflow
applications. Scheduling in edge-cloud environments can address the high
complexity of workflow applications, while decreasing the data transmission
delay between the cloud and end devices. However, due to the heterogeneous
resources in edge-cloud environments and the complicated data dependencies
between the tasks in a workflow, significant challenges for workflow scheduling
remain, including the selection of an optimal task-server mapping from the
numerous possible combinations. Existing studies mostly assume rigid
conditions without fluctuations, ignoring the fact that workflow
scheduling typically takes place in uncertain environments. In this study, we
focus on reducing the execution cost of workflow applications mainly caused by
task computation and data transmission, while satisfying the workflow deadline
in uncertain edge-cloud environments. The Triangular Fuzzy Numbers (TFNs) are
adopted to represent the task processing time and data transferring time. A
cost-driven fuzzy scheduling strategy based on an Adaptive Discrete Particle
Swarm Optimization (ADPSO) algorithm is proposed, which employs the operators
of a Genetic Algorithm (GA). This strategy introduces the random two-point
crossover operator, neighborhood mutation operator, and adaptive multipoint
mutation operator of the GA to effectively avoid converging on local optima. The
experimental results show that our strategy can effectively reduce the workflow
execution cost in uncertain edge-cloud environments, compared with other
benchmark solutions.
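Triangular fuzzy numbers of the kind used above to model uncertain processing and transfer times can be sketched as follows. Component-wise addition is the standard TFN operation; the graded-mean defuzzification formula used here is one common choice, assumed for illustration rather than taken from the paper:

```python
class TFN:
    """Triangular fuzzy number (a, b, c): lower bound, peak, upper bound."""
    def __init__(self, a, b, c):
        assert a <= b <= c
        self.a, self.b, self.c = a, b, c

    def __add__(self, other):          # fuzzy addition is component-wise
        return TFN(self.a + other.a, self.b + other.b, self.c + other.c)

    def defuzzify(self):               # graded mean integration (one common choice)
        return (self.a + 4 * self.b + self.c) / 6

compute  = TFN(2.0, 3.0, 5.0)   # uncertain task processing time
transfer = TFN(1.0, 1.5, 2.5)   # uncertain data transfer time
total = compute + transfer      # fuzzy estimate of the combined delay
```

A fuzzy scheduler compares candidate task-server assignments through such defuzzified totals while keeping the spread (a, c) as a measure of risk.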
|
While classical spin systems in random networks have been intensively
studied, much less is known about quantum magnets in random graphs. Here, we
investigate interacting quantum spins on small-world networks, building on
mean-field theory and extensive quantum Monte Carlo simulations. Starting from
one-dimensional (1D) rings, we consider two situations: all-to-all interacting
and long-range interactions randomly added. The effective infinite dimension of
the lattice leads to a magnetic ordering at finite temperature $T_\mathrm{c}$
with mean-field criticality. Nevertheless, in contrast to the classical case,
we find two distinct power-law behaviors for $T_\mathrm{c}$ versus the average
strength of the extra couplings. This is controlled by a competition between a
characteristic length scale of the random graph and the thermal correlation
length of the underlying 1D system, thus challenging mean-field theories. We
also investigate the fate of a gapped 1D spin chain against the small-world
effect.
|
Bitcoin is built on a blockchain, an immutable decentralised ledger that
allows entities (users) to exchange Bitcoins in a pseudonymous manner. Bitcoins
are associated with alpha-numeric addresses and are transferred via
transactions. Each transaction is composed of a set of input addresses
(associated with unspent outputs received from previous transactions) and a set
of output addresses (to which Bitcoins are transferred). Although Bitcoin was
designed with anonymity in mind, different heuristic approaches exist to detect
which addresses in a specific transaction belong to the same entity. By
applying these heuristics, we build an Address Correspondence Network: in this
representation, addresses are nodes, and edges connect addresses that at least
one heuristic detects as belonging to the same entity. In this
paper, we analyse the Address Correspondence Network for the first time and
show it is characterised by a complex topology, signalled by a broad, skewed
degree distribution and a power-law component size distribution. Using a
large-scale dataset of addresses for which the controlling entities are known,
we show that a combination of external data coupled with standard community
detection algorithms can reliably identify entities. The complex nature of the
Address Correspondence Network reveals that usage patterns of individual
entities create statistical regularities; and that these regularities can be
leveraged to more accurately identify entities and gain a deeper understanding
of the Bitcoin economy as a whole.
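The clustering step behind an Address Correspondence Network can be sketched with a union-find structure: each heuristic emits pairs of addresses judged to belong to the same entity, and connected components are the candidate entities. A minimal illustration using the common multi-input heuristic on made-up toy transactions:

```python
from collections import defaultdict

class UnionFind:
    """Disjoint sets over address strings."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

# Multi-input heuristic: all input addresses of one transaction are
# assumed to be controlled by the same entity (toy data).
transactions = [
    {"inputs": ["addr1", "addr2"]},
    {"inputs": ["addr2", "addr3"]},
    {"inputs": ["addr4"]},
]
uf = UnionFind()
for tx in transactions:
    uf.find(tx["inputs"][0])           # register even single-input txs
    for addr in tx["inputs"][1:]:
        uf.union(tx["inputs"][0], addr)

# Connected components = sets of addresses attributed to one entity.
clusters = defaultdict(set)
for addr in uf.parent:
    clusters[uf.find(addr)].add(addr)
```

Here `addr1`, `addr2` and `addr3` collapse into one component via the shared input `addr2`, while `addr4` remains a singleton; at Bitcoin scale the same procedure yields the broad component-size distribution the abstract describes.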
|
Federated learning plays an important role in the development of smart cities.
With the development of big data and artificial intelligence, data privacy
protection has become a pressing problem in this process, and federated
learning is capable of solving it. This paper first introduces the background,
definition and key technologies of federated learning, and then surveys its
current developments and applications in various fields. We summarize the
latest research on the application of federated learning in various fields of
smart cities, providing an in-depth understanding of its current development
in the Internet of Things, transportation, communications, finance, medicine
and other fields, and review the key technologies and the latest results.
Finally, we discuss the future applications and research directions of
federated learning in smart cities.
|
We report a combined experimental and theoretical study of the PdSe2-xTex
system. With increasing Te fraction, structural evolutions, first from an
orthorhombic phase (space group Pbca) to a monoclinic phase (space group C2/c)
and then to a trigonal phase (space group P-3m1), are observed, accompanied by
clearly distinct electrical transport behavior. The monoclinic phase (C2/c) is
a completely new polymorphic phase and is discovered within a narrow range of
Te composition ($0.3 \leq x \leq 0.8$). This phase has a different packing
sequence from all known transition metal dichalcogenides to date. Electronic
calculations and detailed transport analysis of the new polymorphic
PdSe1.3Te0.7 phase are presented. In the trigonal phase region,
superconductivity with enhanced critical temperature is also observed within a
narrow range of Te content ($1.0 \leq x \leq 1.2$). The rich phase diagram, new
polymorphic structure, as well as the anomalously enhanced superconductivity,
could stimulate further interest in exploring new types of polymorphs and in
investigating their transport and electronic properties in the transition
metal dichalcogenide family, which is of significant interest.
|
Although the frequency-division duplex (FDD) massive multiple-input
multiple-output (MIMO) system can offer high spectral and energy efficiency, it
requires the downlink channel state information (CSI) to be fed back from users
to the base station (BS) in order to carry out the precoding design at the BS.
However, the large dimension of CSI matrices in the massive MIMO system makes
the CSI feedback very challenging, and it is essential to compress the fed-back
CSI. To this end, this paper proposes a novel dilated convolution based CSI
feedback network, namely DCRNet. Specifically, the dilated convolutions are
used to enhance the receptive field (RF) of the proposed DCRNet without
increasing the convolution size. Moreover, advanced encoder and decoder blocks
are designed to improve the reconstruction performance and reduce computational
complexity as well. Numerical results are presented to show the superiority of
the proposed DCRNet over the conventional networks. In particular, the proposed
DCRNet can achieve almost state-of-the-art (SOTA) performance with far
fewer floating point operations (FLOPs). The open source code and checkpoint of
this work are available at https://github.com/recusant7/DCRNet.
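The receptive-field benefit of dilation that DCRNet exploits can be sketched with the standard receptive-field recurrence for stride-1 convolutions. The layer sizes below are illustrative assumptions, not DCRNet's actual configuration:

```python
def receptive_field(layers):
    """Receptive field (one dimension, in input samples) of stacked
    stride-1 convolutions; layers is a list of (kernel_size, dilation)."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d   # each layer widens the field by (k-1)*dilation
    return rf

plain   = receptive_field([(3, 1), (3, 1), (3, 1)])   # three ordinary 3x3 convs
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])   # same cost, dilations 1,2,4
```

With the same number of parameters, the dilated stack sees a much wider input window, which is how the receptive field is enlarged "without increasing the convolution size".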
|
This paper considers the approximation of a monomial $x^n$ over the interval
$[-1,1]$ by a lower-degree polynomial. This polynomial approximation can be
easily computed analytically and is obtained by truncating the analytical
Chebyshev series expansion of $x^n$. The error in the polynomial approximation
in the supremum norm has an exact expression with an interesting probabilistic
interpretation. We use this interpretation along with concentration
inequalities to develop a useful upper bound for the error.
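The truncation described above is easy to reproduce numerically. The sketch below, using NumPy's Chebyshev utilities, checks that the sup-norm error of the truncated series equals the sum of the dropped coefficients (attained at $x = 1$, since the coefficients of $x^n$ are non-negative and $T_k(1) = 1$); the choices $n = 10$ and truncation degree $m = 6$ are arbitrary:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n, m = 10, 6                        # approximate x^n by a degree-m polynomial
coefs = C.poly2cheb([0] * n + [1])  # Chebyshev coefficients of x^n
trunc = coefs.copy()
trunc[m + 1:] = 0.0                 # drop all terms of degree > m

x = np.linspace(-1.0, 1.0, 4001)
err = np.max(np.abs(x**n - C.chebval(x, trunc)))  # sup-norm error on a grid
tail = coefs[m + 1:].sum()          # sum of the dropped coefficients
```

For $n = 10$, $m = 6$ the dropped coefficients are those of $T_8$ and $T_{10}$, giving an error of $11/512$; it is this sum of tail coefficients that admits the binomial-tail probabilistic interpretation.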
|
In this letter, we present experimental data demonstrating spin wave
interference detection using the inverse spin Hall effect (ISHE). Two coherent spin waves
are excited in a yttrium-iron garnet (YIG) waveguide by continuous microwave
signals. The initial phase difference between the spin waves is controlled by
the external phase shifter. The ISHE voltage is detected at a distance of 2 mm
and 4 mm away from the spin wave generating antennae by an attached Pt layer.
Experimental data show ISHE voltage oscillation as a function of the phase
difference between the two interfering spin waves. This experiment demonstrates
an intriguing possibility of using the ISHE in spin wave logic circuits,
converting the spin wave phase into an electric signal.
|
Initial powder mixtures of Cu, Fe and Co are exposed to severe plastic
deformation by high-pressure torsion to prepare solid solutions. A broad range
of compositions is investigated; the study aims at the synthesis of
soft magnetic materials and therefore at the formation of a homogeneous and
nanocrystalline microstructure. For intermediate ferromagnetic contents,
high-pressure torsion at room temperature yields single-phase supersaturated
solid solutions. For higher ferromagnetic contents, two consecutive steps of
high-pressure torsion deformation at different temperatures yield the desired
nanocrystalline microstructure. Depending on the Co-to-Fe-ratio, either a
single-phase supersaturated solid solution or a nanocomposite forms. The
composite exhibits an enhanced magnetic moment, indicating the formation of a
(Fe,Co)-alloy upon severe plastic deformation. Soft magnetic properties are
verified for large Co-to-Fe ratios and this microstructure is found to remain
stable up to 400 °C.
|
Let $n$ be a positive integer and $t$ a non-zero integer. We consider the
elliptic curve over $\mathbb{Q}$ given by $E : y^2 = x^3 + tx^2 - n^2(t +
3n^2)x + n^6$. It is a special case of an elliptic surface studied recently by
Bettin, David and Delaunay [2] and it generalizes Washington's family. The
point $(0, n^3)$ belongs to $E(\mathbb{Q})$ and we obtain some results about
its nondivisibility in $E(\mathbb{Q})$. Our work extends to this two-parameter
family of elliptic curves a previous study of Duquesne (mainly stated for $n =
1$ and $t > 0$).
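A quick sanity check that the stated point lies on the family is immediate: at $x = 0$ the right-hand side of the Weierstrass equation reduces to $n^6 = (n^3)^2$. In exact integer arithmetic:

```python
def on_curve(n, t, x, y):
    """Check that (x, y) satisfies y^2 = x^3 + t*x^2 - n^2*(t + 3*n^2)*x + n^6."""
    return y * y == x**3 + t * x * x - n * n * (t + 3 * n * n) * x + n**6

# The point (0, n^3) lies on E for every positive n and non-zero t,
# since at x = 0 the right-hand side reduces to n^6 = (n^3)^2.
assert all(on_curve(n, t, 0, n**3) for n in (1, 2, 5) for t in (-7, 1, 3))
```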
|
The dataset presented provides high-resolution images of real, filled out
bank checks containing various complex backgrounds, and handwritten text and
signatures in the respective fields, along with both pixel-level and
patch-level segmentation masks for the signatures on the checks. The images of
bank checks were obtained from different sources, including other publicly
available check datasets, publicly available images on the internet, as well as
scans and images of real checks. Using the GIMP graphics software, pixel-level
segmentation masks for signatures on these checks were manually generated as
binary images. An automated script was then used to generate patch-level masks.
The dataset was created to train and test networks for extracting signatures
from bank checks and other similar documents with very complex backgrounds.
|
Automated driving is now possible in diverse road and traffic conditions.
However, there are still situations that automated vehicles cannot handle
safely and efficiently. In this case, a Transition of Control (ToC) is
necessary so that the driver takes control of the driving. Executing a ToC
requires the driver to get full situation awareness of the driving environment.
If the driver fails to take back control within a limited time, a Minimum Risk
Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g.,
decelerating to full stop). The execution of ToCs requires some time and can
cause traffic disruption and safety risks that increase if several vehicles
execute ToCs/MRMs at similar times and in the same area. This study proposes to
use novel C-ITS traffic management measures where the infrastructure exploits
V2X communications to assist Connected and Automated Vehicles (CAVs) in the
execution of ToCs. The infrastructure can suggest a spatial distribution of
ToCs, and inform vehicles of the locations where they could execute a safe stop
in case of MRM. This paper reports the first field operational tests that
validate the feasibility and quantify the benefits of the proposed
infrastructure-assisted ToC and MRM management. The paper also presents the CAV
and roadside infrastructure prototypes implemented and used in the trials. The
conducted field trials demonstrate that infrastructure-assisted traffic
management solutions can reduce safety risks and traffic disruptions.
|
Commonly, machine learning models minimize an empirical expectation. As a
result, the trained models typically perform well for the majority of the data
but the performance may deteriorate in less dense regions of the dataset. This
issue also arises in generative modeling. A generative model may overlook
underrepresented modes that are less frequent in the empirical data
distribution. Addressing this issue is known as achieving complete mode coverage. We propose a
sampling procedure based on ridge leverage scores which significantly improves
mode coverage when compared to standard methods and can easily be combined with
any GAN. Ridge leverage scores are computed by using an explicit feature map,
associated with the next-to-last layer of a GAN discriminator or of a
pre-trained network, or by using an implicit feature map corresponding to a
Gaussian kernel. Multiple evaluations against recent approaches of complete
mode coverage show a clear improvement when using the proposed sampling
strategy.
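The ridge leverage scores themselves are straightforward to compute from an explicit feature map. A minimal NumPy sketch, in which random features stand in for a discriminator's next-to-last-layer activations and the ridge parameter is an arbitrary choice:

```python
import numpy as np

def ridge_leverage_scores(phi, lam):
    """phi: (N, d) explicit feature map of the data; lam: ridge parameter.
    Returns l_i = phi_i^T (Phi^T Phi + lam I)^{-1} phi_i for each sample."""
    d = phi.shape[1]
    k = phi.T @ phi + lam * np.eye(d)
    return np.einsum("nd,de,ne->n", phi, np.linalg.inv(k), phi)

rng = np.random.default_rng(0)
phi = rng.standard_normal((200, 5))        # stand-in feature map
scores = ridge_leverage_scores(phi, lam=1.0)
probs = scores / scores.sum()              # sampling distribution over the data
```

Each score lies in (0, 1) and the scores sum to the effective dimension of the ridge problem; sampling with probabilities proportional to them up-weights points in sparse regions of feature space, which is what improves mode coverage.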
|
Recently, deep generative models for molecular graphs are gaining more and
more attention in the field of de novo drug design. A variety of models have
been developed to generate topological structures of drug-like molecules, but
explorations in generating three-dimensional structures are still limited.
Existing methods have either focused on low molecular weight compounds without
considering drug-likeness or generated 3D structures indirectly using atom
density maps. In this work, we introduce Ligand Neural Network (L-Net), a novel
graph generative model for designing drug-like molecules with high-quality 3D
structures. L-Net directly outputs the topological and 3D structure of
molecules (including hydrogen atoms), without the need for additional atom
placement or bond order inference algorithms. The architecture of L-Net is
specifically optimized for drug-like molecules, and a set of metrics is
assembled to comprehensively evaluate its performance. The results show that
L-Net is capable of generating chemically correct, conformationally valid, and
highly drug-like molecules. Finally, to demonstrate its potential in
structure-based molecular design, we combine L-Net with MCTS and test its
ability to generate potential inhibitors targeting ABL1 kinase.
|
Information management has entered a completely new era, the quantum era.
However, there is a lack of sufficient theory for extracting truly useful
quantum information and transferring it into a form that is intuitive and
straightforward for decision making. Therefore, based on the quantum model of
mass function, a fortified dual check system is proposed to ensure that the
judgments generated retain sufficiently high accuracy. Moreover, considering
that in real life everything takes place within an observable time interval,
the concept of a time interval is introduced into the framework of the check
system. The proposed model is very helpful for handling uncertain quantum
information. Some applications are provided to verify the rationality and
correctness of the proposed method.
|
In this paper we prove upper and lower bounds on the minimal spherical
dispersion. In particular, we see that the inverse $N(\varepsilon,d)$ of the
minimal spherical dispersion is, for fixed $\varepsilon>0$, up to logarithmic
terms linear in the dimension $d$. We also derive upper and lower bounds on the
expected dispersion for points chosen independently and uniformly at random
from the Euclidean unit sphere.
|
We prove global well-posedness for a coupled Dirac--Klein-Gordon (DKG) system
in $1+2$ dimensions under the assumption of small, compact, high-regularity
data. We reveal hidden structure within the Klein-Gordon part of the system,
which allows us to treat the nonlinearities, which are below-critical in two
spatial dimensions. Furthermore, we provide the first asymptotic decay results
for the DKG system in $1+2$ dimensions.
|
The autonomous vehicle industry is one of the largest and most prominent
industries worldwide, with many technology companies effectively designing and
orienting their products towards automobile safety and accuracy. These
products perform very well on the roads of developed countries, but can fail
within the first minutes in an underdeveloped country, because the road
environment there differs greatly from that of a developed country. The
following study proposes to train such artificial intelligence models on the
environment of an underdeveloped country like Pakistan. The proposed approach
uses convolutional neural networks for image classification. The model was
pre-trained on the German traffic sign dataset and then fine-tuned on a
Pakistani dataset. The experimental setup yielded better results and accuracy
than previously conducted experiments. To increase accuracy, additional data
was collected to enlarge every class in the dataset. In the future, the
under-represented classes need to be expanded further, and more images of
traffic signs must be collected from Pakistan's most used and popular roads,
the motorways and national highways, whose traffic signs differ in color,
size, and shape from common traffic signs.
|
Internet of Things Driven Data Analytics (IoT-DA) has the potential to advance
the data-driven operationalisation of smart environments. However, limited research
exists on how IoT-DA applications are designed, implemented, operationalised,
and evolved in the context of software and system engineering life-cycle. This
article empirically derives a framework that could be used to systematically
investigate the role of software engineering (SE) processes and their
underlying practices to engineer IoT-DA applications. First, using existing
frameworks and taxonomies, we develop an evaluation framework to evaluate
software processes, methods, and other artefacts of SE for IoT-DA. Secondly, we
perform a systematic mapping study to qualitatively select 16 processes (from
academic research and industrial solutions) of SE for IoT-DA. Thirdly, we apply
our developed evaluation framework based on 17 distinct criteria (a.k.a.
process activities) for fine-grained investigation of each of the 16 SE
processes. Fourthly, we apply our proposed framework on a case study to
demonstrate development of an IoT-DA healthcare application. Finally, we
highlight key challenges, recommended practices, and the lessons learnt based
on the framework's support for process-centric software engineering of IoT-DA. The
results of this research can help researchers and practitioners to engineer
emerging and next-generation IoT-DA software applications.
|
The IoT comprises many devices, such as embedded systems, wireless sensor
nodes (WSNs), and control systems. It is essential for some of these devices to
protect the information that they process and transmit. The issue is that an
adversary may steal such a device to gain physical access to it. There is a
variety of ways to reveal cryptographic keys; one of them is optical Fault
Injection attacks. We performed successful optical Fault Injections into
different types of gates, in particular INV, NAND, NOR, and FF. In our work we
concentrate on the selection of the parameters configured by an attacker and
their influence on the success of the Fault Injections.
|
Liquid-liquid phase separation (LLPS) is currently of great interest in cell
biology. LLPS is an example of what is called an emergent phenomenon -- an idea
that comes from condensed-matter physics. Emergent phenomena have the
characteristic feature of having a switch-like response. I show that the Hill
equation of biochemistry can be used as a simple model of strongly cooperative,
switch-like behaviour. One result is that a switch-like response requires
relatively few molecules: even ten give a strongly switch-like response. Thus
if a biological function enabled by LLPS relies on LLPS to provide a
switch-like response to a stimulus, then condensates large enough to be visible
in optical microscopy are not needed.
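The switch-like behaviour claimed above can be illustrated numerically with the Hill equation itself; the parameter values below are illustrative, not taken from the paper:

```python
def hill(x, n, K=1.0):
    """Hill equation: fractional response at stimulus x,
    with Hill coefficient n and half-saturation constant K."""
    return x**n / (K**n + x**n)

# With n = 10 (on the order of ten cooperating molecules), the response
# flips from below 10% to above 90% over only a ~1.5-fold stimulus change,
# whereas n = 1 gives a graded response over the same range.
graded = (hill(0.8, 1), hill(1.25, 1))    # roughly (0.44, 0.56)
switch = (hill(0.8, 10), hill(1.25, 10))  # roughly (0.10, 0.90)
```

The contrast between the two tuples is the point: the same modest change in stimulus barely moves the n = 1 response but flips the n = 10 response almost completely.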
|
Ensemble methods are generally regarded to be better than a single model if
the base learners are deemed to be "accurate" and "diverse." Here we
investigate a semi-supervised ensemble learning strategy to produce
generalizable blind image quality assessment (BIQA) models. We train a multi-head
convolutional network for quality prediction by maximizing the accuracy of the
ensemble (as well as the base learners) on labeled data, and the disagreement
(i.e., diversity) among them on unlabeled data, both implemented by the
fidelity loss. We conduct extensive experiments to demonstrate the advantages
of employing unlabeled data for BIQA, especially in model generalization and
failure identification.
|
The surge in internet of things (IoT) devices seriously threatens the
current IoT security landscape, which requires a robust network intrusion
detection system (NIDS). Despite superior detection accuracy, existing machine
learning or deep learning based NIDS are vulnerable to adversarial examples.
Recently, generative adversarial networks (GANs) have become a prevailing
method for crafting adversarial examples. However, the discrete nature of
network traffic at the packet level makes it hard for GANs to craft adversarial
traffic, since GANs are efficient at generating continuous data, as in image synthesis.
Unlike previous methods that convert discrete network traffic into a grayscale
image, this paper gains inspiration from SeqGAN in sequence generation with
policy gradient. Based on the structure of SeqGAN, we propose Attack-GAN to
generate adversarial network traffic at packet level that complies with domain
constraints. Specifically, the adversarial packet generation is formulated into
a sequential decision making process. In this case, each byte in a packet is
regarded as a token in a sequence. The objective of the generator is to select
a token to maximize its expected end reward. To bypass the detection of NIDS,
the generated network traffic and benign traffic are classified by a black-box
NIDS. The prediction results returned by the NIDS are fed into the
discriminator to guide the update of the generator. We generate malicious
adversarial traffic based on a real, publicly available dataset, keeping attack
functionality unchanged. The experimental results validate that the generated
adversarial samples are able to deceive many existing black-box NIDS.
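The sequential decision process described above can be caricatured with a tiny REINFORCE-style update, where a categorical policy over byte values is pushed towards packets that a (stubbed) black-box detector passes. Everything here (vocabulary size, reward stub, learning rate) is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
T, V = 4, 8                 # toy packet length and byte vocabulary (real packets: V = 256)
theta = np.zeros((T, V))    # per-position logits of the generator policy

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def black_box_reward(seq):
    # Stand-in for the black-box NIDS: reward packets the detector passes.
    # Here we pretend any byte >= V // 2 triggers detection.
    return float(all(b < V // 2 for b in seq))

for _ in range(2000):
    p = softmax(theta)
    seq = [rng.choice(V, p=p[t]) for t in range(T)]  # sample one packet, byte by byte
    R = black_box_reward(seq)                        # end reward from the detector
    for t, b in enumerate(seq):                      # REINFORCE: R * grad log pi(b_t)
        grad = -p[t].copy()
        grad[b] += 1.0
        theta[t] += 0.5 * R * grad
```

After training, the policy concentrates its probability mass on byte values the stub detector does not flag, which is the mechanism (reward from a black-box classifier guiding the generator) the abstract describes.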
|
In sponsored search, retrieving synonymous keywords for exact match type is
important for accurately targeted advertising. Data-driven, deep learning-based
methods have been proposed to tackle this problem. An apparent disadvantage of
these methods is their poor generalization performance on entity-level long-tail
instances, even though they might share similar concept-level patterns with
frequent instances. With the help of a large knowledge base, we find that most
commercial synonymous query-keyword pairs can be abstracted into meaningful
conceptual patterns through concept tagging. Based on this fact, we propose a
novel knowledge-driven conceptual retrieval framework to mitigate this problem,
which consists of three parts: data conceptualization, matching via conceptual
patterns and concept-augmented discrimination. Both offline and online
experiments show that our method is very effective. This framework has been
successfully applied to Baidu's sponsored search system, which yields a
significant improvement in revenue.
|
In this paper, a unified gas-kinetic wave-particle scheme (UGKWP) for the
disperse dilute gas-particle multiphase flow is proposed. The gas phase is
always in the hydrodynamic regime. However, the particle phase covers different
flow regimes from particle trajectory crossing to the hydrodynamic wave
interaction with the variation of local particle phase Knudsen number. The
UGKWP is an appropriate method for capturing the multiscale transport
mechanism in the particle phase through its coupled wave-particle formulation.
In the regime with intensive particle collisions, the evolution of the solid
particles will be followed by the analytic wave with quasi-equilibrium
distribution; while in the rarefied regime the non-equilibrium particle phase
will be captured through particle tracking and collision, which plays a
decisive role in recovering particle trajectory crossing behavior. The
gas-kinetic scheme (GKS) is employed for the simulation of gas flow. In the
highly collisional regime for the particles, no particles will be sampled in
UGKWP, and the wave formulation for the solid particles with the hydrodynamic gas
phase will reduce the system to the two-fluid Eulerian model. On the other
hand, in the collisionless regime for the solid particles, the free transport of
the solid particles will be followed in UGKWP, and the coupled system will return
to the Eulerian-Lagrangian formulation for the gas and particles. The scheme is
tested in all flow regimes, including the non-equilibrium particle
trajectory crossing, the particle concentration under different Knudsen numbers,
and the dispersion of particle flow with varying Stokes number. An
experiment of shock-induced particle-bed fluidization is simulated and the
results are compared with experimental measurements. These numerical solutions
validate the suitability of the proposed scheme for the simulation of gas-particle
multiphase flow.
|
Manic episodes of bipolar disorder can lead to uncritical behaviour and
delusional psychosis, often with destructive consequences for those affected
and their surroundings. Early detection and intervention of a manic episode are
crucial to prevent escalation, hospital admission and premature death. However,
people with bipolar disorder may not recognize that they are experiencing a
manic episode and symptoms such as euphoria and increased productivity can also
deter affected individuals from seeking help. This work proposes to perform
user-independent, automatic mood-state detection based on actigraphy and
electrodermal activity acquired from a wrist-worn device during mania and after
recovery (euthymia). This paper proposes a new deep learning-based ensemble
method leveraging long (20-hour) and short (5-minute) time intervals to
discriminate between the mood-states. When tested on 47 bipolar patients, the
proposed classification scheme achieves an average accuracy of 91.59% in
euthymic/manic mood-state recognition.
|
Biological infants are naturally curious and try to comprehend their physical
surroundings by interacting, in myriad multisensory ways, with different
objects - primarily macroscopic solid objects - around them. Through their
various interactions, they build hypotheses and predictions, and eventually
learn, infer and understand the nature of the physical characteristics and
behavior of these objects. Inspired thus, we propose a model for
curiosity-driven learning and inference for real-world AI agents. This model is
based on the arousal of curiosity, deriving from observations along
discontinuities in the fundamental macroscopic solid-body physics parameters,
i.e., shape constancy, spatial-temporal continuity, and object permanence. We
use the term body-budget to represent the perceived fundamental properties of
solid objects. The model aims to support the emulation of learning from scratch
followed by substantiation through experience, irrespective of domain, in
real-world AI agents.
|
In previous works, questioning the mathematical nature of the connection in
the translations gauge theory formulation of Teleparallel Equivalent to General
Relativity (TEGR) Theory led us to propose a new formulation using a Cartan
connection. In this review, we summarize the presentation of that proposal and
discuss it from a gauge theoretic perspective.
|
Emotion dynamics modeling is a significant task in emotion recognition in
conversation. It aims to predict conversational emotions when building
empathetic dialogue systems. Existing studies mainly develop models based on
Recurrent Neural Networks (RNNs). They cannot benefit from the power of the
recently-developed pre-training strategies for better token representation
learning in conversations. More seriously, it is hard to distinguish the
dependency of interlocutors and the emotional influence among interlocutors by
simply assembling the features on top of RNNs. In this paper, we develop a
series of BERT-based models to specifically capture the inter-interlocutor and
intra-interlocutor dependencies of the conversational emotion dynamics.
Concretely, we first substitute BERT for RNNs to enrich the token
representations. Then, a Flat-structured BERT (F-BERT) is applied to link up
utterances in a conversation directly, and a Hierarchically-structured BERT
(H-BERT) is employed to distinguish the interlocutors when linking up
utterances. More importantly, a Spatial-Temporal-structured BERT, namely
ST-BERT, is proposed to further determine the emotional influence among
interlocutors. Finally, we conduct extensive experiments on two popular emotion
recognition in conversation benchmark datasets and demonstrate that our
proposed models can attain around 5\% and 10\% improvement over the
state-of-the-art baselines, respectively.
|
We introduce a symmetric fractional-order reduction (SFOR) method to
construct numerical algorithms on general nonuniform temporal meshes for
semilinear fractional diffusion-wave equations. By using the novel order
reduction method, the governing problem is transformed to an equivalent coupled
system, where the explicit orders of time-fractional derivatives involved are
all $\alpha/2$ $(1<\alpha<2)$. The linearized L1 scheme and Alikhanov scheme
are then proposed on general time meshes. Under some reasonable regularity
assumptions and weak restrictions on meshes, the optimal convergence is derived
for the two kinds of difference schemes by the $H^2$ energy method. An adaptive
time-stepping strategy based on the (fast linearized) L1 and Alikhanov
algorithms is designed for the semilinear diffusion-wave equations. Numerical
examples are provided to confirm the accuracy and efficiency of the proposed
algorithms.
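Schematically (suppressing the careful treatment of initial data, which the SFOR method handles), the reduction replaces one derivative of order $\alpha$ by two of order $\alpha/2$:

```latex
% semilinear diffusion-wave problem, 1 < \alpha < 2:
%   D_t^{\alpha} u = \Delta u + f(u)
% introduce the intermediate variable v := D_t^{\alpha/2} u, giving the
% equivalent coupled system in which only the order \alpha/2 appears:
\begin{aligned}
  D_t^{\alpha/2} u &= v, \\
  D_t^{\alpha/2} v &= \Delta u + f(u),
\end{aligned}
\qquad \frac{\alpha}{2} \in \Bigl(\frac{1}{2}, 1\Bigr),
```

so that sub-diffusion discretizations such as the nonuniform L1 and Alikhanov schemes can be applied to both equations of the coupled system.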
|
The use of neural networks and reinforcement learning has become increasingly
popular in autonomous vehicle control. However, the opaqueness of the resulting
control policies presents a significant barrier to deploying neural
network-based control in autonomous vehicles. In this paper, we present a
reinforcement learning based approach to autonomous vehicle longitudinal
control, where the rule-based safety cages provide enhanced safety for the
vehicle as well as weak supervision to the reinforcement learning agent. By
guiding the agent to meaningful states and actions, this weak supervision
improves the convergence during training and enhances the safety of the final
trained policy. This rule-based supervisory controller has the further
advantage of being fully interpretable, thereby enabling traditional validation
and verification approaches to ensure the safety of the vehicle. We compare
models with and without safety cages, as well as models with optimal and
constrained model parameters, and show that the weak supervision consistently
improves the safety of exploration, speed of convergence, and model
performance. Additionally, we show that when the model parameters are
constrained or sub-optimal, the safety cages can enable a model to learn a safe
driving policy even when the model could not be trained to drive through
reinforcement learning alone.
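The safety-cage idea can be illustrated with a small rule-based sketch; the thresholds, limits, and interface below are hypothetical, not the paper's controller. The cage clips the agent's commanded acceleration and reports its intervention, which can serve as the weak-supervision signal during training:

```python
def apply_safety_cage(a_cmd, ego_speed, gap,
                      min_headway=2.0, a_brake=-3.0, a_limits=(-3.0, 2.0)):
    """Clip a commanded acceleration [m/s^2] with a simple rule-based cage.

    Returns the (possibly overridden) acceleration and whether the cage
    intervened; the intervention flag can be fed back to the reinforcement
    learning agent as a weak supervision penalty.
    """
    a = max(a_limits[0], min(a_cmd, a_limits[1]))  # actuator limits
    intervened = False
    # Rule: if the time headway to the lead vehicle is too small, force braking.
    if ego_speed > 0 and gap / ego_speed < min_headway:
        a = min(a, a_brake)
        intervened = True
    return a, intervened
```

For example, `apply_safety_cage(1.5, ego_speed=20.0, gap=10.0)` overrides the command with braking (headway 0.5 s), while a safe headway leaves the command untouched. Because the rule is explicit, the cage itself is fully interpretable and amenable to traditional verification.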
|
In this article, we consider the class of 2-Calabi-Yau tilted algebras that
are defined by a quiver with potential whose dual graph is a tree. We call
these algebras \emph{dimer tree algebras} because they can also be realized as
quotients of dimer algebras on a disc. These algebras are wild in general. For
every such algebra $B$, we construct a polygon $\mathcal{S}$ with a
checkerboard pattern in its interior that gives rise to a category
$\text{Diag}(\mathcal{S})$. The indecomposable objects of
$\text{Diag}(\mathcal{S})$ are the 2-diagonals in $\mathcal{S}$, and its
morphisms are given by certain pivoting moves between the 2-diagonals. We
conjecture that the category $\text{Diag}(\mathcal{S})$ is equivalent to the
stable syzygy category over the algebra $B$, such that the rotation of the
polygon corresponds to the shift functor on the syzygies. In particular, the
number of indecomposable syzygies is finite and the projective resolutions are
periodic. We prove the conjecture in the special case where every chordless
cycle in the quiver is of length three.
As a consequence, we obtain an explicit description of the projective
resolutions. Moreover, we show that the syzygy category is equivalent to the
2-cluster category of type $\mathbb{A}$, and we introduce a new derived
invariant for the algebra $B$ that can be read off easily from the quiver.
|
Transformer language models have shown remarkable ability in detecting when a
word is anomalous in context, but likelihood scores offer no information about
the cause of the anomaly. In this work, we use Gaussian models for density
estimation at intermediate layers of three language models (BERT, RoBERTa, and
XLNet), and evaluate our method on BLiMP, a grammaticality judgement benchmark.
In lower layers, surprisal is highly correlated to low token frequency, but
this correlation diminishes in upper layers. Next, we gather datasets of
morphosyntactic, semantic, and commonsense anomalies from psycholinguistic
studies; we find that the best-performing model, RoBERTa, exhibits surprisal in
earlier layers for morphosyntactic anomalies than for semantic ones, while
commonsense anomalies do not exhibit surprisal at any intermediate layer.
These results suggest that language models employ separate mechanisms to detect
different types of linguistic anomalies.
|
We search for a first-order phase transition gravitational wave signal in 45
pulsars from the NANOGrav 12.5 year dataset. We find that the data can be
explained in terms of a strong first order phase transition taking place at
temperatures below the electroweak scale. In our search, we find that the
signal from a first order phase transition is degenerate with that generated by
Supermassive Black Hole Binary mergers. An interesting open question is how
well gravitational wave observatories could separate such signals.
|
This paper continues arXiv:2012.10364, where an approach was developed for
constructing the exact matrix model of any generalized Ising system, and such a
model was constructed for a certain 2d system. In this paper, the properties of
the model that permit light block diagonalization are specified, and a
corresponding example is considered. For the example, the general exact
partition function is obtained and analysed. The analysis shows that the free
energy does not depend on the number of rows with a large number of cells. For
the example with light boundary conditions, the partition function is obtained
and the specific free energy per spin is plotted.
|
We consider the problem of interpretable network representation learning for
samples of network-valued data. We propose the Principal Component Analysis for
Networks (PCAN) algorithm to identify statistically meaningful low-dimensional
representations of a network sample via subgraph count statistics. The PCAN
procedure provides an interpretable framework for which one can readily
visualize, explore, and formulate predictive models for network samples. We
furthermore introduce a fast sampling-based algorithm, sPCAN, which is
significantly more computationally efficient than its counterpart, but still
enjoys advantages of interpretability. We investigate the relationship between
these two methods and analyze their large-sample properties under the common
regime where the sample of networks is a collection of kernel-based random
graphs. We show that under this regime, the embeddings of the sPCAN method
enjoy a central limit theorem and moreover that the population level embeddings
of PCAN and sPCAN are equivalent. We assess PCAN's ability to visualize,
cluster, and classify observations in network samples arising in nature,
including functional connectivity network samples and dynamic networks
describing the political co-voting habits of the U.S. Senate. Our analyses
reveal that our proposed algorithm provides informative and discriminatory
features describing the networks in each sample. The PCAN and sPCAN methods
build on the current literature of network representation learning and set the
stage for a new line of research in interpretable learning on network-valued
data. Publicly available software for the PCAN and sPCAN methods is available
at https://www.github.com/jihuilee/.
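The flavour of the approach can be sketched as follows: represent each network in the sample by a vector of subgraph count statistics and apply ordinary PCA to the sample of vectors. The particular statistics and dimensionality below are illustrative assumptions; the actual PCAN/sPCAN procedures are defined in the paper:

```python
import numpy as np

def subgraph_counts(A):
    """Count statistics of a simple undirected graph (adjacency matrix A):
    edges, 2-stars (paths of length 2), and triangles."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    edges = A.sum() / 2.0
    two_stars = (deg * (deg - 1) / 2.0).sum()
    triangles = np.trace(A @ A @ A) / 6.0
    return np.array([edges, two_stars, triangles])

def pca_embed(graphs, d=2):
    """Embed a sample of networks via PCA on their count vectors."""
    X = np.array([subgraph_counts(A) for A in graphs])
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(graphs)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :d]           # top-d principal directions
    return Xc @ W
```

Each network sample then maps to a low-dimensional point cloud that can be visualized, clustered, or used as features for prediction, which is the interpretability advantage the abstract highlights.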
|
Real-time visual localization of needles is necessary for various surgical
applications, including surgical automation and visual feedback. In this study
we investigate localization and autonomous robotic control of needles in the
context of our magneto-suturing system. Our system holds the potential for
surgical manipulation with the benefit of minimal invasiveness and reduced
patient side effects. However, the non-linear magnetic fields produce
unintuitive forces and demand delicate position-based control that exceeds the
capabilities of direct human manipulation. This makes automatic needle
localization a necessity. Our localization method combines neural network-based
segmentation and classical techniques, and we are able to consistently locate
our needle with 0.73 mm RMS error in clean environments and 2.72 mm RMS error
in challenging environments with blood and occlusion. The average localization
RMS error is 2.16 mm for all environments we used in the experiments. We
combine this localization method with our closed-loop feedback control system
to demonstrate the further applicability of localization to autonomous control.
Our needle is able to follow a running suture path in (1) no blood, no tissue;
(2) heavy blood, no tissue; (3) no blood, with tissue; and (4) heavy blood,
with tissue environments. The tip position tracking error ranges from 2.6 mm to
3.7 mm RMS, opening the door towards autonomous suturing tasks.
|
We consider the propagation of acoustic waves in a 2D waveguide unbounded in
one direction and containing a compact obstacle. The wavenumber is fixed so
that only one mode can propagate. The goal of this work is to propose a method
to cloak the obstacle. More precisely, we add to the geometry thin outer
resonators of width $\varepsilon$ and we explain how to choose their positions
as well as their lengths to get a transmission coefficient approximately equal
to one as if there were no obstacle. In the process we also investigate several
related problems. In particular, we explain how to get zero transmission and
how to design phase shifters. The approach is based on asymptotic analysis in
presence of thin resonators. An essential point is that we work around
resonance lengths of the resonators. This allows us to obtain effects of order
one with geometrical perturbations of width $\varepsilon$. Various numerical
experiments illustrate the theory.
|
Starting from an anti-symplectic involution on a K3 surface, one can consider
a natural Lagrangian subvariety inside the moduli space of sheaves over the K3.
One can also construct a Prymian integrable system following a construction of
Markushevich--Tikhomirov, extended by Arbarello--Sacc\`a--Ferretti, Matteini
and Sawon--Chen. In this article we address a question of Sawon, showing that
these integrable systems and their associated natural Lagrangians degenerate,
respectively, into fixed loci of involutions considered by Heller--Schaposnik,
Garcia-Prada--Wilkins and Basu--Garcia-Prada.
Along the way we find interesting results such as the proof that the
Donagi--Ein--Lazarsfeld degeneration is a degeneration of symplectic
varieties, a generalization of this degeneration, originally described for K3
surfaces, to the case of an arbitrary smooth projective surface, and a
description of the behaviour of certain involutions under this degeneration.
|
Earthquakes are lethal and costly. This study aims at avoiding these
catastrophic events through the application of injection policies derived via
reinforcement learning. With the rapid growth of artificial intelligence,
prediction-control problems are all the more tackled by function approximation
models that learn how to control a specific task, even for systems with
unmodeled/unknown dynamics and important uncertainties. Here, we show for the
first time the possibility of controlling earthquake-like instabilities using
state-of-the-art deep reinforcement learning techniques. The controller is
trained using a reduced model of the physical system, i.e., the spring-slider
model, which embodies the main dynamics of the physical problem for a given
earthquake magnitude. Its robustness to unmodeled dynamics is explored through
a parametric study. Our study is a first step towards minimizing seismicity in
industrial projects (geothermal energy, hydrocarbon production, CO2
sequestration) and, in a second step, towards inspiring techniques for the
control and prevention of natural earthquakes.
|
In this work we study the decidability of the global modal logic arising from
Kripke frames evaluated on certain residuated lattices (including all BL
algebras), known in the literature as crisp modal many-valued logics. We
exhibit a large family of these modal logics that are undecidable, in
opposition to classical modal logic and to the propositional logics defined
over the same classes of algebras. These include the global modal logics
arising from the standard Lukasiewicz and Product algebras. Furthermore, it is
shown that global modal Lukasiewicz and Product logics are not recursively
axiomatizable. We conclude the paper by solving negatively the open question of
whether a global modal logic coincides with the local modal logic closed under
the unrestricted necessitation rule.
|
We study a variant of online convex optimization where the player is
permitted to switch decisions at most $S$ times in expectation throughout $T$
rounds. Similar problems have been addressed in prior work for the discrete
decision set setting, and more recently in the continuous setting but only with
an adaptive adversary. In this work, we aim to fill the gap and present
computationally efficient algorithms in the more prevalent oblivious setting,
establishing a regret bound of $O(T/S)$ for general convex losses and
$\widetilde O(T/S^2)$ for strongly convex losses. In addition, for stochastic
i.i.d.~losses, we present a simple algorithm that performs $\log T$ switches
with only a multiplicative $\log T$ factor overhead in its regret in both the
general and strongly convex settings. Finally, we complement our algorithms
with lower bounds that match our upper bounds in some of the cases we consider.
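A toy version of the switching-budget constraint (not the paper's algorithms or regret bounds) makes the setting concrete: split the $T$ rounds into $S$ blocks and only update the decision, e.g. by follow-the-leader, at block boundaries, so at most $S-1$ switches occur:

```python
def blocked_ftl(zs, S, x0=0.5):
    """Play quadratic losses f_t(x) = (x - zs[t])**2 on [0, 1], updating the
    decision only at block boundaries (follow-the-leader between blocks),
    so the number of switches is at most S - 1."""
    T = len(zs)
    B = -(-T // S)                   # block length (ceiling division)
    x, plays, switches = x0, [], 0
    for t in range(T):
        if t > 0 and t % B == 0:
            new_x = sum(zs[:t]) / t  # FTL: minimizer of cumulative past loss
            if new_x != x:
                switches += 1
            x = new_x
        plays.append(x)
    return plays, switches
```

Blocking trades regret for switches: coarser blocks (smaller S) mean fewer switches but a staler decision, which is the tension behind the $O(T/S)$-type bounds in the abstract.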
|
Forecasting financial time series is considered to be a difficult task due to
the chaotic feature of the series. Statistical approaches have shown solid
results in some specific problems such as predicting market direction and
single-price of stocks; however, with the recent advances in deep learning and
big data techniques, promising new options have arisen for tackling financial
time series forecasting. Moreover, recent literature has shown that employing a
combination of statistics and machine learning may improve forecast accuracy in
comparison to single solutions. Taking these aspects into consideration, in
this work we propose MegazordNet, a framework that
explores statistical features within a financial series combined with a
structured deep learning model for time series forecasting. We evaluated our
approach predicting the closing price of stocks in the S&P 500 using different
metrics, and we were able to beat single statistical and machine learning
methods.
|
We describe an algorithm for computing a $\mathbb{Q}$-rational model for the quotient
of a modular curve by an automorphism group, under mild assumptions on the
curve and the automorphisms, by determining $q$-expansions for a basis of the
corresponding space of cusp forms. We also give a moduli interpretation for
general morphisms between modular curves.
|
We show that, given an almost-source algebra $A$ of a $p$-block of a finite
group $G$, then the unit group of $A$ contains a basis stabilized by the left
and right multiplicative action of the defect group if and only if, in a sense
to be made precise, certain relative multiplicities of local pointed groups are
invariant with respect to the fusion system. We also show that, when $G$ is
$p$-solvable, those two equivalent conditions hold for some almost-source
algebra of the given $p$-block. One motive lies in the fact that, by a theorem
of Linckelmann, if the two equivalent conditions hold for $A$, then any stable
basis for $A$ is semicharacteristic for the fusion system.
|
Sound Source Localization (SSL) is used to estimate the position of sound
sources. Various methods have been used for detecting sound and localizing it.
This paper presents a system for stationary sound source localization with a
cubical microphone array consisting of eight microphones placed on four
vertical adjacent faces, mounted on a three-wheel omni-directional drive, for
the inspection and monitoring of disaster victims in disaster areas. The
proposed method localizes a sound source in 3D space by a grid-search method
using the Generalized Cross Correlation Phase Transform (GCC-PHAT), which is
robust when operating in real-life scenarios where visibility is lacking. The
computed azimuth and elevation angles of the victim's voice are fed to the
embedded omni-directional drive system, which navigates the vehicle
automatically towards the stationary sound source.
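A minimal GCC-PHAT time-delay estimate between two microphone channels can be written as follows; this is a generic textbook implementation, not the paper's exact pipeline, in which such pairwise delays would feed the grid search over azimuth and elevation:

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs=1.0):
    """Estimate the delay (in seconds) of `sig` relative to `ref` using the
    Generalized Cross Correlation with Phase Transform (GCC-PHAT)."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                 # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

For a pulse arriving five samples later at one microphone, the estimator recovers a five-sample delay; the PHAT whitening is what makes the peak sharp under reverberation and noise, hence its robustness in real-life scenarios.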
|
To any $k$-dimensional subspace of $\mathbb Q^n$ one can naturally associate
a point in the Grassmannian ${\rm Gr}_{n,k}(\mathbb R)$ and two shapes of
lattices of rank $k$ and $n-k$ respectively. These lattices originate by
intersecting the $k$-dimensional subspace with the lattice $\mathbb Z^n$. Using
unipotent dynamics we prove simultaneous equidistribution of all of these
objects under a congruence condition when $(k,n) \neq (2,4)$.
|
We find an explicit presentation of relative linear Steinberg groups
$\mathrm{St}(n, R, I)$ for any ring $R$ and $n \geq 4$ by generators and
relations as abstract groups. We also prove a similar result for relative
simply laced Steinberg groups $\mathrm{St}(\Phi; R, I)$ for commutative $R$ and
$\Phi \in \{\mathsf A_\ell, \mathsf D_\ell, \mathsf E_\ell \mid \ell \geq 3\}$.
|
Soft electronics are a promising and revolutionary alternative for
traditional electronics when safe physical interaction between machines and the
human body is required. Among various materials architectures developed for
producing soft and stretchable electronics, Liquid-Metal Embedded Elastomers
(LMEEs), which contain Ga-based inclusions as a conductive phase, have drawn
considerable attention in various emerging fields such as wearable computing
and bio-inspired robotics. This is because LMEEs exhibit a unique combination
of desirable mechanical, electrical, and thermal properties. For instance,
these so-called multifunctional materials can undergo large deformations as
high as 600% strain without losing their electrical conductivity. Moreover, the
dispersion of conductive liquid-metal inclusions within the entire medium of
an elastomer makes it possible to fabricate autonomously self-healing circuits
that maintain their electrical functionality after extreme mechanical damage
induction. The electrically self-healing property is of great importance for
further progress in autonomous soft robotics, where materials are subjected to
various modes of mechanical damage such as tearing. In this short review, we
summarize the fundamental characteristics of LMEEs, their advantages over other
conductive composites, the materials used in LMEEs, their preparation and
activation processes, and the fabrication of self-healing circuits.
Additionally, we review soft-lithography-enabled techniques for liquid-metal
patterning.
|
The research in anomaly detection lacks a unified definition of what
represents an anomalous instance. Discrepancies in the nature itself of an
anomaly lead to multiple paradigms of algorithms design and experimentation.
Predictive maintenance is a special case, where the anomaly represents a
failure that must be prevented. Related time-series research, such as outlier
and novelty detection or time-series classification, does not apply to the
concept of an anomaly in this field, because anomalies here are not single,
previously unseen points and may not be precisely annotated. Moreover, due to the
lack of annotated anomalous data, many benchmarks are adapted from supervised
scenarios.
To address these issues, we generalise the concept of positive and negative
instances to intervals to be able to evaluate unsupervised anomaly detection
algorithms. We also preserve the imbalance scheme for evaluation through the
proposal of the Preceding Window ROC, a generalisation for the calculation of
ROC curves for time-series scenarios. We also adapt the mechanism from an
established time-series anomaly detection benchmark to the proposed
generalisations to reward early detection. Therefore, the proposal represents a
flexible evaluation framework for the different scenarios. To show the
usefulness of this definition, we include a case study of Big Data algorithms
with a real-world time-series problem provided by the company ArcelorMittal,
and compare the proposal with an existing evaluation method.
|
For any graph $G$ of order $p$, a bijection $f: V(G)\to [1,p]$ is called a
numbering of the graph $G$ of order $p$. The strength $str_f(G)$ of a numbering
$f: V(G)\to [1,p]$ of $G$ is defined by $str_f(G) = \max\{f(u)+f(v)\; |\; uv\in
E(G)\},$ and the strength $str(G)$ of a graph $G$ itself is $str(G) =
\min\{str_f(G)\;|\; f \mbox{ is a numbering of } G\}.$ A numbering $f$ is
called a strength labeling of $G$ if $str_f(G)=str(G)$. In this paper, we
obtain a sufficient condition for a graph to have
$str(G)=|V(G)|+\delta(G)$, where $\delta(G)$ is the minimum degree of $G$.
Consequently, many questions raised in [Bounds for the strength of graphs,
{\it Aust. J. Combin.} {\bf 72(3)}, (2018) 492--508] and [On the strength of
some trees, {\it AKCE Int. J. Graphs Comb.} (Online 2019)
doi.org/10.1016/j.akcej.2019.06.002] are solved. Moreover, we show that every
graph $G$ either has $str(G)=|V(G)|+\delta(G)$ or is a proper subgraph of a
graph $H$ that has $str(H) = |V(H)| + \delta(H)$ with $\delta(H)=\delta(G)$.
Further, new good lower bounds on $str(G)$ are also obtained. Using these, we
determine the strength of 2-regular graphs and obtain new lower bounds on
$str(Q_n)$ for various $n$, where $Q_n$ is the $n$-regular hypercube.
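To make the definitions concrete, both quantities can be checked by brute
force on small graphs. This is an illustrative sketch only (function names are
mine, not from the paper), and it is feasible just for small orders since it
enumerates all $p!$ numberings:

```python
from itertools import permutations

def strength_of_numbering(edges, f):
    """str_f(G): the maximum of f(u) + f(v) over all edges uv of G."""
    return max(f[u] + f[v] for u, v in edges)

def strength(vertices, edges):
    """str(G): the minimum of str_f(G) over all bijections f: V(G) -> [1, p]."""
    p = len(vertices)
    best = None
    for perm in permutations(range(1, p + 1)):
        f = dict(zip(vertices, perm))
        s = strength_of_numbering(edges, f)
        if best is None or s < best:
            best = s
    return best

# Path a-b-c: putting label 1 on the centre vertex is optimal.
print(strength(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')]))  # -> 4 = |V| + delta
```

For this path, $|V(G)|+\delta(G) = 3 + 1 = 4$, matching the bound discussed in
the abstract.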
|
Today's intelligent applications can achieve high accuracy using machine
learning (ML) techniques, such as deep neural networks (DNNs). Traditionally,
in a remote DNN inference problem, an edge device transmits raw data to a
remote node that performs the inference task. However, this may incur high
transmission energy costs and put data privacy at risk. In this paper, we
propose a technique to reduce the total energy bill at the edge device by
utilizing model compression and time-varying model split between the edge and
remote nodes. The time-varying representation accounts for time-varying
channels and can significantly reduce the total energy at the edge device while
maintaining high accuracy (low loss). We implement our approach in an image
classification task using the MNIST dataset, and the system environment is
simulated as a trajectory navigation scenario to emulate different channel
conditions. Numerical simulations show that our proposed solution results in
minimal energy consumption and $CO_2$ emission compared to the considered
baselines while exhibiting robust performance across different channel
conditions and bandwidth regime choices.
|
Neural network-based learning of the distribution of non-dispatchable
renewable electricity generation from sources such as photovoltaics (PV) and
wind as well as load demands has recently gained attention. Normalizing flow
density models are particularly well suited for this task due to the training
through direct log-likelihood maximization. However, research from the field of
image generation has shown that standard normalizing flows can only learn
smeared-out versions of manifold distributions. Previous works on normalizing
flow-based scenario generation do not address this issue, and the smeared-out
distributions result in the sampling of noisy time series. In this paper, we
propose reducing the dimensionality through principal component analysis (PCA),
which sets up the normalizing flow in a lower-dimensional space while
maintaining the direct and computationally efficient likelihood maximization.
We train the resulting principal component flow (PCF) on data of PV and wind
power generation as well as load demand in Germany in the years 2013 to 2015.
The results of this investigation show that the PCF preserves critical features
of the original distributions, such as the probability density and frequency
behavior of the time series. The application of the PCF is, however, not
limited to renewable power generation but rather extends to any data set,
time series or otherwise, that can be efficiently reduced using PCA.
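As a rough illustration of the dimensionality-reduction step (this is not the
authors' implementation, and the data below are synthetic stand-ins for
generation profiles), PCA via an SVD of the centred data maps each series into
a low-dimensional space in which a density model such as a normalizing flow
could then be trained:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for daily profiles: 500 series of length 24 that lie
# on a 3-dimensional linear manifold plus small noise.
basis = rng.normal(size=(3, 24))
data = rng.normal(size=(500, 3)) @ basis + 0.01 * rng.normal(size=(500, 24))

# PCA via SVD of the centred data matrix.
mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
k = 3
components = Vt[:k]                    # principal directions
latent = (data - mean) @ components.T  # low-dimensional coordinates

# A flow would be trained on `latent`; its samples are mapped back to the
# original space by the inverse projection.
reconstructed = latent @ components + mean
print(np.abs(reconstructed - data).max())  # small, noise-level residual
```

The point of the construction is that the flow never sees the smeared-out
full-dimensional manifold, only the well-spread low-dimensional coordinates.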
|
Cosmic rays interacting with the atmosphere result in a flux of secondary
particles including muons and electrons. Atmospheric ray tomography (ART) uses
the muons and electrons for detecting objects and their composition. This paper
presents new methods and a proof-of-concept tomography system developed for the
ART of low-Z materials. We introduce the Particle Track Filtering (PTF) and
Multi-Modality Tomographic Reconstruction (MMTR) methods. Based on Geant4
models, we optimized the tomography system and the parameters of PTF and MMTR.
Using plastic scintillating fiber arrays, we achieved a spatial resolution of
120 $\mu$m and an angular resolution of 1 mrad in track reconstruction. We
developed a novel edge detection method to separate the logical volumes of the
scanned object. We show its effectiveness on single-material (e.g. water,
aluminum) and double-material (e.g. explosive RDX in flesh) objects. The
tabletop tomograph we built showed excellent agreement between simulations and
measurements. We are able to increase the discriminating power of ART on low-Z
materials significantly. This work opens up new routes for the
commercialization of ART.
|
Stochastic gradient descent (SGD) has become the most attractive optimization
method in training large-scale deep neural networks due to its simplicity, low
computational cost in each updating step, and good performance. Standard
excess risk bounds show that SGD only needs to take one pass over the training
data and that more passes cannot help to improve the performance. Empirically,
however, it has been observed that SGD taking more than one pass over the
training data (multi-pass SGD) achieves much better performance than SGD
taking only one pass over the training data (one-pass SGD). It is not clear
how to explain this phenomenon in theory. In this paper, we provide
theoretical evidence for why multiple passes over the training data can help
improve performance under certain circumstances. Specifically, we consider
smooth risk minimization problems whose objective function is a non-convex
least squares loss. Under the Polyak-Lojasiewicz (PL) condition, we establish
a faster convergence rate of the excess risk bound for multi-pass SGD than for
one-pass SGD.
|
Two-dimensional magnetic skyrmions are particle-like magnetic domains in
magnetic thin films. The kinetic property of the magnetic skyrmions at finite
temperature is well described by the Thiele equation, including a stochastic
field and a finite mass. In this paper, the validity of the constant-mass
approximation is examined by comparing the Fourier spectrum of Brownian motions
described by the Thiele equation and the Landau-Lifshitz-Gilbert equation.
Then, the 4-dimensional Fokker-Planck equation is derived from the Thiele
equation with a mass-term. Consequently, an expression of the diffusion flow
and diffusion constant in a tensor form is derived, extending Chandrasekhar's
method for Thiele dynamics.
|
Nano Electro Mechanical (NEM) contact switches have been widely studied as
one of the alternatives to the classical field effect transistor (FET). An
ideal NEM contact switch with a hysteresis-free switching slope (SS) of 0
mV/dec is desired to achieve the ultimate scaling of complementary metal oxide
semiconductor (CMOS) integrated circuits (ICs), but has never been realized.
Here we show a low pull-in voltage, hysteresis-free graphene-based NEM contact
switch with hBN as a contact layer. The hysteresis voltage is greatly reduced
by exploiting the weak adhesion energy between graphene and hexagonal boron
nitride (hBN). The graphene NEM contact switch with an hBN contact exhibits a
low pull-in voltage of < 2 V, a long contact lifetime of more than 6x10^4
switching cycles, an ON/OFF ratio of four orders of magnitude (10^4), and a
hysteresis voltage as small as < 0.1 V. Our G-hBN NEM contact switch can
potentially be used in ultra-low-power, energy-efficient CMOS ICs.
|
In recent work [arXiv:2003.06939v2] a novel fermion to qubit mapping --
called the compact encoding -- was introduced which outperforms all previous
local mappings in both the qubit to mode ratio, and the locality of mapped
operators. There the encoding was demonstrated for square and hexagonal
lattices. Here we present an extension of that work by illustrating how to
apply the compact encoding to other regular lattices. We give constructions for
variants of the compact encoding on all regular tilings with maximum degree 4.
These constructions yield edge operators with Pauli weight at most 3 and use
fewer than 1.67 qubits per fermionic mode. Additionally we demonstrate how the
compact encoding may be applied to a cubic lattice, yielding edge operators
with Pauli weight no greater than 4 and using approximately 2.5 qubits per
mode. In order to properly analyse the compact encoding on these lattices a
more general group theoretic framework is required, which we elaborate upon in
this work. We expect this framework to find use in the design of fermion to
qubit mappings more generally.
|
We propose and demonstrate a method to characterize a gated InGaAs
single-photon detector (SPD). Ultrashort weak coherent pulses, from a
mode-locked sub-picosecond pulsed laser, were used to measure photon counts, at
varying arrival times relative to the start of the SPD gate voltage. The uneven
detection probabilities within the gate window were used to estimate the
afterpulse probability with respect to various detector parameters: excess
bias, width of the gate window, and hold-off time. Using a power-law fit to
the decay in afterpulse probability, we estimated a half-life of 2.1
microseconds for the trapped carriers. Finally, we quantify the timing jitter
of the SPD using a time-to-digital converter with a resolution of 55 ps.
|
A review is made of the field of contextuality in quantum mechanics. We study
the historical emergence of the concept from philosophical and logical issues.
We present and compare the main theoretical frameworks that have been derived.
Finally, we focus on the complex task of establishing experimental tests of
contextuality. Throughout this work, we try to show that the conceptualisation
of contextuality has progressed through different complementary perspectives,
before bringing them together to analyse the significance of contextuality
experiments. In doing so, we argue that contextuality emerged as a discrete
logical problem and developed into a quantifiable quantum resource.
|
High-performance hybrid automatic speech recognition (ASR) systems are often
trained with clustered triphone outputs, and thus require a complex training
pipeline to generate the clustering. The same complex pipeline is often
utilized in order to generate an alignment for use in frame-wise cross-entropy
training. In this work, we propose a flat-start factored hybrid model trained
by modeling the full set of triphone states explicitly without relying on
clustering methods. This greatly simplifies the training of new models.
Furthermore, we study the effect of different alignments used for Viterbi
training. Our proposed models achieve competitive performance on the
Switchboard task compared to systems using clustered triphones and other
flat-start models in the literature.
|
Motivated by M-theory, we study rank n K-theoretic Donaldson-Thomas theory on
a toric threefold X. In the presence of compact four-cycles, we discuss how to
include the contribution of D4-branes wrapping them. Combining this with a
simple assumption on the (in)dependence on Coulomb moduli in the 7d theory, we
show that the partition function factorizes and, when X is Calabi-Yau and it
admits an ADE ruling, it reproduces the 5d master formula for the geometrically
engineered theory on A(n-1) ALE space, thus extending the usual geometric
engineering dictionary to n>1. We finally speculate about implications for
instanton counting on Taub-NUT.
|
We consider a finite abelian group $M$ of odd exponent $n$ with a symplectic
form $\omega: M\times M\to \mu_n$ and the Heisenberg extension $1\to \mu_n\to
H\to M\to 1$ with the commutator $\omega$. According to the Stone-von Neumann
theorem, $H$ admits an irreducible representation with the tautological
central character (defined up to a non-unique isomorphism). We construct such
an irreducible representation of $H$ defined up to a unique isomorphism, which
is thus canonical in this sense.
|
In this paper, we study communication-efficient distributed stochastic
gradient descent (SGD) with data sets of users distributed over a certain area
and communicating through wireless channels. Since the time for one iteration
in the proposed approach is independent of the number of users, it is
well-suited to scalable distributed SGD. Furthermore, since the proposed
approach is based on preamble-based random access, which is widely adopted for
machine-type communication (MTC), it can be easily employed for training models
with a large number of devices in various Internet-of-Things (IoT) applications
where MTC is used for their connectivity. For fading channels, we show that
noncoherent combining can be used; as a result, no channel state information
(CSI) estimation is required. From analysis and simulation results, we can
confirm that the proposed approach is not only scalable, but also provides
improved performance as the number of devices increases.
|
Using a systematic, symmetry-preserving continuum approach to the Standard
Model strong-interaction bound-state problem, we deliver parameter-free
predictions for all semileptonic $B_c \to \eta_c, J/\psi$ transition form
factors on the complete domains of empirically accessible momentum transfers.
Working with branching fractions calculated therefrom, the following values of
the ratios for $\tau$ over $\mu$ final states are obtained:
$R_{\eta_c}=0.313(22)$ and $R_{J/\psi}=0.242(47)$. Combined with other recent
results, our analysis confirms a $2\sigma$ discrepancy between the Standard
Model prediction for $R_{J/\psi}$ and the single available experimental result.
|
Session-based recommendation aims to predict a user's next action based on
historical behaviors in an anonymous session. For better recommendations, it
is vital to capture user preferences as well as their dynamics: preferences
evolve over time, and each preference has its own evolving track. However,
most previous works neglect the evolving trend of preferences and can easily
be disturbed by the effect of preference drifting.
In this paper, we propose a novel Preference Evolution Networks for
session-based Recommendation (PEN4Rec) to model preference evolving process by
a two-stage retrieval from historical contexts. Specifically, the first-stage
process integrates relevant behaviors according to recent items. Then, the
second-stage process models the preference evolving trajectory over time
dynamically and infers rich preferences. The process can strengthen the effect
of relevant sequential behaviors during the preference evolution and weaken the
disturbance from preference drifting. Extensive experiments on three public
datasets demonstrate the effectiveness and superiority of the proposed model.
|
The objective of this work is to localize sound sources that are visible in a
video without using manual annotations. Our key technical contribution is to
show that, by training the network to explicitly discriminate challenging image
fragments, even for images that do contain the object emitting the sound, we
can significantly boost the localization performance. We do so elegantly by
introducing a mechanism to mine hard samples and add them to a contrastive
learning formulation automatically. We show that our algorithm achieves
state-of-the-art performance on the popular Flickr SoundNet dataset.
Furthermore, we introduce the VGG-Sound Source (VGG-SS) benchmark, a new set of
annotations for the recently-introduced VGG-Sound dataset, where the sound
sources visible in each video clip are explicitly marked with bounding box
annotations. This dataset is 20 times larger than analogous existing ones,
contains 5K videos spanning over 200 categories, and, differently from Flickr
SoundNet, is video-based. On VGG-SS, we also show that our algorithm achieves
state-of-the-art performance against several baselines.
|
In this paper, several weighted summation formulas of $q$-hyperharmonic
numbers are derived. As special cases, several formulas of hyperharmonic
numbers of type $\sum_{\ell=1}^{n} {\ell}^{p} H_{\ell}^{(r)}$ and
$\sum_{\ell=0}^{n} {\ell}^{p} H_{n-\ell}^{(r)}$ are obtained.
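For the classical ($q=1$) special case, the quantities appearing in these sums
can be computed directly with exact rational arithmetic. A small sketch
(helper names are mine, not from the paper), using the standard recursion
$H_n^{(r)} = \sum_{k=1}^{n} H_k^{(r-1)}$ with $H_n^{(1)}$ the harmonic number:

```python
from fractions import Fraction

def hyperharmonic(n, r):
    """H_n^{(r)}: H_n^{(1)} is the harmonic number; H_n^{(r)} sums H_k^{(r-1)}."""
    if r == 1:
        return sum(Fraction(1, k) for k in range(1, n + 1))
    return sum(hyperharmonic(k, r - 1) for k in range(1, n + 1))

def weighted_sum(n, p, r):
    """The first sum in the abstract: sum_{l=1}^{n} l^p * H_l^{(r)}."""
    return sum(Fraction(l) ** p * hyperharmonic(l, r) for l in range(1, n + 1))

print(hyperharmonic(2, 2))   # H_1 + H_2 = 1 + 3/2 = 5/2
print(weighted_sum(2, 1, 1)) # 1*H_1 + 2*H_2 = 4
```

The closed form $H_n^{(r)} = \binom{n+r-1}{r-1}(H_{n+r-1} - H_{r-1})$ gives a
quick consistency check of the recursion.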
|
Estimating 3D bounding boxes from monocular images is an essential component
in autonomous driving, while accurate 3D object detection from this kind of
data is very challenging. In this work, through intensive diagnostic
experiments, we quantify the impact introduced by each sub-task and find that
the `localization error' is the vital factor restricting monocular 3D
detection. Besides, we
also investigate the underlying reasons behind localization errors, analyze the
issues they might bring, and propose three strategies. First, we revisit the
misalignment between the center of the 2D bounding box and the projected center
of the 3D object, which is a vital factor leading to low localization accuracy.
Second, we observe that accurately localizing distant objects with existing
technologies is almost impossible, while those samples will mislead the learned
network. To this end, we propose to remove such samples from the training set
for improving the overall performance of the detector. Lastly, we also propose
a novel 3D IoU oriented loss for the size estimation of the object, which is
not affected by `localization error'. We conduct extensive experiments on the
KITTI dataset, where the proposed method achieves real-time detection and
outperforms previous methods by a large margin. The code will be made available
at: https://github.com/xinzhuma/monodle.
|
We introduce and analyze a space-time hybridized discontinuous Galerkin
method for the evolutionary Navier--Stokes equations. Key features of the
numerical scheme include point-wise mass conservation, energy stability, and
pressure robustness. We prove that there exists a solution to the resulting
nonlinear algebraic system in two and three spatial dimensions, and that this
solution is unique in two spatial dimensions under a small data assumption. A
priori error estimates are derived for the velocity in a mesh-dependent energy
norm.
|
Modeling a crystal as a periodic point set, we present a fingerprint
consisting of density functions that facilitates the efficient search for new
materials and material properties. We prove invariance under isometries,
continuity, and completeness in the generic case, which are necessary features
for the reliable comparison of crystals. The proof of continuity integrates
methods from discrete geometry and lattice theory, while the proof of generic
completeness combines techniques from geometry with analysis. The fingerprint
has a fast algorithm based on Brillouin zones and related inclusion-exclusion
formulae. We have implemented the algorithm and describe its application to
crystal structure prediction.
|
The decomposition of the overall effect of a treatment into direct and
indirect effects is here investigated with reference to a recursive system of
binary random variables. We show how, for the single mediator context, the
marginal effect measured on the log odds scale can be written as the sum of the
indirect and direct effects plus a residual term that vanishes under some
specific conditions. We then extend our definitions to situations involving
multiple mediators and address research questions concerning the decomposition
of the total effect when some mediators on the pathway from the treatment to
the outcome are marginalized over. Connections to the counterfactual
definitions of the effects are also made. Data coming from an encouragement
design on students' attitude to visit museums in Florence, Italy, are
reanalyzed. The estimates of the defined quantities are reported together with
their standard errors to compute p-values and form confidence intervals.
|
Room temperature two-dimensional (2D) ferromagnetism is highly desired in
practical spintronics applications. Recently, 1T phase CrTe2 (1T-CrTe2)
nanosheets with five and thicker layers have been successfully synthesized,
which all exhibit the properties of ferromagnetic (FM) metals with Curie
temperatures around 305 K. However, whether this ferromagnetism can be
maintained when the nanosheet's thickness is continuously reduced to the
monolayer limit remains unknown. Here, through first-principles calculations,
we explore
the evolution of magnetic properties of 1 to 6 layers CrTe2 nanosheets and
several interesting points are found: First, unexpectedly, monolayer CrTe2
prefers a zigzag antiferromagnetic (AFM) state with its energy much lower than
that of FM state. Second, in 2 to 4 layers CrTe2, both the intralayer and
interlayer magnetic coupling are AFM. Last, when the number of layers is equal
to or greater than five, the intralayer and interlayer magnetic coupling become
FM. Theoretical analysis reveals that the in-plane lattice contraction of
few-layer CrTe2 relative to the bulk is the main factor driving the intralayer
AFM-FM transition. Moreover, once the intralayer coupling becomes FM, the
interlayer coupling concomitantly switches from AFM to FM. Such
highly thickness dependent magnetism provides a new perspective to control the
magnetic properties of 2D materials.
|
Whereas the Si photonic platform is highly attractive for scalable optical
quantum information processing, it lacks practical solutions for efficient
photon generation. Self-assembled semiconductor quantum dots (QDs) efficiently
emitting photons in the telecom bands ($1460-1625$ nm) allow for heterogeneous
integration with Si. In this work, we report on a novel, robust, and
industry-compatible approach for achieving single-photon emission from InAs/InP
QDs heterogeneously integrated with a Si substrate. As a proof of concept, we
demonstrate a simple vertically emitting device, employing a metallic mirror
beneath the QD emitter, and experimentally obtain photon extraction
efficiencies of $\sim10\%$. Nevertheless, the figures of merit of our
structures are comparable with values previously achieved only for QDs
emitting at shorter wavelengths or through technically demanding fabrication
processes. Our architecture and the simple fabrication procedure allow for the
demonstration of single-photon generation with purity $\mathcal{P}>98\%$ at
liquid helium temperature and $\mathcal{P}=75\%$ at $80$ K.
|
Crowd-sourced traffic data offer great promise in environmental modeling.
However, archives of such traffic data are typically not made available for
research; instead, the data must be acquired in real time. The objective of
this paper is to present methods we developed for acquiring and analyzing time
series of real-time crowd-sourced traffic data. We present scripts, which can
be run in Unix/Linux like computational environments, to automatically download
tiles of crowd-sourced Google traffic congestion maps for a user-specifiable
region of interest. Broad and international applicability of our method is
demonstrated for Manhattan in New York City and Mexico City. We also
demonstrate that Google traffic data can be used to quantify decreases in
traffic congestion due to social distancing policies implemented to curb the
COVID-19 pandemic in the South Bronx in New York City.
|
Research has shown that Educational Robotics (ER) enhances student
performance, interest, engagement and collaboration. However, until now, the
adoption of robotics in formal education has remained relatively scarce. Among
other causes, this is due to the difficulty of determining the alignment of
educational robotic learning activities with the learning outcomes envisioned
by the curriculum, as well as their integration with traditional, non-robotics
learning activities that are well established in teachers' practices. This work
investigates the integration of ER into formal mathematics education, through a
quasi-experimental study employing the Thymio robot and Scratch programming to
teach geometry to two classes of 15-year-old students, for a total of 26
participants. Three research questions were addressed: (1) Should an ER-based
theoretical lecture precede, succeed or replace a traditional theoretical
lecture? (2) What is the students' perception of and engagement in the ER-based
lecture and exercises? (3) Do the findings differ according to students' prior
appreciation of mathematics? The results suggest that ER activities are as
valid as traditional ones in helping students grasp the relevant theoretical
concepts. Robotics activities seem particularly beneficial during exercise
sessions: students freely chose to do exercises that included the robot, rated
them as significantly more interesting and useful than their traditional
counterparts, and expressed their interest in introducing ER in other
mathematics lectures. Finally, the results were generally consistent between
students who liked and who did not like mathematics, suggesting the use of
robotics as a means to broaden the number of students engaged in the
discipline.
|
Recent studies on the analysis of the multilingual representations focus on
identifying whether there is an emergence of language-independent
representations, or whether a multilingual model partitions its weights among
different languages. While most of such work has been conducted in a
"black-box" manner, this paper aims to analyze individual components of a
multilingual neural translation (NMT) model. In particular, we look at the
encoder self-attention and encoder-decoder attention heads (in a many-to-one
NMT model) that are more specific to the translation of a certain language pair
than others by (1) employing metrics that quantify some aspects of the
attention weights such as "variance" or "confidence", and (2) systematically
ranking the importance of attention heads with respect to translation quality.
Experimental results show that, surprisingly, the set of most important
attention heads is very similar across the language pairs, and that it is
possible to remove nearly one-third of the less important heads without hurting
the translation quality greatly.
|
The CMS experiment at the LHC has measured the differential cross sections of
Z bosons decaying to pairs of leptons, as functions of transverse momentum and
rapidity, in lead-lead collisions at a nucleon-nucleon center-of-mass energy of
5.02 TeV. The measured Z boson elliptic azimuthal anisotropy coefficient is
compatible with zero, showing that Z bosons do not experience significant
final-state interactions in the medium produced in the collision. Yields of Z
bosons are compared to Glauber model predictions and are found to deviate from
these expectations in peripheral collisions, indicating the presence of initial
collision geometry and centrality selection effects. The precision of the
measurement allows, for the first time, for a data-driven determination of the
nucleon-nucleon integrated luminosity as a function of lead-lead centrality,
thereby eliminating the need for its estimation based on a Glauber model.
|
We study the statistical theory of offline reinforcement learning (RL) with
deep ReLU network function approximation. We analyze a variant of fitted-Q
iteration (FQI) algorithm under a new dynamic condition that we call Besov
dynamic closure, which encompasses the conditions from prior analyses for deep
neural network function approximation. Under Besov dynamic closure, we prove
that the FQI-type algorithm enjoys the sample complexity of
$\tilde{\mathcal{O}}\left( \kappa^{1 + d/\alpha} \cdot \epsilon^{-2 -
2d/\alpha} \right)$ where $\kappa$ is a distribution shift measure, $d$ is the
dimensionality of the state-action space, $\alpha$ is the (possibly fractional)
smoothness parameter of the underlying MDP, and $\epsilon$ is a user-specified
precision. This is an improvement over the sample complexity of
$\tilde{\mathcal{O}}\left( K \cdot \kappa^{2 + d/\alpha} \cdot \epsilon^{-2 -
d/\alpha} \right)$ in the prior result [Yang et al., 2019] where $K$ is an
algorithmic iteration number which is arbitrarily large in practice.
Importantly, our sample complexity is obtained under the new general dynamic
condition and a data-dependent structure where the latter is either ignored in
prior algorithms or improperly handled by prior analyses. This is the first
comprehensive analysis for offline RL with deep ReLU network function
approximation under a general setting.
|
As a means for testing whether a group of agents jointly maximize random
utility, we introduce the correlated random utility model, which posits that
agents face correlated random draws of preferences that govern their
decisions. We study joint random utility maximization through the lens of
joint stochastic choice data (a correlated choice rule), a novel type of data
in the stochastic choice framework. Key is
the property of marginality, which demands the independence of any given
agent's marginal choices from the budgets faced by the remaining agents.
Marginality permits the construction of well-defined marginal stochastic choice
functions. Marginality and non-negativity of an analogue of the Block-Marschak
polynomials characterize joint random utility maximization for small
environments. For larger environments, we offer an example of a correlated
choice rule establishing that each of the marginal stochastic choice rules may
be stochastically rational while the correlated choice rule is not.
|
(abridged) Context. The origin of hot exozodiacal dust and its connection
with outer dust reservoirs remains unclear. Aims. We aim to explore the
possible connection between hot exozodiacal dust and warm dust reservoirs (>
100 K) in asteroid belts. Methods. We use precision near-infrared
interferometry with VLTI/PIONIER to search for resolved emission at H band
around a selected sample of nearby stars. Results. Our observations reveal the
presence of resolved near-infrared emission around 17 out of 52 stars, four of
which are shown to be due to a previously unknown stellar companion. The 13
other H-band excesses are thought to originate from the thermal emission of hot
dust grains. Taking into account earlier PIONIER observations, and after
reevaluating the warm dust content of all our PIONIER targets through spectral
energy distribution modeling, we find a detection rate of 17.1(+8.1)(-4.6)% for
H-band excess around main sequence stars hosting warm dust belts, which is
statistically compatible with the occurrence rate of 14.6(+4.3)(-2.8)% found
around stars showing no signs of warm dust. After correcting for the
sensitivity loss due to partly unresolved hot disks, under the assumption that
they are arranged in a thin ring around their sublimation radius, we however
find tentative evidence at the 3{\sigma} level that H-band excesses around
stars with outer dust reservoirs (warm or cold) could be statistically larger
than H-band excesses around stars with no detectable outer dust. Conclusions.
Our observations do not suggest a direct connection between warm and hot dust
populations, at the sensitivity level of the considered instruments, although
they bring to light a possible correlation between the level of H-band excesses
and the presence of outer dust reservoirs in general.
|
We perform Brownian dynamics simulations of active stiff polymers undergoing
run-reverse dynamics, and so mimic bacterial swimming, in porous media. In
accord with recent experiments of \emph{Escherichia coli}, the polymer dynamics
are characterized by trapping phases interrupted by directed hopping motion
through the pores. We find that the effective translational diffusivities of
run-reverse agents can be enhanced by up to two orders of magnitude compared
to their non-reversing counterparts, and exhibit a non-monotonic behavior as a
function of the reversal rate, which we rationalize using a coarse-grained
model. Furthermore, we discover a geometric criterion for the optimal
spreading, which emerges when their run lengths are comparable to the longest
straight path available in the porous medium. More significantly, our criterion
unifies results for porous media with disparate pore sizes and shapes and thus
provides a fundamental principle for optimal transport of microorganisms and
cargo-carriers in densely-packed biological and environmental settings.
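The run-reverse dynamics described above can be illustrated with a minimal sketch: a 2D point walker that flips its heading by 180 degrees at a given reversal rate. This is only a schematic of the motility pattern, not the paper's model — the polymer stiffness, the porous obstacles, and all parameter values (`speed`, `dt`, `reversal_rate`) are illustrative.

```python
import math
import random

def run_reverse_walk(n_steps, reversal_rate, speed=1.0, dt=0.1, seed=0):
    """Simulate a 2D run-and-reverse point walker (illustrative parameters).

    At each time step the walker reverses its heading with probability
    reversal_rate * dt; otherwise it keeps running straight.
    """
    rng = random.Random(seed)
    x = y = 0.0
    theta = rng.uniform(0.0, 2.0 * math.pi)    # random initial heading
    traj = [(x, y)]
    for _ in range(n_steps):
        if rng.random() < reversal_rate * dt:  # run-reverse event
            theta += math.pi                   # 180-degree turnaround
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        traj.append((x, y))
    return traj

def msd(traj):
    """Squared displacement between first and last trajectory points;
    averaging this over many seeded walkers estimates the effective
    diffusivity discussed in the abstract."""
    x0, y0 = traj[0]
    xf, yf = traj[-1]
    return (xf - x0) ** 2 + (yf - y0) ** 2
```

Sweeping `reversal_rate` and averaging `msd` over seeds is how one would probe the non-monotonic diffusivity in a sketch like this; reproducing the paper's result would additionally require the porous-medium geometry.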
|
In this paper, we study normal magnetic curves in $C$-manifolds. We prove
that magnetic trajectories with respect to the contact magnetic fields are
indeed $\theta_{\alpha }$-slant curves with certain curvature functions. Then,
we give the parametrizations of normal magnetic curves in $\mathbb{R}^{2n+s}$
with its structures as a $C$-manifold.
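For context, a magnetic curve on a Riemannian manifold $(M,g)$ endowed with a closed 2-form $F$ (the magnetic field) and associated Lorentz force $\phi$, defined by $g(\phi X, Y) = F(X,Y)$, is a solution of the Lorentz equation below. This is the standard general definition, not a formula taken from this paper:

```latex
% \gamma is a magnetic trajectory of strength q if
\nabla_{\gamma'} \gamma' = q\,\phi(\gamma'),
% and it is called *normal* when \gamma is parametrized by arc length.
```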
|
The population protocol model describes a network of $n$ anonymous agents who
cannot control with whom they interact. The agents collectively solve some
computational problem through random pairwise interactions, each agent updating
its own state in response to seeing the state of the other agent. They are
equivalent to the model of chemical reaction networks, describing abstract
chemical reactions such as $A+B \rightarrow C+D$, when the latter is subject to
the restriction that all reactions have two reactants and two products, and all
rate constants are 1. The counting problem is that of designing a protocol so
that $n$ agents, all starting in the same state, eventually converge to states
where each agent encodes in its state an exact or approximate description of
population size $n$. In this survey paper, we describe recent algorithmic
advances on the counting problem.
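The interaction model itself can be sketched in a few lines: a scheduler repeatedly picks an ordered pair of distinct agents uniformly at random and applies a two-agent transition function. The example rule below is classic one-way rumor spreading ($A+B \rightarrow A+A$), chosen only to illustrate the model — it is not one of the counting protocols surveyed, and all names are illustrative.

```python
import random

def simulate(n, delta, init, steps, seed=0):
    """Minimal population-protocol scheduler: repeatedly pick an
    ordered pair of distinct agents uniformly at random and apply
    the two-agent transition function delta."""
    rng = random.Random(seed)
    states = list(init)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)  # two distinct agents
        states[i], states[j] = delta(states[i], states[j])
    return states

def epidemic(u, v):
    """One-way rumor spreading, A + B -> A + A: once either agent
    knows the rumor (state "A"), both leave the interaction knowing it."""
    if "A" in (u, v):
        return "A", "A"
    return u, v
```

Starting from one informed agent, the rumor reaches all $n$ agents after $O(n \log n)$ interactions in expectation — the same kind of convergence analysis used for the counting protocols the survey describes.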
|
For mission-critical sensing and control applications such as those to be
enabled by 5G Ultra-Reliable, Low-Latency Communications (URLLC), it is
critical to ensure the communication quality of individual packets.
Prior studies have considered Probabilistic Per-packet Real-time
Communications (PPRC) guarantees for single-cell, single-channel networks with
implicit deadline constraints, but they have not addressed real-world
complexities such as inter-cell interference and multiple communication
channels.
Towards ensuring PPRC in multi-cell, multi-channel wireless networks, we
propose a real-time scheduling algorithm based on
\emph{local-deadline-partition (LDP)}. The LDP algorithm is suitable for
distributed implementation, and it ensures probabilistic per-packet real-time
guarantee for multi-cell, multi-channel networks with general deadline
constraints. We also address the associated challenge of the schedulability
test of PPRC traffic. In particular, we propose the concept of \emph{feasible
set} and identify a closed-form sufficient condition for the schedulability of
PPRC traffic.
We propose a distributed algorithm for the schedulability test, and the
algorithm includes a procedure for finding the minimum sum work density of
feasible sets, which is of independent interest. We also identify a necessary
condition for the schedulability of PPRC traffic, and use numerical studies to
understand a lower bound on the approximation ratio of the LDP algorithm.
We experimentally study the properties of the LDP algorithm and observe that
the PPRC traffic supportable by the LDP algorithm is significantly higher than
that of a state-of-the-art algorithm.
|
Reinforcement learning (RL)-based neural architecture search (NAS) generally
guarantees better convergence yet suffers from the requirement of huge
computational resources compared with gradient-based approaches, due to the
rollout bottleneck -- exhaustive training for each sampled generation on proxy
tasks. In this paper, we propose a general pipeline to accelerate the
convergence of the rollout process as well as the RL process in NAS. It is
motivated by the interesting observation that both the architecture and the
parameter knowledge can be transferred between different experiments and even
different tasks. We first introduce an uncertainty-aware critic (value
function) in Proximal Policy Optimization (PPO) to utilize the architecture
knowledge from previous experiments, which stabilizes the training process and
reduces the search time by a factor of 4. Further, an architecture knowledge
pool together with a block similarity function is proposed to utilize parameter
knowledge, reducing the search time by a further factor of 2. This is the first
work to introduce block-level weight sharing in RL-based NAS. The block
similarity function guarantees a 100% hit ratio with strict fairness. Besides,
we show that a simple off-policy correction factor applied to the "replay
buffer" in RL optimization can halve the search time again.
the Mobile Neural Architecture Search (MNAS) search space show the proposed
Fast Neural Architecture Search (FNAS) accelerates standard RL-based NAS
process by ~10x (e.g. ~256 2x2 TPUv2 x days / 20,000 GPU x hour -> 2,000 GPU x
hour for MNAS), and guarantees better performance on various vision tasks.
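The abstract does not specify the exact form of the off-policy correction factor, but the standard ingredient for reusing replay-buffer samples in a policy-gradient update is a clipped importance-sampling ratio between the current and the behavior policy. The sketch below shows that standard construction only; the clipping threshold and function name are illustrative, not taken from the paper.

```python
import math

def off_policy_weight(logp_new, logp_old, clip=2.0):
    """Clipped importance-sampling ratio pi_new / pi_old for a sample
    drawn from a replay buffer. Clipping bounds the variance of the
    reweighted gradient estimate; clip=2.0 is an illustrative choice."""
    ratio = math.exp(logp_new - logp_old)
    return max(min(ratio, clip), 1.0 / clip)
```

A weight of 1.0 recovers the on-policy case; stale samples whose action probabilities have drifted are down- or up-weighted, but never beyond the clipping band.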
|
This study creates a physiologically realistic virtual patient database
(VPD), representing the human arterial system, for the primary purpose of
studying the effects of arterial disease on haemodynamics. A low-dimensional
representation of an anatomically detailed arterial network is outlined, and a
physiologically realistic posterior distribution for its parameters is
constructed through a Bayesian approach. This approach combines both
physiological/geometrical constraints and the available measurements reported
in the literature. A key contribution of this work is to present a framework
for including all such available information for the creation of virtual
patients (VPs). The Markov Chain Monte Carlo (MCMC) method is used to sample
random VPs from this posterior distribution, and the pressure and flow-rate
profiles associated with the VPs are computed through a model of pulse wave
propagation. This combination of the arterial network parameters (representing
the VPs) and the haemodynamics waveforms of pressure and flow-rates at various
locations (representing functional response of the VPs) makes up the VPD. While
75,000 VPs are sampled from the posterior distribution, 10,000 are discarded as
the initial burn-in period. A further 12,857 VPs are subsequently removed due
to the presence of negative average flow-rate. Due to an undesirable behaviour
observed in some VPs -- asymmetric under- and over-damped pressure and
flow-rate profiles in the left and right sides of the arterial system -- a
filter is proposed for their removal. The final VPD has 28,868 subjects. It is
shown that the methodology is appropriate by comparing the VPD statistics to
those reported in literature across real populations. A good agreement between
the two is found while respecting physiological/geometrical constraints. The
pre-filter database is made available at
https://doi.org/10.5281/zenodo.4549764.
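The sampling pipeline described above — MCMC draws, burn-in removal, then post-hoc filtering of invalid samples — can be sketched on a toy problem. The random-walk Metropolis sampler below targets a 1D standard normal as a stand-in for the paper's high-dimensional arterial-network posterior; the validity check is a placeholder for the negative-flow-rate filter, and all parameter values are illustrative.

```python
import math
import random

def metropolis(logpost, x0, n_samples, burn_in, step=0.5, seed=0):
    """Random-walk Metropolis sampler; the first `burn_in` draws are
    discarded, mirroring the burn-in removal described in the abstract."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)       # symmetric proposal
        lp_cand = logpost(cand)
        if math.log(rng.random()) < lp_cand - lp:  # accept/reject
            x, lp = cand, lp_cand
        chain.append(x)
    return chain[burn_in:]

# Toy stand-in for the arterial-network posterior: a standard normal.
def log_normal(x):
    return -0.5 * x * x

def filter_valid(chain, valid=lambda x: x > -3.0):
    """Post-hoc filtering, analogous to discarding virtual patients with
    negative average flow-rate: keep only samples passing a validity check."""
    return [x for x in chain if valid(x)]
```

In the paper's terms, each retained sample would be one virtual patient; here the "functional response" step (pulse-wave propagation) is omitted entirely.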
|