We establish a platform to transfer $L_p$-completely bounded maps on tensor
products of von Neumann algebras to $L_p$-completely bounded maps on the
corresponding amalgamated free products. As a consequence, we obtain a
H\"ormander-Mikhlin multiplier theory for free products of groups. Let
$\mathbb{F}_\infty$ be a free group on infinitely many generators $\{g_1,
g_2,\cdots\}$. Given $d\ge1$ and a bounded symbol $m$ on $\mathbb{Z}^d$
satisfying the classical H\"ormander-Mikhlin condition, the linear map
$M_m:\mathbb{C}[\mathbb{F}_\infty]\to \mathbb{C}[\mathbb{F}_\infty]$ defined by
$\lambda(g)\mapsto m(k_1,\cdots, k_d)\lambda(g)$ for $g=g_{i_1}^{k_1}\cdots
g_{i_n}^{k_n}\in\mathbb{F}_\infty$ in reduced form (with $k_l=0$ in
$m(k_1,\cdots, k_d)$ for $l>n$), extends to a completely bounded map on
$L_p(\widehat{\mathbb{F}}_\infty)$ for all $1<p<\infty$, where
$\widehat{\mathbb{F}}_\infty$ is the group von Neumann algebra of
$\mathbb{F}_\infty$. A similar result holds for any free product of discrete
groups.
|
[Context & motivation] Driven by the need for faster time-to-market and
reduced development lead-time, large-scale systems engineering companies are
adopting agile methods in their organizations. This agile transformation is
challenging and it is common that adoption starts bottom-up with agile software
teams within the context of traditional company structures.
[Question/Problem] This creates the challenge of agile teams working within a
document-centric and plan-driven (or waterfall) environment. While it may be
desirable to take the best of both worlds, it is not clear how that can be
achieved especially with respect to managing requirements in large-scale
systems.
[Principal ideas/Results] This paper presents an exploratory case study at an
automotive company, focusing on two departments of this large-scale systems
company, which is in the process of company-wide agile adoption.
[Contribution] We present challenges related to requirements engineering that
agile teams face while working within a larger plan-driven context and propose
potential strategies to mitigate the challenges. Challenges relate to, e.g.,
development teams not being aware of high-level requirements and dealing
with the flexibility of writing user stories. We found that strategies for
overcoming most of these challenges are still lacking and thus call for more
research.
|
One of the critical challenges facing imaging studies of the 21-cm signal at
the Epoch of Reionization (EoR) is the separation of astrophysical foreground
contamination. These foregrounds are known to lie in a wedge-shaped region of
$(k_{\perp},k_{\parallel})$ Fourier space. Removing these Fourier modes excises
the foregrounds at grave expense to image fidelity, since the cosmological
information at these modes is also removed by the wedge filter. However, the
21-cm EoR signal is non-Gaussian, meaning that the lost wedge modes are
correlated to the surviving modes by some covariance matrix. We have developed
a machine learning-based method which exploits this information to identify
ionized regions within a wedge-filtered image. Our method reliably identifies
the largest ionized regions and can reconstruct their shape, size, and location
within an image. We further demonstrate that our method remains viable when
instrumental effects are accounted for, using the Hydrogen Epoch of
Reionization Array and the Square Kilometre Array as fiducial instruments. The
ability to recover spatial information from wedge-filtered images unlocks the
potential for imaging studies using current- and next-generation instruments
without relying on detailed models of the astrophysical foregrounds themselves.
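To make the wedge geometry concrete, the following is a minimal sketch (not the authors' pipeline) of a toy wedge cut applied to a simulated image cube; the slope value and grid units are illustrative assumptions, not instrument parameters.

```python
import numpy as np

def wedge_filter(cube, slope=1.0):
    """Zero out Fourier modes with |k_par| <= slope * k_perp (toy wedge cut).

    cube: real array of shape (n_z, n_x, n_y); axis 0 is the line of sight.
    slope: illustrative wedge slope in grid units (instrument-dependent).
    """
    ft = np.fft.fftn(cube)
    k_par = np.fft.fftfreq(cube.shape[0])[:, None, None]
    kx = np.fft.fftfreq(cube.shape[1])[None, :, None]
    ky = np.fft.fftfreq(cube.shape[2])[None, None, :]
    k_perp = np.sqrt(kx**2 + ky**2)
    keep = np.abs(k_par) > slope * k_perp   # modes above the wedge survive
    return np.real(np.fft.ifftn(ft * keep))

rng = np.random.default_rng(0)
toy_cube = rng.normal(size=(32, 32, 32))
print(wedge_filter(toy_cube, slope=1.0).shape)  # (32, 32, 32)
```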
|
We consider a cognitive radio based Internet of Things (CR-IoT) system where
the secondary IoT device (SD) accesses the licensed channel during the
transmission vacancies of the primary IoT device (PD). We focus on the impact
of the IoT devices' heterogeneous traffic pattern on the energy efficiency and
on the age of information (AoI) performance of the SD. We first derive
closed-form expressions of the energy efficiency and the average AoI, and
subsequently explore their convexity and monotonicity with respect to the
transmit power.
Following these characterizations, an optimal transmit power optimization
algorithm (TPOA) is proposed for the SD to maximize the energy efficiency while
maintaining the average AoI under a predefined threshold. Numerical results
verify the different preferences of the SD toward different PD traffic
patterns, and provide insights into the tradeoff between the energy efficiency
and the average AoI.
|
We consider the decoupling theory of a broad class of $C^5$ surfaces
$\mathbb{M} \subset \mathbb{R}^3$ lacking planar points. In particular, our
approach also applies to surfaces which are not graphed by mixed homogeneous
polynomials. The study of $\mathbb{M}$ furnishes an opportunity to recast
iterative linear decoupling in a more general form. Here, Taylor-based analysis
is combined with efforts to build a library of canonical surfaces
(non-cylindrical in general) by which $\mathbb{M}$ may be approximated for
decoupling purposes. The work presented may be generalized to other surfaces
not addressed here.
|
Reconstructing interactions from observational data is a critical need for
investigating natural biological networks, wherein network dimensionality (i.e.
number of interacting components) is usually high and interactions are
time-varying. These pose a challenge to existing methods that can quantify only
small interaction networks or assume static interactions under steady state.
Here, we propose a novel approach to reconstruct high-dimensional,
time-varying interaction networks using empirical time series. This method,
named "multiview distance regularized S-map", generalized the state space
reconstruction to accommodate high dimensionality and overcome difficulties in
quantifying massive interactions with limited data. When we evaluated this
method using the time series generated from a large theoretical model involving
hundreds of interacting species, estimated interaction strengths were in good
agreement with theoretical expectations. As a result, reconstructed networks
preserved important topological properties, such as centrality, strength
distribution and derived stability measures. Moreover, our method effectively
forecasted the dynamic behavior of network nodes. Applying this method to a
natural bacterial community helped identify keystone species from the
interaction network and revealed the mechanisms governing the dynamical
stability of the bacterial community. Our method overcame the challenge of high
dimensionality and disentangled complex time-varying interactions in large
natural dynamical systems.
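The multiview distance regularization is specific to this paper, but the S-map core it builds on — a separate locally weighted linear fit at every time point, whose coefficients act as time-varying interaction strengths — can be sketched as follows (plain S-map only; the names and the weighting kernel follow the standard formulation and are illustrative).

```python
import numpy as np

def smap_coefficients(X, y, theta=1.0):
    """Locally weighted linear fits (S-map style), one per time point.

    X: (T, d) state-space matrix (e.g. lagged abundances); y: (T,) target.
    Returns a (T, d+1) array of local coefficients (intercept last).
    """
    T, d = X.shape
    coefs = np.zeros((T, d + 1))
    for t in range(T):
        dist = np.linalg.norm(X - X[t], axis=1)
        w = np.exp(-theta * dist / (dist.mean() + 1e-12))  # local weights
        A = np.hstack([X, np.ones((T, 1))]) * w[:, None]
        coefs[t], *_ = np.linalg.lstsq(A, y * w, rcond=None)
    return coefs

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -1.0, 0.2]) + 0.1 * rng.normal(size=200)
print(smap_coefficients(X, y, theta=2.0).shape)  # (200, 4)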
|
A central goal of synthetic biology is the design of molecular controllers
that can manipulate the dynamics of intracellular networks in a stable and
accurate manner. To address the fact that detailed knowledge about
intracellular networks is unavailable, integral-feedback controllers (IFCs)
have been put forward for controlling molecular abundances. These controllers
can maintain accuracy in spite of the uncertainties in the controlled networks.
However, this desirable feature is achieved only if stability is also
maintained. In this paper, we show that molecular IFCs can suffer from a
hazardous instability called negative-equilibrium catastrophe (NEC), whereby
all nonnegative equilibria vanish under the action of the controllers, and some
of the molecular abundances blow up. We show that unimolecular IFCs do not
exist due to a NEC. We then derive a family of bimolecular IFCs that are
safeguarded against NECs when uncertain unimolecular networks, with any number
of molecular species, are controlled. However, when IFCs are applied on
uncertain bimolecular (and hence most intracellular) networks, we show that
preventing NECs generally becomes an intractable problem as the number of
interacting molecular species increases.
|
The Dirac equation is solved approximately for the relativistic generalized
Woods-Saxon potential including a Coulomb-like tensor potential in the exact
pseudospin and spin symmetry limits. The bound states energy eigenvalues are
found by using wavefunction boundary conditions, and corresponding radial
wavefunctions are obtained in terms of hypergeometric functions. Some numerical
examples are given for the dependence of bound states energy eigenvalues on
quantum numbers and potential parameters.
|
We introduce a new framework for solving an important class of computational
problems involving finite permutation groups, which includes calculating set
stabilisers, intersections of subgroups, and isomorphisms of combinatorial
structures. Our techniques are inspired by and generalise 'partition
backtrack', which is the current state-of-the-art algorithm introduced by
Jeffrey Leon in 1991. But, instead of ordered partitions, we use labelled
directed graphs to organise our backtrack search algorithms, which allows for a
richer representation of many problems while often resulting in smaller search
spaces. In this article we present the theory underpinning our framework, we
describe our algorithms, and we show the results of some experiments. An
implementation of our algorithms is available as free software in the
GraphBacktracking package for GAP.
|
The early development of a zygote can be mathematically described by a
developmental tree. To compare developmental trees of different species, we
need to define distances on trees. If children cells after a division are not
distinguishable, developmental trees are represented by the space of rooted
trees with possibly repeated labels, where all vertices are unordered. On this
space, we define two metrics: the best-match metric and the left-regular
metric, which show some advantages over existing methods. If children cells
after a division are partially distinguishable, developmental trees are
represented by the space of rooted trees with possibly repeated labels, where
vertices can be ordered or unordered. This space does not admit a metric. Instead,
we define a semimetric, which is a variant of the best-match metric. To compute
the best-match distance between two trees, the expected time complexity and
worst-case time complexity are both $\mathcal{O}(n^2)$, where $n$ is the tree
size. To compute the left-regular distance between two trees, the expected time
complexity is $\mathcal{O}(n)$, and the worst-case time complexity is
$\mathcal{O}(n\log n)$.
|
It has been recognized for some time that even for perfect conductors, the
interaction Casimir entropy, due to quantum/thermal fluctuations, can be
negative. This result was not considered problematic because it was thought
that the self-entropies of the bodies would cancel this negative interaction
entropy, yielding a total entropy that was positive. In fact, this cancellation
seems not to occur. The positive self-entropy of a perfectly conducting sphere
does indeed just cancel the negative interaction entropy of a system consisting
of a perfectly conducting sphere and plate, but a model with weaker coupling in
general possesses a regime where negative self-entropy appears. The physical
meaning of this surprising result remains obscure. In this paper we re-examine
these issues, using improved physical and mathematical techniques, partly based
on the Abel-Plana formula, and present numerical results for arbitrary
temperatures and couplings, which exhibit the same remarkable features.
|
A software architect uses quality requirements to design the architecture of
a system. However, it is essential to ensure that the system's final
architectural design achieves the standard quality requirements. The existing
architectural evaluation frameworks require basic skills and experience for
practical usage, which novice software architects lack.
We propose a framework that enables novice software architects to infer the
system's quality requirements and tactics using the software architectural
block-line diagram. The framework takes an image as input, extracts various
components and connections, and maps them to viable architectural patterns,
followed by identifying the system's corresponding quality attributes (QAs) and
tactics. The framework includes a specifically trained machine learning model
based on image processing and semantic similarity methods to assist software
architects in evaluating a given design by a) evaluating an input architectural
design based on the architectural patterns present in it, b) listing the
strengths and weaknesses of the design in terms of QAs, and c) recommending the
necessary architectural tactics that can be embedded in the design to achieve
the lacking QAs.
To train our framework, we developed a dataset of 2,035 architectural images
from fourteen architectural patterns such as Client-Server, Microservices, and
Model View Controller, available at
https://www.doi.org/10.6084/m9.figshare.14156408. The framework achieves a
Correct Recognition Rate of 98.71% in identifying the architectural patterns.
We evaluated the proposed framework's effectiveness and usefulness by using
control and experimental groups, in which the experimental group performed
approximately 150% better than the control group. The experiments were
performed as part of a Master of Computer Science course at an engineering
institution.
|
Hyperproperties are system properties that require quantification over
multiple execution traces of a system. Hyperproperties can express several
specifications of interest for cyber-physical systems--such as opacity,
robustness, and noninterference--which cannot be expressed using linear-time
properties. This paper presents for the first time a discretization-free
approach for the formal verification of discrete-time uncertain dynamical
systems against hyperproperties. The proposed approach involves decomposition
of complex hyperproperties into several verification conditions by exploiting
the automata-based structures corresponding to the complements of the original
specifications. These verification conditions are then discharged by
synthesizing so-called augmented barrier certificates, which provide certain
safety guarantees for the underlying system. For systems with polynomial-type
dynamics, we present a sound procedure to synthesize polynomial-type augmented
barrier certificates by reducing the problem to sum-of-squares optimizations.
We demonstrate the effectiveness of our proposed approaches on two physical
case studies against two important hyperproperties: initial-state opacity and
initial-state robustness.
|
Millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO)
systems have been considered as one of the primary candidates for the fifth
generation (5G) and beyond 5G wireless communication networks to satisfy the
ever-increasing capacity demands. Full-duplex technology can further enhance
the advantages of mmWave massive MIMO systems. However, strong
self-interference (SI) is the major limiting factor in full-duplex technology.
Hence, this paper proposes a novel angular-based joint hybrid
precoding/combining (AB-JHPC) technique for the full-duplex mmWave massive MIMO
systems. Our primary goals are listed as: (i) improving the self-interference
cancellation (SIC), (ii) increasing the intended signal power, (iii) decreasing
the channel estimation overhead, (iv) designing the massive MIMO systems with a
low number of RF chains. First, the RF-stage of AB-JHPC is developed via slow
time-varying angle-of-departure (AoD) and angle-of-arrival (AoA) information. A
joint transmit/receive RF beamformer design is proposed for covering
(excluding) the AoD/AoA support of intended (SI) channel. Second, the BB-stage
of AB-JHPC is constructed via the reduced-size effective intended channel.
After using the well-known singular value decomposition (SVD) approach at the
BB-stage, we also propose a new semi-blind minimum mean square error (S-MMSE)
technique to further suppress the residual SI power by using AoD/AoA
parameters. The numerical results demonstrate that the SI signal is remarkably
canceled via the proposed AB-JHPC technique. It is shown that AB-JHPC achieves
85.7 dB SIC and that the total amount of SIC increases almost linearly with antenna
isolation techniques. We observe that the proposed full-duplex mmWave massive
MIMO systems double the achievable rate compared to their half-duplex
counterpart as the antenna array size increases and the transmit/receive
antenna isolation improves.
|
We discuss the spectral decomposition of the hypergeometric differential
operators on the line $\mathrm{Re}\, z=1/2$. Such operators arise in the
problem of decomposition of tensor products of unitary representations of the
universal covering of the group $SL(2,{\mathbb R})$. Our main purpose is a
search for natural bases in generalized eigenspaces and variants of the
inversion formula.
|
In this article we prove the existence of a new family of periodic solutions
for discrete, nonlinear Schr\"odinger equations subject to spatially localized
driving and damping and we show numerically that they provide a more accurate
approximation to metastable states in these systems than previous proposals. We
also study the stability properties of these solutions and show that they fit
well with a previously proposed mechanism for the emergence and persistence of
metastable behavior.
|
Serverless computing has emerged as a new paradigm for running short-lived
computations in the cloud. Due to its ability to handle IoT workloads, there
has been considerable interest in running serverless functions at the edge.
However, the constrained nature of the edge and the latency sensitive nature of
workloads result in many challenges for serverless platforms. In this paper, we
present LaSS, a platform that uses model-driven approaches for running
latency-sensitive serverless computations on edge resources. LaSS uses
principled queuing-based methods to determine an appropriate allocation for
each hosted function and auto-scales the allocated resources in response to
workload dynamics. LaSS uses a fair-share allocation approach to guarantee a
minimum of allocated resources to each function in the presence of overload. In
addition, it utilizes resource reclamation methods based on container deflation
and termination to reassign resources from over-provisioned functions to
under-provisioned ones. We implement a prototype of our approach on an
OpenWhisk serverless edge cluster and conduct a detailed experimental
evaluation. Our results show that LaSS can accurately predict the resources
needed for serverless functions in the presence of highly dynamic workloads,
and reprovision container capacity within hundreds of milliseconds while
maintaining fair share allocation guarantees.
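LaSS's actual queuing model is not reproduced here, but a minimal sketch of the general idea — sizing a function's container allocation from an M/M/c queue so that mean response time meets a latency SLO — might look like this. The Erlang-C formula is standard; the rates and SLO below are made-up inputs.

```python
import math

def erlang_c(c, load):
    """Probability an arrival waits in an M/M/c queue (Erlang-C formula)."""
    if load >= c:
        return 1.0
    s = sum(load**k / math.factorial(k) for k in range(c))
    top = load**c / (math.factorial(c) * (1 - load / c))
    return top / (s + top)

def min_containers(arrival_rate, service_rate, latency_slo):
    """Smallest container count whose mean M/M/c response time meets the SLO."""
    load = arrival_rate / service_rate
    c = max(1, math.floor(load) + 1)      # need c > load for stability
    while True:
        wait = erlang_c(c, load) / (c * service_rate - arrival_rate)
        if 1.0 / service_rate + wait <= latency_slo:
            return c
        c += 1

# e.g. 80 req/s, 10 req/s per container, 150 ms mean-latency target
print(min_containers(arrival_rate=80.0, service_rate=10.0, latency_slo=0.15))
```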
|
In this paper, we propose a simple enhancement for the passkey entry protocol
in the authentication stage 1 of Secure Simple Pairing using preexisting
cryptographic hash functions and random integer generation present in the
protocol. The new protocol is more secure and efficient than previously known
protocols. Our research mainly focuses on strengthening the passkey entry
protocol and protecting the devices against passive eavesdropping and active
Man-in-the-middle (MITM) attacks in both Bluetooth Basic Rate/Enhanced Data
Rate (BR/EDR) and Bluetooth Low Energy (Bluetooth LE). This method can be used
for any device which uses the passkey entry protocol.
|
Optimal mechanical impact absorbers are reusable and exhibit high specific
energy absorption. The forced intrusion of liquid water in hydrophobic
nanoporous materials, such as zeolitic imidazolate frameworks (ZIFs), presents
an attractive pathway to engineer such systems. However, to harness their full
potential, it is crucial to understand the underlying water intrusion and
extrusion mechanisms under realistic, high-rate deformation conditions.
Herein, we report a critical increase of the energy absorption capacity of
confined water-ZIF systems at elevated strain rates. Starting from ZIF-8 as
proof-of-concept, we demonstrate that this attractive rate dependence is
generally applicable to cage-type ZIFs but disappears for channel-containing
zeolites. Molecular simulations reveal that this phenomenon originates from the
intrinsic nanosecond timescale needed for critical-sized water clusters to
nucleate inside the nanocages, expediting water transport through the
framework. Harnessing this fundamental understanding, design rules are
formulated to construct effective, tailorable, and reusable impact energy
absorbers for challenging new applications.
|
Monitoring and controlling the state of polarization of electromagnetic waves
is of significant interest for various basic and practical applications such as
linear position sensing and medical imaging. Here, we propose the first
conformal digital metamaterial absorber to detect the polarization state of THz
incident waves. The proposed polarimeter is capable of characterizing four
independent polarization states (TE, TM, $\pm 45^\circ$ linear, and
RCP/LCP) by observing the reflectivity of the structure with respect to the x-
and y-direction. Besides, the proposed structure displays a strong absorptivity
above 90\% up to the incidence angle of $50^{\circ}$ for oblique incident waves
with different polarizations. By merely changing the bias voltage of two
orthogonal VO2 microwires via two independent computer-programmed multichannel
DC networks, distinct conditions for reflected waves occur under excitations of
different polarizations, whereby the polarization state of the incident wave
may readily be estimated. We believe that the proposed metasurface-based
polarimeter can pave the way for polarization detection applications on curved
surfaces.
|
For reductive groups $G$ over a number field we discuss automorphic liftings
from cuspidal irreducible automorphic representations $\pi$ of $G(\mathbb{A})$
to cuspidal irreducible automorphic representations on $H(\mathbb{A})$ for the
quasi-split inner form $H$ of $G$. We show the existence of cohomologically
nontrivial weak global liftings in many cases. A priori these weak liftings do
not give a description of the precise nature of the corresponding local
liftings at the ramified places and in particular do not characterize the image
of the lift. For inner forms of the group $H=\mathrm{GSp}(4)$, however, we
address these finer details. In particular, we prove the recent conjectures of
Ibukiyama and Kitayama on paramodular newforms of squarefree level.
|
In this work, we investigate gravitational baryogenesis in the framework of
$f(P)$ gravity to understand the applicability of this class of modified
gravity in addressing the baryon asymmetry of the Universe. For the analysis,
we set $f(P) = \alpha P$ where $\alpha$ is the model parameter. We found that
in $f(P)$ gravity, the CP-violating interaction acquires a modification through
the addition of the nontopological cubic term $P$ in addition to the Ricci
scalar $R$ and the mathematical expression of the baryon-to-entropy ratio
depends not only on the time derivative of $R$ but also on the time derivative
of $P$. Additionally, we investigate the consequences of a more complete and
generalized CP-violating interaction proportional to $f(P)$ instead of $P$ in
addressing the baryon asymmetry of the Universe. For this type of interaction,
we report that the baryon-to-entropy ratio is proportional to $\dot{R}$,
$\dot{P}$, and $f'(P)$. We report that for both of these cases, rational
values of $\alpha$ and $\chi$ generate acceptable baryon-to-entropy ratios
compatible with observations.
|
We study the dark matter phenomenology of Standard Model extensions
addressing the reported anomaly in the $R_K$ observable at one loop. The
article covers the case of fermionic singlet DM coupling leptophilically,
quarkphilically or amphiphilically to the SM. The setup utilizes a large
coupling of the new particle content to the second lepton generation to explain
the $R_K$ anomaly, which in turn tends to diminish the dark matter relic
density. Further, dark matter direct detection experiments provide stringent
bounds even in cases where the dark matter candidate only contributes a small
fraction of the observed dark matter energy density. In fact, direct detection
rules out all considered models as an explanation for the $R_K$ anomaly in the
case of Dirac dark matter. Conversely, for Majorana dark matter, the $R_K$
anomaly can be addressed in agreement with direct detection in coannihilation
scenarios. For leptophilic dark matter this region only exists for $M_\text{DM}
\lesssim 1000 \, \mathrm{GeV}$ and dark matter is underabundant. Quarkphilic
and amphiphilic scenarios even provide narrow regions of parameter space where
the observed relic density can be reproduced while offering an explanation to
$R_K$ in agreement with direct detection experiments.
|
This paper introduces a new algorithm for line extraction from laser range
data, including a methodology for efficient computation. The task is cast as a
series of one-dimensional problems in various spaces. A fast and simple
specialization of the DBSCAN algorithm is proposed to solve the one-dimensional
subproblems. Experiments suggest that the method is suitable for real-time
applications, handles noise
well and may be useful in practice.
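The paper's exact specialization is not spelled out in the abstract; the following is a plausible minimal sketch of how DBSCAN degenerates in one dimension, where eps-neighborhoods of sorted values are contiguous runs (this simplifies the core-point rule and is offered only as an illustration).

```python
def dbscan_1d(points, eps, min_pts):
    """Cluster 1-D values with a linear sweep over the sorted list: runs of
    eps-close points form clusters; runs smaller than min_pts are noise.
    Assumes a non-empty input."""
    pts = sorted(points)
    clusters, current = [], [pts[0]]
    for a, b in zip(pts, pts[1:]):
        if b - a <= eps:
            current.append(b)
        else:
            clusters.append(current)
            current = [b]
    clusters.append(current)
    return [c for c in clusters if len(c) >= min_pts]

# two clusters; the isolated 9.9 is discarded as noise
print(dbscan_1d([0.1, 0.2, 0.25, 5.0, 5.1, 9.9], eps=0.3, min_pts=2))
```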
|
We introduce a new sparse sliced inverse regression estimator called Cholesky
matrix penalization and its adaptive version for achieving sparsity in
estimating the dimensions of the central subspace. The new estimators use the
Cholesky decomposition of the covariance matrix of the covariates and include a
regularization term in the objective function to achieve sparsity in a
computationally efficient manner. We establish the theoretical values of the
tuning parameters that achieve estimation and variable selection consistency
for the central subspace. Furthermore, we propose a new projection information
criterion to select the tuning parameter for our proposed estimators and prove
that the new criterion facilitates selection consistency. The Cholesky matrix
penalization estimator inherits the strength of the Matrix Lasso and the Lasso
sliced inverse regression estimator; it has superior performance in numerical
studies and can be adapted to other sufficient dimension reduction methods in the
literature.
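For orientation, here is a sketch of classical sliced inverse regression with an explicit Cholesky whitening step — the object the proposed penalized estimator is built around — with the regularization term and tuning-parameter selection omitted; all names are illustrative.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Classical SIR (no sparsity penalty): slice y, average the centered
    covariates within slices, and eigen-decompose the between-slice
    covariance after Cholesky whitening by cov(X)."""
    n, p = X.shape
    Xc = X - X.mean(0)
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Xc[idx].mean(0)
        M += (len(idx) / n) * np.outer(m, m)
    L = np.linalg.cholesky(np.cov(Xc, rowvar=False))  # Cholesky factor
    Linv = np.linalg.inv(L)
    _, evecs = np.linalg.eigh(Linv @ M @ Linv.T)
    return Linv.T @ evecs[:, ::-1][:, :n_dirs]  # central-subspace basis

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)
print(sir_directions(X, y).shape)  # (6, 2)
```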
|
Let $1<p<\infty$ and let $n\geq 1$. It is proved that a function $f:{\mathbb
R}\to {\mathbb C}$ is $n$-times Fr\'echet differentiable on ${\mathcal S}^p$ at
every self-adjoint operator if and only if $f$ is $n$-times differentiable,
$f',f'',\ldots,f^{(n)}$ are bounded and $f^{(n)}$ is uniformly continuous.
|
The knowledge of distribution grid models, including topologies and line
impedances, is essential to grid monitoring, control and protection. However,
this information is often unavailable, incomplete or outdated. The increasing
deployment of smart meters (SMs) provides a unique opportunity to address this
issue. This paper proposes a two-stage data-driven framework for distribution
grid modeling using SM data. In the first stage, we propose to identify the
topology via reconstructing a weighted Laplacian matrix of distribution
networks, which is mathematically proven to be robust against moderately
heterogeneous R/X profiles. In the second stage, we develop nonlinear least
absolute deviations (LAD) and least squares (LS) regression models to estimate
line impedances of single branches based on a nonlinear inverse power flow,
which is then embedded within a bottom-up sweep algorithm to achieve the
identification across the network in a branch-wise manner. Because the
estimation models are inherently non-convex programs and NP-hard, we specially
address their tractable convex relaxations and verify the exactness. In
addition, we design a conductor library to significantly narrow down the
solution space. Numerical results on the modified IEEE 13-bus, 37-bus and
69-bus test feeders validate the effectiveness of the proposed methods.
|
The early and robust detection of anomalies occurring in discrete
manufacturing processes allows operators to prevent harm, e.g. defects in
production machinery or products. While current approaches for data-driven
anomaly detection provide good results on the exact processes they were trained
on, they often lack the ability to flexibly adapt to changes, e.g. in products.
Continual learning promises such flexibility, allowing for an automatic
adaption of previously learnt knowledge to new tasks. Therefore, this article
discusses different continual learning approaches from the group of
regularization strategies, which are implemented, evaluated and compared based
on a real industrial metal forming dataset.
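The article's specific regularization strategies are not named in the abstract, but a representative member of this group is elastic weight consolidation (EWC); a minimal PyTorch sketch, assuming a standard supervised model and data loader, is:

```python
import torch

def diagonal_fisher(model, loader, loss_fn):
    """Diagonal Fisher estimate: mean squared gradients over the old task."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2 / len(loader)
    return fisher

def ewc_penalty(model, fisher, anchor, lam=100.0):
    """Quadratic pull toward the old task's optimum, weighted by importance."""
    return lam / 2 * sum((fisher[n] * (p - anchor[n]) ** 2).sum()
                         for n, p in model.named_parameters())

# After training on task A: anchor = {n: p.detach().clone() for n, p in
# model.named_parameters()}; on task B, minimize
# task_loss + ewc_penalty(model, fisher, anchor).
```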
|
The possible symmetries of the superconducting pair amplitude are a
consequence of the fermionic nature of the Cooper pairs. For spin-$1/2$ systems
this leads to the $\mathcal{SPOT}=-1$ classification of superconductivity,
where $\mathcal{S}$, $\mathcal{P}$, $\mathcal{O}$, and $\mathcal{T}$ refer to
the exchange operators for spin, parity, orbital, and time between the paired
electrons. However, this classification no longer holds for higher spin
fermions, where each electron also possesses a finite orbital angular momentum
strongly coupled with the spin degree of freedom, giving instead a conserved
total angular momentum. For such systems, we here instead introduce the
$\mathcal{JPT}=-1$ classification, where $\mathcal{J}$ is the exchange operator
for the $z$-component of the total angular momentum quantum numbers. We then
specifically focus on spin-$3/2$ fermion systems and several superconducting
cubic half-Heusler compounds that have recently been proposed to be spin-$3/2$
superconductors. By using a generic Hamiltonian suitable for these compounds we
calculate the superconducting pair amplitudes and find finite pair amplitudes
for all possible symmetries obeying the $\mathcal{JPT}=-1$ classification,
including all possible odd-frequency (odd-$\omega$) combinations. Moreover, one
of the very interesting properties of spin-$3/2$ superconductors is the
possibility of them hosting a Bogoliubov Fermi surface (BFS), where the
superconducting energy gap is closed across a finite area. We show that a
spin-$3/2$ superconductor with a pair potential satisfying an odd-gap
time-reversal product and being non-commuting with the normal-state Hamiltonian
hosts both a BFS and has finite odd-$\omega$ pair amplitudes. We then reduce
the full spin-$3/2$ Hamiltonian to an effective two-band model and show that
odd-$\omega$ pairing is inevitably present in superconductors with a BFS and
vice versa.
|
Utilization of Machine Learning (ML) algorithms, especially Deep Neural
Network (DNN) models, has become a widely accepted standard in many domains,
particularly IoT-based systems. DNN models reach impressive performances in
several sensitive fields such as medical diagnosis, smart transport or security
threat detection, and represent a valuable piece of Intellectual Property. Over
the last few years, a major trend is the large-scale deployment of models in a
wide variety of devices. However, this migration to embedded systems is slowed
down because of the broad spectrum of attacks threatening the integrity,
confidentiality and availability of embedded models. In this review, we cover
the landscape of attacks targeting the confidentiality of embedded DNN models
that may have a major impact on critical IoT systems, with a particular focus
on model extraction and data leakage. We highlight the fact that Side-Channel
Analysis (SCA) is a relatively unexplored means by which a model's confidentiality
can be compromised. Input data, architecture or parameters of a model can be
extracted from power or electromagnetic observations, testifying to a real need
from a security point of view.
|
This paper studies the nature of fractional linear transformations in a
general relativity context as well as in a quantum theoretical framework. Two
features are found to deserve special attention: the first is the possibility
of separating the limit-point condition at infinity into loxodromic,
hyperbolic, parabolic and elliptic cases. This is useful in a context in which
one wants to look for a correspondence between essentially self-adjoint
spherically symmetric Hamiltonians of quantum physics and the theory of
Bondi-Metzner-Sachs transformations in general relativity. The resulting
analogy suggests that further investigations might be performed for
a theory in which the role of fractional linear maps is viewed as a bridge
between the quantum theory and general relativity. The second aspect to point
out is the possibility of interpreting the limit-point condition at both ends
of the positive real line, for a second-order singular differential operator,
which occurs frequently in applied quantum mechanics, as the limiting procedure
arising from a very particular Kleinian group which is the hyperbolic cyclic
group. In this framework, this work finds that a consistent system of equations
can be derived and studied. Hence one is led to consider the entire
transcendental functions, from which it is possible to construct a fundamental
system of solutions of a second-order differential equation with singular
behavior at both ends of the positive real line, which in turn satisfy the
limit-point conditions.
|
We examine how introduction of Shared Connected and Automated vehicles
(SCAVs) as a new mobility mode could affect travel demand, welfare, as well as
traffic congestion in the network. To do so, we adapt an agent-based day-to-day
adjustment process and develop a central dispatching system, which is
implemented on an in-house traffic microsimulator. We consider a two-sided
market in which demand and SCAV fleet size change endogenously. For dispatching
SCAV fleet size, we take changing traffic conditions into account. There are
two available transport modes: private Connected Automated Vehicles (CAVs) and
SCAVs. The designed system is applied to the downtown Toronto network using real
data. The results show that demand for SCAVs goes up by 43 per cent over seven
study days, from 670 trips on the first day to 959 trips on the seventh day,
whereas there is a 10 per cent reduction in private CAV demand, from 2807 trips
to 2518 trips, during the same period. Moreover, the total travel time of the
network goes down by seven per cent, indicating that traffic congestion was
reduced in the network.
|
The following paper proposes a new approach to determine whether a logical
(CNF) formula is satisfiable or not using probability theory methods.
Furthermore, we will introduce an algorithm that speeds up the standard
solution for (CNF-SAT) in some cases. It is known that any (CNF) formula can
be solved with a time complexity of $O(2^n)$, where $n$ is the number of
different literals in the (CNF) formula. In our approach, we follow an enhanced
method, from a probabilistic point of view, whose cost does not always grow
exponentially with the number of different literals. This will enhance the
chance of determining whether a large formula is satisfiable or not in many
cases. Additionally, we point out some promising properties that follow
from applying probability theory concepts and axioms to logic, which might
yield more insights about the satisfiability of logical formulas.
|
We show that $\Theta$-positive Anosov representations
$\rho:\Gamma\to\mathsf{PO}(p,q)$ of a surface group $\Gamma$ satisfy root vs
weight collar lemmas for all the Anosov roots, and are positively ratioed with
respect to all such roots. From this we deduce that $\Theta$-positive Anosov
representations $\rho:\Gamma\to\mathsf{PO}(p,q)$ form connected components of
character varieties.
|
In recent years, biometric authentication technology for smartphones has
become widespread, with the mainstream methods being fingerprint authentication
and face recognition. However, fingerprint authentication cannot be used when
hands are wet, and face recognition cannot be used when a person is wearing a
mask. Therefore, we examine a personal authentication system using the pinna as
a new approach for biometric authentication on smartphones. Authentication
systems based on the acoustic transfer function of the pinna (PRTF: Pinna
Related Transfer Function) have been investigated. However, the authentication
accuracy decreases due to the positional fluctuation across each measurement.
In this paper, we propose multimodal personal authentication on smartphones
using PRTF. The pinna image and positional sensor information are used with the
PRTF, and the effectiveness of the authentication method is examined. We
demonstrate that the proposed authentication system can compensate for the
positional changes in each measurement and improve robustness.
|
We propose a Concentrated Document Topic Model (CDTM) for unsupervised text
classification, which is able to produce a concentrated and sparse document
topic distribution. In particular, an exponential entropy penalty is imposed on
the document topic distribution. Documents that have diverse topic
distributions are penalized more, while those having concentrated topics are
penalized less. We apply the model to the benchmark NIPS dataset and observe
more coherent topics and more concentrated and sparse document-topic
distributions than Latent Dirichlet Allocation (LDA).
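To see what the exponential entropy penalty does, the toy computation below compares a diffuse and a concentrated document-topic row; the weight `lam` and the penalty's exact placement in the CDTM objective are assumptions for illustration.

```python
import numpy as np

def entropy_penalty(doc_topic, lam=1.0):
    """Exponential-entropy penalty on document-topic rows: documents with
    diffuse topic mixtures pay more than concentrated ones.

    doc_topic: (D, K) matrix; each row is a distribution over K topics.
    """
    p = np.clip(doc_topic, 1e-12, 1.0)
    H = -(p * np.log(p)).sum(axis=1)   # per-document topic entropy
    return lam * np.exp(H).sum()

diffuse = np.full((1, 4), 0.25)
peaked = np.array([[0.97, 0.01, 0.01, 0.01]])
print(entropy_penalty(diffuse), entropy_penalty(peaked))  # ~4.0 vs ~1.18
```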
|
We show that neural networks with absolute value activation function and with
the path norm, the depth, the width and the network weights having logarithmic
dependence on $1/\varepsilon$ can $\varepsilon$-approximate functions that are
analytic on certain regions of $\mathbb{C}^d$.
|
We consider a bilevel attacker-defender problem to find the worst-case attack
on the relays that control the transmission grid. The attacker maximizes load
shed by infiltrating a number of relays and rendering the components connected
to them inoperable. The defender responds by minimizing the load shed,
re-dispatching using a DC optimal power flow (DCOPF) problem on the remaining
network. Though worst-case interdiction problems on the transmission grid are
well-studied, there remains a need for exact and scalable methods. Methods
based on using duality on the inner problem rely on the bounds of the dual
variables of the defender problem in order to reformulate the bilevel problem
as a mixed integer linear problem. Valid dual bounds tend to be large,
resulting in weak linear programming relaxations and making the problem
difficult to solve at scale. Often smaller heuristic bounds are used, resulting
in a lower bound. In this work we also consider a lower bound, where instead of
bounding the dual variables, we drop the constraints corresponding to Ohm's
law, relaxing DCOPF to capacitated network flow. We present theoretical results
showing that, for uncongested networks, approximating DCOPF with network flow
yields the same set of injections, which suggests that this restriction likely
gives a high-quality lower bound in the uncongested case. Furthermore, we show
that in the network flow relaxation of the defender problem, the duals are
bounded by 1, so we can solve our restriction exactly. Last, we see empirically
that this formulation scales well computationally. Through experiments on 16
networks with up to 6468 buses, we find that this bound is almost always as
tight as we can get from guessing the dual bounds, even for congested networks.
In addition, calculating the bound is approximately 150 times faster than
achieving the same bound with the reformulation guessing the dual bounds.
|
A variety is a class of algebraic structures axiomatized by a set of
equations. An equation is linear if there is at most one occurrence of an
operation symbol on each side. We show that a variety axiomatized by linear
equations has the strong amalgamation property.
Suppose further that the language has no constant symbol and, for each
equation, either one side is operation-free, or exactly the same variables
appear on both sides. Then also the joint embedding property holds.
Examples include most varieties defining classical Maltsev conditions. In a
few special cases, the above properties are preserved when further unary
operations appear in the equations.
|
Let $\mathcal{L}=(L,[\cdot\,,\cdot],\delta)$ be an algebraic Lie algebroid
over a smooth projective curve of genus $g\geq 2$ such that $L$ is a line
bundle whose degree is less than $2-2g$. Let $r$ and $d$ be coprime numbers. We
prove that the motivic class (in the Grothendieck ring of varieties) of the
moduli space of $\mathcal{L}$-connections of rank $r$ and degree $d$ over $X$
does not depend on the Lie algebroid structure $[\cdot\,,\cdot]$ and $\delta$
of $\mathcal{L}$ and neither on the line bundle $L$ itself, but only the degree
of $L$ (and of course on $r,d,g$ and $X$). In particular it is equal to the
motivic class of the moduli space of $K_X(D)$-twisted Higgs bundles of rank $r$
and degree $d$, for $D$ any divisor of positive degree. As a consequence,
similar results (actually a little stronger) are obtained for the corresponding
$E$-polynomials. Some applications of these results are then deduced.
|
The main two families of real hypersurfaces in complex space forms are Hopf
and ruled. However, very little is known about real hypersurfaces in the
indefinite complex projective space $\mathbb{C}P^n_p$. In a previous work,
Kimura and the second author introduced Hopf real hypersurfaces in
$\mathbb{C}P^n_p$. In this paper, ruled real hypersurfaces in the indefinite
complex projective space are introduced, as those whose maximal holomorphic
distribution is integrable, and such that the leaves are totally geodesic
holomorphic hyperplanes. A detailed description of the shape operator is
computed, obtaining two main families. A method of construction is
exhibited, by gluing in a suitable way totally geodesic holomorphic hyperplanes
along a non-null curve. Next, the classification of all minimal ruled real
hypersurfaces is obtained, in terms of three main families of curves, namely
geodesics, totally real circles and a third case which is not a Frenet curve,
but can be explicitly computed. Four examples are described.
|
A remarkable result at the intersection of number theory and group theory
states that the order of a finite group $G$ (denoted $|G|$) is divisible by the
dimension $d_R$ of any irreducible complex representation of $G$. We show that
the integer ratios $|G|^2 / d_R^2$ are combinatorially constructible using
finite algorithms which take as input the amplitudes of combinatoric
topological strings ($G$-CTST) of finite groups based on 2D Dijkgraaf-Witten
topological field theories ($G$-TQFT2). The ratios are also shown to be
eigenvalues of handle creation operators in $G$-TQFT2/$G$-CTST. These strings
have recently been discussed as toy models of wormholes and baby universes by
Marolf and Maxfield, and Gardiner and Megas. Boundary amplitudes of the
$G$-TQFT2/$G$-CTST provide algorithms for combinatoric constructions of
normalized characters. Stringy S-duality for closed $G$-CTST gives a dual
expansion generated by disconnected entangled surfaces. There are universal
relations between $G$-TQFT2 amplitudes due to the finiteness of the number $K $
of conjugacy classes. These relations can be labelled by Young diagrams and are
captured by null states in an inner product constructed by coupling the
$G$-TQFT2 to a universal TQFT2 based on symmetric group algebras. We discuss
the scenario of a 3D holographic dual for this coupled theory and the
implications of the scenario for the factorization puzzle of 2D/3D holography
raised by wormholes in 3D.
|
Exploration of new superconductors has always been one of the research
directions in condensed matter physics. We report here a new layered
heterostructure of [(Fe,Al)(OH)2][FeSe]1.2, which is synthesized by the
hydrothermal ion-exchange technique. The structure is suggested by a
combination of X-ray powder diffraction and electron diffraction (ED).
[(Fe,Al)(OH)2][FeSe]1.2 is composed of the alternating stacking of tetragonal
FeSe layer and hexagonal (Fe,Al)(OH)2 layer. In [(Fe,Al)(OH)2][FeSe]1.2, there
exists mismatch between the FeSe sub-layer and (Fe,Al)(OH)2 sub-layer, and the
lattice of the layered heterostructure is quasi-commensurate. The
as-synthesized [(Fe,Al)(OH)2][FeSe]1.2 is non-superconducting due to the Fe
vacancies in the FeSe layer. The superconductivity with a Tc of 40 K can be
achieved after a lithiation process, which is due to the elimination of the Fe
vacancies in the FeSe layer. The Tc is nearly the same as that of (Li,Fe)OHFeSe
although the structure of [(Fe,Al)(OH)2][FeSe]1.2 is quite different from that
of (Li,Fe)OHFeSe. The new layered heterostructure of [(Fe,Al)(OH)2][FeSe]1.2
contains an iron selenium tetragonal lattice interleaved with a hexagonal metal
hydroxide lattice. These results indicate that the superconductivity is very
robust for FeSe-based superconductors. It opens a path for exploring
superconductivity in iron-based superconductors.
|
We present Graph Neural Diffusion (GRAND) that approaches deep learning on
graphs as a continuous diffusion process and treats Graph Neural Networks
(GNNs) as discretisations of an underlying PDE. In our model, the layer
structure and topology correspond to the discretisation choices of temporal and
spatial operators. Our approach allows a principled development of a broad new
class of GNNs that are able to address the common plights of graph learning
models such as depth, oversmoothing, and bottlenecks. Key to the success of our
models is stability with respect to perturbations in the data, and this is
addressed for both implicit and explicit discretisation schemes. We develop
linear and nonlinear versions of GRAND, which achieve competitive results on
many standard graph benchmarks.
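A minimal sketch of the diffusion view — explicit-Euler steps of dx/dt = (A(x) - I)x, with each step playing the role of a GNN layer — is below; the attention matrix is fixed and random here, whereas GRAND learns it from the data.

```python
import torch

def diffusion_step(x, attn, dt=0.1):
    """One explicit-Euler step of dx/dt = (A - I) x on a graph.

    x: (n, d) node features; attn: (n, n) row-stochastic attention matrix.
    """
    return x + dt * (attn @ x - x)

n, d = 5, 3
x = torch.randn(n, d)
w = torch.rand(n, n)
attn = w / w.sum(dim=1, keepdim=True)   # row-normalised toy attention
for _ in range(20):                     # more steps = a deeper "network"
    x = diffusion_step(x, attn)
print(x.shape)  # torch.Size([5, 3])
```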
|
We introduce the notion of soficity for locally compact groups and list a
number of open problems.
|
In this study, an order by disorder mechanism has been proposed in a
two-dimensional PXP model, where the extensive degeneracy of the classical
ground-state manifold is due to strict occupation constraints instead of
geometrical frustrations. By performing an unbias large-scale quantum monte
carlos simulation, we find that local quantum fluctuations, which usually work
against long-range ordering, lift the macroscopic classical degeneracy and give
rise to a compressible ground state with charge-density-wave long-range order.
A simple trial wavefunction has been proposed to capture the essence of the
ground state of the two-dimensional PXP model. The finite-temperature
properties of this model have also been studied, and we find a thermal phase
transition in the universality class of the two-dimensional Ising model.
|
Theoretical models of galaxy-AGN co-evolution ascribe an important role for
the feedback process to a short, luminous, obscured, and dust-enshrouded phase
during which the accretion rate of the SMBH is expected to be at its maximum
and the associated AGN-driven winds are also predicted to be maximally
developed. To test this scenario, we have isolated a text-book candidate from
the eROSITA Final Equatorial-Depth Survey (eFEDS) obtained within the
Performance and Verification program of the eROSITA telescope on board Spectrum
Roentgen Gamma. From an initial catalog of 246 hard X-ray selected sources
matched with the photometric and spectroscopic information available within the
eROSITA and Hyper Suprime-Cam consortia, three candidate quasars in the
feedback phase have been isolated by applying the diagnostic proposed in Brusa et
al. (2015). Only one source (eFEDSU J091157.5+014327) has a spectrum already
available (from SDSS-DR16, z=0.603) and it unambiguously shows the presence of
a broad component (FWHM~1650 km/s) in the [OIII]5007 line. The associated
observed L_[OIII] is ~2.6x10^{42} erg/s, one to two orders of magnitude larger
than that observed in local Seyferts and comparable to those observed in a
sample of z~0.5 Type 1 Quasars. From the multiwavelength data available we
derive an Eddington Ratio (L_bol/L_Edd) of ~0.25, and a bolometric correction
in the hard X-ray of k_bol~10, lower than those observed for objects at similar
bolometric luminosity. The presence of an outflow, the high X-ray luminosity
and moderate X-ray obscuration (L_X~10^44.8 erg/s, N_H~2.7x10^22 cm^-2) and the
red optical color, all match the prediction of quasars in the feedback phase
from merger driven models. Forecasting to the full eROSITA all-sky survey with
its spectroscopic follow-up, we predict that by the end of 2024 we will have a
sample of a few hundred such objects at z=0.5-2.
|
The simultaneous rise of machine learning as a service and concerns over user
privacy have increasingly motivated the need for private inference (PI). While
recent work demonstrates PI is possible using cryptographic primitives, the
computational overheads render it impractical. The community is largely
unprepared to address these overheads, as the source of slowdown in PI stems
from the ReLU operator whereas optimizations for plaintext inference focus on
optimizing FLOPs. In this paper we re-think the ReLU computation and propose
optimizations for PI tailored to properties of neural networks. Specifically,
we reformulate ReLU as an approximate sign test and introduce a novel
truncation method for the sign test that significantly reduces the cost per
ReLU. These optimizations result in a specific type of stochastic ReLU. The key
observation is that the stochastic fault behavior is well suited for the
fault-tolerant properties of neural network inference. Thus, we provide
significant savings without impacting accuracy. We collectively call the
optimizations Circa and demonstrate improvements of up to 4.7x storage and 3x
runtime over baseline implementations; we further show that Circa can be used
on top of recent PI optimizations to obtain 1.8x additional speedup.
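A loose plaintext illustration of the reformulation (not the cryptographic protocol itself): ReLU becomes a sign test, and truncating low-order bits before the test trades rare sign errors near zero for a cheaper comparison — the stochastic fault behavior the paper exploits. The fixed-point scale and bit count below are arbitrary.

```python
import numpy as np

def truncated_sign_relu(x, scale=2**8, drop_bits=4):
    """ReLU recast as a sign test on a truncated fixed-point encoding.

    Dropping low-order bits before the sign test makes the comparison
    cheaper at the cost of rare errors for values near zero."""
    fixed = np.round(x * scale).astype(np.int64)
    truncated = fixed >> drop_bits      # sign test on fewer bits
    return x * (truncated > 0)          # sign decides; multiply is cheap

x = np.linspace(-0.1, 0.1, 9)
print(truncated_sign_relu(x))  # small positives near zero get clipped to 0
```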
|
Let $\zeta^*(s)=\sum_{n=1}^{+\infty}(-1)^n/n^s$ and $\tau$ the operator
defined on the Fr\'echet space of holomorphic functions in $\{s\in \mathbb C
: 1/2< \mathrm{Re}\, s<1\}$ by $\tau f(s)= f(s-2i\pi/\log 2)$. We show that the Riemann
Hypothesis is equivalent to the strong recurrence of $\zeta^*(s)$ for $\tau$.
It follows that a sufficient condition for $RH$ would be that every sum of a
series of eigenvectors with unimodular eigenvalues for an operator $u$ is
strongly recurrent for $u$. But we give a counterexample showing that this is not
the case.
|
We present the results of a weekly monitoring of the new black hole candidate
X-ray binary MAXI J1631-472 carried out with the MeerKAT radio interferometer,
the Neil Gehrels Swift Observatory, and the Monitor of All-sky X-ray Image
(MAXI) instrument, during its 2018-2019 outburst. The source exhibits a number
of X-ray states, in particular both high- and low-luminosity hard states
bracketed by extended soft states. Radio flaring is observed shortly after a
transition from hard/intermediate states to the soft state. This is broadly in
agreement with existing empirical models, but its extended duration hints at
multiple unresolved flares and/or jet-ISM interactions. In the hard state
radio:X-ray plane, the source is revealed to be 'radio quiet' at high
luminosities, but to rejoin the 'standard' track at lower luminosities, an
increasingly commonly-observed pattern of behaviour.
|
Automated surgical gesture recognition is of great importance in
robot-assisted minimally invasive surgery. However, existing methods assume
that training and testing data are from the same domain, which suffers from
severe performance degradation when a domain gap exists, such as the simulator
and real robot. In this paper, we propose a novel unsupervised domain
adaptation framework which can simultaneously transfer multi-modality
knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using
temporal cues in videos and inherent correlations across modalities towards
recognizing gestures. Specifically, we first propose an MDO-K to align
kinematics, exploiting temporal continuity to transfer motion directions, which
have a smaller domain gap than position values, relieving the adaptation burden.
Moreover, we propose a KV-Relation-ATT to transfer the co-occurrence signals of
kinematics and vision. Such features attended by correlation similarity are
more informative for enhancing domain-invariance of the model. Two feature
alignment strategies benefit the model mutually during the end-to-end learning
process. We extensively evaluate our method for gesture recognition using DESK
dataset with peg transfer procedure. Results show that our approach recovers
the performance with great improvement gains, up to 12.91% in ACC and 20.16% in
F1 score, without using any annotations from the real robot.
|
Personalized recommendation systems have become pervasive in various video
platforms. Many effective methods have been proposed, but most of them do not
capture the user's multi-level interest traits and the dependencies between their
viewed micro-videos well. To solve these problems, we propose a Self-over-Co
Attention module to enhance user's interest representation. In particular, we
first use co-attention to model correlation patterns across different levels
and then use self-attention to model correlation patterns within a specific
level. Experimental results on filtered public datasets verify that our
presented module is useful.
|
Deep neural networks (DNNs) are known to be vulnerable to adversarial
examples/attacks, raising concerns about their reliability in safety-critical
applications. A number of defense methods have been proposed to train robust
DNNs resistant to adversarial attacks, among which adversarial training has so
far demonstrated the most promising results. However, recent studies have shown
that there exists an inherent tradeoff between accuracy and robustness in
adversarially-trained DNNs. In this paper, we propose a novel technique Dual
Head Adversarial Training (DH-AT) to further improve the robustness of existing
adversarial training methods. Different from existing improved variants of
adversarial training, DH-AT modifies both the architecture of the network and
the training strategy to seek more robustness. Specifically, DH-AT first
attaches a second network head (or branch) to one intermediate layer of the
network, then uses a lightweight convolutional neural network (CNN) to
aggregate the outputs of the two heads. The training strategy is also adapted
to reflect the relative importance of the two heads. We empirically show, on
multiple benchmark datasets, that DH-AT can bring notable robustness
improvements to existing adversarial training methods. Compared with TRADES,
one state-of-the-art adversarial training method, our DH-AT can improve the
robustness by 3.4% against PGD40 and 2.3% against AutoAttack, and also improve
the clean accuracy by 1.8%.
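A toy PyTorch layout of the dual-head idea — a second head branching off an intermediate layer, with a lightweight aggregator over the two heads' logits — is sketched below; the depths, widths, and 1x1-conv aggregator are illustrative guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualHeadNet(nn.Module):
    """Trunk head plus a branch head off an intermediate layer; a small
    Conv1d mixes the two logit vectors into the final prediction."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.trunk = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(32, n_classes))
        self.branch = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(16, n_classes))
        self.agg = nn.Conv1d(2, 1, kernel_size=1)   # lightweight aggregator

    def forward(self, x):
        h = self.stem(x)
        logits = torch.stack([self.trunk(h), self.branch(h)], dim=1)
        return self.agg(logits).squeeze(1)

print(DualHeadNet()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```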
|
Context. The Sagittarius (Sgr) dwarf galaxy is merging with the Milky Way,
and the study of its globular clusters (GCs) is important to understand the
history and outcome of this ongoing process. Aims. Our main goal is to
characterize the GC system of the Sgr dwarf galaxy. This task is hampered by
high foreground stellar contamination, mostly from the Galactic bulge. Methods.
We performed a GC search specifically tailored to find new GC members within
the main body of this dwarf galaxy using the combined data of the VISTA
Variables in the Via Lactea Extended Survey (VVVX) near-infrared survey and the
Gaia Early Data Release 3 (EDR3) optical database. Results. We applied proper
motion (PM) cuts to discard foreground bulge and disk stars, and we found a
number of GC candidates in the main body of the Sgr dwarf galaxy. We selected
the best GCs as those objects that have significant overdensities above the
stellar background of the Sgr galaxy and that possess color-magnitude diagrams
(CMDs) with well-defined red giant branches (RGBs) consistent with the distance
and reddening of this galaxy. Conclusions. We discover eight new GC members of
the Sgr galaxy, which adds up to 29 total GCs known in this dwarf galaxy. This
total number of GCs shows that the Sgr dwarf galaxy hosts a rather rich GC
system. Most of the new GCs appear to be predominantly metal-rich and have low
luminosity. In addition, we identify ten other GC candidates that are more
uncertain and need more data for proper confirmation.
|
Nano-optic imagers that modulate light at sub-wavelength scales could unlock
unprecedented applications in diverse domains ranging from robotics to
medicine. Although metasurface optics offer a path to such ultra-small imagers,
existing methods have achieved image quality far worse than bulky refractive
alternatives, fundamentally limited by aberrations at large apertures and low
f-numbers. In this work, we close this performance gap by presenting the first
neural nano-optics. We devise a fully differentiable learning method that
learns a metasurface physical structure in conjunction with a novel, neural
feature-based image reconstruction algorithm. Experimentally validating the
proposed method, we achieve an order of magnitude lower reconstruction error
than existing methods. As such, we present the first high-quality nano-optic
imager, combining the widest field of view demonstrated for full-color
metasurface operation with the largest demonstrated aperture (0.5 mm, f/2).
|
In this paper, we address Novel Class Discovery (NCD), the task of unveiling
new classes in a set of unlabeled samples given a labeled dataset with known
classes. We exploit the peculiarities of NCD to build a new framework, named
Neighborhood Contrastive Learning (NCL), to learn discriminative
representations that are important to clustering performance. Our contribution
is twofold. First, we find that a feature extractor trained on the labeled set
generates representations in which a generic query sample and its neighbors are
likely to share the same class. We exploit this observation to retrieve and
aggregate pseudo-positive pairs with contrastive learning, thus encouraging the
model to learn more discriminative representations. Second, we notice that most
of the instances are easily discriminated by the network, contributing less to
the contrastive loss. To overcome this issue, we propose to generate hard
negatives by mixing labeled and unlabeled samples in the feature space. We
experimentally demonstrate that these two ingredients significantly contribute
to clustering performance and lead our model to outperform state-of-the-art
methods by a large margin (e.g., clustering accuracy +13% on CIFAR-100 and +8%
on ImageNet).
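The hard-negative generation step can be pictured with a short sketch; the convex mixing with a fixed coefficient and the random pairing below are assumptions for illustration, not the exact NCL recipe:

import torch

def mix_hard_negatives(labeled_feats, unlabeled_feats, alpha=0.6):
    """Hypothetical sketch: create hard negatives for contrastive learning
    by convexly mixing labeled and unlabeled features in feature space.
    Assumes the labeled batch is at least as large as the unlabeled one."""
    idx = torch.randperm(labeled_feats.size(0))[: unlabeled_feats.size(0)]
    mixed = alpha * unlabeled_feats + (1 - alpha) * labeled_feats[idx]
    # Re-normalize so negatives live on the unit hypersphere typically
    # used by contrastive losses.
    return torch.nn.functional.normalize(mixed, dim=1)

lab = torch.nn.functional.normalize(torch.randn(32, 128), dim=1)
unlab = torch.nn.functional.normalize(torch.randn(16, 128), dim=1)
print(mix_hard_negatives(lab, unlab).shape)  # torch.Size([16, 128])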
|
Reinforcement learning is a powerful approach to learn behaviour through
interactions with an environment. However, behaviours are usually learned in a
purely reactive fashion, where an appropriate action is selected based on an
observation. In this form, it is challenging to learn when it is necessary to
execute new decisions. This makes learning inefficient, especially in
environments that need various degrees of fine and coarse control. To address
this, we propose a proactive setting in which the agent not only selects an
action in a state but also for how long to commit to that action. Our TempoRL
approach introduces skip connections between states and learns a skip-policy
for repeating the same action along these skips. We demonstrate the
effectiveness of TempoRL on a variety of traditional and deep RL environments,
showing that our approach is capable of learning successful policies up to an
order of magnitude faster than vanilla Q-learning.
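A tabular sketch of the skip idea on a toy environment (the update rule below is our simplification of a TempoRL-style n-step target, not the paper's exact algorithm):

import numpy as np

n_states, n_actions, max_skip = 20, 2, 4
Q = np.zeros((n_states, n_actions))                 # which action to take
skip_Q = np.zeros((n_states, n_actions, max_skip))  # how long to repeat it

def update(traj, gamma=0.99, lr=0.1):
    """Simplified TempoRL-style update: traj is (s, a, j, r_list, s_next),
    where action a was repeated j+1 times and r_list holds the per-step
    rewards collected along the skip."""
    s, a, j, r_list, s_next = traj
    # n-step return over the skipped transitions
    G = sum((gamma ** k) * r for k, r in enumerate(r_list))
    target = G + (gamma ** len(r_list)) * Q[s_next].max()
    skip_Q[s, a, j] += lr * (target - skip_Q[s, a, j])
    Q[s, a] += lr * (target - Q[s, a])

# Greedy behaviour: pick the action from Q, then the skip length from skip_Q.
s = 3
a = int(Q[s].argmax())
j = int(skip_Q[s, a].argmax())  # repeat action a for j+1 steps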
|
We study a two-player Stackelberg game with incomplete information such that
the follower's strategy belongs to a known family of parameterized functions
with an unknown parameter vector. We design an adaptive learning approach to
simultaneously estimate the unknown parameter and minimize the leader's cost,
based on adaptive control techniques and hysteresis switching. Our approach
guarantees that the leader's cost predicted using the parameter estimate
becomes indistinguishable from its actual cost in finite time, up to a
preselected, arbitrarily small error threshold. Also, the first-order necessary
condition for optimality holds asymptotically for the predicted cost.
Additionally, if a persistent excitation condition holds, then the parameter
estimation error becomes bounded by a preselected, arbitrarily small threshold
in finite time as well. For the case where there is a mismatch between the
follower's strategy and the parameterized function that is known to the leader,
our approach is able to guarantee the same convergence results for error
thresholds larger than the size of the mismatch. The algorithms and the
convergence results are illustrated via a simulation example in the domain of
network security.
|
A marked Petri net is lucent if there are no two different reachable markings
enabling the same set of transitions, i.e., states are fully characterized by
the transitions they enable. Characterizing the class of systems that are
lucent is a foundational and also challenging question. However, little
research has been done on the topic. In this paper, it is shown that all
free-choice nets having a home cluster are lucent. These nets have a so-called
home marking such that it is always possible to reach this marking again. Such
a home marking can serve as a regeneration point or as an end-point. The result
is highly relevant because in many applications, we want the system to be
lucent and many well-behaved process models fall into the class identified in
this paper. Unlike previous work, we do not require the marked Petri net to be
live and strongly connected. Most of the analysis techniques for free-choice
nets are tailored towards well-formed nets. The approach presented in this
paper provides a novel perspective enabling new analysis techniques for
free-choice nets that do not need to be well-formed. Therefore, we can also
model systems and processes that are terminating and/or have an initialization
phase.
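The definition of lucency lends itself to a direct brute-force check on small bounded nets. A minimal sketch, assuming a dict-based net encoding (this only terminates for bounded nets and is purely illustrative):

from collections import deque

# Net encoding (an assumption for this sketch): each transition maps to
# (consume, produce) dicts over places.
transitions = {
    't1': ({'p1': 1}, {'p2': 1}),
    't2': ({'p2': 1}, {'p1': 1}),
}
m0 = {'p1': 1, 'p2': 0}

def enabled(m):
    return frozenset(t for t, (pre, _) in transitions.items()
                     if all(m.get(p, 0) >= n for p, n in pre.items()))

def fire(m, t):
    pre, post = transitions[t]
    m = dict(m)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def is_lucent(m0):
    """Brute-force lucency check: no two distinct reachable markings may
    enable the same set of transitions."""
    seen, by_enabled = set(), {}
    queue = deque([tuple(sorted(m0.items()))])
    while queue:
        key = queue.popleft()
        if key in seen:
            continue
        seen.add(key)
        m = dict(key)
        en = enabled(m)
        if by_enabled.setdefault(en, key) != key:
            return False       # two markings share an enabled set
        for t in en:
            queue.append(tuple(sorted(fire(m, t).items())))
    return True

print(is_lucent(m0))  # True for this tiny two-place cycle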
|
It is becoming increasingly popular for distributed systems to exploit
offload to reduce load on the CPU. Remote Direct Memory Access (RDMA) offload,
in particular, has become popular. However, RDMA still requires CPU
intervention for complex offloads that go beyond simple remote memory access.
As such, the offload potential is limited and RDMA-based systems usually have
to work around such limitations.
We present RedN, a principled, practical approach to implementing complex
RDMA offloads, without requiring any hardware modifications. Using
self-modifying RDMA chains, we lift the existing RDMA verbs interface to a
Turing complete set of programming abstractions. We explore what is possible in
terms of offload complexity and performance with a commodity RDMA NIC. We show
how to integrate these RDMA chains into applications, such as the Memcached
key-value store, allowing us to offload complex tasks such as key lookups. RedN
can reduce the latency of key-value get operations by up to 2.6x compared to
state-of-the-art KV designs that use one-sided RDMA primitives (e.g., FaRM-KV),
as well as traditional RPC-over-RDMA approaches. Moreover, compared to these
baselines, RedN provides performance isolation and, in the presence of
contention, can reduce latency by up to 35x while providing applications with
failure resiliency to OS and process crashes.
|
Let $\mathcal{E}$ be a $\mathbb{Q}$-isogeny class of elliptic curves defined
over $\mathbb{Q}$. The isogeny graph associated to $\mathcal{E}$ is a graph
which has a vertex for each element of $\mathcal{E}$ and an edge for each
$\mathbb{Q}$-isogeny of prime degree that maps one element of $\mathcal{E}$ to
another element of $\mathcal{E}$, with the degree recorded as a label of the
edge. The isogeny-torsion graph associated to $\mathcal{E}$ is the isogeny
graph associated to $\mathcal{E}$ where, in addition, we label each vertex with
the abstract group structure of the torsion subgroup over $\mathbb{Q}$ of the
corresponding elliptic curve. The main result of the article is a determination
of which isogeny-torsion graphs associated to $\mathbb{Q}$-isogeny classes of
elliptic curves defined over $\mathbb{Q}$ correspond to infinitely many
$\textit{j}$-invariants.
|
The mechanism of thermal driving for launching mass outflows is
interconnected with classical thermal instability (TI). In a recent paper, we
demonstrated that as a result of this interconnectedness, radial wind solutions
of X-ray heated flows are prone to becoming clumpy. In this paper, we first
show that the Bernoulli function determines whether or not the entropy mode can
grow due to TI in dynamical flows. Based on this finding, we identify a
critical `unbound' radius beyond which TI should accompany thermal driving. Our
numerical disk wind simulations support this result and reveal that clumpiness
is a consequence of buoyancy disrupting the stratified structure of steady
state solutions. Namely, instead of a smooth transition layer separating the
highly ionized disk wind from the cold phase atmosphere below, hot bubbles
formed from TI rise up and fragment the atmosphere. These bubbles first appear
within large scale vortices that form below the transition layer, and they
result in the episodic production of distinctive cold phase structures referred
to as irradiated atmospheric fragments (IAFs). Upon interacting with the wind,
IAFs advect outward and develop extended crests. The subsequent disintegration
of the IAFs takes place within a turbulent wake that reaches high elevations
above the disk. We show that these dynamics have the following observational
implications: dips in the absorption measure distribution are no longer
expected within TI zones and there can be a less sudden desaturation of X-ray
absorption lines such as \OVIII as well as multiple absorption troughs in
\FeXXVK.
|
Advances in high-precision dielectric spectroscopy have enabled access to
non-linear susceptibilities of polar molecular liquids. The observed
non-monotonic behavior has been claimed to provide strong support for theories
of dynamic arrest based on thermodynamic amorphous order. Here we approach this
question from the perspective of dynamic facilitation, an alternative view
focusing on emergent kinetic constraints underlying the dynamic arrest of a
liquid approaching its glass transition. We derive explicit expressions for the
frequency-dependent higher-order dielectric susceptibilities exhibiting a
non-monotonic shape, the height of which increases as temperature is lowered.
We demonstrate excellent agreement with the experimental data for glycerol,
challenging the idea that non-linear response functions reveal correlated
relaxation in supercooled liquids.
|
We give concrete, "infinitesimal" conditions for a proper geodesically
complete CAT(0) space to have semistable fundamental group at infinity.
|
We investigate the potential of type II supernovae (SNe) to constrain
axion-like particles (ALPs) coupled simultaneously to nucleons and electrons.
ALPs coupled to nucleons can be efficiently produced in the SN core via
nucleon-nucleon bremsstrahlung and, for a wide range of parameters, leave the
SN unhindered, producing a large ALP flux. For masses exceeding 1 MeV, these
ALPs would decay into electron-positron pairs, generating a positron flux. In
the case of Galactic SNe, the annihilation of the created positrons with the
electrons present in the Galaxy would contribute to the 511 keV annihilation
line. The SPI (SPectrometer on INTEGRAL) observation of this line allows
us to exclude a wide range of the axion-electron coupling, $10^{-19} \lesssim
g_{ae} \lesssim 10^{-11}$, for $g_{ap}\sim 10^{-9}$. Additionally, ALPs from
extra-galactic SNe decaying into electron-positron pairs would yield a
contribution to the cosmic X-ray background. In this case, we constrain the
ALP-electron coupling down to $g_{ae} \sim 10^{-20}$.
|
The point-to-set principle \cite{LutLut17} characterizes the Hausdorff
dimension of a subset $E\subseteq\R^n$ by the \textit{effective} (or
algorithmic) dimension of its individual points. This characterization has been
used to prove several results in classical, i.e., without any computability
requirements, analysis. Recent work has shown that algorithmic techniques can
be fruitfully applied to Marstrand's projection theorem, a fundamental result
in fractal geometry.
In this paper, we introduce an extension of the point-to-set principle: the
notion of \textit{optimal oracles} for subsets $E\subseteq\R^n$. One of the
primary motivations of this definition is that, if $E$ has optimal oracles,
then the conclusion of Marstrand's projection theorem holds for $E$. We show
that every analytic set has optimal oracles. We also prove that if the
Hausdorff and packing dimensions of $E$ agree, then $E$ has optimal oracles.
Moreover, we show that the existence of sufficiently nice outer measures on $E$
implies the existence of optimal Hausdorff oracles. In particular, the
existence of exact gauge functions for a set $E$ is sufficient for the
existence of optimal Hausdorff oracles, and is therefore sufficient for
Marstrand's theorem. Thus, the existence of optimal oracles extends the
currently known sufficient conditions for Marstrand's theorem to hold.
Under certain assumptions, every set has optimal oracles. However, assuming
the axiom of choice and the continuum hypothesis, we construct sets which do
not have optimal oracles. This construction naturally leads to a generalization
of Davies' theorem on projections.
|
In this study, we characterize potential threats to digital activism in the
internet-active nation of Indonesia by performing network analysis on a recent
digital activism event on Twitter, which protested a law related to alcoholic
beverage investment. We hope insights from the study can help the nation move
forward, as public discourse is likely to stay online post-COVID. From this
study, we found that threats in the form of hashtag hijacking happen often in
digital activism, and there were traces of a systematic information campaign
in our observed case. We also found that the usage of bots is prevalent and
that they showed significant activity, although the extent to which they
influenced the conversation needs further investigation. These threats warrant
attention as activism goes increasingly digital after COVID-19, since they can
inject unwanted messages, sow polarization, and distract the conversation from
the real issue.
|
Diffusion tensor imaging (DTI) is a prevalent neuroimaging tool in analyzing
the anatomical structure. The distinguishing feature of DTI is that the
voxel-wise variable is a 3x3 positive definite matrix rather than a scalar,
describing the diffusion process at the voxel. Recently, several statistical
methods have been proposed to analyze the DTI data. This paper focuses on the
statistical inference of eigenvalues of DTI because it provides more
transparent clinical interpretations. However, such inference is challenging
because few existing methods treat these responses as random eigenvalues. In
our paper, we rely on the distribution of the Wishart
matrix's eigenvalues to model the random eigenvalues. A hierarchical model
which captures the eigenvalues' randomness and spatial auto-correlation is
proposed to infer the local covariate effects. The Monte-Carlo
Expectation-Maximization algorithm is implemented for parameter estimation.
Both simulation studies and an application to the IXI dataset are used to
demonstrate our proposal. The results show that our proposal is better suited
to analyzing auto-correlated random eigenvalues than the alternatives.
|
Structured stochastic multi-armed bandits provide accelerated regret rates
over the standard unstructured bandit problems. Most structured bandits,
however, assume the knowledge of the structural parameter such as Lipschitz
continuity, which is often not available. To cope with the latent structural
parameter, we consider a transfer learning setting in which an agent must learn
to transfer the structural information from the prior tasks to the next task,
which is inspired by practical problems such as rate adaptation in wireless
links. We propose a novel framework to provably and accurately estimate the
Lipschitz constant based on previous tasks and fully exploit it for the new
task at hand. We analyze the efficiency of the proposed framework in two
respects: (i) the sample complexity of our estimator matches the
information-theoretic fundamental limit; and (ii) our regret bound on the new
task is close to that of the oracle algorithm with the full knowledge of the
Lipschitz constant under mild assumptions. Our analysis reveals a set of useful
insights on transfer learning for latent Lipschitz constants, such as the
fundamental challenge a learner faces. Our numerical evaluations confirm our
theoretical findings and show the superiority of the proposed framework
compared to baselines.
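As a toy illustration of estimating a latent Lipschitz constant from prior tasks (a generic max-slope estimator, not necessarily the paper's provably accurate one):

import numpy as np

def estimate_lipschitz(tasks):
    """Generic sketch: lower-bound the latent Lipschitz constant from mean
    reward estimates observed on prior tasks. Each task is a pair
    (arms, mean_reward_estimates); noise can bias this upward."""
    L_hat = 0.0
    for arms, mu in tasks:
        for i in range(len(arms)):
            for j in range(i + 1, len(arms)):
                slope = abs(mu[i] - mu[j]) / abs(arms[i] - arms[j])
                L_hat = max(L_hat, slope)
    return L_hat

rng = np.random.default_rng(0)
arms = np.linspace(0, 1, 8)
tasks = [(arms, np.sin(3 * arms) + 0.01 * rng.standard_normal(8))
         for _ in range(5)]
print(estimate_lipschitz(tasks))  # close to max |3 cos(3x)| = 3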
|
Traditional manipulator grasping methods based on a 2D camera perform poorly
in unstructured scenes where objects are clustered together or cover one
another: objects cannot be recognized accurately from a single perspective in
cluttered scenes, and the manipulator cannot rearrange the environment to make
it more suitable for grasping. To address this, a novel pushing-grasping
collaborative method based on a deep Q-network with dual perspectives is
proposed in this paper. The method adopts an improved deep Q-network
algorithm, uses an RGB-D camera to obtain RGB images and point clouds of
objects from two perspectives, and combines pushing and grasping actions so
that the trained manipulator can rearrange the scene and thus perform well in
more complicated grasping scenarios. Moreover, we improve the reward function
of the deep Q-network by proposing a piecewise reward function to speed up
convergence. We trained different models and tested different methods in the
V-REP simulation environment, and we conclude that the proposed method
converges quickly and that the success rate of grasping objects in
unstructured scenes reaches 83.5%. The method also generalizes well,
performing strongly when novel objects that the manipulator has never grasped
before appear in the scenes.
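The piecewise reward idea can be sketched as follows; the thresholds and values are placeholders, since the paper's exact reward shaping is not reproduced here:

def piecewise_reward(action, grasp_success, scene_improved):
    """Illustrative sketch only: reward successful grasps highly, give
    pushes partial credit when they make the scene easier to grasp, and
    penalize ineffective actions to speed up convergence. The numeric
    values are assumptions."""
    if action == 'grasp':
        return 1.0 if grasp_success else -0.2
    if action == 'push':
        return 0.5 if scene_improved else -0.1
    return 0.0

print(piecewise_reward('push', False, True))  # 0.5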
|
Several calibration techniques have been proposed in the literature for the
calibration of two-component two-dimensional (2C-2D) particle image velocimetry
(PIV) and three-component two-dimensional (3C-2D) stereoscopic PIV (SPIV)
systems. These techniques generally involve the use of a calibration target
that is assumed to be at the exact centre of the laser sheet within the field
of view (FOV), which in practice is very difficult to achieve. In 3C-2D SPIV,
several methods offer different correction schemes based on the computation of
a disparity map, which are aimed at correcting errors produced due to this
misalignment. These techniques adjust the calibration of individual cameras to
reduce the disparity error, but in doing so can create unintended errors in the
measurement position and/or the velocity measurements, such as introducing a
bias in the measured three-component (3-C) displacements. This paper introduces
a novel method to ensure accurate alignment of the laser sheet with the
calibration target so that the uncertainty in displacement measurements is less
than or equal to the uncertainty inherent to PIV, and hence no correction
scheme is required. The proposed method has been validated with a simple
experiment in which true displacements are given to a particle container
(illuminated by an aligned laser sheet) and the measured 3C displacements are
compared with the given true displacements. An uncertainty of less than 7.6
micrometres (equivalent to 0.114 pixels) in the measured 3C displacements
demonstrates the effectiveness of the new alignment method and eliminates the
need for any ad hoc post-correction scheme.
|
This note is a short description of TeCoMiner, an interactive tool for
exploring the topic content of text collections. Unlike other topic modeling
tools, TeCoMiner is not based on some generative probabilistic model but on
topological considerations about co-occurrence networks of terms. We outline
the methods used for identifying topics, describe the features of the tool, and
sketch an application, using a corpus of policy-related scientific news on
environmental issues published by the European Commission over the last decade.
|
Light fidelity (LiFi), which is based on visible light communications (VLC),
is celebrated as a cutting-edge technological paradigm that is envisioned to be
an indispensable part of 6G systems. Nonetheless, LiFi performance hinges on
efficiently overcoming line-of-sight blockage, whose adverse effect on
wireless reception reliability becomes even more pronounced in highly dynamic
environments, such as vehicular application scenarios. Meanwhile,
reconfigurable intelligent surfaces (RIS) emerged recently as a revolutionary
concept that transforms the physical propagation environment into a fully
controllable and customisable space in a low-cost low-power fashion. We
anticipate that the integration of RIS in LiFi-enabled networks will not only
support blockage mitigation but will also provision complex interactions among
network entities, and hence stands as a promising platform that enables
a plethora of technological trends and new applications. In this article, for
the first time in the open literature, we set the scene for a holistic overview
of RIS-assisted LiFi systems. Specifically, we explore the underlying RIS
architecture from the perspective of physics and present a forward-looking
vision that outlines potential operational elements supported by RIS-enabled
transceivers and RIS-enabled environments. Finally, we highlight major
associated challenges and offer a look ahead toward promising future
directions.
|
We consider a system of charged one-dimensional spin-$\frac{1}{2}$ fermions
at low temperature. We study how the energy of a highly-excited quasiparticle
(or hole) relaxes toward the chemical potential in the regime of weak
interactions. The dominant relaxation processes involve collisions with two
other fermions. We find a dramatic enhancement of the relaxation rate at low
energies, with the rate scaling as the inverse sixth power of the excitation
energy. This behavior is caused by the long-range nature of the Coulomb
interaction.
|
In order to objectively assess new medical imaging technologies via
computer-simulations, it is important to account for all sources of variability
that contribute to image data. One important source of variability that can
significantly limit observer performance is associated with the variability in
the ensemble of objects to-be-imaged. This source of variability can be
described by stochastic object models (SOMs), which are generative models that
can be employed to sample from a distribution of to-be-virtually-imaged
objects. It is generally desirable to establish SOMs from experimental imaging
measurements acquired by use of a well-characterized imaging system, but this
task has remained challenging. Deep generative neural networks, such as
generative adversarial networks (GANs), hold potential for such tasks. To
establish SOMs from imaging measurements, an AmbientGAN has been proposed that
augments a GAN with a measurement operator. However, the original AmbientGAN
could not immediately benefit from modern training procedures and GAN
architectures, which limited its ability to be applied to realistically sized
medical image data. To circumvent this, in this work, a modified AmbientGAN
training strategy is proposed that is suitable for modern progressive or
multi-resolution training approaches such as employed in the Progressive
Growing of GANs and Style-based GANs. AmbientGANs established by use of the
proposed training procedure are systematically validated in a controlled way by
use of computer-simulated measurement data corresponding to a stylized imaging
system. Finally, emulated single-coil experimental magnetic resonance imaging
data are employed to demonstrate the methods under less stylized conditions.
|
In this paper, we study the evolution of opinions over social networks with
bounded confidence in social cliques. Node initial opinions are independently
and identically distributed; at each time step, nodes review the average
opinions of a randomly selected local clique. The clique averages may represent
local group pressures on peers. Nodes then update their opinions under bounded
confidence: only when the difference between an agent's individual opinion and
the corresponding local clique pressure is below a threshold is the agent's
opinion updated, according to the DeGroot rule, as a weighted average of the
two values. As a result, this opinion dynamics is a generalization of the
classical Deffuant-Weisbuch model in which only pairwise interactions take
place. First, we prove conditions under which all node opinions converge to
finite limits. We show that, in the limit, the event that all nodes reach a
consensus and the event that all nodes reach pairwise distinct limits, i.e.,
social disagreement, are both nontrivial events. Next, we show that opinion
fluctuations may take place in the sense that at least one agent in the network
fails to hold a converging opinion trajectory. In fact, we prove that this
fluctuation event happens with a strictly positive probability, and also
constructively present an initial value event under which the fluctuation event
arises with probability one. These results add to the understanding of the role
of bounded confidence in social opinion dynamics, and the possibility of
fluctuation reveals that bringing cliques into Deffuant-Weisbuch models
fundamentally changes the behavior of such opinion dynamical processes.
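A minimal simulation sketch of the described clique dynamics (the parameter values, and the choice that all clique members review the same clique average, are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(1)
n, clique_size, eps, w, steps = 50, 5, 0.3, 0.5, 2000
x = rng.uniform(0, 1, n)  # i.i.d. initial opinions

for _ in range(steps):
    clique = rng.choice(n, size=clique_size, replace=False)
    pressure = x[clique].mean()          # local group pressure
    for i in clique:
        # Bounded confidence: update only if the disagreement with the
        # clique average is below the threshold eps (DeGroot-style mix).
        if abs(x[i] - pressure) < eps:
            x[i] = (1 - w) * x[i] + w * pressure

print(np.sort(x))  # typically a few well-separated opinion clusters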
|
Modeling complex systems and data using the language of graphs and networks
has become an essential topic across a range of different disciplines.
Arguably, this network-based perspective derives its success from the relative
simplicity of graphs: A graph consists of nothing more than a set of vertices
and a set of edges, describing relationships between pairs of such vertices.
This simple combinatorial structure makes graphs interpretable and flexible
modeling tools. The simplicity of graphs as system models, however, has been
scrutinized in the literature recently. Specifically, it has been argued from a
variety of different angles that there is a need for higher-order networks,
which go beyond the paradigm of modeling pairwise relationships, as
encapsulated by graphs. In this survey article we take stock of these recent
developments. Our goals are to clarify (i) what higher-order networks are, (ii)
why these are interesting objects of study, and (iii) how they can be used in
applications.
|
At sufficiently low temperatures magnetic materials often enter a correlated
phase hosting collective, coherent magnetic excitations such as magnons or
triplons. Drawing on the enormous progress on topological materials of the last
few years, recent research has led to new insights into the geometry and
topology of these magnetic excitations. Berry phases associated to magnetic
dynamics can lead to observable consequences in heat and spin transport while
analogues of topological insulators and semimetals can arise within magnon band
structures from natural magnetic couplings. Magnetic excitations offer a
platform to explore the interplay of magnetic symmetries and topology, to drive
topological transitions using magnetic fields, to examine the effects of
interactions on topological bands, and to generate topologically protected spin
currents at interfaces. In this review, we survey progress on all these topics,
highlighting aspects of topological matter that are unique to magnon systems
and the avenues yet to be fully investigated.
|
Parameter estimation procedures provide valuable guidance in the
understanding and improvement of organic solar cells and other devices. They
often rely on one-dimensional models, but in the case of bulk-heterojunction
(BHJ) designs, it is not straightforward that these models' parameters have a
consistent physical interpretation. Indeed, contrary to two- or
three-dimensional models, the BHJ morphology is not explicitly described in
one-dimensional models and must be implicitly expressed through effective
parameters. In order to inform experimental decisions, a helpful parameter
estimation method must establish that one can correctly interpret the provided
parameters. However, only a few works have been undertaken to reach that
objective in the context of BHJ organic solar cells. In this work, a realistic
two-dimensional model of BHJ solar cells is used to investigate the behavior of
state-of-the-art parameter estimation procedures in situations that emulate
experimental conditions. We demonstrate that fitting solely current-voltage
characteristics by an effective medium one-dimensional model can yield
nonsensical results, which may lead to counter-productive decisions about
future design choices. In agreement with previously published literature, we
explicitly demonstrate that fitting several characterization results together
can drastically improve the robustness of the parameter estimation. Based on a
detailed analysis of parameter estimation results, a set of recommendations is
formulated to avoid the most problematic pitfalls and increase awareness about
the limitations that cannot be circumvented.
|
In this work, we investigate dynamic oversampling techniques for large-scale
multiple-antenna systems equipped with low-cost and low-power 1-bit
analog-to-digital converters at the base stations. To compensate for the
performance loss caused by the coarse quantization, oversampling is applied at
the receiver. Unlike existing works that use uniform oversampling, which
samples the signal at a constant rate, a novel dynamic oversampling scheme is
proposed. The basic idea is to perform time-varying nonuniform oversampling,
which selects samples with nonuniform patterns that vary over time. We consider
two system design criteria: a design that maximizes the achievable sum rate and
another design that minimizes the mean square error of detected symbols.
Dynamic oversampling is carried out using a dimension reduction matrix
$\mathbf{\Delta}$, which can be computed by the generalized eigenvalue
decomposition or by novel submatrix-level feature selection algorithms.
Moreover, the proposed scheme is analyzed in terms of convergence,
computational complexity and power consumption at the receiver. Simulations
show that systems with the proposed dynamic oversampling outperform those with
uniform oversampling in terms of computational cost, achievable sum rate and
symbol error rate performance.
|
We propose a novel codimension-n holography, called cone holography, between
a gravitational theory in $(d+1)$-dimensional conical spacetime and a CFT on
the $(d+1-n)$-dimensional defects. Similar to wedge holography, the cone
holography can be obtained by taking the zero-volume limit of holographic
defect CFT. Remarkably, it can be regarded as a holographic dual of the edge
modes on the defects. For one class of solutions, we prove that the cone
holography is equivalent to AdS/CFT, by showing that the classical
gravitational action and thus the CFT partition function in large N limit are
the same for the two theories. In general, cone holography and AdS/CFT are
different due to the infinite towers of massive Kaluza-Klein modes on the
branes. We test cone holography by studying the Weyl anomaly,
entanglement/R\'enyi entropy, and correlation functions, and find good
agreement between the holographic and the CFT results. In particular, the
c-theorem is obeyed by cone holography. These results provide strong support
for our proposal. We discuss two kinds of
boundary conditions, the mixed boundary condition and Neumann boundary
condition, and find that they both define a consistent theory of cone
holography. We also analyze the mass spectrum on the brane and find that the
larger the tension is, the more continuous the mass spectrum is. The cone
holography can be regarded as a generalization of the wedge holography, and it
is closely related to the defect CFT, entanglement/R\'enyi entropy and
AdS/BCFT(dCFT). Thus it is expected to have a wide range of applications.
|
We consider monotone inclusion problems in real Hilbert spaces. Proximal
splitting algorithms are a very popular technique for solving them and
generally achieve weak convergence under mild assumptions. To prove strong
convergence, researchers typically assume strong conditions, such as strong
convexity or strong monotonicity, on the operators involved. The Mann
iteration method and the normal S-iteration method are popular methods for
solving fixed point problems. We propose a new common fixed point algorithm
based on the normal S-iteration method using Tikhonov regularization to find a
common fixed point of nonexpansive operators, and we prove strong convergence
of the generated
sequence to the set of common fixed points without assuming strong convexity
and strong monotonicity. Based on the proposed fixed point algorithm, we
propose a forward-backward-type algorithm and a Douglas-Rachford algorithm in
connection with Tikhonov regularization to find the solution of monotone
inclusion problems. Further, we consider complexly structured monotone
inclusion problems, which arise frequently in applications. We also propose a
strongly convergent forward-backward-type primal-dual algorithm and a
Douglas-Rachford-type primal-dual algorithm to solve the monotone inclusion
problems. Finally, we conduct a numerical experiment to solve image deblurring
problems.
|
Evolved low- to intermediate-mass stars are known to shed their gaseous
envelope into a large, dusty, molecule-rich circumstellar nebula which
typically develops a high degree of structural complexity. Most of the
large-scale, spatially correlated structures in the nebula are thought to
originate from the interaction of the stellar wind with a companion. As part of
the Atomium large programme, we observed the M-type asymptotic giant branch
(AGB) star R Hydrae with ALMA. The morphology of the inner wind of R Hya, which
has a known companion at ~3500 au, was determined from maps of CO and SiO
obtained at high angular resolution. A map of the CO emission reveals a
multi-layered structure consisting of a large elliptical feature at an angular
scale of ~10'' that is oriented along the north-south axis. The wind morphology
within the elliptical feature is dominated by two hollow bubbles. The bubbles
are on opposite sides of the AGB star and lie along an axis with a position
angle of ~115 deg. Both bubbles are offset from the central star, and their
appearance in the SiO channel maps indicates that they might be shock waves
travelling through the AGB wind. An estimate of the dynamical age of the
bubbles yields an age of the order of 100 yr, which is in agreement with the
previously proposed elapsed time since the star last underwent a thermal pulse.
When the CO and SiO emission is examined on subarcsecond angular scales, there
is evidence for an inclined, differentially rotating equatorial density
enhancement, strongly suggesting the presence of a second nearby companion. The
position angle of the major axis of this disc is ~70 deg in the plane of the
sky. We tentatively estimate that a lower limit on the mass of the nearby
companion is ~0.65 Msol on the basis of the highest measured speeds in the disc
and the location of its inner rim at ~6 au from the AGB star.
|
Synthesized speech from articulatory movements can have real-world use for
patients with vocal cord disorders, situations requiring silent speech, or in
high-noise environments. In this work, we present EMA2S, an end-to-end
multimodal articulatory-to-speech system that directly converts articulatory
movements to speech signals. We use a neural-network-based vocoder combined
with multimodal joint-training, incorporating spectrogram, mel-spectrogram, and
deep features. The experimental results confirm that the multimodal approach of
EMA2S outperforms the baseline system in terms of both objective and
subjective evaluation metrics. Moreover, the results demonstrate that joint
mel-spectrogram and deep feature loss training can effectively improve system
performance.
|
Neural Ordinary Differential Equations (NODEs) use a neural network to model
the instantaneous rate of change in the state of a system. However, despite
their apparent suitability for dynamics-governed time-series, NODEs present a
few disadvantages. First, they are unable to adapt to incoming data points, a
fundamental requirement for real-time applications imposed by the natural
direction of time. Second, time series are often composed of a sparse set of
measurements that could be explained by many possible underlying dynamics.
NODEs do not capture this uncertainty. In contrast, Neural Processes (NPs) are
a family of models providing uncertainty estimation and fast data adaptation
but lack an explicit treatment of the flow of time. To address these problems,
we introduce Neural ODE Processes (NDPs), a new class of stochastic processes
determined by a distribution over Neural ODEs. By maintaining an adaptive
data-dependent distribution over the underlying ODE, we show that our model can
successfully capture the dynamics of low-dimensional systems from just a few
data points. At the same time, we demonstrate that NDPs scale up to challenging
high-dimensional time-series with unknown latent dynamics such as rotating
MNIST digits.
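For context, a plain Neural ODE can be sketched in a few lines (this is the base model, not the full NDP, and the fixed-step Euler integrator below stands in for the adaptive solvers normally used):

import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    # A neural network modelling the instantaneous rate of change dz/dt.
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, t, z):
        return self.net(z)

def odeint_euler(f, z0, t):
    """Fixed-step Euler integrator, a simple stand-in for adaptive
    solvers; the whole trajectory stays differentiable."""
    z, out = z0, [z0]
    for t0, t1 in zip(t[:-1], t[1:]):
        z = z + (t1 - t0) * f(t0, z)
        out.append(z)
    return torch.stack(out)

f = ODEFunc()
z0 = torch.randn(8, 2)                 # batch of initial states
t = torch.linspace(0., 1., 20)
print(odeint_euler(f, z0, t).shape)    # torch.Size([20, 8, 2])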
|
We show that the permanent of a matrix can be written as the expectation
value of a function of random variables each with zero mean and unit variance.
This result is used to show that Glynn's theorem and a simplified MacMahon
theorem extend from a common probabilistic interpretation of the permanent.
Combining the methods in these two proofs, we prove a new result that relates
the permanent of a matrix to the expectation value of a product of hyperbolic
trigonometric functions, or, equivalently, the partition function of a spin
system. We conclude by discussing how the main theorem can be generalized and
how the techniques used to prove it can be applied to more general problems in
combinatorics.
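The probabilistic interpretation admits a direct Monte Carlo check: for i.i.d. variables x_j with zero mean and unit variance, perm(A) = E[(prod_j x_j) * prod_i (sum_j a_ij x_j)], since only permutation terms survive the expectation. A small numpy sketch:

import numpy as np
from itertools import permutations

def perm_exact(A):
    n = A.shape[0]
    return sum(np.prod(A[range(n), p]) for p in permutations(range(n)))

def perm_mc(A, n_samples=200000, rng=np.random.default_rng(0)):
    """Monte Carlo estimator of the permanent based on the probabilistic
    interpretation above; here the x_j are standard normal."""
    n = A.shape[0]
    x = rng.standard_normal((n_samples, n))
    vals = x.prod(axis=1) * (x @ A.T).prod(axis=1)
    return vals.mean()

A = np.array([[1., 2.], [3., 4.]])
print(perm_exact(A), perm_mc(A))  # 10.0 and an estimate near 10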
|
To improve the detection accuracy and generalization of steganalysis, this
paper proposes the Steganalysis Contrastive Framework (SCF) based on
contrastive learning. The SCF improves the feature representation of
steganalysis by maximizing the distance between features of samples of
different categories and minimizing the distance between features of samples of
the same category. To decrease the computing complexity of the contrastive loss
in supervised learning, we design a novel Steganalysis Contrastive Loss
(StegCL) based on the equivalence and transitivity of similarity. The StegCL
eliminates the redundant computing in the existing contrastive loss. The
experimental results show that the SCF improves the generalization and
detection accuracy of existing steganalysis DNNs, with maximum improvements of
2% and 3%, respectively. Without decreasing the detection accuracy, the training
time of using the StegCL is 10% of that of using the contrastive loss in
supervised learning.
|
We prove the existence and the Besov regularity of the density of the
solution to a general parabolic SPDE which includes the stochastic Burgers
equation on an unbounded domain. We use an elementary approach based on the
fractional integration by parts.
|
In the new era of very large telescopes, where data is crucial to expand
scientific knowledge, we have witnessed many deep learning applications for the
automatic classification of lightcurves. Recurrent neural networks (RNNs) are
one of the models used for these applications, and the LSTM unit stands out for
being an excellent choice for the representation of long time series. In
general, RNNs assume observations at discrete times, which may not suit the
irregular sampling of lightcurves. A traditional technique to address irregular
sequences consists of adding the sampling time to the network's input, but this
is not guaranteed to capture sampling irregularities during training.
Alternatively, the Phased LSTM unit has been created to address this problem by
updating its state using the sampling times explicitly. In this work, we study
the effectiveness of the LSTM and Phased LSTM based architectures for the
classification of astronomical lightcurves. We use seven catalogs containing
periodic and nonperiodic astronomical objects. Our findings show that LSTM
outperformed PLSTM on 6/7 datasets. However, the combination of both units
enhances the results in all datasets.
|
Recurrent event analyses have found a wide range of applications in
biomedicine, public health, and engineering, among others, where study subjects
may experience a sequence of events of interest during follow-up. The R package
reReg (Chiou and Huang 2021) offers a comprehensive collection of practical and
easy-to-use tools for regression analysis of recurrent events, possibly with
the presence of an informative terminal event. The regression framework is a
general scale-change model which encompasses the popular Cox-type model, the
accelerated rate model, and the accelerated mean model as special cases.
Informative censoring is accommodated through a subject-specific frailty
without the need for parametric specification. Different regression models are
allowed for the recurrent event process and the terminal event. Also included
are visualization and simulation tools.
|
Segmentation of additive manufacturing (AM) defects in X-ray Computed
Tomography (XCT) images is challenging, due to the poor contrast, small sizes
and variation in appearance of defects. Automatic segmentation can, however,
provide quality control for additive manufacturing. Over recent years,
three-dimensional convolutional neural networks (3D CNNs) have performed well
in the volumetric segmentation of medical images. In this work, we leverage
techniques from the medical imaging domain and propose training a 3D U-Net
model to automatically segment defects in XCT images of AM samples. This work
not only contributes to the use of machine learning for AM defect detection but
also demonstrates for the first time 3D volumetric segmentation in AM. We train
and test with three variants of the 3D U-Net on an AM dataset, achieving a mean
intersection over union (IoU) value of 88.4%.
|
Retinal artery/vein (A/V) classification is a critical technique for
diagnosing diabetes and cardiovascular diseases. Although deep learning based
methods achieve impressive results in A/V classification, their performances
usually degrade severely when being directly applied to another database, due
to the domain shift, e.g., caused by the variations in imaging protocols. In
this paper, we propose a novel vessel-mixing based consistency regularization
framework, for cross-domain learning in retinal A/V classification.
Specifically, to alleviate the severe bias toward the source domain, based on
the label smoothing prior, the model is regularized to give consistent
predictions for unlabeled target-domain inputs under perturbation. This
consistency regularization implicitly introduces a mechanism in which the
model and the perturbation oppose each other, pushing the model to be robust
enough to cope with the perturbation. We therefore investigate a more
challenging opponent, tailored to the retinal A/V scenario, to further
strengthen the model's robustness: the vessel-mixing perturbation. It
effectively disturbs the fundus images, especially the vessel structures, by
mixing two images regionally.
We conduct extensive experiments on cross-domain A/V classification using four
public datasets, which are collected by diverse institutions and imaging
devices. The results demonstrate that our method achieves the state-of-the-art
cross-domain performance, which is also close to the upper bound obtained by
fully supervised learning on target domain.
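The regional mixing can be sketched as a grid-based patch swap between two images; the grid size and swap probability are assumptions, not the paper's exact recipe:

import numpy as np

def vessel_mixing(img_a, img_b, grid=4, p=0.5, rng=np.random.default_rng(0)):
    """Illustrative sketch of a regional mixing perturbation: split two
    fundus images into a grid of patches and randomly swap regions, which
    disturbs the (vessel) structures."""
    out = img_a.copy()
    h, w = img_a.shape[:2]
    ph, pw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            if rng.random() < p:
                ys = slice(i * ph, (i + 1) * ph)
                xs = slice(j * pw, (j + 1) * pw)
                out[ys, xs] = img_b[ys, xs]
    return out

a = np.zeros((64, 64, 3)); b = np.ones((64, 64, 3))
print(vessel_mixing(a, b).mean())  # fraction of regions taken from b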
|
This paper studies equilibrium quality of semi-separable position auctions
(known as the Ad Types setting) with greedy or optimal allocation combined with
generalized second-price (GSP) or Vickrey-Clarke-Groves (VCG) pricing. We make
three contributions: first, we give upper and lower bounds on the Price of
Anarchy (PoA) for auctions which use greedy allocation with GSP pricing, greedy
allocations with VCG pricing, and optimal allocation with GSP pricing. Second,
we give Bayes-Nash equilibrium characterizations for two-player, two-slot
instances (for all auction formats) and show that there exists both a revenue
hierarchy and revenue equivalence across some formats. Finally, we use
no-regret learning algorithms and bidding data from a large online advertising
platform to evaluate the performance of the
mechanisms under semi-realistic conditions. For welfare, we find that the
optimal-to-realized welfare ratio (an empirical PoA analogue) is broadly better
than our upper bounds on PoA; for revenue, we find that the hierarchy in
practice may sometimes agree with simple theory, but generally appears
sensitive to the underlying distribution of bidder valuations.
|
This paper introduces the notion of an Input Constrained Control Barrier
Function (ICCBF), as a method to synthesize safety-critical controllers for
non-linear control affine systems with input constraints. The method identifies
a subset of the safe set of states, and constructs a controller to render the
subset forward invariant. The feedback controller is represented as the
solution to a quadratic program, which can be solved efficiently for real-time
implementation. Furthermore, we show that ICCBFs are a generalization of Higher
Order Control Barrier Functions, and thus are applicable to systems of
non-uniform relative degree. Simulation results are presented for the adaptive
cruise control problem and a spacecraft rendezvous problem.
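The QP-based safety filter can be sketched for a toy system as below; note this shows a generic CBF-QP with box input constraints, not the paper's ICCBF construction itself (the dynamics and barrier are placeholders):

import numpy as np
import cvxpy as cp

def cbf_qp_control(x, u_nom, f, g, h, grad_h, alpha=1.0, u_max=2.0):
    """Minimal sketch of a CBF-based safety filter: stay close to a
    nominal input while enforcing a barrier condition under box input
    constraints."""
    u = cp.Variable(len(u_nom))
    # CBF condition: grad_h(x) . (f(x) + g(x) u) >= -alpha * h(x)
    cbf = grad_h(x) @ (f(x) + g(x) @ u) >= -alpha * h(x)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)),
                      [cbf, cp.abs(u) <= u_max])
    prob.solve()
    return u.value

# Toy single integrator: stay in the half-space x[0] <= 1 (h = 1 - x[0]).
f = lambda x: np.zeros(2)
g = lambda x: np.eye(2)
h = lambda x: 1.0 - x[0]
grad_h = lambda x: np.array([-1.0, 0.0])
print(cbf_qp_control(np.array([0.9, 0.0]), np.array([1.5, 0.0]),
                     f, g, h, grad_h))  # first input clipped to ~0.1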
|
Electron-hole asymmetry is a fundamental property in solids that can
determine the nature of quantum phase transitions and the regime of operation
for devices. The observation of electron-hole asymmetry in graphene and
recently in the phase diagram of bilayer graphene has spurred interest into
whether it stems from disorder or from fundamental interactions such as
correlations. Here, we report an effective new way to access electron-hole
asymmetry in 2D materials by directly measuring the quasiparticle self-energy
in graphene/boron nitride field-effect devices. As the chemical potential moves
from the hole to the electron doped side, we see an increased strength of
electronic correlations manifested by an increase in the band velocity and
inverse quasiparticle lifetime. These results suggest that electronic
correlations play an intrinsic role in driving electron hole asymmetry in
graphene and provide a new insight for asymmetries in more strongly correlated
materials.
|
Finding shortest paths in a given network (e.g., a computer network or a road
network) is a well-studied task with many applications. We consider this task
under the presence of an adversary, who can manipulate the network by
perturbing its edge weights to gain an advantage over others. Specifically, we
introduce the Force Path Problem as follows. Given a network, the adversary's
goal is to make a specific path the shortest by adding weights to edges in the
network. The version of this problem in which the adversary can cut edges is
NP-complete. However, we show that Force Path can be solved to within arbitrary
numerical precision in polynomial time. We propose the PATHPERTURB algorithm,
which uses constraint generation to build a set of constraints that require
paths other than the adversary's target to be sufficiently long. Across a
highly varied set of synthetic and real networks, we show that the optimal
solution often reduces the required perturbation budget by about half when
compared to a greedy baseline method.
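The constraint-generation idea can be sketched with networkx and an LP solver; this is our own simplification in the spirit of PATHPERTURB, not the paper's code:

import networkx as nx
import numpy as np
from scipy.optimize import linprog

def force_path(G, s, t, target, tol=1e-9):
    """Find minimal non-negative weight additions making `target` a
    shortest s-t path: each round adds the constraint that the current
    shortest competing path be at least as long as the target."""
    edges = [tuple(sorted(e)) for e in G.edges]
    idx = {e: k for k, e in enumerate(edges)}
    tgt = [tuple(sorted(p)) for p in zip(target, target[1:])]
    A, b = [], []
    delta = np.zeros(len(edges))
    while True:
        for (u, v), d in zip(edges, delta):
            G[u][v]['w'] = G[u][v]['weight'] + d   # perturbed weights
        rival = nx.shortest_path(G, s, t, weight='w')
        riv = [tuple(sorted(p)) for p in zip(rival, rival[1:])]
        w_riv = sum(G[u][v]['w'] for u, v in riv)
        w_tgt = sum(G[u][v]['w'] for u, v in tgt)
        if w_riv >= w_tgt - tol:
            return {e: d for e, d in zip(edges, delta) if d > tol}
        row = np.zeros(len(edges))   # new cut: target <= this rival
        for e in tgt:
            row[idx[e]] += 1
        for e in riv:
            row[idx[e]] -= 1
        A.append(row)
        b.append(sum(G[u][v]['weight'] for u, v in riv)
                 - sum(G[u][v]['weight'] for u, v in tgt))
        delta = linprog(np.ones(len(edges)), A_ub=A, b_ub=b,
                        bounds=[(0, None)] * len(edges)).x

G = nx.Graph()
G.add_weighted_edges_from([('a', 'b', 1), ('b', 'c', 1), ('a', 'c', 1)])
print(force_path(G, 'a', 'c', ['a', 'b', 'c']))  # {('a', 'c'): 1.0}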
|
We explore a two-qubit system defined on valley isospins of two electrons
confined in a gate-defined double quantum dot created within a MoS$_2$
monolayer flake. We show how to initialize, control, interact and read out such
valley qubits only by electrical means using voltages applied to the local
planar gates, which are layered on the top of the flake. By demonstrating the
two-qubit exchange or readout via the Pauli blockade, we prove that valley
qubits in transition-metal-dichalcogenide semiconductors family fulfill the
universality criteria and represent a scalable quantum computing platform. Our
numerical experiments are based on the tight-binding model for a MoS$_2$
monolayer, which gives single-electron eigenstates that are then used to
construct a basis of Slater-determinants for the two-electron configuration
space. We express screened electron-electron interactions in this basis by
calculating the Coulomb matrix elements using localized Slater-type orbitals.
Then we solve the time-dependent Schr\"odinger equation and obtain an exact
time-evolution of the two-electron system. During the evolution we
simultaneously solve the Poisson equation, finding the confinement potential
controlled via voltages applied to the gates.
|
This paper is concerned with the problem of representing and learning the
optimal control law for the linear quadratic Gaussian (LQG) optimal control
problem. In recent years, there is a growing interest in re-visiting this
classical problem, in part due to the successes of reinforcement learning (RL).
The main question of this body of research (and also of our paper) is to
approximate the optimal control law {\em without} explicitly solving the
Riccati equation. For this purpose, a novel simulation-based algorithm, namely
an ensemble Kalman filter (EnKF), is introduced in this paper. The algorithm is
used to obtain formulae for optimal control, expressed entirely in terms of the
EnKF particles. For the general partially observed LQG problem, the proposed
EnKF is combined with a standard EnKF (for the estimation problem) to obtain
the optimal control input based on the use of the separation principle. A
nonlinear extension of the algorithm is also discussed which clarifies the
duality roots of the proposed EnKF. The theoretical results and algorithms are
illustrated with numerical experiments.
|
Computational couplings of Markov chains provide a practical route to
unbiased Monte Carlo estimation that can utilize parallel computation. However,
these approaches depend crucially on chains meeting after a small number of
transitions. For models that assign data into groups, e.g. mixture models, the
obvious approaches to couple Gibbs samplers fail to meet quickly. This failure
owes to the so-called "label-switching" problem; semantically equivalent
relabelings of the groups contribute well-separated posterior modes that impede
fast mixing and cause large meeting times. We here demonstrate how to avoid
label switching by considering chains as exploring the space of partitions
rather than labelings. Using a metric on this space, we employ an optimal
transport coupling of the Gibbs conditionals. This coupling outperforms
alternative couplings that rely on labelings and, on a real dataset, provides
estimates more precise than usual ergodic averages in the limited time regime.
Code is available at github.com/tinnguyen96/coupling-Gibbs-partition.
|
Standard machine learning approaches require centralizing the users' data in
one computer or a shared database, which raises data privacy and
confidentiality concerns. Therefore, limiting central access is important,
especially in healthcare settings, where data regulations are strict. A
potential approach to tackling this is Federated Learning (FL), which enables
multiple parties to collaboratively learn a shared prediction model by using
parameters of locally trained models while keeping raw training data locally.
In the context of AI-assisted pain-monitoring, we wish to enable
confidentiality-preserving and unobtrusive pain estimation for long-term
pain-monitoring and reduce the burden on the nursing staff who perform frequent
routine check-ups. To this end, we propose a novel Personalized Federated Deep
Learning (PFDL) approach for pain estimation from face images. PFDL performs
collaborative training of a deep model, implemented using a lightweight CNN
architecture, across different clients (i.e., subjects) without sharing their
face images. Instead of sharing all parameters of the model, as in standard FL,
PFDL retains the last layer locally (used to personalize the pain estimates).
This (i) adds another layer of data confidentiality, making it difficult for an
adversary to infer pain levels of the target subject, while (ii) personalizing
the pain estimation to each subject through local parameter tuning. We show
using a publicly available dataset of face videos of pain (UNBC-McMaster
Shoulder Pain Database), that PFDL performs comparably or better than the
standard centralized and FL algorithms, while further enhancing data privacy.
This has the potential to improve traditional pain monitoring by making it
more secure, computationally efficient, and scalable to a large number of
individuals (e.g., for in-home pain monitoring), providing timely and
unobtrusive pain measurement.
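The aggregation rule that distinguishes PFDL from standard FL, keeping the last layer local, can be sketched as follows (the model structure and the name of the local layer are illustrative assumptions):

import copy
import torch
import torch.nn as nn

def pfdl_aggregate(client_models, local_prefix):
    """Sketch of a PFDL-style round: average all parameters across clients
    except those under `local_prefix`, which each client keeps local for
    personalization."""
    avg = copy.deepcopy(client_models[0].state_dict())
    for name in avg:
        if name.startswith(local_prefix):
            continue                      # personalized layer: never shared
        avg[name] = torch.stack([m.state_dict()[name].float()
                                 for m in client_models]).mean(0)
    for m in client_models:               # broadcast shared layers only
        state = m.state_dict()
        for name, val in avg.items():
            if not name.startswith(local_prefix):
                state[name] = val.clone()
        m.load_state_dict(state)

# Three clients; the parameters of module '2' (the final Linear) play the
# role of the locally retained head in this sketch.
clients = [nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
           for _ in range(3)]
pfdl_aggregate(clients, local_prefix='2')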
|