This article investigates the energy efficiency issue in non-orthogonal
multiple access (NOMA)-enhanced Internet-of-Things (IoT) networks, where a
mobile unmanned aerial vehicle (UAV) is exploited as a flying base station to
collect data from ground devices via the NOMA protocol. With the aim of
maximizing network energy efficiency (EE), we formulate a joint problem of UAV
deployment, device scheduling and resource allocation. First, we formulate the
joint device scheduling and spectrum allocation problem as a three-sided
matching problem, and propose a novel low-complexity near-optimal algorithm. We
also introduce the novel concept of `exploration' into the matching game for
further performance improvement. By algorithm analysis, we prove the
convergence and stability of the final matching state. Second, in an effort to
allocate proper transmit power to IoT devices, we adopt Dinkelbach's
algorithm to obtain the optimal power allocation solution. Furthermore, we
provide a simple but effective approach based on the disk covering problem to
determine the optimal number and locations of UAV's stop points to ensure that
all IoT devices can be fully covered by the UAV via line-of-sight (LoS) links
for the sake of better channel conditions. Numerical results unveil that: i) the
proposed joint UAV deployment, device scheduling and resource allocation scheme
achieves much higher EE compared to the predefined stationary UAV deployment case
and fixed power allocation scheme, with acceptable complexity; and ii) the
UAV-aided IoT networks with NOMA greatly outperform the OMA case in terms of
the number of accessed devices.
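Dinkelbach's algorithm mentioned above reduces a ratio maximization to a sequence of parametrized subproblems. A minimal sketch, with a toy one-dimensional ratio standing in for the EE objective (the grid-search `argmax` is purely illustrative, not the paper's power-allocation subproblem):

```python
import numpy as np

def dinkelbach(num, den, argmax, lam0=0.0, tol=1e-9, max_iter=100):
    """Generic Dinkelbach iteration for maximizing num(x)/den(x).

    argmax(lam) must return an x maximizing num(x) - lam * den(x).
    Converges because F(lam) = max_x [num(x) - lam*den(x)] is strictly
    decreasing in lam and F(lam*) = 0 at the optimal ratio lam*.
    """
    lam = lam0
    for _ in range(max_iter):
        x = argmax(lam)
        F = num(x) - lam * den(x)
        if abs(F) < tol:
            break
        lam = num(x) / den(x)   # update toward the optimal ratio
    return x, lam

# Toy example: maximize (1 + 2p) / (1 + p) over p in [0, 5]
# (monotone ratio, so the optimum sits at p = 5 with value 11/6).
grid = np.linspace(0.0, 5.0, 5001)
num = lambda p: 1.0 + 2.0 * p
den = lambda p: 1.0 + p
argmax = lambda lam: grid[np.argmax(num(grid) - lam * den(grid))]
p_opt, ratio = dinkelbach(num, den, argmax)
```

In the paper's setting the inner `argmax` would be the power-allocation subproblem; here it is a brute-force grid search for clarity.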
|
Deformable convolution networks (DCNs), proposed to address image
recognition with geometric or photometric variations, typically involve
deformable convolution, which convolves on arbitrary locations of the input features.
The locations change with different inputs and induce considerable dynamic and
irregular memory accesses which cannot be handled by classic neural network
accelerators (NNAs). Moreover, bilinear interpolation (BLI) operation that is
required to obtain deformed features in DCNs also cannot be deployed on
existing NNAs directly. Although a general-purpose processor (GPP) seated
along with classic NNAs can process the deformable convolution, the processing
on GPP can be extremely slow due to the lack of parallel computing capability.
To address the problem, we develop a DCN accelerator on existing NNAs to
support both the standard convolution and deformable convolution. Specifically,
for the dynamic and irregular accesses in DCNs, we have both the input and
output features divided into tiles and build a tile dependency table (TDT) to
track the irregular tile dependency at runtime. With the TDT, we further
develop an on-chip tile scheduler to handle the dynamic and irregular accesses
efficiently. In addition, we propose a novel mapping strategy to enable
parallel BLI processing on NNAs and apply layer fusion techniques for more
energy-efficient DCN processing. According to our experiments, the proposed
accelerator achieves orders of magnitude higher performance and energy
efficiency compared to typical computing architectures including ARM,
ARM+TPU, and GPU with 6.6\% chip area penalty to a classic NNA.
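The bilinear interpolation (BLI) step that DCNs require at fractional sampling locations can be sketched as follows; the border-clipping rule used here is one common choice, not necessarily the accelerator's:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample feature map feat (H x W) at fractional location (y, x)
    via bilinear interpolation, as deformable convolution requires.
    Out-of-range corner indices are clipped to the border."""
    H, W = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy1, wx1 = y - y0, x - x0           # weights for the far corners
    wy0, wx0 = 1.0 - wy1, 1.0 - wx1     # weights for the near corners

    def at(r, c):                       # border-clipped access
        return feat[min(max(r, 0), H - 1), min(max(c, 0), W - 1)]

    return (wy0 * wx0 * at(y0, x0) + wy0 * wx1 * at(y0, x1)
            + wy1 * wx0 * at(y1, x0) + wy1 * wx1 * at(y1, x1))

feat = np.arange(16, dtype=float).reshape(4, 4)
v = bilinear_sample(feat, 1.5, 2.5)     # between rows 1-2, cols 2-3
```

Because (y, x) depend on the input, the four corner reads are exactly the dynamic, data-dependent accesses the tile dependency table has to track.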
|
Cloud providers offer end-users various pricing schemes to allow them to
tailor VMs to their needs, e.g., a pay-as-you-go billing scheme, called
\textit{on-demand}, and a discounted contract scheme, called \textit{reserved
instances}. This paper presents a cloud broker which offers users both the
flexibility of on-demand instances and some level of discounts found in
reserved instances. The broker employs a buy-low-and-sell-high strategy that
places user requests into a resource pool of pre-purchased discounted cloud
resources. By analysing user request time-series data, the broker takes a
risk-oriented approach to dynamically adjust the resource pool.
This approach does not require a training process, which is useful for
processing large data streams. The broker is evaluated with high-frequency
real cloud datasets from Alibaba. The results show that the overall profit of
the broker is close to the theoretical optimal scenario where user requests can
be perfectly predicted.
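A quantile rule over a sliding demand window is one simple, training-free way to adjust a pre-purchased pool of the kind described; this is an illustrative stand-in, not the broker's actual risk model:

```python
from collections import deque

def pool_size(demand_window, risk=0.75):
    """Size the reserved-instance pool to cover the `risk` quantile of
    recent demand; requests above the pool spill to on-demand instances.
    Illustrative stand-in for a risk-oriented pool adjustment."""
    ranked = sorted(demand_window)
    idx = min(len(ranked) - 1, int(risk * len(ranked)))
    return ranked[idx]

# Sliding window of recent per-interval VM demand (hypothetical numbers).
window = deque(maxlen=8)
for demand in [10, 12, 11, 30, 12, 13, 11, 12]:
    window.append(demand)

reserved = pool_size(window, risk=0.75)
```

Raising `risk` buys more discounted capacity (fewer spills, more idle risk); lowering it does the opposite, which is the buy-low-and-sell-high trade-off in miniature.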
|
Thermal states of light are widely used in quantum optics for testing various
quantum phenomena. In particular, they can be utilized for the characterization of
photon creation and photon annihilation operations. During the last decade, the
problem of photon subtraction from multimode quantum states has become highly
significant. Therefore, in this work we present a technique for statistical
parameter estimation of multimode multiphoton subtracted thermal states of
light, which can be used for multimode photon annihilation tests.
|
Recent experimental data on anomalous magnetic moments strongly indicate
the existence of new physics beyond the standard model. An energetic $\mu^+$
beam is a potential option for the envisioned neutrino factories, future muon
colliders and $\mu$SR (muon spin rotation, resonance and relaxation)
technology. We propose a prompt acceleration scheme for the $\mu^+$ beam in
a donut wakefield driven by a shaped Laguerre-Gaussian (LG) laser pulse. The
forward part of the donut wakefield can accelerate and also focus positive
particle beams effectively. The LG laser is shaped by a near-critical-density
plasma. The shaped LG laser has a shorter rise time and can enlarge the
acceleration field. The acceleration field driven by a shaped LG laser pulse is
six times higher than that driven by a normal LG laser pulse. The simulation
results show that the $\mu^+$ bunch can be accelerated from $200\,\mathrm{MeV}$
to $2\,\mathrm{GeV}$ and the transverse size of the $\mu^+$ bunch is also focused from
an initial $\omega_0=5\,\mu\mathrm{m}$ to $\omega=1\,\mu\mathrm{m}$ within several picoseconds.
|
HO Puppis (HO Pup) was considered a Be-star candidate based on its
gamma-Cassiopeiae-type light curve, but lacked spectroscopic confirmation.
Using the distance measured from Gaia Data Release 2 and the
spectral-energy-distribution (SED) fit on broadband photometry, the Be-star
nature of HO Pup is ruled out. Furthermore, based on the 28,700 photometric
data points collected from various time-domain surveys and dedicated
intensive-monitoring observations, the light curves of HO Pup closely resemble
those of IW And-type stars (as pointed out in Kimura et al. 2020a), exhibiting
characteristics such as a quasi-standstill phase, brightenings, and dips. The
light curve of HO Pup displays various variability timescales, including
brightening cycles ranging from 23 to 61 days, variations with periods between
3.9 days and 50 minutes during the quasi-standstill phase, and a semi-regular
~14-day period for the dip events. We have also collected time-series spectra
(with various spectral resolutions), in which Balmer emission lines and other
expected spectral lines for an IW And-type star were detected (even though some
of these lines were also expected to be present for Be stars). We detect Bowen
fluorescence near the brightening phase, which can be used to discriminate
between IW And-type stars and Be stars. Finally, despite observations spanning
only four nights, polarization variation was detected, indicating that HO Pup
has significant intrinsic polarization.
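Variability timescales like those reported can be recovered with a simple periodogram; a sketch on synthetic, evenly sampled data (real survey light curves are unevenly sampled and would call for, e.g., a Lomb-Scargle periodogram instead):

```python
import numpy as np

# Synthetic light curve: a 42-day cycle (within the reported 23-61 d
# brightening range) plus noise, sampled every 0.5 days for 400 days.
t = np.arange(0.0, 400.0, 0.5)                     # days
period_true = 42.0
flux = 1.0 + 0.3 * np.sin(2 * np.pi * t / period_true)
flux += np.random.default_rng(2).normal(0.0, 0.05, t.size)

# FFT power spectrum of the mean-subtracted flux; the strongest
# non-DC peak marks the dominant variability timescale.
power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=0.5)             # cycles per day
best_period = 1.0 / freqs[np.argmax(power)]
```

The recovered period lands on the FFT frequency grid nearest 1/42 cycles per day, so it is accurate only to the grid spacing of 1/400 cycles per day.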
|
We study an effective one-dimensional quantum model that includes friction
and spin-orbit coupling (SOC), and show that the model exhibits spin
polarization when both terms are finite. Most importantly, strong spin
polarization can be observed even for moderate SOC, provided that friction is
strong. Our findings might help to explain the pronounced effect of chirality
on spin distribution and transport in chiral molecules. In particular, our
model implies static magnetic properties of a chiral molecule, which lead to
Shiba-like states when a molecule is placed on a superconductor, in accordance
with recent experimental data.
|
A recent paper by Mucino, Okon and Sudarsky attempts an assessment of the
Relational Interpretation of quantum mechanics. The paper presupposes
assumptions that are precisely those questioned in the Relational
Interpretation, thus undermining the value of the assessment.
|
Any large-scale spiking neuromorphic system striving for complexity at the
level of the human brain and beyond will need to be co-optimized for
communication and computation. Such reasoning leads to the proposal for
optoelectronic neuromorphic platforms that leverage the complementary
properties of optics and electronics. Starting from the conjecture that future
large-scale neuromorphic systems will utilize integrated photonics and fiber
optics for communication in conjunction with analog electronics for
computation, we consider two possible paths towards achieving this vision. The
first is a semiconductor platform based on analog CMOS circuits and
waveguide-integrated photodiodes. The second is a superconducting approach that
utilizes Josephson junctions and waveguide-integrated superconducting
single-photon detectors. We discuss available devices, assess scaling
potential, and provide a list of key metrics and demonstrations for each
platform. Both platforms hold potential, but their development will diverge in
important respects. Semiconductor systems benefit from a robust fabrication
ecosystem and can build on extensive progress made in purely electronic
neuromorphic computing but will require III-V light source integration with
electronics at an unprecedented scale, further advances in ultra-low
capacitance photodiodes, and success from emerging memory technologies.
Superconducting systems place near theoretically minimum burdens on light
sources (a tremendous boon to one of the most speculative aspects of either
platform) and provide new opportunities for integrated, high-endurance synaptic
memory. However, superconducting optoelectronic systems will also contend with
interfacing low-voltage electronic circuits to semiconductor light sources, the
serial biasing of superconducting devices on an unprecedented scale, a less
mature fabrication ecosystem, and cryogenic infrastructure.
|
Motivation: Manual curation of genome-scale reconstructions is laborious, yet
existing automated curation tools typically do not take species-specific
experimental data and manually refined genome annotations into account.
Results: We developed DEMETER, a COBRA Toolbox extension that enables the
efficient simultaneous refinement of thousands of draft genome-scale
reconstructions while ensuring adherence to the quality standards in the field,
agreement with available experimental data, and refinement of pathways based on
manually refined genome annotations. Availability: DEMETER and tutorials are
available at https://github.com/opencobra/cobratoolbox.
|
This work proposes to learn fair low-rank tensor decompositions by
regularizing the Canonical Polyadic Decomposition factorization with the kernel
Hilbert-Schmidt independence criterion (KHSIC). It is shown, theoretically and
empirically, that a small KHSIC between a latent factor and the sensitive
features guarantees approximate statistical parity. The proposed algorithm
surpasses the state-of-the-art algorithm, FATR (Zhu et al., 2018), in
controlling the trade-off between fairness and residual fit on synthetic and
real data sets.
|
Chinese character recognition has attracted much research interest due to its
wide applications. Although it has been studied for many years, some issues in
this field have not been completely resolved yet, e.g. the zero-shot problem.
Previous character-based and radical-based methods have not fundamentally
addressed the zero-shot problem since some characters or radicals in test sets
may not appear in training sets under a data-hungry condition. Inspired by the
fact that humans can generalize to know how to write characters unseen before
if they have learned stroke orders of some characters, we propose a
stroke-based method by decomposing each character into a sequence of strokes,
which are the most basic units of Chinese characters. However, we observe that
there is a one-to-many relationship between stroke sequences and Chinese
characters. To tackle this challenge, we employ a matching-based strategy to
transform the predicted stroke sequence to a specific character. We evaluate
the proposed method on handwritten characters, printed artistic characters, and
scene characters. The experimental results validate that the proposed method
outperforms existing methods on both character zero-shot and radical zero-shot
tasks. Moreover, the proposed method can be easily generalized to other
languages whose characters can be decomposed into strokes.
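The matching-based strategy can be illustrated by nearest-neighbour matching of a predicted stroke sequence against a character lexicon; the stroke alphabet and lexicon entries below are hypothetical placeholders, not the paper's actual data:

```python
import difflib

# Hypothetical stroke alphabet (h = horizontal, v = vertical, ...) and
# stroke-sequence lexicon; entries are placeholders for demonstration.
LEXICON = {
    "hh": "二",
    "hhh": "三",
    "hv": "十",
}

def match_character(pred_strokes, lexicon=LEXICON):
    """Resolve the one-to-many stroke-to-character ambiguity by matching
    the predicted stroke sequence to the closest lexicon entry."""
    best = max(lexicon, key=lambda s: difflib.SequenceMatcher(
        None, pred_strokes, s).ratio())
    return lexicon[best]

c = match_character("hhh")
```

Because the lexicon can contain characters never seen in training, as long as their stroke sequences are known, this is where the zero-shot capability enters.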
|
Tracking objects of interest in a video is one of the most popular and widely
applicable problems in computer vision. However, with the years, a Cambrian
explosion of use cases and benchmarks has fragmented the problem into a multitude
of different experimental setups. As a consequence, the literature has
fragmented too, and now novel approaches proposed by the community are usually
specialised to fit only one specific setup. To understand to what extent this
specialisation is necessary, in this work we present UniTrack, a solution to
address five different tasks within the same framework. UniTrack consists of a
single and task-agnostic appearance model, which can be learned in a supervised
or self-supervised fashion, and multiple ``heads'' that address individual
tasks and do not require training. We show how most tracking tasks can be
solved within this framework, and that the same appearance model can be
successfully used to obtain results that are competitive against specialised
methods for most of the tasks considered. The framework also allows us to
analyse appearance models obtained with the most recent self-supervised
methods, thus extending their evaluation and comparison to a larger variety of
important problems.
|
We consider the motion of a gyroscope on a closed timelike curve (CTC). A
gyroscope is identified with a unit-length spacelike vector - a spin-vector -
orthogonal to the tangent to the CTC, and satisfying the equations of
Fermi-Walker transport along the curve. We investigate the consequences of the
periodicity of the coefficients of the transport equations, which arise from
the periodicity of the CTC, which is assumed to be piecewise $C^2$. We show that
every CTC with period $T$ admits at least one $T$-periodic spin-vector.
Further, either every other spin-vector is $T$-periodic, or no others are. It
follows that gyroscopes carried by CTCs are either all $T$-periodic, or are
generically not $T$-periodic. We consider examples of spacetimes admitting
CTCs, and address the question of whether $T$-periodicity of gyroscopic motion
occurs generically or only on a negligible set for these CTCs. We discuss these
results from the perspective of principles of consistency in spacetimes
admitting CTCs.
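One compact way to see the stated dichotomy is via the monodromy of the transport equations; a sketch, assuming the transport map is continuous in the curve parameter and orientation-preserving:

```latex
% Fermi-Walker transport is linear and preserves the induced metric on
% the 3-space of spin-vectors orthogonal to the CTC's tangent, so
% transport once around the curve defines a monodromy map
\[
  s(T) = M\, s(0), \qquad M \in SO(3).
\]
% A spin-vector is $T$-periodic iff $M s = s$. By Euler's rotation
% theorem every element of $SO(3)$ has $+1$ as an eigenvalue, hence
\[
  \exists\, s \neq 0 \ \text{ with } \ M s = s,
\]
% giving at least one $T$-periodic spin-vector. Moreover, either
% $M = \mathrm{id}$ (every spin-vector is $T$-periodic) or the fixed set
% is the one-dimensional rotation axis (no other spin-vector is).
```

This matches the abstract's alternative: all gyroscopes $T$-periodic, or generically none.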
|
In this paper I formulate Minimal Requirements for Candidate Predictions in
quantum field theories, inspired by viewing the standard model as an effective
field theory. I then survey standard effective field theory regularization
procedures, to see if the vacuum expectation value of energy density
($\langle\rho\rangle$) is a quantity that meets these requirements. The verdict
is negative, leading to the conclusion that $\langle\rho\rangle$ is not a
physically significant quantity in the standard model. Rigorous extensions of
flat space quantum field theory eliminate $\langle\rho\rangle$ from their
conceptual framework, indicating that it lacks physical significance in the
framework of quantum field theory more broadly. This result has consequences
for problems in cosmology and quantum gravity, as it suggests that the correct
solution to the cosmological constant problem involves a revision of the vacuum
concept within quantum field theory.
|
In this work, we obtain the Schr\"odinger equation solutions for the Varshni
potential using the Nikiforov-Uvarov method. The energy eigenvalues are
obtained in the non-relativistic regime. The corresponding eigenfunctions are
obtained in terms of Laguerre polynomials. We applied the present results to
calculate the heavy-meson masses of charmonium and bottomonium. The mass spectra
for charmonium and bottomonium multiplets have been predicted numerically. The
results are in good agreement with experimental data and the work of other
researchers.
|
We determine explicitly and discuss in detail the effects of the joint
presence of a longitudinal and a transversal (random) magnetic field on the
phases of the Random Energy Model (REM) and its hierarchical generalization,
the GREM. Our results extend known results both in the classical case of
vanishing transversal field and in the quantum case for vanishing longitudinal
field. Following Derrida and Gardner, we argue that the longitudinal field has
to be implemented hierarchically also in the Quantum GREM. We show that this
ensures the shrinking of the spin glass phase in the presence of the magnetic
fields as also expected for the Quantum Sherrington-Kirkpatrick model.
|
Starting from a discrete $C^*$-dynamical system $(\mathfrak{A}, \theta,
\omega_o)$, we define and study most of the main ergodic properties of the
crossed product $C^*$-dynamical system $(\mathfrak{A}\rtimes_\alpha\mathbb{Z},
\Phi_{\theta, u},\omega_o\circ E)$,
$E:\mathfrak{A}\rtimes_\alpha\mathbb{Z}\rightarrow\mathfrak{A}$ being the canonical
conditional expectation of $\mathfrak{A}\rtimes_\alpha\mathbb{Z}$ onto
$\mathfrak{A}$, provided $\alpha\in\operatorname{Aut}(\mathfrak{A})$ commutes with the $*$-automorphism
$\theta$ up to a unitary $u\in\mathfrak{A}$. Here, $\Phi_{\theta,
u}\in\operatorname{Aut}(\mathfrak{A}\rtimes_\alpha\mathbb{Z})$ can be considered as the fully
noncommutative generalisation of the celebrated skew product defined by H.
Anzai for the product of two tori in the classical case.
|
Extended scalar sectors are a common feature of almost all beyond Standard
Model (SM) scenarios which, in fact, can address many of the SM shortcomings
solely on their own. While many beyond SM scenarios have lost their appeal due
to the non-observation of their predicted particles or are experimentally
inaccessible, scalar extensions are well within the reach of many current and
upcoming experiments. Here, we discuss the novel phenomenon of dark
CP-violation which was introduced for the first time in the context of
non-minimal Higgs frameworks with an extended dark sector and point out its
experimental probes.
|
Electrically addressing spin systems is predicted to be a key component in
developing scalable semiconductor-based quantum processing architectures, to
enable fast spin qubit manipulation and long-distance entanglement via
microwave photons. However, single spins have no electric dipole, and therefore
a spin-orbit mechanism must be integrated in the qubit design. Here, we propose
to couple microwave photons to atomically precise donor spin qubit devices in
silicon using the hyperfine interaction intrinsic to donor systems and an
electrically-induced spin-orbit coupling. We characterise a one-electron system
bound to a tunnel-coupled donor pair (1P-1P) using the tight-binding method,
and then estimate the spin-photon coupling achievable under realistic
assumptions. We address the recent experiments on double quantum dots (DQDs) in
silicon and indicate the differences between DQD and 1P-1P systems. Our
analysis shows that it is possible to achieve strong spin-photon coupling in
1P-1P systems in realistic device conditions without the need for an external
magnetic field gradient.
|
In recent years, EUV lithography scanner systems have entered High-Volume
Manufacturing for state-of-the-art Integrated Circuits (IC), with critical
dimensions down to 10 nm. This technology uses 13.5 nm EUV radiation, which is
shaped and transmitted through a near-vacuum H2 background gas. This gas is
excited into a low-density H2 plasma by the EUV radiation, as generated in
pulsed mode operation by the Laser-Produced Plasma (LPP) in the EUV Source.
Thus, in the confinement created by the walls and mirrors within the scanner
system, a reductive plasma environment is created that must be understood in
detail to maximize mirror transmission over lifetime and to minimize molecular
and particle contamination in the scanner. Besides the irradiated mirrors,
reticle and wafer, also the plasma and radical load to the surrounding
construction materials must be considered. This paper will provide an overview
of the EUV-induced plasma in scanner context. Special attention will be given
to the plasma parameters in a confined geometry, such as may be found in the
scanner area near the reticle. Also, the translation of these specific plasma
parameters to off-line setups will be discussed.
|
There has been much interest in novel models of dark matter that exhibit
interesting behavior on galactic scales. A primary motivation is the observed
Baryonic Tully-Fisher Relation in which the mass of galaxies increases as the
quartic power of rotation speed. This scaling is not obviously accounted for by
standard cold dark matter. This has prompted the development of dark matter
models that exhibit some form of so-called MONDian phenomenology to account for
this galactic scaling, while also recovering the success of cold dark matter on
large scales. A beautiful example of this is the class of so-called superfluid dark
matter models, in which a complex bosonic field undergoes spontaneous symmetry
breaking on galactic scales, entering a superfluid phase with a 3/2 kinetic
scaling in the low energy effective theory, that mediates a long-ranged MONDian
force. In this work we examine the causality and locality properties of these
and other related models. We show that the Lorentz invariant completions of the
superfluid models exhibit high energy perturbations that violate global
hyperbolicity of the equations of motion in the MOND regime and can be
superluminal in other parts of phase space. We also examine a range of
alternate models, finding that they also exhibit forms of non-locality.
|
In this paper, we propose a class of monitoring statistics for a mean shift
in a sequence of high-dimensional observations. Inspired by the recent
U-statistic based retrospective tests developed by Wang et al. (2019) and Zhang
et al. (2020), we advance the U-statistic based approach to the sequential
monitoring problem by developing a new adaptive monitoring procedure that can
detect both dense and sparse changes in real time. Unlike Wang et al. (2019) and
Zhang et al. (2020), where self-normalization was used in their tests, we
instead introduce a class of estimators for the $q$-norm of the covariance matrix
and prove their ratio consistency. To facilitate fast computation, we further
develop recursive algorithms to improve the computational efficiency of the
monitoring procedure. The advantage of the proposed methodology is demonstrated
via simulation studies and real data illustrations.
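For context, the recursive, O(1)-per-observation structure that makes sequential monitoring computationally cheap can be seen in a classical CUSUM sketch for a univariate mean shift (not the paper's U-statistic procedure):

```python
import numpy as np

class CusumMonitor:
    """O(1)-per-observation recursive monitor for a mean shift: a
    classical two-sided CUSUM sketch, not the paper's statistic.
    update() returns True once a change is flagged."""
    def __init__(self, target_mean, drift=0.5, threshold=8.0):
        self.mu, self.k, self.h = target_mean, drift, threshold
        self.s_pos = self.s_neg = 0.0

    def update(self, x):
        # Recursive updates: each new observation costs O(1).
        self.s_pos = max(0.0, self.s_pos + (x - self.mu) - self.k)
        self.s_neg = max(0.0, self.s_neg - (x - self.mu) - self.k)
        return self.s_pos > self.h or self.s_neg > self.h

rng = np.random.default_rng(1)
mon = CusumMonitor(target_mean=0.0)
pre = [mon.update(x) for x in rng.normal(0.0, 1.0, 200)]   # in control
post = [mon.update(x) for x in rng.normal(2.0, 1.0, 50)]   # mean shifts
```

The paper's recursive algorithms serve the same purpose for the far richer high-dimensional U-statistic: keeping the per-observation cost bounded as the stream grows.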
|
We examine the elements of the balance equation of entropy in open quantum
evolutions, and their response as we go over from a Markovian to a
non-Markovian situation. In particular, we look at the heat current and entropy
production rate in the non-Markovian reduced evolution, and a Markovian limit
of the same, experienced by one of two interacting systems immersed in a
Markovian bath. The analysis naturally leads us to define a heat current
deficit and an entropy production rate deficit, being differences between the
global and local versions of the corresponding quantities. The investigation
brings us, in certain cases, to a complementarity of the time-integrated heat
current deficit with the relative entropy of entanglement between the two
systems.
|
In this thesis we discuss various classical problems in enumerative geometry.
We are focused on ideas and methods which can be used explicitly for practical
computations. Our approach is based on studying the limits of elliptic stable
envelopes with shifted equivariant or Kähler variables from elliptic cohomology
to K-theory.
We prove that for a variety X we can obtain K-theoretic stable envelopes for
the variety X^G of the G-fixed points of X, where G is a cyclic group acting on
X preserving the symplectic form.
We formalize the notion of symplectic duality, also known as 3-dimensional
mirror symmetry. We obtain a factorization theorem about the limit of elliptic
stable envelopes to a wall, which generalizes the result of M. Aganagic and
A. Okounkov. This approach allows us to extend the action of quantum groups,
quantum Weyl groups, R-matrices etc., to actions on the K-theory of the
symplectic dual variety. In the case of the Hilbert scheme of points in plane,
our results imply the conjectures of E. Gorsky and A. Negut.
We propose a new approach to K-theoretic quantum difference equations.
|
This paper deals with the output regulation problem of a linear
time-invariant system in the presence of sporadically available measurement
streams. A regulator with a continuous intersample injection term is proposed,
where the intersample injection is provided by a linear dynamical system whose
state is reset with the arrival of every new measurement update.
The resulting system is augmented with a timer triggering an instantaneous
update of the new measurement and the overall system is then analyzed in a
hybrid system framework. Using a Lyapunov-based stability analysis, we offer
sufficient conditions to ensure the objectives of the output regulation problem
are achieved under intermittency of the measurement streams. Then, from the
solution to linear matrix inequalities, a numerically tractable regulator
design procedure is presented. Finally, with the help of an illustrative
example, the effectiveness of the theoretical results is validated.
|
Group-10 transition metal dichalcogenides (TMDs) are rising in prominence
within the highly innovative field of 2D materials. While PtS2 has been
investigated for potential electronic applications, due to its high
charge-carrier mobility and strong layer-dependent bandgap, it has proven to be
one of the more difficult TMDs to synthesise. In contrast to most TMDs, Pt has
a significantly more stable monosulfide, the non-layered PtS. The existence of
two stable platinum sulfides, sometimes within the same sample, has resulted in
much confusion between the materials in the literature. Neither of these Pt
sulfides has been thoroughly characterised as of yet. Here we utilise
time-efficient, scalable methods to synthesise high-quality thin films of both
Pt sulfides on a variety of substrates. The competing nature of the sulfides
and the limited thermal stability of these materials are demonstrated. We report
peak-fitted X-ray photoelectron spectra, and Raman spectra using a variety of
laser wavelengths, for both materials. This systematic characterisation
provides a guide to differentiate between the sulfides using relatively simple
methods, which is essential to enable future work on these interesting
materials.
|
Let $f(\mathbb{z},\bar{\mathbb{z}})$ be a convenient Newton non-degenerate
mixed polynomial with strongly polar non-negative mixed weighted homogeneous
face functions. We consider a convenient regular simplicial cone subdivision
$\Sigma^*$ which is admissible for $f$ and take the toric modification
$\hat{\pi} : X \to \mathbb{C}^n$ associated with $\Sigma^*$. We show that the
toric modification resolves topologically the singularity of the mixed
hypersurface germ defined by $f(\mathbb{z},\bar{\mathbb{z}})$ under the
Assumption (*) (Theorem 32). This result is an extension of the first part of
Theorem 11 ([4]) by Mutsuo Oka. We also consider some typical examples (\S 9).
|
The spectrally resolved differential cross section of Compton scattering, $d
\sigma / d \omega' \vert_{\omega' = const}$, rises from small towards larger
laser intensity parameter $\xi$, reaches a maximum, and falls towards the
asymptotic strong-field region. Expressed by invariant quantities: $d \sigma
/du \vert_{u = const}$ rises from small towards larger values of $\xi$, reaches
a maximum at $\xi_{max} = \frac49 {\cal K} u m^2 / k \cdot p$, ${\cal K} =
{\cal O} (1)$, and falls at $\xi > \xi_{max}$ like $\propto \xi^{-3/2} \exp
\left (- \frac{2 u m^2}{3 \xi \, k \cdot p} \right )$ at $u \ge 1$. [The
quantity $u$ is the Ritus variable related to the light-front momentum-fraction
$s = (1 + u)/u = k \cdot k' / k \cdot p$ of the emitted photon (four-momentum
$k'$, frequency $\omega'$), and $k \cdot p/m^2$ quantifies the invariant energy
in the entrance channel of electron (four-momentum $p$, mass $m$) and laser
(four-wave vector $k$).] Such a behavior of a differential observable is to be
contrasted with the laser intensity dependence of the total probability,
$\lim_{\chi = \xi k \cdot p/m^2, \xi \to \infty} \mathbb{P} \propto \alpha
\chi^{2/3} m^2 / k \cdot p$, which is governed by the soft spectral part.
We combine the hard-photon yield from Compton with the seeded Breit-Wheeler
pair production in a folding model and obtain a rapidly increasing $e^+ e^-$
pair number at $\xi \lesssim 4$. Laser bandwidth effects are quantified in the
weak-field limit of the related trident pair production.
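The quoted position of the maximum is consistent with the quoted asymptotic form: extrapolating $f(\xi) \propto \xi^{-3/2} \exp\left(-\frac{2 u m^2}{3 \xi\, k \cdot p}\right)$ down to its stationary point gives

```latex
\frac{d}{d\xi} \ln f
  = -\frac{3}{2\xi} + \frac{2 u m^2}{3 \xi^2 \, k \cdot p} = 0
  \quad\Longrightarrow\quad
  \xi = \frac{4}{9}\, \frac{u\, m^2}{k \cdot p},
```

i.e. $\xi_{max}$ with ${\cal K} = 1$, in agreement with ${\cal K} = {\cal O}(1)$.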
|
Semantic web technologies have shown their effectiveness, especially when it
comes to knowledge representation, reasoning, and data integration. However,
the original semantic web vision, whereby machine readable web data could be
automatically actioned upon by intelligent software web agents, has yet to be
realised. In order to better understand the existing technological
opportunities and challenges, in this paper we examine the status quo in terms
of intelligent software web agents, guided by research with respect to
requirements and architectural components, coming from the agents community. We
use the identified requirements to both further elaborate on the semantic web
agent motivating use case scenario, and to summarise different perspectives on
the requirements from the semantic web agent literature. We subsequently
propose a hybrid semantic web agent architecture, and use the various
components and subcomponents in order to provide a focused discussion in
relation to existing semantic web standards and community activities. Finally,
we highlight open research opportunities and challenges and take a broader
perspective of the research by discussing the potential for intelligent
software web agents as an enabling technology for emerging domains, such as
digital assistants, cloud computing, and the internet of things.
|
In the type IIB maximally supersymmetric pp-wave background, stringy excited
modes are described by BMN (Berenstein-Maldacena-Nastase) operators in the dual
$\mathcal{N}=4$ super-Yang-Mills theory. In this paper, we continue the studies
of higher genus free BMN correlators with more stringy modes, mostly focusing
on the case of genus one and four stringy modes in different transverse
directions. Surprisingly, we find that the non-negativity of torus two-point
functions, which is a consequence of a previously proposed probability
interpretation and has been verified in the cases with two and three stringy
modes, is no longer true for the case of four or more stringy modes.
Nevertheless, the factorization formula, which is also a proposed holographic
dictionary relating the torus two-point function to a string diagram
calculation, is still valid. We also check the correspondence of planar
three-point functions with Green-Schwarz string vertex with many string modes.
We discuss some issues in the case of multiple stringy modes in the same
transverse direction. Our calculations provide some new perspectives on pp-wave
holography.
|
Famously, mathematical finance was started by Bachelier in his 1900 PhD thesis
where - among many other achievements - he also provides a formal derivation of
the Kolmogorov forward equation. This also forms the basis for Dupire's (again
formal) solution to the problem of finding an arbitrage-free model calibrated
to the volatility surface. The latter result has rigorous counterparts in the
theorems of Kellerer and Lowther. In this survey article we revisit these
hallmarks of stochastic finance, highlighting the role played by some optimal
transport results in this context.
|
In this paper, a Gauss-Seidel method with oblique direction (GSO) is proposed
for finding the least-squares solution to a system of linear equations, where
the coefficient matrix may be full rank or rank deficient and the system may be
overdetermined or underdetermined. This method substantially reduces the number
of iteration steps and the running time needed to find the least-squares
solution, especially when the columns of the matrix A are close to being
linearly dependent. It is theoretically proved that the GSO method converges to
the least-squares solution. In addition, a randomized version, the randomized
Gauss-Seidel method with oblique direction (RGSO), is established, and its
convergence is proved. Theoretical analysis and numerical results show that the
GSO method and the RGSO method are more efficient than the coordinate descent
(CD) method and the randomized coordinate descent (RCD) method.
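The coordinate descent baseline against which GSO and RGSO are compared can be sketched concretely. Below is a minimal cyclic coordinate descent (Gauss-Seidel) iteration for min ||Ax - b||^2; note this is the CD baseline only, not the oblique-direction variant itself, and all names and sizes are illustrative.

```python
import numpy as np

def coordinate_descent_ls(A, b, iters=100):
    """Cyclic coordinate descent (Gauss-Seidel) for min ||Ax - b||^2.

    This is the classic baseline method, not the oblique-direction
    GSO/RGSO variant of the paper.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                      # residual, kept up to date
    col_sq = (A ** 2).sum(axis=0)      # squared column norms
    for _ in range(iters):
        for i in range(n):
            # exact minimization of the objective along coordinate i
            delta = A[:, i] @ r / col_sq[i]
            x[i] += delta
            r -= delta * A[:, i]
    return x

# Overdetermined toy system: CD converges to the least-squares solution
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
x_cd = coordinate_descent_ls(A, b, iters=200)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Each inner step is an exact line search along one coordinate, so the objective decreases monotonically; when columns of A are nearly linearly dependent these axis-aligned steps become slow, which is the regime the oblique directions are designed to accelerate.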
|
The Reconfigurable Intelligent Surface (RIS) constitutes one of the prominent
technologies for the next 6-th Generation (6G) of wireless communications. It
is envisioned to enhance signal coverage in cases where obstacles block the
direct communication from Base Stations (BSs), and when high carrier
frequencies are used that are sensitive to attenuation losses. In the
literature, the exploitation of RISs is exclusively based on traditional
coherent demodulation, which necessitates the availability of Channel State
Information (CSI). Given the CSI, a multi-antenna BS or a dedicated controller
computes the pre/post spatial coders and the RIS configuration. The latter
tasks require significant amount of time and resources, which may not be
affordable when the channel is time-varying or the CSI is not accurate enough.
In this paper, we consider the uplink between a single-antenna user and a
multi-antenna BS and present a novel RIS-empowered Orthogonal Frequency
Division Multiplexing (OFDM) communication system based on the differential
phase shift keying, which is suitable for high noise and/or mobility scenarios.
Considering both an idealistic and a realistic channel model, analytical
expressions for the Signal-to-Interference and Noise Ratio (SINR) and the
Symbol Error Probability (SEP) of the proposed non-coherent RIS-empowered
system are presented. Our extensive computer simulation results verify the
accuracy of the presented analysis and showcase the proposed system's
performance and superiority over coherent demodulation in different mobility
and spatial correlation scenarios.
|
Semantic diversity in Genetic Programming has proved to be highly beneficial
in evolutionary search. We have witnessed a surge in the number of scientific
works in the area, starting first in discrete spaces and moving then to
continuous spaces. The vast majority of these works, however, have focused
their attention on single-objective genetic programming paradigms, with a few
exceptions focusing on Evolutionary Multi-objective Optimization (EMO). The
latter works have used well-known robust algorithms, including the
Non-dominated Sorting Genetic Algorithm II and the Strength Pareto Evolutionary
Algorithm, both heavily influenced by the notion of Pareto dominance. These
inspiring works led us to take a step forward in EMO by considering
Multi-objective Evolutionary Algorithms Based on Decomposition (MOEA/D). We
show, for the first time, how we can promote semantic diversity in MOEA/D in
Genetic Programming.
|
In this paper, we define a notion of containment and avoidance for subsets of
$\mathbb{R}^2$. Then we introduce a new, continuous and super-additive extremal
function for subsets $P \subseteq \mathbb{R}^2$ called $px(n, P)$, which is the
supremum of $\mu_2(S)$ over all open $P$-free subsets $S \subseteq [0, n]^2$,
where $\mu_2(S)$ denotes the Lebesgue measure of $S$ in $\mathbb{R}^2$. We show
that $px(n, P)$ fully encompasses the Zarankiewicz problem and more generally
the 0-1 matrix extremal function $ex(n, M)$ up to a constant factor. More
specifically, we define a natural correspondence between finite subsets $P
\subseteq \mathbb{R}^2$ and 0-1 matrices $M_P$, and we prove that $px(n, P) =
\Theta(ex(n, M_P))$ for all finite subsets $P \subseteq \mathbb{R}^2$, where
the constants in the bounds depend only on the distances between the points in
$P$.
We also discuss bounded infinite subsets $P$ for which $px(n, P)$ grows
faster than $ex(n, M)$ for all fixed 0-1 matrices $M$. In particular, we show
that $px(n, P) = \Theta(n^{2})$ for any open subset $P \subseteq \mathbb{R}^2$.
We prove an even stronger result, that if $Q_P$ is the set of points with
rational coordinates in any open subset $P \subseteq \mathbb{R}^2$, then $px(n,
Q_P) = \Theta(n^2)$. Finally, we obtain a strengthening of the
K\H{o}vari-S\'{o}s-Tur\'{a}n theorem that applies to infinite subsets of
$\mathbb{R}^2$. Specifically, for subsets $P_{s, t, c} \subseteq \mathbb{R}^2$
consisting of $t$ horizontal line segments of length $s$ with left endpoints on
the same vertical line with consecutive segments a distance of $c$ apart, we
prove that $px(n, P_{s, t,c}) = O(s^{\frac{1}{t}}n^{2-\frac{1}{t}})$, where the
constant in the bound depends on $t$ and $c$. When $t = 2$, we show that this
bound is sharp up to a constant factor that depends on $c$.
|
Nonreciprocal signal operation is highly desired for various acoustic
applications, where protection from unwanted backscattering can be realized so
that transmitting and receiving signals are processed in a full-duplex mode.
Here we present the realization of a class of nonreciprocal circulators based
on simply structured acoustic metagratings, which consist only of a few solid
cylinders and a steady fluid flow with low velocity. These innovative
metagratings are intelligently designed via a diffraction analysis of the
linearized potential flow equation and a genetic-algorithm-based optimization
process. Unitary reflection efficiency between desired ports of the circulators
is demonstrated through full-wave numerical simulations, confirming
nonreciprocal and robust circulation of the acoustic signal over a broad range
of flow velocity magnitude and profile. Our design provides a feasible degree
of tunability, including switching from reciprocal to nonreciprocal operation
and reversing the handedness of the circulator, presenting a convenient but
efficient approach for the realization of nonreciprocal acoustic devices from
wavelength-thick metagratings. It may find applications in various scenarios
including underwater communication, energy harvesting, and acoustic sensing.
|
This work presents a self-heating study of a 40-nm bulk-CMOS technology in
the ambient temperature range from 300 K down to 4.2 K. A custom test chip was
designed and fabricated for measuring both the temperature rise in the MOSFET
channel and in the surrounding silicon substrate, using the gate resistance and
silicon diodes as sensors, respectively. Since self-heating depends on factors
such as device geometry and power density, the test structure characterized in
this work was specifically designed to resemble actual devices used in
cryogenic qubit control ICs. Severe self-heating was observed at deep-cryogenic
ambient temperatures, resulting in a channel temperature rise exceeding 50 K
and having an impact detectable at a distance of up to 30 um from the device.
By extracting the thermal resistance from measured data at different
temperatures, it was shown that a simple model is able to accurately predict
channel temperatures over the full ambient temperature range from
deep-cryogenic to room temperature. The results and modeling presented in this
work contribute towards the full self-heating-aware IC design-flow required for
the reliable design and operation of cryo-CMOS circuits.
|
In this digital era, people in almost every discipline use automated systems
that generate information represented in document format in different natural
languages. As a result, there is growing interest in better solutions for
finding, organizing and analyzing these documents. In this paper,
we propose a system that clusters Amharic text documents using Encyclopedic
Knowledge (EK) with neural word embedding. EK enables the representation of
related concepts and neural word embedding allows us to handle the contexts of
the relatedness. During the clustering process, all the text documents pass
through preprocessing stages. Enriched text document features are extracted
from each document by mapping them with EK and the word embedding model. A
TF-IDF-weighted vector of the enriched features is then generated. Finally, the
text documents are clustered using the popular spherical K-means algorithm. The
proposed system is tested with
Amharic text corpus and Amharic Wikipedia data. Test results show that the use
of EK with word embedding for document clustering improves the average accuracy
over the use of EK alone. Furthermore, changing the class size has a
significant effect on accuracy.
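The final clustering step can be sketched as follows. This is a generic, minimal spherical K-means on toy vectors (naive first-k initialization, illustrative names and data), not the authors' full pipeline with EK enrichment.

```python
import numpy as np

def spherical_kmeans(X, k, iters=20):
    """Minimal sketch of spherical K-means: K-means constrained to the
    unit sphere, with assignment by cosine similarity."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize documents
    C = X[:k].copy()                                  # naive init: first k points
    for _ in range(iters):
        labels = (X @ C.T).argmax(axis=1)             # assign by cosine similarity
        for j in range(k):
            members = X[labels == j]
            if len(members):                          # keep old centroid if empty
                c = members.sum(axis=0)
                C[j] = c / np.linalg.norm(c)          # re-project onto the sphere
    return labels, C

# Two bundles of vectors pointing in two distinct directions
a, b = np.array([1.0, 0.05]), np.array([0.05, 1.0])
X = np.array([a, b, 2 * a, 3 * b, 1.5 * a, 0.5 * b])
labels, _ = spherical_kmeans(X, k=2)
```

Because the vectors are normalized before clustering, only direction (i.e., term-distribution shape) matters, which is why spherical K-means pairs naturally with TF-IDF document vectors.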
|
We study the scheduling problem of makespan minimization while taking machine
conflicts into account. Machine conflicts arise in various settings, e.g.,
shared resources for pre- and post-processing of tasks or spatial restrictions.
In this context, each job has a blocking time before and after its processing
time, i.e., three parameters. We seek conflict-free schedules in which the
blocking times of no two jobs intersect on conflicting machines. Given a set of
jobs, a set of machines, and a graph representing machine conflicts, the
problem SchedulingWithMachineConflicts (SMC) asks for a conflict-free schedule
of minimum makespan.
We show that, unless $\textrm{P}=\textrm{NP}$, SMC on $m$ machines does not
allow for a $\mathcal{O}(m^{1-\varepsilon})$-approximation algorithm for any
$\varepsilon>0$, even in the case of identical jobs and every choice of fixed
positive parameters, including the unit case. As a complement, we provide
approximation algorithms when a suitable collection of independent sets is
given. Finally, we present polynomial time algorithms to solve the problem for
the case of unit jobs on special graph classes. Most prominently, we solve it
for bipartite graphs by using structural insights for conflict graphs of star
forests.
|
This paper presents a novel approach to characterize the dynamics of the
limit spectrum of large random matrices. This approach is based upon the notion
we call "spectral dominance". In particular, we show that the limit spectral
measure can be determined as the derivative of the unique viscosity solution of
a partial integro-differential equation. This also allows us to give general and
"short" proofs for the convergence problem. We treat the cases of Dyson
Brownian motions, Wishart processes and present a general class of models for
which this characterization holds.
|
We give a new construction of Lascoux-Sch\"utzenberger's charge statistic in
type A which is motivated by the geometric Satake equivalence
|
Effective Capacity defines the maximum communication rate subject to a
specific delay constraint, while effective energy efficiency (EEE) indicates
the ratio between effective capacity and power consumption. We analyze the EEE
of ultra-reliable networks operating in the finite blocklength regime. We
obtain a closed form approximation for the EEE in quasi-static Nakagami-$m$
(and Rayleigh as sub-case) fading channels as a function of power, error
probability, and latency. Furthermore, we characterize the QoS constrained EEE
maximization problem for different power consumption models, which shows a
significant difference between finite and infinite blocklength coding with
respect to EEE and optimal power allocation strategy. As asserted in the
literature, achieving ultra-reliability using one transmission consumes a huge
amount of power, which is not feasible for energy-limited IoT devices. In
this context, accounting for empty buffer probability in machine type
communication (MTC) and extending the maximum delay tolerance jointly enhances
the EEE and allows for adaptive retransmission of faulty packets. Our analysis
reveals that obtaining the optimum error probability for each transmission by
minimizing the non-empty buffer probability approaches EEE optimality, while
being analytically tractable via Dinkelbach's algorithm. Furthermore, the
results illustrate the power saving and the significant EEE gain attained by
applying adaptive retransmission protocols, while sacrificing a limited
increase in latency.
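Dinkelbach's algorithm, used above for the EEE maximization, reduces a ratio maximization to a sequence of parametric subproblems F(lam) = max_x f(x) - lam*g(x). The sketch below applies it to a toy energy-efficiency ratio (log-rate over transmit-plus-circuit power on a power grid); the rate model, circuit power and grid are stand-ins, not the paper's finite-blocklength EEE expression.

```python
import numpy as np

def dinkelbach(f, g, argmax_aux, lam0=0.0, tol=1e-12, max_iter=100):
    """Dinkelbach iteration for max_x f(x)/g(x) with g(x) > 0.

    `argmax_aux(lam)` must return an x maximizing f(x) - lam * g(x);
    supplying it is the caller's job (an assumption of this sketch).
    """
    lam = lam0
    for _ in range(max_iter):
        x = argmax_aux(lam)
        F = f(x) - lam * g(x)        # optimal value of the parametric subproblem
        if abs(F) < tol:             # F(lam*) = 0 exactly at the optimal ratio
            break
        lam = f(x) / g(x)            # Newton-like ratio update
    return x, lam

# Toy EE-style ratio: achievable rate over total (transmit + circuit) power.
p_grid = np.linspace(0.01, 10.0, 2000)   # candidate transmit powers (illustrative)
p_c = 1.0                                # assumed constant circuit power
f = lambda p: np.log2(1.0 + p)
g = lambda p: p + p_c
argmax_aux = lambda lam: p_grid[np.argmax(f(p_grid) - lam * g(p_grid))]
p_opt, ee_opt = dinkelbach(f, g, argmax_aux)
```

The iteration converges because the sequence of ratios lam is monotonically increasing and F(lam) is strictly decreasing in lam, with its root at the optimal ratio.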
|
In this paper, we introduce a non-abelian exterior product of Hom-Leibniz
algebras and investigate its relation to the Hopf formula. We also construct
an eight-term exact sequence in the homology of Hom-Leibniz algebras. Finally,
we relate the notion of capability of a Hom-Leibniz algebra to its exterior
product.
|
This paper proposes a nonlinear control architecture for flexible aircraft
simultaneous trajectory tracking and load alleviation. By exploiting the
control redundancy, the gust and maneuver loads are alleviated without
degrading the rigid-body command tracking performance. The proposed control
architecture contains four cascaded control loops: position control, flight
path control, attitude control, and optimal multi-objective wing control. Since
the position kinematics are not influenced by model uncertainties, the
nonlinear dynamic inversion control is applied. On the contrary, the flight
path dynamics are perturbed by both model uncertainties and atmospheric
disturbances; thus the incremental sliding mode control is adopted.
Lyapunov-based analyses show that this method can simultaneously reduce the
model dependency and the minimum possible gains of conventional sliding mode
control methods. Moreover, the attitude dynamics are in the strict-feedback
form; thus the incremental backstepping sliding mode control is applied.
Furthermore, a novel load reference generator is designed to distinguish the
necessary loads for performing maneuvers from the excessive loads. The load
references are realized by the inner-loop optimal wing controller, while the
excessive loads are neutralized by flaps without influencing the outer-loop
tracking performance. The merits of the proposed control architecture are
verified by trajectory tracking tasks and gust load alleviation tasks in
spatial von Karman turbulence fields.
|
This paper presents a simple Fourier-matching method to rigorously study
resonance frequencies of a sound-hard slab with a finite number of arbitrarily
shaped cylindrical holes of diameter ${\cal O}(h)$ for $h\ll1$. Outside the
holes, a sound field can be expressed in terms of its normal derivatives on the
apertures of holes. Inside each hole, since the vertical variable can be
separated, the field can be expressed in terms of a countable set of Fourier
basis functions. Matching the field on each aperture yields a linear system of
countable equations in terms of a countable set of unknown Fourier
coefficients. The linear system can be reduced to a finite-dimensional linear
system based on the invertibility of its principal submatrix, which is proved
by the well-posedness of a closely related boundary value problem for each hole
in the limiting case $h\to 0$, so that only the leading Fourier coefficient of
each hole is preserved in the finite-dimensional system. The resonance
frequencies are those making the resulting finite-dimensional linear system
rank deficient. By regular asymptotic analysis for $h \ll 1$, we get a
systematic asymptotic formula for characterizing the resonance frequencies by
the 3D subwavelength structure. The formula reveals the important fact that when
all holes are of the same shape, the Q-factor for any resonance frequency
asymptotically behaves as ${\cal O}(h^{-2})$ for $h\ll1$ with its prefactor
independent of shapes of holes.
|
In this paper we analyze the propositional extensions of the minimal
classical modal logic system E, which form a lattice denoted as CExtE. Our
method of analysis uses algebraic calculations with canonical forms, which are
a generalization of the normal forms applicable to normal modal logics. As an
application, we identify a group of automorphisms of CExtE that is isomorphic
to the symmetric group S4.
|
We show that the quantum geometry of the Fermi surface can be numerically
described by a 3-dimensional discrete quantum manifold. This approach not only
avoids singularities in the Fermi sea, but it also enables the precise
computation of the intrinsic Hall conductivity resolved in spin, as well as any
other local properties of the Fermi surface. The method assures numerical
accuracy when the Fermi level is arbitrarily close to singularities, and it
remains robust when Kramers degeneracy is protected by symmetry. The approach
is demonstrated by calculating the anomalous Hall and spin Hall conductivities
of a 2-band lattice model of a Weyl semimetal and a full-band ab-initio model
of zincblende GaAs.
|
Doping can profoundly affect the electronic and optical structure of
semiconductors. Here we address the effect of surplus charges on non-radiative
(NR) exciton and trion decay in doped semiconducting single-wall carbon
nanotubes. The dependence of exciton photoluminescence quantum yields and
exciton decay on the doping level, with its characteristically
stretched-exponential kinetics, is attributed to diffusion-limited NR decay at
charged impurity sites. By contrast, trion decay is unimolecular with a rate
constant of $2.0\,\rm ps^{-1}$. Our experiments thus show that charged
impurities not only trap trions and scavenge mobile excitons but that they also
facilitate efficient NR energy dissipation for both.
|
The nearby face-on spiral galaxy NGC 2617 underwent an unambiguous
'inside-out' multi-wavelength outburst in Spring 2013, and a dramatic Seyfert
type change probably between 2010 and 2012, with the emergence of broad optical
emission lines. To search for the jet activity associated with this variable
accretion activity, we carried out multi-resolution and multi-wavelength radio
observations. Using the very long baseline interferometric (VLBI) observations
with the European VLBI Network (EVN) at 1.7 and 5.0 GHz, we find that NGC 2617
shows a partially synchrotron self-absorbed compact radio core with a
significant core shift, and an optically thin steep-spectrum jet extending
towards the north up to about two parsecs in projection. We also observed NGC
2617 with the electronic Multi-Element Remotely Linked Interferometer Network
(e-MERLIN) at 1.5 and 5.5 GHz, and revisited the archival data of the Very
Large Array (VLA) and the Very Long Baseline Array (VLBA). The radio core had a
stable flux density of about 1.4 mJy at 5.0 GHz between 2013 June and 2014
January, in agreement with the expectation of a supermassive black hole in the
low accretion rate state. The northern jet component is unlikely to be
associated with the 'inside-out' outburst of 2013. Moreover, we report that
most optically selected changing-look AGN at z<0.83 are sub-mJy radio sources
in the existing VLA surveys at 1.4 GHz, and it is unlikely that they are more
active than normal AGN at radio frequencies.
|
With the development of 5G and the Internet of Things, large numbers of
wireless devices need to share the limited spectrum resources. Dynamic spectrum
access
(DSA) is a promising paradigm to remedy the problem of inefficient spectrum
utilization brought upon by the historical command-and-control approach to
spectrum allocation. In this paper, we investigate the distributed DSA problem
for multi-user in a typical multi-channel cognitive radio network. The problem
is formulated as a decentralized partially observable Markov decision process
(Dec-POMDP), and we propose a centralized off-line training and distributed
on-line execution framework based on cooperative multi-agent reinforcement
learning (MARL). We employ the deep recurrent Q-network (DRQN) to address the
partial observability of the state for each cognitive user. The ultimate goal
is to learn a cooperative strategy which maximizes the sum throughput of the
cognitive radio network in a distributed fashion without coordination
information
exchange between cognitive users. Finally, we validate the proposed algorithm
in various settings through extensive experiments. The simulation results show
that the proposed algorithm converges quickly and achieves near-optimal
performance.
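The Dec-POMDP structure described above (per-user partial observations, a shared sum-throughput reward) can be made concrete with a toy environment; the dynamics and names below are illustrative, not the paper's exact model.

```python
import numpy as np

class MultiChannelAccessEnv:
    """Toy multi-user, multi-channel access environment in the spirit of
    the Dec-POMDP formulation (illustrative, not the paper's model):
    each user picks a channel (or 0 = stay idle); a transmission succeeds
    only if no other user picked the same channel, and each user observes
    only its own ACK, not the other users' choices."""

    def __init__(self, n_users, n_channels):
        self.n_users, self.n_channels = n_users, n_channels

    def step(self, actions):
        # actions[i] in {0, ..., n_channels}; 0 means "do not transmit"
        counts = np.bincount(actions, minlength=self.n_channels + 1)
        acks = np.array([a > 0 and counts[a] == 1 for a in actions])
        reward = int(acks.sum())     # shared network sum throughput this slot
        return acks, reward          # per-user partial observation, team reward

env = MultiChannelAccessEnv(n_users=3, n_channels=2)
obs, r = env.step(np.array([1, 2, 2]))   # users 2 and 3 collide on channel 2
```

The ACK-only observation is exactly what makes the problem partially observable and motivates a recurrent Q-network (DRQN) per user, while the shared reward is what the centralized training phase optimizes.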
|
We introduce the notion of eigenstate of an operator in an abstract
C*-algebra, and prove several properties. Most significantly, if the operator
is self-adjoint, then every element of its spectrum has a corresponding
eigenstate.
|
Blockchain applications that rely on Proof-of-Work (PoW) have become
increasingly energy inefficient, with a staggering carbon footprint. In
contrast, energy-efficient alternative consensus protocols such as
Proof-of-Stake (PoS) may cause centralization and unfairness in the blockchain
system. To address these challenges, we propose a modular version of PoS-based
blockchain systems called epos that resists the centralization of network
resources by extending mining opportunities to a wider set of stakeholders.
Moreover, epos leverages the in-built system operations to promote fair mining
practices by penalizing malicious entities. We validate epos's achievable
objectives through theoretical analysis and simulations. Our results show that
epos ensures fairness and decentralization, and can be applied to existing
blockchain applications.
|
We consider a nonnegative time series forecasting framework. Based on recent
advances in Nonnegative Matrix Factorization (NMF) and Archetypal Analysis, we
introduce two procedures referred to as Sliding Mask Method (SMM) and Latent
Clustered Forecast (LCF). SMM is a simple and powerful method based on time
window prediction using Completion of Nonnegative Matrices. This new procedure
combines low nonnegative rank decomposition and matrix completion where the
hidden values are to be forecast. LCF is a two-stage method: it leverages archetypal
analysis for dimension reduction and clustering of time series, then it uses
any black-box supervised forecast solver on the clustered latent
representation. Theoretical guarantees on uniqueness and robustness of the
solution of NMF Completion-type problems are also provided for the first time.
Finally, numerical experiments on real-world and synthetic datasets confirm the
forecasting accuracy of both methodologies.
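The completion step behind SMM (a low nonnegative rank model with hidden entries to forecast) can be sketched with mask-weighted multiplicative NMF updates. This is a minimal stand-in, not the paper's exact SMM procedure; the rank-1 toy data and hidden-entry pattern are illustrative.

```python
import numpy as np

def masked_nmf(X, mask, rank, iters=2000, seed=0, eps=1e-9):
    """Multiplicative-update NMF fitted on observed entries only
    (mask == 1), i.e., nonnegative matrix completion; the unobserved
    entries are then read off the low-rank reconstruction."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    Xo = X * mask                             # observed data
    for _ in range(iters):
        R = (W @ H) * mask                    # model, restricted to the mask
        W *= (Xo @ H.T) / (R @ H.T + eps)     # Lee-Seung-style update for W
        R = (W @ H) * mask
        H *= (W.T @ Xo) / (W.T @ R + eps)     # ... and for H
    return W @ H                              # completed ("forecast") matrix

# Exactly rank-1 nonnegative data with a few hidden entries to recover
rng = np.random.default_rng(1)
X = np.outer(rng.random(6) + 0.5, rng.random(8) + 0.5)
mask = np.ones_like(X)
hidden = [(0, 1), (2, 3), (4, 5), (1, 6)]
for i, j in hidden:
    mask[i, j] = 0.0                          # entries to be forecast
Xhat = masked_nmf(X, mask, rank=1)
```

Because the data are exactly rank 1 and every row and column retains observed entries, the observed cells pin down the factors up to scale, so the hidden cells are recovered by the reconstruction.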
|
In this study, we extend the model of Haring et al. (2001) by introducing a
retrial phenomenon in a multi-server queueing system. When at most g guard
channels are available, new calls are allowed to join the retrial group. This
retrial group, called the orbit, can hold a maximum of m retrial calls. The
impact of retrials on certain performance measures is numerically
investigated. The focus of this work is to construct optimization problems to
determine the optimal number of channels, the optimal number of guard channels
and the optimal orbit size. Further, it has been emphasized that the proposed
model with retrial phenomenon reduces the blocking probability of new calls in
the system.
|
With the rapidly evolving next-generation systems-of-systems, we face new
security, resilience, and operational assurance challenges. In the face of the
increasing attack landscape, it is necessary to provide efficient mechanisms
to verify software and device integrity and detect run-time modifications.
Towards this direction, remote attestation is a promising defense mechanism
that allows a third party, the verifier, to ensure a remote device's (the
prover's) integrity. However, many of the existing families of attestation
solutions have strong assumptions on the verifying entity's trustworthiness,
thus not allowing for privacy preserving integrity correctness. Furthermore,
they suffer from scalability and efficiency issues. This paper presents a
lightweight dynamic configuration integrity verification scheme that enables
inter- and intra-device attestation without disclosing any configuration
information and
can be applied on both resource-constrained edge devices and cloud services.
Our goal is to enhance run-time software integrity and trustworthiness with a
scalable solution eliminating the need for federated infrastructure trust.
|
Understanding ventilation strategy of a supercavity is important for
designing high-speed underwater vehicles wherein an artificial gas pocket is
created behind a flow separation device for drag reduction. Our study
investigates the effect of flow unsteadiness on the ventilation requirements to
form (CQf) and collapse (CQc) a supercavity. Imposing unsteadiness on the
incoming flow increases CQf at low free stream velocity and lowers CQf at high
free stream velocity. High-speed imaging reveals
distinctly different behaviors in the recirculation region for low and high
freestream velocity under unsteady flows. At low free stream velocities, the
recirculation region formed downstream of a cavitator shifted vertically with
flow unsteadiness, resulting in lower bubble collision and coalescence
probability, which is critical for the supercavity formation process. The
recirculation region negligibly changed with flow unsteadiness at high free
stream velocity and less ventilation is required to form a supercavity compared
to that of the steady incoming flow. Such a difference is attributed to the
increased transverse Reynolds stress that aids bubble collision in a confined
space of the recirculation region. CQc is found to heavily rely on the vertical
component of the flow unsteadiness and the free stream velocity. The
interfacial instability located at the upper rear of the supercavity develops
noticeably with flow unsteadiness, and additional bubbles formed by the
distorted interface shed from
the supercavity, resulting in an increased CQc. Further analysis on the
quantification of this additional bubble leakage rate indicates that the
development and amplitude of the interfacial instability account for the
variation of CQc under a wide range of flow unsteadiness. Our study provides
insights into the design of a ventilation strategy for supercavitating
vehicles in practice.
|
Emotion recognition and understanding is a vital component in human-machine
interaction. Dimensional models of affect such as those using valence and
arousal have advantages over traditional categorical ones due to the complexity
of emotional states in humans. However, dimensional emotion annotations are
difficult and expensive to collect, therefore they are not as prevalent in the
affective computing community. To address these issues, we propose a method to
generate synthetic images from existing categorical emotion datasets using face
morphing as well as dimensional labels in the circumplex space with full
control over the resulting sample distribution, while achieving augmentation
factors of at least 20x.
|
The paper proposes a new technique that substantially improves blind digital
modulation identification (DMI) algorithms that are based on higher-order
statistics (HOS). The proposed technique takes advantage of noise power
estimation to make an offset on higher-order moments (HOM), thus getting an
estimate of noise-free HOM. When tested for multiple-antenna systems, the
proposed method outperforms, in terms of identification accuracy, other DMI
algorithms that are based only on cumulants or do not consider HOM denoising,
even for a receiver with impairments. The improvement is achieved with the same
order of complexity of the common HOS-based DMI algorithms in the same context.
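The moment-offset idea can be illustrated in the simplest single-channel case: for circular complex Gaussian noise with power sigma^2, E|y|^2 = E|s|^2 + sigma^2 and E|y|^4 = E|s|^4 + 4 E|s|^2 sigma^2 + 2 sigma^4, so noise-free HOM estimates follow by subtracting the noise-power terms. The QPSK simulation below is an illustrative sketch, not the paper's multiple-antenna setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
sigma2 = 0.5                    # noise power, assumed known or pre-estimated

# Unit-power QPSK symbols plus circular complex Gaussian noise
s = (rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = s + n

# Noisy higher-order moments of the received signal
M21_y = np.mean(np.abs(y) ** 2)
M42_y = np.mean(np.abs(y) ** 4)

# Offset the moments with the noise power to estimate the noise-free HOM:
#   E|y|^2 = E|s|^2 + sigma^2
#   E|y|^4 = E|s|^4 + 4 E|s|^2 sigma^2 + 2 sigma^4
M21_hat = M21_y - sigma2
M42_hat = M42_y - 4 * M21_hat * sigma2 - 2 * sigma2 ** 2
```

For unit-power QPSK both E|s|^2 and E|s|^4 equal 1, so the denoised estimates should land near 1 despite the strong noise, which is the effect the identification features rely on.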
|
The Internet-of-Things (IoT) is an emerging and cognitive technology which
connects a massive number of smart physical devices with virtual objects
operating in diverse platforms through the internet. IoT is increasingly being
implemented in distributed settings, making footprints in almost every sector
of our life. Unfortunately, for healthcare systems, the entities connected to
the IoT networks are exposed to an unprecedented level of security threats.
Relying on a huge volume of sensitive and personal data, IoT healthcare systems
are facing unique challenges in protecting data security and privacy. Although
blockchain has been posed as the solution in this scenario thanks to its
inherent distributed ledger technology (DLT), it suffers from the major setback
of storage and computation requirements that grow with the network size. This
paper proposes a holochain-based security and privacy-preserving framework for
IoT healthcare systems that overcomes these challenges and is particularly
suited for resource constrained IoT scenarios. The performance and thorough
security analyses demonstrate that a holochain-based IoT healthcare system is
significantly better compared to blockchain and other existing systems.
|
Context. Luminous Blue Variables (LBVs) are thought to be in a transitory
phase between O stars on the main-sequence and the Wolf-Rayet stage. Recent
studies suggest that they might be formed through binary interaction. Only a
few are known in binary systems but their multiplicity fraction is uncertain.
Aims. This study aims at deriving the binary fraction among the Galactic
(confirmed and candidate) LBV population. We combine multi-epoch spectroscopy
and long-baseline interferometry. Methods. We use cross-correlation to measure
their radial velocities. We identify spectroscopic binaries through significant
RV variability (larger than 35 km/s). We investigate the observational biases
to establish the intrinsic binary fraction. We use CANDID to detect
interferometric companions, derive their parameters and positions. Results. We
derive an observed spectroscopic binary fraction of 26 %. Considering period
and mass ratio ranges from Porb=1 to 1000 days, and q = 0.1-1.0, and a
representative set of orbital parameter distributions, we find a bias-corrected
binary fraction of 62%. From interferometry, we detect 14 companions out of 18
objects, providing a binary fraction of 78% at projected separations between 1
and 120 mas. From the derived primary diameters, and the distances of these
objects, we measure for the first time the exact radii of Galactic LBVs to be
between 100 and 650 Rsun, making short-period systems unlikely.
Conclusions. This analysis shows that the binary fraction among the Galactic
LBV population is large. If they form through single-star evolution, their
orbit must be initially large. If they form through the binary channel, this
implies that either massive stars in short binary systems must undergo a phase
of fully non-conservative mass transfer to sufficiently widen the orbit, or
that LBVs form through merging in initially binary or triple systems.
|
Anantharaman and Le Masson proved that any family of eigenbases of the
adjacency operators of a family of graphs is quantum ergodic (a form of
delocalization) assuming the graphs satisfy conditions of expansion and high
girth. In this paper, we show that neither of these two conditions by itself is
sufficient to imply quantum ergodicity. We also show that
having conditions of expansion and a specific relaxation of the high girth
constraint present in later papers on quantum ergodicity is not sufficient. We
do so by proving new properties of the Cartesian product of two graphs where
one is infinite.
|
We present the development and characterization of a generic, reconfigurable,
low-cost ($<$ 350 USD) software-defined digital receiver system (DRS) for
temporal correlation measurements in atomic spin ensembles. We demonstrate the
use of the DRS as a component of a high resolution magnetometer. Digital
receiver based fast Fourier transform spectrometers (FFTS) are generally
superior in performance in terms of signal-to-noise ratio (SNR) compared to
traditional swept-frequency spectrum analyzers (SFSA). In applications where
the signals being analyzed are very narrow band in frequency domain, recording
them at high speeds over a reduced bandwidth provides flexibility to study them
for longer periods. We have built the DRS on the STEMLab 125-14 FPGA platform
and it has two different modes of operation: FFT Spectrometer and real time raw
voltage recording mode. We evaluate its performance by using it in atomic spin
noise spectroscopy (SNS). We demonstrate that the SNR is improved by more than
one order of magnitude with the FFTS as compared to that of the commercial
SFSA. We also highlight that with this DRS operating in the triggered data
acquisition mode one can achieve spin noise (SN) signal with high SNR in a
recording time window as low as 100 msec. We make use of this feature to
perform time resolved high-resolution magnetometry. While the receiver was
initially developed for SNS experiments, it can be easily used for other
atomic, molecular and optical (AMO) physics experiments as well.
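The SNR advantage of an FFT spectrometer comes from computing full spectra in parallel and averaging many of them, rather than dwelling on one frequency bin at a time as a swept-frequency analyzer does. A minimal illustrative sketch of segment-averaged power spectra in NumPy (not the DRS firmware, which runs on the STEMLab FPGA; names and parameters are assumptions):

```python
import numpy as np

def averaged_power_spectrum(signal, fs, n_fft, n_avg):
    """FFT-spectrometer style estimate: split the record into n_avg
    segments and average their power spectra. Averaging N spectra
    improves the noise-floor SNR by roughly sqrt(N)."""
    segs = signal[:n_fft * n_avg].reshape(n_avg, n_fft)
    spectra = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / n_fft
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return freqs, spectra.mean(axis=0)
```

A tone at an exact bin frequency shows up as a single sharp peak in the averaged spectrum, which is how a narrow-band spin-noise signal would be resolved.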
|
We introduce novel methods for encoding acyclicity and s-t-reachability
constraints for propositional formulas with underlying directed graphs. They
are based on vertex elimination graphs, which makes them suitable for cases
where the underlying graph is sparse. In contrast to solvers with ad hoc
constraint propagators for acyclicity and reachability constraints such as
GraphSAT, our methods encode these constraints as standard propositional
clauses, making them directly applicable with any SAT solver. An empirical
study demonstrates that our methods together with an efficient SAT solver can
outperform both earlier encodings of these constraints as well as GraphSAT,
particularly when underlying graphs are sparse.
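For contrast with the vertex-elimination encoding described above, here is a minimal sketch of the classical ordering-variable CNF encoding of acyclicity, i.e. one of the "earlier encodings" such methods are compared against (function and variable names are illustrative, not from the paper):

```python
from itertools import permutations

def acyclicity_clauses(vertices, edges):
    """Encode 'the given directed graph is acyclic' as CNF clauses over
    ordering variables ord[(u, v)] meaning 'u precedes v'. Returns
    (clauses, var) where var maps (u, v) to a positive literal id."""
    var = {}
    def lit(u, v):
        if (u, v) not in var:
            var[(u, v)] = len(var) + 1
        return var[(u, v)]
    clauses = []
    # Each edge (u, v) forces u to precede v.
    for (u, v) in edges:
        clauses.append([lit(u, v)])
    # Antisymmetry: not both u < v and v < u.
    for u, v in permutations(vertices, 2):
        if u < v:
            clauses.append([-lit(u, v), -lit(v, u)])
    # Transitivity: u < v and v < w implies u < w.
    for u, v, w in permutations(vertices, 3):
        clauses.append([-lit(u, v), -lit(v, w), lit(u, w)])
    return clauses, var
```

The cubic number of transitivity clauses is exactly the cost that vertex-elimination-based encodings aim to avoid on sparse graphs.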
|
It is known that generalized deformation in the sense of Hitchin-Gualtieri is
a geometric realization of the degree-2 component of Kontsevich-Barannikov's
homological approach to extended deformation. Through extended deformation, one
associates a Frobenius structure to the extended moduli space. In this note,
we prove that on primary Kodaira manifolds the restriction of the Frobenius
structure on the degree-2 component of the extended moduli space is trivial. It
generalizes the author's past observation on the Kodaira surface.
|
Video activity recognition by deep neural networks is impressive for many
classes. However, it falls short of human performance, especially for
activities that are challenging to discriminate. Humans differentiate these complex
activities by recognising critical spatio-temporal relations among explicitly
recognised objects and parts, for example, an object entering the aperture of a
container. Deep neural networks can struggle to learn such critical
relationships effectively. Therefore we propose a more human-like approach to
activity recognition, which interprets a video in sequential temporal phases
and extracts specific relationships among objects and hands in those phases.
Random forest classifiers are learnt from these extracted relationships. We
apply the method to a challenging subset of the something-something dataset and
achieve a more robust performance against neural network baselines on
challenging activities.
|
We study properties of twisted unions of metric spaces introduced by Johnson,
Lindenstrauss, and Schechtman, and by Naor and Rabani. In particular, we prove
that under certain natural mild assumptions twisted unions of $L_1$-embeddable
metric spaces also embed in $L_1$ with distortions bounded above by constants
that do not depend on the metric spaces themselves, or on their size, but only
on certain general parameters. This answers a question stated by Naor and by
Naor and Rabani.
In the second part of the paper we give new simple examples of metric spaces
such that every embedding of them into $L_p$, $1\le p<\infty$, has distortion at least
$3$, but which are a union of two subsets, each isometrically embeddable in
$L_p$. This extends an analogous result of K.~Makarychev and Y.~Makarychev from
Hilbert spaces to $L_p$-spaces, $1\le p<\infty$.
|
Multi-channel speech enhancement aims to extract clean speech from a noisy
mixture using signals captured from multiple microphones. Recently proposed
methods tackle this problem by incorporating deep neural network models with
spatial filtering techniques such as the minimum variance distortionless
response (MVDR) beamformer. In this paper, we introduce a different research
direction by viewing each audio channel as a node lying in a non-Euclidean
space and, specifically, a graph. This formulation allows us to apply graph
neural networks (GNN) to find spatial correlations among the different channels
(nodes). We utilize graph convolution networks (GCN) by incorporating them in
the embedding space of a U-Net architecture. We use LibriSpeech dataset and
simulate room acoustics data to extensively experiment with our approach using
different array types and numbers of microphones. Results indicate the
superiority of our approach when compared to a prior state-of-the-art method.
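The core building block of this formulation, treating audio channels as graph nodes, is a graph convolution. A minimal sketch of one Kipf-Welling-style GCN propagation step in NumPy (the paper's actual model embeds such layers in the embedding space of a U-Net; shapes and names here are hypothetical):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: symmetrically normalized adjacency
    aggregates neighbour features, followed by a linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # D^-1/2 (A+I) D^-1/2
    return np.maximum(norm @ feats @ weight, 0)  # ReLU activation
```

Here each row of `feats` would be the embedding of one microphone channel, and `adj` encodes which channels are treated as spatially correlated neighbours.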
|
Quantum tomography is an important tool for the characterisation of quantum
operations. In this paper, we present a framework of quantum tomography in
fermionic systems. Compared with qubit systems, fermions obey the
superselection rule, which sets constraints on states, processes and
measurements in a fermionic system. As a result, we can only partly reconstruct
an operation that acts on a subset of fermion modes, and the full
reconstruction always requires at least one ancillary fermion mode in addition
to the subset. We also report a protocol for the full reconstruction based on
gates in a Majorana fermion quantum computer, including a set of circuits for
realising the informationally-complete state preparation and measurement.
|
Distant boundaries in linear non-Hermitian lattices can dramatically change
energy eigenvalues and corresponding eigenstates in a nonlocal way. This effect
is known as the non-Hermitian skin effect (NHSE). Combining the non-Hermitian
skin effect with nonlinear effects can give rise to a host of novel phenomena,
which may be used for nonlinear structure designs. Here we study the nonlinear
non-Hermitian skin effect and explore nonlocal and substantial effects of edges
on stationary nonlinear solutions. We show that fractal and continuum bands
arise in a long lattice governed by a nonreciprocal discrete nonlinear
Schrodinger equation. We show that stationary solutions are localized at the
edge in the continuum band. We consider a non-Hermitian Ablowitz-Ladik model
and show that the nonlinear exceptional point disappears if the lattice is
infinitely long.
|
While the family of layered pnictides $ABX_2$ ($A$ : rare or alkaline earth
metals, $B$ : transition metals, $X$ : Sb/Bi) can host Dirac dispersions based
on Sb/Bi square nets, nearly half of them have not been synthesized yet for
possible combinations of the $A$ and $B$ cations. Here we report the
fabrication of EuCdSb$_{\mathrm{2}}$ with the largest $B$-site ionic radius,
which is stabilized for the first time in thin film form by molecular beam
deposition. EuCdSb$_{\mathrm{2}}$ crystallizes in an orthorhombic $Pnma$
structure and exhibits antiferromagnetic ordering of the Eu magnetic moments at
$T_\mathrm{N}=15$ K. Our successful growth will be an important step for further
exploring novel Dirac materials using film techniques.
|
The "eternal war in cache" has reached browsers, with multiple cache-based
side-channel attacks and countermeasures being suggested. A common approach for
countermeasures is to disable or restrict JavaScript features deemed essential
for carrying out attacks. To assess the effectiveness of this approach, in this
work we seek to identify those JavaScript features which are essential for
carrying out a cache-based attack. We develop a sequence of attacks with
progressively decreasing dependency on JavaScript features, culminating in the
first browser-based side-channel attack which is constructed entirely from
Cascading Style Sheets (CSS) and HTML, and works even when script execution is
completely blocked. We then show that avoiding JavaScript features makes our
techniques architecturally agnostic, resulting in microarchitectural website
fingerprinting attacks that work across hardware platforms including Intel
Core, AMD Ryzen, Samsung Exynos, and Apple M1 architectures. As a final
contribution, we evaluate our techniques in hardened browser environments
including the Tor browser, Deter-Fox (Cao et al., CCS 2017), and Chrome Zero
(Schwarz et al., NDSS 2018). We confirm that none of these approaches
completely defend against our attacks. We further argue that the protections of
Chrome Zero need to be more comprehensively applied, and that the performance
and user experience of Chrome Zero will be severely degraded if this approach
is taken.
|
Current open-domain question answering systems often follow a
Retriever-Reader architecture, where the retriever first retrieves relevant
passages and the reader then reads the retrieved passages to form an answer. In
this paper, we propose a simple and effective passage reranking method, named
Reader-guIDEd Reranker (RIDER), which does not involve training and reranks the
retrieved passages solely based on the top predictions of the reader before
reranking. We show that RIDER, despite its simplicity, achieves 10 to 20
absolute gains in top-1 retrieval accuracy and 1 to 4 Exact Match (EM) gains
without refining the retriever or reader. In addition, RIDER, without any
training, outperforms state-of-the-art transformer-based supervised rerankers.
Remarkably, RIDER achieves 48.3 EM on the Natural Questions dataset and 66.4 EM
on the TriviaQA dataset when only 1,024 tokens (7.8 passages on average) are
used as the reader input after passage reranking.
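The reranking idea can be illustrated in a few lines: promote retrieved passages that lexically contain any of the reader's top predicted answers, keeping the original retrieval order otherwise. This is a hedged sketch of the general mechanism; the exact matching criterion used by RIDER may differ in detail:

```python
def rider_rerank(passages, reader_predictions):
    """Training-free reranking sketch: passages containing any of the
    reader's top predicted answer strings move to the front; relative
    order within each group is preserved (a stable partition)."""
    def hit(passage):
        text = passage.lower()
        return any(ans.lower() in text for ans in reader_predictions)
    hits = [p for p in passages if hit(p)]
    misses = [p for p in passages if not hit(p)]
    return hits + misses
```

Because no model parameters are touched, this step can be dropped in between any retriever and reader.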
|
A metal can be driven to an insulating phase through distinct mechanisms. A
possible way is via the Coulomb interaction, which then defines the Mott
metal-insulator transition (MIT). Another possibility is the MIT driven by
disorder, the so-called Anderson MIT. Here we analyze interacting particles in
disordered Hubbard chains $-$ thus comprising the Mott-Anderson physics $-$ by
investigating the ground-state entanglement with density functional theory. The
localization signature on entanglement is found to be a local minimum at a
certain critical density. Individually, the Mott (Anderson) MIT has a single
critical density whose minimum entanglement decreases as the interaction
(disorder) increases. While in the Mott MIT entanglement saturates at finite
values, characterizing partial localization, in the Anderson MIT the system
reaches full localization, with zero entanglement, for sufficiently strong
disorder. In the combined Mott-Anderson MIT, we find three critical densities
corresponding to local minima of the entanglement. One of them is the same as for the
Anderson MIT, but now the presence of interaction requires a stronger disorder
potential to induce localization. A second critical density is related to the
Mott MIT, but due to disorder it is displaced by a factor proportional to the
concentration of impurities. The third local minimum of the entanglement is unique
to the concomitant presence of disorder and interaction, found to be related to
an effective density phenomenon, thus referred to as a Mott-like MIT. Since
entanglement has been intrinsically connected to the magnetic susceptibility
$-$ a quantity promptly available in cold atoms experiments $-$ our detailed
numerical description might be useful for the experimental investigation of
Mott-Anderson MIT.
|
Today's world is a globalized and connected one, where people are
increasingly moving around and interacting with a greater number of services
and devices of all kinds, including those that allow them to monitor their
health. However, each company, institution or health system usually stores its
patients' data in an isolated way. Although this approach can have some
benefits related to privacy, security, etc., it also implies that each one of
them generates different, incomplete and possibly contradictory views of a
patient's health data, losing part of the value that this information could
bring to the patient. That is the reason why researchers from all over the
world are determined to replace the current institution-centered health systems
with new patient-centered ones. In these new systems, all the health
information of a patient is integrated into a unique global vision. However,
some questions are still unanswered. Specifically, who should store and
maintain the information of a given patient and how should this information be
made available for other systems. To address this situation, this work proposes
a new solution towards making the Personal Health Trajectory of patients
available both to the patients themselves and to health institutions. By using
the concept of a federation of blockchains and web services, access to the global
vision of a person's health can be granted to existing and new solutions. To
demonstrate the viability of the proposal, an implementation is provided
alongside the obtained results in a potential scenario.
|
This paper proposes a novel framework for lung sound event detection,
segmenting continuous lung sound recordings into discrete events and performing
recognition on each event. Exploiting the lightweight nature of Temporal
Convolution Networks (TCNs) and their superior results compared to their
recurrent counterparts, we propose a lightweight, yet robust, and completely
interpretable framework for lung sound event detection. We propose the use of a
multi-branch TCN architecture and exploit a novel fusion strategy to combine
the resultant features from these branches. This not only allows the network to
retain the most salient information across different temporal granularities and
disregard irrelevant information, but also allows our network to process
recordings of arbitrary length. Results: The proposed method is evaluated on
multiple public and in-house benchmarks of irregular and noisy recordings of
the respiratory auscultation process for the identification of numerous
auscultation events including inhalation, exhalation, crackles, wheeze,
stridor, and rhonchi. We exceed the state-of-the-art results in all
evaluations. Furthermore, we empirically analyse the effect of the proposed
multi-branch TCN architecture and the feature fusion strategy and provide
quantitative and qualitative evaluations to illustrate their efficiency.
Moreover, we provide an end-to-end model interpretation pipeline that
interprets the operations of all the components of the proposed framework. Our
analysis of different feature fusion strategies shows that the proposed feature
concatenation method leads to better suppression of non-informative features,
which drastically reduces the classifier overhead, resulting in a robust,
lightweight network. The lightweight nature of our model allows it to be
deployed in end-user devices such as smartphones, and it has the ability to
generate predictions in real-time.
|
In the last decade, deep neural networks have proven to be very powerful in
computer vision tasks, starting a revolution in the computer vision and machine
learning fields. However, deep neural networks, usually, are not robust to
perturbations of the input data. In fact, several studies showed that slightly
changing the content of the images can cause a dramatic decrease in the
accuracy of the attacked neural network. Several methods able to generate
adversarial samples make use of gradients, which usually are not available to
an attacker in real-world scenarios. As opposed to this class of attacks,
another class of adversarial attacks, called black-box adversarial attacks,
emerged, which does not make use of information on the gradients, being more
suitable for real-world attack scenarios. In this work, we compare three
well-known evolution strategies on the generation of black-box adversarial
attacks for image classification tasks. While our results show that the
attacked neural networks can be, in most cases, easily fooled by all the
algorithms under comparison, they also show that some black-box optimization
algorithms may be better in "harder" setups, both in terms of attack success
rate and efficiency (i.e., number of queries).
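As a toy illustration of a gradient-free, black-box attack, a simple (1+1)-Evolution Strategy (illustrative only, not necessarily one of the three strategies compared in the paper) mutates the input with Gaussian noise and keeps any mutation that lowers the model's confidence score:

```python
import numpy as np

def one_plus_one_es_attack(score, x, sigma=0.05, budget=200, seed=0):
    """(1+1)-ES black-box attack sketch: only the scalar score of the
    (hypothetical) classifier is queried, never its gradients. Inputs
    are assumed to be images normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    best, best_score = x.copy(), score(x)
    for _ in range(budget):
        cand = np.clip(best + sigma * rng.standard_normal(x.shape), 0.0, 1.0)
        s = score(cand)
        if s < best_score:          # keep the mutant only if it helps
            best, best_score = cand, s
    return best, best_score
```

The `budget` parameter is exactly the query count discussed in the abstract: fewer queries per successful attack means a more efficient black-box algorithm.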
|
The rapid spread of the novel coronavirus, SARS-CoV-2, has prompted an
unprecedented response from governments across the world. A third of the world
population has been placed in varying degrees of lockdown, and the Internet
has become the primary medium for conducting most business and schooling
activities. This paper aims to provide a multi-perspective account of Internet
performance during the first wave of the pandemic. We investigate the
performance of the Internet control plane and data plane from a number of
globally spread vantage points. We also look closer at two case studies. First,
we look at growth in video traffic during the pandemic, using traffic logs from
a global video conferencing provider. Second, we leverage a country-wide
deployment of measurement probes to assess the performance of mobile networks
during the outbreak. We find that the lockdown has visibly impacted almost all
aspects of Internet performance. Access networks have experienced an increase
in peak and off-peak end-to-end latency. Mobile networks exhibit significant
changes in download speed, while certain types of video traffic have increased
by an order of magnitude. Despite these changes, the Internet seems to have
coped reasonably well with the lockdown traffic.
|
For an integer $k\ge 3$, a $k$-path vertex cover of a graph $G=(V,E)$ is a
set $T\subseteq V$ that shares a vertex with every path subgraph of order $k$
in $G$. The minimum cardinality of a $k$-path vertex cover is denoted by
$\psi_k(G)$. We give estimates -- mostly upper bounds -- on $\psi_k(G)$ in
terms of various parameters, including vertex degrees and the number of
vertices and edges. The problem is also considered on chordal graphs and planar
graphs.
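For tiny graphs, $\psi_k(G)$ can be computed by exhaustive search, which is handy for sanity-checking bounds of the kind derived in the paper (illustrative code, exponential time; the adjacency-dict representation is an assumption):

```python
from itertools import combinations

def has_k_path_avoiding(adj, k, cover):
    """DFS for a simple path on k vertices avoiding the cover set."""
    def extend(path):
        if len(path) == k:
            return True
        return any(extend(path + [w]) for w in adj[path[-1]]
                   if w not in cover and w not in path)
    return any(extend([v]) for v in adj if v not in cover)

def psi_k(adj, k):
    """Minimum size of a k-path vertex cover, by trying covers of
    increasing size (only feasible for very small graphs)."""
    vertices = list(adj)
    for size in range(len(vertices) + 1):
        for cand in combinations(vertices, size):
            if not has_k_path_avoiding(adj, k, set(cand)):
                return size
```

For example, on the path graph $P_3$ the single middle vertex hits every path on 3 vertices, so $\psi_3(P_3)=1$.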
|
High-dimensional limit theorems have been shown to be useful to derive tuning
rules for finding the optimal scaling in random walk Metropolis algorithms. The
assumptions under which weak convergence results are proved are however
restrictive; the target density is typically assumed to be of a product form.
Users may thus doubt the validity of such tuning rules in practical
applications. In this paper, we shed some light on optimal scaling problems
from a different perspective, namely a large-sample one. This allows us to prove
weak convergence results under realistic assumptions and to propose novel
parameter-dimension-dependent tuning guidelines. The proposed guidelines are
consistent with previous ones when the target density is close to having a
product form, but significantly different otherwise.
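For context, the classical product-form tuning rule referenced above sets the proposal scale to $2.38/\sqrt{d}$, targeting an acceptance rate near 0.234. A minimal random walk Metropolis sketch illustrating that rule (not the paper's new large-sample guidelines):

```python
import numpy as np

def rwm(logpi, x0, n_steps, scale, seed=0):
    """Random walk Metropolis with isotropic Gaussian proposals.
    Returns the final state and the empirical acceptance rate."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    x, lp = np.asarray(x0, dtype=float), logpi(x0)
    accepted = 0
    for _ in range(n_steps):
        prop = x + scale * rng.standard_normal(d)
        lp_prop = logpi(prop)
        # Metropolis accept/reject on the log scale
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
    return x, accepted / n_steps
```

On a d-dimensional standard normal target, `scale = 2.38 / sqrt(d)` yields an acceptance rate in the vicinity of the asymptotically optimal 0.234; the paper's point is that such rules need rethinking when the target is far from product form.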
|
The drone-based last-mile delivery is an emerging technology to deliver
parcels using drones loaded on a truck. As more and more autonomous vehicles
(AVs) will be available for delivery services, an opportunity is arising to
fully automate drone-based last-mile delivery. In this paper, we integrate AVs
with drone-based last-mile delivery aiming to fully automate the last-mile
delivery process. We define a new problem called the autonomous vehicle routing
problem with drones (A-VRPD). A-VRPD is to select AVs from a pool of available
AVs and to schedule them to serve customers with an objective of minimizing the
total operational cost. We formulate A-VRPD as an Integer Linear Programming
(ILP) and propose a greedy algorithm to solve the problem based on real-world
operational costs for different types of AVs, traveling distances calculated
considering the current traffic conditions, and varying load capacities of AVs.
Extensive simulations performed under various random delivery scenarios
demonstrate that the proposed algorithm effectively increases profits for both
the delivery company and AV owners compared with traditional VRP-D (and TSP-D)
algorithm-based approaches.
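The flavour of such a greedy heuristic can be sketched as follows: each customer is assigned to the feasible AV (one with spare load capacity) whose marginal operational cost is lowest. All field names and the cost model below are illustrative assumptions, not the paper's ILP formulation:

```python
def greedy_av_schedule(avs, customers, cost):
    """Greedy A-VRPD-style assignment sketch.
    avs: list of dicts with 'capacity' (and whatever cost() needs);
    customers: list of dicts with 'id' and 'demand';
    cost(av, cust): marginal operational cost of serving cust with av."""
    routes = [[] for _ in avs]
    load = [0] * len(avs)
    total = 0.0
    for cust in customers:
        feasible = [i for i, av in enumerate(avs)
                    if load[i] + cust["demand"] <= av["capacity"]]
        best = min(feasible, key=lambda i: cost(avs[i], cust))
        routes[best].append(cust["id"])
        load[best] += cust["demand"]
        total += cost(avs[best], cust)
    return routes, total
```

A real cost model would fold in the per-AV operational cost, traffic-aware travel distances, and drone launch/recovery, as described in the abstract.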
|
This paper presents two modifications for Loidreau's code-based cryptosystem.
Loidreau's cryptosystem is a rank metric code-based cryptosystem constructed by
using Gabidulin codes in the McEliece setting. Recently a polynomial-time key
recovery attack was proposed to break Loidreau's cryptosystem in some cases. To
prevent this attack, we propose the use of subcodes to disguise the secret
codes in Modification \Rmnum{1}. In Modification \Rmnum{2}, we choose a random
matrix of low column rank over $\mathbb{F}_q$ to mix with the secret matrix.
According to our analysis, these two modifications can both resist the existing
structural attacks. Additionally, we adopt the systematic generator matrix of
the public code to reduce the public-key size. In addition to
stronger resistance against structural attacks and more compact representation
of public keys, our modifications also have larger information transmission
rates.
|
Built upon a sample of 134 quasars that was dedicated to a systematic study
of \mgii-BAL variability from Yi et al. (2019a), we investigate these quasars
showing \mgii-BAL disappearance or emergence with the aid of at least three
epoch optical spectra sampled more than 15 yr in the observed frame. We
identified 3/3 quasars undergoing pristine/tentative BAL transformations. The
incidence of pristine BAL transformations in the sample is therefore derived to
be 2.2$_{-1.2}^{+2.2}$\%, consistent with that of high-ionization BAL
transformations from the literature. Adopting an average \mgii-BAL
disappearance timescale of rest-frame 6.89 yr among the six quasars, the
average characteristic lifetime of \mgii\ BALs in the sample is constrained to
be $>$160 yr along our line of sight. There is a diversity of BAL-profile
variability observed in the six quasars, probably reflecting a variety of
mechanisms at work. Our investigations of \mgii-BAL transitions, combined with
observational studies of BAL transitions from the literature, imply an overall
FeLoBAL/LoBAL$\rightarrow$HiBAL/non-BAL transformation sequence along with a
decrease in reddening. This sequence is consistent with the evacuation models
for the origin of commonly seen blue quasars, in which LoBAL quasars are in a
short-lived, blowout phase.
|
Trust is essential for sustaining cooperation among humans. The same
principle applies during interaction with computers and robots: if we do not
trust them, we will not accept help from them. Extensive evidence has shown
that our trust in other agents depends on their performance. However, in
uncertain environments, humans may not be able to correctly estimate other
agents' performance, potentially leading to distrust or over-trust in peers and
machines. In the current study, we investigate whether humans' trust towards
peers, computers and robots is biased by prior beliefs in uncertain interactive
settings. Participants made perceptual judgments and observed the simulated
estimates of either a human participant, a computer or a social robot.
Participants could modify their judgments based on this feedback. Results show
that participants' belief about the nature of the interacting partner biased
their compliance with the partners' judgments, although the partners' judgments
were identical. Surprisingly, the social robot was trusted more than the
computer and the human partner. Trust in the alleged human partner was not
fully predicted by its perceived performance, suggesting the emergence of
normative processes in peer interaction. Our findings offer novel insights into
the understanding of the mechanisms underlying trust towards peers and
autonomous agents.
|
Entropic causal inference is a framework for inferring the causal direction
between two categorical variables from observational data. The central
assumption is that the amount of unobserved randomness in the system is not too
large. This unobserved randomness is measured by the entropy of the exogenous
variable in the underlying structural causal model, which governs the causal
relation between the observed variables. Kocaoglu et al. conjectured that the
causal direction is identifiable when the entropy of the exogenous variable is
not too large. In this paper, we prove a variant of their conjecture. Namely,
we show that for almost all causal models where the exogenous variable has
entropy that does not scale with the number of states of the observed
variables, the causal direction is identifiable from observational data. We
also consider the minimum entropy coupling-based algorithmic approach presented
by Kocaoglu et al., and for the first time demonstrate algorithmic
identifiability guarantees using a finite number of samples. We conduct
extensive experiments to evaluate the robustness of the method to relaxing some
of the assumptions in our theory and demonstrate that both the constant-entropy
exogenous variable and the no latent confounder assumptions can be relaxed in
practice. We also empirically characterize the number of observational samples
needed for causal identification. Finally, we apply the algorithm on Tuebingen
cause-effect pairs dataset.
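The minimum entropy coupling at the heart of the algorithmic approach is hard to compute exactly; the greedy heuristic used by Kocaoglu et al. can be sketched by its basic step, repeatedly matching the largest remaining marginal masses (an illustrative sketch returning the joint entropy in bits, not the authors' exact implementation):

```python
import numpy as np

def greedy_coupling_entropy(p, q, tol=1e-12):
    """Greedy minimum-entropy coupling heuristic for two marginal
    distributions p and q: repeatedly place min(max p, max q) mass on a
    single joint cell and subtract it from both marginals."""
    p, q = list(p), list(q)
    cells = []
    while True:
        i = max(range(len(p)), key=p.__getitem__)
        j = max(range(len(q)), key=q.__getitem__)
        m = min(p[i], q[j])
        if m < tol:
            break
        cells.append(m)
        p[i] -= m
        q[j] -= m
    cells = np.array(cells)
    return float(-(cells * np.log2(cells)).sum())
```

In the entropic framework, the causal direction with the lower coupling entropy (i.e. the smaller required exogenous randomness) is the one declared plausible.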
|
The purpose of this paper is to provide answers to some questions raised in a
paper by Kaneko and Koike about the modularity of the solutions of
differential equations of hypergeometric type. In particular, we provide a
number-theoretic explanation of why the modularity of the solutions occurs in
some cases and does not occur in other cases. This also proves their conjecture
on the completeness of the list of modular solutions after adding some missing
cases.
|
The possibility to detect circumbinary planets and to study stellar magnetic
fields through binary stars has sparked an increase in the research activity in
this area. In this paper we revisit the connection between stellar magnetic
fields and the gravitational quadrupole moment $Q_{xx}$. We present three
magnetohydrodynamical simulations of solar mass stars with rotation periods of
8.3, 1.2, and 0.8 days and perform a detailed analysis of the magnetic and
density fields using a spherical harmonic decomposition. The extrema of
$Q_{xx}$ are associated with changes of the magnetic field structure. This is
evident in the simulation with a rotation period of 1.2 days. Its magnetic
field has a much more complex behaviour than other models as the large-scale
non-axisymmetric field dominates throughout the simulation and the axisymmetric
component is predominantly hemispheric. This triggers variations in the density
field that follow the magnetic field asymmetry with respect to the equator,
changing the $zz$ component of the inertia tensor, and thus modulating
$Q_{xx}$. The magnetic fields of the other two runs are less variable in time
and more symmetric with respect to the equator such that there are no large
variations in the density, therefore only small variations in $Q_{xx}$ are
seen. If interpreted via the classical Applegate mechanism (tidal locking), the
quadrupole moment variations obtained in the simulations are about two orders
of magnitude below the observed values. However, if no tidal locking is
assumed, our results are compatible with the observed eclipsing time
variations.
|
Food image segmentation is a critical and indispensable task for developing
health-related applications such as estimating food calories and nutrients.
Existing food image segmentation models are underperforming due to two reasons:
(1) there is a lack of high quality food image datasets with fine-grained
ingredient labels and pixel-wise location masks -- the existing datasets either
carry coarse ingredient labels or are small in size; and (2) the complex
appearance of food makes it difficult to localize and recognize ingredients in
food images, e.g., the ingredients may overlap one another in the same image,
and the same ingredient may look very different in different food images. In
this work, we build a new food image dataset FoodSeg103 (and its extension
FoodSeg154) containing 9,490 images. We annotate these images with 154
ingredient classes and each image has an average of 6 ingredient labels and
pixel-wise masks. In addition, we propose a multi-modality pre-training
approach called ReLeM that explicitly equips a segmentation model with rich and
semantic food knowledge. In experiments, we use three popular semantic
segmentation methods (i.e., Dilated Convolution based, Feature Pyramid based,
and Vision Transformer based) as baselines, and evaluate them as well as ReLeM
on our new datasets. We believe that the FoodSeg103 (and its extension
FoodSeg154) and the pre-trained models using ReLeM can serve as a benchmark to
facilitate future works on fine-grained food image understanding. We make all
these datasets and methods public at
\url{https://xiongweiwu.github.io/foodseg103.html}.
|
Tumours behave as moving targets that can evade chemotherapeutic treatments
by rapidly acquiring resistance via various mechanisms. In Balaz et al. (2021,
Biosystems; 199:104290) we initiated the development of the agent-based
open-ended evolutionary simulator of novel drug delivery systems (DDS). It is
an agent-based simulator where evolvable agents can change their perception of
the environment and thus adapt to tumour mutations. Here we mapped the
parameters of evolvable agent properties to realistic biochemical
boundaries and tested their efficacy by simulating their behaviour at the cell
scale using the stochastic simulator, STEPS. We show that the shape of the
parameter space evolved in our simulator is comparable to that obtained by
rational design.
|
Real-time estimation of actual environment depth is an essential module for
various autonomous system tasks such as localization, obstacle detection and
pose estimation. During the last decade of machine learning, extensive
deployment of deep learning methods to computer vision tasks yielded successful
approaches for realistic depth synthesis out of a simple RGB modality. While
most of these models rest on paired depth data or availability of video
sequences and stereo images, there is a lack of methods facing single-image
depth synthesis in an unsupervised manner. Therefore, in this study, latest
advancements in the field of generative neural networks are leveraged for fully
unsupervised single-image depth synthesis. To be more exact, two
cycle-consistent generators for RGB-to-depth and depth-to-RGB transfer are
implemented and simultaneously optimized using the Wasserstein-1 distance. To
ensure plausibility of the proposed method, we apply the models to a
self-acquired industrial data set as well as to the renowned NYU Depth v2 data set,
which allows comparison with existing approaches. The observed success in this
study suggests high potential for unpaired single-image depth estimation in
real world applications.
|
Adverse drug events (ADEs) are unexpected incidents caused by the
administration of a drug or medication. To identify and extract these events,
we require information about not just the drug itself but attributes describing
the drug (e.g., strength, dosage), the reason why the drug was initially
prescribed, and any adverse reaction to the drug. This paper explores the
relationship between a drug and its associated attributes using relation
extraction techniques. We explore three approaches: a rule-based approach, a
deep learning-based approach, and a contextualized language model-based
approach. We evaluate our system on the n2c2-2018 ADE extraction dataset. Our
experimental results demonstrate that the contextualized language model-based
approach outperformed other models overall and obtained state-of-the-art
performance in ADE extraction with a Precision of 0.93, Recall of 0.96, and an
$F_1$ score of 0.94; however, for certain relation types, the rule-based
approach obtained a higher Precision and Recall than either learning approach.
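The reported $F_1$ score is the harmonic mean of precision and recall; a quick sanity check (not from the paper's code) confirms the three reported numbers are mutually consistent:

```python
# Verify that the reported precision/recall pair yields the reported F1.
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.93, 0.96), 2))  # 0.94
```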
|
Homomorphic Encryption (HE), allowing computations on encrypted data
(ciphertext) without decrypting it first, enables secure but prohibitively slow
homomorphically encrypted neural network (HENN) inference for
privacy-preserving applications in clouds.
To reduce HENN inference latency, one approach is to pack multiple messages
into a single ciphertext in order to reduce the number of ciphertexts and
support massive parallelism of Homomorphic Multiply-Add (HMA) operations
between ciphertexts. However, different ciphertext packing schemes have to be
designed for different convolution layers and each of them introduces overheads
that are far more expensive than HMA operations. In this paper, we propose a
low-rank factorization method called FFConv to unify convolution and ciphertext
packing. To our knowledge, FFConv is the first work that is capable of
accelerating the overheads induced by different ciphertext packing schemes
simultaneously, without incurring a significant increase in noise budget.
Compared to prior art LoLa and Falcon, our method reduces the inference latency
by up to 87% and 12%, respectively, with comparable accuracy on MNIST and
CIFAR-10.
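The general low-rank idea behind methods like FFConv can be hedged into a small sketch: replace a weight matrix by two thin factors so that fewer multiply-adds are needed. This is plain SVD truncation for illustration only, not the paper's exact factorization scheme, and the function name `low_rank_factors` is an assumption:

```python
import numpy as np

# Generic low-rank factorization: W (m x n) is approximated by A @ B
# with A (m x r) and B (r x n), trading a small approximation error
# for fewer multiply-adds per matrix-vector product.

def low_rank_factors(W, rank):
    """Return thin factors A and B whose product approximates W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
A, B = low_rank_factors(W, rank=8)
# Cost of W @ x: 64*64 = 4096 MACs; factored: 64*8 + 8*64 = 1024 MACs.
print(A.shape, B.shape)  # (64, 8) (8, 64)
```

In the HE setting, the same trade applies to the packed HMA operations: fewer effective channels mean fewer expensive packing/rotation steps between ciphertexts.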
|
This paper adds to the fundamental body of work on benchmarking the
robustness of deep learning (DL) classifiers. We innovate a new benchmarking
methodology to evaluate robustness of DL classifiers. Also, we introduce a new
four-quadrant statistical visualization tool, including minimum accuracy,
maximum accuracy, mean accuracy, and coefficient of variation, for benchmarking
robustness of DL classifiers. To measure the robustness of DL classifiers, we
created a comprehensive collection of 69 benchmarking image sets, including a
clean set, sets with single-factor perturbations, and sets with two-factor
perturbation conditions.
After collecting experimental results, we first report that using two-factor
perturbed images improves both robustness and accuracy of DL classifiers. The
two-factor perturbation includes (1) two digital perturbations (salt & pepper
noise and Gaussian noise) applied in both sequences, and (2) one digital
perturbation (salt & pepper noise) and a geometric perturbation (rotation)
applied in both sequences. All source code, related image sets, preliminary
data, and figures are shared on a GitHub website to support future academic
research and industry projects. The web resources are located at
https://github.com/caperock/robustai
|
This paper theoretically analyzes cable network disconnection due to randomly
occurring natural disasters, where the disaster-endurance (DE) levels of the
network are determined by a network entity such as the type of shielding method
used for a duct containing cables. The network operator can determine which
parts have a high DE level. When a part of a network can be protected, the
placement of that part can be specified to decrease the probability of
disconnecting two given nodes.
The maximum lower bound of the probability of connecting two given nodes is
explicitly derived. Conditions under which a partially protected network
decreases (or does not decrease) the probability of connecting two given nodes
are provided.
|
Helium implantation in surfaces is of interest for plasma facing materials
and other nuclear applications. Vanadium as both a representative bcc material
and a material relevant for fusion applications is implanted using a Helium ion
beam microscope, and the resulting swelling and nanomechanical properties are
quantified. These values are correlated with data obtained from micro residual
stress measurements using a focused ion beam based ring-core technique. We find
that the measured swelling is similar to literature values. Further, we are
able to measure the surface stress caused by the implantation and find that it
approaches the yield strength of the material at blistering doses.
The simple calculations performed in the present work, along with several
geometrical considerations deduced from experimental results, confirm that the
driving force for blister formation comes from bulging resulting mainly from
gas pressure buildup, rather than solely from stress-induced buckling.
|
This paper addresses finite-time horizon optimal control of single-loop
networked control systems with stochastically modeled communication channel and
disturbances. To cope with the uncertainties, an optimization-based control
scheme is proposed which uses a disturbance feedback and the age of information
as central aspects. The disturbance feedback is an extension of the control law
used for balanced stochastic optimal control previously proposed for control
systems without a network. Balanced optimality is understood as a compromise
between minimization of expected deviations from the reference and minimization
of the uncertainty of future states. Time-varying state constraints as well as
time-invariant input constraints are considered, and the controllers are
synthesized by semi-definite programs.
|
Aims: LS 5039 is an enigmatic high-mass gamma-ray binary which hosts a
powerful O6.5V companion, but the nature of the compact object is still to be
established using multi-wavelength observations. Methods: We analyzed
phase-resolved multi-instrument spectra of nonthermal emission from LS 5039 in
order to produce reliable spectral models, which can be further employed to
select between various scenarios and theoretical models of the binary. Results:
The combined phase-resolved hard X-ray and MeV-range gamma-ray spectra obtained
with XMM-Newton, Suzaku, NuSTAR, INTEGRAL, and COMPTEL indicate a meaningful
spectral hardening above 50 keV. The spectral break observed in both major
phases of the binary may indicate the presence of a hardening in the spectrum
of accelerated leptons which could originate from the interaction of wind from
the O6.5V companion star with the relativistic outflow from a yet unidentified
compact object.
|
We consider the standard first passage percolation model in the rescaled
lattice $\mathbb Z^d/n$ for $d\geq 2$ and a bounded domain $\Omega$ in $\mathbb
R^d$. We denote by $\Gamma^1$ and $\Gamma^2$ two disjoint subsets of $\partial
\Omega$ representing respectively the sources and the sinks, \textit{i.e.},
where the water can enter in $\Omega$ and escape from $\Omega$. A cutset is a
set of edges that separates $\Gamma^1$ from $\Gamma^2$ in $\Omega$; its
capacity is given by the sum of the capacities of its edges. Under some
assumptions on $\Omega$ and the distribution of the capacities of the edges, we
already know a law of large numbers for the sequence of minimal cutsets
$(\mathcal E_n^{min})_{n\geq 1}$: the sequence $(\mathcal E_n^{min})_{n\geq 1}$
converges almost surely to the set of solutions of a continuous deterministic
problem of minimal cutset in an anisotropic network. We aim here to derive a
large deviation principle for cutsets and deduce by contraction principle a
lower large deviation principle for the maximal flow in $\Omega$.
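The link between minimal cutsets and maximal flow invoked above is the finite-graph max-flow/min-cut duality: on any capacitated graph, the capacity of a minimal cutset separating source from sink equals the maximal flow. A small self-contained illustration (not from the paper, which works in the continuum limit) via Edmonds-Karp on an adjacency-matrix capacity representation:

```python
from collections import deque

# Max-flow by repeated BFS augmentation (Edmonds-Karp); by duality its
# value equals the capacity of a minimal source-sink cutset.

def max_flow(cap, s, t):
    n = len(cap)
    residual = [row[:] for row in cap]
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path: flow equals min-cut capacity
        # Bottleneck capacity along the found path.
        v, bottleneck = t, float("inf")
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Push the bottleneck along the path.
        v = t
        while v != s:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Diamond graph, source 0 and sink 3; the minimal cut {0->2, 1->3} has
# capacity 2 + 2 = 4, which the maximal flow attains.
cap = [[0, 3, 2, 0],
       [0, 0, 0, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 4
```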
|
We consider vacuum metrics admitting conformal compactification which is
smooth up to the scri $\mathscr{I^+}$. We write the metric in the Bondi-Sachs
form and expand it into a power series in the inverse affine distance $1/r$. As
in the case of the luminosity distance, given the news tensor and initial data
for a part of the metric, the Einstein equations define the coefficients of the
series in a recursive way. This is also true in the stationary case; however,
now the news tensor vanishes and the role of initial data is taken by multipole
moments. We find an approximate form of the metric and show that for
nonvanishing mass it tends to the Kerr metric, as is the case at spacelike
infinity.
|
This paper presents an experimental adaptation of a non-collaborative robot
arm to collaborate with the environment, as one step towards adapting legacy
robotic machinery to fit Industry 4.0 requirements. A cloud-based internet
of things (CIoT) service is employed to connect, supervise and control a
robotic arm's motion using the added wireless sensing devices to the
environment. A programmable automation controller (PAC) unit, connected to the
robot arm receives the most recent changes and updates the motion of the robot
arm. The experimental results show that the proposed inexpensive service is
tractable and adaptable to a higher level of machine-to-machine collaboration.
The proposed approach in this paper has industrial and educational
applications. In the proposed approach, the CIoT technology is added as a
technology interface between the sensors added to the environment and the
robotic arm. The proposed approach is versatile and fits a variety of
applications to meet the flexible requirements of Industry 4.0. The proposed
approach has been implemented in an experiment using a MECA 500 robot arm, an
AMAX 5580 programmable automation controller, and an ultrasonic proximity
wireless sensor.
|