Cytology is a low-cost and non-invasive diagnostic procedure employed to
support the diagnosis of a broad range of pathologies. Computer Vision
technologies, by automatically generating quantitative and objective
descriptions of examinations' contents, can help minimize the chances of
misdiagnoses and shorten the time required for analysis. To identify the
state of the art of computer vision techniques currently applied to cytology, we
conducted a Systematic Literature Review. We analyzed papers published in the
last 5 years. The initial search was executed in September 2020 and resulted in
431 articles. After applying the inclusion/exclusion criteria, 157 papers
remained, which we analyzed to build a picture of the tendencies and problems
present in this research area, highlighting the computer vision methods,
staining techniques, evaluation metrics, and the availability of the used
datasets and computer code. As a result, we identified that the methods most
often used in the analyzed works are based on classic computer vision only (101
papers), while fewer works employ deep learning (70 papers). The most recurrent
metric for classification and object detection was accuracy (33 and 5 papers,
respectively), while for segmentation it was the Dice Similarity Coefficient
(38 papers). Regarding staining techniques, Papanicolaou was the most employed
one (130 papers), followed by H&E (20 papers) and Feulgen (5 papers). Twelve of
the datasets used in the papers are publicly available, with the DTU/Herlev
dataset being the most used one. We conclude that there is still a lack of
high-quality datasets for many types of stains, and that most of the works are not
mature enough to be applied in a daily clinical diagnostic routine. We also
identified a growing tendency towards adopting deep learning-based approaches
as the methods of choice.
|
We demonstrate coherent control of photoemission from a gold needle tip using
a two-color laser field. The relative phase between a fundamental field and its
second harmonic imprints a strong modulation on the emitted photocurrent with
up to 96.5 % contrast. The contrast as a function of the second harmonic
intensity can be described by three interfering quantum pathways. Increasing
the bias voltage applied to the tip reduces the maximum achievable contrast and
modifies the weights of the involved pathways. Simulations based on the
time-dependent Schr\"odinger equation reproduce the characteristic cooperative
signal and its dependence on the second harmonic intensity, which further
confirms the involvement of three emission pathways.
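As an illustration of the three-pathway picture, the phase-dependent photocurrent can be modeled as the squared modulus of three interfering amplitudes whose phases advance as 0, phi, and 2*phi with the two-color relative phase. The sketch below is a toy model with hypothetical amplitudes, not the paper's TDSE simulation:

```python
import numpy as np

# Toy three-pathway interference model: the amplitudes a1, a2, a3 are
# hypothetical placeholders; phases advance as 0, phi, 2*phi with the
# relative phase phi between the fundamental and its second harmonic.
def contrast(a1, a2, a3):
    phi = np.linspace(0.0, 2.0 * np.pi, 1000)
    current = np.abs(a1 + a2 * np.exp(1j * phi) + a3 * np.exp(2j * phi)) ** 2
    return (current.max() - current.min()) / (current.max() + current.min())

# Pathways involving the second harmonic grow with its field amplitude.
for a_sh in (0.05, 0.2, 0.5):
    print(f"SH amplitude {a_sh:.2f}: contrast = {contrast(1.0, a_sh, 0.3 * a_sh**2):.3f}")
```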
|
In Guo and Peng's article [Spherically convex sets and spherically convex
functions, J. Convex Anal. 28 (2021), 103--122], the authors define the notions
of spherical convex sets and functions on "general curved surfaces" in
$\mathbb{R}^{n}$ $(n\ge2)$, study several properties of these classes of
sets and functions, and establish analogues of the Radon, Helly,
Carath\'eodory and Minkowski theorems for spherical convex sets, as well as
some properties of spherical convex functions which are analogous to those of
usual convex functions. In obtaining such results, the authors use an analytic
approach based on their definitions. Our aim in this note is to provide simpler
proofs for the results on spherical convex sets; our proofs are based on some
characterizations/representations of spherical convex sets by usual convex sets
in $\mathbb{R}^{n}$.
|
New categories can be discovered by transforming semantic features into
synthesized visual features without corresponding training samples in zero-shot
image classification. Although significant progress has been made in generating
high-quality synthesized visual features using generative adversarial networks,
guaranteeing semantic consistency between the semantic features and visual
features remains very challenging. In this paper, we propose a novel zero-shot
learning approach, GAN-CST, based on class-knowledge-to-visual-feature
learning, to tackle this problem. The approach consists of three parts: class
knowledge overlay, semi-supervised learning, and a triplet loss. It applies
class knowledge overlay (CKO) to obtain knowledge not only from the
corresponding class but also from other classes that share overlaying
knowledge, which ensures that the knowledge-to-visual learning process has
adequate information to generate synthesized visual features. The approach
also applies a semi-supervised learning process to re-train the
knowledge-to-visual model, which reinforces both the generation of synthesized
visual features and new-category prediction. We tabulate results on a number
of benchmark datasets demonstrating
that the proposed model delivers superior performance over state-of-the-art
approaches.
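For concreteness, the triplet-loss component can be sketched as follows; this is a minimal illustration in which the margin value and the squared Euclidean distance are assumptions, not the paper's exact formulation:

```python
import numpy as np

# Minimal triplet loss: pull synthesized features (anchor) toward real features
# of the same class (positive) and away from another class (negative).
def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(0)
anchor, positive, negative = rng.normal(size=(3, 8, 16))
print(triplet_loss(anchor, positive, negative))
```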
|
Almost flat finitely generated projective Hilbert C*-module bundles were
successfully used by Hanke and Schick to prove special cases of the Strong
Novikov Conjecture. Dadarlat later showed that it is possible to calculate the
index of a K-homology class $\eta\in K_*(M)$ twisted with an almost flat bundle
in terms of the image of $\eta$ under Lafforgue's assembly map and the almost
representation associated to the bundle. Mishchenko used flat
infinite-dimensional bundles equipped with a Fredholm operator in order to
prove special cases of the Novikov higher signature conjecture.
We show how to generalize Dadarlat's theorem to the case of an
infinite-dimensional bundle equipped with a continuous family of Fredholm
operators on the fibers. Along the way, we show that special cases of the
Strong Novikov Conjecture can be proven if there exist sufficiently many almost
flat bundles with Fredholm operator.
To this end, we introduce the concept of an asymptotically flat Fredholm
bundle and its associated asymptotic Fredholm representation, and prove an
index theorem which relates the index of the asymptotic Fredholm bundle with
the so-called asymptotic index of the associated asymptotic Fredholm
representation.
|
We report the first mode-locked fiber laser to operate in the femtosecond
regime well beyond 3 {\mu}m. The laser uses dual-wavelength pumping and
non-linear polarisation rotation to produce 3.5 {\mu}m wavelength pulses with
minimum duration of 580 fs at a repetition rate of 68 MHz. The pulse energy is
3.2 nJ, corresponding to a peak power of 5.5 kW.
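The quoted figures are mutually consistent; a quick arithmetic check (illustrative only):

```python
# Peak power ~ pulse energy / pulse duration (ignoring pulse-shape factors).
pulse_energy = 3.2e-9   # J
duration = 580e-15      # s
rep_rate = 68e6         # Hz
print(pulse_energy / duration)   # ~5.5e3 W, matching the quoted 5.5 kW
print(pulse_energy * rep_rate)   # ~0.22 W average power (implied, not quoted)
```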
|
Interferometry can completely redirect light, providing the potential for
strong and controllable optical forces. However, small particles do not
naturally act like interferometric beamsplitters, and the optical scattering
from them is not generally thought to allow efficient interference. Instead,
optical trapping is typically achieved via deflection of the incident field.
Here we show that a suitably structured incident field can achieve
beamsplitter-like interactions with scattering particles. The resulting trap
offers order-of-magnitude higher stiffness than the usual Gaussian trap in one
axis, even when constrained to phase-only structuring. We demonstrate trapping
of 3.5 to 10.0~$\mu$m silica spheres, achieving stiffness up to 27.5$\pm$4.1
times higher than is possible using Gaussian traps, and two orders of magnitude
higher measurement signal-to-noise ratio. These results are highly relevant to
many applications, including cellular manipulation, fluid dynamics,
micro-robotics, and tests of fundamental physics.
|
We report the direct observation of the intervalley exciton between the Q
conduction valley and the $\Gamma$ valence valley in bilayer WSe$_2$ by
photoluminescence. The Q$\Gamma$ exciton lies ~18 meV below the QK exciton
and dominates the luminescence of bilayer WSe$_2$. By measuring the exciton
spectra at gate-tunable electric fields, we reveal different interlayer electric
dipole moments and Stark shifts between Q$\Gamma$ and QK excitons. Notably, we
can use the electric field to switch the energy order and dominant luminescence
between Q$\Gamma$ and QK excitons. Both Q$\Gamma$ and QK excitons exhibit
pronounced phonon replicas, in which two-phonon replicas outshine the
one-phonon replicas due to the existence of (nearly) resonant exciton-phonon
scatterings and numerous two-phonon scattering paths. We can simulate the
replica spectra by comprehensive theoretical modeling and calculations. The
good agreement between theory and experiment for the Stark shifts and phonon
replicas strongly supports our assignment of Q$\Gamma$ and QK excitons.
|
Cosmological phase transitions proceed via the nucleation of bubbles that
subsequently expand and collide. The resulting gravitational wave spectrum
depends crucially on the bubble wall velocity. Microscopic calculations of this
velocity are challenging even in weakly coupled theories. We use holography to
compute the wall velocity from first principles in a strongly coupled,
non-Abelian, four-dimensional gauge theory. The wall velocity is determined
dynamically in terms of the nucleation temperature. We find an approximately
linear relation between the velocity and the ratio $\Delta
\mathcal{P}/\mathcal{E}$, with $\Delta \mathcal{P}$ the pressure difference
between the inside and the outside of the bubble and $\mathcal{E}$ the energy
density outside the bubble. Up to a rescaling, the wall profile is well
approximated by that of an equilibrium, phase-separated configuration at the
critical temperature. We verify that ideal hydrodynamics provides a good
description of the system everywhere except near the wall.
|
Nanocontact properties of two-dimensional (2D) materials are closely
dependent on their unique nanomechanical systems, such as the number of atomic
layers and the supporting substrate. Here, we report a direct observation of
top-layer-dependent crystallographic orientation imaging of 2D materials with
transverse shear microscopy (TSM). Three typical nanomechanical systems,
MoS2 on amorphous SiO2/Si, graphene on amorphous SiO2/Si, and MoS2 on
crystalline Al2O3, have been investigated in detail. These experimental
observations reveal that puckering behaviour mainly occurs in the top layer of
2D materials, which is attributed to its direct contact adhesion with the AFM
tip. Furthermore, the results of crystallographic orientation imaging of
MoS2/SiO2/Si and MoS2/Al2O3 indicate that the underlying crystalline
substrates contribute almost nothing to the puckering effect of 2D materials.
Our work directly reveals the top-layer-dependent puckering properties of 2D
materials and demonstrates the general applicability of TSM to bilayer 2D
systems.
|
We present a stochastic modeling framework for atomistic propagation of a
Mode I surface crack, with atoms interacting according to the Lennard-Jones
interatomic potential at zero temperature. Specifically, we invoke the
Cauchy-Born rule and the maximum entropy principle to infer probability
distributions for the parameters of the interatomic potential. We then study
how uncertainties in the parameters propagate to the quantities of interest
relevant to crack propagation, namely, the critical stress intensity factor and
the lattice trapping range. For our numerical investigation, we rely on an
automated version of the so-called numerical-continuation enhanced flexible
boundary (NCFlex) algorithm.
|
The count-min sketch (CMS) is a randomized data structure that provides
estimates of tokens' frequencies in a large data stream using a compressed
representation of the data by random hashing. In this paper, we rely on a
recent Bayesian nonparametric (BNP) view on the CMS to develop a novel
learning-augmented CMS under power-law data streams. We assume that tokens in
the stream are drawn from an unknown discrete distribution, which is endowed
with a normalized inverse Gaussian process (NIGP) prior. Then, using
distributional properties of the NIGP, we compute the posterior distribution of
a token's frequency in the stream, given the hashed data, and in turn
corresponding BNP estimates. Applications to synthetic and real data show that
our approach achieves remarkable performance in the estimation of
low-frequency tokens. This is known to be a desirable feature in natural
language processing, where power-law behaviour of the data is indeed common.
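For readers unfamiliar with the data structure, a minimal classical count-min sketch looks as follows; the paper's contribution replaces the min-rule estimate with a BNP posterior estimate under the NIGP prior, which is not reproduced here:

```python
import random
from collections import Counter

class CountMinSketch:
    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.width = width
        self.salts = [rng.random() for _ in range(depth)]   # one hash per row
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, token):
        return [hash((salt, token)) % self.width for salt in self.salts]

    def update(self, token):
        for row, col in enumerate(self._cells(token)):
            self.table[row][col] += 1

    def query(self, token):
        # Classical estimate: minimum over rows; never underestimates.
        return min(self.table[row][col] for row, col in enumerate(self._cells(token)))

# Toy power-law stream
stream = [f"tok{int(random.paretovariate(1.2))}" for _ in range(10000)]
cms = CountMinSketch(width=128, depth=4)
for t in stream:
    cms.update(t)
print(Counter(stream)["tok1"], cms.query("tok1"))   # estimate >= true count
```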
|
The collision dynamics of hard spheres with cylindrical pores, the minimal
model for a regularly porous membrane, is solved exactly.
Nonequilibrium event-driven molecular dynamics simulations are used to show
that the permeability $P$ of hard spheres of size $\sigma$ through cylindrical
pores of size $d$ follows the hindered-diffusion mechanism due to size exclusion,
as $P \propto (1-\sigma/d)^2$. Under this law, the separation of binary
mixtures of large and small particles exhibits a linear relationship between
$\alpha^{-1/2}$ and $P^{-1/2}$, where $\alpha$ and $P$ are the selectivity and
permeability of the smaller particle, respectively. The mean permeability
through polydisperse pores is the sum of permeabilities of individual pores,
weighted by the fraction of the single pore area over the total pore area.
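The two quoted relations are easy to state in code; a minimal sketch with an arbitrary proportionality constant:

```python
import numpy as np

def permeability(sigma, d, p0=1.0):
    # Size exclusion: P ∝ (1 - sigma/d)^2 for sigma < d, zero otherwise.
    x = 1.0 - sigma / np.asarray(d, dtype=float)
    return p0 * np.where(x > 0, x**2, 0.0)

def mean_permeability(sigma, pore_diams, p0=1.0):
    # Sum of single-pore permeabilities weighted by single-pore area fraction.
    d = np.asarray(pore_diams, dtype=float)
    areas = np.pi * (d / 2.0) ** 2
    return np.sum(areas / areas.sum() * permeability(sigma, d, p0))

print(permeability(0.5, 1.0))                  # 0.25
print(mean_permeability(0.5, [0.8, 1.0, 2.0]))
```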
|
In this article, we aim to show the usefulness of the Quantum Turing Machine
(QTM) in a high-level didactic context as well as in theoretical studies. We
use the QTM to show its equivalence with the quantum circuit model for the
Deutsch and Deutsch-Jozsa algorithms. Through these examples, we further
introduce a strategy for translating from the quantum circuit model to the
Quantum Turing model. Moreover, we illustrate some features of quantum
computing, such as superposition, from a QTM point of view, starting with a
few simple examples well known in their quantum circuit form.
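For reference, the circuit-model side of the smaller example can be written out directly; a minimal numpy sketch of Deutsch's algorithm (one oracle query decides whether f: {0,1} -> {0,1} is constant or balanced):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def oracle(f):
    # U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])             # |0>|1>
    state = np.kron(H, H) @ state
    state = oracle(f) @ state                   # phase kickback
    state = np.kron(H, np.eye(2)) @ state
    p0 = state[0] ** 2 + state[1] ** 2          # probability first qubit reads 0
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))       # constant
print(deutsch(lambda x: x))       # balanced
print(deutsch(lambda x: 1 - x))   # balanced
```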
|
Purpose: This article develops theoretical, algorithmic, perceptual, and
interaction aspects of script legibility enhancement in the visible light
spectrum for the purpose of scholarly editing of papyri texts. - Methods: Novel
legibility enhancement algorithms based on color processing and visual
illusions are compared to classic methods in a user experience experiment. -
Results: (1) The proposed methods outperformed the comparison methods. (2)
Users exhibited a broad behavioral spectrum, under the influence of factors
such as personality and social conditioning, tasks and application domains,
expertise level and image quality, and affordances of software, hardware, and
interfaces. No single enhancement method satisfied all factor configurations.
Therefore, it is suggested to offer users a broad choice of methods to
facilitate personalization, contextualization, and complementarity. (3) A
distinction is made between casual and critical vision on the basis of signal
ambiguity and error consequences. The criteria of a paradigm for enhancing
images for critical applications comprise: interpreting images skeptically;
approaching enhancement as a system problem; considering all image structures
as potential information; and making uncertainty and alternative
interpretations explicit, both visually and numerically.
|
We present Omnidirectional Neural Radiance Fields (OmniNeRF), the first
method for parallax-enabled novel panoramic view synthesis.
Recent works on novel view synthesis focus on perspective images with limited
field-of-view and require sufficiently many pictures captured under specific conditions.
Conversely, OmniNeRF can generate panorama images for unknown viewpoints given
a single equirectangular image as training data. To this end, we propose to
augment the single RGB-D panorama by projecting back and forth between a 3D
world and different 2D panoramic coordinates at different virtual camera
positions. By doing so, we are able to optimize an Omnidirectional Neural
Radiance Field with visible pixels collected from omnidirectional viewing
angles at a fixed center, for the estimation of novel viewing angles from varying
camera positions. As a result, the proposed OmniNeRF achieves convincing
renderings of novel panoramic views that exhibit the parallax effect. We
showcase the effectiveness of each of our proposals on both synthetic and
real-world datasets.
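The back-and-forth projection hinges on the equirectangular camera model; a minimal sketch with assumed coordinate conventions (the paper's exact frames may differ):

```python
import numpy as np

def pixel_to_ray(u, v, width, height):
    # Map an equirectangular pixel to a unit ray direction.
    theta = (u + 0.5) / width * 2.0 * np.pi - np.pi    # longitude in [-pi, pi)
    phi = (v + 0.5) / height * np.pi - np.pi / 2.0     # latitude in [-pi/2, pi/2)
    return np.array([np.cos(phi) * np.sin(theta),
                     np.sin(phi),
                     np.cos(phi) * np.cos(theta)])

def lift_to_world(origin, direction, depth):
    # With the RGB-D panorama's depth, each pixel lifts to a 3D world point,
    # which can then be re-projected into a panorama at a new camera position.
    return np.asarray(origin, dtype=float) + depth * direction

ray = pixel_to_ray(512, 256, 1024, 512)
print(ray, np.linalg.norm(ray))   # unit-length direction
print(lift_to_world([0.0, 0.0, 0.0], ray, depth=2.5))
```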
|
Let $X$ be an integrable discrete random variable over $\{0, 1, 2, \ldots\}$
with $\mathbb{P}(X = i + 1) \leq \mathbb{P}(X = i)$ for all $i$. Then for any
integer $a \geq 1$, $\mathbb{P}(X \geq a) \leq \mathbb{E}[X] / (2a - 1)$. Let
$W$ be a discrete random variable over $\{\ldots, -2, -1, 0, 1, 2, \ldots\}$
with finite second moment whose $\mathbb{P}(W = i)$ values are unimodal.
Then $\mathbb{P}(|W - \mathbb{E}[W]| \geq a) \leq (\mathbb{V}(W) + 1 / 12) /
(2(a - 1 / 2)^2)$.
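Both bounds are easy to probe numerically; a minimal check of the first one on a distribution with non-increasing pmf (geometric on {0,1,2,...}, p(i) = (1-q) q^i, so E[X] = q/(1-q) and P(X >= a) = q^a):

```python
q = 0.8
EX = q / (1 - q)
for a in range(1, 8):
    tail = q**a                   # P(X >= a)
    bound = EX / (2 * a - 1)      # claimed bound
    assert tail <= bound + 1e-12
    print(a, round(tail, 4), round(bound, 4))
```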
|
We point out that light gauge boson mediators could induce new interference
effects in neutrino-electron scattering that can be used to enhance the
sensitivity of neutrino-flavor-selective high-intensity neutrino experiments,
such as DUNE. We particularly emphasize a destructive interference effect,
leading to a deficit between the Standard Model expectation and the
experimental measurement of the differential cross-sections, which is prominent
only in either the neutrino or the antineutrino mode, depending on the mediator
couplings. Therefore, the individual neutrino (or antineutrino) mode could
allow for sensitivity reaches superior to the combined analysis, and moreover,
could distinguish between different types of gauge boson mediators.
|
Modeling and simulation of disease spreading in pedestrian crowds has recently
become a topic of increasing relevance. In this paper, we consider the
influence of the crowd motion in a complex dynamical environment on the course
of infection of the pedestrians. To model the pedestrian dynamics we consider a
kinetic equation for multi-group pedestrian flow based on a social force model
coupled with an Eikonal equation. This model is coupled with a non-local SEIS
contagion model for disease spread, in which, besides local contacts, the
influence of contact times is also modelled. Hydrodynamic
approximations of the coupled system are derived. Finally, simulations of the
hydrodynamic model are carried out using a mesh-free particle method. Different
numerical test cases are investigated including uni- and bi-directional flow in
a passage with and without obstacles.
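To fix notation, a well-mixed SEIS compartment model evolves susceptible, exposed, and infectious fractions with no acquired immunity; the sketch below uses hypothetical rates and omits the paper's non-locality, contact times, and coupling to the pedestrian flow:

```python
def seis_step(s, e, i, beta, kappa, gamma, dt):
    new_e = beta * s * i * dt    # susceptible -> exposed via contacts
    new_i = kappa * e * dt       # exposed -> infectious
    new_s = gamma * i * dt       # infectious -> susceptible (no immunity)
    return s - new_e + new_s, e + new_e - new_i, i + new_i - new_s

s, e, i = 0.99, 0.0, 0.01
for _ in range(2000):
    s, e, i = seis_step(s, e, i, beta=0.5, kappa=0.3, gamma=0.1, dt=0.05)
print(round(s, 3), round(e, 3), round(i, 3))   # endemic equilibrium fractions
```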
|
We describe the formalization of the existence and uniqueness of Haar measure
in the Lean theorem prover. The Haar measure is an invariant regular measure on
locally compact groups, and it has not been formalized in a proof assistant
before. We also discuss the measure theory library in Lean's mathematical
library \textsf{mathlib}, including the construction of product measures and
the proof of Fubini's theorem for the Bochner integral.
|
This paper is concerned with linear parameter-dependent systems and considers
the notion of uniform ensemble reachability. The focus of this work is on
constructive methods to compute suitable parameter-independent open-loop inputs
for such systems. In contrast to necessary and sufficient conditions for
ensemble reachability, computational methods have to distinguish between
continuous-time and discrete-time systems. Based on recently derived sufficient
conditions and techniques from complex approximation, we present two algorithms
for discrete-time single-input linear systems. Moreover, we illustrate that one
method can also be applied to certain continuous-time single-input systems.
|
The orchestra performance is full of sublime rich sounds. In particular, the
unison of violins sounds different from a solo violin. We try to clarify this
difference and similarity between unison and solo by numerically analyzing the
beat of `violins' with timbre, vibrato, melody, and resonance. Characteristic
properties appear in the very low-frequency part of the power spectrum of the
squared wave amplitude. This ultra-bass richness (UBR) can be a new
characteristic of sound on top of the well-known pitch, loudness, and timbre,
although it is inaudible directly. We find that this UBR is always characterized
by a power law at low frequency with an index around -1 and that it appears
everywhere in music, thus being universal. Furthermore, we explore this
power-law property towards much smaller frequency regions and suggest a
possible relation to the 1/f noise often found in music and many other fields
in nature.
|
Tackling online hatred using informed textual responses - called counter
narratives - has recently been brought into the spotlight. Accordingly, a
research line has emerged to automatically generate counter narratives in order
to facilitate the direct intervention in the hate discussion and to prevent
hate content from further spreading. Still, current neural approaches tend to
produce generic/repetitive responses and lack grounded and up-to-date evidence
such as facts, statistics, or examples. Moreover, these models can create
plausible but not necessarily true arguments. In this paper we present the
first complete knowledge-bound counter narrative generation pipeline, grounded
in an external knowledge repository that can provide more informative content
to fight online hatred. Together with our approach, we present a series of
experiments that show its feasibility to produce suitable and informative
counter narratives in in-domain and cross-domain settings.
|
Pretraining Bidirectional Encoder Representations from Transformers (BERT)
for downstream NLP tasks is a non-trivial task. We pretrained 5 BERT models that
differ in the size of their training sets, mixture of formal and informal
Arabic, and linguistic preprocessing. All are intended to support Arabic
dialects and social media. The experiments highlight the centrality of data
diversity and the efficacy of linguistically aware segmentation. They also
highlight that more data or more training steps do not necessarily yield better
models. Our new models achieve new state-of-the-art results on several
downstream tasks. The resulting models are released to the community under the
name QARiB.
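The checkpoints should be loadable with the Hugging Face transformers library; the model id below is an assumption based on the released name, so check the authors' repository for the exact ids:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "qarib/bert-base-qarib"   # hypothetical hub id for the QARiB release
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)
```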
|
We show that any weakly separated Bessel system of model spaces in the Hardy
space on the unit disc is a Riesz system and we highlight some applications to
interpolating sequences of matrices. This will be done without using the recent
solution of the Feichtinger conjecture, whose natural generalization to
multi-dimensional model sub-spaces of $\mathrm{H}^2$ turns out to be false.
|
In quantum metrology, entanglement represents a valuable resource that can be
used to overcome the Standard Quantum Limit (SQL) that bounds the precision of
sensors that operate with independent particles. Measurements beyond the SQL
are typically enabled by relatively simple entangled states (squeezed states
with Gaussian probability distributions), where quantum noise is redistributed
between different quadratures. However, due to both fundamental limitations and
the finite measurement resolution achieved in practice, sensors based on
squeezed states typically operate far from the true fundamental limit of
quantum metrology, the Heisenberg Limit. Here, by implementing an effective
time-reversal protocol through a controlled sign change in an optically
engineered many-body Hamiltonian, we demonstrate atomic-sensor performance with
non-Gaussian states beyond the limitations of spin squeezing, and without the
requirement of extreme measurement resolution. Using a system of 350 neutral
$^{171}$Yb atoms, this signal amplification through time-reversed interaction
(SATIN) protocol achieves the largest sensitivity improvement beyond the SQL
($11.8 \pm 0.5$~dB) demonstrated in any interferometer to date. Furthermore, we
demonstrate a precision improving in proportion to the particle number
(Heisenberg scaling), at fixed distance of 12.6~dB from the Heisenberg Limit.
These results pave the way for quantum metrology using complex entangled
states, with potential broad impact in science and technology. Potential
applications include searches for dark matter and for physics beyond the
standard model, tests of the fundamental laws of physics, timekeeping, and
geodesy.
|
We prove that many of the recently-constructed algebras and categories which
appear in categorification can be equipped with an action of the Lie algebra
sl_2 by derivations. The representations which appear are filtered by tensor
products of Verma modules. In a future paper, we will address the
implications of this structure for categorification.
|
Modern engineering education tends to focus on mathematics and fundamentals,
eschewing critical reflections on technology and the field of engineering. In
this paper, I present an elective engineering course and a 3-lecture module in
an introductory course that emphasize engaging with the social impacts of
technology.
|
We study how the presence of committed volunteers influences the collective
helping behavior in emergency evacuation scenarios. In this study, committed
volunteers do not change their decision to help injured persons, implying that
other evacuees may adapt their helping behavior through strategic interactions.
An evolutionary game theoretic model is developed which is then coupled to a
pedestrian movement model to examine the collective helping behavior in
evacuations. By systematically controlling the number of committed volunteers
and payoff parameters, we have characterized and summarized various collective
helping behaviors in phase diagrams. From our numerical simulations, we observe
that the existence of committed volunteers can promote cooperation but adding
additional committed volunteers is effective only above a minimum number of
committed volunteers. This study also highlights that the evolution of
collective helping behavior is strongly affected by the evacuation process.
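The role of a minimum number of committed volunteers can be illustrated with a stripped-down replicator dynamic; this sketch uses hypothetical payoffs and omits the paper's pedestrian movement coupling:

```python
def simulate(c, b=1.0, k=0.08, x0=0.1, steps=4000, dt=0.01):
    # c: committed fraction (always helps); x: helping fraction among the rest.
    x = x0
    for _ in range(steps):
        helpers = c + (1 - c) * x
        gain = 0.5 * b * helpers - k      # payoff advantage of helping (toy form)
        x = min(max(x + dt * x * (1 - x) * gain, 0.0), 1.0)
    return x

for c in (0.0, 0.05, 0.2):
    print(c, round(simulate(c), 3))   # cooperation takes off only above a minimum c
```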
|
Let $(M,g)$ be a smooth Anosov Riemannian manifold and $\mathcal{C}^\sharp$
the set of its primitive closed geodesics. Given a Hermitian vector bundle
$\mathcal{E}$ equipped with a unitary connection $\nabla^{\mathcal{E}}$, we
define $\mathcal{T}^\sharp(\mathcal{E}, \nabla^{\mathcal{E}})$ as the sequence
of traces of holonomies of $\nabla^{\mathcal{E}}$ along elements of
$\mathcal{C}^\sharp$. This descends to a homomorphism on the additive moduli
space $\mathbb{A}$ of connections up to gauge $\mathcal{T}^\sharp: (\mathbb{A},
\oplus) \to \ell^\infty(\mathcal{C}^\sharp)$, which we call the
$\textit{primitive trace map}$. It is the restriction of the well-known
$\textit{Wilson loop}$ operator to primitive closed geodesics.
The main theorem of this paper shows that the primitive trace map
$\mathcal{T}^\sharp$ is locally injective near generic points of $\mathbb{A}$
when $\dim(M) \geq 3$. We obtain global results in some particular cases: flat
bundles, direct sums of line bundles, and general bundles in negative curvature
under a spectral assumption which is satisfied in particular for connections
with small curvature. As a consequence of the main theorem, we also derive a
spectral rigidity result for the connection Laplacian.
The proofs are based on two new ingredients: a Liv\v{s}ic-type theorem in
hyperbolic dynamical systems showing that the cohomology class of a unitary
cocycle is determined by its trace along closed primitive orbits, and a theorem
relating the local geometry of $\mathbb{A}$ with the Pollicott-Ruelle resonance
near zero of a certain natural transport operator.
|
We present a detailed investigation of millimeter-wave line emitters ALMA
J010748.3-173028 (ALMA-J0107a) and ALMA J010747.0-173010 (ALMA-J0107b), which
were serendipitously uncovered in the background of the nearby galaxy VV114
with spectral scan observations at $\lambda$ = 2 - 3 mm. Via Atacama Large
Millimeter/submillimeter Array (ALMA) detection of CO(4-3), CO(3-2), and
[CI](1-0) lines for both sources, their spectroscopic redshifts are
unambiguously determined to be $z= 2.4666\pm0.0002$ and $z=2.3100\pm0.0002$,
respectively. We obtain the apparent molecular gas masses $M_{\rm gas}$ of
these two line emitters from [CI] line fluxes as $(11.2 \pm 3.1) \times 10^{10}
M_\odot$ and $(4.2 \pm 1.2) \times 10^{10} M_\odot$, respectively. The observed
CO(4-3) velocity field of ALMA-J0107a exhibits a clear velocity gradient across
the CO disk, and we find that ALMA-J0107a is characterized by an inclined
rotating disk with significant turbulence, that is, a deprojected maximum
rotation velocity to velocity dispersion ratio $v_{\rm max}/\sigma_{v}$ of $1.3
\pm 0.3$. We find that the dynamical mass of ALMA-J0107a within the CO-emitting
disk computed from the derived kinetic parameters, $(1.1 \pm 0.2) \times
10^{10}\ M_\odot$, is an order of magnitude smaller than the molecular gas mass
derived from dust continuum emission, $(3.2\pm1.6)\times10^{11}\ M_{\odot}$. We
suggest this source is magnified by a gravitational lens with a magnification
of $\mu \gtrsim10$, which is consistent with the measured offset from the
empirical correlation between CO-line luminosity and width.
|
Biosignals are nowadays important subjects of scientific research in both
theory and applications, especially with the appearance of new pandemics
threatening humanity, such as the new coronavirus. One aim of the present work
is to show that wavelets may be a successful machinery to understand such
phenomena by applying a step-forward extension of wavelets to multiwavelets.
As a first step, we propose to improve the multiwavelet notion by constructing
more general families using independent components for the multi-scaling and
multiwavelet mother functions. A special multiwavelet is then introduced,
continuous and discrete multiwavelet transforms are associated with it, as
well as new filters and algorithms of decomposition and reconstruction. The
constructed multiwavelet framework is applied in some experiments, showing
fast algorithms, ECG signal processing, and the processing of a strain of
coronavirus.
|
We study the dynamics of the group of holomorphic automorphisms of the affine
cubic surfaces \begin{align*} S_{A,B,C,D} = \{(x,y,z) \in \mathbb{C}^3 \, : \,
x^2 + y^2 + z^2 +xyz = Ax + By+Cz+D\}, \end{align*} where $A,B,C,$ and $D$ are
complex parameters. We focus on a finite index subgroup $\Gamma_{A,B,C,D} <
{\rm Aut}(S_{A,B,C,D})$ whose action not only describes the dynamics of
Painlev\'e 6 differential equations but also arises naturally in the context of
character varieties. We define the Julia and Fatou sets of this group action
and prove that there is a dense orbit in the Julia set. In order to show that
the Julia set is ``large'' we consider a second dichotomy, between locally
discrete and locally non-discrete dynamics. For an open set in parameter space,
$\mathcal{N} \subset \mathbb{C}^4$, we show that there simultaneously exists an
open set in $S_{A,B,C,D}$ on which $\Gamma_{A,B,C,D}$ acts locally discretely
and a second open set in $S_{A,B,C,D}$ on which $\Gamma_{A,B,C,D}$ acts locally
non-discretely. After removing a countable union of real-algebraic
hypersurfaces from $\mathcal{N}$ we show that $\Gamma_{A,B,C,D}$ simultaneously
exhibits a non-empty Fatou set and also a Julia set having non-trivial
interior. The open set $\mathcal{N}$ contains a natural family of parameters
previously studied by Dubrovin-Mazzocco.
The interplay between the Fatou/Julia dichotomy and the locally
discrete/non-discrete dichotomy is a major theme in this paper and seems
bound to play an important role in further dynamical studies of holomorphic
automorphism groups.
|
Smart power grids are one of the most complex cyber-physical systems,
delivering electricity from power generation stations to consumers. It is
critically important to know exactly the current state of the system as well as
its state variation tendency; consequently, state estimation and state
forecasting are widely used in smart power grids. Given that state forecasting
predicts the system state ahead of time, it can enhance state estimation
because state estimation is highly sensitive to measurement corruption due to
bad data or communication failures. In this paper, a hybrid deep
learning-based method is proposed for power system state forecasting. The
proposed method leverages a Convolutional Neural Network (CNN) for predicting
voltage magnitudes and a Deep Recurrent Neural Network (RNN) for predicting
phase angles. The proposed CNN-RNN model is evaluated on the IEEE 118-bus
benchmark. The results demonstrate that the proposed CNN-RNN model achieves
better results than the existing techniques in the literature, reducing the
normalized Root Mean Squared Error (RMSE) of the predicted voltages by 10%. The
results also show a 65% and 35% decrease in the average and maximum absolute
error of voltage magnitude forecasting.
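A hedged sketch of the described hybrid architecture (layer sizes, the GRU choice for the recurrent part, and the input windowing are assumptions):

```python
import torch
import torch.nn as nn

class HybridForecaster(nn.Module):
    def __init__(self, n_bus=118, hidden=64):
        super().__init__()
        # CNN head for voltage magnitudes
        self.cnn = nn.Sequential(
            nn.Conv1d(n_bus, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(hidden, n_bus))
        # Recurrent head for phase angles
        self.rnn = nn.GRU(input_size=n_bus, hidden_size=hidden, batch_first=True)
        self.angle_head = nn.Linear(hidden, n_bus)

    def forward(self, v_mag, v_ang):
        # v_mag: (batch, n_bus, window); v_ang: (batch, window, n_bus)
        mag_next = self.cnn(v_mag)
        _, h = self.rnn(v_ang)
        ang_next = self.angle_head(h[-1])
        return mag_next, ang_next

model = HybridForecaster()
m, a = model(torch.randn(2, 118, 12), torch.randn(2, 12, 118))
print(m.shape, a.shape)   # next-step magnitudes and angles per bus
```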
|
In several previous studies, quasars exhibiting broad emission lines with
>1000 km/s velocity offsets with respect to the host galaxy rest frame have
been discovered. One leading hypothesis for the origin of these velocity-offset
broad lines is the dynamics of a binary supermassive black hole (SMBH). We
present high-resolution radio imaging of 34 quasars showing these
velocity-offset broad lines with the Very Long Baseline Array (VLBA), aimed at
finding evidence for the putative binary SMBHs (such as dual radio cores), and
testing the competing physical models. We detect exactly half of the target
sample from our VLBA imaging, after implementing a 5$\sigma$ detection limit. While we
do not resolve double radio sources in any of the targets, we obtain limits on
the instantaneous projected separations of a radio-emitting binary for all of
the detected sources under the assumption that a binary still exists within our
VLBA angular resolution limits. We also assess the likelihood that a
radio-emitting companion SMBH exists outside of our angular resolution limits,
but its radio luminosity is too weak to produce a detectable signal in the VLBA
data. Additionally, we compare the precise sky positions afforded by these data
to optical positions from both the SDSS and Gaia DR2 source catalogs. We find
projected radio/optical separations on the order of 10 pc for three quasars.
Finally, we explore how future multi-wavelength campaigns with optical, radio,
and X-ray observatories can help discriminate further between the competing
physical models.
|
We compute holographic complexity for the non-supersymmetric Janus
deformation of AdS$_5$ according to the volume conjecture. The result is
characterized by a power-law ultraviolet divergence. When a ball-shaped region
located around the interface is considered, a sub-leading logarithmic divergent
term and a finite part appear in the corresponding subregion volume complexity.
Using two different prescriptions to regularize the divergences, we find that
the coefficient of the logarithmic term is universal.
|
We consider networks of small, autonomous devices that communicate with each
other wirelessly. Minimizing energy usage is an important consideration in
designing algorithms for such networks, as battery life is a crucial and
limited resource. Working in a model where both sending and listening for
messages deplete energy, we consider the problem of finding a maximal matching
of the nodes in a radio network of arbitrary and unknown topology.
We present a distributed randomized algorithm that produces, with high
probability, a maximal matching. The maximum energy cost per node is $O(\log^2
n)$, where $n$ is the size of the network. The total latency of our algorithm
is $O(n \log n)$ time steps. We observe that there exist families of network
topologies for which both of these bounds are simultaneously optimal up to
polylog factors, so any significant improvement will require additional
assumptions about the network topology.
We also consider the related problem of assigning, for each node in the
network, a neighbor to back up its data in case of node failure. Here, a key
goal is to minimize the maximum load, defined as the number of nodes assigned
to a single node. We present a decentralized low-energy algorithm that finds a
neighbor assignment whose maximum load is at most a polylog($n$) factor bigger
than the optimum.
|
Classical approaches for OLAP assume that the data of all tables is complete.
However, in case of incomplete tables with missing tuples, classical approaches
fail since the result of a SQL aggregate query might significantly differ from
the results computed on the full dataset. Today, the only way to deal with
missing data is to manually complete the dataset, which not only causes high
effort but also requires good statistical skills to determine when a dataset
is actually complete. In this paper, we propose an automated approach for
relational data completion called ReStore using a new class of (neural)
schema-structured completion models that are able to synthesize data which
resembles the missing tuples. As we show in our evaluation, this efficiently
helps to reduce the relative error of aggregate queries by up to 390% (i.e.,
by almost a factor of five) on real-world data compared to using the
incomplete data directly for query answering.
|
Cultural diversity encoded within languages of the world is at risk, as many
languages have become endangered in the last decades in a context of growing
globalization. To preserve this diversity, it is first necessary to understand
what drives language extinction, and which mechanisms might enable coexistence.
Here, we study language shift mechanisms using theoretical and data-driven
perspectives. A large-scale empirical analysis of multilingual societies using
Twitter and census data yields a wide diversity of spatial patterns of language
coexistence. It ranges from a mixing of language speakers to segregation with
multilinguals on the boundaries of disjoint linguistic domains. To understand
how these different states can emerge and, especially, become stable, we
propose a model in which language coexistence is reached when learning the
other language is facilitated and when bilinguals favor the use of the
endangered language. Simulations carried out in a metapopulation framework
highlight the importance of spatial interactions arising from people mobility
to explain the stability of a mixed state or the presence of a boundary between
two linguistic regions. Further, we find that the history of languages is
critical to understand their present state.
|
In this paper we develop a new approach to the study of uncountable
fundamental groups by using Hurewicz fibrations with the unique path-lifting
property (lifting spaces for short) as a replacement for covering spaces. In
particular, we consider the inverse limit of a sequence of covering spaces of
$X$. It is known that the path-connectivity of the inverse limit can be
expressed by means of the derived inverse limit functor $\varprojlim^1$, which
is, however, notoriously difficult to compute when $\pi_1(X)$ is
uncountable. To circumvent this difficulty, we express the set of
path-components of the inverse limit, $\widehat X$, in terms of the functors
$\varprojlim$ and $\varprojlim^1$ applied to sequences of countable groups
arising from polyhedral approximations of $X$.
A consequence of our computation is that path-connectedness of the lifting space
implies that $\pi_1(\tilde X)$ supplements $\pi_1(X)$ in $\check\pi_1(X)$, where
$\check\pi_1(X)$ is the inverse limit of fundamental groups of polyhedral
approximations of $X$. As an application we show that $\mathcal G\cdot
\ker_{\mathbb Z}(\widehat F)= \widehat F\ne\mathcal G\cdot
\ker_{B(1,n)}(\widehat F)$, where $\widehat F$ is the canonical inverse limit
of finite rank free groups, $\mathcal G$ is the fundamental group of the
Hawaiian Earring, and $\ker_A(\widehat F)$ is the intersection of kernels of
homomorphisms from $\widehat{F}$ to $A$.
|
Singular beams have attracted great attention due to their optical properties
and broad applications from light manipulation to optical communications.
However, there has been a lack of practical schemes with which to achieve
switchable singular beams with sub-wavelength resolution using ultrathin and
flat optical devices. In this work, we demonstrate the generation of switchable
vector and vortex beams utilizing dynamic metasurfaces at visible frequencies.
The dynamic functionality of the metasurface pixels is enabled by the
utilization of magnesium nanorods, which possess plasmonic reconfigurability
upon hydrogenation and dehydrogenation. We show that switchable vector beams of
different polarization states and switchable vortex beams of different
topological charges can be implemented through simple hydrogenation and
dehydrogenation of the same metasurfaces. Furthermore, we demonstrate a
two-cascade metasurface scheme for holographic pattern switching, taking
inspiration from orbital angular momentum-shift keying. Our work provides an
additional degree of freedom to develop high-security optical elements for
anti-counterfeiting applications.
|
In this paper, we consider the gravitational collapse of a symmetric
radiating star consisting of a perfect (baryonic) fluid in the background of
dark energy (DE) with a general equation of state. The effect of DE on
singularity formation is discussed first separately (only DE present) and then
for the interacting combination of baryonic matter and DE. We also show that
the DE components play an important role in the formation of black holes (BH).
In some cases the collapse of the radiating star leads to black hole formation,
while in other cases it forms a naked singularity (or an eternally collapsing
object). The present work describes the effect of dark energy on singularity
formation in radiating stars.
|
This book chapter describes a novel approach to training machine learning
systems by means of a hybrid computer setup, i.e., a digital computer tightly
coupled with an analog computer. As an example, a reinforcement learning system
is trained to balance an inverted pendulum which is simulated on an analog
computer, thus demonstrating a solution to the major challenge of adequately
simulating the environment for reinforcement learning.
|
In the present work, we tackle the regular language indexing problem by first
studying the hierarchy of $p$-sortable languages: regular languages accepted by
automata of width $p$. We show that the hierarchy is strict and does not
collapse, and provide (exponential in $p$) upper and lower bounds relating the
minimum widths of equivalent NFAs and DFAs. Our bounds indicate the importance
of being able to index NFAs, as they enable indexing regular languages with
much faster and smaller indexes. Our second contribution solves precisely this
problem, optimally: we devise a polynomial-time algorithm that indexes any NFA
with the optimal value $p$ for its width, without explicitly computing $p$
(NP-hard to find). In particular, this implies that we can index in polynomial
time the well-studied case $p=1$ (Wheeler NFAs). More in general, in polynomial
time we can build an index breaking the worst-case conditional lower bound of
$\Omega(|P| m)$, whenever the input NFA's width is $p \in o(\sqrt{m})$.
|
The modern search for extraterrestrial intelligence (SETI) began with the
seminal publications of Cocconi & Morrison (1959) and Schwartz & Townes (1961),
who proposed to search for narrow-band signals in the radio spectrum, and for
optical laser pulses. Over the last six decades, more than one hundred
dedicated search programs have targeted these wavelengths; all with null
results. All of these campaigns searched for classical communications, that is,
for a significant number of photons above a noise threshold; with the
assumption of a pattern encoded in time and/or frequency space. I argue that
future searches should also target quantum communications. They are preferred
over classical communications with regards to security and information
efficiency, and they would have escaped detection in all previous searches. The
measurement of Fock state photons or squeezed light would indicate the
artificiality of a signal. I show that quantum coherence is feasible over
interstellar distances, and explain for the first time how astronomers can
search for quantum transmissions sent by ETI to Earth, using commercially
available telescopes and receiver equipment.
|
Entanglement entropy (EE) in interacting field theories involves two important
issues: renormalization of UV divergences and non-Gaussianity of the vacuum. In
this letter, we investigate them in the framework of the two-particle
irreducible formalism. In particular, we consider EE of a half space in an
interacting scalar field theory. It is formulated as $\mathbb{Z}_M$ gauge
theory on Feynman diagrams: $\mathbb{Z}_M$ fluxes are assigned on plaquettes
and summed to obtain EE. Some configurations of fluxes are interpreted as
twists of propagators and vertices. The former gives a Gaussian part of EE
written in terms of a renormalized 2-point function while the latter reflects
non-Gaussianity of the vacuum.
|
It is known that the Cabibbo-Kobayashi-Maskawa (CKM) $n\times n$ matrix can
be represented by a real matrix iff there is no CP-violation, and then the
Jarlskog invariants vanish. We investigate sufficient conditions for the
opposite statement to hold, paying particular attention to degenerate cases. We
find that higher Jarlskog invariants are needed for $n\geq 4$. One generic
sufficient condition is provided by the existence of a so-called echelon cross.
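For the familiar $n=3$ case, the link between the phase and the (single) Jarlskog invariant is easy to check numerically; a minimal sketch in the standard CKM parametrization (the higher invariants relevant for $n\geq4$ are not illustrated):

```python
import numpy as np

def ckm(t12, t23, t13, delta):
    s12, c12 = np.sin(t12), np.cos(t12)
    s23, c23 = np.sin(t23), np.cos(t23)
    s13, c13 = np.sin(t13), np.cos(t13)
    em, ep = np.exp(-1j * delta), np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep, c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep, -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13]])

def jarlskog(V):
    # J = Im(V_us V_cb V_ub* V_cs*), a rephasing-invariant quartic product
    return float(np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1])))

print(jarlskog(ckm(0.227, 0.042, 0.0037, 1.14)))   # ~3e-5: CP violation
print(jarlskog(ckm(0.227, 0.042, 0.0037, 0.0)))    # 0: matrix can be made real
```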
|
Quantum contextuality takes an important place amongst the concepts of
quantum computing that bring an advantage over its classical counterpart. For a
large class of contextuality proofs, a.k.a. observable-based proofs of the
Kochen-Specker Theorem, we first formulate the contextuality property as the
absence of solutions to a linear system. Then we explain why subgeometries of
binary symplectic polar spaces are candidates for contextuality proofs. We
report first results of a software that generates these subgeometries and
decides their contextuality. The proofs we consider involve more contexts and
observables than the smallest known proofs. This intermediate size property of
those proofs is interesting for experimental tests, but could also be
interesting in quantum game theory.
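The linear-system formulation can be seen on the textbook example of the Mermin-Peres magic square: each of the 9 observables gets a GF(2) variable (its eigenvalue written as $(-1)^{x_i}$), each context imposes a parity constraint, and contextuality is exactly the unsolvability of the system. A self-contained check:

```python
import numpy as np

A = np.zeros((6, 9), dtype=int)
for r in range(3):
    A[r, 3 * r: 3 * r + 3] = 1        # three row contexts
for c in range(3):
    A[3 + c, [c, c + 3, c + 6]] = 1   # three column contexts
b = np.array([0, 0, 0, 0, 0, 1])      # all products +I except the last column: -I

def solvable_gf2(A, b):
    M = np.concatenate([A, b[:, None]], axis=1) % 2
    row = 0
    for col in range(A.shape[1]):
        piv = next((r for r in range(row, M.shape[0]) if M[r, col]), None)
        if piv is None:
            continue
        M[[row, piv]] = M[[piv, row]]
        for r in range(M.shape[0]):
            if r != row and M[r, col]:
                M[r] = (M[r] + M[row]) % 2
        row += 1
    # Inconsistent iff some reduced row reads 0 = 1.
    return not any(r[:-1].sum() == 0 and r[-1] == 1 for r in M)

print(solvable_gf2(A, b))   # False: no classical value assignment exists
```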
|
A q-Gauss-Newton algorithm is an iterative procedure that solves nonlinear
unconstrained optimization problems based on minimization of the sum of squared
errors of the objective function residuals. The main advantage of the algorithm
is that it approximates the matrix of q-second-order derivatives with the
first-order q-Jacobian matrix. For that reason, the algorithm is much faster
than q-steepest-descent algorithms. The convergence of the q-GN method is
assured only when the initial guess is close enough to the solution. In this
paper the influence of the parameter q on solving the nonlinear problem is
presented through three examples. The results show that the q-GN algorithm
finds an optimal solution and speeds up the iterative procedure.
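For orientation, the classical ($q\to1$) Gauss-Newton iteration that the method deforms looks as follows; the q-Jacobian replacement itself is not shown:

```python
import numpy as np

def gauss_newton(r, J, x0, iters=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        res, jac = r(x), J(x)
        # Normal equations J^T J dx = -J^T r give the update step.
        x = x + np.linalg.solve(jac.T @ jac, -jac.T @ res)
    return x

# Toy residuals: fit y = exp(a*t) + b to data generated with a=0.5, b=2.
t = np.linspace(0, 1, 20)
y = np.exp(0.5 * t) + 2.0
r = lambda x: np.exp(x[0] * t) + x[1] - y
J = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
print(gauss_newton(r, J, x0=[0.1, 0.0]))   # ~[0.5, 2.0]
```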
|
We study the runtime verification of hyperproperties, expressed in the
temporal logic HyperLTL, as a means to inspect a system with respect to
security polices. Runtime monitors for hyperproperties analyze trace logs that
are organized by common prefixes in the form of a tree-shaped Kripke structure,
or are organized both by common prefixes and by common suffixes in the form of
an acyclic Kripke structure. Unlike runtime verification techniques for trace
properties, where the monitor tracks the state of the specification but usually
does not need to store traces, a monitor for hyperproperties repeatedly model
checks the growing Kripke structure. This calls for a rigorous complexity
analysis of the model checking problem over tree-shaped and acyclic Kripke
structures. We show that for trees, the complexity in the size of the Kripke
structure is L-complete independently of the number of quantifier alternations
in the HyperLTL formula. For acyclic Kripke structures, the complexity is
PSPACE-complete (in the level of the polynomial hierarchy that corresponds to
the number of quantifier alternations). The combined complexity in the size of
the Kripke structure and the length of the HyperLTL formula is PSPACE-complete
for both trees and acyclic Kripke structures, and is as low as NC for the
relevant case of trees and alternation-free HyperLTL formulas. Thus, the size
and shape of both the Kripke structure and the formula have significant impact
on the complexity of the model checking problem.
|
Resource allocation is investigated for offloading computation-intensive
tasks in a multi-hop mobile edge computing (MEC) system. The envisioned system
has both cooperative access points (APs) with computing capability and MEC
servers. A user-device (UD) therefore first uploads a computing task to the
nearest AP, and the AP can either process the received task locally or offload
it to an MEC server. In order to utilize the radio resource blocks (RRBs) at
the APs efficiently, we exploit non-orthogonal multiple access (NOMA) for
offloading the tasks from the UDs to the AP(s). For the considered NOMA-enabled
multi-hop MEC computing system, our objective is to minimize the latency
and energy consumption of the system jointly. Towards this goal, a joint
optimization problem is formulated by taking the offloading decision of the
APs, the scheduling among the UDs, RRBs, and APs, and UDs' transmit power
allocation into account. To solve this problem efficiently, (i) a conflict
graph-based approach is devised that solves the scheduling among the UDs, APs,
and RRBs, the transmit power control, and the APs' computation resource
allocation jointly, and (ii) a low-complexity pruning graph-based approach is
also devised. The efficiency of the proposed graph-based approaches over
several benchmark schemes is verified via extensive simulations.
|
Decentralized optimization over time-varying graphs has been increasingly
common in modern machine learning with massive data stored on millions of
mobile devices, such as in federated learning. This paper revisits the widely
used accelerated gradient tracking and extends it to time-varying graphs. We
prove the $O((\frac{\gamma}{1-\sigma_{\gamma}})^2\sqrt{\frac{L}{\epsilon}})$
and
$O((\frac{\gamma}{1-\sigma_{\gamma}})^{1.5}\sqrt{\frac{L}{\mu}}\log\frac{1}{\epsilon})$
complexities for the practical single loop accelerated gradient tracking over
time-varying graphs when the problems are nonstrongly convex and strongly
convex, respectively, where $\gamma$ and $\sigma_{\gamma}$ are two common
constants characterizing the network connectivity, $\epsilon$ is the desired
precision, and $L$ and $\mu$ are the smoothness and strong convexity constants,
respectively. Our complexities improve significantly over the ones of
$O(\frac{1}{\epsilon^{5/7}})$ and
$O((\frac{L}{\mu})^{5/7}\frac{1}{(1-\sigma)^{1.5}}\log\frac{1}{\epsilon})$,
respectively, which were proved in the original literature only for static
graphs, where $\frac{1}{1-\sigma}$ equals $\frac{\gamma}{1-\sigma_{\gamma}}$
when the network is time-invariant. When combining with a multiple consensus
subroutine, the dependence on the network connectivity constants can be further
improved to $O(1)$ and $O(\frac{\gamma}{1-\sigma_{\gamma}})$ for the
computation and communication complexities, respectively. When the network is
static, by employing the Chebyshev acceleration, our complexities exactly match
the lower bounds without hiding any poly-logarithmic factor for both
nonstrongly convex and strongly convex problems.
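For context, the underlying (non-accelerated) gradient tracking recursion over a time-varying graph is sketched below with a toy quadratic objective and two alternating mixing matrices; the accelerated scheme analyzed in the paper adds momentum on top of this:

```python
import numpy as np

n, d, eta = 4, 3, 0.1
targets = np.arange(n * d, dtype=float).reshape(n, d)   # f_i(x) = 0.5||x - t_i||^2
grad = lambda X: X - targets                             # stacked local gradients

def mixing(k):
    # Toy time-varying topology: two alternating doubly stochastic matrices
    # whose union over time is connected.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
    for i, j in ([(0, 1), (2, 3)] if k % 2 == 0 else [(1, 2), (3, 0)]):
        W[i, j] = W[j, i] = 0.5
    return W

X = np.zeros((n, d))
Y = grad(X)                        # tracker of the average gradient
for k in range(500):
    W = mixing(k)
    X_new = W @ X - eta * Y
    Y = W @ Y + grad(X_new) - grad(X)
    X = X_new
print(X.round(3))                  # every row -> the average of the targets
```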
|
Motivated by questions of Fouvry and Rudnick on the distribution of Gaussian
primes, we develop a very general setting in which one can study inequities in
the distribution of analogues of primes through analytic properties of
infinitely many $L$-functions. In particular, we give a heuristic argument for
the following claim: for more than half of the prime numbers that can be
written as a sum of two squares, the odd square is the square of a positive
integer congruent to $1 \bmod 4$.
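The claim lends itself to a quick empirical count (the bias is small, so large samples are needed); a minimal check using the essentially unique representation p = a^2 + b^2 with a odd:

```python
from math import isqrt
from sympy import isprime

counts = {1: 0, 3: 0}
for p in range(5, 100000, 4):              # candidates p ≡ 1 (mod 4)
    if not isprime(p):
        continue
    for a in range(1, isqrt(p) + 1, 2):    # odd member of the representation
        b = isqrt(p - a * a)
        if b * b == p - a * a:
            counts[a % 4] += 1
            break
print(counts)   # the claim predicts a slight excess of residue 1
```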
|
Methane ebullition (bubbling) from lake sediments is an important methane
flux into the atmosphere. Previous studies have focused on the open-water
season, showing that temperature variations, pressure fluctuations and
wind-induced currents can affect ebullition. However, ebullition surveys during
the ice-cover are rare despite the prevalence of seasonally ice-covered lakes,
and the factors controlling ebullition are poorly understood. Here, we present
a month-long, high frequency record of acoustic ebullition data from an
ice-covered lake. The record shows that ebullition occurs almost exclusively
when atmospheric pressure drops below a threshold that is approximately equal
to the long-term average pressure. The intensity of ebullition is proportional
to the amount by which the pressure drops below this threshold. In addition,
field measurements of turbidity, in conjunction with laboratory experiments,
provide evidence that ebullition is responsible for previously unexplained
elevated levels of turbidity during ice-cover.
|
We present SrvfNet, a generative deep learning framework for the joint
multiple alignment of large collections of functional data comprising
square-root velocity functions (SRVF) to their templates. Our proposed
framework is fully unsupervised and is capable of aligning to a predefined
template as well as jointly predicting an optimal template from data while
simultaneously achieving alignment. Our network is constructed as a generative
encoder-decoder architecture comprising fully-connected layers capable of
producing a distribution space of the warping functions. We demonstrate the
strength of our framework by validating it on synthetic data as well as
diffusion profiles from magnetic resonance imaging (MRI) data.
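For reference, the SRVF underlying the framework is $q(t)=\dot f(t)/\sqrt{|\dot f(t)|}$; a finite-difference sketch:

```python
import numpy as np

def srvf(f, t, eps=1e-12):
    # q(t) = f'(t) / sqrt(|f'(t)|); eps guards against division by zero.
    df = np.gradient(f, t)
    return df / np.sqrt(np.abs(df) + eps)

t = np.linspace(0.0, 1.0, 200)
f = np.sin(2.0 * np.pi * t)
print(srvf(f, t)[:5])   # alignment then searches over warpings gamma of t
```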
|
We study the time evolution of the excess value of capacity of entanglement
between a locally excited state and ground state in free, massless fermionic
theory and free Yang-Mills theory in four spacetime dimensions. Capacity has
non-trivial time evolution and is sensitive to the partial entanglement
structure, and shows a universal peak at early times. We define a quantity, the
normalized "Page time", which measures the timescale when capacity reaches its
peak. This quantity turns out to be a characteristic property of the inserted
operator. This firmly establishes capacity as a valuable measure of
entanglement structure of an operator, especially at early times similar in
spirit to the Renyi entropies at late times. Interestingly, the time evolution
of capacity closely resembles its evolution in microcanonical and canonical
ensemble of the replica wormhole model in the context of the black hole
information paradox.
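For reference, in the standard convention the capacity of entanglement is the variance of the modular Hamiltonian $K_A=-\log\rho_A$, i.e. $C_A=\mathrm{Tr}(\rho_A K_A^2)-\big(\mathrm{Tr}(\rho_A K_A)\big)^2$, just as the entanglement entropy is its mean.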
|
We have performed density-matrix renormalization group studies of a square
lattice $t$-$J$ model with small hole doping, $\delta\ll 1$, on long 4 and 6
leg cylinders. We include frustration in the form of a second-neighbor exchange
coupling, $J_2 = J_1/2$, such that the undoped ($\delta=0$) "parent" state is a
quantum spin liquid. In contrast to the relatively short range superconducting
(SC) correlations that have been observed in recent studies of the 6-leg
cylinder in the absence of frustration, we find power law SC correlations with
a Luttinger exponent, $K_{sc} \approx 1$, consistent with a strongly diverging
SC susceptibility, $\chi \sim T^{-(2-K_{sc})}$ as the temperature $T\to 0$. The
spin-spin correlations - as in the undoped state - fall exponentially
suggesting that the SC "pairing" correlations evolve smoothly from the
insulating parent state.
|
We find a phenomenon in a non-gravitational gauge theory analogous to the
replica wormhole in a quantum gravity theory. We consider a reservoir of a
scalar field coupled with a gauge theory contained in a region with a boundary
by an axion-like coupling. When the replica trick is used to compute the
entanglement entropy for a subregion in the reservoir, a tuple of instantons
distributed across the replica sheets gives a non-perturbative contribution. As
an explicit and solvable example, we consider a discrete scalar field coupled
to a 2d pure gauge theory and observe how the replica instantons reproduce the
entropy directly calculated from the reduced density matrix. In addition, we
notice that the entanglement entropy can detect the confinement of a 2d gauge
theory.
|
GRB200522A is a short-duration gamma-ray burst (GRB) at redshift $z$=0.554
characterized by a bright infrared counterpart. A possible, although not
unambiguous, interpretation of the observed emission is the onset of a luminous
kilonova powered by a rapidly rotating and highly-magnetized neutron star,
known as magnetar. A bright radio flare, arising from the interaction of the
kilonova ejecta with the surrounding medium, is a prediction of this model.
Whereas the available dataset remains open to multiple interpretations (e.g.
afterglow, r-process kilonova, magnetar-powered kilonova), long-term radio
monitoring of this burst may be key to discriminate between models. We present
our late-time upper limit on the radio emission of GRB200522A, carried out with
the Karl G. Jansky Very Large Array at 288 days after the burst. For kilonova
ejecta with energy $E_{\rm ej} \approx 10^{53}\,{\rm erg}$, as expected for a
long-lived magnetar remnant, we can already rule out ejecta masses $M_{\rm ej}
\lesssim 0.03\,\mathrm{M}_\odot$ for the most likely range of circumburst
densities $n\gtrsim 10^{-3}$ cm$^{-3}$. Observations on timescales of
$\approx$3-10 yr after the merger will probe larger ejecta masses up to $M_{\rm
ej} \sim 0.1\,\mathrm{M}_\odot$, providing a robust test of the magnetar
scenario.
|
Most real-world datasets are inherently heterogeneous graphs, which involve a
diversity of node and relation types. Heterogeneous graph embedding aims to
learn the structural and semantic information of the graph and to encode it
into low-dimensional node representations. Existing methods usually capture the
composite relations of a heterogeneous graph by defining metapaths, each of
which represents one semantic aspect of the graph. However, these methods
either ignore node attributes, discard the local and global information of the
graph, or consider only a single metapath. To address these limitations, we
propose a Metapaths-guided Neighbors-aggregated Heterogeneous Graph Neural
Network (MHN) to improve performance. Specifically, MHN employs node base
embedding to encapsulate node attributes, BFS and DFS neighbor aggregation
within a metapath to capture local and global information, and metapath
aggregation to combine the different semantics of the heterogeneous graph. We
conduct extensive
experiments for the proposed MHN on three real-world heterogeneous graph
datasets, including node classification, link prediction and online A/B test on
Alibaba mobile application. Results demonstrate that MHN performs better than
other state-of-the-art baselines.
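As an illustration of the aggregation scheme (not the authors' implementation,
which uses learned transformations and attention), a toy sketch with
hypothetical `bfs_neighbors`/`dfs_neighbors` lookups:

```python
import numpy as np

def aggregate_metapath(node_emb, bfs_neighbors, dfs_neighbors, node):
    # Local view: mean over breadth-first neighbors along one metapath
    # (assumed non-empty); global view: mean over depth-first neighbors.
    local = np.mean([node_emb[v] for v in bfs_neighbors[node]], axis=0)
    glob = np.mean([node_emb[v] for v in dfs_neighbors[node]], axis=0)
    return np.concatenate([node_emb[node], local, glob])

def aggregate_metapaths(per_metapath_vectors, weights):
    # Combine the semantics of several metapaths by a weighted sum;
    # MHN learns such weights (e.g. via attention), here they are fixed.
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * v for wi, v in zip(w, per_metapath_vectors))
```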
|
Universal quantifiers occur frequently in proof obligations produced by
program verifiers, for instance, to axiomatize uninterpreted functions and to
express properties of arrays. SMT-based verifiers typically reason about them
via E-matching, an SMT algorithm that requires syntactic matching patterns to
guide the quantifier instantiations. Devising good matching patterns is
challenging. In particular, overly restrictive patterns may lead to spurious
verification errors if the quantifiers needed for a proof are not instantiated;
they may also conceal unsoundness caused by inconsistent axiomatizations. In
this paper, we present the first technique that identifies and helps the users
remedy the effects of overly restrictive matching patterns. We designed a novel
algorithm to synthesize missing triggering terms required to complete a proof.
Tool developers can use this information to refine their matching patterns and
prevent similar verification errors, or to fix a detected unsoundness.
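As a hedged illustration of the underlying problem (not the paper's tool), the
z3py snippet below shows how an overly restrictive pattern can block a proof
until a triggering term is supplied; the axiom and pattern are invented for
demonstration:

```python
from z3 import Function, ForAll, Int, IntSort, Solver

f = Function('f', IntSort(), IntSort())
x = Int('x')

s = Solver()
s.set(mbqi=False)  # rely on E-matching only, as program verifiers typically do
# Axiom with a deliberately restrictive pattern: only ground terms of the
# shape f(f(.)) trigger an instantiation.
s.add(ForAll([x], f(x) >= 0, patterns=[f(f(x))]))
s.add(f(3) < 0)   # contradicts the axiom, but f(3) never matches f(f(x))
print(s.check())  # typically 'unknown': the needed instantiation is missing

s.add(f(f(3)) >= -1)  # a synthesized triggering term mentioning f(f(3))
print(s.check())      # E-matching now instantiates x := 3 and reports 'unsat'
```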
|
The popularity of 3D displays has risen drastically over the past few decades
but these displays are still merely a novelty compared to their true potential.
Development has mostly focused on Head-Mounted Displays (HMDs) for Virtual
Reality and has in general ignored non-HMD 3D displays. This is due to the
inherent difficulty of creating these displays and their impracticality for
general use, given cost, performance, and a lack of meaningful use cases. In
fairness to the hardware manufacturers, who have made striking innovations in
this field, there has been a dereliction of duty by software developers and
researchers in terms of developing software that best utilizes these displays.
This paper seeks to identify which areas of future software development could
mitigate this dereliction. To achieve this goal, the paper first examines the
current state of the art and performs a comparative analysis of different
types of 3D displays; from this analysis, a clear research gap emerges in
software development for light field displays, the current state of the art
among non-HMD 3D displays.
The paper then outlines six distinct areas where the concept of
context-awareness can allow non-HMD 3D displays, in particular light field
displays, to not only compete with but surpass their HMD-based brethren for
many specific use cases.
|
The evolution of the biosphere unfolds as a luxuriant generative process of
new living forms and functions. Organisms adapt to their environment and
exploit the novel opportunities created by this continuously blooming
dynamics. Affordances play a fundamental role in the evolution of the
biosphere, for organisms can exploit them for new morphological and behavioral
adaptations achieved by heritable variations and selection. In this way, the
opportunities offered by affordances are actualized as ever-novel adaptations.
In this paper we maintain that affordances elude a formalization that relies
on set theory: we argue that it is not possible to apply set theory to
affordances, and therefore that we cannot devise a set-based mathematical
theory of the diachronic evolution of the biosphere.
|
This paper proposes a deep unfitted Nitsche method for computing elliptic
interface problems with high contrasts in high dimensions. To capture
discontinuities of the solution caused by interfaces, we reformulate the
problem as an energy minimization involving two weakly coupled components. This
enables us to train two deep neural networks to represent the two components
of the solution in high dimensions. The curse of dimensionality is alleviated
by using the Monte-Carlo method to discretize the unfitted Nitsche energy
functional. We present several numerical examples to show the performance of the
proposed method.
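A minimal PyTorch-style sketch of the two-network energy minimization, with a
toy geometry and with the Nitsche interface terms only indicated by a comment;
every name and the sampling scheme are placeholders:

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, dim, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1))
    def forward(self, x):
        return self.net(x)

dim = 10
u_in, u_out = MLP(dim), MLP(dim)  # one network per side of the interface
opt = torch.optim.Adam(list(u_in.parameters()) + list(u_out.parameters()),
                       lr=1e-3)

def dirichlet_energy(u, x, coeff):
    x = x.requires_grad_(True)
    grad = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    return 0.5 * coeff * (grad ** 2).sum(dim=1).mean()

for step in range(1000):
    x_in = torch.rand(1024, dim) * 0.5         # Monte-Carlo points, subdomain 1
    x_out = torch.rand(1024, dim) * 0.5 + 0.5  # Monte-Carlo points, subdomain 2
    loss = (dirichlet_energy(u_in, x_in, 1.0)         # low-contrast coefficient
            + dirichlet_energy(u_out, x_out, 100.0))  # high-contrast coefficient
    # The Nitsche terms (interface jump penalties and fluxes, boundary
    # conditions, source terms) would be added to the loss here.
    opt.zero_grad(); loss.backward(); opt.step()
```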
|
In this paper, we propose a novel multi-color balance method for reducing
color distortions caused by lighting effects. The proposed method allows us to
adjust three target-colors chosen by a user in an input image so that each
target color is the same as the corresponding destination (benchmark) one. In
contrast, white balancing is a typical technique for reducing such color
distortions; however, it cannot remove lighting effects on colors other than
white. In an experiment, the proposed method is demonstrated to be able to
remove lighting effects on three selected colors, and it is compared with
existing white balance adjustments.
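Although the paper's exact formulation is not reproduced here, the core idea
of mapping three measured colors onto three destination colors can be sketched
as a 3x3 linear transform with made-up colors:

```python
import numpy as np

# Columns are RGB colors. S: the three target colors as measured in the
# input image; D: the corresponding destination (benchmark) colors.
S = np.array([[0.8, 0.2, 0.3],
              [0.1, 0.7, 0.3],
              [0.1, 0.1, 0.6]])
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

M = D @ np.linalg.inv(S)  # M maps each measured color exactly onto its target

def multi_color_balance(image, M):
    """Apply the 3x3 correction to an H x W x 3 float image."""
    return np.clip(image @ M.T, 0.0, 1.0)

image = np.random.rand(4, 4, 3)  # stand-in for a real photograph
balanced = multi_color_balance(image, M)
```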
|
Because of the ultrafast and photon-driven nature of the transport in their
active region, we demonstrate that quantum cascade lasers can be operated as
resonantly amplified terahertz detectors. Tunable responsivities up to 50 V/W
and noise equivalent powers down to 100 pW/sqrt(Hz) are demonstrated at 4.7
THz. Constant peak responsivities with respect to the detector temperature are
observed up to 80 K. Thanks to the sub-ps intersubband lifetime, electrical
bandwidths larger than 20 GHz can be obtained, allowing the detection of
optical beatnotes from quantum cascade THz frequency combs.
|
An almost self-centered graph is a connected graph of order $n$ with exactly
$n-2$ central vertices, and an almost peripheral graph is a connected graph of
order $n$ with exactly $n-1$ peripheral vertices. We determine (1) the maximum
girth of an almost self-centered graph of order $n;$ (2) the maximum
independence number of an almost self-centered graph of order $n$ and radius
$r;$ (3) the minimum order of a $k$-regular almost self-centered graph; (4)
the maximum size of an almost peripheral graph of order $n;$ (5) which numbers
are possible for the maximum degree of an almost peripheral graph of order $n;$
(6) the maximum number of vertices of maximum degree in an almost peripheral
graph of order $n$ whose maximum degree is the second largest possible.
Whenever the extremal graphs have a neat form, we also describe them.
|
We develop a formalism for constructing stochastic upper bounds on the
expected full sample risk for supervised classification tasks via the Hilbert
coresets approach within a transductive framework. We explicitly compute tight
and meaningful bounds for complex datasets and complex hypothesis classes such
as state-of-the-art deep neural network architectures. The bounds we develop
exhibit desirable properties: i) the bounds are non-uniform in the hypothesis space,
ii) in many practical examples, the bounds become effectively deterministic by
appropriate choice of prior and training data-dependent posterior distributions
on the hypothesis space, and iii) the bounds become significantly better with
increase in the size of the training set. We also lay out some ideas to explore
for future research.
|
Traditional approaches for data anonymization consider relational data and
textual data independently. We propose rx-anon, an anonymization approach for
heterogeneous semi-structured documents composed of relational and textual
attributes. We map sensitive terms extracted from the text to the structured
data. This allows us to use concepts like k-anonymity to generate a joined,
privacy-preserved version of the heterogeneous data input. We introduce the
concept of redundant sensitive information to consistently anonymize the
heterogeneous data. To control the influence of anonymization over unstructured
textual data versus structured data attributes, we introduce a modified,
parameterized Mondrian algorithm. The parameter $\lambda$ allows us to assign
different weights to the relational and textual attributes during the
anonymization process. We evaluate our approach with two real-world datasets
using a Normalized Certainty Penalty score, adapted to the problem of jointly
anonymizing relational and textual data. The results show that our approach is
capable of reducing information loss by using the tuning parameter to control
the Mondrian partitioning while guaranteeing k-anonymity for relational
attributes as well as for sensitive terms. As rx-anon is a framework approach,
it can be reused and extended by other anonymization algorithms, privacy
models, and textual similarity metrics.
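A hedged sketch of how a parameter $\lambda$ might weight relational against
textual attributes when a Mondrian-style partitioner picks its next split; the
scoring below (value spans versus sensitive-term counts) is a simplification,
not rx-anon's actual statistic:

```python
def choose_split_attribute(partition, relational, textual, lam):
    """Pick the 'widest' attribute, weighting relational value spans by lam
    and textual sensitive-term counts by (1 - lam); lam in [0, 1].
    Relational values are assumed numeric; textual values are sets of
    sensitive terms extracted from the text."""
    scores = {}
    for attr in relational:
        values = [record[attr] for record in partition]
        scores[attr] = lam * (max(values) - min(values))
    for attr in textual:
        terms = set().union(*(record[attr] for record in partition))
        scores[attr] = (1.0 - lam) * len(terms)
    return max(scores, key=scores.get)

partition = [{"age": 34, "note": {"Berlin", "diabetes"}},
             {"age": 61, "note": {"Hamburg"}}]
print(choose_split_attribute(partition, ["age"], ["note"], lam=0.5))
```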
|
There are several challenges in creating an electronic archery scoring system
using computer vision techniques. Variability of light, reconstruction of the
target from several images, variability of target configuration, and filtering
noise were significant challenges during the creation of this scoring system.
This paper discusses the approach used to determine where an arrow hits a
target, for any possible single target or set of targets, and provides an
algorithm that balances the difficulty of robust arrow detection while
retaining the required accuracy.
|
Facial expressions are the most common universal forms of body language. In
the past few years, automatic facial expression recognition (FER) has been an
active field of research. However, it is still a challenging task due to
different uncertainties and complications. Nevertheless, efficiency and
performance remain essential aspects of building robust systems. We propose
two models: EmoXNet, an ensemble learning technique for learning convoluted
facial representations, and EmoXNetLite, a distillation technique for
transferring the knowledge from our ensemble model to an efficient deep neural
network using label-smoothed soft labels, enabling effective real-time
expression detection. Both techniques performed well: the ensemble model
(EmoXNet) achieved 85.07% test accuracy on FER2013 with FER+ annotations and
86.25% test accuracy on RAF-DB, while the distilled model (EmoXNetLite)
achieved 82.07% test accuracy on FER2013 with FER+ annotations and 81.78% test
accuracy on RAF-DB. The results show that our models generalize well to new
data and learn to focus on the facial representations relevant to expression
recognition.
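The distillation step can be illustrated with a standard soft-label knowledge
distillation loss in PyTorch; this is a generic sketch rather than EmoXNet's
exact recipe, and the temperature and mixing weight are placeholders:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7, smoothing=0.1):
    # Soft targets from the (ensemble) teacher, softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        soft_targets, reduction="batchmean") * temperature ** 2
    # Hard-label term with label smoothing.
    hard_loss = F.cross_entropy(student_logits, labels,
                                label_smoothing=smoothing)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

student = torch.randn(8, 7, requires_grad=True)  # 7 expression classes
teacher = torch.randn(8, 7)
labels = torch.randint(0, 7, (8,))
print(distillation_loss(student, teacher, labels))
```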
|
We study the propagation of wavepackets along weakly curved interfaces
between topologically distinct media. Our Hamiltonian is an adiabatic
modulation of Dirac operators omnipresent in the topological insulators
literature. Using explicit formulas for straight edges, we construct a family
of solutions that propagates, for long times, unidirectionally and
dispersion-free along the curved edge. We illustrate our results through
various numerical simulations.
|
We study robust double auction mechanisms, that is, double auction
mechanisms that satisfy dominant strategy incentive compatibility, ex-post
individual rationality, ex-post budget balance and feasibility. We first
establish that the price in any deterministic robust mechanism does not depend
on the valuations of the trading players. We next establish that, with the
non-bossiness assumption, the price in any deterministic robust mechanism does
not depend on players' valuations at all, whether trading or non-trading, i.e.,
the price is posted in advance. Our main result is a characterization result
that, with the non-bossiness assumption along with other assumptions on the
properties of the mechanism, the posted price mechanism with an exogenous
rationing rule is the only deterministic robust double auction mechanism. We
also show that, even without the non-bossiness assumption, it is quite
difficult to find a reasonable robust double auction mechanism other than the
posted price mechanism with rationing.
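To make the characterization concrete, a posted price mechanism with an
exogenous price can be sketched in a few lines; this is an illustration of the
mechanism class, with the rationing rule omitted:

```python
def posted_price_trade(buyer_value, seller_cost, price):
    """Posted price mechanism: trade occurs iff both sides accept the
    exogenously fixed price. Reporting true values is a dominant strategy,
    and the mechanism is ex-post individually rational and budget balanced."""
    trade = buyer_value >= price >= seller_cost
    payment = price if trade else 0.0
    return {"trade": trade, "buyer_pays": payment, "seller_gets": payment}

print(posted_price_trade(buyer_value=7.0, seller_cost=4.0, price=5.0))
# {'trade': True, 'buyer_pays': 5.0, 'seller_gets': 5.0}
```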
|
The risk for severe illness and mortality from COVID-19 significantly
increases with age. As a result, age-stratified modeling for COVID-19 dynamics
is key to studying how to reduce hospitalizations and mortality from COVID-19.
By taking advantage of network theory, we develop an age-stratified epidemic
model for COVID-19 in complex contact networks. Specifically, we present an
extension of the standard SEIR (susceptible-exposed-infectious-removed)
compartmental model, called the age-stratified SEAHIR
(susceptible-exposed-asymptomatic-hospitalized-infectious-removed) model, to
capture the spread of COVID-19 over multitype random networks with general
degree distributions. We derive several key epidemiological metrics and then
propose an age-stratified vaccination strategy to decrease mortality and
hospitalizations. Through extensive study, we discover that the outcome of
vaccination prioritization depends on the reproduction number $R_0$.
Specifically, the elderly should be prioritized only when $R_0$ is relatively
high. If ongoing intervention policies, such as universal masking, could
suppress $R_0$ at a
relatively low level, prioritizing the high-transmission age group (i.e.,
adults aged 20-39) is most effective to reduce both mortality and
hospitalizations. These conclusions provide useful recommendations for
age-based vaccination prioritization for COVID-19.
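As a simplified, well-mixed (non-network) illustration of age-stratified
dynamics, the sketch below integrates a two-group SEIR-type model with a
contact matrix; the compartments and parameters are placeholder assumptions,
not the paper's SEAHIR network model:

```python
import numpy as np
from scipy.integrate import solve_ivp

C = np.array([[8.0, 3.0],   # average contacts: young -> (young, old)
              [3.0, 2.0]])  # average contacts: old   -> (young, old)
beta, sigma, gamma = 0.03, 1 / 5.2, 1 / 7.0
N = np.array([0.7, 0.3])    # population fraction in each age group

def rhs(t, y):
    S, E, I, R = y.reshape(4, 2)
    foi = beta * (C @ (I / N))  # force of infection per age group
    return np.concatenate([-S * foi,
                           S * foi - sigma * E,
                           sigma * E - gamma * I,
                           gamma * I])

y0 = np.concatenate([N - 1e-4, [0, 0], [1e-4, 1e-4], [0, 0]])
sol = solve_ivp(rhs, (0, 300), y0, max_step=1.0)
print("final attack rate per group:", sol.y[6:8, -1] / N)
```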
|
Large samples of experimentally produced graphene are polycrystalline. For
the study of this material, it helps to have realistic computer samples that
are also polycrystalline. A common approach to produce such samples in computer
simulations is based on the method of Wooten, Winer, and Weaire, originally
introduced for the simulation of amorphous silicon. We introduce an early
rejection variation of their method, applied to graphene, which exploits the
local nature of the structural changes to achieve a significant speed-up in the
relaxation of the material, without compromising the dynamics. We test it on a
3,200-atom sample, obtaining a speed-up of between one and two orders of
magnitude. We also introduce a further variation called early decision
specifically for relaxing large samples even faster and we test it on two
samples of 10,024 and 20,000 atoms, obtaining a further speed-up of an order of
magnitude. Furthermore, we provide a graphical manipulation tool to remove
unwanted artifacts in a sample, such as bond crossings.
|
Popular blockchains such as Ethereum and several others execute complex
transactions in blocks through user-defined scripts known as smart contracts.
Serial execution of smart contract transactions/atomic-units (AUs) fails to
harness the multiprocessing power offered by the prevalence of multi-core
processors. By adding concurrency to the execution of AUs, we can achieve
better efficiency and higher throughput.
In this paper, we develop a concurrent miner that proposes a block by
executing the AUs concurrently using optimistic Software Transactional Memory
systems (STMs). It captures the independent AUs in a concurrent bin and
dependent AUs in the block graph (BG) efficiently. Later, we propose a
concurrent validator that re-executes the same AUs concurrently and
deterministically using a concurrent bin followed by a BG given by the miner to
verify the proposed block. We rigorously prove the correctness of concurrent
execution of AUs and achieve significant performance gain over the
state-of-the-art.
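A hedged sketch of how a miner might separate independent AUs from dependent
ones using read/write-set conflicts; the real system discovers these sets
optimistically via STM, whereas here they are given explicitly:

```python
def build_block_graph(aus):
    """aus: list of (au_id, read_set, write_set) tuples in block order."""
    edges = []
    for i, (a, ra, wa) in enumerate(aus):
        for b, rb, wb in aus[i + 1:]:
            # Two AUs conflict if one writes what the other reads or writes.
            if wa & (rb | wb) or wb & ra:
                edges.append((a, b))  # a must execute before b
    dependent = {v for e in edges for v in e}
    concurrent_bin = [a for a, _, _ in aus if a not in dependent]
    return concurrent_bin, edges

# Example: AU 1 and AU 2 both touch key 'x'; AU 3 is independent.
aus = [(1, {"x"}, {"x"}), (2, {"x"}, {"y"}), (3, {"z"}, {"z"})]
print(build_block_graph(aus))  # -> ([3], [(1, 2)])
```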
|
In this work we study a system of two galaxies, Astarte and Adonis, at
$z \sim 2$, when the Universe was undergoing its peak of star formation activity.
Astarte is a dusty star-forming galaxy at the massive-end of the main sequence
(MS), and Adonis is a less-massive companion galaxy, bright in the ultraviolet
(UV), with an optical spectroscopic redshift. We analyse the physical properties of
this system, and probe the gas mass of Astarte with its ALMA CO emission, to
investigate whether this ultra-massive galaxy is quenching or not. We use
CIGALE - a spectral energy distribution modeling code - to derive the key
physical properties of Astarte and Adonis, mainly their star formation rates
(SFRs), stellar masses, and dust luminosities. We inspect the variation of the
physical parameters depending on the assumed dust attenuation law. We also
estimate the molecular gas mass of Astarte from its CO emission, using
different $\alpha_{CO}$ and transition ratios ($r_{31}$) and discuss the
implication of the various assumptions on the gas mass derivation. We find that
Astarte exhibits a MS-like star formation activity, while Adonis is undergoing
a strong starburst (SB) phase. The molecular gas content of Astarte implies a
gas fraction far below that of typical star-forming galaxies at $z=2$. This
low gas content, combined with the high SFR, results in a depletion time of
$0.22\pm0.07$ Gyr, slightly
shorter than what is expected for a MS galaxy at this redshift. The CO
luminosity versus the total IR luminosity suggests a MS-like activity assuming
a galactic conversion factor and a low transition ratio. The SFR of Astarte is
of the same order using different attenuation laws, unlike its stellar mass
that increases using shallow attenuation laws. We discuss these properties and
suggest that Astarte might be experiencing a recent decrease of star formation
activity and is quenching through the MS following a SB epoch.
|
Quantum coherences, correlations and collective effects can be harnessed to
the advantage of quantum batteries. Here, we introduce a feasible structure
engineering scheme that is applicable to spin-based open quantum batteries. Our
scheme, which builds solely upon a modulation of spin energy gaps, allows
engineered quantum batteries to exploit spin-spin correlations for mitigating
environment-induced aging. As a result of this advantage, an engineered quantum
battery can preserve relatively more energy as compared with its non-engineered
counterpart over the course of the storage phase. Particularly, the excess in
stored energy is independent of system size. This implies a scale-invariant
passive protection strategy, which we demonstrate on an engineered quantum
battery with staggered spin energy gaps. Our findings establish structure
engineering as a useful route for advancing quantum batteries, and bring new
perspectives on efficient quantum battery designs.
|
Predicting the binding of viral peptides to the major histocompatibility
complex with machine learning can potentially extend the computational
immunology toolkit for vaccine development, and serve as a key component in the
fight against a pandemic. In this work, we adapt and extend USMPep, a recently
proposed, conceptually simple prediction algorithm based on recurrent neural
networks. Most notably, we combine regressors (binding affinity data) and
classifiers (mass spectrometry data) from qualitatively different data sources
to obtain a more comprehensive prediction tool. We evaluate the performance on
a recently released SARS-CoV-2 dataset with binding stability measurements.
USMPep not only sets new benchmarks on selected single alleles, but
consistently turns out to be among the best-performing methods or, for some
metrics, to be even the overall best-performing method for this task.
|
Shift scheduling impacts healthcare workers' well-being because it sets the
frame for their social life and recreational activities. Since it is complex
and time-consuming, it has become a target for automation. However, existing
systems mostly focus on improving efficiency. The workers' needs and their
active participation do not play a pronounced role. Contrasting this trend, we
designed a social practice-based, worker-centered, and well-being-oriented
self-scheduling system which gives healthcare workers more control during shift
planning. In a subsequent nine-month appropriation study, we found that workers
who were cautious about their social standing in the group or who had a more
spontaneous personal lifestyle used our system less often than others.
Moreover, we revealed several conflict prevention practices and suggest
shifting the focus away from a competitive shift distribution paradigm towards
supporting these pro-social practices. We conclude with guidelines to support
individual planning practices, self-leadership, and for dealing with conflicts.
|
The cost of a partitioned fluid-structure interaction scheme is typically
assessed by the number of coupling iterations required per time step, while
ignoring the Newton loops within the nonlinear sub-solvers. In this work, we
discuss why these single-field iterations deserve more attention when
evaluating the coupling's efficiency and how to find the optimal number of
Newton steps per coupling iteration.
|
High-dimensional expanders generalize the notion of expander graphs to
higher-dimensional simplicial complexes. In contrast to expander graphs, only a
handful of high-dimensional expander constructions have been proposed, and no
elementary combinatorial construction with near-optimal expansion is known. In
this paper, we introduce an improved combinatorial high-dimensional expander
construction, by modifying a previous construction of Liu, Mohanty, and Yang
(ITCS 2020), which is based on a high-dimensional variant of a tensor product.
Our construction achieves a spectral gap of $\Omega(\frac{1}{k^2})$ for random
walks on the $k$-dimensional faces, which is only quadratically worse than the
optimal bound of $\Theta(\frac{1}{k})$. Previous combinatorial constructions,
including that of Liu, Mohanty, and Yang, only achieved a spectral gap that is
exponentially small in $k$. We also present reasoning that suggests our
construction is optimal among similar product-based constructions.
|
We introduce a novel multi-resolution Localized Orthogonal Decomposition
(LOD) for time-harmonic acoustic scattering problems that can be modeled by the
Helmholtz equation. The method merges the concepts of LOD and operator-adapted
wavelets (gamblets) and proves its applicability for a class of complex-valued,
non-hermitian and indefinite problems. It computes hierarchical bases that
block-diagonalize the Helmholtz operator and thereby decouples the
discretization scales. Sparsity is preserved by a novel localization strategy
that improves stability properties even in the elliptic case. We present a
rigorous stability and a priori error analysis of the proposed method for
homogeneous media. In addition, we investigate the fast solvability of the
blocks by a standard iterative method. A sequence of numerical experiments
illustrates the sharpness of the theoretical findings and demonstrates the
applicability to scattering problems in heterogeneous media.
|
In this paper, we introduce the concept of (higher order) Appell-Carlitz
numbers, which unifies the definitions of several special numbers in positive
characteristic, such as the Bernoulli-Carlitz numbers and the Cauchy-Carlitz
numbers. Their generating function is usually called a Hurwitz series in
function field arithmetic. By using Hasse-Teichm\"uller derivatives, we also
obtain several properties of the (higher order) Appell-Carlitz numbers,
including a recurrence formula, two closed-form expressions, and a determinant
expression.
The recurrence formula implies Carlitz's recurrence formula for
Bernoulli-Carlitz numbers. The two closed-form expressions imply the
corresponding results for Bernoulli-Carlitz and Cauchy-Carlitz numbers. The
determinant expression implies the corresponding results for Bernoulli-Carlitz
and Cauchy-Carlitz numbers, which are analogues of the classical determinant
expressions of Bernoulli and Cauchy numbers stated in an article by Glaisher in
1875.
|
We study the phase controlled transmission properties in a compound system
consisting of a 3D copper cavity and an yttrium iron garnet (YIG) sphere. By
tuning the relative phase of the magnon pumping and cavity probe tones,
constructive and destructive interferences occur periodically, which strongly
modify both the cavity field transmission spectra and the group delay of light.
Moreover, the tunable amplitude ratio between pump-probe tones allows us to
further improve the signal absorption or amplification, accompanied by either
significantly enhanced optical advance or delay. Both the phase and
amplitude ratio can be used to realize in-situ tunable and switchable fast-slow
light. The tunable phase and amplitude ratio lead to zero reflection of the
transmitted light and an abrupt fast-slow light transition. Our results confirm
that direct magnon pumping through the coupling loops provides a versatile
route to achieve controllable signal transmission, storage, and communication,
which can be further expanded to the quantum regime, realizing coherent-state
processing or quantum-limited precise measurements.
|
After showing the efficiency of feedforward networks for estimating controls
in high dimension in the global optimization of some storage problems, we
develop a modification of an algorithm based on a dynamic programming
principle. We show that classical feedforward networks are not effective for
estimating Bellman values for reservoir problems, and we propose neural
networks that give far better results. Finally, we develop a new algorithm
mixing LP resolution and conditional cuts calculated by neural networks to
solve some stochastic linear problems.
|
In modern networks, the use of drones as mobile base stations (MBSs) has been
discussed for coverage flexibility. However, the realization of drone-based
networks raises several issues. One of the critical issues is that drones are
extremely power-hungry. To overcome this, we characterize a new type of drone,
the so-called charging drone, which can deliver energy to MBS drones.
Motivated by the fact that the charging drones also need to be charged, we
deploy ground-mounted charging towers for delivering energy to the charging
drones. We introduce a new energy-efficiency maximization problem, which is
partitioned into two independently separable tasks. More specifically, as our
first optimization task, two-stage charging matching is proposed due to the
inherent nature of our network model, where the first matching aims to schedule
between charging towers and charging drones while the second matching solves
the scheduling between charging drones and MBS drones. We analyze how to
convert the formulation containing non-convex terms to another one only with
convex terms. As our second optimization task, each MBS drone conducts
energy-aware time-average transmit power allocation minimization subject to
stability via Lyapunov optimization. Our solutions enable the MBS drones to
extend their lifetimes; in turn, network coverage-time can be extended.
|
Scientific research changed profoundly over the last 30 years, in all its
aspects. Scientific publishing has changed as well, mainly because of the
strongly increased number of submitted papers and because of the appearance of
Open Access journals and publishers. We propose some reflections on these
issues.
|
In marginally jammed solids confined by walls, we calculate the particle and
ensemble averaged value of an order parameter, $\left<\Psi(r)\right>$, as a
function of the distance to the wall, $r$. Being a microscopic indicator of
structural disorder and particle mobility in solids, $\Psi$ is by definition
the response of the mean square particle displacement to the increase of
temperature in the harmonic approximation and can be directly calculated from
the normal modes of vibration of the zero-temperature solids. We find that, in
confined jammed solids, $\left<\Psi(r)\right>$ curves at different pressures
can collapse onto the same master curve following a scaling function,
indicating the criticality of the jamming transition. The scaling collapse
suggests a diverging length scale and marginal instability at the jamming
transition, which should be accessible to carefully designed experiments.
Moreover, $\left<\Psi(r)\right>$ is found to be significantly suppressed when
approaching the wall and anisotropic in directions perpendicular and parallel
to the wall. This finding can be applied to understand the $r$-dependence and
anisotropy of the structural relaxation in confined supercooled liquids,
providing another example of understanding or predicting behaviors of
supercooled liquids from the perspective of the zero-temperature amorphous
solids.
|
In this paper we present a two-step neural network model to separate
detections of solar system objects from optical and electronic artifacts in
data obtained with the "Asteroid Terrestrial-impact Last Alert System" (ATLAS),
a near-Earth asteroid sky survey system [arXiv:1802.00879]. A convolutional
neural network [arXiv:1807.10912] is used to classify small "postage-stamp"
images of candidate detections of astronomical sources into eight classes,
followed by a multi-layered perceptron that provides a probability that a
temporal sequence of four candidate detections represents a real astronomical
source. The goal of this work is to reduce the time delay between Near-Earth
Object (NEO) detections and submission to the Minor Planet Center. Due to the
rare and hazardous nature of NEOs [Harris and D'Abramo, 2015], a low false
negative rate is a priority for the model. We show that the model reaches
99.6\% accuracy on real asteroids in ATLAS data with a 0.4\% false negative
rate. Deployment of this model on ATLAS has reduced the number of NEO
candidates that astronomers must screen by 90\%, thereby bringing ATLAS one step
closer to full autonomy.
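A schematic of the two-step design described above, assuming placeholder layer
sizes and stamp dimensions (the actual ATLAS networks are specified in the
paper):

```python
import torch
import torch.nn as nn

class StampCNN(nn.Module):
    """Classifies a single 'postage-stamp' detection image into 8 classes."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))
    def forward(self, x):
        return self.head(self.features(x))

class SequenceMLP(nn.Module):
    """Scores a temporal sequence of 4 detections as real vs artifact."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4 * n_classes, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())
    def forward(self, class_probs):           # (batch, 4, n_classes)
        return self.net(class_probs.flatten(1))

cnn, mlp = StampCNN(), SequenceMLP()
stamps = torch.randn(4, 1, 30, 30)            # four detections of one source
probs = torch.softmax(cnn(stamps), dim=1)     # per-stamp class probabilities
p_real = mlp(probs.unsqueeze(0))              # probability the source is real
```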
|
We study wireless networks where signal propagation delays are multiples of a
time interval. Such a network can be modelled as a weighted hypergraph. The
link scheduling problem of such a wireless network is closely related to the
independent sets of the periodic hypergraph induced by the weighted hypergraph.
As the periodic hypergraph has infinitely many vertices, existing characterizations
of graph independent sets cannot be applied to study link scheduling
efficiently. To characterize the rate region of link scheduling, a directed
graph of finite size called scheduling graph is derived to capture a certain
conditional independence property of link scheduling over time. A
collision-free schedule is equivalent to a path in the scheduling graph, and
hence the rate region is equivalent to the convex hull of the rate vectors
associated with the cycles of the scheduling graph. With the maximum
independent set problem as a special case, calculating the whole rate region is
NP hard and also hard to approximate. We derive two algorithms that benefit
from a partial order on the paths in the scheduling graph, and can potentially
find schedules that are not dominated by the existing cycle enumerating
algorithms running in a given time. The first algorithm calculates the rate
region incrementally in the cycle lengths so that a subset of the rate region
corresponding to short cycles can be obtained efficiently. The second algorithm
enumerates cycles associated with a maximal subgraph of the scheduling graph.
In addition to scheduling a wireless network, the independent sets of periodic
hypergraphs also find applications in some operational research problems.
|
Cross-document event coreference resolution is a foundational task for NLP
applications involving multi-text processing. However, existing corpora for
this task are scarce and relatively small, while annotating only modest-size
clusters of documents belonging to the same topic. To complement these
resources and enhance future research, we present Wikipedia Event Coreference
(WEC), an efficient methodology for gathering a large-scale dataset for
cross-document event coreference from Wikipedia, where coreference links are
not restricted within predefined topics. We apply this methodology to the
English Wikipedia and extract our large-scale WEC-Eng dataset. Notably, our
dataset creation method is generic and can be applied with relatively little
effort to other Wikipedia languages. To set baseline results, we develop an
algorithm that adapts components of state-of-the-art models for within-document
coreference resolution to the cross-document setting. Our model is suitably
efficient and outperforms previously published state-of-the-art results for the
task.
|
Few-layered transition metal dichalcogenides (TMDs) are increasingly popular
materials for optoelectronics and catalysis. Amongst the various types of TMDs
available today, rhenium-chalcogenides (ReX2) stand out due to their remarkable
electronic structure, such as the occurrence of anisotropic excitons and
potential direct bandgap behavior throughout multi-layered stacks. In this
letter, we have analyzed the nature and dynamics of charge carriers in highly
crystalline liquid-phase exfoliated ReS2, using a unique combination of optical
pump-THz probe and broadband transient absorption spectroscopy. Two distinct
time regimes are identified, both of which are dominated by unbound charge
carriers despite the high exciton binding energy. In the first time regime, the
unbound charge carriers cause an increase and a broadening of the exciton
absorption band. In the second time regime, a peculiar narrowing of the
excitonic absorption profile is observed, which we assign to the presence of
built-in fields and/or charged defects. Our results pave the way to analyze
spectrally complex transient absorption measurements on layered TMD materials
and indicate the potential for ReS2 to produce mobile free charge carriers, a
feat relevant for photovoltaic applications.
|
We propose a new framework, inspired by random matrix theory, for analyzing
the dynamics of stochastic gradient descent (SGD) when both number of samples
and dimensions are large. This framework applies to any fixed stepsize and the
finite sum setting. Using this new framework, we show that the dynamics of SGD
on a least squares problem with random data become deterministic in the large
sample and dimensional limit. Furthermore, the limiting dynamics are governed
by a Volterra integral equation. This model predicts that SGD undergoes a phase
transition at an explicitly given critical stepsize that ultimately affects its
convergence rate, which we also verify experimentally. Finally, when input data
is isotropic, we provide explicit expressions for the dynamics and average-case
convergence rates (i.e., the complexity of an algorithm averaged over all
possible inputs). These rates show significant improvement over the worst-case
complexities.
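A small experiment in the spirit of this analysis, assuming nothing about the
paper's exact model: fixed-stepsize SGD on least squares with random data,
swept over stepsizes to expose the stepsize-dependent change in convergence
behavior (runs past the critical stepsize may visibly blow up):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 800, 400
A = rng.standard_normal((n, d)) / np.sqrt(d)
b = A @ rng.standard_normal(d)

for step in [0.1, 0.5, 1.0, 2.0]:   # sweep the fixed stepsize
    x = np.zeros(d)
    for _ in range(50_000):         # single-sample SGD updates
        i = rng.integers(n)
        x -= step * (A[i] @ x - b[i]) * A[i]
    print(f"stepsize {step}: mean residual {np.mean((A @ x - b) ** 2):.3e}")
```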
|
This paper describes our method for tuning a transformer-based pretrained
model and adapting it to the Reliable Intelligence Identification on
Vietnamese SNSs problem. We also propose a model that combines BERT-base
pretrained models with metadata features, such as the number of comments,
number of likes, images of SNS documents, etc., to improve results for the
VLSP shared task: Reliable Intelligence Identification on Vietnamese SNSs.
With appropriate training techniques, our model achieves 0.9392 ROC-AUC on the
public test set, and the final version settles at the top-2 ROC-AUC (0.9513)
on the private test set.
|
This paper presents the first model-free, simulator-free reinforcement
learning algorithm for Constrained Markov Decision Processes (CMDPs) with
sublinear regret and zero constraint violation. The algorithm is named Triple-Q
because it includes three key components: a Q-function (also called
action-value function) for the cumulative reward, a Q-function for the
cumulative utility for the constraint, and a virtual-Queue that
(over)-estimates the cumulative constraint violation. Under Triple-Q, at each
step, an action is chosen based on the pseudo-Q-value that is a combination of
the three "Q" values. The algorithm updates the reward and utility Q-values
with learning rates that depend on the visit counts to the corresponding
(state, action) pairs and are periodically reset. In the episodic CMDP setting,
Triple-Q achieves $\tilde{\cal O}\left(\frac{1 }{\delta}H^4
S^{\frac{1}{2}}A^{\frac{1}{2}}K^{\frac{4}{5}} \right)$ regret, where $K$ is the
total number of episodes, $H$ is the number of steps in each episode, $S$ is
the number of states, $A$ is the number of actions, and $\delta$ is Slater's
constant. Furthermore, Triple-Q guarantees zero constraint violation, both on
expectation and with a high probability, when $K$ is sufficiently large.
Finally, the computational complexity of Triple-Q is similar to SARSA for
unconstrained MDPs and is computationally efficient.
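One plausible reading of the three components, written as a tabular sketch;
the exact pseudo-Q combination, learning-rate schedule, and virtual-queue
update are specified in the paper, so every formula below should be treated as
an assumption:

```python
import numpy as np

S, A = 10, 4
Qr = np.ones((S, A))   # optimistic Q-function for the cumulative reward
Qc = np.ones((S, A))   # optimistic Q-function for the cumulative utility
Z = 0.0                # virtual queue (over-)estimating constraint violation
eta, rho = 1.0, 0.5    # assumed tradeoff weight and constraint threshold
visits = np.zeros((S, A), dtype=int)

def act(s):
    # Pseudo-Q-value: reward value plus queue-weighted utility value
    # (an assumed form of the combination of the three "Q" values).
    return int(np.argmax(Qr[s] + (Z / eta) * Qc[s]))

def update(s, a, r, u, s_next):
    global Z
    visits[s, a] += 1
    alpha = 1.0 / visits[s, a]  # visit-count-dependent rate; the paper also
                                # periodically resets these learning rates
    Qr[s, a] += alpha * (r + Qr[s_next].max() - Qr[s, a])
    Qc[s, a] += alpha * (u + Qc[s_next].max() - Qc[s, a])
    Z = max(Z + rho - u, 0.0)   # queue grows when utility falls short of rho
```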
|
During the image acquisition process, noise is usually added to the data,
mainly due to physical limitations of the acquisition sensor, but also due to
imprecisions during data transmission and manipulation. The resulting image
therefore needs to be processed to attenuate its noise without losing details.
Non-learning-based strategies such as filtering and noise prior modeling have
been adopted to solve the image denoising problem. Nowadays, learning-based
denoising techniques have proven to be much more effective
and flexible approaches, such as Residual Convolutional Neural Networks. Here,
we propose a new learning-based non-blind denoising technique named Attention
Residual Convolutional Neural Network (ARCNN), and its extension to blind
denoising named Flexible Attention Residual Convolutional Neural Network
(FARCNN). The proposed methods try to learn the underlying noise expectation
using an Attention-Residual mechanism. Experiments on public datasets corrupted
by different levels of Gaussian and Poisson noise support the effectiveness of
the proposed approaches against some state-of-the-art image denoising methods.
ARCNN achieved overall average PSNR gains of around 0.44 dB and 0.96 dB for
Gaussian and Poisson denoising, respectively. FARCNN presented very consistent
results, even with slightly worse performance compared to ARCNN.
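The paper's exact architecture is not reproduced here; the block below is a
generic attention-residual unit (squeeze-and-excitation-style channel
attention inside a residual convolutional block) of the kind the description
suggests:

```python
import torch
import torch.nn as nn

class AttentionResidualBlock(nn.Module):
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        # Channel attention: squeeze (global pool) then excite (gating).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        out = self.body(x)
        return x + out * self.attn(out)  # residual + attention-weighted path

x = torch.randn(1, 64, 32, 32)
print(AttentionResidualBlock()(x).shape)  # torch.Size([1, 64, 32, 32])
```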
|
The COVID-19 pandemic has proved to be one of the most disruptive public
health emergencies in recent memory. Among non-pharmaceutical interventions,
social distancing and lockdown measures are some of the most common tools
employed by governments around the world to combat the disease. While
mathematical models of COVID-19 are ubiquitous, few have leveraged network
theory in a general way to explain the mechanics of social distancing. In this
paper, we build on existing network models for heterogeneous, clustered
networks with random link activation/deletion dynamics to put forth realistic
mechanisms of social distancing using piecewise constant activation/deletion
rates. We find our models are capable of rich qualitative behavior, and offer
meaningful insight with relatively few intervention parameters. In particular,
we find that the severity of social distancing interventions and when they
begin have more impact than how long it takes for the interventions to take
full effect.
|
Kubelka-Munk (K-M) theory has been successfully used to estimate pigment
concentrations in the pigment mixtures of modern paintings in spectral imagery.
In this study the single-constant K-M theory has been utilized for the
classification of green pigments in the Selden Map of China, a navigational map
of the South China Sea likely created in the early seventeenth century.
Hyperspectral data of the map was collected at the Bodleian Library, University
of Oxford, and can be used to estimate the pigment diversity, and spatial
distribution, within the map. This work seeks to assess the utility of
analyzing the data in the K/S space from Kubelka-Munk theory, as opposed to the
traditional reflectance domain. We estimate the dimensionality of the data and
extract endmembers in the reflectance domain. Then we perform linear unmixing
to estimate abundances in the K/S space and, following Bai et al. (2017), we
perform a classification in the abundance space. Finally, due to the lack of
ground truth labels, the classification accuracy was estimated by computing the
mean spectrum of each class as the representative signature of that class, and
calculating the root mean squared error with all the pixels in that class to
create a spatial representation of the error. This highlights both the
magnitude of, and any spatial pattern in, the errors, indicating if a
particular pigment is not well modeled in this approach.
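The single-constant Kubelka-Munk transform itself is standard: a reflectance
$R$ maps to $K/S = (1-R)^2/(2R)$, a domain in which pigment mixtures combine
approximately linearly. A short sketch of the transform followed by
non-negative unmixing, with a stand-in endmember library:

```python
import numpy as np
from scipy.optimize import nnls

def to_ks(R, eps=1e-6):
    """Single-constant Kubelka-Munk transform, applied per band."""
    R = np.clip(R, eps, 1.0 - eps)
    return (1.0 - R) ** 2 / (2.0 * R)

# E: (bands x endmembers) K/S spectra of candidate pigments (assumed known);
# pixel: (bands,) measured reflectance spectrum.
bands, n_end = 31, 4
E = np.abs(np.random.rand(bands, n_end))      # stand-in endmember library
pixel = np.random.rand(bands)

abundances, residual = nnls(E, to_ks(pixel))  # linear unmixing in K/S space
print(abundances)
```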
|
Objective: The automatic discrimination between the coughing sounds produced
by patients with tuberculosis (TB) and those produced by patients with other
lung ailments.
Approach: We present experiments based on a dataset of 1358 forced cough
recordings obtained in a developing-world clinic from 16 patients with
confirmed active pulmonary TB and 35 patients suffering from respiratory
conditions suggestive of TB but confirmed to be TB negative. Using nested
cross-validation, we have trained and evaluated five machine learning
classifiers: logistic regression (LR), support vector machines (SVM), k-nearest
neighbour (KNN), multilayer perceptrons (MLP) and convolutional neural networks
(CNN).
Main Results: Although classification is possible in all cases, the best
performance is achieved using LR. In combination with feature selection by
sequential forward selection (SFS), our best LR system achieves an area under
the ROC curve (AUC) of 0.94 using 23 features selected from a set of 78
high-resolution mel-frequency cepstral coefficients (MFCCs). This system
achieves a sensitivity of 93\% at a specificity of 95\% and thus exceeds the
90\% sensitivity at 70\% specificity specification considered by the World
Health Organisation (WHO) as a minimal requirement for a community-based TB
triage test.
Significance: The automatic classification of cough audio sounds, when
applied to symptomatic patients requiring investigation for TB, can meet the
WHO triage specifications for the identification of patients who should undergo
expensive molecular downstream testing. This makes it a promising and viable
means of low cost, easily deployable frontline screening for TB, which can
benefit especially developing countries with a heavy TB burden.
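A hedged outline of the best-performing configuration (logistic regression
with sequential forward selection over MFCC-derived features), using
scikit-learn stand-ins and synthetic data rather than the authors' exact
pipeline:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row of 78 high-resolution MFCC-derived features per cough
# recording; y: 1 for TB-positive, 0 for TB-negative (stand-in data).
rng = np.random.default_rng(0)
X, y = rng.standard_normal((1358, 78)), rng.integers(0, 2, 1358)

clf = make_pipeline(
    StandardScaler(),
    SequentialFeatureSelector(
        LogisticRegression(max_iter=1000),
        n_features_to_select=23, direction="forward"),
    LogisticRegression(max_iter=1000))

print(cross_val_score(clf, X, y, scoring="roc_auc", cv=5).mean())
```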
|