We construct regular, asymptotically flat black holes of degenerate
higher-order scalar-tensor (DHOST) theories, obtained via a generalized
Kerr-Schild solution-generating method. The solutions depend on a mass
integration constant, admit a smooth core of chosen regularity, and generically
have an inner and outer event horizon. In particular, below a certain mass
threshold, we find massive, horizonless, particle-like solutions. We scan
through possible observational signatures ranging from weak to strong gravity
and study the thermodynamics of our regular solutions, comparing them, when
possible, to General Relativity black holes and their thermodynamic laws.
|
We calculate the superconformal indices of a class of six-dimensional ${\cal
N}=(1,0)$ superconformal field theories realized on M5-branes at
a $\mathbb{C}^2/\mathbb{Z}_k$ singularity by using the method developed in
previous works of the authors and collaborators. We use the AdS/CFT
correspondence, and finite $N$ corrections are included as the contribution of
M2-branes wrapped on two-cycles in $S^4/\mathbb{Z}_k$. We confirm that the
indices are consistent with the expected flavor symmetries.
|
There is a growing desire to create computer systems that can communicate
effectively to collaborate with humans on complex, open-ended activities.
Assessing these systems presents significant challenges. We describe a
framework for evaluating systems engaged in open-ended complex scenarios where
evaluators do not have the luxury of comparing performance to a single right
answer. This framework has been used to evaluate human-machine creative
collaborations across story and music generation, interactive block building,
and exploration of molecular mechanisms in cancer. These activities are
fundamentally different from the more constrained tasks performed by most
contemporary personal assistants as they are generally open-ended, with no
single correct solution, and often no obvious completion criteria.
We identified the Key Properties that must be exhibited by successful
systems. From there we identified "Hallmarks" of success -- capabilities and
features that evaluators can observe that would be indicative of progress
toward achieving a Key Property. In addition to being a framework for
assessment, the Key Properties and Hallmarks are intended to serve as goals in
guiding research direction.
|
We consider the problem of efficient ultra-massive multiple-input
multiple-output (UM-MIMO) data detection in terahertz (THz)-band non-orthogonal
multiple access (NOMA) systems. We argue that the most common THz NOMA
configuration is power-domain superposition coding over quasi-optical
doubly-massive MIMO channels. We propose spatial tuning techniques that modify
antenna subarray arrangements to enhance channel conditions. Towards recovering
the superposed data at the receiver side, we propose a family of data detectors
based on low-complexity channel matrix puncturing, in which higher-order
detectors are dynamically formed from lower-order component detectors. We first
detail the proposed solutions for the case of superposition coding of multiple
streams in point-to-point THz MIMO links. We then extend the study to
multi-user NOMA, in which randomly distributed users get grouped into narrow
cell sectors and are allocated different power levels depending on their
proximity to the base station. We show that successive interference
cancellation can be carried out with minimal performance and complexity costs under
spatial tuning. We derive approximate bit error rate (BER) equations, and we
propose an architectural design to illustrate complexity reductions. Under
typical THz conditions, channel puncturing introduces more than an order of
magnitude reduction in BER at high signal-to-noise ratios while reducing
complexity by approximately 90%.
|
We analyze the correlation coefficient T(E_e), which was introduced by Ebel
and Feldman (Nucl. Phys. 4, 213 (1957)). The correlation coefficient T(E_e) is
induced by the correlations of the neutron spin with the antineutrino
3-momentum and the electron spin with the electron 3-momentum. Such a
correlation structure is invariant under discrete P, C and T symmetries. The
correlation coefficient T(E_e), calculated to leading order in the large
nucleon mass $m_N$ expansion, is equal to $T(E_e) = -2 g_A(1 + g_A)/(1 + 3
g^2_A) = -B_0$, i.e. of order $|T(E_e)| \sim 1$, where $g_A$ is the axial coupling
constant. Within the Standard Model (SM) we describe the correlation
coefficient $T(E_e)$ at the level of $10^{-3}$ by taking into account the
radiative corrections of order $O(\alpha/\pi)$ or the outer model-independent radiative
corrections, where \alpha is the fine-structure constant, and the corrections
of order O(E_e/m_N), caused by weak magnetism and proton recoil. We calculate
also the contributions of interactions beyond the SM, including the
contributions of the second class currents.
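As a quick numerical check of the leading-order formula quoted above (a minimal sketch; the input value g_A ~ 1.27 is the commonly quoted axial coupling, not a number taken from this abstract):

```python
# Leading-order estimate of T(E_e) = -2 g_A (1 + g_A) / (1 + 3 g_A^2).
# g_A ~ 1.27 is the commonly quoted axial coupling, used here purely
# for illustration.
g_A = 1.27
T = -2 * g_A * (1 + g_A) / (1 + 3 * g_A**2)
print(f"T(E_e) at leading order: {T:.3f}")  # ~ -0.99, i.e. |T(E_e)| ~ 1
```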
|
Interactions between the intra- and inter-domain routing protocols have received
little attention despite playing an important role in forwarding transit
traffic. More precisely, by default, IGP distances are taken into account by
BGP to select the closest exit gateway for the transit traffic (hot-potato
routing). Upon an IGP update, the new best gateway may change and should be
updated through the (full) re-convergence of BGP, causing superfluous BGP
processing and updates in many cases. We propose OPTIC (Optimal Protection
Technique for Inter-intra domain Convergence), an efficient way to assemble
both protocols without losing the hot-potato property. OPTIC pre-computes sets
of gateways (BGP next-hops) shared by groups of prefixes. Such sets are
guaranteed to contain the post-convergence gateway after any single IGP event
for the grouped prefixes. The new optimal exits can be found through a single
walk-through of each set, allowing the transit traffic to benefit from optimal
BGP routes almost as soon as the IGP converges. Compared to vanilla BGP,
OPTIC's structures allow it to consider a reduced number of entries: this
number can be reduced by 99\% for stub networks. The update of OPTIC's
structures, which is not required as long as border routers remain at least
bi-connected, scales linearly in time with the number of groups.
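The core of the hot-potato walk-through can be illustrated with a short sketch; the data structures and names below are hypothetical simplifications, not OPTIC's actual internals:

```python
# Hot-potato selection over pre-computed gateway sets: after an IGP event,
# one walk through each set suffices to find the new closest exit for all
# prefixes in the group. Names and structures are illustrative only.

def best_gateways(gateway_sets, igp_dist):
    return {group: min(gateways, key=igp_dist.get)
            for group, gateways in gateway_sets.items()}

gateway_sets = {"prefix-group-A": {"gw1", "gw2"}, "prefix-group-B": {"gw2", "gw3"}}
igp_dist = {"gw1": 10, "gw2": 4, "gw3": 7}    # distances after the IGP update
print(best_gateways(gateway_sets, igp_dist))  # closest exit per prefix group
```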
|
Immigration to the United States is certainly not a new phenomenon, and it is
therefore natural for immigration, culture and identity to be given due
attention by the public and policy makers. However, the current discussion of
immigration, legal and illegal, and of its philosophical underpinnings is lost
in translation, not necessarily along ideological lines, but along lines of
political orientation. In this paper we reexamine the philosophical underpinnings of the
melting pot versus multiculturalism as antecedents and precedents of the current
immigration debate, and how the core issues are lost in translation. We take a
brief look at immigrants and the economy to situate the current immigration
debate. We then discuss the two philosophical approaches to immigration and how
the understanding of the philosophical foundations can help streamline the
current immigration debate.
|
How the solar electromagnetic energy entering the Earth's atmosphere has varied
since pre-industrial times is an important consideration in the climate change
debate. Detrimental to this debate, estimates of the change in total solar
irradiance (TSI) since the Maunder minimum, an extended period of weak solar
activity preceding the industrial revolution, differ markedly, ranging from a
drop of 0.75 W m$^{-2}$ to a rise of 6.3 W m$^{-2}$. Consequently, the exact contribution
by solar forcing to the rise in global temperatures over the past centuries
remains inconclusive. Adopting a novel approach based on state-of-the-art solar
imagery and numerical simulations, we establish the TSI level of the Sun when
it is in its least-active state to be $2.0 \pm 0.7$ W m$^{-2}$ below the 2019 level.
This means TSI could not have risen since the Maunder minimum by more than this
amount, thus restricting the possible role of solar forcing in global warming.
|
We calculate the lifetime of the deuteron from dimension-six quark operators
that violate baryon number by one unit. We construct an effective field theory
for $|\Delta B|=1$ interactions that give rise to nucleon and $\Delta B=1$
deuteron decay in a systematic expansion. We show that up to and including
next-to-leading order the deuteron decay rate is given by the sum of the decay
rates of the free proton and neutron. The first nuclear correction is expected
to contribute at the few-percent level and comes with an undetermined
low-energy constant. We discuss its relation to earlier potential-model
calculations.
|
Quantum resources and protocols are known to outperform their classical
counterparts in a variety of communication and information processing tasks.
Random Access Codes (RACs) are one such cryptographically significant family of
bipartite communication tasks, wherein the sender encodes a data set
(typically a string of input bits) onto a physical system of bounded dimension
and transmits it to the receiver, who then attempts to guess a randomly chosen
part of the sender's data set (typically one of the sender's input bits). In
this work, we introduce a generalization of this task wherein the receiver, in
addition to the individual input bits, aims to retrieve randomly chosen
functions of sender's input string. Specifically, we employ sets of mutually
unbiased balanced functions (MUBS), such that perfect knowledge of any one of
the constituent functions yields no knowledge about the others. We investigate
and bound the performance of (i) classical, (ii) quantum prepare-and-measure,
and (iii) entanglement-assisted classical communication (EACC) protocols for
the resultant generalized RACs (GRACs). Finally, we detail the case of GRACs
with three input bits, find maximal success probabilities for classical,
quantum and EACC protocols, along with the effect of noisy quantum channels on
the performance of quantum protocols. Moreover, with the help of this case
study, we reveal several characteristic properties of the GRACs which deviate
from the standard RACs.
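For orientation, the baseline (non-generalized) classical RAC with two input bits can be solved by brute force; its optimal average success probability is known to be 3/4. The sketch below illustrates only this standard task, not the GRAC protocols of the paper:

```python
# Brute-force the optimal classical 2 -> 1 random access code: the sender
# encodes two bits into one; the receiver, asked for bit y, decodes a guess.
# Enumerating all deterministic strategies recovers the known optimum 3/4.
from itertools import product

best = 0.0
for enc in product((0, 1), repeat=4):            # encoding E(x0, x1)
    for dec in product(product((0, 1), repeat=2), repeat=2):  # decoders D_y(m)
        wins = sum(dec[y][enc[2 * x0 + x1]] == (x0, x1)[y]
                   for x0, x1, y in product((0, 1), repeat=3))
        best = max(best, wins / 8)
print(best)  # 0.75
```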
|
Machine translation models have discrete vocabularies and commonly use
subword segmentation techniques to achieve an 'open vocabulary.' This approach
relies on consistent and correct underlying Unicode sequences, and makes models
susceptible to degradation from common types of noise and variation. Motivated
by the robustness of human language processing, we propose the use of visual
text representations, which dispense with a finite set of text embeddings in
favor of continuous vocabularies created by processing visually rendered text
with sliding windows. We show that models using visual text representations
approach or match performance of traditional text models on small and larger
datasets. More importantly, models with visual embeddings demonstrate
significant robustness to varied types of noise, achieving e.g., 25.9 BLEU on a
character permuted German-English task where subword models degrade to 1.9.
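A minimal sketch of the render-then-slide idea, with an assumed default font and illustrative window and stride sizes (not the paper's exact configuration):

```python
# Render a sentence to a grayscale image and slice it into overlapping
# windows: these continuous "visual tokens" replace discrete subword
# embeddings. Font, canvas size, window, and stride are illustrative.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def visual_tokens(text, height=24, window=16, stride=8):
    img = Image.new("L", (10 * len(text), height), color=255)
    ImageDraw.Draw(img).text((0, 4), text, fill=0, font=ImageFont.load_default())
    pixels = np.asarray(img, dtype=np.float32) / 255.0
    return [pixels[:, i:i + window]
            for i in range(0, pixels.shape[1] - window + 1, stride)]

tokens = visual_tokens("ein Beispiel")
print(len(tokens), tokens[0].shape)  # number of windows, (height, window)
```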
|
Spine-related diseases have high morbidity and cause a huge burden of social
cost. Spine imaging is an essential tool for noninvasively visualizing and
assessing spinal pathology. Segmenting vertebrae in computed tomography (CT)
images is the basis of quantitative medical image analysis for clinical
diagnosis and surgery planning of spine diseases. Current publicly available
annotated datasets on spinal vertebrae are small in size. Due to the lack of a
large-scale annotated spine image dataset, the mainstream deep learning-based
segmentation methods, which are data-driven, are heavily restricted. In this
paper, we introduce a large-scale spine CT dataset, called CTSpine1K, curated
from multiple sources for vertebra segmentation, which contains 1,005 CT
volumes with over 11,100 labeled vertebrae belonging to different spinal
conditions. Based on this dataset, we conduct several spinal vertebrae
segmentation experiments to set the first benchmark. We believe that this
large-scale dataset will facilitate further research in many spine-related
image analysis tasks, including but not limited to vertebrae segmentation,
labeling, 3D spine reconstruction from biplanar radiographs, image
super-resolution, and enhancement.
|
A classical branch of graph algorithms is graph transversals, where one seeks
a minimum-weight subset of nodes in a node-weighted graph $G$ which intersects
all copies of subgraphs $F$ from a fixed family $\mathcal F$. Many such graph
transversal problems have been shown to admit polynomial-time approximation
schemes (PTAS) for planar input graphs $G$, using a variety of techniques like
the shifting technique (Baker, J. ACM 1994), bidimensionality (Fomin et al.,
SODA 2011), or connectivity domination (Cohen-Addad et al., STOC 2016). These
techniques do not seem to apply to graph transversals with parity constraints,
which have recently received significant attention, but for which no PTASs are
known. In the even-cycle transversal (ECT) problem, the goal is to find a
minimum-weight hitting set for the set of even cycles in an undirected graph.
For ECT, Fiorini et al. (IPCO 2010) showed that the integrality gap of the
standard covering LP relaxation is $\Theta(\log n)$, and that adding sparsity
inequalities reduces the integrality gap to 10. Our main result is a
primal-dual algorithm that yields a $47/7\approx6.71$-approximation for ECT on
node-weighted planar graphs, and an integrality gap of the same value for the
standard LP relaxation on node-weighted planar graphs.
|
Geo-indistinguishability and expected inference error are two complementary
notions for location privacy. The joint guarantee of differential privacy
(indistinguishability) and distortion privacy (inference error) limits the
information leakage. In this paper, we analyze the differential privacy of the
PIVE dynamic location obfuscation mechanism proposed by Yu, Liu and Pu (ISOC Network
and Distributed System Security Symposium, 2017) and show that PIVE fails to
offer differential privacy guarantees on adaptive protection location sets as
claimed. Specifically, we demonstrate that different protection location sets
could intersect with one another due to the defined search algorithm and then
different locations in the same protection location set could have different
protection diameters. As a result, we can show that the proof of differential
privacy for PIVE is incorrect. We also discuss in detail feasible privacy
frameworks for achieving personalized error bounds.
|
We introduced a generalized Wilson line gauge link that reproduces both
staple and near straight links in different limits. We then studied the
gauge-invariant bi-local orbital angular momentum operator with such a general
gauge link, in the framework of the Chen et al. decomposition of gauge fields.
In the appropriate combination of limits, the operator reproduces both the
Jaffe-Manohar and Ji operator structures and offers a continuous analytical
interpolation between the two in the small-$x$ limit. We also studied the
potential OAM which is defined as the difference between the two, and how it
depends on the geometry or orientation of the gauge links.
|
Graphs naturally lend themselves to model the complexities of Hyperspectral
Image (HSI) data as well as to serve as semi-supervised classifiers by
propagating given labels among nearest neighbours. In this work, we present a
novel framework for the classification of HSI data in light of a very limited
amount of labelled data, inspired by multi-view graph learning and graph signal
processing. Given an a priori superpixel-segmented hyperspectral image, we seek
a robust and efficient graph construction and label propagation method to
conduct semi-supervised learning (SSL). Since the graph is paramount to the
success of the subsequent classification task, particularly in light of the
intrinsic complexity of HSI data, we consider the problem of finding the
optimal graph to model such data. Our contribution is two-fold: firstly, we
propose a multi-stage edge-efficient semi-supervised graph learning framework
for HSI data which exploits given label information through pseudo-label
features embedded in the graph construction. Secondly, we examine and enhance
the contribution of multiple superpixel features embedded in the graph on the
basis of pseudo-labels in an extension of the previous framework, which is less
reliant on excessive parameter tuning. Ultimately, we demonstrate the
superiority of our approaches in comparison with state-of-the-art methods
through extensive numerical experiments.
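As a toy stand-in for the graph-based SSL step (the paper's multi-stage graph construction is far richer), scikit-learn's label spreading over a kNN graph illustrates propagation from a handful of labels:

```python
# Toy graph-based semi-supervised classification: superpixel-like feature
# vectors with very few known labels, the rest inferred by label spreading
# over a kNN graph. A drastic simplification of the framework above.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
labels = np.full(100, -1)           # -1 marks unlabelled samples
labels[:3], labels[50:53] = 0, 1    # a very limited amount of labelled data

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(features, labels)
print((model.transduction_[50:] == 1).mean())  # fraction of class 1 recovered
```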
|
The quasiparticle spectra of atomically thin semiconducting transition metal
dichalcogenides (TMDCs) and their response to an ultrafast optical excitation
critically depend on interactions with the underlying substrate. Here, we
present a comparative time- and angle-resolved photoemission spectroscopy
(TR-ARPES) study of the transient electronic structure and ultrafast carrier
dynamics in the single- and bilayer TMDCs MoS$_2$ and WS$_2$ on three different
substrates: Au(111), Ag(111) and graphene/SiC. The photoexcited quasiparticle
bandgaps are observed to vary over the range of 1.9-2.3 eV between our systems.
The transient conduction band signals decay on a sub-100 fs timescale on the
metals, signifying an efficient removal of photoinduced carriers into the bulk
metallic states. On graphene, we instead observe two timescales on the order of
200 fs and 50 ps, respectively, for the conduction band decay in MoS$_2$. These
multiple timescales are explained by Auger recombination involving MoS$_2$ and
in-gap defect states. In bilayer TMDCs on metals we observe a complex
redistribution of excited holes along the valence band that is substantially
affected by interactions with the continuum of bulk metallic states.
|
In a domain $\Omega\subseteq \mathbb{R}^\mathbf{N}$ we consider compact,
Birman-Schwinger type, operators of the form
$\mathbf{T}_{P,\mathfrak{A}}=\mathfrak{A}^*P\mathfrak{A}$; here $P$ is a
singular Borel measure in $\Omega$ and $\mathfrak{A}$ is a noncritical order
$-l\ne -\mathbf{N}/2$ pseudodifferential operator. For a class of such
operators, we obtain estimates and a proper version of H. Weyl's asymptotic law
for eigenvalues, with order depending on dimensional characteristics of the
measure. A version of the CLR estimate for singular measures is proved. For
non-selfadjoint operators of the form $P_2 \mathfrak{A} P_1$ and
$\mathfrak{A}_2 P \mathfrak{A}_1$ with singular measures $P,P_1,P_2$ and
negative order pseudodifferential operators
$\mathfrak{A},\mathfrak{A}_1,\mathfrak{A}_2$ we obtain estimates for singular
numbers.
|
In this paper we study the deterministic and stochastic homogenisation of
free-discontinuity functionals under \emph{linear} growth and coercivity
conditions. The main novelty of our deterministic result is that we work under
very general assumptions on the integrands which, in particular, are not
required to be periodic in the space variable. Combining this result with the
pointwise Subadditive Ergodic Theorem by Akcoglu and Krengel, we prove a
stochastic homogenisation result, in the case of stationary random integrands.
In particular, we characterise the limit integrands in terms of asymptotic cell
formulas, as in the classical case of periodic homogenisation.
|
Phase-locked laser arrays have been extensively investigated in terms of
their stability and nonlinear dynamics. Specifically, enhancing the
phase-locking stability allows laser arrays to generate high-power and
steerable coherent optical beams for a plethora of applications, including
remote sensing and optical communications. Compared to other coupling
architectures, laterally coupled lasers are especially desirable since they
allow for denser integration and a simpler fabrication process. Here, we present
the theoretical effects of varying the spontaneous emission factor $\beta$, an
important parameter for micro- and nanoscale lasers, on the stability
conditions of phase-locking for two laterally coupled semiconductor lasers.
Through bifurcation analyses, we observe that increasing $\beta$ contributes to
the expansion of the in-phase stability region under all scenarios considered,
including complex coupling coefficients, varying pump rates, and frequency
detuning. Moreover, the effect is more pronounced for $\beta$ approaching 1,
thus underlining the significant advantages of implementing nanolasers with
intrinsically high $\beta$ in phase-locked laser arrays for high-power
generation. We also show that the steady-state phase differences can be widely
tuned - up to $\pi$ radians - by asymmetrically pumping high-$\beta$ coupled
lasers. This demonstrates the potential of high-$\beta$ nanolasers in building
next-generation optical phased arrays requiring wide scanning angles with
ultra-high resolution.
|
Mutations on Brauer configurations are introduced and associated with some
suitable automata in order to solve generalizations of the Chicken McNugget
problem. In addition, based on marked order polytopes, a new class of Diophantine
equations called Gelfand-Tsetlin equations is also solved. The approach allows
us to give an algebraic description of the key schedule of the AES via suitable
non-deterministic finite automata (NFA).
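For context, the classical Chicken McNugget problem that is being generalized asks which totals are not representable by the pack sizes {6, 9, 20}; a direct dynamic-programming check (illustrating the classical problem only, not the Brauer-configuration or automata machinery):

```python
# Classical Chicken McNugget problem: which quantities cannot be bought
# with packs of 6, 9, and 20? The largest such number is known to be 43.
def non_representable(packs, limit=200):
    reachable = [True] + [False] * limit
    for n in range(1, limit + 1):
        reachable[n] = any(n >= p and reachable[n - p] for p in packs)
    return [n for n in range(limit + 1) if not reachable[n]]

gaps = non_representable([6, 9, 20])
print(max(gaps))  # 43
```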
|
This paper reports on a new analysis of archival ALMA $870\,\mu$m dust
continuum observations of HD 100546. Along with the previously observed bright inner ring
($r \sim 20-40\,$au), two additional substructures are evident in the new
continuum image: a wide dust gap, $r \sim 40-150\,$au, and a faint outer ring
ranging from $r \sim 150\,$au to $r \sim 250\,$au and whose presence was
formerly postulated in low-angular-resolution ALMA cycle 0 observations but
never before observed. Notably, the dust emission of the outer ring is not
homogeneous, and it shows two prominent azimuthal asymmetries that resemble an
eccentric ring with eccentricity $e = 0.07 $. The characteristic double-ring
dust structure of HD 100546 is likely produced by the interaction of the disk
with multiple giant protoplanets. This paper includes new
smoothed-particle-hydrodynamic simulations with two giant protoplanets, one
inside of the inner dust cavity and one in the dust gap. The simulations
qualitatively reproduce the observations, and the final masses and orbital
distances of the two planets in the simulations are 3.1 $M_{J}$ at 15 au and
8.5 $M_{J}$ at 110 au, respectively. The massive outer protoplanet
substantially perturbs the disk surface density distribution and gas dynamics,
producing multiple spiral arms both inward and outward of its orbit. This can
explain the observed perturbed gas dynamics inward of 100 au as revealed by
ALMA observations of CO.
Finally, the reduced dust surface density in the $\sim 40-150\,$au dust gap
can naturally explain the origin of the previously detected H$_2$O gas and ice
emission.
|
In this paper, we calculate the jumps of the ramification groups of some
finite non-abelian Galois extensions over complete discrete valuation fields
of positive characteristic $p$ with perfect residue field.
Galois group is of order $p^{n+1}$, where $2\leq n\leq p$.
|
Copositive optimization is a special case of convex conic programming, and it
optimizes a linear function over the cone of all completely positive matrices
under linear constraints. Copositive optimization provides powerful relaxations
of NP-hard quadratic problems or combinatorial problems, but there are still
many open problems regarding copositive or completely positive matrices. In
this paper, we focus on one such open problem: finding a completely
positive (CP) factorization for a given completely positive matrix. We treat it
as a nonsmooth Riemannian optimization problem, i.e., a minimization of a nonsmooth
function over a Riemannian manifold. To solve this problem, we present a
general smoothing framework for nonsmooth Riemannian optimization and guarantee
convergence to a stationary point of the original problem. An advantage is that
we can implement it quickly with minimal effort by directly using the existing
standard smooth Riemannian solvers, such as Manopt. Numerical experiments show
the efficiency of our method especially for large-scale CP factorizations.
|
Adversarial training is among the most effective techniques to improve the
robustness of models against adversarial perturbations. However, the full
effect of this approach on models is not well understood. For example, while
adversarial training can reduce the adversarial risk (prediction error against
an adversary), it sometimes increases standard risk (generalization error when
there is no adversary). Even more, such behavior is impacted by various
elements of the learning problem, including the size and quality of training
data, specific forms of adversarial perturbations in the input, model
overparameterization, and adversary's power, among others. In this paper, we
focus on the \emph{distribution perturbing} adversary framework wherein the
adversary can change the test distribution within a neighborhood of the
training data distribution. The neighborhood is defined via Wasserstein
distance between distributions and the radius of the neighborhood is a measure
of adversary's manipulative power. We study the tradeoff between standard risk
and adversarial risk and derive the Pareto-optimal tradeoff, achievable over
specific classes of models, in the infinite data limit with the feature dimension
kept fixed. We consider three learning settings: 1) Regression with the class
of linear models; 2) Binary classification under the Gaussian mixtures data
model, with the class of linear classifiers; 3) Regression with the class of
random features model (which can be equivalently represented as a two-layer
neural network with random first-layer weights). We show that a tradeoff
between standard and adversarial risk is manifested in all three settings. We
further characterize the Pareto-optimal tradeoff curves and discuss how a
variety of factors, such as feature correlation, the adversary's power, or the
width of two-layer neural network would affect this tradeoff.
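In symbols, the distribution-perturbing adversary described above corresponds to the standard Wasserstein-ball formulation (notation assumed here: $\ell$ the loss, $\mathcal{P}$ the data distribution, $\varepsilon$ the adversary's power):

```latex
% Worst-case (adversarial) risk over a Wasserstein ball of radius eps
% around the data distribution P; standard risk is the eps = 0 case.
\[
  \mathrm{AR}_\varepsilon(\theta)
  \;=\; \sup_{\mathcal{Q}:\, W(\mathcal{Q},\mathcal{P}) \le \varepsilon}
        \mathbb{E}_{(x,y)\sim \mathcal{Q}}\bigl[\ell(\theta; x, y)\bigr],
  \qquad
  \mathrm{SR}(\theta) = \mathrm{AR}_0(\theta).
\]
```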
|
We present a simple $O(n^4)$-time algorithm for computing optimal search
trees with two-way comparisons. The only previous solution to this problem, by
Anderson et al., has the same running time, but is significantly more
complicated and is restricted to the variant where only successful queries are
allowed. Our algorithm extends directly to solve the standard full variant of
the problem, which also allows unsuccessful queries and for which no
polynomial-time algorithm was previously known. The correctness proof of our
algorithm relies on a new structural theorem for two-way-comparison search
trees.
|
Flight delays impose challenges that impact any flight transportation system.
Predicting when they are going to occur is an important way to mitigate this
issue. However, the behavior of the flight delay system varies through time.
This phenomenon is known in predictive analytics as concept drift. This paper
investigates the prediction performance of different drift handling strategies
in aviation under different scales (models trained from flights related to a
single airport or the entire flight system). Specifically, two research
questions were proposed and answered: (i) How do drift handling strategies
influence the prediction performance of delays? (ii) Do different scales change
the results of drift handling strategies? In our analysis, drift handling
strategies are relevant, and their impacts vary according to scale and machine
learning models used.
|
Swimming bacteria in passive nematics in the form of lyotropic liquid
crystals have been identified in recent studies as a new class of active matter
known as living liquid crystals. It has also been shown that liquid crystal
solutions are promising candidates for trapping and detecting bacteria. We ask
the question, can a similar class of matter be designed for background nematics
which are also active? Hence, we developed a minimal model for the mixture of
polar particles in active nematics. It is found that the active nematics in
such a mixture are highly sensitive to the presence of polar particles, and
show the formation of large scale higher order structures for a relatively low
polar particle density. Upon increasing the density of polar particles,
different phases of active nematics are found and it is observed that the
system shows two phase transitions. The first phase transition is a first order
transition from quasi-long ranged ordered active nematics to disordered active
nematics with larger scale structures. On further increasing the density of polar
particles, the system transitions to a third phase, where polar particles form
large, mutually aligned clusters. These clusters sweep the whole system and
enforce local order in the nematics. The current study can be helpful for
detecting the presence of very low densities of polar swimmers in active
nematics and can be used to design and control different structures in active
nematics.
|
Sequential plateau transitions of quantum spin chains ($S = 1, 3/2, 2$, and $3$) are
demonstrated by a spin pump using dimerization and a staggered magnetic field as
synthetic dimensions. The bulk is characterized by the Chern number associated
with the boundary twist and the pump protocol as a time. It counts the number
of critical points in the loop that is specified by the $Z_2$ Berry phases.
With open boundary conditions, the discontinuity of the spin-weighted center of mass
due to emergent effective edge spins also characterizes the pump via the
bulk-edge correspondence. It requires extra level crossings in the pump as a
super-selection rule that is consistent with the Valence Bond Solid (VBS)
picture.
|
Chest computed tomography (CT) has played an essential diagnostic role in
assessing patients with COVID-19 by showing disease-specific image features
such as ground-glass opacity and consolidation. Image segmentation methods have
proven to help quantify the disease burden and even help predict the outcome.
The availability of longitudinal CT series may also result in an efficient and
effective method to reliably assess the progression of COVID-19, monitor the
healing process and the response to different therapeutic strategies. In this
paper, we propose a new framework to identify infection at a voxel level
(identification of healthy lung, consolidation, and ground-glass opacity) and
visualize the progression of COVID-19 using sequential low-dose non-contrast CT
scans. In particular, we devise a longitudinal segmentation network that
utilizes the reference scan information to improve the performance of disease
identification. Experimental results on a clinical longitudinal dataset
collected in our institution show the effectiveness of the proposed method
compared to the static deep neural networks for disease quantification.
|
High-quality automatic speech recognition (ASR) is essential for virtual
assistants (VAs) to work well. However, ASR often performs poorly on VA
requests containing named entities. In this work, we start from the observation
that many ASR errors on named entities are inconsistent with real-world
knowledge. We extend previous discriminative n-gram language modeling
approaches to incorporate real-world knowledge from a Knowledge Graph (KG),
using features that capture entity type-entity and entity-entity relationships.
We apply our model through an efficient lattice rescoring process, achieving
relative sentence error rate reductions of more than 25% on some synthesized
test sets covering less popular entities, with minimal degradation on a
uniformly sampled VA test set.
|
Valtancoli, in his paper [P. Valtancoli, Canonical transformations and
minimal length, J. Math. Phys. 56, 122107 (2015)], has shown how the
deformation of canonical transformations can be made compatible with the
deformed Poisson brackets. Based on this work and through an appropriate
canonical transformation, we solve the problem of the one-dimensional (1D)
damped harmonic oscillator in the classical limit of Snyder-de Sitter (SdS) space.
We show that the equations of motion can be described by trigonometric
functions with frequency and period depending on the deformation and damping
parameters. We finally discuss the influence of these parameters on the
motion of the system.
|
Most sensor setups for onboard autonomous perception are composed of LiDARs
and vision systems, as they provide complementary information that improves the
reliability of the different algorithms necessary to obtain a robust scene
understanding. However, the effective use of information from different sources
requires an accurate calibration between the sensors involved, which usually
implies a tedious and burdensome process. We present a method to calibrate the
extrinsic parameters of any pair of sensors involving LiDARs, monocular or
stereo cameras, of the same or different modalities. The procedure is composed
of two stages: first, reference points belonging to a custom calibration target
are extracted from the data provided by the sensors to be calibrated, and
second, the optimal rigid transformation is found through the registration of
both point sets. The proposed approach can handle devices with very different
resolutions and poses, as usually found in vehicle setups. In order to assess
the performance of the proposed method, a novel evaluation suite built on top
of a popular simulation framework is introduced. Experiments on the synthetic
environment show that our calibration algorithm significantly outperforms
existing methods, whereas real data tests corroborate the results obtained in
the evaluation suite. Open-source code is available at
https://github.com/beltransen/velo2cam_calibration
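The second stage, registration of the two reference-point sets, admits a closed-form SVD solution; below is a generic Kabsch-style sketch, not necessarily the exact implementation in the repository:

```python
# Closed-form rigid registration: find R, t minimizing
# sum_i || R p_i + t - q_i ||^2 for matched reference points.
# Generic illustration of the registration stage; details may differ
# from the velo2cam_calibration implementation.
import numpy as np

def rigid_transform(P, Q):
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

P = np.random.rand(10, 3)                        # e.g. target points, LiDAR frame
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])    # same points, camera frame
R, t = rigid_transform(P, Q)
print(np.allclose(P @ R.T + t, Q))               # True: transform recovered
```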
|
We propose a quantum enhanced interferometric protocol for gravimetry and
force sensing using cold atoms in an optical lattice supported by a
standing-wave cavity. By loading the atoms in partially delocalized
Wannier-Stark states, it is possible to cancel the undesirable inhomogeneities
arising from the mismatch between the lattice and cavity fields and to generate
spin squeezed states via a uniform one-axis twisting model. The quantum
enhanced sensitivity of the states is combined with the subsequent application
of a compound pulse sequence that allows atoms to be separated by several lattice
sites. This, together with the capability to load small atomic clouds in the
lattice at micrometric distances from a surface, makes our setup ideal for
sensing short-range forces. We show that for arrays of $10^4$ atoms, our
protocol can reduce the required averaging time by a factor of $10$ compared to
unentangled lattice-based interferometers after accounting for primary sources
of decoherence.
|
Given a graph, the densest subgraph problem asks for a set of vertices such
that the average degree among these vertices is maximized. Densest subgraph has
numerous applications in learning, e.g., community detection in social
networks, link spam detection, correlation mining, bioinformatics, and so on.
Although there are efficient algorithms that output either exact or approximate
solutions to the densest subgraph problem, existing algorithms may violate the
privacy of the individuals in the network, e.g., leaking the
existence/non-existence of edges.
In this paper, we study the densest subgraph problem in the framework of the
differential privacy, and we derive the first upper and lower bounds for this
problem. We show that there exists a linear-time $\epsilon$-differentially
private algorithm that finds a $2$-approximation of the densest subgraph with
an extra poly-logarithmic additive error. Our algorithm not only reports the
approximate density of the densest subgraph, but also reports the vertices that
form the dense subgraph.
Our upper bound almost matches the famous $2$-approximation by Charikar both
in performance and in approximation ratio, but we additionally achieve
differential privacy. In comparison with Charikar's algorithm, our algorithm
has an extra poly-logarithmic additive error. We partly justify the additive
error with a new lower bound, showing that for any differentially private
algorithm that provides a constant-factor approximation, a sub-logarithmic
additive error is inherent.
We also practically study our differentially private algorithm on real-world
graphs, and we show that in practice the algorithm finds a solution which is
very close to the optimal one.
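For reference, the (non-private) Charikar baseline cited above is a simple greedy peeling procedure; a minimal sketch:

```python
# Charikar's greedy 2-approximation for densest subgraph (the non-private
# baseline referenced above): repeatedly peel a minimum-degree vertex and
# keep the intermediate subgraph with the highest density |E|/|V|.
def densest_subgraph(adj):
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    best_density, best_set = 0.0, set(adj)
    while adj:
        density = sum(len(n) for n in adj.values()) / 2 / len(adj)
        if density > best_density:
            best_density, best_set = density, set(adj)
        v = min(adj, key=lambda u: len(adj[u]))   # peel min-degree vertex
        for u in adj.pop(v):
            adj[u].discard(v)
    return best_set, best_density

graph = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(densest_subgraph(graph))  # K4 subgraph {0, 1, 2, 3}, density 6/4 = 1.5
```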
|
We introduce the notion of Drinfeld twists for both set-theoretical YBE
solutions and skew braces. We give examples of such twists and show that all
twists between skew braces come from families of isomorphisms between their
additive groups. We then describe the relation between these definitions and
co-twists on FRT-type Hopf algebras in the category $\mathrm{SupLat}$, and
prove that any co-twist on a co-quasitriangular Hopf algebra in
$\mathrm{SupLat}$ induces a Drinfeld twist on its remnant skew brace. We go on
to classify co-twists on bicrossproduct Hopf algebras coming from groups with
unique factorisation, and the twists which they induce on skew braces.
|
We establish a construction for the entanglement wedge in asymptotically flat
bulk geometries for subsystems in dual $(1+1)$-dimensional Galilean conformal
field theories in the context of flat space holography. In this connection we
propose a definition for the bulk entanglement wedge cross section for
bipartite states of such dual non-relativistic conformal field theories.
Utilizing our construction for the entanglement wedge cross section we compute
the entanglement negativity for such bipartite states through the
generalization of an earlier proposal, in the context of the usual $AdS/CFT$
scenario, to flat space holography. The entanglement negativity obtained from
our construction exactly reproduces earlier holographic results and matches
the corresponding field theory replica technique results in the large central
charge limit.
|
The gravity model has been a useful framework to describe macroscopic flow
patterns in geographically correlated systems. In the general framework of the
gravity model, the flow between two nodes decreases with distance and has been
set to be proportional to the suitably defined mass of each node. Despite the
frequent successful applications of the gravity model and its alternatives, the
existing models certainly possess a serious logical drawback from a theoretical
perspective. In particular, the mass in the gravity model has been either
assumed to be proportional to the total in- and out-flow of the corresponding
node or simply assigned another node attribute external to the gravity model
formulation. In the present work, we propose a novel general framework in which
the mass, as well as the distance-dependent deterrence function, can be computed
iteratively in a self-consistent manner based only on the flow data as input. We
validate our suggested methodology on artificial synthetic flow data and find
near-perfect agreement between the input
information and the results from our framework. We also apply our method to the
real international trade network data and discuss implications of the results.
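One way to make the self-consistent iteration concrete is sketched below, assuming the common product form F_ij ~ m_i m_j f(d_ij) with a binned deterrence estimate; the paper's actual update rules may differ:

```python
# Self-consistent estimation of masses m_i and a binned deterrence f(d)
# from a flow matrix F alone, assuming the product form
# F_ij ~ m_i * m_j * f(d_ij). A sketch of the general idea only; the
# paper's actual update rules may differ.
import numpy as np

def fit_gravity(F, D, n_bins=10, n_iter=200):
    n = F.shape[0]
    m = np.ones(n)
    bins = np.digitize(D, np.linspace(D.min(), D.max(), n_bins + 1)[1:-1])
    f = np.ones(n_bins)
    off_diag = ~np.eye(n, dtype=bool)
    for _ in range(n_iter):
        fD = f[bins]
        np.fill_diagonal(fD, 0.0)
        m = F.sum(axis=1) / np.maximum(fD @ m, 1e-12)        # mass update
        ratio = F / np.maximum(np.outer(m, m), 1e-12)
        for b in range(n_bins):                              # deterrence update
            mask = (bins == b) & off_diag
            if mask.any():
                f[b] = ratio[mask].mean()
    return m, f

rng = np.random.default_rng(1)
xy = rng.random((30, 2))
D = np.linalg.norm(xy[:, None] - xy[None], axis=-1)
m_true = rng.random(30) + 0.5
F = np.outer(m_true, m_true) * np.exp(-3 * D)
np.fill_diagonal(F, 0.0)
m_est, _ = fit_gravity(F, D)
print(np.corrcoef(m_true, m_est)[0, 1])  # close to 1: masses recovered up to scale
```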
|
In this chapter we argue that discourses on AI must transcend the language of
'ethics' and engage with power and political economy in order to constitute
'Good Data'. In particular, we must move beyond the depoliticised language of
'ethics' currently deployed (Wagner 2018) in determining whether AI is 'good'
given the limitations of ethics as a frame through which AI issues can be
viewed. In order to circumvent these limits, we use instead the language and
conceptualisation of 'Good Data', as a more expansive term to elucidate the
values, rights and interests at stake when it comes to AI's development and
deployment, as well as that of other digital technologies. Good Data
considerations move beyond recurring themes of data protection/privacy and the
FAT (fairness, transparency and accountability) movement to include explicit
political economy critiques of power. Instead of yet more ethics principles
(that tend to say the same or similar things anyway), we offer four 'pillars'
on which Good Data AI can be built: community, rights, usability and politics.
Overall we view AI's 'goodness' as an explicitly political (economy) question of
power, one which is always related to the degree to which AI is created and
used to increase the wellbeing of society and especially to increase the power
of the most marginalized and disenfranchised. We offer recommendations and
remedies towards implementing 'better' approaches towards AI. Our strategies
enable a different (but complementary) kind of evaluation of AI as part of the
broader socio-technical systems in which AI is built and deployed.
|
In this work, we develop an optimal transport (OT) based framework to select
informative prototypical examples that best represent a given target dataset.
Summarizing a given target dataset via representative examples is an important
problem in several machine learning applications where human understanding of
the learning models and underlying data distribution is essential for decision
making. We model the prototype selection problem as learning a sparse
(empirical) probability distribution having the minimum OT distance from the
target distribution. The learned probability measure supported on the chosen
prototypes directly corresponds to their importance in representing the target
data. We show that our objective function enjoys a key property of
submodularity and propose an efficient greedy method that is both
computationally fast and possesses deterministic approximation guarantees.
Empirical results on several real world benchmarks illustrate the efficacy of
our approach.
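A simplified sketch of the greedy selection using the POT library, with uniform weights over the chosen prototypes in place of the learned sparse distribution of the paper:

```python
# Greedy prototype selection by optimal transport: at each step add the
# candidate that most reduces the OT cost between the (here uniformly
# weighted) prototype measure and the empirical target distribution.
# Simplified sketch; the paper learns sparse weights on the prototypes.
import numpy as np
import ot  # POT: Python Optimal Transport

def select_prototypes(X, k):
    n = len(X)
    M = ot.dist(X, X)                       # pairwise squared Euclidean costs
    target = np.full(n, 1.0 / n)
    chosen = []
    for _ in range(k):
        def cost(c):
            idx = chosen + [c]
            return ot.emd2(np.full(len(idx), 1.0 / len(idx)), target, M[idx])
        chosen.append(min((c for c in range(n) if c not in chosen), key=cost))
    return chosen

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
print(select_prototypes(X, 2))  # typically one prototype from each cluster
```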
|
Isolated neutron stars are prime targets for continuous-wave (CW) searches by
ground-based gravitational-wave interferometers. Results are presented from a
CW search targeting ten pulsars. The search uses a semicoherent algorithm,
which combines the maximum-likelihood $\mathcal{F}$-statistic with a hidden
Markov model (HMM) to efficiently detect and track quasi-monochromatic
signals which wander randomly in frequency. The targets, which are associated
with TeV sources detected by the High Energy Stereoscopic System (H.E.S.S.),
are chosen to test for gravitational radiation from young, energetic pulsars
with strong $\gamma$-ray emission, and take maximum advantage of the
frequency tracking capabilities of HMM compared to other CW search algorithms.
The search uses data from the second observing run of the Advanced Laser
Interferometer Gravitational-Wave Observatory (aLIGO). It scans 1-Hz
sub-bands around $f_*$, $4f_*/3$, and $2f_*$, where $f_*$ denotes the star's
rotation frequency, in order to accommodate a physically plausible frequency
mismatch between the electromagnetic and gravitational-wave emission. The 24
sub-bands searched in this study return 5,256 candidates above the Gaussian
threshold with a false alarm probability of 1% per sub-band per target. Only
12 candidates survive the three data quality vetoes which are applied to
separate non-Gaussian artifacts from true astrophysical signals. CW searches
using the data from subsequent observing runs will clarify the status of the
remaining candidates.
|
The real-space Green's function code FEFF has been extensively developed and
used for calculations of x-ray and related spectra, including x-ray absorption
(XAS), x-ray emission (XES), inelastic x-ray scattering, and electron energy
loss spectra (EELS). The code is particularly useful for the analysis and
interpretation of the extended x-ray absorption fine structure (EXAFS) and the near-edge structure
(XANES) in materials throughout the periodic table. Nevertheless, many
applications, such as non-equilibrium systems, and the analysis of ultra-fast
pump-probe experiments, require extensions of the code including
finite-temperature and auxiliary calculations of structure and vibrational
properties. To enable these extensions, we have developed, in tandem, a new
version, FEFF10, and new FEFF-based workflows for the Corvus workflow manager,
which allow users to easily augment the capabilities of FEFF10 via auxiliary
codes. This coupling facilitates simplified input and automated calculations of
spectra based on advanced theoretical techniques. The approach is illustrated
with examples of high temperature behavior, vibrational properties, many-body
excitations in XAS, super-heavy materials, and fits of calculated spectra to
experiment.
|
In a topological insulator (TI)/magnetic insulator (MI) hetero-structure,
large spin-orbit coupling of the TI and inversion symmetry breaking at the
interface could foster non-planar spin textures such as skyrmions at the
interface. This is observed as topological Hall effect in a conventional Hall
set-up. While this effect has been observed at the interface of TI/MI, where MI
beholds perpendicular magnetic anisotropy, non-trivial spin-textures that
develop in interfacial MI with in-plane magnetic anisotropy is under-reported.
In this work, we study Bi$_2$Te$_3$/EuS hetero-structure using planar Hall
effect (PHE). We observe planar topological Hall and spontaneous planar Hall
features that are characteristic of non-trivial in-plane spin textures at the
interface. We find that the latter is minimum when the current and magnetic
field directions are aligned parallel, and maximum when they are aligned
perpendicularly within the sample plane, which may be attributed to the
underlying planar anisotropy of the spin-texture. These results demonstrate the
importance of PHE for sensitive detection and characterization of non-trivial
magnetic phase that has evaded exploration in the TI/MI interface.
|
Recently, the problem of spin and orbital angular momentum (AM) separation
has been widely discussed. Nowadays, all discussions about the possibility of
separating the spin AM from the orbital AM in a gauge-invariant manner are
based on the ansatz that the gluon field can be presented in the form of a
decomposition where the physical gluon components are additive to the pure
gauge gluon components, i.e. $A_\mu = A_\mu^{\text{phys}}+A_\mu^{\text{pure}}$.
In the present paper, we show that in the non-Abelian gauge theory this gluon
decomposition has strong mathematical support within the framework of the
contour gauge conception. In other words, we reformulate the gluon decomposition
ansatz as a theorem on decomposition and then use the contour gauge to prove
this theorem. For the first time, we also demonstrate that the contour gauge
possesses a special kind of residual gauge freedom, related to the boundary field
configurations and expressed in terms of the pure gauge fields. As a result,
the trivial boundary conditions lead to the inference that the decomposition
includes only the physical gluon configurations, provided the contour gauge
condition holds.
|
We present a study of 41 dwarf galaxies hosting active massive black holes
(BHs) using Hubble Space Telescope (HST) observations. The host galaxies have stellar
masses in the range of $M_\star \sim 10^{8.5}-10^{9.5}~M_\odot$ and were
selected to host active galactic nuclei (AGNs) based on narrow emission line
ratios derived from Sloan Digital Sky Survey spectroscopy. We find a wide range
of morphologies in our sample including both regular and irregular dwarf
galaxies. We fit the HST images of the regular galaxies using GALFIT and find
that the majority are disk-dominated with small pseudobulges, although we do
find a handful of bulge-like/elliptical dwarf galaxies. We also find an
unresolved source of light in all of the regular galaxies, which may indicate
the presence of a nuclear star cluster and/or the detection of AGN continuum.
Three of the galaxies in our sample appear to be Magellanic-type dwarf
irregulars and two galaxies exhibit clear signatures of interactions/mergers.
This work demonstrates the diverse nature of dwarf galaxies hosting
optically-selected AGNs. It also has implications for constraining the origin
of the first BH seeds using the local BH occupation fraction at low masses --
we must account for the various types of dwarf galaxies that may host BHs.
|
The results of the investigation of the core-envelope model presented in Negi
et al. \cite{Ref1} have been discussed in view of the reference \cite{Ref2}.
It is seen that there are significant changes in the results to be addressed.
In addition, I have also calculated the gravitational binding energy, causality
and pulsational stability of the structures which were not considered in Negi
et al. \cite{Ref1}. The modified results have important consequences for models of
neutron stars and pulsars. The maximum neutron star mass obtained in this study
corresponds to the mean value of the classical results obtained by Rhoades \&
Ruffini \cite{Ref3} and the upper bound on neutron star mass obtained by
Kalogera \& Baym \cite{Ref4}, and is much closer to the most recent theoretical
estimate made by Sotani \cite{Ref5}. While there are only a few
equations of state (EOSs) available in the literature which can fulfil the
recent observational constraint imposed by the largest neutron star masses
around $2M_\odot$ \cite{Ref6,Ref7,Ref8}, the present analytic
models can comfortably satisfy this constraint.
Furthermore, the maximum allowed value of compactness parameter $u(\equiv M/a$;
mass to size ratio in geometrized units) $ \leq 0.30$ obtained in this study is
also consistent with an absolute maximum value of $ u_{\rm max} =
0.333^{+0.001}_{-0.005}$ resulting from the observation of binary neutron stars
merger GW170817 (see, e.g.\cite{Ref9}).
|
Traditional power system frequency dynamics are driven by Newtonian physics,
where the ubiquitous synchronous generator (SG) maps second order frequency
trajectories following power imbalances. The integration of sustainable,
renewable energy resources is primarily accomplished with inverters that
convert DC power into AC power, which hitherto use grid-following control
strategies that require an established voltage waveform. A 100\% integration of
this particular control strategy is untenable and attention has recently
shifted to grid-forming (GFM) control, where the inverter directly regulates
frequency and voltage. We compare and analyze the frequency interactions of
multi-loop droop GFMs and SGs. Full order dynamical models are reduced to
highlight the disparate power conversion processes, and singular perturbation
theory is applied to illuminate an order reduction in frequency response of the
GFM. Extensive electromagnetic transient domain simulations of the 9- and
39-bus test systems confirm the order reduction and the associated decoupling
of the nadir and rate of change of frequency. Finally, matrix pencil analysis
of the system oscillatory modes shows a general increase in primary mode damping
with GFMs, although an unexpected, substantial increase in mode frequency and
decrease in damping is observed for high penetrations of GFMs in the 39-bus
system.
|
When interacting motile units self-organize into flocks, they realize one of
the most robust ordered states found in nature. However, after twenty-five years
of intense research, the very mechanism controlling the ordering dynamics of
both living and artificial flocks has remained unsettled. Here, combining
active-colloid experiments, numerical simulations and analytical work, we
explain how flocking liquids heal their spontaneous flows initially plagued by
collections of topological defects to achieve long-ranged polar order even in
two dimensions. We demonstrate that the self-similar ordering of flocking
matter is ruled by a living network of domain walls linking all $\pm 1$
vortices, and guiding their annihilation dynamics. Crucially, this singular
orientational structure echoes the formation of extended density patterns in
the shape of interconnected bow ties. We establish that this double structure
emerges from the interplay between self-advection and density gradients
dressing each $-1$ topological charge with four orientation walls. We then
explain how active Magnus forces link all topological charges with extended
domain walls, while elastic interactions drive their attraction along the
resulting filamentous network of polarization singularities. Taken together our
experimental, numerical and analytical results illuminate the suppression of
all flow singularities, and the emergence of pristine unidirectional order in
flocking matter.
|
We present an approach to studying and predicting the spatio-temporal
progression of infectious diseases. We treat the problem by adopting a partial
differential equation (PDE) version of the Susceptible, Infected, Recovered,
Deceased (SIRD) compartmental model of epidemiology, which is achieved by
replacing compartmental populations by their densities. Building on our recent
work (Computational Mechanics, 66, 1177, 2020), we replace our earlier use of
global polynomial basis functions with those having local support, as
epitomized in the finite element method, for the spatial representation of the
SIRD parameters. The time dependence is treated by inferring constant
parameters over time intervals that coincide with the time step in
semi-discrete numerical implementations. In combination, this amounts to a
scheme of field inversion of the SIRD parameters over each time step. Applied
to data over ten months of 2020 for the pandemic in the US state of Michigan
and to all of Mexico, our system inference via field inversion infers
spatio-temporally varying PDE SIRD parameters that replicate the progression of
the pandemic with high accuracy. It also produces accurate predictions, when
compared against data, for a three week period into 2021. Of note is the
insight that is suggested on the spatio-temporal variation of infection,
recovery and death rates, as well as patterns of the population's mobility
revealed by diffusivities of the compartments.
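A representative form of the PDE SIRD system described above (a standard reaction-diffusion form with compartmental densities; the paper's precise parameterization may differ):

```latex
% Representative PDE SIRD system: compartmental densities S, I, R, D with
% diffusivities D_S, D_I, D_R encoding mobility; beta, gamma, delta are the
% infection, recovery, and death rates. The paper's exact parameterization
% of these spatio-temporally varying fields may differ.
\begin{align*}
  \partial_t S &= \nabla\cdot(D_S \nabla S) - \beta\,\frac{S I}{N}, &
  \partial_t I &= \nabla\cdot(D_I \nabla I) + \beta\,\frac{S I}{N} - (\gamma + \delta) I, \\
  \partial_t R &= \nabla\cdot(D_R \nabla R) + \gamma I, &
  \partial_t D &= \delta I .
\end{align*}
```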
|
In semi-supervised domain adaptation, a few labeled samples per class in the
target domain guide features of the remaining target samples to aggregate
around them. However, the trained model cannot produce a highly discriminative
feature representation for the target domain because the training data is
dominated by labeled samples from the source domain. This could lead to
disconnection between the labeled and unlabeled target samples as well as
misalignment between unlabeled target samples and the source domain. In this
paper, we propose a novel approach called Cross-domain Adaptive Clustering to
address this problem. To achieve both inter-domain and intra-domain adaptation,
we first introduce an adversarial adaptive clustering loss to group features of
unlabeled target data into clusters and perform cluster-wise feature alignment
across the source and target domains. We further apply pseudo labeling to
unlabeled samples in the target domain and retain pseudo-labels with high
confidence. Pseudo labeling expands the number of "labeled" samples in each
class in the target domain, and thus produces a more robust and powerful
cluster core for each class to facilitate adversarial learning. Extensive
experiments on benchmark datasets, including DomainNet, Office-Home and Office,
demonstrate that our proposed approach achieves the state-of-the-art
performance in semi-supervised domain adaptation.
|
We numerically investigate the energy and arrival-time noise of ultrashort
laser pulses produced via resonant dispersive wave emission in gas-filled
hollow-core waveguides under the influence of pump-laser instability. We find
that for low pump energy, fluctuations in the pump energy are strongly
amplified. However, when the generation process is saturated, the energy of the
resonant dispersive wave can be significantly less noisy than that of the pump
pulse. This holds for a variety of generation conditions and while still
producing few-femtosecond pulses. We further find that the arrival-time jitter
of the generated pulse remains well below one femtosecond even for a
conservative estimate of the pump pulse energy noise, and that photoionisation
and plasma dynamics can lead to exceptional stability for some generation
conditions. By applying our analysis to a scaled-down system, we demonstrate
that our results hold for frequency conversion schemes based on both small-core
microstructured fibre and large-core hollow capillary fibre.
|
Over the past decade, Deep Convolutional Neural Networks have been widely
adopted for medical image segmentation and shown to achieve adequate
performance. However, due to the inherent inductive biases present in the
convolutional architectures, they lack understanding of long-range dependencies
in the image. Recently proposed Transformer-based architectures that leverage
self-attention mechanism encode long-range dependencies and learn
representations that are highly expressive. This motivates us to explore
Transformer-based solutions and study the feasibility of using
Transformer-based network architectures for medical image segmentation tasks.
The majority of existing Transformer-based network architectures proposed for
vision applications require large-scale datasets to train properly. However,
compared to the datasets for vision applications, for medical imaging the
number of data samples is relatively low, making it difficult to efficiently
train transformers for medical applications. To this end, we propose a Gated
Axial-Attention model which extends the existing architectures by introducing
an additional control mechanism in the self-attention module. Furthermore, to
train the model effectively on medical images, we propose a Local-Global
training strategy (LoGo) which further improves the performance. Specifically,
we operate on the whole image and patches to learn global and local features,
respectively. The proposed Medical Transformer (MedT) is evaluated on three
different medical image segmentation datasets and it is shown that it achieves
better performance than the convolutional and other related transformer-based
architectures. Code: https://github.com/jeya-maria-jose/Medical-Transformer
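A rough, illustrative sketch of the gating idea in axial attention: attention is computed along one spatial axis at a time, and a learnable gate controls how much the attention output contributes. This is a simplification; the actual Gated Axial-Attention module gates individual positional terms inside the attention computation (see the linked repository for the authors' implementation).

    import torch
    import torch.nn as nn

    class GatedAxialAttention(nn.Module):
        # Attention along the width axis with a single learnable gate
        def __init__(self, dim, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.gate = nn.Parameter(torch.zeros(1))  # starts near closed

        def forward(self, x):                 # x: (batch, H, W, channels)
            b, h, w, c = x.shape
            rows = x.reshape(b * h, w, c)     # each row attends along W
            out, _ = self.attn(rows, rows, rows)
            return x + torch.tanh(self.gate) * out.reshape(b, h, w, c)

    y = GatedAxialAttention(dim=8)(torch.randn(2, 16, 16, 8))
    print(y.shape)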
|
In this paper, we propose an Ekeland's variational principle for
interval-valued functions (IVFs). To develop the variational principle, we
study the concept of sequences of intervals. In the sequel, the idea of
gH-semicontinuity for IVFs is explored. A necessary and sufficient condition
for an IVF to be gH-continuous in terms of gH-lower and upper semicontinuity is
given. Moreover, we prove a characterization for gH-lower semicontinuity by the
level sets of the IVF. With the help of this characterization result, we ensure
the existence of a minimum for an extended gH-lower semicontinuous,
level-bounded and proper IVF. To find an approximate minimum of a gH-lower
semicontinuous and gH-Gateaux differentiable IVF, the proposed Ekeland's
variational principle is used.
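For reference, the classical scalar statement being extended reads: if $(X,d)$ is a complete metric space and $f:X\to\mathbb{R}\cup\{+\infty\}$ is proper, lower semicontinuous and bounded below, with $f(x_0)\le\inf_X f+\varepsilon$, then for every $\lambda>0$ there exists $\bar{x}$ such that

$$ f(\bar{x})\le f(x_0), \qquad d(\bar{x},x_0)\le\lambda, \qquad f(x)>f(\bar{x})-\frac{\varepsilon}{\lambda}\,d(x,\bar{x}) \quad \forall x\ne\bar{x}. $$

The interval-valued version replaces the scalar order and continuity notions with their gH-based counterparts.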
|
In this paper, we address risk aggregation and capital allocation problems in
the presence of dependence between risks. The dependence structure is defined
by a mixed Bernstein copula which represents a generalization of the well-known
Archimedean copulas. Using this new copula, the probability density function
and the cumulative distribution function of the aggregate risk are obtained.
Then, closed-form expressions for basic risk measures, such as tail
value-at-risk (TVaR) and TVaR-based allocations, are derived.
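As a point of reference for the risk measures involved, here is a Monte Carlo sketch of TVaR for a hypothetical aggregate-loss sample; the paper instead derives closed-form expressions under the mixed Bernstein copula, and the lognormal losses below are illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)
    losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # toy aggregate risk

    alpha = 0.99
    var = np.quantile(losses, alpha)       # value-at-risk at level alpha
    tvar = losses[losses > var].mean()     # tail value-at-risk: mean loss beyond VaR
    print(var, tvar)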
|
Word emphasis in textual content aims at conveying the desired intention by
changing the size, color, typeface, style (bold, italic, etc.), and other
typographical features. The emphasized words are extremely helpful in drawing
the readers' attention to specific information that the authors wish to
emphasize. However, performing such emphasis using a soft keyboard for social
media interactions is time-consuming and has an associated learning curve. In
this paper, we propose a novel approach to automate the emphasis word detection
on short written texts. To the best of our knowledge, this work presents the
first lightweight deep learning approach for smartphone deployment of emphasis
selection. Experimental results show that our approach achieves comparable
accuracy at a much lower model size than existing models. Our best lightweight
model has a memory footprint of 2.82 MB with a matching score of 0.716 on
the SemEval-2020 public benchmark dataset.
|
Physicists are starting to work in areas where noisy signal analysis is
required. In these fields, such as Economics, Neuroscience, and Physics, the
notion of causality should be interpreted as a statistical measure. We
introduce to the lay reader the Granger causality between two time series and
illustrate ways of calculating it: a signal $X$ ``Granger-causes'' a signal $Y$
if the observation of the past of $X$ increases the predictability of the
future of $Y$ when compared to the same prediction done with the past of $Y$
alone. In other words, for Granger causality between two quantities it suffices
that information extracted from the past of one of them improves the forecast
of the future of the other, even in the absence of any physical mechanism of
interaction. We present derivations of the Granger causality measure in the
time and frequency domains and give numerical examples using a non-parametric
estimation method in the frequency domain. Parametric methods are addressed in
the Appendix. We discuss the limitations and applications of this method and
other alternatives to measure causality.
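A minimal sketch of the definition using the parametric (VAR-based) test from statsmodels, on synthetic data where the past of $X$ genuinely improves the forecast of $Y$; note the article's numerical examples use a non-parametric frequency-domain estimator instead.

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)
    n = 500
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):                  # y is driven by the past of x
        y[t] = 0.6 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()

    # Tests whether the second column Granger-causes the first
    grangercausalitytests(np.column_stack([y, x]), maxlag=2)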
|
Reverberation mapping (RM) is an efficient method to investigate the physical
sizes of the broad line region (BLR) and dusty torus in an active galactic
nucleus (AGN). The Spectro-Photometer for the History of the Universe, Epoch of
Reionization and Ices Explorer (SPHEREx) mission will provide multi-epoch
spectroscopic data at optical and near-infrared wavelengths. These data can be
used for RM experiments for bright AGNs. We present results of a feasibility
test using SPHEREx data in the SPHEREx deep regions for the torus RM
measurements. We investigate the physical properties of bright AGNs in the
SPHEREx deep field. Based on this information, we compute the efficiency of
detecting torus time lags in simulated light curves. We demonstrate that, in
combination with the complementary optical data with a depth of $\sim20$ mag in
$B$-band, lags of $\le 750$ days for tori can be measured for more than
$\sim200$ bright AGNs. If high signal-to-noise ratio photometric data with a
depth of $\sim21-22$ mag are available, RM measurements can be applied for up
to $\sim$900 objects. When complemented by well-designed early optical
observations, SPHEREx can provide a unique dataset for studies of the physical
properties of dusty tori in bright AGNs.
|
Unrestricted particle transport through microfluidic channels is of paramount
importance to a wide range of applications, including lab-on-a-chip devices. In
this article, we study using video microscopy the electro-osmotic aggregation
of colloidal particles at the opening of a micrometer-sized silica channel in
the presence of a salt gradient. Particle aggregation eventually leads to clogging
of the channel, which may be undone by a time-adjusted reversal of the applied
electric potential. We numerically model our system via the
Stokes-Poisson-Nernst-Planck equations in a geometry that approximates the real
sample. This allows us to identify the transport processes induced by the
electric field and salt gradient and to provide evidence that a balance thereof
leads to aggregation. We further demonstrate experimentally that a net flow of
colloids through the channel may be achieved by applying a square-waveform
electric potential with an appropriately tuned duty cycle. Our results serve to
guide the design of microfluidic and nanofluidic pumps that allow for
controlled particle transport and provide new insights for anti-fouling in
ultra-filtration.
|
Let $R$ be a commutative ring with nonzero identity, and $\delta
:\mathcal{I(R)}\rightarrow\mathcal{I(R)}$ be an ideal expansion, where
$\mathcal{I(R)}$ is the set of all ideals of $R$. In this paper, we introduce the
concept of $\delta$-$n$-ideals which is an extension of $n$-ideals in
commutative rings. We call a proper ideal $I$ of $R$ a $\delta$-$n$-ideal if
whenever $a,b\in R$ with $ab\in I$ and $a\notin\sqrt{0}$, then $b\in
\delta(I)$. For example, $\delta_{1}$ is defined by $\delta_{1}(I)=\sqrt{I}.$ A
number of results and characterizations related to $\delta$-$n$-ideals are
given. Furthermore, we present some results related to quasi $n$-ideals which
is for the particular case $\delta=\delta_{1}.$
|
The possibility of targeting the causal genes along with the mechanisms of
pathogenically complex diseases has led to numerous studies on the genetic
etiology of some diseases. In particular, studies have added more genes to the
list of type 1 diabetes mellitus (T1DM) suspect genes, necessitating an update
for the interest of all stakeholders. Therefore, this review articulates T1DM
suspect genes and their pathophysiology. Notable electronic databases,
including Medline, Scopus, PubMed, and Google-Scholar were searched for
relevant information. The search identified over 73 genes suspected in the
pathogenesis of T1DM, with human leukocyte antigen, insulin gene, and cytotoxic
T lymphocyte-associated antigen 4 accounting for most of the cases. Mutations
in these genes, along with environmental factors, may produce a defective
immune response in the pancreas, resulting in $\beta$-cell autoimmunity,
insulin deficiency, and hyperglycemia. The mechanisms leading to these cellular
reactions are gene-specific and, if targeted in diabetic individuals, may lead
to improved treatment. Medical practitioners are advised to formulate treatment
procedures that target these genes in patients with T1DM.
|
Unsupervised cross-lingual pretraining has achieved strong results in neural
machine translation (NMT), by drastically reducing the need for large parallel
data. Most approaches adapt masked-language modeling (MLM) to
sequence-to-sequence architectures, by masking parts of the input and
reconstructing them in the decoder. In this work, we systematically compare
masking with alternative objectives that produce inputs resembling real (full)
sentences, by reordering and replacing words based on their context. We
pretrain models with different methods on English$\leftrightarrow$German,
English$\leftrightarrow$Nepali and English$\leftrightarrow$Sinhala monolingual
data, and evaluate them on NMT. In (semi-) supervised NMT, varying the
pretraining objective leads to surprisingly small differences in the finetuned
performance, whereas unsupervised NMT is much more sensitive to it. To
understand these results, we thoroughly study the pretrained models using a
series of probes and verify that they encode and use information in different
ways. We conclude that finetuning on parallel data is mostly sensitive to a few
properties that are shared by most models, such as a strong decoder, in
contrast to unsupervised NMT that also requires models with strong
cross-lingual abilities.
|
In this article we consider zero and non-zero sum risk-sensitive average
criterion games for semi-Markov processes with a finite state space. For the
zero-sum case, under suitable assumptions we show that the game has a value. We
also establish the existence of a stationary saddle point equilibrium. For the
non-zero sum case, under suitable assumptions we establish the existence of a
stationary Nash equilibrium.
|
Understanding the water oxidation mechanism in Photosystem II (PSII)
stimulates the design of biomimetic artificial systems that can convert solar
energy into hydrogen fuel efficiently. PSII with Sr2+ substituted for the
native Ca2+ remains active as an oxygen-evolving catalyst but is slower. Here, we use
Density Functional Theory (DFT) to compare the energetics of the S2 to S3
transition in the Mn4O5Ca2+ and Mn4O5Sr2+ clusters. The calculations show that
deprotonation of the water bound to Ca2+ (W3), required for the S2 to S3
transition, is energetically more favorable in Mn4O5Ca2+ than Mn4O5Sr2+. In
addition, we have calculated the pKa of the water that bridges Mn4 and the
Ca2+/Sr2+ in the S2 state using continuum electrostatics. The calculations show
that the pKa is higher by 4 pH units in the Mn4O5Sr2+ cluster.
|
Microphone array techniques can improve the acoustic sensing performance on
drones, compared to the use of a single microphone. However, multichannel sound
acquisition systems are not available in current commercial drone platforms. To
encourage the research in drone audition, we present an embedded sound
acquisition and recording system with eight microphones and a multichannel
sound recorder mounted on a quadcopter. In addition to simultaneously recording
and locally storing the sound from multiple microphones, the embedded system
can connect wirelessly to a remote terminal to transfer audio files for further
processing. This will be the first stage towards creating a fully embedded
solution for drone audition. We present experimental results obtained by
state-of-the-art drone audition algorithms applied to the sound recorded by the
embedded system.
|
We evaluate the effectiveness of semi-supervised learning (SSL) on a
realistic benchmark where data exhibits considerable class imbalance and
contains images from novel classes. Our benchmark consists of two fine-grained
classification datasets obtained by sampling classes from the Aves and Fungi
taxonomy. We find that recently proposed SSL methods provide significant
benefits, and can effectively use out-of-class data to improve performance when
deep networks are trained from scratch. Yet their performance pales in
comparison to a transfer learning baseline, an alternative approach for
learning from a few examples. Furthermore, in the transfer setting, while
existing SSL methods provide improvements, the presence of out-of-class data is
often detrimental. In this setting, standard fine-tuning followed by
distillation-based self-training is the most robust. Our work suggests that
semi-supervised learning with experts on realistic datasets may require
different strategies than those currently prevalent in the literature.
|
In this paper we investigate the algebraic structure of the truth tables of
all bracketed formulae with n distinct variables connected by the binary
connective of implication.
|
Character linking, the task of linking mentioned people in conversations to
the real world, is crucial for understanding the conversations. For the
efficiency of communication, humans often choose to use pronouns (e.g., "she")
or normal phrases (e.g., "that girl") rather than named entities (e.g.,
"Rachel") in the spoken language, which makes linking those mentions to real
people a much more challenging than a regular entity linking task. To address
this challenge, we propose to incorporate the richer context from the
coreference relations among different mentions to help the linking. On the
other hand, considering that finding coreference clusters itself is not a
trivial task and could benefit from the global character information, we
propose to jointly solve these two tasks. Specifically, we propose C$^2$, the
joint learning model of Coreference resolution and Character linking. The
experimental results demonstrate that C$^2$ can significantly outperform
previous works on both tasks. Further analyses are conducted to analyze the
contribution of all modules in the proposed model and the effect of all
hyper-parameters.
|
We continue the line of work initiated by Goldreich and Ron (Journal of the
ACM, 2017) on testing dynamic environments and propose to pursue a systematic
study of the complexity of testing basic dynamic environments and local rules.
As a first step, in this work we focus on dynamic environments that correspond
to elementary cellular automata that evolve according to threshold rules.
Our main result is the identification of a set of conditions on local rules,
and a meta-algorithm that tests evolution according to local rules that satisfy
the conditions. The meta-algorithm has query complexity poly$(1/\epsilon)$,
is non-adaptive and has one-sided error. We show that all the threshold rules
satisfy the set of conditions, and are therefore poly$(1/\epsilon)$-testable.
We believe that this is a rich area of research and suggest a variety of open
problems and natural research directions that may extend and expand our
results.
|
Multi-party machine learning is a paradigm in which multiple participants
collaboratively train a machine learning model to achieve a common learning
objective without sharing their privately owned data. The paradigm has recently
received a lot of attention from the research community aimed at addressing its
associated privacy concerns. In this work, we focus on addressing the concerns
of data privacy, model privacy, and data quality associated with
privacy-preserving multi-party machine learning, i.e., we present a scheme for
privacy-preserving collaborative learning that checks the participants' data
quality while guaranteeing data and model privacy. In particular, we propose a
novel metric called weight similarity that is securely computed and used to
check whether a participant can be categorized as a reliable participant (holds
good quality data) or not. The problems of model and data privacy are tackled
by integrating homomorphic encryption in our scheme and uploading encrypted
weights, which prevent leakages to the server and malicious participants,
respectively. The analytical and experimental evaluations of our scheme
demonstrate that it is accurate and ensures data and model privacy.
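A plaintext sketch of one plausible form of the weight-similarity check: cosine similarity of flattened weights against a reference. In the actual scheme this comparison is computed securely over encrypted weights, and the cosine form here is an assumption for illustration.

    import numpy as np

    def weight_similarity(w, w_ref):
        # Cosine similarity of flattened weight vectors
        a, b = w.ravel(), w_ref.ravel()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    w_ref = rng.standard_normal(1000)          # e.g., aggregated global weights
    w_good = w_ref + 0.1 * rng.standard_normal(1000)
    w_bad = rng.standard_normal(1000)          # low-quality/unrelated update
    print(weight_similarity(w_good, w_ref), weight_similarity(w_bad, w_ref))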
|
We study $^5$He variationally as the first $p$-shell nucleus in the
tensor-optimized antisymmetrized molecular dynamics (TOAMD) using the bare
nucleon--nucleon interaction without any renormalization. In TOAMD, the central
and tensor correlation operators promote the AMD's Gaussian wave function to a
sophisticated many-body state including the short-range and tensor correlations
with high-momentum nucleon pairs. We develop a successive approach, applying
these operators successively, up to double correlation operators, to obtain
converging results. We obtain satisfactory results for $^5$He, not only for the
ground state but also for the excited state, and discuss explicitly the
correlated Hamiltonian components in each state. We also show the importance of
the independent optimization of the correlation functions in the variation of
the total energy beyond the condition assuming common correlation forms used in
the Jastrow approach.
|
Maintaining a $k$-core decomposition quickly in a dynamic graph is an
important problem in many applications, including social network analytics,
graph visualization, centrality measure computations, and community detection
algorithms. The main challenge for designing efficient $k$-core decomposition
algorithms is that a single change to the graph can cause the decomposition to
change significantly.
We present the first parallel batch-dynamic algorithm for maintaining an
approximate $k$-core decomposition that is efficient in both theory and
practice. Given an initial graph with $m$ edges, and a batch of $B$ updates,
our algorithm maintains a $(2 + \delta)$-approximation of the coreness values
for all vertices (for any constant $\delta > 0$) in $O(B\log^2 m)$ amortized
work and $O(\log^2 m \log\log m)$ depth (parallel time) with high probability.
Our algorithm also maintains a low out-degree orientation of the graph in the
same bounds. We implemented and experimentally evaluated our algorithm on a
30-core machine with two-way hyper-threading on $11$ graphs of varying
densities and sizes. Compared to the state-of-the-art algorithms, our algorithm
achieves up to a 114.52x speedup against the best multicore implementation and
up to a 497.63x speedup against the best sequential algorithm, obtaining
results for graphs that are orders-of-magnitude larger than those used in
previous studies.
In addition, we present the first approximate static $k$-core algorithm with
linear work and polylogarithmic depth. We show that on a 30-core machine with
two-way hyper-threading, our implementation achieves up to a 3.9x speedup in
the static case over the previous state-of-the-art parallel algorithm.
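For readers unfamiliar with the object being maintained, the exact (static, sequential) coreness computation is a one-liner with networkx; the paper's contribution is a parallel, batch-dynamic $(2+\delta)$-approximation of these values. A minimal sketch on a standard toy graph:

    import networkx as nx

    G = nx.karate_club_graph()
    coreness = nx.core_number(G)   # vertex -> largest k such that it lies in a k-core
    print(max(coreness.values()), "is the maximum coreness")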
|
Salient object detection (SOD) is viewed as a pixel-wise saliency modeling
task by traditional deep learning-based methods. A limitation of current SOD
models is insufficient utilization of inter-pixel information, which usually
results in imperfect segmentation near edge regions and low spatial coherence.
As we demonstrate, using a saliency mask as the only label is suboptimal. To
address this limitation, we propose a connectivity-based approach called
bilateral connectivity network (BiconNet), which uses connectivity masks
together with saliency masks as labels for effective modeling of inter-pixel
relationships and object saliency. Moreover, we propose a bilateral voting
module to enhance the output connectivity map, and a novel edge feature
enhancement method that efficiently utilizes edge-specific features. Through
comprehensive experiments on five benchmark datasets, we demonstrate that our
proposed method can be plugged into any existing state-of-the-art
saliency-based SOD framework to improve its performance with negligible
parameter increase.
|
Reinforcement Learning (RL) has been able to solve hard problems such as
playing Atari games or solving the game of Go, with a unified approach. Yet
modern deep RL approaches are still not widely used in real-world applications.
One reason could be the lack of guarantees on the performance of the
intermediate executed policies, compared to an existing (already working)
baseline policy. In this paper, we propose an online model-free algorithm that
solves conservative exploration in the policy optimization problem. We show
that the regret of the proposed approach is bounded by
$\tilde{\mathcal{O}}(\sqrt{T})$ for both discrete and continuous parameter
spaces.
|
Rap generation, which aims to produce lyrics and corresponding singing beats,
needs to model both rhymes and rhythms. Previous works for rap generation
focused on rhyming lyrics but ignored rhythmic beats, which are important for
rap performance. In this paper, we develop DeepRapper, a Transformer-based rap
generation system that can model both rhymes and rhythms. Since there is no
available rap dataset with rhythmic beats, we first develop a data mining pipeline to
collect a large-scale rap dataset, which includes a large number of rap songs
with aligned lyrics and rhythmic beats. Second, we design a Transformer-based
autoregressive language model which carefully models rhymes and rhythms.
Specifically, we generate lyrics in the reverse order with rhyme representation
and constraint for rhyme enhancement and insert a beat symbol into lyrics for
rhythm/beat modeling. To our knowledge, DeepRapper is the first system to
generate rap with both rhymes and rhythms. Both objective and subjective
evaluations demonstrate that DeepRapper generates creative and high-quality
raps with rhymes and rhythms. Code will be released on GitHub.
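A toy sketch of the two preprocessing ideas, beat-symbol insertion and reverse-order generation; the token and symbol choices are illustrative, not DeepRapper's exact format.

    def prepare_line(tokens, beat_positions, beat_symbol="[BEAT]"):
        # Insert beat symbols, then reverse so generation starts at the rhyme
        with_beats = []
        for i, tok in enumerate(tokens):
            if i in beat_positions:
                with_beats.append(beat_symbol)
            with_beats.append(tok)
        return list(reversed(with_beats))

    print(prepare_line(["keep", "it", "real", "tonight"], beat_positions={0, 2}))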
|
The simulation of multi-body systems with frictional contacts is a
fundamental tool for many fields, such as robotics, computer graphics, and
mechanics. Hard frictional contacts are particularly troublesome to simulate
because they make the differential equations stiff, calling for computationally
demanding implicit integration schemes. We suggest tackling this issue by
using exponential integrators, a long-standing class of integration schemes
(first introduced in the 1960s) that has enjoyed a resurgence of interest in
recent years. We show that this scheme can be easily applied to multi-body
systems subject to stiff viscoelastic contacts, producing accurate results at
lower computational cost than classic explicit or implicit schemes. In our
tests with quadruped and biped robots, our method demonstrated stable behaviors
with large time steps (10 ms) and stiff contacts ($10^5$ N/m). Its excellent
properties, especially for fast and coarse simulations, make it a valuable
candidate for many applications in robotics, such as simulation, Model
Predictive Control, Reinforcement Learning, and controller design.
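A minimal sketch of the exponential Euler scheme on a toy stiff spring-damper contact, $\dot{x} = Ax + g(x)$, where the stiff linear part is integrated exactly through the matrix exponential (assuming $A$ invertible); the paper's multi-body formulation is considerably more involved, and the constants below are illustrative.

    import numpy as np
    from scipy.linalg import expm

    def exponential_euler(A, g, x0, h, steps):
        E = expm(A * h)                                # exact flow of the linear part
        phi = np.linalg.solve(A, E - np.eye(len(x0)))  # equals h * phi_1(A h)
        x = x0.copy()
        for _ in range(steps):
            x = E @ x + phi @ g(x)
        return x

    # Toy contact: stiffness 1e5 N/m, damping 300, 10 ms time steps
    A = np.array([[0.0, 1.0], [-1e5, -300.0]])
    print(exponential_euler(A, lambda x: np.array([0.0, 9.81]),
                            np.array([0.01, 0.0]), h=0.01, steps=100))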
|
The crossover in solving linear programs is a procedure to recover an optimal
corner/extreme point from an approximately optimal inner point generated by
an interior-point method or emerging first-order methods. Unfortunately, it is
often observed that the computation time of this procedure can be much longer
than that of the former stage. Our work shows that this bottleneck can be
significantly alleviated if the procedure smartly takes advantage of the
problem characteristics and implements customized strategies. For the problem
with a network structure, our approach can even start from an inexact
solution of an interior-point method as well as other emerging first-order
algorithms. It fully exploits the network structure to smartly evaluate
columns' potential of forming the optimal basis and efficiently identifies a
nearby basic feasible solution. For the problem with a large optimal face, we
propose a perturbation crossover approach to find a corner point of the optimal
face. Comparison experiments with state-of-the-art commercial LP solvers on
classical linear programming benchmarks, network flow benchmarks, and MNIST
datasets exhibit its considerable advantages in practice.
|
Synthetic Magnetic Resonance (MR) imaging predicts images at new design
parameter settings from a few observed MR scans. Model-based methods, that use
both the physical and statistical properties underlying the MR signal and its
acquisition, can predict images at any setting from as few as three scans,
allowing it to be used in individualized patient- and anatomy-specific
contexts. However, the estimation problem in model-based synthetic MR imaging
is ill-posed and so regularization, in the form of correlated Gaussian Markov
Random Fields, is imposed on the voxel-wise spin-lattice relaxation time,
spin-spin relaxation time and the proton density underlying the MR image. We
develop theoretically sound but computationally practical matrix-free
estimation methods for synthetic MR imaging. Our evaluations demonstrate the
excellent ability of our methods to synthesize MR images in a clinical
framework, as well as estimation and prediction accuracy and consistency. An added
strength of our model-based approach, also developed and illustrated here, is
the accurate estimation of standard errors of regional means in the synthesized
images.
|
Universal Domain Adaptation (UNDA) aims to handle both domain-shift and
category-shift between two datasets, where the main challenge is to transfer
knowledge while rejecting unknown classes which are absent in the labeled
source data but present in the unlabeled target data. Existing methods manually
set a threshold to reject unknown samples based on validation or a pre-defined
ratio of unknown samples, but this strategy is not practical. In this paper, we
propose a method to learn the threshold using source samples and to adapt it to
the target domain. Our idea is that a minimum inter-class distance in the
source domain should be a good threshold to decide between known or unknown in
the target. To learn the inter- and intra-class distances, we propose to train a
one-vs-all classifier for each class using labeled source data. Then, we adapt
the open-set classifier to the target domain by minimizing class entropy. The
resulting framework is the simplest of all UNDA baselines and is insensitive
to the value of its hyper-parameter, yet outperforms the baselines by a large
margin.
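A simplified reading of the thresholding idea: the minimum pairwise distance between source class prototypes as the known/unknown boundary. The paper learns this with one-vs-all open-set classifiers rather than raw prototype distances, so treat the sketch below as a conceptual stand-in.

    import numpy as np

    def threshold_from_source(features, labels):
        # Minimum inter-class distance between source class prototypes
        classes = np.unique(labels)
        protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(protos[:, None] - protos[None, :], axis=-1)
        d[np.diag_indices_from(d)] = np.inf
        return d.min()

    rng = np.random.default_rng(0)
    feats, labs = rng.standard_normal((100, 16)), rng.integers(0, 5, 100)
    print(threshold_from_source(feats, labs))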
|
Following Demidovich's concept and definition of convergent systems, we
analyze the optimal nonlinear damping control recently proposed [1] for
second-order systems. Targeting the problem of output regulation, and
correspondingly the tracking of $\mathcal{C}^1$-trajectories, we show that all
solutions of the control system are globally uniformly asymptotically stable.
The existence of a unique limit solution at the origin of the control-error
and time-derivative coordinates is shown in the sense of Demidovich's
convergent dynamics. Explanatory numerical examples are also provided along
with analysis.
|
We review the development of thermodynamic protein hydropathic scaling
theory, starting from backgrounds in mathematics and statistical mechanics, and
leading to biomedical applications. Darwinian evolution has organized each
protein family in different ways, but dynamical hydropathic scaling theory is
both simple and effective in providing readily transferable dynamical insights
for many proteins represented in the uncounted amino acid sequences, as well as
the 90 thousand static structures contained in the online Protein Data Bank.
Critical point theory is general, and recently it has proved to be the most
effective way of describing protein networks that have evolved towards nearly
perfect functionality in given environments, self-organized criticality.
Darwinian evolutionary patterns are governed by common dynamical hydropathic
scaling principles, which can be quantified using scales that have been
developed bioinformatically by studying thousands of static PDB structures. The
most effective dynamical scales involve hydropathic globular sculpting
interactions averaged over length scales centered on domain dimensions. A
central feature of dynamical hydropathic scaling theory is the characteristic
domain length associated with a given protein functionality. Evolution has
functioned in such a way that the minimal critical length scale established so
far is about nine amino acids, but in some cases it is much larger. Some
ingenuity is needed to find this primary length scale, as shown by the examples
discussed here. Often a survey of the Darwinian evolution of a protein sequence
suggests a means of determining the critical length scale. The evolution of
Coronavirus is an interesting application; it identifies critical mutations.
|
In functional analysis, there are different notions of limit for a bounded
sequence of $L^1$ functions. Besides the pointwise limit, that does not always
exist, the behaviour of a bounded sequence of $L^1$ functions can be described
in terms of its weak-$\star$ limit or by introducing a measure-valued notion of
limit in the sense of Young measures. Working in Robinson's framework of
analysis with infinitesimals, we show that for every bounded sequence
$\{z_n\}_{n \in \mathbb{N}}$ of $L^1$ functions there exists a function of a
hyperfinite domain (i.e.\ a grid function) that represents both the
weak-$\star$ and the Young measure limits of the sequence. This result has
relevant applications to the study of nonlinear PDEs. We discuss the example of
an ill-posed forward-backward parabolic equation.
|
We resolve several puzzles related to the electromagnetic response of
topological superconductors in 3+1 dimensions. In particular we show by an
analytical calculation that the interface between a topological and normal
superconductor does not exhibit any quantum Hall effect as long as time
reversal invariance is preserved. We contrast this with the analogous case of a
topological insulator to normal insulator interface. The difference is that in
the topological insulator the electromagnetic vector potential couples to a
vector current in a theory with a Dirac mass, while in the superconductor a
pair of Weyl fermions are gapped by Majorana masses and the electromagnetic
vector potential couples to their axial currents.
|
A novel scheme is proposed for generating a polarized positron beam via
multiphoton Breit-Wheeler process during the collision of a 10 GeV, pC seeding
electron beam with another 1 GeV, nC driving electron beam. The driving beam
provides the strong self-generated field, and a suitable transverse deviation
distance between two beams enables the field experienced by the seeding beam to
be unipolar, which is crucial for realizing the positron polarization. We
employ particle simulation with a Monte Carlo method to calculate the spin-
and polarization-resolved photon emission and electron-positron pair production
in the local constant field approximation. Our simulation results show that a
highly polarized positron beam with polarization above $40\%$ can be generated
in several femtoseconds, which is robust with respect to parameters of two
electron beams. Based on an analysis of the influence of $\gamma$-photon
polarization on the polarized pair production, we find that a polarized seeding
beam of the proper initial polarization can further improve the positron
polarization to $60\%$.
|
Financial portfolio management is one of the most applicable problems in
reinforcement learning (RL) owing to its sequential decision-making nature.
Existing RL-based approaches, while inspiring, often lack scalability,
reusability, or profundity of intake information to accommodate the
ever-changing capital markets. In this paper, we propose MSPM, a modularized
and scalable, multi-agent RL-based system for financial portfolio management.
MSPM involves two asynchronously updated units: an Evolving Agent Module (EAM)
and a Strategic Agent Module (SAM). A self-sustained EAM produces
signal-comprised information for a specific asset using heterogeneous data
inputs, and the reusability of each EAM allows it to connect to multiple
SAMs. An SAM is responsible for asset reallocation in a portfolio using
profound information from the connected EAMs. With the elaborate architecture
and the multi-step condensation of volatile market information, MSPM aims to
provide a customizable, stable, and dedicated solution to portfolio management,
unlike existing approaches. We also tackle the data-shortage issue of
newly-listed stocks by transfer learning, and validate the indispensability of
EAM with four different portfolios. Experiments on 8-year U.S. stock market
data demonstrate the effectiveness of MSPM in profit accumulation through its
outperformance of existing benchmarks.
|
We present an upper bound for the essential norm of the composition operator
over the Polylogarithmic Hardy space PL2(D;s). The results involve the
Nevanlinna counting function for PL2(D;s). We first prove the Littlewood-Paley
identity for PL2(D;s), which leads to the Nevanlinna counting function for
PL2(D;s). With these results, not only do we obtain an upper bound for the
essential norm of the composition operator over PL2(D;s), but we also obtain
an upper bound in terms of the angular derivative and the essential norm of
the composition operator over the Hardy space H2.
|
We present an optimization-based method to efficiently calculate accurate
nonlinear models of Taylor vortex flow. We use the resolvent formulation of
McKeon & Sharma (2010) to model these Taylor vortex solutions by treating the
nonlinearity not as an inherent part of the governing equations but rather as a
triadic constraint which must be satisfied by the model solution. We exploit
the low rank linear dynamics of the system to calculate an efficient basis for
our solution, the coefficients of which are then calculated through an
optimization problem where the cost function to be minimized is the triadic
consistency of the solution with itself as well as with the input mean flow.
Our approach constitutes what is, to the best of our knowledge, the first
fully nonlinear, self-sustaining, resolvent-based model described in the
literature. We compare our results to direct numerical simulation of Taylor-
Couette flow at up to five times the critical Reynolds number, and show that
our model accurately captures the structure of the flow. Additionally, we find
that as the Reynolds number increases the flow undergoes a fundamental
transition from a classical weakly nonlinear regime, where the forcing cascade
is strictly down-scale, to a fully nonlinear regime characterized by the
emergence of an inverse (up-scale) forcing cascade. Triadic contributions from
the inverse and traditional cascade destructively interfere implying that the
accurate modeling of a certain Fourier mode requires knowledge of its immediate
harmonic and sub-harmonic. We show analytically that this finding is a direct
consequence of the structure of the quadratic nonlinearity of the governing
equations formulated in Fourier space. Finally, we show that using our model
solution as an initial condition to a higher Reynolds number DNS significantly
reduces the time to convergence.
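The structural argument can be stated compactly. As a sketch, writing $\hat{u}_k$ for the Fourier coefficient of mode $k$, the quadratic nonlinearity forces mode $k$ through the convolution

$$ \hat{f}_k = \sum_{m + n = k} \hat{u}_m \hat{u}_n, $$

so triads such as $(m,n) = (2k, -k)$ and $(k/2, k/2)$ couple mode $k$ directly to its immediate harmonic and sub-harmonic, consistent with the interference effect described above.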
|
In this paper, we study the problem of federated learning over a wireless
channel with user sampling, modeled by a Gaussian multiple access channel,
subject to central and local differential privacy (DP/LDP) constraints. It has
been shown that the superposition nature of the wireless channel provides a
dual benefit of bandwidth efficient gradient aggregation, in conjunction with
strong DP guarantees for the users. Specifically, the central DP privacy
leakage has been shown to scale as $\mathcal{O}(1/K^{1/2})$, where $K$ is the
number of users. It has also been shown that user sampling coupled with
orthogonal transmission can enhance the central DP privacy leakage with the
same scaling behavior. In this work, we show that, by jointly incorporating both
wireless aggregation and user sampling, one can obtain even stronger privacy
guarantees. We propose a private wireless gradient aggregation scheme, which
relies on independently randomized participation decisions by each user. The
central DP leakage of our proposed scheme scales as $\mathcal{O}(1/K^{3/4})$.
In addition, we show that LDP is also boosted by user sampling. We also present
analysis for the convergence rate of the proposed scheme and study the
tradeoffs between wireless resources, convergence, and privacy theoretically
and empirically for two scenarios when the number of sampled participants is
$(a)$ known, or $(b)$ unknown at the parameter server.
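A toy sketch of the scheme's two amplification ingredients: independent per-user participation decisions and noisy over-the-air summation. All constants are illustrative, and the real scheme's noise and power allocation follow the paper's DP analysis rather than this simplification.

    import numpy as np

    def sampled_wireless_aggregate(gradients, p=0.5, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        mask = rng.random(len(gradients)) < p        # independent participation
        total = sum(g for g, keep in zip(gradients, mask) if keep)
        noise = sigma * rng.standard_normal(gradients[0].shape)
        return (total + noise) / max(mask.sum(), 1)  # noisy channel sum

    grads = [np.ones(10) * i for i in range(20)]     # placeholder user gradients
    print(sampled_wireless_aggregate(grads))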
|
We introduce a novel hybrid algorithm to simulate the real-time evolution of
quantum systems using parameterized quantum circuits. The method, named
"projected - Variational Quantum Dynamics" (p-VQD) realizes an iterative,
global projection of the exact time evolution onto the parameterized manifold.
In the small time-step limit, this is equivalent to the McLachlan's variational
principle. Our approach is efficient in the sense that it exhibits an optimal
linear scaling with the total number of variational parameters. Furthermore, it
is global in the sense that it uses the variational principle to optimize all
parameters at once. The global nature of our approach then significantly
extends the scope of existing efficient variational methods, which instead
typically rely on the iterative optimization of a restricted subset of
variational parameters. Through numerical experiments, we also show that our
approach is particularly advantageous over existing global optimization
algorithms based on the time-dependent variational principle that, due to a
demanding quadratic scaling with parameter numbers, are unsuitable for large
parameterized quantum circuits.
|
Crises like the COVID-19 pandemic pose a serious challenge to health-care
institutions. They need to plan the resources required for handling the
increased load, for instance, hospital beds and ventilators. To support the
resource planning of local health authorities from the Cologne region,
BaBSim.Hospital, a tool for capacity planning based on discrete event
simulation, was created. The predictive quality of the simulation is determined
by 29 parameters. Reasonable default values of these parameters were obtained
in detailed discussions with medical professionals. We aim to investigate and
optimize these parameters to improve BaBSim.Hospital. First approaches with
"out-of-the-box" optimization algorithms failed. Implementing a surrogate-based
optimization approach generated useful results in a reasonable time. To
understand the behavior of the algorithm and to get valuable insights into the
fitness landscape, an in-depth sensitivity analysis was performed. The
sensitivity analysis is crucial for the optimization process because it allows
focusing the optimization on the most important parameters. We illustrate how
this reduces the problem dimension without compromising the resulting accuracy.
The presented approach is applicable to many other real-world problems, e.g.,
the development of new elevator systems to cover the last mile or simulation of
student flow in academic study periods.
|
The recent investigation of chains of Rydberg atoms has brought back the
problem of commensurate-incommensurate transitions into the focus of current
research. In 2D classical systems, or in 1D quantum systems, the commensurate
melting of a period-p phase with p larger than 4 is known to take place through
an intermediate floating phase where correlations between domain walls or
particles decay only as a power law, but when p is equal to 3 or 4, it has been
argued by Huse and Fisher that the transition could also be direct and
continuous in a non-conformal chiral universality class with a dynamical
exponent larger than 1. This is only possible, however, if the floating phase
terminates at a Lifshitz point before reaching the conformal point, a
possibility debated since then. Here we argue that this is a generic feature of
models where the number of particles is not conserved because the exponent of
the floating phase changes along the Pokrovsky-Talapov transition and can thus
reach the value at which the floating phase becomes unstable. Furthermore, we
show numerically that this scenario is realized in an effective model of the
period-3 phase of Rydberg chains in which hard-core bosons are created and
annihilated three by three: The Luttinger liquid parameter reaches the critical
value $p^2/8=9/8$ along the Pokrovsky-Talapov transition, leading to a Lifshitz
point that separates the floating phase from a chiral transition. Implications
beyond Rydberg atoms are briefly discussed.
|
Two-dimensional (2D) magnetic materials are essential for the development of
the next-generation spintronic technologies. Recently, layered van der Waals
(vdW) compound MnBi2Te4 (MBT) has attracted great interest, and its 2D
structure has been reported to host coexisting magnetism and topology. Here, we
design several conceptual nanodevices based on MBT monolayer (MBT-ML) and
reveal their spin-dependent transport properties by means of the
first-principles calculations. The pn-junction diodes and sub-3-nm pin-junction
field-effect transistors (FETs) show a strong rectifying effect and a spin
filtering effect, with an ideality factor n close to 1 even at a reasonably
high temperature. In addition, the pip- and nin-junction FETs exhibit an
interesting negative differential resistance (NDR) effect. The gate voltages can
tune currents through these FETs in a large range. Furthermore, the MBT-ML has
a strong response to light. Our results uncover the multifunctional nature of
MBT-ML and pave the way for its applications in diverse next-generation
semiconductor spin electronic devices.
|
This is a brief survey of classical and recent results about the typical
behavior of eigenvalues of large random matrices, written for mathematicians
and others who study and use matrices but may not be accustomed to thinking
about randomness.
|
Inspection of available data on the decay exponent for the kinetic energy of
homogeneous and isotropic turbulence (HIT) shows that it varies by as much as
100\%. Measurements and simulations often show no correspondence with
theoretical arguments, which are themselves varied. This situation is
unsatisfactory given that HIT is a building block of turbulence theory and
modeling. We take recourse to a large base of direct numerical simulations and
study decaying HIT for a variety of initial conditions. We show that the
Kolmogorov and Birkhoff-Saffman decay exponents are both readily
observed, albeit approximately, for long periods of time if the initial
conditions are appropriately arranged. We also present, for both cases, other
turbulent statistics such as the velocity derivative skewness, energy spectra
and dissipation, and show that the decay and growth laws are approximately as
expected theoretically, though the wavenumber spectrum near the origin begins
to change relatively quickly, suggesting that the invariants do not strictly
exist. We comment briefly on why the decay exponent has varied so widely in
past experiments and simulations.
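For reference, the two classical predictions referred to above correspond to kinetic-energy decay laws (stated here from the standard theory, not from the simulations):

$$ E(t) \sim t^{-10/7} \ \text{(Kolmogorov)}, \qquad E(t) \sim t^{-6/5} \ \text{(Birkhoff-Saffman)}, $$

following from the approximate invariance of the Loitsyansky and Saffman integrals, respectively.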
|
This paper proposes a novel Neighbourhood Rough Set based approach for
supervised Multi-document Text Summarization (MDTS) and analyzes its impact on
the summarization results. Here, the Rough Set based LERS algorithm is
improved using Neighborhood Rough Sets, a novel combination called
Neighborhood-LERS, which we evaluate for efficacy and efficiency. We apply and
evaluate the proposed Neighborhood-LERS for multi-document summarization,
showing experimentally that it is superior to the base LERS technique for
MDTS.
|
We experimentally demonstrate the efficient generation of circularly
polarized pulses tunable from the vacuum to deep ultraviolet (160-380 nm)
through resonant dispersive wave emission from optical solitons in a gas-filled
hollow capillary fiber. In the deep ultraviolet we measure up to 13 microjoules
of pulse energy, and from numerical simulations, we estimate the shortest
output pulse duration to be 8.5 fs. We also experimentally verify that simply
scaling the pulse energy by 3/2 between linearly and circularly polarized
pumping closely reproduces the soliton and dispersive wave dynamics. Based on
previous results with linearly polarized self-compression and resonant
dispersive wave emission, we expect our technique to be extended to produce
circularly polarized few-fs pulses further into the vacuum ultraviolet, and few
to sub-fs circularly polarized pulses in the near-infrared.
|
As the representations output by Graph Neural Networks (GNNs) are
increasingly employed in real-world applications, it becomes important to
ensure that these representations are fair and stable. In this work, we
establish a key connection between counterfactual fairness and stability and
leverage it to propose a novel framework, NIFTY (uNIfying Fairness and
stabiliTY), which can be used with any GNN to learn fair and stable
representations. We introduce a novel objective function that simultaneously
accounts for fairness and stability and develop a layer-wise weight
normalization using the Lipschitz constant to enhance neural message passing in
GNNs. In doing so, we enforce fairness and stability both in the objective
function as well as in the GNN architecture. Further, we show theoretically
that our layer-wise weight normalization promotes counterfactual fairness and
stability in the resulting representations. We introduce three new graph
datasets comprising of high-stakes decisions in criminal justice and financial
lending domains. Extensive experimentation with the above datasets demonstrates
the efficacy of our framework.
|
The estimation of the covariance function of a stochastic process, or signal,
is of integral importance for a multitude of signal processing applications. In
this work, we derive closed-form expressions for the variance of covariance
estimates for mixed-spectrum signals, i.e., spectra containing both absolutely
continuous and singular parts. The results cover both finite-sample and
asymptotic regimes, allowing for assessing the exact speed of convergence of
estimates to their expectations, as well as their limiting behavior. As is
shown, such covariance estimates may converge even for non-ergodic processes.
Furthermore, we consider approximating signals with arbitrary spectral
densities by sequences of singular spectrum, i.e., sinusoidal, processes, and
derive the limiting behavior of covariance estimates as both the sample size
and the number of sinusoidal components tend to infinity. We show that the
asymptotic regime variance can be described by a time-frequency resolution
product, with dramatically different behavior depending on how the sinusoidal
approximation is constructed. In a few numerical examples we illustrate the
theory and the corresponding implications for direction of arrival estimation.
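For concreteness, here is a biased sample autocovariance estimate applied to a toy mixed-spectrum signal: a random-phase sinusoid (singular part) plus white noise (absolutely continuous part). The paper's results characterize the variance and limiting behavior of exactly such estimates; the signal parameters below are illustrative.

    import numpy as np

    def autocovariance(x, max_lag):
        # Biased sample autocovariance r(k), k = 0..max_lag
        x = x - x.mean()
        n = len(x)
        return np.array([(x[:n - k] * x[k:]).sum() / n for k in range(max_lag + 1)])

    rng = np.random.default_rng(0)
    t = np.arange(4096)
    x = np.sqrt(2) * np.cos(2 * np.pi * 0.1 * t + rng.uniform(0, 2 * np.pi))
    x += rng.standard_normal(len(t))       # white-noise (continuous) component
    print(autocovariance(x, 5))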
|
Successful navigation of a rigid body traveling with six degrees of freedom
(6 DoF) requires accurate estimation of attitude, position, and linear
velocity. The true navigation dynamics are highly nonlinear and are modeled on
the matrix Lie group SE2(3). This paper presents novel geometric nonlinear
continuous stochastic navigation observers on SE2(3) capturing the true
nonlinearity of the problem. The proposed observers combine IMU and landmark
measurements and efficiently handle the IMU measurement noise. They are
guaranteed to be almost semi-globally uniformly ultimately bounded in the mean
square. A quaternion representation is provided. A real-world quadrotor
measurement dataset is used to validate the effectiveness of the proposed
observers in their discrete form. Keywords: Inertial navigation,
stochastic system, Brownian motion process, stochastic filter algorithm,
stochastic differential equation, Lie group, SE(3), SO(3), pose estimator,
position, attitude, feature measurement, inertial measurement unit, IMU.
|
We study a graph search problem in which a team of searchers attempts to find
a mobile target located in a graph. Assuming that (a) the visibility field of
the searchers is limited, (b) the searchers have unit speed and (c) the target
has infinite speed, we formulate the Limited Visibility Graph Search (LVGS)
problem and present the LVGS algorithm, which produces a search schedule
guaranteed to find the target in the minimum possible number of steps. Our LVGS
algorithm is a conversion of Guibas and LaValle's polygonal region search
algorithm.
|
The objective of this paper is to analyze the existence of equilibria for a
class of deterministic mean field games of controls. The interaction between
players is due to both a congestion term and a price function which depends on
the distributions of the optimal strategies. Moreover, final state and mixed
state-control constraints are considered, the dynamics being nonlinear and
affine with respect to the control. The existence of equilibria is obtained by
Kakutani's theorem, applied to a fixed point formulation of the problem.
Finally, uniqueness results are shown under monotonicity assumptions.
|