In this paper we introduce a realistic and challenging, multi-source and
multi-room acoustic environment and an improved algorithm for the estimation of
source-dominated microphone clusters in acoustic sensor networks. Our proposed
clustering method is based on a single microphone per node and on unsupervised
clustered federated learning which employs a light-weight autoencoder model. We
present an improved clustering control strategy that takes into account the
variability of the acoustic scene and allows the estimation of a dynamic range
of clusters using reduced amounts of training data. The proposed approach is
optimized using clustering-based measures and validated via a network-wide
classification task.
|
In previous work, we constructed for a smooth complex variety $X$ and for a
linear algebraic group $G$ a mixed Hodge structure on the complete local ring
$\widehat{\mathcal{O}}_\rho$ of the moduli space of representations of the
fundamental group $\pi_1(X,x)$ into $G$ at a representation $\rho$ underlying a
variation of mixed Hodge structure. We now show that the jump ideals $J_k^i
\subset \widehat{\mathcal{O}}_\rho$, defining the locus of representations such
that the dimension of the cohomology of $X$ in degree $i$ of the associated
local system is greater than $k$, are sub-mixed Hodge structures; this is in
accordance with various known motivicity results for these loci. In rank one we
also recover known cases, and find new ones, where these loci are translated sub-tori of
the moduli of representations. Our methods are first transcendental, relying on
Hodge theory, and then combined with tools of homotopy and algebra.
|
We show that the concept of the ADM mass in general relativity can be
understood as the limit of the total mean curvature plus the total defect of
dihedral angle of the boundary of large Riemannian polyhedra. We also express
the $n$-dimensional mass as a suitable integral of geometric quantities that
determine the $(n-1)$-dimensional mass.
|
Treatment planning in low-dose-rate prostate brachytherapy (LDR-PB) aims to
produce an arrangement of implantable radioactive seeds that delivers a minimum
prescribed dose to the prostate whilst minimizing toxicity to healthy tissues.
There can be multiple seed arrangements that satisfy this dosimetric criterion,
but not all are deemed 'acceptable' for implant from a physician's perspective. This
leads to plans that are subject to the physician's/centre's preference,
planning style, and expertise. We propose a method that aims to reduce this
variability by training a model to learn from a large pool of successful
retrospective LDR-PB data (961 patients) and create consistent plans that mimic
the high-quality manual plans. Our model is based on conditional generative
adversarial networks that use a novel loss function for penalizing the model on
spatial constraints of the seeds. An optional optimizer based on a simulated
annealing (SA) algorithm can be used to further fine-tune the plans if
necessary (determined by the treating physician). Performance analysis was
conducted on 150 test cases, demonstrating results comparable to those of the manual retrospective plans. On average, the clinical target volume covering
100% of the prescribed dose was 98.9% for our method compared to 99.4% for
manual plans. Moreover, using our model, the planning time was significantly
reduced to an average of 2.5 mins/plan with SA, and less than 3 seconds without
SA. Compared to this, manual planning at our centre takes around 20 mins/plan.
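For illustration, the following is a minimal sketch of the optional simulated-annealing fine-tuning step, assuming a user-supplied cost function (here called plan_cost) that scores a candidate seed arrangement against the dosimetric criteria; the plan encoding, neighbourhood move and cooling schedule are illustrative assumptions, not the authors' implementation.

import math
import random

def anneal_plan(plan, plan_cost, n_iters=5000, t0=1.0, alpha=0.999):
    """Fine-tune a seed arrangement by simulated annealing.
    plan      : list of candidate-site indices currently holding a seed (hypothetical encoding)
    plan_cost : callable scoring a plan against dosimetric targets (lower is better)
    """
    current, best = list(plan), list(plan)
    cur_cost = best_cost = plan_cost(current)
    temp = t0
    for _ in range(n_iters):
        candidate = list(current)
        i = random.randrange(len(candidate))
        candidate[i] += random.choice([-1, 1])  # move one seed to a neighbouring site (illustrative move)
        cand_cost = plan_cost(candidate)
        # Always accept improvements; accept worse plans with Boltzmann probability.
        if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / temp):
            current, cur_cost = candidate, cand_cost
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        temp *= alpha  # geometric cooling
    return best, best_cost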
|
We call a real algebraic hypersurface in $(\mathbb{C}^*)^n$ simplicial if it
is given by a real Laurent polynomial in $n$ variables that has exactly $n+1$
monomials with non-zero coefficients and such that the convex hull in
$\mathbb{R}^n$ of the $n+1$ points of $\mathbb{Z} ^n$ corresponding to the
exponents is a non-degenerate $n$-dimensional simplex. Such hypersurfaces are
natural building blocks from which more complicated objects can be constructed,
for example using O. Viro's Patchworking method. Drawing inspiration from
related work by G. Kerr and I. Zharkov, we describe the action of complex
conjugation on the homology of the coamoebas of simplicial real algebraic
hypersurfaces, hoping it might prove useful in a variety of problems related to
topology of real algebraic varieties. In particular, assuming a reasonable
conjecture, we identify the conditions under which such a hypersurface is
Galois maximal.
|
We consider a stationary Markov process that models certain queues with a
bulk service of a fixed number $m$ of admitted customers. We find an integral
expression of its transition probability function in terms of certain
multi-orthogonal polynomials. We study the convergence of the appropriate
scheme of simultaneous quadrature rules to design an algorithm for computing
this integral expression.
|
There is a growing interest in the area of machine learning and creativity.
This survey presents an overview of the history and the state of the art of
computational creativity theories, machine learning techniques, including
generative deep learning, and corresponding automatic evaluation methods. After
presenting a critical discussion of the key contributions in this area, we
outline the current research challenges and emerging opportunities in this
field.
|
For the first time the scientific community in Latin America working at the
forefront of research in high energy, cosmology and astroparticle physics
(HECAP) has come together to discuss and provide scientific input towards the
development of a regional strategy.
The present document, the Latin American HECAP Physics Briefing Book, is the
result of this ambitious bottom-up effort. This report contains the work
performed by the Preparatory Group to synthesize the main contributions and
discussions for each of the topical working groups. This briefing book
discusses the relevant emerging projects developing in the region and considers
potentially impactful future initiatives and participation of the Latin
American HECAP community in international flagship projects to provide the
essential input for the creation of a long-term HECAP strategy in the region.
|
We propose a collocation and quasi-collocation method for solving second
order boundary value problems $L_2 y=f$, in which the differential operator
$L_2$ can be represented in the product formulation, aiming mostly at singular
and singularly perturbed boundary value problems. Seeking an approximating
Canonical Complete Chebyshev spline $s$ by a collocation method leads to the requirement that $L_2 s$ interpolates the function $f$. In the quasi-collocation method, on the other hand, we require that $L_2 s$ equals an approximation
of $f$ by the Schoenberg operator. We present the computation of both methods based on the Green's function and give their error bounds.
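In notation introduced here only for illustration (with $\tau_1,\dots,\tau_N$ denoting the collocation points and $S$ the Schoenberg operator), the two defining conditions read $(L_2 s)(\tau_j) = f(\tau_j)$ for $j = 1,\dots,N$ in the collocation method, versus $L_2 s = S f$ in the quasi-collocation method.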
|
We study theoretically the phase diagram of strongly-coupled two-dimensional
Bose-Fermi mixtures interacting with attractive short-range potentials as a
function of the particle densities. We focus on the limit where the size of the
bound state between a boson and a fermion is small compared to the average
inter-boson separation and develop a functional renormalization group approach
that accounts for the bound-state physics arising from the extended Fr\"ohlich
Hamiltonian. By including three-body correlations we are able to reproduce the
polaron-to-molecule transition in two-dimensional Fermi gases in the extreme
limit of vanishing boson density. We predict frequency- and momentum-resolved
spectral functions and study the impact of three-body correlations on
quasiparticle properties. At finite boson density, we find that when the bound
state energy exceeds the Fermi energy by a critical value, the fermions and
bosons can form a fermionic composite with a well-defined Fermi surface. These
composites constitute a Fermi sea of dressed Feshbach molecules in the case of
ultracold atoms while in the case of atomically thin semiconductors a trion
liquid emerges. As the boson density is increased further, the effective energy
gap of the composites decreases, leading to a transition into a
strongly-correlated phase where polarons are hybridized with molecular degrees
of freedom. We highlight the universal connection between two-dimensional
semiconductors and ultracold atoms and we discuss perspectives for further
exploring the rich structure of strongly-coupled Bose-Fermi mixtures in these
complementary systems.
|
This paper concerns a convex, stochastic zeroth-order optimization (S-ZOO)
problem, where the objective is to minimize the expectation of a cost function
and its gradient is not accessible directly. To solve this problem, traditional
optimization techniques mostly yield query complexities that grow polynomially
with dimensionality, i.e., the number of function evaluations is a polynomial
function of the number of decision variables. Consequently, these methods may
not perform well in solving massive-dimensional problems arising in many modern
applications. Although more recent methods can be provably
dimension-insensitive, almost all of them work with arguably more stringent
conditions such as everywhere sparse or compressible gradients. Thus, prior to
this research, it was unknown whether dimension-insensitive S-ZOO is possible
without such conditions. In this paper, we give an affirmative answer to this
question by proposing a sparsity-inducing stochastic gradient-free (SI-SGF)
algorithm. It is proved to achieve dimension-insensitive query complexity in
both convex and strongly convex cases when neither gradient sparsity nor
gradient compressibility is satisfied. Our numerical results demonstrate the
strong potential of the proposed SI-SGF compared with existing alternatives.
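As a rough illustration of the kind of update a sparsity-inducing gradient-free method performs, the sketch below combines a two-point zeroth-order gradient estimate with a soft-thresholding step; the function names, step sizes, and thresholding rule are assumptions for illustration, not the SI-SGF algorithm as specified in the paper.

import numpy as np

def si_sgf_step(x, f, step=0.1, smoothing=1e-3, n_dirs=10, threshold=1e-2):
    """One illustrative sparsity-inducing, zeroth-order update.
    x : current iterate (1-D array)
    f : objective accessible only through (possibly noisy) function queries
    """
    d = x.size
    grad_est = np.zeros(d)
    for _ in range(n_dirs):
        u = np.random.randn(d)
        # Two-point finite-difference estimate along a random direction.
        grad_est += (f(x + smoothing * u) - f(x - smoothing * u)) / (2 * smoothing) * u
    grad_est /= n_dirs
    x_new = x - step * grad_est
    # Soft-thresholding induces sparsity in the iterate.
    return np.sign(x_new) * np.maximum(np.abs(x_new) - threshold, 0.0)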
|
Using brain imaging quantitative traits (QTs) to identify the genetic risk
factors is an important research topic in imaging genetics. Many efforts have
been made via building linear models, e.g. linear regression (LR), to extract
the association between imaging QTs and genetic factors such as single
nucleotide polymorphisms (SNPs). However, to the best of our knowledge, these linear models cannot fully uncover the complicated relationship due to the loci's elusive and diverse impacts on imaging QTs. Though deep learning models can extract nonlinear relationships, they cannot select relevant genetic factors. In this paper, we propose a novel multi-task deep feature selection
(MTDFS) method for brain imaging genetics. MTDFS first adds a multi-task
one-to-one layer and imposes a hybrid sparsity-inducing penalty to select
relevant SNPs making significant contributions to abnormal imaging QTs. It then
builds a multi-task deep neural network to model the complicated associations
between imaging QTs and SNPs. MTDFS can not only extract the nonlinear
relationship but also arm the deep neural network with the feature selection
capability. We compared MTDFS to both LR and single-task DFS (DFS) methods on
the real neuroimaging genetic data. The experimental results showed that MTDFS
performed better than both LR and DFS in terms of the QT-SNP relationship
identification and feature selection. In summary, MTDFS is powerful for
identifying risk loci and could be a great supplement to the method library for
brain imaging genetics.
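A minimal sketch of the two ingredients named above, under our own simplifying assumptions: a multi-task one-to-one (element-wise) input layer with one weight vector per task, and a hybrid sparsity penalty combining an element-wise L1 term with a group L2,1 term across tasks; the exact MTDFS layer and penalty may differ.

import numpy as np

def one_to_one_layer(X, W):
    """Element-wise (one-to-one) layer: one weight per SNP and per task.
    X : (n_samples, n_snps) genotype matrix
    W : (n_tasks, n_snps) task-specific one-to-one weights
    Returns a (n_tasks, n_samples, n_snps) array of re-weighted inputs.
    """
    return W[:, None, :] * X[None, :, :]

def hybrid_sparsity_penalty(W, lam_l1=1e-3, lam_group=1e-3):
    """The L1 term encourages per-task sparsity; the L2,1 (group) term
    encourages each SNP to be selected or dropped jointly across tasks."""
    l1 = np.abs(W).sum()
    group = np.sqrt((W ** 2).sum(axis=0)).sum()  # column-wise group norm
    return lam_l1 * l1 + lam_group * group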
|
A rectangular dual of a graph $G$ is a contact representation of $G$ by
axis-aligned rectangles such that (i)~no four rectangles share a point and
(ii)~the union of all rectangles is a rectangle. The partial representation
extension problem for rectangular duals asks whether a given partial
rectangular dual can be extended to a rectangular dual, that is, whether there
exists a rectangular dual where some vertices are represented by prescribed
rectangles. Combinatorially, a rectangular dual can be described by a regular
edge labeling (REL), which determines the orientations of the rectangle
contacts.
We describe two approaches to solve the partial representation extension
problem for rectangular duals with given REL. On the one hand, we characterise
the RELs that admit an extension, which leads to a linear-time testing
algorithm. In the affirmative, we can construct an extension in linear time.
This partial representation extension problem can also be formulated as a
linear program (LP). We use this LP to solve the simultaneous representation
problem for the case of rectangular duals when each input graph is given
together with a REL.
|
Suitable composable data center networks (DCNs) are essential to support the
disaggregation of compute components in highly efficient next generation data
centers (DCs). However, designing such composable DCNs can be challenging. A
composable DCN that adopts a full mesh backplane between disaggregated compute
components within a rack and employs dedicated interfaces on each
point-to-point link is wasteful and expensive. In this paper, we propose and
describe two (i.e., electrical, and electrical-optical) variants of a network
for composable DC (NetCoD). NetCoD adopts a targeted design to reduce the
number of transceivers required when a mesh physical backplane is deployed
between disaggregated compute components in the same rack. The targeted design
leverages optical communication techniques and components to achieve this with
minimal or no network performance degradation. We formulate a MILP model to
evaluate the performance of both variants of NetCoD in rack-scale composable
DCs that implement different forms of disaggregation. The electrical-optical
variant of NetCoD achieves similar performance as a reference network while
utilizing fewer transceivers per compute node. The targeted adoption of optical
technologies by both variants of NetCoD achieves 4-5 times greater utilization of available network throughput than the reference network, which implements a generic design. Under the various forms of disaggregation considered, both variants of NetCoD achieve near-optimal compute energy
efficiency in the composable DC while satisfying both compute and network
constraints. This is because only a marginal concession of optimal compute energy
efficiency is often required to achieve overall optimal energy efficiency in
composable DCs.
|
The present study reports the effect of different source terms on the near
and far-field acoustic characteristics of compressible flow over a rectangular
cavity using hybrid computational aeroacoustics methodology. We use a low
dispersive and dissipative compressible fluid flow solver in conjunction with
an acoustic perturbation equation solver based on the spectral/hp element
method. The hybrid approach involves calculating the base fields and the
acoustic sources from a fluid simulation in the first step. In the next step,
the acoustic solver utilizes the variables to predict the acoustic propagation
due to the given sources. The validation of the methodology against benchmark
cases yields accurate results when compared with the existing literature. The study is then extended to assess the importance of the entropy
source term for the flow over a rectangular cavity. The predictions of hybrid
simulations with vortex and entropy source terms reproduce the perturbation
pressure values very close to the existing direct numerical simulation results.
Moreover, the results suggest that the use of just the vortex source terms
over-predicts the perturbation pressure near the source region. Finally, we
have carried out detailed simulations with all the source terms to investigate
the noise sources for compressible flow over the cavity for different Mach
numbers (M = 0.4, 0.5, 0.6, 0.7, 1.5). The obtained acoustic spectra and
the sound directivity are in close agreement with the reference experiment.
|
Rare extragalactic objects can carry substantial information about the past,
present, and future universe. Given the size of astronomical databases in the
information era it can be assumed that very many outlier galaxies are included
in existing and future astronomical databases. However, manual search for these
objects is impractical due to the required labor, and therefore the ability to
detect such objects largely depends on computer algorithms. This paper
describes an unsupervised machine learning algorithm for automatic detection of
outlier galaxy images, and its application to several Hubble Space Telescope
fields. The algorithm does not require training, and therefore is not dependent
on the preparation of clean training sets. The application of the algorithm to
a large collection of galaxies detected a variety of outlier galaxy images. The
algorithm is not perfect in the sense that not all objects detected by the
algorithm are indeed considered outliers, but it reduces the dataset by two
orders of magnitude to allow practical manual identification. The catalogue
contains 147 objects that would be very difficult to identify without using
automation.
|
We study a perturbation family of N=2 3d gauge theories and its relation to
quantum K-theory. A 3d version of the Intriligator-Vafa formula is given for
the quantum K-theory ring of Grassmannians. The 3d BPS half-index of the gauge
theory is connected to the theory of bilateral hypergeometric q-series, and to
modular q-characters of a class of conformal field theories in a certain
massless limit. Turning on 3d Wilson lines at torsion points leads to mock
modular behavior. Perturbed correlators in the IR regime are computed by
determining the UV-IR map in the presence of deformations.
|
In this paper, we investigate the classical and Bayesian estimation of
unknown parameters of the Gumbel type-II distribution based on adaptive type-II
progressive hybrid censored sample (AT-II PHCS). The maximum likelihood
estimates (MLEs) and maximum product spacing estimates (MPSEs) are developed
and computed numerically using Newton-Raphson method. Bayesian approaches are
employed to estimate parameters under symmetric and asymmetric loss functions.
Bayesian estimates are not in explicit forms. Thus, Bayesian estimates are
obtained by using Markov chain Monte Carlo (MCMC) method along with the
Metropolis-Hastings (MH) algorithm. Based on the asymptotic normality of the MLEs,
asymptotic confidence intervals are constructed. Also, bootstrap intervals and
highest posterior density (HPD) credible intervals are constructed. Further, a Monte Carlo simulation study is carried out. Finally, a data set based on the death rate due to Covid-19 in India is analyzed for illustrative purposes.
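For illustration, a minimal random-walk Metropolis-Hastings sampler of the kind used to obtain the Bayesian estimates; the log-posterior under the AT-II PHCS likelihood and the chosen prior (log_post below) is assumed to be supplied by the user, and the proposal scale is arbitrary.

import numpy as np

def metropolis_hastings(log_post, theta0, n_samples=10000, prop_scale=0.1):
    """Random-walk MH sampler for a parameter vector theta (e.g. the two Gumbel type-II parameters)."""
    theta = np.asarray(theta0, dtype=float)
    current_lp = log_post(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + prop_scale * np.random.randn(theta.size)
        proposal_lp = log_post(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(np.random.rand()) < proposal_lp - current_lp:
            theta, current_lp = proposal, proposal_lp
        samples.append(theta.copy())
    return np.array(samples)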
|
We give a new and conceptually straightforward proof of the well-known
presentation for the Temperley-Lieb algebra, via an alternative new
presentation. Our method involves twisted semigroup algebras, and we make use
of two apparently new submonoids of the Temperley-Lieb monoid.
|
We present a null-stream-based Bayesian unmodeled framework to probe generic
gravitational-wave polarizations. Generic metric theories allow six
gravitational-wave polarization states, but general relativity only permits the
existence of two of them, namely the tensorial polarizations. The strain signal
measured by an interferometer is a linear combination of the polarization modes
and such a linear combination depends on the geometry of the detector and the
source location. The detector network of Advanced LIGO and Advanced Virgo
allows us to measure different linear combinations of the polarization modes
and therefore we can constrain the polarization content by analyzing how the
polarization modes are linearly combined. We propose the basis formulation to
construct a null stream along the polarization basis modes without requiring the basis to be modeled explicitly. We conduct a mock data study and show that
the framework is capable of probing pure and mixed polarizations in the
Advanced LIGO-Advanced Virgo 3-detector network without knowing the sky
location of the source from electromagnetic counterparts. We also discuss the
effect of the presence of the uncaptured orthogonal polarization component in
the framework, and we propose using the plug-in method to test the existence of
the orthogonal polarizations.
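Schematically, and in notation introduced here only to make the linear combination explicit, the measured strain can be written as $h(t) = \sum_{A} F^{A}(\alpha, \delta, \psi)\, h_{A}(t)$, where the antenna pattern functions $F^{A}$ are fixed by the detector geometry, the source sky location $(\alpha, \delta)$ and the polarization angle $\psi$, and $A$ runs over the up to six modes $\{+, \times, x, y, b, \ell\}$.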
|
The Kitaev model realizes a quantum spin liquid where the spin excitations
are fractionalized into itinerant Majorana fermions and localized
$\mathbb{Z}_2$ vortices. Quantum entanglement between the fractional
excitations can be utilized for decoherence-free topological quantum
computation. Of particular interest is the anyonic statistics realized by
braiding the vortex excitations under a magnetic field. Despite the promising
potential, the practical methodology for creation and control of the vortex
excitations remains elusive thus far. Here we theoretically propose how one can
create and move the vortices in the Kitaev spin liquid. We find that the
vortices are induced by a local modulation of the exchange interaction;
especially, the local Dzyaloshinskii-Moriya (symmetric off-diagonal)
interaction can create vortices most efficiently in the (anti)ferromagnetic
Kitaev model, as it effectively flips the sign of the Kitaev interaction. We
test this idea by performing the {\it ab initio} calculation for a candidate
material $\alpha$-RuCl$_3$ through the manipulation of the ligand positions
that breaks the inversion symmetry and induces the local Dzyaloshinskii-Moriya
interaction. We also demonstrate a braiding of vortices by adiabatically and
successively changing the local bond modulations.
|
In this paper, a low-cost, simple and reliable bi-static Radar Cross Section
(RCS) measurement method making use of a historic Marconi set-up is presented. It
uses a transmitting (Tx) antenna (located at a constant position, at a
reference angle of $\theta = 0^\circ$) and a receiver (Rx) antenna (mounted on a movable arm calibrated in the azimuthal direction with an accuracy of $0.1^\circ$). A
time gating method is used to extract the information from the reflection in
the time domain; applying a time filter removes the antenna side lobe
effects and other ambient noise. In this method, the Rx antenna (on the movable
arm) is used to measure the reflected field in the angular range from $1^\circ$ to $90^\circ$
of reflection from the structure (printed PCB) and from the reference
configuration represented by a ground (GND) plane of the same dimension. The
time gating method is then applied to each pair of PCB / GND measurements to
extract the bi-static RCS pattern of the structure at a given frequency. Here, a comparison of measurement results at 18 GHz and 32 GHz with simulations indicates the successful performance of the proposed method. It can serve as a low-cost, reliable and readily available option for future measurements and scientific research.
|
Rare events arising in nonlinear atmospheric dynamics remain hard to predict
and attribute. We address the problem of forecasting rare events in a
prototypical example, Sudden Stratospheric Warmings (SSWs). Approximately once
every other winter, the boreal stratospheric polar vortex rapidly breaks down,
shifting midlatitude surface weather patterns for months. We focus on two key
quantities of interest: the probability of an SSW occurring, and the expected
lead time if it does occur, as functions of initial condition. These
\emph{optimal forecasts} concretely measure the event's progress. Direct
numerical simulation can estimate them in principle, but is prohibitively
expensive in practice: each rare event requires a long integration to observe,
and the cost of each integration grows with model complexity. We describe an
alternative approach using integrations that are \emph{short} compared to the
timescale of the warming event. We compute the probability and lead time
efficiently by solving equations involving the transition operator, which
encodes all information about the dynamics. We relate these optimal forecasts
to a small number of interpretable physical variables, suggesting optimal
measurements for forecasting. We illustrate the methodology on a prototype SSW
model developed by Holton and Mass (1976) and modified by stochastic forcing.
While highly idealized, this model captures the essential nonlinear dynamics of
SSWs and exhibits the key forecasting challenge: the dramatic separation in
timescales between a single event and the return time between successive
events. Our methodology is designed to fully exploit high-dimensional data from
models and observations, and has the potential to identify detailed predictors
of many complex rare events in meteorology.
|
We introduce a multilabel probing task to assess the morphosyntactic
representations of word embeddings from multilingual language models. We
demonstrate this task with multilingual BERT (Devlin et al., 2018), training
probes for seven typologically diverse languages of varying morphological
complexity: Afrikaans, Croatian, Finnish, Hebrew, Korean, Spanish, and Turkish.
Through this simple but robust paradigm, we show that multilingual BERT renders
many morphosyntactic features easily and simultaneously extractable (e.g.,
gender, grammatical case, pronominal type). We further evaluate the probes on
six "held-out" languages in a zero-shot transfer setting: Arabic, Chinese,
Marathi, Slovenian, Tagalog, and Yoruba. This style of probing has the added
benefit of revealing the linguistic properties that language models recognize
as being shared across languages. For instance, the probes performed well on
recognizing nouns in the held-out languages, suggesting that multilingual BERT
has a conception of noun-hood that transcends individual languages; yet, the
same was not true of adjectives.
|
Online tracking of multiple objects in videos requires a strong capacity for modeling and matching object appearances. Previous methods for learning
appearance embedding mostly rely on instance-level matching without considering
the temporal continuity provided by videos. We design a new instance-to-track
matching objective to learn appearance embedding that compares a candidate
detection to the embeddings of the tracks maintained in the tracker. It enables us to learn not only from videos labeled with complete tracks, but also from unlabeled or partially labeled videos. We implement this learning objective in a unified form following the spirit of contrastive loss. Experiments on multiple object tracking datasets demonstrate that our method can effectively learn discriminative appearance embeddings in a semi-supervised fashion and outperform state-of-the-art methods on representative benchmarks.
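A minimal sketch of an instance-to-track matching loss of the kind described above, written in plain numpy under our own assumptions: a candidate detection embedding is compared, via cosine similarity and a softmax cross-entropy, against the embeddings of the tracks kept by the tracker; the temperature and normalisation choices are illustrative, not the paper's exact objective.

import numpy as np

def instance_to_track_loss(det_emb, track_embs, target_track, temperature=0.1):
    """det_emb      : (d,) embedding of one candidate detection
    track_embs   : (n_tracks, d) embeddings maintained by the tracker
    target_track : index of the track the detection belongs to
    """
    det = det_emb / np.linalg.norm(det_emb)
    tracks = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    logits = tracks @ det / temperature         # cosine similarities to each track
    logits -= logits.max()                      # numerical stabilisation
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[target_track]             # cross-entropy over tracks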
|
Unobserved confounding is one of the greatest challenges for causal
discovery. The case in which unobserved variables have a widespread effect on
many of the observed ones is particularly difficult because most pairs of
variables are conditionally dependent given any other subset, rendering the
causal effect unidentifiable. In this paper we show that beyond conditional
independencies, under the principle of independent mechanisms, unobserved
confounding in this setting leaves a statistical footprint in the observed data
distribution that allows for disentangling spurious and causal effects. Using
this insight, we demonstrate that a sparse linear Gaussian directed acyclic
graph among observed variables may be recovered approximately and propose an
adjusted score-based causal discovery algorithm that may be implemented with
general purpose solvers and scales to high-dimensional problems. We find, in
addition, that despite the conditions we pose to guarantee causal recovery,
performance in practice is robust to large deviations in model assumptions.
|
Recently Brakerski, Christiano, Mahadev, Vazirani and Vidick (FOCS 2018) have
shown how to construct a test of quantumness based on the learning with errors
(LWE) assumption: a test that can be solved efficiently by a quantum computer
but cannot be solved by a classical polynomial-time computer under the LWE
assumption. This test has led to several cryptographic applications. In
particular, it has been applied to producing certifiable randomness from a
single untrusted quantum device, self-testing a single quantum device and
device-independent quantum key distribution.
In this paper, we show that this test of quantumness, and essentially all the
above applications, can actually be implemented by a very weak class of quantum
circuits: constant-depth quantum circuits combined with logarithmic-depth
classical computation. This reveals novel complexity-theoretic properties of
this fundamental test of quantumness and gives new concrete evidence of the
superiority of small-depth quantum circuits over classical computation.
|
Pion electroproduction off the proton is analyzed in a new framework based on
a general parametrization of transition amplitudes, including constraints from
gauge invariance and threshold behavior. Data with energies $1.13~{\rm
GeV}<W<1.6~{\rm GeV}$ and $Q^2$ below $6~{\rm GeV}^2$ are included. The model
is an extension of the latest J\"ulich-Bonn solution incorporating constraints
from pion-induced and photoproduction data. Performing large scale fits
($\sim10^5$ data) we find a set of solutions with $\chi^2_{\rm dof}=1.69-1.81$
which allows us to assess the systematic uncertainty of the approach.
|
A good distortion representation is crucial for the success of deep blind
image quality assessment (BIQA). However, most previous methods do not
effectively model the relationship between distortions or the distribution of
samples with the same distortion type but different distortion levels. In this
work, we start from the analysis of the relationship between perceptual image
quality and distortion-related factors, such as distortion types and levels.
Then, we propose a Distortion Graph Representation (DGR) learning framework for
IQA, named GraphIQA, in which each distortion is represented as a graph, i.e., a
DGR. One can distinguish distortion types by learning the contrast relationship
between these different DGRs, and infer the ranking distribution of samples
from different levels in a DGR. Specifically, we develop two sub-networks to
learn the DGRs: a) Type Discrimination Network (TDN) that aims to embed DGR
into a compact code for better discriminating distortion types and learning the
relationship between types; b) Fuzzy Prediction Network (FPN) that aims to
extract the distributional characteristics of the samples in a DGR and predicts
fuzzy degrees based on a Gaussian prior. Experiments show that our GraphIQA
achieves the state-of-the-art performance on many benchmark datasets of both
synthetic and authentic distortions.
|
The OSIRIS detector is a subsystem of the liquid scintillator filling chain
of the JUNO reactor neutrino experiment. Its purpose is to validate the
radiopurity of the scintillator to assure that all components of the JUNO
scintillator system work to specifications and only neutrino-grade scintillator
is filled into the JUNO Central Detector. The targeted sensitivity level of
$10^{-16}$ g/g of $^{238}$U and $^{232}$Th requires a large ($\sim$20 m$^3$)
detection volume and ultralow background levels. The present paper reports on
the design and major components of the OSIRIS detector, the detector simulation
as well as the measuring strategies foreseen and the sensitivity levels to U/Th
that can be reached in this setup.
|
Let $\mathbb{F}_{q}$ be the finite field with $q$ elements. This paper mainly
studies the polynomial representation of double cyclic codes over
$\mathbb{F}_{q}+v\mathbb{F}_{q}+v^2\mathbb{F}_{q}$ with $v^3=v$. Firstly, we
give the generating polynomials of these double cyclic codes. Secondly, we show
the generating matrices of them. Meanwhile, we get quantitative information
related to them by the matrix forms. Finally, we investigate the relationship
between the generators of double cyclic codes and their duals.
|
Nano quadcopters are ideal for gas source localization (GSL) as they are
safe, agile and inexpensive. However, their extremely restricted sensors and
computational resources make GSL a daunting challenge. In this work, we propose
a novel bug algorithm named `Sniffy Bug', which allows a fully autonomous swarm
of gas-seeking nano quadcopters to localize a gas source in unknown,
cluttered and GPS-denied environments. The computationally efficient, mapless
algorithm handles the avoidance of obstacles and other swarm members while
pursuing desired waypoints. The waypoints are first set for exploration, and,
when a single swarm member has sensed the gas, by a particle swarm
optimization-based procedure. We evolve all the parameters of the bug (and PSO)
algorithm, using our novel simulation pipeline, `AutoGDM'. It builds on and
expands open source tools in order to enable fully automated end-to-end
environment generation and gas dispersion modeling, allowing for learning in
simulation. Flight tests show that Sniffy Bug with evolved parameters
outperforms manually selected parameters in cluttered, real-world environments.
|
The textbook Newton's iteration is practically inapplicable to solutions of
nonlinear systems with singular Jacobians. By a simple modification, a novel
extension of Newton's iteration regains its local quadratic convergence toward
nonisolated solutions that are semiregular as properly defined regardless of
whether the system is square, underdetermined or overdetermined while Jacobians
can be rank-deficient. Furthermore, the iteration serves as a regularization
mechanism for computing singular solutions from empirical data. When a system
is perturbed, its nonisolated solutions can be altered substantially or even
disappear. The iteration still locally converges to a stationary point that
approximates a singular solution of the underlying system with an error bound
of the same order as the data accuracy. Geometrically, the iteration
approximately approaches the nearest point on the solution manifold. The method
simplifies the modeling of nonlinear systems by permitting nonisolated
solutions and enables a wide range of applications in algebraic computation.
|
Noether's theorem identifies fundamental conserved quantities, called Noether
charges, from a Hamiltonian. To date, Noether charges remain largely elusive
within theories of gravity: We do not know how to directly measure them, and
their physical interpretation remains unsettled in general spacetimes. Here we
show that the surface gravity as naturally defined for a family of observers in
arbitrarily dynamical spacetimes is a directly measurable Noether charge. This
Noether charge reduces to the accepted value on stationary horizons, and, when
integrated over a closed surface, yields an energy with the characteristics of
gravitating mass. Stokes' theorem then identifies the gravitating energy
density as the time-component of a locally conserved Noether current in general
spacetimes. Our conclusion, that this Noether charge is extractable from
astronomical observations, holds the potential for determining the detailed
distribution of the gravitating mass in galaxies, galaxy clusters and beyond.
|
The strong excitonic effect in monolayer transition metal dichalcogenide
(TMD) semiconductors has enabled many fascinating light-matter interaction
phenomena. Examples include strongly coupled exciton-polaritons and nearly
perfect atomic monolayer mirrors. The strong light-matter interaction also
opens the door for dynamical control of mechanical motion through the exciton
resonance of monolayer TMDs. Here we report the observation of
exciton-optomechanical coupling in a suspended monolayer MoSe2 mechanical
resonator. By moderate optical pumping near the MoSe2 exciton resonance, we
have observed optical damping and anti-damping of mechanical vibrations as well
as the optical spring effect. The exciton-optomechanical coupling strength is
also gate-tunable. Our observations can be understood in a model based on
photothermal backaction and gate-induced mirror symmetry breaking in the device
structure. The observation of gate-tunable exciton-optomechanical coupling in a
monolayer semiconductor may find applications in nanoelectromechanical systems
(NEMS) and in exciton-optomechanics.
|
We study efficient quantum certification algorithms for quantum state sets and unitary quantum channels. We present an algorithm that uses
$O(\varepsilon^{-4}\ln |\mathcal{P}|)$ copies of an unknown state to
distinguish whether the unknown state is contained in or $\varepsilon$-far from
a finite set $\mathcal{P}$ of known states with respect to the trace distance.
This algorithm is more sample-efficient in some settings. A previous study showed
that one can distinguish whether an unknown unitary $U$ is equal to or
$\varepsilon$-far from a known or unknown unitary $V$ in fixed dimension with
$O(\varepsilon^{-2})$ uses of the unitary, in which the Choi state is used and
thus an ancilla system is needed. We give an algorithm that distinguishes the
two cases with $O(\varepsilon^{-1})$ uses of the unitary, using much fewer or
no ancilla compared with previous results.
|
We consider the recent privacy preserving methods that train the models not
on original images, but on mixed images that look like noise and are hard to trace
back to the original images. We explain that those mixed images will be samples
on the decision boundaries of the trained model, and although such methods
successfully hide the contents of images from the entity in charge of federated
learning, they provide crucial information to that entity about the decision
boundaries of the trained model. Once the entity has exact samples on the
decision boundaries of the model, they may use them for effective adversarial
attacks on the model during training and/or afterwards. If we have to hide our
images from that entity, how can we trust them with the decision boundaries of
our model? As a remedy, we propose a method to encrypt the images, and have a
decryption module hidden inside the model. The entity in charge of federated
learning will only have access to a set of complex-valued coefficients, but the
model will first decrypt the images and then put them through the convolutional
layers. This way, the entity will not see the training images and they will not
know the location of the decision boundaries of the model.
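One simple way to realise the idea of hiding images behind complex-valued coefficients is phase scrambling in the Fourier domain, sketched below as a purely illustrative assumption; the paper's actual encryption scheme and decryption module are not reproduced here.

import numpy as np

def encrypt(image, key_phases):
    """Encrypt by multiplying the 2-D Fourier coefficients by secret random phases."""
    return np.fft.fft2(image) * np.exp(1j * key_phases)

def decrypt(coeffs, key_phases):
    """Decryption module (conceptually the first layer of the model):
    undo the phase scrambling and return to the pixel domain."""
    return np.real(np.fft.ifft2(coeffs * np.exp(-1j * key_phases)))

# Example round trip with a random image and a random phase key.
image = np.random.rand(32, 32)
key = 2 * np.pi * np.random.rand(32, 32)
restored = decrypt(encrypt(image, key), key)
assert np.allclose(image, restored)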
|
The ability to explain decisions to end-users is a necessity for deploying AI
as critical decision support. Yet making AI explainable to end-users is a
relatively ignored and challenging problem. To bridge the gap, we first
identified twelve end-user-friendly explanatory forms that do not require
technical knowledge to comprehend, including feature-, example-, and rule-based
explanations. We then instantiated the explanatory forms as prototyping cards
in four AI-assisted critical decision-making tasks, and conducted a user study
to co-design low-fidelity prototypes with 32 layperson participants. The
results verified the relevance of using the explanatory forms as building
blocks of explanations, and identified their properties (pros, cons,
applicable explainability needs, and design implications). The explanatory
forms, their properties, and prototyping support constitute the
End-User-Centered explainable AI framework EUCA. It serves as a practical
prototyping toolkit for HCI/AI practitioners and researchers to build
end-user-centered explainable AI.
The EUCA framework is available at http://weina.me/end-user-xai
|
Motivated by Stanley's conjecture on the multiplication of Jack symmetric
functions, we prove a couple of identities showing that skew Jack symmetric
functions are semi-invariant under translation and rotation by an angle of $\pi$ of
the skew diagram. It follows that, in some special cases, the coefficients of
the skew Jack symmetric functions with respect to the basis of the monomial
symmetric functions are polynomials with nonnegative integer coefficients.
|
eQuilibrator (equilibrator.weizmann.ac.il) is a calculator for biochemical
equilibrium constants and Gibbs free energies, originally designed as a
web-based interface. While the website now counts ${\sim}1000$ distinct monthly
users, its design could not accommodate larger compound databases and it lacked
an application programming interface (API) for integration in other tools
developed by the systems biology community. Here, we report a new python-based
package for eQuilibrator, which comes with many new features such as a 50-fold
larger compound database, the ability to add novel compound structures,
improvements in speed and memory use, and correction for Mg2+ ion
concentrations. Moreover, it adds the ability to compute the covariance matrix
of the uncertainty between estimates, for which we show the advantages and
describe the application in metabolic modeling. We foresee that these
improvements will make thermodynamic modeling more accessible and facilitate
the integration of eQuilibrator into other software platforms.
|
Mixtures of Hidden Markov Models (MHMMs) are frequently used for clustering
of sequential data. An important aspect of MHMMs, as of any clustering
approach, is that they can be interpretable, allowing for novel insights to be
gained from the data. However, without a proper way of measuring
interpretability, the evaluation of novel contributions is difficult and it
becomes practically impossible to devise techniques that directly optimize this
property. In this work, an information-theoretic measure (entropy) is proposed
for interpretability of MHMMs, and based on that, a novel approach to improve
model interpretability is proposed, i.e., an entropy-regularized Expectation
Maximization (EM) algorithm. The new approach aims to reduce the entropy of
the Markov chains (involving state transition matrices) within an MHMM, i.e.,
assigning higher weights to common state transitions during clustering. It is
argued that this entropy reduction, in general, leads to improved
interpretability since the most influential and important state transitions of
the clusters can be more easily identified. An empirical investigation shows
that it is possible to improve the interpretability of MHMMs, as measured by
entropy, without sacrificing (but rather improving) clustering performance and
computational costs, as measured by the v-measure and number of EM iterations,
respectively.
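A minimal sketch of the quantity being regularised, under our reading of the abstract: the entropy of each cluster's state-transition matrix is computed and subtracted, with weight lambda, from the EM objective; the averaging over rows and the symbol names are illustrative assumptions.

import numpy as np

def transition_entropy(A, eps=1e-12):
    """Mean row entropy of a state-transition matrix A (rows sum to 1)."""
    return -np.mean(np.sum(A * np.log(A + eps), axis=1))

def regularized_objective(log_likelihood, transition_matrices, lam=1.0):
    """Entropy-regularized EM objective: the usual data log-likelihood minus
    a penalty on the entropy of each cluster's Markov chain, so that common
    state transitions receive higher weight during clustering."""
    penalty = sum(transition_entropy(A) for A in transition_matrices)
    return log_likelihood - lam * penalty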
|
The penalized Cox proportional hazard model is a popular analytical approach
for survival data with a large number of covariates. Such problems are
especially challenging when covariates vary over follow-up time (i.e., the
covariates are time-dependent). The standard R packages for fully penalized Cox
models cannot currently incorporate time-dependent covariates. To address this
gap, we implement a variant of the gradient descent algorithm (proximal gradient
descent) for fitting penalized Cox models. We apply our implementation to real
and simulated data sets.
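A minimal sketch of the proximal gradient descent iteration, shown here for a lasso-penalized objective: a gradient step on the negative partial log-likelihood followed by soft-thresholding. The gradient function, step size and penalty weight are assumed to be supplied by the user, and the handling of time-dependent covariates is omitted.

import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient_cox(grad_negloglik, beta0, step=0.01, lam=0.1, n_iters=500):
    """grad_negloglik : callable returning the gradient of the negative
                        partial log-likelihood at a coefficient vector
    beta0          : initial coefficient vector
    """
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iters):
        # Gradient step on the smooth part, then prox step on the L1 penalty.
        beta = soft_threshold(beta - step * grad_negloglik(beta), step * lam)
    return beta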
|
In this paper, a new learning algorithm for Federated Learning (FL) is
introduced. The proposed scheme is based on a weighted gradient aggregation
using two-step optimization to offer a flexible training pipeline. Herein, two
different flavors of the aggregation method are presented, leading to an order
of magnitude improvement in convergence speed compared to other distributed or
FL training algorithms like BMUF and FedAvg. Further, the aggregation algorithm
acts as a regularizer of the gradient quality. We investigate the effect of our
FL algorithm in supervised and unsupervised Speech Recognition (SR) scenarios.
The experimental validation is performed based on three tasks: first, the
LibriSpeech task showing a speed-up of 7x and 6% word error rate reduction
(WERR) compared to the baseline results. The second task is based on session
adaptation providing 20% WERR over a powerful LAS model. Finally, our
unsupervised pipeline is applied to the conversational SR task. The proposed FL
system outperforms the baseline systems in both convergence speed and overall
model performance.
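A minimal sketch of a weighted gradient aggregation step of the kind described above, under our own assumptions: client gradients are combined with weights reflecting an estimate of their quality (here, inverse held-out loss) before a server-side update; the actual weighting rule and two-step optimization of the paper may differ.

import numpy as np

def aggregate_gradients(client_grads, client_losses):
    """Weight each client's gradient by an inverse-loss quality score."""
    scores = 1.0 / (np.asarray(client_losses) + 1e-8)
    weights = scores / scores.sum()
    return sum(w * g for w, g in zip(weights, client_grads))

def server_update(global_params, client_grads, client_losses, lr=0.1):
    """Apply the aggregated, quality-weighted gradient to the global model."""
    return global_params - lr * aggregate_gradients(client_grads, client_losses)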
|
Starting from the Bonn potential, relativistic Brueckner-Hartree-Fock (RBHF)
equations are solved for nuclear matter in the full Dirac space, which provides
a unique way to determine the single-particle potentials and avoids the
approximations applied in the RBHF calculations in the Dirac space with
positive-energy states (PESs) only. The uncertainties of the RBHF calculations
in the Dirac space with PESs only are investigated, and the importance of the
RBHF calculations in the full Dirac space is demonstrated. In the RBHF
calculations in the full Dirac space, the empirical saturation properties of
symmetric nuclear matter are reproduced, and the obtained equation of state
agrees with the results based on the relativistic Green's function approach up
to the saturation density.
|
Deep neural networks (DNNs) are prominent due to their superior performance
in many fields. The deep-learning-as-a-service (DLaaS) paradigm enables
individuals and organizations (clients) to outsource their DNN learning tasks
to the cloud-based platforms. However, the DLaaS server may return incorrect
DNN models due to various reasons (e.g., Byzantine failures). This raises the
serious concern of how to verify if the DNN models trained by potentially
untrusted DLaaS servers are indeed correct. To address this concern, in this
paper, we design VeriDL, a framework that supports efficient correctness
verification of DNN models in the DLaaS paradigm. The key idea of VeriDL is the
design of a small-size cryptographic proof of the training process of the DNN
model, which is associated with the model and returned to the client. Through
the proof, VeriDL can verify the correctness of the DNN model returned by the
DLaaS server with a deterministic guarantee and cheap overhead. Our experiments
on four real-world datasets demonstrate the efficiency and effectiveness of
VeriDL.
|
An important task at future colliders is the investigation of the Higgs-boson
sector. Here the measurement of the triple Higgs coupling(s) plays a special
role. Based on previous analyses, within the framework of Two Higgs Doublet
Models (2HDM) type~I and~II, we define and analyze several two-dimensional
benchmark planes, that are over large parts in agreement with all theoretical
and experimental constraints. For these planes we evaluate di-Higgs production
cross sections at future high-energy $e^+e^-$ colliders, such as ILC or CLIC.
We consider two different channels for the neutral di-Higgs pairs $h_i
h_j=hh,hH,HH,AA$: $e^+e^- \to h_i h_j Z$ and $e^+e^- \to h_i h_j \nu \bar \nu$.
In both channels the various triple Higgs-boson couplings contribute
substantially. We find regions with a strong enhancement of the production
channel of two SM-like light Higgs bosons and/or with very large production
cross sections involving one light and one heavy or two heavy 2HDM Higgs
bosons, offering interesting prospects for the ILC or CLIC. The mechanisms
leading to these enhanced production cross sections are analyzed in detail. We
propose the use of cross section distributions with the invariant mass of the
two final Higgs bosons where the contributions from intermediate resonant and
non-resonant BSM Higgs bosons play a crucial role. We outline which process at
which center-of-mass energy would be best suited to probe the corresponding
triple Higgs-boson couplings.
|
We propose Nester, a method for injecting neural networks into constrained
structured predictors. The job of the neural network(s) is to compute an
initial, raw prediction that is compatible with the input data but does not
necessarily satisfy the constraints. The structured predictor then builds a
structure using a constraint solver that assembles and corrects the raw
predictions in accordance with hard and soft constraints. In doing so, Nester
takes advantage of the features of its two components: the neural network
learns complex representations from low-level data while the constraint
programming component reasons about the high-level properties of the prediction
task. The entire architecture can be trained in an end-to-end fashion. An
empirical evaluation on handwritten equation recognition shows that Nester
achieves better performance than both the neural network and the constrained
structured predictor on their own, especially when training examples are
scarce, while scaling to more complex problems than other neuro-programming
approaches. Nester proves especially useful to reduce errors at the semantic
level of the problem, which is particularly challenging for neural network
architectures.
|
Given an $(r + 1)$-chromatic graph $H$, the fundamental edge stability result
of Erd\H{o}s and Simonovits says that all $n$-vertex $H$-free graphs have at
most $(1 - 1/r + o(1)) \binom{n}{2}$ edges, and any $H$-free graph with that
many edges can be made $r$-partite by deleting $o(n^{2})$ edges.
Here we consider a natural variant of this -- the minimum degree stability of
$H$-free graphs. In particular, what is the least $c$ such that any $n$-vertex
$H$-free graph with minimum degree greater than $cn$ can be made $r$-partite by
deleting $o(n^{2})$ edges? We determine this least value for all 3-chromatic
$H$ and for very many non-3-colourable $H$ (all those in which one is commonly
interested) as well as bounding it for the remainder. This extends the
Andr\'{a}sfai-Erd\H{o}s-S\'{o}s theorem and work of Alon and Sudakov.
|
We present a detailed study of the decoherence correction to surface-hopping
that was recently derived from the exact factorization approach. Ab initio
multiple spawning calculations that use the same initial conditions and same
electronic structure method are used as a reference for three molecules:
ethylene, methaniminium cation, and fulvene, for which non-adiabatic dynamics
follows a photo-excitation. A comparison with the Granucci-Persico energy-based
decoherence correction, and the augmented fewest-switches surface-hopping
scheme shows that the three decoherence-corrected methods operate on individual
trajectories in a qualitatively different way, but results averaged over
trajectories are similar for these systems.
|
HOMFLY polynomials are one of the major knot invariants being actively
studied. They are difficult to compute in the general case but can be far more
easily expressed in certain specific cases. In this paper, we examine two
particular knots, as well as one more general infinite class of knots. From our
calculations, we see some apparent patterns in the polynomials for the knots
$9_{35}$ and $9_{46}$, and in particular their $F$-factors. These properties
are of a form that seems conducive to finding a general formula for them, which
would yield a general formula for the HOMFLY polynomials of the two knots.
Motivated by these observations, we demonstrate and conjecture some properties
both of the $F$-factors and HOMFLY polynomials of these knots and of the more
general class that contains them, namely pretzel knots with 3 odd parameters.
We make the first steps toward a matrix-less general formula for the HOMFLY
polynomials of these knots.
|
We have applied relativistic coupled-cluster (RCC) theory to determine the
isotope shift (IS) constants of the first eight low-lying states of the Li,
Be$^+$ and Ar$^{15+}$ isoelectronic systems. Though the RCC theory with
singles, doubles and triples approximation (RCCSDT method) is an exact method
for these systems for a given set of basis functions, we notice large
differences in the results from this method when various procedures in the RCC
theory framework are adopted to estimate the IS constants. This has been
demonstrated by presenting the IS constants of the aforementioned states from
the finite-field, expectation value and analytical response (AR) approaches of
the RCCSDT method. Contributions from valence triple excitations, Breit
interaction and lower-order QED effects to the evaluation of these IS constants
are also highlighted. Our results are compared with high-precision calculations
reported using few-body methods wherever possible. We find that results from
the AR procedure are more reliable than the other two approaches. This analysis
is crucial for understanding the roles of electron correlation effects in the
accurate determination of IS constants in the heavier atomic systems, where
few-body methods cannot be applied.
|
Modern neuroscience employs in silico experimentation on ever-increasing and
more detailed neural networks. The high modelling detail goes hand in hand with
the need for high model reproducibility, reusability and transparency. Besides,
the size of the models and the long timescales under study mandate the use of a
simulation system with high computational performance, so as to provide an
acceptable time to result. In this work, we present EDEN (Extensible Dynamics
Engine for Networks), a new general-purpose, NeuroML-based neural simulator
that achieves both high model flexibility and high computational performance,
through an innovative model-analysis and code-generation technique. The
simulator runs NeuroML v2 models directly, eliminating the need for users to
learn yet another simulator-specific, model-specification language. EDEN's
functional correctness and computational performance were assessed through
NeuroML models available on the NeuroML-DB and Open Source Brain model
repositories. In qualitative experiments, the results produced by EDEN were
verified against the established NEURON simulator, for a wide range of models.
At the same time, computational-performance benchmarks reveal that EDEN runs up
to 2 orders-of-magnitude faster than NEURON on a typical desktop computer, and
does so without additional effort from the user. Finally, and without added
user effort, EDEN has been built from scratch to scale seamlessly over multiple
CPUs and across computer clusters, when available.
|
The critical role of the Internet of Things (IoT) in various domains like smart cities, healthcare, supply chains and transportation has made IoT systems the target of malicious attacks. Past works in this area focused on centralized Intrusion
Detection System (IDS), assuming the existence of a central entity to perform
data analysis and identify threats. However, such IDS may not always be
feasible, mainly because data are spread across multiple sources and gathering them at a central node can be costly. Also, the earlier works primarily focused on
improving True Positive Rate (TPR) and ignored the False Positive Rate (FPR),
which is also essential to avoid unnecessary downtime of the systems. In this
paper, we first present an architecture for IDS based on hybrid ensemble model,
named PHEC, which gives improved performance compared to state-of-the-art
architectures. We then adapt this model to a federated learning framework that
performs local training and aggregates only the model parameters. Next, we
propose Noise-Tolerant PHEC in centralized and federated settings to address
the label-noise problem. The proposed idea uses classifiers with weighted convex surrogate loss functions. The natural robustness of the KNN classifier towards
noisy data is also used in the proposed architecture. Experimental results on
four benchmark datasets drawn from various security attacks show that our model
achieves high TPR while keeping FPR low on noisy and clean data. Further, they
also demonstrate that the hybrid ensemble models achieve performance in
federated settings close to that of the centralized settings.
|
A deep Transformer model with a good evaluation score does not mean that each subnetwork (a.k.a. transformer block) learns a reasonable representation.
Diagnosing abnormal representation and avoiding it can contribute to achieving
a better evaluation score. We propose an innovative perspective for analyzing
attention patterns: summarize block-level patterns and assume abnormal patterns
contribute negative influence. We leverage Wav2Vec 2.0 as a research target and
analyze a pre-trained model's pattern. All experiments leverage
Librispeech-100-clean as training data. Through avoiding diagnosed abnormal
ones, our custom Wav2Vec 2.0 outperforms the original version by about 4.8%
absolute word error rate (WER) on test-clean with viterbi decoding. Our version
is still 0.9% better when decoding with a 4-gram language model. Moreover, we
identify that avoiding abnormal patterns is the main contributor to the performance boost.
|
For each $p\geq 1$, the star automaton group $\mathcal{G}_{S_p}$ is an
automaton group which can be defined starting from a star graph on $p+1$
vertices. We study Schreier graphs associated with the action of the group
$\mathcal{G}_{S_p}$ on the regular rooted tree $T_{p+1}$ of degree $p+1$ and on
its boundary $\partial T_{p+1}$. With the transitive action on the $n$-th level
of $T_{p+1}$ is associated a finite Schreier graph $\Gamma^p_n$, whereas there
exist uncountably many orbits of the action on the boundary, represented by
infinite Schreier graphs which are obtained as limits of the sequence
$\{\Gamma_n^p\}_{n\geq 1}$ in the Gromov-Hausdorff topology. We obtain an
explicit description of the spectrum of the graphs $\{\Gamma_n^p\}_{n\geq 1}$.
Then, by using amenability of $\mathcal{G}_{S_p}$, we prove that the spectrum
of each infinite Schreier graph is the union of a Cantor set of zero Lebesgue
measure, which is the Julia set of the quadratic map $f_p(z) = z^2-2(p-1)z
-2p$, and a countable collection of isolated points supporting the KNS spectral
measure. We also give a complete classification of the infinite Schreier graphs
up to isomorphism of unrooted graphs, showing that they may have $1$, $2$ or
$2p$ ends, and that the case of $1$ end is generic with respect to the uniform
measure on $\partial T_{p+1}$.
|
Policy optimization methods remain a powerful workhorse in empirical
Reinforcement Learning (RL), with a focus on neural policies that can easily
reason over complex and continuous state and/or action spaces. Theoretical
understanding of strategic exploration in policy-based methods with non-linear
function approximation, however, is largely missing. In this paper, we address
this question by designing ENIAC, an actor-critic method that allows non-linear
function approximation in the critic. We show that under certain assumptions,
e.g., a bounded eluder dimension $d$ for the critic class, the learner finds a
near-optimal policy in $O(\mathrm{poly}(d))$ exploration rounds. The method is robust
to model misspecification and strictly extends existing works on linear
function approximation. We also develop some computational optimizations of our
approach with slightly worse statistical guarantees and an empirical adaptation
building on existing deep RL tools. We empirically evaluate this adaptation and
show that it outperforms prior heuristics inspired by linear methods,
establishing the value via correctly reasoning about the agent's uncertainty
under non-linear function approximation.
|
A new model maps a quantum random walk described by a Hadamard operator to a
particular case of a birth and death process. The model is represented by a 2D
Markov chain with a stochastic matrix, i.e., all the transition rates are
positive, although the Hadamard operator contains negative entries (this is
possible by increasing the dimensionality of the system). The probability
distribution of the walker population is preserved using the Markovian
property. By applying a proper transformation to the population distribution of
the random walk, the probability distributions of the quantum states |0>, |1>
are revealed. Thus, the new model has two unique properties: it reveals the
probability distribution of the quantum states as a unitary system and
preserves the population distribution of the random walker as a Markovian
system.
|
Population size estimation based on the capture-recapture experiment is an
interesting problem in various fields including epidemiology, criminology,
demography, etc. In many real-life scenarios, there exists inherent
heterogeneity among the individuals and dependency between capture and
recapture attempts. A novel trivariate Bernoulli model is considered to
incorporate these features, and the Bayesian estimation of the model parameters
is suggested using data augmentation. Simulation results show robustness under
model misspecification and the superiority of the performance of the proposed
method over existing competitors. The method is applied to analyse real case
studies on epidemiological surveillance. The results provide interesting
insight into the heterogeneity and dependence involved in the capture-recapture
mechanism. The methodology proposed can assist in effective decision-making and
policy formulation.
|
The period in which hydrogen in the intergalactic medium (IGM) is ionized,
known as the Epoch of Reionization (EoR), is still poorly understood. The timing
and duration of the EoR is expected to be governed by the underlying
astrophysics. Furthermore, most models of reionization predict a correlation
between the density and ionization field. Here we consider using the mean
dispersion measure (DM) of high redshift Fast Radio Bursts (FRBs) as a probe of
the underlying astrophysics and morphology of the EoR. To do this, we forecast
observational scenarios by building mock data sets of non-repeating FRBs
between redshifts $8\leq z \leq 10$. It is assumed that all FRBs have
accompanying spectroscopic redshift measurements. We find that samples of 100
high redshift FRBs, in the above mentioned narrow redshift range, can rule out
uncorrelated reionization at $68\%$ credibility, while larger samples, $\geq
10^4$ FRBs, can rule out uncorrelated reionization at $95\%$ credibility. We
also find 100 high redshift FRBs can rule out scenarios where the Universe is
entirely neutral at $z = 10$ with $68\%$ credibility. Further, with $\geq 10^5$
FRBs, we can constrain the duration $\Delta z$ of reionization (duration
between mean ionized fraction 0.25 to 0.75) to $\Delta z = 2.0^{+0.5}_{-0.4}$,
and the midpoint of reionization to $z = 7.8^{+0.4}_{-0.2}$ at $95\%$
credibility.
|
First-order methods for solving convex optimization problems have been at the
forefront of mathematical optimization in the last 20 years. The rapid
development of this important class of algorithms is motivated by the success
stories reported in various applications, including most importantly machine
learning, signal processing, imaging and control theory. First-order methods
have the potential to provide low-accuracy solutions at low computational
complexity, which makes them an attractive set of tools for large-scale
optimization problems. In this survey we cover a number of key developments in
gradient-based optimization methods. This includes non-Euclidean extensions of
the classical proximal gradient method, and its accelerated versions.
Additionally we survey recent developments within the class of projection-free
methods, and proximal versions of primal-dual schemes. We give complete proofs
for various key results, and highlight the unifying aspects of several
optimization algorithms.
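As a concrete instance of the gradient-based methods surveyed here, the sketch below implements the classical proximal gradient (ISTA) iteration for $\ell_1$-regularized least squares; it is a minimal illustration under a fixed step size, not code from the survey itself.

    import numpy as np

    def soft_threshold(x, tau):
        # proximal operator of tau * ||x||_1
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def proximal_gradient(A, b, lam, iters=500):
        """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with step size 1/L."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam / L)
        return x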
|
Currently, owing to the COVID-19 pandemic, public life in most European countries has stopped
almost completely due to measures against the spread of the virus. Efforts to limit the number of
new infections are threatened by the advent of new variants of the SARS-CoV-2 virus, most
prominently the B.1.1.7 strain with higher infectivity. In this article we consider a basic two-strain
SIR model to explain the spread of those variants in Germany on small time
scales. For a linearized version of the model we calculate relevant variables
like the time of minimal infections or the dynamics of the share of variants
analytically. These analytical approximations and numerical simulations are in
good agreement with data reported by the Robert Koch Institute (RKI) in
Germany.
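The qualitative behaviour described above can be reproduced with a minimal two-strain SIR simulation such as the sketch below; the equations and parameter values are illustrative assumptions, not the calibration or the linearized analysis used in the article.

    import numpy as np
    from scipy.integrate import solve_ivp

    def two_strain_sir(t, y, beta1, beta2, gamma):
        # y = (S, I1, I2, R); both strains compete for the same susceptible pool
        S, I1, I2, R = y
        dS = -(beta1 * I1 + beta2 * I2) * S
        dI1 = beta1 * S * I1 - gamma * I1
        dI2 = beta2 * S * I2 - gamma * I2
        dR = gamma * (I1 + I2)
        return [dS, dI1, dI2, dR]

    # illustrative parameters: the second strain (the variant) is more infectious
    sol = solve_ivp(two_strain_sir, (0, 120), [0.99, 0.009, 0.001, 0.0],
                    args=(0.25, 0.35, 0.1))
    variant_share = sol.y[2] / (sol.y[1] + sol.y[2])   # share of the new variant over time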
|
This paper generalizes the concept of index and co-index and some related
results for free actions of $G = S^0$ on a paracompact Hausdorff space, which were
introduced by Conner and Floyd. We define the index and co-index of a
finitistic free $G$-space $X$, where $G = S^d$, $d = 1$ or $3$, and prove that the index
of $X$ is not more than the mod 2 cohomology index of $X$. We observe that the
index and co-index of a $(2n+1)$-sphere (resp. $(4n+3)$-sphere) for the action of
componentwise multiplication of $G = S^1$ (resp. $S^3$) is $n$.
We also determine the orbit spaces of free actions of $G = S^3$ on a finitistic
space $X$ with the mod 2 cohomology and the rational cohomology of a product of
spheres. The orbit spaces of circle actions on the mod 2 cohomology $X$ are also
discussed. Using these calculations, we obtain an upper bound on the index of $X$
and Borsuk-Ulam type results.
|
There has been considerable interest in properties of condensed matter at
finite temperature, including non-equilibrium behavior and extreme conditions
up to the warm dense matter regime. Such behavior is encountered, e.g., in
experimental time resolved x-ray absorption spectroscopy (XAS) in the presence
of intense laser fields. In an effort to simulate such behavior, we present an
approach for calculations of finite-temperature x-ray absorption spectra in
arbitrary materials, using a generalization of the real-space Green's function
formalism. The method is incorporated as an option in the core-level x-ray
spectroscopy code FEFF10. To illustrate the approach, we present calculations
for several materials together with comparisons to experiment and with other
methods.
|
Let $G$ be a 4-chromatic maximal planar graph (MPG) with the minimum degree
of at least 4, and let $C$ be an even-length cycle of $G$. If $|f(C)|=2$ for
every $f$ in some Kempe equivalence class of $G$, then we call $C$ an unchanged
bichromatic cycle (UBC) of $G$, and correspondingly $G$ an unchanged
bichromatic cycle maximal planar graph (UBCMPG) with respect to $C$, where
$f(C)=\{f(v)| v\in V(C)\}$. For a UBCMPG $G$ with respect to a UBC $C$, the
subgraph of $G$ induced by the set of edges belonging to $C$ and its interior
(or exterior), denoted by $G^C$, is called a base-module of $G$; in particular,
when the length of $C$ is equal to four, we use $C_4$ instead of $C$ and call
$G^{C_4}$ a 4-base-module. In this paper, we first study the properties of
UBCMPGs and show that every 4-base-module $G^{C_4}$ contains a 4-coloring under
which $C_4$ is bichromatic and there are at least two bichromatic paths with
different colors between one pair of diagonal vertices of $C_4$ (these paths
are called module-paths). We further prove that every 4-base-module $G^{C_4}$
contains a 4-coloring (called decycle coloring) for which the ends of a
module-path are colored by distinct colors. Finally, based on the technique of
the contracting and extending operations of MPGs, we prove that
55-configurations and 56-configurations are reducible by converting the
reducibility problem of these two classes of configurations into the decycle
coloring problem of 4-base-modules.
|
Graph neural networks (GNNs) have been successful in many fields and have driven a variety of
research efforts and applications in industry. However, in some privacy-sensitive scenarios (such
as finance and healthcare), training a GNN model centrally faces challenges due to distributed
data silos. Federated learning (FL) is an emerging technique that can collaboratively train a
shared model while keeping the data decentralized, which makes it a natural solution
for distributed GNN training. We term this federated graph learning (FGL).
Although FGL has received increasing attention recently, the definition and
challenges of FGL are still up in the air. In this position paper, we present a
categorization to clarify it. Considering how graph data are distributed among
clients, we propose four types of FGL: inter-graph FL, intra-graph FL and
graph-structured FL, where intra-graph is further divided into horizontal and
vertical FGL. For each type of FGL, we make a detailed discussion about the
formulation and applications, and propose some potential challenges.
|
We investigate a multiphase Cahn-Hilliard model for tumor growth with general
source terms. The multiphase approach allows us to consider multiple cell types
and multiple chemical species (oxygen and/or nutrients) that are consumed by
the tumor. Compared to classical two-phase tumor growth models, the multiphase
model can be used to describe a stratified tumor exhibiting several layers of
tissue (e.g., proliferating, quiescent and necrotic tissue) more precisely. Our
model consists of a convective Cahn-Hilliard type equation to describe the
tumor evolution, a velocity equation for the associated volume-averaged
velocity field, and a convective reaction-diffusion type equation to describe
the density of the chemical species. The velocity equation is either
represented by Darcy's law or by the Brinkman equation. We first construct a
global weak solution of the multiphase Cahn-Hilliard-Brinkman model. After
that, we show that such weak solutions of the system converge to a weak
solution of the multiphase Cahn-Hilliard-Darcy system as the viscosities tend
to zero in some suitable sense. This means that the existence of a global weak
solution to the Cahn-Hilliard-Darcy system is also established.
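For orientation, a single-phase prototype of the convective Cahn-Hilliard equation with a source term, of the kind that the multiphase system above generalizes, reads (in illustrative notation, not the exact system of this work)
$$
\partial_t \varphi + \operatorname{div}(\varphi\,\mathbf{v}) = \operatorname{div}\big(m(\varphi)\nabla\mu\big) + S(\varphi,\sigma), \qquad \mu = \Psi'(\varphi) - \varepsilon^2\Delta\varphi,
$$
where $\mathbf{v}$ denotes the volume-averaged velocity, $m$ a mobility, $\Psi$ a double-well potential, $\varepsilon$ an interface-width parameter, and $\sigma$ the concentration of the chemical species.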
|
Nested networks or slimmable networks are neural networks whose architectures
can be adjusted instantly during testing time, e.g., based on computational
constraints. Recent studies have focused on a "nested dropout" layer, which is
able to order the nodes of a layer by importance during training, thus
generating a nested set of sub-networks that are optimal for different
configurations of resources. However, the dropout rate is fixed as a
hyper-parameter over different layers during the whole training process.
Therefore, when nodes are removed, the performance decays in a human-specified
trajectory rather than in a trajectory learned from data. Another drawback is
that the generated sub-networks are deterministic networks without well-calibrated
uncertainty. To address these two problems, we develop a Bayesian approach to
nested neural networks. We propose a variational ordering unit that draws
samples for nested dropout at a low cost, from a proposed Downhill
distribution, which provides useful gradients to the parameters of nested
dropout. Based on this approach, we design a Bayesian nested neural network
that learns the order knowledge of the node distributions. In experiments, we
show that the proposed approach outperforms the nested network in terms of
accuracy, calibration, and out-of-domain detection in classification tasks. It
also outperforms the related approach on uncertainty-critical tasks in computer
vision.
|
On 6 January 2021, a mob of right-wing conservatives stormed the US Capitol, interrupting the
session of Congress certifying the 2020 Presidential election results. Immediately after the
start of the event, posts related to the riots started to trend on social media. One platform
that stood out was Parler, a free-speech-endorsing social media platform; it has been claimed to
be the platform on which the riots were planned and discussed. Our report presents
a contrast between the trending content on Parler and Twitter around the time
of the riots. We collected data from both platforms based on the trending hashtags and draw
comparisons based on the topics being discussed, the people active on the platforms, and how
organic the content generated on the two platforms is. While the content trending on Twitter
showed strong resentment towards the event and called for action against rioters and inciters, Parler
content had a strong conservative narrative echoing the ideas of voter fraud
similar to the attacking mob. We also find a disproportionately high
manipulation of traffic on Parler when compared to Twitter.
|
We present a number of complexity results concerning the problem of counting
vertices of an integral polytope defined by a system of linear inequalities.
The focus is on polytopes with small integer vertices, particularly 0/1
polytopes and half-integral polytopes.
|
We denote by $\pi_k(R_n)$ the $k$-th homotopy group of the $n$-th rotation
group $R_n$ and $\pi_k(R_n:2)$ the 2-primary components of it. We determine the
group structures of $\pi_k(R_n:2)$ for $k = 23$ and $24$ by use of the
fibration $R_{n+1}\overset{R_n}{\longrightarrow}S^n$. The method is based on
Toda's composition methods.
|
Let $M_1$ and $M_2$ be functions on $[0,1]$ such that $M_1(t^{1/p})$ and
$M_2(t^{1/p})$ are Orlicz functions for some $p \in (0,1].$ Assume that
$M_2^{-1} (1/t)/M_1^{-1} (1/t)$ is non-decreasing for $t \geq 1.$ Let
$(\alpha_i)_{i=1}^\infty$ be a non-increasing sequence of non-negative real
numbers. Under some conditions on $(\alpha_i)_{i=1}^\infty,$ sharp two-sided
estimates for entropy numbers of diagonal operators $T_\alpha :\ell_{M_1}
\rightarrow \ell_{M_2}$ generated by $(\alpha_i)_{i=1}^\infty,$ where
$\ell_{M_1}$ and $\ell_{M_2}$ are Orlicz sequence spaces, are proved. The
results generalise some works of Edmunds and Netrusov and hence a result of
Cobos, K\"{u}hn and Schonbek.
|
We show that if $N$ is a closed manifold of dimension $n=4$ (resp. $n=5$)
with $\pi_2(N) = 0$ (resp. $\pi_2(N)=\pi_3(N)=0$) that admits a metric of
positive scalar curvature, then a finite cover $\hat N$ of $N$ is homotopy
equivalent to $S^n$ or connected sums of $S^{n-1}\times S^1$. Our approach
combines recent advances in the study of positive scalar curvature with a novel
argument of Alpert--Balitskiy--Guth.
Additionally, we prove a more general mapping version of this result. In
particular, this implies that if $N$ is a closed manifold of dimensions $4$ or
$5$, and $N$ admits a map of nonzero degree to a closed aspherical manifold,
then $N$ does not admit any Riemannian metric with positive scalar curvature.
|
Offline reinforcement learning (RL) tries to learn the near-optimal policy
with recorded offline experience without online exploration. Current offline RL
research includes: 1) generative modeling, i.e., approximating a policy using
fixed data; and 2) learning the state-action value function. While most
research focuses on the state-action function part through reducing the
bootstrapping error in value function approximation induced by the distribution
shift of training data, the effects of error propagation in generative modeling
have been neglected. In this paper, we analyze the error in generative
modeling. We propose AQL (action-conditioned Q-learning), a residual generative
model to reduce policy approximation error for offline RL. We show that our
method can learn more accurate policy approximations in different benchmark
datasets. In addition, we show that the proposed offline RL method can learn
more competitive AI agents in complex control tasks under the multiplayer
online battle arena (MOBA) game Honor of Kings.
|
In this paper, we focus on studying duplicate logging statements, which are
logging statements that have the same static text message. We manually studied
over 4K duplicate logging statements and their surrounding code in five
large-scale open source systems. We uncovered five patterns of duplicate
logging code smells. For each instance of the duplicate logging code smell, we
further manually identify the potentially problematic and justifiable cases.
Then, we contact developers to verify our manual study result. We integrated
our manual study result and the feedback of developers into our automated
static analysis tool, DLFinder, which automatically detects problematic
duplicate logging code smells. We evaluated DLFinder on the five manually
studied systems and three additional systems. In total, combining the results
of DLFinder and our manual analysis, we reported 91 problematic duplicate
logging code smell instances to developers and all of them have been fixed. We
further study the relationship between duplicate logging statements, including
the problematic instances of duplicate logging code smells, and code clones. We
find that 83% of the duplicate logging code smell instances reside in cloned
code, but 17% of them reside in micro-clones that are difficult to detect using
automated clone detection tools. We also find that more than half of the
duplicate logging statements reside in cloned code snippets, and a large
portion of them reside in very short code blocks which may not be effectively
detected by existing code clone detection tools. Our study shows that, in
addition to general source code that implements the business logic, code clones
may also result in bad logging practices that could increase maintenance
difficulties.
|
The real world is awash with multi-agent problems that require collective
action by self-interested agents, from the routing of packets across a computer
network to the management of irrigation systems. Such systems have local
incentives for individuals, whose behavior has an impact on the global outcome
for the group. Given appropriate mechanisms describing agent interaction,
groups may achieve socially beneficial outcomes, even in the face of short-term
selfish incentives. In many cases, collective action problems possess an
underlying graph structure, whose topology crucially determines the
relationship between local decisions and emergent global effects. Such
scenarios have received great attention through the lens of network games.
However, this abstraction typically collapses important dimensions, such as
geometry and time, relevant to the design of mechanisms promoting cooperation.
In parallel work, multi-agent deep reinforcement learning has shown great
promise in modelling the emergence of self-organized cooperation in complex
gridworld domains. Here we apply this paradigm in graph-structured collective
action problems. Using multi-agent deep reinforcement learning, we simulate an
agent society for a variety of plausible mechanisms, finding clear transitions
between different equilibria over time. We define analytic tools inspired by
related literatures to measure the social outcomes, and use these to draw
conclusions about the efficacy of different environmental interventions. Our
methods have implications for mechanism design in both human and artificial
agent systems.
|
In this paper, we tackle the problem of human-robot coordination in sequences
of manipulation tasks. Our approach integrates hierarchical human motion
prediction with Task and Motion Planning (TAMP). We first devise a hierarchical
motion prediction approach by combining Inverse Reinforcement Learning and
short-term motion prediction using a Recurrent Neural Network. In a second
step, we propose a dynamic version of the TAMP algorithm Logic-Geometric
Programming (LGP). Our version of Dynamic LGP replans periodically to handle
the mismatch between the human motion prediction and the actual human behavior.
We assess the efficacy of the approach by training the prediction algorithms
and testing the framework on the publicly available MoGaze dataset.
|
We constructed involutions for a Halphen pencil of index 2, and proved that
the birational mapping corresponding to the autonomous reduction of the
elliptic Painlev\'e equation for the same pencil can be obtained as the
composition of two such involutions.
|
Let $(M, g, f)$ be a $4$-dimensional complete noncompact gradient shrinking
Ricci soliton with the equation $Ric+\nabla^2f=\lambda g$, where $\lambda$ is a
positive real number. We prove that if $M$ has constant scalar curvature
$S=2\lambda$, it must be a quotient of $\mathbb{S}^2\times \mathbb{R}^2$.
Together with the known results, this implies that a $4$-dimensional complete
gradient shrinking Ricci soliton has constant scalar curvature if and only if
it is rigid, that is, it is either Einstein, or a finite quotient of Gaussian
shrinking soliton $\Bbb{R}^4$, $\Bbb{S}^{2}\times\Bbb{R}^{2}$ or
$\Bbb{S}^{3}\times\Bbb{R}$.
|
Minimization of suitable statistical distances~(between the data and model
densities) has proved to be a very useful technique in the field of robust
inference. Apart from the class of $\phi$-divergences of \cite{a} and \cite{b},
the Bregman divergence (\cite{c}) has been extensively used for this purpose.
However, since the data density must have a linear presence in the cross
product term of the Bregman divergence involving both the data and model
densities, several useful divergences cannot be captured by the usual Bregman
form. In this respect, we provide an extension of the ordinary Bregman
divergence by considering an exponent of the density function as the argument
rather than the density function itself. We demonstrate that many useful
divergence families, which are not ordinarily Bregman divergences, can be
accommodated within this extended description. Using this formulation, one can
develop many new families of divergences which may be useful in robust
inference. In particular, through an application of this extension, we propose
the new class of the GSB divergence family. We explore the applicability of the
minimum GSB divergence estimator in discrete parametric models. Simulation
studies as well as conforming real data examples are given to demonstrate the
performance of the estimator and to substantiate the theory developed.
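For reference, the ordinary density-based Bregman divergence between the data density $g$ and a model density $f_\theta$ can be written as
$$
D_B(g, f_\theta) = \int \Big\{ B\big(g(x)\big) - B\big(f_\theta(x)\big) - \big(g(x) - f_\theta(x)\big)\, B'\big(f_\theta(x)\big) \Big\}\, dx,
$$
where $B$ is a convex function; the data density $g$ enters the cross term $g(x)\,B'(f_\theta(x))$ only linearly, which is precisely the restriction discussed above. The extension proposed here, roughly speaking, feeds a power of the density into the arguments instead of the density itself; the exact form of the GSB family is as given in the paper.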
|
Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$, and let
$d(u,w)$ denote the length of a $u-w$ geodesic in $G$. For any $v\in V(G)$ and
$e=xy\in E(G)$, let $d(e,v)=\min\{d(x,v),d(y,v)\}$. For distinct $e_1, e_2\in
E(G)$, let $R\{e_1,e_2\}=\{z\in V(G):d(z,e_1)\neq d(z,e_2)\}$. Kelenc et al.
[Discrete Appl. Math. 251 (2018) 204-220] introduced the edge dimension of a
graph: A vertex subset $S\subseteq V(G)$ is an edge resolving set of $G$ if
$|S\cap R\{e_1,e_2\}|\ge 1$ for any distinct $e_1, e_2\in E(G)$, and the edge
dimension $edim(G)$ of $G$ is the minimum cardinality among all edge resolving
sets of $G$.
For a real-valued function $g$ defined on $V(G)$ and for $U\subseteq V(G)$,
let $g(U)=\sum_{s\in U}g(s)$. Then $g:V(G)\rightarrow[0,1]$ is an edge
resolving function of $G$ if $g(R\{e_1,e_2\})\ge1$ for any distinct $e_1,e_2\in
E(G)$. The fractional edge dimension $edim_f(G)$ of $G$ is
$\min\{g(V(G)):g\mbox{ is an edge resolving function of }G\}$. Note that
$edim_f(G)$ reduces to $edim(G)$ if the codomain of edge resolving functions is
restricted to $\{0,1\}$.
We introduce and study fractional edge dimension and obtain some general
results on the edge dimension of graphs. We show that there exist two
non-isomorphic graphs on the same vertex set with the same edge metric
coordinates. We construct two graphs $G$ and $H$ such that $H \subset G$ and
both $edim(H)-edim(G)$ and $edim_f(H)-edim_f(G)$ can be arbitrarily large. We
show that a graph $G$ with $edim(G)=2$ cannot have $K_5$ or $K_{3,3}$ as a
subgraph, and we construct a non-planar graph $H$ satisfying $edim(H)=2$. It is
easy to see that, for any connected graph $G$ of order $n\ge3$, $1\le edim_f(G)
\le \frac{n}{2}$; we characterize graphs $G$ satisfying $edim_f(G)=1$ and
examine some graph classes satisfying $edim_f(G)=\frac{n}{2}$. We also
determine the fractional edge dimension for some classes of graphs.
|
Mathematical modeling is an essential step, for example, to analyze the
transient behavior of a dynamical process and to perform engineering studies
such as optimization and control. With the help of first-principles and expert
knowledge, a dynamic model can be built, but for complex dynamic processes,
appearing, e.g., in biology, chemical plants, neuroscience, financial markets,
this often remains an onerous task. Hence, data-driven modeling of the dynamical process becomes
an attractive choice, supported by the rapid advancement in sensor and measurement technology.
One data-driven approach, the operator inference framework, models a dynamic process under the
assumption of a particular structure of the nonlinear term. In this work, we suggest combining the operator
inference with certain deep neural network approaches to infer the unknown
nonlinear dynamics of the system. The approach leverages recent advancements in deep
learning and, where available, prior knowledge of the process. We also
briefly discuss several extensions and advantages of the proposed methodology.
We demonstrate that the proposed methodology accomplishes the desired tasks for
dynamical processes encountered in neural dynamics and the glycolytic
oscillator.
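A minimal sketch of the operator inference step for a quadratic model structure $\dot{x} \approx A x + H\,(x \otimes x)$, fitted by least squares from state snapshots and their time derivatives, is given below; the quadratic ansatz and all names are illustrative assumptions rather than the exact formulation combined with neural networks in this work.

    import numpy as np

    def operator_inference_quadratic(X, Xdot):
        """Fit x_dot ~ A x + H (x kron x) by linear least squares.

        X, Xdot: arrays of shape (n_states, n_snapshots) holding states and
        their (approximate) time derivatives at the same time instants.
        """
        n, k = X.shape
        X2 = np.einsum('ik,jk->ijk', X, X).reshape(n * n, k)  # Kronecker product per snapshot
        D = np.vstack([X, X2])                                 # data matrix [x; x kron x]
        O, *_ = np.linalg.lstsq(D.T, Xdot.T, rcond=None)       # solve O D ~= Xdot
        O = O.T
        return O[:, :n], O[:, n:]                              # A (n x n), H (n x n^2)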
|
We present the first lattice-QCD determination of the form factors describing
the semileptonic decays $\Lambda_b \to \Lambda_c^*(2595)\ell^-\bar{\nu}$ and
$\Lambda_b \to \Lambda_c^*(2625)\ell^-\bar{\nu}$, where the $\Lambda_c^*(2595)$
and $\Lambda_c^*(2625)$ are the lightest charm baryons with $J^P=\frac12^-$ and
$J^P=\frac32^-$, respectively. These decay modes provide new opportunities to
test lepton flavor universality and also play an important role in global
analyses of the strong interactions in $b\to c$ semileptonic decays. We
determine the full set of vector, axial vector, and tensor form factors for
both decays, but only in a small kinematic region near the zero-recoil point.
The lattice calculation uses three different ensembles of gauge-field
configurations with $2+1$ flavors of domain-wall fermions, and we perform
extrapolations of the form factors to the continuum limit and physical pion
mass. We present Standard-Model predictions for the differential decay rates
and angular observables. In the kinematic region considered, the differential
decay rate for the $\frac12^-$ final state is found to be approximately 2.5
times larger than the rate for the $\frac32^-$ final state. We also test the
compatibility of our form-factor results with zero-recoil sum rules.
|
With the two flavor Nambu-Jona-Lasinio (NJL) model we carry out a
phenomenological study on the chiral phase structure, mesonic properties and
transport properties in a momentum anisotropic quark matter. To calculate
transport coefficients we have utilized the kinetic theory in the relaxation
time approximation, where the momentum anisotropy is embedded in the estimation
of both the distribution function and the relaxation time by introducing an
anisotropy parameter $\xi$. It is shown that an increase of the anisotropy
parameter $\xi$ may result in a catalysis of chiral symmetry breaking. The
critical endpoint (CEP) is shifted to smaller temperatures and larger quark
chemical potentials as $\xi$ increases; the impact of momentum anisotropy on
temperature of CEP is almost the same as that on the quark chemical potential
of CEP. The meson masses and the associated decay widths also exhibit a
significant $\xi$ dependence. It is observed that the temperature behavior of
scaled shear viscosity $\eta/T^3$ and scaled electrical conductivity
$\sigma_{el}/T$ exhibit a similar dip structure, with the minima of both
$\eta/T^3$ and $\sigma_{el}/T$ shifting toward higher temperatures with
increasing $\xi$. Furthermore, we demonstrate that the Seebeck coefficient $S$
decreases with increasing temperature and its sign is positive, indicating that the
dominant carriers for converting the temperature gradient to the electric field
are up-quarks. The Seebeck coefficient $S$ is significantly enhanced with a
large $\xi$ for the temperature below the critical temperature.
|
We propose an analytical framework to model the effect of single and multiple
mechanical surface oscillators on the dynamics of vertically polarized elastic
waves propagating in a semi-infinite medium. The formulation extends the
canonical Lamb's problem, originally developed to obtain the wavefield induced
by a harmonic line source in an elastic half-space, to the scenario where a
finite cluster of vertical oscillators is attached to the medium surface. In
short, our approach utilizes the solution of the classical Lamb's problem as
Green's function to formulate the multiple scattered fields generated by the
resonators. For an arbitrary number of resonators, arranged atop the elastic
half-space in an arbitrary configuration, the displacement fields are obtained
in closed-form and validated with numerics developed in a two-dimensional
finite element environment.
|
We study principal-agent problems in which a principal commits to an
outcome-dependent payment scheme (a.k.a. contract) so as to induce an agent to
take a costly, unobservable action. We relax the assumption that the principal
perfectly knows the agent by considering a Bayesian setting where the agent's
type is unknown and randomly selected according to a given probability
distribution, which is known to the principal. Each agent's type is
characterized by her own action costs and action-outcome distributions. In the
literature on non-Bayesian principal-agent problems, considerable attention has
been devoted to linear contracts, which are simple, pure-commission payment
schemes that still provide nice approximation guarantees with respect to
principal-optimal (possibly non-linear) contracts. While in non-Bayesian
settings an optimal contract can be computed efficiently, this is no longer the
case for our Bayesian principal-agent problems. This further motivates our
focus on linear contracts, which can be optimized efficiently given their
single-parameter nature. Our goal is to analyze the properties of linear
contracts in Bayesian settings, in terms of approximation guarantees with
respect to optimal contracts and general tractable contracts (i.e.,
efficiently-computable ones). First, we study the approximation guarantees of
linear contracts with respect to optimal ones, showing that the former suffer
from a multiplicative loss linear in the number of agent's types. Nevertheless,
we prove that linear contracts can still provide a constant multiplicative
approximation $\rho$ of the optimal principal's expected utility, though at the
expense of an exponentially-small additive loss $2^{-\Omega(\rho)}$. Then, we
switch to tractable contracts, showing that, surprisingly, linear contracts
perform well among them.
|
In this paper, we make a first attempt to incorporate both commuting demand
and transit network connectivity in bus route planning (CT-Bus), and formulate
it as a constrained optimization problem: planning a new bus route with k edges
over an existing transit network without building new bus stops to maximize a
linear aggregation of commuting demand and connectivity of the transit network.
We prove the NP-hardness of CT-Bus and propose an expansion-based greedy
algorithm that iteratively scans potential candidate paths in the network. To
boost the efficiency of computing the connectivity of new networks with
candidate paths, we convert it to a matrix trace estimation problem and employ
a Lanczos method to estimate the natural connectivity of the transit network
with a guaranteed error bound. Furthermore, we derive upper bounds on the
objective values and use them to greedily select candidates for expansion. Our
experiments conducted on real-world transit networks in New York City and
Chicago verify the efficiency, effectiveness, and scalability of our
algorithms.
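The connectivity objective used above can be illustrated concretely: the natural connectivity of a graph with adjacency matrix $A$ is $\ln(\mathrm{trace}(e^{A})/n)$, i.e., the logarithm of the average of $e^{\lambda_i}$ over the eigenvalues. The sketch below evaluates it exactly for a small network; the paper instead estimates the trace with a Lanczos method for scalability, and the helper name here is hypothetical.

    import numpy as np
    from scipy.linalg import expm

    def natural_connectivity(adjacency):
        """Natural connectivity ln(trace(exp(A)) / n) of an undirected graph."""
        n = adjacency.shape[0]
        return float(np.log(np.trace(expm(adjacency)) / n))

    # toy transit network: a 4-stop cycle
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    print(natural_connectivity(A))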
|
We consider a problem wherein jobs arrive at random times and assume random
values. Upon each job arrival, the decision-maker must decide immediately
whether or not to accept the job and gain the value on offer as a reward, with
the constraint that they may only accept at most $n$ jobs over some reference
time period. The decision-maker only has access to $M$ independent realisations
of the job arrival process. We propose an algorithm, Non-Parametric Sequential
Allocation (NPSA), for solving this problem. Moreover, we prove that the
expected reward returned by the NPSA algorithm converges in probability to
optimality as $M$ grows large. We demonstrate the effectiveness of the
algorithm empirically on synthetic data and on public fraud-detection datasets,
from which the motivation for this work is derived.
|
Deep generative models have been shown powerful in generating novel molecules
with desired chemical properties via their representations such as strings,
trees or graphs. However, these models are limited in recommending synthetic
routes for the generated molecules in practice. We propose a generative model
to generate molecules via multi-step chemical reaction trees. Specifically, our
model first proposes a chemical reaction tree with predicted reaction templates
and commercially available molecules (starting molecules), and then performs
forward synthetic steps to obtain product molecules. Experiments show that our
model can generate chemical reactions whose product molecules have the desired
chemical properties. Also, the complete synthetic routes for these product
molecules are provided.
|
Field-induced reorientation of colloidal particles is especially relevant to
manipulate the optical properties of a nanomaterial for target applications. We
have recently shown that surprisingly feeble external stimuli are able to
transform uniaxial nematic liquid crystals (LCs) of cuboidal particles into
biaxial nematic LCs. In the light of these results, here we apply an external
field that forces the reorientation of colloidal cuboids in nematic LCs and
sparks a uniaxial-to-biaxial texture switching. By Dynamic Monte Carlo
simulation, we investigate the unsteady-state reorientation dynamics at the
particle scale when the field is applied (uniaxial-to-biaxial switching) and
then removed (biaxial-to-uniaxial switching). We detect a strong correlation
between the response time, being the time taken for the system to reorient, and
particle anisotropy, which spans from rod-like to plate-like geometries.
Interestingly, self-dual shaped cuboids, theoretically considered as the most
suitable to promote phase biaxiality for being exactly in between prolate and
oblate particles, exhibit surprisingly slow response times, especially if
compared to prolate cuboids.
|
In orthogonal time frequency space (OTFS) modulation, information-carrying
symbols reside in the delay-Doppler (DD) domain. By operating in the DD domain,
an appealing property for communication arises: time-frequency (TF) dispersive
channels encountered in high mobility environments become time-invariant. The
time-invariance of the channel in the DD domain enables efficient equalizers
for time-frequency dispersive channels. In this paper, we propose an OTFS
system based on the discrete Zak transform. The presented formulation not only
allows an efficient implementation of OTFS but also simplifies the derivation
and analysis of the input-output relation of the TF dispersive channel in the DD
domain.
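One common discretization of the Zak transform, on which the formulation above is based, amounts to a reshape followed by an FFT; the array layout below (M delay bins, N Doppler bins) is an illustrative convention and may differ from the paper's.

    import numpy as np

    def discrete_zak_transform(x, M, N):
        """Map a length M*N time-domain sequence onto a delay-Doppler grid.

        The sample x[m + n*M] is placed at delay index m and block index n,
        and an N-point FFT over the block index yields the Doppler axis.
        """
        assert x.size == M * N
        grid = x.reshape(N, M).T                      # rows: delay m, columns: block n
        return np.fft.fft(grid, axis=1) / np.sqrt(N)  # shape (M, N): delay x Doppler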
|
The open-world deployment of Machine Learning (ML) algorithms in
safety-critical applications such as autonomous vehicles needs to address a
variety of ML vulnerabilities such as interpretability, verifiability, and
performance limitations. Research explores different approaches to improve ML
dependability by proposing new models and training techniques to reduce
generalization error, achieve domain adaptation, and detect outlier examples
and adversarial attacks. In this paper, we review and organize practical ML
techniques that can improve the safety and dependability of ML algorithms and
therefore ML-based software. Our organization maps state-of-the-art ML
techniques to safety strategies in order to enhance the dependability of the ML
algorithm from different aspects, and discusses research gaps as well as
promising solutions.
|
Method: We develop CNN-based methods for automatic ICD coding based on
clinical text from intensive care unit (ICU) stays. We come up with the Shallow
and Wide Attention convolutional Mechanism (SWAM), which allows our model to
learn local and low-level features for each label. The key idea behind our
model design is to look for the presence of informative snippets in the
clinical text that correlated with each code, and we infer that there exists a
correspondence between "informative snippet" and convolution filter. Results:
We evaluate our approach on MIMIC-III, an open-access dataset of ICU medical
records. Our approach substantially outperforms previous results on top-50
medical code prediction on MIMIC-III dataset. We attribute this improvement to
SWAM, by which the wide architecture gives the model ability to more
extensively learn the unique features of different codes, and we prove it by
ablation experiment. Besides, we perform manual analysis of the performance
imbalance between different codes, and preliminary conclude the characteristics
that determine the difficulty of learning specific codes. Conclusions: We
present SWAM, an explainable CNN approach for multi-label document
classification, which employs a wide convolution layer to learn local and
low-level features for each label, yields strong improvements over previous
metrics on the ICD-9 code prediction task, while providing satisfactory
explanations for its internal mechanics.
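As a rough illustration of the wide, per-label convolutional idea described above (not the authors' released code), the sketch below applies a wide 1-D convolution over token embeddings, max-pools the filter responses over positions, and scores each ICD code separately, so that individual filters can act as detectors of informative snippets; all names and sizes are assumptions.

    import torch
    import torch.nn as nn

    class WideLabelCNN(nn.Module):
        """Toy classifier: wide Conv1d, max pooling over positions, per-label scorer."""
        def __init__(self, vocab_size, embed_dim=100, num_filters=500,
                     kernel_size=5, num_labels=50):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size,
                                  padding=kernel_size // 2)
            self.label_scorer = nn.Linear(num_filters, num_labels)

        def forward(self, tokens):                    # tokens: (batch, seq_len)
            h = self.embed(tokens).transpose(1, 2)    # (batch, embed_dim, seq_len)
            h = torch.relu(self.conv(h))              # (batch, num_filters, seq_len)
            h = h.max(dim=2).values                   # strongest snippet evidence per filter
            return self.label_scorer(h)               # one logit per ICD code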
|
We report a new measurement of the beam-spin asymmetry $\boldsymbol{\Sigma}$
for the $\vec{\gamma} n \rightarrow K^+\Sigma^-$ reaction using quasi-free
neutrons in a liquid-deuterium target. The new dataset includes data at
previously unmeasured photon energy and angular ranges, thereby providing new
constraints on partial wave analyses used to extract properties of the excited
nucleon states. The experimental data were obtained using the CEBAF Large
Acceptance Spectrometer (CLAS), housed in Hall B of the Thomas Jefferson
National Accelerator Facility (JLab). The CLAS detector measured reaction
products from a liquid-deuterium target produced by an energy-tagged, linearly
polarised photon beam with energies in the range 1.1 to 2.3 GeV. Predictions
from an isobar model indicate strong sensitivity to $N(1720)3/2^+$,
$\Delta(1900)1/2^-$, and $N(1895)1/2^-$, with the latter being a state not
considered in previous photoproduction analyses. When our data are incorporated
in the fits of partial-wave analyses, one observes significant changes in
$\gamma$-$n$ couplings of the resonances which have small branching ratios to
the $\pi N$ channel.
|
Voice Conversion (VC) is a technique that aims to transform the
non-linguistic information of a source utterance to change the perceived
identity of the speaker. While there is a rich literature on VC, most proposed
methods are trained and evaluated on clean speech recordings. However, many
acoustic environments are noisy and reverberant, severely restricting the
applicability of popular VC methods to such scenarios. To address this
limitation, we propose Voicy, a new VC framework particularly tailored for
noisy speech. Our method, which is inspired by the de-noising auto-encoders
framework, comprises four encoders (speaker, content, phonetic and
acoustic-ASR) and one decoder. Importantly, Voicy is capable of performing
non-parallel zero-shot VC, an important requirement for any VC system that
needs to work on speakers not seen during training. We have validated our
approach using a noisy reverberant version of the LibriSpeech dataset.
Experimental results show that Voicy outperforms other tested VC techniques in
terms of naturalness and target speaker similarity in noisy reverberant
environments.
|
Fundus photography has routinely been used to document the presence and
severity of retinal degenerative diseases such as age-related macular
degeneration (AMD), glaucoma, and diabetic retinopathy (DR) in clinical
practice, for which the fovea and optic disc (OD) are important retinal
landmarks. However, the occurrence of lesions, drusen, and other retinal
abnormalities during retinal degeneration severely complicates automatic
landmark detection and segmentation. Here we propose HBA-U-Net: a U-Net
backbone enriched with hierarchical bottleneck attention. The network consists
of a novel bottleneck attention block that combines and refines self-attention,
channel attention, and relative-position attention to highlight retinal
abnormalities that may be important for fovea and OD segmentation in the
degenerated retina. HBA-U-Net achieved state-of-the-art results on fovea
detection across datasets and eye conditions (ADAM: Euclidean Distance (ED) of
25.4 pixels, REFUGE: 32.5 pixels, IDRiD: 32.1 pixels), on OD segmentation for
AMD (ADAM: Dice Coefficient (DC) of 0.947), and on OD detection for DR (IDRiD:
ED of 20.5 pixels). Our results suggest that HBA-U-Net may be well suited for
landmark detection in the presence of a variety of retinal degenerative
diseases.
|
We propose HOI Transformer to tackle human object interaction (HOI) detection
in an end-to-end manner. Current approaches either decouple the HOI task into
separate stages of object detection and interaction classification or
introduce a surrogate interaction problem. In contrast, our method, named HOI
Transformer, streamlines the HOI pipeline by eliminating the need for many
hand-designed components. HOI Transformer reasons about the relations of
objects and humans from global image context and directly predicts HOI
instances in parallel. A quintuple matching loss is introduced to force HOI
predictions in a unified way. Our method is conceptually much simpler and
demonstrates improved accuracy. Without bells and whistles, HOI Transformer
achieves $26.61\% $ $ AP $ on HICO-DET and $52.9\%$ $AP_{role}$ on V-COCO,
surpassing previous methods with the advantage of being much simpler. We hope
our approach will serve as a simple and effective alternative for HOI tasks.
Code is available at https://github.com/bbepoch/HoiTransformer .
|
We report the dependence of the magnetization dynamics in a square artificial
spin-ice lattice on the in-plane magnetic field angle. Using two complementary
measurement techniques - broadband ferromagnetic resonance and micro-focused
Brillouin light scattering spectroscopy - we systematically study the evolution
of the lattice dynamics, both for a coherent radiofrequency excitation and an
incoherent thermal excitation of spin dynamics. We observe a splitting of modes
facilitated by inter-element interactions that can be controlled by the
external field angle and magnitude. Detailed time-dependent micromagnetic
simulations reveal that the split modes are localized in different regions of
the square network. This observation suggests that it is possible to
disentangle modes with different spatial profiles by tuning the external field
configuration.
|
This paper presents TULIP, a new architecture for a binary neural network
(BNN) that uses an optimal schedule for executing the operations of an
arbitrary BNN. It was constructed with the goal of maximizing energy efficiency
per classification. At the top-level, TULIP consists of a collection of unique
processing elements (TULIP-PEs) that are organized in a SIMD fashion. Each
TULIP-PE consists of a small network of binary neurons, and a small amount of
local memory per neuron. The unique aspect of the binary neuron is that it is
implemented as a mixed-signal circuit that natively performs the inner-product
and thresholding operation of an artificial binary neuron. Moreover, the binary
neuron, which is implemented as a single CMOS standard cell, is reconfigurable,
and with a change in a single parameter, can implement all standard operations
involved in a BNN. We present novel algorithms for mapping arbitrary nodes of a
BNN onto the TULIP-PEs. TULIP was implemented as an ASIC in TSMC 40nm-LP
technology. To provide a fair comparison, a recently reported BNN that employs
a conventional MAC-based arithmetic processor was also implemented in the same
technology. The results show that TULIP is consistently 3X more
energy-efficient than the conventional design, without any penalty in
performance, area, or accuracy.
|
Convolution has been the core ingredient of modern neural networks,
triggering the surge of deep learning in vision. In this work, we rethink the
inherent principles of standard convolution for vision tasks, specifically
spatial-agnostic and channel-specific. Instead, we present a novel atomic
operation for deep neural networks by inverting the aforementioned design
principles of convolution, coined as involution. We additionally demystify the
recent popular self-attention operator and subsume it into our involution
family as an over-complicated instantiation. The proposed involution operator
could be leveraged as fundamental bricks to build the new generation of neural
networks for visual recognition, powering different deep learning models on
several prevalent benchmarks, including ImageNet classification, COCO detection
and segmentation, together with Cityscapes segmentation. Our involution-based
models improve the performance of convolutional baselines using ResNet-50 by up
to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU
absolutely while compressing the computational cost to 66%, 65%, 72%, and 57%
on the above benchmarks, respectively. Code and pre-trained models for all the
tasks are available at https://github.com/d-li14/involution.
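A condensed PyTorch sketch of the involution idea (spatially specific, channel-agnostic kernels generated from the input itself) is given below; it follows the spirit of the operator but is deliberately simplified relative to the official implementation linked above.

    import torch
    import torch.nn as nn

    class Involution2d(nn.Module):
        """Minimal involution: per-pixel kernels shared within channel groups."""
        def __init__(self, channels, kernel_size=3, groups=4, reduction=4):
            super().__init__()
            self.k, self.g = kernel_size, groups
            self.kernel_gen = nn.Sequential(           # K*K weights per group per pixel
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, groups * kernel_size ** 2, 1),
            )
            self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

        def forward(self, x):
            b, c, h, w = x.shape
            weights = self.kernel_gen(x).view(b, self.g, 1, self.k ** 2, h, w)
            patches = self.unfold(x).view(b, self.g, c // self.g, self.k ** 2, h, w)
            return (weights * patches).sum(dim=3).view(b, c, h, w)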
|
Let $P\in \Bbb Q_p[x,y]$, $s\in \Bbb C$ with sufficiently large real part,
and consider the integral operator $
(A_{P,s}f)(y):=\frac{1}{1-p^{-1}}\int_{\Bbb Z_p}|P(x,y)|^sf(x) |dx| $ on
$L^2(\Bbb Z_p)$. We show that if $P$ is homogeneous then for each character
$\chi$ of $\Bbb Z_p^\times$ the characteristic function $\det(1-uA_{P,s,\chi})$
of the restriction $A_{P,s,\chi}$ of $A_{P,s}$ to the eigenspace $L^2(\Bbb
Z_p)_\chi$ is the $q$-Wronskian of a set of solutions of a (possibly confluent)
$q$-hypergeometric equation. In particular, the nonzero eigenvalues of
$A_{P,s,\chi}$ are the reciprocals of the zeros of such $q$-Wronskian.
|